···
 interrupt.

 Required Properties:
-- compatible: has to be "qca,<soctype>-cpu-intc", "qca,ar7100-misc-intc"
-  as fallback
+- compatible: has to be "qca,<soctype>-cpu-intc", "qca,ar7100-misc-intc" or
+  "qca,<soctype>-cpu-intc", "qca,ar7240-misc-intc"
 - reg: Base address and size of the controllers memory area
 - interrupt-parent: phandle of the parent interrupt controller.
 - interrupts: Interrupt specifier for the controllers interrupt.
 - interrupt-controller : Identifies the node as an interrupt controller
 - #interrupt-cells : Specifies the number of cells needed to encode interrupt
   source, should be 1
+
+Compatible fallback depends on the SoC. Use ar7100 for ar71xx and ar913x,
+use ar7240 for all other SoCs.

 Please refer to interrupts.txt in this directory for details of the common
 Interrupt Controllers bindings used by client devices.
···
 	interrupt-controller@18060010 {
 		compatible = "qca,ar9132-misc-intc", "qca,ar7100-misc-intc";
+		reg = <0x18060010 0x4>;
+
+		interrupt-parent = <&cpuintc>;
+		interrupts = <6>;
+
+		interrupt-controller;
+		#interrupt-cells = <1>;
+	};
+
+Another example:
+
+	interrupt-controller@18060010 {
+		compatible = "qca,ar9331-misc-intc", "qca,ar7240-misc-intc";
 		reg = <0x18060010 0x4>;

 		interrupt-parent = <&cpuintc>;
+1-1
Documentation/input/multi-touch-protocol.txt
···
 	ABS_MT_POSITION_X := T_X
 	ABS_MT_POSITION_Y := T_Y
 	ABS_MT_TOOL_X := C_X
-	ABS_MT_TOOL_X := C_Y
+	ABS_MT_TOOL_Y := C_Y

 Unfortunately, there is not enough information to specify both the touching
 ellipse and the tool ellipse, so one has to resort to approximations.  One
+38-13
Documentation/power/pci.txt
···
 (alternatively, the runtime_suspend() callback will have to check if the
 device should really be suspended and return -EAGAIN if that is not the case).

-The runtime PM of PCI devices is disabled by default.  It is also blocked by
-pci_pm_init() that runs the pm_runtime_forbid() helper function.  If a PCI
-driver implements the runtime PM callbacks and intends to use the runtime PM
-framework provided by the PM core and the PCI subsystem, it should enable this
-feature by executing the pm_runtime_enable() helper function.  However, the
-driver should not call the pm_runtime_allow() helper function unblocking
-the runtime PM of the device.  Instead, it should allow user space or some
-platform-specific code to do that (user space can do it via sysfs), although
-once it has called pm_runtime_enable(), it must be prepared to handle the
+The runtime PM of PCI devices is enabled by default by the PCI core.  PCI
+device drivers do not need to enable it and should not attempt to do so.
+However, it is blocked by pci_pm_init() that runs the pm_runtime_forbid()
+helper function.  In addition to that, the runtime PM usage counter of
+each PCI device is incremented by local_pci_probe() before executing the
+probe callback provided by the device's driver.
+
+If a PCI driver implements the runtime PM callbacks and intends to use the
+runtime PM framework provided by the PM core and the PCI subsystem, it needs
+to decrement the device's runtime PM usage counter in its probe callback
+function.  If it doesn't do that, the counter will always be different from
+zero for the device and it will never be runtime-suspended.  The simplest
+way to do that is by calling pm_runtime_put_noidle(), but if the driver
+wants to schedule an autosuspend right away, for example, it may call
+pm_runtime_put_autosuspend() instead for this purpose.  Generally, it
+just needs to call a function that decrements the device's usage counter
+from its probe routine to make runtime PM work for the device.
+
+It is important to remember that the driver's runtime_suspend() callback
+may be executed right after the usage counter has been decremented, because
+user space may already have caused the pm_runtime_allow() helper function
+unblocking the runtime PM of the device to run via sysfs, so the driver
+must be prepared to cope with that.
+
+The driver itself should not call pm_runtime_allow(), though.  Instead, it
+should let user space or some platform-specific code do that (user space can
+do it via sysfs, as stated above), but it must be prepared to handle the
 runtime PM of the device correctly as soon as pm_runtime_allow() is called
-(which may happen at any time).  [It also is possible that user space causes
-pm_runtime_allow() to be called via sysfs before the driver is loaded, so in
-fact the driver has to be prepared to handle the runtime PM of the device as
-soon as it calls pm_runtime_enable().]
+(which may happen at any time, even before the driver is loaded).
+
+When the driver's remove callback runs, it has to balance the decrement of
+the device's runtime PM usage counter made at probe time.  For this reason,
+if it has decremented the counter in its probe callback, it must run
+pm_runtime_get_noresume() in its remove callback.  [Since the core carries
+out a runtime resume of the device and bumps up the device's usage counter
+before running the driver's remove callback, the runtime PM of the device
+is effectively disabled for the duration of the remove execution and all
+runtime PM helper functions incrementing the device's usage counter are
+then effectively equivalent to pm_runtime_get_noresume().]

 The runtime PM framework works by processing requests to suspend or resume
 devices, or to check if they are idle (in which cases it is reasonable to
+1
Documentation/ptp/testptp.c
···
  * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
  */
 #define _GNU_SOURCE
+#define __SANE_USERSPACE_TYPES__        /* For PPC64, to get LL64 types */
 #include <errno.h>
 #include <fcntl.h>
 #include <inttypes.h>
+2-2
MAINTAINERS
···
 KERNEL VIRTUAL MACHINE (KVM) FOR AMD-V
 M:	Joerg Roedel <joro@8bytes.org>
 L:	kvm@vger.kernel.org
-W:	http://kvm.qumranet.com
+W:	http://www.linux-kvm.org/
 S:	Maintained
 F:	arch/x86/include/asm/svm.h
 F:	arch/x86/kvm/svm.c
···
 KERNEL VIRTUAL MACHINE (KVM) FOR POWERPC
 M:	Alexander Graf <agraf@suse.com>
 L:	kvm-ppc@vger.kernel.org
-W:	http://kvm.qumranet.com
+W:	http://www.linux-kvm.org/
 T:	git git://github.com/agraf/linux-2.6.git
 S:	Supported
 F:	arch/powerpc/include/asm/kvm*
···
 CONFIG_BLK_DEV_INITRD=y
+CONFIG_USERFAULTFD=y
 CONFIG_SLAB=y
···
 CONFIG_NET_FOU_IP_TUNNELS=y
-CONFIG_GENEVE_CORE=m
 CONFIG_INET_AH=m
···
 CONFIG_INET_UDP_DIAG=m
+CONFIG_IPV6=m
 CONFIG_IPV6_ROUTER_PREF=y
···
 CONFIG_INET6_IPCOMP=m
+CONFIG_IPV6_ILA=m
 CONFIG_IPV6_VTI=m
···
 CONFIG_NFT_CHAIN_ROUTE_IPV4=m
+CONFIG_NFT_DUP_IPV4=m
 CONFIG_NF_TABLES_ARP=m
···
 CONFIG_NFT_CHAIN_ROUTE_IPV6=m
+CONFIG_NFT_DUP_IPV6=m
 CONFIG_NFT_CHAIN_NAT_IPV6=m
···
 CONFIG_MPLS_ROUTING=m
+CONFIG_MPLS_IPTUNNEL=m
 # CONFIG_WIRELESS is not set
···
 # CONFIG_NET_VENDOR_STMICRO is not set
+# CONFIG_NET_VENDOR_SYNOPSYS is not set
 # CONFIG_NET_VENDOR_VIA is not set
···
 CONFIG_TEST_UDELAY=m
+CONFIG_TEST_STATIC_KEYS=m
 CONFIG_EARLY_PRINTK=y
+8-1
arch/m68k/configs/apollo_defconfig
···
 CONFIG_BLK_DEV_INITRD=y
+CONFIG_USERFAULTFD=y
 CONFIG_SLAB=y
···
 CONFIG_NET_FOU_IP_TUNNELS=y
-CONFIG_GENEVE_CORE=m
 CONFIG_INET_AH=m
···
 CONFIG_INET_UDP_DIAG=m
+CONFIG_IPV6=m
 CONFIG_IPV6_ROUTER_PREF=y
···
 CONFIG_INET6_IPCOMP=m
+CONFIG_IPV6_ILA=m
 CONFIG_IPV6_VTI=m
···
 CONFIG_NFT_CHAIN_ROUTE_IPV4=m
+CONFIG_NFT_DUP_IPV4=m
 CONFIG_NF_TABLES_ARP=m
···
 CONFIG_NFT_CHAIN_ROUTE_IPV6=m
+CONFIG_NFT_DUP_IPV6=m
 CONFIG_NFT_CHAIN_NAT_IPV6=m
···
 CONFIG_MPLS_ROUTING=m
+CONFIG_MPLS_IPTUNNEL=m
 # CONFIG_WIRELESS is not set
···
 # CONFIG_NET_VENDOR_STMICRO is not set
+# CONFIG_NET_VENDOR_SYNOPSYS is not set
 # CONFIG_NET_VENDOR_VIA is not set
···
 CONFIG_TEST_UDELAY=m
+CONFIG_TEST_STATIC_KEYS=m
 CONFIG_EARLY_PRINTK=y
+8-1
arch/m68k/configs/atari_defconfig
···
 CONFIG_BLK_DEV_INITRD=y
+CONFIG_USERFAULTFD=y
 CONFIG_SLAB=y
···
 CONFIG_NET_FOU_IP_TUNNELS=y
-CONFIG_GENEVE_CORE=m
 CONFIG_INET_AH=m
···
 CONFIG_INET_UDP_DIAG=m
+CONFIG_IPV6=m
 CONFIG_IPV6_ROUTER_PREF=y
···
 CONFIG_INET6_IPCOMP=m
+CONFIG_IPV6_ILA=m
 CONFIG_IPV6_VTI=m
···
 CONFIG_NFT_CHAIN_ROUTE_IPV4=m
+CONFIG_NFT_DUP_IPV4=m
 CONFIG_NF_TABLES_ARP=m
···
 CONFIG_NFT_CHAIN_ROUTE_IPV6=m
+CONFIG_NFT_DUP_IPV6=m
 CONFIG_NFT_CHAIN_NAT_IPV6=m
···
 CONFIG_MPLS_ROUTING=m
+CONFIG_MPLS_IPTUNNEL=m
 # CONFIG_WIRELESS is not set
···
 # CONFIG_NET_VENDOR_STMICRO is not set
+# CONFIG_NET_VENDOR_SYNOPSYS is not set
 # CONFIG_NET_VENDOR_VIA is not set
···
 CONFIG_TEST_UDELAY=m
+CONFIG_TEST_STATIC_KEYS=m
 CONFIG_EARLY_PRINTK=y
+8-1
arch/m68k/configs/bvme6000_defconfig
···
 CONFIG_BLK_DEV_INITRD=y
+CONFIG_USERFAULTFD=y
 CONFIG_SLAB=y
···
 CONFIG_NET_FOU_IP_TUNNELS=y
-CONFIG_GENEVE_CORE=m
 CONFIG_INET_AH=m
···
 CONFIG_INET_UDP_DIAG=m
+CONFIG_IPV6=m
 CONFIG_IPV6_ROUTER_PREF=y
···
 CONFIG_INET6_IPCOMP=m
+CONFIG_IPV6_ILA=m
 CONFIG_IPV6_VTI=m
···
 CONFIG_NFT_CHAIN_ROUTE_IPV4=m
+CONFIG_NFT_DUP_IPV4=m
 CONFIG_NF_TABLES_ARP=m
···
 CONFIG_NFT_CHAIN_ROUTE_IPV6=m
+CONFIG_NFT_DUP_IPV6=m
 CONFIG_NFT_CHAIN_NAT_IPV6=m
···
 CONFIG_MPLS_ROUTING=m
+CONFIG_MPLS_IPTUNNEL=m
 # CONFIG_WIRELESS is not set
···
 # CONFIG_NET_VENDOR_STMICRO is not set
+# CONFIG_NET_VENDOR_SYNOPSYS is not set
 # CONFIG_NET_VENDOR_VIA is not set
···
 CONFIG_TEST_UDELAY=m
+CONFIG_TEST_STATIC_KEYS=m
 CONFIG_EARLY_PRINTK=y
+8-1
arch/m68k/configs/hp300_defconfig
···
 CONFIG_BLK_DEV_INITRD=y
+CONFIG_USERFAULTFD=y
 CONFIG_SLAB=y
···
 CONFIG_NET_FOU_IP_TUNNELS=y
-CONFIG_GENEVE_CORE=m
 CONFIG_INET_AH=m
···
 CONFIG_INET_UDP_DIAG=m
+CONFIG_IPV6=m
 CONFIG_IPV6_ROUTER_PREF=y
···
 CONFIG_INET6_IPCOMP=m
+CONFIG_IPV6_ILA=m
 CONFIG_IPV6_VTI=m
···
 CONFIG_NFT_CHAIN_ROUTE_IPV4=m
+CONFIG_NFT_DUP_IPV4=m
 CONFIG_NF_TABLES_ARP=m
···
 CONFIG_NFT_CHAIN_ROUTE_IPV6=m
+CONFIG_NFT_DUP_IPV6=m
 CONFIG_NFT_CHAIN_NAT_IPV6=m
···
 CONFIG_MPLS_ROUTING=m
+CONFIG_MPLS_IPTUNNEL=m
 # CONFIG_WIRELESS is not set
···
 # CONFIG_NET_VENDOR_STMICRO is not set
+# CONFIG_NET_VENDOR_SYNOPSYS is not set
 # CONFIG_NET_VENDOR_VIA is not set
···
 CONFIG_TEST_UDELAY=m
+CONFIG_TEST_STATIC_KEYS=m
 CONFIG_EARLY_PRINTK=y
+8-1
arch/m68k/configs/mac_defconfig
···
 CONFIG_BLK_DEV_INITRD=y
+CONFIG_USERFAULTFD=y
 CONFIG_SLAB=y
···
 CONFIG_NET_FOU_IP_TUNNELS=y
-CONFIG_GENEVE_CORE=m
 CONFIG_INET_AH=m
···
 CONFIG_INET_UDP_DIAG=m
+CONFIG_IPV6=m
 CONFIG_IPV6_ROUTER_PREF=y
···
 CONFIG_INET6_IPCOMP=m
+CONFIG_IPV6_ILA=m
 CONFIG_IPV6_VTI=m
···
 CONFIG_NFT_CHAIN_ROUTE_IPV4=m
+CONFIG_NFT_DUP_IPV4=m
 CONFIG_NF_TABLES_ARP=m
···
 CONFIG_NFT_CHAIN_ROUTE_IPV6=m
+CONFIG_NFT_DUP_IPV6=m
 CONFIG_NFT_CHAIN_NAT_IPV6=m
···
 CONFIG_MPLS_ROUTING=m
+CONFIG_MPLS_IPTUNNEL=m
 # CONFIG_WIRELESS is not set
···
 # CONFIG_NET_VENDOR_STMICRO is not set
+# CONFIG_NET_VENDOR_SYNOPSYS is not set
 # CONFIG_NET_VENDOR_VIA is not set
···
 CONFIG_TEST_UDELAY=m
+CONFIG_TEST_STATIC_KEYS=m
 CONFIG_EARLY_PRINTK=y
+8-1
arch/m68k/configs/multi_defconfig
···
 CONFIG_BLK_DEV_INITRD=y
+CONFIG_USERFAULTFD=y
 CONFIG_SLAB=y
···
 CONFIG_NET_FOU_IP_TUNNELS=y
-CONFIG_GENEVE_CORE=m
 CONFIG_INET_AH=m
···
 CONFIG_INET_UDP_DIAG=m
+CONFIG_IPV6=m
 CONFIG_IPV6_ROUTER_PREF=y
···
 CONFIG_INET6_IPCOMP=m
+CONFIG_IPV6_ILA=m
 CONFIG_IPV6_VTI=m
···
 CONFIG_NFT_CHAIN_ROUTE_IPV4=m
+CONFIG_NFT_DUP_IPV4=m
 CONFIG_NF_TABLES_ARP=m
···
 CONFIG_NFT_CHAIN_ROUTE_IPV6=m
+CONFIG_NFT_DUP_IPV6=m
 CONFIG_NFT_CHAIN_NAT_IPV6=m
···
 CONFIG_MPLS_ROUTING=m
+CONFIG_MPLS_IPTUNNEL=m
 # CONFIG_WIRELESS is not set
···
 # CONFIG_NET_VENDOR_STMICRO is not set
+# CONFIG_NET_VENDOR_SYNOPSYS is not set
 # CONFIG_NET_VENDOR_VIA is not set
···
 CONFIG_TEST_UDELAY=m
+CONFIG_TEST_STATIC_KEYS=m
 CONFIG_EARLY_PRINTK=y
+8-1
arch/m68k/configs/mvme147_defconfig
···
 CONFIG_BLK_DEV_INITRD=y
+CONFIG_USERFAULTFD=y
 CONFIG_SLAB=y
···
 CONFIG_NET_FOU_IP_TUNNELS=y
-CONFIG_GENEVE_CORE=m
 CONFIG_INET_AH=m
···
 CONFIG_INET_UDP_DIAG=m
+CONFIG_IPV6=m
 CONFIG_IPV6_ROUTER_PREF=y
···
 CONFIG_INET6_IPCOMP=m
+CONFIG_IPV6_ILA=m
 CONFIG_IPV6_VTI=m
···
 CONFIG_NFT_CHAIN_ROUTE_IPV4=m
+CONFIG_NFT_DUP_IPV4=m
 CONFIG_NF_TABLES_ARP=m
···
 CONFIG_NFT_CHAIN_ROUTE_IPV6=m
+CONFIG_NFT_DUP_IPV6=m
 CONFIG_NFT_CHAIN_NAT_IPV6=m
···
 CONFIG_MPLS_ROUTING=m
+CONFIG_MPLS_IPTUNNEL=m
 # CONFIG_WIRELESS is not set
···
 # CONFIG_NET_VENDOR_STMICRO is not set
+# CONFIG_NET_VENDOR_SYNOPSYS is not set
 # CONFIG_NET_VENDOR_VIA is not set
···
 CONFIG_TEST_UDELAY=m
+CONFIG_TEST_STATIC_KEYS=m
 CONFIG_EARLY_PRINTK=y
+8-1
arch/m68k/configs/mvme16x_defconfig
···
 CONFIG_BLK_DEV_INITRD=y
+CONFIG_USERFAULTFD=y
 CONFIG_SLAB=y
···
 CONFIG_NET_FOU_IP_TUNNELS=y
-CONFIG_GENEVE_CORE=m
 CONFIG_INET_AH=m
···
 CONFIG_INET_UDP_DIAG=m
+CONFIG_IPV6=m
 CONFIG_IPV6_ROUTER_PREF=y
···
 CONFIG_INET6_IPCOMP=m
+CONFIG_IPV6_ILA=m
 CONFIG_IPV6_VTI=m
···
 CONFIG_NFT_CHAIN_ROUTE_IPV4=m
+CONFIG_NFT_DUP_IPV4=m
 CONFIG_NF_TABLES_ARP=m
···
 CONFIG_NFT_CHAIN_ROUTE_IPV6=m
+CONFIG_NFT_DUP_IPV6=m
 CONFIG_NFT_CHAIN_NAT_IPV6=m
···
 CONFIG_MPLS_ROUTING=m
+CONFIG_MPLS_IPTUNNEL=m
 # CONFIG_WIRELESS is not set
···
 # CONFIG_NET_VENDOR_STMICRO is not set
+# CONFIG_NET_VENDOR_SYNOPSYS is not set
 # CONFIG_NET_VENDOR_VIA is not set
···
 CONFIG_TEST_UDELAY=m
+CONFIG_TEST_STATIC_KEYS=m
 CONFIG_EARLY_PRINTK=y
+8-1
arch/m68k/configs/q40_defconfig
···
 CONFIG_BLK_DEV_INITRD=y
+CONFIG_USERFAULTFD=y
 CONFIG_SLAB=y
···
 CONFIG_NET_FOU_IP_TUNNELS=y
-CONFIG_GENEVE_CORE=m
 CONFIG_INET_AH=m
···
 CONFIG_INET_UDP_DIAG=m
+CONFIG_IPV6=m
 CONFIG_IPV6_ROUTER_PREF=y
···
 CONFIG_INET6_IPCOMP=m
+CONFIG_IPV6_ILA=m
 CONFIG_IPV6_VTI=m
···
 CONFIG_NFT_CHAIN_ROUTE_IPV4=m
+CONFIG_NFT_DUP_IPV4=m
 CONFIG_NF_TABLES_ARP=m
···
 CONFIG_NFT_CHAIN_ROUTE_IPV6=m
+CONFIG_NFT_DUP_IPV6=m
 CONFIG_NFT_CHAIN_NAT_IPV6=m
···
 CONFIG_MPLS_ROUTING=m
+CONFIG_MPLS_IPTUNNEL=m
 # CONFIG_WIRELESS is not set
···
 # CONFIG_NET_VENDOR_STMICRO is not set
+# CONFIG_NET_VENDOR_SYNOPSYS is not set
 # CONFIG_NET_VENDOR_VIA is not set
···
 CONFIG_TEST_UDELAY=m
+CONFIG_TEST_STATIC_KEYS=m
 CONFIG_EARLY_PRINTK=y
+8-1
arch/m68k/configs/sun3_defconfig
···
 CONFIG_BLK_DEV_INITRD=y
+CONFIG_USERFAULTFD=y
 CONFIG_SLAB=y
···
 CONFIG_NET_FOU_IP_TUNNELS=y
-CONFIG_GENEVE_CORE=m
 CONFIG_INET_AH=m
···
 CONFIG_INET_UDP_DIAG=m
+CONFIG_IPV6=m
 CONFIG_IPV6_ROUTER_PREF=y
···
 CONFIG_INET6_IPCOMP=m
+CONFIG_IPV6_ILA=m
 CONFIG_IPV6_VTI=m
···
 CONFIG_NFT_CHAIN_ROUTE_IPV4=m
+CONFIG_NFT_DUP_IPV4=m
 CONFIG_NF_TABLES_ARP=m
···
 CONFIG_NFT_CHAIN_ROUTE_IPV6=m
+CONFIG_NFT_DUP_IPV6=m
 CONFIG_NFT_CHAIN_NAT_IPV6=m
···
 CONFIG_MPLS_ROUTING=m
+CONFIG_MPLS_IPTUNNEL=m
 # CONFIG_WIRELESS is not set
···
 # CONFIG_NET_VENDOR_SUN is not set
+# CONFIG_NET_VENDOR_SYNOPSYS is not set
 # CONFIG_NET_VENDOR_VIA is not set
···
 CONFIG_TEST_UDELAY=m
+CONFIG_TEST_STATIC_KEYS=m
 CONFIG_ENCRYPTED_KEYS=m
+8-1
arch/m68k/configs/sun3x_defconfig
···
 CONFIG_BLK_DEV_INITRD=y
+CONFIG_USERFAULTFD=y
 CONFIG_SLAB=y
···
 CONFIG_NET_FOU_IP_TUNNELS=y
-CONFIG_GENEVE_CORE=m
 CONFIG_INET_AH=m
···
 CONFIG_INET_UDP_DIAG=m
+CONFIG_IPV6=m
 CONFIG_IPV6_ROUTER_PREF=y
···
 CONFIG_INET6_IPCOMP=m
+CONFIG_IPV6_ILA=m
 CONFIG_IPV6_VTI=m
···
 CONFIG_NFT_CHAIN_ROUTE_IPV4=m
+CONFIG_NFT_DUP_IPV4=m
 CONFIG_NF_TABLES_ARP=m
···
 CONFIG_NFT_CHAIN_ROUTE_IPV6=m
+CONFIG_NFT_DUP_IPV6=m
 CONFIG_NFT_CHAIN_NAT_IPV6=m
···
 CONFIG_MPLS_ROUTING=m
+CONFIG_MPLS_IPTUNNEL=m
 # CONFIG_WIRELESS is not set
···
 # CONFIG_NET_VENDOR_STMICRO is not set
+# CONFIG_NET_VENDOR_SYNOPSYS is not set
 # CONFIG_NET_VENDOR_VIA is not set
···
 CONFIG_TEST_UDELAY=m
+CONFIG_TEST_STATIC_KEYS=m
 CONFIG_EARLY_PRINTK=y
+30
arch/m68k/include/asm/linkage.h
···
 #define __ALIGN			.align 4
 #define __ALIGN_STR		".align 4"

+/*
+ * Make sure the compiler doesn't do anything stupid with the
+ * arguments on the stack - they are owned by the *caller*, not
+ * the callee.  This just fools gcc into not spilling into them,
+ * and keeps it from doing tailcall recursion and/or using the
+ * stack slots for temporaries, since they are live and "used"
+ * all the way to the end of the function.
+ */
+#define asmlinkage_protect(n, ret, args...) \
+	__asmlinkage_protect##n(ret, ##args)
+#define __asmlinkage_protect_n(ret, args...) \
+	__asm__ __volatile__ ("" : "=r" (ret) : "0" (ret), ##args)
+#define __asmlinkage_protect0(ret) \
+	__asmlinkage_protect_n(ret)
+#define __asmlinkage_protect1(ret, arg1) \
+	__asmlinkage_protect_n(ret, "m" (arg1))
+#define __asmlinkage_protect2(ret, arg1, arg2) \
+	__asmlinkage_protect_n(ret, "m" (arg1), "m" (arg2))
+#define __asmlinkage_protect3(ret, arg1, arg2, arg3) \
+	__asmlinkage_protect_n(ret, "m" (arg1), "m" (arg2), "m" (arg3))
+#define __asmlinkage_protect4(ret, arg1, arg2, arg3, arg4) \
+	__asmlinkage_protect_n(ret, "m" (arg1), "m" (arg2), "m" (arg3), \
+			       "m" (arg4))
+#define __asmlinkage_protect5(ret, arg1, arg2, arg3, arg4, arg5) \
+	__asmlinkage_protect_n(ret, "m" (arg1), "m" (arg2), "m" (arg3), \
+			       "m" (arg4), "m" (arg5))
+#define __asmlinkage_protect6(ret, arg1, arg2, arg3, arg4, arg5, arg6) \
+	__asmlinkage_protect_n(ret, "m" (arg1), "m" (arg2), "m" (arg3), \
+			       "m" (ar4), "m" (arg5), "m" (arg6))
+
 #endif
···
 #define MIPS_CPU_CDMM		0x4000000000ull	 /* CPU has Common Device Memory Map */
 #define MIPS_CPU_BP_GHIST	0x8000000000ull	 /* R12K+ Branch Prediction Global History */
 #define MIPS_CPU_SP		0x10000000000ull /* Small (1KB) page support */
+#define MIPS_CPU_FTLB		0x20000000000ull /* CPU has Fixed-page-size TLB */

 /*
  * CPU ASE encodings
+9
arch/mips/include/asm/maar.h
···
 }

 /**
+ * maar_init() - initialise MAARs
+ *
+ * Performs initialisation of MAARs for the current CPU, making use of the
+ * platform's implementation of platform_maar_init where necessary and
+ * duplicating the setup it provides on secondary CPUs.
+ */
+extern void maar_init(void);
+
+/**
  * struct maar_config - MAAR configuration data
  * @lower:	The lowest address that the MAAR pair will affect. Must be
  *		aligned to a 2^16 byte boundary.
+39
arch/mips/include/asm/mips-cm.h
···
 BUILD_CM_R_(gic_status,		MIPS_CM_GCB_OFS + 0xd0)
 BUILD_CM_R_(cpc_status,		MIPS_CM_GCB_OFS + 0xf0)
 BUILD_CM_RW(l2_config,		MIPS_CM_GCB_OFS + 0x130)
+BUILD_CM_RW(sys_config2,	MIPS_CM_GCB_OFS + 0x150)

 /* Core Local & Core Other register accessor functions */
 BUILD_CM_Cx_RW(reset_release,	0x00)
···
 #define CM_GCR_L2_CONFIG_ASSOC_SHF		0
 #define CM_GCR_L2_CONFIG_ASSOC_MSK		(_ULCAST_(0xff) << 0)

+/* GCR_SYS_CONFIG2 register fields */
+#define CM_GCR_SYS_CONFIG2_MAXVPW_SHF		0
+#define CM_GCR_SYS_CONFIG2_MAXVPW_MSK		(_ULCAST_(0xf) << 0)
+
 /* GCR_Cx_COHERENCE register fields */
 #define CM_GCR_Cx_COHERENCE_COHDOMAINEN_SHF	0
 #define CM_GCR_Cx_COHERENCE_COHDOMAINEN_MSK	(_ULCAST_(0xff) << 0)
···
 		return 0;

 	return read_gcr_rev();
+}
+
+/**
+ * mips_cm_max_vp_width() - return the width in bits of VP indices
+ *
+ * Return: the width, in bits, of VP indices in fields that combine core & VP
+ * indices.
+ */
+static inline unsigned int mips_cm_max_vp_width(void)
+{
+	extern int smp_num_siblings;
+
+	if (mips_cm_revision() >= CM_REV_CM3)
+		return read_gcr_sys_config2() & CM_GCR_SYS_CONFIG2_MAXVPW_MSK;
+
+	return smp_num_siblings;
+}
+
+/**
+ * mips_cm_vp_id() - calculate the hardware VP ID for a CPU
+ * @cpu: the CPU whose VP ID to calculate
+ *
+ * Hardware such as the GIC uses identifiers for VPs which may not match the
+ * CPU numbers used by Linux.  This function calculates the hardware VP
+ * identifier corresponding to a given CPU.
+ *
+ * Return: the VP ID for the CPU.
+ */
+static inline unsigned int mips_cm_vp_id(unsigned int cpu)
+{
+	unsigned int core = cpu_data[cpu].core;
+	unsigned int vp = cpu_vpe_id(&cpu_data[cpu]);
+
+	return (core * mips_cm_max_vp_width()) + vp;
 }

 #endif /* __MIPS_ASM_MIPS_CM_H__ */
···3939 mfc0 \dest, CP0_CONFIG, 34040 andi \dest, \dest, MIPS_CONF3_MT4141 beqz \dest, \nomt4242+ nop4243 .endm43444445.section .text.cps-vec···224223 END(excep_ejtag)225224226225LEAF(mips_cps_core_init)227227-#ifdef CONFIG_MIPS_MT226226+#ifdef CONFIG_MIPS_MT_SMP228227 /* Check that the core implements the MT ASE */229228 has_mt t0, 3f230230- nop231229232230 .set push233231 .set mips64r2···310310 PTR_ADDU t0, t0, t1311311312312 /* Calculate this VPEs ID. If the core doesn't support MT use 0 */313313+ li t9, 0314314+#ifdef CONFIG_MIPS_MT_SMP313315 has_mt ta2, 1f314314- li t9, 0315316316317 /* Find the number of VPEs present in the core */317318 mfc0 t1, CP0_MVPCONF0···331330 /* Retrieve the VPE ID from EBase.CPUNum */332331 mfc0 t9, $15, 1333332 and t9, t9, t1333333+#endif3343343353351: /* Calculate a pointer to this VPEs struct vpe_boot_config */336336 li t1, VPEBOOTCFG_SIZE···339337 PTR_L ta3, COREBOOTCFG_VPECONFIG(t0)340338 PTR_ADDU v0, v0, ta3341339342342-#ifdef CONFIG_MIPS_MT340340+#ifdef CONFIG_MIPS_MT_SMP343341344342 /* If the core doesn't support MT then return */345343 bnez ta2, 1f···4534514544522: .set pop455453456456-#endif /* CONFIG_MIPS_MT */454454+#endif /* CONFIG_MIPS_MT_SMP */457455458456 /* Return */459457 jr ra
+13-8
arch/mips/kernel/cpu-probe.c
···410410static inline unsigned int decode_config0(struct cpuinfo_mips *c)411411{412412 unsigned int config0;413413- int isa;413413+ int isa, mt;414414415415 config0 = read_c0_config();416416417417 /*418418 * Look for Standard TLB or Dual VTLB and FTLB419419 */420420- if ((((config0 & MIPS_CONF_MT) >> 7) == 1) ||421421- (((config0 & MIPS_CONF_MT) >> 7) == 4))420420+ mt = config0 & MIPS_CONF_MT;421421+ if (mt == MIPS_CONF_MT_TLB)422422 c->options |= MIPS_CPU_TLB;423423+ else if (mt == MIPS_CONF_MT_FTLB)424424+ c->options |= MIPS_CPU_TLB | MIPS_CPU_FTLB;423425424426 isa = (config0 & MIPS_CONF_AT) >> 13;425427 switch (isa) {···561559 if (cpu_has_tlb) {562560 if (((config4 & MIPS_CONF4_IE) >> 29) == 2)563561 c->options |= MIPS_CPU_TLBINV;562562+564563 /*565565- * This is a bit ugly. R6 has dropped that field from566566- * config4 and the only valid configuration is VTLB+FTLB so567567- * set a good value for mmuextdef for that case.564564+ * R6 has dropped the MMUExtDef field from config4.565565+ * On R6 the fields always describe the FTLB, and only if it is566566+ * present according to Config.MT.568567 */569569- if (cpu_has_mips_r6)568568+ if (!cpu_has_mips_r6)569569+ mmuextdef = config4 & MIPS_CONF4_MMUEXTDEF;570570+ else if (cpu_has_ftlb)570571 mmuextdef = MIPS_CONF4_MMUEXTDEF_VTLBSIZEEXT;571572 else572572- mmuextdef = config4 & MIPS_CONF4_MMUEXTDEF;573573+ mmuextdef = 0;573574574575 switch (mmuextdef) {575576 case MIPS_CONF4_MMUEXTDEF_MMUSIZEEXT:
+1-25
arch/mips/kernel/octeon_switch.S
···1818 .set pop1919/*2020 * task_struct *resume(task_struct *prev, task_struct *next,2121- * struct thread_info *next_ti, int usedfpu)2121+ * struct thread_info *next_ti)2222 */2323 .align 72424 LEAF(resume)···2727 LONG_S t1, THREAD_STATUS(a0)2828 cpu_save_nonscratch a02929 LONG_S ra, THREAD_REG31(a0)3030-3131- /*3232- * check if we need to save FPU registers3333- */3434- .set push3535- .set noreorder3636- beqz a3, 1f3737- PTR_L t3, TASK_THREAD_INFO(a0)3838- .set pop3939-4040- /*4141- * clear saved user stack CU1 bit4242- */4343- LONG_L t0, ST_OFF(t3)4444- li t1, ~ST0_CU14545- and t0, t0, t14646- LONG_S t0, ST_OFF(t3)4747-4848- .set push4949- .set arch=mips64r25050- fpu_save_double a0 t0 t1 # c0_status passed in t05151- # clobbers t15252- .set pop5353-1:54305531#if CONFIG_CAVIUM_OCTEON_CVMSEG_SIZE > 05632 /* Check if we need to store CVMSEG state */
+1-27
arch/mips/kernel/r2300_switch.S
···3131#define ST_OFF (_THREAD_SIZE - 32 - PT_SIZE + PT_STATUS)32323333/*3434- * FPU context is saved iff the process has used it's FPU in the current3535- * time slice as indicated by TIF_USEDFPU. In any case, the CU1 bit for user3636- * space STATUS register should be 0, so that a process *always* starts its3737- * userland with FPU disabled after each context switch.3838- *3939- * FPU will be enabled as soon as the process accesses FPU again, through4040- * do_cpu() trap.4141- */4242-4343-/*4434 * task_struct *resume(task_struct *prev, task_struct *next,4545- * struct thread_info *next_ti, int usedfpu)3535+ * struct thread_info *next_ti)4636 */4737LEAF(resume)4838 mfc0 t1, CP0_STATUS4939 sw t1, THREAD_STATUS(a0)5040 cpu_save_nonscratch a05141 sw ra, THREAD_REG31(a0)5252-5353- beqz a3, 1f5454-5555- PTR_L t3, TASK_THREAD_INFO(a0)5656-5757- /*5858- * clear saved user stack CU1 bit5959- */6060- lw t0, ST_OFF(t3)6161- li t1, ~ST0_CU16262- and t0, t0, t16363- sw t0, ST_OFF(t3)6464-6565- fpu_save_single a0, t0 # clobbers t06666-6767-1:68426943#if defined(CONFIG_CC_STACKPROTECTOR) && !defined(CONFIG_SMP)7044 PTR_LA t8, __stack_chk_guard
···338338 if (end <= reserved_end)339339 continue;340340#ifdef CONFIG_BLK_DEV_INITRD341341- /* mapstart should be after initrd_end */341341+ /* Skip zones before initrd and initrd itself */342342 if (initrd_end && end <= (unsigned long)PFN_UP(__pa(initrd_end)))343343 continue;344344#endif···370370#endif371371 max_low_pfn = PFN_DOWN(HIGHMEM_START);372372 }373373+374374+#ifdef CONFIG_BLK_DEV_INITRD375375+ /*376376+ * mapstart should be after initrd_end377377+ */378378+ if (initrd_end)379379+ mapstart = max(mapstart, (unsigned long)PFN_UP(__pa(initrd_end)));380380+#endif373381374382 /*375383 * Initialize the boot-time allocator with low memory only.
+2
arch/mips/kernel/smp.c
···4242#include <asm/mmu_context.h>4343#include <asm/time.h>4444#include <asm/setup.h>4545+#include <asm/maar.h>45464647cpumask_t cpu_callin_map; /* Bitmask of started secondaries */4748···158157 mips_clockevent_init();159158 mp_ops->init_secondary();160159 cpu_report();160160+ maar_init();161161162162 /*163163 * XXX parity protection should be folded in here when it's converted
···1919#include <linux/errno.h>2020#include <linux/io.h>2121#include <linux/module.h>2222+#include <linux/string.h>22232324#include <gxio/iorpc_globals.h>2425#include <gxio/iorpc_mpipe.h>···29283029/* HACK: Avoid pointless "shadow" warnings. */3130#define link link_shadow3232-3333-/**3434- * strscpy - Copy a C-string into a sized buffer, but only if it fits3535- * @dest: Where to copy the string to3636- * @src: Where to copy the string from3737- * @size: size of destination buffer3838- *3939- * Use this routine to avoid copying too-long strings.4040- * The routine returns the total number of bytes copied4141- * (including the trailing NUL) or zero if the buffer wasn't4242- * big enough. To ensure that programmers pay attention4343- * to the return code, the destination has a single NUL4444- * written at the front (if size is non-zero) when the4545- * buffer is not big enough.4646- */4747-static size_t strscpy(char *dest, const char *src, size_t size)4848-{4949- size_t len = strnlen(src, size) + 1;5050- if (len > size) {5151- if (size)5252- dest[0] = '\0';5353- return 0;5454- }5555- memcpy(dest, src, len);5656- return len;5757-}58315932int gxio_mpipe_init(gxio_mpipe_context_t *context, unsigned int mpipe_index)6033{···515540 if (!context)516541 return GXIO_ERR_NO_DEVICE;517542518518- if (strscpy(name.name, link_name, sizeof(name.name)) == 0)543543+ if (strscpy(name.name, link_name, sizeof(name.name)) < 0)519544 return GXIO_ERR_NO_DEVICE;520545521546 return gxio_mpipe_info_instance_aux(context, name);···534559535560 rv = gxio_mpipe_info_enumerate_aux(context, idx, &name, &mac);536561 if (rv >= 0) {537537- if (strscpy(link_name, name.name, sizeof(name.name)) == 0)562562+ if (strscpy(link_name, name.name, sizeof(name.name)) < 0)538563 return GXIO_ERR_INVAL_MEMORY_SIZE;539564 memcpy(link_mac, mac.mac, sizeof(mac.mac));540565 }···551576 _gxio_mpipe_link_name_t name;552577 int rv;553578554554- if (strscpy(name.name, link_name, sizeof(name.name)) == 0)579579+ if 
(strscpy(name.name, link_name, sizeof(name.name)) < 0)555580 return GXIO_ERR_NO_DEVICE;556581557582 rv = gxio_mpipe_link_open_aux(context, name, flags);
···8686extern void __iomem *__init efi_ioremap(unsigned long addr, unsigned long size,8787 u32 type, u64 attribute);88888989+#ifdef CONFIG_KASAN8990/*9091 * CONFIG_KASAN may redefine memset to __memset. __memset function is present9192 * only in kernel binary. Since the EFI stub linked into a separate binary it···9695#undef memcpy9796#undef memset9897#undef memmove9898+#endif9999100100#endif /* CONFIG_X86_32 */101101
···185185}186186187187#ifdef CONFIG_KEXEC_FILE188188-static int get_nr_ram_ranges_callback(unsigned long start_pfn,189189- unsigned long nr_pfn, void *arg)188188+static int get_nr_ram_ranges_callback(u64 start, u64 end, void *arg)190189{191191- int *nr_ranges = arg;190190+ unsigned int *nr_ranges = arg;192191193192 (*nr_ranges)++;194193 return 0;···213214214215 ced->image = image;215216216216- walk_system_ram_range(0, -1, &nr_ranges,217217+ walk_system_ram_res(0, -1, &nr_ranges,217218 get_nr_ram_ranges_callback);218219219220 ced->max_nr_ranges = nr_ranges;
+55
arch/x86/kernel/process.c
···506506 return randomize_range(mm->brk, range_end, 0) ? : mm->brk;507507}508508509509+/*510510+ * Called from fs/proc with a reference on @p to find the function511511+ * which called into schedule(). This needs to be done carefully512512+ * because the task might wake up and we might look at a stack513513+ * changing under us.514514+ */515515+unsigned long get_wchan(struct task_struct *p)516516+{517517+ unsigned long start, bottom, top, sp, fp, ip;518518+ int count = 0;519519+520520+ if (!p || p == current || p->state == TASK_RUNNING)521521+ return 0;522522+523523+ start = (unsigned long)task_stack_page(p);524524+ if (!start)525525+ return 0;526526+527527+ /*528528+ * Layout of the stack page:529529+ *530530+ * ----------- topmax = start + THREAD_SIZE - sizeof(unsigned long)531531+ * PADDING532532+ * ----------- top = topmax - TOP_OF_KERNEL_STACK_PADDING533533+ * stack534534+ * ----------- bottom = start + sizeof(thread_info)535535+ * thread_info536536+ * ----------- start537537+ *538538+ * The tasks stack pointer points at the location where the539539+ * framepointer is stored. The data on the stack is:540540+ * ... IP FP ... IP FP541541+ *542542+ * We need to read FP and IP, so we need to adjust the upper543543+ * bound by another unsigned long.544544+ */545545+ top = start + THREAD_SIZE - TOP_OF_KERNEL_STACK_PADDING;546546+ top -= 2 * sizeof(unsigned long);547547+ bottom = start + sizeof(struct thread_info);548548+549549+ sp = READ_ONCE(p->thread.sp);550550+ if (sp < bottom || sp > top)551551+ return 0;552552+553553+ fp = READ_ONCE(*(unsigned long *)sp);554554+ do {555555+ if (fp < bottom || fp > top)556556+ return 0;557557+ ip = READ_ONCE(*(unsigned long *)(fp + sizeof(unsigned long)));558558+ if (!in_sched_functions(ip))559559+ return ip;560560+ fp = READ_ONCE(*(unsigned long *)fp);561561+ } while (count++ < 16 && p->state != TASK_RUNNING);562562+ return 0;563563+}
-28
arch/x86/kernel/process_32.c
···324324325325 return prev_p;326326}327327-328328-#define top_esp (THREAD_SIZE - sizeof(unsigned long))329329-#define top_ebp (THREAD_SIZE - 2*sizeof(unsigned long))330330-331331-unsigned long get_wchan(struct task_struct *p)332332-{333333- unsigned long bp, sp, ip;334334- unsigned long stack_page;335335- int count = 0;336336- if (!p || p == current || p->state == TASK_RUNNING)337337- return 0;338338- stack_page = (unsigned long)task_stack_page(p);339339- sp = p->thread.sp;340340- if (!stack_page || sp < stack_page || sp > top_esp+stack_page)341341- return 0;342342- /* include/asm-i386/system.h:switch_to() pushes bp last. */343343- bp = *(unsigned long *) sp;344344- do {345345- if (bp < stack_page || bp > top_ebp+stack_page)346346- return 0;347347- ip = *(unsigned long *) (bp+4);348348- if (!in_sched_functions(ip))349349- return ip;350350- bp = *(unsigned long *) bp;351351- } while (count++ < 16);352352- return 0;353353-}354354-
-24
arch/x86/kernel/process_64.c
···499499}500500EXPORT_SYMBOL_GPL(set_personality_ia32);501501502502-unsigned long get_wchan(struct task_struct *p)503503-{504504- unsigned long stack;505505- u64 fp, ip;506506- int count = 0;507507-508508- if (!p || p == current || p->state == TASK_RUNNING)509509- return 0;510510- stack = (unsigned long)task_stack_page(p);511511- if (p->thread.sp < stack || p->thread.sp >= stack+THREAD_SIZE)512512- return 0;513513- fp = *(u64 *)(p->thread.sp);514514- do {515515- if (fp < (unsigned long)stack ||516516- fp >= (unsigned long)stack+THREAD_SIZE)517517- return 0;518518- ip = *(u64 *)(fp+8);519519- if (!in_sched_functions(ip))520520- return ip;521521- fp = *(u64 *)fp;522522- } while (count++ < 16);523523- return 0;524524-}525525-526502long do_arch_prctl(struct task_struct *task, int code, unsigned long addr)527503{528504 int ret = 0;
+13-112
arch/x86/kvm/svm.c
···514514 struct vcpu_svm *svm = to_svm(vcpu);515515516516 if (svm->vmcb->control.next_rip != 0) {517517- WARN_ON(!static_cpu_has(X86_FEATURE_NRIPS));517517+ WARN_ON_ONCE(!static_cpu_has(X86_FEATURE_NRIPS));518518 svm->next_rip = svm->vmcb->control.next_rip;519519 }520520···866866 set_msr_interception(msrpm, MSR_IA32_LASTINTTOIP, 0, 0);867867}868868869869-#define MTRR_TYPE_UC_MINUS 7870870-#define MTRR2PROTVAL_INVALID 0xff871871-872872-static u8 mtrr2protval[8];873873-874874-static u8 fallback_mtrr_type(int mtrr)875875-{876876- /*877877- * WT and WP aren't always available in the host PAT. Treat878878- * them as UC and UC- respectively. Everything else should be879879- * there.880880- */881881- switch (mtrr)882882- {883883- case MTRR_TYPE_WRTHROUGH:884884- return MTRR_TYPE_UNCACHABLE;885885- case MTRR_TYPE_WRPROT:886886- return MTRR_TYPE_UC_MINUS;887887- default:888888- BUG();889889- }890890-}891891-892892-static void build_mtrr2protval(void)893893-{894894- int i;895895- u64 pat;896896-897897- for (i = 0; i < 8; i++)898898- mtrr2protval[i] = MTRR2PROTVAL_INVALID;899899-900900- /* Ignore the invalid MTRR types. */901901- mtrr2protval[2] = 0;902902- mtrr2protval[3] = 0;903903-904904- /*905905- * Use host PAT value to figure out the mapping from guest MTRR906906- * values to nested page table PAT/PCD/PWT values. 
We do not907907- * want to change the host PAT value every time we enter the908908- * guest.909909- */910910- rdmsrl(MSR_IA32_CR_PAT, pat);911911- for (i = 0; i < 8; i++) {912912- u8 mtrr = pat >> (8 * i);913913-914914- if (mtrr2protval[mtrr] == MTRR2PROTVAL_INVALID)915915- mtrr2protval[mtrr] = __cm_idx2pte(i);916916- }917917-918918- for (i = 0; i < 8; i++) {919919- if (mtrr2protval[i] == MTRR2PROTVAL_INVALID) {920920- u8 fallback = fallback_mtrr_type(i);921921- mtrr2protval[i] = mtrr2protval[fallback];922922- BUG_ON(mtrr2protval[i] == MTRR2PROTVAL_INVALID);923923- }924924- }925925-}926926-927869static __init int svm_hardware_setup(void)928870{929871 int cpu;···932990 } else933991 kvm_disable_tdp();934992935935- build_mtrr2protval();936993 return 0;937994938995err:···10861145 return target_tsc - tsc;10871146}1088114710891089-static void svm_set_guest_pat(struct vcpu_svm *svm, u64 *g_pat)10901090-{10911091- struct kvm_vcpu *vcpu = &svm->vcpu;10921092-10931093- /* Unlike Intel, AMD takes the guest's CR0.CD into account.10941094- *10951095- * AMD does not have IPAT. To emulate it for the case of guests10961096- * with no assigned devices, just set everything to WB. If guests10971097- * have assigned devices, however, we cannot force WB for RAM10981098- * pages only, so use the guest PAT directly.10991099- */11001100- if (!kvm_arch_has_assigned_device(vcpu->kvm))11011101- *g_pat = 0x0606060606060606;11021102- else11031103- *g_pat = vcpu->arch.pat;11041104-}11051105-11061106-static u64 svm_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)11071107-{11081108- u8 mtrr;11091109-11101110- /*11111111- * 1. MMIO: trust guest MTRR, so same as item 3.11121112- * 2. No passthrough: always map as WB, and force guest PAT to WB as well11131113- * 3. 
Passthrough: can't guarantee the result, try to trust guest.11141114- */11151115- if (!is_mmio && !kvm_arch_has_assigned_device(vcpu->kvm))11161116- return 0;11171117-11181118- if (!kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_CD_NW_CLEARED) &&11191119- kvm_read_cr0(vcpu) & X86_CR0_CD)11201120- return _PAGE_NOCACHE;11211121-11221122- mtrr = kvm_mtrr_get_guest_memory_type(vcpu, gfn);11231123- return mtrr2protval[mtrr];11241124-}11251125-11261148static void init_vmcb(struct vcpu_svm *svm, bool init_event)11271149{11281150 struct vmcb_control_area *control = &svm->vmcb->control;···11821278 clr_cr_intercept(svm, INTERCEPT_CR3_READ);11831279 clr_cr_intercept(svm, INTERCEPT_CR3_WRITE);11841280 save->g_pat = svm->vcpu.arch.pat;11851185- svm_set_guest_pat(svm, &save->g_pat);11861281 save->cr3 = 0;11871282 save->cr4 = 0;11881283 }···1576167315771674 if (!vcpu->fpu_active)15781675 cr0 |= X86_CR0_TS;15791579-15801580- /* These are emulated via page tables. */15811581- cr0 &= ~(X86_CR0_CD | X86_CR0_NW);15821582-16761676+ /*16771677+ * re-enable caching here because the QEMU bios16781678+ * does not do it - this results in some delay at16791679+ * reboot16801680+ */16811681+ if (kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_CD_NW_CLEARED))16821682+ cr0 &= ~(X86_CR0_CD | X86_CR0_NW);15831683 svm->vmcb->save.cr0 = cr0;15841684 mark_dirty(svm->vmcb, VMCB_CR);15851685 update_cr0_intercept(svm);···32573351 case MSR_VM_IGNNE:32583352 vcpu_unimpl(vcpu, "unimplemented wrmsr: 0x%x data 0x%llx\n", ecx, data);32593353 break;32603260- case MSR_IA32_CR_PAT:32613261- if (npt_enabled) {32623262- if (!kvm_mtrr_valid(vcpu, MSR_IA32_CR_PAT, data))32633263- return 1;32643264- vcpu->arch.pat = data;32653265- svm_set_guest_pat(svm, &svm->vmcb->save.g_pat);32663266- mark_dirty(svm->vmcb, VMCB_NPT);32673267- break;32683268- }32693269- /* fall through */32703354 default:32713355 return kvm_set_msr_common(vcpu, msr);32723356 }···40894193static bool svm_has_high_real_mode_segbase(void)40904194{40914195 
return true;41964196+}41974197+41984198+static u64 svm_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)41994199+{42004200+ return 0;40924201}4093420240944203static void svm_cpuid_update(struct kvm_vcpu *vcpu)
+8-3
arch/x86/kvm/vmx.c
···86178617 u64 ipat = 0;8618861886198619 /* For VT-d and EPT combination86208620- * 1. MMIO: guest may want to apply WC, trust it.86208620+ * 1. MMIO: always map as UC86218621 * 2. EPT with VT-d:86228622 * a. VT-d without snooping control feature: can't guarantee the86238623- * result, try to trust guest. So the same as item 1.86238623+ * result, try to trust guest.86248624 * b. VT-d with snooping control feature: snooping control feature of86258625 * VT-d engine can guarantee the cache correctness. Just set it86268626 * to WB to keep consistent with host. So the same as item 3.86278627 * 3. EPT without VT-d: always map as WB and set IPAT=1 to keep86288628 * consistent with host MTRR86298629 */86308630- if (!is_mmio && !kvm_arch_has_noncoherent_dma(vcpu->kvm)) {86308630+ if (is_mmio) {86318631+ cache = MTRR_TYPE_UNCACHABLE;86328632+ goto exit;86338633+ }86348634+86358635+ if (!kvm_arch_has_noncoherent_dma(vcpu->kvm)) {86318636 ipat = VMX_EPT_IPAT_BIT;86328637 cache = MTRR_TYPE_WRBACK;86338638 goto exit;
-4
arch/x86/kvm/x86.c
···17081708 vcpu->pvclock_set_guest_stopped_request = false;17091709 }1710171017111711- pvclock_flags |= PVCLOCK_COUNTS_FROM_ZERO;17121712-17131711 /* If the host uses TSC clocksource, then it is stable */17141712 if (use_master_clock)17151713 pvclock_flags |= PVCLOCK_TSC_STABLE_BIT;···20052007 &vcpu->requests);2006200820072009 ka->boot_vcpu_runs_old_kvmclock = tmp;20082008-20092009- ka->kvmclock_offset = -get_kernel_ns();20102010 }2011201120122012 vcpu->arch.time = data;
+1-1
arch/x86/mm/init_64.c
···11321132 * has been zapped already via cleanup_highmem().11331133 */11341134 all_end = roundup((unsigned long)_brk_end, PMD_SIZE);11351135- set_memory_nx(rodata_start, (all_end - rodata_start) >> PAGE_SHIFT);11351135+ set_memory_nx(text_end, (all_end - text_end) >> PAGE_SHIFT);1136113611371137 rodata_test();11381138
+66-1
arch/x86/platform/efi/efi.c
···705705}706706707707/*708708+ * Iterate the EFI memory map in reverse order because the regions709709+ * will be mapped top-down. The end result is the same as if we had710710+ * mapped things forward, but doesn't require us to change the711711+ * existing implementation of efi_map_region().712712+ */713713+static inline void *efi_map_next_entry_reverse(void *entry)714714+{715715+ /* Initial call */716716+ if (!entry)717717+ return memmap.map_end - memmap.desc_size;718718+719719+ entry -= memmap.desc_size;720720+ if (entry < memmap.map)721721+ return NULL;722722+723723+ return entry;724724+}725725+726726+/*727727+ * efi_map_next_entry - Return the next EFI memory map descriptor728728+ * @entry: Previous EFI memory map descriptor729729+ *730730+ * This is a helper function to iterate over the EFI memory map, which731731+ * we do in different orders depending on the current configuration.732732+ *733733+ * To begin traversing the memory map @entry must be %NULL.734734+ *735735+ * Returns %NULL when we reach the end of the memory map.736736+ */737737+static void *efi_map_next_entry(void *entry)738738+{739739+ if (!efi_enabled(EFI_OLD_MEMMAP) && efi_enabled(EFI_64BIT)) {740740+ /*741741+ * Starting in UEFI v2.5 the EFI_PROPERTIES_TABLE742742+ * config table feature requires us to map all entries743743+ * in the same order as they appear in the EFI memory744744+ * map. That is to say, entry N must have a lower745745+ * virtual address than entry N+1. This is because the746746+ * firmware toolchain leaves relative references in747747+ * the code/data sections, which are split and become748748+ * separate EFI memory regions. 
Mapping things749749+ * out-of-order leads to the firmware accessing750750+ * unmapped addresses.751751+ *752752+ * Since we need to map things this way whether or not753753+ * the kernel actually makes use of754754+ * EFI_PROPERTIES_TABLE, let's just switch to this755755+ * scheme by default for 64-bit.756756+ */757757+ return efi_map_next_entry_reverse(entry);758758+ }759759+760760+ /* Initial call */761761+ if (!entry)762762+ return memmap.map;763763+764764+ entry += memmap.desc_size;765765+ if (entry >= memmap.map_end)766766+ return NULL;767767+768768+ return entry;769769+}770770+771771+/*708772 * Map the efi memory ranges of the runtime services and update new_mmap with709773 * virtual addresses.710774 */···778714 unsigned long left = 0;779715 efi_memory_desc_t *md;780716781781- for (p = memmap.map; p < memmap.map_end; p += memmap.desc_size) {717717+ p = NULL;718718+ while ((p = efi_map_next_entry(p))) {782719 md = p;783720 if (!(md->attribute & EFI_MEMORY_RUNTIME)) {784721#ifdef CONFIG_X86_64
···393393 * Ends all I/O on a request. It does not handle partial completions.394394 * The actual completion happens out-of-order, through a IPI handler.395395 **/396396-void blk_mq_complete_request(struct request *rq)396396+void blk_mq_complete_request(struct request *rq, int error)397397{398398 struct request_queue *q = rq->q;399399400400 if (unlikely(blk_should_fake_timeout(q)))401401 return;402402- if (!blk_mark_rq_complete(rq))402402+ if (!blk_mark_rq_complete(rq)) {403403+ rq->errors = error;403404 __blk_mq_complete_request(rq);405405+ }404406}405407EXPORT_SYMBOL(blk_mq_complete_request);406408···618616 * If a request wasn't started before the queue was619617 * marked dying, kill it here or it'll go unnoticed.620618 */621621- if (unlikely(blk_queue_dying(rq->q))) {622622- rq->errors = -EIO;623623- blk_mq_complete_request(rq);624624- }619619+ if (unlikely(blk_queue_dying(rq->q)))620620+ blk_mq_complete_request(rq, -EIO);625621 return;626622 }627623 if (rq->cmd_flags & REQ_NO_TIMEOUT)···641641 .next = 0,642642 .next_set = 0,643643 };644644- struct blk_mq_hw_ctx *hctx;645644 int i;646645647647- queue_for_each_hw_ctx(q, hctx, i) {648648- /*649649- * If not software queues are currently mapped to this650650- * hardware queue, there's nothing to check651651- */652652- if (!blk_mq_hw_queue_mapped(hctx))653653- continue;654654-655655- blk_mq_tag_busy_iter(hctx, blk_mq_check_expired, &data);656656- }646646+ blk_mq_queue_tag_busy_iter(q, blk_mq_check_expired, &data);657647658648 if (data.next_set) {659649 data.next = blk_rq_timeout(round_jiffies_up(data.next));660650 mod_timer(&q->timeout, data.next);661651 } else {652652+ struct blk_mq_hw_ctx *hctx;653653+662654 queue_for_each_hw_ctx(q, hctx, i) {663655 /* the hctx may be unmapped, so check it here */664656 if (blk_mq_hw_queue_mapped(hctx))···17811789 }17821790}1783179117841784-static void blk_mq_map_swqueue(struct request_queue *q)17921792+static void blk_mq_map_swqueue(struct request_queue *q,17931793+ const struct 
cpumask *online_mask)17851794{17861795 unsigned int i;17871796 struct blk_mq_hw_ctx *hctx;17881797 struct blk_mq_ctx *ctx;17891798 struct blk_mq_tag_set *set = q->tag_set;17991799+18001800+ /*18011801+ * Avoid others reading imcomplete hctx->cpumask through sysfs18021802+ */18031803+ mutex_lock(&q->sysfs_lock);1790180417911805 queue_for_each_hw_ctx(q, hctx, i) {17921806 cpumask_clear(hctx->cpumask);···18041806 */18051807 queue_for_each_ctx(q, ctx, i) {18061808 /* If the cpu isn't online, the cpu is mapped to first hctx */18071807- if (!cpu_online(i))18091809+ if (!cpumask_test_cpu(i, online_mask))18081810 continue;1809181118101812 hctx = q->mq_ops->map_queue(q, i);18111813 cpumask_set_cpu(i, hctx->cpumask);18121812- cpumask_set_cpu(i, hctx->tags->cpumask);18131814 ctx->index_hw = hctx->nr_ctx;18141815 hctx->ctxs[hctx->nr_ctx++] = ctx;18151816 }18171817+18181818+ mutex_unlock(&q->sysfs_lock);1816181918171820 queue_for_each_hw_ctx(q, hctx, i) {18181821 struct blk_mq_ctxmap *map = &hctx->ctx_map;···18491850 */18501851 hctx->next_cpu = cpumask_first(hctx->cpumask);18511852 hctx->next_cpu_batch = BLK_MQ_CPU_WORK_BATCH;18531853+ }18541854+18551855+ queue_for_each_ctx(q, ctx, i) {18561856+ if (!cpumask_test_cpu(i, online_mask))18571857+ continue;18581858+18591859+ hctx = q->mq_ops->map_queue(q, i);18601860+ cpumask_set_cpu(i, hctx->tags->cpumask);18521861 }18531862}18541863···19241917 kfree(hctx->ctxs);19251918 kfree(hctx);19261919 }19201920+19211921+ kfree(q->mq_map);19221922+ q->mq_map = NULL;1927192319281924 kfree(q->queue_hw_ctx);19291925···20372027 if (blk_mq_init_hw_queues(q, set))20382028 goto err_hctxs;2039202920302030+ get_online_cpus();20402031 mutex_lock(&all_q_mutex);20322032+20412033 list_add_tail(&q->all_q_node, &all_q_list);20422042- mutex_unlock(&all_q_mutex);20432043-20442034 blk_mq_add_queue_tag_set(set, q);20352035+ blk_mq_map_swqueue(q, cpu_online_mask);2045203620462046- blk_mq_map_swqueue(q);20372037+ mutex_unlock(&all_q_mutex);20382038+ 
put_online_cpus();2047203920482040 return q;20492041···20692057{20702058 struct blk_mq_tag_set *set = q->tag_set;2071205920602060+ mutex_lock(&all_q_mutex);20612061+ list_del_init(&q->all_q_node);20622062+ mutex_unlock(&all_q_mutex);20632063+20722064 blk_mq_del_queue_tag_set(q);2073206520742066 blk_mq_exit_hw_queues(q, set, set->nr_hw_queues);20752067 blk_mq_free_hw_queues(q, set);2076206820772069 percpu_ref_exit(&q->mq_usage_counter);20782078-20792079- kfree(q->mq_map);20802080-20812081- q->mq_map = NULL;20822082-20832083- mutex_lock(&all_q_mutex);20842084- list_del_init(&q->all_q_node);20852085- mutex_unlock(&all_q_mutex);20862070}2087207120882072/* Basically redo blk_mq_init_queue with queue frozen */20892089-static void blk_mq_queue_reinit(struct request_queue *q)20732073+static void blk_mq_queue_reinit(struct request_queue *q,20742074+ const struct cpumask *online_mask)20902075{20912076 WARN_ON_ONCE(!atomic_read(&q->mq_freeze_depth));2092207720932078 blk_mq_sysfs_unregister(q);2094207920952095- blk_mq_update_queue_map(q->mq_map, q->nr_hw_queues);20802080+ blk_mq_update_queue_map(q->mq_map, q->nr_hw_queues, online_mask);2096208120972082 /*20982083 * redo blk_mq_init_cpu_queues and blk_mq_init_hw_queues. 
FIXME: maybe···20972088 * involves free and re-allocate memory, worthy doing?)20982089 */2099209021002100- blk_mq_map_swqueue(q);20912091+ blk_mq_map_swqueue(q, online_mask);2101209221022093 blk_mq_sysfs_register(q);21032094}···21062097 unsigned long action, void *hcpu)21072098{21082099 struct request_queue *q;21002100+ int cpu = (unsigned long)hcpu;21012101+ /*21022102+ * New online cpumask which is going to be set in this hotplug event.21032103+ * Declare this cpumasks as global as cpu-hotplug operation is invoked21042104+ * one-by-one and dynamically allocating this could result in a failure.21052105+ */21062106+ static struct cpumask online_new;2109210721102108 /*21112111- * Before new mappings are established, hotadded cpu might already21122112- * start handling requests. This doesn't break anything as we map21132113- * offline CPUs to first hardware queue. We will re-init the queue21142114- * below to get optimal settings.21092109+ * Before hotadded cpu starts handling requests, new mappings must21102110+ * be established. Otherwise, these requests in hw queue might21112111+ * never be dispatched.21122112+ *21132113+ * For example, there is a single hw queue (hctx) and two CPU queues21142114+ * (ctx0 for CPU0, and ctx1 for CPU1).21152115+ *21162116+ * Now CPU1 is just onlined and a request is inserted into21172117+ * ctx1->rq_list and set bit0 in pending bitmap as ctx1->index_hw is21182118+ * still zero.21192119+ *21202120+ * And then while running hw queue, flush_busy_ctxs() finds bit0 is21212121+ * set in pending bitmap and tries to retrieve requests in21222122+ * hctx->ctxs[0]->rq_list. 
But htx->ctxs[0] is a pointer to ctx0,21232123+ * so the request in ctx1->rq_list is ignored.21152124 */21162116- if (action != CPU_DEAD && action != CPU_DEAD_FROZEN &&21172117- action != CPU_ONLINE && action != CPU_ONLINE_FROZEN)21252125+ switch (action & ~CPU_TASKS_FROZEN) {21262126+ case CPU_DEAD:21272127+ case CPU_UP_CANCELED:21282128+ cpumask_copy(&online_new, cpu_online_mask);21292129+ break;21302130+ case CPU_UP_PREPARE:21312131+ cpumask_copy(&online_new, cpu_online_mask);21322132+ cpumask_set_cpu(cpu, &online_new);21332133+ break;21342134+ default:21182135 return NOTIFY_OK;21362136+ }2119213721202138 mutex_lock(&all_q_mutex);21212139···21662130 }2167213121682132 list_for_each_entry(q, &all_q_list, all_q_node)21692169- blk_mq_queue_reinit(q);21332133+ blk_mq_queue_reinit(q, &online_new);2170213421712135 list_for_each_entry(q, &all_q_list, all_q_node)21722136 blk_mq_unfreeze_queue(q);
+2-1
block/blk-mq.h
···5151 * CPU -> queue mappings5252 */5353extern unsigned int *blk_mq_make_queue_map(struct blk_mq_tag_set *set);5454-extern int blk_mq_update_queue_map(unsigned int *map, unsigned int nr_queues);5454+extern int blk_mq_update_queue_map(unsigned int *map, unsigned int nr_queues,5555+ const struct cpumask *online_mask);5556extern int blk_mq_hw_queue_to_node(unsigned int *map, unsigned int);56575758/*
···10441044 goto err_exit;1045104510461046 mutex_lock(&ec->mutex);10471047+ result = -ENODATA;10471048 list_for_each_entry(handler, &ec->list, node) {10481049 if (value == handler->query_bit) {10501050+ result = 0;10491051 q->handler = acpi_ec_get_query_handler(handler);10501052 ec_dbg_evt("Query(0x%02x) scheduled",10511053 q->handler->query_bit);
+1
drivers/acpi/pci_irq.c
···372372373373 /* Interrupt Line values above 0xF are forbidden */374374 if (dev->irq > 0 && (dev->irq <= 0xF) &&375375+ acpi_isa_irq_available(dev->irq) &&375376 (acpi_isa_irq_to_gsi(dev->irq, &dev_gsi) == 0)) {376377 dev_warn(&dev->dev, "PCI INT %c: no GSI - using ISA IRQ %d\n",377378 pin_name(dev->pin), dev->irq);
+14-2
drivers/acpi/pci_link.c
···498498 PIRQ_PENALTY_PCI_POSSIBLE;499499 }500500 }501501- /* Add a penalty for the SCI */502502- acpi_irq_penalty[acpi_gbl_FADT.sci_interrupt] += PIRQ_PENALTY_PCI_USING;501501+503502 return 0;504503}505504···551552 acpi_irq_penalty[link->irq.possible[i]])552553 irq = link->irq.possible[i];553554 }555555+ }556556+ if (acpi_irq_penalty[irq] >= PIRQ_PENALTY_ISA_ALWAYS) {557557+ printk(KERN_ERR PREFIX "No IRQ available for %s [%s]. "558558+ "Try pci=noacpi or acpi=off\n",559559+ acpi_device_name(link->device),560560+ acpi_device_bid(link->device));561561+ return -ENODEV;554562 }555563556564 /* Attempt to enable the link device at this IRQ. */···825819 else826820 acpi_irq_penalty[irq] += PIRQ_PENALTY_PCI_USING;827821 }822822+}823823+824824+bool acpi_isa_irq_available(int irq)825825+{826826+ return irq >= 0 && (irq >= ARRAY_SIZE(acpi_irq_penalty) ||827827+ acpi_irq_penalty[irq] < PIRQ_PENALTY_ISA_ALWAYS);828828}829829830830/*
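The new `acpi_isa_irq_available()` treats an IRQ beyond the penalty table as available (nothing is recorded against it) and anything penalized at or above `PIRQ_PENALTY_ISA_ALWAYS` as unusable. A standalone sketch of that predicate; the table size and threshold value below are illustrative, not the kernel's:

```c
/* Illustrative penalty table and threshold; the kernel's values differ. */
#define PIRQ_PENALTY_ISA_ALWAYS 0x10000
#define PENALTY_TABLE_SIZE 16

static int irq_penalty[PENALTY_TABLE_SIZE];

/* Available: non-negative, and either untracked (past the end of the
 * table) or penalized below the "never use" threshold. */
static int isa_irq_available(int irq)
{
	return irq >= 0 && (irq >= PENALTY_TABLE_SIZE ||
			    irq_penalty[irq] < PIRQ_PENALTY_ISA_ALWAYS);
}
```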
+12-5
drivers/base/power/opp.c
···892892 u32 microvolt[3] = {0};893893 int count, ret;894894895895- count = of_property_count_u32_elems(opp->np, "opp-microvolt");896896- if (!count)895895+ /* Missing property isn't a problem, but an invalid entry is */896896+ if (!of_find_property(opp->np, "opp-microvolt", NULL))897897 return 0;898898+899899+ count = of_property_count_u32_elems(opp->np, "opp-microvolt");900900+ if (count < 0) {901901+ dev_err(dev, "%s: Invalid opp-microvolt property (%d)\n",902902+ __func__, count);903903+ return count;904904+ }898905899906 /* There can be one or three elements here */900907 if (count != 1 && count != 3) {···10701063 * share a common logic which is isolated here.10711064 *10721065 * Return: -EINVAL for bad pointers, -ENOMEM if no memory available for the10731073- * copy operation, returns 0 if no modifcation was done OR modification was10661066+ * copy operation, returns 0 if no modification was done OR modification was10741067 * successful.10751068 *10761069 * Locking: The internal device_opp and opp structures are RCU protected.···11581151 * mutex locking or synchronize_rcu() blocking calls cannot be used.11591152 *11601153 * Return: -EINVAL for bad pointers, -ENOMEM if no memory available for the11611161- * copy operation, returns 0 if no modifcation was done OR modification was11541154+ * copy operation, returns 0 if no modification was done OR modification was11621155 * successful.11631156 */11641157int dev_pm_opp_enable(struct device *dev, unsigned long freq)···11841177 * mutex locking or synchronize_rcu() blocking calls cannot be used.11851178 *11861179 * Return: -EINVAL for bad pointers, -ENOMEM if no memory available for the11871187- * copy operation, returns 0 if no modifcation was done OR modification was11801180+ * copy operation, returns 0 if no modification was done OR modification was11881181 * successful.11891182 */11901183int dev_pm_opp_disable(struct device *dev, unsigned long freq)
+5-6
drivers/block/loop.c
···14861486{14871487 const bool write = cmd->rq->cmd_flags & REQ_WRITE;14881488 struct loop_device *lo = cmd->rq->q->queuedata;14891489- int ret = -EIO;14891489+ int ret = 0;1490149014911491- if (write && (lo->lo_flags & LO_FLAGS_READ_ONLY))14911491+ if (write && (lo->lo_flags & LO_FLAGS_READ_ONLY)) {14921492+ ret = -EIO;14921493 goto failed;14941494+ }1493149514941496 ret = do_req_filebacked(lo, cmd->rq);14951495-14961497 failed:14971497- if (ret)14981498- cmd->rq->errors = -EIO;14991499- blk_mq_complete_request(cmd->rq);14981498+ blk_mq_complete_request(cmd->rq, ret ? -EIO : 0);15001499}1501150015021501static void loop_queue_write_work(struct work_struct *work)
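The loop.c rework gives the function one exit path: `ret` starts at 0, the read-only write case bails through the `failed` label, and the completion call maps any failure to `-EIO` exactly once. The control-flow shape, reduced to a hypothetical userspace stand-in:

```c
/* Hypothetical stand-ins for the request path: ret starts at 0, the
 * read-only check jumps to a single exit, and completion receives the
 * final status (any failure collapsed to -EIO), as in the hunk. */
#define EIO 5

static int last_completion;

static void complete_request(int status)
{
	last_completion = status;
}

static void handle_cmd(int is_write, int read_only, int io_result)
{
	int ret = 0;

	if (is_write && read_only) {
		ret = -EIO;
		goto failed;
	}
	ret = io_result;
failed:
	complete_request(ret ? -EIO : 0);
}
```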
+1-1
drivers/block/null_blk.c
···289289 case NULL_IRQ_SOFTIRQ:290290 switch (queue_mode) {291291 case NULL_Q_MQ:292292- blk_mq_complete_request(cmd->rq);292292+ blk_mq_complete_request(cmd->rq, cmd->rq->errors);293293 break;294294 case NULL_Q_RQ:295295 blk_complete_request(cmd->rq);
+24-28
drivers/block/nvme-core.c
···618618 spin_unlock_irqrestore(req->q->queue_lock, flags);619619 return;620620 }621621+621622 if (req->cmd_type == REQ_TYPE_DRV_PRIV) {622623 if (cmd_rq->ctx == CMD_CTX_CANCELLED)623623- req->errors = -EINTR;624624- else625625- req->errors = status;624624+ status = -EINTR;626625 } else {627627- req->errors = nvme_error_status(status);626626+ status = nvme_error_status(status);628627 }629629- } else630630- req->errors = 0;628628+ }629629+631630 if (req->cmd_type == REQ_TYPE_DRV_PRIV) {632631 u32 result = le32_to_cpup(&cqe->result);633632 req->special = (void *)(uintptr_t)result;···649650 }650651 nvme_free_iod(nvmeq->dev, iod);651652652652- blk_mq_complete_request(req);653653+ blk_mq_complete_request(req, status);653654}654655655656/* length is in bytes. gfp flags indicates whether we may sleep. */···862863 if (ns && ns->ms && !blk_integrity_rq(req)) {863864 if (!(ns->pi_type && ns->ms == 8) &&864865 req->cmd_type != REQ_TYPE_DRV_PRIV) {865865- req->errors = -EFAULT;866866- blk_mq_complete_request(req);866866+ blk_mq_complete_request(req, -EFAULT);867867 return BLK_MQ_RQ_QUEUE_OK;868868 }869869 }···24372439 list_sort(NULL, &dev->namespaces, ns_cmp);24382440}2439244124422442+static void nvme_set_irq_hints(struct nvme_dev *dev)24432443+{24442444+ struct nvme_queue *nvmeq;24452445+ int i;24462446+24472447+ for (i = 0; i < dev->online_queues; i++) {24482448+ nvmeq = dev->queues[i];24492449+24502450+ if (!nvmeq->tags || !(*nvmeq->tags))24512451+ continue;24522452+24532453+ irq_set_affinity_hint(dev->entry[nvmeq->cq_vector].vector,24542454+ blk_mq_tags_cpumask(*nvmeq->tags));24552455+ }24562456+}24572457+24402458static void nvme_dev_scan(struct work_struct *work)24412459{24422460 struct nvme_dev *dev = container_of(work, struct nvme_dev, scan_work);···24642450 return;24652451 nvme_scan_namespaces(dev, le32_to_cpup(&ctrl->nn));24662452 kfree(ctrl);24532453+ nvme_set_irq_hints(dev);24672454}2468245524692456/*···29682953 .compat_ioctl = nvme_dev_ioctl,29692954};2970295529712971-static void nvme_set_irq_hints(struct nvme_dev *dev)29722972-{29732973- struct nvme_queue *nvmeq;29742974- int i;29752975-29762976- for (i = 0; i < dev->online_queues; i++) {29772977- nvmeq = dev->queues[i];29782978-29792979- if (!nvmeq->tags || !(*nvmeq->tags))29802980- continue;29812981-29822982- irq_set_affinity_hint(dev->entry[nvmeq->cq_vector].vector,29832983- blk_mq_tags_cpumask(*nvmeq->tags));29842984- }29852985-}29862986-29872956static int nvme_dev_start(struct nvme_dev *dev)29882957{29892958 int result;···30083009 result = nvme_setup_io_queues(dev);30093010 if (result)30103011 goto free_tags;30113011-30123012- nvme_set_irq_hints(dev);3013301230143013 dev->event_limit = 1;30153014 return result;···30593062 } else {30603063 nvme_unfreeze_queues(dev);30613064 nvme_dev_add(dev);30623062- nvme_set_irq_hints(dev);30633065 }30643066 return 0;30653067}
+1-1
drivers/block/virtio_blk.c
···144144 do {145145 virtqueue_disable_cb(vq);146146 while ((vbr = virtqueue_get_buf(vblk->vqs[qid].vq, &len)) != NULL) {147147- blk_mq_complete_request(vbr->req);147147+ blk_mq_complete_request(vbr->req, vbr->req->errors);148148 req_done = true;149149 }150150 if (unlikely(virtqueue_is_broken(vq)))
+21-19
drivers/block/xen-blkback/xenbus.c
···212212213213static int xen_blkif_disconnect(struct xen_blkif *blkif)214214{215215+ struct pending_req *req, *n;216216+ int i = 0, j;217217+215218 if (blkif->xenblkd) {216219 kthread_stop(blkif->xenblkd);217220 wake_up(&blkif->shutdown_wq);···241238 /* Remove all persistent grants and the cache of ballooned pages. */242239 xen_blkbk_free_caches(blkif);243240244244- return 0;245245-}246246-247247-static void xen_blkif_free(struct xen_blkif *blkif)248248-{249249- struct pending_req *req, *n;250250- int i = 0, j;251251-252252- xen_blkif_disconnect(blkif);253253- xen_vbd_free(&blkif->vbd);254254-255255- /* Make sure everything is drained before shutting down */256256- BUG_ON(blkif->persistent_gnt_c != 0);257257- BUG_ON(atomic_read(&blkif->persistent_gnt_in_use) != 0);258258- BUG_ON(blkif->free_pages_num != 0);259259- BUG_ON(!list_empty(&blkif->persistent_purge_list));260260- BUG_ON(!list_empty(&blkif->free_pages));261261- BUG_ON(!RB_EMPTY_ROOT(&blkif->persistent_gnts));262262-263241 /* Check that there is no request in use */264242 list_for_each_entry_safe(req, n, &blkif->pending_free, free_list) {265243 list_del(&req->free_list);···256272 }257273258274 WARN_ON(i != (XEN_BLKIF_REQS_PER_PAGE * blkif->nr_ring_pages));275275+ blkif->nr_ring_pages = 0;276276+277277+ return 0;278278+}279279+280280+static void xen_blkif_free(struct xen_blkif *blkif)281281+{282282+283283+ xen_blkif_disconnect(blkif);284284+ xen_vbd_free(&blkif->vbd);285285+286286+ /* Make sure everything is drained before shutting down */287287+ BUG_ON(blkif->persistent_gnt_c != 0);288288+ BUG_ON(atomic_read(&blkif->persistent_gnt_in_use) != 0);289289+ BUG_ON(blkif->free_pages_num != 0);290290+ BUG_ON(!list_empty(&blkif->persistent_purge_list));291291+ BUG_ON(!list_empty(&blkif->free_pages));292292+ BUG_ON(!RB_EMPTY_ROOT(&blkif->persistent_gnts));259293260294 kmem_cache_free(xen_blkif_cachep, blkif);261295}
+10-9
drivers/block/xen-blkfront.c
···11421142 RING_IDX i, rp;11431143 unsigned long flags;11441144 struct blkfront_info *info = (struct blkfront_info *)dev_id;11451145+ int error;1145114611461147 spin_lock_irqsave(&info->io_lock, flags);11471148···11831182 continue;11841183 }1185118411861186- req->errors = (bret->status == BLKIF_RSP_OKAY) ? 0 : -EIO;11851185+ error = (bret->status == BLKIF_RSP_OKAY) ? 0 : -EIO;11871186 switch (bret->operation) {11881187 case BLKIF_OP_DISCARD:11891188 if (unlikely(bret->status == BLKIF_RSP_EOPNOTSUPP)) {11901189 struct request_queue *rq = info->rq;11911190 printk(KERN_WARNING "blkfront: %s: %s op failed\n",11921191 info->gd->disk_name, op_name(bret->operation));11931193- req->errors = -EOPNOTSUPP;11921192+ error = -EOPNOTSUPP;11941193 info->feature_discard = 0;11951194 info->feature_secdiscard = 0;11961195 queue_flag_clear(QUEUE_FLAG_DISCARD, rq);11971196 queue_flag_clear(QUEUE_FLAG_SECDISCARD, rq);11981197 }11991199- blk_mq_complete_request(req);11981198+ blk_mq_complete_request(req, error);12001199 break;12011200 case BLKIF_OP_FLUSH_DISKCACHE:12021201 case BLKIF_OP_WRITE_BARRIER:12031202 if (unlikely(bret->status == BLKIF_RSP_EOPNOTSUPP)) {12041203 printk(KERN_WARNING "blkfront: %s: %s op failed\n",12051204 info->gd->disk_name, op_name(bret->operation));12061206- req->errors = -EOPNOTSUPP;12051205+ error = -EOPNOTSUPP;12071206 }12081207 if (unlikely(bret->status == BLKIF_RSP_ERROR &&12091208 info->shadow[id].req.u.rw.nr_segments == 0)) {12101209 printk(KERN_WARNING "blkfront: %s: empty %s op failed\n",12111210 info->gd->disk_name, op_name(bret->operation));12121212- req->errors = -EOPNOTSUPP;12111211+ error = -EOPNOTSUPP;12131212 }12141214- if (unlikely(req->errors)) {12151215- if (req->errors == -EOPNOTSUPP)12161216- req->errors = 0;12131213+ if (unlikely(error)) {12141214+ if (error == -EOPNOTSUPP)12151215+ error = 0;12171216 info->feature_flush = 0;12181217 xlvbd_flush(info);12191218 }···12241223 dev_dbg(&info->xbdev->dev, "Bad return from blkdev data "12251224 
"request: %x\n", bret->status);1226122512271227- blk_mq_complete_request(req);12261226+ blk_mq_complete_request(req, error);12281227 break;12291228 default:12301229 BUG();
+1-1
drivers/clocksource/rockchip_timer.c
···148148 bc_timer.freq = clk_get_rate(timer_clk);149149150150 irq = irq_of_parse_and_map(np, 0);151151- if (irq == NO_IRQ) {151151+ if (!irq) {152152 pr_err("Failed to map interrupts for '%s'\n", TIMER_NAME);153153 return;154154 }
+1-1
drivers/clocksource/timer-keystone.c
···152152 int irq, error;153153154154 irq = irq_of_parse_and_map(np, 0);155155- if (irq == NO_IRQ) {155155+ if (!irq) {156156 pr_err("%s: failed to map interrupts\n", __func__);157157 return;158158 }
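Both clocksource hunks drop the `NO_IRQ` comparison: `irq_of_parse_and_map()` returns 0 on failure, while `NO_IRQ` is architecture-defined (and not 0 everywhere), so `!irq` is the portable test. As a one-line predicate:

```c
/* irq_of_parse_and_map() signals failure with 0, so validity is simply a
 * non-zero check; comparing against NO_IRQ is not portable across arches. */
static int irq_mapping_succeeded(unsigned int irq)
{
	return irq != 0;
}
```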
···5959#define XGENE_DMA_RING_MEM_RAM_SHUTDOWN 0xD0706060#define XGENE_DMA_RING_BLK_MEM_RDY 0xD0746161#define XGENE_DMA_RING_BLK_MEM_RDY_VAL 0xFFFFFFFF6262-#define XGENE_DMA_RING_DESC_CNT(v) (((v) & 0x0001FFFE) >> 1)6362#define XGENE_DMA_RING_ID_GET(owner, num) (((owner) << 6) | (num))6463#define XGENE_DMA_RING_DST_ID(v) ((1 << 10) | (v))6564#define XGENE_DMA_RING_CMD_OFFSET 0x2C···378379 return flyby_type[src_cnt];379380}380381381381-static u32 xgene_dma_ring_desc_cnt(struct xgene_dma_ring *ring)382382-{383383- u32 __iomem *cmd_base = ring->cmd_base;384384- u32 ring_state = ioread32(&cmd_base[1]);385385-386386- return XGENE_DMA_RING_DESC_CNT(ring_state);387387-}388388-389382static void xgene_dma_set_src_buffer(__le64 *ext8, size_t *len,390383 dma_addr_t *paddr)391384{···650659 dma_pool_free(chan->desc_pool, desc, desc->tx.phys);651660}652661653653-static int xgene_chan_xfer_request(struct xgene_dma_ring *ring,654654- struct xgene_dma_desc_sw *desc_sw)662662+static void xgene_chan_xfer_request(struct xgene_dma_chan *chan,663663+ struct xgene_dma_desc_sw *desc_sw)655664{665665+ struct xgene_dma_ring *ring = &chan->tx_ring;656666 struct xgene_dma_desc_hw *desc_hw;657657-658658- /* Check if can push more descriptor to hw for execution */659659- if (xgene_dma_ring_desc_cnt(ring) > (ring->slots - 2))660660- return -EBUSY;661667662668 /* Get hw descriptor from DMA tx ring */663669 desc_hw = &ring->desc_hw[ring->head];···682694 memcpy(desc_hw, &desc_sw->desc2, sizeof(*desc_hw));683695 }684696697697+ /* Increment the pending transaction count */698698+ chan->pending += ((desc_sw->flags &699699+ XGENE_DMA_FLAG_64B_DESC) ? 
2 : 1);700700+685701 /* Notify the hw that we have descriptor ready for execution */686702 iowrite32((desc_sw->flags & XGENE_DMA_FLAG_64B_DESC) ?687703 2 : 1, ring->cmd);688688-689689- return 0;690704}691705692706/**···700710static void xgene_chan_xfer_ld_pending(struct xgene_dma_chan *chan)701711{702712 struct xgene_dma_desc_sw *desc_sw, *_desc_sw;703703- int ret;704713705714 /*706715 * If the list of pending descriptors is empty, then we···724735 if (chan->pending >= chan->max_outstanding)725736 return;726737727727- ret = xgene_chan_xfer_request(&chan->tx_ring, desc_sw);728728- if (ret)729729- return;738738+ xgene_chan_xfer_request(chan, desc_sw);730739731740 /*732741 * Delete this element from ld pending queue and append it to733742 * ld running queue734743 */735744 list_move_tail(&desc_sw->node, &chan->ld_running);736736-737737- /* Increment the pending transaction count */738738- chan->pending++;739745 }740746}741747···805821 * Decrement the pending transaction count806822 * as we have processed one807823 */808808- chan->pending--;824824+ chan->pending -= ((desc_sw->flags &825825+ XGENE_DMA_FLAG_64B_DESC) ? 
2 : 1);809826810827 /*811828 * Delete this node from ld running queue and append it to···14061421 struct xgene_dma_ring *ring,14071422 enum xgene_dma_ring_cfgsize cfgsize)14081423{14241424+ int ret;14251425+14091426 /* Setup DMA ring descriptor variables */14101427 ring->pdma = chan->pdma;14111428 ring->cfgsize = cfgsize;14121429 ring->num = chan->pdma->ring_num++;14131430 ring->id = XGENE_DMA_RING_ID_GET(ring->owner, ring->buf_num);1414143114151415- ring->size = xgene_dma_get_ring_size(chan, cfgsize);14161416- if (ring->size <= 0)14171417- return ring->size;14321432+ ret = xgene_dma_get_ring_size(chan, cfgsize);14331433+ if (ret <= 0)14341434+ return ret;14351435+ ring->size = ret;1418143614191437 /* Allocate memory for DMA ring descriptor */14201438 ring->desc_vaddr = dma_zalloc_coherent(chan->dev, ring->size,···14701482 tx_ring->id, tx_ring->num, tx_ring->desc_vaddr);1471148314721484 /* Set the max outstanding request possible to this channel */14731473- chan->max_outstanding = rx_ring->slots;14851485+ chan->max_outstanding = tx_ring->slots;1474148614751487 return ret;14761488}
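The xgene-dma fix accounts outstanding work in ring slots rather than transactions: a 64-byte descriptor occupies two slots, so the increment on submit and the decrement on completion must use the same expression (and `max_outstanding` must come from the tx ring it guards). The shared expression as a helper; the flag value below is illustrative:

```c
/* A 64 B descriptor takes two ring slots; everything else takes one.
 * The flag value is illustrative, not the driver's actual bit. */
#define XGENE_DMA_FLAG_64B_DESC 0x1

static int desc_slot_count(unsigned int flags)
{
	return (flags & XGENE_DMA_FLAG_64B_DESC) ? 2 : 1;
}
```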
+1-1
drivers/dma/zx296702_dma.c
···739739 struct dma_chan *chan;740740 struct zx_dma_chan *c;741741742742- if (request > d->dma_requests)742742+ if (request >= d->dma_requests)743743 return NULL;744744745745 chan = dma_get_any_slave_channel(&d->slave);
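The one-character zx296702 change is a classic off-by-one fix: a request id indexes a table of `d->dma_requests` entries, so the last valid id is `dma_requests - 1` and `request == dma_requests` must be rejected:

```c
/* Valid ids are 0 .. count-1; the fixed check rejects id == count. */
static int request_id_valid(unsigned int request, unsigned int count)
{
	return request < count;	/* i.e. !(request >= count) */
}
```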
+72-14
drivers/firmware/efi/libstub/arm-stub.c
···1313 */14141515#include <linux/efi.h>1616+#include <linux/sort.h>1617#include <asm/efi.h>17181819#include "efistub.h"···306305 */307306#define EFI_RT_VIRTUAL_BASE 0x40000000308307308308+static int cmp_mem_desc(const void *l, const void *r)309309+{310310+ const efi_memory_desc_t *left = l, *right = r;311311+312312+ return (left->phys_addr > right->phys_addr) ? 1 : -1;313313+}314314+315315+/*316316+ * Returns whether region @left ends exactly where region @right starts,317317+ * or false if either argument is NULL.318318+ */319319+static bool regions_are_adjacent(efi_memory_desc_t *left,320320+ efi_memory_desc_t *right)321321+{322322+ u64 left_end;323323+324324+ if (left == NULL || right == NULL)325325+ return false;326326+327327+ left_end = left->phys_addr + left->num_pages * EFI_PAGE_SIZE;328328+329329+ return left_end == right->phys_addr;330330+}331331+332332+/*333333+ * Returns whether region @left and region @right have compatible memory type334334+ * mapping attributes, and are both EFI_MEMORY_RUNTIME regions.335335+ */336336+static bool regions_have_compatible_memory_type_attrs(efi_memory_desc_t *left,337337+ efi_memory_desc_t *right)338338+{339339+ static const u64 mem_type_mask = EFI_MEMORY_WB | EFI_MEMORY_WT |340340+ EFI_MEMORY_WC | EFI_MEMORY_UC |341341+ EFI_MEMORY_RUNTIME;342342+343343+ return ((left->attribute ^ right->attribute) & mem_type_mask) == 0;344344+}345345+309346/*310347 * efi_get_virtmap() - create a virtual mapping for the EFI memory map311348 *···356317 int *count)357318{358319 u64 efi_virt_base = EFI_RT_VIRTUAL_BASE;359359- efi_memory_desc_t *out = runtime_map;320320+ efi_memory_desc_t *in, *prev = NULL, *out = runtime_map;360321 int l;361322362362- for (l = 0; l < map_size; l += desc_size) {363363- efi_memory_desc_t *in = (void *)memory_map + l;323323+ /*324324+ * To work around potential issues with the Properties Table feature325325+ * introduced in UEFI 2.5, which may split PE/COFF executable images326326+ * in memory into several 
RuntimeServicesCode and RuntimeServicesData327327+ * regions, we need to preserve the relative offsets between adjacent328328+ * EFI_MEMORY_RUNTIME regions with the same memory type attributes.329329+ * The easiest way to find adjacent regions is to sort the memory map330330+ * before traversing it.331331+ */332332+ sort(memory_map, map_size / desc_size, desc_size, cmp_mem_desc, NULL);333333+334334+ for (l = 0; l < map_size; l += desc_size, prev = in) {364335 u64 paddr, size;365336337337+ in = (void *)memory_map + l;366338 if (!(in->attribute & EFI_MEMORY_RUNTIME))367339 continue;340340+341341+ paddr = in->phys_addr;342342+ size = in->num_pages * EFI_PAGE_SIZE;368343369344 /*370345 * Make the mapping compatible with 64k pages: this allows371346 * a 4k page size kernel to kexec a 64k page size kernel and372347 * vice versa.373348 */374374- paddr = round_down(in->phys_addr, SZ_64K);375375- size = round_up(in->num_pages * EFI_PAGE_SIZE +376376- in->phys_addr - paddr, SZ_64K);349349+ if (!regions_are_adjacent(prev, in) ||350350+ !regions_have_compatible_memory_type_attrs(prev, in)) {377351378378- /*379379- * Avoid wasting memory on PTEs by choosing a virtual base that380380- * is compatible with section mappings if this region has the381381- * appropriate size and physical alignment. (Sections are 2 MB382382- * on 4k granule kernels)383383- */384384- if (IS_ALIGNED(in->phys_addr, SZ_2M) && size >= SZ_2M)385385- efi_virt_base = round_up(efi_virt_base, SZ_2M);352352+ paddr = round_down(in->phys_addr, SZ_64K);353353+ size += in->phys_addr - paddr;354354+355355+ /*356356+ * Avoid wasting memory on PTEs by choosing a virtual357357+ * base that is compatible with section mappings if this358358+ * region has the appropriate size and physical359359+ * alignment. 
(Sections are 2 MB on 4k granule kernels)360360+ */361361+ if (IS_ALIGNED(in->phys_addr, SZ_2M) && size >= SZ_2M)362362+ efi_virt_base = round_up(efi_virt_base, SZ_2M);363363+ else364364+ efi_virt_base = round_up(efi_virt_base, SZ_64K);365365+ }386366387367 in->virt_addr = efi_virt_base + in->phys_addr - paddr;388368 efi_virt_base += size;
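The arm-stub change sorts the memory map by physical address and only restarts the 64 KiB rounding when two runtime regions are not adjacent (or have incompatible memory-type attributes), preserving relative offsets within a PE/COFF image that the UEFI 2.5 Properties Table may have split. The sort-then-check-adjacency core, reduced to a userspace sketch with a simplified descriptor:

```c
#include <stdlib.h>

/* Reduced region descriptor; the real efi_memory_desc_t has more fields. */
struct region {
	unsigned long long start;
	unsigned long long size;
};

static int cmp_region(const void *l, const void *r)
{
	const struct region *a = l, *b = r;

	/* Mirrors the patch's cmp_mem_desc(): never returns 0. */
	return (a->start > b->start) ? 1 : -1;
}

/* Mirrors regions_are_adjacent(): @left ends exactly where @right starts. */
static int regions_adjacent(const struct region *left,
			    const struct region *right)
{
	if (!left || !right)
		return 0;
	return left->start + left->size == right->start;
}

static void sort_regions(struct region *regs, int n)
{
	qsort(regs, n, sizeof(*regs), cmp_region);
}
```

Sorting first makes "previous entry" and "physically preceding region" the same thing, which is what lets a single linear pass detect adjacency.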
-39
drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c
···208208 return ret;209209}210210211211-static int amdgpu_cgs_import_gpu_mem(void *cgs_device, int dmabuf_fd,212212- cgs_handle_t *handle)213213-{214214- CGS_FUNC_ADEV;215215- int r;216216- uint32_t dma_handle;217217- struct drm_gem_object *obj;218218- struct amdgpu_bo *bo;219219- struct drm_device *dev = adev->ddev;220220- struct drm_file *file_priv = NULL, *priv;221221-222222- mutex_lock(&dev->struct_mutex);223223- list_for_each_entry(priv, &dev->filelist, lhead) {224224- rcu_read_lock();225225- if (priv->pid == get_pid(task_pid(current)))226226- file_priv = priv;227227- rcu_read_unlock();228228- if (file_priv)229229- break;230230- }231231- mutex_unlock(&dev->struct_mutex);232232- r = dev->driver->prime_fd_to_handle(dev,233233- file_priv, dmabuf_fd,234234- &dma_handle);235235- spin_lock(&file_priv->table_lock);236236-237237- /* Check if we currently have a reference on the object */238238- obj = idr_find(&file_priv->object_idr, dma_handle);239239- if (obj == NULL) {240240- spin_unlock(&file_priv->table_lock);241241- return -EINVAL;242242- }243243- spin_unlock(&file_priv->table_lock);244244- bo = gem_to_amdgpu_bo(obj);245245- *handle = (cgs_handle_t)bo;246246- return 0;247247-}248248-249211static int amdgpu_cgs_free_gpu_mem(void *cgs_device, cgs_handle_t handle)250212{251213 struct amdgpu_bo *obj = (struct amdgpu_bo *)handle;···772810};773811774812static const struct cgs_os_ops amdgpu_cgs_os_ops = {775775- amdgpu_cgs_import_gpu_mem,776813 amdgpu_cgs_add_irq_source,777814 amdgpu_cgs_irq_get,778815 amdgpu_cgs_irq_put
+2-1
drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
···156156 uint64_t *chunk_array_user;157157 uint64_t *chunk_array;158158 struct amdgpu_fpriv *fpriv = p->filp->driver_priv;159159- unsigned size, i;159159+ unsigned size;160160+ int i;160161 int ret;161162162163 if (cs->in.num_chunks == 0)
···12621262 addr = RREG32(mmVM_CONTEXT1_PROTECTION_FAULT_ADDR);12631263 status = RREG32(mmVM_CONTEXT1_PROTECTION_FAULT_STATUS);12641264 mc_client = RREG32(mmVM_CONTEXT1_PROTECTION_FAULT_MCCLIENT);12651265+ /* reset addr and status */12661266+ WREG32_P(mmVM_CONTEXT1_CNTL2, 1, ~1);12671267+12681268+ if (!addr && !status)12691269+ return 0;12701270+12651271 dev_err(adev->dev, "GPU fault detected: %d 0x%08x\n",12661272 entry->src_id, entry->src_data);12671273 dev_err(adev->dev, " VM_CONTEXT1_PROTECTION_FAULT_ADDR 0x%08X\n",···12751269 dev_err(adev->dev, " VM_CONTEXT1_PROTECTION_FAULT_STATUS 0x%08X\n",12761270 status);12771271 gmc_v7_0_vm_decode_fault(adev, status, addr, mc_client);12781278- /* reset addr and status */12791279- WREG32_P(mmVM_CONTEXT1_CNTL2, 1, ~1);1280127212811273 return 0;12821274}
+6-2
drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
···12621262 addr = RREG32(mmVM_CONTEXT1_PROTECTION_FAULT_ADDR);12631263 status = RREG32(mmVM_CONTEXT1_PROTECTION_FAULT_STATUS);12641264 mc_client = RREG32(mmVM_CONTEXT1_PROTECTION_FAULT_MCCLIENT);12651265+ /* reset addr and status */12661266+ WREG32_P(mmVM_CONTEXT1_CNTL2, 1, ~1);12671267+12681268+ if (!addr && !status)12691269+ return 0;12701270+12651271 dev_err(adev->dev, "GPU fault detected: %d 0x%08x\n",12661272 entry->src_id, entry->src_data);12671273 dev_err(adev->dev, " VM_CONTEXT1_PROTECTION_FAULT_ADDR 0x%08X\n",···12751269 dev_err(adev->dev, " VM_CONTEXT1_PROTECTION_FAULT_STATUS 0x%08X\n",12761270 status);12771271 gmc_v8_0_vm_decode_fault(adev, status, addr, mc_client);12781278- /* reset addr and status */12791279- WREG32_P(mmVM_CONTEXT1_CNTL2, 1, ~1);1280127212811273 return 0;12821274}
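Both gmc hunks move the register reset ahead of the decode and bail out when the latched values are both zero, treating that as a spurious fault. Ack-early-then-filter, sketched over fake registers (struct and names hypothetical):

```c
/* Latch the fault registers, ack them immediately so a subsequent fault
 * isn't lost behind the current one, then drop spurious (all-zero) events. */
struct fault_regs {
	unsigned int addr;
	unsigned int status;
};

static int handle_vm_fault(struct fault_regs *hw,
			   unsigned int *addr, unsigned int *status)
{
	unsigned int a = hw->addr, s = hw->status;

	hw->addr = 0;		/* reset addr and status */
	hw->status = 0;

	if (!a && !s)
		return 0;	/* spurious: nothing to decode */

	*addr = a;
	*status = s;
	return 1;
}
```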
-17
drivers/gpu/drm/amd/include/cgs_linux.h
···2727#include "cgs_common.h"28282929/**3030- * cgs_import_gpu_mem() - Import dmabuf handle3131- * @cgs_device: opaque device handle3232- * @dmabuf_fd: DMABuf file descriptor3333- * @handle: memory handle (output)3434- *3535- * Must be called in the process context that dmabuf_fd belongs to.3636- *3737- * Return: 0 on success, -errno otherwise3838- */3939-typedef int (*cgs_import_gpu_mem_t)(void *cgs_device, int dmabuf_fd,4040- cgs_handle_t *handle);4141-4242-/**4330 * cgs_irq_source_set_func() - Callback for enabling/disabling interrupt sources4431 * @private_data: private data provided to cgs_add_irq_source4532 * @src_id: interrupt source ID···101114typedef int (*cgs_irq_put_t)(void *cgs_device, unsigned src_id, unsigned type);102115103116struct cgs_os_ops {104104- cgs_import_gpu_mem_t import_gpu_mem;105105-106117 /* IRQ handling */107118 cgs_add_irq_source_t add_irq_source;108119 cgs_irq_get_t irq_get;109120 cgs_irq_put_t irq_put;110121};111122112112-#define cgs_import_gpu_mem(dev,dmabuf_fd,handle) \113113- CGS_OS_CALL(import_gpu_mem,dev,dmabuf_fd,handle)114123#define cgs_add_irq_source(dev,src_id,num_types,set,handler,private_data) \115124 CGS_OS_CALL(add_irq_source,dev,src_id,num_types,set,handler, \116125 private_data)
+53-34
drivers/gpu/drm/drm_dp_mst_topology.c
···5353 struct drm_dp_mst_port *port,5454 int offset, int size, u8 *bytes);55555656-static int drm_dp_send_link_address(struct drm_dp_mst_topology_mgr *mgr,5757- struct drm_dp_mst_branch *mstb);5656+static void drm_dp_send_link_address(struct drm_dp_mst_topology_mgr *mgr,5757+ struct drm_dp_mst_branch *mstb);5858static int drm_dp_send_enum_path_resources(struct drm_dp_mst_topology_mgr *mgr,5959 struct drm_dp_mst_branch *mstb,6060 struct drm_dp_mst_port *port);···804804 struct drm_dp_mst_port *port, *tmp;805805 bool wake_tx = false;806806807807- cancel_work_sync(&mstb->mgr->work);808808-809807 /*810808 * destroy all ports - don't need lock811809 * as there are no more references to the mst branch···861863{862864 struct drm_dp_mst_port *port = container_of(kref, struct drm_dp_mst_port, kref);863865 struct drm_dp_mst_topology_mgr *mgr = port->mgr;866866+864867 if (!port->input) {865868 port->vcpi.num_slots = 0;866869867870 kfree(port->cached_edid);868871869869- /* we can't destroy the connector here, as870870- we might be holding the mode_config.mutex871871- from an EDID retrieval */872872+ /*873873+ * The only time we don't have a connector874874+ * on an output port is if the connector init875875+ * fails.876876+ */872877 if (port->connector) {878878+ /* we can't destroy the connector here, as879879+ * we might be holding the mode_config.mutex880880+ * from an EDID retrieval */881881+873882 mutex_lock(&mgr->destroy_connector_lock);874883 list_add(&port->next, &mgr->destroy_connector_list);875884 mutex_unlock(&mgr->destroy_connector_lock);876885 schedule_work(&mgr->destroy_connector_work);877886 return;878887 }888888+ /* no need to clean up vcpi889889+ * as if we have no connector we never setup a vcpi */879890 drm_dp_port_teardown_pdt(port, port->pdt);880880-881881- if (!port->input && port->vcpi.vcpi > 0)882882- drm_dp_mst_put_payload_id(mgr, port->vcpi.vcpi);883891 }884892 kfree(port);885885-886886- (*mgr->cbs->hotplug)(mgr);887893}888894889895static void 
drm_dp_put_port(struct drm_dp_mst_port *port)···10291027 }10301028}1031102910321032-static void build_mst_prop_path(struct drm_dp_mst_port *port,10331033- struct drm_dp_mst_branch *mstb,10301030+static void build_mst_prop_path(const struct drm_dp_mst_branch *mstb,10311031+ int pnum,10341032 char *proppath,10351033 size_t proppath_size)10361034{···10431041 snprintf(temp, sizeof(temp), "-%d", port_num);10441042 strlcat(proppath, temp, proppath_size);10451043 }10461046- snprintf(temp, sizeof(temp), "-%d", port->port_num);10441044+ snprintf(temp, sizeof(temp), "-%d", pnum);10471045 strlcat(proppath, temp, proppath_size);10481046}10491047···11071105 drm_dp_port_teardown_pdt(port, old_pdt);1108110611091107 ret = drm_dp_port_setup_pdt(port);11101110- if (ret == true) {11081108+ if (ret == true)11111109 drm_dp_send_link_address(mstb->mgr, port->mstb);11121112- port->mstb->link_address_sent = true;11131113- }11141110 }1115111111161112 if (created && !port->input) {11171113 char proppath[255];11181118- build_mst_prop_path(port, mstb, proppath, sizeof(proppath));11191119- port->connector = (*mstb->mgr->cbs->add_connector)(mstb->mgr, port, proppath);1120111411211121- if (port->port_num >= 8) {11221122- port->cached_edid = drm_get_edid(port->connector, &port->aux.ddc);11151115+ build_mst_prop_path(mstb, port->port_num, proppath, sizeof(proppath));11161116+ port->connector = (*mstb->mgr->cbs->add_connector)(mstb->mgr, port, proppath);11171117+ if (!port->connector) {11181118+ /* remove it from the port list */11191119+ mutex_lock(&mstb->mgr->lock);11201120+ list_del(&port->next);11211121+ mutex_unlock(&mstb->mgr->lock);11221122+ /* drop port list reference */11231123+ drm_dp_put_port(port);11241124+ goto out;11231125 }11261126+ if (port->port_num >= DP_MST_LOGICAL_PORT_0) {11271127+ port->cached_edid = drm_get_edid(port->connector, &port->aux.ddc);11281128+ drm_mode_connector_set_tile_property(port->connector);11291129+ }11301130+ 
(*mstb->mgr->cbs->register_connector)(port->connector);11241131 }1125113211331133+out:11261134 /* put reference to this port */11271135 drm_dp_put_port(port);11281136}···12141202{12151203 struct drm_dp_mst_port *port;12161204 struct drm_dp_mst_branch *mstb_child;12171217- if (!mstb->link_address_sent) {12051205+ if (!mstb->link_address_sent)12181206 drm_dp_send_link_address(mgr, mstb);12191219- mstb->link_address_sent = true;12201220- }12071207+12211208 list_for_each_entry(port, &mstb->ports, next) {12221209 if (port->input)12231210 continue;···14691458 mutex_unlock(&mgr->qlock);14701459}1471146014721472-static int drm_dp_send_link_address(struct drm_dp_mst_topology_mgr *mgr,14731473- struct drm_dp_mst_branch *mstb)14611461+static void drm_dp_send_link_address(struct drm_dp_mst_topology_mgr *mgr,14621462+ struct drm_dp_mst_branch *mstb)14741463{14751464 int len;14761465 struct drm_dp_sideband_msg_tx *txmsg;···1478146714791468 txmsg = kzalloc(sizeof(*txmsg), GFP_KERNEL);14801469 if (!txmsg)14811481- return -ENOMEM;14701470+ return;1482147114831472 txmsg->dst = mstb;14841473 len = build_link_address(txmsg);1485147414751475+ mstb->link_address_sent = true;14861476 drm_dp_queue_down_tx(mgr, txmsg);1487147714881478 ret = drm_dp_mst_wait_tx_reply(mstb, txmsg);···15111499 }15121500 (*mgr->cbs->hotplug)(mgr);15131501 }15141514- } else15021502+ } else {15031503+ mstb->link_address_sent = false;15151504 DRM_DEBUG_KMS("link address failed %d\n", ret);15051505+ }1516150615171507 kfree(txmsg);15181518- return 0;15191508}1520150915211510static int drm_dp_send_enum_path_resources(struct drm_dp_mst_topology_mgr *mgr,···19911978 drm_dp_dpcd_writeb(mgr->aux, DP_MSTM_CTRL,19921979 DP_MST_EN | DP_UPSTREAM_IS_SRC);19931980 mutex_unlock(&mgr->lock);19811981+ flush_work(&mgr->work);19821982+ flush_work(&mgr->destroy_connector_work);19941983}19951984EXPORT_SYMBOL(drm_dp_mst_topology_mgr_suspend);19961985···2278226322792264 if (port->cached_edid)22802265 edid = 
drm_edid_duplicate(port->cached_edid);22812281- else22662266+ else {22822267 edid = drm_get_edid(connector, &port->aux.ddc);22832283-22842284- drm_mode_connector_set_tile_property(connector);22682268+ drm_mode_connector_set_tile_property(connector);22692269+ }22852270 drm_dp_put_port(port);22862271 return edid;22872272}···26862671{26872672 struct drm_dp_mst_topology_mgr *mgr = container_of(work, struct drm_dp_mst_topology_mgr, destroy_connector_work);26882673 struct drm_dp_mst_port *port;26892689-26742674+ bool send_hotplug = false;26902675 /*26912676 * Not a regular list traverse as we have to drop the destroy26922677 * connector lock before destroying the connector, to avoid AB->BA···27092694 if (!port->input && port->vcpi.vcpi > 0)27102695 drm_dp_mst_put_payload_id(mgr, port->vcpi.vcpi);27112696 kfree(port);26972697+ send_hotplug = true;27122698 }26992699+ if (send_hotplug)27002700+ (*mgr->cbs->hotplug)(mgr);27132701}2714270227152703/**···27652747 */27662748void drm_dp_mst_topology_mgr_destroy(struct drm_dp_mst_topology_mgr *mgr)27672749{27502750+ flush_work(&mgr->work);27682751 flush_work(&mgr->destroy_connector_work);27692752 mutex_lock(&mgr->payload_lock);27702753 kfree(mgr->payloads);
+5 -1
drivers/gpu/drm/drm_fb_helper.c
@@ -345 +345 @@
 		struct drm_crtc *crtc = mode_set->crtc;
 		int ret;
 
-		if (crtc->funcs->cursor_set) {
+		if (crtc->funcs->cursor_set2) {
+			ret = crtc->funcs->cursor_set2(crtc, NULL, 0, 0, 0, 0, 0);
+			if (ret)
+				error = true;
+		} else if (crtc->funcs->cursor_set) {
 			ret = crtc->funcs->cursor_set(crtc, NULL, 0, 0, 0);
 			if (ret)
 				error = true;
+16 -3
drivers/gpu/drm/drm_probe_helper.c
@@ -94 +94 @@
 }
 
 #define DRM_OUTPUT_POLL_PERIOD (10*HZ)
-static void __drm_kms_helper_poll_enable(struct drm_device *dev)
+/**
+ * drm_kms_helper_poll_enable_locked - re-enable output polling.
+ * @dev: drm_device
+ *
+ * This function re-enables the output polling work without
+ * locking the mode_config mutex.
+ *
+ * This is like drm_kms_helper_poll_enable() however it is to be
+ * called from a context where the mode_config mutex is locked
+ * already.
+ */
+void drm_kms_helper_poll_enable_locked(struct drm_device *dev)
 {
 	bool poll = false;
 	struct drm_connector *connector;
@@ -124 +113 @@
 	if (poll)
 		schedule_delayed_work(&dev->mode_config.output_poll_work, DRM_OUTPUT_POLL_PERIOD);
 }
+EXPORT_SYMBOL(drm_kms_helper_poll_enable_locked);
+
 
 static int drm_helper_probe_single_connector_modes_merge_bits(struct drm_connector *connector,
 							      uint32_t maxX, uint32_t maxY, bool merge_type_bits)
@@ -187 +174 @@
 
 	/* Re-enable polling in case the global poll config changed. */
 	if (drm_kms_helper_poll != dev->mode_config.poll_running)
-		__drm_kms_helper_poll_enable(dev);
+		drm_kms_helper_poll_enable_locked(dev);
 
 	dev->mode_config.poll_running = drm_kms_helper_poll;
 
@@ -441 +428 @@
 void drm_kms_helper_poll_enable(struct drm_device *dev)
 {
 	mutex_lock(&dev->mode_config.mutex);
-	__drm_kms_helper_poll_enable(dev);
+	drm_kms_helper_poll_enable_locked(dev);
 	mutex_unlock(&dev->mode_config.mutex);
 }
 EXPORT_SYMBOL(drm_kms_helper_poll_enable);
@@ -56 +56 @@
 	nr_pages = obj->size >> PAGE_SHIFT;
 
 	if (!is_drm_iommu_supported(dev)) {
-		dma_addr_t start_addr;
-		unsigned int i = 0;
-
 		obj->pages = drm_calloc_large(nr_pages, sizeof(struct page *));
 		if (!obj->pages) {
 			DRM_ERROR("failed to allocate pages.\n");
 			return -ENOMEM;
 		}
+	}
 
-	obj->cookie = dma_alloc_attrs(dev->dev,
-				obj->size,
-				&obj->dma_addr, GFP_KERNEL,
-				&obj->dma_attrs);
-	if (!obj->cookie) {
-		DRM_ERROR("failed to allocate buffer.\n");
+	obj->cookie = dma_alloc_attrs(dev->dev, obj->size, &obj->dma_addr,
+				      GFP_KERNEL, &obj->dma_attrs);
+	if (!obj->cookie) {
+		DRM_ERROR("failed to allocate buffer.\n");
+		if (obj->pages)
 			drm_free_large(obj->pages);
-		return -ENOMEM;
-	}
+		return -ENOMEM;
+	}
+
+	if (obj->pages) {
+		dma_addr_t start_addr;
+		unsigned int i = 0;
 
 		start_addr = obj->dma_addr;
 		while (i < nr_pages) {
-			obj->pages[i] = phys_to_page(start_addr);
+			obj->pages[i] = pfn_to_page(dma_to_pfn(dev->dev,
+							       start_addr));
 			start_addr += PAGE_SIZE;
 			i++;
 		}
 	} else {
-		obj->pages = dma_alloc_attrs(dev->dev, obj->size,
-				&obj->dma_addr, GFP_KERNEL,
-				&obj->dma_attrs);
-		if (!obj->pages) {
-			DRM_ERROR("failed to allocate buffer.\n");
-			return -ENOMEM;
-		}
+		obj->pages = obj->cookie;
 	}
 
 	DRM_DEBUG_KMS("dma_addr(0x%lx), size(0x%lx)\n",
@@ -106 +110 @@
 	DRM_DEBUG_KMS("dma_addr(0x%lx), size(0x%lx)\n",
 			(unsigned long)obj->dma_addr, obj->size);
 
-	if (!is_drm_iommu_supported(dev)) {
-		dma_free_attrs(dev->dev, obj->size, obj->cookie,
-				(dma_addr_t)obj->dma_addr, &obj->dma_attrs);
-		drm_free_large(obj->pages);
-	} else
-		dma_free_attrs(dev->dev, obj->size, obj->pages,
-				(dma_addr_t)obj->dma_addr, &obj->dma_attrs);
+	dma_free_attrs(dev->dev, obj->size, obj->cookie,
+			(dma_addr_t)obj->dma_addr, &obj->dma_attrs);
 
-	obj->dma_addr = (dma_addr_t)NULL;
+	if (!is_drm_iommu_supported(dev))
+		drm_free_large(obj->pages);
 }
 
 static int exynos_drm_gem_handle_create(struct drm_gem_object *obj,
@@ -148 +156 @@
 	 * once dmabuf's refcount becomes 0.
 	 */
 	if (obj->import_attach)
-		goto out;
-
-	exynos_drm_free_buf(exynos_gem_obj);
-
-out:
-	drm_gem_free_mmap_offset(obj);
+		drm_prime_gem_destroy(obj, exynos_gem_obj->sgt);
+	else
+		exynos_drm_free_buf(exynos_gem_obj);
 
 	/* release file pointer to gem object. */
 	drm_gem_object_release(obj);
 
 	kfree(exynos_gem_obj);
-	exynos_gem_obj = NULL;
 }
 
 unsigned long exynos_drm_gem_get_size(struct drm_device *dev,
@@ -178 +190 @@
 	return exynos_gem_obj->size;
 }
 
-
-struct exynos_drm_gem_obj *exynos_drm_gem_init(struct drm_device *dev,
+static struct exynos_drm_gem_obj *exynos_drm_gem_init(struct drm_device *dev,
 						      unsigned long size)
 {
 	struct exynos_drm_gem_obj *exynos_gem_obj;
@@ -195 +208 @@
 	ret = drm_gem_object_init(dev, obj, size);
 	if (ret < 0) {
 		DRM_ERROR("failed to initialize gem object\n");
+		kfree(exynos_gem_obj);
+		return ERR_PTR(ret);
+	}
+
+	ret = drm_gem_create_mmap_offset(obj);
+	if (ret < 0) {
+		drm_gem_object_release(obj);
 		kfree(exynos_gem_obj);
 		return ERR_PTR(ret);
 	}
@@ -307 +313 @@
 	drm_gem_object_unreference_unlocked(obj);
 }
 
-int exynos_drm_gem_mmap_buffer(struct exynos_drm_gem_obj *exynos_gem_obj,
+static int exynos_drm_gem_mmap_buffer(struct exynos_drm_gem_obj *exynos_gem_obj,
 				      struct vm_area_struct *vma)
 {
 	struct drm_device *drm_dev = exynos_gem_obj->base.dev;
@@ -336 +342 @@
 
 int exynos_drm_gem_get_ioctl(struct drm_device *dev, void *data,
 				      struct drm_file *file_priv)
-{	struct exynos_drm_gem_obj *exynos_gem_obj;
+{
+	struct exynos_drm_gem_obj *exynos_gem_obj;
 	struct drm_exynos_gem_info *args = data;
 	struct drm_gem_object *obj;
@@ -397 +402 @@
 				  struct drm_mode_create_dumb *args)
 {
 	struct exynos_drm_gem_obj *exynos_gem_obj;
+	unsigned int flags;
 	int ret;
 
 	/*
@@ -409 +413 @@
 	args->pitch = args->width * ((args->bpp + 7) / 8);
 	args->size = args->pitch * args->height;
 
-	if (is_drm_iommu_supported(dev)) {
-		exynos_gem_obj = exynos_drm_gem_create(dev,
-			EXYNOS_BO_NONCONTIG | EXYNOS_BO_WC,
-			args->size);
-	} else {
-		exynos_gem_obj = exynos_drm_gem_create(dev,
-			EXYNOS_BO_CONTIG | EXYNOS_BO_WC,
-			args->size);
-	}
+	if (is_drm_iommu_supported(dev))
+		flags = EXYNOS_BO_NONCONTIG | EXYNOS_BO_WC;
+	else
+		flags = EXYNOS_BO_CONTIG | EXYNOS_BO_WC;
 
+	exynos_gem_obj = exynos_drm_gem_create(dev, flags, args->size);
 	if (IS_ERR(exynos_gem_obj)) {
 		dev_warn(dev->dev, "FB allocation failed.\n");
 		return PTR_ERR(exynos_gem_obj);
@@ -452 +460 @@
 		goto unlock;
 	}
 
-	ret = drm_gem_create_mmap_offset(obj);
-	if (ret)
-		goto out;
-
 	*offset = drm_vma_node_offset_addr(&obj->vma_node);
 	DRM_DEBUG_KMS("offset = 0x%lx\n", (unsigned long)*offset);
 
-out:
 	drm_gem_object_unreference(obj);
 unlock:
 	mutex_unlock(&dev->struct_mutex);
@@ -530 +543 @@
 
 err_close_vm:
 	drm_gem_vm_close(vma);
-	drm_gem_free_mmap_offset(obj);
 
 	return ret;
 }
@@ -573 +587 @@
 							npages);
 	if (ret < 0)
 		goto err_free_large;
+
+	exynos_gem_obj->sgt = sgt;
 
 	if (sgt->nents == 1) {
 		/* always physically continuous memory if sgt->nents is 1. */
+2 -4
drivers/gpu/drm/exynos/exynos_drm_gem.h
@@ -39 +39 @@
  *	- this address could be physical address without IOMMU and
  *	device address with IOMMU.
  * @pages: Array of backing pages.
+ * @sgt: Imported sg_table.
  *
  * P.S. this object would be transferred to user as kms_bo.handle so
  *	user can access the buffer through kms_bo.handle.
@@ -53 +52 @@
 	dma_addr_t		dma_addr;
 	struct dma_attrs	dma_attrs;
 	struct page		**pages;
+	struct sg_table		*sgt;
 };
 
 struct page **exynos_gem_get_pages(struct drm_gem_object *obj, gfp_t gfpmask);
 
 /* destroy a buffer with gem object */
 void exynos_drm_gem_destroy(struct exynos_drm_gem_obj *exynos_gem_obj);
-
-/* create a private gem object and initialize it. */
-struct exynos_drm_gem_obj *exynos_drm_gem_init(struct drm_device *dev,
-						      unsigned long size);
 
 /* create a new buffer with gem object */
 struct exynos_drm_gem_obj *exynos_drm_gem_create(struct drm_device *dev,
+1 -1
drivers/gpu/drm/exynos/exynos_drm_rotator.c
@@ -786 +786 @@
 	return 0;
 }
 
+#ifdef CONFIG_PM
 static int rotator_clk_crtl(struct rot_context *rot, bool enable)
 {
 	if (enable) {
@@ -823 +822 @@
 }
 #endif
 
-#ifdef CONFIG_PM
 static int rotator_runtime_suspend(struct device *dev)
 {
 	struct rot_context *rot = dev_get_drvdata(dev);
@@ -1149 +1149 @@
 	unsigned long dt;
 	unsigned long flags;
 	int i;
+	LIST_HEAD(remove_list);
+	struct ipoib_mcast *mcast, *tmcast;
+	struct net_device *dev = priv->dev;
 
 	if (test_bit(IPOIB_STOP_NEIGH_GC, &priv->flags))
 		return;
@@ -1179 +1176 @@
 							    lockdep_is_held(&priv->lock))) != NULL) {
 			/* was the neigh idle for two GC periods */
 			if (time_after(neigh_obsolete, neigh->alive)) {
+				u8 *mgid = neigh->daddr + 4;
+
+				/* Is this multicast ? */
+				if (*mgid == 0xff) {
+					mcast = __ipoib_mcast_find(dev, mgid);
+
+					if (mcast && test_bit(IPOIB_MCAST_FLAG_SENDONLY, &mcast->flags)) {
+						list_del(&mcast->list);
+						rb_erase(&mcast->rb_node, &priv->multicast_tree);
+						list_add_tail(&mcast->list, &remove_list);
+					}
+				}
+
 				rcu_assign_pointer(*np,
 						   rcu_dereference_protected(neigh->hnext,
 									     lockdep_is_held(&priv->lock)));
@@ -1207 +1191 @@
 
 out_unlock:
 	spin_unlock_irqrestore(&priv->lock, flags);
+	list_for_each_entry_safe(mcast, tmcast, &remove_list, list)
+		ipoib_mcast_leave(dev, mcast);
 }
 
 static void ipoib_reap_neigh(struct work_struct *work)
+14 -12
drivers/infiniband/ulp/ipoib/ipoib_multicast.c
@@ -153 +153 @@
 	return mcast;
 }
 
-static struct ipoib_mcast *__ipoib_mcast_find(struct net_device *dev, void *mgid)
+struct ipoib_mcast *__ipoib_mcast_find(struct net_device *dev, void *mgid)
 {
 	struct ipoib_dev_priv *priv = netdev_priv(dev);
 	struct rb_node *n = priv->multicast_tree.rb_node;
@@ -508 +508 @@
 		rec.hop_limit	  = priv->broadcast->mcmember.hop_limit;
 
 		/*
-		 * Historically Linux IPoIB has never properly supported SEND
-		 * ONLY join. It emulated it by not providing all the required
-		 * attributes, which is enough to prevent group creation and
-		 * detect if there are full members or not. A major problem
-		 * with supporting SEND ONLY is detecting when the group is
-		 * auto-destroyed as IPoIB will cache the MLID..
+		 * Send-only IB Multicast joins do not work at the core
+		 * IB layer yet, so we can't use them here.  However,
+		 * we are emulating an Ethernet multicast send, which
+		 * does not require a multicast subscription and will
+		 * still send properly.  The most appropriate thing to
+		 * do is to create the group if it doesn't exist as that
+		 * most closely emulates the behavior, from a user space
+		 * application perspecitive, of Ethernet multicast
+		 * operation.  For now, we do a full join, maybe later
+		 * when the core IB layers support send only joins we
+		 * will use them.
 		 */
-#if 1
-		if (test_bit(IPOIB_MCAST_FLAG_SENDONLY, &mcast->flags))
-			comp_mask &= ~IB_SA_MCMEMBER_REC_TRAFFIC_CLASS;
-#else
+#if 0
 		if (test_bit(IPOIB_MCAST_FLAG_SENDONLY, &mcast->flags))
 			rec.join_state = 4;
 #endif
@@ -677 +675 @@
 	return 0;
 }
 
-static int ipoib_mcast_leave(struct net_device *dev, struct ipoib_mcast *mcast)
+int ipoib_mcast_leave(struct net_device *dev, struct ipoib_mcast *mcast)
 {
 	struct ipoib_dev_priv *priv = netdev_priv(dev);
 	int ret = 0;
+5
drivers/infiniband/ulp/iser/iscsi_iser.c
@@ -97 +97 @@
 module_param_named(max_sectors, iser_max_sectors, uint, S_IRUGO | S_IWUSR);
 MODULE_PARM_DESC(max_sectors, "Max number of sectors in a single scsi command (default:1024");
 
+bool iser_always_reg = true;
+module_param_named(always_register, iser_always_reg, bool, S_IRUGO);
+MODULE_PARM_DESC(always_register,
+		 "Always register memory, even for continuous memory regions (default:true)");
+
 bool iser_pi_enable = false;
 module_param_named(pi_enable, iser_pi_enable, bool, S_IRUGO);
 MODULE_PARM_DESC(pi_enable, "Enable T10-PI offload support (default:disabled)");
+1
drivers/infiniband/ulp/iser/iscsi_iser.h
@@ -611 +611 @@
 extern bool iser_pi_enable;
 extern int iser_pi_guard;
 extern unsigned int iser_max_sectors;
+extern bool iser_always_reg;
 
 int iser_assign_reg_ops(struct iser_device *device);
 
@@ -133 +133 @@
 				     (unsigned long)comp);
 	}
 
-	device->mr = ib_get_dma_mr(device->pd, IB_ACCESS_LOCAL_WRITE |
-				   IB_ACCESS_REMOTE_WRITE |
-				   IB_ACCESS_REMOTE_READ);
-	if (IS_ERR(device->mr))
-		goto dma_mr_err;
+	if (!iser_always_reg) {
+		int access = IB_ACCESS_LOCAL_WRITE |
+			     IB_ACCESS_REMOTE_WRITE |
+			     IB_ACCESS_REMOTE_READ;
+
+		device->mr = ib_get_dma_mr(device->pd, access);
+		if (IS_ERR(device->mr))
+			goto dma_mr_err;
+	}
 
 	INIT_IB_EVENT_HANDLER(&device->event_handler, device->ib_device,
 			      iser_event_handler);
@@ -151 +147 @@
 	return 0;
 
 handler_err:
-	ib_dereg_mr(device->mr);
+	if (device->mr)
+		ib_dereg_mr(device->mr);
 dma_mr_err:
 	for (i = 0; i < device->comps_used; i++)
 		tasklet_kill(&device->comps[i].tasklet);
@@ -178 +173 @@
 static void iser_free_device_ib_res(struct iser_device *device)
 {
 	int i;
-	BUG_ON(device->mr == NULL);
 
 	for (i = 0; i < device->comps_used; i++) {
 		struct iser_comp *comp = &device->comps[i];
@@ -188 +184 @@
 	}
 
 	(void)ib_unregister_event_handler(&device->event_handler);
-	(void)ib_dereg_mr(device->mr);
+	if (device->mr)
+		(void)ib_dereg_mr(device->mr);
 	ib_dealloc_pd(device->pd);
 
 	kfree(device->comps);
+1
drivers/input/joystick/Kconfig
@@ -196 +196 @@
 config JOYSTICK_ZHENHUA
 	tristate "5-byte Zhenhua RC transmitter"
 	select SERIO
+	select BITREVERSE
 	help
 	  Say Y here if you have a Zhen Hua PPM-4CH transmitter which is
 	  supplied with a ready to fly micro electric indoor helicopters
+2 -2
drivers/input/joystick/walkera0701.c
@@ -150 +150 @@
 	if (w->counter == 24) {		/* full frame */
 		walkera0701_parse_frame(w);
 		w->counter = NO_SYNC;
-		if (abs(pulse_time - SYNC_PULSE) < RESERVE)	/* new frame sync */
+		if (abs64(pulse_time - SYNC_PULSE) < RESERVE)	/* new frame sync */
 			w->counter = 0;
 	} else {
 		if ((pulse_time > (ANALOG_MIN_PULSE - RESERVE)
@@ -161 +161 @@
 		} else
 			w->counter = NO_SYNC;
 	}
-	} else if (abs(pulse_time - SYNC_PULSE - BIN0_PULSE) <
+	} else if (abs64(pulse_time - SYNC_PULSE - BIN0_PULSE) <
 				RESERVE + BIN1_PULSE - BIN0_PULSE)	/* frame sync .. */
 		w->counter = 0;
 
@@ -414 +414 @@
 	dev->id.product	= user_dev->id.product;
 	dev->id.version	= user_dev->id.version;
 
-	for_each_set_bit(i, dev->absbit, ABS_CNT) {
+	for (i = 0; i < ABS_CNT; i++) {
 		input_abs_set_max(dev, i, user_dev->absmax[i]);
 		input_abs_set_min(dev, i, user_dev->absmin[i]);
 		input_abs_set_fuzz(dev, i, user_dev->absfuzz[i]);
+1 -1
drivers/input/mouse/elan_i2c.h
@@ -60 +60 @@
 	int (*get_sm_version)(struct i2c_client *client,
 			      u8* ic_type, u8 *version);
 	int (*get_checksum)(struct i2c_client *client, bool iap, u16 *csum);
-	int (*get_product_id)(struct i2c_client *client, u8 *id);
+	int (*get_product_id)(struct i2c_client *client, u16 *id);
 
 	int (*get_max)(struct i2c_client *client,
 		       unsigned int *max_x, unsigned int *max_y);
+19 -7
drivers/input/mouse/elan_i2c_core.c
@@ -40 +40 @@
 #include "elan_i2c.h"
 
 #define DRIVER_NAME		"elan_i2c"
-#define ELAN_DRIVER_VERSION	"1.6.0"
+#define ELAN_DRIVER_VERSION	"1.6.1"
 #define ETP_MAX_PRESSURE	255
 #define ETP_FWIDTH_REDUCE	90
 #define ETP_FINGER_WIDTH	15
@@ -76 +76 @@
 	unsigned int		x_res;
 	unsigned int		y_res;
 
-	u8			product_id;
+	u16			product_id;
 	u8			fw_version;
 	u8			sm_version;
 	u8			iap_version;
@@ -98 +98 @@
 			   u16 *signature_address)
 {
 	switch (iap_version) {
+	case 0x00:
+	case 0x06:
 	case 0x08:
 		*validpage_count = 512;
 		break;
+	case 0x03:
+	case 0x07:
 	case 0x09:
+	case 0x0A:
+	case 0x0B:
+	case 0x0C:
 		*validpage_count = 768;
 		break;
 	case 0x0D:
 		*validpage_count = 896;
+		break;
+	case 0x0E:
+		*validpage_count = 640;
 		break;
 	default:
 		/* unknown ic type clear value */
@@ -276 +266 @@
 
 	error = elan_get_fwinfo(data->iap_version, &data->fw_validpage_count,
 				&data->fw_signature_address);
-	if (error) {
-		dev_err(&data->client->dev,
-			"unknown iap version %d\n", data->iap_version);
-		return error;
-	}
+	if (error)
+		dev_warn(&data->client->dev,
+			 "unexpected iap version %#04x (ic type: %#04x), firmware update will not work\n",
+			 data->iap_version, data->ic_type);
 
 	return 0;
 }
@@ -494 +485 @@
 	int error;
 	const u8 *fw_signature;
 	static const u8 signature[] = {0xAA, 0x55, 0xCC, 0x33, 0xFF, 0xFF};
+
+	if (data->fw_validpage_count == 0)
+		return -EINVAL;
 
 	/* Look for a firmware with the product id appended. */
 	fw_name = kasprintf(GFP_KERNEL, ETP_FW_NAME, data->product_id);
@@ -3215 +3215 @@
 
 	/* Restrict dma_mask to the width that the iommu can handle */
 	dma_mask = min_t(uint64_t, DOMAIN_MAX_ADDR(domain->gaw), dma_mask);
+	/* Ensure we reserve the whole size-aligned region */
+	nrpages = __roundup_pow_of_two(nrpages);
 
 	if (!dmar_forcedac && dma_mask > DMA_BIT_MASK(32)) {
 		/*
@@ -3713 +3711 @@
 static int __init iommu_init_mempool(void)
 {
 	int ret;
-	ret = iommu_iova_cache_init();
+	ret = iova_cache_get();
 	if (ret)
 		return ret;
 
@@ -3727 +3725 @@
 
 	kmem_cache_destroy(iommu_domain_cache);
 domain_error:
-	iommu_iova_cache_destroy();
+	iova_cache_put();
 
 	return -ENOMEM;
 }
@@ -3736 +3734 @@
 {
 	kmem_cache_destroy(iommu_devinfo_cache);
 	kmem_cache_destroy(iommu_domain_cache);
-	iommu_iova_cache_destroy();
+	iova_cache_put();
 }
 
 static void quirk_ioat_snb_local_iommu(struct pci_dev *pdev)
+69 -51
drivers/iommu/iova.c
@@ -18 +18 @@
  */
 
 #include <linux/iova.h>
+#include <linux/module.h>
 #include <linux/slab.h>
-
-static struct kmem_cache *iommu_iova_cache;
-
-int iommu_iova_cache_init(void)
-{
-	int ret = 0;
-
-	iommu_iova_cache = kmem_cache_create("iommu_iova",
-					 sizeof(struct iova),
-					 0,
-					 SLAB_HWCACHE_ALIGN,
-					 NULL);
-	if (!iommu_iova_cache) {
-		pr_err("Couldn't create iova cache\n");
-		ret = -ENOMEM;
-	}
-
-	return ret;
-}
-
-void iommu_iova_cache_destroy(void)
-{
-	kmem_cache_destroy(iommu_iova_cache);
-}
-
-struct iova *alloc_iova_mem(void)
-{
-	return kmem_cache_alloc(iommu_iova_cache, GFP_ATOMIC);
-}
-
-void free_iova_mem(struct iova *iova)
-{
-	kmem_cache_free(iommu_iova_cache, iova);
-}
 
 void
 init_iova_domain(struct iova_domain *iovad, unsigned long granule,
@@ -39 +72 @@
 	iovad->start_pfn = start_pfn;
 	iovad->dma_32bit_pfn = pfn_32bit;
 }
+EXPORT_SYMBOL_GPL(init_iova_domain);
 
 static struct rb_node *
 __get_cached_rbnode(struct iova_domain *iovad, unsigned long *limit_pfn)
@@ -88 +120 @@
 	}
 }
 
-/* Computes the padding size required, to make the
- * the start address naturally aligned on its size
+/*
+ * Computes the padding size required, to make the start address
+ * naturally aligned on the power-of-two order of its size
  */
-static int
-iova_get_pad_size(int size, unsigned int limit_pfn)
+static unsigned int
+iova_get_pad_size(unsigned int size, unsigned int limit_pfn)
 {
-	unsigned int pad_size = 0;
-	unsigned int order = ilog2(size);
-
-	if (order)
-		pad_size = (limit_pfn + 1) % (1 << order);
-
-	return pad_size;
+	return (limit_pfn + 1 - size) & (__roundup_pow_of_two(size) - 1);
 }
 
 static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
@@ -205 +242 @@
 	rb_insert_color(&iova->node, root);
 }
 
+static struct kmem_cache *iova_cache;
+static unsigned int iova_cache_users;
+static DEFINE_MUTEX(iova_cache_mutex);
+
+struct iova *alloc_iova_mem(void)
+{
+	return kmem_cache_alloc(iova_cache, GFP_ATOMIC);
+}
+EXPORT_SYMBOL(alloc_iova_mem);
+
+void free_iova_mem(struct iova *iova)
+{
+	kmem_cache_free(iova_cache, iova);
+}
+EXPORT_SYMBOL(free_iova_mem);
+
+int iova_cache_get(void)
+{
+	mutex_lock(&iova_cache_mutex);
+	if (!iova_cache_users) {
+		iova_cache = kmem_cache_create(
+			"iommu_iova", sizeof(struct iova), 0,
+			SLAB_HWCACHE_ALIGN, NULL);
+		if (!iova_cache) {
+			mutex_unlock(&iova_cache_mutex);
+			printk(KERN_ERR "Couldn't create iova cache\n");
+			return -ENOMEM;
+		}
+	}
+
+	iova_cache_users++;
+	mutex_unlock(&iova_cache_mutex);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(iova_cache_get);
+
+void iova_cache_put(void)
+{
+	mutex_lock(&iova_cache_mutex);
+	if (WARN_ON(!iova_cache_users)) {
+		mutex_unlock(&iova_cache_mutex);
+		return;
+	}
+	iova_cache_users--;
+	if (!iova_cache_users)
+		kmem_cache_destroy(iova_cache);
+	mutex_unlock(&iova_cache_mutex);
+}
+EXPORT_SYMBOL_GPL(iova_cache_put);
+
 /**
  * alloc_iova - allocates an iova
  * @iovad: - iova domain in question
@@ -279 +265 @@
 	if (!new_iova)
 		return NULL;
 
-	/* If size aligned is set then round the size to
-	 * to next power of two.
-	 */
-	if (size_aligned)
-		size = __roundup_pow_of_two(size);
-
 	ret = __alloc_and_insert_iova_range(iovad, size, limit_pfn,
 			new_iova, size_aligned);
 
@@ -289 +281 @@
 
 	return new_iova;
 }
+EXPORT_SYMBOL_GPL(alloc_iova);
 
 /**
  * find_iova - find's an iova for a given pfn
@@ -330 +321 @@
 	spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
 	return NULL;
 }
+EXPORT_SYMBOL_GPL(find_iova);
 
 /**
  * __free_iova - frees the given iova
@@ -349 +339 @@
 	spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
 	free_iova_mem(iova);
 }
+EXPORT_SYMBOL_GPL(__free_iova);
 
 /**
  * free_iova - finds and frees the iova for a given pfn
@@ -367 +356 @@
 		__free_iova(iovad, iova);
 
 }
+EXPORT_SYMBOL_GPL(free_iova);
 
 /**
  * put_iova_domain - destroys the iova doamin
@@ -390 +378 @@
 	}
 	spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
 }
+EXPORT_SYMBOL_GPL(put_iova_domain);
 
 static int
 __is_range_overlap(struct rb_node *node,
@@ -480 +467 @@
 	spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
 	return iova;
 }
+EXPORT_SYMBOL_GPL(reserve_iova);
 
 /**
  * copy_reserved_iova - copies the reserved between domains
@@ -507 +493 @@
 	}
 	spin_unlock_irqrestore(&from->iova_rbtree_lock, flags);
 }
+EXPORT_SYMBOL_GPL(copy_reserved_iova);
 
 struct iova *
 split_and_remove_iova(struct iova_domain *iovad, struct iova *iova,
@@ -549 +534 @@
 	free_iova_mem(prev);
 	return NULL;
 }
+
+MODULE_AUTHOR("Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>");
+MODULE_LICENSE("GPL");
@@ -881 +881 @@
 	}
 
 	if (bio && bio_data_dir(bio) == WRITE) {
-		if (bio->bi_iter.bi_sector >=
-		    conf->mddev->curr_resync_completed) {
+		if (bio->bi_iter.bi_sector >= conf->next_resync) {
 			if (conf->start_next_window == MaxSector)
 				conf->start_next_window =
 					conf->next_resync +
@@ -1515 +1516 @@
 	conf->r1buf_pool = NULL;
 
 	spin_lock_irq(&conf->resync_lock);
-	conf->next_resync = 0;
+	conf->next_resync = MaxSector - 2 * NEXT_NORMALIO_DISTANCE;
 	conf->start_next_window = MaxSector;
 	conf->current_window_requests +=
 		conf->next_window_requests;
@@ -2842 +2843 @@
 
  abort:
 	if (conf) {
-		if (conf->r1bio_pool)
-			mempool_destroy(conf->r1bio_pool);
+		mempool_destroy(conf->r1bio_pool);
 		kfree(conf->mirrors);
 		safe_put_page(conf->tmppage);
 		kfree(conf->poolinfo);
@@ -2944 +2946 @@
 {
 	struct r1conf *conf = priv;
 
-	if (conf->r1bio_pool)
-		mempool_destroy(conf->r1bio_pool);
+	mempool_destroy(conf->r1bio_pool);
 	kfree(conf->mirrors);
 	safe_put_page(conf->tmppage);
 	kfree(conf->poolinfo);
+3 -6
drivers/md/raid10.c
@@ -3486 +3486 @@
 		printk(KERN_ERR "md/raid10:%s: couldn't allocate memory.\n",
 		       mdname(mddev));
 	if (conf) {
-		if (conf->r10bio_pool)
-			mempool_destroy(conf->r10bio_pool);
+		mempool_destroy(conf->r10bio_pool);
 		kfree(conf->mirrors);
 		safe_put_page(conf->tmppage);
 		kfree(conf);
@@ -3681 +3682 @@
 
 out_free_conf:
 	md_unregister_thread(&mddev->thread);
-	if (conf->r10bio_pool)
-		mempool_destroy(conf->r10bio_pool);
+	mempool_destroy(conf->r10bio_pool);
 	safe_put_page(conf->tmppage);
 	kfree(conf->mirrors);
 	kfree(conf);
@@ -3694 +3696 @@
 {
 	struct r10conf *conf = priv;
 
-	if (conf->r10bio_pool)
-		mempool_destroy(conf->r10bio_pool);
+	mempool_destroy(conf->r10bio_pool);
 	safe_put_page(conf->tmppage);
 	kfree(conf->mirrors);
 	kfree(conf->mirrors_old);
+7 -4
drivers/md/raid5.c
@@ -2271 +2271 @@
 	       drop_one_stripe(conf))
 		;
 
-	if (conf->slab_cache)
-		kmem_cache_destroy(conf->slab_cache);
+	kmem_cache_destroy(conf->slab_cache);
 	conf->slab_cache = NULL;
 }
 
@@ -3149 +3150 @@
 		spin_unlock_irq(&sh->stripe_lock);
 		if (test_and_clear_bit(R5_Overlap, &sh->dev[i].flags))
 			wake_up(&conf->wait_for_overlap);
+		if (bi)
+			s->to_read--;
 		while (bi && bi->bi_iter.bi_sector <
 		       sh->dev[i].sector + STRIPE_SECTORS) {
 			struct bio *nextbi =
@@ -3170 +3169 @@
 		 */
 		clear_bit(R5_LOCKED, &sh->dev[i].flags);
 	}
+	s->to_write = 0;
+	s->written = 0;
 
 	if (test_and_clear_bit(STRIPE_FULL_WRITE, &sh->state))
 		if (atomic_dec_and_test(&conf->pending_full_writes))
@@ -3303 +3300 @@
 	 */
 		return 0;
 
-	for (i = 0; i < s->failed; i++) {
+	for (i = 0; i < s->failed && i < 2; i++) {
 		if (fdev[i]->towrite &&
 		    !test_bit(R5_UPTODATE, &fdev[i]->flags) &&
 		    !test_bit(R5_OVERWRITE, &fdev[i]->flags))
@@ -3327 +3324 @@
 	     sh->sector < sh->raid_conf->mddev->recovery_cp)
 		/* reconstruct-write isn't being forced */
 		return 0;
-	for (i = 0; i < s->failed; i++) {
+	for (i = 0; i < s->failed && i < 2; i++) {
 		if (s->failed_num[i] != sh->pd_idx &&
 		    s->failed_num[i] != sh->qd_idx &&
 		    !test_bit(R5_UPTODATE, &fdev[i]->flags) &&
@@ -926 +926 @@
 		goto bad;
 	}
 
+	if (data_size > ubi->leb_size) {
+		ubi_err(ubi, "bad data_size");
+		goto bad;
+	}
+
 	if (vol_type == UBI_VID_STATIC) {
 		/*
 		 * Although from high-level point of view static volumes may
+1
drivers/mtd/ubi/vtbl.c
@@ -649 +649 @@
 		if (ubi->corr_peb_count)
 			ubi_err(ubi, "%d PEBs are corrupted and not used",
 				ubi->corr_peb_count);
+		return -ENOSPC;
 	}
 	ubi->rsvd_pebs += reserved_pebs;
 	ubi->avail_pebs -= reserved_pebs;
+1
drivers/mtd/ubi/wl.c
@@ -1601 +1601 @@
 		if (ubi->corr_peb_count)
 			ubi_err(ubi, "%d PEBs are corrupted and not used",
 				ubi->corr_peb_count);
+		err = -ENOSPC;
 		goto out_free;
 	}
 	ubi->avail_pebs -= reserved_pebs;
@@ -946 +946 @@
 	/* take the lock before we start messing with the ring */
 	mutex_lock(&hw->aq.arq_mutex);
 
+	if (hw->aq.arq.count == 0) {
+		i40e_debug(hw, I40E_DEBUG_AQ_MESSAGE,
+			   "AQRX: Admin queue not initialized.\n");
+		ret_code = I40E_ERR_QUEUE_EMPTY;
+		goto clean_arq_element_err;
+	}
+
 	/* set next_to_use to head */
 	ntu = (rd32(hw, hw->aq.arq.head) & I40E_PF_ARQH_ARQH_MASK);
 	if (ntu == ntc) {
@@ -1014 +1007 @@
 	/* Set pending if needed, unlock and return */
 	if (pending != NULL)
 		*pending = (ntc > ntu ? hw->aq.arq.count : 0) + (ntu - ntc);
+
+clean_arq_element_err:
 	mutex_unlock(&hw->aq.arq_mutex);
 
 	if (i40e_is_nvm_update_op(&e->desc)) {
@@ -887 +887 @@
 	/* take the lock before we start messing with the ring */
 	mutex_lock(&hw->aq.arq_mutex);
 
+	if (hw->aq.arq.count == 0) {
+		i40e_debug(hw, I40E_DEBUG_AQ_MESSAGE,
+			   "AQRX: Admin queue not initialized.\n");
+		ret_code = I40E_ERR_QUEUE_EMPTY;
+		goto clean_arq_element_err;
+	}
+
 	/* set next_to_use to head */
 	ntu = (rd32(hw, hw->aq.arq.head) & I40E_VF_ARQH1_ARQH_MASK);
 	if (ntu == ntc) {
@@ -955 +948 @@
 	/* Set pending if needed, unlock and return */
 	if (pending != NULL)
 		*pending = (ntc > ntu ? hw->aq.arq.count : 0) + (ntu - ntc);
+
+clean_arq_element_err:
 	mutex_unlock(&hw->aq.arq_mutex);
 
 	return ret_code;
+4 -3
drivers/net/ethernet/mellanox/mlx4/mcg.c
@@ -1184 +1184 @@
 	if (prot == MLX4_PROT_ETH) {
 		/* manage the steering entry for promisc mode */
 		if (new_entry)
-			new_steering_entry(dev, port, steer, index, qp->qpn);
+			err = new_steering_entry(dev, port, steer,
+						 index, qp->qpn);
 		else
-			existing_steering_entry(dev, port, steer,
-						index, qp->qpn);
+			err = existing_steering_entry(dev, port, steer,
+						      index, qp->qpn);
 	}
 	if (err && link && index != -1) {
 		if (index < dev->caps.num_mgms)
@@ -299 +299 @@
 	 * Unbound PCI devices are always put in D0, regardless of
 	 * runtime PM status.  During probe, the device is set to
 	 * active and the usage count is incremented.  If the driver
-	 * supports runtime PM, it should call pm_runtime_put_noidle()
-	 * in its probe routine and pm_runtime_get_noresume() in its
-	 * remove routine.
+	 * supports runtime PM, it should call pm_runtime_put_noidle(),
+	 * or any other runtime PM helper function decrementing the usage
+	 * count, in its probe routine and pm_runtime_get_noresume() in
+	 * its remove routine.
 	 */
 	pm_runtime_get_sync(dev);
 	pci_dev->driver = pci_drv;
@@ -144 +144 @@
 		switch_on_temp = 0;
 
 	temperature_threshold = control_temp - switch_on_temp;
+	/*
+	 * estimate_pid_constants() tries to find appropriate default
+	 * values for thermal zones that don't provide them. If a
+	 * system integrator has configured a thermal zone with two
+	 * passive trip points at the same temperature, that person
+	 * hasn't put any effort to set up the thermal zone properly
+	 * so just give up.
+	 */
+	if (!temperature_threshold)
+		return;
 
 	if (!tz->tzp->k_po || force)
 		tz->tzp->k_po = int_to_frac(sustainable_power) /
+2 -1
drivers/watchdog/Kconfig
@@ -817 +817 @@
 	tristate "Intel TCO Timer/Watchdog"
 	depends on (X86 || IA64) && PCI
 	select WATCHDOG_CORE
+	depends on I2C || I2C=n
 	select LPC_ICH if !EXPERT
-	select I2C_I801 if !EXPERT
+	select I2C_I801 if !EXPERT && I2C
 	---help---
 	  Hardware driver for the intel TCO timer based watchdog devices.
 	  These drivers are included in the Intel 82801 I/O Controller
+8-2
drivers/watchdog/bcm2835_wdt.c
···
 #define PM_RSTC_WRCFG_FULL_RESET	0x00000020
 #define PM_RSTC_RESET			0x00000102
 
+/*
+ * The Raspberry Pi firmware uses the RSTS register to know which partition
+ * to boot from. The partition value is spread into bits 0, 2, 4, 6, 8, 10.
+ * Partition 63 is a special partition used by the firmware to indicate halt.
+ */
+#define PM_RSTS_RASPBERRYPI_HALT	0x555
+
 #define SECS_TO_WDOG_TICKS(x) ((x) << 16)
 #define WDOG_TICKS_TO_SECS(x) ((x) >> 16)
···
 	 * hard reset.
 	 */
 	val = readl_relaxed(wdt->base + PM_RSTS);
-	val &= PM_RSTC_WRCFG_CLR;
-	val |= PM_PASSWORD | PM_RSTS_HADWRH_SET;
+	val |= PM_PASSWORD | PM_RSTS_RASPBERRYPI_HALT;
 	writel_relaxed(val, wdt->base + PM_RSTS);
 
 	/* Continue with normal reset mechanism */
···
 	if (!buffer_size_valid(&bh) || bh.b_size < PMD_SIZE)
 		goto fallback;
 
+	sector = bh.b_blocknr << (blkbits - 9);
+
 	if (buffer_unwritten(&bh) || buffer_new(&bh)) {
 		int i;
+
+		length = bdev_direct_access(bh.b_bdev, sector, &kaddr, &pfn,
+						bh.b_size);
+		if (length < 0) {
+			result = VM_FAULT_SIGBUS;
+			goto out;
+		}
+		if ((length < PMD_SIZE) || (pfn & PG_PMD_COLOUR))
+			goto fallback;
+
 		for (i = 0; i < PTRS_PER_PMD; i++)
 			clear_pmem(kaddr + i * PAGE_SIZE, PAGE_SIZE);
 		wmb_pmem();
···
 		result = VM_FAULT_NOPAGE;
 		spin_unlock(ptl);
 	} else {
-		sector = bh.b_blocknr << (blkbits - 9);
 		length = bdev_direct_access(bh.b_bdev, sector, &kaddr, &pfn,
 						bh.b_size);
 		if (length < 0) {
-3
fs/ubifs/xattr.c
···
 {
 	int err;
 
-	mutex_lock(&inode->i_mutex);
 	err = security_inode_init_security(inode, dentry, qstr,
 					   &init_xattrs, 0);
-	mutex_unlock(&inode->i_mutex);
-
 	if (err) {
 		struct ubifs_info *c = dentry->i_sb->s_fs_info;
 		ubifs_err(c, "cannot initialize security for inode %lu, error %d",
+72-8
include/asm-generic/word-at-a-time.h
···11#ifndef _ASM_WORD_AT_A_TIME_H22#define _ASM_WORD_AT_A_TIME_H3344-/*55- * This says "generic", but it's actually big-endian only.66- * Little-endian can use more efficient versions of these77- * interfaces, see for example88- * arch/x86/include/asm/word-at-a-time.h99- * for those.1010- */1111-124#include <linux/kernel.h>55+#include <asm/byteorder.h>66+77+#ifdef __BIG_ENDIAN138149struct word_at_a_time {1510 const unsigned long high_bits, low_bits;···4752#ifndef zero_bytemask4853#define zero_bytemask(mask) (~1ul << __fls(mask))4954#endif5555+5656+#else5757+5858+/*5959+ * The optimal byte mask counting is probably going to be something6060+ * that is architecture-specific. If you have a reliably fast6161+ * bit count instruction, that might be better than the multiply6262+ * and shift, for example.6363+ */6464+struct word_at_a_time {6565+ const unsigned long one_bits, high_bits;6666+};6767+6868+#define WORD_AT_A_TIME_CONSTANTS { REPEAT_BYTE(0x01), REPEAT_BYTE(0x80) }6969+7070+#ifdef CONFIG_64BIT7171+7272+/*7373+ * Jan Achrenius on G+: microoptimized version of7474+ * the simpler "(mask & ONEBYTES) * ONEBYTES >> 56"7575+ * that works for the bytemasks without having to7676+ * mask them first.7777+ */7878+static inline long count_masked_bytes(unsigned long mask)7979+{8080+ return mask*0x0001020304050608ul >> 56;8181+}8282+8383+#else /* 32-bit case */8484+8585+/* Carl Chatfield / Jan Achrenius G+ version for 32-bit */8686+static inline long count_masked_bytes(long mask)8787+{8888+ /* (000000 0000ff 00ffff ffffff) -> ( 1 1 2 3 ) */8989+ long a = (0x0ff0001+mask) >> 23;9090+ /* Fix the 1 for 00 case */9191+ return a & mask;9292+}9393+9494+#endif9595+9696+/* Return nonzero if it has a zero */9797+static inline unsigned long has_zero(unsigned long a, unsigned long *bits, const struct word_at_a_time *c)9898+{9999+ unsigned long mask = ((a - c->one_bits) & ~a) & c->high_bits;100100+ *bits = mask;101101+ return mask;102102+}103103+104104+static inline unsigned long 
prep_zero_mask(unsigned long a, unsigned long bits, const struct word_at_a_time *c)105105+{106106+ return bits;107107+}108108+109109+static inline unsigned long create_zero_mask(unsigned long bits)110110+{111111+ bits = (bits - 1) & ~bits;112112+ return bits >> 7;113113+}114114+115115+/* The mask we created is directly usable as a bytemask */116116+#define zero_bytemask(mask) (mask)117117+118118+static inline unsigned long find_zero(unsigned long mask)119119+{120120+ return count_masked_bytes(mask);121121+}122122+123123+#endif /* __BIG_ENDIAN */5012451125#endif /* _ASM_WORD_AT_A_TIME_H */
···
 
 #include <linux/types.h>
 
-#include <linux/compiler.h>
-
 #define UFFD_API ((__u64)0xAA)
 /*
  * After implementing the respective features it will become:
+7-7
ipc/msg.c
···
 		return retval;
 	}
 
-	/* ipc_addid() locks msq upon success. */
-	id = ipc_addid(&msg_ids(ns), &msq->q_perm, ns->msg_ctlmni);
-	if (id < 0) {
-		ipc_rcu_putref(msq, msg_rcu_free);
-		return id;
-	}
-
 	msq->q_stime = msq->q_rtime = 0;
 	msq->q_ctime = get_seconds();
 	msq->q_cbytes = msq->q_qnum = 0;
···
 	INIT_LIST_HEAD(&msq->q_messages);
 	INIT_LIST_HEAD(&msq->q_receivers);
 	INIT_LIST_HEAD(&msq->q_senders);
+
+	/* ipc_addid() locks msq upon success. */
+	id = ipc_addid(&msg_ids(ns), &msq->q_perm, ns->msg_ctlmni);
+	if (id < 0) {
+		ipc_rcu_putref(msq, msg_rcu_free);
+		return id;
+	}
 
 	ipc_unlock_object(&msq->q_perm);
 	rcu_read_unlock();
+7-6
ipc/shm.c
···
 	if (IS_ERR(file))
 		goto no_file;
 
-	id = ipc_addid(&shm_ids(ns), &shp->shm_perm, ns->shm_ctlmni);
-	if (id < 0) {
-		error = id;
-		goto no_id;
-	}
-
 	shp->shm_cprid = task_tgid_vnr(current);
 	shp->shm_lprid = 0;
 	shp->shm_atim = shp->shm_dtim = 0;
···
 	shp->shm_nattch = 0;
 	shp->shm_file = file;
 	shp->shm_creator = current;
+
+	id = ipc_addid(&shm_ids(ns), &shp->shm_perm, ns->shm_ctlmni);
+	if (id < 0) {
+		error = id;
+		goto no_id;
+	}
+
 	list_add(&shp->shm_clist, &current->sysvshm.shm_clist);
 
 	/*
···
 		      PERF_EVENT_STATE_INACTIVE;
 }
 
-/*
- * Called at perf_event creation and when events are attached/detached from a
- * group.
- */
-static void perf_event__read_size(struct perf_event *event)
+static void __perf_event_read_size(struct perf_event *event, int nr_siblings)
 {
 	int entry = sizeof(u64); /* value */
 	int size = 0;
···
 		entry += sizeof(u64);
 
 	if (event->attr.read_format & PERF_FORMAT_GROUP) {
-		nr += event->group_leader->nr_siblings;
+		nr += nr_siblings;
 		size += sizeof(u64);
 	}
···
 	event->read_size = size;
 }
 
-static void perf_event__header_size(struct perf_event *event)
+static void __perf_event_header_size(struct perf_event *event, u64 sample_type)
 {
 	struct perf_sample_data *data;
-	u64 sample_type = event->attr.sample_type;
 	u16 size = 0;
-
-	perf_event__read_size(event);
 
 	if (sample_type & PERF_SAMPLE_IP)
 		size += sizeof(data->ip);
···
 		size += sizeof(data->txn);
 
 	event->header_size = size;
+}
+
+/*
+ * Called at perf_event creation and when events are attached/detached from a
+ * group.
+ */
+static void perf_event__header_size(struct perf_event *event)
+{
+	__perf_event_read_size(event,
+			       event->group_leader->nr_siblings);
+	__perf_event_header_size(event, event->attr.sample_type);
 }
 
 static void perf_event__id_header_size(struct perf_event *event)
···
 		size += sizeof(data->cpu_entry);
 
 	event->id_header_size = size;
+}
+
+static bool perf_event_validate_size(struct perf_event *event)
+{
+	/*
+	 * The values computed here will be over-written when we actually
+	 * attach the event.
+	 */
+	__perf_event_read_size(event, event->group_leader->nr_siblings + 1);
+	__perf_event_header_size(event, event->attr.sample_type & ~PERF_SAMPLE_READ);
+	perf_event__id_header_size(event);
+
+	/*
+	 * Sum the lot; should not exceed the 64k limit we have on records.
+	 * Conservative limit to allow for callchains and other variable fields.
+	 */
+	if (event->read_size + event->header_size +
+	    event->id_header_size + sizeof(struct perf_event_header) >= 16*1024)
+		return false;
+
+	return true;
 }
 
 static void perf_group_attach(struct perf_event *event)
···
 
 	if (move_group) {
 		gctx = group_leader->ctx;
+		mutex_lock_double(&gctx->mutex, &ctx->mutex);
+	} else {
+		mutex_lock(&ctx->mutex);
+	}
 
+	if (!perf_event_validate_size(event)) {
+		err = -E2BIG;
+		goto err_locked;
+	}
+
+	/*
+	 * Must be under the same ctx::mutex as perf_install_in_context(),
+	 * because we need to serialize with concurrent event creation.
+	 */
+	if (!exclusive_event_installable(event, ctx)) {
+		/* exclusive and group stuff are assumed mutually exclusive */
+		WARN_ON_ONCE(move_group);
+
+		err = -EBUSY;
+		goto err_locked;
+	}
+
+	WARN_ON_ONCE(ctx->parent_ctx);
+
+	if (move_group) {
 		/*
 		 * See perf_event_ctx_lock() for comments on the details
 		 * of swizzling perf_event::ctx.
 		 */
-		mutex_lock_double(&gctx->mutex, &ctx->mutex);
-
 		perf_remove_from_context(group_leader, false);
 
 		list_for_each_entry(sibling, &group_leader->sibling_list,
···
 			perf_remove_from_context(sibling, false);
 			put_ctx(gctx);
 		}
-	} else {
-		mutex_lock(&ctx->mutex);
-	}
 
-	WARN_ON_ONCE(ctx->parent_ctx);
-
-	if (move_group) {
 		/*
 		 * Wait for everybody to stop referencing the events through
 		 * the old lists, before installing it on new lists.
···
 		perf_event__state_init(group_leader);
 		perf_install_in_context(ctx, group_leader, group_leader->cpu);
 		get_ctx(ctx);
+
+		/*
+		 * Now that all events are installed in @ctx, nothing
+		 * references @gctx anymore, so drop the last reference we have
+		 * on it.
+		 */
+		put_ctx(gctx);
 	}
 
-	if (!exclusive_event_installable(event, ctx)) {
-		err = -EBUSY;
-		mutex_unlock(&ctx->mutex);
-		fput(event_file);
-		goto err_context;
-	}
+	/*
+	 * Precalculate sample_data sizes; do while holding ctx::mutex such
+	 * that we're serialized against further additions and before
+	 * perf_install_in_context() which is the point the event is active and
+	 * can use these values.
+	 */
+	perf_event__header_size(event);
+	perf_event__id_header_size(event);
 
 	perf_install_in_context(ctx, event, event->cpu);
 	perf_unpin_context(ctx);
 
-	if (move_group) {
+	if (move_group)
 		mutex_unlock(&gctx->mutex);
-		put_ctx(gctx);
-	}
 	mutex_unlock(&ctx->mutex);
 
 	put_online_cpus();
···
 	mutex_unlock(&current->perf_event_mutex);
 
 	/*
-	 * Precalculate sample_data sizes
-	 */
-	perf_event__header_size(event);
-	perf_event__id_header_size(event);
-
-	/*
 	 * Drop the reference on the group_event after placing the
 	 * new event on the sibling_list. This ensures destruction
 	 * of the group leader will find the pointer to itself in
···
 	fd_install(event_fd, event_file);
 	return event_fd;
 
+err_locked:
+	if (move_group)
+		mutex_unlock(&gctx->mutex);
+	mutex_unlock(&ctx->mutex);
+/* err_file: */
+	fput(event_file);
 err_context:
 	perf_unpin_context(ctx);
 	put_ctx(ctx);
+17-2
kernel/irq/proc.c
···
 #include <linux/seq_file.h>
 #include <linux/interrupt.h>
 #include <linux/kernel_stat.h>
+#include <linux/mutex.h>
 
 #include "internals.h"
 
···
 
 void register_irq_proc(unsigned int irq, struct irq_desc *desc)
 {
+	static DEFINE_MUTEX(register_lock);
 	char name [MAX_NAMELEN];
 
-	if (!root_irq_dir || (desc->irq_data.chip == &no_irq_chip) || desc->dir)
+	if (!root_irq_dir || (desc->irq_data.chip == &no_irq_chip))
 		return;
+
+	/*
+	 * irq directories are registered only when a handler is
+	 * added, not when the descriptor is created, so multiple
+	 * tasks might try to register at the same time.
+	 */
+	mutex_lock(&register_lock);
+
+	if (desc->dir)
+		goto out_unlock;
 
 	memset(name, 0, MAX_NAMELEN);
 	sprintf(name, "%d", irq);
···
 	/* create /proc/irq/1234 */
 	desc->dir = proc_mkdir(name, root_irq_dir);
 	if (!desc->dir)
-		return;
+		goto out_unlock;
 
 #ifdef CONFIG_SMP
 	/* create /proc/irq/<irq>/smp_affinity */
···
 
 	proc_create_data("spurious", 0444, desc->dir,
 			 &irq_spurious_proc_fops, (void *)(long)irq);
+
+out_unlock:
+	mutex_unlock(&register_lock);
 }
 
 void unregister_irq_proc(unsigned int irq, struct irq_desc *desc)
···
 	idle->state = TASK_RUNNING;
 	idle->se.exec_start = sched_clock();
 
-	do_set_cpus_allowed(idle, cpumask_of(cpu));
+#ifdef CONFIG_SMP
+	/*
+	 * It's possible that init_idle() gets called multiple times on a task,
+	 * in that case do_set_cpus_allowed() will not do the right thing.
+	 *
+	 * And since this is boot we can forgo the serialization.
+	 */
+	set_cpus_allowed_common(idle, cpumask_of(cpu));
+#endif
 	/*
 	 * We're having a chicken and egg problem, even though we are
 	 * holding rq->lock, the cpu isn't yet set to this cpu so the
···
 
 	rq->curr = rq->idle = idle;
 	idle->on_rq = TASK_ON_RQ_QUEUED;
-#if defined(CONFIG_SMP)
+#ifdef CONFIG_SMP
 	idle->on_cpu = 1;
 #endif
 	raw_spin_unlock(&rq->lock);
···
 	idle->sched_class = &idle_sched_class;
 	ftrace_graph_init_idle_task(idle, cpu);
 	vtime_init_idle(idle, cpu);
-#if defined(CONFIG_SMP)
+#ifdef CONFIG_SMP
 	sprintf(idle->comm, "%s/%d", INIT_TASK_COMM, cpu);
 #endif
 }
+1-1
kernel/time/clocksource.c
···
 			continue;
 
 		/* Check the deviation from the watchdog clocksource. */
-		if ((abs(cs_nsec - wd_nsec) > WATCHDOG_THRESHOLD)) {
+		if (abs64(cs_nsec - wd_nsec) > WATCHDOG_THRESHOLD) {
 			pr_warn("timekeeping watchdog: Marking clocksource '%s' as unstable because the skew is too large:\n",
 				cs->name);
 			pr_warn("                      '%s' wd_now: %llx wd_last: %llx mask: %llx\n",
+88
lib/string.c
···
 #include <linux/bug.h>
 #include <linux/errno.h>
 
+#include <asm/byteorder.h>
+#include <asm/word-at-a-time.h>
+#include <asm/page.h>
+
 #ifndef __HAVE_ARCH_STRNCASECMP
 /**
  * strncasecmp - Case insensitive, length-limited string comparison
···
 	return ret;
 }
 EXPORT_SYMBOL(strlcpy);
+#endif
+
+#ifndef __HAVE_ARCH_STRSCPY
+/**
+ * strscpy - Copy a C-string into a sized buffer
+ * @dest: Where to copy the string to
+ * @src: Where to copy the string from
+ * @count: Size of destination buffer
+ *
+ * Copy the string, or as much of it as fits, into the dest buffer.
+ * The routine returns the number of characters copied (not including
+ * the trailing NUL) or -E2BIG if the destination buffer wasn't big enough.
+ * The behavior is undefined if the string buffers overlap.
+ * The destination buffer is always NUL terminated, unless it's zero-sized.
+ *
+ * Preferred to strlcpy() since the API doesn't require reading memory
+ * from the src string beyond the specified "count" bytes, and since
+ * the return value is easier to error-check than strlcpy()'s.
+ * In addition, the implementation is robust to the string changing out
+ * from underneath it, unlike the current strlcpy() implementation.
+ *
+ * Preferred to strncpy() since it always returns a valid string, and
+ * doesn't unnecessarily force the tail of the destination buffer to be
+ * zeroed. If the zeroing is desired, it's likely cleaner to use strscpy()
+ * with an overflow test, then just memset() the tail of the dest buffer.
+ */
+ssize_t strscpy(char *dest, const char *src, size_t count)
+{
+	const struct word_at_a_time constants = WORD_AT_A_TIME_CONSTANTS;
+	size_t max = count;
+	long res = 0;
+
+	if (count == 0)
+		return -E2BIG;
+
+#ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+	/*
+	 * If src is unaligned, don't cross a page boundary,
+	 * since we don't know if the next page is mapped.
+	 */
+	if ((long)src & (sizeof(long) - 1)) {
+		size_t limit = PAGE_SIZE - ((long)src & (PAGE_SIZE - 1));
+		if (limit < max)
+			max = limit;
+	}
+#else
+	/* If src or dest is unaligned, don't do word-at-a-time. */
+	if (((long) dest | (long) src) & (sizeof(long) - 1))
+		max = 0;
+#endif
+
+	while (max >= sizeof(unsigned long)) {
+		unsigned long c, data;
+
+		c = *(unsigned long *)(src+res);
+		*(unsigned long *)(dest+res) = c;
+		if (has_zero(c, &data, &constants)) {
+			data = prep_zero_mask(c, data, &constants);
+			data = create_zero_mask(data);
+			return res + find_zero(data);
+		}
+		res += sizeof(unsigned long);
+		count -= sizeof(unsigned long);
+		max -= sizeof(unsigned long);
+	}
+
+	while (count) {
+		char c;
+
+		c = src[res];
+		dest[res] = c;
+		if (!c)
+			return res;
+		res++;
+		count--;
+	}
+
+	/* Hit buffer length without finding a NUL; force NUL-termination. */
+	if (res)
+		dest[res-1] = '\0';
+
+	return -E2BIG;
+}
+EXPORT_SYMBOL(strscpy);
 #endif
 
 #ifndef __HAVE_ARCH_STRCAT
+1-1
mm/dmapool.c
···
 	list_for_each_entry(page, &pool->page_list, page_list) {
 		if (dma < page->dma)
 			continue;
-		if (dma < (page->dma + pool->allocation))
+		if ((dma - page->dma) < pool->allocation)
 			return page;
 	}
 	return NULL;
+8
mm/hugetlb.c
···
 			continue;
 
 		/*
+		 * Shared VMAs have their own reserves and do not affect
+		 * MAP_PRIVATE accounting but it is possible that a shared
+		 * VMA is using the same page so check and skip such VMAs.
+		 */
+		if (iter_vma->vm_flags & VM_MAYSHARE)
+			continue;
+
+		/*
 		 * Unmap the page from other VMAs without their own reserves.
 		 * They get marked to be SIGKILLed if they fault in these
 		 * areas. This is because a future no-page fault on this VMA
+18-13
mm/memcontrol.c
···
 }
 
 /*
+ * Return page count for single (non recursive) @memcg.
+ *
  * Implementation Note: reading percpu statistics for memcg.
  *
  * Both of vmstat[] and percpu_counter has threshold and do periodic
  * synchronization to implement "quick" read. There are trade-off between
  * reading cost and precision of value. Then, we may have a chance to implement
- * a periodic synchronizion of counter in memcg's counter.
+ * a periodic synchronization of counter in memcg's counter.
 *
 * But this _read() function is used for user interface now. The user accounts
 * memory usage by memory cgroup and he _always_ requires exact value because
···
 *
 * If there are kernel internal actions which can make use of some not-exact
 * value, and reading all cpu value can be performance bottleneck in some
- * common workload, threashold and synchonization as vmstat[] should be
+ * common workload, threshold and synchronization as vmstat[] should be
 * implemented.
 */
-static long mem_cgroup_read_stat(struct mem_cgroup *memcg,
-				 enum mem_cgroup_stat_index idx)
+static unsigned long
+mem_cgroup_read_stat(struct mem_cgroup *memcg, enum mem_cgroup_stat_index idx)
 {
 	long val = 0;
 	int cpu;
 
+	/* Per-cpu values can be negative, use a signed accumulator */
 	for_each_possible_cpu(cpu)
 		val += per_cpu(memcg->stat->count[idx], cpu);
+	/*
+	 * Summing races with updates, so val may be negative. Avoid exposing
+	 * transient negative values.
+	 */
+	if (val < 0)
+		val = 0;
 	return val;
 }
···
 	for (i = 0; i < MEM_CGROUP_STAT_NSTATS; i++) {
 		if (i == MEM_CGROUP_STAT_SWAP && !do_swap_account)
 			continue;
-		pr_cont(" %s:%ldKB", mem_cgroup_stat_names[i],
+		pr_cont(" %s:%luKB", mem_cgroup_stat_names[i],
 			K(mem_cgroup_read_stat(iter, i)));
 	}
···
 			     enum mem_cgroup_stat_index idx)
 {
 	struct mem_cgroup *iter;
-	long val = 0;
+	unsigned long val = 0;
 
-	/* Per-cpu values can be negative, use a signed accumulator */
 	for_each_mem_cgroup_tree(iter, memcg)
 		val += mem_cgroup_read_stat(iter, idx);
 
-	if (val < 0) /* race ? */
-		val = 0;
 	return val;
 }
···
 	for (i = 0; i < MEM_CGROUP_STAT_NSTATS; i++) {
 		if (i == MEM_CGROUP_STAT_SWAP && !do_swap_account)
 			continue;
-		seq_printf(m, "%s %ld\n", mem_cgroup_stat_names[i],
+		seq_printf(m, "%s %lu\n", mem_cgroup_stat_names[i],
 			   mem_cgroup_read_stat(memcg, i) * PAGE_SIZE);
 	}
···
 		   (u64)memsw * PAGE_SIZE);
 
 	for (i = 0; i < MEM_CGROUP_STAT_NSTATS; i++) {
-		long long val = 0;
+		unsigned long long val = 0;
 
 		if (i == MEM_CGROUP_STAT_SWAP && !do_swap_account)
 			continue;
 		for_each_mem_cgroup_tree(mi, memcg)
 			val += mem_cgroup_read_stat(mi, i) * PAGE_SIZE;
-		seq_printf(m, "total_%s %lld\n", mem_cgroup_stat_names[i], val);
+		seq_printf(m, "total_%s %llu\n", mem_cgroup_stat_names[i], val);
 	}
···
 	if (memcg_wb_domain_init(memcg, GFP_KERNEL))
 		goto out_free_stat;
 
-	spin_lock_init(&memcg->pcp_counter_lock);
 	return memcg;
 
 out_free_stat:
+11-1
mm/migrate.c
···
 	if (PageSwapBacked(page))
 		SetPageSwapBacked(newpage);
 
+	/*
+	 * Indirectly called below, migrate_page_copy() copies PG_dirty and thus
+	 * needs newpage's memcg set to transfer memcg dirty page accounting.
+	 * So perform memcg migration in two steps:
+	 * 1. set newpage->mem_cgroup (here)
+	 * 2. clear page->mem_cgroup (below)
+	 */
+	set_page_memcg(newpage, page_memcg(page));
+
 	mapping = page_mapping(page);
 	if (!mapping)
 		rc = migrate_page(mapping, newpage, page, mode);
···
 		rc = fallback_migrate_page(mapping, newpage, page, mode);
 
 	if (rc != MIGRATEPAGE_SUCCESS) {
+		set_page_memcg(newpage, NULL);
 		newpage->mapping = NULL;
 	} else {
-		mem_cgroup_migrate(page, newpage, false);
+		set_page_memcg(page, NULL);
 		if (page_was_mapped)
 			remove_migration_ptes(page, newpage);
 		page->mapping = NULL;
+10-3
mm/slab.c
···
 		size += BYTES_PER_WORD;
 	}
 #if FORCED_DEBUG && defined(CONFIG_DEBUG_PAGEALLOC)
-	if (size >= kmalloc_size(INDEX_NODE + 1)
-	    && cachep->object_size > cache_line_size()
-	    && ALIGN(size, cachep->align) < PAGE_SIZE) {
+	/*
+	 * To activate debug pagealloc, off-slab management is necessary
+	 * requirement. In early phase of initialization, small sized slab
+	 * doesn't get initialized so it would not be possible. So, we need
+	 * to check size >= 256. It guarantees that all necessary small
+	 * sized slab is initialized in current slab initialization sequence.
+	 */
+	if (!slab_early_init && size >= kmalloc_size(INDEX_NODE) &&
+	    size >= 256 && cachep->object_size > cache_line_size() &&
+	    ALIGN(size, cachep->align) < PAGE_SIZE) {
 		cachep->obj_offset += PAGE_SIZE - ALIGN(size, cachep->align);
 		size = PAGE_SIZE;
 	}
···
 
 	fl6->flowi6_iif = LOOPBACK_IFINDEX;
 
-	if ((sk && sk->sk_bound_dev_if) || rt6_need_strict(&fl6->daddr))
+	if ((sk && sk->sk_bound_dev_if) || rt6_need_strict(&fl6->daddr) ||
+	    fl6->flowi6_oif)
 		flags |= RT6_LOOKUP_F_IFACE;
 
 	if (!ipv6_addr_any(&fl6->saddr))
+9-2
net/l2tp/l2tp_core.c
···
 	tunnel = container_of(work, struct l2tp_tunnel, del_work);
 	sk = l2tp_tunnel_sock_lookup(tunnel);
 	if (!sk)
-		return;
+		goto out;
 
 	sock = sk->sk_socket;
 
···
 	}
 
 	l2tp_tunnel_sock_put(sk);
+out:
+	l2tp_tunnel_dec_refcount(tunnel);
 }
 
 /* Create a socket for the tunnel, if one isn't set up by
···
  */
 int l2tp_tunnel_delete(struct l2tp_tunnel *tunnel)
 {
+	l2tp_tunnel_inc_refcount(tunnel);
 	l2tp_tunnel_closeall(tunnel);
-	return (false == queue_work(l2tp_wq, &tunnel->del_work));
+	if (false == queue_work(l2tp_wq, &tunnel->del_work)) {
+		l2tp_tunnel_dec_refcount(tunnel);
+		return 1;
+	}
+	return 0;
 }
 EXPORT_SYMBOL_GPL(l2tp_tunnel_delete);
+11-9
net/sctp/associola.c
···
  * within this document.
  *
  * Our basic strategy is to round-robin transports in priorities
- * according to sctp_state_prio_map[] e.g., if no such
+ * according to sctp_trans_score() e.g., if no such
  * transport with state SCTP_ACTIVE exists, round-robin through
  * SCTP_UNKNOWN, etc. You get the picture.
  */
-static const u8 sctp_trans_state_to_prio_map[] = {
-	[SCTP_ACTIVE]	= 3,	/* best case */
-	[SCTP_UNKNOWN]	= 2,
-	[SCTP_PF]	= 1,
-	[SCTP_INACTIVE] = 0,	/* worst case */
-};
-
 static u8 sctp_trans_score(const struct sctp_transport *trans)
 {
-	return sctp_trans_state_to_prio_map[trans->state];
+	switch (trans->state) {
+	case SCTP_ACTIVE:
+		return 3;	/* best case */
+	case SCTP_UNKNOWN:
+		return 2;
+	case SCTP_PF:
+		return 1;
+	default: /* case SCTP_INACTIVE */
+		return 0;	/* worst case */
+	}
 }
 
 static struct sctp_transport *sctp_trans_elect_tie(struct sctp_transport *trans1,
+24-20
net/sctp/sm_sideeffect.c
···
 	int error;
 	struct sctp_transport *transport = (struct sctp_transport *) peer;
 	struct sctp_association *asoc = transport->asoc;
-	struct net *net = sock_net(asoc->base.sk);
+	struct sock *sk = asoc->base.sk;
+	struct net *net = sock_net(sk);
 
 	/* Check whether a task is in the sock. */
 
-	bh_lock_sock(asoc->base.sk);
-	if (sock_owned_by_user(asoc->base.sk)) {
+	bh_lock_sock(sk);
+	if (sock_owned_by_user(sk)) {
 		pr_debug("%s: sock is busy\n", __func__);
 
 		/* Try again later. */
···
 			   transport, GFP_ATOMIC);
 
 	if (error)
-		asoc->base.sk->sk_err = -error;
+		sk->sk_err = -error;
 
 out_unlock:
-	bh_unlock_sock(asoc->base.sk);
+	bh_unlock_sock(sk);
 	sctp_transport_put(transport);
 }
···
 static void sctp_generate_timeout_event(struct sctp_association *asoc,
 					sctp_event_timeout_t timeout_type)
 {
-	struct net *net = sock_net(asoc->base.sk);
+	struct sock *sk = asoc->base.sk;
+	struct net *net = sock_net(sk);
 	int error = 0;
 
-	bh_lock_sock(asoc->base.sk);
-	if (sock_owned_by_user(asoc->base.sk)) {
+	bh_lock_sock(sk);
+	if (sock_owned_by_user(sk)) {
 		pr_debug("%s: sock is busy: timer %d\n", __func__,
 			 timeout_type);
 
···
 			   (void *)timeout_type, GFP_ATOMIC);
 
 	if (error)
-		asoc->base.sk->sk_err = -error;
+		sk->sk_err = -error;
 
 out_unlock:
-	bh_unlock_sock(asoc->base.sk);
+	bh_unlock_sock(sk);
 	sctp_association_put(asoc);
 }
···
 	int error = 0;
 	struct sctp_transport *transport = (struct sctp_transport *) data;
 	struct sctp_association *asoc = transport->asoc;
-	struct net *net = sock_net(asoc->base.sk);
+	struct sock *sk = asoc->base.sk;
+	struct net *net = sock_net(sk);
 
-	bh_lock_sock(asoc->base.sk);
-	if (sock_owned_by_user(asoc->base.sk)) {
+	bh_lock_sock(sk);
+	if (sock_owned_by_user(sk)) {
 		pr_debug("%s: sock is busy\n", __func__);
 
 		/* Try again later. */
···
 			   asoc->state, asoc->ep, asoc,
 			   transport, GFP_ATOMIC);
 
-	if (error)
-		asoc->base.sk->sk_err = -error;
+	if (error)
+		sk->sk_err = -error;
 
 out_unlock:
-	bh_unlock_sock(asoc->base.sk);
+	bh_unlock_sock(sk);
 	sctp_transport_put(transport);
 }
···
 {
 	struct sctp_transport *transport = (struct sctp_transport *) data;
 	struct sctp_association *asoc = transport->asoc;
-	struct net *net = sock_net(asoc->base.sk);
+	struct sock *sk = asoc->base.sk;
+	struct net *net = sock_net(sk);
 
-	bh_lock_sock(asoc->base.sk);
-	if (sock_owned_by_user(asoc->base.sk)) {
+	bh_lock_sock(sk);
+	if (sock_owned_by_user(sk)) {
 		pr_debug("%s: sock is busy\n", __func__);
 
 		/* Try again later. */
···
 		   asoc->state, asoc->ep, asoc, transport, GFP_ATOMIC);
 
 out_unlock:
-	bh_unlock_sock(asoc->base.sk);
+	bh_unlock_sock(sk);
 	sctp_association_put(asoc);
 }
 
-19
net/sunrpc/xprtrdma/fmr_ops.c
···
 fmr_op_open(struct rpcrdma_ia *ia, struct rpcrdma_ep *ep,
 	    struct rpcrdma_create_data_internal *cdata)
 {
-	struct ib_device_attr *devattr = &ia->ri_devattr;
-	struct ib_mr *mr;
-
-	/* Obtain an lkey to use for the regbufs, which are
-	 * protected from remote access.
-	 */
-	if (devattr->device_cap_flags & IB_DEVICE_LOCAL_DMA_LKEY) {
-		ia->ri_dma_lkey = ia->ri_device->local_dma_lkey;
-	} else {
-		mr = ib_get_dma_mr(ia->ri_pd, IB_ACCESS_LOCAL_WRITE);
-		if (IS_ERR(mr)) {
-			pr_err("%s: ib_get_dma_mr for failed with %lX\n",
-			       __func__, PTR_ERR(mr));
-			return -ENOMEM;
-		}
-		ia->ri_dma_lkey = ia->ri_dma_mr->lkey;
-		ia->ri_dma_mr = mr;
-	}
-
 	return 0;
 }
 
-5
net/sunrpc/xprtrdma/frwr_ops.c
···
 	struct ib_device_attr *devattr = &ia->ri_devattr;
 	int depth, delta;
 
-	/* Obtain an lkey to use for the regbufs, which are
-	 * protected from remote access.
-	 */
-	ia->ri_dma_lkey = ia->ri_device->local_dma_lkey;
-
 	ia->ri_max_frmr_depth =
 			min_t(unsigned int, RPCRDMA_MAX_DATA_SEGS,
 			      devattr->max_fast_reg_page_list_len);
+1-9
net/sunrpc/xprtrdma/physical_ops.c
···
 physical_op_open(struct rpcrdma_ia *ia, struct rpcrdma_ep *ep,
 		 struct rpcrdma_create_data_internal *cdata)
 {
-	struct ib_device_attr *devattr = &ia->ri_devattr;
 	struct ib_mr *mr;
 
 	/* Obtain an rkey to use for RPC data payloads.
···
 		       __func__, PTR_ERR(mr));
 		return -ENOMEM;
 	}
+
 	ia->ri_dma_mr = mr;
-
-	/* Obtain an lkey to use for regbufs.
-	 */
-	if (devattr->device_cap_flags & IB_DEVICE_LOCAL_DMA_LKEY)
-		ia->ri_dma_lkey = ia->ri_device->local_dma_lkey;
-	else
-		ia->ri_dma_lkey = ia->ri_dma_mr->lkey;
-
 	return 0;
 }
 
···6565 struct rdma_cm_id *ri_id;6666 struct ib_pd *ri_pd;6767 struct ib_mr *ri_dma_mr;6868- u32 ri_dma_lkey;6968 struct completion ri_done;7069 int ri_async_rc;7170 unsigned int ri_max_frmr_depth;
+14-1
net/unix/af_unix.c
···21792179 if (UNIXCB(skb).fp)21802180 scm.fp = scm_fp_dup(UNIXCB(skb).fp);2181218121822182- sk_peek_offset_fwd(sk, chunk);21822182+ if (skip) {21832183+ sk_peek_offset_fwd(sk, chunk);21842184+ skip -= chunk;21852185+ }2183218621872187+ if (UNIXCB(skb).fp)21882188+ break;21892189+21902190+ last = skb;21912191+ last_len = skb->len;21922192+ unix_state_lock(sk);21932193+ skb = skb_peek_next(skb, &sk->sk_receive_queue);21942194+ if (skb)21952195+ goto again;21962196+ unix_state_unlock(sk);21842197 break;21852198 }21862199 } while (size);
+7-7
samples/kprobes/jprobe_example.c
···11/*22 * Here's a sample kernel module showing the use of jprobes to dump33- * the arguments of do_fork().33+ * the arguments of _do_fork().44 *55 * For more information on theory of operation of jprobes, see66 * Documentation/kprobes.txt77 *88 * Build and insert the kernel module as done in the kprobe example.99 * You will see the trace data in /var/log/messages and on the1010- * console whenever do_fork() is invoked to create a new process.1010+ * console whenever _do_fork() is invoked to create a new process.1111 * (Some messages may be suppressed if syslogd is configured to1212 * eliminate duplicate messages.)1313 */···1717#include <linux/kprobes.h>18181919/*2020- * Jumper probe for do_fork.2020+ * Jumper probe for _do_fork.2121 * Mirror principle enables access to arguments of the probed routine2222 * from the probe handler.2323 */24242525-/* Proxy routine having the same arguments as actual do_fork() routine */2626-static long jdo_fork(unsigned long clone_flags, unsigned long stack_start,2525+/* Proxy routine having the same arguments as actual _do_fork() routine */2626+static long j_do_fork(unsigned long clone_flags, unsigned long stack_start,2727 unsigned long stack_size, int __user *parent_tidptr,2828 int __user *child_tidptr)2929{···3636}37373838static struct jprobe my_jprobe = {3939- .entry = jdo_fork,3939+ .entry = j_do_fork,4040 .kp = {4141- .symbol_name = "do_fork",4141+ .symbol_name = "_do_fork",4242 },4343};4444
+3-3
samples/kprobes/kprobe_example.c
···11/*22 * NOTE: This example is works on x86 and powerpc.33 * Here's a sample kernel module showing the use of kprobes to dump a44- * stack trace and selected registers when do_fork() is called.44+ * stack trace and selected registers when _do_fork() is called.55 *66 * For more information on theory of operation of kprobes, see77 * Documentation/kprobes.txt88 *99 * You will see the trace data in /var/log/messages and on the console1010- * whenever do_fork() is invoked to create a new process.1010+ * whenever _do_fork() is invoked to create a new process.1111 */12121313#include <linux/kernel.h>···16161717/* For each probe you need to allocate a kprobe structure */1818static struct kprobe kp = {1919- .symbol_name = "do_fork",1919+ .symbol_name = "_do_fork",2020};21212222/* kprobe pre_handler: called just before the probed instruction is executed */
+2-2
samples/kprobes/kretprobe_example.c
···77 *88 * usage: insmod kretprobe_example.ko func=<func_name>99 *1010- * If no func_name is specified, do_fork is instrumented1010+ * If no func_name is specified, _do_fork is instrumented1111 *1212 * For more information on theory of operation of kretprobes, see1313 * Documentation/kprobes.txt···2525#include <linux/limits.h>2626#include <linux/sched.h>27272828-static char func_name[NAME_MAX] = "do_fork";2828+static char func_name[NAME_MAX] = "_do_fork";2929module_param_string(func, func_name, NAME_MAX, S_IRUGO);3030MODULE_PARM_DESC(func, "Function to kretprobe; this module will report the"3131 " function's execution time");
···2020#include <getopt.h>2121#include <err.h>2222#include <arpa/inet.h>2323+#include <openssl/opensslv.h>2324#include <openssl/bio.h>2425#include <openssl/evp.h>2526#include <openssl/pem.h>2626-#include <openssl/cms.h>2727#include <openssl/err.h>2828#include <openssl/engine.h>2929+3030+/*3131+ * Use CMS if we have openssl-1.0.0 or newer available - otherwise we have to3232+ * assume that it's not available and its header file is missing and that we3333+ * should use PKCS#7 instead. Switching to the older PKCS#7 format restricts3434+ * the options we have on specifying the X.509 certificate we want.3535+ *3636+ * Further, older versions of OpenSSL don't support manually adding signers to3737+ * the PKCS#7 message so have to accept that we get a certificate included in3838+ * the signature message. Nor do such older versions of OpenSSL support3939+ * signing with anything other than SHA1 - so we're stuck with that if such is4040+ * the case.4141+ */4242+#if OPENSSL_VERSION_NUMBER < 0x10000000L4343+#define USE_PKCS74444+#endif4545+#ifndef USE_PKCS74646+#include <openssl/cms.h>4747+#else4848+#include <openssl/pkcs7.h>4949+#endif29503051struct module_signature {3152 uint8_t algo; /* Public-key crypto algorithm [0] */···131110 struct module_signature sig_info = { .id_type = PKEY_ID_PKCS7 };132111 char *hash_algo = NULL;133112 char *private_key_name, *x509_name, *module_name, *dest_name;134134- bool save_cms = false, replace_orig;113113+ bool save_sig = false, replace_orig;135114 bool sign_only = false;136115 unsigned char buf[4096];137137- unsigned long module_size, cms_size;138138- unsigned int use_keyid = 0, use_signed_attrs = CMS_NOATTR;116116+ unsigned long module_size, sig_size;117117+ unsigned int use_signed_attrs;139118 const EVP_MD *digest_algo;140119 EVP_PKEY *private_key;120120+#ifndef USE_PKCS7141121 CMS_ContentInfo *cms;122122+ unsigned int use_keyid = 0;123123+#else124124+ PKCS7 *pkcs7;125125+#endif142126 X509 *x509;143127 BIO *b, *bd = NULL, *bm;144128 int opt, n;
145145-146129 OpenSSL_add_all_algorithms();147130 ERR_load_crypto_strings();148131 ERR_clear_error();149132150133 key_pass = getenv("KBUILD_SIGN_PIN");151134135135+#ifndef USE_PKCS7136136+ use_signed_attrs = CMS_NOATTR;137137+#else138138+ use_signed_attrs = PKCS7_NOATTR;139139+#endif140140+152141 do {153142 opt = getopt(argc, argv, "dpk");154143 switch (opt) {155155- case 'p': save_cms = true; break;156156- case 'd': sign_only = true; save_cms = true; break;144144+ case 'p': save_sig = true; break;145145+ case 'd': sign_only = true; save_sig = true; break;146146+#ifndef USE_PKCS7157147 case 'k': use_keyid = CMS_USE_KEYID; break;148148+#endif158149 case -1: break;159150 default: format();160151 }···189156 "asprintf");190157 replace_orig = true;191158 }159159+160160+#ifdef USE_PKCS7161161+ if (strcmp(hash_algo, "sha1") != 0) {162162+ fprintf(stderr, "sign-file: %s only supports SHA1 signing\n",163163+ OPENSSL_VERSION_TEXT);164164+ exit(3);165165+ }166166+#endif192167193168 /* Read the private key and the X.509 cert the PKCS#7 message194169 * will point to.···254213 bm = BIO_new_file(module_name, "rb");255214 ERR(!bm, "%s", module_name);256215257257- /* Load the CMS message from the digest buffer. */216216+#ifndef USE_PKCS7217217+ /* Load the signature message from the digest buffer. */
258218 cms = CMS_sign(NULL, NULL, NULL, NULL,259219 CMS_NOCERTS | CMS_PARTIAL | CMS_BINARY | CMS_DETACHED | CMS_STREAM);260220 ERR(!cms, "CMS_sign");···263221 ERR(!CMS_add1_signer(cms, x509, private_key, digest_algo,264222 CMS_NOCERTS | CMS_BINARY | CMS_NOSMIMECAP |265223 use_keyid | use_signed_attrs),266266- "CMS_sign_add_signer");224224+ "CMS_add1_signer");267225 ERR(CMS_final(cms, bm, NULL, CMS_NOCERTS | CMS_BINARY) < 0,268226 "CMS_final");269227270270- if (save_cms) {271271- char *cms_name;228228+#else229229+ pkcs7 = PKCS7_sign(x509, private_key, NULL, bm,230230+ PKCS7_NOCERTS | PKCS7_BINARY |231231+ PKCS7_DETACHED | use_signed_attrs);232232+ ERR(!pkcs7, "PKCS7_sign");233233+#endif272234273273- ERR(asprintf(&cms_name, "%s.p7s", module_name) < 0, "asprintf");274274- b = BIO_new_file(cms_name, "wb");275275- ERR(!b, "%s", cms_name);276276- ERR(i2d_CMS_bio_stream(b, cms, NULL, 0) < 0, "%s", cms_name);235235+ if (save_sig) {236236+ char *sig_file_name;237237+238238+ ERR(asprintf(&sig_file_name, "%s.p7s", module_name) < 0,239239+ "asprintf");240240+ b = BIO_new_file(sig_file_name, "wb");241241+ ERR(!b, "%s", sig_file_name);242242+#ifndef USE_PKCS7243243+ ERR(i2d_CMS_bio_stream(b, cms, NULL, 0) < 0,244244+ "%s", sig_file_name);245245+#else246246+ ERR(i2d_PKCS7_bio(b, pkcs7) < 0,247247+ "%s", sig_file_name);248248+#endif277249 BIO_free(b);278250 }279251···303247 ERR(n < 0, "%s", module_name);304248 module_size = BIO_number_written(bd);305249250250+#ifndef USE_PKCS7306251 ERR(i2d_CMS_bio_stream(bd, cms, NULL, 0) < 0, "%s", dest_name);307307- cms_size = BIO_number_written(bd) - module_size;308308- sig_info.sig_len = htonl(cms_size);252252+#else253253+ ERR(i2d_PKCS7_bio(bd, pkcs7) < 0, "%s", dest_name);254254+#endif255255+ sig_size = BIO_number_written(bd) - module_size;256256+ sig_info.sig_len = htonl(sig_size);309257 ERR(BIO_write(bd, &sig_info, sizeof(sig_info)) < 0, "%s", dest_name);310258 ERR(BIO_write(bd, magic_number, sizeof(magic_number) - 1) < 0, "%s", dest_name);311259
+4-4
security/keys/gc.c
···134134 kdebug("- %u", key->serial);135135 key_check(key);136136137137+ /* Throw away the key data */138138+ if (key->type->destroy)139139+ key->type->destroy(key);140140+137141 security_key_free(key);138142139143 /* deal with the user's key tracking and quota */···151147 atomic_dec(&key->user->nkeys);152148 if (test_bit(KEY_FLAG_INSTANTIATED, &key->flags))153149 atomic_dec(&key->user->nikeys);154154-155155- /* now throw away the key memory */156156- if (key->type->destroy)157157- key->type->destroy(key);158150159151 key_user_put(key->user);160152
+6-2
tools/build/Makefile.feature
···4141 libelf-getphdrnum \4242 libelf-mmap \4343 libnuma \4444+ numa_num_possible_cpus \4445 libperl \4546 libpython \4647 libpython-version \···5251 timerfd \5352 libdw-dwarf-unwind \5453 zlib \5555- lzma5454+ lzma \5555+ get_cpuid56565757FEATURE_DISPLAY ?= \5858 dwarf \···6361 libbfd \6462 libelf \6563 libnuma \6464+ numa_num_possible_cpus \6665 libperl \6766 libpython \6867 libslang \6968 libunwind \7069 libdw-dwarf-unwind \7170 zlib \7272- lzma7171+ lzma \7272+ get_cpuid73737474# Set FEATURE_CHECK_(C|LD)FLAGS-all for all FEATURE_TESTS features.7575# If in the future we need per-feature checks/flags for features not
···7777# include "test-libnuma.c"7878#undef main79798080+#define main main_test_numa_num_possible_cpus8181+# include "test-numa_num_possible_cpus.c"8282+#undef main8383+8084#define main main_test_timerfd8185# include "test-timerfd.c"8286#undef main···121117# include "test-lzma.c"122118#undef main123119120120+#define main main_test_get_cpuid121121+# include "test-get_cpuid.c"122122+#undef main123123+124124int main(int argc, char *argv[])125125{126126 main_test_libpython();···144136 main_test_libbfd();145137 main_test_backtrace();146138 main_test_libnuma();139139+ main_test_numa_num_possible_cpus();147140 main_test_timerfd();148141 main_test_stackprotector_all();149142 main_test_libdw_dwarf_unwind();···152143 main_test_zlib();153144 main_test_pthread_attr_setaffinity_np();154145 main_test_lzma();146146+ main_test_get_cpuid();155147156148 return 0;157149}
···37953795 struct format_field *field;37963796 struct printk_map *printk;37973797 long long val, fval;37983798- unsigned long addr;37983798+ unsigned long long addr;37993799 char *str;38003800 unsigned char *hex;38013801 int print;···38283828 */38293829 if (!(field->flags & FIELD_IS_ARRAY) &&38303830 field->size == pevent->long_size) {38313831- addr = *(unsigned long *)(data + field->offset);38313831+38323832+ /* Handle heterogeneous recording and processing38333833+ * architectures38343834+ *38353835+ * CASE I:38363836+ * Traces recorded on 32-bit devices (32-bit38373837+ * addressing) and processed on 64-bit devices:38383838+ * In this case, only 32 bits should be read.38393839+ *38403840+ * CASE II:38413841+ * Traces recorded on 64 bit devices and processed38423842+ * on 32-bit devices:38433843+ * In this case, 64 bits must be read.38443844+ */38453845+ addr = (pevent->long_size == 8) ?38463846+ *(unsigned long long *)(data + field->offset) :38473847+ (unsigned long long)*(unsigned int *)(data + field->offset);38483848+38323849 /* Check if it matches a print format */38333850 printk = find_printk(pevent, addr);38343851 if (printk)38353852 trace_seq_puts(s, printk->printk);38363853 else38373837- trace_seq_printf(s, "%lx", addr);38543854+ trace_seq_printf(s, "%llx", addr);38383855 break;38393856 }38403857 str = malloc(len + 1);
-15
tools/perf/Documentation/intel-pt.txt
···364364365365 CYC packets are not requested by default.366366367367-no_force_psb This is a driver option and is not in the IA32_RTIT_CTL MSR.368368-369369- It stops the driver resetting the byte count to zero whenever370370- enabling the trace (for example on context switches) which in371371- turn results in no PSB being forced. However some processors372372- will produce a PSB anyway.373373-374374- In any case, there is still a PSB when the trace is enabled for375375- the first time.376376-377377- no_force_psb can be used to slightly decrease the trace size but378378- may make it harder for the decoder to recover from errors.379379-380380- no_force_psb is not selected by default.381381-382367383368new snapshot option384369-------------------
+15-5
tools/perf/config/Makefile
···573573 msg := $(warning No numa.h found, disables 'perf bench numa mem' benchmark, please install numactl-devel/libnuma-devel/libnuma-dev);574574 NO_LIBNUMA := 1575575 else576576- CFLAGS += -DHAVE_LIBNUMA_SUPPORT577577- EXTLIBS += -lnuma578578- $(call detected,CONFIG_NUMA)576576+ ifeq ($(feature-numa_num_possible_cpus), 0)577577+ msg := $(warning Old numa library found, disables 'perf bench numa mem' benchmark, please install numactl-devel/libnuma-devel/libnuma-dev >= 2.0.8);578578+ NO_LIBNUMA := 1579579+ else580580+ CFLAGS += -DHAVE_LIBNUMA_SUPPORT581581+ EXTLIBS += -lnuma582582+ $(call detected,CONFIG_NUMA)583583+ endif579584 endif580585endif581586···626621endif627622628623ifndef NO_AUXTRACE629629- $(call detected,CONFIG_AUXTRACE)630630- CFLAGS += -DHAVE_AUXTRACE_SUPPORT624624+ ifeq ($(feature-get_cpuid), 0)625625+ msg := $(warning Your gcc lacks the __get_cpuid() builtin, disables support for auxtrace/Intel PT, please install a newer gcc);626626+ NO_AUXTRACE := 1627627+ else628628+ $(call detected,CONFIG_AUXTRACE)629629+ CFLAGS += -DHAVE_AUXTRACE_SUPPORT630630+ endif631631endif632632633633# Among the variables below, these:
+7-6
tools/perf/util/probe-event.c
···270270 int ret = 0;271271272272 if (module) {273273- list_for_each_entry(dso, &host_machine->dsos.head, node) {274274- if (!dso->kernel)275275- continue;276276- if (strncmp(dso->short_name + 1, module,277277- dso->short_name_len - 2) == 0)278278- goto found;273273+ char module_name[128];274274+275275+ snprintf(module_name, sizeof(module_name), "[%s]", module);276276+ map = map_groups__find_by_name(&host_machine->kmaps, MAP__FUNCTION, module_name);277277+ if (map) {278278+ dso = map->dso;279279+ goto found;279280 }280281 pr_debug("Failed to find module %s.\n", module);281282 return -ENOENT;
+4-1
tools/perf/util/session.c
···15801580 file_offset = page_offset;15811581 head = data_offset - page_offset;1582158215831583- if (data_size && (data_offset + data_size < file_size))15831583+ if (data_size == 0)15841584+ goto out;15851585+15861586+ if (data_offset + data_size < file_size)15841587 file_size = data_offset + data_size;1585158815861589 ui_progress__init(&prog, file_size, "Processing events...");
+14-2
tools/perf/util/stat.c
···196196 memset(counter->per_pkg_mask, 0, MAX_NR_CPUS);197197}198198199199-static int check_per_pkg(struct perf_evsel *counter, int cpu, bool *skip)199199+static int check_per_pkg(struct perf_evsel *counter,200200+ struct perf_counts_values *vals, int cpu, bool *skip)200201{201202 unsigned long *mask = counter->per_pkg_mask;202203 struct cpu_map *cpus = perf_evsel__cpus(counter);···219218 counter->per_pkg_mask = mask;220219 }221220221221+ /*222222+ * we do not consider an event that has not run as a good223223+ * instance to mark a package as used (skip=1). Otherwise224224+ * we may run into a situation where the first CPU in a package225225+ * is not running anything, yet the second is, and this function226226+ * would mark the package as used after the first CPU and would227227+ * not read the values from the second CPU.228228+ */229229+ if (!(vals->run && vals->ena))230230+ return 0;231231+222232 s = cpu_map__get_socket(cpus, cpu);223233 if (s < 0)224234 return -1;···247235 static struct perf_counts_values zero;248236 bool skip = false;249237250250- if (check_per_pkg(evsel, cpu, &skip)) {238238+ if (check_per_pkg(evsel, count, cpu, &skip)) {251239 pr_err("failed to read per-pkg counter\n");252240 return -1;253241 }
···709709710710 dir = opendir(procfs__mountpoint());711711 if (!dir)712712- return -1;712712+ return false;713713714714 /* Walk through the directory. */715715 while (ret && (d = readdir(dir)) != NULL) {
+34-5
tools/power/x86/turbostat/turbostat.c
···7171unsigned int extra_msr_offset64;7272unsigned int extra_delta_offset32;7373unsigned int extra_delta_offset64;7474+unsigned int aperf_mperf_multiplier = 1;7475int do_smi;7576double bclk;7777+double base_hz;7878+double tsc_tweak = 1.0;7679unsigned int show_pkg;7780unsigned int show_core;7881unsigned int show_cpu;···505502 /* %Busy */506503 if (has_aperf) {507504 if (!skip_c0)508508- outp += sprintf(outp, "%8.2f", 100.0 * t->mperf/t->tsc);505505+ outp += sprintf(outp, "%8.2f", 100.0 * t->mperf/t->tsc/tsc_tweak);509506 else510507 outp += sprintf(outp, "********");511508 }···513510 /* Bzy_MHz */514511 if (has_aperf)515512 outp += sprintf(outp, "%8.0f",516516- 1.0 * t->tsc / units * t->aperf / t->mperf / interval_float);513513+ 1.0 * t->tsc * tsc_tweak / units * t->aperf / t->mperf / interval_float);517514518515 /* TSC_MHz */519516 outp += sprintf(outp, "%8.0f", 1.0 * t->tsc/units/interval_float);···987984 return -3;988985 if (get_msr(cpu, MSR_IA32_MPERF, &t->mperf))989986 return -4;987987+ t->aperf = t->aperf * aperf_mperf_multiplier;988988+ t->mperf = t->mperf * aperf_mperf_multiplier;990989 }991990992991 if (do_smi) {···11531148int slv_pkg_cstate_limits[16] = {PCL__0, PCL__1, PCLRSV, PCLRSV, PCL__4, PCLRSV, PCL__6, PCL__7, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV};11541149int amt_pkg_cstate_limits[16] = {PCL__0, PCL__1, PCL__2, PCLRSV, PCLRSV, PCLRSV, PCL__6, PCL__7, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV};11551150int phi_pkg_cstate_limits[16] = {PCL__0, PCL__2, PCL_6N, PCL_6R, PCLRSV, PCLRSV, PCLRSV, PCLUNL, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV};11511151+11521152+11531153+static void11541154+calculate_tsc_tweak()11551155+{11561156+ unsigned long long msr;11571157+ unsigned int base_ratio;11581158+11591159+ get_msr(base_cpu, MSR_NHM_PLATFORM_INFO, &msr);11601160+ base_ratio = (msr >> 8) & 0xFF;11611161+ base_hz = base_ratio * bclk * 1000000;11621162+ tsc_tweak = base_hz / tsc_hz;
11631163+}1156116411571165static void11581166dump_nhm_platform_info(void)···1944192619451927 switch (model) {19461928 case 0x3A: /* IVB */19471947- case 0x3E: /* IVB Xeon */19481948-19491929 case 0x3C: /* HSW */19501930 case 0x3F: /* HSX */19511931 case 0x45: /* HSW */···25592543 return 0;25602544}2561254525462546+unsigned int get_aperf_mperf_multiplier(unsigned int family, unsigned int model)25472547+{25482548+ if (is_knl(family, model))25492549+ return 1024;25502550+ return 1;25512551+}25522552+25622553#define SLM_BCLK_FREQS 525632554double slm_freq_table[SLM_BCLK_FREQS] = { 83.3, 100.0, 133.3, 116.7, 80.0};25642555···27672744 }27682745 }2769274627472747+ if (has_aperf)27482748+ aperf_mperf_multiplier = get_aperf_mperf_multiplier(family, model);27492749+27702750 do_nhm_platform_info = do_nhm_cstates = do_smi = probe_nhm_msrs(family, model);27712751 do_snb_cstates = has_snb_msrs(family, model);27722752 do_pc2 = do_snb_cstates && (pkg_cstate_limit >= PCL__2);···2787276127882762 if (debug)27892763 dump_cstate_pstate_config_info();27642764+27652765+ if (has_skl_msrs(family, model))27662766+ calculate_tsc_tweak();2790276727912768 return;27922769}···31193090}3120309131213092void print_version() {31223122- fprintf(stderr, "turbostat version 4.7 17-June, 2015"30933093+ fprintf(stderr, "turbostat version 4.8 26-Sep, 2015"31233094 " - Len Brown <lenb@kernel.org>\n");31243095}31253096