Linux kernel mirror (for testing): git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Conflicts:
drivers/net/ethernet/renesas/ravb_main.c
kernel/bpf/syscall.c
net/ipv4/ipmr.c

All three conflicts were cases of overlapping changes.

Signed-off-by: David S. Miller <davem@davemloft.net>

Total diffstat: +9756 -4835
+5 -2
Documentation/IPMI.txt
···
587 587
588 588   modprobe ipmi_watchdog timeout=<t> pretimeout=<t> action=<action type>
589 589   preaction=<preaction type> preop=<preop type> start_now=x
590 -     nowayout=x ifnum_to_use=n
590 +     nowayout=x ifnum_to_use=n panic_wdt_timeout=<t>
591 591
592 592   ifnum_to_use specifies which interface the watchdog timer should use.
593 593   The default is -1, which means to pick the first one registered.
···
597 597   occur (if pretimeout is zero, then pretimeout will not be enabled). Note
598 598   that the pretimeout is the time before the final timeout. So if the
599 599   timeout is 50 seconds and the pretimeout is 10 seconds, then the pretimeout
600 -     will occur in 40 second (10 seconds before the timeout).
600 +     will occur in 40 second (10 seconds before the timeout). The panic_wdt_timeout
601 +     is the value of timeout which is set on kernel panic, in order to let actions
602 +     such as kdump to occur during panic.
601 603
602 604   The action may be "reset", "power_cycle", or "power_off", and
603 605   specifies what to do when the timer times out, and defaults to
···
636 634   ipmi_watchdog.preop=<preop type>
637 635   ipmi_watchdog.start_now=x
638 636   ipmi_watchdog.nowayout=x
637 +     ipmi_watchdog.panic_wdt_timeout=<t>
639 638
640 639   The options are the same as the module parameter options.
641 640
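The paragraphs above are end-user documentation, but the new panic_wdt_timeout knob also has a driver-side half. As a hypothetical illustration only (not the actual ipmi_watchdog source, whose default and parameter ops may differ), a module parameter like this is typically wired up as:

    #include <linux/module.h>

    /* Illustrative sketch: default value and permissions are assumptions. */
    static int panic_wdt_timeout = 255;	/* seconds to arm on panic */
    module_param(panic_wdt_timeout, int, 0644);
    MODULE_PARM_DESC(panic_wdt_timeout,
    		 "Watchdog timeout in seconds on kernel panic");

A declaration of this shape is what makes both "modprobe ipmi_watchdog panic_wdt_timeout=<t>" and the matching ipmi_watchdog.panic_wdt_timeout=<t> boot argument work.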
-18
Documentation/arm/keystone/Overview.txt
···
49 49   The device tree documentation for the keystone machines are located at
50 50   Documentation/devicetree/bindings/arm/keystone/keystone.txt
51 51
52 -     Known issues & workaround
53 -     -------------------------
54 -
55 -     Some of the device drivers used on keystone are re-used from that from
56 -     DaVinci and other TI SoCs. These device drivers may use clock APIs directly.
57 -     Some of the keystone specific drivers such as netcp uses run time power
58 -     management API instead to enable clock. As this API has limitations on
59 -     keystone, following workaround is needed to boot Linux.
60 -
61 -     Add 'clk_ignore_unused' to the bootargs env variable in u-boot. Otherwise
62 -     clock frameworks will try to disable clocks that are unused and disable
63 -     the hardware. This is because netcp related power domain and clock
64 -     domains are enabled in u-boot as run time power management API currently
65 -     doesn't enable clocks for netcp due to a limitation. This workaround is
66 -     expected to be removed in the future when proper API support becomes
67 -     available. Until then, this work around is needed.
68 -
69 -
70 52   Document Author
71 53   ---------------
72 54   Murali Karicheri <m-karicheri2@ti.com>
+3
Documentation/block/null_blk.txt
···
70 70   parameter.
71 71   1: The multi-queue block layer is instantiated with a hardware dispatch
72 72   queue for each CPU node in the system.
73 +
74 +     use_lightnvm=[0/1]: Default: 0
75 +     Register device with LightNVM. Requires blk-mq to be used.
+6
Documentation/devicetree/bindings/net/marvell-armada-370-neta.txt
···
8 8    - phy-mode: See ethernet.txt file in the same directory
9 9    - clocks: a pointer to the reference clock for this device.
10 10
11 +     Optional properties:
12 +     - tx-csum-limit: maximum mtu supported by port that allow TX checksum.
13 +     Value is presented in bytes. If not used, by default 1600B is set for
14 +     "marvell,armada-370-neta" and 9800B for others.
15 +
11 16   Example:
12 17
13 18   ethernet@d0070000 {
···
20 15   reg = <0xd0070000 0x2500>;
21 16   interrupts = <8>;
22 17   clocks = <&gate_clk 4>;
18 +     tx-csum-limit = <9800>
23 19   status = "okay";
24 20   phy = <&phy0>;
25 21   phy-mode = "rgmii-id";
+3 -1
Documentation/devicetree/bindings/thermal/rockchip-thermal.txt
···
1 1    * Temperature Sensor ADC (TSADC) on rockchip SoCs
2 2
3 3    Required properties:
4 -     - compatible : "rockchip,rk3288-tsadc"
4 +     - compatible : should be "rockchip,<name>-tsadc"
5 +     "rockchip,rk3288-tsadc": found on RK3288 SoCs
6 +     "rockchip,rk3368-tsadc": found on RK3368 SoCs
5 7    - reg : physical base address of the controller and length of memory mapped
6 8    region.
7 9    - interrupts : The interrupt number to the cpu. The interrupt specifier format
+1
Documentation/i2c/busses/i2c-i801
···
32 32   * Intel Sunrise Point-LP (PCH)
33 33   * Intel DNV (SOC)
34 34   * Intel Broxton (SOC)
35 +     * Intel Lewisburg (PCH)
35 36   Datasheets: Publicly available at the Intel website
36 37
37 38   On Intel Patsburg and later chipsets, both the normal host SMBus controller
-3
Documentation/kernel-parameters.txt
···
1583 1583   hwp_only
1584 1584   Only load intel_pstate on systems which support
1585 1585   hardware P state control (HWP) if available.
1586 -      no_acpi
1587 -      Don't use ACPI processor performance control objects
1588 -      _PSS and _PPC specified limits.
1589 1586
1590 1587   intremap= [X86-64, Intel-IOMMU]
1591 1588   on enable Interrupt Remapping (default)
+23 -8
MAINTAINERS
···
1847 1847   F: drivers/net/wireless/ath/ath6kl/
1848 1848
1849 1849   WILOCITY WIL6210 WIRELESS DRIVER
1850 -      M: Vladimir Kondratiev <qca_vkondrat@qca.qualcomm.com>
1850 +      M: Maya Erez <qca_merez@qca.qualcomm.com>
1851 1851   L: linux-wireless@vger.kernel.org
1852 1852   L: wil6210@qca.qualcomm.com
1853 1853   S: Supported
···
1931 1931   F: drivers/i2c/busses/i2c-at91.c
1932 1932
1933 1933   ATMEL ISI DRIVER
1934 -      M: Josh Wu <josh.wu@atmel.com>
1934 +      M: Ludovic Desroches <ludovic.desroches@atmel.com>
1935 1935   L: linux-media@vger.kernel.org
1936 1936   S: Supported
1937 1937   F: drivers/media/platform/soc_camera/atmel-isi.c
···
1950 1950   F: drivers/net/ethernet/cadence/
1951 1951
1952 1952   ATMEL NAND DRIVER
1953 -      M: Josh Wu <josh.wu@atmel.com>
1953 +      M: Wenyou Yang <wenyou.yang@atmel.com>
1954 +      M: Josh Wu <rainyfeeling@outlook.com>
1954 1955   L: linux-mtd@lists.infradead.org
1955 1956   S: Supported
1956 1957   F: drivers/mtd/nand/atmel_nand*
···
2450 2449
2451 2450   BROADCOM STB NAND FLASH DRIVER
2452 2451   M: Brian Norris <computersforpeace@gmail.com>
2452 +      M: Kamal Dasu <kdasu.kdev@gmail.com>
2453 2453   L: linux-mtd@lists.infradead.org
2454 +      L: bcm-kernel-feedback-list@broadcom.com
2454 2455   S: Maintained
2455 2456   F: drivers/mtd/nand/brcmnand/
2456 2457
···
2932 2929   F: drivers/platform/x86/compal-laptop.c
2933 2930
2934 2931   CONEXANT ACCESSRUNNER USB DRIVER
2935 -      M: Simon Arlott <cxacru@fire.lp0.eu>
2936 2932   L: accessrunner-general@lists.sourceforge.net
2937 2933   W: http://accessrunner.sourceforge.net/
2938 -      S: Maintained
2934 +      S: Orphan
2939 2935   F: drivers/usb/atm/cxacru.c
2940 2936
2941 2937   CONFIGFS
···
4411 4409
4412 4410   FPGA MANAGER FRAMEWORK
4413 4411   M: Alan Tull <atull@opensource.altera.com>
4412 +      R: Moritz Fischer <moritz.fischer@ettus.com>
4414 4413   S: Maintained
4415 4414   F: drivers/fpga/
4416 4415   F: include/linux/fpga/fpga-mgr.h
···
6367 6364   LIGHTNVM PLATFORM SUPPORT
6368 6365   M: Matias Bjorling <mb@lightnvm.io>
6369 6366   W: http://github/OpenChannelSSD
6367 +      L: linux-block@vger.kernel.org
6370 6368   S: Maintained
6371 6369   F: drivers/lightnvm/
6372 6370   F: include/linux/lightnvm.h
···
7913 7909   F: net/openvswitch/
7914 7910   F: include/uapi/linux/openvswitch.h
7915 7911
7912 +      OPERATING PERFORMANCE POINTS (OPP)
7913 +      M: Viresh Kumar <vireshk@kernel.org>
7914 +      M: Nishanth Menon <nm@ti.com>
7915 +      M: Stephen Boyd <sboyd@codeaurora.org>
7916 +      L: linux-pm@vger.kernel.org
7917 +      S: Maintained
7918 +      T: git git://git.kernel.org/pub/scm/linux/kernel/git/vireshk/pm.git
7919 +      F: drivers/base/power/opp/
7920 +      F: include/linux/pm_opp.h
7921 +      F: Documentation/power/opp.txt
7922 +      F: Documentation/devicetree/bindings/opp/
7923 +
7916 7924   OPL4 DRIVER
7917 7925   M: Clemens Ladisch <clemens@ladisch.de>
7918 7926   L: alsa-devel@alsa-project.org (moderated for non-subscribers)
···
9338 9322   F: include/linux/platform_data/i2c-designware.h
9339 9323
9340 9324   SYNOPSYS DESIGNWARE MMC/SD/SDIO DRIVER
9341 -      M: Seungwon Jeon <tgih.jun@samsung.com>
9342 9325   M: Jaehoon Chung <jh80.chung@samsung.com>
9343 9326   L: linux-mmc@vger.kernel.org
9344 9327   S: Maintained
···
10910 10895   F: drivers/media/tuners/tua9001*
10911 10896
10912 10897   TULIP NETWORK DRIVERS
10913 -       M: Grant Grundler <grundler@parisc-linux.org>
10914 10898   L: netdev@vger.kernel.org
10899 +       L: linux-parisc@vger.kernel.org
10915 -       S: Maintained
10900 +       S: Orphan
10916 10901   F: drivers/net/ethernet/dec/tulip/
10917 10902
10918 10903   TUN/TAP driver
+1 -1
Makefile
···
1 1    VERSION = 4
2 2    PATCHLEVEL = 4
3 3    SUBLEVEL = 0
4 -     EXTRAVERSION = -rc1
4 +     EXTRAVERSION = -rc3
5 5    NAME = Blurry Fish Butt
6 6
7 7    # *DOCUMENTATION*
+1 -1
arch/arc/configs/axs101_defconfig
···
1 -     CONFIG_CROSS_COMPILE="arc-linux-uclibc-"
1 +     CONFIG_CROSS_COMPILE="arc-linux-"
2 2    CONFIG_DEFAULT_HOSTNAME="ARCLinux"
3 3    # CONFIG_SWAP is not set
4 4    CONFIG_SYSVIPC=y
+1 -1
arch/arc/configs/axs103_defconfig
···
1 -     CONFIG_CROSS_COMPILE="arc-linux-uclibc-"
1 +     CONFIG_CROSS_COMPILE="arc-linux-"
2 2    CONFIG_DEFAULT_HOSTNAME="ARCLinux"
3 3    # CONFIG_SWAP is not set
4 4    CONFIG_SYSVIPC=y
+1 -1
arch/arc/configs/axs103_smp_defconfig
···
1 -     CONFIG_CROSS_COMPILE="arc-linux-uclibc-"
1 +     CONFIG_CROSS_COMPILE="arc-linux-"
2 2    CONFIG_DEFAULT_HOSTNAME="ARCLinux"
3 3    # CONFIG_SWAP is not set
4 4    CONFIG_SYSVIPC=y
+1 -1
arch/arc/configs/nsim_hs_defconfig
···
1 -     CONFIG_CROSS_COMPILE="arc-linux-uclibc-"
1 +     CONFIG_CROSS_COMPILE="arc-linux-"
2 2    # CONFIG_LOCALVERSION_AUTO is not set
3 3    CONFIG_DEFAULT_HOSTNAME="ARCLinux"
4 4    # CONFIG_SWAP is not set
+1 -1
arch/arc/configs/nsim_hs_smp_defconfig
···
1 -     CONFIG_CROSS_COMPILE="arc-linux-uclibc-"
1 +     CONFIG_CROSS_COMPILE="arc-linux-"
2 2    # CONFIG_LOCALVERSION_AUTO is not set
3 3    CONFIG_DEFAULT_HOSTNAME="ARCLinux"
4 4    # CONFIG_SWAP is not set
+1 -1
arch/arc/configs/nsimosci_hs_defconfig
···
1 -     CONFIG_CROSS_COMPILE="arc-linux-uclibc-"
1 +     CONFIG_CROSS_COMPILE="arc-linux-"
2 2    # CONFIG_LOCALVERSION_AUTO is not set
3 3    CONFIG_DEFAULT_HOSTNAME="ARCLinux"
4 4    # CONFIG_SWAP is not set
+1 -1
arch/arc/configs/nsimosci_hs_smp_defconfig
···
1 -     CONFIG_CROSS_COMPILE="arc-linux-uclibc-"
1 +     CONFIG_CROSS_COMPILE="arc-linux-"
2 2    CONFIG_DEFAULT_HOSTNAME="ARCLinux"
3 3    # CONFIG_SWAP is not set
4 4    CONFIG_SYSVIPC=y
+1 -1
arch/arc/configs/vdk_hs38_defconfig
···
1 -     CONFIG_CROSS_COMPILE="arc-linux-uclibc-"
1 +     CONFIG_CROSS_COMPILE="arc-linux-"
2 2    # CONFIG_LOCALVERSION_AUTO is not set
3 3    CONFIG_DEFAULT_HOSTNAME="ARCLinux"
4 4    # CONFIG_CROSS_MEMORY_ATTACH is not set
+1 -1
arch/arc/configs/vdk_hs38_smp_defconfig
···
1 -     CONFIG_CROSS_COMPILE="arc-linux-uclibc-"
1 +     CONFIG_CROSS_COMPILE="arc-linux-"
2 2    # CONFIG_LOCALVERSION_AUTO is not set
3 3    CONFIG_DEFAULT_HOSTNAME="ARCLinux"
4 4    # CONFIG_CROSS_MEMORY_ATTACH is not set
+3
arch/arc/include/asm/irqflags-arcv2.h
···
37 37   #define ISA_INIT_STATUS_BITS (STATUS_IE_MASK | STATUS_AD_MASK | \
38 38   (ARCV2_IRQ_DEF_PRIO << 1))
39 39
40 +     /* SLEEP needs default irq priority (<=) which can interrupt the doze */
41 +     #define ISA_SLEEP_ARG (0x10 | ARCV2_IRQ_DEF_PRIO)
42 +
40 43   #ifndef __ASSEMBLY__
41 44
42 45   /*
+2
arch/arc/include/asm/irqflags-compact.h
···
43 43
44 44   #define ISA_INIT_STATUS_BITS STATUS_IE_MASK
45 45
46 +     #define ISA_SLEEP_ARG 0x3
47 +
46 48   #ifndef __ASSEMBLY__
47 49
48 50   /******************************************************************
-2
arch/arc/kernel/ctx_sw.c
···
58 58   "st sp, [r24] \n\t"
59 59   #endif
60 60
61 -     "sync \n\t"
62 -
63 61   /*
64 62   * setup _current_task with incoming tsk.
65 63   * optionally, set r25 to that as well
-3
arch/arc/kernel/ctx_sw_asm.S
···
44 44   * don't need to do anything special to return it
45 45   */
46 46
47 -     /* hardware memory barrier */
48 -     sync
49 -
50 47   /*
51 48   * switch to new task, contained in r1
52 49   * Temp reg r3 is required to get the ptr to store val
+4 -5
arch/arc/kernel/process.c
···
44 44   void arch_cpu_idle(void)
45 45   {
46 46   /* sleep, but enable all interrupts before committing */
47 -     if (is_isa_arcompact()) {
48 -     __asm__("sleep 0x3");
49 -     } else {
50 -     __asm__("sleep 0x10");
51 -     }
47 +     __asm__ __volatile__(
48 +     "sleep %0 \n"
49 +     :
50 +     :"I"(ISA_SLEEP_ARG)); /* can't be "r" has to be embedded const */
52 51   }
53 52
54 53   asmlinkage void ret_from_fork(void);
+4 -33
arch/arc/kernel/unwind.c
···
986 986   (const u8 *)(fde +
987 987   1) +
988 988   *fde, ptrType);
989 -     if (pc >= endLoc)
989 +     if (pc >= endLoc) {
990 990   fde = NULL;
991 -     } else
992 -     fde = NULL;
993 -     }
994 -     if (fde == NULL) {
995 -     for (fde = table->address, tableSize = table->size;
996 -     cie = NULL, tableSize > sizeof(*fde)
997 -     && tableSize - sizeof(*fde) >= *fde;
998 -     tableSize -= sizeof(*fde) + *fde,
999 -     fde += 1 + *fde / sizeof(*fde)) {
1000 -    cie = cie_for_fde(fde, table);
1001 -    if (cie == &bad_cie) {
1002 991  cie = NULL;
1003 -    break;
1004 992  }
1005 -    if (cie == NULL
1006 -    || cie == &not_fde
1007 -    || (ptrType = fde_pointer_type(cie)) < 0)
1008 -    continue;
1009 -    ptr = (const u8 *)(fde + 2);
1010 -    startLoc = read_pointer(&ptr,
1011 -    (const u8 *)(fde + 1) +
1012 -    *fde, ptrType);
1013 -    if (!startLoc)
1014 -    continue;
1015 -    if (!(ptrType & DW_EH_PE_indirect))
1016 -    ptrType &=
1017 -    DW_EH_PE_FORM | DW_EH_PE_signed;
1018 -    endLoc =
1019 -    startLoc + read_pointer(&ptr,
1020 -    (const u8 *)(fde +
1021 -    1) +
1022 -    *fde, ptrType);
1023 -    if (pc >= startLoc && pc < endLoc)
1024 -    break;
993 +     } else {
994 +     fde = NULL;
995 +     cie = NULL;
1025 996  }
1026 997  }
1027 998  }
+2 -2
arch/arc/mm/tlb.c
···
619 619
620 620   int dirty = !test_and_set_bit(PG_dc_clean, &page->flags);
621 621   if (dirty) {
622 -     /* wback + inv dcache lines */
622 +     /* wback + inv dcache lines (K-mapping) */
623 623   __flush_dcache_page(paddr, paddr);
624 624
625 -     /* invalidate any existing icache lines */
625 +     /* invalidate any existing icache lines (U-mapping) */
626 626   if (vma->vm_flags & VM_EXEC)
627 627   __inv_icache_page(paddr, vaddr);
628 628   }
+2 -2
arch/arm/Kconfig
···
76 76   select IRQ_FORCED_THREADING
77 77   select MODULES_USE_ELF_REL
78 78   select NO_BOOTMEM
79 +     select OF_EARLY_FLATTREE if OF
80 +     select OF_RESERVED_MEM if OF
79 81   select OLD_SIGACTION
80 82   select OLD_SIGSUSPEND3
81 83   select PERF_USE_VMALLOC
···
1824 1822   bool "Flattened Device Tree support"
1825 1823   select IRQ_DOMAIN
1826 1824   select OF
1827 -      select OF_EARLY_FLATTREE
1828 -      select OF_RESERVED_MEM
1829 1825   help
1830 1826   Include support for flattened device tree machine descriptions.
1831 1827
+1
arch/arm/boot/dts/am57xx-beagle-x15.dts
···
604 604   reg = <0x6f>;
605 605   interrupts-extended = <&crossbar_mpu GIC_SPI 2 IRQ_TYPE_EDGE_RISING>,
606 606   <&dra7_pmx_core 0x424>;
607 +     interrupt-names = "irq", "wakeup";
607 608
608 609   pinctrl-names = "default";
609 610   pinctrl-0 = <&mcp79410_pins_default>;
+3 -3
arch/arm/boot/dts/animeo_ip.dts
···
155 155   label = "keyswitch_in";
156 156   gpios = <&pioB 1 GPIO_ACTIVE_HIGH>;
157 157   linux,code = <28>;
158 -     gpio-key,wakeup;
158 +     wakeup-source;
159 159   };
160 160
161 161   error_in {
162 162   label = "error_in";
163 163   gpios = <&pioB 2 GPIO_ACTIVE_HIGH>;
164 164   linux,code = <29>;
165 -     gpio-key,wakeup;
165 +     wakeup-source;
166 166   };
167 167
168 168   btn {
169 169   label = "btn";
170 170   gpios = <&pioC 23 GPIO_ACTIVE_HIGH>;
171 171   linux,code = <31>;
172 -     gpio-key,wakeup;
172 +     wakeup-source;
173 173   };
174 174   };
175 175   };
+1
arch/arm/boot/dts/armada-38x.dtsi
···
498 498   reg = <0x70000 0x4000>;
499 499   interrupts-extended = <&mpic 8>;
500 500   clocks = <&gateclk 4>;
501 +     tx-csum-limit = <9800>;
501 502   status = "disabled";
502 503   };
503 504
+1 -1
arch/arm/boot/dts/at91-foxg20.dts
···
159 159   label = "Button";
160 160   gpios = <&pioC 4 GPIO_ACTIVE_LOW>;
161 161   linux,code = <0x103>;
162 -     gpio-key,wakeup;
162 +     wakeup-source;
163 163   };
164 164   };
165 165   };
+2 -11
arch/arm/boot/dts/at91-kizbox.dts
···
24 24   };
25 25
26 26   clocks {
27 -     #address-cells = <1>;
28 -     #size-cells = <1>;
29 -     ranges;
30 -
31 -     main_clock: clock@0 {
32 -     compatible = "atmel,osc", "fixed-clock";
33 -     clock-frequency = <18432000>;
34 -     };
35 -
36 27   main_xtal {
37 28   clock-frequency = <18432000>;
38 29   };
···
85 94   label = "PB_RST";
86 95   gpios = <&pioB 30 GPIO_ACTIVE_HIGH>;
87 96   linux,code = <0x100>;
88 -     gpio-key,wakeup;
97 +     wakeup-source;
89 98   };
90 99
91 100   user {
92 101   label = "PB_USER";
93 102   gpios = <&pioB 31 GPIO_ACTIVE_HIGH>;
94 103   linux,code = <0x101>;
95 -     gpio-key,wakeup;
104 +     wakeup-source;
96 105   };
97 106   };
98 107
+3 -3
arch/arm/boot/dts/at91-kizbox2.dts
···
171 171   label = "PB_PROG";
172 172   gpios = <&pioE 27 GPIO_ACTIVE_LOW>;
173 173   linux,code = <0x102>;
174 -     gpio-key,wakeup;
174 +     wakeup-source;
175 175   };
176 176
177 177   reset {
178 178   label = "PB_RST";
179 179   gpios = <&pioE 29 GPIO_ACTIVE_LOW>;
180 180   linux,code = <0x100>;
181 -     gpio-key,wakeup;
181 +     wakeup-source;
182 182   };
183 183
184 184   user {
185 185   label = "PB_USER";
186 186   gpios = <&pioE 31 GPIO_ACTIVE_HIGH>;
187 187   linux,code = <0x101>;
188 -     gpio-key,wakeup;
188 +     wakeup-source;
189 189   };
190 190   };
191 191
+2 -2
arch/arm/boot/dts/at91-kizboxmini.dts
···
98 98   label = "PB_PROG";
99 99   gpios = <&pioC 17 GPIO_ACTIVE_LOW>;
100 100   linux,code = <0x102>;
101 -     gpio-key,wakeup;
101 +     wakeup-source;
102 102   };
103 103
104 104   reset {
105 105   label = "PB_RST";
106 106   gpios = <&pioC 16 GPIO_ACTIVE_LOW>;
107 107   linux,code = <0x100>;
108 -     gpio-key,wakeup;
108 +     wakeup-source;
109 109   };
110 110   };
111 111
+1 -1
arch/arm/boot/dts/at91-qil_a9260.dts
···
183 183   label = "user_pb";
184 184   gpios = <&pioB 10 GPIO_ACTIVE_LOW>;
185 185   linux,code = <28>;
186 -     gpio-key,wakeup;
186 +     wakeup-source;
187 187   };
188 188   };
189 189
+106 -9
arch/arm/boot/dts/at91-sama5d2_xplained.dts
···
45 45   /dts-v1/;
46 46   #include "sama5d2.dtsi"
47 47   #include "sama5d2-pinfunc.h"
48 +     #include <dt-bindings/mfd/atmel-flexcom.h>
48 49
49 50   / {
50 51   model = "Atmel SAMA5D2 Xplained";
···
60 59   };
61 60
62 61   clocks {
63 -     #address-cells = <1>;
64 -     #size-cells = <1>;
65 -     ranges;
66 -
67 -     main_clock: clock@0 {
68 -     compatible = "atmel,osc", "fixed-clock";
69 -     clock-frequency = <12000000>;
70 -     };
71 -
72 62   slow_xtal {
73 63   clock-frequency = <32768>;
74 64   };
···
81 89
82 90   usb2: ehci@00500000 {
83 91   status = "okay";
92 +     };
93 +
94 +     sdmmc0: sdio-host@a0000000 {
95 +     bus-width = <8>;
96 +     pinctrl-names = "default";
97 +     pinctrl-0 = <&pinctrl_sdmmc0_default>;
98 +     non-removable;
99 +     mmc-ddr-1_8v;
100 +     status = "okay";
101 +     };
102 +
103 +     sdmmc1: sdio-host@b0000000 {
104 +     bus-width = <4>;
105 +     pinctrl-names = "default";
106 +     pinctrl-0 = <&pinctrl_sdmmc1_default>;
107 +     status = "okay"; /* conflict with qspi0 */
84 108   };
85 109
86 110   apb {
···
189 181   };
190 182   };
191 183
184 +     flx0: flexcom@f8034000 {
185 +     atmel,flexcom-mode = <ATMEL_FLEXCOM_MODE_USART>;
186 +     status = "disabled"; /* conflict with ISC_D2 & ISC_D3 data pins */
187 +
188 +     uart5: serial@200 {
189 +     compatible = "atmel,at91sam9260-usart";
190 +     reg = <0x200 0x200>;
191 +     interrupts = <19 IRQ_TYPE_LEVEL_HIGH 7>;
192 +     clocks = <&flx0_clk>;
193 +     clock-names = "usart";
194 +     pinctrl-names = "default";
195 +     pinctrl-0 = <&pinctrl_flx0_default>;
196 +     atmel,fifo-size = <32>;
197 +     status = "okay";
198 +     };
199 +     };
200 +
192 201   uart3: serial@fc008000 {
193 202   pinctrl-names = "default";
194 203   pinctrl-0 = <&pinctrl_uart3_default>;
195 204   status = "okay";
205 +     };
206 +
207 +     flx4: flexcom@fc018000 {
208 +     atmel,flexcom-mode = <ATMEL_FLEXCOM_MODE_TWI>;
209 +     status = "okay";
210 +
211 +     i2c2: i2c@600 {
212 +     compatible = "atmel,sama5d2-i2c";
213 +     reg = <0x600 0x200>;
214 +     interrupts = <23 IRQ_TYPE_LEVEL_HIGH 7>;
215 +     dmas = <0>, <0>;
216 +     dma-names = "tx", "rx";
217 +     #address-cells = <1>;
218 +     #size-cells = <0>;
219 +     clocks = <&flx4_clk>;
220 +     pinctrl-names = "default";
221 +     pinctrl-0 = <&pinctrl_flx4_default>;
222 +     atmel,fifo-size = <16>;
223 +     status = "okay";
224 +     };
196 225   };
197 226
198 227   i2c1: i2c@fc028000 {
···
246 201   };
247 202
248 203   pinctrl@fc038000 {
204 +     pinctrl_flx0_default: flx0_default {
205 +     pinmux = <PIN_PB28__FLEXCOM0_IO0>,
206 +     <PIN_PB29__FLEXCOM0_IO1>;
207 +     bias-disable;
208 +     };
209 +
210 +     pinctrl_flx4_default: flx4_default {
211 +     pinmux = <PIN_PD12__FLEXCOM4_IO0>,
212 +     <PIN_PD13__FLEXCOM4_IO1>;
213 +     bias-disable;
214 +     };
215 +
249 216   pinctrl_i2c0_default: i2c0_default {
250 217   pinmux = <PIN_PD21__TWD0>,
251 218   <PIN_PD22__TWCK0>;
···
282 225   <PIN_PB22__GMDC>,
283 226   <PIN_PB23__GMDIO>;
284 227   bias-disable;
228 +     };
229 +
230 +     pinctrl_sdmmc0_default: sdmmc0_default {
231 +     cmd_data {
232 +     pinmux = <PIN_PA1__SDMMC0_CMD>,
233 +     <PIN_PA2__SDMMC0_DAT0>,
234 +     <PIN_PA3__SDMMC0_DAT1>,
235 +     <PIN_PA4__SDMMC0_DAT2>,
236 +     <PIN_PA5__SDMMC0_DAT3>,
237 +     <PIN_PA6__SDMMC0_DAT4>,
238 +     <PIN_PA7__SDMMC0_DAT5>,
239 +     <PIN_PA8__SDMMC0_DAT6>,
240 +     <PIN_PA9__SDMMC0_DAT7>;
241 +     bias-pull-up;
242 +     };
243 +
244 +     ck_cd_rstn_vddsel {
245 +     pinmux = <PIN_PA0__SDMMC0_CK>,
246 +     <PIN_PA10__SDMMC0_RSTN>,
247 +     <PIN_PA11__SDMMC0_VDDSEL>,
248 +     <PIN_PA13__SDMMC0_CD>;
249 +     bias-disable;
250 +     };
251 +     };
252 +
253 +     pinctrl_sdmmc1_default: sdmmc1_default {
254 +     cmd_data {
255 +     pinmux = <PIN_PA28__SDMMC1_CMD>,
256 +     <PIN_PA18__SDMMC1_DAT0>,
257 +     <PIN_PA19__SDMMC1_DAT1>,
258 +     <PIN_PA20__SDMMC1_DAT2>,
259 +     <PIN_PA21__SDMMC1_DAT3>;
260 +     bias-pull-up;
261 +     };
262 +
263 +     conf-ck_cd {
264 +     pinmux = <PIN_PA22__SDMMC1_CK>,
265 +     <PIN_PA30__SDMMC1_CD>;
266 +     bias-disable;
267 +     };
285 268   };
286 269
287 270   pinctrl_spi0_default: spi0_default {
+1 -1
arch/arm/boot/dts/at91-sama5d3_xplained.dts
···
315 315   label = "PB_USER";
316 316   gpios = <&pioE 29 GPIO_ACTIVE_LOW>;
317 317   linux,code = <0x104>;
318 -     gpio-key,wakeup;
318 +     wakeup-source;
319 319   };
320 320
321 321
+1 -11
arch/arm/boot/dts/at91-sama5d4_xplained.dts
···
50 50   compatible = "atmel,sama5d4-xplained", "atmel,sama5d4", "atmel,sama5";
51 51
52 52   chosen {
53 -     bootargs = "ignore_loglevel earlyprintk";
54 53   stdout-path = "serial0:115200n8";
55 54   };
56 55
···
58 59   };
59 60
60 61   clocks {
61 -     #address-cells = <1>;
62 -     #size-cells = <1>;
63 -     ranges;
64 -
65 -     main_clock: clock@0 {
66 -     compatible = "atmel,osc", "fixed-clock";
67 -     clock-frequency = <12000000>;
68 -     };
69 -
70 62   slow_xtal {
71 63   clock-frequency = <32768>;
72 64   };
···
225 235   label = "pb_user1";
226 236   gpios = <&pioE 8 GPIO_ACTIVE_HIGH>;
227 237   linux,code = <0x100>;
228 -     gpio-key,wakeup;
238 +     wakeup-source;
229 239   };
230 240   };
231 241
+1 -11
arch/arm/boot/dts/at91-sama5d4ek.dts
···
50 50   compatible = "atmel,sama5d4ek", "atmel,sama5d4", "atmel,sama5";
51 51
52 52   chosen {
53 -     bootargs = "ignore_loglevel earlyprintk";
54 53   stdout-path = "serial0:115200n8";
55 54   };
56 55
···
58 59   };
59 60
60 61   clocks {
61 -     #address-cells = <1>;
62 -     #size-cells = <1>;
63 -     ranges;
64 -
65 -     main_clock: clock@0 {
66 -     compatible = "atmel,osc", "fixed-clock";
67 -     clock-frequency = <12000000>;
68 -     };
69 -
70 62   slow_xtal {
71 63   clock-frequency = <32768>;
72 64   };
···
294 304   label = "pb_user1";
295 305   gpios = <&pioE 13 GPIO_ACTIVE_HIGH>;
296 306   linux,code = <0x100>;
297 -     gpio-key,wakeup;
307 +     wakeup-source;
298 308   };
299 309   };
300 310
-9
arch/arm/boot/dts/at91rm9200ek.dts
···
21 21   };
22 22
23 23   clocks {
24 -     #address-cells = <1>;
25 -     #size-cells = <1>;
26 -     ranges;
27 -
28 -     main_clock: clock@0 {
29 -     compatible = "atmel,osc", "fixed-clock";
30 -     clock-frequency = <18432000>;
31 -     };
32 -
33 24   slow_xtal {
34 25   clock-frequency = <32768>;
35 26   };
+5 -14
arch/arm/boot/dts/at91sam9261ek.dts
···
22 22   };
23 23
24 24   clocks {
25 -     #address-cells = <1>;
26 -     #size-cells = <1>;
27 -     ranges;
28 -
29 -     main_clock: clock@0 {
30 -     compatible = "atmel,osc", "fixed-clock";
31 -     clock-frequency = <18432000>;
32 -     };
33 -
34 25   slow_xtal {
35 26   clock-frequency = <32768>;
36 27   };
···
140 149   ti,debounce-tol = /bits/ 16 <65535>;
141 150   ti,debounce-max = /bits/ 16 <1>;
142 151
143 -     linux,wakeup;
152 +     wakeup-source;
144 153   };
145 154   };
146 155
···
184 193   label = "button_0";
185 194   gpios = <&pioA 27 GPIO_ACTIVE_LOW>;
186 195   linux,code = <256>;
187 -     gpio-key,wakeup;
196 +     wakeup-source;
188 197   };
189 198
190 199   button_1 {
191 200   label = "button_1";
192 201   gpios = <&pioA 26 GPIO_ACTIVE_LOW>;
193 202   linux,code = <257>;
194 -     gpio-key,wakeup;
203 +     wakeup-source;
195 204   };
196 205
197 206   button_2 {
198 207   label = "button_2";
199 208   gpios = <&pioA 25 GPIO_ACTIVE_LOW>;
200 209   linux,code = <258>;
201 -     gpio-key,wakeup;
210 +     wakeup-source;
202 211   };
203 212
204 213   button_3 {
205 214   label = "button_3";
206 215   gpios = <&pioA 24 GPIO_ACTIVE_LOW>;
207 216   linux,code = <259>;
208 -     gpio-key,wakeup;
217 +     wakeup-source;
209 218   };
210 219   };
211 220   };
+2 -11
arch/arm/boot/dts/at91sam9263ek.dts
···
22 22   };
23 23
24 24   clocks {
25 -     #address-cells = <1>;
26 -     #size-cells = <1>;
27 -     ranges;
28 -
29 -     main_clock: clock@0 {
30 -     compatible = "atmel,osc", "fixed-clock";
31 -     clock-frequency = <16367660>;
32 -     };
33 -
34 25   slow_xtal {
35 26   clock-frequency = <32768>;
36 27   };
···
204 213   label = "left_click";
205 214   gpios = <&pioC 5 GPIO_ACTIVE_LOW>;
206 215   linux,code = <272>;
207 -     gpio-key,wakeup;
216 +     wakeup-source;
208 217   };
209 218
210 219   right_click {
211 220   label = "right_click";
212 221   gpios = <&pioC 4 GPIO_ACTIVE_LOW>;
213 222   linux,code = <273>;
214 -     gpio-key,wakeup;
223 +     wakeup-source;
215 224   };
216 225   };
217 226
+2 -11
arch/arm/boot/dts/at91sam9g20ek_common.dtsi
···
19 19   };
20 20
21 21   clocks {
22 -     #address-cells = <1>;
23 -     #size-cells = <1>;
24 -     ranges;
25 -
26 -     main_clock: clock@0 {
27 -     compatible = "atmel,osc", "fixed-clock";
28 -     clock-frequency = <18432000>;
29 -     };
30 -
31 22   slow_xtal {
32 23   clock-frequency = <32768>;
33 24   };
···
197 206   label = "Button 3";
198 207   gpios = <&pioA 30 GPIO_ACTIVE_LOW>;
199 208   linux,code = <0x103>;
200 -     gpio-key,wakeup;
209 +     wakeup-source;
201 210   };
202 211
203 212   btn4 {
204 213   label = "Button 4";
205 214   gpios = <&pioA 31 GPIO_ACTIVE_LOW>;
206 215   linux,code = <0x104>;
207 -     gpio-key,wakeup;
216 +     wakeup-source;
208 217   };
209 218   };
210 219
+2 -11
arch/arm/boot/dts/at91sam9m10g45ek.dts
···
24 24   };
25 25
26 26   clocks {
27 -     #address-cells = <1>;
28 -     #size-cells = <1>;
29 -     ranges;
30 -
31 -     main_clock: clock@0 {
32 -     compatible = "atmel,osc", "fixed-clock";
33 -     clock-frequency = <12000000>;
34 -     };
35 -
36 27   slow_xtal {
37 28   clock-frequency = <32768>;
38 29   };
···
314 323   label = "left_click";
315 324   gpios = <&pioB 6 GPIO_ACTIVE_LOW>;
316 325   linux,code = <272>;
317 -     gpio-key,wakeup;
326 +     wakeup-source;
318 327   };
319 328
320 329   right_click {
321 330   label = "right_click";
322 331   gpios = <&pioB 7 GPIO_ACTIVE_LOW>;
323 332   linux,code = <273>;
324 -     gpio-key,wakeup;
333 +     wakeup-source;
325 334   };
326 335
327 336   left {
+1 -10
arch/arm/boot/dts/at91sam9n12ek.dts
···
23 23   };
24 24
25 25   clocks {
26 -     #address-cells = <1>;
27 -     #size-cells = <1>;
28 -     ranges;
29 -
30 -     main_clock: clock@0 {
31 -     compatible = "atmel,osc", "fixed-clock";
32 -     clock-frequency = <16000000>;
33 -     };
34 -
35 26   slow_xtal {
36 27   clock-frequency = <32768>;
37 28   };
···
210 219   label = "Enter";
211 220   gpios = <&pioB 3 GPIO_ACTIVE_LOW>;
212 221   linux,code = <28>;
213 -     gpio-key,wakeup;
222 +     wakeup-source;
214 223   };
215 224   };
216 225
+2 -11
arch/arm/boot/dts/at91sam9rlek.dts
···
22 22   };
23 23
24 24   clocks {
25 -     #address-cells = <1>;
26 -     #size-cells = <1>;
27 -     ranges;
28 -
29 -     main_clock: clock {
30 -     compatible = "atmel,osc", "fixed-clock";
31 -     clock-frequency = <12000000>;
32 -     };
33 -
34 25   slow_xtal {
35 26   clock-frequency = <32768>;
36 27   };
···
216 225   label = "right_click";
217 226   gpios = <&pioB 0 GPIO_ACTIVE_LOW>;
218 227   linux,code = <273>;
219 -     gpio-key,wakeup;
228 +     wakeup-source;
220 229   };
221 230
222 231   left_click {
223 232   label = "left_click";
224 233   gpios = <&pioB 1 GPIO_ACTIVE_LOW>;
225 234   linux,code = <272>;
226 -     gpio-key,wakeup;
235 +     wakeup-source;
227 236   };
228 237   };
229 238
-11
arch/arm/boot/dts/at91sam9x5cm.dtsi
···
13 13   };
14 14
15 15   clocks {
16 -     #address-cells = <1>;
17 -     #size-cells = <1>;
18 -     ranges;
19 -
20 -     main_clock: clock@0 {
21 -     compatible = "atmel,osc", "fixed-clock";
22 -     clock-frequency = <12000000>;
23 -     };
24 -     };
25 -
26 -     clocks {
27 16   slow_xtal {
28 17   clock-frequency = <32768>;
29 18   };
+2 -2
arch/arm/boot/dts/dra7.dtsi
···
1459 1459   interrupt-names = "tx", "rx";
1460 1460   dmas = <&sdma_xbar 133>, <&sdma_xbar 132>;
1461 1461   dma-names = "tx", "rx";
1462 -      clocks = <&mcasp3_ahclkx_mux>;
1463 -      clock-names = "fck";
1462 +      clocks = <&mcasp3_aux_gfclk_mux>, <&mcasp3_ahclkx_mux>;
1463 +      clock-names = "fck", "ahclkx";
1464 1464   status = "disabled";
1465 1465   };
1466 1466
+12 -4
arch/arm/boot/dts/imx27.dtsi
···
486 486   compatible = "fsl,imx27-usb";
487 487   reg = <0x10024000 0x200>;
488 488   interrupts = <56>;
489 -     clocks = <&clks IMX27_CLK_USB_IPG_GATE>;
489 +     clocks = <&clks IMX27_CLK_USB_IPG_GATE>,
490 +     <&clks IMX27_CLK_USB_AHB_GATE>,
491 +     <&clks IMX27_CLK_USB_DIV>;
492 +     clock-names = "ipg", "ahb", "per";
490 493   fsl,usbmisc = <&usbmisc 0>;
491 494   status = "disabled";
492 495   };
···
498 495   compatible = "fsl,imx27-usb";
499 496   reg = <0x10024200 0x200>;
500 497   interrupts = <54>;
501 -     clocks = <&clks IMX27_CLK_USB_IPG_GATE>;
498 +     clocks = <&clks IMX27_CLK_USB_IPG_GATE>,
499 +     <&clks IMX27_CLK_USB_AHB_GATE>,
500 +     <&clks IMX27_CLK_USB_DIV>;
501 +     clock-names = "ipg", "ahb", "per";
502 502   fsl,usbmisc = <&usbmisc 1>;
503 503   dr_mode = "host";
504 504   status = "disabled";
···
511 505   compatible = "fsl,imx27-usb";
512 506   reg = <0x10024400 0x200>;
513 507   interrupts = <55>;
514 -     clocks = <&clks IMX27_CLK_USB_IPG_GATE>;
508 +     clocks = <&clks IMX27_CLK_USB_IPG_GATE>,
509 +     <&clks IMX27_CLK_USB_AHB_GATE>,
510 +     <&clks IMX27_CLK_USB_DIV>;
511 +     clock-names = "ipg", "ahb", "per";
515 512   fsl,usbmisc = <&usbmisc 2>;
516 513   dr_mode = "host";
517 514   status = "disabled";
···
524 515   #index-cells = <1>;
525 516   compatible = "fsl,imx27-usbmisc";
526 517   reg = <0x10024600 0x200>;
527 -     clocks = <&clks IMX27_CLK_USB_AHB_GATE>;
528 518   };
529 519
530 520   sahara2: sahara@10025000 {
+1 -1
arch/arm/boot/dts/k2l-netcp.dtsi
···
137 137   /* NetCP address range */
138 138   ranges = <0 0x26000000 0x1000000>;
139 139
140 -     clocks = <&papllclk>, <&clkcpgmac>, <&chipclk12>;
140 +     clocks = <&clkosr>, <&papllclk>, <&clkcpgmac>, <&chipclk12>;
141 141   dma-coherent;
142 142
143 143   ti,navigator-dmas = <&dma_gbe 0>,
+1 -1
arch/arm/boot/dts/kirkwood-ts219.dtsi
···
40 40   };
41 41   poweroff@12100 {
42 42   compatible = "qnap,power-off";
43 -     reg = <0x12000 0x100>;
43 +     reg = <0x12100 0x100>;
44 44   clocks = <&gate_clk 7>;
45 45   };
46 46   spi@10600 {
+4
arch/arm/boot/dts/rk3288-veyron-minnie.dts
···
86 86   };
87 87   };
88 88
89 +     &emmc {
90 +     /delete-property/mmc-hs200-1_8v;
91 +     };
92 +
89 93   &gpio_keys {
90 94   pinctrl-0 = <&pwr_key_l &ap_lid_int_l &volum_down_l &volum_up_l>;
91 95
+8 -2
arch/arm/boot/dts/rk3288.dtsi
···
452 452   clock-names = "tsadc", "apb_pclk";
453 453   resets = <&cru SRST_TSADC>;
454 454   reset-names = "tsadc-apb";
455 -     pinctrl-names = "default";
456 -     pinctrl-0 = <&otp_out>;
455 +     pinctrl-names = "init", "default", "sleep";
456 +     pinctrl-0 = <&otp_gpio>;
457 +     pinctrl-1 = <&otp_out>;
458 +     pinctrl-2 = <&otp_gpio>;
457 459   #thermal-sensor-cells = <1>;
458 460   rockchip,hw-tshut-temp = <95000>;
459 461   status = "disabled";
···
1397 1395   };
1398 1396
1399 1397   tsadc {
1398 +      otp_gpio: otp-gpio {
1399 +      rockchip,pins = <0 10 RK_FUNC_GPIO &pcfg_pull_none>;
1400 +      };
1401 +
1400 1402   otp_out: otp-out {
1401 1403   rockchip,pins = <0 10 RK_FUNC_1 &pcfg_pull_none>;
1402 1404   };
+1 -1
arch/arm/boot/dts/sama5d35ek.dts
···
49 49   label = "pb_user1";
50 50   gpios = <&pioE 27 GPIO_ACTIVE_HIGH>;
51 51   linux,code = <0x100>;
52 -     gpio-key,wakeup;
52 +     wakeup-source;
53 53   };
54 54   };
55 55   };
+1 -1
arch/arm/boot/dts/sama5d4.dtsi
···
1300 1300   };
1301 1301
1302 1302   watchdog@fc068640 {
1303 -      compatible = "atmel,at91sam9260-wdt";
1303 +      compatible = "atmel,sama5d4-wdt";
1304 1304   reg = <0xfc068640 0x10>;
1305 1305   clocks = <&clk32k>;
1306 1306   status = "disabled";
+1 -1
arch/arm/boot/dts/usb_a9260_common.dtsi
···
115 115   label = "user_pb";
116 116   gpios = <&pioB 10 GPIO_ACTIVE_LOW>;
117 117   linux,code = <28>;
118 -     gpio-key,wakeup;
118 +     wakeup-source;
119 119   };
120 120   };
121 121
+1 -1
arch/arm/boot/dts/usb_a9263.dts
···
143 143   label = "user_pb";
144 144   gpios = <&pioB 10 GPIO_ACTIVE_LOW>;
145 145   linux,code = <28>;
146 -     gpio-key,wakeup;
146 +     wakeup-source;
147 147   };
148 148   };
149 149
+4 -4
arch/arm/boot/dts/vfxxx.dtsi
···
158 158   interrupts = <67 IRQ_TYPE_LEVEL_HIGH>;
159 159   clocks = <&clks VF610_CLK_DSPI0>;
160 160   clock-names = "dspi";
161 -     spi-num-chipselects = <5>;
161 +     spi-num-chipselects = <6>;
162 162   status = "disabled";
163 163   };
164 164
···
170 170   interrupts = <68 IRQ_TYPE_LEVEL_HIGH>;
171 171   clocks = <&clks VF610_CLK_DSPI1>;
172 172   clock-names = "dspi";
173 -     spi-num-chipselects = <5>;
173 +     spi-num-chipselects = <4>;
174 174   status = "disabled";
175 175   };
176 176
···
461 461   clock-names = "adc";
462 462   #io-channel-cells = <1>;
463 463   status = "disabled";
464 +     fsl,adck-max-frequency = <30000000>, <40000000>,
465 +     <20000000>;
464 466
466 468   esdhc0: esdhc@400b1000 {
···
474 472   <&clks VF610_CLK_ESDHC0>;
475 473   clock-names = "ipg", "ahb", "per";
476 474   status = "disabled";
477 -     fsl,adck-max-frequency = <30000000>, <40000000>,
478 -     <20000000>;
479 475   };
480 476
481 477   esdhc1: esdhc@400b2000 {
-1
arch/arm/configs/at91_dt_defconfig
···
125 125   # CONFIG_HWMON is not set
126 126   CONFIG_WATCHDOG=y
127 127   CONFIG_AT91SAM9X_WATCHDOG=y
128 -     CONFIG_SSB=m
129 128   CONFIG_MFD_ATMEL_HLCDC=y
130 129   CONFIG_REGULATOR=y
131 130   CONFIG_REGULATOR_FIXED_VOLTAGE=y
-1
arch/arm/configs/sama5_defconfig
···
129 129   CONFIG_POWER_SUPPLY=y
130 130   CONFIG_POWER_RESET=y
131 131   # CONFIG_HWMON is not set
132 -     CONFIG_SSB=m
133 132   CONFIG_MFD_ATMEL_FLEXCOM=y
134 133   CONFIG_REGULATOR=y
135 134   CONFIG_REGULATOR_FIXED_VOLTAGE=y
+5
arch/arm/include/asm/irq.h
···
40 40   #define arch_trigger_all_cpu_backtrace(x) arch_trigger_all_cpu_backtrace(x)
41 41   #endif
42 42
43 +     static inline int nr_legacy_irqs(void)
44 +     {
45 +     return NR_IRQS_LEGACY;
46 +     }
47 +
43 48   #endif
44 49
45 50   #endif
+1
arch/arm/include/uapi/asm/unistd.h
···
416 416   #define __NR_execveat (__NR_SYSCALL_BASE+387)
417 417   #define __NR_userfaultfd (__NR_SYSCALL_BASE+388)
418 418   #define __NR_membarrier (__NR_SYSCALL_BASE+389)
419 +     #define __NR_mlock2 (__NR_SYSCALL_BASE+390)
419 420
420 421   /*
421 422   * The following SWIs are ARM private.
+11 -8
arch/arm/kernel/bios32.c
···
17 17   #include <asm/mach/pci.h>
18 18
19 19   static int debug_pci;
20 -     static resource_size_t (*align_resource)(struct pci_dev *dev,
21 -     const struct resource *res,
22 -     resource_size_t start,
23 -     resource_size_t size,
24 -     resource_size_t align) = NULL;
25 20
26 21   /*
27 22   * We can't use pci_get_device() here since we are
···
456 461   sys->busnr = busnr;
457 462   sys->swizzle = hw->swizzle;
458 463   sys->map_irq = hw->map_irq;
459 -     align_resource = hw->align_resource;
460 464   INIT_LIST_HEAD(&sys->resources);
461 465
462 466   if (hw->private_data)
···
464 470   ret = hw->setup(nr, sys);
465 471
466 472   if (ret > 0) {
473 +     struct pci_host_bridge *host_bridge;
474 +
467 475   ret = pcibios_init_resources(nr, sys);
468 476   if (ret) {
469 477   kfree(sys);
···
487 491   busnr = sys->bus->busn_res.end + 1;
488 492
489 493   list_add(&sys->node, head);
494 +
495 +     host_bridge = pci_find_host_bridge(sys->bus);
496 +     host_bridge->align_resource = hw->align_resource;
490 497   } else {
491 498   kfree(sys);
492 499   if (ret < 0)
···
577 578   {
578 579   struct pci_dev *dev = data;
579 580   resource_size_t start = res->start;
581 +     struct pci_host_bridge *host_bridge;
580 582
581 583   if (res->flags & IORESOURCE_IO && start & 0x300)
582 584   start = (start + 0x3ff) & ~0x3ff;
583 585
584 586   start = (start + align - 1) & ~(align - 1);
585 587
586 -     if (align_resource)
587 -     return align_resource(dev, res, start, size, align);
588 +     host_bridge = pci_find_host_bridge(dev->bus);
589 +
590 +     if (host_bridge->align_resource)
591 +     return host_bridge->align_resource(dev, res,
592 +     start, size, align);
588 593
589 594   return start;
590 595   }
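The net effect of the change above is that the alignment hook travels with the bus (via pci_find_host_bridge()) instead of living in one file-scope pointer, so two controllers with different constraints no longer clobber each other. For illustration only, a controller would supply the callback through its struct hw_pci; the name and the 1K I/O rounding below are hypothetical, not part of this commit:

    #include <linux/pci.h>
    #include <linux/sizes.h>
    #include <asm/mach/pci.h>

    /* Hypothetical callback: push I/O BAR placement up to a 1K boundary. */
    static resource_size_t demo_align_resource(struct pci_dev *dev,
    					   const struct resource *res,
    					   resource_size_t start,
    					   resource_size_t size,
    					   resource_size_t align)
    {
    	if (res->flags & IORESOURCE_IO)
    		start = ALIGN(start, SZ_1K);
    	return start;
    }

    static struct hw_pci demo_pci __initdata = {
    	.nr_controllers	= 1,
    	.align_resource	= demo_align_resource,
    	/* .setup and .map_irq omitted; a real controller must provide them */
    };

With the patch applied, pcibios_align_resource() resolves this callback per bridge rather than through the removed global, which is what makes multi-controller systems behave.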
+1
arch/arm/kernel/calls.S
···
399 399   CALL(sys_execveat)
400 400   CALL(sys_userfaultfd)
401 401   CALL(sys_membarrier)
402 +     CALL(sys_mlock2)
402 403   #ifndef syscalls_counted
403 404   .equ syscalls_padding, ((NR_syscalls + 3) & ~3) - NR_syscalls
404 405   #define syscalls_counted
+1 -6
arch/arm/kvm/arm.c
···
564 564   vcpu_sleep(vcpu);
565 565
566 566   /*
567 -     * Disarming the background timer must be done in a
568 -     * preemptible context, as this call may sleep.
569 -     */
570 -     kvm_timer_flush_hwstate(vcpu);
571 -
572 -     /*
573 567   * Preparing the interrupts to be injected also
574 568   * involves poking the GIC, which must be done in a
575 569   * non-preemptible context.
576 570   */
577 571   preempt_disable();
572 +     kvm_timer_flush_hwstate(vcpu);
578 573   kvm_vgic_flush_hwstate(vcpu);
579 574
580 575   local_irq_disable();
+7 -8
arch/arm/kvm/mmu.c
···
98 98   __kvm_flush_dcache_pud(pud);
99 99   }
100 100
101 +     static bool kvm_is_device_pfn(unsigned long pfn)
102 +     {
103 +     return !pfn_valid(pfn);
104 +     }
105 +
101 106   /**
102 107   * stage2_dissolve_pmd() - clear and flush huge PMD entry
103 108   * @kvm: pointer to kvm structure.
···
218 213   kvm_tlb_flush_vmid_ipa(kvm, addr);
219 214
220 215   /* No need to invalidate the cache for device mappings */
221 -     if ((pte_val(old_pte) & PAGE_S2_DEVICE) != PAGE_S2_DEVICE)
216 +     if (!kvm_is_device_pfn(__phys_to_pfn(addr)))
222 217   kvm_flush_dcache_pte(old_pte);
223 218
224 219   put_page(virt_to_page(pte));
···
310 305
311 306   pte = pte_offset_kernel(pmd, addr);
312 307   do {
313 -     if (!pte_none(*pte) &&
314 -     (pte_val(*pte) & PAGE_S2_DEVICE) != PAGE_S2_DEVICE)
308 +     if (!pte_none(*pte) && !kvm_is_device_pfn(__phys_to_pfn(addr)))
315 309   kvm_flush_dcache_pte(*pte);
316 310   } while (pte++, addr += PAGE_SIZE, addr != end);
317 311   }
···
1039 1035   return false;
1040 1036
1041 1037   return kvm_vcpu_dabt_iswrite(vcpu);
1042 -      }
1043 -
1044 -      static bool kvm_is_device_pfn(unsigned long pfn)
1045 -      {
1046 -      return !pfn_valid(pfn);
1047 1038   }
1048 1039
1049 1040   /**
+2 -2
arch/arm/mach-dove/include/mach/entry-macro.S
···
18 18   @ check low interrupts
19 19   ldr \irqstat, [\base, #IRQ_CAUSE_LOW_OFF]
20 20   ldr \tmp, [\base, #IRQ_MASK_LOW_OFF]
21 -     mov \irqnr, #31
21 +     mov \irqnr, #32
22 22   ands \irqstat, \irqstat, \tmp
23 23
24 24   @ if no low interrupts set, check high interrupts
25 25   ldreq \irqstat, [\base, #IRQ_CAUSE_HIGH_OFF]
26 26   ldreq \tmp, [\base, #IRQ_MASK_HIGH_OFF]
27 -     moveq \irqnr, #63
27 +     moveq \irqnr, #64
28 28   andeqs \irqstat, \irqstat, \tmp
29 29
30 30   @ find first active interrupt source
+1
arch/arm/mach-imx/gpc.c
···
177 177   .irq_unmask = imx_gpc_irq_unmask,
178 178   .irq_retrigger = irq_chip_retrigger_hierarchy,
179 179   .irq_set_wake = imx_gpc_irq_set_wake,
180 +     .irq_set_type = irq_chip_set_type_parent,
180 181   #ifdef CONFIG_SMP
181 182   .irq_set_affinity = irq_chip_set_affinity_parent,
182 183   #endif
+3 -3
arch/arm/mach-omap2/omap-smp.c
···
143 143   * Ensure that CPU power state is set to ON to avoid CPU
144 144   * powerdomain transition on wfi
145 145   */
146 -     clkdm_wakeup(cpu1_clkdm);
147 -     omap_set_pwrdm_state(cpu1_pwrdm, PWRDM_POWER_ON);
148 -     clkdm_allow_idle(cpu1_clkdm);
146 +     clkdm_wakeup_nolock(cpu1_clkdm);
147 +     pwrdm_set_next_pwrst(cpu1_pwrdm, PWRDM_POWER_ON);
148 +     clkdm_allow_idle_nolock(cpu1_clkdm);
149 149
150 150   if (IS_PM44XX_ERRATUM(PM_OMAP4_ROM_SMP_BOOT_ERRATUM_GICD)) {
151 151   while (gic_dist_disabled()) {
+36 -30
arch/arm/mach-omap2/omap_hwmod.c
···
890 890   return ret;
891 891   }
892 892
893 +     static void _enable_optional_clocks(struct omap_hwmod *oh)
894 +     {
895 +     struct omap_hwmod_opt_clk *oc;
896 +     int i;
897 +
898 +     pr_debug("omap_hwmod: %s: enabling optional clocks\n", oh->name);
899 +
900 +     for (i = oh->opt_clks_cnt, oc = oh->opt_clks; i > 0; i--, oc++)
901 +     if (oc->_clk) {
902 +     pr_debug("omap_hwmod: enable %s:%s\n", oc->role,
903 +     __clk_get_name(oc->_clk));
904 +     clk_enable(oc->_clk);
905 +     }
906 +     }
907 +
908 +     static void _disable_optional_clocks(struct omap_hwmod *oh)
909 +     {
910 +     struct omap_hwmod_opt_clk *oc;
911 +     int i;
912 +
913 +     pr_debug("omap_hwmod: %s: disabling optional clocks\n", oh->name);
914 +
915 +     for (i = oh->opt_clks_cnt, oc = oh->opt_clks; i > 0; i--, oc++)
916 +     if (oc->_clk) {
917 +     pr_debug("omap_hwmod: disable %s:%s\n", oc->role,
918 +     __clk_get_name(oc->_clk));
919 +     clk_disable(oc->_clk);
920 +     }
921 +     }
922 +
893 923   /**
894 924   * _enable_clocks - enable hwmod main clock and interface clocks
895 925   * @oh: struct omap_hwmod *
···
946 916   if (os->_clk && (os->flags & OCPIF_SWSUP_IDLE))
947 917   clk_enable(os->_clk);
948 918   }
919 +
920 +     if (oh->flags & HWMOD_OPT_CLKS_NEEDED)
921 +     _enable_optional_clocks(oh);
949 922
950 923   /* The opt clocks are controlled by the device driver. */
951 924
···
981 948   clk_disable(os->_clk);
982 949   }
983 950
951 +     if (oh->flags & HWMOD_OPT_CLKS_NEEDED)
952 +     _disable_optional_clocks(oh);
953 +
984 954   /* The opt clocks are controlled by the device driver. */
985 955
986 956   return 0;
987 -     }
988 -
989 -     static void _enable_optional_clocks(struct omap_hwmod *oh)
990 -     {
991 -     struct omap_hwmod_opt_clk *oc;
992 -     int i;
993 -
994 -     pr_debug("omap_hwmod: %s: enabling optional clocks\n", oh->name);
995 -
996 -     for (i = oh->opt_clks_cnt, oc = oh->opt_clks; i > 0; i--, oc++)
997 -     if (oc->_clk) {
998 -     pr_debug("omap_hwmod: enable %s:%s\n", oc->role,
999 -     __clk_get_name(oc->_clk));
1000 -    clk_enable(oc->_clk);
1001 -    }
1002 -    }
1003 -
1004 -    static void _disable_optional_clocks(struct omap_hwmod *oh)
1005 -    {
1006 -    struct omap_hwmod_opt_clk *oc;
1007 -    int i;
1008 -
1009 -    pr_debug("omap_hwmod: %s: disabling optional clocks\n", oh->name);
1010 -
1011 -    for (i = oh->opt_clks_cnt, oc = oh->opt_clks; i > 0; i--, oc++)
1012 -    if (oc->_clk) {
1013 -    pr_debug("omap_hwmod: disable %s:%s\n", oc->role,
1014 -    __clk_get_name(oc->_clk));
1015 -    clk_disable(oc->_clk);
1016 -    }
1017 957  }
1018 958
1019 959   /**
+3
arch/arm/mach-omap2/omap_hwmod.h
···
523 523   * HWMOD_RECONFIG_IO_CHAIN: omap_hwmod code needs to reconfigure wake-up
524 524   * events by calling _reconfigure_io_chain() when a device is enabled
525 525   * or idled.
526 +     * HWMOD_OPT_CLKS_NEEDED: The optional clocks are needed for the module to
527 +     * operate and they need to be handled at the same time as the main_clk.
526 528   */
527 529   #define HWMOD_SWSUP_SIDLE (1 << 0)
528 530   #define HWMOD_SWSUP_MSTANDBY (1 << 1)
···
540 538   #define HWMOD_FORCE_MSTANDBY (1 << 11)
541 539   #define HWMOD_SWSUP_SIDLE_ACT (1 << 12)
542 540   #define HWMOD_RECONFIG_IO_CHAIN (1 << 13)
541 +     #define HWMOD_OPT_CLKS_NEEDED (1 << 14)
543 542
544 543   /*
545 544   * omap_hwmod._int_flags definitions
+56
arch/arm/mach-omap2/omap_hwmod_7xx_data.c
···
1298 1298   };
1299 1299
1300 1300   /*
1301 +      * 'mcasp' class
1302 +      *
1303 +      */
1304 +      static struct omap_hwmod_class_sysconfig dra7xx_mcasp_sysc = {
1305 +      .sysc_offs = 0x0004,
1306 +      .sysc_flags = SYSC_HAS_SIDLEMODE,
1307 +      .idlemodes = (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART),
1308 +      .sysc_fields = &omap_hwmod_sysc_type3,
1309 +      };
1310 +
1311 +      static struct omap_hwmod_class dra7xx_mcasp_hwmod_class = {
1312 +      .name = "mcasp",
1313 +      .sysc = &dra7xx_mcasp_sysc,
1314 +      };
1315 +
1316 +      /* mcasp3 */
1317 +      static struct omap_hwmod_opt_clk mcasp3_opt_clks[] = {
1318 +      { .role = "ahclkx", .clk = "mcasp3_ahclkx_mux" },
1319 +      };
1320 +
1321 +      static struct omap_hwmod dra7xx_mcasp3_hwmod = {
1322 +      .name = "mcasp3",
1323 +      .class = &dra7xx_mcasp_hwmod_class,
1324 +      .clkdm_name = "l4per2_clkdm",
1325 +      .main_clk = "mcasp3_aux_gfclk_mux",
1326 +      .flags = HWMOD_OPT_CLKS_NEEDED,
1327 +      .prcm = {
1328 +      .omap4 = {
1329 +      .clkctrl_offs = DRA7XX_CM_L4PER2_MCASP3_CLKCTRL_OFFSET,
1330 +      .context_offs = DRA7XX_RM_L4PER2_MCASP3_CONTEXT_OFFSET,
1331 +      .modulemode = MODULEMODE_SWCTRL,
1332 +      },
1333 +      },
1334 +      .opt_clks = mcasp3_opt_clks,
1335 +      .opt_clks_cnt = ARRAY_SIZE(mcasp3_opt_clks),
1336 +      };
1337 +
1338 +      /*
1301 1339   * 'mmc' class
1302 1340   *
1303 1341   */
···
2604 2566   .user = OCP_USER_MPU | OCP_USER_SDMA,
2605 2567   };
2606 2568
2569 +      /* l4_per2 -> mcasp3 */
2570 +      static struct omap_hwmod_ocp_if dra7xx_l4_per2__mcasp3 = {
2571 +      .master = &dra7xx_l4_per2_hwmod,
2572 +      .slave = &dra7xx_mcasp3_hwmod,
2573 +      .clk = "l4_root_clk_div",
2574 +      .user = OCP_USER_MPU | OCP_USER_SDMA,
2575 +      };
2576 +
2577 +      /* l3_main_1 -> mcasp3 */
2578 +      static struct omap_hwmod_ocp_if dra7xx_l3_main_1__mcasp3 = {
2579 +      .master = &dra7xx_l3_main_1_hwmod,
2580 +      .slave = &dra7xx_mcasp3_hwmod,
2581 +      .clk = "l3_iclk_div",
2582 +      .user = OCP_USER_MPU | OCP_USER_SDMA,
2583 +      };
2584 +
2607 2585   /* l4_per1 -> elm */
2608 2586   static struct omap_hwmod_ocp_if dra7xx_l4_per1__elm = {
2609 2587   .master = &dra7xx_l4_per1_hwmod,
···
3362 3308   &dra7xx_l4_wkup__dcan1,
3363 3309   &dra7xx_l4_per2__dcan2,
3364 3310   &dra7xx_l4_per2__cpgmac0,
3311 +      &dra7xx_l4_per2__mcasp3,
3312 +      &dra7xx_l3_main_1__mcasp3,
3365 3313   &dra7xx_gmac__mdio,
3366 3314   &dra7xx_l4_cfg__dma_system,
3367 3315   &dra7xx_l3_main_1__dss,
+3
arch/arm/mach-omap2/omap_hwmod_81xx_data.c
···
144 144   .name = "l4_ls",
145 145   .clkdm_name = "alwon_l3s_clkdm",
146 146   .class = &l4_hwmod_class,
147 +     .flags = HWMOD_NO_IDLEST,
147 148   };
148 149
149 150   /*
···
156 155   .name = "l4_hs",
157 156   .clkdm_name = "alwon_l3_med_clkdm",
158 157   .class = &l4_hwmod_class,
158 +     .flags = HWMOD_NO_IDLEST,
159 159   };
160 160
161 161   /* L3 slow -> L4 ls peripheral interface running at 125MHz */
···
852 850   .name = "emac0",
853 851   .clkdm_name = "alwon_ethernet_clkdm",
854 852   .class = &dm816x_emac_hwmod_class,
853 +     .flags = HWMOD_NO_IDLEST,
855 854   };
856 855
857 856   static struct omap_hwmod_ocp_if dm81xx_l4_hs__emac0 = {
-29
arch/arm/mach-omap2/pdata-quirks.c
···
24 24   #include <linux/platform_data/iommu-omap.h>
25 25   #include <linux/platform_data/wkup_m3.h>
26 26
27 -     #include <asm/siginfo.h>
28 -     #include <asm/signal.h>
29 -
30 27   #include "common.h"
31 28   #include "common-board-devices.h"
32 29   #include "dss-common.h"
···
382 385   }
383 386   #endif /* CONFIG_ARCH_OMAP3 */
384 387
385 -     #ifdef CONFIG_SOC_TI81XX
386 -     static int fault_fixed_up;
387 -
388 -     static int t410_abort_handler(unsigned long addr, unsigned int fsr,
389 -     struct pt_regs *regs)
390 -     {
391 -     if ((fsr == 0x406 || fsr == 0xc06) && !fault_fixed_up) {
392 -     pr_warn("External imprecise Data abort at addr=%#lx, fsr=%#x ignored.\n",
393 -     addr, fsr);
394 -     fault_fixed_up = 1;
395 -     return 0;
396 -     }
397 -
398 -     return 1;
399 -     }
400 -
401 -     static void __init t410_abort_init(void)
402 -     {
403 -     hook_fault_code(16 + 6, t410_abort_handler, SIGBUS, BUS_OBJERR,
404 -     "imprecise external abort");
405 -     }
406 -     #endif
407 -
408 388   #if defined(CONFIG_ARCH_OMAP4) || defined(CONFIG_SOC_OMAP5)
409 389   static struct iommu_platform_data omap4_iommu_pdata = {
410 390   .reset_name = "mmu_cache",
···
509 535   { "technexion,omap3-tao3530", omap3_tao3530_legacy_init, },
510 536   { "openpandora,omap3-pandora-600mhz", omap3_pandora_legacy_init, },
511 537   { "openpandora,omap3-pandora-1ghz", omap3_pandora_legacy_init, },
512 538   #endif
513 -     #ifdef CONFIG_SOC_TI81XX
514 -     { "hp,t410", t410_abort_init, },
515 -     #endif
516 539   #ifdef CONFIG_SOC_OMAP5
517 540   { "ti,omap5-uevm", omap5_uevm_legacy_init, },
+2 -2
arch/arm/mach-omap2/pm34xx.c
···
301 301   if (omap_irq_pending())
302 302   return;
303 303
304 -     trace_cpu_idle(1, smp_processor_id());
304 +     trace_cpu_idle_rcuidle(1, smp_processor_id());
305 305
306 306   omap_sram_idle();
307 307
308 -     trace_cpu_idle(PWR_EVENT_EXIT, smp_processor_id());
308 +     trace_cpu_idle_rcuidle(PWR_EVENT_EXIT, smp_processor_id());
309 309   }
310 310
311 311   #ifdef CONFIG_SUSPEND
+1 -1
arch/arm/mach-orion5x/include/mach/entry-macro.S
···
21 21   @ find cause bits that are unmasked
22 22   ands \irqstat, \irqstat, \tmp @ clear Z flag if any
23 23   clzne \irqnr, \irqstat @ calc irqnr
24 -     rsbne \irqnr, \irqnr, #31
24 +     rsbne \irqnr, \irqnr, #32
25 25   .endm
+1 -1
arch/arm/mach-pxa/palm27x.c
···
344 344   {
345 345   palm_bl_power = bl;
346 346   palm_lcd_power = lcd;
347 -     pwm_add_lookup(palm27x_pwm_lookup, ARRAY_SIZE(palm27x_pwm_lookup));
347 +     pwm_add_table(palm27x_pwm_lookup, ARRAY_SIZE(palm27x_pwm_lookup));
348 348   platform_device_register(&palm27x_backlight);
349 349   }
350 350   #endif
+1 -1
arch/arm/mach-pxa/palmtc.c
···
169 169   #if defined(CONFIG_BACKLIGHT_PWM) || defined(CONFIG_BACKLIGHT_PWM_MODULE)
170 170   static struct pwm_lookup palmtc_pwm_lookup[] = {
171 171   PWM_LOOKUP("pxa25x-pwm.1", 0, "pwm-backlight.0", NULL, PALMTC_PERIOD_NS,
172 -     PWM_PERIOD_NORMAL),
172 +     PWM_POLARITY_NORMAL),
173 173   };
174 174
175 175   static struct platform_pwm_backlight_data palmtc_backlight_data = {
+1 -1
arch/arm/mach-shmobile/setup-r8a7793.c
···
19 19   #include "common.h"
20 20   #include "rcar-gen2.h"
21 21
22 -     static const char *r8a7793_boards_compat_dt[] __initconst = {
22 +     static const char * const r8a7793_boards_compat_dt[] __initconst = {
23 23   "renesas,r8a7793",
24 24   NULL,
25 25   };
+1 -1
arch/arm/mach-zx/Kconfig
···
13 13   select ARM_GLOBAL_TIMER
14 14   select HAVE_ARM_SCU if SMP
15 15   select HAVE_ARM_TWD if SMP
16 -     select PM_GENERIC_DOMAINS
16 +     select PM_GENERIC_DOMAINS if PM
17 17   help
18 18   Support for ZTE ZX296702 SoC which is a dual core CortexA9MP
19 19   endif
+22 -1
arch/arm64/Kconfig
···
49 49   select HAVE_ARCH_AUDITSYSCALL
50 50   select HAVE_ARCH_BITREVERSE
51 51   select HAVE_ARCH_JUMP_LABEL
52 -     select HAVE_ARCH_KASAN if SPARSEMEM_VMEMMAP
52 +     select HAVE_ARCH_KASAN if SPARSEMEM_VMEMMAP && !(ARM64_16K_PAGES && ARM64_VA_BITS_48)
53 53   select HAVE_ARCH_KGDB
54 54   select HAVE_ARCH_SECCOMP_FILTER
55 55   select HAVE_ARCH_TRACEHOOK
···
310 310
311 311   The workaround is to promote device loads to use Load-Acquire
312 312   semantics.
313 +     Please note that this does not necessarily enable the workaround,
314 +     as it depends on the alternative framework, which will only patch
315 +     the kernel if an affected CPU is detected.
316 +
317 +     If unsure, say Y.
318 +
319 +     config ARM64_ERRATUM_834220
320 +     bool "Cortex-A57: 834220: Stage 2 translation fault might be incorrectly reported in presence of a Stage 1 fault"
321 +     depends on KVM
322 +     default y
323 +     help
324 +     This option adds an alternative code sequence to work around ARM
325 +     erratum 834220 on Cortex-A57 parts up to r1p2.
326 +
327 +     Affected Cortex-A57 parts might report a Stage 2 translation
328 +     fault as the result of a Stage 1 fault for load crossing a
329 +     page boundary when there is a permission or device memory
330 +     alignment fault at Stage 1 and a translation fault at Stage 2.
331 +
332 +     The workaround is to verify that the Stage 1 translation
333 +     doesn't generate a fault before handling the Stage 2 fault.
313 334   Please note that this does not necessarily enable the workaround,
314 335   as it depends on the alternative framework, which will only patch
315 336   the kernel if an affected CPU is detected.
+1 -1
arch/arm64/crypto/aes-ce-cipher.c
···
237 237   static struct crypto_alg aes_alg = {
238 238   .cra_name = "aes",
239 239   .cra_driver_name = "aes-ce",
240 -     .cra_priority = 300,
240 +     .cra_priority = 250,
241 241   .cra_flags = CRYPTO_ALG_TYPE_CIPHER,
242 242   .cra_blocksize = AES_BLOCK_SIZE,
243 243   .cra_ctxsize = sizeof(struct crypto_aes_ctx),
+10 -6
arch/arm64/include/asm/barrier.h
···
64 64
65 65   #define smp_load_acquire(p) \
66 66   ({ \
67 -     typeof(*p) ___p1; \
67 +     union { typeof(*p) __val; char __c[1]; } __u; \
68 68   compiletime_assert_atomic_type(*p); \
69 69   switch (sizeof(*p)) { \
70 70   case 1: \
71 71   asm volatile ("ldarb %w0, %1" \
72 -     : "=r" (___p1) : "Q" (*p) : "memory"); \
72 +     : "=r" (*(__u8 *)__u.__c) \
73 +     : "Q" (*p) : "memory"); \
73 74   break; \
74 75   case 2: \
75 76   asm volatile ("ldarh %w0, %1" \
76 -     : "=r" (___p1) : "Q" (*p) : "memory"); \
77 +     : "=r" (*(__u16 *)__u.__c) \
78 +     : "Q" (*p) : "memory"); \
77 79   break; \
78 80   case 4: \
79 81   asm volatile ("ldar %w0, %1" \
80 -     : "=r" (___p1) : "Q" (*p) : "memory"); \
82 +     : "=r" (*(__u32 *)__u.__c) \
83 +     : "Q" (*p) : "memory"); \
81 84   break; \
82 85   case 8: \
83 86   asm volatile ("ldar %0, %1" \
84 -     : "=r" (___p1) : "Q" (*p) : "memory"); \
87 +     : "=r" (*(__u64 *)__u.__c) \
88 +     : "Q" (*p) : "memory"); \
85 89   break; \
86 90   } \
87 -     ___p1; \
91 +     __u.__val; \
88 92   })
89 93
90 94   #define read_barrier_depends() do { } while(0)
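Functionally the macro is unchanged: it still emits a load-acquire sized to *p. Storing through a char-array union member, in the style of the READ_ONCE()/WRITE_ONCE() helpers in include/linux/compiler.h, simply avoids declaring a local of typeof(*p) directly. A minimal sketch of the message-passing pattern this primitive exists for (names are illustrative, not from this file):

    static int payload;
    static int ready;

    /* producer: write the payload, then set the flag with release semantics */
    static void publish(int v)
    {
    	payload = v;
    	smp_store_release(&ready, 1);
    }

    /* consumer: acquire-load the flag; if it is set, the payload read
     * below is guaranteed to observe the producer's write */
    static int try_consume(int *out)
    {
    	if (!smp_load_acquire(&ready))
    		return 0;
    	*out = payload;
    	return 1;
    }

On arm64 the acquire half compiles down to the ldar/ldarb/ldarh instructions visible in the switch above.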
+1 -2
arch/arm64/include/asm/compat.h
···
23 23   */
24 24   #include <linux/types.h>
25 25   #include <linux/sched.h>
26 -     #include <linux/ptrace.h>
27 26
28 27   #define COMPAT_USER_HZ 100
29 28   #ifdef __AARCH64EB__
···
233 234   return (u32)(unsigned long)uptr;
234 235   }
235 236
236 -     #define compat_user_stack_pointer() (user_stack_pointer(current_pt_regs()))
237 +     #define compat_user_stack_pointer() (user_stack_pointer(task_pt_regs(current)))
237 238
238 239   static inline void __user *arch_compat_alloc_user_space(long len)
239 240   {
+22 -3
arch/arm64/include/asm/cpufeature.h
···
29 29   #define ARM64_HAS_PAN 4
30 30   #define ARM64_HAS_LSE_ATOMICS 5
31 31   #define ARM64_WORKAROUND_CAVIUM_23154 6
32 +     #define ARM64_WORKAROUND_834220 7
32 33
33 -     #define ARM64_NCAPS 7
34 +     #define ARM64_NCAPS 8
34 35
35 36   #ifndef __ASSEMBLY__
36 37
···
47 46   #define FTR_STRICT true /* SANITY check strict matching required */
48 47   #define FTR_NONSTRICT false /* SANITY check ignored */
49 48
49 +     #define FTR_SIGNED true /* Value should be treated as signed */
50 +     #define FTR_UNSIGNED false /* Value should be treated as unsigned */
51 +
50 52   struct arm64_ftr_bits {
51 -     bool strict; /* CPU Sanity check: strict matching required ? */
53 +     bool sign; /* Value is signed ? */
54 +     bool strict; /* CPU Sanity check: strict matching required ? */
52 55   enum ftr_type type;
53 56   u8 shift;
54 57   u8 width;
···
128 123   return cpuid_feature_extract_field_width(features, field, 4);
129 124   }
130 125
126 +     static inline unsigned int __attribute_const__
127 +     cpuid_feature_extract_unsigned_field_width(u64 features, int field, int width)
128 +     {
129 +     return (u64)(features << (64 - width - field)) >> (64 - width);
130 +     }
131 +
132 +     static inline unsigned int __attribute_const__
133 +     cpuid_feature_extract_unsigned_field(u64 features, int field)
134 +     {
135 +     return cpuid_feature_extract_unsigned_field_width(features, field, 4);
136 +     }
137 +
131 138   static inline u64 arm64_ftr_mask(struct arm64_ftr_bits *ftrp)
132 139   {
133 140   return (u64)GENMASK(ftrp->shift + ftrp->width - 1, ftrp->shift);
···
147 130
148 131   static inline s64 arm64_ftr_value(struct arm64_ftr_bits *ftrp, u64 val)
149 132   {
150 -      return cpuid_feature_extract_field_width(val, ftrp->shift, ftrp->width);
133 +      return ftrp->sign ?
134 +      cpuid_feature_extract_field_width(val, ftrp->shift, ftrp->width) :
135 +      cpuid_feature_extract_unsigned_field_width(val, ftrp->shift, ftrp->width);
151 136   }
152 137
153 138   static inline bool id_aa64mmfr0_mixed_endian_el0(u64 mmfr0)
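The practical difference shows up in small bit-fields whose top bit can legitimately be set: the signed extractor sign-extends, so a 4-bit count field holding 0xf comes back as -1 instead of 15 (this is what bites get_num_brps()/get_num_wrps() below). A standalone toy program mirroring the two helpers makes it concrete:

    #include <stdio.h>
    #include <stdint.h>

    /* Mirrors the kernel helpers for a 4-bit field at 'shift'. */
    static int64_t extract_signed(uint64_t reg, int shift)
    {
    	return (int64_t)(reg << (64 - shift - 4)) >> 60;
    }

    static uint64_t extract_unsigned(uint64_t reg, int shift)
    {
    	return (reg << (64 - shift - 4)) >> 60;
    }

    int main(void)
    {
    	uint64_t dfr0 = 0xf00;	/* a 4-bit field holding 0xf at shift 8 */

    	printf("signed:   %lld\n", (long long)extract_signed(dfr0, 8));		/* -1 */
    	printf("unsigned: %llu\n", (unsigned long long)extract_unsigned(dfr0, 8));	/* 15 */
    	return 0;
    }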
+3 -10
arch/arm64/include/asm/dma-mapping.h
···
18 18
19 19   #ifdef __KERNEL__
20 20
21 -     #include <linux/acpi.h>
22 21   #include <linux/types.h>
23 22   #include <linux/vmalloc.h>
24 23
···
25 26   #include <asm/xen/hypervisor.h>
26 27
27 28   #define DMA_ERROR_CODE (~(dma_addr_t)0)
28 -     extern struct dma_map_ops *dma_ops;
29 29   extern struct dma_map_ops dummy_dma_ops;
30 30
31 31   static inline struct dma_map_ops *__generic_dma_ops(struct device *dev)
32 32   {
33 -     if (unlikely(!dev))
34 -     return dma_ops;
35 -     else if (dev->archdata.dma_ops)
33 +     if (dev && dev->archdata.dma_ops)
36 34   return dev->archdata.dma_ops;
37 -     else if (acpi_disabled)
38 -     return dma_ops;
39 35
40 36   /*
41 -     * When ACPI is enabled, if arch_set_dma_ops is not called,
42 -     * we will disable device DMA capability by setting it
43 -     * to dummy_dma_ops.
37 +     * We expect no ISA devices, and all other DMA masters are expected to
38 +     * have someone call arch_setup_dma_ops at device creation time.
44 39   */
45 40   return &dummy_dma_ops;
46 41   }
+4 -2
arch/arm64/include/asm/hw_breakpoint.h
···
138 138   /* Determine number of BRP registers available. */
139 139   static inline int get_num_brps(void)
140 140   {
141 +     u64 dfr0 = read_system_reg(SYS_ID_AA64DFR0_EL1);
141 142   return 1 +
142 -     cpuid_feature_extract_field(read_system_reg(SYS_ID_AA64DFR0_EL1),
143 +     cpuid_feature_extract_unsigned_field(dfr0,
143 144   ID_AA64DFR0_BRPS_SHIFT);
144 145   }
145 146
146 147   /* Determine number of WRP registers available. */
147 148   static inline int get_num_wrps(void)
148 149   {
150 +     u64 dfr0 = read_system_reg(SYS_ID_AA64DFR0_EL1);
149 151   return 1 +
150 -     cpuid_feature_extract_field(read_system_reg(SYS_ID_AA64DFR0_EL1),
152 +     cpuid_feature_extract_unsigned_field(dfr0,
151 153   ID_AA64DFR0_WRPS_SHIFT);
152 154   }
153 155
+5
arch/arm64/include/asm/irq.h
··· 7 7 8 8 extern void set_handle_irq(void (*handle_irq)(struct pt_regs *)); 9 9 10 + static inline int nr_legacy_irqs(void) 11 + { 12 + return 0; 13 + } 14 + 10 15 #endif
+5 -3
arch/arm64/include/asm/kvm_emulate.h
··· 99 99 *vcpu_cpsr(vcpu) |= COMPAT_PSR_T_BIT; 100 100 } 101 101 102 + /* 103 + * vcpu_reg should always be passed a register number coming from a 104 + * read of ESR_EL2. Otherwise, it may give the wrong result on AArch32 105 + * with banked registers. 106 + */ 102 107 static inline unsigned long *vcpu_reg(const struct kvm_vcpu *vcpu, u8 reg_num) 103 108 { 104 - if (vcpu_mode_is_32bit(vcpu)) 105 - return vcpu_reg32(vcpu, reg_num); 106 - 107 109 return (unsigned long *)&vcpu_gp_regs(vcpu)->regs.regs[reg_num]; 108 110 } 109 111
+1 -1
arch/arm64/include/asm/mmu_context.h
··· 101 101 #define destroy_context(mm) do { } while(0) 102 102 void check_and_switch_context(struct mm_struct *mm, unsigned int cpu); 103 103 104 - #define init_new_context(tsk,mm) ({ atomic64_set(&mm->context.id, 0); 0; }) 104 + #define init_new_context(tsk,mm) ({ atomic64_set(&(mm)->context.id, 0); 0; }) 105 105 106 106 /* 107 107 * This is called when "tsk" is about to enter lazy TLB mode.
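The one-character fix above is the classic macro-hygiene rule: without parentheses around the argument, init_new_context() mis-parses whenever "mm" is a compound expression. A minimal userspace sketch with hypothetical struct names:

    #include <stdio.h>

    struct ctx  { long id; };
    struct mm_s { struct ctx context; };

    #define CTX_ID_BAD(m)  (m->context.id)    /* argument pasted bare */
    #define CTX_ID_GOOD(m) ((m)->context.id)  /* argument parenthesized */

    int main(void)
    {
            struct mm_s a = { { 1 } }, b = { { 2 } };
            int use_a = 0;

            /* CTX_ID_BAD(use_a ? &a : &b) would expand to
             *     (use_a ? &a : &b->context.id)
             * which does not even compile; the fixed form binds correctly. */
            printf("%ld\n", CTX_ID_GOOD(use_a ? &a : &b));   /* prints 2 */
            return 0;
    }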
+1
arch/arm64/include/asm/pgtable.h
··· 81 81 82 82 #define PAGE_KERNEL __pgprot(_PAGE_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_WRITE) 83 83 #define PAGE_KERNEL_RO __pgprot(_PAGE_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_RDONLY) 84 + #define PAGE_KERNEL_ROX __pgprot(_PAGE_DEFAULT | PTE_UXN | PTE_DIRTY | PTE_RDONLY) 84 85 #define PAGE_KERNEL_EXEC __pgprot(_PAGE_DEFAULT | PTE_UXN | PTE_DIRTY | PTE_WRITE) 85 86 #define PAGE_KERNEL_EXEC_CONT __pgprot(_PAGE_DEFAULT | PTE_UXN | PTE_DIRTY | PTE_WRITE | PTE_CONT) 86 87
+9
arch/arm64/kernel/cpu_errata.c
··· 75 75 (1 << MIDR_VARIANT_SHIFT) | 2), 76 76 }, 77 77 #endif 78 + #ifdef CONFIG_ARM64_ERRATUM_834220 79 + { 80 + /* Cortex-A57 r0p0 - r1p2 */ 81 + .desc = "ARM erratum 834220", 82 + .capability = ARM64_WORKAROUND_834220, 83 + MIDR_RANGE(MIDR_CORTEX_A57, 0x00, 84 + (1 << MIDR_VARIANT_SHIFT) | 2), 85 + }, 86 + #endif 78 87 #ifdef CONFIG_ARM64_ERRATUM_845719 79 88 { 80 89 /* Cortex-A53 r0p[01234] */
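MIDR_RANGE() bounds are packed variant/revision pairs, so the new entry's upper bound (1 << MIDR_VARIANT_SHIFT) | 2 reads as r1p2, matching the "Cortex-A57 r0p0 - r1p2" comment. A small decode sketch (20 is the variant shift in the arm64 MIDR_EL1 layout):

    #include <stdio.h>

    #define MIDR_VARIANT_SHIFT 20

    int main(void)
    {
            unsigned int max = (1u << MIDR_VARIANT_SHIFT) | 2;

            printf("r%up%u\n",
                   (max >> MIDR_VARIANT_SHIFT) & 0xf,   /* variant: 1 */
                   max & 0xf);                          /* revision: 2 */
            return 0;
    }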
+23 -14
arch/arm64/kernel/cpufeature.c
··· 44 44 45 45 DECLARE_BITMAP(cpu_hwcaps, ARM64_NCAPS); 46 46 47 - #define ARM64_FTR_BITS(STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) \ 47 + #define __ARM64_FTR_BITS(SIGNED, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) \ 48 48 { \ 49 + .sign = SIGNED, \ 49 50 .strict = STRICT, \ 50 51 .type = TYPE, \ 51 52 .shift = SHIFT, \ 52 53 .width = WIDTH, \ 53 54 .safe_val = SAFE_VAL, \ 54 55 } 56 + 57 + /* Define a feature with signed values */ 58 + #define ARM64_FTR_BITS(STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) \ 59 + __ARM64_FTR_BITS(FTR_SIGNED, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) 60 + 61 + /* Define a feature with unsigned value */ 62 + #define U_ARM64_FTR_BITS(STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) \ 63 + __ARM64_FTR_BITS(FTR_UNSIGNED, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) 55 64 56 65 #define ARM64_FTR_END \ 57 66 { \ ··· 108 99 * Differing PARange is fine as long as all peripherals and memory are mapped 109 100 * within the minimum PARange of all CPUs 110 101 */ 111 - ARM64_FTR_BITS(FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_PARANGE_SHIFT, 4, 0), 102 + U_ARM64_FTR_BITS(FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_PARANGE_SHIFT, 4, 0), 112 103 ARM64_FTR_END, 113 104 }; 114 105 ··· 124 115 }; 125 116 126 117 static struct arm64_ftr_bits ftr_ctr[] = { 127 - ARM64_FTR_BITS(FTR_STRICT, FTR_EXACT, 31, 1, 1), /* RAO */ 118 + U_ARM64_FTR_BITS(FTR_STRICT, FTR_EXACT, 31, 1, 1), /* RAO */ 128 119 ARM64_FTR_BITS(FTR_STRICT, FTR_EXACT, 28, 3, 0), 129 - ARM64_FTR_BITS(FTR_STRICT, FTR_HIGHER_SAFE, 24, 4, 0), /* CWG */ 130 - ARM64_FTR_BITS(FTR_STRICT, FTR_LOWER_SAFE, 20, 4, 0), /* ERG */ 131 - ARM64_FTR_BITS(FTR_STRICT, FTR_LOWER_SAFE, 16, 4, 1), /* DminLine */ 120 + U_ARM64_FTR_BITS(FTR_STRICT, FTR_HIGHER_SAFE, 24, 4, 0), /* CWG */ 121 + U_ARM64_FTR_BITS(FTR_STRICT, FTR_LOWER_SAFE, 20, 4, 0), /* ERG */ 122 + U_ARM64_FTR_BITS(FTR_STRICT, FTR_LOWER_SAFE, 16, 4, 1), /* DminLine */ 132 123 /* 133 124 * Linux can handle differing I-cache policies. Userspace JITs will 134 125 * make use of *minLine 135 126 */ 136 - ARM64_FTR_BITS(FTR_NONSTRICT, FTR_EXACT, 14, 2, 0), /* L1Ip */ 127 + U_ARM64_FTR_BITS(FTR_NONSTRICT, FTR_EXACT, 14, 2, 0), /* L1Ip */ 137 128 ARM64_FTR_BITS(FTR_STRICT, FTR_EXACT, 4, 10, 0), /* RAZ */ 138 - ARM64_FTR_BITS(FTR_STRICT, FTR_LOWER_SAFE, 0, 4, 0), /* IminLine */ 129 + U_ARM64_FTR_BITS(FTR_STRICT, FTR_LOWER_SAFE, 0, 4, 0), /* IminLine */ 139 130 ARM64_FTR_END, 140 131 }; 141 132 ··· 153 144 154 145 static struct arm64_ftr_bits ftr_id_aa64dfr0[] = { 155 146 ARM64_FTR_BITS(FTR_STRICT, FTR_EXACT, 32, 32, 0), 156 - ARM64_FTR_BITS(FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_CTX_CMPS_SHIFT, 4, 0), 157 - ARM64_FTR_BITS(FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_WRPS_SHIFT, 4, 0), 158 - ARM64_FTR_BITS(FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_BRPS_SHIFT, 4, 0), 159 - ARM64_FTR_BITS(FTR_STRICT, FTR_EXACT, ID_AA64DFR0_PMUVER_SHIFT, 4, 0), 160 - ARM64_FTR_BITS(FTR_STRICT, FTR_EXACT, ID_AA64DFR0_TRACEVER_SHIFT, 4, 0), 161 - ARM64_FTR_BITS(FTR_STRICT, FTR_EXACT, ID_AA64DFR0_DEBUGVER_SHIFT, 4, 0x6), 147 + U_ARM64_FTR_BITS(FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_CTX_CMPS_SHIFT, 4, 0), 148 + U_ARM64_FTR_BITS(FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_WRPS_SHIFT, 4, 0), 149 + U_ARM64_FTR_BITS(FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_BRPS_SHIFT, 4, 0), 150 + U_ARM64_FTR_BITS(FTR_STRICT, FTR_EXACT, ID_AA64DFR0_PMUVER_SHIFT, 4, 0), 151 + U_ARM64_FTR_BITS(FTR_STRICT, FTR_EXACT, ID_AA64DFR0_TRACEVER_SHIFT, 4, 0), 152 + U_ARM64_FTR_BITS(FTR_STRICT, FTR_EXACT, ID_AA64DFR0_DEBUGVER_SHIFT, 4, 0x6), 162 153 ARM64_FTR_END, 163 154 }; 164 155
+5
arch/arm64/kernel/cpuinfo.c
··· 30 30 #include <linux/seq_file.h> 31 31 #include <linux/sched.h> 32 32 #include <linux/smp.h> 33 + #include <linux/delay.h> 33 34 34 35 /* 35 36 * In case the boot CPU is hotpluggable, we record its initial state and ··· 112 111 * "processor". Give glibc what it expects. 113 112 */ 114 113 seq_printf(m, "processor\t: %d\n", i); 114 + 115 + seq_printf(m, "BogoMIPS\t: %lu.%02lu\n", 116 + loops_per_jiffy / (500000UL/HZ), 117 + loops_per_jiffy / (5000UL/HZ) % 100); 115 118 116 119 /* 117 120 * Dump out the common processor features in a single line.
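The added BogoMIPS line prints loops_per_jiffy scaled to "bogo-MIPS" (lpj * HZ / 500000) as an integer part plus two decimal digits. A worked standalone example with assumed values:

    #include <stdio.h>

    #define HZ 100UL    /* assumed tick rate for this example */

    int main(void)
    {
            unsigned long loops_per_jiffy = 1234567;

            /* 1234567 / 5000 = 246 and 1234567 / 50 % 100 = 91,
             * so this prints "BogoMIPS : 246.91". */
            printf("BogoMIPS\t: %lu.%02lu\n",
                   loops_per_jiffy / (500000UL / HZ),
                   loops_per_jiffy / (5000UL / HZ) % 100);
            return 0;
    }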
+24 -21
arch/arm64/kernel/efi.c
··· 127 127 table_size = sizeof(efi_config_table_64_t) * efi.systab->nr_tables; 128 128 config_tables = early_memremap(efi_to_phys(efi.systab->tables), 129 129 table_size); 130 - 130 + if (config_tables == NULL) { 131 + pr_warn("Unable to map EFI config table array.\n"); 132 + retval = -ENOMEM; 133 + goto out; 134 + } 131 135 retval = efi_config_parse_tables(config_tables, efi.systab->nr_tables, 132 136 sizeof(efi_config_table_64_t), NULL); 133 137 ··· 213 209 PAGE_ALIGN(params.mmap_size + (params.mmap & ~PAGE_MASK))); 214 210 memmap.phys_map = params.mmap; 215 211 memmap.map = early_memremap(params.mmap, params.mmap_size); 212 + if (memmap.map == NULL) { 213 + /* 214 + * If we are booting via UEFI, the UEFI memory map is the only 215 + * description of memory we have, so there is little point in 216 + * proceeding if we cannot access it. 217 + */ 218 + panic("Unable to map EFI memory map.\n"); 219 + } 216 220 memmap.map_end = memmap.map + params.mmap_size; 217 221 memmap.desc_size = params.desc_size; 218 222 memmap.desc_version = params.desc_ver; ··· 236 224 { 237 225 efi_memory_desc_t *md; 238 226 227 + init_new_context(NULL, &efi_mm); 228 + 239 229 for_each_efi_memory_desc(&memmap, md) { 240 - u64 paddr, npages, size; 241 230 pgprot_t prot; 242 231 243 232 if (!(md->attribute & EFI_MEMORY_RUNTIME)) 244 233 continue; 245 234 if (md->virt_addr == 0) 246 235 return false; 247 - 248 - paddr = md->phys_addr; 249 - npages = md->num_pages; 250 - memrange_efi_to_native(&paddr, &npages); 251 - size = npages << PAGE_SHIFT; 252 236 253 237 pr_info(" EFI remap 0x%016llx => %p\n", 254 238 md->phys_addr, (void *)md->virt_addr); ··· 262 254 else 263 255 prot = PAGE_KERNEL; 264 256 265 - create_pgd_mapping(&efi_mm, paddr, md->virt_addr, size, prot); 257 + create_pgd_mapping(&efi_mm, md->phys_addr, md->virt_addr, 258 + md->num_pages << EFI_PAGE_SHIFT, 259 + __pgprot(pgprot_val(prot) | PTE_NG)); 266 260 } 267 261 return true; 268 262 } ··· 280 270 281 271 if (!efi_enabled(EFI_BOOT)) { 282 272 pr_info("EFI services will not be available.\n"); 283 - return -1; 273 + return 0; 284 274 } 285 275 286 276 if (efi_runtime_disabled()) { 287 277 pr_info("EFI runtime services will be disabled.\n"); 288 - return -1; 278 + return 0; 289 279 } 290 280 291 281 pr_info("Remapping and enabling EFI services.\n"); ··· 295 285 mapsize); 296 286 if (!memmap.map) { 297 287 pr_err("Failed to remap EFI memory map\n"); 298 - return -1; 288 + return -ENOMEM; 299 289 } 300 290 memmap.map_end = memmap.map + mapsize; 301 291 efi.memmap = &memmap; ··· 304 294 sizeof(efi_system_table_t)); 305 295 if (!efi.systab) { 306 296 pr_err("Failed to remap EFI System Table\n"); 307 - return -1; 297 + return -ENOMEM; 308 298 } 309 299 set_bit(EFI_SYSTEM_TABLES, &efi.flags); 310 300 311 301 if (!efi_virtmap_init()) { 312 302 pr_err("No UEFI virtual mapping was installed -- runtime services will not be available\n"); 313 - return -1; 303 + return -ENOMEM; 314 304 } 315 305 316 306 /* Set up runtime services function pointers */ ··· 339 329 340 330 static void efi_set_pgd(struct mm_struct *mm) 341 331 { 342 - if (mm == &init_mm) 343 - cpu_set_reserved_ttbr0(); 344 - else 345 - cpu_switch_mm(mm->pgd, mm); 346 - 347 - local_flush_tlb_all(); 348 - if (icache_is_aivivt()) 349 - __local_flush_icache_all(); 332 + switch_mm(NULL, mm, NULL); 350 333 } 351 334 352 335 void efi_virtmap_load(void)
+10
arch/arm64/kernel/suspend.c
··· 1 + #include <linux/ftrace.h> 1 2 #include <linux/percpu.h> 2 3 #include <linux/slab.h> 3 4 #include <asm/cacheflush.h> ··· 72 71 local_dbg_save(flags); 73 72 74 73 /* 74 + * Function graph tracer state gets inconsistent when the kernel 75 + * calls functions that never return (aka suspend finishers) hence 76 + * disable graph tracing during their execution. 77 + */ 78 + pause_graph_tracing(); 79 + 80 + /* 75 81 * mm context saved on the stack, it will be restored when 76 82 * the cpu comes out of reset through the identity mapped 77 83 * page tables, so that the thread address space is properly ··· 118 110 if (hw_breakpoint_restore) 119 111 hw_breakpoint_restore(NULL); 120 112 } 113 + 114 + unpause_graph_tracing(); 121 115 122 116 /* 123 117 * Restore pstate flags. OS lock and mdscr have been already
+12 -2
arch/arm64/kvm/hyp.S
··· 864 864 ENDPROC(__kvm_flush_vm_context) 865 865 866 866 __kvm_hyp_panic: 867 + // Stash PAR_EL1 before corrupting it in __restore_sysregs 868 + mrs x0, par_el1 869 + push x0, xzr 870 + 867 871 // Guess the context by looking at VTTBR: 868 872 // If zero, then we're already a host. 869 873 // Otherwise restore a minimal host context before panicing. ··· 902 898 mrs x3, esr_el2 903 899 mrs x4, far_el2 904 900 mrs x5, hpfar_el2 905 - mrs x6, par_el1 901 + pop x6, xzr // active context PAR_EL1 906 902 mrs x7, tpidr_el2 907 903 908 904 mov lr, #(PSR_F_BIT | PSR_I_BIT | PSR_A_BIT | PSR_D_BIT |\ ··· 918 914 ENDPROC(__kvm_hyp_panic) 919 915 920 916 __hyp_panic_str: 921 - .ascii "HYP panic:\nPS:%08x PC:%p ESR:%p\nFAR:%p HPFAR:%p PAR:%p\nVCPU:%p\n\0" 917 + .ascii "HYP panic:\nPS:%08x PC:%016x ESR:%08x\nFAR:%016x HPFAR:%016x PAR:%016x\nVCPU:%p\n\0" 922 918 923 919 .align 2 924 920 ··· 1019 1015 b.ne 1f // Not an abort we care about 1020 1016 1021 1017 /* This is an abort. Check for permission fault */ 1018 + alternative_if_not ARM64_WORKAROUND_834220 1022 1019 and x2, x1, #ESR_ELx_FSC_TYPE 1023 1020 cmp x2, #FSC_PERM 1024 1021 b.ne 1f // Not a permission fault 1022 + alternative_else 1023 + nop // Use the permission fault path to 1024 + nop // check for a valid S1 translation, 1025 + nop // regardless of the ESR value. 1026 + alternative_endif 1025 1027 1026 1028 /* 1027 1029 * Check for Stage-1 page table walk, which is guaranteed
+1 -1
arch/arm64/kvm/inject_fault.c
··· 48 48 49 49 /* Note: These now point to the banked copies */ 50 50 *vcpu_spsr(vcpu) = new_spsr_value; 51 - *vcpu_reg(vcpu, 14) = *vcpu_pc(vcpu) + return_offset; 51 + *vcpu_reg32(vcpu, 14) = *vcpu_pc(vcpu) + return_offset; 52 52 53 53 /* Branch to exception vector */ 54 54 if (sctlr & (1 << 13))
+26 -12
arch/arm64/mm/context.c
··· 76 76 __flush_icache_all(); 77 77 } 78 78 79 - static int is_reserved_asid(u64 asid) 79 + static bool check_update_reserved_asid(u64 asid, u64 newasid) 80 80 { 81 81 int cpu; 82 - for_each_possible_cpu(cpu) 83 - if (per_cpu(reserved_asids, cpu) == asid) 84 - return 1; 85 - return 0; 82 + bool hit = false; 83 + 84 + /* 85 + * Iterate over the set of reserved ASIDs looking for a match. 86 + * If we find one, then we can update our mm to use newasid 87 + * (i.e. the same ASID in the current generation) but we can't 88 + * exit the loop early, since we need to ensure that all copies 89 + * of the old ASID are updated to reflect the mm. Failure to do 90 + * so could result in us missing the reserved ASID in a future 91 + * generation. 92 + */ 93 + for_each_possible_cpu(cpu) { 94 + if (per_cpu(reserved_asids, cpu) == asid) { 95 + hit = true; 96 + per_cpu(reserved_asids, cpu) = newasid; 97 + } 98 + } 99 + 100 + return hit; 86 101 } 87 102 88 103 static u64 new_context(struct mm_struct *mm, unsigned int cpu) ··· 107 92 u64 generation = atomic64_read(&asid_generation); 108 93 109 94 if (asid != 0) { 95 + u64 newasid = generation | (asid & ~ASID_MASK); 96 + 110 97 /* 111 98 * If our current ASID was active during a rollover, we 112 99 * can continue to use it and this was just a false alarm. 113 100 */ 114 - if (is_reserved_asid(asid)) 115 - return generation | (asid & ~ASID_MASK); 101 + if (check_update_reserved_asid(asid, newasid)) 102 + return newasid; 116 103 117 104 /* 118 105 * We had a valid ASID in a previous life, so try to re-use ··· 122 105 */ 123 106 asid &= ~ASID_MASK; 124 107 if (!__test_and_set_bit(asid, asid_map)) 125 - goto bump_gen; 108 + return newasid; 126 109 } 127 110 128 111 /* ··· 146 129 set_asid: 147 130 __set_bit(asid, asid_map); 148 131 cur_idx = asid; 149 - 150 - bump_gen: 151 - asid |= generation; 152 - return asid; 132 + return asid | generation; 153 133 } 154 134 155 135 void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
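The allocator tags every context ID with a generation in the bits above ASID_MASK; on rollover only the generation changes, and check_update_reserved_asid() re-tags reserved ASIDs rather than reallocating them. A toy model of the newasid computation (16-bit ASIDs assumed for illustration):

    #include <stdint.h>
    #include <stdio.h>

    #define ASID_BITS 16
    #define ASID_MASK (~0ULL << ASID_BITS)

    int main(void)
    {
            uint64_t generation = 3ULL << ASID_BITS;       /* current epoch */
            uint64_t ctx_id = (2ULL << ASID_BITS) | 0x42;  /* stale epoch */

            /* Keep the hardware ASID, adopt the live generation. */
            uint64_t newasid = generation | (ctx_id & ~ASID_MASK);

            printf("hw asid 0x%llx -> context id 0x%llx\n",
                   (unsigned long long)(ctx_id & ~ASID_MASK),
                   (unsigned long long)newasid);
            return 0;
    }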
+17 -18
arch/arm64/mm/dma-mapping.c
··· 18 18 */ 19 19 20 20 #include <linux/gfp.h> 21 + #include <linux/acpi.h> 21 22 #include <linux/export.h> 22 23 #include <linux/slab.h> 23 24 #include <linux/genalloc.h> ··· 28 27 #include <linux/swiotlb.h> 29 28 30 29 #include <asm/cacheflush.h> 31 - 32 - struct dma_map_ops *dma_ops; 33 - EXPORT_SYMBOL(dma_ops); 34 30 35 31 static pgprot_t __get_dma_pgprot(struct dma_attrs *attrs, pgprot_t prot, 36 32 bool coherent) ··· 513 515 514 516 static int __init arm64_dma_init(void) 515 517 { 516 - int ret; 517 - 518 - dma_ops = &swiotlb_dma_ops; 519 - 520 - ret = atomic_pool_init(); 521 - 522 - return ret; 518 + return atomic_pool_init(); 523 519 } 524 520 arch_initcall(arm64_dma_init); 525 521 ··· 544 552 { 545 553 bool coherent = is_device_dma_coherent(dev); 546 554 int ioprot = dma_direction_to_prot(DMA_BIDIRECTIONAL, coherent); 555 + size_t iosize = size; 547 556 void *addr; 548 557 549 558 if (WARN(!dev, "cannot create IOMMU mapping for unknown device\n")) 550 559 return NULL; 560 + 561 + size = PAGE_ALIGN(size); 562 + 551 563 /* 552 564 * Some drivers rely on this, and we probably don't want the 553 565 * possibility of stale kernel data being read by devices anyway. ··· 562 566 struct page **pages; 563 567 pgprot_t prot = __get_dma_pgprot(attrs, PAGE_KERNEL, coherent); 564 568 565 - pages = iommu_dma_alloc(dev, size, gfp, ioprot, handle, 569 + pages = iommu_dma_alloc(dev, iosize, gfp, ioprot, handle, 566 570 flush_page); 567 571 if (!pages) 568 572 return NULL; ··· 570 574 addr = dma_common_pages_remap(pages, size, VM_USERMAP, prot, 571 575 __builtin_return_address(0)); 572 576 if (!addr) 573 - iommu_dma_free(dev, pages, size, handle); 577 + iommu_dma_free(dev, pages, iosize, handle); 574 578 } else { 575 579 struct page *page; 576 580 /* ··· 587 591 if (!addr) 588 592 return NULL; 589 593 590 - *handle = iommu_dma_map_page(dev, page, 0, size, ioprot); 594 + *handle = iommu_dma_map_page(dev, page, 0, iosize, ioprot); 591 595 if (iommu_dma_mapping_error(dev, *handle)) { 592 596 if (coherent) 593 597 __free_pages(page, get_order(size)); ··· 602 606 static void __iommu_free_attrs(struct device *dev, size_t size, void *cpu_addr, 603 607 dma_addr_t handle, struct dma_attrs *attrs) 604 608 { 609 + size_t iosize = size; 610 + 611 + size = PAGE_ALIGN(size); 605 612 /* 606 613 * @cpu_addr will be one of 3 things depending on how it was allocated: 607 614 * - A remapped array of pages from iommu_dma_alloc(), for all ··· 616 617 * Hence how dodgy the below logic looks... 
617 618 */ 618 619 if (__in_atomic_pool(cpu_addr, size)) { 619 - iommu_dma_unmap_page(dev, handle, size, 0, NULL); 620 + iommu_dma_unmap_page(dev, handle, iosize, 0, NULL); 620 621 __free_from_pool(cpu_addr, size); 621 622 } else if (is_vmalloc_addr(cpu_addr)){ 622 623 struct vm_struct *area = find_vm_area(cpu_addr); 623 624 624 625 if (WARN_ON(!area || !area->pages)) 625 626 return; 626 - iommu_dma_free(dev, area->pages, size, &handle); 627 + iommu_dma_free(dev, area->pages, iosize, &handle); 627 628 dma_common_free_remap(cpu_addr, size, VM_USERMAP); 628 629 } else { 629 - iommu_dma_unmap_page(dev, handle, size, 0, NULL); 630 + iommu_dma_unmap_page(dev, handle, iosize, 0, NULL); 630 631 __free_pages(virt_to_page(cpu_addr), get_order(size)); 631 632 } 632 633 } ··· 983 984 void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size, 984 985 struct iommu_ops *iommu, bool coherent) 985 986 { 986 - if (!acpi_disabled && !dev->archdata.dma_ops) 987 - dev->archdata.dma_ops = dma_ops; 987 + if (!dev->archdata.dma_ops) 988 + dev->archdata.dma_ops = &swiotlb_dma_ops; 988 989 989 990 dev->archdata.dma_coherent = coherent; 990 991 __iommu_setup_dma_ops(dev, dma_base, size, iommu);
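The iosize/size split above exists because the CPU-side remap works on whole pages while the IOMMU bookkeeping must see exactly the length it was handed at map time; freeing with the page-aligned length would ask the IOMMU layer to unmap more than it accounted. A toy illustration of the two lengths:

    #include <stdio.h>

    #define PAGE_SIZE 4096UL
    #define PAGE_ALIGN(x) (((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

    int main(void)
    {
            unsigned long size = 0x1800;    /* 6 KiB requested */
            unsigned long iosize = size;    /* length the IOMMU layer sees */

            size = PAGE_ALIGN(size);        /* 8 KiB for the CPU mapping */

            printf("cpu length 0x%lx, iommu length 0x%lx\n", size, iosize);
            return 0;
    }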
+14 -14
arch/arm64/mm/fault.c
··· 393 393 { do_translation_fault, SIGSEGV, SEGV_MAPERR, "level 1 translation fault" }, 394 394 { do_translation_fault, SIGSEGV, SEGV_MAPERR, "level 2 translation fault" }, 395 395 { do_page_fault, SIGSEGV, SEGV_MAPERR, "level 3 translation fault" }, 396 - { do_bad, SIGBUS, 0, "reserved access flag fault" }, 396 + { do_bad, SIGBUS, 0, "unknown 8" }, 397 397 { do_page_fault, SIGSEGV, SEGV_ACCERR, "level 1 access flag fault" }, 398 398 { do_page_fault, SIGSEGV, SEGV_ACCERR, "level 2 access flag fault" }, 399 399 { do_page_fault, SIGSEGV, SEGV_ACCERR, "level 3 access flag fault" }, 400 - { do_bad, SIGBUS, 0, "reserved permission fault" }, 400 + { do_bad, SIGBUS, 0, "unknown 12" }, 401 401 { do_page_fault, SIGSEGV, SEGV_ACCERR, "level 1 permission fault" }, 402 402 { do_page_fault, SIGSEGV, SEGV_ACCERR, "level 2 permission fault" }, 403 403 { do_page_fault, SIGSEGV, SEGV_ACCERR, "level 3 permission fault" }, 404 404 { do_bad, SIGBUS, 0, "synchronous external abort" }, 405 - { do_bad, SIGBUS, 0, "asynchronous external abort" }, 405 + { do_bad, SIGBUS, 0, "unknown 17" }, 406 406 { do_bad, SIGBUS, 0, "unknown 18" }, 407 407 { do_bad, SIGBUS, 0, "unknown 19" }, 408 408 { do_bad, SIGBUS, 0, "synchronous abort (translation table walk)" }, ··· 410 410 { do_bad, SIGBUS, 0, "synchronous abort (translation table walk)" }, 411 411 { do_bad, SIGBUS, 0, "synchronous abort (translation table walk)" }, 412 412 { do_bad, SIGBUS, 0, "synchronous parity error" }, 413 - { do_bad, SIGBUS, 0, "asynchronous parity error" }, 413 + { do_bad, SIGBUS, 0, "unknown 25" }, 414 414 { do_bad, SIGBUS, 0, "unknown 26" }, 415 415 { do_bad, SIGBUS, 0, "unknown 27" }, 416 - { do_bad, SIGBUS, 0, "synchronous parity error (translation table walk" }, 417 - { do_bad, SIGBUS, 0, "synchronous parity error (translation table walk" }, 418 - { do_bad, SIGBUS, 0, "synchronous parity error (translation table walk" }, 419 - { do_bad, SIGBUS, 0, "synchronous parity error (translation table walk" }, 416 + { do_bad, SIGBUS, 0, "synchronous parity error (translation table walk)" }, 417 + { do_bad, SIGBUS, 0, "synchronous parity error (translation table walk)" }, 418 + { do_bad, SIGBUS, 0, "synchronous parity error (translation table walk)" }, 419 + { do_bad, SIGBUS, 0, "synchronous parity error (translation table walk)" }, 420 420 { do_bad, SIGBUS, 0, "unknown 32" }, 421 421 { do_bad, SIGBUS, BUS_ADRALN, "alignment fault" }, 422 - { do_bad, SIGBUS, 0, "debug event" }, 422 + { do_bad, SIGBUS, 0, "unknown 34" }, 423 423 { do_bad, SIGBUS, 0, "unknown 35" }, 424 424 { do_bad, SIGBUS, 0, "unknown 36" }, 425 425 { do_bad, SIGBUS, 0, "unknown 37" }, ··· 433 433 { do_bad, SIGBUS, 0, "unknown 45" }, 434 434 { do_bad, SIGBUS, 0, "unknown 46" }, 435 435 { do_bad, SIGBUS, 0, "unknown 47" }, 436 - { do_bad, SIGBUS, 0, "unknown 48" }, 436 + { do_bad, SIGBUS, 0, "TLB conflict abort" }, 437 437 { do_bad, SIGBUS, 0, "unknown 49" }, 438 438 { do_bad, SIGBUS, 0, "unknown 50" }, 439 439 { do_bad, SIGBUS, 0, "unknown 51" }, 440 440 { do_bad, SIGBUS, 0, "implementation fault (lockdown abort)" }, 441 - { do_bad, SIGBUS, 0, "unknown 53" }, 441 + { do_bad, SIGBUS, 0, "implementation fault (unsupported exclusive)" }, 442 442 { do_bad, SIGBUS, 0, "unknown 54" }, 443 443 { do_bad, SIGBUS, 0, "unknown 55" }, 444 444 { do_bad, SIGBUS, 0, "unknown 56" }, 445 445 { do_bad, SIGBUS, 0, "unknown 57" }, 446 - { do_bad, SIGBUS, 0, "implementation fault (coprocessor abort)" }, 446 + { do_bad, SIGBUS, 0, "unknown 58" }, 447 447 { do_bad, SIGBUS, 0, "unknown 59" }, 448 448 { do_bad, 
SIGBUS, 0, "unknown 60" }, 449 - { do_bad, SIGBUS, 0, "unknown 61" }, 450 - { do_bad, SIGBUS, 0, "unknown 62" }, 449 + { do_bad, SIGBUS, 0, "section domain fault" }, 450 + { do_bad, SIGBUS, 0, "page domain fault" }, 451 451 { do_bad, SIGBUS, 0, "unknown 63" }, 452 452 }; 453 453
+21 -70
arch/arm64/mm/mmu.c
··· 64 64 65 65 static void __init *early_alloc(unsigned long sz) 66 66 { 67 - void *ptr = __va(memblock_alloc(sz, sz)); 68 - BUG_ON(!ptr); 67 + phys_addr_t phys; 68 + void *ptr; 69 + 70 + phys = memblock_alloc(sz, sz); 71 + BUG_ON(!phys); 72 + ptr = __va(phys); 69 73 memset(ptr, 0, sz); 70 74 return ptr; 71 75 } ··· 85 81 do { 86 82 /* 87 83 * Need to have the least restrictive permissions available 88 - * permissions will be fixed up later. Default the new page 89 - * range as contiguous ptes. 84 + * permissions will be fixed up later 90 85 */ 91 - set_pte(pte, pfn_pte(pfn, PAGE_KERNEL_EXEC_CONT)); 86 + set_pte(pte, pfn_pte(pfn, PAGE_KERNEL_EXEC)); 92 87 pfn++; 93 88 } while (pte++, i++, i < PTRS_PER_PTE); 94 89 } 95 90 96 - /* 97 - * Given a PTE with the CONT bit set, determine where the CONT range 98 - * starts, and clear the entire range of PTE CONT bits. 99 - */ 100 - static void clear_cont_pte_range(pte_t *pte, unsigned long addr) 101 - { 102 - int i; 103 - 104 - pte -= CONT_RANGE_OFFSET(addr); 105 - for (i = 0; i < CONT_PTES; i++) { 106 - set_pte(pte, pte_mknoncont(*pte)); 107 - pte++; 108 - } 109 - flush_tlb_all(); 110 - } 111 - 112 - /* 113 - * Given a range of PTEs set the pfn and provided page protection flags 114 - */ 115 - static void __populate_init_pte(pte_t *pte, unsigned long addr, 116 - unsigned long end, phys_addr_t phys, 117 - pgprot_t prot) 118 - { 119 - unsigned long pfn = __phys_to_pfn(phys); 120 - 121 - do { 122 - /* clear all the bits except the pfn, then apply the prot */ 123 - set_pte(pte, pfn_pte(pfn, prot)); 124 - pte++; 125 - pfn++; 126 - addr += PAGE_SIZE; 127 - } while (addr != end); 128 - } 129 - 130 91 static void alloc_init_pte(pmd_t *pmd, unsigned long addr, 131 - unsigned long end, phys_addr_t phys, 92 + unsigned long end, unsigned long pfn, 132 93 pgprot_t prot, 133 94 void *(*alloc)(unsigned long size)) 134 95 { 135 96 pte_t *pte; 136 - unsigned long next; 137 97 138 98 if (pmd_none(*pmd) || pmd_sect(*pmd)) { 139 99 pte = alloc(PTRS_PER_PTE * sizeof(pte_t)); ··· 110 142 111 143 pte = pte_offset_kernel(pmd, addr); 112 144 do { 113 - next = min(end, (addr + CONT_SIZE) & CONT_MASK); 114 - if (((addr | next | phys) & ~CONT_MASK) == 0) { 115 - /* a block of CONT_PTES */ 116 - __populate_init_pte(pte, addr, next, phys, 117 - __pgprot(pgprot_val(prot) | PTE_CONT)); 118 - } else { 119 - /* 120 - * If the range being split is already inside of a 121 - * contiguous range but this PTE isn't going to be 122 - * contiguous, then we want to unmark the adjacent 123 - * ranges, then update the portion of the range we 124 - * are interrested in. 125 - */ 126 - clear_cont_pte_range(pte, addr); 127 - __populate_init_pte(pte, addr, next, phys, prot); 128 - } 129 - 130 - pte += (next - addr) >> PAGE_SHIFT; 131 - phys += next - addr; 132 - addr = next; 133 - } while (addr != end); 145 + set_pte(pte, pfn_pte(pfn, prot)); 146 + pfn++; 147 + } while (pte++, addr += PAGE_SIZE, addr != end); 134 148 } 135 149 136 150 static void split_pud(pud_t *old_pud, pmd_t *pmd) ··· 173 223 } 174 224 } 175 225 } else { 176 - alloc_init_pte(pmd, addr, next, phys, prot, alloc); 226 + alloc_init_pte(pmd, addr, next, __phys_to_pfn(phys), 227 + prot, alloc); 177 228 } 178 229 phys += next - addr; 179 230 } while (pmd++, addr = next, addr != end); ··· 313 362 * for now. 
This will get more fine grained later once all memory 314 363 * is mapped 315 364 */ 316 - unsigned long kernel_x_start = round_down(__pa(_stext), SECTION_SIZE); 317 - unsigned long kernel_x_end = round_up(__pa(__init_end), SECTION_SIZE); 365 + unsigned long kernel_x_start = round_down(__pa(_stext), SWAPPER_BLOCK_SIZE); 366 + unsigned long kernel_x_end = round_up(__pa(__init_end), SWAPPER_BLOCK_SIZE); 318 367 319 368 if (end < kernel_x_start) { 320 369 create_mapping(start, __phys_to_virt(start), ··· 402 451 { 403 452 #ifdef CONFIG_DEBUG_RODATA 404 453 /* now that we are actually fully mapped, make the start/end more fine grained */ 405 - if (!IS_ALIGNED((unsigned long)_stext, SECTION_SIZE)) { 454 + if (!IS_ALIGNED((unsigned long)_stext, SWAPPER_BLOCK_SIZE)) { 406 455 unsigned long aligned_start = round_down(__pa(_stext), 407 - SECTION_SIZE); 456 + SWAPPER_BLOCK_SIZE); 408 457 409 458 create_mapping(aligned_start, __phys_to_virt(aligned_start), 410 459 __pa(_stext) - aligned_start, 411 460 PAGE_KERNEL); 412 461 } 413 462 414 - if (!IS_ALIGNED((unsigned long)__init_end, SECTION_SIZE)) { 463 + if (!IS_ALIGNED((unsigned long)__init_end, SWAPPER_BLOCK_SIZE)) { 415 464 unsigned long aligned_end = round_up(__pa(__init_end), 416 - SECTION_SIZE); 465 + SWAPPER_BLOCK_SIZE); 417 466 create_mapping(__pa(__init_end), (unsigned long)__init_end, 418 467 aligned_end - __pa(__init_end), 419 468 PAGE_KERNEL); ··· 426 475 { 427 476 create_mapping_late(__pa(_stext), (unsigned long)_stext, 428 477 (unsigned long)_etext - (unsigned long)_stext, 429 - PAGE_KERNEL_EXEC | PTE_RDONLY); 478 + PAGE_KERNEL_ROX); 430 479 431 480 } 432 481 #endif
+32 -15
arch/arm64/net/bpf_jit_comp.c
··· 139 139 /* Stack must be multiples of 16B */ 140 140 #define STACK_ALIGN(sz) (((sz) + 15) & ~15) 141 141 142 + #define _STACK_SIZE \ 143 + (MAX_BPF_STACK \ 144 + + 4 /* extra for skb_copy_bits buffer */) 145 + 146 + #define STACK_SIZE STACK_ALIGN(_STACK_SIZE) 147 + 142 148 static void build_prologue(struct jit_ctx *ctx) 143 149 { 144 150 const u8 r6 = bpf2a64[BPF_REG_6]; ··· 156 150 const u8 rx = bpf2a64[BPF_REG_X]; 157 151 const u8 tmp1 = bpf2a64[TMP_REG_1]; 158 152 const u8 tmp2 = bpf2a64[TMP_REG_2]; 159 - int stack_size = MAX_BPF_STACK; 160 - 161 - stack_size += 4; /* extra for skb_copy_bits buffer */ 162 - stack_size = STACK_ALIGN(stack_size); 163 153 164 154 /* 165 155 * BPF prog stack layout ··· 167 165 * | ... | callee saved registers 168 166 * +-----+ 169 167 * | | x25/x26 170 - * BPF fp register => -80:+-----+ 168 + * BPF fp register => -80:+-----+ <= (BPF_FP) 171 169 * | | 172 170 * | ... | BPF prog stack 173 171 * | | 174 - * | | 175 - * current A64_SP => +-----+ 172 + * +-----+ <= (BPF_FP - MAX_BPF_STACK) 173 + * |RSVD | JIT scratchpad 174 + * current A64_SP => +-----+ <= (BPF_FP - STACK_SIZE) 176 175 * | | 177 176 * | ... | Function call stack 178 177 * | | ··· 199 196 emit(A64_MOV(1, fp, A64_SP), ctx); 200 197 201 198 /* Set up function call stack */ 202 - emit(A64_SUB_I(1, A64_SP, A64_SP, stack_size), ctx); 199 + emit(A64_SUB_I(1, A64_SP, A64_SP, STACK_SIZE), ctx); 203 200 204 201 /* Clear registers A and X */ 205 202 emit_a64_mov_i64(ra, 0, ctx); ··· 216 213 const u8 fp = bpf2a64[BPF_REG_FP]; 217 214 const u8 tmp1 = bpf2a64[TMP_REG_1]; 218 215 const u8 tmp2 = bpf2a64[TMP_REG_2]; 219 - int stack_size = MAX_BPF_STACK; 220 - 221 - stack_size += 4; /* extra for skb_copy_bits buffer */ 222 - stack_size = STACK_ALIGN(stack_size); 223 216 224 217 /* We're done with BPF stack */ 225 - emit(A64_ADD_I(1, A64_SP, A64_SP, stack_size), ctx); 218 + emit(A64_ADD_I(1, A64_SP, A64_SP, STACK_SIZE), ctx); 226 219 227 220 /* Restore fs (x25) and x26 */ 228 221 emit(A64_POP(fp, A64_R(26), A64_SP), ctx); ··· 590 591 case BPF_ST | BPF_MEM | BPF_H: 591 592 case BPF_ST | BPF_MEM | BPF_B: 592 593 case BPF_ST | BPF_MEM | BPF_DW: 593 - goto notyet; 594 + /* Load imm to a register then store it */ 595 + ctx->tmp_used = 1; 596 + emit_a64_mov_i(1, tmp2, off, ctx); 597 + emit_a64_mov_i(1, tmp, imm, ctx); 598 + switch (BPF_SIZE(code)) { 599 + case BPF_W: 600 + emit(A64_STR32(tmp, dst, tmp2), ctx); 601 + break; 602 + case BPF_H: 603 + emit(A64_STRH(tmp, dst, tmp2), ctx); 604 + break; 605 + case BPF_B: 606 + emit(A64_STRB(tmp, dst, tmp2), ctx); 607 + break; 608 + case BPF_DW: 609 + emit(A64_STR64(tmp, dst, tmp2), ctx); 610 + break; 611 + } 612 + break; 594 613 595 614 /* STX: *(size *)(dst + off) = src */ 596 615 case BPF_STX | BPF_MEM | BPF_W: ··· 675 658 return -EINVAL; 676 659 } 677 660 emit_a64_mov_i64(r3, size, ctx); 678 - emit(A64_ADD_I(1, r4, fp, MAX_BPF_STACK), ctx); 661 + emit(A64_SUB_I(1, r4, fp, STACK_SIZE), ctx); 679 662 emit_a64_mov_i64(r5, (unsigned long)bpf_load_pointer, ctx); 680 663 emit(A64_PUSH(A64_FP, A64_LR, A64_SP), ctx); 681 664 emit(A64_MOV(1, A64_FP, A64_SP), ctx);
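STACK_SIZE folds the previously open-coded arithmetic into one place: MAX_BPF_STACK plus 4 bytes of skb_copy_bits() scratch, rounded up to the AArch64-mandated 16-byte stack alignment. A quick standalone check (MAX_BPF_STACK was 512 at the time):

    #include <stdio.h>

    #define MAX_BPF_STACK 512
    #define STACK_ALIGN(sz) (((sz) + 15) & ~15)
    #define _STACK_SIZE (MAX_BPF_STACK + 4)
    #define STACK_SIZE STACK_ALIGN(_STACK_SIZE)

    int main(void)
    {
            /* 512 + 4 = 516, rounded up to 528; the 16 bytes between
             * BPF_FP - 512 and BPF_FP - 528 are the "RSVD / JIT scratchpad"
             * region in the layout comment above. */
            printf("STACK_SIZE = %d\n", STACK_SIZE);
            return 0;
    }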
+1 -1
arch/m68k/coldfire/m54xx.c
··· 98 98 memstart = PAGE_ALIGN(_ramstart); 99 99 min_low_pfn = PFN_DOWN(_rambase); 100 100 start_pfn = PFN_DOWN(memstart); 101 - max_low_pfn = PFN_DOWN(_ramend); 101 + max_pfn = max_low_pfn = PFN_DOWN(_ramend); 102 102 high_memory = (void *)_ramend; 103 103 104 104 m68k_virt_to_node_shift = fls(_ramend - _rambase - 1) - 6;
+1 -1
arch/m68k/include/asm/unistd.h
··· 4 4 #include <uapi/asm/unistd.h> 5 5 6 6 7 - #define NR_syscalls 375 7 + #define NR_syscalls 376 8 8 9 9 #define __ARCH_WANT_OLD_READDIR 10 10 #define __ARCH_WANT_OLD_STAT
+1
arch/m68k/include/uapi/asm/unistd.h
··· 380 380 #define __NR_sendmmsg 372 381 381 #define __NR_userfaultfd 373 382 382 #define __NR_membarrier 374 383 + #define __NR_mlock2 375 383 384 384 385 #endif /* _UAPI_ASM_M68K_UNISTD_H_ */
+6 -3
arch/m68k/kernel/setup_no.c
··· 238 238 * Give all the memory to the bootmap allocator, tell it to put the 239 239 * boot mem_map at the start of memory. 240 240 */ 241 + min_low_pfn = PFN_DOWN(memory_start); 242 + max_pfn = max_low_pfn = PFN_DOWN(memory_end); 243 + 241 244 bootmap_size = init_bootmem_node( 242 245 NODE_DATA(0), 243 - memory_start >> PAGE_SHIFT, /* map goes here */ 244 - PAGE_OFFSET >> PAGE_SHIFT, /* 0 on coldfire */ 245 - memory_end >> PAGE_SHIFT); 246 + min_low_pfn, /* map goes here */ 247 + PFN_DOWN(PAGE_OFFSET), 248 + max_pfn); 246 249 /* 247 250 * Free the usable memory, we have to make sure we do not free 248 251 * the bootmem bitmap so we then reserve it after freeing it :-)
+1
arch/m68k/kernel/syscalltable.S
··· 395 395 .long sys_sendmmsg 396 396 .long sys_userfaultfd 397 397 .long sys_membarrier 398 + .long sys_mlock2 /* 375 */
+1 -1
arch/m68k/mm/motorola.c
··· 250 250 high_memory = phys_to_virt(max_addr); 251 251 252 252 min_low_pfn = availmem >> PAGE_SHIFT; 253 - max_low_pfn = max_addr >> PAGE_SHIFT; 253 + max_pfn = max_low_pfn = max_addr >> PAGE_SHIFT; 254 254 255 255 for (i = 0; i < m68k_num_memory; i++) { 256 256 addr = m68k_memory[i].addr;
+2 -2
arch/m68k/sun3/config.c
··· 118 118 memory_end = memory_end & PAGE_MASK; 119 119 120 120 start_page = __pa(memory_start) >> PAGE_SHIFT; 121 - num_pages = __pa(memory_end) >> PAGE_SHIFT; 121 + max_pfn = num_pages = __pa(memory_end) >> PAGE_SHIFT; 122 122 123 123 high_memory = (void *)memory_end; 124 124 availmem = memory_start; 125 125 126 126 m68k_setup_node(0); 127 - availmem += init_bootmem_node(NODE_DATA(0), start_page, 0, num_pages); 127 + availmem += init_bootmem(start_page, num_pages); 128 128 availmem = (availmem + (PAGE_SIZE-1)) & PAGE_MASK; 129 129 130 130 free_bootmem(__pa(availmem), memory_end - (availmem));
+6 -1
arch/mips/ath79/setup.c
··· 216 216 AR71XX_RESET_SIZE); 217 217 ath79_pll_base = ioremap_nocache(AR71XX_PLL_BASE, 218 218 AR71XX_PLL_SIZE); 219 + ath79_detect_sys_type(); 219 220 ath79_ddr_ctrl_init(); 220 221 221 - ath79_detect_sys_type(); 222 222 if (mips_machtype != ATH79_MACH_GENERIC_OF) 223 223 detect_memory_region(0, ATH79_MEM_SIZE_MIN, ATH79_MEM_SIZE_MAX); 224 224 ··· 281 281 "Generic", 282 282 "Generic AR71XX/AR724X/AR913X based board", 283 283 ath79_generic_init); 284 + 285 + MIPS_MACHINE(ATH79_MACH_GENERIC_OF, 286 + "DTB", 287 + "Generic AR71XX/AR724X/AR913X based board (DT)", 288 + NULL);
+1 -1
arch/mips/boot/dts/qca/ar9132.dtsi
··· 107 107 miscintc: interrupt-controller@18060010 { 108 108 compatible = "qca,ar9132-misc-intc", 109 109 "qca,ar7100-misc-intc"; 110 - reg = <0x18060010 0x4>; 110 + reg = <0x18060010 0x8>; 111 111 112 112 interrupt-parent = <&cpuintc>; 113 113 interrupts = <6>;
+2 -1
arch/mips/include/asm/page.h
··· 200 200 { 201 201 /* avoid <linux/mm.h> include hell */ 202 202 extern unsigned long max_mapnr; 203 + unsigned long pfn_offset = ARCH_PFN_OFFSET; 203 204 204 - return pfn >= ARCH_PFN_OFFSET && pfn < max_mapnr; 205 + return pfn >= pfn_offset && pfn < max_mapnr; 205 206 } 206 207 207 208 #elif defined(CONFIG_SPARSEMEM)
+1 -1
arch/mips/kvm/emulate.c
··· 1581 1581 1582 1582 base = (inst >> 21) & 0x1f; 1583 1583 op_inst = (inst >> 16) & 0x1f; 1584 - offset = inst & 0xffff; 1584 + offset = (int16_t)inst; 1585 1585 cache = (inst >> 16) & 0x3; 1586 1586 op = (inst >> 18) & 0x7; 1587 1587
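The cast matters because the CACHE instruction's 16-bit offset is signed; "inst & 0xffff" zero-extends it and turns small negative displacements into huge positive ones. A standalone illustration:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint32_t inst = 0x0000fff0;        /* low 16 bits encode -16 */

            int32_t masked = inst & 0xffff;    /* old code: 65520 */
            int32_t fixed  = (int16_t)inst;    /* new code: -16 */

            printf("masked=%d fixed=%d\n", masked, fixed);
            return 0;
    }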
+10 -6
arch/mips/kvm/locore.S
··· 157 157 158 158 FEXPORT(__kvm_mips_load_asid) 159 159 /* Set the ASID for the Guest Kernel */ 160 - INT_SLL t0, t0, 1 /* with kseg0 @ 0x40000000, kernel */ 161 - /* addresses shift to 0x80000000 */ 162 - bltz t0, 1f /* If kernel */ 160 + PTR_L t0, VCPU_COP0(k1) 161 + LONG_L t0, COP0_STATUS(t0) 162 + andi t0, KSU_USER | ST0_ERL | ST0_EXL 163 + xori t0, KSU_USER 164 + bnez t0, 1f /* If kernel */ 163 165 INT_ADDIU t1, k1, VCPU_GUEST_KERNEL_ASID /* (BD) */ 164 166 INT_ADDIU t1, k1, VCPU_GUEST_USER_ASID /* else user */ 165 167 1: ··· 476 474 mtc0 t0, CP0_EPC 477 475 478 476 /* Set the ASID for the Guest Kernel */ 479 - INT_SLL t0, t0, 1 /* with kseg0 @ 0x40000000, kernel */ 480 - /* addresses shift to 0x80000000 */ 481 - bltz t0, 1f /* If kernel */ 477 + PTR_L t0, VCPU_COP0(k1) 478 + LONG_L t0, COP0_STATUS(t0) 479 + andi t0, KSU_USER | ST0_ERL | ST0_EXL 480 + xori t0, KSU_USER 481 + bnez t0, 1f /* If kernel */ 482 482 INT_ADDIU t1, k1, VCPU_GUEST_KERNEL_ASID /* (BD) */ 483 483 INT_ADDIU t1, k1, VCPU_GUEST_USER_ASID /* else user */ 484 484 1:
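The new sequence stops inferring guest kernel mode from the top address bit and instead tests the guest's CP0_Status directly: kernel mode is anything with ERL or EXL set, or KSU not equal to user. The predicate, rendered in C with the standard Status bit values (a sketch, not the in-kernel helper):

    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define ST0_EXL  0x02   /* Status.EXL, bit 1 */
    #define ST0_ERL  0x04   /* Status.ERL, bit 2 */
    #define KSU_USER 0x10   /* Status.KSU == 0b10, bits 4:3 */

    static bool guest_in_kernel_mode(uint32_t status)
    {
            /* andi with the three fields, xori with KSU_USER:
             * the result is zero only for pure user mode. */
            return (status & (KSU_USER | ST0_ERL | ST0_EXL)) != KSU_USER;
    }

    int main(void)
    {
            assert(!guest_in_kernel_mode(KSU_USER));           /* user */
            assert(guest_in_kernel_mode(KSU_USER | ST0_EXL));  /* exception */
            assert(guest_in_kernel_mode(0));                   /* kernel KSU */
            return 0;
    }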
+4 -1
arch/mips/kvm/mips.c
··· 279 279 280 280 if (!gebase) { 281 281 err = -ENOMEM; 282 - goto out_free_cpu; 282 + goto out_uninit_cpu; 283 283 } 284 284 kvm_debug("Allocated %d bytes for KVM Exception Handlers @ %p\n", 285 285 ALIGN(size, PAGE_SIZE), gebase); ··· 342 342 343 343 out_free_gebase: 344 344 kfree(gebase); 345 + 346 + out_uninit_cpu: 347 + kvm_vcpu_uninit(vcpu); 345 348 346 349 out_free_cpu: 347 350 kfree(vcpu);
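The new out_uninit_cpu label keeps the error path a strict mirror of the setup order: once kvm_vcpu_init() has succeeded, every later failure must undo it before freeing the vcpu. The generic shape of the pattern, with hypothetical helpers standing in for the KVM calls:

    #include <stdlib.h>

    static int  setup(void) { return 0; }   /* stands in for kvm_vcpu_init */
    static void undo(void)  { }             /* stands in for kvm_vcpu_uninit */

    static int create(void)
    {
            void *buf;
            int err;

            err = setup();
            if (err)
                    goto out;

            buf = malloc(64);
            if (!buf) {
                    err = -1;
                    goto out_undo;          /* unwind setup() first */
            }

            free(buf);
            return 0;

    out_undo:
            undo();
    out:
            return err;
    }

    int main(void) { return create(); }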
+2 -2
arch/mips/pci/pci-rt2880.c
··· 11 11 * by the Free Software Foundation. 12 12 */ 13 13 14 + #include <linux/delay.h> 14 15 #include <linux/types.h> 15 16 #include <linux/pci.h> 16 17 #include <linux/io.h> ··· 233 232 ioport_resource.end = RT2880_PCI_IO_BASE + RT2880_PCI_IO_SIZE - 1; 234 233 235 234 rt2880_pci_reg_write(0, RT2880_PCI_REG_PCICFG_ADDR); 236 - for (i = 0; i < 0xfffff; i++) 237 - ; 235 + udelay(1); 238 236 239 237 rt2880_pci_reg_write(0x79, RT2880_PCI_REG_ARBCTL); 240 238 rt2880_pci_reg_write(0x07FF0001, RT2880_PCI_REG_BAR0SETUP_ADDR);
+3 -1
arch/mips/pmcs-msp71xx/msp_setup.c
··· 10 10 * option) any later version. 11 11 */ 12 12 13 + #include <linux/delay.h> 14 + 13 15 #include <asm/bootinfo.h> 14 16 #include <asm/cacheflush.h> 15 17 #include <asm/idle.h> ··· 79 77 */ 80 78 81 79 /* Wait a bit for the DDRC to settle */ 82 - for (i = 0; i < 100000000; i++); 80 + mdelay(125); 83 81 84 82 #if defined(CONFIG_PMC_MSP7120_GW) 85 83 /*
+4 -2
arch/mips/sni/reset.c
··· 3 3 * 4 4 * Reset a SNI machine. 5 5 */ 6 + #include <linux/delay.h> 7 + 6 8 #include <asm/io.h> 7 9 #include <asm/reboot.h> 8 10 #include <asm/sni.h> ··· 34 32 for (;;) { 35 33 for (i = 0; i < 100; i++) { 36 34 kb_wait(); 37 - for (j = 0; j < 100000 ; j++) 38 - /* nothing */; 35 + udelay(50); 39 36 outb_p(0xfe, 0x64); /* pulse reset low */ 37 + udelay(50); 40 38 } 41 39 } 42 40 }
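This and the two conversions above (pci-rt2880.c, msp_setup.c) replace counted empty loops with the calibrated delay helpers: an empty loop is dead code to the optimizer, and even when it survives, its duration scales with CPU clock. A minimal kernel-style sketch of the preferred form (illustrative function name):

    #include <linux/delay.h>

    static void settle_hardware(void)
    {
            /* Lower-bounded busy-wait, independent of core frequency. */
            udelay(50);
    }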
+1 -3
arch/mn10300/Kconfig
··· 1 1 config MN10300 2 2 def_bool y 3 3 select HAVE_OPROFILE 4 + select HAVE_UID16 4 5 select GENERIC_IRQ_SHOW 5 6 select ARCH_WANT_IPC_PARSE_VERSION 6 7 select HAVE_ARCH_TRACEHOOK ··· 37 36 38 37 config NUMA 39 38 def_bool n 40 - 41 - config UID16 42 - def_bool y 43 39 44 40 config RWSEM_GENERIC_SPINLOCK 45 41 def_bool y
+4 -20
arch/nios2/mm/cacheflush.c
··· 23 23 end += (cpuinfo.dcache_line_size - 1); 24 24 end &= ~(cpuinfo.dcache_line_size - 1); 25 25 26 - for (addr = start; addr < end; addr += cpuinfo.dcache_line_size) { 27 - __asm__ __volatile__ (" flushda 0(%0)\n" 28 - : /* Outputs */ 29 - : /* Inputs */ "r"(addr) 30 - /* : No clobber */); 31 - } 32 - } 33 - 34 - static void __flush_dcache_all(unsigned long start, unsigned long end) 35 - { 36 - unsigned long addr; 37 - 38 - start &= ~(cpuinfo.dcache_line_size - 1); 39 - end += (cpuinfo.dcache_line_size - 1); 40 - end &= ~(cpuinfo.dcache_line_size - 1); 41 - 42 26 if (end > start + cpuinfo.dcache_size) 43 27 end = start + cpuinfo.dcache_size; 44 28 ··· 96 112 97 113 void flush_cache_all(void) 98 114 { 99 - __flush_dcache_all(0, cpuinfo.dcache_size); 115 + __flush_dcache(0, cpuinfo.dcache_size); 100 116 __flush_icache(0, cpuinfo.icache_size); 101 117 } 102 118 ··· 166 182 */ 167 183 unsigned long start = (unsigned long)page_address(page); 168 184 169 - __flush_dcache_all(start, start + PAGE_SIZE); 185 + __flush_dcache(start, start + PAGE_SIZE); 170 186 } 171 187 172 188 void flush_dcache_page(struct page *page) ··· 252 268 { 253 269 flush_cache_page(vma, user_vaddr, page_to_pfn(page)); 254 270 memcpy(dst, src, len); 255 - __flush_dcache_all((unsigned long)src, (unsigned long)src + len); 271 + __flush_dcache((unsigned long)src, (unsigned long)src + len); 256 272 if (vma->vm_flags & VM_EXEC) 257 273 __flush_icache((unsigned long)src, (unsigned long)src + len); 258 274 } ··· 263 279 { 264 280 flush_cache_page(vma, user_vaddr, page_to_pfn(page)); 265 281 memcpy(dst, src, len); 266 - __flush_dcache_all((unsigned long)dst, (unsigned long)dst + len); 282 + __flush_dcache((unsigned long)dst, (unsigned long)dst + len); 267 283 if (vma->vm_flags & VM_EXEC) 268 284 __flush_icache((unsigned long)dst, (unsigned long)dst + len); 269 285 }
+3
arch/parisc/Kconfig
··· 108 108 default 3 if 64BIT && PARISC_PAGE_SIZE_4KB 109 109 default 2 110 110 111 + config SYS_SUPPORTS_HUGETLBFS 112 + def_bool y if PA20 113 + 111 114 source "init/Kconfig" 112 115 113 116 source "kernel/Kconfig.freezer"
+85
arch/parisc/include/asm/hugetlb.h
··· 1 + #ifndef _ASM_PARISC64_HUGETLB_H 2 + #define _ASM_PARISC64_HUGETLB_H 3 + 4 + #include <asm/page.h> 5 + #include <asm-generic/hugetlb.h> 6 + 7 + 8 + void set_huge_pte_at(struct mm_struct *mm, unsigned long addr, 9 + pte_t *ptep, pte_t pte); 10 + 11 + pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr, 12 + pte_t *ptep); 13 + 14 + static inline int is_hugepage_only_range(struct mm_struct *mm, 15 + unsigned long addr, 16 + unsigned long len) { 17 + return 0; 18 + } 19 + 20 + /* 21 + * If the arch doesn't supply something else, assume that hugepage 22 + * size aligned regions are ok without further preparation. 23 + */ 24 + static inline int prepare_hugepage_range(struct file *file, 25 + unsigned long addr, unsigned long len) 26 + { 27 + if (len & ~HPAGE_MASK) 28 + return -EINVAL; 29 + if (addr & ~HPAGE_MASK) 30 + return -EINVAL; 31 + return 0; 32 + } 33 + 34 + static inline void hugetlb_free_pgd_range(struct mmu_gather *tlb, 35 + unsigned long addr, unsigned long end, 36 + unsigned long floor, 37 + unsigned long ceiling) 38 + { 39 + free_pgd_range(tlb, addr, end, floor, ceiling); 40 + } 41 + 42 + static inline void huge_ptep_clear_flush(struct vm_area_struct *vma, 43 + unsigned long addr, pte_t *ptep) 44 + { 45 + } 46 + 47 + static inline int huge_pte_none(pte_t pte) 48 + { 49 + return pte_none(pte); 50 + } 51 + 52 + static inline pte_t huge_pte_wrprotect(pte_t pte) 53 + { 54 + return pte_wrprotect(pte); 55 + } 56 + 57 + static inline void huge_ptep_set_wrprotect(struct mm_struct *mm, 58 + unsigned long addr, pte_t *ptep) 59 + { 60 + pte_t old_pte = *ptep; 61 + set_huge_pte_at(mm, addr, ptep, pte_wrprotect(old_pte)); 62 + } 63 + 64 + static inline int huge_ptep_set_access_flags(struct vm_area_struct *vma, 65 + unsigned long addr, pte_t *ptep, 66 + pte_t pte, int dirty) 67 + { 68 + int changed = !pte_same(*ptep, pte); 69 + if (changed) { 70 + set_huge_pte_at(vma->vm_mm, addr, ptep, pte); 71 + flush_tlb_page(vma, addr); 72 + } 73 + return changed; 74 + } 75 + 76 + static inline pte_t huge_ptep_get(pte_t *ptep) 77 + { 78 + return *ptep; 79 + } 80 + 81 + static inline void arch_clear_hugepage_flags(struct page *page) 82 + { 83 + } 84 + 85 + #endif /* _ASM_PARISC64_HUGETLB_H */
+12 -1
arch/parisc/include/asm/page.h
··· 145 145 #endif /* CONFIG_DISCONTIGMEM */ 146 146 147 147 #ifdef CONFIG_HUGETLB_PAGE 148 - #define HPAGE_SHIFT 22 /* 4MB (is this fixed?) */ 148 + #define HPAGE_SHIFT PMD_SHIFT /* fixed for transparent huge pages */ 149 149 #define HPAGE_SIZE ((1UL) << HPAGE_SHIFT) 150 150 #define HPAGE_MASK (~(HPAGE_SIZE - 1)) 151 151 #define HUGETLB_PAGE_ORDER (HPAGE_SHIFT - PAGE_SHIFT) 152 + 153 + #if defined(CONFIG_64BIT) && defined(CONFIG_PARISC_PAGE_SIZE_4KB) 154 + # define REAL_HPAGE_SHIFT 20 /* 20 = 1MB */ 155 + # define _HUGE_PAGE_SIZE_ENCODING_DEFAULT _PAGE_SIZE_ENCODING_1M 156 + #elif !defined(CONFIG_64BIT) && defined(CONFIG_PARISC_PAGE_SIZE_4KB) 157 + # define REAL_HPAGE_SHIFT 22 /* 22 = 4MB */ 158 + # define _HUGE_PAGE_SIZE_ENCODING_DEFAULT _PAGE_SIZE_ENCODING_4M 159 + #else 160 + # define REAL_HPAGE_SHIFT 24 /* 24 = 16MB */ 161 + # define _HUGE_PAGE_SIZE_ENCODING_DEFAULT _PAGE_SIZE_ENCODING_16M 152 162 #endif 163 + #endif /* CONFIG_HUGETLB_PAGE */ 153 164 154 165 #define virt_addr_valid(kaddr) pfn_valid(__pa(kaddr) >> PAGE_SHIFT) 155 166
+1 -1
arch/parisc/include/asm/pgalloc.h
··· 35 35 PxD_FLAG_VALID | 36 36 PxD_FLAG_ATTACHED) 37 37 + (__u32)(__pa((unsigned long)pgd) >> PxD_VALUE_SHIFT)); 38 - /* The first pmd entry also is marked with _PAGE_GATEWAY as 38 + /* The first pmd entry also is marked with PxD_FLAG_ATTACHED as 39 39 * a signal that this pmd may not be freed */ 40 40 __pgd_val_set(*pgd, PxD_FLAG_ATTACHED); 41 41 #endif
+22 -4
arch/parisc/include/asm/pgtable.h
··· 83 83 printk("%s:%d: bad pgd %08lx.\n", __FILE__, __LINE__, (unsigned long)pgd_val(e)) 84 84 85 85 /* This is the size of the initially mapped kernel memory */ 86 - #define KERNEL_INITIAL_ORDER 24 /* 0 to 1<<24 = 16MB */ 86 + #ifdef CONFIG_64BIT 87 + #define KERNEL_INITIAL_ORDER 25 /* 1<<25 = 32MB */ 88 + #else 89 + #define KERNEL_INITIAL_ORDER 24 /* 1<<24 = 16MB */ 90 + #endif 87 91 #define KERNEL_INITIAL_SIZE (1 << KERNEL_INITIAL_ORDER) 88 92 89 93 #if CONFIG_PGTABLE_LEVELS == 3 ··· 171 167 #define _PAGE_NO_CACHE_BIT 24 /* (0x080) Uncached Page (U bit) */ 172 168 #define _PAGE_ACCESSED_BIT 23 /* (0x100) Software: Page Accessed */ 173 169 #define _PAGE_PRESENT_BIT 22 /* (0x200) Software: translation valid */ 174 - /* bit 21 was formerly the FLUSH bit but is now unused */ 170 + #define _PAGE_HPAGE_BIT 21 /* (0x400) Software: Huge Page */ 175 171 #define _PAGE_USER_BIT 20 /* (0x800) Software: User accessible page */ 176 172 177 173 /* N.B. The bits are defined in terms of a 32 bit word above, so the */ ··· 198 194 #define _PAGE_NO_CACHE (1 << xlate_pabit(_PAGE_NO_CACHE_BIT)) 199 195 #define _PAGE_ACCESSED (1 << xlate_pabit(_PAGE_ACCESSED_BIT)) 200 196 #define _PAGE_PRESENT (1 << xlate_pabit(_PAGE_PRESENT_BIT)) 197 + #define _PAGE_HUGE (1 << xlate_pabit(_PAGE_HPAGE_BIT)) 201 198 #define _PAGE_USER (1 << xlate_pabit(_PAGE_USER_BIT)) 202 199 203 200 #define _PAGE_TABLE (_PAGE_PRESENT | _PAGE_READ | _PAGE_WRITE | _PAGE_DIRTY | _PAGE_ACCESSED) ··· 222 217 #define PxD_FLAG_VALID (1 << xlate_pabit(_PxD_VALID_BIT)) 223 218 #define PxD_FLAG_MASK (0xf) 224 219 #define PxD_FLAG_SHIFT (4) 225 - #define PxD_VALUE_SHIFT (8) /* (PAGE_SHIFT-PxD_FLAG_SHIFT) */ 220 + #define PxD_VALUE_SHIFT (PFN_PTE_SHIFT-PxD_FLAG_SHIFT) 226 221 227 222 #ifndef __ASSEMBLY__ 228 223 ··· 368 363 static inline pte_t pte_mkspecial(pte_t pte) { return pte; } 369 364 370 365 /* 366 + * Huge pte definitions. 367 + */ 368 + #ifdef CONFIG_HUGETLB_PAGE 369 + #define pte_huge(pte) (pte_val(pte) & _PAGE_HUGE) 370 + #define pte_mkhuge(pte) (__pte(pte_val(pte) | _PAGE_HUGE)) 371 + #else 372 + #define pte_huge(pte) (0) 373 + #define pte_mkhuge(pte) (pte) 374 + #endif 375 + 376 + 377 + /* 371 378 * Conversion functions: convert a page and protection to a page entry, 372 379 * and a page entry and page directory to the page they refer to. 373 380 */ ··· 427 410 /* Find an entry in the second-level page table.. */ 428 411 429 412 #if CONFIG_PGTABLE_LEVELS == 3 413 + #define pmd_index(addr) (((addr) >> PMD_SHIFT) & (PTRS_PER_PMD - 1)) 430 414 #define pmd_offset(dir,address) \ 431 - ((pmd_t *) pgd_page_vaddr(*(dir)) + (((address)>>PMD_SHIFT) & (PTRS_PER_PMD-1))) 415 + ((pmd_t *) pgd_page_vaddr(*(dir)) + pmd_index(address)) 432 416 #else 433 417 #define pmd_offset(dir,addr) ((pmd_t *) dir) 434 418 #endif
-27
arch/parisc/include/asm/processor.h
··· 192 192 */ 193 193 typedef unsigned int elf_caddr_t; 194 194 195 - #define start_thread_som(regs, new_pc, new_sp) do { \ 196 - unsigned long *sp = (unsigned long *)new_sp; \ 197 - __u32 spaceid = (__u32)current->mm->context; \ 198 - unsigned long pc = (unsigned long)new_pc; \ 199 - /* offset pc for priv. level */ \ 200 - pc |= 3; \ 201 - \ 202 - regs->iasq[0] = spaceid; \ 203 - regs->iasq[1] = spaceid; \ 204 - regs->iaoq[0] = pc; \ 205 - regs->iaoq[1] = pc + 4; \ 206 - regs->sr[2] = LINUX_GATEWAY_SPACE; \ 207 - regs->sr[3] = 0xffff; \ 208 - regs->sr[4] = spaceid; \ 209 - regs->sr[5] = spaceid; \ 210 - regs->sr[6] = spaceid; \ 211 - regs->sr[7] = spaceid; \ 212 - regs->gr[ 0] = USER_PSW; \ 213 - regs->gr[30] = ((new_sp)+63)&~63; \ 214 - regs->gr[31] = pc; \ 215 - \ 216 - get_user(regs->gr[26],&sp[0]); \ 217 - get_user(regs->gr[25],&sp[-1]); \ 218 - get_user(regs->gr[24],&sp[-2]); \ 219 - get_user(regs->gr[23],&sp[-3]); \ 220 - } while(0) 221 - 222 195 /* The ELF abi wants things done a "wee bit" differently than 223 196 * som does. Supporting this behavior here avoids 224 197 * having our own version of create_elf_tables.
-10
arch/parisc/include/uapi/asm/mman.h
··· 49 49 #define MADV_DONTFORK 10 /* don't inherit across fork */ 50 50 #define MADV_DOFORK 11 /* do inherit across fork */ 51 51 52 - /* The range 12-64 is reserved for page size specification. */ 53 - #define MADV_4K_PAGES 12 /* Use 4K pages */ 54 - #define MADV_16K_PAGES 14 /* Use 16K pages */ 55 - #define MADV_64K_PAGES 16 /* Use 64K pages */ 56 - #define MADV_256K_PAGES 18 /* Use 256K pages */ 57 - #define MADV_1M_PAGES 20 /* Use 1 Megabyte pages */ 58 - #define MADV_4M_PAGES 22 /* Use 4 Megabyte pages */ 59 - #define MADV_16M_PAGES 24 /* Use 16 Megabyte pages */ 60 - #define MADV_64M_PAGES 26 /* Use 64 Megabyte pages */ 61 - 62 52 #define MADV_MERGEABLE 65 /* KSM may merge identical pages */ 63 53 #define MADV_UNMERGEABLE 66 /* KSM may not merge identical pages */ 64 54
+8
arch/parisc/kernel/asm-offsets.c
··· 290 290 DEFINE(ASM_PFN_PTE_SHIFT, PFN_PTE_SHIFT); 291 291 DEFINE(ASM_PT_INITIAL, PT_INITIAL); 292 292 BLANK(); 293 + /* HUGEPAGE_SIZE is only used in vmlinux.lds.S to align kernel text 294 + * and kernel data on physical huge pages */ 295 + #ifdef CONFIG_HUGETLB_PAGE 296 + DEFINE(HUGEPAGE_SIZE, 1UL << REAL_HPAGE_SHIFT); 297 + #else 298 + DEFINE(HUGEPAGE_SIZE, PAGE_SIZE); 299 + #endif 300 + BLANK(); 293 301 DEFINE(EXCDATA_IP, offsetof(struct exception_data, fault_ip)); 294 302 DEFINE(EXCDATA_SPACE, offsetof(struct exception_data, fault_space)); 295 303 DEFINE(EXCDATA_ADDR, offsetof(struct exception_data, fault_addr));
+34 -22
arch/parisc/kernel/entry.S
··· 502 502 STREG \pte,0(\ptp) 503 503 .endm 504 504 505 + /* We have (depending on the page size): 506 + * - 38 to 52-bit Physical Page Number 507 + * - 12 to 26-bit page offset 508 + */ 505 509 /* bitshift difference between a PFN (based on kernel's PAGE_SIZE) 506 510 * to a CPU TLB 4k PFN (4k => 12 bits to shift) */ 507 - #define PAGE_ADD_SHIFT (PAGE_SHIFT-12) 511 + #define PAGE_ADD_SHIFT (PAGE_SHIFT-12) 512 + #define PAGE_ADD_HUGE_SHIFT (REAL_HPAGE_SHIFT-12) 508 513 509 514 /* Drop prot bits and convert to page addr for iitlbt and idtlbt */ 510 - .macro convert_for_tlb_insert20 pte 515 + .macro convert_for_tlb_insert20 pte,tmp 516 + #ifdef CONFIG_HUGETLB_PAGE 517 + copy \pte,\tmp 518 + extrd,u \tmp,(63-ASM_PFN_PTE_SHIFT)+(63-58)+PAGE_ADD_SHIFT,\ 519 + 64-PAGE_SHIFT-PAGE_ADD_SHIFT,\pte 520 + 521 + depdi _PAGE_SIZE_ENCODING_DEFAULT,63,\ 522 + (63-58)+PAGE_ADD_SHIFT,\pte 523 + extrd,u,*= \tmp,_PAGE_HPAGE_BIT+32,1,%r0 524 + depdi _HUGE_PAGE_SIZE_ENCODING_DEFAULT,63,\ 525 + (63-58)+PAGE_ADD_HUGE_SHIFT,\pte 526 + #else /* Huge pages disabled */ 511 527 extrd,u \pte,(63-ASM_PFN_PTE_SHIFT)+(63-58)+PAGE_ADD_SHIFT,\ 512 528 64-PAGE_SHIFT-PAGE_ADD_SHIFT,\pte 513 529 depdi _PAGE_SIZE_ENCODING_DEFAULT,63,\ 514 530 (63-58)+PAGE_ADD_SHIFT,\pte 531 + #endif 515 532 .endm 516 533 517 534 /* Convert the pte and prot to tlb insertion values. How 518 535 * this happens is quite subtle, read below */ 519 - .macro make_insert_tlb spc,pte,prot 536 + .macro make_insert_tlb spc,pte,prot,tmp 520 537 space_to_prot \spc \prot /* create prot id from space */ 521 538 /* The following is the real subtlety. This is depositing 522 539 * T <-> _PAGE_REFTRAP ··· 570 553 depdi 1,12,1,\prot 571 554 572 555 /* Drop prot bits and convert to page addr for iitlbt and idtlbt */ 573 - convert_for_tlb_insert20 \pte 556 + convert_for_tlb_insert20 \pte \tmp 574 557 .endm 575 558 576 559 /* Identical macro to make_insert_tlb above, except it ··· 663 646 664 647 665 648 /* 666 - * Align fault_vector_20 on 4K boundary so that both 667 - * fault_vector_11 and fault_vector_20 are on the 668 - * same page. This is only necessary as long as we 669 - * write protect the kernel text, which we may stop 670 - * doing once we use large page translations to cover 671 - * the static part of the kernel address space. 
649 + * Fault_vectors are architecturally required to be aligned on a 2K 650 + * boundary 672 651 */ 673 652 674 653 .text 675 - 676 - .align 4096 654 + .align 2048 677 655 678 656 ENTRY(fault_vector_20) 679 657 /* First vector is invalid (0) */ ··· 1159 1147 tlb_lock spc,ptp,pte,t0,t1,dtlb_check_alias_20w 1160 1148 update_accessed ptp,pte,t0,t1 1161 1149 1162 - make_insert_tlb spc,pte,prot 1150 + make_insert_tlb spc,pte,prot,t1 1163 1151 1164 1152 idtlbt pte,prot 1165 1153 ··· 1185 1173 tlb_lock spc,ptp,pte,t0,t1,nadtlb_check_alias_20w 1186 1174 update_accessed ptp,pte,t0,t1 1187 1175 1188 - make_insert_tlb spc,pte,prot 1176 + make_insert_tlb spc,pte,prot,t1 1189 1177 1190 1178 idtlbt pte,prot 1191 1179 ··· 1279 1267 tlb_lock spc,ptp,pte,t0,t1,dtlb_check_alias_20 1280 1268 update_accessed ptp,pte,t0,t1 1281 1269 1282 - make_insert_tlb spc,pte,prot 1270 + make_insert_tlb spc,pte,prot,t1 1283 1271 1284 1272 f_extend pte,t1 1285 1273 ··· 1307 1295 tlb_lock spc,ptp,pte,t0,t1,nadtlb_check_alias_20 1308 1296 update_accessed ptp,pte,t0,t1 1309 1297 1310 - make_insert_tlb spc,pte,prot 1298 + make_insert_tlb spc,pte,prot,t1 1311 1299 1312 1300 f_extend pte,t1 1313 1301 ··· 1416 1404 tlb_lock spc,ptp,pte,t0,t1,itlb_fault 1417 1405 update_accessed ptp,pte,t0,t1 1418 1406 1419 - make_insert_tlb spc,pte,prot 1407 + make_insert_tlb spc,pte,prot,t1 1420 1408 1421 1409 iitlbt pte,prot 1422 1410 ··· 1440 1428 tlb_lock spc,ptp,pte,t0,t1,naitlb_check_alias_20w 1441 1429 update_accessed ptp,pte,t0,t1 1442 1430 1443 - make_insert_tlb spc,pte,prot 1431 + make_insert_tlb spc,pte,prot,t1 1444 1432 1445 1433 iitlbt pte,prot 1446 1434 ··· 1526 1514 tlb_lock spc,ptp,pte,t0,t1,itlb_fault 1527 1515 update_accessed ptp,pte,t0,t1 1528 1516 1529 - make_insert_tlb spc,pte,prot 1517 + make_insert_tlb spc,pte,prot,t1 1530 1518 1531 1519 f_extend pte,t1 1532 1520 ··· 1546 1534 tlb_lock spc,ptp,pte,t0,t1,naitlb_check_alias_20 1547 1535 update_accessed ptp,pte,t0,t1 1548 1536 1549 - make_insert_tlb spc,pte,prot 1537 + make_insert_tlb spc,pte,prot,t1 1550 1538 1551 1539 f_extend pte,t1 1552 1540 ··· 1578 1566 tlb_lock spc,ptp,pte,t0,t1,dbit_fault 1579 1567 update_dirty ptp,pte,t1 1580 1568 1581 - make_insert_tlb spc,pte,prot 1569 + make_insert_tlb spc,pte,prot,t1 1582 1570 1583 1571 idtlbt pte,prot 1584 1572 ··· 1622 1610 tlb_lock spc,ptp,pte,t0,t1,dbit_fault 1623 1611 update_dirty ptp,pte,t1 1624 1612 1625 - make_insert_tlb spc,pte,prot 1613 + make_insert_tlb spc,pte,prot,t1 1626 1614 1627 1615 f_extend pte,t1 1628 1616
+2 -2
arch/parisc/kernel/head.S
··· 69 69 stw,ma %arg2,4(%r1) 70 70 stw,ma %arg3,4(%r1) 71 71 72 - /* Initialize startup VM. Just map first 8/16 MB of memory */ 72 + /* Initialize startup VM. Just map first 16/32 MB of memory */ 73 73 load32 PA(swapper_pg_dir),%r4 74 74 mtctl %r4,%cr24 /* Initialize kernel root pointer */ 75 75 mtctl %r4,%cr25 /* Initialize user root pointer */ ··· 107 107 /* Now initialize the PTEs themselves. We use RWX for 108 108 * everything ... it will get remapped correctly later */ 109 109 ldo 0+_PAGE_KERNEL_RWX(%r0),%r3 /* Hardwired 0 phys addr start */ 110 - ldi (1<<(KERNEL_INITIAL_ORDER-PAGE_SHIFT)),%r11 /* PFN count */ 110 + load32 (1<<(KERNEL_INITIAL_ORDER-PAGE_SHIFT)),%r11 /* PFN count */ 111 111 load32 PA(pg0),%r1 112 112 113 113 $pgt_fill_loop:
+13 -1
arch/parisc/kernel/setup.c
··· 130 130 printk(KERN_INFO "The 32-bit Kernel has started...\n"); 131 131 #endif 132 132 133 - printk(KERN_INFO "Default page size is %dKB.\n", (int)(PAGE_SIZE / 1024)); 133 + printk(KERN_INFO "Kernel default page size is %d KB. Huge pages ", 134 + (int)(PAGE_SIZE / 1024)); 135 + #ifdef CONFIG_HUGETLB_PAGE 136 + printk(KERN_CONT "enabled with %d MB physical and %d MB virtual size", 137 + 1 << (REAL_HPAGE_SHIFT - 20), 1 << (HPAGE_SHIFT - 20)); 138 + #else 139 + printk(KERN_CONT "disabled"); 140 + #endif 141 + printk(KERN_CONT ".\n"); 142 + 134 143 135 144 pdc_console_init(); 136 145 ··· 386 377 void start_parisc(void) 387 378 { 388 379 extern void start_kernel(void); 380 + extern void early_trap_init(void); 389 381 390 382 int ret, cpunum; 391 383 struct pdc_coproc_cfg coproc_cfg; ··· 406 396 } else { 407 397 panic("must have an fpu to boot linux"); 408 398 } 399 + 400 + early_trap_init(); /* initialize checksum of fault_vector */ 409 401 410 402 start_kernel(); 411 403 // not reached
+2 -2
arch/parisc/kernel/syscall.S
··· 369 369 ldo -16(%r30),%r29 /* Reference param save area */ 370 370 #endif 371 371 ldo TASK_REGS(%r1),%r26 372 - bl do_syscall_trace_exit,%r2 372 + BL do_syscall_trace_exit,%r2 373 373 STREG %r28,TASK_PT_GR28(%r1) /* save return value now */ 374 374 ldo -THREAD_SZ_ALGN-FRAME_SIZE(%r30),%r1 /* get task ptr */ 375 375 LDREG TI_TASK(%r1), %r1 ··· 390 390 #ifdef CONFIG_64BIT 391 391 ldo -16(%r30),%r29 /* Reference param save area */ 392 392 #endif 393 - bl do_syscall_trace_exit,%r2 393 + BL do_syscall_trace_exit,%r2 394 394 ldo TASK_REGS(%r1),%r26 395 395 396 396 ldil L%syscall_exit_rfi,%r1
+15 -20
arch/parisc/kernel/traps.c
··· 807 807 } 808 808 809 809 810 - int __init check_ivt(void *iva) 810 + void __init initialize_ivt(const void *iva) 811 811 { 812 812 extern u32 os_hpmc_size; 813 813 extern const u32 os_hpmc[]; ··· 818 818 u32 *hpmcp; 819 819 u32 length; 820 820 821 - if (strcmp((char *)iva, "cows can fly")) 822 - return -1; 821 + if (strcmp((const char *)iva, "cows can fly")) 822 + panic("IVT invalid"); 823 823 824 824 ivap = (u32 *)iva; 825 825 ··· 839 839 check += ivap[i]; 840 840 841 841 ivap[5] = -check; 842 - 843 - return 0; 844 842 } 845 843 844 + 845 + /* early_trap_init() is called before we set up kernel mappings and 846 + * write-protect the kernel */ 847 + void __init early_trap_init(void) 848 + { 849 + extern const void fault_vector_20; 850 + 846 851 #ifndef CONFIG_64BIT 847 - extern const void fault_vector_11; 852 + extern const void fault_vector_11; 853 + initialize_ivt(&fault_vector_11); 848 854 #endif 849 - extern const void fault_vector_20; 855 + 856 + initialize_ivt(&fault_vector_20); 857 + } 850 858 851 859 void __init trap_init(void) 852 860 { 853 - void *iva; 854 - 855 - if (boot_cpu_data.cpu_type >= pcxu) 856 - iva = (void *) &fault_vector_20; 857 - else 858 - #ifdef CONFIG_64BIT 859 - panic("Can't boot 64-bit OS on PA1.1 processor!"); 860 - #else 861 - iva = (void *) &fault_vector_11; 862 - #endif 863 - 864 - if (check_ivt(iva)) 865 - panic("IVT invalid"); 866 861 }
+6 -3
arch/parisc/kernel/vmlinux.lds.S
··· 60 60 EXIT_DATA 61 61 } 62 62 PERCPU_SECTION(8) 63 - . = ALIGN(PAGE_SIZE); 63 + . = ALIGN(HUGEPAGE_SIZE); 64 64 __init_end = .; 65 65 /* freed after init ends here */ 66 66 ··· 116 116 * that we can properly leave these 117 117 * as writable 118 118 */ 119 - . = ALIGN(PAGE_SIZE); 119 + . = ALIGN(HUGEPAGE_SIZE); 120 120 data_start = .; 121 121 122 122 EXCEPTION_TABLE(8) ··· 135 135 _edata = .; 136 136 137 137 /* BSS */ 138 - BSS_SECTION(PAGE_SIZE, PAGE_SIZE, 8) 138 + BSS_SECTION(PAGE_SIZE, PAGE_SIZE, PAGE_SIZE) 139 139 140 + /* bootmap is allocated in setup_bootmem() directly behind bss. */ 141 + 142 + . = ALIGN(HUGEPAGE_SIZE); 140 143 _end = . ; 141 144 142 145 STABS_DEBUG
+1
arch/parisc/mm/Makefile
··· 3 3 # 4 4 5 5 obj-y := init.o fault.o ioremap.o 6 + obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o
+161
arch/parisc/mm/hugetlbpage.c
··· 1 + /* 2 + * PARISC64 Huge TLB page support. 3 + * 4 + * This parisc implementation is heavily based on the SPARC and x86 code. 5 + * 6 + * Copyright (C) 2015 Helge Deller <deller@gmx.de> 7 + */ 8 + 9 + #include <linux/fs.h> 10 + #include <linux/mm.h> 11 + #include <linux/hugetlb.h> 12 + #include <linux/pagemap.h> 13 + #include <linux/sysctl.h> 14 + 15 + #include <asm/mman.h> 16 + #include <asm/pgalloc.h> 17 + #include <asm/tlb.h> 18 + #include <asm/tlbflush.h> 19 + #include <asm/cacheflush.h> 20 + #include <asm/mmu_context.h> 21 + 22 + 23 + unsigned long 24 + hugetlb_get_unmapped_area(struct file *file, unsigned long addr, 25 + unsigned long len, unsigned long pgoff, unsigned long flags) 26 + { 27 + struct hstate *h = hstate_file(file); 28 + 29 + if (len & ~huge_page_mask(h)) 30 + return -EINVAL; 31 + if (len > TASK_SIZE) 32 + return -ENOMEM; 33 + 34 + if (flags & MAP_FIXED) 35 + if (prepare_hugepage_range(file, addr, len)) 36 + return -EINVAL; 37 + 38 + if (addr) 39 + addr = ALIGN(addr, huge_page_size(h)); 40 + 41 + /* we need to make sure the colouring is OK */ 42 + return arch_get_unmapped_area(file, addr, len, pgoff, flags); 43 + } 44 + 45 + 46 + pte_t *huge_pte_alloc(struct mm_struct *mm, 47 + unsigned long addr, unsigned long sz) 48 + { 49 + pgd_t *pgd; 50 + pud_t *pud; 51 + pmd_t *pmd; 52 + pte_t *pte = NULL; 53 + 54 + /* We must align the address, because our caller will run 55 + * set_huge_pte_at() on whatever we return, which writes out 56 + * all of the sub-ptes for the hugepage range. So we have 57 + * to give it the first such sub-pte. 58 + */ 59 + addr &= HPAGE_MASK; 60 + 61 + pgd = pgd_offset(mm, addr); 62 + pud = pud_alloc(mm, pgd, addr); 63 + if (pud) { 64 + pmd = pmd_alloc(mm, pud, addr); 65 + if (pmd) 66 + pte = pte_alloc_map(mm, NULL, pmd, addr); 67 + } 68 + return pte; 69 + } 70 + 71 + pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr) 72 + { 73 + pgd_t *pgd; 74 + pud_t *pud; 75 + pmd_t *pmd; 76 + pte_t *pte = NULL; 77 + 78 + addr &= HPAGE_MASK; 79 + 80 + pgd = pgd_offset(mm, addr); 81 + if (!pgd_none(*pgd)) { 82 + pud = pud_offset(pgd, addr); 83 + if (!pud_none(*pud)) { 84 + pmd = pmd_offset(pud, addr); 85 + if (!pmd_none(*pmd)) 86 + pte = pte_offset_map(pmd, addr); 87 + } 88 + } 89 + return pte; 90 + } 91 + 92 + /* Purge data and instruction TLB entries. Must be called holding 93 + * the pa_tlb_lock. The TLB purge instructions are slow on SMP 94 + * machines since the purge must be broadcast to all CPUs. 95 + */ 96 + static inline void purge_tlb_entries_huge(struct mm_struct *mm, unsigned long addr) 97 + { 98 + int i; 99 + 100 + /* We may use multiple physical huge pages (e.g. 2x1 MB) to emulate 101 + * Linux standard huge pages (e.g. 2 MB) */ 102 + BUILD_BUG_ON(REAL_HPAGE_SHIFT > HPAGE_SHIFT); 103 + 104 + addr &= HPAGE_MASK; 105 + addr |= _HUGE_PAGE_SIZE_ENCODING_DEFAULT; 106 + 107 + for (i = 0; i < (1 << (HPAGE_SHIFT-REAL_HPAGE_SHIFT)); i++) { 108 + mtsp(mm->context, 1); 109 + pdtlb(addr); 110 + if (unlikely(split_tlb)) 111 + pitlb(addr); 112 + addr += (1UL << REAL_HPAGE_SHIFT); 113 + } 114 + } 115 + 116 + void set_huge_pte_at(struct mm_struct *mm, unsigned long addr, 117 + pte_t *ptep, pte_t entry) 118 + { 119 + unsigned long addr_start; 120 + int i; 121 + 122 + addr &= HPAGE_MASK; 123 + addr_start = addr; 124 + 125 + for (i = 0; i < (1 << HUGETLB_PAGE_ORDER); i++) { 126 + /* Directly write pte entry. We could call set_pte_at(mm, addr, ptep, entry) 127 + * instead, but then we get double locking on pa_tlb_lock. 
*/ 128 + *ptep = entry; 129 + ptep++; 130 + 131 + /* Drop the PAGE_SIZE/non-huge tlb entry */ 132 + purge_tlb_entries(mm, addr); 133 + 134 + addr += PAGE_SIZE; 135 + pte_val(entry) += PAGE_SIZE; 136 + } 137 + 138 + purge_tlb_entries_huge(mm, addr_start); 139 + } 140 + 141 + 142 + pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr, 143 + pte_t *ptep) 144 + { 145 + pte_t entry; 146 + 147 + entry = *ptep; 148 + set_huge_pte_at(mm, addr, ptep, __pte(0)); 149 + 150 + return entry; 151 + } 152 + 153 + int pmd_huge(pmd_t pmd) 154 + { 155 + return 0; 156 + } 157 + 158 + int pud_huge(pud_t pud) 159 + { 160 + return 0; 161 + }
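A worked example of the emulation set_huge_pte_at() and purge_tlb_entries_huge() perform, using hypothetical shift values purely for illustration (the real constants live in asm/page.h):

    /* Hypothetical: HPAGE_SHIFT = 22 (one 4 MB Linux huge page) emulated
     * with REAL_HPAGE_SHIFT = 20 (1 MB hardware pages). */
    int hw_pages = 1 << (HPAGE_SHIFT - REAL_HPAGE_SHIFT); /* 4 TLB purges */
    int sw_ptes  = 1 << HUGETLB_PAGE_ORDER;               /* ptes written */

    /* purge_tlb_entries_huge() then issues hw_pages pdtlb/pitlb purges
     * spaced 1 << REAL_HPAGE_SHIFT bytes apart, while set_huge_pte_at()
     * fills sw_ptes consecutive PAGE_SIZE ptes behind the mapping. */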
+17 -23
arch/parisc/mm/init.c
··· 409 409 unsigned long vaddr; 410 410 unsigned long ro_start; 411 411 unsigned long ro_end; 412 - unsigned long fv_addr; 413 - unsigned long gw_addr; 414 - extern const unsigned long fault_vector_20; 415 - extern void * const linux_gateway_page; 412 + unsigned long kernel_end; 416 413 417 414 ro_start = __pa((unsigned long)_text); 418 415 ro_end = __pa((unsigned long)&data_start); 419 - fv_addr = __pa((unsigned long)&fault_vector_20) & PAGE_MASK; 420 - gw_addr = __pa((unsigned long)&linux_gateway_page) & PAGE_MASK; 416 + kernel_end = __pa((unsigned long)&_end); 421 417 422 418 end_paddr = start_paddr + size; 423 419 ··· 471 475 for (tmp2 = start_pte; tmp2 < PTRS_PER_PTE; tmp2++, pg_table++) { 472 476 pte_t pte; 473 477 474 - /* 475 - * Map the fault vector writable so we can 476 - * write the HPMC checksum. 477 - */ 478 478 if (force) 479 479 pte = __mk_pte(address, pgprot); 480 - else if (parisc_text_address(vaddr) && 481 - address != fv_addr) 480 + else if (parisc_text_address(vaddr)) { 482 481 pte = __mk_pte(address, PAGE_KERNEL_EXEC); 482 + if (address >= ro_start && address < kernel_end) 483 + pte = pte_mkhuge(pte); 484 + } 483 485 else 484 486 #if defined(CONFIG_PARISC_PAGE_SIZE_4KB) 485 - if (address >= ro_start && address < ro_end 486 - && address != fv_addr 487 - && address != gw_addr) 488 - pte = __mk_pte(address, PAGE_KERNEL_RO); 489 - else 487 + if (address >= ro_start && address < ro_end) { 488 + pte = __mk_pte(address, PAGE_KERNEL_EXEC); 489 + pte = pte_mkhuge(pte); 490 + } else 490 491 #endif 492 + { 491 493 pte = __mk_pte(address, pgprot); 494 + if (address >= ro_start && address < kernel_end) 495 + pte = pte_mkhuge(pte); 496 + } 492 497 493 498 if (address >= end_paddr) { 494 499 if (force) ··· 533 536 534 537 /* force the kernel to see the new TLB entries */ 535 538 __flush_tlb_range(0, init_begin, init_end); 536 - /* Attempt to catch anyone trying to execute code here 537 - * by filling the page with BRK insns. 538 - */ 539 - memset((void *)init_begin, 0x00, init_end - init_begin); 539 + 540 540 /* finally dump all the instructions which were cached, since the 541 541 * pages are no-longer executable */ 542 542 flush_icache_range(init_begin, init_end); 543 543 544 - free_initmem_default(-1); 544 + free_initmem_default(POISON_FREE_INITMEM); 545 545 546 546 /* set up a new led state on systems shipped LED State panel */ 547 547 pdc_chassis_send_status(PDC_CHASSIS_DIRECT_BCOMPLETE); ··· 722 728 unsigned long size; 723 729 724 730 start_paddr = pmem_ranges[range].start_pfn << PAGE_SHIFT; 725 - end_paddr = start_paddr + (pmem_ranges[range].pages << PAGE_SHIFT); 726 731 size = pmem_ranges[range].pages << PAGE_SHIFT; 732 + end_paddr = start_paddr + size; 727 733 728 734 map_pages((unsigned long)__va(start_paddr), start_paddr, 729 735 size, PAGE_KERNEL, 0);
+1
arch/powerpc/include/asm/reg.h
··· 108 108 #define MSR_TS_T __MASK(MSR_TS_T_LG) /* Transaction Transactional */ 109 109 #define MSR_TS_MASK (MSR_TS_T | MSR_TS_S) /* Transaction State bits */ 110 110 #define MSR_TM_ACTIVE(x) (((x) & MSR_TS_MASK) != 0) /* Transaction active? */ 111 + #define MSR_TM_RESV(x) (((x) & MSR_TS_MASK) == MSR_TS_MASK) /* Reserved */ 111 112 #define MSR_TM_TRANSACTIONAL(x) (((x) & MSR_TS_MASK) == MSR_TS_T) 112 113 #define MSR_TM_SUSPENDED(x) (((x) & MSR_TS_MASK) == MSR_TS_S) 113 114
+1
arch/powerpc/include/asm/systbl.h
··· 382 382 SYSCALL(shmdt) 383 383 SYSCALL(shmget) 384 384 COMPAT_SYS(shmctl) 385 + SYSCALL(mlock2)
+1 -1
arch/powerpc/include/asm/unistd.h
··· 12 12 #include <uapi/asm/unistd.h> 13 13 14 14 15 - #define __NR_syscalls 378 15 + #define __NR_syscalls 379 16 16 17 17 #define __NR__exit __NR_exit 18 18 #define NR_syscalls __NR_syscalls
+1
arch/powerpc/include/uapi/asm/unistd.h
··· 400 400 #define __NR_shmdt 375 401 401 #define __NR_shmget 376 402 402 #define __NR_shmctl 377 403 + #define __NR_mlock2 378 403 404 404 405 #endif /* _UAPI_ASM_POWERPC_UNISTD_H_ */
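With the table slot, the __NR_syscalls bump and the uapi number above in place, userspace can exercise mlock2 even before libc grows a wrapper. A hedged sketch; the MLOCK_ONFAULT value is an assumption taken from the generic uapi mman headers of this cycle:

    #include <unistd.h>
    #include <sys/syscall.h>

    #ifndef __NR_mlock2
    #define __NR_mlock2 378         /* powerpc number from the hunk above */
    #endif
    #ifndef MLOCK_ONFAULT
    #define MLOCK_ONFAULT 0x01      /* lock pages as they are faulted in */
    #endif

    static long mlock2_onfault(void *addr, size_t len)
    {
            /* Raw syscall; glibc had no mlock2() wrapper at this point. */
            return syscall(__NR_mlock2, addr, len, MLOCK_ONFAULT);
    }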
+18
arch/powerpc/kernel/process.c
··· 551 551 msr_diff &= MSR_FP | MSR_VEC | MSR_VSX | MSR_FE0 | MSR_FE1; 552 552 } 553 553 554 + /* 555 + * Use the current MSR TM suspended bit to track if we have 556 + * checkpointed state outstanding. 557 + * On signal delivery, we'd normally reclaim the checkpointed 558 + * state to obtain stack pointer (see: get_tm_stackpointer()). 559 + * This will then directly return to userspace without going 560 + * through __switch_to(). However, if the stack frame is bad, 561 + * we need to exit this thread which calls __switch_to() which 562 + * will again attempt to reclaim the already saved tm state. 563 + * Hence we need to check that we've not already reclaimed 564 + * this state. 565 + * We do this using the current MSR, rather than tracking it in 566 + * some specific thread_struct bit, as it has the additional 567 + * benefit of checking for a potential TM bad thing exception. 568 + */ 569 + if (!MSR_TM_SUSPENDED(mfmsr())) 570 + return; 571 + 554 572 tm_reclaim(thr, thr->regs->msr, cause); 555 573 556 574 /* Having done the reclaim, we now have the checkpointed
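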
+9 -5
arch/powerpc/kernel/signal_32.c
··· 875 875 return 1; 876 876 #endif /* CONFIG_SPE */ 877 877 878 + /* Get the top half of the MSR from the user context */ 879 + if (__get_user(msr_hi, &tm_sr->mc_gregs[PT_MSR])) 880 + return 1; 881 + msr_hi <<= 32; 882 + /* If TM bits are set to the reserved value, it's an invalid context */ 883 + if (MSR_TM_RESV(msr_hi)) 884 + return 1; 885 + /* Pull in the MSR TM bits from the user context */ 886 + regs->msr = (regs->msr & ~MSR_TS_MASK) | (msr_hi & MSR_TS_MASK); 878 887 /* Now, recheckpoint. This loads up all of the checkpointed (older) 879 888 * registers, including FP and V[S]Rs. After recheckpointing, the 880 889 * transactional versions should be loaded. ··· 893 884 current->thread.tm_texasr |= TEXASR_FS; 894 885 /* This loads the checkpointed FP/VEC state, if used */ 895 886 tm_recheckpoint(&current->thread, msr); 896 - /* Get the top half of the MSR */ 897 - if (__get_user(msr_hi, &tm_sr->mc_gregs[PT_MSR])) 898 - return 1; 899 - /* Pull in MSR TM from user context */ 900 - regs->msr = (regs->msr & ~MSR_TS_MASK) | ((msr_hi<<32) & MSR_TS_MASK); 901 887 902 888 /* This loads the speculative FP/VEC state, if used */ 903 889 if (msr & MSR_FP) {
+4
arch/powerpc/kernel/signal_64.c
··· 438 438 439 439 /* get MSR separately, transfer the LE bit if doing signal return */ 440 440 err |= __get_user(msr, &sc->gp_regs[PT_MSR]); 441 + /* Don't allow reserved mode. */ 442 + if (MSR_TM_RESV(msr)) 443 + return -EINVAL; 444 + 441 445 /* pull in MSR TM from user context */ 442 446 regs->msr = (regs->msr & ~MSR_TS_MASK) | (msr & MSR_TS_MASK); 443 447
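Both sigreturn paths now refuse the reserved transaction-state encoding before touching checkpointed state. MSR[TS] is a two-bit field, so the new MSR_TM_RESV() test from the reg.h hunk reduces to a short table (illustrative, not kernel code):

    /* MSR[TS] encodings:
     *   00  TM inactive          - accepted
     *   01  suspended      (S)   - accepted
     *   10  transactional  (T)   - accepted
     *   11  both bits set        - reserved; MSR_TM_RESV() is true
     */
    if (MSR_TM_RESV(msr))   /* forged or garbled user context */
            return -EINVAL;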
+5 -2
arch/s390/kvm/interrupt.c
··· 1030 1030 src_id, 0); 1031 1031 1032 1032 /* sending vcpu invalid */ 1033 - if (src_id >= KVM_MAX_VCPUS || 1034 - kvm_get_vcpu(vcpu->kvm, src_id) == NULL) 1033 + if (kvm_get_vcpu_by_id(vcpu->kvm, src_id) == NULL) 1035 1034 return -EINVAL; 1036 1035 1037 1036 if (sclp.has_sigpif) ··· 1108 1109 irq->u.emerg.code); 1109 1110 trace_kvm_s390_inject_vcpu(vcpu->vcpu_id, KVM_S390_INT_EMERGENCY, 1110 1111 irq->u.emerg.code, 0); 1112 + 1113 + /* sending vcpu invalid */ 1114 + if (kvm_get_vcpu_by_id(vcpu->kvm, irq->u.emerg.code) == NULL) 1115 + return -EINVAL; 1111 1116 1112 1117 set_bit(irq->u.emerg.code, li->sigp_emerg_pending); 1113 1118 set_bit(IRQ_PEND_EXT_EMERGENCY, &li->pending_irqs);
+5 -1
arch/s390/kvm/kvm-s390.c
··· 342 342 r = 0; 343 343 break; 344 344 case KVM_CAP_S390_VECTOR_REGISTERS: 345 - if (MACHINE_HAS_VX) { 345 + mutex_lock(&kvm->lock); 346 + if (atomic_read(&kvm->online_vcpus)) { 347 + r = -EBUSY; 348 + } else if (MACHINE_HAS_VX) { 346 349 set_kvm_facility(kvm->arch.model.fac->mask, 129); 347 350 set_kvm_facility(kvm->arch.model.fac->list, 129); 348 351 r = 0; 349 352 } else 350 353 r = -EINVAL; 354 + mutex_unlock(&kvm->lock); 351 355 VM_EVENT(kvm, 3, "ENABLE: CAP_S390_VECTOR_REGISTERS %s", 352 356 r ? "(not available)" : "(success)"); 353 357 break;
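Taking kvm->lock and rejecting the capability once vcpus exist closes a race: the facility lists are copied into vcpus as they are created, so enabling vector support afterwards would leave existing vcpus disagreeing with the VM. Seen from userspace, the ordering constraint looks like this (sketch; a VMM such as QEMU is the usual caller):

    struct kvm_enable_cap cap = {
            .cap = KVM_CAP_S390_VECTOR_REGISTERS,
    };

    /* Must be issued on the VM fd before any KVM_CREATE_VCPU;
     * afterwards the ioctl now fails with EBUSY. */
    if (ioctl(vm_fd, KVM_ENABLE_CAP, &cap) < 0)
            perror("KVM_ENABLE_CAP");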
+1 -1
arch/s390/kvm/priv.c
··· 660 660 661 661 kvm_s390_get_regs_rre(vcpu, &reg1, &reg2); 662 662 663 - if (!MACHINE_HAS_PFMF) 663 + if (!test_kvm_facility(vcpu->kvm, 8)) 664 664 return kvm_s390_inject_program_int(vcpu, PGM_OPERATION); 665 665 666 666 if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE)
+2 -6
arch/s390/kvm/sigp.c
··· 291 291 u16 cpu_addr, u32 parameter, u64 *status_reg) 292 292 { 293 293 int rc; 294 - struct kvm_vcpu *dst_vcpu; 294 + struct kvm_vcpu *dst_vcpu = kvm_get_vcpu_by_id(vcpu->kvm, cpu_addr); 295 295 296 - if (cpu_addr >= KVM_MAX_VCPUS) 297 - return SIGP_CC_NOT_OPERATIONAL; 298 - 299 - dst_vcpu = kvm_get_vcpu(vcpu->kvm, cpu_addr); 300 296 if (!dst_vcpu) 301 297 return SIGP_CC_NOT_OPERATIONAL; 302 298 ··· 474 478 trace_kvm_s390_handle_sigp_pei(vcpu, order_code, cpu_addr); 475 479 476 480 if (order_code == SIGP_EXTERNAL_CALL) { 477 - dest_vcpu = kvm_get_vcpu(vcpu->kvm, cpu_addr); 481 + dest_vcpu = kvm_get_vcpu_by_id(vcpu->kvm, cpu_addr); 478 482 BUG_ON(dest_vcpu == NULL); 479 483 480 484 kvm_s390_vcpu_wakeup(dest_vcpu);
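Several s390 hunks above swap a bounds check plus kvm_get_vcpu(), which indexes vcpus by creation order, for a lookup keyed on the architectural CPU address. The helper they rely on is roughly the following (a sketch; the real definition lives in include/linux/kvm_host.h):

    static inline struct kvm_vcpu *kvm_get_vcpu_by_id(struct kvm *kvm, int id)
    {
            struct kvm_vcpu *vcpu;
            int i;

            /* Match on the architectural id, not the array slot. */
            kvm_for_each_vcpu(i, vcpu, kvm)
                    if (vcpu->vcpu_id == id)
                            return vcpu;
            return NULL;
    }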
+1 -2
arch/x86/include/asm/msr-index.h
··· 35 35 #define MSR_IA32_PERFCTR0 0x000000c1 36 36 #define MSR_IA32_PERFCTR1 0x000000c2 37 37 #define MSR_FSB_FREQ 0x000000cd 38 - #define MSR_NHM_PLATFORM_INFO 0x000000ce 38 + #define MSR_PLATFORM_INFO 0x000000ce 39 39 40 40 #define MSR_NHM_SNB_PKG_CST_CFG_CTL 0x000000e2 41 41 #define NHM_C3_AUTO_DEMOTE (1UL << 25) ··· 44 44 #define SNB_C1_AUTO_UNDEMOTE (1UL << 27) 45 45 #define SNB_C3_AUTO_UNDEMOTE (1UL << 28) 46 46 47 - #define MSR_PLATFORM_INFO 0x000000ce 48 47 #define MSR_MTRRcap 0x000000fe 49 48 #define MSR_IA32_BBL_CR_CTL 0x00000119 50 49 #define MSR_IA32_BBL_CR_CTL3 0x0000011e
+1 -2
arch/x86/kernel/cpu/common.c
··· 273 273 274 274 static __always_inline void setup_smap(struct cpuinfo_x86 *c) 275 275 { 276 - unsigned long eflags; 276 + unsigned long eflags = native_save_fl(); 277 277 278 278 /* This should have been cleared long ago */ 279 - raw_local_save_flags(eflags); 280 279 BUG_ON(eflags & X86_EFLAGS_AC); 281 280 282 281 if (cpu_has(c, X86_FEATURE_SMAP)) {
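Initializing eflags with native_save_fl() matters under paravirt: the pv_irq_ops save_fl() hook is not guaranteed to report the AC bit this BUG_ON() inspects, while the native helper reads the real hardware EFLAGS. It is essentially a pushf/pop pair (per arch/x86/include/asm/irqflags.h):

    static inline unsigned long native_save_fl(void)
    {
            unsigned long flags;

            /* Read the live EFLAGS, AC included, bypassing pvops. */
            asm volatile("pushf ; pop %0"
                         : "=rm" (flags)
                         : /* no input */
                         : "memory");
            return flags;
    }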
+5 -6
arch/x86/kernel/fpu/signal.c
··· 385 385 */ 386 386 void fpu__init_prepare_fx_sw_frame(void) 387 387 { 388 - int fsave_header_size = sizeof(struct fregs_state); 389 388 int size = xstate_size + FP_XSTATE_MAGIC2_SIZE; 390 - 391 - if (config_enabled(CONFIG_X86_32)) 392 - size += fsave_header_size; 393 389 394 390 fx_sw_reserved.magic1 = FP_XSTATE_MAGIC1; 395 391 fx_sw_reserved.extended_size = size; 396 392 fx_sw_reserved.xfeatures = xfeatures_mask; 397 393 fx_sw_reserved.xstate_size = xstate_size; 398 394 399 - if (config_enabled(CONFIG_IA32_EMULATION)) { 395 + if (config_enabled(CONFIG_IA32_EMULATION) || 396 + config_enabled(CONFIG_X86_32)) { 397 + int fsave_header_size = sizeof(struct fregs_state); 398 + 400 399 fx_sw_reserved_ia32 = fx_sw_reserved; 401 - fx_sw_reserved_ia32.extended_size += fsave_header_size; 400 + fx_sw_reserved_ia32.extended_size = size + fsave_header_size; 402 401 } 403 402 } 404 403
-1
arch/x86/kernel/fpu/xstate.c
··· 694 694 if (!boot_cpu_has(X86_FEATURE_XSAVE)) 695 695 return NULL; 696 696 697 - xsave = &current->thread.fpu.state.xsave; 698 697 /* 699 698 * We should not ever be requesting features that we 700 699 * have not enabled. Remember that pcntxt_mask is
+6
arch/x86/kernel/mcount_64.S
··· 278 278 /* save_mcount_regs fills in first two parameters */ 279 279 save_mcount_regs 280 280 281 + /* 282 + * When DYNAMIC_FTRACE is not defined, ARCH_SUPPORTS_FTRACE_OPS is not 283 + * set (see include/asm/ftrace.h and include/linux/ftrace.h). Only the 284 + * ip and parent ip are used and the list function is called when 285 + * function tracing is enabled. 286 + */ 281 287 call *ftrace_trace_function 282 288 283 289 restore_mcount_regs
-5
arch/x86/kvm/vmx.c
··· 7394 7394 7395 7395 switch (type) { 7396 7396 case VMX_VPID_EXTENT_ALL_CONTEXT: 7397 - if (get_vmcs12(vcpu)->virtual_processor_id == 0) { 7398 - nested_vmx_failValid(vcpu, 7399 - VMXERR_INVALID_OPERAND_TO_INVEPT_INVVPID); 7400 - return 1; 7401 - } 7402 7397 __vmx_flush_tlb(vcpu, to_vmx(vcpu)->nested.vpid02); 7403 7398 nested_vmx_succeed(vcpu); 7404 7399 break;
+32 -29
arch/x86/kvm/x86.c
··· 2763 2763 return 0; 2764 2764 } 2765 2765 2766 + static int kvm_cpu_accept_dm_intr(struct kvm_vcpu *vcpu) 2767 + { 2768 + return (!lapic_in_kernel(vcpu) || 2769 + kvm_apic_accept_pic_intr(vcpu)); 2770 + } 2771 + 2772 + /* 2773 + * if userspace requested an interrupt window, check that the 2774 + * interrupt window is open. 2775 + * 2776 + * No need to exit to userspace if we already have an interrupt queued. 2777 + */ 2778 + static int kvm_vcpu_ready_for_interrupt_injection(struct kvm_vcpu *vcpu) 2779 + { 2780 + return kvm_arch_interrupt_allowed(vcpu) && 2781 + !kvm_cpu_has_interrupt(vcpu) && 2782 + !kvm_event_needs_reinjection(vcpu) && 2783 + kvm_cpu_accept_dm_intr(vcpu); 2784 + } 2785 + 2766 2786 static int kvm_vcpu_ioctl_interrupt(struct kvm_vcpu *vcpu, 2767 2787 struct kvm_interrupt *irq) 2768 2788 { ··· 2806 2786 return -EEXIST; 2807 2787 2808 2788 vcpu->arch.pending_external_vector = irq->irq; 2789 + kvm_make_request(KVM_REQ_EVENT, vcpu); 2809 2790 return 0; 2810 2791 } 2811 2792 ··· 5931 5910 return emulator_write_emulated(ctxt, rip, instruction, 3, NULL); 5932 5911 } 5933 5912 5934 - /* 5935 - * Check if userspace requested an interrupt window, and that the 5936 - * interrupt window is open. 5937 - * 5938 - * No need to exit to userspace if we already have an interrupt queued. 5939 - */ 5940 5913 static int dm_request_for_irq_injection(struct kvm_vcpu *vcpu) 5941 5914 { 5942 - if (!vcpu->run->request_interrupt_window || pic_in_kernel(vcpu->kvm)) 5943 - return false; 5944 - 5945 - if (kvm_cpu_has_interrupt(vcpu)) 5946 - return false; 5947 - 5948 - return (irqchip_split(vcpu->kvm) 5949 - ? kvm_apic_accept_pic_intr(vcpu) 5950 - : kvm_arch_interrupt_allowed(vcpu)); 5915 + return vcpu->run->request_interrupt_window && 5916 + likely(!pic_in_kernel(vcpu->kvm)); 5951 5917 } 5952 5918 5953 5919 static void post_kvm_run_save(struct kvm_vcpu *vcpu) ··· 5945 5937 kvm_run->flags = is_smm(vcpu) ? KVM_RUN_X86_SMM : 0; 5946 5938 kvm_run->cr8 = kvm_get_cr8(vcpu); 5947 5939 kvm_run->apic_base = kvm_get_apic_base(vcpu); 5948 - if (!irqchip_in_kernel(vcpu->kvm)) 5949 - kvm_run->ready_for_interrupt_injection = 5950 - kvm_arch_interrupt_allowed(vcpu) && 5951 - !kvm_cpu_has_interrupt(vcpu) && 5952 - !kvm_event_needs_reinjection(vcpu); 5953 - else if (!pic_in_kernel(vcpu->kvm)) 5954 - kvm_run->ready_for_interrupt_injection = 5955 - kvm_apic_accept_pic_intr(vcpu) && 5956 - !kvm_cpu_has_interrupt(vcpu); 5957 - else 5958 - kvm_run->ready_for_interrupt_injection = 1; 5940 + kvm_run->ready_for_interrupt_injection = 5941 + pic_in_kernel(vcpu->kvm) || 5942 + kvm_vcpu_ready_for_interrupt_injection(vcpu); 5959 5943 } 5960 5944 5961 5945 static void update_cr8_intercept(struct kvm_vcpu *vcpu) ··· 6360 6360 static int vcpu_enter_guest(struct kvm_vcpu *vcpu) 6361 6361 { 6362 6362 int r; 6363 - bool req_int_win = !lapic_in_kernel(vcpu) && 6364 - vcpu->run->request_interrupt_window; 6363 + bool req_int_win = 6364 + dm_request_for_irq_injection(vcpu) && 6365 + kvm_cpu_accept_dm_intr(vcpu); 6366 + 6365 6367 bool req_immediate_exit = false; 6366 6368 6367 6369 if (vcpu->requests) { ··· 6665 6663 if (kvm_cpu_has_pending_timer(vcpu)) 6666 6664 kvm_inject_pending_timer_irqs(vcpu); 6667 6665 6668 - if (dm_request_for_irq_injection(vcpu)) { 6666 + if (dm_request_for_irq_injection(vcpu) && 6667 + kvm_vcpu_ready_for_interrupt_injection(vcpu)) { 6669 6668 r = 0; 6670 6669 vcpu->run->exit_reason = KVM_EXIT_IRQ_WINDOW_OPEN; 6671 6670 ++vcpu->stat.request_irq_exits;
+41 -6
arch/x86/mm/mpx.c
··· 586 586 } 587 587 588 588 /* 589 + * We only want to do a 4-byte get_user() on 32-bit. Otherwise, 590 + * we might run off the end of the bounds table if we are on 591 + * a 64-bit kernel and try to get 8 bytes. 592 + */ 593 + int get_user_bd_entry(struct mm_struct *mm, unsigned long *bd_entry_ret, 594 + long __user *bd_entry_ptr) 595 + { 596 + u32 bd_entry_32; 597 + int ret; 598 + 599 + if (is_64bit_mm(mm)) 600 + return get_user(*bd_entry_ret, bd_entry_ptr); 601 + 602 + /* 603 + * Note that get_user() uses the type of the *pointer* to 604 + * establish the size of the get, not the destination. 605 + */ 606 + ret = get_user(bd_entry_32, (u32 __user *)bd_entry_ptr); 607 + *bd_entry_ret = bd_entry_32; 608 + return ret; 609 + } 610 + 611 + /* 589 612 * Get the base of bounds tables pointed by specific bounds 590 613 * directory entry. 591 614 */ ··· 628 605 int need_write = 0; 629 606 630 607 pagefault_disable(); 631 - ret = get_user(bd_entry, bd_entry_ptr); 608 + ret = get_user_bd_entry(mm, &bd_entry, bd_entry_ptr); 632 609 pagefault_enable(); 633 610 if (!ret) 634 611 break; ··· 723 700 */ 724 701 static inline unsigned long bd_entry_virt_space(struct mm_struct *mm) 725 702 { 726 - unsigned long long virt_space = (1ULL << boot_cpu_data.x86_virt_bits); 727 - if (is_64bit_mm(mm)) 728 - return virt_space / MPX_BD_NR_ENTRIES_64; 729 - else 730 - return virt_space / MPX_BD_NR_ENTRIES_32; 703 + unsigned long long virt_space; 704 + unsigned long long GB = (1ULL << 30); 705 + 706 + /* 707 + * This covers 32-bit emulation as well as 32-bit kernels 708 + * running on 64-bit hardware. 709 + */ 710 + if (!is_64bit_mm(mm)) 711 + return (4ULL * GB) / MPX_BD_NR_ENTRIES_32; 712 + 713 + /* 714 + * 'x86_virt_bits' returns what the hardware is capable 715 + * of, and returns the full >32-bit address space when 716 + * running 32-bit kernels on 64-bit hardware. 717 + */ 718 + virt_space = (1ULL << boot_cpu_data.x86_virt_bits); 719 + return virt_space / MPX_BD_NR_ENTRIES_64; 731 720 } 732 721 733 722 /*
+7 -14
block/blk-core.c
··· 2114 2114 EXPORT_SYMBOL(submit_bio); 2115 2115 2116 2116 /** 2117 - * blk_rq_check_limits - Helper function to check a request for the queue limit 2117 + * blk_cloned_rq_check_limits - Helper function to check a cloned request 2118 + * for the new queue limits 2118 2119 * @q: the queue 2119 2120 * @rq: the request being checked 2120 2121 * ··· 2126 2125 * after it is inserted to @q, it should be checked against @q before 2127 2126 * the insertion using this generic function. 2128 2127 * 2129 - * This function should also be useful for request stacking drivers 2130 - * in some cases below, so export this function. 2131 2128 * Request stacking drivers like request-based dm may change the queue 2132 - * limits while requests are in the queue (e.g. dm's table swapping). 2133 - * Such request stacking drivers should check those requests against 2134 - * the new queue limits again when they dispatch those requests, 2135 - * although such checkings are also done against the old queue limits 2136 - * when submitting requests. 2129 + * limits when retrying requests on other queues. Those requests need 2130 + * to be checked against the new queue limits again during dispatch. 2137 2131 */ 2138 - int blk_rq_check_limits(struct request_queue *q, struct request *rq) 2132 + static int blk_cloned_rq_check_limits(struct request_queue *q, 2133 + struct request *rq) 2139 2134 { 2140 - if (!rq_mergeable(rq)) 2141 - return 0; 2142 - 2143 2135 if (blk_rq_sectors(rq) > blk_queue_get_max_sectors(q, rq->cmd_flags)) { 2144 2136 printk(KERN_ERR "%s: over max size limit.\n", __func__); 2145 2137 return -EIO; ··· 2152 2158 2153 2159 return 0; 2154 2160 } 2155 - EXPORT_SYMBOL_GPL(blk_rq_check_limits); 2156 2161 2157 2162 /** 2158 2163 * blk_insert_cloned_request - Helper for stacking drivers to submit a request ··· 2163 2170 unsigned long flags; 2164 2171 int where = ELEVATOR_INSERT_BACK; 2165 2172 2166 - if (blk_rq_check_limits(q, rq)) 2173 + if (blk_cloned_rq_check_limits(q, rq)) 2167 2174 return -EIO; 2168 2175 2169 2176 if (rq->rq_disk &&
+30 -5
block/blk-merge.c
··· 76 76 struct bio_vec bv, bvprv, *bvprvp = NULL; 77 77 struct bvec_iter iter; 78 78 unsigned seg_size = 0, nsegs = 0, sectors = 0; 79 + unsigned front_seg_size = bio->bi_seg_front_size; 80 + bool do_split = true; 81 + struct bio *new = NULL; 79 82 80 83 bio_for_each_segment(bv, bio, iter) { 81 84 if (sectors + (bv.bv_len >> 9) > queue_max_sectors(q)) ··· 101 98 102 99 seg_size += bv.bv_len; 103 100 bvprv = bv; 104 - bvprvp = &bv; 101 + bvprvp = &bvprv; 105 102 sectors += bv.bv_len >> 9; 103 + 104 + if (nsegs == 1 && seg_size > front_seg_size) 105 + front_seg_size = seg_size; 106 106 continue; 107 107 } 108 108 new_segment: ··· 114 108 115 109 nsegs++; 116 110 bvprv = bv; 117 - bvprvp = &bv; 111 + bvprvp = &bvprv; 118 112 seg_size = bv.bv_len; 119 113 sectors += bv.bv_len >> 9; 114 + 115 + if (nsegs == 1 && seg_size > front_seg_size) 116 + front_seg_size = seg_size; 120 117 } 121 118 122 - *segs = nsegs; 123 - return NULL; 119 + do_split = false; 124 120 split: 125 121 *segs = nsegs; 126 - return bio_split(bio, sectors, GFP_NOIO, bs); 122 + 123 + if (do_split) { 124 + new = bio_split(bio, sectors, GFP_NOIO, bs); 125 + if (new) 126 + bio = new; 127 + } 128 + 129 + bio->bi_seg_front_size = front_seg_size; 130 + if (seg_size > bio->bi_seg_back_size) 131 + bio->bi_seg_back_size = seg_size; 132 + 133 + return do_split ? new : NULL; 127 134 } 128 135 129 136 void blk_queue_split(struct request_queue *q, struct bio **bio, ··· 430 411 431 412 if (sg) 432 413 sg_mark_end(sg); 414 + 415 + /* 416 + * Something must have gone wrong if the computed number of 417 + * segments is bigger than the number of req's physical segments 418 + */ 419 + WARN_ON(nsegs > rq->nr_phys_segments); 433 420 434 421 return nsegs; 435 422 }
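The bvprvp changes above fix an aliasing bug, not a style issue: the pointer used to be taken on the loop variable bv, which bio_for_each_segment() overwrites on every pass, so "previous vector" silently meant "current vector". In miniature (illustrative, not the real loop):

    struct bio_vec bv, bvprv, *bvprvp = NULL;

    bio_for_each_segment(bv, bio, iter) {
            /* With the old assignment below, any mergeability test
             * against *bvprvp compares the current vector with itself
             * rather than with its predecessor. */
            bvprv = bv;
            bvprvp = &bv;           /* bug: aliases the live loop variable */
            /* fix: bvprvp = &bvprv;   points at the saved copy instead */
    }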
+9 -5
block/blk-mq.c
··· 1291 1291 blk_mq_bio_to_request(rq, bio); 1292 1292 1293 1293 /* 1294 - * we do limited pluging. If bio can be merged, do merge. 1294 + * We do limited plugging. If the bio can be merged, do that. 1295 1295 * Otherwise the existing request in the plug list will be 1296 1296 * issued. So the plug list will have one request at most 1297 1297 */ 1298 1298 if (plug) { 1299 1299 /* 1300 1300 * The plug list might get flushed before this. If that 1301 - * happens, same_queue_rq is invalid and plug list is empty 1302 - **/ 1301 + * happens, same_queue_rq is invalid and plug list is 1302 + * empty 1303 + */ 1303 1304 if (same_queue_rq && !list_empty(&plug->mq_list)) { 1304 1305 old_rq = same_queue_rq; 1305 1306 list_del_init(&old_rq->queuelist); ··· 1381 1380 blk_mq_bio_to_request(rq, bio); 1382 1381 if (!request_count) 1383 1382 trace_block_plug(q); 1384 - else if (request_count >= BLK_MAX_REQUEST_COUNT) { 1383 + 1384 + blk_mq_put_ctx(data.ctx); 1385 + 1386 + if (request_count >= BLK_MAX_REQUEST_COUNT) { 1385 1387 blk_flush_plug_list(plug, false); 1386 1388 trace_block_plug(q); 1387 1389 } 1390 + 1388 1391 list_add_tail(&rq->queuelist, &plug->mq_list); 1389 - blk_mq_put_ctx(data.ctx); 1390 1392 return cookie; 1391 1393 } 1392 1394
+5 -3
block/blk-timeout.c
··· 158 158 { 159 159 if (blk_mark_rq_complete(req)) 160 160 return; 161 - blk_delete_timer(req); 162 - if (req->q->mq_ops) 161 + 162 + if (req->q->mq_ops) { 163 163 blk_mq_rq_timed_out(req, false); 164 - else 164 + } else { 165 + blk_delete_timer(req); 165 166 blk_rq_timed_out(req); 167 + } 166 168 } 167 169 EXPORT_SYMBOL_GPL(blk_abort_request); 168 170
-2
block/blk.h
··· 72 72 void __blk_queue_free_tags(struct request_queue *q); 73 73 bool __blk_end_bidi_request(struct request *rq, int error, 74 74 unsigned int nr_bytes, unsigned int bidi_bytes); 75 - int blk_queue_enter(struct request_queue *q, gfp_t gfp); 76 - void blk_queue_exit(struct request_queue *q); 77 75 void blk_freeze_queue(struct request_queue *q); 78 76 79 77 static inline void blk_queue_enter_live(struct request_queue *q)
+5 -5
block/noop-iosched.c
··· 21 21 static int noop_dispatch(struct request_queue *q, int force) 22 22 { 23 23 struct noop_data *nd = q->elevator->elevator_data; 24 + struct request *rq; 24 25 25 - if (!list_empty(&nd->queue)) { 26 - struct request *rq; 27 - rq = list_entry(nd->queue.next, struct request, queuelist); 26 + rq = list_first_entry_or_null(&nd->queue, struct request, queuelist); 27 + if (rq) { 28 28 list_del_init(&rq->queuelist); 29 29 elv_dispatch_sort(q, rq); 30 30 return 1; ··· 46 46 47 47 if (rq->queuelist.prev == &nd->queue) 48 48 return NULL; 49 - return list_entry(rq->queuelist.prev, struct request, queuelist); 49 + return list_prev_entry(rq, queuelist); 50 50 } 51 51 52 52 static struct request * ··· 56 56 57 57 if (rq->queuelist.next == &nd->queue) 58 58 return NULL; 59 - return list_entry(rq->queuelist.next, struct request, queuelist); 59 + return list_next_entry(rq, queuelist); 60 60 } 61 61 62 62 static int noop_init_queue(struct request_queue *q, struct elevator_type *e)
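The three list helpers adopted here are thin wrappers from include/linux/list.h; their definitions (reproduced for reference) show why the rewritten dispatch and peek paths behave identically:

    #define list_first_entry_or_null(ptr, type, member) \
            (!list_empty(ptr) ? list_first_entry(ptr, type, member) : NULL)

    #define list_next_entry(pos, member) \
            list_entry((pos)->member.next, typeof(*(pos)), member)

    #define list_prev_entry(pos, member) \
            list_entry((pos)->member.prev, typeof(*(pos)), member)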
+1 -1
block/partition-generic.c
··· 397 397 struct hd_struct *part; 398 398 int res; 399 399 400 - if (bdev->bd_part_count) 400 + if (bdev->bd_part_count || bdev->bd_super) 401 401 return -EBUSY; 402 402 res = invalidate_partition(disk, 0); 403 403 if (res)
+7 -3
block/partitions/mac.c
··· 32 32 Sector sect; 33 33 unsigned char *data; 34 34 int slot, blocks_in_map; 35 - unsigned secsize; 35 + unsigned secsize, datasize, partoffset; 36 36 #ifdef CONFIG_PPC_PMAC 37 37 int found_root = 0; 38 38 int found_root_goodness = 0; ··· 50 50 } 51 51 secsize = be16_to_cpu(md->block_size); 52 52 put_dev_sector(sect); 53 - data = read_part_sector(state, secsize/512, &sect); 53 + datasize = round_down(secsize, 512); 54 + data = read_part_sector(state, datasize / 512, &sect); 54 55 if (!data) 55 56 return -1; 56 - part = (struct mac_partition *) (data + secsize%512); 57 + partoffset = secsize % 512; 58 + if (partoffset + sizeof(*part) > datasize) 59 + return -1; 60 + part = (struct mac_partition *) (data + partoffset); 57 61 if (be16_to_cpu(part->signature) != MAC_PARTITION_MAGIC) { 58 62 put_dev_sector(sect); 59 63 return 0; /* not a MacOS disk */
+2 -2
crypto/algif_aead.c
··· 125 125 if (flags & MSG_DONTWAIT) 126 126 return -EAGAIN; 127 127 128 - set_bit(SOCK_ASYNC_WAITDATA, &sk->sk_socket->flags); 128 + sk_set_bit(SOCKWQ_ASYNC_WAITDATA, sk); 129 129 130 130 for (;;) { 131 131 if (signal_pending(current)) ··· 139 139 } 140 140 finish_wait(sk_sleep(sk), &wait); 141 141 142 - clear_bit(SOCK_ASYNC_WAITDATA, &sk->sk_socket->flags); 142 + sk_clear_bit(SOCKWQ_ASYNC_WAITDATA, sk); 143 143 144 144 return err; 145 145 }
+3 -3
crypto/algif_skcipher.c
··· 212 212 if (flags & MSG_DONTWAIT) 213 213 return -EAGAIN; 214 214 215 - set_bit(SOCK_ASYNC_NOSPACE, &sk->sk_socket->flags); 215 + sk_set_bit(SOCKWQ_ASYNC_NOSPACE, sk); 216 216 217 217 for (;;) { 218 218 if (signal_pending(current)) ··· 258 258 return -EAGAIN; 259 259 } 260 260 261 - set_bit(SOCK_ASYNC_WAITDATA, &sk->sk_socket->flags); 261 + sk_set_bit(SOCKWQ_ASYNC_WAITDATA, sk); 262 262 263 263 for (;;) { 264 264 if (signal_pending(current)) ··· 272 272 } 273 273 finish_wait(sk_sleep(sk), &wait); 274 274 275 - clear_bit(SOCK_ASYNC_WAITDATA, &sk->sk_socket->flags); 275 + sk_clear_bit(SOCKWQ_ASYNC_WAITDATA, sk); 276 276 277 277 return err; 278 278 }
+1 -1
drivers/Makefile
··· 63 63 obj-$(CONFIG_FB_INTEL) += video/fbdev/intelfb/ 64 64 65 65 obj-$(CONFIG_PARPORT) += parport/ 66 + obj-$(CONFIG_NVM) += lightnvm/ 66 67 obj-y += base/ block/ misc/ mfd/ nfc/ 67 68 obj-$(CONFIG_LIBNVDIMM) += nvdimm/ 68 69 obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf/ ··· 71 70 obj-y += macintosh/ 72 71 obj-$(CONFIG_IDE) += ide/ 73 72 obj-$(CONFIG_SCSI) += scsi/ 74 - obj-$(CONFIG_NVM) += lightnvm/ 75 73 obj-y += nvme/ 76 74 obj-$(CONFIG_ATA) += ata/ 77 75 obj-$(CONFIG_TARGET_CORE) += target/
+1 -1
drivers/acpi/cppc_acpi.c
··· 304 304 305 305 static int register_pcc_channel(int pcc_subspace_idx) 306 306 { 307 - struct acpi_pcct_subspace *cppc_ss; 307 + struct acpi_pcct_hw_reduced *cppc_ss; 308 308 unsigned int len; 309 309 310 310 if (pcc_subspace_idx >= 0) {
+1 -1
drivers/acpi/ec.c
··· 1103 1103 } 1104 1104 1105 1105 err_exit: 1106 - if (result && q) 1106 + if (result) 1107 1107 acpi_ec_delete_query(q); 1108 1108 if (data) 1109 1109 *data = value;
+7 -41
drivers/acpi/sbshc.c
··· 14 14 #include <linux/delay.h> 15 15 #include <linux/module.h> 16 16 #include <linux/interrupt.h> 17 - #include <linux/dmi.h> 18 17 #include "sbshc.h" 19 18 20 19 #define PREFIX "ACPI: " ··· 29 30 u8 query_bit; 30 31 smbus_alarm_callback callback; 31 32 void *context; 33 + bool done; 32 34 }; 33 35 34 36 static int acpi_smbus_hc_add(struct acpi_device *device); ··· 88 88 ACPI_SMB_ALARM_DATA = 0x26, /* 2 bytes alarm data */ 89 89 }; 90 90 91 - static bool macbook; 92 - 93 91 static inline int smb_hc_read(struct acpi_smb_hc *hc, u8 address, u8 *data) 94 92 { 95 93 return ec_read(hc->offset + address, data); ··· 98 100 return ec_write(hc->offset + address, data); 99 101 } 100 102 101 - static inline int smb_check_done(struct acpi_smb_hc *hc) 102 - { 103 - union acpi_smb_status status = {.raw = 0}; 104 - smb_hc_read(hc, ACPI_SMB_STATUS, &status.raw); 105 - return status.fields.done && (status.fields.status == SMBUS_OK); 106 - } 107 - 108 103 static int wait_transaction_complete(struct acpi_smb_hc *hc, int timeout) 109 104 { 110 - if (wait_event_timeout(hc->wait, smb_check_done(hc), 111 - msecs_to_jiffies(timeout))) 105 + if (wait_event_timeout(hc->wait, hc->done, msecs_to_jiffies(timeout))) 112 106 return 0; 113 - /* 114 - * After the timeout happens, OS will try to check the status of SMbus. 115 - * If the status is what OS expected, it will be regarded as the bogus 116 - * timeout. 117 - */ 118 - if (smb_check_done(hc)) 119 - return 0; 120 - else 121 - return -ETIME; 107 + return -ETIME; 122 108 } 123 109 124 110 static int acpi_smbus_transaction(struct acpi_smb_hc *hc, u8 protocol, ··· 117 135 } 118 136 119 137 mutex_lock(&hc->lock); 120 - if (macbook) 121 - udelay(5); 138 + hc->done = false; 122 139 if (smb_hc_read(hc, ACPI_SMB_PROTOCOL, &temp)) 123 140 goto end; 124 141 if (temp) { ··· 216 235 if (smb_hc_read(hc, ACPI_SMB_STATUS, &status.raw)) 217 236 return 0; 218 237 /* Check if it is only a completion notify */ 219 - if (status.fields.done) 238 + if (status.fields.done && status.fields.status == SMBUS_OK) { 239 + hc->done = true; 220 240 wake_up(&hc->wait); 241 + } 221 242 if (!status.fields.alarm) 222 243 return 0; 223 244 mutex_lock(&hc->lock); ··· 245 262 acpi_handle handle, acpi_ec_query_func func, 246 263 void *data); 247 264 248 - static int macbook_dmi_match(const struct dmi_system_id *d) 249 - { 250 - pr_debug("Detected MacBook, enabling workaround\n"); 251 - macbook = true; 252 - return 0; 253 - } 254 - 255 - static struct dmi_system_id acpi_smbus_dmi_table[] = { 256 - { macbook_dmi_match, "Apple MacBook", { 257 - DMI_MATCH(DMI_BOARD_VENDOR, "Apple"), 258 - DMI_MATCH(DMI_PRODUCT_NAME, "MacBook") }, 259 - }, 260 - { }, 261 - }; 262 - 263 265 static int acpi_smbus_hc_add(struct acpi_device *device) 264 266 { 265 267 int status; 266 268 unsigned long long val; 267 269 struct acpi_smb_hc *hc; 268 - 269 - dmi_check_system(acpi_smbus_dmi_table); 270 270 271 271 if (!device) 272 272 return -EINVAL;
+6
drivers/base/power/wakeirq.c
··· 68 68 struct wake_irq *wirq; 69 69 int err; 70 70 71 + if (irq < 0) 72 + return -EINVAL; 73 + 71 74 wirq = kzalloc(sizeof(*wirq), GFP_KERNEL); 72 75 if (!wirq) 73 76 return -ENOMEM; ··· 169 166 { 170 167 struct wake_irq *wirq; 171 168 int err; 169 + 170 + if (irq < 0) 171 + return -EINVAL; 172 172 173 173 wirq = kzalloc(sizeof(*wirq), GFP_KERNEL); 174 174 if (!wirq)
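The new guards matter because callers typically feed these functions the raw result of platform_get_irq(), which is a negative errno on failure. A sketch of the pattern now rejected cleanly (the driver context here is illustrative):

    int irq = platform_get_irq(pdev, 0);
    int ret;

    /* irq may be e.g. -ENXIO here; with the check above the wake-irq
     * setup returns -EINVAL instead of recording a bogus number. */
    ret = dev_pm_set_wake_irq(&pdev->dev, irq);
    if (ret)
            dev_warn(&pdev->dev, "no wake irq: %d\n", ret);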
+2 -4
drivers/block/mtip32xx/mtip32xx.c
··· 3810 3810 sector_t capacity; 3811 3811 unsigned int index = 0; 3812 3812 struct kobject *kobj; 3813 - unsigned char thd_name[16]; 3814 3813 3815 3814 if (dd->disk) 3816 3815 goto skip_create_disk; /* hw init done, before rebuild */ ··· 3957 3958 } 3958 3959 3959 3960 start_service_thread: 3960 - sprintf(thd_name, "mtip_svc_thd_%02d", index); 3961 3961 dd->mtip_svc_handler = kthread_create_on_node(mtip_service_thread, 3962 - dd, dd->numa_node, "%s", 3963 - thd_name); 3962 + dd, dd->numa_node, 3963 + "mtip_svc_thd_%02d", index); 3964 3964 3965 3965 if (IS_ERR(dd->mtip_svc_handler)) { 3966 3966 dev_err(&dd->pdev->dev, "service thread failed to start\n");
+231 -70
drivers/block/null_blk.c
··· 8 8 #include <linux/slab.h> 9 9 #include <linux/blk-mq.h> 10 10 #include <linux/hrtimer.h> 11 + #include <linux/lightnvm.h> 11 12 12 13 struct nullb_cmd { 13 14 struct list_head list; ··· 18 17 struct bio *bio; 19 18 unsigned int tag; 20 19 struct nullb_queue *nq; 20 + struct hrtimer timer; 21 21 }; 22 22 23 23 struct nullb_queue { ··· 41 39 42 40 struct nullb_queue *queues; 43 41 unsigned int nr_queues; 42 + char disk_name[DISK_NAME_LEN]; 44 43 }; 45 44 46 45 static LIST_HEAD(nullb_list); 47 46 static struct mutex lock; 48 47 static int null_major; 49 48 static int nullb_indexes; 50 - 51 - struct completion_queue { 52 - struct llist_head list; 53 - struct hrtimer timer; 54 - }; 55 - 56 - /* 57 - * These are per-cpu for now, they will need to be configured by the 58 - * complete_queues parameter and appropriately mapped. 59 - */ 60 - static DEFINE_PER_CPU(struct completion_queue, completion_queues); 49 + static struct kmem_cache *ppa_cache; 61 50 62 51 enum { 63 52 NULL_IRQ_NONE = 0, ··· 112 119 module_param(nr_devices, int, S_IRUGO); 113 120 MODULE_PARM_DESC(nr_devices, "Number of devices to register"); 114 121 122 + static bool use_lightnvm; 123 + module_param(use_lightnvm, bool, S_IRUGO); 124 + MODULE_PARM_DESC(use_lightnvm, "Register as a LightNVM device"); 125 + 115 126 static int irqmode = NULL_IRQ_SOFTIRQ; 116 127 117 128 static int null_set_irqmode(const char *str, const struct kernel_param *kp) ··· 132 135 device_param_cb(irqmode, &null_irqmode_param_ops, &irqmode, S_IRUGO); 133 136 MODULE_PARM_DESC(irqmode, "IRQ completion handler. 0-none, 1-softirq, 2-timer"); 134 137 135 - static int completion_nsec = 10000; 136 - module_param(completion_nsec, int, S_IRUGO); 138 + static unsigned long completion_nsec = 10000; 139 + module_param(completion_nsec, ulong, S_IRUGO); 137 140 MODULE_PARM_DESC(completion_nsec, "Time in ns to complete a request in hardware. 
Default: 10,000ns"); 138 141 139 142 static int hw_queue_depth = 64; ··· 170 173 put_tag(cmd->nq, cmd->tag); 171 174 } 172 175 176 + static enum hrtimer_restart null_cmd_timer_expired(struct hrtimer *timer); 177 + 173 178 static struct nullb_cmd *__alloc_cmd(struct nullb_queue *nq) 174 179 { 175 180 struct nullb_cmd *cmd; ··· 182 183 cmd = &nq->cmds[tag]; 183 184 cmd->tag = tag; 184 185 cmd->nq = nq; 186 + if (irqmode == NULL_IRQ_TIMER) { 187 + hrtimer_init(&cmd->timer, CLOCK_MONOTONIC, 188 + HRTIMER_MODE_REL); 189 + cmd->timer.function = null_cmd_timer_expired; 190 + } 185 191 return cmd; 186 192 } 187 193 ··· 217 213 218 214 static void end_cmd(struct nullb_cmd *cmd) 219 215 { 216 + struct request_queue *q = NULL; 217 + 220 218 switch (queue_mode) { 221 219 case NULL_Q_MQ: 222 220 blk_mq_end_request(cmd->rq, 0); ··· 229 223 break; 230 224 case NULL_Q_BIO: 231 225 bio_endio(cmd->bio); 232 - break; 226 + goto free_cmd; 233 227 } 234 228 229 + if (cmd->rq) 230 + q = cmd->rq->q; 231 + 232 + /* Restart queue if needed, as we are freeing a tag */ 233 + if (q && !q->mq_ops && blk_queue_stopped(q)) { 234 + unsigned long flags; 235 + 236 + spin_lock_irqsave(q->queue_lock, flags); 237 + if (blk_queue_stopped(q)) 238 + blk_start_queue(q); 239 + spin_unlock_irqrestore(q->queue_lock, flags); 240 + } 241 + free_cmd: 235 242 free_cmd(cmd); 236 243 } 237 244 238 245 static enum hrtimer_restart null_cmd_timer_expired(struct hrtimer *timer) 239 246 { 240 - struct completion_queue *cq; 241 - struct llist_node *entry; 242 - struct nullb_cmd *cmd; 243 - 244 - cq = &per_cpu(completion_queues, smp_processor_id()); 245 - 246 - while ((entry = llist_del_all(&cq->list)) != NULL) { 247 - entry = llist_reverse_order(entry); 248 - do { 249 - struct request_queue *q = NULL; 250 - 251 - cmd = container_of(entry, struct nullb_cmd, ll_list); 252 - entry = entry->next; 253 - if (cmd->rq) 254 - q = cmd->rq->q; 255 - end_cmd(cmd); 256 - 257 - if (q && !q->mq_ops && blk_queue_stopped(q)) { 258 - spin_lock(q->queue_lock); 259 - if (blk_queue_stopped(q)) 260 - blk_start_queue(q); 261 - spin_unlock(q->queue_lock); 262 - } 263 - } while (entry); 264 - } 247 + end_cmd(container_of(timer, struct nullb_cmd, timer)); 265 248 266 249 return HRTIMER_NORESTART; 267 250 } 268 251 269 252 static void null_cmd_end_timer(struct nullb_cmd *cmd) 270 253 { 271 - struct completion_queue *cq = &per_cpu(completion_queues, get_cpu()); 254 + ktime_t kt = ktime_set(0, completion_nsec); 272 255 273 - cmd->ll_list.next = NULL; 274 - if (llist_add(&cmd->ll_list, &cq->list)) { 275 - ktime_t kt = ktime_set(0, completion_nsec); 276 - 277 - hrtimer_start(&cq->timer, kt, HRTIMER_MODE_REL_PINNED); 278 - } 279 - 280 - put_cpu(); 256 + hrtimer_start(&cmd->timer, kt, HRTIMER_MODE_REL); 281 257 } 282 258 283 259 static void null_softirq_done_fn(struct request *rq) ··· 357 369 { 358 370 struct nullb_cmd *cmd = blk_mq_rq_to_pdu(bd->rq); 359 371 372 + if (irqmode == NULL_IRQ_TIMER) { 373 + hrtimer_init(&cmd->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL); 374 + cmd->timer.function = null_cmd_timer_expired; 375 + } 360 376 cmd->rq = bd->rq; 361 377 cmd->nq = hctx->driver_data; 362 378 ··· 419 427 { 420 428 list_del_init(&nullb->list); 421 429 422 - del_gendisk(nullb->disk); 430 + if (use_lightnvm) 431 + nvm_unregister(nullb->disk_name); 432 + else 433 + del_gendisk(nullb->disk); 423 434 blk_cleanup_queue(nullb->q); 424 435 if (queue_mode == NULL_Q_MQ) 425 436 blk_mq_free_tag_set(&nullb->tag_set); 426 - put_disk(nullb->disk); 437 + if (!use_lightnvm) 438 + 
put_disk(nullb->disk); 427 439 cleanup_queues(nullb); 428 440 kfree(nullb); 429 441 } 442 + 443 + #ifdef CONFIG_NVM 444 + 445 + static void null_lnvm_end_io(struct request *rq, int error) 446 + { 447 + struct nvm_rq *rqd = rq->end_io_data; 448 + struct nvm_dev *dev = rqd->dev; 449 + 450 + dev->mt->end_io(rqd, error); 451 + 452 + blk_put_request(rq); 453 + } 454 + 455 + static int null_lnvm_submit_io(struct request_queue *q, struct nvm_rq *rqd) 456 + { 457 + struct request *rq; 458 + struct bio *bio = rqd->bio; 459 + 460 + rq = blk_mq_alloc_request(q, bio_rw(bio), GFP_KERNEL, 0); 461 + if (IS_ERR(rq)) 462 + return -ENOMEM; 463 + 464 + rq->cmd_type = REQ_TYPE_DRV_PRIV; 465 + rq->__sector = bio->bi_iter.bi_sector; 466 + rq->ioprio = bio_prio(bio); 467 + 468 + if (bio_has_data(bio)) 469 + rq->nr_phys_segments = bio_phys_segments(q, bio); 470 + 471 + rq->__data_len = bio->bi_iter.bi_size; 472 + rq->bio = rq->biotail = bio; 473 + 474 + rq->end_io_data = rqd; 475 + 476 + blk_execute_rq_nowait(q, NULL, rq, 0, null_lnvm_end_io); 477 + 478 + return 0; 479 + } 480 + 481 + static int null_lnvm_id(struct request_queue *q, struct nvm_id *id) 482 + { 483 + sector_t size = gb * 1024 * 1024 * 1024ULL; 484 + sector_t blksize; 485 + struct nvm_id_group *grp; 486 + 487 + id->ver_id = 0x1; 488 + id->vmnt = 0; 489 + id->cgrps = 1; 490 + id->cap = 0x3; 491 + id->dom = 0x1; 492 + 493 + id->ppaf.blk_offset = 0; 494 + id->ppaf.blk_len = 16; 495 + id->ppaf.pg_offset = 16; 496 + id->ppaf.pg_len = 16; 497 + id->ppaf.sect_offset = 32; 498 + id->ppaf.sect_len = 8; 499 + id->ppaf.pln_offset = 40; 500 + id->ppaf.pln_len = 8; 501 + id->ppaf.lun_offset = 48; 502 + id->ppaf.lun_len = 8; 503 + id->ppaf.ch_offset = 56; 504 + id->ppaf.ch_len = 8; 505 + 506 + do_div(size, bs); /* convert size to pages */ 507 + do_div(size, 256); /* convert size to pages per block */ 508 + grp = &id->groups[0]; 509 + grp->mtype = 0; 510 + grp->fmtype = 0; 511 + grp->num_ch = 1; 512 + grp->num_pg = 256; 513 + blksize = size; 514 + do_div(size, (1 << 16)); 515 + grp->num_lun = size + 1; 516 + do_div(blksize, grp->num_lun); 517 + grp->num_blk = blksize; 518 + grp->num_pln = 1; 519 + 520 + grp->fpg_sz = bs; 521 + grp->csecs = bs; 522 + grp->trdt = 25000; 523 + grp->trdm = 25000; 524 + grp->tprt = 500000; 525 + grp->tprm = 500000; 526 + grp->tbet = 1500000; 527 + grp->tbem = 1500000; 528 + grp->mpos = 0x010101; /* single plane rwe */ 529 + grp->cpar = hw_queue_depth; 530 + 531 + return 0; 532 + } 533 + 534 + static void *null_lnvm_create_dma_pool(struct request_queue *q, char *name) 535 + { 536 + mempool_t *virtmem_pool; 537 + 538 + virtmem_pool = mempool_create_slab_pool(64, ppa_cache); 539 + if (!virtmem_pool) { 540 + pr_err("null_blk: Unable to create virtual memory pool\n"); 541 + return NULL; 542 + } 543 + 544 + return virtmem_pool; 545 + } 546 + 547 + static void null_lnvm_destroy_dma_pool(void *pool) 548 + { 549 + mempool_destroy(pool); 550 + } 551 + 552 + static void *null_lnvm_dev_dma_alloc(struct request_queue *q, void *pool, 553 + gfp_t mem_flags, dma_addr_t *dma_handler) 554 + { 555 + return mempool_alloc(pool, mem_flags); 556 + } 557 + 558 + static void null_lnvm_dev_dma_free(void *pool, void *entry, 559 + dma_addr_t dma_handler) 560 + { 561 + mempool_free(entry, pool); 562 + } 563 + 564 + static struct nvm_dev_ops null_lnvm_dev_ops = { 565 + .identity = null_lnvm_id, 566 + .submit_io = null_lnvm_submit_io, 567 + 568 + .create_dma_pool = null_lnvm_create_dma_pool, 569 + .destroy_dma_pool = null_lnvm_destroy_dma_pool, 570 + .dev_dma_alloc = 
null_lnvm_dev_dma_alloc, 571 + .dev_dma_free = null_lnvm_dev_dma_free, 572 + 573 + /* Simulate nvme protocol restriction */ 574 + .max_phys_sect = 64, 575 + }; 576 + #else 577 + static struct nvm_dev_ops null_lnvm_dev_ops; 578 + #endif /* CONFIG_NVM */ 430 579 431 580 static int null_open(struct block_device *bdev, fmode_t mode) 432 581 { ··· 708 575 queue_flag_set_unlocked(QUEUE_FLAG_NONROT, nullb->q); 709 576 queue_flag_clear_unlocked(QUEUE_FLAG_ADD_RANDOM, nullb->q); 710 577 711 - disk = nullb->disk = alloc_disk_node(1, home_node); 712 - if (!disk) { 713 - rv = -ENOMEM; 714 - goto out_cleanup_blk_queue; 715 - } 716 578 717 579 mutex_lock(&lock); 718 580 list_add_tail(&nullb->list, &nullb_list); ··· 717 589 blk_queue_logical_block_size(nullb->q, bs); 718 590 blk_queue_physical_block_size(nullb->q, bs); 719 591 592 + sprintf(nullb->disk_name, "nullb%d", nullb->index); 593 + 594 + if (use_lightnvm) { 595 + rv = nvm_register(nullb->q, nullb->disk_name, 596 + &null_lnvm_dev_ops); 597 + if (rv) 598 + goto out_cleanup_blk_queue; 599 + goto done; 600 + } 601 + 602 + disk = nullb->disk = alloc_disk_node(1, home_node); 603 + if (!disk) { 604 + rv = -ENOMEM; 605 + goto out_cleanup_lightnvm; 606 + } 720 607 size = gb * 1024 * 1024 * 1024ULL; 721 608 set_capacity(disk, size >> 9); 722 609 ··· 741 598 disk->fops = &null_fops; 742 599 disk->private_data = nullb; 743 600 disk->queue = nullb->q; 744 - sprintf(disk->disk_name, "nullb%d", nullb->index); 601 + strncpy(disk->disk_name, nullb->disk_name, DISK_NAME_LEN); 602 + 745 603 add_disk(disk); 604 + done: 746 605 return 0; 747 606 607 + out_cleanup_lightnvm: 608 + if (use_lightnvm) 609 + nvm_unregister(nullb->disk_name); 748 610 out_cleanup_blk_queue: 749 611 blk_cleanup_queue(nullb->q); 750 612 out_cleanup_tags: ··· 773 625 bs = PAGE_SIZE; 774 626 } 775 627 628 + if (use_lightnvm && bs != 4096) { 629 + pr_warn("null_blk: LightNVM only supports 4k block size\n"); 630 + pr_warn("null_blk: defaults block size to 4k\n"); 631 + bs = 4096; 632 + } 633 + 634 + if (use_lightnvm && queue_mode != NULL_Q_MQ) { 635 + pr_warn("null_blk: LightNVM only supported for blk-mq\n"); 636 + pr_warn("null_blk: defaults queue mode to blk-mq\n"); 637 + queue_mode = NULL_Q_MQ; 638 + } 639 + 776 640 if (queue_mode == NULL_Q_MQ && use_per_node_hctx) { 777 641 if (submit_queues < nr_online_nodes) { 778 642 pr_warn("null_blk: submit_queues param is set to %u.", ··· 798 638 799 639 mutex_init(&lock); 800 640 801 - /* Initialize a separate list for each CPU for issuing softirqs */ 802 - for_each_possible_cpu(i) { 803 - struct completion_queue *cq = &per_cpu(completion_queues, i); 804 - 805 - init_llist_head(&cq->list); 806 - 807 - if (irqmode != NULL_IRQ_TIMER) 808 - continue; 809 - 810 - hrtimer_init(&cq->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL); 811 - cq->timer.function = null_cmd_timer_expired; 812 - } 813 - 814 641 null_major = register_blkdev(0, "nullb"); 815 642 if (null_major < 0) 816 643 return null_major; 817 644 645 + if (use_lightnvm) { 646 + ppa_cache = kmem_cache_create("ppa_cache", 64 * sizeof(u64), 647 + 0, 0, NULL); 648 + if (!ppa_cache) { 649 + pr_err("null_blk: unable to create ppa cache\n"); 650 + return -ENOMEM; 651 + } 652 + } 653 + 818 654 for (i = 0; i < nr_devices; i++) { 819 655 if (null_add_dev()) { 820 656 unregister_blkdev(null_major, "nullb"); 821 - return -EINVAL; 657 + goto err_ppa; 822 658 } 823 659 } 824 660 825 661 pr_info("null: module loaded\n"); 826 662 return 0; 663 + err_ppa: 664 + kmem_cache_destroy(ppa_cache); 665 + return -EINVAL; 827 666 } 
828 667 829 668 static void __exit null_exit(void) ··· 837 678 null_del_dev(nullb); 838 679 } 839 680 mutex_unlock(&lock); 681 + 682 + kmem_cache_destroy(ppa_cache); 840 683 } 841 684 842 685 module_init(null_init);
+1 -1
drivers/bus/omap-ocp2scp.c
··· 117 117 118 118 module_platform_driver(omap_ocp2scp_driver); 119 119 120 - MODULE_ALIAS("platform: omap-ocp2scp"); 120 + MODULE_ALIAS("platform:omap-ocp2scp"); 121 121 MODULE_AUTHOR("Texas Instruments Inc."); 122 122 MODULE_DESCRIPTION("OMAP OCP2SCP driver"); 123 123 MODULE_LICENSE("GPL v2");
+52 -30
drivers/char/ipmi/ipmi_si_intf.c
··· 412 412 return rv; 413 413 } 414 414 415 - static void start_check_enables(struct smi_info *smi_info) 415 + static void smi_mod_timer(struct smi_info *smi_info, unsigned long new_val) 416 + { 417 + smi_info->last_timeout_jiffies = jiffies; 418 + mod_timer(&smi_info->si_timer, new_val); 419 + smi_info->timer_running = true; 420 + } 421 + 422 + /* 423 + * Start a new message and (re)start the timer and thread. 424 + */ 425 + static void start_new_msg(struct smi_info *smi_info, unsigned char *msg, 426 + unsigned int size) 427 + { 428 + smi_mod_timer(smi_info, jiffies + SI_TIMEOUT_JIFFIES); 429 + 430 + if (smi_info->thread) 431 + wake_up_process(smi_info->thread); 432 + 433 + smi_info->handlers->start_transaction(smi_info->si_sm, msg, size); 434 + } 435 + 436 + static void start_check_enables(struct smi_info *smi_info, bool start_timer) 416 437 { 417 438 unsigned char msg[2]; 418 439 419 440 msg[0] = (IPMI_NETFN_APP_REQUEST << 2); 420 441 msg[1] = IPMI_GET_BMC_GLOBAL_ENABLES_CMD; 421 442 422 - smi_info->handlers->start_transaction(smi_info->si_sm, msg, 2); 443 + if (start_timer) 444 + start_new_msg(smi_info, msg, 2); 445 + else 446 + smi_info->handlers->start_transaction(smi_info->si_sm, msg, 2); 423 447 smi_info->si_state = SI_CHECKING_ENABLES; 424 448 } 425 449 426 - static void start_clear_flags(struct smi_info *smi_info) 450 + static void start_clear_flags(struct smi_info *smi_info, bool start_timer) 427 451 { 428 452 unsigned char msg[3]; 429 453 ··· 456 432 msg[1] = IPMI_CLEAR_MSG_FLAGS_CMD; 457 433 msg[2] = WDT_PRE_TIMEOUT_INT; 458 434 459 - smi_info->handlers->start_transaction(smi_info->si_sm, msg, 3); 435 + if (start_timer) 436 + start_new_msg(smi_info, msg, 3); 437 + else 438 + smi_info->handlers->start_transaction(smi_info->si_sm, msg, 3); 460 439 smi_info->si_state = SI_CLEARING_FLAGS; 461 440 } 462 441 ··· 469 442 smi_info->curr_msg->data[1] = IPMI_GET_MSG_CMD; 470 443 smi_info->curr_msg->data_size = 2; 471 444 472 - smi_info->handlers->start_transaction( 473 - smi_info->si_sm, 474 - smi_info->curr_msg->data, 475 - smi_info->curr_msg->data_size); 445 + start_new_msg(smi_info, smi_info->curr_msg->data, 446 + smi_info->curr_msg->data_size); 476 447 smi_info->si_state = SI_GETTING_MESSAGES; 477 448 } 478 449 ··· 480 455 smi_info->curr_msg->data[1] = IPMI_READ_EVENT_MSG_BUFFER_CMD; 481 456 smi_info->curr_msg->data_size = 2; 482 457 483 - smi_info->handlers->start_transaction( 484 - smi_info->si_sm, 485 - smi_info->curr_msg->data, 486 - smi_info->curr_msg->data_size); 458 + start_new_msg(smi_info, smi_info->curr_msg->data, 459 + smi_info->curr_msg->data_size); 487 460 smi_info->si_state = SI_GETTING_EVENTS; 488 - } 489 - 490 - static void smi_mod_timer(struct smi_info *smi_info, unsigned long new_val) 491 - { 492 - smi_info->last_timeout_jiffies = jiffies; 493 - mod_timer(&smi_info->si_timer, new_val); 494 - smi_info->timer_running = true; 495 461 } 496 462 497 463 /* ··· 494 478 * Note that we cannot just use disable_irq(), since the interrupt may 495 479 * be shared. 
496 480 */ 497 - static inline bool disable_si_irq(struct smi_info *smi_info) 481 + static inline bool disable_si_irq(struct smi_info *smi_info, bool start_timer) 498 482 { 499 483 if ((smi_info->irq) && (!smi_info->interrupt_disabled)) { 500 484 smi_info->interrupt_disabled = true; 501 - start_check_enables(smi_info); 485 + start_check_enables(smi_info, start_timer); 502 486 return true; 503 487 } 504 488 return false; ··· 508 492 { 509 493 if ((smi_info->irq) && (smi_info->interrupt_disabled)) { 510 494 smi_info->interrupt_disabled = false; 511 - start_check_enables(smi_info); 495 + start_check_enables(smi_info, true); 512 496 return true; 513 497 } 514 498 return false; ··· 526 510 527 511 msg = ipmi_alloc_smi_msg(); 528 512 if (!msg) { 529 - if (!disable_si_irq(smi_info)) 513 + if (!disable_si_irq(smi_info, true)) 530 514 smi_info->si_state = SI_NORMAL; 531 515 } else if (enable_si_irq(smi_info)) { 532 516 ipmi_free_smi_msg(msg); ··· 542 526 /* Watchdog pre-timeout */ 543 527 smi_inc_stat(smi_info, watchdog_pretimeouts); 544 528 545 - start_clear_flags(smi_info); 529 + start_clear_flags(smi_info, true); 546 530 smi_info->msg_flags &= ~WDT_PRE_TIMEOUT_INT; 547 531 if (smi_info->intf) 548 532 ipmi_smi_watchdog_pretimeout(smi_info->intf); ··· 895 879 msg[0] = (IPMI_NETFN_APP_REQUEST << 2); 896 880 msg[1] = IPMI_GET_MSG_FLAGS_CMD; 897 881 898 - smi_info->handlers->start_transaction( 899 - smi_info->si_sm, msg, 2); 882 + start_new_msg(smi_info, msg, 2); 900 883 smi_info->si_state = SI_GETTING_FLAGS; 901 884 goto restart; 902 885 } ··· 925 910 * disable and messages disabled. 926 911 */ 927 912 if (smi_info->supports_event_msg_buff || smi_info->irq) { 928 - start_check_enables(smi_info); 913 + start_check_enables(smi_info, true); 929 914 } else { 930 915 smi_info->curr_msg = alloc_msg_handle_irq(smi_info); 931 916 if (!smi_info->curr_msg) ··· 935 920 } 936 921 goto restart; 937 922 } 923 + 924 + if (si_sm_result == SI_SM_IDLE && smi_info->timer_running) { 925 + /* OK if it fails, the timer will just go off. */ 926 + if (del_timer(&smi_info->si_timer)) 927 + smi_info->timer_running = false; 928 + } 929 + 938 930 out: 939 931 return si_sm_result; 940 932 } ··· 2582 2560 .data = (void *)(unsigned long) SI_BT }, 2583 2561 {}, 2584 2562 }; 2563 + MODULE_DEVICE_TABLE(of, of_ipmi_match); 2585 2564 2586 2565 static int of_ipmi_probe(struct platform_device *dev) 2587 2566 { ··· 2669 2646 } 2670 2647 return 0; 2671 2648 } 2672 - MODULE_DEVICE_TABLE(of, of_ipmi_match); 2673 2649 #else 2674 2650 #define of_ipmi_match NULL 2675 2651 static int of_ipmi_probe(struct platform_device *dev) ··· 3635 3613 * Start clearing the flags before we enable interrupts or the 3636 3614 * timer to avoid racing with the timer. 3637 3615 */ 3638 - start_clear_flags(new_smi); 3616 + start_clear_flags(new_smi, false); 3639 3617 3640 3618 /* 3641 3619 * IRQ is defined to be set when non-zero. req_events will ··· 3930 3908 poll(to_clean); 3931 3909 schedule_timeout_uninterruptible(1); 3932 3910 } 3933 - disable_si_irq(to_clean); 3911 + disable_si_irq(to_clean, false); 3934 3912 while (to_clean->curr_msg || (to_clean->si_state != SI_NORMAL)) { 3935 3913 poll(to_clean); 3936 3914 schedule_timeout_uninterruptible(1);
+7 -1
drivers/char/ipmi/ipmi_watchdog.c
··· 153 153 /* The pre-timeout is disabled by default. */ 154 154 static int pretimeout; 155 155 156 + /* Default timeout to set on panic */ 157 + static int panic_wdt_timeout = 255; 158 + 156 159 /* Default action is to reset the board on a timeout. */ 157 160 static unsigned char action_val = WDOG_TIMEOUT_RESET; 158 161 ··· 295 292 296 293 module_param(pretimeout, timeout, 0644); 297 294 MODULE_PARM_DESC(pretimeout, "Pretimeout value in seconds."); 295 + 296 + module_param(panic_wdt_timeout, timeout, 0644); 297 + MODULE_PARM_DESC(timeout, "Timeout value on kernel panic in seconds."); 298 298 299 299 module_param_cb(action, &param_ops_str, action_op, 0644); 300 300 MODULE_PARM_DESC(action, "Timeout action. One of: " ··· 1195 1189 /* Make sure we do this only once. */ 1196 1190 panic_event_handled = 1; 1197 1191 1198 - timeout = 255; 1192 + timeout = panic_wdt_timeout; 1199 1193 pretimeout = 0; 1200 1194 panic_halt_ipmi_set_timeout(); 1201 1195 }
+1
drivers/clocksource/Kconfig
··· 1 1 menu "Clock Source drivers" 2 + depends on !ARCH_USES_GETTIMEOFFSET 2 3 3 4 config CLKSRC_OF 4 5 bool
+2 -2
drivers/clocksource/fsl_ftm_timer.c
··· 203 203 int err; 204 204 205 205 ftm_writel(0x00, priv->clkevt_base + FTM_CNTIN); 206 - ftm_writel(~0UL, priv->clkevt_base + FTM_MOD); 206 + ftm_writel(~0u, priv->clkevt_base + FTM_MOD); 207 207 208 208 ftm_reset_counter(priv->clkevt_base); 209 209 ··· 230 230 int err; 231 231 232 232 ftm_writel(0x00, priv->clksrc_base + FTM_CNTIN); 233 - ftm_writel(~0UL, priv->clksrc_base + FTM_MOD); 233 + ftm_writel(~0u, priv->clksrc_base + FTM_MOD); 234 234 235 235 ftm_reset_counter(priv->clksrc_base); 236 236
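The ~0UL to ~0u change matters on LP64 builds, where unsigned long is 64 bits but the FTM_MOD register is only 32 bits wide: the u suffix keeps the all-ones constant at the register's width. A runnable illustration:

#include <stdio.h>

int main(void)
{
	/* On an LP64 target, unsigned long is 64-bit. */
	printf("~0u  = 0x%x  (%zu bytes)\n", ~0u, sizeof(~0u));
	printf("~0UL = 0x%lx (%zu bytes)\n", ~0UL, sizeof(~0UL));
	/* Writing ~0UL into a 32-bit register truncates silently (and
	 * can trip truncation warnings); ~0u matches the width exactly. */
	return 0;
}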
+2 -1
drivers/cpufreq/Kconfig.arm
··· 84 84 config ARM_MT8173_CPUFREQ 85 85 bool "Mediatek MT8173 CPUFreq support" 86 86 depends on ARCH_MEDIATEK && REGULATOR 87 + depends on ARM64 || (ARM_CPU_TOPOLOGY && COMPILE_TEST) 87 88 depends on !CPU_THERMAL || THERMAL=y 88 89 select PM_OPP 89 90 help ··· 202 201 203 202 config ARM_SCPI_CPUFREQ 204 203 tristate "SCPI based CPUfreq driver" 205 - depends on ARM_BIG_LITTLE_CPUFREQ && ARM_SCPI_PROTOCOL 204 + depends on ARM_BIG_LITTLE_CPUFREQ && ARM_SCPI_PROTOCOL && COMMON_CLK_SCPI 206 205 help 207 206 This adds the CPUfreq driver support for ARM big.LITTLE platforms 208 207 using SCPI protocol for CPU power management.
-1
drivers/cpufreq/Kconfig.x86
··· 5 5 config X86_INTEL_PSTATE 6 6 bool "Intel P state control" 7 7 depends on X86 8 - select ACPI_PROCESSOR if ACPI 9 8 help 10 9 This driver provides a P state for Intel core processors. 11 10 The driver implements an internal governor and will become
+2 -1
drivers/cpufreq/cppc_cpufreq.c
··· 98 98 policy->max = cpu->perf_caps.highest_perf; 99 99 policy->cpuinfo.min_freq = policy->min; 100 100 policy->cpuinfo.max_freq = policy->max; 101 + policy->shared_type = cpu->shared_type; 101 102 102 103 if (policy->shared_type == CPUFREQ_SHARED_TYPE_ANY) 103 104 cpumask_copy(policy->cpus, cpu->shared_cpu_map); 104 - else { 105 + else if (policy->shared_type == CPUFREQ_SHARED_TYPE_ALL) { 105 106 /* Support only SW_ANY for now. */ 106 107 pr_debug("Unsupported CPU co-ord type\n"); 107 108 return -EFAULT;
+3 -6
drivers/cpufreq/cpufreq.c
··· 1401 1401 } 1402 1402 1403 1403 cpumask_clear_cpu(cpu, policy->real_cpus); 1404 - 1405 - if (cpumask_empty(policy->real_cpus)) { 1406 - cpufreq_policy_free(policy, true); 1407 - return; 1408 - } 1409 - 1410 1404 remove_cpu_dev_symlink(policy, cpu); 1405 + 1406 + if (cpumask_empty(policy->real_cpus)) 1407 + cpufreq_policy_free(policy, true); 1411 1408 } 1412 1409 1413 1410 static void handle_update(struct work_struct *work)
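The cpufreq.c hunk is a lifetime fix: the per-CPU sysfs symlink must be removed while the policy it points into is still alive, and only afterwards, once real_cpus is empty, may the policy be freed. A hedged sketch of the general rule, with illustrative names rather than cpufreq's:

#include <stdlib.h>

struct obj {
	int links;	/* things still pointing at us */
};

static void remove_link(struct obj *o)
{
	/* detach whatever references the object... */
	o->links--;
}

static void teardown(struct obj *o)
{
	remove_link(o);		/* 1: drop references first */
	if (o->links == 0)	/* 2: only then consider freeing */
		free(o);
}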
+78 -246
drivers/cpufreq/intel_pstate.c
··· 34 34 #include <asm/cpu_device_id.h> 35 35 #include <asm/cpufeature.h> 36 36 37 - #if IS_ENABLED(CONFIG_ACPI) 38 - #include <acpi/processor.h> 39 - #endif 40 - 41 - #define BYT_RATIOS 0x66a 42 - #define BYT_VIDS 0x66b 43 - #define BYT_TURBO_RATIOS 0x66c 44 - #define BYT_TURBO_VIDS 0x66d 37 + #define ATOM_RATIOS 0x66a 38 + #define ATOM_VIDS 0x66b 39 + #define ATOM_TURBO_RATIOS 0x66c 40 + #define ATOM_TURBO_VIDS 0x66d 45 41 46 42 #define FRAC_BITS 8 47 43 #define int_tofp(X) ((int64_t)(X) << FRAC_BITS) ··· 113 117 u64 prev_mperf; 114 118 u64 prev_tsc; 115 119 struct sample sample; 116 - #if IS_ENABLED(CONFIG_ACPI) 117 - struct acpi_processor_performance acpi_perf_data; 118 - #endif 119 120 }; 120 121 121 122 static struct cpudata **all_cpu_data; ··· 143 150 static struct pstate_adjust_policy pid_params; 144 151 static struct pstate_funcs pstate_funcs; 145 152 static int hwp_active; 146 - static int no_acpi_perf; 147 153 148 154 struct perf_limits { 149 155 int no_turbo; ··· 155 163 int max_sysfs_pct; 156 164 int min_policy_pct; 157 165 int min_sysfs_pct; 158 - int max_perf_ctl; 159 - int min_perf_ctl; 160 166 }; 161 167 162 168 static struct perf_limits performance_limits = { ··· 181 191 .max_sysfs_pct = 100, 182 192 .min_policy_pct = 0, 183 193 .min_sysfs_pct = 0, 184 - .max_perf_ctl = 0, 185 - .min_perf_ctl = 0, 186 194 }; 187 195 188 196 #ifdef CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE 189 197 static struct perf_limits *limits = &performance_limits; 190 198 #else 191 199 static struct perf_limits *limits = &powersave_limits; 192 - #endif 193 - 194 - #if IS_ENABLED(CONFIG_ACPI) 195 - /* 196 - * The max target pstate ratio is a 8 bit value in both PLATFORM_INFO MSR and 197 - * in TURBO_RATIO_LIMIT MSR, which pstate driver stores in max_pstate and 198 - * max_turbo_pstate fields. The PERF_CTL MSR contains 16 bit value for P state 199 - * ratio, out of it only high 8 bits are used. For example 0x1700 is setting 200 - * target ratio 0x17. The _PSS control value stores in a format which can be 201 - * directly written to PERF_CTL MSR. But in intel_pstate driver this shift 202 - * occurs during write to PERF_CTL (E.g. for cores core_set_pstate()). 203 - * This function converts the _PSS control value to intel pstate driver format 204 - * for comparison and assignment. 205 - */ 206 - static int convert_to_native_pstate_format(struct cpudata *cpu, int index) 207 - { 208 - return cpu->acpi_perf_data.states[index].control >> 8; 209 - } 210 - 211 - static int intel_pstate_init_perf_limits(struct cpufreq_policy *policy) 212 - { 213 - struct cpudata *cpu; 214 - int ret; 215 - bool turbo_absent = false; 216 - int max_pstate_index; 217 - int min_pss_ctl, max_pss_ctl, turbo_pss_ctl; 218 - int i; 219 - 220 - cpu = all_cpu_data[policy->cpu]; 221 - 222 - pr_debug("intel_pstate: default limits 0x%x 0x%x 0x%x\n", 223 - cpu->pstate.min_pstate, cpu->pstate.max_pstate, 224 - cpu->pstate.turbo_pstate); 225 - 226 - if (!cpu->acpi_perf_data.shared_cpu_map && 227 - zalloc_cpumask_var_node(&cpu->acpi_perf_data.shared_cpu_map, 228 - GFP_KERNEL, cpu_to_node(policy->cpu))) { 229 - return -ENOMEM; 230 - } 231 - 232 - ret = acpi_processor_register_performance(&cpu->acpi_perf_data, 233 - policy->cpu); 234 - if (ret) 235 - return ret; 236 - 237 - /* 238 - * Check if the control value in _PSS is for PERF_CTL MSR, which should 239 - * guarantee that the states returned by it map to the states in our 240 - * list directly. 
241 - */ 242 - if (cpu->acpi_perf_data.control_register.space_id != 243 - ACPI_ADR_SPACE_FIXED_HARDWARE) 244 - return -EIO; 245 - 246 - pr_debug("intel_pstate: CPU%u - ACPI _PSS perf data\n", policy->cpu); 247 - for (i = 0; i < cpu->acpi_perf_data.state_count; i++) 248 - pr_debug(" %cP%d: %u MHz, %u mW, 0x%x\n", 249 - (i == cpu->acpi_perf_data.state ? '*' : ' '), i, 250 - (u32) cpu->acpi_perf_data.states[i].core_frequency, 251 - (u32) cpu->acpi_perf_data.states[i].power, 252 - (u32) cpu->acpi_perf_data.states[i].control); 253 - 254 - /* 255 - * If there is only one entry _PSS, simply ignore _PSS and continue as 256 - * usual without taking _PSS into account 257 - */ 258 - if (cpu->acpi_perf_data.state_count < 2) 259 - return 0; 260 - 261 - turbo_pss_ctl = convert_to_native_pstate_format(cpu, 0); 262 - min_pss_ctl = convert_to_native_pstate_format(cpu, 263 - cpu->acpi_perf_data.state_count - 1); 264 - /* Check if there is a turbo freq in _PSS */ 265 - if (turbo_pss_ctl <= cpu->pstate.max_pstate && 266 - turbo_pss_ctl > cpu->pstate.min_pstate) { 267 - pr_debug("intel_pstate: no turbo range exists in _PSS\n"); 268 - limits->no_turbo = limits->turbo_disabled = 1; 269 - cpu->pstate.turbo_pstate = cpu->pstate.max_pstate; 270 - turbo_absent = true; 271 - } 272 - 273 - /* Check if the max non turbo p state < Intel P state max */ 274 - max_pstate_index = turbo_absent ? 0 : 1; 275 - max_pss_ctl = convert_to_native_pstate_format(cpu, max_pstate_index); 276 - if (max_pss_ctl < cpu->pstate.max_pstate && 277 - max_pss_ctl > cpu->pstate.min_pstate) 278 - cpu->pstate.max_pstate = max_pss_ctl; 279 - 280 - /* check If min perf > Intel P State min */ 281 - if (min_pss_ctl > cpu->pstate.min_pstate && 282 - min_pss_ctl < cpu->pstate.max_pstate) { 283 - cpu->pstate.min_pstate = min_pss_ctl; 284 - policy->cpuinfo.min_freq = min_pss_ctl * cpu->pstate.scaling; 285 - } 286 - 287 - if (turbo_absent) 288 - policy->cpuinfo.max_freq = cpu->pstate.max_pstate * 289 - cpu->pstate.scaling; 290 - else { 291 - policy->cpuinfo.max_freq = cpu->pstate.turbo_pstate * 292 - cpu->pstate.scaling; 293 - /* 294 - * The _PSS table doesn't contain whole turbo frequency range. 295 - * This just contains +1 MHZ above the max non turbo frequency, 296 - * with control value corresponding to max turbo ratio. But 297 - * when cpufreq set policy is called, it will call with this 298 - * max frequency, which will cause a reduced performance as 299 - * this driver uses real max turbo frequency as the max 300 - * frequeny. So correct this frequency in _PSS table to 301 - * correct max turbo frequency based on the turbo ratio. 302 - * Also need to convert to MHz as _PSS freq is in MHz. 
303 - */ 304 - cpu->acpi_perf_data.states[0].core_frequency = 305 - turbo_pss_ctl * 100; 306 - } 307 - 308 - pr_debug("intel_pstate: Updated limits using _PSS 0x%x 0x%x 0x%x\n", 309 - cpu->pstate.min_pstate, cpu->pstate.max_pstate, 310 - cpu->pstate.turbo_pstate); 311 - pr_debug("intel_pstate: policy max_freq=%d Khz min_freq = %d KHz\n", 312 - policy->cpuinfo.max_freq, policy->cpuinfo.min_freq); 313 - 314 - return 0; 315 - } 316 - 317 - static int intel_pstate_exit_perf_limits(struct cpufreq_policy *policy) 318 - { 319 - struct cpudata *cpu; 320 - 321 - if (!no_acpi_perf) 322 - return 0; 323 - 324 - cpu = all_cpu_data[policy->cpu]; 325 - acpi_processor_unregister_performance(policy->cpu); 326 - return 0; 327 - } 328 - 329 - #else 330 - static int intel_pstate_init_perf_limits(struct cpufreq_policy *policy) 331 - { 332 - return 0; 333 - } 334 - 335 - static int intel_pstate_exit_perf_limits(struct cpufreq_policy *policy) 336 - { 337 - return 0; 338 - } 339 200 #endif 340 201 341 202 static inline void pid_reset(struct _pid *pid, int setpoint, int busy, ··· 528 687 wrmsrl_on_cpu(cpudata->cpu, MSR_PM_ENABLE, 0x1); 529 688 } 530 689 531 - static int byt_get_min_pstate(void) 690 + static int atom_get_min_pstate(void) 532 691 { 533 692 u64 value; 534 693 535 - rdmsrl(BYT_RATIOS, value); 694 + rdmsrl(ATOM_RATIOS, value); 536 695 return (value >> 8) & 0x7F; 537 696 } 538 697 539 - static int byt_get_max_pstate(void) 698 + static int atom_get_max_pstate(void) 540 699 { 541 700 u64 value; 542 701 543 - rdmsrl(BYT_RATIOS, value); 702 + rdmsrl(ATOM_RATIOS, value); 544 703 return (value >> 16) & 0x7F; 545 704 } 546 705 547 - static int byt_get_turbo_pstate(void) 706 + static int atom_get_turbo_pstate(void) 548 707 { 549 708 u64 value; 550 709 551 - rdmsrl(BYT_TURBO_RATIOS, value); 710 + rdmsrl(ATOM_TURBO_RATIOS, value); 552 711 return value & 0x7F; 553 712 } 554 713 555 - static void byt_set_pstate(struct cpudata *cpudata, int pstate) 714 + static void atom_set_pstate(struct cpudata *cpudata, int pstate) 556 715 { 557 716 u64 val; 558 717 int32_t vid_fp; ··· 577 736 wrmsrl_on_cpu(cpudata->cpu, MSR_IA32_PERF_CTL, val); 578 737 } 579 738 580 - #define BYT_BCLK_FREQS 5 581 - static int byt_freq_table[BYT_BCLK_FREQS] = { 833, 1000, 1333, 1167, 800}; 582 - 583 - static int byt_get_scaling(void) 739 + static int silvermont_get_scaling(void) 584 740 { 585 741 u64 value; 586 742 int i; 743 + /* Defined in Table 35-6 from SDM (Sept 2015) */ 744 + static int silvermont_freq_table[] = { 745 + 83300, 100000, 133300, 116700, 80000}; 587 746 588 747 rdmsrl(MSR_FSB_FREQ, value); 589 - i = value & 0x3; 748 + i = value & 0x7; 749 + WARN_ON(i > 4); 590 750 591 - BUG_ON(i > BYT_BCLK_FREQS); 592 - 593 - return byt_freq_table[i] * 100; 751 + return silvermont_freq_table[i]; 594 752 } 595 753 596 - static void byt_get_vid(struct cpudata *cpudata) 754 + static int airmont_get_scaling(void) 755 + { 756 + u64 value; 757 + int i; 758 + /* Defined in Table 35-10 from SDM (Sept 2015) */ 759 + static int airmont_freq_table[] = { 760 + 83300, 100000, 133300, 116700, 80000, 761 + 93300, 90000, 88900, 87500}; 762 + 763 + rdmsrl(MSR_FSB_FREQ, value); 764 + i = value & 0xF; 765 + WARN_ON(i > 8); 766 + 767 + return airmont_freq_table[i]; 768 + } 769 + 770 + static void atom_get_vid(struct cpudata *cpudata) 597 771 { 598 772 u64 value; 599 773 600 - rdmsrl(BYT_VIDS, value); 774 + rdmsrl(ATOM_VIDS, value); 601 775 cpudata->vid.min = int_tofp((value >> 8) & 0x7f); 602 776 cpudata->vid.max = int_tofp((value >> 16) & 0x7f); 603 777 
cpudata->vid.ratio = div_fp( ··· 620 764 int_tofp(cpudata->pstate.max_pstate - 621 765 cpudata->pstate.min_pstate)); 622 766 623 - rdmsrl(BYT_TURBO_VIDS, value); 767 + rdmsrl(ATOM_TURBO_VIDS, value); 624 768 cpudata->vid.turbo = value & 0x7f; 625 769 } 626 770 ··· 741 885 }, 742 886 }; 743 887 744 - static struct cpu_defaults byt_params = { 888 + static struct cpu_defaults silvermont_params = { 745 889 .pid_policy = { 746 890 .sample_rate_ms = 10, 747 891 .deadband = 0, ··· 751 895 .i_gain_pct = 4, 752 896 }, 753 897 .funcs = { 754 - .get_max = byt_get_max_pstate, 755 - .get_max_physical = byt_get_max_pstate, 756 - .get_min = byt_get_min_pstate, 757 - .get_turbo = byt_get_turbo_pstate, 758 - .set = byt_set_pstate, 759 - .get_scaling = byt_get_scaling, 760 - .get_vid = byt_get_vid, 898 + .get_max = atom_get_max_pstate, 899 + .get_max_physical = atom_get_max_pstate, 900 + .get_min = atom_get_min_pstate, 901 + .get_turbo = atom_get_turbo_pstate, 902 + .set = atom_set_pstate, 903 + .get_scaling = silvermont_get_scaling, 904 + .get_vid = atom_get_vid, 905 + }, 906 + }; 907 + 908 + static struct cpu_defaults airmont_params = { 909 + .pid_policy = { 910 + .sample_rate_ms = 10, 911 + .deadband = 0, 912 + .setpoint = 60, 913 + .p_gain_pct = 14, 914 + .d_gain_pct = 0, 915 + .i_gain_pct = 4, 916 + }, 917 + .funcs = { 918 + .get_max = atom_get_max_pstate, 919 + .get_max_physical = atom_get_max_pstate, 920 + .get_min = atom_get_min_pstate, 921 + .get_turbo = atom_get_turbo_pstate, 922 + .set = atom_set_pstate, 923 + .get_scaling = airmont_get_scaling, 924 + .get_vid = atom_get_vid, 761 925 }, 762 926 }; 763 927 ··· 814 938 * policy, or by cpu specific default values determined through 815 939 * experimentation. 816 940 */ 817 - if (limits->max_perf_ctl && limits->max_sysfs_pct >= 818 - limits->max_policy_pct) { 819 - *max = limits->max_perf_ctl; 820 - } else { 821 - max_perf_adj = fp_toint(mul_fp(int_tofp(max_perf), 822 - limits->max_perf)); 823 - *max = clamp_t(int, max_perf_adj, cpu->pstate.min_pstate, 824 - cpu->pstate.turbo_pstate); 825 - } 941 + max_perf_adj = fp_toint(mul_fp(int_tofp(max_perf), limits->max_perf)); 942 + *max = clamp_t(int, max_perf_adj, 943 + cpu->pstate.min_pstate, cpu->pstate.turbo_pstate); 826 944 827 - if (limits->min_perf_ctl) { 828 - *min = limits->min_perf_ctl; 829 - } else { 830 - min_perf = fp_toint(mul_fp(int_tofp(max_perf), 831 - limits->min_perf)); 832 - *min = clamp_t(int, min_perf, cpu->pstate.min_pstate, max_perf); 833 - } 945 + min_perf = fp_toint(mul_fp(int_tofp(max_perf), limits->min_perf)); 946 + *min = clamp_t(int, min_perf, cpu->pstate.min_pstate, max_perf); 834 947 } 835 948 836 949 static void intel_pstate_set_pstate(struct cpudata *cpu, int pstate, bool force) ··· 1018 1153 static const struct x86_cpu_id intel_pstate_cpu_ids[] = { 1019 1154 ICPU(0x2a, core_params), 1020 1155 ICPU(0x2d, core_params), 1021 - ICPU(0x37, byt_params), 1156 + ICPU(0x37, silvermont_params), 1022 1157 ICPU(0x3a, core_params), 1023 1158 ICPU(0x3c, core_params), 1024 1159 ICPU(0x3d, core_params), ··· 1027 1162 ICPU(0x45, core_params), 1028 1163 ICPU(0x46, core_params), 1029 1164 ICPU(0x47, core_params), 1030 - ICPU(0x4c, byt_params), 1165 + ICPU(0x4c, airmont_params), 1031 1166 ICPU(0x4e, core_params), 1032 1167 ICPU(0x4f, core_params), 1033 1168 ICPU(0x5e, core_params), ··· 1094 1229 1095 1230 static int intel_pstate_set_policy(struct cpufreq_policy *policy) 1096 1231 { 1097 - #if IS_ENABLED(CONFIG_ACPI) 1098 - struct cpudata *cpu; 1099 - int i; 1100 - #endif 1101 - 
pr_debug("intel_pstate: %s max %u policy->max %u\n", __func__, 1102 - policy->cpuinfo.max_freq, policy->max); 1103 1232 if (!policy->cpuinfo.max_freq) 1104 1233 return -ENODEV; 1105 1234 ··· 1101 1242 policy->max >= policy->cpuinfo.max_freq) { 1102 1243 pr_debug("intel_pstate: set performance\n"); 1103 1244 limits = &performance_limits; 1245 + if (hwp_active) 1246 + intel_pstate_hwp_set(); 1104 1247 return 0; 1105 1248 } 1106 1249 ··· 1110 1249 limits = &powersave_limits; 1111 1250 limits->min_policy_pct = (policy->min * 100) / policy->cpuinfo.max_freq; 1112 1251 limits->min_policy_pct = clamp_t(int, limits->min_policy_pct, 0 , 100); 1113 - limits->max_policy_pct = (policy->max * 100) / policy->cpuinfo.max_freq; 1252 + limits->max_policy_pct = DIV_ROUND_UP(policy->max * 100, 1253 + policy->cpuinfo.max_freq); 1114 1254 limits->max_policy_pct = clamp_t(int, limits->max_policy_pct, 0 , 100); 1115 1255 1116 1256 /* Normalize user input to [min_policy_pct, max_policy_pct] */ ··· 1123 1261 limits->max_sysfs_pct); 1124 1262 limits->max_perf_pct = max(limits->min_policy_pct, 1125 1263 limits->max_perf_pct); 1264 + limits->max_perf = round_up(limits->max_perf, 8); 1126 1265 1127 1266 /* Make sure min_perf_pct <= max_perf_pct */ 1128 1267 limits->min_perf_pct = min(limits->max_perf_pct, limits->min_perf_pct); ··· 1132 1269 int_tofp(100)); 1133 1270 limits->max_perf = div_fp(int_tofp(limits->max_perf_pct), 1134 1271 int_tofp(100)); 1135 - 1136 - #if IS_ENABLED(CONFIG_ACPI) 1137 - cpu = all_cpu_data[policy->cpu]; 1138 - for (i = 0; i < cpu->acpi_perf_data.state_count; i++) { 1139 - int control; 1140 - 1141 - control = convert_to_native_pstate_format(cpu, i); 1142 - if (control * cpu->pstate.scaling == policy->max) 1143 - limits->max_perf_ctl = control; 1144 - if (control * cpu->pstate.scaling == policy->min) 1145 - limits->min_perf_ctl = control; 1146 - } 1147 - 1148 - pr_debug("intel_pstate: max %u policy_max %u perf_ctl [0x%x-0x%x]\n", 1149 - policy->cpuinfo.max_freq, policy->max, limits->min_perf_ctl, 1150 - limits->max_perf_ctl); 1151 - #endif 1152 1272 1153 1273 if (hwp_active) 1154 1274 intel_pstate_hwp_set(); ··· 1187 1341 policy->cpuinfo.min_freq = cpu->pstate.min_pstate * cpu->pstate.scaling; 1188 1342 policy->cpuinfo.max_freq = 1189 1343 cpu->pstate.turbo_pstate * cpu->pstate.scaling; 1190 - if (!no_acpi_perf) 1191 - intel_pstate_init_perf_limits(policy); 1192 - /* 1193 - * If there is no acpi perf data or error, we ignore and use Intel P 1194 - * state calculated limits, So this is not fatal error. 
1195 - */ 1196 1344 policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL; 1197 1345 cpumask_set_cpu(policy->cpu, policy->cpus); 1198 1346 1199 1347 return 0; 1200 - } 1201 - 1202 - static int intel_pstate_cpu_exit(struct cpufreq_policy *policy) 1203 - { 1204 - return intel_pstate_exit_perf_limits(policy); 1205 1348 } 1206 1349 1207 1350 static struct cpufreq_driver intel_pstate_driver = { ··· 1199 1364 .setpolicy = intel_pstate_set_policy, 1200 1365 .get = intel_pstate_get, 1201 1366 .init = intel_pstate_cpu_init, 1202 - .exit = intel_pstate_cpu_exit, 1203 1367 .stop_cpu = intel_pstate_stop_cpu, 1204 1368 .name = "intel_pstate", 1205 1369 }; ··· 1240 1406 } 1241 1407 1242 1408 #if IS_ENABLED(CONFIG_ACPI) 1409 + #include <acpi/processor.h> 1243 1410 1244 1411 static bool intel_pstate_no_acpi_pss(void) 1245 1412 { ··· 1436 1601 force_load = 1; 1437 1602 if (!strcmp(str, "hwp_only")) 1438 1603 hwp_only = 1; 1439 - if (!strcmp(str, "no_acpi")) 1440 - no_acpi_perf = 1; 1441 - 1442 1604 return 0; 1443 1605 } 1444 1606 early_param("intel_pstate", intel_pstate_setup);
+10 -10
drivers/dma/at_hdmac.c
··· 729 729 return NULL; 730 730 731 731 dev_info(chan2dev(chan), 732 - "%s: src=0x%08x, dest=0x%08x, numf=%d, frame_size=%d, flags=0x%lx\n", 733 - __func__, xt->src_start, xt->dst_start, xt->numf, 732 + "%s: src=%pad, dest=%pad, numf=%d, frame_size=%d, flags=0x%lx\n", 733 + __func__, &xt->src_start, &xt->dst_start, xt->numf, 734 734 xt->frame_size, flags); 735 735 736 736 /* ··· 824 824 u32 ctrla; 825 825 u32 ctrlb; 826 826 827 - dev_vdbg(chan2dev(chan), "prep_dma_memcpy: d0x%x s0x%x l0x%zx f0x%lx\n", 828 - dest, src, len, flags); 827 + dev_vdbg(chan2dev(chan), "prep_dma_memcpy: d%pad s%pad l0x%zx f0x%lx\n", 828 + &dest, &src, len, flags); 829 829 830 830 if (unlikely(!len)) { 831 831 dev_dbg(chan2dev(chan), "prep_dma_memcpy: length is zero!\n"); ··· 938 938 void __iomem *vaddr; 939 939 dma_addr_t paddr; 940 940 941 - dev_vdbg(chan2dev(chan), "%s: d0x%x v0x%x l0x%zx f0x%lx\n", __func__, 942 - dest, value, len, flags); 941 + dev_vdbg(chan2dev(chan), "%s: d%pad v0x%x l0x%zx f0x%lx\n", __func__, 942 + &dest, value, len, flags); 943 943 944 944 if (unlikely(!len)) { 945 945 dev_dbg(chan2dev(chan), "%s: length is zero!\n", __func__); ··· 1022 1022 dma_addr_t dest = sg_dma_address(sg); 1023 1023 size_t len = sg_dma_len(sg); 1024 1024 1025 - dev_vdbg(chan2dev(chan), "%s: d0x%08x, l0x%zx\n", 1026 - __func__, dest, len); 1025 + dev_vdbg(chan2dev(chan), "%s: d%pad, l0x%zx\n", 1026 + __func__, &dest, len); 1027 1027 1028 1028 if (!is_dma_fill_aligned(chan->device, dest, 0, len)) { 1029 1029 dev_err(chan2dev(chan), "%s: buffer is not aligned\n", ··· 1439 1439 unsigned int periods = buf_len / period_len; 1440 1440 unsigned int i; 1441 1441 1442 - dev_vdbg(chan2dev(chan), "prep_dma_cyclic: %s buf@0x%08x - %d (%d/%d)\n", 1442 + dev_vdbg(chan2dev(chan), "prep_dma_cyclic: %s buf@%pad - %d (%d/%d)\n", 1443 1443 direction == DMA_MEM_TO_DEV ? "TO DEVICE" : "FROM DEVICE", 1444 - buf_addr, 1444 + &buf_addr, 1445 1445 periods, buf_len, period_len); 1446 1446 1447 1447 if (unlikely(!atslave || !buf_len || !period_len)) {
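All of the at_hdmac format-string churn above is about dma_addr_t width: with LPAE enabled it is 64-bit even on 32-bit ARM, so 0x%08x both truncates the value and mismatches the vararg. The kernel's %pad specifier takes a pointer to the value and prints it at its true width, which is why each converted argument gained an &. A userspace illustration of the underlying width problem:

#include <stdio.h>
#include <inttypes.h>

typedef uint64_t dma_addr_t;	/* e.g. 32-bit ARM with LPAE */

int main(void)
{
	dma_addr_t dest = 0x123456789aULL;

	/* printf("d0x%08x\n", dest); -- wrong: %x expects unsigned int,
	 * so the upper 32 bits are lost (and the call is undefined). */
	printf("d0x%" PRIx64 "\n", (uint64_t)dest);	/* full width */
	return 0;
}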
+3 -3
drivers/dma/at_hdmac_regs.h
··· 385 385 static void atc_dump_lli(struct at_dma_chan *atchan, struct at_lli *lli) 386 386 { 387 387 dev_crit(chan2dev(&atchan->chan_common), 388 - " desc: s0x%x d0x%x ctrl0x%x:0x%x l0x%x\n", 389 - lli->saddr, lli->daddr, 390 - lli->ctrla, lli->ctrlb, lli->dscr); 388 + " desc: s%pad d%pad ctrl0x%x:0x%x l0x%pad\n", 389 + &lli->saddr, &lli->daddr, 390 + lli->ctrla, lli->ctrlb, &lli->dscr); 391 391 } 392 392 393 393
+10 -10
drivers/dma/at_xdmac.c
··· 920 920 desc->lld.mbr_cfg = chan_cc; 921 921 922 922 dev_dbg(chan2dev(chan), 923 - "%s: lld: mbr_sa=0x%08x, mbr_da=0x%08x, mbr_ubc=0x%08x, mbr_cfg=0x%08x\n", 924 - __func__, desc->lld.mbr_sa, desc->lld.mbr_da, 923 + "%s: lld: mbr_sa=%pad, mbr_da=%pad, mbr_ubc=0x%08x, mbr_cfg=0x%08x\n", 924 + __func__, &desc->lld.mbr_sa, &desc->lld.mbr_da, 925 925 desc->lld.mbr_ubc, desc->lld.mbr_cfg); 926 926 927 927 /* Chain lld. */ ··· 953 953 if ((xt->numf > 1) && (xt->frame_size > 1)) 954 954 return NULL; 955 955 956 - dev_dbg(chan2dev(chan), "%s: src=0x%08x, dest=0x%08x, numf=%d, frame_size=%d, flags=0x%lx\n", 957 - __func__, xt->src_start, xt->dst_start, xt->numf, 956 + dev_dbg(chan2dev(chan), "%s: src=%pad, dest=%pad, numf=%d, frame_size=%d, flags=0x%lx\n", 957 + __func__, &xt->src_start, &xt->dst_start, xt->numf, 958 958 xt->frame_size, flags); 959 959 960 960 src_addr = xt->src_start; ··· 1179 1179 desc->lld.mbr_cfg = chan_cc; 1180 1180 1181 1181 dev_dbg(chan2dev(chan), 1182 - "%s: lld: mbr_da=0x%08x, mbr_ds=0x%08x, mbr_ubc=0x%08x, mbr_cfg=0x%08x\n", 1183 - __func__, desc->lld.mbr_da, desc->lld.mbr_ds, desc->lld.mbr_ubc, 1182 + "%s: lld: mbr_da=%pad, mbr_ds=%pad, mbr_ubc=0x%08x, mbr_cfg=0x%08x\n", 1183 + __func__, &desc->lld.mbr_da, &desc->lld.mbr_ds, desc->lld.mbr_ubc, 1184 1184 desc->lld.mbr_cfg); 1185 1185 1186 1186 return desc; ··· 1193 1193 struct at_xdmac_chan *atchan = to_at_xdmac_chan(chan); 1194 1194 struct at_xdmac_desc *desc; 1195 1195 1196 - dev_dbg(chan2dev(chan), "%s: dest=0x%08x, len=%d, pattern=0x%x, flags=0x%lx\n", 1197 - __func__, dest, len, value, flags); 1196 + dev_dbg(chan2dev(chan), "%s: dest=%pad, len=%d, pattern=0x%x, flags=0x%lx\n", 1197 + __func__, &dest, len, value, flags); 1198 1198 1199 1199 if (unlikely(!len)) 1200 1200 return NULL; ··· 1229 1229 1230 1230 /* Prepare descriptors. */ 1231 1231 for_each_sg(sgl, sg, sg_len, i) { 1232 - dev_dbg(chan2dev(chan), "%s: dest=0x%08x, len=%d, pattern=0x%x, flags=0x%lx\n", 1233 - __func__, sg_dma_address(sg), sg_dma_len(sg), 1232 + dev_dbg(chan2dev(chan), "%s: dest=%pad, len=%d, pattern=0x%x, flags=0x%lx\n", 1233 + __func__, &sg_dma_address(sg), sg_dma_len(sg), 1234 1234 value, flags); 1235 1235 desc = at_xdmac_memset_create_desc(chan, atchan, 1236 1236 sg_dma_address(sg),
+2 -2
drivers/dma/edma.c
··· 107 107 108 108 /* CCCFG register */ 109 109 #define GET_NUM_DMACH(x) (x & 0x7) /* bits 0-2 */ 110 - #define GET_NUM_QDMACH(x) (x & 0x70 >> 4) /* bits 4-6 */ 110 + #define GET_NUM_QDMACH(x) ((x & 0x70) >> 4) /* bits 4-6 */ 111 111 #define GET_NUM_PAENTRY(x) ((x & 0x7000) >> 12) /* bits 12-14 */ 112 112 #define GET_NUM_EVQUE(x) ((x & 0x70000) >> 16) /* bits 16-18 */ 113 113 #define GET_NUM_REGN(x) ((x & 0x300000) >> 20) /* bits 20-21 */ ··· 1565 1565 struct platform_device *tc_pdev; 1566 1566 int ret; 1567 1567 1568 - if (!tc) 1568 + if (!IS_ENABLED(CONFIG_OF) || !tc) 1569 1569 return; 1570 1570 1571 1571 tc_pdev = of_find_device_by_node(tc->node);
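The GET_NUM_QDMACH change is a pure operator-precedence fix: >> binds tighter than &, so x & 0x70 >> 4 parses as x & (0x70 >> 4), i.e. x & 0x7, reading bits 0-2 instead of bits 4-6. A runnable check:

#include <stdio.h>

#define BUGGY_GET_NUM_QDMACH(x)	(x & 0x70 >> 4)		/* = x & 0x7 */
#define FIXED_GET_NUM_QDMACH(x)	((x & 0x70) >> 4)	/* bits 4-6 */

int main(void)
{
	unsigned int cccfg = 0x23;	/* bits 4-6 = 2, bits 0-2 = 3 */

	printf("buggy: %u\n", BUGGY_GET_NUM_QDMACH(cccfg));	/* 3: wrong field */
	printf("fixed: %u\n", FIXED_GET_NUM_QDMACH(cccfg));	/* 2: intended field */
	return 0;
}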
+1 -1
drivers/dma/imx-sdma.c
··· 1462 1462 1463 1463 #define EVENT_REMAP_CELLS 3 1464 1464 1465 - static int __init sdma_event_remap(struct sdma_engine *sdma) 1465 + static int sdma_event_remap(struct sdma_engine *sdma) 1466 1466 { 1467 1467 struct device_node *np = sdma->dev->of_node; 1468 1468 struct device_node *gpr_np = of_parse_phandle(np, "gpr", 0);
+8 -3
drivers/dma/sh/usb-dmac.c
··· 679 679 struct usb_dmac *dmac = dev_get_drvdata(dev); 680 680 int i; 681 681 682 - for (i = 0; i < dmac->n_channels; ++i) 682 + for (i = 0; i < dmac->n_channels; ++i) { 683 + if (!dmac->channels[i].iomem) 684 + break; 683 685 usb_dmac_chan_halt(&dmac->channels[i]); 686 + } 684 687 685 688 return 0; 686 689 } ··· 802 799 ret = pm_runtime_get_sync(&pdev->dev); 803 800 if (ret < 0) { 804 801 dev_err(&pdev->dev, "runtime PM get sync failed (%d)\n", ret); 805 - return ret; 802 + goto error_pm; 806 803 } 807 804 808 805 ret = usb_dmac_init(dmac); 809 - pm_runtime_put(&pdev->dev); 810 806 811 807 if (ret) { 812 808 dev_err(&pdev->dev, "failed to reset device\n"); ··· 853 851 if (ret < 0) 854 852 goto error; 855 853 854 + pm_runtime_put(&pdev->dev); 856 855 return 0; 857 856 858 857 error: 859 858 of_dma_controller_free(pdev->dev.of_node); 859 + pm_runtime_put(&pdev->dev); 860 + error_pm: 860 861 pm_runtime_disable(&pdev->dev); 861 862 return ret; 862 863 }
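The usb-dmac probe rework keeps the pm_runtime_get_sync() reference held until registration has finished and makes every failure path drop it exactly once before pm_runtime_disable(). The usual goto-unwind shape, sketched with a placeholder init call (my_probe and my_hw_init are not from the driver):

#include <linux/platform_device.h>
#include <linux/pm_runtime.h>

static int my_probe(struct platform_device *pdev)
{
	int ret;

	pm_runtime_enable(&pdev->dev);
	ret = pm_runtime_get_sync(&pdev->dev);
	if (ret < 0)
		goto error_pm;

	ret = my_hw_init(pdev);			/* placeholder */
	if (ret)
		goto error;

	pm_runtime_put(&pdev->dev);		/* success: drop probe-time ref */
	return 0;

error:
	pm_runtime_put(&pdev->dev);
error_pm:
	pm_runtime_disable(&pdev->dev);
	return ret;
}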
+5 -2
drivers/gpio/gpio-74xx-mmio.c
··· 113 113 114 114 static int mmio_74xx_gpio_probe(struct platform_device *pdev) 115 115 { 116 - const struct of_device_id *of_id = 117 - of_match_device(mmio_74xx_gpio_ids, &pdev->dev); 116 + const struct of_device_id *of_id; 118 117 struct mmio_74xx_gpio_priv *priv; 119 118 struct resource *res; 120 119 void __iomem *dat; 121 120 int err; 121 + 122 + of_id = of_match_device(mmio_74xx_gpio_ids, &pdev->dev); 123 + if (!of_id) 124 + return -ENODEV; 122 125 123 126 priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL); 124 127 if (!priv)
-2
drivers/gpio/gpio-omap.c
··· 1122 1122 /* MPUIO is a bit different, reading IRQ status clears it */ 1123 1123 if (bank->is_mpuio) { 1124 1124 irqc->irq_ack = dummy_irq_chip.irq_ack; 1125 - irqc->irq_mask = irq_gc_mask_set_bit; 1126 - irqc->irq_unmask = irq_gc_mask_clr_bit; 1127 1125 if (!bank->regs->wkup_en) 1128 1126 irqc->irq_set_wake = NULL; 1129 1127 }
+2
drivers/gpio/gpio-palmas.c
··· 167 167 const struct palmas_device_data *dev_data; 168 168 169 169 match = of_match_device(of_palmas_gpio_match, &pdev->dev); 170 + if (!match) 171 + return -ENODEV; 170 172 dev_data = match->data; 171 173 if (!dev_data) 172 174 dev_data = &palmas_dev_data;
+5 -1
drivers/gpio/gpio-syscon.c
··· 187 187 static int syscon_gpio_probe(struct platform_device *pdev) 188 188 { 189 189 struct device *dev = &pdev->dev; 190 - const struct of_device_id *of_id = of_match_device(syscon_gpio_ids, dev); 190 + const struct of_device_id *of_id; 191 191 struct syscon_gpio_priv *priv; 192 192 struct device_node *np = dev->of_node; 193 193 int ret; 194 + 195 + of_id = of_match_device(syscon_gpio_ids, dev); 196 + if (!of_id) 197 + return -ENODEV; 194 198 195 199 priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL); 196 200 if (!priv)
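gpio-74xx-mmio, gpio-palmas and gpio-syscon all gain the same guard: of_match_device() returns NULL when the device was not created from a matching device-tree node, so the result must be checked before its ->data is dereferenced. The pattern, with placeholder table and types (my_of_match, my_chip_data and my_setup are not from these drivers):

#include <linux/of_device.h>
#include <linux/platform_device.h>

static int my_probe(struct platform_device *pdev)
{
	const struct of_device_id *of_id;
	const struct my_chip_data *data;

	of_id = of_match_device(my_of_match, &pdev->dev);
	if (!of_id)
		return -ENODEV;		/* no DT node, or no match */

	data = of_id->data;		/* safe only after the check */
	return my_setup(pdev, data);	/* placeholder */
}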
+56 -49
drivers/gpio/gpio-tegra.c
··· 375 375 } 376 376 #endif 377 377 378 + #ifdef CONFIG_DEBUG_FS 379 + 380 + #include <linux/debugfs.h> 381 + #include <linux/seq_file.h> 382 + 383 + static int dbg_gpio_show(struct seq_file *s, void *unused) 384 + { 385 + int i; 386 + int j; 387 + 388 + for (i = 0; i < tegra_gpio_bank_count; i++) { 389 + for (j = 0; j < 4; j++) { 390 + int gpio = tegra_gpio_compose(i, j, 0); 391 + seq_printf(s, 392 + "%d:%d %02x %02x %02x %02x %02x %02x %06x\n", 393 + i, j, 394 + tegra_gpio_readl(GPIO_CNF(gpio)), 395 + tegra_gpio_readl(GPIO_OE(gpio)), 396 + tegra_gpio_readl(GPIO_OUT(gpio)), 397 + tegra_gpio_readl(GPIO_IN(gpio)), 398 + tegra_gpio_readl(GPIO_INT_STA(gpio)), 399 + tegra_gpio_readl(GPIO_INT_ENB(gpio)), 400 + tegra_gpio_readl(GPIO_INT_LVL(gpio))); 401 + } 402 + } 403 + return 0; 404 + } 405 + 406 + static int dbg_gpio_open(struct inode *inode, struct file *file) 407 + { 408 + return single_open(file, dbg_gpio_show, &inode->i_private); 409 + } 410 + 411 + static const struct file_operations debug_fops = { 412 + .open = dbg_gpio_open, 413 + .read = seq_read, 414 + .llseek = seq_lseek, 415 + .release = single_release, 416 + }; 417 + 418 + static void tegra_gpio_debuginit(void) 419 + { 420 + (void) debugfs_create_file("tegra_gpio", S_IRUGO, 421 + NULL, NULL, &debug_fops); 422 + } 423 + 424 + #else 425 + 426 + static inline void tegra_gpio_debuginit(void) 427 + { 428 + } 429 + 430 + #endif 431 + 378 432 static struct irq_chip tegra_gpio_irq_chip = { 379 433 .name = "GPIO", 380 434 .irq_ack = tegra_gpio_irq_ack, ··· 573 519 spin_lock_init(&bank->lvl_lock[j]); 574 520 } 575 521 522 + tegra_gpio_debuginit(); 523 + 576 524 return 0; 577 525 } 578 526 ··· 592 536 return platform_driver_register(&tegra_gpio_driver); 593 537 } 594 538 postcore_initcall(tegra_gpio_init); 595 - 596 - #ifdef CONFIG_DEBUG_FS 597 - 598 - #include <linux/debugfs.h> 599 - #include <linux/seq_file.h> 600 - 601 - static int dbg_gpio_show(struct seq_file *s, void *unused) 602 - { 603 - int i; 604 - int j; 605 - 606 - for (i = 0; i < tegra_gpio_bank_count; i++) { 607 - for (j = 0; j < 4; j++) { 608 - int gpio = tegra_gpio_compose(i, j, 0); 609 - seq_printf(s, 610 - "%d:%d %02x %02x %02x %02x %02x %02x %06x\n", 611 - i, j, 612 - tegra_gpio_readl(GPIO_CNF(gpio)), 613 - tegra_gpio_readl(GPIO_OE(gpio)), 614 - tegra_gpio_readl(GPIO_OUT(gpio)), 615 - tegra_gpio_readl(GPIO_IN(gpio)), 616 - tegra_gpio_readl(GPIO_INT_STA(gpio)), 617 - tegra_gpio_readl(GPIO_INT_ENB(gpio)), 618 - tegra_gpio_readl(GPIO_INT_LVL(gpio))); 619 - } 620 - } 621 - return 0; 622 - } 623 - 624 - static int dbg_gpio_open(struct inode *inode, struct file *file) 625 - { 626 - return single_open(file, dbg_gpio_show, &inode->i_private); 627 - } 628 - 629 - static const struct file_operations debug_fops = { 630 - .open = dbg_gpio_open, 631 - .read = seq_read, 632 - .llseek = seq_lseek, 633 - .release = single_release, 634 - }; 635 - 636 - static int __init tegra_gpio_debuginit(void) 637 - { 638 - (void) debugfs_create_file("tegra_gpio", S_IRUGO, 639 - NULL, NULL, &debug_fops); 640 - return 0; 641 - } 642 - late_initcall(tegra_gpio_debuginit); 643 - #endif
+1 -1
drivers/gpio/gpiolib.c
··· 233 233 for (i = 0; i != chip->ngpio; ++i) { 234 234 struct gpio_desc *gpio = &chip->desc[i]; 235 235 236 - if (!gpio->name) 236 + if (!gpio->name || !name) 237 237 continue; 238 238 239 239 if (!strcmp(gpio->name, name)) {
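The gpiolib hunk hardens a name-lookup loop: strcmp() on a NULL pointer is undefined behavior, and both unnamed GPIO lines and a NULL search string are possible, so either side being NULL now simply skips the entry. A runnable illustration of the guard:

#include <stdio.h>
#include <string.h>

static int name_matches(const char *a, const char *b)
{
	if (!a || !b)		/* strcmp(NULL, ...) would be UB */
		return 0;
	return strcmp(a, b) == 0;
}

int main(void)
{
	const char *names[] = { "led0", NULL, "button" };

	for (unsigned int i = 0; i < 3; i++)
		printf("%u: %d\n", i, name_matches(names[i], "button"));
	return 0;
}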
+59 -64
drivers/gpu/drm/amd/amdgpu/amdgpu.h
··· 389 389 * Fences. 390 390 */ 391 391 struct amdgpu_fence_driver { 392 - struct amdgpu_ring *ring; 393 392 uint64_t gpu_addr; 394 393 volatile uint32_t *cpu_addr; 395 394 /* sync_seq is protected by ring emission lock */ ··· 397 398 bool initialized; 398 399 struct amdgpu_irq_src *irq_src; 399 400 unsigned irq_type; 400 - struct delayed_work lockup_work; 401 + struct timer_list fallback_timer; 401 402 wait_queue_head_t fence_queue; 402 403 }; 403 404 ··· 496 497 497 498 /* bo virtual addresses in a specific vm */ 498 499 struct amdgpu_bo_va { 500 + struct mutex mutex; 499 501 /* protected by bo being reserved */ 500 502 struct list_head bo_list; 501 503 struct fence *last_pt_update; ··· 917 917 #define AMDGPU_VM_FAULT_STOP_ALWAYS 2 918 918 919 919 struct amdgpu_vm_pt { 920 - struct amdgpu_bo *bo; 921 - uint64_t addr; 920 + struct amdgpu_bo *bo; 921 + uint64_t addr; 922 922 }; 923 923 924 924 struct amdgpu_vm_id { ··· 926 926 uint64_t pd_gpu_addr; 927 927 /* last flushed PD/PT update */ 928 928 struct fence *flushed_updates; 929 - /* last use of vmid */ 930 - struct fence *last_id_use; 931 929 }; 932 930 933 931 struct amdgpu_vm { 934 - struct mutex mutex; 935 - 936 932 struct rb_root va; 937 933 938 934 /* protecting invalidated */ ··· 953 957 954 958 /* for id and flush management per ring */ 955 959 struct amdgpu_vm_id ids[AMDGPU_MAX_RINGS]; 960 + /* for interval tree */ 961 + spinlock_t it_lock; 956 962 }; 957 963 958 964 struct amdgpu_vm_manager { 959 - struct fence *active[AMDGPU_NUM_VM]; 960 - uint32_t max_pfn; 965 + struct { 966 + struct fence *active; 967 + atomic_long_t owner; 968 + } ids[AMDGPU_NUM_VM]; 969 + 970 + uint32_t max_pfn; 961 971 /* number of VMIDs */ 962 - unsigned nvm; 972 + unsigned nvm; 963 973 /* vram base address for page table entry */ 964 - u64 vram_base_offset; 974 + u64 vram_base_offset; 965 975 /* is vm enabled? 
*/ 966 - bool enabled; 967 - /* for hw to save the PD addr on suspend/resume */ 968 - uint32_t saved_table_addr[AMDGPU_NUM_VM]; 976 + bool enabled; 969 977 /* vm pte handling */ 970 978 const struct amdgpu_vm_pte_funcs *vm_pte_funcs; 971 979 struct amdgpu_ring *vm_pte_funcs_ring; 972 980 }; 981 + 982 + void amdgpu_vm_manager_fini(struct amdgpu_device *adev); 983 + int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm); 984 + void amdgpu_vm_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm); 985 + struct amdgpu_bo_list_entry *amdgpu_vm_get_bos(struct amdgpu_device *adev, 986 + struct amdgpu_vm *vm, 987 + struct list_head *head); 988 + int amdgpu_vm_grab_id(struct amdgpu_vm *vm, struct amdgpu_ring *ring, 989 + struct amdgpu_sync *sync); 990 + void amdgpu_vm_flush(struct amdgpu_ring *ring, 991 + struct amdgpu_vm *vm, 992 + struct fence *updates); 993 + void amdgpu_vm_fence(struct amdgpu_device *adev, 994 + struct amdgpu_vm *vm, 995 + struct fence *fence); 996 + uint64_t amdgpu_vm_map_gart(struct amdgpu_device *adev, uint64_t addr); 997 + int amdgpu_vm_update_page_directory(struct amdgpu_device *adev, 998 + struct amdgpu_vm *vm); 999 + int amdgpu_vm_clear_freed(struct amdgpu_device *adev, 1000 + struct amdgpu_vm *vm); 1001 + int amdgpu_vm_clear_invalids(struct amdgpu_device *adev, struct amdgpu_vm *vm, 1002 + struct amdgpu_sync *sync); 1003 + int amdgpu_vm_bo_update(struct amdgpu_device *adev, 1004 + struct amdgpu_bo_va *bo_va, 1005 + struct ttm_mem_reg *mem); 1006 + void amdgpu_vm_bo_invalidate(struct amdgpu_device *adev, 1007 + struct amdgpu_bo *bo); 1008 + struct amdgpu_bo_va *amdgpu_vm_bo_find(struct amdgpu_vm *vm, 1009 + struct amdgpu_bo *bo); 1010 + struct amdgpu_bo_va *amdgpu_vm_bo_add(struct amdgpu_device *adev, 1011 + struct amdgpu_vm *vm, 1012 + struct amdgpu_bo *bo); 1013 + int amdgpu_vm_bo_map(struct amdgpu_device *adev, 1014 + struct amdgpu_bo_va *bo_va, 1015 + uint64_t addr, uint64_t offset, 1016 + uint64_t size, uint32_t flags); 1017 + int amdgpu_vm_bo_unmap(struct amdgpu_device *adev, 1018 + struct amdgpu_bo_va *bo_va, 1019 + uint64_t addr); 1020 + void amdgpu_vm_bo_rmv(struct amdgpu_device *adev, 1021 + struct amdgpu_bo_va *bo_va); 1022 + int amdgpu_vm_free_job(struct amdgpu_job *job); 973 1023 974 1024 /* 975 1025 * context related structures ··· 1253 1211 /* relocations */ 1254 1212 struct amdgpu_bo_list_entry *vm_bos; 1255 1213 struct list_head validated; 1214 + struct fence *fence; 1256 1215 1257 1216 struct amdgpu_ib *ibs; 1258 1217 uint32_t num_ibs; ··· 1269 1226 struct amdgpu_device *adev; 1270 1227 struct amdgpu_ib *ibs; 1271 1228 uint32_t num_ibs; 1272 - struct mutex job_lock; 1229 + void *owner; 1273 1230 struct amdgpu_user_fence uf; 1274 1231 int (*free_job)(struct amdgpu_job *job); 1275 1232 }; ··· 2300 2257 bool amdgpu_card_posted(struct amdgpu_device *adev); 2301 2258 void amdgpu_update_display_priority(struct amdgpu_device *adev); 2302 2259 bool amdgpu_boot_test_post_card(struct amdgpu_device *adev); 2303 - struct amdgpu_cs_parser *amdgpu_cs_parser_create(struct amdgpu_device *adev, 2304 - struct drm_file *filp, 2305 - struct amdgpu_ctx *ctx, 2306 - struct amdgpu_ib *ibs, 2307 - uint32_t num_ibs); 2308 2260 2309 2261 int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, void *data); 2310 2262 int amdgpu_cs_get_ring(struct amdgpu_device *adev, u32 ip_type, ··· 2356 2318 long amdgpu_kms_compat_ioctl(struct file *filp, unsigned int cmd, 2357 2319 unsigned long arg); 2358 2320 2359 - /* 2360 - * vm 2361 - */ 2362 - int amdgpu_vm_init(struct 
amdgpu_device *adev, struct amdgpu_vm *vm); 2363 - void amdgpu_vm_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm); 2364 - struct amdgpu_bo_list_entry *amdgpu_vm_get_bos(struct amdgpu_device *adev, 2365 - struct amdgpu_vm *vm, 2366 - struct list_head *head); 2367 - int amdgpu_vm_grab_id(struct amdgpu_vm *vm, struct amdgpu_ring *ring, 2368 - struct amdgpu_sync *sync); 2369 - void amdgpu_vm_flush(struct amdgpu_ring *ring, 2370 - struct amdgpu_vm *vm, 2371 - struct fence *updates); 2372 - void amdgpu_vm_fence(struct amdgpu_device *adev, 2373 - struct amdgpu_vm *vm, 2374 - struct amdgpu_fence *fence); 2375 - uint64_t amdgpu_vm_map_gart(struct amdgpu_device *adev, uint64_t addr); 2376 - int amdgpu_vm_update_page_directory(struct amdgpu_device *adev, 2377 - struct amdgpu_vm *vm); 2378 - int amdgpu_vm_clear_freed(struct amdgpu_device *adev, 2379 - struct amdgpu_vm *vm); 2380 - int amdgpu_vm_clear_invalids(struct amdgpu_device *adev, 2381 - struct amdgpu_vm *vm, struct amdgpu_sync *sync); 2382 - int amdgpu_vm_bo_update(struct amdgpu_device *adev, 2383 - struct amdgpu_bo_va *bo_va, 2384 - struct ttm_mem_reg *mem); 2385 - void amdgpu_vm_bo_invalidate(struct amdgpu_device *adev, 2386 - struct amdgpu_bo *bo); 2387 - struct amdgpu_bo_va *amdgpu_vm_bo_find(struct amdgpu_vm *vm, 2388 - struct amdgpu_bo *bo); 2389 - struct amdgpu_bo_va *amdgpu_vm_bo_add(struct amdgpu_device *adev, 2390 - struct amdgpu_vm *vm, 2391 - struct amdgpu_bo *bo); 2392 - int amdgpu_vm_bo_map(struct amdgpu_device *adev, 2393 - struct amdgpu_bo_va *bo_va, 2394 - uint64_t addr, uint64_t offset, 2395 - uint64_t size, uint32_t flags); 2396 - int amdgpu_vm_bo_unmap(struct amdgpu_device *adev, 2397 - struct amdgpu_bo_va *bo_va, 2398 - uint64_t addr); 2399 - void amdgpu_vm_bo_rmv(struct amdgpu_device *adev, 2400 - struct amdgpu_bo_va *bo_va); 2401 - int amdgpu_vm_free_job(struct amdgpu_job *job); 2402 2321 /* 2403 2322 * functions used by amdgpu_encoder.c 2404 2323 */
+72 -109
drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
··· 127 127 return 0; 128 128 } 129 129 130 - struct amdgpu_cs_parser *amdgpu_cs_parser_create(struct amdgpu_device *adev, 131 - struct drm_file *filp, 132 - struct amdgpu_ctx *ctx, 133 - struct amdgpu_ib *ibs, 134 - uint32_t num_ibs) 135 - { 136 - struct amdgpu_cs_parser *parser; 137 - int i; 138 - 139 - parser = kzalloc(sizeof(struct amdgpu_cs_parser), GFP_KERNEL); 140 - if (!parser) 141 - return NULL; 142 - 143 - parser->adev = adev; 144 - parser->filp = filp; 145 - parser->ctx = ctx; 146 - parser->ibs = ibs; 147 - parser->num_ibs = num_ibs; 148 - for (i = 0; i < num_ibs; i++) 149 - ibs[i].ctx = ctx; 150 - 151 - return parser; 152 - } 153 - 154 130 int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, void *data) 155 131 { 156 132 union drm_amdgpu_cs *cs = data; ··· 439 463 return (int)la->robj->tbo.num_pages - (int)lb->robj->tbo.num_pages; 440 464 } 441 465 442 - static void amdgpu_cs_parser_fini_early(struct amdgpu_cs_parser *parser, int error, bool backoff) 466 + /** 467 + * cs_parser_fini() - clean parser states 468 + * @parser: parser structure holding parsing context. 469 + * @error: error number 470 + * 471 + * If error is set, then unreserve the buffers; otherwise just free the memory 472 + * used by the parsing context. 473 + **/ 474 + static void amdgpu_cs_parser_fini(struct amdgpu_cs_parser *parser, int error, bool backoff) 443 475 { 476 + unsigned i; 477 + 444 478 if (!error) { 445 479 /* Sort the buffer list from the smallest to largest buffer, 446 480 * which affects the order of buffers in the LRU list. ··· 465 479 list_sort(NULL, &parser->validated, cmp_size_smaller_first); 466 480 467 481 ttm_eu_fence_buffer_objects(&parser->ticket, 468 - &parser->validated, 469 - &parser->ibs[parser->num_ibs-1].fence->base); 482 + &parser->validated, 483 + parser->fence); 470 484 } else if (backoff) { 471 485 ttm_eu_backoff_reservation(&parser->ticket, 472 486 &parser->validated); 473 487 } 474 - } 488 + fence_put(parser->fence); 475 489 476 - static void amdgpu_cs_parser_fini_late(struct amdgpu_cs_parser *parser) 477 - { 478 - unsigned i; 479 490 if (parser->ctx) 480 491 amdgpu_ctx_put(parser->ctx); 481 492 if (parser->bo_list) ··· 482 499 for (i = 0; i < parser->nchunks; i++) 483 500 drm_free_large(parser->chunks[i].kdata); 484 501 kfree(parser->chunks); 485 - if (!amdgpu_enable_scheduler) 486 - { 487 - if (parser->ibs) 488 - for (i = 0; i < parser->num_ibs; i++) 489 - amdgpu_ib_free(parser->adev, &parser->ibs[i]); 490 - kfree(parser->ibs); 491 - if (parser->uf.bo) 492 - drm_gem_object_unreference_unlocked(&parser->uf.bo->gem_base); 493 - } 494 - 495 - kfree(parser); 496 - } 497 - 498 - /** 499 - * cs_parser_fini() - clean parser states 500 - * @parser: parser structure holding parsing context. 501 - * @error: error number 502 - * 503 - * If error is set than unvalidate buffer, otherwise just free memory 504 - * used by parsing context.
505 - **/ 506 - static void amdgpu_cs_parser_fini(struct amdgpu_cs_parser *parser, int error, bool backoff) 507 - { 508 - amdgpu_cs_parser_fini_early(parser, error, backoff); 509 - amdgpu_cs_parser_fini_late(parser); 502 + if (parser->ibs) 503 + for (i = 0; i < parser->num_ibs; i++) 504 + amdgpu_ib_free(parser->adev, &parser->ibs[i]); 505 + kfree(parser->ibs); 506 + if (parser->uf.bo) 507 + drm_gem_object_unreference_unlocked(&parser->uf.bo->gem_base); 510 508 } 511 509 512 510 static int amdgpu_bo_vm_update_pte(struct amdgpu_cs_parser *p, ··· 574 610 } 575 611 576 612 r = amdgpu_bo_vm_update_pte(parser, vm); 577 - if (r) { 578 - goto out; 579 - } 580 - amdgpu_cs_sync_rings(parser); 581 - if (!amdgpu_enable_scheduler) 582 - r = amdgpu_ib_schedule(adev, parser->num_ibs, parser->ibs, 583 - parser->filp); 613 + if (!r) 614 + amdgpu_cs_sync_rings(parser); 584 615 585 - out: 586 616 return r; 587 617 } 588 618 ··· 784 826 { 785 827 struct amdgpu_device *adev = dev->dev_private; 786 828 union drm_amdgpu_cs *cs = data; 787 - struct amdgpu_fpriv *fpriv = filp->driver_priv; 788 - struct amdgpu_vm *vm = &fpriv->vm; 789 - struct amdgpu_cs_parser *parser; 829 + struct amdgpu_cs_parser parser = {}; 790 830 bool reserved_buffers = false; 791 831 int i, r; 792 832 793 833 if (!adev->accel_working) 794 834 return -EBUSY; 795 835 796 - parser = amdgpu_cs_parser_create(adev, filp, NULL, NULL, 0); 797 - if (!parser) 798 - return -ENOMEM; 799 - r = amdgpu_cs_parser_init(parser, data); 836 + parser.adev = adev; 837 + parser.filp = filp; 838 + 839 + r = amdgpu_cs_parser_init(&parser, data); 800 840 if (r) { 801 841 DRM_ERROR("Failed to initialize parser !\n"); 802 - amdgpu_cs_parser_fini(parser, r, false); 842 + amdgpu_cs_parser_fini(&parser, r, false); 803 843 r = amdgpu_cs_handle_lockup(adev, r); 804 844 return r; 805 845 } 806 - mutex_lock(&vm->mutex); 807 - r = amdgpu_cs_parser_relocs(parser); 846 + r = amdgpu_cs_parser_relocs(&parser); 808 847 if (r == -ENOMEM) 809 848 DRM_ERROR("Not enough memory for command submission!\n"); 810 849 else if (r && r != -ERESTARTSYS) 811 850 DRM_ERROR("Failed to process the buffer list %d!\n", r); 812 851 else if (!r) { 813 852 reserved_buffers = true; 814 - r = amdgpu_cs_ib_fill(adev, parser); 853 + r = amdgpu_cs_ib_fill(adev, &parser); 815 854 } 816 855 817 856 if (!r) { 818 - r = amdgpu_cs_dependencies(adev, parser); 857 + r = amdgpu_cs_dependencies(adev, &parser); 819 858 if (r) 820 859 DRM_ERROR("Failed in the dependencies handling %d!\n", r); 821 860 } ··· 820 865 if (r) 821 866 goto out; 822 867 823 - for (i = 0; i < parser->num_ibs; i++) 824 - trace_amdgpu_cs(parser, i); 868 + for (i = 0; i < parser.num_ibs; i++) 869 + trace_amdgpu_cs(&parser, i); 825 870 826 - r = amdgpu_cs_ib_vm_chunk(adev, parser); 871 + r = amdgpu_cs_ib_vm_chunk(adev, &parser); 827 872 if (r) 828 873 goto out; 829 874 830 - if (amdgpu_enable_scheduler && parser->num_ibs) { 875 + if (amdgpu_enable_scheduler && parser.num_ibs) { 876 + struct amdgpu_ring * ring = parser.ibs->ring; 877 + struct amd_sched_fence *fence; 831 878 struct amdgpu_job *job; 832 - struct amdgpu_ring * ring = parser->ibs->ring; 879 + 833 880 job = kzalloc(sizeof(struct amdgpu_job), GFP_KERNEL); 834 881 if (!job) { 835 882 r = -ENOMEM; 836 883 goto out; 837 884 } 885 + 838 886 job->base.sched = &ring->sched; 839 - job->base.s_entity = &parser->ctx->rings[ring->idx].entity; 840 - job->adev = parser->adev; 841 - job->ibs = parser->ibs; 842 - job->num_ibs = parser->num_ibs; 843 - job->base.owner = parser->filp; 844 - 
mutex_init(&job->job_lock); 887 + job->base.s_entity = &parser.ctx->rings[ring->idx].entity; 888 + job->adev = parser.adev; 889 + job->owner = parser.filp; 890 + job->free_job = amdgpu_cs_free_job; 891 + 892 + job->ibs = parser.ibs; 893 + job->num_ibs = parser.num_ibs; 894 + parser.ibs = NULL; 895 + parser.num_ibs = 0; 896 + 845 897 if (job->ibs[job->num_ibs - 1].user) { 846 - memcpy(&job->uf, &parser->uf, 847 - sizeof(struct amdgpu_user_fence)); 898 + job->uf = parser.uf; 848 899 job->ibs[job->num_ibs - 1].user = &job->uf; 900 + parser.uf.bo = NULL; 849 901 } 850 902 851 - job->free_job = amdgpu_cs_free_job; 852 - mutex_lock(&job->job_lock); 853 - r = amd_sched_entity_push_job(&job->base); 854 - if (r) { 855 - mutex_unlock(&job->job_lock); 903 + fence = amd_sched_fence_create(job->base.s_entity, 904 + parser.filp); 905 + if (!fence) { 906 + r = -ENOMEM; 856 907 amdgpu_cs_free_job(job); 857 908 kfree(job); 858 909 goto out; 859 910 } 860 - cs->out.handle = 861 - amdgpu_ctx_add_fence(parser->ctx, ring, 862 - &job->base.s_fence->base); 863 - parser->ibs[parser->num_ibs - 1].sequence = cs->out.handle; 911 + job->base.s_fence = fence; 912 + parser.fence = fence_get(&fence->base); 864 913 865 - list_sort(NULL, &parser->validated, cmp_size_smaller_first); 866 - ttm_eu_fence_buffer_objects(&parser->ticket, 867 - &parser->validated, 868 - &job->base.s_fence->base); 914 + cs->out.handle = amdgpu_ctx_add_fence(parser.ctx, ring, 915 + &fence->base); 916 + job->ibs[job->num_ibs - 1].sequence = cs->out.handle; 869 917 870 - mutex_unlock(&job->job_lock); 871 - amdgpu_cs_parser_fini_late(parser); 872 - mutex_unlock(&vm->mutex); 873 - return 0; 918 + trace_amdgpu_cs_ioctl(job); 919 + amd_sched_entity_push_job(&job->base); 920 + 921 + } else { 922 + struct amdgpu_fence *fence; 923 + 924 + r = amdgpu_ib_schedule(adev, parser.num_ibs, parser.ibs, 925 + parser.filp); 926 + fence = parser.ibs[parser.num_ibs - 1].fence; 927 + parser.fence = fence_get(&fence->base); 928 + cs->out.handle = parser.ibs[parser.num_ibs - 1].sequence; 874 929 } 875 930 876 - cs->out.handle = parser->ibs[parser->num_ibs - 1].sequence; 877 931 out: 878 - amdgpu_cs_parser_fini(parser, r, reserved_buffers); 879 - mutex_unlock(&vm->mutex); 932 + amdgpu_cs_parser_fini(&parser, r, reserved_buffers); 880 933 r = amdgpu_cs_handle_lockup(adev, r); 881 934 return r; 882 935 }
+55 -48
drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
··· 47 47 * that the relevant GPU caches have been flushed. 48 48 */ 49 49 50 + static struct kmem_cache *amdgpu_fence_slab; 51 + static atomic_t amdgpu_fence_slab_ref = ATOMIC_INIT(0); 52 + 50 53 /** 51 54 * amdgpu_fence_write - write a fence value 52 55 * ··· 88 85 } 89 86 90 87 /** 91 - * amdgpu_fence_schedule_check - schedule lockup check 92 - * 93 - * @ring: pointer to struct amdgpu_ring 94 - * 95 - * Queues a delayed work item to check for lockups. 96 - */ 97 - static void amdgpu_fence_schedule_check(struct amdgpu_ring *ring) 98 - { 99 - /* 100 - * Do not reset the timer here with mod_delayed_work, 101 - * this can livelock in an interaction with TTM delayed destroy. 102 - */ 103 - queue_delayed_work(system_power_efficient_wq, 104 - &ring->fence_drv.lockup_work, 105 - AMDGPU_FENCE_JIFFIES_TIMEOUT); 106 - } 107 - 108 - /** 109 88 * amdgpu_fence_emit - emit a fence on the requested ring 110 89 * 111 90 * @ring: ring the fence is associated with ··· 103 118 struct amdgpu_device *adev = ring->adev; 104 119 105 120 /* we are protected by the ring emission mutex */ 106 - *fence = kmalloc(sizeof(struct amdgpu_fence), GFP_KERNEL); 121 + *fence = kmem_cache_alloc(amdgpu_fence_slab, GFP_KERNEL); 107 122 if ((*fence) == NULL) { 108 123 return -ENOMEM; 109 124 } ··· 117 132 amdgpu_ring_emit_fence(ring, ring->fence_drv.gpu_addr, 118 133 (*fence)->seq, 119 134 AMDGPU_FENCE_FLAG_INT); 120 - trace_amdgpu_fence_emit(ring->adev->ddev, ring->idx, (*fence)->seq); 121 135 return 0; 136 + } 137 + 138 + /** 139 + * amdgpu_fence_schedule_fallback - schedule fallback check 140 + * 141 + * @ring: pointer to struct amdgpu_ring 142 + * 143 + * Start a timer as fallback to our interrupts. 144 + */ 145 + static void amdgpu_fence_schedule_fallback(struct amdgpu_ring *ring) 146 + { 147 + mod_timer(&ring->fence_drv.fallback_timer, 148 + jiffies + AMDGPU_FENCE_JIFFIES_TIMEOUT); 122 149 } 123 150 124 151 /** ··· 199 202 } while (atomic64_xchg(&ring->fence_drv.last_seq, seq) > seq); 200 203 201 204 if (seq < last_emitted) 202 - amdgpu_fence_schedule_check(ring); 205 + amdgpu_fence_schedule_fallback(ring); 203 206 204 207 return wake; 205 - } 206 - 207 - /** 208 - * amdgpu_fence_check_lockup - check for hardware lockup 209 - * 210 - * @work: delayed work item 211 - * 212 - * Checks for fence activity and if there is none probe 213 - * the hardware if a lockup occured. 214 - */ 215 - static void amdgpu_fence_check_lockup(struct work_struct *work) 216 - { 217 - struct amdgpu_fence_driver *fence_drv; 218 - struct amdgpu_ring *ring; 219 - 220 - fence_drv = container_of(work, struct amdgpu_fence_driver, 221 - lockup_work.work); 222 - ring = fence_drv->ring; 223 - 224 - if (amdgpu_fence_activity(ring)) 225 - wake_up_all(&ring->fence_drv.fence_queue); 226 208 } 227 209 228 210 /** ··· 217 241 { 218 242 if (amdgpu_fence_activity(ring)) 219 243 wake_up_all(&ring->fence_drv.fence_queue); 244 + } 245 + 246 + /** 247 + * amdgpu_fence_fallback - fallback for hardware interrupts 248 + * 249 + * @arg: pointer to the ring, passed as an unsigned long 250 + * 251 + * Checks for fence activity.
252 + */ 253 + static void amdgpu_fence_fallback(unsigned long arg) 254 + { 255 + struct amdgpu_ring *ring = (void *)arg; 256 + 257 + amdgpu_fence_process(ring); 220 258 } 221 259 222 260 /** ··· 280 290 if (atomic64_read(&ring->fence_drv.last_seq) >= seq) 281 291 return 0; 282 292 283 - amdgpu_fence_schedule_check(ring); 293 + amdgpu_fence_schedule_fallback(ring); 284 294 wait_event(ring->fence_drv.fence_queue, ( 285 295 (signaled = amdgpu_fence_seq_signaled(ring, seq)))); 286 296 ··· 481 491 atomic64_set(&ring->fence_drv.last_seq, 0); 482 492 ring->fence_drv.initialized = false; 483 493 484 - INIT_DELAYED_WORK(&ring->fence_drv.lockup_work, 485 - amdgpu_fence_check_lockup); 486 - ring->fence_drv.ring = ring; 494 + setup_timer(&ring->fence_drv.fallback_timer, amdgpu_fence_fallback, 495 + (unsigned long)ring); 487 496 488 497 init_waitqueue_head(&ring->fence_drv.fence_queue); 489 498 ··· 525 536 */ 526 537 int amdgpu_fence_driver_init(struct amdgpu_device *adev) 527 538 { 539 + if (atomic_inc_return(&amdgpu_fence_slab_ref) == 1) { 540 + amdgpu_fence_slab = kmem_cache_create( 541 + "amdgpu_fence", sizeof(struct amdgpu_fence), 0, 542 + SLAB_HWCACHE_ALIGN, NULL); 543 + if (!amdgpu_fence_slab) 544 + return -ENOMEM; 545 + } 528 546 if (amdgpu_debugfs_fence_init(adev)) 529 547 dev_err(adev->dev, "fence debugfs file creation failed\n"); 530 548 ··· 550 554 { 551 555 int i, r; 552 556 557 + if (atomic_dec_and_test(&amdgpu_fence_slab_ref)) 558 + kmem_cache_destroy(amdgpu_fence_slab); 553 559 mutex_lock(&adev->ring_lock); 554 560 for (i = 0; i < AMDGPU_MAX_RINGS; i++) { 555 561 struct amdgpu_ring *ring = adev->rings[i]; 562 + 556 563 if (!ring || !ring->fence_drv.initialized) 557 564 continue; 558 565 r = amdgpu_fence_wait_empty(ring); ··· 567 568 amdgpu_irq_put(adev, ring->fence_drv.irq_src, 568 569 ring->fence_drv.irq_type); 569 570 amd_sched_fini(&ring->sched); 571 + del_timer_sync(&ring->fence_drv.fallback_timer); 570 572 ring->fence_drv.initialized = false; 571 573 } 572 574 mutex_unlock(&adev->ring_lock); ··· 751 751 fence->fence_wake.func = amdgpu_fence_check_signaled; 752 752 __add_wait_queue(&ring->fence_drv.fence_queue, &fence->fence_wake); 753 753 fence_get(f); 754 - amdgpu_fence_schedule_check(ring); 754 + if (!timer_pending(&ring->fence_drv.fallback_timer)) 755 + amdgpu_fence_schedule_fallback(ring); 755 756 FENCE_TRACE(&fence->base, "armed on ring %i!\n", ring->idx); 756 757 return true; 758 + } 759 + 760 + static void amdgpu_fence_release(struct fence *f) 761 + { 762 + struct amdgpu_fence *fence = to_amdgpu_fence(f); 763 + kmem_cache_free(amdgpu_fence_slab, fence); 757 764 } 758 765 759 766 const struct fence_ops amdgpu_fence_ops = { ··· 769 762 .enable_signaling = amdgpu_fence_enable_signaling, 770 763 .signaled = amdgpu_fence_is_signaled, 771 764 .wait = fence_default_wait, 772 - .release = NULL, 765 + .release = amdgpu_fence_release, 773 766 }; 774 767 775 768 /*
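amdgpu_fence.c above swaps kmalloc for a dedicated slab cache shared by all devices: the first amdgpu_fence_driver_init() creates it (guarded by amdgpu_fence_slab_ref), the last teardown destroys it, and the new fence .release callback returns objects to the cache. The lifecycle in isolation, with placeholder names (my_fence and friends are not from the driver):

#include <linux/types.h>
#include <linux/slab.h>
#include <linux/atomic.h>

struct my_fence { u64 seq; };			/* placeholder payload */

static struct kmem_cache *my_fence_slab;
static atomic_t my_fence_slab_ref = ATOMIC_INIT(0);

static int my_fence_init(void)
{
	/* First user creates the shared slab. */
	if (atomic_inc_return(&my_fence_slab_ref) == 1) {
		my_fence_slab = kmem_cache_create("my_fence",
				sizeof(struct my_fence), 0,
				SLAB_HWCACHE_ALIGN, NULL);
		if (!my_fence_slab)
			return -ENOMEM;
	}
	return 0;
}

static struct my_fence *my_fence_alloc(void)
{
	return kmem_cache_alloc(my_fence_slab, GFP_KERNEL);
}

static void my_fence_free(struct my_fence *f)
{
	kmem_cache_free(my_fence_slab, f);	/* e.g. from a .release hook */
}

static void my_fence_exit(void)
{
	/* Last user tears the slab down. */
	if (atomic_dec_and_test(&my_fence_slab_ref))
		kmem_cache_destroy(my_fence_slab);
}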
+23 -15
drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
··· 115 115 struct amdgpu_vm *vm = &fpriv->vm; 116 116 struct amdgpu_bo_va *bo_va; 117 117 int r; 118 - mutex_lock(&vm->mutex); 119 118 r = amdgpu_bo_reserve(rbo, false); 120 - if (r) { 121 - mutex_unlock(&vm->mutex); 119 + if (r) 122 120 return r; 123 - } 124 121 125 122 bo_va = amdgpu_vm_bo_find(vm, rbo); 126 123 if (!bo_va) { ··· 126 129 ++bo_va->ref_count; 127 130 } 128 131 amdgpu_bo_unreserve(rbo); 129 - mutex_unlock(&vm->mutex); 130 132 return 0; 131 133 } 132 134 ··· 138 142 struct amdgpu_vm *vm = &fpriv->vm; 139 143 struct amdgpu_bo_va *bo_va; 140 144 int r; 141 - mutex_lock(&vm->mutex); 142 145 r = amdgpu_bo_reserve(rbo, true); 143 146 if (r) { 144 - mutex_unlock(&vm->mutex); 145 147 dev_err(adev->dev, "leaking bo va because " 146 148 "we fail to reserve bo (%d)\n", r); 147 149 return; ··· 151 157 } 152 158 } 153 159 amdgpu_bo_unreserve(rbo); 154 - mutex_unlock(&vm->mutex); 155 160 } 156 161 157 162 static int amdgpu_gem_handle_lockup(struct amdgpu_device *adev, int r) ··· 476 483 if (domain == AMDGPU_GEM_DOMAIN_CPU) 477 484 goto error_unreserve; 478 485 } 486 + r = amdgpu_vm_update_page_directory(adev, bo_va->vm); 487 + if (r) 488 + goto error_unreserve; 479 489 480 490 r = amdgpu_vm_clear_freed(adev, bo_va->vm); 481 491 if (r) ··· 508 512 struct amdgpu_fpriv *fpriv = filp->driver_priv; 509 513 struct amdgpu_bo *rbo; 510 514 struct amdgpu_bo_va *bo_va; 515 + struct ttm_validate_buffer tv, tv_pd; 516 + struct ww_acquire_ctx ticket; 517 + struct list_head list, duplicates; 511 518 uint32_t invalid_flags, va_flags = 0; 512 519 int r = 0; 513 520 ··· 546 547 gobj = drm_gem_object_lookup(dev, filp, args->handle); 547 548 if (gobj == NULL) 548 549 return -ENOENT; 549 - mutex_lock(&fpriv->vm.mutex); 550 550 rbo = gem_to_amdgpu_bo(gobj); 551 - r = amdgpu_bo_reserve(rbo, false); 551 + INIT_LIST_HEAD(&list); 552 + INIT_LIST_HEAD(&duplicates); 553 + tv.bo = &rbo->tbo; 554 + tv.shared = true; 555 + list_add(&tv.head, &list); 556 + 557 + if (args->operation == AMDGPU_VA_OP_MAP) { 558 + tv_pd.bo = &fpriv->vm.page_directory->tbo; 559 + tv_pd.shared = true; 560 + list_add(&tv_pd.head, &list); 561 + } 562 + r = ttm_eu_reserve_buffers(&ticket, &list, true, &duplicates); 552 563 if (r) { 553 - mutex_unlock(&fpriv->vm.mutex); 554 564 drm_gem_object_unreference_unlocked(gobj); 555 565 return r; 556 566 } 557 567 558 568 bo_va = amdgpu_vm_bo_find(&fpriv->vm, rbo); 559 569 if (!bo_va) { 560 - amdgpu_bo_unreserve(rbo); 561 - mutex_unlock(&fpriv->vm.mutex); 570 + ttm_eu_backoff_reservation(&ticket, &list); 571 + drm_gem_object_unreference_unlocked(gobj); 562 572 return -ENOENT; 563 573 } 564 574 ··· 589 581 default: 590 582 break; 591 583 } 592 - 584 + ttm_eu_backoff_reservation(&ticket, &list); 593 585 if (!r && !(args->flags & AMDGPU_VM_DELAY_UPDATE)) 594 586 amdgpu_gem_va_update_vm(adev, bo_va, args->operation); 595 - mutex_unlock(&fpriv->vm.mutex); 587 + 596 588 drm_gem_object_unreference_unlocked(gobj); 597 589 return r; 598 590 }
+2 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
··· 62 62 int r; 63 63 64 64 if (size) { 65 - r = amdgpu_sa_bo_new(adev, &adev->ring_tmp_bo, 65 + r = amdgpu_sa_bo_new(&adev->ring_tmp_bo, 66 66 &ib->sa_bo, size, 256); 67 67 if (r) { 68 68 dev_err(adev->dev, "failed to get a new IB (%d)\n", r); ··· 216 216 } 217 217 218 218 if (ib->vm) 219 - amdgpu_vm_fence(adev, ib->vm, ib->fence); 219 + amdgpu_vm_fence(adev, ib->vm, &ib->fence->base); 220 220 221 221 amdgpu_ring_unlock_commit(ring); 222 222 return 0;
+3 -4
drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
··· 189 189 struct amdgpu_sa_manager *sa_manager); 190 190 int amdgpu_sa_bo_manager_suspend(struct amdgpu_device *adev, 191 191 struct amdgpu_sa_manager *sa_manager); 192 - int amdgpu_sa_bo_new(struct amdgpu_device *adev, 193 - struct amdgpu_sa_manager *sa_manager, 194 - struct amdgpu_sa_bo **sa_bo, 195 - unsigned size, unsigned align); 192 + int amdgpu_sa_bo_new(struct amdgpu_sa_manager *sa_manager, 193 + struct amdgpu_sa_bo **sa_bo, 194 + unsigned size, unsigned align); 196 195 void amdgpu_sa_bo_free(struct amdgpu_device *adev, 197 196 struct amdgpu_sa_bo **sa_bo, 198 197 struct fence *fence);
+1 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_sa.c
··· 311 311 return false; 312 312 } 313 313 314 - int amdgpu_sa_bo_new(struct amdgpu_device *adev, 315 - struct amdgpu_sa_manager *sa_manager, 314 + int amdgpu_sa_bo_new(struct amdgpu_sa_manager *sa_manager, 316 315 struct amdgpu_sa_bo **sa_bo, 317 316 unsigned size, unsigned align) 318 317 {
+12 -18
drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
··· 26 26 #include <linux/sched.h> 27 27 #include <drm/drmP.h> 28 28 #include "amdgpu.h" 29 + #include "amdgpu_trace.h" 29 30 30 31 static struct fence *amdgpu_sched_dependency(struct amd_sched_job *sched_job) 31 32 { ··· 45 44 return NULL; 46 45 } 47 46 job = to_amdgpu_job(sched_job); 48 - mutex_lock(&job->job_lock); 49 - r = amdgpu_ib_schedule(job->adev, 50 - job->num_ibs, 51 - job->ibs, 52 - job->base.owner); 47 + trace_amdgpu_sched_run_job(job); 48 + r = amdgpu_ib_schedule(job->adev, job->num_ibs, job->ibs, job->owner); 53 49 if (r) { 54 50 DRM_ERROR("Error scheduling IBs (%d)\n", r); 55 51 goto err; ··· 59 61 if (job->free_job) 60 62 job->free_job(job); 61 63 62 - mutex_unlock(&job->job_lock); 63 - fence_put(&job->base.s_fence->base); 64 64 kfree(job); 65 65 return fence ? &fence->base : NULL; 66 66 } ··· 84 88 return -ENOMEM; 85 89 job->base.sched = &ring->sched; 86 90 job->base.s_entity = &adev->kernel_ctx.rings[ring->idx].entity; 91 + job->base.s_fence = amd_sched_fence_create(job->base.s_entity, owner); 92 + if (!job->base.s_fence) { 93 + kfree(job); 94 + return -ENOMEM; 95 + } 96 + *f = fence_get(&job->base.s_fence->base); 97 + 87 98 job->adev = adev; 88 99 job->ibs = ibs; 89 100 job->num_ibs = num_ibs; 90 - job->base.owner = owner; 91 - mutex_init(&job->job_lock); 101 + job->owner = owner; 92 102 job->free_job = free_job; 93 - mutex_lock(&job->job_lock); 94 - r = amd_sched_entity_push_job(&job->base); 95 - if (r) { 96 - mutex_unlock(&job->job_lock); 97 - kfree(job); 98 - return r; 99 - } 100 - *f = fence_get(&job->base.s_fence->base); 101 - mutex_unlock(&job->job_lock); 103 + amd_sched_entity_push_job(&job->base); 102 104 } else { 103 105 r = amdgpu_ib_schedule(adev, num_ibs, ibs, owner); 104 106 if (r)
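The job-submission reorder above follows an allocate-then-publish shape: the scheduler fence, the only fallible allocation left, is created before the job is pushed, so amd_sched_entity_push_job() can become void and the per-job mutex disappears. The general pattern, with hypothetical names standing in for the driver's:

        struct job *j = kzalloc(sizeof(*j), GFP_KERNEL);
        if (!j)
                return -ENOMEM;

        j->fence = job_fence_create(ctx);       /* last fallible step */
        if (!j->fence) {
                kfree(j);                       /* nothing published yet */
                return -ENOMEM;
        }

        queue_push(q, j);                       /* infallible, no unwind path */

Taking the caller's reference first (*f = fence_get(...) in the hunk) is part of the same discipline: once pushed, the job may run and free itself at any moment, so nothing may touch it afterwards.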
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_semaphore.c
··· 40 40 if (*semaphore == NULL) { 41 41 return -ENOMEM; 42 42 } 43 - r = amdgpu_sa_bo_new(adev, &adev->ring_tmp_bo, 43 + r = amdgpu_sa_bo_new(&adev->ring_tmp_bo, 44 44 &(*semaphore)->sa_bo, 8, 8); 45 45 if (r) { 46 46 kfree(*semaphore);
+8 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c
··· 302 302 return -EINVAL; 303 303 } 304 304 305 - if (amdgpu_enable_scheduler || !amdgpu_enable_semaphores || 306 - (count >= AMDGPU_NUM_SYNCS)) { 305 + if (amdgpu_enable_scheduler || !amdgpu_enable_semaphores) { 306 + r = fence_wait(&fence->base, true); 307 + if (r) 308 + return r; 309 + continue; 310 + } 311 + 312 + if (count >= AMDGPU_NUM_SYNCS) { 307 313 /* not enough room, wait manually */ 308 314 r = fence_wait(&fence->base, false); 309 315 if (r)
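With the scheduler enabled, the sync path above now waits on each fence directly instead of emitting a semaphore, and it does so interruptibly: fence_wait(f, true) can return -ERESTARTSYS when a signal arrives, so the result has to be propagated rather than swallowed.

        long r = fence_wait(&fence->base, true);  /* interruptible wait */
        if (r)
                return r;   /* usually -ERESTARTSYS; lets the syscall restart */

The semaphore-overflow fallback below keeps the non-interruptible fence_wait(..., false) it had before; only the scheduler path changes behaviour.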
+51 -43
drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h
··· 48 48 __entry->fences) 49 49 ); 50 50 51 + TRACE_EVENT(amdgpu_cs_ioctl, 52 + TP_PROTO(struct amdgpu_job *job), 53 + TP_ARGS(job), 54 + TP_STRUCT__entry( 55 + __field(struct amdgpu_device *, adev) 56 + __field(struct amd_sched_job *, sched_job) 57 + __field(struct amdgpu_ib *, ib) 58 + __field(struct fence *, fence) 59 + __field(char *, ring_name) 60 + __field(u32, num_ibs) 61 + ), 62 + 63 + TP_fast_assign( 64 + __entry->adev = job->adev; 65 + __entry->sched_job = &job->base; 66 + __entry->ib = job->ibs; 67 + __entry->fence = &job->base.s_fence->base; 68 + __entry->ring_name = job->ibs[0].ring->name; 69 + __entry->num_ibs = job->num_ibs; 70 + ), 71 + TP_printk("adev=%p, sched_job=%p, first ib=%p, sched fence=%p, ring name:%s, num_ibs:%u", 72 + __entry->adev, __entry->sched_job, __entry->ib, 73 + __entry->fence, __entry->ring_name, __entry->num_ibs) 74 + ); 75 + 76 + TRACE_EVENT(amdgpu_sched_run_job, 77 + TP_PROTO(struct amdgpu_job *job), 78 + TP_ARGS(job), 79 + TP_STRUCT__entry( 80 + __field(struct amdgpu_device *, adev) 81 + __field(struct amd_sched_job *, sched_job) 82 + __field(struct amdgpu_ib *, ib) 83 + __field(struct fence *, fence) 84 + __field(char *, ring_name) 85 + __field(u32, num_ibs) 86 + ), 87 + 88 + TP_fast_assign( 89 + __entry->adev = job->adev; 90 + __entry->sched_job = &job->base; 91 + __entry->ib = job->ibs; 92 + __entry->fence = &job->base.s_fence->base; 93 + __entry->ring_name = job->ibs[0].ring->name; 94 + __entry->num_ibs = job->num_ibs; 95 + ), 96 + TP_printk("adev=%p, sched_job=%p, first ib=%p, sched fence=%p, ring name:%s, num_ibs:%u", 97 + __entry->adev, __entry->sched_job, __entry->ib, 98 + __entry->fence, __entry->ring_name, __entry->num_ibs) 99 + ); 100 + 101 + 51 102 TRACE_EVENT(amdgpu_vm_grab_id, 52 103 TP_PROTO(unsigned vmid, int ring), 53 104 TP_ARGS(vmid, ring), ··· 245 194 __entry->bo = bo; 246 195 ), 247 196 TP_printk("list=%p, bo=%p", __entry->list, __entry->bo) 248 - ); 249 - 250 - DECLARE_EVENT_CLASS(amdgpu_fence_request, 251 - 252 - TP_PROTO(struct drm_device *dev, int ring, u32 seqno), 253 - 254 - TP_ARGS(dev, ring, seqno), 255 - 256 - TP_STRUCT__entry( 257 - __field(u32, dev) 258 - __field(int, ring) 259 - __field(u32, seqno) 260 - ), 261 - 262 - TP_fast_assign( 263 - __entry->dev = dev->primary->index; 264 - __entry->ring = ring; 265 - __entry->seqno = seqno; 266 - ), 267 - 268 - TP_printk("dev=%u, ring=%d, seqno=%u", 269 - __entry->dev, __entry->ring, __entry->seqno) 270 - ); 271 - 272 - DEFINE_EVENT(amdgpu_fence_request, amdgpu_fence_emit, 273 - 274 - TP_PROTO(struct drm_device *dev, int ring, u32 seqno), 275 - 276 - TP_ARGS(dev, ring, seqno) 277 - ); 278 - 279 - DEFINE_EVENT(amdgpu_fence_request, amdgpu_fence_wait_begin, 280 - 281 - TP_PROTO(struct drm_device *dev, int ring, u32 seqno), 282 - 283 - TP_ARGS(dev, ring, seqno) 284 - ); 285 - 286 - DEFINE_EVENT(amdgpu_fence_request, amdgpu_fence_wait_end, 287 - 288 - TP_PROTO(struct drm_device *dev, int ring, u32 seqno), 289 - 290 - TP_ARGS(dev, ring, seqno) 291 197 ); 292 198 293 199 DECLARE_EVENT_CLASS(amdgpu_semaphore_request,
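The two new events above carry identical field layouts and differ only in name, which is exactly the case the DECLARE_EVENT_CLASS/DEFINE_EVENT machinery (removed further down for the old per-fence events) is meant for. A possible deduplication, not what the patch does:

        DECLARE_EVENT_CLASS(amdgpu_job_event,
                    TP_PROTO(struct amdgpu_job *job),
                    TP_ARGS(job),
                    TP_STRUCT__entry(
                                __field(struct amdgpu_device *, adev)
                                __field(struct amd_sched_job *, sched_job)
                                __field(struct amdgpu_ib *, ib)
                                __field(struct fence *, fence)
                                __field(char *, ring_name)
                                __field(u32, num_ibs)
                                ),
                    TP_fast_assign(
                               __entry->adev = job->adev;
                               __entry->sched_job = &job->base;
                               __entry->ib = job->ibs;
                               __entry->fence = &job->base.s_fence->base;
                               __entry->ring_name = job->ibs[0].ring->name;
                               __entry->num_ibs = job->num_ibs;
                               ),
                    TP_printk("adev=%p, sched_job=%p, first ib=%p, sched fence=%p, ring name:%s, num_ibs:%u",
                              __entry->adev, __entry->sched_job, __entry->ib,
                              __entry->fence, __entry->ring_name, __entry->num_ibs)
        );

        DEFINE_EVENT(amdgpu_job_event, amdgpu_cs_ioctl,
                    TP_PROTO(struct amdgpu_job *job),
                    TP_ARGS(job)
        );

        DEFINE_EVENT(amdgpu_job_event, amdgpu_sched_run_job,
                    TP_PROTO(struct amdgpu_job *job),
                    TP_ARGS(job)
        );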
+3 -3
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
··· 1073 1073 ret = drm_mm_dump_table(m, mm); 1074 1074 spin_unlock(&glob->lru_lock); 1075 1075 if (ttm_pl == TTM_PL_VRAM) 1076 - seq_printf(m, "man size:%llu pages, ram usage:%luMB, vis usage:%luMB\n", 1076 + seq_printf(m, "man size:%llu pages, ram usage:%lluMB, vis usage:%lluMB\n", 1077 1077 adev->mman.bdev.man[ttm_pl].size, 1078 - atomic64_read(&adev->vram_usage) >> 20, 1079 - atomic64_read(&adev->vram_vis_usage) >> 20); 1078 + (u64)atomic64_read(&adev->vram_usage) >> 20, 1079 + (u64)atomic64_read(&adev->vram_vis_usage) >> 20); 1080 1080 return ret; 1081 1081 } 1082 1082
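The seq_printf fix above is a portability repair: atomic64_read() returns long on 64-bit builds but long long on 32-bit ones, so neither %lu nor %llu matches on every architecture without help. Casting the value to a fixed-width type pins the format string:

        /* the (u64) cast makes %llu correct on 32- and 64-bit kernels alike */
        seq_printf(m, "ram usage:%lluMB\n",
                   (u64)atomic64_read(&adev->vram_usage) >> 20);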
+10 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
··· 392 392 ib->ptr[ib->length_dw++] = 0x00000001; /* session cmd */ 393 393 ib->ptr[ib->length_dw++] = handle; 394 394 395 - ib->ptr[ib->length_dw++] = 0x00000030; /* len */ 395 + if ((ring->adev->vce.fw_version >> 24) >= 52) 396 + ib->ptr[ib->length_dw++] = 0x00000040; /* len */ 397 + else 398 + ib->ptr[ib->length_dw++] = 0x00000030; /* len */ 396 399 ib->ptr[ib->length_dw++] = 0x01000001; /* create cmd */ 397 400 ib->ptr[ib->length_dw++] = 0x00000000; 398 401 ib->ptr[ib->length_dw++] = 0x00000042; ··· 407 404 ib->ptr[ib->length_dw++] = 0x00000100; 408 405 ib->ptr[ib->length_dw++] = 0x0000000c; 409 406 ib->ptr[ib->length_dw++] = 0x00000000; 407 + if ((ring->adev->vce.fw_version >> 24) >= 52) { 408 + ib->ptr[ib->length_dw++] = 0x00000000; 409 + ib->ptr[ib->length_dw++] = 0x00000000; 410 + ib->ptr[ib->length_dw++] = 0x00000000; 411 + ib->ptr[ib->length_dw++] = 0x00000000; 412 + } 410 413 411 414 ib->ptr[ib->length_dw++] = 0x00000014; /* len */ 412 415 ib->ptr[ib->length_dw++] = 0x05000005; /* feedback buffer */
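The VCE session-create IB above grows from 0x30 to 0x40 dwords (four extra zero dwords) once the firmware reports major version 52; the major version sits in the top byte of fw_version, which is what the >> 24 extracts. A hypothetical helper that names the check, purely illustrative:

        /* the major version lives in bits 31:24 of vce.fw_version */
        static inline bool vce_fw_at_least(struct amdgpu_device *adev, u32 major)
        {
                return (adev->vce.fw_version >> 24) >= major;
        }

Both the length dword and the extra padding would then key off vce_fw_at_least(ring->adev, 52) instead of repeating the shift.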
+85 -66
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
··· 143 143 unsigned i; 144 144 145 145 /* check if the id is still valid */ 146 - if (vm_id->id && vm_id->last_id_use && 147 - vm_id->last_id_use == adev->vm_manager.active[vm_id->id]) { 148 - trace_amdgpu_vm_grab_id(vm_id->id, ring->idx); 149 - return 0; 146 + if (vm_id->id) { 147 + unsigned id = vm_id->id; 148 + long owner; 149 + 150 + owner = atomic_long_read(&adev->vm_manager.ids[id].owner); 151 + if (owner == (long)vm) { 152 + trace_amdgpu_vm_grab_id(vm_id->id, ring->idx); 153 + return 0; 154 + } 150 155 } 151 156 152 157 /* we definately need to flush */ ··· 159 154 160 155 /* skip over VMID 0, since it is the system VM */ 161 156 for (i = 1; i < adev->vm_manager.nvm; ++i) { 162 - struct fence *fence = adev->vm_manager.active[i]; 157 + struct fence *fence = adev->vm_manager.ids[i].active; 163 158 struct amdgpu_ring *fring; 164 159 165 160 if (fence == NULL) { ··· 181 176 if (choices[i]) { 182 177 struct fence *fence; 183 178 184 - fence = adev->vm_manager.active[choices[i]]; 179 + fence = adev->vm_manager.ids[choices[i]].active; 185 180 vm_id->id = choices[i]; 186 181 187 182 trace_amdgpu_vm_grab_id(choices[i], ring->idx); ··· 212 207 uint64_t pd_addr = amdgpu_bo_gpu_offset(vm->page_directory); 213 208 struct amdgpu_vm_id *vm_id = &vm->ids[ring->idx]; 214 209 struct fence *flushed_updates = vm_id->flushed_updates; 215 - bool is_earlier = false; 210 + bool is_later; 216 211 217 - if (flushed_updates && updates) { 218 - BUG_ON(flushed_updates->context != updates->context); 219 - is_earlier = (updates->seqno - flushed_updates->seqno <= 220 - INT_MAX) ? true : false; 221 - } 212 + if (!flushed_updates) 213 + is_later = true; 214 + else if (!updates) 215 + is_later = false; 216 + else 217 + is_later = fence_is_later(updates, flushed_updates); 222 218 223 - if (pd_addr != vm_id->pd_gpu_addr || !flushed_updates || 224 - is_earlier) { 225 - 219 + if (pd_addr != vm_id->pd_gpu_addr || is_later) { 226 220 trace_amdgpu_vm_flush(pd_addr, ring->idx, vm_id->id); 227 - if (is_earlier) { 221 + if (is_later) { 228 222 vm_id->flushed_updates = fence_get(updates); 229 223 fence_put(flushed_updates); 230 224 } 231 - if (!flushed_updates) 232 - vm_id->flushed_updates = fence_get(updates); 233 225 vm_id->pd_gpu_addr = pd_addr; 234 226 amdgpu_ring_emit_vm_flush(ring, vm_id->id, vm_id->pd_gpu_addr); 235 227 } ··· 246 244 */ 247 245 void amdgpu_vm_fence(struct amdgpu_device *adev, 248 246 struct amdgpu_vm *vm, 249 - struct amdgpu_fence *fence) 247 + struct fence *fence) 250 248 { 251 - unsigned ridx = fence->ring->idx; 252 - unsigned vm_id = vm->ids[ridx].id; 249 + struct amdgpu_ring *ring = amdgpu_ring_from_fence(fence); 250 + unsigned vm_id = vm->ids[ring->idx].id; 253 251 254 - fence_put(adev->vm_manager.active[vm_id]); 255 - adev->vm_manager.active[vm_id] = fence_get(&fence->base); 256 - 257 - fence_put(vm->ids[ridx].last_id_use); 258 - vm->ids[ridx].last_id_use = fence_get(&fence->base); 252 + fence_put(adev->vm_manager.ids[vm_id].active); 253 + adev->vm_manager.ids[vm_id].active = fence_get(fence); 254 + atomic_long_set(&adev->vm_manager.ids[vm_id].owner, (long)vm); 259 255 } 260 256 261 257 /** ··· 332 332 * 333 333 * @adev: amdgpu_device pointer 334 334 * @bo: bo to clear 335 + * 336 + * need to reserve bo first before calling it. 
335 337 */ 336 338 static int amdgpu_vm_clear_bo(struct amdgpu_device *adev, 337 339 struct amdgpu_bo *bo) ··· 345 343 uint64_t addr; 346 344 int r; 347 345 348 - r = amdgpu_bo_reserve(bo, false); 349 - if (r) 350 - return r; 351 - 352 346 r = reservation_object_reserve_shared(bo->tbo.resv); 353 347 if (r) 354 348 return r; 355 349 356 350 r = ttm_bo_validate(&bo->tbo, &bo->placement, true, false); 357 351 if (r) 358 - goto error_unreserve; 352 + goto error; 359 353 360 354 addr = amdgpu_bo_gpu_offset(bo); 361 355 entries = amdgpu_bo_size(bo) / 8; 362 356 363 357 ib = kzalloc(sizeof(struct amdgpu_ib), GFP_KERNEL); 364 358 if (!ib) 365 - goto error_unreserve; 359 + goto error; 366 360 367 361 r = amdgpu_ib_get(ring, NULL, entries * 2 + 64, ib); 368 362 if (r) ··· 376 378 if (!r) 377 379 amdgpu_bo_fence(bo, fence, true); 378 380 fence_put(fence); 379 - if (amdgpu_enable_scheduler) { 380 - amdgpu_bo_unreserve(bo); 381 + if (amdgpu_enable_scheduler) 381 382 return 0; 382 - } 383 + 383 384 error_free: 384 385 amdgpu_ib_free(adev, ib); 385 386 kfree(ib); 386 387 387 - error_unreserve: 388 - amdgpu_bo_unreserve(bo); 388 + error: 389 389 return r; 390 390 } 391 391 ··· 922 926 bo_va = list_first_entry(&vm->invalidated, 923 927 struct amdgpu_bo_va, vm_status); 924 928 spin_unlock(&vm->status_lock); 925 - 929 + mutex_lock(&bo_va->mutex); 926 930 r = amdgpu_vm_bo_update(adev, bo_va, NULL); 931 + mutex_unlock(&bo_va->mutex); 927 932 if (r) 928 933 return r; 929 934 ··· 968 971 INIT_LIST_HEAD(&bo_va->valids); 969 972 INIT_LIST_HEAD(&bo_va->invalids); 970 973 INIT_LIST_HEAD(&bo_va->vm_status); 971 - 974 + mutex_init(&bo_va->mutex); 972 975 list_add_tail(&bo_va->bo_list, &bo->va); 973 976 974 977 return bo_va; ··· 986 989 * Add a mapping of the BO at the specefied addr into the VM. 987 990 * Returns 0 for success, error for failure. 988 991 * 989 - * Object has to be reserved and gets unreserved by this function! 992 + * Object has to be reserved and unreserved outside! 
990 993 */ 991 994 int amdgpu_vm_bo_map(struct amdgpu_device *adev, 992 995 struct amdgpu_bo_va *bo_va, ··· 1002 1005 1003 1006 /* validate the parameters */ 1004 1007 if (saddr & AMDGPU_GPU_PAGE_MASK || offset & AMDGPU_GPU_PAGE_MASK || 1005 - size == 0 || size & AMDGPU_GPU_PAGE_MASK) { 1006 - amdgpu_bo_unreserve(bo_va->bo); 1008 + size == 0 || size & AMDGPU_GPU_PAGE_MASK) 1007 1009 return -EINVAL; 1008 - } 1009 1010 1010 1011 /* make sure object fit at this offset */ 1011 1012 eaddr = saddr + size; 1012 - if ((saddr >= eaddr) || (offset + size > amdgpu_bo_size(bo_va->bo))) { 1013 - amdgpu_bo_unreserve(bo_va->bo); 1013 + if ((saddr >= eaddr) || (offset + size > amdgpu_bo_size(bo_va->bo))) 1014 1014 return -EINVAL; 1015 - } 1016 1015 1017 1016 last_pfn = eaddr / AMDGPU_GPU_PAGE_SIZE; 1018 1017 if (last_pfn > adev->vm_manager.max_pfn) { 1019 1018 dev_err(adev->dev, "va above limit (0x%08X > 0x%08X)\n", 1020 1019 last_pfn, adev->vm_manager.max_pfn); 1021 - amdgpu_bo_unreserve(bo_va->bo); 1022 1020 return -EINVAL; 1023 1021 } 1024 1022 1025 1023 saddr /= AMDGPU_GPU_PAGE_SIZE; 1026 1024 eaddr /= AMDGPU_GPU_PAGE_SIZE; 1027 1025 1026 + spin_lock(&vm->it_lock); 1028 1027 it = interval_tree_iter_first(&vm->va, saddr, eaddr - 1); 1028 + spin_unlock(&vm->it_lock); 1029 1029 if (it) { 1030 1030 struct amdgpu_bo_va_mapping *tmp; 1031 1031 tmp = container_of(it, struct amdgpu_bo_va_mapping, it); ··· 1030 1036 dev_err(adev->dev, "bo %p va 0x%010Lx-0x%010Lx conflict with " 1031 1037 "0x%010lx-0x%010lx\n", bo_va->bo, saddr, eaddr, 1032 1038 tmp->it.start, tmp->it.last + 1); 1033 - amdgpu_bo_unreserve(bo_va->bo); 1034 1039 r = -EINVAL; 1035 1040 goto error; 1036 1041 } 1037 1042 1038 1043 mapping = kmalloc(sizeof(*mapping), GFP_KERNEL); 1039 1044 if (!mapping) { 1040 - amdgpu_bo_unreserve(bo_va->bo); 1041 1045 r = -ENOMEM; 1042 1046 goto error; 1043 1047 } ··· 1046 1054 mapping->offset = offset; 1047 1055 mapping->flags = flags; 1048 1056 1057 + mutex_lock(&bo_va->mutex); 1049 1058 list_add(&mapping->list, &bo_va->invalids); 1059 + mutex_unlock(&bo_va->mutex); 1060 + spin_lock(&vm->it_lock); 1050 1061 interval_tree_insert(&mapping->it, &vm->va); 1062 + spin_unlock(&vm->it_lock); 1051 1063 trace_amdgpu_vm_bo_map(bo_va, mapping); 1052 1064 1053 1065 /* Make sure the page tables are allocated */ ··· 1063 1067 if (eaddr > vm->max_pde_used) 1064 1068 vm->max_pde_used = eaddr; 1065 1069 1066 - amdgpu_bo_unreserve(bo_va->bo); 1067 - 1068 1070 /* walk over the address space and allocate the page tables */ 1069 1071 for (pt_idx = saddr; pt_idx <= eaddr; ++pt_idx) { 1070 1072 struct reservation_object *resv = vm->page_directory->tbo.resv; ··· 1071 1077 if (vm->page_tables[pt_idx].bo) 1072 1078 continue; 1073 1079 1074 - ww_mutex_lock(&resv->lock, NULL); 1075 1080 r = amdgpu_bo_create(adev, AMDGPU_VM_PTE_COUNT * 8, 1076 1081 AMDGPU_GPU_PAGE_SIZE, true, 1077 1082 AMDGPU_GEM_DOMAIN_VRAM, 1078 1083 AMDGPU_GEM_CREATE_NO_CPU_ACCESS, 1079 1084 NULL, resv, &pt); 1080 - ww_mutex_unlock(&resv->lock); 1081 1085 if (r) 1082 1086 goto error_free; 1083 1087 ··· 1093 1101 1094 1102 error_free: 1095 1103 list_del(&mapping->list); 1104 + spin_lock(&vm->it_lock); 1096 1105 interval_tree_remove(&mapping->it, &vm->va); 1106 + spin_unlock(&vm->it_lock); 1097 1107 trace_amdgpu_vm_bo_unmap(bo_va, mapping); 1098 1108 kfree(mapping); 1099 1109 ··· 1113 1119 * Remove a mapping of the BO at the specefied addr from the VM. 1114 1120 * Returns 0 for success, error for failure. 
1115 1121 * 1116 - * Object has to be reserved and gets unreserved by this function! 1122 + * Object has to be reserved and unreserved outside! 1117 1123 */ 1118 1124 int amdgpu_vm_bo_unmap(struct amdgpu_device *adev, 1119 1125 struct amdgpu_bo_va *bo_va, ··· 1124 1130 bool valid = true; 1125 1131 1126 1132 saddr /= AMDGPU_GPU_PAGE_SIZE; 1127 - 1133 + mutex_lock(&bo_va->mutex); 1128 1134 list_for_each_entry(mapping, &bo_va->valids, list) { 1129 1135 if (mapping->it.start == saddr) 1130 1136 break; ··· 1139 1145 } 1140 1146 1141 1147 if (&mapping->list == &bo_va->invalids) { 1142 - amdgpu_bo_unreserve(bo_va->bo); 1148 + mutex_unlock(&bo_va->mutex); 1143 1149 return -ENOENT; 1144 1150 } 1145 1151 } 1146 - 1152 + mutex_unlock(&bo_va->mutex); 1147 1153 list_del(&mapping->list); 1154 + spin_lock(&vm->it_lock); 1148 1155 interval_tree_remove(&mapping->it, &vm->va); 1156 + spin_unlock(&vm->it_lock); 1149 1157 trace_amdgpu_vm_bo_unmap(bo_va, mapping); 1150 1158 1151 1159 if (valid) 1152 1160 list_add(&mapping->list, &vm->freed); 1153 1161 else 1154 1162 kfree(mapping); 1155 - amdgpu_bo_unreserve(bo_va->bo); 1156 1163 1157 1164 return 0; 1158 1165 } ··· 1182 1187 1183 1188 list_for_each_entry_safe(mapping, next, &bo_va->valids, list) { 1184 1189 list_del(&mapping->list); 1190 + spin_lock(&vm->it_lock); 1185 1191 interval_tree_remove(&mapping->it, &vm->va); 1192 + spin_unlock(&vm->it_lock); 1186 1193 trace_amdgpu_vm_bo_unmap(bo_va, mapping); 1187 1194 list_add(&mapping->list, &vm->freed); 1188 1195 } 1189 1196 list_for_each_entry_safe(mapping, next, &bo_va->invalids, list) { 1190 1197 list_del(&mapping->list); 1198 + spin_lock(&vm->it_lock); 1191 1199 interval_tree_remove(&mapping->it, &vm->va); 1200 + spin_unlock(&vm->it_lock); 1192 1201 kfree(mapping); 1193 1202 } 1194 - 1195 1203 fence_put(bo_va->last_pt_update); 1204 + mutex_destroy(&bo_va->mutex); 1196 1205 kfree(bo_va); 1197 1206 } 1198 1207 ··· 1240 1241 for (i = 0; i < AMDGPU_MAX_RINGS; ++i) { 1241 1242 vm->ids[i].id = 0; 1242 1243 vm->ids[i].flushed_updates = NULL; 1243 - vm->ids[i].last_id_use = NULL; 1244 1244 } 1245 - mutex_init(&vm->mutex); 1246 1245 vm->va = RB_ROOT; 1247 1246 spin_lock_init(&vm->status_lock); 1248 1247 INIT_LIST_HEAD(&vm->invalidated); 1249 1248 INIT_LIST_HEAD(&vm->cleared); 1250 1249 INIT_LIST_HEAD(&vm->freed); 1251 - 1250 + spin_lock_init(&vm->it_lock); 1252 1251 pd_size = amdgpu_vm_directory_size(adev); 1253 1252 pd_entries = amdgpu_vm_num_pdes(adev); 1254 1253 ··· 1266 1269 NULL, NULL, &vm->page_directory); 1267 1270 if (r) 1268 1271 return r; 1269 - 1272 + r = amdgpu_bo_reserve(vm->page_directory, false); 1273 + if (r) { 1274 + amdgpu_bo_unref(&vm->page_directory); 1275 + vm->page_directory = NULL; 1276 + return r; 1277 + } 1270 1278 r = amdgpu_vm_clear_bo(adev, vm->page_directory); 1279 + amdgpu_bo_unreserve(vm->page_directory); 1271 1280 if (r) { 1272 1281 amdgpu_bo_unref(&vm->page_directory); 1273 1282 vm->page_directory = NULL; ··· 1316 1313 1317 1314 amdgpu_bo_unref(&vm->page_directory); 1318 1315 fence_put(vm->page_directory_fence); 1319 - 1320 1316 for (i = 0; i < AMDGPU_MAX_RINGS; ++i) { 1317 + unsigned id = vm->ids[i].id; 1318 + 1319 + atomic_long_cmpxchg(&adev->vm_manager.ids[id].owner, 1320 + (long)vm, 0); 1321 1321 fence_put(vm->ids[i].flushed_updates); 1322 - fence_put(vm->ids[i].last_id_use); 1323 1322 } 1324 1323 1325 - mutex_destroy(&vm->mutex); 1324 + } 1325 + 1326 + /** 1327 + * amdgpu_vm_manager_fini - cleanup VM manager 1328 + * 1329 + * @adev: amdgpu_device pointer 1330 + * 1331 + * Cleanup 
the VM manager and free resources. 1332 + */ 1333 + void amdgpu_vm_manager_fini(struct amdgpu_device *adev) 1334 + { 1335 + unsigned i; 1336 + 1337 + for (i = 0; i < AMDGPU_NUM_VM; ++i) 1338 + fence_put(adev->vm_manager.ids[i].active); 1326 1339 }
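The VM rework above trades the single vm->mutex for finer-grained locking (a spinlock around the interval tree, a mutex per bo_va) plus a lockless owner word per VM id, which lets a VM reuse its previous id without a flush. The claim/check/release triad, condensed from the hunks:

        /* claim: publish this vm as the id's owner after fencing */
        atomic_long_set(&adev->vm_manager.ids[vm_id].owner, (long)vm);

        /* fast path in grab_id: the id is still ours, skip the flush */
        if (atomic_long_read(&adev->vm_manager.ids[id].owner) == (long)vm)
                return 0;

        /* teardown: release the id only if we still own it */
        atomic_long_cmpxchg(&adev->vm_manager.ids[id].owner, (long)vm, 0);

Two cleanups ride along: the hand-rolled seqno wraparound test (updates->seqno - flushed_updates->seqno <= INT_MAX) becomes a fence_is_later() call, and amdgpu_vm_clear_bo() no longer reserves the BO itself, making reservation explicitly the caller's job, as the new comments state.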
+4 -4
drivers/gpu/drm/amd/amdgpu/ci_dpm.c
··· 6569 6569 switch (state) { 6570 6570 case AMDGPU_IRQ_STATE_DISABLE: 6571 6571 cg_thermal_int = RREG32_SMC(ixCG_THERMAL_INT); 6572 - cg_thermal_int &= ~CG_THERMAL_INT_CTRL__THERM_INTH_MASK_MASK; 6572 + cg_thermal_int |= CG_THERMAL_INT_CTRL__THERM_INTH_MASK_MASK; 6573 6573 WREG32_SMC(ixCG_THERMAL_INT, cg_thermal_int); 6574 6574 break; 6575 6575 case AMDGPU_IRQ_STATE_ENABLE: 6576 6576 cg_thermal_int = RREG32_SMC(ixCG_THERMAL_INT); 6577 - cg_thermal_int |= CG_THERMAL_INT_CTRL__THERM_INTH_MASK_MASK; 6577 + cg_thermal_int &= ~CG_THERMAL_INT_CTRL__THERM_INTH_MASK_MASK; 6578 6578 WREG32_SMC(ixCG_THERMAL_INT, cg_thermal_int); 6579 6579 break; 6580 6580 default: ··· 6586 6586 switch (state) { 6587 6587 case AMDGPU_IRQ_STATE_DISABLE: 6588 6588 cg_thermal_int = RREG32_SMC(ixCG_THERMAL_INT); 6589 - cg_thermal_int &= ~CG_THERMAL_INT_CTRL__THERM_INTL_MASK_MASK; 6589 + cg_thermal_int |= CG_THERMAL_INT_CTRL__THERM_INTL_MASK_MASK; 6590 6590 WREG32_SMC(ixCG_THERMAL_INT, cg_thermal_int); 6591 6591 break; 6592 6592 case AMDGPU_IRQ_STATE_ENABLE: 6593 6593 cg_thermal_int = RREG32_SMC(ixCG_THERMAL_INT); 6594 - cg_thermal_int |= CG_THERMAL_INT_CTRL__THERM_INTL_MASK_MASK; 6594 + cg_thermal_int &= ~CG_THERMAL_INT_CTRL__THERM_INTL_MASK_MASK; 6595 6595 WREG32_SMC(ixCG_THERMAL_INT, cg_thermal_int); 6596 6596 break; 6597 6597 default:
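The ci_dpm change above looks like a simple swap of |= and &= but is a polarity fix: the CG_THERMAL_INT fields are mask bits, so a set THERM_INTH/INTL_MASK bit blocks the interrupt. DISABLE therefore has to set the bit and ENABLE has to clear it, the opposite of what the old code did:

        case AMDGPU_IRQ_STATE_DISABLE:
                /* setting the MASK bit masks the interrupt off */
                cg_thermal_int |= CG_THERMAL_INT_CTRL__THERM_INTH_MASK_MASK;
                break;
        case AMDGPU_IRQ_STATE_ENABLE:
                /* clearing it unmasks, i.e. enables delivery */
                cg_thermal_int &= ~CG_THERMAL_INT_CTRL__THERM_INTH_MASK_MASK;
                break;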
+295 -7
drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
··· 268 268 mmCGTT_CP_CLK_CTRL, 0xffffffff, 0x00000100, 269 269 mmCGTT_CPC_CLK_CTRL, 0xffffffff, 0x00000100, 270 270 mmCGTT_CPF_CLK_CTRL, 0xffffffff, 0x40000100, 271 - mmCGTT_DRM_CLK_CTRL0, 0xffffffff, 0x00600100, 272 271 mmCGTT_GDS_CLK_CTRL, 0xffffffff, 0x00000100, 273 272 mmCGTT_IA_CLK_CTRL, 0xffffffff, 0x06000100, 274 273 mmCGTT_PA_CLK_CTRL, 0xffffffff, 0x00000100, ··· 295 296 mmCGTS_SM_CTRL_REG, 0xffffffff, 0x96e00200, 296 297 mmCP_RB_WPTR_POLL_CNTL, 0xffffffff, 0x00900100, 297 298 mmRLC_CGCG_CGLS_CTRL, 0xffffffff, 0x0020003c, 298 - mmPCIE_INDEX, 0xffffffff, 0x0140001c, 299 - mmPCIE_DATA, 0x000f0000, 0x00000000, 300 - mmCGTT_DRM_CLK_CTRL0, 0xff000fff, 0x00000100, 301 - mmHDP_XDP_CGTT_BLK_CTRL, 0xc0000fff, 0x00000104, 302 299 mmCP_MEM_SLP_CNTL, 0x00000001, 0x00000001, 303 300 }; 304 301 ··· 995 1000 adev->gfx.config.max_cu_per_sh = 16; 996 1001 adev->gfx.config.max_sh_per_se = 1; 997 1002 adev->gfx.config.max_backends_per_se = 4; 998 - adev->gfx.config.max_texture_channel_caches = 8; 1003 + adev->gfx.config.max_texture_channel_caches = 16; 999 1004 adev->gfx.config.max_gprs = 256; 1000 1005 adev->gfx.config.max_gs_threads = 32; 1001 1006 adev->gfx.config.max_hw_contexts = 8; ··· 1608 1613 WREG32(mmGB_MACROTILE_MODE0 + reg_offset, gb_tile_moden); 1609 1614 } 1610 1615 case CHIP_FIJI: 1616 + for (reg_offset = 0; reg_offset < num_tile_mode_states; reg_offset++) { 1617 + switch (reg_offset) { 1618 + case 0: 1619 + gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) | 1620 + PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) | 1621 + TILE_SPLIT(ADDR_SURF_TILE_SPLIT_64B) | 1622 + MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING)); 1623 + break; 1624 + case 1: 1625 + gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) | 1626 + PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) | 1627 + TILE_SPLIT(ADDR_SURF_TILE_SPLIT_128B) | 1628 + MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING)); 1629 + break; 1630 + case 2: 1631 + gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) | 1632 + PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) | 1633 + TILE_SPLIT(ADDR_SURF_TILE_SPLIT_256B) | 1634 + MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING)); 1635 + break; 1636 + case 3: 1637 + gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) | 1638 + PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) | 1639 + TILE_SPLIT(ADDR_SURF_TILE_SPLIT_512B) | 1640 + MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING)); 1641 + break; 1642 + case 4: 1643 + gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) | 1644 + PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) | 1645 + TILE_SPLIT(ADDR_SURF_TILE_SPLIT_2KB) | 1646 + MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING)); 1647 + break; 1648 + case 5: 1649 + gb_tile_moden = (ARRAY_MODE(ARRAY_1D_TILED_THIN1) | 1650 + PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) | 1651 + TILE_SPLIT(ADDR_SURF_TILE_SPLIT_2KB) | 1652 + MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING)); 1653 + break; 1654 + case 6: 1655 + gb_tile_moden = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) | 1656 + PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) | 1657 + TILE_SPLIT(ADDR_SURF_TILE_SPLIT_2KB) | 1658 + MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING)); 1659 + break; 1660 + case 7: 1661 + gb_tile_moden = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) | 1662 + PIPE_CONFIG(ADDR_SURF_P4_16x16) | 1663 + TILE_SPLIT(ADDR_SURF_TILE_SPLIT_2KB) | 1664 + MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING)); 1665 + break; 1666 + case 8: 1667 + gb_tile_moden = (ARRAY_MODE(ARRAY_LINEAR_ALIGNED) | 1668 + PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16)); 1669 + break; 1670 + case 9: 1671 + gb_tile_moden = (ARRAY_MODE(ARRAY_1D_TILED_THIN1) | 
1672 + PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) | 1673 + MICRO_TILE_MODE_NEW(ADDR_SURF_DISPLAY_MICRO_TILING) | 1674 + SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2)); 1675 + break; 1676 + case 10: 1677 + gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) | 1678 + PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) | 1679 + MICRO_TILE_MODE_NEW(ADDR_SURF_DISPLAY_MICRO_TILING) | 1680 + SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2)); 1681 + break; 1682 + case 11: 1683 + gb_tile_moden = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) | 1684 + PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) | 1685 + MICRO_TILE_MODE_NEW(ADDR_SURF_DISPLAY_MICRO_TILING) | 1686 + SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8)); 1687 + break; 1688 + case 12: 1689 + gb_tile_moden = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) | 1690 + PIPE_CONFIG(ADDR_SURF_P4_16x16) | 1691 + MICRO_TILE_MODE_NEW(ADDR_SURF_DISPLAY_MICRO_TILING) | 1692 + SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8)); 1693 + break; 1694 + case 13: 1695 + gb_tile_moden = (ARRAY_MODE(ARRAY_1D_TILED_THIN1) | 1696 + PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) | 1697 + MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) | 1698 + SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2)); 1699 + break; 1700 + case 14: 1701 + gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) | 1702 + PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) | 1703 + MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) | 1704 + SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2)); 1705 + break; 1706 + case 15: 1707 + gb_tile_moden = (ARRAY_MODE(ARRAY_3D_TILED_THIN1) | 1708 + PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) | 1709 + MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) | 1710 + SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2)); 1711 + break; 1712 + case 16: 1713 + gb_tile_moden = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) | 1714 + PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) | 1715 + MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) | 1716 + SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8)); 1717 + break; 1718 + case 17: 1719 + gb_tile_moden = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) | 1720 + PIPE_CONFIG(ADDR_SURF_P4_16x16) | 1721 + MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) | 1722 + SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8)); 1723 + break; 1724 + case 18: 1725 + gb_tile_moden = (ARRAY_MODE(ARRAY_1D_TILED_THICK) | 1726 + PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) | 1727 + MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) | 1728 + SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1)); 1729 + break; 1730 + case 19: 1731 + gb_tile_moden = (ARRAY_MODE(ARRAY_1D_TILED_THICK) | 1732 + PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) | 1733 + MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) | 1734 + SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1)); 1735 + break; 1736 + case 20: 1737 + gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THICK) | 1738 + PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) | 1739 + MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) | 1740 + SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1)); 1741 + break; 1742 + case 21: 1743 + gb_tile_moden = (ARRAY_MODE(ARRAY_3D_TILED_THICK) | 1744 + PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) | 1745 + MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) | 1746 + SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1)); 1747 + break; 1748 + case 22: 1749 + gb_tile_moden = (ARRAY_MODE(ARRAY_PRT_TILED_THICK) | 1750 + PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) | 1751 + MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) | 1752 + SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1)); 1753 + break; 1754 + case 23: 1755 + gb_tile_moden = (ARRAY_MODE(ARRAY_PRT_TILED_THICK) | 1756 + PIPE_CONFIG(ADDR_SURF_P4_16x16) | 1757 + MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) | 1758 + 
SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1)); 1759 + break; 1760 + case 24: 1761 + gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THICK) | 1762 + PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) | 1763 + MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) | 1764 + SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1)); 1765 + break; 1766 + case 25: 1767 + gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_XTHICK) | 1768 + PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) | 1769 + MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) | 1770 + SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1)); 1771 + break; 1772 + case 26: 1773 + gb_tile_moden = (ARRAY_MODE(ARRAY_3D_TILED_XTHICK) | 1774 + PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) | 1775 + MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) | 1776 + SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1)); 1777 + break; 1778 + case 27: 1779 + gb_tile_moden = (ARRAY_MODE(ARRAY_1D_TILED_THIN1) | 1780 + PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) | 1781 + MICRO_TILE_MODE_NEW(ADDR_SURF_ROTATED_MICRO_TILING) | 1782 + SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2)); 1783 + break; 1784 + case 28: 1785 + gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) | 1786 + PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) | 1787 + MICRO_TILE_MODE_NEW(ADDR_SURF_ROTATED_MICRO_TILING) | 1788 + SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2)); 1789 + break; 1790 + case 29: 1791 + gb_tile_moden = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) | 1792 + PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) | 1793 + MICRO_TILE_MODE_NEW(ADDR_SURF_ROTATED_MICRO_TILING) | 1794 + SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8)); 1795 + break; 1796 + case 30: 1797 + gb_tile_moden = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) | 1798 + PIPE_CONFIG(ADDR_SURF_P4_16x16) | 1799 + MICRO_TILE_MODE_NEW(ADDR_SURF_ROTATED_MICRO_TILING) | 1800 + SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8)); 1801 + break; 1802 + default: 1803 + gb_tile_moden = 0; 1804 + break; 1805 + } 1806 + adev->gfx.config.tile_mode_array[reg_offset] = gb_tile_moden; 1807 + WREG32(mmGB_TILE_MODE0 + reg_offset, gb_tile_moden); 1808 + } 1809 + for (reg_offset = 0; reg_offset < num_secondary_tile_mode_states; reg_offset++) { 1810 + switch (reg_offset) { 1811 + case 0: 1812 + gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) | 1813 + BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) | 1814 + MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) | 1815 + NUM_BANKS(ADDR_SURF_8_BANK)); 1816 + break; 1817 + case 1: 1818 + gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) | 1819 + BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) | 1820 + MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) | 1821 + NUM_BANKS(ADDR_SURF_8_BANK)); 1822 + break; 1823 + case 2: 1824 + gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) | 1825 + BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) | 1826 + MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) | 1827 + NUM_BANKS(ADDR_SURF_8_BANK)); 1828 + break; 1829 + case 3: 1830 + gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) | 1831 + BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) | 1832 + MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) | 1833 + NUM_BANKS(ADDR_SURF_8_BANK)); 1834 + break; 1835 + case 4: 1836 + gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) | 1837 + BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) | 1838 + MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_1) | 1839 + NUM_BANKS(ADDR_SURF_8_BANK)); 1840 + break; 1841 + case 5: 1842 + gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) | 1843 + BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) | 1844 + MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_1) | 1845 + NUM_BANKS(ADDR_SURF_8_BANK)); 1846 + break; 1847 + case 6: 1848 + gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) | 1849 + 
BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) | 1850 + MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_1) | 1851 + NUM_BANKS(ADDR_SURF_8_BANK)); 1852 + break; 1853 + case 8: 1854 + gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) | 1855 + BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_8) | 1856 + MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) | 1857 + NUM_BANKS(ADDR_SURF_8_BANK)); 1858 + break; 1859 + case 9: 1860 + gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) | 1861 + BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) | 1862 + MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) | 1863 + NUM_BANKS(ADDR_SURF_8_BANK)); 1864 + break; 1865 + case 10: 1866 + gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) | 1867 + BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) | 1868 + MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_1) | 1869 + NUM_BANKS(ADDR_SURF_8_BANK)); 1870 + break; 1871 + case 11: 1872 + gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) | 1873 + BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) | 1874 + MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_1) | 1875 + NUM_BANKS(ADDR_SURF_8_BANK)); 1876 + break; 1877 + case 12: 1878 + gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) | 1879 + BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) | 1880 + MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) | 1881 + NUM_BANKS(ADDR_SURF_8_BANK)); 1882 + break; 1883 + case 13: 1884 + gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) | 1885 + BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) | 1886 + MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) | 1887 + NUM_BANKS(ADDR_SURF_8_BANK)); 1888 + break; 1889 + case 14: 1890 + gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) | 1891 + BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) | 1892 + MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_1) | 1893 + NUM_BANKS(ADDR_SURF_4_BANK)); 1894 + break; 1895 + case 7: 1896 + /* unused idx */ 1897 + continue; 1898 + default: 1899 + gb_tile_moden = 0; 1900 + break; 1901 + } 1902 + adev->gfx.config.macrotile_mode_array[reg_offset] = gb_tile_moden; 1903 + WREG32(mmGB_MACROTILE_MODE0 + reg_offset, gb_tile_moden); 1904 + } 1905 + break; 1611 1906 case CHIP_TONGA: 1612 1907 for (reg_offset = 0; reg_offset < num_tile_mode_states; reg_offset++) { 1613 1908 switch (reg_offset) { ··· 3256 2971 amdgpu_ring_write(ring, mmPA_SC_RASTER_CONFIG - PACKET3_SET_CONTEXT_REG_START); 3257 2972 switch (adev->asic_type) { 3258 2973 case CHIP_TONGA: 3259 - case CHIP_FIJI: 3260 2974 amdgpu_ring_write(ring, 0x16000012); 3261 2975 amdgpu_ring_write(ring, 0x0000002A); 2976 + break; 2977 + case CHIP_FIJI: 2978 + amdgpu_ring_write(ring, 0x3a00161a); 2979 + amdgpu_ring_write(ring, 0x0000002e); 3262 2980 break; 3263 2981 case CHIP_TOPAZ: 3264 2982 case CHIP_CARRIZO:
+4 -7
drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c
··· 40 40 static void gmc_v7_0_set_gart_funcs(struct amdgpu_device *adev); 41 41 static void gmc_v7_0_set_irq_funcs(struct amdgpu_device *adev); 42 42 43 - MODULE_FIRMWARE("radeon/boniare_mc.bin"); 43 + MODULE_FIRMWARE("radeon/bonaire_mc.bin"); 44 44 MODULE_FIRMWARE("radeon/hawaii_mc.bin"); 45 45 46 46 /** ··· 501 501 tmp = REG_SET_FIELD(tmp, VM_L2_CNTL, ENABLE_L2_PDE0_CACHE_LRU_UPDATE_BY_WRITE, 1); 502 502 tmp = REG_SET_FIELD(tmp, VM_L2_CNTL, EFFECTIVE_L2_QUEUE_SIZE, 7); 503 503 tmp = REG_SET_FIELD(tmp, VM_L2_CNTL, CONTEXT1_IDENTITY_ACCESS_MODE, 1); 504 + tmp = REG_SET_FIELD(tmp, VM_L2_CNTL, ENABLE_DEFAULT_PAGE_OUT_TO_SYSTEM_MEMORY, 1); 504 505 WREG32(mmVM_L2_CNTL, tmp); 505 506 tmp = REG_SET_FIELD(0, VM_L2_CNTL2, INVALIDATE_ALL_L1_TLBS, 1); 506 507 tmp = REG_SET_FIELD(tmp, VM_L2_CNTL2, INVALIDATE_L2_CACHE, 1); ··· 961 960 962 961 static int gmc_v7_0_sw_fini(void *handle) 963 962 { 964 - int i; 965 963 struct amdgpu_device *adev = (struct amdgpu_device *)handle; 966 964 967 965 if (adev->vm_manager.enabled) { 968 - for (i = 0; i < AMDGPU_NUM_VM; ++i) 969 - fence_put(adev->vm_manager.active[i]); 966 + amdgpu_vm_manager_fini(adev); 970 967 gmc_v7_0_vm_fini(adev); 971 968 adev->vm_manager.enabled = false; 972 969 } ··· 1009 1010 1010 1011 static int gmc_v7_0_suspend(void *handle) 1011 1012 { 1012 - int i; 1013 1013 struct amdgpu_device *adev = (struct amdgpu_device *)handle; 1014 1014 1015 1015 if (adev->vm_manager.enabled) { 1016 - for (i = 0; i < AMDGPU_NUM_VM; ++i) 1017 - fence_put(adev->vm_manager.active[i]); 1016 + amdgpu_vm_manager_fini(adev); 1018 1017 gmc_v7_0_vm_fini(adev); 1019 1018 adev->vm_manager.enabled = false; 1020 1019 }
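Two things happen in this gmc hunk (mirrored for gmc_v8_0.c below): the duplicated per-driver fence-release loops collapse into the new amdgpu_vm_manager_fini() shown earlier, and VM_L2_CNTL gains ENABLE_DEFAULT_PAGE_OUT_TO_SYSTEM_MEMORY through the usual REG_SET_FIELD idiom. That macro shifts and masks a field value using the generated per-register constants; conceptually, and simplified from memory:

        /* roughly: clear the field, then or in the shifted new value */
        #define REG_SET_FIELD(val, reg, field, v)                         \
                (((val) & ~reg##__##field##_MASK) |                       \
                 (((v) << reg##__##field##__SHIFT) & reg##__##field##_MASK))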
+3 -6
drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
··· 629 629 tmp = REG_SET_FIELD(tmp, VM_L2_CNTL, ENABLE_L2_PDE0_CACHE_LRU_UPDATE_BY_WRITE, 1); 630 630 tmp = REG_SET_FIELD(tmp, VM_L2_CNTL, EFFECTIVE_L2_QUEUE_SIZE, 7); 631 631 tmp = REG_SET_FIELD(tmp, VM_L2_CNTL, CONTEXT1_IDENTITY_ACCESS_MODE, 1); 632 + tmp = REG_SET_FIELD(tmp, VM_L2_CNTL, ENABLE_DEFAULT_PAGE_OUT_TO_SYSTEM_MEMORY, 1); 632 633 WREG32(mmVM_L2_CNTL, tmp); 633 634 tmp = RREG32(mmVM_L2_CNTL2); 634 635 tmp = REG_SET_FIELD(tmp, VM_L2_CNTL2, INVALIDATE_ALL_L1_TLBS, 1); ··· 980 979 981 980 static int gmc_v8_0_sw_fini(void *handle) 982 981 { 983 - int i; 984 982 struct amdgpu_device *adev = (struct amdgpu_device *)handle; 985 983 986 984 if (adev->vm_manager.enabled) { 987 - for (i = 0; i < AMDGPU_NUM_VM; ++i) 988 - fence_put(adev->vm_manager.active[i]); 985 + amdgpu_vm_manager_fini(adev); 989 986 gmc_v8_0_vm_fini(adev); 990 987 adev->vm_manager.enabled = false; 991 988 } ··· 1030 1031 1031 1032 static int gmc_v8_0_suspend(void *handle) 1032 1033 { 1033 - int i; 1034 1034 struct amdgpu_device *adev = (struct amdgpu_device *)handle; 1035 1035 1036 1036 if (adev->vm_manager.enabled) { 1037 - for (i = 0; i < AMDGPU_NUM_VM; ++i) 1038 - fence_put(adev->vm_manager.active[i]); 1037 + amdgpu_vm_manager_fini(adev); 1039 1038 gmc_v8_0_vm_fini(adev); 1040 1039 adev->vm_manager.enabled = false; 1041 1040 }
+19 -5
drivers/gpu/drm/amd/amdgpu/vce_v3_0.c
··· 40 40 41 41 #define GRBM_GFX_INDEX__VCE_INSTANCE__SHIFT 0x04 42 42 #define GRBM_GFX_INDEX__VCE_INSTANCE_MASK 0x10 43 + #define mmVCE_LMI_VCPU_CACHE_40BIT_BAR0 0x8616 44 + #define mmVCE_LMI_VCPU_CACHE_40BIT_BAR1 0x8617 45 + #define mmVCE_LMI_VCPU_CACHE_40BIT_BAR2 0x8618 43 46 44 47 #define VCE_V3_0_FW_SIZE (384 * 1024) 45 48 #define VCE_V3_0_STACK_SIZE (64 * 1024) ··· 133 130 134 131 /* set BUSY flag */ 135 132 WREG32_P(mmVCE_STATUS, 1, ~1); 136 - 137 - WREG32_P(mmVCE_VCPU_CNTL, VCE_VCPU_CNTL__CLK_EN_MASK, 138 - ~VCE_VCPU_CNTL__CLK_EN_MASK); 133 + if (adev->asic_type >= CHIP_STONEY) 134 + WREG32_P(mmVCE_VCPU_CNTL, 1, ~0x200001); 135 + else 136 + WREG32_P(mmVCE_VCPU_CNTL, VCE_VCPU_CNTL__CLK_EN_MASK, 137 + ~VCE_VCPU_CNTL__CLK_EN_MASK); 139 138 140 139 WREG32_P(mmVCE_SOFT_RESET, 141 140 VCE_SOFT_RESET__ECPU_SOFT_RESET_MASK, ··· 396 391 WREG32(mmVCE_LMI_SWAP_CNTL, 0); 397 392 WREG32(mmVCE_LMI_SWAP_CNTL1, 0); 398 393 WREG32(mmVCE_LMI_VM_CTRL, 0); 399 - 400 - WREG32(mmVCE_LMI_VCPU_CACHE_40BIT_BAR, (adev->vce.gpu_addr >> 8)); 394 + if (adev->asic_type >= CHIP_STONEY) { 395 + WREG32(mmVCE_LMI_VCPU_CACHE_40BIT_BAR0, (adev->vce.gpu_addr >> 8)); 396 + WREG32(mmVCE_LMI_VCPU_CACHE_40BIT_BAR1, (adev->vce.gpu_addr >> 8)); 397 + WREG32(mmVCE_LMI_VCPU_CACHE_40BIT_BAR2, (adev->vce.gpu_addr >> 8)); 398 + } else 399 + WREG32(mmVCE_LMI_VCPU_CACHE_40BIT_BAR, (adev->vce.gpu_addr >> 8)); 401 400 offset = AMDGPU_VCE_FIRMWARE_OFFSET; 402 401 size = VCE_V3_0_FW_SIZE; 403 402 WREG32(mmVCE_VCPU_CACHE_OFFSET0, offset & 0x7fffffff); ··· 585 576 struct amdgpu_iv_entry *entry) 586 577 { 587 578 DRM_DEBUG("IH: VCE\n"); 579 + 580 + WREG32_P(mmVCE_SYS_INT_STATUS, 581 + VCE_SYS_INT_STATUS__VCE_SYS_INT_TRAP_INTERRUPT_INT_MASK, 582 + ~VCE_SYS_INT_STATUS__VCE_SYS_INT_TRAP_INTERRUPT_INT_MASK); 583 + 588 584 switch (entry->src_data) { 589 585 case 0: 590 586 amdgpu_fence_process(&adev->vce.ring[0]);
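Besides the Stoney-specific 40-bit cache BARs (three registers, defined locally above, instead of one) and the different clock-enable mask, the interrupt handler now acknowledges the trap before dispatching. WREG32_P(reg, v, mask) is a read-modify-write that keeps only the register bits selected by mask and ors in the bits of v outside it, so the ack rewrites just the status bit; status bits of this kind are typically write-one-to-clear, which is presumably why writing the bit back acts as the acknowledge:

        /* effectively: tmp = RREG32(reg); tmp &= mask;
         *              tmp |= (v & ~mask); WREG32(reg, tmp); */
        WREG32_P(mmVCE_SYS_INT_STATUS,
                 VCE_SYS_INT_STATUS__VCE_SYS_INT_TRAP_INTERRUPT_INT_MASK,
                 ~VCE_SYS_INT_STATUS__VCE_SYS_INT_TRAP_INTERRUPT_INT_MASK);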
+21 -3
drivers/gpu/drm/amd/scheduler/gpu_sched_trace.h
··· 16 16 TP_ARGS(sched_job), 17 17 TP_STRUCT__entry( 18 18 __field(struct amd_sched_entity *, entity) 19 + __field(struct amd_sched_job *, sched_job) 20 + __field(struct fence *, fence) 19 21 __field(const char *, name) 20 22 __field(u32, job_count) 21 23 __field(int, hw_job_count) ··· 25 23 26 24 TP_fast_assign( 27 25 __entry->entity = sched_job->s_entity; 26 + __entry->sched_job = sched_job; 27 + __entry->fence = &sched_job->s_fence->base; 28 28 __entry->name = sched_job->sched->name; 29 29 __entry->job_count = kfifo_len( 30 30 &sched_job->s_entity->job_queue) / sizeof(sched_job); 31 31 __entry->hw_job_count = atomic_read( 32 32 &sched_job->sched->hw_rq_count); 33 33 ), 34 - TP_printk("entity=%p, ring=%s, job count:%u, hw job count:%d", 35 - __entry->entity, __entry->name, __entry->job_count, 36 - __entry->hw_job_count) 34 + TP_printk("entity=%p, sched job=%p, fence=%p, ring=%s, job count:%u, hw job count:%d", 35 + __entry->entity, __entry->sched_job, __entry->fence, __entry->name, 36 + __entry->job_count, __entry->hw_job_count) 37 37 ); 38 + 39 + TRACE_EVENT(amd_sched_process_job, 40 + TP_PROTO(struct amd_sched_fence *fence), 41 + TP_ARGS(fence), 42 + TP_STRUCT__entry( 43 + __field(struct fence *, fence) 44 + ), 45 + 46 + TP_fast_assign( 47 + __entry->fence = &fence->base; 48 + ), 49 + TP_printk("fence=%p signaled", __entry->fence) 50 + ); 51 + 38 52 #endif 39 53 40 54 /* This part must be outside protection */
+96 -50
drivers/gpu/drm/amd/scheduler/gpu_scheduler.c
··· 30 30 #define CREATE_TRACE_POINTS 31 31 #include "gpu_sched_trace.h" 32 32 33 - static struct amd_sched_job * 34 - amd_sched_entity_pop_job(struct amd_sched_entity *entity); 33 + static bool amd_sched_entity_is_ready(struct amd_sched_entity *entity); 35 34 static void amd_sched_wakeup(struct amd_gpu_scheduler *sched); 35 + 36 + struct kmem_cache *sched_fence_slab; 37 + atomic_t sched_fence_slab_ref = ATOMIC_INIT(0); 36 38 37 39 /* Initialize a given run queue struct */ 38 40 static void amd_sched_rq_init(struct amd_sched_rq *rq) ··· 63 61 } 64 62 65 63 /** 66 - * Select next job from a specified run queue with round robin policy. 67 - * Return NULL if nothing available. 64 + * Select an entity which could provide a job to run 65 + * 66 + * @rq The run queue to check. 67 + * 68 + * Try to find a ready entity, returns NULL if none found. 68 69 */ 69 - static struct amd_sched_job * 70 - amd_sched_rq_select_job(struct amd_sched_rq *rq) 70 + static struct amd_sched_entity * 71 + amd_sched_rq_select_entity(struct amd_sched_rq *rq) 71 72 { 72 73 struct amd_sched_entity *entity; 73 - struct amd_sched_job *sched_job; 74 74 75 75 spin_lock(&rq->lock); 76 76 77 77 entity = rq->current_entity; 78 78 if (entity) { 79 79 list_for_each_entry_continue(entity, &rq->entities, list) { 80 - sched_job = amd_sched_entity_pop_job(entity); 81 - if (sched_job) { 80 + if (amd_sched_entity_is_ready(entity)) { 82 81 rq->current_entity = entity; 83 82 spin_unlock(&rq->lock); 84 - return sched_job; 83 + return entity; 85 84 } 86 85 } 87 86 } 88 87 89 88 list_for_each_entry(entity, &rq->entities, list) { 90 89 91 - sched_job = amd_sched_entity_pop_job(entity); 92 - if (sched_job) { 90 + if (amd_sched_entity_is_ready(entity)) { 93 91 rq->current_entity = entity; 94 92 spin_unlock(&rq->lock); 95 - return sched_job; 93 + return entity; 96 94 } 97 95 98 96 if (entity == rq->current_entity) ··· 176 174 } 177 175 178 176 /** 177 + * Check if entity is ready 178 + * 179 + * @entity The pointer to a valid scheduler entity 180 + * 181 + * Return true if entity could provide a job. 
182 + */ 183 + static bool amd_sched_entity_is_ready(struct amd_sched_entity *entity) 184 + { 185 + if (kfifo_is_empty(&entity->job_queue)) 186 + return false; 187 + 188 + if (ACCESS_ONCE(entity->dependency)) 189 + return false; 190 + 191 + return true; 192 + } 193 + 194 + /** 179 195 * Destroy a context entity 180 196 * 181 197 * @sched Pointer to scheduler instance ··· 228 208 amd_sched_wakeup(entity->sched); 229 209 } 230 210 211 + static bool amd_sched_entity_add_dependency_cb(struct amd_sched_entity *entity) 212 + { 213 + struct amd_gpu_scheduler *sched = entity->sched; 214 + struct fence * fence = entity->dependency; 215 + struct amd_sched_fence *s_fence; 216 + 217 + if (fence->context == entity->fence_context) { 218 + /* We can ignore fences from ourself */ 219 + fence_put(entity->dependency); 220 + return false; 221 + } 222 + 223 + s_fence = to_amd_sched_fence(fence); 224 + if (s_fence && s_fence->sched == sched) { 225 + /* Fence is from the same scheduler */ 226 + if (test_bit(AMD_SCHED_FENCE_SCHEDULED_BIT, &fence->flags)) { 227 + /* Ignore it when it is already scheduled */ 228 + fence_put(entity->dependency); 229 + return false; 230 + } 231 + 232 + /* Wait for fence to be scheduled */ 233 + entity->cb.func = amd_sched_entity_wakeup; 234 + list_add_tail(&entity->cb.node, &s_fence->scheduled_cb); 235 + return true; 236 + } 237 + 238 + if (!fence_add_callback(entity->dependency, &entity->cb, 239 + amd_sched_entity_wakeup)) 240 + return true; 241 + 242 + fence_put(entity->dependency); 243 + return false; 244 + } 245 + 231 246 static struct amd_sched_job * 232 247 amd_sched_entity_pop_job(struct amd_sched_entity *entity) 233 248 { 234 249 struct amd_gpu_scheduler *sched = entity->sched; 235 250 struct amd_sched_job *sched_job; 236 251 237 - if (ACCESS_ONCE(entity->dependency)) 238 - return NULL; 239 - 240 252 if (!kfifo_out_peek(&entity->job_queue, &sched_job, sizeof(sched_job))) 241 253 return NULL; 242 254 243 - while ((entity->dependency = sched->ops->dependency(sched_job))) { 244 - 245 - if (entity->dependency->context == entity->fence_context) { 246 - /* We can ignore fences from ourself */ 247 - fence_put(entity->dependency); 248 - continue; 249 - } 250 - 251 - if (fence_add_callback(entity->dependency, &entity->cb, 252 - amd_sched_entity_wakeup)) 253 - fence_put(entity->dependency); 254 - else 255 + while ((entity->dependency = sched->ops->dependency(sched_job))) 256 + if (amd_sched_entity_add_dependency_cb(entity)) 255 257 return NULL; 256 - } 257 258 258 259 return sched_job; 259 260 } ··· 314 273 * 315 274 * Returns 0 for success, negative error code otherwise. 
316 275 */ 317 - int amd_sched_entity_push_job(struct amd_sched_job *sched_job) 276 + void amd_sched_entity_push_job(struct amd_sched_job *sched_job) 318 277 { 319 278 struct amd_sched_entity *entity = sched_job->s_entity; 320 - struct amd_sched_fence *fence = amd_sched_fence_create( 321 - entity, sched_job->owner); 322 - 323 - if (!fence) 324 - return -ENOMEM; 325 - 326 - fence_get(&fence->base); 327 - sched_job->s_fence = fence; 328 279 329 280 wait_event(entity->sched->job_scheduled, 330 281 amd_sched_entity_in(sched_job)); 331 282 trace_amd_sched_job(sched_job); 332 - return 0; 333 283 } 334 284 335 285 /** ··· 342 310 } 343 311 344 312 /** 345 - * Select next to run 313 + * Select next entity to process 346 314 */ 347 - static struct amd_sched_job * 348 - amd_sched_select_job(struct amd_gpu_scheduler *sched) 315 + static struct amd_sched_entity * 316 + amd_sched_select_entity(struct amd_gpu_scheduler *sched) 349 317 { 350 - struct amd_sched_job *sched_job; 318 + struct amd_sched_entity *entity; 351 319 352 320 if (!amd_sched_ready(sched)) 353 321 return NULL; 354 322 355 323 /* Kernel run queue has higher priority than normal run queue*/ 356 - sched_job = amd_sched_rq_select_job(&sched->kernel_rq); 357 - if (sched_job == NULL) 358 - sched_job = amd_sched_rq_select_job(&sched->sched_rq); 324 + entity = amd_sched_rq_select_entity(&sched->kernel_rq); 325 + if (entity == NULL) 326 + entity = amd_sched_rq_select_entity(&sched->sched_rq); 359 327 360 - return sched_job; 328 + return entity; 361 329 } 362 330 363 331 static void amd_sched_process_job(struct fence *f, struct fence_cb *cb) ··· 375 343 list_del_init(&s_fence->list); 376 344 spin_unlock_irqrestore(&sched->fence_list_lock, flags); 377 345 } 346 + trace_amd_sched_process_job(s_fence); 378 347 fence_put(&s_fence->base); 379 348 wake_up_interruptible(&sched->wake_up_worker); 380 349 } ··· 419 386 unsigned long flags; 420 387 421 388 wait_event_interruptible(sched->wake_up_worker, 422 - kthread_should_stop() || 423 - (sched_job = amd_sched_select_job(sched))); 389 + (entity = amd_sched_select_entity(sched)) || 390 + kthread_should_stop()); 424 391 392 + if (!entity) 393 + continue; 394 + 395 + sched_job = amd_sched_entity_pop_job(entity); 425 396 if (!sched_job) 426 397 continue; 427 398 428 - entity = sched_job->s_entity; 429 399 s_fence = sched_job->s_fence; 430 400 431 401 if (sched->timeout != MAX_SCHEDULE_TIMEOUT) { ··· 441 405 442 406 atomic_inc(&sched->hw_rq_count); 443 407 fence = sched->ops->run_job(sched_job); 408 + amd_sched_fence_scheduled(s_fence); 444 409 if (fence) { 445 410 r = fence_add_callback(fence, &s_fence->cb, 446 411 amd_sched_process_job); ··· 487 450 init_waitqueue_head(&sched->wake_up_worker); 488 451 init_waitqueue_head(&sched->job_scheduled); 489 452 atomic_set(&sched->hw_rq_count, 0); 453 + if (atomic_inc_return(&sched_fence_slab_ref) == 1) { 454 + sched_fence_slab = kmem_cache_create( 455 + "amd_sched_fence", sizeof(struct amd_sched_fence), 0, 456 + SLAB_HWCACHE_ALIGN, NULL); 457 + if (!sched_fence_slab) 458 + return -ENOMEM; 459 + } 490 460 491 461 /* Each scheduler will run on a seperate kernel thread */ 492 462 sched->thread = kthread_run(amd_sched_main, sched, sched->name); ··· 514 470 { 515 471 if (sched->thread) 516 472 kthread_stop(sched->thread); 473 + if (atomic_dec_and_test(&sched_fence_slab_ref)) 474 + kmem_cache_destroy(sched_fence_slab); 517 475 }
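The scheduler main-loop rework above splits "find something runnable" from "take its job": run-queue selection now returns an entity after a cheap readiness test under the rq lock, and the job is only popped once the worker commits to running it. The other structural change is the refcounted global slab for scheduler fences, the same scheme the amdgpu fence code adopted earlier in this series; condensed, with illustrative names:

        #include <linux/slab.h>
        #include <linux/atomic.h>

        static struct kmem_cache *fence_slab;
        static atomic_t fence_slab_ref = ATOMIC_INIT(0);

        int sched_instance_init(void)
        {
                /* the first instance creates the shared slab */
                if (atomic_inc_return(&fence_slab_ref) == 1) {
                        fence_slab = kmem_cache_create("sched_fence",
                                        256, /* sizeof the fence struct */
                                        0, SLAB_HWCACHE_ALIGN, NULL);
                        if (!fence_slab)
                                return -ENOMEM;
                }
                return 0;
        }

        void sched_instance_fini(void)
        {
                /* the last instance tears it down */
                if (atomic_dec_and_test(&fence_slab_ref))
                        kmem_cache_destroy(fence_slab);
        }

The scheme assumes init and fini calls do not race with one another; that ordering is something the callers have to guarantee.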
+8 -3
drivers/gpu/drm/amd/scheduler/gpu_scheduler.h
··· 27 27 #include <linux/kfifo.h> 28 28 #include <linux/fence.h> 29 29 30 + #define AMD_SCHED_FENCE_SCHEDULED_BIT FENCE_FLAG_USER_BITS 31 + 30 32 struct amd_gpu_scheduler; 31 33 struct amd_sched_rq; 34 + 35 + extern struct kmem_cache *sched_fence_slab; 36 + extern atomic_t sched_fence_slab_ref; 32 37 33 38 /** 34 39 * A scheduler entity is a wrapper around a job queue or a group ··· 70 65 struct amd_sched_fence { 71 66 struct fence base; 72 67 struct fence_cb cb; 68 + struct list_head scheduled_cb; 73 69 struct amd_gpu_scheduler *sched; 74 70 spinlock_t lock; 75 71 void *owner; ··· 82 76 struct amd_gpu_scheduler *sched; 83 77 struct amd_sched_entity *s_entity; 84 78 struct amd_sched_fence *s_fence; 85 - void *owner; 86 79 }; 87 80 88 81 extern const struct fence_ops amd_sched_fence_ops; ··· 133 128 uint32_t jobs); 134 129 void amd_sched_entity_fini(struct amd_gpu_scheduler *sched, 135 130 struct amd_sched_entity *entity); 136 - int amd_sched_entity_push_job(struct amd_sched_job *sched_job); 131 + void amd_sched_entity_push_job(struct amd_sched_job *sched_job); 137 132 138 133 struct amd_sched_fence *amd_sched_fence_create( 139 134 struct amd_sched_entity *s_entity, void *owner); 135 + void amd_sched_fence_scheduled(struct amd_sched_fence *fence); 140 136 void amd_sched_fence_signal(struct amd_sched_fence *fence); 141 - 142 137 143 138 #endif
+21 -2
drivers/gpu/drm/amd/scheduler/sched_fence.c
··· 32 32 struct amd_sched_fence *fence = NULL; 33 33 unsigned seq; 34 34 35 - fence = kzalloc(sizeof(struct amd_sched_fence), GFP_KERNEL); 35 + fence = kmem_cache_zalloc(sched_fence_slab, GFP_KERNEL); 36 36 if (fence == NULL) 37 37 return NULL; 38 + 39 + INIT_LIST_HEAD(&fence->scheduled_cb); 38 40 fence->owner = owner; 39 41 fence->sched = s_entity->sched; 40 42 spin_lock_init(&fence->lock); ··· 57 55 FENCE_TRACE(&fence->base, "was already signaled\n"); 58 56 } 59 57 58 + void amd_sched_fence_scheduled(struct amd_sched_fence *s_fence) 59 + { 60 + struct fence_cb *cur, *tmp; 61 + 62 + set_bit(AMD_SCHED_FENCE_SCHEDULED_BIT, &s_fence->base.flags); 63 + list_for_each_entry_safe(cur, tmp, &s_fence->scheduled_cb, node) { 64 + list_del_init(&cur->node); 65 + cur->func(&s_fence->base, cur); 66 + } 67 + } 68 + 60 69 static const char *amd_sched_fence_get_driver_name(struct fence *fence) 61 70 { 62 71 return "amd_sched"; ··· 84 71 return true; 85 72 } 86 73 74 + static void amd_sched_fence_release(struct fence *f) 75 + { 76 + struct amd_sched_fence *fence = to_amd_sched_fence(f); 77 + kmem_cache_free(sched_fence_slab, fence); 78 + } 79 + 87 80 const struct fence_ops amd_sched_fence_ops = { 88 81 .get_driver_name = amd_sched_fence_get_driver_name, 89 82 .get_timeline_name = amd_sched_fence_get_timeline_name, 90 83 .enable_signaling = amd_sched_fence_enable_signaling, 91 84 .signaled = NULL, 92 85 .wait = fence_default_wait, 93 - .release = NULL, 86 + .release = amd_sched_fence_release, 94 87 };
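amd_sched_fence_scheduled() above gives the fence a second, earlier signaling point: "scheduled" as opposed to "finished". Its consumer sits in the gpu_scheduler.c hunk: a dependency fence from the same scheduler only needs to be scheduled, not completed, before the next job may be pushed, since submission order on the same scheduler covers the rest. Producer and consumer side by side, condensed from the two files:

        struct fence_cb *cur, *tmp;

        /* producer: publish the state, then drain privately parked waiters */
        set_bit(AMD_SCHED_FENCE_SCHEDULED_BIT, &s_fence->base.flags);
        list_for_each_entry_safe(cur, tmp, &s_fence->scheduled_cb, node) {
                list_del_init(&cur->node);      /* node stays reusable */
                cur->func(&s_fence->base, cur);
        }

        /* consumer: already scheduled? drop it; otherwise park on the list */
        if (test_bit(AMD_SCHED_FENCE_SCHEDULED_BIT, &fence->flags)) {
                fence_put(entity->dependency);
        } else {
                entity->cb.func = amd_sched_entity_wakeup;
                list_add_tail(&entity->cb.node, &s_fence->scheduled_cb);
        }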
+42 -19
drivers/gpu/drm/drm_atomic.c
··· 1432 1432 return ret; 1433 1433 } 1434 1434 1435 + /** 1436 + * drm_atomic_update_old_fb -- Unset old_fb pointers and set plane->fb pointers. 1437 + * 1438 + * @dev: drm device to check. 1439 + * @plane_mask: plane mask for planes that were updated. 1440 + * @ret: return value, can be -EDEADLK for a retry. 1441 + * 1442 + * Before doing an update plane->old_fb is set to plane->fb, 1443 + * but before dropping the locks old_fb needs to be set to NULL 1444 + * and plane->fb updated. This is a common operation for each 1445 + * atomic update, so this call is split off as a helper. 1446 + */ 1447 + void drm_atomic_clean_old_fb(struct drm_device *dev, 1448 + unsigned plane_mask, 1449 + int ret) 1450 + { 1451 + struct drm_plane *plane; 1452 + 1453 + /* if succeeded, fixup legacy plane crtc/fb ptrs before dropping 1454 + * locks (ie. while it is still safe to deref plane->state). We 1455 + * need to do this here because the driver entry points cannot 1456 + * distinguish between legacy and atomic ioctls. 1457 + */ 1458 + drm_for_each_plane_mask(plane, dev, plane_mask) { 1459 + if (ret == 0) { 1460 + struct drm_framebuffer *new_fb = plane->state->fb; 1461 + if (new_fb) 1462 + drm_framebuffer_reference(new_fb); 1463 + plane->fb = new_fb; 1464 + plane->crtc = plane->state->crtc; 1465 + 1466 + if (plane->old_fb) 1467 + drm_framebuffer_unreference(plane->old_fb); 1468 + } 1469 + plane->old_fb = NULL; 1470 + } 1471 + } 1472 + EXPORT_SYMBOL(drm_atomic_clean_old_fb); 1473 + 1435 1474 int drm_mode_atomic_ioctl(struct drm_device *dev, 1436 1475 void *data, struct drm_file *file_priv) 1437 1476 { ··· 1485 1446 struct drm_plane *plane; 1486 1447 struct drm_crtc *crtc; 1487 1448 struct drm_crtc_state *crtc_state; 1488 - unsigned plane_mask = 0; 1449 + unsigned plane_mask; 1489 1450 int ret = 0; 1490 1451 unsigned int i, j; 1491 1452 ··· 1525 1486 state->allow_modeset = !!(arg->flags & DRM_MODE_ATOMIC_ALLOW_MODESET); 1526 1487 1527 1488 retry: 1489 + plane_mask = 0; 1528 1490 copied_objs = 0; 1529 1491 copied_props = 0; 1530 1492 ··· 1616 1576 } 1617 1577 1618 1578 out: 1619 - /* if succeeded, fixup legacy plane crtc/fb ptrs before dropping 1620 - * locks (ie. while it is still safe to deref plane->state). We 1621 - * need to do this here because the driver entry points cannot 1622 - * distinguish between legacy and atomic ioctls. 1623 - */ 1624 - drm_for_each_plane_mask(plane, dev, plane_mask) { 1625 - if (ret == 0) { 1626 - struct drm_framebuffer *new_fb = plane->state->fb; 1627 - if (new_fb) 1628 - drm_framebuffer_reference(new_fb); 1629 - plane->fb = new_fb; 1630 - plane->crtc = plane->state->crtc; 1631 - 1632 - if (plane->old_fb) 1633 - drm_framebuffer_unreference(plane->old_fb); 1634 - } 1635 - plane->old_fb = NULL; 1636 - } 1579 + drm_atomic_clean_old_fb(dev, plane_mask, ret); 1637 1580 1638 1581 if (ret && arg->flags & DRM_MODE_PAGE_FLIP_EVENT) { 1639 1582 /*
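Two fixes travel together in drm_atomic.c: the legacy plane->fb/old_fb fixup is factored out into drm_atomic_clean_old_fb() so the fbdev helpers below can share it, and plane_mask is now (re)initialized under the retry: label. The reset is the subtle part of the -EDEADLK dance, since the mask must describe only the planes staged by the current attempt:

        retry:
                plane_mask = 0;                 /* rebuilt on every attempt */
                /* ... stage plane updates, accumulating plane_mask ... */
                ret = drm_atomic_commit(state);
        out:
                drm_atomic_clean_old_fb(dev, plane_mask, ret);
                if (ret == -EDEADLK)
                        goto backoff;           /* drop locks, clear state, retry */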
+20 -9
drivers/gpu/drm/drm_atomic_helper.c
··· 210 210 return -EINVAL; 211 211 } 212 212 213 + if (!drm_encoder_crtc_ok(new_encoder, connector_state->crtc)) { 214 + DRM_DEBUG_ATOMIC("[ENCODER:%d:%s] incompatible with [CRTC:%d]\n", 215 + new_encoder->base.id, 216 + new_encoder->name, 217 + connector_state->crtc->base.id); 218 + return -EINVAL; 219 + } 220 + 213 221 if (new_encoder == connector_state->best_encoder) { 214 222 DRM_DEBUG_ATOMIC("[CONNECTOR:%d:%s] keeps [ENCODER:%d:%s], now on [CRTC:%d]\n", 215 223 connector->base.id, ··· 1561 1553 goto fail; 1562 1554 } 1563 1555 1556 + if (plane_state->crtc && (plane == plane->crtc->cursor)) 1557 + plane_state->state->legacy_cursor_update = true; 1558 + 1564 1559 ret = __drm_atomic_helper_disable_plane(plane, plane_state); 1565 1560 if (ret != 0) 1566 1561 goto fail; ··· 1615 1604 plane_state->src_y = 0; 1616 1605 plane_state->src_h = 0; 1617 1606 plane_state->src_w = 0; 1618 - 1619 - if (plane->crtc && (plane == plane->crtc->cursor)) 1620 - plane_state->state->legacy_cursor_update = true; 1621 1607 1622 1608 return 0; 1623 1609 } ··· 1749 1741 struct drm_crtc_state *crtc_state; 1750 1742 struct drm_plane_state *primary_state; 1751 1743 struct drm_crtc *crtc = set->crtc; 1744 + int hdisplay, vdisplay; 1752 1745 int ret; 1753 1746 1754 1747 crtc_state = drm_atomic_get_crtc_state(state, crtc); ··· 1792 1783 if (ret != 0) 1793 1784 return ret; 1794 1785 1786 + drm_crtc_get_hv_timing(set->mode, &hdisplay, &vdisplay); 1787 + 1795 1788 drm_atomic_set_fb_for_plane(primary_state, set->fb); 1796 1789 primary_state->crtc_x = 0; 1797 1790 primary_state->crtc_y = 0; 1798 - primary_state->crtc_h = set->mode->vdisplay; 1799 - primary_state->crtc_w = set->mode->hdisplay; 1791 + primary_state->crtc_h = vdisplay; 1792 + primary_state->crtc_w = hdisplay; 1800 1793 primary_state->src_x = set->x << 16; 1801 1794 primary_state->src_y = set->y << 16; 1802 1795 if (primary_state->rotation & (BIT(DRM_ROTATE_90) | BIT(DRM_ROTATE_270))) { 1803 - primary_state->src_h = set->mode->hdisplay << 16; 1804 - primary_state->src_w = set->mode->vdisplay << 16; 1796 + primary_state->src_h = hdisplay << 16; 1797 + primary_state->src_w = vdisplay << 16; 1805 1798 } else { 1806 - primary_state->src_h = set->mode->vdisplay << 16; 1807 - primary_state->src_w = set->mode->hdisplay << 16; 1799 + primary_state->src_h = vdisplay << 16; 1800 + primary_state->src_w = hdisplay << 16; 1808 1801 } 1809 1802 1810 1803 commit:
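Three separate fixes in the atomic helpers: encoder/CRTC compatibility is validated with drm_encoder_crtc_ok() before an encoder is accepted; the legacy_cursor_update hint moves to the path that holds the correct plane_state pointer; and __drm_atomic_helper_set_config() sizes the primary plane from drm_crtc_get_hv_timing(), which reports the CRTC-visible width and height (stereo modes double the per-eye timings) rather than the raw mode fields:

        int hdisplay, vdisplay;

        /* CRTC-visible size; differs from mode->hdisplay/vdisplay
         * for stereo modes, where per-eye timings are doubled */
        drm_crtc_get_hv_timing(set->mode, &hdisplay, &vdisplay);

        primary_state->crtc_w = hdisplay;
        primary_state->crtc_h = vdisplay;
        primary_state->src_w = hdisplay << 16;  /* src_* are 16.16 fixed point */
        primary_state->src_h = vdisplay << 16;

The hunk additionally swaps src_w and src_h when the plane is rotated by 90 or 270 degrees, which this sketch leaves out.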
+14 -37
drivers/gpu/drm/drm_fb_helper.c
··· 342 342 struct drm_plane *plane; 343 343 struct drm_atomic_state *state; 344 344 int i, ret; 345 + unsigned plane_mask; 345 346 346 347 state = drm_atomic_state_alloc(dev); 347 348 if (!state) ··· 350 349 351 350 state->acquire_ctx = dev->mode_config.acquire_ctx; 352 351 retry: 352 + plane_mask = 0; 353 353 drm_for_each_plane(plane, dev) { 354 354 struct drm_plane_state *plane_state; 355 - 356 - plane->old_fb = plane->fb; 357 355 358 356 plane_state = drm_atomic_get_plane_state(state, plane); 359 357 if (IS_ERR(plane_state)) { ··· 361 361 } 362 362 363 363 plane_state->rotation = BIT(DRM_ROTATE_0); 364 + 365 + plane->old_fb = plane->fb; 366 + plane_mask |= 1 << drm_plane_index(plane); 364 367 365 368 /* disable non-primary: */ 366 369 if (plane->type == DRM_PLANE_TYPE_PRIMARY) ··· 385 382 ret = drm_atomic_commit(state); 386 383 387 384 fail: 388 - drm_for_each_plane(plane, dev) { 389 - if (ret == 0) { 390 - struct drm_framebuffer *new_fb = plane->state->fb; 391 - if (new_fb) 392 - drm_framebuffer_reference(new_fb); 393 - plane->fb = new_fb; 394 - plane->crtc = plane->state->crtc; 395 - 396 - if (plane->old_fb) 397 - drm_framebuffer_unreference(plane->old_fb); 398 - } 399 - plane->old_fb = NULL; 400 - } 385 + drm_atomic_clean_old_fb(dev, plane_mask, ret); 401 386 402 387 if (ret == -EDEADLK) 403 388 goto backoff; ··· 1227 1236 struct drm_fb_helper *fb_helper = info->par; 1228 1237 struct drm_device *dev = fb_helper->dev; 1229 1238 struct drm_atomic_state *state; 1239 + struct drm_plane *plane; 1230 1240 int i, ret; 1241 + unsigned plane_mask; 1231 1242 1232 1243 state = drm_atomic_state_alloc(dev); 1233 1244 if (!state) ··· 1237 1244 1238 1245 state->acquire_ctx = dev->mode_config.acquire_ctx; 1239 1246 retry: 1247 + plane_mask = 0; 1240 1248 for(i = 0; i < fb_helper->crtc_count; i++) { 1241 1249 struct drm_mode_set *mode_set; 1242 1250 1243 1251 mode_set = &fb_helper->crtc_info[i].mode_set; 1244 - 1245 - mode_set->crtc->primary->old_fb = mode_set->crtc->primary->fb; 1246 1252 1247 1253 mode_set->x = var->xoffset; 1248 1254 mode_set->y = var->yoffset; ··· 1249 1257 ret = __drm_atomic_helper_set_config(mode_set, state); 1250 1258 if (ret != 0) 1251 1259 goto fail; 1260 + 1261 + plane = mode_set->crtc->primary; 1262 + plane_mask |= drm_plane_index(plane); 1263 + plane->old_fb = plane->fb; 1252 1264 } 1253 1265 1254 1266 ret = drm_atomic_commit(state); ··· 1264 1268 1265 1269 1266 1270 fail: 1267 - for(i = 0; i < fb_helper->crtc_count; i++) { 1268 - struct drm_mode_set *mode_set; 1269 - struct drm_plane *plane; 1270 - 1271 - mode_set = &fb_helper->crtc_info[i].mode_set; 1272 - plane = mode_set->crtc->primary; 1273 - 1274 - if (ret == 0) { 1275 - struct drm_framebuffer *new_fb = plane->state->fb; 1276 - 1277 - if (new_fb) 1278 - drm_framebuffer_reference(new_fb); 1279 - plane->fb = new_fb; 1280 - plane->crtc = plane->state->crtc; 1281 - 1282 - if (plane->old_fb) 1283 - drm_framebuffer_unreference(plane->old_fb); 1284 - } 1285 - plane->old_fb = NULL; 1286 - } 1271 + drm_atomic_clean_old_fb(dev, plane_mask, ret); 1287 1272 1288 1273 if (ret == -EDEADLK) 1289 1274 goto backoff;
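Both fbdev paths now track a plane_mask and hand the pointer fixup to drm_atomic_clean_old_fb(); resetting the mask at the retry label means a -EDEADLK restart rebuilds it from scratch instead of accumulating stale bits. The shape shared by both loops, condensed into a sketch:

    retry:
            plane_mask = 0;                 /* rebuilt on every pass */
            /* ... build the atomic state; for each plane actually touched:
             *         plane->old_fb = plane->fb;
             *         plane_mask |= 1 << drm_plane_index(plane);
             */
            ret = drm_atomic_commit(state);
    fail:
            drm_atomic_clean_old_fb(dev, plane_mask, ret);
            if (ret == -EDEADLK)
                    goto backoff;
            return ret;

    backoff:
            drm_atomic_state_clear(state);
            drm_atomic_legacy_backoff(state);
            goto retry;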
+4
drivers/gpu/drm/i915/i915_drv.h
··· 351 351 /* hsw/bdw */ 352 352 DPLL_ID_WRPLL1 = 0, 353 353 DPLL_ID_WRPLL2 = 1, 354 + DPLL_ID_SPLL = 2, 355 + 354 356 /* skl */ 355 357 DPLL_ID_SKL_DPLL1 = 0, 356 358 DPLL_ID_SKL_DPLL2 = 1, ··· 369 367 370 368 /* hsw, bdw */ 371 369 uint32_t wrpll; 370 + uint32_t spll; 372 371 373 372 /* skl */ 374 373 /* ··· 2651 2648 int enable_cmd_parser; 2652 2649 /* leave bools at the end to not create holes */ 2653 2650 bool enable_hangcheck; 2651 + bool fastboot; 2654 2652 bool prefault_disable; 2655 2653 bool load_detect_test; 2656 2654 bool reset;
+7 -1
drivers/gpu/drm/i915/i915_gem.c
··· 3809 3809 int i915_gem_set_caching_ioctl(struct drm_device *dev, void *data, 3810 3810 struct drm_file *file) 3811 3811 { 3812 + struct drm_i915_private *dev_priv = dev->dev_private; 3812 3813 struct drm_i915_gem_caching *args = data; 3813 3814 struct drm_i915_gem_object *obj; 3814 3815 enum i915_cache_level level; ··· 3838 3837 return -EINVAL; 3839 3838 } 3840 3839 3840 + intel_runtime_pm_get(dev_priv); 3841 + 3841 3842 ret = i915_mutex_lock_interruptible(dev); 3842 3843 if (ret) 3843 - return ret; 3844 + goto rpm_put; 3844 3845 3845 3846 obj = to_intel_bo(drm_gem_object_lookup(dev, file, args->handle)); 3846 3847 if (&obj->base == NULL) { ··· 3855 3852 drm_gem_object_unreference(&obj->base); 3856 3853 unlock: 3857 3854 mutex_unlock(&dev->struct_mutex); 3855 + rpm_put: 3856 + intel_runtime_pm_put(dev_priv); 3857 + 3858 3858 return ret; 3859 3859 } 3860 3860
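Changing an object's cache level may rebind it and rewrite GTT entries, i.e. touch the hardware, so the ioctl now holds a runtime-PM reference across its whole body. The unwind order mirrors the acquire order — the reference is taken before struct_mutex, so the new rpm_put label releases it only after the mutex path has unwound (generic sketch of the pattern):

    intel_runtime_pm_get(dev_priv);             /* A: keep the device awake */
    ret = i915_mutex_lock_interruptible(dev);   /* B */
    if (ret)
            goto rpm_put;                       /* B failed: undo A only */
    /* ... work that may touch hardware ... */
    mutex_unlock(&dev->struct_mutex);           /* undo B */
    rpm_put:
    intel_runtime_pm_put(dev_priv);             /* undo A */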
+5
drivers/gpu/drm/i915/i915_params.c
··· 40 40 .preliminary_hw_support = IS_ENABLED(CONFIG_DRM_I915_PRELIMINARY_HW_SUPPORT), 41 41 .disable_power_well = -1, 42 42 .enable_ips = 1, 43 + .fastboot = 0, 43 44 .prefault_disable = 0, 44 45 .load_detect_test = 0, 45 46 .reset = true, ··· 133 132 134 133 module_param_named_unsafe(enable_ips, i915.enable_ips, int, 0600); 135 134 MODULE_PARM_DESC(enable_ips, "Enable IPS (default: true)"); 135 + 136 + module_param_named(fastboot, i915.fastboot, bool, 0600); 137 + MODULE_PARM_DESC(fastboot, 138 + "Try to skip unnecessary mode sets at boot time (default: false)"); 136 139 137 140 module_param_named_unsafe(prefault_disable, i915.prefault_disable, bool, 0600); 138 141 MODULE_PARM_DESC(prefault_disable,
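i915.fastboot returns as an explicit opt-in: with the default of 0 an inherited boot configuration always goes through a full modeset, and only the intel_pipe_config_compare() call gated in intel_display.c below can elide it. It is toggled like any other i915 option, e.g. i915.fastboot=1 on the kernel command line. For reference, the same declaration pattern stand-alone (hypothetical "example" driver, not from this series):

    static bool example_fastboot;   /* false by default, like i915.fastboot */
    module_param(example_fastboot, bool, 0600);
    MODULE_PARM_DESC(example_fastboot,
                     "Try to skip unnecessary mode sets at boot time (default: false)");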
+4 -27
drivers/gpu/drm/i915/intel_crt.c
··· 138 138 pipe_config->base.adjusted_mode.flags |= intel_crt_get_flags(encoder); 139 139 } 140 140 141 - static void hsw_crt_pre_enable(struct intel_encoder *encoder) 142 - { 143 - struct drm_device *dev = encoder->base.dev; 144 - struct drm_i915_private *dev_priv = dev->dev_private; 145 - 146 - WARN(I915_READ(SPLL_CTL) & SPLL_PLL_ENABLE, "SPLL already enabled\n"); 147 - I915_WRITE(SPLL_CTL, 148 - SPLL_PLL_ENABLE | SPLL_PLL_FREQ_1350MHz | SPLL_PLL_SSC); 149 - POSTING_READ(SPLL_CTL); 150 - udelay(20); 151 - } 152 - 153 141 /* Note: The caller is required to filter out dpms modes not supported by the 154 142 * platform. */ 155 143 static void intel_crt_set_dpms(struct intel_encoder *encoder, int mode) ··· 204 216 intel_disable_crt(encoder); 205 217 } 206 218 207 - static void hsw_crt_post_disable(struct intel_encoder *encoder) 208 - { 209 - struct drm_device *dev = encoder->base.dev; 210 - struct drm_i915_private *dev_priv = dev->dev_private; 211 - uint32_t val; 212 - 213 - DRM_DEBUG_KMS("Disabling SPLL\n"); 214 - val = I915_READ(SPLL_CTL); 215 - WARN_ON(!(val & SPLL_PLL_ENABLE)); 216 - I915_WRITE(SPLL_CTL, val & ~SPLL_PLL_ENABLE); 217 - POSTING_READ(SPLL_CTL); 218 - } 219 - 220 219 static void intel_enable_crt(struct intel_encoder *encoder) 221 220 { 222 221 struct intel_crt *crt = intel_encoder_to_crt(encoder); ··· 255 280 if (HAS_DDI(dev)) { 256 281 pipe_config->ddi_pll_sel = PORT_CLK_SEL_SPLL; 257 282 pipe_config->port_clock = 135000 * 2; 283 + 284 + pipe_config->dpll_hw_state.wrpll = 0; 285 + pipe_config->dpll_hw_state.spll = 286 + SPLL_PLL_ENABLE | SPLL_PLL_FREQ_1350MHz | SPLL_PLL_SSC; 258 287 } 259 288 260 289 return true; ··· 839 860 if (HAS_DDI(dev)) { 840 861 crt->base.get_config = hsw_crt_get_config; 841 862 crt->base.get_hw_state = intel_ddi_get_hw_state; 842 - crt->base.pre_enable = hsw_crt_pre_enable; 843 - crt->base.post_disable = hsw_crt_post_disable; 844 863 } else { 845 864 crt->base.get_config = intel_crt_get_config; 846 865 crt->base.get_hw_state = intel_crt_get_hw_state;
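The CRT encoder stops programming SPLL_CTL from its own pre_enable/post_disable hooks; it now only describes the PLL it needs in crtc_state->dpll_hw_state (zeroing .wrpll so the cross-check of both fields stays clean) and leaves the register writes to the shared-DPLL code added in intel_ddi.c below. Roughly, the chain that replaces hsw_crt_pre_enable() — a sketch, with names taken from the surrounding code and the ordering simplified:

    /*
     * intel_crt_compute_config()   fills dpll_hw_state.spll, wrpll = 0
     * hsw_ddi_pll_select()         reserves DPLL_ID_SPLL for this CRTC
     * intel_enable_shared_dpll()   common, refcounted enable path
     *   -> hsw_ddi_spll_enable()   I915_WRITE(SPLL_CTL, hw_state.spll)
     */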
+65 -10
drivers/gpu/drm/i915/intel_ddi.c
··· 1286 1286 } 1287 1287 1288 1288 crtc_state->ddi_pll_sel = PORT_CLK_SEL_WRPLL(pll->id); 1289 + } else if (crtc_state->ddi_pll_sel == PORT_CLK_SEL_SPLL) { 1290 + struct drm_atomic_state *state = crtc_state->base.state; 1291 + struct intel_shared_dpll_config *spll = 1292 + &intel_atomic_get_shared_dpll_state(state)[DPLL_ID_SPLL]; 1293 + 1294 + if (spll->crtc_mask && 1295 + WARN_ON(spll->hw_state.spll != crtc_state->dpll_hw_state.spll)) 1296 + return false; 1297 + 1298 + crtc_state->shared_dpll = DPLL_ID_SPLL; 1299 + spll->hw_state.spll = crtc_state->dpll_hw_state.spll; 1300 + spll->crtc_mask |= 1 << intel_crtc->pipe; 1289 1301 } 1290 1302 1291 1303 return true; ··· 2449 2437 } 2450 2438 } 2451 2439 2452 - static void hsw_ddi_pll_enable(struct drm_i915_private *dev_priv, 2440 + static void hsw_ddi_wrpll_enable(struct drm_i915_private *dev_priv, 2453 2441 struct intel_shared_dpll *pll) 2454 2442 { 2455 2443 I915_WRITE(WRPLL_CTL(pll->id), pll->config.hw_state.wrpll); ··· 2457 2445 udelay(20); 2458 2446 } 2459 2447 2460 - static void hsw_ddi_pll_disable(struct drm_i915_private *dev_priv, 2448 + static void hsw_ddi_spll_enable(struct drm_i915_private *dev_priv, 2461 2449 struct intel_shared_dpll *pll) 2450 + { 2451 + I915_WRITE(SPLL_CTL, pll->config.hw_state.spll); 2452 + POSTING_READ(SPLL_CTL); 2453 + udelay(20); 2454 + } 2455 + 2456 + static void hsw_ddi_wrpll_disable(struct drm_i915_private *dev_priv, 2457 + struct intel_shared_dpll *pll) 2462 2458 { 2463 2459 uint32_t val; 2464 2460 ··· 2475 2455 POSTING_READ(WRPLL_CTL(pll->id)); 2476 2456 } 2477 2457 2478 - static bool hsw_ddi_pll_get_hw_state(struct drm_i915_private *dev_priv, 2479 - struct intel_shared_dpll *pll, 2480 - struct intel_dpll_hw_state *hw_state) 2458 + static void hsw_ddi_spll_disable(struct drm_i915_private *dev_priv, 2459 + struct intel_shared_dpll *pll) 2460 + { 2461 + uint32_t val; 2462 + 2463 + val = I915_READ(SPLL_CTL); 2464 + I915_WRITE(SPLL_CTL, val & ~SPLL_PLL_ENABLE); 2465 + POSTING_READ(SPLL_CTL); 2466 + } 2467 + 2468 + static bool hsw_ddi_wrpll_get_hw_state(struct drm_i915_private *dev_priv, 2469 + struct intel_shared_dpll *pll, 2470 + struct intel_dpll_hw_state *hw_state) 2481 2471 { 2482 2472 uint32_t val; 2483 2473 ··· 2500 2470 return val & WRPLL_PLL_ENABLE; 2501 2471 } 2502 2472 2473 + static bool hsw_ddi_spll_get_hw_state(struct drm_i915_private *dev_priv, 2474 + struct intel_shared_dpll *pll, 2475 + struct intel_dpll_hw_state *hw_state) 2476 + { 2477 + uint32_t val; 2478 + 2479 + if (!intel_display_power_is_enabled(dev_priv, POWER_DOMAIN_PLLS)) 2480 + return false; 2481 + 2482 + val = I915_READ(SPLL_CTL); 2483 + hw_state->spll = val; 2484 + 2485 + return val & SPLL_PLL_ENABLE; 2486 + } 2487 + 2488 + 2503 2489 static const char * const hsw_ddi_pll_names[] = { 2504 2490 "WRPLL 1", 2505 2491 "WRPLL 2", 2492 + "SPLL" 2506 2493 }; 2507 2494 2508 2495 static void hsw_shared_dplls_init(struct drm_i915_private *dev_priv) 2509 2496 { 2510 2497 int i; 2511 2498 2512 - dev_priv->num_shared_dpll = 2; 2499 + dev_priv->num_shared_dpll = 3; 2513 2500 2514 - for (i = 0; i < dev_priv->num_shared_dpll; i++) { 2501 + for (i = 0; i < 2; i++) { 2515 2502 dev_priv->shared_dplls[i].id = i; 2516 2503 dev_priv->shared_dplls[i].name = hsw_ddi_pll_names[i]; 2517 - dev_priv->shared_dplls[i].disable = hsw_ddi_pll_disable; 2518 - dev_priv->shared_dplls[i].enable = hsw_ddi_pll_enable; 2504 + dev_priv->shared_dplls[i].disable = hsw_ddi_wrpll_disable; 2505 + dev_priv->shared_dplls[i].enable = hsw_ddi_wrpll_enable; 2519 2506 
dev_priv->shared_dplls[i].get_hw_state = 2520 - hsw_ddi_pll_get_hw_state; 2507 + hsw_ddi_wrpll_get_hw_state; 2521 2508 } 2509 + 2510 + /* SPLL is special, but needs to be initialized anyway.. */ 2511 + dev_priv->shared_dplls[i].id = i; 2512 + dev_priv->shared_dplls[i].name = hsw_ddi_pll_names[i]; 2513 + dev_priv->shared_dplls[i].disable = hsw_ddi_spll_disable; 2514 + dev_priv->shared_dplls[i].enable = hsw_ddi_spll_enable; 2515 + dev_priv->shared_dplls[i].get_hw_state = hsw_ddi_spll_get_hw_state; 2516 + 2522 2517 } 2523 2518 2524 2519 static const char * const skl_ddi_pll_names[] = {
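SPLL becomes a third shared DPLL next to the two WRPLLs, with the formerly generic hsw_ddi_pll_* callbacks split into per-PLL variants. The resulting HSW/BDW table:

    /*
     *   id              name       enable                get_hw_state
     *   DPLL_ID_WRPLL1  "WRPLL 1"  hsw_ddi_wrpll_enable  hsw_ddi_wrpll_get_hw_state
     *   DPLL_ID_WRPLL2  "WRPLL 2"  hsw_ddi_wrpll_enable  hsw_ddi_wrpll_get_hw_state
     *   DPLL_ID_SPLL    "SPLL"     hsw_ddi_spll_enable   hsw_ddi_spll_get_hw_state
     */

Because SPLL is a single fixed-purpose PLL, the generic fallback allocator must never hand it out; that is the point of the "max = 2" guard added to intel_get_shared_dpll() in intel_display.c below.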
+27 -10
drivers/gpu/drm/i915/intel_display.c
··· 2646 2646 return; 2647 2647 2648 2648 valid_fb: 2649 - plane_state->src_x = plane_state->src_y = 0; 2649 + plane_state->src_x = 0; 2650 + plane_state->src_y = 0; 2650 2651 plane_state->src_w = fb->width << 16; 2651 2652 plane_state->src_h = fb->height << 16; 2652 2653 2653 - plane_state->crtc_x = plane_state->src_y = 0; 2654 + plane_state->crtc_x = 0; 2655 + plane_state->crtc_y = 0; 2654 2656 plane_state->crtc_w = fb->width; 2655 2657 plane_state->crtc_h = fb->height; 2656 2658 ··· 4239 4237 struct intel_shared_dpll *pll; 4240 4238 struct intel_shared_dpll_config *shared_dpll; 4241 4239 enum intel_dpll_id i; 4240 + int max = dev_priv->num_shared_dpll; 4242 4241 4243 4242 shared_dpll = intel_atomic_get_shared_dpll_state(crtc_state->base.state); 4244 4243 ··· 4274 4271 WARN_ON(shared_dpll[i].crtc_mask); 4275 4272 4276 4273 goto found; 4277 - } 4274 + } else if (INTEL_INFO(dev_priv)->gen < 9 && HAS_DDI(dev_priv)) 4275 + /* Do not consider SPLL */ 4276 + max = 2; 4278 4277 4279 - for (i = 0; i < dev_priv->num_shared_dpll; i++) { 4278 + for (i = 0; i < max; i++) { 4280 4279 pll = &dev_priv->shared_dplls[i]; 4281 4280 4282 4281 /* Only want to check enabled timings first */ ··· 9728 9723 case PORT_CLK_SEL_WRPLL2: 9729 9724 pipe_config->shared_dpll = DPLL_ID_WRPLL2; 9730 9725 break; 9726 + case PORT_CLK_SEL_SPLL: 9727 + pipe_config->shared_dpll = DPLL_ID_SPLL; 9731 9728 } 9732 9729 } 9733 9730 ··· 12010 12003 pipe_config->dpll_hw_state.cfgcr1, 12011 12004 pipe_config->dpll_hw_state.cfgcr2); 12012 12005 } else if (HAS_DDI(dev)) { 12013 - DRM_DEBUG_KMS("ddi_pll_sel: %u; dpll_hw_state: wrpll: 0x%x\n", 12006 + DRM_DEBUG_KMS("ddi_pll_sel: %u; dpll_hw_state: wrpll: 0x%x spll: 0x%x\n", 12014 12007 pipe_config->ddi_pll_sel, 12015 - pipe_config->dpll_hw_state.wrpll); 12008 + pipe_config->dpll_hw_state.wrpll, 12009 + pipe_config->dpll_hw_state.spll); 12016 12010 } else { 12017 12011 DRM_DEBUG_KMS("dpll_hw_state: dpll: 0x%x, dpll_md: 0x%x, " 12018 12012 "fp0: 0x%x, fp1: 0x%x\n", ··· 12536 12528 PIPE_CONF_CHECK_X(dpll_hw_state.fp0); 12537 12529 PIPE_CONF_CHECK_X(dpll_hw_state.fp1); 12538 12530 PIPE_CONF_CHECK_X(dpll_hw_state.wrpll); 12531 + PIPE_CONF_CHECK_X(dpll_hw_state.spll); 12539 12532 PIPE_CONF_CHECK_X(dpll_hw_state.ctrl1); 12540 12533 PIPE_CONF_CHECK_X(dpll_hw_state.cfgcr1); 12541 12534 PIPE_CONF_CHECK_X(dpll_hw_state.cfgcr2); ··· 13041 13032 struct intel_crtc_state *pipe_config = 13042 13033 to_intel_crtc_state(crtc_state); 13043 13034 13035 + memset(&to_intel_crtc(crtc)->atomic, 0, 13036 + sizeof(struct intel_crtc_atomic_commit)); 13037 + 13044 13038 /* Catch I915_MODE_FLAG_INHERITED */ 13045 13039 if (crtc_state->mode.private_flags != crtc->state->mode.private_flags) 13046 13040 crtc_state->mode_changed = true; ··· 13068 13056 if (ret) 13069 13057 return ret; 13070 13058 13071 - if (intel_pipe_config_compare(state->dev, 13059 + if (i915.fastboot && 13060 + intel_pipe_config_compare(state->dev, 13072 13061 to_intel_crtc_state(crtc->state), 13073 13062 pipe_config, true)) { 13074 13063 crtc_state->mode_changed = false; ··· 14377 14364 static struct drm_framebuffer * 14378 14365 intel_user_framebuffer_create(struct drm_device *dev, 14379 14366 struct drm_file *filp, 14380 - struct drm_mode_fb_cmd2 *mode_cmd) 14367 + struct drm_mode_fb_cmd2 *user_mode_cmd) 14381 14368 { 14382 14369 struct drm_i915_gem_object *obj; 14370 + struct drm_mode_fb_cmd2 mode_cmd = *user_mode_cmd; 14383 14371 14384 14372 obj = to_intel_bo(drm_gem_object_lookup(dev, filp, 14385 - mode_cmd->handles[0])); 14373 + 
mode_cmd.handles[0])); 14386 14374 if (&obj->base == NULL) 14387 14375 return ERR_PTR(-ENOENT); 14388 14376 14389 - return intel_framebuffer_create(dev, mode_cmd, obj); 14377 + return intel_framebuffer_create(dev, &mode_cmd, obj); 14390 14378 } 14391 14379 14392 14380 #ifndef CONFIG_DRM_FBDEV_EMULATION ··· 14718 14704 14719 14705 /* Apple Macbook 2,1 (Core 2 T7400) */ 14720 14706 { 0x27a2, 0x8086, 0x7270, quirk_backlight_present }, 14707 + 14708 + /* Apple Macbook 4,1 */ 14709 + { 0x2a02, 0x106b, 0x00a1, quirk_backlight_present }, 14721 14710 14722 14711 /* Toshiba CB35 Chromebook (Celeron 2955U) */ 14723 14712 { 0x0a06, 0x1179, 0x0a88, quirk_backlight_present },
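Besides wiring DPLL_ID_SPLL into PLL readout and selection and putting the pipe-config comparison behind i915.fastboot, this hunk fixes two latent bugs: the plane-state initializer assigned src_y twice and never set crtc_y, and intel_user_framebuffer_create() modified the caller-supplied drm_mode_fb_cmd2 even though the ioctl layer copies that struct back to userspace. The latter is a general rule worth a sketch (hypothetical handler and helper, not from this series):

    static int example_ioctl(struct drm_device *dev, void *data,
                             struct drm_file *file)
    {
            const struct drm_mode_fb_cmd2 *user = data; /* copied back to userspace */
            struct drm_mode_fb_cmd2 fixed = *user;      /* private, mutable copy */

            fixed.pitches[0] = ALIGN(fixed.pitches[0], 64); /* hypothetical fixup */
            return example_create_fb(dev, &fixed);          /* hypothetical helper */
    }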
+6 -4
drivers/gpu/drm/i915/intel_pm.c
··· 4449 4449 POSTING_READ(GEN6_RPNSWREQ); 4450 4450 4451 4451 dev_priv->rps.cur_freq = val; 4452 - trace_intel_gpu_freq_change(val * 50); 4452 + trace_intel_gpu_freq_change(intel_gpu_freq(dev_priv, val)); 4453 4453 } 4454 4454 4455 4455 static void valleyview_set_rps(struct drm_device *dev, u8 val) ··· 7255 7255 int intel_gpu_freq(struct drm_i915_private *dev_priv, int val) 7256 7256 { 7257 7257 if (IS_GEN9(dev_priv->dev)) 7258 - return (val * GT_FREQUENCY_MULTIPLIER) / GEN9_FREQ_SCALER; 7258 + return DIV_ROUND_CLOSEST(val * GT_FREQUENCY_MULTIPLIER, 7259 + GEN9_FREQ_SCALER); 7259 7260 else if (IS_CHERRYVIEW(dev_priv->dev)) 7260 7261 return chv_gpu_freq(dev_priv, val); 7261 7262 else if (IS_VALLEYVIEW(dev_priv->dev)) ··· 7268 7267 int intel_freq_opcode(struct drm_i915_private *dev_priv, int val) 7269 7268 { 7270 7269 if (IS_GEN9(dev_priv->dev)) 7271 - return (val * GEN9_FREQ_SCALER) / GT_FREQUENCY_MULTIPLIER; 7270 + return DIV_ROUND_CLOSEST(val * GEN9_FREQ_SCALER, 7271 + GT_FREQUENCY_MULTIPLIER); 7272 7272 else if (IS_CHERRYVIEW(dev_priv->dev)) 7273 7273 return chv_freq_opcode(dev_priv, val); 7274 7274 else if (IS_VALLEYVIEW(dev_priv->dev)) 7275 7275 return byt_freq_opcode(dev_priv, val); 7276 7276 else 7277 - return val / GT_FREQUENCY_MULTIPLIER; 7277 + return DIV_ROUND_CLOSEST(val, GT_FREQUENCY_MULTIPLIER); 7278 7278 } 7279 7279 7280 7280 struct request_boost {
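The frequency conversions switch from truncating division to DIV_ROUND_CLOSEST so that opcode -> MHz -> opcode round-trips are stable (one gen9 opcode step is 50/3 ≈ 16.7 MHz), and the RPS tracepoint now reports through intel_gpu_freq() instead of the hardcoded val * 50, which only held on older platforms. A stand-alone demonstration of the rounding problem, using the kernel's DIV_ROUND_CLOSEST definition for non-negative values:

    #include <stdio.h>

    #define GT_FREQUENCY_MULTIPLIER 50
    #define GEN9_FREQ_SCALER        3
    #define DIV_ROUND_CLOSEST(x, d) (((x) + (d) / 2) / (d))

    int main(void)
    {
            int freq = 683;  /* MHz; what intel_gpu_freq() reports for opcode 41 */

            /* truncation loses a step: 683 * 3 / 50 = 40, i.e. 666 MHz */
            printf("trunc:   %d\n",
                   freq * GEN9_FREQ_SCALER / GT_FREQUENCY_MULTIPLIER);
            /* closest rounding round-trips back to opcode 41 (683 MHz) */
            printf("closest: %d\n",
                   DIV_ROUND_CLOSEST(freq * GEN9_FREQ_SCALER,
                                     GT_FREQUENCY_MULTIPLIER));
            return 0;
    }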
+5 -6
drivers/gpu/drm/mgag200/mgag200_cursor.c
··· 70 70 BUG_ON(pixels_2 != pixels_current && pixels_2 != pixels_prev); 71 71 BUG_ON(pixels_current == pixels_prev); 72 72 73 + if (!handle || !file_priv) { 74 + mga_hide_cursor(mdev); 75 + return 0; 76 + } 77 + 73 78 obj = drm_gem_object_lookup(dev, file_priv, handle); 74 79 if (!obj) 75 80 return -ENOENT; ··· 91 86 WREG8(MGA_CURPOSXH, 0); 92 87 mgag200_bo_unreserve(pixels_1); 93 88 goto out_unreserve1; 94 - } 95 - 96 - if (!handle) { 97 - mga_hide_cursor(mdev); 98 - ret = 0; 99 - goto out1; 100 89 } 101 90 102 91 /* Move cursor buffers into VRAM if they aren't already */
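A cursor ioctl with handle == 0 means "hide the cursor", but the old flow looked the handle up as a GEM object first — a lookup that returns NULL for handle 0 — so the function bailed with -ENOENT before ever reaching the hide branch. Hoisting the check (plus a defensive !file_priv test) also skips the needless BO reserve/unreserve on the hide path. The decisive ordering:

    if (!handle || !file_priv) {    /* hide request: nothing to look up */
            mga_hide_cursor(mdev);
            return 0;
    }

    obj = drm_gem_object_lookup(dev, file_priv, handle);  /* handle != 0 here */
    if (!obj)
            return -ENOENT;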
+1
drivers/gpu/drm/nouveau/include/nvkm/subdev/instmem.h
··· 7 7 const struct nvkm_instmem_func *func; 8 8 struct nvkm_subdev subdev; 9 9 10 + spinlock_t lock; 10 11 struct list_head list; 11 12 u32 reserved; 12 13
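The instobj list gains a dedicated spinlock; the lock/unlock sites live in nvkm/subdev/instmem/base.c, which is not part of this hunk. A hedged sketch of the intended pattern — the iobj->head field name and the irqsave variant are assumptions here, not taken from this diff:

    unsigned long flags;

    spin_lock_init(&imem->lock);            /* once, when instmem is created */

    spin_lock_irqsave(&imem->lock, flags);
    list_add_tail(&iobj->head, &imem->list);
    spin_unlock_irqrestore(&imem->lock, flags);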
+1
drivers/gpu/drm/nouveau/nouveau_acpi.c
··· 367 367 return -ENODEV; 368 368 } 369 369 obj = (union acpi_object *)buffer.pointer; 370 + len = min(len, (int)obj->buffer.length); 370 371 memcpy(bios+offset, obj->buffer.pointer, len); 371 372 kfree(buffer.pointer); 372 373 return len;
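The ACPI _ROM method may return fewer bytes than requested; copying the full requested len regardless reads past the end of the returned buffer. Clamping to obj->buffer.length bounds the memcpy, and returning the clamped value lets the caller see the short read. Worked through with numbers:

    /* a 4096-byte request against a method that returned only 512 bytes:
     *   before: memcpy(bios + offset, ptr, 4096)   -> 3584 bytes read OOB
     *   after:  len = min(4096, 512);  memcpy(..., 512);  return 512;
     */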
+3 -1
drivers/gpu/drm/nouveau/nouveau_drm.h
··· 39 39 40 40 #include <nvif/client.h> 41 41 #include <nvif/device.h> 42 + #include <nvif/ioctl.h> 42 43 43 44 #include <drmP.h> 44 45 ··· 66 65 }; 67 66 68 67 enum nouveau_drm_object_route { 69 - NVDRM_OBJECT_NVIF = 0, 68 + NVDRM_OBJECT_NVIF = NVIF_IOCTL_V0_OWNER_NVIF, 70 69 NVDRM_OBJECT_USIF, 71 70 NVDRM_OBJECT_ABI16, 71 + NVDRM_OBJECT_ANY = NVIF_IOCTL_V0_OWNER_ANY, 72 72 }; 73 73 74 74 enum nouveau_drm_notify_route {
+4 -1
drivers/gpu/drm/nouveau/nouveau_usif.c
··· 313 313 if (nvif_unpack(argv->v0, 0, 0, true)) { 314 314 /* block access to objects not created via this interface */ 315 315 owner = argv->v0.owner; 316 - argv->v0.owner = NVDRM_OBJECT_USIF; 316 + if (argv->v0.object == 0ULL) 317 + argv->v0.owner = NVDRM_OBJECT_ANY; /* except client */ 318 + else 319 + argv->v0.owner = NVDRM_OBJECT_USIF; 317 320 } else 318 321 goto done; 319 322
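Paired with the nouveau_drm.h change above, the route tags now share the numbering space of the NVIF_IOCTL_V0_OWNER_* defines. An ioctl whose target handle is 0 addresses the client object itself, so it is tagged NVDRM_OBJECT_ANY and matches whichever interface created the client; non-zero handles keep the strict USIF ownership. In short:

    /*
     * argv->v0.object == 0  ->  owner = NVDRM_OBJECT_ANY   (the client
     *                           itself, reachable from any route)
     * argv->v0.object != 0  ->  owner = NVDRM_OBJECT_USIF  (only objects
     *                           created through usif may be addressed)
     */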
+14 -2
drivers/gpu/drm/nouveau/nvkm/engine/device/pci.c
··· 279 279 }; 280 280 281 281 static const struct nvkm_device_pci_vendor 282 + nvkm_device_pci_10de_0fe4[] = { 283 + { 0x144d, 0xc740, NULL, { .War00C800_0 = true } }, 284 + {} 285 + }; 286 + 287 + static const struct nvkm_device_pci_vendor 282 288 nvkm_device_pci_10de_104b[] = { 283 289 { 0x1043, 0x844c, "GeForce GT 625" }, 284 290 { 0x1043, 0x846b, "GeForce GT 625" }, ··· 691 685 nvkm_device_pci_10de_1199[] = { 692 686 { 0x1458, 0xd001, "GeForce GTX 760" }, 693 687 { 0x1462, 0x1106, "GeForce GTX 780M", { .War00C800_0 = true } }, /* Medion Erazer X7827 */ 688 + {} 689 + }; 690 + 691 + static const struct nvkm_device_pci_vendor 692 + nvkm_device_pci_10de_11e0[] = { 693 + { 0x1558, 0x5106, NULL, { .War00C800_0 = true } }, 694 694 {} 695 695 }; 696 696 ··· 1382 1370 { 0x0fe1, "GeForce GT 730M" }, 1383 1371 { 0x0fe2, "GeForce GT 745M" }, 1384 1372 { 0x0fe3, "GeForce GT 745M", nvkm_device_pci_10de_0fe3 }, 1385 - { 0x0fe4, "GeForce GT 750M" }, 1373 + { 0x0fe4, "GeForce GT 750M", nvkm_device_pci_10de_0fe4 }, 1386 1374 { 0x0fe9, "GeForce GT 750M" }, 1387 1375 { 0x0fea, "GeForce GT 755M" }, 1388 1376 { 0x0fec, "GeForce 710A" }, ··· 1497 1485 { 0x11c6, "GeForce GTX 650 Ti" }, 1498 1486 { 0x11c8, "GeForce GTX 650" }, 1499 1487 { 0x11cb, "GeForce GT 740" }, 1500 - { 0x11e0, "GeForce GTX 770M" }, 1488 + { 0x11e0, "GeForce GTX 770M", nvkm_device_pci_10de_11e0 }, 1501 1489 { 0x11e1, "GeForce GTX 765M" }, 1502 1490 { 0x11e2, "GeForce GTX 765M" }, 1503 1491 { 0x11e3, "GeForce GTX 760M", nvkm_device_pci_10de_11e3 },
+2
drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgf117.c
··· 207 207 const u32 b = beta * gr->ppc_tpc_nr[gpc][ppc]; 208 208 const u32 t = timeslice_mode; 209 209 const u32 o = PPC_UNIT(gpc, ppc, 0); 210 + if (!(gr->ppc_mask[gpc] & (1 << ppc))) 211 + continue; 210 212 mmio_skip(info, o + 0xc0, (t << 28) | (b << 16) | ++bo); 211 213 mmio_wr32(info, o + 0xc0, (t << 28) | (b << 16) | --bo); 212 214 bo += grctx->attrib_nr_max * gr->ppc_tpc_nr[gpc][ppc];
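The per-PPC attrib buffer setup now skips PPC units the hardware reports as absent; gr->ppc_mask is filled in at probe time by the gf100.c hunk at the end of this series. Without the guard, the loop programs PPC_UNIT registers for fused-off units. Probe and use pair up like this:

    /* probe (gf100.c):  if (mask) gr->ppc_mask[gpc] |= 1 << ppc;
     * use   (here):     if (!(gr->ppc_mask[gpc] & (1 << ppc)))
     *                           continue;
     */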
+5 -3
drivers/gpu/drm/nouveau/nvkm/engine/gr/fuc/gpc.fuc
··· 52 52 #endif 53 53 54 54 #ifdef INCLUDE_CODE 55 + #define gpc_addr(reg,addr) /* 56 + */ imm32(reg,addr) /* 57 + */ or reg NV_PGRAPH_GPCX_GPCCS_MMIO_CTRL_BASE_ENABLE 55 58 #define gpc_wr32(addr,reg) /* 59 + */ gpc_addr($r14,addr) /* 56 60 */ mov b32 $r15 reg /* 57 - */ imm32($r14, addr) /* 58 - */ or $r14 NV_PGRAPH_GPCX_GPCCS_MMIO_CTRL_BASE_ENABLE /* 59 61 */ call(nv_wr32) 60 62 61 63 // reports an exception to the host ··· 163 161 164 162 #if NV_PGRAPH_GPCX_UNK__SIZE > 0 165 163 // figure out which, and how many, UNKs are actually present 166 - imm32($r14, 0x500c30) 164 + gpc_addr($r14, 0x500c30) 167 165 clear b32 $r2 168 166 clear b32 $r3 169 167 clear b32 $r4
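gpc_addr() factors address formation — the immediate load plus OR-ing in the mmio base-enable bit — out of gpc_wr32(), and the 0x500c30 probe in the init path now goes through it as well; previously that read loaded the bare address with imm32 and never set NV_PGRAPH_GPCX_GPCCS_MMIO_CTRL_BASE_ENABLE. The added instruction shifts every subsequent code offset, which is why the gpcgf117/gk104/gk110/gk208/gm107 .fuc?.h hunks below churn wholesale: those headers are microcode images regenerated from this source, not hand edits. A rough C rendering of the fixed macro (the real thing is falcon assembly; the bit value is a placeholder, not the real define):

    #define MMIO_CTRL_BASE_ENABLE   0x80000000      /* placeholder value */
    #define gpc_addr(a)             ((a) | MMIO_CTRL_BASE_ENABLE)

    nv_wr32(gpc_addr(addr), data);          /* writes always carried the bit */
    val = nv_rd32(gpc_addr(0x500c30));      /* ...now the probe read does too */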
+172 -172
drivers/gpu/drm/nouveau/nvkm/engine/gr/fuc/gpcgf117.fuc3.h
··· 314 314 0x03f01200, 315 315 0x0002d000, 316 316 0x17f104bd, 317 - 0x10fe0542, 317 + 0x10fe0545, 318 318 0x0007f100, 319 319 0x0003f007, 320 320 0xbd0000d0, ··· 338 338 0x02d00103, 339 339 0xf104bd00, 340 340 0xf00c30e7, 341 - 0x24bd50e3, 342 - 0x44bd34bd, 343 - /* 0x0430: init_unk_loop */ 344 - 0xb06821f4, 345 - 0x0bf400f6, 346 - 0x01f7f00f, 347 - 0xfd04f2bb, 348 - 0x30b6054f, 349 - /* 0x0445: init_unk_next */ 350 - 0x0120b601, 351 - 0xb004e0b6, 352 - 0x1bf40126, 353 - /* 0x0451: init_unk_done */ 354 - 0x070380e2, 355 - 0xf1080480, 356 - 0xf0010027, 357 - 0x22cf0223, 358 - 0x9534bd00, 359 - 0x07f10825, 360 - 0x03f0c000, 361 - 0x0005d001, 362 - 0x07f104bd, 363 - 0x03f0c100, 364 - 0x0005d001, 365 - 0x0e9804bd, 366 - 0x010f9800, 367 - 0x015021f5, 368 - 0xbb002fbb, 369 - 0x0e98003f, 370 - 0x020f9801, 371 - 0x015021f5, 372 - 0xfd050e98, 373 - 0x2ebb00ef, 374 - 0x003ebb00, 375 - 0x98020e98, 376 - 0x21f5030f, 377 - 0x0e980150, 378 - 0x00effd07, 379 - 0xbb002ebb, 380 - 0x35b6003e, 381 - 0x0007f102, 382 - 0x0103f0d3, 383 - 0xbd0003d0, 384 - 0x0825b604, 385 - 0xb60635b6, 386 - 0x30b60120, 387 - 0x0824b601, 388 - 0xb90834b6, 389 - 0x21f5022f, 390 - 0x2fbb02d3, 391 - 0x003fbb00, 392 - 0x010007f1, 393 - 0xd00203f0, 341 + 0xe5f050e3, 342 + 0xbd24bd01, 343 + /* 0x0433: init_unk_loop */ 344 + 0xf444bd34, 345 + 0xf6b06821, 346 + 0x0f0bf400, 347 + 0xbb01f7f0, 348 + 0x4ffd04f2, 349 + 0x0130b605, 350 + /* 0x0448: init_unk_next */ 351 + 0xb60120b6, 352 + 0x26b004e0, 353 + 0xe21bf401, 354 + /* 0x0454: init_unk_done */ 355 + 0x80070380, 356 + 0x27f10804, 357 + 0x23f00100, 358 + 0x0022cf02, 359 + 0x259534bd, 360 + 0x0007f108, 361 + 0x0103f0c0, 362 + 0xbd0005d0, 363 + 0x0007f104, 364 + 0x0103f0c1, 365 + 0xbd0005d0, 366 + 0x000e9804, 367 + 0xf5010f98, 368 + 0xbb015021, 369 + 0x3fbb002f, 370 + 0x010e9800, 371 + 0xf5020f98, 372 + 0x98015021, 373 + 0xeffd050e, 374 + 0x002ebb00, 375 + 0x98003ebb, 376 + 0x0f98020e, 377 + 0x5021f503, 378 + 0x070e9801, 379 + 0xbb00effd, 380 + 0x3ebb002e, 381 + 0x0235b600, 382 + 0xd30007f1, 383 + 0xd00103f0, 394 384 0x04bd0003, 395 - 0x29f024bd, 396 - 0x0007f11f, 397 - 0x0203f008, 398 - 0xbd0002d0, 399 - /* 0x0505: main */ 400 - 0x0031f404, 401 - 0xf00028f4, 402 - 0x21f424d7, 403 - 0xf401f439, 404 - 0xf404e4b0, 405 - 0x81fe1e18, 406 - 0x0627f001, 407 - 0x12fd20bd, 408 - 0x01e4b604, 409 - 0xfe051efd, 410 - 0x21f50018, 411 - 0x0ef405fa, 412 - /* 0x0535: main_not_ctx_xfer */ 413 - 0x10ef94d3, 414 - 0xf501f5f0, 415 - 0xf4037e21, 416 - /* 0x0542: ih */ 417 - 0x80f9c60e, 418 - 0xf90188fe, 419 - 0xf990f980, 420 - 0xf9b0f9a0, 421 - 0xf9e0f9d0, 422 - 0xf104bdf0, 423 - 0xf00200a7, 424 - 0xaacf00a3, 425 - 0x04abc400, 426 - 0xf02c0bf4, 427 - 0xe7f124d7, 428 - 0xe3f01a00, 429 - 0x00eecf00, 430 - 0x1900f7f1, 431 - 0xcf00f3f0, 432 - 0x21f400ff, 433 - 0x01e7f004, 434 - 0x1d0007f1, 435 - 0xd00003f0, 436 - 0x04bd000e, 437 - /* 0x0590: ih_no_fifo */ 438 - 0x010007f1, 439 - 0xd00003f0, 440 - 0x04bd000a, 441 - 0xe0fcf0fc, 442 - 0xb0fcd0fc, 443 - 0x90fca0fc, 444 - 0x88fe80fc, 445 - 0xf480fc00, 446 - 0x01f80032, 447 - /* 0x05b4: hub_barrier_done */ 448 - 0x9801f7f0, 449 - 0xfebb040e, 450 - 0x02ffb904, 451 - 0x9418e7f1, 452 - 0xf440e3f0, 453 - 0x00f89d21, 454 - /* 0x05cc: ctx_redswitch */ 455 - 0xf120f7f0, 385 + 0xb60825b6, 386 + 0x20b60635, 387 + 0x0130b601, 388 + 0xb60824b6, 389 + 0x2fb90834, 390 + 0xd321f502, 391 + 0x002fbb02, 392 + 0xf1003fbb, 393 + 0xf0010007, 394 + 0x03d00203, 395 + 0xbd04bd00, 396 + 0x1f29f024, 397 + 0x080007f1, 398 + 0xd00203f0, 399 + 0x04bd0002, 400 + /* 0x0508: main */ 401 + 
0xf40031f4, 402 + 0xd7f00028, 403 + 0x3921f424, 404 + 0xb0f401f4, 405 + 0x18f404e4, 406 + 0x0181fe1e, 407 + 0xbd0627f0, 408 + 0x0412fd20, 409 + 0xfd01e4b6, 410 + 0x18fe051e, 411 + 0xfd21f500, 412 + 0xd30ef405, 413 + /* 0x0538: main_not_ctx_xfer */ 414 + 0xf010ef94, 415 + 0x21f501f5, 416 + 0x0ef4037e, 417 + /* 0x0545: ih */ 418 + 0xfe80f9c6, 419 + 0x80f90188, 420 + 0xa0f990f9, 421 + 0xd0f9b0f9, 422 + 0xf0f9e0f9, 423 + 0xa7f104bd, 424 + 0xa3f00200, 425 + 0x00aacf00, 426 + 0xf404abc4, 427 + 0xd7f02c0b, 428 + 0x00e7f124, 429 + 0x00e3f01a, 430 + 0xf100eecf, 431 + 0xf01900f7, 432 + 0xffcf00f3, 433 + 0x0421f400, 434 + 0xf101e7f0, 435 + 0xf01d0007, 436 + 0x0ed00003, 437 + /* 0x0593: ih_no_fifo */ 438 + 0xf104bd00, 439 + 0xf0010007, 440 + 0x0ad00003, 441 + 0xfc04bd00, 442 + 0xfce0fcf0, 443 + 0xfcb0fcd0, 444 + 0xfc90fca0, 445 + 0x0088fe80, 446 + 0x32f480fc, 447 + /* 0x05b7: hub_barrier_done */ 448 + 0xf001f800, 449 + 0x0e9801f7, 450 + 0x04febb04, 451 + 0xf102ffb9, 452 + 0xf09418e7, 453 + 0x21f440e3, 454 + /* 0x05cf: ctx_redswitch */ 455 + 0xf000f89d, 456 + 0x07f120f7, 457 + 0x03f08500, 458 + 0x000fd001, 459 + 0xe7f004bd, 460 + /* 0x05e1: ctx_redswitch_delay */ 461 + 0x01e2b608, 462 + 0xf1fd1bf4, 463 + 0xf10800f5, 464 + 0xf10200f5, 456 465 0xf0850007, 457 466 0x0fd00103, 458 - 0xf004bd00, 459 - /* 0x05de: ctx_redswitch_delay */ 460 - 0xe2b608e7, 461 - 0xfd1bf401, 462 - 0x0800f5f1, 463 - 0x0200f5f1, 464 - 0x850007f1, 465 - 0xd00103f0, 466 - 0x04bd000f, 467 - /* 0x05fa: ctx_xfer */ 468 - 0x07f100f8, 469 - 0x03f08100, 470 - 0x000fd002, 471 - 0x11f404bd, 472 - 0xcc21f507, 473 - /* 0x060d: ctx_xfer_not_load */ 474 - 0x6a21f505, 475 - 0xf124bd02, 476 - 0xf047fc07, 477 - 0x02d00203, 478 - 0xf004bd00, 479 - 0x20b6012c, 480 - 0xfc07f103, 481 - 0x0203f04a, 482 - 0xbd0002d0, 483 - 0x01acf004, 484 - 0xf102a5f0, 485 - 0xf00000b7, 486 - 0x0c9850b3, 487 - 0x0fc4b604, 488 - 0x9800bcbb, 489 - 0x0d98000c, 490 - 0x00e7f001, 491 - 0x016f21f5, 492 - 0xf101acf0, 493 - 0xf04000b7, 494 - 0x0c9850b3, 495 - 0x0fc4b604, 496 - 0x9800bcbb, 497 - 0x0d98010c, 498 - 0x060f9802, 499 - 0x0800e7f1, 500 - 0x016f21f5, 467 + 0xf804bd00, 468 + /* 0x05fd: ctx_xfer */ 469 + 0x0007f100, 470 + 0x0203f081, 471 + 0xbd000fd0, 472 + 0x0711f404, 473 + 0x05cf21f5, 474 + /* 0x0610: ctx_xfer_not_load */ 475 + 0x026a21f5, 476 + 0x07f124bd, 477 + 0x03f047fc, 478 + 0x0002d002, 479 + 0x2cf004bd, 480 + 0x0320b601, 481 + 0x4afc07f1, 482 + 0xd00203f0, 483 + 0x04bd0002, 501 484 0xf001acf0, 502 - 0xb7f104a5, 503 - 0xb3f03000, 485 + 0xb7f102a5, 486 + 0xb3f00000, 504 487 0x040c9850, 505 488 0xbb0fc4b6, 506 489 0x0c9800bc, 507 - 0x030d9802, 508 - 0xf1080f98, 509 - 0xf50200e7, 510 - 0xf5016f21, 511 - 0xf4025e21, 512 - 0x12f40601, 513 - /* 0x06a9: ctx_xfer_post */ 514 - 0x7f21f507, 515 - /* 0x06ad: ctx_xfer_done */ 516 - 0xb421f502, 517 - 0x0000f805, 518 - 0x00000000, 490 + 0x010d9800, 491 + 0xf500e7f0, 492 + 0xf0016f21, 493 + 0xb7f101ac, 494 + 0xb3f04000, 495 + 0x040c9850, 496 + 0xbb0fc4b6, 497 + 0x0c9800bc, 498 + 0x020d9801, 499 + 0xf1060f98, 500 + 0xf50800e7, 501 + 0xf0016f21, 502 + 0xa5f001ac, 503 + 0x00b7f104, 504 + 0x50b3f030, 505 + 0xb6040c98, 506 + 0xbcbb0fc4, 507 + 0x020c9800, 508 + 0x98030d98, 509 + 0xe7f1080f, 510 + 0x21f50200, 511 + 0x21f5016f, 512 + 0x01f4025e, 513 + 0x0712f406, 514 + /* 0x06ac: ctx_xfer_post */ 515 + 0x027f21f5, 516 + /* 0x06b0: ctx_xfer_done */ 517 + 0x05b721f5, 518 + 0x000000f8, 519 519 0x00000000, 520 520 0x00000000, 521 521 0x00000000,
+172 -172
drivers/gpu/drm/nouveau/nvkm/engine/gr/fuc/gpcgk104.fuc3.h
··· 314 314 0x03f01200, 315 315 0x0002d000, 316 316 0x17f104bd, 317 - 0x10fe0542, 317 + 0x10fe0545, 318 318 0x0007f100, 319 319 0x0003f007, 320 320 0xbd0000d0, ··· 338 338 0x02d00103, 339 339 0xf104bd00, 340 340 0xf00c30e7, 341 - 0x24bd50e3, 342 - 0x44bd34bd, 343 - /* 0x0430: init_unk_loop */ 344 - 0xb06821f4, 345 - 0x0bf400f6, 346 - 0x01f7f00f, 347 - 0xfd04f2bb, 348 - 0x30b6054f, 349 - /* 0x0445: init_unk_next */ 350 - 0x0120b601, 351 - 0xb004e0b6, 352 - 0x1bf40126, 353 - /* 0x0451: init_unk_done */ 354 - 0x070380e2, 355 - 0xf1080480, 356 - 0xf0010027, 357 - 0x22cf0223, 358 - 0x9534bd00, 359 - 0x07f10825, 360 - 0x03f0c000, 361 - 0x0005d001, 362 - 0x07f104bd, 363 - 0x03f0c100, 364 - 0x0005d001, 365 - 0x0e9804bd, 366 - 0x010f9800, 367 - 0x015021f5, 368 - 0xbb002fbb, 369 - 0x0e98003f, 370 - 0x020f9801, 371 - 0x015021f5, 372 - 0xfd050e98, 373 - 0x2ebb00ef, 374 - 0x003ebb00, 375 - 0x98020e98, 376 - 0x21f5030f, 377 - 0x0e980150, 378 - 0x00effd07, 379 - 0xbb002ebb, 380 - 0x35b6003e, 381 - 0x0007f102, 382 - 0x0103f0d3, 383 - 0xbd0003d0, 384 - 0x0825b604, 385 - 0xb60635b6, 386 - 0x30b60120, 387 - 0x0824b601, 388 - 0xb90834b6, 389 - 0x21f5022f, 390 - 0x2fbb02d3, 391 - 0x003fbb00, 392 - 0x010007f1, 393 - 0xd00203f0, 341 + 0xe5f050e3, 342 + 0xbd24bd01, 343 + /* 0x0433: init_unk_loop */ 344 + 0xf444bd34, 345 + 0xf6b06821, 346 + 0x0f0bf400, 347 + 0xbb01f7f0, 348 + 0x4ffd04f2, 349 + 0x0130b605, 350 + /* 0x0448: init_unk_next */ 351 + 0xb60120b6, 352 + 0x26b004e0, 353 + 0xe21bf401, 354 + /* 0x0454: init_unk_done */ 355 + 0x80070380, 356 + 0x27f10804, 357 + 0x23f00100, 358 + 0x0022cf02, 359 + 0x259534bd, 360 + 0x0007f108, 361 + 0x0103f0c0, 362 + 0xbd0005d0, 363 + 0x0007f104, 364 + 0x0103f0c1, 365 + 0xbd0005d0, 366 + 0x000e9804, 367 + 0xf5010f98, 368 + 0xbb015021, 369 + 0x3fbb002f, 370 + 0x010e9800, 371 + 0xf5020f98, 372 + 0x98015021, 373 + 0xeffd050e, 374 + 0x002ebb00, 375 + 0x98003ebb, 376 + 0x0f98020e, 377 + 0x5021f503, 378 + 0x070e9801, 379 + 0xbb00effd, 380 + 0x3ebb002e, 381 + 0x0235b600, 382 + 0xd30007f1, 383 + 0xd00103f0, 394 384 0x04bd0003, 395 - 0x29f024bd, 396 - 0x0007f11f, 397 - 0x0203f008, 398 - 0xbd0002d0, 399 - /* 0x0505: main */ 400 - 0x0031f404, 401 - 0xf00028f4, 402 - 0x21f424d7, 403 - 0xf401f439, 404 - 0xf404e4b0, 405 - 0x81fe1e18, 406 - 0x0627f001, 407 - 0x12fd20bd, 408 - 0x01e4b604, 409 - 0xfe051efd, 410 - 0x21f50018, 411 - 0x0ef405fa, 412 - /* 0x0535: main_not_ctx_xfer */ 413 - 0x10ef94d3, 414 - 0xf501f5f0, 415 - 0xf4037e21, 416 - /* 0x0542: ih */ 417 - 0x80f9c60e, 418 - 0xf90188fe, 419 - 0xf990f980, 420 - 0xf9b0f9a0, 421 - 0xf9e0f9d0, 422 - 0xf104bdf0, 423 - 0xf00200a7, 424 - 0xaacf00a3, 425 - 0x04abc400, 426 - 0xf02c0bf4, 427 - 0xe7f124d7, 428 - 0xe3f01a00, 429 - 0x00eecf00, 430 - 0x1900f7f1, 431 - 0xcf00f3f0, 432 - 0x21f400ff, 433 - 0x01e7f004, 434 - 0x1d0007f1, 435 - 0xd00003f0, 436 - 0x04bd000e, 437 - /* 0x0590: ih_no_fifo */ 438 - 0x010007f1, 439 - 0xd00003f0, 440 - 0x04bd000a, 441 - 0xe0fcf0fc, 442 - 0xb0fcd0fc, 443 - 0x90fca0fc, 444 - 0x88fe80fc, 445 - 0xf480fc00, 446 - 0x01f80032, 447 - /* 0x05b4: hub_barrier_done */ 448 - 0x9801f7f0, 449 - 0xfebb040e, 450 - 0x02ffb904, 451 - 0x9418e7f1, 452 - 0xf440e3f0, 453 - 0x00f89d21, 454 - /* 0x05cc: ctx_redswitch */ 455 - 0xf120f7f0, 385 + 0xb60825b6, 386 + 0x20b60635, 387 + 0x0130b601, 388 + 0xb60824b6, 389 + 0x2fb90834, 390 + 0xd321f502, 391 + 0x002fbb02, 392 + 0xf1003fbb, 393 + 0xf0010007, 394 + 0x03d00203, 395 + 0xbd04bd00, 396 + 0x1f29f024, 397 + 0x080007f1, 398 + 0xd00203f0, 399 + 0x04bd0002, 400 + /* 0x0508: main */ 401 + 
0xf40031f4, 402 + 0xd7f00028, 403 + 0x3921f424, 404 + 0xb0f401f4, 405 + 0x18f404e4, 406 + 0x0181fe1e, 407 + 0xbd0627f0, 408 + 0x0412fd20, 409 + 0xfd01e4b6, 410 + 0x18fe051e, 411 + 0xfd21f500, 412 + 0xd30ef405, 413 + /* 0x0538: main_not_ctx_xfer */ 414 + 0xf010ef94, 415 + 0x21f501f5, 416 + 0x0ef4037e, 417 + /* 0x0545: ih */ 418 + 0xfe80f9c6, 419 + 0x80f90188, 420 + 0xa0f990f9, 421 + 0xd0f9b0f9, 422 + 0xf0f9e0f9, 423 + 0xa7f104bd, 424 + 0xa3f00200, 425 + 0x00aacf00, 426 + 0xf404abc4, 427 + 0xd7f02c0b, 428 + 0x00e7f124, 429 + 0x00e3f01a, 430 + 0xf100eecf, 431 + 0xf01900f7, 432 + 0xffcf00f3, 433 + 0x0421f400, 434 + 0xf101e7f0, 435 + 0xf01d0007, 436 + 0x0ed00003, 437 + /* 0x0593: ih_no_fifo */ 438 + 0xf104bd00, 439 + 0xf0010007, 440 + 0x0ad00003, 441 + 0xfc04bd00, 442 + 0xfce0fcf0, 443 + 0xfcb0fcd0, 444 + 0xfc90fca0, 445 + 0x0088fe80, 446 + 0x32f480fc, 447 + /* 0x05b7: hub_barrier_done */ 448 + 0xf001f800, 449 + 0x0e9801f7, 450 + 0x04febb04, 451 + 0xf102ffb9, 452 + 0xf09418e7, 453 + 0x21f440e3, 454 + /* 0x05cf: ctx_redswitch */ 455 + 0xf000f89d, 456 + 0x07f120f7, 457 + 0x03f08500, 458 + 0x000fd001, 459 + 0xe7f004bd, 460 + /* 0x05e1: ctx_redswitch_delay */ 461 + 0x01e2b608, 462 + 0xf1fd1bf4, 463 + 0xf10800f5, 464 + 0xf10200f5, 456 465 0xf0850007, 457 466 0x0fd00103, 458 - 0xf004bd00, 459 - /* 0x05de: ctx_redswitch_delay */ 460 - 0xe2b608e7, 461 - 0xfd1bf401, 462 - 0x0800f5f1, 463 - 0x0200f5f1, 464 - 0x850007f1, 465 - 0xd00103f0, 466 - 0x04bd000f, 467 - /* 0x05fa: ctx_xfer */ 468 - 0x07f100f8, 469 - 0x03f08100, 470 - 0x000fd002, 471 - 0x11f404bd, 472 - 0xcc21f507, 473 - /* 0x060d: ctx_xfer_not_load */ 474 - 0x6a21f505, 475 - 0xf124bd02, 476 - 0xf047fc07, 477 - 0x02d00203, 478 - 0xf004bd00, 479 - 0x20b6012c, 480 - 0xfc07f103, 481 - 0x0203f04a, 482 - 0xbd0002d0, 483 - 0x01acf004, 484 - 0xf102a5f0, 485 - 0xf00000b7, 486 - 0x0c9850b3, 487 - 0x0fc4b604, 488 - 0x9800bcbb, 489 - 0x0d98000c, 490 - 0x00e7f001, 491 - 0x016f21f5, 492 - 0xf101acf0, 493 - 0xf04000b7, 494 - 0x0c9850b3, 495 - 0x0fc4b604, 496 - 0x9800bcbb, 497 - 0x0d98010c, 498 - 0x060f9802, 499 - 0x0800e7f1, 500 - 0x016f21f5, 467 + 0xf804bd00, 468 + /* 0x05fd: ctx_xfer */ 469 + 0x0007f100, 470 + 0x0203f081, 471 + 0xbd000fd0, 472 + 0x0711f404, 473 + 0x05cf21f5, 474 + /* 0x0610: ctx_xfer_not_load */ 475 + 0x026a21f5, 476 + 0x07f124bd, 477 + 0x03f047fc, 478 + 0x0002d002, 479 + 0x2cf004bd, 480 + 0x0320b601, 481 + 0x4afc07f1, 482 + 0xd00203f0, 483 + 0x04bd0002, 501 484 0xf001acf0, 502 - 0xb7f104a5, 503 - 0xb3f03000, 485 + 0xb7f102a5, 486 + 0xb3f00000, 504 487 0x040c9850, 505 488 0xbb0fc4b6, 506 489 0x0c9800bc, 507 - 0x030d9802, 508 - 0xf1080f98, 509 - 0xf50200e7, 510 - 0xf5016f21, 511 - 0xf4025e21, 512 - 0x12f40601, 513 - /* 0x06a9: ctx_xfer_post */ 514 - 0x7f21f507, 515 - /* 0x06ad: ctx_xfer_done */ 516 - 0xb421f502, 517 - 0x0000f805, 518 - 0x00000000, 490 + 0x010d9800, 491 + 0xf500e7f0, 492 + 0xf0016f21, 493 + 0xb7f101ac, 494 + 0xb3f04000, 495 + 0x040c9850, 496 + 0xbb0fc4b6, 497 + 0x0c9800bc, 498 + 0x020d9801, 499 + 0xf1060f98, 500 + 0xf50800e7, 501 + 0xf0016f21, 502 + 0xa5f001ac, 503 + 0x00b7f104, 504 + 0x50b3f030, 505 + 0xb6040c98, 506 + 0xbcbb0fc4, 507 + 0x020c9800, 508 + 0x98030d98, 509 + 0xe7f1080f, 510 + 0x21f50200, 511 + 0x21f5016f, 512 + 0x01f4025e, 513 + 0x0712f406, 514 + /* 0x06ac: ctx_xfer_post */ 515 + 0x027f21f5, 516 + /* 0x06b0: ctx_xfer_done */ 517 + 0x05b721f5, 518 + 0x000000f8, 519 519 0x00000000, 520 520 0x00000000, 521 521 0x00000000,
+172 -172
drivers/gpu/drm/nouveau/nvkm/engine/gr/fuc/gpcgk110.fuc3.h
··· 314 314 0x03f01200, 315 315 0x0002d000, 316 316 0x17f104bd, 317 - 0x10fe0542, 317 + 0x10fe0545, 318 318 0x0007f100, 319 319 0x0003f007, 320 320 0xbd0000d0, ··· 338 338 0x02d00103, 339 339 0xf104bd00, 340 340 0xf00c30e7, 341 - 0x24bd50e3, 342 - 0x44bd34bd, 343 - /* 0x0430: init_unk_loop */ 344 - 0xb06821f4, 345 - 0x0bf400f6, 346 - 0x01f7f00f, 347 - 0xfd04f2bb, 348 - 0x30b6054f, 349 - /* 0x0445: init_unk_next */ 350 - 0x0120b601, 351 - 0xb004e0b6, 352 - 0x1bf40226, 353 - /* 0x0451: init_unk_done */ 354 - 0x070380e2, 355 - 0xf1080480, 356 - 0xf0010027, 357 - 0x22cf0223, 358 - 0x9534bd00, 359 - 0x07f10825, 360 - 0x03f0c000, 361 - 0x0005d001, 362 - 0x07f104bd, 363 - 0x03f0c100, 364 - 0x0005d001, 365 - 0x0e9804bd, 366 - 0x010f9800, 367 - 0x015021f5, 368 - 0xbb002fbb, 369 - 0x0e98003f, 370 - 0x020f9801, 371 - 0x015021f5, 372 - 0xfd050e98, 373 - 0x2ebb00ef, 374 - 0x003ebb00, 375 - 0x98020e98, 376 - 0x21f5030f, 377 - 0x0e980150, 378 - 0x00effd07, 379 - 0xbb002ebb, 380 - 0x35b6003e, 381 - 0x0007f102, 382 - 0x0103f0d3, 383 - 0xbd0003d0, 384 - 0x0825b604, 385 - 0xb60635b6, 386 - 0x30b60120, 387 - 0x0824b601, 388 - 0xb90834b6, 389 - 0x21f5022f, 390 - 0x2fbb02d3, 391 - 0x003fbb00, 392 - 0x010007f1, 393 - 0xd00203f0, 341 + 0xe5f050e3, 342 + 0xbd24bd01, 343 + /* 0x0433: init_unk_loop */ 344 + 0xf444bd34, 345 + 0xf6b06821, 346 + 0x0f0bf400, 347 + 0xbb01f7f0, 348 + 0x4ffd04f2, 349 + 0x0130b605, 350 + /* 0x0448: init_unk_next */ 351 + 0xb60120b6, 352 + 0x26b004e0, 353 + 0xe21bf402, 354 + /* 0x0454: init_unk_done */ 355 + 0x80070380, 356 + 0x27f10804, 357 + 0x23f00100, 358 + 0x0022cf02, 359 + 0x259534bd, 360 + 0x0007f108, 361 + 0x0103f0c0, 362 + 0xbd0005d0, 363 + 0x0007f104, 364 + 0x0103f0c1, 365 + 0xbd0005d0, 366 + 0x000e9804, 367 + 0xf5010f98, 368 + 0xbb015021, 369 + 0x3fbb002f, 370 + 0x010e9800, 371 + 0xf5020f98, 372 + 0x98015021, 373 + 0xeffd050e, 374 + 0x002ebb00, 375 + 0x98003ebb, 376 + 0x0f98020e, 377 + 0x5021f503, 378 + 0x070e9801, 379 + 0xbb00effd, 380 + 0x3ebb002e, 381 + 0x0235b600, 382 + 0xd30007f1, 383 + 0xd00103f0, 394 384 0x04bd0003, 395 - 0x29f024bd, 396 - 0x0007f11f, 397 - 0x0203f030, 398 - 0xbd0002d0, 399 - /* 0x0505: main */ 400 - 0x0031f404, 401 - 0xf00028f4, 402 - 0x21f424d7, 403 - 0xf401f439, 404 - 0xf404e4b0, 405 - 0x81fe1e18, 406 - 0x0627f001, 407 - 0x12fd20bd, 408 - 0x01e4b604, 409 - 0xfe051efd, 410 - 0x21f50018, 411 - 0x0ef405fa, 412 - /* 0x0535: main_not_ctx_xfer */ 413 - 0x10ef94d3, 414 - 0xf501f5f0, 415 - 0xf4037e21, 416 - /* 0x0542: ih */ 417 - 0x80f9c60e, 418 - 0xf90188fe, 419 - 0xf990f980, 420 - 0xf9b0f9a0, 421 - 0xf9e0f9d0, 422 - 0xf104bdf0, 423 - 0xf00200a7, 424 - 0xaacf00a3, 425 - 0x04abc400, 426 - 0xf02c0bf4, 427 - 0xe7f124d7, 428 - 0xe3f01a00, 429 - 0x00eecf00, 430 - 0x1900f7f1, 431 - 0xcf00f3f0, 432 - 0x21f400ff, 433 - 0x01e7f004, 434 - 0x1d0007f1, 435 - 0xd00003f0, 436 - 0x04bd000e, 437 - /* 0x0590: ih_no_fifo */ 438 - 0x010007f1, 439 - 0xd00003f0, 440 - 0x04bd000a, 441 - 0xe0fcf0fc, 442 - 0xb0fcd0fc, 443 - 0x90fca0fc, 444 - 0x88fe80fc, 445 - 0xf480fc00, 446 - 0x01f80032, 447 - /* 0x05b4: hub_barrier_done */ 448 - 0x9801f7f0, 449 - 0xfebb040e, 450 - 0x02ffb904, 451 - 0x9418e7f1, 452 - 0xf440e3f0, 453 - 0x00f89d21, 454 - /* 0x05cc: ctx_redswitch */ 455 - 0xf120f7f0, 385 + 0xb60825b6, 386 + 0x20b60635, 387 + 0x0130b601, 388 + 0xb60824b6, 389 + 0x2fb90834, 390 + 0xd321f502, 391 + 0x002fbb02, 392 + 0xf1003fbb, 393 + 0xf0010007, 394 + 0x03d00203, 395 + 0xbd04bd00, 396 + 0x1f29f024, 397 + 0x300007f1, 398 + 0xd00203f0, 399 + 0x04bd0002, 400 + /* 0x0508: main */ 401 + 
0xf40031f4, 402 + 0xd7f00028, 403 + 0x3921f424, 404 + 0xb0f401f4, 405 + 0x18f404e4, 406 + 0x0181fe1e, 407 + 0xbd0627f0, 408 + 0x0412fd20, 409 + 0xfd01e4b6, 410 + 0x18fe051e, 411 + 0xfd21f500, 412 + 0xd30ef405, 413 + /* 0x0538: main_not_ctx_xfer */ 414 + 0xf010ef94, 415 + 0x21f501f5, 416 + 0x0ef4037e, 417 + /* 0x0545: ih */ 418 + 0xfe80f9c6, 419 + 0x80f90188, 420 + 0xa0f990f9, 421 + 0xd0f9b0f9, 422 + 0xf0f9e0f9, 423 + 0xa7f104bd, 424 + 0xa3f00200, 425 + 0x00aacf00, 426 + 0xf404abc4, 427 + 0xd7f02c0b, 428 + 0x00e7f124, 429 + 0x00e3f01a, 430 + 0xf100eecf, 431 + 0xf01900f7, 432 + 0xffcf00f3, 433 + 0x0421f400, 434 + 0xf101e7f0, 435 + 0xf01d0007, 436 + 0x0ed00003, 437 + /* 0x0593: ih_no_fifo */ 438 + 0xf104bd00, 439 + 0xf0010007, 440 + 0x0ad00003, 441 + 0xfc04bd00, 442 + 0xfce0fcf0, 443 + 0xfcb0fcd0, 444 + 0xfc90fca0, 445 + 0x0088fe80, 446 + 0x32f480fc, 447 + /* 0x05b7: hub_barrier_done */ 448 + 0xf001f800, 449 + 0x0e9801f7, 450 + 0x04febb04, 451 + 0xf102ffb9, 452 + 0xf09418e7, 453 + 0x21f440e3, 454 + /* 0x05cf: ctx_redswitch */ 455 + 0xf000f89d, 456 + 0x07f120f7, 457 + 0x03f08500, 458 + 0x000fd001, 459 + 0xe7f004bd, 460 + /* 0x05e1: ctx_redswitch_delay */ 461 + 0x01e2b608, 462 + 0xf1fd1bf4, 463 + 0xf10800f5, 464 + 0xf10200f5, 456 465 0xf0850007, 457 466 0x0fd00103, 458 - 0xf004bd00, 459 - /* 0x05de: ctx_redswitch_delay */ 460 - 0xe2b608e7, 461 - 0xfd1bf401, 462 - 0x0800f5f1, 463 - 0x0200f5f1, 464 - 0x850007f1, 465 - 0xd00103f0, 466 - 0x04bd000f, 467 - /* 0x05fa: ctx_xfer */ 468 - 0x07f100f8, 469 - 0x03f08100, 470 - 0x000fd002, 471 - 0x11f404bd, 472 - 0xcc21f507, 473 - /* 0x060d: ctx_xfer_not_load */ 474 - 0x6a21f505, 475 - 0xf124bd02, 476 - 0xf047fc07, 477 - 0x02d00203, 478 - 0xf004bd00, 479 - 0x20b6012c, 480 - 0xfc07f103, 481 - 0x0203f04a, 482 - 0xbd0002d0, 483 - 0x01acf004, 484 - 0xf102a5f0, 485 - 0xf00000b7, 486 - 0x0c9850b3, 487 - 0x0fc4b604, 488 - 0x9800bcbb, 489 - 0x0d98000c, 490 - 0x00e7f001, 491 - 0x016f21f5, 492 - 0xf101acf0, 493 - 0xf04000b7, 494 - 0x0c9850b3, 495 - 0x0fc4b604, 496 - 0x9800bcbb, 497 - 0x0d98010c, 498 - 0x060f9802, 499 - 0x0800e7f1, 500 - 0x016f21f5, 467 + 0xf804bd00, 468 + /* 0x05fd: ctx_xfer */ 469 + 0x0007f100, 470 + 0x0203f081, 471 + 0xbd000fd0, 472 + 0x0711f404, 473 + 0x05cf21f5, 474 + /* 0x0610: ctx_xfer_not_load */ 475 + 0x026a21f5, 476 + 0x07f124bd, 477 + 0x03f047fc, 478 + 0x0002d002, 479 + 0x2cf004bd, 480 + 0x0320b601, 481 + 0x4afc07f1, 482 + 0xd00203f0, 483 + 0x04bd0002, 501 484 0xf001acf0, 502 - 0xb7f104a5, 503 - 0xb3f03000, 485 + 0xb7f102a5, 486 + 0xb3f00000, 504 487 0x040c9850, 505 488 0xbb0fc4b6, 506 489 0x0c9800bc, 507 - 0x030d9802, 508 - 0xf1080f98, 509 - 0xf50200e7, 510 - 0xf5016f21, 511 - 0xf4025e21, 512 - 0x12f40601, 513 - /* 0x06a9: ctx_xfer_post */ 514 - 0x7f21f507, 515 - /* 0x06ad: ctx_xfer_done */ 516 - 0xb421f502, 517 - 0x0000f805, 518 - 0x00000000, 490 + 0x010d9800, 491 + 0xf500e7f0, 492 + 0xf0016f21, 493 + 0xb7f101ac, 494 + 0xb3f04000, 495 + 0x040c9850, 496 + 0xbb0fc4b6, 497 + 0x0c9800bc, 498 + 0x020d9801, 499 + 0xf1060f98, 500 + 0xf50800e7, 501 + 0xf0016f21, 502 + 0xa5f001ac, 503 + 0x00b7f104, 504 + 0x50b3f030, 505 + 0xb6040c98, 506 + 0xbcbb0fc4, 507 + 0x020c9800, 508 + 0x98030d98, 509 + 0xe7f1080f, 510 + 0x21f50200, 511 + 0x21f5016f, 512 + 0x01f4025e, 513 + 0x0712f406, 514 + /* 0x06ac: ctx_xfer_post */ 515 + 0x027f21f5, 516 + /* 0x06b0: ctx_xfer_done */ 517 + 0x05b721f5, 518 + 0x000000f8, 519 519 0x00000000, 520 520 0x00000000, 521 521 0x00000000,
+154 -154
drivers/gpu/drm/nouveau/nvkm/engine/gr/fuc/gpcgk208.fuc5.h
··· 276 276 0x02020014, 277 277 0xf6120040, 278 278 0x04bd0002, 279 - 0xfe048141, 279 + 0xfe048441, 280 280 0x00400010, 281 281 0x0000f607, 282 282 0x040204bd, ··· 295 295 0x01c90080, 296 296 0xbd0002f6, 297 297 0x0c308e04, 298 - 0xbd24bd50, 299 - /* 0x0383: init_unk_loop */ 300 - 0x7e44bd34, 301 - 0xb0000065, 302 - 0x0bf400f6, 303 - 0xbb010f0e, 304 - 0x4ffd04f2, 305 - 0x0130b605, 306 - /* 0x0398: init_unk_next */ 307 - 0xb60120b6, 308 - 0x26b004e0, 309 - 0xe21bf401, 310 - /* 0x03a4: init_unk_done */ 311 - 0xb50703b5, 312 - 0x00820804, 313 - 0x22cf0201, 314 - 0x9534bd00, 315 - 0x00800825, 316 - 0x05f601c0, 317 - 0x8004bd00, 318 - 0xf601c100, 319 - 0x04bd0005, 320 - 0x98000e98, 321 - 0x207e010f, 322 - 0x2fbb0001, 323 - 0x003fbb00, 324 - 0x98010e98, 325 - 0x207e020f, 326 - 0x0e980001, 327 - 0x00effd05, 328 - 0xbb002ebb, 329 - 0x0e98003e, 330 - 0x030f9802, 331 - 0x0001207e, 332 - 0xfd070e98, 333 - 0x2ebb00ef, 334 - 0x003ebb00, 335 - 0x800235b6, 336 - 0xf601d300, 337 - 0x04bd0003, 338 - 0xb60825b6, 339 - 0x20b60635, 340 - 0x0130b601, 341 - 0xb60824b6, 342 - 0x2fb20834, 343 - 0x0002687e, 344 - 0xbb002fbb, 345 - 0x0080003f, 346 - 0x03f60201, 347 - 0xbd04bd00, 348 - 0x1f29f024, 349 - 0x02300080, 350 - 0xbd0002f6, 351 - /* 0x0445: main */ 352 - 0x0031f404, 353 - 0x0d0028f4, 354 - 0x00377e24, 355 - 0xf401f400, 356 - 0xf404e4b0, 357 - 0x81fe1d18, 358 - 0xbd060201, 359 - 0x0412fd20, 360 - 0xfd01e4b6, 361 - 0x18fe051e, 362 - 0x05187e00, 363 - 0xd40ef400, 364 - /* 0x0474: main_not_ctx_xfer */ 365 - 0xf010ef94, 366 - 0xf87e01f5, 367 - 0x0ef40002, 368 - /* 0x0481: ih */ 369 - 0xfe80f9c7, 370 - 0x80f90188, 371 - 0xa0f990f9, 372 - 0xd0f9b0f9, 373 - 0xf0f9e0f9, 374 - 0x004a04bd, 375 - 0x00aacf02, 376 - 0xf404abc4, 377 - 0x240d1f0b, 378 - 0xcf1a004e, 379 - 0x004f00ee, 380 - 0x00ffcf19, 381 - 0x0000047e, 382 - 0x0040010e, 383 - 0x000ef61d, 384 - /* 0x04be: ih_no_fifo */ 385 - 0x004004bd, 386 - 0x000af601, 387 - 0xf0fc04bd, 388 - 0xd0fce0fc, 389 - 0xa0fcb0fc, 390 - 0x80fc90fc, 391 - 0xfc0088fe, 392 - 0x0032f480, 393 - /* 0x04de: hub_barrier_done */ 394 - 0x010f01f8, 395 - 0xbb040e98, 396 - 0xffb204fe, 397 - 0x4094188e, 398 - 0x00008f7e, 399 - /* 0x04f2: ctx_redswitch */ 400 - 0x200f00f8, 298 + 0x01e5f050, 299 + 0x34bd24bd, 300 + /* 0x0386: init_unk_loop */ 301 + 0x657e44bd, 302 + 0xf6b00000, 303 + 0x0e0bf400, 304 + 0xf2bb010f, 305 + 0x054ffd04, 306 + /* 0x039b: init_unk_next */ 307 + 0xb60130b6, 308 + 0xe0b60120, 309 + 0x0126b004, 310 + /* 0x03a7: init_unk_done */ 311 + 0xb5e21bf4, 312 + 0x04b50703, 313 + 0x01008208, 314 + 0x0022cf02, 315 + 0x259534bd, 316 + 0xc0008008, 317 + 0x0005f601, 318 + 0x008004bd, 319 + 0x05f601c1, 320 + 0x9804bd00, 321 + 0x0f98000e, 322 + 0x01207e01, 323 + 0x002fbb00, 324 + 0x98003fbb, 325 + 0x0f98010e, 326 + 0x01207e02, 327 + 0x050e9800, 328 + 0xbb00effd, 329 + 0x3ebb002e, 330 + 0x020e9800, 331 + 0x7e030f98, 332 + 0x98000120, 333 + 0xeffd070e, 334 + 0x002ebb00, 335 + 0xb6003ebb, 336 + 0x00800235, 337 + 0x03f601d3, 338 + 0xb604bd00, 339 + 0x35b60825, 340 + 0x0120b606, 341 + 0xb60130b6, 342 + 0x34b60824, 343 + 0x7e2fb208, 344 + 0xbb000268, 345 + 0x3fbb002f, 346 + 0x01008000, 347 + 0x0003f602, 348 + 0x24bd04bd, 349 + 0x801f29f0, 350 + 0xf6023000, 351 + 0x04bd0002, 352 + /* 0x0448: main */ 353 + 0xf40031f4, 354 + 0x240d0028, 355 + 0x0000377e, 356 + 0xb0f401f4, 357 + 0x18f404e4, 358 + 0x0181fe1d, 359 + 0x20bd0602, 360 + 0xb60412fd, 361 + 0x1efd01e4, 362 + 0x0018fe05, 363 + 0x00051b7e, 364 + /* 0x0477: main_not_ctx_xfer */ 365 + 0x94d40ef4, 366 + 0xf5f010ef, 367 + 0x02f87e01, 368 + 
0xc70ef400, 369 + /* 0x0484: ih */ 370 + 0x88fe80f9, 371 + 0xf980f901, 372 + 0xf9a0f990, 373 + 0xf9d0f9b0, 374 + 0xbdf0f9e0, 375 + 0x02004a04, 376 + 0xc400aacf, 377 + 0x0bf404ab, 378 + 0x4e240d1f, 379 + 0xeecf1a00, 380 + 0x19004f00, 381 + 0x7e00ffcf, 382 + 0x0e000004, 383 + 0x1d004001, 384 + 0xbd000ef6, 385 + /* 0x04c1: ih_no_fifo */ 386 + 0x01004004, 387 + 0xbd000af6, 388 + 0xfcf0fc04, 389 + 0xfcd0fce0, 390 + 0xfca0fcb0, 391 + 0xfe80fc90, 392 + 0x80fc0088, 393 + 0xf80032f4, 394 + /* 0x04e1: hub_barrier_done */ 395 + 0x98010f01, 396 + 0xfebb040e, 397 + 0x8effb204, 398 + 0x7e409418, 399 + 0xf800008f, 400 + /* 0x04f5: ctx_redswitch */ 401 + 0x80200f00, 402 + 0xf6018500, 403 + 0x04bd000f, 404 + /* 0x0502: ctx_redswitch_delay */ 405 + 0xe2b6080e, 406 + 0xfd1bf401, 407 + 0x0800f5f1, 408 + 0x0200f5f1, 401 409 0x01850080, 402 410 0xbd000ff6, 403 - /* 0x04ff: ctx_redswitch_delay */ 404 - 0xb6080e04, 405 - 0x1bf401e2, 406 - 0x00f5f1fd, 407 - 0x00f5f108, 408 - 0x85008002, 409 - 0x000ff601, 410 - 0x00f804bd, 411 - /* 0x0518: ctx_xfer */ 412 - 0x02810080, 413 - 0xbd000ff6, 414 - 0x0711f404, 415 - 0x0004f27e, 416 - /* 0x0528: ctx_xfer_not_load */ 417 - 0x0002167e, 418 - 0xfc8024bd, 419 - 0x02f60247, 420 - 0xf004bd00, 421 - 0x20b6012c, 422 - 0x4afc8003, 411 + /* 0x051b: ctx_xfer */ 412 + 0x8000f804, 413 + 0xf6028100, 414 + 0x04bd000f, 415 + 0x7e0711f4, 416 + /* 0x052b: ctx_xfer_not_load */ 417 + 0x7e0004f5, 418 + 0xbd000216, 419 + 0x47fc8024, 423 420 0x0002f602, 424 - 0xacf004bd, 425 - 0x02a5f001, 426 - 0x5000008b, 427 - 0xb6040c98, 428 - 0xbcbb0fc4, 429 - 0x000c9800, 430 - 0x0e010d98, 431 - 0x013d7e00, 432 - 0x01acf000, 433 - 0x5040008b, 434 - 0xb6040c98, 435 - 0xbcbb0fc4, 436 - 0x010c9800, 437 - 0x98020d98, 438 - 0x004e060f, 439 - 0x013d7e08, 440 - 0x01acf000, 441 - 0x8b04a5f0, 442 - 0x98503000, 421 + 0x2cf004bd, 422 + 0x0320b601, 423 + 0x024afc80, 424 + 0xbd0002f6, 425 + 0x01acf004, 426 + 0x8b02a5f0, 427 + 0x98500000, 443 428 0xc4b6040c, 444 429 0x00bcbb0f, 445 - 0x98020c98, 446 - 0x0f98030d, 447 - 0x02004e08, 430 + 0x98000c98, 431 + 0x000e010d, 448 432 0x00013d7e, 449 - 0x00020a7e, 450 - 0xf40601f4, 451 - /* 0x05b2: ctx_xfer_post */ 452 - 0x277e0712, 453 - /* 0x05b6: ctx_xfer_done */ 454 - 0xde7e0002, 455 - 0x00f80004, 456 - 0x00000000, 433 + 0x8b01acf0, 434 + 0x98504000, 435 + 0xc4b6040c, 436 + 0x00bcbb0f, 437 + 0x98010c98, 438 + 0x0f98020d, 439 + 0x08004e06, 440 + 0x00013d7e, 441 + 0xf001acf0, 442 + 0x008b04a5, 443 + 0x0c985030, 444 + 0x0fc4b604, 445 + 0x9800bcbb, 446 + 0x0d98020c, 447 + 0x080f9803, 448 + 0x7e02004e, 449 + 0x7e00013d, 450 + 0xf400020a, 451 + 0x12f40601, 452 + /* 0x05b5: ctx_xfer_post */ 453 + 0x02277e07, 454 + /* 0x05b9: ctx_xfer_done */ 455 + 0x04e17e00, 456 + 0x0000f800, 457 457 0x00000000, 458 458 0x00000000, 459 459 0x00000000,
+239 -239
drivers/gpu/drm/nouveau/nvkm/engine/gr/fuc/gpcgm107.fuc5.h
··· 289 289 0x020014fe, 290 290 0x12004002, 291 291 0xbd0002f6, 292 - 0x05b04104, 292 + 0x05b34104, 293 293 0x400010fe, 294 294 0x00f60700, 295 295 0x0204bd00, ··· 308 308 0xc900800f, 309 309 0x0002f601, 310 310 0x308e04bd, 311 - 0x24bd500c, 312 - 0x44bd34bd, 313 - /* 0x03b0: init_unk_loop */ 314 - 0x0000657e, 315 - 0xf400f6b0, 316 - 0x010f0e0b, 317 - 0xfd04f2bb, 318 - 0x30b6054f, 319 - /* 0x03c5: init_unk_next */ 320 - 0x0120b601, 321 - 0xb004e0b6, 322 - 0x1bf40226, 323 - /* 0x03d1: init_unk_done */ 324 - 0x0703b5e2, 325 - 0x820804b5, 326 - 0xcf020100, 327 - 0x34bd0022, 328 - 0x80082595, 329 - 0xf601c000, 311 + 0xe5f0500c, 312 + 0xbd24bd01, 313 + /* 0x03b3: init_unk_loop */ 314 + 0x7e44bd34, 315 + 0xb0000065, 316 + 0x0bf400f6, 317 + 0xbb010f0e, 318 + 0x4ffd04f2, 319 + 0x0130b605, 320 + /* 0x03c8: init_unk_next */ 321 + 0xb60120b6, 322 + 0x26b004e0, 323 + 0xe21bf402, 324 + /* 0x03d4: init_unk_done */ 325 + 0xb50703b5, 326 + 0x00820804, 327 + 0x22cf0201, 328 + 0x9534bd00, 329 + 0x00800825, 330 + 0x05f601c0, 331 + 0x8004bd00, 332 + 0xf601c100, 330 333 0x04bd0005, 331 - 0x01c10080, 332 - 0xbd0005f6, 333 - 0x000e9804, 334 - 0x7e010f98, 335 - 0xbb000120, 336 - 0x3fbb002f, 337 - 0x010e9800, 338 - 0x7e020f98, 339 - 0x98000120, 340 - 0xeffd050e, 341 - 0x002ebb00, 342 - 0x98003ebb, 343 - 0x0f98020e, 344 - 0x01207e03, 345 - 0x070e9800, 346 - 0xbb00effd, 347 - 0x3ebb002e, 348 - 0x0235b600, 349 - 0x01d30080, 350 - 0xbd0003f6, 351 - 0x0825b604, 352 - 0xb60635b6, 353 - 0x30b60120, 354 - 0x0824b601, 355 - 0xb20834b6, 356 - 0x02687e2f, 357 - 0x002fbb00, 358 - 0x0f003fbb, 359 - 0x8effb23f, 360 - 0xf0501d60, 361 - 0x8f7e01e5, 362 - 0x0c0f0000, 363 - 0xa88effb2, 364 - 0xe5f0501d, 365 - 0x008f7e01, 366 - 0x03147e00, 367 - 0xb23f0f00, 368 - 0x1d608eff, 369 - 0x01e5f050, 370 - 0x00008f7e, 371 - 0xffb2000f, 372 - 0x501d9c8e, 373 - 0x7e01e5f0, 374 - 0x0f00008f, 375 - 0x03147e01, 376 - 0x8effb200, 334 + 0x98000e98, 335 + 0x207e010f, 336 + 0x2fbb0001, 337 + 0x003fbb00, 338 + 0x98010e98, 339 + 0x207e020f, 340 + 0x0e980001, 341 + 0x00effd05, 342 + 0xbb002ebb, 343 + 0x0e98003e, 344 + 0x030f9802, 345 + 0x0001207e, 346 + 0xfd070e98, 347 + 0x2ebb00ef, 348 + 0x003ebb00, 349 + 0x800235b6, 350 + 0xf601d300, 351 + 0x04bd0003, 352 + 0xb60825b6, 353 + 0x20b60635, 354 + 0x0130b601, 355 + 0xb60824b6, 356 + 0x2fb20834, 357 + 0x0002687e, 358 + 0xbb002fbb, 359 + 0x3f0f003f, 360 + 0x501d608e, 361 + 0xb201e5f0, 362 + 0x008f7eff, 363 + 0x8e0c0f00, 377 364 0xf0501da8, 378 - 0x8f7e01e5, 379 - 0xff0f0000, 380 - 0x988effb2, 381 - 0xe5f0501d, 382 - 0x008f7e01, 383 - 0xb2020f00, 384 - 0x1da88eff, 385 - 0x01e5f050, 365 + 0xffb201e5, 386 366 0x00008f7e, 387 367 0x0003147e, 388 - 0x85050498, 389 - 0x98504000, 390 - 0x64b60406, 391 - 0x0056bb0f, 392 - /* 0x04e0: tpc_strand_init_tpc_loop */ 393 - 0x05705eb8, 394 - 0x00657e00, 395 - 0xbdf6b200, 396 - /* 0x04ed: tpc_strand_init_idx_loop */ 397 - 0x605eb874, 398 - 0x7fb20005, 399 - 0x00008f7e, 400 - 0x05885eb8, 401 - 0x082f9500, 402 - 0x00008f7e, 403 - 0x058c5eb8, 404 - 0x082f9500, 405 - 0x00008f7e, 406 - 0x05905eb8, 407 - 0x00657e00, 408 - 0x06f5b600, 409 - 0xb601f0b6, 410 - 0x2fbb08f4, 411 - 0x003fbb00, 412 - 0xb60170b6, 413 - 0x1bf40162, 414 - 0x0050b7bf, 415 - 0x0142b608, 416 - 0x0fa81bf4, 417 - 0x8effb23f, 418 - 0xf0501d60, 419 - 0x8f7e01e5, 420 - 0x0d0f0000, 421 - 0xa88effb2, 368 + 0x608e3f0f, 422 369 0xe5f0501d, 423 - 0x008f7e01, 424 - 0x03147e00, 425 - 0x01008000, 426 - 0x0003f602, 427 - 0x24bd04bd, 428 - 0x801f29f0, 429 - 0xf6023000, 430 - 0x04bd0002, 431 - /* 0x0574: main */ 432 - 
0xf40031f4, 433 - 0x240d0028, 434 - 0x0000377e, 435 - 0xb0f401f4, 436 - 0x18f404e4, 437 - 0x0181fe1d, 438 - 0x20bd0602, 439 - 0xb60412fd, 440 - 0x1efd01e4, 441 - 0x0018fe05, 442 - 0x0006477e, 443 - /* 0x05a3: main_not_ctx_xfer */ 444 - 0x94d40ef4, 445 - 0xf5f010ef, 446 - 0x02f87e01, 447 - 0xc70ef400, 448 - /* 0x05b0: ih */ 449 - 0x88fe80f9, 450 - 0xf980f901, 451 - 0xf9a0f990, 452 - 0xf9d0f9b0, 453 - 0xbdf0f9e0, 454 - 0x02004a04, 455 - 0xc400aacf, 456 - 0x0bf404ab, 457 - 0x4e240d1f, 458 - 0xeecf1a00, 459 - 0x19004f00, 460 - 0x7e00ffcf, 461 - 0x0e000004, 462 - 0x1d004001, 463 - 0xbd000ef6, 464 - /* 0x05ed: ih_no_fifo */ 465 - 0x01004004, 466 - 0xbd000af6, 467 - 0xfcf0fc04, 468 - 0xfcd0fce0, 469 - 0xfca0fcb0, 470 - 0xfe80fc90, 471 - 0x80fc0088, 472 - 0xf80032f4, 473 - /* 0x060d: hub_barrier_done */ 474 - 0x98010f01, 475 - 0xfebb040e, 476 - 0x8effb204, 477 - 0x7e409418, 478 - 0xf800008f, 479 - /* 0x0621: ctx_redswitch */ 480 - 0x80200f00, 370 + 0x7effb201, 371 + 0x0f00008f, 372 + 0x1d9c8e00, 373 + 0x01e5f050, 374 + 0x8f7effb2, 375 + 0x010f0000, 376 + 0x0003147e, 377 + 0x501da88e, 378 + 0xb201e5f0, 379 + 0x008f7eff, 380 + 0x8eff0f00, 381 + 0xf0501d98, 382 + 0xffb201e5, 383 + 0x00008f7e, 384 + 0xa88e020f, 385 + 0xe5f0501d, 386 + 0x7effb201, 387 + 0x7e00008f, 388 + 0x98000314, 389 + 0x00850504, 390 + 0x06985040, 391 + 0x0f64b604, 392 + /* 0x04e3: tpc_strand_init_tpc_loop */ 393 + 0xb80056bb, 394 + 0x0005705e, 395 + 0x0000657e, 396 + 0x74bdf6b2, 397 + /* 0x04f0: tpc_strand_init_idx_loop */ 398 + 0x05605eb8, 399 + 0x7e7fb200, 400 + 0xb800008f, 401 + 0x0005885e, 402 + 0x7e082f95, 403 + 0xb800008f, 404 + 0x00058c5e, 405 + 0x7e082f95, 406 + 0xb800008f, 407 + 0x0005905e, 408 + 0x0000657e, 409 + 0xb606f5b6, 410 + 0xf4b601f0, 411 + 0x002fbb08, 412 + 0xb6003fbb, 413 + 0x62b60170, 414 + 0xbf1bf401, 415 + 0x080050b7, 416 + 0xf40142b6, 417 + 0x3f0fa81b, 418 + 0x501d608e, 419 + 0xb201e5f0, 420 + 0x008f7eff, 421 + 0x8e0d0f00, 422 + 0xf0501da8, 423 + 0xffb201e5, 424 + 0x00008f7e, 425 + 0x0003147e, 426 + 0x02010080, 427 + 0xbd0003f6, 428 + 0xf024bd04, 429 + 0x00801f29, 430 + 0x02f60230, 431 + /* 0x0577: main */ 432 + 0xf404bd00, 433 + 0x28f40031, 434 + 0x7e240d00, 435 + 0xf4000037, 436 + 0xe4b0f401, 437 + 0x1d18f404, 438 + 0x020181fe, 439 + 0xfd20bd06, 440 + 0xe4b60412, 441 + 0x051efd01, 442 + 0x7e0018fe, 443 + 0xf400064a, 444 + /* 0x05a6: main_not_ctx_xfer */ 445 + 0xef94d40e, 446 + 0x01f5f010, 447 + 0x0002f87e, 448 + /* 0x05b3: ih */ 449 + 0xf9c70ef4, 450 + 0x0188fe80, 451 + 0x90f980f9, 452 + 0xb0f9a0f9, 453 + 0xe0f9d0f9, 454 + 0x04bdf0f9, 455 + 0xcf02004a, 456 + 0xabc400aa, 457 + 0x1f0bf404, 458 + 0x004e240d, 459 + 0x00eecf1a, 460 + 0xcf19004f, 461 + 0x047e00ff, 462 + 0x010e0000, 463 + 0xf61d0040, 464 + 0x04bd000e, 465 + /* 0x05f0: ih_no_fifo */ 466 + 0xf6010040, 467 + 0x04bd000a, 468 + 0xe0fcf0fc, 469 + 0xb0fcd0fc, 470 + 0x90fca0fc, 471 + 0x88fe80fc, 472 + 0xf480fc00, 473 + 0x01f80032, 474 + /* 0x0610: hub_barrier_done */ 475 + 0x0e98010f, 476 + 0x04febb04, 477 + 0x188effb2, 478 + 0x8f7e4094, 479 + 0x00f80000, 480 + /* 0x0624: ctx_redswitch */ 481 + 0x0080200f, 482 + 0x0ff60185, 483 + 0x0e04bd00, 484 + /* 0x0631: ctx_redswitch_delay */ 485 + 0x01e2b608, 486 + 0xf1fd1bf4, 487 + 0xf10800f5, 488 + 0x800200f5, 481 489 0xf6018500, 482 490 0x04bd000f, 483 - /* 0x062e: ctx_redswitch_delay */ 484 - 0xe2b6080e, 485 - 0xfd1bf401, 486 - 0x0800f5f1, 487 - 0x0200f5f1, 488 - 0x01850080, 489 - 0xbd000ff6, 490 - /* 0x0647: ctx_xfer */ 491 - 0x8000f804, 492 - 0xf6028100, 493 - 0x04bd000f, 494 - 0xc48effb2, 495 - 0xe5f0501d, 
496 - 0x008f7e01, 497 - 0x0711f400, 498 - 0x0006217e, 499 - /* 0x0664: ctx_xfer_not_load */ 500 - 0x0002167e, 501 - 0xfc8024bd, 502 - 0x02f60247, 503 - 0xf004bd00, 504 - 0x20b6012c, 505 - 0x4afc8003, 491 + /* 0x064a: ctx_xfer */ 492 + 0x008000f8, 493 + 0x0ff60281, 494 + 0x8e04bd00, 495 + 0xf0501dc4, 496 + 0xffb201e5, 497 + 0x00008f7e, 498 + 0x7e0711f4, 499 + /* 0x0667: ctx_xfer_not_load */ 500 + 0x7e000624, 501 + 0xbd000216, 502 + 0x47fc8024, 506 503 0x0002f602, 507 - 0x0c0f04bd, 508 - 0xa88effb2, 504 + 0x2cf004bd, 505 + 0x0320b601, 506 + 0x024afc80, 507 + 0xbd0002f6, 508 + 0x8e0c0f04, 509 + 0xf0501da8, 510 + 0xffb201e5, 511 + 0x00008f7e, 512 + 0x0003147e, 513 + 0x608e3f0f, 509 514 0xe5f0501d, 510 - 0x008f7e01, 511 - 0x03147e00, 512 - 0xb23f0f00, 513 - 0x1d608eff, 514 - 0x01e5f050, 515 - 0x00008f7e, 516 - 0xffb2000f, 517 - 0x501d9c8e, 518 - 0x7e01e5f0, 515 + 0x7effb201, 519 516 0x0f00008f, 520 - 0x03147e01, 521 - 0x01fcf000, 522 - 0xb203f0b6, 523 - 0x1da88eff, 517 + 0x1d9c8e00, 524 518 0x01e5f050, 525 - 0x00008f7e, 526 - 0xf001acf0, 527 - 0x008b02a5, 528 - 0x0c985000, 529 - 0x0fc4b604, 530 - 0x9800bcbb, 531 - 0x0d98000c, 532 - 0x7e000e01, 533 - 0xf000013d, 534 - 0x008b01ac, 535 - 0x0c985040, 536 - 0x0fc4b604, 537 - 0x9800bcbb, 538 - 0x0d98010c, 539 - 0x060f9802, 540 - 0x7e08004e, 541 - 0xf000013d, 519 + 0x8f7effb2, 520 + 0x010f0000, 521 + 0x0003147e, 522 + 0xb601fcf0, 523 + 0xa88e03f0, 524 + 0xe5f0501d, 525 + 0x7effb201, 526 + 0xf000008f, 542 527 0xa5f001ac, 543 - 0x30008b04, 528 + 0x00008b02, 544 529 0x040c9850, 545 530 0xbb0fc4b6, 546 531 0x0c9800bc, 547 - 0x030d9802, 548 - 0x4e080f98, 549 - 0x3d7e0200, 550 - 0x0a7e0001, 551 - 0x147e0002, 552 - 0x01f40003, 553 - 0x1a12f406, 554 - /* 0x073c: ctx_xfer_post */ 555 - 0x0002277e, 556 - 0xffb20d0f, 557 - 0x501da88e, 558 - 0x7e01e5f0, 559 - 0x7e00008f, 560 - /* 0x0753: ctx_xfer_done */ 561 - 0x7e000314, 562 - 0xf800060d, 563 - 0x00000000, 532 + 0x010d9800, 533 + 0x3d7e000e, 534 + 0xacf00001, 535 + 0x40008b01, 536 + 0x040c9850, 537 + 0xbb0fc4b6, 538 + 0x0c9800bc, 539 + 0x020d9801, 540 + 0x4e060f98, 541 + 0x3d7e0800, 542 + 0xacf00001, 543 + 0x04a5f001, 544 + 0x5030008b, 545 + 0xb6040c98, 546 + 0xbcbb0fc4, 547 + 0x020c9800, 548 + 0x98030d98, 549 + 0x004e080f, 550 + 0x013d7e02, 551 + 0x020a7e00, 552 + 0x03147e00, 553 + 0x0601f400, 554 + /* 0x073f: ctx_xfer_post */ 555 + 0x7e1a12f4, 556 + 0x0f000227, 557 + 0x1da88e0d, 558 + 0x01e5f050, 559 + 0x8f7effb2, 560 + 0x147e0000, 561 + /* 0x0756: ctx_xfer_done */ 562 + 0x107e0003, 563 + 0x00f80006, 564 564 0x00000000, 565 565 0x00000000, 566 566 0x00000000,
+4 -2
drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.c
··· 143 143 static int 144 144 gf100_fermi_mthd_zbc_color(struct nvkm_object *object, void *data, u32 size) 145 145 { 146 - struct gf100_gr *gr = (void *)object->engine; 146 + struct gf100_gr *gr = gf100_gr(nvkm_gr(object->engine)); 147 147 union { 148 148 struct fermi_a_zbc_color_v0 v0; 149 149 } *args = data; ··· 189 189 static int 190 190 gf100_fermi_mthd_zbc_depth(struct nvkm_object *object, void *data, u32 size) 191 191 { 192 - struct gf100_gr *gr = (void *)object->engine; 192 + struct gf100_gr *gr = gf100_gr(nvkm_gr(object->engine)); 193 193 union { 194 194 struct fermi_a_zbc_depth_v0 v0; 195 195 } *args = data; ··· 1530 1530 gr->ppc_nr[i] = gr->func->ppc_nr; 1531 1531 for (j = 0; j < gr->ppc_nr[i]; j++) { 1532 1532 u8 mask = nvkm_rd32(device, GPC_UNIT(i, 0x0c30 + (j * 4))); 1533 + if (mask) 1534 + gr->ppc_mask[i] |= (1 << j); 1533 1535 gr->ppc_tpc_nr[i][j] = hweight8(mask); 1534 1536 } 1535 1537 }
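The gf100.c hunk above replaces a bare (void *) cast of object->engine with the driver's typed downcast helpers. Such helpers are normally built on container_of(), which stays correct even when the embedded base object is not the first member of the enclosing structure. A minimal sketch of the pattern, with illustrative names rather than the nouveau definitions:

    #include <linux/kernel.h>	/* container_of() */

    struct engine { int id; };

    struct gr {
            int tpc_total;
            struct engine engine;	/* base object embedded in the subclass */
    };

    /* Typed downcast: computes the enclosing 'struct gr' from the base
     * pointer; a bare (void *) cast would only be right if 'engine'
     * happened to sit at offset 0. */
    static inline struct gr *to_gr(struct engine *e)
    {
            return container_of(e, struct gr, engine);
    }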
+1
drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.h
··· 97 97 u8 tpc_nr[GPC_MAX]; 98 98 u8 tpc_total; 99 99 u8 ppc_nr[GPC_MAX]; 100 + u8 ppc_mask[GPC_MAX]; 100 101 u8 ppc_tpc_nr[GPC_MAX][4]; 101 102 102 103 struct nvkm_memory *unk4188b4;
+5
drivers/gpu/drm/nouveau/nvkm/subdev/instmem/base.c
··· 97 97 nvkm_instobj_dtor(struct nvkm_memory *memory) 98 98 { 99 99 struct nvkm_instobj *iobj = nvkm_instobj(memory); 100 + spin_lock(&iobj->imem->lock); 100 101 list_del(&iobj->head); 102 + spin_unlock(&iobj->imem->lock); 101 103 nvkm_memory_del(&iobj->parent); 102 104 return iobj; 103 105 } ··· 192 190 nvkm_memory_ctor(&nvkm_instobj_func_slow, &iobj->memory); 193 191 iobj->parent = memory; 194 192 iobj->imem = imem; 193 + spin_lock(&iobj->imem->lock); 195 194 list_add_tail(&iobj->head, &imem->list); 195 + spin_unlock(&iobj->imem->lock); 196 196 memory = &iobj->memory; 197 197 } 198 198 ··· 313 309 { 314 310 nvkm_subdev_ctor(&nvkm_instmem, device, index, 0, &imem->subdev); 315 311 imem->func = func; 312 + spin_lock_init(&imem->lock); 316 313 INIT_LIST_HEAD(&imem->list); 317 314 }
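The instmem/base.c fix brackets every list_add_tail()/list_del() on imem->list with a new spinlock, initialised in the constructor, so instance objects created and destroyed concurrently cannot corrupt the list. The pattern in isolation, with simplified structures (not the nvkm types):

    #include <linux/list.h>
    #include <linux/spinlock.h>

    struct registry {
            spinlock_t lock;		/* protects 'items' */
            struct list_head items;
    };

    static void registry_init(struct registry *r)
    {
            spin_lock_init(&r->lock);	/* mirrors the ctor hunk above */
            INIT_LIST_HEAD(&r->items);
    }

    static void registry_add(struct registry *r, struct list_head *node)
    {
            spin_lock(&r->lock);
            list_add_tail(node, &r->items);
            spin_unlock(&r->lock);
    }

    static void registry_del(struct registry *r, struct list_head *node)
    {
            spin_lock(&r->lock);
            list_del(node);
            spin_unlock(&r->lock);
    }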
+1 -1
drivers/gpu/drm/nouveau/nvkm/subdev/volt/gk104.c
··· 59 59 duty = (uv - bios->base) * div / bios->pwm_range; 60 60 61 61 nvkm_wr32(device, 0x20340, div); 62 - nvkm_wr32(device, 0x20344, 0x8000000 | duty); 62 + nvkm_wr32(device, 0x20344, 0x80000000 | duty); 63 63 64 64 return 0; 65 65 }
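The gk104.c change is a dropped-zero typo: 0x8000000 is BIT(27), while the register's enable bit (bit 31, judging by the fix) needs 0x80000000. Spelling such masks with BIT() sidesteps the zero-counting entirely; an illustrative rewrite, using a hypothetical macro name rather than anything in the driver:

    #include <linux/bitops.h>

    #define GK104_PWM_DUTY_EN	BIT(31)	/* hypothetical name for bit 31 */

            /* was: nvkm_wr32(device, 0x20344, 0x8000000 | duty); == BIT(27) */
            nvkm_wr32(device, 0x20344, GK104_PWM_DUTY_EN | duty);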
+11 -4
drivers/gpu/drm/radeon/radeon_object.c
··· 221 221 if (!(rdev->flags & RADEON_IS_PCIE)) 222 222 bo->flags &= ~(RADEON_GEM_GTT_WC | RADEON_GEM_GTT_UC); 223 223 224 + /* Write-combined CPU mappings of GTT cause GPU hangs with RV6xx 225 + * See https://bugs.freedesktop.org/show_bug.cgi?id=91268 226 + */ 227 + if (rdev->family >= CHIP_RV610 && rdev->family <= CHIP_RV635) 228 + bo->flags &= ~(RADEON_GEM_GTT_WC | RADEON_GEM_GTT_UC); 229 + 224 230 #ifdef CONFIG_X86_32 225 231 /* XXX: Write-combined CPU mappings of GTT seem broken on 32-bit 226 232 * See https://bugs.freedesktop.org/show_bug.cgi?id=84627 227 233 */ 228 - bo->flags &= ~RADEON_GEM_GTT_WC; 234 + bo->flags &= ~(RADEON_GEM_GTT_WC | RADEON_GEM_GTT_UC); 229 235 #elif defined(CONFIG_X86) && !defined(CONFIG_X86_PAT) 230 236 /* Don't try to enable write-combining when it can't work, or things 231 237 * may be slow ··· 241 235 #warning Please enable CONFIG_MTRR and CONFIG_X86_PAT for better performance \ 242 236 thanks to write-combining 243 237 244 - DRM_INFO_ONCE("Please enable CONFIG_MTRR and CONFIG_X86_PAT for " 245 - "better performance thanks to write-combining\n"); 246 - bo->flags &= ~RADEON_GEM_GTT_WC; 238 + if (bo->flags & RADEON_GEM_GTT_WC) 239 + DRM_INFO_ONCE("Please enable CONFIG_MTRR and CONFIG_X86_PAT for " 240 + "better performance thanks to write-combining\n"); 241 + bo->flags &= ~(RADEON_GEM_GTT_WC | RADEON_GEM_GTT_UC); 247 242 #endif 248 243 249 244 radeon_ttm_placement_from_domain(bo, domain);
+1 -2
drivers/gpu/drm/radeon/radeon_pm.c
··· 1542 1542 ret = device_create_file(rdev->dev, &dev_attr_power_method); 1543 1543 if (ret) 1544 1544 DRM_ERROR("failed to create device file for power method\n"); 1545 - if (!ret) 1546 - rdev->pm.sysfs_initialized = true; 1545 + rdev->pm.sysfs_initialized = true; 1547 1546 } 1548 1547 1549 1548 mutex_lock(&rdev->pm.mutex);
+1 -1
drivers/gpu/drm/radeon/rv730_dpm.c
··· 464 464 result = rv770_send_msg_to_smc(rdev, PPSMC_MSG_TwoLevelsDisabled); 465 465 466 466 if (result != PPSMC_Result_OK) 467 - DRM_ERROR("Could not force DPM to low\n"); 467 + DRM_DEBUG("Could not force DPM to low\n"); 468 468 469 469 WREG32_P(GENERAL_PWRMGT, 0, ~GLOBAL_PWRMGT_EN); 470 470
+2 -2
drivers/gpu/drm/radeon/rv770_dpm.c
··· 193 193 result = rv770_send_msg_to_smc(rdev, PPSMC_MSG_TwoLevelsDisabled); 194 194 195 195 if (result != PPSMC_Result_OK) 196 - DRM_ERROR("Could not force DPM to low.\n"); 196 + DRM_DEBUG("Could not force DPM to low.\n"); 197 197 198 198 WREG32_P(GENERAL_PWRMGT, 0, ~GLOBAL_PWRMGT_EN); 199 199 ··· 1418 1418 int rv770_set_sw_state(struct radeon_device *rdev) 1419 1419 { 1420 1420 if (rv770_send_msg_to_smc(rdev, PPSMC_MSG_SwitchToSwState) != PPSMC_Result_OK) 1421 - return -EINVAL; 1421 + DRM_DEBUG("rv770_set_sw_state failed\n"); 1422 1422 return 0; 1423 1423 } 1424 1424
+1 -1
drivers/gpu/drm/radeon/si_dpm.c
··· 2927 2927 { PCI_VENDOR_ID_ATI, 0x6810, 0x1462, 0x3036, 0, 120000 }, 2928 2928 { PCI_VENDOR_ID_ATI, 0x6811, 0x174b, 0xe271, 0, 120000 }, 2929 2929 { PCI_VENDOR_ID_ATI, 0x6810, 0x174b, 0xe271, 85000, 90000 }, 2930 - { PCI_VENDOR_ID_ATI, 0x6811, 0x1762, 0x2015, 0, 120000 }, 2930 + { PCI_VENDOR_ID_ATI, 0x6811, 0x1462, 0x2015, 0, 120000 }, 2931 2931 { PCI_VENDOR_ID_ATI, 0x6811, 0x1043, 0x2015, 0, 120000 }, 2932 2932 { 0, 0, 0, 0 }, 2933 2933 };
+5 -4
drivers/gpu/drm/vc4/vc4_crtc.c
··· 168 168 struct drm_connector *connector; 169 169 170 170 drm_for_each_connector(connector, crtc->dev) { 171 - if (connector && connector->state->crtc == crtc) { 171 + if (connector->state->crtc == crtc) { 172 172 struct drm_encoder *encoder = connector->encoder; 173 173 struct vc4_encoder *vc4_encoder = 174 174 to_vc4_encoder(encoder); ··· 401 401 dlist_next++; 402 402 403 403 HVS_WRITE(SCALER_DISPLISTX(vc4_crtc->channel), 404 - (u32 *)vc4_crtc->dlist - (u32 *)vc4->hvs->dlist); 404 + (u32 __iomem *)vc4_crtc->dlist - 405 + (u32 __iomem *)vc4->hvs->dlist); 405 406 406 407 /* Make the next display list start after ours. */ 407 408 vc4_crtc->dlist_size -= (dlist_next - vc4_crtc->dlist); ··· 592 591 * that will take too much. 593 592 */ 594 593 primary_plane = vc4_plane_init(drm, DRM_PLANE_TYPE_PRIMARY); 595 - if (!primary_plane) { 594 + if (IS_ERR(primary_plane)) { 596 595 dev_err(dev, "failed to construct primary plane\n"); 597 596 ret = PTR_ERR(primary_plane); 598 597 goto err; 599 598 } 600 599 601 600 cursor_plane = vc4_plane_init(drm, DRM_PLANE_TYPE_CURSOR); 602 - if (!cursor_plane) { 601 + if (IS_ERR(cursor_plane)) { 603 602 dev_err(dev, "failed to construct cursor plane\n"); 604 603 ret = PTR_ERR(cursor_plane); 605 604 goto err_primary;
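Two of the vc4_crtc.c hunks correct pointer-error checking: vc4_plane_init() reports failure through ERR_PTR(), so its result is never NULL and the old `if (!plane)` test could never fire; IS_ERR()/PTR_ERR() are the right tests. The idiom in general form, with an illustrative constructor:

    #include <linux/err.h>

    /* thing_create() returns a valid pointer or ERR_PTR(-errno), never NULL */
    struct thing *thing_create(void);

    int caller(void)
    {
            struct thing *t = thing_create();

            if (IS_ERR(t))			/* not 'if (!t)': ERR_PTR values are non-NULL */
                    return PTR_ERR(t);	/* recover the negative errno */
            /* ... use t ... */
            return 0;
    }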
-1
drivers/gpu/drm/vc4/vc4_drv.c
··· 259 259 .remove = vc4_platform_drm_remove, 260 260 .driver = { 261 261 .name = "vc4-drm", 262 - .owner = THIS_MODULE, 263 262 .of_match_table = vc4_of_match, 264 263 }, 265 264 };
+4 -4
drivers/gpu/drm/vc4/vc4_hvs.c
··· 75 75 for (i = 0; i < 64; i += 4) { 76 76 DRM_INFO("0x%08x (%s): 0x%08x 0x%08x 0x%08x 0x%08x\n", 77 77 i * 4, i < HVS_BOOTLOADER_DLIST_END ? "B" : "D", 78 - ((uint32_t *)vc4->hvs->dlist)[i + 0], 79 - ((uint32_t *)vc4->hvs->dlist)[i + 1], 80 - ((uint32_t *)vc4->hvs->dlist)[i + 2], 81 - ((uint32_t *)vc4->hvs->dlist)[i + 3]); 78 + readl((u32 __iomem *)vc4->hvs->dlist + i + 0), 79 + readl((u32 __iomem *)vc4->hvs->dlist + i + 1), 80 + readl((u32 __iomem *)vc4->hvs->dlist + i + 2), 81 + readl((u32 __iomem *)vc4->hvs->dlist + i + 3)); 82 82 } 83 83 } 84 84
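The vc4_hvs.c debug dump stops indexing the display list as if it were ordinary memory: the list lives behind an ioremap()ed mapping, and __iomem pointers must be read through the MMIO accessors. A condensed sketch of the corrected access:

    #include <linux/io.h>
    #include <linux/printk.h>

    static void dump_words(void __iomem *base, int n)
    {
            u32 __iomem *p = base;
            int i;

            for (i = 0; i < n; i++)
                    pr_info("0x%08x\n", readl(p + i));	/* readl(), not p[i] */
    }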
+14 -4
drivers/gpu/drm/vc4/vc4_plane.c
··· 70 70 return state->fb && state->crtc; 71 71 } 72 72 73 - struct drm_plane_state *vc4_plane_duplicate_state(struct drm_plane *plane) 73 + static struct drm_plane_state *vc4_plane_duplicate_state(struct drm_plane *plane) 74 74 { 75 75 struct vc4_plane_state *vc4_state; 76 76 ··· 97 97 return &vc4_state->base; 98 98 } 99 99 100 - void vc4_plane_destroy_state(struct drm_plane *plane, 101 - struct drm_plane_state *state) 100 + static void vc4_plane_destroy_state(struct drm_plane *plane, 101 + struct drm_plane_state *state) 102 102 { 103 103 struct vc4_plane_state *vc4_state = to_vc4_plane_state(state); 104 104 ··· 108 108 } 109 109 110 110 /* Called during init to allocate the plane's atomic state. */ 111 - void vc4_plane_reset(struct drm_plane *plane) 111 + static void vc4_plane_reset(struct drm_plane *plane) 112 112 { 113 113 struct vc4_plane_state *vc4_state; 114 114 ··· 156 156 int crtc_y = state->crtc_y; 157 157 int crtc_w = state->crtc_w; 158 158 int crtc_h = state->crtc_h; 159 + 160 + if (state->crtc_w << 16 != state->src_w || 161 + state->crtc_h << 16 != state->src_h) { 162 + /* We don't support scaling yet, which involves 163 + * allocating the LBM memory for scaling temporary 164 + * storage, and putting filter kernels in the HVS 165 + * context. 166 + */ 167 + return -EINVAL; 168 + } 159 169 160 170 if (crtc_x < 0) { 161 171 offset += drm_format_plane_cpp(fb->pixel_format, 0) * -crtc_x;
+1
drivers/hid/hid-ids.h
··· 609 609 #define USB_DEVICE_ID_LOGITECH_HARMONY_FIRST 0xc110 610 610 #define USB_DEVICE_ID_LOGITECH_HARMONY_LAST 0xc14f 611 611 #define USB_DEVICE_ID_LOGITECH_HARMONY_PS3 0x0306 612 + #define USB_DEVICE_ID_LOGITECH_KEYBOARD_G710_PLUS 0xc24d 612 613 #define USB_DEVICE_ID_LOGITECH_MOUSE_C01A 0xc01a 613 614 #define USB_DEVICE_ID_LOGITECH_MOUSE_C05A 0xc05a 614 615 #define USB_DEVICE_ID_LOGITECH_MOUSE_C06A 0xc06a
+3 -2
drivers/hid/hid-lg.c
··· 665 665 struct lg_drv_data *drv_data; 666 666 int ret; 667 667 668 - /* Only work with the 1st interface (G29 presents multiple) */ 669 - if (iface_num != 0) { 668 + /* G29 only work with the 1st interface */ 669 + if ((hdev->product == USB_DEVICE_ID_LOGITECH_G29_WHEEL) && 670 + (iface_num != 0)) { 670 671 dbg_hid("%s: ignoring ifnum %d\n", __func__, iface_num); 671 672 return -ENODEV; 672 673 }
+1
drivers/hid/usbhid/hid-quirks.c
··· 84 84 { USB_VENDOR_ID_HP, USB_PRODUCT_ID_HP_LOGITECH_OEM_USB_OPTICAL_MOUSE_0B4A, HID_QUIRK_ALWAYS_POLL }, 85 85 { USB_VENDOR_ID_HP, USB_PRODUCT_ID_HP_PIXART_OEM_USB_OPTICAL_MOUSE, HID_QUIRK_ALWAYS_POLL }, 86 86 { USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_C077, HID_QUIRK_ALWAYS_POLL }, 87 + { USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_KEYBOARD_G710_PLUS, HID_QUIRK_NOGET }, 87 88 { USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_MOUSE_C01A, HID_QUIRK_ALWAYS_POLL }, 88 89 { USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_MOUSE_C05A, HID_QUIRK_ALWAYS_POLL }, 89 90 { USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_MOUSE_C06A, HID_QUIRK_ALWAYS_POLL },
+3 -2
drivers/hid/wacom_wac.c
··· 2481 2481 if (features->pktlen == WACOM_PKGLEN_BBTOUCH3) { 2482 2482 if (features->touch_max) 2483 2483 features->device_type |= WACOM_DEVICETYPE_TOUCH; 2484 - if (features->type >= INTUOSHT || features->type <= BAMBOO_PT) 2484 + if (features->type >= INTUOSHT && features->type <= BAMBOO_PT) 2485 2485 features->device_type |= WACOM_DEVICETYPE_PAD; 2486 2486 2487 2487 features->x_max = 4096; ··· 3213 3213 WACOM_DTU_OFFSET, WACOM_DTU_OFFSET }; 3214 3214 static const struct wacom_features wacom_features_0x336 = 3215 3215 { "Wacom DTU1141", 23472, 13203, 1023, 0, 3216 - DTUS, WACOM_INTUOS_RES, WACOM_INTUOS_RES, 4 }; 3216 + DTUS, WACOM_INTUOS_RES, WACOM_INTUOS_RES, 4, 3217 + WACOM_DTU_OFFSET, WACOM_DTU_OFFSET }; 3217 3218 static const struct wacom_features wacom_features_0x57 = 3218 3219 { "Wacom DTK2241", 95640, 54060, 2047, 63, 3219 3220 DTK, WACOM_INTUOS3_RES, WACOM_INTUOS3_RES, 6,
+1
drivers/i2c/busses/Kconfig
··· 126 126 Sunrise Point-LP (PCH) 127 127 DNV (SOC) 128 128 Broxton (SOC) 129 + Lewisburg (PCH) 129 130 130 131 This driver can also be built as a module. If so, the module 131 132 will be called i2c-i801.
+6
drivers/i2c/busses/i2c-i801.c
··· 62 62 * Sunrise Point-LP (PCH) 0x9d23 32 hard yes yes yes 63 63 * DNV (SOC) 0x19df 32 hard yes yes yes 64 64 * Broxton (SOC) 0x5ad4 32 hard yes yes yes 65 + * Lewisburg (PCH) 0xa1a3 32 hard yes yes yes 66 + * Lewisburg Supersku (PCH) 0xa223 32 hard yes yes yes 65 67 * 66 68 * Features supported by this driver: 67 69 * Software PEC no ··· 208 206 #define PCI_DEVICE_ID_INTEL_SUNRISEPOINT_LP_SMBUS 0x9d23 209 207 #define PCI_DEVICE_ID_INTEL_DNV_SMBUS 0x19df 210 208 #define PCI_DEVICE_ID_INTEL_BROXTON_SMBUS 0x5ad4 209 + #define PCI_DEVICE_ID_INTEL_LEWISBURG_SMBUS 0xa1a3 210 + #define PCI_DEVICE_ID_INTEL_LEWISBURG_SSKU_SMBUS 0xa223 211 211 212 212 struct i801_mux_config { 213 213 char *gpio_chip; ··· 873 869 { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_SUNRISEPOINT_LP_SMBUS) }, 874 870 { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_DNV_SMBUS) }, 875 871 { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_BROXTON_SMBUS) }, 872 + { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_LEWISBURG_SMBUS) }, 873 + { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_LEWISBURG_SSKU_SMBUS) }, 876 874 { 0, } 877 875 }; 878 876
+1
drivers/i2c/busses/i2c-imx.c
··· 50 50 #include <linux/of_device.h> 51 51 #include <linux/of_dma.h> 52 52 #include <linux/of_gpio.h> 53 + #include <linux/pinctrl/consumer.h> 53 54 #include <linux/platform_data/i2c-imx.h> 54 55 #include <linux/platform_device.h> 55 56 #include <linux/sched.h>
+3 -1
drivers/i2c/busses/i2c-xiic.c
··· 662 662 663 663 static void xiic_start_xfer(struct xiic_i2c *i2c) 664 664 { 665 - 665 + spin_lock(&i2c->lock); 666 + xiic_reinit(i2c); 666 667 __xiic_start_xfer(i2c); 668 + spin_unlock(&i2c->lock); 667 669 } 668 670 669 671 static int xiic_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs, int num)
+1 -1
drivers/i2c/i2c-core.c
··· 715 715 if (wakeirq > 0 && wakeirq != client->irq) 716 716 status = dev_pm_set_dedicated_wake_irq(dev, wakeirq); 717 717 else if (client->irq > 0) 718 - status = dev_pm_set_wake_irq(dev, wakeirq); 718 + status = dev_pm_set_wake_irq(dev, client->irq); 719 719 else 720 720 status = 0; 721 721
+1 -1
drivers/iio/adc/ad7793.c
··· 101 101 #define AD7795_CH_AIN1M_AIN1M 8 /* AIN1(-) - AIN1(-) */ 102 102 103 103 /* ID Register Bit Designations (AD7793_REG_ID) */ 104 - #define AD7785_ID 0xB 104 + #define AD7785_ID 0x3 105 105 #define AD7792_ID 0xA 106 106 #define AD7793_ID 0xB 107 107 #define AD7794_ID 0xF
+16 -6
drivers/iio/adc/vf610_adc.c
··· 106 106 107 107 #define DEFAULT_SAMPLE_TIME 1000 108 108 109 + /* V at 25°C of 696 mV */ 110 + #define VF610_VTEMP25_3V0 950 111 + /* V at 25°C of 699 mV */ 112 + #define VF610_VTEMP25_3V3 867 113 + /* Typical sensor slope coefficient at all temperatures */ 114 + #define VF610_TEMP_SLOPE_COEFF 1840 115 + 109 116 enum clk_sel { 110 117 VF610_ADCIOC_BUSCLK_SET, 111 118 VF610_ADCIOC_ALTCLK_SET, ··· 204 197 adc_feature->clk_div = 8; 205 198 } 206 199 200 + adck_rate = ipg_rate / adc_feature->clk_div; 201 + 207 202 /* 208 203 * Determine the long sample time adder value to be used based 209 204 * on the default minimum sample time provided. ··· 230 221 * BCT (Base Conversion Time): fixed to 25 ADCK cycles for 12 bit mode 231 222 * LSTAdder(Long Sample Time): 3, 5, 7, 9, 13, 17, 21, 25 ADCK cycles 232 223 */ 233 - adck_rate = ipg_rate / info->adc_feature.clk_div; 234 224 for (i = 0; i < ARRAY_SIZE(vf610_hw_avgs); i++) 235 225 info->sample_freq_avail[i] = 236 226 adck_rate / (6 + vf610_hw_avgs[i] * ··· 671 663 break; 672 664 case IIO_TEMP: 673 665 /* 674 - * Calculate in degree Celsius times 1000 675 - * Using sensor slope of 1.84 mV/°C and 676 - * V at 25°C of 696 mV 677 - */ 678 - *val = 25000 - ((int)info->value - 864) * 1000000 / 1840; 666 + * Calculate in degree Celsius times 1000 667 + * Using the typical sensor slope of 1.84 mV/°C 668 + * and VREFH_ADC at 3.3V, V at 25°C of 699 mV 669 + */ 670 + *val = 25000 - ((int)info->value - VF610_VTEMP25_3V3) * 671 + 1000000 / VF610_TEMP_SLOPE_COEFF; 672 + 679 673 break; 680 674 default: 681 675 mutex_unlock(&indio_dev->mlock);
+1
drivers/iio/adc/xilinx-xadc-core.c
··· 841 841 case XADC_REG_VCCINT: 842 842 case XADC_REG_VCCAUX: 843 843 case XADC_REG_VREFP: 844 + case XADC_REG_VREFN: 844 845 case XADC_REG_VCCBRAM: 845 846 case XADC_REG_VCCPINT: 846 847 case XADC_REG_VCCPAUX:
+64 -27
drivers/iio/dac/ad5064.c
··· 113 113 ID_AD5065, 114 114 ID_AD5628_1, 115 115 ID_AD5628_2, 116 + ID_AD5629_1, 117 + ID_AD5629_2, 116 118 ID_AD5648_1, 117 119 ID_AD5648_2, 118 120 ID_AD5666_1, 119 121 ID_AD5666_2, 120 122 ID_AD5668_1, 121 123 ID_AD5668_2, 124 + ID_AD5669_1, 125 + ID_AD5669_2, 122 126 }; 123 127 124 128 static int ad5064_write(struct ad5064_state *st, unsigned int cmd, ··· 295 291 { }, 296 292 }; 297 293 298 - #define AD5064_CHANNEL(chan, addr, bits) { \ 294 + #define AD5064_CHANNEL(chan, addr, bits, _shift) { \ 299 295 .type = IIO_VOLTAGE, \ 300 296 .indexed = 1, \ 301 297 .output = 1, \ ··· 307 303 .sign = 'u', \ 308 304 .realbits = (bits), \ 309 305 .storagebits = 16, \ 310 - .shift = 20 - bits, \ 306 + .shift = (_shift), \ 311 307 }, \ 312 308 .ext_info = ad5064_ext_info, \ 313 309 } 314 310 315 - #define DECLARE_AD5064_CHANNELS(name, bits) \ 311 + #define DECLARE_AD5064_CHANNELS(name, bits, shift) \ 316 312 const struct iio_chan_spec name[] = { \ 317 - AD5064_CHANNEL(0, 0, bits), \ 318 - AD5064_CHANNEL(1, 1, bits), \ 319 - AD5064_CHANNEL(2, 2, bits), \ 320 - AD5064_CHANNEL(3, 3, bits), \ 321 - AD5064_CHANNEL(4, 4, bits), \ 322 - AD5064_CHANNEL(5, 5, bits), \ 323 - AD5064_CHANNEL(6, 6, bits), \ 324 - AD5064_CHANNEL(7, 7, bits), \ 313 + AD5064_CHANNEL(0, 0, bits, shift), \ 314 + AD5064_CHANNEL(1, 1, bits, shift), \ 315 + AD5064_CHANNEL(2, 2, bits, shift), \ 316 + AD5064_CHANNEL(3, 3, bits, shift), \ 317 + AD5064_CHANNEL(4, 4, bits, shift), \ 318 + AD5064_CHANNEL(5, 5, bits, shift), \ 319 + AD5064_CHANNEL(6, 6, bits, shift), \ 320 + AD5064_CHANNEL(7, 7, bits, shift), \ 325 321 } 326 322 327 - #define DECLARE_AD5065_CHANNELS(name, bits) \ 323 + #define DECLARE_AD5065_CHANNELS(name, bits, shift) \ 328 324 const struct iio_chan_spec name[] = { \ 329 - AD5064_CHANNEL(0, 0, bits), \ 330 - AD5064_CHANNEL(1, 3, bits), \ 325 + AD5064_CHANNEL(0, 0, bits, shift), \ 326 + AD5064_CHANNEL(1, 3, bits, shift), \ 331 327 } 332 328 333 - static DECLARE_AD5064_CHANNELS(ad5024_channels, 12); 334 - static DECLARE_AD5064_CHANNELS(ad5044_channels, 14); 335 - static DECLARE_AD5064_CHANNELS(ad5064_channels, 16); 329 + static DECLARE_AD5064_CHANNELS(ad5024_channels, 12, 8); 330 + static DECLARE_AD5064_CHANNELS(ad5044_channels, 14, 6); 331 + static DECLARE_AD5064_CHANNELS(ad5064_channels, 16, 4); 336 332 337 - static DECLARE_AD5065_CHANNELS(ad5025_channels, 12); 338 - static DECLARE_AD5065_CHANNELS(ad5045_channels, 14); 339 - static DECLARE_AD5065_CHANNELS(ad5065_channels, 16); 333 + static DECLARE_AD5065_CHANNELS(ad5025_channels, 12, 8); 334 + static DECLARE_AD5065_CHANNELS(ad5045_channels, 14, 6); 335 + static DECLARE_AD5065_CHANNELS(ad5065_channels, 16, 4); 336 + 337 + static DECLARE_AD5064_CHANNELS(ad5629_channels, 12, 4); 338 + static DECLARE_AD5064_CHANNELS(ad5669_channels, 16, 0); 340 339 341 340 static const struct ad5064_chip_info ad5064_chip_info_tbl[] = { 342 341 [ID_AD5024] = { ··· 389 382 .channels = ad5024_channels, 390 383 .num_channels = 8, 391 384 }, 385 + [ID_AD5629_1] = { 386 + .shared_vref = true, 387 + .internal_vref = 2500000, 388 + .channels = ad5629_channels, 389 + .num_channels = 8, 390 + }, 391 + [ID_AD5629_2] = { 392 + .shared_vref = true, 393 + .internal_vref = 5000000, 394 + .channels = ad5629_channels, 395 + .num_channels = 8, 396 + }, 392 397 [ID_AD5648_1] = { 393 398 .shared_vref = true, 394 399 .internal_vref = 2500000, ··· 435 416 .shared_vref = true, 436 417 .internal_vref = 5000000, 437 418 .channels = ad5064_channels, 419 + .num_channels = 8, 420 + }, 421 + [ID_AD5669_1] = { 422 + 
.shared_vref = true, 423 + .internal_vref = 2500000, 424 + .channels = ad5669_channels, 425 + .num_channels = 8, 426 + }, 427 + [ID_AD5669_2] = { 428 + .shared_vref = true, 429 + .internal_vref = 5000000, 430 + .channels = ad5669_channels, 438 431 .num_channels = 8, 439 432 }, 440 433 }; ··· 628 597 unsigned int addr, unsigned int val) 629 598 { 630 599 struct i2c_client *i2c = to_i2c_client(st->dev); 600 + int ret; 631 601 632 602 st->data.i2c[0] = (cmd << 4) | addr; 633 603 put_unaligned_be16(val, &st->data.i2c[1]); 634 - return i2c_master_send(i2c, st->data.i2c, 3); 604 + 605 + ret = i2c_master_send(i2c, st->data.i2c, 3); 606 + if (ret < 0) 607 + return ret; 608 + 609 + return 0; 635 610 } 636 611 637 612 static int ad5064_i2c_probe(struct i2c_client *i2c, ··· 653 616 } 654 617 655 618 static const struct i2c_device_id ad5064_i2c_ids[] = { 656 - {"ad5629-1", ID_AD5628_1}, 657 - {"ad5629-2", ID_AD5628_2}, 658 - {"ad5629-3", ID_AD5628_2}, /* similar enough to ad5629-2 */ 659 - {"ad5669-1", ID_AD5668_1}, 660 - {"ad5669-2", ID_AD5668_2}, 661 - {"ad5669-3", ID_AD5668_2}, /* similar enough to ad5669-2 */ 619 + {"ad5629-1", ID_AD5629_1}, 620 + {"ad5629-2", ID_AD5629_2}, 621 + {"ad5629-3", ID_AD5629_2}, /* similar enough to ad5629-2 */ 622 + {"ad5669-1", ID_AD5669_1}, 623 + {"ad5669-2", ID_AD5669_2}, 624 + {"ad5669-3", ID_AD5669_2}, /* similar enough to ad5669-2 */ 662 625 {} 663 626 }; 664 627 MODULE_DEVICE_TABLE(i2c, ad5064_i2c_ids);
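Besides the new per-chip channel tables, the ad5064 diff fixes the I2C write path: i2c_master_send() returns the number of bytes transferred on success, so returning it straight through made every successful 3-byte write look like a positive "error" to callers expecting zero. The convention, reduced to a sketch:

    #include <linux/i2c.h>

    static int dev_write3(struct i2c_client *i2c, const u8 *buf)
    {
            int ret = i2c_master_send(i2c, buf, 3);

            if (ret < 0)
                    return ret;	/* negative errno from the bus */
            return 0;		/* success: hide the byte count */
    }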
+4 -4
drivers/iio/humidity/si7020.c
··· 50 50 51 51 switch (mask) { 52 52 case IIO_CHAN_INFO_RAW: 53 - ret = i2c_smbus_read_word_data(*client, 54 - chan->type == IIO_TEMP ? 55 - SI7020CMD_TEMP_HOLD : 56 - SI7020CMD_RH_HOLD); 53 + ret = i2c_smbus_read_word_swapped(*client, 54 + chan->type == IIO_TEMP ? 55 + SI7020CMD_TEMP_HOLD : 56 + SI7020CMD_RH_HOLD); 57 57 if (ret < 0) 58 58 return ret; 59 59 *val = ret >> 2;
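The SI7020 returns its 16-bit readings most-significant byte first, while i2c_smbus_read_word_data() assumes SMBus little-endian order; i2c_smbus_read_word_swapped() performs the same transaction with the bytes exchanged. Roughly what the swapped variant does, as a sketch:

    #include <linux/i2c.h>
    #include <linux/swab.h>

    static int read_be16(struct i2c_client *client, u8 cmd)
    {
            int ret = i2c_smbus_read_word_data(client, cmd);

            if (ret < 0)
                    return ret;
            return swab16(ret);	/* undo the little-endian assumption */
    }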
+9 -4
drivers/irqchip/irq-gic-common.c
··· 84 84 writel_relaxed(GICD_INT_DEF_PRI_X4, base + GIC_DIST_PRI + i); 85 85 86 86 /* 87 - * Disable all interrupts. Leave the PPI and SGIs alone 88 - * as they are enabled by redistributor registers. 87 + * Deactivate and disable all SPIs. Leave the PPI and SGIs 88 + * alone as they are in the redistributor registers on GICv3. 89 89 */ 90 - for (i = 32; i < gic_irqs; i += 32) 90 + for (i = 32; i < gic_irqs; i += 32) { 91 91 writel_relaxed(GICD_INT_EN_CLR_X32, 92 - base + GIC_DIST_ENABLE_CLEAR + i / 8); 92 + base + GIC_DIST_ACTIVE_CLEAR + i / 8); 93 + writel_relaxed(GICD_INT_EN_CLR_X32, 94 + base + GIC_DIST_ENABLE_CLEAR + i / 8); 95 + } 93 96 94 97 if (sync_access) 95 98 sync_access(); ··· 105 102 /* 106 103 * Deal with the banked PPI and SGI interrupts - disable all 107 104 * PPI interrupts, ensure all SGI interrupts are enabled. 105 + * Make sure everything is deactivated. 108 106 */ 107 + writel_relaxed(GICD_INT_EN_CLR_X32, base + GIC_DIST_ACTIVE_CLEAR); 109 108 writel_relaxed(GICD_INT_EN_CLR_PPI, base + GIC_DIST_ENABLE_CLEAR); 110 109 writel_relaxed(GICD_INT_EN_SET_SGI, base + GIC_DIST_ENABLE_SET); 111 110
+36 -2
drivers/irqchip/irq-gic.c
··· 73 73 union gic_base cpu_base; 74 74 #ifdef CONFIG_CPU_PM 75 75 u32 saved_spi_enable[DIV_ROUND_UP(1020, 32)]; 76 + u32 saved_spi_active[DIV_ROUND_UP(1020, 32)]; 76 77 u32 saved_spi_conf[DIV_ROUND_UP(1020, 16)]; 77 78 u32 saved_spi_target[DIV_ROUND_UP(1020, 4)]; 78 79 u32 __percpu *saved_ppi_enable; 80 + u32 __percpu *saved_ppi_active; 79 81 u32 __percpu *saved_ppi_conf; 80 82 #endif 81 83 struct irq_domain *domain; ··· 568 566 for (i = 0; i < DIV_ROUND_UP(gic_irqs, 32); i++) 569 567 gic_data[gic_nr].saved_spi_enable[i] = 570 568 readl_relaxed(dist_base + GIC_DIST_ENABLE_SET + i * 4); 569 + 570 + for (i = 0; i < DIV_ROUND_UP(gic_irqs, 32); i++) 571 + gic_data[gic_nr].saved_spi_active[i] = 572 + readl_relaxed(dist_base + GIC_DIST_ACTIVE_SET + i * 4); 571 573 } 572 574 573 575 /* ··· 610 604 writel_relaxed(gic_data[gic_nr].saved_spi_target[i], 611 605 dist_base + GIC_DIST_TARGET + i * 4); 612 606 613 - for (i = 0; i < DIV_ROUND_UP(gic_irqs, 32); i++) 607 + for (i = 0; i < DIV_ROUND_UP(gic_irqs, 32); i++) { 608 + writel_relaxed(GICD_INT_EN_CLR_X32, 609 + dist_base + GIC_DIST_ENABLE_CLEAR + i * 4); 614 610 writel_relaxed(gic_data[gic_nr].saved_spi_enable[i], 615 611 dist_base + GIC_DIST_ENABLE_SET + i * 4); 612 + } 613 + 614 + for (i = 0; i < DIV_ROUND_UP(gic_irqs, 32); i++) { 615 + writel_relaxed(GICD_INT_EN_CLR_X32, 616 + dist_base + GIC_DIST_ACTIVE_CLEAR + i * 4); 617 + writel_relaxed(gic_data[gic_nr].saved_spi_active[i], 618 + dist_base + GIC_DIST_ACTIVE_SET + i * 4); 619 + } 616 620 617 621 writel_relaxed(GICD_ENABLE, dist_base + GIC_DIST_CTRL); 618 622 } ··· 647 631 for (i = 0; i < DIV_ROUND_UP(32, 32); i++) 648 632 ptr[i] = readl_relaxed(dist_base + GIC_DIST_ENABLE_SET + i * 4); 649 633 634 + ptr = raw_cpu_ptr(gic_data[gic_nr].saved_ppi_active); 635 + for (i = 0; i < DIV_ROUND_UP(32, 32); i++) 636 + ptr[i] = readl_relaxed(dist_base + GIC_DIST_ACTIVE_SET + i * 4); 637 + 650 638 ptr = raw_cpu_ptr(gic_data[gic_nr].saved_ppi_conf); 651 639 for (i = 0; i < DIV_ROUND_UP(32, 16); i++) 652 640 ptr[i] = readl_relaxed(dist_base + GIC_DIST_CONFIG + i * 4); ··· 674 654 return; 675 655 676 656 ptr = raw_cpu_ptr(gic_data[gic_nr].saved_ppi_enable); 677 - for (i = 0; i < DIV_ROUND_UP(32, 32); i++) 657 + for (i = 0; i < DIV_ROUND_UP(32, 32); i++) { 658 + writel_relaxed(GICD_INT_EN_CLR_X32, 659 + dist_base + GIC_DIST_ENABLE_CLEAR + i * 4); 678 660 writel_relaxed(ptr[i], dist_base + GIC_DIST_ENABLE_SET + i * 4); 661 + } 662 + 663 + ptr = raw_cpu_ptr(gic_data[gic_nr].saved_ppi_active); 664 + for (i = 0; i < DIV_ROUND_UP(32, 32); i++) { 665 + writel_relaxed(GICD_INT_EN_CLR_X32, 666 + dist_base + GIC_DIST_ACTIVE_CLEAR + i * 4); 667 + writel_relaxed(ptr[i], dist_base + GIC_DIST_ACTIVE_SET + i * 4); 668 + } 679 669 680 670 ptr = raw_cpu_ptr(gic_data[gic_nr].saved_ppi_conf); 681 671 for (i = 0; i < DIV_ROUND_UP(32, 16); i++) ··· 739 709 gic->saved_ppi_enable = __alloc_percpu(DIV_ROUND_UP(32, 32) * 4, 740 710 sizeof(u32)); 741 711 BUG_ON(!gic->saved_ppi_enable); 712 + 713 + gic->saved_ppi_active = __alloc_percpu(DIV_ROUND_UP(32, 32) * 4, 714 + sizeof(u32)); 715 + BUG_ON(!gic->saved_ppi_active); 742 716 743 717 gic->saved_ppi_conf = __alloc_percpu(DIV_ROUND_UP(32, 16) * 4, 744 718 sizeof(u32));
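Both GIC patches converge on one idea: the distributor's set-enable and set-active registers are write-1-to-set and ignore zero bits, so restoring a snapshot must first clear the whole 32-interrupt bank through the matching clear register and only then write the saved mask; otherwise bits set at resume time but clear in the snapshot would leak through. One bank, sketched:

    #include <linux/io.h>

    /* set_reg is write-1-to-set (e.g. GICD_ISENABLERn), clr_reg the
     * matching write-1-to-clear register (GICD_ICENABLERn). */
    static void gic_restore_bank(void __iomem *clr_reg,
                                 void __iomem *set_reg, u32 saved)
    {
            writel_relaxed(0xffffffff, clr_reg);	/* wipe all 32 bits */
            writel_relaxed(saved, set_reg);		/* set only the saved ones */
    }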
+1 -1
drivers/isdn/hisax/config.c
··· 1896 1896 ptr--; 1897 1897 *ptr++ = '\n'; 1898 1898 *ptr = 0; 1899 - HiSax_putstatus(cs, NULL, "%s", cs->dlog); 1899 + HiSax_putstatus(cs, NULL, cs->dlog); 1900 1900 } else 1901 1901 HiSax_putstatus(cs, "LogEcho: ", 1902 1902 "warning Frame too big (%d)",
+1 -1
drivers/isdn/hisax/hfc_pci.c
··· 901 901 ptr--; 902 902 *ptr++ = '\n'; 903 903 *ptr = 0; 904 - HiSax_putstatus(cs, NULL, "%s", cs->dlog); 904 + HiSax_putstatus(cs, NULL, cs->dlog); 905 905 } else 906 906 HiSax_putstatus(cs, "LogEcho: ", "warning Frame too big (%d)", total - 3); 907 907 }
+1 -1
drivers/isdn/hisax/hfc_sx.c
··· 674 674 ptr--; 675 675 *ptr++ = '\n'; 676 676 *ptr = 0; 677 - HiSax_putstatus(cs, NULL, "%s", cs->dlog); 677 + HiSax_putstatus(cs, NULL, cs->dlog); 678 678 } else 679 679 HiSax_putstatus(cs, "LogEcho: ", "warning Frame too big (%d)", skb->len); 680 680 }
+3 -3
drivers/isdn/hisax/q931.c
··· 1179 1179 dp--; 1180 1180 *dp++ = '\n'; 1181 1181 *dp = 0; 1182 - HiSax_putstatus(cs, NULL, "%s", cs->dlog); 1182 + HiSax_putstatus(cs, NULL, cs->dlog); 1183 1183 } else 1184 1184 HiSax_putstatus(cs, "LogFrame: ", "warning Frame too big (%d)", size); 1185 1185 } ··· 1246 1246 } 1247 1247 if (finish) { 1248 1248 *dp = 0; 1249 - HiSax_putstatus(cs, NULL, "%s", cs->dlog); 1249 + HiSax_putstatus(cs, NULL, cs->dlog); 1250 1250 return; 1251 1251 } 1252 1252 if ((0xfe & buf[0]) == PROTO_DIS_N0) { /* 1TR6 */ ··· 1509 1509 dp += sprintf(dp, "Unknown protocol %x!", buf[0]); 1510 1510 } 1511 1511 *dp = 0; 1512 - HiSax_putstatus(cs, NULL, "%s", cs->dlog); 1512 + HiSax_putstatus(cs, NULL, cs->dlog); 1513 1513 }
+79 -60
drivers/lightnvm/core.c
··· 123 123 } 124 124 EXPORT_SYMBOL(nvm_unregister_mgr); 125 125 126 + /* register with device with a supported manager */ 127 + static int register_mgr(struct nvm_dev *dev) 128 + { 129 + struct nvmm_type *mt; 130 + int ret = 0; 131 + 132 + list_for_each_entry(mt, &nvm_mgrs, list) { 133 + ret = mt->register_mgr(dev); 134 + if (ret > 0) { 135 + dev->mt = mt; 136 + break; /* successfully initialized */ 137 + } 138 + } 139 + 140 + if (!ret) 141 + pr_info("nvm: no compatible nvm manager found.\n"); 142 + 143 + return ret; 144 + } 145 + 126 146 static struct nvm_dev *nvm_find_nvm_dev(const char *name) 127 147 { 128 148 struct nvm_dev *dev; ··· 180 160 } 181 161 EXPORT_SYMBOL(nvm_erase_blk); 182 162 183 - static void nvm_core_free(struct nvm_dev *dev) 184 - { 185 - kfree(dev); 186 - } 187 - 188 163 static int nvm_core_init(struct nvm_dev *dev) 189 164 { 190 165 struct nvm_id *id = &dev->identity; ··· 194 179 dev->sec_size = grp->csecs; 195 180 dev->oob_size = grp->sos; 196 181 dev->sec_per_pg = grp->fpg_sz / grp->csecs; 197 - dev->addr_mode = id->ppat; 198 - dev->addr_format = id->ppaf; 182 + memcpy(&dev->ppaf, &id->ppaf, sizeof(struct nvm_addr_format)); 199 183 200 184 dev->plane_mode = NVM_PLANE_SINGLE; 201 185 dev->max_rq_size = dev->ops->max_phys_sect * dev->sec_size; 186 + 187 + if (grp->mtype != 0) { 188 + pr_err("nvm: memory type not supported\n"); 189 + return -EINVAL; 190 + } 191 + 192 + if (grp->fmtype != 0 && grp->fmtype != 1) { 193 + pr_err("nvm: flash type not supported\n"); 194 + return -EINVAL; 195 + } 202 196 203 197 if (grp->mpos & 0x020202) 204 198 dev->plane_mode = NVM_PLANE_DOUBLE; ··· 237 213 238 214 if (dev->mt) 239 215 dev->mt->unregister_mgr(dev); 240 - 241 - nvm_core_free(dev); 242 216 } 243 217 244 218 static int nvm_init(struct nvm_dev *dev) 245 219 { 246 - struct nvmm_type *mt; 247 - int ret = 0; 220 + int ret = -EINVAL; 248 221 249 222 if (!dev->q || !dev->ops) 250 - return -EINVAL; 223 + return ret; 251 224 252 225 if (dev->ops->identity(dev->q, &dev->identity)) { 253 226 pr_err("nvm: device could not be identified\n"); 254 - ret = -EINVAL; 255 227 goto err; 256 228 } 257 229 ··· 271 251 goto err; 272 252 } 273 253 274 - /* register with device with a supported manager */ 275 - list_for_each_entry(mt, &nvm_mgrs, list) { 276 - ret = mt->register_mgr(dev); 277 - if (ret < 0) 278 - goto err; /* initialization failed */ 279 - if (ret > 0) { 280 - dev->mt = mt; 281 - break; /* successfully initialized */ 282 - } 283 - } 284 - 285 - if (!ret) { 286 - pr_info("nvm: no compatible manager found.\n"); 254 + down_write(&nvm_lock); 255 + ret = register_mgr(dev); 256 + up_write(&nvm_lock); 257 + if (ret < 0) 258 + goto err; 259 + if (!ret) 287 260 return 0; 288 - } 289 261 290 262 pr_info("nvm: registered %s [%u/%u/%u/%u/%u/%u]\n", 291 263 dev->name, dev->sec_per_pg, dev->nr_planes, ··· 285 273 dev->nr_chnls); 286 274 return 0; 287 275 err: 288 - nvm_free(dev); 289 276 pr_err("nvm: failed to initialize nvm\n"); 290 277 return ret; 291 278 } ··· 319 308 if (ret) 320 309 goto err_init; 321 310 322 - down_write(&nvm_lock); 323 - list_add(&dev->devices, &nvm_devices); 324 - up_write(&nvm_lock); 311 + if (dev->ops->max_phys_sect > 256) { 312 + pr_info("nvm: max sectors supported is 256.\n"); 313 + ret = -EINVAL; 314 + goto err_init; 315 + } 325 316 326 317 if (dev->ops->max_phys_sect > 1) { 327 318 dev->ppalist_pool = dev->ops->create_dma_pool(dev->q, 328 319 "ppalist"); 329 320 if (!dev->ppalist_pool) { 330 321 pr_err("nvm: could not create ppa pool\n"); 331 - return -ENOMEM; 322 + 
ret = -ENOMEM; 323 + goto err_init; 332 324 } 333 - } else if (dev->ops->max_phys_sect > 256) { 334 - pr_info("nvm: max sectors supported is 256.\n"); 335 - return -EINVAL; 336 325 } 326 + 327 + down_write(&nvm_lock); 328 + list_add(&dev->devices, &nvm_devices); 329 + up_write(&nvm_lock); 337 330 338 331 return 0; 339 332 err_init: ··· 348 333 349 334 void nvm_unregister(char *disk_name) 350 335 { 351 - struct nvm_dev *dev = nvm_find_nvm_dev(disk_name); 336 + struct nvm_dev *dev; 352 337 338 + down_write(&nvm_lock); 339 + dev = nvm_find_nvm_dev(disk_name); 353 340 if (!dev) { 354 341 pr_err("nvm: could not find device %s to unregister\n", 355 342 disk_name); 343 + up_write(&nvm_lock); 356 344 return; 357 345 } 358 346 359 - nvm_exit(dev); 360 - 361 - down_write(&nvm_lock); 362 347 list_del(&dev->devices); 363 348 up_write(&nvm_lock); 349 + 350 + nvm_exit(dev); 351 + kfree(dev); 364 352 } 365 353 EXPORT_SYMBOL(nvm_unregister); 366 354 ··· 376 358 { 377 359 struct nvm_ioctl_create_simple *s = &create->conf.s; 378 360 struct request_queue *tqueue; 379 - struct nvmm_type *mt; 380 361 struct gendisk *tdisk; 381 362 struct nvm_tgt_type *tt; 382 363 struct nvm_target *t; 383 364 void *targetdata; 384 365 int ret = 0; 385 366 367 + down_write(&nvm_lock); 386 368 if (!dev->mt) { 387 - /* register with device with a supported NVM manager */ 388 - list_for_each_entry(mt, &nvm_mgrs, list) { 389 - ret = mt->register_mgr(dev); 390 - if (ret < 0) 391 - return ret; /* initialization failed */ 392 - if (ret > 0) { 393 - dev->mt = mt; 394 - break; /* successfully initialized */ 395 - } 396 - } 397 - 398 - if (!ret) { 399 - pr_info("nvm: no compatible nvm manager found.\n"); 400 - return -ENODEV; 369 + ret = register_mgr(dev); 370 + if (!ret) 371 + ret = -ENODEV; 372 + if (ret < 0) { 373 + up_write(&nvm_lock); 374 + return ret; 401 375 } 402 376 } 403 377 404 378 tt = nvm_find_target_type(create->tgttype); 405 379 if (!tt) { 406 380 pr_err("nvm: target type %s not found\n", create->tgttype); 381 + up_write(&nvm_lock); 407 382 return -EINVAL; 408 383 } 409 384 410 - down_write(&nvm_lock); 411 385 list_for_each_entry(t, &dev->online_targets, list) { 412 386 if (!strcmp(create->tgtname, t->disk->disk_name)) { 413 387 pr_err("nvm: target name already exists.\n"); ··· 467 457 lockdep_assert_held(&nvm_lock); 468 458 469 459 del_gendisk(tdisk); 460 + blk_cleanup_queue(q); 461 + 470 462 if (tt->exit) 471 463 tt->exit(tdisk->private_data); 472 - 473 - blk_cleanup_queue(q); 474 464 475 465 put_disk(tdisk); 476 466 ··· 483 473 struct nvm_dev *dev; 484 474 struct nvm_ioctl_create_simple *s; 485 475 476 + down_write(&nvm_lock); 486 477 dev = nvm_find_nvm_dev(create->dev); 478 + up_write(&nvm_lock); 487 479 if (!dev) { 488 480 pr_err("nvm: device not found\n"); 489 481 return -EINVAL; ··· 544 532 return -EINVAL; 545 533 } 546 534 535 + down_write(&nvm_lock); 547 536 dev = nvm_find_nvm_dev(devname); 537 + up_write(&nvm_lock); 548 538 if (!dev) { 549 539 pr_err("nvm: device not found\n"); 550 540 return -EINVAL; ··· 555 541 if (!dev->mt) 556 542 return 0; 557 543 558 - dev->mt->free_blocks_print(dev); 544 + dev->mt->lun_info_print(dev); 559 545 560 546 return 0; 561 547 } ··· 691 677 info->tgtsize = tgt_iter; 692 678 up_write(&nvm_lock); 693 679 694 - if (copy_to_user(arg, info, sizeof(struct nvm_ioctl_info))) 680 + if (copy_to_user(arg, info, sizeof(struct nvm_ioctl_info))) { 681 + kfree(info); 695 682 return -EFAULT; 683 + } 696 684 697 685 kfree(info); 698 686 return 0; ··· 737 721 738 722 devices->nr_devices = i; 739 
723 740 - if (copy_to_user(arg, devices, sizeof(struct nvm_ioctl_get_devices))) 724 + if (copy_to_user(arg, devices, 725 + sizeof(struct nvm_ioctl_get_devices))) { 726 + kfree(devices); 741 727 return -EFAULT; 728 + } 742 729 743 730 kfree(devices); 744 731 return 0;
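Among the lightnvm/core.c changes, two hunks plug the same leak: when copy_to_user() failed, the ioctl handlers returned -EFAULT without releasing the buffer they had just allocated. The shape of the fix, generically:

    #include <linux/slab.h>
    #include <linux/uaccess.h>

    static long fill_and_copy(void __user *arg, size_t len)
    {
            void *buf = kzalloc(len, GFP_KERNEL);
            long ret = 0;

            if (!buf)
                    return -ENOMEM;
            /* ... fill buf ... */
            if (copy_to_user(arg, buf, len))
                    ret = -EFAULT;
            kfree(buf);	/* runs on success and failure alike */
            return ret;
    }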
+62 -25
drivers/lightnvm/gennvm.c
··· 60 60 lun->vlun.lun_id = i % dev->luns_per_chnl; 61 61 lun->vlun.chnl_id = i / dev->luns_per_chnl; 62 62 lun->vlun.nr_free_blocks = dev->blks_per_lun; 63 + lun->vlun.nr_inuse_blocks = 0; 64 + lun->vlun.nr_bad_blocks = 0; 63 65 } 64 66 return 0; 65 67 } 66 68 67 - static int gennvm_block_bb(u32 lun_id, void *bb_bitmap, unsigned int nr_blocks, 69 + static int gennvm_block_bb(struct ppa_addr ppa, int nr_blocks, u8 *blks, 68 70 void *private) 69 71 { 70 72 struct gen_nvm *gn = private; 71 - struct gen_lun *lun = &gn->luns[lun_id]; 73 + struct nvm_dev *dev = gn->dev; 74 + struct gen_lun *lun; 72 75 struct nvm_block *blk; 73 76 int i; 74 77 75 - if (unlikely(bitmap_empty(bb_bitmap, nr_blocks))) 76 - return 0; 78 + lun = &gn->luns[(dev->nr_luns * ppa.g.ch) + ppa.g.lun]; 77 79 78 - i = -1; 79 - while ((i = find_next_bit(bb_bitmap, nr_blocks, i + 1)) < nr_blocks) { 80 + for (i = 0; i < nr_blocks; i++) { 81 + if (blks[i] == 0) 82 + continue; 83 + 80 84 blk = &lun->vlun.blocks[i]; 81 85 if (!blk) { 82 86 pr_err("gennvm: BB data is out of bounds.\n"); ··· 88 84 } 89 85 90 86 list_move_tail(&blk->list, &lun->bb_list); 87 + lun->vlun.nr_bad_blocks++; 91 88 } 92 89 93 90 return 0; ··· 141 136 list_move_tail(&blk->list, &lun->used_list); 142 137 blk->type = 1; 143 138 lun->vlun.nr_free_blocks--; 139 + lun->vlun.nr_inuse_blocks++; 144 140 } 145 141 } 146 142 ··· 170 164 block->id = cur_block_id++; 171 165 172 166 /* First block is reserved for device */ 173 - if (unlikely(lun_iter == 0 && blk_iter == 0)) 167 + if (unlikely(lun_iter == 0 && blk_iter == 0)) { 168 + lun->vlun.nr_free_blocks--; 174 169 continue; 170 + } 175 171 176 172 list_add_tail(&block->list, &lun->free_list); 177 173 } 178 174 179 175 if (dev->ops->get_bb_tbl) { 180 - ret = dev->ops->get_bb_tbl(dev->q, lun->vlun.id, 181 - dev->blks_per_lun, gennvm_block_bb, gn); 176 + struct ppa_addr ppa; 177 + 178 + ppa.ppa = 0; 179 + ppa.g.ch = lun->vlun.chnl_id; 180 + ppa.g.lun = lun->vlun.id; 181 + ppa = generic_to_dev_addr(dev, ppa); 182 + 183 + ret = dev->ops->get_bb_tbl(dev, ppa, 184 + dev->blks_per_lun, 185 + gennvm_block_bb, gn); 182 186 if (ret) 183 187 pr_err("gennvm: could not read BB table\n"); 184 188 } ··· 206 190 return 0; 207 191 } 208 192 193 + static void gennvm_free(struct nvm_dev *dev) 194 + { 195 + gennvm_blocks_free(dev); 196 + gennvm_luns_free(dev); 197 + kfree(dev->mp); 198 + dev->mp = NULL; 199 + } 200 + 209 201 static int gennvm_register(struct nvm_dev *dev) 210 202 { 211 203 struct gen_nvm *gn; ··· 223 199 if (!gn) 224 200 return -ENOMEM; 225 201 202 + gn->dev = dev; 226 203 gn->nr_luns = dev->nr_luns; 227 204 dev->mp = gn; 228 205 ··· 241 216 242 217 return 1; 243 218 err: 244 - kfree(gn); 219 + gennvm_free(dev); 245 220 return ret; 246 221 } 247 222 248 223 static void gennvm_unregister(struct nvm_dev *dev) 249 224 { 250 - gennvm_blocks_free(dev); 251 - gennvm_luns_free(dev); 252 - kfree(dev->mp); 253 - dev->mp = NULL; 225 + gennvm_free(dev); 254 226 } 255 227 256 228 static struct nvm_block *gennvm_get_blk(struct nvm_dev *dev, ··· 276 254 blk->type = 1; 277 255 278 256 lun->vlun.nr_free_blocks--; 257 + lun->vlun.nr_inuse_blocks++; 279 258 280 259 spin_unlock(&vlun->lock); 281 260 out: ··· 294 271 case 1: 295 272 list_move_tail(&blk->list, &lun->free_list); 296 273 lun->vlun.nr_free_blocks++; 274 + lun->vlun.nr_inuse_blocks--; 297 275 blk->type = 0; 298 276 break; 299 277 case 2: 300 278 list_move_tail(&blk->list, &lun->bb_list); 279 + lun->vlun.nr_bad_blocks++; 280 + lun->vlun.nr_inuse_blocks--; 301 281 break; 302 282 
default: 303 283 WARN_ON_ONCE(1); 304 284 pr_err("gennvm: erroneous block type (%lu -> %u)\n", 305 285 blk->id, blk->type); 306 286 list_move_tail(&blk->list, &lun->bb_list); 287 + lun->vlun.nr_bad_blocks++; 288 + lun->vlun.nr_inuse_blocks--; 307 289 } 308 290 309 291 spin_unlock(&vlun->lock); ··· 320 292 321 293 if (rqd->nr_pages > 1) { 322 294 for (i = 0; i < rqd->nr_pages; i++) 323 - rqd->ppa_list[i] = addr_to_generic_mode(dev, 295 + rqd->ppa_list[i] = dev_to_generic_addr(dev, 324 296 rqd->ppa_list[i]); 325 297 } else { 326 - rqd->ppa_addr = addr_to_generic_mode(dev, rqd->ppa_addr); 298 + rqd->ppa_addr = dev_to_generic_addr(dev, rqd->ppa_addr); 327 299 } 328 300 } 329 301 ··· 333 305 334 306 if (rqd->nr_pages > 1) { 335 307 for (i = 0; i < rqd->nr_pages; i++) 336 - rqd->ppa_list[i] = generic_to_addr_mode(dev, 308 + rqd->ppa_list[i] = generic_to_dev_addr(dev, 337 309 rqd->ppa_list[i]); 338 310 } else { 339 - rqd->ppa_addr = generic_to_addr_mode(dev, rqd->ppa_addr); 311 + rqd->ppa_addr = generic_to_dev_addr(dev, rqd->ppa_addr); 340 312 } 341 313 } 342 314 ··· 382 354 { 383 355 int i; 384 356 385 - if (!dev->ops->set_bb) 357 + if (!dev->ops->set_bb_tbl) 386 358 return; 387 359 388 - if (dev->ops->set_bb(dev->q, rqd, 1)) 360 + if (dev->ops->set_bb_tbl(dev->q, rqd, 1)) 389 361 return; 390 362 391 363 gennvm_addr_to_generic_mode(dev, rqd); ··· 468 440 return &gn->luns[lunid].vlun; 469 441 } 470 442 471 - static void gennvm_free_blocks_print(struct nvm_dev *dev) 443 + static void gennvm_lun_info_print(struct nvm_dev *dev) 472 444 { 473 445 struct gen_nvm *gn = dev->mp; 474 446 struct gen_lun *lun; 475 447 unsigned int i; 476 448 477 - gennvm_for_each_lun(gn, lun, i) 478 - pr_info("%s: lun%8u\t%u\n", 479 - dev->name, i, lun->vlun.nr_free_blocks); 449 + 450 + gennvm_for_each_lun(gn, lun, i) { 451 + spin_lock(&lun->vlun.lock); 452 + 453 + pr_info("%s: lun%8u\t%u\t%u\t%u\n", 454 + dev->name, i, 455 + lun->vlun.nr_free_blocks, 456 + lun->vlun.nr_inuse_blocks, 457 + lun->vlun.nr_bad_blocks); 458 + 459 + spin_unlock(&lun->vlun.lock); 460 + } 480 461 } 481 462 482 463 static struct nvmm_type gennvm = { ··· 503 466 .erase_blk = gennvm_erase_blk, 504 467 505 468 .get_lun = gennvm_get_lun, 506 - .free_blocks_print = gennvm_free_blocks_print, 469 + .lun_info_print = gennvm_lun_info_print, 507 470 }; 508 471 509 472 static int __init gennvm_module_init(void)
+2
drivers/lightnvm/gennvm.h
··· 35 35 }; 36 36 37 37 struct gen_nvm { 38 + struct nvm_dev *dev; 39 + 38 40 int nr_luns; 39 41 struct gen_lun *luns; 40 42 };
+31 -1
drivers/lightnvm/rrpc.c
··· 123 123 return blk->id * rrpc->dev->pgs_per_blk; 124 124 } 125 125 126 + static struct ppa_addr linear_to_generic_addr(struct nvm_dev *dev, 127 + struct ppa_addr r) 128 + { 129 + struct ppa_addr l; 130 + int secs, pgs, blks, luns; 131 + sector_t ppa = r.ppa; 132 + 133 + l.ppa = 0; 134 + 135 + div_u64_rem(ppa, dev->sec_per_pg, &secs); 136 + l.g.sec = secs; 137 + 138 + sector_div(ppa, dev->sec_per_pg); 139 + div_u64_rem(ppa, dev->sec_per_blk, &pgs); 140 + l.g.pg = pgs; 141 + 142 + sector_div(ppa, dev->pgs_per_blk); 143 + div_u64_rem(ppa, dev->blks_per_lun, &blks); 144 + l.g.blk = blks; 145 + 146 + sector_div(ppa, dev->blks_per_lun); 147 + div_u64_rem(ppa, dev->luns_per_chnl, &luns); 148 + l.g.lun = luns; 149 + 150 + sector_div(ppa, dev->luns_per_chnl); 151 + l.g.ch = ppa; 152 + 153 + return l; 154 + } 155 + 126 156 static struct ppa_addr rrpc_ppa_to_gaddr(struct nvm_dev *dev, u64 addr) 127 157 { 128 158 struct ppa_addr paddr; 129 159 130 160 paddr.ppa = addr; 131 - return __linear_to_generic_addr(dev, paddr); 161 + return linear_to_generic_addr(dev, paddr); 132 162 } 133 163 134 164 /* requires lun->lock taken */
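The new linear_to_generic_addr() peels a flat physical page address into the device geometry by repeated divide-and-remainder, innermost unit first (sector, then page, block, lun, channel); div_u64_rem() and sector_div() are used because 32-bit targets have no native 64-bit division. The arithmetic alone, with plain operators and an arbitrary geometry for illustration:

    struct geo { unsigned int sec, pg, blk, lun, ch; };

    static struct geo split_ppa(unsigned long long ppa,
                                unsigned int sec_per_pg, unsigned int pgs_per_blk,
                                unsigned int blks_per_lun, unsigned int luns_per_chnl)
    {
            struct geo g;

            g.sec = ppa % sec_per_pg;    ppa /= sec_per_pg;
            g.pg  = ppa % pgs_per_blk;   ppa /= pgs_per_blk;
            g.blk = ppa % blks_per_lun;  ppa /= blks_per_lun;
            g.lun = ppa % luns_per_chnl; ppa /= luns_per_chnl;
            g.ch  = ppa;	/* whatever remains is the channel */
            return g;
    }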
+13 -9
drivers/md/dm-crypt.c
··· 112 112 * and encrypts / decrypts at the same time. 113 113 */ 114 114 enum flags { DM_CRYPT_SUSPENDED, DM_CRYPT_KEY_VALID, 115 - DM_CRYPT_SAME_CPU, DM_CRYPT_NO_OFFLOAD }; 115 + DM_CRYPT_SAME_CPU, DM_CRYPT_NO_OFFLOAD, 116 + DM_CRYPT_EXIT_THREAD}; 116 117 117 118 /* 118 119 * The fields in here must be read only after initialization. ··· 1204 1203 if (!RB_EMPTY_ROOT(&cc->write_tree)) 1205 1204 goto pop_from_list; 1206 1205 1206 + if (unlikely(test_bit(DM_CRYPT_EXIT_THREAD, &cc->flags))) { 1207 + spin_unlock_irq(&cc->write_thread_wait.lock); 1208 + break; 1209 + } 1210 + 1207 1211 __set_current_state(TASK_INTERRUPTIBLE); 1208 1212 __add_wait_queue(&cc->write_thread_wait, &wait); 1209 1213 1210 1214 spin_unlock_irq(&cc->write_thread_wait.lock); 1211 1215 1212 - if (unlikely(kthread_should_stop())) { 1213 - set_task_state(current, TASK_RUNNING); 1214 - remove_wait_queue(&cc->write_thread_wait, &wait); 1215 - break; 1216 - } 1217 - 1218 1216 schedule(); 1219 1217 1220 - set_task_state(current, TASK_RUNNING); 1221 1218 spin_lock_irq(&cc->write_thread_wait.lock); 1222 1219 __remove_wait_queue(&cc->write_thread_wait, &wait); 1223 1220 goto continue_locked; ··· 1530 1531 if (!cc) 1531 1532 return; 1532 1533 1533 - if (cc->write_thread) 1534 + if (cc->write_thread) { 1535 + spin_lock_irq(&cc->write_thread_wait.lock); 1536 + set_bit(DM_CRYPT_EXIT_THREAD, &cc->flags); 1537 + wake_up_locked(&cc->write_thread_wait); 1538 + spin_unlock_irq(&cc->write_thread_wait.lock); 1534 1539 kthread_stop(cc->write_thread); 1540 + } 1535 1541 1536 1542 if (cc->io_queue) 1537 1543 destroy_workqueue(cc->io_queue);
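dm-crypt's writer thread now exits on a DM_CRYPT_EXIT_THREAD flag that is both set (in the destructor) and tested (in the worker) under the wait queue's own lock, instead of relying on kthread_should_stop(); that closes the window where kthread_stop() could fire between the old should-stop check and schedule(). The worker side of the pattern, condensed with hypothetical names:

    DECLARE_WAITQUEUE(wait, current);

    spin_lock_irq(&wq.lock);
    for (;;) {
            if (have_work())		/* hypothetical work predicate */
                    break;			/* pop work, lock still held */
            if (test_bit(EXIT_FLAG, &flags)) {
                    spin_unlock_irq(&wq.lock);
                    return 0;		/* kthread_stop() reaps us */
            }
            __set_current_state(TASK_INTERRUPTIBLE);
            __add_wait_queue(&wq, &wait);
            spin_unlock_irq(&wq.lock);
            schedule();			/* waker holds wq.lock, uses wake_up_locked() */
            spin_lock_irq(&wq.lock);
            __remove_wait_queue(&wq, &wait);
    }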
+16 -14
drivers/md/dm-mpath.c
··· 1537 1537 struct block_device **bdev, fmode_t *mode) 1538 1538 { 1539 1539 struct multipath *m = ti->private; 1540 - struct pgpath *pgpath; 1541 1540 unsigned long flags; 1542 1541 int r; 1543 - 1544 - r = 0; 1545 1542 1546 1543 spin_lock_irqsave(&m->lock, flags); 1547 1544 1548 1545 if (!m->current_pgpath) 1549 1546 __choose_pgpath(m, 0); 1550 1547 1551 - pgpath = m->current_pgpath; 1552 - 1553 - if (pgpath) { 1554 - *bdev = pgpath->path.dev->bdev; 1555 - *mode = pgpath->path.dev->mode; 1548 + if (m->current_pgpath) { 1549 + if (!m->queue_io) { 1550 + *bdev = m->current_pgpath->path.dev->bdev; 1551 + *mode = m->current_pgpath->path.dev->mode; 1552 + r = 0; 1553 + } else { 1554 + /* pg_init has not started or completed */ 1555 + r = -ENOTCONN; 1556 + } 1557 + } else { 1558 + /* No path is available */ 1559 + if (m->queue_if_no_path) 1560 + r = -ENOTCONN; 1561 + else 1562 + r = -EIO; 1556 1563 } 1557 - 1558 - if ((pgpath && m->queue_io) || (!pgpath && m->queue_if_no_path)) 1559 - r = -ENOTCONN; 1560 - else if (!*bdev) 1561 - r = -EIO; 1562 1564 1563 1565 spin_unlock_irqrestore(&m->lock, flags); 1564 1566 1565 - if (r == -ENOTCONN && !fatal_signal_pending(current)) { 1567 + if (r == -ENOTCONN) { 1566 1568 spin_lock_irqsave(&m->lock, flags); 1567 1569 if (!m->current_pg) { 1568 1570 /* Path status changed, redo selection */
+3 -3
drivers/md/dm-thin.c
··· 2432 2432 case PM_WRITE: 2433 2433 if (old_mode != new_mode) 2434 2434 notify_of_pool_mode_change(pool, "write"); 2435 + pool->pf.error_if_no_space = pt->requested_pf.error_if_no_space; 2435 2436 dm_pool_metadata_read_write(pool->pmd); 2436 2437 pool->process_bio = process_bio; 2437 2438 pool->process_discard = process_discard_bio; ··· 4250 4249 { 4251 4250 struct thin_c *tc = ti->private; 4252 4251 struct pool *pool = tc->pool; 4253 - struct queue_limits *pool_limits = dm_get_queue_limits(pool->pool_md); 4254 4252 4255 - if (!pool_limits->discard_granularity) 4256 - return; /* pool's discard support is disabled */ 4253 + if (!pool->pf.discard_enabled) 4254 + return; 4257 4255 4258 4256 limits->discard_granularity = pool->sectors_per_block << SECTOR_SHIFT; 4259 4257 limits->max_discard_sectors = 2048 * 1024 * 16; /* 16G */
+4 -3
drivers/md/dm.c
··· 591 591 592 592 out: 593 593 dm_put_live_table(md, *srcu_idx); 594 - if (r == -ENOTCONN) { 594 + if (r == -ENOTCONN && !fatal_signal_pending(current)) { 595 595 msleep(10); 596 596 goto retry; 597 597 } ··· 603 603 { 604 604 struct mapped_device *md = bdev->bd_disk->private_data; 605 605 struct dm_target *tgt; 606 + struct block_device *tgt_bdev = NULL; 606 607 int srcu_idx, r; 607 608 608 - r = dm_get_live_table_for_ioctl(md, &tgt, &bdev, &mode, &srcu_idx); 609 + r = dm_get_live_table_for_ioctl(md, &tgt, &tgt_bdev, &mode, &srcu_idx); 609 610 if (r < 0) 610 611 return r; 611 612 ··· 621 620 goto out; 622 621 } 623 622 624 - r = __blkdev_driver_ioctl(bdev, mode, cmd, arg); 623 + r = __blkdev_driver_ioctl(tgt_bdev, mode, cmd, arg); 625 624 out: 626 625 dm_put_live_table(md, srcu_idx); 627 626 return r;
+2 -2
drivers/media/pci/cx23885/cx23885-core.c
··· 1992 1992 (unsigned long long)pci_resource_start(pci_dev, 0)); 1993 1993 1994 1994 pci_set_master(pci_dev); 1995 - if (!pci_set_dma_mask(pci_dev, 0xffffffff)) { 1995 + err = pci_set_dma_mask(pci_dev, 0xffffffff); 1996 + if (err) { 1996 1997 printk("%s/0: Oops: no 32bit PCI DMA ???\n", dev->name); 1997 - err = -EIO; 1998 1998 goto fail_context; 1999 1999 } 2000 2000
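This cx23885 hunk and the cx25821, cx88, netup, saa7134, saa7164 and tw68 hunks that follow all repair the same inverted test: pci_set_dma_mask() returns 0 on success and a negative errno on failure, so `if (!pci_set_dma_mask(...))` aborted the probe exactly when the mask had been accepted. The corrected shape, as a fragment:

            err = pci_set_dma_mask(pci_dev, DMA_BIT_MASK(32));
            if (err) {	/* non-zero: the device cannot do 32-bit DMA */
                    dev_err(&pci_dev->dev, "no 32-bit DMA support\n");
                    goto fail;
            }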
+2 -1
drivers/media/pci/cx25821/cx25821-core.c
··· 1319 1319 dev->pci_lat, (unsigned long long)dev->base_io_addr); 1320 1320 1321 1321 pci_set_master(pci_dev); 1322 - if (!pci_set_dma_mask(pci_dev, 0xffffffff)) { 1322 + err = pci_set_dma_mask(pci_dev, 0xffffffff); 1323 + if (err) { 1323 1324 pr_err("%s/0: Oops: no 32bit PCI DMA ???\n", dev->name); 1324 1325 err = -EIO; 1325 1326 goto fail_irq;
+2 -2
drivers/media/pci/cx88/cx88-alsa.c
··· 890 890 return err; 891 891 } 892 892 893 - if (!pci_set_dma_mask(pci,DMA_BIT_MASK(32))) { 893 + err = pci_set_dma_mask(pci,DMA_BIT_MASK(32)); 894 + if (err) { 894 895 dprintk(0, "%s/1: Oops: no 32bit PCI DMA ???\n",core->name); 895 - err = -EIO; 896 896 cx88_core_put(core, pci); 897 897 return err; 898 898 }
+2 -1
drivers/media/pci/cx88/cx88-mpeg.c
··· 393 393 if (pci_enable_device(dev->pci)) 394 394 return -EIO; 395 395 pci_set_master(dev->pci); 396 - if (!pci_set_dma_mask(dev->pci,DMA_BIT_MASK(32))) { 396 + err = pci_set_dma_mask(dev->pci,DMA_BIT_MASK(32)); 397 + if (err) { 397 398 printk("%s/2: Oops: no 32bit PCI DMA ???\n",dev->core->name); 398 399 return -EIO; 399 400 }
+2 -2
drivers/media/pci/cx88/cx88-video.c
··· 1314 1314 dev->pci_lat,(unsigned long long)pci_resource_start(pci_dev,0)); 1315 1315 1316 1316 pci_set_master(pci_dev); 1317 - if (!pci_set_dma_mask(pci_dev,DMA_BIT_MASK(32))) { 1317 + err = pci_set_dma_mask(pci_dev,DMA_BIT_MASK(32)); 1318 + if (err) { 1318 1319 printk("%s/0: Oops: no 32bit PCI DMA ???\n",core->name); 1319 - err = -EIO; 1320 1320 goto fail_core; 1321 1321 } 1322 1322 dev->alloc_ctx = vb2_dma_sg_init_ctx(&pci_dev->dev);
+1 -1
drivers/media/pci/netup_unidvb/netup_unidvb_core.c
··· 810 810 "%s(): board vendor 0x%x, revision 0x%x\n", 811 811 __func__, board_vendor, board_revision); 812 812 pci_set_master(pci_dev); 813 - if (!pci_set_dma_mask(pci_dev, 0xffffffff)) { 813 + if (pci_set_dma_mask(pci_dev, 0xffffffff) < 0) { 814 814 dev_err(&pci_dev->dev, 815 815 "%s(): 32bit PCI DMA is not supported\n", __func__); 816 816 goto pci_detect_err;
+2 -2
drivers/media/pci/saa7134/saa7134-core.c
··· 951 951 pci_name(pci_dev), dev->pci_rev, pci_dev->irq, 952 952 dev->pci_lat,(unsigned long long)pci_resource_start(pci_dev,0)); 953 953 pci_set_master(pci_dev); 954 - if (!pci_set_dma_mask(pci_dev, DMA_BIT_MASK(32))) { 954 + err = pci_set_dma_mask(pci_dev, DMA_BIT_MASK(32)); 955 + if (err) { 955 956 pr_warn("%s: Oops: no 32bit PCI DMA ???\n", dev->name); 956 - err = -EIO; 957 957 goto fail1; 958 958 } 959 959
+2 -2
drivers/media/pci/saa7164/saa7164-core.c
··· 1264 1264 1265 1265 pci_set_master(pci_dev); 1266 1266 /* TODO */ 1267 - if (!pci_set_dma_mask(pci_dev, 0xffffffff)) { 1267 + err = pci_set_dma_mask(pci_dev, 0xffffffff); 1268 + if (err) { 1268 1269 printk("%s/0: Oops: no 32bit PCI DMA ???\n", dev->name); 1269 - err = -EIO; 1270 1270 goto fail_irq; 1271 1271 } 1272 1272
+2 -2
drivers/media/pci/tw68/tw68-core.c
··· 257 257 dev->name, pci_name(pci_dev), dev->pci_rev, pci_dev->irq, 258 258 dev->pci_lat, (u64)pci_resource_start(pci_dev, 0)); 259 259 pci_set_master(pci_dev); 260 - if (!pci_set_dma_mask(pci_dev, DMA_BIT_MASK(32))) { 260 + err = pci_set_dma_mask(pci_dev, DMA_BIT_MASK(32)); 261 + if (err) { 261 262 pr_info("%s: Oops: no 32bit PCI DMA ???\n", dev->name); 262 - err = -EIO; 263 263 goto fail1; 264 264 } 265 265
+3 -8
drivers/mmc/card/block.c
··· 65 65 #define MMC_SANITIZE_REQ_TIMEOUT 240000 66 66 #define MMC_EXTRACT_INDEX_FROM_ARG(x) ((x & 0x00FF0000) >> 16) 67 67 68 - #define mmc_req_rel_wr(req) (((req->cmd_flags & REQ_FUA) || \ 69 - (req->cmd_flags & REQ_META)) && \ 68 + #define mmc_req_rel_wr(req) ((req->cmd_flags & REQ_FUA) && \ 70 69 (rq_data_dir(req) == WRITE)) 71 70 #define PACKED_CMD_VER 0x01 72 71 #define PACKED_CMD_WR 0x02 ··· 1466 1467 1467 1468 /* 1468 1469 * Reliable writes are used to implement Forced Unit Access and 1469 - * REQ_META accesses, and are supported only on MMCs. 1470 - * 1471 - * XXX: this really needs a good explanation of why REQ_META 1472 - * is treated special. 1470 + * are supported only on MMCs. 1473 1471 */ 1474 - bool do_rel_wr = ((req->cmd_flags & REQ_FUA) || 1475 - (req->cmd_flags & REQ_META)) && 1472 + bool do_rel_wr = (req->cmd_flags & REQ_FUA) && 1476 1473 (rq_data_dir(req) == WRITE) && 1477 1474 (md->flags & MMC_BLK_REL_WR); 1478 1475
+113 -68
drivers/mmc/core/mmc.c
··· 1040 1040 return err; 1041 1041 } 1042 1042 1043 - static int mmc_select_hs400(struct mmc_card *card) 1044 - { 1045 - struct mmc_host *host = card->host; 1046 - int err = 0; 1047 - u8 val; 1048 - 1049 - /* 1050 - * HS400 mode requires 8-bit bus width 1051 - */ 1052 - if (!(card->mmc_avail_type & EXT_CSD_CARD_TYPE_HS400 && 1053 - host->ios.bus_width == MMC_BUS_WIDTH_8)) 1054 - return 0; 1055 - 1056 - /* 1057 - * Before switching to dual data rate operation for HS400, 1058 - * it is required to convert from HS200 mode to HS mode. 1059 - */ 1060 - mmc_set_timing(card->host, MMC_TIMING_MMC_HS); 1061 - mmc_set_bus_speed(card); 1062 - 1063 - val = EXT_CSD_TIMING_HS | 1064 - card->drive_strength << EXT_CSD_DRV_STR_SHIFT; 1065 - err = __mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, 1066 - EXT_CSD_HS_TIMING, val, 1067 - card->ext_csd.generic_cmd6_time, 1068 - true, true, true); 1069 - if (err) { 1070 - pr_err("%s: switch to high-speed from hs200 failed, err:%d\n", 1071 - mmc_hostname(host), err); 1072 - return err; 1073 - } 1074 - 1075 - err = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, 1076 - EXT_CSD_BUS_WIDTH, 1077 - EXT_CSD_DDR_BUS_WIDTH_8, 1078 - card->ext_csd.generic_cmd6_time); 1079 - if (err) { 1080 - pr_err("%s: switch to bus width for hs400 failed, err:%d\n", 1081 - mmc_hostname(host), err); 1082 - return err; 1083 - } 1084 - 1085 - val = EXT_CSD_TIMING_HS400 | 1086 - card->drive_strength << EXT_CSD_DRV_STR_SHIFT; 1087 - err = __mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, 1088 - EXT_CSD_HS_TIMING, val, 1089 - card->ext_csd.generic_cmd6_time, 1090 - true, true, true); 1091 - if (err) { 1092 - pr_err("%s: switch to hs400 failed, err:%d\n", 1093 - mmc_hostname(host), err); 1094 - return err; 1095 - } 1096 - 1097 - mmc_set_timing(host, MMC_TIMING_MMC_HS400); 1098 - mmc_set_bus_speed(card); 1099 - 1100 - return 0; 1101 - } 1102 - 1103 - int mmc_hs200_to_hs400(struct mmc_card *card) 1104 - { 1105 - return mmc_select_hs400(card); 1106 - } 1107 - 1108 1043 /* Caller must hold re-tuning */ 1109 1044 static int mmc_switch_status(struct mmc_card *card) 1110 1045 { ··· 1051 1116 return err; 1052 1117 1053 1118 return mmc_switch_status_error(card->host, status); 1119 + } 1120 + 1121 + static int mmc_select_hs400(struct mmc_card *card) 1122 + { 1123 + struct mmc_host *host = card->host; 1124 + bool send_status = true; 1125 + unsigned int max_dtr; 1126 + int err = 0; 1127 + u8 val; 1128 + 1129 + /* 1130 + * HS400 mode requires 8-bit bus width 1131 + */ 1132 + if (!(card->mmc_avail_type & EXT_CSD_CARD_TYPE_HS400 && 1133 + host->ios.bus_width == MMC_BUS_WIDTH_8)) 1134 + return 0; 1135 + 1136 + if (host->caps & MMC_CAP_WAIT_WHILE_BUSY) 1137 + send_status = false; 1138 + 1139 + /* Reduce frequency to HS frequency */ 1140 + max_dtr = card->ext_csd.hs_max_dtr; 1141 + mmc_set_clock(host, max_dtr); 1142 + 1143 + /* Switch card to HS mode */ 1144 + val = EXT_CSD_TIMING_HS | 1145 + card->drive_strength << EXT_CSD_DRV_STR_SHIFT; 1146 + err = __mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, 1147 + EXT_CSD_HS_TIMING, val, 1148 + card->ext_csd.generic_cmd6_time, 1149 + true, send_status, true); 1150 + if (err) { 1151 + pr_err("%s: switch to high-speed from hs200 failed, err:%d\n", 1152 + mmc_hostname(host), err); 1153 + return err; 1154 + } 1155 + 1156 + /* Set host controller to HS timing */ 1157 + mmc_set_timing(card->host, MMC_TIMING_MMC_HS); 1158 + 1159 + if (!send_status) { 1160 + err = mmc_switch_status(card); 1161 + if (err) 1162 + goto out_err; 1163 + } 1164 + 1165 + /* Switch card to DDR */ 1166 + err = mmc_switch(card, 
EXT_CSD_CMD_SET_NORMAL, 1167 + EXT_CSD_BUS_WIDTH, 1168 + EXT_CSD_DDR_BUS_WIDTH_8, 1169 + card->ext_csd.generic_cmd6_time); 1170 + if (err) { 1171 + pr_err("%s: switch to bus width for hs400 failed, err:%d\n", 1172 + mmc_hostname(host), err); 1173 + return err; 1174 + } 1175 + 1176 + /* Switch card to HS400 */ 1177 + val = EXT_CSD_TIMING_HS400 | 1178 + card->drive_strength << EXT_CSD_DRV_STR_SHIFT; 1179 + err = __mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, 1180 + EXT_CSD_HS_TIMING, val, 1181 + card->ext_csd.generic_cmd6_time, 1182 + true, send_status, true); 1183 + if (err) { 1184 + pr_err("%s: switch to hs400 failed, err:%d\n", 1185 + mmc_hostname(host), err); 1186 + return err; 1187 + } 1188 + 1189 + /* Set host controller to HS400 timing and frequency */ 1190 + mmc_set_timing(host, MMC_TIMING_MMC_HS400); 1191 + mmc_set_bus_speed(card); 1192 + 1193 + if (!send_status) { 1194 + err = mmc_switch_status(card); 1195 + if (err) 1196 + goto out_err; 1197 + } 1198 + 1199 + return 0; 1200 + 1201 + out_err: 1202 + pr_err("%s: %s failed, error %d\n", mmc_hostname(card->host), 1203 + __func__, err); 1204 + return err; 1205 + } 1206 + 1207 + int mmc_hs200_to_hs400(struct mmc_card *card) 1208 + { 1209 + return mmc_select_hs400(card); 1054 1210 } 1055 1211 1056 1212 int mmc_hs400_to_hs200(struct mmc_card *card) ··· 1245 1219 static int mmc_select_hs200(struct mmc_card *card) 1246 1220 { 1247 1221 struct mmc_host *host = card->host; 1222 + bool send_status = true; 1223 + unsigned int old_timing; 1248 1224 int err = -EINVAL; 1249 1225 u8 val; 1250 1226 ··· 1262 1234 1263 1235 mmc_select_driver_type(card); 1264 1236 1237 + if (host->caps & MMC_CAP_WAIT_WHILE_BUSY) 1238 + send_status = false; 1239 + 1265 1240 /* 1266 1241 * Set the bus width(4 or 8) with host's support and 1267 1242 * switch to HS200 mode if bus width is set successfully. ··· 1276 1245 err = __mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, 1277 1246 EXT_CSD_HS_TIMING, val, 1278 1247 card->ext_csd.generic_cmd6_time, 1279 - true, true, true); 1280 - if (!err) 1281 - mmc_set_timing(host, MMC_TIMING_MMC_HS200); 1248 + true, send_status, true); 1249 + if (err) 1250 + goto err; 1251 + old_timing = host->ios.timing; 1252 + mmc_set_timing(host, MMC_TIMING_MMC_HS200); 1253 + if (!send_status) { 1254 + err = mmc_switch_status(card); 1255 + /* 1256 + * mmc_select_timing() assumes timing has not changed if 1257 + * it is a switch error. 1258 + */ 1259 + if (err == -EBADMSG) 1260 + mmc_set_timing(host, old_timing); 1261 + } 1282 1262 } 1283 1263 err: 1264 + if (err) 1265 + pr_err("%s: %s failed, error %d\n", mmc_hostname(card->host), 1266 + __func__, err); 1284 1267 return err; 1285 1268 } 1286 1269
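Editor's note: the mmc.c hunk does two things. It moves mmc_select_hs400() below mmc_switch_status() so that helper is in scope, and it threads a send_status flag through the HS200/HS400 switch sequence: hosts advertising MMC_CAP_WAIT_WHILE_BUSY let __mmc_switch() wait out the busy signal itself, while all other hosts now poll card status explicitly, and only after the host-side timing has been updated, so the status command goes out at a clock the card can answer. A minimal user-space sketch of that decision follows; the flag name and the switch/retime/poll order mirror the hunk, but every type and helper is a hypothetical stand-in for the real mmc core.

```c
#include <stdbool.h>
#include <stdio.h>

#define MMC_CAP_WAIT_WHILE_BUSY (1u << 0)	/* host can wait on busy line */

struct host { unsigned caps; int timing; };

/* models __mmc_switch(): with poll_inside, the core itself polls CMD13 */
static int do_switch(struct host *h, bool poll_inside)
{
	(void)h;
	printf("switch issued, core polls CMD13: %s\n",
	       poll_inside ? "yes" : "no");
	return 0;
}

/* models mmc_switch_status(): explicit CMD13 after the timing change */
static int switch_status(struct host *h)
{
	(void)h;
	return 0;
}

static int select_hs(struct host *h)
{
	bool send_status = true;
	int err;

	if (h->caps & MMC_CAP_WAIT_WHILE_BUSY)
		send_status = false;

	err = do_switch(h, send_status);
	if (err)
		return err;

	h->timing = 1;			/* retime the host side first ... */

	if (!send_status)
		err = switch_status(h);	/* ... then ask the card */

	return err;
}

int main(void)
{
	struct host busy_capable = { .caps = MMC_CAP_WAIT_WHILE_BUSY };
	struct host plain = { .caps = 0 };

	select_hs(&busy_capable);	/* no explicit CMD13 poll needed */
	select_hs(&plain);		/* polls after the timing change */
	return 0;
}
```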
+1
drivers/mmc/host/Kconfig
··· 473 473 474 474 config MMC_GOLDFISH 475 475 tristate "goldfish qemu Multimedia Card Interface support" 476 + depends on HAS_DMA 476 477 depends on GOLDFISH || COMPILE_TEST 477 478 help 478 479 This selects the Goldfish Multimedia card Interface emulation
+1 -1
drivers/mmc/host/mtk-sd.c
··· 1276 1276 int start = 0, len = 0; 1277 1277 int start_final = 0, len_final = 0; 1278 1278 u8 final_phase = 0xff; 1279 - struct msdc_delay_phase delay_phase; 1279 + struct msdc_delay_phase delay_phase = { 0, }; 1280 1280 1281 1281 if (delay == 0) { 1282 1282 dev_err(host->dev, "phase error: [map:%x]\n", delay);
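Editor's note: the mtk-sd fix gives delay_phase a zeroed initializer; previously, when the scan found no usable phase window, the struct could be returned with indeterminate fields. A standalone illustration of the hazard, with a hypothetical window scan in place of the real tuning logic (the real msdc_delay_phase has different fields):

```c
#include <stdio.h>

struct delay_phase { int start, length; };

/* hypothetical window scan standing in for the real tuning code */
static struct delay_phase best_window(unsigned map)
{
	struct delay_phase win = { 0, };	/* the fix: defined fallback */
	int i, start = -1;

	for (i = 0; i < 32; i++) {
		if (map & (1u << i)) {
			if (start < 0)
				start = i;
			if (i - start + 1 > win.length) {
				win.start = start;
				win.length = i - start + 1;
			}
		} else {
			start = -1;
		}
	}
	/* without the initializer, map == 0 would return garbage here */
	return win;
}

int main(void)
{
	struct delay_phase w = best_window(0);	/* no usable phase at all */

	printf("start=%d len=%d\n", w.start, w.length);
	return 0;
}
```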
+1 -1
drivers/mmc/host/pxamci.c
··· 805 805 goto out; 806 806 } else { 807 807 mmc->caps |= host->pdata->gpio_card_ro_invert ? 808 - MMC_CAP2_RO_ACTIVE_HIGH : 0; 808 + 0 : MMC_CAP2_RO_ACTIVE_HIGH; 809 809 } 810 810 811 811 if (gpio_is_valid(gpio_cd))
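Editor's note: the pxamci change swaps the ternary arms. When the platform data says the read-only GPIO is inverted, MMC_CAP2_RO_ACTIVE_HIGH must be left clear, and the old code set it in exactly the wrong case. A two-line truth table makes the inversion visible (constant value is illustrative):

```c
#include <stdio.h>

#define MMC_CAP2_RO_ACTIVE_HIGH (1u << 0)

int main(void)
{
	for (int invert = 0; invert <= 1; invert++) {
		unsigned before_fix = invert ? MMC_CAP2_RO_ACTIVE_HIGH : 0;
		unsigned after_fix  = invert ? 0 : MMC_CAP2_RO_ACTIVE_HIGH;

		printf("gpio_card_ro_invert=%d old caps=%#x new caps=%#x\n",
		       invert, before_fix, after_fix);
	}
	return 0;
}
```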
+1
drivers/mtd/nand/jz4740_nand.c
··· 25 25 26 26 #include <linux/gpio.h> 27 27 28 + #include <asm/mach-jz4740/gpio.h> 28 29 #include <asm/mach-jz4740/jz4740_nand.h> 29 30 30 31 #define JZ_REG_NAND_CTRL 0x50
+1 -1
drivers/mtd/nand/nand_base.c
··· 3110 3110 */ 3111 3111 static void nand_shutdown(struct mtd_info *mtd) 3112 3112 { 3113 - nand_get_device(mtd, FL_SHUTDOWN); 3113 + nand_get_device(mtd, FL_PM_SUSPENDED); 3114 3114 } 3115 3115 3116 3116 /* Set default functions */
-2
drivers/net/can/bfin_can.c
··· 501 501 cf->data[2] |= CAN_ERR_PROT_FORM; 502 502 else if (status & SER) 503 503 cf->data[2] |= CAN_ERR_PROT_STUFF; 504 - else 505 - cf->data[2] |= CAN_ERR_PROT_UNSPEC; 506 504 } 507 505 508 506 priv->can.state = state;
+2 -5
drivers/net/can/c_can/c_can.c
··· 962 962 * type of the last error to occur on the CAN bus 963 963 */ 964 964 cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR; 965 - cf->data[2] |= CAN_ERR_PROT_UNSPEC; 966 965 967 966 switch (lec_type) { 968 967 case LEC_STUFF_ERROR: ··· 974 975 break; 975 976 case LEC_ACK_ERROR: 976 977 netdev_dbg(dev, "ack error\n"); 977 - cf->data[3] |= (CAN_ERR_PROT_LOC_ACK | 978 - CAN_ERR_PROT_LOC_ACK_DEL); 978 + cf->data[3] = CAN_ERR_PROT_LOC_ACK; 979 979 break; 980 980 case LEC_BIT1_ERROR: 981 981 netdev_dbg(dev, "bit1 error\n"); ··· 986 988 break; 987 989 case LEC_CRC_ERROR: 988 990 netdev_dbg(dev, "CRC error\n"); 989 - cf->data[3] |= (CAN_ERR_PROT_LOC_CRC_SEQ | 990 - CAN_ERR_PROT_LOC_CRC_DEL); 991 + cf->data[3] = CAN_ERR_PROT_LOC_CRC_SEQ; 991 992 break; 992 993 default: 993 994 break;
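Editor's note: this hunk and the neighboring CAN hunks (bfin_can above; cc770, flexcan, janz-ican3, m_can, pch_can, rcar_can, sja1000, sun4i_can, ti_hecc, the USB adapters, and xilinx_can below) all make the same two corrections. CAN_ERR_PROT_UNSPEC is 0x00, so OR-ing it into data[2] never did anything; and data[3] carries a single enumerated error location, not a bitmask, so it must be assigned rather than OR-ed. The demo below copies the relevant constants from include/uapi/linux/can/error.h to show how OR-ing two locations silently produces a third:

```c
#include <stdio.h>

/* copied from include/uapi/linux/can/error.h so the demo stands alone */
#define CAN_ERR_PROT_UNSPEC		0x00	/* OR-ing this is a no-op */
#define CAN_ERR_PROT_LOC_CRC_SEQ	0x08
#define CAN_ERR_PROT_LOC_CRC_DEL	0x18
#define CAN_ERR_PROT_LOC_ACK		0x19
#define CAN_ERR_PROT_LOC_ACK_DEL	0x1B

int main(void)
{
	unsigned char ored = CAN_ERR_PROT_LOC_ACK | CAN_ERR_PROT_LOC_ACK_DEL;
	unsigned char assigned = CAN_ERR_PROT_LOC_ACK;

	/* 0x19 | 0x1b == 0x1b: the OR silently reports "ACK delimiter" */
	printf("or-ed: %#x, assigned: %#x\n", ored, assigned);

	/* same story for CRC: 0x08 | 0x18 == 0x18, i.e. "CRC delimiter" */
	printf("crc or-ed: %#x\n",
	       CAN_ERR_PROT_LOC_CRC_SEQ | CAN_ERR_PROT_LOC_CRC_DEL);
	return 0;
}
```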
+1 -1
drivers/net/can/cc770/cc770.c
··· 578 578 cf->data[2] |= CAN_ERR_PROT_BIT0; 579 579 break; 580 580 case STAT_LEC_CRC: 581 - cf->data[3] |= CAN_ERR_PROT_LOC_CRC_SEQ; 581 + cf->data[3] = CAN_ERR_PROT_LOC_CRC_SEQ; 582 582 break; 583 583 } 584 584 }
+2 -2
drivers/net/can/flexcan.c
··· 535 535 if (reg_esr & FLEXCAN_ESR_ACK_ERR) { 536 536 netdev_dbg(dev, "ACK_ERR irq\n"); 537 537 cf->can_id |= CAN_ERR_ACK; 538 - cf->data[3] |= CAN_ERR_PROT_LOC_ACK; 538 + cf->data[3] = CAN_ERR_PROT_LOC_ACK; 539 539 tx_errors = 1; 540 540 } 541 541 if (reg_esr & FLEXCAN_ESR_CRC_ERR) { 542 542 netdev_dbg(dev, "CRC_ERR irq\n"); 543 543 cf->data[2] |= CAN_ERR_PROT_BIT; 544 - cf->data[3] |= CAN_ERR_PROT_LOC_CRC_SEQ; 544 + cf->data[3] = CAN_ERR_PROT_LOC_CRC_SEQ; 545 545 rx_errors = 1; 546 546 } 547 547 if (reg_esr & FLEXCAN_ESR_FRM_ERR) {
-1
drivers/net/can/janz-ican3.c
··· 1096 1096 cf->data[2] |= CAN_ERR_PROT_STUFF; 1097 1097 break; 1098 1098 default: 1099 - cf->data[2] |= CAN_ERR_PROT_UNSPEC; 1100 1099 cf->data[3] = ecc & ECC_SEG; 1101 1100 break; 1102 1101 }
+2 -5
drivers/net/can/m_can/m_can.c
··· 487 487 * type of the last error to occur on the CAN bus 488 488 */ 489 489 cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR; 490 - cf->data[2] |= CAN_ERR_PROT_UNSPEC; 491 490 492 491 switch (lec_type) { 493 492 case LEC_STUFF_ERROR: ··· 499 500 break; 500 501 case LEC_ACK_ERROR: 501 502 netdev_dbg(dev, "ack error\n"); 502 - cf->data[3] |= (CAN_ERR_PROT_LOC_ACK | 503 - CAN_ERR_PROT_LOC_ACK_DEL); 503 + cf->data[3] = CAN_ERR_PROT_LOC_ACK; 504 504 break; 505 505 case LEC_BIT1_ERROR: 506 506 netdev_dbg(dev, "bit1 error\n"); ··· 511 513 break; 512 514 case LEC_CRC_ERROR: 513 515 netdev_dbg(dev, "CRC error\n"); 514 - cf->data[3] |= (CAN_ERR_PROT_LOC_CRC_SEQ | 515 - CAN_ERR_PROT_LOC_CRC_DEL); 516 + cf->data[3] = CAN_ERR_PROT_LOC_CRC_SEQ; 516 517 break; 517 518 default: 518 519 break;
+1 -2
drivers/net/can/pch_can.c
··· 559 559 stats->rx_errors++; 560 560 break; 561 561 case PCH_CRC_ERR: 562 - cf->data[3] |= CAN_ERR_PROT_LOC_CRC_SEQ | 563 - CAN_ERR_PROT_LOC_CRC_DEL; 562 + cf->data[3] = CAN_ERR_PROT_LOC_CRC_SEQ; 564 563 priv->can.can_stats.bus_error++; 565 564 stats->rx_errors++; 566 565 break;
+5 -6
drivers/net/can/rcar_can.c
··· 241 241 u8 ecsr; 242 242 243 243 netdev_dbg(priv->ndev, "Bus error interrupt:\n"); 244 - if (skb) { 244 + if (skb) 245 245 cf->can_id |= CAN_ERR_BUSERROR | CAN_ERR_PROT; 246 - cf->data[2] = CAN_ERR_PROT_UNSPEC; 247 - } 246 + 248 247 ecsr = readb(&priv->regs->ecsr); 249 248 if (ecsr & RCAR_CAN_ECSR_ADEF) { 250 249 netdev_dbg(priv->ndev, "ACK Delimiter Error\n"); 251 250 tx_errors++; 252 251 writeb(~RCAR_CAN_ECSR_ADEF, &priv->regs->ecsr); 253 252 if (skb) 254 - cf->data[3] |= CAN_ERR_PROT_LOC_ACK_DEL; 253 + cf->data[3] = CAN_ERR_PROT_LOC_ACK_DEL; 255 254 } 256 255 if (ecsr & RCAR_CAN_ECSR_BE0F) { 257 256 netdev_dbg(priv->ndev, "Bit Error (dominant)\n"); ··· 271 272 rx_errors++; 272 273 writeb(~RCAR_CAN_ECSR_CEF, &priv->regs->ecsr); 273 274 if (skb) 274 - cf->data[3] |= CAN_ERR_PROT_LOC_CRC_SEQ; 275 + cf->data[3] = CAN_ERR_PROT_LOC_CRC_SEQ; 275 276 } 276 277 if (ecsr & RCAR_CAN_ECSR_AEF) { 277 278 netdev_dbg(priv->ndev, "ACK Error\n"); ··· 279 280 writeb(~RCAR_CAN_ECSR_AEF, &priv->regs->ecsr); 280 281 if (skb) { 281 282 cf->can_id |= CAN_ERR_ACK; 282 - cf->data[3] |= CAN_ERR_PROT_LOC_ACK; 283 + cf->data[3] = CAN_ERR_PROT_LOC_ACK; 283 284 } 284 285 } 285 286 if (ecsr & RCAR_CAN_ECSR_FEF) {
+3 -1
drivers/net/can/sja1000/sja1000.c
··· 218 218 priv->write_reg(priv, SJA1000_RXERR, 0x0); 219 219 priv->read_reg(priv, SJA1000_ECC); 220 220 221 + /* clear interrupt flags */ 222 + priv->read_reg(priv, SJA1000_IR); 223 + 221 224 /* leave reset mode */ 222 225 set_normal_mode(dev); 223 226 } ··· 449 446 cf->data[2] |= CAN_ERR_PROT_STUFF; 450 447 break; 451 448 default: 452 - cf->data[2] |= CAN_ERR_PROT_UNSPEC; 453 449 cf->data[3] = ecc & ECC_SEG; 454 450 break; 455 451 }
-1
drivers/net/can/sun4i_can.c
··· 575 575 cf->data[2] |= CAN_ERR_PROT_STUFF; 576 576 break; 577 577 default: 578 - cf->data[2] |= CAN_ERR_PROT_UNSPEC; 579 578 cf->data[3] = (ecc & SUN4I_STA_ERR_SEG_CODE) 580 579 >> 16; 581 580 break;
+2 -5
drivers/net/can/ti_hecc.c
··· 722 722 if (err_status & HECC_BUS_ERROR) { 723 723 ++priv->can.can_stats.bus_error; 724 724 cf->can_id |= CAN_ERR_BUSERROR | CAN_ERR_PROT; 725 - cf->data[2] |= CAN_ERR_PROT_UNSPEC; 726 725 if (err_status & HECC_CANES_FE) { 727 726 hecc_set_bit(priv, HECC_CANES, HECC_CANES_FE); 728 727 cf->data[2] |= CAN_ERR_PROT_FORM; ··· 736 737 } 737 738 if (err_status & HECC_CANES_CRCE) { 738 739 hecc_set_bit(priv, HECC_CANES, HECC_CANES_CRCE); 739 - cf->data[3] |= CAN_ERR_PROT_LOC_CRC_SEQ | 740 - CAN_ERR_PROT_LOC_CRC_DEL; 740 + cf->data[3] = CAN_ERR_PROT_LOC_CRC_SEQ; 741 741 } 742 742 if (err_status & HECC_CANES_ACKE) { 743 743 hecc_set_bit(priv, HECC_CANES, HECC_CANES_ACKE); 744 - cf->data[3] |= CAN_ERR_PROT_LOC_ACK | 745 - CAN_ERR_PROT_LOC_ACK_DEL; 744 + cf->data[3] = CAN_ERR_PROT_LOC_ACK; 746 745 } 747 746 } 748 747
-1
drivers/net/can/usb/ems_usb.c
··· 377 377 cf->data[2] |= CAN_ERR_PROT_STUFF; 378 378 break; 379 379 default: 380 - cf->data[2] |= CAN_ERR_PROT_UNSPEC; 381 380 cf->data[3] = ecc & SJA1000_ECC_SEG; 382 381 break; 383 382 }
-1
drivers/net/can/usb/esd_usb2.c
··· 282 282 cf->data[2] |= CAN_ERR_PROT_STUFF; 283 283 break; 284 284 default: 285 - cf->data[2] |= CAN_ERR_PROT_UNSPEC; 286 285 cf->data[3] = ecc & SJA1000_ECC_SEG; 287 286 break; 288 287 }
+2 -3
drivers/net/can/usb/kvaser_usb.c
··· 944 944 cf->can_id |= CAN_ERR_BUSERROR | CAN_ERR_PROT; 945 945 946 946 if (es->leaf.error_factor & M16C_EF_ACKE) 947 - cf->data[3] |= (CAN_ERR_PROT_LOC_ACK); 947 + cf->data[3] = CAN_ERR_PROT_LOC_ACK; 948 948 if (es->leaf.error_factor & M16C_EF_CRCE) 949 - cf->data[3] |= (CAN_ERR_PROT_LOC_CRC_SEQ | 950 - CAN_ERR_PROT_LOC_CRC_DEL); 949 + cf->data[3] = CAN_ERR_PROT_LOC_CRC_SEQ; 951 950 if (es->leaf.error_factor & M16C_EF_FORME) 952 951 cf->data[2] |= CAN_ERR_PROT_FORM; 953 952 if (es->leaf.error_factor & M16C_EF_STFE)
+1 -3
drivers/net/can/usb/usb_8dev.c
··· 401 401 tx_errors = 1; 402 402 break; 403 403 case USB_8DEV_STATUSMSG_CRC: 404 - cf->data[2] |= CAN_ERR_PROT_UNSPEC; 405 - cf->data[3] |= CAN_ERR_PROT_LOC_CRC_SEQ | 406 - CAN_ERR_PROT_LOC_CRC_DEL; 404 + cf->data[3] = CAN_ERR_PROT_LOC_CRC_SEQ; 407 405 rx_errors = 1; 408 406 break; 409 407 case USB_8DEV_STATUSMSG_BIT0:
+3 -6
drivers/net/can/xilinx_can.c
··· 609 609 610 610 /* Check for error interrupt */ 611 611 if (isr & XCAN_IXR_ERROR_MASK) { 612 - if (skb) { 612 + if (skb) 613 613 cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR; 614 - cf->data[2] |= CAN_ERR_PROT_UNSPEC; 615 - } 616 614 617 615 /* Check for Ack error interrupt */ 618 616 if (err_status & XCAN_ESR_ACKER_MASK) { 619 617 stats->tx_errors++; 620 618 if (skb) { 621 619 cf->can_id |= CAN_ERR_ACK; 622 - cf->data[3] |= CAN_ERR_PROT_LOC_ACK; 620 + cf->data[3] = CAN_ERR_PROT_LOC_ACK; 623 621 } 624 622 } 625 623 ··· 653 655 stats->rx_errors++; 654 656 if (skb) { 655 657 cf->can_id |= CAN_ERR_PROT; 656 - cf->data[3] = CAN_ERR_PROT_LOC_CRC_SEQ | 657 - CAN_ERR_PROT_LOC_CRC_DEL; 658 + cf->data[3] = CAN_ERR_PROT_LOC_CRC_SEQ; 658 659 } 659 660 } 660 661 priv->can.can_stats.bus_error++;
+1
drivers/net/ethernet/Kconfig
··· 29 29 source "drivers/net/ethernet/apple/Kconfig" 30 30 source "drivers/net/ethernet/arc/Kconfig" 31 31 source "drivers/net/ethernet/atheros/Kconfig" 32 + source "drivers/net/ethernet/aurora/Kconfig" 32 33 source "drivers/net/ethernet/cadence/Kconfig" 33 34 source "drivers/net/ethernet/adi/Kconfig" 34 35 source "drivers/net/ethernet/broadcom/Kconfig"
+1
drivers/net/ethernet/Makefile
··· 15 15 obj-$(CONFIG_NET_VENDOR_APPLE) += apple/ 16 16 obj-$(CONFIG_NET_VENDOR_ARC) += arc/ 17 17 obj-$(CONFIG_NET_VENDOR_ATHEROS) += atheros/ 18 + obj-$(CONFIG_NET_VENDOR_AURORA) += aurora/ 18 19 obj-$(CONFIG_NET_CADENCE) += cadence/ 19 20 obj-$(CONFIG_NET_BFIN) += adi/ 20 21 obj-$(CONFIG_NET_VENDOR_BROADCOM) += broadcom/
+3 -2
drivers/net/ethernet/amd/pcnet32.c
··· 1500 1500 return -ENODEV; 1501 1501 } 1502 1502 1503 - if (!pci_set_dma_mask(pdev, PCNET32_DMA_MASK)) { 1503 + err = pci_set_dma_mask(pdev, PCNET32_DMA_MASK); 1504 + if (err) { 1504 1505 if (pcnet32_debug & NETIF_MSG_PROBE) 1505 1506 pr_err("architecture does not support 32bit PCI busmaster DMA\n"); 1506 - return -ENODEV; 1507 + return err; 1507 1508 } 1508 1509 if (!request_region(ioaddr, PCNET32_TOTAL_SIZE, "pcnet32_probe_pci")) { 1509 1510 if (pcnet32_debug & NETIF_MSG_PROBE)
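Editor's note: the pcnet32 fix is a return-convention bug. pci_set_dma_mask() returns 0 on success, so the old `if (!pci_set_dma_mask(...))` entered the error path precisely when the 32-bit mask was accepted, failing every probe; the new code saves the result and propagates the real error. A miniature of the pattern, with set_mask() as a hypothetical stand-in:

```c
#include <stdio.h>

/* hypothetical stand-in for pci_set_dma_mask(): 0 on success,
 * a negative errno on failure */
static int set_mask(int supported)
{
	return supported ? 0 : -5;	/* -EIO style */
}

int main(void)
{
	int err;

	/* old test: "!call()" is true on SUCCESS, so this branch fired
	 * when the mask was accepted and the probe bailed out */
	if (!set_mask(1))
		printf("old logic: bails out on success\n");

	/* fixed pattern: save the result, branch on a real error */
	err = set_mask(1);
	if (err)
		printf("probe failed: %d\n", err);
	else
		printf("new logic: probe continues\n");
	return 0;
}
```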
+19 -16
drivers/net/ethernet/apm/xgene/xgene_enet_main.c
··· 450 450 return NETDEV_TX_OK; 451 451 } 452 452 453 - pdata->ring_ops->wr_cmd(tx_ring, count); 454 453 skb_tx_timestamp(skb); 455 454 456 455 pdata->stats.tx_packets++; 457 456 pdata->stats.tx_bytes += skb->len; 458 457 458 + pdata->ring_ops->wr_cmd(tx_ring, count); 459 459 return NETDEV_TX_OK; 460 460 } 461 461 ··· 688 688 mac_ops->tx_enable(pdata); 689 689 mac_ops->rx_enable(pdata); 690 690 691 + xgene_enet_napi_enable(pdata); 691 692 ret = xgene_enet_register_irq(ndev); 692 693 if (ret) 693 694 return ret; 694 - xgene_enet_napi_enable(pdata); 695 695 696 696 if (pdata->phy_mode == PHY_INTERFACE_MODE_RGMII) 697 697 phy_start(pdata->phy_dev); ··· 715 715 else 716 716 cancel_delayed_work_sync(&pdata->link_work); 717 717 718 - xgene_enet_napi_disable(pdata); 719 - xgene_enet_free_irq(ndev); 720 - xgene_enet_process_ring(pdata->rx_ring, -1); 721 - 722 718 mac_ops->tx_disable(pdata); 723 719 mac_ops->rx_disable(pdata); 720 + 721 + xgene_enet_free_irq(ndev); 722 + xgene_enet_napi_disable(pdata); 723 + xgene_enet_process_ring(pdata->rx_ring, -1); 724 724 725 725 return 0; 726 726 } ··· 1467 1467 } 1468 1468 ndev->hw_features = ndev->features; 1469 1469 1470 - ret = register_netdev(ndev); 1471 - if (ret) { 1472 - netdev_err(ndev, "Failed to register netdev\n"); 1473 - goto err; 1474 - } 1475 - 1476 1470 ret = dma_coerce_mask_and_coherent(dev, DMA_BIT_MASK(64)); 1477 1471 if (ret) { 1478 1472 netdev_err(ndev, "No usable DMA configuration\n"); 1473 + goto err; 1474 + } 1475 + 1476 + ret = register_netdev(ndev); 1477 + if (ret) { 1478 + netdev_err(ndev, "Failed to register netdev\n"); 1479 1479 goto err; 1480 1480 } 1481 1481 ··· 1483 1483 if (ret) 1484 1484 goto err; 1485 1485 1486 - xgene_enet_napi_add(pdata); 1487 1486 mac_ops = pdata->mac_ops; 1488 - if (pdata->phy_mode == PHY_INTERFACE_MODE_RGMII) 1487 + if (pdata->phy_mode == PHY_INTERFACE_MODE_RGMII) { 1489 1488 ret = xgene_enet_mdio_config(pdata); 1490 - else 1489 + if (ret) 1490 + goto err; 1491 + } else { 1491 1492 INIT_DELAYED_WORK(&pdata->link_work, mac_ops->link_state); 1493 + } 1492 1494 1493 - return ret; 1495 + xgene_enet_napi_add(pdata); 1496 + return 0; 1494 1497 err: 1495 1498 unregister_netdev(ndev); 1496 1499 free_netdev(ndev);
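Editor's note: the xgene hunks are all ordering fixes: statistics are accounted before the wr_cmd doorbell hands descriptors to hardware, NAPI is enabled before the IRQ is requested (and torn down in the reverse order on close), the DMA mask is configured before register_netdev() can expose the device, and an mdio-config failure now unwinds instead of returning success. A toy sketch of the publish-last idea behind the first of these; the harness is entirely hypothetical:

```c
#include <stdio.h>

struct stats { unsigned long packets, bytes; };

/* stands in for the wr_cmd doorbell: after this returns, a
 * completion path may read *s concurrently */
static void ring_doorbell(const struct stats *s)
{
	printf("completion can now see %lu packets / %lu bytes\n",
	       s->packets, s->bytes);
}

int main(void)
{
	struct stats s = { 0, 0 };

	/* fixed order: account first ... */
	s.packets += 1;
	s.bytes += 1500;

	/* ... then publish; ringing before accounting risks a
	 * completion observing stale counters */
	ring_doorbell(&s);
	return 0;
}
```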
+2
drivers/net/ethernet/atheros/alx/main.c
··· 1533 1533 .driver_data = ALX_DEV_QUIRK_MSI_INTX_DISABLE_BUG }, 1534 1534 { PCI_VDEVICE(ATTANSIC, ALX_DEV_ID_E2200), 1535 1535 .driver_data = ALX_DEV_QUIRK_MSI_INTX_DISABLE_BUG }, 1536 + { PCI_VDEVICE(ATTANSIC, ALX_DEV_ID_E2400), 1537 + .driver_data = ALX_DEV_QUIRK_MSI_INTX_DISABLE_BUG }, 1536 1538 { PCI_VDEVICE(ATTANSIC, ALX_DEV_ID_AR8162), 1537 1539 .driver_data = ALX_DEV_QUIRK_MSI_INTX_DISABLE_BUG }, 1538 1540 { PCI_VDEVICE(ATTANSIC, ALX_DEV_ID_AR8171) },
+1
drivers/net/ethernet/atheros/alx/reg.h
··· 37 37 38 38 #define ALX_DEV_ID_AR8161 0x1091 39 39 #define ALX_DEV_ID_E2200 0xe091 40 + #define ALX_DEV_ID_E2400 0xe0a1 40 41 #define ALX_DEV_ID_AR8162 0x1090 41 42 #define ALX_DEV_ID_AR8171 0x10A1 42 43 #define ALX_DEV_ID_AR8172 0x10A0
+20
drivers/net/ethernet/aurora/Kconfig
··· 1 + config NET_VENDOR_AURORA 2 + bool "Aurora VLSI devices" 3 + help 4 + If you have a network (Ethernet) device belonging to this class, 5 + say Y. 6 + 7 + Note that the answer to this question doesn't directly affect the 8 + kernel: saying N will just cause the configurator to skip all 9 + questions about Aurora devices. If you say Y, you will be asked 10 + for your specific device in the following questions. 11 + 12 + if NET_VENDOR_AURORA 13 + 14 + config AURORA_NB8800 15 + tristate "Aurora AU-NB8800 support" 16 + select PHYLIB 17 + help 18 + Support for the AU-NB8800 gigabit Ethernet controller. 19 + 20 + endif
+1
drivers/net/ethernet/aurora/Makefile
··· 1 + obj-$(CONFIG_AURORA_NB8800) += nb8800.o
+1552
drivers/net/ethernet/aurora/nb8800.c
··· 1 + /* 2 + * Copyright (C) 2015 Mans Rullgard <mans@mansr.com> 3 + * 4 + * Mostly rewritten, based on driver from Sigma Designs. Original 5 + * copyright notice below. 6 + * 7 + * 8 + * Driver for tangox SMP864x/SMP865x/SMP867x/SMP868x builtin Ethernet Mac. 9 + * 10 + * Copyright (C) 2005 Maxime Bizon <mbizon@freebox.fr> 11 + * 12 + * This program is free software; you can redistribute it and/or modify 13 + * it under the terms of the GNU General Public License as published by 14 + * the Free Software Foundation; either version 2 of the License, or 15 + * (at your option) any later version. 16 + * 17 + * This program is distributed in the hope that it will be useful, 18 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 19 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 20 + * GNU General Public License for more details. 21 + */ 22 + 23 + #include <linux/module.h> 24 + #include <linux/etherdevice.h> 25 + #include <linux/delay.h> 26 + #include <linux/ethtool.h> 27 + #include <linux/interrupt.h> 28 + #include <linux/platform_device.h> 29 + #include <linux/of_device.h> 30 + #include <linux/of_mdio.h> 31 + #include <linux/of_net.h> 32 + #include <linux/dma-mapping.h> 33 + #include <linux/phy.h> 34 + #include <linux/cache.h> 35 + #include <linux/jiffies.h> 36 + #include <linux/io.h> 37 + #include <linux/iopoll.h> 38 + #include <asm/barrier.h> 39 + 40 + #include "nb8800.h" 41 + 42 + static void nb8800_tx_done(struct net_device *dev); 43 + static int nb8800_dma_stop(struct net_device *dev); 44 + 45 + static inline u8 nb8800_readb(struct nb8800_priv *priv, int reg) 46 + { 47 + return readb_relaxed(priv->base + reg); 48 + } 49 + 50 + static inline u32 nb8800_readl(struct nb8800_priv *priv, int reg) 51 + { 52 + return readl_relaxed(priv->base + reg); 53 + } 54 + 55 + static inline void nb8800_writeb(struct nb8800_priv *priv, int reg, u8 val) 56 + { 57 + writeb_relaxed(val, priv->base + reg); 58 + } 59 + 60 + static inline void nb8800_writew(struct nb8800_priv *priv, int reg, u16 val) 61 + { 62 + writew_relaxed(val, priv->base + reg); 63 + } 64 + 65 + static inline void nb8800_writel(struct nb8800_priv *priv, int reg, u32 val) 66 + { 67 + writel_relaxed(val, priv->base + reg); 68 + } 69 + 70 + static inline void nb8800_maskb(struct nb8800_priv *priv, int reg, 71 + u32 mask, u32 val) 72 + { 73 + u32 old = nb8800_readb(priv, reg); 74 + u32 new = (old & ~mask) | (val & mask); 75 + 76 + if (new != old) 77 + nb8800_writeb(priv, reg, new); 78 + } 79 + 80 + static inline void nb8800_maskl(struct nb8800_priv *priv, int reg, 81 + u32 mask, u32 val) 82 + { 83 + u32 old = nb8800_readl(priv, reg); 84 + u32 new = (old & ~mask) | (val & mask); 85 + 86 + if (new != old) 87 + nb8800_writel(priv, reg, new); 88 + } 89 + 90 + static inline void nb8800_modb(struct nb8800_priv *priv, int reg, u8 bits, 91 + bool set) 92 + { 93 + nb8800_maskb(priv, reg, bits, set ? bits : 0); 94 + } 95 + 96 + static inline void nb8800_setb(struct nb8800_priv *priv, int reg, u8 bits) 97 + { 98 + nb8800_maskb(priv, reg, bits, bits); 99 + } 100 + 101 + static inline void nb8800_clearb(struct nb8800_priv *priv, int reg, u8 bits) 102 + { 103 + nb8800_maskb(priv, reg, bits, 0); 104 + } 105 + 106 + static inline void nb8800_modl(struct nb8800_priv *priv, int reg, u32 bits, 107 + bool set) 108 + { 109 + nb8800_maskl(priv, reg, bits, set ? 
bits : 0); 110 + } 111 + 112 + static inline void nb8800_setl(struct nb8800_priv *priv, int reg, u32 bits) 113 + { 114 + nb8800_maskl(priv, reg, bits, bits); 115 + } 116 + 117 + static inline void nb8800_clearl(struct nb8800_priv *priv, int reg, u32 bits) 118 + { 119 + nb8800_maskl(priv, reg, bits, 0); 120 + } 121 + 122 + static int nb8800_mdio_wait(struct mii_bus *bus) 123 + { 124 + struct nb8800_priv *priv = bus->priv; 125 + u32 val; 126 + 127 + return readl_poll_timeout_atomic(priv->base + NB8800_MDIO_CMD, 128 + val, !(val & MDIO_CMD_GO), 1, 1000); 129 + } 130 + 131 + static int nb8800_mdio_cmd(struct mii_bus *bus, u32 cmd) 132 + { 133 + struct nb8800_priv *priv = bus->priv; 134 + int err; 135 + 136 + err = nb8800_mdio_wait(bus); 137 + if (err) 138 + return err; 139 + 140 + nb8800_writel(priv, NB8800_MDIO_CMD, cmd); 141 + udelay(10); 142 + nb8800_writel(priv, NB8800_MDIO_CMD, cmd | MDIO_CMD_GO); 143 + 144 + return nb8800_mdio_wait(bus); 145 + } 146 + 147 + static int nb8800_mdio_read(struct mii_bus *bus, int phy_id, int reg) 148 + { 149 + struct nb8800_priv *priv = bus->priv; 150 + u32 val; 151 + int err; 152 + 153 + err = nb8800_mdio_cmd(bus, MDIO_CMD_ADDR(phy_id) | MDIO_CMD_REG(reg)); 154 + if (err) 155 + return err; 156 + 157 + val = nb8800_readl(priv, NB8800_MDIO_STS); 158 + if (val & MDIO_STS_ERR) 159 + return 0xffff; 160 + 161 + return val & 0xffff; 162 + } 163 + 164 + static int nb8800_mdio_write(struct mii_bus *bus, int phy_id, int reg, u16 val) 165 + { 166 + u32 cmd = MDIO_CMD_ADDR(phy_id) | MDIO_CMD_REG(reg) | 167 + MDIO_CMD_DATA(val) | MDIO_CMD_WR; 168 + 169 + return nb8800_mdio_cmd(bus, cmd); 170 + } 171 + 172 + static void nb8800_mac_tx(struct net_device *dev, bool enable) 173 + { 174 + struct nb8800_priv *priv = netdev_priv(dev); 175 + 176 + while (nb8800_readl(priv, NB8800_TXC_CR) & TCR_EN) 177 + cpu_relax(); 178 + 179 + nb8800_modb(priv, NB8800_TX_CTL1, TX_EN, enable); 180 + } 181 + 182 + static void nb8800_mac_rx(struct net_device *dev, bool enable) 183 + { 184 + nb8800_modb(netdev_priv(dev), NB8800_RX_CTL, RX_EN, enable); 185 + } 186 + 187 + static void nb8800_mac_af(struct net_device *dev, bool enable) 188 + { 189 + nb8800_modb(netdev_priv(dev), NB8800_RX_CTL, RX_AF_EN, enable); 190 + } 191 + 192 + static void nb8800_start_rx(struct net_device *dev) 193 + { 194 + nb8800_setl(netdev_priv(dev), NB8800_RXC_CR, RCR_EN); 195 + } 196 + 197 + static int nb8800_alloc_rx(struct net_device *dev, unsigned int i, bool napi) 198 + { 199 + struct nb8800_priv *priv = netdev_priv(dev); 200 + struct nb8800_rx_desc *rxd = &priv->rx_descs[i]; 201 + struct nb8800_rx_buf *rxb = &priv->rx_bufs[i]; 202 + int size = L1_CACHE_ALIGN(RX_BUF_SIZE); 203 + dma_addr_t dma_addr; 204 + struct page *page; 205 + unsigned long offset; 206 + void *data; 207 + 208 + data = napi ? 
napi_alloc_frag(size) : netdev_alloc_frag(size); 209 + if (!data) 210 + return -ENOMEM; 211 + 212 + page = virt_to_head_page(data); 213 + offset = data - page_address(page); 214 + 215 + dma_addr = dma_map_page(&dev->dev, page, offset, RX_BUF_SIZE, 216 + DMA_FROM_DEVICE); 217 + 218 + if (dma_mapping_error(&dev->dev, dma_addr)) { 219 + skb_free_frag(data); 220 + return -ENOMEM; 221 + } 222 + 223 + rxb->page = page; 224 + rxb->offset = offset; 225 + rxd->desc.s_addr = dma_addr; 226 + 227 + return 0; 228 + } 229 + 230 + static void nb8800_receive(struct net_device *dev, unsigned int i, 231 + unsigned int len) 232 + { 233 + struct nb8800_priv *priv = netdev_priv(dev); 234 + struct nb8800_rx_desc *rxd = &priv->rx_descs[i]; 235 + struct page *page = priv->rx_bufs[i].page; 236 + int offset = priv->rx_bufs[i].offset; 237 + void *data = page_address(page) + offset; 238 + dma_addr_t dma = rxd->desc.s_addr; 239 + struct sk_buff *skb; 240 + unsigned int size; 241 + int err; 242 + 243 + size = len <= RX_COPYBREAK ? len : RX_COPYHDR; 244 + 245 + skb = napi_alloc_skb(&priv->napi, size); 246 + if (!skb) { 247 + netdev_err(dev, "rx skb allocation failed\n"); 248 + dev->stats.rx_dropped++; 249 + return; 250 + } 251 + 252 + if (len <= RX_COPYBREAK) { 253 + dma_sync_single_for_cpu(&dev->dev, dma, len, DMA_FROM_DEVICE); 254 + memcpy(skb_put(skb, len), data, len); 255 + dma_sync_single_for_device(&dev->dev, dma, len, 256 + DMA_FROM_DEVICE); 257 + } else { 258 + err = nb8800_alloc_rx(dev, i, true); 259 + if (err) { 260 + netdev_err(dev, "rx buffer allocation failed\n"); 261 + dev->stats.rx_dropped++; 262 + return; 263 + } 264 + 265 + dma_unmap_page(&dev->dev, dma, RX_BUF_SIZE, DMA_FROM_DEVICE); 266 + memcpy(skb_put(skb, RX_COPYHDR), data, RX_COPYHDR); 267 + skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page, 268 + offset + RX_COPYHDR, len - RX_COPYHDR, 269 + RX_BUF_SIZE); 270 + } 271 + 272 + skb->protocol = eth_type_trans(skb, dev); 273 + napi_gro_receive(&priv->napi, skb); 274 + } 275 + 276 + static void nb8800_rx_error(struct net_device *dev, u32 report) 277 + { 278 + if (report & RX_LENGTH_ERR) 279 + dev->stats.rx_length_errors++; 280 + 281 + if (report & RX_FCS_ERR) 282 + dev->stats.rx_crc_errors++; 283 + 284 + if (report & RX_FIFO_OVERRUN) 285 + dev->stats.rx_fifo_errors++; 286 + 287 + if (report & RX_ALIGNMENT_ERROR) 288 + dev->stats.rx_frame_errors++; 289 + 290 + dev->stats.rx_errors++; 291 + } 292 + 293 + static int nb8800_poll(struct napi_struct *napi, int budget) 294 + { 295 + struct net_device *dev = napi->dev; 296 + struct nb8800_priv *priv = netdev_priv(dev); 297 + struct nb8800_rx_desc *rxd; 298 + unsigned int last = priv->rx_eoc; 299 + unsigned int next; 300 + int work = 0; 301 + 302 + nb8800_tx_done(dev); 303 + 304 + again: 305 + while (work < budget) { 306 + struct nb8800_rx_buf *rxb; 307 + unsigned int len; 308 + 309 + next = (last + 1) % RX_DESC_COUNT; 310 + 311 + rxb = &priv->rx_bufs[next]; 312 + rxd = &priv->rx_descs[next]; 313 + 314 + if (!rxd->report) 315 + break; 316 + 317 + len = RX_BYTES_TRANSFERRED(rxd->report); 318 + 319 + if (IS_RX_ERROR(rxd->report)) 320 + nb8800_rx_error(dev, rxd->report); 321 + else 322 + nb8800_receive(dev, next, len); 323 + 324 + dev->stats.rx_packets++; 325 + dev->stats.rx_bytes += len; 326 + 327 + if (rxd->report & RX_MULTICAST_PKT) 328 + dev->stats.multicast++; 329 + 330 + rxd->report = 0; 331 + last = next; 332 + work++; 333 + } 334 + 335 + if (work) { 336 + priv->rx_descs[last].desc.config |= DESC_EOC; 337 + wmb(); /* ensure new EOC is written before 
clearing old */ 338 + priv->rx_descs[priv->rx_eoc].desc.config &= ~DESC_EOC; 339 + priv->rx_eoc = last; 340 + nb8800_start_rx(dev); 341 + } 342 + 343 + if (work < budget) { 344 + nb8800_writel(priv, NB8800_RX_ITR, priv->rx_itr_irq); 345 + 346 + /* If a packet arrived after we last checked but 347 + * before writing RX_ITR, the interrupt will be 348 + * delayed, so we retrieve it now. 349 + */ 350 + if (priv->rx_descs[next].report) 351 + goto again; 352 + 353 + napi_complete_done(napi, work); 354 + } 355 + 356 + return work; 357 + } 358 + 359 + static void __nb8800_tx_dma_start(struct net_device *dev) 360 + { 361 + struct nb8800_priv *priv = netdev_priv(dev); 362 + struct nb8800_tx_buf *txb; 363 + u32 txc_cr; 364 + 365 + txb = &priv->tx_bufs[priv->tx_queue]; 366 + if (!txb->ready) 367 + return; 368 + 369 + txc_cr = nb8800_readl(priv, NB8800_TXC_CR); 370 + if (txc_cr & TCR_EN) 371 + return; 372 + 373 + nb8800_writel(priv, NB8800_TX_DESC_ADDR, txb->dma_desc); 374 + wmb(); /* ensure desc addr is written before starting DMA */ 375 + nb8800_writel(priv, NB8800_TXC_CR, txc_cr | TCR_EN); 376 + 377 + priv->tx_queue = (priv->tx_queue + txb->chain_len) % TX_DESC_COUNT; 378 + } 379 + 380 + static void nb8800_tx_dma_start(struct net_device *dev) 381 + { 382 + struct nb8800_priv *priv = netdev_priv(dev); 383 + 384 + spin_lock_irq(&priv->tx_lock); 385 + __nb8800_tx_dma_start(dev); 386 + spin_unlock_irq(&priv->tx_lock); 387 + } 388 + 389 + static void nb8800_tx_dma_start_irq(struct net_device *dev) 390 + { 391 + struct nb8800_priv *priv = netdev_priv(dev); 392 + 393 + spin_lock(&priv->tx_lock); 394 + __nb8800_tx_dma_start(dev); 395 + spin_unlock(&priv->tx_lock); 396 + } 397 + 398 + static int nb8800_xmit(struct sk_buff *skb, struct net_device *dev) 399 + { 400 + struct nb8800_priv *priv = netdev_priv(dev); 401 + struct nb8800_tx_desc *txd; 402 + struct nb8800_tx_buf *txb; 403 + struct nb8800_dma_desc *desc; 404 + dma_addr_t dma_addr; 405 + unsigned int dma_len; 406 + unsigned int align; 407 + unsigned int next; 408 + 409 + if (atomic_read(&priv->tx_free) <= NB8800_DESC_LOW) { 410 + netif_stop_queue(dev); 411 + return NETDEV_TX_BUSY; 412 + } 413 + 414 + align = (8 - (uintptr_t)skb->data) & 7; 415 + 416 + dma_len = skb->len - align; 417 + dma_addr = dma_map_single(&dev->dev, skb->data + align, 418 + dma_len, DMA_TO_DEVICE); 419 + 420 + if (dma_mapping_error(&dev->dev, dma_addr)) { 421 + netdev_err(dev, "tx dma mapping error\n"); 422 + kfree_skb(skb); 423 + dev->stats.tx_dropped++; 424 + return NETDEV_TX_OK; 425 + } 426 + 427 + if (atomic_dec_return(&priv->tx_free) <= NB8800_DESC_LOW) { 428 + netif_stop_queue(dev); 429 + skb->xmit_more = 0; 430 + } 431 + 432 + next = priv->tx_next; 433 + txb = &priv->tx_bufs[next]; 434 + txd = &priv->tx_descs[next]; 435 + desc = &txd->desc[0]; 436 + 437 + next = (next + 1) % TX_DESC_COUNT; 438 + 439 + if (align) { 440 + memcpy(txd->buf, skb->data, align); 441 + 442 + desc->s_addr = 443 + txb->dma_desc + offsetof(struct nb8800_tx_desc, buf); 444 + desc->n_addr = txb->dma_desc + sizeof(txd->desc[0]); 445 + desc->config = DESC_BTS(2) | DESC_DS | align; 446 + 447 + desc++; 448 + } 449 + 450 + desc->s_addr = dma_addr; 451 + desc->n_addr = priv->tx_bufs[next].dma_desc; 452 + desc->config = DESC_BTS(2) | DESC_DS | DESC_EOF | dma_len; 453 + 454 + if (!skb->xmit_more) 455 + desc->config |= DESC_EOC; 456 + 457 + txb->skb = skb; 458 + txb->dma_addr = dma_addr; 459 + txb->dma_len = dma_len; 460 + 461 + if (!priv->tx_chain) { 462 + txb->chain_len = 1; 463 + priv->tx_chain = txb; 464 + } 
else { 465 + priv->tx_chain->chain_len++; 466 + } 467 + 468 + netdev_sent_queue(dev, skb->len); 469 + 470 + priv->tx_next = next; 471 + 472 + if (!skb->xmit_more) { 473 + smp_wmb(); 474 + priv->tx_chain->ready = true; 475 + priv->tx_chain = NULL; 476 + nb8800_tx_dma_start(dev); 477 + } 478 + 479 + return NETDEV_TX_OK; 480 + } 481 + 482 + static void nb8800_tx_error(struct net_device *dev, u32 report) 483 + { 484 + if (report & TX_LATE_COLLISION) 485 + dev->stats.collisions++; 486 + 487 + if (report & TX_PACKET_DROPPED) 488 + dev->stats.tx_dropped++; 489 + 490 + if (report & TX_FIFO_UNDERRUN) 491 + dev->stats.tx_fifo_errors++; 492 + 493 + dev->stats.tx_errors++; 494 + } 495 + 496 + static void nb8800_tx_done(struct net_device *dev) 497 + { 498 + struct nb8800_priv *priv = netdev_priv(dev); 499 + unsigned int limit = priv->tx_next; 500 + unsigned int done = priv->tx_done; 501 + unsigned int packets = 0; 502 + unsigned int len = 0; 503 + 504 + while (done != limit) { 505 + struct nb8800_tx_desc *txd = &priv->tx_descs[done]; 506 + struct nb8800_tx_buf *txb = &priv->tx_bufs[done]; 507 + struct sk_buff *skb; 508 + 509 + if (!txd->report) 510 + break; 511 + 512 + skb = txb->skb; 513 + len += skb->len; 514 + 515 + dma_unmap_single(&dev->dev, txb->dma_addr, txb->dma_len, 516 + DMA_TO_DEVICE); 517 + 518 + if (IS_TX_ERROR(txd->report)) { 519 + nb8800_tx_error(dev, txd->report); 520 + kfree_skb(skb); 521 + } else { 522 + consume_skb(skb); 523 + } 524 + 525 + dev->stats.tx_packets++; 526 + dev->stats.tx_bytes += TX_BYTES_TRANSFERRED(txd->report); 527 + dev->stats.collisions += TX_EARLY_COLLISIONS(txd->report); 528 + 529 + txb->skb = NULL; 530 + txb->ready = false; 531 + txd->report = 0; 532 + 533 + done = (done + 1) % TX_DESC_COUNT; 534 + packets++; 535 + } 536 + 537 + if (packets) { 538 + smp_mb__before_atomic(); 539 + atomic_add(packets, &priv->tx_free); 540 + netdev_completed_queue(dev, packets, len); 541 + netif_wake_queue(dev); 542 + priv->tx_done = done; 543 + } 544 + } 545 + 546 + static irqreturn_t nb8800_irq(int irq, void *dev_id) 547 + { 548 + struct net_device *dev = dev_id; 549 + struct nb8800_priv *priv = netdev_priv(dev); 550 + irqreturn_t ret = IRQ_NONE; 551 + u32 val; 552 + 553 + /* tx interrupt */ 554 + val = nb8800_readl(priv, NB8800_TXC_SR); 555 + if (val) { 556 + nb8800_writel(priv, NB8800_TXC_SR, val); 557 + 558 + if (val & TSR_DI) 559 + nb8800_tx_dma_start_irq(dev); 560 + 561 + if (val & TSR_TI) 562 + napi_schedule_irqoff(&priv->napi); 563 + 564 + if (unlikely(val & TSR_DE)) 565 + netdev_err(dev, "TX DMA error\n"); 566 + 567 + /* should never happen with automatic status retrieval */ 568 + if (unlikely(val & TSR_TO)) 569 + netdev_err(dev, "TX Status FIFO overflow\n"); 570 + 571 + ret = IRQ_HANDLED; 572 + } 573 + 574 + /* rx interrupt */ 575 + val = nb8800_readl(priv, NB8800_RXC_SR); 576 + if (val) { 577 + nb8800_writel(priv, NB8800_RXC_SR, val); 578 + 579 + if (likely(val & (RSR_RI | RSR_DI))) { 580 + nb8800_writel(priv, NB8800_RX_ITR, priv->rx_itr_poll); 581 + napi_schedule_irqoff(&priv->napi); 582 + } 583 + 584 + if (unlikely(val & RSR_DE)) 585 + netdev_err(dev, "RX DMA error\n"); 586 + 587 + /* should never happen with automatic status retrieval */ 588 + if (unlikely(val & RSR_RO)) 589 + netdev_err(dev, "RX Status FIFO overflow\n"); 590 + 591 + ret = IRQ_HANDLED; 592 + } 593 + 594 + return ret; 595 + } 596 + 597 + static void nb8800_mac_config(struct net_device *dev) 598 + { 599 + struct nb8800_priv *priv = netdev_priv(dev); 600 + bool gigabit = priv->speed == SPEED_1000; 601 + 
u32 mac_mode_mask = RGMII_MODE | HALF_DUPLEX | GMAC_MODE; 602 + u32 mac_mode = 0; 603 + u32 slot_time; 604 + u32 phy_clk; 605 + u32 ict; 606 + 607 + if (!priv->duplex) 608 + mac_mode |= HALF_DUPLEX; 609 + 610 + if (gigabit) { 611 + if (priv->phy_mode == PHY_INTERFACE_MODE_RGMII) 612 + mac_mode |= RGMII_MODE; 613 + 614 + mac_mode |= GMAC_MODE; 615 + phy_clk = 125000000; 616 + 617 + /* Should be 512 but register is only 8 bits */ 618 + slot_time = 255; 619 + } else { 620 + phy_clk = 25000000; 621 + slot_time = 128; 622 + } 623 + 624 + ict = DIV_ROUND_UP(phy_clk, clk_get_rate(priv->clk)); 625 + 626 + nb8800_writeb(priv, NB8800_IC_THRESHOLD, ict); 627 + nb8800_writeb(priv, NB8800_SLOT_TIME, slot_time); 628 + nb8800_maskb(priv, NB8800_MAC_MODE, mac_mode_mask, mac_mode); 629 + } 630 + 631 + static void nb8800_pause_config(struct net_device *dev) 632 + { 633 + struct nb8800_priv *priv = netdev_priv(dev); 634 + struct phy_device *phydev = priv->phydev; 635 + u32 rxcr; 636 + 637 + if (priv->pause_aneg) { 638 + if (!phydev || !phydev->link) 639 + return; 640 + 641 + priv->pause_rx = phydev->pause; 642 + priv->pause_tx = phydev->pause ^ phydev->asym_pause; 643 + } 644 + 645 + nb8800_modb(priv, NB8800_RX_CTL, RX_PAUSE_EN, priv->pause_rx); 646 + 647 + rxcr = nb8800_readl(priv, NB8800_RXC_CR); 648 + if (!!(rxcr & RCR_FL) == priv->pause_tx) 649 + return; 650 + 651 + if (netif_running(dev)) { 652 + napi_disable(&priv->napi); 653 + netif_tx_lock_bh(dev); 654 + nb8800_dma_stop(dev); 655 + nb8800_modl(priv, NB8800_RXC_CR, RCR_FL, priv->pause_tx); 656 + nb8800_start_rx(dev); 657 + netif_tx_unlock_bh(dev); 658 + napi_enable(&priv->napi); 659 + } else { 660 + nb8800_modl(priv, NB8800_RXC_CR, RCR_FL, priv->pause_tx); 661 + } 662 + } 663 + 664 + static void nb8800_link_reconfigure(struct net_device *dev) 665 + { 666 + struct nb8800_priv *priv = netdev_priv(dev); 667 + struct phy_device *phydev = priv->phydev; 668 + int change = 0; 669 + 670 + if (phydev->link) { 671 + if (phydev->speed != priv->speed) { 672 + priv->speed = phydev->speed; 673 + change = 1; 674 + } 675 + 676 + if (phydev->duplex != priv->duplex) { 677 + priv->duplex = phydev->duplex; 678 + change = 1; 679 + } 680 + 681 + if (change) 682 + nb8800_mac_config(dev); 683 + 684 + nb8800_pause_config(dev); 685 + } 686 + 687 + if (phydev->link != priv->link) { 688 + priv->link = phydev->link; 689 + change = 1; 690 + } 691 + 692 + if (change) 693 + phy_print_status(priv->phydev); 694 + } 695 + 696 + static void nb8800_update_mac_addr(struct net_device *dev) 697 + { 698 + struct nb8800_priv *priv = netdev_priv(dev); 699 + int i; 700 + 701 + for (i = 0; i < ETH_ALEN; i++) 702 + nb8800_writeb(priv, NB8800_SRC_ADDR(i), dev->dev_addr[i]); 703 + 704 + for (i = 0; i < ETH_ALEN; i++) 705 + nb8800_writeb(priv, NB8800_UC_ADDR(i), dev->dev_addr[i]); 706 + } 707 + 708 + static int nb8800_set_mac_address(struct net_device *dev, void *addr) 709 + { 710 + struct sockaddr *sock = addr; 711 + 712 + if (netif_running(dev)) 713 + return -EBUSY; 714 + 715 + ether_addr_copy(dev->dev_addr, sock->sa_data); 716 + nb8800_update_mac_addr(dev); 717 + 718 + return 0; 719 + } 720 + 721 + static void nb8800_mc_init(struct net_device *dev, int val) 722 + { 723 + struct nb8800_priv *priv = netdev_priv(dev); 724 + 725 + nb8800_writeb(priv, NB8800_MC_INIT, val); 726 + readb_poll_timeout_atomic(priv->base + NB8800_MC_INIT, val, !val, 727 + 1, 1000); 728 + } 729 + 730 + static void nb8800_set_rx_mode(struct net_device *dev) 731 + { 732 + struct nb8800_priv *priv = netdev_priv(dev); 733 + 
struct netdev_hw_addr *ha; 734 + int i; 735 + 736 + if (dev->flags & (IFF_PROMISC | IFF_ALLMULTI)) { 737 + nb8800_mac_af(dev, false); 738 + return; 739 + } 740 + 741 + nb8800_mac_af(dev, true); 742 + nb8800_mc_init(dev, 0); 743 + 744 + netdev_for_each_mc_addr(ha, dev) { 745 + for (i = 0; i < ETH_ALEN; i++) 746 + nb8800_writeb(priv, NB8800_MC_ADDR(i), ha->addr[i]); 747 + 748 + nb8800_mc_init(dev, 0xff); 749 + } 750 + } 751 + 752 + #define RX_DESC_SIZE (RX_DESC_COUNT * sizeof(struct nb8800_rx_desc)) 753 + #define TX_DESC_SIZE (TX_DESC_COUNT * sizeof(struct nb8800_tx_desc)) 754 + 755 + static void nb8800_dma_free(struct net_device *dev) 756 + { 757 + struct nb8800_priv *priv = netdev_priv(dev); 758 + unsigned int i; 759 + 760 + if (priv->rx_bufs) { 761 + for (i = 0; i < RX_DESC_COUNT; i++) 762 + if (priv->rx_bufs[i].page) 763 + put_page(priv->rx_bufs[i].page); 764 + 765 + kfree(priv->rx_bufs); 766 + priv->rx_bufs = NULL; 767 + } 768 + 769 + if (priv->tx_bufs) { 770 + for (i = 0; i < TX_DESC_COUNT; i++) 771 + kfree_skb(priv->tx_bufs[i].skb); 772 + 773 + kfree(priv->tx_bufs); 774 + priv->tx_bufs = NULL; 775 + } 776 + 777 + if (priv->rx_descs) { 778 + dma_free_coherent(dev->dev.parent, RX_DESC_SIZE, priv->rx_descs, 779 + priv->rx_desc_dma); 780 + priv->rx_descs = NULL; 781 + } 782 + 783 + if (priv->tx_descs) { 784 + dma_free_coherent(dev->dev.parent, TX_DESC_SIZE, priv->tx_descs, 785 + priv->tx_desc_dma); 786 + priv->tx_descs = NULL; 787 + } 788 + } 789 + 790 + static void nb8800_dma_reset(struct net_device *dev) 791 + { 792 + struct nb8800_priv *priv = netdev_priv(dev); 793 + struct nb8800_rx_desc *rxd; 794 + struct nb8800_tx_desc *txd; 795 + unsigned int i; 796 + 797 + for (i = 0; i < RX_DESC_COUNT; i++) { 798 + dma_addr_t rx_dma = priv->rx_desc_dma + i * sizeof(*rxd); 799 + 800 + rxd = &priv->rx_descs[i]; 801 + rxd->desc.n_addr = rx_dma + sizeof(*rxd); 802 + rxd->desc.r_addr = 803 + rx_dma + offsetof(struct nb8800_rx_desc, report); 804 + rxd->desc.config = priv->rx_dma_config; 805 + rxd->report = 0; 806 + } 807 + 808 + rxd->desc.n_addr = priv->rx_desc_dma; 809 + rxd->desc.config |= DESC_EOC; 810 + 811 + priv->rx_eoc = RX_DESC_COUNT - 1; 812 + 813 + for (i = 0; i < TX_DESC_COUNT; i++) { 814 + struct nb8800_tx_buf *txb = &priv->tx_bufs[i]; 815 + dma_addr_t r_dma = txb->dma_desc + 816 + offsetof(struct nb8800_tx_desc, report); 817 + 818 + txd = &priv->tx_descs[i]; 819 + txd->desc[0].r_addr = r_dma; 820 + txd->desc[1].r_addr = r_dma; 821 + txd->report = 0; 822 + } 823 + 824 + priv->tx_next = 0; 825 + priv->tx_queue = 0; 826 + priv->tx_done = 0; 827 + atomic_set(&priv->tx_free, TX_DESC_COUNT); 828 + 829 + nb8800_writel(priv, NB8800_RX_DESC_ADDR, priv->rx_desc_dma); 830 + 831 + wmb(); /* ensure all setup is written before starting */ 832 + } 833 + 834 + static int nb8800_dma_init(struct net_device *dev) 835 + { 836 + struct nb8800_priv *priv = netdev_priv(dev); 837 + unsigned int n_rx = RX_DESC_COUNT; 838 + unsigned int n_tx = TX_DESC_COUNT; 839 + unsigned int i; 840 + int err; 841 + 842 + priv->rx_descs = dma_alloc_coherent(dev->dev.parent, RX_DESC_SIZE, 843 + &priv->rx_desc_dma, GFP_KERNEL); 844 + if (!priv->rx_descs) 845 + goto err_out; 846 + 847 + priv->rx_bufs = kcalloc(n_rx, sizeof(*priv->rx_bufs), GFP_KERNEL); 848 + if (!priv->rx_bufs) 849 + goto err_out; 850 + 851 + for (i = 0; i < n_rx; i++) { 852 + err = nb8800_alloc_rx(dev, i, false); 853 + if (err) 854 + goto err_out; 855 + } 856 + 857 + priv->tx_descs = dma_alloc_coherent(dev->dev.parent, TX_DESC_SIZE, 858 + &priv->tx_desc_dma, 
GFP_KERNEL); 859 + if (!priv->tx_descs) 860 + goto err_out; 861 + 862 + priv->tx_bufs = kcalloc(n_tx, sizeof(*priv->tx_bufs), GFP_KERNEL); 863 + if (!priv->tx_bufs) 864 + goto err_out; 865 + 866 + for (i = 0; i < n_tx; i++) 867 + priv->tx_bufs[i].dma_desc = 868 + priv->tx_desc_dma + i * sizeof(struct nb8800_tx_desc); 869 + 870 + nb8800_dma_reset(dev); 871 + 872 + return 0; 873 + 874 + err_out: 875 + nb8800_dma_free(dev); 876 + 877 + return -ENOMEM; 878 + } 879 + 880 + static int nb8800_dma_stop(struct net_device *dev) 881 + { 882 + struct nb8800_priv *priv = netdev_priv(dev); 883 + struct nb8800_tx_buf *txb = &priv->tx_bufs[0]; 884 + struct nb8800_tx_desc *txd = &priv->tx_descs[0]; 885 + int retry = 5; 886 + u32 txcr; 887 + u32 rxcr; 888 + int err; 889 + unsigned int i; 890 + 891 + /* wait for tx to finish */ 892 + err = readl_poll_timeout_atomic(priv->base + NB8800_TXC_CR, txcr, 893 + !(txcr & TCR_EN) && 894 + priv->tx_done == priv->tx_next, 895 + 1000, 1000000); 896 + if (err) 897 + return err; 898 + 899 + /* The rx DMA only stops if it reaches the end of chain. 900 + * To make this happen, we set the EOC flag on all rx 901 + * descriptors, put the device in loopback mode, and send 902 + * a few dummy frames. The interrupt handler will ignore 903 + * these since NAPI is disabled and no real frames are in 904 + * the tx queue. 905 + */ 906 + 907 + for (i = 0; i < RX_DESC_COUNT; i++) 908 + priv->rx_descs[i].desc.config |= DESC_EOC; 909 + 910 + txd->desc[0].s_addr = 911 + txb->dma_desc + offsetof(struct nb8800_tx_desc, buf); 912 + txd->desc[0].config = DESC_BTS(2) | DESC_DS | DESC_EOF | DESC_EOC | 8; 913 + memset(txd->buf, 0, sizeof(txd->buf)); 914 + 915 + nb8800_mac_af(dev, false); 916 + nb8800_setb(priv, NB8800_MAC_MODE, LOOPBACK_EN); 917 + 918 + do { 919 + nb8800_writel(priv, NB8800_TX_DESC_ADDR, txb->dma_desc); 920 + wmb(); 921 + nb8800_writel(priv, NB8800_TXC_CR, txcr | TCR_EN); 922 + 923 + err = readl_poll_timeout_atomic(priv->base + NB8800_RXC_CR, 924 + rxcr, !(rxcr & RCR_EN), 925 + 1000, 100000); 926 + } while (err && --retry); 927 + 928 + nb8800_mac_af(dev, true); 929 + nb8800_clearb(priv, NB8800_MAC_MODE, LOOPBACK_EN); 930 + nb8800_dma_reset(dev); 931 + 932 + return retry ? 
0 : -ETIMEDOUT; 933 + } 934 + 935 + static void nb8800_pause_adv(struct net_device *dev) 936 + { 937 + struct nb8800_priv *priv = netdev_priv(dev); 938 + u32 adv = 0; 939 + 940 + if (!priv->phydev) 941 + return; 942 + 943 + if (priv->pause_rx) 944 + adv |= ADVERTISED_Pause | ADVERTISED_Asym_Pause; 945 + if (priv->pause_tx) 946 + adv ^= ADVERTISED_Asym_Pause; 947 + 948 + priv->phydev->supported |= adv; 949 + priv->phydev->advertising |= adv; 950 + } 951 + 952 + static int nb8800_open(struct net_device *dev) 953 + { 954 + struct nb8800_priv *priv = netdev_priv(dev); 955 + int err; 956 + 957 + /* clear any pending interrupts */ 958 + nb8800_writel(priv, NB8800_RXC_SR, 0xf); 959 + nb8800_writel(priv, NB8800_TXC_SR, 0xf); 960 + 961 + err = nb8800_dma_init(dev); 962 + if (err) 963 + return err; 964 + 965 + err = request_irq(dev->irq, nb8800_irq, 0, dev_name(&dev->dev), dev); 966 + if (err) 967 + goto err_free_dma; 968 + 969 + nb8800_mac_rx(dev, true); 970 + nb8800_mac_tx(dev, true); 971 + 972 + priv->phydev = of_phy_connect(dev, priv->phy_node, 973 + nb8800_link_reconfigure, 0, 974 + priv->phy_mode); 975 + if (!priv->phydev) 976 + goto err_free_irq; 977 + 978 + nb8800_pause_adv(dev); 979 + 980 + netdev_reset_queue(dev); 981 + napi_enable(&priv->napi); 982 + netif_start_queue(dev); 983 + 984 + nb8800_start_rx(dev); 985 + phy_start(priv->phydev); 986 + 987 + return 0; 988 + 989 + err_free_irq: 990 + free_irq(dev->irq, dev); 991 + err_free_dma: 992 + nb8800_dma_free(dev); 993 + 994 + return err; 995 + } 996 + 997 + static int nb8800_stop(struct net_device *dev) 998 + { 999 + struct nb8800_priv *priv = netdev_priv(dev); 1000 + 1001 + phy_stop(priv->phydev); 1002 + 1003 + netif_stop_queue(dev); 1004 + napi_disable(&priv->napi); 1005 + 1006 + nb8800_dma_stop(dev); 1007 + nb8800_mac_rx(dev, false); 1008 + nb8800_mac_tx(dev, false); 1009 + 1010 + phy_disconnect(priv->phydev); 1011 + priv->phydev = NULL; 1012 + 1013 + free_irq(dev->irq, dev); 1014 + 1015 + nb8800_dma_free(dev); 1016 + 1017 + return 0; 1018 + } 1019 + 1020 + static int nb8800_ioctl(struct net_device *dev, struct ifreq *rq, int cmd) 1021 + { 1022 + struct nb8800_priv *priv = netdev_priv(dev); 1023 + 1024 + return phy_mii_ioctl(priv->phydev, rq, cmd); 1025 + } 1026 + 1027 + static const struct net_device_ops nb8800_netdev_ops = { 1028 + .ndo_open = nb8800_open, 1029 + .ndo_stop = nb8800_stop, 1030 + .ndo_start_xmit = nb8800_xmit, 1031 + .ndo_set_mac_address = nb8800_set_mac_address, 1032 + .ndo_set_rx_mode = nb8800_set_rx_mode, 1033 + .ndo_do_ioctl = nb8800_ioctl, 1034 + .ndo_change_mtu = eth_change_mtu, 1035 + .ndo_validate_addr = eth_validate_addr, 1036 + }; 1037 + 1038 + static int nb8800_get_settings(struct net_device *dev, struct ethtool_cmd *cmd) 1039 + { 1040 + struct nb8800_priv *priv = netdev_priv(dev); 1041 + 1042 + if (!priv->phydev) 1043 + return -ENODEV; 1044 + 1045 + return phy_ethtool_gset(priv->phydev, cmd); 1046 + } 1047 + 1048 + static int nb8800_set_settings(struct net_device *dev, struct ethtool_cmd *cmd) 1049 + { 1050 + struct nb8800_priv *priv = netdev_priv(dev); 1051 + 1052 + if (!priv->phydev) 1053 + return -ENODEV; 1054 + 1055 + return phy_ethtool_sset(priv->phydev, cmd); 1056 + } 1057 + 1058 + static int nb8800_nway_reset(struct net_device *dev) 1059 + { 1060 + struct nb8800_priv *priv = netdev_priv(dev); 1061 + 1062 + if (!priv->phydev) 1063 + return -ENODEV; 1064 + 1065 + return genphy_restart_aneg(priv->phydev); 1066 + } 1067 + 1068 + static void nb8800_get_pauseparam(struct net_device *dev, 1069 + struct 
ethtool_pauseparam *pp) 1070 + { 1071 + struct nb8800_priv *priv = netdev_priv(dev); 1072 + 1073 + pp->autoneg = priv->pause_aneg; 1074 + pp->rx_pause = priv->pause_rx; 1075 + pp->tx_pause = priv->pause_tx; 1076 + } 1077 + 1078 + static int nb8800_set_pauseparam(struct net_device *dev, 1079 + struct ethtool_pauseparam *pp) 1080 + { 1081 + struct nb8800_priv *priv = netdev_priv(dev); 1082 + 1083 + priv->pause_aneg = pp->autoneg; 1084 + priv->pause_rx = pp->rx_pause; 1085 + priv->pause_tx = pp->tx_pause; 1086 + 1087 + nb8800_pause_adv(dev); 1088 + 1089 + if (!priv->pause_aneg) 1090 + nb8800_pause_config(dev); 1091 + else if (priv->phydev) 1092 + phy_start_aneg(priv->phydev); 1093 + 1094 + return 0; 1095 + } 1096 + 1097 + static const char nb8800_stats_names[][ETH_GSTRING_LEN] = { 1098 + "rx_bytes_ok", 1099 + "rx_frames_ok", 1100 + "rx_undersize_frames", 1101 + "rx_fragment_frames", 1102 + "rx_64_byte_frames", 1103 + "rx_127_byte_frames", 1104 + "rx_255_byte_frames", 1105 + "rx_511_byte_frames", 1106 + "rx_1023_byte_frames", 1107 + "rx_max_size_frames", 1108 + "rx_oversize_frames", 1109 + "rx_bad_fcs_frames", 1110 + "rx_broadcast_frames", 1111 + "rx_multicast_frames", 1112 + "rx_control_frames", 1113 + "rx_pause_frames", 1114 + "rx_unsup_control_frames", 1115 + "rx_align_error_frames", 1116 + "rx_overrun_frames", 1117 + "rx_jabber_frames", 1118 + "rx_bytes", 1119 + "rx_frames", 1120 + 1121 + "tx_bytes_ok", 1122 + "tx_frames_ok", 1123 + "tx_64_byte_frames", 1124 + "tx_127_byte_frames", 1125 + "tx_255_byte_frames", 1126 + "tx_511_byte_frames", 1127 + "tx_1023_byte_frames", 1128 + "tx_max_size_frames", 1129 + "tx_oversize_frames", 1130 + "tx_broadcast_frames", 1131 + "tx_multicast_frames", 1132 + "tx_control_frames", 1133 + "tx_pause_frames", 1134 + "tx_underrun_frames", 1135 + "tx_single_collision_frames", 1136 + "tx_multi_collision_frames", 1137 + "tx_deferred_collision_frames", 1138 + "tx_late_collision_frames", 1139 + "tx_excessive_collision_frames", 1140 + "tx_bytes", 1141 + "tx_frames", 1142 + "tx_collisions", 1143 + }; 1144 + 1145 + #define NB8800_NUM_STATS ARRAY_SIZE(nb8800_stats_names) 1146 + 1147 + static int nb8800_get_sset_count(struct net_device *dev, int sset) 1148 + { 1149 + if (sset == ETH_SS_STATS) 1150 + return NB8800_NUM_STATS; 1151 + 1152 + return -EOPNOTSUPP; 1153 + } 1154 + 1155 + static void nb8800_get_strings(struct net_device *dev, u32 sset, u8 *buf) 1156 + { 1157 + if (sset == ETH_SS_STATS) 1158 + memcpy(buf, &nb8800_stats_names, sizeof(nb8800_stats_names)); 1159 + } 1160 + 1161 + static u32 nb8800_read_stat(struct net_device *dev, int index) 1162 + { 1163 + struct nb8800_priv *priv = netdev_priv(dev); 1164 + 1165 + nb8800_writeb(priv, NB8800_STAT_INDEX, index); 1166 + 1167 + return nb8800_readl(priv, NB8800_STAT_DATA); 1168 + } 1169 + 1170 + static void nb8800_get_ethtool_stats(struct net_device *dev, 1171 + struct ethtool_stats *estats, u64 *st) 1172 + { 1173 + unsigned int i; 1174 + u32 rx, tx; 1175 + 1176 + for (i = 0; i < NB8800_NUM_STATS / 2; i++) { 1177 + rx = nb8800_read_stat(dev, i); 1178 + tx = nb8800_read_stat(dev, i | 0x80); 1179 + st[i] = rx; 1180 + st[i + NB8800_NUM_STATS / 2] = tx; 1181 + } 1182 + } 1183 + 1184 + static const struct ethtool_ops nb8800_ethtool_ops = { 1185 + .get_settings = nb8800_get_settings, 1186 + .set_settings = nb8800_set_settings, 1187 + .nway_reset = nb8800_nway_reset, 1188 + .get_link = ethtool_op_get_link, 1189 + .get_pauseparam = nb8800_get_pauseparam, 1190 + .set_pauseparam = nb8800_set_pauseparam, 1191 + .get_sset_count = 
nb8800_get_sset_count, 1192 + .get_strings = nb8800_get_strings, 1193 + .get_ethtool_stats = nb8800_get_ethtool_stats, 1194 + }; 1195 + 1196 + static int nb8800_hw_init(struct net_device *dev) 1197 + { 1198 + struct nb8800_priv *priv = netdev_priv(dev); 1199 + u32 val; 1200 + 1201 + val = TX_RETRY_EN | TX_PAD_EN | TX_APPEND_FCS; 1202 + nb8800_writeb(priv, NB8800_TX_CTL1, val); 1203 + 1204 + /* Collision retry count */ 1205 + nb8800_writeb(priv, NB8800_TX_CTL2, 5); 1206 + 1207 + val = RX_PAD_STRIP | RX_AF_EN; 1208 + nb8800_writeb(priv, NB8800_RX_CTL, val); 1209 + 1210 + /* Chosen by fair dice roll */ 1211 + nb8800_writeb(priv, NB8800_RANDOM_SEED, 4); 1212 + 1213 + /* TX cycles per deferral period */ 1214 + nb8800_writeb(priv, NB8800_TX_SDP, 12); 1215 + 1216 + /* The following three threshold values have been 1217 + * experimentally determined for good results. 1218 + */ 1219 + 1220 + /* RX/TX FIFO threshold for partial empty (64-bit entries) */ 1221 + nb8800_writeb(priv, NB8800_PE_THRESHOLD, 0); 1222 + 1223 + /* RX/TX FIFO threshold for partial full (64-bit entries) */ 1224 + nb8800_writeb(priv, NB8800_PF_THRESHOLD, 255); 1225 + 1226 + /* Buffer size for transmit (64-bit entries) */ 1227 + nb8800_writeb(priv, NB8800_TX_BUFSIZE, 64); 1228 + 1229 + /* Configure tx DMA */ 1230 + 1231 + val = nb8800_readl(priv, NB8800_TXC_CR); 1232 + val &= TCR_LE; /* keep endian setting */ 1233 + val |= TCR_DM; /* DMA descriptor mode */ 1234 + val |= TCR_RS; /* automatically store tx status */ 1235 + val |= TCR_DIE; /* interrupt on DMA chain completion */ 1236 + val |= TCR_TFI(7); /* interrupt after 7 frames transmitted */ 1237 + val |= TCR_BTS(2); /* 32-byte bus transaction size */ 1238 + nb8800_writel(priv, NB8800_TXC_CR, val); 1239 + 1240 + /* TX complete interrupt after 10 ms or 7 frames (see above) */ 1241 + val = clk_get_rate(priv->clk) / 100; 1242 + nb8800_writel(priv, NB8800_TX_ITR, val); 1243 + 1244 + /* Configure rx DMA */ 1245 + 1246 + val = nb8800_readl(priv, NB8800_RXC_CR); 1247 + val &= RCR_LE; /* keep endian setting */ 1248 + val |= RCR_DM; /* DMA descriptor mode */ 1249 + val |= RCR_RS; /* automatically store rx status */ 1250 + val |= RCR_DIE; /* interrupt at end of DMA chain */ 1251 + val |= RCR_RFI(7); /* interrupt after 7 frames received */ 1252 + val |= RCR_BTS(2); /* 32-byte bus transaction size */ 1253 + nb8800_writel(priv, NB8800_RXC_CR, val); 1254 + 1255 + /* The rx interrupt can fire before the DMA has completed 1256 + * unless a small delay is added. 50 us is hopefully enough. 1257 + */ 1258 + priv->rx_itr_irq = clk_get_rate(priv->clk) / 20000; 1259 + 1260 + /* In NAPI poll mode we want to disable interrupts, but the 1261 + * hardware does not permit this. Delay 10 ms instead. 
1262 + */ 1263 + priv->rx_itr_poll = clk_get_rate(priv->clk) / 100; 1264 + 1265 + nb8800_writel(priv, NB8800_RX_ITR, priv->rx_itr_irq); 1266 + 1267 + priv->rx_dma_config = RX_BUF_SIZE | DESC_BTS(2) | DESC_DS | DESC_EOF; 1268 + 1269 + /* Flow control settings */ 1270 + 1271 + /* Pause time of 0.1 ms */ 1272 + val = 100000 / 512; 1273 + nb8800_writeb(priv, NB8800_PQ1, val >> 8); 1274 + nb8800_writeb(priv, NB8800_PQ2, val & 0xff); 1275 + 1276 + /* Auto-negotiate by default */ 1277 + priv->pause_aneg = true; 1278 + priv->pause_rx = true; 1279 + priv->pause_tx = true; 1280 + 1281 + nb8800_mc_init(dev, 0); 1282 + 1283 + return 0; 1284 + } 1285 + 1286 + static int nb8800_tangox_init(struct net_device *dev) 1287 + { 1288 + struct nb8800_priv *priv = netdev_priv(dev); 1289 + u32 pad_mode = PAD_MODE_MII; 1290 + 1291 + switch (priv->phy_mode) { 1292 + case PHY_INTERFACE_MODE_MII: 1293 + case PHY_INTERFACE_MODE_GMII: 1294 + pad_mode = PAD_MODE_MII; 1295 + break; 1296 + 1297 + case PHY_INTERFACE_MODE_RGMII: 1298 + pad_mode = PAD_MODE_RGMII; 1299 + break; 1300 + 1301 + case PHY_INTERFACE_MODE_RGMII_TXID: 1302 + pad_mode = PAD_MODE_RGMII | PAD_MODE_GTX_CLK_DELAY; 1303 + break; 1304 + 1305 + default: 1306 + dev_err(dev->dev.parent, "unsupported phy mode %s\n", 1307 + phy_modes(priv->phy_mode)); 1308 + return -EINVAL; 1309 + } 1310 + 1311 + nb8800_writeb(priv, NB8800_TANGOX_PAD_MODE, pad_mode); 1312 + 1313 + return 0; 1314 + } 1315 + 1316 + static int nb8800_tangox_reset(struct net_device *dev) 1317 + { 1318 + struct nb8800_priv *priv = netdev_priv(dev); 1319 + int clk_div; 1320 + 1321 + nb8800_writeb(priv, NB8800_TANGOX_RESET, 0); 1322 + usleep_range(1000, 10000); 1323 + nb8800_writeb(priv, NB8800_TANGOX_RESET, 1); 1324 + 1325 + wmb(); /* ensure reset is cleared before proceeding */ 1326 + 1327 + clk_div = DIV_ROUND_UP(clk_get_rate(priv->clk), 2 * MAX_MDC_CLOCK); 1328 + nb8800_writew(priv, NB8800_TANGOX_MDIO_CLKDIV, clk_div); 1329 + 1330 + return 0; 1331 + } 1332 + 1333 + static const struct nb8800_ops nb8800_tangox_ops = { 1334 + .init = nb8800_tangox_init, 1335 + .reset = nb8800_tangox_reset, 1336 + }; 1337 + 1338 + static int nb8800_tango4_init(struct net_device *dev) 1339 + { 1340 + struct nb8800_priv *priv = netdev_priv(dev); 1341 + int err; 1342 + 1343 + err = nb8800_tangox_init(dev); 1344 + if (err) 1345 + return err; 1346 + 1347 + /* On tango4 interrupt on DMA completion per frame works and gives 1348 + * better performance despite generating more rx interrupts. 
1349 + */ 1350 + 1351 + /* Disable unnecessary interrupt on rx completion */ 1352 + nb8800_clearl(priv, NB8800_RXC_CR, RCR_RFI(7)); 1353 + 1354 + /* Request interrupt on descriptor DMA completion */ 1355 + priv->rx_dma_config |= DESC_ID; 1356 + 1357 + return 0; 1358 + } 1359 + 1360 + static const struct nb8800_ops nb8800_tango4_ops = { 1361 + .init = nb8800_tango4_init, 1362 + .reset = nb8800_tangox_reset, 1363 + }; 1364 + 1365 + static const struct of_device_id nb8800_dt_ids[] = { 1366 + { 1367 + .compatible = "aurora,nb8800", 1368 + }, 1369 + { 1370 + .compatible = "sigma,smp8642-ethernet", 1371 + .data = &nb8800_tangox_ops, 1372 + }, 1373 + { 1374 + .compatible = "sigma,smp8734-ethernet", 1375 + .data = &nb8800_tango4_ops, 1376 + }, 1377 + { } 1378 + }; 1379 + 1380 + static int nb8800_probe(struct platform_device *pdev) 1381 + { 1382 + const struct of_device_id *match; 1383 + const struct nb8800_ops *ops = NULL; 1384 + struct nb8800_priv *priv; 1385 + struct resource *res; 1386 + struct net_device *dev; 1387 + struct mii_bus *bus; 1388 + const unsigned char *mac; 1389 + void __iomem *base; 1390 + int irq; 1391 + int ret; 1392 + 1393 + match = of_match_device(nb8800_dt_ids, &pdev->dev); 1394 + if (match) 1395 + ops = match->data; 1396 + 1397 + irq = platform_get_irq(pdev, 0); 1398 + if (irq <= 0) { 1399 + dev_err(&pdev->dev, "No IRQ\n"); 1400 + return -EINVAL; 1401 + } 1402 + 1403 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 1404 + base = devm_ioremap_resource(&pdev->dev, res); 1405 + if (IS_ERR(base)) 1406 + return PTR_ERR(base); 1407 + 1408 + dev_dbg(&pdev->dev, "AU-NB8800 Ethernet at %pa\n", &res->start); 1409 + 1410 + dev = alloc_etherdev(sizeof(*priv)); 1411 + if (!dev) 1412 + return -ENOMEM; 1413 + 1414 + platform_set_drvdata(pdev, dev); 1415 + SET_NETDEV_DEV(dev, &pdev->dev); 1416 + 1417 + priv = netdev_priv(dev); 1418 + priv->base = base; 1419 + 1420 + priv->phy_mode = of_get_phy_mode(pdev->dev.of_node); 1421 + if (priv->phy_mode < 0) 1422 + priv->phy_mode = PHY_INTERFACE_MODE_RGMII; 1423 + 1424 + priv->clk = devm_clk_get(&pdev->dev, NULL); 1425 + if (IS_ERR(priv->clk)) { 1426 + dev_err(&pdev->dev, "failed to get clock\n"); 1427 + ret = PTR_ERR(priv->clk); 1428 + goto err_free_dev; 1429 + } 1430 + 1431 + ret = clk_prepare_enable(priv->clk); 1432 + if (ret) 1433 + goto err_free_dev; 1434 + 1435 + spin_lock_init(&priv->tx_lock); 1436 + 1437 + if (ops && ops->reset) { 1438 + ret = ops->reset(dev); 1439 + if (ret) 1440 + goto err_free_dev; 1441 + } 1442 + 1443 + bus = devm_mdiobus_alloc(&pdev->dev); 1444 + if (!bus) { 1445 + ret = -ENOMEM; 1446 + goto err_disable_clk; 1447 + } 1448 + 1449 + bus->name = "nb8800-mii"; 1450 + bus->read = nb8800_mdio_read; 1451 + bus->write = nb8800_mdio_write; 1452 + bus->parent = &pdev->dev; 1453 + snprintf(bus->id, MII_BUS_ID_SIZE, "%lx.nb8800-mii", 1454 + (unsigned long)res->start); 1455 + bus->priv = priv; 1456 + 1457 + ret = of_mdiobus_register(bus, pdev->dev.of_node); 1458 + if (ret) { 1459 + dev_err(&pdev->dev, "failed to register MII bus\n"); 1460 + goto err_disable_clk; 1461 + } 1462 + 1463 + priv->phy_node = of_parse_phandle(pdev->dev.of_node, "phy-handle", 0); 1464 + if (!priv->phy_node) { 1465 + dev_err(&pdev->dev, "no PHY specified\n"); 1466 + ret = -ENODEV; 1467 + goto err_free_bus; 1468 + } 1469 + 1470 + priv->mii_bus = bus; 1471 + 1472 + ret = nb8800_hw_init(dev); 1473 + if (ret) 1474 + goto err_free_bus; 1475 + 1476 + if (ops && ops->init) { 1477 + ret = ops->init(dev); 1478 + if (ret) 1479 + goto err_free_bus; 1480 + } 1481 
+ 1482 + dev->netdev_ops = &nb8800_netdev_ops; 1483 + dev->ethtool_ops = &nb8800_ethtool_ops; 1484 + dev->flags |= IFF_MULTICAST; 1485 + dev->irq = irq; 1486 + 1487 + mac = of_get_mac_address(pdev->dev.of_node); 1488 + if (mac) 1489 + ether_addr_copy(dev->dev_addr, mac); 1490 + 1491 + if (!is_valid_ether_addr(dev->dev_addr)) 1492 + eth_hw_addr_random(dev); 1493 + 1494 + nb8800_update_mac_addr(dev); 1495 + 1496 + netif_carrier_off(dev); 1497 + 1498 + ret = register_netdev(dev); 1499 + if (ret) { 1500 + netdev_err(dev, "failed to register netdev\n"); 1501 + goto err_free_dma; 1502 + } 1503 + 1504 + netif_napi_add(dev, &priv->napi, nb8800_poll, NAPI_POLL_WEIGHT); 1505 + 1506 + netdev_info(dev, "MAC address %pM\n", dev->dev_addr); 1507 + 1508 + return 0; 1509 + 1510 + err_free_dma: 1511 + nb8800_dma_free(dev); 1512 + err_free_bus: 1513 + mdiobus_unregister(bus); 1514 + err_disable_clk: 1515 + clk_disable_unprepare(priv->clk); 1516 + err_free_dev: 1517 + free_netdev(dev); 1518 + 1519 + return ret; 1520 + } 1521 + 1522 + static int nb8800_remove(struct platform_device *pdev) 1523 + { 1524 + struct net_device *ndev = platform_get_drvdata(pdev); 1525 + struct nb8800_priv *priv = netdev_priv(ndev); 1526 + 1527 + unregister_netdev(ndev); 1528 + 1529 + mdiobus_unregister(priv->mii_bus); 1530 + 1531 + clk_disable_unprepare(priv->clk); 1532 + 1533 + nb8800_dma_free(ndev); 1534 + free_netdev(ndev); 1535 + 1536 + return 0; 1537 + } 1538 + 1539 + static struct platform_driver nb8800_driver = { 1540 + .driver = { 1541 + .name = "nb8800", 1542 + .of_match_table = nb8800_dt_ids, 1543 + }, 1544 + .probe = nb8800_probe, 1545 + .remove = nb8800_remove, 1546 + }; 1547 + 1548 + module_platform_driver(nb8800_driver); 1549 + 1550 + MODULE_DESCRIPTION("Aurora AU-NB8800 Ethernet driver"); 1551 + MODULE_AUTHOR("Mans Rullgard <mans@mansr.com>"); 1552 + MODULE_LICENSE("GPL");
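The probe function above acquires its resources in a fixed order and unwinds them in reverse through the goto ladder at the bottom, each failure jumping to the label that releases everything taken so far. A minimal sketch of the idiom, with illustrative helper names rather than this driver's actual calls:

	static int foo_probe(struct platform_device *pdev)
	{
		int ret;

		ret = foo_take_a(pdev);		/* e.g. alloc_etherdev() */
		if (ret)
			return ret;

		ret = foo_take_b(pdev);		/* e.g. clk_prepare_enable() */
		if (ret)
			goto err_undo_a;

		ret = foo_take_c(pdev);		/* e.g. of_mdiobus_register() */
		if (ret)
			goto err_undo_b;

		return 0;

	err_undo_b:
		foo_undo_b(pdev);		/* release in reverse order */
	err_undo_a:
		foo_undo_a(pdev);
		return ret;
	}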
+316
drivers/net/ethernet/aurora/nb8800.h
··· 1 + #ifndef _NB8800_H_ 2 + #define _NB8800_H_ 3 + 4 + #include <linux/types.h> 5 + #include <linux/skbuff.h> 6 + #include <linux/phy.h> 7 + #include <linux/clk.h> 8 + #include <linux/bitops.h> 9 + 10 + #define RX_DESC_COUNT 256 11 + #define TX_DESC_COUNT 256 12 + 13 + #define NB8800_DESC_LOW 4 14 + 15 + #define RX_BUF_SIZE 1552 16 + 17 + #define RX_COPYBREAK 256 18 + #define RX_COPYHDR 128 19 + 20 + #define MAX_MDC_CLOCK 2500000 21 + 22 + /* Stargate Solutions SSN8800 core registers */ 23 + #define NB8800_TX_CTL1 0x000 24 + #define TX_TPD BIT(5) 25 + #define TX_APPEND_FCS BIT(4) 26 + #define TX_PAD_EN BIT(3) 27 + #define TX_RETRY_EN BIT(2) 28 + #define TX_EN BIT(0) 29 + 30 + #define NB8800_TX_CTL2 0x001 31 + 32 + #define NB8800_RX_CTL 0x004 33 + #define RX_BC_DISABLE BIT(7) 34 + #define RX_RUNT BIT(6) 35 + #define RX_AF_EN BIT(5) 36 + #define RX_PAUSE_EN BIT(3) 37 + #define RX_SEND_CRC BIT(2) 38 + #define RX_PAD_STRIP BIT(1) 39 + #define RX_EN BIT(0) 40 + 41 + #define NB8800_RANDOM_SEED 0x008 42 + #define NB8800_TX_SDP 0x14 43 + #define NB8800_TX_TPDP1 0x18 44 + #define NB8800_TX_TPDP2 0x19 45 + #define NB8800_SLOT_TIME 0x1c 46 + 47 + #define NB8800_MDIO_CMD 0x020 48 + #define MDIO_CMD_GO BIT(31) 49 + #define MDIO_CMD_WR BIT(26) 50 + #define MDIO_CMD_ADDR(x) ((x) << 21) 51 + #define MDIO_CMD_REG(x) ((x) << 16) 52 + #define MDIO_CMD_DATA(x) ((x) << 0) 53 + 54 + #define NB8800_MDIO_STS 0x024 55 + #define MDIO_STS_ERR BIT(31) 56 + 57 + #define NB8800_MC_ADDR(i) (0x028 + (i)) 58 + #define NB8800_MC_INIT 0x02e 59 + #define NB8800_UC_ADDR(i) (0x03c + (i)) 60 + 61 + #define NB8800_MAC_MODE 0x044 62 + #define RGMII_MODE BIT(7) 63 + #define HALF_DUPLEX BIT(4) 64 + #define BURST_EN BIT(3) 65 + #define LOOPBACK_EN BIT(2) 66 + #define GMAC_MODE BIT(0) 67 + 68 + #define NB8800_IC_THRESHOLD 0x050 69 + #define NB8800_PE_THRESHOLD 0x051 70 + #define NB8800_PF_THRESHOLD 0x052 71 + #define NB8800_TX_BUFSIZE 0x054 72 + #define NB8800_FIFO_CTL 0x056 73 + #define NB8800_PQ1 0x060 74 + #define NB8800_PQ2 0x061 75 + #define NB8800_SRC_ADDR(i) (0x06a + (i)) 76 + #define NB8800_STAT_DATA 0x078 77 + #define NB8800_STAT_INDEX 0x07c 78 + #define NB8800_STAT_CLEAR 0x07d 79 + 80 + #define NB8800_SLEEP_MODE 0x07e 81 + #define SLEEP_MODE BIT(0) 82 + 83 + #define NB8800_WAKEUP 0x07f 84 + #define WAKEUP BIT(0) 85 + 86 + /* Aurora NB8800 host interface registers */ 87 + #define NB8800_TXC_CR 0x100 88 + #define TCR_LK BIT(12) 89 + #define TCR_DS BIT(11) 90 + #define TCR_BTS(x) (((x) & 0x7) << 8) 91 + #define TCR_DIE BIT(7) 92 + #define TCR_TFI(x) (((x) & 0x7) << 4) 93 + #define TCR_LE BIT(3) 94 + #define TCR_RS BIT(2) 95 + #define TCR_DM BIT(1) 96 + #define TCR_EN BIT(0) 97 + 98 + #define NB8800_TXC_SR 0x104 99 + #define TSR_DE BIT(3) 100 + #define TSR_DI BIT(2) 101 + #define TSR_TO BIT(1) 102 + #define TSR_TI BIT(0) 103 + 104 + #define NB8800_TX_SAR 0x108 105 + #define NB8800_TX_DESC_ADDR 0x10c 106 + 107 + #define NB8800_TX_REPORT_ADDR 0x110 108 + #define TX_BYTES_TRANSFERRED(x) (((x) >> 16) & 0xffff) 109 + #define TX_FIRST_DEFERRAL BIT(7) 110 + #define TX_EARLY_COLLISIONS(x) (((x) >> 3) & 0xf) 111 + #define TX_LATE_COLLISION BIT(2) 112 + #define TX_PACKET_DROPPED BIT(1) 113 + #define TX_FIFO_UNDERRUN BIT(0) 114 + #define IS_TX_ERROR(r) ((r) & 0x07) 115 + 116 + #define NB8800_TX_FIFO_SR 0x114 117 + #define NB8800_TX_ITR 0x118 118 + 119 + #define NB8800_RXC_CR 0x200 120 + #define RCR_FL BIT(13) 121 + #define RCR_LK BIT(12) 122 + #define RCR_DS BIT(11) 123 + #define RCR_BTS(x) (((x) & 7) << 8) 124 + #define RCR_DIE BIT(7) 
125 + #define RCR_RFI(x) (((x) & 7) << 4) 126 + #define RCR_LE BIT(3) 127 + #define RCR_RS BIT(2) 128 + #define RCR_DM BIT(1) 129 + #define RCR_EN BIT(0) 130 + 131 + #define NB8800_RXC_SR 0x204 132 + #define RSR_DE BIT(3) 133 + #define RSR_DI BIT(2) 134 + #define RSR_RO BIT(1) 135 + #define RSR_RI BIT(0) 136 + 137 + #define NB8800_RX_SAR 0x208 138 + #define NB8800_RX_DESC_ADDR 0x20c 139 + 140 + #define NB8800_RX_REPORT_ADDR 0x210 141 + #define RX_BYTES_TRANSFERRED(x) (((x) >> 16) & 0xFFFF) 142 + #define RX_MULTICAST_PKT BIT(9) 143 + #define RX_BROADCAST_PKT BIT(8) 144 + #define RX_LENGTH_ERR BIT(7) 145 + #define RX_FCS_ERR BIT(6) 146 + #define RX_RUNT_PKT BIT(5) 147 + #define RX_FIFO_OVERRUN BIT(4) 148 + #define RX_LATE_COLLISION BIT(3) 149 + #define RX_ALIGNMENT_ERROR BIT(2) 150 + #define RX_ERROR_MASK 0xfc 151 + #define IS_RX_ERROR(r) ((r) & RX_ERROR_MASK) 152 + 153 + #define NB8800_RX_FIFO_SR 0x214 154 + #define NB8800_RX_ITR 0x218 155 + 156 + /* Sigma Designs SMP86xx additional registers */ 157 + #define NB8800_TANGOX_PAD_MODE 0x400 158 + #define PAD_MODE_MASK 0x7 159 + #define PAD_MODE_MII 0x0 160 + #define PAD_MODE_RGMII 0x1 161 + #define PAD_MODE_GTX_CLK_INV BIT(3) 162 + #define PAD_MODE_GTX_CLK_DELAY BIT(4) 163 + 164 + #define NB8800_TANGOX_MDIO_CLKDIV 0x420 165 + #define NB8800_TANGOX_RESET 0x424 166 + 167 + /* Hardware DMA descriptor */ 168 + struct nb8800_dma_desc { 169 + u32 s_addr; /* start address */ 170 + u32 n_addr; /* next descriptor address */ 171 + u32 r_addr; /* report address */ 172 + u32 config; 173 + } __aligned(8); 174 + 175 + #define DESC_ID BIT(23) 176 + #define DESC_EOC BIT(22) 177 + #define DESC_EOF BIT(21) 178 + #define DESC_LK BIT(20) 179 + #define DESC_DS BIT(19) 180 + #define DESC_BTS(x) (((x) & 0x7) << 16) 181 + 182 + /* DMA descriptor and associated data for rx. 183 + * Allocated from coherent memory. 184 + */ 185 + struct nb8800_rx_desc { 186 + /* DMA descriptor */ 187 + struct nb8800_dma_desc desc; 188 + 189 + /* Status report filled in by hardware */ 190 + u32 report; 191 + }; 192 + 193 + /* Address of buffer on rx ring */ 194 + struct nb8800_rx_buf { 195 + struct page *page; 196 + unsigned long offset; 197 + }; 198 + 199 + /* DMA descriptors and associated data for tx. 200 + * Allocated from coherent memory. 201 + */ 202 + struct nb8800_tx_desc { 203 + /* DMA descriptor. The second descriptor is used if packet 204 + * data is unaligned. 205 + */ 206 + struct nb8800_dma_desc desc[2]; 207 + 208 + /* Status report filled in by hardware */ 209 + u32 report; 210 + 211 + /* Bounce buffer for initial unaligned part of packet */ 212 + u8 buf[8] __aligned(8); 213 + }; 214 + 215 + /* Packet in tx queue */ 216 + struct nb8800_tx_buf { 217 + /* Currently queued skb */ 218 + struct sk_buff *skb; 219 + 220 + /* DMA address of the first descriptor */ 221 + dma_addr_t dma_desc; 222 + 223 + /* DMA address of packet data */ 224 + dma_addr_t dma_addr; 225 + 226 + /* Length of DMA mapping, less than skb->len if alignment 227 + * buffer is used. 
228 + */ 229 + unsigned int dma_len; 230 + 231 + /* Number of packets in chain starting here */ 232 + unsigned int chain_len; 233 + 234 + /* Packet chain ready to be submitted to hardware */ 235 + bool ready; 236 + }; 237 + 238 + struct nb8800_priv { 239 + struct napi_struct napi; 240 + 241 + void __iomem *base; 242 + 243 + /* RX DMA descriptors */ 244 + struct nb8800_rx_desc *rx_descs; 245 + 246 + /* RX buffers referenced by DMA descriptors */ 247 + struct nb8800_rx_buf *rx_bufs; 248 + 249 + /* Current end of chain */ 250 + u32 rx_eoc; 251 + 252 + /* Value for rx interrupt time register in NAPI interrupt mode */ 253 + u32 rx_itr_irq; 254 + 255 + /* Value for rx interrupt time register in NAPI poll mode */ 256 + u32 rx_itr_poll; 257 + 258 + /* Value for config field of rx DMA descriptors */ 259 + u32 rx_dma_config; 260 + 261 + /* TX DMA descriptors */ 262 + struct nb8800_tx_desc *tx_descs; 263 + 264 + /* TX packet queue */ 265 + struct nb8800_tx_buf *tx_bufs; 266 + 267 + /* Number of free tx queue entries */ 268 + atomic_t tx_free; 269 + 270 + /* First free tx queue entry */ 271 + u32 tx_next; 272 + 273 + /* Next buffer to transmit */ 274 + u32 tx_queue; 275 + 276 + /* Start of current packet chain */ 277 + struct nb8800_tx_buf *tx_chain; 278 + 279 + /* Next buffer to reclaim */ 280 + u32 tx_done; 281 + 282 + /* Lock for DMA activation */ 283 + spinlock_t tx_lock; 284 + 285 + struct mii_bus *mii_bus; 286 + struct device_node *phy_node; 287 + struct phy_device *phydev; 288 + 289 + /* PHY connection type from DT */ 290 + int phy_mode; 291 + 292 + /* Current link status */ 293 + int speed; 294 + int duplex; 295 + int link; 296 + 297 + /* Pause settings */ 298 + bool pause_aneg; 299 + bool pause_rx; 300 + bool pause_tx; 301 + 302 + /* DMA base address of rx descriptors, see rx_descs above */ 303 + dma_addr_t rx_desc_dma; 304 + 305 + /* DMA base address of tx descriptors, see tx_descs above */ 306 + dma_addr_t tx_desc_dma; 307 + 308 + struct clk *clk; 309 + }; 310 + 311 + struct nb8800_ops { 312 + int (*init)(struct net_device *dev); 313 + int (*reset)(struct net_device *dev); 314 + }; 315 + 316 + #endif /* _NB8800_H_ */
+2 -2
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
··· 10139 10139 DP(BNX2X_MSG_SP, "Invalid vxlan port\n"); 10140 10140 return; 10141 10141 } 10142 - bp->vxlan_dst_port--; 10143 - if (bp->vxlan_dst_port) 10142 + bp->vxlan_dst_port_count--; 10143 + if (bp->vxlan_dst_port_count) 10144 10144 return; 10145 10145 10146 10146 if (netif_running(bp->dev)) {
+32 -14
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 3625 3625 pf->fw_fid = le16_to_cpu(resp->fid); 3626 3626 pf->port_id = le16_to_cpu(resp->port_id); 3627 3627 memcpy(pf->mac_addr, resp->perm_mac_address, ETH_ALEN); 3628 + memcpy(bp->dev->dev_addr, pf->mac_addr, ETH_ALEN); 3628 3629 pf->max_rsscos_ctxs = le16_to_cpu(resp->max_rsscos_ctx); 3629 3630 pf->max_cp_rings = le16_to_cpu(resp->max_cmpl_rings); 3630 3631 pf->max_tx_rings = le16_to_cpu(resp->max_tx_rings); ··· 3649 3648 3650 3649 vf->fw_fid = le16_to_cpu(resp->fid); 3651 3650 memcpy(vf->mac_addr, resp->perm_mac_address, ETH_ALEN); 3652 - if (!is_valid_ether_addr(vf->mac_addr)) 3653 - random_ether_addr(vf->mac_addr); 3651 + if (is_valid_ether_addr(vf->mac_addr)) 3652 + /* overwrite netdev dev_addr with admin VF MAC */ 3653 + memcpy(bp->dev->dev_addr, vf->mac_addr, ETH_ALEN); 3654 + else 3655 + random_ether_addr(bp->dev->dev_addr); 3654 3656 3655 3657 vf->max_rsscos_ctxs = le16_to_cpu(resp->max_rsscos_ctx); 3656 3658 vf->max_cp_rings = le16_to_cpu(resp->max_cmpl_rings); ··· 3884 3880 #endif 3885 3881 } 3886 3882 3883 + static int bnxt_cfg_rx_mode(struct bnxt *); 3884 + 3887 3885 static int bnxt_init_chip(struct bnxt *bp, bool irq_re_init) 3888 3886 { 3889 3887 int rc = 0; ··· 3952 3946 bp->vnic_info[0].rx_mask |= 3953 3947 CFA_L2_SET_RX_MASK_REQ_MASK_PROMISCUOUS; 3954 3948 3955 - rc = bnxt_hwrm_cfa_l2_set_rx_mask(bp, 0); 3956 - if (rc) { 3957 - netdev_err(bp->dev, "HWRM cfa l2 rx mask failure rc: %x\n", rc); 3949 + rc = bnxt_cfg_rx_mode(bp); 3950 + if (rc) 3958 3951 goto err_out; 3959 - } 3960 3952 3961 3953 rc = bnxt_hwrm_set_coal(bp); 3962 3954 if (rc) ··· 4867 4863 } 4868 4864 } 4869 4865 4870 - static void bnxt_cfg_rx_mode(struct bnxt *bp) 4866 + static int bnxt_cfg_rx_mode(struct bnxt *bp) 4871 4867 { 4872 4868 struct net_device *dev = bp->dev; 4873 4869 struct bnxt_vnic_info *vnic = &bp->vnic_info[0]; ··· 4916 4912 netdev_err(bp->dev, "HWRM vnic filter failure rc: %x\n", 4917 4913 rc); 4918 4914 vnic->uc_filter_count = i; 4915 + return rc; 4919 4916 } 4920 4917 } 4921 4918 ··· 4925 4920 if (rc) 4926 4921 netdev_err(bp->dev, "HWRM cfa l2 rx mask failure rc: %x\n", 4927 4922 rc); 4923 + 4924 + return rc; 4928 4925 } 4929 4926 4930 4927 static netdev_features_t bnxt_fix_features(struct net_device *dev, ··· 5217 5210 static int bnxt_change_mac_addr(struct net_device *dev, void *p) 5218 5211 { 5219 5212 struct sockaddr *addr = p; 5213 + struct bnxt *bp = netdev_priv(dev); 5214 + int rc = 0; 5220 5215 5221 5216 if (!is_valid_ether_addr(addr->sa_data)) 5222 5217 return -EADDRNOTAVAIL; 5223 5218 5224 - memcpy(dev->dev_addr, addr->sa_data, dev->addr_len); 5219 + #ifdef CONFIG_BNXT_SRIOV 5220 + if (BNXT_VF(bp) && is_valid_ether_addr(bp->vf.mac_addr)) 5221 + return -EADDRNOTAVAIL; 5222 + #endif 5225 5223 5226 - return 0; 5224 + if (ether_addr_equal(addr->sa_data, dev->dev_addr)) 5225 + return 0; 5226 + 5227 + memcpy(dev->dev_addr, addr->sa_data, dev->addr_len); 5228 + if (netif_running(dev)) { 5229 + bnxt_close_nic(bp, false, false); 5230 + rc = bnxt_open_nic(bp, false, false); 5231 + } 5232 + 5233 + return rc; 5227 5234 } 5228 5235 5229 5236 /* rtnl_lock held */ ··· 5705 5684 bnxt_set_tpa_flags(bp); 5706 5685 bnxt_set_ring_params(bp); 5707 5686 dflt_rings = netif_get_num_default_rss_queues(); 5708 - if (BNXT_PF(bp)) { 5709 - memcpy(dev->dev_addr, bp->pf.mac_addr, ETH_ALEN); 5687 + if (BNXT_PF(bp)) 5710 5688 bp->pf.max_irqs = max_irqs; 5711 - } else { 5712 5689 #if defined(CONFIG_BNXT_SRIOV) 5713 - memcpy(dev->dev_addr, bp->vf.mac_addr, ETH_ALEN); 5690 + else 5714 5691 bp->vf.max_irqs =
max_irqs; 5715 5692 #endif 5716 - } 5717 5693 bnxt_get_max_rings(bp, &max_rx_rings, &max_tx_rings); 5718 5694 bp->rx_nr_rings = min_t(int, dflt_rings, max_rx_rings); 5719 5695 bp->tx_nr_rings_per_tc = min_t(int, dflt_rings, max_tx_rings);
+3 -4
drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
··· 804 804 if (!is_valid_ether_addr(resp->perm_mac_address)) 805 805 goto update_vf_mac_exit; 806 806 807 - if (ether_addr_equal(resp->perm_mac_address, bp->vf.mac_addr)) 808 - goto update_vf_mac_exit; 809 - 810 - memcpy(bp->vf.mac_addr, resp->perm_mac_address, ETH_ALEN); 807 + if (!ether_addr_equal(resp->perm_mac_address, bp->vf.mac_addr)) 808 + memcpy(bp->vf.mac_addr, resp->perm_mac_address, ETH_ALEN); 809 + /* overwrite netdev dev_addr with admin VF MAC */ 811 810 memcpy(bp->dev->dev_addr, bp->vf.mac_addr, ETH_ALEN); 812 811 update_vf_mac_exit: 813 812 mutex_unlock(&bp->hwrm_cmd_lock);
+4
drivers/net/ethernet/cadence/macb.c
··· 1682 1682 macb_set_hwaddr(bp); 1683 1683 1684 1684 config = macb_mdc_clk_div(bp); 1685 + if (bp->phy_interface == PHY_INTERFACE_MODE_SGMII) 1686 + config |= GEM_BIT(SGMIIEN) | GEM_BIT(PCSSEL); 1685 1687 config |= MACB_BF(RBOF, NET_IP_ALIGN); /* Make eth data aligned */ 1686 1688 config |= MACB_BIT(PAE); /* PAuse Enable */ 1687 1689 config |= MACB_BIT(DRFCS); /* Discard Rx FCS */ ··· 2418 2416 /* Set MII management clock divider */ 2419 2417 val = macb_mdc_clk_div(bp); 2420 2418 val |= macb_dbw(bp); 2419 + if (bp->phy_interface == PHY_INTERFACE_MODE_SGMII) 2420 + val |= GEM_BIT(SGMIIEN) | GEM_BIT(PCSSEL); 2421 2421 macb_writel(bp, NCFGR, val); 2422 2422 2423 2423 return 0;
+5
drivers/net/ethernet/cadence/macb.h
··· 215 215 /* GEM specific NCFGR bitfields. */ 216 216 #define GEM_GBE_OFFSET 10 /* Gigabit mode enable */ 217 217 #define GEM_GBE_SIZE 1 218 + #define GEM_PCSSEL_OFFSET 11 219 + #define GEM_PCSSEL_SIZE 1 218 220 #define GEM_CLK_OFFSET 18 /* MDC clock division */ 219 221 #define GEM_CLK_SIZE 3 220 222 #define GEM_DBW_OFFSET 21 /* Data bus width */ 221 223 #define GEM_DBW_SIZE 2 222 224 #define GEM_RXCOEN_OFFSET 24 223 225 #define GEM_RXCOEN_SIZE 1 226 + #define GEM_SGMIIEN_OFFSET 27 227 + #define GEM_SGMIIEN_SIZE 1 228 + 224 229 225 230 /* Constants for data bus width. */ 226 231 #define GEM_DBW32 0 /* 32 bit AMBA AHB data bus width */
+2 -3
drivers/net/ethernet/cavium/thunder/nic.h
··· 120 120 * Calculated for SCLK of 700MHz 121 121 * value written should be 1/16th of what is expected 122 122 * 123 - * 1 tick per 0.05usec = value of 2.2 124 - * This 10% would be covered in CQ timer thresh value 123 + * 1 tick per 0.025usec 125 124 */ 126 - #define NICPF_CLK_PER_INT_TICK 2 125 + #define NICPF_CLK_PER_INT_TICK 1 127 126 128 127 /* Time to wait before we decide that a SQ is stuck. 129 128 *
+20 -3
drivers/net/ethernet/cavium/thunder/nic_main.c
··· 37 37 #define NIC_GET_BGX_FROM_VF_LMAC_MAP(map) ((map >> 4) & 0xF) 38 38 #define NIC_GET_LMAC_FROM_VF_LMAC_MAP(map) (map & 0xF) 39 39 u8 vf_lmac_map[MAX_LMAC]; 40 + u8 lmac_cnt; 40 41 struct delayed_work dwork; 41 42 struct workqueue_struct *check_link; 42 43 u8 link[MAX_LMAC]; ··· 280 279 u64 lmac_credit; 281 280 282 281 nic->num_vf_en = 0; 282 + nic->lmac_cnt = 0; 283 283 284 284 for (bgx = 0; bgx < NIC_MAX_BGX; bgx++) { 285 285 if (!(bgx_map & (1 << bgx))) ··· 290 288 nic->vf_lmac_map[next_bgx_lmac++] = 291 289 NIC_SET_VF_LMAC_MAP(bgx, lmac); 292 290 nic->num_vf_en += lmac_cnt; 291 + nic->lmac_cnt += lmac_cnt; 293 292 294 293 /* Program LMAC credits */ 295 294 lmac_credit = (1ull << 1); /* channel credit enable */ ··· 718 715 case NIC_MBOX_MSG_CFG_DONE: 719 716 /* Last message of VF config msg sequence */ 720 717 nic->vf_enabled[vf] = true; 718 + if (vf >= nic->lmac_cnt) 719 + goto unlock; 720 + 721 + bgx = NIC_GET_BGX_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]); 722 + lmac = NIC_GET_LMAC_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]); 723 + 724 + bgx_lmac_rx_tx_enable(nic->node, bgx, lmac, true); 721 725 goto unlock; 722 726 case NIC_MBOX_MSG_SHUTDOWN: 723 727 /* First msg in VF teardown sequence */ ··· 732 722 if (vf >= nic->num_vf_en) 733 723 nic->sqs_used[vf - nic->num_vf_en] = false; 734 724 nic->pqs_vf[vf] = 0; 725 + 726 + if (vf >= nic->lmac_cnt) 727 + break; 728 + 729 + bgx = NIC_GET_BGX_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]); 730 + lmac = NIC_GET_LMAC_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]); 731 + 732 + bgx_lmac_rx_tx_enable(nic->node, bgx, lmac, false); 735 733 break; 736 734 case NIC_MBOX_MSG_ALLOC_SQS: 737 735 nic_alloc_sqs(nic, &mbx.sqs_alloc); ··· 958 940 959 941 mbx.link_status.msg = NIC_MBOX_MSG_BGX_LINK_CHANGE; 960 942 961 - for (vf = 0; vf < nic->num_vf_en; vf++) { 943 + for (vf = 0; vf < nic->lmac_cnt; vf++) { 962 944 /* Poll only if VF is UP */ 963 945 if (!nic->vf_enabled[vf]) 964 946 continue; ··· 1092 1074 1093 1075 if (nic->check_link) { 1094 1076 /* Destroy work Queue */ 1095 - cancel_delayed_work(&nic->dwork); 1096 - flush_workqueue(nic->check_link); 1077 + cancel_delayed_work_sync(&nic->dwork); 1097 1078 destroy_workqueue(nic->check_link); 1098 1079 } 1099 1080
+15 -1
drivers/net/ethernet/cavium/thunder/nicvf_ethtool.c
··· 112 112 113 113 cmd->supported = 0; 114 114 cmd->transceiver = XCVR_EXTERNAL; 115 + 116 + if (!nic->link_up) { 117 + cmd->duplex = DUPLEX_UNKNOWN; 118 + ethtool_cmd_speed_set(cmd, SPEED_UNKNOWN); 119 + return 0; 120 + } 121 + 115 122 if (nic->speed <= 1000) { 116 123 cmd->port = PORT_MII; 117 124 cmd->autoneg = AUTONEG_ENABLE; ··· 130 123 ethtool_cmd_speed_set(cmd, nic->speed); 131 124 132 125 return 0; 126 + } 127 + 128 + static u32 nicvf_get_link(struct net_device *netdev) 129 + { 130 + struct nicvf *nic = netdev_priv(netdev); 131 + 132 + return nic->link_up; 133 133 } 134 134 135 135 static void nicvf_get_drvinfo(struct net_device *netdev, ··· 674 660 675 661 static const struct ethtool_ops nicvf_ethtool_ops = { 676 662 .get_settings = nicvf_get_settings, 677 - .get_link = ethtool_op_get_link, 663 + .get_link = nicvf_get_link, 678 664 .get_drvinfo = nicvf_get_drvinfo, 679 665 .get_msglevel = nicvf_get_msglevel, 680 666 .set_msglevel = nicvf_set_msglevel,
+1 -3
drivers/net/ethernet/cavium/thunder/nicvf_main.c
··· 1057 1057 1058 1058 netif_carrier_off(netdev); 1059 1059 netif_tx_stop_all_queues(nic->netdev); 1060 + nic->link_up = false; 1060 1061 1061 1062 /* Teardown secondary qsets first */ 1062 1063 if (!nic->sqs_mode) { ··· 1211 1210 1212 1211 nic->drv_stats.txq_stop = 0; 1213 1212 nic->drv_stats.txq_wake = 0; 1214 - 1215 - netif_carrier_on(netdev); 1216 - netif_tx_start_all_queues(netdev); 1217 1213 1218 1214 return 0; 1219 1215 cleanup:
+1 -1
drivers/net/ethernet/cavium/thunder/nicvf_queues.c
··· 592 592 /* Set threshold value for interrupt generation */ 593 593 nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_THRESH, qidx, cq->thresh); 594 594 nicvf_queue_reg_write(nic, NIC_QSET_CQ_0_7_CFG2, 595 - qidx, nic->cq_coalesce_usecs); 595 + qidx, CMP_QUEUE_TIMER_THRESH); 596 596 } 597 597 598 598 /* Configures transmit queue */
+1 -1
drivers/net/ethernet/cavium/thunder/nicvf_queues.h
··· 76 76 #define CMP_QSIZE CMP_QUEUE_SIZE2 77 77 #define CMP_QUEUE_LEN (1ULL << (CMP_QSIZE + 10)) 78 78 #define CMP_QUEUE_CQE_THRESH 0 79 - #define CMP_QUEUE_TIMER_THRESH 220 /* 10usec */ 79 + #define CMP_QUEUE_TIMER_THRESH 80 /* ~2usec */ 80 80 81 81 #define RBDR_SIZE RBDR_SIZE0 82 82 #define RCV_BUF_COUNT (1ULL << (RBDR_SIZE + 13))
+24 -4
drivers/net/ethernet/cavium/thunder/thunder_bgx.c
··· 186 186 } 187 187 EXPORT_SYMBOL(bgx_set_lmac_mac); 188 188 189 + void bgx_lmac_rx_tx_enable(int node, int bgx_idx, int lmacid, bool enable) 190 + { 191 + struct bgx *bgx = bgx_vnic[(node * MAX_BGX_PER_CN88XX) + bgx_idx]; 192 + u64 cfg; 193 + 194 + if (!bgx) 195 + return; 196 + 197 + cfg = bgx_reg_read(bgx, lmacid, BGX_CMRX_CFG); 198 + if (enable) 199 + cfg |= CMR_PKT_RX_EN | CMR_PKT_TX_EN; 200 + else 201 + cfg &= ~(CMR_PKT_RX_EN | CMR_PKT_TX_EN); 202 + bgx_reg_write(bgx, lmacid, BGX_CMRX_CFG, cfg); 203 + } 204 + EXPORT_SYMBOL(bgx_lmac_rx_tx_enable); 205 + 189 206 static void bgx_sgmii_change_link_state(struct lmac *lmac) 190 207 { 191 208 struct bgx *bgx = lmac->bgx; ··· 629 612 lmac->last_duplex = 1; 630 613 } else { 631 614 lmac->link_up = 0; 615 + lmac->last_speed = SPEED_UNKNOWN; 616 + lmac->last_duplex = DUPLEX_UNKNOWN; 632 617 } 633 618 634 619 if (lmac->last_link != lmac->link_up) { ··· 673 654 } 674 655 675 656 /* Enable lmac */ 676 - bgx_reg_modify(bgx, lmacid, BGX_CMRX_CFG, 677 - CMR_EN | CMR_PKT_RX_EN | CMR_PKT_TX_EN); 657 + bgx_reg_modify(bgx, lmacid, BGX_CMRX_CFG, CMR_EN); 678 658 679 659 /* Restore default cfg, in case low level firmware changed it */ 680 660 bgx_reg_write(bgx, lmacid, BGX_CMRX_RX_DMAC_CTL, 0x03); ··· 713 695 lmac = &bgx->lmac[lmacid]; 714 696 if (lmac->check_link) { 715 697 /* Destroy work queue */ 716 - cancel_delayed_work(&lmac->dwork); 717 - flush_workqueue(lmac->check_link); 698 + cancel_delayed_work_sync(&lmac->dwork); 718 699 destroy_workqueue(lmac->check_link); 719 700 } 720 701 ··· 1025 1008 struct device *dev = &pdev->dev; 1026 1009 struct bgx *bgx = NULL; 1027 1010 u8 lmac; 1011 + 1012 + /* Load octeon mdio driver */ 1013 + octeon_mdiobus_force_mod_depencency(); 1028 1014 1029 1015 bgx = devm_kzalloc(dev, sizeof(*bgx), GFP_KERNEL); 1030 1016 if (!bgx)
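Both teardown paths touched in this file (here and in nic_main.c above) switch from cancel_delayed_work() plus flush_workqueue() to cancel_delayed_work_sync(), which cancels the work and also waits for an already-running instance, so a self-rearming work item cannot requeue itself between the cancel and the queue destruction. A condensed sketch of the resulting shutdown order, with illustrative names:

	static void foo_stop_poll(struct foo_priv *priv)
	{
		/* cancel and wait; after this returns the handler cannot
		 * be running and cannot rearm itself
		 */
		cancel_delayed_work_sync(&priv->dwork);

		/* now it is safe to tear down the queue */
		destroy_workqueue(priv->wq);
	}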
+2
drivers/net/ethernet/cavium/thunder/thunder_bgx.h
··· 182 182 #define BCAST_ACCEPT 1 183 183 #define CAM_ACCEPT 1 184 184 185 + void octeon_mdiobus_force_mod_depencency(void); 186 + void bgx_lmac_rx_tx_enable(int node, int bgx_idx, int lmacid, bool enable); 185 187 void bgx_add_dmac_addr(u64 dmac, int node, int bgx_idx, int lmac); 186 188 unsigned bgx_get_map(int node); 187 189 int bgx_get_lmac_count(int node, int bgx);
+7 -2
drivers/net/ethernet/dec/tulip/tulip_core.c
··· 98 98 #elif defined(__mips__) 99 99 static int csr0 = 0x00200000 | 0x4000; 100 100 #else 101 - #warning Processor architecture undefined! 102 - static int csr0 = 0x00A00000 | 0x4800; 101 + static int csr0; 103 102 #endif 104 103 105 104 /* Operational parameters that usually are not changed. */ ··· 1980 1981 #ifdef MODULE 1981 1982 pr_info("%s", version); 1982 1983 #endif 1984 + 1985 + if (!csr0) { 1986 + pr_warn("tulip: unknown CPU architecture, using default csr0\n"); 1987 + /* default to 8 longword cache line alignment */ 1988 + csr0 = 0x00A00000 | 0x4800; 1989 + } 1983 1990 1984 1991 /* copy module parms into globals */ 1985 1992 tulip_rx_copybreak = rx_copybreak;
+1 -1
drivers/net/ethernet/dec/tulip/winbond-840.c
··· 907 907 #elif defined(CONFIG_SPARC) || defined (CONFIG_PARISC) || defined(CONFIG_ARM) 908 908 i |= 0x4800; 909 909 #else 910 - #warning Processor architecture undefined 910 + dev_warn(&dev->dev, "unknown CPU architecture, using default csr0 setting\n"); 911 911 i |= 0x4800; 912 912 #endif 913 913 iowrite32(i, ioaddr + PCIBusCfg);
+2 -1
drivers/net/ethernet/freescale/Kconfig
··· 7 7 default y 8 8 depends on FSL_SOC || QUICC_ENGINE || CPM1 || CPM2 || PPC_MPC512x || \ 9 9 M523x || M527x || M5272 || M528x || M520x || M532x || \ 10 - ARCH_MXC || ARCH_MXS || (PPC_MPC52xx && PPC_BESTCOMM) 10 + ARCH_MXC || ARCH_MXS || (PPC_MPC52xx && PPC_BESTCOMM) || \ 11 + ARCH_LAYERSCAPE 11 12 ---help--- 12 13 If you have a network (Ethernet) card belonging to this class, say Y. 13 14
+3 -3
drivers/net/ethernet/freescale/gianfar.c
··· 647 647 if (model && strcasecmp(model, "FEC")) { 648 648 gfar_irq(grp, RX)->irq = irq_of_parse_and_map(np, 1); 649 649 gfar_irq(grp, ER)->irq = irq_of_parse_and_map(np, 2); 650 - if (gfar_irq(grp, TX)->irq == NO_IRQ || 651 - gfar_irq(grp, RX)->irq == NO_IRQ || 652 - gfar_irq(grp, ER)->irq == NO_IRQ) 650 + if (!gfar_irq(grp, TX)->irq || 651 + !gfar_irq(grp, RX)->irq || 652 + !gfar_irq(grp, ER)->irq) 653 653 return -EINVAL; 654 654 } 655 655
+1 -1
drivers/net/ethernet/freescale/gianfar_ptp.c
··· 467 467 468 468 etsects->irq = platform_get_irq(dev, 0); 469 469 470 - if (etsects->irq == NO_IRQ) { 470 + if (etsects->irq < 0) { 471 471 pr_err("irq not in device tree\n"); 472 472 goto no_node; 473 473 }
+3 -1
drivers/net/ethernet/intel/fm10k/fm10k_netdev.c
··· 627 627 628 628 /* verify the skb head is not shared */ 629 629 err = skb_cow_head(skb, 0); 630 - if (err) 630 + if (err) { 631 + dev_kfree_skb(skb); 631 632 return NETDEV_TX_OK; 633 + } 632 634 633 635 /* locate vlan header */ 634 636 vhdr = (struct vlan_hdr *)(skb->data + ETH_HLEN);
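The fix above follows from the ndo_start_xmit ownership rule: returning NETDEV_TX_OK tells the stack the driver has consumed the skb, so any early-exit path that cannot transmit must free the skb itself or it leaks. Schematically (function name illustrative):

	static netdev_tx_t foo_xmit(struct sk_buff *skb, struct net_device *dev)
	{
		if (skb_cow_head(skb, 0)) {
			/* NETDEV_TX_OK transfers ownership, so free the
			 * skb before bailing out
			 */
			dev_kfree_skb_any(skb);
			return NETDEV_TX_OK;
		}
		/* ... map and queue the frame ... */
		return NETDEV_TX_OK;
	}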
+27 -6
drivers/net/ethernet/marvell/mvneta.c
··· 36 36 37 37 /* Registers */ 38 38 #define MVNETA_RXQ_CONFIG_REG(q) (0x1400 + ((q) << 2)) 39 - #define MVNETA_RXQ_HW_BUF_ALLOC BIT(1) 39 + #define MVNETA_RXQ_HW_BUF_ALLOC BIT(0) 40 40 #define MVNETA_RXQ_PKT_OFFSET_ALL_MASK (0xf << 8) 41 41 #define MVNETA_RXQ_PKT_OFFSET_MASK(offs) ((offs) << 8) 42 42 #define MVNETA_RXQ_THRESHOLD_REG(q) (0x14c0 + ((q) << 2)) ··· 62 62 #define MVNETA_WIN_SIZE(w) (0x2204 + ((w) << 3)) 63 63 #define MVNETA_WIN_REMAP(w) (0x2280 + ((w) << 2)) 64 64 #define MVNETA_BASE_ADDR_ENABLE 0x2290 65 + #define MVNETA_ACCESS_PROTECT_ENABLE 0x2294 65 66 #define MVNETA_PORT_CONFIG 0x2400 66 67 #define MVNETA_UNI_PROMISC_MODE BIT(0) 67 68 #define MVNETA_DEF_RXQ(q) ((q) << 1) ··· 160 159 161 160 #define MVNETA_INTR_ENABLE 0x25b8 162 161 #define MVNETA_TXQ_INTR_ENABLE_ALL_MASK 0x0000ff00 163 - #define MVNETA_RXQ_INTR_ENABLE_ALL_MASK 0xff000000 // note: neta says it's 0x000000FF 162 + #define MVNETA_RXQ_INTR_ENABLE_ALL_MASK 0x000000ff 164 163 165 164 #define MVNETA_RXQ_CMD 0x2680 166 165 #define MVNETA_RXQ_DISABLE_SHIFT 8 ··· 243 242 #define MVNETA_VLAN_TAG_LEN 4 244 243 245 244 #define MVNETA_CPU_D_CACHE_LINE_SIZE 32 245 + #define MVNETA_TX_CSUM_DEF_SIZE 1600 246 246 #define MVNETA_TX_CSUM_MAX_SIZE 9800 247 247 #define MVNETA_ACC_MODE_EXT 1 248 248 ··· 1600 1598 } 1601 1599 1602 1600 skb = build_skb(data, pp->frag_size > PAGE_SIZE ? 0 : pp->frag_size); 1603 - if (!skb) 1604 - goto err_drop_frame; 1605 1601 1602 + /* After refill old buffer has to be unmapped regardless 1603 + * the skb is successfully built or not. 1604 + */ 1606 1605 dma_unmap_single(dev->dev.parent, phys_addr, 1607 1606 MVNETA_RX_BUF_SIZE(pp->pkt_size), DMA_FROM_DEVICE); 1607 + 1608 + if (!skb) 1609 + goto err_drop_frame; 1608 1610 1609 1611 rcvd_pkts++; 1610 1612 rcvd_bytes += rx_bytes; ··· 3249 3243 } 3250 3244 3251 3245 mvreg_write(pp, MVNETA_BASE_ADDR_ENABLE, win_enable); 3246 + mvreg_write(pp, MVNETA_ACCESS_PROTECT_ENABLE, win_protect); 3252 3247 } 3253 3248 3254 3249 /* Power up the port */ ··· 3306 3299 char hw_mac_addr[ETH_ALEN]; 3307 3300 const char *mac_from; 3308 3301 const char *managed; 3302 + int tx_csum_limit; 3309 3303 int phy_mode; 3310 3304 int err; 3311 3305 int cpu; ··· 3407 3399 } 3408 3400 } 3409 3401 3410 - if (of_device_is_compatible(dn, "marvell,armada-370-neta")) 3411 - pp->tx_csum_limit = 1600; 3402 + if (!of_property_read_u32(dn, "tx-csum-limit", &tx_csum_limit)) { 3403 + if (tx_csum_limit < 0 || 3404 + tx_csum_limit > MVNETA_TX_CSUM_MAX_SIZE) { 3405 + tx_csum_limit = MVNETA_TX_CSUM_DEF_SIZE; 3406 + dev_info(&pdev->dev, 3407 + "Wrong TX csum limit in DT, set to %dB\n", 3408 + MVNETA_TX_CSUM_DEF_SIZE); 3409 + } 3410 + } else if (of_device_is_compatible(dn, "marvell,armada-370-neta")) { 3411 + tx_csum_limit = MVNETA_TX_CSUM_DEF_SIZE; 3412 + } else { 3413 + tx_csum_limit = MVNETA_TX_CSUM_MAX_SIZE; 3414 + } 3415 + 3416 + pp->tx_csum_limit = tx_csum_limit; 3412 3417 3413 3418 pp->tx_ring_size = MVNETA_MAX_TXD; 3414 3419 pp->rx_ring_size = MVNETA_MAX_RXD;
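The tx-csum-limit parsing above is the usual optional-DT-property shape: of_property_read_u32() returns 0 only when the property is present and parses, so an explicit value can be range-checked while absence falls back to a per-compatible default. Reduced to its core, as a sketch with illustrative names:

	u32 limit;

	if (!of_property_read_u32(np, "vendor,limit", &limit)) {
		/* property present: reject out-of-range values */
		if (limit > FOO_LIMIT_MAX)
			limit = FOO_LIMIT_DEF;
	} else if (of_device_is_compatible(np, "vendor,old-ip")) {
		limit = FOO_LIMIT_DEF;	/* conservative default */
	} else {
		limit = FOO_LIMIT_MAX;
	}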
+1 -1
drivers/net/ethernet/nxp/lpc_eth.c
··· 1326 1326 /* Get platform resources */ 1327 1327 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 1328 1328 irq = platform_get_irq(pdev, 0); 1329 - if ((!res) || (irq < 0) || (irq >= NR_IRQS)) { 1329 + if (!res || irq < 0) { 1330 1330 dev_err(&pdev->dev, "error getting resources.\n"); 1331 1331 ret = -ENXIO; 1332 1332 goto err_exit;
+4 -2
drivers/net/ethernet/renesas/ravb_main.c
··· 1227 1227 /* Device init */ 1228 1228 error = ravb_dmac_init(ndev); 1229 1229 if (error) 1230 - goto out_free_irq; 1230 + goto out_free_irq2; 1231 1231 ravb_emac_init(ndev); 1232 1232 1233 1233 /* Initialise PTP Clock driver */ ··· 1247 1247 /* Stop PTP Clock driver */ 1248 1248 if (priv->chip_id == RCAR_GEN2) 1249 1249 ravb_ptp_stop(ndev); 1250 + out_free_irq2: 1251 + if (priv->chip_id == RCAR_GEN3) 1252 + free_irq(priv->emac_irq, ndev); 1250 1253 out_free_irq: 1251 1254 free_irq(ndev->irq, ndev); 1252 - free_irq(priv->emac_irq, ndev); 1253 1255 out_napi_off: 1254 1256 napi_disable(&priv->napi[RAVB_NC]); 1255 1257 napi_disable(&priv->napi[RAVB_BE]);
+7 -6
drivers/net/ethernet/stmicro/stmmac/dwmac-sti.c
··· 299 299 if (IS_PHY_IF_MODE_GBIT(dwmac->interface)) { 300 300 const char *rs; 301 301 302 + dwmac->tx_retime_src = TX_RETIME_SRC_CLKGEN; 303 + 302 304 err = of_property_read_string(np, "st,tx-retime-src", &rs); 303 305 if (err < 0) { 304 306 dev_warn(dev, "Use internal clock source\n"); 305 - dwmac->tx_retime_src = TX_RETIME_SRC_CLKGEN; 306 - } else if (!strcasecmp(rs, "clk_125")) { 307 - dwmac->tx_retime_src = TX_RETIME_SRC_CLK_125; 308 - } else if (!strcasecmp(rs, "txclk")) { 309 - dwmac->tx_retime_src = TX_RETIME_SRC_TXCLK; 307 + } else { 308 + if (!strcasecmp(rs, "clk_125")) 309 + dwmac->tx_retime_src = TX_RETIME_SRC_CLK_125; 310 + else if (!strcasecmp(rs, "txclk")) 311 + dwmac->tx_retime_src = TX_RETIME_SRC_TXCLK; 310 312 } 311 - 312 313 dwmac->speed = SPEED_1000; 313 314 } 314 315
+8 -1
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 185 185 priv->clk_csr = STMMAC_CSR_100_150M; 186 186 else if ((clk_rate >= CSR_F_150M) && (clk_rate < CSR_F_250M)) 187 187 priv->clk_csr = STMMAC_CSR_150_250M; 188 - else if ((clk_rate >= CSR_F_250M) && (clk_rate < CSR_F_300M)) 188 + else if ((clk_rate >= CSR_F_250M) && (clk_rate <= CSR_F_300M)) 189 189 priv->clk_csr = STMMAC_CSR_250_300M; 190 190 } 191 191 } ··· 2232 2232 2233 2233 frame_len = priv->hw->desc->get_rx_frame_len(p, coe); 2234 2234 2235 + /* check if frame_len fits the preallocated memory */ 2236 + if (frame_len > priv->dma_buf_sz) { 2237 + priv->dev->stats.rx_length_errors++; 2238 + break; 2239 + } 2240 + 2235 2241 /* ACS is set; GMAC core strips PAD/FCS for IEEE 802.3 2236 2242 * Type frames (LLC/LLC-SNAP) 2237 2243 */ ··· 3108 3102 init_dma_desc_rings(ndev, GFP_ATOMIC); 3109 3103 stmmac_hw_setup(ndev, false); 3110 3104 stmmac_init_tx_coalesce(priv); 3105 + stmmac_set_rx_mode(ndev); 3111 3106 3112 3107 napi_enable(&priv->napi); 3113 3108
+13 -15
drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
··· 138 138 139 139 #ifdef CONFIG_OF 140 140 if (priv->device->of_node) { 141 - int reset_gpio, active_low; 142 141 143 142 if (data->reset_gpio < 0) { 144 143 struct device_node *np = priv->device->of_node; ··· 153 154 "snps,reset-active-low"); 154 155 of_property_read_u32_array(np, 155 156 "snps,reset-delays-us", data->delays, 3); 157 + 158 + if (gpio_request(data->reset_gpio, "mdio-reset")) 159 + return 0; 156 160 } 157 161 158 - reset_gpio = data->reset_gpio; 159 - active_low = data->active_low; 162 + gpio_direction_output(data->reset_gpio, 163 + data->active_low ? 1 : 0); 164 + if (data->delays[0]) 165 + msleep(DIV_ROUND_UP(data->delays[0], 1000)); 160 166 161 - if (!gpio_request(reset_gpio, "mdio-reset")) { 162 - gpio_direction_output(reset_gpio, active_low ? 1 : 0); 163 - if (data->delays[0]) 164 - msleep(DIV_ROUND_UP(data->delays[0], 1000)); 167 + gpio_set_value(data->reset_gpio, data->active_low ? 0 : 1); 168 + if (data->delays[1]) 169 + msleep(DIV_ROUND_UP(data->delays[1], 1000)); 165 170 166 - gpio_set_value(reset_gpio, active_low ? 0 : 1); 167 - if (data->delays[1]) 168 - msleep(DIV_ROUND_UP(data->delays[1], 1000)); 169 - 170 - gpio_set_value(reset_gpio, active_low ? 1 : 0); 171 - if (data->delays[2]) 172 - msleep(DIV_ROUND_UP(data->delays[2], 1000)); 173 - } 171 + gpio_set_value(data->reset_gpio, data->active_low ? 1 : 0); 172 + if (data->delays[2]) 173 + msleep(DIV_ROUND_UP(data->delays[2], 1000)); 174 174 } 175 175 #endif 176 176
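The reworked reset above is a plain three-phase GPIO pulse driven by the snps,reset-delays-us triplet: hold the idle level, assert reset, then release it, sleeping between phases. Stripped of the DT plumbing, and assuming an active-low reset line, it reduces to roughly:

	gpio_direction_output(gpio, 1);		/* phase 1: idle level */
	msleep(DIV_ROUND_UP(delays_us[0], 1000));

	gpio_set_value(gpio, 0);		/* phase 2: assert reset */
	msleep(DIV_ROUND_UP(delays_us[1], 1000));

	gpio_set_value(gpio, 1);		/* phase 3: release reset */
	msleep(DIV_ROUND_UP(delays_us[2], 1000));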
+3
drivers/net/ethernet/ti/cpsw-common.c
··· 78 78 79 79 int ti_cm_get_macid(struct device *dev, int slave, u8 *mac_addr) 80 80 { 81 + if (of_machine_is_compatible("ti,dm8148")) 82 + return cpsw_am33xx_cm_get_macid(dev, 0x630, slave, mac_addr); 83 + 81 84 if (of_machine_is_compatible("ti,am33xx")) 82 85 return cpsw_am33xx_cm_get_macid(dev, 0x630, slave, mac_addr); 83 86
+2 -2
drivers/net/macvtap.c
··· 498 498 wait_queue_head_t *wqueue; 499 499 500 500 if (!sock_writeable(sk) || 501 - !test_and_clear_bit(SOCK_ASYNC_NOSPACE, &sk->sk_socket->flags)) 501 + !test_and_clear_bit(SOCKWQ_ASYNC_NOSPACE, &sk->sk_socket->flags)) 502 502 return; 503 503 504 504 wqueue = sk_sleep(sk); ··· 585 585 mask |= POLLIN | POLLRDNORM; 586 586 587 587 if (sock_writeable(&q->sk) || 588 - (!test_and_set_bit(SOCK_ASYNC_NOSPACE, &q->sock.flags) && 588 + (!test_and_set_bit(SOCKWQ_ASYNC_NOSPACE, &q->sock.flags) && 589 589 sock_writeable(&q->sk))) 590 590 mask |= POLLOUT | POLLWRNORM; 591 591
+1 -1
drivers/net/phy/broadcom.c
··· 614 614 { PHY_ID_BCM5461, 0xfffffff0 }, 615 615 { PHY_ID_BCM54616S, 0xfffffff0 }, 616 616 { PHY_ID_BCM5464, 0xfffffff0 }, 617 - { PHY_ID_BCM5482, 0xfffffff0 }, 617 + { PHY_ID_BCM5481, 0xfffffff0 }, 618 618 { PHY_ID_BCM5482, 0xfffffff0 }, 619 619 { PHY_ID_BCM50610, 0xfffffff0 }, 620 620 { PHY_ID_BCM50610M, 0xfffffff0 },
+2 -1
drivers/net/phy/phy.c
··· 448 448 mdiobus_write(phydev->bus, mii_data->phy_id, 449 449 mii_data->reg_num, val); 450 450 451 - if (mii_data->reg_num == MII_BMCR && 451 + if (mii_data->phy_id == phydev->addr && 452 + mii_data->reg_num == MII_BMCR && 452 453 val & BMCR_RESET) 453 454 return phy_init_hw(phydev); 454 455
+2 -2
drivers/net/tun.c
··· 1040 1040 mask |= POLLIN | POLLRDNORM; 1041 1041 1042 1042 if (sock_writeable(sk) || 1043 - (!test_and_set_bit(SOCK_ASYNC_NOSPACE, &sk->sk_socket->flags) && 1043 + (!test_and_set_bit(SOCKWQ_ASYNC_NOSPACE, &sk->sk_socket->flags) && 1044 1044 sock_writeable(sk))) 1045 1045 mask |= POLLOUT | POLLWRNORM; 1046 1046 ··· 1488 1488 if (!sock_writeable(sk)) 1489 1489 return; 1490 1490 1491 - if (!test_and_clear_bit(SOCK_ASYNC_NOSPACE, &sk->sk_socket->flags)) 1491 + if (!test_and_clear_bit(SOCKWQ_ASYNC_NOSPACE, &sk->sk_socket->flags)) 1492 1492 return; 1493 1493 1494 1494 wqueue = sk_sleep(sk);
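Beyond the SOCK_ASYNC_NOSPACE to SOCKWQ_ASYNC_NOSPACE rename in macvtap and tun, both poll handlers rely on the standard race-free writeable test: if the socket is not writeable, atomically set the wakeup-request flag and then test again, so space freed between the two checks is not missed. The shape of the check:

	/* writeable now, or: ask for a wakeup, then re-test so a
	 * wakeup racing with the flag set is not lost
	 */
	if (sock_writeable(sk) ||
	    (!test_and_set_bit(SOCKWQ_ASYNC_NOSPACE, &sk->sk_socket->flags) &&
	     sock_writeable(sk)))
		mask |= POLLOUT | POLLWRNORM;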
+4 -4
drivers/net/usb/cdc_ncm.c
··· 691 691 692 692 int cdc_ncm_bind_common(struct usbnet *dev, struct usb_interface *intf, u8 data_altsetting, int drvflags) 693 693 { 694 - const struct usb_cdc_union_desc *union_desc = NULL; 695 694 struct cdc_ncm_ctx *ctx; 696 695 struct usb_driver *driver; 697 696 u8 *buf; ··· 724 725 /* parse through descriptors associated with control interface */ 725 726 cdc_parse_cdc_header(&hdr, intf, buf, len); 726 727 727 - ctx->data = usb_ifnum_to_if(dev->udev, 728 - hdr.usb_cdc_union_desc->bSlaveInterface0); 728 + if (hdr.usb_cdc_union_desc) 729 + ctx->data = usb_ifnum_to_if(dev->udev, 730 + hdr.usb_cdc_union_desc->bSlaveInterface0); 729 731 ctx->ether_desc = hdr.usb_cdc_ether_desc; 730 732 ctx->func_desc = hdr.usb_cdc_ncm_desc; 731 733 ctx->mbim_desc = hdr.usb_cdc_mbim_desc; 732 734 ctx->mbim_extended_desc = hdr.usb_cdc_mbim_extended_desc; 733 735 734 736 /* some buggy devices have an IAD but no CDC Union */ 735 - if (!union_desc && intf->intf_assoc && intf->intf_assoc->bInterfaceCount == 2) { 737 + if (!hdr.usb_cdc_union_desc && intf->intf_assoc && intf->intf_assoc->bInterfaceCount == 2) { 736 738 ctx->data = usb_ifnum_to_if(dev->udev, intf->cur_altsetting->desc.bInterfaceNumber + 1); 737 739 dev_dbg(&intf->dev, "CDC Union missing - got slave from IAD\n"); 738 740 }
+1
drivers/net/usb/qmi_wwan.c
··· 725 725 {QMI_FIXED_INTF(0x2357, 0x9000, 4)}, /* TP-LINK MA260 */ 726 726 {QMI_FIXED_INTF(0x1bc7, 0x1200, 5)}, /* Telit LE920 */ 727 727 {QMI_FIXED_INTF(0x1bc7, 0x1201, 2)}, /* Telit LE920 */ 728 + {QMI_FIXED_INTF(0x1c9e, 0x9b01, 3)}, /* XS Stick W100-2 from 4G Systems */ 728 729 {QMI_FIXED_INTF(0x0b3c, 0xc000, 4)}, /* Olivetti Olicard 100 */ 729 730 {QMI_FIXED_INTF(0x0b3c, 0xc001, 4)}, /* Olivetti Olicard 120 */ 730 731 {QMI_FIXED_INTF(0x0b3c, 0xc002, 4)}, /* Olivetti Olicard 140 */
+60 -11
drivers/net/vmxnet3/vmxnet3_drv.c
··· 587 587 &adapter->pdev->dev, 588 588 rbi->skb->data, rbi->len, 589 589 PCI_DMA_FROMDEVICE); 590 + if (dma_mapping_error(&adapter->pdev->dev, 591 + rbi->dma_addr)) { 592 + dev_kfree_skb_any(rbi->skb); 593 + rq->stats.rx_buf_alloc_failure++; 594 + break; 595 + } 590 596 } else { 591 597 /* rx buffer skipped by the device */ 592 598 } ··· 611 605 &adapter->pdev->dev, 612 606 rbi->page, 0, PAGE_SIZE, 613 607 PCI_DMA_FROMDEVICE); 608 + if (dma_mapping_error(&adapter->pdev->dev, 609 + rbi->dma_addr)) { 610 + put_page(rbi->page); 611 + rq->stats.rx_buf_alloc_failure++; 612 + break; 613 + } 614 614 } else { 615 615 /* rx buffers skipped by the device */ 616 616 } 617 617 val = VMXNET3_RXD_BTYPE_BODY << VMXNET3_RXD_BTYPE_SHIFT; 618 618 } 619 619 620 - BUG_ON(rbi->dma_addr == 0); 621 620 gd->rxd.addr = cpu_to_le64(rbi->dma_addr); 622 621 gd->dword[2] = cpu_to_le32((!ring->gen << VMXNET3_RXD_GEN_SHIFT) 623 622 | val | rbi->len); ··· 666 655 } 667 656 668 657 669 - static void 658 + static int 670 659 vmxnet3_map_pkt(struct sk_buff *skb, struct vmxnet3_tx_ctx *ctx, 671 660 struct vmxnet3_tx_queue *tq, struct pci_dev *pdev, 672 661 struct vmxnet3_adapter *adapter) ··· 726 715 tbi->dma_addr = dma_map_single(&adapter->pdev->dev, 727 716 skb->data + buf_offset, buf_size, 728 717 PCI_DMA_TODEVICE); 718 + if (dma_mapping_error(&adapter->pdev->dev, tbi->dma_addr)) 719 + return -EFAULT; 729 720 730 721 tbi->len = buf_size; 731 722 ··· 768 755 tbi->dma_addr = skb_frag_dma_map(&adapter->pdev->dev, frag, 769 756 buf_offset, buf_size, 770 757 DMA_TO_DEVICE); 758 + if (dma_mapping_error(&adapter->pdev->dev, tbi->dma_addr)) 759 + return -EFAULT; 771 760 772 761 tbi->len = buf_size; 773 762 ··· 797 782 /* set the last buf_info for the pkt */ 798 783 tbi->skb = skb; 799 784 tbi->sop_idx = ctx->sop_txd - tq->tx_ring.base; 785 + 786 + return 0; 800 787 } 801 788 802 789 ··· 1037 1020 } 1038 1021 1039 1022 /* fill tx descs related to addr & len */ 1040 - vmxnet3_map_pkt(skb, &ctx, tq, adapter->pdev, adapter); 1023 + if (vmxnet3_map_pkt(skb, &ctx, tq, adapter->pdev, adapter)) 1024 + goto unlock_drop_pkt; 1041 1025 1042 1026 /* setup the EOP desc */ 1043 1027 ctx.eop_txd->dword[3] = cpu_to_le32(VMXNET3_TXD_CQ | VMXNET3_TXD_EOP); ··· 1249 1231 struct vmxnet3_rx_buf_info *rbi; 1250 1232 struct sk_buff *skb, *new_skb = NULL; 1251 1233 struct page *new_page = NULL; 1234 + dma_addr_t new_dma_addr; 1252 1235 int num_to_alloc; 1253 1236 struct Vmxnet3_RxDesc *rxd; 1254 1237 u32 idx, ring_idx; ··· 1306 1287 skip_page_frags = true; 1307 1288 goto rcd_done; 1308 1289 } 1290 + new_dma_addr = dma_map_single(&adapter->pdev->dev, 1291 + new_skb->data, rbi->len, 1292 + PCI_DMA_FROMDEVICE); 1293 + if (dma_mapping_error(&adapter->pdev->dev, 1294 + new_dma_addr)) { 1295 + dev_kfree_skb(new_skb); 1296 + /* Skb allocation failed, do not hand this 1297 + * skb over to the stack. Reuse it. 
Drop the existing pkt 1298 + */ 1299 + rq->stats.rx_buf_alloc_failure++; 1300 + ctx->skb = NULL; 1301 + rq->stats.drop_total++; 1302 + skip_page_frags = true; 1303 + goto rcd_done; 1304 + } 1309 1305 1310 1306 dma_unmap_single(&adapter->pdev->dev, rbi->dma_addr, 1311 1307 rbi->len, ··· 1337 1303 1338 1304 /* Immediate refill */ 1339 1305 rbi->skb = new_skb; 1340 - rbi->dma_addr = dma_map_single(&adapter->pdev->dev, 1341 - rbi->skb->data, rbi->len, 1342 - PCI_DMA_FROMDEVICE); 1306 + rbi->dma_addr = new_dma_addr; 1343 1307 rxd->addr = cpu_to_le64(rbi->dma_addr); 1344 1308 rxd->len = rbi->len; 1345 1309 if (adapter->version == 2 && ··· 1380 1348 skip_page_frags = true; 1381 1349 goto rcd_done; 1382 1350 } 1351 + new_dma_addr = dma_map_page(&adapter->pdev->dev 1352 + , rbi->page, 1353 + 0, PAGE_SIZE, 1354 + PCI_DMA_FROMDEVICE); 1355 + if (dma_mapping_error(&adapter->pdev->dev, 1356 + new_dma_addr)) { 1357 + put_page(new_page); 1358 + rq->stats.rx_buf_alloc_failure++; 1359 + dev_kfree_skb(ctx->skb); 1360 + ctx->skb = NULL; 1361 + skip_page_frags = true; 1362 + goto rcd_done; 1363 + } 1383 1364 1384 1365 dma_unmap_page(&adapter->pdev->dev, 1385 1366 rbi->dma_addr, rbi->len, ··· 1402 1357 1403 1358 /* Immediate refill */ 1404 1359 rbi->page = new_page; 1405 - rbi->dma_addr = dma_map_page(&adapter->pdev->dev 1406 - , rbi->page, 1407 - 0, PAGE_SIZE, 1408 - PCI_DMA_FROMDEVICE); 1360 + rbi->dma_addr = new_dma_addr; 1409 1361 rxd->addr = cpu_to_le64(rbi->dma_addr); 1410 1362 rxd->len = rbi->len; 1411 1363 } ··· 2209 2167 PCI_DMA_TODEVICE); 2210 2168 } 2211 2169 2212 - if (new_table_pa) { 2170 + if (!dma_mapping_error(&adapter->pdev->dev, 2171 + new_table_pa)) { 2213 2172 new_mode |= VMXNET3_RXM_MCAST; 2214 2173 rxConf->mfTablePA = cpu_to_le64(new_table_pa); 2215 2174 } else { ··· 3118 3075 adapter->adapter_pa = dma_map_single(&adapter->pdev->dev, adapter, 3119 3076 sizeof(struct vmxnet3_adapter), 3120 3077 PCI_DMA_TODEVICE); 3078 + if (dma_mapping_error(&adapter->pdev->dev, adapter->adapter_pa)) { 3079 + dev_err(&pdev->dev, "Failed to map dma\n"); 3080 + err = -EFAULT; 3081 + goto err_dma_map; 3082 + } 3121 3083 adapter->shared = dma_alloc_coherent( 3122 3084 &adapter->pdev->dev, 3123 3085 sizeof(struct Vmxnet3_DriverShared), ··· 3281 3233 err_alloc_shared: 3282 3234 dma_unmap_single(&adapter->pdev->dev, adapter->adapter_pa, 3283 3235 sizeof(struct vmxnet3_adapter), PCI_DMA_TODEVICE); 3236 + err_dma_map: 3284 3237 free_netdev(netdev); 3285 3238 return err; 3286 3239 }
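Every new check in this file applies the same rule: the return value of dma_map_single()/dma_map_page() must be validated with dma_mapping_error() before it is programmed into the device, and on failure the buffer is released and an error counter bumped instead of trusting the bogus handle. The core shape, as a sketch (the stats counter is illustrative):

	dma_addr_t addr;

	addr = dma_map_single(&pdev->dev, buf, len, DMA_FROM_DEVICE);
	if (dma_mapping_error(&pdev->dev, addr)) {
		/* never hand a failed mapping to the device */
		kfree(buf);
		stats->map_failures++;	/* illustrative counter */
		return -EFAULT;
	}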
+1 -10
drivers/net/vrf.c
··· 849 849 struct nlattr *tb[], struct nlattr *data[]) 850 850 { 851 851 struct net_vrf *vrf = netdev_priv(dev); 852 - int err; 853 852 854 853 if (!data || !data[IFLA_VRF_TABLE]) 855 854 return -EINVAL; ··· 857 858 858 859 dev->priv_flags |= IFF_L3MDEV_MASTER; 859 860 860 - err = register_netdevice(dev); 861 - if (err < 0) 862 - goto out_fail; 863 - 864 - return 0; 865 - 866 - out_fail: 867 - free_netdev(dev); 868 - return err; 861 + return register_netdevice(dev); 869 862 } 870 863 871 864 static size_t vrf_nl_getsize(const struct net_device *dev)
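The deleted out_fail branch was the bug here: when an rtnl_link_ops->newlink() handler fails, the rtnetlink core already frees the half-created device, so calling free_netdev() inside newlink as well is a double free. The handler only needs to return the error, roughly:

	static int foo_newlink(struct net *src_net, struct net_device *dev,
			       struct nlattr *tb[], struct nlattr *data[])
	{
		/* on failure the rtnetlink core frees dev; don't free here */
		return register_netdevice(dev);
	}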
+5 -5
drivers/net/wan/hdlc_fr.c
··· 1075 1075 1076 1076 used = pvc_is_used(pvc); 1077 1077 1078 - if (type == ARPHRD_ETHER) { 1078 + if (type == ARPHRD_ETHER) 1079 1079 dev = alloc_netdev(0, "pvceth%d", NET_NAME_UNKNOWN, 1080 1080 ether_setup); 1081 - dev->priv_flags &= ~IFF_TX_SKB_SHARING; 1082 - } else 1081 + else 1083 1082 dev = alloc_netdev(0, "pvc%d", NET_NAME_UNKNOWN, pvc_setup); 1084 1083 1085 1084 if (!dev) { ··· 1087 1088 return -ENOBUFS; 1088 1089 } 1089 1090 1090 - if (type == ARPHRD_ETHER) 1091 + if (type == ARPHRD_ETHER) { 1092 + dev->priv_flags &= ~IFF_TX_SKB_SHARING; 1091 1093 eth_hw_addr_random(dev); 1092 - else { 1094 + } else { 1093 1095 *(__be16*)dev->dev_addr = htons(dlci); 1094 1096 dlci_to_q922(dev->broadcast, dlci); 1095 1097 }
+1 -5
drivers/net/wan/x25_asy.c
··· 549 549 550 550 static int x25_asy_open_tty(struct tty_struct *tty) 551 551 { 552 - struct x25_asy *sl = tty->disc_data; 552 + struct x25_asy *sl; 553 553 int err; 554 554 555 555 if (tty->ops->write == NULL) 556 556 return -EOPNOTSUPP; 557 - 558 - /* First make sure we're not already connected. */ 559 - if (sl && sl->magic == X25_ASY_MAGIC) 560 - return -EEXIST; 561 557 562 558 /* OK. Find a free X.25 channel to use. */ 563 559 sl = x25_asy_alloc();
+47 -2
drivers/net/wireless/ath/ath10k/core.c
··· 51 51 static const struct ath10k_hw_params ath10k_hw_params_list[] = { 52 52 { 53 53 .id = QCA988X_HW_2_0_VERSION, 54 + .dev_id = QCA988X_2_0_DEVICE_ID, 54 55 .name = "qca988x hw2.0", 55 56 .patch_load_addr = QCA988X_HW_2_0_PATCH_LOAD_ADDR, 56 57 .uart_pin = 7, ··· 70 69 }, 71 70 { 72 71 .id = QCA6174_HW_2_1_VERSION, 72 + .dev_id = QCA6164_2_1_DEVICE_ID, 73 + .name = "qca6164 hw2.1", 74 + .patch_load_addr = QCA6174_HW_2_1_PATCH_LOAD_ADDR, 75 + .uart_pin = 6, 76 + .otp_exe_param = 0, 77 + .channel_counters_freq_hz = 88000, 78 + .max_probe_resp_desc_thres = 0, 79 + .fw = { 80 + .dir = QCA6174_HW_2_1_FW_DIR, 81 + .fw = QCA6174_HW_2_1_FW_FILE, 82 + .otp = QCA6174_HW_2_1_OTP_FILE, 83 + .board = QCA6174_HW_2_1_BOARD_DATA_FILE, 84 + .board_size = QCA6174_BOARD_DATA_SZ, 85 + .board_ext_size = QCA6174_BOARD_EXT_DATA_SZ, 86 + }, 87 + }, 88 + { 89 + .id = QCA6174_HW_2_1_VERSION, 90 + .dev_id = QCA6174_2_1_DEVICE_ID, 73 91 .name = "qca6174 hw2.1", 74 92 .patch_load_addr = QCA6174_HW_2_1_PATCH_LOAD_ADDR, 75 93 .uart_pin = 6, ··· 106 86 }, 107 87 { 108 88 .id = QCA6174_HW_3_0_VERSION, 89 + .dev_id = QCA6174_2_1_DEVICE_ID, 109 90 .name = "qca6174 hw3.0", 110 91 .patch_load_addr = QCA6174_HW_3_0_PATCH_LOAD_ADDR, 111 92 .uart_pin = 6, ··· 124 103 }, 125 104 { 126 105 .id = QCA6174_HW_3_2_VERSION, 106 + .dev_id = QCA6174_2_1_DEVICE_ID, 127 107 .name = "qca6174 hw3.2", 128 108 .patch_load_addr = QCA6174_HW_3_0_PATCH_LOAD_ADDR, 129 109 .uart_pin = 6, ··· 143 121 }, 144 122 { 145 123 .id = QCA99X0_HW_2_0_DEV_VERSION, 124 + .dev_id = QCA99X0_2_0_DEVICE_ID, 146 125 .name = "qca99x0 hw2.0", 147 126 .patch_load_addr = QCA99X0_HW_2_0_PATCH_LOAD_ADDR, 148 127 .uart_pin = 7, ··· 162 139 }, 163 140 { 164 141 .id = QCA9377_HW_1_0_DEV_VERSION, 142 + .dev_id = QCA9377_1_0_DEVICE_ID, 165 143 .name = "qca9377 hw1.0", 166 144 .patch_load_addr = QCA9377_HW_1_0_PATCH_LOAD_ADDR, 167 - .uart_pin = 7, 145 + .uart_pin = 6, 168 146 .otp_exe_param = 0, 147 + .channel_counters_freq_hz = 88000, 148 + .max_probe_resp_desc_thres = 0, 149 + .fw = { 150 + .dir = QCA9377_HW_1_0_FW_DIR, 151 + .fw = QCA9377_HW_1_0_FW_FILE, 152 + .otp = QCA9377_HW_1_0_OTP_FILE, 153 + .board = QCA9377_HW_1_0_BOARD_DATA_FILE, 154 + .board_size = QCA9377_BOARD_DATA_SZ, 155 + .board_ext_size = QCA9377_BOARD_EXT_DATA_SZ, 156 + }, 157 + }, 158 + { 159 + .id = QCA9377_HW_1_1_DEV_VERSION, 160 + .dev_id = QCA9377_1_0_DEVICE_ID, 161 + .name = "qca9377 hw1.1", 162 + .patch_load_addr = QCA9377_HW_1_0_PATCH_LOAD_ADDR, 163 + .uart_pin = 6, 164 + .otp_exe_param = 0, 165 + .channel_counters_freq_hz = 88000, 166 + .max_probe_resp_desc_thres = 0, 169 167 .fw = { 170 168 .dir = QCA9377_HW_1_0_FW_DIR, 171 169 .fw = QCA9377_HW_1_0_FW_FILE, ··· 1307 1263 for (i = 0; i < ARRAY_SIZE(ath10k_hw_params_list); i++) { 1308 1264 hw_params = &ath10k_hw_params_list[i]; 1309 1265 1310 - if (hw_params->id == ar->target_version) 1266 + if (hw_params->id == ar->target_version && 1267 + hw_params->dev_id == ar->dev_id) 1311 1268 break; 1312 1269 } 1313 1270
+1
drivers/net/wireless/ath/ath10k/core.h
··· 636 636 637 637 struct ath10k_hw_params { 638 638 u32 id; 639 + u16 dev_id; 639 640 const char *name; 640 641 u32 patch_load_addr; 641 642 int uart_pin;
+15 -2
drivers/net/wireless/ath/ath10k/hw.h
··· 22 22 23 23 #define ATH10K_FW_DIR "ath10k" 24 24 25 + #define QCA988X_2_0_DEVICE_ID (0x003c) 26 + #define QCA6164_2_1_DEVICE_ID (0x0041) 27 + #define QCA6174_2_1_DEVICE_ID (0x003e) 28 + #define QCA99X0_2_0_DEVICE_ID (0x0040) 29 + #define QCA9377_1_0_DEVICE_ID (0x0042) 30 + 25 31 /* QCA988X 1.0 definitions (unsupported) */ 26 32 #define QCA988X_HW_1_0_CHIP_ID_REV 0x0 27 33 ··· 48 42 #define QCA6174_HW_3_0_VERSION 0x05020000 49 43 #define QCA6174_HW_3_2_VERSION 0x05030000 50 44 45 + /* QCA9377 target BMI version signatures */ 46 + #define QCA9377_HW_1_0_DEV_VERSION 0x05020000 47 + #define QCA9377_HW_1_1_DEV_VERSION 0x05020001 48 + 51 49 enum qca6174_pci_rev { 52 50 QCA6174_PCI_REV_1_1 = 0x11, 53 51 QCA6174_PCI_REV_1_3 = 0x13, ··· 68 58 QCA6174_HW_3_0_CHIP_ID_REV = 8, 69 59 QCA6174_HW_3_1_CHIP_ID_REV = 9, 70 60 QCA6174_HW_3_2_CHIP_ID_REV = 10, 61 + }; 62 + 63 + enum qca9377_chip_id_rev { 64 + QCA9377_HW_1_0_CHIP_ID_REV = 0x0, 65 + QCA9377_HW_1_1_CHIP_ID_REV = 0x1, 71 66 }; 72 67 73 68 #define QCA6174_HW_2_1_FW_DIR "ath10k/QCA6174/hw2.1" ··· 100 85 #define QCA99X0_HW_2_0_PATCH_LOAD_ADDR 0x1234 101 86 102 87 /* QCA9377 1.0 definitions */ 103 - #define QCA9377_HW_1_0_DEV_VERSION 0x05020001 104 - #define QCA9377_HW_1_0_CHIP_ID_REV 0x1 105 88 #define QCA9377_HW_1_0_FW_DIR ATH10K_FW_DIR "/QCA9377/hw1.0" 106 89 #define QCA9377_HW_1_0_FW_FILE "firmware.bin" 107 90 #define QCA9377_HW_1_0_OTP_FILE "otp.bin"
+1 -1
drivers/net/wireless/ath/ath10k/mac.c
··· 4225 4225 4226 4226 static u32 get_nss_from_chainmask(u16 chain_mask) 4227 4227 { 4228 - if ((chain_mask & 0x15) == 0x15) 4228 + if ((chain_mask & 0xf) == 0xf) 4229 4229 return 4; 4230 4230 else if ((chain_mask & 0x7) == 0x7) 4231 4231 return 3;
+43 -10
drivers/net/wireless/ath/ath10k/pci.c
··· 57 57 #define ATH10K_PCI_TARGET_WAIT 3000 58 58 #define ATH10K_PCI_NUM_WARM_RESET_ATTEMPTS 3 59 59 60 - #define QCA988X_2_0_DEVICE_ID (0x003c) 61 - #define QCA6164_2_1_DEVICE_ID (0x0041) 62 - #define QCA6174_2_1_DEVICE_ID (0x003e) 63 - #define QCA99X0_2_0_DEVICE_ID (0x0040) 64 - #define QCA9377_1_0_DEVICE_ID (0x0042) 65 - 66 60 static const struct pci_device_id ath10k_pci_id_table[] = { 67 61 { PCI_VDEVICE(ATHEROS, QCA988X_2_0_DEVICE_ID) }, /* PCI-E QCA988X V2 */ 68 62 { PCI_VDEVICE(ATHEROS, QCA6164_2_1_DEVICE_ID) }, /* PCI-E QCA6164 V2.1 */ ··· 86 92 { QCA6174_2_1_DEVICE_ID, QCA6174_HW_3_2_CHIP_ID_REV }, 87 93 88 94 { QCA99X0_2_0_DEVICE_ID, QCA99X0_HW_2_0_CHIP_ID_REV }, 95 + 89 96 { QCA9377_1_0_DEVICE_ID, QCA9377_HW_1_0_CHIP_ID_REV }, 97 + { QCA9377_1_0_DEVICE_ID, QCA9377_HW_1_1_CHIP_ID_REV }, 90 98 }; 91 99 92 100 static void ath10k_pci_buffer_cleanup(struct ath10k *ar); ··· 107 111 static void ath10k_pci_htc_rx_cb(struct ath10k_ce_pipe *ce_state); 108 112 static void ath10k_pci_htt_tx_cb(struct ath10k_ce_pipe *ce_state); 109 113 static void ath10k_pci_htt_rx_cb(struct ath10k_ce_pipe *ce_state); 114 + static void ath10k_pci_htt_htc_rx_cb(struct ath10k_ce_pipe *ce_state); 110 115 111 - static const struct ce_attr host_ce_config_wlan[] = { 116 + static struct ce_attr host_ce_config_wlan[] = { 112 117 /* CE0: host->target HTC control and raw streams */ 113 118 { 114 119 .flags = CE_ATTR_FLAGS, ··· 125 128 .src_nentries = 0, 126 129 .src_sz_max = 2048, 127 130 .dest_nentries = 512, 128 - .recv_cb = ath10k_pci_htc_rx_cb, 131 + .recv_cb = ath10k_pci_htt_htc_rx_cb, 129 132 }, 130 133 131 134 /* CE2: target->host WMI */ ··· 214 217 }; 215 218 216 219 /* Target firmware's Copy Engine configuration. */ 217 - static const struct ce_pipe_config target_ce_config_wlan[] = { 220 + static struct ce_pipe_config target_ce_config_wlan[] = { 218 221 /* CE0: host->target HTC control and raw streams */ 219 222 { 220 223 .pipenum = __cpu_to_le32(0), ··· 327 330 * This table is derived from the CE_PCI TABLE, above. 328 331 * It is passed to the Target at startup for use by firmware. 329 332 */ 330 - static const struct service_to_pipe target_service_to_ce_map_wlan[] = { 333 + static struct service_to_pipe target_service_to_ce_map_wlan[] = { 331 334 { 332 335 __cpu_to_le32(ATH10K_HTC_SVC_ID_WMI_DATA_VO), 333 336 __cpu_to_le32(PIPEDIR_OUT), /* out = UL = host -> target */ ··· 1205 1208 ath10k_pci_process_rx_cb(ce_state, ath10k_htc_rx_completion_handler); 1206 1209 } 1207 1210 1211 + static void ath10k_pci_htt_htc_rx_cb(struct ath10k_ce_pipe *ce_state) 1212 + { 1213 + /* CE4 polling needs to be done whenever CE pipe which transports 1214 + * HTT Rx (target->host) is processed. 1215 + */ 1216 + ath10k_ce_per_engine_service(ce_state->ar, 4); 1217 + 1218 + ath10k_pci_process_rx_cb(ce_state, ath10k_htc_rx_completion_handler); 1219 + } 1220 + 1208 1221 /* Called by lower (CE) layer when a send to HTT Target completes. */ 1209 1222 static void ath10k_pci_htt_tx_cb(struct ath10k_ce_pipe *ce_state) 1210 1223 { ··· 2032 2025 } 2033 2026 2034 2027 return 0; 2028 + } 2029 + 2030 + static void ath10k_pci_override_ce_config(struct ath10k *ar) 2031 + { 2032 + struct ce_attr *attr; 2033 + struct ce_pipe_config *config; 2034 + 2035 + /* For QCA6174 we're overriding the Copy Engine 5 configuration, 2036 + * since it is currently used for other feature. 
2037 + */ 2038 + 2039 + /* Override Host's Copy Engine 5 configuration */ 2040 + attr = &host_ce_config_wlan[5]; 2041 + attr->src_sz_max = 0; 2042 + attr->dest_nentries = 0; 2043 + 2044 + /* Override Target firmware's Copy Engine configuration */ 2045 + config = &target_ce_config_wlan[5]; 2046 + config->pipedir = __cpu_to_le32(PIPEDIR_OUT); 2047 + config->nbytes_max = __cpu_to_le32(2048); 2048 + 2049 + /* Map from service/endpoint to Copy Engine */ 2050 + target_service_to_ce_map_wlan[15].pipenum = __cpu_to_le32(1); 2035 2051 } 2036 2052 2037 2053 static int ath10k_pci_alloc_pipes(struct ath10k *ar) ··· 3049 3019 ath10k_err(ar, "failed to claim device: %d\n", ret); 3050 3020 goto err_core_destroy; 3051 3021 } 3022 + 3023 + if (QCA_REV_6174(ar)) 3024 + ath10k_pci_override_ce_config(ar); 3052 3025 3053 3026 ret = ath10k_pci_alloc_pipes(ar); 3054 3027 if (ret) {
+1 -1
drivers/net/wireless/intel/iwlwifi/iwl-7000.c
··· 69 69 #include "iwl-agn-hw.h" 70 70 71 71 /* Highest firmware API version supported */ 72 - #define IWL7260_UCODE_API_MAX 17 72 + #define IWL7260_UCODE_API_MAX 19 73 73 74 74 /* Oldest version we won't warn about */ 75 75 #define IWL7260_UCODE_API_OK 13
+1 -1
drivers/net/wireless/intel/iwlwifi/iwl-8000.c
··· 69 69 #include "iwl-agn-hw.h" 70 70 71 71 /* Highest firmware API version supported */ 72 - #define IWL8000_UCODE_API_MAX 17 72 + #define IWL8000_UCODE_API_MAX 19 73 73 74 74 /* Oldest version we won't warn about */ 75 75 #define IWL8000_UCODE_API_OK 13
+2 -6
drivers/net/wireless/intel/iwlwifi/mvm/d3.c
··· 309 309 * to transmit packets to the AP, i.e. the PTK. 310 310 */ 311 311 if (key->flags & IEEE80211_KEY_FLAG_PAIRWISE) { 312 - key->hw_key_idx = 0; 313 312 mvm->ptk_ivlen = key->iv_len; 314 313 mvm->ptk_icvlen = key->icv_len; 314 + ret = iwl_mvm_set_sta_key(mvm, vif, sta, key, 0); 315 315 } else { 316 316 /* 317 317 * firmware only supports TSC/RSC for a single key, ··· 319 319 * with new ones -- this relies on mac80211 doing 320 320 * list_add_tail(). 321 321 */ 322 - key->hw_key_idx = 1; 323 322 mvm->gtk_ivlen = key->iv_len; 324 323 mvm->gtk_icvlen = key->icv_len; 324 + ret = iwl_mvm_set_sta_key(mvm, vif, sta, key, 1); 325 325 } 326 326 327 - ret = iwl_mvm_set_sta_key(mvm, vif, sta, key, true); 328 327 data->error = ret != 0; 329 328 out_unlock: 330 329 mutex_unlock(&mvm->mutex); ··· 770 771 * back to the runtime firmware image. 771 772 */ 772 773 set_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status); 773 - 774 - /* We reprogram keys and shouldn't allocate new key indices */ 775 - memset(mvm->fw_key_table, 0, sizeof(mvm->fw_key_table)); 776 774 777 775 mvm->ptk_ivlen = 0; 778 776 mvm->ptk_icvlen = 0;
+8 -3
drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
··· 2941 2941 { 2942 2942 struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw); 2943 2943 int ret; 2944 + u8 key_offset; 2944 2945 2945 2946 if (iwlwifi_mod_params.sw_crypto) { 2946 2947 IWL_DEBUG_MAC80211(mvm, "leave - hwcrypto disabled\n"); ··· 3007 3006 break; 3008 3007 } 3009 3008 3009 + /* in HW restart reuse the index, otherwise request a new one */ 3010 + if (test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status)) 3011 + key_offset = key->hw_key_idx; 3012 + else 3013 + key_offset = STA_KEY_IDX_INVALID; 3014 + 3010 3015 IWL_DEBUG_MAC80211(mvm, "set hwcrypto key\n"); 3011 - ret = iwl_mvm_set_sta_key(mvm, vif, sta, key, 3012 - test_bit(IWL_MVM_STATUS_IN_HW_RESTART, 3013 - &mvm->status)); 3016 + ret = iwl_mvm_set_sta_key(mvm, vif, sta, key, key_offset); 3014 3017 if (ret) { 3015 3018 IWL_WARN(mvm, "set key failed\n"); 3016 3019 /*
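This hunk, together with the sta.c changes that follow, replaces the old have_key_offset boolean with an explicit key offset: after a HW restart the previously programmed hw_key_idx is passed back in so the same firmware slot is reused, while STA_KEY_IDX_INVALID asks iwl_mvm_set_sta_key() to allocate a fresh one. A toy sketch of that sentinel convention (the allocator and names are made up for the example):

#include <stdio.h>

#define KEY_IDX_INVALID 0xff	/* sentinel meaning "allocate a new slot" */

static unsigned char alloc_key_slot(void)
{
	static unsigned char next;
	return next++;		/* trivial allocator, enough for the sketch */
}

/* offset == KEY_IDX_INVALID picks a new slot; any other value is reused,
 * which is what the driver wants when reprogramming keys after a restart */
static unsigned char set_key(unsigned char offset)
{
	if (offset == KEY_IDX_INVALID)
		offset = alloc_key_slot();
	return offset;
}

int main(void)
{
	unsigned char idx = set_key(KEY_IDX_INVALID);	/* fresh key */

	printf("first program: slot %u\n", idx);
	printf("after restart: slot %u\n", set_key(idx));	/* reused */
	return 0;
}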
+47 -41
drivers/net/wireless/intel/iwlwifi/mvm/sta.c
··· 1201 1201 return max_offs; 1202 1202 } 1203 1203 1204 - static u8 iwl_mvm_get_key_sta_id(struct ieee80211_vif *vif, 1204 + static u8 iwl_mvm_get_key_sta_id(struct iwl_mvm *mvm, 1205 + struct ieee80211_vif *vif, 1205 1206 struct ieee80211_sta *sta) 1206 1207 { 1207 1208 struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif); ··· 1219 1218 * station ID, then use AP's station ID. 1220 1219 */ 1221 1220 if (vif->type == NL80211_IFTYPE_STATION && 1222 - mvmvif->ap_sta_id != IWL_MVM_STATION_COUNT) 1223 - return mvmvif->ap_sta_id; 1221 + mvmvif->ap_sta_id != IWL_MVM_STATION_COUNT) { 1222 + u8 sta_id = mvmvif->ap_sta_id; 1223 + 1224 + sta = rcu_dereference_protected(mvm->fw_id_to_mac_id[sta_id], 1225 + lockdep_is_held(&mvm->mutex)); 1226 + /* 1227 + * It is possible that the 'sta' parameter is NULL, 1228 + * for example when a GTK is removed - the sta_id will then 1229 + * be the AP ID, and no station was passed by mac80211. 1230 + */ 1231 + if (IS_ERR_OR_NULL(sta)) 1232 + return IWL_MVM_STATION_COUNT; 1233 + 1234 + return sta_id; 1235 + } 1224 1236 1225 1237 return IWL_MVM_STATION_COUNT; 1226 1238 } ··· 1241 1227 static int iwl_mvm_send_sta_key(struct iwl_mvm *mvm, 1242 1228 struct iwl_mvm_sta *mvm_sta, 1243 1229 struct ieee80211_key_conf *keyconf, bool mcast, 1244 - u32 tkip_iv32, u16 *tkip_p1k, u32 cmd_flags) 1230 + u32 tkip_iv32, u16 *tkip_p1k, u32 cmd_flags, 1231 + u8 key_offset) 1245 1232 { 1246 1233 struct iwl_mvm_add_sta_key_cmd cmd = {}; 1247 1234 __le16 key_flags; ··· 1284 1269 if (mcast) 1285 1270 key_flags |= cpu_to_le16(STA_KEY_MULTICAST); 1286 1271 1287 - cmd.key_offset = keyconf->hw_key_idx; 1272 + cmd.key_offset = key_offset; 1288 1273 cmd.key_flags = key_flags; 1289 1274 cmd.sta_id = sta_id; 1290 1275 ··· 1375 1360 struct ieee80211_vif *vif, 1376 1361 struct ieee80211_sta *sta, 1377 1362 struct ieee80211_key_conf *keyconf, 1363 + u8 key_offset, 1378 1364 bool mcast) 1379 1365 { 1380 1366 struct iwl_mvm_sta *mvm_sta = iwl_mvm_sta_from_mac80211(sta); ··· 1391 1375 ieee80211_get_key_rx_seq(keyconf, 0, &seq); 1392 1376 ieee80211_get_tkip_rx_p1k(keyconf, addr, seq.tkip.iv32, p1k); 1393 1377 ret = iwl_mvm_send_sta_key(mvm, mvm_sta, keyconf, mcast, 1394 - seq.tkip.iv32, p1k, 0); 1378 + seq.tkip.iv32, p1k, 0, key_offset); 1395 1379 break; 1396 1380 case WLAN_CIPHER_SUITE_CCMP: 1397 1381 case WLAN_CIPHER_SUITE_WEP40: 1398 1382 case WLAN_CIPHER_SUITE_WEP104: 1399 1383 ret = iwl_mvm_send_sta_key(mvm, mvm_sta, keyconf, mcast, 1400 - 0, NULL, 0); 1384 + 0, NULL, 0, key_offset); 1401 1385 break; 1402 1386 default: 1403 1387 ret = iwl_mvm_send_sta_key(mvm, mvm_sta, keyconf, mcast, 1404 - 0, NULL, 0); 1388 + 0, NULL, 0, key_offset); 1405 1389 } 1406 1390 1407 1391 return ret; ··· 1449 1433 struct ieee80211_vif *vif, 1450 1434 struct ieee80211_sta *sta, 1451 1435 struct ieee80211_key_conf *keyconf, 1452 - bool have_key_offset) 1436 + u8 key_offset) 1453 1437 { 1454 1438 bool mcast = !(keyconf->flags & IEEE80211_KEY_FLAG_PAIRWISE); 1455 1439 u8 sta_id; ··· 1459 1443 lockdep_assert_held(&mvm->mutex); 1460 1444 1461 1445 /* Get the station id from the mvm local station table */ 1462 - sta_id = iwl_mvm_get_key_sta_id(vif, sta); 1446 + sta_id = iwl_mvm_get_key_sta_id(mvm, vif, sta); 1463 1447 if (sta_id == IWL_MVM_STATION_COUNT) { 1464 1448 IWL_ERR(mvm, "Failed to find station id\n"); 1465 1449 return -EINVAL; ··· 1486 1470 if (WARN_ON_ONCE(iwl_mvm_sta_from_mac80211(sta)->vif != vif)) 1487 1471 return -EINVAL; 1488 1472 1489 - if (!have_key_offset) { 1490 - /* 1491 - * The D3 firmware 
hardcodes the PTK offset to 0, so we have to 1492 - * configure it there. As a result, this workaround exists to 1493 - * let the caller set the key offset (hw_key_idx), see d3.c. 1494 - */ 1495 - keyconf->hw_key_idx = iwl_mvm_set_fw_key_idx(mvm); 1496 - if (keyconf->hw_key_idx == STA_KEY_IDX_INVALID) 1473 + /* If the key_offset is not pre-assigned, we need to find a 1474 + * new offset to use. In normal cases, the offset is not 1475 + * pre-assigned, but during HW_RESTART we want to reuse the 1476 + * same indices, so we pass them when this function is called. 1477 + * 1478 + * In D3 entry, we need to hardcode the indices (because the 1479 + * firmware hardcodes the PTK offset to 0). In this case, we 1480 + * need to make sure we don't overwrite the hw_key_idx in the 1481 + * keyconf structure, because otherwise we cannot configure 1482 + * the original ones back when resuming. 1483 + */ 1484 + if (key_offset == STA_KEY_IDX_INVALID) { 1485 + key_offset = iwl_mvm_set_fw_key_idx(mvm); 1486 + if (key_offset == STA_KEY_IDX_INVALID) 1497 1487 return -ENOSPC; 1488 + keyconf->hw_key_idx = key_offset; 1498 1489 } 1499 1490 1500 - ret = __iwl_mvm_set_sta_key(mvm, vif, sta, keyconf, mcast); 1491 + ret = __iwl_mvm_set_sta_key(mvm, vif, sta, keyconf, key_offset, mcast); 1501 1492 if (ret) { 1502 1493 __clear_bit(keyconf->hw_key_idx, mvm->fw_key_table); 1503 1494 goto end; ··· 1518 1495 */ 1519 1496 if (keyconf->cipher == WLAN_CIPHER_SUITE_WEP40 || 1520 1497 keyconf->cipher == WLAN_CIPHER_SUITE_WEP104) { 1521 - ret = __iwl_mvm_set_sta_key(mvm, vif, sta, keyconf, !mcast); 1498 + ret = __iwl_mvm_set_sta_key(mvm, vif, sta, keyconf, 1499 + key_offset, !mcast); 1522 1500 if (ret) { 1523 1501 __clear_bit(keyconf->hw_key_idx, mvm->fw_key_table); 1524 1502 __iwl_mvm_remove_sta_key(mvm, sta_id, keyconf, mcast); ··· 1545 1521 lockdep_assert_held(&mvm->mutex); 1546 1522 1547 1523 /* Get the station id from the mvm local station table */ 1548 - sta_id = iwl_mvm_get_key_sta_id(vif, sta); 1524 + sta_id = iwl_mvm_get_key_sta_id(mvm, vif, sta); 1549 1525 1550 1526 IWL_DEBUG_WEP(mvm, "mvm remove dynamic key: idx=%d sta=%d\n", 1551 1527 keyconf->keyidx, sta_id); ··· 1571 1547 return 0; 1572 1548 } 1573 1549 1574 - /* 1575 - * It is possible that the 'sta' parameter is NULL, and thus 1576 - * there is a need to retrieve the sta from the local station table, 1577 - * for example when a GTK is removed (where the sta_id will then be 1578 - * the AP ID, and no station was passed by mac80211.)
1579 - */ 1580 - if (!sta) { 1581 - sta = rcu_dereference_protected(mvm->fw_id_to_mac_id[sta_id], 1582 - lockdep_is_held(&mvm->mutex)); 1583 - if (!sta) { 1584 - IWL_ERR(mvm, "Invalid station id\n"); 1585 - return -EINVAL; 1586 - } 1587 - } 1588 - 1589 - if (WARN_ON_ONCE(iwl_mvm_sta_from_mac80211(sta)->vif != vif)) 1590 - return -EINVAL; 1591 - 1592 1550 ret = __iwl_mvm_remove_sta_key(mvm, sta_id, keyconf, mcast); 1593 1551 if (ret) 1594 1552 return ret; ··· 1590 1584 u16 *phase1key) 1591 1585 { 1592 1586 struct iwl_mvm_sta *mvm_sta; 1593 - u8 sta_id = iwl_mvm_get_key_sta_id(vif, sta); 1587 + u8 sta_id = iwl_mvm_get_key_sta_id(mvm, vif, sta); 1594 1588 bool mcast = !(keyconf->flags & IEEE80211_KEY_FLAG_PAIRWISE); 1595 1589 1596 1590 if (WARN_ON_ONCE(sta_id == IWL_MVM_STATION_COUNT)) ··· 1608 1602 1609 1603 mvm_sta = iwl_mvm_sta_from_mac80211(sta); 1610 1604 iwl_mvm_send_sta_key(mvm, mvm_sta, keyconf, mcast, 1611 - iv32, phase1key, CMD_ASYNC); 1605 + iv32, phase1key, CMD_ASYNC, keyconf->hw_key_idx); 1612 1606 rcu_read_unlock(); 1613 1607 } 1614 1608
+2 -2
drivers/net/wireless/intel/iwlwifi/mvm/sta.h
··· 365 365 int iwl_mvm_set_sta_key(struct iwl_mvm *mvm, 366 366 struct ieee80211_vif *vif, 367 367 struct ieee80211_sta *sta, 368 - struct ieee80211_key_conf *key, 369 - bool have_key_offset); 368 + struct ieee80211_key_conf *keyconf, 369 + u8 key_offset); 370 370 int iwl_mvm_remove_sta_key(struct iwl_mvm *mvm, 371 371 struct ieee80211_vif *vif, 372 372 struct ieee80211_sta *sta,
+18 -1
drivers/net/wireless/intel/iwlwifi/pcie/drv.c
··· 423 423 /* 8000 Series */ 424 424 {IWL_PCI_DEVICE(0x24F3, 0x0010, iwl8260_2ac_cfg)}, 425 425 {IWL_PCI_DEVICE(0x24F3, 0x1010, iwl8260_2ac_cfg)}, 426 + {IWL_PCI_DEVICE(0x24F3, 0x0130, iwl8260_2ac_cfg)}, 427 + {IWL_PCI_DEVICE(0x24F3, 0x1130, iwl8260_2ac_cfg)}, 428 + {IWL_PCI_DEVICE(0x24F3, 0x0132, iwl8260_2ac_cfg)}, 429 + {IWL_PCI_DEVICE(0x24F3, 0x1132, iwl8260_2ac_cfg)}, 426 430 {IWL_PCI_DEVICE(0x24F3, 0x0110, iwl8260_2ac_cfg)}, 431 + {IWL_PCI_DEVICE(0x24F3, 0x01F0, iwl8260_2ac_cfg)}, 432 + {IWL_PCI_DEVICE(0x24F3, 0x0012, iwl8260_2ac_cfg)}, 433 + {IWL_PCI_DEVICE(0x24F3, 0x1012, iwl8260_2ac_cfg)}, 427 434 {IWL_PCI_DEVICE(0x24F3, 0x1110, iwl8260_2ac_cfg)}, 428 435 {IWL_PCI_DEVICE(0x24F3, 0x0050, iwl8260_2ac_cfg)}, 429 436 {IWL_PCI_DEVICE(0x24F3, 0x0250, iwl8260_2ac_cfg)}, 430 437 {IWL_PCI_DEVICE(0x24F3, 0x1050, iwl8260_2ac_cfg)}, 431 438 {IWL_PCI_DEVICE(0x24F3, 0x0150, iwl8260_2ac_cfg)}, 439 + {IWL_PCI_DEVICE(0x24F3, 0x1150, iwl8260_2ac_cfg)}, 432 440 {IWL_PCI_DEVICE(0x24F4, 0x0030, iwl8260_2ac_cfg)}, 433 - {IWL_PCI_DEVICE(0x24F4, 0x1130, iwl8260_2ac_cfg)}, 434 441 {IWL_PCI_DEVICE(0x24F4, 0x1030, iwl8260_2ac_cfg)}, 435 442 {IWL_PCI_DEVICE(0x24F3, 0xC010, iwl8260_2ac_cfg)}, 436 443 {IWL_PCI_DEVICE(0x24F3, 0xC110, iwl8260_2ac_cfg)}, ··· 445 438 {IWL_PCI_DEVICE(0x24F3, 0xC050, iwl8260_2ac_cfg)}, 446 439 {IWL_PCI_DEVICE(0x24F3, 0xD050, iwl8260_2ac_cfg)}, 447 440 {IWL_PCI_DEVICE(0x24F3, 0x8010, iwl8260_2ac_cfg)}, 441 + {IWL_PCI_DEVICE(0x24F3, 0x8110, iwl8260_2ac_cfg)}, 448 442 {IWL_PCI_DEVICE(0x24F3, 0x9010, iwl8260_2ac_cfg)}, 443 + {IWL_PCI_DEVICE(0x24F3, 0x9110, iwl8260_2ac_cfg)}, 449 444 {IWL_PCI_DEVICE(0x24F4, 0x8030, iwl8260_2ac_cfg)}, 450 445 {IWL_PCI_DEVICE(0x24F4, 0x9030, iwl8260_2ac_cfg)}, 446 + {IWL_PCI_DEVICE(0x24F3, 0x8130, iwl8260_2ac_cfg)}, 447 + {IWL_PCI_DEVICE(0x24F3, 0x9130, iwl8260_2ac_cfg)}, 448 + {IWL_PCI_DEVICE(0x24F3, 0x8132, iwl8260_2ac_cfg)}, 449 + {IWL_PCI_DEVICE(0x24F3, 0x9132, iwl8260_2ac_cfg)}, 451 450 {IWL_PCI_DEVICE(0x24F3, 0x8050, iwl8260_2ac_cfg)}, 451 + {IWL_PCI_DEVICE(0x24F3, 0x8150, iwl8260_2ac_cfg)}, 452 452 {IWL_PCI_DEVICE(0x24F3, 0x9050, iwl8260_2ac_cfg)}, 453 + {IWL_PCI_DEVICE(0x24F3, 0x9150, iwl8260_2ac_cfg)}, 453 454 {IWL_PCI_DEVICE(0x24F3, 0x0004, iwl8260_2n_cfg)}, 455 + {IWL_PCI_DEVICE(0x24F3, 0x0044, iwl8260_2n_cfg)}, 454 456 {IWL_PCI_DEVICE(0x24F5, 0x0010, iwl4165_2ac_cfg)}, 455 457 {IWL_PCI_DEVICE(0x24F6, 0x0030, iwl4165_2ac_cfg)}, 456 458 {IWL_PCI_DEVICE(0x24F3, 0x0810, iwl8260_2ac_cfg)}, 457 459 {IWL_PCI_DEVICE(0x24F3, 0x0910, iwl8260_2ac_cfg)}, 458 460 {IWL_PCI_DEVICE(0x24F3, 0x0850, iwl8260_2ac_cfg)}, 459 461 {IWL_PCI_DEVICE(0x24F3, 0x0950, iwl8260_2ac_cfg)}, 462 + {IWL_PCI_DEVICE(0x24F3, 0x0930, iwl8260_2ac_cfg)}, 460 463 #endif /* CONFIG_IWLMVM */ 461 464 462 465 {0}
+1 -1
drivers/net/wireless/realtek/rtlwifi/rtl8821ae/hw.c
··· 2272 2272 struct rtl_priv *rtlpriv = rtl_priv(hw); 2273 2273 struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw)); 2274 2274 2275 - if (!rtlpci->int_clear) 2275 + if (rtlpci->int_clear) 2276 2276 rtl8821ae_clear_interrupt(hw);/*clear it here first*/ 2277 2277 2278 2278 rtl_write_dword(rtlpriv, REG_HIMR, rtlpci->irq_mask[0] & 0xFFFFFFFF);
+1 -1
drivers/net/wireless/realtek/rtlwifi/rtl8821ae/sw.c
··· 448 448 MODULE_PARM_DESC(msi, "Set to 1 to use MSI interrupts mode (default 1)\n"); 449 449 MODULE_PARM_DESC(debug, "Set debug level (0-5) (default 0)"); 450 450 MODULE_PARM_DESC(disable_watchdog, "Set to 1 to disable the watchdog (default 0)\n"); 451 - MODULE_PARM_DESC(int_clear, "Set to 1 to disable interrupt clear before set (default 0)\n"); 451 + MODULE_PARM_DESC(int_clear, "Set to 0 to disable interrupt clear before set (default 1)\n"); 452 452 453 453 static SIMPLE_DEV_PM_OPS(rtlwifi_pm_ops, rtl_pci_suspend, rtl_pci_resume); 454 454
+2 -1
drivers/nvme/host/Makefile
··· 1 1 2 2 obj-$(CONFIG_BLK_DEV_NVME) += nvme.o 3 3 4 - nvme-y += pci.o scsi.o lightnvm.o 4 + lightnvm-$(CONFIG_NVM) := lightnvm.o 5 + nvme-y += pci.o scsi.o $(lightnvm-y)
+118 -47
drivers/nvme/host/lightnvm.c
··· 22 22 23 23 #include "nvme.h" 24 24 25 - #ifdef CONFIG_NVM 26 - 27 25 #include <linux/nvme.h> 28 26 #include <linux/bitops.h> 29 27 #include <linux/lightnvm.h> ··· 91 93 __le16 cdw14[6]; 92 94 }; 93 95 94 - struct nvme_nvm_bbtbl { 96 + struct nvme_nvm_getbbtbl { 95 97 __u8 opcode; 96 98 __u8 flags; 97 99 __u16 command_id; ··· 99 101 __u64 rsvd[2]; 100 102 __le64 prp1; 101 103 __le64 prp2; 102 - __le32 prp1_len; 103 - __le32 prp2_len; 104 - __le32 lbb; 105 - __u32 rsvd11[3]; 104 + __le64 spba; 105 + __u32 rsvd4[4]; 106 + }; 107 + 108 + struct nvme_nvm_setbbtbl { 109 + __u8 opcode; 110 + __u8 flags; 111 + __u16 command_id; 112 + __le32 nsid; 113 + __le64 rsvd[2]; 114 + __le64 prp1; 115 + __le64 prp2; 116 + __le64 spba; 117 + __le16 nlb; 118 + __u8 value; 119 + __u8 rsvd3; 120 + __u32 rsvd4[3]; 106 121 }; 107 122 108 123 struct nvme_nvm_erase_blk { ··· 140 129 struct nvme_nvm_hb_rw hb_rw; 141 130 struct nvme_nvm_ph_rw ph_rw; 142 131 struct nvme_nvm_l2ptbl l2p; 143 - struct nvme_nvm_bbtbl get_bb; 144 - struct nvme_nvm_bbtbl set_bb; 132 + struct nvme_nvm_getbbtbl get_bb; 133 + struct nvme_nvm_setbbtbl set_bb; 145 134 struct nvme_nvm_erase_blk erase; 146 135 }; 147 136 }; ··· 153 142 __u8 num_ch; 154 143 __u8 num_lun; 155 144 __u8 num_pln; 145 + __u8 rsvd1; 156 146 __le16 num_blk; 157 147 __le16 num_pg; 158 148 __le16 fpg_sz; 159 149 __le16 csecs; 160 150 __le16 sos; 151 + __le16 rsvd2; 161 152 __le32 trdt; 162 153 __le32 trdm; 163 154 __le32 tprt; ··· 167 154 __le32 tbet; 168 155 __le32 tbem; 169 156 __le32 mpos; 157 + __le32 mccap; 170 158 __le16 cpar; 171 - __u8 reserved[913]; 159 + __u8 reserved[906]; 172 160 } __packed; 173 161 174 162 struct nvme_nvm_addr_format { ··· 192 178 __u8 ver_id; 193 179 __u8 vmnt; 194 180 __u8 cgrps; 195 - __u8 res[5]; 181 + __u8 res; 196 182 __le32 cap; 197 183 __le32 dom; 198 184 struct nvme_nvm_addr_format ppaf; 199 - __u8 ppat; 200 - __u8 resv[223]; 185 + __u8 resv[228]; 201 186 struct nvme_nvm_id_group groups[4]; 202 187 } __packed; 188 + 189 + struct nvme_nvm_bb_tbl { 190 + __u8 tblid[4]; 191 + __le16 verid; 192 + __le16 revid; 193 + __le32 rvsd1; 194 + __le32 tblks; 195 + __le32 tfact; 196 + __le32 tgrown; 197 + __le32 tdresv; 198 + __le32 thresv; 199 + __le32 rsvd2[8]; 200 + __u8 blk[0]; 201 + }; 203 202 204 203 /* 205 204 * Check we didn't inadvertently grow the command struct ··· 222 195 BUILD_BUG_ON(sizeof(struct nvme_nvm_identity) != 64); 223 196 BUILD_BUG_ON(sizeof(struct nvme_nvm_hb_rw) != 64); 224 197 BUILD_BUG_ON(sizeof(struct nvme_nvm_ph_rw) != 64); 225 - BUILD_BUG_ON(sizeof(struct nvme_nvm_bbtbl) != 64); 198 + BUILD_BUG_ON(sizeof(struct nvme_nvm_getbbtbl) != 64); 199 + BUILD_BUG_ON(sizeof(struct nvme_nvm_setbbtbl) != 64); 226 200 BUILD_BUG_ON(sizeof(struct nvme_nvm_l2ptbl) != 64); 227 201 BUILD_BUG_ON(sizeof(struct nvme_nvm_erase_blk) != 64); 228 202 BUILD_BUG_ON(sizeof(struct nvme_nvm_id_group) != 960); 229 203 BUILD_BUG_ON(sizeof(struct nvme_nvm_addr_format) != 128); 230 204 BUILD_BUG_ON(sizeof(struct nvme_nvm_id) != 4096); 205 + BUILD_BUG_ON(sizeof(struct nvme_nvm_bb_tbl) != 512); 231 206 } 232 207 233 208 static int init_grps(struct nvm_id *nvm_id, struct nvme_nvm_id *nvme_nvm_id) ··· 263 234 dst->tbet = le32_to_cpu(src->tbet); 264 235 dst->tbem = le32_to_cpu(src->tbem); 265 236 dst->mpos = le32_to_cpu(src->mpos); 237 + dst->mccap = le32_to_cpu(src->mccap); 266 238 267 239 dst->cpar = le16_to_cpu(src->cpar); 268 240 } ··· 274 244 static int nvme_nvm_identity(struct request_queue *q, struct nvm_id *nvm_id) 275 245 { 276 246 struct nvme_ns 
*ns = q->queuedata; 247 + struct nvme_dev *dev = ns->dev; 277 248 struct nvme_nvm_id *nvme_nvm_id; 278 249 struct nvme_nvm_command c = {}; 279 250 int ret; ··· 287 256 if (!nvme_nvm_id) 288 257 return -ENOMEM; 289 258 290 - ret = nvme_submit_sync_cmd(q, (struct nvme_command *)&c, nvme_nvm_id, 291 - sizeof(struct nvme_nvm_id)); 259 + ret = nvme_submit_sync_cmd(dev->admin_q, (struct nvme_command *)&c, 260 + nvme_nvm_id, sizeof(struct nvme_nvm_id)); 292 261 if (ret) { 293 262 ret = -EIO; 294 263 goto out; ··· 299 268 nvm_id->cgrps = nvme_nvm_id->cgrps; 300 269 nvm_id->cap = le32_to_cpu(nvme_nvm_id->cap); 301 270 nvm_id->dom = le32_to_cpu(nvme_nvm_id->dom); 271 + memcpy(&nvm_id->ppaf, &nvme_nvm_id->ppaf, 272 + sizeof(struct nvme_nvm_addr_format)); 302 273 303 274 ret = init_grps(nvm_id, nvme_nvm_id); 304 275 out: ··· 314 281 struct nvme_ns *ns = q->queuedata; 315 282 struct nvme_dev *dev = ns->dev; 316 283 struct nvme_nvm_command c = {}; 317 - u32 len = queue_max_hw_sectors(q) << 9; 284 + u32 len = queue_max_hw_sectors(dev->admin_q) << 9; 318 285 u32 nlb_pr_rq = len / sizeof(u64); 319 286 u64 cmd_slba = slba; 320 287 void *entries; ··· 332 299 c.l2p.slba = cpu_to_le64(cmd_slba); 333 300 c.l2p.nlb = cpu_to_le32(cmd_nlb); 334 301 335 - ret = nvme_submit_sync_cmd(q, (struct nvme_command *)&c, 336 - entries, len); 302 + ret = nvme_submit_sync_cmd(dev->admin_q, 303 + (struct nvme_command *)&c, entries, len); 337 304 if (ret) { 338 305 dev_err(dev->dev, "L2P table transfer failed (%d)\n", 339 306 ret); ··· 355 322 return ret; 356 323 } 357 324 358 - static int nvme_nvm_get_bb_tbl(struct request_queue *q, int lunid, 359 - unsigned int nr_blocks, 360 - nvm_bb_update_fn *update_bbtbl, void *priv) 325 + static int nvme_nvm_get_bb_tbl(struct nvm_dev *nvmdev, struct ppa_addr ppa, 326 + int nr_blocks, nvm_bb_update_fn *update_bbtbl, 327 + void *priv) 361 328 { 329 + struct request_queue *q = nvmdev->q; 362 330 struct nvme_ns *ns = q->queuedata; 363 331 struct nvme_dev *dev = ns->dev; 364 332 struct nvme_nvm_command c = {}; 365 - void *bb_bitmap; 366 - u16 bb_bitmap_size; 333 + struct nvme_nvm_bb_tbl *bb_tbl; 334 + int tblsz = sizeof(struct nvme_nvm_bb_tbl) + nr_blocks; 367 335 int ret = 0; 368 336 369 337 c.get_bb.opcode = nvme_nvm_admin_get_bb_tbl; 370 338 c.get_bb.nsid = cpu_to_le32(ns->ns_id); 371 - c.get_bb.lbb = cpu_to_le32(lunid); 372 - bb_bitmap_size = ((nr_blocks >> 15) + 1) * PAGE_SIZE; 373 - bb_bitmap = kmalloc(bb_bitmap_size, GFP_KERNEL); 374 - if (!bb_bitmap) 339 + c.get_bb.spba = cpu_to_le64(ppa.ppa); 340 + 341 + bb_tbl = kzalloc(tblsz, GFP_KERNEL); 342 + if (!bb_tbl) 375 343 return -ENOMEM; 376 344 377 - bitmap_zero(bb_bitmap, nr_blocks); 378 - 379 - ret = nvme_submit_sync_cmd(q, (struct nvme_command *)&c, bb_bitmap, 380 - bb_bitmap_size); 345 + ret = nvme_submit_sync_cmd(dev->admin_q, (struct nvme_command *)&c, 346 + bb_tbl, tblsz); 381 347 if (ret) { 382 348 dev_err(dev->dev, "get bad block table failed (%d)\n", ret); 383 349 ret = -EIO; 384 350 goto out; 385 351 } 386 352 387 - ret = update_bbtbl(lunid, bb_bitmap, nr_blocks, priv); 353 + if (bb_tbl->tblid[0] != 'B' || bb_tbl->tblid[1] != 'B' || 354 + bb_tbl->tblid[2] != 'L' || bb_tbl->tblid[3] != 'T') { 355 + dev_err(dev->dev, "bbt format mismatch\n"); 356 + ret = -EINVAL; 357 + goto out; 358 + } 359 + 360 + if (le16_to_cpu(bb_tbl->verid) != 1) { 361 + ret = -EINVAL; 362 + dev_err(dev->dev, "bbt version not supported\n"); 363 + goto out; 364 + } 365 + 366 + if (le32_to_cpu(bb_tbl->tblks) != nr_blocks) { 367 + ret = -EINVAL; 368 + 
dev_err(dev->dev, "bbt unexpected blocks returned (%u!=%u)", 369 + le32_to_cpu(bb_tbl->tblks), nr_blocks); 370 + goto out; 371 + } 372 + 373 + ppa = dev_to_generic_addr(nvmdev, ppa); 374 + ret = update_bbtbl(ppa, nr_blocks, bb_tbl->blk, priv); 388 375 if (ret) { 389 376 ret = -EINTR; 390 377 goto out; 391 378 } 392 379 393 380 out: 394 - kfree(bb_bitmap); 381 + kfree(bb_tbl); 382 + return ret; 383 + } 384 + 385 + static int nvme_nvm_set_bb_tbl(struct request_queue *q, struct nvm_rq *rqd, 386 + int type) 387 + { 388 + struct nvme_ns *ns = q->queuedata; 389 + struct nvme_dev *dev = ns->dev; 390 + struct nvme_nvm_command c = {}; 391 + int ret = 0; 392 + 393 + c.set_bb.opcode = nvme_nvm_admin_set_bb_tbl; 394 + c.set_bb.nsid = cpu_to_le32(ns->ns_id); 395 + c.set_bb.spba = cpu_to_le64(rqd->ppa_addr.ppa); 396 + c.set_bb.nlb = cpu_to_le16(rqd->nr_pages - 1); 397 + c.set_bb.value = type; 398 + 399 + ret = nvme_submit_sync_cmd(dev->admin_q, (struct nvme_command *)&c, 400 + NULL, 0); 401 + if (ret) 402 + dev_err(dev->dev, "set bad block table failed (%d)\n", ret); 395 403 return ret; 396 404 } 397 405 ··· 548 474 .get_l2p_tbl = nvme_nvm_get_l2p_tbl, 549 475 550 476 .get_bb_tbl = nvme_nvm_get_bb_tbl, 477 + .set_bb_tbl = nvme_nvm_set_bb_tbl, 551 478 552 479 .submit_io = nvme_nvm_submit_io, 553 480 .erase_block = nvme_nvm_erase_block, ··· 571 496 nvm_unregister(disk_name); 572 497 } 573 498 499 + /* move to shared place when used in multiple places. */ 500 + #define PCI_VENDOR_ID_CNEX 0x1d1d 501 + #define PCI_DEVICE_ID_CNEX_WL 0x2807 502 + #define PCI_DEVICE_ID_CNEX_QEMU 0x1f1f 503 + 574 504 int nvme_nvm_ns_supported(struct nvme_ns *ns, struct nvme_id_ns *id) 575 505 { 576 506 struct nvme_dev *dev = ns->dev; 577 507 struct pci_dev *pdev = to_pci_dev(dev->dev); 578 508 579 509 /* QEMU NVMe simulator - PCI ID + Vendor specific bit */ 580 - if (pdev->vendor == PCI_VENDOR_ID_INTEL && pdev->device == 0x5845 && 510 + if (pdev->vendor == PCI_VENDOR_ID_CNEX && 511 + pdev->device == PCI_DEVICE_ID_CNEX_QEMU && 581 512 id->vs[0] == 0x1) 582 513 return 1; 583 514 584 515 /* CNEX Labs - PCI ID + Vendor specific bit */ 585 - if (pdev->vendor == 0x1d1d && pdev->device == 0x2807 && 516 + if (pdev->vendor == PCI_VENDOR_ID_CNEX && 517 + pdev->device == PCI_DEVICE_ID_CNEX_WL && 586 518 id->vs[0] == 0x1) 587 519 return 1; 588 520 589 521 return 0; 590 522 } 591 - #else 592 - int nvme_nvm_register(struct request_queue *q, char *disk_name) 593 - { 594 - return 0; 595 - } 596 - void nvme_nvm_unregister(struct request_queue *q, char *disk_name) {}; 597 - int nvme_nvm_ns_supported(struct nvme_ns *ns, struct nvme_id_ns *id) 598 - { 599 - return 0; 600 - } 601 - #endif /* CONFIG_NVM */
+14
drivers/nvme/host/nvme.h
··· 136 136 int nvme_sg_io32(struct nvme_ns *ns, unsigned long arg); 137 137 int nvme_sg_get_version_num(int __user *ip); 138 138 139 + #ifdef CONFIG_NVM 139 140 int nvme_nvm_ns_supported(struct nvme_ns *ns, struct nvme_id_ns *id); 140 141 int nvme_nvm_register(struct request_queue *q, char *disk_name); 141 142 void nvme_nvm_unregister(struct request_queue *q, char *disk_name); 143 + #else 144 + static inline int nvme_nvm_register(struct request_queue *q, char *disk_name) 145 + { 146 + return 0; 147 + } 148 + 149 + static inline void nvme_nvm_unregister(struct request_queue *q, char *disk_name) {}; 150 + 151 + static inline int nvme_nvm_ns_supported(struct nvme_ns *ns, struct nvme_id_ns *id) 152 + { 153 + return 0; 154 + } 155 + #endif /* CONFIG_NVM */ 142 156 143 157 #endif /* _NVME_H */
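Together with the Makefile hunk above, this is the stock kernel recipe for an optional feature: the object file is linked only when CONFIG_NVM is set, and the header provides static inline no-op stubs so call sites stay free of #ifdefs. A self-contained sketch of the same idiom, with a made-up feature name and -DHAVE_LIGHTNVM standing in for the Kconfig symbol:

/* build with -DHAVE_LIGHTNVM to get the real implementation */
#include <stdio.h>

#ifdef HAVE_LIGHTNVM
int lnvm_register(const char *disk_name)
{
	printf("registering %s with LightNVM\n", disk_name);
	return 0;
}
#else
/* the stubs compile away entirely; callers need no #ifdef of their own */
static inline int lnvm_register(const char *disk_name)
{
	(void)disk_name;
	return 0;
}
#endif

int main(void)
{
	return lnvm_register("nvme0n1");
}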
+37 -14
drivers/nvme/host/pci.c
··· 896 896 goto retry_cmd; 897 897 } 898 898 if (blk_integrity_rq(req)) { 899 - if (blk_rq_count_integrity_sg(req->q, req->bio) != 1) 899 + if (blk_rq_count_integrity_sg(req->q, req->bio) != 1) { 900 + dma_unmap_sg(dev->dev, iod->sg, iod->nents, 901 + dma_dir); 900 902 goto error_cmd; 903 + } 901 904 902 905 sg_init_table(iod->meta_sg, 1); 903 906 if (blk_rq_map_integrity_sg( 904 907 req->q, req->bio, iod->meta_sg) != 1) { 908 + dma_unmap_sg(dev->dev, iod->sg, iod->nents, 909 + dma_dir); 905 910 goto error_cmd; 911 + } 906 912 907 913 if (rq_data_dir(req)) 908 914 nvme_dif_remap(req, nvme_dif_prep); 909 915 910 - if (!dma_map_sg(nvmeq->q_dmadev, iod->meta_sg, 1, dma_dir)) 916 + if (!dma_map_sg(nvmeq->q_dmadev, iod->meta_sg, 1, dma_dir)) { 917 + dma_unmap_sg(dev->dev, iod->sg, iod->nents, 918 + dma_dir); 911 919 goto error_cmd; 920 + } 912 921 } 913 922 } 914 923 ··· 977 968 if (head == nvmeq->cq_head && phase == nvmeq->cq_phase) 978 969 return; 979 970 980 - writel(head, nvmeq->q_db + nvmeq->dev->db_stride); 971 + if (likely(nvmeq->cq_vector >= 0)) 972 + writel(head, nvmeq->q_db + nvmeq->dev->db_stride); 981 973 nvmeq->cq_head = head; 982 974 nvmeq->cq_phase = phase; 983 975 ··· 1737 1727 u32 aqa; 1738 1728 u64 cap = lo_hi_readq(&dev->bar->cap); 1739 1729 struct nvme_queue *nvmeq; 1740 - unsigned page_shift = PAGE_SHIFT; 1730 + /* 1731 + * default to a 4K page size, with the intention to update this 1732 + * path in the future to accommodate architectures with differing 1733 + * kernel and IO page sizes. 1734 + */ 1735 + unsigned page_shift = 12; 1741 1736 unsigned dev_page_min = NVME_CAP_MPSMIN(cap) + 12; 1742 - unsigned dev_page_max = NVME_CAP_MPSMAX(cap) + 12; 1743 1737 1744 1738 if (page_shift < dev_page_min) { 1745 1739 dev_err(dev->dev, ··· 1751 1737 "host (%u)\n", 1 << dev_page_min, 1752 1738 1 << page_shift); 1753 1739 return -ENODEV; 1754 - } 1755 - if (page_shift > dev_page_max) { 1756 - dev_info(dev->dev, 1757 - "Device maximum page size (%u) smaller than " 1758 - "host (%u); enabling work-around\n", 1759 - 1 << dev_page_max, 1 << page_shift); 1760 - page_shift = dev_page_max; 1761 1740 } 1762 1741 1763 1742 dev->subsystem = readl(&dev->bar->vs) >= NVME_VS(1, 1) ? ··· 2275 2268 if (dev->max_hw_sectors) { 2276 2269 blk_queue_max_hw_sectors(ns->queue, dev->max_hw_sectors); 2277 2270 blk_queue_max_segments(ns->queue, 2278 - ((dev->max_hw_sectors << 9) / dev->page_size) + 1); 2271 + (dev->max_hw_sectors / (dev->page_size >> 9)) + 1); 2279 2272 } 2280 2273 if (dev->stripe_size) 2281 2274 blk_queue_chunk_sectors(ns->queue, dev->stripe_size >> 9); ··· 2708 2701 dev->q_depth = min_t(int, NVME_CAP_MQES(cap) + 1, NVME_Q_DEPTH); 2709 2702 dev->db_stride = 1 << NVME_CAP_STRIDE(cap); 2710 2703 dev->dbs = ((void __iomem *)dev->bar) + 4096; 2704 + 2705 + /* 2706 + * Temporary fix for the Apple controller found in the MacBook8,1 and 2707 + * some MacBook7,1 to avoid controller resets and data loss.
2708 + */ 2709 + if (pdev->vendor == PCI_VENDOR_ID_APPLE && pdev->device == 0x2001) { 2710 + dev->q_depth = 2; 2711 + dev_warn(dev->dev, "detected Apple NVMe controller, set " 2712 + "queue depth=%u to work around controller resets\n", 2713 + dev->q_depth); 2714 + } 2715 + 2711 2716 if (readl(&dev->bar->vs) >= NVME_VS(1, 2)) 2712 2717 dev->cmb = nvme_map_cmb(dev); 2713 2718 ··· 2806 2787 { 2807 2788 struct nvme_delq_ctx *dq = nvmeq->cmdinfo.ctx; 2808 2789 nvme_put_dq(dq); 2790 + 2791 + spin_lock_irq(&nvmeq->q_lock); 2792 + nvme_process_cq(nvmeq); 2793 + spin_unlock_irq(&nvmeq->q_lock); 2809 2794 } 2810 2795 2811 2796 static int adapter_async_del_queue(struct nvme_queue *nvmeq, u8 opcode,
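The Apple hunk above is a standard PCI ID quirk: match vendor/device during controller setup and clamp the queue depth before any queues are allocated. Roughly this shape, with only the IDs taken from the hunk and everything else hypothetical:

#include <stdint.h>
#include <stdio.h>

#define PCI_VENDOR_ID_APPLE	0x106b
#define APPLE_NVME_DEVICE	0x2001

struct nvme_dev_sketch { uint16_t vendor, device; unsigned q_depth; };

/* clamp the queue depth for the known-broken controller before use */
static void apply_quirks(struct nvme_dev_sketch *dev)
{
	if (dev->vendor == PCI_VENDOR_ID_APPLE &&
	    dev->device == APPLE_NVME_DEVICE)
		dev->q_depth = 2;
}

int main(void)
{
	struct nvme_dev_sketch dev = {
		PCI_VENDOR_ID_APPLE, APPLE_NVME_DEVICE, 1024
	};

	apply_quirks(&dev);
	printf("queue depth: %u\n", dev.q_depth);
	return 0;
}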
-1
drivers/pci/host/pcie-designware.c
··· 440 440 ret, pp->io); 441 441 continue; 442 442 } 443 - pp->io_base = pp->io->start; 444 443 break; 445 444 case IORESOURCE_MEM: 446 445 pp->mem = win->res;
+2 -2
drivers/pci/host/pcie-hisi.c
··· 111 111 .link_up = hisi_pcie_link_up, 112 112 }; 113 113 114 - static int __init hisi_add_pcie_port(struct pcie_port *pp, 114 + static int hisi_add_pcie_port(struct pcie_port *pp, 115 115 struct platform_device *pdev) 116 116 { 117 117 int ret; ··· 139 139 return 0; 140 140 } 141 141 142 - static int __init hisi_pcie_probe(struct platform_device *pdev) 142 + static int hisi_pcie_probe(struct platform_device *pdev) 143 143 { 144 144 struct hisi_pcie *hisi_pcie; 145 145 struct pcie_port *pp;
+4 -1
drivers/pci/pci-sysfs.c
··· 216 216 if (ret) 217 217 return ret; 218 218 219 - if (node >= MAX_NUMNODES || !node_online(node)) 219 + if ((node < 0 && node != NUMA_NO_NODE) || node >= MAX_NUMNODES) 220 + return -EINVAL; 221 + 222 + if (node != NUMA_NO_NODE && !node_online(node)) 220 223 return -EINVAL; 221 224 222 225 add_taint(TAINT_FIRMWARE_WORKAROUND, LOCKDEP_STILL_OK);
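The fix above tightens the numa_node store path: NUMA_NO_NODE (-1) is explicitly allowed, every other negative or out-of-range value is rejected, and node_online() is only consulted for real node numbers, since it is not defined for -1. The validation in isolation, with the constants assumed for the sketch:

#include <stdbool.h>
#include <stdio.h>

#define NUMA_NO_NODE	(-1)
#define MAX_NUMNODES	64

static bool node_online(int node)
{
	return node == 0;	/* pretend only node 0 exists */
}

/* -1 means "no node" and passes; other negatives and out-of-range values
 * fail, and genuine node numbers must also be online */
static int validate_node(int node)
{
	if ((node < 0 && node != NUMA_NO_NODE) || node >= MAX_NUMNODES)
		return -1;
	if (node != NUMA_NO_NODE && !node_online(node))
		return -1;
	return 0;
}

int main(void)
{
	printf("-1:%d -2:%d 0:%d 1:%d\n", validate_node(-1),
	       validate_node(-2), validate_node(0), validate_node(1));
	return 0;
}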
-2
drivers/pci/pci.h
··· 337 337 } 338 338 #endif 339 339 340 - struct pci_host_bridge *pci_find_host_bridge(struct pci_bus *bus); 341 - 342 340 #endif /* DRIVERS_PCI_H */
+2 -2
drivers/pci/probe.c
··· 1685 1685 { 1686 1686 struct device *bridge = pci_get_host_bridge_device(dev); 1687 1687 1688 - if (IS_ENABLED(CONFIG_OF) && dev->dev.of_node) { 1689 - if (bridge->parent) 1688 + if (IS_ENABLED(CONFIG_OF) && 1689 + bridge->parent && bridge->parent->of_node) { 1690 1690 of_dma_configure(&dev->dev, bridge->parent->of_node); 1691 1691 } else if (has_acpi_companion(bridge)) { 1692 1692 struct acpi_device *adev = to_acpi_device_node(bridge->fwnode);
-4
drivers/pinctrl/Kconfig
··· 5 5 config PINCTRL 6 6 bool 7 7 8 - if PINCTRL 9 - 10 8 menu "Pin controllers" 11 9 depends on PINCTRL 12 10 ··· 272 274 select GPIOLIB 273 275 274 276 endmenu 275 - 276 - endif
+6 -2
drivers/pinctrl/freescale/pinctrl-imx1-core.c
··· 538 538 func->groups[i] = child->name; 539 539 grp = &info->groups[grp_index++]; 540 540 ret = imx1_pinctrl_parse_groups(child, grp, info, i++); 541 - if (ret == -ENOMEM) 541 + if (ret == -ENOMEM) { 542 + of_node_put(child); 542 543 return ret; 544 + } 543 545 } 544 546 545 547 return 0; ··· 584 582 585 583 for_each_child_of_node(np, child) { 586 584 ret = imx1_pinctrl_parse_functions(child, info, ifunc++); 587 - if (ret == -ENOMEM) 585 + if (ret == -ENOMEM) { 586 + of_node_put(child); 588 587 return -ENOMEM; 588 + } 589 589 } 590 590 591 591 return 0;
+4 -7
drivers/pinctrl/mediatek/pinctrl-mtk-common.c
··· 747 747 reg_addr = mtk_get_port(pctl, offset) + pctl->devdata->dir_offset; 748 748 bit = BIT(offset & 0xf); 749 749 regmap_read(pctl->regmap1, reg_addr, &read_val); 750 - return !!(read_val & bit); 750 + return !(read_val & bit); 751 751 } 752 752 753 753 static int mtk_gpio_get(struct gpio_chip *chip, unsigned offset) ··· 757 757 unsigned int read_val = 0; 758 758 struct mtk_pinctrl *pctl = dev_get_drvdata(chip->dev); 759 759 760 - if (mtk_gpio_get_direction(chip, offset)) 761 - reg_addr = mtk_get_port(pctl, offset) + 762 - pctl->devdata->dout_offset; 763 - else 764 - reg_addr = mtk_get_port(pctl, offset) + 765 - pctl->devdata->din_offset; 760 + reg_addr = mtk_get_port(pctl, offset) + 761 + pctl->devdata->din_offset; 766 762 767 763 bit = BIT(offset & 0xf); 768 764 regmap_read(pctl->regmap1, reg_addr, &read_val); ··· 993 997 .owner = THIS_MODULE, 994 998 .request = gpiochip_generic_request, 995 999 .free = gpiochip_generic_free, 1000 + .get_direction = mtk_gpio_get_direction, 996 1001 .direction_input = mtk_gpio_direction_input, 997 1002 .direction_output = mtk_gpio_direction_output, 998 1003 .get = mtk_gpio_get,
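In the hunk above, the DIR register uses 1 = output while gpiolib's get_direction callback must return 1 = input, hence the inverted test; mtk_gpio_get() likewise now always reads the DIN register rather than choosing DIN or DOUT by direction. The decode on its own, with the register layout assumed from the hunk:

#include <stdint.h>
#include <stdio.h>

/* DIR register: bit set means the pin drives output; gpiolib's
 * convention for get_direction is 1 = input, 0 = output */
static int gpio_get_direction(uint16_t dir_reg, unsigned offset)
{
	return !(dir_reg & (1u << (offset & 0xf)));
}

int main(void)
{
	uint16_t dir = 0x0001;	/* pin 0 configured as output */

	printf("pin0 input? %d, pin1 input? %d\n",
	       gpio_get_direction(dir, 0), gpio_get_direction(dir, 1));
	return 0;
}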
+1 -1
drivers/pinctrl/qcom/pinctrl-ssbi-gpio.c
··· 672 672 return -ENOMEM; 673 673 674 674 pctrl->dev = &pdev->dev; 675 - pctrl->npins = (unsigned)of_device_get_match_data(&pdev->dev); 675 + pctrl->npins = (unsigned long)of_device_get_match_data(&pdev->dev); 676 676 677 677 pctrl->regmap = dev_get_regmap(pdev->dev.parent, NULL); 678 678 if (!pctrl->regmap) {
+1 -1
drivers/pinctrl/qcom/pinctrl-ssbi-mpp.c
··· 763 763 return -ENOMEM; 764 764 765 765 pctrl->dev = &pdev->dev; 766 - pctrl->npins = (unsigned)of_device_get_match_data(&pdev->dev); 766 + pctrl->npins = (unsigned long)of_device_get_match_data(&pdev->dev); 767 767 768 768 pctrl->regmap = dev_get_regmap(pdev->dev.parent, NULL); 769 769 if (!pctrl->regmap) {
+3 -3
drivers/pinctrl/sh-pfc/pfc-sh7734.c
··· 31 31 PORT_GP_12(5, fn, sfx) 32 32 33 33 #undef _GP_DATA 34 - #define _GP_DATA(bank, pin, name, sfx) \ 34 + #define _GP_DATA(bank, pin, name, sfx, cfg) \ 35 35 PINMUX_DATA(name##_DATA, name##_FN, name##_IN, name##_OUT) 36 36 37 - #define _GP_INOUTSEL(bank, pin, name, sfx) name##_IN, name##_OUT 38 - #define _GP_INDT(bank, pin, name, sfx) name##_DATA 37 + #define _GP_INOUTSEL(bank, pin, name, sfx, cfg) name##_IN, name##_OUT 38 + #define _GP_INDT(bank, pin, name, sfx, cfg) name##_DATA 39 39 #define GP_INOUTSEL(bank) PORT_GP_32_REV(bank, _GP_INOUTSEL, unused) 40 40 #define GP_INDT(bank) PORT_GP_32_REV(bank, _GP_INDT, unused) 41 41
+2
drivers/remoteproc/remoteproc_core.c
··· 1478 1478 1479 1479 static void __exit remoteproc_exit(void) 1480 1480 { 1481 + ida_destroy(&rproc_dev_index); 1482 + 1481 1483 rproc_exit_debugfs(); 1482 1484 } 1483 1485 module_exit(remoteproc_exit);
+1 -1
drivers/remoteproc/remoteproc_debugfs.c
··· 156 156 char buf[10]; 157 157 int ret; 158 158 159 - if (count > sizeof(buf)) 159 + if (count < 1 || count > sizeof(buf)) 160 160 return count; 161 161 162 162 ret = copy_from_user(buf, user_buf, count);
+8 -36
drivers/rtc/rtc-ds1307.c
··· 15 15 #include <linux/i2c.h> 16 16 #include <linux/init.h> 17 17 #include <linux/module.h> 18 - #include <linux/of_device.h> 19 - #include <linux/of_irq.h> 20 - #include <linux/pm_wakeirq.h> 21 18 #include <linux/rtc/ds1307.h> 22 19 #include <linux/rtc.h> 23 20 #include <linux/slab.h> ··· 114 117 #define HAS_ALARM 1 /* bit 1 == irq claimed */ 115 118 struct i2c_client *client; 116 119 struct rtc_device *rtc; 117 - int wakeirq; 118 120 s32 (*read_block_data)(const struct i2c_client *client, u8 command, 119 121 u8 length, u8 *values); 120 122 s32 (*write_block_data)(const struct i2c_client *client, u8 command, ··· 1134 1138 bin2bcd(tmp)); 1135 1139 } 1136 1140 1137 - device_set_wakeup_capable(&client->dev, want_irq); 1141 + if (want_irq) { 1142 + device_set_wakeup_capable(&client->dev, true); 1143 + set_bit(HAS_ALARM, &ds1307->flags); 1144 + } 1138 1145 ds1307->rtc = devm_rtc_device_register(&client->dev, client->name, 1139 1146 rtc_ops, THIS_MODULE); 1140 1147 if (IS_ERR(ds1307->rtc)) { ··· 1145 1146 } 1146 1147 1147 1148 if (want_irq) { 1148 - struct device_node *node = client->dev.of_node; 1149 - 1150 1149 err = devm_request_threaded_irq(&client->dev, 1151 1150 client->irq, NULL, irq_handler, 1152 1151 IRQF_SHARED | IRQF_ONESHOT, 1153 1152 ds1307->rtc->name, client); 1154 1153 if (err) { 1155 1154 client->irq = 0; 1155 + device_set_wakeup_capable(&client->dev, false); 1156 + clear_bit(HAS_ALARM, &ds1307->flags); 1156 1157 dev_err(&client->dev, "unable to request IRQ!\n"); 1157 - goto no_irq; 1158 - } 1159 - 1160 - set_bit(HAS_ALARM, &ds1307->flags); 1161 - dev_dbg(&client->dev, "got IRQ %d\n", client->irq); 1162 - 1163 - /* Currently supported by OF code only! */ 1164 - if (!node) 1165 - goto no_irq; 1166 - 1167 - err = of_irq_get(node, 1); 1168 - if (err <= 0) { 1169 - if (err == -EPROBE_DEFER) 1170 - goto exit; 1171 - goto no_irq; 1172 - } 1173 - ds1307->wakeirq = err; 1174 - 1175 - err = dev_pm_set_dedicated_wake_irq(&client->dev, 1176 - ds1307->wakeirq); 1177 - if (err) { 1178 - dev_err(&client->dev, "unable to setup wakeIRQ %d!\n", 1179 - err); 1180 - goto exit; 1181 - } 1158 + } else 1159 + dev_dbg(&client->dev, "got IRQ %d\n", client->irq); 1182 1160 } 1183 1161 1184 - no_irq: 1185 1162 if (chip->nvram_size) { 1186 1163 1187 1164 ds1307->nvram = devm_kzalloc(&client->dev, ··· 1200 1225 static int ds1307_remove(struct i2c_client *client) 1201 1226 { 1202 1227 struct ds1307 *ds1307 = i2c_get_clientdata(client); 1203 - 1204 - if (ds1307->wakeirq) 1205 - dev_pm_clear_wake_irq(&client->dev); 1206 1228 1207 1229 if (test_and_clear_bit(HAS_NVRAM, &ds1307->flags)) 1208 1230 sysfs_remove_bin_file(&client->dev.kobj, ds1307->nvram);
+1 -1
drivers/scsi/qla2xxx/tcm_qla2xxx.c
··· 902 902 return sprintf(page, "%d\n", tpg->tpg_attrib.fabric_prot_type); 903 903 } 904 904 905 - CONFIGFS_ATTR_WO(tcm_qla2xxx_tpg_, enable); 905 + CONFIGFS_ATTR(tcm_qla2xxx_tpg_, enable); 906 906 CONFIGFS_ATTR_RO(tcm_qla2xxx_tpg_, dynamic_sessions); 907 907 CONFIGFS_ATTR(tcm_qla2xxx_tpg_, fabric_prot_type); 908 908
+1 -1
drivers/sh/pm_runtime.c
··· 34 34 35 35 static int __init sh_pm_runtime_init(void) 36 36 { 37 - if (IS_ENABLED(CONFIG_ARCH_SHMOBILE_MULTI)) { 37 + if (IS_ENABLED(CONFIG_ARCH_SHMOBILE)) { 38 38 if (!of_find_compatible_node(NULL, NULL, 39 39 "renesas,cpg-mstp-clocks")) 40 40 return 0;
+1
drivers/soc/mediatek/Kconfig
··· 23 23 config MTK_SCPSYS 24 24 bool "MediaTek SCPSYS Support" 25 25 depends on ARCH_MEDIATEK || COMPILE_TEST 26 + default ARM64 && ARCH_MEDIATEK 26 27 select REGMAP 27 28 select MTK_INFRACFG 28 29 select PM_GENERIC_DOMAINS if PM
+4 -4
drivers/soc/ti/knav_qmss_queue.c
··· 1179 1179 1180 1180 block++; 1181 1181 if (!block->size) 1182 - return 0; 1182 + continue; 1183 1183 1184 1184 dev_dbg(kdev->dev, "linkram1: phys:%x, virt:%p, size:%x\n", 1185 1185 block->phys, block->virt, block->size); ··· 1519 1519 1520 1520 for (i = 0; i < ARRAY_SIZE(knav_acc_firmwares); i++) { 1521 1521 if (knav_acc_firmwares[i]) { 1522 - ret = request_firmware(&fw, 1523 - knav_acc_firmwares[i], 1524 - kdev->dev); 1522 + ret = request_firmware_direct(&fw, 1523 + knav_acc_firmwares[i], 1524 + kdev->dev); 1525 1525 if (!ret) { 1526 1526 found = true; 1527 1527 break;
+2 -2
drivers/spi/spi-bcm63xx.c
··· 562 562 goto out_clk_disable; 563 563 } 564 564 565 - dev_info(dev, "at 0x%08x (irq %d, FIFOs size %d)\n", 566 - r->start, irq, bs->fifo_size); 565 + dev_info(dev, "at %pr (irq %d, FIFOs size %d)\n", 566 + r, irq, bs->fifo_size); 567 567 568 568 return 0; 569 569
+18 -8
drivers/spi/spi-mt65xx.c
··· 410 410 if (!spi->controller_data) 411 411 spi->controller_data = (void *)&mtk_default_chip_info; 412 412 413 - if (mdata->dev_comp->need_pad_sel) 413 + if (mdata->dev_comp->need_pad_sel && gpio_is_valid(spi->cs_gpio)) 414 414 gpio_direction_output(spi->cs_gpio, !(spi->mode & SPI_CS_HIGH)); 415 415 416 416 return 0; ··· 632 632 goto err_put_master; 633 633 } 634 634 635 - for (i = 0; i < master->num_chipselect; i++) { 636 - ret = devm_gpio_request(&pdev->dev, master->cs_gpios[i], 637 - dev_name(&pdev->dev)); 638 - if (ret) { 639 - dev_err(&pdev->dev, 640 - "can't get CS GPIO %i\n", i); 641 - goto err_put_master; 635 + if (!master->cs_gpios && master->num_chipselect > 1) { 636 + dev_err(&pdev->dev, 637 + "cs_gpios not specified and num_chipselect > 1\n"); 638 + ret = -EINVAL; 639 + goto err_put_master; 640 + } 641 + 642 + if (master->cs_gpios) { 643 + for (i = 0; i < master->num_chipselect; i++) { 644 + ret = devm_gpio_request(&pdev->dev, 645 + master->cs_gpios[i], 646 + dev_name(&pdev->dev)); 647 + if (ret) { 648 + dev_err(&pdev->dev, 649 + "can't get CS GPIO %i\n", i); 650 + goto err_put_master; 651 + } 642 652 } 643 653 } 644 654 }
+22 -6
drivers/spi/spi-pl022.c
··· 1171 1171 static int pl022_dma_autoprobe(struct pl022 *pl022) 1172 1172 { 1173 1173 struct device *dev = &pl022->adev->dev; 1174 + struct dma_chan *chan; 1175 + int err; 1174 1176 1175 1177 /* automatically configure DMA channels from platform, normally using DT */ 1176 - pl022->dma_rx_channel = dma_request_slave_channel(dev, "rx"); 1177 - if (!pl022->dma_rx_channel) 1178 + chan = dma_request_slave_channel_reason(dev, "rx"); 1179 + if (IS_ERR(chan)) { 1180 + err = PTR_ERR(chan); 1178 1181 goto err_no_rxchan; 1182 + } 1179 1183 1180 - pl022->dma_tx_channel = dma_request_slave_channel(dev, "tx"); 1181 - if (!pl022->dma_tx_channel) 1184 + pl022->dma_rx_channel = chan; 1185 + 1186 + chan = dma_request_slave_channel_reason(dev, "tx"); 1187 + if (IS_ERR(chan)) { 1188 + err = PTR_ERR(chan); 1182 1189 goto err_no_txchan; 1190 + } 1191 + 1192 + pl022->dma_tx_channel = chan; 1183 1193 1184 1194 pl022->dummypage = kmalloc(PAGE_SIZE, GFP_KERNEL); 1185 - if (!pl022->dummypage) 1195 + if (!pl022->dummypage) { 1196 + err = -ENOMEM; 1186 1197 goto err_no_dummypage; 1198 + } 1187 1199 1188 1200 return 0; 1189 1201 ··· 1206 1194 dma_release_channel(pl022->dma_rx_channel); 1207 1195 pl022->dma_rx_channel = NULL; 1208 1196 err_no_rxchan: 1209 - return -ENODEV; 1197 + return err; 1210 1198 } 1211 1199 1212 1200 static void terminate_dma(struct pl022 *pl022) ··· 2248 2236 2249 2237 /* Get DMA channels, try autoconfiguration first */ 2250 2238 status = pl022_dma_autoprobe(pl022); 2239 + if (status == -EPROBE_DEFER) { 2240 + dev_dbg(dev, "deferring probe to get DMA channel\n"); 2241 + goto err_no_irq; 2242 + } 2251 2243 2252 2244 /* If that failed, use channels from platform_info */ 2253 2245 if (status == 0)
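The pl022 change switches to dma_request_slave_channel_reason(), which reports failure through an ERR_PTR rather than a bare NULL, so -EPROBE_DEFER can be told apart from a genuine "no channel" and propagated to retry the probe once the DMA controller appears. A userspace approximation of the ERR_PTR convention behind this; the helpers and the EPROBE_DEFER value are simplified stand-ins for the kernel's:

#include <stdint.h>
#include <stdio.h>

#define MAX_ERRNO	4095
#define EPROBE_DEFER	517	/* kernel-internal code, shown for the sketch */

static void *ERR_PTR(long error) { return (void *)(intptr_t)error; }
static long PTR_ERR(const void *ptr) { return (long)(intptr_t)ptr; }
static int IS_ERR(const void *ptr)
{
	return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}

/* pretend the DMA controller has not probed yet */
static void *request_channel(const char *name)
{
	(void)name;
	return ERR_PTR(-EPROBE_DEFER);
}

int main(void)
{
	void *chan = request_channel("rx");

	if (IS_ERR(chan)) {
		/* a NULL return could never carry this distinction */
		printf("request failed: %ld (defer probe)\n", PTR_ERR(chan));
		return 1;
	}
	return 0;
}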
+2
drivers/spi/spi.c
··· 376 376 377 377 /** 378 378 * __spi_register_driver - register a SPI driver 379 + * @owner: owner module of the driver to register 379 380 * @sdrv: the driver to register 380 381 * Context: can sleep 381 382 * ··· 2131 2130 * Set transfer tx_nbits and rx_nbits as single transfer default 2132 2131 * (SPI_NBITS_SINGLE) if it is not set for this transfer. 2133 2132 */ 2133 + message->frame_length = 0; 2134 2134 list_for_each_entry(xfer, &message->transfers, transfer_list) { 2135 2135 message->frame_length += xfer->len; 2136 2136 if (!xfer->bits_per_word)
+2 -1
drivers/staging/iio/Kconfig
··· 18 18 source "drivers/staging/iio/trigger/Kconfig" 19 19 20 20 config IIO_DUMMY_EVGEN 21 - tristate 21 + tristate 22 + select IRQ_WORK 22 23 23 24 config IIO_SIMPLE_DUMMY 24 25 tristate "An example driver with no hardware requirements"
+2 -2
drivers/staging/iio/adc/lpc32xx_adc.c
··· 76 76 77 77 if (mask == IIO_CHAN_INFO_RAW) { 78 78 mutex_lock(&indio_dev->mlock); 79 - clk_enable(info->clk); 79 + clk_prepare_enable(info->clk); 80 80 /* Measurement setup */ 81 81 __raw_writel(AD_INTERNAL | (chan->address) | AD_REFp | AD_REFm, 82 82 LPC32XX_ADC_SELECT(info->adc_base)); ··· 84 84 __raw_writel(AD_PDN_CTRL | AD_STROBE, 85 85 LPC32XX_ADC_CTRL(info->adc_base)); 86 86 wait_for_completion(&info->completion); /* set by ISR */ 87 - clk_disable(info->clk); 87 + clk_disable_unprepare(info->clk); 88 88 *val = info->value; 89 89 mutex_unlock(&indio_dev->mlock); 90 90
+25 -23
drivers/staging/wilc1000/coreconfigurator.c
··· 13 13 #include "wilc_wlan.h" 14 14 #include <linux/errno.h> 15 15 #include <linux/slab.h> 16 - #include <linux/etherdevice.h> 17 16 #define TAG_PARAM_OFFSET (MAC_HDR_LEN + TIME_STAMP_LEN + \ 18 17 BEACON_INTERVAL_LEN + CAP_INFO_LEN) 19 - #define ADDR1 4 20 - #define ADDR2 10 21 - #define ADDR3 16 22 18 23 19 /* Basic Frame Type Codes (2-bit) */ 24 20 enum basic_frame_type { ··· 171 175 return ((header[1] & 0x02) >> 1); 172 176 } 173 177 178 + /* This function extracts the MAC Address in 'address1' field of the MAC */ 179 + /* header and updates the MAC Address in the allocated 'addr' variable. */ 180 + static inline void get_address1(u8 *pu8msa, u8 *addr) 181 + { 182 + memcpy(addr, pu8msa + 4, 6); 183 + } 184 + 185 + /* This function extracts the MAC Address in 'address2' field of the MAC */ 186 + /* header and updates the MAC Address in the allocated 'addr' variable. */ 187 + static inline void get_address2(u8 *pu8msa, u8 *addr) 188 + { 189 + memcpy(addr, pu8msa + 10, 6); 190 + } 191 + 192 + /* This function extracts the MAC Address in 'address3' field of the MAC */ 193 + /* header and updates the MAC Address in the allocated 'addr' variable. */ 194 + static inline void get_address3(u8 *pu8msa, u8 *addr) 195 + { 196 + memcpy(addr, pu8msa + 16, 6); 197 + } 198 + 174 199 /* This function extracts the BSSID from the incoming WLAN packet based on */ 175 - /* the 'from ds' bit, and updates the MAC Address in the allocated 'data' */ 200 + /* the 'from ds' bit, and updates the MAC Address in the allocated 'addr' */ 176 201 /* variable. */ 177 202 static inline void get_BSSID(u8 *data, u8 *bssid) 178 203 { 179 204 if (get_from_ds(data) == 1) 180 - /* 181 - * Extract the MAC Address in 'address2' field of the MAC 182 - * header and update the MAC Address in the allocated 'data' 183 - * variable. 184 - */ 185 - ether_addr_copy(data, bssid + ADDR2); 205 + get_address2(data, bssid); 186 206 else if (get_to_ds(data) == 1) 187 - /* 188 - * Extract the MAC Address in 'address1' field of the MAC 189 - * header and update the MAC Address in the allocated 'data' 190 - * variable. 191 - */ 192 - ether_addr_copy(data, bssid + ADDR1); 207 + get_address1(data, bssid); 193 208 else 194 - /* 195 - * Extract the MAC Address in 'address3' field of the MAC 196 - * header and update the MAC Address in the allocated 'data' 197 - * variable. 198 - */ 199 - ether_addr_copy(data, bssid + ADDR3); 209 + get_address3(data, bssid); 200 210 } 201 211 202 212 /* This function extracts the SSID from a beacon/probe response frame */
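The revert above restores the helpers that copy Address1/2/3 out of the 802.11 MAC header at offsets 4, 10 and 16; the ether_addr_copy() version being removed had destination and source swapped, so it wrote over the frame instead of extracting the BSSID from it. The selection logic on its own, assuming the usual frame-control layout (ToDS in bit 0, FromDS in bit 1 of the second byte):

#include <stdio.h>
#include <string.h>

/* 802.11 MAC header: Address1 at offset 4, Address2 at 10, Address3 at 16 */
static void get_bssid(const unsigned char *hdr, unsigned char *bssid)
{
	int to_ds = hdr[1] & 0x01;
	int from_ds = (hdr[1] & 0x02) >> 1;

	if (from_ds)
		memcpy(bssid, hdr + 10, 6);	/* Address2 */
	else if (to_ds)
		memcpy(bssid, hdr + 4, 6);	/* Address1 */
	else
		memcpy(bssid, hdr + 16, 6);	/* Address3 */
}

int main(void)
{
	unsigned char hdr[24] = { 0x80, 0x00 };	/* beacon: no DS bits set */
	unsigned char bssid[6];

	memcpy(hdr + 16, "\x02\x00\x00\x00\x00\x01", 6);
	get_bssid(hdr, bssid);
	printf("bssid[0]=%02x bssid[5]=%02x\n", bssid[0], bssid[5]);
	return 0;
}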
+12 -1
drivers/target/iscsi/iscsi_target.c
··· 4074 4074 return iscsit_add_reject(conn, ISCSI_REASON_BOOKMARK_NO_RESOURCES, buf); 4075 4075 } 4076 4076 4077 + static bool iscsi_target_check_conn_state(struct iscsi_conn *conn) 4078 + { 4079 + bool ret; 4080 + 4081 + spin_lock_bh(&conn->state_lock); 4082 + ret = (conn->conn_state != TARG_CONN_STATE_LOGGED_IN); 4083 + spin_unlock_bh(&conn->state_lock); 4084 + 4085 + return ret; 4086 + } 4087 + 4077 4088 int iscsi_target_rx_thread(void *arg) 4078 4089 { 4079 4090 int ret, rc; ··· 4102 4091 * incoming iscsi/tcp socket I/O, and/or failing the connection. 4103 4092 */ 4104 4093 rc = wait_for_completion_interruptible(&conn->rx_login_comp); 4105 - if (rc < 0) 4094 + if (rc < 0 || iscsi_target_check_conn_state(conn)) 4106 4095 return 0; 4107 4096 4108 4097 if (conn->conn_transport->transport_type == ISCSI_INFINIBAND) {
+1
drivers/target/iscsi/iscsi_target_nego.c
··· 388 388 if (login->login_complete) { 389 389 if (conn->rx_thread && conn->rx_thread_active) { 390 390 send_sig(SIGINT, conn->rx_thread, 1); 391 + complete(&conn->rx_login_comp); 391 392 kthread_stop(conn->rx_thread); 392 393 } 393 394 if (conn->tx_thread && conn->tx_thread_active) {
+5 -5
drivers/target/iscsi/iscsi_target_parameters.c
··· 208 208 if (!pl) { 209 209 pr_err("Unable to allocate memory for" 210 210 " struct iscsi_param_list.\n"); 211 - return -1 ; 211 + return -ENOMEM; 212 212 } 213 213 INIT_LIST_HEAD(&pl->param_list); 214 214 INIT_LIST_HEAD(&pl->extra_response_list); ··· 578 578 param_list = kzalloc(sizeof(struct iscsi_param_list), GFP_KERNEL); 579 579 if (!param_list) { 580 580 pr_err("Unable to allocate memory for struct iscsi_param_list.\n"); 581 - return -1; 581 + return -ENOMEM; 582 582 } 583 583 INIT_LIST_HEAD(&param_list->param_list); 584 584 INIT_LIST_HEAD(&param_list->extra_response_list); ··· 629 629 630 630 err_out: 631 631 iscsi_release_param_list(param_list); 632 - return -1; 632 + return -ENOMEM; 633 633 } 634 634 635 635 static void iscsi_release_extra_responses(struct iscsi_param_list *param_list) ··· 729 729 if (!extra_response) { 730 730 pr_err("Unable to allocate memory for" 731 731 " struct iscsi_extra_response.\n"); 732 - return -1; 732 + return -ENOMEM; 733 733 } 734 734 INIT_LIST_HEAD(&extra_response->er_list); 735 735 ··· 1370 1370 tmpbuf = kzalloc(length + 1, GFP_KERNEL); 1371 1371 if (!tmpbuf) { 1372 1372 pr_err("Unable to allocate %u + 1 bytes for tmpbuf.\n", length); 1373 - return -1; 1373 + return -ENOMEM; 1374 1374 } 1375 1375 1376 1376 memcpy(tmpbuf, textbuf, length);
+11 -6
drivers/target/target_core_sbc.c
··· 371 371 return 0; 372 372 } 373 373 374 - static sense_reason_t xdreadwrite_callback(struct se_cmd *cmd, bool success) 374 + static sense_reason_t xdreadwrite_callback(struct se_cmd *cmd, bool success, 375 + int *post_ret) 375 376 { 376 377 unsigned char *buf, *addr; 377 378 struct scatterlist *sg; ··· 438 437 cmd->data_direction); 439 438 } 440 439 441 - static sense_reason_t compare_and_write_post(struct se_cmd *cmd, bool success) 440 + static sense_reason_t compare_and_write_post(struct se_cmd *cmd, bool success, 441 + int *post_ret) 442 442 { 443 443 struct se_device *dev = cmd->se_dev; 444 444 ··· 449 447 * sent to the backend driver. 450 448 */ 451 449 spin_lock_irq(&cmd->t_state_lock); 452 - if ((cmd->transport_state & CMD_T_SENT) && !cmd->scsi_status) 450 + if ((cmd->transport_state & CMD_T_SENT) && !cmd->scsi_status) { 453 451 cmd->se_cmd_flags |= SCF_COMPARE_AND_WRITE_POST; 452 + *post_ret = 1; 453 + } 454 454 spin_unlock_irq(&cmd->t_state_lock); 455 455 456 456 /* ··· 464 460 return TCM_NO_SENSE; 465 461 } 466 462 467 - static sense_reason_t compare_and_write_callback(struct se_cmd *cmd, bool success) 463 + static sense_reason_t compare_and_write_callback(struct se_cmd *cmd, bool success, 464 + int *post_ret) 468 465 { 469 466 struct se_device *dev = cmd->se_dev; 470 467 struct scatterlist *write_sg = NULL, *sg; ··· 561 556 562 557 if (block_size < PAGE_SIZE) { 563 558 sg_set_page(&write_sg[i], m.page, block_size, 564 - block_size); 559 + m.piter.sg->offset + block_size); 565 560 } else { 566 561 sg_miter_next(&m); 567 562 sg_set_page(&write_sg[i], m.page, block_size, 568 - 0); 563 + m.piter.sg->offset); 569 564 } 570 565 len -= block_size; 571 566 i++;
+1 -1
drivers/target/target_core_stat.c
··· 246 246 char str[sizeof(dev->t10_wwn.model)+1]; 247 247 248 248 /* scsiLuProductId */ 249 - for (i = 0; i < sizeof(dev->t10_wwn.vendor); i++) 249 + for (i = 0; i < sizeof(dev->t10_wwn.model); i++) 250 250 str[i] = ISPRINT(dev->t10_wwn.model[i]) ? 251 251 dev->t10_wwn.model[i] : ' '; 252 252 str[i] = '\0';
+6 -1
drivers/target/target_core_tmr.c
··· 130 130 if (tmr->ref_task_tag != ref_tag) 131 131 continue; 132 132 133 + if (!kref_get_unless_zero(&se_cmd->cmd_kref)) 134 + continue; 135 + 133 136 printk("ABORT_TASK: Found referenced %s task_tag: %llu\n", 134 137 se_cmd->se_tfo->get_fabric_name(), ref_tag); 135 138 ··· 142 139 " skipping\n", ref_tag); 143 140 spin_unlock(&se_cmd->t_state_lock); 144 141 spin_unlock_irqrestore(&se_sess->sess_cmd_lock, flags); 142 + 143 + target_put_sess_cmd(se_cmd); 144 + 145 145 goto out; 146 146 } 147 147 se_cmd->transport_state |= CMD_T_ABORTED; 148 148 spin_unlock(&se_cmd->t_state_lock); 149 149 150 150 list_del_init(&se_cmd->se_cmd_list); 151 - kref_get(&se_cmd->cmd_kref); 152 151 spin_unlock_irqrestore(&se_sess->sess_cmd_lock, flags); 153 152 154 153 cancel_work_sync(&se_cmd->work);
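The ABORT_TASK fix takes its reference with kref_get_unless_zero() before touching the descriptor, so a command whose refcount has already dropped to zero is skipped instead of being resurrected. The primitive is a compare-and-swap loop that refuses to increment from zero; a standalone C11 sketch of the same semantics:

#include <stdatomic.h>
#include <stdio.h>

/* take a reference only if the count is still non-zero; returns 1 on
 * success, 0 if the object is already on its way to being freed */
static int ref_get_unless_zero(atomic_int *ref)
{
	int old = atomic_load(ref);

	while (old != 0) {
		if (atomic_compare_exchange_weak(ref, &old, old + 1))
			return 1;
		/* 'old' was reloaded by the failed CAS; just retry */
	}
	return 0;
}

int main(void)
{
	atomic_int live = 1, dying = 0;

	printf("live: %d, dying: %d\n",
	       ref_get_unless_zero(&live), ref_get_unless_zero(&dying));
	return 0;
}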
+14 -12
drivers/target/target_core_transport.c
··· 1658 1658 void transport_generic_request_failure(struct se_cmd *cmd, 1659 1659 sense_reason_t sense_reason) 1660 1660 { 1661 - int ret = 0; 1661 + int ret = 0, post_ret = 0; 1662 1662 1663 1663 pr_debug("-----[ Storage Engine Exception for cmd: %p ITT: 0x%08llx" 1664 1664 " CDB: 0x%02x\n", cmd, cmd->tag, cmd->t_task_cdb[0]); ··· 1680 1680 */ 1681 1681 if ((cmd->se_cmd_flags & SCF_COMPARE_AND_WRITE) && 1682 1682 cmd->transport_complete_callback) 1683 - cmd->transport_complete_callback(cmd, false); 1683 + cmd->transport_complete_callback(cmd, false, &post_ret); 1684 1684 1685 1685 switch (sense_reason) { 1686 1686 case TCM_NON_EXISTENT_LUN: ··· 2068 2068 */ 2069 2069 if (cmd->transport_complete_callback) { 2070 2070 sense_reason_t rc; 2071 + bool caw = (cmd->se_cmd_flags & SCF_COMPARE_AND_WRITE); 2072 + bool zero_dl = !(cmd->data_length); 2073 + int post_ret = 0; 2071 2074 2072 - rc = cmd->transport_complete_callback(cmd, true); 2073 - if (!rc && !(cmd->se_cmd_flags & SCF_COMPARE_AND_WRITE_POST)) { 2074 - if ((cmd->se_cmd_flags & SCF_COMPARE_AND_WRITE) && 2075 - !cmd->data_length) 2075 + rc = cmd->transport_complete_callback(cmd, true, &post_ret); 2076 + if (!rc && !post_ret) { 2077 + if (caw && zero_dl) 2076 2078 goto queue_rsp; 2077 2079 2078 2080 return; ··· 2509 2507 EXPORT_SYMBOL(target_get_sess_cmd); 2510 2508 2511 2509 static void target_release_cmd_kref(struct kref *kref) 2512 - __releases(&se_cmd->se_sess->sess_cmd_lock) 2513 2510 { 2514 2511 struct se_cmd *se_cmd = container_of(kref, struct se_cmd, cmd_kref); 2515 2512 struct se_session *se_sess = se_cmd->se_sess; 2513 + unsigned long flags; 2516 2514 2515 + spin_lock_irqsave(&se_sess->sess_cmd_lock, flags); 2517 2516 if (list_empty(&se_cmd->se_cmd_list)) { 2518 - spin_unlock(&se_sess->sess_cmd_lock); 2517 + spin_unlock_irqrestore(&se_sess->sess_cmd_lock, flags); 2519 2518 se_cmd->se_tfo->release_cmd(se_cmd); 2520 2519 return; 2521 2520 } 2522 2521 if (se_sess->sess_tearing_down && se_cmd->cmd_wait_set) { 2523 - spin_unlock(&se_sess->sess_cmd_lock); 2522 + spin_unlock_irqrestore(&se_sess->sess_cmd_lock, flags); 2524 2523 complete(&se_cmd->cmd_wait_comp); 2525 2524 return; 2526 2525 } 2527 2526 list_del(&se_cmd->se_cmd_list); 2528 - spin_unlock(&se_sess->sess_cmd_lock); 2527 + spin_unlock_irqrestore(&se_sess->sess_cmd_lock, flags); 2529 2528 2530 2529 se_cmd->se_tfo->release_cmd(se_cmd); 2531 2530 } ··· 2542 2539 se_cmd->se_tfo->release_cmd(se_cmd); 2543 2540 return 1; 2544 2541 } 2545 - return kref_put_spinlock_irqsave(&se_cmd->cmd_kref, target_release_cmd_kref, 2546 - &se_sess->sess_cmd_lock); 2542 + return kref_put(&se_cmd->cmd_kref, target_release_cmd_kref); 2547 2543 } 2548 2544 EXPORT_SYMBOL(target_put_sess_cmd); 2549 2545
+1 -3
drivers/target/target_core_user.c
··· 638 638 if (test_bit(TCMU_CMD_BIT_EXPIRED, &cmd->flags)) 639 639 return 0; 640 640 641 - if (!time_after(cmd->deadline, jiffies)) 641 + if (!time_after(jiffies, cmd->deadline)) 642 642 return 0; 643 643 644 644 set_bit(TCMU_CMD_BIT_EXPIRED, &cmd->flags); ··· 1101 1101 1102 1102 static const struct target_backend_ops tcmu_ops = { 1103 1103 .name = "user", 1104 - .inquiry_prod = "USER", 1105 - .inquiry_rev = TCMU_VERSION, 1106 1104 .owner = THIS_MODULE, 1107 1105 .transport_flags = TRANSPORT_FLAG_PASSTHROUGH, 1108 1106 .attach_hba = tcmu_attach_hba,
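The tcmu fix reverses a backwards time_after(): the expiry test must ask whether jiffies has moved past the deadline, not the other way around. The helper is a wraparound-safe signed subtraction; a small demo whose macro body mirrors the kernel's jiffies helpers:

#include <stdio.h>

typedef unsigned long jiffies_t;

/* wrap-safe "a is after b": true iff (long)(b - a) is negative */
#define time_after(a, b) ((long)((b) - (a)) < 0)

int main(void)
{
	jiffies_t deadline = (jiffies_t)-10;	/* just before wraparound */
	jiffies_t now = 5;			/* counter already wrapped */

	printf("expired: %d\n", time_after(now, deadline));	/* 1 */
	printf("reversed args: %d\n", time_after(deadline, now)); /* 0 */
	return 0;
}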
+1 -1
drivers/thermal/Kconfig
··· 382 382 383 383 config QCOM_SPMI_TEMP_ALARM 384 384 tristate "Qualcomm SPMI PMIC Temperature Alarm" 385 - depends on OF && (SPMI || COMPILE_TEST) && IIO 385 + depends on OF && SPMI && IIO 386 386 select REGMAP_SPMI 387 387 help 388 388 This enables a thermal sysfs driver for Qualcomm plug-and-play (QPNP)
+41 -15
drivers/thermal/imx_thermal.c
··· 55 55 #define TEMPSENSE2_PANIC_VALUE_SHIFT 16 56 56 #define TEMPSENSE2_PANIC_VALUE_MASK 0xfff0000 57 57 58 + #define OCOTP_MEM0 0x0480 58 59 #define OCOTP_ANA1 0x04e0 59 60 60 61 /* The driver supports 1 passive trip point and 1 critical trip point */ ··· 64 63 IMX_TRIP_CRITICAL, 65 64 IMX_TRIP_NUM, 66 65 }; 67 - 68 - /* 69 - * It defines the temperature in millicelsius for passive trip point 70 - * that will trigger cooling action when crossed. 71 - */ 72 - #define IMX_TEMP_PASSIVE 85000 73 66 74 67 #define IMX_POLLING_DELAY 2000 /* millisecond */ 75 68 #define IMX_PASSIVE_DELAY 1000 ··· 95 100 u32 c1, c2; /* See formula in imx_get_sensor_data() */ 96 101 int temp_passive; 97 102 int temp_critical; 103 + int temp_max; 98 104 int alarm_temp; 99 105 int last_temp; 100 106 bool irq_enabled; 101 107 int irq; 102 108 struct clk *thermal_clk; 103 109 const struct thermal_soc_data *socdata; 110 + const char *temp_grade; 104 111 }; 105 112 106 113 static void imx_set_panic_temp(struct imx_thermal_data *data, ··· 282 285 { 283 286 struct imx_thermal_data *data = tz->devdata; 284 287 288 + /* do not allow changing critical threshold */ 285 289 if (trip == IMX_TRIP_CRITICAL) 286 290 return -EPERM; 287 291 288 - if (temp < 0 || temp > IMX_TEMP_PASSIVE) 292 + /* do not allow passive to be set higher than critical */ 293 + if (temp < 0 || temp > data->temp_critical) 289 294 return -EINVAL; 290 295 291 296 data->temp_passive = temp; ··· 403 404 data->c1 = temp64; 404 405 data->c2 = n1 * data->c1 + 1000 * t1; 405 406 406 - /* 407 - * Set the default passive cooling trip point, 408 - * can be changed from userspace. 409 - */ 410 - data->temp_passive = IMX_TEMP_PASSIVE; 407 + /* use OTP for thermal grade */ 408 + ret = regmap_read(map, OCOTP_MEM0, &val); 409 + if (ret) { 410 + dev_err(&pdev->dev, "failed to read temp grade: %d\n", ret); 411 + return ret; 412 + } 413 + 414 + /* The maximum die temp is specified by the Temperature Grade */ 415 + switch ((val >> 6) & 0x3) { 416 + case 0: /* Commercial (0 to 95C) */ 417 + data->temp_grade = "Commercial"; 418 + data->temp_max = 95000; 419 + break; 420 + case 1: /* Extended Commercial (-20 to 105C) */ 421 + data->temp_grade = "Extended Commercial"; 422 + data->temp_max = 105000; 423 + break; 424 + case 2: /* Industrial (-40 to 105C) */ 425 + data->temp_grade = "Industrial"; 426 + data->temp_max = 105000; 427 + break; 428 + case 3: /* Automotive (-40 to 125C) */ 429 + data->temp_grade = "Automotive"; 430 + data->temp_max = 125000; 431 + break; 432 + } 411 433 412 434 /* 413 - * The maximum die temperature set to 20 C higher than 414 - * IMX_TEMP_PASSIVE. 435 + * Set the critical trip point at 5C under max 436 + * Set the passive trip point at 10C under max (can change via sysfs) 415 437 */ 416 - data->temp_critical = 1000 * 20 + data->temp_passive; 438 + data->temp_critical = data->temp_max - (1000 * 5); 439 + data->temp_passive = data->temp_max - (1000 * 10); 417 440 418 441 return 0; 419 442 } ··· 571 550 cpufreq_cooling_unregister(data->cdev); 572 551 return ret; 573 552 } 553 + 554 + dev_info(&pdev->dev, "%s CPU temperature grade - max:%dC" 555 + " critical:%dC passive:%dC\n", data->temp_grade, 556 + data->temp_max / 1000, data->temp_critical / 1000, 557 + data->temp_passive / 1000); 574 558 575 559 /* Enable measurements at ~ 10 Hz */ 576 560 regmap_write(map, TEMPSENSE1 + REG_CLR, TEMPSENSE1_MEASURE_FREQ);
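The hunk above reads the temperature grade from bits [7:6] of OTP fuse word OCOTP_MEM0 and derives both trip points from the graded maximum (critical 5C below max, passive 10C below, with passive still adjustable from userspace up to critical). The decode step in isolation, following the table in the hunk:

#include <stdint.h>
#include <stdio.h>

/* decode bits [7:6] of the OCOTP_MEM0 fuse word into a max die
 * temperature in millicelsius */
static int grade_to_max_mc(uint32_t ocotp_mem0)
{
	switch ((ocotp_mem0 >> 6) & 0x3) {
	case 0: return 95000;	/* Commercial (0 to 95C) */
	case 1: return 105000;	/* Extended Commercial (-20 to 105C) */
	case 2: return 105000;	/* Industrial (-40 to 105C) */
	default: return 125000;	/* Automotive (-40 to 125C) */
	}
}

int main(void)
{
	uint32_t fuse = 2 << 6;		/* an industrial-grade part */
	int max = grade_to_max_mc(fuse);

	printf("max %dmC, critical %dmC, passive %dmC\n",
	       max, max - 5000, max - 10000);
	return 0;
}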
+1 -1
drivers/thermal/of-thermal.c
··· 964 964 965 965 np = of_find_node_by_name(NULL, "thermal-zones"); 966 966 if (!np) { 967 - pr_err("unable to find thermal zones\n"); 967 + pr_debug("unable to find thermal zones\n"); 968 968 return; 969 969 } 970 970
+7 -17
drivers/thermal/power_allocator.c
··· 174 174 /** 175 175 * pid_controller() - PID controller 176 176 * @tz: thermal zone we are operating in 177 - * @current_temp: the current temperature in millicelsius 178 177 * @control_temp: the target temperature in millicelsius 179 178 * @max_allocatable_power: maximum allocatable power for this thermal zone 180 179 * ··· 190 191 * Return: The power budget for the next period. 191 192 */ 192 193 static u32 pid_controller(struct thermal_zone_device *tz, 193 - int current_temp, 194 194 int control_temp, 195 195 u32 max_allocatable_power) 196 196 { ··· 209 211 true); 210 212 } 211 213 212 - err = control_temp - current_temp; 214 + err = control_temp - tz->temperature; 213 215 err = int_to_frac(err); 214 216 215 217 /* Calculate the proportional term */ ··· 330 332 } 331 333 332 334 static int allocate_power(struct thermal_zone_device *tz, 333 - int current_temp, 334 335 int control_temp) 335 336 { 336 337 struct thermal_instance *instance; ··· 415 418 i++; 416 419 } 417 420 418 - power_range = pid_controller(tz, current_temp, control_temp, 419 - max_allocatable_power); 421 + power_range = pid_controller(tz, control_temp, max_allocatable_power); 420 422 421 423 divvy_up_power(weighted_req_power, max_power, num_actors, 422 424 total_weighted_req_power, power_range, granted_power, ··· 440 444 trace_thermal_power_allocator(tz, req_power, total_req_power, 441 445 granted_power, total_granted_power, 442 446 num_actors, power_range, 443 - max_allocatable_power, current_temp, 444 - control_temp - current_temp); 447 + max_allocatable_power, tz->temperature, 448 + control_temp - tz->temperature); 445 449 446 450 kfree(req_power); 447 451 unlock: ··· 608 612 static int power_allocator_throttle(struct thermal_zone_device *tz, int trip) 609 613 { 610 614 int ret; 611 - int switch_on_temp, control_temp, current_temp; 615 + int switch_on_temp, control_temp; 612 616 struct power_allocator_params *params = tz->governor_data; 613 617 614 618 /* ··· 618 622 if (trip != params->trip_max_desired_temperature) 619 623 return 0; 620 624 621 - ret = thermal_zone_get_temp(tz, &current_temp); 622 - if (ret) { 623 - dev_warn(&tz->device, "Failed to get temperature: %d\n", ret); 624 - return ret; 625 - } 626 - 627 625 ret = tz->ops->get_trip_temp(tz, params->trip_switch_on, 628 626 &switch_on_temp); 629 - if (!ret && (current_temp < switch_on_temp)) { 627 + if (!ret && (tz->temperature < switch_on_temp)) { 630 628 tz->passive = 0; 631 629 reset_pid_controller(params); 632 630 allow_maximum_power(tz); ··· 638 648 return ret; 639 649 } 640 650 641 - return allocate_power(tz, current_temp, control_temp); 651 + return allocate_power(tz, control_temp); 642 652 } 643 653 644 654 static struct thermal_governor thermal_gov_power_allocator = {
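Editorial note: the power_allocator hunk drops the per-call thermal_zone_get_temp() and reuses the cached tz->temperature when forming the PID error term. A compile-and-run sketch of that error computation in the governor's fixed-point style (the 10-bit fraction matches the governor's FRAC_BITS as far as I know; the gain value is made up):

#include <stdint.h>
#include <stdio.h>

#define FRAC_BITS 10
#define int_to_frac(x) ((int64_t)(x) << FRAC_BITS)
#define frac_to_int(x) ((x) >> FRAC_BITS)

int main(void)
{
	int control_temp = 85000;      /* target trip, millicelsius */
	int temperature  = 83000;      /* cached tz->temperature */
	int64_t k_po = int_to_frac(2); /* illustrative proportional gain */

	/* err = control_temp - tz->temperature, promoted to fixed point */
	int64_t err = int_to_frac(control_temp - temperature);
	int64_t p = (k_po * err) >> FRAC_BITS;  /* proportional power term */

	printf("error = %d mC, P term = %lld\n",
	       control_temp - temperature, (long long)frac_to_int(p));
	return 0;
}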
+21 -28
drivers/thermal/rcar_thermal.c
··· 361 361 /* 362 362 * platform functions 363 363 */ 364 + static int rcar_thermal_remove(struct platform_device *pdev) 365 + { 366 + struct rcar_thermal_common *common = platform_get_drvdata(pdev); 367 + struct device *dev = &pdev->dev; 368 + struct rcar_thermal_priv *priv; 369 + 370 + rcar_thermal_for_each_priv(priv, common) { 371 + if (rcar_has_irq_support(priv)) 372 + rcar_thermal_irq_disable(priv); 373 + thermal_zone_device_unregister(priv->zone); 374 + } 375 + 376 + pm_runtime_put(dev); 377 + pm_runtime_disable(dev); 378 + 379 + return 0; 380 + } 381 + 364 382 static int rcar_thermal_probe(struct platform_device *pdev) 365 383 { 366 384 struct rcar_thermal_common *common; ··· 394 376 common = devm_kzalloc(dev, sizeof(*common), GFP_KERNEL); 395 377 if (!common) 396 378 return -ENOMEM; 379 + 380 + platform_set_drvdata(pdev, common); 397 381 398 382 INIT_LIST_HEAD(&common->head); 399 383 spin_lock_init(&common->lock); ··· 474 454 rcar_thermal_common_write(common, ENR, enr_bits); 475 455 } 476 456 477 - platform_set_drvdata(pdev, common); 478 - 479 457 dev_info(dev, "%d sensor probed\n", i); 480 458 481 459 return 0; 482 460 483 461 error_unregister: 484 - rcar_thermal_for_each_priv(priv, common) { 485 - if (rcar_has_irq_support(priv)) 486 - rcar_thermal_irq_disable(priv); 487 - thermal_zone_device_unregister(priv->zone); 488 - } 489 - 490 - pm_runtime_put(dev); 491 - pm_runtime_disable(dev); 462 + rcar_thermal_remove(pdev); 492 463 493 464 return ret; 494 - } 495 - 496 - static int rcar_thermal_remove(struct platform_device *pdev) 497 - { 498 - struct rcar_thermal_common *common = platform_get_drvdata(pdev); 499 - struct device *dev = &pdev->dev; 500 - struct rcar_thermal_priv *priv; 501 - 502 - rcar_thermal_for_each_priv(priv, common) { 503 - if (rcar_has_irq_support(priv)) 504 - rcar_thermal_irq_disable(priv); 505 - thermal_zone_device_unregister(priv->zone); 506 - } 507 - 508 - pm_runtime_put(dev); 509 - pm_runtime_disable(dev); 510 - 511 - return 0; 512 465 } 513 466 514 467 static const struct of_device_id rcar_thermal_dt_ids[] = {
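Editorial note: the rcar hunk is a pure refactor: rcar_thermal_remove() moves above probe so the probe error path can call it instead of carrying a duplicate teardown sequence, and platform_set_drvdata() moves up so the context exists before anything can fail. A toy model of that pattern (all names invented):

#include <stdio.h>

struct state { int sensors; };

/* teardown tolerates a partially initialized state */
static int drv_remove(struct state *st)
{
	while (st->sensors > 0)
		printf("unregister sensor %d\n", st->sensors--);
	printf("pm_runtime_put + pm_runtime_disable\n");
	return 0;
}

static int drv_probe(struct state *st, int fail_at)
{
	st->sensors = 0;                 /* "drvdata" set before any failure */
	printf("pm_runtime_enable + pm_runtime_get_sync\n");

	for (int i = 0; i < 3; i++) {
		if (i == fail_at) {
			drv_remove(st);  /* one teardown path for all errors */
			return -1;
		}
		st->sensors++;
	}
	return 0;
}

int main(void)
{
	struct state st;

	drv_probe(&st, 2);  /* fail mid-way: registered sensors still undone */
	return 0;
}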
+238 -86
drivers/thermal/rockchip_thermal.c
··· 1 1 /*
2 2 * Copyright (c) 2014, Fuzhou Rockchip Electronics Co., Ltd
3 3 *
4 + * Copyright (c) 2015, Fuzhou Rockchip Electronics Co., Ltd
5 + * Caesar Wang <wxt@rock-chips.com>
6 + *
4 7 * This program is free software; you can redistribute it and/or modify it
5 8 * under the terms and conditions of the GNU General Public License,
6 9 * version 2, as published by the Free Software Foundation.
··· 48 45 };
49 46
50 47 /**
51 - * The system has three Temperature Sensors. channel 0 is reserved,
52 - * channel 1 is for CPU, and channel 2 is for GPU.
48 + * The system has two Temperature Sensors.
49 + * sensor0 is for CPU, and sensor1 is for GPU.
53 50 */
54 51 enum sensor_id {
55 - SENSOR_CPU = 1,
52 + SENSOR_CPU = 0,
56 53 SENSOR_GPU,
57 54 };
58 55
56 + /**
57 + * The conversion table holds adc value and temperature pairs.
58 + * ADC_DECREMENT: the adc value decreases with temperature (e.g. v2_code_table)
59 + * ADC_INCREMENT: the adc value increases with temperature (e.g. v3_code_table)
60 + */
61 + enum adc_sort_mode {
62 + ADC_DECREMENT = 0,
63 + ADC_INCREMENT,
64 + };
65 +
66 + /**
67 + * The maximum number of sensors is two in rockchip SoCs.
68 + * Two sensors: CPU and GPU sensor.
69 + */
70 + #define SOC_MAX_SENSORS 2
71 +
72 + struct chip_tsadc_table {
73 + const struct tsadc_table *id;
74 +
75 + /* the array table size */
76 + unsigned int length;
77 +
78 + /* the mask for the ADC data */
79 + u32 data_mask;
80 +
81 + /* the sort mode of the adc value in the table (increment or decrement) */
82 + enum adc_sort_mode mode;
83 + };
84 +
59 85 struct rockchip_tsadc_chip {
86 + /* The sensor id of the chip corresponds to the ADC channel */
87 + int chn_id[SOC_MAX_SENSORS];
88 + int chn_num;
89 +
60 90 /* The hardware-controlled tshut property */
61 - long tshut_temp;
91 + int tshut_temp;
62 92 enum tshut_mode tshut_mode;
63 93 enum tshut_polarity tshut_polarity;
64 94
··· 101 65 void (*control)(void __iomem *reg, bool on);
102 66
103 67 /* Per-sensor methods */
104 - int (*get_temp)(int chn, void __iomem *reg, int *temp);
105 - void (*set_tshut_temp)(int chn, void __iomem *reg, long temp);
68 + int (*get_temp)(struct chip_tsadc_table table,
69 + int chn, void __iomem *reg, int *temp);
70 + void (*set_tshut_temp)(struct chip_tsadc_table table,
71 + int chn, void __iomem *reg, int temp);
106 72 void (*set_tshut_mode)(int chn, void __iomem *reg, enum tshut_mode m);
73 +
74 + /* Per-table methods */
75 + struct chip_tsadc_table table;
107 76 };
108 77
109 78 struct rockchip_thermal_sensor {
110 79 struct rockchip_thermal_data *thermal;
111 80 struct thermal_zone_device *tzd;
112 - enum sensor_id id;
81 + int id;
113 82 };
114 -
115 - #define NUM_SENSORS 2 /* Ignore unused sensor 0 */
116 83
117 84 struct rockchip_thermal_data {
118 85 const struct rockchip_tsadc_chip *chip;
119 86 struct platform_device *pdev;
120 87 struct reset_control *reset;
121 88
122 - struct rockchip_thermal_sensor sensors[NUM_SENSORS];
89 + struct rockchip_thermal_sensor sensors[SOC_MAX_SENSORS];
123 90
124 91 struct clk *clk;
125 92 struct clk *pclk;
126 93
127 94 void __iomem *regs;
128 95
129 - long tshut_temp;
96 + int tshut_temp;
130 97 enum tshut_mode tshut_mode;
131 98 enum tshut_polarity tshut_polarity;
132 99 };
133 100
134 - /* TSADC V2 Sensor info define: */
101 + /* TSADC Sensor info define: */
135 102 #define TSADCV2_AUTO_CON 0x04
136 103 #define TSADCV2_INT_EN 0x08
137 104 #define TSADCV2_INT_PD 0x0c
··· 156 117 #define TSADCV2_INT_PD_CLEAR_MASK ~BIT(8)
157 118
158 119 #define TSADCV2_DATA_MASK 0xfff
120 + #define TSADCV3_DATA_MASK 0x3ff
121 +
159 122 #define 
TSADCV2_HIGHT_INT_DEBOUNCE_COUNT 4 160 123 #define TSADCV2_HIGHT_TSHUT_DEBOUNCE_COUNT 4 161 124 #define TSADCV2_AUTO_PERIOD_TIME 250 /* msec */ ··· 165 124 166 125 struct tsadc_table { 167 126 u32 code; 168 - long temp; 127 + int temp; 169 128 }; 170 129 171 130 static const struct tsadc_table v2_code_table[] = { ··· 206 165 {3421, 125000}, 207 166 }; 208 167 209 - static u32 rk_tsadcv2_temp_to_code(long temp) 168 + static const struct tsadc_table v3_code_table[] = { 169 + {0, -40000}, 170 + {106, -40000}, 171 + {108, -35000}, 172 + {110, -30000}, 173 + {112, -25000}, 174 + {114, -20000}, 175 + {116, -15000}, 176 + {118, -10000}, 177 + {120, -5000}, 178 + {122, 0}, 179 + {124, 5000}, 180 + {126, 10000}, 181 + {128, 15000}, 182 + {130, 20000}, 183 + {132, 25000}, 184 + {134, 30000}, 185 + {136, 35000}, 186 + {138, 40000}, 187 + {140, 45000}, 188 + {142, 50000}, 189 + {144, 55000}, 190 + {146, 60000}, 191 + {148, 65000}, 192 + {150, 70000}, 193 + {152, 75000}, 194 + {154, 80000}, 195 + {156, 85000}, 196 + {158, 90000}, 197 + {160, 95000}, 198 + {162, 100000}, 199 + {163, 105000}, 200 + {165, 110000}, 201 + {167, 115000}, 202 + {169, 120000}, 203 + {171, 125000}, 204 + {TSADCV3_DATA_MASK, 125000}, 205 + }; 206 + 207 + static u32 rk_tsadcv2_temp_to_code(struct chip_tsadc_table table, 208 + int temp) 210 209 { 211 210 int high, low, mid; 212 211 213 212 low = 0; 214 - high = ARRAY_SIZE(v2_code_table) - 1; 213 + high = table.length - 1; 215 214 mid = (high + low) / 2; 216 215 217 - if (temp < v2_code_table[low].temp || temp > v2_code_table[high].temp) 216 + if (temp < table.id[low].temp || temp > table.id[high].temp) 218 217 return 0; 219 218 220 219 while (low <= high) { 221 - if (temp == v2_code_table[mid].temp) 222 - return v2_code_table[mid].code; 223 - else if (temp < v2_code_table[mid].temp) 220 + if (temp == table.id[mid].temp) 221 + return table.id[mid].code; 222 + else if (temp < table.id[mid].temp) 224 223 high = mid - 1; 225 224 else 226 225 low = mid + 1; ··· 270 189 return 0; 271 190 } 272 191 273 - static int rk_tsadcv2_code_to_temp(u32 code, int *temp) 192 + static int rk_tsadcv2_code_to_temp(struct chip_tsadc_table table, u32 code, 193 + int *temp) 274 194 { 275 195 unsigned int low = 1; 276 - unsigned int high = ARRAY_SIZE(v2_code_table) - 1; 196 + unsigned int high = table.length - 1; 277 197 unsigned int mid = (low + high) / 2; 278 198 unsigned int num; 279 199 unsigned long denom; 280 200 281 - BUILD_BUG_ON(ARRAY_SIZE(v2_code_table) < 2); 201 + WARN_ON(table.length < 2); 282 202 283 - code &= TSADCV2_DATA_MASK; 284 - if (code < v2_code_table[high].code) 285 - return -EAGAIN; /* Incorrect reading */ 203 + switch (table.mode) { 204 + case ADC_DECREMENT: 205 + code &= table.data_mask; 206 + if (code < table.id[high].code) 207 + return -EAGAIN; /* Incorrect reading */ 286 208 287 - while (low <= high) { 288 - if (code >= v2_code_table[mid].code && 289 - code < v2_code_table[mid - 1].code) 290 - break; 291 - else if (code < v2_code_table[mid].code) 292 - low = mid + 1; 293 - else 294 - high = mid - 1; 295 - mid = (low + high) / 2; 209 + while (low <= high) { 210 + if (code >= table.id[mid].code && 211 + code < table.id[mid - 1].code) 212 + break; 213 + else if (code < table.id[mid].code) 214 + low = mid + 1; 215 + else 216 + high = mid - 1; 217 + 218 + mid = (low + high) / 2; 219 + } 220 + break; 221 + case ADC_INCREMENT: 222 + code &= table.data_mask; 223 + if (code < table.id[low].code) 224 + return -EAGAIN; /* Incorrect reading */ 225 + 226 + while (low <= high) { 227 + if 
(code >= table.id[mid - 1].code &&
228 + code < table.id[mid].code)
229 + break;
230 + else if (code > table.id[mid].code)
231 + low = mid + 1;
232 + else
233 + high = mid - 1;
234 +
235 + mid = (low + high) / 2;
236 + }
237 + break;
238 + default:
239 + pr_err("Invalid conversion table\n");
296 240 }
297 241
298 242 /*
··· 326 220 * temperature between 2 table entries is linear and interpolate
327 221 * to produce less granular result.
328 222 */
329 - num = v2_code_table[mid].temp - v2_code_table[mid - 1].temp;
330 - num *= v2_code_table[mid - 1].code - code;
331 - denom = v2_code_table[mid - 1].code - v2_code_table[mid].code;
332 - *temp = v2_code_table[mid - 1].temp + (num / denom);
223 + num = table.id[mid].temp - table.id[mid - 1].temp;
224 + num *= abs(table.id[mid - 1].code - code);
225 + denom = abs(table.id[mid - 1].code - table.id[mid].code);
226 + *temp = table.id[mid - 1].temp + (num / denom);
333 227
334 228 return 0;
335 229 }
336 230
337 231 /**
338 - * rk_tsadcv2_initialize - initialize TASDC Controller
339 - * (1) Set TSADCV2_AUTO_PERIOD, configure the interleave between
340 - * every two accessing of TSADC in normal operation.
341 - * (2) Set TSADCV2_AUTO_PERIOD_HT, configure the interleave between
342 - * every two accessing of TSADC after the temperature is higher
343 - * than COM_SHUT or COM_INT.
344 - * (3) Set TSADCV2_HIGH_INT_DEBOUNCE and TSADC_HIGHT_TSHUT_DEBOUNCE,
345 - * if the temperature is higher than COMP_INT or COMP_SHUT for
346 - * "debounce" times, TSADC controller will generate interrupt or TSHUT.
232 + * rk_tsadcv2_initialize - initialize TSADC Controller.
233 + *
234 + * (1) Set TSADC_V2_AUTO_PERIOD:
235 + * Configure the interleave between every two accesses of
236 + * TSADC in normal operation.
237 + *
238 + * (2) Set TSADCV2_AUTO_PERIOD_HT:
239 + * Configure the interleave between every two accesses of
240 + * TSADC after the temperature is higher than COM_SHUT or COM_INT.
241 + *
242 + * (3) Set TSADCV2_HIGH_INT_DEBOUNCE and TSADC_HIGHT_TSHUT_DEBOUNCE:
243 + * If the temperature is higher than COMP_INT or COMP_SHUT for
244 + * "debounce" times, TSADC controller will generate interrupt or TSHUT. 
347 245 */ 348 246 static void rk_tsadcv2_initialize(void __iomem *regs, 349 247 enum tshut_polarity tshut_polarity) ··· 389 279 writel_relaxed(val, regs + TSADCV2_AUTO_CON); 390 280 } 391 281 392 - static int rk_tsadcv2_get_temp(int chn, void __iomem *regs, int *temp) 282 + static int rk_tsadcv2_get_temp(struct chip_tsadc_table table, 283 + int chn, void __iomem *regs, int *temp) 393 284 { 394 285 u32 val; 395 286 396 287 val = readl_relaxed(regs + TSADCV2_DATA(chn)); 397 288 398 - return rk_tsadcv2_code_to_temp(val, temp); 289 + return rk_tsadcv2_code_to_temp(table, val, temp); 399 290 } 400 291 401 - static void rk_tsadcv2_tshut_temp(int chn, void __iomem *regs, long temp) 292 + static void rk_tsadcv2_tshut_temp(struct chip_tsadc_table table, 293 + int chn, void __iomem *regs, int temp) 402 294 { 403 295 u32 tshut_value, val; 404 296 405 - tshut_value = rk_tsadcv2_temp_to_code(temp); 297 + tshut_value = rk_tsadcv2_temp_to_code(table, temp); 406 298 writel_relaxed(tshut_value, regs + TSADCV2_COMP_SHUT(chn)); 407 299 408 300 /* TSHUT will be valid */ ··· 430 318 } 431 319 432 320 static const struct rockchip_tsadc_chip rk3288_tsadc_data = { 321 + .chn_id[SENSOR_CPU] = 1, /* cpu sensor is channel 1 */ 322 + .chn_id[SENSOR_GPU] = 2, /* gpu sensor is channel 2 */ 323 + .chn_num = 2, /* two channels for tsadc */ 324 + 433 325 .tshut_mode = TSHUT_MODE_GPIO, /* default TSHUT via GPIO give PMIC */ 434 326 .tshut_polarity = TSHUT_LOW_ACTIVE, /* default TSHUT LOW ACTIVE */ 435 327 .tshut_temp = 95000, ··· 444 328 .get_temp = rk_tsadcv2_get_temp, 445 329 .set_tshut_temp = rk_tsadcv2_tshut_temp, 446 330 .set_tshut_mode = rk_tsadcv2_tshut_mode, 331 + 332 + .table = { 333 + .id = v2_code_table, 334 + .length = ARRAY_SIZE(v2_code_table), 335 + .data_mask = TSADCV2_DATA_MASK, 336 + .mode = ADC_DECREMENT, 337 + }, 338 + }; 339 + 340 + static const struct rockchip_tsadc_chip rk3368_tsadc_data = { 341 + .chn_id[SENSOR_CPU] = 0, /* cpu sensor is channel 0 */ 342 + .chn_id[SENSOR_GPU] = 1, /* gpu sensor is channel 1 */ 343 + .chn_num = 2, /* two channels for tsadc */ 344 + 345 + .tshut_mode = TSHUT_MODE_GPIO, /* default TSHUT via GPIO give PMIC */ 346 + .tshut_polarity = TSHUT_LOW_ACTIVE, /* default TSHUT LOW ACTIVE */ 347 + .tshut_temp = 95000, 348 + 349 + .initialize = rk_tsadcv2_initialize, 350 + .irq_ack = rk_tsadcv2_irq_ack, 351 + .control = rk_tsadcv2_control, 352 + .get_temp = rk_tsadcv2_get_temp, 353 + .set_tshut_temp = rk_tsadcv2_tshut_temp, 354 + .set_tshut_mode = rk_tsadcv2_tshut_mode, 355 + 356 + .table = { 357 + .id = v3_code_table, 358 + .length = ARRAY_SIZE(v3_code_table), 359 + .data_mask = TSADCV3_DATA_MASK, 360 + .mode = ADC_INCREMENT, 361 + }, 447 362 }; 448 363 449 364 static const struct of_device_id of_rockchip_thermal_match[] = { 450 365 { 451 366 .compatible = "rockchip,rk3288-tsadc", 452 367 .data = (void *)&rk3288_tsadc_data, 368 + }, 369 + { 370 + .compatible = "rockchip,rk3368-tsadc", 371 + .data = (void *)&rk3368_tsadc_data, 453 372 }, 454 373 { /* end */ }, 455 374 }; ··· 508 357 509 358 thermal->chip->irq_ack(thermal->regs); 510 359 511 - for (i = 0; i < ARRAY_SIZE(thermal->sensors); i++) 360 + for (i = 0; i < thermal->chip->chn_num; i++) 512 361 thermal_zone_device_update(thermal->sensors[i].tzd); 513 362 514 363 return IRQ_HANDLED; ··· 521 370 const struct rockchip_tsadc_chip *tsadc = sensor->thermal->chip; 522 371 int retval; 523 372 524 - retval = tsadc->get_temp(sensor->id, thermal->regs, out_temp); 373 + retval = tsadc->get_temp(tsadc->table, 374 + sensor->id, 
thermal->regs, out_temp); 525 375 dev_dbg(&thermal->pdev->dev, "sensor %d - temp: %d, retval: %d\n", 526 376 sensor->id, *out_temp, retval); 527 377 ··· 541 389 542 390 if (of_property_read_u32(np, "rockchip,hw-tshut-temp", &shut_temp)) { 543 391 dev_warn(dev, 544 - "Missing tshut temp property, using default %ld\n", 392 + "Missing tshut temp property, using default %d\n", 545 393 thermal->chip->tshut_temp); 546 394 thermal->tshut_temp = thermal->chip->tshut_temp; 547 395 } else { ··· 549 397 } 550 398 551 399 if (thermal->tshut_temp > INT_MAX) { 552 - dev_err(dev, "Invalid tshut temperature specified: %ld\n", 400 + dev_err(dev, "Invalid tshut temperature specified: %d\n", 553 401 thermal->tshut_temp); 554 402 return -ERANGE; 555 403 } ··· 594 442 rockchip_thermal_register_sensor(struct platform_device *pdev, 595 443 struct rockchip_thermal_data *thermal, 596 444 struct rockchip_thermal_sensor *sensor, 597 - enum sensor_id id) 445 + int id) 598 446 { 599 447 const struct rockchip_tsadc_chip *tsadc = thermal->chip; 600 448 int error; 601 449 602 450 tsadc->set_tshut_mode(id, thermal->regs, thermal->tshut_mode); 603 - tsadc->set_tshut_temp(id, thermal->regs, thermal->tshut_temp); 451 + tsadc->set_tshut_temp(tsadc->table, id, thermal->regs, 452 + thermal->tshut_temp); 604 453 605 454 sensor->thermal = thermal; 606 455 sensor->id = id; ··· 634 481 const struct of_device_id *match; 635 482 struct resource *res; 636 483 int irq; 637 - int i; 484 + int i, j; 638 485 int error; 639 486 640 487 match = of_match_node(of_rockchip_thermal_match, np); ··· 709 556 710 557 thermal->chip->initialize(thermal->regs, thermal->tshut_polarity); 711 558 712 - error = rockchip_thermal_register_sensor(pdev, thermal, 713 - &thermal->sensors[0], 714 - SENSOR_CPU); 715 - if (error) { 716 - dev_err(&pdev->dev, 717 - "failed to register CPU thermal sensor: %d\n", error); 718 - goto err_disable_pclk; 719 - } 720 - 721 - error = rockchip_thermal_register_sensor(pdev, thermal, 722 - &thermal->sensors[1], 723 - SENSOR_GPU); 724 - if (error) { 725 - dev_err(&pdev->dev, 726 - "failed to register GPU thermal sensor: %d\n", error); 727 - goto err_unregister_cpu_sensor; 559 + for (i = 0; i < thermal->chip->chn_num; i++) { 560 + error = rockchip_thermal_register_sensor(pdev, thermal, 561 + &thermal->sensors[i], 562 + thermal->chip->chn_id[i]); 563 + if (error) { 564 + dev_err(&pdev->dev, 565 + "failed to register sensor[%d] : error = %d\n", 566 + i, error); 567 + for (j = 0; j < i; j++) 568 + thermal_zone_of_sensor_unregister(&pdev->dev, 569 + thermal->sensors[j].tzd); 570 + goto err_disable_pclk; 571 + } 728 572 } 729 573 730 574 error = devm_request_threaded_irq(&pdev->dev, irq, NULL, ··· 731 581 if (error) { 732 582 dev_err(&pdev->dev, 733 583 "failed to request tsadc irq: %d\n", error); 734 - goto err_unregister_gpu_sensor; 584 + goto err_unregister_sensor; 735 585 } 736 586 737 587 thermal->chip->control(thermal->regs, true); 738 588 739 - for (i = 0; i < ARRAY_SIZE(thermal->sensors); i++) 589 + for (i = 0; i < thermal->chip->chn_num; i++) 740 590 rockchip_thermal_toggle_sensor(&thermal->sensors[i], true); 741 591 742 592 platform_set_drvdata(pdev, thermal); 743 593 744 594 return 0; 745 595 746 - err_unregister_gpu_sensor: 747 - thermal_zone_of_sensor_unregister(&pdev->dev, thermal->sensors[1].tzd); 748 - err_unregister_cpu_sensor: 749 - thermal_zone_of_sensor_unregister(&pdev->dev, thermal->sensors[0].tzd); 596 + err_unregister_sensor: 597 + while (i--) 598 + thermal_zone_of_sensor_unregister(&pdev->dev, 599 + 
thermal->sensors[i].tzd); 600 + 750 601 err_disable_pclk: 751 602 clk_disable_unprepare(thermal->pclk); 752 603 err_disable_clk: ··· 761 610 struct rockchip_thermal_data *thermal = platform_get_drvdata(pdev); 762 611 int i; 763 612 764 - for (i = 0; i < ARRAY_SIZE(thermal->sensors); i++) { 613 + for (i = 0; i < thermal->chip->chn_num; i++) { 765 614 struct rockchip_thermal_sensor *sensor = &thermal->sensors[i]; 766 615 767 616 rockchip_thermal_toggle_sensor(sensor, false); ··· 782 631 struct rockchip_thermal_data *thermal = platform_get_drvdata(pdev); 783 632 int i; 784 633 785 - for (i = 0; i < ARRAY_SIZE(thermal->sensors); i++) 634 + for (i = 0; i < thermal->chip->chn_num; i++) 786 635 rockchip_thermal_toggle_sensor(&thermal->sensors[i], false); 787 636 788 637 thermal->chip->control(thermal->regs, false); ··· 814 663 815 664 thermal->chip->initialize(thermal->regs, thermal->tshut_polarity); 816 665 817 - for (i = 0; i < ARRAY_SIZE(thermal->sensors); i++) { 818 - enum sensor_id id = thermal->sensors[i].id; 666 + for (i = 0; i < thermal->chip->chn_num; i++) { 667 + int id = thermal->sensors[i].id; 819 668 820 669 thermal->chip->set_tshut_mode(id, thermal->regs, 821 670 thermal->tshut_mode); 822 - thermal->chip->set_tshut_temp(id, thermal->regs, 671 + thermal->chip->set_tshut_temp(thermal->chip->table, 672 + id, thermal->regs, 823 673 thermal->tshut_temp); 824 674 } 825 675 826 676 thermal->chip->control(thermal->regs, true); 827 677 828 - for (i = 0; i < ARRAY_SIZE(thermal->sensors); i++) 678 + for (i = 0; i < thermal->chip->chn_num; i++) 829 679 rockchip_thermal_toggle_sensor(&thermal->sensors[i], true); 830 680 831 681 pinctrl_pm_select_default_state(dev);
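Editorial note: the heart of the rockchip rework is making the code/temperature table a per-chip property, so one binary search plus linear interpolation serves both the descending v2 table and the ascending v3 table. A self-contained model of the ascending (ADC_INCREMENT) lookup, with a condensed table and an explicit top-edge clamp (the real driver relies on its sentinel last entry instead):

#include <stdio.h>

struct tsadc_entry { unsigned int code; int temp; /* millicelsius */ };

/* Ascending-code lookup in the style of rk_tsadcv2_code_to_temp():
 * binary-search the bracketing pair, then interpolate linearly. */
static int code_to_temp(const struct tsadc_entry *t, unsigned int len,
			unsigned int code, int *temp)
{
	unsigned int low = 1, high = len - 1, mid = (low + high) / 2;
	int num, denom;

	if (code < t[low].code)
		return -1;                /* below range: bad reading */
	if (code >= t[high].code) {       /* clamp at the table top */
		*temp = t[high].temp;
		return 0;
	}

	while (low <= high) {
		if (code >= t[mid - 1].code && code < t[mid].code)
			break;
		else if (code > t[mid].code)
			low = mid + 1;
		else
			high = mid - 1;
		mid = (low + high) / 2;
	}

	num = (t[mid].temp - t[mid - 1].temp) * (int)(code - t[mid - 1].code);
	denom = (int)(t[mid].code - t[mid - 1].code);
	*temp = t[mid - 1].temp + num / denom;
	return 0;
}

int main(void)
{
	/* condensed ascending table; entry 0 is a low sentinel as in v3 */
	static const struct tsadc_entry tbl[] = {
		{0, -40000}, {106, -40000}, {122, 0},
		{134, 30000}, {150, 70000}, {171, 125000},
	};
	int temp;

	if (!code_to_temp(tbl, 6, 128, &temp))
		printf("code 128 -> %d mC\n", temp);  /* prints 15000 */
	return 0;
}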
+1 -1
drivers/tty/n_tty.c
··· 169 169 { 170 170 struct n_tty_data *ldata = tty->disc_data; 171 171 172 - tty_audit_add_data(tty, to, n, ldata->icanon); 172 + tty_audit_add_data(tty, from, n, ldata->icanon); 173 173 return copy_to_user(to, from, n); 174 174 } 175 175
+1
drivers/tty/serial/8250/8250_fsl.c
··· 60 60 spin_unlock_irqrestore(&up->port.lock, flags); 61 61 return 1; 62 62 } 63 + EXPORT_SYMBOL_GPL(fsl8250_handle_irq);
+1
drivers/tty/serial/8250/Kconfig
··· 373 373 depends on SERIAL_8250 && PCI 374 374 select HSU_DMA if SERIAL_8250_DMA 375 375 select HSU_DMA_PCI if X86_INTEL_MID 376 + select RATIONAL 376 377 help 377 378 Selecting this option will enable handling of the extra features 378 379 present on the UART found on Intel Medfield SOC and various other
+1 -1
drivers/tty/serial/Kconfig
··· 1539 1539 tristate "Freescale lpuart serial port support" 1540 1540 depends on HAS_DMA 1541 1541 select SERIAL_CORE 1542 - select SERIAL_EARLYCON 1543 1542 help 1544 1543 Support for the on-chip lpuart on some Freescale SOCs. 1545 1544 ··· 1546 1547 bool "Console on Freescale lpuart serial port" 1547 1548 depends on SERIAL_FSL_LPUART=y 1548 1549 select SERIAL_CORE_CONSOLE 1550 + select SERIAL_EARLYCON 1549 1551 help 1550 1552 If you have enabled the lpuart serial port on the Freescale SoCs, 1551 1553 you can make it the console by answering Y to this option.
+1 -1
drivers/tty/serial/bcm63xx_uart.c
··· 474 474 475 475 /* register irq and enable rx interrupts */ 476 476 ret = request_irq(port->irq, bcm_uart_interrupt, 0, 477 - bcm_uart_type(port), port); 477 + dev_name(port->dev), port); 478 478 if (ret) 479 479 return ret; 480 480 bcm_uart_writel(port, UART_RX_INT_MASK, UART_IR_REG);
+1 -1
drivers/tty/serial/etraxfs-uart.c
··· 894 894 up->regi_ser = of_iomap(np, 0); 895 895 up->port.dev = &pdev->dev; 896 896 897 - up->gpios = mctrl_gpio_init(&pdev->dev, 0); 897 + up->gpios = mctrl_gpio_init_noauto(&pdev->dev, 0); 898 898 if (IS_ERR(up->gpios)) 899 899 return PTR_ERR(up->gpios); 900 900
+1 -1
drivers/tty/tty_audit.c
··· 265 265 * 266 266 * Audit @data of @size from @tty, if necessary. 267 267 */ 268 - void tty_audit_add_data(struct tty_struct *tty, unsigned char *data, 268 + void tty_audit_add_data(struct tty_struct *tty, const void *data, 269 269 size_t size, unsigned icanon) 270 270 { 271 271 struct tty_audit_buf *buf;
+4
drivers/tty/tty_io.c
··· 1282 1282 int was_stopped = tty->stopped; 1283 1283 1284 1284 if (tty->ops->send_xchar) { 1285 + down_read(&tty->termios_rwsem); 1285 1286 tty->ops->send_xchar(tty, ch); 1287 + up_read(&tty->termios_rwsem); 1286 1288 return 0; 1287 1289 } 1288 1290 1289 1291 if (tty_write_lock(tty, 0) < 0) 1290 1292 return -ERESTARTSYS; 1291 1293 1294 + down_read(&tty->termios_rwsem); 1292 1295 if (was_stopped) 1293 1296 start_tty(tty); 1294 1297 tty->ops->write(tty, &ch, 1); 1295 1298 if (was_stopped) 1296 1299 stop_tty(tty); 1300 + up_read(&tty->termios_rwsem); 1297 1301 tty_write_unlock(tty); 1298 1302 return 0; 1299 1303 }
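Editorial note: this hunk, together with the tty_ioctl.c hunk below, pushes the termios_rwsem read lock from the two TCIOFF/TCION call sites down into tty_send_xchar() itself, so every caller of the function sees consistent termios state. A tiny pthread model of that refactor (names invented; compile with -pthread):

#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t termios_rwsem = PTHREAD_RWLOCK_INITIALIZER;
static char stop_char = 'S';

/* After the refactor the callee takes the lock itself, so *every*
 * caller is covered, not just the ioctl path that remembered to lock. */
static void send_xchar(char ch)
{
	pthread_rwlock_rdlock(&termios_rwsem);
	printf("sending %c with termios state held stable\n", ch);
	pthread_rwlock_unlock(&termios_rwsem);
}

int main(void)
{
	send_xchar(stop_char);  /* the ioctl(TCIOFF) path */
	send_xchar('Q');        /* any other caller is now covered too */
	return 0;
}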
-4
drivers/tty/tty_ioctl.c
··· 1147 1147 spin_unlock_irq(&tty->flow_lock); 1148 1148 break; 1149 1149 case TCIOFF: 1150 - down_read(&tty->termios_rwsem); 1151 1150 if (STOP_CHAR(tty) != __DISABLED_CHAR) 1152 1151 retval = tty_send_xchar(tty, STOP_CHAR(tty)); 1153 - up_read(&tty->termios_rwsem); 1154 1152 break; 1155 1153 case TCION: 1156 - down_read(&tty->termios_rwsem); 1157 1154 if (START_CHAR(tty) != __DISABLED_CHAR) 1158 1155 retval = tty_send_xchar(tty, START_CHAR(tty)); 1159 - up_read(&tty->termios_rwsem); 1160 1156 break; 1161 1157 default: 1162 1158 return -EINVAL;
+1 -1
drivers/tty/tty_ldisc.c
··· 592 592 593 593 /* Restart the work queue in case no characters kick it off. Safe if 594 594 already running */ 595 - schedule_work(&tty->port->buf.work); 595 + tty_buffer_restart_work(tty->port); 596 596 597 597 tty_unlock(tty); 598 598 return retval;
+122 -22
drivers/usb/chipidea/ci_hdrc_imx.c
··· 84 84 struct imx_usbmisc_data *usbmisc_data;
85 85 bool supports_runtime_pm;
86 86 bool in_lpm;
87 + /* SoCs before i.mx6 (except imx23/imx28) need three clks */
88 + bool need_three_clks;
89 + struct clk *clk_ipg;
90 + struct clk *clk_ahb;
91 + struct clk *clk_per;
92 + /* --------------------------------- */
87 93 };
88 94
89 95 /* Common functions shared by usbmisc drivers */
··· 141 135 }
142 136
143 137 /* End of common functions shared by usbmisc drivers*/
138 + static int imx_get_clks(struct device *dev)
139 + {
140 + struct ci_hdrc_imx_data *data = dev_get_drvdata(dev);
141 + int ret = 0;
142 +
143 + data->clk_ipg = devm_clk_get(dev, "ipg");
144 + if (IS_ERR(data->clk_ipg)) {
145 + /* If the platform only needs one clock */
146 + data->clk = devm_clk_get(dev, NULL);
147 + if (IS_ERR(data->clk)) {
148 + ret = PTR_ERR(data->clk);
149 + dev_err(dev,
150 + "Failed to get clks, err=%ld,%ld\n",
151 + PTR_ERR(data->clk), PTR_ERR(data->clk_ipg));
152 + return ret;
153 + }
154 + return ret;
155 + }
156 +
157 + data->clk_ahb = devm_clk_get(dev, "ahb");
158 + if (IS_ERR(data->clk_ahb)) {
159 + ret = PTR_ERR(data->clk_ahb);
160 + dev_err(dev,
161 + "Failed to get ahb clock, err=%d\n", ret);
162 + return ret;
163 + }
164 +
165 + data->clk_per = devm_clk_get(dev, "per");
166 + if (IS_ERR(data->clk_per)) {
167 + ret = PTR_ERR(data->clk_per);
168 + dev_err(dev,
169 + "Failed to get per clock, err=%d\n", ret);
170 + return ret;
171 + }
172 +
173 + data->need_three_clks = true;
174 + return ret;
175 + }
176 +
177 + static int imx_prepare_enable_clks(struct device *dev)
178 + {
179 + struct ci_hdrc_imx_data *data = dev_get_drvdata(dev);
180 + int ret = 0;
181 +
182 + if (data->need_three_clks) {
183 + ret = clk_prepare_enable(data->clk_ipg);
184 + if (ret) {
185 + dev_err(dev,
186 + "Failed to prepare/enable ipg clk, err=%d\n",
187 + ret);
188 + return ret;
189 + }
190 +
191 + ret = clk_prepare_enable(data->clk_ahb);
192 + if (ret) {
193 + dev_err(dev,
194 + "Failed to prepare/enable ahb clk, err=%d\n",
195 + ret);
196 + clk_disable_unprepare(data->clk_ipg);
197 + return ret;
198 + }
199 +
200 + ret = clk_prepare_enable(data->clk_per);
201 + if (ret) {
202 + dev_err(dev,
203 + "Failed to prepare/enable per clk, err=%d\n",
204 + ret);
205 + clk_disable_unprepare(data->clk_ahb);
206 + clk_disable_unprepare(data->clk_ipg);
207 + return ret;
208 + }
209 + } else {
210 + ret = clk_prepare_enable(data->clk);
211 + if (ret) {
212 + dev_err(dev,
213 + "Failed to prepare/enable clk, err=%d\n",
214 + ret);
215 + return ret;
216 + }
217 + }
218 +
219 + return ret;
220 + }
221 +
222 + static void imx_disable_unprepare_clks(struct device *dev)
223 + {
224 + struct ci_hdrc_imx_data *data = dev_get_drvdata(dev);
225 +
226 + if (data->need_three_clks) {
227 + clk_disable_unprepare(data->clk_per);
228 + clk_disable_unprepare(data->clk_ahb);
229 + clk_disable_unprepare(data->clk_ipg);
230 + } else {
231 + clk_disable_unprepare(data->clk);
232 + }
233 + }
144 234
145 235 static int ci_hdrc_imx_probe(struct platform_device *pdev)
146 236 {
··· 247 145 .flags = CI_HDRC_SET_NON_ZERO_TTHA,
248 146 };
249 147 int ret;
250 - const struct of_device_id *of_id =
251 - of_match_device(ci_hdrc_imx_dt_ids, &pdev->dev);
252 - const struct ci_hdrc_imx_platform_flag *imx_platform_flag = of_id->data;
148 + const struct of_device_id *of_id;
149 + const struct ci_hdrc_imx_platform_flag *imx_platform_flag;
150 +
151 + of_id = of_match_device(ci_hdrc_imx_dt_ids, &pdev->dev);
152 + if (!of_id)
153 + return -ENODEV;
154 +
155 + 
imx_platform_flag = of_id->data; 253 156 254 157 data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL); 255 158 if (!data) 256 159 return -ENOMEM; 257 160 161 + platform_set_drvdata(pdev, data); 258 162 data->usbmisc_data = usbmisc_get_init_data(&pdev->dev); 259 163 if (IS_ERR(data->usbmisc_data)) 260 164 return PTR_ERR(data->usbmisc_data); 261 165 262 - data->clk = devm_clk_get(&pdev->dev, NULL); 263 - if (IS_ERR(data->clk)) { 264 - dev_err(&pdev->dev, 265 - "Failed to get clock, err=%ld\n", PTR_ERR(data->clk)); 266 - return PTR_ERR(data->clk); 267 - } 268 - 269 - ret = clk_prepare_enable(data->clk); 270 - if (ret) { 271 - dev_err(&pdev->dev, 272 - "Failed to prepare or enable clock, err=%d\n", ret); 166 + ret = imx_get_clks(&pdev->dev); 167 + if (ret) 273 168 return ret; 274 - } 169 + 170 + ret = imx_prepare_enable_clks(&pdev->dev); 171 + if (ret) 172 + return ret; 275 173 276 174 data->phy = devm_usb_get_phy_by_phandle(&pdev->dev, "fsl,usbphy", 0); 277 175 if (IS_ERR(data->phy)) { ··· 314 212 goto disable_device; 315 213 } 316 214 317 - platform_set_drvdata(pdev, data); 318 - 319 215 if (data->supports_runtime_pm) { 320 216 pm_runtime_set_active(&pdev->dev); 321 217 pm_runtime_enable(&pdev->dev); ··· 326 226 disable_device: 327 227 ci_hdrc_remove_device(data->ci_pdev); 328 228 err_clk: 329 - clk_disable_unprepare(data->clk); 229 + imx_disable_unprepare_clks(&pdev->dev); 330 230 return ret; 331 231 } 332 232 ··· 340 240 pm_runtime_put_noidle(&pdev->dev); 341 241 } 342 242 ci_hdrc_remove_device(data->ci_pdev); 343 - clk_disable_unprepare(data->clk); 243 + imx_disable_unprepare_clks(&pdev->dev); 344 244 345 245 return 0; 346 246 } ··· 352 252 353 253 dev_dbg(dev, "at %s\n", __func__); 354 254 355 - clk_disable_unprepare(data->clk); 255 + imx_disable_unprepare_clks(dev); 356 256 data->in_lpm = true; 357 257 358 258 return 0; ··· 370 270 return 0; 371 271 } 372 272 373 - ret = clk_prepare_enable(data->clk); 273 + ret = imx_prepare_enable_clks(dev); 374 274 if (ret) 375 275 return ret; 376 276 ··· 385 285 return 0; 386 286 387 287 clk_disable: 388 - clk_disable_unprepare(data->clk); 288 + imx_disable_unprepare_clks(dev); 389 289 return ret; 390 290 } 391 291
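Editorial note: the chipidea hunk above teaches the glue layer to fall back from the three named ipg/ahb/per clocks to a single unnamed clock, enabling in order and unwinding in reverse on failure. A userspace model of that control flow (get_clk()/enable_clk()/disable_clk() stand in for devm_clk_get()/clk_prepare_enable()/clk_disable_unprepare(); the "core" clock name is invented):

#include <stdio.h>
#include <string.h>

static int get_clk(const char *name)
{
	/* pretend only the three named clocks exist on this SoC */
	return strcmp(name, "ipg") == 0 || strcmp(name, "ahb") == 0 ||
	       strcmp(name, "per") == 0;
}

static int enable_clk(const char *name)
{
	printf("enable %s\n", name);
	return 0;
}

static void disable_clk(const char *name)
{
	printf("disable %s\n", name);
}

int main(void)
{
	int need_three_clks = get_clk("ipg");

	if (!need_three_clks)
		/* newer SoCs expose one unnamed clock */
		return enable_clk("core") ? 1 : 0;

	/* older SoCs: enable in order, unwind in reverse on failure */
	if (enable_clk("ipg"))
		return 1;
	if (enable_clk("ahb")) {
		disable_clk("ipg");
		return 1;
	}
	if (enable_clk("per")) {
		disable_clk("ahb");
		disable_clk("ipg");
		return 1;
	}
	return 0;
}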
+2
drivers/usb/chipidea/debug.c
··· 322 322 return -EINVAL; 323 323 324 324 pm_runtime_get_sync(ci->dev); 325 + disable_irq(ci->irq); 325 326 ci_role_stop(ci); 326 327 ret = ci_role_start(ci, role); 328 + enable_irq(ci->irq); 327 329 pm_runtime_put_sync(ci->dev); 328 330 329 331 return ret ? ret : count;
+17
drivers/usb/chipidea/udc.c
··· 1751 1751 return retval; 1752 1752 } 1753 1753 1754 + static void ci_udc_stop_for_otg_fsm(struct ci_hdrc *ci) 1755 + { 1756 + if (!ci_otg_is_fsm_mode(ci)) 1757 + return; 1758 + 1759 + mutex_lock(&ci->fsm.lock); 1760 + if (ci->fsm.otg->state == OTG_STATE_A_PERIPHERAL) { 1761 + ci->fsm.a_bidl_adis_tmout = 1; 1762 + ci_hdrc_otg_fsm_start(ci); 1763 + } else if (ci->fsm.otg->state == OTG_STATE_B_PERIPHERAL) { 1764 + ci->fsm.protocol = PROTO_UNDEF; 1765 + ci->fsm.otg->state = OTG_STATE_UNDEFINED; 1766 + } 1767 + mutex_unlock(&ci->fsm.lock); 1768 + } 1769 + 1754 1770 /** 1755 1771 * ci_udc_stop: unregister a gadget driver 1756 1772 */ ··· 1791 1775 ci->driver = NULL; 1792 1776 spin_unlock_irqrestore(&ci->lock, flags); 1793 1777 1778 + ci_udc_stop_for_otg_fsm(ci); 1794 1779 return 0; 1795 1780 } 1796 1781
+6 -4
drivers/usb/chipidea/usbmisc_imx.c
··· 500 500 { 501 501 struct resource *res; 502 502 struct imx_usbmisc *data; 503 - struct of_device_id *tmp_dev; 503 + const struct of_device_id *of_id; 504 + 505 + of_id = of_match_device(usbmisc_imx_dt_ids, &pdev->dev); 506 + if (!of_id) 507 + return -ENODEV; 504 508 505 509 data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL); 506 510 if (!data) ··· 517 513 if (IS_ERR(data->base)) 518 514 return PTR_ERR(data->base); 519 515 520 - tmp_dev = (struct of_device_id *) 521 - of_match_device(usbmisc_imx_dt_ids, &pdev->dev); 522 - data->ops = (const struct usbmisc_ops *)tmp_dev->data; 516 + data->ops = (const struct usbmisc_ops *)of_id->data; 523 517 platform_set_drvdata(pdev, data); 524 518 525 519 return 0;
+1 -1
drivers/usb/class/usblp.c
··· 884 884 885 885 add_wait_queue(&usblp->wwait, &waita); 886 886 for (;;) { 887 - set_current_state(TASK_INTERRUPTIBLE); 888 887 if (mutex_lock_interruptible(&usblp->mut)) { 889 888 rc = -EINTR; 890 889 break; 891 890 } 891 + set_current_state(TASK_INTERRUPTIBLE); 892 892 rc = usblp_wtest(usblp, nonblock); 893 893 mutex_unlock(&usblp->mut); 894 894 if (rc <= 0)
+1 -2
drivers/usb/core/Kconfig
··· 77 77 78 78 config USB_OTG_FSM 79 79 tristate "USB 2.0 OTG FSM implementation" 80 - depends on USB 81 - select USB_OTG 80 + depends on USB && USB_OTG 82 81 select USB_PHY 83 82 help 84 83 Implements OTG Finite State Machine as specified in On-The-Go
+5 -4
drivers/usb/dwc2/hcd.c
··· 324 324 */ 325 325 static void dwc2_hcd_rem_wakeup(struct dwc2_hsotg *hsotg) 326 326 { 327 - if (hsotg->lx_state == DWC2_L2) { 327 + if (hsotg->bus_suspended) { 328 328 hsotg->flags.b.port_suspend_change = 1; 329 329 usb_hcd_resume_root_hub(hsotg->priv); 330 - } else { 331 - hsotg->flags.b.port_l1_change = 1; 332 330 } 331 + 332 + if (hsotg->lx_state == DWC2_L1) 333 + hsotg->flags.b.port_l1_change = 1; 333 334 } 334 335 335 336 /** ··· 1429 1428 dev_dbg(hsotg->dev, "Clear Resume: HPRT0=%0x\n", 1430 1429 dwc2_readl(hsotg->regs + HPRT0)); 1431 1430 1432 - hsotg->bus_suspended = 0; 1433 1431 dwc2_hcd_rem_wakeup(hsotg); 1432 + hsotg->bus_suspended = 0; 1434 1433 1435 1434 /* Change to L0 state */ 1436 1435 hsotg->lx_state = DWC2_L0;
+2 -1
drivers/usb/dwc2/platform.c
··· 108 108 .host_ls_low_power_phy_clk = -1, 109 109 .ts_dline = -1, 110 110 .reload_ctl = -1, 111 - .ahbcfg = 0x7, /* INCR16 */ 111 + .ahbcfg = GAHBCFG_HBSTLEN_INCR16 << 112 + GAHBCFG_HBSTLEN_SHIFT, 112 113 .uframe_sched = -1, 113 114 .external_id_pin_ctl = -1, 114 115 .hibernation = -1,
+4
drivers/usb/dwc3/dwc3-pci.c
··· 34 34 #define PCI_DEVICE_ID_INTEL_BSW 0x22b7 35 35 #define PCI_DEVICE_ID_INTEL_SPTLP 0x9d30 36 36 #define PCI_DEVICE_ID_INTEL_SPTH 0xa130 37 + #define PCI_DEVICE_ID_INTEL_BXT 0x0aaa 38 + #define PCI_DEVICE_ID_INTEL_APL 0x5aaa 37 39 38 40 static const struct acpi_gpio_params reset_gpios = { 0, 0, false }; 39 41 static const struct acpi_gpio_params cs_gpios = { 1, 0, false }; ··· 212 210 { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_MRFLD), }, 213 211 { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_SPTLP), }, 214 212 { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_SPTH), }, 213 + { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_BXT), }, 214 + { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_APL), }, 215 215 { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_NL_USB), }, 216 216 { } /* Terminating Entry */ 217 217 };
+23 -1
drivers/usb/dwc3/gadget.c
··· 2744 2744 } 2745 2745 2746 2746 dwc->gadget.ops = &dwc3_gadget_ops; 2747 - dwc->gadget.max_speed = USB_SPEED_SUPER; 2748 2747 dwc->gadget.speed = USB_SPEED_UNKNOWN; 2749 2748 dwc->gadget.sg_supported = true; 2750 2749 dwc->gadget.name = "dwc3-gadget"; 2750 + 2751 + /* 2752 + * FIXME We might be setting max_speed to <SUPER, however versions 2753 + * <2.20a of dwc3 have an issue with metastability (documented 2754 + * elsewhere in this driver) which tells us we can't set max speed to 2755 + * anything lower than SUPER. 2756 + * 2757 + * Because gadget.max_speed is only used by composite.c and function 2758 + * drivers (i.e. it won't go into dwc3's registers) we are allowing this 2759 + * to happen so we avoid sending SuperSpeed Capability descriptor 2760 + * together with our BOS descriptor as that could confuse host into 2761 + * thinking we can handle super speed. 2762 + * 2763 + * Note that, in fact, we won't even support GetBOS requests when speed 2764 + * is less than super speed because we don't have means, yet, to tell 2765 + * composite.c that we are USB 2.0 + LPM ECN. 2766 + */ 2767 + if (dwc->revision < DWC3_REVISION_220A) 2768 + dwc3_trace(trace_dwc3_gadget, 2769 + "Changing max_speed on rev %08x\n", 2770 + dwc->revision); 2771 + 2772 + dwc->gadget.max_speed = dwc->maximum_speed; 2751 2773 2752 2774 /* 2753 2775 * Per databook, DWC3 needs buffer size to be aligned to MaxPacketSize
+1 -1
drivers/usb/gadget/function/f_loopback.c
··· 329 329 for (i = 0; i < loop->qlen && result == 0; i++) { 330 330 result = -ENOMEM; 331 331 332 - in_req = usb_ep_alloc_request(loop->in_ep, GFP_KERNEL); 332 + in_req = usb_ep_alloc_request(loop->in_ep, GFP_ATOMIC); 333 333 if (!in_req) 334 334 goto fail; 335 335
+1 -1
drivers/usb/gadget/udc/atmel_usba_udc.c
··· 1633 1633 spin_lock(&udc->lock); 1634 1634 1635 1635 int_enb = usba_int_enb_get(udc); 1636 - status = usba_readl(udc, INT_STA) & int_enb; 1636 + status = usba_readl(udc, INT_STA) & (int_enb | USBA_HIGH_SPEED); 1637 1637 DBG(DBG_INT, "irq, status=%#08x\n", status); 1638 1638 1639 1639 if (status & USBA_DET_SUSPEND) {
+9 -6
drivers/usb/host/xhci-hub.c
··· 782 782 status |= USB_PORT_STAT_SUSPEND; 783 783 } 784 784 } 785 - if ((raw_port_status & PORT_PLS_MASK) == XDEV_U0 786 - && (raw_port_status & PORT_POWER) 787 - && (bus_state->suspended_ports & (1 << wIndex))) { 788 - bus_state->suspended_ports &= ~(1 << wIndex); 789 - if (hcd->speed < HCD_USB3) 790 - bus_state->port_c_suspend |= 1 << wIndex; 785 + if ((raw_port_status & PORT_PLS_MASK) == XDEV_U0 && 786 + (raw_port_status & PORT_POWER)) { 787 + if (bus_state->suspended_ports & (1 << wIndex)) { 788 + bus_state->suspended_ports &= ~(1 << wIndex); 789 + if (hcd->speed < HCD_USB3) 790 + bus_state->port_c_suspend |= 1 << wIndex; 791 + } 792 + bus_state->resume_done[wIndex] = 0; 793 + clear_bit(wIndex, &bus_state->resuming_ports); 791 794 } 792 795 if (raw_port_status & PORT_CONNECT) { 793 796 status |= USB_PORT_STAT_CONNECTION;
+6 -26
drivers/usb/host/xhci-ring.c
··· 3896 3896 return ret; 3897 3897 } 3898 3898 3899 - static int ep_ring_is_processing(struct xhci_hcd *xhci, 3900 - int slot_id, unsigned int ep_index) 3901 - { 3902 - struct xhci_virt_device *xdev; 3903 - struct xhci_ring *ep_ring; 3904 - struct xhci_ep_ctx *ep_ctx; 3905 - struct xhci_virt_ep *xep; 3906 - dma_addr_t hw_deq; 3907 - 3908 - xdev = xhci->devs[slot_id]; 3909 - xep = &xhci->devs[slot_id]->eps[ep_index]; 3910 - ep_ring = xep->ring; 3911 - ep_ctx = xhci_get_ep_ctx(xhci, xdev->out_ctx, ep_index); 3912 - 3913 - if ((le32_to_cpu(ep_ctx->ep_info) & EP_STATE_MASK) != EP_STATE_RUNNING) 3914 - return 0; 3915 - 3916 - hw_deq = le64_to_cpu(ep_ctx->deq) & ~EP_CTX_CYCLE_MASK; 3917 - return (hw_deq != 3918 - xhci_trb_virt_to_dma(ep_ring->enq_seg, ep_ring->enqueue)); 3919 - } 3920 - 3921 3899 /* 3922 3900 * Check transfer ring to guarantee there is enough room for the urb. 3923 3901 * Update ISO URB start_frame and interval. ··· 3961 3983 } 3962 3984 3963 3985 /* Calculate the start frame and put it in urb->start_frame. */ 3964 - if (HCC_CFC(xhci->hcc_params) && 3965 - ep_ring_is_processing(xhci, slot_id, ep_index)) { 3966 - urb->start_frame = xep->next_frame_id; 3967 - goto skip_start_over; 3986 + if (HCC_CFC(xhci->hcc_params) && !list_empty(&ep_ring->td_list)) { 3987 + if ((le32_to_cpu(ep_ctx->ep_info) & EP_STATE_MASK) == 3988 + EP_STATE_RUNNING) { 3989 + urb->start_frame = xep->next_frame_id; 3990 + goto skip_start_over; 3991 + } 3968 3992 } 3969 3993 3970 3994 start_frame = readl(&xhci->run_regs->microframe_index);
+10
drivers/usb/host/xhci.c
··· 175 175 command |= CMD_RESET;
176 176 writel(command, &xhci->op_regs->command);
177 177
178 + /* Existing Intel xHCI controllers require a delay of 1 ms,
179 + * after setting the CMD_RESET bit, and before accessing any
180 + * HC registers. This allows the HC to complete the
181 + * reset operation and be ready for HC register access.
182 + * Without this delay, the subsequent HC register access
183 + * may very rarely result in a system hang.
184 + */
185 + if (xhci->quirks & XHCI_INTEL_HOST)
186 + udelay(1000);
187 +
178 188 ret = xhci_handshake(&xhci->op_regs->command,
179 189 CMD_RESET, 0, 10 * 1000 * 1000);
180 190 if (ret)
+6 -6
drivers/usb/musb/musb_core.c
··· 132 132 /*-------------------------------------------------------------------------*/ 133 133 134 134 #ifndef CONFIG_BLACKFIN 135 - static int musb_ulpi_read(struct usb_phy *phy, u32 offset) 135 + static int musb_ulpi_read(struct usb_phy *phy, u32 reg) 136 136 { 137 137 void __iomem *addr = phy->io_priv; 138 138 int i = 0; ··· 151 151 * ULPICarKitControlDisableUTMI after clearing POWER_SUSPENDM. 152 152 */ 153 153 154 - musb_writeb(addr, MUSB_ULPI_REG_ADDR, (u8)offset); 154 + musb_writeb(addr, MUSB_ULPI_REG_ADDR, (u8)reg); 155 155 musb_writeb(addr, MUSB_ULPI_REG_CONTROL, 156 156 MUSB_ULPI_REG_REQ | MUSB_ULPI_RDN_WR); 157 157 ··· 176 176 return ret; 177 177 } 178 178 179 - static int musb_ulpi_write(struct usb_phy *phy, u32 offset, u32 data) 179 + static int musb_ulpi_write(struct usb_phy *phy, u32 val, u32 reg) 180 180 { 181 181 void __iomem *addr = phy->io_priv; 182 182 int i = 0; ··· 191 191 power &= ~MUSB_POWER_SUSPENDM; 192 192 musb_writeb(addr, MUSB_POWER, power); 193 193 194 - musb_writeb(addr, MUSB_ULPI_REG_ADDR, (u8)offset); 195 - musb_writeb(addr, MUSB_ULPI_REG_DATA, (u8)data); 194 + musb_writeb(addr, MUSB_ULPI_REG_ADDR, (u8)reg); 195 + musb_writeb(addr, MUSB_ULPI_REG_DATA, (u8)val); 196 196 musb_writeb(addr, MUSB_ULPI_REG_CONTROL, MUSB_ULPI_REG_REQ); 197 197 198 198 while (!(musb_readb(addr, MUSB_ULPI_REG_CONTROL) ··· 1668 1668 static bool use_dma = 1; 1669 1669 1670 1670 /* "modprobe ... use_dma=0" etc */ 1671 - module_param(use_dma, bool, 0); 1671 + module_param(use_dma, bool, 0644); 1672 1672 MODULE_PARM_DESC(use_dma, "enable/disable use of DMA"); 1673 1673 1674 1674 void musb_dma_completion(struct musb *musb, u8 epnum, u8 transmit)
+16 -6
drivers/usb/musb/musb_host.c
··· 112 112 struct musb *musb = ep->musb;
113 113 void __iomem *epio = ep->regs;
114 114 u16 csr;
115 - u16 lastcsr = 0;
116 115 int retries = 1000;
117 116
118 117 csr = musb_readw(epio, MUSB_TXCSR);
119 118 while (csr & MUSB_TXCSR_FIFONOTEMPTY) {
120 - if (csr != lastcsr)
121 - dev_dbg(musb->controller, "Host TX FIFONOTEMPTY csr: %02x\n", csr);
122 - lastcsr = csr;
123 119 csr |= MUSB_TXCSR_FLUSHFIFO | MUSB_TXCSR_TXPKTRDY;
124 120 musb_writew(epio, MUSB_TXCSR, csr);
125 121 csr = musb_readw(epio, MUSB_TXCSR);
126 122
123 + /*
124 + * FIXME: sometimes the tx fifo flush fails; this has been
125 + * observed during device disconnect on AM335x.
126 + *
127 + * To reproduce the issue, ensure tx urb(s) are queued when
128 + * unplugging the usb device which is connected to the AM335x
129 + * usb host port.
130 + *
131 + * I found that using a usb-ethernet device and running iperf
132 + * (client on AM335x) has a very high chance to trigger it.
133 + *
134 + * Better to turn on dev_dbg() in musb_cleanup_urb() with
135 + * CPPI enabled to see the issue when aborting the tx channel.
136 + */
137 + if (dev_WARN_ONCE(musb->controller, retries-- < 1,
127 138 "Could not flush host TX%d fifo: csr: %04x\n",
128 139 ep->epnum, csr))
129 140 return;
130 - mdelay(1);
131 141 }
132 142 }
+1 -3
drivers/usb/phy/Kconfig
··· 21 21 config FSL_USB2_OTG
22 22 bool "Freescale USB OTG Transceiver Driver"
23 23 depends on USB_EHCI_FSL && USB_FSL_USB2 && USB_OTG_FSM && PM
24 - select USB_OTG
25 24 select USB_PHY
26 25 help
27 26 Enable this to support Freescale USB OTG transceiver.
··· 167 168
168 169 config USB_MV_OTG
169 170 tristate "Marvell USB OTG support"
170 - depends on USB_EHCI_MV && USB_MV_UDC && PM
171 - select USB_OTG
171 + depends on USB_EHCI_MV && USB_MV_UDC && PM && USB_OTG
172 172 select USB_PHY
173 173 help
174 174 Say Y here if you want to build Marvell USB OTG transceiver
+5 -2
drivers/usb/phy/phy-mxs-usb.c
··· 452 452 struct clk *clk; 453 453 struct mxs_phy *mxs_phy; 454 454 int ret; 455 - const struct of_device_id *of_id = 456 - of_match_device(mxs_phy_dt_ids, &pdev->dev); 455 + const struct of_device_id *of_id; 457 456 struct device_node *np = pdev->dev.of_node; 457 + 458 + of_id = of_match_device(mxs_phy_dt_ids, &pdev->dev); 459 + if (!of_id) 460 + return -ENODEV; 458 461 459 462 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 460 463 base = devm_ioremap_resource(&pdev->dev, res);
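Editorial note: this hunk and the earlier chipidea ones make the same hardening change: of_match_device() can return NULL, so its result must be checked before dereferencing ->data. A standalone sketch of the lookup-and-check (the table type here is a simplified stand-in for the kernel's):

#include <stdio.h>
#include <string.h>

struct of_device_id { const char *compatible; const void *data; };

/* stand-in for of_match_device(): NULL when nothing matches */
static const struct of_device_id *
match_device(const struct of_device_id *ids, const char *compat)
{
	for (; ids->compatible; ids++)
		if (strcmp(ids->compatible, compat) == 0)
			return ids;
	return NULL;
}

int main(void)
{
	static const int imx6_cfg = 6;
	static const struct of_device_id ids[] = {
		{ "fsl,imx6q-usbphy", &imx6_cfg },
		{ NULL, NULL },
	};
	const struct of_device_id *of_id = match_device(ids, "vendor,unknown");

	if (!of_id)               /* the check both hunks add */
		return 1;         /* the drivers return -ENODEV here */
	printf("probing with data %d\n", *(const int *)of_id->data);
	return 0;
}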
+1 -1
drivers/usb/phy/phy-omap-otg.c
··· 105 105 extcon = extcon_get_extcon_dev(config->extcon); 106 106 if (!extcon) 107 107 return -EPROBE_DEFER; 108 - otg_dev->extcon = extcon; 109 108 110 109 otg_dev = devm_kzalloc(&pdev->dev, sizeof(*otg_dev), GFP_KERNEL); 111 110 if (!otg_dev) ··· 114 115 if (IS_ERR(otg_dev->base)) 115 116 return PTR_ERR(otg_dev->base); 116 117 118 + otg_dev->extcon = extcon; 117 119 otg_dev->id_nb.notifier_call = omap_otg_id_notifier; 118 120 otg_dev->vbus_nb.notifier_call = omap_otg_vbus_notifier; 119 121
+11
drivers/usb/serial/option.c
··· 161 161 #define NOVATELWIRELESS_PRODUCT_HSPA_EMBEDDED_HIGHSPEED 0x9001 162 162 #define NOVATELWIRELESS_PRODUCT_E362 0x9010 163 163 #define NOVATELWIRELESS_PRODUCT_E371 0x9011 164 + #define NOVATELWIRELESS_PRODUCT_U620L 0x9022 164 165 #define NOVATELWIRELESS_PRODUCT_G2 0xA010 165 166 #define NOVATELWIRELESS_PRODUCT_MC551 0xB001 166 167 ··· 355 354 /* This is the 4G XS Stick W14 a.k.a. Mobilcom Debitel Surf-Stick * 356 355 * It seems to contain a Qualcomm QSC6240/6290 chipset */ 357 356 #define FOUR_G_SYSTEMS_PRODUCT_W14 0x9603 357 + #define FOUR_G_SYSTEMS_PRODUCT_W100 0x9b01 358 358 359 359 /* iBall 3.5G connect wireless modem */ 360 360 #define IBALL_3_5G_CONNECT 0x9605 ··· 519 517 520 518 static const struct option_blacklist_info four_g_w14_blacklist = { 521 519 .sendsetup = BIT(0) | BIT(1), 520 + }; 521 + 522 + static const struct option_blacklist_info four_g_w100_blacklist = { 523 + .sendsetup = BIT(1) | BIT(2), 524 + .reserved = BIT(3), 522 525 }; 523 526 524 527 static const struct option_blacklist_info alcatel_x200_blacklist = { ··· 1059 1052 { USB_DEVICE_AND_INTERFACE_INFO(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_MC551, 0xff, 0xff, 0xff) }, 1060 1053 { USB_DEVICE_AND_INTERFACE_INFO(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_E362, 0xff, 0xff, 0xff) }, 1061 1054 { USB_DEVICE_AND_INTERFACE_INFO(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_E371, 0xff, 0xff, 0xff) }, 1055 + { USB_DEVICE_AND_INTERFACE_INFO(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_U620L, 0xff, 0x00, 0x00) }, 1062 1056 1063 1057 { USB_DEVICE(AMOI_VENDOR_ID, AMOI_PRODUCT_H01) }, 1064 1058 { USB_DEVICE(AMOI_VENDOR_ID, AMOI_PRODUCT_H01A) }, ··· 1649 1641 { USB_DEVICE(LONGCHEER_VENDOR_ID, FOUR_G_SYSTEMS_PRODUCT_W14), 1650 1642 .driver_info = (kernel_ulong_t)&four_g_w14_blacklist 1651 1643 }, 1644 + { USB_DEVICE(LONGCHEER_VENDOR_ID, FOUR_G_SYSTEMS_PRODUCT_W100), 1645 + .driver_info = (kernel_ulong_t)&four_g_w100_blacklist 1646 + }, 1652 1647 { USB_DEVICE_INTERFACE_CLASS(LONGCHEER_VENDOR_ID, SPEEDUP_PRODUCT_SU9800, 0xff) }, 1653 1648 { USB_DEVICE(LONGCHEER_VENDOR_ID, ZOOM_PRODUCT_4597) }, 1654 1649 { USB_DEVICE(LONGCHEER_VENDOR_ID, IBALL_3_5G_CONNECT) },
+74 -20
drivers/usb/serial/qcserial.c
··· 22 22 #define DRIVER_AUTHOR "Qualcomm Inc"
23 23 #define DRIVER_DESC "Qualcomm USB Serial driver"
24 24
25 + #define QUECTEL_EC20_PID 0x9215
26 +
25 27 /* standard device layouts supported by this driver */
26 28 enum qcserial_layouts {
27 29 QCSERIAL_G2K = 0, /* Gobi 2000 */
··· 173 171 };
174 172 MODULE_DEVICE_TABLE(usb, id_table);
175 173
174 + static int handle_quectel_ec20(struct device *dev, int ifnum)
175 + {
176 + int altsetting = 0;
177 +
178 + /*
179 + * Quectel EC20 Mini PCIe LTE module layout:
180 + * 0: DM/DIAG (use libqcdm from ModemManager for communication)
181 + * 1: NMEA
182 + * 2: AT-capable modem port
183 + * 3: Modem interface
184 + * 4: NDIS
185 + */
186 + switch (ifnum) {
187 + case 0:
188 + dev_dbg(dev, "Quectel EC20 DM/DIAG interface found\n");
189 + break;
190 + case 1:
191 + dev_dbg(dev, "Quectel EC20 NMEA GPS interface found\n");
192 + break;
193 + case 2:
194 + case 3:
195 + dev_dbg(dev, "Quectel EC20 Modem port found\n");
196 + break;
197 + case 4:
198 + /* Don't claim the QMI/net interface */
199 + altsetting = -1;
200 + break;
201 + }
202 +
203 + return altsetting;
204 + }
205 +
176 206 static int qcprobe(struct usb_serial *serial, const struct usb_device_id *id)
177 207 {
178 208 struct usb_host_interface *intf = serial->interface->cur_altsetting;
··· 214 180 __u8 ifnum;
215 181 int altsetting = -1;
216 182 bool sendsetup = false;
183 +
184 + /* we only support vendor specific functions */
185 + if (intf->desc.bInterfaceClass != USB_CLASS_VENDOR_SPEC)
186 + goto done;
217 187
218 188 nintf = serial->dev->actconfig->desc.bNumInterfaces;
219 189 dev_dbg(dev, "Num Interfaces = %d\n", nintf);
··· 278 240 altsetting = -1;
279 241 break;
280 242 case QCSERIAL_G2K:
243 + /* handle non-standard layouts */
244 + if (nintf == 5 && id->idProduct == QUECTEL_EC20_PID) {
245 + altsetting = handle_quectel_ec20(dev, ifnum);
246 + goto done;
247 + }
248 +
281 249 /*
282 250 * Gobi 2K+ USB layout:
283 251 * 0: QMI/net
··· 345 301 break;
346 302 case QCSERIAL_HWI:
347 303 /*
348 - * Huawei layout:
349 - * 0: AT-capable modem port
350 - * 1: DM/DIAG
351 - * 2: AT-capable modem port
352 - * 3: CCID-compatible PCSC interface
353 - * 4: QMI/net
354 - * 5: NMEA
304 + * Huawei devices map functions by subclass + protocol
305 + * instead of interface numbers. The protocol identifies
306 + * a specific function, while the subclass indicates a
307 + * specific firmware source.
308 + *
309 + * This is a blacklist of functions known to be
310 + * non-serial. 
The rest are assumed to be serial and 311 + * will be handled by this driver 355 312 */ 356 - switch (ifnum) { 357 - case 0: 358 - case 2: 359 - dev_dbg(dev, "Modem port found\n"); 360 - break; 361 - case 1: 362 - dev_dbg(dev, "DM/DIAG interface found\n"); 363 - break; 364 - case 5: 365 - dev_dbg(dev, "NMEA GPS interface found\n"); 366 - break; 367 - default: 368 - /* don't claim any unsupported interface */ 313 + switch (intf->desc.bInterfaceProtocol) { 314 + /* QMI combined (qmi_wwan) */ 315 + case 0x07: 316 + case 0x37: 317 + case 0x67: 318 + /* QMI data (qmi_wwan) */ 319 + case 0x08: 320 + case 0x38: 321 + case 0x68: 322 + /* QMI control (qmi_wwan) */ 323 + case 0x09: 324 + case 0x39: 325 + case 0x69: 326 + /* NCM like (huawei_cdc_ncm) */ 327 + case 0x16: 328 + case 0x46: 329 + case 0x76: 369 330 altsetting = -1; 370 331 break; 332 + default: 333 + dev_dbg(dev, "Huawei type serial port found (%02x/%02x/%02x)\n", 334 + intf->desc.bInterfaceClass, 335 + intf->desc.bInterfaceSubClass, 336 + intf->desc.bInterfaceProtocol); 371 337 } 372 338 break; 373 339 default:
+2
drivers/usb/serial/ti_usb_3410_5052.c
··· 159 159 { USB_DEVICE(ABBOTT_VENDOR_ID, ABBOTT_STEREO_PLUG_ID) }, 160 160 { USB_DEVICE(ABBOTT_VENDOR_ID, ABBOTT_STRIP_PORT_ID) }, 161 161 { USB_DEVICE(TI_VENDOR_ID, FRI2_PRODUCT_ID) }, 162 + { USB_DEVICE(HONEYWELL_VENDOR_ID, HONEYWELL_HGI80_PRODUCT_ID) }, 162 163 { } /* terminator */ 163 164 }; 164 165 ··· 192 191 { USB_DEVICE(ABBOTT_VENDOR_ID, ABBOTT_PRODUCT_ID) }, 193 192 { USB_DEVICE(ABBOTT_VENDOR_ID, ABBOTT_STRIP_PORT_ID) }, 194 193 { USB_DEVICE(TI_VENDOR_ID, FRI2_PRODUCT_ID) }, 194 + { USB_DEVICE(HONEYWELL_VENDOR_ID, HONEYWELL_HGI80_PRODUCT_ID) }, 195 195 { } /* terminator */ 196 196 }; 197 197
+4
drivers/usb/serial/ti_usb_3410_5052.h
··· 56 56 #define ABBOTT_PRODUCT_ID ABBOTT_STEREO_PLUG_ID 57 57 #define ABBOTT_STRIP_PORT_ID 0x3420 58 58 59 + /* Honeywell vendor and product IDs */ 60 + #define HONEYWELL_VENDOR_ID 0x10ac 61 + #define HONEYWELL_HGI80_PRODUCT_ID 0x0102 /* Honeywell HGI80 */ 62 + 59 63 /* Commands */ 60 64 #define TI_GET_VERSION 0x01 61 65 #define TI_GET_PORT_STATUS 0x02
+1 -1
drivers/watchdog/Kconfig
··· 446 446 447 447 config IMX2_WDT 448 448 tristate "IMX2+ Watchdog" 449 - depends on ARCH_MXC 449 + depends on ARCH_MXC || ARCH_LAYERSCAPE 450 450 select REGMAP_MMIO 451 451 select WATCHDOG_CORE 452 452 help
+1
drivers/watchdog/mtk_wdt.c
··· 123 123 124 124 reg = readl(wdt_base + WDT_MODE); 125 125 reg &= ~WDT_MODE_EN; 126 + reg |= WDT_MODE_KEY; 126 127 iowrite32(reg, wdt_base + WDT_MODE); 127 128 128 129 return 0;
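Editorial note: the one-line mtk_wdt fix matters because the watchdog's mode register ignores writes that lack a magic key in the top bits, so the old stop path silently did nothing. A toy model of such a key-protected register (the key value and bit layout here are illustrative; the driver's actual WDT_MODE_KEY define is what must accompany every mode-register write):

#include <stdint.h>
#include <stdio.h>

#define WDT_MODE_KEY 0x22000000u  /* illustrative unlock key in the top byte */
#define WDT_MODE_EN  0x1u

static uint32_t wdt_mode_reg = WDT_MODE_EN;

/* writes without the key byte are dropped by the hardware */
static void wdt_write(uint32_t val)
{
	if ((val & 0xff000000u) != (WDT_MODE_KEY & 0xff000000u)) {
		printf("write 0x%08x ignored (missing key)\n", val);
		return;
	}
	wdt_mode_reg = val & 0x00ffffffu;
}

int main(void)
{
	uint32_t reg = wdt_mode_reg & ~WDT_MODE_EN;

	wdt_write(reg);                 /* old code: silently ignored */
	printf("enabled=%u\n", wdt_mode_reg & WDT_MODE_EN);

	wdt_write(reg | WDT_MODE_KEY);  /* fixed: the key makes it stick */
	printf("enabled=%u\n", wdt_mode_reg & WDT_MODE_EN);
	return 0;
}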
+1 -1
drivers/watchdog/omap_wdt.c
··· 205 205 206 206 static unsigned int omap_wdt_get_timeleft(struct watchdog_device *wdog) 207 207 { 208 - struct omap_wdt_dev *wdev = watchdog_get_drvdata(wdog); 208 + struct omap_wdt_dev *wdev = to_omap_wdt_dev(wdog); 209 209 void __iomem *base = wdev->base; 210 210 u32 value; 211 211
+4 -4
drivers/watchdog/pnx4008_wdt.c
··· 80 80 81 81 static DEFINE_SPINLOCK(io_lock); 82 82 static void __iomem *wdt_base; 83 - struct clk *wdt_clk; 83 + static struct clk *wdt_clk; 84 84 85 85 static int pnx4008_wdt_start(struct watchdog_device *wdd) 86 86 { ··· 161 161 if (IS_ERR(wdt_clk)) 162 162 return PTR_ERR(wdt_clk); 163 163 164 - ret = clk_enable(wdt_clk); 164 + ret = clk_prepare_enable(wdt_clk); 165 165 if (ret) 166 166 return ret; 167 167 ··· 184 184 return 0; 185 185 186 186 disable_clk: 187 - clk_disable(wdt_clk); 187 + clk_disable_unprepare(wdt_clk); 188 188 return ret; 189 189 } 190 190 ··· 192 192 { 193 193 watchdog_unregister_device(&pnx4008_wdd); 194 194 195 - clk_disable(wdt_clk); 195 + clk_disable_unprepare(wdt_clk); 196 196 197 197 return 0; 198 198 }
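Editorial note: the pnx4008 change swaps bare clk_enable()/clk_disable() for clk_prepare_enable()/clk_disable_unprepare(): under the common clock framework a clock must first be prepared (a step that may sleep) before it may be enabled (an atomic step). A stub model of that two-stage contract (not the real CCF API, just its state machine):

#include <stdio.h>

struct clk { int prepared; int enabled; };

static int clk_prepare(struct clk *c) { c->prepared = 1; return 0; }

static int clk_enable(struct clk *c)
{
	if (!c->prepared)
		return -1;  /* the CCF would WARN and fail here */
	c->enabled = 1;
	return 0;
}

static int clk_prepare_enable(struct clk *c)
{
	int ret = clk_prepare(c);
	return ret ? ret : clk_enable(c);
}

static void clk_disable_unprepare(struct clk *c)
{
	c->enabled = 0;
	c->prepared = 0;
}

int main(void)
{
	struct clk wdt_clk = { 0, 0 };

	if (clk_enable(&wdt_clk))           /* old code path: fails */
		printf("clk_enable without prepare fails\n");

	if (!clk_prepare_enable(&wdt_clk))  /* fixed path */
		printf("clock prepared and enabled\n");

	clk_disable_unprepare(&wdt_clk);
	return 0;
}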
+3 -1
drivers/watchdog/tegra_wdt.c
··· 140 140 { 141 141 wdd->timeout = timeout; 142 142 143 - if (watchdog_active(wdd)) 143 + if (watchdog_active(wdd)) { 144 + tegra_wdt_stop(wdd); 144 145 return tegra_wdt_start(wdd); 146 + } 145 147 146 148 return 0; 147 149 }
+1 -1
drivers/watchdog/w83977f_wdt.c
··· 224 224 225 225 static int wdt_set_timeout(int t) 226 226 { 227 - int tmrval; 227 + unsigned int tmrval; 228 228 229 229 /* 230 230 * Convert seconds to watchdog counter time units, rounding up.
+3 -2
drivers/xen/events/events_base.c
··· 39 39 #include <asm/irq.h> 40 40 #include <asm/idle.h> 41 41 #include <asm/io_apic.h> 42 + #include <asm/i8259.h> 42 43 #include <asm/xen/pci.h> 43 44 #endif 44 45 #include <asm/sync_bitops.h> ··· 421 420 return xen_allocate_irq_dynamic(); 422 421 423 422 /* Legacy IRQ descriptors are already allocated by the arch. */ 424 - if (gsi < NR_IRQS_LEGACY) 423 + if (gsi < nr_legacy_irqs()) 425 424 irq = gsi; 426 425 else 427 426 irq = irq_alloc_desc_at(gsi, -1); ··· 447 446 kfree(info); 448 447 449 448 /* Legacy IRQ descriptors are managed by the arch. */ 450 - if (irq < NR_IRQS_LEGACY) 449 + if (irq < nr_legacy_irqs()) 451 450 return; 452 451 453 452 irq_free_desc(irq);
+107 -16
drivers/xen/evtchn.c
··· 49 49 #include <linux/init.h> 50 50 #include <linux/mutex.h> 51 51 #include <linux/cpu.h> 52 + #include <linux/mm.h> 53 + #include <linux/vmalloc.h> 52 54 53 55 #include <xen/xen.h> 54 56 #include <xen/events.h> ··· 60 58 struct per_user_data { 61 59 struct mutex bind_mutex; /* serialize bind/unbind operations */ 62 60 struct rb_root evtchns; 61 + unsigned int nr_evtchns; 63 62 64 63 /* Notification ring, accessed via /dev/xen/evtchn. */ 65 - #define EVTCHN_RING_SIZE (PAGE_SIZE / sizeof(evtchn_port_t)) 66 - #define EVTCHN_RING_MASK(_i) ((_i)&(EVTCHN_RING_SIZE-1)) 64 + unsigned int ring_size; 67 65 evtchn_port_t *ring; 68 66 unsigned int ring_cons, ring_prod, ring_overflow; 69 67 struct mutex ring_cons_mutex; /* protect against concurrent readers */ ··· 82 80 bool enabled; 83 81 }; 84 82 83 + static evtchn_port_t *evtchn_alloc_ring(unsigned int size) 84 + { 85 + evtchn_port_t *ring; 86 + size_t s = size * sizeof(*ring); 87 + 88 + ring = kmalloc(s, GFP_KERNEL); 89 + if (!ring) 90 + ring = vmalloc(s); 91 + 92 + return ring; 93 + } 94 + 95 + static void evtchn_free_ring(evtchn_port_t *ring) 96 + { 97 + kvfree(ring); 98 + } 99 + 100 + static unsigned int evtchn_ring_offset(struct per_user_data *u, 101 + unsigned int idx) 102 + { 103 + return idx & (u->ring_size - 1); 104 + } 105 + 106 + static evtchn_port_t *evtchn_ring_entry(struct per_user_data *u, 107 + unsigned int idx) 108 + { 109 + return u->ring + evtchn_ring_offset(u, idx); 110 + } 111 + 85 112 static int add_evtchn(struct per_user_data *u, struct user_evtchn *evtchn) 86 113 { 87 114 struct rb_node **new = &(u->evtchns.rb_node), *parent = NULL; 115 + 116 + u->nr_evtchns++; 88 117 89 118 while (*new) { 90 119 struct user_evtchn *this; ··· 140 107 141 108 static void del_evtchn(struct per_user_data *u, struct user_evtchn *evtchn) 142 109 { 110 + u->nr_evtchns--; 143 111 rb_erase(&evtchn->node, &u->evtchns); 144 112 kfree(evtchn); 145 113 } ··· 178 144 179 145 spin_lock(&u->ring_prod_lock); 180 146 181 - if ((u->ring_prod - u->ring_cons) < EVTCHN_RING_SIZE) { 182 - u->ring[EVTCHN_RING_MASK(u->ring_prod)] = evtchn->port; 147 + if ((u->ring_prod - u->ring_cons) < u->ring_size) { 148 + *evtchn_ring_entry(u, u->ring_prod) = evtchn->port; 183 149 wmb(); /* Ensure ring contents visible */ 184 150 if (u->ring_cons == u->ring_prod++) { 185 151 wake_up_interruptible(&u->evtchn_wait); ··· 234 200 } 235 201 236 202 /* Byte lengths of two chunks. Chunk split (if any) is at ring wrap. */ 237 - if (((c ^ p) & EVTCHN_RING_SIZE) != 0) { 238 - bytes1 = (EVTCHN_RING_SIZE - EVTCHN_RING_MASK(c)) * 203 + if (((c ^ p) & u->ring_size) != 0) { 204 + bytes1 = (u->ring_size - evtchn_ring_offset(u, c)) * 239 205 sizeof(evtchn_port_t); 240 - bytes2 = EVTCHN_RING_MASK(p) * sizeof(evtchn_port_t); 206 + bytes2 = evtchn_ring_offset(u, p) * sizeof(evtchn_port_t); 241 207 } else { 242 208 bytes1 = (p - c) * sizeof(evtchn_port_t); 243 209 bytes2 = 0; ··· 253 219 254 220 rc = -EFAULT; 255 221 rmb(); /* Ensure that we see the port before we copy it. 
*/ 256 - if (copy_to_user(buf, &u->ring[EVTCHN_RING_MASK(c)], bytes1) || 222 + if (copy_to_user(buf, evtchn_ring_entry(u, c), bytes1) || 257 223 ((bytes2 != 0) && 258 224 copy_to_user(&buf[bytes1], &u->ring[0], bytes2))) 259 225 goto unlock_out; ··· 312 278 return rc; 313 279 } 314 280 281 + static int evtchn_resize_ring(struct per_user_data *u) 282 + { 283 + unsigned int new_size; 284 + evtchn_port_t *new_ring, *old_ring; 285 + unsigned int p, c; 286 + 287 + /* 288 + * Ensure the ring is large enough to capture all possible 289 + * events. i.e., one free slot for each bound event. 290 + */ 291 + if (u->nr_evtchns <= u->ring_size) 292 + return 0; 293 + 294 + if (u->ring_size == 0) 295 + new_size = 64; 296 + else 297 + new_size = 2 * u->ring_size; 298 + 299 + new_ring = evtchn_alloc_ring(new_size); 300 + if (!new_ring) 301 + return -ENOMEM; 302 + 303 + old_ring = u->ring; 304 + 305 + /* 306 + * Access to the ring contents is serialized by either the 307 + * prod /or/ cons lock so take both when resizing. 308 + */ 309 + mutex_lock(&u->ring_cons_mutex); 310 + spin_lock_irq(&u->ring_prod_lock); 311 + 312 + /* 313 + * Copy the old ring contents to the new ring. 314 + * 315 + * If the ring contents crosses the end of the current ring, 316 + * it needs to be copied in two chunks. 317 + * 318 + * +---------+ +------------------+ 319 + * |34567 12| -> | 1234567 | 320 + * +-----p-c-+ +------------------+ 321 + */ 322 + p = evtchn_ring_offset(u, u->ring_prod); 323 + c = evtchn_ring_offset(u, u->ring_cons); 324 + if (p < c) { 325 + memcpy(new_ring + c, u->ring + c, (u->ring_size - c) * sizeof(*u->ring)); 326 + memcpy(new_ring + u->ring_size, u->ring, p * sizeof(*u->ring)); 327 + } else 328 + memcpy(new_ring + c, u->ring + c, (p - c) * sizeof(*u->ring)); 329 + 330 + u->ring = new_ring; 331 + u->ring_size = new_size; 332 + 333 + spin_unlock_irq(&u->ring_prod_lock); 334 + mutex_unlock(&u->ring_cons_mutex); 335 + 336 + evtchn_free_ring(old_ring); 337 + 338 + return 0; 339 + } 340 + 315 341 static int evtchn_bind_to_user(struct per_user_data *u, int port) 316 342 { 317 343 struct user_evtchn *evtchn; ··· 396 302 evtchn->enabled = true; /* start enabled */ 397 303 398 304 rc = add_evtchn(u, evtchn); 305 + if (rc < 0) 306 + goto err; 307 + 308 + rc = evtchn_resize_ring(u); 399 309 if (rc < 0) 400 310 goto err; 401 311 ··· 601 503 602 504 init_waitqueue_head(&u->evtchn_wait); 603 505 604 - u->ring = (evtchn_port_t *)__get_free_page(GFP_KERNEL); 605 - if (u->ring == NULL) { 606 - kfree(u->name); 607 - kfree(u); 608 - return -ENOMEM; 609 - } 610 - 611 506 mutex_init(&u->bind_mutex); 612 507 mutex_init(&u->ring_cons_mutex); 613 508 spin_lock_init(&u->ring_prod_lock); ··· 623 532 evtchn_unbind_from_user(u, evtchn); 624 533 } 625 534 626 - free_page((unsigned long)u->ring); 535 + evtchn_free_ring(u->ring); 627 536 kfree(u->name); 628 537 kfree(u); 629 538
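In the evtchn resize above, ring_prod and ring_cons are free-running counters masked by (ring_size - 1), so the queued entries may wrap past the end of the old ring and must be carried over in two chunks, as the ASCII diagram in the hunk shows. A self-contained userspace sketch of the same grow-by-doubling idea follows; it linearizes the contents and resets the counters rather than memcpy()ing in place, and all names are invented:

    /* Userspace sketch of growing a power-of-two ring without losing
     * queued entries. Not the kernel code, just the core idea. */
    #include <stdlib.h>

    struct ring {
            unsigned int *buf;
            unsigned int size;       /* zero, or a power of two */
            unsigned int prod, cons; /* free-running counters */
    };

    static int ring_grow(struct ring *r)
    {
            unsigned int n = r->prod - r->cons; /* entries currently queued */
            unsigned int new_size = r->size ? 2 * r->size : 64;
            unsigned int *nbuf = malloc(new_size * sizeof(*nbuf));
            unsigned int i;

            if (!nbuf)
                    return -1;
            /* Copy oldest-first; masked indexing handles the wrap. */
            for (i = 0; i < n; i++)
                    nbuf[i] = r->buf[(r->cons + i) & (r->size - 1)];
            free(r->buf);
            r->buf = nbuf;
            r->size = new_size;
            r->cons = 0;
            r->prod = n;
            return 0;
    }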
+1 -1
drivers/xen/gntdev.c
··· 804 804 805 805 vma->vm_ops = &gntdev_vmops; 806 806 807 - vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP; 807 + vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP | VM_IO; 808 808 809 809 if (use_ptemod) 810 810 vma->vm_flags |= VM_DONTCOPY;
+6
fs/Kconfig
··· 46 46 or if unsure, say N. Saying Y will increase the size of the kernel 47 47 by about 5kB. 48 48 49 + config FS_DAX_PMD 50 + bool 51 + default FS_DAX 52 + depends on FS_DAX 53 + depends on BROKEN 54 + 49 55 endif # BLOCK 50 56 51 57 # Posix ACL utility routines
+16 -2
fs/block_dev.c
··· 390 390 struct page *page) 391 391 { 392 392 const struct block_device_operations *ops = bdev->bd_disk->fops; 393 + int result = -EOPNOTSUPP; 394 + 393 395 if (!ops->rw_page || bdev_get_integrity(bdev)) 394 - return -EOPNOTSUPP; 395 - return ops->rw_page(bdev, sector + get_start_sect(bdev), page, READ); 396 + return result; 397 + 398 + result = blk_queue_enter(bdev->bd_queue, GFP_KERNEL); 399 + if (result) 400 + return result; 401 + result = ops->rw_page(bdev, sector + get_start_sect(bdev), page, READ); 402 + blk_queue_exit(bdev->bd_queue); 403 + return result; 396 404 } 397 405 EXPORT_SYMBOL_GPL(bdev_read_page); 398 406 ··· 429 421 int result; 430 422 int rw = (wbc->sync_mode == WB_SYNC_ALL) ? WRITE_SYNC : WRITE; 431 423 const struct block_device_operations *ops = bdev->bd_disk->fops; 424 + 432 425 if (!ops->rw_page || bdev_get_integrity(bdev)) 433 426 return -EOPNOTSUPP; 427 + result = blk_queue_enter(bdev->bd_queue, GFP_KERNEL); 428 + if (result) 429 + return result; 430 + 434 431 set_page_writeback(page); 435 432 result = ops->rw_page(bdev, sector + get_start_sect(bdev), page, rw); 436 433 if (result) 437 434 end_page_writeback(page); 438 435 else 439 436 unlock_page(page); 437 + blk_queue_exit(bdev->bd_queue); 440 438 return result; 441 439 } 442 440 EXPORT_SYMBOL_GPL(bdev_write_page);
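The bdev_read_page()/bdev_write_page() hunks above add a queue reference around the ->rw_page call so the request queue cannot be torn down mid-I/O. blk_queue_enter() can fail (for instance while the queue is dying), and every successful enter must be matched by blk_queue_exit(). The bracket, reduced to a sketch where do_io() is a placeholder, not a real interface:

    static int guarded_io(struct request_queue *q)
    {
            int ret;

            ret = blk_queue_enter(q, GFP_KERNEL); /* may fail: dying queue */
            if (ret)
                    return ret;
            ret = do_io(q);                       /* hypothetical guarded op */
            blk_queue_exit(q);                    /* pairs with the enter */
            return ret;
    }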
+1 -1
fs/btrfs/backref.c
··· 355 355 356 356 index = srcu_read_lock(&fs_info->subvol_srcu); 357 357 358 - root = btrfs_read_fs_root_no_name(fs_info, &root_key); 358 + root = btrfs_get_fs_root(fs_info, &root_key, false); 359 359 if (IS_ERR(root)) { 360 360 srcu_read_unlock(&fs_info->subvol_srcu, index); 361 361 ret = PTR_ERR(root);
+4
fs/btrfs/ctree.h
··· 3416 3416 struct btrfs_block_group_cache *btrfs_lookup_block_group( 3417 3417 struct btrfs_fs_info *info, 3418 3418 u64 bytenr); 3419 + void btrfs_get_block_group(struct btrfs_block_group_cache *cache); 3419 3420 void btrfs_put_block_group(struct btrfs_block_group_cache *cache); 3420 3421 int get_block_group_index(struct btrfs_block_group_cache *cache); 3421 3422 struct extent_buffer *btrfs_alloc_tree_block(struct btrfs_trans_handle *trans, ··· 3480 3479 struct btrfs_root *root, u64 bytes_used, 3481 3480 u64 type, u64 chunk_objectid, u64 chunk_offset, 3482 3481 u64 size); 3482 + struct btrfs_trans_handle *btrfs_start_trans_remove_block_group( 3483 + struct btrfs_fs_info *fs_info, 3484 + const u64 chunk_offset); 3483 3485 int btrfs_remove_block_group(struct btrfs_trans_handle *trans, 3484 3486 struct btrfs_root *root, u64 group_start, 3485 3487 struct extent_map *em);
+100 -23
fs/btrfs/extent-tree.c
··· 124 124 return (cache->flags & bits) == bits; 125 125 } 126 126 127 - static void btrfs_get_block_group(struct btrfs_block_group_cache *cache) 127 + void btrfs_get_block_group(struct btrfs_block_group_cache *cache) 128 128 { 129 129 atomic_inc(&cache->count); 130 130 } ··· 5915 5915 set_extent_dirty(info->pinned_extents, 5916 5916 bytenr, bytenr + num_bytes - 1, 5917 5917 GFP_NOFS | __GFP_NOFAIL); 5918 - /* 5919 - * No longer have used bytes in this block group, queue 5920 - * it for deletion. 5921 - */ 5922 - if (old_val == 0) { 5923 - spin_lock(&info->unused_bgs_lock); 5924 - if (list_empty(&cache->bg_list)) { 5925 - btrfs_get_block_group(cache); 5926 - list_add_tail(&cache->bg_list, 5927 - &info->unused_bgs); 5928 - } 5929 - spin_unlock(&info->unused_bgs_lock); 5930 - } 5931 5918 } 5932 5919 5933 5920 spin_lock(&trans->transaction->dirty_bgs_lock); ··· 5925 5938 btrfs_get_block_group(cache); 5926 5939 } 5927 5940 spin_unlock(&trans->transaction->dirty_bgs_lock); 5941 + 5942 + /* 5943 + * No longer have used bytes in this block group, queue it for 5944 + * deletion. We do this after adding the block group to the 5945 + * dirty list to avoid races between cleaner kthread and space 5946 + * cache writeout. 5947 + */ 5948 + if (!alloc && old_val == 0) { 5949 + spin_lock(&info->unused_bgs_lock); 5950 + if (list_empty(&cache->bg_list)) { 5951 + btrfs_get_block_group(cache); 5952 + list_add_tail(&cache->bg_list, 5953 + &info->unused_bgs); 5954 + } 5955 + spin_unlock(&info->unused_bgs_lock); 5956 + } 5928 5957 5929 5958 btrfs_put_block_group(cache); 5930 5959 total -= num_bytes; ··· 8108 8105 } 8109 8106 8110 8107 /* 8111 - * TODO: Modify related function to add related node/leaf to dirty_extent_root, 8112 - * for later qgroup accounting. 8113 - * 8114 - * Current, this function does nothing. 8108 + * These may not be seen by the usual inc/dec ref code so we have to 8109 + * add them here. 
8115 8110 */ 8111 + static int record_one_subtree_extent(struct btrfs_trans_handle *trans, 8112 + struct btrfs_root *root, u64 bytenr, 8113 + u64 num_bytes) 8114 + { 8115 + struct btrfs_qgroup_extent_record *qrecord; 8116 + struct btrfs_delayed_ref_root *delayed_refs; 8117 + 8118 + qrecord = kmalloc(sizeof(*qrecord), GFP_NOFS); 8119 + if (!qrecord) 8120 + return -ENOMEM; 8121 + 8122 + qrecord->bytenr = bytenr; 8123 + qrecord->num_bytes = num_bytes; 8124 + qrecord->old_roots = NULL; 8125 + 8126 + delayed_refs = &trans->transaction->delayed_refs; 8127 + spin_lock(&delayed_refs->lock); 8128 + if (btrfs_qgroup_insert_dirty_extent(delayed_refs, qrecord)) 8129 + kfree(qrecord); 8130 + spin_unlock(&delayed_refs->lock); 8131 + 8132 + return 0; 8133 + } 8134 + 8116 8135 static int account_leaf_items(struct btrfs_trans_handle *trans, 8117 8136 struct btrfs_root *root, 8118 8137 struct extent_buffer *eb) 8119 8138 { 8120 8139 int nr = btrfs_header_nritems(eb); 8121 - int i, extent_type; 8140 + int i, extent_type, ret; 8122 8141 struct btrfs_key key; 8123 8142 struct btrfs_file_extent_item *fi; 8124 8143 u64 bytenr, num_bytes; 8144 + 8145 + /* We can be called directly from walk_up_proc() */ 8146 + if (!root->fs_info->quota_enabled) 8147 + return 0; 8125 8148 8126 8149 for (i = 0; i < nr; i++) { 8127 8150 btrfs_item_key_to_cpu(eb, &key, i); ··· 8167 8138 continue; 8168 8139 8169 8140 num_bytes = btrfs_file_extent_disk_num_bytes(eb, fi); 8141 + 8142 + ret = record_one_subtree_extent(trans, root, bytenr, num_bytes); 8143 + if (ret) 8144 + return ret; 8170 8145 } 8171 8146 return 0; 8172 8147 } ··· 8239 8206 8240 8207 /* 8241 8208 * root_eb is the subtree root and is locked before this function is called. 8242 - * TODO: Modify this function to mark all (including complete shared node) 8243 - * to dirty_extent_root to allow it get accounted in qgroup. 8244 8209 */ 8245 8210 static int account_shared_subtree(struct btrfs_trans_handle *trans, 8246 8211 struct btrfs_root *root, ··· 8316 8285 btrfs_tree_read_lock(eb); 8317 8286 btrfs_set_lock_blocking_rw(eb, BTRFS_READ_LOCK); 8318 8287 path->locks[level] = BTRFS_READ_LOCK_BLOCKING; 8288 + 8289 + ret = record_one_subtree_extent(trans, root, child_bytenr, 8290 + root->nodesize); 8291 + if (ret) 8292 + goto out; 8319 8293 } 8320 8294 8321 8295 if (level == 0) { ··· 10292 10256 return ret; 10293 10257 } 10294 10258 10259 + struct btrfs_trans_handle * 10260 + btrfs_start_trans_remove_block_group(struct btrfs_fs_info *fs_info, 10261 + const u64 chunk_offset) 10262 + { 10263 + struct extent_map_tree *em_tree = &fs_info->mapping_tree.map_tree; 10264 + struct extent_map *em; 10265 + struct map_lookup *map; 10266 + unsigned int num_items; 10267 + 10268 + read_lock(&em_tree->lock); 10269 + em = lookup_extent_mapping(em_tree, chunk_offset, 1); 10270 + read_unlock(&em_tree->lock); 10271 + ASSERT(em && em->start == chunk_offset); 10272 + 10273 + /* 10274 + * We need to reserve 3 + N units from the metadata space info in order 10275 + * to remove a block group (done at btrfs_remove_chunk() and at 10276 + * btrfs_remove_block_group()), which are used for: 10277 + * 10278 + * 1 unit for adding the free space inode's orphan (located in the tree 10279 + * of tree roots). 10280 + * 1 unit for deleting the block group item (located in the extent 10281 + * tree). 10282 + * 1 unit for deleting the free space item (located in tree of tree 10283 + * roots). 10284 + * N units for deleting N device extent items corresponding to each 10285 + * stripe (located in the device tree). 
10286 + * 10287 + * In order to remove a block group we also need to reserve units in the 10288 + * system space info in order to update the chunk tree (update one or 10289 + * more device items and remove one chunk item), but this is done at 10290 + * btrfs_remove_chunk() through a call to check_system_chunk(). 10291 + */ 10292 + map = (struct map_lookup *)em->bdev; 10293 + num_items = 3 + map->num_stripes; 10294 + free_extent_map(em); 10295 + 10296 + return btrfs_start_transaction_fallback_global_rsv(fs_info->extent_root, 10297 + num_items, 1); 10298 + } 10299 + 10295 10300 /* 10296 10301 * Process the unused_bgs list and remove any that don't have any allocated 10297 10302 * space inside of them. ··· 10399 10322 * Want to do this before we do anything else so we can recover 10400 10323 * properly if we fail to join the transaction. 10401 10324 */ 10402 - /* 1 for btrfs_orphan_reserve_metadata() */ 10403 - trans = btrfs_start_transaction(root, 1); 10325 + trans = btrfs_start_trans_remove_block_group(fs_info, 10326 + block_group->key.objectid); 10404 10327 if (IS_ERR(trans)) { 10405 10328 btrfs_dec_block_group_ro(root, block_group); 10406 10329 ret = PTR_ERR(trans);
+7 -3
fs/btrfs/file.c
··· 1882 1882 struct btrfs_log_ctx ctx; 1883 1883 int ret = 0; 1884 1884 bool full_sync = 0; 1885 - const u64 len = end - start + 1; 1885 + u64 len; 1886 1886 1887 + /* 1888 + * The range length can be represented by u64, we have to do the typecasts 1889 + * to avoid signed overflow if it's [0, LLONG_MAX] eg. from fsync() 1890 + */ 1891 + len = (u64)end - (u64)start + 1; 1887 1892 trace_btrfs_sync_file(file, datasync); 1888 1893 1889 1894 /* ··· 2076 2071 } 2077 2072 } 2078 2073 if (!full_sync) { 2079 - ret = btrfs_wait_ordered_range(inode, start, 2080 - end - start + 1); 2074 + ret = btrfs_wait_ordered_range(inode, start, len); 2081 2075 if (ret) { 2082 2076 btrfs_end_transaction(trans, root); 2083 2077 goto out;
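The typecast fix above matters at the extremes: fsync() passes the range [0, LLONG_MAX], and end - start + 1 evaluated in signed 64-bit arithmetic overflows LLONG_MAX, which is undefined behaviour. Converting both operands to u64 first yields the intended 2^63-byte length. A small userspace demonstration:

    #include <stdio.h>
    #include <stdint.h>
    #include <limits.h>

    int main(void)
    {
            int64_t start = 0, end = LLONG_MAX; /* the fsync() range */

            /* In int64_t, end - start + 1 would overflow; in uint64_t it
             * is exactly 2^63. */
            uint64_t len = (uint64_t)end - (uint64_t)start + 1;

            printf("len = %llu\n", (unsigned long long)len);
            /* prints: len = 9223372036854775808 */
            return 0;
    }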
+1 -23
fs/btrfs/inode.c
··· 4046 4046 */ 4047 4047 static struct btrfs_trans_handle *__unlink_start_trans(struct inode *dir) 4048 4048 { 4049 - struct btrfs_trans_handle *trans; 4050 4049 struct btrfs_root *root = BTRFS_I(dir)->root; 4051 - int ret; 4052 4050 4053 4051 /* 4054 4052 * 1 for the possible orphan item ··· 4055 4057 * 1 for the inode ref 4056 4058 * 1 for the inode 4057 4059 */ 4058 - trans = btrfs_start_transaction(root, 5); 4059 - if (!IS_ERR(trans) || PTR_ERR(trans) != -ENOSPC) 4060 - return trans; 4061 - 4062 - if (PTR_ERR(trans) == -ENOSPC) { 4063 - u64 num_bytes = btrfs_calc_trans_metadata_size(root, 5); 4064 - 4065 - trans = btrfs_start_transaction(root, 0); 4066 - if (IS_ERR(trans)) 4067 - return trans; 4068 - ret = btrfs_cond_migrate_bytes(root->fs_info, 4069 - &root->fs_info->trans_block_rsv, 4070 - num_bytes, 5); 4071 - if (ret) { 4072 - btrfs_end_transaction(trans, root); 4073 - return ERR_PTR(ret); 4074 - } 4075 - trans->block_rsv = &root->fs_info->trans_block_rsv; 4076 - trans->bytes_reserved = num_bytes; 4077 - } 4078 - return trans; 4060 + return btrfs_start_transaction_fallback_global_rsv(root, 5, 5); 4079 4061 } 4080 4062 4081 4063 static int btrfs_unlink(struct inode *dir, struct dentry *dentry)
+4 -1
fs/btrfs/qgroup.c
··· 993 993 mutex_lock(&fs_info->qgroup_ioctl_lock); 994 994 if (!fs_info->quota_root) 995 995 goto out; 996 - spin_lock(&fs_info->qgroup_lock); 997 996 fs_info->quota_enabled = 0; 998 997 fs_info->pending_quota_state = 0; 998 + btrfs_qgroup_wait_for_completion(fs_info); 999 + spin_lock(&fs_info->qgroup_lock); 999 1000 quota_root = fs_info->quota_root; 1000 1001 fs_info->quota_root = NULL; 1001 1002 fs_info->qgroup_flags &= ~BTRFS_QGROUP_STATUS_FLAG_ON; ··· 1461 1460 struct rb_node *parent_node = NULL; 1462 1461 struct btrfs_qgroup_extent_record *entry; 1463 1462 u64 bytenr = record->bytenr; 1463 + 1464 + assert_spin_locked(&delayed_refs->lock); 1464 1465 1465 1466 while (*p) { 1466 1467 parent_node = *p;
+56 -6
fs/btrfs/scrub.c
··· 3432 3432 static noinline_for_stack int scrub_chunk(struct scrub_ctx *sctx, 3433 3433 struct btrfs_device *scrub_dev, 3434 3434 u64 chunk_offset, u64 length, 3435 - u64 dev_offset, int is_dev_replace) 3435 + u64 dev_offset, 3436 + struct btrfs_block_group_cache *cache, 3437 + int is_dev_replace) 3436 3438 { 3437 3439 struct btrfs_mapping_tree *map_tree = 3438 3440 &sctx->dev_root->fs_info->mapping_tree; ··· 3447 3445 em = lookup_extent_mapping(&map_tree->map_tree, chunk_offset, 1); 3448 3446 read_unlock(&map_tree->map_tree.lock); 3449 3447 3450 - if (!em) 3451 - return -EINVAL; 3448 + if (!em) { 3449 + /* 3450 + * Might have been an unused block group deleted by the cleaner 3451 + * kthread or relocation. 3452 + */ 3453 + spin_lock(&cache->lock); 3454 + if (!cache->removed) 3455 + ret = -EINVAL; 3456 + spin_unlock(&cache->lock); 3457 + 3458 + return ret; 3459 + } 3452 3460 3453 3461 map = (struct map_lookup *)em->bdev; 3454 3462 if (em->start != chunk_offset) ··· 3495 3483 u64 length; 3496 3484 u64 chunk_offset; 3497 3485 int ret = 0; 3486 + int ro_set; 3498 3487 int slot; 3499 3488 struct extent_buffer *l; 3500 3489 struct btrfs_key key; ··· 3581 3568 scrub_pause_on(fs_info); 3582 3569 ret = btrfs_inc_block_group_ro(root, cache); 3583 3570 scrub_pause_off(fs_info); 3584 - if (ret) { 3571 + 3572 + if (ret == 0) { 3573 + ro_set = 1; 3574 + } else if (ret == -ENOSPC) { 3575 + /* 3576 + * btrfs_inc_block_group_ro return -ENOSPC when it 3577 + * failed in creating new chunk for metadata. 3578 + * It is not a problem for scrub/replace, because 3579 + * metadata are always cowed, and our scrub paused 3580 + * commit_transactions. 3581 + */ 3582 + ro_set = 0; 3583 + } else { 3584 + btrfs_warn(fs_info, "failed setting block group ro, ret=%d\n", 3585 + ret); 3585 3586 btrfs_put_block_group(cache); 3586 3587 break; 3587 3588 } ··· 3604 3577 dev_replace->cursor_left = found_key.offset; 3605 3578 dev_replace->item_needs_writeback = 1; 3606 3579 ret = scrub_chunk(sctx, scrub_dev, chunk_offset, length, 3607 - found_key.offset, is_dev_replace); 3580 + found_key.offset, cache, is_dev_replace); 3608 3581 3609 3582 /* 3610 3583 * flush, submit all pending read and write bios, afterwards ··· 3638 3611 3639 3612 scrub_pause_off(fs_info); 3640 3613 3641 - btrfs_dec_block_group_ro(root, cache); 3614 + if (ro_set) 3615 + btrfs_dec_block_group_ro(root, cache); 3616 + 3617 + /* 3618 + * We might have prevented the cleaner kthread from deleting 3619 + * this block group if it was already unused because we raced 3620 + * and set it to RO mode first. So add it back to the unused 3621 + * list, otherwise it might not ever be deleted unless a manual 3622 + * balance is triggered or it becomes used and unused again. 3623 + */ 3624 + spin_lock(&cache->lock); 3625 + if (!cache->removed && !cache->ro && cache->reserved == 0 && 3626 + btrfs_block_group_used(&cache->item) == 0) { 3627 + spin_unlock(&cache->lock); 3628 + spin_lock(&fs_info->unused_bgs_lock); 3629 + if (list_empty(&cache->bg_list)) { 3630 + btrfs_get_block_group(cache); 3631 + list_add_tail(&cache->bg_list, 3632 + &fs_info->unused_bgs); 3633 + } 3634 + spin_unlock(&fs_info->unused_bgs_lock); 3635 + } else { 3636 + spin_unlock(&cache->lock); 3637 + } 3642 3638 3643 3639 btrfs_put_block_group(cache); 3644 3640 if (ret)
+3 -1
fs/btrfs/tests/free-space-tests.c
··· 898 898 } 899 899 900 900 root = btrfs_alloc_dummy_root(); 901 - if (!root) 901 + if (IS_ERR(root)) { 902 + ret = PTR_ERR(root); 902 903 goto out; 904 + } 903 905 904 906 root->fs_info = btrfs_alloc_dummy_fs_info(); 905 907 if (!root->fs_info)
+32
fs/btrfs/transaction.c
··· 592 592 return start_transaction(root, num_items, TRANS_START, 593 593 BTRFS_RESERVE_FLUSH_ALL); 594 594 } 595 + struct btrfs_trans_handle *btrfs_start_transaction_fallback_global_rsv( 596 + struct btrfs_root *root, 597 + unsigned int num_items, 598 + int min_factor) 599 + { 600 + struct btrfs_trans_handle *trans; 601 + u64 num_bytes; 602 + int ret; 603 + 604 + trans = btrfs_start_transaction(root, num_items); 605 + if (!IS_ERR(trans) || PTR_ERR(trans) != -ENOSPC) 606 + return trans; 607 + 608 + trans = btrfs_start_transaction(root, 0); 609 + if (IS_ERR(trans)) 610 + return trans; 611 + 612 + num_bytes = btrfs_calc_trans_metadata_size(root, num_items); 613 + ret = btrfs_cond_migrate_bytes(root->fs_info, 614 + &root->fs_info->trans_block_rsv, 615 + num_bytes, 616 + min_factor); 617 + if (ret) { 618 + btrfs_end_transaction(trans, root); 619 + return ERR_PTR(ret); 620 + } 621 + 622 + trans->block_rsv = &root->fs_info->trans_block_rsv; 623 + trans->bytes_reserved = num_bytes; 624 + 625 + return trans; 626 + } 595 627 596 628 struct btrfs_trans_handle *btrfs_start_transaction_lflush( 597 629 struct btrfs_root *root,
+4
fs/btrfs/transaction.h
··· 185 185 struct btrfs_root *root); 186 186 struct btrfs_trans_handle *btrfs_start_transaction(struct btrfs_root *root, 187 187 unsigned int num_items); 188 + struct btrfs_trans_handle *btrfs_start_transaction_fallback_global_rsv( 189 + struct btrfs_root *root, 190 + unsigned int num_items, 191 + int min_factor); 188 192 struct btrfs_trans_handle *btrfs_start_transaction_lflush( 189 193 struct btrfs_root *root, 190 194 unsigned int num_items);
+6 -7
fs/btrfs/volumes.c
··· 1973 1973 if (srcdev->writeable) { 1974 1974 fs_devices->rw_devices--; 1975 1975 /* zero out the old super if it is writable */ 1976 - btrfs_scratch_superblocks(srcdev->bdev, 1977 - rcu_str_deref(srcdev->name)); 1976 + btrfs_scratch_superblocks(srcdev->bdev, srcdev->name->str); 1978 1977 } 1979 1978 1980 1979 if (srcdev->bdev) ··· 2023 2024 btrfs_sysfs_rm_device_link(fs_info->fs_devices, tgtdev); 2024 2025 2025 2026 if (tgtdev->bdev) { 2026 - btrfs_scratch_superblocks(tgtdev->bdev, 2027 - rcu_str_deref(tgtdev->name)); 2027 + btrfs_scratch_superblocks(tgtdev->bdev, tgtdev->name->str); 2028 2028 fs_info->fs_devices->open_devices--; 2029 2029 } 2030 2030 fs_info->fs_devices->num_devices--; ··· 2851 2853 if (ret) 2852 2854 return ret; 2853 2855 2854 - trans = btrfs_start_transaction(root, 0); 2856 + trans = btrfs_start_trans_remove_block_group(root->fs_info, 2857 + chunk_offset); 2855 2858 if (IS_ERR(trans)) { 2856 2859 ret = PTR_ERR(trans); 2857 2860 btrfs_std_error(root->fs_info, ret, NULL); ··· 3122 3123 return 1; 3123 3124 } 3124 3125 3125 - static int chunk_usage_filter(struct btrfs_fs_info *fs_info, u64 chunk_offset, 3126 + static int chunk_usage_range_filter(struct btrfs_fs_info *fs_info, u64 chunk_offset, 3126 3127 struct btrfs_balance_args *bargs) 3127 3128 { 3128 3129 struct btrfs_block_group_cache *cache; ··· 3155 3156 return ret; 3156 3157 } 3157 3158 3158 - static int chunk_usage_range_filter(struct btrfs_fs_info *fs_info, 3159 + static int chunk_usage_filter(struct btrfs_fs_info *fs_info, 3159 3160 u64 chunk_offset, struct btrfs_balance_args *bargs) 3160 3161 { 3161 3162 struct btrfs_block_group_cache *cache;
+1 -1
fs/btrfs/volumes.h
··· 382 382 #define BTRFS_BALANCE_ARGS_LIMIT (1ULL << 5) 383 383 #define BTRFS_BALANCE_ARGS_LIMIT_RANGE (1ULL << 6) 384 384 #define BTRFS_BALANCE_ARGS_STRIPES_RANGE (1ULL << 7) 385 - #define BTRFS_BALANCE_ARGS_USAGE_RANGE (1ULL << 8) 385 + #define BTRFS_BALANCE_ARGS_USAGE_RANGE (1ULL << 10) 386 386 387 387 #define BTRFS_BALANCE_ARGS_MASK \ 388 388 (BTRFS_BALANCE_ARGS_PROFILES | \
+110
fs/configfs/dir.c
··· 1636 1636 .iterate = configfs_readdir, 1637 1637 }; 1638 1638 1639 + /** 1640 + * configfs_register_group - creates a parent-child relation between two groups 1641 + * @parent_group: parent group 1642 + * @group: child group 1643 + * 1644 + * link groups, creates dentry for the child and attaches it to the 1645 + * parent dentry. 1646 + * 1647 + * Return: 0 on success, negative errno code on error 1648 + */ 1649 + int configfs_register_group(struct config_group *parent_group, 1650 + struct config_group *group) 1651 + { 1652 + struct configfs_subsystem *subsys = parent_group->cg_subsys; 1653 + struct dentry *parent; 1654 + int ret; 1655 + 1656 + mutex_lock(&subsys->su_mutex); 1657 + link_group(parent_group, group); 1658 + mutex_unlock(&subsys->su_mutex); 1659 + 1660 + parent = parent_group->cg_item.ci_dentry; 1661 + 1662 + mutex_lock_nested(&d_inode(parent)->i_mutex, I_MUTEX_PARENT); 1663 + ret = create_default_group(parent_group, group); 1664 + if (!ret) { 1665 + spin_lock(&configfs_dirent_lock); 1666 + configfs_dir_set_ready(group->cg_item.ci_dentry->d_fsdata); 1667 + spin_unlock(&configfs_dirent_lock); 1668 + } 1669 + mutex_unlock(&d_inode(parent)->i_mutex); 1670 + return ret; 1671 + } 1672 + EXPORT_SYMBOL(configfs_register_group); 1673 + 1674 + /** 1675 + * configfs_unregister_group() - unregisters a child group from its parent 1676 + * @group: parent group to be unregistered 1677 + * 1678 + * Undoes configfs_register_group() 1679 + */ 1680 + void configfs_unregister_group(struct config_group *group) 1681 + { 1682 + struct configfs_subsystem *subsys = group->cg_subsys; 1683 + struct dentry *dentry = group->cg_item.ci_dentry; 1684 + struct dentry *parent = group->cg_item.ci_parent->ci_dentry; 1685 + 1686 + mutex_lock_nested(&d_inode(parent)->i_mutex, I_MUTEX_PARENT); 1687 + spin_lock(&configfs_dirent_lock); 1688 + configfs_detach_prep(dentry, NULL); 1689 + spin_unlock(&configfs_dirent_lock); 1690 + 1691 + configfs_detach_group(&group->cg_item); 1692 + d_inode(dentry)->i_flags |= S_DEAD; 1693 + dont_mount(dentry); 1694 + d_delete(dentry); 1695 + mutex_unlock(&d_inode(parent)->i_mutex); 1696 + 1697 + dput(dentry); 1698 + 1699 + mutex_lock(&subsys->su_mutex); 1700 + unlink_group(group); 1701 + mutex_unlock(&subsys->su_mutex); 1702 + } 1703 + EXPORT_SYMBOL(configfs_unregister_group); 1704 + 1705 + /** 1706 + * configfs_register_default_group() - allocates and registers a child group 1707 + * @parent_group: parent group 1708 + * @name: child group name 1709 + * @item_type: child item type description 1710 + * 1711 + * boilerplate to allocate and register a child group with its parent. We need 1712 + * kzalloc'ed memory because child's default_group is initially empty. 
1713 + * 1714 + * Return: allocated config group or ERR_PTR() on error 1715 + */ 1716 + struct config_group * 1717 + configfs_register_default_group(struct config_group *parent_group, 1718 + const char *name, 1719 + struct config_item_type *item_type) 1720 + { 1721 + int ret; 1722 + struct config_group *group; 1723 + 1724 + group = kzalloc(sizeof(*group), GFP_KERNEL); 1725 + if (!group) 1726 + return ERR_PTR(-ENOMEM); 1727 + config_group_init_type_name(group, name, item_type); 1728 + 1729 + ret = configfs_register_group(parent_group, group); 1730 + if (ret) { 1731 + kfree(group); 1732 + return ERR_PTR(ret); 1733 + } 1734 + return group; 1735 + } 1736 + EXPORT_SYMBOL(configfs_register_default_group); 1737 + 1738 + /** 1739 + * configfs_unregister_default_group() - unregisters and frees a child group 1740 + * @group: the group to act on 1741 + */ 1742 + void configfs_unregister_default_group(struct config_group *group) 1743 + { 1744 + configfs_unregister_group(group); 1745 + kfree(group); 1746 + } 1747 + EXPORT_SYMBOL(configfs_unregister_default_group); 1748 + 1639 1749 int configfs_register_subsystem(struct configfs_subsystem *subsys) 1640 1750 { 1641 1751 int err;
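Usage note for the new configfs helpers above: a subsystem that previously open-coded link_group() plus create_default_group() can now register and tear down a child group with one call each. A sketch, where the group name and item type are placeholders:

    #include <linux/configfs.h>
    #include <linux/err.h>
    #include <linux/module.h>

    static struct config_item_type demo_child_type = {
            .ct_owner = THIS_MODULE,
    };

    static struct config_group *demo_child;

    static int demo_add_child(struct config_group *parent)
    {
            demo_child = configfs_register_default_group(parent, "features",
                                                         &demo_child_type);
            return PTR_ERR_OR_ZERO(demo_child); /* ERR_PTR() on failure */
    }

    static void demo_del_child(void)
    {
            configfs_unregister_default_group(demo_child); /* also frees it */
    }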
+4
fs/dax.c
··· 541 541 unsigned long pfn; 542 542 int result = 0; 543 543 544 + /* dax pmd mappings are broken wrt gup and fork */ 545 + if (!IS_ENABLED(CONFIG_FS_DAX_PMD)) 546 + return VM_FAULT_FALLBACK; 547 + 544 548 /* Fall back to PTEs if we're going to COW */ 545 549 if (write && !(vma->vm_flags & VM_SHARED)) 546 550 return VM_FAULT_FALLBACK;
+9 -1
fs/direct-io.c
··· 1169 1169 } 1170 1170 } 1171 1171 1172 + /* Once we sampled i_size check for reads beyond EOF */ 1173 + dio->i_size = i_size_read(inode); 1174 + if (iov_iter_rw(iter) == READ && offset >= dio->i_size) { 1175 + if (dio->flags & DIO_LOCKING) 1176 + mutex_unlock(&inode->i_mutex); 1177 + kmem_cache_free(dio_cache, dio); 1178 + goto out; 1179 + } 1180 + 1172 1181 /* 1173 1182 * For file extending writes updating i_size before data writeouts 1174 1183 * complete can expose uninitialized blocks in dumb filesystems. ··· 1231 1222 sdio.next_block_for_io = -1; 1232 1223 1233 1224 dio->iocb = iocb; 1234 - dio->i_size = i_size_read(inode); 1235 1225 1236 1226 spin_lock_init(&dio->bio_lock); 1237 1227 dio->refcount = 1;
+2 -2
fs/dlm/lowcomms.c
··· 421 421 422 422 if (test_and_clear_bit(CF_APP_LIMITED, &con->flags)) { 423 423 con->sock->sk->sk_write_pending--; 424 - clear_bit(SOCK_ASYNC_NOSPACE, &con->sock->flags); 424 + clear_bit(SOCKWQ_ASYNC_NOSPACE, &con->sock->flags); 425 425 } 426 426 427 427 if (!test_and_set_bit(CF_WRITE_PENDING, &con->flags)) ··· 1448 1448 msg_flags); 1449 1449 if (ret == -EAGAIN || ret == 0) { 1450 1450 if (ret == -EAGAIN && 1451 - test_bit(SOCK_ASYNC_NOSPACE, &con->sock->flags) && 1451 + test_bit(SOCKWQ_ASYNC_NOSPACE, &con->sock->flags) && 1452 1452 !test_and_set_bit(CF_APP_LIMITED, &con->flags)) { 1453 1453 /* Notify TCP that we're limited by the 1454 1454 * application window size.
+2
fs/ext2/super.c
··· 569 569 /* Fall through */ 570 570 case Opt_dax: 571 571 #ifdef CONFIG_FS_DAX 572 + ext2_msg(sb, KERN_WARNING, 573 + "DAX enabled. Warning: EXPERIMENTAL, use at your own risk"); 572 574 set_opt(sbi->s_mount_opt, DAX); 573 575 #else 574 576 ext2_msg(sb, KERN_INFO, "dax option not supported");
+5 -1
fs/ext4/super.c
··· 1664 1664 } 1665 1665 sbi->s_jquota_fmt = m->mount_opt; 1666 1666 #endif 1667 - #ifndef CONFIG_FS_DAX 1668 1667 } else if (token == Opt_dax) { 1668 + #ifdef CONFIG_FS_DAX 1669 + ext4_msg(sb, KERN_WARNING, 1670 + "DAX enabled. Warning: EXPERIMENTAL, use at your own risk"); 1671 + sbi->s_mount_opt |= m->mount_opt; 1672 + #else 1669 1673 ext4_msg(sb, KERN_INFO, "dax option not supported"); 1670 1674 return -1; 1671 1675 #endif
+11 -5
fs/fat/dir.c
··· 610 610 int status = fat_parse_long(inode, &cpos, &bh, &de, 611 611 &unicode, &nr_slots); 612 612 if (status < 0) { 613 - ctx->pos = cpos; 613 + bh = NULL; 614 614 ret = status; 615 - goto out; 615 + goto end_of_dir; 616 616 } else if (status == PARSE_INVALID) 617 617 goto record_end; 618 618 else if (status == PARSE_NOT_LONGNAME) ··· 654 654 fill_len = short_len; 655 655 656 656 start_filldir: 657 - if (!fake_offset) 658 - ctx->pos = cpos - (nr_slots + 1) * sizeof(struct msdos_dir_entry); 657 + ctx->pos = cpos - (nr_slots + 1) * sizeof(struct msdos_dir_entry); 658 + if (fake_offset && ctx->pos < 2) 659 + ctx->pos = 2; 659 660 660 661 if (!memcmp(de->name, MSDOS_DOT, MSDOS_NAME)) { 661 662 if (!dir_emit_dot(file, ctx)) ··· 682 681 fake_offset = 0; 683 682 ctx->pos = cpos; 684 683 goto get_new; 684 + 685 685 end_of_dir: 686 - ctx->pos = cpos; 686 + if (fake_offset && cpos < 2) 687 + ctx->pos = 2; 688 + else 689 + ctx->pos = cpos; 687 690 fill_failed: 688 691 brelse(bh); 689 692 if (unicode) 690 693 __putname(unicode); 691 694 out: 692 695 mutex_unlock(&sbi->s_lock); 696 + 693 697 return ret; 694 698 } 695 699
+32 -33
fs/hugetlbfs/inode.c
··· 332 332 * truncation is indicated by end of range being LLONG_MAX 333 333 * In this case, we first scan the range and release found pages. 334 334 * After releasing pages, hugetlb_unreserve_pages cleans up region/reserv 335 - * maps and global counts. 335 + * maps and global counts. Page faults can not race with truncation 336 + * in this routine. hugetlb_no_page() prevents page faults in the 337 + * truncated range. It checks i_size before allocation, and again after 338 + * with the page table lock for the page held. The same lock must be 339 + * acquired to unmap a page. 336 340 * hole punch is indicated if end is not LLONG_MAX 337 341 * In the hole punch case we scan the range and release found pages. 338 342 * Only when releasing a page is the associated region/reserv map 339 343 * deleted. The region/reserv map for ranges without associated 340 - * pages are not modified. 344 + * pages are not modified. Page faults can race with hole punch. 345 + * This is indicated if we find a mapped page. 341 346 * Note: If the passed end of range value is beyond the end of file, but 342 347 * not LLONG_MAX this routine still performs a hole punch operation. 343 348 */ ··· 366 361 next = start; 367 362 while (next < end) { 368 363 /* 369 - * Make sure to never grab more pages that we 370 - * might possibly need. 364 + * Don't grab more pages than the number left in the range. 371 365 */ 372 366 if (end - next < lookup_nr) 373 367 lookup_nr = end - next; 374 368 375 369 /* 376 - * This pagevec_lookup() may return pages past 'end', 377 - * so we must check for page->index > end. 370 + * When no more pages are found, we are done. 378 371 */ 379 - if (!pagevec_lookup(&pvec, mapping, next, lookup_nr)) { 380 - if (next == start) 381 - break; 382 - next = start; 383 - continue; 384 - } 372 + if (!pagevec_lookup(&pvec, mapping, next, lookup_nr)) 373 + break; 385 374 386 375 for (i = 0; i < pagevec_count(&pvec); ++i) { 387 376 struct page *page = pvec.pages[i]; 388 377 u32 hash; 378 + 379 + /* 380 + * The page (index) could be beyond end. This is 381 + * only possible in the punch hole case as end is 382 + * max page offset in the truncate case. 383 + */ 384 + next = page->index; 385 + if (next >= end) 386 + break; 389 387 390 388 hash = hugetlb_fault_mutex_hash(h, current->mm, 391 389 &pseudo_vma, ··· 396 388 mutex_lock(&hugetlb_fault_mutex_table[hash]); 397 389 398 390 lock_page(page); 399 - if (page->index >= end) { 400 - unlock_page(page); 401 - mutex_unlock(&hugetlb_fault_mutex_table[hash]); 402 - next = end; /* we are done */ 403 - break; 404 - } 405 - 406 - /* 407 - * If page is mapped, it was faulted in after being 408 - * unmapped. Do nothing in this race case. In the 409 - * normal case page is not mapped. 410 - */ 411 - if (!page_mapped(page)) { 391 + if (likely(!page_mapped(page))) { 412 392 bool rsv_on_error = !PagePrivate(page); 413 393 /* 414 394 * We must free the huge page and remove ··· 417 421 hugetlb_fix_reserve_counts( 418 422 inode, rsv_on_error); 419 423 } 424 + } else { 425 + /* 426 + * If page is mapped, it was faulted in after 427 + * being unmapped. It indicates a race between 428 + * hole punch and page fault. Do nothing in 429 + * this case. Getting here in a truncate 430 + * operation is a bug. 
431 + */ 432 + BUG_ON(truncate_op); 420 433 } 421 434 422 - if (page->index > next) 423 - next = page->index; 424 - 425 - ++next; 426 435 unlock_page(page); 427 - 428 436 mutex_unlock(&hugetlb_fault_mutex_table[hash]); 429 437 } 438 + ++next; 430 439 huge_pagevec_release(&pvec); 440 + cond_resched(); 431 441 } 432 442 433 443 if (truncate_op) ··· 649 647 if (!(mode & FALLOC_FL_KEEP_SIZE) && offset + len > inode->i_size) 650 648 i_size_write(inode, offset + len); 651 649 inode->i_ctime = CURRENT_TIME; 652 - spin_lock(&inode->i_lock); 653 - inode->i_private = NULL; 654 - spin_unlock(&inode->i_lock); 655 650 out: 656 651 mutex_unlock(&inode->i_mutex); 657 652 return error;
+2
fs/ncpfs/ioctl.c
··· 525 525 switch (rqdata.cmd) { 526 526 case NCP_LOCK_EX: 527 527 case NCP_LOCK_SH: 528 + if (rqdata.timeout < 0) 529 + return -EINVAL; 528 530 if (rqdata.timeout == 0) 529 531 rqdata.timeout = NCP_LOCK_DEFAULT_TIMEOUT; 530 532 else if (rqdata.timeout > NCP_LOCK_MAX_TIMEOUT)
+5 -2
fs/nfs/callback_xdr.c
··· 78 78 79 79 p = xdr_inline_decode(xdr, nbytes); 80 80 if (unlikely(p == NULL)) 81 - printk(KERN_WARNING "NFS: NFSv4 callback reply buffer overflowed!\n"); 81 + printk(KERN_WARNING "NFS: NFSv4 callback reply buffer overflowed " 82 + "or truncated request.\n"); 82 83 return p; 83 84 } 84 85 ··· 890 889 struct cb_compound_hdr_arg hdr_arg = { 0 }; 891 890 struct cb_compound_hdr_res hdr_res = { NULL }; 892 891 struct xdr_stream xdr_in, xdr_out; 892 + struct xdr_buf *rq_arg = &rqstp->rq_arg; 893 893 __be32 *p, status; 894 894 struct cb_process_state cps = { 895 895 .drc_status = 0, ··· 902 900 903 901 dprintk("%s: start\n", __func__); 904 902 905 - xdr_init_decode(&xdr_in, &rqstp->rq_arg, rqstp->rq_arg.head[0].iov_base); 903 + rq_arg->len = rq_arg->head[0].iov_len + rq_arg->page_len; 904 + xdr_init_decode(&xdr_in, rq_arg, rq_arg->head[0].iov_base); 906 905 907 906 p = (__be32*)((char *)rqstp->rq_res.head[0].iov_base + rqstp->rq_res.head[0].iov_len); 908 907 xdr_init_encode(&xdr_out, &rqstp->rq_res, p);
+9 -2
fs/nfs/inode.c
··· 618 618 nfs_inc_stats(inode, NFSIOS_SETATTRTRUNC); 619 619 nfs_vmtruncate(inode, attr->ia_size); 620 620 } 621 - nfs_update_inode(inode, fattr); 621 + if (fattr->valid) 622 + nfs_update_inode(inode, fattr); 623 + else 624 + NFS_I(inode)->cache_validity |= NFS_INO_INVALID_ATTR; 622 625 spin_unlock(&inode->i_lock); 623 626 } 624 627 EXPORT_SYMBOL_GPL(nfs_setattr_update_inode); ··· 1827 1824 if ((long)fattr->gencount - (long)nfsi->attr_gencount > 0) 1828 1825 nfsi->attr_gencount = fattr->gencount; 1829 1826 } 1830 - invalid &= ~NFS_INO_INVALID_ATTR; 1827 + 1828 + /* Don't declare attrcache up to date if there were no attrs! */ 1829 + if (fattr->valid != 0) 1830 + invalid &= ~NFS_INO_INVALID_ATTR; 1831 + 1831 1832 /* Don't invalidate the data if we were to blame */ 1832 1833 if (!(S_ISREG(inode->i_mode) || S_ISDIR(inode->i_mode) 1833 1834 || S_ISLNK(inode->i_mode)))
+2 -1
fs/nfs/nfs42proc.c
··· 14 14 #include "pnfs.h" 15 15 #include "internal.h" 16 16 17 - #define NFSDBG_FACILITY NFSDBG_PNFS 17 + #define NFSDBG_FACILITY NFSDBG_PROC 18 18 19 19 static int nfs42_set_rw_stateid(nfs4_stateid *dst, struct file *file, 20 20 fmode_t fmode) ··· 284 284 .dst_fh = NFS_FH(dst_inode), 285 285 .src_offset = src_offset, 286 286 .dst_offset = dst_offset, 287 + .count = count, 287 288 .dst_bitmask = server->cache_consistency_bitmask, 288 289 }; 289 290 struct nfs42_clone_res res = {
+1 -1
fs/nfs/nfs4client.c
··· 33 33 return ret; 34 34 idr_preload(GFP_KERNEL); 35 35 spin_lock(&nn->nfs_client_lock); 36 - ret = idr_alloc(&nn->cb_ident_idr, clp, 0, 0, GFP_NOWAIT); 36 + ret = idr_alloc(&nn->cb_ident_idr, clp, 1, 0, GFP_NOWAIT); 37 37 if (ret >= 0) 38 38 clp->cl_cb_ident = ret; 39 39 spin_unlock(&nn->nfs_client_lock);
+27 -32
fs/nfs/nfs4file.c
··· 7 7 #include <linux/file.h> 8 8 #include <linux/falloc.h> 9 9 #include <linux/nfs_fs.h> 10 + #include <uapi/linux/btrfs.h> /* BTRFS_IOC_CLONE/BTRFS_IOC_CLONE_RANGE */ 10 11 #include "delegation.h" 11 12 #include "internal.h" 12 13 #include "iostat.h" ··· 204 203 struct fd src_file; 205 204 struct inode *src_inode; 206 205 unsigned int bs = server->clone_blksize; 206 + bool same_inode = false; 207 207 int ret; 208 208 209 209 /* dst file must be opened for writing */ ··· 223 221 224 222 src_inode = file_inode(src_file.file); 225 223 226 - /* src and dst must be different files */ 227 - ret = -EINVAL; 228 224 if (src_inode == dst_inode) 229 - goto out_fput; 225 + same_inode = true; 230 226 231 227 /* src file must be opened for reading */ 232 228 if (!(src_file.file->f_mode & FMODE_READ)) ··· 249 249 goto out_fput; 250 250 } 251 251 252 + /* verify if ranges are overlapped within the same file */ 253 + if (same_inode) { 254 + if (dst_off + count > src_off && dst_off < src_off + count) 255 + goto out_fput; 256 + } 257 + 252 258 /* XXX: do we lock at all? what if server needs CB_RECALL_LAYOUT? */ 253 - if (dst_inode < src_inode) { 259 + if (same_inode) { 260 + mutex_lock(&src_inode->i_mutex); 261 + } else if (dst_inode < src_inode) { 254 262 mutex_lock_nested(&dst_inode->i_mutex, I_MUTEX_PARENT); 255 263 mutex_lock_nested(&src_inode->i_mutex, I_MUTEX_CHILD); 256 264 } else { ··· 283 275 truncate_inode_pages_range(&dst_inode->i_data, dst_off, dst_off + count - 1); 284 276 285 277 out_unlock: 286 - if (dst_inode < src_inode) { 278 + if (same_inode) { 279 + mutex_unlock(&src_inode->i_mutex); 280 + } else if (dst_inode < src_inode) { 287 281 mutex_unlock(&src_inode->i_mutex); 288 282 mutex_unlock(&dst_inode->i_mutex); 289 283 } else { ··· 301 291 302 292 static long nfs42_ioctl_clone_range(struct file *dst_file, void __user *argp) 303 293 { 304 - struct nfs_ioctl_clone_range_args args; 294 + struct btrfs_ioctl_clone_range_args args; 305 295 306 296 if (copy_from_user(&args, argp, sizeof(args))) 307 297 return -EFAULT; 308 298 309 - return nfs42_ioctl_clone(dst_file, args.src_fd, args.src_off, args.dst_off, args.count); 299 + return nfs42_ioctl_clone(dst_file, args.src_fd, args.src_offset, 300 + args.dest_offset, args.src_length); 310 301 } 311 - #else 312 - static long nfs42_ioctl_clone(struct file *dst_file, unsigned long srcfd, 313 - u64 src_off, u64 dst_off, u64 count) 314 - { 315 - return -ENOTTY; 316 - } 317 - 318 - static long nfs42_ioctl_clone_range(struct file *dst_file, void __user *argp) 319 - { 320 - return -ENOTTY; 321 - } 322 - #endif /* CONFIG_NFS_V4_2 */ 323 302 324 303 long nfs4_ioctl(struct file *file, unsigned int cmd, unsigned long arg) 325 304 { 326 305 void __user *argp = (void __user *)arg; 327 306 328 307 switch (cmd) { 329 - case NFS_IOC_CLONE: 308 + case BTRFS_IOC_CLONE: 330 309 return nfs42_ioctl_clone(file, arg, 0, 0, 0); 331 - case NFS_IOC_CLONE_RANGE: 310 + case BTRFS_IOC_CLONE_RANGE: 332 311 return nfs42_ioctl_clone_range(file, argp); 333 312 } 334 313 335 314 return -ENOTTY; 336 315 } 316 + #endif /* CONFIG_NFS_V4_2 */ 337 317 338 318 const struct file_operations nfs4_file_operations = { 339 - #ifdef CONFIG_NFS_V4_2 340 - .llseek = nfs4_file_llseek, 341 - #else 342 - .llseek = nfs_file_llseek, 343 - #endif 344 319 .read_iter = nfs_file_read, 345 320 .write_iter = nfs_file_write, 346 321 .mmap = nfs_file_mmap, ··· 337 342 .flock = nfs_flock, 338 343 .splice_read = nfs_file_splice_read, 339 344 .splice_write = iter_file_splice_write, 340 - #ifdef CONFIG_NFS_V4_2 
341 - .fallocate = nfs42_fallocate, 342 - #endif /* CONFIG_NFS_V4_2 */ 343 345 .check_flags = nfs_check_flags, 344 346 .setlease = simple_nosetlease, 345 - #ifdef CONFIG_COMPAT 347 + #ifdef CONFIG_NFS_V4_2 348 + .llseek = nfs4_file_llseek, 349 + .fallocate = nfs42_fallocate, 346 350 .unlocked_ioctl = nfs4_ioctl, 347 - #else 348 351 .compat_ioctl = nfs4_ioctl, 349 - #endif /* CONFIG_COMPAT */ 352 + #else 353 + .llseek = nfs_file_llseek, 354 + #endif 350 355 };
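With the switch to the btrfs ioctl numbers above, an NFS v4.2 client clones ranges through the same userspace interface that already works on btrfs, so existing tooling (e.g. cp --reflink) works unmodified. A minimal sketch with placeholder file names; the whole-file variant is ioctl(dst, BTRFS_IOC_CLONE, src):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/btrfs.h>

    int main(void)
    {
            int src = open("src.img", O_RDONLY);
            int dst = open("dst.img", O_WRONLY | O_CREAT, 0644);
            struct btrfs_ioctl_clone_range_args args = {
                    .src_fd = src,
                    .src_offset = 0,
                    .src_length = 1 << 20, /* clone the first 1 MiB */
                    .dest_offset = 0,
            };

            if (src < 0 || dst < 0)
                    return 1;
            if (ioctl(dst, BTRFS_IOC_CLONE_RANGE, &args))
                    perror("clone");
            return 0;
    }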
+1 -1
fs/nfs/nfs4proc.c
··· 7866 7866 spin_unlock(&inode->i_lock); 7867 7867 goto out_restart; 7868 7868 } 7869 - if (nfs4_async_handle_error(task, server, state, NULL) == -EAGAIN) 7869 + if (nfs4_async_handle_error(task, server, state, &lgp->timeout) == -EAGAIN) 7870 7870 goto out_restart; 7871 7871 out: 7872 7872 dprintk("<-- %s\n", __func__);
+1
fs/nfs/nfs4xdr.c
··· 3615 3615 status = 0; 3616 3616 if (unlikely(!(bitmap[0] & FATTR4_WORD0_FS_LOCATIONS))) 3617 3617 goto out; 3618 + bitmap[0] &= ~FATTR4_WORD0_FS_LOCATIONS; 3618 3619 status = -EIO; 3619 3620 /* Ignore broken servers that return unrequested attrs */ 3620 3621 if (unlikely(res == NULL))
+32 -26
fs/nfs/pnfs.c
··· 872 872 873 873 dprintk("--> %s\n", __func__); 874 874 875 - lgp = kzalloc(sizeof(*lgp), gfp_flags); 876 - if (lgp == NULL) 877 - return NULL; 878 - 879 - i_size = i_size_read(ino); 880 - 881 - lgp->args.minlength = PAGE_CACHE_SIZE; 882 - if (lgp->args.minlength > range->length) 883 - lgp->args.minlength = range->length; 884 - if (range->iomode == IOMODE_READ) { 885 - if (range->offset >= i_size) 886 - lgp->args.minlength = 0; 887 - else if (i_size - range->offset < lgp->args.minlength) 888 - lgp->args.minlength = i_size - range->offset; 889 - } 890 - lgp->args.maxcount = PNFS_LAYOUT_MAXSIZE; 891 - lgp->args.range = *range; 892 - lgp->args.type = server->pnfs_curr_ld->id; 893 - lgp->args.inode = ino; 894 - lgp->args.ctx = get_nfs_open_context(ctx); 895 - lgp->gfp_flags = gfp_flags; 896 - lgp->cred = lo->plh_lc_cred; 897 - 898 - /* Synchronously retrieve layout information from server and 899 - * store in lseg. 875 + /* 876 + * Synchronously retrieve layout information from server and 877 + * store in lseg. If we race with a concurrent seqid morphing 878 + * op, then re-send the LAYOUTGET. 900 879 */ 901 - lseg = nfs4_proc_layoutget(lgp, gfp_flags); 880 + do { 881 + lgp = kzalloc(sizeof(*lgp), gfp_flags); 882 + if (lgp == NULL) 883 + return NULL; 884 + 885 + i_size = i_size_read(ino); 886 + 887 + lgp->args.minlength = PAGE_CACHE_SIZE; 888 + if (lgp->args.minlength > range->length) 889 + lgp->args.minlength = range->length; 890 + if (range->iomode == IOMODE_READ) { 891 + if (range->offset >= i_size) 892 + lgp->args.minlength = 0; 893 + else if (i_size - range->offset < lgp->args.minlength) 894 + lgp->args.minlength = i_size - range->offset; 895 + } 896 + lgp->args.maxcount = PNFS_LAYOUT_MAXSIZE; 897 + lgp->args.range = *range; 898 + lgp->args.type = server->pnfs_curr_ld->id; 899 + lgp->args.inode = ino; 900 + lgp->args.ctx = get_nfs_open_context(ctx); 901 + lgp->gfp_flags = gfp_flags; 902 + lgp->cred = lo->plh_lc_cred; 903 + 904 + lseg = nfs4_proc_layoutget(lgp, gfp_flags); 905 + } while (lseg == ERR_PTR(-EAGAIN)); 906 + 902 907 if (IS_ERR(lseg)) { 903 908 switch (PTR_ERR(lseg)) { 904 909 case -ENOMEM: ··· 1692 1687 /* existing state ID, make sure the sequence number matches. */ 1693 1688 if (pnfs_layout_stateid_blocked(lo, &res->stateid)) { 1694 1689 dprintk("%s forget reply due to sequence\n", __func__); 1690 + status = -EAGAIN; 1695 1691 goto out_forget_reply; 1696 1692 } 1697 1693 pnfs_set_layout_stateid(lo, &res->stateid, false);
+2
fs/ocfs2/namei.c
··· 372 372 mlog_errno(status); 373 373 goto leave; 374 374 } 375 + /* update inode->i_mode after masking with "umask". */ 376 + inode->i_mode = mode; 375 377 376 378 handle = ocfs2_start_trans(osb, ocfs2_mknod_credits(osb->sb, 377 379 S_ISDIR(mode),
+8
fs/splice.c
··· 809 809 */ 810 810 static int splice_from_pipe_next(struct pipe_inode_info *pipe, struct splice_desc *sd) 811 811 { 812 + /* 813 + * Check for signal early to make process killable when there are 814 + * always buffers available 815 + */ 816 + if (signal_pending(current)) 817 + return -ERESTARTSYS; 818 + 812 819 while (!pipe->nrbufs) { 813 820 if (!pipe->writers) 814 821 return 0; ··· 891 884 892 885 splice_from_pipe_begin(sd); 893 886 do { 887 + cond_resched(); 894 888 ret = splice_from_pipe_next(pipe, sd); 895 889 if (ret > 0) 896 890 ret = splice_from_pipe_feed(pipe, sd, actor);
+2 -9
fs/sysv/inode.c
··· 162 162 inode->i_fop = &sysv_dir_operations; 163 163 inode->i_mapping->a_ops = &sysv_aops; 164 164 } else if (S_ISLNK(inode->i_mode)) { 165 - if (inode->i_blocks) { 166 - inode->i_op = &sysv_symlink_inode_operations; 167 - inode->i_mapping->a_ops = &sysv_aops; 168 - } else { 169 - inode->i_op = &simple_symlink_inode_operations; 170 - inode->i_link = (char *)SYSV_I(inode)->i_data; 171 - nd_terminate_link(inode->i_link, inode->i_size, 172 - sizeof(SYSV_I(inode)->i_data) - 1); 173 - } 165 + inode->i_op = &sysv_symlink_inode_operations; 166 + inode->i_mapping->a_ops = &sysv_aops; 174 167 } else 175 168 init_special_inode(inode, inode->i_mode, rdev); 176 169 }
+3
include/drm/drm_atomic.h
··· 136 136 137 137 void drm_atomic_legacy_backoff(struct drm_atomic_state *state); 138 138 139 + void 140 + drm_atomic_clean_old_fb(struct drm_device *dev, unsigned plane_mask, int ret); 141 + 139 142 int __must_check drm_atomic_check_only(struct drm_atomic_state *state); 140 143 int __must_check drm_atomic_commit(struct drm_atomic_state *state); 141 144 int __must_check drm_atomic_async_commit(struct drm_atomic_state *state);
+1 -1
include/kvm/arm_vgic.h
··· 342 342 struct irq_phys_map *map, bool level); 343 343 void vgic_v3_dispatch_sgi(struct kvm_vcpu *vcpu, u64 reg); 344 344 int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu); 345 - int kvm_vgic_vcpu_active_irq(struct kvm_vcpu *vcpu); 346 345 struct irq_phys_map *kvm_vgic_map_phys_irq(struct kvm_vcpu *vcpu, 347 346 int virt_irq, int irq); 348 347 int kvm_vgic_unmap_phys_irq(struct kvm_vcpu *vcpu, struct irq_phys_map *map); 348 + bool kvm_vgic_map_is_active(struct kvm_vcpu *vcpu, struct irq_phys_map *map); 349 349 350 350 #define irqchip_in_kernel(k) (!!((k)->arch.vgic.in_kernel)) 351 351 #define vgic_initialized(k) (!!((k)->arch.vgic.nr_cpus))
+2 -1
include/linux/blkdev.h
··· 773 773 extern void blk_requeue_request(struct request_queue *, struct request *); 774 774 extern void blk_add_request_payload(struct request *rq, struct page *page, 775 775 unsigned int len); 776 - extern int blk_rq_check_limits(struct request_queue *q, struct request *rq); 777 776 extern int blk_lld_busy(struct request_queue *q); 778 777 extern int blk_rq_prep_clone(struct request *rq, struct request *rq_src, 779 778 struct bio_set *bs, gfp_t gfp_mask, ··· 793 794 extern int sg_scsi_ioctl(struct request_queue *, struct gendisk *, fmode_t, 794 795 struct scsi_ioctl_command __user *); 795 796 797 + extern int blk_queue_enter(struct request_queue *q, gfp_t gfp); 798 + extern void blk_queue_exit(struct request_queue *q); 796 799 extern void blk_start_queue(struct request_queue *q); 797 800 extern void blk_stop_queue(struct request_queue *q); 798 801 extern void blk_sync_queue(struct request_queue *q);
+4 -1
include/linux/bpf.h
··· 40 40 struct user_struct *user; 41 41 const struct bpf_map_ops *ops; 42 42 struct work_struct work; 43 + atomic_t usercnt; 43 44 }; 44 45 45 46 struct bpf_map_type_list { ··· 168 167 void bpf_prog_put(struct bpf_prog *prog); 169 168 void bpf_prog_put_rcu(struct bpf_prog *prog); 170 169 171 - struct bpf_map *bpf_map_get(u32 ufd); 170 + struct bpf_map *bpf_map_get_with_uref(u32 ufd); 172 171 struct bpf_map *__bpf_map_get(struct fd f); 172 + void bpf_map_inc(struct bpf_map *map, bool uref); 173 + void bpf_map_put_with_uref(struct bpf_map *map); 173 174 void bpf_map_put(struct bpf_map *map); 174 175 175 176 extern int sysctl_unprivileged_bpf_disabled;
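A sketch (not the kernel implementation) of the two-counter pattern the new bpf prototypes above suggest: the plain refcount keeps the map alive for kernel users, while usercnt tracks user-visible handles so user-held state can be torn down as soon as the last fd goes away, before the last kernel reference drops. The release hooks here are hypothetical:

    #include <linux/atomic.h>
    #include <linux/types.h>

    struct obj {
            atomic_t refcnt;  /* keeps the object itself alive */
            atomic_t usercnt; /* counts user-visible handles (fds) */
    };

    static void obj_get(struct obj *o, bool uref)
    {
            atomic_inc(&o->refcnt);
            if (uref)
                    atomic_inc(&o->usercnt);
    }

    static void obj_put(struct obj *o, bool uref)
    {
            if (uref && atomic_dec_and_test(&o->usercnt))
                    obj_release_user_state(o); /* hypothetical: last fd gone */
            if (atomic_dec_and_test(&o->refcnt))
                    obj_free(o);               /* hypothetical: last reference */
    }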
+10
include/linux/configfs.h
··· 197 197 int configfs_register_subsystem(struct configfs_subsystem *subsys); 198 198 void configfs_unregister_subsystem(struct configfs_subsystem *subsys); 199 199 200 + int configfs_register_group(struct config_group *parent_group, 201 + struct config_group *group); 202 + void configfs_unregister_group(struct config_group *group); 203 + 204 + struct config_group * 205 + configfs_register_default_group(struct config_group *parent_group, 206 + const char *name, 207 + struct config_item_type *item_type); 208 + void configfs_unregister_default_group(struct config_group *group); 209 + 200 210 /* These functions can sleep and can alloc with GFP_KERNEL */ 201 211 /* WARNING: These cannot be called underneath configfs callbacks!! */ 202 212 int configfs_depend_item(struct configfs_subsystem *subsys, struct config_item *target);
+1 -1
include/linux/dns_resolver.h
··· 27 27 #ifdef __KERNEL__ 28 28 29 29 extern int dns_query(const char *type, const char *name, size_t namelen, 30 - const char *options, char **_result, time_t *_expiry); 30 + const char *options, char **_result, time64_t *_expiry); 31 31 32 32 #endif /* KERNEL */ 33 33
+1 -1
include/linux/gfp.h
··· 271 271 272 272 static inline bool gfpflags_allow_blocking(const gfp_t gfp_flags) 273 273 { 274 - return gfp_flags & __GFP_DIRECT_RECLAIM; 274 + return (bool __force)(gfp_flags & __GFP_DIRECT_RECLAIM); 275 275 } 276 276 277 277 #ifdef CONFIG_HIGHMEM
+1 -1
include/linux/ipv6.h
··· 227 227 struct ipv6_ac_socklist *ipv6_ac_list; 228 228 struct ipv6_fl_socklist __rcu *ipv6_fl_list; 229 229 230 - struct ipv6_txoptions *opt; 230 + struct ipv6_txoptions __rcu *opt; 231 231 struct sk_buff *pktoptions; 232 232 struct sk_buff *rxpmtu; 233 233 struct inet6_cork cork;
-33
include/linux/kref.h
··· 19 19 #include <linux/atomic.h> 20 20 #include <linux/kernel.h> 21 21 #include <linux/mutex.h> 22 - #include <linux/spinlock.h> 23 22 24 23 struct kref { 25 24 atomic_t refcount; ··· 96 97 static inline int kref_put(struct kref *kref, void (*release)(struct kref *kref)) 97 98 { 98 99 return kref_sub(kref, 1, release); 99 - } 100 - 101 - /** 102 - * kref_put_spinlock_irqsave - decrement refcount for object. 103 - * @kref: object. 104 - * @release: pointer to the function that will clean up the object when the 105 - * last reference to the object is released. 106 - * This pointer is required, and it is not acceptable to pass kfree 107 - * in as this function. 108 - * @lock: lock to take in release case 109 - * 110 - * Behaves identical to kref_put with one exception. If the reference count 111 - * drops to zero, the lock will be taken atomically wrt dropping the reference 112 - * count. The release function has to call spin_unlock() without _irqrestore. 113 - */ 114 - static inline int kref_put_spinlock_irqsave(struct kref *kref, 115 - void (*release)(struct kref *kref), 116 - spinlock_t *lock) 117 - { 118 - unsigned long flags; 119 - 120 - WARN_ON(release == NULL); 121 - if (atomic_add_unless(&kref->refcount, -1, 1)) 122 - return 0; 123 - spin_lock_irqsave(lock, flags); 124 - if (atomic_dec_and_test(&kref->refcount)) { 125 - release(kref); 126 - local_irq_restore(flags); 127 - return 1; 128 - } 129 - spin_unlock_irqrestore(lock, flags); 130 - return 0; 131 100 } 132 101 133 102 static inline int kref_put_mutex(struct kref *kref,
+11
include/linux/kvm_host.h
··· 460 460 (vcpup = kvm_get_vcpu(kvm, idx)) != NULL; \ 461 461 idx++) 462 462 463 + static inline struct kvm_vcpu *kvm_get_vcpu_by_id(struct kvm *kvm, int id) 464 + { 465 + struct kvm_vcpu *vcpu; 466 + int i; 467 + 468 + kvm_for_each_vcpu(i, vcpu, kvm) 469 + if (vcpu->vcpu_id == id) 470 + return vcpu; 471 + return NULL; 472 + } 473 + 463 474 #define kvm_for_each_memslot(memslot, slots) \ 464 475 for (memslot = &slots->memslots[0]; \ 465 476 memslot < slots->memslots + KVM_MEM_SLOTS_NUM && memslot->npages;\
+44 -132
include/linux/lightnvm.h
··· 58 58 struct nvm_id_group { 59 59 u8 mtype; 60 60 u8 fmtype; 61 - u16 res16; 62 61 u8 num_ch; 63 62 u8 num_lun; 64 63 u8 num_pln; ··· 73 74 u32 tbet; 74 75 u32 tbem; 75 76 u32 mpos; 77 + u32 mccap; 76 78 u16 cpar; 77 - u8 res[913]; 78 - } __packed; 79 + }; 79 80 80 81 struct nvm_addr_format { 81 82 u8 ch_offset; ··· 90 91 u8 pg_len; 91 92 u8 sect_offset; 92 93 u8 sect_len; 93 - u8 res[4]; 94 94 }; 95 95 96 96 struct nvm_id { 97 97 u8 ver_id; 98 98 u8 vmnt; 99 99 u8 cgrps; 100 - u8 res[5]; 101 100 u32 cap; 102 101 u32 dom; 103 102 struct nvm_addr_format ppaf; 104 - u8 ppat; 105 - u8 resv[224]; 106 103 struct nvm_id_group groups[4]; 107 104 } __packed; 108 105 ··· 118 123 #define NVM_VERSION_MINOR 0 119 124 #define NVM_VERSION_PATCH 0 120 125 121 - #define NVM_SEC_BITS (8) 122 - #define NVM_PL_BITS (6) 123 - #define NVM_PG_BITS (16) 124 126 #define NVM_BLK_BITS (16) 125 - #define NVM_LUN_BITS (10) 127 + #define NVM_PG_BITS (16) 128 + #define NVM_SEC_BITS (8) 129 + #define NVM_PL_BITS (8) 130 + #define NVM_LUN_BITS (8) 126 131 #define NVM_CH_BITS (8) 127 132 128 133 struct ppa_addr { 134 + /* Generic structure for all addresses */ 129 135 union { 130 - /* Channel-based PPA format in nand 4x2x2x2x8x10 */ 131 136 struct { 132 - u64 ch : 4; 133 - u64 sec : 2; /* 4 sectors per page */ 134 - u64 pl : 2; /* 4 planes per LUN */ 135 - u64 lun : 2; /* 4 LUNs per channel */ 136 - u64 pg : 8; /* 256 pages per block */ 137 - u64 blk : 10;/* 1024 blocks per plane */ 138 - u64 resved : 36; 139 - } chnl; 140 - 141 - /* Generic structure for all addresses */ 142 - struct { 137 + u64 blk : NVM_BLK_BITS; 138 + u64 pg : NVM_PG_BITS; 143 139 u64 sec : NVM_SEC_BITS; 144 140 u64 pl : NVM_PL_BITS; 145 - u64 pg : NVM_PG_BITS; 146 - u64 blk : NVM_BLK_BITS; 147 141 u64 lun : NVM_LUN_BITS; 148 142 u64 ch : NVM_CH_BITS; 149 143 } g; 150 144 151 145 u64 ppa; 152 146 }; 153 - } __packed; 147 + }; 154 148 155 149 struct nvm_rq { 156 150 struct nvm_tgt_instance *ins; ··· 175 191 struct nvm_block; 176 192 177 193 typedef int (nvm_l2p_update_fn)(u64, u32, __le64 *, void *); 178 - typedef int (nvm_bb_update_fn)(u32, void *, unsigned int, void *); 194 + typedef int (nvm_bb_update_fn)(struct ppa_addr, int, u8 *, void *); 179 195 typedef int (nvm_id_fn)(struct request_queue *, struct nvm_id *); 180 196 typedef int (nvm_get_l2p_tbl_fn)(struct request_queue *, u64, u32, 181 197 nvm_l2p_update_fn *, void *); 182 - typedef int (nvm_op_bb_tbl_fn)(struct request_queue *, int, unsigned int, 198 + typedef int (nvm_op_bb_tbl_fn)(struct nvm_dev *, struct ppa_addr, int, 183 199 nvm_bb_update_fn *, void *); 184 200 typedef int (nvm_op_set_bb_fn)(struct request_queue *, struct nvm_rq *, int); 185 201 typedef int (nvm_submit_io_fn)(struct request_queue *, struct nvm_rq *); ··· 194 210 nvm_id_fn *identity; 195 211 nvm_get_l2p_tbl_fn *get_l2p_tbl; 196 212 nvm_op_bb_tbl_fn *get_bb_tbl; 197 - nvm_op_set_bb_fn *set_bb; 213 + nvm_op_set_bb_fn *set_bb_tbl; 198 214 199 215 nvm_submit_io_fn *submit_io; 200 216 nvm_erase_blk_fn *erase_block; ··· 204 220 nvm_dev_dma_alloc_fn *dev_dma_alloc; 205 221 nvm_dev_dma_free_fn *dev_dma_free; 206 222 207 - uint8_t max_phys_sect; 223 + unsigned int max_phys_sect; 208 224 }; 209 225 210 226 struct nvm_lun { ··· 213 229 int lun_id; 214 230 int chnl_id; 215 231 232 + unsigned int nr_inuse_blocks; /* Number of used blocks */ 216 233 unsigned int nr_free_blocks; /* Number of unused blocks */ 234 + unsigned int nr_bad_blocks; /* Number of bad blocks */ 217 235 struct nvm_block *blocks; 218 236 219 237 spinlock_t 
lock; ··· 249 263 int blks_per_lun; 250 264 int sec_size; 251 265 int oob_size; 252 - int addr_mode; 253 - struct nvm_addr_format addr_format; 266 + struct nvm_addr_format ppaf; 254 267 255 268 /* Calculated/Cached values. These do not reflect the actual usable 256 269 * blocks at run-time. ··· 275 290 char name[DISK_NAME_LEN]; 276 291 }; 277 292 278 - /* fallback conversion */ 279 - static struct ppa_addr __generic_to_linear_addr(struct nvm_dev *dev, 280 - struct ppa_addr r) 293 + static inline struct ppa_addr generic_to_dev_addr(struct nvm_dev *dev, 294 + struct ppa_addr r) 281 295 { 282 296 struct ppa_addr l; 283 297 284 - l.ppa = r.g.sec + 285 - r.g.pg * dev->sec_per_pg + 286 - r.g.blk * (dev->pgs_per_blk * 287 - dev->sec_per_pg) + 288 - r.g.lun * (dev->blks_per_lun * 289 - dev->pgs_per_blk * 290 - dev->sec_per_pg) + 291 - r.g.ch * (dev->blks_per_lun * 292 - dev->pgs_per_blk * 293 - dev->luns_per_chnl * 294 - dev->sec_per_pg); 298 + l.ppa = ((u64)r.g.blk) << dev->ppaf.blk_offset; 299 + l.ppa |= ((u64)r.g.pg) << dev->ppaf.pg_offset; 300 + l.ppa |= ((u64)r.g.sec) << dev->ppaf.sect_offset; 301 + l.ppa |= ((u64)r.g.pl) << dev->ppaf.pln_offset; 302 + l.ppa |= ((u64)r.g.lun) << dev->ppaf.lun_offset; 303 + l.ppa |= ((u64)r.g.ch) << dev->ppaf.ch_offset; 295 304 296 305 return l; 297 306 } 298 307 299 - /* fallback conversion */ 300 - static struct ppa_addr __linear_to_generic_addr(struct nvm_dev *dev, 301 - struct ppa_addr r) 302 - { 303 - struct ppa_addr l; 304 - int secs, pgs, blks, luns; 305 - sector_t ppa = r.ppa; 306 - 307 - l.ppa = 0; 308 - 309 - div_u64_rem(ppa, dev->sec_per_pg, &secs); 310 - l.g.sec = secs; 311 - 312 - sector_div(ppa, dev->sec_per_pg); 313 - div_u64_rem(ppa, dev->sec_per_blk, &pgs); 314 - l.g.pg = pgs; 315 - 316 - sector_div(ppa, dev->pgs_per_blk); 317 - div_u64_rem(ppa, dev->blks_per_lun, &blks); 318 - l.g.blk = blks; 319 - 320 - sector_div(ppa, dev->blks_per_lun); 321 - div_u64_rem(ppa, dev->luns_per_chnl, &luns); 322 - l.g.lun = luns; 323 - 324 - sector_div(ppa, dev->luns_per_chnl); 325 - l.g.ch = ppa; 326 - 327 - return l; 328 - } 329 - 330 - static struct ppa_addr __generic_to_chnl_addr(struct ppa_addr r) 308 + static inline struct ppa_addr dev_to_generic_addr(struct nvm_dev *dev, 309 + struct ppa_addr r) 331 310 { 332 311 struct ppa_addr l; 333 312 334 - l.ppa = 0; 335 - 336 - l.chnl.sec = r.g.sec; 337 - l.chnl.pl = r.g.pl; 338 - l.chnl.pg = r.g.pg; 339 - l.chnl.blk = r.g.blk; 340 - l.chnl.lun = r.g.lun; 341 - l.chnl.ch = r.g.ch; 342 - 343 - return l; 344 - } 345 - 346 - static struct ppa_addr __chnl_to_generic_addr(struct ppa_addr r) 347 - { 348 - struct ppa_addr l; 349 - 350 - l.ppa = 0; 351 - 352 - l.g.sec = r.chnl.sec; 353 - l.g.pl = r.chnl.pl; 354 - l.g.pg = r.chnl.pg; 355 - l.g.blk = r.chnl.blk; 356 - l.g.lun = r.chnl.lun; 357 - l.g.ch = r.chnl.ch; 313 + /* 314 + * (r.ppa << X offset) & X len bitmask. X eq. blk, pg, etc. 
315 + */ 316 + l.g.blk = (r.ppa >> dev->ppaf.blk_offset) & 317 + (((1 << dev->ppaf.blk_len) - 1)); 318 + l.g.pg |= (r.ppa >> dev->ppaf.pg_offset) & 319 + (((1 << dev->ppaf.pg_len) - 1)); 320 + l.g.sec |= (r.ppa >> dev->ppaf.sect_offset) & 321 + (((1 << dev->ppaf.sect_len) - 1)); 322 + l.g.pl |= (r.ppa >> dev->ppaf.pln_offset) & 323 + (((1 << dev->ppaf.pln_len) - 1)); 324 + l.g.lun |= (r.ppa >> dev->ppaf.lun_offset) & 325 + (((1 << dev->ppaf.lun_len) - 1)); 326 + l.g.ch |= (r.ppa >> dev->ppaf.ch_offset) & 327 + (((1 << dev->ppaf.ch_len) - 1)); 358 328 359 329 return l; 360 - } 361 - 362 - static inline struct ppa_addr addr_to_generic_mode(struct nvm_dev *dev, 363 - struct ppa_addr gppa) 364 - { 365 - switch (dev->addr_mode) { 366 - case NVM_ADDRMODE_LINEAR: 367 - return __linear_to_generic_addr(dev, gppa); 368 - case NVM_ADDRMODE_CHANNEL: 369 - return __chnl_to_generic_addr(gppa); 370 - default: 371 - BUG(); 372 - } 373 - return gppa; 374 - } 375 - 376 - static inline struct ppa_addr generic_to_addr_mode(struct nvm_dev *dev, 377 - struct ppa_addr gppa) 378 - { 379 - switch (dev->addr_mode) { 380 - case NVM_ADDRMODE_LINEAR: 381 - return __generic_to_linear_addr(dev, gppa); 382 - case NVM_ADDRMODE_CHANNEL: 383 - return __generic_to_chnl_addr(gppa); 384 - default: 385 - BUG(); 386 - } 387 - return gppa; 388 330 } 389 331 390 332 static inline int ppa_empty(struct ppa_addr ppa_addr) ··· 380 468 typedef int (nvmm_erase_blk_fn)(struct nvm_dev *, struct nvm_block *, 381 469 unsigned long); 382 470 typedef struct nvm_lun *(nvmm_get_lun_fn)(struct nvm_dev *, int); 383 - typedef void (nvmm_free_blocks_print_fn)(struct nvm_dev *); 471 + typedef void (nvmm_lun_info_print_fn)(struct nvm_dev *); 384 472 385 473 struct nvmm_type { 386 474 const char *name; ··· 404 492 nvmm_get_lun_fn *get_lun; 405 493 406 494 /* Statistics */ 407 - nvmm_free_blocks_print_fn *free_blocks_print; 495 + nvmm_lun_info_print_fn *lun_info_print; 408 496 struct list_head list; 409 497 }; 410 498
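The address rework above is worth a note: the fixed 4x2x2x2x8x10 channel layout and the dual linear/channel conversion paths are replaced by a single identity-driven format, where the device reports an offset and a length per field in nvm_addr_format and generic_to_dev_addr()/dev_to_generic_addr() reduce to shifts and masks. A standalone sketch of the same round-trip, with hypothetical offsets and lengths standing in for what a device would report:

    #include <assert.h>
    #include <stdint.h>

    /* Hypothetical geometry; a real device reports these fields. */
    enum {
        SEC_OFF = 0,  SEC_LEN = 8,
        PL_OFF  = 8,  PL_LEN  = 8,
        PG_OFF  = 16, PG_LEN  = 16,
        BLK_OFF = 32, BLK_LEN = 16,
        LUN_OFF = 48, LUN_LEN = 8,
        CH_OFF  = 56, CH_LEN  = 8,
    };

    static uint64_t get_field(uint64_t ppa, int off, int len)
    {
        return (ppa >> off) & ((1ULL << len) - 1);
    }

    int main(void)
    {
        /* generic_to_dev_addr(): shift each field into place */
        uint64_t ppa = (900ULL << BLK_OFF) | (200ULL << PG_OFF) |
                       (5ULL << SEC_OFF) | (2ULL << PL_OFF) |
                       (1ULL << LUN_OFF) | (3ULL << CH_OFF);

        /* dev_to_generic_addr(): shift down and mask */
        assert(get_field(ppa, BLK_OFF, BLK_LEN) == 900);
        assert(get_field(ppa, LUN_OFF, LUN_LEN) == 1);
        assert(get_field(ppa, CH_OFF, CH_LEN) == 3);
        return 0;
    }

One detail visible in the hunk: dev_to_generic_addr() ORs most fields into an on-stack ppa_addr whose other bits are never zeroed (only blk is plainly assigned), so explicit zero-initialization, as the sketch implies, is the safer pattern.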
+9 -4
include/linux/net.h
··· 34 34 struct file; 35 35 struct net; 36 36 37 - #define SOCK_ASYNC_NOSPACE 0 38 - #define SOCK_ASYNC_WAITDATA 1 37 + /* Historically, SOCKWQ_ASYNC_NOSPACE & SOCKWQ_ASYNC_WAITDATA were located 38 + * in sock->flags, but moved into sk->sk_wq->flags to be RCU protected. 39 + * Eventually all flags will be in sk->sk_wq_flags. 40 + */ 41 + #define SOCKWQ_ASYNC_NOSPACE 0 42 + #define SOCKWQ_ASYNC_WAITDATA 1 39 43 #define SOCK_NOSPACE 2 40 44 #define SOCK_PASSCRED 3 41 45 #define SOCK_PASSSEC 4 ··· 93 89 /* Note: wait MUST be first field of socket_wq */ 94 90 wait_queue_head_t wait; 95 91 struct fasync_struct *fasync_list; 92 + unsigned long flags; /* %SOCKWQ_ASYNC_NOSPACE, etc */ 96 93 struct rcu_head rcu; 97 94 } ____cacheline_aligned_in_smp; 98 95 ··· 101 96 * struct socket - general BSD socket 102 97 * @state: socket state (%SS_CONNECTED, etc) 103 98 * @type: socket type (%SOCK_STREAM, etc) 104 - * @flags: socket flags (%SOCK_ASYNC_NOSPACE, etc) 99 + * @flags: socket flags (%SOCK_NOSPACE, etc) 105 100 * @ops: protocol specific socket operations 106 101 * @file: File back pointer for gc 107 102 * @sk: internal networking protocol agnostic socket representation ··· 207 202 SOCK_WAKE_URG, 208 203 }; 209 204 210 - int sock_wake_async(struct socket *sk, int how, int band); 205 + int sock_wake_async(struct socket_wq *sk_wq, int how, int band); 211 206 int sock_register(const struct net_proto_family *fam); 212 207 void sock_unregister(int family); 213 208 int __sock_create(struct net *net, int family, int type, int proto,
+2 -1
include/linux/netdevice.h
··· 1403 1403 * @dma: DMA channel 1404 1404 * @mtu: Interface MTU value 1405 1405 * @type: Interface hardware type 1406 - * @hard_header_len: Hardware header length 1406 + * @hard_header_len: Hardware header length, which means that this is the 1407 + * minimum size of a packet. 1407 1408 * 1408 1409 * @needed_headroom: Extra headroom the hardware may need, but not in all 1409 1410 * cases can this be guaranteed
+1
include/linux/nfs_xdr.h
··· 251 251 struct nfs4_layoutget_res res; 252 252 struct rpc_cred *cred; 253 253 gfp_t gfp_flags; 254 + long timeout; 254 255 }; 255 256 256 257 struct nfs4_getdeviceinfo_args {
+1 -1
include/linux/of_dma.h
··· 80 80 static inline struct dma_chan *of_dma_request_slave_channel(struct device_node *np, 81 81 const char *name) 82 82 { 83 - return NULL; 83 + return ERR_PTR(-ENODEV); 84 84 } 85 85 86 86 static inline struct dma_chan *of_dma_simple_xlate(struct of_phandle_args *dma_spec,
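Returning ERR_PTR(-ENODEV) from the !CONFIG_OF stub matches the error style of the real implementation, so callers keep a single IS_ERR() path regardless of configuration; the channel name below is illustrative:

    struct dma_chan *chan = of_dma_request_slave_channel(np, "rx");

    if (IS_ERR(chan))
        return PTR_ERR(chan);  /* -ENODEV when OF DMA support is compiled out */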
+9
include/linux/pci.h
··· 412 412 void (*release_fn)(struct pci_host_bridge *); 413 413 void *release_data; 414 414 unsigned int ignore_reset_delay:1; /* for entire hierarchy */ 415 + /* Resource alignment requirements */ 416 + resource_size_t (*align_resource)(struct pci_dev *dev, 417 + const struct resource *res, 418 + resource_size_t start, 419 + resource_size_t size, 420 + resource_size_t align); 415 421 }; 416 422 417 423 #define to_pci_host_bridge(n) container_of(n, struct pci_host_bridge, dev) 424 + 425 + struct pci_host_bridge *pci_find_host_bridge(struct pci_bus *bus); 426 + 418 427 void pci_set_host_bridge_release(struct pci_host_bridge *bridge, 419 428 void (*release_fn)(struct pci_host_bridge *), 420 429 void *release_data);
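The new align_resource hook moves BAR placement policy from a weak, arch-wide pcibios_align_resource() into the host bridge itself, and pci_find_host_bridge() gives resource code a way to reach the bridge from any bus under it. A hedged sketch of a host driver installing such a hook; the alignment rule and names are made up:

    static resource_size_t foo_align_resource(struct pci_dev *dev,
                                              const struct resource *res,
                                              resource_size_t start,
                                              resource_size_t size,
                                              resource_size_t align)
    {
        /* Hypothetical constraint: keep I/O BARs on 1K boundaries */
        if (res->flags & IORESOURCE_IO)
            return round_up(start, SZ_1K);
        return start;
    }

    /* in the host driver's probe path, with its struct pci_host_bridge */
    bridge->align_resource = foo_align_resource;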
+1 -1
include/linux/scpi_protocol.h
··· 71 71 int (*sensor_get_value)(u16, u32 *); 72 72 }; 73 73 74 - #if IS_ENABLED(CONFIG_ARM_SCPI_PROTOCOL) 74 + #if IS_REACHABLE(CONFIG_ARM_SCPI_PROTOCOL) 75 75 struct scpi_ops *get_scpi_ops(void); 76 76 #else 77 77 static inline struct scpi_ops *get_scpi_ops(void) { return NULL; }
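IS_ENABLED() is true for both =y and =m, so a built-in consumer could be compiled to call get_scpi_ops() even when the SCPI driver is a module and the symbol is not linkable; IS_REACHABLE() is true only when the callee can actually be linked from the calling code. Consumers keep the NULL check either way (error code illustrative):

    struct scpi_ops *ops = get_scpi_ops();

    if (!ops)
        return -ENXIO;  /* stubbed out, or provider not yet registered */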
-1
include/linux/signal.h
··· 239 239 extern void set_current_blocked(sigset_t *); 240 240 extern void __set_current_blocked(const sigset_t *); 241 241 extern int show_unhandled_signals; 242 - extern int sigsuspend(sigset_t *); 243 242 244 243 struct sigaction { 245 244 #ifndef __ARCH_HAS_IRIX_SIGACTION
+27 -18
include/linux/slab.h
··· 158 158 #endif 159 159 160 160 /* 161 + * Setting ARCH_SLAB_MINALIGN in arch headers allows a different alignment. 162 + * Intended for arches that get misalignment faults even for 64 bit integer 163 + * aligned buffers. 164 + */ 165 + #ifndef ARCH_SLAB_MINALIGN 166 + #define ARCH_SLAB_MINALIGN __alignof__(unsigned long long) 167 + #endif 168 + 169 + /* 170 + * kmalloc and friends return ARCH_KMALLOC_MINALIGN aligned 171 + * pointers. kmem_cache_alloc and friends return ARCH_SLAB_MINALIGN 172 + * aligned pointers. 173 + */ 174 + #define __assume_kmalloc_alignment __assume_aligned(ARCH_KMALLOC_MINALIGN) 175 + #define __assume_slab_alignment __assume_aligned(ARCH_SLAB_MINALIGN) 176 + #define __assume_page_alignment __assume_aligned(PAGE_SIZE) 177 + 178 + /* 161 179 * Kmalloc array related definitions 162 180 */ 163 181 ··· 304 286 } 305 287 #endif /* !CONFIG_SLOB */ 306 288 307 - void *__kmalloc(size_t size, gfp_t flags); 308 - void *kmem_cache_alloc(struct kmem_cache *, gfp_t flags); 289 + void *__kmalloc(size_t size, gfp_t flags) __assume_kmalloc_alignment; 290 + void *kmem_cache_alloc(struct kmem_cache *, gfp_t flags) __assume_slab_alignment; 309 291 void kmem_cache_free(struct kmem_cache *, void *); 310 292 311 293 /* ··· 316 298 * Note that interrupts must be enabled when calling these functions. 317 299 */ 318 300 void kmem_cache_free_bulk(struct kmem_cache *, size_t, void **); 319 - bool kmem_cache_alloc_bulk(struct kmem_cache *, gfp_t, size_t, void **); 301 + int kmem_cache_alloc_bulk(struct kmem_cache *, gfp_t, size_t, void **); 320 302 321 303 #ifdef CONFIG_NUMA 322 - void *__kmalloc_node(size_t size, gfp_t flags, int node); 323 - void *kmem_cache_alloc_node(struct kmem_cache *, gfp_t flags, int node); 304 + void *__kmalloc_node(size_t size, gfp_t flags, int node) __assume_kmalloc_alignment; 305 + void *kmem_cache_alloc_node(struct kmem_cache *, gfp_t flags, int node) __assume_slab_alignment; 324 306 #else 325 307 static __always_inline void *__kmalloc_node(size_t size, gfp_t flags, int node) 326 308 { ··· 334 316 #endif 335 317 336 318 #ifdef CONFIG_TRACING 337 - extern void *kmem_cache_alloc_trace(struct kmem_cache *, gfp_t, size_t); 319 + extern void *kmem_cache_alloc_trace(struct kmem_cache *, gfp_t, size_t) __assume_slab_alignment; 338 320 339 321 #ifdef CONFIG_NUMA 340 322 extern void *kmem_cache_alloc_node_trace(struct kmem_cache *s, 341 323 gfp_t gfpflags, 342 - int node, size_t size); 324 + int node, size_t size) __assume_slab_alignment; 343 325 #else 344 326 static __always_inline void * 345 327 kmem_cache_alloc_node_trace(struct kmem_cache *s, ··· 372 354 } 373 355 #endif /* CONFIG_TRACING */ 374 356 375 - extern void *kmalloc_order(size_t size, gfp_t flags, unsigned int order); 357 + extern void *kmalloc_order(size_t size, gfp_t flags, unsigned int order) __assume_page_alignment; 376 358 377 359 #ifdef CONFIG_TRACING 378 - extern void *kmalloc_order_trace(size_t size, gfp_t flags, unsigned int order); 360 + extern void *kmalloc_order_trace(size_t size, gfp_t flags, unsigned int order) __assume_page_alignment; 379 361 #else 380 362 static __always_inline void * 381 363 kmalloc_order_trace(size_t size, gfp_t flags, unsigned int order) ··· 499 481 #endif 500 482 return __kmalloc_node(size, flags, node); 501 483 } 502 - 503 - /* 504 - * Setting ARCH_SLAB_MINALIGN in arch headers allows a different alignment. 505 - * Intended for arches that get misalignment faults even for 64 bit integer 506 - * aligned buffers. 
507 - */ 508 - #ifndef ARCH_SLAB_MINALIGN 509 - #define ARCH_SLAB_MINALIGN __alignof__(unsigned long long) 510 - #endif 511 484 512 485 struct memcg_cache_array { 513 486 struct rcu_head rcu;
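Hoisting the ARCH_SLAB_MINALIGN default above its first use is what lets the header define __assume_slab_alignment, and the three __assume_*_alignment macros map to GCC's assume_aligned function attribute (where the compiler supports it), so alignment fixups on kmalloc() and kmem_cache_alloc() results can be dropped. The attribute in standalone form:

    #include <stdlib.h>

    /* Promise callers a 16-byte-aligned result, as
     * __assume_kmalloc_alignment does for kmalloc(). */
    __attribute__((assume_aligned(16)))
    static void *my_alloc(size_t n)
    {
        void *p;

        return posix_memalign(&p, 16, n) ? NULL : p;
    }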
+1 -1
include/linux/syscalls.h
··· 524 524 asmlinkage long sys_lchown(const char __user *filename, 525 525 uid_t user, gid_t group); 526 526 asmlinkage long sys_fchown(unsigned int fd, uid_t user, gid_t group); 527 - #ifdef CONFIG_UID16 527 + #ifdef CONFIG_HAVE_UID16 528 528 asmlinkage long sys_chown16(const char __user *filename, 529 529 old_uid_t user, old_gid_t group); 530 530 asmlinkage long sys_lchown16(const char __user *filename,
+2 -1
include/linux/thermal.h
··· 438 438 static inline int thermal_zone_bind_cooling_device( 439 439 struct thermal_zone_device *tz, int trip, 440 440 struct thermal_cooling_device *cdev, 441 - unsigned long upper, unsigned long lower) 441 + unsigned long upper, unsigned long lower, 442 + unsigned int weight) 442 443 { return -ENODEV; } 443 444 static inline int thermal_zone_unbind_cooling_device( 444 445 struct thermal_zone_device *tz, int trip,
+3 -3
include/linux/tty.h
··· 607 607 608 608 /* tty_audit.c */ 609 609 #ifdef CONFIG_AUDIT 610 - extern void tty_audit_add_data(struct tty_struct *tty, unsigned char *data, 610 + extern void tty_audit_add_data(struct tty_struct *tty, const void *data, 611 611 size_t size, unsigned icanon); 612 612 extern void tty_audit_exit(void); 613 613 extern void tty_audit_fork(struct signal_struct *sig); ··· 615 615 extern void tty_audit_push(struct tty_struct *tty); 616 616 extern int tty_audit_push_current(void); 617 617 #else 618 - static inline void tty_audit_add_data(struct tty_struct *tty, 619 - unsigned char *data, size_t size, unsigned icanon) 618 + static inline void tty_audit_add_data(struct tty_struct *tty, const void *data, 619 + size_t size, unsigned icanon) 620 620 { 621 621 } 622 622 static inline void tty_audit_tiocsti(struct tty_struct *tty, char ch)
+1 -1
include/linux/types.h
··· 35 35 36 36 typedef unsigned long uintptr_t; 37 37 38 - #ifdef CONFIG_UID16 38 + #ifdef CONFIG_HAVE_UID16 39 39 /* This is defined by include/asm-{arch}/posix_types.h */ 40 40 typedef __kernel_old_uid_t old_uid_t; 41 41 typedef __kernel_old_gid_t old_gid_t;
+1
include/net/af_unix.h
··· 62 62 #define UNIX_GC_CANDIDATE 0 63 63 #define UNIX_GC_MAYBE_CYCLE 1 64 64 struct socket_wq peer_wq; 65 + wait_queue_t peer_wake; 65 66 }; 66 67 67 68 static inline struct unix_sock *unix_sk(const struct sock *sk)
+4 -13
include/net/ip6_route.h
··· 133 133 /* 134 134 * Store a destination cache entry in a socket 135 135 */ 136 - static inline void __ip6_dst_store(struct sock *sk, struct dst_entry *dst, 137 - const struct in6_addr *daddr, 138 - const struct in6_addr *saddr) 136 + static inline void ip6_dst_store(struct sock *sk, struct dst_entry *dst, 137 + const struct in6_addr *daddr, 138 + const struct in6_addr *saddr) 139 139 { 140 140 struct ipv6_pinfo *np = inet6_sk(sk); 141 - struct rt6_info *rt = (struct rt6_info *) dst; 142 141 142 + np->dst_cookie = rt6_get_cookie((struct rt6_info *)dst); 143 143 sk_setup_caps(sk, dst); 144 144 np->daddr_cache = daddr; 145 145 #ifdef CONFIG_IPV6_SUBTREES 146 146 np->saddr_cache = saddr; 147 147 #endif 148 - np->dst_cookie = rt6_get_cookie(rt); 149 - } 150 - 151 - static inline void ip6_dst_store(struct sock *sk, struct dst_entry *dst, 152 - struct in6_addr *daddr, struct in6_addr *saddr) 153 - { 154 - spin_lock(&sk->sk_dst_lock); 155 - __ip6_dst_store(sk, dst, daddr, saddr); 156 - spin_unlock(&sk->sk_dst_lock); 157 148 } 158 149 159 150 static inline bool ipv6_unicast_destination(const struct sk_buff *skb)
+21 -1
include/net/ipv6.h
··· 205 205 */ 206 206 207 207 struct ipv6_txoptions { 208 + atomic_t refcnt; 208 209 /* Length of this structure */ 209 210 int tot_len; 210 211 ··· 218 217 struct ipv6_opt_hdr *dst0opt; 219 218 struct ipv6_rt_hdr *srcrt; /* Routing Header */ 220 219 struct ipv6_opt_hdr *dst1opt; 221 - 220 + struct rcu_head rcu; 222 221 /* Option buffer, as read by IPV6_PKTOPTIONS, starts here. */ 223 222 }; 224 223 ··· 252 251 struct ip6_flowlabel *fl; 253 252 struct rcu_head rcu; 254 253 }; 254 + 255 + static inline struct ipv6_txoptions *txopt_get(const struct ipv6_pinfo *np) 256 + { 257 + struct ipv6_txoptions *opt; 258 + 259 + rcu_read_lock(); 260 + opt = rcu_dereference(np->opt); 261 + if (opt && !atomic_inc_not_zero(&opt->refcnt)) 262 + opt = NULL; 263 + rcu_read_unlock(); 264 + return opt; 265 + } 266 + 267 + static inline void txopt_put(struct ipv6_txoptions *opt) 268 + { 269 + if (opt && atomic_dec_and_test(&opt->refcnt)) 270 + kfree_rcu(opt, rcu); 271 + } 255 272 256 273 struct ip6_flowlabel *fl6_sock_lookup(struct sock *sk, __be32 label); 257 274 struct ipv6_txoptions *fl6_merge_options(struct ipv6_txoptions *opt_space, ··· 509 490 u32 user; 510 491 const struct in6_addr *src; 511 492 const struct in6_addr *dst; 493 + int iif; 512 494 u8 ecn; 513 495 }; 514 496
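With ipv6_txoptions refcounted and freed through RCU, readers take a short-lived reference instead of relying on the socket lock, and atomic_inc_not_zero() in txopt_get() keeps the lookup safe against a concurrent final put. The reader pattern the two helpers imply:

    struct ipv6_txoptions *opt;

    opt = txopt_get(np);   /* NULL, or a counted reference */
    if (opt) {
        /* use opt->hopopt, opt->srcrt, ... without the socket lock */
    }
    txopt_put(opt);        /* kfree_rcu() on the last reference */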
+4 -2
include/net/mac80211.h
··· 2003 2003 * it shouldn't be set. 2004 2004 * 2005 2005 * @max_tx_aggregation_subframes: maximum number of subframes in an 2006 - * aggregate an HT driver will transmit, used by the peer as a 2007 - * hint to size its reorder buffer. 2006 + * aggregate an HT driver will transmit. Though ADDBA will advertise 2007 + * a constant value of 64 as some older APs can crash if the window 2008 + * size is smaller (an example is LinkSys WRT120N with FW v1.0.07 2009 + * build 002 Jun 18 2012). 2008 2010 * 2009 2011 * @offchannel_tx_hw_queue: HW queue ID to use for offchannel TX 2010 2012 * (if %IEEE80211_HW_QUEUE_CONTROL is set)
+1 -2
include/net/ndisc.h
··· 181 181 int ndisc_rcv(struct sk_buff *skb); 182 182 183 183 void ndisc_send_ns(struct net_device *dev, const struct in6_addr *solicit, 184 - const struct in6_addr *daddr, const struct in6_addr *saddr, 185 - struct sk_buff *oskb); 184 + const struct in6_addr *daddr, const struct in6_addr *saddr); 186 185 187 186 void ndisc_send_rs(struct net_device *dev, 188 187 const struct in6_addr *saddr, const struct in6_addr *daddr);
+3
include/net/sch_generic.h
··· 61 61 */ 62 62 #define TCQ_F_WARN_NONWC (1 << 16) 63 63 #define TCQ_F_CPUSTATS 0x20 /* run using percpu statistics */ 64 + #define TCQ_F_NOPARENT 0x40 /* root of its hierarchy : 65 + * qdisc_tree_decrease_qlen() should stop. 66 + */ 64 67 u32 limit; 65 68 const struct Qdisc_ops *ops; 66 69 struct qdisc_size_table __rcu *stab;
+8 -8
include/net/sctp/structs.h
··· 775 775 hb_sent:1, 776 776 777 777 /* Is the Path MTU update pending on this tranport */ 778 - pmtu_pending:1; 778 + pmtu_pending:1, 779 779 780 - /* Has this transport moved the ctsn since we last sacked */ 781 - __u32 sack_generation; 780 + /* Has this transport moved the ctsn since we last sacked */ 781 + sack_generation:1; 782 782 u32 dst_cookie; 783 783 784 784 struct flowi fl; ··· 1482 1482 prsctp_capable:1, /* Can peer do PR-SCTP? */ 1483 1483 auth_capable:1; /* Is peer doing SCTP-AUTH? */ 1484 1484 1485 - /* Ack State : This flag indicates if the next received 1485 + /* sack_needed : This flag indicates if the next received 1486 1486 * : packet is to be responded to with a 1487 - * : SACK. This is initializedto 0. When a packet 1488 - * : is received it is incremented. If this value 1487 + * : SACK. This is initialized to 0. When a packet 1488 + * : is received sack_cnt is incremented. If this value 1489 1489 * : reaches 2 or more, a SACK is sent and the 1490 1490 * : value is reset to 0. Note: This is used only 1491 1491 * : when no DATA chunks are received out of 1492 1492 * : order. When DATA chunks are out of order, 1493 1493 * : SACK's are not delayed (see Section 6). 1494 1494 */ 1495 - __u8 sack_needed; /* Do we need to sack the peer? */ 1495 + __u8 sack_needed:1, /* Do we need to sack the peer? */ 1496 + sack_generation:1; 1496 1497 __u32 sack_cnt; 1497 - __u32 sack_generation; 1498 1498 1499 1499 __u32 adaptation_ind; /* Adaptation Code point. */ 1500 1500
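The transport hunk fixes a quiet declaration bug: the ';' after pmtu_pending:1 ended the bitfield declarator list, so sack_generation occupied a full __u32 instead of a single bit; switching to ',' keeps it in the same field group (sack_needed gets the mirror treatment on the association side). The size effect, standalone; the sizes hold on the usual Linux ABIs:

    #include <assert.h>
    #include <stdint.h>

    struct ended_early {
        uint32_t a:1;  /* ';' ends the declarator list */
        uint32_t b;    /* a full word of its own */
    };

    struct packed_bits {
        uint32_t a:1,  /* ',' continues the list */
                 b:1;  /* shares a's storage unit */
    };

    int main(void)
    {
        assert(sizeof(struct ended_early) == 8);
        assert(sizeof(struct packed_bits) == 4);
        return 0;
    }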
+25 -7
include/net/sock.h
··· 255 255 * @sk_wq: sock wait queue and async head 256 256 * @sk_rx_dst: receive input route used by early demux 257 257 * @sk_dst_cache: destination cache 258 - * @sk_dst_lock: destination cache lock 259 258 * @sk_policy: flow policy 260 259 * @sk_receive_queue: incoming packets 261 260 * @sk_wmem_alloc: transmit queue bytes committed ··· 384 385 int sk_rcvbuf; 385 386 386 387 struct sk_filter __rcu *sk_filter; 387 - struct socket_wq __rcu *sk_wq; 388 - 388 + union { 389 + struct socket_wq __rcu *sk_wq; 390 + struct socket_wq *sk_wq_raw; 391 + }; 389 392 #ifdef CONFIG_XFRM 390 393 struct xfrm_policy *sk_policy[2]; 391 394 #endif 392 395 struct dst_entry *sk_rx_dst; 393 396 struct dst_entry __rcu *sk_dst_cache; 394 - spinlock_t sk_dst_lock; 397 + /* Note: 32bit hole on 64bit arches */ 395 398 atomic_t sk_wmem_alloc; 396 399 atomic_t sk_omem_alloc; 397 400 int sk_sndbuf; ··· 2001 2000 return amt; 2002 2001 } 2003 2002 2004 - static inline void sk_wake_async(struct sock *sk, int how, int band) 2003 + /* Note: 2004 + * We use sk->sk_wq_raw, from contexts knowing this 2005 + * pointer is not NULL and cannot disappear/change. 2006 + */ 2007 + static inline void sk_set_bit(int nr, struct sock *sk) 2005 2008 { 2006 - if (sock_flag(sk, SOCK_FASYNC)) 2007 - sock_wake_async(sk->sk_socket, how, band); 2009 + set_bit(nr, &sk->sk_wq_raw->flags); 2010 + } 2011 + 2012 + static inline void sk_clear_bit(int nr, struct sock *sk) 2013 + { 2014 + clear_bit(nr, &sk->sk_wq_raw->flags); 2015 + } 2016 + 2017 + static inline void sk_wake_async(const struct sock *sk, int how, int band) 2018 + { 2019 + if (sock_flag(sk, SOCK_FASYNC)) { 2020 + rcu_read_lock(); 2021 + sock_wake_async(rcu_dereference(sk->sk_wq), how, band); 2022 + rcu_read_unlock(); 2023 + } 2008 2024 } 2009 2025 2010 2026 /* Since sk_{r,w}mem_alloc sums skb->truesize, even a small frame might
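Dropping sk_dst_lock and moving the async flags into the RCU-protected sk->sk_wq is what sk_set_bit()/sk_clear_bit() are for: they go through sk_wq_raw on paths where the pointer is known stable, while sk_wake_async() now dereferences sk_wq under rcu_read_lock(). Protocol code converts mechanically, as the af_bluetooth, caif, and datagram hunks below show:

    /* before */
    set_bit(SOCK_ASYNC_NOSPACE, &sk->sk_socket->flags);

    /* after */
    sk_set_bit(SOCKWQ_ASYNC_NOSPACE, sk);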
+1 -1
include/target/target_core_base.h
··· 474 474 struct completion cmd_wait_comp; 475 475 const struct target_core_fabric_ops *se_tfo; 476 476 sense_reason_t (*execute_cmd)(struct se_cmd *); 477 - sense_reason_t (*transport_complete_callback)(struct se_cmd *, bool); 477 + sense_reason_t (*transport_complete_callback)(struct se_cmd *, bool, int *); 478 478 void *protocol_data; 479 479 480 480 unsigned char *t_task_cdb;
-11
include/uapi/linux/nfs.h
··· 33 33 34 34 #define NFS_PIPE_DIRNAME "nfs" 35 35 36 - /* NFS ioctls */ 37 - /* Let's follow btrfs lead on CLONE to avoid messing userspace */ 38 - #define NFS_IOC_CLONE _IOW(0x94, 9, int) 39 - #define NFS_IOC_CLONE_RANGE _IOW(0x94, 13, int) 40 - 41 - struct nfs_ioctl_clone_range_args { 42 - __s64 src_fd; 43 - __u64 src_off, count; 44 - __u64 dst_off; 45 - }; 46 - 47 36 /* 48 37 * NFS stats. The good thing with these values is that NFSv3 errors are 49 38 * a superset of NFSv2 errors (with the exception of NFSERR_WFLUSH which
+8 -2
kernel/bpf/arraymap.c
··· 28 28 attr->value_size == 0) 29 29 return ERR_PTR(-EINVAL); 30 30 31 + if (attr->value_size >= 1 << (KMALLOC_SHIFT_MAX - 1)) 32 + /* if value_size is bigger, the user space won't be able to 33 + * access the elements. 34 + */ 35 + return ERR_PTR(-E2BIG); 36 + 31 37 elem_size = round_up(attr->value_size, 8); 32 38 33 39 /* check round_up into zero and u32 overflow */ 34 40 if (elem_size == 0 || 35 - attr->max_entries > (U32_MAX - sizeof(*array)) / elem_size) 41 + attr->max_entries > (U32_MAX - PAGE_SIZE - sizeof(*array)) / elem_size) 36 42 return ERR_PTR(-ENOMEM); 37 43 38 44 array_size = sizeof(*array) + attr->max_entries * elem_size; ··· 111 105 /* all elements already exist */ 112 106 return -EEXIST; 113 107 114 - memcpy(array->value + array->elem_size * index, value, array->elem_size); 108 + memcpy(array->value + array->elem_size * index, value, map->value_size); 115 109 return 0; 116 110 } 117 111
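The new bound on value_size also protects the round_up() below it: rounding a u32 near U32_MAX up to 8 wraps to 0, which is what the elem_size == 0 test catches, and the update path now copies only map->value_size bytes so it cannot read past the value buffer. The wrap, shown standalone:

    #include <assert.h>
    #include <stdint.h>

    #define ROUND_UP_8(x)  (((x) + 7u) & ~(uint32_t)7)

    int main(void)
    {
        /* 0xfffffffc + 7 wraps mod 2^32 before the mask applies */
        assert(ROUND_UP_8(UINT32_MAX - 3) == 0);
        return 0;
    }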
+25 -9
kernel/bpf/hashtab.c
··· 64 64 */ 65 65 goto free_htab; 66 66 67 - err = -ENOMEM; 67 + if (htab->map.value_size >= (1 << (KMALLOC_SHIFT_MAX - 1)) - 68 + MAX_BPF_STACK - sizeof(struct htab_elem)) 69 + /* if value_size is bigger, the user space won't be able to 70 + * access the elements via bpf syscall. This check also makes 71 + * sure that the elem_size doesn't overflow and it's 72 + * kmalloc-able later in htab_map_update_elem() 73 + */ 74 + goto free_htab; 75 + 76 + htab->elem_size = sizeof(struct htab_elem) + 77 + round_up(htab->map.key_size, 8) + 78 + htab->map.value_size; 79 + 68 80 /* prevent zero size kmalloc and check for u32 overflow */ 69 81 if (htab->n_buckets == 0 || 70 82 htab->n_buckets > U32_MAX / sizeof(struct hlist_head)) 71 83 goto free_htab; 72 84 85 + if ((u64) htab->n_buckets * sizeof(struct hlist_head) + 86 + (u64) htab->elem_size * htab->map.max_entries >= 87 + U32_MAX - PAGE_SIZE) 88 + /* make sure page count doesn't overflow */ 89 + goto free_htab; 90 + 91 + htab->map.pages = round_up(htab->n_buckets * sizeof(struct hlist_head) + 92 + htab->elem_size * htab->map.max_entries, 93 + PAGE_SIZE) >> PAGE_SHIFT; 94 + 95 + err = -ENOMEM; 73 96 htab->buckets = kmalloc_array(htab->n_buckets, sizeof(struct hlist_head), 74 97 GFP_USER | __GFP_NOWARN); 75 98 ··· 108 85 raw_spin_lock_init(&htab->lock); 109 86 htab->count = 0; 110 87 111 - htab->elem_size = sizeof(struct htab_elem) + 112 - round_up(htab->map.key_size, 8) + 113 - htab->map.value_size; 114 - 115 - htab->map.pages = round_up(htab->n_buckets * sizeof(struct hlist_head) + 116 - htab->elem_size * htab->map.max_entries, 117 - PAGE_SIZE) >> PAGE_SHIFT; 118 88 return &htab->map; 119 89 120 90 free_htab: ··· 238 222 WARN_ON_ONCE(!rcu_read_lock_held()); 239 223 240 224 /* allocate new element outside of lock */ 241 - l_new = kmalloc(htab->elem_size, GFP_ATOMIC); 225 + l_new = kmalloc(htab->elem_size, GFP_ATOMIC | __GFP_NOWARN); 242 226 if (!l_new) 243 227 return -ENOMEM; 244 228
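The reordering matters: elem_size has to be computed before the new total-size check, which verifies in 64-bit arithmetic that the bucket array plus every element stays below U32_MAX - PAGE_SIZE before map.pages is derived from the same expression. The guard in isolation (a hlist_head is one pointer wide):

    #include <stdint.h>

    /* 1 if n_buckets bucket heads plus n elements of elem_size bytes
     * fit below UINT32_MAX - page_size; u64 math avoids the wrap. */
    static int total_size_ok(uint32_t n_buckets, uint32_t elem_size,
                             uint32_t n, uint32_t page_size)
    {
        return (uint64_t)n_buckets * sizeof(void *) +
               (uint64_t)elem_size * n < (uint64_t)UINT32_MAX - page_size;
    }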
+3 -3
kernel/bpf/inode.c
··· 34 34 atomic_inc(&((struct bpf_prog *)raw)->aux->refcnt); 35 35 break; 36 36 case BPF_TYPE_MAP: 37 - atomic_inc(&((struct bpf_map *)raw)->refcnt); 37 + bpf_map_inc(raw, true); 38 38 break; 39 39 default: 40 40 WARN_ON_ONCE(1); ··· 51 51 bpf_prog_put(raw); 52 52 break; 53 53 case BPF_TYPE_MAP: 54 - bpf_map_put(raw); 54 + bpf_map_put_with_uref(raw); 55 55 break; 56 56 default: 57 57 WARN_ON_ONCE(1); ··· 64 64 void *raw; 65 65 66 66 *type = BPF_TYPE_MAP; 67 - raw = bpf_map_get(ufd); 67 + raw = bpf_map_get_with_uref(ufd); 68 68 if (IS_ERR(raw)) { 69 69 *type = BPF_TYPE_PROG; 70 70 raw = bpf_prog_get(ufd);
+32 -18
kernel/bpf/syscall.c
··· 82 82 map->ops->map_free(map); 83 83 } 84 84 85 + static void bpf_map_put_uref(struct bpf_map *map) 86 + { 87 + if (atomic_dec_and_test(&map->usercnt)) { 88 + if (map->map_type == BPF_MAP_TYPE_PROG_ARRAY) 89 + bpf_fd_array_map_clear(map); 90 + } 91 + } 92 + 85 93 /* decrement map refcnt and schedule it for freeing via workqueue 86 94 * (unrelying map implementation ops->map_free() might sleep) 87 95 */ ··· 99 91 INIT_WORK(&map->work, bpf_map_free_deferred); 100 92 schedule_work(&map->work); 101 93 } 94 + } 95 + 96 + void bpf_map_put_with_uref(struct bpf_map *map) 97 + { 98 + bpf_map_put_uref(map); 99 + bpf_map_put(map); 100 + } 101 + 102 + static int bpf_map_release(struct inode *inode, struct file *filp) 103 + { 104 + bpf_map_put_with_uref(filp->private_data); 105 + return 0; 102 106 } 103 107 104 108 #ifdef CONFIG_PROC_FS ··· 129 109 map->max_entries); 130 110 } 131 111 #endif 132 - 133 - static int bpf_map_release(struct inode *inode, struct file *filp) 134 - { 135 - struct bpf_map *map = filp->private_data; 136 - 137 - if (map->map_type == BPF_MAP_TYPE_PROG_ARRAY) 138 - /* prog_array stores refcnt-ed bpf_prog pointers 139 - * release them all when user space closes prog_array_fd 140 - */ 141 - bpf_fd_array_map_clear(map); 142 - 143 - bpf_map_put(map); 144 - return 0; 145 - } 146 112 147 113 static const struct file_operations bpf_map_fops = { 148 114 #ifdef CONFIG_PROC_FS ··· 168 162 return PTR_ERR(map); 169 163 170 164 atomic_set(&map->refcnt, 1); 165 + atomic_set(&map->usercnt, 1); 171 166 172 167 err = bpf_map_charge_memlock(map); 173 168 if (err) ··· 201 194 return f.file->private_data; 202 195 } 203 196 204 - struct bpf_map *bpf_map_get(u32 ufd) 197 + void bpf_map_inc(struct bpf_map *map, bool uref) 198 + { 199 + atomic_inc(&map->refcnt); 200 + if (uref) 201 + atomic_inc(&map->usercnt); 202 + } 203 + 204 + struct bpf_map *bpf_map_get_with_uref(u32 ufd) 205 205 { 206 206 struct fd f = fdget(ufd); 207 207 struct bpf_map *map; ··· 217 203 if (IS_ERR(map)) 218 204 return map; 219 205 220 - atomic_inc(&map->refcnt); 206 + bpf_map_inc(map, true); 221 207 fdput(f); 222 208 223 209 return map; ··· 260 246 goto free_key; 261 247 262 248 err = -ENOMEM; 263 - value = kmalloc(map->value_size, GFP_USER); 249 + value = kmalloc(map->value_size, GFP_USER | __GFP_NOWARN); 264 250 if (!value) 265 251 goto free_key; 266 252 ··· 319 305 goto free_key; 320 306 321 307 err = -ENOMEM; 322 - value = kmalloc(map->value_size, GFP_USER); 308 + value = kmalloc(map->value_size, GFP_USER | __GFP_NOWARN); 323 309 if (!value) 324 310 goto free_key; 325 311
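refcnt now counts every holder while the new usercnt counts only user-space handles (anon fds and bpffs pins, see the inode.c hunk above); when the last user handle drops, a prog array is cleared even though the map object may outlive it, which breaks the reference cycle between a prog array and a program stored in it. The two acquisition styles side by side:

    /* from a user-supplied fd: bumps refcnt and usercnt */
    map = bpf_map_get_with_uref(ufd);
    ...
    bpf_map_put_with_uref(map);

    /* held internally, e.g. by the verifier: refcnt only */
    bpf_map_inc(map, false);
    ...
    bpf_map_put(map);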
+1 -2
kernel/bpf/verifier.c
··· 2021 2021 * will be used by the valid program until it's unloaded 2022 2022 * and all maps are released in free_bpf_prog_info() 2023 2023 */ 2024 - atomic_inc(&map->refcnt); 2025 - 2024 + bpf_map_inc(map, false); 2026 2025 fdput(f); 2027 2026 next_insn: 2028 2027 insn++;
+6
kernel/livepatch/core.c
··· 294 294 295 295 for (reloc = obj->relocs; reloc->name; reloc++) { 296 296 if (!klp_is_module(obj)) { 297 + 298 + #if defined(CONFIG_RANDOMIZE_BASE) 299 + /* If KASLR has been enabled, adjust old value accordingly */ 300 + if (kaslr_enabled()) 301 + reloc->val += kaslr_offset(); 302 + #endif 297 303 ret = klp_verify_vmlinux_symbol(reloc->name, 298 304 reloc->val); 299 305 if (ret)
+4 -1
kernel/panic.c
··· 152 152 * We may have ended up stopping the CPU holding the lock (in 153 153 * smp_send_stop()) while still having some valuable data in the console 154 154 * buffer. Try to acquire the lock then release it regardless of the 155 - * result. The release will also print the buffers out. 155 + * result. The release will also print the buffers out. Locks debug 156 + * should be disabled to avoid reporting bad unlock balance when 157 + * panic() is not being callled from OOPS. 156 158 */ 159 + debug_locks_off(); 157 160 console_trylock(); 158 161 console_unlock(); 159 162
+2 -2
kernel/pid.c
··· 467 467 rcu_read_lock(); 468 468 if (type != PIDTYPE_PID) 469 469 task = task->group_leader; 470 - pid = get_pid(task->pids[type].pid); 470 + pid = get_pid(rcu_dereference(task->pids[type].pid)); 471 471 rcu_read_unlock(); 472 472 return pid; 473 473 } ··· 528 528 if (likely(pid_alive(task))) { 529 529 if (type != PIDTYPE_PID) 530 530 task = task->group_leader; 531 - nr = pid_nr_ns(task->pids[type].pid, ns); 531 + nr = pid_nr_ns(rcu_dereference(task->pids[type].pid), ns); 532 532 } 533 533 rcu_read_unlock(); 534 534
+1 -1
kernel/signal.c
··· 3503 3503 3504 3504 #endif 3505 3505 3506 - int sigsuspend(sigset_t *set) 3506 + static int sigsuspend(sigset_t *set) 3507 3507 { 3508 3508 current->saved_sigmask = current->blocked; 3509 3509 set_current_blocked(set);
+9 -8
kernel/trace/ring_buffer.c
··· 1887 1887 return (addr & ~PAGE_MASK) - BUF_PAGE_HDR_SIZE; 1888 1888 } 1889 1889 1890 - static void rb_reset_reader_page(struct ring_buffer_per_cpu *cpu_buffer) 1891 - { 1892 - cpu_buffer->read_stamp = cpu_buffer->reader_page->page->time_stamp; 1893 - cpu_buffer->reader_page->read = 0; 1894 - } 1895 - 1896 1890 static void rb_inc_iter(struct ring_buffer_iter *iter) 1897 1891 { 1898 1892 struct ring_buffer_per_cpu *cpu_buffer = iter->cpu_buffer; ··· 2797 2803 2798 2804 event = __rb_reserve_next(cpu_buffer, &info); 2799 2805 2800 - if (unlikely(PTR_ERR(event) == -EAGAIN)) 2806 + if (unlikely(PTR_ERR(event) == -EAGAIN)) { 2807 + if (info.add_timestamp) 2808 + info.length -= RB_LEN_TIME_EXTEND; 2801 2809 goto again; 2810 + } 2802 2811 2803 2812 if (!event) 2804 2813 goto out_fail; ··· 3623 3626 3624 3627 /* Finally update the reader page to the new head */ 3625 3628 cpu_buffer->reader_page = reader; 3626 - rb_reset_reader_page(cpu_buffer); 3629 + cpu_buffer->reader_page->read = 0; 3627 3630 3628 3631 if (overwrite != cpu_buffer->last_overrun) { 3629 3632 cpu_buffer->lost_events = overwrite - cpu_buffer->last_overrun; ··· 3633 3636 goto again; 3634 3637 3635 3638 out: 3639 + /* Update the read_stamp on the first event */ 3640 + if (reader && reader->read == 0) 3641 + cpu_buffer->read_stamp = reader->page->time_stamp; 3642 + 3636 3643 arch_spin_unlock(&cpu_buffer->lock); 3637 3644 local_irq_restore(flags); 3638 3645
+16
kernel/trace/trace_events.c
··· 582 582 unregister_trace_sched_wakeup(event_filter_pid_sched_wakeup_probe_pre, tr); 583 583 unregister_trace_sched_wakeup(event_filter_pid_sched_wakeup_probe_post, tr); 584 584 585 + unregister_trace_sched_wakeup_new(event_filter_pid_sched_wakeup_probe_pre, tr); 586 + unregister_trace_sched_wakeup_new(event_filter_pid_sched_wakeup_probe_post, tr); 587 + 588 + unregister_trace_sched_waking(event_filter_pid_sched_wakeup_probe_pre, tr); 589 + unregister_trace_sched_waking(event_filter_pid_sched_wakeup_probe_post, tr); 590 + 585 591 list_for_each_entry(file, &tr->events, list) { 586 592 clear_bit(EVENT_FILE_FL_PID_FILTER_BIT, &file->flags); 587 593 } ··· 1734 1728 register_trace_prio_sched_wakeup(event_filter_pid_sched_wakeup_probe_pre, 1735 1729 tr, INT_MAX); 1736 1730 register_trace_prio_sched_wakeup(event_filter_pid_sched_wakeup_probe_post, 1731 + tr, 0); 1732 + 1733 + register_trace_prio_sched_wakeup_new(event_filter_pid_sched_wakeup_probe_pre, 1734 + tr, INT_MAX); 1735 + register_trace_prio_sched_wakeup_new(event_filter_pid_sched_wakeup_probe_post, 1736 + tr, 0); 1737 + 1738 + register_trace_prio_sched_waking(event_filter_pid_sched_wakeup_probe_pre, 1739 + tr, INT_MAX); 1740 + register_trace_prio_sched_waking(event_filter_pid_sched_wakeup_probe_post, 1737 1741 tr, 0); 1738 1742 } 1739 1743
+2 -2
mm/huge_memory.c
··· 2009 2009 /* 2010 2010 * Be somewhat over-protective like KSM for now! 2011 2011 */ 2012 - if (*vm_flags & (VM_HUGEPAGE | VM_NO_THP)) 2012 + if (*vm_flags & VM_NO_THP) 2013 2013 return -EINVAL; 2014 2014 *vm_flags &= ~VM_NOHUGEPAGE; 2015 2015 *vm_flags |= VM_HUGEPAGE; ··· 2025 2025 /* 2026 2026 * Be somewhat over-protective like KSM for now! 2027 2027 */ 2028 - if (*vm_flags & (VM_NOHUGEPAGE | VM_NO_THP)) 2028 + if (*vm_flags & VM_NO_THP) 2029 2029 return -EINVAL; 2030 2030 *vm_flags &= ~VM_HUGEPAGE; 2031 2031 *vm_flags |= VM_NOHUGEPAGE;
+2
mm/kasan/kasan.c
··· 19 19 #include <linux/export.h> 20 20 #include <linux/init.h> 21 21 #include <linux/kernel.h> 22 + #include <linux/kmemleak.h> 22 23 #include <linux/memblock.h> 23 24 #include <linux/memory.h> 24 25 #include <linux/mm.h> ··· 445 444 446 445 if (ret) { 447 446 find_vm_area(addr)->flags |= VM_KASAN; 447 + kmemleak_ignore(ret); 448 448 return 0; 449 449 } 450 450
+4 -4
mm/memory.c
··· 3015 3015 } else { 3016 3016 /* 3017 3017 * The fault handler has no page to lock, so it holds 3018 - * i_mmap_lock for write to protect against truncate. 3018 + * i_mmap_lock for read to protect against truncate. 3019 3019 */ 3020 - i_mmap_unlock_write(vma->vm_file->f_mapping); 3020 + i_mmap_unlock_read(vma->vm_file->f_mapping); 3021 3021 } 3022 3022 goto uncharge_out; 3023 3023 } ··· 3031 3031 } else { 3032 3032 /* 3033 3033 * The fault handler has no page to lock, so it holds 3034 - * i_mmap_lock for write to protect against truncate. 3034 + * i_mmap_lock for read to protect against truncate. 3035 3035 */ 3036 - i_mmap_unlock_write(vma->vm_file->f_mapping); 3036 + i_mmap_unlock_read(vma->vm_file->f_mapping); 3037 3037 } 3038 3038 return ret; 3039 3039 uncharge_out:
+3 -1
mm/page-writeback.c
··· 1542 1542 for (;;) { 1543 1543 unsigned long now = jiffies; 1544 1544 unsigned long dirty, thresh, bg_thresh; 1545 - unsigned long m_dirty, m_thresh, m_bg_thresh; 1545 + unsigned long m_dirty = 0; /* stop bogus uninit warnings */ 1546 + unsigned long m_thresh = 0; 1547 + unsigned long m_bg_thresh = 0; 1546 1548 1547 1549 /* 1548 1550 * Unstable writes are a feature of certain networked
+1 -1
mm/slab.c
··· 3419 3419 } 3420 3420 EXPORT_SYMBOL(kmem_cache_free_bulk); 3421 3421 3422 - bool kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size, 3422 + int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size, 3423 3423 void **p) 3424 3424 { 3425 3425 return __kmem_cache_alloc_bulk(s, flags, size, p);
+1 -1
mm/slab.h
··· 170 170 * may be allocated or freed using these operations. 171 171 */ 172 172 void __kmem_cache_free_bulk(struct kmem_cache *, size_t, void **); 173 - bool __kmem_cache_alloc_bulk(struct kmem_cache *, gfp_t, size_t, void **); 173 + int __kmem_cache_alloc_bulk(struct kmem_cache *, gfp_t, size_t, void **); 174 174 175 175 #ifdef CONFIG_MEMCG_KMEM 176 176 /*
+3 -3
mm/slab_common.c
··· 112 112 kmem_cache_free(s, p[i]); 113 113 } 114 114 115 - bool __kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t nr, 115 + int __kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t nr, 116 116 void **p) 117 117 { 118 118 size_t i; ··· 121 121 void *x = p[i] = kmem_cache_alloc(s, flags); 122 122 if (!x) { 123 123 __kmem_cache_free_bulk(s, i, p); 124 - return false; 124 + return 0; 125 125 } 126 126 } 127 - return true; 127 + return i; 128 128 } 129 129 130 130 #ifdef CONFIG_MEMCG_KMEM
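Switching the bulk-alloc return type from bool to int lets implementations report how many objects were produced; the generic fallback above stays all-or-nothing, returning the full count or, after rolling back, 0. Callers therefore compare against the requested size:

    void *objs[16];

    if (kmem_cache_alloc_bulk(cachep, GFP_KERNEL, ARRAY_SIZE(objs), objs)
        != ARRAY_SIZE(objs))
        return -ENOMEM;  /* nothing is left allocated on failure */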
+1 -1
mm/slob.c
··· 617 617 } 618 618 EXPORT_SYMBOL(kmem_cache_free_bulk); 619 619 620 - bool kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size, 620 + int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size, 621 621 void **p) 622 622 { 623 623 return __kmem_cache_alloc_bulk(s, flags, size, p);
+211 -93
mm/slub.c
··· 1065 1065 return 0; 1066 1066 } 1067 1067 1068 + /* Supports checking bulk free of a constructed freelist */ 1068 1069 static noinline struct kmem_cache_node *free_debug_processing( 1069 - struct kmem_cache *s, struct page *page, void *object, 1070 + struct kmem_cache *s, struct page *page, 1071 + void *head, void *tail, int bulk_cnt, 1070 1072 unsigned long addr, unsigned long *flags) 1071 1073 { 1072 1074 struct kmem_cache_node *n = get_node(s, page_to_nid(page)); 1075 + void *object = head; 1076 + int cnt = 0; 1073 1077 1074 1078 spin_lock_irqsave(&n->list_lock, *flags); 1075 1079 slab_lock(page); 1076 1080 1077 1081 if (!check_slab(s, page)) 1078 1082 goto fail; 1083 + 1084 + next_object: 1085 + cnt++; 1079 1086 1080 1087 if (!check_valid_pointer(s, page, object)) { 1081 1088 slab_err(s, page, "Invalid object pointer 0x%p", object); ··· 1114 1107 if (s->flags & SLAB_STORE_USER) 1115 1108 set_track(s, object, TRACK_FREE, addr); 1116 1109 trace(s, page, object, 0); 1110 + /* Freepointer not overwritten by init_object(), SLAB_POISON moved it */ 1117 1111 init_object(s, object, SLUB_RED_INACTIVE); 1112 + 1113 + /* Reached end of constructed freelist yet? */ 1114 + if (object != tail) { 1115 + object = get_freepointer(s, object); 1116 + goto next_object; 1117 + } 1118 1118 out: 1119 + if (cnt != bulk_cnt) 1120 + slab_err(s, page, "Bulk freelist count(%d) invalid(%d)\n", 1121 + bulk_cnt, cnt); 1122 + 1119 1123 slab_unlock(page); 1120 1124 /* 1121 1125 * Keep node_lock to preserve integrity ··· 1222 1204 1223 1205 return flags; 1224 1206 } 1225 - #else 1207 + #else /* !CONFIG_SLUB_DEBUG */ 1226 1208 static inline void setup_object_debug(struct kmem_cache *s, 1227 1209 struct page *page, void *object) {} 1228 1210 ··· 1230 1212 struct page *page, void *object, unsigned long addr) { return 0; } 1231 1213 1232 1214 static inline struct kmem_cache_node *free_debug_processing( 1233 - struct kmem_cache *s, struct page *page, void *object, 1215 + struct kmem_cache *s, struct page *page, 1216 + void *head, void *tail, int bulk_cnt, 1234 1217 unsigned long addr, unsigned long *flags) { return NULL; } 1235 1218 1236 1219 static inline int slab_pad_check(struct kmem_cache *s, struct page *page) ··· 1292 1273 return memcg_kmem_get_cache(s, flags); 1293 1274 } 1294 1275 1295 - static inline void slab_post_alloc_hook(struct kmem_cache *s, 1296 - gfp_t flags, void *object) 1276 + static inline void slab_post_alloc_hook(struct kmem_cache *s, gfp_t flags, 1277 + size_t size, void **p) 1297 1278 { 1279 + size_t i; 1280 + 1298 1281 flags &= gfp_allowed_mask; 1299 - kmemcheck_slab_alloc(s, flags, object, slab_ksize(s)); 1300 - kmemleak_alloc_recursive(object, s->object_size, 1, s->flags, flags); 1282 + for (i = 0; i < size; i++) { 1283 + void *object = p[i]; 1284 + 1285 + kmemcheck_slab_alloc(s, flags, object, slab_ksize(s)); 1286 + kmemleak_alloc_recursive(object, s->object_size, 1, 1287 + s->flags, flags); 1288 + kasan_slab_alloc(s, object); 1289 + } 1301 1290 memcg_kmem_put_cache(s); 1302 - kasan_slab_alloc(s, object); 1303 1291 } 1304 1292 1305 1293 static inline void slab_free_hook(struct kmem_cache *s, void *x) ··· 1332 1306 debug_check_no_obj_freed(x, s->object_size); 1333 1307 1334 1308 kasan_slab_free(s, x); 1309 + } 1310 + 1311 + static inline void slab_free_freelist_hook(struct kmem_cache *s, 1312 + void *head, void *tail) 1313 + { 1314 + /* 1315 + * Compiler cannot detect this function can be removed if slab_free_hook() 1316 + * evaluates to nothing. 
Thus, catch all relevant config debug options here. 1317 + */ 1318 + #if defined(CONFIG_KMEMCHECK) || \ 1319 + defined(CONFIG_LOCKDEP) || \ 1320 + defined(CONFIG_DEBUG_KMEMLEAK) || \ 1321 + defined(CONFIG_DEBUG_OBJECTS_FREE) || \ 1322 + defined(CONFIG_KASAN) 1323 + 1324 + void *object = head; 1325 + void *tail_obj = tail ? : head; 1326 + 1327 + do { 1328 + slab_free_hook(s, object); 1329 + } while ((object != tail_obj) && 1330 + (object = get_freepointer(s, object))); 1331 + #endif 1335 1332 } 1336 1333 1337 1334 static void setup_object(struct kmem_cache *s, struct page *page, ··· 2344 2295 * And if we were unable to get a new slab from the partial slab lists then 2345 2296 * we need to allocate a new slab. This is the slowest path since it involves 2346 2297 * a call to the page allocator and the setup of a new slab. 2298 + * 2299 + * Version of __slab_alloc to use when we know that interrupts are 2300 + * already disabled (which is the case for bulk allocation). 2347 2301 */ 2348 - static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, 2302 + static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, 2349 2303 unsigned long addr, struct kmem_cache_cpu *c) 2350 2304 { 2351 2305 void *freelist; 2352 2306 struct page *page; 2353 - unsigned long flags; 2354 - 2355 - local_irq_save(flags); 2356 - #ifdef CONFIG_PREEMPT 2357 - /* 2358 - * We may have been preempted and rescheduled on a different 2359 - * cpu before disabling interrupts. Need to reload cpu area 2360 - * pointer. 2361 - */ 2362 - c = this_cpu_ptr(s->cpu_slab); 2363 - #endif 2364 2307 2365 2308 page = c->page; 2366 2309 if (!page) ··· 2410 2369 VM_BUG_ON(!c->page->frozen); 2411 2370 c->freelist = get_freepointer(s, freelist); 2412 2371 c->tid = next_tid(c->tid); 2413 - local_irq_restore(flags); 2414 2372 return freelist; 2415 2373 2416 2374 new_slab: ··· 2426 2386 2427 2387 if (unlikely(!freelist)) { 2428 2388 slab_out_of_memory(s, gfpflags, node); 2429 - local_irq_restore(flags); 2430 2389 return NULL; 2431 2390 } 2432 2391 ··· 2441 2402 deactivate_slab(s, page, get_freepointer(s, freelist)); 2442 2403 c->page = NULL; 2443 2404 c->freelist = NULL; 2444 - local_irq_restore(flags); 2445 2405 return freelist; 2406 + } 2407 + 2408 + /* 2409 + * Another one that disabled interrupt and compensates for possible 2410 + * cpu changes by refetching the per cpu area pointer. 2411 + */ 2412 + static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, 2413 + unsigned long addr, struct kmem_cache_cpu *c) 2414 + { 2415 + void *p; 2416 + unsigned long flags; 2417 + 2418 + local_irq_save(flags); 2419 + #ifdef CONFIG_PREEMPT 2420 + /* 2421 + * We may have been preempted and rescheduled on a different 2422 + * cpu before disabling interrupts. Need to reload cpu area 2423 + * pointer. 
2424 + */ 2425 + c = this_cpu_ptr(s->cpu_slab); 2426 + #endif 2427 + 2428 + p = ___slab_alloc(s, gfpflags, node, addr, c); 2429 + local_irq_restore(flags); 2430 + return p; 2446 2431 } 2447 2432 2448 2433 /* ··· 2482 2419 static __always_inline void *slab_alloc_node(struct kmem_cache *s, 2483 2420 gfp_t gfpflags, int node, unsigned long addr) 2484 2421 { 2485 - void **object; 2422 + void *object; 2486 2423 struct kmem_cache_cpu *c; 2487 2424 struct page *page; 2488 2425 unsigned long tid; ··· 2561 2498 if (unlikely(gfpflags & __GFP_ZERO) && object) 2562 2499 memset(object, 0, s->object_size); 2563 2500 2564 - slab_post_alloc_hook(s, gfpflags, object); 2501 + slab_post_alloc_hook(s, gfpflags, 1, &object); 2565 2502 2566 2503 return object; 2567 2504 } ··· 2632 2569 * handling required then we can return immediately. 2633 2570 */ 2634 2571 static void __slab_free(struct kmem_cache *s, struct page *page, 2635 - void *x, unsigned long addr) 2572 + void *head, void *tail, int cnt, 2573 + unsigned long addr) 2574 + 2636 2575 { 2637 2576 void *prior; 2638 - void **object = (void *)x; 2639 2577 int was_frozen; 2640 2578 struct page new; 2641 2579 unsigned long counters; ··· 2646 2582 stat(s, FREE_SLOWPATH); 2647 2583 2648 2584 if (kmem_cache_debug(s) && 2649 - !(n = free_debug_processing(s, page, x, addr, &flags))) 2585 + !(n = free_debug_processing(s, page, head, tail, cnt, 2586 + addr, &flags))) 2650 2587 return; 2651 2588 2652 2589 do { ··· 2657 2592 } 2658 2593 prior = page->freelist; 2659 2594 counters = page->counters; 2660 - set_freepointer(s, object, prior); 2595 + set_freepointer(s, tail, prior); 2661 2596 new.counters = counters; 2662 2597 was_frozen = new.frozen; 2663 - new.inuse--; 2598 + new.inuse -= cnt; 2664 2599 if ((!new.inuse || !prior) && !was_frozen) { 2665 2600 2666 2601 if (kmem_cache_has_cpu_partial(s) && !prior) { ··· 2691 2626 2692 2627 } while (!cmpxchg_double_slab(s, page, 2693 2628 prior, counters, 2694 - object, new.counters, 2629 + head, new.counters, 2695 2630 "__slab_free")); 2696 2631 2697 2632 if (likely(!n)) { ··· 2756 2691 * 2757 2692 * If fastpath is not possible then fall back to __slab_free where we deal 2758 2693 * with all sorts of special processing. 2694 + * 2695 + * Bulk free of a freelist with several objects (all pointing to the 2696 + * same page) possible by specifying head and tail ptr, plus objects 2697 + * count (cnt). Bulk free indicated by tail pointer being set. 2759 2698 */ 2760 - static __always_inline void slab_free(struct kmem_cache *s, 2761 - struct page *page, void *x, unsigned long addr) 2699 + static __always_inline void slab_free(struct kmem_cache *s, struct page *page, 2700 + void *head, void *tail, int cnt, 2701 + unsigned long addr) 2762 2702 { 2763 - void **object = (void *)x; 2703 + void *tail_obj = tail ? 
: head; 2764 2704 struct kmem_cache_cpu *c; 2765 2705 unsigned long tid; 2766 2706 2767 - slab_free_hook(s, x); 2707 + slab_free_freelist_hook(s, head, tail); 2768 2708 2769 2709 redo: 2770 2710 /* ··· 2788 2718 barrier(); 2789 2719 2790 2720 if (likely(page == c->page)) { 2791 - set_freepointer(s, object, c->freelist); 2721 + set_freepointer(s, tail_obj, c->freelist); 2792 2722 2793 2723 if (unlikely(!this_cpu_cmpxchg_double( 2794 2724 s->cpu_slab->freelist, s->cpu_slab->tid, 2795 2725 c->freelist, tid, 2796 - object, next_tid(tid)))) { 2726 + head, next_tid(tid)))) { 2797 2727 2798 2728 note_cmpxchg_failure("slab_free", s, tid); 2799 2729 goto redo; 2800 2730 } 2801 2731 stat(s, FREE_FASTPATH); 2802 2732 } else 2803 - __slab_free(s, page, x, addr); 2733 + __slab_free(s, page, head, tail_obj, cnt, addr); 2804 2734 2805 2735 } 2806 2736 ··· 2809 2739 s = cache_from_obj(s, x); 2810 2740 if (!s) 2811 2741 return; 2812 - slab_free(s, virt_to_head_page(x), x, _RET_IP_); 2742 + slab_free(s, virt_to_head_page(x), x, NULL, 1, _RET_IP_); 2813 2743 trace_kmem_cache_free(_RET_IP_, x); 2814 2744 } 2815 2745 EXPORT_SYMBOL(kmem_cache_free); 2816 2746 2817 - /* Note that interrupts must be enabled when calling this function. */ 2818 - void kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p) 2819 - { 2820 - struct kmem_cache_cpu *c; 2747 + struct detached_freelist { 2821 2748 struct page *page; 2822 - int i; 2749 + void *tail; 2750 + void *freelist; 2751 + int cnt; 2752 + }; 2823 2753 2824 - local_irq_disable(); 2825 - c = this_cpu_ptr(s->cpu_slab); 2754 + /* 2755 + * This function progressively scans the array with free objects (with 2756 + * a limited look ahead) and extract objects belonging to the same 2757 + * page. It builds a detached freelist directly within the given 2758 + * page/objects. This can happen without any need for 2759 + * synchronization, because the objects are owned by running process. 2760 + * The freelist is build up as a single linked list in the objects. 2761 + * The idea is, that this detached freelist can then be bulk 2762 + * transferred to the real freelist(s), but only requiring a single 2763 + * synchronization primitive. Look ahead in the array is limited due 2764 + * to performance reasons. 
2765 + */ 2766 + static int build_detached_freelist(struct kmem_cache *s, size_t size, 2767 + void **p, struct detached_freelist *df) 2768 + { 2769 + size_t first_skipped_index = 0; 2770 + int lookahead = 3; 2771 + void *object; 2826 2772 2827 - for (i = 0; i < size; i++) { 2828 - void *object = p[i]; 2773 + /* Always re-init detached_freelist */ 2774 + df->page = NULL; 2829 2775 2830 - BUG_ON(!object); 2831 - /* kmem cache debug support */ 2832 - s = cache_from_obj(s, object); 2833 - if (unlikely(!s)) 2834 - goto exit; 2835 - slab_free_hook(s, object); 2776 + do { 2777 + object = p[--size]; 2778 + } while (!object && size); 2836 2779 2837 - page = virt_to_head_page(object); 2780 + if (!object) 2781 + return 0; 2838 2782 2839 - if (c->page == page) { 2840 - /* Fastpath: local CPU free */ 2841 - set_freepointer(s, object, c->freelist); 2842 - c->freelist = object; 2843 - } else { 2844 - c->tid = next_tid(c->tid); 2845 - local_irq_enable(); 2846 - /* Slowpath: overhead locked cmpxchg_double_slab */ 2847 - __slab_free(s, page, object, _RET_IP_); 2848 - local_irq_disable(); 2849 - c = this_cpu_ptr(s->cpu_slab); 2783 + /* Start new detached freelist */ 2784 + set_freepointer(s, object, NULL); 2785 + df->page = virt_to_head_page(object); 2786 + df->tail = object; 2787 + df->freelist = object; 2788 + p[size] = NULL; /* mark object processed */ 2789 + df->cnt = 1; 2790 + 2791 + while (size) { 2792 + object = p[--size]; 2793 + if (!object) 2794 + continue; /* Skip processed objects */ 2795 + 2796 + /* df->page is always set at this point */ 2797 + if (df->page == virt_to_head_page(object)) { 2798 + /* Opportunity build freelist */ 2799 + set_freepointer(s, object, df->freelist); 2800 + df->freelist = object; 2801 + df->cnt++; 2802 + p[size] = NULL; /* mark object processed */ 2803 + 2804 + continue; 2850 2805 } 2806 + 2807 + /* Limit look ahead search */ 2808 + if (!--lookahead) 2809 + break; 2810 + 2811 + if (!first_skipped_index) 2812 + first_skipped_index = size + 1; 2851 2813 } 2852 - exit: 2853 - c->tid = next_tid(c->tid); 2854 - local_irq_enable(); 2814 + 2815 + return first_skipped_index; 2816 + } 2817 + 2818 + 2819 + /* Note that interrupts must be enabled when calling this function. */ 2820 + void kmem_cache_free_bulk(struct kmem_cache *orig_s, size_t size, void **p) 2821 + { 2822 + if (WARN_ON(!size)) 2823 + return; 2824 + 2825 + do { 2826 + struct detached_freelist df; 2827 + struct kmem_cache *s; 2828 + 2829 + /* Support for memcg */ 2830 + s = cache_from_obj(orig_s, p[size - 1]); 2831 + 2832 + size = build_detached_freelist(s, size, p, &df); 2833 + if (unlikely(!df.page)) 2834 + continue; 2835 + 2836 + slab_free(s, df.page, df.freelist, df.tail, df.cnt, _RET_IP_); 2837 + } while (likely(size)); 2855 2838 } 2856 2839 EXPORT_SYMBOL(kmem_cache_free_bulk); 2857 2840 2858 2841 /* Note that interrupts must be enabled when calling this function. 
*/ 2859 - bool kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size, 2860 - void **p) 2842 + int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size, 2843 + void **p) 2861 2844 { 2862 2845 struct kmem_cache_cpu *c; 2863 2846 int i; 2864 2847 2848 + /* memcg and kmem_cache debug support */ 2849 + s = slab_pre_alloc_hook(s, flags); 2850 + if (unlikely(!s)) 2851 + return false; 2865 2852 /* 2866 2853 * Drain objects in the per cpu slab, while disabling local 2867 2854 * IRQs, which protects against PREEMPT and interrupts ··· 2931 2804 void *object = c->freelist; 2932 2805 2933 2806 if (unlikely(!object)) { 2934 - local_irq_enable(); 2935 2807 /* 2936 2808 * Invoking slow path likely have side-effect 2937 2809 * of re-populating per CPU c->freelist 2938 2810 */ 2939 - p[i] = __slab_alloc(s, flags, NUMA_NO_NODE, 2811 + p[i] = ___slab_alloc(s, flags, NUMA_NO_NODE, 2940 2812 _RET_IP_, c); 2941 - if (unlikely(!p[i])) { 2942 - __kmem_cache_free_bulk(s, i, p); 2943 - return false; 2944 - } 2945 - local_irq_disable(); 2813 + if (unlikely(!p[i])) 2814 + goto error; 2815 + 2946 2816 c = this_cpu_ptr(s->cpu_slab); 2947 2817 continue; /* goto for-loop */ 2948 2818 } 2949 - 2950 - /* kmem_cache debug support */ 2951 - s = slab_pre_alloc_hook(s, flags); 2952 - if (unlikely(!s)) { 2953 - __kmem_cache_free_bulk(s, i, p); 2954 - c->tid = next_tid(c->tid); 2955 - local_irq_enable(); 2956 - return false; 2957 - } 2958 - 2959 2819 c->freelist = get_freepointer(s, object); 2960 2820 p[i] = object; 2961 - 2962 - /* kmem_cache debug support */ 2963 - slab_post_alloc_hook(s, flags, object); 2964 2821 } 2965 2822 c->tid = next_tid(c->tid); 2966 2823 local_irq_enable(); ··· 2957 2846 memset(p[j], 0, s->object_size); 2958 2847 } 2959 2848 2960 - return true; 2849 + /* memcg and kmem_cache debug support */ 2850 + slab_post_alloc_hook(s, flags, size, p); 2851 + return i; 2852 + error: 2853 + local_irq_enable(); 2854 + slab_post_alloc_hook(s, flags, i, p); 2855 + __kmem_cache_free_bulk(s, i, p); 2856 + return 0; 2961 2857 } 2962 2858 EXPORT_SYMBOL(kmem_cache_alloc_bulk); 2963 2859 ··· 3629 3511 __free_kmem_pages(page, compound_order(page)); 3630 3512 return; 3631 3513 } 3632 - slab_free(page->slab_cache, page, object, _RET_IP_); 3514 + slab_free(page->slab_cache, page, object, NULL, 1, _RET_IP_); 3633 3515 } 3634 3516 EXPORT_SYMBOL(kfree); 3635 3517
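The net effect of the slub rework: ___slab_alloc() exposes an interrupts-already-disabled slow path that the bulk allocator can call without toggling IRQs per object, and kmem_cache_free_bulk() groups the pointer array into per-page detached freelists so a single freelist splice retires many objects (free_debug_processing() walks that constructed list, hence its new head/tail/bulk_cnt arguments). From the caller's side the pairing is unchanged:

    void *objs[16];
    int got;

    got = kmem_cache_alloc_bulk(cachep, GFP_KERNEL, ARRAY_SIZE(objs), objs);
    if (got) {
        /* objects may be freed in any order, spanning any mix of pages */
        kmem_cache_free_bulk(cachep, got, objs);
    }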
+2 -3
mm/vmalloc.c
··· 1443 1443 vmap_debug_free_range(va->va_start, va->va_end); 1444 1444 kasan_free_shadow(vm); 1445 1445 free_unmap_vmap_area(va); 1446 - vm->size -= PAGE_SIZE; 1447 1446 1448 1447 return vm; 1449 1448 } ··· 1467 1468 return; 1468 1469 } 1469 1470 1470 - debug_check_no_locks_freed(addr, area->size); 1471 - debug_check_no_obj_freed(addr, area->size); 1471 + debug_check_no_locks_freed(addr, get_vm_area_size(area)); 1472 + debug_check_no_obj_freed(addr, get_vm_area_size(area)); 1472 1473 1473 1474 if (deallocate_pages) { 1474 1475 int i;
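Both changes keep the guard page from skewing sizes: remove_vm_area() no longer shrinks vm->size by a page, and the debug checks use get_vm_area_size(), the existing helper that reports only the mapped bytes. Roughly, from include/linux/vmalloc.h (shown for context):

    static inline size_t get_vm_area_size(const struct vm_struct *area)
    {
        /* usable size excludes the trailing guard page unless disabled */
        if (!(area->flags & VM_NO_GUARD))
            return area->size - PAGE_SIZE;
        return area->size;
    }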
+3 -3
net/bluetooth/af_bluetooth.c
··· 269 269 if (signal_pending(current) || !timeo) 270 270 break; 271 271 272 - set_bit(SOCK_ASYNC_WAITDATA, &sk->sk_socket->flags); 272 + sk_set_bit(SOCKWQ_ASYNC_WAITDATA, sk); 273 273 release_sock(sk); 274 274 timeo = schedule_timeout(timeo); 275 275 lock_sock(sk); 276 - clear_bit(SOCK_ASYNC_WAITDATA, &sk->sk_socket->flags); 276 + sk_clear_bit(SOCKWQ_ASYNC_WAITDATA, sk); 277 277 } 278 278 279 279 __set_current_state(TASK_RUNNING); ··· 439 439 if (!test_bit(BT_SK_SUSPEND, &bt_sk(sk)->flags) && sock_writeable(sk)) 440 440 mask |= POLLOUT | POLLWRNORM | POLLWRBAND; 441 441 else 442 - set_bit(SOCK_ASYNC_NOSPACE, &sk->sk_socket->flags); 442 + sk_set_bit(SOCKWQ_ASYNC_NOSPACE, sk); 443 443 444 444 return mask; 445 445 }
+6 -1
net/bluetooth/smp.c
··· 3027 3027 3028 3028 BT_DBG("chan %p", chan); 3029 3029 3030 + /* No need to call l2cap_chan_hold() here since we already own 3031 + * the reference taken in smp_new_conn_cb(). This is just the 3032 + * first time that we tie it to a specific pointer. The code in 3033 + * l2cap_core.c ensures that there's no risk this function wont 3034 + * get called if smp_new_conn_cb was previously called. 3035 + */ 3030 3036 conn->smp = chan; 3031 - l2cap_chan_hold(chan); 3032 3037 3033 3038 if (hcon->type == ACL_LINK && test_bit(HCI_CONN_ENCRYPT, &hcon->flags)) 3034 3039 bredr_pairing(chan);
+2 -2
net/caif/caif_socket.c
··· 323 323 !timeo) 324 324 break; 325 325 326 - set_bit(SOCK_ASYNC_WAITDATA, &sk->sk_socket->flags); 326 + sk_set_bit(SOCKWQ_ASYNC_WAITDATA, sk); 327 327 release_sock(sk); 328 328 timeo = schedule_timeout(timeo); 329 329 lock_sock(sk); ··· 331 331 if (sock_flag(sk, SOCK_DEAD)) 332 332 break; 333 333 334 - clear_bit(SOCK_ASYNC_WAITDATA, &sk->sk_socket->flags); 334 + sk_clear_bit(SOCKWQ_ASYNC_WAITDATA, sk); 335 335 } 336 336 337 337 finish_wait(sk_sleep(sk), &wait);
+1 -1
net/core/datagram.c
··· 785 785 if (sock_writeable(sk)) 786 786 mask |= POLLOUT | POLLWRNORM | POLLWRBAND; 787 787 else 788 - set_bit(SOCK_ASYNC_NOSPACE, &sk->sk_socket->flags); 788 + sk_set_bit(SOCKWQ_ASYNC_NOSPACE, sk); 789 789 790 790 return mask; 791 791 }
+2 -2
net/core/neighbour.c
··· 2215 2215 ndm->ndm_pad2 = 0; 2216 2216 ndm->ndm_flags = pn->flags | NTF_PROXY; 2217 2217 ndm->ndm_type = RTN_UNICAST; 2218 - ndm->ndm_ifindex = pn->dev->ifindex; 2218 + ndm->ndm_ifindex = pn->dev ? pn->dev->ifindex : 0; 2219 2219 ndm->ndm_state = NUD_NONE; 2220 2220 2221 2221 if (nla_put(skb, NDA_DST, tbl->key_len, pn->key)) ··· 2333 2333 if (h > s_h) 2334 2334 s_idx = 0; 2335 2335 for (n = tbl->phash_buckets[h], idx = 0; n; n = n->next) { 2336 - if (dev_net(n->dev) != net) 2336 + if (pneigh_net(n) != net) 2337 2337 continue; 2338 2338 if (idx < s_idx) 2339 2339 goto next;
+21 -11
net/core/netclassid_cgroup.c
··· 56 56 kfree(css_cls_state(css)); 57 57 } 58 58 59 - static int update_classid(const void *v, struct file *file, unsigned n) 59 + static int update_classid_sock(const void *v, struct file *file, unsigned n) 60 60 { 61 61 int err; 62 62 struct socket *sock = sock_from_file(file, &err); ··· 67 67 return 0; 68 68 } 69 69 70 + static void update_classid(struct cgroup_subsys_state *css, void *v) 71 + { 72 + struct css_task_iter it; 73 + struct task_struct *p; 74 + 75 + css_task_iter_start(css, &it); 76 + while ((p = css_task_iter_next(&it))) { 77 + task_lock(p); 78 + iterate_fd(p->files, 0, update_classid_sock, v); 79 + task_unlock(p); 80 + } 81 + css_task_iter_end(&it); 82 + } 83 + 70 84 static void cgrp_attach(struct cgroup_subsys_state *css, 71 85 struct cgroup_taskset *tset) 72 86 { 73 - struct cgroup_cls_state *cs = css_cls_state(css); 74 - void *v = (void *)(unsigned long)cs->classid; 75 - struct task_struct *p; 76 - 77 - cgroup_taskset_for_each(p, tset) { 78 - task_lock(p); 79 - iterate_fd(p->files, 0, update_classid, v); 80 - task_unlock(p); 81 - } 87 + update_classid(css, 88 + (void *)(unsigned long)css_cls_state(css)->classid); 82 89 } 83 90 84 91 static u64 read_classid(struct cgroup_subsys_state *css, struct cftype *cft) ··· 96 89 static int write_classid(struct cgroup_subsys_state *css, struct cftype *cft, 97 90 u64 value) 98 91 { 99 - css_cls_state(css)->classid = (u32) value; 92 + struct cgroup_cls_state *cs = css_cls_state(css); 100 93 94 + cs->classid = (u32)value; 95 + 96 + update_classid(css, (void *)(unsigned long)cs->classid); 101 97 return 0; 102 98 } 103 99
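The netclassid change factors the per-task fd walk out of cgrp_attach() so write_classid() can reuse it: a classid written to the cgroup file now reaches sockets that are already open, not just tasks attached afterwards. Roughly this shape, with plain arrays standing in for css_task_iter and iterate_fd():

#include <stdio.h>

struct toy_sock { unsigned int classid; };
struct toy_task { struct toy_sock *fds[4]; };

/* like update_classid(): push the value to every open socket */
static void toy_update_classid(struct toy_task *tasks, int ntasks,
			       unsigned int v)
{
	for (int t = 0; t < ntasks; t++)
		for (int f = 0; f < 4; f++)
			if (tasks[t].fds[f])
				tasks[t].fds[f]->classid = v;
}

int main(void)
{
	struct toy_sock a = { 0 }, b = { 0 };
	struct toy_task tasks[2] = { { { &a } }, { { &b } } };

	toy_update_classid(tasks, 2, 42);	/* attach and write paths */
	printf("%u %u\n", a.classid, b.classid);
	return 0;
}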
+2
net/core/scm.c
··· 305 305 err = put_user(cmlen, &cm->cmsg_len); 306 306 if (!err) { 307 307 cmlen = CMSG_SPACE(i*sizeof(int)); 308 + if (msg->msg_controllen < cmlen) 309 + cmlen = msg->msg_controllen; 308 310 msg->msg_control += cmlen; 309 311 msg->msg_controllen -= cmlen; 310 312 }
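The scm.c fix clamps how far the control-message cursor may advance: CMSG_SPACE() rounds up, so on the final cmsg it could step past the end of the user buffer and push msg_controllen negative. The guard in isolation, as a runnable toy:

#include <stdio.h>

static void toy_advance(size_t *off, size_t *remaining, size_t cmsg_space)
{
	if (*remaining < cmsg_space)	/* the added clamp */
		cmsg_space = *remaining;
	*off += cmsg_space;
	*remaining -= cmsg_space;
}

int main(void)
{
	size_t off = 0, remaining = 20;

	toy_advance(&off, &remaining, 16);
	toy_advance(&off, &remaining, 16);	/* overruns without the clamp */
	printf("off=%zu remaining=%zu\n", off, remaining);
	return 0;
}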
+5 -7
net/core/sock.c
··· 1530 1530 skb_queue_head_init(&newsk->sk_receive_queue); 1531 1531 skb_queue_head_init(&newsk->sk_write_queue); 1532 1532 1533 - spin_lock_init(&newsk->sk_dst_lock); 1534 1533 rwlock_init(&newsk->sk_callback_lock); 1535 1534 lockdep_set_class_and_name(&newsk->sk_callback_lock, 1536 1535 af_callback_keys + newsk->sk_family, ··· 1606 1607 { 1607 1608 u32 max_segs = 1; 1608 1609 1609 - __sk_dst_set(sk, dst); 1610 + sk_dst_set(sk, dst); 1610 1611 sk->sk_route_caps = dst->dev->features; 1611 1612 if (sk->sk_route_caps & NETIF_F_GSO) 1612 1613 sk->sk_route_caps |= NETIF_F_GSO_SOFTWARE; ··· 1814 1815 { 1815 1816 DEFINE_WAIT(wait); 1816 1817 1817 - clear_bit(SOCK_ASYNC_NOSPACE, &sk->sk_socket->flags); 1818 + sk_clear_bit(SOCKWQ_ASYNC_NOSPACE, sk); 1818 1819 for (;;) { 1819 1820 if (!timeo) 1820 1821 break; ··· 1860 1861 if (sk_wmem_alloc_get(sk) < sk->sk_sndbuf) 1861 1862 break; 1862 1863 1863 - set_bit(SOCK_ASYNC_NOSPACE, &sk->sk_socket->flags); 1864 + sk_set_bit(SOCKWQ_ASYNC_NOSPACE, sk); 1864 1865 set_bit(SOCK_NOSPACE, &sk->sk_socket->flags); 1865 1866 err = -EAGAIN; 1866 1867 if (!timeo) ··· 2047 2048 DEFINE_WAIT(wait); 2048 2049 2049 2050 prepare_to_wait(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE); 2050 - set_bit(SOCK_ASYNC_WAITDATA, &sk->sk_socket->flags); 2051 + sk_set_bit(SOCKWQ_ASYNC_WAITDATA, sk); 2051 2052 rc = sk_wait_event(sk, timeo, skb_peek_tail(&sk->sk_receive_queue) != skb); 2052 - clear_bit(SOCK_ASYNC_WAITDATA, &sk->sk_socket->flags); 2053 + sk_clear_bit(SOCKWQ_ASYNC_WAITDATA, sk); 2053 2054 finish_wait(sk_sleep(sk), &wait); 2054 2055 return rc; 2055 2056 } ··· 2387 2388 } else 2388 2389 sk->sk_wq = NULL; 2389 2390 2390 - spin_lock_init(&sk->sk_dst_lock); 2391 2391 rwlock_init(&sk->sk_callback_lock); 2392 2392 lockdep_set_class_and_name(&sk->sk_callback_lock, 2393 2393 af_callback_keys + sk->sk_family,
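Deleting the two spin_lock_init(&sk->sk_dst_lock) lines goes with making the cached route lockless: sk_setup_caps() now uses sk_dst_set(), which publishes the new entry with an atomic pointer exchange and releases the old one, so no per-socket dst lock is needed. The idea in C11-atomic form (toy refcounting, not the kernel's dst API):

#include <stdatomic.h>
#include <stdio.h>

struct toy_dst { atomic_int ref; const char *name; };

static _Atomic(struct toy_dst *) sk_dst_cache;

static void toy_dst_release(struct toy_dst *d)
{
	if (d && atomic_fetch_sub(&d->ref, 1) == 1)
		printf("freed %s\n", d->name);
}

static void toy_sk_dst_set(struct toy_dst *dst)
{
	/* lockless publish; old entry released after the swap */
	toy_dst_release(atomic_exchange(&sk_dst_cache, dst));
}

int main(void)
{
	static struct toy_dst a = { .name = "a" }, b = { .name = "b" };

	atomic_init(&a.ref, 1);
	atomic_init(&b.ref, 1);
	toy_sk_dst_set(&a);
	toy_sk_dst_set(&b);	/* swaps in b, drops the last ref on a */
	return 0;
}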
+3 -3
net/core/stream.c
··· 39 39 wake_up_interruptible_poll(&wq->wait, POLLOUT | 40 40 POLLWRNORM | POLLWRBAND); 41 41 if (wq && wq->fasync_list && !(sk->sk_shutdown & SEND_SHUTDOWN)) 42 - sock_wake_async(sock, SOCK_WAKE_SPACE, POLL_OUT); 42 + sock_wake_async(wq, SOCK_WAKE_SPACE, POLL_OUT); 43 43 rcu_read_unlock(); 44 44 } 45 45 } ··· 126 126 current_timeo = vm_wait = (prandom_u32() % (HZ / 5)) + 2; 127 127 128 128 while (1) { 129 - set_bit(SOCK_ASYNC_NOSPACE, &sk->sk_socket->flags); 129 + sk_set_bit(SOCKWQ_ASYNC_NOSPACE, sk); 130 130 131 131 prepare_to_wait(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE); 132 132 ··· 139 139 } 140 140 if (signal_pending(current)) 141 141 goto do_interrupted; 142 - clear_bit(SOCK_ASYNC_NOSPACE, &sk->sk_socket->flags); 142 + sk_clear_bit(SOCKWQ_ASYNC_NOSPACE, sk); 143 143 if (sk_stream_memory_free(sk) && !vm_wait) 144 144 break; 145 145
+23 -14
net/dccp/ipv6.c
··· 202 202 security_req_classify_flow(req, flowi6_to_flowi(&fl6)); 203 203 204 204 205 - final_p = fl6_update_dst(&fl6, np->opt, &final); 205 + rcu_read_lock(); 206 + final_p = fl6_update_dst(&fl6, rcu_dereference(np->opt), &final); 207 + rcu_read_unlock(); 206 208 207 209 dst = ip6_dst_lookup_flow(sk, &fl6, final_p); 208 210 if (IS_ERR(dst)) { ··· 221 219 &ireq->ir_v6_loc_addr, 222 220 &ireq->ir_v6_rmt_addr); 223 221 fl6.daddr = ireq->ir_v6_rmt_addr; 224 - err = ip6_xmit(sk, skb, &fl6, np->opt, np->tclass); 222 + rcu_read_lock(); 223 + err = ip6_xmit(sk, skb, &fl6, rcu_dereference(np->opt), 224 + np->tclass); 225 + rcu_read_unlock(); 225 226 err = net_xmit_eval(err); 226 227 } 227 228 ··· 392 387 struct inet_request_sock *ireq = inet_rsk(req); 393 388 struct ipv6_pinfo *newnp; 394 389 const struct ipv6_pinfo *np = inet6_sk(sk); 390 + struct ipv6_txoptions *opt; 395 391 struct inet_sock *newinet; 396 392 struct dccp6_sock *newdp6; 397 393 struct sock *newsk; ··· 459 453 * comment in that function for the gory details. -acme 460 454 */ 461 455 462 - __ip6_dst_store(newsk, dst, NULL, NULL); 456 + ip6_dst_store(newsk, dst, NULL, NULL); 463 457 newsk->sk_route_caps = dst->dev->features & ~(NETIF_F_IP_CSUM | 464 458 NETIF_F_TSO); 465 459 newdp6 = (struct dccp6_sock *)newsk; ··· 494 488 * Yes, keeping reference count would be much more clever, but we make 495 489 * one more one thing there: reattach optmem to newsk. 496 490 */ 497 - if (np->opt != NULL) 498 - newnp->opt = ipv6_dup_options(newsk, np->opt); 499 - 491 + opt = rcu_dereference(np->opt); 492 + if (opt) { 493 + opt = ipv6_dup_options(newsk, opt); 494 + RCU_INIT_POINTER(newnp->opt, opt); 495 + } 500 496 inet_csk(newsk)->icsk_ext_hdr_len = 0; 501 - if (newnp->opt != NULL) 502 - inet_csk(newsk)->icsk_ext_hdr_len = (newnp->opt->opt_nflen + 503 - newnp->opt->opt_flen); 497 + if (opt) 498 + inet_csk(newsk)->icsk_ext_hdr_len = opt->opt_nflen + 499 + opt->opt_flen; 504 500 505 501 dccp_sync_mss(newsk, dst_mtu(dst)); 506 502 ··· 765 757 struct ipv6_pinfo *np = inet6_sk(sk); 766 758 struct dccp_sock *dp = dccp_sk(sk); 767 759 struct in6_addr *saddr = NULL, *final_p, final; 760 + struct ipv6_txoptions *opt; 768 761 struct flowi6 fl6; 769 762 struct dst_entry *dst; 770 763 int addr_type; ··· 865 856 fl6.fl6_sport = inet->inet_sport; 866 857 security_sk_classify_flow(sk, flowi6_to_flowi(&fl6)); 867 858 868 - final_p = fl6_update_dst(&fl6, np->opt, &final); 859 + opt = rcu_dereference_protected(np->opt, sock_owned_by_user(sk)); 860 + final_p = fl6_update_dst(&fl6, opt, &final); 869 861 870 862 dst = ip6_dst_lookup_flow(sk, &fl6, final_p); 871 863 if (IS_ERR(dst)) { ··· 883 873 np->saddr = *saddr; 884 874 inet->inet_rcv_saddr = LOOPBACK4_IPV6; 885 875 886 - __ip6_dst_store(sk, dst, NULL, NULL); 876 + ip6_dst_store(sk, dst, NULL, NULL); 887 877 888 878 icsk->icsk_ext_hdr_len = 0; 889 - if (np->opt != NULL) 890 - icsk->icsk_ext_hdr_len = (np->opt->opt_flen + 891 - np->opt->opt_nflen); 879 + if (opt) 880 + icsk->icsk_ext_hdr_len = opt->opt_flen + opt->opt_nflen; 892 881 893 882 inet->inet_dport = usin->sin6_port; 894 883
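This dccp hunk shows the pattern that repeats through the IPv6 changes below: np->opt becomes an RCU-managed pointer, so lockless readers bracket the access with rcu_read_lock()/rcu_dereference(), and paths that hold the socket lock use rcu_dereference_protected() instead. Modeled with C11 acquire/release atomics (the kernel uses real RCU; names here are toys):

#include <stdatomic.h>
#include <stdio.h>

struct toy_txopts { int opt_flen, opt_nflen; };

static _Atomic(struct toy_txopts *) np_opt;

static void toy_reader(void)
{
	/* kernel: rcu_read_lock(); opt = rcu_dereference(np->opt); */
	struct toy_txopts *opt =
		atomic_load_explicit(&np_opt, memory_order_acquire);

	printf("ext hdr len %d\n", opt ? opt->opt_flen + opt->opt_nflen : 0);
	/* kernel: rcu_read_unlock(); */
}

int main(void)
{
	static struct toy_txopts o = { 8, 16 };

	/* writer publishes a fully built replacement */
	atomic_store_explicit(&np_opt, &o, memory_order_release);
	toy_reader();
	return 0;
}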
+1 -2
net/dccp/proto.c
··· 339 339 if (sk_stream_is_writeable(sk)) { 340 340 mask |= POLLOUT | POLLWRNORM; 341 341 } else { /* send SIGIO later */ 342 - set_bit(SOCK_ASYNC_NOSPACE, 343 - &sk->sk_socket->flags); 342 + sk_set_bit(SOCKWQ_ASYNC_NOSPACE, sk); 344 343 set_bit(SOCK_NOSPACE, &sk->sk_socket->flags); 345 344 346 345 /* Race breaker. If space is freed after
+4 -4
net/decnet/af_decnet.c
··· 1747 1747 } 1748 1748 1749 1749 prepare_to_wait(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE); 1750 - set_bit(SOCK_ASYNC_WAITDATA, &sk->sk_socket->flags); 1750 + sk_set_bit(SOCKWQ_ASYNC_WAITDATA, sk); 1751 1751 sk_wait_event(sk, &timeo, dn_data_ready(sk, queue, flags, target)); 1752 - clear_bit(SOCK_ASYNC_WAITDATA, &sk->sk_socket->flags); 1752 + sk_clear_bit(SOCKWQ_ASYNC_WAITDATA, sk); 1753 1753 finish_wait(sk_sleep(sk), &wait); 1754 1754 } 1755 1755 ··· 2004 2004 } 2005 2005 2006 2006 prepare_to_wait(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE); 2007 - set_bit(SOCK_ASYNC_WAITDATA, &sk->sk_socket->flags); 2007 + sk_set_bit(SOCKWQ_ASYNC_WAITDATA, sk); 2008 2008 sk_wait_event(sk, &timeo, 2009 2009 !dn_queue_too_long(scp, queue, flags)); 2010 - clear_bit(SOCK_ASYNC_WAITDATA, &sk->sk_socket->flags); 2010 + sk_clear_bit(SOCKWQ_ASYNC_WAITDATA, sk); 2011 2011 finish_wait(sk_sleep(sk), &wait); 2012 2012 continue; 2013 2013 }
+1 -1
net/dns_resolver/dns_query.c
··· 67 67 * Returns the size of the result on success, -ve error code otherwise. 68 68 */ 69 69 int dns_query(const char *type, const char *name, size_t namelen, 70 - const char *options, char **_result, time_t *_expiry) 70 + const char *options, char **_result, time64_t *_expiry) 71 71 { 72 72 struct key *rkey; 73 73 const struct user_key_payload *upayload;
+1 -1
net/hsr/hsr_device.c
··· 312 312 return; 313 313 314 314 out: 315 - WARN_ON_ONCE("HSR: Could not send supervision frame\n"); 315 + WARN_ONCE(1, "HSR: Could not send supervision frame\n"); 316 316 kfree_skb(skb); 317 317 } 318 318
+3 -2
net/ipv4/igmp.c
··· 2126 2126 ASSERT_RTNL(); 2127 2127 2128 2128 in_dev = ip_mc_find_dev(net, imr); 2129 - if (!in_dev) { 2129 + if (!imr->imr_ifindex && !imr->imr_address.s_addr && !in_dev) { 2130 2130 ret = -ENODEV; 2131 2131 goto out; 2132 2132 } ··· 2147 2147 2148 2148 *imlp = iml->next_rcu; 2149 2149 2150 - ip_mc_dec_group(in_dev, group); 2150 + if (in_dev) 2151 + ip_mc_dec_group(in_dev, group); 2151 2152 2152 2153 /* decrease mem now to avoid the memleak warning */ 2153 2154 atomic_sub(sizeof(*iml), &sk->sk_omem_alloc);
+8 -15
net/ipv4/ipmr.c
··· 109 109 struct mfc_cache *c, struct rtmsg *rtm); 110 110 static void mroute_netlink_event(struct mr_table *mrt, struct mfc_cache *mfc, 111 111 int cmd); 112 - static void mroute_clean_tables(struct mr_table *mrt); 112 + static void mroute_clean_tables(struct mr_table *mrt, bool all); 113 113 static void ipmr_expire_process(unsigned long arg); 114 114 115 115 #ifdef CONFIG_IP_MROUTE_MULTIPLE_TABLES ··· 332 332 static void ipmr_free_table(struct mr_table *mrt) 333 333 { 334 334 del_timer_sync(&mrt->ipmr_expire_timer); 335 - mroute_clean_tables(mrt); 335 + mroute_clean_tables(mrt, true); 336 336 kfree(mrt); 337 337 } 338 338 ··· 431 431 return dev; 432 432 433 433 failure: 434 - /* allow the register to be completed before unregistering. */ 435 - rtnl_unlock(); 436 - rtnl_lock(); 437 - 438 434 unregister_netdevice(dev); 439 435 return NULL; 440 436 } ··· 514 518 return dev; 515 519 516 520 failure: 517 - /* allow the register to be completed before unregistering. */ 518 - rtnl_unlock(); 519 - rtnl_lock(); 520 - 521 521 unregister_netdevice(dev); 522 522 return NULL; 523 523 } ··· 1181 1189 } 1182 1190 1183 1191 /* Close the multicast socket, and clear the vif tables etc */ 1184 - static void mroute_clean_tables(struct mr_table *mrt) 1192 + static void mroute_clean_tables(struct mr_table *mrt, bool all) 1185 1193 { 1186 1194 int i; 1187 1195 LIST_HEAD(list); ··· 1189 1197 1190 1198 /* Shut down all active vif entries */ 1191 1199 for (i = 0; i < mrt->maxvif; i++) { 1192 - if (!(mrt->vif_table[i].flags & VIFF_STATIC)) 1193 - vif_delete(mrt, i, 0, &list); 1200 + if (!all && (mrt->vif_table[i].flags & VIFF_STATIC)) 1201 + continue; 1202 + vif_delete(mrt, i, 0, &list); 1194 1203 } 1195 1204 unregister_netdevice_many(&list); 1196 1205 1197 1206 /* Wipe the cache */ 1198 1207 for (i = 0; i < MFC_LINES; i++) { 1199 1208 list_for_each_entry_safe(c, next, &mrt->mfc_cache_array[i], list) { 1200 - if (c->mfc_flags & MFC_STATIC) 1209 + if (!all && (c->mfc_flags & MFC_STATIC)) 1201 1210 continue; 1202 1211 list_del_rcu(&c->list); 1203 1212 mroute_netlink_event(mrt, c, RTM_DELROUTE); ··· 1233 1240 NETCONFA_IFINDEX_ALL, 1234 1241 net->ipv4.devconf_all); 1235 1242 RCU_INIT_POINTER(mrt->mroute_sk, NULL); 1236 - mroute_clean_tables(mrt); 1243 + mroute_clean_tables(mrt, false); 1237 1244 } 1238 1245 } 1239 1246 rtnl_unlock();
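mroute_clean_tables() gains a bool `all`: a plain socket close still spares VIFF_STATIC/MFC_STATIC entries, but ipmr_free_table() passes true so destroying the table really empties it (ip6mr.c below mirrors the same change). The filter shape, reduced to a runnable toy:

#include <stdbool.h>
#include <stdio.h>

struct toy_entry { bool is_static; bool live; };

static void toy_clean_table(struct toy_entry *e, int n, bool all)
{
	for (int i = 0; i < n; i++) {
		if (!all && e[i].is_static)
			continue;	/* static entries survive a close */
		e[i].live = false;
	}
}

int main(void)
{
	struct toy_entry tbl[3] = {
		{ true, true }, { false, true }, { true, true },
	};

	toy_clean_table(tbl, 3, false);	/* socket close: entry 1 goes */
	toy_clean_table(tbl, 3, true);	/* table free: everything goes */
	for (int i = 0; i < 3; i++)
		printf("entry %d live=%d\n", i, tbl[i].live);
	return 0;
}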
+3 -4
net/ipv4/tcp.c
··· 517 517 if (sk_stream_is_writeable(sk)) { 518 518 mask |= POLLOUT | POLLWRNORM; 519 519 } else { /* send SIGIO later */ 520 - set_bit(SOCK_ASYNC_NOSPACE, 521 - &sk->sk_socket->flags); 520 + sk_set_bit(SOCKWQ_ASYNC_NOSPACE, sk); 522 521 set_bit(SOCK_NOSPACE, &sk->sk_socket->flags); 523 522 524 523 /* Race breaker. If space is freed after ··· 905 906 goto out_err; 906 907 } 907 908 908 - clear_bit(SOCK_ASYNC_NOSPACE, &sk->sk_socket->flags); 909 + sk_clear_bit(SOCKWQ_ASYNC_NOSPACE, sk); 909 910 910 911 mss_now = tcp_send_mss(sk, &size_goal, flags); 911 912 copied = 0; ··· 1133 1134 } 1134 1135 1135 1136 /* This should be in poll */ 1136 - clear_bit(SOCK_ASYNC_NOSPACE, &sk->sk_socket->flags); 1137 + sk_clear_bit(SOCKWQ_ASYNC_NOSPACE, sk); 1137 1138 1138 1139 mss_now = tcp_send_mss(sk, &size_goal, flags); 1139 1140
+20 -3
net/ipv4/tcp_input.c
··· 4481 4481 int tcp_send_rcvq(struct sock *sk, struct msghdr *msg, size_t size) 4482 4482 { 4483 4483 struct sk_buff *skb; 4484 + int err = -ENOMEM; 4485 + int data_len = 0; 4484 4486 bool fragstolen; 4485 4487 4486 4488 if (size == 0) 4487 4489 return 0; 4488 4490 4489 - skb = alloc_skb(size, sk->sk_allocation); 4491 + if (size > PAGE_SIZE) { 4492 + int npages = min_t(size_t, size >> PAGE_SHIFT, MAX_SKB_FRAGS); 4493 + 4494 + data_len = npages << PAGE_SHIFT; 4495 + size = data_len + (size & ~PAGE_MASK); 4496 + } 4497 + skb = alloc_skb_with_frags(size - data_len, data_len, 4498 + PAGE_ALLOC_COSTLY_ORDER, 4499 + &err, sk->sk_allocation); 4490 4500 if (!skb) 4491 4501 goto err; 4502 + 4503 + skb_put(skb, size - data_len); 4504 + skb->data_len = data_len; 4505 + skb->len = size; 4492 4506 4493 4507 if (tcp_try_rmem_schedule(sk, skb, skb->truesize)) 4494 4508 goto err_free; 4495 4509 4496 - if (memcpy_from_msg(skb_put(skb, size), msg, size)) 4510 + err = skb_copy_datagram_from_iter(skb, 0, &msg->msg_iter, size); 4511 + if (err) 4497 4512 goto err_free; 4498 4513 4499 4514 TCP_SKB_CB(skb)->seq = tcp_sk(sk)->rcv_nxt; ··· 4524 4509 err_free: 4525 4510 kfree_skb(skb); 4526 4511 err: 4527 - return -ENOMEM; 4512 + return err; 4513 + 4528 4514 } 4529 4515 4530 4516 static void tcp_data_queue(struct sock *sk, struct sk_buff *skb) ··· 5683 5667 } 5684 5668 5685 5669 tp->rcv_nxt = TCP_SKB_CB(skb)->seq + 1; 5670 + tp->copied_seq = tp->rcv_nxt; 5686 5671 tp->rcv_wup = TCP_SKB_CB(skb)->seq + 1; 5687 5672 5688 5673 /* RFC1323: The window in SYN & SYN/ACK segments is
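The tcp_send_rcvq() rewrite stops requesting one big linear skb and instead pushes everything past the first page into page fragments via alloc_skb_with_frags(), capped at MAX_SKB_FRAGS; a request larger than the cap is simply truncated to what fits. The size arithmetic on its own (the 4 KiB page and 17-frag cap are typical values, assumed here for illustration):

#include <assert.h>
#include <stddef.h>
#include <stdio.h>

#define TOY_PAGE_SHIFT	12
#define TOY_PAGE_SIZE	(1UL << TOY_PAGE_SHIFT)
#define TOY_MAX_FRAGS	17UL

/* returns total bytes held; splits them into linear and frag parts */
static size_t toy_split(size_t size, size_t *linear, size_t *data_len)
{
	*data_len = 0;
	if (size > TOY_PAGE_SIZE) {
		size_t npages = size >> TOY_PAGE_SHIFT;

		if (npages > TOY_MAX_FRAGS)
			npages = TOY_MAX_FRAGS;
		*data_len = npages << TOY_PAGE_SHIFT;
		size = *data_len + (size & (TOY_PAGE_SIZE - 1));
	}
	*linear = size - *data_len;
	return size;
}

int main(void)
{
	size_t lin, frag;

	assert(toy_split(512, &lin, &frag) == 512 && frag == 0);
	assert(toy_split(5000, &lin, &frag) == 5000 && lin == 904);
	printf("1 MiB request fits %zu bytes\n",
	       toy_split(1UL << 20, &lin, &frag));
	return 0;
}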
+2 -1
net/ipv4/tcp_ipv4.c
··· 921 921 } 922 922 923 923 md5sig = rcu_dereference_protected(tp->md5sig_info, 924 - sock_owned_by_user(sk)); 924 + sock_owned_by_user(sk) || 925 + lockdep_is_held(&sk->sk_lock.slock)); 925 926 if (!md5sig) { 926 927 md5sig = kmalloc(sizeof(*md5sig), gfp); 927 928 if (!md5sig)
+13 -1
net/ipv4/tcp_timer.c
··· 168 168 dst_negative_advice(sk); 169 169 if (tp->syn_fastopen || tp->syn_data) 170 170 tcp_fastopen_cache_set(sk, 0, NULL, true, 0); 171 - if (tp->syn_data) 171 + if (tp->syn_data && icsk->icsk_retransmits == 1) 172 172 NET_INC_STATS_BH(sock_net(sk), 173 173 LINUX_MIB_TCPFASTOPENACTIVEFAIL); 174 174 } ··· 176 176 syn_set = true; 177 177 } else { 178 178 if (retransmits_timed_out(sk, sysctl_tcp_retries1, 0, 0)) { 179 + /* Some middle-boxes may black-hole Fast Open _after_ 180 + * the handshake. Therefore we conservatively disable 181 + * Fast Open on this path on recurring timeouts with 182 + * few or zero bytes acked after Fast Open. 183 + */ 184 + if (tp->syn_data_acked && 185 + tp->bytes_acked <= tp->rx_opt.mss_clamp) { 186 + tcp_fastopen_cache_set(sk, 0, NULL, true, 0); 187 + if (icsk->icsk_retransmits == sysctl_tcp_retries1) 188 + NET_INC_STATS_BH(sock_net(sk), 189 + LINUX_MIB_TCPFASTOPENACTIVEFAIL); 190 + } 179 191 /* Black hole detection */ 180 192 tcp_mtu_probing(icsk, sk); 181 193
-1
net/ipv4/udp.c
··· 100 100 #include <linux/slab.h> 101 101 #include <net/tcp_states.h> 102 102 #include <linux/skbuff.h> 103 - #include <linux/netdevice.h> 104 103 #include <linux/proc_fs.h> 105 104 #include <linux/seq_file.h> 106 105 #include <net/net_namespace.h>
+1 -1
net/ipv6/addrconf.c
··· 3642 3642 3643 3643 /* send a neighbour solicitation for our addr */ 3644 3644 addrconf_addr_solict_mult(&ifp->addr, &mcaddr); 3645 - ndisc_send_ns(ifp->idev->dev, &ifp->addr, &mcaddr, &in6addr_any, NULL); 3645 + ndisc_send_ns(ifp->idev->dev, &ifp->addr, &mcaddr, &in6addr_any); 3646 3646 out: 3647 3647 in6_ifa_put(ifp); 3648 3648 rtnl_unlock();
+10 -5
net/ipv6/af_inet6.c
··· 428 428 429 429 /* Free tx options */ 430 430 431 - opt = xchg(&np->opt, NULL); 432 - if (opt) 433 - sock_kfree_s(sk, opt, opt->tot_len); 431 + opt = xchg((__force struct ipv6_txoptions **)&np->opt, NULL); 432 + if (opt) { 433 + atomic_sub(opt->tot_len, &sk->sk_omem_alloc); 434 + txopt_put(opt); 435 + } 434 436 } 435 437 EXPORT_SYMBOL_GPL(inet6_destroy_sock); 436 438 ··· 661 659 fl6.fl6_sport = inet->inet_sport; 662 660 security_sk_classify_flow(sk, flowi6_to_flowi(&fl6)); 663 661 664 - final_p = fl6_update_dst(&fl6, np->opt, &final); 662 + rcu_read_lock(); 663 + final_p = fl6_update_dst(&fl6, rcu_dereference(np->opt), 664 + &final); 665 + rcu_read_unlock(); 665 666 666 667 dst = ip6_dst_lookup_flow(sk, &fl6, final_p); 667 668 if (IS_ERR(dst)) { ··· 673 668 return PTR_ERR(dst); 674 669 } 675 670 676 - __ip6_dst_store(sk, dst, NULL, NULL); 671 + ip6_dst_store(sk, dst, NULL, NULL); 677 672 } 678 673 679 674 return 0;
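af_inet6.c shows both halves of the new ipv6_txoptions lifetime rule: teardown drops the socket's reference with txopt_put() (after adjusting sk_omem_alloc) rather than freeing outright, and the send paths below pin the options with txopt_get()/txopt_put() for the duration of a transmit. A toy of the get/put half (names and layout invented; the kernel embeds the refcount in ipv6_txoptions):

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct toy_txopts {
	atomic_int refcnt;
	int tot_len;
};

static struct toy_txopts *toy_txopt_get(struct toy_txopts *opt)
{
	if (opt)
		atomic_fetch_add(&opt->refcnt, 1);
	return opt;
}

static void toy_txopt_put(struct toy_txopts *opt)
{
	if (opt && atomic_fetch_sub(&opt->refcnt, 1) == 1)
		free(opt);	/* last reference frees */
}

int main(void)
{
	struct toy_txopts *opt = malloc(sizeof(*opt));

	atomic_init(&opt->refcnt, 1);	/* like atomic_set() on creation */
	opt->tot_len = 64;

	struct toy_txopts *user = toy_txopt_get(opt);	/* sendmsg pins it */
	toy_txopt_put(opt);		/* socket drops its pointer */
	printf("still usable: %d\n", user->tot_len);
	toy_txopt_put(user);		/* send finishes, memory goes */
	return 0;
}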
+3 -1
net/ipv6/datagram.c
··· 167 167 168 168 security_sk_classify_flow(sk, flowi6_to_flowi(&fl6)); 169 169 170 - opt = flowlabel ? flowlabel->opt : np->opt; 170 + rcu_read_lock(); 171 + opt = flowlabel ? flowlabel->opt : rcu_dereference(np->opt); 171 172 final_p = fl6_update_dst(&fl6, opt, &final); 173 + rcu_read_unlock(); 172 174 173 175 dst = ip6_dst_lookup_flow(sk, &fl6, final_p); 174 176 err = 0;
+2 -1
net/ipv6/exthdrs.c
··· 727 727 *((char **)&opt2->dst1opt) += dif; 728 728 if (opt2->srcrt) 729 729 *((char **)&opt2->srcrt) += dif; 730 + atomic_set(&opt2->refcnt, 1); 730 731 } 731 732 return opt2; 732 733 } ··· 791 790 return ERR_PTR(-ENOBUFS); 792 791 793 792 memset(opt2, 0, tot_len); 794 - 793 + atomic_set(&opt2->refcnt, 1); 795 794 opt2->tot_len = tot_len; 796 795 p = (char *)(opt2 + 1); 797 796
-14
net/ipv6/icmp.c
··· 834 834 security_sk_classify_flow(sk, flowi6_to_flowi(fl6)); 835 835 } 836 836 837 - /* 838 - * Special lock-class for __icmpv6_sk: 839 - */ 840 - static struct lock_class_key icmpv6_socket_sk_dst_lock_key; 841 - 842 837 static int __net_init icmpv6_sk_init(struct net *net) 843 838 { 844 839 struct sock *sk; ··· 854 859 } 855 860 856 861 net->ipv6.icmp_sk[i] = sk; 857 - 858 - /* 859 - * Split off their lock-class, because sk->sk_dst_lock 860 - * gets used from softirqs, which is safe for 861 - * __icmpv6_sk (because those never get directly used 862 - * via userspace syscalls), but unsafe for normal sockets. 863 - */ 864 - lockdep_set_class(&sk->sk_dst_lock, 865 - &icmpv6_socket_sk_dst_lock_key); 866 862 867 863 /* Enough space for 2 64K ICMP packets, including 868 864 * sk_buff struct overhead.
+9 -12
net/ipv6/inet6_connection_sock.c
··· 78 78 memset(fl6, 0, sizeof(*fl6)); 79 79 fl6->flowi6_proto = proto; 80 80 fl6->daddr = ireq->ir_v6_rmt_addr; 81 - final_p = fl6_update_dst(fl6, np->opt, &final); 81 + rcu_read_lock(); 82 + final_p = fl6_update_dst(fl6, rcu_dereference(np->opt), &final); 83 + rcu_read_unlock(); 82 84 fl6->saddr = ireq->ir_v6_loc_addr; 83 85 fl6->flowi6_oif = ireq->ir_iif; 84 86 fl6->flowi6_mark = ireq->ir_mark; ··· 111 109 EXPORT_SYMBOL_GPL(inet6_csk_addr2sockaddr); 112 110 113 111 static inline 114 - void __inet6_csk_dst_store(struct sock *sk, struct dst_entry *dst, 115 - const struct in6_addr *daddr, 116 - const struct in6_addr *saddr) 117 - { 118 - __ip6_dst_store(sk, dst, daddr, saddr); 119 - } 120 - 121 - static inline 122 112 struct dst_entry *__inet6_csk_dst_check(struct sock *sk, u32 cookie) 123 113 { 124 114 return __sk_dst_check(sk, cookie); ··· 136 142 fl6->fl6_dport = inet->inet_dport; 137 143 security_sk_classify_flow(sk, flowi6_to_flowi(fl6)); 138 144 139 - final_p = fl6_update_dst(fl6, np->opt, &final); 145 + rcu_read_lock(); 146 + final_p = fl6_update_dst(fl6, rcu_dereference(np->opt), &final); 147 + rcu_read_unlock(); 140 148 141 149 dst = __inet6_csk_dst_check(sk, np->dst_cookie); 142 150 if (!dst) { 143 151 dst = ip6_dst_lookup_flow(sk, fl6, final_p); 144 152 145 153 if (!IS_ERR(dst)) 146 - __inet6_csk_dst_store(sk, dst, NULL, NULL); 154 + ip6_dst_store(sk, dst, NULL, NULL); 147 155 } 148 156 return dst; 149 157 } ··· 171 175 /* Restore final destination back after routing done */ 172 176 fl6.daddr = sk->sk_v6_daddr; 173 177 174 - res = ip6_xmit(sk, skb, &fl6, np->opt, np->tclass); 178 + res = ip6_xmit(sk, skb, &fl6, rcu_dereference(np->opt), 179 + np->tclass); 175 180 rcu_read_unlock(); 176 181 return res; 177 182 }
+1 -1
net/ipv6/ip6_tunnel.c
··· 177 177 int i; 178 178 179 179 for_each_possible_cpu(i) 180 - ip6_tnl_per_cpu_dst_set(raw_cpu_ptr(t->dst_cache), NULL); 180 + ip6_tnl_per_cpu_dst_set(per_cpu_ptr(t->dst_cache, i), NULL); 181 181 } 182 182 EXPORT_SYMBOL_GPL(ip6_tnl_dst_reset); 183 183
+8 -11
net/ipv6/ip6mr.c
··· 118 118 int cmd); 119 119 static int ip6mr_rtm_dumproute(struct sk_buff *skb, 120 120 struct netlink_callback *cb); 121 - static void mroute_clean_tables(struct mr6_table *mrt); 121 + static void mroute_clean_tables(struct mr6_table *mrt, bool all); 122 122 static void ipmr_expire_process(unsigned long arg); 123 123 124 124 #ifdef CONFIG_IPV6_MROUTE_MULTIPLE_TABLES ··· 334 334 static void ip6mr_free_table(struct mr6_table *mrt) 335 335 { 336 336 del_timer_sync(&mrt->ipmr_expire_timer); 337 - mroute_clean_tables(mrt); 337 + mroute_clean_tables(mrt, true); 338 338 kfree(mrt); 339 339 } 340 340 ··· 765 765 return dev; 766 766 767 767 failure: 768 - /* allow the register to be completed before unregistering. */ 769 - rtnl_unlock(); 770 - rtnl_lock(); 771 - 772 768 unregister_netdevice(dev); 773 769 return NULL; 774 770 } ··· 1538 1542 * Close the multicast socket, and clear the vif tables etc 1539 1543 */ 1540 1544 1541 - static void mroute_clean_tables(struct mr6_table *mrt) 1545 + static void mroute_clean_tables(struct mr6_table *mrt, bool all) 1542 1546 { 1543 1547 int i; 1544 1548 LIST_HEAD(list); ··· 1548 1552 * Shut down all active vif entries 1549 1553 */ 1550 1554 for (i = 0; i < mrt->maxvif; i++) { 1551 - if (!(mrt->vif6_table[i].flags & VIFF_STATIC)) 1552 - mif6_delete(mrt, i, &list); 1555 + if (!all && (mrt->vif6_table[i].flags & VIFF_STATIC)) 1556 + continue; 1557 + mif6_delete(mrt, i, &list); 1553 1558 } 1554 1559 unregister_netdevice_many(&list); 1555 1560 ··· 1559 1562 */ 1560 1563 for (i = 0; i < MFC6_LINES; i++) { 1561 1564 list_for_each_entry_safe(c, next, &mrt->mfc6_cache_array[i], list) { 1562 - if (c->mfc_flags & MFC_STATIC) 1565 + if (!all && (c->mfc_flags & MFC_STATIC)) 1563 1566 continue; 1564 1567 write_lock_bh(&mrt_lock); 1565 1568 list_del(&c->list); ··· 1622 1625 net->ipv6.devconf_all); 1623 1626 write_unlock_bh(&mrt_lock); 1624 1627 1625 - mroute_clean_tables(mrt); 1628 + mroute_clean_tables(mrt, false); 1626 1629 err = 0; 1627 1630 break; 1628 1631 }
+22 -11
net/ipv6/ipv6_sockglue.c
··· 111 111 icsk->icsk_sync_mss(sk, icsk->icsk_pmtu_cookie); 112 112 } 113 113 } 114 - opt = xchg(&inet6_sk(sk)->opt, opt); 114 + opt = xchg((__force struct ipv6_txoptions **)&inet6_sk(sk)->opt, 115 + opt); 115 116 sk_dst_reset(sk); 116 117 117 118 return opt; ··· 232 231 sk->sk_socket->ops = &inet_dgram_ops; 233 232 sk->sk_family = PF_INET; 234 233 } 235 - opt = xchg(&np->opt, NULL); 236 - if (opt) 237 - sock_kfree_s(sk, opt, opt->tot_len); 234 + opt = xchg((__force struct ipv6_txoptions **)&np->opt, 235 + NULL); 236 + if (opt) { 237 + atomic_sub(opt->tot_len, &sk->sk_omem_alloc); 238 + txopt_put(opt); 239 + } 238 240 pktopt = xchg(&np->pktoptions, NULL); 239 241 kfree_skb(pktopt); 240 242 ··· 407 403 if (optname != IPV6_RTHDR && !ns_capable(net->user_ns, CAP_NET_RAW)) 408 404 break; 409 405 410 - opt = ipv6_renew_options(sk, np->opt, optname, 406 + opt = rcu_dereference_protected(np->opt, sock_owned_by_user(sk)); 407 + opt = ipv6_renew_options(sk, opt, optname, 411 408 (struct ipv6_opt_hdr __user *)optval, 412 409 optlen); 413 410 if (IS_ERR(opt)) { ··· 437 432 retv = 0; 438 433 opt = ipv6_update_options(sk, opt); 439 434 sticky_done: 440 - if (opt) 441 - sock_kfree_s(sk, opt, opt->tot_len); 435 + if (opt) { 436 + atomic_sub(opt->tot_len, &sk->sk_omem_alloc); 437 + txopt_put(opt); 438 + } 442 439 break; 443 440 } 444 441 ··· 493 486 break; 494 487 495 488 memset(opt, 0, sizeof(*opt)); 489 + atomic_set(&opt->refcnt, 1); 496 490 opt->tot_len = sizeof(*opt) + optlen; 497 491 retv = -EFAULT; 498 492 if (copy_from_user(opt+1, optval, optlen)) ··· 510 502 retv = 0; 511 503 opt = ipv6_update_options(sk, opt); 512 504 done: 513 - if (opt) 514 - sock_kfree_s(sk, opt, opt->tot_len); 505 + if (opt) { 506 + atomic_sub(opt->tot_len, &sk->sk_omem_alloc); 507 + txopt_put(opt); 508 + } 515 509 break; 516 510 } 517 511 case IPV6_UNICAST_HOPS: ··· 1120 1110 case IPV6_RTHDR: 1121 1111 case IPV6_DSTOPTS: 1122 1112 { 1113 + struct ipv6_txoptions *opt; 1123 1114 1124 1115 lock_sock(sk); 1125 - len = ipv6_getsockopt_sticky(sk, np->opt, 1126 - optname, optval, len); 1116 + opt = rcu_dereference_protected(np->opt, sock_owned_by_user(sk)); 1117 + len = ipv6_getsockopt_sticky(sk, opt, optname, optval, len); 1127 1118 release_sock(sk); 1128 1119 /* check if ipv6_getsockopt_sticky() returns err code */ 1129 1120 if (len < 0)
+3 -7
net/ipv6/ndisc.c
··· 556 556 } 557 557 558 558 void ndisc_send_ns(struct net_device *dev, const struct in6_addr *solicit, 559 - const struct in6_addr *daddr, const struct in6_addr *saddr, 560 - struct sk_buff *oskb) 559 + const struct in6_addr *daddr, const struct in6_addr *saddr) 561 560 { 562 561 struct sk_buff *skb; 563 562 struct in6_addr addr_buf; ··· 591 592 if (inc_opt) 592 593 ndisc_fill_addr_option(skb, ND_OPT_SOURCE_LL_ADDR, 593 594 dev->dev_addr); 594 - 595 - if (!(dev->priv_flags & IFF_XMIT_DST_RELEASE) && oskb) 596 - skb_dst_copy(skb, oskb); 597 595 598 596 ndisc_send_skb(skb, daddr, saddr); 599 597 } ··· 678 682 "%s: trying to ucast probe in NUD_INVALID: %pI6\n", 679 683 __func__, target); 680 684 } 681 - ndisc_send_ns(dev, target, target, saddr, skb); 685 + ndisc_send_ns(dev, target, target, saddr); 682 686 } else if ((probes -= NEIGH_VAR(neigh->parms, APP_PROBES)) < 0) { 683 687 neigh_app_ns(neigh); 684 688 } else { 685 689 addrconf_addr_solict_mult(target, &mcaddr); 686 - ndisc_send_ns(dev, target, &mcaddr, saddr, skb); 690 + ndisc_send_ns(dev, target, &mcaddr, saddr); 687 691 } 688 692 } 689 693
+3 -2
net/ipv6/netfilter/nf_conntrack_reasm.c
··· 190 190 /* Creation primitives. */ 191 191 static inline struct frag_queue *fq_find(struct net *net, __be32 id, 192 192 u32 user, struct in6_addr *src, 193 - struct in6_addr *dst, u8 ecn) 193 + struct in6_addr *dst, int iif, u8 ecn) 194 194 { 195 195 struct inet_frag_queue *q; 196 196 struct ip6_create_arg arg; ··· 200 200 arg.user = user; 201 201 arg.src = src; 202 202 arg.dst = dst; 203 + arg.iif = iif; 203 204 arg.ecn = ecn; 204 205 205 206 local_bh_disable(); ··· 602 601 fhdr = (struct frag_hdr *)skb_transport_header(clone); 603 602 604 603 fq = fq_find(net, fhdr->identification, user, &hdr->saddr, &hdr->daddr, 605 - ip6_frag_ecn(hdr)); 604 + skb->dev ? skb->dev->ifindex : 0, ip6_frag_ecn(hdr)); 606 605 if (fq == NULL) { 607 606 pr_debug("Can't find and can't create new queue\n"); 608 607 goto ret_orig;
+6 -2
net/ipv6/raw.c
··· 733 733 734 734 static int rawv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len) 735 735 { 736 + struct ipv6_txoptions *opt_to_free = NULL; 736 737 struct ipv6_txoptions opt_space; 737 738 DECLARE_SOCKADDR(struct sockaddr_in6 *, sin6, msg->msg_name); 738 739 struct in6_addr *daddr, *final_p, final; ··· 840 839 if (!(opt->opt_nflen|opt->opt_flen)) 841 840 opt = NULL; 842 841 } 843 - if (!opt) 844 - opt = np->opt; 842 + if (!opt) { 843 + opt = txopt_get(np); 844 + opt_to_free = opt; 845 + } 845 846 if (flowlabel) 846 847 opt = fl6_merge_options(&opt_space, flowlabel, opt); 847 848 opt = ipv6_fixup_options(&opt_space, opt); ··· 909 906 dst_release(dst); 910 907 out: 911 908 fl6_sock_release(flowlabel); 909 + txopt_put(opt_to_free); 912 910 return err < 0 ? err : len; 913 911 do_confirm: 914 912 dst_confirm(dst);
+7 -3
net/ipv6/reassembly.c
··· 108 108 return fq->id == arg->id && 109 109 fq->user == arg->user && 110 110 ipv6_addr_equal(&fq->saddr, arg->src) && 111 - ipv6_addr_equal(&fq->daddr, arg->dst); 111 + ipv6_addr_equal(&fq->daddr, arg->dst) && 112 + (arg->iif == fq->iif || 113 + !(ipv6_addr_type(arg->dst) & (IPV6_ADDR_MULTICAST | 114 + IPV6_ADDR_LINKLOCAL))); 112 115 } 113 116 EXPORT_SYMBOL(ip6_frag_match); 114 117 ··· 183 180 184 181 static struct frag_queue * 185 182 fq_find(struct net *net, __be32 id, const struct in6_addr *src, 186 - const struct in6_addr *dst, u8 ecn) 183 + const struct in6_addr *dst, int iif, u8 ecn) 187 184 { 188 185 struct inet_frag_queue *q; 189 186 struct ip6_create_arg arg; ··· 193 190 arg.user = IP6_DEFRAG_LOCAL_DELIVER; 194 191 arg.src = src; 195 192 arg.dst = dst; 193 + arg.iif = iif; 196 194 arg.ecn = ecn; 197 195 198 196 hash = inet6_hash_frag(id, src, dst); ··· 555 551 } 556 552 557 553 fq = fq_find(net, fhdr->identification, &hdr->saddr, &hdr->daddr, 558 - ip6_frag_ecn(hdr)); 554 + skb->dev ? skb->dev->ifindex : 0, ip6_frag_ecn(hdr)); 559 555 if (fq) { 560 556 int ret; 561 557
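ip6_frag_match() (and its conntrack twin above) now keys reassembly queues by arrival interface as well, but only when the destination is link-local or multicast, where the address alone does not identify a single peer. The widened predicate on its own:

#include <assert.h>
#include <stdbool.h>

/* iif must only agree for scoped (link-local/multicast) destinations */
static bool toy_frag_iif_ok(bool dst_is_scoped, int q_iif, int arrival_iif)
{
	return q_iif == arrival_iif || !dst_is_scoped;
}

int main(void)
{
	assert(toy_frag_iif_ok(false, 1, 2));	/* global dst: iif ignored */
	assert(!toy_frag_iif_ok(true, 1, 2));	/* link-local: must match */
	assert(toy_frag_iif_ok(true, 3, 3));
	return 0;
}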
+1 -1
net/ipv6/route.c
··· 524 524 container_of(w, struct __rt6_probe_work, work); 525 525 526 526 addrconf_addr_solict_mult(&work->target, &mcaddr); 527 - ndisc_send_ns(work->dev, &work->target, &mcaddr, NULL, NULL); 527 + ndisc_send_ns(work->dev, &work->target, &mcaddr, NULL); 528 528 dev_put(work->dev); 529 529 kfree(work); 530 530 }
+1 -1
net/ipv6/syncookies.c
··· 222 222 memset(&fl6, 0, sizeof(fl6)); 223 223 fl6.flowi6_proto = IPPROTO_TCP; 224 224 fl6.daddr = ireq->ir_v6_rmt_addr; 225 - final_p = fl6_update_dst(&fl6, np->opt, &final); 225 + final_p = fl6_update_dst(&fl6, rcu_dereference(np->opt), &final); 226 226 fl6.saddr = ireq->ir_v6_loc_addr; 227 227 fl6.flowi6_oif = sk->sk_bound_dev_if; 228 228 fl6.flowi6_mark = ireq->ir_mark;
+19 -13
net/ipv6/tcp_ipv6.c
··· 120 120 struct ipv6_pinfo *np = inet6_sk(sk); 121 121 struct tcp_sock *tp = tcp_sk(sk); 122 122 struct in6_addr *saddr = NULL, *final_p, final; 123 + struct ipv6_txoptions *opt; 123 124 struct flowi6 fl6; 124 125 struct dst_entry *dst; 125 126 int addr_type; ··· 236 235 fl6.fl6_dport = usin->sin6_port; 237 236 fl6.fl6_sport = inet->inet_sport; 238 237 239 - final_p = fl6_update_dst(&fl6, np->opt, &final); 238 + opt = rcu_dereference_protected(np->opt, sock_owned_by_user(sk)); 239 + final_p = fl6_update_dst(&fl6, opt, &final); 240 240 241 241 security_sk_classify_flow(sk, flowi6_to_flowi(&fl6)); 242 242 ··· 257 255 inet->inet_rcv_saddr = LOOPBACK4_IPV6; 258 256 259 257 sk->sk_gso_type = SKB_GSO_TCPV6; 260 - __ip6_dst_store(sk, dst, NULL, NULL); 258 + ip6_dst_store(sk, dst, NULL, NULL); 261 259 262 260 if (tcp_death_row.sysctl_tw_recycle && 263 261 !tp->rx_opt.ts_recent_stamp && ··· 265 263 tcp_fetch_timewait_stamp(sk, dst); 266 264 267 265 icsk->icsk_ext_hdr_len = 0; 268 - if (np->opt) 269 - icsk->icsk_ext_hdr_len = (np->opt->opt_flen + 270 - np->opt->opt_nflen); 266 + if (opt) 267 + icsk->icsk_ext_hdr_len = opt->opt_flen + 268 + opt->opt_nflen; 271 269 272 270 tp->rx_opt.mss_clamp = IPV6_MIN_MTU - sizeof(struct tcphdr) - sizeof(struct ipv6hdr); 273 271 ··· 463 461 if (np->repflow && ireq->pktopts) 464 462 fl6->flowlabel = ip6_flowlabel(ipv6_hdr(ireq->pktopts)); 465 463 466 - err = ip6_xmit(sk, skb, fl6, np->opt, np->tclass); 464 + err = ip6_xmit(sk, skb, fl6, rcu_dereference(np->opt), 465 + np->tclass); 467 466 err = net_xmit_eval(err); 468 467 } 469 468 ··· 975 972 struct inet_request_sock *ireq; 976 973 struct ipv6_pinfo *newnp; 977 974 const struct ipv6_pinfo *np = inet6_sk(sk); 975 + struct ipv6_txoptions *opt; 978 976 struct tcp6_sock *newtcp6sk; 979 977 struct inet_sock *newinet; 980 978 struct tcp_sock *newtp; ··· 1060 1056 */ 1061 1057 1062 1058 newsk->sk_gso_type = SKB_GSO_TCPV6; 1063 - __ip6_dst_store(newsk, dst, NULL, NULL); 1059 + ip6_dst_store(newsk, dst, NULL, NULL); 1064 1060 inet6_sk_rx_dst_set(newsk, skb); 1065 1061 1066 1062 newtcp6sk = (struct tcp6_sock *)newsk; ··· 1102 1098 but we make one more one thing there: reattach optmem 1103 1099 to newsk. 1104 1100 */ 1105 - if (np->opt) 1106 - newnp->opt = ipv6_dup_options(newsk, np->opt); 1107 - 1101 + opt = rcu_dereference(np->opt); 1102 + if (opt) { 1103 + opt = ipv6_dup_options(newsk, opt); 1104 + RCU_INIT_POINTER(newnp->opt, opt); 1105 + } 1108 1106 inet_csk(newsk)->icsk_ext_hdr_len = 0; 1109 - if (newnp->opt) 1110 - inet_csk(newsk)->icsk_ext_hdr_len = (newnp->opt->opt_nflen + 1111 - newnp->opt->opt_flen); 1107 + if (opt) 1108 + inet_csk(newsk)->icsk_ext_hdr_len = opt->opt_nflen + 1109 + opt->opt_flen; 1112 1110 1113 1111 tcp_ca_openreq_child(newsk, dst); 1114 1112
+6 -2
net/ipv6/udp.c
··· 1110 1110 DECLARE_SOCKADDR(struct sockaddr_in6 *, sin6, msg->msg_name); 1111 1111 struct in6_addr *daddr, *final_p, final; 1112 1112 struct ipv6_txoptions *opt = NULL; 1113 + struct ipv6_txoptions *opt_to_free = NULL; 1113 1114 struct ip6_flowlabel *flowlabel = NULL; 1114 1115 struct flowi6 fl6; 1115 1116 struct dst_entry *dst; ··· 1264 1263 opt = NULL; 1265 1264 connected = 0; 1266 1265 } 1267 - if (!opt) 1268 - opt = np->opt; 1266 + if (!opt) { 1267 + opt = txopt_get(np); 1268 + opt_to_free = opt; 1269 + } 1269 1270 if (flowlabel) 1270 1271 opt = fl6_merge_options(&opt_space, flowlabel, opt); 1271 1272 opt = ipv6_fixup_options(&opt_space, opt); ··· 1376 1373 out: 1377 1374 dst_release(dst); 1378 1375 fl6_sock_release(flowlabel); 1376 + txopt_put(opt_to_free); 1379 1377 if (!err) 1380 1378 return len; 1381 1379 /*
+1 -1
net/iucv/af_iucv.c
··· 1483 1483 if (sock_writeable(sk) && iucv_below_msglim(sk)) 1484 1484 mask |= POLLOUT | POLLWRNORM | POLLWRBAND; 1485 1485 else 1486 - set_bit(SOCK_ASYNC_NOSPACE, &sk->sk_socket->flags); 1486 + sk_set_bit(SOCKWQ_ASYNC_NOSPACE, sk); 1487 1487 1488 1488 return mask; 1489 1489 }
+6 -2
net/l2tp/l2tp_ip6.c
··· 486 486 DECLARE_SOCKADDR(struct sockaddr_l2tpip6 *, lsa, msg->msg_name); 487 487 struct in6_addr *daddr, *final_p, final; 488 488 struct ipv6_pinfo *np = inet6_sk(sk); 489 + struct ipv6_txoptions *opt_to_free = NULL; 489 490 struct ipv6_txoptions *opt = NULL; 490 491 struct ip6_flowlabel *flowlabel = NULL; 491 492 struct dst_entry *dst = NULL; ··· 576 575 opt = NULL; 577 576 } 578 577 579 - if (opt == NULL) 580 - opt = np->opt; 578 + if (!opt) { 579 + opt = txopt_get(np); 580 + opt_to_free = opt; 581 + } 581 582 if (flowlabel) 582 583 opt = fl6_merge_options(&opt_space, flowlabel, opt); 583 584 opt = ipv6_fixup_options(&opt_space, opt); ··· 634 631 dst_release(dst); 635 632 out: 636 633 fl6_sock_release(flowlabel); 634 + txopt_put(opt_to_free); 637 635 638 636 return err < 0 ? err : len; 639 637
+2 -1
net/mac80211/agg-tx.c
··· 500 500 /* send AddBA request */ 501 501 ieee80211_send_addba_request(sdata, sta->sta.addr, tid, 502 502 tid_tx->dialog_token, start_seq_num, 503 - local->hw.max_tx_aggregation_subframes, 503 + IEEE80211_MAX_AMPDU_BUF, 504 504 tid_tx->timeout); 505 505 } 506 506 ··· 926 926 amsdu = capab & IEEE80211_ADDBA_PARAM_AMSDU_MASK; 927 927 tid = (capab & IEEE80211_ADDBA_PARAM_TID_MASK) >> 2; 928 928 buf_size = (capab & IEEE80211_ADDBA_PARAM_BUF_SIZE_MASK) >> 6; 929 + buf_size = min(buf_size, local->hw.max_tx_aggregation_subframes); 929 930 930 931 mutex_lock(&sta->ampdu_mlme.mtx); 931 932
+6 -2
net/mac80211/cfg.c
··· 3454 3454 goto out_unlock; 3455 3455 } 3456 3456 } else { 3457 - /* for cookie below */ 3458 - ack_skb = skb; 3457 + /* Assign a dummy non-zero cookie, it's not sent to 3458 + * userspace in this case but we rely on its value 3459 + * internally in the need_offchan case to distinguish 3460 + * mgmt-tx from remain-on-channel. 3461 + */ 3462 + *cookie = 0xffffffff; 3459 3463 } 3460 3464 3461 3465 if (!need_offchan) {
+3 -2
net/mac80211/iface.c
··· 76 76 void ieee80211_recalc_txpower(struct ieee80211_sub_if_data *sdata, 77 77 bool update_bss) 78 78 { 79 - if (__ieee80211_recalc_txpower(sdata) || update_bss) 79 + if (__ieee80211_recalc_txpower(sdata) || 80 + (update_bss && ieee80211_sdata_running(sdata))) 80 81 ieee80211_bss_info_change_notify(sdata, BSS_CHANGED_TXPOWER); 81 82 } 82 83 ··· 1862 1861 unregister_netdevice(sdata->dev); 1863 1862 } else { 1864 1863 cfg80211_unregister_wdev(&sdata->wdev); 1864 + ieee80211_teardown_sdata(sdata); 1865 1865 kfree(sdata); 1866 1866 } 1867 1867 } ··· 1872 1870 if (WARN_ON_ONCE(!test_bit(SDATA_STATE_RUNNING, &sdata->state))) 1873 1871 return; 1874 1872 ieee80211_do_stop(sdata, true); 1875 - ieee80211_teardown_sdata(sdata); 1876 1873 } 1877 1874 1878 1875 void ieee80211_remove_interfaces(struct ieee80211_local *local)
+1 -2
net/mac80211/main.c
··· 541 541 NL80211_FEATURE_HT_IBSS | 542 542 NL80211_FEATURE_VIF_TXPOWER | 543 543 NL80211_FEATURE_MAC_ON_CREATE | 544 - NL80211_FEATURE_USERSPACE_MPM | 545 - NL80211_FEATURE_FULL_AP_CLIENT_STATE; 544 + NL80211_FEATURE_USERSPACE_MPM; 546 545 547 546 if (!ops->hw_scan) 548 547 wiphy->features |= NL80211_FEATURE_LOW_PRIORITY_SCAN |
+4 -4
net/mac80211/mesh_pathtbl.c
··· 779 779 static void mesh_path_node_reclaim(struct rcu_head *rp) 780 780 { 781 781 struct mpath_node *node = container_of(rp, struct mpath_node, rcu); 782 - struct ieee80211_sub_if_data *sdata = node->mpath->sdata; 783 782 784 783 del_timer_sync(&node->mpath->timer); 785 - atomic_dec(&sdata->u.mesh.mpaths); 786 784 kfree(node->mpath); 787 785 kfree(node); 788 786 } ··· 788 790 /* needs to be called with the corresponding hashwlock taken */ 789 791 static void __mesh_path_del(struct mesh_table *tbl, struct mpath_node *node) 790 792 { 791 - struct mesh_path *mpath; 792 - mpath = node->mpath; 793 + struct mesh_path *mpath = node->mpath; 794 + struct ieee80211_sub_if_data *sdata = node->mpath->sdata; 795 + 793 796 spin_lock(&mpath->state_lock); 794 797 mpath->flags |= MESH_PATH_RESOLVING; 795 798 if (mpath->is_gate) ··· 798 799 hlist_del_rcu(&node->list); 799 800 call_rcu(&node->rcu, mesh_path_node_reclaim); 800 801 spin_unlock(&mpath->state_lock); 802 + atomic_dec(&sdata->u.mesh.mpaths); 801 803 atomic_dec(&tbl->entries); 802 804 } 803 805
+5 -4
net/mac80211/scan.c
··· 597 597 /* We need to ensure power level is at max for scanning. */ 598 598 ieee80211_hw_config(local, 0); 599 599 600 - if ((req->channels[0]->flags & 601 - IEEE80211_CHAN_NO_IR) || 600 + if ((req->channels[0]->flags & (IEEE80211_CHAN_NO_IR | 601 + IEEE80211_CHAN_RADAR)) || 602 602 !req->n_ssids) { 603 603 next_delay = IEEE80211_PASSIVE_CHANNEL_TIME; 604 604 } else { ··· 645 645 * TODO: channel switching also consumes quite some time, 646 646 * add that delay as well to get a better estimation 647 647 */ 648 - if (chan->flags & IEEE80211_CHAN_NO_IR) 648 + if (chan->flags & (IEEE80211_CHAN_NO_IR | IEEE80211_CHAN_RADAR)) 649 649 return IEEE80211_PASSIVE_CHANNEL_TIME; 650 650 return IEEE80211_PROBE_DELAY + IEEE80211_CHANNEL_TIME; 651 651 } ··· 777 777 * 778 778 * In any case, it is not necessary for a passive scan. 779 779 */ 780 - if (chan->flags & IEEE80211_CHAN_NO_IR || !scan_req->n_ssids) { 780 + if ((chan->flags & (IEEE80211_CHAN_NO_IR | IEEE80211_CHAN_RADAR)) || 781 + !scan_req->n_ssids) { 781 782 *next_delay = IEEE80211_PASSIVE_CHANNEL_TIME; 782 783 local->next_scan_state = SCAN_DECISION; 783 784 return;
+1 -1
net/nfc/llcp_sock.c
··· 572 572 if (sock_writeable(sk) && sk->sk_state == LLCP_CONNECTED) 573 573 mask |= POLLOUT | POLLWRNORM | POLLWRBAND; 574 574 else 575 - set_bit(SOCK_ASYNC_NOSPACE, &sk->sk_socket->flags); 575 + sk_set_bit(SOCKWQ_ASYNC_NOSPACE, sk); 576 576 577 577 pr_debug("mask 0x%x\n", mask); 578 578
+1 -1
net/openvswitch/dp_notify.c
··· 58 58 struct hlist_node *n; 59 59 60 60 hlist_for_each_entry_safe(vport, n, &dp->ports[i], dp_hash_node) { 61 - if (vport->ops->type != OVS_VPORT_TYPE_NETDEV) 61 + if (vport->ops->type == OVS_VPORT_TYPE_INTERNAL) 62 62 continue; 63 63 64 64 if (!(vport->dev->priv_flags & IFF_OVS_DATAPATH))
-1
net/openvswitch/vport-geneve.c
··· 117 117 .destroy = ovs_netdev_tunnel_destroy, 118 118 .get_options = geneve_get_options, 119 119 .send = dev_queue_xmit, 120 - .owner = THIS_MODULE, 121 120 }; 122 121 123 122 static int __init ovs_geneve_tnl_init(void)
-1
net/openvswitch/vport-gre.c
··· 89 89 .create = gre_create, 90 90 .send = dev_queue_xmit, 91 91 .destroy = ovs_netdev_tunnel_destroy, 92 - .owner = THIS_MODULE, 93 92 }; 94 93 95 94 static int __init ovs_gre_tnl_init(void)
+6 -2
net/openvswitch/vport-netdev.c
··· 180 180 if (vport->dev->priv_flags & IFF_OVS_DATAPATH) 181 181 ovs_netdev_detach_dev(vport); 182 182 183 - /* Early release so we can unregister the device */ 183 + /* We can be invoked by both explicit vport deletion and 184 + * underlying netdev deregistration; delete the link only 185 + * if it's not already shutting down. 186 + */ 187 + if (vport->dev->reg_state == NETREG_REGISTERED) 188 + rtnl_delete_link(vport->dev); 184 189 dev_put(vport->dev); 185 - rtnl_delete_link(vport->dev); 186 190 vport->dev = NULL; 187 191 rtnl_unlock(); 188 192
+4 -4
net/openvswitch/vport.c
··· 71 71 return &dev_table[hash & (VPORT_HASH_BUCKETS - 1)]; 72 72 } 73 73 74 - int ovs_vport_ops_register(struct vport_ops *ops) 74 + int __ovs_vport_ops_register(struct vport_ops *ops) 75 75 { 76 76 int err = -EEXIST; 77 77 struct vport_ops *o; ··· 87 87 ovs_unlock(); 88 88 return err; 89 89 } 90 - EXPORT_SYMBOL_GPL(ovs_vport_ops_register); 90 + EXPORT_SYMBOL_GPL(__ovs_vport_ops_register); 91 91 92 92 void ovs_vport_ops_unregister(struct vport_ops *ops) 93 93 { ··· 256 256 * 257 257 * @vport: vport to delete. 258 258 * 259 - * Detaches @vport from its datapath and destroys it. It is possible to fail 260 - * for reasons such as lack of memory. ovs_mutex must be held. 259 + * Detaches @vport from its datapath and destroys it. ovs_mutex must 260 + * be held. 261 261 */ 262 262 void ovs_vport_del(struct vport *vport) 263 263 {
+7 -1
net/openvswitch/vport.h
··· 196 196 return vport->dev->name; 197 197 } 198 198 199 - int ovs_vport_ops_register(struct vport_ops *ops); 199 + int __ovs_vport_ops_register(struct vport_ops *ops); 200 + #define ovs_vport_ops_register(ops) \ 201 + ({ \ 202 + (ops)->owner = THIS_MODULE; \ 203 + __ovs_vport_ops_register(ops); \ 204 + }) 205 + 200 206 void ovs_vport_ops_unregister(struct vport_ops *ops); 201 207 202 208 static inline struct rtable *ovs_tunnel_route_lookup(struct net *net,
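Turning ovs_vport_ops_register() into a macro is what lets the geneve and gre hunks above drop their explicit `.owner = THIS_MODULE` fields: THIS_MODULE only expands to the right module in the caller's translation unit, so the macro stamps the owner at every call site before handing off to __ovs_vport_ops_register(). A userspace analogue, using __FILE__ where the kernel uses THIS_MODULE (the statement expression is the same GNU extension the kernel macro relies on):

#include <stdio.h>

struct toy_ops { const char *owner; const char *name; };

static int __toy_register(struct toy_ops *ops)
{
	printf("registered %s (owner %s)\n", ops->name, ops->owner);
	return 0;
}

#define toy_register(ops)		\
({					\
	(ops)->owner = __FILE__;	\
	__toy_register(ops);		\
})

int main(void)
{
	struct toy_ops ops = { .name = "gre" };

	return toy_register(&ops);
}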
+2 -2
net/packet/af_packet.c
··· 2329 2329 static bool ll_header_truncated(const struct net_device *dev, int len) 2330 2330 { 2331 2331 /* net device doesn't like empty head */ 2332 - if (unlikely(len <= dev->hard_header_len)) { 2333 - net_warn_ratelimited("%s: packet size is too short (%d <= %d)\n", 2332 + if (unlikely(len < dev->hard_header_len)) { 2333 + net_warn_ratelimited("%s: packet size is too short (%d < %d)\n", 2334 2334 current->comm, len, dev->hard_header_len); 2335 2335 return true; 2336 2336 }
-6
net/rds/connection.c
··· 186 186 } 187 187 } 188 188 189 - if (trans == NULL) { 190 - kmem_cache_free(rds_conn_slab, conn); 191 - conn = ERR_PTR(-ENODEV); 192 - goto out; 193 - } 194 - 195 189 conn->c_trans = trans; 196 190 197 191 ret = trans->conn_alloc(conn, gfp);
+3 -1
net/rds/send.c
··· 1013 1013 release_sock(sk); 1014 1014 } 1015 1015 1016 - /* racing with another thread binding seems ok here */ 1016 + lock_sock(sk); 1017 1017 if (daddr == 0 || rs->rs_bound_addr == 0) { 1018 + release_sock(sk); 1018 1019 ret = -ENOTCONN; /* XXX not a great errno */ 1019 1020 goto out; 1020 1021 } 1022 + release_sock(sk); 1021 1023 1022 1024 if (payload_len > rds_sk_sndbuf(rs)) { 1023 1025 ret = -EMSGSIZE;
+3 -1
net/rxrpc/ar-ack.c
··· 723 723 724 724 if ((call->state == RXRPC_CALL_CLIENT_AWAIT_REPLY || 725 725 call->state == RXRPC_CALL_SERVER_AWAIT_ACK) && 726 - hard > tx) 726 + hard > tx) { 727 + call->acks_hard = tx; 727 728 goto all_acked; 729 + } 728 730 729 731 smp_rmb(); 730 732 rxrpc_rotate_tx_window(call, hard - 1);
+1 -1
net/rxrpc/ar-output.c
··· 531 531 timeo = sock_sndtimeo(sk, msg->msg_flags & MSG_DONTWAIT); 532 532 533 533 /* this should be in poll */ 534 - clear_bit(SOCK_ASYNC_NOSPACE, &sk->sk_socket->flags); 534 + sk_clear_bit(SOCKWQ_ASYNC_NOSPACE, sk); 535 535 536 536 if (sk->sk_err || (sk->sk_shutdown & SEND_SHUTDOWN)) 537 537 return -EPIPE;
+18 -9
net/sched/sch_api.c
··· 253 253 } 254 254 255 255 /* We know handle. Find qdisc among all qdisc's attached to device 256 - (root qdisc, all its children, children of children etc.) 256 + * (root qdisc, all its children, children of children etc.) 257 + * Note: caller either uses rtnl or rcu_read_lock() 257 258 */ 258 259 259 260 static struct Qdisc *qdisc_match_from_root(struct Qdisc *root, u32 handle) ··· 265 264 root->handle == handle) 266 265 return root; 267 266 268 - list_for_each_entry(q, &root->list, list) { 267 + list_for_each_entry_rcu(q, &root->list, list) { 269 268 if (q->handle == handle) 270 269 return q; 271 270 } ··· 278 277 struct Qdisc *root = qdisc_dev(q)->qdisc; 279 278 280 279 WARN_ON_ONCE(root == &noop_qdisc); 281 - list_add_tail(&q->list, &root->list); 280 + ASSERT_RTNL(); 281 + list_add_tail_rcu(&q->list, &root->list); 282 282 } 283 283 } 284 284 EXPORT_SYMBOL(qdisc_list_add); 285 285 286 286 void qdisc_list_del(struct Qdisc *q) 287 287 { 288 - if ((q->parent != TC_H_ROOT) && !(q->flags & TCQ_F_INGRESS)) 289 - list_del(&q->list); 288 + if ((q->parent != TC_H_ROOT) && !(q->flags & TCQ_F_INGRESS)) { 289 + ASSERT_RTNL(); 290 + list_del_rcu(&q->list); 291 + } 290 292 } 291 293 EXPORT_SYMBOL(qdisc_list_del); 292 294 ··· 754 750 if (n == 0) 755 751 return; 756 752 drops = max_t(int, n, 0); 753 + rcu_read_lock(); 757 754 while ((parentid = sch->parent)) { 758 755 if (TC_H_MAJ(parentid) == TC_H_MAJ(TC_H_INGRESS)) 759 - return; 756 + break; 760 757 758 + if (sch->flags & TCQ_F_NOPARENT) 759 + break; 760 + /* TODO: perform the search on a per txq basis */ 761 761 sch = qdisc_lookup(qdisc_dev(sch), TC_H_MAJ(parentid)); 762 762 if (sch == NULL) { 763 - WARN_ON(parentid != TC_H_ROOT); 764 - return; 763 + WARN_ON_ONCE(parentid != TC_H_ROOT); 764 + break; 765 765 } 766 766 cops = sch->ops->cl_ops; 767 767 if (cops->qlen_notify) { ··· 776 768 sch->q.qlen -= n; 777 769 __qdisc_qstats_drop(sch, drops); 778 770 } 771 + rcu_read_unlock(); 779 772 } 780 773 EXPORT_SYMBOL(qdisc_tree_decrease_qlen); 781 774 ··· 950 941 } 951 942 lockdep_set_class(qdisc_lock(sch), &qdisc_tx_lock); 952 943 if (!netif_is_multiqueue(dev)) 953 - sch->flags |= TCQ_F_ONETXQUEUE; 944 + sch->flags |= TCQ_F_ONETXQUEUE | TCQ_F_NOPARENT; 954 945 } 955 946 956 947 sch->handle = handle;
+1 -1
net/sched/sch_generic.c
··· 737 737 return; 738 738 } 739 739 if (!netif_is_multiqueue(dev)) 740 - qdisc->flags |= TCQ_F_ONETXQUEUE; 740 + qdisc->flags |= TCQ_F_ONETXQUEUE | TCQ_F_NOPARENT; 741 741 dev_queue->qdisc_sleeping = qdisc; 742 742 } 743 743
+2 -2
net/sched/sch_mq.c
··· 63 63 if (qdisc == NULL) 64 64 goto err; 65 65 priv->qdiscs[ntx] = qdisc; 66 - qdisc->flags |= TCQ_F_ONETXQUEUE; 66 + qdisc->flags |= TCQ_F_ONETXQUEUE | TCQ_F_NOPARENT; 67 67 } 68 68 69 69 sch->flags |= TCQ_F_MQROOT; ··· 156 156 157 157 *old = dev_graft_qdisc(dev_queue, new); 158 158 if (new) 159 - new->flags |= TCQ_F_ONETXQUEUE; 159 + new->flags |= TCQ_F_ONETXQUEUE | TCQ_F_NOPARENT; 160 160 if (dev->flags & IFF_UP) 161 161 dev_activate(dev); 162 162 return 0;
+2 -2
net/sched/sch_mqprio.c
··· 132 132 goto err; 133 133 } 134 134 priv->qdiscs[i] = qdisc; 135 - qdisc->flags |= TCQ_F_ONETXQUEUE; 135 + qdisc->flags |= TCQ_F_ONETXQUEUE | TCQ_F_NOPARENT; 136 136 } 137 137 138 138 /* If the mqprio options indicate that hardware should own ··· 209 209 *old = dev_graft_qdisc(dev_queue, new); 210 210 211 211 if (new) 212 - new->flags |= TCQ_F_ONETXQUEUE; 212 + new->flags |= TCQ_F_ONETXQUEUE | TCQ_F_NOPARENT; 213 213 214 214 if (dev->flags & IFF_UP) 215 215 dev_activate(dev);
+10 -3
net/sctp/ipv6.c
··· 209 209 struct sock *sk = skb->sk; 210 210 struct ipv6_pinfo *np = inet6_sk(sk); 211 211 struct flowi6 *fl6 = &transport->fl.u.ip6; 212 + int res; 212 213 213 214 pr_debug("%s: skb:%p, len:%d, src:%pI6 dst:%pI6\n", __func__, skb, 214 215 skb->len, &fl6->saddr, &fl6->daddr); ··· 221 220 222 221 SCTP_INC_STATS(sock_net(sk), SCTP_MIB_OUTSCTPPACKS); 223 222 224 - return ip6_xmit(sk, skb, fl6, np->opt, np->tclass); 223 + rcu_read_lock(); 224 + res = ip6_xmit(sk, skb, fl6, rcu_dereference(np->opt), np->tclass); 225 + rcu_read_unlock(); 226 + return res; 225 227 } 226 228 227 229 /* Returns the dst cache entry for the given source and destination ip ··· 266 262 pr_debug("src=%pI6 - ", &fl6->saddr); 267 263 } 268 264 269 - final_p = fl6_update_dst(fl6, np->opt, &final); 265 + rcu_read_lock(); 266 + final_p = fl6_update_dst(fl6, rcu_dereference(np->opt), &final); 267 + rcu_read_unlock(); 268 + 270 269 dst = ip6_dst_lookup_flow(sk, fl6, final_p); 271 270 if (!asoc || saddr) 272 271 goto out; ··· 328 321 if (baddr) { 329 322 fl6->saddr = baddr->v6.sin6_addr; 330 323 fl6->fl6_sport = baddr->v6.sin6_port; 331 - final_p = fl6_update_dst(fl6, np->opt, &final); 324 + final_p = fl6_update_dst(fl6, rcu_dereference(np->opt), &final); 332 325 dst = ip6_dst_lookup_flow(sk, fl6, final_p); 333 326 } 334 327
+25 -14
net/sctp/socket.c
··· 972 972 return -EFAULT; 973 973 974 974 /* Alloc space for the address array in kernel memory. */ 975 - kaddrs = kmalloc(addrs_size, GFP_KERNEL); 975 + kaddrs = kmalloc(addrs_size, GFP_USER | __GFP_NOWARN); 976 976 if (unlikely(!kaddrs)) 977 977 return -ENOMEM; 978 978 ··· 4928 4928 to = optval + offsetof(struct sctp_getaddrs, addrs); 4929 4929 space_left = len - offsetof(struct sctp_getaddrs, addrs); 4930 4930 4931 - addrs = kmalloc(space_left, GFP_KERNEL); 4931 + addrs = kmalloc(space_left, GFP_USER | __GFP_NOWARN); 4932 4932 if (!addrs) 4933 4933 return -ENOMEM; 4934 4934 ··· 6458 6458 if (sctp_writeable(sk)) { 6459 6459 mask |= POLLOUT | POLLWRNORM; 6460 6460 } else { 6461 - set_bit(SOCK_ASYNC_NOSPACE, &sk->sk_socket->flags); 6461 + sk_set_bit(SOCKWQ_ASYNC_NOSPACE, sk); 6462 6462 /* 6463 6463 * Since the socket is not locked, the buffer 6464 6464 * might be made available after the writeable check and ··· 6801 6801 static void __sctp_write_space(struct sctp_association *asoc) 6802 6802 { 6803 6803 struct sock *sk = asoc->base.sk; 6804 - struct socket *sock = sk->sk_socket; 6805 6804 6806 - if ((sctp_wspace(asoc) > 0) && sock) { 6807 - if (waitqueue_active(&asoc->wait)) 6808 - wake_up_interruptible(&asoc->wait); 6805 + if (sctp_wspace(asoc) <= 0) 6806 + return; 6809 6807 6810 - if (sctp_writeable(sk)) { 6811 - wait_queue_head_t *wq = sk_sleep(sk); 6808 + if (waitqueue_active(&asoc->wait)) 6809 + wake_up_interruptible(&asoc->wait); 6812 6810 6813 - if (wq && waitqueue_active(wq)) 6814 - wake_up_interruptible(wq); 6811 + if (sctp_writeable(sk)) { 6812 + struct socket_wq *wq; 6813 + 6814 + rcu_read_lock(); 6815 + wq = rcu_dereference(sk->sk_wq); 6816 + if (wq) { 6817 + if (waitqueue_active(&wq->wait)) 6818 + wake_up_interruptible(&wq->wait); 6815 6819 6816 6820 /* Note that we try to include the Async I/O support 6817 6821 * here by modeling from the current TCP/UDP code. 6818 6822 * We have not tested with it yet. 6819 6823 */ 6820 6824 if (!(sk->sk_shutdown & SEND_SHUTDOWN)) 6821 - sock_wake_async(sock, 6822 - SOCK_WAKE_SPACE, POLL_OUT); 6825 + sock_wake_async(wq, SOCK_WAKE_SPACE, POLL_OUT); 6823 6826 } 6827 + rcu_read_unlock(); 6824 6828 } 6825 6829 } 6826 6830 ··· 7379 7375 7380 7376 #if IS_ENABLED(CONFIG_IPV6) 7381 7377 7378 + #include <net/transp_v6.h> 7379 + static void sctp_v6_destroy_sock(struct sock *sk) 7380 + { 7381 + sctp_destroy_sock(sk); 7382 + inet6_destroy_sock(sk); 7383 + } 7384 + 7382 7385 struct proto sctpv6_prot = { 7383 7386 .name = "SCTPv6", 7384 7387 .owner = THIS_MODULE, ··· 7395 7384 .accept = sctp_accept, 7396 7385 .ioctl = sctp_ioctl, 7397 7386 .init = sctp_init_sock, 7398 - .destroy = sctp_destroy_sock, 7387 + .destroy = sctp_v6_destroy_sock, 7399 7388 .shutdown = sctp_shutdown, 7400 7389 .setsockopt = sctp_setsockopt, 7401 7390 .getsockopt = sctp_getsockopt,
+7 -14
net/socket.c
··· 1056 1056 return 0; 1057 1057 } 1058 1058 1059 - /* This function may be called only under socket lock or callback_lock or rcu_lock */ 1059 + /* This function may be called only under rcu_lock */ 1060 1060 1061 - int sock_wake_async(struct socket *sock, int how, int band) 1061 + int sock_wake_async(struct socket_wq *wq, int how, int band) 1062 1062 { 1063 - struct socket_wq *wq; 1063 + if (!wq || !wq->fasync_list) 1064 + return -1; 1064 1065 1065 - if (!sock) 1066 - return -1; 1067 - rcu_read_lock(); 1068 - wq = rcu_dereference(sock->wq); 1069 - if (!wq || !wq->fasync_list) { 1070 - rcu_read_unlock(); 1071 - return -1; 1072 - } 1073 1066 switch (how) { 1074 1067 case SOCK_WAKE_WAITD: 1075 - if (test_bit(SOCK_ASYNC_WAITDATA, &sock->flags)) 1068 + if (test_bit(SOCKWQ_ASYNC_WAITDATA, &wq->flags)) 1076 1069 break; 1077 1070 goto call_kill; 1078 1071 case SOCK_WAKE_SPACE: 1079 - if (!test_and_clear_bit(SOCK_ASYNC_NOSPACE, &sock->flags)) 1072 + if (!test_and_clear_bit(SOCKWQ_ASYNC_NOSPACE, &wq->flags)) 1080 1073 break; 1081 1074 /* fall through */ 1082 1075 case SOCK_WAKE_IO: ··· 1079 1086 case SOCK_WAKE_URG: 1080 1087 kill_fasync(&wq->fasync_list, SIGURG, band); 1081 1088 } 1082 - rcu_read_unlock(); 1089 + 1083 1090 return 0; 1084 1091 } 1085 1092 EXPORT_SYMBOL(sock_wake_async);
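sock_wake_async() no longer resolves the wait queue (or takes the RCU read lock) itself; callers such as sk_stream_write_space() and the sctp hunk above look up the socket_wq inside their own rcu_read_lock() section and pass it in, and the flag tests move to wq->flags to match. The reshaped calling convention as a toy:

#include <stdio.h>

struct toy_wq { int fasync_list; };

/* new style: wq arrives from a caller already in an RCU section */
static int toy_wake_async(struct toy_wq *wq, int band)
{
	if (!wq || !wq->fasync_list)
		return -1;
	printf("kill_fasync band %d\n", band);
	return 0;
}

int main(void)
{
	struct toy_wq wq = { .fasync_list = 1 };

	/* caller: rcu_read_lock(); wq = rcu_dereference(sock->wq); */
	toy_wake_async(&wq, 1);
	/* caller: rcu_read_unlock(); */
	return 0;
}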
+8
net/sunrpc/backchannel_rqst.c
··· 353 353 { 354 354 struct rpc_xprt *xprt = req->rq_xprt; 355 355 struct svc_serv *bc_serv = xprt->bc_serv; 356 + struct xdr_buf *rq_rcv_buf = &req->rq_rcv_buf; 356 357 357 358 spin_lock(&xprt->bc_pa_lock); 358 359 list_del(&req->rq_bc_pa_list); 359 360 xprt_dec_alloc_count(xprt, 1); 360 361 spin_unlock(&xprt->bc_pa_lock); 362 + 363 + if (copied <= rq_rcv_buf->head[0].iov_len) { 364 + rq_rcv_buf->head[0].iov_len = copied; 365 + rq_rcv_buf->page_len = 0; 366 + } else { 367 + rq_rcv_buf->page_len = copied - rq_rcv_buf->head[0].iov_len; 368 + } 361 369 362 370 req->rq_private_buf.len = copied; 363 371 set_bit(RPC_BC_PA_IN_USE, &req->rq_bc_pa_state);
+1
net/sunrpc/svc.c
··· 1363 1363 memcpy(&rqstp->rq_addr, &req->rq_xprt->addr, rqstp->rq_addrlen); 1364 1364 memcpy(&rqstp->rq_arg, &req->rq_rcv_buf, sizeof(rqstp->rq_arg)); 1365 1365 memcpy(&rqstp->rq_res, &req->rq_snd_buf, sizeof(rqstp->rq_res)); 1366 + rqstp->rq_arg.len = req->rq_private_buf.len; 1366 1367 1367 1368 /* reset result send buffer "put" position */ 1368 1369 resv->iov_len = 0;
+7 -7
net/sunrpc/xprtsock.c
··· 398 398 if (unlikely(!sock)) 399 399 return -ENOTSOCK; 400 400 401 - clear_bit(SOCK_ASYNC_NOSPACE, &sock->flags); 401 + clear_bit(SOCKWQ_ASYNC_NOSPACE, &sock->flags); 402 402 if (base != 0) { 403 403 addr = NULL; 404 404 addrlen = 0; ··· 442 442 struct sock_xprt *transport = container_of(task->tk_rqstp->rq_xprt, struct sock_xprt, xprt); 443 443 444 444 transport->inet->sk_write_pending--; 445 - clear_bit(SOCK_ASYNC_NOSPACE, &transport->sock->flags); 445 + clear_bit(SOCKWQ_ASYNC_NOSPACE, &transport->sock->flags); 446 446 } 447 447 448 448 /** ··· 467 467 468 468 /* Don't race with disconnect */ 469 469 if (xprt_connected(xprt)) { 470 - if (test_bit(SOCK_ASYNC_NOSPACE, &transport->sock->flags)) { 470 + if (test_bit(SOCKWQ_ASYNC_NOSPACE, &transport->sock->flags)) { 471 471 /* 472 472 * Notify TCP that we're limited by the application 473 473 * window size ··· 478 478 xprt_wait_for_buffer_space(task, xs_nospace_callback); 479 479 } 480 480 } else { 481 - clear_bit(SOCK_ASYNC_NOSPACE, &transport->sock->flags); 481 + clear_bit(SOCKWQ_ASYNC_NOSPACE, &transport->sock->flags); 482 482 ret = -ENOTCONN; 483 483 } 484 484 ··· 626 626 case -EPERM: 627 627 /* When the server has died, an ICMP port unreachable message 628 628 * prompts ECONNREFUSED. */ 629 - clear_bit(SOCK_ASYNC_NOSPACE, &transport->sock->flags); 629 + clear_bit(SOCKWQ_ASYNC_NOSPACE, &transport->sock->flags); 630 630 } 631 631 632 632 return status; ··· 715 715 case -EADDRINUSE: 716 716 case -ENOBUFS: 717 717 case -EPIPE: 718 - clear_bit(SOCK_ASYNC_NOSPACE, &transport->sock->flags); 718 + clear_bit(SOCKWQ_ASYNC_NOSPACE, &transport->sock->flags); 719 719 } 720 720 721 721 return status; ··· 1618 1618 1619 1619 if (unlikely(!(xprt = xprt_from_sock(sk)))) 1620 1620 return; 1621 - if (test_and_clear_bit(SOCK_ASYNC_NOSPACE, &sock->flags) == 0) 1621 + if (test_and_clear_bit(SOCKWQ_ASYNC_NOSPACE, &sock->flags) == 0) 1622 1622 return; 1623 1623 1624 1624 xprt_write_space(xprt);
+2
net/tipc/link.c
··· 348 348 349 349 snd_l->ackers++; 350 350 rcv_l->acked = snd_l->snd_nxt - 1; 351 + snd_l->state = LINK_ESTABLISHED; 351 352 tipc_link_build_bc_init_msg(uc_l, xmitq); 352 353 } 353 354 ··· 364 363 rcv_l->state = LINK_RESET; 365 364 if (!snd_l->ackers) { 366 365 tipc_link_reset(snd_l); 366 + snd_l->state = LINK_RESET; 367 367 __skb_queue_purge(xmitq); 368 368 } 369 369 }
+7 -3
net/tipc/socket.c
··· 105 105 static int tipc_backlog_rcv(struct sock *sk, struct sk_buff *skb); 106 106 static void tipc_data_ready(struct sock *sk); 107 107 static void tipc_write_space(struct sock *sk); 108 + static void tipc_sock_destruct(struct sock *sk); 108 109 static int tipc_release(struct socket *sock); 109 110 static int tipc_accept(struct socket *sock, struct socket *new_sock, int flags); 110 111 static int tipc_wait_for_sndmsg(struct socket *sock, long *timeo_p); ··· 382 381 sk->sk_rcvbuf = sysctl_tipc_rmem[1]; 383 382 sk->sk_data_ready = tipc_data_ready; 384 383 sk->sk_write_space = tipc_write_space; 384 + sk->sk_destruct = tipc_sock_destruct; 385 385 tsk->conn_timeout = CONN_TIMEOUT_DEFAULT; 386 386 tsk->sent_unacked = 0; 387 387 atomic_set(&tsk->dupl_rcvcnt, 0); ··· 471 469 tipc_node_xmit_skb(net, skb, dnode, tsk->portid); 472 470 tipc_node_remove_conn(net, dnode, tsk->portid); 473 471 } 474 - 475 - /* Discard any remaining (connection-based) messages in receive queue */ 476 - __skb_queue_purge(&sk->sk_receive_queue); 477 472 478 473 /* Reject any messages that accumulated in backlog queue */ 479 474 sock->state = SS_DISCONNECTING; ··· 1512 1513 wake_up_interruptible_sync_poll(&wq->wait, POLLIN | 1513 1514 POLLRDNORM | POLLRDBAND); 1514 1515 rcu_read_unlock(); 1516 + } 1517 + 1518 + static void tipc_sock_destruct(struct sock *sk) 1519 + { 1520 + __skb_queue_purge(&sk->sk_receive_queue); 1515 1521 } 1516 1522 1517 1523 /**
+5 -2
net/tipc/udp_media.c
··· 157 157 struct udp_media_addr *src = (struct udp_media_addr *)&b->addr.value; 158 158 struct rtable *rt; 159 159 160 - if (skb_headroom(skb) < UDP_MIN_HEADROOM) 161 - pskb_expand_head(skb, UDP_MIN_HEADROOM, 0, GFP_ATOMIC); 160 + if (skb_headroom(skb) < UDP_MIN_HEADROOM) { 161 + err = pskb_expand_head(skb, UDP_MIN_HEADROOM, 0, GFP_ATOMIC); 162 + if (err) 163 + goto tx_error; 164 + } 162 165 163 166 skb_set_inner_protocol(skb, htons(ETH_P_TIPC)); 164 167 ub = rcu_dereference_rtnl(b->media_ptr);
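
The fix above stops ignoring pskb_expand_head()'s return value: the expansion can fail under GFP_ATOMIC, and transmitting through an unexpanded skb is then an error. The same discipline in plain C, with realloc() standing in for the headroom expansion (purely illustrative, not the skb API):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Grow a buffer before prepending a header, propagating failure
     * instead of writing through a buffer that was never expanded. */
    static char *expand_head(char *buf, size_t len, size_t headroom)
    {
        char *nbuf = realloc(buf, len + headroom);

        if (!nbuf)
            return NULL;            /* caller must handle the error */
        memmove(nbuf + headroom, nbuf, len);
        return nbuf;
    }

    int main(void)
    {
        char *buf = malloc(4);

        if (!buf)
            return 1;
        memcpy(buf, "tipc", 4);

        buf = expand_head(buf, 4, 2);
        if (!buf) {
            fprintf(stderr, "expand failed\n");
            return 1;
        }
        memcpy(buf, "hd", 2);       /* header now fits in front */
        printf("%.6s\n", buf);
        free(buf);
        return 0;
    }
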
+231 -37
net/unix/af_unix.c
··· 326 326 return s; 327 327 } 328 328 329 + /* Support code for asymmetrically connected dgram sockets 330 + * 331 + * If a datagram socket is connected to a socket not itself connected 332 + * to the first socket (eg, /dev/log), clients may only enqueue more 333 + * messages if the present receive queue of the server socket is not 334 + * "too large". This means there's a second writeability condition 335 + * poll and sendmsg need to test. The dgram recv code will do a wake 336 + * up on the peer_wait wait queue of a socket upon reception of a 337 + * datagram which needs to be propagated to sleeping would-be writers 338 + * since these might not have sent anything so far. This can't be 339 + * accomplished via poll_wait because the lifetime of the server 340 + * socket might be less than that of its clients if these break their 341 + * association with it or if the server socket is closed while clients 342 + * are still connected to it and there's no way to inform "a polling 343 + * implementation" that it should let go of a certain wait queue 344 + * 345 + * In order to propagate a wake up, a wait_queue_t of the client 346 + * socket is enqueued on the peer_wait queue of the server socket 347 + * whose wake function does a wake_up on the ordinary client socket 348 + * wait queue. This connection is established whenever a write (or 349 + * poll for write) hit the flow control condition and broken when the 350 + * association to the server socket is dissolved or after a wake up 351 + * was relayed. 352 + */ 353 + 354 + static int unix_dgram_peer_wake_relay(wait_queue_t *q, unsigned mode, int flags, 355 + void *key) 356 + { 357 + struct unix_sock *u; 358 + wait_queue_head_t *u_sleep; 359 + 360 + u = container_of(q, struct unix_sock, peer_wake); 361 + 362 + __remove_wait_queue(&unix_sk(u->peer_wake.private)->peer_wait, 363 + q); 364 + u->peer_wake.private = NULL; 365 + 366 + /* relaying can only happen while the wq still exists */ 367 + u_sleep = sk_sleep(&u->sk); 368 + if (u_sleep) 369 + wake_up_interruptible_poll(u_sleep, key); 370 + 371 + return 0; 372 + } 373 + 374 + static int unix_dgram_peer_wake_connect(struct sock *sk, struct sock *other) 375 + { 376 + struct unix_sock *u, *u_other; 377 + int rc; 378 + 379 + u = unix_sk(sk); 380 + u_other = unix_sk(other); 381 + rc = 0; 382 + spin_lock(&u_other->peer_wait.lock); 383 + 384 + if (!u->peer_wake.private) { 385 + u->peer_wake.private = other; 386 + __add_wait_queue(&u_other->peer_wait, &u->peer_wake); 387 + 388 + rc = 1; 389 + } 390 + 391 + spin_unlock(&u_other->peer_wait.lock); 392 + return rc; 393 + } 394 + 395 + static void unix_dgram_peer_wake_disconnect(struct sock *sk, 396 + struct sock *other) 397 + { 398 + struct unix_sock *u, *u_other; 399 + 400 + u = unix_sk(sk); 401 + u_other = unix_sk(other); 402 + spin_lock(&u_other->peer_wait.lock); 403 + 404 + if (u->peer_wake.private == other) { 405 + __remove_wait_queue(&u_other->peer_wait, &u->peer_wake); 406 + u->peer_wake.private = NULL; 407 + } 408 + 409 + spin_unlock(&u_other->peer_wait.lock); 410 + } 411 + 412 + static void unix_dgram_peer_wake_disconnect_wakeup(struct sock *sk, 413 + struct sock *other) 414 + { 415 + unix_dgram_peer_wake_disconnect(sk, other); 416 + wake_up_interruptible_poll(sk_sleep(sk), 417 + POLLOUT | 418 + POLLWRNORM | 419 + POLLWRBAND); 420 + } 421 + 422 + /* preconditions: 423 + * - unix_peer(sk) == other 424 + * - association is stable 425 + */ 426 + static int unix_dgram_peer_wake_me(struct sock *sk, struct sock *other) 427 + { 428 + int connected; 429 
+ 430 + connected = unix_dgram_peer_wake_connect(sk, other); 431 + 432 + if (unix_recvq_full(other)) 433 + return 1; 434 + 435 + if (connected) 436 + unix_dgram_peer_wake_disconnect(sk, other); 437 + 438 + return 0; 439 + } 440 + 329 441 static int unix_writable(const struct sock *sk) 330 442 { 331 443 return sk->sk_state != TCP_LISTEN && ··· 543 431 skpair->sk_state_change(skpair); 544 432 sk_wake_async(skpair, SOCK_WAKE_WAITD, POLL_HUP); 545 433 } 434 + 435 + unix_dgram_peer_wake_disconnect(sk, skpair); 546 436 sock_put(skpair); /* It may now die */ 547 437 unix_peer(sk) = NULL; 548 438 } ··· 780 666 INIT_LIST_HEAD(&u->link); 781 667 mutex_init(&u->readlock); /* single task reading lock */ 782 668 init_waitqueue_head(&u->peer_wait); 669 + init_waitqueue_func_entry(&u->peer_wake, unix_dgram_peer_wake_relay); 783 670 unix_insert_socket(unix_sockets_unbound(sk), sk); 784 671 out: 785 672 if (sk == NULL) ··· 1148 1033 if (unix_peer(sk)) { 1149 1034 struct sock *old_peer = unix_peer(sk); 1150 1035 unix_peer(sk) = other; 1036 + unix_dgram_peer_wake_disconnect_wakeup(sk, old_peer); 1037 + 1151 1038 unix_state_double_unlock(sk, other); 1152 1039 1153 1040 if (other != old_peer) ··· 1551 1434 return err; 1552 1435 } 1553 1436 1437 + static bool unix_passcred_enabled(const struct socket *sock, 1438 + const struct sock *other) 1439 + { 1440 + return test_bit(SOCK_PASSCRED, &sock->flags) || 1441 + !other->sk_socket || 1442 + test_bit(SOCK_PASSCRED, &other->sk_socket->flags); 1443 + } 1444 + 1554 1445 /* 1555 1446 * Some apps rely on write() giving SCM_CREDENTIALS 1556 1447 * We include credentials if source or destination socket ··· 1569 1444 { 1570 1445 if (UNIXCB(skb).pid) 1571 1446 return; 1572 - if (test_bit(SOCK_PASSCRED, &sock->flags) || 1573 - !other->sk_socket || 1574 - test_bit(SOCK_PASSCRED, &other->sk_socket->flags)) { 1447 + if (unix_passcred_enabled(sock, other)) { 1575 1448 UNIXCB(skb).pid = get_pid(task_tgid(current)); 1576 1449 current_uid_gid(&UNIXCB(skb).uid, &UNIXCB(skb).gid); 1577 1450 } 1451 + } 1452 + 1453 + static int maybe_init_creds(struct scm_cookie *scm, 1454 + struct socket *socket, 1455 + const struct sock *other) 1456 + { 1457 + int err; 1458 + struct msghdr msg = { .msg_controllen = 0 }; 1459 + 1460 + err = scm_send(socket, &msg, scm, false); 1461 + if (err) 1462 + return err; 1463 + 1464 + if (unix_passcred_enabled(socket, other)) { 1465 + scm->pid = get_pid(task_tgid(current)); 1466 + current_uid_gid(&scm->creds.uid, &scm->creds.gid); 1467 + } 1468 + return err; 1469 + } 1470 + 1471 + static bool unix_skb_scm_eq(struct sk_buff *skb, 1472 + struct scm_cookie *scm) 1473 + { 1474 + const struct unix_skb_parms *u = &UNIXCB(skb); 1475 + 1476 + return u->pid == scm->pid && 1477 + uid_eq(u->uid, scm->creds.uid) && 1478 + gid_eq(u->gid, scm->creds.gid) && 1479 + unix_secdata_eq(scm, skb); 1578 1480 } 1579 1481 1580 1482 /* ··· 1624 1472 struct scm_cookie scm; 1625 1473 int max_level; 1626 1474 int data_len = 0; 1475 + int sk_locked; 1627 1476 1628 1477 wait_for_unix_gc(); 1629 1478 err = scm_send(sock, msg, &scm, false); ··· 1703 1550 goto out_free; 1704 1551 } 1705 1552 1553 + sk_locked = 0; 1706 1554 unix_state_lock(other); 1555 + restart_locked: 1707 1556 err = -EPERM; 1708 1557 if (!unix_may_send(sk, other)) 1709 1558 goto out_unlock; 1710 1559 1711 - if (sock_flag(other, SOCK_DEAD)) { 1560 + if (unlikely(sock_flag(other, SOCK_DEAD))) { 1712 1561 /* 1713 1562 * Check with 1003.1g - what should 1714 1563 * datagram error ··· 1718 1563 unix_state_unlock(other); 1719 1564 
sock_put(other); 1720 1565 1566 + if (!sk_locked) 1567 + unix_state_lock(sk); 1568 + 1721 1569 err = 0; 1722 - unix_state_lock(sk); 1723 1570 if (unix_peer(sk) == other) { 1724 1571 unix_peer(sk) = NULL; 1572 + unix_dgram_peer_wake_disconnect_wakeup(sk, other); 1573 + 1725 1574 unix_state_unlock(sk); 1726 1575 1727 1576 unix_dgram_disconnected(sk, other); ··· 1751 1592 goto out_unlock; 1752 1593 } 1753 1594 1754 - if (unix_peer(other) != sk && unix_recvq_full(other)) { 1755 - if (!timeo) { 1595 + if (unlikely(unix_peer(other) != sk && unix_recvq_full(other))) { 1596 + if (timeo) { 1597 + timeo = unix_wait_for_peer(other, timeo); 1598 + 1599 + err = sock_intr_errno(timeo); 1600 + if (signal_pending(current)) 1601 + goto out_free; 1602 + 1603 + goto restart; 1604 + } 1605 + 1606 + if (!sk_locked) { 1607 + unix_state_unlock(other); 1608 + unix_state_double_lock(sk, other); 1609 + } 1610 + 1611 + if (unix_peer(sk) != other || 1612 + unix_dgram_peer_wake_me(sk, other)) { 1756 1613 err = -EAGAIN; 1614 + sk_locked = 1; 1757 1615 goto out_unlock; 1758 1616 } 1759 1617 1760 - timeo = unix_wait_for_peer(other, timeo); 1761 - 1762 - err = sock_intr_errno(timeo); 1763 - if (signal_pending(current)) 1764 - goto out_free; 1765 - 1766 - goto restart; 1618 + if (!sk_locked) { 1619 + sk_locked = 1; 1620 + goto restart_locked; 1621 + } 1767 1622 } 1623 + 1624 + if (unlikely(sk_locked)) 1625 + unix_state_unlock(sk); 1768 1626 1769 1627 if (sock_flag(other, SOCK_RCVTSTAMP)) 1770 1628 __net_timestamp(skb); ··· 1796 1620 return len; 1797 1621 1798 1622 out_unlock: 1623 + if (sk_locked) 1624 + unix_state_unlock(sk); 1799 1625 unix_state_unlock(other); 1800 1626 out_free: 1801 1627 kfree_skb(skb); ··· 1919 1741 static ssize_t unix_stream_sendpage(struct socket *socket, struct page *page, 1920 1742 int offset, size_t size, int flags) 1921 1743 { 1922 - int err = 0; 1923 - bool send_sigpipe = true; 1744 + int err; 1745 + bool send_sigpipe = false; 1746 + bool init_scm = true; 1747 + struct scm_cookie scm; 1924 1748 struct sock *other, *sk = socket->sk; 1925 1749 struct sk_buff *skb, *newskb = NULL, *tail = NULL; 1926 1750 ··· 1940 1760 newskb = sock_alloc_send_pskb(sk, 0, 0, flags & MSG_DONTWAIT, 1941 1761 &err, 0); 1942 1762 if (!newskb) 1943 - return err; 1763 + goto err; 1944 1764 } 1945 1765 1946 1766 /* we must acquire readlock as we modify already present ··· 1949 1769 err = mutex_lock_interruptible(&unix_sk(other)->readlock); 1950 1770 if (err) { 1951 1771 err = flags & MSG_DONTWAIT ? 
-EAGAIN : -ERESTARTSYS; 1952 - send_sigpipe = false; 1953 1772 goto err; 1954 1773 } 1955 1774 1956 1775 if (sk->sk_shutdown & SEND_SHUTDOWN) { 1957 1776 err = -EPIPE; 1777 + send_sigpipe = true; 1958 1778 goto err_unlock; 1959 1779 } 1960 1780 ··· 1963 1783 if (sock_flag(other, SOCK_DEAD) || 1964 1784 other->sk_shutdown & RCV_SHUTDOWN) { 1965 1785 err = -EPIPE; 1786 + send_sigpipe = true; 1966 1787 goto err_state_unlock; 1788 + } 1789 + 1790 + if (init_scm) { 1791 + err = maybe_init_creds(&scm, socket, other); 1792 + if (err) 1793 + goto err_state_unlock; 1794 + init_scm = false; 1967 1795 } 1968 1796 1969 1797 skb = skb_peek_tail(&other->sk_receive_queue); 1970 1798 if (tail && tail == skb) { 1971 1799 skb = newskb; 1972 - } else if (!skb) { 1973 - if (newskb) 1800 + } else if (!skb || !unix_skb_scm_eq(skb, &scm)) { 1801 + if (newskb) { 1974 1802 skb = newskb; 1975 - else 1803 + } else { 1804 + tail = skb; 1976 1805 goto alloc_skb; 1806 + } 1977 1807 } else if (newskb) { 1978 1808 /* this is fast path, we don't necessarily need to 1979 1809 * call to kfree_skb even though with newskb == NULL ··· 2004 1814 atomic_add(size, &sk->sk_wmem_alloc); 2005 1815 2006 1816 if (newskb) { 1817 + err = unix_scm_to_skb(&scm, skb, false); 1818 + if (err) 1819 + goto err_state_unlock; 2007 1820 spin_lock(&other->sk_receive_queue.lock); 2008 1821 __skb_queue_tail(&other->sk_receive_queue, newskb); 2009 1822 spin_unlock(&other->sk_receive_queue.lock); ··· 2016 1823 mutex_unlock(&unix_sk(other)->readlock); 2017 1824 2018 1825 other->sk_data_ready(other); 2019 - 1826 + scm_destroy(&scm); 2020 1827 return size; 2021 1828 2022 1829 err_state_unlock: ··· 2027 1834 kfree_skb(newskb); 2028 1835 if (send_sigpipe && !(flags & MSG_NOSIGNAL)) 2029 1836 send_sig(SIGPIPE, current, 0); 1837 + if (!init_scm) 1838 + scm_destroy(&scm); 2030 1839 return err; 2031 1840 } 2032 1841 ··· 2193 1998 !timeo) 2194 1999 break; 2195 2000 2196 - set_bit(SOCK_ASYNC_WAITDATA, &sk->sk_socket->flags); 2001 + sk_set_bit(SOCKWQ_ASYNC_WAITDATA, sk); 2197 2002 unix_state_unlock(sk); 2198 2003 timeo = freezable_schedule_timeout(timeo); 2199 2004 unix_state_lock(sk); ··· 2201 2006 if (sock_flag(sk, SOCK_DEAD)) 2202 2007 break; 2203 2008 2204 - clear_bit(SOCK_ASYNC_WAITDATA, &sk->sk_socket->flags); 2009 + sk_clear_bit(SOCKWQ_ASYNC_WAITDATA, sk); 2205 2010 } 2206 2011 2207 2012 finish_wait(sk_sleep(sk), &wait); ··· 2334 2139 2335 2140 if (check_creds) { 2336 2141 /* Never glue messages from different writers */ 2337 - if ((UNIXCB(skb).pid != scm.pid) || 2338 - !uid_eq(UNIXCB(skb).uid, scm.creds.uid) || 2339 - !gid_eq(UNIXCB(skb).gid, scm.creds.gid) || 2340 - !unix_secdata_eq(&scm, skb)) 2142 + if (!unix_skb_scm_eq(skb, &scm)) 2341 2143 break; 2342 2144 } else if (test_bit(SOCK_PASSCRED, &sock->flags)) { 2343 2145 /* Copy credentials */ ··· 2670 2478 return mask; 2671 2479 2672 2480 writable = unix_writable(sk); 2673 - other = unix_peer_get(sk); 2674 - if (other) { 2675 - if (unix_peer(other) != sk) { 2676 - sock_poll_wait(file, &unix_sk(other)->peer_wait, wait); 2677 - if (unix_recvq_full(other)) 2678 - writable = 0; 2679 - } 2680 - sock_put(other); 2481 + if (writable) { 2482 + unix_state_lock(sk); 2483 + 2484 + other = unix_peer(sk); 2485 + if (other && unix_peer(other) != sk && 2486 + unix_recvq_full(other) && 2487 + unix_dgram_peer_wake_me(sk, other)) 2488 + writable = 0; 2489 + 2490 + unix_state_unlock(sk); 2681 2491 } 2682 2492 2683 2493 if (writable) 2684 2494 mask |= POLLOUT | POLLWRNORM | POLLWRBAND; 2685 2495 else 2686 - 
set_bit(SOCK_ASYNC_NOSPACE, &sk->sk_socket->flags); 2496 + sk_set_bit(SOCKWQ_ASYNC_NOSPACE, sk); 2687 2497 2688 2498 return mask; 2689 2499 }
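
The peer_wake relay introduced above covers the asymmetric case the long comment describes: a dgram sender connected to a server socket that is not connected back is throttled by the server's receive queue, so its POLLOUT state depends on a socket whose wait queue it cannot safely sleep on forever. A small userspace program that exercises exactly that flow-control condition (socket path illustrative, most error checking elided for brevity):

    #include <poll.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_un sa = { .sun_family = AF_UNIX };
        struct pollfd pfd;
        char buf[64] = "x";
        int srv, cli, n;

        strcpy(sa.sun_path, "/tmp/dgram-demo.sock");   /* illustrative path */
        unlink(sa.sun_path);

        srv = socket(AF_UNIX, SOCK_DGRAM, 0);
        bind(srv, (struct sockaddr *)&sa, sizeof(sa));

        cli = socket(AF_UNIX, SOCK_DGRAM, 0);
        connect(cli, (struct sockaddr *)&sa, sizeof(sa)); /* srv never connects back */

        /* Fill the server's receive queue until the flow-control
         * condition from the comment above triggers. */
        while (send(cli, buf, sizeof(buf), MSG_DONTWAIT) > 0)
            ;

        /* Drain one datagram; the patched kernel relays a wakeup via
         * peer_wait, so the client's POLLOUT fires. */
        recv(srv, buf, sizeof(buf), 0);

        pfd.fd = cli;
        pfd.events = POLLOUT;
        n = poll(&pfd, 1, 1000);
        printf("poll=%d revents=%#x\n", n, n > 0 ? (unsigned)pfd.revents : 0);

        unlink(sa.sun_path);
        return 0;
    }
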
+1 -1
scripts/kernel-doc
··· 2711 2711 2712 2712 # generate a sequence of code that will splice in highlighting information 2713 2713 # using the s// operator. 2714 - foreach my $k (keys @highlights) { 2714 + for (my $k = 0; $k < @highlights; $k++) { 2715 2715 my $pattern = $highlights[$k][0]; 2716 2716 my $result = $highlights[$k][1]; 2717 2717 # print STDERR "scanning pattern:$pattern, highlight:($result)\n";
+2
security/keys/encrypted-keys/encrypted.c
··· 845 845 size_t datalen = prep->datalen; 846 846 int ret = 0; 847 847 848 + if (test_bit(KEY_FLAG_NEGATIVE, &key->flags)) 849 + return -ENOKEY; 848 850 if (datalen <= 0 || datalen > 32767 || !prep->data) 849 851 return -EINVAL; 850 852
+4 -1
security/keys/trusted.c
··· 1007 1007 */ 1008 1008 static int trusted_update(struct key *key, struct key_preparsed_payload *prep) 1009 1009 { 1010 - struct trusted_key_payload *p = key->payload.data[0]; 1010 + struct trusted_key_payload *p; 1011 1011 struct trusted_key_payload *new_p; 1012 1012 struct trusted_key_options *new_o; 1013 1013 size_t datalen = prep->datalen; 1014 1014 char *datablob; 1015 1015 int ret = 0; 1016 1016 1017 + if (test_bit(KEY_FLAG_NEGATIVE, &key->flags)) 1018 + return -ENOKEY; 1019 + p = key->payload.data[0]; 1017 1020 if (!p->migratable) 1018 1021 return -EPERM; 1019 1022 if (datalen <= 0 || datalen > 32767 || !prep->data)
+4 -1
security/keys/user_defined.c
··· 120 120 121 121 if (ret == 0) { 122 122 /* attach the new data, displacing the old */ 123 - zap = key->payload.data[0]; 123 + if (!test_bit(KEY_FLAG_NEGATIVE, &key->flags)) 124 + zap = key->payload.data[0]; 125 + else 126 + zap = NULL; 124 127 rcu_assign_keypointer(key, upayload); 125 128 key->expiry = 0; 126 129 }
+2 -2
security/selinux/ss/conditional.c
··· 638 638 { 639 639 struct avtab_node *node; 640 640 641 - if (!ctab || !key || !avd || !xperms) 641 + if (!ctab || !key || !avd) 642 642 return; 643 643 644 644 for (node = avtab_search_node(ctab, key); node; ··· 657 657 if ((u16)(AVTAB_AUDITALLOW|AVTAB_ENABLED) == 658 658 (node->key.specified & (AVTAB_AUDITALLOW|AVTAB_ENABLED))) 659 659 avd->auditallow |= node->datum.u.data; 660 - if ((node->key.specified & AVTAB_ENABLED) && 660 + if (xperms && (node->key.specified & AVTAB_ENABLED) && 661 661 (node->key.specified & AVTAB_XPERMS)) 662 662 services_compute_xperms_drivers(xperms, node); 663 663 }
+4
sound/firewire/dice/dice.c
··· 12 12 MODULE_LICENSE("GPL v2"); 13 13 14 14 #define OUI_WEISS 0x001c6a 15 + #define OUI_LOUD 0x000ff2 15 16 16 17 #define DICE_CATEGORY_ID 0x04 17 18 #define WEISS_CATEGORY_ID 0x00 19 + #define LOUD_CATEGORY_ID 0x10 18 20 19 21 static int dice_interface_check(struct fw_unit *unit) 20 22 { ··· 59 57 } 60 58 if (vendor == OUI_WEISS) 61 59 category = WEISS_CATEGORY_ID; 60 + else if (vendor == OUI_LOUD) 61 + category = LOUD_CATEGORY_ID; 62 62 else 63 63 category = DICE_CATEGORY_ID; 64 64 if (device->config_rom[3] != ((vendor << 8) | category) ||
+7
sound/pci/hda/hda_intel.c
··· 312 312 (AZX_DCAPS_INTEL_PCH | AZX_DCAPS_SEPARATE_STREAM_TAG |\ 313 313 AZX_DCAPS_I915_POWERWELL) 314 314 315 + #define AZX_DCAPS_INTEL_BROXTON \ 316 + (AZX_DCAPS_INTEL_PCH | AZX_DCAPS_SEPARATE_STREAM_TAG |\ 317 + AZX_DCAPS_I915_POWERWELL) 318 + 315 319 /* quirks for ATI SB / AMD Hudson */ 316 320 #define AZX_DCAPS_PRESET_ATI_SB \ 317 321 (AZX_DCAPS_NO_TCSEL | AZX_DCAPS_SYNC_WRITE | AZX_DCAPS_POSFIX_LPIB |\ ··· 2128 2124 /* Sunrise Point-LP */ 2129 2125 { PCI_DEVICE(0x8086, 0x9d70), 2130 2126 .driver_data = AZX_DRIVER_PCH | AZX_DCAPS_INTEL_SKYLAKE }, 2127 + /* Broxton-P(Apollolake) */ 2128 + { PCI_DEVICE(0x8086, 0x5a98), 2129 + .driver_data = AZX_DRIVER_PCH | AZX_DCAPS_INTEL_BROXTON }, 2131 2130 /* Haswell */ 2132 2131 { PCI_DEVICE(0x8086, 0x0a0c), 2133 2132 .driver_data = AZX_DRIVER_HDMI | AZX_DCAPS_INTEL_HASWELL },
+2 -1
sound/pci/hda/patch_hdmi.c
··· 2378 2378 * can cover the codec power request, and so need not set this flag. 2379 2379 * For previous platforms, there is no such power well feature. 2380 2380 */ 2381 - if (is_valleyview_plus(codec) || is_skylake(codec)) 2381 + if (is_valleyview_plus(codec) || is_skylake(codec) || 2382 + is_broxton(codec)) 2382 2383 codec->core.link_power_control = 1; 2383 2384 2384 2385 if (is_haswell_plus(codec) || is_valleyview_plus(codec)) {
+23
sound/pci/hda/patch_realtek.c
··· 1759 1759 ALC882_FIXUP_NO_PRIMARY_HP, 1760 1760 ALC887_FIXUP_ASUS_BASS, 1761 1761 ALC887_FIXUP_BASS_CHMAP, 1762 + ALC882_FIXUP_DISABLE_AAMIX, 1762 1763 }; 1763 1764 1764 1765 static void alc889_fixup_coef(struct hda_codec *codec, ··· 1921 1920 1922 1921 static void alc_fixup_bass_chmap(struct hda_codec *codec, 1923 1922 const struct hda_fixup *fix, int action); 1923 + static void alc_fixup_disable_aamix(struct hda_codec *codec, 1924 + const struct hda_fixup *fix, int action); 1924 1925 1925 1926 static const struct hda_fixup alc882_fixups[] = { 1926 1927 [ALC882_FIXUP_ABIT_AW9D_MAX] = { ··· 2154 2151 .type = HDA_FIXUP_FUNC, 2155 2152 .v.func = alc_fixup_bass_chmap, 2156 2153 }, 2154 + [ALC882_FIXUP_DISABLE_AAMIX] = { 2155 + .type = HDA_FIXUP_FUNC, 2156 + .v.func = alc_fixup_disable_aamix, 2157 + }, 2157 2158 }; 2158 2159 2159 2160 static const struct snd_pci_quirk alc882_fixup_tbl[] = { ··· 2225 2218 SND_PCI_QUIRK(0x1462, 0x7350, "MSI-7350", ALC889_FIXUP_CD), 2226 2219 SND_PCI_QUIRK_VENDOR(0x1462, "MSI", ALC882_FIXUP_GPIO3), 2227 2220 SND_PCI_QUIRK(0x1458, 0xa002, "Gigabyte EP45-DS3/Z87X-UD3H", ALC889_FIXUP_FRONT_HP_NO_PRESENCE), 2221 + SND_PCI_QUIRK(0x1458, 0xa182, "Gigabyte Z170X-UD3", ALC882_FIXUP_DISABLE_AAMIX), 2228 2222 SND_PCI_QUIRK(0x147b, 0x107a, "Abit AW9D-MAX", ALC882_FIXUP_ABIT_AW9D_MAX), 2229 2223 SND_PCI_QUIRK_VENDOR(0x1558, "Clevo laptop", ALC882_FIXUP_EAPD), 2230 2224 SND_PCI_QUIRK(0x161f, 0x2054, "Medion laptop", ALC883_FIXUP_EAPD), ··· 4595 4587 ALC292_FIXUP_DISABLE_AAMIX, 4596 4588 ALC298_FIXUP_DELL1_MIC_NO_PRESENCE, 4597 4589 ALC275_FIXUP_DELL_XPS, 4590 + ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE, 4598 4591 }; 4599 4592 4600 4593 static const struct hda_fixup alc269_fixups[] = { ··· 5176 5167 {} 5177 5168 } 5178 5169 }, 5170 + [ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE] = { 5171 + .type = HDA_FIXUP_VERBS, 5172 + .v.verbs = (const struct hda_verb[]) { 5173 + /* Disable pass-through path for FRONT 14h */ 5174 + {0x20, AC_VERB_SET_COEF_INDEX, 0x36}, 5175 + {0x20, AC_VERB_SET_PROC_COEF, 0x1737}, 5176 + {} 5177 + }, 5178 + .chained = true, 5179 + .chain_id = ALC255_FIXUP_DELL1_MIC_NO_PRESENCE 5180 + }, 5179 5181 }; 5180 5182 5181 5183 static const struct snd_pci_quirk alc269_fixup_tbl[] = { ··· 5200 5180 SND_PCI_QUIRK(0x1025, 0x0742, "Acer AO756", ALC271_FIXUP_HP_GATE_MIC_JACK), 5201 5181 SND_PCI_QUIRK(0x1025, 0x0775, "Acer Aspire E1-572", ALC271_FIXUP_HP_GATE_MIC_JACK_E1_572), 5202 5182 SND_PCI_QUIRK(0x1025, 0x079b, "Acer Aspire V5-573G", ALC282_FIXUP_ASPIRE_V5_PINS), 5183 + SND_PCI_QUIRK(0x1025, 0x106d, "Acer Cloudbook 14", ALC283_FIXUP_CHROME_BOOK), 5203 5184 SND_PCI_QUIRK(0x1028, 0x0470, "Dell M101z", ALC269_FIXUP_DELL_M101Z), 5204 5185 SND_PCI_QUIRK(0x1028, 0x054b, "Dell XPS one 2710", ALC275_FIXUP_DELL_XPS), 5186 + SND_PCI_QUIRK(0x1028, 0x05bd, "Dell Latitude E6440", ALC292_FIXUP_DELL_E7X), 5205 5187 SND_PCI_QUIRK(0x1028, 0x05ca, "Dell Latitude E7240", ALC292_FIXUP_DELL_E7X), 5206 5188 SND_PCI_QUIRK(0x1028, 0x05cb, "Dell Latitude E7440", ALC292_FIXUP_DELL_E7X), 5207 5189 SND_PCI_QUIRK(0x1028, 0x05da, "Dell Vostro 5460", ALC290_FIXUP_SUBWOOFER), ··· 5226 5204 SND_PCI_QUIRK(0x1028, 0x06de, "Dell", ALC292_FIXUP_DISABLE_AAMIX), 5227 5205 SND_PCI_QUIRK(0x1028, 0x06df, "Dell", ALC292_FIXUP_DISABLE_AAMIX), 5228 5206 SND_PCI_QUIRK(0x1028, 0x06e0, "Dell", ALC292_FIXUP_DISABLE_AAMIX), 5207 + SND_PCI_QUIRK(0x1028, 0x0704, "Dell XPS 13", ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE), 5229 5208 SND_PCI_QUIRK(0x1028, 0x164a, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE), 5230 5209 
SND_PCI_QUIRK(0x1028, 0x164b, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE), 5231 5210 SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC2),
+29 -16
sound/pci/hda/patch_sigmatel.c
··· 3110 3110 spec->gpio_led = 0x08; 3111 3111 } 3112 3112 3113 + static bool is_hp_output(struct hda_codec *codec, hda_nid_t pin) 3114 + { 3115 + unsigned int pin_cfg = snd_hda_codec_get_pincfg(codec, pin); 3116 + 3117 + /* count line-out, too, as BIOS sets often so */ 3118 + return get_defcfg_connect(pin_cfg) != AC_JACK_PORT_NONE && 3119 + (get_defcfg_device(pin_cfg) == AC_JACK_LINE_OUT || 3120 + get_defcfg_device(pin_cfg) == AC_JACK_HP_OUT); 3121 + } 3122 + 3123 + static void fixup_hp_headphone(struct hda_codec *codec, hda_nid_t pin) 3124 + { 3125 + unsigned int pin_cfg = snd_hda_codec_get_pincfg(codec, pin); 3126 + 3127 + /* It was changed in the BIOS to just satisfy MS DTM. 3128 + * Lets turn it back into slaved HP 3129 + */ 3130 + pin_cfg = (pin_cfg & (~AC_DEFCFG_DEVICE)) | 3131 + (AC_JACK_HP_OUT << AC_DEFCFG_DEVICE_SHIFT); 3132 + pin_cfg = (pin_cfg & (~(AC_DEFCFG_DEF_ASSOC | AC_DEFCFG_SEQUENCE))) | 3133 + 0x1f; 3134 + snd_hda_codec_set_pincfg(codec, pin, pin_cfg); 3135 + } 3113 3136 3114 3137 static void stac92hd71bxx_fixup_hp(struct hda_codec *codec, 3115 3138 const struct hda_fixup *fix, int action) ··· 3142 3119 if (action != HDA_FIXUP_ACT_PRE_PROBE) 3143 3120 return; 3144 3121 3145 - if (hp_blike_system(codec->core.subsystem_id)) { 3146 - unsigned int pin_cfg = snd_hda_codec_get_pincfg(codec, 0x0f); 3147 - if (get_defcfg_device(pin_cfg) == AC_JACK_LINE_OUT || 3148 - get_defcfg_device(pin_cfg) == AC_JACK_SPEAKER || 3149 - get_defcfg_device(pin_cfg) == AC_JACK_HP_OUT) { 3150 - /* It was changed in the BIOS to just satisfy MS DTM. 3151 - * Lets turn it back into slaved HP 3152 - */ 3153 - pin_cfg = (pin_cfg & (~AC_DEFCFG_DEVICE)) 3154 - | (AC_JACK_HP_OUT << 3155 - AC_DEFCFG_DEVICE_SHIFT); 3156 - pin_cfg = (pin_cfg & (~(AC_DEFCFG_DEF_ASSOC 3157 - | AC_DEFCFG_SEQUENCE))) 3158 - | 0x1f; 3159 - snd_hda_codec_set_pincfg(codec, 0x0f, pin_cfg); 3160 - } 3122 + /* when both output A and F are assigned, these are supposedly 3123 + * dock and built-in headphones; fix both pin configs 3124 + */ 3125 + if (is_hp_output(codec, 0x0a) && is_hp_output(codec, 0x0f)) { 3126 + fixup_hp_headphone(codec, 0x0a); 3127 + fixup_hp_headphone(codec, 0x0f); 3161 3128 } 3162 3129 3163 3130 if (find_mute_led_cfg(codec, 1))
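
fixup_hp_headphone() above rewrites two fields of the pin default configuration: the device type becomes headphone-out and the association/sequence nibbles become 0x1/0xf, i.e. a slaved HP. The bit surgery in isolation, with the relevant default-config masks written out locally (values reproduced from the HD-audio spec for this sketch only):

    #include <stdio.h>
    #include <stdint.h>

    /* HDA pin default-config fields used by the fixup (per the spec). */
    #define AC_DEFCFG_SEQUENCE      (0xf << 0)
    #define AC_DEFCFG_DEF_ASSOC     (0xf << 4)
    #define AC_DEFCFG_DEVICE        (0xf << 20)
    #define AC_DEFCFG_DEVICE_SHIFT  20
    #define AC_JACK_HP_OUT          0x2

    static uint32_t slave_hp(uint32_t pin_cfg)
    {
        /* Force the device type to headphone out... */
        pin_cfg = (pin_cfg & ~AC_DEFCFG_DEVICE) |
                  (AC_JACK_HP_OUT << AC_DEFCFG_DEVICE_SHIFT);
        /* ...and mark it as a slaved output (assoc 1, sequence 15). */
        pin_cfg = (pin_cfg & ~(AC_DEFCFG_DEF_ASSOC | AC_DEFCFG_SEQUENCE)) |
                  0x1f;
        return pin_cfg;
    }

    int main(void)
    {
        uint32_t cfg = 0x0321101f;  /* arbitrary example pin config */

        printf("%08x -> %08x\n", (unsigned)cfg, (unsigned)slave_hp(cfg));
        return 0;
    }
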
+46
sound/usb/midi.c
··· 174 174 u8 running_status_length; 175 175 } ports[0x10]; 176 176 u8 seen_f5; 177 + bool in_sysex; 178 + u8 last_cin; 177 179 u8 error_resubmit; 178 180 int current_port; 179 181 }; ··· 470 468 } 471 469 472 470 /* 471 + * QinHeng CH345 is buggy: every second packet inside a SysEx has not CIN 4 472 + * but the previously seen CIN, but still with three data bytes. 473 + */ 474 + static void ch345_broken_sysex_input(struct snd_usb_midi_in_endpoint *ep, 475 + uint8_t *buffer, int buffer_length) 476 + { 477 + unsigned int i, cin, length; 478 + 479 + for (i = 0; i + 3 < buffer_length; i += 4) { 480 + if (buffer[i] == 0 && i > 0) 481 + break; 482 + cin = buffer[i] & 0x0f; 483 + if (ep->in_sysex && 484 + cin == ep->last_cin && 485 + (buffer[i + 1 + (cin == 0x6)] & 0x80) == 0) 486 + cin = 0x4; 487 + #if 0 488 + if (buffer[i + 1] == 0x90) { 489 + /* 490 + * Either a corrupted running status or a real note-on 491 + * message; impossible to detect reliably. 492 + */ 493 + } 494 + #endif 495 + length = snd_usbmidi_cin_length[cin]; 496 + snd_usbmidi_input_data(ep, 0, &buffer[i + 1], length); 497 + ep->in_sysex = cin == 0x4; 498 + if (!ep->in_sysex) 499 + ep->last_cin = cin; 500 + } 501 + } 502 + 503 + /* 473 504 * CME protocol: like the standard protocol, but SysEx commands are sent as a 474 505 * single USB packet preceded by a 0x0F byte. 475 506 */ ··· 691 656 692 657 static struct usb_protocol_ops snd_usbmidi_cme_ops = { 693 658 .input = snd_usbmidi_cme_input, 659 + .output = snd_usbmidi_standard_output, 660 + .output_packet = snd_usbmidi_output_standard_packet, 661 + }; 662 + 663 + static struct usb_protocol_ops snd_usbmidi_ch345_broken_sysex_ops = { 664 + .input = ch345_broken_sysex_input, 694 665 .output = snd_usbmidi_standard_output, 695 666 .output_packet = snd_usbmidi_output_standard_packet, 696 667 }; ··· 1382 1341 * Various chips declare a packet size larger than 4 bytes, but 1383 1342 * do not actually work with larger packets: 1384 1343 */ 1344 + case USB_ID(0x0a67, 0x5011): /* Medeli DD305 */ 1385 1345 case USB_ID(0x0a92, 0x1020): /* ESI M4U */ 1386 1346 case USB_ID(0x1430, 0x474b): /* RedOctane GH MIDI INTERFACE */ 1387 1347 case USB_ID(0x15ca, 0x0101): /* Textech USB Midi Cable */ ··· 2418 2376 if (err < 0) 2419 2377 break; 2420 2378 2379 + err = snd_usbmidi_detect_per_port_endpoints(umidi, endpoints); 2380 + break; 2381 + case QUIRK_MIDI_CH345: 2382 + umidi->usb_protocol_ops = &snd_usbmidi_ch345_broken_sysex_ops; 2421 2383 err = snd_usbmidi_detect_per_port_endpoints(umidi, endpoints); 2422 2384 break; 2423 2385 default:
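
For reference, USB MIDI packs each event into 4 bytes whose low nibble of byte 0 (the Code Index Number) encodes how many of the following bytes carry MIDI data; the CH345 workaround above rewrites a repeated CIN inside a SysEx back to 0x4 (SysEx continuation). A standalone decoder over a canned buffer, using the same CIN-length table as the driver:

    #include <stdio.h>
    #include <stdint.h>

    /* Bytes of MIDI data per Code Index Number (USB MIDI 1.0). */
    static const uint8_t cin_length[16] = {
        0, 0, 2, 3, 3, 1, 2, 3, 3, 3, 3, 3, 2, 2, 3, 1
    };

    int main(void)
    {
        /* Canned event packets: note-on, SysEx start, SysEx end. */
        const uint8_t buf[] = {
            0x09, 0x90, 0x3c, 0x7f, /* CIN 0x9: note-on, 3 data bytes  */
            0x04, 0xf0, 0x01, 0x02, /* CIN 0x4: SysEx starts/continues */
            0x06, 0x03, 0xf7, 0x00, /* CIN 0x6: SysEx ends, 2 bytes    */
        };
        unsigned int i, j, cin, len;

        for (i = 0; i + 3 < sizeof(buf); i += 4) {
            cin = buf[i] & 0x0f;
            len = cin_length[cin];
            printf("cin=%#x data:", cin);
            for (j = 0; j < len; j++)
                printf(" %02x", buf[i + 1 + j]);
            printf("\n");
        }
        return 0;
    }
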
+11
sound/usb/quirks-table.h
··· 2829 2829 .idProduct = 0x1020, 2830 2830 }, 2831 2831 2832 + /* QinHeng devices */ 2833 + { 2834 + USB_DEVICE(0x1a86, 0x752d), 2835 + .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) { 2836 + .vendor_name = "QinHeng", 2837 + .product_name = "CH345", 2838 + .ifnum = 1, 2839 + .type = QUIRK_MIDI_CH345 2840 + } 2841 + }, 2842 + 2832 2843 /* KeithMcMillen Stringport */ 2833 2844 { 2834 2845 USB_DEVICE(0x1f38, 0x0001),
+1
sound/usb/quirks.c
··· 538 538 [QUIRK_MIDI_CME] = create_any_midi_quirk, 539 539 [QUIRK_MIDI_AKAI] = create_any_midi_quirk, 540 540 [QUIRK_MIDI_FTDI] = create_any_midi_quirk, 541 + [QUIRK_MIDI_CH345] = create_any_midi_quirk, 541 542 [QUIRK_AUDIO_STANDARD_INTERFACE] = create_standard_audio_quirk, 542 543 [QUIRK_AUDIO_FIXED_ENDPOINT] = create_fixed_stream_quirk, 543 544 [QUIRK_AUDIO_EDIROL_UAXX] = create_uaxx_quirk,
+1
sound/usb/usbaudio.h
··· 95 95 QUIRK_MIDI_AKAI, 96 96 QUIRK_MIDI_US122L, 97 97 QUIRK_MIDI_FTDI, 98 + QUIRK_MIDI_CH345, 98 99 QUIRK_AUDIO_STANDARD_INTERFACE, 99 100 QUIRK_AUDIO_FIXED_ENDPOINT, 100 101 QUIRK_AUDIO_EDIROL_UAXX,
+10 -1
tools/Makefile
··· 32 32 @echo ' from the kernel command line to build and install one of' 33 33 @echo ' the tools above' 34 34 @echo '' 35 + @echo ' $$ make tools/all' 36 + @echo '' 37 + @echo ' builds all tools.' 38 + @echo '' 35 39 @echo ' $$ make tools/install' 36 40 @echo '' 37 41 @echo ' installs all tools.' ··· 81 77 freefall: FORCE 82 78 $(call descend,laptop/$@) 83 79 80 + all: acpi cgroup cpupower hv firewire lguest \ 81 + perf selftests turbostat usb \ 82 + virtio vm net x86_energy_perf_policy \ 83 + tmon freefall 84 + 84 85 acpi_install: 85 86 $(call descend,power/$(@:_install=),install) 86 87 ··· 110 101 install: acpi_install cgroup_install cpupower_install hv_install firewire_install lguest_install \ 111 102 perf_install selftests_install turbostat_install usb_install \ 112 103 virtio_install vm_install net_install x86_energy_perf_policy_install \ 113 - tmon freefall_install 104 + tmon_install freefall_install 114 105 115 106 acpi_clean: 116 107 $(call descend,power/acpi,clean)
+1
tools/perf/builtin-inject.c
··· 675 675 .fork = perf_event__repipe, 676 676 .exit = perf_event__repipe, 677 677 .lost = perf_event__repipe, 678 + .lost_samples = perf_event__repipe, 678 679 .aux = perf_event__repipe, 679 680 .itrace_start = perf_event__repipe, 680 681 .context_switch = perf_event__repipe,
+3 -3
tools/perf/builtin-report.c
··· 44 44 struct report { 45 45 struct perf_tool tool; 46 46 struct perf_session *session; 47 - bool force, use_tui, use_gtk, use_stdio; 47 + bool use_tui, use_gtk, use_stdio; 48 48 bool hide_unresolved; 49 49 bool dont_use_callchains; 50 50 bool show_full_info; ··· 678 678 "file", "vmlinux pathname"), 679 679 OPT_STRING(0, "kallsyms", &symbol_conf.kallsyms_name, 680 680 "file", "kallsyms pathname"), 681 - OPT_BOOLEAN('f', "force", &report.force, "don't complain, do it"), 681 + OPT_BOOLEAN('f', "force", &symbol_conf.force, "don't complain, do it"), 682 682 OPT_BOOLEAN('m', "modules", &symbol_conf.use_modules, 683 683 "load module symbols - WARNING: use only with -k and LIVE kernel"), 684 684 OPT_BOOLEAN('n', "show-nr-samples", &symbol_conf.show_nr_samples, ··· 832 832 } 833 833 834 834 file.path = input_name; 835 - file.force = report.force; 835 + file.force = symbol_conf.force; 836 836 837 837 repeat: 838 838 session = perf_session__new(&file, false, &report.tool);
+1 -6
tools/perf/ui/browsers/hists.c
··· 1430 1430 1431 1431 struct popup_action { 1432 1432 struct thread *thread; 1433 - struct dso *dso; 1434 1433 struct map_symbol ms; 1435 1434 int socket; 1436 1435 ··· 1564 1565 return 0; 1565 1566 1566 1567 act->ms.map = map; 1567 - act->dso = map->dso; 1568 1568 act->fn = do_zoom_dso; 1569 1569 return 1; 1570 1570 } ··· 1825 1827 1826 1828 while (1) { 1827 1829 struct thread *thread = NULL; 1828 - struct dso *dso = NULL; 1829 1830 struct map *map = NULL; 1830 1831 int choice = 0; 1831 1832 int socked_id = -1; ··· 1836 1839 if (browser->he_selection != NULL) { 1837 1840 thread = hist_browser__selected_thread(browser); 1838 1841 map = browser->selection->map; 1839 - if (map) 1840 - dso = map->dso; 1841 1842 socked_id = browser->he_selection->socket; 1842 1843 } 1843 1844 switch (key) { ··· 1869 1874 hist_browser__dump(browser); 1870 1875 continue; 1871 1876 case 'd': 1872 - actions->dso = dso; 1877 + actions->ms.map = map; 1873 1878 do_zoom_dso(browser, actions); 1874 1879 continue; 1875 1880 case 'V':
+1
tools/perf/util/build-id.c
··· 76 76 .exit = perf_event__exit_del_thread, 77 77 .attr = perf_event__process_attr, 78 78 .build_id = perf_event__process_build_id, 79 + .ordered_events = true, 79 80 }; 80 81 81 82 int build_id__sprintf(const u8 *build_id, int len, char *bf)
+17
tools/perf/util/dso.c
··· 933 933 /* Add new node and rebalance tree */ 934 934 rb_link_node(&dso->rb_node, parent, p); 935 935 rb_insert_color(&dso->rb_node, root); 936 + dso->root = root; 936 937 } 937 938 return NULL; 938 939 } ··· 946 945 947 946 void dso__set_long_name(struct dso *dso, const char *name, bool name_allocated) 948 947 { 948 + struct rb_root *root = dso->root; 949 + 949 950 if (name == NULL) 950 951 return; 951 952 952 953 if (dso->long_name_allocated) 953 954 free((char *)dso->long_name); 954 955 956 + if (root) { 957 + rb_erase(&dso->rb_node, root); 958 + /* 959 + * __dso__findlink_by_longname() isn't guaranteed to add it 960 + * back, so a clean removal is required here. 961 + */ 962 + RB_CLEAR_NODE(&dso->rb_node); 963 + dso->root = NULL; 964 + } 965 + 955 966 dso->long_name = name; 956 967 dso->long_name_len = strlen(name); 957 968 dso->long_name_allocated = name_allocated; 969 + 970 + if (root) 971 + __dso__findlink_by_longname(root, dso, NULL); 958 972 } 959 973 960 974 void dso__set_short_name(struct dso *dso, const char *name, bool name_allocated) ··· 1062 1046 dso->kernel = DSO_TYPE_USER; 1063 1047 dso->needs_swap = DSO_SWAP__UNSET; 1064 1048 RB_CLEAR_NODE(&dso->rb_node); 1049 + dso->root = NULL; 1065 1050 INIT_LIST_HEAD(&dso->node); 1066 1051 INIT_LIST_HEAD(&dso->data.open_entry); 1067 1052 pthread_mutex_init(&dso->lock, NULL);
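
The dso__set_long_name() change above enforces a general rule: a node must be unlinked from a sorted structure before the key it is sorted by is mutated, and re-inserted afterwards, otherwise the ordering silently rots. The same discipline in a self-contained form, with a sorted singly linked list standing in for the rbtree:

    #include <stdio.h>
    #include <string.h>

    struct node {
        const char *name;       /* the sort key */
        struct node *next;
    };

    /* Insert keeping the list sorted by name. */
    static void insert_sorted(struct node **head, struct node *n)
    {
        while (*head && strcmp((*head)->name, n->name) < 0)
            head = &(*head)->next;
        n->next = *head;
        *head = n;
    }

    static void unlink_node(struct node **head, struct node *n)
    {
        while (*head && *head != n)
            head = &(*head)->next;
        if (*head)
            *head = n->next;
    }

    /* Re-key safely: remove first, mutate, then re-insert. */
    static void set_name(struct node **head, struct node *n, const char *name)
    {
        unlink_node(head, n);
        n->name = name;
        insert_sorted(head, n);
    }

    int main(void)
    {
        struct node a = { "alpha" }, b = { "mid" }, c = { "zeta" };
        struct node *head = NULL, *p;

        insert_sorted(&head, &b);
        insert_sorted(&head, &a);
        insert_sorted(&head, &c);

        set_name(&head, &a, "omega");   /* in-place rename would break order */

        for (p = head; p; p = p->next)
            printf("%s\n", p->name);
        return 0;
    }
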
+1
tools/perf/util/dso.h
··· 135 135 pthread_mutex_t lock; 136 136 struct list_head node; 137 137 struct rb_node rb_node; /* rbtree node sorted by long name */ 138 + struct rb_root *root; /* root of rbtree that rb_node is in */ 138 139 struct rb_root symbols[MAP__NR_TYPES]; 139 140 struct rb_root symbol_names[MAP__NR_TYPES]; 140 141 struct {
+1
tools/perf/util/machine.c
··· 91 91 92 92 list_for_each_entry_safe(pos, n, &dsos->head, node) { 93 93 RB_CLEAR_NODE(&pos->rb_node); 94 + pos->root = NULL; 94 95 list_del_init(&pos->node); 95 96 dso__put(pos); 96 97 }
+17 -7
tools/perf/util/probe-finder.c
··· 1183 1183 container_of(pf, struct trace_event_finder, pf); 1184 1184 struct perf_probe_point *pp = &pf->pev->point; 1185 1185 struct probe_trace_event *tev; 1186 - struct perf_probe_arg *args; 1186 + struct perf_probe_arg *args = NULL; 1187 1187 int ret, i; 1188 1188 1189 1189 /* Check number of tevs */ ··· 1198 1198 ret = convert_to_trace_point(&pf->sp_die, tf->mod, pf->addr, 1199 1199 pp->retprobe, pp->function, &tev->point); 1200 1200 if (ret < 0) 1201 - return ret; 1201 + goto end; 1202 1202 1203 1203 tev->point.realname = strdup(dwarf_diename(sc_die)); 1204 - if (!tev->point.realname) 1205 - return -ENOMEM; 1204 + if (!tev->point.realname) { 1205 + ret = -ENOMEM; 1206 + goto end; 1207 + } 1206 1208 1207 1209 pr_debug("Probe point found: %s+%lu\n", tev->point.symbol, 1208 1210 tev->point.offset); 1209 1211 1210 1212 /* Expand special probe argument if exist */ 1211 1213 args = zalloc(sizeof(struct perf_probe_arg) * MAX_PROBE_ARGS); 1212 - if (args == NULL) 1213 - return -ENOMEM; 1214 + if (args == NULL) { 1215 + ret = -ENOMEM; 1216 + goto end; 1217 + } 1214 1218 1215 1219 ret = expand_probe_args(sc_die, pf, args); 1216 1220 if (ret < 0) ··· 1238 1234 } 1239 1235 1240 1236 end: 1237 + if (ret) { 1238 + clear_probe_trace_event(tev); 1239 + tf->ntevs--; 1240 + } 1241 1241 free(args); 1242 1242 return ret; 1243 1243 } ··· 1254 1246 struct trace_event_finder tf = { 1255 1247 .pf = {.pev = pev, .callback = add_probe_trace_event}, 1256 1248 .max_tevs = probe_conf.max_probes, .mod = dbg->mod}; 1257 - int ret; 1249 + int ret, i; 1258 1250 1259 1251 /* Allocate result tevs array */ 1260 1252 *tevs = zalloc(sizeof(struct probe_trace_event) * tf.max_tevs); ··· 1266 1258 1267 1259 ret = debuginfo__find_probes(dbg, &tf.pf); 1268 1260 if (ret < 0) { 1261 + for (i = 0; i < tf.ntevs; i++) 1262 + clear_probe_trace_event(&tf.tevs[i]); 1269 1263 zfree(tevs); 1270 1264 return ret; 1271 1265 }
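
The probe-finder fix funnels every failure in add_probe_trace_event() through one cleanup path so the half-built tev is cleared and ntevs rolled back exactly once. The goto-cleanup pattern in miniature (names are illustrative, not the perf API):

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct event {
        char *name;
        char *args;
    };

    static void clear_event(struct event *ev)
    {
        free(ev->name);
        free(ev->args);
        memset(ev, 0, sizeof(*ev));
    }

    /* Every failure funnels through "end" so partially filled state
     * is undone exactly once. */
    static int build_event(struct event *ev, const char *name)
    {
        int ret = 0;

        ev->name = strdup(name);
        if (!ev->name) {
            ret = -ENOMEM;
            goto end;
        }

        ev->args = calloc(16, sizeof(char));
        if (!ev->args) {
            ret = -ENOMEM;
            goto end;
        }

    end:
        if (ret)
            clear_event(ev);
        return ret;
    }

    int main(void)
    {
        struct event ev = { 0 };

        printf("build_event: %d\n", build_event(&ev, "probe:demo"));
        clear_event(&ev);
        return 0;
    }
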
+16 -18
tools/perf/util/symbol.c
··· 654 654 struct map_groups *kmaps = map__kmaps(map); 655 655 struct map *curr_map; 656 656 struct symbol *pos; 657 - int count = 0, moved = 0; 657 + int count = 0; 658 + struct rb_root old_root = dso->symbols[map->type]; 658 659 struct rb_root *root = &dso->symbols[map->type]; 659 660 struct rb_node *next = rb_first(root); 660 661 661 662 if (!kmaps) 662 663 return -1; 663 664 665 + *root = RB_ROOT; 666 + 664 667 while (next) { 665 668 char *module; 666 669 667 670 pos = rb_entry(next, struct symbol, rb_node); 668 671 next = rb_next(&pos->rb_node); 672 + 673 + rb_erase_init(&pos->rb_node, &old_root); 669 674 670 675 module = strchr(pos->name, '\t'); 671 676 if (module) ··· 679 674 curr_map = map_groups__find(kmaps, map->type, pos->start); 680 675 681 676 if (!curr_map || (filter && filter(curr_map, pos))) { 682 - rb_erase_init(&pos->rb_node, root); 683 677 symbol__delete(pos); 684 - } else { 685 - pos->start -= curr_map->start - curr_map->pgoff; 686 - if (pos->end) 687 - pos->end -= curr_map->start - curr_map->pgoff; 688 - if (curr_map->dso != map->dso) { 689 - rb_erase_init(&pos->rb_node, root); 690 - symbols__insert( 691 - &curr_map->dso->symbols[curr_map->type], 692 - pos); 693 - ++moved; 694 - } else { 695 - ++count; 696 - } 678 + continue; 697 679 } 680 + 681 + pos->start -= curr_map->start - curr_map->pgoff; 682 + if (pos->end) 683 + pos->end -= curr_map->start - curr_map->pgoff; 684 + symbols__insert(&curr_map->dso->symbols[curr_map->type], pos); 685 + ++count; 698 686 } 699 687 700 688 /* Symbols have been adjusted */ 701 689 dso->adjust_symbols = 1; 702 690 703 - return count + moved; 691 + return count; 704 692 } 705 693 706 694 /* ··· 1436 1438 if (lstat(dso->name, &st) < 0) 1437 1439 goto out; 1438 1440 1439 - if (st.st_uid && (st.st_uid != geteuid())) { 1441 + if (!symbol_conf.force && st.st_uid && (st.st_uid != geteuid())) { 1440 1442 pr_warning("File %s not owned by current user or root, " 1441 - "ignoring it.\n", dso->name); 1443 + "ignoring it (use -f to override).\n", dso->name); 1442 1444 goto out; 1443 1445 } 1444 1446
+1
tools/perf/util/symbol.h
··· 84 84 unsigned short priv_size; 85 85 unsigned short nr_events; 86 86 bool try_vmlinux_path, 87 + force, 87 88 ignore_vmlinux, 88 89 ignore_vmlinux_buildid, 89 90 show_kernel_path,
+4 -4
tools/power/x86/turbostat/turbostat.c
··· 1173 1173 unsigned long long msr; 1174 1174 unsigned int ratio; 1175 1175 1176 - get_msr(base_cpu, MSR_NHM_PLATFORM_INFO, &msr); 1176 + get_msr(base_cpu, MSR_PLATFORM_INFO, &msr); 1177 1177 1178 - fprintf(stderr, "cpu%d: MSR_NHM_PLATFORM_INFO: 0x%08llx\n", base_cpu, msr); 1178 + fprintf(stderr, "cpu%d: MSR_PLATFORM_INFO: 0x%08llx\n", base_cpu, msr); 1179 1179 1180 1180 ratio = (msr >> 40) & 0xFF; 1181 1181 fprintf(stderr, "%d * %.0f = %.0f MHz max efficiency frequency\n", ··· 1807 1807 * 1808 1808 * MSR_SMI_COUNT 0x00000034 1809 1809 * 1810 - * MSR_NHM_PLATFORM_INFO 0x000000ce 1810 + * MSR_PLATFORM_INFO 0x000000ce 1811 1811 * MSR_NHM_SNB_PKG_CST_CFG_CTL 0x000000e2 1812 1812 * 1813 1813 * MSR_PKG_C3_RESIDENCY 0x000003f8 ··· 1876 1876 get_msr(base_cpu, MSR_NHM_SNB_PKG_CST_CFG_CTL, &msr); 1877 1877 pkg_cstate_limit = pkg_cstate_limits[msr & 0xF]; 1878 1878 1879 - get_msr(base_cpu, MSR_NHM_PLATFORM_INFO, &msr); 1879 + get_msr(base_cpu, MSR_PLATFORM_INFO, &msr); 1880 1880 base_ratio = (msr >> 8) & 0xFF; 1881 1881 1882 1882 base_hz = base_ratio * bclk * 1000000;
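
turbostat fetches MSR_PLATFORM_INFO (0xce) via get_msr(); outside the tool, the same register is readable through the msr driver's /dev/cpu/N/msr interface, where the file offset selects the register. A minimal reader (assumes root and a loaded msr module), extracting the base ratio exactly as the hunk above does:

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    #define MSR_PLATFORM_INFO 0xce

    int main(void)
    {
        uint64_t msr;
        int fd = open("/dev/cpu/0/msr", O_RDONLY);

        if (fd < 0) {
            perror("open /dev/cpu/0/msr");
            return 1;
        }
        /* The MSR address is passed as the file offset. */
        if (pread(fd, &msr, sizeof(msr), MSR_PLATFORM_INFO) != sizeof(msr)) {
            perror("pread");
            close(fd);
            return 1;
        }
        printf("MSR_PLATFORM_INFO: 0x%016llx (base ratio %llu)\n",
               (unsigned long long)msr,
               (unsigned long long)((msr >> 8) & 0xff));
        close(fd);
        return 0;
    }
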
+1 -1
tools/testing/selftests/futex/README
··· 27 27 o Where possible, any helper functions or other package-wide code shall be 28 28 implemented in header files, avoiding the need to compile intermediate object 29 29 files. 30 - o External dependendencies shall remain as minimal as possible. Currently gcc 30 + o External dependencies shall remain as minimal as possible. Currently gcc 31 31 and glibc are the only dependencies. 32 32 o Tests return 0 for success and < 0 for failure. 33 33
+7 -4
tools/testing/selftests/seccomp/seccomp_bpf.c
··· 492 492 pid_t parent = getppid(); 493 493 int fd; 494 494 void *map1, *map2; 495 + int page_size = sysconf(_SC_PAGESIZE); 496 + 497 + ASSERT_LT(0, page_size); 495 498 496 499 ret = prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0); 497 500 ASSERT_EQ(0, ret); ··· 507 504 508 505 EXPECT_EQ(parent, syscall(__NR_getppid)); 509 506 map1 = (void *)syscall(sysno, 510 - NULL, PAGE_SIZE, PROT_READ, MAP_PRIVATE, fd, PAGE_SIZE); 507 + NULL, page_size, PROT_READ, MAP_PRIVATE, fd, page_size); 511 508 EXPECT_NE(MAP_FAILED, map1); 512 509 /* mmap2() should never return. */ 513 510 map2 = (void *)syscall(sysno, 514 - NULL, PAGE_SIZE, PROT_READ, MAP_PRIVATE, fd, 0x0C0FFEE); 511 + NULL, page_size, PROT_READ, MAP_PRIVATE, fd, 0x0C0FFEE); 515 512 EXPECT_EQ(MAP_FAILED, map2); 516 513 517 514 /* The test failed, so clean up the resources. */ 518 - munmap(map1, PAGE_SIZE); 519 - munmap(map2, PAGE_SIZE); 515 + munmap(map1, page_size); 516 + munmap(map2, page_size); 520 517 close(fd); 521 518 } 522 519
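
The selftest now asks the kernel for the page size at runtime instead of relying on a PAGE_SIZE macro that userspace headers do not define on every architecture. The query itself is a one-liner:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        long page_size = sysconf(_SC_PAGESIZE);

        if (page_size < 0) {
            perror("sysconf");
            return 1;
        }
        printf("page size: %ld bytes\n", page_size);
        return 0;
    }
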
+1
tools/vm/page-types.c
··· 128 128 [KPF_THP] = "t:thp", 129 129 [KPF_BALLOON] = "o:balloon", 130 130 [KPF_ZERO_PAGE] = "z:zero_page", 131 + [KPF_IDLE] = "i:idle_page", 131 132 132 133 [KPF_RESERVED] = "r:reserved", 133 134 [KPF_MLOCKED] = "m:mlocked",
+17 -11
virt/kvm/arm/arch_timer.c
··· 221 221 kvm_timer_update_state(vcpu); 222 222 223 223 /* 224 - * If we enter the guest with the virtual input level to the VGIC 225 - * asserted, then we have already told the VGIC what we need to, and 226 - * we don't need to exit from the guest until the guest deactivates 227 - * the already injected interrupt, so therefore we should set the 228 - * hardware active state to prevent unnecessary exits from the guest. 229 - * 230 - * Conversely, if the virtual input level is deasserted, then always 231 - * clear the hardware active state to ensure that hardware interrupts 232 - * from the timer triggers a guest exit. 233 - */ 234 - if (timer->irq.level) 224 + * If we enter the guest with the virtual input level to the VGIC 225 + * asserted, then we have already told the VGIC what we need to, and 226 + * we don't need to exit from the guest until the guest deactivates 227 + * the already injected interrupt, so therefore we should set the 228 + * hardware active state to prevent unnecessary exits from the guest. 229 + * 230 + * Also, if we enter the guest with the virtual timer interrupt active, 231 + * then it must be active on the physical distributor, because we set 232 + * the HW bit and the guest must be able to deactivate the virtual and 233 + * physical interrupt at the same time. 234 + * 235 + * Conversely, if the virtual input level is deasserted and the virtual 236 + * interrupt is not active, then always clear the hardware active state 237 + * to ensure that hardware interrupts from the timer triggers a guest 238 + * exit. 239 + */ 240 + if (timer->irq.level || kvm_vgic_map_is_active(vcpu, timer->map)) 235 241 phys_active = true; 236 242 else 237 243 phys_active = false;
+24 -26
virt/kvm/arm/vgic.c
··· 1096 1096 vgic_set_lr(vcpu, lr_nr, vlr); 1097 1097 } 1098 1098 1099 + static bool dist_active_irq(struct kvm_vcpu *vcpu) 1100 + { 1101 + struct vgic_dist *dist = &vcpu->kvm->arch.vgic; 1102 + 1103 + return test_bit(vcpu->vcpu_id, dist->irq_active_on_cpu); 1104 + } 1105 + 1106 + bool kvm_vgic_map_is_active(struct kvm_vcpu *vcpu, struct irq_phys_map *map) 1107 + { 1108 + int i; 1109 + 1110 + for (i = 0; i < vcpu->arch.vgic_cpu.nr_lr; i++) { 1111 + struct vgic_lr vlr = vgic_get_lr(vcpu, i); 1112 + 1113 + if (vlr.irq == map->virt_irq && vlr.state & LR_STATE_ACTIVE) 1114 + return true; 1115 + } 1116 + 1117 + return dist_active_irq(vcpu); 1118 + } 1119 + 1099 1120 /* 1100 1121 * An interrupt may have been disabled after being made pending on the 1101 1122 * CPU interface (the classic case is a timer running while we're ··· 1269 1248 * may have been serviced from another vcpu. In all cases, 1270 1249 * move along. 1271 1250 */ 1272 - if (!kvm_vgic_vcpu_pending_irq(vcpu) && !kvm_vgic_vcpu_active_irq(vcpu)) 1251 + if (!kvm_vgic_vcpu_pending_irq(vcpu) && !dist_active_irq(vcpu)) 1273 1252 goto epilog; 1274 1253 1275 1254 /* SGIs */ ··· 1417 1396 static bool vgic_sync_hwirq(struct kvm_vcpu *vcpu, int lr, struct vgic_lr vlr) 1418 1397 { 1419 1398 struct vgic_dist *dist = &vcpu->kvm->arch.vgic; 1420 - struct irq_phys_map *map; 1421 - bool phys_active; 1422 1399 bool level_pending; 1423 - int ret; 1424 1400 1425 1401 if (!(vlr.state & LR_HW)) 1426 1402 return false; 1427 1403 1428 - map = vgic_irq_map_search(vcpu, vlr.irq); 1429 - BUG_ON(!map); 1430 - 1431 - ret = irq_get_irqchip_state(map->irq, 1432 - IRQCHIP_STATE_ACTIVE, 1433 - &phys_active); 1434 - 1435 - WARN_ON(ret); 1436 - 1437 - if (phys_active) 1438 - return 0; 1404 + if (vlr.state & LR_STATE_ACTIVE) 1405 + return false; 1439 1406 1440 1407 spin_lock(&dist->lock); 1441 1408 level_pending = process_queued_irq(vcpu, lr, vlr); ··· 1487 1478 1488 1479 return test_bit(vcpu->vcpu_id, dist->irq_pending_on_cpu); 1489 1480 } 1490 - 1491 - int kvm_vgic_vcpu_active_irq(struct kvm_vcpu *vcpu) 1492 - { 1493 - struct vgic_dist *dist = &vcpu->kvm->arch.vgic; 1494 - 1495 - if (!irqchip_in_kernel(vcpu->kvm)) 1496 - return 0; 1497 - 1498 - return test_bit(vcpu->vcpu_id, dist->irq_active_on_cpu); 1499 - } 1500 - 1501 1481 1502 1482 void vgic_kick_vcpus(struct kvm *kvm) 1503 1483 {