Linux kernel mirror (for testing): git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

The MSCC bug fix in 'net' had to be slightly adjusted because the
register accesses are done differently in net-next.

Signed-off-by: David S. Miller <davem@davemloft.net>

+3508 -1711
+2
.mailmap
··· 288 288 Vladimir Davydov <vdavydov.dev@gmail.com> <vdavydov@parallels.com> 289 289 Takashi YOSHII <takashi.yoshii.zj@renesas.com> 290 290 Will Deacon <will@kernel.org> <will.deacon@arm.com> 291 + Wolfram Sang <wsa@kernel.org> <wsa@the-dreams.de> 292 + Wolfram Sang <wsa@kernel.org> <w.sang@pengutronix.de> 291 293 Yakir Yang <kuankuan.y@gmail.com> <ykk@rock-chips.com> 292 294 Yusuke Goda <goda.yusuke@renesas.com> 293 295 Gustavo Padovan <gustavo@las.ic.unicamp.br>
+2 -1
Documentation/devicetree/bindings/dma/fsl-edma.txt
··· 10 10 - compatible : 11 11 - "fsl,vf610-edma" for eDMA used similar to that on Vybrid vf610 SoC 12 12 - "fsl,imx7ulp-edma" for eDMA2 used similar to that on i.mx7ulp 13 - - "fsl,fsl,ls1028a-edma" for eDMA used similar to that on Vybrid vf610 SoC 13 + - "fsl,ls1028a-edma" followed by "fsl,vf610-edma" for eDMA used on the 14 + LS1028A SoC. 14 15 - reg : Specifies base physical address(s) and size of the eDMA registers. 15 16 The 1st region is eDMA control register's address and size. 16 17 The 2nd and the 3rd regions are programmable channel multiplexing
+3
Documentation/devicetree/bindings/net/dsa/b53.txt
··· 110 110 #size-cells = <0>; 111 111 112 112 ports { 113 + #address-cells = <1>; 114 + #size-cells = <0>; 115 + 113 116 port0@0 { 114 117 reg = <0>; 115 118 label = "lan1";
+30 -7
Documentation/usb/raw-gadget.rst
··· 27 27 3. Raw Gadget provides a way to select a UDC device/driver to bind to, 28 28 while GadgetFS currently binds to the first available UDC. 29 29 30 - 4. Raw Gadget uses predictable endpoint names (handles) across different 31 - UDCs (as long as UDCs have enough endpoints of each required transfer 32 - type). 30 + 4. Raw Gadget explicitly exposes information about endpoint addresses and 31 + capabilities, allowing a user to write UDC-agnostic gadgets. 33 32 34 33 5. Raw Gadget has an ioctl-based interface instead of a filesystem-based one. 35 34 ··· 49 50 Raw Gadget and react to those depending on what kind of USB device 50 51 needs to be emulated. 51 52 53 + Note that some UDC drivers have fixed addresses assigned to endpoints, and 54 + therefore arbitrary endpoint addresses can't be used in the descriptors. 55 + Nevertheless, Raw Gadget provides a UDC-agnostic way to write USB gadgets. 56 + Once a USB_RAW_EVENT_CONNECT event is received via USB_RAW_IOCTL_EVENT_FETCH, 57 + the USB_RAW_IOCTL_EPS_INFO ioctl can be used to find out information about 58 + endpoints that the UDC driver has. Based on that information, the user must 59 + choose UDC endpoints that will be used for the gadget being emulated, and 60 + properly assign addresses in endpoint descriptors. 61 + 62 + You can find usage examples (along with a test suite) here: 63 + 64 + https://github.com/xairy/raw-gadget 65 + 66 + Internal details 67 + ~~~~~~~~~~~~~~~~ 68 + 69 + Currently every endpoint read/write ioctl submits a USB request and waits until 70 + its completion. This is the desired mode for coverage-guided fuzzing (as we'd 71 + like all USB request processing to happen during the lifetime of a syscall), 72 + and must be kept in the implementation. (This might be slow for real-world 73 + applications, thus the O_NONBLOCK improvement suggestion below.) 74 + 52 75 Potential future improvements 53 76 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 54 77 55 - - Implement ioctl's for setting/clearing halt status on endpoints. 56 - 57 - - Reporting more events (suspend, resume, etc.) through 58 - USB_RAW_IOCTL_EVENT_FETCH. 78 + - Report more events (suspend, resume, etc.) through USB_RAW_IOCTL_EVENT_FETCH. 59 79 60 80 - Support O_NONBLOCK I/O. 81 + 82 + - Support USB 3 features (accept SS endpoint companion descriptor when 83 + enabling endpoints; allow providing stream_id for bulk transfers). 84 + 85 + - Support ISO transfer features (expose frame_number for completed requests).
+16 -4
MAINTAINERS
··· 5507 5507 5508 5508 DRM DRIVER FOR VMWARE VIRTUAL GPU 5509 5509 M: "VMware Graphics" <linux-graphics-maintainer@vmware.com> 5510 - M: Thomas Hellstrom <thellstrom@vmware.com> 5510 + M: Roland Scheidegger <sroland@vmware.com> 5511 5511 L: dri-devel@lists.freedesktop.org 5512 5512 S: Supported 5513 - T: git git://people.freedesktop.org/~thomash/linux 5513 + T: git git://people.freedesktop.org/~sroland/linux 5514 5514 F: drivers/gpu/drm/vmwgfx/ 5515 5515 F: include/uapi/drm/vmwgfx_drm.h 5516 5516 ··· 7829 7829 F: drivers/media/platform/sti/hva 7830 7830 7831 7831 HWPOISON MEMORY FAILURE HANDLING 7832 - M: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> 7832 + M: Naoya Horiguchi <naoya.horiguchi@nec.com> 7833 7833 L: linux-mm@kvack.org 7834 7834 S: Maintained 7835 7835 F: mm/hwpoison-inject.c ··· 7941 7941 F: drivers/i2c/busses/i2c-parport.c 7942 7942 7943 7943 I2C SUBSYSTEM 7944 - M: Wolfram Sang <wsa@the-dreams.de> 7944 + M: Wolfram Sang <wsa@kernel.org> 7945 7945 L: linux-i2c@vger.kernel.org 7946 7946 S: Maintained 7947 7947 W: https://i2c.wiki.kernel.org/ ··· 9185 9185 S: Maintained 9186 9186 W: http://lse.sourceforge.net/kdump/ 9187 9187 F: Documentation/admin-guide/kdump/ 9188 + F: fs/proc/vmcore.c 9189 + F: include/linux/crash_core.h 9190 + F: include/linux/crash_dump.h 9191 + F: include/uapi/linux/vmcore.h 9192 + F: kernel/crash_*.c 9188 9193 9189 9194 KEENE FM RADIO TRANSMITTER DRIVER 9190 9195 M: Hans Verkuil <hverkuil@xs4all.nl> ··· 10666 10661 L: netdev@vger.kernel.org 10667 10662 S: Maintained 10668 10663 F: drivers/net/ethernet/mediatek/ 10664 + 10665 + MEDIATEK I2C CONTROLLER DRIVER 10666 + M: Qii Wang <qii.wang@mediatek.com> 10667 + L: linux-i2c@vger.kernel.org 10668 + S: Maintained 10669 + F: Documentation/devicetree/bindings/i2c/i2c-mt65xx.txt 10670 + F: drivers/i2c/busses/i2c-mt65xx.c 10669 10671 10670 10672 MEDIATEK JPEG DRIVER 10671 10673 M: Rick Chang <rick.chang@mediatek.com>
+1 -1
Makefile
··· 2 2 VERSION = 5 3 3 PATCHLEVEL = 7 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc5 5 + EXTRAVERSION = -rc6 6 6 NAME = Kleptomaniac Octopus 7 7 8 8 # *DOCUMENTATION*
+1
arch/arc/configs/hsdk_defconfig
··· 65 65 CONFIG_DRM_ETNAVIV=y 66 66 CONFIG_FB=y 67 67 CONFIG_FRAMEBUFFER_CONSOLE=y 68 + CONFIG_USB=y 68 69 CONFIG_USB_EHCI_HCD=y 69 70 CONFIG_USB_EHCI_HCD_PLATFORM=y 70 71 CONFIG_USB_OHCI_HCD=y
+2
arch/arc/include/asm/dsp-impl.h
··· 15 15 16 16 /* clobbers r5 register */ 17 17 .macro DSP_EARLY_INIT 18 + #ifdef CONFIG_ISA_ARCV2 18 19 lr r5, [ARC_AUX_DSP_BUILD] 19 20 bmsk r5, r5, 7 20 21 breq r5, 0, 1f 21 22 mov r5, DSP_CTRL_DISABLED_ALL 22 23 sr r5, [ARC_AUX_DSP_CTRL] 23 24 1: 25 + #endif 24 26 .endm 25 27 26 28 /* clobbers r10, r11 registers pair */
+2
arch/arc/include/asm/entry-arcv2.h
··· 233 233 234 234 #ifdef CONFIG_ARC_IRQ_NO_AUTOSAVE 235 235 __RESTORE_REGFILE_HARD 236 + 237 + ; SP points to PC/STAT32: hw restores them despite NO_AUTOSAVE 236 238 add sp, sp, SZ_PT_REGS - 8 237 239 #else 238 240 add sp, sp, PT_r0
-3
arch/arc/kernel/Makefile
··· 3 3 # Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com) 4 4 # 5 5 6 - # Pass UTS_MACHINE for user_regset definition 7 - CFLAGS_ptrace.o += -DUTS_MACHINE='"$(UTS_MACHINE)"' 8 - 9 6 obj-y := arcksyms.o setup.o irq.o reset.o ptrace.o process.o devtree.o 10 7 obj-y += signal.o traps.o sys.o troubleshoot.o stacktrace.o disasm.o 11 8 obj-$(CONFIG_ISA_ARCOMPACT) += entry-compact.o intc-compact.o
+1 -1
arch/arc/kernel/ptrace.c
··· 253 253 }; 254 254 255 255 static const struct user_regset_view user_arc_view = { 256 - .name = UTS_MACHINE, 256 + .name = "arc", 257 257 .e_machine = EM_ARC_INUSE, 258 258 .regsets = arc_regsets, 259 259 .n = ARRAY_SIZE(arc_regsets)
+3 -2
arch/arc/kernel/setup.c
··· 11 11 #include <linux/clocksource.h> 12 12 #include <linux/console.h> 13 13 #include <linux/module.h> 14 + #include <linux/sizes.h> 14 15 #include <linux/cpu.h> 15 16 #include <linux/of_clk.h> 16 17 #include <linux/of_fdt.h> ··· 425 424 if ((unsigned int)__arc_dccm_base != cpu->dccm.base_addr) 426 425 panic("Linux built with incorrect DCCM Base address\n"); 427 426 428 - if (CONFIG_ARC_DCCM_SZ != cpu->dccm.sz) 427 + if (CONFIG_ARC_DCCM_SZ * SZ_1K != cpu->dccm.sz) 429 428 panic("Linux built with incorrect DCCM Size\n"); 430 429 #endif 431 430 432 431 #ifdef CONFIG_ARC_HAS_ICCM 433 - if (CONFIG_ARC_ICCM_SZ != cpu->iccm.sz) 432 + if (CONFIG_ARC_ICCM_SZ * SZ_1K != cpu->iccm.sz) 434 433 panic("Linux built with incorrect ICCM Size\n"); 435 434 #endif 436 435
+6 -8
arch/arc/kernel/troubleshoot.c
··· 191 191 if (user_mode(regs)) 192 192 show_faulting_vma(regs->ret); /* faulting code, not data */ 193 193 194 - pr_info("ECR: 0x%08lx EFA: 0x%08lx ERET: 0x%08lx\n", 195 - regs->event, current->thread.fault_address, regs->ret); 196 - 197 - pr_info("STAT32: 0x%08lx", regs->status32); 194 + pr_info("ECR: 0x%08lx EFA: 0x%08lx ERET: 0x%08lx\nSTAT: 0x%08lx", 195 + regs->event, current->thread.fault_address, regs->ret, 196 + regs->status32); 198 197 199 198 #define STS_BIT(r, bit) r->status32 & STATUS_##bit##_MASK ? #bit" " : "" 200 199 ··· 209 210 (regs->status32 & STATUS_U_MASK) ? "U " : "K ", 210 211 STS_BIT(regs, DE), STS_BIT(regs, AE)); 211 212 #endif 212 - pr_cont(" BTA: 0x%08lx\n", regs->bta); 213 - pr_info("BLK: %pS\n SP: 0x%08lx FP: 0x%08lx\n", 214 - (void *)regs->blink, regs->sp, regs->fp); 213 + pr_cont(" BTA: 0x%08lx\n SP: 0x%08lx FP: 0x%08lx BLK: %pS\n", 214 + regs->bta, regs->sp, regs->fp, (void *)regs->blink); 215 215 pr_info("LPS: 0x%08lx\tLPE: 0x%08lx\tLPC: 0x%08lx\n", 216 - regs->lp_start, regs->lp_end, regs->lp_count); 216 + regs->lp_start, regs->lp_end, regs->lp_count); 217 217 218 218 /* print regs->r0 thru regs->r12 219 219 * Sequential printing was generating horrible code
-2
arch/arc/kernel/unwind.c
··· 1178 1178 #endif 1179 1179 1180 1180 /* update frame */ 1181 - #ifndef CONFIG_AS_CFI_SIGNAL_FRAME 1182 1181 if (frame->call_frame 1183 1182 && !UNW_DEFAULT_RA(state.regs[retAddrReg], state.dataAlign)) 1184 1183 frame->call_frame = 0; 1185 - #endif 1186 1184 cfa = FRAME_REG(state.cfa.reg, unsigned long) + state.cfa.offs; 1187 1185 startLoc = min_t(unsigned long, UNW_SP(frame), cfa); 1188 1186 endLoc = max_t(unsigned long, UNW_SP(frame), cfa);
+1
arch/arc/plat-eznps/Kconfig
··· 6 6 7 7 menuconfig ARC_PLAT_EZNPS 8 8 bool "\"EZchip\" ARC dev platform" 9 + depends on ISA_ARCOMPACT 9 10 select CPU_BIG_ENDIAN 10 11 select CLKSRC_NPS if !PHYS_ADDR_T_64BIT 11 12 select EZNPS_GIC
+4
arch/arm/boot/dts/am574x-idk.dts
··· 40 40 status = "okay"; 41 41 dual_emac; 42 42 }; 43 + 44 + &m_can0 { 45 + status = "disabled"; 46 + };
+2 -2
arch/arm/boot/dts/dra7.dtsi
··· 172 172 #address-cells = <1>; 173 173 ranges = <0x51000000 0x51000000 0x3000 174 174 0x0 0x20000000 0x10000000>; 175 + dma-ranges; 175 176 /** 176 177 * To enable PCI endpoint mode, disable the pcie1_rc 177 178 * node and enable pcie1_ep mode. ··· 186 185 device_type = "pci"; 187 186 ranges = <0x81000000 0 0 0x03000 0 0x00010000 188 187 0x82000000 0 0x20013000 0x13000 0 0xffed000>; 189 - dma-ranges = <0x02000000 0x0 0x00000000 0x00000000 0x1 0x00000000>; 190 188 bus-range = <0x00 0xff>; 191 189 #interrupt-cells = <1>; 192 190 num-lanes = <1>; ··· 230 230 #address-cells = <1>; 231 231 ranges = <0x51800000 0x51800000 0x3000 232 232 0x0 0x30000000 0x10000000>; 233 + dma-ranges; 233 234 status = "disabled"; 234 235 pcie2_rc: pcie@51800000 { 235 236 reg = <0x51800000 0x2000>, <0x51802000 0x14c>, <0x1000 0x2000>; ··· 241 240 device_type = "pci"; 242 241 ranges = <0x81000000 0 0 0x03000 0 0x00010000 243 242 0x82000000 0 0x30013000 0x13000 0 0xffed000>; 244 - dma-ranges = <0x02000000 0x0 0x00000000 0x00000000 0x1 0x00000000>; 245 243 bus-range = <0x00 0xff>; 246 244 #interrupt-cells = <1>; 247 245 num-lanes = <1>;
+2 -2
arch/arm/boot/dts/imx27-phytec-phycard-s-rdk.dts
··· 75 75 imx27-phycard-s-rdk { 76 76 pinctrl_i2c1: i2c1grp { 77 77 fsl,pins = < 78 - MX27_PAD_I2C2_SDA__I2C2_SDA 0x0 79 - MX27_PAD_I2C2_SCL__I2C2_SCL 0x0 78 + MX27_PAD_I2C_DATA__I2C_DATA 0x0 79 + MX27_PAD_I2C_CLK__I2C_CLK 0x0 80 80 >; 81 81 }; 82 82
+1 -1
arch/arm/boot/dts/imx6dl-yapp4-ursa.dts
··· 38 38 }; 39 39 40 40 &switch_ports { 41 - /delete-node/ port@2; 41 + /delete-node/ port@3; 42 42 }; 43 43 44 44 &touchscreen {
-2
arch/arm/boot/dts/iwg20d-q7-dbcm-ca.dtsi
··· 72 72 adi,input-depth = <8>; 73 73 adi,input-colorspace = "rgb"; 74 74 adi,input-clock = "1x"; 75 - adi,input-style = <1>; 76 - adi,input-justification = "evenly"; 77 75 78 76 ports { 79 77 #address-cells = <1>;
+40 -3
arch/arm/boot/dts/motorola-mapphone-common.dtsi
··· 367 367 }; 368 368 369 369 &mmc3 { 370 + pinctrl-names = "default"; 371 + pinctrl-0 = <&mmc3_pins>; 370 372 vmmc-supply = <&wl12xx_vmmc>; 371 373 /* uart2_tx.sdmmc3_dat1 pad as wakeirq */ 372 374 interrupts-extended = <&wakeupgen GIC_SPI 94 IRQ_TYPE_LEVEL_HIGH ··· 471 469 OMAP4_IOPAD(0x09a, PIN_INPUT | MUX_MODE0) 472 470 OMAP4_IOPAD(0x09c, PIN_INPUT | MUX_MODE0) 473 471 OMAP4_IOPAD(0x09e, PIN_INPUT | MUX_MODE0) 472 + >; 473 + }; 474 + 475 + /* 476 + * Android uses PIN_OFF_INPUT_PULLDOWN | PIN_INPUT_PULLUP | MUX_MODE3 477 + * for gpio_100, but the internal pull makes wlan flakey on some 478 + * devices. Off mode value should be tested if we have off mode working 479 + * later on. 480 + */ 481 + mmc3_pins: pinmux_mmc3_pins { 482 + pinctrl-single,pins = < 483 + /* 0x4a10008e gpmc_wait2.gpio_100 d23 */ 484 + OMAP4_IOPAD(0x08e, PIN_INPUT | MUX_MODE3) 485 + 486 + /* 0x4a100102 abe_mcbsp1_dx.sdmmc3_dat2 ab25 */ 487 + OMAP4_IOPAD(0x102, PIN_INPUT_PULLUP | MUX_MODE1) 488 + 489 + /* 0x4a100104 abe_mcbsp1_fsx.sdmmc3_dat3 ac27 */ 490 + OMAP4_IOPAD(0x104, PIN_INPUT_PULLUP | MUX_MODE1) 491 + 492 + /* 0x4a100118 uart2_cts.sdmmc3_clk ab26 */ 493 + OMAP4_IOPAD(0x118, PIN_INPUT | MUX_MODE1) 494 + 495 + /* 0x4a10011a uart2_rts.sdmmc3_cmd ab27 */ 496 + OMAP4_IOPAD(0x11a, PIN_INPUT_PULLUP | MUX_MODE1) 497 + 498 + /* 0x4a10011c uart2_rx.sdmmc3_dat0 aa25 */ 499 + OMAP4_IOPAD(0x11c, PIN_INPUT_PULLUP | MUX_MODE1) 500 + 501 + /* 0x4a10011e uart2_tx.sdmmc3_dat1 aa26 */ 502 + OMAP4_IOPAD(0x11e, PIN_INPUT_PULLUP | MUX_MODE1) 474 503 >; 475 504 }; 476 505 ··· 723 690 }; 724 691 725 692 /* 726 - * As uart1 is wired to mdm6600 with rts and cts, we can use the cts pin for 727 - * uart1 wakeirq. 693 + * The uart1 port is wired to mdm6600 with rts and cts. The modem uses gpio_149 694 + for wake-up events for both the USB PHY and the UART. We can use gpio_149 695 + pad as the shared wakeirq for the UART rather than the RX or CTS pad as we 696 + have gpio_149 trigger before the UART transfer starts. 728 697 */ 729 698 &uart1 { 730 699 pinctrl-names = "default"; 731 700 pinctrl-0 = <&uart1_pins>; 732 701 interrupts-extended = <&wakeupgen GIC_SPI 72 IRQ_TYPE_LEVEL_HIGH 733 - &omap4_pmx_core 0xfc>; 702 + &omap4_pmx_core 0x110>; 703 + uart-has-rtscts; 704 + current-speed = <115200>; 734 705 }; 735 706 736 707 &uart3 {
-3
arch/arm/boot/dts/r7s9210.dtsi
··· 304 304 reg = <0xe803b000 0x30>; 305 305 interrupts = <GIC_SPI 56 IRQ_TYPE_EDGE_RISING>; 306 306 clocks = <&cpg CPG_MOD 36>; 307 - clock-names = "ostm0"; 308 307 power-domains = <&cpg>; 309 308 status = "disabled"; 310 309 }; ··· 313 314 reg = <0xe803c000 0x30>; 314 315 interrupts = <GIC_SPI 57 IRQ_TYPE_EDGE_RISING>; 315 316 clocks = <&cpg CPG_MOD 35>; 316 - clock-names = "ostm1"; 317 317 power-domains = <&cpg>; 318 318 status = "disabled"; 319 319 }; ··· 322 324 reg = <0xe803d000 0x30>; 323 325 interrupts = <GIC_SPI 58 IRQ_TYPE_EDGE_RISING>; 324 326 clocks = <&cpg CPG_MOD 34>; 325 - clock-names = "ostm2"; 326 327 power-domains = <&cpg>; 327 328 status = "disabled"; 328 329 };
+8 -1
arch/arm/boot/dts/r8a73a4.dtsi
··· 131 131 cmt1: timer@e6130000 { 132 132 compatible = "renesas,r8a73a4-cmt1", "renesas,rcar-gen2-cmt1"; 133 133 reg = <0 0xe6130000 0 0x1004>; 134 - interrupts = <GIC_SPI 120 IRQ_TYPE_LEVEL_HIGH>; 134 + interrupts = <GIC_SPI 120 IRQ_TYPE_LEVEL_HIGH>, 135 + <GIC_SPI 121 IRQ_TYPE_LEVEL_HIGH>, 136 + <GIC_SPI 122 IRQ_TYPE_LEVEL_HIGH>, 137 + <GIC_SPI 123 IRQ_TYPE_LEVEL_HIGH>, 138 + <GIC_SPI 124 IRQ_TYPE_LEVEL_HIGH>, 139 + <GIC_SPI 125 IRQ_TYPE_LEVEL_HIGH>, 140 + <GIC_SPI 126 IRQ_TYPE_LEVEL_HIGH>, 141 + <GIC_SPI 127 IRQ_TYPE_LEVEL_HIGH>; 135 142 clocks = <&mstp3_clks R8A73A4_CLK_CMT1>; 136 143 clock-names = "fck"; 137 144 power-domains = <&pd_c5>;
+1 -1
arch/arm/boot/dts/r8a7740.dtsi
··· 479 479 cpg_clocks: cpg_clocks@e6150000 { 480 480 compatible = "renesas,r8a7740-cpg-clocks"; 481 481 reg = <0xe6150000 0x10000>; 482 - clocks = <&extal1_clk>, <&extalr_clk>; 482 + clocks = <&extal1_clk>, <&extal2_clk>, <&extalr_clk>; 483 483 #clock-cells = <1>; 484 484 clock-output-names = "system", "pllc0", "pllc1", 485 485 "pllc2", "r",
-2
arch/arm/boot/dts/r8a7745-iwg22d-sodimm-dbhd-ca.dts
··· 84 84 adi,input-depth = <8>; 85 85 adi,input-colorspace = "rgb"; 86 86 adi,input-clock = "1x"; 87 - adi,input-style = <1>; 88 - adi,input-justification = "evenly"; 89 87 90 88 ports { 91 89 #address-cells = <1>;
-2
arch/arm/boot/dts/r8a7790-lager.dts
··· 364 364 adi,input-depth = <8>; 365 365 adi,input-colorspace = "rgb"; 366 366 adi,input-clock = "1x"; 367 - adi,input-style = <1>; 368 - adi,input-justification = "evenly"; 369 367 370 368 ports { 371 369 #address-cells = <1>;
-2
arch/arm/boot/dts/r8a7790-stout.dts
··· 297 297 adi,input-depth = <8>; 298 298 adi,input-colorspace = "rgb"; 299 299 adi,input-clock = "1x"; 300 - adi,input-style = <1>; 301 - adi,input-justification = "evenly"; 302 300 303 301 ports { 304 302 #address-cells = <1>;
-2
arch/arm/boot/dts/r8a7791-koelsch.dts
··· 387 387 adi,input-depth = <8>; 388 388 adi,input-colorspace = "rgb"; 389 389 adi,input-clock = "1x"; 390 - adi,input-style = <1>; 391 - adi,input-justification = "evenly"; 392 390 393 391 ports { 394 392 #address-cells = <1>;
-2
arch/arm/boot/dts/r8a7791-porter.dts
··· 181 181 adi,input-depth = <8>; 182 182 adi,input-colorspace = "rgb"; 183 183 adi,input-clock = "1x"; 184 - adi,input-style = <1>; 185 - adi,input-justification = "evenly"; 186 184 187 185 ports { 188 186 #address-cells = <1>;
-2
arch/arm/boot/dts/r8a7792-blanche.dts
··· 289 289 adi,input-depth = <8>; 290 290 adi,input-colorspace = "rgb"; 291 291 adi,input-clock = "1x"; 292 - adi,input-style = <1>; 293 - adi,input-justification = "evenly"; 294 292 295 293 ports { 296 294 #address-cells = <1>;
+4 -8
arch/arm/boot/dts/r8a7792-wheat.dts
··· 249 249 */ 250 250 hdmi@3d { 251 251 compatible = "adi,adv7513"; 252 - reg = <0x3d>, <0x2d>, <0x4d>, <0x5d>; 253 - reg-names = "main", "cec", "edid", "packet"; 252 + reg = <0x3d>, <0x4d>, <0x2d>, <0x5d>; 253 + reg-names = "main", "edid", "cec", "packet"; 254 254 255 255 adi,input-depth = <8>; 256 256 adi,input-colorspace = "rgb"; 257 257 adi,input-clock = "1x"; 258 - adi,input-style = <1>; 259 - adi,input-justification = "evenly"; 260 258 261 259 ports { 262 260 #address-cells = <1>; ··· 278 280 279 281 hdmi@39 { 280 282 compatible = "adi,adv7513"; 281 - reg = <0x39>, <0x29>, <0x49>, <0x59>; 282 - reg-names = "main", "cec", "edid", "packet"; 283 + reg = <0x39>, <0x49>, <0x29>, <0x59>; 284 + reg-names = "main", "edid", "cec", "packet"; 283 285 284 286 adi,input-depth = <8>; 285 287 adi,input-colorspace = "rgb"; 286 288 adi,input-clock = "1x"; 287 - adi,input-style = <1>; 288 - adi,input-justification = "evenly"; 289 289 290 290 ports { 291 291 #address-cells = <1>;
-2
arch/arm/boot/dts/r8a7793-gose.dts
··· 366 366 adi,input-depth = <8>; 367 367 adi,input-colorspace = "rgb"; 368 368 adi,input-clock = "1x"; 369 - adi,input-style = <1>; 370 - adi,input-justification = "evenly"; 371 369 372 370 ports { 373 371 #address-cells = <1>;
-2
arch/arm/boot/dts/r8a7794-silk.dts
··· 255 255 adi,input-depth = <8>; 256 256 adi,input-colorspace = "rgb"; 257 257 adi,input-clock = "1x"; 258 - adi,input-style = <1>; 259 - adi,input-justification = "evenly"; 260 258 261 259 ports { 262 260 #address-cells = <1>;
+1 -1
arch/arm/boot/dts/rk3036.dtsi
··· 128 128 assigned-clocks = <&cru SCLK_GPU>; 129 129 assigned-clock-rates = <100000000>; 130 130 clocks = <&cru SCLK_GPU>, <&cru SCLK_GPU>; 131 - clock-names = "core", "bus"; 131 + clock-names = "bus", "core"; 132 132 resets = <&cru SRST_GPU>; 133 133 status = "disabled"; 134 134 };
+1 -1
arch/arm/boot/dts/rk3228-evb.dts
··· 46 46 #address-cells = <1>; 47 47 #size-cells = <0>; 48 48 49 - phy: phy@0 { 49 + phy: ethernet-phy@0 { 50 50 compatible = "ethernet-phy-id1234.d400", "ethernet-phy-ieee802.3-c22"; 51 51 reg = <0>; 52 52 clocks = <&cru SCLK_MAC_PHY>;
+1 -1
arch/arm/boot/dts/rk3229-xms6.dts
··· 150 150 #address-cells = <1>; 151 151 #size-cells = <0>; 152 152 153 - phy: phy@0 { 153 + phy: ethernet-phy@0 { 154 154 compatible = "ethernet-phy-id1234.d400", 155 155 "ethernet-phy-ieee802.3-c22"; 156 156 reg = <0>;
+3 -3
arch/arm/boot/dts/rk322x.dtsi
··· 555 555 "pp1", 556 556 "ppmmu1"; 557 557 clocks = <&cru ACLK_GPU>, <&cru ACLK_GPU>; 558 - clock-names = "core", "bus"; 558 + clock-names = "bus", "core"; 559 559 resets = <&cru SRST_GPU_A>; 560 560 status = "disabled"; 561 561 }; ··· 1020 1020 }; 1021 1021 }; 1022 1022 1023 - spi-0 { 1023 + spi0 { 1024 1024 spi0_clk: spi0-clk { 1025 1025 rockchip,pins = <0 RK_PB1 2 &pcfg_pull_up>; 1026 1026 }; ··· 1038 1038 }; 1039 1039 }; 1040 1040 1041 - spi-1 { 1041 + spi1 { 1042 1042 spi1_clk: spi1-clk { 1043 1043 rockchip,pins = <0 RK_PC7 2 &pcfg_pull_up>; 1044 1044 };
+1 -1
arch/arm/boot/dts/rk3xxx.dtsi
··· 84 84 compatible = "arm,mali-400"; 85 85 reg = <0x10090000 0x10000>; 86 86 clocks = <&cru ACLK_GPU>, <&cru ACLK_GPU>; 87 - clock-names = "core", "bus"; 87 + clock-names = "bus", "core"; 88 88 assigned-clocks = <&cru ACLK_GPU>; 89 89 assigned-clock-rates = <100000000>; 90 90 resets = <&cru SRST_GPU>;
+2 -1
arch/arm/mach-oxnas/platsmp.c
··· 27 27 #define GIC_CPU_CTRL 0x00 28 28 #define GIC_CPU_CTRL_ENABLE 1 29 29 30 - int __init ox820_boot_secondary(unsigned int cpu, struct task_struct *idle) 30 + static int __init ox820_boot_secondary(unsigned int cpu, 31 + struct task_struct *idle) 31 32 { 32 33 /* 33 34 * Write the address of secondary startup into the
+1 -1
arch/arm64/boot/dts/allwinner/sun50i-a64-pinetab.dts
··· 98 98 }; 99 99 100 100 &codec_analog { 101 - hpvcc-supply = <&reg_eldo1>; 101 + cpvdd-supply = <&reg_eldo1>; 102 102 status = "okay"; 103 103 }; 104 104
-18
arch/arm64/boot/dts/allwinner/sun50i-a64.dtsi
··· 154 154 }; 155 155 }; 156 156 157 - sound_spdif { 158 - compatible = "simple-audio-card"; 159 - simple-audio-card,name = "On-board SPDIF"; 160 - 161 - simple-audio-card,cpu { 162 - sound-dai = <&spdif>; 163 - }; 164 - 165 - simple-audio-card,codec { 166 - sound-dai = <&spdif_out>; 167 - }; 168 - }; 169 - 170 - spdif_out: spdif-out { 171 - #sound-dai-cells = <0>; 172 - compatible = "linux,spdif-dit"; 173 - }; 174 - 175 157 timer { 176 158 compatible = "arm,armv8-timer"; 177 159 allwinner,erratum-unknown1;
+1 -1
arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
··· 2319 2319 reg = <0x0 0xff400000 0x0 0x40000>; 2320 2320 interrupts = <GIC_SPI 31 IRQ_TYPE_LEVEL_HIGH>; 2321 2321 clocks = <&clkc CLKID_USB1_DDR_BRIDGE>; 2322 - clock-names = "ddr"; 2322 + clock-names = "otg"; 2323 2323 phys = <&usb2_phy1>; 2324 2324 phy-names = "usb2-phy"; 2325 2325 dr_mode = "peripheral";
-1
arch/arm64/boot/dts/amlogic/meson-g12.dtsi
··· 1 - 2 1 // SPDX-License-Identifier: (GPL-2.0+ OR MIT) 3 2 /* 4 3 * Copyright (c) 2019 BayLibre, SAS
+4
arch/arm64/boot/dts/amlogic/meson-g12b-khadas-vim3.dtsi
··· 154 154 clock-latency = <50000>; 155 155 }; 156 156 157 + &frddr_a { 158 + status = "okay"; 159 + }; 160 + 157 161 &frddr_b { 158 162 status = "okay"; 159 163 };
+1 -1
arch/arm64/boot/dts/amlogic/meson-g12b-ugoos-am6.dts
··· 545 545 &usb { 546 546 status = "okay"; 547 547 dr_mode = "host"; 548 - vbus-regulator = <&usb_pwr_en>; 548 + vbus-supply = <&usb_pwr_en>; 549 549 }; 550 550 551 551 &usb2_phy0 {
+1 -1
arch/arm64/boot/dts/freescale/fsl-ls1028a.dtsi
··· 447 447 448 448 edma0: dma-controller@22c0000 { 449 449 #dma-cells = <2>; 450 - compatible = "fsl,ls1028a-edma"; 450 + compatible = "fsl,ls1028a-edma", "fsl,vf610-edma"; 451 451 reg = <0x0 0x22c0000 0x0 0x10000>, 452 452 <0x0 0x22d0000 0x0 0x10000>, 453 453 <0x0 0x22e0000 0x0 0x10000>;
+4 -4
arch/arm64/boot/dts/freescale/imx8mm.dtsi
··· 264 264 265 265 aips1: bus@30000000 { 266 266 compatible = "fsl,aips-bus", "simple-bus"; 267 - reg = <0x301f0000 0x10000>; 267 + reg = <0x30000000 0x400000>; 268 268 #address-cells = <1>; 269 269 #size-cells = <1>; 270 270 ranges = <0x30000000 0x30000000 0x400000>; ··· 543 543 544 544 aips2: bus@30400000 { 545 545 compatible = "fsl,aips-bus", "simple-bus"; 546 - reg = <0x305f0000 0x10000>; 546 + reg = <0x30400000 0x400000>; 547 547 #address-cells = <1>; 548 548 #size-cells = <1>; 549 549 ranges = <0x30400000 0x30400000 0x400000>; ··· 603 603 604 604 aips3: bus@30800000 { 605 605 compatible = "fsl,aips-bus", "simple-bus"; 606 - reg = <0x309f0000 0x10000>; 606 + reg = <0x30800000 0x400000>; 607 607 #address-cells = <1>; 608 608 #size-cells = <1>; 609 609 ranges = <0x30800000 0x30800000 0x400000>, ··· 863 863 864 864 aips4: bus@32c00000 { 865 865 compatible = "fsl,aips-bus", "simple-bus"; 866 - reg = <0x32df0000 0x10000>; 866 + reg = <0x32c00000 0x400000>; 867 867 #address-cells = <1>; 868 868 #size-cells = <1>; 869 869 ranges = <0x32c00000 0x32c00000 0x400000>;
+5 -5
arch/arm64/boot/dts/freescale/imx8mn.dtsi
··· 241 241 242 242 aips1: bus@30000000 { 243 243 compatible = "fsl,aips-bus", "simple-bus"; 244 - reg = <0x301f0000 0x10000>; 244 + reg = <0x30000000 0x400000>; 245 245 #address-cells = <1>; 246 246 #size-cells = <1>; 247 247 ranges; ··· 448 448 449 449 aips2: bus@30400000 { 450 450 compatible = "fsl,aips-bus", "simple-bus"; 451 - reg = <0x305f0000 0x10000>; 451 + reg = <0x30400000 0x400000>; 452 452 #address-cells = <1>; 453 453 #size-cells = <1>; 454 454 ranges; ··· 508 508 509 509 aips3: bus@30800000 { 510 510 compatible = "fsl,aips-bus", "simple-bus"; 511 - reg = <0x309f0000 0x10000>; 511 + reg = <0x30800000 0x400000>; 512 512 #address-cells = <1>; 513 513 #size-cells = <1>; 514 514 ranges; ··· 718 718 reg = <0x30bd0000 0x10000>; 719 719 interrupts = <GIC_SPI 2 IRQ_TYPE_LEVEL_HIGH>; 720 720 clocks = <&clk IMX8MN_CLK_SDMA1_ROOT>, 721 - <&clk IMX8MN_CLK_SDMA1_ROOT>; 721 + <&clk IMX8MN_CLK_AHB>; 722 722 clock-names = "ipg", "ahb"; 723 723 #dma-cells = <3>; 724 724 fsl,sdma-ram-script-name = "imx/sdma/sdma-imx7d.bin"; ··· 754 754 755 755 aips4: bus@32c00000 { 756 756 compatible = "fsl,aips-bus", "simple-bus"; 757 - reg = <0x32df0000 0x10000>; 757 + reg = <0x32c00000 0x400000>; 758 758 #address-cells = <1>; 759 759 #size-cells = <1>; 760 760 ranges;
+23 -23
arch/arm64/boot/dts/freescale/imx8mp-pinfunc.h
··· 151 151 #define MX8MP_IOMUXC_ENET_TXC__SIM_M_HADDR22 0x070 0x2D0 0x000 0x7 0x0 152 152 #define MX8MP_IOMUXC_ENET_RX_CTL__ENET_QOS_RGMII_RX_CTL 0x074 0x2D4 0x000 0x0 0x0 153 153 #define MX8MP_IOMUXC_ENET_RX_CTL__AUDIOMIX_SAI7_TX_SYNC 0x074 0x2D4 0x540 0x2 0x0 154 - #define MX8MP_IOMUXC_ENET_RX_CTL__AUDIOMIX_BIT_STREAM03 0x074 0x2D4 0x4CC 0x3 0x0 154 + #define MX8MP_IOMUXC_ENET_RX_CTL__AUDIOMIX_BIT_STREAM03 0x074 0x2D4 0x4CC 0x3 0x1 155 155 #define MX8MP_IOMUXC_ENET_RX_CTL__GPIO1_IO24 0x074 0x2D4 0x000 0x5 0x0 156 156 #define MX8MP_IOMUXC_ENET_RX_CTL__USDHC3_DATA2 0x074 0x2D4 0x618 0x6 0x0 157 157 #define MX8MP_IOMUXC_ENET_RX_CTL__SIM_M_HADDR23 0x074 0x2D4 0x000 0x7 0x0 158 158 #define MX8MP_IOMUXC_ENET_RXC__CCM_ENET_QOS_CLOCK_GENERATE_RX_CLK 0x078 0x2D8 0x000 0x0 0x0 159 159 #define MX8MP_IOMUXC_ENET_RXC__ENET_QOS_RX_ER 0x078 0x2D8 0x000 0x1 0x0 160 160 #define MX8MP_IOMUXC_ENET_RXC__AUDIOMIX_SAI7_TX_BCLK 0x078 0x2D8 0x53C 0x2 0x0 161 - #define MX8MP_IOMUXC_ENET_RXC__AUDIOMIX_BIT_STREAM02 0x078 0x2D8 0x4C8 0x3 0x0 161 + #define MX8MP_IOMUXC_ENET_RXC__AUDIOMIX_BIT_STREAM02 0x078 0x2D8 0x4C8 0x3 0x1 162 162 #define MX8MP_IOMUXC_ENET_RXC__GPIO1_IO25 0x078 0x2D8 0x000 0x5 0x0 163 163 #define MX8MP_IOMUXC_ENET_RXC__USDHC3_DATA3 0x078 0x2D8 0x61C 0x6 0x0 164 164 #define MX8MP_IOMUXC_ENET_RXC__SIM_M_HADDR24 0x078 0x2D8 0x000 0x7 0x0 165 165 #define MX8MP_IOMUXC_ENET_RD0__ENET_QOS_RGMII_RD0 0x07C 0x2DC 0x000 0x0 0x0 166 166 #define MX8MP_IOMUXC_ENET_RD0__AUDIOMIX_SAI7_RX_DATA00 0x07C 0x2DC 0x534 0x2 0x0 167 - #define MX8MP_IOMUXC_ENET_RD0__AUDIOMIX_BIT_STREAM01 0x07C 0x2DC 0x4C4 0x3 0x0 167 + #define MX8MP_IOMUXC_ENET_RD0__AUDIOMIX_BIT_STREAM01 0x07C 0x2DC 0x4C4 0x3 0x1 168 168 #define MX8MP_IOMUXC_ENET_RD0__GPIO1_IO26 0x07C 0x2DC 0x000 0x5 0x0 169 169 #define MX8MP_IOMUXC_ENET_RD0__USDHC3_DATA4 0x07C 0x2DC 0x620 0x6 0x0 170 170 #define MX8MP_IOMUXC_ENET_RD0__SIM_M_HADDR25 0x07C 0x2DC 0x000 0x7 0x0 171 171 #define MX8MP_IOMUXC_ENET_RD1__ENET_QOS_RGMII_RD1 0x080 0x2E0 
0x000 0x0 0x0
172 172 #define MX8MP_IOMUXC_ENET_RD1__AUDIOMIX_SAI7_RX_SYNC 0x080 0x2E0 0x538 0x2 0x0
173 - #define MX8MP_IOMUXC_ENET_RD1__AUDIOMIX_BIT_STREAM00 0x080 0x2E0 0x4C0 0x3 0x0
173 + #define MX8MP_IOMUXC_ENET_RD1__AUDIOMIX_BIT_STREAM00 0x080 0x2E0 0x4C0 0x3 0x1
174 174 #define MX8MP_IOMUXC_ENET_RD1__GPIO1_IO27 0x080 0x2E0 0x000 0x5 0x0
175 175 #define MX8MP_IOMUXC_ENET_RD1__USDHC3_RESET_B 0x080 0x2E0 0x000 0x6 0x0
176 176 #define MX8MP_IOMUXC_ENET_RD1__SIM_M_HADDR26 0x080 0x2E0 0x000 0x7 0x0
···
291 291 #define MX8MP_IOMUXC_SD2_DATA0__I2C4_SDA 0x0C8 0x328 0x5C0 0x2 0x1
292 292 #define MX8MP_IOMUXC_SD2_DATA0__UART2_DCE_RX 0x0C8 0x328 0x5F0 0x3 0x2
293 293 #define MX8MP_IOMUXC_SD2_DATA0__UART2_DTE_TX 0x0C8 0x328 0x000 0x3 0x0
294 - #define MX8MP_IOMUXC_SD2_DATA0__AUDIOMIX_BIT_STREAM00 0x0C8 0x328 0x4C0 0x4 0x1
294 + #define MX8MP_IOMUXC_SD2_DATA0__AUDIOMIX_BIT_STREAM00 0x0C8 0x328 0x4C0 0x4 0x2
295 295 #define MX8MP_IOMUXC_SD2_DATA0__GPIO2_IO15 0x0C8 0x328 0x000 0x5 0x0
296 296 #define MX8MP_IOMUXC_SD2_DATA0__CCMSRCGPCMIX_OBSERVE2 0x0C8 0x328 0x000 0x6 0x0
297 297 #define MX8MP_IOMUXC_SD2_DATA0__OBSERVE_MUX_OUT02 0x0C8 0x328 0x000 0x7 0x0
···
313 313 #define MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3 0x0D4 0x334 0x000 0x0 0x0
314 314 #define MX8MP_IOMUXC_SD2_DATA3__ECSPI2_MISO 0x0D4 0x334 0x56C 0x2 0x0
315 315 #define MX8MP_IOMUXC_SD2_DATA3__AUDIOMIX_SPDIF_IN 0x0D4 0x334 0x544 0x3 0x1
316 - #define MX8MP_IOMUXC_SD2_DATA3__AUDIOMIX_BIT_STREAM03 0x0D4 0x334 0x4CC 0x4 0x1
316 + #define MX8MP_IOMUXC_SD2_DATA3__AUDIOMIX_BIT_STREAM03 0x0D4 0x334 0x4CC 0x4 0x2
317 317 #define MX8MP_IOMUXC_SD2_DATA3__GPIO2_IO18 0x0D4 0x334 0x000 0x5 0x0
318 318 #define MX8MP_IOMUXC_SD2_DATA3__CCMSRCGPCMIX_EARLY_RESET 0x0D4 0x334 0x000 0x6 0x0
319 319 #define MX8MP_IOMUXC_SD2_RESET_B__USDHC2_RESET_B 0x0D8 0x338 0x000 0x0 0x0
···
487 487 #define MX8MP_IOMUXC_SAI5_RXD0__AUDIOMIX_SAI1_TX_DATA02 0x134 0x394 0x000 0x1 0x0
488 488 #define MX8MP_IOMUXC_SAI5_RXD0__PWM2_OUT 0x134 0x394 0x000 0x2 0x0
489 489 #define MX8MP_IOMUXC_SAI5_RXD0__I2C5_SCL 0x134 0x394 0x5C4 0x3 0x1
490 - #define MX8MP_IOMUXC_SAI5_RXD0__AUDIOMIX_BIT_STREAM00 0x134 0x394 0x4C0 0x4 0x2
490 + #define MX8MP_IOMUXC_SAI5_RXD0__AUDIOMIX_BIT_STREAM00 0x134 0x394 0x4C0 0x4 0x3
491 491 #define MX8MP_IOMUXC_SAI5_RXD0__GPIO3_IO21 0x134 0x394 0x000 0x5 0x0
492 492 #define MX8MP_IOMUXC_SAI5_RXD1__AUDIOMIX_SAI5_RX_DATA01 0x138 0x398 0x4FC 0x0 0x0
493 493 #define MX8MP_IOMUXC_SAI5_RXD1__AUDIOMIX_SAI1_TX_DATA03 0x138 0x398 0x000 0x1 0x0
494 494 #define MX8MP_IOMUXC_SAI5_RXD1__AUDIOMIX_SAI1_TX_SYNC 0x138 0x398 0x4D8 0x2 0x0
495 495 #define MX8MP_IOMUXC_SAI5_RXD1__AUDIOMIX_SAI5_TX_SYNC 0x138 0x398 0x510 0x3 0x0
496 - #define MX8MP_IOMUXC_SAI5_RXD1__AUDIOMIX_BIT_STREAM01 0x138 0x398 0x4C4 0x4 0x2
496 + #define MX8MP_IOMUXC_SAI5_RXD1__AUDIOMIX_BIT_STREAM01 0x138 0x398 0x4C4 0x4 0x3
497 497 #define MX8MP_IOMUXC_SAI5_RXD1__GPIO3_IO22 0x138 0x398 0x000 0x5 0x0
498 498 #define MX8MP_IOMUXC_SAI5_RXD1__CAN1_TX 0x138 0x398 0x000 0x6 0x0
499 499 #define MX8MP_IOMUXC_SAI5_RXD2__AUDIOMIX_SAI5_RX_DATA02 0x13C 0x39C 0x500 0x0 0x0
500 500 #define MX8MP_IOMUXC_SAI5_RXD2__AUDIOMIX_SAI1_TX_DATA04 0x13C 0x39C 0x000 0x1 0x0
501 501 #define MX8MP_IOMUXC_SAI5_RXD2__AUDIOMIX_SAI1_TX_SYNC 0x13C 0x39C 0x4D8 0x2 0x1
502 502 #define MX8MP_IOMUXC_SAI5_RXD2__AUDIOMIX_SAI5_TX_BCLK 0x13C 0x39C 0x50C 0x3 0x0
503 - #define MX8MP_IOMUXC_SAI5_RXD2__AUDIOMIX_BIT_STREAM02 0x13C 0x39C 0x4C8 0x4 0x2
503 + #define MX8MP_IOMUXC_SAI5_RXD2__AUDIOMIX_BIT_STREAM02 0x13C 0x39C 0x4C8 0x4 0x3
504 504 #define MX8MP_IOMUXC_SAI5_RXD2__GPIO3_IO23 0x13C 0x39C 0x000 0x5 0x0
505 505 #define MX8MP_IOMUXC_SAI5_RXD2__CAN1_RX 0x13C 0x39C 0x54C 0x6 0x0
506 506 #define MX8MP_IOMUXC_SAI5_RXD3__AUDIOMIX_SAI5_RX_DATA03 0x140 0x3A0 0x504 0x0 0x0
507 507 #define MX8MP_IOMUXC_SAI5_RXD3__AUDIOMIX_SAI1_TX_DATA05 0x140 0x3A0 0x000 0x1 0x0
508 508 #define MX8MP_IOMUXC_SAI5_RXD3__AUDIOMIX_SAI1_TX_SYNC 0x140 0x3A0 0x4D8 0x2 0x2
509 509 #define MX8MP_IOMUXC_SAI5_RXD3__AUDIOMIX_SAI5_TX_DATA00 0x140 0x3A0 0x000 0x3 0x0
510 - #define MX8MP_IOMUXC_SAI5_RXD3__AUDIOMIX_BIT_STREAM03 0x140 0x3A0 0x4CC 0x4 0x2
510 + #define MX8MP_IOMUXC_SAI5_RXD3__AUDIOMIX_BIT_STREAM03 0x140 0x3A0 0x4CC 0x4 0x3
511 511 #define MX8MP_IOMUXC_SAI5_RXD3__GPIO3_IO24 0x140 0x3A0 0x000 0x5 0x0
512 512 #define MX8MP_IOMUXC_SAI5_RXD3__CAN2_TX 0x140 0x3A0 0x000 0x6 0x0
513 513 #define MX8MP_IOMUXC_SAI5_MCLK__AUDIOMIX_SAI5_MCLK 0x144 0x3A4 0x4F0 0x0 0x0
···
528 528 #define MX8MP_IOMUXC_SAI1_RXD0__AUDIOMIX_SAI1_RX_DATA00 0x150 0x3B0 0x000 0x0 0x0
529 529 #define MX8MP_IOMUXC_SAI1_RXD0__AUDIOMIX_SAI5_RX_DATA00 0x150 0x3B0 0x4F8 0x1 0x1
530 530 #define MX8MP_IOMUXC_SAI1_RXD0__AUDIOMIX_SAI1_TX_DATA01 0x150 0x3B0 0x000 0x2 0x0
531 - #define MX8MP_IOMUXC_SAI1_RXD0__AUDIOMIX_BIT_STREAM00 0x150 0x3B0 0x4C0 0x3 0x3
531 + #define MX8MP_IOMUXC_SAI1_RXD0__AUDIOMIX_BIT_STREAM00 0x150 0x3B0 0x4C0 0x3 0x4
532 532 #define MX8MP_IOMUXC_SAI1_RXD0__ENET1_1588_EVENT1_IN 0x150 0x3B0 0x000 0x4 0x0
533 533 #define MX8MP_IOMUXC_SAI1_RXD0__GPIO4_IO02 0x150 0x3B0 0x000 0x5 0x0
534 534 #define MX8MP_IOMUXC_SAI1_RXD1__AUDIOMIX_SAI1_RX_DATA01 0x154 0x3B4 0x000 0x0 0x0
535 535 #define MX8MP_IOMUXC_SAI1_RXD1__AUDIOMIX_SAI5_RX_DATA01 0x154 0x3B4 0x4FC 0x1 0x1
536 - #define MX8MP_IOMUXC_SAI1_RXD1__AUDIOMIX_BIT_STREAM01 0x154 0x3B4 0x4C4 0x3 0x3
536 + #define MX8MP_IOMUXC_SAI1_RXD1__AUDIOMIX_BIT_STREAM01 0x154 0x3B4 0x4C4 0x3 0x4
537 537 #define MX8MP_IOMUXC_SAI1_RXD1__ENET1_1588_EVENT1_OUT 0x154 0x3B4 0x000 0x4 0x0
538 538 #define MX8MP_IOMUXC_SAI1_RXD1__GPIO4_IO03 0x154 0x3B4 0x000 0x5 0x0
539 539 #define MX8MP_IOMUXC_SAI1_RXD2__AUDIOMIX_SAI1_RX_DATA02 0x158 0x3B8 0x000 0x0 0x0
540 540 #define MX8MP_IOMUXC_SAI1_RXD2__AUDIOMIX_SAI5_RX_DATA02 0x158 0x3B8 0x500 0x1 0x1
541 - #define MX8MP_IOMUXC_SAI1_RXD2__AUDIOMIX_BIT_STREAM02 0x158 0x3B8 0x4C8 0x3 0x3
541 + #define MX8MP_IOMUXC_SAI1_RXD2__AUDIOMIX_BIT_STREAM02 0x158 0x3B8 0x4C8 0x3 0x4
542 542 #define MX8MP_IOMUXC_SAI1_RXD2__ENET1_MDC 0x158 0x3B8 0x000 0x4 0x0
543 543 #define MX8MP_IOMUXC_SAI1_RXD2__GPIO4_IO04 0x158 0x3B8 0x000 0x5 0x0
544 544 #define MX8MP_IOMUXC_SAI1_RXD3__AUDIOMIX_SAI1_RX_DATA03 0x15C 0x3BC 0x000 0x0 0x0
545 545 #define MX8MP_IOMUXC_SAI1_RXD3__AUDIOMIX_SAI5_RX_DATA03 0x15C 0x3BC 0x504 0x1 0x1
546 - #define MX8MP_IOMUXC_SAI1_RXD3__AUDIOMIX_BIT_STREAM03 0x15C 0x3BC 0x4CC 0x3 0x3
546 + #define MX8MP_IOMUXC_SAI1_RXD3__AUDIOMIX_BIT_STREAM03 0x15C 0x3BC 0x4CC 0x3 0x4
547 547 #define MX8MP_IOMUXC_SAI1_RXD3__ENET1_MDIO 0x15C 0x3BC 0x57C 0x4 0x1
548 548 #define MX8MP_IOMUXC_SAI1_RXD3__GPIO4_IO05 0x15C 0x3BC 0x000 0x5 0x0
549 549 #define MX8MP_IOMUXC_SAI1_RXD4__AUDIOMIX_SAI1_RX_DATA04 0x160 0x3C0 0x000 0x0 0x0
···
624 624 #define MX8MP_IOMUXC_SAI2_RXFS__UART1_DCE_TX 0x19C 0x3FC 0x000 0x4 0x0
625 625 #define MX8MP_IOMUXC_SAI2_RXFS__UART1_DTE_RX 0x19C 0x3FC 0x5E8 0x4 0x2
626 626 #define MX8MP_IOMUXC_SAI2_RXFS__GPIO4_IO21 0x19C 0x3FC 0x000 0x5 0x0
627 - #define MX8MP_IOMUXC_SAI2_RXFS__AUDIOMIX_BIT_STREAM02 0x19C 0x3FC 0x4C8 0x6 0x4
627 + #define MX8MP_IOMUXC_SAI2_RXFS__AUDIOMIX_BIT_STREAM02 0x19C 0x3FC 0x4C8 0x6 0x5
628 628 #define MX8MP_IOMUXC_SAI2_RXFS__SIM_M_HSIZE00 0x19C 0x3FC 0x000 0x7 0x0
629 629 #define MX8MP_IOMUXC_SAI2_RXC__AUDIOMIX_SAI2_RX_BCLK 0x1A0 0x400 0x000 0x0 0x0
630 630 #define MX8MP_IOMUXC_SAI2_RXC__AUDIOMIX_SAI5_TX_BCLK 0x1A0 0x400 0x50C 0x1 0x2
···
632 632 #define MX8MP_IOMUXC_SAI2_RXC__UART1_DCE_RX 0x1A0 0x400 0x5E8 0x4 0x3
633 633 #define MX8MP_IOMUXC_SAI2_RXC__UART1_DTE_TX 0x1A0 0x400 0x000 0x4 0x0
634 634 #define MX8MP_IOMUXC_SAI2_RXC__GPIO4_IO22 0x1A0 0x400 0x000 0x5 0x0
635 - #define MX8MP_IOMUXC_SAI2_RXC__AUDIOMIX_BIT_STREAM01 0x1A0 0x400 0x4C4 0x6 0x4
635 + #define MX8MP_IOMUXC_SAI2_RXC__AUDIOMIX_BIT_STREAM01 0x1A0 0x400 0x4C4 0x6 0x5
636 636 #define MX8MP_IOMUXC_SAI2_RXC__SIM_M_HSIZE01 0x1A0 0x400 0x000 0x7 0x0
637 637 #define MX8MP_IOMUXC_SAI2_RXD0__AUDIOMIX_SAI2_RX_DATA00 0x1A4 0x404 0x000 0x0 0x0
638 638 #define MX8MP_IOMUXC_SAI2_RXD0__AUDIOMIX_SAI5_TX_DATA00 0x1A4 0x404 0x000 0x1 0x0
···
641 641 #define MX8MP_IOMUXC_SAI2_RXD0__UART1_DCE_RTS 0x1A4 0x404 0x5E4 0x4 0x2
642 642 #define MX8MP_IOMUXC_SAI2_RXD0__UART1_DTE_CTS 0x1A4 0x404 0x000 0x4 0x0
643 643 #define MX8MP_IOMUXC_SAI2_RXD0__GPIO4_IO23 0x1A4 0x404 0x000 0x5 0x0
644 - #define MX8MP_IOMUXC_SAI2_RXD0__AUDIOMIX_BIT_STREAM03 0x1A4 0x404 0x4CC 0x6 0x4
644 + #define MX8MP_IOMUXC_SAI2_RXD0__AUDIOMIX_BIT_STREAM03 0x1A4 0x404 0x4CC 0x6 0x5
645 645 #define MX8MP_IOMUXC_SAI2_RXD0__SIM_M_HSIZE02 0x1A4 0x404 0x000 0x7 0x0
646 646 #define MX8MP_IOMUXC_SAI2_TXFS__AUDIOMIX_SAI2_TX_SYNC 0x1A8 0x408 0x000 0x0 0x0
647 647 #define MX8MP_IOMUXC_SAI2_TXFS__AUDIOMIX_SAI5_TX_DATA01 0x1A8 0x408 0x000 0x1 0x0
···
650 650 #define MX8MP_IOMUXC_SAI2_TXFS__UART1_DCE_CTS 0x1A8 0x408 0x000 0x4 0x0
651 651 #define MX8MP_IOMUXC_SAI2_TXFS__UART1_DTE_RTS 0x1A8 0x408 0x5E4 0x4 0x3
652 652 #define MX8MP_IOMUXC_SAI2_TXFS__GPIO4_IO24 0x1A8 0x408 0x000 0x5 0x0
653 - #define MX8MP_IOMUXC_SAI2_TXFS__AUDIOMIX_BIT_STREAM02 0x1A8 0x408 0x4C8 0x6 0x5
653 + #define MX8MP_IOMUXC_SAI2_TXFS__AUDIOMIX_BIT_STREAM02 0x1A8 0x408 0x4C8 0x6 0x6
654 654 #define MX8MP_IOMUXC_SAI2_TXFS__SIM_M_HWRITE 0x1A8 0x408 0x000 0x7 0x0
655 655 #define MX8MP_IOMUXC_SAI2_TXC__AUDIOMIX_SAI2_TX_BCLK 0x1AC 0x40C 0x000 0x0 0x0
656 656 #define MX8MP_IOMUXC_SAI2_TXC__AUDIOMIX_SAI5_TX_DATA02 0x1AC 0x40C 0x000 0x1 0x0
657 657 #define MX8MP_IOMUXC_SAI2_TXC__CAN1_RX 0x1AC 0x40C 0x54C 0x3 0x1
658 658 #define MX8MP_IOMUXC_SAI2_TXC__GPIO4_IO25 0x1AC 0x40C 0x000 0x5 0x0
659 - #define MX8MP_IOMUXC_SAI2_TXC__AUDIOMIX_BIT_STREAM01 0x1AC 0x40C 0x4C4 0x6 0x5
659 + #define MX8MP_IOMUXC_SAI2_TXC__AUDIOMIX_BIT_STREAM01 0x1AC 0x40C 0x4C4 0x6 0x6
660 660 #define MX8MP_IOMUXC_SAI2_TXC__SIM_M_HREADYOUT 0x1AC 0x40C 0x000 0x7 0x0
661 661 #define MX8MP_IOMUXC_SAI2_TXD0__AUDIOMIX_SAI2_TX_DATA00 0x1B0 0x410 0x000 0x0 0x0
662 662 #define MX8MP_IOMUXC_SAI2_TXD0__AUDIOMIX_SAI5_TX_DATA03 0x1B0 0x410 0x000 0x1 0x0
···
680 680 #define MX8MP_IOMUXC_SAI3_RXFS__AUDIOMIX_SAI3_RX_DATA01 0x1B8 0x418 0x000 0x3 0x0
681 681 #define MX8MP_IOMUXC_SAI3_RXFS__AUDIOMIX_SPDIF_IN 0x1B8 0x418 0x544 0x4 0x2
682 682 #define MX8MP_IOMUXC_SAI3_RXFS__GPIO4_IO28 0x1B8 0x418 0x000 0x5 0x0
683 - #define MX8MP_IOMUXC_SAI3_RXFS__AUDIOMIX_BIT_STREAM00 0x1B8 0x418 0x4C0 0x6 0x4
683 + #define MX8MP_IOMUXC_SAI3_RXFS__AUDIOMIX_BIT_STREAM00 0x1B8 0x418 0x4C0 0x6 0x5
684 684 #define MX8MP_IOMUXC_SAI3_RXFS__TPSMP_HTRANS00 0x1B8 0x418 0x000 0x7 0x0
685 685 #define MX8MP_IOMUXC_SAI3_RXC__AUDIOMIX_SAI3_RX_BCLK 0x1BC 0x41C 0x000 0x0 0x0
686 686 #define MX8MP_IOMUXC_SAI3_RXC__AUDIOMIX_SAI2_RX_DATA02 0x1BC 0x41C 0x000 0x1 0x0
···
697 697 #define MX8MP_IOMUXC_SAI3_RXD__UART2_DCE_RTS 0x1C0 0x420 0x5EC 0x4 0x3
698 698 #define MX8MP_IOMUXC_SAI3_RXD__UART2_DTE_CTS 0x1C0 0x420 0x000 0x4 0x0
699 699 #define MX8MP_IOMUXC_SAI3_RXD__GPIO4_IO30 0x1C0 0x420 0x000 0x5 0x0
700 - #define MX8MP_IOMUXC_SAI3_RXD__AUDIOMIX_BIT_STREAM01 0x1C0 0x420 0x4C4 0x6 0x6
700 + #define MX8MP_IOMUXC_SAI3_RXD__AUDIOMIX_BIT_STREAM01 0x1C0 0x420 0x4C4 0x6 0x7
701 701 #define MX8MP_IOMUXC_SAI3_RXD__TPSMP_HDATA00 0x1C0 0x420 0x000 0x7 0x0
702 702 #define MX8MP_IOMUXC_SAI3_TXFS__AUDIOMIX_SAI3_TX_SYNC 0x1C4 0x424 0x4EC 0x0 0x1
703 703 #define MX8MP_IOMUXC_SAI3_TXFS__AUDIOMIX_SAI2_TX_DATA01 0x1C4 0x424 0x000 0x1 0x0
···
706 706 #define MX8MP_IOMUXC_SAI3_TXFS__UART2_DCE_RX 0x1C4 0x424 0x5F0 0x4 0x4
707 707 #define MX8MP_IOMUXC_SAI3_TXFS__UART2_DTE_TX 0x1C4 0x424 0x000 0x4 0x0
708 708 #define MX8MP_IOMUXC_SAI3_TXFS__GPIO4_IO31 0x1C4 0x424 0x000 0x5 0x0
709 - #define MX8MP_IOMUXC_SAI3_TXFS__AUDIOMIX_BIT_STREAM03 0x1C4 0x424 0x4CC 0x6 0x5
709 + #define MX8MP_IOMUXC_SAI3_TXFS__AUDIOMIX_BIT_STREAM03 0x1C4 0x424 0x4CC 0x6 0x6
710 710 #define MX8MP_IOMUXC_SAI3_TXFS__TPSMP_HDATA01 0x1C4 0x424 0x000 0x7 0x0
711 711 #define MX8MP_IOMUXC_SAI3_TXC__AUDIOMIX_SAI3_TX_BCLK 0x1C8 0x428 0x4E8 0x0 0x1
712 712 #define MX8MP_IOMUXC_SAI3_TXC__AUDIOMIX_SAI2_TX_DATA02 0x1C8 0x428 0x000 0x1 0x0
···
715 715 #define MX8MP_IOMUXC_SAI3_TXC__UART2_DCE_TX 0x1C8 0x428 0x000 0x4 0x0
716 716 #define MX8MP_IOMUXC_SAI3_TXC__UART2_DTE_RX 0x1C8 0x428 0x5F0 0x4 0x5
717 717 #define MX8MP_IOMUXC_SAI3_TXC__GPIO5_IO00 0x1C8 0x428 0x000 0x5 0x0
718 - #define MX8MP_IOMUXC_SAI3_TXC__AUDIOMIX_BIT_STREAM02 0x1C8 0x428 0x4C8 0x6 0x6
718 + #define MX8MP_IOMUXC_SAI3_TXC__AUDIOMIX_BIT_STREAM02 0x1C8 0x428 0x4C8 0x6 0x7
719 719 #define MX8MP_IOMUXC_SAI3_TXC__TPSMP_HDATA02 0x1C8 0x428 0x000 0x7 0x0
720 720 #define MX8MP_IOMUXC_SAI3_TXD__AUDIOMIX_SAI3_TX_DATA00 0x1CC 0x42C 0x000 0x0 0x0
721 721 #define MX8MP_IOMUXC_SAI3_TXD__AUDIOMIX_SAI2_TX_DATA03 0x1CC 0x42C 0x000 0x1 0x0
+3 -3
arch/arm64/boot/dts/freescale/imx8mp.dtsi
···
145 145
146 146 aips1: bus@30000000 {
147 147 compatible = "fsl,aips-bus", "simple-bus";
148 - reg = <0x301f0000 0x10000>;
148 + reg = <0x30000000 0x400000>;
149 149 #address-cells = <1>;
150 150 #size-cells = <1>;
151 151 ranges;
···
318 318
319 319 aips2: bus@30400000 {
320 320 compatible = "fsl,aips-bus", "simple-bus";
321 - reg = <0x305f0000 0x400000>;
321 + reg = <0x30400000 0x400000>;
322 322 #address-cells = <1>;
323 323 #size-cells = <1>;
324 324 ranges;
···
378 378
379 379 aips3: bus@30800000 {
380 380 compatible = "fsl,aips-bus", "simple-bus";
381 - reg = <0x309f0000 0x400000>;
381 + reg = <0x30800000 0x400000>;
382 382 #address-cells = <1>;
383 383 #size-cells = <1>;
384 384 ranges;
+4 -4
arch/arm64/boot/dts/freescale/imx8mq.dtsi
···
291 291
292 292 bus@30000000 { /* AIPS1 */
293 293 compatible = "fsl,aips-bus", "simple-bus";
294 - reg = <0x301f0000 0x10000>;
294 + reg = <0x30000000 0x400000>;
295 295 #address-cells = <1>;
296 296 #size-cells = <1>;
297 297 ranges = <0x30000000 0x30000000 0x400000>;
···
696 696
697 697 bus@30400000 { /* AIPS2 */
698 698 compatible = "fsl,aips-bus", "simple-bus";
699 - reg = <0x305f0000 0x10000>;
699 + reg = <0x30400000 0x400000>;
700 700 #address-cells = <1>;
701 701 #size-cells = <1>;
702 702 ranges = <0x30400000 0x30400000 0x400000>;
···
756 756
757 757 bus@30800000 { /* AIPS3 */
758 758 compatible = "fsl,aips-bus", "simple-bus";
759 - reg = <0x309f0000 0x10000>;
759 + reg = <0x30800000 0x400000>;
760 760 #address-cells = <1>;
761 761 #size-cells = <1>;
762 762 ranges = <0x30800000 0x30800000 0x400000>,
···
1029 1029
1030 1030 bus@32c00000 { /* AIPS4 */
1031 1031 compatible = "fsl,aips-bus", "simple-bus";
1032 - reg = <0x32df0000 0x10000>;
1032 + reg = <0x32c00000 0x400000>;
1033 1033 #address-cells = <1>;
1034 1034 #size-cells = <1>;
1035 1035 ranges = <0x32c00000 0x32c00000 0x400000>;
+20 -3
arch/arm64/boot/dts/qcom/apq8096-db820c.dtsi
···
658 658 s11 {
659 659 qcom,saw-leader;
660 660 regulator-always-on;
661 - regulator-min-microvolt = <1230000>;
662 - regulator-max-microvolt = <1230000>;
661 + regulator-min-microvolt = <980000>;
662 + regulator-max-microvolt = <980000>;
663 663 };
664 664 };
665 665
···
908 908 status = "okay";
909 909 };
910 910
911 + &q6asmdai {
912 + dai@0 {
913 + reg = <0>;
914 + };
915 +
916 + dai@1 {
917 + reg = <1>;
918 + };
919 +
920 + dai@2 {
921 + reg = <2>;
922 + };
923 + };
924 +
911 925 &sound {
912 926 compatible = "qcom,apq8096-sndcard";
913 927 model = "DB820c";
914 - audio-routing = "RX_BIAS", "MCLK";
928 + audio-routing = "RX_BIAS", "MCLK",
929 + "MM_DL1", "MultiMedia1 Playback",
930 + "MM_DL2", "MultiMedia2 Playback",
931 + "MultiMedia3 Capture", "MM_UL3";
915 932
916 933 mm1-dai-link {
917 934 link-name = "MultiMedia1";
+2
arch/arm64/boot/dts/qcom/msm8996.dtsi
···
2066 2066 reg = <APR_SVC_ASM>;
2067 2067 q6asmdai: dais {
2068 2068 compatible = "qcom,q6asm-dais";
2069 + #address-cells = <1>;
2070 + #size-cells = <0>;
2069 2071 #sound-dai-cells = <1>;
2070 2072 iommus = <&lpass_q6_smmu 1>;
2071 2073 };
-3
arch/arm64/boot/dts/qcom/sdm845-db845c.dts
···
442 442 &q6asmdai {
443 443 dai@0 {
444 444 reg = <0>;
445 - direction = <2>;
446 445 };
447 446
448 447 dai@1 {
449 448 reg = <1>;
450 - direction = <2>;
451 449 };
452 450
453 451 dai@2 {
454 452 reg = <2>;
455 - direction = <1>;
456 453 };
457 454
458 455 dai@3 {
-2
arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
···
359 359 &q6asmdai {
360 360 dai@0 {
361 361 reg = <0>;
362 - direction = <2>;
363 362 };
364 363
365 364 dai@1 {
366 365 reg = <1>;
367 - direction = <1>;
368 366 };
369 367 };
370 368
-2
arch/arm64/boot/dts/renesas/r8a77970-eagle.dts
···
137 137 adi,input-depth = <8>;
138 138 adi,input-colorspace = "rgb";
139 139 adi,input-clock = "1x";
140 - adi,input-style = <1>;
141 - adi,input-justification = "evenly";
142 140
143 141 ports {
144 142 #address-cells = <1>;
-2
arch/arm64/boot/dts/renesas/r8a77970-v3msk.dts
···
150 150 adi,input-depth = <8>;
151 151 adi,input-colorspace = "rgb";
152 152 adi,input-clock = "1x";
153 - adi,input-style = <1>;
154 - adi,input-justification = "evenly";
155 153
156 154 ports {
157 155 #address-cells = <1>;
-2
arch/arm64/boot/dts/renesas/r8a77980-condor.dts
···
174 174 adi,input-depth = <8>;
175 175 adi,input-colorspace = "rgb";
176 176 adi,input-clock = "1x";
177 - adi,input-style = <1>;
178 - adi,input-justification = "evenly";
179 177
180 178 ports {
181 179 #address-cells = <1>;
-2
arch/arm64/boot/dts/renesas/r8a77980-v3hsk.dts
···
141 141 adi,input-depth = <8>;
142 142 adi,input-colorspace = "rgb";
143 143 adi,input-clock = "1x";
144 - adi,input-style = <1>;
145 - adi,input-justification = "evenly";
146 144
147 145 ports {
148 146 #address-cells = <1>;
+2
arch/arm64/boot/dts/renesas/r8a77980.dtsi
···
1318 1318 ipmmu_vip0: mmu@e7b00000 {
1319 1319 compatible = "renesas,ipmmu-r8a77980";
1320 1320 reg = <0 0xe7b00000 0 0x1000>;
1321 + renesas,ipmmu-main = <&ipmmu_mm 4>;
1321 1322 power-domains = <&sysc R8A77980_PD_ALWAYS_ON>;
1322 1323 #iommu-cells = <1>;
1323 1324 };
···
1326 1325 ipmmu_vip1: mmu@e7960000 {
1327 1326 compatible = "renesas,ipmmu-r8a77980";
1328 1327 reg = <0 0xe7960000 0 0x1000>;
1328 + renesas,ipmmu-main = <&ipmmu_mm 11>;
1329 1329 power-domains = <&sysc R8A77980_PD_ALWAYS_ON>;
1330 1330 #iommu-cells = <1>;
1331 1331 };
-2
arch/arm64/boot/dts/renesas/r8a77990-ebisu.dts
···
360 360 adi,input-depth = <8>;
361 361 adi,input-colorspace = "rgb";
362 362 adi,input-clock = "1x";
363 - adi,input-style = <1>;
364 - adi,input-justification = "evenly";
365 363
366 364 ports {
367 365 #address-cells = <1>;
+2 -4
arch/arm64/boot/dts/renesas/r8a77995-draak.dts
···
272 272
273 273 hdmi-encoder@39 {
274 274 compatible = "adi,adv7511w";
275 - reg = <0x39>, <0x3f>, <0x38>, <0x3c>;
276 - reg-names = "main", "edid", "packet", "cec";
275 + reg = <0x39>, <0x3f>, <0x3c>, <0x38>;
276 + reg-names = "main", "edid", "cec", "packet";
277 277 interrupt-parent = <&gpio1>;
278 278 interrupts = <28 IRQ_TYPE_LEVEL_LOW>;
279 279
···
284 284 adi,input-depth = <8>;
285 285 adi,input-colorspace = "rgb";
286 286 adi,input-clock = "1x";
287 - adi,input-style = <1>;
288 - adi,input-justification = "evenly";
289 287
290 288 ports {
291 289 #address-cells = <1>;
+1 -1
arch/arm64/boot/dts/rockchip/px30.dtsi
···
143 143 };
144 144
145 145 arm-pmu {
146 - compatible = "arm,cortex-a53-pmu";
146 + compatible = "arm,cortex-a35-pmu";
147 147 interrupts = <GIC_SPI 100 IRQ_TYPE_LEVEL_HIGH>,
148 148 <GIC_SPI 101 IRQ_TYPE_LEVEL_HIGH>,
149 149 <GIC_SPI 102 IRQ_TYPE_LEVEL_HIGH>,
+1 -1
arch/arm64/boot/dts/rockchip/rk3308.dtsi
···
127 127 };
128 128
129 129 arm-pmu {
130 - compatible = "arm,cortex-a53-pmu";
130 + compatible = "arm,cortex-a35-pmu";
131 131 interrupts = <GIC_SPI 83 IRQ_TYPE_LEVEL_HIGH>,
132 132 <GIC_SPI 84 IRQ_TYPE_LEVEL_HIGH>,
133 133 <GIC_SPI 85 IRQ_TYPE_LEVEL_HIGH>,
+2 -3
arch/arm64/boot/dts/rockchip/rk3328-evb.dts
···
82 82 &gmac2phy {
83 83 phy-supply = <&vcc_phy>;
84 84 clock_in_out = "output";
85 - assigned-clocks = <&cru SCLK_MAC2PHY_SRC>;
86 85 assigned-clock-rate = <50000000>;
87 86 assigned-clocks = <&cru SCLK_MAC2PHY>;
88 87 assigned-clock-parents = <&cru SCLK_MAC2PHY_SRC>;
89 -
88 + status = "okay";
90 89 };
91 90
92 91 &i2c1 {
93 92 status = "okay";
94 93
95 - rk805: rk805@18 {
94 + rk805: pmic@18 {
96 95 compatible = "rockchip,rk805";
97 96 reg = <0x18>;
98 97 interrupt-parent = <&gpio2>;
+1 -1
arch/arm64/boot/dts/rockchip/rk3328-rock64.dts
···
170 170 &i2c1 {
171 171 status = "okay";
172 172
173 - rk805: rk805@18 {
173 + rk805: pmic@18 {
174 174 compatible = "rockchip,rk805";
175 175 reg = <0x18>;
176 176 interrupt-parent = <&gpio2>;
-18
arch/arm64/boot/dts/rockchip/rk3328.dtsi
···
299 299 grf: syscon@ff100000 {
300 300 compatible = "rockchip,rk3328-grf", "syscon", "simple-mfd";
301 301 reg = <0x0 0xff100000 0x0 0x1000>;
302 - #address-cells = <1>;
303 - #size-cells = <1>;
304 302
305 303 io_domains: io-domains {
306 304 compatible = "rockchip,rk3328-io-voltage-domain";
···
1792 1794 };
1793 1795
1794 1796 gmac2phy {
1795 - fephyled_speed100: fephyled-speed100 {
1796 - rockchip,pins = <0 RK_PD7 1 &pcfg_pull_none>;
1797 - };
1798 -
1799 1797 fephyled_speed10: fephyled-speed10 {
1800 1798 rockchip,pins = <0 RK_PD6 1 &pcfg_pull_none>;
1801 1799 };
1802 1800
1803 1801 fephyled_duplex: fephyled-duplex {
1804 1802 rockchip,pins = <0 RK_PD6 2 &pcfg_pull_none>;
1805 -
1806 - fephyled_rxm0: fephyled-rxm0 {
1807 - rockchip,pins = <0 RK_PD5 1 &pcfg_pull_none>;
1808 - };
1809 -
1810 - fephyled_txm0: fephyled-txm0 {
1811 - rockchip,pins = <0 RK_PD5 2 &pcfg_pull_none>;
1812 - };
1813 -
1814 - fephyled_linkm0: fephyled-linkm0 {
1815 - rockchip,pins = <0 RK_PD4 1 &pcfg_pull_none>;
1817 1803 };
1818 1804
1819 1805 fephyled_rxm1: fephyled-rxm1 {
+5 -4
arch/arm64/boot/dts/rockchip/rk3399-pinebook-pro.dts
···
147 147 "Speaker", "Speaker Amplifier OUTL",
148 148 "Speaker", "Speaker Amplifier OUTR";
149 149
150 - simple-audio-card,hp-det-gpio = <&gpio0 RK_PB0 GPIO_ACTIVE_LOW>;
150 + simple-audio-card,hp-det-gpio = <&gpio0 RK_PB0 GPIO_ACTIVE_HIGH>;
151 151 simple-audio-card,aux-devs = <&speaker_amp>;
152 152 simple-audio-card,pin-switches = "Speaker";
153 153
···
690 690 fusb0: fusb30x@22 {
691 691 compatible = "fcs,fusb302";
692 692 reg = <0x22>;
693 - fcs,int_n = <&gpio1 RK_PA2 GPIO_ACTIVE_HIGH>;
693 + interrupt-parent = <&gpio1>;
694 + interrupts = <RK_PA2 IRQ_TYPE_LEVEL_LOW>;
694 695 pinctrl-names = "default";
695 696 pinctrl-0 = <&fusb0_int_gpio>;
696 697 vbus-supply = <&vbus_typec>;
···
789 788
790 789 dc-charger {
791 790 dc_det_gpio: dc-det-gpio {
792 - rockchip,pins = <4 RK_PD0 RK_FUNC_GPIO &pcfg_pull_none>;
791 + rockchip,pins = <4 RK_PD0 RK_FUNC_GPIO &pcfg_pull_up>;
793 792 };
794 793 };
795 794
796 795 es8316 {
797 796 hp_det_gpio: hp-det-gpio {
798 - rockchip,pins = <0 RK_PB0 RK_FUNC_GPIO &pcfg_pull_down>;
797 + rockchip,pins = <0 RK_PB0 RK_FUNC_GPIO &pcfg_pull_up>;
799 798 };
800 799 };
801 800
+6 -8
arch/arm64/boot/dts/rockchip/rk3399.dtsi
···
403 403 reset-names = "usb3-otg";
404 404 status = "disabled";
405 405
406 - usbdrd_dwc3_0: dwc3 {
406 + usbdrd_dwc3_0: usb@fe800000 {
407 407 compatible = "snps,dwc3";
408 408 reg = <0x0 0xfe800000 0x0 0x100000>;
409 409 interrupts = <GIC_SPI 105 IRQ_TYPE_LEVEL_HIGH 0>;
···
439 439 reset-names = "usb3-otg";
440 440 status = "disabled";
441 441
442 - usbdrd_dwc3_1: dwc3 {
442 + usbdrd_dwc3_1: usb@fe900000 {
443 443 compatible = "snps,dwc3";
444 444 reg = <0x0 0xfe900000 0x0 0x100000>;
445 445 interrupts = <GIC_SPI 110 IRQ_TYPE_LEVEL_HIGH 0>;
···
1124 1124 pmugrf: syscon@ff320000 {
1125 1125 compatible = "rockchip,rk3399-pmugrf", "syscon", "simple-mfd";
1126 1126 reg = <0x0 0xff320000 0x0 0x1000>;
1127 - #address-cells = <1>;
1128 - #size-cells = <1>;
1129 1127
1130 1128 pmu_io_domains: io-domains {
1131 1129 compatible = "rockchip,rk3399-pmu-io-voltage-domain";
···
1881 1883 gpu: gpu@ff9a0000 {
1882 1884 compatible = "rockchip,rk3399-mali", "arm,mali-t860";
1883 1885 reg = <0x0 0xff9a0000 0x0 0x10000>;
1884 - interrupts = <GIC_SPI 19 IRQ_TYPE_LEVEL_HIGH 0>,
1885 - <GIC_SPI 20 IRQ_TYPE_LEVEL_HIGH 0>,
1886 - <GIC_SPI 21 IRQ_TYPE_LEVEL_HIGH 0>;
1887 - interrupt-names = "gpu", "job", "mmu";
1886 + interrupts = <GIC_SPI 20 IRQ_TYPE_LEVEL_HIGH 0>,
1887 + <GIC_SPI 21 IRQ_TYPE_LEVEL_HIGH 0>,
1888 + <GIC_SPI 19 IRQ_TYPE_LEVEL_HIGH 0>;
1889 + interrupt-names = "job", "mmu", "gpu";
1888 1890 clocks = <&cru ACLK_GPU>;
1889 1891 #cooling-cells = <2>;
1890 1892 power-domains = <&power RK3399_PD_GPU>;
+6 -3
arch/arm64/configs/defconfig
···
208 208 CONFIG_PCIE_ARMADA_8K=y
209 209 CONFIG_PCIE_KIRIN=y
210 210 CONFIG_PCIE_HISI_STB=y
211 - CONFIG_PCIE_TEGRA194=m
211 + CONFIG_PCIE_TEGRA194_HOST=m
212 212 CONFIG_DEVTMPFS=y
213 213 CONFIG_DEVTMPFS_MOUNT=y
214 214 CONFIG_FW_LOADER_USER_HELPER=y
···
567 567 CONFIG_MEDIA_SDR_SUPPORT=y
568 568 CONFIG_MEDIA_CONTROLLER=y
569 569 CONFIG_VIDEO_V4L2_SUBDEV_API=y
570 + CONFIG_MEDIA_PLATFORM_SUPPORT=y
570 571 # CONFIG_DVB_NET is not set
571 572 CONFIG_MEDIA_USB_SUPPORT=y
572 573 CONFIG_USB_VIDEO_CLASS=m
···
611 610 CONFIG_DRM_TEGRA=m
612 611 CONFIG_DRM_PANEL_LVDS=m
613 612 CONFIG_DRM_PANEL_SIMPLE=m
614 - CONFIG_DRM_DUMB_VGA_DAC=m
613 + CONFIG_DRM_SIMPLE_BRIDGE=m
615 614 CONFIG_DRM_PANEL_TRULY_NT35597_WQXGA=m
615 + CONFIG_DRM_DISPLAY_CONNECTOR=m
616 616 CONFIG_DRM_SII902X=m
617 617 CONFIG_DRM_THINE_THC63LVD1024=m
618 618 CONFIG_DRM_TI_SN65DSI86=m
···
850 848 CONFIG_ARCH_R8A774A1=y
851 849 CONFIG_ARCH_R8A774B1=y
852 850 CONFIG_ARCH_R8A774C0=y
853 - CONFIG_ARCH_R8A7795=y
851 + CONFIG_ARCH_R8A77950=y
852 + CONFIG_ARCH_R8A77951=y
854 853 CONFIG_ARCH_R8A77960=y
855 854 CONFIG_ARCH_R8A77961=y
856 855 CONFIG_ARCH_R8A77965=y
+1 -1
arch/arm64/include/asm/uaccess.h
···
304 304 __p = uaccess_mask_ptr(__p); \
305 305 __raw_get_user((x), __p, (err)); \
306 306 } else { \
307 - (x) = 0; (err) = -EFAULT; \
307 + (x) = (__force __typeof__(x))0; (err) = -EFAULT; \
308 308 } \
309 309 } while (0)
310 310
+4 -3
arch/arm64/kernel/ptrace.c
···
1829 1829
1830 1830 int syscall_trace_enter(struct pt_regs *regs)
1831 1831 {
1832 - if (test_thread_flag(TIF_SYSCALL_TRACE) ||
1833 - test_thread_flag(TIF_SYSCALL_EMU)) {
1832 + unsigned long flags = READ_ONCE(current_thread_info()->flags);
1833 +
1834 + if (flags & (_TIF_SYSCALL_EMU | _TIF_SYSCALL_TRACE)) {
1834 1835 tracehook_report_syscall(regs, PTRACE_SYSCALL_ENTER);
1835 - if (!in_syscall(regs) || test_thread_flag(TIF_SYSCALL_EMU))
1836 + if (!in_syscall(regs) || (flags & _TIF_SYSCALL_EMU))
1836 1837 return -1;
1837 1838 }
1838 1839
+2
arch/csky/Kconfig
···
8 8 select ARCH_HAS_SYNC_DMA_FOR_DEVICE
9 9 select ARCH_USE_BUILTIN_BSWAP
10 10 select ARCH_USE_QUEUED_RWLOCKS if NR_CPUS>2
11 + select ARCH_WANT_FRAME_POINTERS if !CPU_CK610
11 12 select COMMON_CLK
12 13 select CLKSRC_MMIO
13 14 select CSKY_MPINTC if CPU_CK860
···
39 38 select HAVE_ARCH_TRACEHOOK
40 39 select HAVE_ARCH_AUDITSYSCALL
41 40 select HAVE_COPY_THREAD_TLS
41 + select HAVE_DEBUG_BUGVERBOSE
42 42 select HAVE_DYNAMIC_FTRACE
43 43 select HAVE_DYNAMIC_FTRACE_WITH_REGS
44 44 select HAVE_FUNCTION_TRACER
+1 -1
arch/csky/Makefile
···
47 47 KBUILD_CFLAGS += -mno-stack-size
48 48 endif
49 49
50 - ifdef CONFIG_STACKTRACE
50 + ifdef CONFIG_FRAME_POINTER
51 51 KBUILD_CFLAGS += -mbacktrace
52 52 endif
53 53
+2 -2
arch/csky/abiv1/inc/abi/entry.h
···
167 167 * BA Reserved C D V
168 168 */
169 169 cprcr r6, cpcr30
170 - lsri r6, 28
171 - lsli r6, 28
170 + lsri r6, 29
171 + lsli r6, 29
172 172 addi r6, 0xe
173 173 cpwcr r6, cpcr30
174 174
+2 -2
arch/csky/abiv2/inc/abi/entry.h
···
285 285 */
286 286 mfcr r6, cr<30, 15> /* Get MSA0 */
287 287 2:
288 - lsri r6, 28
289 - lsli r6, 28
288 + lsri r6, 29
289 + lsli r6, 29
290 290 addi r6, 0x1ce
291 291 mtcr r6, cr<30, 15> /* Set MSA0 */
292 292
+2
arch/csky/abiv2/mcount.S
···
103 103 mov a0, lr
104 104 subi a0, 4
105 105 ldw a1, (sp, 24)
106 + lrw a2, function_trace_op
107 + ldw a2, (a2, 0)
106 108
107 109 jsr r26
108 110
+2 -4
arch/csky/include/asm/processor.h
···
41 41 #define TASK_UNMAPPED_BASE (TASK_SIZE / 3)
42 42
43 43 struct thread_struct {
44 - unsigned long ksp; /* kernel stack pointer */
45 - unsigned long sr; /* saved status register */
44 + unsigned long sp; /* kernel stack pointer */
46 45 unsigned long trap_no; /* saved status register */
47 46
48 47 /* FPU regs */
···
49 50 };
50 51
51 52 #define INIT_THREAD { \
52 - .ksp = sizeof(init_stack) + (unsigned long) &init_stack, \
53 - .sr = DEFAULT_PSR_VALUE, \
53 + .sp = sizeof(init_stack) + (unsigned long) &init_stack, \
54 54 }
55 55
56 56 /*
+10
arch/csky/include/asm/ptrace.h
···
58 58 return regs->usp;
59 59 }
60 60
61 + static inline unsigned long frame_pointer(struct pt_regs *regs)
62 + {
63 + return regs->regs[4];
64 + }
65 + static inline void frame_pointer_set(struct pt_regs *regs,
66 + unsigned long val)
67 + {
68 + regs->regs[4] = val;
69 + }
70 +
61 71 extern int regs_query_register_offset(const char *name);
62 72 extern unsigned long regs_get_kernel_stack_nth(struct pt_regs *regs,
63 73 unsigned int n);
+11 -5
arch/csky/include/asm/thread_info.h
···
38 38 #define THREAD_SIZE_ORDER (THREAD_SHIFT - PAGE_SHIFT)
39 39
40 40 #define thread_saved_fp(tsk) \
41 - ((unsigned long)(((struct switch_stack *)(tsk->thread.ksp))->r8))
41 + ((unsigned long)(((struct switch_stack *)(tsk->thread.sp))->r8))
42 +
43 + #define thread_saved_sp(tsk) \
44 + ((unsigned long)(tsk->thread.sp))
45 +
46 + #define thread_saved_lr(tsk) \
47 + ((unsigned long)(((struct switch_stack *)(tsk->thread.sp))->r15))
42 48
43 49 static inline struct thread_info *current_thread_info(void)
44 50 {
···
60 54 #define TIF_SIGPENDING 0 /* signal pending */
61 55 #define TIF_NOTIFY_RESUME 1 /* callback before returning to user */
62 56 #define TIF_NEED_RESCHED 2 /* rescheduling necessary */
63 - #define TIF_SYSCALL_TRACE 3 /* syscall trace active */
64 - #define TIF_SYSCALL_TRACEPOINT 4 /* syscall tracepoint instrumentation */
65 - #define TIF_SYSCALL_AUDIT 5 /* syscall auditing */
66 - #define TIF_UPROBE 6 /* uprobe breakpoint or singlestep */
57 + #define TIF_UPROBE 3 /* uprobe breakpoint or singlestep */
58 + #define TIF_SYSCALL_TRACE 4 /* syscall trace active */
59 + #define TIF_SYSCALL_TRACEPOINT 5 /* syscall tracepoint instrumentation */
60 + #define TIF_SYSCALL_AUDIT 6 /* syscall auditing */
67 61 #define TIF_POLLING_NRFLAG 16 /* poll_idle() is TIF_NEED_RESCHED */
68 62 #define TIF_MEMDIE 18 /* is terminating due to OOM killer */
69 63 #define TIF_RESTORE_SIGMASK 20 /* restore signal mask in do_signal() */
+26 -23
arch/csky/include/asm/uaccess.h
···
253 253
254 254 extern int __get_user_bad(void);
255 255
256 - #define __copy_user(to, from, n) \
256 + #define ___copy_to_user(to, from, n) \
257 257 do { \
258 258 int w0, w1, w2, w3; \
259 259 asm volatile( \
···
288 288 " subi %0, 4 \n" \
289 289 " br 3b \n" \
290 290 "5: cmpnei %0, 0 \n" /* 1B */ \
291 - " bf 8f \n" \
291 + " bf 13f \n" \
292 292 " ldb %3, (%2, 0) \n" \
293 293 "6: stb %3, (%1, 0) \n" \
294 294 " addi %2, 1 \n" \
295 295 " addi %1, 1 \n" \
296 296 " subi %0, 1 \n" \
297 297 " br 5b \n" \
298 - "7: br 8f \n" \
298 + "7: subi %0, 4 \n" \
299 + "8: subi %0, 4 \n" \
300 + "12: subi %0, 4 \n" \
301 + " br 13f \n" \
299 302 ".section __ex_table, \"a\" \n" \
300 303 ".align 2 \n" \
301 - ".long 2b, 7b \n" \
302 - ".long 9b, 7b \n" \
303 - ".long 10b, 7b \n" \
304 + ".long 2b, 13f \n" \
305 + ".long 4b, 13f \n" \
306 + ".long 6b, 13f \n" \
307 + ".long 9b, 12b \n" \
308 + ".long 10b, 8b \n" \
304 309 ".long 11b, 7b \n" \
305 - ".long 4b, 7b \n" \
306 - ".long 6b, 7b \n" \
307 310 ".previous \n" \
308 - "8: \n" \
311 + "13: \n" \
309 312 : "=r"(n), "=r"(to), "=r"(from), "=r"(w0), \
310 313 "=r"(w1), "=r"(w2), "=r"(w3) \
311 314 : "0"(n), "1"(to), "2"(from) \
312 315 : "memory"); \
313 316 } while (0)
314 317
315 - #define __copy_user_zeroing(to, from, n) \
318 + #define ___copy_from_user(to, from, n) \
316 319 do { \
317 320 int tmp; \
318 321 int nsave; \
···
358 355 " addi %1, 1 \n" \
359 356 " subi %0, 1 \n" \
360 357 " br 5b \n" \
361 - "8: mov %3, %0 \n" \
362 - " movi %4, 0 \n" \
363 - "9: stb %4, (%1, 0) \n" \
364 - " addi %1, 1 \n" \
365 - " subi %3, 1 \n" \
366 - " cmpnei %3, 0 \n" \
367 - " bt 9b \n" \
368 - " br 7f \n" \
358 + "8: stw %3, (%1, 0) \n" \
359 + " subi %0, 4 \n" \
360 + " bf 7f \n" \
361 + "9: subi %0, 8 \n" \
362 + " bf 7f \n" \
363 + "13: stw %3, (%1, 8) \n" \
364 + " subi %0, 12 \n" \
365 + " bf 7f \n" \
369 366 ".section __ex_table, \"a\" \n" \
370 367 ".align 2 \n" \
371 - ".long 2b, 8b \n" \
368 + ".long 2b, 7f \n" \
369 + ".long 4b, 7f \n" \
370 + ".long 6b, 7f \n" \
372 371 ".long 10b, 8b \n" \
373 - ".long 11b, 8b \n" \
374 - ".long 12b, 8b \n" \
375 - ".long 4b, 8b \n" \
376 - ".long 6b, 8b \n" \
372 + ".long 11b, 9b \n" \
373 + ".long 12b,13b \n" \
377 374 ".previous \n" \
378 375 "7: \n" \
379 376 : "=r"(n), "=r"(to), "=r"(from), "=r"(nsave), \
+1 -1
arch/csky/kernel/Makefile
···
3 3
4 4 obj-y += entry.o atomic.o signal.o traps.o irq.o time.o vdso.o
5 5 obj-y += power.o syscall.o syscall_table.o setup.o
6 - obj-y += process.o cpu-probe.o ptrace.o dumpstack.o
6 + obj-y += process.o cpu-probe.o ptrace.o stacktrace.o
7 7 obj-y += probes/
8 8
9 9 obj-$(CONFIG_MODULES) += module.o
+1 -2
arch/csky/kernel/asm-offsets.c
···
18 18 DEFINE(TASK_ACTIVE_MM, offsetof(struct task_struct, active_mm));
19 19
20 20 /* offsets into the thread struct */
21 - DEFINE(THREAD_KSP, offsetof(struct thread_struct, ksp));
22 - DEFINE(THREAD_SR, offsetof(struct thread_struct, sr));
21 + DEFINE(THREAD_KSP, offsetof(struct thread_struct, sp));
23 22 DEFINE(THREAD_FESR, offsetof(struct thread_struct, user_fp.fesr));
24 23 DEFINE(THREAD_FCR, offsetof(struct thread_struct, user_fp.fcr));
25 24 DEFINE(THREAD_FPREG, offsetof(struct thread_struct, user_fp.vr));
-49
arch/csky/kernel/dumpstack.c
···
1 - // SPDX-License-Identifier: GPL-2.0
2 - // Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
3 -
4 - #include <linux/ptrace.h>
5 -
6 - int kstack_depth_to_print = 48;
7 -
8 - void show_trace(unsigned long *stack)
9 - {
10 - unsigned long *stack_end;
11 - unsigned long *stack_start;
12 - unsigned long *fp;
13 - unsigned long addr;
14 -
15 - addr = (unsigned long) stack & THREAD_MASK;
16 - stack_start = (unsigned long *) addr;
17 - stack_end = (unsigned long *) (addr + THREAD_SIZE);
18 -
19 - fp = stack;
20 - pr_info("\nCall Trace:");
21 -
22 - while (fp > stack_start && fp < stack_end) {
23 - #ifdef CONFIG_STACKTRACE
24 - addr = fp[1];
25 - fp = (unsigned long *) fp[0];
26 - #else
27 - addr = *fp++;
28 - #endif
29 - if (__kernel_text_address(addr))
30 - pr_cont("\n[<%08lx>] %pS", addr, (void *)addr);
31 - }
32 - pr_cont("\n");
33 - }
34 -
35 - void show_stack(struct task_struct *task, unsigned long *stack)
36 - {
37 - if (!stack) {
38 - if (task)
39 - stack = (unsigned long *)thread_saved_fp(task);
40 - else
41 - #ifdef CONFIG_STACKTRACE
42 - asm volatile("mov %0, r8\n":"=r"(stack)::"memory");
43 - #else
44 - stack = (unsigned long *)&stack;
45 - #endif
46 - }
47 -
48 - show_trace(stack);
49 - }
+2 -10
arch/csky/kernel/entry.S
···
330 330 lrw a3, TASK_THREAD
331 331 addu a3, a0
332 332
333 - mfcr a2, psr /* Save PSR value */
334 - stw a2, (a3, THREAD_SR) /* Save PSR in task struct */
335 - bclri a2, 6 /* Disable interrupts */
336 - mtcr a2, psr
337 -
338 333 SAVE_SWITCH_STACK
339 334
340 335 stw sp, (a3, THREAD_KSP)
···
340 345
341 346 ldw sp, (a3, THREAD_KSP) /* Set next kernel sp */
342 347
343 - ldw a2, (a3, THREAD_SR) /* Set next PSR */
344 - mtcr a2, psr
345 -
346 348 #if defined(__CSKYABIV2__)
347 - addi r7, a1, TASK_THREAD_INFO
348 - ldw tls, (r7, TINFO_TP_VALUE)
349 + addi a3, a1, TASK_THREAD_INFO
350 + ldw tls, (a3, TINFO_TP_VALUE)
349 351 #endif
350 352
351 353 RESTORE_SWITCH_STACK
+2
arch/csky/kernel/ftrace.c
···
202 202 #endif /* CONFIG_DYNAMIC_FTRACE */
203 203 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
204 204
205 + #ifdef CONFIG_DYNAMIC_FTRACE
205 206 #ifndef CONFIG_CPU_HAS_ICACHE_INS
206 207 struct ftrace_modify_param {
207 208 int command;
···
232 231 stop_machine(__ftrace_modify_code, &param, cpu_online_mask);
233 232 }
234 233 #endif
234 + #endif /* CONFIG_DYNAMIC_FTRACE */
235 235
236 236 /* _mcount is defined in abi's mcount.S */
237 237 EXPORT_SYMBOL(_mcount);
+7 -2
arch/csky/kernel/perf_callchain.c
··· 12 12 13 13 static int unwind_frame_kernel(struct stackframe *frame) 14 14 { 15 - if (kstack_end((void *)frame->fp)) 15 + unsigned long low = (unsigned long)task_stack_page(current); 16 + unsigned long high = low + THREAD_SIZE; 17 + 18 + if (unlikely(frame->fp < low || frame->fp > high)) 16 19 return -EPERM; 17 - if (frame->fp & 0x3 || frame->fp < TASK_SIZE) 20 + 21 + if (kstack_end((void *)frame->fp) || frame->fp & 0x3) 18 22 return -EPERM; 19 23 20 24 *frame = *(struct stackframe *)frame->fp; 25 + 21 26 if (__kernel_text_address(frame->lr)) { 22 27 int graph = 0; 23 28
+5
arch/csky/kernel/probes/uprobes.c
··· 11 11 12 12 #define UPROBE_TRAP_NR UINT_MAX 13 13 14 + bool is_swbp_insn(uprobe_opcode_t *insn) 15 + { 16 + return (*insn & 0xffff) == UPROBE_SWBP_INSN; 17 + } 18 + 14 19 unsigned long uprobe_get_swbp_addr(struct pt_regs *regs) 15 20 { 16 21 return instruction_pointer(regs);
+3 -34
arch/csky/kernel/process.c
··· 35 35 */ 36 36 unsigned long thread_saved_pc(struct task_struct *tsk) 37 37 { 38 - struct switch_stack *sw = (struct switch_stack *)tsk->thread.ksp; 38 + struct switch_stack *sw = (struct switch_stack *)tsk->thread.sp; 39 39 40 40 return sw->r15; 41 41 } ··· 56 56 childstack = ((struct switch_stack *) childregs) - 1; 57 57 memset(childstack, 0, sizeof(struct switch_stack)); 58 58 59 - /* setup ksp for switch_to !!! */ 60 - p->thread.ksp = (unsigned long)childstack; 59 + /* setup thread.sp for switch_to !!! */ 60 + p->thread.sp = (unsigned long)childstack; 61 61 62 62 if (unlikely(p->flags & PF_KTHREAD)) { 63 63 memset(childregs, 0, sizeof(struct pt_regs)); ··· 97 97 98 98 return 1; 99 99 } 100 - 101 - unsigned long get_wchan(struct task_struct *p) 102 - { 103 - unsigned long lr; 104 - unsigned long *fp, *stack_start, *stack_end; 105 - int count = 0; 106 - 107 - if (!p || p == current || p->state == TASK_RUNNING) 108 - return 0; 109 - 110 - stack_start = (unsigned long *)end_of_stack(p); 111 - stack_end = (unsigned long *)(task_stack_page(p) + THREAD_SIZE); 112 - 113 - fp = (unsigned long *) thread_saved_fp(p); 114 - do { 115 - if (fp < stack_start || fp > stack_end) 116 - return 0; 117 - #ifdef CONFIG_STACKTRACE 118 - lr = fp[1]; 119 - fp = (unsigned long *)fp[0]; 120 - #else 121 - lr = *fp++; 122 - #endif 123 - if (!in_sched_functions(lr) && 124 - __kernel_text_address(lr)) 125 - return lr; 126 - } while (count++ < 16); 127 - 128 - return 0; 129 - } 130 - EXPORT_SYMBOL(get_wchan); 131 100 132 101 #ifndef CONFIG_CPU_PM_NONE 133 102 void arch_cpu_idle(void)
+6
arch/csky/kernel/ptrace.c
··· 41 41 42 42 regs = task_pt_regs(tsk); 43 43 regs->sr = (regs->sr & TRACE_MODE_MASK) | TRACE_MODE_RUN; 44 + 45 + /* Enable irq */ 46 + regs->sr |= BIT(6); 44 47 } 45 48 46 49 static void singlestep_enable(struct task_struct *tsk) ··· 52 49 53 50 regs = task_pt_regs(tsk); 54 51 regs->sr = (regs->sr & TRACE_MODE_MASK) | TRACE_MODE_SI; 52 + 53 + /* Disable irq */ 54 + regs->sr &= ~BIT(6); 55 55 } 56 56 57 57 /*
+147 -45
arch/csky/kernel/stacktrace.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 - /* Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd. */ 3 2 4 3 #include <linux/sched/debug.h> 5 4 #include <linux/sched/task_stack.h> 6 5 #include <linux/stacktrace.h> 7 6 #include <linux/ftrace.h> 7 + #include <linux/ptrace.h> 8 + 9 + #ifdef CONFIG_FRAME_POINTER 10 + 11 + struct stackframe { 12 + unsigned long fp; 13 + unsigned long ra; 14 + }; 15 + 16 + void notrace walk_stackframe(struct task_struct *task, struct pt_regs *regs, 17 + bool (*fn)(unsigned long, void *), void *arg) 18 + { 19 + unsigned long fp, sp, pc; 20 + 21 + if (regs) { 22 + fp = frame_pointer(regs); 23 + sp = user_stack_pointer(regs); 24 + pc = instruction_pointer(regs); 25 + } else if (task == NULL || task == current) { 26 + const register unsigned long current_sp __asm__ ("sp"); 27 + const register unsigned long current_fp __asm__ ("r8"); 28 + fp = current_fp; 29 + sp = current_sp; 30 + pc = (unsigned long)walk_stackframe; 31 + } else { 32 + /* task blocked in __switch_to */ 33 + fp = thread_saved_fp(task); 34 + sp = thread_saved_sp(task); 35 + pc = thread_saved_lr(task); 36 + } 37 + 38 + for (;;) { 39 + unsigned long low, high; 40 + struct stackframe *frame; 41 + 42 + if (unlikely(!__kernel_text_address(pc) || fn(pc, arg))) 43 + break; 44 + 45 + /* Validate frame pointer */ 46 + low = sp; 47 + high = ALIGN(sp, THREAD_SIZE); 48 + if (unlikely(fp < low || fp > high || fp & 0x3)) 49 + break; 50 + /* Unwind stack frame */ 51 + frame = (struct stackframe *)fp; 52 + sp = fp; 53 + fp = frame->fp; 54 + pc = ftrace_graph_ret_addr(current, NULL, frame->ra, 55 + (unsigned long *)(fp - 8)); 56 + } 57 + } 58 + 59 + #else /* !CONFIG_FRAME_POINTER */ 60 + 61 + static void notrace walk_stackframe(struct task_struct *task, 62 + struct pt_regs *regs, bool (*fn)(unsigned long, void *), void *arg) 63 + { 64 + unsigned long sp, pc; 65 + unsigned long *ksp; 66 + 67 + if (regs) { 68 + sp = user_stack_pointer(regs); 69 + pc = instruction_pointer(regs); 70 + } 
else if (task == NULL || task == current) { 71 + const register unsigned long current_sp __asm__ ("sp"); 72 + sp = current_sp; 73 + pc = (unsigned long)walk_stackframe; 74 + } else { 75 + /* task blocked in __switch_to */ 76 + sp = thread_saved_sp(task); 77 + pc = thread_saved_lr(task); 78 + } 79 + 80 + if (unlikely(sp & 0x3)) 81 + return; 82 + 83 + ksp = (unsigned long *)sp; 84 + while (!kstack_end(ksp)) { 85 + if (__kernel_text_address(pc) && unlikely(fn(pc, arg))) 86 + break; 87 + pc = (*ksp++) - 0x4; 88 + } 89 + } 90 + #endif /* CONFIG_FRAME_POINTER */ 91 + 92 + static bool print_trace_address(unsigned long pc, void *arg) 93 + { 94 + print_ip_sym(pc); 95 + return false; 96 + } 97 + 98 + void show_stack(struct task_struct *task, unsigned long *sp) 99 + { 100 + pr_cont("Call Trace:\n"); 101 + walk_stackframe(task, NULL, print_trace_address, NULL); 102 + } 103 + 104 + static bool save_wchan(unsigned long pc, void *arg) 105 + { 106 + if (!in_sched_functions(pc)) { 107 + unsigned long *p = arg; 108 + *p = pc; 109 + return true; 110 + } 111 + return false; 112 + } 113 + 114 + unsigned long get_wchan(struct task_struct *task) 115 + { 116 + unsigned long pc = 0; 117 + 118 + if (likely(task && task != current && task->state != TASK_RUNNING)) 119 + walk_stackframe(task, NULL, save_wchan, &pc); 120 + return pc; 121 + } 122 + 123 + #ifdef CONFIG_STACKTRACE 124 + static bool __save_trace(unsigned long pc, void *arg, bool nosched) 125 + { 126 + struct stack_trace *trace = arg; 127 + 128 + if (unlikely(nosched && in_sched_functions(pc))) 129 + return false; 130 + if (unlikely(trace->skip > 0)) { 131 + trace->skip--; 132 + return false; 133 + } 134 + 135 + trace->entries[trace->nr_entries++] = pc; 136 + return (trace->nr_entries >= trace->max_entries); 137 + } 138 + 139 + static bool save_trace(unsigned long pc, void *arg) 140 + { 141 + return __save_trace(pc, arg, false); 142 + } 143 + 144 + /* 145 + * Save stack-backtrace addresses into a stack_trace buffer. 
146 + */ 147 + void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace) 148 + { 149 + walk_stackframe(tsk, NULL, save_trace, trace); 150 + } 151 + EXPORT_SYMBOL_GPL(save_stack_trace_tsk); 8 152 9 153 void save_stack_trace(struct stack_trace *trace) 10 154 { 11 - save_stack_trace_tsk(current, trace); 155 + save_stack_trace_tsk(NULL, trace); 12 156 } 13 157 EXPORT_SYMBOL_GPL(save_stack_trace); 14 158 15 - void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace) 16 - { 17 - unsigned long *fp, *stack_start, *stack_end; 18 - unsigned long addr; 19 - int skip = trace->skip; 20 - int savesched; 21 - int graph_idx = 0; 22 - 23 - if (tsk == current) { 24 - asm volatile("mov %0, r8\n":"=r"(fp)); 25 - savesched = 1; 26 - } else { 27 - fp = (unsigned long *)thread_saved_fp(tsk); 28 - savesched = 0; 29 - } 30 - 31 - addr = (unsigned long) fp & THREAD_MASK; 32 - stack_start = (unsigned long *) addr; 33 - stack_end = (unsigned long *) (addr + THREAD_SIZE); 34 - 35 - while (fp > stack_start && fp < stack_end) { 36 - unsigned long lpp, fpp; 37 - 38 - fpp = fp[0]; 39 - lpp = fp[1]; 40 - if (!__kernel_text_address(lpp)) 41 - break; 42 - else 43 - lpp = ftrace_graph_ret_addr(tsk, &graph_idx, lpp, NULL); 44 - 45 - if (savesched || !in_sched_functions(lpp)) { 46 - if (skip) { 47 - skip--; 48 - } else { 49 - trace->entries[trace->nr_entries++] = lpp; 50 - if (trace->nr_entries >= trace->max_entries) 51 - break; 52 - } 53 - } 54 - fp = (unsigned long *)fpp; 55 - } 56 - } 57 - EXPORT_SYMBOL_GPL(save_stack_trace_tsk); 159 + #endif /* CONFIG_STACKTRACE */
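Reviewer note on the csky unwinder rework above: the open-coded loops removed from dumpstack.c, process.c (get_wchan) and the old stacktrace.c are all replaced by the single callback-driven walk_stackframe(). A rough host-side sketch of that walk follows, assuming the two-word {fp, ra} frame record shown in the hunk; the helper names walk_frames/walked_frame_count and the fake-stack setup are illustrative, not kernel code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Two-word frame record, matching the layout the unwinder assumes:
 * the saved frame pointer, then the return address. */
struct stackframe {
	uintptr_t fp;
	uintptr_t ra;
};

/* Walk a chain of frame records, handing each return address to fn()
 * until fn() asks to stop or the frame pointer leaves [low, high). */
static void walk_frames(uintptr_t fp, uintptr_t low, uintptr_t high,
			bool (*fn)(uintptr_t pc, void *arg), void *arg)
{
	while (fp >= low && fp < high && !(fp & 0x3)) {
		const struct stackframe *frame = (const struct stackframe *)fp;

		if (fn(frame->ra, arg))
			break;
		if (frame->fp <= fp)	/* frames must move toward the stack base */
			break;
		fp = frame->fp;
	}
}

static bool count_frames(uintptr_t pc, void *arg)
{
	(void)pc;
	++*(int *)arg;
	return false;		/* keep walking */
}

/* Build a fake three-frame stack and count the frames the walker visits. */
int walked_frame_count(void)
{
	static struct stackframe stack[3];
	int n = 0;

	stack[0] = (struct stackframe){ (uintptr_t)&stack[1], 0x1000 };
	stack[1] = (struct stackframe){ (uintptr_t)&stack[2], 0x2000 };
	stack[2] = (struct stackframe){ 0, 0x3000 };

	walk_frames((uintptr_t)&stack[0], (uintptr_t)stack,
		    (uintptr_t)(stack + 3), count_frames, &n);
	return n;
}
```

The callback-returns-bool shape is what lets show_stack, get_wchan and save_stack_trace share one loop: print_trace_address never stops early, save_wchan stops at the first non-scheduler PC, and __save_trace stops when the buffer fills.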
+2 -6
arch/csky/lib/usercopy.c
··· 7 7 unsigned long raw_copy_from_user(void *to, const void *from, 8 8 unsigned long n) 9 9 { 10 - if (access_ok(from, n)) 11 - __copy_user_zeroing(to, from, n); 12 - else 13 - memset(to, 0, n); 10 + ___copy_from_user(to, from, n); 14 11 return n; 15 12 } 16 13 EXPORT_SYMBOL(raw_copy_from_user); ··· 15 18 unsigned long raw_copy_to_user(void *to, const void *from, 16 19 unsigned long n) 17 20 { 18 - if (access_ok(to, n)) 19 - __copy_user(to, from, n); 21 + ___copy_to_user(to, from, n); 20 22 return n; 21 23 } 22 24 EXPORT_SYMBOL(raw_copy_to_user);
+1 -1
arch/powerpc/Kconfig
··· 130 130 select ARCH_HAS_PTE_SPECIAL 131 131 select ARCH_HAS_MEMBARRIER_CALLBACKS 132 132 select ARCH_HAS_SCALED_CPUTIME if VIRT_CPU_ACCOUNTING_NATIVE && PPC_BOOK3S_64 133 - select ARCH_HAS_STRICT_KERNEL_RWX if ((PPC_BOOK3S_64 || PPC32) && !HIBERNATION) 133 + select ARCH_HAS_STRICT_KERNEL_RWX if (PPC32 && !HIBERNATION) 134 134 select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST 135 135 select ARCH_HAS_UACCESS_FLUSHCACHE 136 136 select ARCH_HAS_UACCESS_MCSAFE if PPC64
+4 -4
arch/powerpc/include/asm/book3s/32/hash.h
··· 17 17 * updating the accessed and modified bits in the page table tree. 18 18 */ 19 19 20 - #define _PAGE_USER 0x001 /* usermode access allowed */ 21 - #define _PAGE_RW 0x002 /* software: user write access allowed */ 22 - #define _PAGE_PRESENT 0x004 /* software: pte contains a translation */ 20 + #define _PAGE_PRESENT 0x001 /* software: pte contains a translation */ 21 + #define _PAGE_HASHPTE 0x002 /* hash_page has made an HPTE for this pte */ 22 + #define _PAGE_USER 0x004 /* usermode access allowed */ 23 23 #define _PAGE_GUARDED 0x008 /* G: prohibit speculative access */ 24 24 #define _PAGE_COHERENT 0x010 /* M: enforce memory coherence (SMP systems) */ 25 25 #define _PAGE_NO_CACHE 0x020 /* I: cache inhibit */ ··· 27 27 #define _PAGE_DIRTY 0x080 /* C: page changed */ 28 28 #define _PAGE_ACCESSED 0x100 /* R: page referenced */ 29 29 #define _PAGE_EXEC 0x200 /* software: exec allowed */ 30 - #define _PAGE_HASHPTE 0x400 /* hash_page has made an HPTE for this pte */ 30 + #define _PAGE_RW 0x400 /* software: user write access allowed */ 31 31 #define _PAGE_SPECIAL 0x800 /* software: Special page */ 32 32 33 33 #ifdef CONFIG_PTE_64BIT
+1 -1
arch/powerpc/include/asm/book3s/32/kup.h
··· 75 75 76 76 .macro kuap_check current, gpr 77 77 #ifdef CONFIG_PPC_KUAP_DEBUG 78 - lwz \gpr2, KUAP(thread) 78 + lwz \gpr, KUAP(thread) 79 79 999: twnei \gpr, 0 80 80 EMIT_BUG_ENTRY 999b, __FILE__, __LINE__, (BUGFLAG_WARNING | BUGFLAG_ONCE) 81 81 #endif
+19 -1
arch/powerpc/include/asm/hw_irq.h
··· 250 250 } \ 251 251 } while(0) 252 252 253 + static inline bool __lazy_irq_pending(u8 irq_happened) 254 + { 255 + return !!(irq_happened & ~PACA_IRQ_HARD_DIS); 256 + } 257 + 258 + /* 259 + * Check if a lazy IRQ is pending. Should be called with IRQs hard disabled. 260 + */ 253 261 static inline bool lazy_irq_pending(void) 254 262 { 255 - return !!(get_paca()->irq_happened & ~PACA_IRQ_HARD_DIS); 263 + return __lazy_irq_pending(get_paca()->irq_happened); 264 + } 265 + 266 + /* 267 + * Check if a lazy IRQ is pending, with no debugging checks. 268 + * Should be called with IRQs hard disabled. 269 + * For use in RI disabled code or other constrained situations. 270 + */ 271 + static inline bool lazy_irq_pending_nocheck(void) 272 + { 273 + return __lazy_irq_pending(local_paca->irq_happened); 256 274 } 257 275 258 276 /*
+35 -14
arch/powerpc/include/asm/uaccess.h
··· 166 166 ({ \ 167 167 long __pu_err; \ 168 168 __typeof__(*(ptr)) __user *__pu_addr = (ptr); \ 169 + __typeof__(*(ptr)) __pu_val = (x); \ 170 + __typeof__(size) __pu_size = (size); \ 171 + \ 169 172 if (!is_kernel_addr((unsigned long)__pu_addr)) \ 170 173 might_fault(); \ 171 - __chk_user_ptr(ptr); \ 174 + __chk_user_ptr(__pu_addr); \ 172 175 if (do_allow) \ 173 - __put_user_size((x), __pu_addr, (size), __pu_err); \ 176 + __put_user_size(__pu_val, __pu_addr, __pu_size, __pu_err); \ 174 177 else \ 175 - __put_user_size_allowed((x), __pu_addr, (size), __pu_err); \ 178 + __put_user_size_allowed(__pu_val, __pu_addr, __pu_size, __pu_err); \ 179 + \ 176 180 __pu_err; \ 177 181 }) 178 182 ··· 184 180 ({ \ 185 181 long __pu_err = -EFAULT; \ 186 182 __typeof__(*(ptr)) __user *__pu_addr = (ptr); \ 183 + __typeof__(*(ptr)) __pu_val = (x); \ 184 + __typeof__(size) __pu_size = (size); \ 185 + \ 187 186 might_fault(); \ 188 - if (access_ok(__pu_addr, size)) \ 189 - __put_user_size((x), __pu_addr, (size), __pu_err); \ 187 + if (access_ok(__pu_addr, __pu_size)) \ 188 + __put_user_size(__pu_val, __pu_addr, __pu_size, __pu_err); \ 189 + \ 190 190 __pu_err; \ 191 191 }) 192 192 ··· 198 190 ({ \ 199 191 long __pu_err; \ 200 192 __typeof__(*(ptr)) __user *__pu_addr = (ptr); \ 201 - __chk_user_ptr(ptr); \ 202 - __put_user_size((x), __pu_addr, (size), __pu_err); \ 193 + __typeof__(*(ptr)) __pu_val = (x); \ 194 + __typeof__(size) __pu_size = (size); \ 195 + \ 196 + __chk_user_ptr(__pu_addr); \ 197 + __put_user_size(__pu_val, __pu_addr, __pu_size, __pu_err); \ 198 + \ 203 199 __pu_err; \ 204 200 }) 205 201 ··· 295 283 long __gu_err; \ 296 284 __long_type(*(ptr)) __gu_val; \ 297 285 __typeof__(*(ptr)) __user *__gu_addr = (ptr); \ 298 - __chk_user_ptr(ptr); \ 286 + __typeof__(size) __gu_size = (size); \ 287 + \ 288 + __chk_user_ptr(__gu_addr); \ 299 289 if (!is_kernel_addr((unsigned long)__gu_addr)) \ 300 290 might_fault(); \ 301 291 barrier_nospec(); \ 302 292 if (do_allow) \ 303 - 
__get_user_size(__gu_val, __gu_addr, (size), __gu_err); \ 293 + __get_user_size(__gu_val, __gu_addr, __gu_size, __gu_err); \ 304 294 else \ 305 - __get_user_size_allowed(__gu_val, __gu_addr, (size), __gu_err); \ 295 + __get_user_size_allowed(__gu_val, __gu_addr, __gu_size, __gu_err); \ 306 296 (x) = (__typeof__(*(ptr)))__gu_val; \ 297 + \ 307 298 __gu_err; \ 308 299 }) 309 300 ··· 315 300 long __gu_err = -EFAULT; \ 316 301 __long_type(*(ptr)) __gu_val = 0; \ 317 302 __typeof__(*(ptr)) __user *__gu_addr = (ptr); \ 303 + __typeof__(size) __gu_size = (size); \ 304 + \ 318 305 might_fault(); \ 319 - if (access_ok(__gu_addr, (size))) { \ 306 + if (access_ok(__gu_addr, __gu_size)) { \ 320 307 barrier_nospec(); \ 321 - __get_user_size(__gu_val, __gu_addr, (size), __gu_err); \ 308 + __get_user_size(__gu_val, __gu_addr, __gu_size, __gu_err); \ 322 309 } \ 323 310 (x) = (__force __typeof__(*(ptr)))__gu_val; \ 311 + \ 324 312 __gu_err; \ 325 313 }) 326 314 ··· 332 314 long __gu_err; \ 333 315 __long_type(*(ptr)) __gu_val; \ 334 316 __typeof__(*(ptr)) __user *__gu_addr = (ptr); \ 335 - __chk_user_ptr(ptr); \ 317 + __typeof__(size) __gu_size = (size); \ 318 + \ 319 + __chk_user_ptr(__gu_addr); \ 336 320 barrier_nospec(); \ 337 - __get_user_size(__gu_val, __gu_addr, (size), __gu_err); \ 321 + __get_user_size(__gu_val, __gu_addr, __gu_size, __gu_err); \ 338 322 (x) = (__force __typeof__(*(ptr)))__gu_val; \ 323 + \ 339 324 __gu_err; \ 340 325 }) 341 326
+3 -1
arch/powerpc/kernel/entry_64.S
··· 472 472 #ifdef CONFIG_PPC_BOOK3S 473 473 /* 474 474 * If MSR EE/RI was never enabled, IRQs not reconciled, NVGPRs not 475 - * touched, AMR not set, no exit work created, then this can be used. 475 + * touched, no exit work created, then this can be used. 476 476 */ 477 477 .balign IFETCH_ALIGN_BYTES 478 478 .globl fast_interrupt_return 479 479 fast_interrupt_return: 480 480 _ASM_NOKPROBE_SYMBOL(fast_interrupt_return) 481 + kuap_check_amr r3, r4 481 482 ld r4,_MSR(r1) 482 483 andi. r0,r4,MSR_PR 483 484 bne .Lfast_user_interrupt_return 485 + kuap_restore_amr r3 484 486 andi. r0,r4,MSR_RI 485 487 li r3,0 /* 0 return value, no EMULATE_STACK_STORE */ 486 488 bne+ .Lfast_kernel_interrupt_return
+1
arch/powerpc/kernel/exceptions-64s.S
··· 971 971 ld r10,SOFTE(r1) 972 972 stb r10,PACAIRQSOFTMASK(r13) 973 973 974 + kuap_restore_amr r10 974 975 EXCEPTION_RESTORE_REGS 975 976 RFI_TO_USER_OR_KERNEL 976 977
+6 -3
arch/powerpc/kernel/head_32.S
··· 348 348 andis. r0, r5, (DSISR_BAD_FAULT_32S | DSISR_DABRMATCH)@h 349 349 #endif 350 350 bne handle_page_fault_tramp_2 /* if not, try to put a PTE */ 351 - rlwinm r3, r5, 32 - 24, 30, 30 /* DSISR_STORE -> _PAGE_RW */ 351 + rlwinm r3, r5, 32 - 15, 21, 21 /* DSISR_STORE -> _PAGE_RW */ 352 352 bl hash_page 353 353 b handle_page_fault_tramp_1 354 354 FTR_SECTION_ELSE ··· 497 497 andc. r1,r1,r0 /* check access & ~permission */ 498 498 bne- InstructionAddressInvalid /* return if access not permitted */ 499 499 /* Convert linux-style PTE to low word of PPC-style PTE */ 500 + rlwimi r0,r0,32-2,31,31 /* _PAGE_USER -> PP lsb */ 500 501 ori r1, r1, 0xe06 /* clear out reserved bits */ 501 502 andc r1, r0, r1 /* PP = user? 1 : 0 */ 502 503 BEGIN_FTR_SECTION ··· 565 564 * we would need to update the pte atomically with lwarx/stwcx. 566 565 */ 567 566 /* Convert linux-style PTE to low word of PPC-style PTE */ 568 - rlwinm r1,r0,0,30,30 /* _PAGE_RW -> PP msb */ 569 - rlwimi r0,r0,1,30,30 /* _PAGE_USER -> PP msb */ 567 + rlwinm r1,r0,32-9,30,30 /* _PAGE_RW -> PP msb */ 568 + rlwimi r0,r0,32-1,30,30 /* _PAGE_USER -> PP msb */ 569 + rlwimi r0,r0,32-1,31,31 /* _PAGE_USER -> PP lsb */ 570 570 ori r1,r1,0xe04 /* clear out reserved bits */ 571 571 andc r1,r0,r1 /* PP = user? rw? 1: 3: 0 */ 572 572 BEGIN_FTR_SECTION ··· 645 643 * we would need to update the pte atomically with lwarx/stwcx. 646 644 */ 647 645 /* Convert linux-style PTE to low word of PPC-style PTE */ 646 + rlwimi r0,r0,32-2,31,31 /* _PAGE_USER -> PP lsb */ 648 647 li r1,0xe06 /* clear out reserved bits & PP msb */ 649 648 andc r1,r0,r1 /* PP = user? 1: 0 */ 650 649 BEGIN_FTR_SECTION
+2 -1
arch/powerpc/kernel/head_40x.S
··· 344 344 /* 0x0C00 - System Call Exception */ 345 345 START_EXCEPTION(0x0C00, SystemCall) 346 346 SYSCALL_ENTRY 0xc00 347 + /* Trap_0D is commented out to get more space for system call exception */ 347 348 348 - EXCEPTION(0x0D00, Trap_0D, unknown_exception, EXC_XFER_STD) 349 + /* EXCEPTION(0x0D00, Trap_0D, unknown_exception, EXC_XFER_STD) */ 349 350 EXCEPTION(0x0E00, Trap_0E, unknown_exception, EXC_XFER_STD) 350 351 EXCEPTION(0x0F00, Trap_0F, unknown_exception, EXC_XFER_STD) 351 352
+3 -3
arch/powerpc/kernel/ima_arch.c
··· 19 19 * to be stored as an xattr or as an appended signature. 20 20 * 21 21 * To avoid duplicate signature verification as much as possible, the IMA 22 - * policy rule for module appraisal is added only if CONFIG_MODULE_SIG_FORCE 22 + * policy rule for module appraisal is added only if CONFIG_MODULE_SIG 23 23 * is not enabled. 24 24 */ 25 25 static const char *const secure_rules[] = { 26 26 "appraise func=KEXEC_KERNEL_CHECK appraise_flag=check_blacklist appraise_type=imasig|modsig", 27 - #ifndef CONFIG_MODULE_SIG_FORCE 27 + #ifndef CONFIG_MODULE_SIG 28 28 "appraise func=MODULE_CHECK appraise_flag=check_blacklist appraise_type=imasig|modsig", 29 29 #endif 30 30 NULL ··· 50 50 "measure func=KEXEC_KERNEL_CHECK template=ima-modsig", 51 51 "measure func=MODULE_CHECK template=ima-modsig", 52 52 "appraise func=KEXEC_KERNEL_CHECK appraise_flag=check_blacklist appraise_type=imasig|modsig", 53 - #ifndef CONFIG_MODULE_SIG_FORCE 53 + #ifndef CONFIG_MODULE_SIG 54 54 "appraise func=MODULE_CHECK appraise_flag=check_blacklist appraise_type=imasig|modsig", 55 55 #endif 56 56 NULL
+11 -9
arch/powerpc/kernel/syscall_64.c
··· 35 35 BUG_ON(!FULL_REGS(regs)); 36 36 BUG_ON(regs->softe != IRQS_ENABLED); 37 37 38 + kuap_check_amr(); 39 + 38 40 account_cpu_user_entry(); 39 41 40 42 #ifdef CONFIG_PPC_SPLPAR ··· 48 46 accumulate_stolen_time(); 49 47 } 50 48 #endif 51 - 52 - kuap_check_amr(); 53 49 54 50 /* 55 51 * This is not required for the syscall exit path, but makes the ··· 116 116 unsigned long *ti_flagsp = &current_thread_info()->flags; 117 117 unsigned long ti_flags; 118 118 unsigned long ret = 0; 119 + 120 + kuap_check_amr(); 119 121 120 122 regs->result = r3; 121 123 ··· 191 189 192 190 /* This pattern matches prep_irq_for_idle */ 193 191 __hard_EE_RI_disable(); 194 - if (unlikely(lazy_irq_pending())) { 192 + if (unlikely(lazy_irq_pending_nocheck())) { 195 193 __hard_RI_enable(); 196 194 trace_hardirqs_off(); 197 195 local_paca->irq_happened |= PACA_IRQ_HARD_DIS; ··· 205 203 #ifdef CONFIG_PPC_TRANSACTIONAL_MEM 206 204 local_paca->tm_scratch = regs->msr; 207 205 #endif 208 - 209 - kuap_check_amr(); 210 206 211 207 account_cpu_user_exit(); 212 208 ··· 227 227 BUG_ON(!(regs->msr & MSR_PR)); 228 228 BUG_ON(!FULL_REGS(regs)); 229 229 BUG_ON(regs->softe != IRQS_ENABLED); 230 + 231 + kuap_check_amr(); 230 232 231 233 local_irq_save(flags); 232 234 ··· 266 264 267 265 trace_hardirqs_on(); 268 266 __hard_EE_RI_disable(); 269 - if (unlikely(lazy_irq_pending())) { 267 + if (unlikely(lazy_irq_pending_nocheck())) { 270 268 __hard_RI_enable(); 271 269 trace_hardirqs_off(); 272 270 local_paca->irq_happened |= PACA_IRQ_HARD_DIS; ··· 294 292 local_paca->tm_scratch = regs->msr; 295 293 #endif 296 294 297 - kuap_check_amr(); 298 - 299 295 account_cpu_user_exit(); 300 296 301 297 return ret; ··· 312 312 unrecoverable_exception(regs); 313 313 BUG_ON(regs->msr & MSR_PR); 314 314 BUG_ON(!FULL_REGS(regs)); 315 + 316 + kuap_check_amr(); 315 317 316 318 if (unlikely(*ti_flagsp & _TIF_EMULATE_STACK_STORE)) { 317 319 clear_bits(_TIF_EMULATE_STACK_STORE, ti_flagsp); ··· 336 334 337 335 trace_hardirqs_on(); 
338 336 __hard_EE_RI_disable(); 339 - if (unlikely(lazy_irq_pending())) { 337 + if (unlikely(lazy_irq_pending_nocheck())) { 340 338 __hard_RI_enable(); 341 339 irq_soft_mask_set(IRQS_ALL_DISABLED); 342 340 trace_hardirqs_off();
+3 -3
arch/powerpc/kernel/vdso32/gettimeofday.S
··· 218 218 blr 219 219 220 220 /* 221 - * invalid clock 221 + * syscall fallback 222 222 */ 223 223 99: 224 - li r3, EINVAL 225 - crset so 224 + li r0,__NR_clock_getres 225 + sc 226 226 blr 227 227 .cfi_endproc 228 228 V_FUNCTION_END(__kernel_clock_getres)
+8 -6
arch/powerpc/mm/book3s32/hash_low.S
··· 35 35 /* 36 36 * Load a PTE into the hash table, if possible. 37 37 * The address is in r4, and r3 contains an access flag: 38 - * _PAGE_RW (0x002) if a write. 38 + * _PAGE_RW (0x400) if a write. 39 39 * r9 contains the SRR1 value, from which we use the MSR_PR bit. 40 40 * SPRG_THREAD contains the physical address of the current task's thread. 41 41 * ··· 69 69 blt+ 112f /* assume user more likely */ 70 70 lis r5, (swapper_pg_dir - PAGE_OFFSET)@ha /* if kernel address, use */ 71 71 addi r5 ,r5 ,(swapper_pg_dir - PAGE_OFFSET)@l /* kernel page table */ 72 - rlwimi r3,r9,32-14,31,31 /* MSR_PR -> _PAGE_USER */ 72 + rlwimi r3,r9,32-12,29,29 /* MSR_PR -> _PAGE_USER */ 73 73 112: 74 74 #ifndef CONFIG_PTE_64BIT 75 75 rlwimi r5,r4,12,20,29 /* insert top 10 bits of address */ ··· 94 94 #else 95 95 rlwimi r8,r4,23,20,28 /* compute pte address */ 96 96 #endif 97 - rlwinm r0,r3,6,24,24 /* _PAGE_RW access -> _PAGE_DIRTY */ 97 + rlwinm r0,r3,32-3,24,24 /* _PAGE_RW access -> _PAGE_DIRTY */ 98 98 ori r0,r0,_PAGE_ACCESSED|_PAGE_HASHPTE 99 99 100 100 /* ··· 310 310 311 311 _GLOBAL(create_hpte) 312 312 /* Convert linux-style PTE (r5) to low word of PPC-style PTE (r8) */ 313 + rlwinm r8,r5,32-9,30,30 /* _PAGE_RW -> PP msb */ 313 314 rlwinm r0,r5,32-6,30,30 /* _PAGE_DIRTY -> PP msb */ 314 - and r8,r5,r0 /* writable if _RW & _DIRTY */ 315 - rlwimi r5,r5,1,30,30 /* _PAGE_USER -> PP msb */ 315 + and r8,r8,r0 /* writable if _RW & _DIRTY */ 316 + rlwimi r5,r5,32-1,30,30 /* _PAGE_USER -> PP msb */ 317 + rlwimi r5,r5,32-2,31,31 /* _PAGE_USER -> PP lsb */ 316 318 ori r8,r8,0xe04 /* clear out reserved bits */ 317 319 andc r8,r5,r8 /* PP = user? (rw&dirty? 1: 3): 0 */ 318 320 BEGIN_FTR_SECTION ··· 566 564 33: lwarx r8,0,r5 /* fetch the pte flags word */ 567 565 andi. r0,r8,_PAGE_HASHPTE 568 566 beq 8f /* done if HASHPTE is already clear */ 569 - rlwinm r8,r8,0,~_PAGE_HASHPTE /* clear HASHPTE bit */ 567 + rlwinm r8,r8,0,31,29 /* clear HASHPTE bit */ 570 568 stwcx. r8,0,r5 /* update the pte */ 571 569 bne- 33b 572 570
+1 -1
arch/riscv/kernel/process.c
··· 22 22 #include <asm/switch_to.h> 23 23 #include <asm/thread_info.h> 24 24 25 - unsigned long gp_in_global __asm__("gp"); 25 + register unsigned long gp_in_global __asm__("gp"); 26 26 27 27 extern asmlinkage void ret_from_fork(void); 28 28 extern asmlinkage void ret_from_kernel_thread(void);
+1 -1
arch/riscv/mm/init.c
··· 47 47 memset((void *)empty_zero_page, 0, PAGE_SIZE); 48 48 } 49 49 50 - #ifdef CONFIG_DEBUG_VM 50 + #if defined(CONFIG_MMU) && defined(CONFIG_DEBUG_VM) 51 51 static inline void print_mlk(char *name, unsigned long b, unsigned long t) 52 52 { 53 53 pr_notice("%12s : 0x%08lx - 0x%08lx (%4ld kB)\n", name, b, t,
+8 -2
arch/s390/include/asm/pci_io.h
··· 8 8 #include <linux/slab.h> 9 9 #include <asm/pci_insn.h> 10 10 11 + /* I/O size constraints */ 12 + #define ZPCI_MAX_READ_SIZE 8 13 + #define ZPCI_MAX_WRITE_SIZE 128 14 + 11 15 /* I/O Map */ 12 16 #define ZPCI_IOMAP_SHIFT 48 13 17 #define ZPCI_IOMAP_ADDR_BASE 0x8000000000000000UL ··· 144 140 145 141 while (n > 0) { 146 142 size = zpci_get_max_write_size((u64 __force) src, 147 - (u64) dst, n, 8); 143 + (u64) dst, n, 144 + ZPCI_MAX_READ_SIZE); 148 145 rc = zpci_read_single(dst, src, size); 149 146 if (rc) 150 147 break; ··· 166 161 167 162 while (n > 0) { 168 163 size = zpci_get_max_write_size((u64 __force) dst, 169 - (u64) src, n, 128); 164 + (u64) src, n, 165 + ZPCI_MAX_WRITE_SIZE); 170 166 if (size > 8) /* main path */ 171 167 rc = zpci_write_block(dst, src, size); 172 168 else
+1 -1
arch/s390/kernel/machine_kexec_file.c
··· 151 151 buf.mem += crashk_res.start; 152 152 buf.memsz = buf.bufsz; 153 153 154 - data->parm->initrd_start = buf.mem; 154 + data->parm->initrd_start = data->memsz; 155 155 data->parm->initrd_size = buf.memsz; 156 156 data->memsz += buf.memsz; 157 157
+1
arch/s390/kernel/machine_kexec_reloc.c
··· 28 28 break; 29 29 case R_390_64: /* Direct 64 bit. */ 30 30 case R_390_GLOB_DAT: 31 + case R_390_JMP_SLOT: 31 32 *(u64 *)loc = val; 32 33 break; 33 34 case R_390_PC16: /* PC relative 16 bit. */
+6 -3
arch/s390/mm/hugetlbpage.c
··· 159 159 rste &= ~_SEGMENT_ENTRY_NOEXEC; 160 160 161 161 /* Set correct table type for 2G hugepages */ 162 - if ((pte_val(*ptep) & _REGION_ENTRY_TYPE_MASK) == _REGION_ENTRY_TYPE_R3) 163 - rste |= _REGION_ENTRY_TYPE_R3 | _REGION3_ENTRY_LARGE; 164 - else 162 + if ((pte_val(*ptep) & _REGION_ENTRY_TYPE_MASK) == _REGION_ENTRY_TYPE_R3) { 163 + if (likely(pte_present(pte))) 164 + rste |= _REGION3_ENTRY_LARGE; 165 + rste |= _REGION_ENTRY_TYPE_R3; 166 + } else if (likely(pte_present(pte))) 165 167 rste |= _SEGMENT_ENTRY_LARGE; 168 + 166 169 clear_huge_pte_skeys(mm, rste); 167 170 pte_val(*ptep) = rste; 168 171 }
+211 -2
arch/s390/pci/pci_mmio.c
··· 11 11 #include <linux/mm.h> 12 12 #include <linux/errno.h> 13 13 #include <linux/pci.h> 14 + #include <asm/pci_io.h> 15 + #include <asm/pci_debug.h> 16 + 17 + static inline void zpci_err_mmio(u8 cc, u8 status, u64 offset) 18 + { 19 + struct { 20 + u64 offset; 21 + u8 cc; 22 + u8 status; 23 + } data = {offset, cc, status}; 24 + 25 + zpci_err_hex(&data, sizeof(data)); 26 + } 27 + 28 + static inline int __pcistb_mio_inuser( 29 + void __iomem *ioaddr, const void __user *src, 30 + u64 len, u8 *status) 31 + { 32 + int cc = -ENXIO; 33 + 34 + asm volatile ( 35 + " sacf 256\n" 36 + "0: .insn rsy,0xeb00000000d4,%[len],%[ioaddr],%[src]\n" 37 + "1: ipm %[cc]\n" 38 + " srl %[cc],28\n" 39 + "2: sacf 768\n" 40 + EX_TABLE(0b, 2b) EX_TABLE(1b, 2b) 41 + : [cc] "+d" (cc), [len] "+d" (len) 42 + : [ioaddr] "a" (ioaddr), [src] "Q" (*((u8 __force *)src)) 43 + : "cc", "memory"); 44 + *status = len >> 24 & 0xff; 45 + return cc; 46 + } 47 + 48 + static inline int __pcistg_mio_inuser( 49 + void __iomem *ioaddr, const void __user *src, 50 + u64 ulen, u8 *status) 51 + { 52 + register u64 addr asm("2") = (u64 __force) ioaddr; 53 + register u64 len asm("3") = ulen; 54 + int cc = -ENXIO; 55 + u64 val = 0; 56 + u64 cnt = ulen; 57 + u8 tmp; 58 + 59 + /* 60 + * copy 0 < @len <= 8 bytes from @src into the right most bytes of 61 + * a register, then store it to PCI at @ioaddr while in secondary 62 + * address space. pcistg then uses the user mappings. 
63 + */ 64 + asm volatile ( 65 + " sacf 256\n" 66 + "0: llgc %[tmp],0(%[src])\n" 67 + " sllg %[val],%[val],8\n" 68 + " aghi %[src],1\n" 69 + " ogr %[val],%[tmp]\n" 70 + " brctg %[cnt],0b\n" 71 + "1: .insn rre,0xb9d40000,%[val],%[ioaddr]\n" 72 + "2: ipm %[cc]\n" 73 + " srl %[cc],28\n" 74 + "3: sacf 768\n" 75 + EX_TABLE(0b, 3b) EX_TABLE(1b, 3b) EX_TABLE(2b, 3b) 76 + : 77 + [src] "+a" (src), [cnt] "+d" (cnt), 78 + [val] "+d" (val), [tmp] "=d" (tmp), 79 + [len] "+d" (len), [cc] "+d" (cc), 80 + [ioaddr] "+a" (addr) 81 + :: "cc", "memory"); 82 + *status = len >> 24 & 0xff; 83 + 84 + /* did we read everything from user memory? */ 85 + if (!cc && cnt != 0) 86 + cc = -EFAULT; 87 + 88 + return cc; 89 + } 90 + 91 + static inline int __memcpy_toio_inuser(void __iomem *dst, 92 + const void __user *src, size_t n) 93 + { 94 + int size, rc = 0; 95 + u8 status = 0; 96 + mm_segment_t old_fs; 97 + 98 + if (!src) 99 + return -EINVAL; 100 + 101 + old_fs = enable_sacf_uaccess(); 102 + while (n > 0) { 103 + size = zpci_get_max_write_size((u64 __force) dst, 104 + (u64 __force) src, n, 105 + ZPCI_MAX_WRITE_SIZE); 106 + if (size > 8) /* main path */ 107 + rc = __pcistb_mio_inuser(dst, src, size, &status); 108 + else 109 + rc = __pcistg_mio_inuser(dst, src, size, &status); 110 + if (rc) 111 + break; 112 + src += size; 113 + dst += size; 114 + n -= size; 115 + } 116 + disable_sacf_uaccess(old_fs); 117 + if (rc) 118 + zpci_err_mmio(rc, status, (__force u64) dst); 119 + return rc; 120 + } 14 121 15 122 static long get_pfn(unsigned long user_addr, unsigned long access, 16 123 unsigned long *pfn) ··· 153 46 154 47 if (length <= 0 || PAGE_SIZE - (mmio_addr & ~PAGE_MASK) < length) 155 48 return -EINVAL; 49 + 50 + /* 51 + * Only support read access to MIO capable devices on a MIO enabled 52 + * system. Otherwise we would have to check for every address if it is 53 + * a special ZPCI_ADDR and we would have to do a get_pfn() which we 54 + * don't need for MIO capable devices. 
55 + */ 56 + if (static_branch_likely(&have_mio)) { 57 + ret = __memcpy_toio_inuser((void __iomem *) mmio_addr, 58 + user_buffer, 59 + length); 60 + return ret; 61 + } 62 + 156 63 if (length > 64) { 157 64 buf = kmalloc(length, GFP_KERNEL); 158 65 if (!buf) ··· 177 56 ret = get_pfn(mmio_addr, VM_WRITE, &pfn); 178 57 if (ret) 179 58 goto out; 180 - io_addr = (void __iomem *)((pfn << PAGE_SHIFT) | (mmio_addr & ~PAGE_MASK)); 59 + io_addr = (void __iomem *)((pfn << PAGE_SHIFT) | 60 + (mmio_addr & ~PAGE_MASK)); 181 61 182 62 ret = -EFAULT; 183 63 if ((unsigned long) io_addr < ZPCI_IOMAP_ADDR_BASE) ··· 192 70 if (buf != local_buf) 193 71 kfree(buf); 194 72 return ret; 73 + } 74 + 75 + static inline int __pcilg_mio_inuser( 76 + void __user *dst, const void __iomem *ioaddr, 77 + u64 ulen, u8 *status) 78 + { 79 + register u64 addr asm("2") = (u64 __force) ioaddr; 80 + register u64 len asm("3") = ulen; 81 + u64 cnt = ulen; 82 + int shift = ulen * 8; 83 + int cc = -ENXIO; 84 + u64 val, tmp; 85 + 86 + /* 87 + * read 0 < @len <= 8 bytes from the PCI memory mapped at @ioaddr (in 88 + * user space) into a register using pcilg then store these bytes at 89 + * user address @dst 90 + */ 91 + asm volatile ( 92 + " sacf 256\n" 93 + "0: .insn rre,0xb9d60000,%[val],%[ioaddr]\n" 94 + "1: ipm %[cc]\n" 95 + " srl %[cc],28\n" 96 + " ltr %[cc],%[cc]\n" 97 + " jne 4f\n" 98 + "2: ahi %[shift],-8\n" 99 + " srlg %[tmp],%[val],0(%[shift])\n" 100 + "3: stc %[tmp],0(%[dst])\n" 101 + " aghi %[dst],1\n" 102 + " brctg %[cnt],2b\n" 103 + "4: sacf 768\n" 104 + EX_TABLE(0b, 4b) EX_TABLE(1b, 4b) EX_TABLE(3b, 4b) 105 + : 106 + [cc] "+d" (cc), [val] "=d" (val), [len] "+d" (len), 107 + [dst] "+a" (dst), [cnt] "+d" (cnt), [tmp] "=d" (tmp), 108 + [shift] "+d" (shift) 109 + : 110 + [ioaddr] "a" (addr) 111 + : "cc", "memory"); 112 + 113 + /* did we write everything to the user space buffer? 
*/ 114 + if (!cc && cnt != 0) 115 + cc = -EFAULT; 116 + 117 + *status = len >> 24 & 0xff; 118 + return cc; 119 + } 120 + 121 + static inline int __memcpy_fromio_inuser(void __user *dst, 122 + const void __iomem *src, 123 + unsigned long n) 124 + { 125 + int size, rc = 0; 126 + u8 status; 127 + mm_segment_t old_fs; 128 + 129 + old_fs = enable_sacf_uaccess(); 130 + while (n > 0) { 131 + size = zpci_get_max_write_size((u64 __force) src, 132 + (u64 __force) dst, n, 133 + ZPCI_MAX_READ_SIZE); 134 + rc = __pcilg_mio_inuser(dst, src, size, &status); 135 + if (rc) 136 + break; 137 + src += size; 138 + dst += size; 139 + n -= size; 140 + } 141 + disable_sacf_uaccess(old_fs); 142 + if (rc) 143 + zpci_err_mmio(rc, status, (__force u64) dst); 144 + return rc; 195 145 } 196 146 197 147 SYSCALL_DEFINE3(s390_pci_mmio_read, unsigned long, mmio_addr, ··· 280 86 281 87 if (length <= 0 || PAGE_SIZE - (mmio_addr & ~PAGE_MASK) < length) 282 88 return -EINVAL; 89 + 90 + /* 91 + * Only support write access to MIO capable devices on a MIO enabled 92 + * system. Otherwise we would have to check for every address if it is 93 + * a special ZPCI_ADDR and we would have to do a get_pfn() which we 94 + * don't need for MIO capable devices. 95 + */ 96 + if (static_branch_likely(&have_mio)) { 97 + ret = __memcpy_fromio_inuser( 98 + user_buffer, (const void __iomem *)mmio_addr, 99 + length); 100 + return ret; 101 + } 102 + 283 103 if (length > 64) { 284 104 buf = kmalloc(length, GFP_KERNEL); 285 105 if (!buf) 286 106 return -ENOMEM; 287 - } else 107 + } else { 288 108 buf = local_buf; 109 + } 289 110 290 111 ret = get_pfn(mmio_addr, VM_READ, &pfn); 291 112 if (ret)
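The new `__memcpy_toio_inuser()`/`__memcpy_fromio_inuser()` helpers both follow the same chunking loop. A userspace sketch of that loop, with a simplified stand-in for `zpci_get_max_write_size()` (the real helper also obeys the pcistg/pcistb instruction limits; `max_chunk` and `copy_chunked` are illustrative names, not kernel API):

```c
#include <stddef.h>
#include <string.h>

/* Simplified stand-in for zpci_get_max_write_size(): the largest
 * power-of-two chunk, up to `max`, that both cursors are aligned to
 * and that does not overrun the remaining length. */
static size_t max_chunk(unsigned long src, unsigned long dst,
			size_t n, size_t max)
{
	size_t size = max;

	while (size > 1 && (((src | dst) & (size - 1)) || size > n))
		size >>= 1;
	return size;
}

/* Chunked-copy loop shaped like __memcpy_toio_inuser(): pick a chunk,
 * copy it, advance both cursors, repeat until done. */
static int copy_chunked(char *dst, const char *src, size_t n)
{
	while (n > 0) {
		size_t size = max_chunk((unsigned long)src,
					(unsigned long)dst, n, 8);

		memcpy(dst, src, size);	/* stands in for pcistg/pcistb */
		src += size;
		dst += size;
		n -= size;
	}
	return 0;
}
```

In the kernel loop each chunk additionally returns a status byte and the copy stops on the first failure; the sketch only models the cursor arithmetic.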
+2
arch/sh/include/uapi/asm/sockios.h
··· 2 2 #ifndef __ASM_SH_SOCKIOS_H 3 3 #define __ASM_SH_SOCKIOS_H 4 4 5 + #include <linux/time_types.h> 6 + 5 7 /* Socket-level I/O control calls. */ 6 8 #define FIOGETOWN _IOR('f', 123, int) 7 9 #define FIOSETOWN _IOW('f', 124, int)
+3 -3
arch/sparc/mm/srmmu.c
··· 331 331 332 332 while (vaddr < srmmu_nocache_end) { 333 333 pgd = pgd_offset_k(vaddr); 334 - p4d = p4d_offset(__nocache_fix(pgd), vaddr); 335 - pud = pud_offset(__nocache_fix(p4d), vaddr); 336 - pmd = pmd_offset(__nocache_fix(pgd), vaddr); 334 + p4d = p4d_offset(pgd, vaddr); 335 + pud = pud_offset(p4d, vaddr); 336 + pmd = pmd_offset(__nocache_fix(pud), vaddr); 337 337 pte = pte_offset_kernel(__nocache_fix(pmd), vaddr); 338 338 339 339 pteval = ((paddr >> 4) | SRMMU_ET_PTE | SRMMU_PRIV);
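The corrected walk threads each level's entry into the next `*_offset()` call instead of reusing `pgd`. A toy two-level walk (the types and helpers are illustrative, not the sparc code) showing why the chaining matters:

```c
#include <stddef.h>

#define ENTRIES 4

/* Toy two-level page table: a "pgd" of pmd pointers, and "pmd"
 * tables of values. Each *_offset() must consume the entry produced
 * by the level above it -- indexing off the wrong level (the
 * original bug) walks the wrong table. */
typedef struct { int vals[ENTRIES]; } pmd_t;
typedef struct { pmd_t *pmds[ENTRIES]; } pgd_t;

static pmd_t *pmd_offset(pgd_t *pgd, unsigned long addr)
{
	return pgd->pmds[(addr >> 2) & (ENTRIES - 1)];
}

static int *pte_offset(pmd_t *pmd, unsigned long addr)
{
	return &pmd->vals[addr & (ENTRIES - 1)];
}

static int lookup(pgd_t *pgd, unsigned long addr)
{
	pmd_t *pmd = pmd_offset(pgd, addr);	/* uses the pgd entry */
	return *pte_offset(pmd, addr);		/* uses the pmd entry */
}
```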
+1 -1
arch/um/drivers/vector_user.h
··· 17 17 #define TRANS_TAP_LEN strlen(TRANS_TAP) 18 18 19 19 #define TRANS_GRE "gre" 20 - #define TRANS_GRE_LEN strlen(TRANS_RAW) 20 + #define TRANS_GRE_LEN strlen(TRANS_GRE) 21 21 22 22 #define TRANS_L2TPV3 "l2tpv3" 23 23 #define TRANS_L2TPV3_LEN strlen(TRANS_L2TPV3)
+1 -1
arch/um/include/asm/xor.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0 */ 2 2 #include <asm-generic/xor.h> 3 - #include <shared/timer-internal.h> 3 + #include <linux/time-internal.h> 4 4 5 5 /* pick an arbitrary one - measuring isn't possible with inf-cpu */ 6 6 #define XOR_SELECT_TEMPLATE(x) \
+1
arch/um/kernel/skas/syscall.c
··· 11 11 #include <sysdep/ptrace_user.h> 12 12 #include <sysdep/syscalls.h> 13 13 #include <linux/time-internal.h> 14 + #include <asm/unistd.h> 14 15 15 16 void handle_syscall(struct uml_pt_regs *r) 16 17 {
+8 -8
arch/x86/boot/tools/build.c
··· 59 59 #define PECOFF_COMPAT_RESERVE 0x0 60 60 #endif 61 61 62 - unsigned long efi32_stub_entry; 63 - unsigned long efi64_stub_entry; 64 - unsigned long efi_pe_entry; 65 - unsigned long efi32_pe_entry; 66 - unsigned long kernel_info; 67 - unsigned long startup_64; 68 - unsigned long _ehead; 69 - unsigned long _end; 62 + static unsigned long efi32_stub_entry; 63 + static unsigned long efi64_stub_entry; 64 + static unsigned long efi_pe_entry; 65 + static unsigned long efi32_pe_entry; 66 + static unsigned long kernel_info; 67 + static unsigned long startup_64; 68 + static unsigned long _ehead; 69 + static unsigned long _end; 70 70 71 71 /*----------------------------------------------------------------------*/ 72 72
+17 -2
arch/x86/hyperv/hv_init.c
··· 226 226 227 227 rdmsrl(HV_X64_MSR_REENLIGHTENMENT_CONTROL, *((u64 *)&re_ctrl)); 228 228 if (re_ctrl.target_vp == hv_vp_index[cpu]) { 229 - /* Reassign to some other online CPU */ 229 + /* 230 + * Reassign reenlightenment notifications to some other online 231 + * CPU or just disable the feature if there are no online CPUs 232 + * left (happens on hibernation). 233 + */ 230 234 new_cpu = cpumask_any_but(cpu_online_mask, cpu); 231 235 232 - re_ctrl.target_vp = hv_vp_index[new_cpu]; 236 + if (new_cpu < nr_cpu_ids) 237 + re_ctrl.target_vp = hv_vp_index[new_cpu]; 238 + else 239 + re_ctrl.enabled = 0; 240 + 233 241 wrmsrl(HV_X64_MSR_REENLIGHTENMENT_CONTROL, *((u64 *)&re_ctrl)); 234 242 } 235 243 ··· 301 293 302 294 hv_hypercall_pg = hv_hypercall_pg_saved; 303 295 hv_hypercall_pg_saved = NULL; 296 + 297 + /* 298 + * Reenlightenment notifications are disabled by hv_cpu_die(0), 299 + * reenable them here if hv_reenlightenment_cb was previously set. 300 + */ 301 + if (hv_reenlightenment_cb) 302 + set_hv_tscchange_cb(hv_reenlightenment_cb); 304 303 } 305 304 306 305 /* Note: when the ops are called, only CPU0 is online and IRQs are disabled. */
+6 -6
arch/x86/include/asm/bitops.h
··· 52 52 arch_set_bit(long nr, volatile unsigned long *addr) 53 53 { 54 54 if (__builtin_constant_p(nr)) { 55 - asm volatile(LOCK_PREFIX "orb %1,%0" 55 + asm volatile(LOCK_PREFIX "orb %b1,%0" 56 56 : CONST_MASK_ADDR(nr, addr) 57 - : "iq" (CONST_MASK(nr) & 0xff) 57 + : "iq" (CONST_MASK(nr)) 58 58 : "memory"); 59 59 } else { 60 60 asm volatile(LOCK_PREFIX __ASM_SIZE(bts) " %1,%0" ··· 72 72 arch_clear_bit(long nr, volatile unsigned long *addr) 73 73 { 74 74 if (__builtin_constant_p(nr)) { 75 - asm volatile(LOCK_PREFIX "andb %1,%0" 75 + asm volatile(LOCK_PREFIX "andb %b1,%0" 76 76 : CONST_MASK_ADDR(nr, addr) 77 - : "iq" (CONST_MASK(nr) ^ 0xff)); 77 + : "iq" (~CONST_MASK(nr))); 78 78 } else { 79 79 asm volatile(LOCK_PREFIX __ASM_SIZE(btr) " %1,%0" 80 80 : : RLONG_ADDR(addr), "Ir" (nr) : "memory"); ··· 123 123 arch_change_bit(long nr, volatile unsigned long *addr) 124 124 { 125 125 if (__builtin_constant_p(nr)) { 126 - asm volatile(LOCK_PREFIX "xorb %1,%0" 126 + asm volatile(LOCK_PREFIX "xorb %b1,%0" 127 127 : CONST_MASK_ADDR(nr, addr) 128 - : "iq" ((u8)CONST_MASK(nr))); 128 + : "iq" (CONST_MASK(nr))); 129 129 } else { 130 130 asm volatile(LOCK_PREFIX __ASM_SIZE(btc) " %1,%0" 131 131 : : RLONG_ADDR(addr), "Ir" (nr) : "memory");
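The hunk relies on `CONST_MASK(nr)` being a byte-sized mask, with the `%b1` operand modifier selecting the byte register so a plain `~` replaces the old `& 0xff` / `^ 0xff` fixups. A userspace sketch of the mask arithmetic, assuming `CONST_MASK(nr)` expands to `1 << ((nr) & 7)` as in the kernel header (the `*_bit_byte` helpers are illustrative):

```c
#include <stdint.h>

/* Byte-granular constant mask: bit `nr` within its byte. */
#define CONST_MASK(nr) (1 << ((nr) & 7))

static uint8_t set_bit_byte(uint8_t byte, int nr)
{
	return byte | (uint8_t)CONST_MASK(nr);	/* orb  */
}

static uint8_t clear_bit_byte(uint8_t byte, int nr)
{
	/* ~CONST_MASK(nr) truncated to 8 bits is exactly the old
	 * CONST_MASK(nr) ^ 0xff formulation. */
	return byte & (uint8_t)~CONST_MASK(nr);	/* andb */
}

static uint8_t change_bit_byte(uint8_t byte, int nr)
{
	return byte ^ (uint8_t)CONST_MASK(nr);	/* xorb */
}
```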
+2 -2
arch/x86/include/asm/kvm_host.h
··· 578 578 unsigned long cr4; 579 579 unsigned long cr4_guest_owned_bits; 580 580 unsigned long cr8; 581 + u32 host_pkru; 581 582 u32 pkru; 582 583 u32 hflags; 583 584 u64 efer; ··· 1094 1093 void (*set_idt)(struct kvm_vcpu *vcpu, struct desc_ptr *dt); 1095 1094 void (*get_gdt)(struct kvm_vcpu *vcpu, struct desc_ptr *dt); 1096 1095 void (*set_gdt)(struct kvm_vcpu *vcpu, struct desc_ptr *dt); 1097 - u64 (*get_dr6)(struct kvm_vcpu *vcpu); 1098 - void (*set_dr6)(struct kvm_vcpu *vcpu, unsigned long value); 1099 1096 void (*sync_dirty_debug_regs)(struct kvm_vcpu *vcpu); 1100 1097 void (*set_dr7)(struct kvm_vcpu *vcpu, unsigned long value); 1101 1098 void (*cache_reg)(struct kvm_vcpu *vcpu, enum kvm_reg reg); ··· 1448 1449 1449 1450 void kvm_queue_exception(struct kvm_vcpu *vcpu, unsigned nr); 1450 1451 void kvm_queue_exception_e(struct kvm_vcpu *vcpu, unsigned nr, u32 error_code); 1452 + void kvm_queue_exception_p(struct kvm_vcpu *vcpu, unsigned nr, unsigned long payload); 1451 1453 void kvm_requeue_exception(struct kvm_vcpu *vcpu, unsigned nr); 1452 1454 void kvm_requeue_exception_e(struct kvm_vcpu *vcpu, unsigned nr, u32 error_code); 1453 1455 void kvm_inject_page_fault(struct kvm_vcpu *vcpu, struct x86_exception *fault);
+6 -1
arch/x86/include/asm/stackprotector.h
··· 55 55 /* 56 56 * Initialize the stackprotector canary value. 57 57 * 58 - * NOTE: this must only be called from functions that never return, 58 + * NOTE: this must only be called from functions that never return 59 59 * and it must always be inlined. 60 + * 61 + * In addition, it should be called from a compilation unit for which 62 + * stack protector is disabled. Alternatively, the caller should not end 63 + * with a function call which gets tail-call optimized as that would 64 + * lead to checking a modified canary value. 60 65 */ 61 66 static __always_inline void boot_init_stack_canary(void) 62 67 {
+8
arch/x86/kernel/smpboot.c
··· 266 266 267 267 wmb(); 268 268 cpu_startup_entry(CPUHP_AP_ONLINE_IDLE); 269 + 270 + /* 271 + * Prevent tail call to cpu_startup_entry() because the stack protector 272 + * guard has been changed a couple of function calls up, in 273 + * boot_init_stack_canary() and must not be checked before tail calling 274 + * another function. 275 + */ 276 + prevent_tail_call_optimization(); 269 277 } 270 278 271 279 /**
+16 -7
arch/x86/kernel/unwind_orc.c
··· 320 320 321 321 unsigned long *unwind_get_return_address_ptr(struct unwind_state *state) 322 322 { 323 + struct task_struct *task = state->task; 324 + 323 325 if (unwind_done(state)) 324 326 return NULL; 325 327 326 328 if (state->regs) 327 329 return &state->regs->ip; 330 + 331 + if (task != current && state->sp == task->thread.sp) { 332 + struct inactive_task_frame *frame = (void *)task->thread.sp; 333 + return &frame->ret_addr; 334 + } 328 335 329 336 if (state->sp) 330 337 return (unsigned long *)state->sp - 1; ··· 624 617 void __unwind_start(struct unwind_state *state, struct task_struct *task, 625 618 struct pt_regs *regs, unsigned long *first_frame) 626 619 { 627 - if (!orc_init) 628 - goto done; 629 - 630 620 memset(state, 0, sizeof(*state)); 631 621 state->task = task; 622 + 623 + if (!orc_init) 624 + goto err; 632 625 633 626 /* 634 627 * Refuse to unwind the stack of a task while it's executing on another ··· 636 629 * checks to prevent it from going off the rails. 637 630 */ 638 631 if (task_on_another_cpu(task)) 639 - goto done; 632 + goto err; 640 633 641 634 if (regs) { 642 635 if (user_mode(regs)) 643 - goto done; 636 + goto the_end; 644 637 645 638 state->ip = regs->ip; 646 639 state->sp = regs->sp; ··· 673 666 * generate some kind of backtrace if this happens. 674 667 */ 675 668 void *next_page = (void *)PAGE_ALIGN((unsigned long)state->sp); 669 + state->error = true; 676 670 if (get_stack_info(next_page, state->task, &state->stack_info, 677 671 &state->stack_mask)) 678 672 return; ··· 699 691 700 692 return; 701 693 702 - done: 694 + err: 695 + state->error = true; 696 + the_end: 703 697 state->stack_info.type = STACK_TYPE_UNKNOWN; 704 - return; 705 698 } 706 699 EXPORT_SYMBOL_GPL(__unwind_start);
+1 -1
arch/x86/kvm/hyperv.c
··· 1427 1427 */ 1428 1428 kvm_make_vcpus_request_mask(kvm, 1429 1429 KVM_REQ_TLB_FLUSH | KVM_REQUEST_NO_WAKEUP, 1430 - vcpu_mask, &hv_vcpu->tlb_flush); 1430 + NULL, vcpu_mask, &hv_vcpu->tlb_flush); 1431 1431 1432 1432 ret_success: 1433 1433 /* We always do full TLB flush, set rep_done = rep_cnt. */
+31 -8
arch/x86/kvm/svm/nested.c
··· 19 19 #include <linux/kernel.h> 20 20 21 21 #include <asm/msr-index.h> 22 + #include <asm/debugreg.h> 22 23 23 24 #include "kvm_emulate.h" 24 25 #include "trace.h" ··· 268 267 svm->vmcb->save.rsp = nested_vmcb->save.rsp; 269 268 svm->vmcb->save.rip = nested_vmcb->save.rip; 270 269 svm->vmcb->save.dr7 = nested_vmcb->save.dr7; 271 - svm->vmcb->save.dr6 = nested_vmcb->save.dr6; 270 + svm->vcpu.arch.dr6 = nested_vmcb->save.dr6; 272 271 svm->vmcb->save.cpl = nested_vmcb->save.cpl; 273 272 274 273 svm->nested.vmcb_msrpm = nested_vmcb->control.msrpm_base_pa & ~0x0fffULL; ··· 483 482 nested_vmcb->save.rsp = vmcb->save.rsp; 484 483 nested_vmcb->save.rax = vmcb->save.rax; 485 484 nested_vmcb->save.dr7 = vmcb->save.dr7; 486 - nested_vmcb->save.dr6 = vmcb->save.dr6; 485 + nested_vmcb->save.dr6 = svm->vcpu.arch.dr6; 487 486 nested_vmcb->save.cpl = vmcb->save.cpl; 488 487 489 488 nested_vmcb->control.int_ctl = vmcb->control.int_ctl; ··· 607 606 /* DB exceptions for our internal use must not cause vmexit */ 608 607 static int nested_svm_intercept_db(struct vcpu_svm *svm) 609 608 { 610 - unsigned long dr6; 609 + unsigned long dr6 = svm->vmcb->save.dr6; 610 + 611 + /* Always catch it and pass it to userspace if debugging. 
*/ 612 + if (svm->vcpu.guest_debug & 613 + (KVM_GUESTDBG_SINGLESTEP | KVM_GUESTDBG_USE_HW_BP)) 614 + return NESTED_EXIT_HOST; 611 615 612 616 /* if we're not singlestepping, it's not ours */ 613 617 if (!svm->nmi_singlestep) 614 - return NESTED_EXIT_DONE; 618 + goto reflected_db; 615 619 616 620 /* if it's not a singlestep exception, it's not ours */ 617 - if (kvm_get_dr(&svm->vcpu, 6, &dr6)) 618 - return NESTED_EXIT_DONE; 619 621 if (!(dr6 & DR6_BS)) 620 - return NESTED_EXIT_DONE; 622 + goto reflected_db; 621 623 622 624 /* if the guest is singlestepping, it should get the vmexit */ 623 625 if (svm->nmi_singlestep_guest_rflags & X86_EFLAGS_TF) { 624 626 disable_nmi_singlestep(svm); 625 - return NESTED_EXIT_DONE; 627 + goto reflected_db; 626 628 } 627 629 628 630 /* it's ours, the nested hypervisor must not see this one */ 629 631 return NESTED_EXIT_HOST; 632 + 633 + reflected_db: 634 + /* 635 + * Synchronize guest DR6 here just like in kvm_deliver_exception_payload; 636 + * it will be moved into the nested VMCB by nested_svm_vmexit. Once 637 + * exceptions are moved to svm_check_nested_events, all this stuff 638 + * will just go away and we can just return NESTED_EXIT_HOST 639 + * unconditionally. db_interception will queue the exception, which 640 + * will be processed by svm_check_nested_events if a nested vmexit is 641 + * required, and we will just use kvm_deliver_exception_payload to copy 642 + * the payload to DR6 before vmexit.
643 + */ 644 + WARN_ON(svm->vcpu.arch.switch_db_regs & KVM_DEBUGREG_WONT_EXIT); 645 + svm->vcpu.arch.dr6 &= ~(DR_TRAP_BITS | DR6_RTM); 646 + svm->vcpu.arch.dr6 |= dr6 & ~DR6_FIXED_1; 647 + return NESTED_EXIT_DONE; 630 648 } 631 649 632 650 static int nested_svm_intercept_ioio(struct vcpu_svm *svm) ··· 702 682 if (svm->nested.intercept_exceptions & excp_bits) { 703 683 if (exit_code == SVM_EXIT_EXCP_BASE + DB_VECTOR) 704 684 vmexit = nested_svm_intercept_db(svm); 685 + else if (exit_code == SVM_EXIT_EXCP_BASE + BP_VECTOR && 686 + svm->vcpu.guest_debug & KVM_GUESTDBG_USE_SW_BP) 687 + vmexit = NESTED_EXIT_HOST; 705 688 else 706 689 vmexit = NESTED_EXIT_DONE; 707 690 }
+22 -14
arch/x86/kvm/svm/svm.c
··· 1672 1672 mark_dirty(svm->vmcb, VMCB_ASID); 1673 1673 } 1674 1674 1675 - static u64 svm_get_dr6(struct kvm_vcpu *vcpu) 1675 + static void svm_set_dr6(struct vcpu_svm *svm, unsigned long value) 1676 1676 { 1677 - return to_svm(vcpu)->vmcb->save.dr6; 1678 - } 1677 + struct vmcb *vmcb = svm->vmcb; 1679 1678 1680 - static void svm_set_dr6(struct kvm_vcpu *vcpu, unsigned long value) 1681 - { 1682 - struct vcpu_svm *svm = to_svm(vcpu); 1683 - 1684 - svm->vmcb->save.dr6 = value; 1685 - mark_dirty(svm->vmcb, VMCB_DR); 1679 + if (unlikely(value != vmcb->save.dr6)) { 1680 + vmcb->save.dr6 = value; 1681 + mark_dirty(vmcb, VMCB_DR); 1682 + } 1686 1683 } 1687 1684 1688 1685 static void svm_sync_dirty_debug_regs(struct kvm_vcpu *vcpu) ··· 1690 1693 get_debugreg(vcpu->arch.db[1], 1); 1691 1694 get_debugreg(vcpu->arch.db[2], 2); 1692 1695 get_debugreg(vcpu->arch.db[3], 3); 1693 - vcpu->arch.dr6 = svm_get_dr6(vcpu); 1696 + /* 1697 + * We cannot reset svm->vmcb->save.dr6 to DR6_FIXED_1|DR6_RTM here, 1698 + * because db_interception might need it. We can do it before vmentry. 1699 + */ 1700 + vcpu->arch.dr6 = svm->vmcb->save.dr6; 1694 1701 vcpu->arch.dr7 = svm->vmcb->save.dr7; 1695 - 1696 1702 vcpu->arch.switch_db_regs &= ~KVM_DEBUGREG_WONT_EXIT; 1697 1703 set_dr_intercepts(svm); 1698 1704 } ··· 1739 1739 if (!(svm->vcpu.guest_debug & 1740 1740 (KVM_GUESTDBG_SINGLESTEP | KVM_GUESTDBG_USE_HW_BP)) && 1741 1741 !svm->nmi_singlestep) { 1742 - kvm_queue_exception(&svm->vcpu, DB_VECTOR); 1742 + u32 payload = (svm->vmcb->save.dr6 ^ DR6_RTM) & ~DR6_FIXED_1; 1743 + kvm_queue_exception_p(&svm->vcpu, DB_VECTOR, payload); 1743 1744 return 1; 1744 1745 } 1745 1746 ··· 3318 3317 3319 3318 svm->vmcb->save.cr2 = vcpu->arch.cr2; 3320 3319 3320 + /* 3321 + * Run with all-zero DR6 unless needed, so that we can get the exact cause 3322 + * of a #DB. 
3323 + */ 3324 + if (unlikely(svm->vcpu.arch.switch_db_regs & KVM_DEBUGREG_WONT_EXIT)) 3325 + svm_set_dr6(svm, vcpu->arch.dr6); 3326 + else 3327 + svm_set_dr6(svm, DR6_FIXED_1 | DR6_RTM); 3328 + 3321 3329 clgi(); 3322 3330 kvm_load_guest_xsave_state(vcpu); 3323 3331 ··· 3941 3931 .set_idt = svm_set_idt, 3942 3932 .get_gdt = svm_get_gdt, 3943 3933 .set_gdt = svm_set_gdt, 3944 - .get_dr6 = svm_get_dr6, 3945 - .set_dr6 = svm_set_dr6, 3946 3934 .set_dr7 = svm_set_dr7, 3947 3935 .sync_dirty_debug_regs = svm_sync_dirty_debug_regs, 3948 3936 .cache_reg = svm_cache_reg,
+4 -37
arch/x86/kvm/vmx/vmx.c
··· 1372 1372 1373 1373 vmx_vcpu_pi_load(vcpu, cpu); 1374 1374 1375 - vmx->host_pkru = read_pkru(); 1376 1375 vmx->host_debugctlmsr = get_debugctlmsr(); 1377 1376 } 1378 1377 ··· 4676 4677 dr6 = vmcs_readl(EXIT_QUALIFICATION); 4677 4678 if (!(vcpu->guest_debug & 4678 4679 (KVM_GUESTDBG_SINGLESTEP | KVM_GUESTDBG_USE_HW_BP))) { 4679 - vcpu->arch.dr6 &= ~DR_TRAP_BITS; 4680 - vcpu->arch.dr6 |= dr6 | DR6_RTM; 4681 4680 if (is_icebp(intr_info)) 4682 4681 WARN_ON(!skip_emulated_instruction(vcpu)); 4683 4682 4684 - kvm_queue_exception(vcpu, DB_VECTOR); 4683 + kvm_queue_exception_p(vcpu, DB_VECTOR, dr6); 4685 4684 return 1; 4686 4685 } 4687 - kvm_run->debug.arch.dr6 = dr6 | DR6_FIXED_1; 4686 + kvm_run->debug.arch.dr6 = dr6 | DR6_FIXED_1 | DR6_RTM; 4688 4687 kvm_run->debug.arch.dr7 = vmcs_readl(GUEST_DR7); 4689 4688 /* fall through */ 4690 4689 case BP_VECTOR: ··· 4926 4929 * guest debugging itself. 4927 4930 */ 4928 4931 if (vcpu->guest_debug & KVM_GUESTDBG_USE_HW_BP) { 4929 - vcpu->run->debug.arch.dr6 = vcpu->arch.dr6; 4932 + vcpu->run->debug.arch.dr6 = DR6_BD | DR6_RTM | DR6_FIXED_1; 4930 4933 vcpu->run->debug.arch.dr7 = dr7; 4931 4934 vcpu->run->debug.arch.pc = kvm_get_linear_rip(vcpu); 4932 4935 vcpu->run->debug.arch.exception = DB_VECTOR; 4933 4936 vcpu->run->exit_reason = KVM_EXIT_DEBUG; 4934 4937 return 0; 4935 4938 } else { 4936 - vcpu->arch.dr6 &= ~DR_TRAP_BITS; 4937 - vcpu->arch.dr6 |= DR6_BD | DR6_RTM; 4938 - kvm_queue_exception(vcpu, DB_VECTOR); 4939 + kvm_queue_exception_p(vcpu, DB_VECTOR, DR6_BD); 4939 4940 return 1; 4940 4941 } 4941 4942 } ··· 4962 4967 return 1; 4963 4968 4964 4969 return kvm_skip_emulated_instruction(vcpu); 4965 - } 4966 - 4967 - static u64 vmx_get_dr6(struct kvm_vcpu *vcpu) 4968 - { 4969 - return vcpu->arch.dr6; 4970 - } 4971 - 4972 - static void vmx_set_dr6(struct kvm_vcpu *vcpu, unsigned long val) 4973 - { 4974 4970 } 4975 4971 4976 4972 static void vmx_sync_dirty_debug_regs(struct kvm_vcpu *vcpu) ··· 6563 6577 6564 6578 
kvm_load_guest_xsave_state(vcpu); 6565 6579 6566 - if (static_cpu_has(X86_FEATURE_PKU) && 6567 - kvm_read_cr4_bits(vcpu, X86_CR4_PKE) && 6568 - vcpu->arch.pkru != vmx->host_pkru) 6569 - __write_pkru(vcpu->arch.pkru); 6570 - 6571 6580 pt_guest_enter(vmx); 6572 6581 6573 6582 if (vcpu_to_pmu(vcpu)->version) ··· 6651 6670 vcpu->arch.regs_dirty = 0; 6652 6671 6653 6672 pt_guest_exit(vmx); 6654 - 6655 - /* 6656 - * eager fpu is enabled if PKEY is supported and CR4 is switched 6657 - * back on host, so it is safe to read guest PKRU from current 6658 - * XSAVE. 6659 - */ 6660 - if (static_cpu_has(X86_FEATURE_PKU) && 6661 - kvm_read_cr4_bits(vcpu, X86_CR4_PKE)) { 6662 - vcpu->arch.pkru = rdpkru(); 6663 - if (vcpu->arch.pkru != vmx->host_pkru) 6664 - __write_pkru(vmx->host_pkru); 6665 - } 6666 6673 6667 6674 kvm_load_host_xsave_state(vcpu); 6668 6675 ··· 7709 7740 .set_idt = vmx_set_idt, 7710 7741 .get_gdt = vmx_get_gdt, 7711 7742 .set_gdt = vmx_set_gdt, 7712 - .get_dr6 = vmx_get_dr6, 7713 - .set_dr6 = vmx_set_dr6, 7714 7743 .set_dr7 = vmx_set_dr7, 7715 7744 .sync_dirty_debug_regs = vmx_sync_dirty_debug_regs, 7716 7745 .cache_reg = vmx_cache_reg,
+37 -23
arch/x86/kvm/x86.c
··· 572 572 } 573 573 EXPORT_SYMBOL_GPL(kvm_requeue_exception); 574 574 575 - static void kvm_queue_exception_p(struct kvm_vcpu *vcpu, unsigned nr, 576 - unsigned long payload) 575 + void kvm_queue_exception_p(struct kvm_vcpu *vcpu, unsigned nr, 576 + unsigned long payload) 577 577 { 578 578 kvm_multiple_exception(vcpu, nr, false, 0, true, payload, false); 579 579 } 580 + EXPORT_SYMBOL_GPL(kvm_queue_exception_p); 580 581 581 582 static void kvm_queue_exception_e_p(struct kvm_vcpu *vcpu, unsigned nr, 582 583 u32 error_code, unsigned long payload) ··· 837 836 vcpu->arch.ia32_xss != host_xss) 838 837 wrmsrl(MSR_IA32_XSS, vcpu->arch.ia32_xss); 839 838 } 839 + 840 + if (static_cpu_has(X86_FEATURE_PKU) && 841 + (kvm_read_cr4_bits(vcpu, X86_CR4_PKE) || 842 + (vcpu->arch.xcr0 & XFEATURE_MASK_PKRU)) && 843 + vcpu->arch.pkru != vcpu->arch.host_pkru) 844 + __write_pkru(vcpu->arch.pkru); 840 845 } 841 846 EXPORT_SYMBOL_GPL(kvm_load_guest_xsave_state); 842 847 843 848 void kvm_load_host_xsave_state(struct kvm_vcpu *vcpu) 844 849 { 850 + if (static_cpu_has(X86_FEATURE_PKU) && 851 + (kvm_read_cr4_bits(vcpu, X86_CR4_PKE) || 852 + (vcpu->arch.xcr0 & XFEATURE_MASK_PKRU))) { 853 + vcpu->arch.pkru = rdpkru(); 854 + if (vcpu->arch.pkru != vcpu->arch.host_pkru) 855 + __write_pkru(vcpu->arch.host_pkru); 856 + } 857 + 845 858 if (kvm_read_cr4_bits(vcpu, X86_CR4_OSXSAVE)) { 846 859 847 860 if (vcpu->arch.xcr0 != host_xcr0) ··· 1060 1045 } 1061 1046 } 1062 1047 1063 - static void kvm_update_dr6(struct kvm_vcpu *vcpu) 1064 - { 1065 - if (!(vcpu->guest_debug & KVM_GUESTDBG_USE_HW_BP)) 1066 - kvm_x86_ops.set_dr6(vcpu, vcpu->arch.dr6); 1067 - } 1068 - 1069 1048 static void kvm_update_dr7(struct kvm_vcpu *vcpu) 1070 1049 { 1071 1050 unsigned long dr7; ··· 1099 1090 if (val & 0xffffffff00000000ULL) 1100 1091 return -1; /* #GP */ 1101 1092 vcpu->arch.dr6 = (val & DR6_VOLATILE) | kvm_dr6_fixed(vcpu); 1102 - kvm_update_dr6(vcpu); 1103 1093 break; 1104 1094 case 5: 1105 1095 /* fall through */ ··· 
1134 1126 case 4: 1135 1127 /* fall through */ 1136 1128 case 6: 1137 - if (vcpu->guest_debug & KVM_GUESTDBG_USE_HW_BP) 1138 - *val = vcpu->arch.dr6; 1139 - else 1140 - *val = kvm_x86_ops.get_dr6(vcpu); 1129 + *val = vcpu->arch.dr6; 1141 1130 break; 1142 1131 case 5: 1143 1132 /* fall through */ ··· 3563 3558 3564 3559 kvm_x86_ops.vcpu_load(vcpu, cpu); 3565 3560 3561 + /* Save host pkru register if supported */ 3562 + vcpu->arch.host_pkru = read_pkru(); 3563 + 3566 3564 /* Apply any externally detected TSC adjustments (due to suspend) */ 3567 3565 if (unlikely(vcpu->arch.tsc_offset_adjustment)) { 3568 3566 adjust_tsc_offset_host(vcpu, vcpu->arch.tsc_offset_adjustment); ··· 3759 3751 unsigned bank_num = mcg_cap & 0xff, bank; 3760 3752 3761 3753 r = -EINVAL; 3762 - if (!bank_num || bank_num >= KVM_MAX_MCE_BANKS) 3754 + if (!bank_num || bank_num > KVM_MAX_MCE_BANKS) 3763 3755 goto out; 3764 3756 if (mcg_cap & ~(kvm_mce_cap_supported | 0xff | 0xff0000)) 3765 3757 goto out; ··· 4017 4009 memcpy(vcpu->arch.db, dbgregs->db, sizeof(vcpu->arch.db)); 4018 4010 kvm_update_dr0123(vcpu); 4019 4011 vcpu->arch.dr6 = dbgregs->dr6; 4020 - kvm_update_dr6(vcpu); 4021 4012 vcpu->arch.dr7 = dbgregs->dr7; 4022 4013 kvm_update_dr7(vcpu); 4023 4014 ··· 6666 6659 6667 6660 if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP) { 6668 6661 kvm_run->debug.arch.dr6 = DR6_BS | DR6_FIXED_1 | DR6_RTM; 6669 - kvm_run->debug.arch.pc = vcpu->arch.singlestep_rip; 6662 + kvm_run->debug.arch.pc = kvm_get_linear_rip(vcpu); 6670 6663 kvm_run->debug.arch.exception = DB_VECTOR; 6671 6664 kvm_run->exit_reason = KVM_EXIT_DEBUG; 6672 6665 return 0; ··· 6726 6719 vcpu->arch.db); 6727 6720 6728 6721 if (dr6 != 0) { 6729 - vcpu->arch.dr6 &= ~DR_TRAP_BITS; 6730 - vcpu->arch.dr6 |= dr6 | DR6_RTM; 6731 - kvm_queue_exception(vcpu, DB_VECTOR); 6722 + kvm_queue_exception_p(vcpu, DB_VECTOR, dr6); 6732 6723 *r = 1; 6733 6724 return true; 6734 6725 } ··· 8047 8042 zalloc_cpumask_var(&cpus, GFP_ATOMIC); 8048 8043 8049 8044 
kvm_make_vcpus_request_mask(kvm, KVM_REQ_SCAN_IOAPIC, 8050 - vcpu_bitmap, cpus); 8045 + NULL, vcpu_bitmap, cpus); 8051 8046 8052 8047 free_cpumask_var(cpus); 8053 8048 } ··· 8077 8072 */ 8078 8073 void kvm_request_apicv_update(struct kvm *kvm, bool activate, ulong bit) 8079 8074 { 8075 + struct kvm_vcpu *except; 8080 8076 unsigned long old, new, expected; 8081 8077 8082 8078 if (!kvm_x86_ops.check_apicv_inhibit_reasons || ··· 8102 8096 trace_kvm_apicv_update_request(activate, bit); 8103 8097 if (kvm_x86_ops.pre_update_apicv_exec_ctrl) 8104 8098 kvm_x86_ops.pre_update_apicv_exec_ctrl(kvm, activate); 8105 - kvm_make_all_cpus_request(kvm, KVM_REQ_APICV_UPDATE); 8099 + 8100 + /* 8101 + * Send the request to update APICV to all other vcpus, 8102 + * while updating the calling vcpu immediately instead of 8103 + * waiting for another #VMEXIT to handle the request. 8104 + */ 8105 + except = kvm_get_running_vcpu(); 8106 + kvm_make_all_cpus_request_except(kvm, KVM_REQ_APICV_UPDATE, 8107 + except); 8108 + if (except) 8109 + kvm_vcpu_update_apicv(except); 8106 8110 } 8107 8111 EXPORT_SYMBOL_GPL(kvm_request_apicv_update); 8108 8112 ··· 8436 8420 WARN_ON(vcpu->guest_debug & KVM_GUESTDBG_USE_HW_BP); 8437 8421 kvm_x86_ops.sync_dirty_debug_regs(vcpu); 8438 8422 kvm_update_dr0123(vcpu); 8439 - kvm_update_dr6(vcpu); 8440 8423 kvm_update_dr7(vcpu); 8441 8424 vcpu->arch.switch_db_regs &= ~KVM_DEBUGREG_RELOAD; 8442 8425 } ··· 9496 9481 memset(vcpu->arch.db, 0, sizeof(vcpu->arch.db)); 9497 9482 kvm_update_dr0123(vcpu); 9498 9483 vcpu->arch.dr6 = DR6_INIT; 9499 - kvm_update_dr6(vcpu); 9500 9484 vcpu->arch.dr7 = DR7_FIXED_1; 9501 9485 kvm_update_dr7(vcpu); 9502 9486
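The PKRU handling moved into `kvm_load_guest_xsave_state()`/`kvm_load_host_xsave_state()` is a lazy register swap: snapshot the host value around entry and write the register only when guest and host values differ. A minimal sketch of the pattern (`hw_reg`, `load_guest` and friends are invented for illustration, not KVM API):

```c
/* Pretend hardware register, plus a counter showing that writes
 * only happen when the values actually differ. */
static unsigned int hw_reg;
static int hw_writes;

static void write_reg(unsigned int val)
{
	hw_reg = val;
	hw_writes++;
}

struct vcpu { unsigned int guest_val, host_val; };

static void load_guest(struct vcpu *v)
{
	v->host_val = hw_reg;		/* snapshot host value */
	if (v->guest_val != v->host_val)
		write_reg(v->guest_val);
}

static void load_host(struct vcpu *v)
{
	v->guest_val = hw_reg;		/* guest may have changed it */
	if (v->guest_val != v->host_val)
		write_reg(v->host_val);
}
```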
+2 -2
arch/x86/mm/mmio-mod.c
··· 372 372 int cpu; 373 373 int err; 374 374 375 - if (downed_cpus == NULL && 375 + if (!cpumask_available(downed_cpus) && 376 376 !alloc_cpumask_var(&downed_cpus, GFP_KERNEL)) { 377 377 pr_notice("Failed to allocate mask\n"); 378 378 goto out; ··· 402 402 int cpu; 403 403 int err; 404 404 405 - if (downed_cpus == NULL || cpumask_weight(downed_cpus) == 0) 405 + if (!cpumask_available(downed_cpus) || cpumask_weight(downed_cpus) == 0) 406 406 return; 407 407 pr_notice("Re-enabling CPUs...\n"); 408 408 for_each_cpu(cpu, downed_cpus) {
+1
arch/x86/xen/smp_pv.c
··· 93 93 cpu_bringup(); 94 94 boot_init_stack_canary(); 95 95 cpu_startup_entry(CPUHP_AP_ONLINE_IDLE); 96 + prevent_tail_call_optimization(); 96 97 } 97 98 98 99 void xen_smp_intr_free_pv(unsigned int cpu)
+5 -1
drivers/acpi/ec.c
··· 2016 2016 * to allow the caller to process events properly after that. 2017 2017 */ 2018 2018 ret = acpi_dispatch_gpe(NULL, first_ec->gpe); 2019 - if (ret == ACPI_INTERRUPT_HANDLED) 2019 + if (ret == ACPI_INTERRUPT_HANDLED) { 2020 2020 pm_pr_dbg("EC GPE dispatched\n"); 2021 + 2022 + /* Flush the event and query workqueues. */ 2023 + acpi_ec_flush_work(); 2024 + } 2021 2025 2022 2026 return false; 2023 2027 }
+4 -11
drivers/acpi/sleep.c
··· 980 980 return 0; 981 981 } 982 982 983 - static void acpi_s2idle_sync(void) 984 - { 985 - /* The EC driver uses special workqueues that need to be flushed. */ 986 - acpi_ec_flush_work(); 987 - acpi_os_wait_events_complete(); /* synchronize Notify handling */ 988 - } 989 - 990 983 static bool acpi_s2idle_wake(void) 991 984 { 992 985 if (!acpi_sci_irq_valid()) ··· 1011 1018 return true; 1012 1019 1013 1020 /* 1014 - * Cancel the wakeup and process all pending events in case 1021 + * Cancel the SCI wakeup and process all pending events in case 1015 1022 * there are any wakeup ones in there. 1016 1023 * 1017 1024 * Note that if any non-EC GPEs are active at this point, the ··· 1019 1026 * should be missed by canceling the wakeup here. 1020 1027 */ 1021 1028 pm_system_cancel_wakeup(); 1022 - 1023 - acpi_s2idle_sync(); 1029 + acpi_os_wait_events_complete(); 1024 1030 1025 1031 /* 1026 1032 * The SCI is in the "suspended" state now and it cannot produce ··· 1052 1060 * of GPEs. 1053 1061 */ 1054 1062 acpi_os_wait_events_complete(); /* synchronize GPE processing */ 1055 - acpi_s2idle_sync(); 1063 + acpi_ec_flush_work(); /* flush the EC driver's workqueues */ 1064 + acpi_os_wait_events_complete(); /* synchronize Notify handling */ 1056 1065 1057 1066 s2idle_wakeup = false; 1058 1067
+37 -18
drivers/base/core.c
··· 365 365 link->flags |= DL_FLAG_STATELESS; 366 366 goto reorder; 367 367 } else { 368 + link->flags |= DL_FLAG_STATELESS; 368 369 goto out; 369 370 } 370 371 } ··· 434 433 flags & DL_FLAG_PM_RUNTIME) 435 434 pm_runtime_resume(supplier); 436 435 436 + list_add_tail_rcu(&link->s_node, &supplier->links.consumers); 437 + list_add_tail_rcu(&link->c_node, &consumer->links.suppliers); 438 + 437 439 if (flags & DL_FLAG_SYNC_STATE_ONLY) { 438 440 dev_dbg(consumer, 439 441 "Linked as a sync state only consumer to %s\n", 440 442 dev_name(supplier)); 441 443 goto out; 442 444 } 445 + 443 446 reorder: 444 447 /* 445 448 * Move the consumer and all of the devices depending on it to the end ··· 454 449 */ 455 450 device_reorder_to_tail(consumer, NULL); 456 451 457 - list_add_tail_rcu(&link->s_node, &supplier->links.consumers); 458 - list_add_tail_rcu(&link->c_node, &consumer->links.suppliers); 459 - 460 452 dev_dbg(consumer, "Linked as a consumer to %s\n", dev_name(supplier)); 461 453 462 - out: 454 + out: 463 455 device_pm_unlock(); 464 456 device_links_write_unlock(); 465 457 ··· 831 829 list_add_tail(&sup->links.defer_sync, &deferred_sync); 832 830 } 833 831 832 + static void device_link_drop_managed(struct device_link *link) 833 + { 834 + link->flags &= ~DL_FLAG_MANAGED; 835 + WRITE_ONCE(link->status, DL_STATE_NONE); 836 + kref_put(&link->kref, __device_link_del); 837 + } 838 + 834 839 /** 835 840 * device_links_driver_bound - Update device links after probing its driver. 836 841 * @dev: Device to update the links for. 
··· 851 842 */ 852 843 void device_links_driver_bound(struct device *dev) 853 844 { 854 - struct device_link *link; 845 + struct device_link *link, *ln; 855 846 LIST_HEAD(sync_list); 856 847 857 848 /* ··· 891 882 else 892 883 __device_links_queue_sync_state(dev, &sync_list); 893 884 894 - list_for_each_entry(link, &dev->links.suppliers, c_node) { 885 + list_for_each_entry_safe(link, ln, &dev->links.suppliers, c_node) { 886 + struct device *supplier; 887 + 895 888 if (!(link->flags & DL_FLAG_MANAGED)) 896 889 continue; 897 890 898 - WARN_ON(link->status != DL_STATE_CONSUMER_PROBE); 899 - WRITE_ONCE(link->status, DL_STATE_ACTIVE); 891 + supplier = link->supplier; 892 + if (link->flags & DL_FLAG_SYNC_STATE_ONLY) { 893 + /* 894 + * When DL_FLAG_SYNC_STATE_ONLY is set, it means no 895 + * other DL_MANAGED_LINK_FLAGS have been set. So, it's 896 + * safe to drop the managed link completely. 897 + */ 898 + device_link_drop_managed(link); 899 + } else { 900 + WARN_ON(link->status != DL_STATE_CONSUMER_PROBE); 901 + WRITE_ONCE(link->status, DL_STATE_ACTIVE); 902 + } 900 903 904 + /* 905 + * This needs to be done even for the deleted 906 + * DL_FLAG_SYNC_STATE_ONLY device link in case it was the last 907 + * device link that was preventing the supplier from getting a 908 + * sync_state() call. 
909 + */ 901 910 if (defer_sync_state_count) 902 - __device_links_supplier_defer_sync(link->supplier); 911 + __device_links_supplier_defer_sync(supplier); 903 912 else 904 - __device_links_queue_sync_state(link->supplier, 905 - &sync_list); 913 + __device_links_queue_sync_state(supplier, &sync_list); 906 914 } 907 915 908 916 dev->links.status = DL_DEV_DRIVER_BOUND; ··· 927 901 device_links_write_unlock(); 928 902 929 903 device_links_flush_sync_list(&sync_list, dev); 930 - } 931 - 932 - static void device_link_drop_managed(struct device_link *link) 933 - { 934 - link->flags &= ~DL_FLAG_MANAGED; 935 - WRITE_ONCE(link->status, DL_STATE_NONE); 936 - kref_put(&link->kref, __device_link_del); 937 904 } 938 905 939 906 /**
+7
drivers/block/null_blk_main.c
··· 1535 1535 { 1536 1536 if (nullb->dev->discard == false) 1537 1537 return; 1538 + 1539 + if (nullb->dev->zoned) { 1540 + nullb->dev->discard = false; 1541 + pr_info("discard option is ignored in zoned mode\n"); 1542 + return; 1543 + } 1544 + 1538 1545 nullb->q->limits.discard_granularity = nullb->dev->blocksize; 1539 1546 nullb->q->limits.discard_alignment = nullb->dev->blocksize; 1540 1547 blk_queue_max_discard_sectors(nullb->q, UINT_MAX >> 9);
+4
drivers/block/null_blk_zoned.c
··· 23 23 pr_err("zone_size must be power-of-two\n"); 24 24 return -EINVAL; 25 25 } 26 + if (dev->zone_size > dev->size) { 27 + pr_err("Zone size larger than device capacity\n"); 28 + return -EINVAL; 29 + } 26 30 27 31 dev->zone_size_sects = dev->zone_size << ZONE_SIZE_SHIFT; 28 32 dev->nr_zones = dev_size >>
+2
drivers/bus/mhi/core/init.c
··· 291 291 } 292 292 293 293 /* Setup cmd context */ 294 + ret = -ENOMEM; 294 295 mhi_ctxt->cmd_ctxt = mhi_alloc_coherent(mhi_cntrl, 295 296 sizeof(*mhi_ctxt->cmd_ctxt) * 296 297 NR_OF_CMD_RINGS, ··· 1101 1100 } 1102 1101 } 1103 1102 1103 + ret = -EINVAL; 1104 1104 if (dl_chan) { 1105 1105 /* 1106 1106 * If channel supports LPM notifications then status_cb should
+2 -2
drivers/char/ipmi/ipmi_ssif.c
··· 1947 1947 if (adev->type != &i2c_adapter_type) 1948 1948 return 0; 1949 1949 1950 - addr_info->added_client = i2c_new_device(to_i2c_adapter(adev), 1951 - &addr_info->binfo); 1950 + addr_info->added_client = i2c_new_client_device(to_i2c_adapter(adev), 1951 + &addr_info->binfo); 1952 1952 1953 1953 if (!addr_info->adapter_name) 1954 1954 return 1; /* Only try the first I2C adapter by default. */
+3
drivers/clk/clk.c
··· 3519 3519 out: 3520 3520 clk_pm_runtime_put(core); 3521 3521 unlock: 3522 + if (ret) 3523 + hlist_del_init(&core->child_node); 3524 + 3522 3525 clk_prepare_unlock(); 3523 3526 3524 3527 if (!ret)
+4 -13
drivers/clk/rockchip/clk-rk3228.c
··· 156 156 PNAME(mux_i2s2_p) = { "i2s2_src", "i2s2_frac", "xin12m" }; 157 157 PNAME(mux_sclk_spdif_p) = { "sclk_spdif_src", "spdif_frac", "xin12m" }; 158 158 159 - PNAME(mux_aclk_gpu_pre_p) = { "cpll_gpu", "gpll_gpu", "hdmiphy_gpu", "usb480m_gpu" }; 160 - 161 159 PNAME(mux_uart0_p) = { "uart0_src", "uart0_frac", "xin24m" }; 162 160 PNAME(mux_uart1_p) = { "uart1_src", "uart1_frac", "xin24m" }; 163 161 PNAME(mux_uart2_p) = { "uart2_src", "uart2_frac", "xin24m" }; ··· 466 468 RK2928_CLKSEL_CON(24), 6, 10, DFLAGS, 467 469 RK2928_CLKGATE_CON(2), 8, GFLAGS), 468 470 469 - GATE(0, "cpll_gpu", "cpll", 0, 471 + COMPOSITE(0, "aclk_gpu_pre", mux_pll_src_4plls_p, 0, 472 + RK2928_CLKSEL_CON(34), 5, 2, MFLAGS, 0, 5, DFLAGS, 470 473 RK2928_CLKGATE_CON(3), 13, GFLAGS), 471 - GATE(0, "gpll_gpu", "gpll", 0, 472 - RK2928_CLKGATE_CON(3), 13, GFLAGS), 473 - GATE(0, "hdmiphy_gpu", "hdmiphy", 0, 474 - RK2928_CLKGATE_CON(3), 13, GFLAGS), 475 - GATE(0, "usb480m_gpu", "usb480m", 0, 476 - RK2928_CLKGATE_CON(3), 13, GFLAGS), 477 - COMPOSITE_NOGATE(0, "aclk_gpu_pre", mux_aclk_gpu_pre_p, 0, 478 - RK2928_CLKSEL_CON(34), 5, 2, MFLAGS, 0, 5, DFLAGS), 479 474 480 475 COMPOSITE(SCLK_SPI0, "sclk_spi0", mux_pll_src_2plls_p, 0, 481 476 RK2928_CLKSEL_CON(25), 8, 1, MFLAGS, 0, 7, DFLAGS, ··· 573 582 GATE(0, "pclk_peri_noc", "pclk_peri", CLK_IGNORE_UNUSED, RK2928_CLKGATE_CON(12), 2, GFLAGS), 574 583 575 584 /* PD_GPU */ 576 - GATE(ACLK_GPU, "aclk_gpu", "aclk_gpu_pre", 0, RK2928_CLKGATE_CON(13), 14, GFLAGS), 577 - GATE(0, "aclk_gpu_noc", "aclk_gpu_pre", 0, RK2928_CLKGATE_CON(13), 15, GFLAGS), 585 + GATE(ACLK_GPU, "aclk_gpu", "aclk_gpu_pre", 0, RK2928_CLKGATE_CON(7), 14, GFLAGS), 586 + GATE(0, "aclk_gpu_noc", "aclk_gpu_pre", 0, RK2928_CLKGATE_CON(7), 15, GFLAGS), 578 587 579 588 /* PD_BUS */ 580 589 GATE(0, "sclk_initmem_mbist", "aclk_cpu", 0, RK2928_CLKGATE_CON(8), 1, GFLAGS),
+1 -1
drivers/clk/tegra/clk-tegra124.c
··· 1292 1292 { TEGRA124_CLK_UARTB, TEGRA124_CLK_PLL_P, 408000000, 0 }, 1293 1293 { TEGRA124_CLK_UARTC, TEGRA124_CLK_PLL_P, 408000000, 0 }, 1294 1294 { TEGRA124_CLK_UARTD, TEGRA124_CLK_PLL_P, 408000000, 0 }, 1295 - { TEGRA124_CLK_PLL_A, TEGRA124_CLK_CLK_MAX, 564480000, 0 }, 1295 + { TEGRA124_CLK_PLL_A, TEGRA124_CLK_CLK_MAX, 282240000, 0 }, 1296 1296 { TEGRA124_CLK_PLL_A_OUT0, TEGRA124_CLK_CLK_MAX, 11289600, 0 }, 1297 1297 { TEGRA124_CLK_I2S0, TEGRA124_CLK_PLL_A_OUT0, 11289600, 0 }, 1298 1298 { TEGRA124_CLK_I2S1, TEGRA124_CLK_PLL_A_OUT0, 11289600, 0 },
+1 -1
drivers/clk/ti/clk-33xx.c
··· 212 212 }; 213 213 214 214 static const struct omap_clkctrl_reg_data am3_l4_rtc_clkctrl_regs[] __initconst = { 215 - { AM3_L4_RTC_RTC_CLKCTRL, NULL, CLKF_SW_SUP, "clk_32768_ck" }, 215 + { AM3_L4_RTC_RTC_CLKCTRL, NULL, CLKF_SW_SUP, "clk-24mhz-clkctrl:0000:0" }, 216 216 { 0 }, 217 217 }; 218 218
+48 -51
drivers/clk/ti/clkctrl.c
··· 255 255 return entry->clk; 256 256 } 257 257 258 + /* Get clkctrl clock base name based on clkctrl_name or dts node */ 259 + static const char * __init clkctrl_get_clock_name(struct device_node *np, 260 + const char *clkctrl_name, 261 + int offset, int index, 262 + bool legacy_naming) 263 + { 264 + char *clock_name; 265 + 266 + /* l4per-clkctrl:1234:0 style naming based on clkctrl_name */ 267 + if (clkctrl_name && !legacy_naming) { 268 + clock_name = kasprintf(GFP_KERNEL, "%s-clkctrl:%04x:%d", 269 + clkctrl_name, offset, index); 270 + strreplace(clock_name, '_', '-'); 271 + 272 + return clock_name; 273 + } 274 + 275 + /* l4per:1234:0 old style naming based on clkctrl_name */ 276 + if (clkctrl_name) 277 + return kasprintf(GFP_KERNEL, "%s_cm:clk:%04x:%d", 278 + clkctrl_name, offset, index); 279 + 280 + /* l4per_cm:1234:0 old style naming based on parent node name */ 281 + if (legacy_naming) 282 + return kasprintf(GFP_KERNEL, "%pOFn:clk:%04x:%d", 283 + np->parent, offset, index); 284 + 285 + /* l4per-clkctrl:1234:0 style naming based on node name */ 286 + return kasprintf(GFP_KERNEL, "%pOFn:%04x:%d", np, offset, index); 287 + } 288 + 258 289 static int __init 259 290 _ti_clkctrl_clk_register(struct omap_clkctrl_provider *provider, 260 291 struct device_node *node, struct clk_hw *clk_hw, 261 292 u16 offset, u8 bit, const char * const *parents, 262 - int num_parents, const struct clk_ops *ops) 293 + int num_parents, const struct clk_ops *ops, 294 + const char *clkctrl_name) 263 295 { 264 296 struct clk_init_data init = { NULL }; 265 297 struct clk *clk; 266 298 struct omap_clkctrl_clk *clkctrl_clk; 267 299 int ret = 0; 268 300 269 - if (ti_clk_get_features()->flags & TI_CLK_CLKCTRL_COMPAT) 270 - init.name = kasprintf(GFP_KERNEL, "%pOFn:%pOFn:%04x:%d", 271 - node->parent, node, offset, 272 - bit); 273 - else 274 - init.name = kasprintf(GFP_KERNEL, "%pOFn:%04x:%d", node, 275 - offset, bit); 301 + init.name = clkctrl_get_clock_name(node, clkctrl_name, offset, bit, 302 
+ ti_clk_get_features()->flags & 303 + TI_CLK_CLKCTRL_COMPAT); 304 + 276 305 clkctrl_clk = kzalloc(sizeof(*clkctrl_clk), GFP_KERNEL); 277 306 if (!init.name || !clkctrl_clk) { 278 307 ret = -ENOMEM; ··· 338 309 _ti_clkctrl_setup_gate(struct omap_clkctrl_provider *provider, 339 310 struct device_node *node, u16 offset, 340 311 const struct omap_clkctrl_bit_data *data, 341 - void __iomem *reg) 312 + void __iomem *reg, const char *clkctrl_name) 342 313 { 343 314 struct clk_hw_omap *clk_hw; 344 315 ··· 351 322 352 323 if (_ti_clkctrl_clk_register(provider, node, &clk_hw->hw, offset, 353 324 data->bit, data->parents, 1, 354 - &omap_gate_clk_ops)) 325 + &omap_gate_clk_ops, clkctrl_name)) 355 326 kfree(clk_hw); 356 327 } 357 328 ··· 359 330 _ti_clkctrl_setup_mux(struct omap_clkctrl_provider *provider, 360 331 struct device_node *node, u16 offset, 361 332 const struct omap_clkctrl_bit_data *data, 362 - void __iomem *reg) 333 + void __iomem *reg, const char *clkctrl_name) 363 334 { 364 335 struct clk_omap_mux *mux; 365 336 int num_parents = 0; ··· 386 357 387 358 if (_ti_clkctrl_clk_register(provider, node, &mux->hw, offset, 388 359 data->bit, data->parents, num_parents, 389 - &ti_clk_mux_ops)) 360 + &ti_clk_mux_ops, clkctrl_name)) 390 361 kfree(mux); 391 362 } 392 363 ··· 394 365 _ti_clkctrl_setup_div(struct omap_clkctrl_provider *provider, 395 366 struct device_node *node, u16 offset, 396 367 const struct omap_clkctrl_bit_data *data, 397 - void __iomem *reg) 368 + void __iomem *reg, const char *clkctrl_name) 398 369 { 399 370 struct clk_omap_divider *div; 400 371 const struct omap_clkctrl_div_data *div_data = data->data; ··· 422 393 423 394 if (_ti_clkctrl_clk_register(provider, node, &div->hw, offset, 424 395 data->bit, data->parents, 1, 425 - &ti_clk_divider_ops)) 396 + &ti_clk_divider_ops, clkctrl_name)) 426 397 kfree(div); 427 398 } 428 399 ··· 430 401 _ti_clkctrl_setup_subclks(struct omap_clkctrl_provider *provider, 431 402 struct device_node *node, 432 403 const 
struct omap_clkctrl_reg_data *data, 433 - void __iomem *reg) 404 + void __iomem *reg, const char *clkctrl_name) 434 405 { 435 406 const struct omap_clkctrl_bit_data *bits = data->bit_data; 436 407 ··· 441 412 switch (bits->type) { 442 413 case TI_CLK_GATE: 443 414 _ti_clkctrl_setup_gate(provider, node, data->offset, 444 - bits, reg); 415 + bits, reg, clkctrl_name); 445 416 break; 446 417 447 418 case TI_CLK_DIVIDER: 448 419 _ti_clkctrl_setup_div(provider, node, data->offset, 449 - bits, reg); 420 + bits, reg, clkctrl_name); 450 421 break; 451 422 452 423 case TI_CLK_MUX: 453 424 _ti_clkctrl_setup_mux(provider, node, data->offset, 454 - bits, reg); 425 + bits, reg, clkctrl_name); 455 426 break; 456 427 457 428 default: ··· 490 461 return name; 491 462 } 492 463 } 493 - of_node_put(np); 494 464 495 465 return NULL; 496 - } 497 - 498 - /* Get clkctrl clock base name based on clkctrl_name or dts node */ 499 - static const char * __init clkctrl_get_clock_name(struct device_node *np, 500 - const char *clkctrl_name, 501 - int offset, int index, 502 - bool legacy_naming) 503 - { 504 - char *clock_name; 505 - 506 - /* l4per-clkctrl:1234:0 style naming based on clkctrl_name */ 507 - if (clkctrl_name && !legacy_naming) { 508 - clock_name = kasprintf(GFP_KERNEL, "%s-clkctrl:%04x:%d", 509 - clkctrl_name, offset, index); 510 - strreplace(clock_name, '_', '-'); 511 - 512 - return clock_name; 513 - } 514 - 515 - /* l4per:1234:0 old style naming based on clkctrl_name */ 516 - if (clkctrl_name) 517 - return kasprintf(GFP_KERNEL, "%s_cm:clk:%04x:%d", 518 - clkctrl_name, offset, index); 519 - 520 - /* l4per_cm:1234:0 old style naming based on parent node name */ 521 - if (legacy_naming) 522 - return kasprintf(GFP_KERNEL, "%pOFn:clk:%04x:%d", 523 - np->parent, offset, index); 524 - 525 - /* l4per-clkctrl:1234:0 style naming based on node name */ 526 - return kasprintf(GFP_KERNEL, "%pOFn:%04x:%d", np, offset, index); 527 466 } 528 467 529 468 static void __init 
_ti_omap4_clkctrl_setup(struct device_node *node) ··· 661 664 hw->enable_reg.ptr = provider->base + reg_data->offset; 662 665 663 666 _ti_clkctrl_setup_subclks(provider, node, reg_data, 664 - hw->enable_reg.ptr); 667 + hw->enable_reg.ptr, clkctrl_name); 665 668 666 669 if (reg_data->flags & CLKF_SW_SUP) 667 670 hw->enable_bit = MODULEMODE_SWCTRL;
+1
drivers/clk/versatile/clk-impd1.c
··· 206 206 return -ENODEV; 207 207 } 208 208 209 + of_property_read_string(np, "clock-output-names", &name); 209 210 parent_name = of_clk_get_parent_name(np, 0); 210 211 clk = icst_clk_setup(NULL, desc, name, parent_name, map, 211 212 ICST_INTEGRATOR_IM_PD1);
+11 -3
drivers/dax/kmem.c
··· 22 22 resource_size_t kmem_size; 23 23 resource_size_t kmem_end; 24 24 struct resource *new_res; 25 + const char *new_res_name; 25 26 int numa_node; 26 27 int rc; 27 28 ··· 49 48 kmem_size &= ~(memory_block_size_bytes() - 1); 50 49 kmem_end = kmem_start + kmem_size; 51 50 52 - /* Region is permanently reserved. Hot-remove not yet implemented. */ 53 - new_res = request_mem_region(kmem_start, kmem_size, dev_name(dev)); 51 + new_res_name = kstrdup(dev_name(dev), GFP_KERNEL); 52 + if (!new_res_name) 53 + return -ENOMEM; 54 + 55 + /* Region is permanently reserved if hotremove fails. */ 56 + new_res = request_mem_region(kmem_start, kmem_size, new_res_name); 54 57 if (!new_res) { 55 58 dev_warn(dev, "could not reserve region [%pa-%pa]\n", 56 59 &kmem_start, &kmem_end); 60 + kfree(new_res_name); 57 61 return -EBUSY; 58 62 } 59 63 ··· 69 63 * unknown to us that will break add_memory() below. 70 64 */ 71 65 new_res->flags = IORESOURCE_SYSTEM_RAM; 72 - new_res->name = dev_name(dev); 73 66 74 67 rc = add_memory(numa_node, new_res->start, resource_size(new_res)); 75 68 if (rc) { 76 69 release_resource(new_res); 77 70 kfree(new_res); 71 + kfree(new_res_name); 78 72 return rc; 79 73 } 80 74 dev_dax->dax_kmem_res = new_res; ··· 89 83 struct resource *res = dev_dax->dax_kmem_res; 90 84 resource_size_t kmem_start = res->start; 91 85 resource_size_t kmem_size = resource_size(res); 86 + const char *res_name = res->name; 92 87 int rc; 93 88 94 89 /* ··· 109 102 /* Release and free dax resources */ 110 103 release_resource(res); 111 104 kfree(res); 105 + kfree(res_name); 112 106 dev_dax->dax_kmem_res = NULL; 113 107 114 108 return 0;
+5 -4
drivers/dma/dmatest.c
··· 1166 1166 mutex_unlock(&info->lock); 1167 1167 return ret; 1168 1168 } else if (dmatest_run) { 1169 - if (is_threaded_test_pending(info)) 1170 - start_threaded_tests(info); 1171 - else 1172 - pr_info("Could not start test, no channels configured\n"); 1169 + if (!is_threaded_test_pending(info)) { 1170 + pr_info("No channels configured, continue with any\n"); 1171 + add_threaded_test(info); 1172 + } 1173 + start_threaded_tests(info); 1173 1174 } else { 1174 1175 stop_threaded_test(info); 1175 1176 }
+7
drivers/dma/idxd/device.c
··· 62 62 perm.ignore = 0; 63 63 iowrite32(perm.bits, idxd->reg_base + offset); 64 64 65 + /* 66 + * A readback from the device ensures that any previously generated 67 + * completion record writes are visible to software based on PCI 68 + * ordering rules. 69 + */ 70 + perm.bits = ioread32(idxd->reg_base + offset); 71 + 65 72 return 0; 66 73 } 67 74
+19 -7
drivers/dma/idxd/irq.c
··· 173 173 struct llist_node *head; 174 174 int queued = 0; 175 175 176 + *processed = 0; 176 177 head = llist_del_all(&irq_entry->pending_llist); 177 178 if (!head) 178 179 return 0; ··· 198 197 struct list_head *node, *next; 199 198 int queued = 0; 200 199 200 + *processed = 0; 201 201 if (list_empty(&irq_entry->work_list)) 202 202 return 0; 203 203 ··· 220 218 return queued; 221 219 } 222 220 223 - irqreturn_t idxd_wq_thread(int irq, void *data) 221 + static int idxd_desc_process(struct idxd_irq_entry *irq_entry) 224 222 { 225 - struct idxd_irq_entry *irq_entry = data; 226 - int rc, processed = 0, retry = 0; 223 + int rc, processed, total = 0; 227 224 228 225 /* 229 226 * There are two lists we are processing. The pending_llist is where ··· 245 244 */ 246 245 do { 247 246 rc = irq_process_work_list(irq_entry, &processed); 248 - if (rc != 0) { 249 - retry++; 247 + total += processed; 248 + if (rc != 0) 250 249 continue; 251 - } 252 250 253 251 rc = irq_process_pending_llist(irq_entry, &processed); 254 - } while (rc != 0 && retry != 10); 252 + total += processed; 253 + } while (rc != 0); 255 254 255 + return total; 256 + } 257 + 258 + irqreturn_t idxd_wq_thread(int irq, void *data) 259 + { 260 + struct idxd_irq_entry *irq_entry = data; 261 + int processed; 262 + 263 + processed = idxd_desc_process(irq_entry); 256 264 idxd_unmask_msix_vector(irq_entry->idxd, irq_entry->id); 265 + /* catch anything unprocessed after unmasking */ 266 + processed += idxd_desc_process(irq_entry); 257 267 258 268 if (processed == 0) 259 269 return IRQ_NONE;
+3 -5
drivers/dma/owl-dma.c
··· 175 175 * @id: physical index to this channel 176 176 * @base: virtual memory base for the dma channel 177 177 * @vchan: the virtual channel currently being served by this physical channel 178 - * @lock: a lock to use when altering an instance of this struct 179 178 */ 180 179 struct owl_dma_pchan { 181 180 u32 id; 182 181 void __iomem *base; 183 182 struct owl_dma_vchan *vchan; 184 - spinlock_t lock; 185 183 }; 186 184 187 185 /** ··· 435 437 for (i = 0; i < od->nr_pchans; i++) { 436 438 pchan = &od->pchans[i]; 437 439 438 - spin_lock_irqsave(&pchan->lock, flags); 440 + spin_lock_irqsave(&od->lock, flags); 439 441 if (!pchan->vchan) { 440 442 pchan->vchan = vchan; 441 - spin_unlock_irqrestore(&pchan->lock, flags); 443 + spin_unlock_irqrestore(&od->lock, flags); 442 444 break; 443 445 } 444 446 445 - spin_unlock_irqrestore(&pchan->lock, flags); 447 + spin_unlock_irqrestore(&od->lock, flags); 446 448 } 447 449 448 450 return pchan;
+1 -1
drivers/dma/tegra210-adma.c
··· 900 900 ret = dma_async_device_register(&tdma->dma_dev); 901 901 if (ret < 0) { 902 902 dev_err(&pdev->dev, "ADMA registration failed: %d\n", ret); 903 - goto irq_dispose; 903 + goto rpm_put; 904 904 } 905 905 906 906 ret = of_dma_controller_register(pdev->dev.of_node,
+4 -2
drivers/dma/ti/k3-udma.c
··· 2156 2156 d->residue += sg_dma_len(sgent); 2157 2157 } 2158 2158 2159 - cppi5_tr_csf_set(&tr_req[tr_idx - 1].flags, CPPI5_TR_CSF_EOP); 2159 + cppi5_tr_csf_set(&tr_req[tr_idx - 1].flags, 2160 + CPPI5_TR_CSF_SUPR_EVT | CPPI5_TR_CSF_EOP); 2160 2161 2161 2162 return d; 2162 2163 } ··· 2734 2733 tr_req[1].dicnt3 = 1; 2735 2734 } 2736 2735 2737 - cppi5_tr_csf_set(&tr_req[num_tr - 1].flags, CPPI5_TR_CSF_EOP); 2736 + cppi5_tr_csf_set(&tr_req[num_tr - 1].flags, 2737 + CPPI5_TR_CSF_SUPR_EVT | CPPI5_TR_CSF_EOP); 2738 2738 2739 2739 if (uc->config.metadata_size) 2740 2740 d->vd.tx.metadata_ops = &metadata_ops;
+1 -2
drivers/dma/xilinx/zynqmp_dma.c
··· 434 434 struct zynqmp_dma_desc_sw *child, *next; 435 435 436 436 chan->desc_free_cnt++; 437 + list_del(&sdesc->node); 437 438 list_add_tail(&sdesc->node, &chan->free_list); 438 439 list_for_each_entry_safe(child, next, &sdesc->tx_list, node) { 439 440 chan->desc_free_cnt++; ··· 608 607 list_for_each_entry_safe(desc, next, &chan->done_list, node) { 609 608 dma_async_tx_callback callback; 610 609 void *callback_param; 611 - 612 - list_del(&desc->node); 613 610 614 611 callback = desc->async_tx.callback; 615 612 callback_param = desc->async_tx.callback_param;
+62
drivers/firmware/efi/cper.c
··· 407 407 } 408 408 } 409 409 410 + static const char * const fw_err_rec_type_strs[] = { 411 + "IPF SAL Error Record", 412 + "SOC Firmware Error Record Type1 (Legacy CrashLog Support)", 413 + "SOC Firmware Error Record Type2", 414 + }; 415 + 416 + static void cper_print_fw_err(const char *pfx, 417 + struct acpi_hest_generic_data *gdata, 418 + const struct cper_sec_fw_err_rec_ref *fw_err) 419 + { 420 + void *buf = acpi_hest_get_payload(gdata); 421 + u32 offset, length = gdata->error_data_length; 422 + 423 + printk("%s""Firmware Error Record Type: %s\n", pfx, 424 + fw_err->record_type < ARRAY_SIZE(fw_err_rec_type_strs) ? 425 + fw_err_rec_type_strs[fw_err->record_type] : "unknown"); 426 + printk("%s""Revision: %d\n", pfx, fw_err->revision); 427 + 428 + /* Record Type based on UEFI 2.7 */ 429 + if (fw_err->revision == 0) { 430 + printk("%s""Record Identifier: %08llx\n", pfx, 431 + fw_err->record_identifier); 432 + } else if (fw_err->revision == 2) { 433 + printk("%s""Record Identifier: %pUl\n", pfx, 434 + &fw_err->record_identifier_guid); 435 + } 436 + 437 + /* 438 + * The FW error record may contain trailing data beyond the 439 + * structure defined by the specification. As the fields 440 + * defined (and hence the offset of any trailing data) vary 441 + * with the revision, set the offset to account for this 442 + * variation. 
443 + */ 444 + if (fw_err->revision == 0) { 445 + /* record_identifier_guid not defined */ 446 + offset = offsetof(struct cper_sec_fw_err_rec_ref, 447 + record_identifier_guid); 448 + } else if (fw_err->revision == 1) { 449 + /* record_identifier not defined */ 450 + offset = offsetof(struct cper_sec_fw_err_rec_ref, 451 + record_identifier); 452 + } else { 453 + offset = sizeof(*fw_err); 454 + } 455 + 456 + buf += offset; 457 + length -= offset; 458 + 459 + print_hex_dump(pfx, "", DUMP_PREFIX_OFFSET, 16, 4, buf, length, true); 460 + } 461 + 410 462 static void cper_print_tstamp(const char *pfx, 411 463 struct acpi_hest_generic_data_v300 *gdata) 412 464 { ··· 546 494 else 547 495 goto err_section_too_small; 548 496 #endif 497 + } else if (guid_equal(sec_type, &CPER_SEC_FW_ERR_REC_REF)) { 498 + struct cper_sec_fw_err_rec_ref *fw_err = acpi_hest_get_payload(gdata); 499 + 500 + printk("%ssection_type: Firmware Error Record Reference\n", 501 + newpfx); 502 + /* The minimal FW Error Record contains 16 bytes */ 503 + if (gdata->error_data_length >= SZ_16) 504 + cper_print_fw_err(newpfx, gdata, fw_err); 505 + else 506 + goto err_section_too_small; 549 507 } else { 550 508 const void *err = acpi_hest_get_payload(gdata); 551 509
+8 -6
drivers/firmware/efi/earlycon.c
··· 114 114 const u32 color_black = 0x00000000; 115 115 const u32 color_white = 0x00ffffff; 116 116 const u8 *src; 117 - u8 s8; 118 - int m; 117 + int m, n, bytes; 118 + u8 x; 119 119 120 - src = font->data + c * font->height; 121 - s8 = *(src + h); 120 + bytes = BITS_TO_BYTES(font->width); 121 + src = font->data + c * font->height * bytes + h * bytes; 122 122 123 - for (m = 0; m < 8; m++) { 124 - if ((s8 >> (7 - m)) & 1) 123 + for (m = 0; m < font->width; m++) { 124 + n = m % 8; 125 + x = *(src + m / 8); 126 + if ((x >> (7 - n)) & 1) 125 127 *dst = color_white; 126 128 else 127 129 *dst = color_black;
+1 -4
drivers/firmware/efi/efi.c
··· 130 130 if (efi.smbios != EFI_INVALID_TABLE_ADDR) 131 131 str += sprintf(str, "SMBIOS=0x%lx\n", efi.smbios); 132 132 133 - if (IS_ENABLED(CONFIG_IA64) || IS_ENABLED(CONFIG_X86)) { 134 - extern char *efi_systab_show_arch(char *str); 135 - 133 + if (IS_ENABLED(CONFIG_IA64) || IS_ENABLED(CONFIG_X86)) 136 134 str = efi_systab_show_arch(str); 137 - } 138 135 139 136 return str - buf; 140 137 }
+5 -1
drivers/firmware/efi/libstub/arm-stub.c
··· 60 60 si = alloc_screen_info(); 61 61 if (!si) 62 62 return NULL; 63 - efi_setup_gop(si, &gop_proto, size); 63 + status = efi_setup_gop(si, &gop_proto, size); 64 + if (status != EFI_SUCCESS) { 65 + free_screen_info(si); 66 + return NULL; 67 + } 64 68 } 65 69 return si; 66 70 }
+13
drivers/firmware/efi/libstub/efistub.h
··· 92 92 #define EFI_LOCATE_BY_REGISTER_NOTIFY 1 93 93 #define EFI_LOCATE_BY_PROTOCOL 2 94 94 95 + /* 96 + * An efi_boot_memmap is used by efi_get_memory_map() to return the 97 + * EFI memory map in a dynamically allocated buffer. 98 + * 99 + * The buffer allocated for the EFI memory map includes extra room for 100 + * a minimum of EFI_MMAP_NR_SLACK_SLOTS additional EFI memory descriptors. 101 + * This facilitates the reuse of the EFI memory map buffer when a second 102 + * call to ExitBootServices() is needed because of intervening changes to 103 + * the EFI memory map. Other related structures, e.g. x86 e820ext, need 104 + * to factor in this headroom requirement as well. 105 + */ 106 + #define EFI_MMAP_NR_SLACK_SLOTS 8 107 + 95 108 struct efi_boot_memmap { 96 109 efi_memory_desc_t **map; 97 110 unsigned long *map_size;
-2
drivers/firmware/efi/libstub/mem.c
··· 5 5 6 6 #include "efistub.h" 7 7 8 - #define EFI_MMAP_NR_SLACK_SLOTS 8 9 - 10 8 static inline bool mmap_has_headroom(unsigned long buff_size, 11 9 unsigned long map_size, 12 10 unsigned long desc_size)
+3 -2
drivers/firmware/efi/libstub/tpm.c
··· 54 54 efi_status_t status; 55 55 efi_physical_addr_t log_location = 0, log_last_entry = 0; 56 56 struct linux_efi_tpm_eventlog *log_tbl = NULL; 57 - struct efi_tcg2_final_events_table *final_events_table; 57 + struct efi_tcg2_final_events_table *final_events_table = NULL; 58 58 unsigned long first_entry_addr, last_entry_addr; 59 59 size_t log_size, last_entry_size; 60 60 efi_bool_t truncated; ··· 127 127 * Figure out whether any events have already been logged to the 128 128 * final events structure, and if so how much space they take up 129 129 */ 130 - final_events_table = get_efi_config_table(LINUX_EFI_TPM_FINAL_LOG_GUID); 130 + if (version == EFI_TCG2_EVENT_LOG_FORMAT_TCG_2) 131 + final_events_table = get_efi_config_table(LINUX_EFI_TPM_FINAL_LOG_GUID); 131 132 if (final_events_table && final_events_table->nr_events) { 132 133 struct tcg_pcr_event2_head *header; 133 134 int offset;
+9 -15
drivers/firmware/efi/libstub/x86-stub.c
··· 606 606 struct setup_data **e820ext, 607 607 u32 *e820ext_size) 608 608 { 609 - unsigned long map_size, desc_size, buff_size; 610 - struct efi_boot_memmap boot_map; 611 - efi_memory_desc_t *map; 609 + unsigned long map_size, desc_size, map_key; 612 610 efi_status_t status; 613 - __u32 nr_desc; 611 + __u32 nr_desc, desc_version; 614 612 615 - boot_map.map = &map; 616 - boot_map.map_size = &map_size; 617 - boot_map.desc_size = &desc_size; 618 - boot_map.desc_ver = NULL; 619 - boot_map.key_ptr = NULL; 620 - boot_map.buff_size = &buff_size; 613 + /* Only need the size of the mem map and size of each mem descriptor */ 614 + map_size = 0; 615 + status = efi_bs_call(get_memory_map, &map_size, NULL, &map_key, 616 + &desc_size, &desc_version); 617 + if (status != EFI_BUFFER_TOO_SMALL) 618 + return (status != EFI_SUCCESS) ? status : EFI_UNSUPPORTED; 621 619 622 - status = efi_get_memory_map(&boot_map); 623 - if (status != EFI_SUCCESS) 624 - return status; 625 - 626 - nr_desc = buff_size / desc_size; 620 + nr_desc = map_size / desc_size + EFI_MMAP_NR_SLACK_SLOTS; 627 621 628 622 if (nr_desc > ARRAY_SIZE(params->e820_table)) { 629 623 u32 nr_e820ext = nr_desc - ARRAY_SIZE(params->e820_table);
+4 -1
drivers/firmware/efi/tpm.c
··· 62 62 tbl_size = sizeof(*log_tbl) + log_tbl->size; 63 63 memblock_reserve(efi.tpm_log, tbl_size); 64 64 65 - if (efi.tpm_final_log == EFI_INVALID_TABLE_ADDR) 65 + if (efi.tpm_final_log == EFI_INVALID_TABLE_ADDR || 66 + log_tbl->version != EFI_TCG2_EVENT_LOG_FORMAT_TCG_2) { 67 + pr_warn(FW_BUG "TPM Final Events table missing or invalid\n"); 66 68 goto out; 69 + } 67 70 68 71 final_tbl = early_memremap(efi.tpm_final_log, sizeof(*final_tbl)); 69 72
+68 -1
drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
··· 1625 1625 hws->funcs.verify_allow_pstate_change_high(dc); 1626 1626 } 1627 1627 1628 + /** 1629 + * delay_cursor_until_vupdate() - Delay cursor update if too close to VUPDATE. 1630 + * 1631 + * Software keepout workaround to prevent cursor update locking from stalling 1632 + * out cursor updates indefinitely or from old values from being retained in 1633 + * the case where the viewport changes in the same frame as the cursor. 1634 + * 1635 + * The idea is to calculate the remaining time from VPOS to VUPDATE. If it's 1636 + * too close to VUPDATE, then stall out until VUPDATE finishes. 1637 + * 1638 + * TODO: Optimize cursor programming to be once per frame before VUPDATE 1639 + * to avoid the need for this workaround. 1640 + */ 1641 + static void delay_cursor_until_vupdate(struct dc *dc, struct pipe_ctx *pipe_ctx) 1642 + { 1643 + struct dc_stream_state *stream = pipe_ctx->stream; 1644 + struct crtc_position position; 1645 + uint32_t vupdate_start, vupdate_end; 1646 + unsigned int lines_to_vupdate, us_to_vupdate, vpos; 1647 + unsigned int us_per_line, us_vupdate; 1648 + 1649 + if (!dc->hwss.calc_vupdate_position || !dc->hwss.get_position) 1650 + return; 1651 + 1652 + if (!pipe_ctx->stream_res.stream_enc || !pipe_ctx->stream_res.tg) 1653 + return; 1654 + 1655 + dc->hwss.calc_vupdate_position(dc, pipe_ctx, &vupdate_start, 1656 + &vupdate_end); 1657 + 1658 + dc->hwss.get_position(&pipe_ctx, 1, &position); 1659 + vpos = position.vertical_count; 1660 + 1661 + /* Avoid wraparound calculation issues */ 1662 + vupdate_start += stream->timing.v_total; 1663 + vupdate_end += stream->timing.v_total; 1664 + vpos += stream->timing.v_total; 1665 + 1666 + if (vpos <= vupdate_start) { 1667 + /* VPOS is in VACTIVE or back porch. */ 1668 + lines_to_vupdate = vupdate_start - vpos; 1669 + } else if (vpos > vupdate_end) { 1670 + /* VPOS is in the front porch. */ 1671 + return; 1672 + } else { 1673 + /* VPOS is in VUPDATE. 
*/ 1674 + lines_to_vupdate = 0; 1675 + } 1676 + 1677 + /* Calculate time until VUPDATE in microseconds. */ 1678 + us_per_line = 1679 + stream->timing.h_total * 10000u / stream->timing.pix_clk_100hz; 1680 + us_to_vupdate = lines_to_vupdate * us_per_line; 1681 + 1682 + /* 70 us is a conservative estimate of cursor update time*/ 1683 + if (us_to_vupdate > 70) 1684 + return; 1685 + 1686 + /* Stall out until the cursor update completes. */ 1687 + us_vupdate = (vupdate_end - vupdate_start + 1) * us_per_line; 1688 + udelay(us_to_vupdate + us_vupdate); 1689 + } 1690 + 1628 1691 void dcn10_cursor_lock(struct dc *dc, struct pipe_ctx *pipe, bool lock) 1629 1692 { 1630 1693 /* cursor lock is per MPCC tree, so only need to lock one pipe per stream */ 1631 1694 if (!pipe || pipe->top_pipe) 1632 1695 return; 1696 + 1697 + /* Prevent cursor lock from stalling out cursor updates. */ 1698 + if (lock) 1699 + delay_cursor_until_vupdate(dc, pipe); 1633 1700 1634 1701 dc->res_pool->mpc->funcs->cursor_lock(dc->res_pool->mpc, 1635 1702 pipe->stream_res.opp->inst, lock); ··· 3303 3236 return vertical_line_start; 3304 3237 } 3305 3238 3306 - static void dcn10_calc_vupdate_position( 3239 + void dcn10_calc_vupdate_position( 3307 3240 struct dc *dc, 3308 3241 struct pipe_ctx *pipe_ctx, 3309 3242 uint32_t *start_line,
+5
drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.h
··· 34 34 void dcn10_hw_sequencer_construct(struct dc *dc); 35 35 36 36 int dcn10_get_vupdate_offset_from_vsync(struct pipe_ctx *pipe_ctx); 37 + void dcn10_calc_vupdate_position( 38 + struct dc *dc, 39 + struct pipe_ctx *pipe_ctx, 40 + uint32_t *start_line, 41 + uint32_t *end_line); 37 42 void dcn10_setup_vupdate_interrupt(struct dc *dc, struct pipe_ctx *pipe_ctx); 38 43 enum dc_status dcn10_enable_stream_timing( 39 44 struct pipe_ctx *pipe_ctx,
+1
drivers/gpu/drm/amd/display/dc/dcn10/dcn10_init.c
··· 72 72 .set_clock = dcn10_set_clock, 73 73 .get_clock = dcn10_get_clock, 74 74 .get_vupdate_offset_from_vsync = dcn10_get_vupdate_offset_from_vsync, 75 + .calc_vupdate_position = dcn10_calc_vupdate_position, 75 76 }; 76 77 77 78 static const struct hwseq_private_funcs dcn10_private_funcs = {
+1
drivers/gpu/drm/amd/display/dc/dcn20/dcn20_init.c
··· 83 83 .init_vm_ctx = dcn20_init_vm_ctx, 84 84 .set_flip_control_gsl = dcn20_set_flip_control_gsl, 85 85 .get_vupdate_offset_from_vsync = dcn10_get_vupdate_offset_from_vsync, 86 + .calc_vupdate_position = dcn10_calc_vupdate_position, 86 87 }; 87 88 88 89 static const struct hwseq_private_funcs dcn20_private_funcs = {
+1
drivers/gpu/drm/amd/display/dc/dcn21/dcn21_init.c
··· 86 86 .optimize_pwr_state = dcn21_optimize_pwr_state, 87 87 .exit_optimized_pwr_state = dcn21_exit_optimized_pwr_state, 88 88 .get_vupdate_offset_from_vsync = dcn10_get_vupdate_offset_from_vsync, 89 + .calc_vupdate_position = dcn10_calc_vupdate_position, 89 90 .set_cursor_position = dcn10_set_cursor_position, 90 91 .set_cursor_attribute = dcn10_set_cursor_attribute, 91 92 .set_cursor_sdr_white_level = dcn10_set_cursor_sdr_white_level,
-2
drivers/gpu/drm/amd/display/dc/dml/Makefile
··· 63 63 endif 64 64 CFLAGS_$(AMDDALPATH)/dc/dml/dml1_display_rq_dlg_calc.o := $(dml_ccflags) 65 65 CFLAGS_$(AMDDALPATH)/dc/dml/display_rq_dlg_helpers.o := $(dml_ccflags) 66 - CFLAGS_$(AMDDALPATH)/dc/dml/dml_common_defs.o := $(dml_ccflags) 67 66 68 67 DML = display_mode_lib.o display_rq_dlg_helpers.o dml1_display_rq_dlg_calc.o \ 69 - dml_common_defs.o 70 68 71 69 ifdef CONFIG_DRM_AMD_DC_DCN 72 70 DML += display_mode_vba.o dcn20/display_rq_dlg_calc_20.o dcn20/display_mode_vba_20.o
-1
drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20.h
··· 26 26 #ifndef __DML20_DISPLAY_RQ_DLG_CALC_H__ 27 27 #define __DML20_DISPLAY_RQ_DLG_CALC_H__ 28 28 29 - #include "../dml_common_defs.h" 30 29 #include "../display_rq_dlg_helpers.h" 31 30 32 31 struct display_mode_lib;
-1
drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20v2.h
··· 26 26 #ifndef __DML20V2_DISPLAY_RQ_DLG_CALC_H__ 27 27 #define __DML20V2_DISPLAY_RQ_DLG_CALC_H__ 28 28 29 - #include "../dml_common_defs.h" 30 29 #include "../display_rq_dlg_helpers.h" 31 30 32 31 struct display_mode_lib;
+1 -1
drivers/gpu/drm/amd/display/dc/dml/dcn21/display_rq_dlg_calc_21.h
··· 26 26 #ifndef __DML21_DISPLAY_RQ_DLG_CALC_H__ 27 27 #define __DML21_DISPLAY_RQ_DLG_CALC_H__ 28 28 29 - #include "../dml_common_defs.h" 29 + #include "dm_services.h" 30 30 #include "../display_rq_dlg_helpers.h" 31 31 32 32 struct display_mode_lib;
+4 -2
drivers/gpu/drm/amd/display/dc/dml/display_mode_lib.h
··· 25 25 #ifndef __DISPLAY_MODE_LIB_H__ 26 26 #define __DISPLAY_MODE_LIB_H__ 27 27 28 - 29 - #include "dml_common_defs.h" 28 + #include "dm_services.h" 29 + #include "dc_features.h" 30 + #include "display_mode_structs.h" 31 + #include "display_mode_enums.h" 30 32 #include "display_mode_vba.h" 31 33 32 34 enum dml_project {
-2
drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.h
··· 27 27 #ifndef __DML2_DISPLAY_MODE_VBA_H__ 28 28 #define __DML2_DISPLAY_MODE_VBA_H__ 29 29 30 - #include "dml_common_defs.h" 31 - 32 30 struct display_mode_lib; 33 31 34 32 void ModeSupportAndSystemConfiguration(struct display_mode_lib *mode_lib);
-1
drivers/gpu/drm/amd/display/dc/dml/display_rq_dlg_helpers.h
··· 26 26 #ifndef __DISPLAY_RQ_DLG_HELPERS_H__ 27 27 #define __DISPLAY_RQ_DLG_HELPERS_H__ 28 28 29 - #include "dml_common_defs.h" 30 29 #include "display_mode_lib.h" 31 30 32 31 /* Function: Printer functions
-2
drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.h
··· 26 26 #ifndef __DISPLAY_RQ_DLG_CALC_H__ 27 27 #define __DISPLAY_RQ_DLG_CALC_H__ 28 28 29 - #include "dml_common_defs.h" 30 - 31 29 struct display_mode_lib; 32 30 33 31 #include "display_rq_dlg_helpers.h"
-43
drivers/gpu/drm/amd/display/dc/dml/dml_common_defs.c
··· 1 - /* 2 - * Copyright 2017 Advanced Micro Devices, Inc. 3 - * 4 - * Permission is hereby granted, free of charge, to any person obtaining a 5 - * copy of this software and associated documentation files (the "Software"), 6 - * to deal in the Software without restriction, including without limitation 7 - * the rights to use, copy, modify, merge, publish, distribute, sublicense, 8 - * and/or sell copies of the Software, and to permit persons to whom the 9 - * Software is furnished to do so, subject to the following conditions: 10 - * 11 - * The above copyright notice and this permission notice shall be included in 12 - * all copies or substantial portions of the Software. 13 - * 14 - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 15 - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 16 - * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL 17 - * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR 18 - * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, 19 - * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 20 - * OTHER DEALINGS IN THE SOFTWARE. 21 - * 22 - * Authors: AMD 23 - * 24 - */ 25 - 26 - #include "dml_common_defs.h" 27 - #include "dcn_calc_math.h" 28 - 29 - #include "dml_inline_defs.h" 30 - 31 - double dml_round(double a) 32 - { 33 - double round_pt = 0.5; 34 - double ceil = dml_ceil(a, 1); 35 - double floor = dml_floor(a, 1); 36 - 37 - if (a - floor >= round_pt) 38 - return ceil; 39 - else 40 - return floor; 41 - } 42 - 43 -
-37
drivers/gpu/drm/amd/display/dc/dml/dml_common_defs.h
··· 1 - /* 2 - * Copyright 2017 Advanced Micro Devices, Inc. 3 - * 4 - * Permission is hereby granted, free of charge, to any person obtaining a 5 - * copy of this software and associated documentation files (the "Software"), 6 - * to deal in the Software without restriction, including without limitation 7 - * the rights to use, copy, modify, merge, publish, distribute, sublicense, 8 - * and/or sell copies of the Software, and to permit persons to whom the 9 - * Software is furnished to do so, subject to the following conditions: 10 - * 11 - * The above copyright notice and this permission notice shall be included in 12 - * all copies or substantial portions of the Software. 13 - * 14 - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 15 - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 16 - * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL 17 - * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR 18 - * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, 19 - * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 20 - * OTHER DEALINGS IN THE SOFTWARE. 21 - * 22 - * Authors: AMD 23 - * 24 - */ 25 - 26 - #ifndef __DC_COMMON_DEFS_H__ 27 - #define __DC_COMMON_DEFS_H__ 28 - 29 - #include "dm_services.h" 30 - #include "dc_features.h" 31 - #include "display_mode_structs.h" 32 - #include "display_mode_enums.h" 33 - 34 - 35 - double dml_round(double a); 36 - 37 - #endif /* __DC_COMMON_DEFS_H__ */
+13 -2
drivers/gpu/drm/amd/display/dc/dml/dml_inline_defs.h
··· 26 26 #ifndef __DML_INLINE_DEFS_H__ 27 27 #define __DML_INLINE_DEFS_H__ 28 28 29 - #include "dml_common_defs.h" 30 29 #include "dcn_calc_math.h" 31 30 #include "dml_logger.h" 32 31 ··· 74 75 return (double) dcn_bw_floor2(a, granularity); 75 76 } 76 77 78 + static inline double dml_round(double a) 79 + { 80 + double round_pt = 0.5; 81 + double ceil = dml_ceil(a, 1); 82 + double floor = dml_floor(a, 1); 83 + 84 + if (a - floor >= round_pt) 85 + return ceil; 86 + else 87 + return floor; 88 + } 89 + 77 90 static inline int dml_log2(double x) 78 91 { 79 92 return dml_round((double)dcn_bw_log(x, 2)); ··· 123 112 124 113 static inline unsigned int dml_round_to_multiple(unsigned int num, 125 114 unsigned int multiple, 126 - bool up) 115 + unsigned char up) 127 116 { 128 117 unsigned int remainder; 129 118
+5
drivers/gpu/drm/amd/display/dc/inc/hw_sequencer.h
··· 92 92 void (*get_position)(struct pipe_ctx **pipe_ctx, int num_pipes, 93 93 struct crtc_position *position); 94 94 int (*get_vupdate_offset_from_vsync)(struct pipe_ctx *pipe_ctx); 95 + void (*calc_vupdate_position)( 96 + struct dc *dc, 97 + struct pipe_ctx *pipe_ctx, 98 + uint32_t *start_line, 99 + uint32_t *end_line); 95 100 void (*enable_per_frame_crtc_position_reset)(struct dc *dc, 96 101 int group_size, struct pipe_ctx *grouped_pipes[]); 97 102 void (*enable_timing_synchronization)(struct dc *dc,
+2 -1
drivers/gpu/drm/drm_edid.c
··· 191 191 { "HVR", 0xaa01, EDID_QUIRK_NON_DESKTOP }, 192 192 { "HVR", 0xaa02, EDID_QUIRK_NON_DESKTOP }, 193 193 194 - /* Oculus Rift DK1, DK2, and CV1 VR Headsets */ 194 + /* Oculus Rift DK1, DK2, CV1 and Rift S VR Headsets */ 195 195 { "OVR", 0x0001, EDID_QUIRK_NON_DESKTOP }, 196 196 { "OVR", 0x0003, EDID_QUIRK_NON_DESKTOP }, 197 197 { "OVR", 0x0004, EDID_QUIRK_NON_DESKTOP }, 198 + { "OVR", 0x0012, EDID_QUIRK_NON_DESKTOP }, 198 199 199 200 /* Windows Mixed Reality Headsets */ 200 201 { "ACR", 0x7fce, EDID_QUIRK_NON_DESKTOP },
+3 -1
drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
··· 238 238 } 239 239 240 240 if ((submit->flags & ETNA_SUBMIT_SOFTPIN) && 241 - submit->bos[i].va != mapping->iova) 241 + submit->bos[i].va != mapping->iova) { 242 + etnaviv_gem_mapping_unreference(mapping); 242 243 return -EINVAL; 244 + } 243 245 244 246 atomic_inc(&etnaviv_obj->gpu_active); 245 247
+1 -1
drivers/gpu/drm/etnaviv/etnaviv_perfmon.c
··· 453 453 if (!(gpu->identity.features & meta->feature)) 454 454 continue; 455 455 456 - if (meta->nr_domains < (index - offset)) { 456 + if (index - offset >= meta->nr_domains) { 457 457 offset += meta->nr_domains; 458 458 continue; 459 459 }
+1 -1
drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
··· 850 850 extern int vmw_bo_init(struct vmw_private *dev_priv, 851 851 struct vmw_buffer_object *vmw_bo, 852 852 size_t size, struct ttm_placement *placement, 853 - bool interuptable, 853 + bool interruptible, 854 854 void (*bo_free)(struct ttm_buffer_object *bo)); 855 855 extern int vmw_user_bo_verify_access(struct ttm_buffer_object *bo, 856 856 struct ttm_object_file *tfile);
+1 -1
drivers/gpu/drm/vmwgfx/vmwgfx_fence.c
··· 515 515 struct vmw_fence_manager *fman = fman_from_fence(fence); 516 516 517 517 if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->base.flags)) 518 - return 1; 518 + return true; 519 519 520 520 vmw_fences_update(fman); 521 521
+1 -1
drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
··· 1651 1651 struct vmw_surface_metadata *metadata; 1652 1652 struct ttm_base_object *base; 1653 1653 uint32_t backup_handle; 1654 - int ret = -EINVAL; 1654 + int ret; 1655 1655 1656 1656 ret = vmw_surface_handle_reference(dev_priv, file_priv, req->sid, 1657 1657 req->handle_type, &base);
+2 -2
drivers/hwtracing/coresight/coresight-cti-platform.c
··· 120 120 121 121 /* Can optionally have an etm node - return if not */ 122 122 cs_fwnode = fwnode_find_reference(root_fwnode, CTI_DT_CSDEV_ASSOC, 0); 123 - if (IS_ERR_OR_NULL(cs_fwnode)) 123 + if (IS_ERR(cs_fwnode)) 124 124 return 0; 125 125 126 126 /* allocate memory */ ··· 393 393 /* associated device ? */ 394 394 cs_fwnode = fwnode_find_reference(fwnode, 395 395 CTI_DT_CSDEV_ASSOC, 0); 396 - if (!IS_ERR_OR_NULL(cs_fwnode)) { 396 + if (!IS_ERR(cs_fwnode)) { 397 397 assoc_name = cti_plat_get_csdev_or_node_name(cs_fwnode, 398 398 &csdev); 399 399 fwnode_handle_put(cs_fwnode);
+1 -1
drivers/i2c/algos/i2c-algo-pca.c
··· 542 542 EXPORT_SYMBOL(i2c_pca_add_numbered_bus); 543 543 544 544 MODULE_AUTHOR("Ian Campbell <icampbell@arcom.com>, " 545 - "Wolfram Sang <w.sang@pengutronix.de>"); 545 + "Wolfram Sang <kernel@pengutronix.de>"); 546 546 MODULE_DESCRIPTION("I2C-Bus PCA9564/PCA9665 algorithm"); 547 547 MODULE_LICENSE("GPL"); 548 548
+9 -1
drivers/i2c/busses/i2c-altera.c
··· 70 70 * @isr_mask: cached copy of local ISR enables. 71 71 * @isr_status: cached copy of local ISR status. 72 72 * @lock: spinlock for IRQ synchronization. 73 + * @isr_mutex: mutex for IRQ thread. 73 74 */ 74 75 struct altr_i2c_dev { 75 76 void __iomem *base; ··· 87 86 u32 isr_mask; 88 87 u32 isr_status; 89 88 spinlock_t lock; /* IRQ synchronization */ 89 + struct mutex isr_mutex; 90 90 }; 91 91 92 92 static void ··· 247 245 struct altr_i2c_dev *idev = _dev; 248 246 u32 status = idev->isr_status; 249 247 248 + mutex_lock(&idev->isr_mutex); 250 249 if (!idev->msg) { 251 250 dev_warn(idev->dev, "unexpected interrupt\n"); 252 251 altr_i2c_int_clear(idev, ALTR_I2C_ALL_IRQ); 253 - return IRQ_HANDLED; 252 + goto out; 254 253 } 255 254 read = (idev->msg->flags & I2C_M_RD) != 0; 256 255 ··· 304 301 complete(&idev->msg_complete); 305 302 dev_dbg(idev->dev, "Message Complete\n"); 306 303 } 304 + out: 305 + mutex_unlock(&idev->isr_mutex); 307 306 308 307 return IRQ_HANDLED; 309 308 } ··· 317 312 u32 value; 318 313 u8 addr = i2c_8bit_addr_from_msg(msg); 319 314 315 + mutex_lock(&idev->isr_mutex); 320 316 idev->msg = msg; 321 317 idev->msg_len = msg->len; 322 318 idev->buf = msg->buf; ··· 342 336 altr_i2c_int_enable(idev, imask, true); 343 337 altr_i2c_fill_tx_fifo(idev); 344 338 } 339 + mutex_unlock(&idev->isr_mutex); 345 340 346 341 time_left = wait_for_completion_timeout(&idev->msg_complete, 347 342 ALTR_I2C_XFER_TIMEOUT); ··· 416 409 idev->dev = &pdev->dev; 417 410 init_completion(&idev->msg_complete); 418 411 spin_lock_init(&idev->lock); 412 + mutex_init(&idev->isr_mutex); 419 413 420 414 ret = device_property_read_u32(idev->dev, "fifo-size", 421 415 &idev->fifo_size);
+17 -3
drivers/i2c/busses/i2c-at91-master.c
··· 845 845 PINCTRL_STATE_DEFAULT); 846 846 dev->pinctrl_pins_gpio = pinctrl_lookup_state(dev->pinctrl, 847 847 "gpio"); 848 + if (IS_ERR(dev->pinctrl_pins_default) || 849 + IS_ERR(dev->pinctrl_pins_gpio)) { 850 + dev_info(&pdev->dev, "pinctrl states incomplete for recovery\n"); 851 + return -EINVAL; 852 + } 853 + 854 + /* 855 + * pins will be taken as GPIO, so we might as well inform pinctrl about 856 + * this and move the state to GPIO 857 + */ 858 + pinctrl_select_state(dev->pinctrl, dev->pinctrl_pins_gpio); 859 + 848 860 rinfo->sda_gpiod = devm_gpiod_get(&pdev->dev, "sda", GPIOD_IN); 849 861 if (PTR_ERR(rinfo->sda_gpiod) == -EPROBE_DEFER) 850 862 return -EPROBE_DEFER; ··· 867 855 return -EPROBE_DEFER; 868 856 869 857 if (IS_ERR(rinfo->sda_gpiod) || 870 - IS_ERR(rinfo->scl_gpiod) || 871 - IS_ERR(dev->pinctrl_pins_default) || 872 - IS_ERR(dev->pinctrl_pins_gpio)) { 858 + IS_ERR(rinfo->scl_gpiod)) { 873 859 dev_info(&pdev->dev, "recovery information incomplete\n"); 874 860 if (!IS_ERR(rinfo->sda_gpiod)) { 875 861 gpiod_put(rinfo->sda_gpiod); ··· 877 867 gpiod_put(rinfo->scl_gpiod); 878 868 rinfo->scl_gpiod = NULL; 879 869 } 870 + pinctrl_select_state(dev->pinctrl, dev->pinctrl_pins_default); 880 871 return -EINVAL; 881 872 } 873 + 874 + /* change the state of the pins back to their default state */ 875 + pinctrl_select_state(dev->pinctrl, dev->pinctrl_pins_default); 882 876 883 877 dev_info(&pdev->dev, "using scl, sda for recovery\n"); 884 878
+17 -7
drivers/i2c/i2c-core-base.c
··· 7 7 * Mux support by Rodolfo Giometti <giometti@enneenne.com> and 8 8 * Michael Lawnick <michael.lawnick.ext@nsn.com> 9 9 * 10 - * Copyright (C) 2013-2017 Wolfram Sang <wsa@the-dreams.de> 10 + * Copyright (C) 2013-2017 Wolfram Sang <wsa@kernel.org> 11 11 */ 12 12 13 13 #define pr_fmt(fmt) "i2c-core: " fmt ··· 338 338 } else if (ACPI_COMPANION(dev)) { 339 339 irq = i2c_acpi_get_irq(client); 340 340 } 341 - if (irq == -EPROBE_DEFER) 342 - return irq; 341 + if (irq == -EPROBE_DEFER) { 342 + status = irq; 343 + goto put_sync_adapter; 344 + } 343 345 344 346 if (irq < 0) 345 347 irq = 0; ··· 355 353 */ 356 354 if (!driver->id_table && 357 355 !i2c_acpi_match_device(dev->driver->acpi_match_table, client) && 358 - !i2c_of_match_device(dev->driver->of_match_table, client)) 359 - return -ENODEV; 356 + !i2c_of_match_device(dev->driver->of_match_table, client)) { 357 + status = -ENODEV; 358 + goto put_sync_adapter; 359 + } 360 360 361 361 if (client->flags & I2C_CLIENT_WAKE) { 362 362 int wakeirq; 363 363 364 364 wakeirq = of_irq_get_byname(dev->of_node, "wakeup"); 365 - if (wakeirq == -EPROBE_DEFER) 366 - return wakeirq; 365 + if (wakeirq == -EPROBE_DEFER) { 366 + status = wakeirq; 367 + goto put_sync_adapter; 368 + } 367 369 368 370 device_init_wakeup(&client->dev, true); 369 371 ··· 414 408 err_clear_wakeup_irq: 415 409 dev_pm_clear_wake_irq(&client->dev); 416 410 device_init_wakeup(&client->dev, false); 411 + put_sync_adapter: 412 + if (client->flags & I2C_CLIENT_HOST_NOTIFY) 413 + pm_runtime_put_sync(&client->adapter->dev); 414 + 417 415 return status; 418 416 } 419 417
+1 -1
drivers/i2c/i2c-core-of.c
··· 5 5 * Copyright (C) 2008 Jochen Friedrich <jochen@scram.de> 6 6 * based on a previous patch from Jon Smirl <jonsmirl@gmail.com> 7 7 * 8 - * Copyright (C) 2013, 2018 Wolfram Sang <wsa@the-dreams.de> 8 + * Copyright (C) 2013, 2018 Wolfram Sang <wsa@kernel.org> 9 9 */ 10 10 11 11 #include <dt-bindings/i2c/i2c.h>
+1
drivers/i2c/muxes/i2c-demux-pinctrl.c
··· 272 272 err_rollback_available: 273 273 device_remove_file(&pdev->dev, &dev_attr_available_masters); 274 274 err_rollback: 275 + i2c_demux_deactivate_master(priv); 275 276 for (j = 0; j < i; j++) { 276 277 of_node_put(priv->chan[j].parent_np); 277 278 of_changeset_destroy(&priv->chan[j].chgset);
+1 -1
drivers/iio/accel/sca3000.c
··· 980 980 st->tx[0] = SCA3000_READ_REG(reg_address_high); 981 981 ret = spi_sync_transfer(st->us, xfer, ARRAY_SIZE(xfer)); 982 982 if (ret) { 983 - dev_err(get_device(&st->us->dev), "problem reading register"); 983 + dev_err(&st->us->dev, "problem reading register\n"); 984 984 return ret; 985 985 } 986 986
+4 -4
drivers/iio/adc/stm32-adc.c
··· 1812 1812 return 0; 1813 1813 } 1814 1814 1815 - static int stm32_adc_dma_request(struct iio_dev *indio_dev) 1815 + static int stm32_adc_dma_request(struct device *dev, struct iio_dev *indio_dev) 1816 1816 { 1817 1817 struct stm32_adc *adc = iio_priv(indio_dev); 1818 1818 struct dma_slave_config config; 1819 1819 int ret; 1820 1820 1821 - adc->dma_chan = dma_request_chan(&indio_dev->dev, "rx"); 1821 + adc->dma_chan = dma_request_chan(dev, "rx"); 1822 1822 if (IS_ERR(adc->dma_chan)) { 1823 1823 ret = PTR_ERR(adc->dma_chan); 1824 1824 if (ret != -ENODEV) { 1825 1825 if (ret != -EPROBE_DEFER) 1826 - dev_err(&indio_dev->dev, 1826 + dev_err(dev, 1827 1827 "DMA channel request failed with %d\n", 1828 1828 ret); 1829 1829 return ret; ··· 1930 1930 if (ret < 0) 1931 1931 return ret; 1932 1932 1933 - ret = stm32_adc_dma_request(indio_dev); 1933 + ret = stm32_adc_dma_request(dev, indio_dev); 1934 1934 if (ret < 0) 1935 1935 return ret; 1936 1936
+11 -10
drivers/iio/adc/stm32-dfsdm-adc.c
··· 62 62 63 63 struct stm32_dfsdm_dev_data { 64 64 int type; 65 - int (*init)(struct iio_dev *indio_dev); 65 + int (*init)(struct device *dev, struct iio_dev *indio_dev); 66 66 unsigned int num_channels; 67 67 const struct regmap_config *regmap_cfg; 68 68 }; ··· 1365 1365 } 1366 1366 } 1367 1367 1368 - static int stm32_dfsdm_dma_request(struct iio_dev *indio_dev) 1368 + static int stm32_dfsdm_dma_request(struct device *dev, 1369 + struct iio_dev *indio_dev) 1369 1370 { 1370 1371 struct stm32_dfsdm_adc *adc = iio_priv(indio_dev); 1371 1372 1372 - adc->dma_chan = dma_request_chan(&indio_dev->dev, "rx"); 1373 + adc->dma_chan = dma_request_chan(dev, "rx"); 1373 1374 if (IS_ERR(adc->dma_chan)) { 1374 1375 int ret = PTR_ERR(adc->dma_chan); 1375 1376 ··· 1426 1425 &adc->dfsdm->ch_list[ch->channel]); 1427 1426 } 1428 1427 1429 - static int stm32_dfsdm_audio_init(struct iio_dev *indio_dev) 1428 + static int stm32_dfsdm_audio_init(struct device *dev, struct iio_dev *indio_dev) 1430 1429 { 1431 1430 struct iio_chan_spec *ch; 1432 1431 struct stm32_dfsdm_adc *adc = iio_priv(indio_dev); ··· 1453 1452 indio_dev->num_channels = 1; 1454 1453 indio_dev->channels = ch; 1455 1454 1456 - return stm32_dfsdm_dma_request(indio_dev); 1455 + return stm32_dfsdm_dma_request(dev, indio_dev); 1457 1456 } 1458 1457 1459 - static int stm32_dfsdm_adc_init(struct iio_dev *indio_dev) 1458 + static int stm32_dfsdm_adc_init(struct device *dev, struct iio_dev *indio_dev) 1460 1459 { 1461 1460 struct iio_chan_spec *ch; 1462 1461 struct stm32_dfsdm_adc *adc = iio_priv(indio_dev); ··· 1500 1499 init_completion(&adc->completion); 1501 1500 1502 1501 /* Optionally request DMA */ 1503 - ret = stm32_dfsdm_dma_request(indio_dev); 1502 + ret = stm32_dfsdm_dma_request(dev, indio_dev); 1504 1503 if (ret) { 1505 1504 if (ret != -ENODEV) { 1506 1505 if (ret != -EPROBE_DEFER) 1507 - dev_err(&indio_dev->dev, 1506 + dev_err(dev, 1508 1507 "DMA channel request failed with %d\n", 1509 1508 ret); 1510 1509 return ret; 
1511 1510 } 1512 1511 1513 - dev_dbg(&indio_dev->dev, "No DMA support\n"); 1512 + dev_dbg(dev, "No DMA support\n"); 1514 1513 return 0; 1515 1514 } 1516 1515 ··· 1623 1622 adc->dfsdm->fl_list[adc->fl_id].sync_mode = val; 1624 1623 1625 1624 adc->dev_data = dev_data; 1626 - ret = dev_data->init(iio); 1625 + ret = dev_data->init(dev, iio); 1627 1626 if (ret < 0) 1628 1627 return ret; 1629 1628
+5 -3
drivers/iio/adc/ti-ads8344.c
··· 32 32 u8 rx_buf[3]; 33 33 }; 34 34 35 - #define ADS8344_VOLTAGE_CHANNEL(chan, si) \ 35 + #define ADS8344_VOLTAGE_CHANNEL(chan, addr) \ 36 36 { \ 37 37 .type = IIO_VOLTAGE, \ 38 38 .indexed = 1, \ 39 39 .channel = chan, \ 40 40 .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), \ 41 41 .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), \ 42 + .address = addr, \ 42 43 } 43 44 44 - #define ADS8344_VOLTAGE_CHANNEL_DIFF(chan1, chan2, si) \ 45 + #define ADS8344_VOLTAGE_CHANNEL_DIFF(chan1, chan2, addr) \ 45 46 { \ 46 47 .type = IIO_VOLTAGE, \ 47 48 .indexed = 1, \ ··· 51 50 .differential = 1, \ 52 51 .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), \ 53 52 .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), \ 53 + .address = addr, \ 54 54 } 55 55 56 56 static const struct iio_chan_spec ads8344_channels[] = { ··· 107 105 switch (mask) { 108 106 case IIO_CHAN_INFO_RAW: 109 107 mutex_lock(&adc->lock); 110 - *value = ads8344_adc_conversion(adc, channel->scan_index, 108 + *value = ads8344_adc_conversion(adc, channel->address, 111 109 channel->differential); 112 110 mutex_unlock(&adc->lock); 113 111 if (*value < 0)
+13 -1
drivers/iio/chemical/atlas-sensor.c
··· 194 194 }; 195 195 196 196 static const struct iio_chan_spec atlas_do_channels[] = { 197 - ATLAS_CONCENTRATION_CHANNEL(0, ATLAS_REG_DO_DATA), 197 + { 198 + .type = IIO_CONCENTRATION, 199 + .address = ATLAS_REG_DO_DATA, 200 + .info_mask_separate = 201 + BIT(IIO_CHAN_INFO_RAW) | BIT(IIO_CHAN_INFO_SCALE), 202 + .scan_index = 0, 203 + .scan_type = { 204 + .sign = 'u', 205 + .realbits = 32, 206 + .storagebits = 32, 207 + .endianness = IIO_BE, 208 + }, 209 + }, 198 210 IIO_CHAN_SOFT_TIMESTAMP(1), 199 211 { 200 212 .type = IIO_TEMP,
+1
drivers/iio/dac/vf610_dac.c
··· 223 223 return 0; 224 224 225 225 error_iio_device_register: 226 + vf610_dac_exit(info); 226 227 clk_disable_unprepare(info->clk); 227 228 228 229 return ret;
+5 -2
drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_shub.c
··· 544 544 545 545 ref_sensor = iio_priv(hw->iio_devs[ST_LSM6DSX_ID_ACC]); 546 546 odr = st_lsm6dsx_check_odr(ref_sensor, val, &odr_val); 547 - if (odr < 0) 548 - return odr; 547 + if (odr < 0) { 548 + err = odr; 549 + goto release; 550 + } 549 551 550 552 sensor->ext_info.slv_odr = val; 551 553 sensor->odr = odr; ··· 559 557 break; 560 558 } 561 559 560 + release: 562 561 iio_device_release_direct_mode(iio_dev); 563 562 564 563 return err;
+2 -1
drivers/iommu/amd_iommu.c
··· 127 127 return -ENODEV; 128 128 129 129 list_for_each_entry(p, &acpihid_map, list) { 130 - if (acpi_dev_hid_uid_match(adev, p->hid, p->uid)) { 130 + if (acpi_dev_hid_uid_match(adev, p->hid, 131 + p->uid[0] ? p->uid : NULL)) { 131 132 if (entry) 132 133 *entry = p; 133 134 return p->devid;
+5 -4
drivers/iommu/amd_iommu_init.c
··· 1329 1329 } 1330 1330 case IVHD_DEV_ACPI_HID: { 1331 1331 u16 devid; 1332 - u8 hid[ACPIHID_HID_LEN] = {0}; 1333 - u8 uid[ACPIHID_UID_LEN] = {0}; 1332 + u8 hid[ACPIHID_HID_LEN]; 1333 + u8 uid[ACPIHID_UID_LEN]; 1334 1334 int ret; 1335 1335 1336 1336 if (h->type != 0x40) { ··· 1347 1347 break; 1348 1348 } 1349 1349 1350 + uid[0] = '\0'; 1350 1351 switch (e->uidf) { 1351 1352 case UID_NOT_PRESENT: 1352 1353 ··· 1362 1361 break; 1363 1362 case UID_IS_CHARACTER: 1364 1363 1365 - memcpy(uid, (u8 *)(&e->uid), ACPIHID_UID_LEN - 1); 1366 - uid[ACPIHID_UID_LEN - 1] = '\0'; 1364 + memcpy(uid, &e->uid, e->uidl); 1365 + uid[e->uidl] = '\0'; 1367 1366 1368 1367 break; 1369 1368 default:
+11 -6
drivers/iommu/iommu.c
··· 693 693 return ret; 694 694 } 695 695 696 + static bool iommu_is_attach_deferred(struct iommu_domain *domain, 697 + struct device *dev) 698 + { 699 + if (domain->ops->is_attach_deferred) 700 + return domain->ops->is_attach_deferred(domain, dev); 701 + 702 + return false; 703 + } 704 + 696 705 /** 697 706 * iommu_group_add_device - add a device to an iommu group 698 707 * @group: the group into which to add the device (reference should be held) ··· 756 747 757 748 mutex_lock(&group->mutex); 758 749 list_add_tail(&device->list, &group->devices); 759 - if (group->domain) 750 + if (group->domain && !iommu_is_attach_deferred(group->domain, dev)) 760 751 ret = __iommu_attach_device(group->domain, dev); 761 752 mutex_unlock(&group->mutex); 762 753 if (ret) ··· 1662 1653 struct device *dev) 1663 1654 { 1664 1655 int ret; 1665 - if ((domain->ops->is_attach_deferred != NULL) && 1666 - domain->ops->is_attach_deferred(domain, dev)) 1667 - return 0; 1668 1656 1669 1657 if (unlikely(domain->ops->attach_dev == NULL)) 1670 1658 return -ENODEV; ··· 1733 1727 static void __iommu_detach_device(struct iommu_domain *domain, 1734 1728 struct device *dev) 1735 1729 { 1736 - if ((domain->ops->is_attach_deferred != NULL) && 1737 - domain->ops->is_attach_deferred(domain, dev)) 1730 + if (iommu_is_attach_deferred(domain, dev)) 1738 1731 return; 1739 1732 1740 1733 if (unlikely(domain->ops->detach_dev == NULL))
+1
drivers/ipack/carriers/tpci200.c
··· 306 306 "(bn 0x%X, sn 0x%X) failed to map driver user space!", 307 307 tpci200->info->pdev->bus->number, 308 308 tpci200->info->pdev->devfn); 309 + res = -ENOMEM; 309 310 goto out_release_mem8_space; 310 311 } 311 312
+3
drivers/misc/cardreader/rtsx_pcr.c
··· 142 142 143 143 rtsx_disable_aspm(pcr); 144 144 145 + /* Fixes DMA transfer timeout issue after disabling ASPM on RTS5260 */ 146 + msleep(1); 147 + 145 148 if (option->ltr_enabled) 146 149 rtsx_set_ltr_latency(pcr, option->ltr_active_latency); 147 150
+2
drivers/misc/mei/client.c
··· 266 266 down_write(&dev->me_clients_rwsem); 267 267 me_cl = __mei_me_cl_by_uuid(dev, uuid); 268 268 __mei_me_cl_del(dev, me_cl); 269 + mei_me_cl_put(me_cl); 269 270 up_write(&dev->me_clients_rwsem); 270 271 } 271 272 ··· 288 287 down_write(&dev->me_clients_rwsem); 289 288 me_cl = __mei_me_cl_by_uuid_id(dev, uuid, id); 290 289 __mei_me_cl_del(dev, me_cl); 290 + mei_me_cl_put(me_cl); 291 291 up_write(&dev->me_clients_rwsem); 292 292 } 293 293
+1 -1
drivers/mtd/mtdcore.c
··· 555 555 556 556 config.id = -1; 557 557 config.dev = &mtd->dev; 558 - config.name = mtd->name; 558 + config.name = dev_name(&mtd->dev); 559 559 config.owner = THIS_MODULE; 560 560 config.reg_read = mtd_nvmem_reg_read; 561 561 config.size = mtd->size;
+1 -2
drivers/mtd/nand/raw/brcmnand/brcmnand.c
··· 2728 2728 flash_dma_writel(ctrl, FLASH_DMA_ERROR_STATUS, 0); 2729 2729 } 2730 2730 2731 - if (has_edu(ctrl)) 2731 + if (has_edu(ctrl)) { 2732 2732 ctrl->edu_config = edu_readl(ctrl, EDU_CONFIG); 2733 - else { 2734 2733 edu_writel(ctrl, EDU_CONFIG, ctrl->edu_config); 2735 2734 edu_readl(ctrl, EDU_CONFIG); 2736 2735 brcmnand_edu_init(ctrl);
+4
drivers/mtd/nand/spi/core.c
··· 1089 1089 1090 1090 mtd->oobavail = ret; 1091 1091 1092 + /* Propagate ECC information to mtd_info */ 1093 + mtd->ecc_strength = nand->eccreq.strength; 1094 + mtd->ecc_step_size = nand->eccreq.step_size; 1095 + 1092 1096 return 0; 1093 1097 1094 1098 err_cleanup_nanddev:
+2 -10
drivers/mtd/ubi/debug.c
··· 393 393 { 394 394 struct ubi_device *ubi = s->private; 395 395 396 - if (*pos == 0) 397 - return SEQ_START_TOKEN; 398 - 399 396 if (*pos < ubi->peb_count) 400 397 return pos; 401 398 ··· 406 409 { 407 410 struct ubi_device *ubi = s->private; 408 411 409 - if (v == SEQ_START_TOKEN) 410 - return pos; 411 412 (*pos)++; 412 413 413 414 if (*pos < ubi->peb_count) ··· 427 432 int err; 428 433 429 434 /* If this is the start, print a header */ 430 - if (iter == SEQ_START_TOKEN) { 431 - seq_puts(s, 432 - "physical_block_number\terase_count\tblock_status\tread_status\n"); 433 - return 0; 434 - } 435 + if (*block_number == 0) 436 + seq_puts(s, "physical_block_number\terase_count\n"); 435 437 436 438 err = ubi_io_is_bad(ubi, *block_number); 437 439 if (err)
+4 -1
drivers/net/can/ifi_canfd/ifi_canfd.c
··· 947 947 u32 id, rev; 948 948 949 949 addr = devm_platform_ioremap_resource(pdev, 0); 950 + if (IS_ERR(addr)) 951 + return PTR_ERR(addr); 952 + 950 953 irq = platform_get_irq(pdev, 0); 951 - if (IS_ERR(addr) || irq < 0) 954 + if (irq < 0) 952 955 return -EINVAL; 953 956 954 957 id = readl(addr + IFI_CANFD_IP_ID);
+1 -1
drivers/net/can/sun4i_can.c
··· 792 792 793 793 addr = devm_platform_ioremap_resource(pdev, 0); 794 794 if (IS_ERR(addr)) { 795 - err = -EBUSY; 795 + err = PTR_ERR(addr); 796 796 goto exit; 797 797 } 798 798
+1 -1
drivers/net/dsa/b53/b53_srab.c
··· 609 609 610 610 priv->regs = devm_platform_ioremap_resource(pdev, 0); 611 611 if (IS_ERR(priv->regs)) 612 - return -ENOMEM; 612 + return PTR_ERR(priv->regs); 613 613 614 614 dev = b53_switch_alloc(&pdev->dev, &b53_srab_ops, priv); 615 615 if (!dev)
+2 -7
drivers/net/dsa/mt7530.c
··· 628 628 mt7530_write(priv, MT7530_PVC_P(port), 629 629 PORT_SPEC_TAG); 630 630 631 - /* Disable auto learning on the cpu port */ 632 - mt7530_set(priv, MT7530_PSC_P(port), SA_DIS); 633 - 634 - /* Unknown unicast frame fordwarding to the cpu port */ 635 - mt7530_set(priv, MT7530_MFC, UNU_FFP(BIT(port))); 631 + /* Unknown multicast frame forwarding to the cpu port */ 632 + mt7530_rmw(priv, MT7530_MFC, UNM_FFP_MASK, UNM_FFP(BIT(port))); 636 633 637 634 /* Set CPU port number */ 638 635 if (priv->id == ID_MT7621) ··· 1284 1287 1285 1288 /* Enable and reset MIB counters */ 1286 1289 mt7530_mib_reset(ds); 1287 - 1288 - mt7530_clear(priv, MT7530_MFC, UNU_FFP_MASK); 1289 1290 1290 1291 for (i = 0; i < MT7530_NUM_PORTS; i++) { 1291 1292 /* Disable forwarding by default on all ports */
+1
drivers/net/dsa/mt7530.h
··· 31 31 #define MT7530_MFC 0x10 32 32 #define BC_FFP(x) (((x) & 0xff) << 24) 33 33 #define UNM_FFP(x) (((x) & 0xff) << 16) 34 + #define UNM_FFP_MASK UNM_FFP(~0) 34 35 #define UNU_FFP(x) (((x) & 0xff) << 8) 35 36 #define UNU_FFP_MASK UNU_FFP(~0) 36 37 #define CPU_EN BIT(7)
+11 -12
drivers/net/dsa/ocelot/felix.c
··· 414 414 struct ocelot *ocelot = &felix->ocelot; 415 415 phy_interface_t *port_phy_modes; 416 416 resource_size_t switch_base; 417 + struct resource res; 417 418 int port, i, err; 418 419 419 420 ocelot->num_phys_ports = num_phys_ports; ··· 449 448 450 449 for (i = 0; i < TARGET_MAX; i++) { 451 450 struct regmap *target; 452 - struct resource *res; 453 451 454 452 if (!felix->info->target_io_res[i].name) 455 453 continue; 456 454 457 - res = &felix->info->target_io_res[i]; 458 - res->flags = IORESOURCE_MEM; 459 - res->start += switch_base; 460 - res->end += switch_base; 455 + memcpy(&res, &felix->info->target_io_res[i], sizeof(res)); 456 + res.flags = IORESOURCE_MEM; 457 + res.start += switch_base; 458 + res.end += switch_base; 461 459 462 - target = ocelot_regmap_init(ocelot, res); 460 + target = ocelot_regmap_init(ocelot, &res); 463 461 if (IS_ERR(target)) { 464 462 dev_err(ocelot->dev, 465 463 "Failed to map device memory space\n"); ··· 479 479 for (port = 0; port < num_phys_ports; port++) { 480 480 struct ocelot_port *ocelot_port; 481 481 void __iomem *port_regs; 482 - struct resource *res; 483 482 484 483 ocelot_port = devm_kzalloc(ocelot->dev, 485 484 sizeof(struct ocelot_port), ··· 490 491 return -ENOMEM; 491 492 } 492 493 493 - res = &felix->info->port_io_res[port]; 494 - res->flags = IORESOURCE_MEM; 495 - res->start += switch_base; 496 - res->end += switch_base; 494 + memcpy(&res, &felix->info->port_io_res[port], sizeof(res)); 495 + res.flags = IORESOURCE_MEM; 496 + res.start += switch_base; 497 + res.end += switch_base; 497 498 498 - port_regs = devm_ioremap_resource(ocelot->dev, res); 499 + port_regs = devm_ioremap_resource(ocelot->dev, &res); 499 500 if (IS_ERR(port_regs)) { 500 501 dev_err(ocelot->dev, 501 502 "failed to map registers for port %d\n", port);
+3 -3
drivers/net/dsa/ocelot/felix.h
··· 9 9 10 10 /* Platform-specific information */ 11 11 struct felix_info { 12 - struct resource *target_io_res; 13 - struct resource *port_io_res; 14 - struct resource *imdio_res; 12 + const struct resource *target_io_res; 13 + const struct resource *port_io_res; 14 + const struct resource *imdio_res; 15 15 const struct reg_field *regfields; 16 16 const u32 *const *map; 17 17 const struct ocelot_ops *ops;
+10 -12
drivers/net/dsa/ocelot/felix_vsc9959.c
··· 340 340 [GCB] = vsc9959_gcb_regmap, 341 341 }; 342 342 343 - /* Addresses are relative to the PCI device's base address and 344 - * will be fixed up at ioremap time. 345 - */ 346 - static struct resource vsc9959_target_io_res[] = { 343 + /* Addresses are relative to the PCI device's base address */ 344 + static const struct resource vsc9959_target_io_res[] = { 347 345 [ANA] = { 348 346 .start = 0x0280000, 349 347 .end = 0x028ffff, ··· 384 386 }, 385 387 }; 386 388 387 - static struct resource vsc9959_port_io_res[] = { 389 + static const struct resource vsc9959_port_io_res[] = { 388 390 { 389 391 .start = 0x0100000, 390 392 .end = 0x010ffff, ··· 420 422 /* Port MAC 0 Internal MDIO bus through which the SerDes acting as an 421 423 * SGMII/QSGMII MAC PCS can be found. 422 424 */ 423 - static struct resource vsc9959_imdio_res = { 425 + static const struct resource vsc9959_imdio_res = { 424 426 .start = 0x8030, 425 427 .end = 0x8040, 426 428 .name = "imdio", ··· 1116 1118 struct device *dev = ocelot->dev; 1117 1119 resource_size_t imdio_base; 1118 1120 void __iomem *imdio_regs; 1119 - struct resource *res; 1121 + struct resource res; 1120 1122 struct enetc_hw *hw; 1121 1123 struct mii_bus *bus; 1122 1124 int port; ··· 1133 1135 imdio_base = pci_resource_start(felix->pdev, 1134 1136 felix->info->imdio_pci_bar); 1135 1137 1136 - res = felix->info->imdio_res; 1137 - res->flags = IORESOURCE_MEM; 1138 - res->start += imdio_base; 1139 - res->end += imdio_base; 1138 + memcpy(&res, felix->info->imdio_res, sizeof(res)); 1139 + res.flags = IORESOURCE_MEM; 1140 + res.start += imdio_base; 1141 + res.end += imdio_base; 1140 1142 1141 - imdio_regs = devm_ioremap_resource(dev, res); 1143 + imdio_regs = devm_ioremap_resource(dev, &res); 1142 1144 if (IS_ERR(imdio_regs)) { 1143 1145 dev_err(dev, "failed to map internal MDIO registers\n"); 1144 1146 return PTR_ERR(imdio_regs);
+1 -1
drivers/net/ethernet/apple/bmac.c
··· 1182 1182 int i; 1183 1183 unsigned short data; 1184 1184 1185 - for (i = 0; i < 6; i++) 1185 + for (i = 0; i < 3; i++) 1186 1186 { 1187 1187 reset_and_select_srom(dev); 1188 1188 data = read_srom(dev, i + EnetAddressOffset/2, SROMAddressBits);
+7 -6
drivers/net/ethernet/freescale/ucc_geth.c
··· 42 42 #include <soc/fsl/qe/ucc.h> 43 43 #include <soc/fsl/qe/ucc_fast.h> 44 44 #include <asm/machdep.h> 45 + #include <net/sch_generic.h> 45 46 46 47 #include "ucc_geth.h" 47 48 ··· 1549 1548 1550 1549 static void ugeth_quiesce(struct ucc_geth_private *ugeth) 1551 1550 { 1552 - /* Prevent any further xmits, plus detach the device. */ 1553 - netif_device_detach(ugeth->ndev); 1554 - 1555 - /* Wait for any current xmits to finish. */ 1556 - netif_tx_disable(ugeth->ndev); 1551 + /* Prevent any further xmits */ 1552 + netif_tx_stop_all_queues(ugeth->ndev); 1557 1553 1558 1554 /* Disable the interrupt to avoid NAPI rescheduling. */ 1559 1555 disable_irq(ugeth->ug_info->uf_info.irq); ··· 1563 1565 { 1564 1566 napi_enable(&ugeth->napi); 1565 1567 enable_irq(ugeth->ug_info->uf_info.irq); 1566 - netif_device_attach(ugeth->ndev); 1568 + 1569 + /* allow to xmit again */ 1570 + netif_tx_wake_all_queues(ugeth->ndev); 1571 + __netdev_watchdog_up(ugeth->ndev); 1567 1572 } 1568 1573 1569 1574 /* Called every time the controller might need to be made
+1 -1
drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
··· 1070 1070 (port->first_rxq >> MVPP2_CLS_OVERSIZE_RXQ_LOW_BITS)); 1071 1071 1072 1072 val = mvpp2_read(port->priv, MVPP2_CLS_SWFWD_PCTRL_REG); 1073 - val |= MVPP2_CLS_SWFWD_PCTRL_MASK(port->id); 1073 + val &= ~MVPP2_CLS_SWFWD_PCTRL_MASK(port->id); 1074 1074 mvpp2_write(port->priv, MVPP2_CLS_SWFWD_PCTRL_REG, val); 1075 1075 } 1076 1076
+1 -1
drivers/net/ethernet/marvell/pxa168_eth.c
··· 1418 1418 1419 1419 pep->base = devm_platform_ioremap_resource(pdev, 0); 1420 1420 if (IS_ERR(pep->base)) { 1421 - err = -ENOMEM; 1421 + err = PTR_ERR(pep->base); 1422 1422 goto err_netdev; 1423 1423 } 1424 1424
+1 -1
drivers/net/ethernet/mellanox/mlx4/fw.c
··· 2734 2734 if (err) { 2735 2735 mlx4_err(dev, "Failed to retrieve required operation: %d\n", 2736 2736 err); 2737 - return; 2737 + goto out; 2738 2738 } 2739 2739 MLX4_GET(modifier, outbox, GET_OP_REQ_MODIFIER_OFFSET); 2740 2740 MLX4_GET(token, outbox, GET_OP_REQ_TOKEN_OFFSET);
+55 -4
drivers/net/ethernet/mellanox/mlx5/core/cmd.c
··· 848 848 static void mlx5_free_cmd_msg(struct mlx5_core_dev *dev, 849 849 struct mlx5_cmd_msg *msg); 850 850 851 + static bool opcode_allowed(struct mlx5_cmd *cmd, u16 opcode) 852 + { 853 + if (cmd->allowed_opcode == CMD_ALLOWED_OPCODE_ALL) 854 + return true; 855 + 856 + return cmd->allowed_opcode == opcode; 857 + } 858 + 851 859 static void cmd_work_handler(struct work_struct *work) 852 860 { 853 861 struct mlx5_cmd_work_ent *ent = container_of(work, struct mlx5_cmd_work_ent, work); ··· 869 861 int alloc_ret; 870 862 int cmd_mode; 871 863 864 + complete(&ent->handling); 872 865 sem = ent->page_queue ? &cmd->pages_sem : &cmd->sem; 873 866 down(sem); 874 867 if (!ent->page_queue) { ··· 922 913 923 914 /* Skip sending command to fw if internal error */ 924 915 if (pci_channel_offline(dev->pdev) || 925 - dev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR) { 916 + dev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR || 917 + cmd->state != MLX5_CMDIF_STATE_UP || 918 + !opcode_allowed(&dev->cmd, ent->op)) { 926 919 u8 status = 0; 927 920 u32 drv_synd; 928 921 ··· 989 978 struct mlx5_cmd *cmd = &dev->cmd; 990 979 int err; 991 980 981 + if (!wait_for_completion_timeout(&ent->handling, timeout) && 982 + cancel_work_sync(&ent->work)) { 983 + ent->ret = -ECANCELED; 984 + goto out_err; 985 + } 992 986 if (cmd->mode == CMD_MODE_POLLING || ent->polling) { 993 987 wait_for_completion(&ent->done); 994 988 } else if (!wait_for_completion_timeout(&ent->done, timeout)) { ··· 1001 985 mlx5_cmd_comp_handler(dev, 1UL << ent->idx, true); 1002 986 } 1003 987 988 + out_err: 1004 989 err = ent->ret; 1005 990 1006 991 if (err == -ETIMEDOUT) { 1007 992 mlx5_core_warn(dev, "%s(0x%x) timeout. Will cause a leak of a command resource\n", 993 + mlx5_command_str(msg_to_opcode(ent->in)), 994 + msg_to_opcode(ent->in)); 995 + } else if (err == -ECANCELED) { 996 + mlx5_core_warn(dev, "%s(0x%x) canceled on out of queue timeout.\n", 1008 997 mlx5_command_str(msg_to_opcode(ent->in)), 1009 998 msg_to_opcode(ent->in)); 1010 999 } ··· 1047 1026 ent->token = token; 1048 1027 ent->polling = force_polling; 1049 1028 1029 + init_completion(&ent->handling); 1050 1030 if (!callback) 1051 1031 init_completion(&ent->done); 1052 1032 ··· 1067 1045 err = wait_func(dev, ent); 1068 1046 if (err == -ETIMEDOUT) 1069 1047 goto out; 1048 + if (err == -ECANCELED) 1049 + goto out_free; 1070 1050 1071 1051 ds = ent->ts2 - ent->ts1; 1072 1052 op = MLX5_GET(mbox_in, in->first.data, opcode); ··· 1415 1391 mlx5_cmdif_debugfs_init(dev); 1416 1392 } 1417 1393 1394 + void mlx5_cmd_allowed_opcode(struct mlx5_core_dev *dev, u16 opcode) 1395 + { 1396 + struct mlx5_cmd *cmd = &dev->cmd; 1397 + int i; 1398 + 1399 + for (i = 0; i < cmd->max_reg_cmds; i++) 1400 + down(&cmd->sem); 1401 + down(&cmd->pages_sem); 1402 + 1403 + cmd->allowed_opcode = opcode; 1404 + 1405 + up(&cmd->pages_sem); 1406 + for (i = 0; i < cmd->max_reg_cmds; i++) 1407 + up(&cmd->sem); 1408 + } 1409 + 1418 1410 static void mlx5_cmd_change_mod(struct mlx5_core_dev *dev, int mode) 1419 1411 { 1420 1412 struct mlx5_cmd *cmd = &dev->cmd; ··· 1707 1667 int err; 1708 1668 u8 status = 0; 1709 1669 u32 drv_synd; 1670 + u16 opcode; 1710 1671 u8 token; 1711 1672 1673 + opcode = MLX5_GET(mbox_in, in, opcode); 1712 1674 if (pci_channel_offline(dev->pdev) || 1713 - dev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR) { 1714 - u16 opcode = MLX5_GET(mbox_in, in, opcode); 1715 - 1675 + dev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR || 1676 + dev->cmd.state != MLX5_CMDIF_STATE_UP || 1677 + !opcode_allowed(&dev->cmd, opcode)) { 1716 1678 err = mlx5_internal_err_ret_value(dev, opcode, &drv_synd, &status); 1717 1679 MLX5_SET(mbox_out, out, status, status); 1718 1680 MLX5_SET(mbox_out, out, syndrome, drv_synd); ··· 1979 1937 goto err_free_page; 1980 1938 } 1981 1939 1940 + cmd->state = MLX5_CMDIF_STATE_DOWN; 1982 1941 cmd->checksum_disabled = 1; 1983 1942 cmd->max_reg_cmds = (1 << cmd->log_sz) - 1; 1984 1943 cmd->bitmask = (1UL << cmd->max_reg_cmds) - 1; ··· 2017 1974 mlx5_core_dbg(dev, "descriptor at dma 0x%llx\n", (unsigned long long)(cmd->dma)); 2018 1975 2019 1976 cmd->mode = CMD_MODE_POLLING; 1977 + cmd->allowed_opcode = CMD_ALLOWED_OPCODE_ALL; 2020 1978 2021 1979 create_msg_cache(dev); ··· 2057 2013 dma_pool_destroy(cmd->pool); 2058 2014 } 2059 2015 EXPORT_SYMBOL(mlx5_cmd_cleanup); 2016 + 2017 + void mlx5_cmd_set_state(struct mlx5_core_dev *dev, 2018 + enum mlx5_cmdif_state cmdif_state) 2019 + { 2020 + dev->cmd.state = cmdif_state; 2021 + } 2022 + EXPORT_SYMBOL(mlx5_cmd_set_state);
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/en.h
··· 1042 1042 int mlx5e_create_indirect_rqt(struct mlx5e_priv *priv); 1043 1043 1044 1044 int mlx5e_create_indirect_tirs(struct mlx5e_priv *priv, bool inner_ttc); 1045 - void mlx5e_destroy_indirect_tirs(struct mlx5e_priv *priv, bool inner_ttc); 1045 + void mlx5e_destroy_indirect_tirs(struct mlx5e_priv *priv); 1046 1046 1047 1047 int mlx5e_create_direct_rqts(struct mlx5e_priv *priv, struct mlx5e_tir *tirs); 1048 1048 void mlx5e_destroy_direct_rqts(struct mlx5e_priv *priv, struct mlx5e_tir *tirs);
+3 -2
drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
··· 711 711 struct netlink_ext_ack *extack) 712 712 { 713 713 struct mlx5_tc_ct_priv *ct_priv = mlx5_tc_ct_get_ct_priv(priv); 714 + struct flow_rule *rule = flow_cls_offload_flow_rule(f); 714 715 struct flow_dissector_key_ct *mask, *key; 715 716 bool trk, est, untrk, unest, new; 716 717 u32 ctstate = 0, ctstate_mask = 0; ··· 719 718 u16 ct_state, ct_state_mask; 720 719 struct flow_match_ct match; 721 720 722 - if (!flow_rule_match_key(f->rule, FLOW_DISSECTOR_KEY_CT)) 721 + if (!flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_CT)) 723 722 return 0; 724 723 725 724 if (!ct_priv) { ··· 728 727 return -EOPNOTSUPP; 729 728 } 730 729 731 - flow_rule_match_ct(f->rule, &match); 730 + flow_rule_match_ct(rule, &match); 732 731 733 732 key = match.key; 734 733 mask = match.mask;
+3 -1
drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.h
··· 130 130 struct flow_cls_offload *f, 131 131 struct netlink_ext_ack *extack) 132 132 { 133 - if (!flow_rule_match_key(f->rule, FLOW_DISSECTOR_KEY_CT)) 133 + struct flow_rule *rule = flow_cls_offload_flow_rule(f); 134 + 135 + if (!flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_CT)) 134 136 return 0; 135 137 136 138 NL_SET_ERR_MSG_MOD(extack, "mlx5 tc ct offload isn't enabled.");
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.c
··· 82 82 struct mlx5e_ktls_offload_context_tx *tx_priv = 83 83 mlx5e_get_ktls_tx_priv_ctx(tls_ctx); 84 84 85 - mlx5_ktls_destroy_key(priv->mdev, tx_priv->key_id); 86 85 mlx5e_destroy_tis(priv->mdev, tx_priv->tisn); 86 + mlx5_ktls_destroy_key(priv->mdev, tx_priv->key_id); 87 87 kvfree(tx_priv); 88 88 } 89 89
+7 -5
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
··· 2715 2715 mlx5_core_modify_tir(mdev, priv->indir_tir[tt].tirn, in); 2716 2716 } 2717 2717 2718 - if (!mlx5e_tunnel_inner_ft_supported(priv->mdev)) 2718 + /* Verify inner tirs resources allocated */ 2719 + if (!priv->inner_indir_tir[0].tirn) 2719 2720 return; 2720 2721 2721 2722 for (tt = 0; tt < MLX5E_NUM_INDIR_TIRS; tt++) { ··· 3437 3436 return err; 3438 3437 } 3439 3438 3440 - void mlx5e_destroy_indirect_tirs(struct mlx5e_priv *priv, bool inner_ttc) 3439 + void mlx5e_destroy_indirect_tirs(struct mlx5e_priv *priv) 3441 3440 { 3442 3441 int i; 3443 3442 3444 3443 for (i = 0; i < MLX5E_NUM_INDIR_TIRS; i++) 3445 3444 mlx5e_destroy_tir(priv->mdev, &priv->indir_tir[i]); 3446 3445 3447 - if (!inner_ttc || !mlx5e_tunnel_inner_ft_supported(priv->mdev)) 3446 + /* Verify inner tirs resources allocated */ 3447 + if (!priv->inner_indir_tir[0].tirn) 3448 3448 return; 3449 3449 3450 3450 for (i = 0; i < MLX5E_NUM_INDIR_TIRS; i++) ··· 5124 5122 err_destroy_direct_tirs: 5125 5123 mlx5e_destroy_direct_tirs(priv, priv->direct_tir); 5126 5124 err_destroy_indirect_tirs: 5127 - mlx5e_destroy_indirect_tirs(priv, true); 5125 + mlx5e_destroy_indirect_tirs(priv); 5128 5126 err_destroy_direct_rqts: 5129 5127 mlx5e_destroy_direct_rqts(priv, priv->direct_tir); 5130 5128 err_destroy_indirect_rqts: ··· 5143 5141 mlx5e_destroy_direct_tirs(priv, priv->xsk_tir); 5144 5142 mlx5e_destroy_direct_rqts(priv, priv->xsk_tir); 5145 5143 mlx5e_destroy_direct_tirs(priv, priv->direct_tir); 5146 - mlx5e_destroy_indirect_tirs(priv, true); 5144 + mlx5e_destroy_indirect_tirs(priv); 5147 5145 mlx5e_destroy_direct_rqts(priv, priv->direct_tir); 5148 5146 mlx5e_destroy_rqt(priv, &priv->indir_rqt); 5149 5147 mlx5e_close_drop_rq(&priv->drop_rq);
+4 -8
drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
··· 650 650 return netdev->netdev_ops == &mlx5e_netdev_ops_uplink_rep; 651 651 } 652 652 653 - bool mlx5e_eswitch_rep(struct net_device *netdev) 653 + bool mlx5e_eswitch_vf_rep(struct net_device *netdev) 654 654 { 655 - if (netdev->netdev_ops == &mlx5e_netdev_ops_rep || 656 - netdev->netdev_ops == &mlx5e_netdev_ops_uplink_rep) 657 - return true; 658 - 659 - return false; 655 + return netdev->netdev_ops == &mlx5e_netdev_ops_rep; 660 656 } 661 657 662 658 static void mlx5e_build_rep_params(struct net_device *netdev) ··· 906 910 err_destroy_direct_tirs: 907 911 mlx5e_destroy_direct_tirs(priv, priv->direct_tir); 908 912 err_destroy_indirect_tirs: 909 - mlx5e_destroy_indirect_tirs(priv, false); 913 + mlx5e_destroy_indirect_tirs(priv); 910 914 err_destroy_direct_rqts: 911 915 mlx5e_destroy_direct_rqts(priv, priv->direct_tir); 912 916 err_destroy_indirect_rqts: ··· 924 928 mlx5e_destroy_rep_root_ft(priv); 925 929 mlx5e_destroy_ttc_table(priv, &priv->fs.ttc); 926 930 mlx5e_destroy_direct_tirs(priv, priv->direct_tir); 927 - mlx5e_destroy_indirect_tirs(priv, false); 931 + mlx5e_destroy_indirect_tirs(priv); 928 932 mlx5e_destroy_direct_rqts(priv, priv->direct_tir); 929 933 mlx5e_destroy_rqt(priv, &priv->indir_rqt); 930 934 mlx5e_close_drop_rq(&priv->drop_rq);
+6 -1
drivers/net/ethernet/mellanox/mlx5/core/en_rep.h
··· 221 221 222 222 void mlx5e_rep_queue_neigh_stats_work(struct mlx5e_priv *priv); 223 223 224 - bool mlx5e_eswitch_rep(struct net_device *netdev); 224 + bool mlx5e_eswitch_vf_rep(struct net_device *netdev); 225 225 bool mlx5e_eswitch_uplink_rep(struct net_device *netdev); 226 + static inline bool mlx5e_eswitch_rep(struct net_device *netdev) 227 + { 228 + return mlx5e_eswitch_vf_rep(netdev) || 229 + mlx5e_eswitch_uplink_rep(netdev); 230 + } 226 231 227 232 #else /* CONFIG_MLX5_ESWITCH */ 228 233 static inline bool mlx5e_is_uplink_rep(struct mlx5e_priv *priv) { return false; }
+33 -5
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
··· 3188 3188 return true; 3189 3189 } 3190 3190 3191 + static bool same_port_devs(struct mlx5e_priv *priv, struct mlx5e_priv *peer_priv) 3192 + { 3193 + return priv->mdev == peer_priv->mdev; 3194 + } 3195 + 3191 3196 static bool same_hw_devs(struct mlx5e_priv *priv, struct mlx5e_priv *peer_priv) 3192 3197 { 3193 3198 struct mlx5_core_dev *fmdev, *pmdev; ··· 3421 3416 return jhash(&key->key, sizeof(key->key), 0); 3422 3417 } 3423 3418 3424 - static bool is_merged_eswitch_dev(struct mlx5e_priv *priv, 3419 + static bool is_merged_eswitch_vfs(struct mlx5e_priv *priv, 3425 3420 struct net_device *peer_netdev) 3426 3421 { 3427 3422 struct mlx5e_priv *peer_priv; ··· 3429 3424 peer_priv = netdev_priv(peer_netdev); 3430 3425 3431 3426 return (MLX5_CAP_ESW(priv->mdev, merged_eswitch) && 3432 - mlx5e_eswitch_rep(priv->netdev) && 3433 - mlx5e_eswitch_rep(peer_netdev) && 3427 + mlx5e_eswitch_vf_rep(priv->netdev) && 3428 + mlx5e_eswitch_vf_rep(peer_netdev) && 3434 3429 same_hw_devs(priv, peer_priv)); 3435 3430 } 3436 3431 ··· 3804 3799 return err; 3805 3800 } 3806 3801 3802 + static bool same_hw_reps(struct mlx5e_priv *priv, 3803 + struct net_device *peer_netdev) 3804 + { 3805 + struct mlx5e_priv *peer_priv; 3806 + 3807 + peer_priv = netdev_priv(peer_netdev); 3808 + 3809 + return mlx5e_eswitch_rep(priv->netdev) && 3810 + mlx5e_eswitch_rep(peer_netdev) && 3811 + same_hw_devs(priv, peer_priv); 3812 + } 3813 + 3814 + static bool is_lag_dev(struct mlx5e_priv *priv, 3815 + struct net_device *peer_netdev) 3816 + { 3817 + return ((mlx5_lag_is_sriov(priv->mdev) || 3818 + mlx5_lag_is_multipath(priv->mdev)) && 3819 + same_hw_reps(priv, peer_netdev)); 3820 + } 3821 + 3807 3822 bool mlx5e_is_valid_eswitch_fwd_dev(struct mlx5e_priv *priv, 3808 3823 struct net_device *out_dev) 3809 3824 { 3810 - if (is_merged_eswitch_dev(priv, out_dev)) 3825 + if (is_merged_eswitch_vfs(priv, out_dev)) 3826 + return true; 3827 + 3828 + if (is_lag_dev(priv, out_dev)) 3811 3829 return true; 3812 3830 3813 3831 return mlx5e_eswitch_rep(out_dev) && 3814 - same_hw_devs(priv, netdev_priv(out_dev)); 3832 + same_port_devs(priv, netdev_priv(out_dev)); 3815 3833 } 3816 3834 3817 3835 static bool is_duplicated_output_device(struct net_device *dev,
+6 -3
drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
··· 529 529 void mlx5e_free_txqsq_descs(struct mlx5e_txqsq *sq) 530 530 { 531 531 struct mlx5e_tx_wqe_info *wi; 532 + u32 dma_fifo_cc, nbytes = 0; 533 + u16 ci, sqcc, npkts = 0; 532 534 struct sk_buff *skb; 533 - u32 dma_fifo_cc; 534 - u16 sqcc; 535 - u16 ci; 536 535 int i; 537 536 538 537 sqcc = sq->cc; ··· 556 557 } 557 558 558 559 dev_kfree_skb_any(skb); 560 + npkts++; 561 + nbytes += wi->num_bytes; 559 562 sqcc += wi->num_wqebbs; 560 563 } 561 564 562 565 sq->dma_fifo_cc = dma_fifo_cc; 563 566 sq->cc = sqcc; 567 + 568 + netdev_tx_completed_queue(sq->txq, npkts, nbytes); 564 569 } 565 570 566 571 #ifdef CONFIG_MLX5_CORE_IPOIB
+3
drivers/net/ethernet/mellanox/mlx5/core/eq.c
··· 609 609 .nent = MLX5_NUM_CMD_EQE, 610 610 .mask[0] = 1ull << MLX5_EVENT_TYPE_CMD, 611 611 }; 612 + mlx5_cmd_allowed_opcode(dev, MLX5_CMD_OP_CREATE_EQ); 612 613 err = setup_async_eq(dev, &table->cmd_eq, &param, "cmd"); 613 614 if (err) 614 615 goto err1; 615 616 616 617 mlx5_cmd_use_events(dev); 618 + mlx5_cmd_allowed_opcode(dev, CMD_ALLOWED_OPCODE_ALL); 617 619 618 620 param = (struct mlx5_eq_param) { 619 621 .irq_index = 0, ··· 645 643 mlx5_cmd_use_polling(dev); 646 644 cleanup_async_eq(dev, &table->cmd_eq, "cmd"); 647 645 err1: 646 + mlx5_cmd_allowed_opcode(dev, CMD_ALLOWED_OPCODE_ALL); 648 647 mlx5_eq_notifier_unregister(dev, &table->cq_err_nb); 649 648 return err; 650 649 }
+3 -1
drivers/net/ethernet/mellanox/mlx5/core/events.c
··· 346 346 events->dev = dev; 347 347 dev->priv.events = events; 348 348 events->wq = create_singlethread_workqueue("mlx5_events"); 349 - if (!events->wq) 349 + if (!events->wq) { 350 + kfree(events); 350 351 return -ENOMEM; 352 + } 351 353 INIT_WORK(&events->pcie_core_work, mlx5_pcie_event); 352 354 353 355 return 0;
+19 -11
drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
··· 344 344 if (node->del_hw_func) 345 345 node->del_hw_func(node); 346 346 if (parent_node) { 347 - /* Only root namespace doesn't have parent and we just 348 - * need to free its node. 349 - */ 350 347 down_write_ref_node(parent_node, locked); 351 348 list_del_init(&node->list); 352 - if (node->del_sw_func) 353 - node->del_sw_func(node); 354 - up_write_ref_node(parent_node, locked); 355 - } else { 356 - kfree(node); 357 349 } 350 + node->del_sw_func(node); 351 + if (parent_node) 352 + up_write_ref_node(parent_node, locked); 358 353 node = NULL; 359 354 } 360 355 if (!node && parent_node) ··· 463 468 fs_get_obj(ft, node); 464 469 465 470 rhltable_destroy(&ft->fgs_hash); 466 - fs_get_obj(prio, ft->node.parent); 467 - prio->num_ft--; 471 + if (ft->node.parent) { 472 + fs_get_obj(prio, ft->node.parent); 473 + prio->num_ft--; 474 + } 468 475 kfree(ft); 469 476 } 470 477 ··· 2350 2353 return 0; 2351 2354 } 2352 2355 2356 + static void del_sw_root_ns(struct fs_node *node) 2357 + { 2358 + struct mlx5_flow_root_namespace *root_ns; 2359 + struct mlx5_flow_namespace *ns; 2360 + 2361 + fs_get_obj(ns, node); 2362 + root_ns = container_of(ns, struct mlx5_flow_root_namespace, ns); 2363 + mutex_destroy(&root_ns->chain_lock); 2364 + kfree(node); 2365 + } 2366 + 2353 2367 static struct mlx5_flow_root_namespace 2354 2368 *create_root_ns(struct mlx5_flow_steering *steering, 2355 2369 enum fs_flow_table_type table_type) ··· 2387 2379 ns = &root_ns->ns; 2388 2380 fs_init_namespace(ns); 2389 2381 mutex_init(&root_ns->chain_lock); 2390 - tree_init_node(&ns->node, NULL, NULL); 2382 + tree_init_node(&ns->node, NULL, del_sw_root_ns); 2391 2383 tree_add_node(&ns->node, NULL); 2392 2384 2393 2385 return root_ns;
+2 -2
drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c
··· 407 407 err_destroy_direct_tirs: 408 408 mlx5e_destroy_direct_tirs(priv, priv->direct_tir); 409 409 err_destroy_indirect_tirs: 410 - mlx5e_destroy_indirect_tirs(priv, true); 410 + mlx5e_destroy_indirect_tirs(priv); 411 411 err_destroy_direct_rqts: 412 412 mlx5e_destroy_direct_rqts(priv, priv->direct_tir); 413 413 err_destroy_indirect_rqts: ··· 423 423 { 424 424 mlx5i_destroy_flow_steering(priv); 425 425 mlx5e_destroy_direct_tirs(priv, priv->direct_tir); 426 - mlx5e_destroy_indirect_tirs(priv, true); 426 + mlx5e_destroy_indirect_tirs(priv); 427 427 mlx5e_destroy_direct_rqts(priv, priv->direct_tir); 428 428 mlx5e_destroy_rqt(priv, &priv->indir_rqt); 429 429 mlx5e_close_drop_rq(&priv->drop_rq);
+6 -1
drivers/net/ethernet/mellanox/mlx5/core/main.c
··· 946 946 goto err_cmd_cleanup; 947 947 } 948 948 949 + mlx5_cmd_set_state(dev, MLX5_CMDIF_STATE_UP); 950 + 949 951 err = mlx5_core_enable_hca(dev, 0); 950 952 if (err) { 951 953 mlx5_core_err(dev, "enable hca failed\n"); ··· 1009 1007 err_disable_hca: 1010 1008 mlx5_core_disable_hca(dev, 0); 1011 1009 err_cmd_cleanup: 1010 + mlx5_cmd_set_state(dev, MLX5_CMDIF_STATE_DOWN); 1012 1011 mlx5_cmd_cleanup(dev); 1013 1012 1014 1013 return err; ··· 1027 1024 } 1028 1025 mlx5_reclaim_startup_pages(dev); 1029 1026 mlx5_core_disable_hca(dev, 0); 1027 + mlx5_cmd_set_state(dev, MLX5_CMDIF_STATE_DOWN); 1030 1028 mlx5_cmd_cleanup(dev); 1031 1029 1032 1030 return 0; ··· 1175 1171 1176 1172 err = mlx5_function_setup(dev, boot); 1177 1173 if (err) 1178 - goto out; 1174 + goto err_function; 1179 1175 1180 1176 if (boot) { 1181 1177 err = mlx5_init_once(dev); ··· 1212 1208 mlx5_cleanup_once(dev); 1213 1209 function_teardown: 1214 1210 mlx5_function_teardown(dev, boot); 1211 + err_function: 1215 1212 dev->state = MLX5_DEVICE_STATE_INTERNAL_ERROR; 1216 1213 out: 1217 1214 mutex_unlock(&dev->intf_state_mutex);
+12 -2
drivers/net/ethernet/mellanox/mlxsw/spectrum.c
··· 3555 3555 mlxsw_sp_port_remove(mlxsw_sp, i); 3556 3556 mlxsw_sp_cpu_port_remove(mlxsw_sp); 3557 3557 kfree(mlxsw_sp->ports); 3558 + mlxsw_sp->ports = NULL; 3558 3559 } 3559 3560 3560 3561 static int mlxsw_sp_ports_create(struct mlxsw_sp *mlxsw_sp) ··· 3592 3591 mlxsw_sp_cpu_port_remove(mlxsw_sp); 3593 3592 err_cpu_port_create: 3594 3593 kfree(mlxsw_sp->ports); 3594 + mlxsw_sp->ports = NULL; 3595 3595 return err; 3596 3596 } 3597 3597 ··· 3714 3712 return mlxsw_core_res_get(mlxsw_core, local_ports_in_x_res_id); 3715 3713 } 3716 3714 3715 + static struct mlxsw_sp_port * 3716 + mlxsw_sp_port_get_by_local_port(struct mlxsw_sp *mlxsw_sp, u8 local_port) 3717 + { 3718 + if (mlxsw_sp->ports && mlxsw_sp->ports[local_port]) 3719 + return mlxsw_sp->ports[local_port]; 3720 + return NULL; 3721 + } 3722 + 3717 3723 static int mlxsw_sp_port_split(struct mlxsw_core *mlxsw_core, u8 local_port, 3718 3724 unsigned int count, 3719 3725 struct netlink_ext_ack *extack) ··· 3735 3725 int i; 3736 3726 int err; 3737 3727 3738 - mlxsw_sp_port = mlxsw_sp->ports[local_port]; 3728 + mlxsw_sp_port = mlxsw_sp_port_get_by_local_port(mlxsw_sp, local_port); 3739 3729 if (!mlxsw_sp_port) { 3740 3730 dev_err(mlxsw_sp->bus_info->dev, "Port number \"%d\" does not exist\n", 3741 3731 local_port); ··· 3830 3820 int offset; 3831 3821 int i; 3832 3822 3833 - mlxsw_sp_port = mlxsw_sp->ports[local_port]; 3823 + mlxsw_sp_port = mlxsw_sp_port_get_by_local_port(mlxsw_sp, local_port); 3834 3824 if (!mlxsw_sp_port) { 3835 3825 dev_err(mlxsw_sp->bus_info->dev, "Port number \"%d\" does not exist\n", 3836 3826 local_port);
+8
drivers/net/ethernet/mellanox/mlxsw/switchx2.c
··· 1259 1259 if (mlxsw_sx_port_created(mlxsw_sx, i)) 1260 1260 mlxsw_sx_port_remove(mlxsw_sx, i); 1261 1261 kfree(mlxsw_sx->ports); 1262 + mlxsw_sx->ports = NULL; 1262 1263 } 1263 1264 1264 1265 static int mlxsw_sx_ports_create(struct mlxsw_sx *mlxsw_sx) ··· 1294 1293 if (mlxsw_sx_port_created(mlxsw_sx, i)) 1295 1294 mlxsw_sx_port_remove(mlxsw_sx, i); 1296 1295 kfree(mlxsw_sx->ports); 1296 + mlxsw_sx->ports = NULL; 1297 1297 return err; 1298 1298 } 1299 1299 ··· 1377 1375 struct mlxsw_sx *mlxsw_sx = mlxsw_core_driver_priv(mlxsw_core); 1378 1376 u8 module, width; 1379 1377 int err; 1378 + 1379 + if (!mlxsw_sx->ports || !mlxsw_sx->ports[local_port]) { 1380 + dev_err(mlxsw_sx->bus_info->dev, "Port number \"%d\" does not exist\n", 1381 + local_port); 1382 + return -EINVAL; 1383 + } 1380 1384 1381 1385 if (new_type == DEVLINK_PORT_TYPE_AUTO) 1382 1386 return -EOPNOTSUPP;
+1 -1
drivers/net/ethernet/mscc/ocelot.c
··· 1472 1472 unsigned long ageing_clock_t) 1473 1473 { 1474 1474 unsigned long ageing_jiffies = clock_t_to_jiffies(ageing_clock_t); 1475 - u32 ageing_time = jiffies_to_msecs(ageing_jiffies) / 1000; 1475 + u32 ageing_time = jiffies_to_msecs(ageing_jiffies); 1476 1476 1477 1477 ocelot_set_ageing_time(ocelot, ageing_time); 1478 1478 }
+15 -2
drivers/net/ethernet/realtek/r8169_main.c
··· 1027 1027 RTL_R32(tp, EPHYAR) & EPHYAR_DATA_MASK : ~0; 1028 1028 } 1029 1029 1030 + static void r8168fp_adjust_ocp_cmd(struct rtl8169_private *tp, u32 *cmd, int type) 1031 + { 1032 + /* based on RTL8168FP_OOBMAC_BASE in vendor driver */ 1033 + if (tp->mac_version == RTL_GIGA_MAC_VER_52 && type == ERIAR_OOB) 1034 + *cmd |= 0x7f0 << 18; 1035 + } 1036 + 1030 1037 DECLARE_RTL_COND(rtl_eriar_cond) 1031 1038 { 1032 1039 return RTL_R32(tp, ERIAR) & ERIAR_FLAG; ··· 1042 1035 static void _rtl_eri_write(struct rtl8169_private *tp, int addr, u32 mask, 1043 1036 u32 val, int type) 1044 1037 { 1038 + u32 cmd = ERIAR_WRITE_CMD | type | mask | addr; 1039 + 1045 1040 BUG_ON((addr & 3) || (mask == 0)); 1046 1041 RTL_W32(tp, ERIDR, val); 1047 - RTL_W32(tp, ERIAR, ERIAR_WRITE_CMD | type | mask | addr); 1042 + r8168fp_adjust_ocp_cmd(tp, &cmd, type); 1043 + RTL_W32(tp, ERIAR, cmd); 1048 1044 1049 1045 rtl_loop_wait_low(tp, &rtl_eriar_cond, 100, 100); 1050 1046 } ··· 1060 1050 1061 1051 static u32 _rtl_eri_read(struct rtl8169_private *tp, int addr, int type) 1062 1052 { 1063 - RTL_W32(tp, ERIAR, ERIAR_READ_CMD | type | ERIAR_MASK_1111 | addr); 1053 + u32 cmd = ERIAR_READ_CMD | type | ERIAR_MASK_1111 | addr; 1054 + 1055 + r8168fp_adjust_ocp_cmd(tp, &cmd, type); 1056 + RTL_W32(tp, ERIAR, cmd); 1064 1057 1065 1058 return rtl_loop_wait_high(tp, &rtl_eriar_cond, 100, 100) ? 1066 1059 RTL_R32(tp, ERIDR) : ~0;
+4 -4
drivers/net/ethernet/sgi/ioc3-eth.c
··· 848 848 ip = netdev_priv(dev); 849 849 ip->dma_dev = pdev->dev.parent; 850 850 ip->regs = devm_platform_ioremap_resource(pdev, 0); 851 - if (!ip->regs) { 852 - err = -ENOMEM; 851 + if (IS_ERR(ip->regs)) { 852 + err = PTR_ERR(ip->regs); 853 853 goto out_free; 854 854 } 855 855 856 856 ip->ssram = devm_platform_ioremap_resource(pdev, 1); 857 - if (!ip->ssram) { 858 - err = -ENOMEM; 857 + if (IS_ERR(ip->ssram)) { 858 + err = PTR_ERR(ip->ssram); 859 859 goto out_free; 860 860 } 861 861
+5 -4
drivers/net/ethernet/smsc/smsc911x.c
··· 2493 2493 2494 2494 retval = smsc911x_init(dev); 2495 2495 if (retval < 0) 2496 - goto out_disable_resources; 2496 + goto out_init_fail; 2497 2497 2498 2498 netif_carrier_off(dev); 2499 2499 2500 2500 retval = smsc911x_mii_init(pdev, dev); 2501 2501 if (retval) { 2502 2502 SMSC_WARN(pdata, probe, "Error %i initialising mii", retval); 2503 - goto out_disable_resources; 2503 + goto out_init_fail; 2504 2504 } 2505 2505 2506 2506 retval = register_netdev(dev); 2507 2507 if (retval) { 2508 2508 SMSC_WARN(pdata, probe, "Error %i registering device", retval); 2509 - goto out_disable_resources; 2509 + goto out_init_fail; 2510 2510 } else { 2511 2511 SMSC_TRACE(pdata, probe, 2512 2512 "Network interface: \"%s\"", dev->name); ··· 2547 2547 2548 2548 return 0; 2549 2549 2550 - out_disable_resources: 2550 + out_init_fail: 2551 2551 pm_runtime_put(&pdev->dev); 2552 2552 pm_runtime_disable(&pdev->dev); 2553 + out_disable_resources: 2553 2554 (void)smsc911x_disable_resources(pdev); 2554 2555 out_enable_resources_fail: 2555 2556 smsc911x_free_resources(pdev);
+13
drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c
··· 319 319 /* Enable PTP clock */ 320 320 regmap_read(gmac->nss_common, NSS_COMMON_CLK_GATE, &val); 321 321 val |= NSS_COMMON_CLK_GATE_PTP_EN(gmac->id); 322 + switch (gmac->phy_mode) { 323 + case PHY_INTERFACE_MODE_RGMII: 324 + val |= NSS_COMMON_CLK_GATE_RGMII_RX_EN(gmac->id) | 325 + NSS_COMMON_CLK_GATE_RGMII_TX_EN(gmac->id); 326 + break; 327 + case PHY_INTERFACE_MODE_SGMII: 328 + val |= NSS_COMMON_CLK_GATE_GMII_RX_EN(gmac->id) | 329 + NSS_COMMON_CLK_GATE_GMII_TX_EN(gmac->id); 330 + break; 331 + default: 332 + /* We don't get here; the switch above will have errored out */ 333 + unreachable(); 334 + } 322 335 regmap_write(gmac->nss_common, NSS_COMMON_CLK_GATE, val); 323 336 324 337 if (gmac->phy_mode == PHY_INTERFACE_MODE_SGMII) {
+2 -2
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 5181 5181 return ret; 5182 5182 } 5183 5183 5184 - netif_device_attach(ndev); 5185 - 5186 5184 mutex_lock(&priv->lock); 5187 5185 5188 5186 stmmac_reset_queues_param(priv); ··· 5206 5208 } 5207 5209 5208 5210 phylink_mac_change(priv->phylink, true); 5211 + 5212 + netif_device_attach(ndev); 5209 5213 5210 5214 return 0; 5211 5215 }
+1 -2
drivers/net/ethernet/sun/cassini.c
··· 4951 4951 cas_cacheline_size)) { 4952 4952 dev_err(&pdev->dev, "Could not set PCI cache " 4953 4953 "line size\n"); 4954 - goto err_write_cacheline; 4954 + goto err_out_free_res; 4955 4955 } 4956 4956 } 4957 4957 #endif ··· 5124 5124 err_out_free_res: 5125 5125 pci_release_regions(pdev); 5126 5126 5127 - err_write_cacheline: 5128 5127 /* Try to restore it in case the error occurred after we 5129 5128 * set it. 5130 5129 */
+2 -1
drivers/net/ethernet/ti/am65-cpsw-nuss.c
··· 2083 2083 ale_params.nu_switch_ale = true; 2084 2084 2085 2085 common->ale = cpsw_ale_create(&ale_params); 2086 - if (!common->ale) { 2086 + if (IS_ERR(common->ale)) { 2087 2087 dev_err(dev, "error initializing ale engine\n"); 2088 + ret = PTR_ERR(common->ale); 2088 2089 goto err_of_clear; 2089 2090 } 2090 2091
+4
drivers/net/ethernet/ti/cpsw.c
··· 1775 1775 struct cpsw_common *cpsw = dev_get_drvdata(dev); 1776 1776 int i; 1777 1777 1778 + rtnl_lock(); 1779 + 1778 1780 for (i = 0; i < cpsw->data.slaves; i++) 1779 1781 if (cpsw->slaves[i].ndev) 1780 1782 if (netif_running(cpsw->slaves[i].ndev)) 1781 1783 cpsw_ndo_stop(cpsw->slaves[i].ndev); 1784 + 1785 + rtnl_unlock(); 1782 1786 1783 1787 /* Select sleep pin state */ 1784 1788 pinctrl_pm_select_sleep_state(dev);
+1 -1
drivers/net/ethernet/ti/cpsw_ale.c
··· 955 955 956 956 ale = devm_kzalloc(params->dev, sizeof(*ale), GFP_KERNEL); 957 957 if (!ale) 958 - return NULL; 958 + return ERR_PTR(-ENOMEM); 959 959 960 960 ale->p0_untag_vid_mask = 961 961 devm_kmalloc_array(params->dev, BITS_TO_LONGS(VLAN_N_VID),
+2 -2
drivers/net/ethernet/ti/cpsw_priv.c
··· 504 504 ale_params.ale_ports = CPSW_ALE_PORTS_NUM; 505 505 506 506 cpsw->ale = cpsw_ale_create(&ale_params); 507 - if (!cpsw->ale) { 507 + if (IS_ERR(cpsw->ale)) { 508 508 dev_err(dev, "error initializing ale engine\n"); 509 - return -ENODEV; 509 + return PTR_ERR(cpsw->ale); 510 510 } 511 511 512 512 dma_params.dev = dev;
+2 -2
drivers/net/ethernet/ti/netcp_ethss.c
··· 3704 3704 ale_params.nu_switch_ale = true; 3705 3705 } 3706 3706 gbe_dev->ale = cpsw_ale_create(&ale_params); 3707 - if (!gbe_dev->ale) { 3707 + if (IS_ERR(gbe_dev->ale)) { 3708 3708 dev_err(gbe_dev->dev, "error initializing ale engine\n"); 3709 - ret = -ENODEV; 3709 + ret = PTR_ERR(gbe_dev->ale); 3710 3710 goto free_sec_ports; 3711 3711 } else { 3712 3712 dev_dbg(gbe_dev->dev, "Created a gbe ale engine\n");
+1
drivers/net/ipa/gsi.c
··· 1405 1405 while (count < budget) { 1406 1406 struct gsi_trans *trans; 1407 1407 1408 + count++; 1408 1409 trans = gsi_channel_poll_one(channel); 1409 1410 if (!trans) 1410 1411 break;
+1 -2
drivers/net/netdevsim/dev.c
··· 858 858 return -EINVAL; 859 859 860 860 cnt = &nsim_dev->trap_data->trap_policers_cnt_arr[policer->id - 1]; 861 - *p_drops = *cnt; 862 - *cnt += jiffies % 64; 861 + *p_drops = (*cnt)++; 863 862 864 863 return 0; 865 864 }
+2
drivers/net/phy/mscc/mscc.h
··· 353 353 const struct vsc85xx_hw_stat *hw_stats; 354 354 u64 *stats; 355 355 int nstats; 356 + /* PHY address within the package. */ 357 + u8 addr; 356 358 /* For multiple port PHYs; the MDIO address of the base PHY in the 357 359 * package. 358 360 */
+3 -3
drivers/net/phy/mscc/mscc_mac.h
··· 152 152 #define MSCC_MAC_PAUSE_CFG_STATE_PAUSE_STATE BIT(0) 153 153 #define MSCC_MAC_PAUSE_CFG_STATE_MAC_TX_PAUSE_GEN BIT(4) 154 154 155 - #define MSCC_PROC_0_IP_1588_TOP_CFG_STAT_MODE_CTL 0x2 156 - #define MSCC_PROC_0_IP_1588_TOP_CFG_STAT_MODE_CTL_PROTOCOL_MODE(x) (x) 157 - #define MSCC_PROC_0_IP_1588_TOP_CFG_STAT_MODE_CTL_PROTOCOL_MODE_M GENMASK(2, 0) 155 + #define MSCC_PROC_IP_1588_TOP_CFG_STAT_MODE_CTL 0x2 156 + #define MSCC_PROC_IP_1588_TOP_CFG_STAT_MODE_CTL_PROTOCOL_MODE(x) (x) 157 + #define MSCC_PROC_IP_1588_TOP_CFG_STAT_MODE_CTL_PROTOCOL_MODE_M GENMASK(2, 0) 158 158 159 159 #endif /* _MSCC_PHY_LINE_MAC_H_ */
+10 -6
drivers/net/phy/mscc/mscc_macsec.c
··· 316 316 /* Must be called with mdio_lock taken */ 317 317 static int __vsc8584_macsec_init(struct phy_device *phydev) 318 318 { 319 + struct vsc8531_private *priv = phydev->priv; 320 + enum macsec_bank proc_bank; 319 321 u32 val; 320 322 321 323 vsc8584_macsec_block_init(phydev, MACSEC_INGR); ··· 353 351 val |= MSCC_FCBUF_ENA_CFG_TX_ENA | MSCC_FCBUF_ENA_CFG_RX_ENA; 354 352 vsc8584_macsec_phy_write(phydev, FC_BUFFER, MSCC_FCBUF_ENA_CFG, val); 355 353 356 - val = vsc8584_macsec_phy_read(phydev, IP_1588, 357 - MSCC_PROC_0_IP_1588_TOP_CFG_STAT_MODE_CTL); 358 - val &= ~MSCC_PROC_0_IP_1588_TOP_CFG_STAT_MODE_CTL_PROTOCOL_MODE_M; 359 - val |= MSCC_PROC_0_IP_1588_TOP_CFG_STAT_MODE_CTL_PROTOCOL_MODE(4); 360 - vsc8584_macsec_phy_write(phydev, IP_1588, 361 - MSCC_PROC_0_IP_1588_TOP_CFG_STAT_MODE_CTL, val); 354 + proc_bank = (priv->addr < 2) ? PROC_0 : PROC_2; 355 + 356 + val = vsc8584_macsec_phy_read(phydev, proc_bank, 357 + MSCC_PROC_IP_1588_TOP_CFG_STAT_MODE_CTL); 358 + val &= ~MSCC_PROC_IP_1588_TOP_CFG_STAT_MODE_CTL_PROTOCOL_MODE_M; 359 + val |= MSCC_PROC_IP_1588_TOP_CFG_STAT_MODE_CTL_PROTOCOL_MODE(4); 360 + vsc8584_macsec_phy_write(phydev, proc_bank, 361 + MSCC_PROC_IP_1588_TOP_CFG_STAT_MODE_CTL, val); 362 362 363 363 return 0; 364 364 }
+2 -1
drivers/net/phy/mscc/mscc_macsec.h
··· 64 64 FC_BUFFER = 0x04, 65 65 HOST_MAC = 0x05, 66 66 LINE_MAC = 0x06, 67 - IP_1588 = 0x0e, 67 + PROC_0 = 0x0e, 68 + PROC_2 = 0x0f, 68 69 MACSEC_INGR = 0x38, 69 70 MACSEC_EGR = 0x3c, 70 71 };
+2
drivers/net/phy/mscc/mscc_main.c
··· 1303 1303 vsc8531->base_addr = phydev->mdio.addr + addr; 1304 1304 else 1305 1305 vsc8531->base_addr = phydev->mdio.addr - addr; 1306 + 1307 + vsc8531->addr = addr; 1306 1308 } 1307 1309 1308 1310 static int vsc8584_config_init(struct phy_device *phydev)
+2 -2
drivers/net/phy/phy_device.c
··· 1235 1235 const struct sfp_upstream_ops *ops) 1236 1236 { 1237 1237 struct sfp_bus *bus; 1238 - int ret; 1238 + int ret = 0; 1239 1239 1240 1240 if (phydev->mdio.dev.fwnode) { 1241 1241 bus = sfp_bus_find_fwnode(phydev->mdio.dev.fwnode); ··· 1247 1247 ret = sfp_bus_add_upstream(bus, phydev, ops); 1248 1248 sfp_bus_put(bus); 1249 1249 } 1250 - return 0; 1250 + return ret; 1251 1251 } 1252 1252 EXPORT_SYMBOL(phy_sfp_probe); 1253 1253
+9 -2
drivers/net/usb/cdc_ether.c
··· 815 815 .driver_info = 0, 816 816 }, 817 817 818 - /* Microsoft Surface 3 dock (based on Realtek RTL8153) */ 818 + /* Microsoft Surface Ethernet Adapter (based on Realtek RTL8153) */ 819 819 { 820 820 USB_DEVICE_AND_INTERFACE_INFO(MICROSOFT_VENDOR_ID, 0x07c6, USB_CLASS_COMM, 821 821 USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE), 822 822 .driver_info = 0, 823 823 }, 824 824 825 - /* TP-LINK UE300 USB 3.0 Ethernet Adapters (based on Realtek RTL8153) */ 825 + /* Microsoft Surface Ethernet Adapter (based on Realtek RTL8153B) */ 826 + { 827 + USB_DEVICE_AND_INTERFACE_INFO(MICROSOFT_VENDOR_ID, 0x0927, USB_CLASS_COMM, 828 + USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE), 829 + .driver_info = 0, 830 + }, 831 + 832 + /* TP-LINK UE300 USB 3.0 Ethernet Adapters (based on Realtek RTL8153) */ 826 833 { 827 834 USB_DEVICE_AND_INTERFACE_INFO(TPLINK_VENDOR_ID, 0x0601, USB_CLASS_COMM, 828 835 USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
+1
drivers/net/usb/r8152.c
··· 6884 6884 {REALTEK_USB_DEVICE(VENDOR_ID_REALTEK, 0x8153)}, 6885 6885 {REALTEK_USB_DEVICE(VENDOR_ID_MICROSOFT, 0x07ab)}, 6886 6886 {REALTEK_USB_DEVICE(VENDOR_ID_MICROSOFT, 0x07c6)}, 6887 + {REALTEK_USB_DEVICE(VENDOR_ID_MICROSOFT, 0x0927)}, 6887 6888 {REALTEK_USB_DEVICE(VENDOR_ID_SAMSUNG, 0xa101)}, 6888 6889 {REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x304f)}, 6889 6890 {REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x3062)},
+1 -1
drivers/net/wireguard/messages.h
··· 32 32 }; 33 33 34 34 enum counter_values { 35 - COUNTER_BITS_TOTAL = 2048, 35 + COUNTER_BITS_TOTAL = 8192, 36 36 COUNTER_REDUNDANT_BITS = BITS_PER_LONG, 37 37 COUNTER_WINDOW_SIZE = COUNTER_BITS_TOTAL - COUNTER_REDUNDANT_BITS 38 38 };
+9 -13
drivers/net/wireguard/noise.c
··· 104 104 105 105 if (unlikely(!keypair)) 106 106 return NULL; 107 + spin_lock_init(&keypair->receiving_counter.lock); 107 108 keypair->internal_id = atomic64_inc_return(&keypair_counter); 108 109 keypair->entry.type = INDEX_HASHTABLE_KEYPAIR; 109 110 keypair->entry.peer = peer; ··· 359 358 memzero_explicit(output, BLAKE2S_HASH_SIZE + 1); 360 359 } 361 360 362 - static void symmetric_key_init(struct noise_symmetric_key *key) 363 - { 364 - spin_lock_init(&key->counter.receive.lock); 365 - atomic64_set(&key->counter.counter, 0); 366 - memset(key->counter.receive.backtrack, 0, 367 - sizeof(key->counter.receive.backtrack)); 368 - key->birthdate = ktime_get_coarse_boottime_ns(); 369 - key->is_valid = true; 370 - } 371 - 372 361 static void derive_keys(struct noise_symmetric_key *first_dst, 373 362 struct noise_symmetric_key *second_dst, 374 363 const u8 chaining_key[NOISE_HASH_LEN]) 375 364 { 365 + u64 birthdate = ktime_get_coarse_boottime_ns(); 376 366 kdf(first_dst->key, second_dst->key, NULL, NULL, 377 367 NOISE_SYMMETRIC_KEY_LEN, NOISE_SYMMETRIC_KEY_LEN, 0, 0, 378 368 chaining_key); 379 - symmetric_key_init(first_dst); 380 - symmetric_key_init(second_dst); 369 + first_dst->birthdate = second_dst->birthdate = birthdate; 370 + first_dst->is_valid = second_dst->is_valid = true; 381 371 } 382 372 383 373 static bool __must_check mix_dh(u8 chaining_key[NOISE_HASH_LEN], ··· 707 715 u8 e[NOISE_PUBLIC_KEY_LEN]; 708 716 u8 ephemeral_private[NOISE_PUBLIC_KEY_LEN]; 709 717 u8 static_private[NOISE_PUBLIC_KEY_LEN]; 718 + u8 preshared_key[NOISE_SYMMETRIC_KEY_LEN]; 710 719 711 720 down_read(&wg->static_identity.lock); 712 721 ··· 726 733 memcpy(chaining_key, handshake->chaining_key, NOISE_HASH_LEN); 727 734 memcpy(ephemeral_private, handshake->ephemeral_private, 728 735 NOISE_PUBLIC_KEY_LEN); 736 + memcpy(preshared_key, handshake->preshared_key, 737 + NOISE_SYMMETRIC_KEY_LEN); 729 738 up_read(&handshake->lock); 730 739 731 740 if (state != HANDSHAKE_CREATED_INITIATION) ··· 745 
750 goto fail; 746 751 747 752 /* psk */ 748 - mix_psk(chaining_key, hash, key, handshake->preshared_key); 753 + mix_psk(chaining_key, hash, key, preshared_key); 749 754 750 755 /* {} */ 751 756 if (!message_decrypt(NULL, src->encrypted_nothing, ··· 778 783 memzero_explicit(chaining_key, NOISE_HASH_LEN); 779 784 memzero_explicit(ephemeral_private, NOISE_PUBLIC_KEY_LEN); 780 785 memzero_explicit(static_private, NOISE_PUBLIC_KEY_LEN); 786 + memzero_explicit(preshared_key, NOISE_SYMMETRIC_KEY_LEN); 781 787 up_read(&wg->static_identity.lock); 782 788 return ret_peer; 783 789 }
+6 -8
drivers/net/wireguard/noise.h
··· 15 15 #include <linux/mutex.h> 16 16 #include <linux/kref.h> 17 17 18 - union noise_counter { 19 - struct { 20 - u64 counter; 21 - unsigned long backtrack[COUNTER_BITS_TOTAL / BITS_PER_LONG]; 22 - spinlock_t lock; 23 - } receive; 24 - atomic64_t counter; 18 + struct noise_replay_counter { 19 + u64 counter; 20 + spinlock_t lock; 21 + unsigned long backtrack[COUNTER_BITS_TOTAL / BITS_PER_LONG]; 25 22 }; 26 23 27 24 struct noise_symmetric_key { 28 25 u8 key[NOISE_SYMMETRIC_KEY_LEN]; 29 - union noise_counter counter; 30 26 u64 birthdate; 31 27 bool is_valid; 32 28 }; ··· 30 34 struct noise_keypair { 31 35 struct index_hashtable_entry entry; 32 36 struct noise_symmetric_key sending; 37 + atomic64_t sending_counter; 33 38 struct noise_symmetric_key receiving; 39 + struct noise_replay_counter receiving_counter; 34 40 __le32 remote_index; 35 41 bool i_am_the_initiator; 36 42 struct kref refcount;
+9 -1
drivers/net/wireguard/queueing.h
··· 87 87 return real_protocol && skb->protocol == real_protocol; 88 88 } 89 89 90 - static inline void wg_reset_packet(struct sk_buff *skb) 90 + static inline void wg_reset_packet(struct sk_buff *skb, bool encapsulating) 91 91 { 92 + u8 l4_hash = skb->l4_hash; 93 + u8 sw_hash = skb->sw_hash; 94 + u32 hash = skb->hash; 92 95 skb_scrub_packet(skb, true); 93 96 memset(&skb->headers_start, 0, 94 97 offsetof(struct sk_buff, headers_end) - 95 98 offsetof(struct sk_buff, headers_start)); 99 + if (encapsulating) { 100 + skb->l4_hash = l4_hash; 101 + skb->sw_hash = sw_hash; 102 + skb->hash = hash; 103 + } 96 104 skb->queue_mapping = 0; 97 105 skb->nohdr = 0; 98 106 skb->peeked = 0;
+22 -22
drivers/net/wireguard/receive.c
··· 245 245 } 246 246 } 247 247 248 - static bool decrypt_packet(struct sk_buff *skb, struct noise_symmetric_key *key) 248 + static bool decrypt_packet(struct sk_buff *skb, struct noise_keypair *keypair) 249 249 { 250 250 struct scatterlist sg[MAX_SKB_FRAGS + 8]; 251 251 struct sk_buff *trailer; 252 252 unsigned int offset; 253 253 int num_frags; 254 254 255 - if (unlikely(!key)) 255 + if (unlikely(!keypair)) 256 256 return false; 257 257 258 - if (unlikely(!READ_ONCE(key->is_valid) || 259 - wg_birthdate_has_expired(key->birthdate, REJECT_AFTER_TIME) || 260 - key->counter.receive.counter >= REJECT_AFTER_MESSAGES)) { 261 - WRITE_ONCE(key->is_valid, false); 258 + if (unlikely(!READ_ONCE(keypair->receiving.is_valid) || 259 + wg_birthdate_has_expired(keypair->receiving.birthdate, REJECT_AFTER_TIME) || 260 + keypair->receiving_counter.counter >= REJECT_AFTER_MESSAGES)) { 261 + WRITE_ONCE(keypair->receiving.is_valid, false); 262 262 return false; 263 263 } 264 264 ··· 283 283 284 284 if (!chacha20poly1305_decrypt_sg_inplace(sg, skb->len, NULL, 0, 285 285 PACKET_CB(skb)->nonce, 286 - key->key)) 286 + keypair->receiving.key)) 287 287 return false; 288 288 289 289 /* Another ugly situation of pushing and pulling the header so as to ··· 298 298 } 299 299 300 300 /* This is RFC6479, a replay detection bitmap algorithm that avoids bitshifts */ 301 - static bool counter_validate(union noise_counter *counter, u64 their_counter) 301 + static bool counter_validate(struct noise_replay_counter *counter, u64 their_counter) 302 302 { 303 303 unsigned long index, index_current, top, i; 304 304 bool ret = false; 305 305 306 - spin_lock_bh(&counter->receive.lock); 306 + spin_lock_bh(&counter->lock); 307 307 308 - if (unlikely(counter->receive.counter >= REJECT_AFTER_MESSAGES + 1 || 308 + if (unlikely(counter->counter >= REJECT_AFTER_MESSAGES + 1 || 309 309 their_counter >= REJECT_AFTER_MESSAGES)) 310 310 goto out; 311 311 312 312 ++their_counter; 313 313 314 314 if 
(unlikely((COUNTER_WINDOW_SIZE + their_counter) < 315 - counter->receive.counter)) 315 + counter->counter)) 316 316 goto out; 317 317 318 318 index = their_counter >> ilog2(BITS_PER_LONG); 319 319 320 - if (likely(their_counter > counter->receive.counter)) { 321 - index_current = counter->receive.counter >> ilog2(BITS_PER_LONG); 320 + if (likely(their_counter > counter->counter)) { 321 + index_current = counter->counter >> ilog2(BITS_PER_LONG); 322 322 top = min_t(unsigned long, index - index_current, 323 323 COUNTER_BITS_TOTAL / BITS_PER_LONG); 324 324 for (i = 1; i <= top; ++i) 325 - counter->receive.backtrack[(i + index_current) & 325 + counter->backtrack[(i + index_current) & 326 326 ((COUNTER_BITS_TOTAL / BITS_PER_LONG) - 1)] = 0; 327 - counter->receive.counter = their_counter; 327 + counter->counter = their_counter; 328 328 } 329 329 330 330 index &= (COUNTER_BITS_TOTAL / BITS_PER_LONG) - 1; 331 331 ret = !test_and_set_bit(their_counter & (BITS_PER_LONG - 1), 332 - &counter->receive.backtrack[index]); 332 + &counter->backtrack[index]); 333 333 334 334 out: 335 - spin_unlock_bh(&counter->receive.lock); 335 + spin_unlock_bh(&counter->lock); 336 336 return ret; 337 337 } 338 338 ··· 472 472 if (unlikely(state != PACKET_STATE_CRYPTED)) 473 473 goto next; 474 474 475 - if (unlikely(!counter_validate(&keypair->receiving.counter, 475 + if (unlikely(!counter_validate(&keypair->receiving_counter, 476 476 PACKET_CB(skb)->nonce))) { 477 477 net_dbg_ratelimited("%s: Packet has invalid nonce %llu (max %llu)\n", 478 478 peer->device->dev->name, 479 479 PACKET_CB(skb)->nonce, 480 - keypair->receiving.counter.receive.counter); 480 + keypair->receiving_counter.counter); 481 481 goto next; 482 482 } 483 483 484 484 if (unlikely(wg_socket_endpoint_from_skb(&endpoint, skb))) 485 485 goto next; 486 486 487 - wg_reset_packet(skb); 487 + wg_reset_packet(skb, false); 488 488 wg_packet_consume_data_done(peer, skb, &endpoint); 489 489 free = false; 490 490 ··· 511 511 struct sk_buff 
*skb; 512 512 513 513 while ((skb = ptr_ring_consume_bh(&queue->ring)) != NULL) { 514 - enum packet_state state = likely(decrypt_packet(skb, 515 - &PACKET_CB(skb)->keypair->receiving)) ? 514 + enum packet_state state = 515 + likely(decrypt_packet(skb, PACKET_CB(skb)->keypair)) ? 516 516 PACKET_STATE_CRYPTED : PACKET_STATE_DEAD; 517 517 wg_queue_enqueue_per_peer_napi(skb, state); 518 518 if (need_resched())
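The `counter_validate()` rework above still implements the RFC 6479 sliding-window replay filter, just against the renamed `struct noise_replay_counter`. A minimal userspace sketch of the same algorithm follows; constants mirror `messages.h` after this series, `REJECT_AFTER_MESSAGES` uses WireGuard's value, and locking is omitted (the kernel version holds `counter->lock` around the whole check):

```c
#include <stdbool.h>
#include <stdint.h>

#define COUNTER_BITS_TOTAL    8192
#define BITS_PER_LONG         64
#define COUNTER_WINDOW_SIZE   (COUNTER_BITS_TOTAL - BITS_PER_LONG)
#define REJECT_AFTER_MESSAGES (~0ULL - COUNTER_WINDOW_SIZE - 1)
#define WORDS                 (COUNTER_BITS_TOTAL / BITS_PER_LONG)

struct replay_counter {
	uint64_t counter;               /* highest accepted counter + 1 */
	uint64_t backtrack[WORDS];      /* ring of seen-bits behind it */
};

static bool counter_validate(struct replay_counter *c, uint64_t their_counter)
{
	uint64_t index, index_current, top, i, bit;

	if (c->counter >= REJECT_AFTER_MESSAGES + 1 ||
	    their_counter >= REJECT_AFTER_MESSAGES)
		return false;

	++their_counter;

	/* Too far behind the window: reject outright. */
	if (COUNTER_WINDOW_SIZE + their_counter < c->counter)
		return false;

	index = their_counter / BITS_PER_LONG;

	if (their_counter > c->counter) {
		/* Window advances: clear the words we skip over. */
		index_current = c->counter / BITS_PER_LONG;
		top = index - index_current;
		if (top > WORDS)
			top = WORDS;
		for (i = 1; i <= top; ++i)
			c->backtrack[(i + index_current) % WORDS] = 0;
		c->counter = their_counter;
	}

	index %= WORDS;
	bit = 1ULL << (their_counter % BITS_PER_LONG);
	if (c->backtrack[index] & bit)
		return false;           /* replay */
	c->backtrack[index] |= bit;
	return true;
}
```

Each accepted nonce sets one bit in the ring; advancing the window clears only the words the new counter jumps past, so an old-but-in-window nonce is still accepted exactly once, with no bit shifting of the whole window.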
+12 -5
drivers/net/wireguard/selftest/counter.c
··· 6 6 #ifdef DEBUG 7 7 bool __init wg_packet_counter_selftest(void) 8 8 { 9 + struct noise_replay_counter *counter; 9 10 unsigned int test_num = 0, i; 10 - union noise_counter counter; 11 11 bool success = true; 12 12 13 - #define T_INIT do { \ 14 - memset(&counter, 0, sizeof(union noise_counter)); \ 15 - spin_lock_init(&counter.receive.lock); \ 13 + counter = kmalloc(sizeof(*counter), GFP_KERNEL); 14 + if (unlikely(!counter)) { 15 + pr_err("nonce counter self-test malloc: FAIL\n"); 16 + return false; 17 + } 18 + 19 + #define T_INIT do { \ 20 + memset(counter, 0, sizeof(*counter)); \ 21 + spin_lock_init(&counter->lock); \ 16 22 } while (0) 17 23 #define T_LIM (COUNTER_WINDOW_SIZE + 1) 18 24 #define T(n, v) do { \ 19 25 ++test_num; \ 20 - if (counter_validate(&counter, n) != (v)) { \ 26 + if (counter_validate(counter, n) != (v)) { \ 21 27 pr_err("nonce counter self-test %u: FAIL\n", \ 22 28 test_num); \ 23 29 success = false; \ ··· 105 99 106 100 if (success) 107 101 pr_info("nonce counter self-tests: pass\n"); 102 + kfree(counter); 108 103 return success; 109 104 } 110 105 #endif
+11 -8
drivers/net/wireguard/send.c
··· 129 129 rcu_read_lock_bh(); 130 130 keypair = rcu_dereference_bh(peer->keypairs.current_keypair); 131 131 send = keypair && READ_ONCE(keypair->sending.is_valid) && 132 - (atomic64_read(&keypair->sending.counter.counter) > REKEY_AFTER_MESSAGES || 132 + (atomic64_read(&keypair->sending_counter) > REKEY_AFTER_MESSAGES || 133 133 (keypair->i_am_the_initiator && 134 134 wg_birthdate_has_expired(keypair->sending.birthdate, REKEY_AFTER_TIME))); 135 135 rcu_read_unlock_bh(); ··· 166 166 struct message_data *header; 167 167 struct sk_buff *trailer; 168 168 int num_frags; 169 + 170 + /* Force hash calculation before encryption so that flow analysis is 171 + * consistent over the inner packet. 172 + */ 173 + skb_get_hash(skb); 169 174 170 175 /* Calculate lengths. */ 171 176 padding_len = calculate_skb_padding(skb); ··· 300 295 skb_list_walk_safe(first, skb, next) { 301 296 if (likely(encrypt_packet(skb, 302 297 PACKET_CB(first)->keypair))) { 303 - wg_reset_packet(skb); 298 + wg_reset_packet(skb, true); 304 299 } else { 305 300 state = PACKET_STATE_DEAD; 306 301 break; ··· 349 344 350 345 void wg_packet_send_staged_packets(struct wg_peer *peer) 351 346 { 352 - struct noise_symmetric_key *key; 353 347 struct noise_keypair *keypair; 354 348 struct sk_buff_head packets; 355 349 struct sk_buff *skb; ··· 368 364 rcu_read_unlock_bh(); 369 365 if (unlikely(!keypair)) 370 366 goto out_nokey; 371 - key = &keypair->sending; 372 - if (unlikely(!READ_ONCE(key->is_valid))) 367 + if (unlikely(!READ_ONCE(keypair->sending.is_valid))) 373 368 goto out_nokey; 374 - if (unlikely(wg_birthdate_has_expired(key->birthdate, 369 + if (unlikely(wg_birthdate_has_expired(keypair->sending.birthdate, 375 370 REJECT_AFTER_TIME))) 376 371 goto out_invalid; 377 372 ··· 385 382 */ 386 383 PACKET_CB(skb)->ds = ip_tunnel_ecn_encap(0, ip_hdr(skb), skb); 387 384 PACKET_CB(skb)->nonce = 388 - atomic64_inc_return(&key->counter.counter) - 1; 385 + atomic64_inc_return(&keypair->sending_counter) - 1; 389 386 if 
(unlikely(PACKET_CB(skb)->nonce >= REJECT_AFTER_MESSAGES)) 390 387 goto out_invalid; 391 388 } ··· 397 394 return; 398 395 399 396 out_invalid: 400 - WRITE_ONCE(key->is_valid, false); 397 + WRITE_ONCE(keypair->sending.is_valid, false); 401 398 out_nokey: 402 399 wg_noise_keypair_put(keypair, false); 403 400
+4
drivers/net/wireless/intel/iwlwifi/pcie/drv.c
··· 1112 1112 iwl_trans->cfg = &iwl_ax101_cfg_quz_hr; 1113 1113 else if (iwl_trans->cfg == &iwl_ax201_cfg_qu_hr) 1114 1114 iwl_trans->cfg = &iwl_ax201_cfg_quz_hr; 1115 + else if (iwl_trans->cfg == &killer1650s_2ax_cfg_qu_b0_hr_b0) 1116 + iwl_trans->cfg = &iwl_ax1650s_cfg_quz_hr; 1117 + else if (iwl_trans->cfg == &killer1650i_2ax_cfg_qu_b0_hr_b0) 1118 + iwl_trans->cfg = &iwl_ax1650i_cfg_quz_hr; 1115 1119 } 1116 1120 1117 1121 #endif
+5
drivers/nvme/host/pci.c
··· 989 989 990 990 while (nvme_cqe_pending(nvmeq)) { 991 991 found++; 992 + /* 993 + * load-load control dependency between phase and the rest of 994 + * the cqe requires a full read memory barrier 995 + */ 996 + dma_rmb(); 992 997 nvme_handle_cqe(nvmeq, nvmeq->cq_head); 993 998 nvme_update_cq_head(nvmeq); 994 999 }
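The `dma_rmb()` added above closes a load-load ordering hole: `nvme_cqe_pending()` decides freshness from the CQE's phase bit, and only a control dependency separates that check from the loads of the rest of the entry, which weakly ordered CPUs may reorder. Below is a hedged userspace sketch of the polling pattern with an acquire fence standing in for `dma_rmb()`; the struct and function names are illustrative, not the driver's:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define QDEPTH 4

struct sketch_cqe {
	uint16_t status;        /* bit 0 is the phase bit */
	uint32_t result;
};

struct sketch_queue {
	struct sketch_cqe cqes[QDEPTH];
	unsigned int head;
	uint16_t phase;         /* expected phase for fresh entries */
};

static bool cqe_pending(const struct sketch_queue *q)
{
	return (q->cqes[q->head].status & 1) == q->phase;
}

static bool poll_one(struct sketch_queue *q, uint32_t *result)
{
	if (!cqe_pending(q))
		return false;
	/* The phase check above and the load below are ordered only by
	 * a control dependency, which does not order loads; a read
	 * barrier (dma_rmb() in the kernel) must sit between them. */
	atomic_thread_fence(memory_order_acquire);
	*result = q->cqes[q->head].result;
	if (++q->head == QDEPTH) {
		q->head = 0;
		q->phase ^= 1;          /* wrap: expected phase flips */
	}
	return true;
}
```

Single-threaded the fence is invisible; its point is that when the device (or another CPU) publishes the CQE body before flipping the phase bit, the consumer must not read the body with a stale value it loaded before the phase check.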
+1 -1
drivers/pinctrl/actions/pinctrl-s700.c
··· 1435 1435 static const char * const i2c0_groups[] = { 1436 1436 "uart0_rx_mfp", 1437 1437 "uart0_tx_mfp", 1438 - "i2c0_mfp_mfp", 1438 + "i2c0_mfp", 1439 1439 }; 1440 1440 1441 1441 static const char * const i2c1_groups[] = {
+1
drivers/pinctrl/intel/pinctrl-baytrail.c
··· 1286 1286 .direction_output = byt_gpio_direction_output, 1287 1287 .get = byt_gpio_get, 1288 1288 .set = byt_gpio_set, 1289 + .set_config = gpiochip_generic_config, 1289 1290 .dbg_show = byt_gpio_dbg_show, 1290 1291 }; 1291 1292
+4
drivers/pinctrl/intel/pinctrl-cherryview.c
··· 1479 1479 struct chv_pinctrl *pctrl = gpiochip_get_data(gc); 1480 1480 struct irq_chip *chip = irq_desc_get_chip(desc); 1481 1481 unsigned long pending; 1482 + unsigned long flags; 1482 1483 u32 intr_line; 1483 1484 1484 1485 chained_irq_enter(chip, desc); 1485 1486 1487 + raw_spin_lock_irqsave(&chv_lock, flags); 1486 1488 pending = readl(pctrl->regs + CHV_INTSTAT); 1489 + raw_spin_unlock_irqrestore(&chv_lock, flags); 1490 + 1487 1491 for_each_set_bit(intr_line, &pending, pctrl->community->nirqs) { 1488 1492 unsigned int irq, offset; 1489 1493
+8 -7
drivers/pinctrl/intel/pinctrl-sunrisepoint.c
··· 15 15 16 16 #include "pinctrl-intel.h" 17 17 18 - #define SPT_PAD_OWN 0x020 19 - #define SPT_PADCFGLOCK 0x0a0 20 - #define SPT_HOSTSW_OWN 0x0d0 21 - #define SPT_GPI_IS 0x100 22 - #define SPT_GPI_IE 0x120 18 + #define SPT_PAD_OWN 0x020 19 + #define SPT_H_PADCFGLOCK 0x090 20 + #define SPT_LP_PADCFGLOCK 0x0a0 21 + #define SPT_HOSTSW_OWN 0x0d0 22 + #define SPT_GPI_IS 0x100 23 + #define SPT_GPI_IE 0x120 23 24 24 25 #define SPT_COMMUNITY(b, s, e) \ 25 26 { \ 26 27 .barno = (b), \ 27 28 .padown_offset = SPT_PAD_OWN, \ 28 - .padcfglock_offset = SPT_PADCFGLOCK, \ 29 + .padcfglock_offset = SPT_LP_PADCFGLOCK, \ 29 30 .hostown_offset = SPT_HOSTSW_OWN, \ 30 31 .is_offset = SPT_GPI_IS, \ 31 32 .ie_offset = SPT_GPI_IE, \ ··· 48 47 { \ 49 48 .barno = (b), \ 50 49 .padown_offset = SPT_PAD_OWN, \ 51 - .padcfglock_offset = SPT_PADCFGLOCK, \ 50 + .padcfglock_offset = SPT_H_PADCFGLOCK, \ 52 51 .hostown_offset = SPT_HOSTSW_OWN, \ 53 52 .is_offset = SPT_GPI_IS, \ 54 53 .ie_offset = SPT_GPI_IE, \
-2
drivers/pinctrl/mediatek/pinctrl-paris.c
··· 164 164 case MTK_PIN_CONFIG_PU_ADV: 165 165 case MTK_PIN_CONFIG_PD_ADV: 166 166 if (hw->soc->adv_pull_get) { 167 - bool pullup; 168 - 169 167 pullup = param == MTK_PIN_CONFIG_PU_ADV; 170 168 err = hw->soc->adv_pull_get(hw, desc, pullup, &ret); 171 169 } else
+26 -1
drivers/pinctrl/qcom/pinctrl-msm.c
··· 697 697 698 698 pol = msm_readl_intr_cfg(pctrl, g); 699 699 pol ^= BIT(g->intr_polarity_bit); 700 - msm_writel_intr_cfg(val, pctrl, g); 700 + msm_writel_intr_cfg(pol, pctrl, g); 701 701 702 702 val2 = msm_readl_io(pctrl, g) & BIT(g->in_bit); 703 703 intstat = msm_readl_intr_status(pctrl, g); ··· 1034 1034 module_put(gc->owner); 1035 1035 } 1036 1036 1037 + static int msm_gpio_irq_set_affinity(struct irq_data *d, 1038 + const struct cpumask *dest, bool force) 1039 + { 1040 + struct gpio_chip *gc = irq_data_get_irq_chip_data(d); 1041 + struct msm_pinctrl *pctrl = gpiochip_get_data(gc); 1042 + 1043 + if (d->parent_data && test_bit(d->hwirq, pctrl->skip_wake_irqs)) 1044 + return irq_chip_set_affinity_parent(d, dest, force); 1045 + 1046 + return 0; 1047 + } 1048 + 1049 + static int msm_gpio_irq_set_vcpu_affinity(struct irq_data *d, void *vcpu_info) 1050 + { 1051 + struct gpio_chip *gc = irq_data_get_irq_chip_data(d); 1052 + struct msm_pinctrl *pctrl = gpiochip_get_data(gc); 1053 + 1054 + if (d->parent_data && test_bit(d->hwirq, pctrl->skip_wake_irqs)) 1055 + return irq_chip_set_vcpu_affinity_parent(d, vcpu_info); 1056 + 1057 + return 0; 1058 + } 1059 + 1037 1060 static void msm_gpio_irq_handler(struct irq_desc *desc) 1038 1061 { 1039 1062 struct gpio_chip *gc = irq_desc_get_handler_data(desc); ··· 1155 1132 pctrl->irq_chip.irq_set_wake = msm_gpio_irq_set_wake; 1156 1133 pctrl->irq_chip.irq_request_resources = msm_gpio_irq_reqres; 1157 1134 pctrl->irq_chip.irq_release_resources = msm_gpio_irq_relres; 1135 + pctrl->irq_chip.irq_set_affinity = msm_gpio_irq_set_affinity; 1136 + pctrl->irq_chip.irq_set_vcpu_affinity = msm_gpio_irq_set_vcpu_affinity; 1158 1137 1159 1138 np = of_parse_phandle(pctrl->dev->of_node, "wakeup-parent", 0); 1160 1139 if (np) {
+5
drivers/rapidio/devices/rio_mport_cdev.c
··· 877 877 rmcd_error("pinned %ld out of %ld pages", 878 878 pinned, nr_pages); 879 879 ret = -EFAULT; 880 + /* 881 + * Set nr_pages up to mean "how many pages to unpin" in 882 + * the error handler. 883 + */ 884 + nr_pages = pinned; 880 885 goto err_pg; 881 886 }
-3
drivers/scsi/qla2xxx/qla_attr.c
··· 1850 1850 return -EINVAL; 1851 1851 } 1852 1852 1853 - ql_log(ql_log_info, vha, 0x70d6, 1854 - "port speed:%d\n", ha->link_data_rate); 1855 - 1856 1853 return scnprintf(buf, PAGE_SIZE, "%s\n", spd[ha->link_data_rate]); 1857 1854 } 1858 1855
+8 -2
drivers/scsi/scsi_pm.c
··· 80 80 dev_dbg(dev, "scsi resume: %d\n", err); 81 81 82 82 if (err == 0) { 83 + bool was_runtime_suspended; 84 + 85 + was_runtime_suspended = pm_runtime_suspended(dev); 86 + 83 87 pm_runtime_disable(dev); 84 88 err = pm_runtime_set_active(dev); 85 89 pm_runtime_enable(dev); ··· 97 93 */ 98 94 if (!err && scsi_is_sdev_device(dev)) { 99 95 struct scsi_device *sdev = to_scsi_device(dev); 100 - 101 - blk_set_runtime_active(sdev->request_queue); 96 + if (was_runtime_suspended) 97 + blk_post_runtime_resume(sdev->request_queue, 0); 98 + else 99 + blk_set_runtime_active(sdev->request_queue); 102 100 } 103 101 } 104 102
+2 -2
drivers/staging/greybus/uart.c
··· 537 537 } 538 538 539 539 if (C_CRTSCTS(tty) && C_BAUD(tty) != B0) 540 - newline.flow_control |= GB_SERIAL_AUTO_RTSCTS_EN; 540 + newline.flow_control = GB_SERIAL_AUTO_RTSCTS_EN; 541 541 else 542 - newline.flow_control &= ~GB_SERIAL_AUTO_RTSCTS_EN; 542 + newline.flow_control = 0; 543 543 544 544 if (memcmp(&gb_tty->line_coding, &newline, sizeof(newline))) { 545 545 memcpy(&gb_tty->line_coding, &newline, sizeof(newline));
+12 -5
drivers/staging/iio/resolver/ad2s1210.c
··· 130 130 static int ad2s1210_config_read(struct ad2s1210_state *st, 131 131 unsigned char address) 132 132 { 133 - struct spi_transfer xfer = { 134 - .len = 2, 135 - .rx_buf = st->rx, 136 - .tx_buf = st->tx, 133 + struct spi_transfer xfers[] = { 134 + { 135 + .len = 1, 136 + .rx_buf = &st->rx[0], 137 + .tx_buf = &st->tx[0], 138 + .cs_change = 1, 139 + }, { 140 + .len = 1, 141 + .rx_buf = &st->rx[1], 142 + .tx_buf = &st->tx[1], 143 + }, 137 144 }; 138 145 int ret = 0; 139 146 140 147 ad2s1210_set_mode(MOD_CONFIG, st); 141 148 st->tx[0] = address | AD2S1210_MSB_IS_HIGH; 142 149 st->tx[1] = AD2S1210_REG_FAULT; 143 - ret = spi_sync_transfer(st->sdev, &xfer, 1); 150 + ret = spi_sync_transfer(st->sdev, xfers, 2); 144 151 if (ret < 0) 145 152 return ret; 146 153
+4 -5
drivers/staging/kpc2000/kpc2000/core.c
··· 298 298 { 299 299 int err = 0; 300 300 struct kp2000_device *pcard; 301 - int rv; 302 301 unsigned long reg_bar_phys_addr; 303 302 unsigned long reg_bar_phys_len; 304 303 unsigned long dma_bar_phys_addr; ··· 444 445 if (err < 0) 445 446 goto err_release_dma; 446 447 447 - rv = request_irq(pcard->pdev->irq, kp2000_irq_handler, IRQF_SHARED, 448 - pcard->name, pcard); 449 - if (rv) { 448 + err = request_irq(pcard->pdev->irq, kp2000_irq_handler, IRQF_SHARED, 449 + pcard->name, pcard); 450 + if (err) { 450 451 dev_err(&pcard->pdev->dev, 451 - "%s: failed to request_irq: %d\n", __func__, rv); 452 + "%s: failed to request_irq: %d\n", __func__, err); 452 453 goto err_disable_msi; 453 454 } 454 455
+3 -1
drivers/staging/wfx/scan.c
··· 57 57 wvif->scan_abort = false; 58 58 reinit_completion(&wvif->scan_complete); 59 59 timeout = hif_scan(wvif, req, start_idx, i - start_idx); 60 - if (timeout < 0) 60 + if (timeout < 0) { 61 + wfx_tx_unlock(wvif->wdev); 61 62 return timeout; 63 + } 62 64 ret = wait_for_completion_timeout(&wvif->scan_complete, timeout); 63 65 if (req->channels[start_idx]->max_power != wvif->vif->bss_conf.txpower) 64 66 hif_set_output_power(wvif, wvif->vif->bss_conf.txpower);
+1
drivers/target/target_core_transport.c
··· 3350 3350 3351 3351 cmd->se_tfo->queue_tm_rsp(cmd); 3352 3352 3353 + transport_lun_remove_cmd(cmd); 3353 3354 transport_cmd_check_stop_to_fabric(cmd); 3354 3355 return; 3355 3356
+1
drivers/tty/serial/sifive.c
··· 883 883 884 884 static void __ssp_add_console_port(struct sifive_serial_port *ssp) 885 885 { 886 + spin_lock_init(&ssp->port.lock); 886 887 sifive_serial_console_ports[ssp->port.line] = ssp; 887 888 } 888 889
+11 -11
drivers/usb/cdns3/gadget.c
··· 82 82 * @ptr: address of device controller register to be read and changed 83 83 * @mask: bits requested to clear 84 84 */ 85 - void cdns3_clear_register_bit(void __iomem *ptr, u32 mask) 85 + static void cdns3_clear_register_bit(void __iomem *ptr, u32 mask) 86 86 { 87 87 mask = readl(ptr) & ~mask; 88 88 writel(mask, ptr); ··· 137 137 * 138 138 * Returns buffer or NULL if no buffers in list 139 139 */ 140 - struct cdns3_aligned_buf *cdns3_next_align_buf(struct list_head *list) 140 + static struct cdns3_aligned_buf *cdns3_next_align_buf(struct list_head *list) 141 141 { 142 142 return list_first_entry_or_null(list, struct cdns3_aligned_buf, list); 143 143 } ··· 148 148 * 149 149 * Returns request or NULL if no requests in list 150 150 */ 151 - struct cdns3_request *cdns3_next_priv_request(struct list_head *list) 151 + static struct cdns3_request *cdns3_next_priv_request(struct list_head *list) 152 152 { 153 153 return list_first_entry_or_null(list, struct cdns3_request, list); 154 154 } ··· 190 190 return priv_ep->trb_pool_dma + offset; 191 191 } 192 192 193 - int cdns3_ring_size(struct cdns3_endpoint *priv_ep) 193 + static int cdns3_ring_size(struct cdns3_endpoint *priv_ep) 194 194 { 195 195 switch (priv_ep->type) { 196 196 case USB_ENDPOINT_XFER_ISOC: ··· 345 345 cdns3_ep_inc_trb(&priv_ep->dequeue, &priv_ep->ccs, priv_ep->num_trbs); 346 346 } 347 347 348 - void cdns3_move_deq_to_next_trb(struct cdns3_request *priv_req) 348 + static void cdns3_move_deq_to_next_trb(struct cdns3_request *priv_req) 349 349 { 350 350 struct cdns3_endpoint *priv_ep = priv_req->priv_ep; 351 351 int current_trb = priv_req->start_trb; ··· 511 511 } 512 512 } 513 513 514 - struct usb_request *cdns3_wa2_gadget_giveback(struct cdns3_device *priv_dev, 514 + static struct usb_request *cdns3_wa2_gadget_giveback(struct cdns3_device *priv_dev, 515 515 struct cdns3_endpoint *priv_ep, 516 516 struct cdns3_request *priv_req) 517 517 { ··· 551 551 return &priv_req->request; 552 552 } 553 553 554 - 
int cdns3_wa2_gadget_ep_queue(struct cdns3_device *priv_dev, 554 + static int cdns3_wa2_gadget_ep_queue(struct cdns3_device *priv_dev, 555 555 struct cdns3_endpoint *priv_ep, 556 556 struct cdns3_request *priv_req) 557 557 { ··· 836 836 cdns3_gadget_ep_free_request(&priv_ep->endpoint, request); 837 837 } 838 838 839 - void cdns3_wa1_restore_cycle_bit(struct cdns3_endpoint *priv_ep) 839 + static void cdns3_wa1_restore_cycle_bit(struct cdns3_endpoint *priv_ep) 840 840 { 841 841 /* Work around for stale data address in TRB*/ 842 842 if (priv_ep->wa1_set) { ··· 1904 1904 return 0; 1905 1905 } 1906 1906 1907 - void cdns3_stream_ep_reconfig(struct cdns3_device *priv_dev, 1907 + static void cdns3_stream_ep_reconfig(struct cdns3_device *priv_dev, 1908 1908 struct cdns3_endpoint *priv_ep) 1909 1909 { 1910 1910 if (!priv_ep->use_streams || priv_dev->gadget.speed < USB_SPEED_SUPER) ··· 1925 1925 EP_CFG_TDL_CHK | EP_CFG_SID_CHK); 1926 1926 } 1927 1927 1928 - void cdns3_configure_dmult(struct cdns3_device *priv_dev, 1928 + static void cdns3_configure_dmult(struct cdns3_device *priv_dev, 1929 1929 struct cdns3_endpoint *priv_ep) 1930 1930 { 1931 1931 struct cdns3_usb_regs __iomem *regs = priv_dev->regs; ··· 2548 2548 link_trb = priv_req->trb; 2549 2549 2550 2550 /* Update ring only if removed request is on pending_req_list list */ 2551 - if (req_on_hw_ring) { 2551 + if (req_on_hw_ring && link_trb) { 2552 2552 link_trb->buffer = TRB_BUFFER(priv_ep->trb_pool_dma + 2553 2553 ((priv_req->end_trb + 1) * TRB_SIZE)); 2554 2554 link_trb->control = (link_trb->control & TRB_CYCLE) |
+13 -3
drivers/usb/core/devio.c
··· 251 251 usbm->vma_use_count = 1; 252 252 INIT_LIST_HEAD(&usbm->memlist); 253 253 254 - if (dma_mmap_coherent(hcd->self.sysdev, vma, mem, dma_handle, size)) { 255 - dec_usb_memory_use_count(usbm, &usbm->vma_use_count); 256 - return -EAGAIN; 254 + if (hcd->localmem_pool || !hcd_uses_dma(hcd)) { 255 + if (remap_pfn_range(vma, vma->vm_start, 256 + virt_to_phys(usbm->mem) >> PAGE_SHIFT, 257 + size, vma->vm_page_prot) < 0) { 258 + dec_usb_memory_use_count(usbm, &usbm->vma_use_count); 259 + return -EAGAIN; 260 + } 261 + } else { 262 + if (dma_mmap_coherent(hcd->self.sysdev, vma, mem, dma_handle, 263 + size)) { 264 + dec_usb_memory_use_count(usbm, &usbm->vma_use_count); 265 + return -EAGAIN; 266 + } 257 267 } 258 268 259 269 vma->vm_flags |= VM_IO;
+5 -1
drivers/usb/core/hub.c
··· 39 39 40 40 #define USB_VENDOR_GENESYS_LOGIC 0x05e3 41 41 #define USB_VENDOR_SMSC 0x0424 42 + #define USB_PRODUCT_USB5534B 0x5534 42 43 #define HUB_QUIRK_CHECK_PORT_AUTOSUSPEND 0x01 43 44 #define HUB_QUIRK_DISABLE_AUTOSUSPEND 0x02 44 45 ··· 5622 5621 } 5623 5622 5624 5623 static const struct usb_device_id hub_id_table[] = { 5625 - { .match_flags = USB_DEVICE_ID_MATCH_VENDOR | USB_DEVICE_ID_MATCH_INT_CLASS, 5624 + { .match_flags = USB_DEVICE_ID_MATCH_VENDOR 5625 + | USB_DEVICE_ID_MATCH_PRODUCT 5626 + | USB_DEVICE_ID_MATCH_INT_CLASS, 5626 5627 .idVendor = USB_VENDOR_SMSC, 5628 + .idProduct = USB_PRODUCT_USB5534B, 5627 5629 .bInterfaceClass = USB_CLASS_HUB, 5628 5630 .driver_info = HUB_QUIRK_DISABLE_AUTOSUSPEND}, 5629 5631 { .match_flags = USB_DEVICE_ID_MATCH_VENDOR
+1
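The hub quirk above narrows from "any SMSC hub" to the USB5534B specifically by adding `USB_DEVICE_ID_MATCH_PRODUCT` and an `idProduct`. A simplified sketch of how `match_flags` gate the comparison follows; the flag values mirror `include/linux/mod_devicetable.h`, but this is a reduced illustration, not the kernel's `usb_match_one_id()`:

```c
#include <stdbool.h>
#include <stdint.h>

#define MATCH_VENDOR    0x0001  /* USB_DEVICE_ID_MATCH_VENDOR */
#define MATCH_PRODUCT   0x0002  /* USB_DEVICE_ID_MATCH_PRODUCT */
#define MATCH_INT_CLASS 0x0080  /* USB_DEVICE_ID_MATCH_INT_CLASS */

struct devinfo {
	uint16_t vendor, product;
	uint8_t int_class;
};

struct match_id {
	uint16_t flags, vendor, product;
	uint8_t int_class;
};

/* A field participates in matching only when its flag is set;
 * unset fields are wildcards. */
static bool id_matches(const struct devinfo *d, const struct match_id *i)
{
	if ((i->flags & MATCH_VENDOR) && d->vendor != i->vendor)
		return false;
	if ((i->flags & MATCH_PRODUCT) && d->product != i->product)
		return false;
	if ((i->flags & MATCH_INT_CLASS) && d->int_class != i->int_class)
		return false;
	return true;
}
```

With the old vendor-plus-class entry, every SMSC (0x0424) hub picked up `HUB_QUIRK_DISABLE_AUTOSUSPEND`; adding the product flag confines the quirk to 0x0424:0x5534 while other SMSC hubs fall through to the generic entries.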
drivers/usb/dwc3/Kconfig
··· 4 4 tristate "DesignWare USB3 DRD Core Support" 5 5 depends on (USB || USB_GADGET) && HAS_DMA 6 6 select USB_XHCI_PLATFORM if USB_XHCI_HCD 7 + select USB_ROLE_SWITCH if USB_DWC3_DUAL_ROLE 7 8 help 8 9 Say Y or M here if your system has a Dual Role SuperSpeed 9 10 USB controller based on the DesignWare USB3 IP Core.
+1
drivers/usb/dwc3/dwc3-pci.c
··· 114 114 115 115 static const struct property_entry dwc3_pci_mrfld_properties[] = { 116 116 PROPERTY_ENTRY_STRING("dr_mode", "otg"), 117 + PROPERTY_ENTRY_STRING("linux,extcon-name", "mrfld_bcove_pwrsrc"), 117 118 PROPERTY_ENTRY_BOOL("linux,sysdev_is_parent"), 118 119 {} 119 120 };
-3
drivers/usb/dwc3/gadget.c
··· 2483 2483 for_each_sg(sg, s, pending, i) { 2484 2484 trb = &dep->trb_pool[dep->trb_dequeue]; 2485 2485 2486 - if (trb->ctrl & DWC3_TRB_CTRL_HWO) 2487 - break; 2488 - 2489 2486 req->sg = sg_next(s); 2490 2487 req->num_pending_sgs--; 2491 2488
+3
drivers/usb/gadget/configfs.c
··· 260 260 char *name; 261 261 int ret; 262 262 263 + if (strlen(page) < len) 264 + return -EOVERFLOW; 265 + 263 266 name = kstrdup(page, GFP_KERNEL); 264 267 if (!name) 265 268 return -ENOMEM;
+3 -1
drivers/usb/gadget/legacy/audio.c
··· 300 300 struct usb_descriptor_header *usb_desc; 301 301 302 302 usb_desc = usb_otg_descriptor_alloc(cdev->gadget); 303 - if (!usb_desc) 303 + if (!usb_desc) { 304 + status = -ENOMEM; 304 305 goto fail; 306 + } 305 307 usb_otg_descriptor_init(cdev->gadget, usb_desc); 306 308 otg_desc[0] = usb_desc; 307 309 otg_desc[1] = NULL;
+3 -1
drivers/usb/gadget/legacy/cdc2.c
··· 179 179 struct usb_descriptor_header *usb_desc; 180 180 181 181 usb_desc = usb_otg_descriptor_alloc(gadget); 182 - if (!usb_desc) 182 + if (!usb_desc) { 183 + status = -ENOMEM; 183 184 goto fail1; 185 + } 184 186 usb_otg_descriptor_init(gadget, usb_desc); 185 187 otg_desc[0] = usb_desc; 186 188 otg_desc[1] = NULL;
+1 -2
drivers/usb/gadget/legacy/inode.c
··· 1361 1361 1362 1362 req->buf = dev->rbuf; 1363 1363 req->context = NULL; 1364 - value = -EOPNOTSUPP; 1365 1364 switch (ctrl->bRequest) { 1366 1365 1367 1366 case USB_REQ_GET_DESCRIPTOR: ··· 1783 1784 dev_config (struct file *fd, const char __user *buf, size_t len, loff_t *ptr) 1784 1785 { 1785 1786 struct dev_data *dev = fd->private_data; 1786 - ssize_t value = len, length = len; 1787 + ssize_t value, length = len; 1787 1788 unsigned total; 1788 1789 u32 tag; 1789 1790 char *kbuf;
+3 -1
drivers/usb/gadget/legacy/ncm.c
··· 156 156 struct usb_descriptor_header *usb_desc; 157 157 158 158 usb_desc = usb_otg_descriptor_alloc(gadget); 159 - if (!usb_desc) 159 + if (!usb_desc) { 160 + status = -ENOMEM; 160 161 goto fail; 162 + } 161 163 usb_otg_descriptor_init(gadget, usb_desc); 162 164 otg_desc[0] = usb_desc; 163 165 otg_desc[1] = NULL;
+252 -63
drivers/usb/gadget/legacy/raw_gadget.c
··· 7 7 */ 8 8 9 9 #include <linux/compiler.h> 10 + #include <linux/ctype.h> 10 11 #include <linux/debugfs.h> 11 12 #include <linux/delay.h> 12 13 #include <linux/kref.h>
··· 124 123 125 124 struct raw_dev; 126 125 127 - #define USB_RAW_MAX_ENDPOINTS 32 128 - 129 126 enum ep_state { 130 127 STATE_EP_DISABLED, 131 128 STATE_EP_ENABLED,
··· 133 134 struct raw_dev *dev; 134 135 enum ep_state state; 135 136 struct usb_ep *ep; 137 + u8 addr; 136 138 struct usb_request *req; 137 139 bool urb_queued; 138 140 bool disabling;
··· 168 168 bool ep0_out_pending; 169 169 bool ep0_urb_queued; 170 170 ssize_t ep0_status; 171 - struct raw_ep eps[USB_RAW_MAX_ENDPOINTS]; 171 + struct raw_ep eps[USB_RAW_EPS_NUM_MAX]; 172 + int eps_num; 172 173 173 174 struct completion ep0_done; 174 175 struct raw_event_queue queue;
··· 203 202 usb_ep_free_request(dev->gadget->ep0, dev->req); 204 203 } 205 204 raw_event_queue_destroy(&dev->queue); 206 - for (i = 0; i < USB_RAW_MAX_ENDPOINTS; i++) { 207 - if (dev->eps[i].state != STATE_EP_ENABLED) 205 + for (i = 0; i < dev->eps_num; i++) { 206 + if (dev->eps[i].state == STATE_EP_DISABLED) 208 207 continue; 209 208 usb_ep_disable(dev->eps[i].ep); 210 209 usb_ep_free_request(dev->eps[i].ep, dev->eps[i].req);
··· 250 249 complete(&dev->ep0_done); 251 250 } 252 251 252 + static u8 get_ep_addr(const char *name) 253 + { 254 + /* If the endpoint has fixed function (named as e.g. "ep12out-bulk"), 255 + * parse the endpoint address from its name. We deliberately use 256 + * deprecated simple_strtoul() function here, as the number isn't 257 + * followed by '\0' nor '\n'. 258 + */ 259 + if (isdigit(name[2])) 260 + return simple_strtoul(&name[2], NULL, 10); 261 + /* Otherwise the endpoint is configurable (named as e.g. "ep-a"). */ 262 + return USB_RAW_EP_ADDR_ANY; 263 + } 264 + 253 265 static int gadget_bind(struct usb_gadget *gadget, 254 266 struct usb_gadget_driver *driver) 255 267 { 256 - int ret = 0; 268 + int ret = 0, i = 0; 257 269 struct raw_dev *dev = container_of(driver, struct raw_dev, driver); 258 270 struct usb_request *req; 271 + struct usb_ep *ep; 259 272 unsigned long flags; 260 273 261 274 if (strcmp(gadget->name, dev->udc_name) != 0)
··· 288 273 dev->req->context = dev; 289 274 dev->req->complete = gadget_ep0_complete; 290 275 dev->gadget = gadget; 276 + gadget_for_each_ep(ep, dev->gadget) { 277 + dev->eps[i].ep = ep; 278 + dev->eps[i].addr = get_ep_addr(ep->name); 279 + dev->eps[i].state = STATE_EP_DISABLED; 280 + i++; 281 + } 282 + dev->eps_num = i; 291 283 spin_unlock_irqrestore(&dev->lock, flags); 292 284 293 285 /* Matches kref_put() in gadget_unbind(). */
··· 577 555 578 556 if (copy_from_user(io, ptr, sizeof(*io))) 579 557 return ERR_PTR(-EFAULT); 580 - if (io->ep >= USB_RAW_MAX_ENDPOINTS) 558 + if (io->ep >= USB_RAW_EPS_NUM_MAX) 581 559 return ERR_PTR(-EINVAL); 582 560 if (!usb_raw_io_flags_valid(io->flags)) 583 561 return ERR_PTR(-EINVAL);
··· 691 669 if (IS_ERR(data)) 692 670 return PTR_ERR(data); 693 671 ret = raw_process_ep0_io(dev, &io, data, false); 694 - if (ret) 672 + if (ret < 0) 695 673 goto free; 696 674 697 675 length = min(io.length, (unsigned int)ret); 698 676 if (copy_to_user((void __user *)(value + sizeof(io)), data, length)) 699 677 ret = -EFAULT; 678 + else 679 + ret = length; 700 680 free: 701 681 kfree(data); 702 682 return ret; 703 683 } 704 684 705 - static bool check_ep_caps(struct usb_ep *ep, 706 - struct usb_endpoint_descriptor *desc) 685 + static int raw_ioctl_ep0_stall(struct raw_dev *dev, unsigned long value) 707 686 { 708 - switch (usb_endpoint_type(desc)) { 709 - case USB_ENDPOINT_XFER_ISOC: 710 - if (!ep->caps.type_iso) 711 - return false; 712 - break; 713 - case USB_ENDPOINT_XFER_BULK: 714 - if (!ep->caps.type_bulk) 715 - return false; 716 - break; 717 - case USB_ENDPOINT_XFER_INT: 718 - if (!ep->caps.type_int) 719 - return false; 720 - break; 721 - default: 722 - return false; 687 + int ret = 0; 688 + unsigned long flags; 689 + 690 + if (value) 691 + return -EINVAL; 692 + spin_lock_irqsave(&dev->lock, flags); 693 + if (dev->state != STATE_DEV_RUNNING) { 694 + dev_dbg(dev->dev, "fail, device is not running\n"); 695 + ret = -EINVAL; 696 + goto out_unlock; 697 + } 698 + if (!dev->gadget) { 699 + dev_dbg(dev->dev, "fail, gadget is not bound\n"); 700 + ret = -EBUSY; 701 + goto out_unlock; 702 + } 703 + if (dev->ep0_urb_queued) { 704 + dev_dbg(&dev->gadget->dev, "fail, urb already queued\n"); 705 + ret = -EBUSY; 706 + goto out_unlock; 707 + } 708 + if (!dev->ep0_in_pending && !dev->ep0_out_pending) { 709 + dev_dbg(&dev->gadget->dev, "fail, no request pending\n"); 710 + ret = -EBUSY; 711 + goto out_unlock; 723 712 } 724 713 725 - if (usb_endpoint_dir_in(desc) && !ep->caps.dir_in) 726 - return false; 727 - if (usb_endpoint_dir_out(desc) && !ep->caps.dir_out) 728 - return false; 714 + ret = usb_ep_set_halt(dev->gadget->ep0); 715 + if (ret < 0) 716 + dev_err(&dev->gadget->dev, 717 + "fail, usb_ep_set_halt returned %d\n", ret); 729 718 730 - return true; 719 + if (dev->ep0_in_pending) 720 + dev->ep0_in_pending = false; 721 + else 722 + dev->ep0_out_pending = false; 723 + 724 + out_unlock: 725 + spin_unlock_irqrestore(&dev->lock, flags); 726 + return ret; 731 727 } 732 728 733 729 static int raw_ioctl_ep_enable(struct raw_dev *dev, unsigned long value)
··· 753 713 int ret = 0, i; 754 714 unsigned long flags; 755 715 struct usb_endpoint_descriptor *desc; 756 - struct usb_ep *ep = NULL; 716 + struct raw_ep *ep; 757 717 758 718 desc = memdup_user((void __user *)value, sizeof(*desc)); 759 719 if (IS_ERR(desc))
··· 781 741 goto out_free; 782 742 } 783 743 784 - for (i = 0; i < USB_RAW_MAX_ENDPOINTS; i++) { 785 - if (dev->eps[i].state == STATE_EP_ENABLED) 744 + for (i = 0; i < dev->eps_num; i++) { 745 + ep = &dev->eps[i]; 746 + if (ep->state != STATE_EP_DISABLED) 786 747 continue; 787 - break; 788 - } 789 - if (i == USB_RAW_MAX_ENDPOINTS) { 790 - dev_dbg(&dev->gadget->dev, 791 - "fail, no device endpoints available\n"); 792 - ret = -EBUSY; 793 - goto out_free; 794 - } 795 - 796 - gadget_for_each_ep(ep, dev->gadget) { 797 - if (ep->enabled) 748 + if (ep->addr != usb_endpoint_num(desc) && 749 + ep->addr != USB_RAW_EP_ADDR_ANY) 798 750 continue; 799 - if (!check_ep_caps(ep, desc)) 751 + if (!usb_gadget_ep_match_desc(dev->gadget, ep->ep, desc, NULL)) 800 752 continue; 801 - ep->desc = desc; 802 - ret = usb_ep_enable(ep); 753 + ep->ep->desc = desc; 754 + ret = usb_ep_enable(ep->ep); 803 755 if (ret < 0) { 804 756 dev_err(&dev->gadget->dev, 805 757 "fail, usb_ep_enable returned %d\n", ret); 806 758 goto out_free; 807 759 } 808 - dev->eps[i].req = usb_ep_alloc_request(ep, GFP_ATOMIC); 809 - if (!dev->eps[i].req) { 760 + ep->req = usb_ep_alloc_request(ep->ep, GFP_ATOMIC); 761 + if (!ep->req) { 810 762 dev_err(&dev->gadget->dev, 811 763 "fail, usb_ep_alloc_request failed\n"); 812 - usb_ep_disable(ep); 764 + usb_ep_disable(ep->ep); 813 765 ret = -ENOMEM; 814 766 goto out_free; 815 767 } 816 - dev->eps[i].ep = ep; 817 - dev->eps[i].state = STATE_EP_ENABLED; 818 - ep->driver_data = &dev->eps[i]; 768 + ep->state = STATE_EP_ENABLED; 769 + ep->ep->driver_data = ep; 819 770 ret = i; 820 771 goto out_unlock; 821 772 }
··· 825 794 { 826 795 int ret = 0, i = value; 827 796 unsigned long flags; 828 - const void *desc; 829 - 830 - if (i < 0 || i >= USB_RAW_MAX_ENDPOINTS) 831 - return -EINVAL; 832 797 833 798 spin_lock_irqsave(&dev->lock, flags); 834 799 if (dev->state != STATE_DEV_RUNNING) {
··· 837 810 ret = -EBUSY; 838 811 goto out_unlock; 839 812 } 840 - if (dev->eps[i].state != STATE_EP_ENABLED) { 813 + if (i < 0 || i >= dev->eps_num) { 814 + dev_dbg(dev->dev, "fail, invalid endpoint\n"); 815 + ret = -EBUSY; 816 + goto out_unlock; 817 + } 818 + if (dev->eps[i].state == STATE_EP_DISABLED) { 841 819 dev_dbg(&dev->gadget->dev, "fail, endpoint is not enabled\n"); 842 820 ret = -EINVAL; 843 821 goto out_unlock;
··· 866 834 867 835 spin_lock_irqsave(&dev->lock, flags); 868 836 usb_ep_free_request(dev->eps[i].ep, dev->eps[i].req); 869 - desc = dev->eps[i].ep->desc; 870 - dev->eps[i].ep = NULL; 837 + kfree(dev->eps[i].ep->desc); 871 838 dev->eps[i].state = STATE_EP_DISABLED; 872 - kfree(desc); 873 839 dev->eps[i].disabling = false; 840 + 841 + out_unlock: 842 + spin_unlock_irqrestore(&dev->lock, flags); 843 + return ret; 844 + } 845 + 846 + static int raw_ioctl_ep_set_clear_halt_wedge(struct raw_dev *dev, 847 + unsigned long value, bool set, bool halt) 848 + { 849 + int ret = 0, i = value; 850 + unsigned long flags; 851 + 852 + spin_lock_irqsave(&dev->lock, flags); 853 + if (dev->state != STATE_DEV_RUNNING) { 854 + dev_dbg(dev->dev, "fail, device is not running\n"); 855 + ret = -EINVAL; 856 + goto out_unlock; 857 + } 858 + if (!dev->gadget) { 859 + dev_dbg(dev->dev, "fail, gadget is not bound\n"); 860 + ret = -EBUSY; 861 + goto out_unlock; 862 + } 863 + if (i < 0 || i >= dev->eps_num) { 864 + dev_dbg(dev->dev, "fail, invalid endpoint\n"); 865 + ret = -EBUSY; 866 + goto out_unlock; 867 + } 868 + if (dev->eps[i].state == STATE_EP_DISABLED) { 869 + dev_dbg(&dev->gadget->dev, "fail, endpoint is not enabled\n"); 870 + ret = -EINVAL; 871 + goto out_unlock; 872 + } 873 + if (dev->eps[i].disabling) { 874 + dev_dbg(&dev->gadget->dev, 875 + "fail, disable is in progress\n"); 876 + ret = -EINVAL; 877 + goto out_unlock; 878 + } 879 + if (dev->eps[i].urb_queued) { 880 + dev_dbg(&dev->gadget->dev, 881 + "fail, waiting for urb completion\n"); 882 + ret = -EINVAL; 883 + goto out_unlock; 884 + } 885 + if (usb_endpoint_xfer_isoc(dev->eps[i].ep->desc)) { 886 + dev_dbg(&dev->gadget->dev, 887 + "fail, can't halt/wedge ISO endpoint\n"); 888 + ret = -EINVAL; 889 + goto out_unlock; 890 + } 891 + 892 + if (set && halt) { 893 + ret = usb_ep_set_halt(dev->eps[i].ep); 894 + if (ret < 0) 895 + dev_err(&dev->gadget->dev, 896 + "fail, usb_ep_set_halt returned %d\n", ret); 897 + } else if (!set && halt) { 898 + ret = usb_ep_clear_halt(dev->eps[i].ep); 899 + if (ret < 0) 900 + dev_err(&dev->gadget->dev, 901 + "fail, usb_ep_clear_halt returned %d\n", ret); 902 + } else if (set && !halt) { 903 + ret = usb_ep_set_wedge(dev->eps[i].ep); 904 + if (ret < 0) 905 + dev_err(&dev->gadget->dev, 906 + "fail, usb_ep_set_wedge returned %d\n", ret); 907 + } 874 908 875 909 out_unlock: 876 910 spin_unlock_irqrestore(&dev->lock, flags);
··· 964 866 { 965 867 int ret = 0; 966 868 unsigned long flags; 967 - struct raw_ep *ep = &dev->eps[io->ep]; 869 + struct raw_ep *ep; 968 870 DECLARE_COMPLETION_ONSTACK(done); 969 871 970 872 spin_lock_irqsave(&dev->lock, flags);
··· 978 880 ret = -EBUSY; 979 881 goto out_unlock; 980 882 } 883 + if (io->ep >= dev->eps_num) { 884 + dev_dbg(&dev->gadget->dev, "fail, invalid endpoint\n"); 885 + ret = -EINVAL; 886 + goto out_unlock; 887 + } 888 + ep = &dev->eps[io->ep]; 981 889 if (ep->state != STATE_EP_ENABLED) { 982 890 dev_dbg(&dev->gadget->dev, "fail, endpoint is not enabled\n"); 983 891 ret = -EBUSY; 984 892 goto out_unlock;
··· 1068 964 if (IS_ERR(data)) 1069 965 return PTR_ERR(data); 1070 966 ret = raw_process_ep_io(dev, &io, data, false); 1071 - if (ret) 967 + if (ret < 0) 1072 968 goto free; 1073 969 1074 970 length = min(io.length, (unsigned int)ret); 1075 971 if (copy_to_user((void __user *)(value + sizeof(io)), data, length)) 1076 972 ret = -EFAULT; 973 + else 974 + ret = length; 1077 975 free: 1078 976 kfree(data); 1079 977 return ret;
··· 1129 1023 return ret; 1130 1024 } 1131 1025 1026 + static void fill_ep_caps(struct usb_ep_caps *caps, 1027 + struct usb_raw_ep_caps *raw_caps) 1028 + { 1029 + raw_caps->type_control = caps->type_control; 1030 + raw_caps->type_iso = caps->type_iso; 1031 + raw_caps->type_bulk = caps->type_bulk; 1032 + raw_caps->type_int = caps->type_int; 1033 + raw_caps->dir_in = caps->dir_in; 1034 + raw_caps->dir_out = caps->dir_out; 1035 + } 1036 + 1037 + static void fill_ep_limits(struct usb_ep *ep, struct usb_raw_ep_limits *limits) 1038 + { 1039 + limits->maxpacket_limit = ep->maxpacket_limit; 1040 + limits->max_streams = ep->max_streams; 1041 + } 1042 + 1043 + static int raw_ioctl_eps_info(struct raw_dev *dev, unsigned long value) 1044 + { 1045 + int ret = 0, i; 1046 + unsigned long flags; 1047 + struct usb_raw_eps_info *info; 1048 + struct raw_ep *ep; 1049 + 1050 + info = kmalloc(sizeof(*info), GFP_KERNEL); 1051 + if (!info) { 1052 + ret = -ENOMEM; 1053 + goto out; 1054 + } 1055 + 1056 + spin_lock_irqsave(&dev->lock, flags); 1057 + if (dev->state != STATE_DEV_RUNNING) { 1058 + dev_dbg(dev->dev, "fail, device is not running\n"); 1059 + ret = -EINVAL; 1060 + spin_unlock_irqrestore(&dev->lock, flags); 1061 + goto out_free; 1062 + } 1063 + if (!dev->gadget) { 1064 + dev_dbg(dev->dev, "fail, gadget is not bound\n"); 1065 + ret = -EBUSY; 1066 + spin_unlock_irqrestore(&dev->lock, flags); 1067 + goto out_free; 1068 + } 1069 + 1070 + memset(info, 0, sizeof(*info)); 1071 + for (i = 0; i < dev->eps_num; i++) { 1072 + ep = &dev->eps[i]; 1073 + strscpy(&info->eps[i].name[0], ep->ep->name, 1074 + USB_RAW_EP_NAME_MAX); 1075 + info->eps[i].addr = ep->addr; 1076 + fill_ep_caps(&ep->ep->caps, &info->eps[i].caps); 1077 + fill_ep_limits(ep->ep, &info->eps[i].limits); 1078 + } 1079 + ret = dev->eps_num; 1080 + spin_unlock_irqrestore(&dev->lock, flags); 1081 + 1082 + if (copy_to_user((void __user *)value, info, sizeof(*info))) 1083 + ret = -EFAULT; 1084 + 1085 + out_free: 1086 + kfree(info); 1087 + out: 1088 + return ret; 1089 + } 1090 + 1132 1091 static long raw_ioctl(struct file *fd, unsigned int cmd, unsigned long value) 1133 1092 { 1134 1093 struct raw_dev *dev = fd->private_data;
··· 1235 1064 break; 1236 1065 case USB_RAW_IOCTL_VBUS_DRAW: 1237 1066 ret = raw_ioctl_vbus_draw(dev, value); 1067 + break; 1068 + case USB_RAW_IOCTL_EPS_INFO: 1069 + ret = raw_ioctl_eps_info(dev, value); 1070 + break; 1071 + case USB_RAW_IOCTL_EP0_STALL: 1072 + ret = raw_ioctl_ep0_stall(dev, value); 1073 + break; 1074 + case USB_RAW_IOCTL_EP_SET_HALT: 1075 + ret = raw_ioctl_ep_set_clear_halt_wedge( 1076 + dev, value, true, true); 1077 + break; 1078 + case USB_RAW_IOCTL_EP_CLEAR_HALT: 1079 + ret = raw_ioctl_ep_set_clear_halt_wedge( 1080 + dev, value, false, true); 1081 + break; 1082 + case USB_RAW_IOCTL_EP_SET_WEDGE: 1083 + ret = raw_ioctl_ep_set_clear_halt_wedge( 1084 + dev, value, true, false); 1238 1085 break; 1239 1086 default: 1240 1087 ret = -EINVAL;
+2 -2
drivers/usb/gadget/udc/atmel_usba_udc.c
··· 185 185 return 0; 186 186 } 187 187 188 - const struct file_operations queue_dbg_fops = { 188 + static const struct file_operations queue_dbg_fops = { 189 189 .owner = THIS_MODULE, 190 190 .open = queue_dbg_open, 191 191 .llseek = no_llseek, ··· 193 193 .release = queue_dbg_release, 194 194 }; 195 195 196 - const struct file_operations regs_dbg_fops = { 196 + static const struct file_operations regs_dbg_fops = { 197 197 .owner = THIS_MODULE, 198 198 .open = regs_dbg_open, 199 199 .llseek = generic_file_llseek,
+2
drivers/usb/gadget/udc/net2272.c
··· 2647 2647 err_req: 2648 2648 release_mem_region(base, len); 2649 2649 err: 2650 + kfree(dev); 2651 + 2650 2652 return ret; 2651 2653 } 2652 2654
+4 -4
drivers/usb/gadget/udc/tegra-xudc.c
··· 3840 3840 3841 3841 flush_work(&xudc->usb_role_sw_work); 3842 3842 3843 - /* Forcibly disconnect before powergating. */ 3844 - tegra_xudc_device_mode_off(xudc); 3845 - 3846 - if (!pm_runtime_status_suspended(dev)) 3843 + if (!pm_runtime_status_suspended(dev)) { 3844 + /* Forcibly disconnect before powergating. */ 3845 + tegra_xudc_device_mode_off(xudc); 3847 3846 tegra_xudc_powergate(xudc); 3847 + } 3848 3848 3849 3849 pm_runtime_disable(dev); 3850 3850
+3 -1
drivers/usb/host/xhci-plat.c
··· 362 362 struct clk *reg_clk = xhci->reg_clk; 363 363 struct usb_hcd *shared_hcd = xhci->shared_hcd; 364 364 365 + pm_runtime_get_sync(&dev->dev); 365 366 xhci->xhc_state |= XHCI_STATE_REMOVING; 366 367 367 368 usb_remove_hcd(shared_hcd); ··· 376 375 clk_disable_unprepare(reg_clk); 377 376 usb_put_hcd(hcd); 378 377 379 - pm_runtime_set_suspended(&dev->dev); 380 378 pm_runtime_disable(&dev->dev); 379 + pm_runtime_put_noidle(&dev->dev); 380 + pm_runtime_set_suspended(&dev->dev); 381 381 382 382 return 0; 383 383 }
+2 -2
drivers/usb/host/xhci-ring.c
··· 3433 3433 /* New sg entry */ 3434 3434 --num_sgs; 3435 3435 sent_len -= block_len; 3436 - if (num_sgs != 0) { 3437 - sg = sg_next(sg); 3436 + sg = sg_next(sg); 3437 + if (num_sgs != 0 && sg) { 3438 3438 block_len = sg_dma_len(sg); 3439 3439 addr = (u64) sg_dma_address(sg); 3440 3440 addr += sent_len;
+2 -2
drivers/usb/mtu3/mtu3_debugfs.c
··· 276 276 .release = single_release, 277 277 }; 278 278 279 - static struct debugfs_reg32 mtu3_prb_regs[] = { 279 + static const struct debugfs_reg32 mtu3_prb_regs[] = { 280 280 dump_prb_reg("enable", U3D_SSUSB_PRB_CTRL0), 281 281 dump_prb_reg("byte-sell", U3D_SSUSB_PRB_CTRL1), 282 282 dump_prb_reg("byte-selh", U3D_SSUSB_PRB_CTRL2), ··· 349 349 static void mtu3_debugfs_create_prb_files(struct mtu3 *mtu) 350 350 { 351 351 struct ssusb_mtk *ssusb = mtu->ssusb; 352 - struct debugfs_reg32 *regs; 352 + const struct debugfs_reg32 *regs; 353 353 struct dentry *dir_prb; 354 354 int i; 355 355
+9 -3
drivers/usb/phy/phy-twl6030-usb.c
··· 377 377 if (status < 0) { 378 378 dev_err(&pdev->dev, "can't get IRQ %d, err %d\n", 379 379 twl->irq1, status); 380 - return status; 380 + goto err_put_regulator; 381 381 } 382 382 383 383 status = request_threaded_irq(twl->irq2, NULL, twl6030_usb_irq, ··· 386 386 if (status < 0) { 387 387 dev_err(&pdev->dev, "can't get IRQ %d, err %d\n", 388 388 twl->irq2, status); 389 - free_irq(twl->irq1, twl); 390 - return status; 389 + goto err_free_irq1; 391 390 } 392 391 393 392 twl->asleep = 0; ··· 395 396 dev_info(&pdev->dev, "Initialized TWL6030 USB module\n"); 396 397 397 398 return 0; 399 + 400 + err_free_irq1: 401 + free_irq(twl->irq1, twl); 402 + err_put_regulator: 403 + regulator_put(twl->usb3v3); 404 + 405 + return status; 398 406 } 399 407 400 408 static int twl6030_usb_remove(struct platform_device *pdev)
+3 -3
drivers/usb/typec/mux/intel_pmc_mux.c
··· 63 63 #define PMC_USB_ALTMODE_DP_MODE_SHIFT 8 64 64 65 65 /* TBT specific Mode Data bits */ 66 + #define PMC_USB_ALTMODE_HPD_HIGH BIT(14) 66 67 #define PMC_USB_ALTMODE_TBT_TYPE BIT(17) 67 68 #define PMC_USB_ALTMODE_CABLE_TYPE BIT(18) 68 69 #define PMC_USB_ALTMODE_ACTIVE_LINK BIT(20) ··· 75 74 #define PMC_USB_ALTMODE_TBT_GEN(_g_) (((_g_) & GENMASK(1, 0)) << 28) 76 75 77 76 /* Display HPD Request bits */ 77 + #define PMC_USB_DP_HPD_LVL BIT(4) 78 78 #define PMC_USB_DP_HPD_IRQ BIT(5) 79 - #define PMC_USB_DP_HPD_LVL BIT(6) 80 79 81 80 struct pmc_usb; 82 81 ··· 159 158 PMC_USB_ALTMODE_DP_MODE_SHIFT; 160 159 161 160 if (data->status & DP_STATUS_HPD_STATE) 162 - req.mode_data |= PMC_USB_DP_HPD_LVL << 163 - PMC_USB_ALTMODE_DP_MODE_SHIFT; 161 + req.mode_data |= PMC_USB_ALTMODE_HPD_HIGH; 164 162 165 163 return pmc_usb_command(port, (void *)&req, sizeof(req)); 166 164 }
+7 -8
drivers/vdpa/vdpa_sim/vdpa_sim.c
··· 89 89 static void vdpasim_queue_ready(struct vdpasim *vdpasim, unsigned int idx) 90 90 { 91 91 struct vdpasim_virtqueue *vq = &vdpasim->vqs[idx]; 92 - int ret; 93 92 94 - ret = vringh_init_iotlb(&vq->vring, vdpasim_features, 95 - VDPASIM_QUEUE_MAX, false, 96 - (struct vring_desc *)(uintptr_t)vq->desc_addr, 97 - (struct vring_avail *) 98 - (uintptr_t)vq->driver_addr, 99 - (struct vring_used *) 100 - (uintptr_t)vq->device_addr); 93 + vringh_init_iotlb(&vq->vring, vdpasim_features, 94 + VDPASIM_QUEUE_MAX, false, 95 + (struct vring_desc *)(uintptr_t)vq->desc_addr, 96 + (struct vring_avail *) 97 + (uintptr_t)vq->driver_addr, 98 + (struct vring_used *) 99 + (uintptr_t)vq->device_addr); 101 100 } 102 101 103 102 static void vdpasim_vq_reset(struct vdpasim_virtqueue *vq)
+2 -2
drivers/vhost/vhost.c
··· 730 730 if (!map) 731 731 return NULL; 732 732 733 - return (void *)(uintptr_t)(map->addr + addr - map->start); 733 + return (void __user *)(uintptr_t)(map->addr + addr - map->start); 734 734 } 735 735 736 736 /* Can we switch to this memory table? */ ··· 869 869 * not happen in this case. 870 870 */ 871 871 static inline void __user *__vhost_get_user(struct vhost_virtqueue *vq, 872 - void *addr, unsigned int size, 872 + void __user *addr, unsigned int size, 873 873 int type) 874 874 { 875 875 void __user *uaddr = vhost_vq_meta_fetch(vq,
+5 -13
fs/afs/fs_probe.c
··· 32 32 struct afs_server *server = call->server; 33 33 unsigned int server_index = call->server_index; 34 34 unsigned int index = call->addr_ix; 35 - unsigned int rtt = UINT_MAX; 35 + unsigned int rtt_us = 0; 36 36 bool have_result = false; 37 - u64 _rtt; 38 37 int ret = call->error; 39 38 40 39 _enter("%pU,%u", &server->uuid, index); ··· 92 93 } 93 94 } 94 95 95 - /* Get the RTT and scale it to fit into a 32-bit value that represents 96 - * over a minute of time so that we can access it with one instruction 97 - * on a 32-bit system. 98 - */ 99 - _rtt = rxrpc_kernel_get_rtt(call->net->socket, call->rxcall); 100 - _rtt /= 64; 101 - rtt = (_rtt > UINT_MAX) ? UINT_MAX : _rtt; 102 - if (rtt < server->probe.rtt) { 103 - server->probe.rtt = rtt; 96 + rtt_us = rxrpc_kernel_get_srtt(call->net->socket, call->rxcall); 97 + if (rtt_us < server->probe.rtt) { 98 + server->probe.rtt = rtt_us; 104 99 alist->preferred = index; 105 100 have_result = true; 106 101 } ··· 106 113 spin_unlock(&server->probe_lock); 107 114 108 115 _debug("probe [%u][%u] %pISpc rtt=%u ret=%d", 109 - server_index, index, &alist->addrs[index].transport, 110 - (unsigned int)rtt, ret); 116 + server_index, index, &alist->addrs[index].transport, rtt_us, ret); 111 117 112 118 have_result |= afs_fs_probe_done(server); 113 119 if (have_result)
+4 -4
fs/afs/fsclient.c
··· 385 385 ASSERTCMP(req->offset, <=, PAGE_SIZE); 386 386 if (req->offset == PAGE_SIZE) { 387 387 req->offset = 0; 388 - if (req->page_done) 389 - req->page_done(req); 390 388 req->index++; 391 389 if (req->remain > 0) 392 390 goto begin_page; ··· 438 440 if (req->offset < PAGE_SIZE) 439 441 zero_user_segment(req->pages[req->index], 440 442 req->offset, PAGE_SIZE); 441 - if (req->page_done) 442 - req->page_done(req); 443 443 req->offset = 0; 444 444 } 445 + 446 + if (req->page_done) 447 + for (req->index = 0; req->index < req->nr_pages; req->index++) 448 + req->page_done(req); 445 449 446 450 _leave(" = 0 [done]"); 447 451 return 0;
+5 -13
fs/afs/vl_probe.c
··· 31 31 struct afs_addr_list *alist = call->alist; 32 32 struct afs_vlserver *server = call->vlserver; 33 33 unsigned int server_index = call->server_index; 34 + unsigned int rtt_us = 0; 34 35 unsigned int index = call->addr_ix; 35 - unsigned int rtt = UINT_MAX; 36 36 bool have_result = false; 37 - u64 _rtt; 38 37 int ret = call->error; 39 38 40 39 _enter("%s,%u,%u,%d,%d", server->name, server_index, index, ret, call->abort_code); ··· 92 93 } 93 94 } 94 95 95 - /* Get the RTT and scale it to fit into a 32-bit value that represents 96 - * over a minute of time so that we can access it with one instruction 97 - * on a 32-bit system. 98 - */ 99 - _rtt = rxrpc_kernel_get_rtt(call->net->socket, call->rxcall); 100 - _rtt /= 64; 101 - rtt = (_rtt > UINT_MAX) ? UINT_MAX : _rtt; 102 - if (rtt < server->probe.rtt) { 103 - server->probe.rtt = rtt; 96 + rtt_us = rxrpc_kernel_get_srtt(call->net->socket, call->rxcall); 97 + if (rtt_us < server->probe.rtt) { 98 + server->probe.rtt = rtt_us; 104 99 alist->preferred = index; 105 100 have_result = true; 106 101 } ··· 106 113 spin_unlock(&server->probe_lock); 107 114 108 115 _debug("probe [%u][%u] %pISpc rtt=%u ret=%d", 109 - server_index, index, &alist->addrs[index].transport, 110 - (unsigned int)rtt, ret); 116 + server_index, index, &alist->addrs[index].transport, rtt_us, ret); 111 117 112 118 have_result |= afs_vl_probe_done(server); 113 119 if (have_result) {
+4 -4
fs/afs/yfsclient.c
··· 497 497 ASSERTCMP(req->offset, <=, PAGE_SIZE); 498 498 if (req->offset == PAGE_SIZE) { 499 499 req->offset = 0; 500 - if (req->page_done) 501 - req->page_done(req); 502 500 req->index++; 503 501 if (req->remain > 0) 504 502 goto begin_page; ··· 554 556 if (req->offset < PAGE_SIZE) 555 557 zero_user_segment(req->pages[req->index], 556 558 req->offset, PAGE_SIZE); 557 - if (req->page_done) 558 - req->page_done(req); 559 559 req->offset = 0; 560 560 } 561 + 562 + if (req->page_done) 563 + for (req->index = 0; req->index < req->nr_pages; req->index++) 564 + req->page_done(req); 561 565 562 566 _leave(" = 0 [done]"); 563 567 return 0;
+6 -6
fs/cachefiles/rdwr.c
··· 60 60 object = container_of(op->op.object, struct cachefiles_object, fscache); 61 61 spin_lock(&object->work_lock); 62 62 list_add_tail(&monitor->op_link, &op->to_do); 63 + fscache_enqueue_retrieval(op); 63 64 spin_unlock(&object->work_lock); 64 65 65 - fscache_enqueue_retrieval(op); 66 66 fscache_put_retrieval(op); 67 67 return 0; 68 68 } ··· 398 398 struct inode *inode; 399 399 sector_t block; 400 400 unsigned shift; 401 - int ret; 401 + int ret, ret2; 402 402 403 403 object = container_of(op->op.object, 404 404 struct cachefiles_object, fscache); ··· 430 430 block = page->index; 431 431 block <<= shift; 432 432 433 - ret = bmap(inode, &block); 434 - ASSERT(ret < 0); 433 + ret2 = bmap(inode, &block); 434 + ASSERT(ret2 == 0); 435 435 436 436 _debug("%llx -> %llx", 437 437 (unsigned long long) (page->index << shift), ··· 739 739 block = page->index; 740 740 block <<= shift; 741 741 742 - ret = bmap(inode, &block); 743 - ASSERT(!ret); 742 + ret2 = bmap(inode, &block); 743 + ASSERT(ret2 == 0); 744 744 745 745 _debug("%llx -> %llx", 746 746 (unsigned long long) (page->index << shift),
+1 -1
fs/cifs/cifssmb.c
··· 2152 2152 } 2153 2153 } 2154 2154 2155 + kref_put(&wdata2->refcount, cifs_writedata_release); 2155 2156 if (rc) { 2156 - kref_put(&wdata2->refcount, cifs_writedata_release); 2157 2157 if (is_retryable_error(rc)) 2158 2158 continue; 2159 2159 i += nr_pages;
+1 -1
fs/cifs/file.c
··· 4060 4060 * than it negotiated since it will refuse the read 4061 4061 * then. 4062 4062 */ 4063 - if ((tcon->ses) && !(tcon->ses->capabilities & 4063 + if (!(tcon->ses->capabilities & 4064 4064 tcon->ses->server->vals->cap_large_files)) { 4065 4065 current_read_size = min_t(uint, 4066 4066 current_read_size, CIFSMaxBufSize);
+1 -1
fs/cifs/inode.c
··· 730 730 * cifs_backup_query_path_info - SMB1 fallback code to get ino 731 731 * 732 732 * Fallback code to get file metadata when we don't have access to 733 - * @full_path (EACCESS) and have backup creds. 733 + * @full_path (EACCES) and have backup creds. 734 734 * 735 735 * @data will be set to search info result buffer 736 736 * @resp_buf will be set to cifs resp buf and needs to be freed with
+2 -2
fs/exec.c
··· 1317 1317 */ 1318 1318 set_mm_exe_file(bprm->mm, bprm->file); 1319 1319 1320 + would_dump(bprm, bprm->file); 1321 + 1320 1322 /* 1321 1323 * Release all of the old mmap stuff 1322 1324 */ ··· 1877 1875 retval = copy_strings(bprm->argc, argv, bprm); 1878 1876 if (retval < 0) 1879 1877 goto out; 1880 - 1881 - would_dump(bprm, bprm->file); 1882 1878 1883 1879 retval = exec_binprm(bprm); 1884 1880 if (retval < 0)
+7 -6
fs/exfat/file.c
··· 348 348 } 349 349 350 350 const struct file_operations exfat_file_operations = { 351 - .llseek = generic_file_llseek, 352 - .read_iter = generic_file_read_iter, 353 - .write_iter = generic_file_write_iter, 354 - .mmap = generic_file_mmap, 355 - .fsync = generic_file_fsync, 356 - .splice_read = generic_file_splice_read, 351 + .llseek = generic_file_llseek, 352 + .read_iter = generic_file_read_iter, 353 + .write_iter = generic_file_write_iter, 354 + .mmap = generic_file_mmap, 355 + .fsync = generic_file_fsync, 356 + .splice_read = generic_file_splice_read, 357 + .splice_write = iter_file_splice_write, 357 358 }; 358 359 359 360 const struct inode_operations exfat_file_inode_operations = {
+1
fs/exfat/namei.c
··· 692 692 exfat_fs_error(sb, 693 693 "non-zero size file starts with zero cluster (size : %llu, p_dir : %u, entry : 0x%08x)", 694 694 i_size_read(dir), ei->dir.dir, ei->entry); 695 + kfree(es); 695 696 return -EIO; 696 697 } 697 698
+19
fs/exfat/super.c
··· 203 203 Opt_errors, 204 204 Opt_discard, 205 205 Opt_time_offset, 206 + 207 + /* Deprecated options */ 208 + Opt_utf8, 209 + Opt_debug, 210 + Opt_namecase, 211 + Opt_codepage, 206 212 }; 207 213 208 214 static const struct constant_table exfat_param_enums[] = { ··· 229 223 fsparam_enum("errors", Opt_errors, exfat_param_enums), 230 224 fsparam_flag("discard", Opt_discard), 231 225 fsparam_s32("time_offset", Opt_time_offset), 226 + __fsparam(NULL, "utf8", Opt_utf8, fs_param_deprecated, 227 + NULL), 228 + __fsparam(NULL, "debug", Opt_debug, fs_param_deprecated, 229 + NULL), 230 + __fsparam(fs_param_is_u32, "namecase", Opt_namecase, 231 + fs_param_deprecated, NULL), 232 + __fsparam(fs_param_is_u32, "codepage", Opt_codepage, 233 + fs_param_deprecated, NULL), 232 234 {} 233 235 }; 234 236 ··· 291 277 if (result.int_32 < -24 * 60 || result.int_32 > 24 * 60) 292 278 return -EINVAL; 293 279 opts->time_offset = result.int_32; 280 + break; 281 + case Opt_utf8: 282 + case Opt_debug: 283 + case Opt_namecase: 284 + case Opt_codepage: 294 285 break; 295 286 default: 296 287 return -EINVAL;
+1 -1
fs/ext4/ext4.h
··· 722 722 #define EXT4_MAX_BLOCK_FILE_PHYS 0xFFFFFFFF 723 723 724 724 /* Max logical block we can support */ 725 - #define EXT4_MAX_LOGICAL_BLOCK 0xFFFFFFFF 725 + #define EXT4_MAX_LOGICAL_BLOCK 0xFFFFFFFE 726 726 727 727 /* 728 728 * Structure of an inode on the disk
+31
fs/ext4/extents.c
··· 4832 4832 .iomap_begin = ext4_iomap_xattr_begin, 4833 4833 }; 4834 4834 4835 + static int ext4_fiemap_check_ranges(struct inode *inode, u64 start, u64 *len) 4836 + { 4837 + u64 maxbytes; 4838 + 4839 + if (ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)) 4840 + maxbytes = inode->i_sb->s_maxbytes; 4841 + else 4842 + maxbytes = EXT4_SB(inode->i_sb)->s_bitmap_maxbytes; 4843 + 4844 + if (*len == 0) 4845 + return -EINVAL; 4846 + if (start > maxbytes) 4847 + return -EFBIG; 4848 + 4849 + /* 4850 + * Shrink request scope to what the fs can actually handle. 4851 + */ 4852 + if (*len > maxbytes || (maxbytes - *len) < start) 4853 + *len = maxbytes - start; 4854 + return 0; 4855 + } 4856 + 4835 4857 static int _ext4_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo, 4836 4858 __u64 start, __u64 len, bool from_es_cache) 4837 4859 { ··· 4873 4851 4874 4852 if (fiemap_check_flags(fieinfo, ext4_fiemap_flags)) 4875 4853 return -EBADR; 4854 + 4855 + /* 4856 + * For bitmap files the maximum size limit could be smaller than 4857 + * s_maxbytes, so check len here manually instead of just relying on the 4858 + * generic check. 4859 + */ 4860 + error = ext4_fiemap_check_ranges(inode, start, &len); 4861 + if (error) 4862 + return error; 4876 4863 4877 4864 if (fieinfo->fi_flags & FIEMAP_FLAG_XATTR) { 4878 4865 fieinfo->fi_flags &= ~FIEMAP_FLAG_XATTR;
+2 -31
fs/ext4/ioctl.c
··· 733 733 fa->fsx_projid = from_kprojid(&init_user_ns, ei->i_projid); 734 734 } 735 735 736 - /* copied from fs/ioctl.c */ 737 - static int fiemap_check_ranges(struct super_block *sb, 738 - u64 start, u64 len, u64 *new_len) 739 - { 740 - u64 maxbytes = (u64) sb->s_maxbytes; 741 - 742 - *new_len = len; 743 - 744 - if (len == 0) 745 - return -EINVAL; 746 - 747 - if (start > maxbytes) 748 - return -EFBIG; 749 - 750 - /* 751 - * Shrink request scope to what the fs can actually handle. 752 - */ 753 - if (len > maxbytes || (maxbytes - len) < start) 754 - *new_len = maxbytes - start; 755 - 756 - return 0; 757 - } 758 - 759 736 /* So that the fiemap access checks can't overflow on 32 bit machines. */ 760 737 #define FIEMAP_MAX_EXTENTS (UINT_MAX / sizeof(struct fiemap_extent)) 761 738 ··· 742 765 struct fiemap __user *ufiemap = (struct fiemap __user *) arg; 743 766 struct fiemap_extent_info fieinfo = { 0, }; 744 767 struct inode *inode = file_inode(filp); 745 - struct super_block *sb = inode->i_sb; 746 - u64 len; 747 768 int error; 748 769 749 770 if (copy_from_user(&fiemap, ufiemap, sizeof(fiemap))) ··· 749 774 750 775 if (fiemap.fm_extent_count > FIEMAP_MAX_EXTENTS) 751 776 return -EINVAL; 752 - 753 - error = fiemap_check_ranges(sb, fiemap.fm_start, fiemap.fm_length, 754 - &len); 755 - if (error) 756 - return error; 757 777 758 778 fieinfo.fi_flags = fiemap.fm_flags; 759 779 fieinfo.fi_extents_max = fiemap.fm_extent_count; ··· 762 792 if (fieinfo.fi_flags & FIEMAP_FLAG_SYNC) 763 793 filemap_write_and_wait(inode->i_mapping); 764 794 765 - error = ext4_get_es_cache(inode, &fieinfo, fiemap.fm_start, len); 795 + error = ext4_get_es_cache(inode, &fieinfo, fiemap.fm_start, 796 + fiemap.fm_length); 766 797 fiemap.fm_flags = fieinfo.fi_flags; 767 798 fiemap.fm_mapped_extents = fieinfo.fi_extents_mapped; 768 799 if (copy_to_user(ufiemap, &fiemap, sizeof(fiemap)))
+1 -1
fs/file.c
··· 70 70 */ 71 71 static void copy_fdtable(struct fdtable *nfdt, struct fdtable *ofdt) 72 72 { 73 - unsigned int cpy, set; 73 + size_t cpy, set; 74 74 75 75 BUG_ON(nfdt->max_fds < ofdt->max_fds); 76 76
+38 -31
fs/io_uring.c
··· 619 619 bool needs_fixed_file; 620 620 u8 opcode; 621 621 622 + u16 buf_index; 623 + 622 624 struct io_ring_ctx *ctx; 623 625 struct list_head list; 624 626 unsigned int flags; ··· 926 924 goto err; 927 925 928 926 ctx->flags = p->flags; 927 + init_waitqueue_head(&ctx->sqo_wait); 929 928 init_waitqueue_head(&ctx->cq_wait); 930 929 INIT_LIST_HEAD(&ctx->cq_overflow_list); 931 930 init_completion(&ctx->completions[0]); ··· 1397 1394 for (i = 0; i < rb->to_free; i++) { 1398 1395 struct io_kiocb *req = rb->reqs[i]; 1399 1396 1400 - if (req->flags & REQ_F_FIXED_FILE) { 1401 - req->file = NULL; 1402 - percpu_ref_put(req->fixed_file_refs); 1403 - } 1404 1397 if (req->flags & REQ_F_INFLIGHT) 1405 1398 inflight++; 1406 1399 __io_req_aux_free(req); ··· 1669 1670 if ((req->flags & REQ_F_LINK_HEAD) || io_is_fallback_req(req)) 1670 1671 return false; 1671 1672 1672 - if (!(req->flags & REQ_F_FIXED_FILE) || req->io) 1673 + if (req->file || req->io) 1673 1674 rb->need_iter++; 1674 1675 1675 1676 rb->reqs[rb->to_free++] = req; ··· 2103 2104 2104 2105 req->rw.addr = READ_ONCE(sqe->addr); 2105 2106 req->rw.len = READ_ONCE(sqe->len); 2106 - /* we own ->private, reuse it for the buffer index / buffer ID */ 2107 - req->rw.kiocb.private = (void *) (unsigned long) 2108 - READ_ONCE(sqe->buf_index); 2107 + req->buf_index = READ_ONCE(sqe->buf_index); 2109 2108 return 0; 2110 2109 } 2111 2110 ··· 2146 2149 struct io_ring_ctx *ctx = req->ctx; 2147 2150 size_t len = req->rw.len; 2148 2151 struct io_mapped_ubuf *imu; 2149 - unsigned index, buf_index; 2152 + u16 index, buf_index; 2150 2153 size_t offset; 2151 2154 u64 buf_addr; 2152 2155 ··· 2154 2157 if (unlikely(!ctx->user_bufs)) 2155 2158 return -EFAULT; 2156 2159 2157 - buf_index = (unsigned long) req->rw.kiocb.private; 2160 + buf_index = req->buf_index; 2158 2161 if (unlikely(buf_index >= ctx->nr_user_bufs)) 2159 2162 return -EFAULT; 2160 2163 ··· 2270 2273 bool needs_lock) 2271 2274 { 2272 2275 struct io_buffer *kbuf; 2273 - int bgid; 
2276 + u16 bgid; 2274 2277 2275 2278 kbuf = (struct io_buffer *) (unsigned long) req->rw.addr; 2276 - bgid = (int) (unsigned long) req->rw.kiocb.private; 2279 + bgid = req->buf_index; 2277 2280 kbuf = io_buffer_select(req, len, bgid, kbuf, needs_lock); 2278 2281 if (IS_ERR(kbuf)) 2279 2282 return kbuf; ··· 2364 2367 } 2365 2368 2366 2369 /* buffer index only valid with fixed read/write, or buffer select */ 2367 - if (req->rw.kiocb.private && !(req->flags & REQ_F_BUFFER_SELECT)) 2370 + if (req->buf_index && !(req->flags & REQ_F_BUFFER_SELECT)) 2368 2371 return -EINVAL; 2369 2372 2370 2373 if (opcode == IORING_OP_READ || opcode == IORING_OP_WRITE) { ··· 2764 2767 struct file *out = sp->file_out; 2765 2768 unsigned int flags = sp->flags & ~SPLICE_F_FD_IN_FIXED; 2766 2769 loff_t *poff_in, *poff_out; 2767 - long ret; 2770 + long ret = 0; 2768 2771 2769 2772 if (force_nonblock) 2770 2773 return -EAGAIN; 2771 2774 2772 2775 poff_in = (sp->off_in == -1) ? NULL : &sp->off_in; 2773 2776 poff_out = (sp->off_out == -1) ? NULL : &sp->off_out; 2774 - ret = do_splice(in, poff_in, out, poff_out, sp->len, flags); 2775 - if (force_nonblock && ret == -EAGAIN) 2776 - return -EAGAIN; 2777 + 2778 + if (sp->len) 2779 + ret = do_splice(in, poff_in, out, poff_out, sp->len, flags); 2777 2780 2778 2781 io_put_file(req, in, (sp->flags & SPLICE_F_FD_IN_FIXED)); 2779 2782 req->flags &= ~REQ_F_NEED_CLEANUP; ··· 4135 4138 req->result = mask; 4136 4139 init_task_work(&req->task_work, func); 4137 4140 /* 4138 - * If this fails, then the task is exiting. Punt to one of the io-wq 4139 - * threads to ensure the work gets run, we can't always rely on exit 4140 - * cancelation taking care of this. 4141 + * If this fails, then the task is exiting. When a task exits, the 4142 + * work gets canceled, so just cancel this request as well instead 4143 + * of executing it. We can't safely execute it anyway, as we may not 4144 + * have the needed state needed for it anyway. 
4141 4145 */ 4142 4146 ret = task_work_add(tsk, &req->task_work, true); 4143 4147 if (unlikely(ret)) { 4148 + WRITE_ONCE(poll->canceled, true); 4144 4149 tsk = io_wq_get_task(req->ctx->io_wq); 4145 4150 task_work_add(tsk, &req->task_work, true); 4146 4151 } ··· 5013 5014 if (!req_need_defer(req) && list_empty_careful(&ctx->defer_list)) 5014 5015 return 0; 5015 5016 5016 - if (!req->io && io_alloc_async_ctx(req)) 5017 - return -EAGAIN; 5018 - 5019 - ret = io_req_defer_prep(req, sqe); 5020 - if (ret < 0) 5021 - return ret; 5017 + if (!req->io) { 5018 + if (io_alloc_async_ctx(req)) 5019 + return -EAGAIN; 5020 + ret = io_req_defer_prep(req, sqe); 5021 + if (ret < 0) 5022 + return ret; 5023 + } 5022 5024 5023 5025 spin_lock_irq(&ctx->completion_lock); 5024 5026 if (!req_need_defer(req) && list_empty(&ctx->defer_list)) { ··· 5306 5306 if (ret) 5307 5307 return ret; 5308 5308 5309 - if (ctx->flags & IORING_SETUP_IOPOLL) { 5309 + /* If the op doesn't have a file, we're not polling for it */ 5310 + if ((ctx->flags & IORING_SETUP_IOPOLL) && req->file) { 5310 5311 const bool in_async = io_wq_current_is_worker(); 5311 5312 5312 5313 if (req->result == -EAGAIN) ··· 5608 5607 io_double_put_req(req); 5609 5608 } 5610 5609 } else if (req->flags & REQ_F_FORCE_ASYNC) { 5611 - ret = io_req_defer_prep(req, sqe); 5612 - if (unlikely(ret < 0)) 5613 - goto fail_req; 5610 + if (!req->io) { 5611 + ret = -EAGAIN; 5612 + if (io_alloc_async_ctx(req)) 5613 + goto fail_req; 5614 + ret = io_req_defer_prep(req, sqe); 5615 + if (unlikely(ret < 0)) 5616 + goto fail_req; 5617 + } 5618 + 5614 5619 /* 5615 5620 * Never try inline submit of IOSQE_ASYNC is set, go straight 5616 5621 * to async execution. 
··· 6032 6025 finish_wait(&ctx->sqo_wait, &wait); 6033 6026 6034 6027 ctx->rings->sq_flags &= ~IORING_SQ_NEED_WAKEUP; 6028 + ret = 0; 6035 6029 continue; 6036 6030 } 6037 6031 finish_wait(&ctx->sqo_wait, &wait); ··· 6846 6838 { 6847 6839 int ret; 6848 6840 6849 - init_waitqueue_head(&ctx->sqo_wait); 6850 6841 mmgrab(current->mm); 6851 6842 ctx->sqo_mm = current->mm; 6852 6843
+18 -21
fs/nfs/fscache.c
··· 118 118 119 119 nfss->fscache_key = NULL; 120 120 nfss->fscache = NULL; 121 - if (!(nfss->options & NFS_OPTION_FSCACHE)) 122 - return; 123 121 if (!uniq) { 124 122 uniq = ""; 125 123 ulen = 1; ··· 186 188 /* create a cache index for looking up filehandles */ 187 189 nfss->fscache = fscache_acquire_cookie(nfss->nfs_client->fscache, 188 190 &nfs_fscache_super_index_def, 189 - key, sizeof(*key) + ulen, 191 + &key->key, 192 + sizeof(key->key) + ulen, 190 193 NULL, 0, 191 194 nfss, 0, true); 192 195 dfprintk(FSCACHE, "NFS: get superblock cookie (0x%p/0x%p)\n", ··· 225 226 } 226 227 } 227 228 229 + static void nfs_fscache_update_auxdata(struct nfs_fscache_inode_auxdata *auxdata, 230 + struct nfs_inode *nfsi) 231 + { 232 + memset(auxdata, 0, sizeof(*auxdata)); 233 + auxdata->mtime_sec = nfsi->vfs_inode.i_mtime.tv_sec; 234 + auxdata->mtime_nsec = nfsi->vfs_inode.i_mtime.tv_nsec; 235 + auxdata->ctime_sec = nfsi->vfs_inode.i_ctime.tv_sec; 236 + auxdata->ctime_nsec = nfsi->vfs_inode.i_ctime.tv_nsec; 237 + 238 + if (NFS_SERVER(&nfsi->vfs_inode)->nfs_client->rpc_ops->version == 4) 239 + auxdata->change_attr = inode_peek_iversion_raw(&nfsi->vfs_inode); 240 + } 241 + 228 242 /* 229 243 * Initialise the per-inode cache cookie pointer for an NFS inode. 
230 244 */ ··· 251 239 if (!(nfss->fscache && S_ISREG(inode->i_mode))) 252 240 return; 253 241 254 - memset(&auxdata, 0, sizeof(auxdata)); 255 - auxdata.mtime_sec = nfsi->vfs_inode.i_mtime.tv_sec; 256 - auxdata.mtime_nsec = nfsi->vfs_inode.i_mtime.tv_nsec; 257 - auxdata.ctime_sec = nfsi->vfs_inode.i_ctime.tv_sec; 258 - auxdata.ctime_nsec = nfsi->vfs_inode.i_ctime.tv_nsec; 259 - 260 - if (NFS_SERVER(&nfsi->vfs_inode)->nfs_client->rpc_ops->version == 4) 261 - auxdata.change_attr = inode_peek_iversion_raw(&nfsi->vfs_inode); 242 + nfs_fscache_update_auxdata(&auxdata, nfsi); 262 243 263 244 nfsi->fscache = fscache_acquire_cookie(NFS_SB(inode->i_sb)->fscache, 264 245 &nfs_fscache_inode_object_def, ··· 271 266 272 267 dfprintk(FSCACHE, "NFS: clear cookie (0x%p/0x%p)\n", nfsi, cookie); 273 268 274 - memset(&auxdata, 0, sizeof(auxdata)); 275 - auxdata.mtime_sec = nfsi->vfs_inode.i_mtime.tv_sec; 276 - auxdata.mtime_nsec = nfsi->vfs_inode.i_mtime.tv_nsec; 277 - auxdata.ctime_sec = nfsi->vfs_inode.i_ctime.tv_sec; 278 - auxdata.ctime_nsec = nfsi->vfs_inode.i_ctime.tv_nsec; 269 + nfs_fscache_update_auxdata(&auxdata, nfsi); 279 270 fscache_relinquish_cookie(cookie, &auxdata, false); 280 271 nfsi->fscache = NULL; 281 272 } ··· 311 310 if (!fscache_cookie_valid(cookie)) 312 311 return; 313 312 314 - memset(&auxdata, 0, sizeof(auxdata)); 315 - auxdata.mtime_sec = nfsi->vfs_inode.i_mtime.tv_sec; 316 - auxdata.mtime_nsec = nfsi->vfs_inode.i_mtime.tv_nsec; 317 - auxdata.ctime_sec = nfsi->vfs_inode.i_ctime.tv_sec; 318 - auxdata.ctime_nsec = nfsi->vfs_inode.i_ctime.tv_nsec; 313 + nfs_fscache_update_auxdata(&auxdata, nfsi); 319 314 320 315 if (inode_is_open_for_write(inode)) { 321 316 dfprintk(FSCACHE, "NFS: nfsi 0x%p disabling cache\n", nfsi);
+2 -1
fs/nfs/mount_clnt.c
··· 30 30 #define encode_dirpath_sz (1 + XDR_QUADLEN(MNTPATHLEN)) 31 31 #define MNT_status_sz (1) 32 32 #define MNT_fhandle_sz XDR_QUADLEN(NFS2_FHSIZE) 33 + #define MNT_fhandlev3_sz XDR_QUADLEN(NFS3_FHSIZE) 33 34 #define MNT_authflav3_sz (1 + NFS_MAX_SECFLAVORS) 34 35 35 36 /* ··· 38 37 */ 39 38 #define MNT_enc_dirpath_sz encode_dirpath_sz 40 39 #define MNT_dec_mountres_sz (MNT_status_sz + MNT_fhandle_sz) 41 - #define MNT_dec_mountres3_sz (MNT_status_sz + MNT_fhandle_sz + \ 40 + #define MNT_dec_mountres3_sz (MNT_status_sz + MNT_fhandlev3_sz + \ 42 41 MNT_authflav3_sz) 43 42 44 43 /*
+1 -1
fs/nfs/nfs4proc.c
··· 6347 6347 .rpc_client = server->client, 6348 6348 .rpc_message = &msg, 6349 6349 .callback_ops = &nfs4_delegreturn_ops, 6350 - .flags = RPC_TASK_ASYNC | RPC_TASK_CRED_NOREF | RPC_TASK_TIMEOUT, 6350 + .flags = RPC_TASK_ASYNC | RPC_TASK_TIMEOUT, 6351 6351 }; 6352 6352 int status = 0; 6353 6353
+1 -1
fs/nfs/nfs4state.c
··· 734 734 state = new; 735 735 state->owner = owner; 736 736 atomic_inc(&owner->so_count); 737 - list_add_rcu(&state->inode_states, &nfsi->open_states); 738 737 ihold(inode); 739 738 state->inode = inode; 739 + list_add_rcu(&state->inode_states, &nfsi->open_states); 740 740 spin_unlock(&inode->i_lock); 741 741 /* Note: The reclaim code dictates that we add stateless 742 742 * and read-only stateids to the end of the list */
+3 -2
fs/nfs/pagelist.c
··· 752 752 .callback_ops = call_ops, 753 753 .callback_data = hdr, 754 754 .workqueue = nfsiod_workqueue, 755 - .flags = RPC_TASK_ASYNC | RPC_TASK_CRED_NOREF | flags, 755 + .flags = RPC_TASK_ASYNC | flags, 756 756 }; 757 757 758 758 hdr->rw_ops->rw_initiate(hdr, &msg, rpc_ops, &task_setup_data, how); ··· 950 950 hdr->cred, 951 951 NFS_PROTO(hdr->inode), 952 952 desc->pg_rpc_callops, 953 - desc->pg_ioflags, 0); 953 + desc->pg_ioflags, 954 + RPC_TASK_CRED_NOREF); 954 955 return ret; 955 956 } 956 957
+2 -1
fs/nfs/pnfs_nfs.c
··· 536 536 nfs_init_commit(data, NULL, NULL, cinfo); 537 537 nfs_initiate_commit(NFS_CLIENT(inode), data, 538 538 NFS_PROTO(data->inode), 539 - data->mds_ops, how, 0); 539 + data->mds_ops, how, 540 + RPC_TASK_CRED_NOREF); 540 541 } else { 541 542 nfs_init_commit(data, NULL, data->lseg, cinfo); 542 543 initiate_commit(data, how);
-1
fs/nfs/super.c
··· 1189 1189 uniq = ctx->fscache_uniq; 1190 1190 ulen = strlen(ctx->fscache_uniq); 1191 1191 } 1192 - return; 1193 1192 } 1194 1193 1195 1194 nfs_fscache_get_super_cookie(sb, uniq, ulen);
+2 -2
fs/nfs/write.c
··· 1695 1695 .callback_ops = call_ops, 1696 1696 .callback_data = data, 1697 1697 .workqueue = nfsiod_workqueue, 1698 - .flags = RPC_TASK_ASYNC | RPC_TASK_CRED_NOREF | flags, 1698 + .flags = RPC_TASK_ASYNC | flags, 1699 1699 .priority = priority, 1700 1700 }; 1701 1701 /* Set up the initial task struct. */ ··· 1813 1813 nfs_init_commit(data, head, NULL, cinfo); 1814 1814 atomic_inc(&cinfo->mds->rpcs_out); 1815 1815 return nfs_initiate_commit(NFS_CLIENT(inode), data, NFS_PROTO(inode), 1816 - data->mds_ops, how, 0); 1816 + data->mds_ops, how, RPC_TASK_CRED_NOREF); 1817 1817 } 1818 1818 1819 1819 /*
+3
fs/overlayfs/export.c
··· 783 783 if (fh_type != OVL_FILEID_V0) 784 784 return ERR_PTR(-EINVAL); 785 785 786 + if (buflen <= OVL_FH_WIRE_OFFSET) 787 + return ERR_PTR(-EINVAL); 788 + 786 789 fh = kzalloc(buflen, GFP_KERNEL); 787 790 if (!fh) 788 791 return ERR_PTR(-ENOMEM);
+18
fs/overlayfs/inode.c
··· 58 58 if (attr->ia_valid & (ATTR_KILL_SUID|ATTR_KILL_SGID)) 59 59 attr->ia_valid &= ~ATTR_MODE; 60 60 61 + /* 62 + * We might have to translate ovl file into real file object 63 + * once use cases emerge. For now, simply don't let underlying 64 + * filesystem rely on attr->ia_file 65 + */ 66 + attr->ia_valid &= ~ATTR_FILE; 67 + 68 + /* 69 + * If open(O_TRUNC) is done, VFS calls ->setattr with ATTR_OPEN 70 + * set. Overlayfs does not pass O_TRUNC flag to underlying 71 + * filesystem during open -> do not pass ATTR_OPEN. This 72 + * disables optimization in fuse which assumes open(O_TRUNC) 73 + * already set file size to 0. But we never passed O_TRUNC to 74 + * fuse. So by clearing ATTR_OPEN, fuse will be forced to send 75 + * setattr request to server. 76 + */ 77 + attr->ia_valid &= ~ATTR_OPEN; 78 + 61 79 inode_lock(upperdentry->d_inode); 62 80 old_cred = ovl_override_creds(dentry->d_sb); 63 81 err = notify_change(upperdentry, attr, NULL);
+1 -1
fs/splice.c
··· 1494 1494 * Check pipe occupancy without the inode lock first. This function 1495 1495 * is speculative anyways, so missing one is ok. 1496 1496 */ 1497 - if (pipe_full(pipe->head, pipe->tail, pipe->max_usage)) 1497 + if (!pipe_full(pipe->head, pipe->tail, pipe->max_usage)) 1498 1498 return 0; 1499 1499 1500 1500 ret = 0;
+4 -13
fs/ubifs/auth.c
··· 79 79 struct shash_desc *inhash) 80 80 { 81 81 struct ubifs_auth_node *auth = node; 82 - u8 *hash; 82 + u8 hash[UBIFS_HASH_ARR_SZ]; 83 83 int err; 84 - 85 - hash = kmalloc(crypto_shash_descsize(c->hash_tfm), GFP_NOFS); 86 - if (!hash) 87 - return -ENOMEM; 88 84 89 85 { 90 86 SHASH_DESC_ON_STACK(hash_desc, c->hash_tfm); ··· 90 94 91 95 err = crypto_shash_final(hash_desc, hash); 92 96 if (err) 93 - goto out; 97 + return err; 94 98 } 95 99 96 100 err = ubifs_hash_calc_hmac(c, hash, auth->hmac); 97 101 if (err) 98 - goto out; 102 + return err; 99 103 100 104 auth->ch.node_type = UBIFS_AUTH_NODE; 101 105 ubifs_prepare_node(c, auth, ubifs_auth_node_sz(c), 0); 102 - 103 - err = 0; 104 - out: 105 - kfree(hash); 106 - 107 - return err; 106 + return 0; 108 107 } 109 108 110 109 static struct shash_desc *ubifs_get_desc(const struct ubifs_info *c,
+1 -5
fs/ubifs/file.c
··· 1375 1375 struct ubifs_info *c = inode->i_sb->s_fs_info; 1376 1376 struct ubifs_budget_req req = { .dirtied_ino = 1, 1377 1377 .dirtied_ino_d = ALIGN(ui->data_len, 8) }; 1378 - int iflags = I_DIRTY_TIME; 1379 1378 int err, release; 1380 1379 1381 1380 if (!IS_ENABLED(CONFIG_UBIFS_ATIME_SUPPORT)) ··· 1392 1393 if (flags & S_MTIME) 1393 1394 inode->i_mtime = *time; 1394 1395 1395 - if (!(inode->i_sb->s_flags & SB_LAZYTIME)) 1396 - iflags |= I_DIRTY_SYNC; 1397 - 1398 1396 release = ui->dirty; 1399 - __mark_inode_dirty(inode, iflags); 1397 + __mark_inode_dirty(inode, I_DIRTY_SYNC); 1400 1398 mutex_unlock(&ui->ui_mutex); 1401 1399 if (release) 1402 1400 ubifs_release_budget(c, &req);
+2 -11
fs/ubifs/replay.c
··· 601 601 struct ubifs_scan_node *snod; 602 602 int n_nodes = 0; 603 603 int err; 604 - u8 *hash, *hmac; 604 + u8 hash[UBIFS_HASH_ARR_SZ]; 605 + u8 hmac[UBIFS_HMAC_ARR_SZ]; 605 606 606 607 if (!ubifs_authenticated(c)) 607 608 return sleb->nodes_cnt; 608 - 609 - hash = kmalloc(crypto_shash_descsize(c->hash_tfm), GFP_NOFS); 610 - hmac = kmalloc(c->hmac_desc_len, GFP_NOFS); 611 - if (!hash || !hmac) { 612 - err = -ENOMEM; 613 - goto out; 614 - } 615 609 616 610 list_for_each_entry(snod, &sleb->nodes, list) { 617 611 ··· 656 662 err = 0; 657 663 } 658 664 out: 659 - kfree(hash); 660 - kfree(hmac); 661 - 662 665 return err ? err : n_nodes - n_not_auth; 663 666 } 664 667
+6
include/linux/compiler.h
··· 356 356 /* &a[0] degrades to a pointer: a different type from an array */ 357 357 #define __must_be_array(a) BUILD_BUG_ON_ZERO(__same_type((a), &(a)[0])) 358 358 359 + /* 360 + * This is needed in functions which generate the stack canary, see 361 + * arch/x86/kernel/smpboot.c::start_secondary() for an example. 362 + */ 363 + #define prevent_tail_call_optimization() mb() 364 + 359 365 #endif /* __LINUX_COMPILER_H */
+9
include/linux/cper.h
··· 521 521 u8 aer_info[96]; 522 522 }; 523 523 524 + /* Firmware Error Record Reference, UEFI v2.7 sec N.2.10 */ 525 + struct cper_sec_fw_err_rec_ref { 526 + u8 record_type; 527 + u8 revision; 528 + u8 reserved[6]; 529 + u64 record_identifier; 530 + guid_t record_identifier_guid; 531 + }; 532 + 524 533 /* Reset to default packing */ 525 534 #pragma pack() 526 535
+2
include/linux/efi.h
··· 1245 1245 1246 1246 void __init efi_arch_mem_reserve(phys_addr_t addr, u64 size); 1247 1247 1248 + char *efi_systab_show_arch(char *str); 1249 + 1248 1250 #endif /* _LINUX_EFI_H */
+1 -1
include/linux/i2c-mux.h
··· 29 29 30 30 int num_adapters; 31 31 int max_adapters; 32 - struct i2c_adapter *adapter[0]; 32 + struct i2c_adapter *adapter[]; 33 33 }; 34 34 35 35 struct i2c_mux_core *i2c_mux_alloc(struct i2c_adapter *parent,
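The i2c-mux.h change converts a GNU zero-length array (`adapter[0]`) into a C99 flexible array member (`adapter[]`); the struct layout is the same, but the standard form lets compilers and fortify checks reason about the trailing array's bounds. A small sketch of the allocation pattern, with `struct muxc`/`item` as illustrative stand-ins for `struct i2c_mux_core`/`adapter`:

```c
#include <assert.h>
#include <stdlib.h>

/* Flexible array member: must be the last field, contributes no size
 * of its own, and is sized at allocation time. */
struct muxc {
	int num;
	int item[];
};

static struct muxc *muxc_alloc(int n)
{
	/* Allocate header plus n trailing elements in one block. */
	struct muxc *m = malloc(sizeof(*m) + n * sizeof(m->item[0]));
	if (m)
		m->num = n;
	return m;
}
```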
+1 -1
include/linux/i2c.h
··· 2 2 /* 3 3 * i2c.h - definitions for the Linux i2c bus interface 4 4 * Copyright (C) 1995-2000 Simon G. Vogl 5 - * Copyright (C) 2013-2019 Wolfram Sang <wsa@the-dreams.de> 5 + * Copyright (C) 2013-2019 Wolfram Sang <wsa@kernel.org> 6 6 * 7 7 * With some changes from Kyösti Mälkki <kmalkki@cc.hut.fi> and 8 8 * Frodo Looijaard <frodol@dds.nl>
+3
include/linux/kvm_host.h
··· 813 813 void kvm_reload_remote_mmus(struct kvm *kvm); 814 814 815 815 bool kvm_make_vcpus_request_mask(struct kvm *kvm, unsigned int req, 816 + struct kvm_vcpu *except, 816 817 unsigned long *vcpu_bitmap, cpumask_var_t tmp); 817 818 bool kvm_make_all_cpus_request(struct kvm *kvm, unsigned int req); 819 + bool kvm_make_all_cpus_request_except(struct kvm *kvm, unsigned int req, 820 + struct kvm_vcpu *except); 818 821 bool kvm_make_cpus_request_mask(struct kvm *kvm, unsigned int req, 819 822 unsigned long *vcpu_bitmap); 820 823
+16
include/linux/mlx5/driver.h
··· 214 214 MLX5_PORT_DOWN = 2, 215 215 }; 216 216 217 + enum mlx5_cmdif_state { 218 + MLX5_CMDIF_STATE_UNINITIALIZED, 219 + MLX5_CMDIF_STATE_UP, 220 + MLX5_CMDIF_STATE_DOWN, 221 + }; 222 + 217 223 struct mlx5_cmd_first { 218 224 __be32 data[4]; 219 225 }; ··· 265 259 struct mlx5_cmd { 266 260 struct mlx5_nb nb; 267 261 262 + enum mlx5_cmdif_state state; 268 263 void *cmd_alloc_buf; 269 264 dma_addr_t alloc_dma; 270 265 int alloc_size; ··· 292 285 struct semaphore sem; 293 286 struct semaphore pages_sem; 294 287 int mode; 288 + u16 allowed_opcode; 295 289 struct mlx5_cmd_work_ent *ent_arr[MLX5_MAX_COMMANDS]; 296 290 struct dma_pool *pool; 297 291 struct mlx5_cmd_debug dbg; ··· 750 742 struct delayed_work cb_timeout_work; 751 743 void *context; 752 744 int idx; 745 + struct completion handling; 753 746 struct completion done; 754 747 struct mlx5_cmd *cmd; 755 748 struct work_struct work; ··· 882 873 return min_t(u32, last_frag_stride_idx - fbc->strides_offset, fbc->sz_m1); 883 874 } 884 875 876 + enum { 877 + CMD_ALLOWED_OPCODE_ALL, 878 + }; 879 + 885 880 int mlx5_cmd_init(struct mlx5_core_dev *dev); 886 881 void mlx5_cmd_cleanup(struct mlx5_core_dev *dev); 882 + void mlx5_cmd_set_state(struct mlx5_core_dev *dev, 883 + enum mlx5_cmdif_state cmdif_state); 887 884 void mlx5_cmd_use_events(struct mlx5_core_dev *dev); 888 885 void mlx5_cmd_use_polling(struct mlx5_core_dev *dev); 886 + void mlx5_cmd_allowed_opcode(struct mlx5_core_dev *dev, u16 opcode); 889 887 890 888 struct mlx5_async_ctx { 891 889 struct mlx5_core_dev *dev;
+2 -1
include/net/act_api.h
··· 75 75 { 76 76 dtm->install = jiffies_to_clock_t(jiffies - stm->install); 77 77 dtm->lastuse = jiffies_to_clock_t(jiffies - stm->lastuse); 78 - dtm->firstuse = jiffies_to_clock_t(jiffies - stm->firstuse); 78 + dtm->firstuse = stm->firstuse ? 79 + jiffies_to_clock_t(jiffies - stm->firstuse) : 0; 79 80 dtm->expires = jiffies_to_clock_t(stm->expires); 80 81 } 81 82
+1 -1
include/net/af_rxrpc.h
··· 59 59 void rxrpc_kernel_end_call(struct socket *, struct rxrpc_call *); 60 60 void rxrpc_kernel_get_peer(struct socket *, struct rxrpc_call *, 61 61 struct sockaddr_rxrpc *); 62 - u64 rxrpc_kernel_get_rtt(struct socket *, struct rxrpc_call *); 62 + u32 rxrpc_kernel_get_srtt(struct socket *, struct rxrpc_call *); 63 63 int rxrpc_kernel_charge_accept(struct socket *, rxrpc_notify_rx_t, 64 64 rxrpc_user_attach_call_t, unsigned long, gfp_t, 65 65 unsigned int);
-1
include/net/ip_fib.h
··· 257 257 u32 table_id; 258 258 /* filter_set is an optimization that an entry is set */ 259 259 bool filter_set; 260 - bool dump_all_families; 261 260 bool dump_routes; 262 261 bool dump_exceptions; 263 262 unsigned char protocol;
+42 -10
include/trace/events/rxrpc.h
··· 1112 1112 TRACE_EVENT(rxrpc_rtt_rx, 1113 1113 TP_PROTO(struct rxrpc_call *call, enum rxrpc_rtt_rx_trace why, 1114 1114 rxrpc_serial_t send_serial, rxrpc_serial_t resp_serial, 1115 - s64 rtt, u8 nr, s64 avg), 1115 + u32 rtt, u32 rto), 1116 1116 1117 - TP_ARGS(call, why, send_serial, resp_serial, rtt, nr, avg), 1117 + TP_ARGS(call, why, send_serial, resp_serial, rtt, rto), 1118 1118 1119 1119 TP_STRUCT__entry( 1120 1120 __field(unsigned int, call ) 1121 1121 __field(enum rxrpc_rtt_rx_trace, why ) 1122 - __field(u8, nr ) 1123 1122 __field(rxrpc_serial_t, send_serial ) 1124 1123 __field(rxrpc_serial_t, resp_serial ) 1125 - __field(s64, rtt ) 1126 - __field(u64, avg ) 1124 + __field(u32, rtt ) 1125 + __field(u32, rto ) 1127 1126 ), 1128 1127 1129 1128 TP_fast_assign( ··· 1131 1132 __entry->send_serial = send_serial; 1132 1133 __entry->resp_serial = resp_serial; 1133 1134 __entry->rtt = rtt; 1134 - __entry->nr = nr; 1135 - __entry->avg = avg; 1135 + __entry->rto = rto; 1136 1136 ), 1137 1137 1138 - TP_printk("c=%08x %s sr=%08x rr=%08x rtt=%lld nr=%u avg=%lld", 1138 + TP_printk("c=%08x %s sr=%08x rr=%08x rtt=%u rto=%u", 1139 1139 __entry->call, 1140 1140 __print_symbolic(__entry->why, rxrpc_rtt_rx_traces), 1141 1141 __entry->send_serial, 1142 1142 __entry->resp_serial, 1143 1143 __entry->rtt, 1144 - __entry->nr, 1145 - __entry->avg) 1144 + __entry->rto) 1146 1145 ); 1147 1146 1148 1147 TRACE_EVENT(rxrpc_timer, ··· 1539 1542 TP_printk("c=%08x r=%08x", 1540 1543 __entry->debug_id, 1541 1544 __entry->serial) 1545 + ); 1546 + 1547 + TRACE_EVENT(rxrpc_rx_discard_ack, 1548 + TP_PROTO(unsigned int debug_id, rxrpc_serial_t serial, 1549 + rxrpc_seq_t first_soft_ack, rxrpc_seq_t call_ackr_first, 1550 + rxrpc_seq_t prev_pkt, rxrpc_seq_t call_ackr_prev), 1551 + 1552 + TP_ARGS(debug_id, serial, first_soft_ack, call_ackr_first, 1553 + prev_pkt, call_ackr_prev), 1554 + 1555 + TP_STRUCT__entry( 1556 + __field(unsigned int, debug_id ) 1557 + __field(rxrpc_serial_t, serial ) 1558 + 
__field(rxrpc_seq_t, first_soft_ack) 1559 + __field(rxrpc_seq_t, call_ackr_first) 1560 + __field(rxrpc_seq_t, prev_pkt) 1561 + __field(rxrpc_seq_t, call_ackr_prev) 1562 + ), 1563 + 1564 + TP_fast_assign( 1565 + __entry->debug_id = debug_id; 1566 + __entry->serial = serial; 1567 + __entry->first_soft_ack = first_soft_ack; 1568 + __entry->call_ackr_first = call_ackr_first; 1569 + __entry->prev_pkt = prev_pkt; 1570 + __entry->call_ackr_prev = call_ackr_prev; 1571 + ), 1572 + 1573 + TP_printk("c=%08x r=%08x %08x<%08x %08x<%08x", 1574 + __entry->debug_id, 1575 + __entry->serial, 1576 + __entry->first_soft_ack, 1577 + __entry->call_ackr_first, 1578 + __entry->prev_pkt, 1579 + __entry->call_ackr_prev) 1542 1580 ); 1543 1581 1544 1582 #endif /* _TRACE_RXRPC_H */
+95 -13
include/uapi/linux/usb/raw_gadget.h
··· 93 93 __u8 data[0]; 94 94 }; 95 95 96 + /* Maximum number of non-control endpoints in struct usb_raw_eps_info. */ 97 + #define USB_RAW_EPS_NUM_MAX 30 98 + 99 + /* Maximum length of UDC endpoint name in struct usb_raw_ep_info. */ 100 + #define USB_RAW_EP_NAME_MAX 16 101 + 102 + /* Used as addr in struct usb_raw_ep_info if endpoint accepts any address. */ 103 + #define USB_RAW_EP_ADDR_ANY 0xff 104 + 105 + /* 106 + * struct usb_raw_ep_caps - exposes endpoint capabilities from struct usb_ep 107 + * (technically from its member struct usb_ep_caps). 108 + */ 109 + struct usb_raw_ep_caps { 110 + __u32 type_control : 1; 111 + __u32 type_iso : 1; 112 + __u32 type_bulk : 1; 113 + __u32 type_int : 1; 114 + __u32 dir_in : 1; 115 + __u32 dir_out : 1; 116 + }; 117 + 118 + /* 119 + * struct usb_raw_ep_limits - exposes endpoint limits from struct usb_ep. 120 + * @maxpacket_limit: Maximum packet size value supported by this endpoint. 121 + * @max_streams: maximum number of streams supported by this endpoint 122 + * (actual number is 2^n). 123 + * @reserved: Empty, reserved for potential future extensions. 124 + */ 125 + struct usb_raw_ep_limits { 126 + __u16 maxpacket_limit; 127 + __u16 max_streams; 128 + __u32 reserved; 129 + }; 130 + 131 + /* 132 + * struct usb_raw_ep_info - stores information about a gadget endpoint. 133 + * @name: Name of the endpoint as it is defined in the UDC driver. 134 + * @addr: Address of the endpoint that must be specified in the endpoint 135 + * descriptor passed to USB_RAW_IOCTL_EP_ENABLE ioctl. 136 + * @caps: Endpoint capabilities. 137 + * @limits: Endpoint limits. 138 + */ 139 + struct usb_raw_ep_info { 140 + __u8 name[USB_RAW_EP_NAME_MAX]; 141 + __u32 addr; 142 + struct usb_raw_ep_caps caps; 143 + struct usb_raw_ep_limits limits; 144 + }; 145 + 146 + /* 147 + * struct usb_raw_eps_info - argument for USB_RAW_IOCTL_EPS_INFO ioctl. 148 + * eps: Structures that store information about non-control endpoints. 
149 + */ 150 + struct usb_raw_eps_info { 151 + struct usb_raw_ep_info eps[USB_RAW_EPS_NUM_MAX]; 152 + }; 153 + 96 154 /* 97 155 * Initializes a Raw Gadget instance. 98 156 * Accepts a pointer to the usb_raw_init struct as an argument. ··· 173 115 #define USB_RAW_IOCTL_EVENT_FETCH _IOR('U', 2, struct usb_raw_event) 174 116 175 117 /* 176 - * Queues an IN (OUT for READ) urb as a response to the last control request 177 - * received on endpoint 0, provided that was an IN (OUT for READ) request and 178 - * waits until the urb is completed. Copies received data to user for READ. 118 + * Queues an IN (OUT for READ) request as a response to the last setup request 119 + * received on endpoint 0 (provided that was an IN (OUT for READ) request), and 120 + * waits until the request is completed. Copies received data to user for READ. 179 121 * Accepts a pointer to the usb_raw_ep_io struct as an argument. 180 - * Returns length of trasferred data on success or negative error code on 122 + * Returns length of transferred data on success or negative error code on 181 123 * failure. 182 124 */ 183 125 #define USB_RAW_IOCTL_EP0_WRITE _IOW('U', 3, struct usb_raw_ep_io) 184 126 #define USB_RAW_IOCTL_EP0_READ _IOWR('U', 4, struct usb_raw_ep_io) 185 127 186 128 /* 187 - * Finds an endpoint that supports the transfer type specified in the 188 - * descriptor and enables it. 189 - * Accepts a pointer to the usb_endpoint_descriptor struct as an argument. 129 + * Finds an endpoint that satisfies the parameters specified in the provided 130 + * descriptors (address, transfer type, etc.) and enables it. 131 + * Accepts a pointer to the usb_raw_ep_descs struct as an argument. 190 132 * Returns enabled endpoint handle on success or negative error code on failure. 191 133 */ 192 134 #define USB_RAW_IOCTL_EP_ENABLE _IOW('U', 5, struct usb_endpoint_descriptor) 193 135 194 - /* Disables specified endpoint. 136 + /* 137 + * Disables specified endpoint. 
195 138 * Accepts endpoint handle as an argument. 196 139 * Returns 0 on success or negative error code on failure. 197 140 */ 198 141 #define USB_RAW_IOCTL_EP_DISABLE _IOW('U', 6, __u32) 199 142 200 143 /* 201 - * Queues an IN (OUT for READ) urb as a response to the last control request 202 - * received on endpoint usb_raw_ep_io.ep, provided that was an IN (OUT for READ) 203 - * request and waits until the urb is completed. Copies received data to user 204 - * for READ. 144 + * Queues an IN (OUT for READ) request as a response to the last setup request 145 + * received on endpoint usb_raw_ep_io.ep (provided that was an IN (OUT for READ) 146 + * request), and waits until the request is completed. Copies received data to 147 + * user for READ. 205 148 * Accepts a pointer to the usb_raw_ep_io struct as an argument. 206 - * Returns length of trasferred data on success or negative error code on 149 + * Returns length of transferred data on success or negative error code on 207 150 * failure. 208 151 */ 209 152 #define USB_RAW_IOCTL_EP_WRITE _IOW('U', 7, struct usb_raw_ep_io) ··· 222 163 * Returns 0 on success or negative error code on failure. 223 164 */ 224 165 #define USB_RAW_IOCTL_VBUS_DRAW _IOW('U', 10, __u32) 166 + 167 + /* 168 + * Fills in the usb_raw_eps_info structure with information about non-control 169 + * endpoints available for the currently connected UDC. 170 + * Returns the number of available endpoints on success or negative error code 171 + * on failure. 172 + */ 173 + #define USB_RAW_IOCTL_EPS_INFO _IOR('U', 11, struct usb_raw_eps_info) 174 + 175 + /* 176 + * Stalls a pending control request on endpoint 0. 177 + * Returns 0 on success or negative error code on failure. 178 + */ 179 + #define USB_RAW_IOCTL_EP0_STALL _IO('U', 12) 180 + 181 + /* 182 + * Sets or clears halt or wedge status of the endpoint. 183 + * Accepts endpoint handle as an argument. 184 + * Returns 0 on success or negative error code on failure. 
185 + */ 186 + #define USB_RAW_IOCTL_EP_SET_HALT _IOW('U', 13, __u32) 187 + #define USB_RAW_IOCTL_EP_CLEAR_HALT _IOW('U', 14, __u32) 188 + #define USB_RAW_IOCTL_EP_SET_WEDGE _IOW('U', 15, __u32) 225 189 226 190 #endif /* _UAPI__LINUX_USB_RAW_GADGET_H */
+2
init/main.c
··· 1038 1038 1039 1039 /* Do the rest non-__init'ed, we're now alive */ 1040 1040 arch_call_rest_init(); 1041 + 1042 + prevent_tail_call_optimization(); 1041 1043 } 1042 1044 1043 1045 /* Call all constructor functions linked into the kernel. */
+14 -3
kernel/bpf/syscall.c
··· 627 627 628 628 mutex_lock(&map->freeze_mutex); 629 629 630 - if ((vma->vm_flags & VM_WRITE) && map->frozen) { 631 - err = -EPERM; 632 - goto out; 630 + if (vma->vm_flags & VM_WRITE) { 631 + if (map->frozen) { 632 + err = -EPERM; 633 + goto out; 634 + } 635 + /* map is meant to be read-only, so do not allow mapping as 636 + * writable, because it's possible to leak a writable page 637 + * reference and allows user-space to still modify it after 638 + * freezing, while verifier will assume contents do not change 639 + */ 640 + if (map->map_flags & BPF_F_RDONLY_PROG) { 641 + err = -EACCES; 642 + goto out; 643 + } 633 644 } 634 645 635 646 /* set default open/close callbacks */
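The bpf_map_mmap() hunk tightens one decision: a writable mapping is still refused for frozen maps, and is now also refused for `BPF_F_RDONLY_PROG` maps, because a leaked writable page reference would let user space mutate contents the verifier treated as constant. The decision table can be sketched in isolation (function name and plain numeric error codes are illustrative, standing in for `-EPERM` and `-EACCES`):

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of the writable-mmap permission check: read-only mappings are
 * always allowed; writable ones fail if the map is frozen or marked
 * read-only for programs. */
static int map_mmap_check(bool want_write, bool frozen, bool rdonly_prog)
{
	if (want_write) {
		if (frozen)
			return -1;	/* stands in for -EPERM */
		if (rdonly_prog)
			return -13;	/* stands in for -EACCES */
	}
	return 0;
}
```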
+2 -2
kernel/sched/debug.c
··· 948 948 P(se.avg.util_est.enqueued); 949 949 #endif 950 950 #ifdef CONFIG_UCLAMP_TASK 951 - __PS("uclamp.min", p->uclamp[UCLAMP_MIN].value); 952 - __PS("uclamp.max", p->uclamp[UCLAMP_MAX].value); 951 + __PS("uclamp.min", p->uclamp_req[UCLAMP_MIN].value); 952 + __PS("uclamp.max", p->uclamp_req[UCLAMP_MAX].value); 953 953 __PS("effective uclamp.min", uclamp_eff_value(p, UCLAMP_MIN)); 954 954 __PS("effective uclamp.max", uclamp_eff_value(p, UCLAMP_MAX)); 955 955 #endif
+38 -13
kernel/sched/fair.c
··· 4773 4773 struct rq *rq = rq_of(cfs_rq); 4774 4774 struct cfs_bandwidth *cfs_b = tg_cfs_bandwidth(cfs_rq->tg); 4775 4775 struct sched_entity *se; 4776 - int enqueue = 1; 4777 4776 long task_delta, idle_task_delta; 4778 4777 4779 4778 se = cfs_rq->tg->se[cpu_of(rq)]; ··· 4796 4797 idle_task_delta = cfs_rq->idle_h_nr_running; 4797 4798 for_each_sched_entity(se) { 4798 4799 if (se->on_rq) 4799 - enqueue = 0; 4800 - 4800 + break; 4801 4801 cfs_rq = cfs_rq_of(se); 4802 - if (enqueue) { 4803 - enqueue_entity(cfs_rq, se, ENQUEUE_WAKEUP); 4804 - } else { 4805 - update_load_avg(cfs_rq, se, 0); 4806 - se_update_runnable(se); 4807 - } 4802 + enqueue_entity(cfs_rq, se, ENQUEUE_WAKEUP); 4808 4803 4809 4804 cfs_rq->h_nr_running += task_delta; 4810 4805 cfs_rq->idle_h_nr_running += idle_task_delta; 4811 4806 4807 + /* end evaluation on encountering a throttled cfs_rq */ 4812 4808 if (cfs_rq_throttled(cfs_rq)) 4813 - break; 4809 + goto unthrottle_throttle; 4814 4810 } 4815 4811 4816 - if (!se) 4817 - add_nr_running(rq, task_delta); 4812 + for_each_sched_entity(se) { 4813 + cfs_rq = cfs_rq_of(se); 4818 4814 4815 + update_load_avg(cfs_rq, se, UPDATE_TG); 4816 + se_update_runnable(se); 4817 + 4818 + cfs_rq->h_nr_running += task_delta; 4819 + cfs_rq->idle_h_nr_running += idle_task_delta; 4820 + 4821 + 4822 + /* end evaluation on encountering a throttled cfs_rq */ 4823 + if (cfs_rq_throttled(cfs_rq)) 4824 + goto unthrottle_throttle; 4825 + 4826 + /* 4827 + * One parent has been throttled and cfs_rq removed from the 4828 + * list. Add it back to not break the leaf list. 
4829 + */ 4830 + if (throttled_hierarchy(cfs_rq)) 4831 + list_add_leaf_cfs_rq(cfs_rq); 4832 + } 4833 + 4834 + /* At this point se is NULL and we are at root level*/ 4835 + add_nr_running(rq, task_delta); 4836 + 4837 + unthrottle_throttle: 4819 4838 /* 4820 4839 * The cfs_rq_throttled() breaks in the above iteration can result in 4821 4840 * incomplete leaf list maintenance, resulting in triggering the ··· 4842 4825 for_each_sched_entity(se) { 4843 4826 cfs_rq = cfs_rq_of(se); 4844 4827 4845 - list_add_leaf_cfs_rq(cfs_rq); 4828 + if (list_add_leaf_cfs_rq(cfs_rq)) 4829 + break; 4846 4830 } 4847 4831 4848 4832 assert_list_leaf_cfs_rq(rq); ··· 5496 5478 /* end evaluation on encountering a throttled cfs_rq */ 5497 5479 if (cfs_rq_throttled(cfs_rq)) 5498 5480 goto enqueue_throttle; 5481 + 5482 + /* 5483 + * One parent has been throttled and cfs_rq removed from the 5484 + * list. Add it back to not break the leaf list. 5485 + */ 5486 + if (throttled_hierarchy(cfs_rq)) 5487 + list_add_leaf_cfs_rq(cfs_rq); 5499 5488 } 5500 5489 5501 5490 enqueue_throttle:
+18 -1
lib/test_printf.c
··· 214 214 #define PTR_STR "ffff0123456789ab" 215 215 #define PTR_VAL_NO_CRNG "(____ptrval____)" 216 216 #define ZEROS "00000000" /* hex 32 zero bits */ 217 + #define ONES "ffffffff" /* hex 32 one bits */ 217 218 218 219 static int __init 219 220 plain_format(void) ··· 246 245 #define PTR_STR "456789ab" 247 246 #define PTR_VAL_NO_CRNG "(ptrval)" 248 247 #define ZEROS "" 248 + #define ONES "" 249 249 250 250 static int __init 251 251 plain_format(void) ··· 332 330 test(buf, fmt, p); 333 331 } 334 332 333 + /* 334 + * NULL pointers aren't hashed. 335 + */ 335 336 static void __init 336 337 null_pointer(void) 337 338 { 338 - test_hashed("%p", NULL); 339 + test(ZEROS "00000000", "%p", NULL); 339 340 test(ZEROS "00000000", "%px", NULL); 340 341 test("(null)", "%pE", NULL); 342 + } 343 + 344 + /* 345 + * Error pointers aren't hashed. 346 + */ 347 + static void __init 348 + error_pointer(void) 349 + { 350 + test(ONES "fffffff5", "%p", ERR_PTR(-11)); 351 + test(ONES "fffffff5", "%px", ERR_PTR(-11)); 352 + test("(efault)", "%pE", ERR_PTR(-11)); 341 353 } 342 354 343 355 #define PTR_INVALID ((void *)0x000000ab) ··· 665 649 { 666 650 plain(); 667 651 null_pointer(); 652 + error_pointer(); 668 653 invalid_pointer(); 669 654 symbol_ptr(); 670 655 kernel_ptr();
+7
lib/vsprintf.c
··· 794 794 unsigned long hashval; 795 795 int ret; 796 796 797 + /* 798 + * Print the real pointer value for NULL and error pointers, 799 + * as they are not actual addresses. 800 + */ 801 + if (IS_ERR_OR_NULL(ptr)) 802 + return pointer_string(buf, end, ptr, spec); 803 + 797 804 /* When debugging early boot use non-cryptographically secure hash. */ 798 805 if (unlikely(debug_boot_weak_hash)) { 799 806 hashval = hash_long((unsigned long)ptr, 32);
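The vsprintf change above stops hashing NULL and error pointers for `%p`, since those are sentinel values rather than real addresses. A userspace sketch of the underlying test (this re-implements the idea of the kernel's `IS_ERR_OR_NULL()` with made-up helper names; the multiply is a stand-in for the real sip-hash):

```c
#include <stdint.h>
#include <stddef.h>

/* Error "pointers" encode -errno in the last MAX_ERRNO values of the
 * address space, as the kernel's ERR_PTR() does. */
#define MAX_ERRNO 4095

static int is_err_or_null(const void *ptr)
{
	return !ptr || (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}

/* Mirrors the policy in the hunk above: hash ordinary pointers, but
 * pass NULL and error pointers through unchanged so %p shows their
 * real (non-address) value. The multiplier is a placeholder hash. */
static uintptr_t printable_value(const void *ptr)
{
	if (is_err_or_null(ptr))
		return (uintptr_t)ptr;	/* not an address: print as-is */
	return (uintptr_t)(ptr ? (uintptr_t)ptr * 0x9e3779b97f4a7c15ULL : 0);
}
```

With this split, `test_printf` can assert exact output for `%p` on NULL and `ERR_PTR(-11)` (the `ZEROS`/`ONES` prefixes above) without knowing the hash key.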
+8 -8
mm/kasan/Makefile
··· 15 15 16 16 # Function splitter causes unnecessary splits in __asan_load1/__asan_store1 17 17 # see: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533 18 - CFLAGS_common.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector) 19 - CFLAGS_generic.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector) 20 - CFLAGS_generic_report.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector) 21 - CFLAGS_init.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector) 22 - CFLAGS_quarantine.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector) 23 - CFLAGS_report.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector) 24 - CFLAGS_tags.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector) 25 - CFLAGS_tags_report.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector) 18 + CFLAGS_common.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector) -DDISABLE_BRANCH_PROFILING 19 + CFLAGS_generic.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector) -DDISABLE_BRANCH_PROFILING 20 + CFLAGS_generic_report.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector) -DDISABLE_BRANCH_PROFILING 21 + CFLAGS_init.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector) -DDISABLE_BRANCH_PROFILING 22 + CFLAGS_quarantine.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector) -DDISABLE_BRANCH_PROFILING 23 + CFLAGS_report.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector) -DDISABLE_BRANCH_PROFILING 24 + CFLAGS_tags.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector) -DDISABLE_BRANCH_PROFILING 25 + CFLAGS_tags_report.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector) -DDISABLE_BRANCH_PROFILING 26 26 27 27 obj-$(CONFIG_KASAN) := common.o init.o report.o 28 28 obj-$(CONFIG_KASAN_GENERIC) += generic.o generic_report.o quarantine.o
-1
mm/kasan/generic.c
··· 15 15 */ 16 16 17 17 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 18 - #define DISABLE_BRANCH_PROFILING 19 18 20 19 #include <linux/export.h> 21 20 #include <linux/interrupt.h>
-1
mm/kasan/tags.c
··· 12 12 */ 13 13 14 14 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 15 - #define DISABLE_BRANCH_PROFILING 16 15 17 16 #include <linux/export.h> 18 17 #include <linux/interrupt.h>
+6 -5
mm/z3fold.c
··· 318 318 slots = handle_to_slots(handle); 319 319 write_lock(&slots->lock); 320 320 *(unsigned long *)handle = 0; 321 - write_unlock(&slots->lock); 322 - if (zhdr->slots == slots) 321 + if (zhdr->slots == slots) { 322 + write_unlock(&slots->lock); 323 323 return; /* simple case, nothing else to do */ 324 + } 324 325 325 326 /* we are freeing a foreign handle if we are here */ 326 327 zhdr->foreign_handles--; 327 328 is_free = true; 328 - read_lock(&slots->lock); 329 329 if (!test_bit(HANDLES_ORPHANED, &slots->pool)) { 330 - read_unlock(&slots->lock); 330 + write_unlock(&slots->lock); 331 331 return; 332 332 } 333 333 for (i = 0; i <= BUDDY_MASK; i++) { ··· 336 336 break; 337 337 } 338 338 } 339 - read_unlock(&slots->lock); 339 + write_unlock(&slots->lock); 340 340 341 341 if (is_free) { 342 342 struct z3fold_pool *pool = slots_to_pool(slots); ··· 422 422 zhdr->start_middle = 0; 423 423 zhdr->cpu = -1; 424 424 zhdr->foreign_handles = 0; 425 + zhdr->mapped_count = 0; 425 426 zhdr->slots = slots; 426 427 zhdr->pool = pool; 427 428 INIT_LIST_HEAD(&zhdr->buddy);
+4 -2
net/ax25/af_ax25.c
··· 635 635 break; 636 636 637 637 case SO_BINDTODEVICE: 638 - if (optlen > IFNAMSIZ) 639 - optlen = IFNAMSIZ; 638 + if (optlen > IFNAMSIZ - 1) 639 + optlen = IFNAMSIZ - 1; 640 + 641 + memset(devname, 0, sizeof(devname)); 640 642 641 643 if (copy_from_user(devname, optval, optlen)) { 642 644 res = -EFAULT;
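The AX.25 fix clamps the user-supplied length to `IFNAMSIZ - 1` and zeroes the buffer first, so the device name is always NUL-terminated. A standalone sketch of the corrected copy (with `memcpy` standing in for `copy_from_user`, and a hypothetical `bound_copy_len` helper for demonstration):

```c
#include <string.h>

#define IFNAMSIZ 16

/* Corrected SO_BINDTODEVICE copy: leave room for the terminator and
 * clear the buffer up front, so devname is NUL-terminated no matter
 * what length the caller passed. */
static void copy_devname(char devname[IFNAMSIZ], const char *optval,
			 size_t optlen)
{
	if (optlen > IFNAMSIZ - 1)
		optlen = IFNAMSIZ - 1;

	memset(devname, 0, IFNAMSIZ);
	memcpy(devname, optval, optlen);	/* copy_from_user() in-kernel */
}

/* Demo helper: length of the resulting string, never more than 15. */
static size_t bound_copy_len(const char *optval, size_t optlen)
{
	char devname[IFNAMSIZ];

	copy_devname(devname, optval, optlen);
	return strlen(devname);
}
```

The original code allowed `optlen == IFNAMSIZ`, which could fill the buffer without a terminator and leak stack bytes on shorter names.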
+15 -5
net/core/dev.c
··· 5058 5058 return 0; 5059 5059 } 5060 5060 5061 - static int __netif_receive_skb_core(struct sk_buff *skb, bool pfmemalloc, 5061 + static int __netif_receive_skb_core(struct sk_buff **pskb, bool pfmemalloc, 5062 5062 struct packet_type **ppt_prev) 5063 5063 { 5064 5064 struct packet_type *ptype, *pt_prev; 5065 5065 rx_handler_func_t *rx_handler; 5066 + struct sk_buff *skb = *pskb; 5066 5067 struct net_device *orig_dev; 5067 5068 bool deliver_exact = false; 5068 5069 int ret = NET_RX_DROP; ··· 5094 5093 ret2 = do_xdp_generic(rcu_dereference(skb->dev->xdp_prog), skb); 5095 5094 preempt_enable(); 5096 5095 5097 - if (ret2 != XDP_PASS) 5098 - return NET_RX_DROP; 5096 + if (ret2 != XDP_PASS) { 5097 + ret = NET_RX_DROP; 5098 + goto out; 5099 + } 5099 5100 skb_reset_mac_len(skb); 5100 5101 } 5101 5102 ··· 5247 5244 } 5248 5245 5249 5246 out: 5247 + /* The invariant here is that if *ppt_prev is not NULL 5248 + * then skb should also be non-NULL. 5249 + * 5250 + * Apparently *ppt_prev assignment above holds this invariant due to 5251 + * skb dereferencing near it. 5252 + */ 5253 + *pskb = skb; 5250 5254 return ret; 5251 5255 } 5252 5256 ··· 5263 5253 struct packet_type *pt_prev = NULL; 5264 5254 int ret; 5265 5255 5266 - ret = __netif_receive_skb_core(skb, pfmemalloc, &pt_prev); 5256 + ret = __netif_receive_skb_core(&skb, pfmemalloc, &pt_prev); 5267 5257 if (pt_prev) 5268 5258 ret = INDIRECT_CALL_INET(pt_prev->func, ipv6_rcv, ip_rcv, skb, 5269 5259 skb->dev, pt_prev, orig_dev); ··· 5341 5331 struct packet_type *pt_prev = NULL; 5342 5332 5343 5333 skb_list_del_init(skb); 5344 - __netif_receive_skb_core(skb, pfmemalloc, &pt_prev); 5334 + __netif_receive_skb_core(&skb, pfmemalloc, &pt_prev); 5345 5335 if (!pt_prev) 5346 5336 continue; 5347 5337 if (pt_curr != pt_prev || od_curr != orig_dev) {
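`__netif_receive_skb_core()` now takes `struct sk_buff **` because generic XDP may replace the skb internally; without the double pointer, callers that later dereference `pt_prev` would use a stale skb. A minimal sketch of the pass-by-reference pattern, using a hypothetical `struct buf` in place of `struct sk_buff`:

```c
#include <stdlib.h>

/* Hypothetical stand-in for struct sk_buff; the point here is the
 * calling convention, not the contents. */
struct buf { int len; };

/* A processing step may replace the buffer (as generic XDP can
 * replace the skb). Taking "struct buf **" lets the caller observe
 * the replacement. Allocation is assumed to succeed in this sketch. */
static int process_core(struct buf **pbuf)
{
	struct buf *b = *pbuf;

	if (b->len > 100) {		/* pretend we must reallocate */
		struct buf *nb = malloc(sizeof(*nb));

		nb->len = 100;
		free(b);
		b = nb;
	}
	*pbuf = b;			/* caller's pointer stays valid */
	return 0;
}

/* Demo helper: what length does the caller see afterwards? */
static int final_len(int len)
{
	struct buf *b = malloc(sizeof(*b));
	int r;

	b->len = len;
	process_core(&b);
	r = b->len;
	free(b);
	return r;
}
```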
+21 -5
net/core/flow_dissector.c
··· 160 160 return ret; 161 161 } 162 162 163 - int skb_flow_dissector_bpf_prog_detach(const union bpf_attr *attr) 163 + static int flow_dissector_bpf_prog_detach(struct net *net) 164 164 { 165 165 struct bpf_prog *attached; 166 - struct net *net; 167 166 168 - net = current->nsproxy->net_ns; 169 167 mutex_lock(&flow_dissector_mutex); 170 168 attached = rcu_dereference_protected(net->flow_dissector_prog, 171 169 lockdep_is_held(&flow_dissector_mutex)); ··· 176 178 mutex_unlock(&flow_dissector_mutex); 177 179 return 0; 178 180 } 181 + 182 + int skb_flow_dissector_bpf_prog_detach(const union bpf_attr *attr) 183 + { 184 + return flow_dissector_bpf_prog_detach(current->nsproxy->net_ns); 185 + } 186 + 187 + static void __net_exit flow_dissector_pernet_pre_exit(struct net *net) 188 + { 189 + /* We're not racing with attach/detach because there are no 190 + * references to netns left when pre_exit gets called. 191 + */ 192 + if (rcu_access_pointer(net->flow_dissector_prog)) 193 + flow_dissector_bpf_prog_detach(net); 194 + } 195 + 196 + static struct pernet_operations flow_dissector_pernet_ops __net_initdata = { 197 + .pre_exit = flow_dissector_pernet_pre_exit, 198 + }; 179 199 180 200 /** 181 201 * __skb_flow_get_ports - extract the upper layer ports and return them ··· 1852 1836 skb_flow_dissector_init(&flow_keys_basic_dissector, 1853 1837 flow_keys_basic_dissector_keys, 1854 1838 ARRAY_SIZE(flow_keys_basic_dissector_keys)); 1855 - return 0; 1856 - } 1857 1839 1840 + return register_pernet_subsys(&flow_dissector_pernet_ops); 1841 + } 1858 1842 core_initcall(init_default_flow_dissectors);
+15
net/dsa/tag_mtk.c
··· 15 15 #define MTK_HDR_XMIT_TAGGED_TPID_8100 1 16 16 #define MTK_HDR_RECV_SOURCE_PORT_MASK GENMASK(2, 0) 17 17 #define MTK_HDR_XMIT_DP_BIT_MASK GENMASK(5, 0) 18 + #define MTK_HDR_XMIT_SA_DIS BIT(6) 18 19 19 20 static struct sk_buff *mtk_tag_xmit(struct sk_buff *skb, 20 21 struct net_device *dev) ··· 23 22 struct dsa_port *dp = dsa_slave_to_port(dev); 24 23 u8 *mtk_tag; 25 24 bool is_vlan_skb = true; 25 + unsigned char *dest = eth_hdr(skb)->h_dest; 26 + bool is_multicast_skb = is_multicast_ether_addr(dest) && 27 + !is_broadcast_ether_addr(dest); 26 28 27 29 /* Build the special tag after the MAC Source Address. If VLAN header 28 30 * is present, it's required that VLAN header and special tag is ··· 51 47 MTK_HDR_XMIT_UNTAGGED; 52 48 mtk_tag[1] = (1 << dp->index) & MTK_HDR_XMIT_DP_BIT_MASK; 53 49 50 + /* Disable SA learning for multicast frames */ 51 + if (unlikely(is_multicast_skb)) 52 + mtk_tag[1] |= MTK_HDR_XMIT_SA_DIS; 53 + 54 54 /* Tag control information is kept for 802.1Q */ 55 55 if (!is_vlan_skb) { 56 56 mtk_tag[2] = 0; ··· 69 61 { 70 62 int port; 71 63 __be16 *phdr, hdr; 64 + unsigned char *dest = eth_hdr(skb)->h_dest; 65 + bool is_multicast_skb = is_multicast_ether_addr(dest) && 66 + !is_broadcast_ether_addr(dest); 72 67 73 68 if (unlikely(!pskb_may_pull(skb, MTK_HDR_LEN))) 74 69 return NULL; ··· 96 85 skb->dev = dsa_master_find_slave(dev, 0, port); 97 86 if (!skb->dev) 98 87 return NULL; 88 + 89 + /* Only unicast or broadcast frames are offloaded */ 90 + if (likely(!is_multicast_skb)) 91 + skb->offload_fwd_mark = 1; 99 92 100 93 return skb; 101 94 }
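The tag_mtk patch keys its behaviour on "multicast but not broadcast" destinations. A userspace re-implementation of the `etherdevice.h` classification it relies on (helper names here are made up; in-kernel these are `is_multicast_ether_addr()` and `is_broadcast_ether_addr()`):

```c
#include <stdbool.h>
#include <stdint.h>

#define ETH_ALEN 6

/* A multicast MAC address has the I/G bit (bit 0 of the first octet)
 * set; broadcast is the all-ones address, a special case of multicast. */
static bool is_multicast(const uint8_t *a)
{
	return a[0] & 0x01;
}

static bool is_broadcast(const uint8_t *a)
{
	for (int i = 0; i < ETH_ALEN; i++)
		if (a[i] != 0xff)
			return false;
	return true;
}

/* The condition used in mtk_tag_xmit() above: set SA_DIS only for
 * genuine multicast, leaving broadcast frames alone. */
static bool needs_sa_dis(const uint8_t *dest)
{
	return is_multicast(dest) && !is_broadcast(dest);
}
```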
+2 -2
net/ethtool/netlink.c
··· 342 342 ret = ops->reply_size(req_info, reply_data); 343 343 if (ret < 0) 344 344 goto err_cleanup; 345 - reply_len = ret; 345 + reply_len = ret + ethnl_reply_header_size(); 346 346 ret = -ENOMEM; 347 347 rskb = ethnl_reply_init(reply_len, req_info->dev, ops->reply_cmd, 348 348 ops->hdr_attr, info, &reply_payload); ··· 588 588 ret = ops->reply_size(req_info, reply_data); 589 589 if (ret < 0) 590 590 goto err_cleanup; 591 - reply_len = ret; 591 + reply_len = ret + ethnl_reply_header_size(); 592 592 ret = -ENOMEM; 593 593 skb = genlmsg_new(reply_len, GFP_KERNEL); 594 594 if (!skb)
-1
net/ethtool/strset.c
··· 324 324 int len = 0; 325 325 int ret; 326 326 327 - len += ethnl_reply_header_size(); 328 327 for (i = 0; i < ETH_SS_COUNT; i++) { 329 328 const struct strset_info *set_info = &data->sets[i]; 330 329
+1 -2
net/ipv4/fib_frontend.c
··· 918 918 else 919 919 filter->dump_exceptions = false; 920 920 921 - filter->dump_all_families = (rtm->rtm_family == AF_UNSPEC); 922 921 filter->flags = rtm->rtm_flags; 923 922 filter->protocol = rtm->rtm_protocol; 924 923 filter->rt_type = rtm->rtm_type; ··· 989 990 if (filter.table_id) { 990 991 tb = fib_get_table(net, filter.table_id); 991 992 if (!tb) { 992 - if (filter.dump_all_families) 993 + if (rtnl_msg_family(cb->nlh) != PF_INET) 993 994 return skb->len; 994 995 995 996 NL_SET_ERR_MSG(cb->extack, "ipv4: FIB table does not exist");
+24 -19
net/ipv4/inet_connection_sock.c
··· 24 24 #include <net/addrconf.h> 25 25 26 26 #if IS_ENABLED(CONFIG_IPV6) 27 - /* match_wildcard == true: IPV6_ADDR_ANY equals to any IPv6 addresses if IPv6 28 - * only, and any IPv4 addresses if not IPv6 only 29 - * match_wildcard == false: addresses must be exactly the same, i.e. 30 - * IPV6_ADDR_ANY only equals to IPV6_ADDR_ANY, 31 - * and 0.0.0.0 equals to 0.0.0.0 only 27 + /* match_sk*_wildcard == true: IPV6_ADDR_ANY equals to any IPv6 addresses 28 + * if IPv6 only, and any IPv4 addresses 29 + * if not IPv6 only 30 + * match_sk*_wildcard == false: addresses must be exactly the same, i.e. 31 + * IPV6_ADDR_ANY only equals to IPV6_ADDR_ANY, 32 + * and 0.0.0.0 equals to 0.0.0.0 only 32 33 */ 33 34 static bool ipv6_rcv_saddr_equal(const struct in6_addr *sk1_rcv_saddr6, 34 35 const struct in6_addr *sk2_rcv_saddr6, 35 36 __be32 sk1_rcv_saddr, __be32 sk2_rcv_saddr, 36 37 bool sk1_ipv6only, bool sk2_ipv6only, 37 - bool match_wildcard) 38 + bool match_sk1_wildcard, 39 + bool match_sk2_wildcard) 38 40 { 39 41 int addr_type = ipv6_addr_type(sk1_rcv_saddr6); 40 42 int addr_type2 = sk2_rcv_saddr6 ? 
ipv6_addr_type(sk2_rcv_saddr6) : IPV6_ADDR_MAPPED; ··· 46 44 if (!sk2_ipv6only) { 47 45 if (sk1_rcv_saddr == sk2_rcv_saddr) 48 46 return true; 49 - if (!sk1_rcv_saddr || !sk2_rcv_saddr) 50 - return match_wildcard; 47 + return (match_sk1_wildcard && !sk1_rcv_saddr) || 48 + (match_sk2_wildcard && !sk2_rcv_saddr); 51 49 } 52 50 return false; 53 51 } ··· 55 53 if (addr_type == IPV6_ADDR_ANY && addr_type2 == IPV6_ADDR_ANY) 56 54 return true; 57 55 58 - if (addr_type2 == IPV6_ADDR_ANY && match_wildcard && 56 + if (addr_type2 == IPV6_ADDR_ANY && match_sk2_wildcard && 59 57 !(sk2_ipv6only && addr_type == IPV6_ADDR_MAPPED)) 60 58 return true; 61 59 62 - if (addr_type == IPV6_ADDR_ANY && match_wildcard && 60 + if (addr_type == IPV6_ADDR_ANY && match_sk1_wildcard && 63 61 !(sk1_ipv6only && addr_type2 == IPV6_ADDR_MAPPED)) 64 62 return true; 65 63 ··· 71 69 } 72 70 #endif 73 71 74 - /* match_wildcard == true: 0.0.0.0 equals to any IPv4 addresses 75 - * match_wildcard == false: addresses must be exactly the same, i.e. 76 - * 0.0.0.0 only equals to 0.0.0.0 72 + /* match_sk*_wildcard == true: 0.0.0.0 equals to any IPv4 addresses 73 + * match_sk*_wildcard == false: addresses must be exactly the same, i.e. 
74 + * 0.0.0.0 only equals to 0.0.0.0 77 75 */ 78 76 static bool ipv4_rcv_saddr_equal(__be32 sk1_rcv_saddr, __be32 sk2_rcv_saddr, 79 - bool sk2_ipv6only, bool match_wildcard) 77 + bool sk2_ipv6only, bool match_sk1_wildcard, 78 + bool match_sk2_wildcard) 80 79 { 81 80 if (!sk2_ipv6only) { 82 81 if (sk1_rcv_saddr == sk2_rcv_saddr) 83 82 return true; 84 - if (!sk1_rcv_saddr || !sk2_rcv_saddr) 85 - return match_wildcard; 83 + return (match_sk1_wildcard && !sk1_rcv_saddr) || 84 + (match_sk2_wildcard && !sk2_rcv_saddr); 86 85 } 87 86 return false; 88 87 } ··· 99 96 sk2->sk_rcv_saddr, 100 97 ipv6_only_sock(sk), 101 98 ipv6_only_sock(sk2), 99 + match_wildcard, 102 100 match_wildcard); 103 101 #endif 104 102 return ipv4_rcv_saddr_equal(sk->sk_rcv_saddr, sk2->sk_rcv_saddr, 105 - ipv6_only_sock(sk2), match_wildcard); 103 + ipv6_only_sock(sk2), match_wildcard, 104 + match_wildcard); 106 105 } 107 106 EXPORT_SYMBOL(inet_rcv_saddr_equal); 108 107 ··· 290 285 tb->fast_rcv_saddr, 291 286 sk->sk_rcv_saddr, 292 287 tb->fast_ipv6_only, 293 - ipv6_only_sock(sk), true); 288 + ipv6_only_sock(sk), true, false); 294 289 #endif 295 290 return ipv4_rcv_saddr_equal(tb->fast_rcv_saddr, sk->sk_rcv_saddr, 296 - ipv6_only_sock(sk), true); 291 + ipv6_only_sock(sk), true, false); 297 292 } 298 293 299 294 /* Obtain a reference to a local port for the given sock,
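The inet_connection_sock change splits the single `match_wildcard` flag into per-socket flags, so a bind-conflict check can treat a wildcard on one side as matching while the other side's wildcard does not (the `true, false` callers above). A sketch of the IPv4 comparison after the split, with the `ipv6only` parameter dropped for brevity:

```c
#include <stdbool.h>
#include <stdint.h>

/* Userspace sketch of ipv4_rcv_saddr_equal() after the split: each
 * socket carries its own wildcard flag, so 0.0.0.0 on sk1 can match
 * while 0.0.0.0 on sk2 does not. */
static bool rcv_saddr_equal(uint32_t sk1_rcv_saddr, uint32_t sk2_rcv_saddr,
			    bool match_sk1_wildcard, bool match_sk2_wildcard)
{
	if (sk1_rcv_saddr == sk2_rcv_saddr)
		return true;
	return (match_sk1_wildcard && !sk1_rcv_saddr) ||
	       (match_sk2_wildcard && !sk2_rcv_saddr);
}
```

The `inet_bind_conflict` path passes `true, false`: the bucket's saved wildcard address conflicts with any new socket, but a wildcard on the *new* socket must not be treated as matching a specific saved address.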
+1 -1
net/ipv4/ipip.c
··· 686 686 687 687 rtnl_link_failed: 688 688 #if IS_ENABLED(CONFIG_MPLS) 689 - xfrm4_tunnel_deregister(&mplsip_handler, AF_INET); 689 + xfrm4_tunnel_deregister(&mplsip_handler, AF_MPLS); 690 690 xfrm_tunnel_mplsip_failed: 691 691 692 692 #endif
+1 -1
net/ipv4/ipmr.c
··· 2577 2577 2578 2578 mrt = ipmr_get_table(sock_net(skb->sk), filter.table_id); 2579 2579 if (!mrt) { 2580 - if (filter.dump_all_families) 2580 + if (rtnl_msg_family(cb->nlh) != RTNL_FAMILY_IPMR) 2581 2581 return skb->len; 2582 2582 2583 2583 NL_SET_ERR_MSG(cb->extack, "ipv4: MR table does not exist");
+2 -1
net/ipv4/nexthop.c
··· 292 292 return 0; 293 293 294 294 nla_put_failure: 295 + nlmsg_cancel(skb, nlh); 295 296 return -EMSGSIZE; 296 297 } 297 298 ··· 483 482 return -EINVAL; 484 483 } 485 484 } 486 - for (i = NHA_GROUP + 1; i < __NHA_MAX; ++i) { 485 + for (i = NHA_GROUP_TYPE + 1; i < __NHA_MAX; ++i) { 487 486 if (!tb[i]) 488 487 continue; 489 488 if (tb[NHA_FDB])
+6 -8
net/ipv4/route.c
··· 491 491 atomic_t *p_id = ip_idents + hash % IP_IDENTS_SZ; 492 492 u32 old = READ_ONCE(*p_tstamp); 493 493 u32 now = (u32)jiffies; 494 - u32 new, delta = 0; 494 + u32 delta = 0; 495 495 496 496 if (old != now && cmpxchg(p_tstamp, old, now) == old) 497 497 delta = prandom_u32_max(now - old); 498 498 499 - /* Do not use atomic_add_return() as it makes UBSAN unhappy */ 500 - do { 501 - old = (u32)atomic_read(p_id); 502 - new = old + delta + segs; 503 - } while (atomic_cmpxchg(p_id, old, new) != old); 504 - 505 - return new - segs; 499 + /* If UBSAN reports an error there, please make sure your compiler 500 + * supports -fno-strict-overflow before reporting it that was a bug 501 + * in UBSAN, and it has been fixed in GCC-8. 502 + */ 503 + return atomic_add_return(segs + delta, p_id) - segs; 506 504 } 507 505 EXPORT_SYMBOL(ip_idents_reserve); 508 506
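The route.c change collapses the cmpxchg loop into a single `atomic_add_return()`: unsigned 32-bit wraparound is well-defined for atomics, and the old loop existed only to placate UBSAN builds without `-fno-strict-overflow`. A C11 sketch of the simplified reservation (note `atomic_fetch_add` returns the *old* value, so `old + delta` equals the kernel's `atomic_add_return(segs + delta, p_id) - segs`):

```c
#include <stdatomic.h>
#include <stdint.h>

static atomic_uint id_counter;	/* one slot of the ip_idents array */

/* Reserve "segs" consecutive IDs with one atomic add and return the
 * first; "delta" randomizes the starting point after idle periods. */
static uint32_t idents_reserve(uint32_t segs, uint32_t delta)
{
	return (uint32_t)atomic_fetch_add(&id_counter, segs + delta) + delta;
}
```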
+1 -1
net/ipv6/ip6_fib.c
··· 664 664 if (arg.filter.table_id) { 665 665 tb = fib6_get_table(net, arg.filter.table_id); 666 666 if (!tb) { 667 - if (arg.filter.dump_all_families) 667 + if (rtnl_msg_family(cb->nlh) != PF_INET6) 668 668 goto out; 669 669 670 670 NL_SET_ERR_MSG_MOD(cb->extack, "FIB table does not exist");
+3 -2
net/ipv6/ip6mr.c
··· 98 98 #ifdef CONFIG_IPV6_MROUTE_MULTIPLE_TABLES 99 99 #define ip6mr_for_each_table(mrt, net) \ 100 100 list_for_each_entry_rcu(mrt, &net->ipv6.mr6_tables, list, \ 101 - lockdep_rtnl_is_held()) 101 + lockdep_rtnl_is_held() || \ 102 + list_empty(&net->ipv6.mr6_tables)) 102 103 103 104 static struct mr_table *ip6mr_mr_table_iter(struct net *net, 104 105 struct mr_table *mrt) ··· 2503 2502 2504 2503 mrt = ip6mr_get_table(sock_net(skb->sk), filter.table_id); 2505 2504 if (!mrt) { 2506 - if (filter.dump_all_families) 2505 + if (rtnl_msg_family(cb->nlh) != RTNL_FAMILY_IP6MR) 2507 2506 return skb->len; 2508 2507 2509 2508 NL_SET_ERR_MSG_MOD(cb->extack, "MR table does not exist");
+9 -15
net/mptcp/crypto.c
··· 47 47 void mptcp_crypto_hmac_sha(u64 key1, u64 key2, u8 *msg, int len, void *hmac) 48 48 { 49 49 u8 input[SHA256_BLOCK_SIZE + SHA256_DIGEST_SIZE]; 50 - __be32 mptcp_hashed_key[SHA256_DIGEST_WORDS]; 51 - __be32 *hash_out = (__force __be32 *)hmac; 52 50 struct sha256_state state; 53 51 u8 key1be[8]; 54 52 u8 key2be[8]; ··· 84 86 85 87 sha256_init(&state); 86 88 sha256_update(&state, input, SHA256_BLOCK_SIZE + SHA256_DIGEST_SIZE); 87 - sha256_final(&state, (u8 *)mptcp_hashed_key); 88 - 89 - /* takes only first 160 bits */ 90 - for (i = 0; i < 5; i++) 91 - hash_out[i] = mptcp_hashed_key[i]; 89 + sha256_final(&state, (u8 *)hmac); 92 90 } 93 91 94 92 #ifdef CONFIG_MPTCP_HMAC_TEST ··· 95 101 }; 96 102 97 103 /* we can't reuse RFC 4231 test vectors, as we have constraint on the 98 - * input and key size, and we truncate the output. 104 + * input and key size. 99 105 */ 100 106 static struct test_cast tests[] = { 101 107 { 102 108 .key = "0b0b0b0b0b0b0b0b", 103 109 .msg = "48692054", 104 - .result = "8385e24fb4235ac37556b6b886db106284a1da67", 110 + .result = "8385e24fb4235ac37556b6b886db106284a1da671699f46db1f235ec622dcafa", 105 111 }, 106 112 { 107 113 .key = "aaaaaaaaaaaaaaaa", 108 114 .msg = "dddddddd", 109 - .result = "2c5e219164ff1dca1c4a92318d847bb6b9d44492", 115 + .result = "2c5e219164ff1dca1c4a92318d847bb6b9d44492984e1eb71aff9022f71046e9", 110 116 }, 111 117 { 112 118 .key = "0102030405060708", 113 119 .msg = "cdcdcdcd", 114 - .result = "e73b9ba9969969cefb04aa0d6df18ec2fcc075b6", 120 + .result = "e73b9ba9969969cefb04aa0d6df18ec2fcc075b6f23b4d8c4da736a5dbbc6e7d", 115 121 }, 116 122 }; 117 123 118 124 static int __init test_mptcp_crypto(void) 119 125 { 120 - char hmac[20], hmac_hex[41]; 126 + char hmac[32], hmac_hex[65]; 121 127 u32 nonce1, nonce2; 122 128 u64 key1, key2; 123 129 u8 msg[8]; ··· 134 140 put_unaligned_be32(nonce2, &msg[4]); 135 141 136 142 mptcp_crypto_hmac_sha(key1, key2, msg, 8, hmac); 137 - for (j = 0; j < 20; ++j) 143 + for (j = 0; j < 32; ++j) 
138 144 sprintf(&hmac_hex[j << 1], "%02x", hmac[j] & 0xff); 139 - hmac_hex[40] = 0; 145 + hmac_hex[64] = 0; 140 146 141 - if (memcmp(hmac_hex, tests[i].result, 40)) 147 + if (memcmp(hmac_hex, tests[i].result, 64)) 142 148 pr_err("test %d failed, got %s expected %s", i, 143 149 hmac_hex, tests[i].result); 144 150 else
+5 -4
net/mptcp/options.c
··· 7 7 #define pr_fmt(fmt) "MPTCP: " fmt 8 8 9 9 #include <linux/kernel.h> 10 + #include <crypto/sha.h> 10 11 #include <net/tcp.h> 11 12 #include <net/mptcp.h> 12 13 #include "protocol.h" ··· 541 540 static u64 add_addr_generate_hmac(u64 key1, u64 key2, u8 addr_id, 542 541 struct in_addr *addr) 543 542 { 544 - u8 hmac[MPTCP_ADDR_HMAC_LEN]; 543 + u8 hmac[SHA256_DIGEST_SIZE]; 545 544 u8 msg[7]; 546 545 547 546 msg[0] = addr_id; ··· 551 550 552 551 mptcp_crypto_hmac_sha(key1, key2, msg, 7, hmac); 553 552 554 - return get_unaligned_be64(hmac); 553 + return get_unaligned_be64(&hmac[SHA256_DIGEST_SIZE - sizeof(u64)]); 555 554 } 556 555 557 556 #if IS_ENABLED(CONFIG_MPTCP_IPV6) 558 557 static u64 add_addr6_generate_hmac(u64 key1, u64 key2, u8 addr_id, 559 558 struct in6_addr *addr) 560 559 { 561 - u8 hmac[MPTCP_ADDR_HMAC_LEN]; 560 + u8 hmac[SHA256_DIGEST_SIZE]; 562 561 u8 msg[19]; 563 562 564 563 msg[0] = addr_id; ··· 568 567 569 568 mptcp_crypto_hmac_sha(key1, key2, msg, 19, hmac); 570 569 571 - return get_unaligned_be64(hmac); 570 + return get_unaligned_be64(&hmac[SHA256_DIGEST_SIZE - sizeof(u64)]); 572 571 } 573 572 #endif 574 573
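With the truncation removed from `mptcp_crypto_hmac_sha()`, callers get the full 32-byte SHA-256 digest, and the ADD_ADDR code now takes its 64-bit HMAC from the *last* eight bytes rather than the first. A portable sketch of that extraction (the byte-assembly loop stands in for the kernel's `get_unaligned_be64()`):

```c
#include <stdint.h>

#define SHA256_DIGEST_SIZE 32

/* Assemble a big-endian u64 byte by byte; valid at any alignment. */
static uint64_t get_be64(const uint8_t *p)
{
	uint64_t v = 0;

	for (int i = 0; i < 8; i++)
		v = (v << 8) | p[i];
	return v;
}

/* Matches the pattern in the hunks above: the 64-bit ADD_ADDR HMAC is
 * the tail of the full digest, not the head. */
static uint64_t truncate_hmac(const uint8_t hmac[SHA256_DIGEST_SIZE])
{
	return get_be64(&hmac[SHA256_DIGEST_SIZE - sizeof(uint64_t)]);
}

/* Demo helper: digest whose byte i equals i, so the tail is
 * 0x18 0x19 ... 0x1f. */
static uint64_t demo_truncate(void)
{
	uint8_t digest[SHA256_DIGEST_SIZE];

	for (int i = 0; i < SHA256_DIGEST_SIZE; i++)
		digest[i] = (uint8_t)i;
	return truncate_hmac(digest);
}
```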
-1
net/mptcp/protocol.h
··· 81 81 82 82 /* MPTCP ADD_ADDR flags */ 83 83 #define MPTCP_ADDR_ECHO BIT(0) 84 - #define MPTCP_ADDR_HMAC_LEN 20 85 84 #define MPTCP_ADDR_IPVERSION_4 4 86 85 #define MPTCP_ADDR_IPVERSION_6 6 87 86
+10 -5
net/mptcp/subflow.c
··· 10 10 #include <linux/module.h> 11 11 #include <linux/netdevice.h> 12 12 #include <crypto/algapi.h> 13 + #include <crypto/sha.h> 13 14 #include <net/sock.h> 14 15 #include <net/inet_common.h> 15 16 #include <net/inet_hashtables.h> ··· 90 89 const struct sk_buff *skb) 91 90 { 92 91 struct mptcp_subflow_request_sock *subflow_req = mptcp_subflow_rsk(req); 93 - u8 hmac[MPTCPOPT_HMAC_LEN]; 92 + u8 hmac[SHA256_DIGEST_SIZE]; 94 93 struct mptcp_sock *msk; 95 94 int local_id; 96 95 ··· 202 201 /* validate received truncated hmac and create hmac for third ACK */ 203 202 static bool subflow_thmac_valid(struct mptcp_subflow_context *subflow) 204 203 { 205 - u8 hmac[MPTCPOPT_HMAC_LEN]; 204 + u8 hmac[SHA256_DIGEST_SIZE]; 206 205 u64 thmac; 207 206 208 207 subflow_generate_hmac(subflow->remote_key, subflow->local_key, ··· 268 267 subflow->ssn_offset = TCP_SKB_CB(skb)->seq; 269 268 } 270 269 } else if (subflow->mp_join) { 270 + u8 hmac[SHA256_DIGEST_SIZE]; 271 + 271 272 pr_debug("subflow=%p, thmac=%llu, remote_nonce=%u", 272 273 subflow, subflow->thmac, 273 274 subflow->remote_nonce); ··· 282 279 subflow_generate_hmac(subflow->local_key, subflow->remote_key, 283 280 subflow->local_nonce, 284 281 subflow->remote_nonce, 285 - subflow->hmac); 282 + hmac); 283 + 284 + memcpy(subflow->hmac, hmac, MPTCPOPT_HMAC_LEN); 286 285 287 286 if (skb) 288 287 subflow->ssn_offset = TCP_SKB_CB(skb)->seq; ··· 352 347 const struct mptcp_options_received *mp_opt) 353 348 { 354 349 const struct mptcp_subflow_request_sock *subflow_req; 355 - u8 hmac[MPTCPOPT_HMAC_LEN]; 350 + u8 hmac[SHA256_DIGEST_SIZE]; 356 351 struct mptcp_sock *msk; 357 352 bool ret; 358 353 ··· 366 361 subflow_req->local_nonce, hmac); 367 362 368 363 ret = true; 369 - if (crypto_memneq(hmac, mp_opt->hmac, sizeof(hmac))) 364 + if (crypto_memneq(hmac, mp_opt->hmac, MPTCPOPT_HMAC_LEN)) 370 365 ret = false; 371 366 372 367 sock_put((struct sock *)msk);
+1 -1
net/qrtr/qrtr.c
··· 854 854 } 855 855 mutex_unlock(&qrtr_node_lock); 856 856 857 - qrtr_local_enqueue(node, skb, type, from, to); 857 + qrtr_local_enqueue(NULL, skb, type, from, to); 858 858 859 859 return 0; 860 860 }
+1
net/rxrpc/Makefile
··· 25 25 peer_event.o \ 26 26 peer_object.o \ 27 27 recvmsg.o \ 28 + rtt.o \ 28 29 security.o \ 29 30 sendmsg.o \ 30 31 skbuff.o \
+17 -8
net/rxrpc/ar-internal.h
··· 7 7 8 8 #include <linux/atomic.h> 9 9 #include <linux/seqlock.h> 10 + #include <linux/win_minmax.h> 10 11 #include <net/net_namespace.h> 11 12 #include <net/netns/generic.h> 12 13 #include <net/sock.h> ··· 312 311 #define RXRPC_RTT_CACHE_SIZE 32 313 312 spinlock_t rtt_input_lock; /* RTT lock for input routine */ 314 313 ktime_t rtt_last_req; /* Time of last RTT request */ 315 - u64 rtt; /* Current RTT estimate (in nS) */ 316 - u64 rtt_sum; /* Sum of cache contents */ 317 - u64 rtt_cache[RXRPC_RTT_CACHE_SIZE]; /* Determined RTT cache */ 318 - u8 rtt_cursor; /* next entry at which to insert */ 319 - u8 rtt_usage; /* amount of cache actually used */ 314 + unsigned int rtt_count; /* Number of samples we've got */ 315 + 316 + u32 srtt_us; /* smoothed round trip time << 3 in usecs */ 317 + u32 mdev_us; /* medium deviation */ 318 + u32 mdev_max_us; /* maximal mdev for the last rtt period */ 319 + u32 rttvar_us; /* smoothed mdev_max */ 320 + u32 rto_j; /* Retransmission timeout in jiffies */ 321 + u8 backoff; /* Backoff timeout */ 320 322 321 323 u8 cong_cwnd; /* Congestion window size */ 322 324 }; ··· 1045 1041 extern unsigned int rxrpc_rx_window_size; 1046 1042 extern unsigned int rxrpc_rx_mtu; 1047 1043 extern unsigned int rxrpc_rx_jumbo_max; 1048 - extern unsigned long rxrpc_resend_timeout; 1049 1044 1050 1045 extern const s8 rxrpc_ack_priority[]; 1051 1046 ··· 1072 1069 * peer_event.c 1073 1070 */ 1074 1071 void rxrpc_error_report(struct sock *); 1075 - void rxrpc_peer_add_rtt(struct rxrpc_call *, enum rxrpc_rtt_rx_trace, 1076 - rxrpc_serial_t, rxrpc_serial_t, ktime_t, ktime_t); 1077 1072 void rxrpc_peer_keepalive_worker(struct work_struct *); 1078 1073 1079 1074 /* ··· 1102 1101 */ 1103 1102 void rxrpc_notify_socket(struct rxrpc_call *); 1104 1103 int rxrpc_recvmsg(struct socket *, struct msghdr *, size_t, int); 1104 + 1105 + /* 1106 + * rtt.c 1107 + */ 1108 + void rxrpc_peer_add_rtt(struct rxrpc_call *, enum rxrpc_rtt_rx_trace, 1109 + rxrpc_serial_t, 
rxrpc_serial_t, ktime_t, ktime_t); 1110 + unsigned long rxrpc_get_rto_backoff(struct rxrpc_peer *, bool); 1111 + void rxrpc_peer_init_rtt(struct rxrpc_peer *); 1105 1112 1106 1113 /* 1107 1114 * rxkad.c
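The new peer fields replace rxrpc's 32-sample RTT average with TCP-style smoothing: `srtt_us` stores the estimate left-shifted by 3 so the EWMA can be done in integer math. A sketch of the core update under that convention (this follows the scheme of the kernel's `tcp_rtt_estimator()`; the struct and names here are illustrative, not the rxrpc code, and the mdev/RTO parts are omitted):

```c
#include <stdint.h>

struct peer_rtt {
	uint32_t srtt_us;		/* smoothed RTT << 3, in usecs */
	unsigned int rtt_count;		/* samples seen so far */
};

/* Van Jacobson smoothed-RTT update: with srtt stored << 3,
 * "srtt += sample - (srtt >> 3)" computes
 * srtt = 7/8 * srtt + 1/8 * sample in integer arithmetic. */
static void rtt_sample(struct peer_rtt *p, uint32_t sample_us)
{
	if (p->rtt_count == 0)
		p->srtt_us = sample_us << 3;	/* first sample seeds it */
	else
		p->srtt_us += sample_us - (p->srtt_us >> 3);
	p->rtt_count++;
}

/* Readers shift the stored value back down, as "srtt_us >> 3" does in
 * the hunks above. */
static uint32_t srtt(const struct peer_rtt *p)
{
	return p->srtt_us >> 3;
}

/* Demo helper: seed with 1000us then observe 2000us. */
static uint32_t demo_srtt(void)
{
	struct peer_rtt p = { 0, 0 };

	rtt_sample(&p, 1000);
	rtt_sample(&p, 2000);
	return srtt(&p);
}
```

This is why the old `rtt_cache`/`rtt_sum` ring buffer and the fixed `rxrpc_resend_timeout` can go away: the smoothed estimate plus `rto_j`/`backoff` give an adaptive retransmission timeout instead.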
+1 -1
net/rxrpc/call_accept.c
··· 248 248 struct rxrpc_skb_priv *sp = rxrpc_skb(skb); 249 249 ktime_t now = skb->tstamp; 250 250 251 - if (call->peer->rtt_usage < 3 || 251 + if (call->peer->rtt_count < 3 || 252 252 ktime_before(ktime_add_ms(call->peer->rtt_last_req, 1000), now)) 253 253 rxrpc_propose_ACK(call, RXRPC_ACK_PING, sp->hdr.serial, 254 254 true, true,
+8 -14
net/rxrpc/call_event.c
··· 111 111 } else { 112 112 unsigned long now = jiffies, ack_at; 113 113 114 - if (call->peer->rtt_usage > 0) 115 - ack_at = nsecs_to_jiffies(call->peer->rtt); 114 + if (call->peer->srtt_us != 0) 115 + ack_at = usecs_to_jiffies(call->peer->srtt_us >> 3); 116 116 else 117 117 ack_at = expiry; 118 118 ··· 157 157 static void rxrpc_resend(struct rxrpc_call *call, unsigned long now_j) 158 158 { 159 159 struct sk_buff *skb; 160 - unsigned long resend_at; 160 + unsigned long resend_at, rto_j; 161 161 rxrpc_seq_t cursor, seq, top; 162 - ktime_t now, max_age, oldest, ack_ts, timeout, min_timeo; 162 + ktime_t now, max_age, oldest, ack_ts; 163 163 int ix; 164 164 u8 annotation, anno_type, retrans = 0, unacked = 0; 165 165 166 166 _enter("{%d,%d}", call->tx_hard_ack, call->tx_top); 167 167 168 - if (call->peer->rtt_usage > 1) 169 - timeout = ns_to_ktime(call->peer->rtt * 3 / 2); 170 - else 171 - timeout = ms_to_ktime(rxrpc_resend_timeout); 172 - min_timeo = ns_to_ktime((1000000000 / HZ) * 4); 173 - if (ktime_before(timeout, min_timeo)) 174 - timeout = min_timeo; 168 + rto_j = call->peer->rto_j; 175 169 176 170 now = ktime_get_real(); 177 - max_age = ktime_sub(now, timeout); 171 + max_age = ktime_sub(now, jiffies_to_usecs(rto_j)); 178 172 179 173 spin_lock_bh(&call->lock); 180 174 ··· 213 219 } 214 220 215 221 resend_at = nsecs_to_jiffies(ktime_to_ns(ktime_sub(now, oldest))); 216 - resend_at += jiffies + rxrpc_resend_timeout; 222 + resend_at += jiffies + rto_j; 217 223 WRITE_ONCE(call->resend_at, resend_at); 218 224 219 225 if (unacked) ··· 228 234 rxrpc_timer_set_for_resend); 229 235 spin_unlock_bh(&call->lock); 230 236 ack_ts = ktime_sub(now, call->acks_latest_ts); 231 - if (ktime_to_ns(ack_ts) < call->peer->rtt) 237 + if (ktime_to_us(ack_ts) < (call->peer->srtt_us >> 3)) 232 238 goto out; 233 239 rxrpc_propose_ACK(call, RXRPC_ACK_PING, 0, true, false, 234 240 rxrpc_propose_ack_ping_for_lost_ack);
+37 -7
net/rxrpc/input.c
··· 91 91 /* We analyse the number of packets that get ACK'd per RTT 92 92 * period and increase the window if we managed to fill it. 93 93 */ 94 - if (call->peer->rtt_usage == 0) 94 + if (call->peer->rtt_count == 0) 95 95 goto out; 96 96 if (ktime_before(skb->tstamp, 97 - ktime_add_ns(call->cong_tstamp, 98 - call->peer->rtt))) 97 + ktime_add_us(call->cong_tstamp, 98 + call->peer->srtt_us >> 3))) 99 99 goto out_no_clear_ca; 100 100 change = rxrpc_cong_rtt_window_end; 101 101 call->cong_tstamp = skb->tstamp; ··· 803 803 } 804 804 805 805 /* 806 + * Return true if the ACK is valid - ie. it doesn't appear to have regressed 807 + * with respect to the ack state conveyed by preceding ACKs. 808 + */ 809 + static bool rxrpc_is_ack_valid(struct rxrpc_call *call, 810 + rxrpc_seq_t first_pkt, rxrpc_seq_t prev_pkt) 811 + { 812 + rxrpc_seq_t base = READ_ONCE(call->ackr_first_seq); 813 + 814 + if (after(first_pkt, base)) 815 + return true; /* The window advanced */ 816 + 817 + if (before(first_pkt, base)) 818 + return false; /* firstPacket regressed */ 819 + 820 + if (after_eq(prev_pkt, call->ackr_prev_seq)) 821 + return true; /* previousPacket hasn't regressed. */ 822 + 823 + /* Some rx implementations put a serial number in previousPacket. */ 824 + if (after_eq(prev_pkt, base + call->tx_winsize)) 825 + return false; 826 + return true; 827 + } 828 + 829 + /* 806 830 * Process an ACK packet. 807 831 * 808 832 * ack.firstPacket is the sequence number of the first soft-ACK'd/NAK'd packet ··· 889 865 } 890 866 891 867 /* Discard any out-of-order or duplicate ACKs (outside lock). 
*/ 892 - if (before(first_soft_ack, call->ackr_first_seq) || 893 - before(prev_pkt, call->ackr_prev_seq)) 868 + if (!rxrpc_is_ack_valid(call, first_soft_ack, prev_pkt)) { 869 + trace_rxrpc_rx_discard_ack(call->debug_id, sp->hdr.serial, 870 + first_soft_ack, call->ackr_first_seq, 871 + prev_pkt, call->ackr_prev_seq); 894 872 return; 873 + } 895 874 896 875 buf.info.rxMTU = 0; 897 876 ioffset = offset + nr_acks + 3; ··· 905 878 spin_lock(&call->input_lock); 906 879 907 880 /* Discard any out-of-order or duplicate ACKs (inside lock). */ 908 - if (before(first_soft_ack, call->ackr_first_seq) || 909 - before(prev_pkt, call->ackr_prev_seq)) 881 + if (!rxrpc_is_ack_valid(call, first_soft_ack, prev_pkt)) { 882 + trace_rxrpc_rx_discard_ack(call->debug_id, sp->hdr.serial, 883 + first_soft_ack, call->ackr_first_seq, 884 + prev_pkt, call->ackr_prev_seq); 910 885 goto out; 886 + } 911 887 call->acks_latest_ts = skb->tstamp; 912 888 913 889 call->ackr_first_seq = first_soft_ack;
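`rxrpc_is_ack_valid()` relies on wraparound-safe sequence comparison: `before()`/`after()` take the signed difference of two u32 sequence numbers, which stays correct across the 2^32 wrap. A standalone sketch of that arithmetic and the shape of the window check (the `previousPacket` special case is left out):

```c
#include <stdbool.h>
#include <stdint.h>

/* Serial-number arithmetic, as the kernel's before()/after() macros
 * do it: the signed difference is positive when "a" is later than
 * "b", even across the 32-bit wrap. */
static bool seq_before(uint32_t a, uint32_t b)
{
	return (int32_t)(a - b) < 0;
}

static bool seq_after(uint32_t a, uint32_t b)
{
	return (int32_t)(a - b) > 0;
}

/* Shape of the firstPacket checks in rxrpc_is_ack_valid() above:
 * accept when the window advanced or held, reject a regression. */
static bool ack_window_ok(uint32_t first_pkt, uint32_t base)
{
	if (seq_after(first_pkt, base))
		return true;	/* the window advanced */
	if (seq_before(first_pkt, base))
		return false;	/* firstPacket regressed */
	return true;		/* unchanged: defer to previousPacket */
}
```

The old plain `before()` tests wrongly discarded ACKs whenever `prev_pkt` merely failed to advance, which could stall calls against peers that put a serial number in `previousPacket`.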
-5
net/rxrpc/misc.c
··· 63 63 */ 64 64 unsigned int rxrpc_rx_jumbo_max = 4; 65 65 66 - /* 67 - * Time till packet resend (in milliseconds). 68 - */ 69 - unsigned long rxrpc_resend_timeout = 4 * HZ; 70 - 71 66 const s8 rxrpc_ack_priority[] = { 72 67 [0] = 0, 73 68 [RXRPC_ACK_DELAY] = 1,
+3 -6
net/rxrpc/output.c
··· 369 369 (test_and_clear_bit(RXRPC_CALL_EV_ACK_LOST, &call->events) || 370 370 retrans || 371 371 call->cong_mode == RXRPC_CALL_SLOW_START || 372 - (call->peer->rtt_usage < 3 && sp->hdr.seq & 1) || 372 + (call->peer->rtt_count < 3 && sp->hdr.seq & 1) || 373 373 ktime_before(ktime_add_ms(call->peer->rtt_last_req, 1000), 374 374 ktime_get_real()))) 375 375 whdr.flags |= RXRPC_REQUEST_ACK; ··· 423 423 if (whdr.flags & RXRPC_REQUEST_ACK) { 424 424 call->peer->rtt_last_req = skb->tstamp; 425 425 trace_rxrpc_rtt_tx(call, rxrpc_rtt_tx_data, serial); 426 - if (call->peer->rtt_usage > 1) { 426 + if (call->peer->rtt_count > 1) { 427 427 unsigned long nowj = jiffies, ack_lost_at; 428 428 429 - ack_lost_at = nsecs_to_jiffies(2 * call->peer->rtt); 430 - if (ack_lost_at < 1) 431 - ack_lost_at = 1; 432 - 429 + ack_lost_at = rxrpc_get_rto_backoff(call->peer, retrans); 433 430 ack_lost_at += nowj; 434 431 WRITE_ONCE(call->ack_lost_at, ack_lost_at); 435 432 rxrpc_reduce_call_timer(call, ack_lost_at, nowj,
-46
net/rxrpc/peer_event.c
··· 296 296 } 297 297 298 298 /* 299 - * Add RTT information to cache. This is called in softirq mode and has 300 - * exclusive access to the peer RTT data. 301 - */ 302 - void rxrpc_peer_add_rtt(struct rxrpc_call *call, enum rxrpc_rtt_rx_trace why, 303 - rxrpc_serial_t send_serial, rxrpc_serial_t resp_serial, 304 - ktime_t send_time, ktime_t resp_time) 305 - { 306 - struct rxrpc_peer *peer = call->peer; 307 - s64 rtt; 308 - u64 sum = peer->rtt_sum, avg; 309 - u8 cursor = peer->rtt_cursor, usage = peer->rtt_usage; 310 - 311 - rtt = ktime_to_ns(ktime_sub(resp_time, send_time)); 312 - if (rtt < 0) 313 - return; 314 - 315 - spin_lock(&peer->rtt_input_lock); 316 - 317 - /* Replace the oldest datum in the RTT buffer */ 318 - sum -= peer->rtt_cache[cursor]; 319 - sum += rtt; 320 - peer->rtt_cache[cursor] = rtt; 321 - peer->rtt_cursor = (cursor + 1) & (RXRPC_RTT_CACHE_SIZE - 1); 322 - peer->rtt_sum = sum; 323 - if (usage < RXRPC_RTT_CACHE_SIZE) { 324 - usage++; 325 - peer->rtt_usage = usage; 326 - } 327 - 328 - spin_unlock(&peer->rtt_input_lock); 329 - 330 - /* Now recalculate the average */ 331 - if (usage == RXRPC_RTT_CACHE_SIZE) { 332 - avg = sum / RXRPC_RTT_CACHE_SIZE; 333 - } else { 334 - avg = sum; 335 - do_div(avg, usage); 336 - } 337 - 338 - /* Don't need to update this under lock */ 339 - peer->rtt = avg; 340 - trace_rxrpc_rtt_rx(call, why, send_serial, resp_serial, rtt, 341 - usage, avg); 342 - } 343 - 344 - /* 345 299 * Perform keep-alive pings. 346 300 */ 347 301 static void rxrpc_peer_keepalive_dispatch(struct rxrpc_net *rxnet,
+7 -5
net/rxrpc/peer_object.c
··· 225 225 spin_lock_init(&peer->rtt_input_lock); 226 226 peer->debug_id = atomic_inc_return(&rxrpc_debug_id); 227 227 228 + rxrpc_peer_init_rtt(peer); 229 + 228 230 if (RXRPC_TX_SMSS > 2190) 229 231 peer->cong_cwnd = 2; 230 232 else if (RXRPC_TX_SMSS > 1095) ··· 499 497 EXPORT_SYMBOL(rxrpc_kernel_get_peer); 500 498 501 499 /** 502 - * rxrpc_kernel_get_rtt - Get a call's peer RTT 500 + * rxrpc_kernel_get_srtt - Get a call's peer smoothed RTT 503 501 * @sock: The socket on which the call is in progress. 504 502 * @call: The call to query 505 503 * 506 - * Get the call's peer RTT. 504 + * Get the call's peer smoothed RTT. 507 505 */ 508 - u64 rxrpc_kernel_get_rtt(struct socket *sock, struct rxrpc_call *call) 506 + u32 rxrpc_kernel_get_srtt(struct socket *sock, struct rxrpc_call *call) 509 507 { 510 - return call->peer->rtt; 508 + return call->peer->srtt_us >> 3; 511 509 } 512 - EXPORT_SYMBOL(rxrpc_kernel_get_rtt); 510 + EXPORT_SYMBOL(rxrpc_kernel_get_srtt);
+4 -4
net/rxrpc/proc.c
··· 222 222 seq_puts(seq, 223 223 "Proto Local " 224 224 " Remote " 225 - " Use CW MTU LastUse RTT Rc\n" 225 + " Use CW MTU LastUse RTT RTO\n" 226 226 ); 227 227 return 0; 228 228 } ··· 236 236 now = ktime_get_seconds(); 237 237 seq_printf(seq, 238 238 "UDP %-47.47s %-47.47s %3u" 239 - " %3u %5u %6llus %12llu %2u\n", 239 + " %3u %5u %6llus %8u %8u\n", 240 240 lbuff, 241 241 rbuff, 242 242 atomic_read(&peer->usage), 243 243 peer->cong_cwnd, 244 244 peer->mtu, 245 245 now - peer->last_tx_at, 246 - peer->rtt, 247 - peer->rtt_cursor); 246 + peer->srtt_us >> 3, 247 + jiffies_to_usecs(peer->rto_j)); 248 248 249 249 return 0; 250 250 }
+195
net/rxrpc/rtt.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* RTT/RTO calculation. 3 + * 4 + * Adapted from TCP for AF_RXRPC by David Howells (dhowells@redhat.com) 5 + * 6 + * https://tools.ietf.org/html/rfc6298 7 + * https://tools.ietf.org/html/rfc1122#section-4.2.3.1 8 + * http://ccr.sigcomm.org/archive/1995/jan95/ccr-9501-partridge87.pdf 9 + */ 10 + 11 + #include <linux/net.h> 12 + #include "ar-internal.h" 13 + 14 + #define RXRPC_RTO_MAX ((unsigned)(120 * HZ)) 15 + #define RXRPC_TIMEOUT_INIT ((unsigned)(1*HZ)) /* RFC6298 2.1 initial RTO value */ 16 + #define rxrpc_jiffies32 ((u32)jiffies) /* As rxrpc_jiffies32 */ 17 + #define rxrpc_min_rtt_wlen 300 /* As sysctl_tcp_min_rtt_wlen */ 18 + 19 + static u32 rxrpc_rto_min_us(struct rxrpc_peer *peer) 20 + { 21 + return 200; 22 + } 23 + 24 + static u32 __rxrpc_set_rto(const struct rxrpc_peer *peer) 25 + { 26 + return _usecs_to_jiffies((peer->srtt_us >> 3) + peer->rttvar_us); 27 + } 28 + 29 + static u32 rxrpc_bound_rto(u32 rto) 30 + { 31 + return min(rto, RXRPC_RTO_MAX); 32 + } 33 + 34 + /* 35 + * Called to compute a smoothed rtt estimate. The data fed to this 36 + * routine either comes from timestamps, or from segments that were 37 + * known _not_ to have been retransmitted [see Karn/Partridge 38 + * Proceedings SIGCOMM 87]. The algorithm is from the SIGCOMM 88 39 + * piece by Van Jacobson. 40 + * NOTE: the next three routines used to be one big routine. 41 + * To save cycles in the RFC 1323 implementation it was better to break 42 + * it up into three procedures. -- erics 43 + */ 44 + static void rxrpc_rtt_estimator(struct rxrpc_peer *peer, long sample_rtt_us) 45 + { 46 + long m = sample_rtt_us; /* RTT */ 47 + u32 srtt = peer->srtt_us; 48 + 49 + /* The following amusing code comes from Jacobson's 50 + * article in SIGCOMM '88. Note that rtt and mdev 51 + * are scaled versions of rtt and mean deviation. 52 + * This is designed to be as fast as possible 53 + * m stands for "measurement". 
54 + * 55 + * On a 1990 paper the rto value is changed to: 56 + * RTO = rtt + 4 * mdev 57 + * 58 + * Funny. This algorithm seems to be very broken. 59 + * These formulae increase RTO, when it should be decreased, increase 60 + * too slowly, when it should be increased quickly, decrease too quickly 61 + * etc. I guess in BSD RTO takes ONE value, so that it is absolutely 62 + * does not matter how to _calculate_ it. Seems, it was trap 63 + * that VJ failed to avoid. 8) 64 + */ 65 + if (srtt != 0) { 66 + m -= (srtt >> 3); /* m is now error in rtt est */ 67 + srtt += m; /* rtt = 7/8 rtt + 1/8 new */ 68 + if (m < 0) { 69 + m = -m; /* m is now abs(error) */ 70 + m -= (peer->mdev_us >> 2); /* similar update on mdev */ 71 + /* This is similar to one of Eifel findings. 72 + * Eifel blocks mdev updates when rtt decreases. 73 + * This solution is a bit different: we use finer gain 74 + * for mdev in this case (alpha*beta). 75 + * Like Eifel it also prevents growth of rto, 76 + * but also it limits too fast rto decreases, 77 + * happening in pure Eifel. 78 + */ 79 + if (m > 0) 80 + m >>= 3; 81 + } else { 82 + m -= (peer->mdev_us >> 2); /* similar update on mdev */ 83 + } 84 + 85 + peer->mdev_us += m; /* mdev = 3/4 mdev + 1/4 new */ 86 + if (peer->mdev_us > peer->mdev_max_us) { 87 + peer->mdev_max_us = peer->mdev_us; 88 + if (peer->mdev_max_us > peer->rttvar_us) 89 + peer->rttvar_us = peer->mdev_max_us; 90 + } 91 + } else { 92 + /* no previous measure. */ 93 + srtt = m << 3; /* take the measured time to be rtt */ 94 + peer->mdev_us = m << 1; /* make sure rto = 3*rtt */ 95 + peer->rttvar_us = max(peer->mdev_us, rxrpc_rto_min_us(peer)); 96 + peer->mdev_max_us = peer->rttvar_us; 97 + } 98 + 99 + peer->srtt_us = max(1U, srtt); 100 + } 101 + 102 + /* 103 + * Calculate rto without backoff. This is the second half of Van Jacobson's 104 + * routine referred to above. 105 + */ 106 + static void rxrpc_set_rto(struct rxrpc_peer *peer) 107 + { 108 + u32 rto; 109 + 110 + /* 1. 
If rtt variance happened to be less 50msec, it is hallucination. 111 + * It cannot be less due to utterly erratic ACK generation made 112 + * at least by solaris and freebsd. "Erratic ACKs" has _nothing_ 113 + * to do with delayed acks, because at cwnd>2 true delack timeout 114 + * is invisible. Actually, Linux-2.4 also generates erratic 115 + * ACKs in some circumstances. 116 + */ 117 + rto = __rxrpc_set_rto(peer); 118 + 119 + /* 2. Fixups made earlier cannot be right. 120 + * If we do not estimate RTO correctly without them, 121 + * all the algo is pure shit and should be replaced 122 + * with correct one. It is exactly, which we pretend to do. 123 + */ 124 + 125 + /* NOTE: clamping at RXRPC_RTO_MIN is not required, current algo 126 + * guarantees that rto is higher. 127 + */ 128 + peer->rto_j = rxrpc_bound_rto(rto); 129 + } 130 + 131 + static void rxrpc_ack_update_rtt(struct rxrpc_peer *peer, long rtt_us) 132 + { 133 + if (rtt_us < 0) 134 + return; 135 + 136 + //rxrpc_update_rtt_min(peer, rtt_us); 137 + rxrpc_rtt_estimator(peer, rtt_us); 138 + rxrpc_set_rto(peer); 139 + 140 + /* RFC6298: only reset backoff on valid RTT measurement. */ 141 + peer->backoff = 0; 142 + } 143 + 144 + /* 145 + * Add RTT information to cache. This is called in softirq mode and has 146 + * exclusive access to the peer RTT data. 
147 + */ 148 + void rxrpc_peer_add_rtt(struct rxrpc_call *call, enum rxrpc_rtt_rx_trace why, 149 + rxrpc_serial_t send_serial, rxrpc_serial_t resp_serial, 150 + ktime_t send_time, ktime_t resp_time) 151 + { 152 + struct rxrpc_peer *peer = call->peer; 153 + s64 rtt_us; 154 + 155 + rtt_us = ktime_to_us(ktime_sub(resp_time, send_time)); 156 + if (rtt_us < 0) 157 + return; 158 + 159 + spin_lock(&peer->rtt_input_lock); 160 + rxrpc_ack_update_rtt(peer, rtt_us); 161 + if (peer->rtt_count < 3) 162 + peer->rtt_count++; 163 + spin_unlock(&peer->rtt_input_lock); 164 + 165 + trace_rxrpc_rtt_rx(call, why, send_serial, resp_serial, 166 + peer->srtt_us >> 3, peer->rto_j); 167 + } 168 + 169 + /* 170 + * Get the retransmission timeout to set in jiffies, backing it off each time 171 + * we retransmit. 172 + */ 173 + unsigned long rxrpc_get_rto_backoff(struct rxrpc_peer *peer, bool retrans) 174 + { 175 + u64 timo_j; 176 + u8 backoff = READ_ONCE(peer->backoff); 177 + 178 + timo_j = peer->rto_j; 179 + timo_j <<= backoff; 180 + if (retrans && timo_j * 2 <= RXRPC_RTO_MAX) 181 + WRITE_ONCE(peer->backoff, backoff + 1); 182 + 183 + if (timo_j < 1) 184 + timo_j = 1; 185 + 186 + return timo_j; 187 + } 188 + 189 + void rxrpc_peer_init_rtt(struct rxrpc_peer *peer) 190 + { 191 + peer->rto_j = RXRPC_TIMEOUT_INIT; 192 + peer->mdev_us = jiffies_to_usecs(RXRPC_TIMEOUT_INIT); 193 + peer->backoff = 0; 194 + //minmax_reset(&peer->rtt_min, rxrpc_jiffies32, ~0U); 195 + }
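The estimator in the new rtt.c is TCP's Jacobson/Karn code with the same fixed-point conventions: `srtt_us` holds 8x the smoothed RTT and `mdev_us`/`rttvar_us` hold 4x the deviation, so RTO = srtt + 4*mdev falls out as `(srtt_us >> 3) + rttvar_us`. A simplified userspace sketch of that arithmetic — it drops the Eifel-style finer gain on shrinking samples and the windowed `mdev_max` tracking, and `RTO_MIN_US` stands in for the 200 returned by `rxrpc_rto_min_us()`:

```c
#include <assert.h>
#include <stdint.h>

#define RTO_MIN_US 200	/* floor, as returned by rxrpc_rto_min_us() */

struct rtt_state {
	uint32_t srtt_us;	/* 8 * smoothed RTT, in usec */
	uint32_t mdev_us;	/* 4 * mean deviation, in usec */
	uint32_t rttvar_us;	/* 4 * deviation used for the RTO */
};

/* Jacobson '88 update with the kernel's scaling, minus the finer-gain
 * tweak applied when the sample comes in under the estimate. */
static void rtt_estimator(struct rtt_state *s, long m)
{
	uint32_t srtt = s->srtt_us;

	if (srtt != 0) {
		m -= (srtt >> 3);	/* m is now error in the estimate */
		srtt += m;		/* srtt = 7/8 srtt + 1/8 new */
		if (m < 0)
			m = -m;		/* m is now abs(error) */
		m -= (s->mdev_us >> 2);
		s->mdev_us += m;	/* mdev = 3/4 mdev + 1/4 |err| */
		if (s->mdev_us > s->rttvar_us)
			s->rttvar_us = s->mdev_us;
	} else {
		srtt = m << 3;		/* first sample is the RTT */
		s->mdev_us = m << 1;	/* make sure rto = 3 * rtt */
		s->rttvar_us = s->mdev_us > RTO_MIN_US ? s->mdev_us
						       : RTO_MIN_US;
	}
	s->srtt_us = srtt ? srtt : 1;
}

static uint32_t rtt_rto_us(const struct rtt_state *s)
{
	return (s->srtt_us >> 3) + s->rttvar_us;
}

/* Feed n samples (usec) into a fresh estimator, return the RTO. */
static uint32_t rto_after_samples(const long *samples, int n)
{
	struct rtt_state s = { 0, 0, 0 };
	int i;

	for (i = 0; i < n; i++)
		rtt_estimator(&s, samples[i]);
	return rtt_rto_us(&s);
}
```

On the first 100ms sample this gives RTO = 3x RTT, matching the "make sure rto = 3*rtt" comment; `rxrpc_get_rto_backoff()` then doubles this per retransmission via the `backoff` shift, capped at `RXRPC_RTO_MAX`.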
+1 -2
net/rxrpc/rxkad.c
··· 1148 1148 ret = rxkad_decrypt_ticket(conn, skb, ticket, ticket_len, &session_key, 1149 1149 &expiry, _abort_code); 1150 1150 if (ret < 0) 1151 - goto temporary_error_free_resp; 1151 + goto temporary_error_free_ticket; 1152 1152 1153 1153 /* use the session key from inside the ticket to decrypt the 1154 1154 * response */ ··· 1230 1230 1231 1231 temporary_error_free_ticket: 1232 1232 kfree(ticket); 1233 - temporary_error_free_resp: 1234 1233 kfree(response); 1235 1234 temporary_error: 1236 1235 /* Ignore the response packet if we got a temporary error such as
+9 -17
net/rxrpc/sendmsg.c
··· 66 66 struct rxrpc_call *call) 67 67 { 68 68 rxrpc_seq_t tx_start, tx_win; 69 - signed long rtt2, timeout; 70 - u64 rtt; 69 + signed long rtt, timeout; 71 70 72 - rtt = READ_ONCE(call->peer->rtt); 73 - rtt2 = nsecs_to_jiffies64(rtt) * 2; 74 - if (rtt2 < 2) 75 - rtt2 = 2; 71 + rtt = READ_ONCE(call->peer->srtt_us) >> 3; 72 + rtt = usecs_to_jiffies(rtt) * 2; 73 + if (rtt < 2) 74 + rtt = 2; 76 75 77 - timeout = rtt2; 76 + timeout = rtt; 78 77 tx_start = READ_ONCE(call->tx_hard_ack); 79 78 80 79 for (;;) { ··· 91 92 return -EINTR; 92 93 93 94 if (tx_win != tx_start) { 94 - timeout = rtt2; 95 + timeout = rtt; 95 96 tx_start = tx_win; 96 97 } 97 98 ··· 270 271 _debug("need instant resend %d", ret); 271 272 rxrpc_instant_resend(call, ix); 272 273 } else { 273 - unsigned long now = jiffies, resend_at; 274 + unsigned long now = jiffies; 275 + unsigned long resend_at = now + call->peer->rto_j; 274 276 275 - if (call->peer->rtt_usage > 1) 276 - resend_at = nsecs_to_jiffies(call->peer->rtt * 3 / 2); 277 - else 278 - resend_at = rxrpc_resend_timeout; 279 - if (resend_at < 1) 280 - resend_at = 1; 281 - 282 - resend_at += now; 283 277 WRITE_ONCE(call->resend_at, resend_at); 284 278 rxrpc_reduce_call_timer(call, resend_at, now, 285 279 rxrpc_timer_set_for_send);
-9
net/rxrpc/sysctl.c
··· 71 71 .extra1 = (void *)&one_jiffy, 72 72 .extra2 = (void *)&max_jiffies, 73 73 }, 74 - { 75 - .procname = "resend_timeout", 76 - .data = &rxrpc_resend_timeout, 77 - .maxlen = sizeof(unsigned long), 78 - .mode = 0644, 79 - .proc_handler = proc_doulongvec_ms_jiffies_minmax, 80 - .extra1 = (void *)&one_jiffy, 81 - .extra2 = (void *)&max_jiffies, 82 - }, 83 74 84 75 /* Non-time values */ 85 76 {
+11 -3
net/sctp/sm_sideeffect.c
··· 1523 1523 timeout = asoc->timeouts[cmd->obj.to]; 1524 1524 BUG_ON(!timeout); 1525 1525 1526 - timer->expires = jiffies + timeout; 1527 - sctp_association_hold(asoc); 1528 - add_timer(timer); 1526 + /* 1527 + * SCTP has a hard time with timer starts. Because we process 1528 + * timer starts as side effects, it can be hard to tell if we 1529 + * have already started a timer or not, which leads to BUG 1530 + * halts when we call add_timer. So here, instead of just starting 1531 + * a timer, if the timer is already started, and just mod 1532 + * the timer with the shorter of the two expiration times 1533 + */ 1534 + if (!timer_pending(timer)) 1535 + sctp_association_hold(asoc); 1536 + timer_reduce(timer, jiffies + timeout); 1529 1537 break; 1530 1538 1531 1539 case SCTP_CMD_TIMER_RESTART:
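The SCTP change sidesteps the `add_timer()` BUG on an already-pending timer by always going through `timer_reduce()`, which either starts a stopped timer or pulls a pending one earlier, and by taking the association ref only on the not-yet-pending transition. A toy model of that contract (`model_timer_reduce()` is a stand-in for the kernel API, not the real thing):

```c
#include <assert.h>
#include <stdbool.h>

struct model_timer {
	bool pending;
	unsigned long expires;
};

static int refs;	/* stands in for the association refcount */

/* timer_reduce() semantics: start a stopped timer, or pull a pending
 * one earlier; it never pushes an expiry later. */
static void model_timer_reduce(struct model_timer *t, unsigned long expires)
{
	if (!t->pending || expires < t->expires) {
		t->expires = expires;
		t->pending = true;
	}
}

/* The fixed SCTP_CMD_TIMER_START shape: hold a ref only when the
 * timer was not already pending, then reduce. */
static void start_assoc_timer(struct model_timer *t, unsigned long now,
			      unsigned long timeout)
{
	if (!t->pending)
		refs++;			/* sctp_association_hold() */
	model_timer_reduce(t, now + timeout);
}

static unsigned long demo_expires(void)
{
	struct model_timer t = { false, 0 };

	refs = 0;
	start_assoc_timer(&t, 100, 50);	/* starts: expires at 150 */
	start_assoc_timer(&t, 100, 30);	/* duplicate start: shortened to 130 */
	start_assoc_timer(&t, 100, 80);	/* later expiry: left at 130 */
	return t.expires;
}

static int demo_refs(void)
{
	demo_expires();
	return refs;	/* exactly one hold despite three starts */
}
```

The earliest-expiry choice keeps a duplicated timer start from silently extending a deadline, which is the conservative behaviour for protocol timers.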
+5 -4
net/sctp/sm_statefuns.c
··· 1856 1856 /* Update the content of current association. */ 1857 1857 sctp_add_cmd_sf(commands, SCTP_CMD_UPDATE_ASSOC, SCTP_ASOC(new_asoc)); 1858 1858 sctp_add_cmd_sf(commands, SCTP_CMD_EVENT_ULP, SCTP_ULPEVENT(ev)); 1859 - if (sctp_state(asoc, SHUTDOWN_PENDING) && 1859 + if ((sctp_state(asoc, SHUTDOWN_PENDING) || 1860 + sctp_state(asoc, SHUTDOWN_SENT)) && 1860 1861 (sctp_sstate(asoc->base.sk, CLOSING) || 1861 1862 sock_flag(asoc->base.sk, SOCK_DEAD))) { 1862 - /* if were currently in SHUTDOWN_PENDING, but the socket 1863 - * has been closed by user, don't transition to ESTABLISHED. 1864 - * Instead trigger SHUTDOWN bundled with COOKIE_ACK. 1863 + /* If the socket has been closed by user, don't 1864 + * transition to ESTABLISHED. Instead trigger SHUTDOWN 1865 + * bundled with COOKIE_ACK. 1865 1866 */ 1866 1867 sctp_add_cmd_sf(commands, SCTP_CMD_REPLY, SCTP_CHUNK(repl)); 1867 1868 return sctp_sf_do_9_2_start_shutdown(net, ep, asoc,
+7 -2
net/sunrpc/clnt.c
··· 889 889 * here. 890 890 */ 891 891 rpc_clnt_debugfs_unregister(clnt); 892 + rpc_free_clid(clnt); 892 893 rpc_clnt_remove_pipedir(clnt); 894 + xprt_put(rcu_dereference_raw(clnt->cl_xprt)); 893 895 894 896 kfree(clnt); 895 897 rpciod_down(); ··· 909 907 rpc_unregister_client(clnt); 910 908 rpc_free_iostats(clnt->cl_metrics); 911 909 clnt->cl_metrics = NULL; 912 - xprt_put(rcu_dereference_raw(clnt->cl_xprt)); 913 910 xprt_iter_destroy(&clnt->cl_xpi); 914 911 put_cred(clnt->cl_cred); 915 - rpc_free_clid(clnt); 916 912 917 913 INIT_WORK(&clnt->cl_work, rpc_free_client_work); 918 914 schedule_work(&clnt->cl_work); ··· 2432 2432 rpc_check_timeout(struct rpc_task *task) 2433 2433 { 2434 2434 struct rpc_clnt *clnt = task->tk_client; 2435 + 2436 + if (RPC_SIGNALLED(task)) { 2437 + rpc_call_rpcerror(task, -ERESTARTSYS); 2438 + return; 2439 + } 2435 2440 2436 2441 if (xprt_adjust_timeout(task->tk_rqstp) == 0) 2437 2442 return;
+5 -1
net/tipc/udp_media.c
··· 161 161 struct udp_bearer *ub, struct udp_media_addr *src, 162 162 struct udp_media_addr *dst, struct dst_cache *cache) 163 163 { 164 - struct dst_entry *ndst = dst_cache_get(cache); 164 + struct dst_entry *ndst; 165 165 int ttl, err = 0; 166 166 167 + local_bh_disable(); 168 + ndst = dst_cache_get(cache); 167 169 if (dst->proto == htons(ETH_P_IP)) { 168 170 struct rtable *rt = (struct rtable *)ndst; 169 171 ··· 212 210 src->port, dst->port, false); 213 211 #endif 214 212 } 213 + local_bh_enable(); 215 214 return err; 216 215 217 216 tx_error: 217 + local_bh_enable(); 218 218 kfree_skb(skb); 219 219 return err; 220 220 }
+10 -7
net/tls/tls_sw.c
··· 780 780 781 781 static int bpf_exec_tx_verdict(struct sk_msg *msg, struct sock *sk, 782 782 bool full_record, u8 record_type, 783 - size_t *copied, int flags) 783 + ssize_t *copied, int flags) 784 784 { 785 785 struct tls_context *tls_ctx = tls_get_ctx(sk); 786 786 struct tls_sw_context_tx *ctx = tls_sw_ctx_tx(tls_ctx); ··· 796 796 psock = sk_psock_get(sk); 797 797 if (!psock || !policy) { 798 798 err = tls_push_record(sk, flags, record_type); 799 - if (err && err != -EINPROGRESS) { 799 + if (err && sk->sk_err == EBADMSG) { 800 800 *copied -= sk_msg_free(sk, msg); 801 801 tls_free_open_rec(sk); 802 + err = -sk->sk_err; 802 803 } 803 804 if (psock) 804 805 sk_psock_put(sk, psock); ··· 825 824 switch (psock->eval) { 826 825 case __SK_PASS: 827 826 err = tls_push_record(sk, flags, record_type); 828 - if (err && err != -EINPROGRESS) { 827 + if (err && sk->sk_err == EBADMSG) { 829 828 *copied -= sk_msg_free(sk, msg); 830 829 tls_free_open_rec(sk); 830 + err = -sk->sk_err; 831 831 goto out_err; 832 832 } 833 833 break; ··· 918 916 unsigned char record_type = TLS_RECORD_TYPE_DATA; 919 917 bool is_kvec = iov_iter_is_kvec(&msg->msg_iter); 920 918 bool eor = !(msg->msg_flags & MSG_MORE); 921 - size_t try_to_copy, copied = 0; 919 + size_t try_to_copy; 920 + ssize_t copied = 0; 922 921 struct sk_msg *msg_pl, *msg_en; 923 922 struct tls_rec *rec; 924 923 int required_size; ··· 1121 1118 1122 1119 release_sock(sk); 1123 1120 mutex_unlock(&tls_ctx->tx_lock); 1124 - return copied ? copied : ret; 1121 + return copied > 0 ? copied : ret; 1125 1122 } 1126 1123 1127 1124 static int tls_sw_do_sendpage(struct sock *sk, struct page *page, ··· 1135 1132 struct sk_msg *msg_pl; 1136 1133 struct tls_rec *rec; 1137 1134 int num_async = 0; 1138 - size_t copied = 0; 1135 + ssize_t copied = 0; 1139 1136 bool full_record; 1140 1137 int record_room; 1141 1138 int ret = 0; ··· 1237 1234 } 1238 1235 sendpage_end: 1239 1236 ret = sk_stream_error(sk, flags, ret); 1240 - return copied ? 
copied : ret; 1237 + return copied > 0 ? copied : ret; 1241 1238 } 1242 1239 1243 1240 int tls_sw_sendpage_locked(struct sock *sk, struct page *page,
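The `size_t` to `ssize_t` switch in tls_sw.c matters because `*copied -= sk_msg_free(sk, msg)` can take `copied` below zero on the error path; with an unsigned type the value wraps, stays nonzero, and `copied ? copied : ret` masks the real error. A minimal standalone demonstration of the two return conventions (not the tls code itself):

```c
#include <assert.h>
#include <stddef.h>
#include <sys/types.h>	/* ssize_t */

/* Old shape: unsigned byte count wraps on the rollback subtraction,
 * so the error in 'err' is never returned. */
static long buggy_return(size_t copied, size_t freed, int err)
{
	copied -= freed;			/* can wrap past zero */
	return copied ? (long)copied : err;	/* wrapped value != 0 */
}

/* New shape: signed count, and only a strictly positive count wins. */
static long fixed_return(ssize_t copied, size_t freed, int err)
{
	copied -= (ssize_t)freed;
	return copied > 0 ? copied : err;
}
```

With nothing successfully copied and 16 bytes rolled back, the fixed form surfaces the error while the buggy form returns wrapped garbage instead.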
+2 -1
security/apparmor/apparmorfs.c
··· 454 454 */ 455 455 error = aa_may_manage_policy(label, ns, mask); 456 456 if (error) 457 - return error; 457 + goto end_section; 458 458 459 459 data = aa_simple_write_to_buffer(buf, size, size, pos); 460 460 error = PTR_ERR(data); ··· 462 462 error = aa_replace_profiles(ns, label, mask, data); 463 463 aa_put_loaddata(data); 464 464 } 465 + end_section: 465 466 end_current_label_crit_section(label); 466 467 467 468 return error;
+2 -1
security/apparmor/audit.c
··· 197 197 rule->label = aa_label_parse(&root_ns->unconfined->label, rulestr, 198 198 GFP_KERNEL, true, false); 199 199 if (IS_ERR(rule->label)) { 200 + int err = PTR_ERR(rule->label); 200 201 aa_audit_rule_free(rule); 201 - return PTR_ERR(rule->label); 202 + return err; 202 203 } 203 204 204 205 *vrule = rule;
+1 -2
security/apparmor/domain.c
··· 1328 1328 ctx->nnp = aa_get_label(label); 1329 1329 1330 1330 if (!fqname || !*fqname) { 1331 + aa_put_label(label); 1331 1332 AA_DEBUG("no profile name"); 1332 1333 return -EINVAL; 1333 1334 } ··· 1346 1345 else 1347 1346 op = OP_CHANGE_PROFILE; 1348 1347 } 1349 - 1350 - label = aa_get_current_label(); 1351 1348 1352 1349 if (*fqname == '&') { 1353 1350 stack = true;
+23 -23
security/integrity/evm/evm_crypto.c
··· 73 73 { 74 74 long rc; 75 75 const char *algo; 76 - struct crypto_shash **tfm; 76 + struct crypto_shash **tfm, *tmp_tfm; 77 77 struct shash_desc *desc; 78 78 79 79 if (type == EVM_XATTR_HMAC) { ··· 91 91 algo = hash_algo_name[hash_algo]; 92 92 } 93 93 94 - if (*tfm == NULL) { 95 - mutex_lock(&mutex); 96 - if (*tfm) 97 - goto out; 98 - *tfm = crypto_alloc_shash(algo, 0, CRYPTO_NOLOAD); 99 - if (IS_ERR(*tfm)) { 100 - rc = PTR_ERR(*tfm); 101 - pr_err("Can not allocate %s (reason: %ld)\n", algo, rc); 102 - *tfm = NULL; 94 + if (*tfm) 95 + goto alloc; 96 + mutex_lock(&mutex); 97 + if (*tfm) 98 + goto unlock; 99 + 100 + tmp_tfm = crypto_alloc_shash(algo, 0, CRYPTO_NOLOAD); 101 + if (IS_ERR(tmp_tfm)) { 102 + pr_err("Can not allocate %s (reason: %ld)\n", algo, 103 + PTR_ERR(tmp_tfm)); 104 + mutex_unlock(&mutex); 105 + return ERR_CAST(tmp_tfm); 106 + } 107 + if (type == EVM_XATTR_HMAC) { 108 + rc = crypto_shash_setkey(tmp_tfm, evmkey, evmkey_len); 109 + if (rc) { 110 + crypto_free_shash(tmp_tfm); 103 111 mutex_unlock(&mutex); 104 112 return ERR_PTR(rc); 105 113 } 106 - if (type == EVM_XATTR_HMAC) { 107 - rc = crypto_shash_setkey(*tfm, evmkey, evmkey_len); 108 - if (rc) { 109 - crypto_free_shash(*tfm); 110 - *tfm = NULL; 111 - mutex_unlock(&mutex); 112 - return ERR_PTR(rc); 113 - } 114 - } 115 - out: 116 - mutex_unlock(&mutex); 117 114 } 118 - 115 + *tfm = tmp_tfm; 116 + unlock: 117 + mutex_unlock(&mutex); 118 + alloc: 119 119 desc = kmalloc(sizeof(*desc) + crypto_shash_descsize(*tfm), 120 120 GFP_KERNEL); 121 121 if (!desc) ··· 207 207 data->hdr.length = crypto_shash_digestsize(desc->tfm); 208 208 209 209 error = -ENODATA; 210 - list_for_each_entry_rcu(xattr, &evm_config_xattrnames, list) { 210 + list_for_each_entry_lockless(xattr, &evm_config_xattrnames, list) { 211 211 bool is_ima = false; 212 212 213 213 if (strcmp(xattr->name, XATTR_NAME_IMA) == 0)
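The evm_crypto rework is the classic double-checked publish: build the tfm completely in a local (`tmp_tfm`), key it, and only then store it to the shared pointer, so the lockless fast path can never observe a half-initialized tfm and a failed `crypto_shash_setkey()` no longer leaves a stale pointer behind. A sketch of the pattern's shape under stand-in types (pthread mutex in place of the kernel mutex, `calloc` in place of `crypto_alloc_shash()`):

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

struct shash { int keyed; };		/* stand-in for crypto_shash */

static struct shash *shared_tfm;	/* published, read locklessly */
static pthread_mutex_t init_lock = PTHREAD_MUTEX_INITIALIZER;
static int setkey_ok = 1;		/* test knob for setkey failure */

static struct shash *get_tfm(void)
{
	struct shash *tmp;

	if (shared_tfm)			/* fast path, no lock */
		return shared_tfm;

	pthread_mutex_lock(&init_lock);
	if (shared_tfm)
		goto unlock;		/* lost the race; other caller won */

	tmp = calloc(1, sizeof(*tmp));	/* crypto_alloc_shash() stand-in */
	if (!tmp || !setkey_ok) {	/* alloc or setkey failed */
		free(tmp);
		pthread_mutex_unlock(&init_lock);
		return NULL;		/* nothing half-built was published */
	}
	tmp->keyed = 1;
	shared_tfm = tmp;		/* publish only when fully set up */
unlock:
	pthread_mutex_unlock(&init_lock);
	return shared_tfm;
}

/* Run one scenario on a fresh slot; keyed flag, or -1 on failure. */
static int demo(int ok)
{
	struct shash *t;

	shared_tfm = NULL;
	setkey_ok = ok;
	t = get_tfm();
	return t ? t->keyed : -1;
}
```

A truly concurrent lockless reader would additionally want release/acquire ordering on the publish (the kernel side leans on the mutex and a single pointer-sized store); the sketch above only shows the publish-when-complete discipline the fix introduces.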
+2 -2
security/integrity/evm/evm_main.c
··· 97 97 if (!(inode->i_opflags & IOP_XATTR)) 98 98 return -EOPNOTSUPP; 99 99 100 - list_for_each_entry_rcu(xattr, &evm_config_xattrnames, list) { 100 + list_for_each_entry_lockless(xattr, &evm_config_xattrnames, list) { 101 101 error = __vfs_getxattr(dentry, inode, xattr->name, NULL, 0); 102 102 if (error < 0) { 103 103 if (error == -ENODATA) ··· 228 228 struct xattr_list *xattr; 229 229 230 230 namelen = strlen(req_xattr_name); 231 - list_for_each_entry_rcu(xattr, &evm_config_xattrnames, list) { 231 + list_for_each_entry_lockless(xattr, &evm_config_xattrnames, list) { 232 232 if ((strlen(xattr->name) == namelen) 233 233 && (strncmp(req_xattr_name, xattr->name, namelen) == 0)) { 234 234 found = 1;
+8 -1
security/integrity/evm/evm_secfs.c
··· 232 232 goto out; 233 233 } 234 234 235 - /* Guard against races in evm_read_xattrs */ 235 + /* 236 + * xattr_list_mutex guards against races in evm_read_xattrs(). 237 + * Entries are only added to the evm_config_xattrnames list 238 + * and never deleted. Therefore, the list is traversed 239 + * using list_for_each_entry_lockless() without holding 240 + * the mutex in evm_calc_hmac_or_hash(), evm_find_protected_xattrs() 241 + * and evm_protected_xattr(). 242 + */ 236 243 mutex_lock(&xattr_list_mutex); 237 244 list_for_each_entry(tmp, &evm_config_xattrnames, list) { 238 245 if (strcmp(xattr->name, tmp->name) == 0) {
+6 -6
security/integrity/ima/ima_crypto.c
··· 411 411 loff_t i_size; 412 412 int rc; 413 413 struct file *f = file; 414 - bool new_file_instance = false, modified_flags = false; 414 + bool new_file_instance = false, modified_mode = false; 415 415 416 416 /* 417 417 * For consistency, fail file's opened with the O_DIRECT flag on ··· 431 431 f = dentry_open(&file->f_path, flags, file->f_cred); 432 432 if (IS_ERR(f)) { 433 433 /* 434 - * Cannot open the file again, lets modify f_flags 434 + * Cannot open the file again, lets modify f_mode 435 435 * of original and continue 436 436 */ 437 437 pr_info_ratelimited("Unable to reopen file for reading.\n"); 438 438 f = file; 439 - f->f_flags |= FMODE_READ; 440 - modified_flags = true; 439 + f->f_mode |= FMODE_READ; 440 + modified_mode = true; 441 441 } else { 442 442 new_file_instance = true; 443 443 } ··· 455 455 out: 456 456 if (new_file_instance) 457 457 fput(f); 458 - else if (modified_flags) 459 - f->f_flags &= ~FMODE_READ; 458 + else if (modified_mode) 459 + f->f_mode &= ~FMODE_READ; 460 460 return rc; 461 461 } 462 462
+1 -2
security/integrity/ima/ima_fs.c
··· 338 338 integrity_audit_msg(AUDIT_INTEGRITY_STATUS, NULL, NULL, 339 339 "policy_update", "signed policy required", 340 340 1, 0); 341 - if (ima_appraise & IMA_APPRAISE_ENFORCE) 342 - result = -EACCES; 341 + result = -EACCES; 343 342 } else { 344 343 result = ima_parse_add_rule(data); 345 344 }
+14 -2
security/security.c
··· 1965 1965 1966 1966 int security_secid_to_secctx(u32 secid, char **secdata, u32 *seclen) 1967 1967 { 1968 - return call_int_hook(secid_to_secctx, -EOPNOTSUPP, secid, secdata, 1969 - seclen); 1968 + struct security_hook_list *hp; 1969 + int rc; 1970 + 1971 + /* 1972 + * Currently, only one LSM can implement secid_to_secctx (i.e this 1973 + * LSM hook is not "stackable"). 1974 + */ 1975 + hlist_for_each_entry(hp, &security_hook_heads.secid_to_secctx, list) { 1976 + rc = hp->hook.secid_to_secctx(secid, secdata, seclen); 1977 + if (rc != LSM_RET_DEFAULT(secid_to_secctx)) 1978 + return rc; 1979 + } 1980 + 1981 + return LSM_RET_DEFAULT(secid_to_secctx); 1970 1982 } 1971 1983 EXPORT_SYMBOL(security_secid_to_secctx); 1972 1984
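The `security_secid_to_secctx()` change open-codes the hook walk so a registered LSM's result can be told apart from the stacked default: return the first hook result that differs from the sentinel, else the sentinel itself. Sketched generically (names and the hook-array plumbing are illustrative, not the LSM infrastructure):

```c
#include <assert.h>
#include <stddef.h>

#define RET_DEFAULT (-95)	/* -EOPNOTSUPP, the hook's default */

typedef int (*secctx_hook_t)(unsigned int secid);

/* Walk registered hooks; the first non-default result wins, since
 * only one LSM may implement this hook (it is not stackable). */
static int call_secid_to_secctx(secctx_hook_t *hooks, int n,
				unsigned int secid)
{
	int i, rc;

	for (i = 0; i < n; i++) {
		rc = hooks[i](secid);
		if (rc != RET_DEFAULT)
			return rc;
	}
	return RET_DEFAULT;
}

static int hook_declines(unsigned int secid)
{
	(void)secid;
	return RET_DEFAULT;	/* this LSM doesn't implement the hook */
}

static int hook_handles(unsigned int secid)
{
	(void)secid;
	return 0;		/* success from the implementing LSM */
}

static int demo_unhandled(void)
{
	secctx_hook_t h[] = { hook_declines };

	return call_secid_to_secctx(h, 1, 42);
}

static int demo_handled(void)
{
	secctx_hook_t h[] = { hook_declines, hook_handles };

	return call_secid_to_secctx(h, 2, 42);
}
```

The point of the fix is that a hook legitimately returning the default value no longer short-circuits the walk the way the old `call_int_hook()` form did.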
+1
sound/core/pcm_lib.c
··· 433 433 434 434 no_delta_check: 435 435 if (runtime->status->hw_ptr == new_hw_ptr) { 436 + runtime->hw_ptr_jiffies = curr_jiffies; 436 437 update_audio_tstamp(substream, &curr_tstamp, &audio_tstamp); 437 438 return 0; 438 439 }
+4
sound/pci/hda/patch_realtek.c
··· 2457 2457 SND_PCI_QUIRK(0x1458, 0xa002, "Gigabyte EP45-DS3/Z87X-UD3H", ALC889_FIXUP_FRONT_HP_NO_PRESENCE), 2458 2458 SND_PCI_QUIRK(0x1458, 0xa0b8, "Gigabyte AZ370-Gaming", ALC1220_FIXUP_GB_DUAL_CODECS), 2459 2459 SND_PCI_QUIRK(0x1458, 0xa0cd, "Gigabyte X570 Aorus Master", ALC1220_FIXUP_CLEVO_P950), 2460 + SND_PCI_QUIRK(0x1458, 0xa0ce, "Gigabyte X570 Aorus Xtreme", ALC1220_FIXUP_CLEVO_P950), 2460 2461 SND_PCI_QUIRK(0x1462, 0x1228, "MSI-GP63", ALC1220_FIXUP_CLEVO_P950), 2461 2462 SND_PCI_QUIRK(0x1462, 0x1275, "MSI-GL63", ALC1220_FIXUP_CLEVO_P950), 2462 2463 SND_PCI_QUIRK(0x1462, 0x1276, "MSI-GL73", ALC1220_FIXUP_CLEVO_P950), ··· 2473 2472 SND_PCI_QUIRK(0x1558, 0x97e1, "Clevo P970[ER][CDFN]", ALC1220_FIXUP_CLEVO_P950), 2474 2473 SND_PCI_QUIRK(0x1558, 0x65d1, "Clevo PB51[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS), 2475 2474 SND_PCI_QUIRK(0x1558, 0x67d1, "Clevo PB71[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS), 2475 + SND_PCI_QUIRK(0x1558, 0x50d3, "Clevo PC50[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS), 2476 + SND_PCI_QUIRK(0x1558, 0x70d1, "Clevo PC70[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS), 2477 + SND_PCI_QUIRK(0x1558, 0x7714, "Clevo X170", ALC1220_FIXUP_CLEVO_PB51ED_PINS), 2476 2478 SND_PCI_QUIRK_VENDOR(0x1558, "Clevo laptop", ALC882_FIXUP_EAPD), 2477 2479 SND_PCI_QUIRK(0x161f, 0x2054, "Medion laptop", ALC883_FIXUP_EAPD), 2478 2480 SND_PCI_QUIRK(0x17aa, 0x3a0d, "Lenovo Y530", ALC882_FIXUP_LENOVO_Y530),
+2 -1
sound/pci/ice1712/ice1712.c
··· 2332 2332 pci_write_config_byte(ice->pci, 0x61, ice->eeprom.data[ICE_EEP1_ACLINK]); 2333 2333 pci_write_config_byte(ice->pci, 0x62, ice->eeprom.data[ICE_EEP1_I2SID]); 2334 2334 pci_write_config_byte(ice->pci, 0x63, ice->eeprom.data[ICE_EEP1_SPDIF]); 2335 - if (ice->eeprom.subvendor != ICE1712_SUBDEVICE_STDSP24) { 2335 + if (ice->eeprom.subvendor != ICE1712_SUBDEVICE_STDSP24 && 2336 + ice->eeprom.subvendor != ICE1712_SUBDEVICE_STAUDIO_ADCIII) { 2336 2337 ice->gpio.write_mask = ice->eeprom.gpiomask; 2337 2338 ice->gpio.direction = ice->eeprom.gpiodir; 2338 2339 snd_ice1712_write(ice, ICE1712_IREG_GPIO_WRITE_MASK,
+12 -1
tools/testing/selftests/bpf/prog_tests/mmap.c
··· 19 19 const size_t map_sz = roundup_page(sizeof(struct map_data)); 20 20 const int zero = 0, one = 1, two = 2, far = 1500; 21 21 const long page_size = sysconf(_SC_PAGE_SIZE); 22 - int err, duration = 0, i, data_map_fd, data_map_id, tmp_fd; 22 + int err, duration = 0, i, data_map_fd, data_map_id, tmp_fd, rdmap_fd; 23 23 struct bpf_map *data_map, *bss_map; 24 24 void *bss_mmaped = NULL, *map_mmaped = NULL, *tmp1, *tmp2; 25 25 struct test_mmap__bss *bss_data; ··· 36 36 bss_map = skel->maps.bss; 37 37 data_map = skel->maps.data_map; 38 38 data_map_fd = bpf_map__fd(data_map); 39 + 40 + rdmap_fd = bpf_map__fd(skel->maps.rdonly_map); 41 + tmp1 = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, rdmap_fd, 0); 42 + if (CHECK(tmp1 != MAP_FAILED, "rdonly_write_mmap", "unexpected success\n")) { 43 + munmap(tmp1, 4096); 44 + goto cleanup; 45 + } 46 + /* now double-check if it's mmap()'able at all */ 47 + tmp1 = mmap(NULL, 4096, PROT_READ, MAP_SHARED, rdmap_fd, 0); 48 + if (CHECK(tmp1 == MAP_FAILED, "rdonly_read_mmap", "failed: %d\n", errno)) 49 + goto cleanup; 39 50 40 51 /* get map's ID */ 41 52 memset(&map_info, 0, map_info_sz);
+8
tools/testing/selftests/bpf/progs/test_mmap.c
··· 9 9 10 10 struct { 11 11 __uint(type, BPF_MAP_TYPE_ARRAY); 12 + __uint(max_entries, 4096); 13 + __uint(map_flags, BPF_F_MMAPABLE | BPF_F_RDONLY_PROG); 14 + __type(key, __u32); 15 + __type(value, char); 16 + } rdonly_map SEC(".maps"); 17 + 18 + struct { 19 + __uint(type, BPF_MAP_TYPE_ARRAY); 12 20 __uint(max_entries, 512 * 4); /* at least 4 pages of data */ 13 21 __uint(map_flags, BPF_F_MMAPABLE); 14 22 __type(key, __u32);
+1 -1
tools/testing/selftests/drivers/net/mlxsw/qos_mc_aware.sh
··· 300 300 local i 301 301 302 302 for ((i = 0; i < attempts; ++i)); do 303 - if $ARPING -c 1 -I $h1 -b 192.0.2.66 -q -w 0.1; then 303 + if $ARPING -c 1 -I $h1 -b 192.0.2.66 -q -w 1; then 304 304 ((passes++)) 305 305 fi 306 306
+1
tools/testing/selftests/kvm/Makefile
··· 54 54 TEST_GEN_PROGS_x86_64 += x86_64/vmx_set_nested_state_test 55 55 TEST_GEN_PROGS_x86_64 += x86_64/vmx_tsc_adjust_test 56 56 TEST_GEN_PROGS_x86_64 += x86_64/xss_msr_test 57 + TEST_GEN_PROGS_x86_64 += x86_64/debug_regs 57 58 TEST_GEN_PROGS_x86_64 += clear_dirty_log_test 58 59 TEST_GEN_PROGS_x86_64 += demand_paging_test 59 60 TEST_GEN_PROGS_x86_64 += dirty_log_test
+2
tools/testing/selftests/kvm/include/kvm_util.h
··· 143 143 void vcpu_run(struct kvm_vm *vm, uint32_t vcpuid); 144 144 int _vcpu_run(struct kvm_vm *vm, uint32_t vcpuid); 145 145 void vcpu_run_complete_io(struct kvm_vm *vm, uint32_t vcpuid); 146 + void vcpu_set_guest_debug(struct kvm_vm *vm, uint32_t vcpuid, 147 + struct kvm_guest_debug *debug); 146 148 void vcpu_set_mp_state(struct kvm_vm *vm, uint32_t vcpuid, 147 149 struct kvm_mp_state *mp_state); 148 150 void vcpu_regs_get(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_regs *regs);
+9
tools/testing/selftests/kvm/lib/kvm_util.c
··· 1201 1201 ret, errno); 1202 1202 } 1203 1203 1204 + void vcpu_set_guest_debug(struct kvm_vm *vm, uint32_t vcpuid, 1205 + struct kvm_guest_debug *debug) 1206 + { 1207 + struct vcpu *vcpu = vcpu_find(vm, vcpuid); 1208 + int ret = ioctl(vcpu->fd, KVM_SET_GUEST_DEBUG, debug); 1209 + 1210 + TEST_ASSERT(ret == 0, "KVM_SET_GUEST_DEBUG failed: %d", ret); 1211 + } 1212 + 1204 1213 /* 1205 1214 * VM VCPU Set MP State 1206 1215 *
+202
tools/testing/selftests/kvm/x86_64/debug_regs.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * KVM guest debug register tests 4 + * 5 + * Copyright (C) 2020, Red Hat, Inc. 6 + */ 7 + #include <stdio.h> 8 + #include <string.h> 9 + #include "kvm_util.h" 10 + #include "processor.h" 11 + 12 + #define VCPU_ID 0 13 + 14 + #define DR6_BD (1 << 13) 15 + #define DR7_GD (1 << 13) 16 + 17 + /* For testing data access debug BP */ 18 + uint32_t guest_value; 19 + 20 + extern unsigned char sw_bp, hw_bp, write_data, ss_start, bd_start; 21 + 22 + static void guest_code(void) 23 + { 24 + /* 25 + * Software BP tests. 26 + * 27 + * NOTE: sw_bp need to be before the cmd here, because int3 is an 28 + * exception rather than a normal trap for KVM_SET_GUEST_DEBUG (we 29 + * capture it using the vcpu exception bitmap). 30 + */ 31 + asm volatile("sw_bp: int3"); 32 + 33 + /* Hardware instruction BP test */ 34 + asm volatile("hw_bp: nop"); 35 + 36 + /* Hardware data BP test */ 37 + asm volatile("mov $1234,%%rax;\n\t" 38 + "mov %%rax,%0;\n\t write_data:" 39 + : "=m" (guest_value) : : "rax"); 40 + 41 + /* Single step test, covers 2 basic instructions and 2 emulated */ 42 + asm volatile("ss_start: " 43 + "xor %%rax,%%rax\n\t" 44 + "cpuid\n\t" 45 + "movl $0x1a0,%%ecx\n\t" 46 + "rdmsr\n\t" 47 + : : : "rax", "ecx"); 48 + 49 + /* DR6.BD test */ 50 + asm volatile("bd_start: mov %%dr0, %%rax" : : : "rax"); 51 + GUEST_DONE(); 52 + } 53 + 54 + #define CLEAR_DEBUG() memset(&debug, 0, sizeof(debug)) 55 + #define APPLY_DEBUG() vcpu_set_guest_debug(vm, VCPU_ID, &debug) 56 + #define CAST_TO_RIP(v) ((unsigned long long)&(v)) 57 + #define SET_RIP(v) do { \ 58 + vcpu_regs_get(vm, VCPU_ID, &regs); \ 59 + regs.rip = (v); \ 60 + vcpu_regs_set(vm, VCPU_ID, &regs); \ 61 + } while (0) 62 + #define MOVE_RIP(v) SET_RIP(regs.rip + (v)); 63 + 64 + int main(void) 65 + { 66 + struct kvm_guest_debug debug; 67 + unsigned long long target_dr6, target_rip; 68 + struct kvm_regs regs; 69 + struct kvm_run *run; 70 + struct kvm_vm *vm; 71 + struct ucall uc; 72 + 
uint64_t cmd; 73 + int i; 74 + /* Instruction lengths starting at ss_start */ 75 + int ss_size[4] = { 76 + 3, /* xor */ 77 + 2, /* cpuid */ 78 + 5, /* mov */ 79 + 2, /* rdmsr */ 80 + }; 81 + 82 + if (!kvm_check_cap(KVM_CAP_SET_GUEST_DEBUG)) { 83 + print_skip("KVM_CAP_SET_GUEST_DEBUG not supported"); 84 + return 0; 85 + } 86 + 87 + vm = vm_create_default(VCPU_ID, 0, guest_code); 88 + vcpu_set_cpuid(vm, VCPU_ID, kvm_get_supported_cpuid()); 89 + run = vcpu_state(vm, VCPU_ID); 90 + 91 + /* Test software BPs - int3 */ 92 + CLEAR_DEBUG(); 93 + debug.control = KVM_GUESTDBG_ENABLE | KVM_GUESTDBG_USE_SW_BP; 94 + APPLY_DEBUG(); 95 + vcpu_run(vm, VCPU_ID); 96 + TEST_ASSERT(run->exit_reason == KVM_EXIT_DEBUG && 97 + run->debug.arch.exception == BP_VECTOR && 98 + run->debug.arch.pc == CAST_TO_RIP(sw_bp), 99 + "INT3: exit %d exception %d rip 0x%llx (should be 0x%llx)", 100 + run->exit_reason, run->debug.arch.exception, 101 + run->debug.arch.pc, CAST_TO_RIP(sw_bp)); 102 + MOVE_RIP(1); 103 + 104 + /* Test instruction HW BP over DR[0-3] */ 105 + for (i = 0; i < 4; i++) { 106 + CLEAR_DEBUG(); 107 + debug.control = KVM_GUESTDBG_ENABLE | KVM_GUESTDBG_USE_HW_BP; 108 + debug.arch.debugreg[i] = CAST_TO_RIP(hw_bp); 109 + debug.arch.debugreg[7] = 0x400 | (1UL << (2*i+1)); 110 + APPLY_DEBUG(); 111 + vcpu_run(vm, VCPU_ID); 112 + target_dr6 = 0xffff0ff0 | (1UL << i); 113 + TEST_ASSERT(run->exit_reason == KVM_EXIT_DEBUG && 114 + run->debug.arch.exception == DB_VECTOR && 115 + run->debug.arch.pc == CAST_TO_RIP(hw_bp) && 116 + run->debug.arch.dr6 == target_dr6, 117 + "INS_HW_BP (DR%d): exit %d exception %d rip 0x%llx " 118 + "(should be 0x%llx) dr6 0x%llx (should be 0x%llx)", 119 + i, run->exit_reason, run->debug.arch.exception, 120 + run->debug.arch.pc, CAST_TO_RIP(hw_bp), 121 + run->debug.arch.dr6, target_dr6); 122 + } 123 + /* Skip "nop" */ 124 + MOVE_RIP(1); 125 + 126 + /* Test data access HW BP over DR[0-3] */ 127 + for (i = 0; i < 4; i++) { 128 + CLEAR_DEBUG(); 129 + debug.control = 
KVM_GUESTDBG_ENABLE | KVM_GUESTDBG_USE_HW_BP; 130 + debug.arch.debugreg[i] = CAST_TO_RIP(guest_value); 131 + debug.arch.debugreg[7] = 0x00000400 | (1UL << (2*i+1)) | 132 + (0x000d0000UL << (4*i)); 133 + APPLY_DEBUG(); 134 + vcpu_run(vm, VCPU_ID); 135 + target_dr6 = 0xffff0ff0 | (1UL << i); 136 + TEST_ASSERT(run->exit_reason == KVM_EXIT_DEBUG && 137 + run->debug.arch.exception == DB_VECTOR && 138 + run->debug.arch.pc == CAST_TO_RIP(write_data) && 139 + run->debug.arch.dr6 == target_dr6, 140 + "DATA_HW_BP (DR%d): exit %d exception %d rip 0x%llx " 141 + "(should be 0x%llx) dr6 0x%llx (should be 0x%llx)", 142 + i, run->exit_reason, run->debug.arch.exception, 143 + run->debug.arch.pc, CAST_TO_RIP(write_data), 144 + run->debug.arch.dr6, target_dr6); 145 + /* Rollback the 7-byte "mov" */ 146 + MOVE_RIP(-7); 147 + } 148 + /* Skip the 7-byte "mov" */ 149 + MOVE_RIP(7); 150 + 151 + /* Test single step */ 152 + target_rip = CAST_TO_RIP(ss_start); 153 + target_dr6 = 0xffff4ff0ULL; 154 + vcpu_regs_get(vm, VCPU_ID, &regs); 155 + for (i = 0; i < (sizeof(ss_size) / sizeof(ss_size[0])); i++) { 156 + target_rip += ss_size[i]; 157 + CLEAR_DEBUG(); 158 + debug.control = KVM_GUESTDBG_ENABLE | KVM_GUESTDBG_SINGLESTEP; 159 + debug.arch.debugreg[7] = 0x00000400; 160 + APPLY_DEBUG(); 161 + vcpu_run(vm, VCPU_ID); 162 + TEST_ASSERT(run->exit_reason == KVM_EXIT_DEBUG && 163 + run->debug.arch.exception == DB_VECTOR && 164 + run->debug.arch.pc == target_rip && 165 + run->debug.arch.dr6 == target_dr6, 166 + "SINGLE_STEP[%d]: exit %d exception %d rip 0x%llx " 167 + "(should be 0x%llx) dr6 0x%llx (should be 0x%llx)", 168 + i, run->exit_reason, run->debug.arch.exception, 169 + run->debug.arch.pc, target_rip, run->debug.arch.dr6, 170 + target_dr6); 171 + } 172 + 173 + /* Finally test global disable */ 174 + CLEAR_DEBUG(); 175 + debug.control = KVM_GUESTDBG_ENABLE | KVM_GUESTDBG_USE_HW_BP; 176 + debug.arch.debugreg[7] = 0x400 | DR7_GD; 177 + APPLY_DEBUG(); 178 + vcpu_run(vm, VCPU_ID); 179 + 
target_dr6 = 0xffff0ff0 | DR6_BD; 180 + TEST_ASSERT(run->exit_reason == KVM_EXIT_DEBUG && 181 + run->debug.arch.exception == DB_VECTOR && 182 + run->debug.arch.pc == CAST_TO_RIP(bd_start) && 183 + run->debug.arch.dr6 == target_dr6, 184 + "DR7.GD: exit %d exception %d rip 0x%llx " 185 + "(should be 0x%llx) dr6 0x%llx (should be 0x%llx)", 186 + run->exit_reason, run->debug.arch.exception, 187 + run->debug.arch.pc, CAST_TO_RIP(bd_start), run->debug.arch.dr6, 188 + target_dr6); 189 + 190 + /* Disable all debug controls, run to the end */ 191 + CLEAR_DEBUG(); 192 + APPLY_DEBUG(); 193 + 194 + vcpu_run(vm, VCPU_ID); 195 + TEST_ASSERT(run->exit_reason == KVM_EXIT_IO, "KVM_EXIT_IO"); 196 + cmd = get_ucall(vm, VCPU_ID, &uc); 197 + TEST_ASSERT(cmd == UCALL_DONE, "UCALL_DONE"); 198 + 199 + kvm_vm_free(vm); 200 + 201 + return 0; 202 + }
+1
tools/testing/selftests/vm/.gitignore
··· 6 6 thuge-gen 7 7 compaction_test 8 8 mlock2-tests 9 + mremap_dontunmap 9 10 on-fault-limit 10 11 transhuge-stress 11 12 userfaultfd
-2
tools/testing/selftests/vm/write_to_hugetlbfs.c
··· 74 74 int write = 0; 75 75 int reserve = 1; 76 76 77 - unsigned long i; 78 - 79 77 if (signal(SIGINT, sig_handler) == SIG_ERR) 80 78 err(1, "\ncan't catch SIGINT\n"); 81 79
+1 -1
tools/testing/selftests/wireguard/qemu/Makefile
··· 44 44 $(eval $(call tar_download,MUSL,musl,1.2.0,.tar.gz,https://musl.libc.org/releases/,c6de7b191139142d3f9a7b5b702c9cae1b5ee6e7f57e582da9328629408fd4e8)) 45 45 $(eval $(call tar_download,IPERF,iperf,3.7,.tar.gz,https://downloads.es.net/pub/iperf/,d846040224317caf2f75c843d309a950a7db23f9b44b94688ccbe557d6d1710c)) 46 46 $(eval $(call tar_download,BASH,bash,5.0,.tar.gz,https://ftp.gnu.org/gnu/bash/,b4a80f2ac66170b2913efbfb9f2594f1f76c7b1afd11f799e22035d63077fb4d)) 47 - $(eval $(call tar_download,IPROUTE2,iproute2,5.4.0,.tar.xz,https://www.kernel.org/pub/linux/utils/net/iproute2/,fe97aa60a0d4c5ac830be18937e18dc3400ca713a33a89ad896ff1e3d46086ae)) 47 + $(eval $(call tar_download,IPROUTE2,iproute2,5.6.0,.tar.xz,https://www.kernel.org/pub/linux/utils/net/iproute2/,1b5b0e25ce6e23da7526ea1da044e814ad85ba761b10dd29c2b027c056b04692)) 48 48 $(eval $(call tar_download,IPTABLES,iptables,1.8.4,.tar.bz2,https://www.netfilter.org/projects/iptables/files/,993a3a5490a544c2cbf2ef15cf7e7ed21af1845baf228318d5c36ef8827e157c)) 49 49 $(eval $(call tar_download,NMAP,nmap,7.80,.tar.bz2,https://nmap.org/dist/,fcfa5a0e42099e12e4bf7a68ebe6fde05553383a682e816a7ec9256ab4773faa)) 50 50 $(eval $(call tar_download,IPUTILS,iputils,s20190709,.tar.gz,https://github.com/iputils/iputils/archive/s20190709.tar.gz/#,a15720dd741d7538dd2645f9f516d193636ae4300ff7dbc8bfca757bf166490a))
+11 -3
virt/kvm/kvm_main.c
··· 259 259 } 260 260 261 261 bool kvm_make_vcpus_request_mask(struct kvm *kvm, unsigned int req, 262 + struct kvm_vcpu *except, 262 263 unsigned long *vcpu_bitmap, cpumask_var_t tmp) 263 264 { 264 265 int i, cpu, me; ··· 269 268 me = get_cpu(); 270 269 271 270 kvm_for_each_vcpu(i, vcpu, kvm) { 272 - if (vcpu_bitmap && !test_bit(i, vcpu_bitmap)) 271 + if ((vcpu_bitmap && !test_bit(i, vcpu_bitmap)) || 272 + vcpu == except) 273 273 continue; 274 274 275 275 kvm_make_request(req, vcpu); ··· 290 288 return called; 291 289 } 292 290 293 - bool kvm_make_all_cpus_request(struct kvm *kvm, unsigned int req) 291 + bool kvm_make_all_cpus_request_except(struct kvm *kvm, unsigned int req, 292 + struct kvm_vcpu *except) 294 293 { 295 294 cpumask_var_t cpus; 296 295 bool called; 297 296 298 297 zalloc_cpumask_var(&cpus, GFP_ATOMIC); 299 298 300 - called = kvm_make_vcpus_request_mask(kvm, req, NULL, cpus); 299 + called = kvm_make_vcpus_request_mask(kvm, req, except, NULL, cpus); 301 300 302 301 free_cpumask_var(cpus); 303 302 return called; 303 + } 304 + 305 + bool kvm_make_all_cpus_request(struct kvm *kvm, unsigned int req) 306 + { 307 + return kvm_make_all_cpus_request_except(kvm, req, NULL); 304 308 } 305 309 306 310 #ifndef CONFIG_HAVE_KVM_ARCH_TLB_FLUSH_ALL