···
 			causing system reset or hang due to sending
 			INIT from AP to BSP.

-	disable_counter_freezing [HW]
+	perf_v4_pmi=	[X86,INTEL]
+			Format: <bool>
 			Disable Intel PMU counter freezing feature.
 			The feature only exists starting from
 			Arch Perfmon v4 (Skylake and newer).
···
 			before loading.
 			See Documentation/blockdev/ramdisk.txt.

+	psi=		[KNL] Enable or disable pressure stall information
+			tracking.
+			Format: <bool>
+
 	psmouse.proto=	[HW,MOUSE] Highest PS2 mouse protocol extension to
 			probe for; one of (bare|imps|exps|lifebook|any).
 	psmouse.rate=	[HW,MOUSE] Set desired mouse report rate, in reports
···
 	spectre_v2=	[X86] Control mitigation of Spectre variant 2
 			(indirect branch speculation) vulnerability.
+			The default operation protects the kernel from
+			user space attacks.

-			on   - unconditionally enable
-			off  - unconditionally disable
+			on   - unconditionally enable, implies
+			       spectre_v2_user=on
+			off  - unconditionally disable, implies
+			       spectre_v2_user=off
 			auto - kernel detects whether your CPU model is
 			       vulnerable
···
 			CONFIG_RETPOLINE configuration option, and the
 			compiler with which the kernel was built.

+			Selecting 'on' will also enable the mitigation
+			against user space to user space task attacks.
+
+			Selecting 'off' will disable both the kernel and
+			the user space protections.
+
 			Specific mitigations can also be selected manually:

 			retpoline - replace indirect branches
···
 			Not specifying this option is equivalent to
 			spectre_v2=auto.
+
+	spectre_v2_user=
+			[X86] Control mitigation of Spectre variant 2
+			(indirect branch speculation) vulnerability between
+			user space tasks
+
+			on      - Unconditionally enable mitigations. Is
+			          enforced by spectre_v2=on
+
+			off     - Unconditionally disable mitigations. Is
+			          enforced by spectre_v2=off
+
+			prctl   - Indirect branch speculation is enabled,
+			          but mitigation can be enabled via prctl
+			          per thread. The mitigation control state
+			          is inherited on fork.
+
+			prctl,ibpb
+			        - Like "prctl" above, but only STIBP is
+			          controlled per thread. IBPB is issued
+			          always when switching between different user
+			          space processes.
+
+			seccomp
+			        - Same as "prctl" above, but all seccomp
+			          threads will enable the mitigation unless
+			          they explicitly opt out.
+
+			seccomp,ibpb
+			        - Like "seccomp" above, but only STIBP is
+			          controlled per thread. IBPB is issued
+			          always when switching between different
+			          user space processes.
+
+			auto    - Kernel selects the mitigation depending on
+			          the available CPU features and vulnerability.
+
+			Default mitigation:
+			If CONFIG_SECCOMP=y then "seccomp", otherwise "prctl"
+
+			Not specifying this option is equivalent to
+			spectre_v2_user=auto.

 	spec_store_bypass_disable=
 			[HW] Control Speculative Store Bypass (SSB) Disable mitigation
Documentation/arm64/silicon-errata.txt | +1
···
 | ARM            | Cortex-A73      | #858921         | ARM64_ERRATUM_858921        |
 | ARM            | Cortex-A55      | #1024718        | ARM64_ERRATUM_1024718       |
 | ARM            | Cortex-A76      | #1188873        | ARM64_ERRATUM_1188873       |
+| ARM            | Cortex-A76      | #1286807        | ARM64_ERRATUM_1286807       |
 | ARM            | MMU-500         | #841119,#826419 | N/A                         |
 |                |                 |                 |                             |
 | Cavium         | ThunderX ITS    | #22375, #24313  | CAVIUM_ERRATUM_22375        |
···
 		"ref" for 19.2 MHz ref clk,
 		"com_aux" for phy common block aux clock,
 		"ref_aux" for phy reference aux clock,
+
+		For "qcom,ipq8074-qmp-pcie-phy": no clocks are listed.
 		For "qcom,msm8996-qmp-pcie-phy" must contain:
 			"aux", "cfg_ahb", "ref".
 		For "qcom,msm8996-qmp-usb3-phy" must contain:
 			"aux", "cfg_ahb", "ref".
-		For "qcom,qmp-v3-usb3-phy" must contain:
+		For "qcom,sdm845-qmp-usb3-phy" must contain:
 			"aux", "cfg_ahb", "ref", "com_aux".
+		For "qcom,sdm845-qmp-usb3-uni-phy" must contain:
+			"aux", "cfg_ahb", "ref", "com_aux".
+		For "qcom,sdm845-qmp-ufs-phy" must contain:
+			"ref", "ref_aux".

 - resets: a list of phandles and reset controller specifier pairs,
 	   one for each entry in reset-names.
 - reset-names: "phy" for reset of phy block,
 		"common" for phy common block reset,
-		"cfg" for phy's ahb cfg block reset (Optional).
-		For "qcom,msm8996-qmp-pcie-phy" must contain:
-		 "phy", "common", "cfg".
-		For "qcom,msm8996-qmp-usb3-phy" must contain
-		 "phy", "common".
+		"cfg" for phy's ahb cfg block reset.
+
 		For "qcom,ipq8074-qmp-pcie-phy" must contain:
-		 "phy", "common".
+		 "phy", "common".
+		For "qcom,msm8996-qmp-pcie-phy" must contain:
+		 "phy", "common", "cfg".
+		For "qcom,msm8996-qmp-usb3-phy" must contain
+		 "phy", "common".
+		For "qcom,sdm845-qmp-usb3-phy" must contain:
+		 "phy", "common".
+		For "qcom,sdm845-qmp-usb3-uni-phy" must contain:
+		 "phy", "common".
+		For "qcom,sdm845-qmp-ufs-phy": no resets are listed.

 - vdda-phy-supply: Phandle to a regulator supply to PHY core block.
 - vdda-pll-supply: Phandle to 1.8V regulator supply to PHY refclk pll block.
···
 - #phy-cells: must be 0

+Required properties for child node of pcie and usb3 qmp phys:
 - clocks: a list of phandles and clock-specifier pairs,
 	   one for each entry in clock-names.
-- clock-names: Must contain following for pcie and usb qmp phys:
+ - clock-names: Must contain following:
 		"pipe<lane-number>" for pipe clock specific to each lane.
 - clock-output-names: Name of the PHY clock that will be the parent for
 		       the above pipe clock.
···
 		(or)
 		"pcie20_phy1_pipe_clk"

+Required properties for child node of PHYs with lane reset, AKA:
+	"qcom,msm8996-qmp-pcie-phy"
 - resets: a list of phandles and reset controller specifier pairs,
 	   one for each entry in reset-names.
-- reset-names: Must contain following for pcie qmp phys:
+ - reset-names: Must contain following:
 		"lane<lane-number>" for reset specific to each lane.

Example:
···
Required properties:
 - compatible: should be "socionext,uniphier-scssi"
 - reg: address and length of the spi master registers
- - #address-cells: must be <1>, see spi-bus.txt
- - #size-cells: must be <0>, see spi-bus.txt
- - clocks: A phandle to the clock for the device.
- - resets: A phandle to the reset control for the device.
+ - interrupts: a single interrupt specifier
+ - pinctrl-names: should be "default"
+ - pinctrl-0: pin control state for the default mode
+ - clocks: a phandle to the clock for the device
+ - resets: a phandle to the reset control for the device

Example:

	spi0: spi@54006000 {
		compatible = "socionext,uniphier-scssi";
		reg = <0x54006000 0x100>;
-		#address-cells = <1>;
-		#size-cells = <0>;
+		interrupts = <0 39 4>;
+		pinctrl-names = "default";
+		pinctrl-0 = <&pinctrl_spi0>;
		clocks = <&peri_clk 11>;
		resets = <&peri_rst 11>;
	};
···
 		to struct boot_params for loading bzImage and ramdisk
 		above 4G in 64bit.

-Protocol 2.13:	(Kernel 3.14) Support 32- and 64-bit flags being set in
-		xloadflags to support booting a 64-bit kernel from 32-bit
-		EFI
-
-Protocol 2.14:	(Kernel 4.20) Added acpi_rsdp_addr holding the physical
-		address of the ACPI RSDP table.
-		The bootloader updates version with:
-		0x8000 | min(kernel-version, bootloader-version)
-		kernel-version being the protocol version supported by
-		the kernel and bootloader-version the protocol version
-		supported by the bootloader.
-
 **** MEMORY LAYOUT

 The traditional memory map for the kernel loader, used for Image or
···
 0258/8	2.10+	pref_address	Preferred loading address
 0260/4	2.10+	init_size	Linear memory required during initialization
 0264/4	2.11+	handover_offset	Offset of handover entry point
-0268/8	2.14+	acpi_rsdp_addr	Physical address of RSDP table

 (1) For backwards compatibility, if the setup_sects field contains 0, the
     real value is 4.
···
 	Contains the magic number "HdrS" (0x53726448).

 Field name:	version
-Type:		modify
+Type:		read
 Offset/size:	0x206/2
 Protocol:	2.00+

 	Contains the boot protocol version, in (major << 8)+minor format,
 	e.g. 0x0204 for version 2.04, and 0x0a11 for a hypothetical version
 	10.17.
-
-	Up to protocol version 2.13 this information is only read by the
-	bootloader. From protocol version 2.14 onwards the bootloader will
-	write the used protocol version or-ed with 0x8000 to the field. The
-	used protocol version will be the minimum of the supported protocol
-	versions of the bootloader and the kernel.

 Field name:	realmode_swtch
 Type:		modify (optional)
···
 	handover protocol to boot the kernel should jump to this offset.

 	See EFI HANDOVER PROTOCOL below for more details.
-
-Field name:	acpi_rsdp_addr
-Type:		write
-Offset/size:	0x268/8
-Protocol:	2.14+
-
-	This field can be set by the boot loader to tell the kernel the
-	physical address of the ACPI RSDP table.
-
-	A value of 0 indicates the kernel should fall back to the standard
-	methods to locate the RSDP.


 **** THE IMAGE CHECKSUM
MAINTAINERS | +93 -33
···
 M:	Andy Gross <andy.gross@linaro.org>
 M:	David Brown <david.brown@linaro.org>
 L:	linux-arm-msm@vger.kernel.org
-L:	linux-soc@vger.kernel.org
 S:	Maintained
 F:	Documentation/devicetree/bindings/soc/qcom/
 F:	arch/arm/boot/dts/qcom-*.dts
···
 ATHEROS ATH5K WIRELESS DRIVER
 M:	Jiri Slaby <jirislaby@gmail.com>
 M:	Nick Kossifidis <mickflemm@gmail.com>
-M:	"Luis R. Rodriguez" <mcgrof@do-not-panic.com>
+M:	Luis Chamberlain <mcgrof@kernel.org>
 L:	linux-wireless@vger.kernel.org
 W:	http://wireless.kernel.org/en/users/Drivers/ath5k
 S:	Maintained
···
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git
 Q:	https://patchwork.ozlabs.org/project/netdev/list/?delegate=77147
 S:	Supported
-F:	arch/x86/net/bpf_jit*
+F:	arch/*/net/*
 F:	Documentation/networking/filter.txt
 F:	Documentation/bpf/
 F:	include/linux/bpf*
···
 F:	tools/bpf/
 F:	tools/lib/bpf/
 F:	tools/testing/selftests/bpf/
+
+BPF JIT for ARM
+M:	Shubham Bansal <illusionist.neo@gmail.com>
+L:	netdev@vger.kernel.org
+S:	Maintained
+F:	arch/arm/net/
+
+BPF JIT for ARM64
+M:	Daniel Borkmann <daniel@iogearbox.net>
+M:	Alexei Starovoitov <ast@kernel.org>
+M:	Zi Shen Lim <zlim.lnx@gmail.com>
+L:	netdev@vger.kernel.org
+S:	Supported
+F:	arch/arm64/net/
+
+BPF JIT for MIPS (32-BIT AND 64-BIT)
+M:	Paul Burton <paul.burton@mips.com>
+L:	netdev@vger.kernel.org
+S:	Maintained
+F:	arch/mips/net/
+
+BPF JIT for NFP NICs
+M:	Jakub Kicinski <jakub.kicinski@netronome.com>
+L:	netdev@vger.kernel.org
+S:	Supported
+F:	drivers/net/ethernet/netronome/nfp/bpf/
+
+BPF JIT for POWERPC (32-BIT AND 64-BIT)
+M:	Naveen N. Rao <naveen.n.rao@linux.ibm.com>
+M:	Sandipan Das <sandipan@linux.ibm.com>
+L:	netdev@vger.kernel.org
+S:	Maintained
+F:	arch/powerpc/net/
+
+BPF JIT for S390
+M:	Martin Schwidefsky <schwidefsky@de.ibm.com>
+M:	Heiko Carstens <heiko.carstens@de.ibm.com>
+L:	netdev@vger.kernel.org
+S:	Maintained
+F:	arch/s390/net/
+X:	arch/s390/net/pnet.c
+
+BPF JIT for SPARC (32-BIT AND 64-BIT)
+M:	David S. Miller <davem@davemloft.net>
+L:	netdev@vger.kernel.org
+S:	Maintained
+F:	arch/sparc/net/
+
+BPF JIT for X86 32-BIT
+M:	Wang YanQing <udknight@gmail.com>
+L:	netdev@vger.kernel.org
+S:	Maintained
+F:	arch/x86/net/bpf_jit_comp32.c
+
+BPF JIT for X86 64-BIT
+M:	Alexei Starovoitov <ast@kernel.org>
+M:	Daniel Borkmann <daniel@iogearbox.net>
+L:	netdev@vger.kernel.org
+S:	Supported
+F:	arch/x86/net/
+X:	arch/x86/net/bpf_jit_comp32.c

 BROADCOM B44 10/100 ETHERNET DRIVER
 M:	Michael Chan <michael.chan@broadcom.com>
···
 BROADCOM BCM47XX MIPS ARCHITECTURE
 M:	Hauke Mehrtens <hauke@hauke-m.de>
 M:	Rafał Miłecki <zajec5@gmail.com>
-L:	linux-mips@linux-mips.org
+L:	linux-mips@vger.kernel.org
 S:	Maintained
 F:	Documentation/devicetree/bindings/mips/brcm/
 F:	arch/mips/bcm47xx/*
···
 BROADCOM BCM5301X ARM ARCHITECTURE
 M:	Hauke Mehrtens <hauke@hauke-m.de>
 M:	Rafał Miłecki <zajec5@gmail.com>
-M:	Jon Mason <jonmason@broadcom.com>
 M:	bcm-kernel-feedback-list@broadcom.com
 L:	linux-arm-kernel@lists.infradead.org
 S:	Maintained
···
 BROADCOM BMIPS MIPS ARCHITECTURE
 M:	Kevin Cernekee <cernekee@gmail.com>
 M:	Florian Fainelli <f.fainelli@gmail.com>
-L:	linux-mips@linux-mips.org
+L:	linux-mips@vger.kernel.org
 T:	git git://github.com/broadcom/stblinux.git
 S:	Maintained
 F:	arch/mips/bmips/*
···
 BROADCOM IPROC ARM ARCHITECTURE
 M:	Ray Jui <rjui@broadcom.com>
 M:	Scott Branden <sbranden@broadcom.com>
-M:	Jon Mason <jonmason@broadcom.com>
 M:	bcm-kernel-feedback-list@broadcom.com
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 T:	git git://github.com/broadcom/cygnus-linux.git
···

 BROADCOM NVRAM DRIVER
 M:	Rafał Miłecki <zajec5@gmail.com>
-L:	linux-mips@linux-mips.org
+L:	linux-mips@vger.kernel.org
 S:	Maintained
 F:	drivers/firmware/broadcom/*
···

 DECSTATION PLATFORM SUPPORT
 M:	"Maciej W. Rozycki" <macro@linux-mips.org>
-L:	linux-mips@linux-mips.org
+L:	linux-mips@vger.kernel.org
 W:	http://www.linux-mips.org/wiki/DECstation
 S:	Maintained
 F:	arch/mips/dec/
···
 M:	Ralf Baechle <ralf@linux-mips.org>
 M:	David Daney <david.daney@cavium.com>
 L:	linux-edac@vger.kernel.org
-L:	linux-mips@linux-mips.org
+L:	linux-mips@vger.kernel.org
 S:	Supported
 F:	drivers/edac/octeon_edac*
···
 F:	tools/firewire/

 FIRMWARE LOADER (request_firmware)
-M:	Luis R. Rodriguez <mcgrof@kernel.org>
+M:	Luis Chamberlain <mcgrof@kernel.org>
 L:	linux-kernel@vger.kernel.org
 S:	Maintained
 F:	Documentation/firmware_class/
···

 IOC3 ETHERNET DRIVER
 M:	Ralf Baechle <ralf@linux-mips.org>
-L:	linux-mips@linux-mips.org
+L:	linux-mips@vger.kernel.org
 S:	Maintained
 F:	drivers/net/ethernet/sgi/ioc3-eth.c
···
 F:	Documentation/dev-tools/kselftest*

 KERNEL USERMODE HELPER
-M:	"Luis R. Rodriguez" <mcgrof@kernel.org>
+M:	Luis Chamberlain <mcgrof@kernel.org>
 L:	linux-kernel@vger.kernel.org
 S:	Maintained
 F:	kernel/umh.c
···

 KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)
 M:	James Hogan <jhogan@kernel.org>
-L:	linux-mips@linux-mips.org
+L:	linux-mips@vger.kernel.org
 S:	Supported
 F:	arch/mips/include/uapi/asm/kvm*
 F:	arch/mips/include/asm/kvm*
···
 F:	mm/kmemleak-test.c

 KMOD KERNEL MODULE LOADER - USERMODE HELPER
-M:	"Luis R. Rodriguez" <mcgrof@kernel.org>
+M:	Luis Chamberlain <mcgrof@kernel.org>
 L:	linux-kernel@vger.kernel.org
 S:	Maintained
 F:	kernel/kmod.c
···

 LANTIQ MIPS ARCHITECTURE
 M:	John Crispin <john@phrozen.org>
-L:	linux-mips@linux-mips.org
+L:	linux-mips@vger.kernel.org
 S:	Maintained
 F:	arch/mips/lantiq
 F:	drivers/soc/lantiq
···

 MARDUK (CREATOR CI40) DEVICE TREE SUPPORT
 M:	Rahul Bedarkar <rahulbedarkar89@gmail.com>
-L:	linux-mips@linux-mips.org
+L:	linux-mips@vger.kernel.org
 S:	Maintained
 F:	arch/mips/boot/dts/img/pistachio_marduk.dts
···

 MICROSEMI MIPS SOCS
 M:	Alexandre Belloni <alexandre.belloni@bootlin.com>
-L:	linux-mips@linux-mips.org
+L:	linux-mips@vger.kernel.org
 S:	Maintained
 F:	arch/mips/generic/board-ocelot.c
 F:	arch/mips/configs/generic/board-ocelot.config
···
 M:	Ralf Baechle <ralf@linux-mips.org>
 M:	Paul Burton <paul.burton@mips.com>
 M:	James Hogan <jhogan@kernel.org>
-L:	linux-mips@linux-mips.org
+L:	linux-mips@vger.kernel.org
 W:	http://www.linux-mips.org/
 T:	git git://git.linux-mips.org/pub/scm/ralf/linux.git
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux.git
···

 MIPS BOSTON DEVELOPMENT BOARD
 M:	Paul Burton <paul.burton@mips.com>
-L:	linux-mips@linux-mips.org
+L:	linux-mips@vger.kernel.org
 S:	Maintained
 F:	Documentation/devicetree/bindings/clock/img,boston-clock.txt
 F:	arch/mips/boot/dts/img/boston.dts
···

 MIPS GENERIC PLATFORM
 M:	Paul Burton <paul.burton@mips.com>
-L:	linux-mips@linux-mips.org
+L:	linux-mips@vger.kernel.org
 S:	Supported
 F:	Documentation/devicetree/bindings/power/mti,mips-cpc.txt
 F:	arch/mips/generic/
···

 MIPS/LOONGSON1 ARCHITECTURE
 M:	Keguang Zhang <keguang.zhang@gmail.com>
-L:	linux-mips@linux-mips.org
+L:	linux-mips@vger.kernel.org
 S:	Maintained
 F:	arch/mips/loongson32/
 F:	arch/mips/include/asm/mach-loongson32/
···

 MIPS/LOONGSON2 ARCHITECTURE
 M:	Jiaxun Yang <jiaxun.yang@flygoat.com>
-L:	linux-mips@linux-mips.org
+L:	linux-mips@vger.kernel.org
 S:	Maintained
 F:	arch/mips/loongson64/fuloong-2e/
 F:	arch/mips/loongson64/lemote-2f/
···

 MIPS/LOONGSON3 ARCHITECTURE
 M:	Huacai Chen <chenhc@lemote.com>
-L:	linux-mips@linux-mips.org
+L:	linux-mips@vger.kernel.org
 S:	Maintained
 F:	arch/mips/loongson64/
 F:	arch/mips/include/asm/mach-loongson64/
···

 MIPS RINT INSTRUCTION EMULATION
 M:	Aleksandar Markovic <aleksandar.markovic@mips.com>
-L:	linux-mips@linux-mips.org
+L:	linux-mips@vger.kernel.org
 S:	Supported
 F:	arch/mips/math-emu/sp_rint.c
 F:	arch/mips/math-emu/dp_rint.c
···

 ONION OMEGA2+ BOARD
 M:	Harvey Hunt <harveyhuntnexus@gmail.com>
-L:	linux-mips@linux-mips.org
+L:	linux-mips@vger.kernel.org
 S:	Maintained
 F:	arch/mips/boot/dts/ralink/omega2p.dts
···

 PISTACHIO SOC SUPPORT
 M:	James Hartley <james.hartley@sondrel.com>
-L:	linux-mips@linux-mips.org
+L:	linux-mips@vger.kernel.org
 S:	Odd Fixes
 F:	arch/mips/pistachio/
 F:	arch/mips/include/asm/mach-pistachio/
···
 F:	include/linux/printk.h

 PRISM54 WIRELESS DRIVER
-M:	"Luis R. Rodriguez" <mcgrof@gmail.com>
+M:	Luis Chamberlain <mcgrof@kernel.org>
 L:	linux-wireless@vger.kernel.org
 W:	http://wireless.kernel.org/en/users/Drivers/p54
 S:	Obsolete
···
 F:	fs/proc/
 F:	include/linux/proc_fs.h
 F:	tools/testing/selftests/proc/
+F:	Documentation/filesystems/proc.txt

 PROC SYSCTL
-M:	"Luis R. Rodriguez" <mcgrof@kernel.org>
+M:	Luis Chamberlain <mcgrof@kernel.org>
 M:	Kees Cook <keescook@chromium.org>
 L:	linux-kernel@vger.kernel.org
 L:	linux-fsdevel@vger.kernel.org
···

 RALINK MIPS ARCHITECTURE
 M:	John Crispin <john@phrozen.org>
-L:	linux-mips@linux-mips.org
+L:	linux-mips@vger.kernel.org
 S:	Maintained
 F:	arch/mips/ralink
···

 RANCHU VIRTUAL BOARD FOR MIPS
 M:	Miodrag Dinic <miodrag.dinic@mips.com>
-L:	linux-mips@linux-mips.org
+L:	linux-mips@vger.kernel.org
 S:	Supported
 F:	arch/mips/generic/board-ranchu.c
 F:	arch/mips/configs/generic/board-ranchu.config
···
 F:	Documentation/devicetree/bindings/sound/
 F:	Documentation/sound/soc/
 F:	sound/soc/
+F:	include/dt-bindings/sound/
 F:	include/sound/soc*

 SOUNDWIRE SUBSYSTEM
···
 TURBOCHANNEL SUBSYSTEM
 M:	"Maciej W. Rozycki" <macro@linux-mips.org>
 M:	Ralf Baechle <ralf@linux-mips.org>
-L:	linux-mips@linux-mips.org
+L:	linux-mips@vger.kernel.org
 Q:	http://patchwork.linux-mips.org/project/linux-mips/list/
 S:	Maintained
 F:	drivers/tc/
···

 VOCORE VOCORE2 BOARD
 M:	Harvey Hunt <harveyhuntnexus@gmail.com>
-L:	linux-mips@linux-mips.org
+L:	linux-mips@vger.kernel.org
 S:	Maintained
 F:	arch/mips/boot/dts/ralink/vocore2.dts
···
  * to occur, WAKEUPENABLE bits must be set in the pad mux registers, and
  * omap44xx_prm_reconfigure_io_chain() must be called. No return value.
  */
-static void __init omap44xx_prm_enable_io_wakeup(void)
+static void omap44xx_prm_enable_io_wakeup(void)
 {
 	s32 inst = omap4_prmst_get_prm_dev_inst();

arch/arm64/Kconfig | +25
···

 	  If unsure, say Y.

+config ARM64_ERRATUM_1286807
+	bool "Cortex-A76: Modification of the translation table for a virtual address might lead to read-after-read ordering violation"
+	default y
+	select ARM64_WORKAROUND_REPEAT_TLBI
+	help
+	  This option adds workaround for ARM Cortex-A76 erratum 1286807
+
+	  On the affected Cortex-A76 cores (r0p0 to r3p0), if a virtual
+	  address for a cacheable mapping of a location is being
+	  accessed by a core while another core is remapping the virtual
+	  address to a new physical page using the recommended
+	  break-before-make sequence, then under very rare circumstances
+	  TLBI+DSB completes before a read using the translation being
+	  invalidated has been observed by other observers. The
+	  workaround repeats the TLBI+DSB operation.
+
+	  If unsure, say Y.
+
 config CAVIUM_ERRATUM_22375
 	bool "Cavium erratum 22375, 24313"
 	default y
···
 	  is unchanged. Work around the erratum by invalidating the walk cache
 	  entries for the trampoline before entering the kernel proper.

+config ARM64_WORKAROUND_REPEAT_TLBI
+	bool
+	help
+	  Enable the repeat TLBI workaround for Falkor erratum 1009 and
+	  Cortex-A76 erratum 1286807.
+
 config QCOM_FALKOR_ERRATUM_1009
 	bool "Falkor E1009: Prematurely complete a DSB after a TLBI"
 	default y
+	select ARM64_WORKAROUND_REPEAT_TLBI
 	help
 	  On Falkor v1, the CPU may prematurely complete a DSB following a
 	  TLBI xxIS invalidate maintenance operation. Repeat the TLBI operation
···
 {
 	return is_compat_task();
 }
+
+#define ARCH_HAS_SYSCALL_MATCH_SYM_NAME
+
+static inline bool arch_syscall_match_sym_name(const char *sym,
+					       const char *name)
+{
+	/*
+	 * Since all syscall functions have __arm64_ prefix, we must skip it.
+	 * However, as we described above, we decided to ignore compat
+	 * syscalls, so we don't care about __arm64_compat_ prefix here.
+	 */
+	return !strcmp(sym + 8, name);
+}
 #endif /* ifndef __ASSEMBLY__ */

 #endif /* __ASM_FTRACE_H */
···
 {
 	unsigned long return_hooker = (unsigned long)&return_to_handler;
 	unsigned long old;
-	struct ftrace_graph_ent trace;
-	int err;

 	if (unlikely(atomic_read(&current->tracing_graph_pause)))
 		return;
···
 	 */
 	old = *parent;

-	trace.func = self_addr;
-	trace.depth = current->curr_ret_stack + 1;
-
-	/* Only trace if the calling function expects to */
-	if (!ftrace_graph_entry(&trace))
-		return;
-
-	err = ftrace_push_return_trace(old, self_addr, &trace.depth,
-				       frame_pointer, NULL);
-	if (err == -EBUSY)
-		return;
-	else
+	if (!function_graph_enter(old, self_addr, frame_pointer, NULL))
 		*parent = return_hooker;
 }

arch/arm64/net/bpf_jit_comp.c | +17 -9
···
  *  >0 - successfully JITed a 16-byte eBPF instruction.
  *  <0 - failed to JIT.
  */
-static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
+static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
+		      bool extra_pass)
 {
 	const u8 code = insn->code;
 	const u8 dst = bpf2a64[insn->dst_reg];
···
 	case BPF_JMP | BPF_CALL:
 	{
 		const u8 r0 = bpf2a64[BPF_REG_0];
-		const u64 func = (u64)__bpf_call_base + imm;
+		bool func_addr_fixed;
+		u64 func_addr;
+		int ret;

-		if (ctx->prog->is_func)
-			emit_addr_mov_i64(tmp, func, ctx);
+		ret = bpf_jit_get_func_addr(ctx->prog, insn, extra_pass,
+					    &func_addr, &func_addr_fixed);
+		if (ret < 0)
+			return ret;
+		if (func_addr_fixed)
+			/* We can use optimized emission here. */
+			emit_a64_mov_i64(tmp, func_addr, ctx);
 		else
-			emit_a64_mov_i64(tmp, func, ctx);
+			emit_addr_mov_i64(tmp, func_addr, ctx);
 		emit(A64_BLR(tmp), ctx);
 		emit(A64_MOV(1, r0, A64_R(0)), ctx);
 		break;
···
 	return 0;
 }

-static int build_body(struct jit_ctx *ctx)
+static int build_body(struct jit_ctx *ctx, bool extra_pass)
 {
 	const struct bpf_prog *prog = ctx->prog;
 	int i;
···
 		const struct bpf_insn *insn = &prog->insnsi[i];
 		int ret;

-		ret = build_insn(insn, ctx);
+		ret = build_insn(insn, ctx, extra_pass);
 		if (ret > 0) {
 			i++;
 			if (ctx->image == NULL)
···
 	/* 1. Initial fake pass to compute ctx->idx. */

 	/* Fake pass to fill in ctx->offset. */
-	if (build_body(&ctx)) {
+	if (build_body(&ctx, extra_pass)) {
 		prog = orig_prog;
 		goto out_off;
 	}
···

 	build_prologue(&ctx, was_classic);

-	if (build_body(&ctx)) {
+	if (build_body(&ctx, extra_pass)) {
 		bpf_jit_binary_free(header);
 		prog = orig_prog;
 		goto out_off;
arch/ia64/include/asm/numa.h | +3 -1
···
  */

 extern u8 numa_slit[MAX_NUMNODES * MAX_NUMNODES];
-#define node_distance(from,to) (numa_slit[(from) * MAX_NUMNODES + (to)])
+#define slit_distance(from,to) (numa_slit[(from) * MAX_NUMNODES + (to)])
+extern int __node_distance(int from, int to);
+#define node_distance(from,to) __node_distance(from, to)

 extern int paddr_to_nid(unsigned long paddr);
arch/ia64/kernel/acpi.c | +3 -3
···
 	if (!slit_table) {
 		for (i = 0; i < MAX_NUMNODES; i++)
 			for (j = 0; j < MAX_NUMNODES; j++)
-				node_distance(i, j) = i == j ? LOCAL_DISTANCE :
-							REMOTE_DISTANCE;
+				slit_distance(i, j) = i == j ?
+					LOCAL_DISTANCE : REMOTE_DISTANCE;
 		return;
 	}
···
 		if (!pxm_bit_test(j))
 			continue;
 		node_to = pxm_to_node(j);
-		node_distance(node_from, node_to) =
+		slit_distance(node_from, node_to) =
 			slit_table->entry[i * slit_table->locality_count + j];
 	}
 }
arch/ia64/mm/numa.c | +6
···
  */
 u8 numa_slit[MAX_NUMNODES * MAX_NUMNODES];

+int __node_distance(int from, int to)
+{
+	return slit_distance(from, to);
+}
+EXPORT_SYMBOL(__node_distance);
+
 /* Identify which cnode a physical address resides on */
 int
 paddr_to_nid(unsigned long paddr)
arch/microblaze/kernel/ftrace.c | +2 -13
···
 void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr)
 {
 	unsigned long old;
-	int faulted, err;
-	struct ftrace_graph_ent trace;
+	int faulted;
 	unsigned long return_hooker = (unsigned long)
 				&return_to_handler;
···
 		return;
 	}

-	err = ftrace_push_return_trace(old, self_addr, &trace.depth, 0, NULL);
-	if (err == -EBUSY) {
+	if (function_graph_enter(old, self_addr, 0, NULL))
 		*parent = old;
-		return;
-	}
-
-	trace.func = self_addr;
-	/* Only trace if the calling function expects to */
-	if (!ftrace_graph_entry(&trace)) {
-		current->curr_ret_stack--;
-		*parent = old;
-	}
 }
 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
arch/mips/include/asm/syscall.h | +1 -1
···
 #ifdef CONFIG_64BIT
 	case 4: case 5: case 6: case 7:
 #ifdef CONFIG_MIPS32_O32
-		if (test_thread_flag(TIF_32BIT_REGS))
+		if (test_tsk_thread_flag(task, TIF_32BIT_REGS))
 			return get_user(*arg, (int *)usp + n);
 		else
 #endif
arch/mips/kernel/ftrace.c | +2 -12
···
 			   unsigned long fp)
 {
 	unsigned long old_parent_ra;
-	struct ftrace_graph_ent trace;
 	unsigned long return_hooker = (unsigned long)
 	    &return_to_handler;
 	int faulted, insns;
···
 	if (unlikely(faulted))
 		goto out;

-	if (ftrace_push_return_trace(old_parent_ra, self_ra, &trace.depth, fp,
-				     NULL) == -EBUSY) {
-		*parent_ra_addr = old_parent_ra;
-		return;
-	}
-
 	/*
 	 * Get the recorded ip of the current mcount calling site in the
 	 * __mcount_loc section, which will be used to filter the function
···
 	 */

 	insns = core_kernel_text(self_ra) ? 2 : MCOUNT_OFFSET_INSNS + 1;
-	trace.func = self_ra - (MCOUNT_INSN_SIZE * insns);
+	self_ra -= (MCOUNT_INSN_SIZE * insns);

-	/* Only trace if the calling function expects to */
-	if (!ftrace_graph_entry(&trace)) {
-		current->curr_ret_stack--;
+	if (function_graph_enter(old_parent_ra, self_ra, fp, NULL))
 		*parent_ra_addr = old_parent_ra;
-	}
 	return;
 out:
 	ftrace_graph_stop();
···
 			   unsigned long frame_pointer)
 {
 	unsigned long return_hooker = (unsigned long)&return_to_handler;
-	struct ftrace_graph_ent trace;
 	unsigned long old;
-	int err;

 	if (unlikely(atomic_read(&current->tracing_graph_pause)))
 		return;

 	old = *parent;

-	trace.func = self_addr;
-	trace.depth = current->curr_ret_stack + 1;
-
-	/* Only trace if the calling function expects to */
-	if (!ftrace_graph_entry(&trace))
-		return;
-
-	err = ftrace_push_return_trace(old, self_addr, &trace.depth,
-				       frame_pointer, NULL);
-
-	if (err == -EBUSY)
-		return;
-
-	*parent = return_hooker;
+	if (!function_graph_enter(old, self_addr, frame_pointer, NULL))
+		*parent = return_hooker;
 }

 noinline void ftrace_graph_caller(void)
arch/parisc/kernel/ftrace.c | +3 -14
···
 			unsigned long self_addr)
 {
 	unsigned long old;
-	struct ftrace_graph_ent trace;
 	extern int parisc_return_to_handler;

 	if (unlikely(ftrace_graph_is_dead()))
···

 	old = *parent;

-	trace.func = self_addr;
-	trace.depth = current->curr_ret_stack + 1;
-
-	/* Only trace if the calling function expects to */
-	if (!ftrace_graph_entry(&trace))
-		return;
-
-	if (ftrace_push_return_trace(old, self_addr, &trace.depth,
-				     0, NULL) == -EBUSY)
-		return;
-
-	/* activate parisc_return_to_handler() as return point */
-	*parent = (unsigned long) &parisc_return_to_handler;
+	if (!function_graph_enter(old, self_addr, 0, NULL))
+		/* activate parisc_return_to_handler() as return point */
+		*parent = (unsigned long) &parisc_return_to_handler;
 }
 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
arch/powerpc/kernel/trace/ftrace.c (+2, -13)
···
  */
 unsigned long prepare_ftrace_return(unsigned long parent, unsigned long ip)
 {
-	struct ftrace_graph_ent trace;
 	unsigned long return_hooker;

 	if (unlikely(ftrace_graph_is_dead()))
···
 	return_hooker = ppc_function_entry(return_to_handler);

-	trace.func = ip;
-	trace.depth = current->curr_ret_stack + 1;
-
-	/* Only trace if the calling function expects to */
-	if (!ftrace_graph_entry(&trace))
-		goto out;
-
-	if (ftrace_push_return_trace(parent, ip, &trace.depth, 0,
-				     NULL) == -EBUSY)
-		goto out;
-
-	parent = return_hooker;
+	if (!function_graph_enter(parent, ip, 0, NULL))
+		parent = return_hooker;
 out:
 	return parent;
 }
arch/powerpc/kvm/book3s_hv.c (+1)
···
 		ret = kvmhv_enter_nested_guest(vcpu);
 		if (ret == H_INTERRUPT) {
 			kvmppc_set_gpr(vcpu, 3, 0);
+			vcpu->arch.hcall_needed = 0;
 			return -EINTR;
 		}
 		break;
arch/powerpc/net/bpf_jit_comp64.c (+38, -19)
···
 	PPC_BLR();
 }

-static void bpf_jit_emit_func_call(u32 *image, struct codegen_context *ctx, u64 func)
+static void bpf_jit_emit_func_call_hlp(u32 *image, struct codegen_context *ctx,
+				       u64 func)
+{
+#ifdef PPC64_ELF_ABI_v1
+	/* func points to the function descriptor */
+	PPC_LI64(b2p[TMP_REG_2], func);
+	/* Load actual entry point from function descriptor */
+	PPC_BPF_LL(b2p[TMP_REG_1], b2p[TMP_REG_2], 0);
+	/* ... and move it to LR */
+	PPC_MTLR(b2p[TMP_REG_1]);
+	/*
+	 * Load TOC from function descriptor at offset 8.
+	 * We can clobber r2 since we get called through a
+	 * function pointer (so caller will save/restore r2)
+	 * and since we don't use a TOC ourself.
+	 */
+	PPC_BPF_LL(2, b2p[TMP_REG_2], 8);
+#else
+	/* We can clobber r12 */
+	PPC_FUNC_ADDR(12, func);
+	PPC_MTLR(12);
+#endif
+	PPC_BLRL();
+}
+
+static void bpf_jit_emit_func_call_rel(u32 *image, struct codegen_context *ctx,
+				       u64 func)
 {
 	unsigned int i, ctx_idx = ctx->idx;
···
 {
 	const struct bpf_insn *insn = fp->insnsi;
 	int flen = fp->len;
-	int i;
+	int i, ret;

 	/* Start of epilogue code - will only be valid 2nd pass onwards */
 	u32 exit_addr = addrs[flen];
···
 		u32 src_reg = b2p[insn[i].src_reg];
 		s16 off = insn[i].off;
 		s32 imm = insn[i].imm;
+		bool func_addr_fixed;
+		u64 func_addr;
 		u64 imm64;
-		u8 *func;
 		u32 true_cond;
 		u32 tmp_idx;
···
 		case BPF_JMP | BPF_CALL:
 			ctx->seen |= SEEN_FUNC;

-			/* bpf function call */
-			if (insn[i].src_reg == BPF_PSEUDO_CALL)
-				if (!extra_pass)
-					func = NULL;
-				else if (fp->aux->func && off < fp->aux->func_cnt)
-					/* use the subprog id from the off
-					 * field to lookup the callee address
-					 */
-					func = (u8 *) fp->aux->func[off]->bpf_func;
-				else
-					return -EINVAL;
-			/* kernel helper call */
+			ret = bpf_jit_get_func_addr(fp, &insn[i], extra_pass,
+						    &func_addr, &func_addr_fixed);
+			if (ret < 0)
+				return ret;
+
+			if (func_addr_fixed)
+				bpf_jit_emit_func_call_hlp(image, ctx, func_addr);
 			else
-				func = (u8 *) __bpf_call_base + imm;
-
-			bpf_jit_emit_func_call(image, ctx, (u64)func);
-
+				bpf_jit_emit_func_call_rel(image, ctx, func_addr);
 			/* move return value from r3 to BPF_REG_0 */
 			PPC_MR(b2p[BPF_REG_0], 3);
 			break;
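The hunk above replaces the open-coded address lookup with `bpf_jit_get_func_addr()`, which also reports whether the callee address is fixed (a kernel helper, known at JIT time) or may still change between passes (a BPF subprog, only final after the extra pass). A minimal userspace sketch of that decision; everything except the `BPF_PSEUDO_CALL` constant is invented for illustration:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define BPF_PSEUDO_CALL 1   /* matches the real UAPI constant */

/*
 * Toy model of the fixed-vs-relative split: helper calls resolve to a
 * fixed address immediately; subprog calls only resolve once all
 * subprogs have been JITed (the "extra pass").
 */
static int get_func_addr(int src_reg, bool extra_pass, uint64_t subprog_addr,
			 uint64_t helper_addr, uint64_t *addr, bool *fixed)
{
	if (src_reg == BPF_PSEUDO_CALL) {
		*fixed = false;
		*addr = extra_pass ? subprog_addr : 0; /* unknown on pass 1 */
		return 0;
	}
	*fixed = true;          /* kernel helper: address never moves */
	*addr = helper_addr;
	return 0;
}
```

The JIT then picks `bpf_jit_emit_func_call_hlp()` for fixed addresses (which may go through a function descriptor on ELF ABI v1) and `bpf_jit_emit_func_call_rel()` otherwise.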
arch/riscv/kernel/ftrace.c (+2, -12)
···
 {
 	unsigned long return_hooker = (unsigned long)&return_to_handler;
 	unsigned long old;
-	struct ftrace_graph_ent trace;
 	int err;

 	if (unlikely(atomic_read(&current->tracing_graph_pause)))
···
 	 */
 	old = *parent;

-	trace.func = self_addr;
-	trace.depth = current->curr_ret_stack + 1;
-
-	if (!ftrace_graph_entry(&trace))
-		return;
-
-	err = ftrace_push_return_trace(old, self_addr, &trace.depth,
-				       frame_pointer, parent);
-	if (err == -EBUSY)
-		return;
-	*parent = return_hooker;
+	if (!function_graph_enter(old, self_addr, frame_pointer, parent))
+		*parent = return_hooker;
 }

 #ifdef CONFIG_DYNAMIC_FTRACE
arch/s390/kernel/ftrace.c (+2, -11)
···
  */
 unsigned long prepare_ftrace_return(unsigned long parent, unsigned long ip)
 {
-	struct ftrace_graph_ent trace;
-
 	if (unlikely(ftrace_graph_is_dead()))
 		goto out;
 	if (unlikely(atomic_read(&current->tracing_graph_pause)))
 		goto out;
 	ip -= MCOUNT_INSN_SIZE;
-	trace.func = ip;
-	trace.depth = current->curr_ret_stack + 1;
-	/* Only trace if the calling function expects to. */
-	if (!ftrace_graph_entry(&trace))
-		goto out;
-	if (ftrace_push_return_trace(parent, ip, &trace.depth, 0,
-				     NULL) == -EBUSY)
-		goto out;
-	parent = (unsigned long) return_to_handler;
+	if (!function_graph_enter(parent, ip, 0, NULL))
+		parent = (unsigned long) return_to_handler;
 out:
 	return parent;
 }
arch/s390/kernel/perf_cpum_cf.c (+2)
···
 		break;

 	case PERF_TYPE_HARDWARE:
+		if (is_sampling_event(event))	/* No sampling support */
+			return -ENOENT;
 		ev = attr->config;
 		/* Count user space (problem-state) only */
 		if (!attr->exclude_user && attr->exclude_kernel) {
···
 void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr)
 {
 	unsigned long old;
-	int faulted, err;
-	struct ftrace_graph_ent trace;
+	int faulted;
 	unsigned long return_hooker = (unsigned long)&return_to_handler;

 	if (unlikely(ftrace_graph_is_dead()))
···
 		return;
 	}

-	err = ftrace_push_return_trace(old, self_addr, &trace.depth, 0, NULL);
-	if (err == -EBUSY) {
+	if (function_graph_enter(old, self_addr, 0, NULL))
 		__raw_writel(old, parent);
-		return;
-	}
-
-	trace.func = self_addr;
-
-	/* Only trace if the calling function expects to */
-	if (!ftrace_graph_entry(&trace)) {
-		current->curr_ret_stack--;
-		__raw_writel(old, parent);
-	}
 }
 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
arch/sparc/kernel/ftrace.c (+1, -10)
···
 			     unsigned long frame_pointer)
 {
 	unsigned long return_hooker = (unsigned long) &return_to_handler;
-	struct ftrace_graph_ent trace;

 	if (unlikely(atomic_read(&current->tracing_graph_pause)))
 		return parent + 8UL;

-	trace.func = self_addr;
-	trace.depth = current->curr_ret_stack + 1;
-
-	/* Only trace if the calling function expects to */
-	if (!ftrace_graph_entry(&trace))
-		return parent + 8UL;
-
-	if (ftrace_push_return_trace(parent, self_addr, &trace.depth,
-				     frame_pointer, NULL) == -EBUSY)
+	if (function_graph_enter(parent, self_addr, frame_pointer, NULL))
 		return parent + 8UL;

 	return return_hooker;
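Every per-arch hunk above removes the same boilerplate: the open-coded `ftrace_graph_entry()` check plus `ftrace_push_return_trace()` call collapses into one `function_graph_enter()` call that either fully succeeds (return 0) or leaves no half-pushed state behind. A rough userspace model of that contract (the hook names and ordering here are simplified stand-ins, not the kernel implementation):

```c
#include <assert.h>
#include <stdbool.h>

/* Invented stand-ins for the tracer hooks and the return stack. */
static bool entry_allowed;  /* models ftrace_graph_entry() saying yes/no */
static int  push_result;    /* models ftrace_push_return_trace()'s result */
static int  pushed;         /* frames currently on the shadow stack */

static bool graph_entry(void)       { return entry_allowed; }
static int  push_return_trace(void) { if (!push_result) pushed++; return push_result; }

/*
 * Model of function_graph_enter(): 0 on success, nonzero on failure,
 * with no partial side effects -- the property the callers above rely
 * on when they patch the return address only on success.
 */
static int function_graph_enter(void)
{
	if (!graph_entry())
		return -1;
	if (push_return_trace() != 0)
		return -1;
	return 0;
}
```

Each architecture now only decides what to do with the return address: set the hooker on success (sparc returns it, parisc/nds32 store it) or restore the old value on failure (MIPS, sh).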
···
 	  branches. Requires a compiler with -mindirect-branch=thunk-extern
 	  support for full protection. The kernel may run slower.

-	  Without compiler support, at least indirect branches in assembler
-	  code are eliminated. Since this includes the syscall entry path,
-	  it is not entirely pointless.
-
 config INTEL_RDT
 	bool "Intel Resource Director Technology support"
 	depends on X86 && CPU_SUP_INTEL
···
 	  to the kernel image.

 config SCHED_SMT
-	bool "SMT (Hyperthreading) scheduler support"
-	depends on SMP
-	---help---
-	  SMT scheduler support improves the CPU scheduler's decision making
-	  when dealing with Intel Pentium 4 chips with HyperThreading at a
-	  cost of slightly increased overhead in some places. If unsure say
-	  N here.
+	def_bool y if SMP

 config SCHED_MC
 	def_bool y
arch/x86/Makefile (+3, -2)
···

 # Avoid indirect branches in kernel to deal with Spectre
 ifdef CONFIG_RETPOLINE
-ifneq ($(RETPOLINE_CFLAGS),)
-  KBUILD_CFLAGS += $(RETPOLINE_CFLAGS) -DRETPOLINE
+ifeq ($(RETPOLINE_CFLAGS),)
+  $(error You are building kernel with non-retpoline compiler, please update your compiler.)
 endif
+  KBUILD_CFLAGS += $(RETPOLINE_CFLAGS)
 endif

 archscripts: scripts_basic
arch/x86/boot/header.S (+1, -5)
···
 	# Part 2 of the header, from the old setup.S

 		.ascii	"HdrS"		# header signature
-		.word	0x020e		# header version number (>= 0x0105)
+		.word	0x020d		# header version number (>= 0x0105)
 					# or else old loadlin-1.5 will fail)
 		.globl	realmode_swtch
 realmode_swtch:	.word	0, 0	# default_switch, SETUPSEG
···

 init_size:		.long INIT_SIZE		# kernel initialization size
 handover_offset:	.long 0			# Filled in by build.c
-
-acpi_rsdp_addr:		.quad 0			# 64-bit physical pointer to the
-						# ACPI RSDP table, added with
-						# version 2.14

 # End of setup header #####################################################
arch/x86/events/core.c (-20)
···
 	if (config == -1LL)
 		return -EINVAL;

-	/*
-	 * Branch tracing:
-	 */
-	if (attr->config == PERF_COUNT_HW_BRANCH_INSTRUCTIONS &&
-	    !attr->freq && hwc->sample_period == 1) {
-		/* BTS is not supported by this architecture. */
-		if (!x86_pmu.bts_active)
-			return -EOPNOTSUPP;
-
-		/* BTS is currently only allowed for user-mode. */
-		if (!attr->exclude_kernel)
-			return -EOPNOTSUPP;
-
-		/* disallow bts if conflicting events are present */
-		if (x86_add_exclusive(x86_lbr_exclusive_lbr))
-			return -EBUSY;
-
-		event->destroy = hw_perf_lbr_event_destroy;
-	}
-
 	hwc->config |= config;

 	return 0;
arch/x86/events/intel/core.c (+52, -16)
···
 	return handled;
 }

-static bool disable_counter_freezing;
+static bool disable_counter_freezing = true;
 static int __init intel_perf_counter_freezing_setup(char *s)
 {
-	disable_counter_freezing = true;
-	pr_info("Intel PMU Counter freezing feature disabled\n");
+	bool res;
+
+	if (kstrtobool(s, &res))
+		return -EINVAL;
+
+	disable_counter_freezing = !res;
 	return 1;
 }
-__setup("disable_counter_freezing", intel_perf_counter_freezing_setup);
+__setup("perf_v4_pmi=", intel_perf_counter_freezing_setup);

 /*
  * Simplified handler for Arch Perfmon v4:
···
 static struct event_constraint *
 intel_bts_constraints(struct perf_event *event)
 {
-	struct hw_perf_event *hwc = &event->hw;
-	unsigned int hw_event, bts_event;
-
-	if (event->attr.freq)
-		return NULL;
-
-	hw_event = hwc->config & INTEL_ARCH_EVENT_MASK;
-	bts_event = x86_pmu.event_map(PERF_COUNT_HW_BRANCH_INSTRUCTIONS);
-
-	if (unlikely(hw_event == bts_event && hwc->sample_period == 1))
+	if (unlikely(intel_pmu_has_bts(event)))
 		return &bts_constraint;

 	return NULL;
···
 	return flags;
 }

+static int intel_pmu_bts_config(struct perf_event *event)
+{
+	struct perf_event_attr *attr = &event->attr;
+
+	if (unlikely(intel_pmu_has_bts(event))) {
+		/* BTS is not supported by this architecture. */
+		if (!x86_pmu.bts_active)
+			return -EOPNOTSUPP;
+
+		/* BTS is currently only allowed for user-mode. */
+		if (!attr->exclude_kernel)
+			return -EOPNOTSUPP;
+
+		/* BTS is not allowed for precise events. */
+		if (attr->precise_ip)
+			return -EOPNOTSUPP;
+
+		/* disallow bts if conflicting events are present */
+		if (x86_add_exclusive(x86_lbr_exclusive_lbr))
+			return -EBUSY;
+
+		event->destroy = hw_perf_lbr_event_destroy;
+	}
+
+	return 0;
+}
+
+static int core_pmu_hw_config(struct perf_event *event)
+{
+	int ret = x86_pmu_hw_config(event);
+
+	if (ret)
+		return ret;
+
+	return intel_pmu_bts_config(event);
+}
+
 static int intel_pmu_hw_config(struct perf_event *event)
 {
 	int ret = x86_pmu_hw_config(event);

+	if (ret)
+		return ret;
+
+	ret = intel_pmu_bts_config(event);
 	if (ret)
 		return ret;
···
 	/*
 	 * BTS is set up earlier in this path, so don't account twice
 	 */
-	if (!intel_pmu_has_bts(event)) {
+	if (!unlikely(intel_pmu_has_bts(event))) {
 		/* disallow lbr if conflicting events are present */
 		if (x86_add_exclusive(x86_lbr_exclusive_lbr))
 			return -EBUSY;
···
 	.enable_all		= core_pmu_enable_all,
 	.enable			= core_pmu_enable_event,
 	.disable		= x86_pmu_disable_event,
-	.hw_config		= x86_pmu_hw_config,
+	.hw_config		= core_pmu_hw_config,
 	.schedule_events	= x86_schedule_events,
 	.eventsel		= MSR_ARCH_PERFMON_EVENTSEL0,
 	.perfctr		= MSR_ARCH_PERFMON_PERFCTR0,
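The diff above centralizes the open-coded BTS detection behind `intel_pmu_has_bts()`. Going by the check removed from `intel_bts_constraints()`, the predicate is: the event maps to the branch-instructions hardware event and is sampled with a fixed period of 1 (not frequency-based). A hedged userspace sketch of that predicate; the struct layout and the event-code constant are invented stand-ins:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified event descriptor; the real one is struct perf_event. */
struct ev {
	uint64_t config;        /* raw hardware event code */
	uint64_t sample_period;
	bool     freq;          /* frequency-based sampling? */
};

#define BRANCH_INSN_EVENT 0xc4  /* invented stand-in for the mapped code */

/*
 * Sketch of the condition intel_pmu_has_bts() centralizes: BTS is in
 * play iff the event is the branch-instructions event sampled with a
 * fixed period of 1.
 */
static bool pmu_has_bts(const struct ev *e)
{
	if (e->freq)
		return false;
	return e->config == BRANCH_INSN_EVENT && e->sample_period == 1;
}
```

Folding the check into `intel_pmu_bts_config()` also lets the core-only PMU (`core_pmu_hw_config`) apply the same BTS restrictions instead of silently skipping them.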
···
 #ifndef _ASM_X86_NOSPEC_BRANCH_H_
 #define _ASM_X86_NOSPEC_BRANCH_H_

+#include <linux/static_key.h>
+
 #include <asm/alternative.h>
 #include <asm/alternative-asm.h>
 #include <asm/cpufeatures.h>
···
 	_ASM_PTR " 999b\n\t"					\
 	".popsection\n\t"

-#if defined(CONFIG_X86_64) && defined(RETPOLINE)
+#ifdef CONFIG_RETPOLINE
+#ifdef CONFIG_X86_64

 /*
- * Since the inline asm uses the %V modifier which is only in newer GCC,
- * the 64-bit one is dependent on RETPOLINE not CONFIG_RETPOLINE.
+ * Inline asm uses the %V modifier which is only in newer GCC
+ * which is ensured when CONFIG_RETPOLINE is defined.
  */
 # define CALL_NOSPEC						\
	ANNOTATE_NOSPEC_ALTERNATIVE				\
···
		X86_FEATURE_RETPOLINE_AMD)
 # define THUNK_TARGET(addr) [thunk_target] "r" (addr)

-#elif defined(CONFIG_X86_32) && defined(CONFIG_RETPOLINE)
+#else /* CONFIG_X86_32 */
 /*
  * For i386 we use the original ret-equivalent retpoline, because
  * otherwise we'll run out of registers. We don't care about CET
···
		X86_FEATURE_RETPOLINE_AMD)

 # define THUNK_TARGET(addr) [thunk_target] "rm" (addr)
+#endif
 #else /* No retpoline for C / inline asm */
 # define CALL_NOSPEC "call *%[thunk_target]\n"
 # define THUNK_TARGET(addr) [thunk_target] "rm" (addr)
···
 /* The Spectre V2 mitigation variants */
 enum spectre_v2_mitigation {
	SPECTRE_V2_NONE,
-	SPECTRE_V2_RETPOLINE_MINIMAL,
-	SPECTRE_V2_RETPOLINE_MINIMAL_AMD,
	SPECTRE_V2_RETPOLINE_GENERIC,
	SPECTRE_V2_RETPOLINE_AMD,
	SPECTRE_V2_IBRS_ENHANCED,
+};
+
+/* The indirect branch speculation control variants */
+enum spectre_v2_user_mitigation {
+	SPECTRE_V2_USER_NONE,
+	SPECTRE_V2_USER_STRICT,
+	SPECTRE_V2_USER_PRCTL,
+	SPECTRE_V2_USER_SECCOMP,
 };

 /* The Speculative Store Bypass disable variants */
···
		     X86_FEATURE_USE_IBRS_FW);	\
	preempt_enable();			\
 } while (0)
+
+DECLARE_STATIC_KEY_FALSE(switch_to_cond_stibp);
+DECLARE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
+DECLARE_STATIC_KEY_FALSE(switch_mm_always_ibpb);

 #endif /* __ASSEMBLY__ */
···
 #define TIF_SIGPENDING		2	/* signal pending */
 #define TIF_NEED_RESCHED	3	/* rescheduling necessary */
 #define TIF_SINGLESTEP		4	/* reenable singlestep on user return*/
-#define TIF_SSBD		5	/* Reduced data speculation */
+#define TIF_SSBD		5	/* Speculative store bypass disable */
 #define TIF_SYSCALL_EMU		6	/* syscall emulation active */
 #define TIF_SYSCALL_AUDIT	7	/* syscall auditing active */
 #define TIF_SECCOMP		8	/* secure computing */
+#define TIF_SPEC_IB		9	/* Indirect branch speculation mitigation */
+#define TIF_SPEC_FORCE_UPDATE	10	/* Force speculation MSR update in context switch */
 #define TIF_USER_RETURN_NOTIFY	11	/* notify kernel of userspace return */
 #define TIF_UPROBE		12	/* breakpointed or singlestepping */
 #define TIF_PATCH_PENDING	13	/* pending live patching update */
···
 #define _TIF_SYSCALL_EMU	(1 << TIF_SYSCALL_EMU)
 #define _TIF_SYSCALL_AUDIT	(1 << TIF_SYSCALL_AUDIT)
 #define _TIF_SECCOMP		(1 << TIF_SECCOMP)
+#define _TIF_SPEC_IB		(1 << TIF_SPEC_IB)
+#define _TIF_SPEC_FORCE_UPDATE	(1 << TIF_SPEC_FORCE_UPDATE)
 #define _TIF_USER_RETURN_NOTIFY	(1 << TIF_USER_RETURN_NOTIFY)
 #define _TIF_UPROBE		(1 << TIF_UPROBE)
 #define _TIF_PATCH_PENDING	(1 << TIF_PATCH_PENDING)
···
	 _TIF_FSCHECK)

 /* flags to check in __switch_to() */
-#define _TIF_WORK_CTXSW							\
-	(_TIF_IO_BITMAP|_TIF_NOCPUID|_TIF_NOTSC|_TIF_BLOCKSTEP|_TIF_SSBD)
+#define _TIF_WORK_CTXSW_BASE						\
+	(_TIF_IO_BITMAP|_TIF_NOCPUID|_TIF_NOTSC|_TIF_BLOCKSTEP|	\
+	 _TIF_SSBD | _TIF_SPEC_FORCE_UPDATE)
+
+/*
+ * Avoid calls to __switch_to_xtra() on UP as STIBP is not evaluated.
+ */
+#ifdef CONFIG_SMP
+# define _TIF_WORK_CTXSW	(_TIF_WORK_CTXSW_BASE | _TIF_SPEC_IB)
+#else
+# define _TIF_WORK_CTXSW	(_TIF_WORK_CTXSW_BASE)
+#endif

 #define _TIF_WORK_CTXSW_PREV	(_TIF_WORK_CTXSW|_TIF_USER_RETURN_NOTIFY)
 #define _TIF_WORK_CTXSW_NEXT	(_TIF_WORK_CTXSW)
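The mask split above is what keeps UP kernels from taking the `__switch_to_xtra()` slow path for `TIF_SPEC_IB` alone: the bit is only part of `_TIF_WORK_CTXSW` when `CONFIG_SMP` is set. A self-contained sketch of that gating, modeling only the speculation bits (the other `_TIF_WORK_CTXSW_BASE` bits are omitted, and `SMP_BUILD` is an invented stand-in for `CONFIG_SMP`):

```c
#include <assert.h>

/* Bit positions copied from the hunk above. */
#define TIF_SSBD		5
#define TIF_SPEC_IB		9
#define TIF_SPEC_FORCE_UPDATE	10

#define _TIF_SSBD		(1UL << TIF_SSBD)
#define _TIF_SPEC_IB		(1UL << TIF_SPEC_IB)
#define _TIF_SPEC_FORCE_UPDATE	(1UL << TIF_SPEC_FORCE_UPDATE)

/* Only the speculation bits of the real mask, for illustration. */
#define _TIF_WORK_CTXSW_BASE	(_TIF_SSBD | _TIF_SPEC_FORCE_UPDATE)
#ifdef SMP_BUILD		/* invented stand-in for CONFIG_SMP */
# define _TIF_WORK_CTXSW	(_TIF_WORK_CTXSW_BASE | _TIF_SPEC_IB)
#else
# define _TIF_WORK_CTXSW	(_TIF_WORK_CTXSW_BASE)
#endif

/* __switch_to_xtra() is only entered when prev or next has work bits. */
static int needs_switch_to_xtra(unsigned long prev_tif, unsigned long next_tif)
{
	return ((prev_tif | next_tif) & _TIF_WORK_CTXSW) ? 1 : 0;
}
```

Compiled without `SMP_BUILD` (the UP case), a task that only has `TIF_SPEC_IB` set never triggers the slow path, which is the stated intent of the comment in the hunk.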
arch/x86/include/asm/tlbflush.h (+6, -2)
···

 #define LOADED_MM_SWITCHING ((struct mm_struct *)1)

+	/* Last user mm for optimizing IBPB */
+	union {
+		struct mm_struct	*last_user_mm;
+		unsigned long		last_user_mm_ibpb;
+	};
+
 	u16 loaded_mm_asid;
 	u16 next_asid;
-	/* last user mm's ctx id */
-	u64 last_ctx_id;

 	/*
 	 * We can be in one of several states:
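The union above lets the TLB code keep either the raw previous-user `mm` pointer or a tagged variant of it. In the conditional-IBPB scheme this enables, a free low bit of `last_user_mm_ibpb` can encode whether the previous task requested indirect branch protection, which is safe because `mm_struct` pointers are word-aligned. A hedged sketch of that pointer-tagging trick; the helper name and exact layout are assumptions, shown with plain integers:

```c
#include <assert.h>

#define TIF_SPEC_IB       9
#define LAST_USER_MM_IBPB 0x1UL  /* low bit is free: mm pointers are aligned */

/*
 * Fold the task's TIF_SPEC_IB bit into bit 0 of the mm "cookie", so a
 * single comparison at context-switch time distinguishes both "same
 * mm?" and "was IBPB requested?". Sketch only; mm_ptr stands in for
 * an aligned struct mm_struct pointer.
 */
static unsigned long mangle_tif_spec_ib(unsigned long mm_ptr, unsigned long tif)
{
	unsigned long ibpb = (tif >> TIF_SPEC_IB) & LAST_USER_MM_IBPB;
	return mm_ptr | ibpb;
}
```

If either the mm or the IBPB bit differs from the stored cookie, the switch path knows a barrier may be needed without a second field or extra branch.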
arch/x86/include/asm/x86_init.h (-2)
···
 extern void x86_init_uint_noop(unsigned int unused);
 extern bool x86_pnpbios_disabled(void);

-void x86_verify_bootdata_version(void);
-
 #endif
···
 	if (!boot_params.hdr.version)
 		copy_bootdata(__va(real_mode_data));

-	x86_verify_bootdata_version();
-
 	x86_early_init_platform_quirks();

 	switch (boot_params.hdr.hardware_subarch) {
arch/x86/kernel/process.c (+82, -19)
···
 #include <asm/prctl.h>
 #include <asm/spec-ctrl.h>

+#include "process.h"
+
 /*
  * per-CPU TSS segments. Threads are completely 'soft' on Linux,
  * no more per-task TSS's. The TSS size is kept cacheline-aligned
···
 	enable_cpuid();
 }

-static inline void switch_to_bitmap(struct tss_struct *tss,
-				    struct thread_struct *prev,
+static inline void switch_to_bitmap(struct thread_struct *prev,
 				    struct thread_struct *next,
 				    unsigned long tifp, unsigned long tifn)
 {
+	struct tss_struct *tss = this_cpu_ptr(&cpu_tss_rw);
+
 	if (tifn & _TIF_IO_BITMAP) {
 		/*
 		 * Copy the relevant range of the IO bitmap.
···
 	wrmsrl(MSR_AMD64_VIRT_SPEC_CTRL, ssbd_tif_to_spec_ctrl(tifn));
 }

-static __always_inline void intel_set_ssb_state(unsigned long tifn)
+/*
+ * Update the MSRs managing speculation control, during context switch.
+ *
+ * tifp: Previous task's thread flags
+ * tifn: Next task's thread flags
+ */
+static __always_inline void __speculation_ctrl_update(unsigned long tifp,
+						      unsigned long tifn)
 {
-	u64 msr = x86_spec_ctrl_base | ssbd_tif_to_spec_ctrl(tifn);
+	unsigned long tif_diff = tifp ^ tifn;
+	u64 msr = x86_spec_ctrl_base;
+	bool updmsr = false;

-	wrmsrl(MSR_IA32_SPEC_CTRL, msr);
+	/*
+	 * If TIF_SSBD is different, select the proper mitigation
+	 * method. Note that if SSBD mitigation is disabled or permanently
+	 * enabled this branch can't be taken because nothing can set
+	 * TIF_SSBD.
+	 */
+	if (tif_diff & _TIF_SSBD) {
+		if (static_cpu_has(X86_FEATURE_VIRT_SSBD)) {
+			amd_set_ssb_virt_state(tifn);
+		} else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD)) {
+			amd_set_core_ssb_state(tifn);
+		} else if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) ||
+			   static_cpu_has(X86_FEATURE_AMD_SSBD)) {
+			msr |= ssbd_tif_to_spec_ctrl(tifn);
+			updmsr = true;
+		}
+	}
+
+	/*
+	 * Only evaluate TIF_SPEC_IB if conditional STIBP is enabled,
+	 * otherwise avoid the MSR write.
+	 */
+	if (IS_ENABLED(CONFIG_SMP) &&
+	    static_branch_unlikely(&switch_to_cond_stibp)) {
+		updmsr |= !!(tif_diff & _TIF_SPEC_IB);
+		msr |= stibp_tif_to_spec_ctrl(tifn);
+	}
+
+	if (updmsr)
+		wrmsrl(MSR_IA32_SPEC_CTRL, msr);
 }

-static __always_inline void __speculative_store_bypass_update(unsigned long tifn)
+static unsigned long speculation_ctrl_update_tif(struct task_struct *tsk)
 {
-	if (static_cpu_has(X86_FEATURE_VIRT_SSBD))
-		amd_set_ssb_virt_state(tifn);
-	else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
-		amd_set_core_ssb_state(tifn);
-	else
-		intel_set_ssb_state(tifn);
+	if (test_and_clear_tsk_thread_flag(tsk, TIF_SPEC_FORCE_UPDATE)) {
+		if (task_spec_ssb_disable(tsk))
+			set_tsk_thread_flag(tsk, TIF_SSBD);
+		else
+			clear_tsk_thread_flag(tsk, TIF_SSBD);
+
+		if (task_spec_ib_disable(tsk))
+			set_tsk_thread_flag(tsk, TIF_SPEC_IB);
+		else
+			clear_tsk_thread_flag(tsk, TIF_SPEC_IB);
+	}
+	/* Return the updated threadinfo flags */
+	return task_thread_info(tsk)->flags;
 }

-void speculative_store_bypass_update(unsigned long tif)
+void speculation_ctrl_update(unsigned long tif)
 {
+	/* Forced update. Make sure all relevant TIF flags are different */
 	preempt_disable();
-	__speculative_store_bypass_update(tif);
+	__speculation_ctrl_update(~tif, tif);
 	preempt_enable();
 }

-void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
-		      struct tss_struct *tss)
+/* Called from seccomp/prctl update */
+void speculation_ctrl_update_current(void)
+{
+	preempt_disable();
+	speculation_ctrl_update(speculation_ctrl_update_tif(current));
+	preempt_enable();
+}
+
+void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p)
 {
 	struct thread_struct *prev, *next;
 	unsigned long tifp, tifn;
···
 	tifn = READ_ONCE(task_thread_info(next_p)->flags);
 	tifp = READ_ONCE(task_thread_info(prev_p)->flags);
-	switch_to_bitmap(tss, prev, next, tifp, tifn);
+	switch_to_bitmap(prev, next, tifp, tifn);

 	propagate_user_return_notify(prev_p, next_p);
···
 	if ((tifp ^ tifn) & _TIF_NOCPUID)
 		set_cpuid_faulting(!!(tifn & _TIF_NOCPUID));

-	if ((tifp ^ tifn) & _TIF_SSBD)
-		__speculative_store_bypass_update(tifn);
+	if (likely(!((tifp | tifn) & _TIF_SPEC_FORCE_UPDATE))) {
+		__speculation_ctrl_update(tifp, tifn);
+	} else {
+		speculation_ctrl_update_tif(prev_p);
+		tifn = speculation_ctrl_update_tif(next_p);
+
+		/* Enforce MSR update to ensure consistent state */
+		__speculation_ctrl_update(~tifn, tifn);
+	}
 }

 /*
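The core of the rework above is that `__speculation_ctrl_update()` only touches `MSR_IA32_SPEC_CTRL` when a relevant TIF bit actually differs between the previous and next task. A userspace model of that control flow, with CPU feature selection collapsed to the SPEC_CTRL path and the static key modeled as a plain flag (the globals here are invented scaffolding; the MSR bit values match Intel's SPEC_CTRL layout):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define _TIF_SSBD	(1UL << 5)
#define _TIF_SPEC_IB	(1UL << 9)
#define SPEC_CTRL_STIBP	(1ULL << 1)	/* IA32_SPEC_CTRL.STIBP */
#define SPEC_CTRL_SSBD	(1ULL << 2)	/* IA32_SPEC_CTRL.SSBD */

static uint64_t last_msr;	/* captures the last "wrmsrl" value */
static int      msr_writes;	/* counts writes, to show the skip path */
static bool     cond_stibp;	/* models the switch_to_cond_stibp key */

/*
 * Model of __speculation_ctrl_update(): write the MSR only when the
 * XOR of prev/next flags shows a relevant difference.
 */
static void speculation_ctrl_update(unsigned long tifp, unsigned long tifn)
{
	unsigned long tif_diff = tifp ^ tifn;
	uint64_t msr = 0;
	bool updmsr = false;

	if (tif_diff & _TIF_SSBD) {
		if (tifn & _TIF_SSBD)
			msr |= SPEC_CTRL_SSBD;
		updmsr = true;
	}
	if (cond_stibp) {	/* TIF_SPEC_IB only matters when enabled */
		updmsr |= !!(tif_diff & _TIF_SPEC_IB);
		if (tifn & _TIF_SPEC_IB)
			msr |= SPEC_CTRL_STIBP;
	}
	if (updmsr) {
		last_msr = msr;
		msr_writes++;
	}
}
```

This also shows why the forced-update path calls it as `__speculation_ctrl_update(~tifn, tifn)`: inverting one side makes every bit "differ", guaranteeing the MSR write happens.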
arch/x86/kernel/process.h (+39, new file)
+// SPDX-License-Identifier: GPL-2.0
+//
+// Code shared between 32 and 64 bit
+
+#include <asm/spec-ctrl.h>
+
+void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p);
+
+/*
+ * This needs to be inline to optimize for the common case where no extra
+ * work needs to be done.
+ */
+static inline void switch_to_extra(struct task_struct *prev,
+				   struct task_struct *next)
+{
+	unsigned long next_tif = task_thread_info(next)->flags;
+	unsigned long prev_tif = task_thread_info(prev)->flags;
+
+	if (IS_ENABLED(CONFIG_SMP)) {
+		/*
+		 * Avoid __switch_to_xtra() invocation when conditional
+		 * STIBP is disabled and the only different bit is
+		 * TIF_SPEC_IB. For CONFIG_SMP=n TIF_SPEC_IB is not
+		 * in the TIF_WORK_CTXSW masks.
+		 */
+		if (!static_branch_likely(&switch_to_cond_stibp)) {
+			prev_tif &= ~_TIF_SPEC_IB;
+			next_tif &= ~_TIF_SPEC_IB;
+		}
+	}
+
+	/*
+	 * __switch_to_xtra() handles debug registers, i/o bitmaps,
+	 * speculation mitigations etc.
+	 */
+	if (unlikely(next_tif & _TIF_WORK_CTXSW_NEXT ||
+		     prev_tif & _TIF_WORK_CTXSW_PREV))
+		__switch_to_xtra(prev, next);
+}
arch/x86/kernel/process_32.c (+3, -7)
···
 #include <asm/intel_rdt_sched.h>
 #include <asm/proto.h>

+#include "process.h"
+
 void __show_regs(struct pt_regs *regs, enum show_regs_mode mode)
 {
 	unsigned long cr0 = 0L, cr2 = 0L, cr3 = 0L, cr4 = 0L;
···
 	struct fpu *prev_fpu = &prev->fpu;
 	struct fpu *next_fpu = &next->fpu;
 	int cpu = smp_processor_id();
-	struct tss_struct *tss = &per_cpu(cpu_tss_rw, cpu);

 	/* never put a printk in __switch_to... printk() calls wake_up*() indirectly */
···
 	if (get_kernel_rpl() && unlikely(prev->iopl != next->iopl))
 		set_iopl_mask(next->iopl);

-	/*
-	 * Now maybe handle debug registers and/or IO bitmaps
-	 */
-	if (unlikely(task_thread_info(prev_p)->flags & _TIF_WORK_CTXSW_PREV ||
-		     task_thread_info(next_p)->flags & _TIF_WORK_CTXSW_NEXT))
-		__switch_to_xtra(prev_p, next_p, tss);
+	switch_to_extra(prev_p, next_p);

 	/*
 	 * Leave lazy mode, flushing any hypercalls made here.
arch/x86/kernel/process_64.c (+3, -7)
···
 #include <asm/unistd_32_ia32.h>
 #endif

+#include "process.h"
+
 /* Prints also some state that isn't saved in the pt_regs */
 void __show_regs(struct pt_regs *regs, enum show_regs_mode mode)
 {
···
 	struct fpu *prev_fpu = &prev->fpu;
 	struct fpu *next_fpu = &next->fpu;
 	int cpu = smp_processor_id();
-	struct tss_struct *tss = &per_cpu(cpu_tss_rw, cpu);

 	WARN_ON_ONCE(IS_ENABLED(CONFIG_DEBUG_ENTRY) &&
 		     this_cpu_read(irq_count) != -1);
···
 	/* Reload sp0. */
 	update_task_stack(next_p);

-	/*
-	 * Now maybe reload the debug registers and handle I/O bitmaps
-	 */
-	if (unlikely(task_thread_info(next_p)->flags & _TIF_WORK_CTXSW_NEXT ||
-		     task_thread_info(prev_p)->flags & _TIF_WORK_CTXSW_PREV))
-		__switch_to_xtra(prev_p, next_p, tss);
+	switch_to_extra(prev_p, next_p);

 #ifdef CONFIG_XEN_PV
 	/*
arch/x86/kernel/setup.c (-17)
···
 	unwind_init();
 }

-/*
- * From boot protocol 2.14 onwards we expect the bootloader to set the
- * version to "0x8000 | <used version>". In case we find a version >= 2.14
- * without the 0x8000 we assume the boot loader supports 2.13 only and
- * reset the version accordingly. The 0x8000 flag is removed in any case.
- */
-void __init x86_verify_bootdata_version(void)
-{
-	if (boot_params.hdr.version & VERSION_WRITTEN)
-		boot_params.hdr.version &= ~VERSION_WRITTEN;
-	else if (boot_params.hdr.version >= 0x020e)
-		boot_params.hdr.version = 0x020d;
-
-	if (boot_params.hdr.version < 0x020e)
-		boot_params.hdr.acpi_rsdp_addr = 0;
-}
-
 #ifdef CONFIG_X86_32

 static struct resource video_ram_resource = {
arch/x86/kvm/lapic.c (+6, -1)
···
 #define PRIo64 "o"

 /* #define apic_debug(fmt,arg...) printk(KERN_WARNING fmt,##arg) */
-#define apic_debug(fmt, arg...)
+#define apic_debug(fmt, arg...) do {} while (0)

 /* 14 is the version for Xeon and Pentium 8.4.8*/
 #define APIC_VERSION			(0x14UL | ((KVM_APIC_LVT_NUM - 1) << 16))
···

 	rcu_read_lock();
 	map = rcu_dereference(kvm->arch.apic_map);
+
+	if (unlikely(!map)) {
+		count = -EOPNOTSUPP;
+		goto out;
+	}

 	if (min > map->max_apic_id)
 		goto out;
arch/x86/kvm/mmu.c (+9, -18)
···
 }

 static u64 mmu_pte_write_fetch_gpte(struct kvm_vcpu *vcpu, gpa_t *gpa,
-				    const u8 *new, int *bytes)
+				    int *bytes)
 {
-	u64 gentry;
+	u64 gentry = 0;
 	int r;

 	/*
···
 	/* Handle a 32-bit guest writing two halves of a 64-bit gpte */
 		*gpa &= ~(gpa_t)7;
 		*bytes = 8;
-		r = kvm_vcpu_read_guest(vcpu, *gpa, &gentry, 8);
-		if (r)
-			gentry = 0;
-		new = (const u8 *)&gentry;
 	}

-	switch (*bytes) {
-	case 4:
-		gentry = *(const u32 *)new;
-		break;
-	case 8:
-		gentry = *(const u64 *)new;
-		break;
-	default:
-		gentry = 0;
-		break;
+	if (*bytes == 4 || *bytes == 8) {
+		r = kvm_vcpu_read_guest_atomic(vcpu, *gpa, &gentry, *bytes);
+		if (r)
+			gentry = 0;
 	}

 	return gentry;
···
 	pgprintk("%s: gpa %llx bytes %d\n", __func__, gpa, bytes);

-	gentry = mmu_pte_write_fetch_gpte(vcpu, &gpa, new, &bytes);
-
 	/*
 	 * No need to care whether allocation memory is successful
 	 * or not since pte prefetch is skipped if it does not have
···
 	mmu_topup_memory_caches(vcpu);

 	spin_lock(&vcpu->kvm->mmu_lock);
+
+	gentry = mmu_pte_write_fetch_gpte(vcpu, &gpa, &bytes);
+
 	++vcpu->kvm->stat.mmu_pte_write;
 	kvm_mmu_audit(vcpu, AUDIT_PRE_PTE_WRITE);
arch/x86/kvm/svm.c (+29, -15)
···
 	return vcpu->arch.tsc_offset;
 }

-static void svm_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
+static u64 svm_write_l1_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	u64 g_tsc_offset = 0;
···
 	svm->vmcb->control.tsc_offset = offset + g_tsc_offset;

 	mark_dirty(svm->vmcb, VMCB_INTERCEPTS);
+	return svm->vmcb->control.tsc_offset;
 }

 static void avic_init_vmcb(struct vcpu_svm *svm)
···
 static int avic_init_access_page(struct kvm_vcpu *vcpu)
 {
 	struct kvm *kvm = vcpu->kvm;
-	int ret;
+	int ret = 0;

+	mutex_lock(&kvm->slots_lock);
 	if (kvm->arch.apic_access_page_done)
-		return 0;
+		goto out;

-	ret = x86_set_memory_region(kvm,
-				    APIC_ACCESS_PAGE_PRIVATE_MEMSLOT,
-				    APIC_DEFAULT_PHYS_BASE,
-				    PAGE_SIZE);
+	ret = __x86_set_memory_region(kvm,
+				      APIC_ACCESS_PAGE_PRIVATE_MEMSLOT,
+				      APIC_DEFAULT_PHYS_BASE,
+				      PAGE_SIZE);
 	if (ret)
-		return ret;
+		goto out;

 	kvm->arch.apic_access_page_done = true;
-	return 0;
+out:
+	mutex_unlock(&kvm->slots_lock);
+	return ret;
 }

 static int avic_init_backing_page(struct kvm_vcpu *vcpu)
···
 	return ERR_PTR(err);
 }

+static void svm_clear_current_vmcb(struct vmcb *vmcb)
+{
+	int i;
+
+	for_each_online_cpu(i)
+		cmpxchg(&per_cpu(svm_data, i)->current_vmcb, vmcb, NULL);
+}
+
 static void svm_free_vcpu(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
+
+	/*
+	 * The vmcb page can be recycled, causing a false negative in
+	 * svm_vcpu_load(). So, ensure that no logical CPU has this
+	 * vmcb page recorded as its current vmcb.
+	 */
+	svm_clear_current_vmcb(svm->vmcb);

 	__free_page(pfn_to_page(__sme_clr(svm->vmcb_pa) >> PAGE_SHIFT));
 	__free_pages(virt_to_page(svm->msrpm), MSRPM_ALLOC_ORDER);
···
 	__free_pages(virt_to_page(svm->nested.msrpm), MSRPM_ALLOC_ORDER);
 	kvm_vcpu_uninit(vcpu);
 	kmem_cache_free(kvm_vcpu_cache, svm);
-	/*
-	 * The vmcb page can be recycled, causing a false negative in
-	 * svm_vcpu_load(). So do a full IBPB now.
-	 */
-	indirect_branch_prediction_barrier();
 }

 static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
···
 	.has_wbinvd_exit = svm_has_wbinvd_exit,

 	.read_l1_tsc_offset = svm_read_l1_tsc_offset,
-	.write_tsc_offset = svm_write_tsc_offset,
+	.write_l1_tsc_offset = svm_write_l1_tsc_offset,

 	.set_tdp_cr3 = set_tdp_cr3,
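`svm_clear_current_vmcb()` above replaces an unconditional IBPB with a targeted fix: walk each CPU's `current_vmcb` slot and clear it only if it still points at the vmcb being freed, so a concurrent store of a different vmcb on that CPU is never clobbered. That is exactly what a compare-and-swap gives you. A self-contained sketch of the same pattern using C11 atomics in place of the kernel's `cmpxchg()` (the array and size are invented stand-ins for the per-CPU data):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

#define NCPUS 4

/* Models the per-CPU current_vmcb pointers; values are opaque here. */
static _Atomic(void *) current_vmcb[NCPUS];

/*
 * Sketch of svm_clear_current_vmcb(): atomically clear each slot only
 * if it still holds the vmcb being freed. Slots pointing at other
 * vmcbs are left untouched.
 */
static void clear_current_vmcb(void *vmcb)
{
	for (int i = 0; i < NCPUS; i++) {
		void *expected = vmcb;
		atomic_compare_exchange_strong(&current_vmcb[i],
					       &expected, NULL);
	}
}
```

The compare-and-swap makes the "is it still mine?" test and the clear a single atomic step, which a plain `if (slot == vmcb) slot = NULL;` cannot guarantee against a racing `svm_vcpu_load()`.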
arch/x86/kvm/vmx.c (+65, -33)
···
  * refer SDM volume 3b section 21.6.13 & 22.1.3.
  */
 static unsigned int ple_gap = KVM_DEFAULT_PLE_GAP;
+module_param(ple_gap, uint, 0444);

 static unsigned int ple_window = KVM_VMX_DEFAULT_PLE_WINDOW;
 module_param(ple_window, uint, 0444);
···
         struct shared_msr_entry *guest_msrs;
         int nmsrs;
         int save_nmsrs;
+        bool guest_msrs_dirty;
         unsigned long host_idt_base;
 #ifdef CONFIG_X86_64
         u64 msr_host_kernel_gs_base;
···
 static bool nested_vmx_is_page_fault_vmexit(struct vmcs12 *vmcs12,
                                             u16 error_code);
 static void vmx_update_msr_bitmap(struct kvm_vcpu *vcpu);
-static void __always_inline vmx_disable_intercept_for_msr(unsigned long *msr_bitmap,
+static __always_inline void vmx_disable_intercept_for_msr(unsigned long *msr_bitmap,
                                                           u32 msr, int type);

 static DEFINE_PER_CPU(struct vmcs *, vmxarea);
···
 {
         struct vcpu_vmx *vmx = to_vmx(vcpu);

-        /* We don't support disabling the feature for simplicity. */
-        if (vmx->nested.enlightened_vmcs_enabled)
-                return 0;
-
-        vmx->nested.enlightened_vmcs_enabled = true;
-
         /*
          * vmcs_version represents the range of supported Enlightened VMCS
          * versions: lower 8 bits is the minimal version, higher 8 bits is the
···
          */
         if (vmcs_version)
                 *vmcs_version = (KVM_EVMCS_VERSION << 8) | 1;
+
+        /* We don't support disabling the feature for simplicity. */
+        if (vmx->nested.enlightened_vmcs_enabled)
+                return 0;
+
+        vmx->nested.enlightened_vmcs_enabled = true;

         vmx->nested.msrs.pinbased_ctls_high &= ~EVMCS1_UNSUPPORTED_PINCTRL;
         vmx->nested.msrs.entry_ctls_high &= ~EVMCS1_UNSUPPORTED_VMENTRY_CTRL;
···
         vmx->req_immediate_exit = false;

+        /*
+         * Note that guest MSRs to be saved/restored can also be changed
+         * when guest state is loaded. This happens when guest transitions
+         * to/from long-mode by setting MSR_EFER.LMA.
+         */
+        if (!vmx->loaded_cpu_state || vmx->guest_msrs_dirty) {
+                vmx->guest_msrs_dirty = false;
+                for (i = 0; i < vmx->save_nmsrs; ++i)
+                        kvm_set_shared_msr(vmx->guest_msrs[i].index,
+                                           vmx->guest_msrs[i].data,
+                                           vmx->guest_msrs[i].mask);
+
+        }
+
         if (vmx->loaded_cpu_state)
                 return;

···
                 vmcs_writel(HOST_GS_BASE, gs_base);
                 host_state->gs_base = gs_base;
         }
-
-        for (i = 0; i < vmx->save_nmsrs; ++i)
-                kvm_set_shared_msr(vmx->guest_msrs[i].index,
-                                   vmx->guest_msrs[i].data,
-                                   vmx->guest_msrs[i].mask);
 }

 static void vmx_prepare_switch_to_host(struct vcpu_vmx *vmx)
···
                 move_msr_up(vmx, index, save_nmsrs++);

         vmx->save_nmsrs = save_nmsrs;
+        vmx->guest_msrs_dirty = true;

         if (cpu_has_vmx_msr_bitmap())
                 vmx_update_msr_bitmap(&vmx->vcpu);
···
         return vcpu->arch.tsc_offset;
 }

-/*
- * writes 'offset' into guest's timestamp counter offset register
- */
-static void vmx_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
+static u64 vmx_write_l1_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
 {
+        u64 active_offset = offset;
         if (is_guest_mode(vcpu)) {
                 /*
                  * We're here if L1 chose not to trap WRMSR to TSC. According
···
                  * set for L2 remains unchanged, and still needs to be added
                  * to the newly set TSC to get L2's TSC.
                  */
-                struct vmcs12 *vmcs12;
-                /* recalculate vmcs02.TSC_OFFSET: */
-                vmcs12 = get_vmcs12(vcpu);
-                vmcs_write64(TSC_OFFSET, offset +
-                        (nested_cpu_has(vmcs12, CPU_BASED_USE_TSC_OFFSETING) ?
-                         vmcs12->tsc_offset : 0));
+                struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
+                if (nested_cpu_has(vmcs12, CPU_BASED_USE_TSC_OFFSETING))
+                        active_offset += vmcs12->tsc_offset;
         } else {
                 trace_kvm_write_tsc_offset(vcpu->vcpu_id,
                                            vmcs_read64(TSC_OFFSET), offset);
-                vmcs_write64(TSC_OFFSET, offset);
         }
+
+        vmcs_write64(TSC_OFFSET, active_offset);
+        return active_offset;
 }

 /*
···
         spin_unlock(&vmx_vpid_lock);
 }

-static void __always_inline vmx_disable_intercept_for_msr(unsigned long *msr_bitmap,
+static __always_inline void vmx_disable_intercept_for_msr(unsigned long *msr_bitmap,
                                                           u32 msr, int type)
 {
         int f = sizeof(unsigned long);
···
         }
 }

-static void __always_inline vmx_enable_intercept_for_msr(unsigned long *msr_bitmap,
+static __always_inline void vmx_enable_intercept_for_msr(unsigned long *msr_bitmap,
                                                          u32 msr, int type)
 {
         int f = sizeof(unsigned long);
···
         }
 }

-static void __always_inline vmx_set_intercept_for_msr(unsigned long *msr_bitmap,
+static __always_inline void vmx_set_intercept_for_msr(unsigned long *msr_bitmap,
                                                       u32 msr, int type, bool value)
 {
         if (value)
···
         struct vmcs12 *vmcs12 = vmx->nested.cached_vmcs12;
         struct hv_enlightened_vmcs *evmcs = vmx->nested.hv_evmcs;

-        vmcs12->hdr.revision_id = evmcs->revision_id;
-
         /* HV_VMX_ENLIGHTENED_CLEAN_FIELD_NONE */
         vmcs12->tpr_threshold = evmcs->tpr_threshold;
         vmcs12->guest_rip = evmcs->guest_rip;
···

         vmx->nested.hv_evmcs = kmap(vmx->nested.hv_evmcs_page);

-        if (vmx->nested.hv_evmcs->revision_id != VMCS12_REVISION) {
+        /*
+         * Currently, KVM only supports eVMCS version 1
+         * (== KVM_EVMCS_VERSION) and thus we expect guest to set this
+         * value to first u32 field of eVMCS which should specify eVMCS
+         * VersionNumber.
+         *
+         * Guest should be aware of supported eVMCS versions by host by
+         * examining CPUID.0x4000000A.EAX[0:15]. Host userspace VMM is
+         * expected to set this CPUID leaf according to the value
+         * returned in vmcs_version from nested_enable_evmcs().
+         *
+         * However, it turns out that Microsoft Hyper-V fails to comply
+         * to their own invented interface: When Hyper-V use eVMCS, it
+         * just sets first u32 field of eVMCS to revision_id specified
+         * in MSR_IA32_VMX_BASIC. Instead of used eVMCS version number
+         * which is one of the supported versions specified in
+         * CPUID.0x4000000A.EAX[0:15].
+         *
+         * To overcome Hyper-V bug, we accept here either a supported
+         * eVMCS version or VMCS12 revision_id as valid values for first
+         * u32 field of eVMCS.
+         */
+        if ((vmx->nested.hv_evmcs->revision_id != KVM_EVMCS_VERSION) &&
+            (vmx->nested.hv_evmcs->revision_id != VMCS12_REVISION)) {
                 nested_release_evmcs(vcpu);
                 return 0;
         }
···
                  * present in struct hv_enlightened_vmcs, ...). Make sure there
                  * are no leftovers.
                  */
-                if (from_launch)
-                        memset(vmx->nested.cached_vmcs12, 0,
-                               sizeof(*vmx->nested.cached_vmcs12));
+                if (from_launch) {
+                        struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
+                        memset(vmcs12, 0, sizeof(*vmcs12));
+                        vmcs12->hdr.revision_id = VMCS12_REVISION;
+                }
         }
         return 1;
···
         .has_wbinvd_exit = cpu_has_vmx_wbinvd_exit,

         .read_l1_tsc_offset = vmx_read_l1_tsc_offset,
-        .write_tsc_offset = vmx_write_tsc_offset,
+        .write_l1_tsc_offset = vmx_write_l1_tsc_offset,

         .set_tdp_cr3 = vmx_set_cr3,
···
 #include <linux/export.h>
 #include <linux/cpu.h>
 #include <linux/debugfs.h>
-#include <linux/ptrace.h>

 #include <asm/tlbflush.h>
 #include <asm/mmu_context.h>
···
  *
  *        Implement flush IPI by CALL_FUNCTION_VECTOR, Alex Shi
  */
+
+/*
+ * Use bit 0 to mangle the TIF_SPEC_IB state into the mm pointer which is
+ * stored in cpu_tlb_state.last_user_mm_ibpb.
+ */
+#define LAST_USER_MM_IBPB        0x1UL

 /*
  * We get here when we do something requiring a TLB invalidation
···
         }
 }

-static bool ibpb_needed(struct task_struct *tsk, u64 last_ctx_id)
+static inline unsigned long mm_mangle_tif_spec_ib(struct task_struct *next)
 {
+        unsigned long next_tif = task_thread_info(next)->flags;
+        unsigned long ibpb = (next_tif >> TIF_SPEC_IB) & LAST_USER_MM_IBPB;
+
+        return (unsigned long)next->mm | ibpb;
+}
+
+static void cond_ibpb(struct task_struct *next)
+{
+        if (!next || !next->mm)
+                return;
+
         /*
-         * Check if the current (previous) task has access to the memory
-         * of the @tsk (next) task. If access is denied, make sure to
-         * issue a IBPB to stop user->user Spectre-v2 attacks.
-         *
-         * Note: __ptrace_may_access() returns 0 or -ERRNO.
+         * Both, the conditional and the always IBPB mode use the mm
+         * pointer to avoid the IBPB when switching between tasks of the
+         * same process. Using the mm pointer instead of mm->context.ctx_id
+         * opens a hypothetical hole vs. mm_struct reuse, which is more or
+         * less impossible to control by an attacker. Aside of that it
+         * would only affect the first schedule so the theoretically
+         * exposed data is not really interesting.
          */
-        return (tsk && tsk->mm && tsk->mm->context.ctx_id != last_ctx_id &&
-                ptrace_may_access_sched(tsk, PTRACE_MODE_SPEC_IBPB));
+        if (static_branch_likely(&switch_mm_cond_ibpb)) {
+                unsigned long prev_mm, next_mm;
+
+                /*
+                 * This is a bit more complex than the always mode because
+                 * it has to handle two cases:
+                 *
+                 * 1) Switch from a user space task (potential attacker)
+                 *    which has TIF_SPEC_IB set to a user space task
+                 *    (potential victim) which has TIF_SPEC_IB not set.
+                 *
+                 * 2) Switch from a user space task (potential attacker)
+                 *    which has TIF_SPEC_IB not set to a user space task
+                 *    (potential victim) which has TIF_SPEC_IB set.
+                 *
+                 * This could be done by unconditionally issuing IBPB when
+                 * a task which has TIF_SPEC_IB set is either scheduled in
+                 * or out. Though that results in two flushes when:
+                 *
+                 * - the same user space task is scheduled out and later
+                 *   scheduled in again and only a kernel thread ran in
+                 *   between.
+                 *
+                 * - a user space task belonging to the same process is
+                 *   scheduled in after a kernel thread ran in between
+                 *
+                 * - a user space task belonging to the same process is
+                 *   scheduled in immediately.
+                 *
+                 * Optimize this with reasonably small overhead for the
+                 * above cases. Mangle the TIF_SPEC_IB bit into the mm
+                 * pointer of the incoming task which is stored in
+                 * cpu_tlbstate.last_user_mm_ibpb for comparison.
+                 */
+                next_mm = mm_mangle_tif_spec_ib(next);
+                prev_mm = this_cpu_read(cpu_tlbstate.last_user_mm_ibpb);
+
+                /*
+                 * Issue IBPB only if the mm's are different and one or
+                 * both have the IBPB bit set.
+                 */
+                if (next_mm != prev_mm &&
+                    (next_mm | prev_mm) & LAST_USER_MM_IBPB)
+                        indirect_branch_prediction_barrier();
+
+                this_cpu_write(cpu_tlbstate.last_user_mm_ibpb, next_mm);
+        }
+
+        if (static_branch_unlikely(&switch_mm_always_ibpb)) {
+                /*
+                 * Only flush when switching to a user space task with a
+                 * different context than the user space task which ran
+                 * last on this CPU.
+                 */
+                if (this_cpu_read(cpu_tlbstate.last_user_mm) != next->mm) {
+                        indirect_branch_prediction_barrier();
+                        this_cpu_write(cpu_tlbstate.last_user_mm, next->mm);
+                }
+        }
 }

 void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
···
                 new_asid = prev_asid;
                 need_flush = true;
         } else {
-                u64 last_ctx_id = this_cpu_read(cpu_tlbstate.last_ctx_id);
-
                 /*
                  * Avoid user/user BTB poisoning by flushing the branch
                  * predictor when switching between processes. This stops
                  * one process from doing Spectre-v2 attacks on another.
-                 *
-                 * As an optimization, flush indirect branches only when
-                 * switching into a processes that can't be ptrace by the
-                 * current one (as in such case, attacker has much more
-                 * convenient way how to tamper with the next process than
-                 * branch buffer poisoning).
                  */
-                if (static_cpu_has(X86_FEATURE_USE_IBPB) &&
-                                ibpb_needed(tsk, last_ctx_id))
-                        indirect_branch_prediction_barrier();
+                cond_ibpb(tsk);

                 if (IS_ENABLED(CONFIG_VMAP_STACK)) {
                         /*
···
                 /* See above wrt _rcuidle. */
                 trace_tlb_flush_rcuidle(TLB_FLUSH_ON_TASK_SWITCH, 0);
         }
-
-        /*
-         * Record last user mm's context id, so we can avoid
-         * flushing branch buffer with IBPB if we switch back
-         * to the same user.
-         */
-        if (next != &init_mm)
-                this_cpu_write(cpu_tlbstate.last_ctx_id, next->context.ctx_id);

         /* Make sure we write CR3 before loaded_mm. */
         barrier();
···
         write_cr3(build_cr3(mm->pgd, 0));

         /* Reinitialize tlbstate. */
-        this_cpu_write(cpu_tlbstate.last_ctx_id, mm->context.ctx_id);
+        this_cpu_write(cpu_tlbstate.last_user_mm_ibpb, LAST_USER_MM_IBPB);
         this_cpu_write(cpu_tlbstate.loaded_mm_asid, 0);
         this_cpu_write(cpu_tlbstate.next_asid, 1);
         this_cpu_write(cpu_tlbstate.ctxs[0].ctx_id, mm->context.ctx_id);
arch/x86/xen/enlighten.c (-78)
···
 #include <xen/xen.h>
 #include <xen/features.h>
 #include <xen/page.h>
-#include <xen/interface/memory.h>

 #include <asm/xen/hypercall.h>
 #include <asm/xen/hypervisor.h>
···
 }
 EXPORT_SYMBOL(xen_arch_unregister_cpu);
 #endif
-
-#ifdef CONFIG_XEN_BALLOON_MEMORY_HOTPLUG
-void __init arch_xen_balloon_init(struct resource *hostmem_resource)
-{
-        struct xen_memory_map memmap;
-        int rc;
-        unsigned int i, last_guest_ram;
-        phys_addr_t max_addr = PFN_PHYS(max_pfn);
-        struct e820_table *xen_e820_table;
-        const struct e820_entry *entry;
-        struct resource *res;
-
-        if (!xen_initial_domain())
-                return;
-
-        xen_e820_table = kmalloc(sizeof(*xen_e820_table), GFP_KERNEL);
-        if (!xen_e820_table)
-                return;
-
-        memmap.nr_entries = ARRAY_SIZE(xen_e820_table->entries);
-        set_xen_guest_handle(memmap.buffer, xen_e820_table->entries);
-        rc = HYPERVISOR_memory_op(XENMEM_machine_memory_map, &memmap);
-        if (rc) {
-                pr_warn("%s: Can't read host e820 (%d)\n", __func__, rc);
-                goto out;
-        }
-
-        last_guest_ram = 0;
-        for (i = 0; i < memmap.nr_entries; i++) {
-                if (xen_e820_table->entries[i].addr >= max_addr)
-                        break;
-                if (xen_e820_table->entries[i].type == E820_TYPE_RAM)
-                        last_guest_ram = i;
-        }
-
-        entry = &xen_e820_table->entries[last_guest_ram];
-        if (max_addr >= entry->addr + entry->size)
-                goto out; /* No unallocated host RAM. */
-
-        hostmem_resource->start = max_addr;
-        hostmem_resource->end = entry->addr + entry->size;
-
-        /*
-         * Mark non-RAM regions between the end of dom0 RAM and end of host RAM
-         * as unavailable. The rest of that region can be used for hotplug-based
-         * ballooning.
-         */
-        for (; i < memmap.nr_entries; i++) {
-                entry = &xen_e820_table->entries[i];
-
-                if (entry->type == E820_TYPE_RAM)
-                        continue;
-
-                if (entry->addr >= hostmem_resource->end)
-                        break;
-
-                res = kzalloc(sizeof(*res), GFP_KERNEL);
-                if (!res)
-                        goto out;
-
-                res->name = "Unavailable host RAM";
-                res->start = entry->addr;
-                res->end = (entry->addr + entry->size < hostmem_resource->end) ?
-                           entry->addr + entry->size : hostmem_resource->end;
-                rc = insert_resource(hostmem_resource, res);
-                if (rc) {
-                        pr_warn("%s: Can't insert [%llx - %llx) (%d)\n",
-                                __func__, res->start, res->end, rc);
-                        kfree(res);
-                        goto out;
-                }
-        }
-
- out:
-        kfree(xen_e820_table);
-}
-#endif /* CONFIG_XEN_BALLOON_MEMORY_HOTPLUG */
arch/x86/xen/multicalls.c (+20, -15)
···

         trace_xen_mc_flush(b->mcidx, b->argidx, b->cbidx);

+#if MC_DEBUG
+        memcpy(b->debug, b->entries,
+               b->mcidx * sizeof(struct multicall_entry));
+#endif
+
         switch (b->mcidx) {
         case 0:
                 /* no-op */
···
                 break;

         default:
-#if MC_DEBUG
-                memcpy(b->debug, b->entries,
-                       b->mcidx * sizeof(struct multicall_entry));
-#endif
-
                 if (HYPERVISOR_multicall(b->entries, b->mcidx) != 0)
                         BUG();
                 for (i = 0; i < b->mcidx; i++)
                         if (b->entries[i].result < 0)
                                 ret++;
+        }

+        if (WARN_ON(ret)) {
+                pr_err("%d of %d multicall(s) failed: cpu %d\n",
+                       ret, b->mcidx, smp_processor_id());
+                for (i = 0; i < b->mcidx; i++) {
+                        if (b->entries[i].result < 0) {
 #if MC_DEBUG
-        if (ret) {
-                printk(KERN_ERR "%d multicall(s) failed: cpu %d\n",
-                       ret, smp_processor_id());
-                dump_stack();
-                for (i = 0; i < b->mcidx; i++) {
-                        printk(KERN_DEBUG "  call %2d/%d: op=%lu arg=[%lx] result=%ld\t%pF\n",
-                               i+1, b->mcidx,
+                                pr_err("  call %2d: op=%lu arg=[%lx] result=%ld\t%pF\n",
+                                       i + 1,
                                        b->debug[i].op,
                                        b->debug[i].args[0],
                                        b->entries[i].result,
                                        b->caller[i]);
+#else
+                                pr_err("  call %2d: op=%lu arg=[%lx] result=%ld\n",
+                                       i + 1,
+                                       b->entries[i].op,
+                                       b->entries[i].args[0],
+                                       b->entries[i].result);
+#endif
                         }
                 }
-#endif
         }

         b->mcidx = 0;
···
         b->cbidx = 0;

         local_irq_restore(flags);
-
-        WARN_ON(ret);
 }

 struct multicall_space __xen_mc_entry(size_t args)
···
  * Split spinlock implementation out into its own file, so it can be
  * compiled in a FTRACE-compatible way.
  */
-#include <linux/kernel_stat.h>
+#include <linux/kernel.h>
 #include <linux/spinlock.h>
-#include <linux/debugfs.h>
-#include <linux/log2.h>
-#include <linux/gfp.h>
 #include <linux/slab.h>
 #include <linux/atomic.h>

 #include <asm/paravirt.h>
 #include <asm/qspinlock.h>

-#include <xen/interface/xen.h>
 #include <xen/events.h>

 #include "xen-ops.h"
-#include "debugfs.h"

 static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
 static DEFINE_PER_CPU(char *, irq_name);
···

         func_enter ();

-        fs_dprintk (FS_DEBUG_INIT, "Inititing queue at %x: %d entries:\n",
+        fs_dprintk (FS_DEBUG_INIT, "Initializing queue at %x: %d entries:\n",
                     queue, nentries);

         p = aligned_kmalloc (sz, GFP_KERNEL, 0x10);
···
 {
         func_enter ();

-        fs_dprintk (FS_DEBUG_INIT, "Inititing free pool at %x:\n", queue);
+        fs_dprintk (FS_DEBUG_INIT, "Initializing free pool at %x:\n", queue);

         write_fs (dev, FP_CNF(queue), (bufsize * RBFP_RBS) | RBFP_RBSVAL | RBFP_CME);
         write_fs (dev, FP_SA(queue), 0);
drivers/base/devres.c (+8, -2)
···

 struct devres {
         struct devres_node              node;
-        /* -- 3 pointers */
-        unsigned long long              data[]; /* guarantee ull alignment */
+        /*
+         * Some archs want to perform DMA into kmalloc caches
+         * and need a guaranteed alignment larger than
+         * the alignment of a 64-bit integer.
+         * Thus we use ARCH_KMALLOC_MINALIGN here and get exactly the same
+         * buffer alignment as if it was allocated by plain kmalloc().
+         */
+        u8 __aligned(ARCH_KMALLOC_MINALIGN) data[];
 };

 struct devres_group {
···
 static DEFINE_SPINLOCK(efi_mem_reserve_persistent_lock);
 static struct linux_efi_memreserve *efi_memreserve_root __ro_after_init;

-int efi_mem_reserve_persistent(phys_addr_t addr, u64 size)
+static int __init efi_memreserve_map_root(void)
+{
+        if (efi.mem_reserve == EFI_INVALID_TABLE_ADDR)
+                return -ENODEV;
+
+        efi_memreserve_root = memremap(efi.mem_reserve,
+                                       sizeof(*efi_memreserve_root),
+                                       MEMREMAP_WB);
+        if (WARN_ON_ONCE(!efi_memreserve_root))
+                return -ENOMEM;
+        return 0;
+}
+
+int __ref efi_mem_reserve_persistent(phys_addr_t addr, u64 size)
 {
         struct linux_efi_memreserve *rsv;
+        int rc;

-        if (!efi_memreserve_root)
+        if (efi_memreserve_root == (void *)ULONG_MAX)
                 return -ENODEV;
+
+        if (!efi_memreserve_root) {
+                rc = efi_memreserve_map_root();
+                if (rc)
+                        return rc;
+        }

         rsv = kmalloc(sizeof(*rsv), GFP_ATOMIC);
         if (!rsv)
···

 static int __init efi_memreserve_root_init(void)
 {
-        if (efi.mem_reserve == EFI_INVALID_TABLE_ADDR)
-                return -ENODEV;
-
-        efi_memreserve_root = memremap(efi.mem_reserve,
-                                       sizeof(*efi_memreserve_root),
-                                       MEMREMAP_WB);
-        if (!efi_memreserve_root)
-                return -ENOMEM;
+        if (efi_memreserve_root)
+                return 0;
+        if (efi_memreserve_map_root())
+                efi_memreserve_root = (void *)ULONG_MAX;
         return 0;
 }
 early_initcall(efi_memreserve_root_init);
drivers/fsi/Kconfig (+1)
···
         tristate "FSI master based on Aspeed ColdFire coprocessor"
         depends on GPIOLIB
         depends on GPIO_ASPEED
+        select GENERIC_ALLOCATOR
         ---help---
         This option enables a FSI master using the AST2400 and AST2500 GPIO
         lines driven by the internal ColdFire coprocessor. This requires
···
         if (lut_sel == VIU_LUT_OSD_OETF) {
                 writel(0, priv->io_base + _REG(addr_port));

-                for (i = 0; i < 20; i++)
+                for (i = 0; i < (OSD_OETF_LUT_SIZE / 2); i++)
                         writel(r_map[i * 2] | (r_map[i * 2 + 1] << 16),
                                priv->io_base + _REG(data_port));

                 writel(r_map[OSD_OETF_LUT_SIZE - 1] | (g_map[0] << 16),
                        priv->io_base + _REG(data_port));

-                for (i = 0; i < 20; i++)
+                for (i = 0; i < (OSD_OETF_LUT_SIZE / 2); i++)
                         writel(g_map[i * 2 + 1] | (g_map[i * 2 + 2] << 16),
                                priv->io_base + _REG(data_port));

-                for (i = 0; i < 20; i++)
+                for (i = 0; i < (OSD_OETF_LUT_SIZE / 2); i++)
                         writel(b_map[i * 2] | (b_map[i * 2 + 1] << 16),
                                priv->io_base + _REG(data_port));
···
         } else if (lut_sel == VIU_LUT_OSD_EOTF) {
                 writel(0, priv->io_base + _REG(addr_port));

-                for (i = 0; i < 20; i++)
+                for (i = 0; i < (OSD_EOTF_LUT_SIZE / 2); i++)
                         writel(r_map[i * 2] | (r_map[i * 2 + 1] << 16),
                                priv->io_base + _REG(data_port));

                 writel(r_map[OSD_EOTF_LUT_SIZE - 1] | (g_map[0] << 16),
                        priv->io_base + _REG(data_port));

-                for (i = 0; i < 20; i++)
+                for (i = 0; i < (OSD_EOTF_LUT_SIZE / 2); i++)
                         writel(g_map[i * 2 + 1] | (g_map[i * 2 + 2] << 16),
                                priv->io_base + _REG(data_port));

-                for (i = 0; i < 20; i++)
+                for (i = 0; i < (OSD_EOTF_LUT_SIZE / 2); i++)
                         writel(b_map[i * 2] | (b_map[i * 2 + 1] << 16),
                                priv->io_base + _REG(data_port));
drivers/gpu/drm/rcar-du/rcar_du_group.c (+18, -3)
···

 static void __rcar_du_group_start_stop(struct rcar_du_group *rgrp, bool start)
 {
-        struct rcar_du_crtc *rcrtc = &rgrp->dev->crtcs[rgrp->index * 2];
+        struct rcar_du_device *rcdu = rgrp->dev;

-        rcar_du_crtc_dsysr_clr_set(rcrtc, DSYSR_DRES | DSYSR_DEN,
-                                   start ? DSYSR_DEN : DSYSR_DRES);
+        /*
+         * Group start/stop is controlled by the DRES and DEN bits of DSYSR0
+         * for the first group and DSYSR2 for the second group. On most DU
+         * instances, this maps to the first CRTC of the group, and we can just
+         * use rcar_du_crtc_dsysr_clr_set() to access the correct DSYSR. On
+         * M3-N, however, DU2 doesn't exist, but DSYSR2 does. We thus need to
+         * access the register directly using group read/write.
+         */
+        if (rcdu->info->channels_mask & BIT(rgrp->index * 2)) {
+                struct rcar_du_crtc *rcrtc = &rgrp->dev->crtcs[rgrp->index * 2];
+
+                rcar_du_crtc_dsysr_clr_set(rcrtc, DSYSR_DRES | DSYSR_DEN,
+                                           start ? DSYSR_DEN : DSYSR_DRES);
+        } else {
+                rcar_du_group_write(rgrp, DSYSR,
+                                    start ? DSYSR_DEN : DSYSR_DRES);
+        }
 }

 void rcar_du_group_start_stop(struct rcar_du_group *rgrp, bool start)
drivers/hid/hid-sensor-custom.c (+1, -1)
···
                                         sensor_inst->hsdev,
                                         sensor_inst->hsdev->usage,
                                         usage, report_id,
-                                        SENSOR_HUB_SYNC);
+                                        SENSOR_HUB_SYNC, false);
         } else if (!strncmp(name, "units", strlen("units")))
                 value = sensor_inst->fields[field_index].attribute.units;
         else if (!strncmp(name, "unit-expo", strlen("unit-expo")))
···
                                      IB_MR_CHECK_SIG_STATUS, &mr_status);
         if (ret) {
                 pr_err("ib_check_mr_status failed, ret %d\n", ret);
-                goto err;
+                /* Not a lot we can do, return ambiguous guard error */
+                *sector = 0;
+                return 0x1;
         }

         if (mr_status.fail_status & IB_MR_CHECK_SIG_STATUS) {
···
         }

         return 0;
-err:
-        /* Not alot we can do here, return ambiguous guard error */
-        return 0x1;
 }

 void iser_err_comp(struct ib_wc *wc, const char *type)
···
 }

 /**
- * i40e_add_xsk_umem - Store an UMEM for a certain ring/qid
+ * i40e_add_xsk_umem - Store a UMEM for a certain ring/qid
  * @vsi: Current VSI
  * @umem: UMEM to store
  * @qid: Ring/qid to associate with the UMEM
···
 }

 /**
- * i40e_remove_xsk_umem - Remove an UMEM for a certain ring/qid
+ * i40e_remove_xsk_umem - Remove a UMEM for a certain ring/qid
  * @vsi: Current VSI
  * @qid: Ring/qid associated with the UMEM
  **/
···
 }

 /**
- * i40e_xsk_umem_enable - Enable/associate an UMEM to a certain ring/qid
+ * i40e_xsk_umem_enable - Enable/associate a UMEM to a certain ring/qid
  * @vsi: Current VSI
  * @umem: UMEM
  * @qid: Rx ring to associate UMEM to
···
 }

 /**
- * i40e_xsk_umem_disable - Diassociate an UMEM from a certain ring/qid
+ * i40e_xsk_umem_disable - Disassociate a UMEM from a certain ring/qid
  * @vsi: Current VSI
  * @qid: Rx ring to associate UMEM to
  *
···
 }

 /**
- * i40e_xsk_umem_query - Queries a certain ring/qid for its UMEM
+ * i40e_xsk_umem_setup - Enable/disassociate a UMEM to/from a ring/qid
  * @vsi: Current VSI
  * @umem: UMEM to enable/associate to a ring, or NULL to disable
  * @qid: Rx ring to (dis)associate UMEM (from)to
  *
- * This function enables or disables an UMEM to a certain ring.
+ * This function enables or disables a UMEM to a certain ring.
  *
  * Returns 0 on success, <0 on failure
  **/
···
  * @rx_ring: Rx ring
  * @xdp: xdp_buff used as input to the XDP program
  *
- * This function enables or disables an UMEM to a certain ring.
+ * This function enables or disables a UMEM to a certain ring.
  *
  * Returns any of I40E_XDP_{PASS, CONSUMED, TX, REDIR}
  **/
drivers/net/ethernet/intel/igb/e1000_i210.c (+1)
···
                 nvm_word = E1000_INVM_DEFAULT_AL;
         tmp_nvm = nvm_word | E1000_INVM_PLL_WO_VAL;
         igb_write_phy_reg_82580(hw, I347AT4_PAGE_SELECT, E1000_PHY_PLL_FREQ_PAGE);
+        phy_word = E1000_PHY_PLL_UNCONF;
         for (i = 0; i < E1000_MAX_PLL_TRIES; i++) {
                 /* check current state directly from internal PHY */
                 igb_read_phy_reg_82580(hw, E1000_PHY_PLL_FREQ_REG, &phy_word);
···
         "no error",
         "length error",
         "function disabled",
-        "VF sent command to attnetion address",
+        "VF sent command to attention address",
         "host sent prod update command",
         "read of during interrupt register while in MIMD mode",
         "access to PXP BAR reserved address",
···
         new_driver->mdiodrv.driver.remove = phy_remove;
         new_driver->mdiodrv.driver.owner = owner;

+        /* The following works around an issue where the PHY driver doesn't bind
+         * to the device, resulting in the genphy driver being used instead of
+         * the dedicated driver. The root cause of the issue isn't known yet
+         * and seems to be in the base driver core. Once this is fixed we may
+         * remove this workaround.
+         */
+        new_driver->mdiodrv.driver.probe_type = PROBE_FORCE_SYNCHRONOUS;
+
         retval = driver_register(&new_driver->mdiodrv.driver);
         if (retval) {
                 pr_err("%s: Error %d in registering driver\n",
drivers/net/rionet.c (+1, -1)
···
                          * it just report sending a packet to the target
                          * (without actual packet transfer).
                          */
-                        dev_kfree_skb_any(skb);
                         ndev->stats.tx_packets++;
                         ndev->stats.tx_bytes += skb->len;
+                        dev_kfree_skb_any(skb);
                 }
         }
···
         struct nvme_ns *ns, *next;
         LIST_HEAD(ns_list);

+        /* prevent racing with ns scanning */
+        flush_work(&ctrl->scan_work);
+
         /*
          * The dead states indicates the controller was not gracefully
          * disconnected. In that case, we won't be able to flush any data while
···
         nvme_mpath_stop(ctrl);
         nvme_stop_keep_alive(ctrl);
         flush_work(&ctrl->async_event_work);
-        flush_work(&ctrl->scan_work);
         cancel_work_sync(&ctrl->fw_act_work);
         if (ctrl->ops->stop_ctrl)
                 ctrl->ops->stop_ctrl(ctrl);
···

         return 0;
 out_free_name:
-        kfree_const(dev->kobj.name);
+        kfree_const(ctrl->device->kobj.name);
 out_release_instance:
         ida_simple_remove(&nvme_instance_ida, ctrl->instance);
 out:
···
         down_read(&ctrl->namespaces_rwsem);

         /* Forcibly unquiesce queues to avoid blocking dispatch */
-        if (ctrl->admin_q)
+        if (ctrl->admin_q && !blk_queue_dying(ctrl->admin_q))
                 blk_mq_unquiesce_queue(ctrl->admin_q);

         list_for_each_entry(ns, &ctrl->namespaces, list)
···
         int i;

         for (i = 0; i < PCIE_IATU_NUM; i++)
-                dw_pcie_disable_atu(pcie->pci, DW_PCIE_REGION_OUTBOUND, i);
+                dw_pcie_disable_atu(pcie->pci, i, DW_PCIE_REGION_OUTBOUND);
 }

 static int ls1021_pcie_link_up(struct dw_pcie *pci)
···
         u32 lnkcap2, lnkcap;

         /*
-         * PCIe r4.0 sec 7.5.3.18 recommends using the Supported Link
-         * Speeds Vector in Link Capabilities 2 when supported, falling
-         * back to Max Link Speed in Link Capabilities otherwise.
+         * Link Capabilities 2 was added in PCIe r3.0, sec 7.8.18. The
+         * implementation note there recommends using the Supported Link
+         * Speeds Vector in Link Capabilities 2 when supported.
+         *
+         * Without Link Capabilities 2, i.e., prior to PCIe r3.0, software
+         * should use the Supported Link Speeds field in Link Capabilities,
+         * where only 2.5 GT/s and 5.0 GT/s speeds were defined.
          */
         pcie_capability_read_dword(dev, PCI_EXP_LNKCAP2, &lnkcap2);
         if (lnkcap2) { /* PCIe r3.0-compliant */
···
         }

         pcie_capability_read_dword(dev, PCI_EXP_LNKCAP, &lnkcap);
-        if (lnkcap) {
-                if (lnkcap & PCI_EXP_LNKCAP_SLS_16_0GB)
-                        return PCIE_SPEED_16_0GT;
-                else if (lnkcap & PCI_EXP_LNKCAP_SLS_8_0GB)
-                        return PCIE_SPEED_8_0GT;
-                else if (lnkcap & PCI_EXP_LNKCAP_SLS_5_0GB)
-                        return PCIE_SPEED_5_0GT;
-                else if (lnkcap & PCI_EXP_LNKCAP_SLS_2_5GB)
-                        return PCIE_SPEED_2_5GT;
-        }
+        if ((lnkcap & PCI_EXP_LNKCAP_SLS) == PCI_EXP_LNKCAP_SLS_5_0GB)
+                return PCIE_SPEED_5_0GT;
+        else if ((lnkcap & PCI_EXP_LNKCAP_SLS) == PCI_EXP_LNKCAP_SLS_2_5GB)
+                return PCIE_SPEED_2_5GT;

         return PCI_SPEED_UNKNOWN;
 }
+11-9
drivers/phy/qualcomm/phy-qcom-qusb2.c
···231231 .mask_core_ready = CORE_READY_STATUS,232232 .has_pll_override = true,233233 .autoresume_en = BIT(0),234234+ .update_tune1_with_efuse = true,234235};235236236237static const char * const qusb2_phy_vreg_names[] = {···403402404403 /*405404 * Read efuse register having TUNE2/1 parameter's high nibble.406406- * If efuse register shows value as 0x0, or if we fail to find407407- * a valid efuse register settings, then use default value408408- * as 0xB for high nibble that we have already set while409409- * configuring phy.405405+ * If efuse register shows value as 0x0 (indicating value is not406406+ * fused), or if we fail to find a valid efuse register setting,407407+ * then use default value for high nibble that we have already408408+ * set while configuring the phy.410409 */411410 val = nvmem_cell_read(qphy->cell, NULL);412411 if (IS_ERR(val) || !val[0]) {···416415417416 /* Fused TUNE1/2 value is the higher nibble only */418417 if (cfg->update_tune1_with_efuse)419419- qusb2_setbits(qphy->base, cfg->regs[QUSB2PHY_PORT_TUNE1],420420- val[0] << 0x4);418418+ qusb2_write_mask(qphy->base, cfg->regs[QUSB2PHY_PORT_TUNE1],419419+ val[0] << HSTX_TRIM_SHIFT,420420+ HSTX_TRIM_MASK);421421 else422422- qusb2_setbits(qphy->base, cfg->regs[QUSB2PHY_PORT_TUNE2],423423- val[0] << 0x4);424424-422422+ qusb2_write_mask(qphy->base, cfg->regs[QUSB2PHY_PORT_TUNE2],423423+ val[0] << HSTX_TRIM_SHIFT,424424+ HSTX_TRIM_MASK);425425}426426427427static int qusb2_phy_set_mode(struct phy *phy, enum phy_mode mode)
+2-1
drivers/phy/socionext/Kconfig
···26262727config PHY_UNIPHIER_PCIE2828 tristate "Uniphier PHY driver for PCIe controller"2929- depends on (ARCH_UNIPHIER || COMPILE_TEST) && OF2929+ depends on ARCH_UNIPHIER || COMPILE_TEST3030+ depends on OF && HAS_IOMEM3031 default PCIE_UNIPHIER3132 select GENERIC_PHY3233 help
+1-1
drivers/rtc/rtc-hid-sensor-time.c
···213213 /* get a report with all values through requesting one value */214214 sensor_hub_input_attr_get_raw_value(time_state->common_attributes.hsdev,215215 HID_USAGE_SENSOR_TIME, hid_time_addresses[0],216216- time_state->info[0].report_id, SENSOR_HUB_SYNC);216216+ time_state->info[0].report_id, SENSOR_HUB_SYNC, false);217217 /* wait for all values (event) */218218 ret = wait_for_completion_killable_timeout(219219 &time_state->comp_last_time, HZ*6);
+4-2
drivers/s390/cio/vfio_ccw_cp.c
···387387 * orb specified one of the unsupported formats, we defer388388 * checking for IDAWs in unsupported formats to here.389389 */390390- if ((!cp->orb.cmd.c64 || cp->orb.cmd.i2k) && ccw_is_idal(ccw))390390+ if ((!cp->orb.cmd.c64 || cp->orb.cmd.i2k) && ccw_is_idal(ccw)) {391391+ kfree(p);391392 return -EOPNOTSUPP;393393+ }392394393395 if ((!ccw_is_chain(ccw)) && (!ccw_is_tic(ccw)))394396 break;···530528531529 ret = pfn_array_alloc_pin(pat->pat_pa, cp->mdev, ccw->cda, ccw->count);532530 if (ret < 0)533533- goto out_init;531531+ goto out_unpin;534532535533 /* Translate this direct ccw to a idal ccw. */536534 idaws = kcalloc(ret, sizeof(*idaws), GFP_DMA | GFP_KERNEL);
+5-5
drivers/s390/cio/vfio_ccw_drv.c
···2222#include "vfio_ccw_private.h"23232424struct workqueue_struct *vfio_ccw_work_q;2525-struct kmem_cache *vfio_ccw_io_region;2525+static struct kmem_cache *vfio_ccw_io_region;26262727/*2828 * Helpers···134134 if (ret)135135 goto out_free;136136137137- ret = vfio_ccw_mdev_reg(sch);138138- if (ret)139139- goto out_disable;140140-141137 INIT_WORK(&private->io_work, vfio_ccw_sch_io_todo);142138 atomic_set(&private->avail, 1);143139 private->state = VFIO_CCW_STATE_STANDBY;140140+141141+ ret = vfio_ccw_mdev_reg(sch);142142+ if (ret)143143+ goto out_disable;144144145145 return 0;146146
+4-4
drivers/s390/crypto/ap_bus.c
···775775 drvres = ap_drv->flags & AP_DRIVER_FLAG_DEFAULT;776776 if (!!devres != !!drvres)777777 return -ENODEV;778778+ /* (re-)init queue's state machine */779779+ ap_queue_reinit_state(to_ap_queue(dev));778780 }779781780782 /* Add queue/card to list of active queues/cards */···809807 struct ap_device *ap_dev = to_ap_dev(dev);810808 struct ap_driver *ap_drv = ap_dev->drv;811809810810+ if (is_queue_dev(dev))811811+ ap_queue_remove(to_ap_queue(dev));812812 if (ap_drv->remove)813813 ap_drv->remove(ap_dev);814814···14481444 aq->ap_dev.device.parent = &ac->ap_dev.device;14491445 dev_set_name(&aq->ap_dev.device,14501446 "%02x.%04x", id, dom);14511451- /* Start with a device reset */14521452- spin_lock_bh(&aq->lock);14531453- ap_wait(ap_sm_event(aq, AP_EVENT_POLL));14541454- spin_unlock_bh(&aq->lock);14551447 /* Register device */14561448 rc = device_register(&aq->ap_dev.device);14571449 if (rc) {
+1
drivers/s390/crypto/ap_bus.h
···254254void ap_queue_remove(struct ap_queue *aq);255255void ap_queue_suspend(struct ap_device *ap_dev);256256void ap_queue_resume(struct ap_device *ap_dev);257257+void ap_queue_reinit_state(struct ap_queue *aq);257258258259struct ap_card *ap_card_create(int id, int queue_depth, int raw_device_type,259260 int comp_device_type, unsigned int functions);
···28432843 return ni_ao_arm(dev, s);28442844 case INSN_CONFIG_GET_CMD_TIMING_CONSTRAINTS:28452845 /* we don't care about actual channels */28462846- data[1] = board->ao_speed;28462846+ /* data[3] : chanlist_len */28472847+ data[1] = board->ao_speed * data[3];28472848 data[2] = 0;28482849 return 0;28492850 default:
+11-11
drivers/staging/media/sunxi/cedrus/cedrus.c
···108108 unsigned int count;109109 unsigned int i;110110111111- count = vb2_request_buffer_cnt(req);112112- if (!count) {113113- v4l2_info(&ctx->dev->v4l2_dev,114114- "No buffer was provided with the request\n");115115- return -ENOENT;116116- } else if (count > 1) {117117- v4l2_info(&ctx->dev->v4l2_dev,118118- "More than one buffer was provided with the request\n");119119- return -EINVAL;120120- }121121-122111 list_for_each_entry(obj, &req->objects, list) {123112 struct vb2_buffer *vb;124113···121132122133 if (!ctx)123134 return -ENOENT;135135+136136+ count = vb2_request_buffer_cnt(req);137137+ if (!count) {138138+ v4l2_info(&ctx->dev->v4l2_dev,139139+ "No buffer was provided with the request\n");140140+ return -ENOENT;141141+ } else if (count > 1) {142142+ v4l2_info(&ctx->dev->v4l2_dev,143143+ "More than one buffer was provided with the request\n");144144+ return -EINVAL;145145+ }124146125147 parent_hdl = &ctx->hdl;126148
+1-1
drivers/staging/most/core.c
···351351352352 for (i = 0; i < ARRAY_SIZE(ch_data_type); i++) {353353 if (c->cfg.data_type & ch_data_type[i].most_ch_data_type)354354- return snprintf(buf, PAGE_SIZE, ch_data_type[i].name);354354+ return snprintf(buf, PAGE_SIZE, "%s", ch_data_type[i].name);355355 }356356 return snprintf(buf, PAGE_SIZE, "unconfigured\n");357357}
···17951795 struct vchiq_await_completion32 args32;17961796 struct vchiq_completion_data32 completion32;17971797 unsigned int *msgbufcount32;17981798+ unsigned int msgbufcount_native;17981799 compat_uptr_t msgbuf32;17991800 void *msgbuf;18001801 void **msgbufptr;···19071906 sizeof(completion32)))19081907 return -EFAULT;1909190819101910- args32.msgbufcount--;19091909+ if (get_user(msgbufcount_native, &args->msgbufcount))19101910+ return -EFAULT;19111911+19121912+ if (!msgbufcount_native)19131913+ args32.msgbufcount--;1911191419121915 msgbufcount32 =19131916 &((struct vchiq_await_completion32 __user *)arg)->msgbufcount;
+38-2
drivers/thunderbolt/switch.c
···863863}864864static DEVICE_ATTR(key, 0600, key_show, key_store);865865866866+static void nvm_authenticate_start(struct tb_switch *sw)867867+{868868+ struct pci_dev *root_port;869869+870870+ /*871871+ * During host router NVM upgrade we should not allow root port to872872+ * go into D3cold because some root ports cannot trigger PME873873+ * itself. To be on the safe side keep the root port in D0 during874874+ * the whole upgrade process.875875+ */876876+ root_port = pci_find_pcie_root_port(sw->tb->nhi->pdev);877877+ if (root_port)878878+ pm_runtime_get_noresume(&root_port->dev);879879+}880880+881881+static void nvm_authenticate_complete(struct tb_switch *sw)882882+{883883+ struct pci_dev *root_port;884884+885885+ root_port = pci_find_pcie_root_port(sw->tb->nhi->pdev);886886+ if (root_port)887887+ pm_runtime_put(&root_port->dev);888888+}889889+866890static ssize_t nvm_authenticate_show(struct device *dev,867891 struct device_attribute *attr, char *buf)868892{···936912937913 sw->nvm->authenticating = true;938914939939- if (!tb_route(sw))915915+ if (!tb_route(sw)) {916916+ /*917917+ * Keep root port from suspending as long as the918918+ * NVM upgrade process is running.919919+ */920920+ nvm_authenticate_start(sw);940921 ret = nvm_authenticate_host(sw);941941- else922922+ if (ret)923923+ nvm_authenticate_complete(sw);924924+ } else {942925 ret = nvm_authenticate_device(sw);926926+ }943927 pm_runtime_mark_last_busy(&sw->dev);944928 pm_runtime_put_autosuspend(&sw->dev);945929 }···13651333 ret = dma_port_flash_update_auth_status(sw->dma_port, &status);13661334 if (ret <= 0)13671335 return ret;13361336+13371337+ /* Now we can allow root port to suspend again */13381338+ if (!tb_route(sw))13391339+ nvm_authenticate_complete(sw);1368134013691341 if (status) {13701342 tb_sw_info(sw, "switch flash authentication failed\n");
···696696};697697698698/*699699+ * Error prioritisation and accumulation.700700+ */701701+struct afs_error {702702+ short error; /* Accumulated error */703703+ bool responded; /* T if server responded */704704+};705705+706706+/*699707 * Cursor for iterating over a server's address list.700708 */701709struct afs_addr_cursor {···10231015 * misc.c10241016 */10251017extern int afs_abort_to_error(u32);10181018+extern void afs_prioritise_error(struct afs_error *, int, u32);1026101910271020/*10281021 * mntpt.c
+52
fs/afs/misc.c
···118118 default: return -EREMOTEIO;119119 }120120}121121+122122+/*123123+ * Select the error to report from a set of errors.124124+ */125125+void afs_prioritise_error(struct afs_error *e, int error, u32 abort_code)126126+{127127+ switch (error) {128128+ case 0:129129+ return;130130+ default:131131+ if (e->error == -ETIMEDOUT ||132132+ e->error == -ETIME)133133+ return;134134+ case -ETIMEDOUT:135135+ case -ETIME:136136+ if (e->error == -ENOMEM ||137137+ e->error == -ENONET)138138+ return;139139+ case -ENOMEM:140140+ case -ENONET:141141+ if (e->error == -ERFKILL)142142+ return;143143+ case -ERFKILL:144144+ if (e->error == -EADDRNOTAVAIL)145145+ return;146146+ case -EADDRNOTAVAIL:147147+ if (e->error == -ENETUNREACH)148148+ return;149149+ case -ENETUNREACH:150150+ if (e->error == -EHOSTUNREACH)151151+ return;152152+ case -EHOSTUNREACH:153153+ if (e->error == -EHOSTDOWN)154154+ return;155155+ case -EHOSTDOWN:156156+ if (e->error == -ECONNREFUSED)157157+ return;158158+ case -ECONNREFUSED:159159+ if (e->error == -ECONNRESET)160160+ return;161161+ case -ECONNRESET: /* Responded, but call expired. */162162+ if (e->responded)163163+ return;164164+ e->error = error;165165+ return;166166+167167+ case -ECONNABORTED:168168+ e->responded = true;169169+ e->error = afs_abort_to_error(abort_code);170170+ return;171171+ }172172+}
+13-40
fs/afs/rotate.c
···136136 struct afs_addr_list *alist;137137 struct afs_server *server;138138 struct afs_vnode *vnode = fc->vnode;139139- u32 rtt, abort_code;139139+ struct afs_error e;140140+ u32 rtt;140141 int error = fc->ac.error, i;141142142143 _enter("%lx[%d],%lx[%d],%d,%d",···307306 if (fc->error != -EDESTADDRREQ)308307 goto iterate_address;309308 /* Fall through */309309+ case -ERFKILL:310310+ case -EADDRNOTAVAIL:310311 case -ENETUNREACH:311312 case -EHOSTUNREACH:313313+ case -EHOSTDOWN:312314 case -ECONNREFUSED:313315 _debug("no conn");314316 fc->error = error;···450446 if (fc->flags & AFS_FS_CURSOR_VBUSY)451447 goto restart_from_beginning;452448453453- abort_code = 0;454454- error = -EDESTADDRREQ;449449+ e.error = -EDESTADDRREQ;450450+ e.responded = false;455451 for (i = 0; i < fc->server_list->nr_servers; i++) {456452 struct afs_server *s = fc->server_list->servers[i].server;457457- int probe_error = READ_ONCE(s->probe.error);458453459459- switch (probe_error) {460460- case 0:461461- continue;462462- default:463463- if (error == -ETIMEDOUT ||464464- error == -ETIME)465465- continue;466466- case -ETIMEDOUT:467467- case -ETIME:468468- if (error == -ENOMEM ||469469- error == -ENONET)470470- continue;471471- case -ENOMEM:472472- case -ENONET:473473- if (error == -ENETUNREACH)474474- continue;475475- case -ENETUNREACH:476476- if (error == -EHOSTUNREACH)477477- continue;478478- case -EHOSTUNREACH:479479- if (error == -ECONNREFUSED)480480- continue;481481- case -ECONNREFUSED:482482- if (error == -ECONNRESET)483483- continue;484484- case -ECONNRESET: /* Responded, but call expired. */485485- if (error == -ECONNABORTED)486486- continue;487487- case -ECONNABORTED:488488- abort_code = s->probe.abort_code;489489- error = probe_error;490490- continue;491491- }454454+ afs_prioritise_error(&e, READ_ONCE(s->probe.error),455455+ s->probe.abort_code);492456 }493493-494494- if (error == -ECONNABORTED)495495- error = afs_abort_to_error(abort_code);496457497458failed_set_error:498459 fc->error = error;···522553 _leave(" = f [abort]");523554 return false;524555556556+ case -ERFKILL:557557+ case -EADDRNOTAVAIL:525558 case -ENETUNREACH:526559 case -EHOSTUNREACH:560560+ case -EHOSTDOWN:527561 case -ECONNREFUSED:528562 case -ETIMEDOUT:529563 case -ETIME:···605633 struct afs_net *net = afs_v2net(fc->vnode);606634607635 if (fc->error == -EDESTADDRREQ ||636636+ fc->error == -EADDRNOTAVAIL ||608637 fc->error == -ENETUNREACH ||609638 fc->error == -EHOSTUNREACH)610639 afs_dump_edestaddrreq(fc);
+27-18
fs/afs/vl_probe.c
···6161 afs_io_error(call, afs_io_error_vl_probe_fail);6262 goto out;6363 case -ECONNRESET: /* Responded, but call expired. */6464+ case -ERFKILL:6565+ case -EADDRNOTAVAIL:6466 case -ENETUNREACH:6567 case -EHOSTUNREACH:6868+ case -EHOSTDOWN:6669 case -ECONNREFUSED:6770 case -ETIMEDOUT:6871 case -ETIME:···132129 * Probe all of a vlserver's addresses to find out the best route and to133130 * query its capabilities.134131 */135135-static int afs_do_probe_vlserver(struct afs_net *net,136136- struct afs_vlserver *server,137137- struct key *key,138138- unsigned int server_index)132132+static bool afs_do_probe_vlserver(struct afs_net *net,133133+ struct afs_vlserver *server,134134+ struct key *key,135135+ unsigned int server_index,136136+ struct afs_error *_e)139137{140138 struct afs_addr_cursor ac = {141139 .index = 0,142140 };143143- int ret;141141+ bool in_progress = false;142142+ int err;144143145144 _enter("%s", server->name);146145···156151 server->probe.rtt = UINT_MAX;157152158153 for (ac.index = 0; ac.index < ac.alist->nr_addrs; ac.index++) {159159- ret = afs_vl_get_capabilities(net, &ac, key, server,154154+ err = afs_vl_get_capabilities(net, &ac, key, server,160155 server_index, true);161161- if (ret != -EINPROGRESS) {162162- afs_vl_probe_done(server);163163- return ret;164164- }156156+ if (err == -EINPROGRESS)157157+ in_progress = true;158158+ else159159+ afs_prioritise_error(_e, err, ac.abort_code);165160 }166161167167- return 0;162162+ if (!in_progress)163163+ afs_vl_probe_done(server);164164+ return in_progress;168165}169166170167/*···176169 struct afs_vlserver_list *vllist)177170{178171 struct afs_vlserver *server;179179- int i, ret;172172+ struct afs_error e;173173+ bool in_progress = false;174174+ int i;180175176176+ e.error = 0;177177+ e.responded = false;181178 for (i = 0; i < vllist->nr_servers; i++) {182179 server = vllist->servers[i].server;183180 if (test_bit(AFS_VLSERVER_FL_PROBED, &server->flags))184181 continue;185182186186- if (!test_and_set_bit_lock(AFS_VLSERVER_FL_PROBING, &server->flags)) {187187- ret = afs_do_probe_vlserver(net, server, key, i);188188- if (ret)189189- return ret;190190- }183183+ if (!test_and_set_bit_lock(AFS_VLSERVER_FL_PROBING, &server->flags) &&184184+ afs_do_probe_vlserver(net, server, key, i, &e))185185+ in_progress = true;191186 }192187193193- return 0;188188+ return in_progress ? 0 : e.error;194189}195190196191/*
+10-40
fs/afs/vl_rotate.c
···7171{7272 struct afs_addr_list *alist;7373 struct afs_vlserver *vlserver;7474+ struct afs_error e;7475 u32 rtt;7575- int error = vc->ac.error, abort_code, i;7676+ int error = vc->ac.error, i;76777778 _enter("%lx[%d],%lx[%d],%d,%d",···120119 goto failed;121120 }122121122122+ case -ERFKILL:123123+ case -EADDRNOTAVAIL:123124 case -ENETUNREACH:124125 case -EHOSTUNREACH:126126+ case -EHOSTDOWN:125127 case -ECONNREFUSED:126128 case -ETIMEDOUT:127129 case -ETIME:···239235 if (vc->flags & AFS_VL_CURSOR_RETRY)240236 goto restart_from_beginning;241237242242- abort_code = 0;243243- error = -EDESTADDRREQ;238238+ e.error = -EDESTADDRREQ;239239+ e.responded = false;244240 for (i = 0; i < vc->server_list->nr_servers; i++) {245241 struct afs_vlserver *s = vc->server_list->servers[i].server;246246- int probe_error = READ_ONCE(s->probe.error);247242248248- switch (probe_error) {249249- case 0:250250- continue;251251- default:252252- if (error == -ETIMEDOUT ||253253- error == -ETIME)254254- continue;255255- case -ETIMEDOUT:256256- case -ETIME:257257- if (error == -ENOMEM ||258258- error == -ENONET)259259- continue;260260- case -ENOMEM:261261- case -ENONET:262262- if (error == -ENETUNREACH)263263- continue;264264- case -ENETUNREACH:265265- if (error == -EHOSTUNREACH)266266- continue;267267- case -EHOSTUNREACH:268268- if (error == -ECONNREFUSED)269269- continue;270270- case -ECONNREFUSED:271271- if (error == -ECONNRESET)272272- continue;273273- case -ECONNRESET: /* Responded, but call expired. */274274- if (error == -ECONNABORTED)275275- continue;276276- case -ECONNABORTED:277277- abort_code = s->probe.abort_code;278278- error = probe_error;279279- continue;280280- }243243+ afs_prioritise_error(&e, READ_ONCE(s->probe.error),244244+ s->probe.abort_code);281245 }282282-283283- if (error == -ECONNABORTED)284284- error = afs_abort_to_error(abort_code);285246286247failed_set_error:287248 vc->error = error;···310341 struct afs_net *net = vc->cell->net;311342312343 if (vc->error == -EDESTADDRREQ ||344344+ vc->error == -EADDRNOTAVAIL ||313345 vc->error == -ENETUNREACH ||314346 vc->error == -EHOSTUNREACH)315347 afs_vl_dump_edestaddrreq(vc);
···477477 int mirror_num = 0;478478 int failed_mirror = 0;479479480480- clear_bit(EXTENT_BUFFER_CORRUPT, &eb->bflags);481480 io_tree = &BTRFS_I(fs_info->btree_inode)->io_tree;482481 while (1) {482482+ clear_bit(EXTENT_BUFFER_CORRUPT, &eb->bflags);483483 ret = read_extent_buffer_pages(io_tree, eb, WAIT_COMPLETE,484484 mirror_num);485485 if (!ret) {···492492 else493493 break;494494 }495495-496496- /*497497- * This buffer's crc is fine, but its contents are corrupted, so498498- * there is no reason to read the other copies, they won't be499499- * any less wrong.500500- */501501- if (test_bit(EXTENT_BUFFER_CORRUPT, &eb->bflags) ||502502- ret == -EUCLEAN)503503- break;504495505496 num_copies = btrfs_num_copies(fs_info,506497 eb->start, eb->len);
+24
fs/btrfs/file.c
···20892089 atomic_inc(&root->log_batch);2090209020912091 /*20922092+ * Before we acquired the inode's lock, someone may have dirtied more20932093+ * pages in the target range. We need to make sure that writeback for20942094+ * any such pages does not start while we are logging the inode, because20952095+ * if it does, any of the following might happen when we are not doing a20962096+ * full inode sync:20972097+ *20982098+ * 1) We log an extent after its writeback finishes but before its20992099+ * checksums are added to the csum tree, leading to -EIO errors21002100+ * when attempting to read the extent after a log replay.21012101+ *21022102+ * 2) We can end up logging an extent before its writeback finishes.21032103+ * Therefore after the log replay we will have a file extent item21042104+ * pointing to an unwritten extent (and no data checksums as well).21052105+ *21062106+ * So trigger writeback for any eventual new dirty pages and then we21072107+ * wait for all ordered extents to complete below.21082108+ */21092109+ ret = start_ordered_ops(inode, start, end);21102110+ if (ret) {21112111+ inode_unlock(inode);21122112+ goto out;21132113+ }21142114+21152115+ /*20922116 * We have to do this here to avoid the priority inversion of waiting on20932117 * IO of a lower priority task while holding a transaciton open.20942118 */
···22372237 vol = memdup_user((void __user *)arg, sizeof(*vol));22382238 if (IS_ERR(vol))22392239 return PTR_ERR(vol);22402240+ vol->name[BTRFS_PATH_NAME_MAX] = '\0';2240224122412242 switch (cmd) {22422243 case BTRFS_IOC_SCAN_DEV:
+5-3
fs/cachefiles/namei.c
···244244245245 ASSERT(!test_bit(CACHEFILES_OBJECT_ACTIVE, &xobject->flags));246246247247- cache->cache.ops->put_object(&xobject->fscache, cachefiles_obj_put_wait_retry);247247+ cache->cache.ops->put_object(&xobject->fscache,248248+ (enum fscache_obj_ref_trace)cachefiles_obj_put_wait_retry);248249 goto try_again;249250250251requeue:251251- cache->cache.ops->put_object(&xobject->fscache, cachefiles_obj_put_wait_timeo);252252+ cache->cache.ops->put_object(&xobject->fscache,253253+ (enum fscache_obj_ref_trace)cachefiles_obj_put_wait_timeo);252254 _leave(" = -ETIMEDOUT");253255 return -ETIMEDOUT;254256}···338336try_again:339337 /* first step is to make up a grave dentry in the graveyard */340338 sprintf(nbuffer, "%08x%08x",341341- (uint32_t) get_seconds(),339339+ (uint32_t) ktime_get_real_seconds(),342340 (uint32_t) atomic_inc_return(&cache->gravecounter));343341344342 /* do the multiway lock magic */
···892892 if (sb->s_magic != EXT2_SUPER_MAGIC)893893 goto cantfind_ext2;894894895895+ opts.s_mount_opt = 0;895896 /* Set defaults before we parse the mount options */896897 def_mount_opts = le32_to_cpu(es->s_default_mount_opts);897898 if (def_mount_opts & EXT2_DEFM_DEBUG)
···730730731731 if (awaken)732732 wake_up_bit(&cookie->flags, FSCACHE_COOKIE_INVALIDATING);733733+ if (test_and_clear_bit(FSCACHE_COOKIE_LOOKING_UP, &cookie->flags))734734+ wake_up_bit(&cookie->flags, FSCACHE_COOKIE_LOOKING_UP);735735+733736734737 /* Prevent a race with our last child, which has to signal EV_CLEARED735738 * before dropping our spinlock.
+2-1
fs/hfs/btree.c
···338338339339 nidx -= len * 8;340340 i = node->next;341341- hfs_bnode_put(node);342341 if (!i) {343342 /* panic */;344343 pr_crit("unable to free bnode %u. bmap not found!\n",345344 node->this);345345+ hfs_bnode_put(node);346346 return;347347 }348348+ hfs_bnode_put(node);348349 node = hfs_bnode_find(tree, i);349350 if (IS_ERR(node))350351 return;
+2-1
fs/hfsplus/btree.c
···466466467467 nidx -= len * 8;468468 i = node->next;469469- hfs_bnode_put(node);470469 if (!i) {471470 /* panic */;472471 pr_crit("unable to free bnode %u. "473472 "bmap not found!\n",474473 node->this);474474+ hfs_bnode_put(node);475475 return;476476 }477477+ hfs_bnode_put(node);477478 node = hfs_bnode_find(tree, i);478479 if (IS_ERR(node))479480 return;
+1-1
fs/ocfs2/export.c
···125125126126check_gen:127127 if (handle->ih_generation != inode->i_generation) {128128- iput(inode);129128 trace_ocfs2_get_dentry_generation((unsigned long long)blkno,130129 handle->ih_generation,131130 inode->i_generation);131131+ iput(inode);132132 result = ERR_PTR(-ESTALE);133133 goto bail;134134 }
+26-21
fs/ocfs2/move_extents.c
···157157}158158159159/*160160- * lock allocators, and reserving appropriate number of bits for161161- * meta blocks and data clusters.162162- *163163- * in some cases, we don't need to reserve clusters, just let data_ac164164- * be NULL.160160+ * lock allocator, and reserve appropriate number of bits for161161+ * meta blocks.165162 */166166-static int ocfs2_lock_allocators_move_extents(struct inode *inode,163163+static int ocfs2_lock_meta_allocator_move_extents(struct inode *inode,167164 struct ocfs2_extent_tree *et,168165 u32 clusters_to_move,169166 u32 extents_to_split,170167 struct ocfs2_alloc_context **meta_ac,171171- struct ocfs2_alloc_context **data_ac,172168 int extra_blocks,173169 int *credits)174170{···189193 goto out;190194 }191195192192- if (data_ac) {193193- ret = ocfs2_reserve_clusters(osb, clusters_to_move, data_ac);194194- if (ret) {195195- mlog_errno(ret);196196- goto out;197197- }198198- }199196200197 *credits += ocfs2_calc_extend_credits(osb->sb, et->et_root_el);201198···248259 }249260 }250261251251- ret = ocfs2_lock_allocators_move_extents(inode, &context->et, *len, 1,252252- &context->meta_ac,253253- &context->data_ac,254254- extra_blocks, &credits);262262+ ret = ocfs2_lock_meta_allocator_move_extents(inode, &context->et,263263+ *len, 1,264264+ &context->meta_ac,265265+ extra_blocks, &credits);255266 if (ret) {256267 mlog_errno(ret);257268 goto out;···272283 mlog_errno(ret);273284 goto out_unlock_mutex;274285 }286286+ }287287+288288+ /*289289+ * Make sure ocfs2_reserve_cluster is called after290290+ * __ocfs2_flush_truncate_log, otherwise, dead lock may happen.291291+ *292292+ * If ocfs2_reserve_cluster is called293293+ * before __ocfs2_flush_truncate_log, dead lock on global bitmap294294+ * may happen.295295+ *296296+ */297297+ ret = ocfs2_reserve_clusters(osb, *len, &context->data_ac);298298+ if (ret) {299299+ mlog_errno(ret);300300+ goto out_unlock_mutex;275301 }276302277303 handle = ocfs2_start_trans(osb, credits);···621617 }622618 }623619624624- ret = ocfs2_lock_allocators_move_extents(inode, &context->et, len, 1,625625- &context->meta_ac,626626- NULL, extra_blocks, &credits);620620+ ret = ocfs2_lock_meta_allocator_move_extents(inode, &context->et,621621+ len, 1,622622+ &context->meta_ac,623623+ extra_blocks, &credits);627624 if (ret) {628625 mlog_errno(ret);629626 goto out;
+6-9
fs/pstore/ram.c
···816816817817 cxt->pstore.data = cxt;818818 /*819819- * Console can handle any buffer size, so prefer LOG_LINE_MAX. If we820820- * have to handle dumps, we must have at least record_size buffer. And821821- * for ftrace, bufsize is irrelevant (if bufsize is 0, buf will be822822- * ZERO_SIZE_PTR).819819+ * Since bufsize is only used for dmesg crash dumps, it820820+ * must match the size of the dprz record (after PRZ header821821+ * and ECC bytes have been accounted for).823822 */824824- if (cxt->console_size)825825- cxt->pstore.bufsize = 1024; /* LOG_LINE_MAX */826826- cxt->pstore.bufsize = max(cxt->record_size, cxt->pstore.bufsize);827827- cxt->pstore.buf = kmalloc(cxt->pstore.bufsize, GFP_KERNEL);823823+ cxt->pstore.bufsize = cxt->dprzs[0]->buffer_size;824824+ cxt->pstore.buf = kzalloc(cxt->pstore.bufsize, GFP_KERNEL);828825 if (!cxt->pstore.buf) {829829- pr_err("cannot allocate pstore buffer\n");826826+ pr_err("cannot allocate pstore crash dump buffer\n");830827 err = -ENOMEM;831828 goto fail_clear;832829 }
···827827828828829829 ret = udf_dstrCS0toChar(sb, outstr, 31, pvoldesc->volIdent, 32);830830- if (ret < 0)831831- goto out_bh;832832-833833- strncpy(UDF_SB(sb)->s_volume_ident, outstr, ret);830830+ if (ret < 0) {831831+ strcpy(UDF_SB(sb)->s_volume_ident, "InvalidName");832832+ pr_warn("incorrect volume identification, setting to "833833+ "'InvalidName'\n");834834+ } else {835835+ strncpy(UDF_SB(sb)->s_volume_ident, outstr, ret);836836+ }834837 udf_debug("volIdent[] = '%s'\n", UDF_SB(sb)->s_volume_ident);835838836839 ret = udf_dstrCS0toChar(sb, outstr, 127, pvoldesc->volSetIdent, 128);837837- if (ret < 0)840840+ if (ret < 0) {841841+ ret = 0;838842 goto out_bh;839839-843843+ }840844 outstr[ret] = 0;841845 udf_debug("volSetIdent[] = '%s'\n", outstr);842846
+11-3
fs/udf/unicode.c
···351351 return u_len;352352}353353354354+/*355355+ * Convert CS0 dstring to output charset. Warning: This function may truncate356356+ * input string if it is too long as it is used for informational strings only357357+ * and it is better to truncate the string than to refuse mounting a media.358358+ */354359int udf_dstrCS0toChar(struct super_block *sb, uint8_t *utf_o, int o_len,355360 const uint8_t *ocu_i, int i_len)356361{···364359 if (i_len > 0) {365360 s_len = ocu_i[i_len - 1];366361 if (s_len >= i_len) {367367- pr_err("incorrect dstring lengths (%d/%d)\n",368368- s_len, i_len);369369- return -EINVAL;362362+ pr_warn("incorrect dstring lengths (%d/%d),"363363+ " truncating\n", s_len, i_len);364364+ s_len = i_len - 1;365365+ /* 2-byte encoding? Need to round properly... */366366+ if (ocu_i[0] == 16)367367+ s_len -= (s_len - 1) & 2;370368 }371369 }372370
+15
fs/userfaultfd.c
···13611361 ret = -EINVAL;13621362 if (!vma_can_userfault(cur))13631363 goto out_unlock;13641364+13651365+ /*13661366+ * UFFDIO_COPY will fill file holes even without13671367+ * PROT_WRITE. This check enforces that if this is a13681368+ * MAP_SHARED, the process has write permission to the backing13691369+ * file. If VM_MAYWRITE is set it also enforces that on a13701370+ * MAP_SHARED vma: there is no F_WRITE_SEAL and no further13711371+ * F_WRITE_SEAL can be taken until the vma is destroyed.13721372+ */13731373+ ret = -EPERM;13741374+ if (unlikely(!(cur->vm_flags & VM_MAYWRITE)))13751375+ goto out_unlock;13761376+13641377 /*13651378 * If this vma contains ending address, and huge pages13661379 * check alignment.···14191406 BUG_ON(!vma_can_userfault(vma));14201407 BUG_ON(vma->vm_userfaultfd_ctx.ctx &&14211408 vma->vm_userfaultfd_ctx.ctx != ctx);14091409+ WARN_ON(!(vma->vm_flags & VM_MAYWRITE));1422141014231411 /*14241412 * Nothing to do: this vma is already registered into this···15661552 cond_resched();1567155315681554 BUG_ON(!vma_can_userfault(vma));15551555+ WARN_ON(!(vma->vm_flags & VM_MAYWRITE));1569155615701557 /*15711558 * Nothing to do: this vma is already registered into this
···196196static inline void fscache_retrieval_complete(struct fscache_retrieval *op,197197 int n_pages)198198{199199- atomic_sub(n_pages, &op->n_pages);200200- if (atomic_read(&op->n_pages) <= 0)199199+ if (atomic_sub_return_relaxed(n_pages, &op->n_pages) <= 0)201200 fscache_op_complete(&op->op, false);202201}203202
+2-2
include/linux/ftrace.h
···777777extern void return_to_handler(void);778778779779extern int780780-ftrace_push_return_trace(unsigned long ret, unsigned long func, int *depth,781781- unsigned long frame_pointer, unsigned long *retp);780780+function_graph_enter(unsigned long ret, unsigned long func,781781+ unsigned long frame_pointer, unsigned long *retp);782782783783unsigned long ftrace_graph_ret_addr(struct task_struct *task, int *idx,784784 unsigned long ret, unsigned long *retp);
+3-1
include/linux/hid-sensor-hub.h
···177177* @attr_usage_id: Attribute usage id as per spec178178* @report_id: Report id to look for179179* @flag: Synchronous or asynchronous read180180+* @is_signed: If true then fields < 32 bits will be sign-extended180181*181182* Issues a synchronous or asynchronous read request for an input attribute.182183* Returns data upto 32 bits.···191190int sensor_hub_input_attr_get_raw_value(struct hid_sensor_hub_device *hsdev,192191 u32 usage_id,193192 u32 attr_usage_id, u32 report_id,194194- enum sensor_hub_read_flags flag193193+ enum sensor_hub_read_flags flag,194194+ bool is_signed195195);196196197197/**
···9090 *9191 * @buf_lock: spinlock to serialize access to @buf9292 * @buf: preallocated crash dump buffer9393- * @bufsize: size of @buf available for crash dump writes9393+ * @bufsize: size of @buf available for crash dump bytes (must match9494+ * smallest number of bytes available for writing to a9595+ * backend entry, since compressed bytes don't take kindly9696+ * to being truncated)9497 *9598 * @read_mutex: serializes @open, @read, @close, and @erase callbacks9699 * @flags: bitfield of frontends the backend can accept writes for
-17
include/linux/ptrace.h
···6464#define PTRACE_MODE_NOAUDIT 0x046565#define PTRACE_MODE_FSCREDS 0x086666#define PTRACE_MODE_REALCREDS 0x106767-#define PTRACE_MODE_SCHED 0x206868-#define PTRACE_MODE_IBPB 0x4069677068/* shorthands for READ/ATTACH and FSCREDS/REALCREDS combinations */7169#define PTRACE_MODE_READ_FSCREDS (PTRACE_MODE_READ | PTRACE_MODE_FSCREDS)7270#define PTRACE_MODE_READ_REALCREDS (PTRACE_MODE_READ | PTRACE_MODE_REALCREDS)7371#define PTRACE_MODE_ATTACH_FSCREDS (PTRACE_MODE_ATTACH | PTRACE_MODE_FSCREDS)7472#define PTRACE_MODE_ATTACH_REALCREDS (PTRACE_MODE_ATTACH | PTRACE_MODE_REALCREDS)7575-#define PTRACE_MODE_SPEC_IBPB (PTRACE_MODE_ATTACH_REALCREDS | PTRACE_MODE_IBPB)76737774/**7875 * ptrace_may_access - check whether the caller is permitted to access···8689 * process_vm_writev or ptrace (and should use the real credentials).8790 */8891extern bool ptrace_may_access(struct task_struct *task, unsigned int mode);8989-9090-/**9191- * ptrace_may_access - check whether the caller is permitted to access9292- * a target task.9393- * @task: target task9494- * @mode: selects type of access and caller credentials9595- *9696- * Returns true on success, false on denial.9797- *9898- * Similar to ptrace_may_access(). Only to be called from context switch9999- * code. Does not call into audit and the regular LSM hooks due to locking100100- * constraints.101101- */102102-extern bool ptrace_may_access_sched(struct task_struct *task, unsigned int mode);1039210493static inline int ptrace_reparented(struct task_struct *child)10594{
+10
include/linux/sched.h
···
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
	/* Index of current stored address in ret_stack: */
	int				curr_ret_stack;
+	int				curr_ret_depth;

	/* Stack of return addresses for return function tracing: */
	struct ftrace_ret_stack		*ret_stack;
···
 #define PFA_SPREAD_SLAB			2	/* Spread some slab caches over cpuset */
 #define PFA_SPEC_SSB_DISABLE		3	/* Speculative Store Bypass disabled */
 #define PFA_SPEC_SSB_FORCE_DISABLE	4	/* Speculative Store Bypass force disabled*/
+#define PFA_SPEC_IB_DISABLE		5	/* Indirect branch speculation restricted */
+#define PFA_SPEC_IB_FORCE_DISABLE	6	/* Indirect branch speculation permanently restricted */

 #define TASK_PFA_TEST(name, func)					\
	static inline bool task_##func(struct task_struct *p)		\
···
 TASK_PFA_TEST(SPEC_SSB_FORCE_DISABLE, spec_ssb_force_disable)
 TASK_PFA_SET(SPEC_SSB_FORCE_DISABLE, spec_ssb_force_disable)
+
+TASK_PFA_TEST(SPEC_IB_DISABLE, spec_ib_disable)
+TASK_PFA_SET(SPEC_IB_DISABLE, spec_ib_disable)
+TASK_PFA_CLEAR(SPEC_IB_DISABLE, spec_ib_disable)
+
+TASK_PFA_TEST(SPEC_IB_FORCE_DISABLE, spec_ib_force_disable)
+TASK_PFA_SET(SPEC_IB_FORCE_DISABLE, spec_ib_force_disable)

 static inline void
 current_restore_flags(unsigned long orig_flags, unsigned long flags)
···
  * tracehook_report_syscall_entry - task is about to attempt a system call
  * @regs:		user register state of current task
  *
- * This will be called if %TIF_SYSCALL_TRACE has been set, when the
- * current task has just entered the kernel for a system call.
+ * This will be called if %TIF_SYSCALL_TRACE or %TIF_SYSCALL_EMU have been set,
+ * when the current task has just entered the kernel for a system call.
  * Full user register state is available here.  Changing the values
  * in @regs can affect the system call number and arguments to be tried.
  * It is safe to block here, preventing the system call from beginning.
+3-3
include/linux/tracepoint.h
···
		struct tracepoint_func *it_func_ptr;			\
		void *it_func;						\
		void *__data;						\
-		int __maybe_unused idx = 0;				\
+		int __maybe_unused __idx = 0;				\
									\
		if (!(cond))						\
			return;						\
···
		 * doesn't work from the idle path.			\
		 */							\
		if (rcuidle) {						\
-			idx = srcu_read_lock_notrace(&tracepoint_srcu);	\
+			__idx = srcu_read_lock_notrace(&tracepoint_srcu);\
			rcu_irq_enter_irqson();				\
		}							\
									\
···
									\
		if (rcuidle) {						\
			rcu_irq_exit_irqson();				\
-			srcu_read_unlock_notrace(&tracepoint_srcu, idx);\
+			srcu_read_unlock_notrace(&tracepoint_srcu, __idx);\
		}							\
									\
		preempt_enable_notrace();				\
···
	     ((i) < rtd->num_codecs) && ((dai) = rtd->codec_dais[i]);	\
	     (i)++)
 #define for_each_rtd_codec_dai_rollback(rtd, i, dai)		\
-	for (; ((i--) >= 0) && ((dai) = rtd->codec_dais[i]);)
+	for (; ((--i) >= 0) && ((dai) = rtd->codec_dais[i]);)


 /* mixer control */
+11-1
include/trace/events/sched.h
···
 #ifdef CREATE_TRACE_POINTS
 static inline long __trace_sched_switch_state(bool preempt, struct task_struct *p)
 {
+	unsigned int state;
+
 #ifdef CONFIG_SCHED_DEBUG
	BUG_ON(p != current);
 #endif /* CONFIG_SCHED_DEBUG */
···
	if (preempt)
		return TASK_REPORT_MAX;

-	return 1 << task_state_index(p);
+	/*
+	 * task_state_index() uses fls() and returns a value from 0-8 range.
+	 * Decrement it by 1 (except TASK_RUNNING state i.e 0) before using
+	 * it for left shift operation to get the correct task->state
+	 * mapping.
+	 */
+	state = task_state_index(p);
+
+	return state ? (1 << (state - 1)) : state;
 }
 #endif /* CREATE_TRACE_POINTS */
+1
include/uapi/linux/prctl.h
···
 #define PR_SET_SPECULATION_CTRL		53
 /* Speculation control variants */
 # define PR_SPEC_STORE_BYPASS		0
+# define PR_SPEC_INDIRECT_BRANCH	1
 /* Return and control values for PR_SET/GET_SPECULATION_CTRL */
 # define PR_SPEC_NOT_AFFECTED		0
 # define PR_SPEC_PRCTL			(1UL << 0)
···

	  Say N if unsure.

+config PSI_DEFAULT_DISABLED
+	bool "Require boot parameter to enable pressure stall information tracking"
+	default n
+	depends on PSI
+	help
+	  If set, pressure stall information tracking will be disabled
+	  per default but can be enabled through passing psi=1 on the
+	  kernel commandline during boot.
+
 endmenu # "CPU/Task time and stats accounting"

 config CPU_ISOLATION
···
	bpf_prog_unlock_free(fp);
 }

+int bpf_jit_get_func_addr(const struct bpf_prog *prog,
+			  const struct bpf_insn *insn, bool extra_pass,
+			  u64 *func_addr, bool *func_addr_fixed)
+{
+	s16 off = insn->off;
+	s32 imm = insn->imm;
+	u8 *addr;
+
+	*func_addr_fixed = insn->src_reg != BPF_PSEUDO_CALL;
+	if (!*func_addr_fixed) {
+		/* Place-holder address till the last pass has collected
+		 * all addresses for JITed subprograms in which case we
+		 * can pick them up from prog->aux.
+		 */
+		if (!extra_pass)
+			addr = NULL;
+		else if (prog->aux->func &&
+			 off >= 0 && off < prog->aux->func_cnt)
+			addr = (u8 *)prog->aux->func[off]->bpf_func;
+		else
+			return -EINVAL;
+	} else {
+		/* Address of a BPF helper call. Since part of the core
+		 * kernel, it's always at a fixed location. __bpf_call_base
+		 * and the helper with imm relative to it are both in core
+		 * kernel.
+		 */
+		addr = (u8 *)__bpf_call_base + imm;
+	}
+
+	*func_addr = (unsigned long)addr;
+	return 0;
+}
+
 static int bpf_jit_blind_insn(const struct bpf_insn *from,
			      const struct bpf_insn *aux,
			      struct bpf_insn *to_buff)
···
		return;
	/* NOTE: fake 'exit' subprog should be updated as well. */
	for (i = 0; i <= env->subprog_cnt; i++) {
-		if (env->subprog_info[i].start < off)
+		if (env->subprog_info[i].start <= off)
			continue;
		env->subprog_info[i].start += len - 1;
	}
+9-6
kernel/cpu.c
···
 #include <linux/sched/signal.h>
 #include <linux/sched/hotplug.h>
 #include <linux/sched/task.h>
+#include <linux/sched/smt.h>
 #include <linux/unistd.h>
 #include <linux/cpu.h>
 #include <linux/oom.h>
···
 }

 #endif /* CONFIG_HOTPLUG_CPU */
+
+/*
+ * Architectures that need SMT-specific errata handling during SMT hotplug
+ * should override this.
+ */
+void __weak arch_smt_update(void) { }

 #ifdef CONFIG_HOTPLUG_SMT
 enum cpuhp_smt_control cpu_smt_control __read_mostly = CPU_SMT_ENABLED;
···
	 * concurrent CPU hotplug via cpu_add_remove_lock.
	 */
	lockup_detector_cleanup();
+	arch_smt_update();
	return ret;
 }
···
	ret = cpuhp_up_callbacks(cpu, st, target);
 out:
	cpus_write_unlock();
+	arch_smt_update();
	return ret;
 }
···
	/* Tell user space about the state change */
	kobject_uevent(&dev->kobj, KOBJ_ONLINE);
 }
-
-/*
- * Architectures that need SMT-specific errata handling during SMT hotplug
- * should override this.
- */
-void __weak arch_smt_update(void) { };

 static int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
 {
+10-2
kernel/events/uprobes.c
···
	BUG_ON((uprobe->offset & ~PAGE_MASK) +
			UPROBE_SWBP_INSN_SIZE > PAGE_SIZE);

-	smp_wmb(); /* pairs with rmb() in find_active_uprobe() */
+	smp_wmb(); /* pairs with the smp_rmb() in handle_swbp() */
	set_bit(UPROBE_COPY_INSN, &uprobe->flags);

 out:
···
	 * After we hit the bp, _unregister + _register can install the
	 * new and not-yet-analyzed uprobe at the same address, restart.
	 */
-	smp_rmb(); /* pairs with wmb() in install_breakpoint() */
	if (unlikely(!test_bit(UPROBE_COPY_INSN, &uprobe->flags)))
		goto out;
+
+	/*
+	 * Pairs with the smp_wmb() in prepare_uprobe().
+	 *
+	 * Guarantees that if we see the UPROBE_COPY_INSN bit set, then
+	 * we must also see the stores to &uprobe->arch performed by the
+	 * prepare_uprobe() call.
+	 */
+	smp_rmb();

	/* Tracing handlers use ->utask to communicate with fetch methods */
	if (!get_utask())
+2-2
kernel/kcov.c
···
	struct task_struct	*t;
 };

-static bool check_kcov_mode(enum kcov_mode needed_mode, struct task_struct *t)
+static notrace bool check_kcov_mode(enum kcov_mode needed_mode, struct task_struct *t)
 {
	unsigned int mode;

···
	return mode == needed_mode;
 }

-static unsigned long canonicalize_ip(unsigned long ip)
+static notrace unsigned long canonicalize_ip(unsigned long ip)
 {
 #ifdef CONFIG_RANDOMIZE_BASE
	ip -= kaslr_offset();
-10
kernel/ptrace.c
···

 static int ptrace_has_cap(struct user_namespace *ns, unsigned int mode)
 {
-	if (mode & PTRACE_MODE_SCHED)
-		return false;
-
	if (mode & PTRACE_MODE_NOAUDIT)
		return has_ns_capability_noaudit(current, ns, CAP_SYS_PTRACE);
	else
···
	    !ptrace_has_cap(mm->user_ns, mode)))
	    return -EPERM;

-	if (mode & PTRACE_MODE_SCHED)
-		return 0;
	return security_ptrace_access_check(task, mode);
-}
-
-bool ptrace_may_access_sched(struct task_struct *task, unsigned int mode)
-{
-	return __ptrace_may_access(task, mode | PTRACE_MODE_SCHED);
 }

 bool ptrace_may_access(struct task_struct *task, unsigned int mode)
+11-8
kernel/sched/core.c
···

 #ifdef CONFIG_SCHED_SMT
	/*
-	 * The sched_smt_present static key needs to be evaluated on every
-	 * hotplug event because at boot time SMT might be disabled when
-	 * the number of booted CPUs is limited.
-	 *
-	 * If then later a sibling gets hotplugged, then the key would stay
-	 * off and SMT scheduling would never be functional.
+	 * When going up, increment the number of cores with SMT present.
	 */
-	if (cpumask_weight(cpu_smt_mask(cpu)) > 1)
-		static_branch_enable_cpuslocked(&sched_smt_present);
+	if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
+		static_branch_inc_cpuslocked(&sched_smt_present);
 #endif
	set_cpu_active(cpu, true);

···
	 * Do sync before park smpboot threads to take care the rcu boost case.
	 */
	synchronize_rcu_mult(call_rcu, call_rcu_sched);
+
+#ifdef CONFIG_SCHED_SMT
+	/*
+	 * When going down, decrement the number of cores with SMT present.
+	 */
+	if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
+		static_branch_dec_cpuslocked(&sched_smt_present);
+#endif

	if (!sched_smp_initialized)
		return 0;
+21-9
kernel/sched/psi.c
···

 static int psi_bug __read_mostly;

-bool psi_disabled __read_mostly;
-core_param(psi_disabled, psi_disabled, bool, 0644);
+DEFINE_STATIC_KEY_FALSE(psi_disabled);
+
+#ifdef CONFIG_PSI_DEFAULT_DISABLED
+bool psi_enable;
+#else
+bool psi_enable = true;
+#endif
+static int __init setup_psi(char *str)
+{
+	return kstrtobool(str, &psi_enable) == 0;
+}
+__setup("psi=", setup_psi);

 /* Running averages - we need to be higher-res than loadavg */
 #define PSI_FREQ	(2*HZ+1)	/* 2 sec intervals */
···

 void __init psi_init(void)
 {
-	if (psi_disabled)
+	if (!psi_enable) {
+		static_branch_enable(&psi_disabled);
		return;
+	}

	psi_period = jiffies_to_nsecs(PSI_FREQ);
	group_init(&psi_system);
···
	struct rq_flags rf;
	struct rq *rq;

-	if (psi_disabled)
+	if (static_branch_likely(&psi_disabled))
		return;

	*flags = current->flags & PF_MEMSTALL;
···
	struct rq_flags rf;
	struct rq *rq;

-	if (psi_disabled)
+	if (static_branch_likely(&psi_disabled))
		return;

	if (*flags)
···
 #ifdef CONFIG_CGROUPS
 int psi_cgroup_alloc(struct cgroup *cgroup)
 {
-	if (psi_disabled)
+	if (static_branch_likely(&psi_disabled))
		return 0;

	cgroup->psi.pcpu = alloc_percpu(struct psi_group_cpu);
···

 void psi_cgroup_free(struct cgroup *cgroup)
 {
-	if (psi_disabled)
+	if (static_branch_likely(&psi_disabled))
		return;

	cancel_delayed_work_sync(&cgroup->psi.clock_work);
···
	struct rq_flags rf;
	struct rq *rq;

-	if (psi_disabled) {
+	if (static_branch_likely(&psi_disabled)) {
		/*
		 * Lame to do this here, but the scheduler cannot be locked
		 * from the outside, so we move cgroups from inside sched/.
···
 {
	int full;

-	if (psi_disabled)
+	if (static_branch_likely(&psi_disabled))
		return -EOPNOTSUPP;

	update_stats(group);
···
 {
	int clear = 0, set = TSK_RUNNING;

-	if (psi_disabled)
+	if (static_branch_likely(&psi_disabled))
		return;

	if (!wakeup || p->sched_psi_wake_requeue) {
···
 {
	int clear = TSK_RUNNING, set = 0;

-	if (psi_disabled)
+	if (static_branch_likely(&psi_disabled))
		return;

	if (!sleep) {
···

 static inline void psi_ttwu_dequeue(struct task_struct *p)
 {
-	if (psi_disabled)
+	if (static_branch_likely(&psi_disabled))
		return;
	/*
	 * Is the task being migrated during a wakeup? Make sure to
···

 static inline void psi_task_tick(struct rq *rq)
 {
-	if (psi_disabled)
+	if (static_branch_likely(&psi_disabled))
		return;

	if (unlikely(rq->curr->flags & PF_MEMSTALL))
+3-1
kernel/stackleak.c
···
 */

 #include <linux/stackleak.h>
+#include <linux/kprobes.h>

 #ifdef CONFIG_STACKLEAK_RUNTIME_DISABLE
 #include <linux/jump_label.h>
···
 #define skip_erasing()	false
 #endif /* CONFIG_STACKLEAK_RUNTIME_DISABLE */

-asmlinkage void stackleak_erase(void)
+asmlinkage void notrace stackleak_erase(void)
 {
	/* It would be nice not to have 'kstack_ptr' and 'boundary' on stack */
	unsigned long kstack_ptr = current->lowest_stack;
···
	/* Reset the 'lowest_stack' value for the next syscall */
	current->lowest_stack = current_top_of_stack() - THREAD_SIZE/64;
 }
+NOKPROBE_SYMBOL(stackleak_erase);

 void __used stackleak_track_stack(void)
 {
+5-3
kernel/trace/bpf_trace.c
···
			i++;
		} else if (fmt[i] == 'p' || fmt[i] == 's') {
			mod[fmt_cnt]++;
-			i++;
-			if (!isspace(fmt[i]) && !ispunct(fmt[i]) && fmt[i] != 0)
+			/* disallow any further format extensions */
+			if (fmt[i + 1] != 0 &&
+			    !isspace(fmt[i + 1]) &&
+			    !ispunct(fmt[i + 1]))
				return -EINVAL;
			fmt_cnt++;
-			if (fmt[i - 1] == 's') {
+			if (fmt[i] == 's') {
				if (str_seen)
					/* allow only one '%s' per fmt string */
					return -EINVAL;
+5-2
kernel/trace/ftrace.c
···
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 static int profile_graph_entry(struct ftrace_graph_ent *trace)
 {
-	int index = trace->depth;
+	int index = current->curr_ret_stack;

	function_profile_call(trace->func, 0, NULL, NULL);

···
	if (!fgraph_graph_time) {
		int index;

-		index = trace->depth;
+		index = current->curr_ret_stack;

		/* Append this call time to the parent time to subtract */
		if (index)
···
		atomic_set(&t->tracing_graph_pause, 0);
		atomic_set(&t->trace_overrun, 0);
		t->curr_ret_stack = -1;
+		t->curr_ret_depth = -1;
		/* Make sure the tasks see the -1 first: */
		smp_wmb();
		t->ret_stack = ret_stack_list[start++];
···
 void ftrace_graph_init_idle_task(struct task_struct *t, int cpu)
 {
	t->curr_ret_stack = -1;
+	t->curr_ret_depth = -1;
	/*
	 * The idle task has no parent, it either has its own
	 * stack or no stack at all.
···
	/* Make sure we do not use the parent ret_stack */
	t->ret_stack = NULL;
	t->curr_ret_stack = -1;
+	t->curr_ret_depth = -1;

	if (ftrace_graph_active) {
		struct ftrace_ret_stack *ret_stack;
+54-3
kernel/trace/trace.h
···
	 * can only be modified by current, we can reuse trace_recursion.
	 */
	TRACE_IRQ_BIT,
+
+	/* Set if the function is in the set_graph_function file */
+	TRACE_GRAPH_BIT,
+
+	/*
+	 * In the very unlikely case that an interrupt came in
+	 * at a start of graph tracing, and we want to trace
+	 * the function in that interrupt, the depth can be greater
+	 * than zero, because of the preempted start of a previous
+	 * trace. In an even more unlikely case, depth could be 2
+	 * if a softirq interrupted the start of graph tracing,
+	 * followed by an interrupt preempting a start of graph
+	 * tracing in the softirq, and depth can even be 3
+	 * if an NMI came in at the start of an interrupt function
+	 * that preempted a softirq start of a function that
+	 * preempted normal context!!!! Luckily, it can't be
+	 * greater than 3, so the next two bits are a mask
+	 * of what the depth is when we set TRACE_GRAPH_BIT
+	 */
+
+	TRACE_GRAPH_DEPTH_START_BIT,
+	TRACE_GRAPH_DEPTH_END_BIT,
 };

 #define trace_recursion_set(bit)	do { (current)->trace_recursion |= (1<<(bit)); } while (0)
 #define trace_recursion_clear(bit)	do { (current)->trace_recursion &= ~(1<<(bit)); } while (0)
 #define trace_recursion_test(bit)	((current)->trace_recursion & (1<<(bit)))

+#define trace_recursion_depth() \
+	(((current)->trace_recursion >> TRACE_GRAPH_DEPTH_START_BIT) & 3)
+#define trace_recursion_set_depth(depth) \
+	do {								\
+		current->trace_recursion &=				\
+			~(3 << TRACE_GRAPH_DEPTH_START_BIT);		\
+		current->trace_recursion |=				\
+			((depth) & 3) << TRACE_GRAPH_DEPTH_START_BIT;	\
+	} while (0)

 #define TRACE_CONTEXT_BITS	4
···
 extern struct ftrace_hash *ftrace_graph_hash;
 extern struct ftrace_hash *ftrace_graph_notrace_hash;

-static inline int ftrace_graph_addr(unsigned long addr)
+static inline int ftrace_graph_addr(struct ftrace_graph_ent *trace)
 {
+	unsigned long addr = trace->func;
	int ret = 0;

	preempt_disable_notrace();
···
	}

	if (ftrace_lookup_ip(ftrace_graph_hash, addr)) {
+
+		/*
+		 * This needs to be cleared on the return functions
+		 * when the depth is zero.
+		 */
+		trace_recursion_set(TRACE_GRAPH_BIT);
+		trace_recursion_set_depth(trace->depth);
+
		/*
		 * If no irqs are to be traced, but a set_graph_function
		 * is set, and called by an interrupt handler, we still
···
	return ret;
 }

+static inline void ftrace_graph_addr_finish(struct ftrace_graph_ret *trace)
+{
+	if (trace_recursion_test(TRACE_GRAPH_BIT) &&
+	    trace->depth == trace_recursion_depth())
+		trace_recursion_clear(TRACE_GRAPH_BIT);
+}
+
 static inline int ftrace_graph_notrace_addr(unsigned long addr)
 {
	int ret = 0;
···
	return ret;
 }
 #else
-static inline int ftrace_graph_addr(unsigned long addr)
+static inline int ftrace_graph_addr(struct ftrace_graph_ent *trace)
 {
	return 1;
 }
···
 {
	return 0;
 }
+static inline void ftrace_graph_addr_finish(struct ftrace_graph_ret *trace)
+{ }
 #endif /* CONFIG_DYNAMIC_FTRACE */

 extern unsigned int fgraph_max_depth;
···
 static inline bool ftrace_graph_ignore_func(struct ftrace_graph_ent *trace)
 {
	/* trace it when it is-nested-in or is a function enabled. */
-	return !(trace->depth || ftrace_graph_addr(trace->func)) ||
+	return !(trace_recursion_test(TRACE_GRAPH_BIT) ||
+		 ftrace_graph_addr(trace)) ||
		(trace->depth < 0) ||
		(fgraph_max_depth && trace->depth >= fgraph_max_depth);
 }
+42-11
kernel/trace/trace_functions_graph.c
···
				     struct trace_seq *s, u32 flags);

 /* Add a function return address to the trace stack on thread info.*/
-int
-ftrace_push_return_trace(unsigned long ret, unsigned long func, int *depth,
+static int
+ftrace_push_return_trace(unsigned long ret, unsigned long func,
			 unsigned long frame_pointer, unsigned long *retp)
 {
	unsigned long long calltime;
···
 #ifdef HAVE_FUNCTION_GRAPH_RET_ADDR_PTR
	current->ret_stack[index].retp = retp;
 #endif
-	*depth = current->curr_ret_stack;
+	return 0;
+}
+
+int function_graph_enter(unsigned long ret, unsigned long func,
+			 unsigned long frame_pointer, unsigned long *retp)
+{
+	struct ftrace_graph_ent trace;
+
+	trace.func = func;
+	trace.depth = ++current->curr_ret_depth;
+
+	if (ftrace_push_return_trace(ret, func,
+				     frame_pointer, retp))
+		goto out;
+
+	/* Only trace if the calling function expects to */
+	if (!ftrace_graph_entry(&trace))
+		goto out_ret;

	return 0;
+ out_ret:
+	current->curr_ret_stack--;
+ out:
+	current->curr_ret_depth--;
+	return -EBUSY;
 }

 /* Retrieve a function return address to the trace stack on thread info.*/
···
	trace->func = current->ret_stack[index].func;
	trace->calltime = current->ret_stack[index].calltime;
	trace->overrun = atomic_read(&current->trace_overrun);
-	trace->depth = index;
+	trace->depth = current->curr_ret_depth--;
+	/*
+	 * We still want to trace interrupts coming in if
+	 * max_depth is set to 1. Make sure the decrement is
+	 * seen before ftrace_graph_return.
+	 */
+	barrier();
 }

 /*
···

	ftrace_pop_return_trace(&trace, &ret, frame_pointer);
	trace.rettime = trace_clock_local();
+	ftrace_graph_return(&trace);
+	/*
+	 * The ftrace_graph_return() may still access the current
+	 * ret_stack structure, we need to make sure the update of
+	 * curr_ret_stack is after that.
+	 */
	barrier();
	current->curr_ret_stack--;
	/*
···
		current->curr_ret_stack += FTRACE_NOTRACE_DEPTH;
		return ret;
	}
-
-	/*
-	 * The trace should run after decrementing the ret counter
-	 * in case an interrupt were to come in. We don't want to
-	 * lose the interrupt if max_depth is set.
-	 */
-	ftrace_graph_return(&trace);

	if (unlikely(!ret)) {
		ftrace_graph_stop();
···
	int cpu;
	int pc;

+	ftrace_graph_addr_finish(trace);
+
	local_irq_save(flags);
	cpu = raw_smp_processor_id();
	data = per_cpu_ptr(tr->trace_buffer.data, cpu);
···

 static void trace_graph_thresh_return(struct ftrace_graph_ret *trace)
 {
+	ftrace_graph_addr_finish(trace);
+
	if (tracing_thresh &&
	    (trace->rettime - trace->calltime < tracing_thresh))
		return;
···
	unsigned long flags;
	int pc;

+	ftrace_graph_addr_finish(trace);
+
	if (!func_prolog_preempt_disable(tr, &data, &pc))
		return;

···
		if (!vma || start >= vma->vm_end) {
			vma = find_extend_vma(mm, start);
			if (!vma && in_gate_area(mm, start)) {
-				int ret;
				ret = get_gate_page(mm, start & PAGE_MASK,
						gup_flags, &vma,
						pages ? &pages[i] : NULL);
				if (ret)
-					return i ? : ret;
+					goto out;
				ctx.page_mask = 0;
				goto next_page;
			}
+25-18
mm/huge_memory.c
···
	}
 }

-static void freeze_page(struct page *page)
+static void unmap_page(struct page *page)
 {
	enum ttu_flags ttu_flags = TTU_IGNORE_MLOCK | TTU_IGNORE_ACCESS |
		TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD;
···
	VM_BUG_ON_PAGE(!unmap_success, page);
 }

-static void unfreeze_page(struct page *page)
+static void remap_page(struct page *page)
 {
	int i;
	if (PageTransHuge(page)) {
···
			 (1L << PG_unevictable) |
			 (1L << PG_dirty)));

+	/* ->mapping in first tail page is compound_mapcount */
+	VM_BUG_ON_PAGE(tail > 2 && page_tail->mapping != TAIL_MAPPING,
+			page_tail);
+	page_tail->mapping = head->mapping;
+	page_tail->index = head->index + tail;
+
	/* Page flags must be visible before we make the page non-compound. */
	smp_wmb();

···
	if (page_is_idle(head))
		set_page_idle(page_tail);

-	/* ->mapping in first tail page is compound_mapcount */
-	VM_BUG_ON_PAGE(tail > 2 && page_tail->mapping != TAIL_MAPPING,
-			page_tail);
-	page_tail->mapping = head->mapping;
-
-	page_tail->index = head->index + tail;
	page_cpupid_xchg_last(page_tail, page_cpupid_last(head));

	/*
···
 }

 static void __split_huge_page(struct page *page, struct list_head *list,
-		unsigned long flags)
+		pgoff_t end, unsigned long flags)
 {
	struct page *head = compound_head(page);
	struct zone *zone = page_zone(head);
	struct lruvec *lruvec;
-	pgoff_t end = -1;
	int i;

	lruvec = mem_cgroup_page_lruvec(head, zone->zone_pgdat);

	/* complete memcg works before add pages to LRU */
	mem_cgroup_split_huge_fixup(head);
-
-	if (!PageAnon(page))
-		end = DIV_ROUND_UP(i_size_read(head->mapping->host), PAGE_SIZE);

	for (i = HPAGE_PMD_NR - 1; i >= 1; i--) {
		__split_huge_page_tail(head, i, lruvec, list);
···

	spin_unlock_irqrestore(zone_lru_lock(page_zone(head)), flags);

-	unfreeze_page(head);
+	remap_page(head);

	for (i = 0; i < HPAGE_PMD_NR; i++) {
		struct page *subpage = head + i;
···
	int count, mapcount, extra_pins, ret;
	bool mlocked;
	unsigned long flags;
+	pgoff_t end;

	VM_BUG_ON_PAGE(is_huge_zero_page(page), page);
	VM_BUG_ON_PAGE(!PageLocked(page), page);
···
			ret = -EBUSY;
			goto out;
		}
+		end = -1;
		mapping = NULL;
		anon_vma_lock_write(anon_vma);
	} else {
···

		anon_vma = NULL;
		i_mmap_lock_read(mapping);
+
+		/*
+		 *__split_huge_page() may need to trim off pages beyond EOF:
+		 * but on 32-bit, i_size_read() takes an irq-unsafe seqlock,
+		 * which cannot be nested inside the page tree lock. So note
+		 * end now: i_size itself may be changed at any moment, but
+		 * head page lock is good enough to serialize the trimming.
+		 */
+		end = DIV_ROUND_UP(i_size_read(mapping->host), PAGE_SIZE);
	}

	/*
-	 * Racy check if we can split the page, before freeze_page() will
+	 * Racy check if we can split the page, before unmap_page() will
	 * split PMDs
	 */
	if (!can_split_huge_page(head, &extra_pins)) {
···
	}

	mlocked = PageMlocked(page);
-	freeze_page(head);
+	unmap_page(head);
	VM_BUG_ON_PAGE(compound_mapcount(head), head);

	/* Make sure the page is not on per-CPU pagevec as it takes pin */
···
		if (mapping)
			__dec_node_page_state(page, NR_SHMEM_THPS);
		spin_unlock(&pgdata->split_queue_lock);
-		__split_huge_page(page, list, flags);
+		__split_huge_page(page, list, end, flags);
		if (PageSwapCache(head)) {
			swp_entry_t entry = { .val = page_private(head) };

···
 fail:		if (mapping)
			xa_unlock(&mapping->i_pages);
		spin_unlock_irqrestore(zone_lru_lock(page_zone(head)), flags);
-		unfreeze_page(head);
+		remap_page(head);
		ret = -EBUSY;
	}

···
 * collapse_shmem - collapse small tmpfs/shmem pages into huge one.
 *
 * Basic scheme is simple, details are more complex:
- *  - allocate and freeze a new huge page;
+ *  - allocate and lock a new huge page;
 *  - scan page cache replacing old pages with the new one
 *    + swap in pages if necessary;
 *    + fill in gaps;
···
 *  - if replacing succeeds:
 *    + copy data over;
 *    + free old pages;
- *    + unfreeze huge page;
+ *    + unlock huge page;
 *  - if replacing failed;
 *    + put all pages back and unfreeze them;
 *    + restore gaps in the page cache;
- *    + free huge page;
+ *    + unlock and free huge page;
 */
 static void collapse_shmem(struct mm_struct *mm,
		struct address_space *mapping, pgoff_t start,
···
		goto out;
	}

-	new_page->index = start;
-	new_page->mapping = mapping;
-	__SetPageSwapBacked(new_page);
-	__SetPageLocked(new_page);
-	BUG_ON(!page_ref_freeze(new_page, 1));
-
-	/*
-	 * At this point the new_page is 'frozen' (page_count() is zero),
-	 * locked and not up-to-date. It's safe to insert it into the page
-	 * cache, because nobody would be able to map it or use it in other
-	 * way until we unfreeze it.
-	 */
-
	/* This will be less messy when we use multi-index entries */
	do {
		xas_lock_irq(&xas);
···
		if (!xas_error(&xas))
			break;
		xas_unlock_irq(&xas);
-		if (!xas_nomem(&xas, GFP_KERNEL))
+		if (!xas_nomem(&xas, GFP_KERNEL)) {
+			mem_cgroup_cancel_charge(new_page, memcg, true);
+			result = SCAN_FAIL;
			goto out;
+		}
	} while (1);
+
+	__SetPageLocked(new_page);
+	__SetPageSwapBacked(new_page);
+	new_page->index = start;
+	new_page->mapping = mapping;
+
+	/*
+	 * At this point the new_page is locked and not up-to-date.
+	 * It's safe to insert it into the page cache, because nobody would
+	 * be able to map it or use it in another way until we unlock it.
+	 */

	xas_set(&xas, start);
	for (index = start; index < end; index++) {
···

		VM_BUG_ON(index != xas.xa_index);
		if (!page) {
+			/*
+			 * Stop if extent has been truncated or hole-punched,
+			 * and is now completely empty.
+			 */
+			if (index == start) {
+				if (!xas_next_entry(&xas, end - 1)) {
+					result = SCAN_TRUNCATED;
+					goto xa_locked;
+				}
+				xas_set(&xas, index);
+			}
			if (!shmem_charge(mapping->host, 1)) {
				result = SCAN_FAIL;
-				break;
+				goto xa_locked;
			}
			xas_store(&xas, new_page + (index % HPAGE_PMD_NR));
			nr_none++;
···
				result = SCAN_FAIL;
				goto xa_unlocked;
			}
-			xas_lock_irq(&xas);
-			xas_set(&xas, index);
		} else if (trylock_page(page)) {
			get_page(page);
+			xas_unlock_irq(&xas);
		} else {
			result = SCAN_PAGE_LOCK;
-			break;
+			goto xa_locked;
		}

		/*
···
		 */
		VM_BUG_ON_PAGE(!PageLocked(page), page);
		VM_BUG_ON_PAGE(!PageUptodate(page), page);
-		VM_BUG_ON_PAGE(PageTransCompound(page), page);
+
+		/*
+		 * If file was truncated then extended, or hole-punched, before
+		 * we locked the first page, then a THP might be there already.
+		 */
+		if (PageTransCompound(page)) {
+			result = SCAN_PAGE_COMPOUND;
+			goto out_unlock;
+		}

		if (page_mapping(page) != mapping) {
			result = SCAN_TRUNCATED;
			goto out_unlock;
		}
-		xas_unlock_irq(&xas);

		if (isolate_lru_page(page)) {
			result = SCAN_DEL_PAGE_LRU;
-			goto out_isolate_failed;
+			goto out_unlock;
		}

		if (page_mapped(page))
···
		 */
		if (!page_ref_freeze(page, 3)) {
			result = SCAN_PAGE_COUNT;
-			goto out_lru;
+			xas_unlock_irq(&xas);
+			putback_lru_page(page);
+			goto out_unlock;
		}

		/*
···
		/* Finally, replace with the new page. */
		xas_store(&xas, new_page + (index % HPAGE_PMD_NR));
		continue;
-out_lru:
-		xas_unlock_irq(&xas);
-		putback_lru_page(page);
-out_isolate_failed:
-		unlock_page(page);
-		put_page(page);
-		goto xa_unlocked;
out_unlock:
		unlock_page(page);
		put_page(page);
-		break;
+		goto xa_unlocked;
	}
-	xas_unlock_irq(&xas);

+	__inc_node_page_state(new_page, NR_SHMEM_THPS);
+	if (nr_none) {
+		struct zone *zone = page_zone(new_page);
+
+		__mod_node_page_state(zone->zone_pgdat, NR_FILE_PAGES, nr_none);
+		__mod_node_page_state(zone->zone_pgdat, NR_SHMEM, nr_none);
+	}
+
+xa_locked:
+	xas_unlock_irq(&xas);
xa_unlocked:
+
	if (result == SCAN_SUCCEED) {
		struct page *page, *tmp;
-		struct zone *zone = page_zone(new_page);

		/*
		 * Replacing old pages with new one has succeeded, now we
		 * need to copy the content and free the old pages.
		 */
+		index = start;
		list_for_each_entry_safe(page, tmp, &pagelist, lru) {
+			while (index < page->index) {
+				clear_highpage(new_page + (index % HPAGE_PMD_NR));
+				index++;
+			}
			copy_highpage(new_page + (page->index % HPAGE_PMD_NR),
					page);
			list_del(&page->lru);
-			unlock_page(page);
-			page_ref_unfreeze(page, 1);
			page->mapping = NULL;
+			page_ref_unfreeze(page, 1);
			ClearPageActive(page);
			ClearPageUnevictable(page);
+			unlock_page(page);
			put_page(page);
+			index++;
+		}
+		while (index < end) {
+			clear_highpage(new_page + (index % HPAGE_PMD_NR));
+			index++;
		}

-		local_irq_disable();
-		__inc_node_page_state(new_page, NR_SHMEM_THPS);
-		if (nr_none) {
-			__mod_node_page_state(zone->zone_pgdat, NR_FILE_PAGES, nr_none);
-			__mod_node_page_state(zone->zone_pgdat, NR_SHMEM, nr_none);
-		}
-		local_irq_enable();
-
-		/*
-		 * Remove pte page tables, so we can re-fault
-		 * the page as huge.
-		 */
-		retract_page_tables(mapping, start);
-
-		/* Everything is ready, let's unfreeze the new_page */
-		set_page_dirty(new_page);
		SetPageUptodate(new_page);
-		page_ref_unfreeze(new_page, HPAGE_PMD_NR);
+		page_ref_add(new_page, HPAGE_PMD_NR - 1);
+		set_page_dirty(new_page);
		mem_cgroup_commit_charge(new_page, memcg, false, true);
		lru_cache_add_anon(new_page);
-		unlock_page(new_page);

+		/*
+		 * Remove pte page tables, so we can re-fault the page as huge.
+		 */
+		retract_page_tables(mapping, start);
		*hpage = NULL;

		khugepaged_pages_collapsed++;
	} else {
		struct page *page;
+
		/* Something went wrong: roll back page cache changes */
-		shmem_uncharge(mapping->host, nr_none);
		xas_lock_irq(&xas);
+		mapping->nrpages -= nr_none;
+		shmem_uncharge(mapping->host, nr_none);
+
		xas_set(&xas, start);
		xas_for_each(&xas, page, end - 1) {
			page = list_first_entry_or_null(&pagelist,
···
			xas_store(&xas, page);
			xas_pause(&xas);
			xas_unlock_irq(&xas);
-			putback_lru_page(page);
			unlock_page(page);
+			putback_lru_page(page);
			xas_lock_irq(&xas);
		}
		VM_BUG_ON(nr_none);
		xas_unlock_irq(&xas);

-		/* Unfreeze new_page, caller would take care about freeing it */
-		page_ref_unfreeze(new_page, 1);
		mem_cgroup_cancel_charge(new_page, memcg, true);
-		unlock_page(new_page);
		new_page->mapping = NULL;
	}
+
+	unlock_page(new_page);
out:
	VM_BUG_ON(!list_empty(&pagelist));
	/* TODO: tracepoints */
+3-1
mm/page_alloc.c
···
 			     unsigned long size)
 {
 	struct pglist_data *pgdat = zone->zone_pgdat;
+	int zone_idx = zone_idx(zone) + 1;

-	pgdat->nr_zones = zone_idx(zone) + 1;
+	if (zone_idx > pgdat->nr_zones)
+		pgdat->nr_zones = zone_idx;

 	zone->zone_start_pfn = zone_start_pfn;
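The one-line change above turns an assignment into a max-style update, so initialising a lower zone can no longer shrink `nr_zones`. A minimal userspace model of the invariant (kernel types replaced with plain ints, names invented):

```c
#include <assert.h>

/* model of pgdat->nr_zones: must be monotonically non-decreasing */
static int nr_zones;

static void init_zone(int zone_idx_plus_one)
{
	/* a max()-style update, not a plain assignment */
	if (zone_idx_plus_one > nr_zones)
		nr_zones = zone_idx_plus_one;
}
```

With the old assignment, initialising zone 0 after zone 2 would have reset the counter; the guarded update keeps the high-water mark.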
+3-10
mm/rmap.c
···
 					      address + PAGE_SIZE);
 		} else {
 			/*
-			 * We should not need to notify here as we reach this
-			 * case only from freeze_page() itself only call from
-			 * split_huge_page_to_list() so everything below must
-			 * be true:
-			 * - page is not anonymous
-			 * - page is locked
-			 *
-			 * So as it is a locked file back page thus it can not
-			 * be remove from the page cache and replace by a new
-			 * page before mmu_notifier_invalidate_range_end so no
+			 * This is a locked file-backed page, thus it cannot
+			 * be removed from the page cache and replaced by a new
+			 * page before mmu_notifier_invalidate_range_end, so no
 			 * concurrent thread might update its page table to
 			 * point at new page while a device still is using this
 			 * page.
+37-6
mm/shmem.c
···
 	if (!shmem_inode_acct_block(inode, pages))
 		return false;

+	/* nrpages adjustment first, then shmem_recalc_inode() when balanced */
+	inode->i_mapping->nrpages += pages;
+
 	spin_lock_irqsave(&info->lock, flags);
 	info->alloced += pages;
 	inode->i_blocks += pages * BLOCKS_PER_PAGE;
 	shmem_recalc_inode(inode);
 	spin_unlock_irqrestore(&info->lock, flags);
-	inode->i_mapping->nrpages += pages;

 	return true;
 }
···
 {
 	struct shmem_inode_info *info = SHMEM_I(inode);
 	unsigned long flags;
+
+	/* nrpages adjustment done by __delete_from_page_cache() or caller */

 	spin_lock_irqsave(&info->lock, flags);
 	info->alloced -= pages;
···
 {
 	struct page *oldpage, *newpage;
 	struct address_space *swap_mapping;
+	swp_entry_t entry;
 	pgoff_t swap_index;
 	int error;

 	oldpage = *pagep;
-	swap_index = page_private(oldpage);
+	entry.val = page_private(oldpage);
+	swap_index = swp_offset(entry);
 	swap_mapping = page_mapping(oldpage);

 	/*
···
 	__SetPageLocked(newpage);
 	__SetPageSwapBacked(newpage);
 	SetPageUptodate(newpage);
-	set_page_private(newpage, swap_index);
+	set_page_private(newpage, entry.val);
 	SetPageSwapCache(newpage);

 	/*
···
 	struct page *page;
 	pte_t _dst_pte, *dst_pte;
 	int ret;
+	pgoff_t offset, max_off;

 	ret = -ENOMEM;
 	if (!shmem_inode_acct_block(inode, 1))
···
 			*pagep = page;
 			shmem_inode_unacct_blocks(inode, 1);
 			/* don't free the page */
-			return -EFAULT;
+			return -ENOENT;
 		}
 	} else {		/* mfill_zeropage_atomic */
 		clear_highpage(page);
···
 	__SetPageLocked(page);
 	__SetPageSwapBacked(page);
 	__SetPageUptodate(page);
+
+	ret = -EFAULT;
+	offset = linear_page_index(dst_vma, dst_addr);
+	max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
+	if (unlikely(offset >= max_off))
+		goto out_release;

 	ret = mem_cgroup_try_charge_delay(page, dst_mm, gfp, &memcg, false);
 	if (ret)
···
 	_dst_pte = mk_pte(page, dst_vma->vm_page_prot);
 	if (dst_vma->vm_flags & VM_WRITE)
 		_dst_pte = pte_mkwrite(pte_mkdirty(_dst_pte));
+	else {
+		/*
+		 * We don't set the pte dirty if the vma has no
+		 * VM_WRITE permission, so mark the page dirty or it
+		 * could be freed from under us. We could do it
+		 * unconditionally before unlock_page(), but doing it
+		 * only if VM_WRITE is not set is faster.
+		 */
+		set_page_dirty(page);
+	}
+
+	dst_pte = pte_offset_map_lock(dst_mm, dst_pmd, dst_addr, &ptl);
+
+	ret = -EFAULT;
+	max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
+	if (unlikely(offset >= max_off))
+		goto out_release_uncharge_unlock;

 	ret = -EEXIST;
-	dst_pte = pte_offset_map_lock(dst_mm, dst_pmd, dst_addr, &ptl);
 	if (!pte_none(*dst_pte))
 		goto out_release_uncharge_unlock;
···

 	/* No need to invalidate - it was non-present before */
 	update_mmu_cache(dst_vma, dst_addr, dst_pte);
-	unlock_page(page);
 	pte_unmap_unlock(dst_pte, ptl);
+	unlock_page(page);
 	ret = 0;
out:
 	return ret;
out_release_uncharge_unlock:
 	pte_unmap_unlock(dst_pte, ptl);
+	ClearPageDirty(page);
+	delete_from_page_cache(page);
out_release_uncharge:
 	mem_cgroup_cancel_charge(page, memcg, false);
out_release:
+6-2
mm/truncate.c
···
 	 */
 	xa_lock_irq(&mapping->i_pages);
 	xa_unlock_irq(&mapping->i_pages);
-
-		truncate_inode_pages(mapping, 0);
 	}
+
+	/*
+	 * Cleancache needs notification even if there are no pages or shadow
+	 * entries.
+	 */
+	truncate_inode_pages(mapping, 0);
 }
 EXPORT_SYMBOL(truncate_inode_pages_final);
+46-16
mm/userfaultfd.c
···
 	void *page_kaddr;
 	int ret;
 	struct page *page;
+	pgoff_t offset, max_off;
+	struct inode *inode;

 	if (!*pagep) {
 		ret = -ENOMEM;
···
 		/* fallback to copy_from_user outside mmap_sem */
 		if (unlikely(ret)) {
-			ret = -EFAULT;
+			ret = -ENOENT;
 			*pagep = page;
 			/* don't free the page */
 			goto out;
···
 	if (dst_vma->vm_flags & VM_WRITE)
 		_dst_pte = pte_mkwrite(pte_mkdirty(_dst_pte));

-	ret = -EEXIST;
 	dst_pte = pte_offset_map_lock(dst_mm, dst_pmd, dst_addr, &ptl);
+	if (dst_vma->vm_file) {
+		/* the shmem MAP_PRIVATE case requires checking the i_size */
+		inode = dst_vma->vm_file->f_inode;
+		offset = linear_page_index(dst_vma, dst_addr);
+		max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
+		ret = -EFAULT;
+		if (unlikely(offset >= max_off))
+			goto out_release_uncharge_unlock;
+	}
+	ret = -EEXIST;
 	if (!pte_none(*dst_pte))
 		goto out_release_uncharge_unlock;
···
 	pte_t _dst_pte, *dst_pte;
 	spinlock_t *ptl;
 	int ret;
+	pgoff_t offset, max_off;
+	struct inode *inode;

 	_dst_pte = pte_mkspecial(pfn_pte(my_zero_pfn(dst_addr),
 					 dst_vma->vm_page_prot));
-	ret = -EEXIST;
 	dst_pte = pte_offset_map_lock(dst_mm, dst_pmd, dst_addr, &ptl);
+	if (dst_vma->vm_file) {
+		/* the shmem MAP_PRIVATE case requires checking the i_size */
+		inode = dst_vma->vm_file->f_inode;
+		offset = linear_page_index(dst_vma, dst_addr);
+		max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
+		ret = -EFAULT;
+		if (unlikely(offset >= max_off))
+			goto out_unlock;
+	}
+	ret = -EEXIST;
 	if (!pte_none(*dst_pte))
 		goto out_unlock;
 	set_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte);
···
 	if (!dst_vma || !is_vm_hugetlb_page(dst_vma))
 		goto out_unlock;
 	/*
-	 * Only allow __mcopy_atomic_hugetlb on userfaultfd
-	 * registered ranges.
+	 * Check the vma is registered in uffd, this is
+	 * required to enforce the VM_MAYWRITE check done at
+	 * uffd registration time.
 	 */
 	if (!dst_vma->vm_userfaultfd_ctx.ctx)
 		goto out_unlock;
···

 		cond_resched();

-		if (unlikely(err == -EFAULT)) {
+		if (unlikely(err == -ENOENT)) {
 			up_read(&dst_mm->mmap_sem);
 			BUG_ON(!page);
···
 {
 	ssize_t err;

-	if (vma_is_anonymous(dst_vma)) {
+	/*
+	 * The normal page fault path for a shmem will invoke the
+	 * fault, fill the hole in the file and COW it right away. The
+	 * result generates plain anonymous memory. So when we are
+	 * asked to fill an hole in a MAP_PRIVATE shmem mapping, we'll
+	 * generate anonymous memory directly without actually filling
+	 * the hole. For the MAP_PRIVATE case the robustness check
+	 * only happens in the pagetable (to verify it's still none)
+	 * and not in the radix tree.
+	 */
+	if (!(dst_vma->vm_flags & VM_SHARED)) {
 		if (!zeropage)
 			err = mcopy_atomic_pte(dst_mm, dst_pmd, dst_vma,
 					       dst_addr, src_addr, page);
···
 	if (!dst_vma)
 		goto out_unlock;
 	/*
-	 * Be strict and only allow __mcopy_atomic on userfaultfd
-	 * registered ranges to prevent userland errors going
-	 * unnoticed. As far as the VM consistency is concerned, it
-	 * would be perfectly safe to remove this check, but there's
-	 * no useful usage for __mcopy_atomic ouside of userfaultfd
-	 * registered ranges. This is after all why these are ioctls
-	 * belonging to the userfaultfd and not syscalls.
+	 * Check the vma is registered in uffd, this is required to
+	 * enforce the VM_MAYWRITE check done at uffd registration
+	 * time.
 	 */
 	if (!dst_vma->vm_userfaultfd_ctx.ctx)
 		goto out_unlock;
···
 	 * dst_vma.
 	 */
 	err = -ENOMEM;
-	if (vma_is_anonymous(dst_vma) && unlikely(anon_vma_prepare(dst_vma)))
+	if (!(dst_vma->vm_flags & VM_SHARED) &&
+	    unlikely(anon_vma_prepare(dst_vma)))
 		goto out_unlock;

 	while (src_addr < src_start + len) {
···
 				       src_addr, &page, zeropage);
 		cond_resched();

-		if (unlikely(err == -EFAULT)) {
+		if (unlikely(err == -ENOENT)) {
 			void *page_kaddr;

 			up_read(&dst_mm->mmap_sem);
···
 		unsigned int fraglen;
 		unsigned int fraggap;
 		unsigned int alloclen;
-		unsigned int pagedlen = 0;
+		unsigned int pagedlen;
 		struct sk_buff *skb_prev;
alloc_new_skb:
 		skb_prev = skb;
···
 		if (datalen > mtu - fragheaderlen)
 			datalen = maxfraglen - fragheaderlen;
 		fraglen = datalen + fragheaderlen;
+		pagedlen = 0;

 		if ((flags & MSG_MORE) &&
 		    !(rt->dst.dev->features&NETIF_F_SG))
+5-2
net/ipv4/netfilter/ipt_MASQUERADE.c
···
 	int ret;

 	ret = xt_register_target(&masquerade_tg_reg);
+	if (ret)
+		return ret;

-	if (ret == 0)
-		nf_nat_masquerade_ipv4_register_notifier();
+	ret = nf_nat_masquerade_ipv4_register_notifier();
+	if (ret)
+		xt_unregister_target(&masquerade_tg_reg);

 	return ret;
 }
+30-8
net/ipv4/netfilter/nf_nat_masquerade_ipv4.c
···
 	.notifier_call	= masq_inet_event,
 };

-static atomic_t masquerade_notifier_refcount = ATOMIC_INIT(0);
+static int masq_refcnt;
+static DEFINE_MUTEX(masq_mutex);

-void nf_nat_masquerade_ipv4_register_notifier(void)
+int nf_nat_masquerade_ipv4_register_notifier(void)
 {
+	int ret = 0;
+
+	mutex_lock(&masq_mutex);
 	/* check if the notifier was already set */
-	if (atomic_inc_return(&masquerade_notifier_refcount) > 1)
-		return;
+	if (++masq_refcnt > 1)
+		goto out_unlock;

 	/* Register for device down reports */
-	register_netdevice_notifier(&masq_dev_notifier);
+	ret = register_netdevice_notifier(&masq_dev_notifier);
+	if (ret)
+		goto err_dec;
 	/* Register IP address change reports */
-	register_inetaddr_notifier(&masq_inet_notifier);
+	ret = register_inetaddr_notifier(&masq_inet_notifier);
+	if (ret)
+		goto err_unregister;
+
+	mutex_unlock(&masq_mutex);
+	return ret;
+
+err_unregister:
+	unregister_netdevice_notifier(&masq_dev_notifier);
+err_dec:
+	masq_refcnt--;
+out_unlock:
+	mutex_unlock(&masq_mutex);
+	return ret;
 }
 EXPORT_SYMBOL_GPL(nf_nat_masquerade_ipv4_register_notifier);

 void nf_nat_masquerade_ipv4_unregister_notifier(void)
 {
+	mutex_lock(&masq_mutex);
 	/* check if the notifier still has clients */
-	if (atomic_dec_return(&masquerade_notifier_refcount) > 0)
-		return;
+	if (--masq_refcnt > 0)
+		goto out_unlock;

 	unregister_netdevice_notifier(&masq_dev_notifier);
 	unregister_inetaddr_notifier(&masq_inet_notifier);
+out_unlock:
+	mutex_unlock(&masq_mutex);
 }
 EXPORT_SYMBOL_GPL(nf_nat_masquerade_ipv4_unregister_notifier);
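The pattern this fix adopts, a plain refcount guarded by a mutex with full rollback when a later registration step fails, can be sketched in userspace C. `register_notifiers`, `fail_second` and the `registered_*` flags are illustrative stand-ins, not kernel APIs:

```c
#include <pthread.h>
#include <stdbool.h>

/* model of masq_refcnt / masq_mutex and two registration steps */
static int refcnt;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int registered_a, registered_b;

static int register_notifiers(bool fail_second)
{
	int ret = 0;

	pthread_mutex_lock(&lock);
	if (++refcnt > 1)
		goto out_unlock;	/* already registered: success */

	registered_a = 1;		/* first registration succeeds */
	if (fail_second) {		/* test knob: second step fails */
		ret = -1;
		goto err_unregister;
	}
	registered_b = 1;

	pthread_mutex_unlock(&lock);
	return 0;

err_unregister:
	registered_a = 0;		/* undo the first step */
	refcnt--;			/* undo the refcount bump */
out_unlock:
	pthread_mutex_unlock(&lock);
	return ret;
}
```

The mutex replaces the old lock-free atomic so that the refcount bump and the (now fallible) registrations form one atomic unit; on failure the caller observes a clean, fully unregistered state.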
+3-1
net/ipv4/netfilter/nft_masq_ipv4.c
···
 	if (ret < 0)
 		return ret;

-	nf_nat_masquerade_ipv4_register_notifier();
+	ret = nf_nat_masquerade_ipv4_register_notifier();
+	if (ret)
+		nft_unregister_expr(&nft_masq_ipv4_type);

 	return ret;
 }
···
 {
 	struct inet_connection_sock *icsk = inet_csk(sk);
 	u32 elapsed, start_ts;
+	s32 remaining;

 	start_ts = tcp_retransmit_stamp(sk);
 	if (!icsk->icsk_user_timeout || !start_ts)
 		return icsk->icsk_rto;
 	elapsed = tcp_time_stamp(tcp_sk(sk)) - start_ts;
-	if (elapsed >= icsk->icsk_user_timeout)
+	remaining = icsk->icsk_user_timeout - elapsed;
+	if (remaining <= 0)
 		return 1; /* user timeout has passed; fire ASAP */
-	else
-		return min_t(u32, icsk->icsk_rto, msecs_to_jiffies(icsk->icsk_user_timeout - elapsed));
+
+	return min_t(u32, icsk->icsk_rto, msecs_to_jiffies(remaining));
 }

 /**
···
 			(boundary - linear_backoff_thresh) * TCP_RTO_MAX;
 		timeout = jiffies_to_msecs(timeout);
 	}
-	return (tcp_time_stamp(tcp_sk(sk)) - start_ts) >= timeout;
+	return (s32)(tcp_time_stamp(tcp_sk(sk)) - start_ts - timeout) >= 0;
 }

 /* A write timeout has occurred. Process the after effects. */
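Both hunks rely on the same trick: subtract unsigned timestamps and interpret the result as signed, so the comparison stays correct when the 32-bit clock wraps. A self-contained model of that comparison (the millisecond counters here are hypothetical, not the kernel's jiffies):

```c
#include <stdint.h>

/*
 * Wraparound-safe "has the timeout expired?" check, in the style of
 * the tcp_timer fix: never compare raw u32 timestamps directly.
 */
static int timeout_expired(uint32_t now, uint32_t start, uint32_t timeout)
{
	/* (now - start) is the true elapsed time even across a u32 wrap;
	 * casting the difference to signed makes the >= test safe */
	return (int32_t)(now - start - timeout) >= 0;
}
```

The naive form `now - start >= timeout` gives a huge bogus elapsed value the moment `now` wraps past zero while `start` is still near `UINT32_MAX`.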
+2-1
net/ipv6/ip6_output.c
···
 		unsigned int fraglen;
 		unsigned int fraggap;
 		unsigned int alloclen;
-		unsigned int pagedlen = 0;
+		unsigned int pagedlen;
alloc_new_skb:
 		/* There's no room in the current skb */
 		if (skb)
···
 		if (datalen > (cork->length <= mtu && !(cork->flags & IPCORK_ALLFRAG) ? mtu : maxfraglen) - fragheaderlen)
 			datalen = maxfraglen - fragheaderlen - rt->dst.trailer_len;
 		fraglen = datalen + fragheaderlen;
+		pagedlen = 0;

 		if ((flags & MSG_MORE) &&
 		    !(rt->dst.dev->features&NETIF_F_SG))
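As in the IPv4 hunk earlier, the bug class here is a per-iteration value that was only initialised once before the loop, so a value computed for one fragment leaked into the next. A contrived model of why the reset must live inside the loop (all names and numbers are invented for illustration):

```c
/* one initialisation before the loop: stale after the first pass */
static int total_len_buggy(int nfrags)
{
	int pagedlen = 0;	/* set once, like the pre-fix code */
	int total = 0;

	for (int i = 0; i < nfrags; i++) {
		if (i == 0)
			pagedlen = 7;	/* only frag 0 pages out data */
		total += 10 - pagedlen;	/* hypothetical per-frag length */
	}
	return total;
}

/* the fix: re-initialise at the top of every iteration */
static int total_len_fixed(int nfrags)
{
	int total = 0;

	for (int i = 0; i < nfrags; i++) {
		int pagedlen = 0;	/* fresh value per fragment */
		if (i == 0)
			pagedlen = 7;
		total += 10 - pagedlen;
	}
	return total;
}
```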
···
 	int err;

 	err = xt_register_target(&masquerade_tg6_reg);
-	if (err == 0)
-		nf_nat_masquerade_ipv6_register_notifier();
+	if (err)
+		return err;
+
+	err = nf_nat_masquerade_ipv6_register_notifier();
+	if (err)
+		xt_unregister_target(&masquerade_tg6_reg);

 	return err;
 }
+37-14
net/ipv6/netfilter/nf_nat_masquerade_ipv6.c
···
  * of ipv6 addresses being deleted), we also need to add an upper
  * limit to the number of queued work items.
  */
-static int masq_inet_event(struct notifier_block *this,
-			   unsigned long event, void *ptr)
+static int masq_inet6_event(struct notifier_block *this,
+			    unsigned long event, void *ptr)
 {
 	struct inet6_ifaddr *ifa = ptr;
 	const struct net_device *dev;
···
 	return NOTIFY_DONE;
 }

-static struct notifier_block masq_inet_notifier = {
-	.notifier_call	= masq_inet_event,
+static struct notifier_block masq_inet6_notifier = {
+	.notifier_call	= masq_inet6_event,
 };

-static atomic_t masquerade_notifier_refcount = ATOMIC_INIT(0);
+static int masq_refcnt;
+static DEFINE_MUTEX(masq_mutex);

-void nf_nat_masquerade_ipv6_register_notifier(void)
+int nf_nat_masquerade_ipv6_register_notifier(void)
 {
-	/* check if the notifier is already set */
-	if (atomic_inc_return(&masquerade_notifier_refcount) > 1)
-		return;
+	int ret = 0;

-	register_netdevice_notifier(&masq_dev_notifier);
-	register_inet6addr_notifier(&masq_inet_notifier);
+	mutex_lock(&masq_mutex);
+	/* check if the notifier is already set */
+	if (++masq_refcnt > 1)
+		goto out_unlock;
+
+	ret = register_netdevice_notifier(&masq_dev_notifier);
+	if (ret)
+		goto err_dec;
+
+	ret = register_inet6addr_notifier(&masq_inet6_notifier);
+	if (ret)
+		goto err_unregister;
+
+	mutex_unlock(&masq_mutex);
+	return ret;
+
+err_unregister:
+	unregister_netdevice_notifier(&masq_dev_notifier);
+err_dec:
+	masq_refcnt--;
+out_unlock:
+	mutex_unlock(&masq_mutex);
+	return ret;
 }
 EXPORT_SYMBOL_GPL(nf_nat_masquerade_ipv6_register_notifier);

 void nf_nat_masquerade_ipv6_unregister_notifier(void)
 {
+	mutex_lock(&masq_mutex);
 	/* check if the notifier still has clients */
-	if (atomic_dec_return(&masquerade_notifier_refcount) > 0)
-		return;
+	if (--masq_refcnt > 0)
+		goto out_unlock;

-	unregister_inet6addr_notifier(&masq_inet_notifier);
+	unregister_inet6addr_notifier(&masq_inet6_notifier);
 	unregister_netdevice_notifier(&masq_dev_notifier);
+out_unlock:
+	mutex_unlock(&masq_mutex);
 }
 EXPORT_SYMBOL_GPL(nf_nat_masquerade_ipv6_unregister_notifier);
+3-1
net/ipv6/netfilter/nft_masq_ipv6.c
···
 	if (ret < 0)
 		return ret;

-	nf_nat_masquerade_ipv6_register_notifier();
+	ret = nf_nat_masquerade_ipv6_register_notifier();
+	if (ret)
+		nft_unregister_expr(&nft_masq_ipv6_type);

 	return ret;
 }
···
 /* tipc_node_cleanup - delete nodes that does not
  * have active links for NODE_CLEANUP_AFTER time
  */
-static int tipc_node_cleanup(struct tipc_node *peer)
+static bool tipc_node_cleanup(struct tipc_node *peer)
 {
 	struct tipc_net *tn = tipc_net(peer->net);
 	bool deleted = false;

-	spin_lock_bh(&tn->node_list_lock);
+	/* If lock held by tipc_node_stop() the node will be deleted anyway */
+	if (!spin_trylock_bh(&tn->node_list_lock))
+		return false;
+
 	tipc_node_write_lock(peer);

 	if (!node_is_up(peer) && time_after(jiffies, peer->delete_at)) {
···
  * When we have processed a group that starts off with a known-false
  * #if/#elif sequence (which has therefore been deleted) followed by a
  * #elif that we don't understand and therefore must keep, we edit the
- * latter into a #if to keep the nesting correct. We use strncpy() to
+ * latter into a #if to keep the nesting correct. We use memcpy() to
  * overwrite the 4 byte token "elif" with "if  " without a '\0' byte.
  *
  * When we find a true #elif in a group, the following block will
···
 static void Itrue (void) { Ftrue();  ignoreon(); }
 static void Ifalse(void) { Ffalse(); ignoreon(); }
 /* modify this line */
-static void Mpass (void) { strncpy(keyword, "if  ", 4); Pelif(); }
+static void Mpass (void) { memcpy(keyword, "if  ", 4); Pelif(); }
 static void Mtrue (void) { keywordedit("else");  state(IS_TRUE_MIDDLE); }
 static void Melif (void) { keywordedit("endif"); state(IS_FALSE_TRAILER); }
 static void Melse (void) { keywordedit("endif"); state(IS_FALSE_ELSE); }
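The swap is not cosmetic: `strncpy()` with a bound equal to the source length copies no terminator and draws a truncation warning from recent GCC, while `memcpy()` states the actual intent, overwriting exactly 4 bytes in place. A sketch of the in-place token rewrite (`rewrite_keyword` is an invented name for illustration):

```c
#include <string.h>

/* overwrite the leading 4-byte token "elif" with "if  " in place;
 * no '\0' is written, which is exactly why memcpy() fits here */
static void rewrite_keyword(char *keyword)
{
	memcpy(keyword, "if  ", 4);
}
```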
+12-1
security/selinux/nlmsgtab.c
···
 	{ RTM_NEWSTATS,		NETLINK_ROUTE_SOCKET__NLMSG_READ },
 	{ RTM_GETSTATS,		NETLINK_ROUTE_SOCKET__NLMSG_READ },
 	{ RTM_NEWCACHEREPORT,	NETLINK_ROUTE_SOCKET__NLMSG_READ },
+	{ RTM_NEWCHAIN,		NETLINK_ROUTE_SOCKET__NLMSG_WRITE },
+	{ RTM_DELCHAIN,		NETLINK_ROUTE_SOCKET__NLMSG_WRITE },
+	{ RTM_GETCHAIN,		NETLINK_ROUTE_SOCKET__NLMSG_READ },
 };

 static const struct nlmsg_perm nlmsg_tcpdiag_perms[] =
···
 	switch (sclass) {
 	case SECCLASS_NETLINK_ROUTE_SOCKET:
-		/* RTM_MAX always point to RTM_SETxxxx, ie RTM_NEWxxx + 3 */
+		/* RTM_MAX always points to RTM_SETxxxx, ie RTM_NEWxxx + 3.
+		 * If the BUILD_BUG_ON() below fails you must update the
+		 * structures at the top of this file with the new mappings
+		 * before updating the BUILD_BUG_ON() macro!
+		 */
 		BUILD_BUG_ON(RTM_MAX != (RTM_NEWCHAIN + 3));
 		err = nlmsg_perm(nlmsg_type, perm, nlmsg_route_perms,
 				 sizeof(nlmsg_route_perms));
···
 		break;

 	case SECCLASS_NETLINK_XFRM_SOCKET:
+		/* If the BUILD_BUG_ON() below fails you must update the
+		 * structures at the top of this file with the new mappings
+		 * before updating the BUILD_BUG_ON() macro!
+		 */
 		BUILD_BUG_ON(XFRM_MSG_MAX != XFRM_MSG_MAPPING);
 		err = nlmsg_perm(nlmsg_type, perm, nlmsg_xfrm_perms,
 				 sizeof(nlmsg_xfrm_perms));
+45-35
sound/core/control.c
···
 	return 0;
 }

+/* add a new kcontrol object; call with card->controls_rwsem locked */
+static int __snd_ctl_add(struct snd_card *card, struct snd_kcontrol *kcontrol)
+{
+	struct snd_ctl_elem_id id;
+	unsigned int idx;
+	unsigned int count;
+
+	id = kcontrol->id;
+	if (id.index > UINT_MAX - kcontrol->count)
+		return -EINVAL;
+
+	if (snd_ctl_find_id(card, &id)) {
+		dev_err(card->dev,
+			"control %i:%i:%i:%s:%i is already present\n",
+			id.iface, id.device, id.subdevice, id.name, id.index);
+		return -EBUSY;
+	}
+
+	if (snd_ctl_find_hole(card, kcontrol->count) < 0)
+		return -ENOMEM;
+
+	list_add_tail(&kcontrol->list, &card->controls);
+	card->controls_count += kcontrol->count;
+	kcontrol->id.numid = card->last_numid + 1;
+	card->last_numid += kcontrol->count;
+
+	id = kcontrol->id;
+	count = kcontrol->count;
+	for (idx = 0; idx < count; idx++, id.index++, id.numid++)
+		snd_ctl_notify(card, SNDRV_CTL_EVENT_MASK_ADD, &id);
+
+	return 0;
+}
+
 /**
  * snd_ctl_add - add the control instance to the card
  * @card: the card instance
···
  */
 int snd_ctl_add(struct snd_card *card, struct snd_kcontrol *kcontrol)
 {
-	struct snd_ctl_elem_id id;
-	unsigned int idx;
-	unsigned int count;
 	int err = -EINVAL;

 	if (! kcontrol)
 		return err;
 	if (snd_BUG_ON(!card || !kcontrol->info))
 		goto error;
-	id = kcontrol->id;
-	if (id.index > UINT_MAX - kcontrol->count)
-		goto error;

 	down_write(&card->controls_rwsem);
-	if (snd_ctl_find_id(card, &id)) {
-		up_write(&card->controls_rwsem);
-		dev_err(card->dev, "control %i:%i:%i:%s:%i is already present\n",
-			id.iface,
-			id.device,
-			id.subdevice,
-			id.name,
-			id.index);
-		err = -EBUSY;
-		goto error;
-	}
-	if (snd_ctl_find_hole(card, kcontrol->count) < 0) {
-		up_write(&card->controls_rwsem);
-		err = -ENOMEM;
-		goto error;
-	}
-	list_add_tail(&kcontrol->list, &card->controls);
-	card->controls_count += kcontrol->count;
-	kcontrol->id.numid = card->last_numid + 1;
-	card->last_numid += kcontrol->count;
-	id = kcontrol->id;
-	count = kcontrol->count;
+	err = __snd_ctl_add(card, kcontrol);
 	up_write(&card->controls_rwsem);
-	for (idx = 0; idx < count; idx++, id.index++, id.numid++)
-		snd_ctl_notify(card, SNDRV_CTL_EVENT_MASK_ADD, &id);
+	if (err < 0)
+		goto error;
 	return 0;

 error:
···
 	kctl->tlv.c = snd_ctl_elem_user_tlv;

 	/* This function manage to free the instance on failure. */
-	err = snd_ctl_add(card, kctl);
-	if (err < 0)
-		return err;
+	down_write(&card->controls_rwsem);
+	err = __snd_ctl_add(card, kctl);
+	if (err < 0) {
+		snd_ctl_free_one(kctl);
+		goto unlock;
+	}
 	offset = snd_ctl_get_ioff(kctl, &info->id);
 	snd_ctl_build_ioff(&info->id, kctl, offset);
 	/*
···
 	 * which locks the element.
 	 */

-	down_write(&card->controls_rwsem);
 	card->user_ctl_count++;
-	up_write(&card->controls_rwsem);

+ unlock:
+	up_write(&card->controls_rwsem);
 	return 0;
 }
-2
sound/isa/wss/wss_lib.c
···
 	if (err < 0) {
 		if (chip->release_dma)
 			chip->release_dma(chip, chip->dma_private_data, chip->dma1);
-		snd_free_pages(runtime->dma_area, runtime->dma_bytes);
 		return err;
 	}
 	chip->playback_substream = substream;
···
 	if (err < 0) {
 		if (chip->release_dma)
 			chip->release_dma(chip, chip->dma_private_data, chip->dma2);
-		snd_free_pages(runtime->dma_area, runtime->dma_bytes);
 		return err;
 	}
 	chip->capture_substream = substream;
+1-1
sound/pci/ac97/ac97_codec.c
···
 {
 	struct snd_ac97 *ac97 = snd_kcontrol_chip(kcontrol);
 	int reg = kcontrol->private_value & 0xff;
-	int shift = (kcontrol->private_value >> 8) & 0xff;
+	int shift = (kcontrol->private_value >> 8) & 0x0f;
 	int mask = (kcontrol->private_value >> 16) & 0xff;
 	// int invert = (kcontrol->private_value >> 24) & 0xff;
 	unsigned short value, old, new;
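The mask change only makes sense given how `private_value` packs several fields into one word: reg in bits 0-7, shift in bits 8-15, mask in bits 16-23, invert in bits 24-31. Narrowing the shift extraction to `& 0x0f` keeps only the low nibble of that byte; what the upper bits of the shift byte carry is an assumption of this sketch, not taken from the hunk above:

```c
/* pack the control description the way private_value does it */
static unsigned long pack(int reg, int shift, int mask, int invert)
{
	return (unsigned long)reg | ((unsigned long)shift << 8) |
	       ((unsigned long)mask << 16) | ((unsigned long)invert << 24);
}

/* the fixed extraction: only the low nibble of byte 1 is the shift */
static int unpack_shift(unsigned long pv)
{
	return (pv >> 8) & 0x0f;
}
```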
···

 static void wm_adsp2_show_fw_status(struct wm_adsp *dsp)
 {
-	u16 scratch[4];
+	unsigned int scratch[4];
+	unsigned int addr = dsp->base + ADSP2_SCRATCH0;
+	unsigned int i;
 	int ret;

-	ret = regmap_raw_read(dsp->regmap, dsp->base + ADSP2_SCRATCH0,
-			      scratch, sizeof(scratch));
-	if (ret) {
-		adsp_err(dsp, "Failed to read SCRATCH regs: %d\n", ret);
-		return;
+	for (i = 0; i < ARRAY_SIZE(scratch); ++i) {
+		ret = regmap_read(dsp->regmap, addr + i, &scratch[i]);
+		if (ret) {
+			adsp_err(dsp, "Failed to read SCRATCH%u: %d\n", i, ret);
+			return;
+		}
 	}

 	adsp_dbg(dsp, "FW SCRATCH 0:0x%x 1:0x%x 2:0x%x 3:0x%x\n",
-		 be16_to_cpu(scratch[0]),
-		 be16_to_cpu(scratch[1]),
-		 be16_to_cpu(scratch[2]),
-		 be16_to_cpu(scratch[3]));
+		 scratch[0], scratch[1], scratch[2], scratch[3]);
 }

 static void wm_adsp2v2_show_fw_status(struct wm_adsp *dsp)
 {
-	u32 scratch[2];
+	unsigned int scratch[2];
 	int ret;

-	ret = regmap_raw_read(dsp->regmap, dsp->base + ADSP2V2_SCRATCH0_1,
-			      scratch, sizeof(scratch));
-
+	ret = regmap_read(dsp->regmap, dsp->base + ADSP2V2_SCRATCH0_1,
+			  &scratch[0]);
 	if (ret) {
-		adsp_err(dsp, "Failed to read SCRATCH regs: %d\n", ret);
+		adsp_err(dsp, "Failed to read SCRATCH0_1: %d\n", ret);
 		return;
 	}

-	scratch[0] = be32_to_cpu(scratch[0]);
-	scratch[1] = be32_to_cpu(scratch[1]);
+	ret = regmap_read(dsp->regmap, dsp->base + ADSP2V2_SCRATCH2_3,
+			  &scratch[1]);
+	if (ret) {
+		adsp_err(dsp, "Failed to read SCRATCH2_3: %d\n", ret);
+		return;
+	}

 	adsp_dbg(dsp, "FW SCRATCH 0:0x%x 1:0x%x 2:0x%x 3:0x%x\n",
 		 scratch[0] & 0xFFFF,
+23-3
sound/soc/intel/Kconfig
···
 	  codec, then enable this option by saying Y or m. This is a
 	  recommended option

-config SND_SOC_INTEL_SKYLAKE_SSP_CLK
-	tristate
-
 config SND_SOC_INTEL_SKYLAKE
 	tristate "SKL/BXT/KBL/GLK/CNL... Platforms"
 	depends on PCI && ACPI
+	select SND_SOC_INTEL_SKYLAKE_COMMON
+	help
+	  If you have a Intel Skylake/Broxton/ApolloLake/KabyLake/
+	  GeminiLake or CannonLake platform with the DSP enabled in the BIOS
+	  then enable this option by saying Y or m.
+
+if SND_SOC_INTEL_SKYLAKE
+
+config SND_SOC_INTEL_SKYLAKE_SSP_CLK
+	tristate
+
+config SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC
+	bool "HDAudio codec support"
+	help
+	  If you have a Intel Skylake/Broxton/ApolloLake/KabyLake/
+	  GeminiLake or CannonLake platform with an HDaudio codec
+	  then enable this option by saying Y
+
+config SND_SOC_INTEL_SKYLAKE_COMMON
+	tristate
 	select SND_HDA_EXT_CORE
 	select SND_HDA_DSP_LOADER
 	select SND_SOC_TOPOLOGY
 	select SND_SOC_INTEL_SST
+	select SND_SOC_HDAC_HDA if SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC
 	select SND_SOC_ACPI_INTEL_MATCH
 	help
 	  If you have a Intel Skylake/Broxton/ApolloLake/KabyLake/
 	  GeminiLake or CannonLake platform with the DSP enabled in the BIOS
 	  then enable this option by saying Y or m.
+
+endif ## SND_SOC_INTEL_SKYLAKE

 config SND_SOC_ACPI_INTEL_MATCH
 	tristate
+14-10
sound/soc/intel/boards/Kconfig
···
 	  Say Y if you have such a device.
 	  If unsure select "N".

-config SND_SOC_INTEL_SKL_HDA_DSP_GENERIC_MACH
-	tristate "SKL/KBL/BXT/APL with HDA Codecs"
-	select SND_SOC_HDAC_HDMI
-	select SND_SOC_HDAC_HDA
-	help
-	  This adds support for ASoC machine driver for Intel platforms
-	  SKL/KBL/BXT/APL with iDisp, HDA audio codecs.
-	  Say Y or m if you have such a device. This is a recommended option.
-	  If unsure select "N".
-
 config SND_SOC_INTEL_GLK_RT5682_MAX98357A_MACH
 	tristate "GLK with RT5682 and MAX98357A in I2S Mode"
 	depends on MFD_INTEL_LPSS && I2C && ACPI
···
 	  If unsure select "N".

 endif ## SND_SOC_INTEL_SKYLAKE
+
+if SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC
+
+config SND_SOC_INTEL_SKL_HDA_DSP_GENERIC_MACH
+	tristate "SKL/KBL/BXT/APL with HDA Codecs"
+	select SND_SOC_HDAC_HDMI
+	# SND_SOC_HDAC_HDA is already selected
+	help
+	  This adds support for ASoC machine driver for Intel platforms
+	  SKL/KBL/BXT/APL with iDisp, HDA audio codecs.
+	  Say Y or m if you have such a device. This is a recommended option.
+	  If unsure select "N".
+
+endif ## SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC

 endif ## SND_SOC_INTEL_MACH
+29-3
sound/soc/intel/boards/cht_bsw_max98090_ti.c
···
  * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  */

+#include <linux/dmi.h>
 #include <linux/module.h>
 #include <linux/platform_device.h>
 #include <linux/slab.h>
···

 #define CHT_PLAT_CLK_3_HZ	19200000
 #define CHT_CODEC_DAI	"HiFi"
+
+#define QUIRK_PMC_PLT_CLK_0	0x01

 struct cht_mc_private {
 	struct clk *mclk;
···
 	.num_controls = ARRAY_SIZE(cht_mc_controls),
 };

+static const struct dmi_system_id cht_max98090_quirk_table[] = {
+	{
+		/* Swanky model Chromebook (Toshiba Chromebook 2) */
+		.matches = {
+			DMI_MATCH(DMI_PRODUCT_NAME, "Swanky"),
+		},
+		.driver_data = (void *)QUIRK_PMC_PLT_CLK_0,
+	},
+	{}
+};
+
 static int snd_cht_mc_probe(struct platform_device *pdev)
 {
+	const struct dmi_system_id *dmi_id;
 	struct device *dev = &pdev->dev;
 	int ret_val = 0;
 	struct cht_mc_private *drv;
+	const char *mclk_name;
+	int quirks = 0;
+
+	dmi_id = dmi_first_match(cht_max98090_quirk_table);
+	if (dmi_id)
+		quirks = (unsigned long)dmi_id->driver_data;

 	drv = devm_kzalloc(&pdev->dev, sizeof(*drv), GFP_KERNEL);
 	if (!drv)
···
 	snd_soc_card_cht.dev = &pdev->dev;
 	snd_soc_card_set_drvdata(&snd_soc_card_cht, drv);

-	drv->mclk = devm_clk_get(&pdev->dev, "pmc_plt_clk_3");
+	if (quirks & QUIRK_PMC_PLT_CLK_0)
+		mclk_name = "pmc_plt_clk_0";
+	else
+		mclk_name = "pmc_plt_clk_3";
+
+	drv->mclk = devm_clk_get(&pdev->dev, mclk_name);
 	if (IS_ERR(drv->mclk)) {
 		dev_err(&pdev->dev,
-			"Failed to get MCLK from pmc_plt_clk_3: %ld\n",
-			PTR_ERR(drv->mclk));
+			"Failed to get MCLK from %s: %ld\n",
+			mclk_name, PTR_ERR(drv->mclk));
 		return PTR_ERR(drv->mclk);
 	}
+24-8
sound/soc/intel/skylake/skl.c
···
 #include "skl.h"
 #include "skl-sst-dsp.h"
 #include "skl-sst-ipc.h"
+#if IS_ENABLED(CONFIG_SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC)
 #include "../../../soc/codecs/hdac_hda.h"
+#endif

 /*
  * initialize the PCI registers
···
 	platform_device_unregister(skl->clk_dev);
 }

+#if IS_ENABLED(CONFIG_SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC)
+
 #define IDISP_INTEL_VENDOR_ID	0x80860000

 /*
···
 #endif
 }

+#endif /* CONFIG_SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC */
+
 /*
  * Probe the given codec address
  */
···
 		(AC_VERB_PARAMETERS << 8) | AC_PAR_VENDOR_ID;
 	unsigned int res = -1;
 	struct skl *skl = bus_to_skl(bus);
+#if IS_ENABLED(CONFIG_SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC)
 	struct hdac_hda_priv *hda_codec;
-	struct hdac_device *hdev;
 	int err;
+#endif
+	struct hdac_device *hdev;

 	mutex_lock(&bus->cmd_mutex);
 	snd_hdac_bus_send_cmd(bus, cmd);
···
 		return -EIO;
 	dev_dbg(bus->dev, "codec #%d probed OK: %x\n", addr, res);

+#if IS_ENABLED(CONFIG_SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC)
 	hda_codec = devm_kzalloc(&skl->pci->dev, sizeof(*hda_codec),
 				 GFP_KERNEL);
 	if (!hda_codec)
···
 		load_codec_module(&hda_codec->codec);
 	}
 	return 0;
+#else
+	hdev = devm_kzalloc(&skl->pci->dev, sizeof(*hdev), GFP_KERNEL);
+	if (!hdev)
+		return -ENOMEM;
+
+	return snd_hdac_ext_bus_device_init(bus, addr, hdev);
+#endif /* CONFIG_SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC */
 }

 /* Codec initialization */
···
 		}
 	}

+	/*
+	 * we are done probing so decrement link counts
+	 */
+	list_for_each_entry(hlink, &bus->hlink_list, list)
+		snd_hdac_ext_bus_link_put(bus, hlink);
+
 	if (IS_ENABLED(CONFIG_SND_SOC_HDAC_HDMI)) {
 		err = snd_hdac_display_power(bus, false);
 		if (err < 0) {
···
 			return;
 		}
 	}
-
-	/*
-	 * we are done probing so decrement link counts
-	 */
-	list_for_each_entry(hlink, &bus->hlink_list, list)
-		snd_hdac_ext_bus_link_put(bus, hlink);

 	/* configure PM */
 	pm_runtime_put_noidle(bus->dev);
···
 	hbus = skl_to_hbus(skl);
 	bus = skl_to_bus(skl);

-#if IS_ENABLED(CONFIG_SND_SOC_HDAC_HDA)
+#if IS_ENABLED(CONFIG_SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC)
 	ext_ops = snd_soc_hdac_hda_get_ops();
 #endif
 	snd_hdac_ext_bus_init(bus, &pci->dev, &bus_core_ops, io_ops, ext_ops);
···
 	if (rsnd_ssi_is_multi_slave(mod, io))
 		return 0;

-	if (ssi->rate) {
+	if (ssi->usrcnt > 1) {
 		if (ssi->rate != rate) {
 			dev_err(dev, "SSI parent/child should use same rate\n");
 			return -EINVAL;
+8-2
sound/soc/soc-acpi.c
···
 snd_soc_acpi_find_machine(struct snd_soc_acpi_mach *machines)
 {
 	struct snd_soc_acpi_mach *mach;
+	struct snd_soc_acpi_mach *mach_alt;

 	for (mach = machines; mach->id[0]; mach++) {
 		if (acpi_dev_present(mach->id, NULL, -1)) {
-			if (mach->machine_quirk)
-				mach = mach->machine_quirk(mach);
+			if (mach->machine_quirk) {
+				mach_alt = mach->machine_quirk(mach);
+				if (!mach_alt)
+					continue; /* not full match, ignore */
+				mach = mach_alt;
+			}
+
 			return mach;
 		}
 	}
···
 	char *mclk_name, *p, *s = (char *)pname;
 	int ret, i = 0;

-	mclk = devm_kzalloc(dev, sizeof(mclk), GFP_KERNEL);
+	mclk = devm_kzalloc(dev, sizeof(*mclk), GFP_KERNEL);
 	if (!mclk)
 		return -ENOMEM;
+1-1
sound/soc/sunxi/Kconfig
···
 config SND_SUN50I_CODEC_ANALOG
 	tristate "Allwinner sun50i Codec Analog Controls Support"
 	depends on (ARM64 && ARCH_SUNXI) || COMPILE_TEST
-	select SND_SUNXI_ADDA_PR_REGMAP
+	select SND_SUN8I_ADDA_PR_REGMAP
 	help
 	  Say Y or M if you want to add support for the analog controls for
 	  the codec embedded in Allwinner A64 SoC.
+5-7
sound/soc/sunxi/sun8i-codec.c
···
 	{ "Right Digital DAC Mixer", "AIF1 Slot 0 Digital DAC Playback Switch",
 	  "AIF1 Slot 0 Right"},

-	/* ADC routes */
+	/* ADC Routes */
+	{ "AIF1 Slot 0 Right ADC", NULL, "ADC" },
+	{ "AIF1 Slot 0 Left ADC", NULL, "ADC" },
+
+	/* ADC Mixer Routes */
 	{ "Left Digital ADC Mixer", "AIF1 Data Digital ADC Capture Switch",
 	  "AIF1 Slot 0 Left ADC" },
 	{ "Right Digital ADC Mixer", "AIF1 Data Digital ADC Capture Switch",
···

 static int sun8i_codec_remove(struct platform_device *pdev)
 {
-	struct snd_soc_card *card = platform_get_drvdata(pdev);
-	struct sun8i_codec *scodec = snd_soc_card_get_drvdata(card);
-
 	pm_runtime_disable(&pdev->dev);
 	if (!pm_runtime_status_suspended(&pdev->dev))
 		sun8i_codec_runtime_suspend(&pdev->dev);
-
-	clk_disable_unprepare(scodec->clk_module);
-	clk_disable_unprepare(scodec->clk_bus);

 	return 0;
 }
···
 # include "test-libelf-mmap.c"
 #undef main

+#define main main_test_get_current_dir_name
+# include "test-get_current_dir_name.c"
+#undef main
+
 #define main main_test_glibc
 # include "test-glibc.c"
 #undef main
···
 	main_test_hello();
 	main_test_libelf();
 	main_test_libelf_mmap();
+	main_test_get_current_dir_name();
 	main_test_glibc();
 	main_test_dwarf();
 	main_test_dwarf_getlocations();
···
 #define TIOCGPTLCK	_IOR('T', 0x39, int) /* Get Pty lock state */
 #define TIOCGEXCL	_IOR('T', 0x40, int) /* Get exclusive mode state */
 #define TIOCGPTPEER	_IO('T', 0x41) /* Safely open the slave */
+#define TIOCGISO7816	_IOR('T', 0x42, struct serial_iso7816)
+#define TIOCSISO7816	_IOWR('T', 0x43, struct serial_iso7816)

 #define FIONCLEX	0x5450
 #define FIOCLEX		0x5451
+22
tools/include/uapi/drm/i915_drm.h
···
  */
 #define I915_PARAM_CS_TIMESTAMP_FREQUENCY 51

+/*
+ * Once upon a time we supposed that writes through the GGTT would be
+ * immediately in physical memory (once flushed out of the CPU path). However,
+ * on a few different processors and chipsets, this is not necessarily the case
+ * as the writes appear to be buffered internally. Thus a read of the backing
+ * storage (physical memory) via a different path (with different physical tags
+ * to the indirect write via the GGTT) will see stale values from before
+ * the GGTT write. Inside the kernel, we can for the most part keep track of
+ * the different read/write domains in use (e.g. set-domain), but the assumption
+ * of coherency is baked into the ABI, hence reporting its true state in this
+ * parameter.
+ *
+ * Reports true when writes via mmap_gtt are immediately visible following an
+ * lfence to flush the WCB.
+ *
+ * Reports false when writes via mmap_gtt are indeterminately delayed in an in
+ * internal buffer and are _not_ immediately visible to third parties accessing
+ * directly via mmap_cpu/mmap_wc. Use of mmap_gtt as part of an IPC
+ * communications channel when reporting false is strongly disadvised.
+ */
+#define I915_PARAM_MMAP_GTT_COHERENT 52
+
 typedef struct drm_i915_getparam {
 	__s32 param;
 	/*
···
 #define PR_SET_SPECULATION_CTRL		53
 /* Speculation control variants */
 # define PR_SPEC_STORE_BYPASS		0
+# define PR_SPEC_INDIRECT_BRANCH	1
 /* Return and control values for PR_SET/GET_SPECULATION_CTRL */
 # define PR_SPEC_NOT_AFFECTED		0
 # define PR_SPEC_PRCTL			(1UL << 0)
+37
tools/include/uapi/linux/tc_act/tc_bpf.h
···
+/* SPDX-License-Identifier: GPL-2.0+ WITH Linux-syscall-note */
+/*
+ * Copyright (c) 2015 Jiri Pirko <jiri@resnulli.us>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#ifndef __LINUX_TC_BPF_H
+#define __LINUX_TC_BPF_H
+
+#include <linux/pkt_cls.h>
+
+#define TCA_ACT_BPF 13
+
+struct tc_act_bpf {
+	tc_gen;
+};
+
+enum {
+	TCA_ACT_BPF_UNSPEC,
+	TCA_ACT_BPF_TM,
+	TCA_ACT_BPF_PARMS,
+	TCA_ACT_BPF_OPS_LEN,
+	TCA_ACT_BPF_OPS,
+	TCA_ACT_BPF_FD,
+	TCA_ACT_BPF_NAME,
+	TCA_ACT_BPF_PAD,
+	TCA_ACT_BPF_TAG,
+	TCA_ACT_BPF_ID,
+	__TCA_ACT_BPF_MAX,
+};
+#define TCA_ACT_BPF_MAX (__TCA_ACT_BPF_MAX - 1)
+
+#endif
···
  * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
  * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
  */
-/* Test readlink /proc/self/map_files/... with address 0. */
+/* Test readlink /proc/self/map_files/... with minimum address. */
 #include <errno.h>
 #include <sys/types.h>
 #include <sys/stat.h>
···
 int main(void)
 {
 	const unsigned int PAGE_SIZE = sysconf(_SC_PAGESIZE);
+#ifdef __arm__
+	unsigned long va = 2 * PAGE_SIZE;
+#else
+	unsigned long va = 0;
+#endif
 	void *p;
 	int fd;
 	unsigned long a, b;
···
 	if (fd == -1)
 		return 1;

-	p = mmap(NULL, PAGE_SIZE, PROT_NONE, MAP_PRIVATE|MAP_FILE|MAP_FIXED, fd, 0);
+	p = mmap((void *)va, PAGE_SIZE, PROT_NONE, MAP_PRIVATE|MAP_FILE|MAP_FIXED, fd, 0);
 	if (p == MAP_FAILED) {
 		if (errno == EPERM)
 			return 2;