Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'kvmarm-5.12' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD

KVM/arm64 updates for Linux 5.12

- Make the nVHE EL2 object relocatable, resulting in much more
maintainable code
- Handle concurrent translation faults hitting the same page
in a more elegant way
- Support for the standard TRNG hypervisor call
- A bunch of small PMU/Debug fixes
- Allow the disabling of symbol export from assembly code
- Simplification of the early init hypercall handling

+4989 -1968
+24
CREDITS
··· 710 710 S: Las Heras, Mendoza CP 5539 711 711 S: Argentina 712 712 713 + N: Jay Cliburn 714 + E: jcliburn@gmail.com 715 + D: ATLX Ethernet drivers 716 + 713 717 N: Steven P. Cole 714 718 E: scole@lanl.gov 715 719 E: elenstev@mesatop.com ··· 1287 1283 D: Major kbuild rework during the 2.5 cycle 1288 1284 D: ISDN Maintainer 1289 1285 S: USA 1286 + 1287 + N: Gerrit Renker 1288 + E: gerrit@erg.abdn.ac.uk 1289 + D: DCCP protocol support. 1290 1290 1291 1291 N: Philip Gladstone 1292 1292 E: philip@gladstonefamily.net ··· 2146 2138 E: seasons@makosteszta.sote.hu 2147 2139 D: Original author of software suspend 2148 2140 2141 + N: Alexey Kuznetsov 2142 + E: kuznet@ms2.inr.ac.ru 2143 + D: Author and maintainer of large parts of the networking stack 2144 + 2149 2145 N: Jaroslav Kysela 2150 2146 E: perex@perex.cz 2151 2147 W: https://www.perex.cz ··· 2707 2695 N: Wolfgang Muees 2708 2696 E: wolfgang@iksw-muees.de 2709 2697 D: Auerswald USB driver 2698 + 2699 + N: Shrijeet Mukherjee 2700 + E: shrijeet@gmail.com 2701 + D: Network routing domains (VRF). 2710 2702 2711 2703 N: Paul Mundt 2712 2704 E: paul.mundt@gmail.com ··· 4126 4110 S: 16 Baliqiao Nanjie, Beijing 101100 4127 4111 S: People's Repulic of China 4128 4112 4113 + N: Aviad Yehezkel 4114 + E: aviadye@nvidia.com 4115 + D: Kernel TLS implementation and offload support. 4116 + 4129 4117 N: Victor Yodaiken 4130 4118 E: yodaiken@fsmlabs.com 4131 4119 D: RTLinux (RealTime Linux) ··· 4186 4166 S: 1507 145th Place SE #B5 4187 4167 S: Bellevue, Washington 98007 4188 4168 S: USA 4169 + 4170 + N: Wensong Zhang 4171 + E: wensong@linux-vs.org 4172 + D: IP virtual server (IPVS). 4189 4173 4190 4174 N: Haojian Zhuang 4191 4175 E: haojian.zhuang@gmail.com
+4
Documentation/admin-guide/kernel-parameters.txt
··· 5972 5972 This option is obsoleted by the "nopv" option, which 5973 5973 has equivalent effect for XEN platform. 5974 5974 5975 + xen_no_vector_callback 5976 + [KNL,X86,XEN] Disable the vector callback for Xen 5977 + event channel interrupts. 5978 + 5975 5979 xen_scrub_pages= [XEN] 5976 5980 Boolean option to control scrubbing pages before giving them back 5977 5981 to Xen, for use by other domains. Can be also changed at runtime
+1
Documentation/devicetree/bindings/net/renesas,etheravb.yaml
··· 163 163 enum: 164 164 - renesas,etheravb-r8a774a1 165 165 - renesas,etheravb-r8a774b1 166 + - renesas,etheravb-r8a774e1 166 167 - renesas,etheravb-r8a7795 167 168 - renesas,etheravb-r8a7796 168 169 - renesas,etheravb-r8a77961
+6 -2
Documentation/devicetree/bindings/net/snps,dwmac.yaml
··· 161 161 * snps,route-dcbcp, DCB Control Packets 162 162 * snps,route-up, Untagged Packets 163 163 * snps,route-multi-broad, Multicast & Broadcast Packets 164 - * snps,priority, RX queue priority (Range 0x0 to 0xF) 164 + * snps,priority, bitmask of the tagged frames priorities assigned to 165 + the queue 165 166 166 167 snps,mtl-tx-config: 167 168 $ref: /schemas/types.yaml#/definitions/phandle ··· 189 188 * snps,idle_slope, unlock on WoL 190 189 * snps,high_credit, max write outstanding req. limit 191 190 * snps,low_credit, max read outstanding req. limit 192 - * snps,priority, TX queue priority (Range 0x0 to 0xF) 191 + * snps,priority, bitmask of the priorities assigned to the queue. 192 + When a PFC frame is received with priorities matching the bitmask, 193 + the queue is blocked from transmitting for the pause time specified 194 + in the PFC frame. 193 195 194 196 snps,reset-gpio: 195 197 deprecated: true
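The ``snps,priority`` semantics clarified in this hunk (a bitmask with one bit per 802.1Q priority value, rather than a 0x0-0xF queue priority) can be sketched in a few lines of C. The helper name below is invented for illustration and is not part of the binding or the driver:

```c
#include <assert.h>
#include <stdint.h>

/* Build an snps,priority-style bitmask from a list of 802.1Q
 * priority values (each in the range 0-7): bit N set means frames
 * tagged with priority N are assigned to this queue. */
static uint32_t prio_mask(const int *prios, int n)
{
    uint32_t mask = 0;

    for (int i = 0; i < n; i++)
        mask |= 1u << prios[i];
    return mask;
}
```

For example, a queue handling priorities 2 and 3 would use the mask `0x0c`, which is also the mask a matching PFC frame would carry to pause that queue.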
+3 -1
Documentation/devicetree/bindings/sound/ti,j721e-cpb-audio.yaml
··· 1 1 # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + # Copyright (C) 2020 Texas Instruments Incorporated 3 + # Author: Peter Ujfalusi <peter.ujfalusi@ti.com> 2 4 %YAML 1.2 3 5 --- 4 6 $id: http://devicetree.org/schemas/sound/ti,j721e-cpb-audio.yaml# ··· 9 7 title: Texas Instruments J721e Common Processor Board Audio Support 10 8 11 9 maintainers: 12 - - Peter Ujfalusi <peter.ujfalusi@ti.com> 10 + - Peter Ujfalusi <peter.ujfalusi@gmail.com> 13 11 14 12 description: | 15 13 The audio support on the board is using pcm3168a codec connected to McASP10
+3 -1
Documentation/devicetree/bindings/sound/ti,j721e-cpb-ivi-audio.yaml
··· 1 1 # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + # Copyright (C) 2020 Texas Instruments Incorporated 3 + # Author: Peter Ujfalusi <peter.ujfalusi@ti.com> 2 4 %YAML 1.2 3 5 --- 4 6 $id: http://devicetree.org/schemas/sound/ti,j721e-cpb-ivi-audio.yaml# ··· 9 7 title: Texas Instruments J721e Common Processor Board Audio Support 10 8 11 9 maintainers: 12 - - Peter Ujfalusi <peter.ujfalusi@ti.com> 10 + - Peter Ujfalusi <peter.ujfalusi@gmail.com> 13 11 14 12 description: | 15 13 The Infotainment board plugs into the Common Processor Board, the support of the
+2 -2
Documentation/firmware-guide/acpi/apei/einj.rst
··· 50 50 0x00000010 Memory Uncorrectable non-fatal 51 51 0x00000020 Memory Uncorrectable fatal 52 52 0x00000040 PCI Express Correctable 53 - 0x00000080 PCI Express Uncorrectable fatal 54 - 0x00000100 PCI Express Uncorrectable non-fatal 53 + 0x00000080 PCI Express Uncorrectable non-fatal 54 + 0x00000100 PCI Express Uncorrectable fatal 55 55 0x00000200 Platform Correctable 56 56 0x00000400 Platform Uncorrectable non-fatal 57 57 0x00000800 Platform Uncorrectable fatal
+165 -6
Documentation/networking/netdevices.rst
··· 10 10 The following is a random collection of documentation regarding 11 11 network devices. 12 12 13 - struct net_device allocation rules 14 - ================================== 13 + struct net_device lifetime rules 14 + ================================ 15 15 Network device structures need to persist even after module is unloaded and 16 16 must be allocated with alloc_netdev_mqs() and friends. 17 17 If device has registered successfully, it will be freed on last use 18 - by free_netdev(). This is required to handle the pathologic case cleanly 19 - (example: rmmod mydriver </sys/class/net/myeth/mtu ) 18 + by free_netdev(). This is required to handle the pathological case cleanly 19 + (example: ``rmmod mydriver </sys/class/net/myeth/mtu``) 20 20 21 - alloc_netdev_mqs()/alloc_netdev() reserve extra space for driver 21 + alloc_netdev_mqs() / alloc_netdev() reserve extra space for driver 22 22 private data which gets freed when the network device is freed. If 23 23 separately allocated data is attached to the network device 24 - (netdev_priv(dev)) then it is up to the module exit handler to free that. 24 + (netdev_priv()) then it is up to the module exit handler to free that. 25 + 26 + There are two groups of APIs for registering struct net_device. 27 + First group can be used in normal contexts where ``rtnl_lock`` is not already 28 + held: register_netdev(), unregister_netdev(). 29 + Second group can be used when ``rtnl_lock`` is already held: 30 + register_netdevice(), unregister_netdevice(), free_netdev(). 31 + 32 + Simple drivers 33 + -------------- 34 + 35 + Most drivers (especially device drivers) handle lifetime of struct net_device 36 + in context where ``rtnl_lock`` is not held (e.g. driver probe and remove paths). 37 + 38 + In that case the struct net_device registration is done using 39 + the register_netdev(), and unregister_netdev() functions: 40 + 41 + .. code-block:: c 42 + 43 + int probe() 44 + { 45 + struct my_device_priv *priv; 46 + int err; 47 + 48 + dev = alloc_netdev_mqs(...); 49 + if (!dev) 50 + return -ENOMEM; 51 + priv = netdev_priv(dev); 52 + 53 + /* ... do all device setup before calling register_netdev() ... 54 + */ 55 + 56 + err = register_netdev(dev); 57 + if (err) 58 + goto err_undo; 59 + 60 + /* net_device is visible to the user! */ 61 + 62 + err_undo: 63 + /* ... undo the device setup ... */ 64 + free_netdev(dev); 65 + return err; 66 + } 67 + 68 + void remove() 69 + { 70 + unregister_netdev(dev); 71 + free_netdev(dev); 72 + } 73 + 74 + Note that after calling register_netdev() the device is visible in the system. 75 + Users can open it and start sending / receiving traffic immediately, 76 + or run any other callback, so all initialization must be done prior to 77 + registration. 78 + 79 + unregister_netdev() closes the device and waits for all users to be done 80 + with it. The memory of struct net_device itself may still be referenced 81 + by sysfs but all operations on that device will fail. 82 + 83 + free_netdev() can be called after unregister_netdev() returns or when 84 + register_netdev() failed. 85 + 86 + Device management under RTNL 87 + ---------------------------- 88 + 89 + Registering struct net_device while in context which already holds 90 + the ``rtnl_lock`` requires extra care. In those scenarios most drivers 91 + will want to make use of struct net_device's ``needs_free_netdev`` 92 + and ``priv_destructor`` members for freeing of state. 93 + 94 + Example flow of netdev handling under ``rtnl_lock``: 95 + 96 + .. code-block:: c 97 + 98 + static void my_setup(struct net_device *dev) 99 + { 100 + dev->needs_free_netdev = true; 101 + } 102 + 103 + static void my_destructor(struct net_device *dev) 104 + { 105 + some_obj_destroy(priv->obj); 106 + some_uninit(priv); 107 + } 108 + 109 + int create_link() 110 + { 111 + struct my_device_priv *priv; 112 + int err; 113 + 114 + ASSERT_RTNL(); 115 + 116 + dev = alloc_netdev(sizeof(*priv), "net%d", NET_NAME_UNKNOWN, my_setup); 117 + if (!dev) 118 + return -ENOMEM; 119 + priv = netdev_priv(dev); 120 + 121 + /* Implicit constructor */ 122 + err = some_init(priv); 123 + if (err) 124 + goto err_free_dev; 125 + 126 + priv->obj = some_obj_create(); 127 + if (!priv->obj) { 128 + err = -ENOMEM; 129 + goto err_some_uninit; 130 + } 131 + /* End of constructor, set the destructor: */ 132 + dev->priv_destructor = my_destructor; 133 + 134 + err = register_netdevice(dev); 135 + if (err) 136 + /* register_netdevice() calls destructor on failure */ 137 + goto err_free_dev; 138 + 139 + /* If anything fails now unregister_netdevice() (or unregister_netdev()) 140 + * will take care of calling my_destructor and free_netdev(). 141 + */ 142 + 143 + return 0; 144 + 145 + err_some_uninit: 146 + some_uninit(priv); 147 + err_free_dev: 148 + free_netdev(dev); 149 + return err; 150 + } 151 + 152 + If struct net_device.priv_destructor is set it will be called by the core 153 + some time after unregister_netdevice(), it will also be called if 154 + register_netdevice() fails. The callback may be invoked with or without 155 + ``rtnl_lock`` held. 156 + 157 + There is no explicit constructor callback, driver "constructs" the private 158 + netdev state after allocating it and before registration. 159 + 160 + Setting struct net_device.needs_free_netdev makes core call free_netdev() 161 + automatically after unregister_netdevice() when all references to the device 162 + are gone. It only takes effect after a successful call to register_netdevice() 163 + so if register_netdevice() fails driver is responsible for calling 164 + free_netdev(). 165 + 166 + free_netdev() is safe to call on error paths right after unregister_netdevice() 167 + or when register_netdevice() fails. Parts of netdev (de)registration process 168 + happen after ``rtnl_lock`` is released, therefore in those cases free_netdev() 169 + will defer some of the processing until ``rtnl_lock`` is released. 170 + 171 + Devices spawned from struct rtnl_link_ops should never free the 172 + struct net_device directly. 173 + 174 + .ndo_init and .ndo_uninit 175 + ~~~~~~~~~~~~~~~~~~~~~~~~~ 176 + 177 + ``.ndo_init`` and ``.ndo_uninit`` callbacks are called during net_device 178 + registration and de-registration, under ``rtnl_lock``. Drivers can use 179 + those e.g. when parts of their init process need to run under ``rtnl_lock``. 180 + 181 + ``.ndo_init`` runs before device is visible in the system, ``.ndo_uninit`` 182 + runs during de-registering after device is closed but other subsystems 183 + may still have outstanding references to the netdevice. 25 184 26 185 MTU 27 186 ===
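The constructor/destructor discipline this documentation describes (arm ``priv_destructor`` only once the private state is fully constructed, and let a failing register call it) can be modeled in plain C. This is a toy user-space sketch, not kernel code; every name below is invented and only mirrors the shape of the kernel API:

```c
#include <stdbool.h>

/* Toy stand-in for struct net_device with a priv_destructor hook. */
struct toy_netdev {
    void (*priv_destructor)(struct toy_netdev *dev);
    bool registered;
    int priv_state;   /* stands in for driver private data */
};

static int destructor_calls;

static void my_destructor(struct toy_netdev *dev)
{
    dev->priv_state = 0;   /* "uninit" the private state */
    destructor_calls++;
}

/* Like register_netdevice(): on failure, invoke the destructor if set. */
static int toy_register(struct toy_netdev *dev, int simulate_err)
{
    if (simulate_err) {
        if (dev->priv_destructor)
            dev->priv_destructor(dev);
        return simulate_err;
    }
    dev->registered = true;
    return 0;
}

static int create_link(struct toy_netdev *dev, int fail_register)
{
    dev->priv_state = 1;                  /* implicit constructor */
    dev->priv_destructor = my_destructor; /* constructor done: arm destructor */
    return toy_register(dev, fail_register);
}
```

The design point being modeled: error paths *before* the destructor is armed must undo state by hand, while everything after registration relies on the core invoking the destructor exactly once.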
+1 -1
Documentation/networking/tls-offload.rst
··· 530 530 offloads, old connections will remain active after flags are cleared. 531 531 532 532 TLS encryption cannot be offloaded to devices without checksum calculation 533 - offload. Hence, TLS TX device feature flag requires NETIF_F_HW_CSUM being set. 533 + offload. Hence, TLS TX device feature flag requires TX csum offload being set. 534 534 Disabling the latter implies clearing the former. Disabling TX checksum offload 535 535 should not affect old connections, and drivers should make sure checksum 536 536 calculation does not break for them.
+1 -1
Documentation/sound/alsa-configuration.rst
··· 1501 1501 1502 1502 This module supports multiple cards. 1503 1503 Note: One miXart8 board will be represented as 4 alsa cards. 1504 - See MIXART.txt for details. 1504 + See Documentation/sound/cards/mixart.rst for details. 1505 1505 1506 1506 When the driver is compiled as a module and the hotplug firmware 1507 1507 is supported, the firmware data is loaded via hotplug automatically.
+7 -13
MAINTAINERS
··· 820 820 M: Arthur Kiyanovski <akiyano@amazon.com> 821 821 R: Guy Tzalik <gtzalik@amazon.com> 822 822 R: Saeed Bishara <saeedb@amazon.com> 823 - R: Zorik Machulsky <zorik@amazon.com> 824 823 L: netdev@vger.kernel.org 825 824 S: Supported 826 825 F: Documentation/networking/device_drivers/ethernet/amazon/ena.rst ··· 906 907 M: Felix Kuehling <Felix.Kuehling@amd.com> 907 908 L: amd-gfx@lists.freedesktop.org 908 909 S: Supported 909 - T: git git://people.freedesktop.org/~agd5f/linux 910 + T: git https://gitlab.freedesktop.org/agd5f/linux.git 910 911 F: drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd*.[ch] 911 912 F: drivers/gpu/drm/amd/amdkfd/ 912 913 F: drivers/gpu/drm/amd/include/cik_structs.h ··· 2941 2942 F: drivers/hwmon/asus_atk0110.c 2942 2943 2943 2944 ATLX ETHERNET DRIVERS 2944 - M: Jay Cliburn <jcliburn@gmail.com> 2945 2945 M: Chris Snook <chris.snook@gmail.com> 2946 2946 L: netdev@vger.kernel.org 2947 2947 S: Maintained ··· 4920 4922 F: drivers/scsi/dc395x.* 4921 4923 4922 4924 DCCP PROTOCOL 4923 - M: Gerrit Renker <gerrit@erg.abdn.ac.uk> 4924 4925 L: dccp@vger.kernel.org 4925 - S: Maintained 4926 + S: Orphan 4926 4927 W: http://www.linuxfoundation.org/collaborate/workgroups/networking/dccp 4927 4928 F: include/linux/dccp.h 4928 4929 F: include/linux/tfrc.h ··· 9323 9326 F: drivers/scsi/ips* 9324 9327 9325 9328 IPVS 9326 - M: Wensong Zhang <wensong@linux-vs.org> 9327 9329 M: Simon Horman <horms@verge.net.au> 9328 9330 M: Julian Anastasov <ja@ssi.bg> 9329 9331 L: netdev@vger.kernel.org ··· 12412 12416 12413 12417 NETWORKING [IPv4/IPv6] 12414 12418 M: "David S. Miller" <davem@davemloft.net> 12415 - M: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru> 12416 12419 M: Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org> 12417 12420 L: netdev@vger.kernel.org 12418 12421 S: Maintained ··· 12468 12473 12469 12474 NETWORKING [TLS] 12470 12475 M: Boris Pismenny <borisp@nvidia.com> 12471 - M: Aviad Yehezkel <aviadye@nvidia.com> 12472 12476 M: John Fastabend <john.fastabend@gmail.com> 12473 12477 M: Daniel Borkmann <daniel@iogearbox.net> 12474 12478 M: Jakub Kicinski <kuba@kernel.org> ··· 12842 12848 F: include/uapi/misc/ocxl.h 12843 12849 12844 12850 OMAP AUDIO SUPPORT 12845 - M: Peter Ujfalusi <peter.ujfalusi@ti.com> 12851 + M: Peter Ujfalusi <peter.ujfalusi@gmail.com> 12846 12852 M: Jarkko Nikula <jarkko.nikula@bitmer.com> 12847 12853 L: alsa-devel@alsa-project.org (moderated for non-subscribers) 12848 12854 L: linux-omap@vger.kernel.org ··· 14812 14818 M: Christian König <christian.koenig@amd.com> 14813 14819 L: amd-gfx@lists.freedesktop.org 14814 14820 S: Supported 14815 - T: git git://people.freedesktop.org/~agd5f/linux 14821 + T: git https://gitlab.freedesktop.org/agd5f/linux.git 14816 14822 F: drivers/gpu/drm/amd/ 14817 14823 F: drivers/gpu/drm/radeon/ 14818 14824 F: include/uapi/drm/amdgpu_drm.h ··· 16313 16319 M: David Rientjes <rientjes@google.com> 16314 16320 M: Joonsoo Kim <iamjoonsoo.kim@lge.com> 16315 16321 M: Andrew Morton <akpm@linux-foundation.org> 16322 + M: Vlastimil Babka <vbabka@suse.cz> 16316 16323 L: linux-mm@kvack.org 16317 16324 S: Maintained 16318 16325 F: include/linux/sl?b*.h ··· 17536 17541 F: drivers/irqchip/irq-xtensa-* 17537 17542 17538 17543 TEXAS INSTRUMENTS ASoC DRIVERS 17539 - M: Peter Ujfalusi <peter.ujfalusi@ti.com> 17544 + M: Peter Ujfalusi <peter.ujfalusi@gmail.com> 17540 17545 L: alsa-devel@alsa-project.org (moderated for non-subscribers) 17541 17546 S: Maintained 17542 17547 F: sound/soc/ti/ ··· 17846 17851 F: drivers/nfc/trf7970a.c 17847 17852 17848 17853 TI TWL4030 SERIES SOC CODEC DRIVER 17849 - M: Peter Ujfalusi <peter.ujfalusi@ti.com> 17854 + M: Peter Ujfalusi <peter.ujfalusi@gmail.com> 17850 17855 L: alsa-devel@alsa-project.org (moderated for non-subscribers) 17851 17856 S: Maintained 17852 17857 F: sound/soc/codecs/twl4030* ··· 19066 19071 19067 19072 VRF 19068 19073 M: David Ahern <dsahern@kernel.org> 19069 - M: Shrijeet Mukherjee <shrijeet@gmail.com> 19070 19074 L: netdev@vger.kernel.org 19071 19075 S: Maintained 19072 19076 F: Documentation/networking/vrf.rst
+1 -1
Makefile
··· 2 2 VERSION = 5 3 3 PATCHLEVEL = 11 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc3 5 + EXTRAVERSION = -rc4 6 6 NAME = Kleptomaniac Octopus 7 7 8 8 # *DOCUMENTATION*
+1 -1
arch/arm/xen/enlighten.c
··· 371 371 } 372 372 gnttab_init(); 373 373 if (!xen_initial_domain()) 374 - xenbus_probe(NULL); 374 + xenbus_probe(); 375 375 376 376 /* 377 377 * Making sure board specific code will not set up ops for
-2
arch/arm64/Kconfig
··· 174 174 select HAVE_NMI 175 175 select HAVE_PATA_PLATFORM 176 176 select HAVE_PERF_EVENTS 177 - select HAVE_PERF_EVENTS_NMI if ARM64_PSEUDO_NMI && HW_PERF_EVENTS 178 - select HAVE_HARDLOCKUP_DETECTOR_PERF if PERF_EVENTS && HAVE_PERF_EVENTS_NMI 179 177 select HAVE_PERF_REGS 180 178 select HAVE_PERF_USER_STACK_DUMP 181 179 select HAVE_REGS_AND_STACK_ACCESS_API
+5 -5
arch/arm64/include/asm/atomic.h
··· 17 17 #include <asm/lse.h> 18 18 19 19 #define ATOMIC_OP(op) \ 20 - static inline void arch_##op(int i, atomic_t *v) \ 20 + static __always_inline void arch_##op(int i, atomic_t *v) \ 21 21 { \ 22 22 __lse_ll_sc_body(op, i, v); \ 23 23 } ··· 32 32 #undef ATOMIC_OP 33 33 34 34 #define ATOMIC_FETCH_OP(name, op) \ 35 - static inline int arch_##op##name(int i, atomic_t *v) \ 35 + static __always_inline int arch_##op##name(int i, atomic_t *v) \ 36 36 { \ 37 37 return __lse_ll_sc_body(op##name, i, v); \ 38 38 } ··· 56 56 #undef ATOMIC_FETCH_OPS 57 57 58 58 #define ATOMIC64_OP(op) \ 59 - static inline void arch_##op(long i, atomic64_t *v) \ 59 + static __always_inline void arch_##op(long i, atomic64_t *v) \ 60 60 { \ 61 61 __lse_ll_sc_body(op, i, v); \ 62 62 } ··· 71 71 #undef ATOMIC64_OP 72 72 73 73 #define ATOMIC64_FETCH_OP(name, op) \ 74 - static inline long arch_##op##name(long i, atomic64_t *v) \ 74 + static __always_inline long arch_##op##name(long i, atomic64_t *v) \ 75 75 { \ 76 76 return __lse_ll_sc_body(op##name, i, v); \ 77 77 } ··· 94 94 #undef ATOMIC64_FETCH_OP 95 95 #undef ATOMIC64_FETCH_OPS 96 96 97 - static inline long arch_atomic64_dec_if_positive(atomic64_t *v) 97 + static __always_inline long arch_atomic64_dec_if_positive(atomic64_t *v) 98 98 { 99 99 return __lse_ll_sc_body(atomic64_dec_if_positive, v); 100 100 }
+27 -2
arch/arm64/include/asm/hyp_image.h
··· 7 7 #ifndef __ARM64_HYP_IMAGE_H__ 8 8 #define __ARM64_HYP_IMAGE_H__ 9 9 10 + #define __HYP_CONCAT(a, b) a ## b 11 + #define HYP_CONCAT(a, b) __HYP_CONCAT(a, b) 12 + 10 13 /* 11 14 * KVM nVHE code has its own symbol namespace prefixed with __kvm_nvhe_, 12 15 * to separate it from the kernel proper. ··· 24 21 */ 25 22 #define HYP_SECTION_NAME(NAME) .hyp##NAME 26 23 24 + /* Symbol defined at the beginning of each hyp section. */ 25 + #define HYP_SECTION_SYMBOL_NAME(NAME) \ 26 + HYP_CONCAT(__hyp_section_, HYP_SECTION_NAME(NAME)) 27 + 28 + /* 29 + * Helper to generate linker script statements starting a hyp section. 30 + * 31 + * A symbol with a well-known name is defined at the first byte. This 32 + * is used as a base for hyp relocations (see gen-hyprel.c). It must 33 + * be defined inside the section so the linker of `vmlinux` cannot 34 + * separate it from the section data. 35 + */ 36 + #define BEGIN_HYP_SECTION(NAME) \ 37 + HYP_SECTION_NAME(NAME) : { \ 38 + HYP_SECTION_SYMBOL_NAME(NAME) = .; 39 + 40 + /* Helper to generate linker script statements ending a hyp section. */ 41 + #define END_HYP_SECTION \ 42 + } 43 + 27 44 /* Defines an ELF hyp section from input section @NAME and its subsections. */ 28 - #define HYP_SECTION(NAME) \ 29 - HYP_SECTION_NAME(NAME) : { *(NAME NAME##.*) } 45 + #define HYP_SECTION(NAME) \ 46 + BEGIN_HYP_SECTION(NAME) \ 47 + *(NAME NAME##.*) \ 48 + END_HYP_SECTION 30 49 31 50 /* 32 51 * Defines a linker script alias of a kernel-proper symbol referenced by
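The two-level ``HYP_CONCAT`` indirection added in this hunk is the standard preprocessor trick for pasting *expanded* arguments: ``##`` suppresses macro expansion of its operands, so a single-level paste glues the literal argument tokens together instead of their expansions. A stand-alone sketch (macro names here are illustrative, not the kernel's):

```c
#include <string.h>

/* Single level: operands of ## are pasted as written, unexpanded. */
#define PASTE_DIRECT(a, b) a ## b
/* Two levels: a and b are macro-expanded first, then pasted. */
#define PASTE(a, b) PASTE_DIRECT(a, b)

#define STRINGIFY_DIRECT(x) #x
#define STRINGIFY(x) STRINGIFY_DIRECT(x)

#define SUFFIX text

/* The direct form sees the token SUFFIX itself... */
static const char *one_level = STRINGIFY(PASTE_DIRECT(hyp_, SUFFIX));
/* ...while the two-level form expands SUFFIX to `text` before pasting. */
static const char *two_level = STRINGIFY(PASTE(hyp_, SUFFIX));
```

This is why `HYP_SECTION_SYMBOL_NAME(NAME)` goes through `HYP_CONCAT`: it must paste the *result* of expanding `HYP_SECTION_NAME(NAME)`, not the macro invocation's spelling.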
-26
arch/arm64/include/asm/kvm_asm.h
··· 199 199 200 200 extern u32 __kvm_get_mdcr_el2(void); 201 201 202 - #if defined(GCC_VERSION) && GCC_VERSION < 50000 203 - #define SYM_CONSTRAINT "i" 204 - #else 205 - #define SYM_CONSTRAINT "S" 206 - #endif 207 - 208 - /* 209 - * Obtain the PC-relative address of a kernel symbol 210 - * s: symbol 211 - * 212 - * The goal of this macro is to return a symbol's address based on a 213 - * PC-relative computation, as opposed to a loading the VA from a 214 - * constant pool or something similar. This works well for HYP, as an 215 - * absolute VA is guaranteed to be wrong. Only use this if trying to 216 - * obtain the address of a symbol (i.e. not something you obtained by 217 - * following a pointer). 218 - */ 219 - #define hyp_symbol_addr(s) \ 220 - ({ \ 221 - typeof(s) *addr; \ 222 - asm("adrp %0, %1\n" \ 223 - "add %0, %0, :lo12:%1\n" \ 224 - : "=r" (addr) : SYM_CONSTRAINT (&s)); \ 225 - addr; \ 226 - }) 227 - 228 202 #define __KVM_EXTABLE(from, to) \ 229 203 " .pushsection __kvm_ex_table, \"a\"\n" \ 230 204 " .align 3\n" \
+2
arch/arm64/include/asm/kvm_host.h
··· 770 770 #define kvm_vcpu_has_pmu(vcpu) \ 771 771 (test_bit(KVM_ARM_VCPU_PMU_V3, (vcpu)->arch.features)) 772 772 773 + int kvm_trng_call(struct kvm_vcpu *vcpu); 774 + 773 775 #endif /* __ARM64_KVM_HOST_H__ */
+20 -47
arch/arm64/include/asm/kvm_mmu.h
··· 73 73 .endm 74 74 75 75 /* 76 - * Convert a kernel image address to a PA 77 - * reg: kernel address to be converted in place 76 + * Convert a hypervisor VA to a PA 77 + * reg: hypervisor address to be converted in place 78 + * tmp: temporary register 79 + */ 80 + .macro hyp_pa reg, tmp 81 + ldr_l \tmp, hyp_physvirt_offset 82 + add \reg, \reg, \tmp 83 + .endm 84 + 85 + /* 86 + * Convert a hypervisor VA to a kernel image address 87 + * reg: hypervisor address to be converted in place 78 88 * tmp: temporary register 79 89 * 80 90 * The actual code generation takes place in kvm_get_kimage_voffset, and ··· 92 82 * perform the register allocation (kvm_get_kimage_voffset uses the 93 83 * specific registers encoded in the instructions). 94 84 */ 95 - .macro kimg_pa reg, tmp 85 + .macro hyp_kimg_va reg, tmp 86 + /* Convert hyp VA -> PA. */ 87 + hyp_pa \reg, \tmp 88 + 89 + /* Load kimage_voffset. */ 96 90 alternative_cb kvm_get_kimage_voffset 97 91 movz \tmp, #0 98 92 movk \tmp, #0, lsl #16 ··· 104 90 movk \tmp, #0, lsl #48 105 91 alternative_cb_end 106 92 107 - /* reg = __pa(reg) */ 108 - sub \reg, \reg, \tmp 109 - .endm 110 - 111 - /* 112 - * Convert a kernel image address to a hyp VA 113 - * reg: kernel address to be converted in place 114 - * tmp: temporary register 115 - * 116 - * The actual code generation takes place in kvm_get_kimage_voffset, and 117 - * the instructions below are only there to reserve the space and 118 - * perform the register allocation (kvm_update_kimg_phys_offset uses the 119 - * specific registers encoded in the instructions). 120 - */ 121 - .macro kimg_hyp_va reg, tmp 122 - alternative_cb kvm_update_kimg_phys_offset 123 - movz \tmp, #0 124 - movk \tmp, #0, lsl #16 125 - movk \tmp, #0, lsl #32 126 - movk \tmp, #0, lsl #48 127 - alternative_cb_end 128 - 129 - sub \reg, \reg, \tmp 130 - mov_q \tmp, PAGE_OFFSET 131 - orr \reg, \reg, \tmp 132 - kern_hyp_va \reg 93 + /* Convert PA -> kimg VA. */ 94 + add \reg, \reg, \tmp 133 95 .endm 134 96 135 97 #else ··· 119 129 void kvm_update_va_mask(struct alt_instr *alt, 120 130 __le32 *origptr, __le32 *updptr, int nr_inst); 121 131 void kvm_compute_layout(void); 132 + void kvm_apply_hyp_relocations(void); 122 133 123 134 static __always_inline unsigned long __kern_hyp_va(unsigned long v) 124 135 { ··· 134 143 } 135 144 136 145 #define kern_hyp_va(v) ((typeof(v))(__kern_hyp_va((unsigned long)(v)))) 137 - 138 - static __always_inline unsigned long __kimg_hyp_va(unsigned long v) 139 - { 140 - unsigned long offset; 141 - 142 - asm volatile(ALTERNATIVE_CB("movz %0, #0\n" 143 - "movk %0, #0, lsl #16\n" 144 - "movk %0, #0, lsl #32\n" 145 - "movk %0, #0, lsl #48\n", 146 - kvm_update_kimg_phys_offset) 147 - : "=r" (offset)); 148 - 149 - return __kern_hyp_va((v - offset) | PAGE_OFFSET); 150 - } 151 - 152 - #define kimg_fn_hyp_va(v) ((typeof(*v))(__kimg_hyp_va((unsigned long)(v)))) 153 - 154 - #define kimg_fn_ptr(x) (typeof(x) **)(x) 155 146 156 147 /* 157 148 * We currently support using a VM-specified IPA size. For backward
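The new ``hyp_pa`` / ``hyp_kimg_va`` macros are plain offset arithmetic: add ``hyp_physvirt_offset`` to turn a hyp VA into a PA, then add ``kimage_voffset`` to turn that PA into a kernel image address. A user-space sketch with invented base addresses (the real values are computed per boot, so these constants are purely illustrative):

```c
#include <stdint.h>

/* Invented layout for illustration only. */
#define HYP_VA_BASE  0x0000002000000000ULL
#define PA_BASE      0x0000000040000000ULL
#define KIMG_VA_BASE 0xffff800010000000ULL

/* Offsets stored as wrap-around (mod 2^64) differences, so a single
 * unsigned addition performs the conversion even when the offset is
 * "negative". */
#define HYP_PHYSVIRT_OFFSET (PA_BASE - HYP_VA_BASE)   /* hyp VA -> PA */
#define KIMAGE_VOFFSET      (KIMG_VA_BASE - PA_BASE)  /* PA -> kimg VA */

static uint64_t hyp_pa(uint64_t hyp_va)
{
    return hyp_va + HYP_PHYSVIRT_OFFSET;    /* mirrors the hyp_pa macro */
}

static uint64_t hyp_kimg_va(uint64_t hyp_va)
{
    return hyp_pa(hyp_va) + KIMAGE_VOFFSET; /* hyp VA -> PA -> kimg VA */
}
```

Composing the two additions is exactly what the ``hyp_kimg_va`` assembly macro does: first ``hyp_pa``, then the ``kimage_voffset`` add patched in via ``kvm_get_kimage_voffset``.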
+5
arch/arm64/include/asm/kvm_pgtable.h
··· 157 157 * If device attributes are not explicitly requested in @prot, then the 158 158 * mapping will be normal, cacheable. 159 159 * 160 + * Note that the update of a valid leaf PTE in this function will be aborted, 161 + * if it's trying to recreate the exact same mapping or only change the access 162 + * permissions. Instead, the vCPU will exit one more time from guest if still 163 + * needed and then go through the path of relaxing permissions. 164 + * 160 165 * Note that this function will both coalesce existing table entries and split 161 166 * existing block mappings, relying on page-faults to fault back areas outside 162 167 * of the new mapping lazily.
+1 -2
arch/arm64/include/asm/processor.h
··· 94 94 #endif /* CONFIG_ARM64_FORCE_52BIT */ 95 95 96 96 extern phys_addr_t arm64_dma_phys_limit; 97 - extern phys_addr_t arm64_dma32_phys_limit; 98 - #define ARCH_LOW_ADDRESS_LIMIT ((arm64_dma_phys_limit ? : arm64_dma32_phys_limit) - 1) 97 + #define ARCH_LOW_ADDRESS_LIMIT (arm64_dma_phys_limit - 1) 99 98 100 99 struct debug_info { 101 100 #ifdef CONFIG_HAVE_HW_BREAKPOINT
+2 -1
arch/arm64/include/asm/sections.h
··· 11 11 extern char __hibernate_exit_text_start[], __hibernate_exit_text_end[]; 12 12 extern char __hyp_idmap_text_start[], __hyp_idmap_text_end[]; 13 13 extern char __hyp_text_start[], __hyp_text_end[]; 14 - extern char __hyp_data_ro_after_init_start[], __hyp_data_ro_after_init_end[]; 14 + extern char __hyp_rodata_start[], __hyp_rodata_end[]; 15 + extern char __hyp_reloc_begin[], __hyp_reloc_end[]; 15 16 extern char __idmap_text_start[], __idmap_text_end[]; 16 17 extern char __initdata_begin[], __initdata_end[]; 17 18 extern char __inittext_begin[], __inittext_end[];
+3
arch/arm64/include/asm/sysreg.h
··· 846 846 847 847 #define ID_DFR0_PERFMON_SHIFT 24 848 848 849 + #define ID_DFR0_PERFMON_8_0 0x3 849 850 #define ID_DFR0_PERFMON_8_1 0x4 851 + #define ID_DFR0_PERFMON_8_4 0x5 852 + #define ID_DFR0_PERFMON_8_5 0x6 850 853 851 854 #define ID_ISAR4_SWP_FRAC_SHIFT 28 852 855 #define ID_ISAR4_PSR_M_SHIFT 24
+1 -1
arch/arm64/kernel/asm-offsets.c
··· 75 75 DEFINE(S_SDEI_TTBR1, offsetof(struct pt_regs, sdei_ttbr1)); 76 76 DEFINE(S_PMR_SAVE, offsetof(struct pt_regs, pmr_save)); 77 77 DEFINE(S_STACKFRAME, offsetof(struct pt_regs, stackframe)); 78 - DEFINE(S_FRAME_SIZE, sizeof(struct pt_regs)); 78 + DEFINE(PT_REGS_SIZE, sizeof(struct pt_regs)); 79 79 BLANK(); 80 80 #ifdef CONFIG_COMPAT 81 81 DEFINE(COMPAT_SIGFRAME_REGS_OFFSET, offsetof(struct compat_sigframe, uc.uc_mcontext.arm_r0));
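The constant renamed here comes from the asm-offsets technique: C code evaluates ``sizeof``/``offsetof`` expressions, and the results become immediates that assembly can use (the rename makes clear the value is the size of ``struct pt_regs``, not of a stack frame). A minimal user-space sketch of the idea; the struct below is a made-up stand-in, not the real arm64 ``pt_regs``:

```c
#include <stddef.h>

/* Toy register-save layout, illustrative only. */
struct toy_pt_regs {
    unsigned long regs[31];
    unsigned long sp;
    unsigned long pc;
    unsigned long pstate;
};

/* asm-offsets.c-style constants that assembly would consume: */
#define S_SP         offsetof(struct toy_pt_regs, sp)
#define S_PC         offsetof(struct toy_pt_regs, pc)
#define PT_REGS_SIZE sizeof(struct toy_pt_regs)
```

Entry code can then do the equivalent of ``sub sp, sp, #PT_REGS_SIZE`` and ``str x9, [sp, #S_PC]`` without the assembler ever seeing the C struct definition.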
+6 -6
arch/arm64/kernel/entry-ftrace.S
··· 35 35 */ 36 36 .macro ftrace_regs_entry, allregs=0 37 37 /* Make room for pt_regs, plus a callee frame */ 38 - sub sp, sp, #(S_FRAME_SIZE + 16) 38 + sub sp, sp, #(PT_REGS_SIZE + 16) 39 39 40 40 /* Save function arguments (and x9 for simplicity) */ 41 41 stp x0, x1, [sp, #S_X0] ··· 61 61 .endif 62 62 63 63 /* Save the callsite's SP and LR */ 64 - add x10, sp, #(S_FRAME_SIZE + 16) 64 + add x10, sp, #(PT_REGS_SIZE + 16) 65 65 stp x9, x10, [sp, #S_LR] 66 66 67 67 /* Save the PC after the ftrace callsite */ 68 68 str x30, [sp, #S_PC] 69 69 70 70 /* Create a frame record for the callsite above pt_regs */ 71 - stp x29, x9, [sp, #S_FRAME_SIZE] 72 - add x29, sp, #S_FRAME_SIZE 71 + stp x29, x9, [sp, #PT_REGS_SIZE] 72 + add x29, sp, #PT_REGS_SIZE 73 73 74 74 /* Create our frame record within pt_regs. */ 75 75 stp x29, x30, [sp, #S_STACKFRAME] ··· 120 120 ldr x9, [sp, #S_PC] 121 121 122 122 /* Restore the callsite's SP */ 123 - add sp, sp, #S_FRAME_SIZE + 16 123 + add sp, sp, #PT_REGS_SIZE + 16 124 124 125 125 ret x9 126 126 SYM_CODE_END(ftrace_common) ··· 130 130 ldr x0, [sp, #S_PC] 131 131 sub x0, x0, #AARCH64_INSN_SIZE // ip (callsite's BL insn) 132 132 add x1, sp, #S_LR // parent_ip (callsite's LR) 133 - ldr x2, [sp, #S_FRAME_SIZE] // parent fp (callsite's FP) 133 + ldr x2, [sp, #PT_REGS_SIZE] // parent fp (callsite's FP) 134 134 bl prepare_ftrace_return 135 135 b ftrace_common_return 136 136 SYM_CODE_END(ftrace_graph_caller)
+7 -7
arch/arm64/kernel/entry.S
··· 75 75 .endif 76 76 #endif 77 77 78 - sub sp, sp, #S_FRAME_SIZE 78 + sub sp, sp, #PT_REGS_SIZE 79 79 #ifdef CONFIG_VMAP_STACK 80 80 /* 81 81 * Test whether the SP has overflowed, without corrupting a GPR. ··· 96 96 * userspace, and can clobber EL0 registers to free up GPRs. 97 97 */ 98 98 99 - /* Stash the original SP (minus S_FRAME_SIZE) in tpidr_el0. */ 99 + /* Stash the original SP (minus PT_REGS_SIZE) in tpidr_el0. */ 100 100 msr tpidr_el0, x0 101 101 102 102 /* Recover the original x0 value and stash it in tpidrro_el0 */ ··· 253 253 254 254 scs_load tsk, x20 255 255 .else 256 - add x21, sp, #S_FRAME_SIZE 256 + add x21, sp, #PT_REGS_SIZE 257 257 get_current_task tsk 258 258 .endif /* \el == 0 */ 259 259 mrs x22, elr_el1 ··· 377 377 ldp x26, x27, [sp, #16 * 13] 378 378 ldp x28, x29, [sp, #16 * 14] 379 379 ldr lr, [sp, #S_LR] 380 - add sp, sp, #S_FRAME_SIZE // restore sp 380 + add sp, sp, #PT_REGS_SIZE // restore sp 381 381 382 382 .if \el == 0 383 383 alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0 ··· 580 580 581 581 /* 582 582 * Store the original GPRs to the new stack. The orginal SP (minus 583 - * S_FRAME_SIZE) was stashed in tpidr_el0 by kernel_ventry. 583 + * PT_REGS_SIZE) was stashed in tpidr_el0 by kernel_ventry. 584 584 */ 585 - sub sp, sp, #S_FRAME_SIZE 585 + sub sp, sp, #PT_REGS_SIZE 586 586 kernel_entry 1 587 587 mrs x0, tpidr_el0 588 - add x0, x0, #S_FRAME_SIZE 588 + add x0, x0, #PT_REGS_SIZE 589 589 str x0, [sp, #S_SP] 590 590 591 591 /* Stash the regs for handle_bad_stack */
-1
arch/arm64/kernel/image-vars.h
··· 64 64 /* Alternative callbacks for init-time patching of nVHE hyp code. */ 65 65 KVM_NVHE_ALIAS(kvm_patch_vector_branch); 66 66 KVM_NVHE_ALIAS(kvm_update_va_mask); 67 - KVM_NVHE_ALIAS(kvm_update_kimg_phys_offset); 68 67 KVM_NVHE_ALIAS(kvm_get_kimage_voffset); 69 68 70 69 /* Global kernel state accessed by nVHE hyp code. */
+2 -39
arch/arm64/kernel/perf_event.c
··· 23 23 #include <linux/platform_device.h> 24 24 #include <linux/sched_clock.h> 25 25 #include <linux/smp.h> 26 - #include <linux/nmi.h> 27 - #include <linux/cpufreq.h> 28 26 29 27 /* ARMv8 Cortex-A53 specific event types. */ 30 28 #define ARMV8_A53_PERFCTR_PREF_LINEFILL 0xC2 ··· 1248 1250 1249 1251 static int __init armv8_pmu_driver_init(void) 1250 1252 { 1251 - int ret; 1252 - 1253 1253 if (acpi_disabled) 1254 - ret = platform_driver_register(&armv8_pmu_driver); 1254 + return platform_driver_register(&armv8_pmu_driver); 1255 1255 else 1256 - ret = arm_pmu_acpi_probe(armv8_pmuv3_init); 1257 - 1258 - /* 1259 - * Try to re-initialize lockup detector after PMU init in 1260 - * case PMU events are triggered via NMIs. 1261 - */ 1262 - if (ret == 0 && arm_pmu_irq_is_nmi()) 1263 - lockup_detector_init(); 1264 - 1265 - return ret; 1256 + return arm_pmu_acpi_probe(armv8_pmuv3_init); 1266 1257 } 1267 1258 device_initcall(armv8_pmu_driver_init) 1268 1259 ··· 1309 1322 userpg->cap_user_time_zero = 1; 1310 1323 userpg->cap_user_time_short = 1; 1311 1324 } 1312 - 1313 - #ifdef CONFIG_HARDLOCKUP_DETECTOR_PERF 1314 - /* 1315 - * Safe maximum CPU frequency in case a particular platform doesn't implement 1316 - * cpufreq driver. Although, architecture doesn't put any restrictions on 1317 - * maximum frequency but 5 GHz seems to be safe maximum given the available 1318 - * Arm CPUs in the market which are clocked much less than 5 GHz. On the other 1319 - * hand, we can't make it much higher as it would lead to a large hard-lockup 1320 - * detection timeout on parts which are running slower (eg. 1GHz on 1321 - * Developerbox) and doesn't possess a cpufreq driver. 1322 - */ 1323 - #define SAFE_MAX_CPU_FREQ 5000000000UL // 5 GHz 1324 - u64 hw_nmi_get_sample_period(int watchdog_thresh) 1325 - { 1326 - unsigned int cpu = smp_processor_id(); 1327 - unsigned long max_cpu_freq; 1328 - 1329 - max_cpu_freq = cpufreq_get_hw_max_freq(cpu) * 1000UL; 1330 - if (!max_cpu_freq) 1331 - max_cpu_freq = SAFE_MAX_CPU_FREQ; 1332 - 1333 - return (u64)max_cpu_freq * watchdog_thresh; 1334 - } 1335 - #endif
+3 -3
arch/arm64/kernel/probes/kprobes_trampoline.S
··· 25 25 stp x24, x25, [sp, #S_X24] 26 26 stp x26, x27, [sp, #S_X26] 27 27 stp x28, x29, [sp, #S_X28] 28 - add x0, sp, #S_FRAME_SIZE 28 + add x0, sp, #PT_REGS_SIZE 29 29 stp lr, x0, [sp, #S_LR] 30 30 /* 31 31 * Construct a useful saved PSTATE ··· 62 62 .endm 63 63 64 64 SYM_CODE_START(kretprobe_trampoline) 65 - sub sp, sp, #S_FRAME_SIZE 65 + sub sp, sp, #PT_REGS_SIZE 66 66 67 67 save_all_base_regs 68 68 ··· 76 76 77 77 restore_all_base_regs 78 78 79 - add sp, sp, #S_FRAME_SIZE 79 + add sp, sp, #PT_REGS_SIZE 80 80 ret 81 81 82 82 SYM_CODE_END(kretprobe_trampoline)
-7
arch/arm64/kernel/signal.c
··· 914 914 asmlinkage void do_notify_resume(struct pt_regs *regs, 915 915 unsigned long thread_flags) 916 916 { 917 - /* 918 - * The assembly code enters us with IRQs off, but it hasn't 919 - * informed the tracing code of that for efficiency reasons. 920 - * Update the trace code with the current status. 921 - */ 922 - trace_hardirqs_off(); 923 - 924 917 do { 925 918 if (thread_flags & _TIF_NEED_RESCHED) { 926 919 /* Unmask Debug and SError for the next task */
+3 -1
arch/arm64/kernel/smp.c
··· 434 434 "CPU: CPUs started in inconsistent modes"); 435 435 else 436 436 pr_info("CPU: All CPU(s) started at EL1\n"); 437 - if (IS_ENABLED(CONFIG_KVM) && !is_kernel_in_hyp_mode()) 437 + if (IS_ENABLED(CONFIG_KVM) && !is_kernel_in_hyp_mode()) { 438 438 kvm_compute_layout(); 439 + kvm_apply_hyp_relocations(); 440 + } 439 441 } 440 442 441 443 void __init smp_cpus_done(unsigned int max_cpus)
+2 -8
arch/arm64/kernel/syscall.c
··· 9 9 10 10 #include <asm/daifflags.h> 11 11 #include <asm/debug-monitors.h> 12 + #include <asm/exception.h> 12 13 #include <asm/fpsimd.h> 13 14 #include <asm/syscall.h> 14 15 #include <asm/thread_info.h> ··· 166 165 if (!has_syscall_work(flags) && !IS_ENABLED(CONFIG_DEBUG_RSEQ)) { 167 166 local_daif_mask(); 168 167 flags = current_thread_info()->flags; 169 - if (!has_syscall_work(flags) && !(flags & _TIF_SINGLESTEP)) { 170 - /* 171 - * We're off to userspace, where interrupts are 172 - * always enabled after we restore the flags from 173 - * the SPSR. 174 - */ 175 - trace_hardirqs_on(); 168 + if (!has_syscall_work(flags) && !(flags & _TIF_SINGLESTEP)) 176 169 return; 177 - } 178 170 local_daif_restore(DAIF_PROCCTX); 179 171 } 180 172
+15 -3
arch/arm64/kernel/vmlinux.lds.S
··· 31 31 __stop___kvm_ex_table = .; 32 32 33 33 #define HYPERVISOR_DATA_SECTIONS \ 34 - HYP_SECTION_NAME(.data..ro_after_init) : { \ 35 - __hyp_data_ro_after_init_start = .; \ 34 + HYP_SECTION_NAME(.rodata) : { \ 35 + __hyp_rodata_start = .; \ 36 36 *(HYP_SECTION_NAME(.data..ro_after_init)) \ 37 - __hyp_data_ro_after_init_end = .; \ 37 + *(HYP_SECTION_NAME(.rodata)) \ 38 + __hyp_rodata_end = .; \ 38 39 } 39 40 40 41 #define HYPERVISOR_PERCPU_SECTION \ ··· 43 42 HYP_SECTION_NAME(.data..percpu) : { \ 44 43 *(HYP_SECTION_NAME(.data..percpu)) \ 45 44 } 45 + 46 + #define HYPERVISOR_RELOC_SECTION \ 47 + .hyp.reloc : ALIGN(4) { \ 48 + __hyp_reloc_begin = .; \ 49 + *(.hyp.reloc) \ 50 + __hyp_reloc_end = .; \ 51 + } 52 + 46 53 #else /* CONFIG_KVM */ 47 54 #define HYPERVISOR_EXTABLE 48 55 #define HYPERVISOR_DATA_SECTIONS 49 56 #define HYPERVISOR_PERCPU_SECTION 57 + #define HYPERVISOR_RELOC_SECTION 50 58 #endif 51 59 52 60 #define HYPERVISOR_TEXT \ ··· 225 215 226 216 PERCPU_SECTION(L1_CACHE_BYTES) 227 217 HYPERVISOR_PERCPU_SECTION 218 + 219 + HYPERVISOR_RELOC_SECTION 228 220 229 221 .rela.dyn : ALIGN(8) { 230 222 *(.rela .rela*)
+1 -1
arch/arm64/kvm/Makefile
··· 16 16 inject_fault.o va_layout.o handle_exit.o \ 17 17 guest.o debug.o reset.o sys_regs.o \ 18 18 vgic-sys-reg-v3.o fpsimd.o pmu.o \ 19 - arch_timer.o \ 19 + arch_timer.o trng.o\ 20 20 vgic/vgic.o vgic/vgic-init.o \ 21 21 vgic/vgic-irqfd.o vgic/vgic-v2.o \ 22 22 vgic/vgic-v3.o vgic/vgic-v4.o \
+3 -4
arch/arm64/kvm/arm.c
··· 1750 1750 goto out_err; 1751 1751 } 1752 1752 1753 - err = create_hyp_mappings(kvm_ksym_ref(__hyp_data_ro_after_init_start), 1754 - kvm_ksym_ref(__hyp_data_ro_after_init_end), 1755 - PAGE_HYP_RO); 1753 + err = create_hyp_mappings(kvm_ksym_ref(__hyp_rodata_start), 1754 + kvm_ksym_ref(__hyp_rodata_end), PAGE_HYP_RO); 1756 1755 if (err) { 1757 - kvm_err("Cannot map .hyp.data..ro_after_init section\n"); 1756 + kvm_err("Cannot map .hyp.rodata section\n"); 1758 1757 goto out_err; 1759 1758 } 1760 1759
+2 -2
arch/arm64/kvm/hyp/include/hyp/switch.h
··· 505 505 struct exception_table_entry *entry, *end; 506 506 unsigned long elr_el2 = read_sysreg(elr_el2); 507 507 508 - entry = hyp_symbol_addr(__start___kvm_ex_table); 509 - end = hyp_symbol_addr(__stop___kvm_ex_table); 508 + entry = &__start___kvm_ex_table; 509 + end = &__stop___kvm_ex_table; 510 510 511 511 while (entry < end) { 512 512 addr = (unsigned long)&entry->insn + entry->insn;
+2
arch/arm64/kvm/hyp/nvhe/.gitignore
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 + gen-hyprel 2 3 hyp.lds 4 + hyp-reloc.S
+28 -5
arch/arm64/kvm/hyp/nvhe/Makefile
··· 3 3 # Makefile for Kernel-based Virtual Machine module, HYP/nVHE part 4 4 # 5 5 6 - asflags-y := -D__KVM_NVHE_HYPERVISOR__ 7 - ccflags-y := -D__KVM_NVHE_HYPERVISOR__ 6 + asflags-y := -D__KVM_NVHE_HYPERVISOR__ -D__DISABLE_EXPORTS 7 + ccflags-y := -D__KVM_NVHE_HYPERVISOR__ -D__DISABLE_EXPORTS 8 + 9 + hostprogs := gen-hyprel 10 + HOST_EXTRACFLAGS += -I$(objtree)/include 8 11 9 12 obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o \ 10 13 hyp-main.o hyp-smp.o psci-relay.o ··· 22 19 23 20 hyp-obj := $(patsubst %.o,%.nvhe.o,$(obj-y)) 24 21 obj-y := kvm_nvhe.o 25 - extra-y := $(hyp-obj) kvm_nvhe.tmp.o hyp.lds 22 + extra-y := $(hyp-obj) kvm_nvhe.tmp.o kvm_nvhe.rel.o hyp.lds hyp-reloc.S hyp-reloc.o 26 23 27 24 # 1) Compile all source files to `.nvhe.o` object files. The file extension 28 25 # avoids file name clashes for files shared with VHE. ··· 45 42 $(obj)/kvm_nvhe.tmp.o: $(obj)/hyp.lds $(addprefix $(obj)/,$(hyp-obj)) FORCE 46 43 $(call if_changed,ld) 47 44 48 - # 4) Produce the final 'kvm_nvhe.o', ready to be linked into 'vmlinux'. 45 + # 4) Generate list of hyp code/data positions that need to be relocated at 46 + # runtime. Because the hypervisor is part of the kernel binary, relocations 47 + # produce a kernel VA. We enumerate relocations targeting hyp at build time 48 + # and convert the kernel VAs at those positions to hyp VAs. 49 + $(obj)/hyp-reloc.S: $(obj)/kvm_nvhe.tmp.o $(obj)/gen-hyprel 50 + $(call if_changed,hyprel) 51 + 52 + # 5) Compile hyp-reloc.S and link it into the existing partially linked object. 53 + # The object file now contains a section with pointers to hyp positions that 54 + # will contain kernel VAs at runtime. These pointers have relocations on them 55 + # so that they get updated as the hyp object is linked into `vmlinux`. 
56 + LDFLAGS_kvm_nvhe.rel.o := -r 57 + $(obj)/kvm_nvhe.rel.o: $(obj)/kvm_nvhe.tmp.o $(obj)/hyp-reloc.o FORCE 58 + $(call if_changed,ld) 59 + 60 + # 6) Produce the final 'kvm_nvhe.o', ready to be linked into 'vmlinux'. 49 61 # Prefixes names of ELF symbols with '__kvm_nvhe_'. 50 - $(obj)/kvm_nvhe.o: $(obj)/kvm_nvhe.tmp.o FORCE 62 + $(obj)/kvm_nvhe.o: $(obj)/kvm_nvhe.rel.o FORCE 51 63 $(call if_changed,hypcopy) 64 + 65 + # The HYPREL command calls `gen-hyprel` to generate an assembly file with 66 + # a list of relocations targeting hyp code/data. 67 + quiet_cmd_hyprel = HYPREL $@ 68 + cmd_hyprel = $(obj)/gen-hyprel $< > $@ 52 69 53 70 # The HYPCOPY command uses `objcopy` to prefix all ELF symbol names 54 71 # to avoid clashes with VHE code/data.
+438
arch/arm64/kvm/hyp/nvhe/gen-hyprel.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * Copyright (C) 2020 - Google LLC 4 + * Author: David Brazdil <dbrazdil@google.com> 5 + * 6 + * Generates relocation information used by the kernel to convert 7 + * absolute addresses in hyp data from kernel VAs to hyp VAs. 8 + * 9 + * This is necessary because hyp code is linked into the same binary 10 + * as the kernel but executes under different memory mappings. 11 + * If the compiler used absolute addressing, those addresses need to 12 + * be converted before they are used by hyp code. 13 + * 14 + * The input of this program is the relocatable ELF object containing 15 + * all hyp code/data, not yet linked into vmlinux. Hyp section names 16 + * should have been prefixed with `.hyp` at this point. 17 + * 18 + * The output (printed to stdout) is an assembly file containing 19 + * an array of 32-bit integers and static relocations that instruct 20 + * the linker of `vmlinux` to populate the array entries with offsets 21 + * to positions in the kernel binary containing VAs used by hyp code. 22 + * 23 + * Note that dynamic relocations could be used for the same purpose. 24 + * However, those are only generated if CONFIG_RELOCATABLE=y. 25 + */ 26 + 27 + #include <elf.h> 28 + #include <endian.h> 29 + #include <errno.h> 30 + #include <fcntl.h> 31 + #include <stdbool.h> 32 + #include <stdio.h> 33 + #include <stdlib.h> 34 + #include <string.h> 35 + #include <sys/mman.h> 36 + #include <sys/types.h> 37 + #include <sys/stat.h> 38 + #include <unistd.h> 39 + 40 + #include <generated/autoconf.h> 41 + 42 + #define HYP_SECTION_PREFIX ".hyp" 43 + #define HYP_RELOC_SECTION ".hyp.reloc" 44 + #define HYP_SECTION_SYMBOL_PREFIX "__hyp_section_" 45 + 46 + /* 47 + * AArch64 relocation type constants. 48 + * Included in case these are not defined in the host toolchain. 
49 + */ 50 + #ifndef R_AARCH64_ABS64 51 + #define R_AARCH64_ABS64 257 52 + #endif 53 + #ifndef R_AARCH64_LD_PREL_LO19 54 + #define R_AARCH64_LD_PREL_LO19 273 55 + #endif 56 + #ifndef R_AARCH64_ADR_PREL_LO21 57 + #define R_AARCH64_ADR_PREL_LO21 274 58 + #endif 59 + #ifndef R_AARCH64_ADR_PREL_PG_HI21 60 + #define R_AARCH64_ADR_PREL_PG_HI21 275 61 + #endif 62 + #ifndef R_AARCH64_ADR_PREL_PG_HI21_NC 63 + #define R_AARCH64_ADR_PREL_PG_HI21_NC 276 64 + #endif 65 + #ifndef R_AARCH64_ADD_ABS_LO12_NC 66 + #define R_AARCH64_ADD_ABS_LO12_NC 277 67 + #endif 68 + #ifndef R_AARCH64_LDST8_ABS_LO12_NC 69 + #define R_AARCH64_LDST8_ABS_LO12_NC 278 70 + #endif 71 + #ifndef R_AARCH64_TSTBR14 72 + #define R_AARCH64_TSTBR14 279 73 + #endif 74 + #ifndef R_AARCH64_CONDBR19 75 + #define R_AARCH64_CONDBR19 280 76 + #endif 77 + #ifndef R_AARCH64_JUMP26 78 + #define R_AARCH64_JUMP26 282 79 + #endif 80 + #ifndef R_AARCH64_CALL26 81 + #define R_AARCH64_CALL26 283 82 + #endif 83 + #ifndef R_AARCH64_LDST16_ABS_LO12_NC 84 + #define R_AARCH64_LDST16_ABS_LO12_NC 284 85 + #endif 86 + #ifndef R_AARCH64_LDST32_ABS_LO12_NC 87 + #define R_AARCH64_LDST32_ABS_LO12_NC 285 88 + #endif 89 + #ifndef R_AARCH64_LDST64_ABS_LO12_NC 90 + #define R_AARCH64_LDST64_ABS_LO12_NC 286 91 + #endif 92 + #ifndef R_AARCH64_MOVW_PREL_G0 93 + #define R_AARCH64_MOVW_PREL_G0 287 94 + #endif 95 + #ifndef R_AARCH64_MOVW_PREL_G0_NC 96 + #define R_AARCH64_MOVW_PREL_G0_NC 288 97 + #endif 98 + #ifndef R_AARCH64_MOVW_PREL_G1 99 + #define R_AARCH64_MOVW_PREL_G1 289 100 + #endif 101 + #ifndef R_AARCH64_MOVW_PREL_G1_NC 102 + #define R_AARCH64_MOVW_PREL_G1_NC 290 103 + #endif 104 + #ifndef R_AARCH64_MOVW_PREL_G2 105 + #define R_AARCH64_MOVW_PREL_G2 291 106 + #endif 107 + #ifndef R_AARCH64_MOVW_PREL_G2_NC 108 + #define R_AARCH64_MOVW_PREL_G2_NC 292 109 + #endif 110 + #ifndef R_AARCH64_MOVW_PREL_G3 111 + #define R_AARCH64_MOVW_PREL_G3 293 112 + #endif 113 + #ifndef R_AARCH64_LDST128_ABS_LO12_NC 114 + #define R_AARCH64_LDST128_ABS_LO12_NC 299 
115 + #endif 116 + 117 + /* Global state of the processed ELF. */ 118 + static struct { 119 + const char *path; 120 + char *begin; 121 + size_t size; 122 + Elf64_Ehdr *ehdr; 123 + Elf64_Shdr *sh_table; 124 + const char *sh_string; 125 + } elf; 126 + 127 + #if defined(CONFIG_CPU_LITTLE_ENDIAN) 128 + 129 + #define elf16toh(x) le16toh(x) 130 + #define elf32toh(x) le32toh(x) 131 + #define elf64toh(x) le64toh(x) 132 + 133 + #define ELFENDIAN ELFDATA2LSB 134 + 135 + #elif defined(CONFIG_CPU_BIG_ENDIAN) 136 + 137 + #define elf16toh(x) be16toh(x) 138 + #define elf32toh(x) be32toh(x) 139 + #define elf64toh(x) be64toh(x) 140 + 141 + #define ELFENDIAN ELFDATA2MSB 142 + 143 + #else 144 + 145 + #error PDP-endian sadly unsupported... 146 + 147 + #endif 148 + 149 + #define fatal_error(fmt, ...) \ 150 + ({ \ 151 + fprintf(stderr, "error: %s: " fmt "\n", \ 152 + elf.path, ## __VA_ARGS__); \ 153 + exit(EXIT_FAILURE); \ 154 + __builtin_unreachable(); \ 155 + }) 156 + 157 + #define fatal_perror(msg) \ 158 + ({ \ 159 + fprintf(stderr, "error: %s: " msg ": %s\n", \ 160 + elf.path, strerror(errno)); \ 161 + exit(EXIT_FAILURE); \ 162 + __builtin_unreachable(); \ 163 + }) 164 + 165 + #define assert_op(lhs, rhs, fmt, op) \ 166 + ({ \ 167 + typeof(lhs) _lhs = (lhs); \ 168 + typeof(rhs) _rhs = (rhs); \ 169 + \ 170 + if (!(_lhs op _rhs)) { \ 171 + fatal_error("assertion " #lhs " " #op " " #rhs \ 172 + " failed (lhs=" fmt ", rhs=" fmt \ 173 + ", line=%d)", _lhs, _rhs, __LINE__); \ 174 + } \ 175 + }) 176 + 177 + #define assert_eq(lhs, rhs, fmt) assert_op(lhs, rhs, fmt, ==) 178 + #define assert_ne(lhs, rhs, fmt) assert_op(lhs, rhs, fmt, !=) 179 + #define assert_lt(lhs, rhs, fmt) assert_op(lhs, rhs, fmt, <) 180 + #define assert_ge(lhs, rhs, fmt) assert_op(lhs, rhs, fmt, >=) 181 + 182 + /* 183 + * Return a pointer of a given type at a given offset from 184 + * the beginning of the ELF file. 
185 + */ 186 + #define elf_ptr(type, off) ((type *)(elf.begin + (off))) 187 + 188 + /* Iterate over all sections in the ELF. */ 189 + #define for_each_section(var) \ 190 + for (var = elf.sh_table; var < elf.sh_table + elf16toh(elf.ehdr->e_shnum); ++var) 191 + 192 + /* Iterate over all Elf64_Rela relocations in a given section. */ 193 + #define for_each_rela(shdr, var) \ 194 + for (var = elf_ptr(Elf64_Rela, elf64toh(shdr->sh_offset)); \ 195 + var < elf_ptr(Elf64_Rela, elf64toh(shdr->sh_offset) + elf64toh(shdr->sh_size)); var++) 196 + 197 + /* True if a string starts with a given prefix. */ 198 + static inline bool starts_with(const char *str, const char *prefix) 199 + { 200 + return memcmp(str, prefix, strlen(prefix)) == 0; 201 + } 202 + 203 + /* Returns a string containing the name of a given section. */ 204 + static inline const char *section_name(Elf64_Shdr *shdr) 205 + { 206 + return elf.sh_string + elf32toh(shdr->sh_name); 207 + } 208 + 209 + /* Returns a pointer to the first byte of section data. */ 210 + static inline const char *section_begin(Elf64_Shdr *shdr) 211 + { 212 + return elf_ptr(char, elf64toh(shdr->sh_offset)); 213 + } 214 + 215 + /* Find a section by its offset from the beginning of the file. */ 216 + static inline Elf64_Shdr *section_by_off(Elf64_Off off) 217 + { 218 + assert_ne(off, 0UL, "%lu"); 219 + return elf_ptr(Elf64_Shdr, off); 220 + } 221 + 222 + /* Find a section by its index. */ 223 + static inline Elf64_Shdr *section_by_idx(uint16_t idx) 224 + { 225 + assert_ne(idx, SHN_UNDEF, "%u"); 226 + return &elf.sh_table[idx]; 227 + } 228 + 229 + /* 230 + * Memory-map the given ELF file, perform sanity checks, and 231 + * populate global state. 232 + */ 233 + static void init_elf(const char *path) 234 + { 235 + int fd, ret; 236 + struct stat stat; 237 + 238 + /* Store path in the global struct for error printing. */ 239 + elf.path = path; 240 + 241 + /* Open the ELF file. 
*/ 242 + fd = open(path, O_RDONLY); 243 + if (fd < 0) 244 + fatal_perror("Could not open ELF file"); 245 + 246 + /* Get status of ELF file to obtain its size. */ 247 + ret = fstat(fd, &stat); 248 + if (ret < 0) { 249 + close(fd); 250 + fatal_perror("Could not get status of ELF file"); 251 + } 252 + 253 + /* mmap() the entire ELF file read-only at an arbitrary address. */ 254 + elf.begin = mmap(0, stat.st_size, PROT_READ, MAP_PRIVATE, fd, 0); 255 + if (elf.begin == MAP_FAILED) { 256 + close(fd); 257 + fatal_perror("Could not mmap ELF file"); 258 + } 259 + 260 + /* mmap() was successful, close the FD. */ 261 + close(fd); 262 + 263 + /* Get pointer to the ELF header. */ 264 + assert_ge(stat.st_size, sizeof(*elf.ehdr), "%lu"); 265 + elf.ehdr = elf_ptr(Elf64_Ehdr, 0); 266 + 267 + /* Check the ELF magic. */ 268 + assert_eq(elf.ehdr->e_ident[EI_MAG0], ELFMAG0, "0x%x"); 269 + assert_eq(elf.ehdr->e_ident[EI_MAG1], ELFMAG1, "0x%x"); 270 + assert_eq(elf.ehdr->e_ident[EI_MAG2], ELFMAG2, "0x%x"); 271 + assert_eq(elf.ehdr->e_ident[EI_MAG3], ELFMAG3, "0x%x"); 272 + 273 + /* Sanity check that this is an ELF64 relocatable object for AArch64. */ 274 + assert_eq(elf.ehdr->e_ident[EI_CLASS], ELFCLASS64, "%u"); 275 + assert_eq(elf.ehdr->e_ident[EI_DATA], ELFENDIAN, "%u"); 276 + assert_eq(elf16toh(elf.ehdr->e_type), ET_REL, "%u"); 277 + assert_eq(elf16toh(elf.ehdr->e_machine), EM_AARCH64, "%u"); 278 + 279 + /* Populate fields of the global struct. */ 280 + elf.sh_table = section_by_off(elf64toh(elf.ehdr->e_shoff)); 281 + elf.sh_string = section_begin(section_by_idx(elf16toh(elf.ehdr->e_shstrndx))); 282 + } 283 + 284 + /* Print the prologue of the output ASM file. */ 285 + static void emit_prologue(void) 286 + { 287 + printf(".data\n" 288 + ".pushsection " HYP_RELOC_SECTION ", \"a\"\n"); 289 + } 290 + 291 + /* Print ASM statements needed as a prologue to a processed hyp section. 
*/ 292 + static void emit_section_prologue(const char *sh_orig_name) 293 + { 294 + /* Declare the hyp section symbol. */ 295 + printf(".global %s%s\n", HYP_SECTION_SYMBOL_PREFIX, sh_orig_name); 296 + } 297 + 298 + /* 299 + * Print ASM statements to create a hyp relocation entry for a given 300 + * R_AARCH64_ABS64 relocation. 301 + * 302 + * The linker of vmlinux will populate the position given by `rela` with 303 + * an absolute 64-bit kernel VA. If the kernel is relocatable, it will 304 + * also generate a dynamic relocation entry so that the kernel can shift 305 + * the address at runtime for KASLR. 306 + * 307 + * Emit a 32-bit offset from the current address to the position given 308 + * by `rela`. This way the kernel can iterate over all kernel VAs used 309 + * by hyp at runtime and convert them to hyp VAs. However, that offset 310 + * will not be known until linking of `vmlinux`, so emit a PREL32 311 + * relocation referencing a symbol that the hyp linker script put at 312 + * the beginning of the relocated section + the offset from `rela`. 313 + */ 314 + static void emit_rela_abs64(Elf64_Rela *rela, const char *sh_orig_name) 315 + { 316 + /* Offset of this reloc from the beginning of HYP_RELOC_SECTION. */ 317 + static size_t reloc_offset; 318 + 319 + /* Create storage for the 32-bit offset. */ 320 + printf(".word 0\n"); 321 + 322 + /* 323 + * Create a PREL32 relocation which instructs the linker of `vmlinux` 324 + * to insert offset to position <base> + <offset>, where <base> is 325 + * a symbol at the beginning of the relocated section, and <offset> 326 + * is `rela->r_offset`. 327 + */ 328 + printf(".reloc %lu, R_AARCH64_PREL32, %s%s + 0x%lx\n", 329 + reloc_offset, HYP_SECTION_SYMBOL_PREFIX, sh_orig_name, 330 + elf64toh(rela->r_offset)); 331 + 332 + reloc_offset += 4; 333 + } 334 + 335 + /* Print the epilogue of the output ASM file. 
*/ 336 + static void emit_epilogue(void) 337 + { 338 + printf(".popsection\n"); 339 + } 340 + 341 + /* 342 + * Iterate over all RELA relocations in a given section and emit 343 + * hyp relocation data for all absolute addresses in hyp code/data. 344 + * 345 + * Static relocations that generate PC-relative-addressing are ignored. 346 + * Failure is reported for unexpected relocation types. 347 + */ 348 + static void emit_rela_section(Elf64_Shdr *sh_rela) 349 + { 350 + Elf64_Shdr *sh_orig = &elf.sh_table[elf32toh(sh_rela->sh_info)]; 351 + const char *sh_orig_name = section_name(sh_orig); 352 + Elf64_Rela *rela; 353 + 354 + /* Skip all non-hyp sections. */ 355 + if (!starts_with(sh_orig_name, HYP_SECTION_PREFIX)) 356 + return; 357 + 358 + emit_section_prologue(sh_orig_name); 359 + 360 + for_each_rela(sh_rela, rela) { 361 + uint32_t type = (uint32_t)elf64toh(rela->r_info); 362 + 363 + /* Check that rela points inside the relocated section. */ 364 + assert_lt(elf64toh(rela->r_offset), elf64toh(sh_orig->sh_size), "0x%lx"); 365 + 366 + switch (type) { 367 + /* 368 + * Data relocations to generate absolute addressing. 369 + * Emit a hyp relocation. 370 + */ 371 + case R_AARCH64_ABS64: 372 + emit_rela_abs64(rela, sh_orig_name); 373 + break; 374 + /* Allow relocations to generate PC-relative addressing. */ 375 + case R_AARCH64_LD_PREL_LO19: 376 + case R_AARCH64_ADR_PREL_LO21: 377 + case R_AARCH64_ADR_PREL_PG_HI21: 378 + case R_AARCH64_ADR_PREL_PG_HI21_NC: 379 + case R_AARCH64_ADD_ABS_LO12_NC: 380 + case R_AARCH64_LDST8_ABS_LO12_NC: 381 + case R_AARCH64_LDST16_ABS_LO12_NC: 382 + case R_AARCH64_LDST32_ABS_LO12_NC: 383 + case R_AARCH64_LDST64_ABS_LO12_NC: 384 + case R_AARCH64_LDST128_ABS_LO12_NC: 385 + break; 386 + /* Allow relative relocations for control-flow instructions. */ 387 + case R_AARCH64_TSTBR14: 388 + case R_AARCH64_CONDBR19: 389 + case R_AARCH64_JUMP26: 390 + case R_AARCH64_CALL26: 391 + break; 392 + /* Allow group relocations to create PC-relative offset inline. 
*/ 393 + case R_AARCH64_MOVW_PREL_G0: 394 + case R_AARCH64_MOVW_PREL_G0_NC: 395 + case R_AARCH64_MOVW_PREL_G1: 396 + case R_AARCH64_MOVW_PREL_G1_NC: 397 + case R_AARCH64_MOVW_PREL_G2: 398 + case R_AARCH64_MOVW_PREL_G2_NC: 399 + case R_AARCH64_MOVW_PREL_G3: 400 + break; 401 + default: 402 + fatal_error("Unexpected RELA type %u", type); 403 + } 404 + } 405 + } 406 + 407 + /* Iterate over all sections and emit hyp relocation data for RELA sections. */ 408 + static void emit_all_relocs(void) 409 + { 410 + Elf64_Shdr *shdr; 411 + 412 + for_each_section(shdr) { 413 + switch (elf32toh(shdr->sh_type)) { 414 + case SHT_REL: 415 + fatal_error("Unexpected SHT_REL section \"%s\"", 416 + section_name(shdr)); 417 + case SHT_RELA: 418 + emit_rela_section(shdr); 419 + break; 420 + } 421 + } 422 + } 423 + 424 + int main(int argc, const char **argv) 425 + { 426 + if (argc != 2) { 427 + fprintf(stderr, "Usage: %s <elf_input>\n", argv[0]); 428 + return EXIT_FAILURE; 429 + } 430 + 431 + init_elf(argv[1]); 432 + 433 + emit_prologue(); 434 + emit_all_relocs(); 435 + emit_epilogue(); 436 + 437 + return EXIT_SUCCESS; 438 + }
+15 -14
arch/arm64/kvm/hyp/nvhe/host.S
··· 74 74 * void __noreturn __hyp_do_panic(bool restore_host, u64 spsr, u64 elr, u64 par); 75 75 */ 76 76 SYM_FUNC_START(__hyp_do_panic) 77 - /* Load the format arguments into x1-7 */ 78 - mov x6, x3 79 - get_vcpu_ptr x7, x3 80 - 81 - mrs x3, esr_el2 82 - mrs x4, far_el2 83 - mrs x5, hpfar_el2 84 - 85 77 /* Prepare and exit to the host's panic function. */ 86 78 mov lr, #(PSR_F_BIT | PSR_I_BIT | PSR_A_BIT | PSR_D_BIT |\ 87 79 PSR_MODE_EL1h) 88 80 msr spsr_el2, lr 89 81 ldr lr, =panic 82 + hyp_kimg_va lr, x6 90 83 msr elr_el2, lr 91 84 92 - /* 93 - * Set the panic format string and enter the host, conditionally 94 - * restoring the host context. 95 - */ 85 + /* Set the panic format string. Use the, now free, LR as scratch. */ 86 + ldr lr, =__hyp_panic_string 87 + hyp_kimg_va lr, x6 88 + 89 + /* Load the format arguments into x1-7. */ 90 + mov x6, x3 91 + get_vcpu_ptr x7, x3 92 + mrs x3, esr_el2 93 + mrs x4, far_el2 94 + mrs x5, hpfar_el2 95 + 96 + /* Enter the host, conditionally restoring the host context. */ 96 97 cmp x0, xzr 97 - ldr x0, =__hyp_panic_string 98 + mov x0, lr 98 99 b.eq __host_enter_without_restoring 99 100 b __host_enter_for_panic 100 101 SYM_FUNC_END(__hyp_do_panic) ··· 125 124 * Preserve x0-x4, which may contain stub parameters. 126 125 */ 127 126 ldr x5, =__kvm_handle_stub_hvc 128 - kimg_pa x5, x6 127 + hyp_pa x5, x6 129 128 br x5 130 129 .L__vect_end\@: 131 130 .if ((.L__vect_end\@ - .L__vect_start\@) > 0x80)
+4 -13
arch/arm64/kvm/hyp/nvhe/hyp-init.S
··· 18 18 #include <asm/virt.h> 19 19 20 20 .text 21 - .pushsection .hyp.idmap.text, "ax" 21 + .pushsection .idmap.text, "ax" 22 22 23 23 .align 11 24 24 ··· 57 57 cmp x0, #HVC_STUB_HCALL_NR 58 58 b.lo __kvm_handle_stub_hvc 59 59 60 - // We only actively check bits [24:31], and everything 61 - // else has to be zero, which we check at build time. 62 - #if (KVM_HOST_SMCCC_FUNC(__kvm_hyp_init) & 0xFFFFFFFF00FFFFFF) 63 - #error Unexpected __KVM_HOST_SMCCC_FUNC___kvm_hyp_init value 64 - #endif 60 + mov x3, #KVM_HOST_SMCCC_FUNC(__kvm_hyp_init) 61 + cmp x0, x3 62 + b.eq 1f 65 63 66 - ror x0, x0, #24 67 - eor x0, x0, #((KVM_HOST_SMCCC_FUNC(__kvm_hyp_init) >> 24) & 0xF) 68 - ror x0, x0, #4 69 - eor x0, x0, #((KVM_HOST_SMCCC_FUNC(__kvm_hyp_init) >> 28) & 0xF) 70 - cbz x0, 1f 71 64 mov x0, #SMCCC_RET_NOT_SUPPORTED 72 65 eret 73 66 ··· 134 141 135 142 /* Set the host vector */ 136 143 ldr x0, =__kvm_hyp_host_vector 137 - kimg_hyp_va x0, x1 138 144 msr vbar_el2, x0 139 145 140 146 ret ··· 192 200 /* Leave idmap. */ 193 201 mov x0, x29 194 202 ldr x1, =kvm_host_psci_cpu_entry 195 - kimg_hyp_va x1, x2 196 203 br x1 197 204 SYM_CODE_END(__kvm_hyp_init_cpu) 198 205
+4 -7
arch/arm64/kvm/hyp/nvhe/hyp-main.c
··· 108 108 109 109 typedef void (*hcall_t)(struct kvm_cpu_context *); 110 110 111 - #define HANDLE_FUNC(x) [__KVM_HOST_SMCCC_FUNC_##x] = kimg_fn_ptr(handle_##x) 111 + #define HANDLE_FUNC(x) [__KVM_HOST_SMCCC_FUNC_##x] = (hcall_t)handle_##x 112 112 113 - static const hcall_t *host_hcall[] = { 113 + static const hcall_t host_hcall[] = { 114 114 HANDLE_FUNC(__kvm_vcpu_run), 115 115 HANDLE_FUNC(__kvm_flush_vm_context), 116 116 HANDLE_FUNC(__kvm_tlb_flush_vmid_ipa), ··· 130 130 static void handle_host_hcall(struct kvm_cpu_context *host_ctxt) 131 131 { 132 132 DECLARE_REG(unsigned long, id, host_ctxt, 0); 133 - const hcall_t *kfn; 134 133 hcall_t hfn; 135 134 136 135 id -= KVM_HOST_SMCCC_ID(0); ··· 137 138 if (unlikely(id >= ARRAY_SIZE(host_hcall))) 138 139 goto inval; 139 140 140 - kfn = host_hcall[id]; 141 - if (unlikely(!kfn)) 141 + hfn = host_hcall[id]; 142 + if (unlikely(!hfn)) 142 143 goto inval; 143 144 144 145 cpu_reg(host_ctxt, 0) = SMCCC_RET_SUCCESS; 145 - 146 - hfn = kimg_fn_hyp_va(kfn); 147 146 hfn(host_ctxt); 148 147 149 148 return;
+2 -2
arch/arm64/kvm/hyp/nvhe/hyp-smp.c
··· 33 33 if (cpu >= ARRAY_SIZE(kvm_arm_hyp_percpu_base)) 34 34 hyp_panic(); 35 35 36 - cpu_base_array = (unsigned long *)hyp_symbol_addr(kvm_arm_hyp_percpu_base); 36 + cpu_base_array = (unsigned long *)&kvm_arm_hyp_percpu_base; 37 37 this_cpu_base = kern_hyp_va(cpu_base_array[cpu]); 38 - elf_base = (unsigned long)hyp_symbol_addr(__per_cpu_start); 38 + elf_base = (unsigned long)&__per_cpu_start; 39 39 return this_cpu_base - elf_base; 40 40 }
+6 -3
arch/arm64/kvm/hyp/nvhe/hyp.lds.S
··· 12 12 #include <asm/memory.h> 13 13 14 14 SECTIONS { 15 + HYP_SECTION(.idmap.text) 15 16 HYP_SECTION(.text) 17 + HYP_SECTION(.data..ro_after_init) 18 + HYP_SECTION(.rodata) 19 + 16 20 /* 17 21 * .hyp..data..percpu needs to be page aligned to maintain the same 18 22 * alignment for when linking into vmlinux. 19 23 */ 20 24 . = ALIGN(PAGE_SIZE); 21 - HYP_SECTION_NAME(.data..percpu) : { 25 + BEGIN_HYP_SECTION(.data..percpu) 22 26 PERCPU_INPUT(L1_CACHE_BYTES) 23 - } 24 - HYP_SECTION(.data..ro_after_init) 27 + END_HYP_SECTION 25 28 }
+12 -12
arch/arm64/kvm/hyp/nvhe/psci-relay.c
··· 128 128 if (cpu_id == INVALID_CPU_ID) 129 129 return PSCI_RET_INVALID_PARAMS; 130 130 131 - boot_args = per_cpu_ptr(hyp_symbol_addr(cpu_on_args), cpu_id); 132 - init_params = per_cpu_ptr(hyp_symbol_addr(kvm_init_params), cpu_id); 131 + boot_args = per_cpu_ptr(&cpu_on_args, cpu_id); 132 + init_params = per_cpu_ptr(&kvm_init_params, cpu_id); 133 133 134 134 /* Check if the target CPU is already being booted. */ 135 135 if (!try_acquire_boot_args(boot_args)) ··· 140 140 wmb(); 141 141 142 142 ret = psci_call(func_id, mpidr, 143 - __hyp_pa(hyp_symbol_addr(kvm_hyp_cpu_entry)), 143 + __hyp_pa(&kvm_hyp_cpu_entry), 144 144 __hyp_pa(init_params)); 145 145 146 146 /* If successful, the lock will be released by the target CPU. */ ··· 159 159 struct psci_boot_args *boot_args; 160 160 struct kvm_nvhe_init_params *init_params; 161 161 162 - boot_args = this_cpu_ptr(hyp_symbol_addr(suspend_args)); 163 - init_params = this_cpu_ptr(hyp_symbol_addr(kvm_init_params)); 162 + boot_args = this_cpu_ptr(&suspend_args); 163 + init_params = this_cpu_ptr(&kvm_init_params); 164 164 165 165 /* 166 166 * No need to acquire a lock before writing to boot_args because a core ··· 174 174 * point if it is a deep sleep state. 175 175 */ 176 176 return psci_call(func_id, power_state, 177 - __hyp_pa(hyp_symbol_addr(kvm_hyp_cpu_resume)), 177 + __hyp_pa(&kvm_hyp_cpu_resume), 178 178 __hyp_pa(init_params)); 179 179 } 180 180 ··· 186 186 struct psci_boot_args *boot_args; 187 187 struct kvm_nvhe_init_params *init_params; 188 188 189 - boot_args = this_cpu_ptr(hyp_symbol_addr(suspend_args)); 190 - init_params = this_cpu_ptr(hyp_symbol_addr(kvm_init_params)); 189 + boot_args = this_cpu_ptr(&suspend_args); 190 + init_params = this_cpu_ptr(&kvm_init_params); 191 191 192 192 /* 193 193 * No need to acquire a lock before writing to boot_args because a core ··· 198 198 199 199 /* Will only return on error. 
*/ 200 200 return psci_call(func_id, 201 - __hyp_pa(hyp_symbol_addr(kvm_hyp_cpu_resume)), 201 + __hyp_pa(&kvm_hyp_cpu_resume), 202 202 __hyp_pa(init_params), 0); 203 203 } 204 204 ··· 207 207 struct psci_boot_args *boot_args; 208 208 struct kvm_cpu_context *host_ctxt; 209 209 210 - host_ctxt = &this_cpu_ptr(hyp_symbol_addr(kvm_host_data))->host_ctxt; 210 + host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt; 211 211 212 212 if (is_cpu_on) 213 - boot_args = this_cpu_ptr(hyp_symbol_addr(cpu_on_args)); 213 + boot_args = this_cpu_ptr(&cpu_on_args); 214 214 else 215 - boot_args = this_cpu_ptr(hyp_symbol_addr(suspend_args)); 215 + boot_args = this_cpu_ptr(&suspend_args); 216 216 217 217 cpu_reg(host_ctxt, 0) = boot_args->r0; 218 218 write_sysreg_el2(boot_args->pc, SYS_ELR);
+45 -34
arch/arm64/kvm/hyp/pgtable.c
··· 45 45 46 46 #define KVM_PTE_LEAF_ATTR_HI_S2_XN BIT(54) 47 47 48 + #define KVM_PTE_LEAF_ATTR_S2_PERMS (KVM_PTE_LEAF_ATTR_LO_S2_S2AP_R | \ 49 + KVM_PTE_LEAF_ATTR_LO_S2_S2AP_W | \ 50 + KVM_PTE_LEAF_ATTR_HI_S2_XN) 51 + 48 52 struct kvm_pgtable_walk_data { 49 53 struct kvm_pgtable *pgt; 50 54 struct kvm_pgtable_walker *walker; ··· 174 170 smp_store_release(ptep, pte); 175 171 } 176 172 177 - static bool kvm_set_valid_leaf_pte(kvm_pte_t *ptep, u64 pa, kvm_pte_t attr, 178 - u32 level) 173 + static kvm_pte_t kvm_init_valid_leaf_pte(u64 pa, kvm_pte_t attr, u32 level) 179 174 { 180 - kvm_pte_t old = *ptep, pte = kvm_phys_to_pte(pa); 175 + kvm_pte_t pte = kvm_phys_to_pte(pa); 181 176 u64 type = (level == KVM_PGTABLE_MAX_LEVELS - 1) ? KVM_PTE_TYPE_PAGE : 182 177 KVM_PTE_TYPE_BLOCK; 183 178 ··· 184 181 pte |= FIELD_PREP(KVM_PTE_TYPE, type); 185 182 pte |= KVM_PTE_VALID; 186 183 187 - /* Tolerate KVM recreating the exact same mapping. */ 188 - if (kvm_pte_valid(old)) 189 - return old == pte; 190 - 191 - smp_store_release(ptep, pte); 192 - return true; 184 + return pte; 193 185 } 194 186 195 187 static int kvm_pgtable_visitor_cb(struct kvm_pgtable_walk_data *data, u64 addr, ··· 339 341 static bool hyp_map_walker_try_leaf(u64 addr, u64 end, u32 level, 340 342 kvm_pte_t *ptep, struct hyp_map_data *data) 341 343 { 344 + kvm_pte_t new, old = *ptep; 342 345 u64 granule = kvm_granule_size(level), phys = data->phys; 343 346 344 347 if (!kvm_block_mapping_supported(addr, end, phys, level)) 345 348 return false; 346 349 347 - WARN_ON(!kvm_set_valid_leaf_pte(ptep, phys, data->attr, level)); 350 + /* Tolerate KVM recreating the exact same mapping */ 351 + new = kvm_init_valid_leaf_pte(phys, data->attr, level); 352 + if (old != new && !WARN_ON(kvm_pte_valid(old))) 353 + smp_store_release(ptep, new); 354 + 348 355 data->phys += granule; 349 356 return true; 350 357 } ··· 464 461 return 0; 465 462 } 466 463 467 - static bool stage2_map_walker_try_leaf(u64 addr, u64 end, u32 level, 468 - 
kvm_pte_t *ptep, 469 - struct stage2_map_data *data) 464 + static int stage2_map_walker_try_leaf(u64 addr, u64 end, u32 level, 465 + kvm_pte_t *ptep, 466 + struct stage2_map_data *data) 470 467 { 468 + kvm_pte_t new, old = *ptep; 471 469 u64 granule = kvm_granule_size(level), phys = data->phys; 470 + struct page *page = virt_to_page(ptep); 472 471 473 472 if (!kvm_block_mapping_supported(addr, end, phys, level)) 474 - return false; 473 + return -E2BIG; 475 474 476 - /* 477 - * If the PTE was already valid, drop the refcount on the table 478 - * early, as it will be bumped-up again in stage2_map_walk_leaf(). 479 - * This ensures that the refcount stays constant across a valid to 480 - * valid PTE update. 481 - */ 482 - if (kvm_pte_valid(*ptep)) 483 - put_page(virt_to_page(ptep)); 475 + new = kvm_init_valid_leaf_pte(phys, data->attr, level); 476 + if (kvm_pte_valid(old)) { 477 + /* 478 + * Skip updating the PTE if we are trying to recreate the exact 479 + * same mapping or only change the access permissions. Instead, 480 + * the vCPU will exit one more time from guest if still needed 481 + * and then go through the path of relaxing permissions. 482 + */ 483 + if (!((old ^ new) & (~KVM_PTE_LEAF_ATTR_S2_PERMS))) 484 + return -EAGAIN; 484 485 485 - if (kvm_set_valid_leaf_pte(ptep, phys, data->attr, level)) 486 - goto out; 486 + /* 487 + * There's an existing different valid leaf entry, so perform 488 + * break-before-make. 
489 + */ 490 + kvm_set_invalid_pte(ptep); 491 + kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, data->mmu, addr, level); 492 + put_page(page); 493 + } 487 494 488 - /* There's an existing valid leaf entry, so perform break-before-make */ 489 - kvm_set_invalid_pte(ptep); 490 - kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, data->mmu, addr, level); 491 - kvm_set_valid_leaf_pte(ptep, phys, data->attr, level); 492 - out: 495 + smp_store_release(ptep, new); 496 + get_page(page); 493 497 data->phys += granule; 494 - return true; 498 + return 0; 495 499 } 496 500 497 501 static int stage2_map_walk_table_pre(u64 addr, u64 end, u32 level, ··· 526 516 static int stage2_map_walk_leaf(u64 addr, u64 end, u32 level, kvm_pte_t *ptep, 527 517 struct stage2_map_data *data) 528 518 { 519 + int ret; 529 520 kvm_pte_t *childp, pte = *ptep; 530 521 struct page *page = virt_to_page(ptep); 531 522 ··· 537 526 return 0; 538 527 } 539 528 540 - if (stage2_map_walker_try_leaf(addr, end, level, ptep, data)) 541 - goto out_get_page; 529 + ret = stage2_map_walker_try_leaf(addr, end, level, ptep, data); 530 + if (ret != -E2BIG) 531 + return ret; 542 532 543 533 if (WARN_ON(level == KVM_PGTABLE_MAX_LEVELS - 1)) 544 534 return -EINVAL; ··· 563 551 } 564 552 565 553 kvm_set_table_pte(ptep, childp); 566 - 567 - out_get_page: 568 554 get_page(page); 555 + 569 556 return 0; 570 557 } 571 558
+1 -1
arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c
··· 64 64 } 65 65 66 66 rd = kvm_vcpu_dabt_get_rd(vcpu); 67 - addr = hyp_symbol_addr(kvm_vgic_global_state)->vcpu_hyp_va; 67 + addr = kvm_vgic_global_state.vcpu_hyp_va; 68 68 addr += fault_ipa - vgic->vgic_cpu_base; 69 69 70 70 if (kvm_vcpu_dabt_iswrite(vcpu)) {
+6
arch/arm64/kvm/hypercalls.c
··· 71 71 if (gpa != GPA_INVALID) 72 72 val = gpa; 73 73 break; 74 + case ARM_SMCCC_TRNG_VERSION: 75 + case ARM_SMCCC_TRNG_FEATURES: 76 + case ARM_SMCCC_TRNG_GET_UUID: 77 + case ARM_SMCCC_TRNG_RND32: 78 + case ARM_SMCCC_TRNG_RND64: 79 + return kvm_trng_call(vcpu); 74 80 default: 75 81 return kvm_psci_call(vcpu); 76 82 }
+8 -5
arch/arm64/kvm/mmu.c
··· 879 879 if (vma_pagesize == PAGE_SIZE && !force_pte) 880 880 vma_pagesize = transparent_hugepage_adjust(memslot, hva, 881 881 &pfn, &fault_ipa); 882 - if (writable) { 882 + if (writable) 883 883 prot |= KVM_PGTABLE_PROT_W; 884 - kvm_set_pfn_dirty(pfn); 885 - mark_page_dirty(kvm, gfn); 886 - } 887 884 888 885 if (fault_status != FSC_PERM && !device) 889 886 clean_dcache_guest_page(pfn, vma_pagesize); ··· 908 911 memcache); 909 912 } 910 913 914 + /* Mark the page dirty only if the fault is handled successfully */ 915 + if (writable && !ret) { 916 + kvm_set_pfn_dirty(pfn); 917 + mark_page_dirty(kvm, gfn); 918 + } 919 + 911 920 out_unlock: 912 921 spin_unlock(&kvm->mmu_lock); 913 922 kvm_set_pfn_accessed(pfn); 914 923 kvm_release_pfn_clean(pfn); 915 - return ret; 924 + return ret != -EAGAIN ? ret : 0; 916 925 } 917 926 918 927 /* Resolve the access fault by making the page young again. */
+10 -4
arch/arm64/kvm/pmu-emul.c
··· 23 23 static u32 kvm_pmu_event_mask(struct kvm *kvm) 24 24 { 25 25 switch (kvm->arch.pmuver) { 26 - case 1: /* ARMv8.0 */ 26 + case ID_AA64DFR0_PMUVER_8_0: 27 27 return GENMASK(9, 0); 28 - case 4: /* ARMv8.1 */ 29 - case 5: /* ARMv8.4 */ 30 - case 6: /* ARMv8.5 */ 28 + case ID_AA64DFR0_PMUVER_8_1: 29 + case ID_AA64DFR0_PMUVER_8_4: 30 + case ID_AA64DFR0_PMUVER_8_5: 31 31 return GENMASK(15, 0); 32 32 default: /* Shouldn't be here, just for sanity */ 33 33 WARN_ONCE(1, "Unknown PMU version %d\n", kvm->arch.pmuver); ··· 795 795 base = 0; 796 796 } else { 797 797 val = read_sysreg(pmceid1_el0); 798 + /* 799 + * Don't advertise STALL_SLOT, as PMMIR_EL0 is handled 800 + * as RAZ 801 + */ 802 + if (vcpu->kvm->arch.pmuver >= ID_AA64DFR0_PMUVER_8_4) 803 + val &= ~BIT_ULL(ARMV8_PMUV3_PERFCTR_STALL_SLOT - 32); 798 804 base = 32; 799 805 } 800 806
+51 -34
arch/arm64/kvm/sys_regs.c
··· 9 9 * Christoffer Dall <c.dall@virtualopensystems.com> 10 10 */ 11 11 12 + #include <linux/bitfield.h> 12 13 #include <linux/bsearch.h> 13 14 #include <linux/kvm_host.h> 14 15 #include <linux/mm.h> ··· 701 700 static bool access_pmceid(struct kvm_vcpu *vcpu, struct sys_reg_params *p, 702 701 const struct sys_reg_desc *r) 703 702 { 704 - u64 pmceid; 703 + u64 pmceid, mask, shift; 705 704 706 705 BUG_ON(p->is_write); 707 706 708 707 if (pmu_access_el0_disabled(vcpu)) 709 708 return false; 710 709 710 + get_access_mask(r, &mask, &shift); 711 + 711 712 pmceid = kvm_pmu_get_pmceid(vcpu, (p->Op2 & 1)); 713 + pmceid &= mask; 714 + pmceid >>= shift; 712 715 713 716 p->regval = pmceid; 714 717 ··· 1026 1021 return true; 1027 1022 } 1028 1023 1024 + #define FEATURE(x) (GENMASK_ULL(x##_SHIFT + 3, x##_SHIFT)) 1025 + 1029 1026 /* Read a sanitised cpufeature ID register by sys_reg_desc */ 1030 1027 static u64 read_id_reg(const struct kvm_vcpu *vcpu, 1031 1028 struct sys_reg_desc const *r, bool raz) ··· 1035 1028 u32 id = reg_to_encoding(r); 1036 1029 u64 val = raz ? 
0 : read_sanitised_ftr_reg(id); 1037 1030 1038 - if (id == SYS_ID_AA64PFR0_EL1) { 1031 + switch (id) { 1032 + case SYS_ID_AA64PFR0_EL1: 1039 1033 if (!vcpu_has_sve(vcpu)) 1040 - val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT); 1041 - val &= ~(0xfUL << ID_AA64PFR0_AMU_SHIFT); 1042 - val &= ~(0xfUL << ID_AA64PFR0_CSV2_SHIFT); 1043 - val |= ((u64)vcpu->kvm->arch.pfr0_csv2 << ID_AA64PFR0_CSV2_SHIFT); 1044 - val &= ~(0xfUL << ID_AA64PFR0_CSV3_SHIFT); 1045 - val |= ((u64)vcpu->kvm->arch.pfr0_csv3 << ID_AA64PFR0_CSV3_SHIFT); 1046 - } else if (id == SYS_ID_AA64PFR1_EL1) { 1047 - val &= ~(0xfUL << ID_AA64PFR1_MTE_SHIFT); 1048 - } else if (id == SYS_ID_AA64ISAR1_EL1 && !vcpu_has_ptrauth(vcpu)) { 1049 - val &= ~((0xfUL << ID_AA64ISAR1_APA_SHIFT) | 1050 - (0xfUL << ID_AA64ISAR1_API_SHIFT) | 1051 - (0xfUL << ID_AA64ISAR1_GPA_SHIFT) | 1052 - (0xfUL << ID_AA64ISAR1_GPI_SHIFT)); 1053 - } else if (id == SYS_ID_AA64DFR0_EL1) { 1054 - u64 cap = 0; 1055 - 1056 - /* Limit guests to PMUv3 for ARMv8.1 */ 1057 - if (kvm_vcpu_has_pmu(vcpu)) 1058 - cap = ID_AA64DFR0_PMUVER_8_1; 1059 - 1034 + val &= ~FEATURE(ID_AA64PFR0_SVE); 1035 + val &= ~FEATURE(ID_AA64PFR0_AMU); 1036 + val &= ~FEATURE(ID_AA64PFR0_CSV2); 1037 + val |= FIELD_PREP(FEATURE(ID_AA64PFR0_CSV2), (u64)vcpu->kvm->arch.pfr0_csv2); 1038 + val &= ~FEATURE(ID_AA64PFR0_CSV3); 1039 + val |= FIELD_PREP(FEATURE(ID_AA64PFR0_CSV3), (u64)vcpu->kvm->arch.pfr0_csv3); 1040 + break; 1041 + case SYS_ID_AA64PFR1_EL1: 1042 + val &= ~FEATURE(ID_AA64PFR1_MTE); 1043 + break; 1044 + case SYS_ID_AA64ISAR1_EL1: 1045 + if (!vcpu_has_ptrauth(vcpu)) 1046 + val &= ~(FEATURE(ID_AA64ISAR1_APA) | 1047 + FEATURE(ID_AA64ISAR1_API) | 1048 + FEATURE(ID_AA64ISAR1_GPA) | 1049 + FEATURE(ID_AA64ISAR1_GPI)); 1050 + break; 1051 + case SYS_ID_AA64DFR0_EL1: 1052 + /* Limit debug to ARMv8.0 */ 1053 + val &= ~FEATURE(ID_AA64DFR0_DEBUGVER); 1054 + val |= FIELD_PREP(FEATURE(ID_AA64DFR0_DEBUGVER), 6); 1055 + /* Limit guests to PMUv3 for ARMv8.4 */ 1060 1056 val = 
cpuid_feature_cap_perfmon_field(val, 1061 - ID_AA64DFR0_PMUVER_SHIFT, 1062 - cap); 1063 - } else if (id == SYS_ID_DFR0_EL1) { 1064 - /* Limit guests to PMUv3 for ARMv8.1 */ 1057 + ID_AA64DFR0_PMUVER_SHIFT, 1058 + kvm_vcpu_has_pmu(vcpu) ? ID_AA64DFR0_PMUVER_8_4 : 0); 1059 + break; 1060 + case SYS_ID_DFR0_EL1: 1061 + /* Limit guests to PMUv3 for ARMv8.4 */ 1065 1062 val = cpuid_feature_cap_perfmon_field(val, 1066 - ID_DFR0_PERFMON_SHIFT, 1067 - ID_DFR0_PERFMON_8_1); 1063 + ID_DFR0_PERFMON_SHIFT, 1064 + kvm_vcpu_has_pmu(vcpu) ? ID_DFR0_PERFMON_8_4 : 0); 1065 + break; 1068 1066 } 1069 1067 1070 1068 return val; ··· 1505 1493 .access = access_pminten, .reg = PMINTENSET_EL1 }, 1506 1494 { PMU_SYS_REG(SYS_PMINTENCLR_EL1), 1507 1495 .access = access_pminten, .reg = PMINTENSET_EL1 }, 1496 + { SYS_DESC(SYS_PMMIR_EL1), trap_raz_wi }, 1508 1497 1509 1498 { SYS_DESC(SYS_MAIR_EL1), access_vm_reg, reset_unknown, MAIR_EL1 }, 1510 1499 { SYS_DESC(SYS_AMAIR_EL1), access_vm_reg, reset_amair_el1, AMAIR_EL1 }, ··· 1733 1720 { SYS_DESC(SYS_FPEXC32_EL2), NULL, reset_val, FPEXC32_EL2, 0x700 }, 1734 1721 }; 1735 1722 1736 - static bool trap_dbgidr(struct kvm_vcpu *vcpu, 1723 + static bool trap_dbgdidr(struct kvm_vcpu *vcpu, 1737 1724 struct sys_reg_params *p, 1738 1725 const struct sys_reg_desc *r) 1739 1726 { ··· 1747 1734 p->regval = ((((dfr >> ID_AA64DFR0_WRPS_SHIFT) & 0xf) << 28) | 1748 1735 (((dfr >> ID_AA64DFR0_BRPS_SHIFT) & 0xf) << 24) | 1749 1736 (((dfr >> ID_AA64DFR0_CTX_CMPS_SHIFT) & 0xf) << 20) 1750 - | (6 << 16) | (el3 << 14) | (el3 << 12)); 1737 + | (6 << 16) | (1 << 15) | (el3 << 14) | (el3 << 12)); 1751 1738 return true; 1752 1739 } 1753 1740 } ··· 1780 1767 * guest. Revisit this one day, would this principle change. 
1781 1768 */ 1782 1769 static const struct sys_reg_desc cp14_regs[] = { 1783 - /* DBGIDR */ 1784 - { Op1( 0), CRn( 0), CRm( 0), Op2( 0), trap_dbgidr }, 1770 + /* DBGDIDR */ 1771 + { Op1( 0), CRn( 0), CRm( 0), Op2( 0), trap_dbgdidr }, 1785 1772 /* DBGDTRRXext */ 1786 1773 { Op1( 0), CRn( 0), CRm( 0), Op2( 2), trap_raz_wi }, 1787 1774 ··· 1931 1918 { Op1( 0), CRn( 9), CRm(12), Op2( 3), access_pmovs }, 1932 1919 { Op1( 0), CRn( 9), CRm(12), Op2( 4), access_pmswinc }, 1933 1920 { Op1( 0), CRn( 9), CRm(12), Op2( 5), access_pmselr }, 1934 - { Op1( 0), CRn( 9), CRm(12), Op2( 6), access_pmceid }, 1935 - { Op1( 0), CRn( 9), CRm(12), Op2( 7), access_pmceid }, 1921 + { AA32(LO), Op1( 0), CRn( 9), CRm(12), Op2( 6), access_pmceid }, 1922 + { AA32(LO), Op1( 0), CRn( 9), CRm(12), Op2( 7), access_pmceid }, 1936 1923 { Op1( 0), CRn( 9), CRm(13), Op2( 0), access_pmu_evcntr }, 1937 1924 { Op1( 0), CRn( 9), CRm(13), Op2( 1), access_pmu_evtyper }, 1938 1925 { Op1( 0), CRn( 9), CRm(13), Op2( 2), access_pmu_evcntr }, ··· 1940 1927 { Op1( 0), CRn( 9), CRm(14), Op2( 1), access_pminten }, 1941 1928 { Op1( 0), CRn( 9), CRm(14), Op2( 2), access_pminten }, 1942 1929 { Op1( 0), CRn( 9), CRm(14), Op2( 3), access_pmovs }, 1930 + { AA32(HI), Op1( 0), CRn( 9), CRm(14), Op2( 4), access_pmceid }, 1931 + { AA32(HI), Op1( 0), CRn( 9), CRm(14), Op2( 5), access_pmceid }, 1932 + /* PMMIR */ 1933 + { Op1( 0), CRn( 9), CRm(14), Op2( 6), trap_raz_wi }, 1943 1934 1944 1935 /* PRRR/MAIR0 */ 1945 1936 { AA32(LO), Op1( 0), CRn(10), CRm( 2), Op2( 0), access_vm_reg, NULL, MAIR_EL1 },
+85
arch/arm64/kvm/trng.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + // Copyright (C) 2020 Arm Ltd. 3 + 4 + #include <linux/arm-smccc.h> 5 + #include <linux/kvm_host.h> 6 + 7 + #include <asm/kvm_emulate.h> 8 + 9 + #include <kvm/arm_hypercalls.h> 10 + 11 + #define ARM_SMCCC_TRNG_VERSION_1_0 0x10000UL 12 + 13 + /* Those values are deliberately separate from the generic SMCCC definitions. */ 14 + #define TRNG_SUCCESS 0UL 15 + #define TRNG_NOT_SUPPORTED ((unsigned long)-1) 16 + #define TRNG_INVALID_PARAMETER ((unsigned long)-2) 17 + #define TRNG_NO_ENTROPY ((unsigned long)-3) 18 + 19 + #define TRNG_MAX_BITS64 192 20 + 21 + static const uuid_t arm_smc_trng_uuid __aligned(4) = UUID_INIT( 22 + 0x0d21e000, 0x4384, 0x11eb, 0x80, 0x70, 0x52, 0x44, 0x55, 0x4e, 0x5a, 0x4c); 23 + 24 + static int kvm_trng_do_rnd(struct kvm_vcpu *vcpu, int size) 25 + { 26 + DECLARE_BITMAP(bits, TRNG_MAX_BITS64); 27 + u32 num_bits = smccc_get_arg1(vcpu); 28 + int i; 29 + 30 + if (num_bits > 3 * size) { 31 + smccc_set_retval(vcpu, TRNG_INVALID_PARAMETER, 0, 0, 0); 32 + return 1; 33 + } 34 + 35 + /* get as many bits as we need to fulfil the request */ 36 + for (i = 0; i < DIV_ROUND_UP(num_bits, BITS_PER_LONG); i++) 37 + bits[i] = get_random_long(); 38 + 39 + bitmap_clear(bits, num_bits, TRNG_MAX_BITS64 - num_bits); 40 + 41 + if (size == 32) 42 + smccc_set_retval(vcpu, TRNG_SUCCESS, lower_32_bits(bits[1]), 43 + upper_32_bits(bits[0]), lower_32_bits(bits[0])); 44 + else 45 + smccc_set_retval(vcpu, TRNG_SUCCESS, bits[2], bits[1], bits[0]); 46 + 47 + memzero_explicit(bits, sizeof(bits)); 48 + return 1; 49 + } 50 + 51 + int kvm_trng_call(struct kvm_vcpu *vcpu) 52 + { 53 + const __le32 *u = (__le32 *)arm_smc_trng_uuid.b; 54 + u32 func_id = smccc_get_function(vcpu); 55 + unsigned long val = TRNG_NOT_SUPPORTED; 56 + int size = 64; 57 + 58 + switch (func_id) { 59 + case ARM_SMCCC_TRNG_VERSION: 60 + val = ARM_SMCCC_TRNG_VERSION_1_0; 61 + break; 62 + case ARM_SMCCC_TRNG_FEATURES: 63 + switch (smccc_get_arg1(vcpu)) { 64 + case 
ARM_SMCCC_TRNG_VERSION: 65 + case ARM_SMCCC_TRNG_FEATURES: 66 + case ARM_SMCCC_TRNG_GET_UUID: 67 + case ARM_SMCCC_TRNG_RND32: 68 + case ARM_SMCCC_TRNG_RND64: 69 + val = TRNG_SUCCESS; 70 + } 71 + break; 72 + case ARM_SMCCC_TRNG_GET_UUID: 73 + smccc_set_retval(vcpu, le32_to_cpu(u[0]), le32_to_cpu(u[1]), 74 + le32_to_cpu(u[2]), le32_to_cpu(u[3])); 75 + return 1; 76 + case ARM_SMCCC_TRNG_RND32: 77 + size = 32; 78 + fallthrough; 79 + case ARM_SMCCC_TRNG_RND64: 80 + return kvm_trng_do_rnd(vcpu, size); 81 + } 82 + 83 + smccc_set_retval(vcpu, val, 0, 0, 0); 84 + return 1; 85 + }
+28 -6
arch/arm64/kvm/va_layout.c
··· 81 81 init_hyp_physvirt_offset(); 82 82 } 83 83 84 + /* 85 + * The .hyp.reloc ELF section contains a list of kimg positions that 86 + * contains kimg VAs but will be accessed only in hyp execution context. 87 + * Convert them to hyp VAs. See gen-hyprel.c for more details. 88 + */ 89 + __init void kvm_apply_hyp_relocations(void) 90 + { 91 + int32_t *rel; 92 + int32_t *begin = (int32_t *)__hyp_reloc_begin; 93 + int32_t *end = (int32_t *)__hyp_reloc_end; 94 + 95 + for (rel = begin; rel < end; ++rel) { 96 + uintptr_t *ptr, kimg_va; 97 + 98 + /* 99 + * Each entry contains a 32-bit relative offset from itself 100 + * to a kimg VA position. 101 + */ 102 + ptr = (uintptr_t *)lm_alias((char *)rel + *rel); 103 + 104 + /* Read the kimg VA value at the relocation address. */ 105 + kimg_va = *ptr; 106 + 107 + /* Convert to hyp VA and store back to the relocation address. */ 108 + *ptr = __early_kern_hyp_va((uintptr_t)lm_alias(kimg_va)); 109 + } 110 + } 111 + 84 112 static u32 compute_instruction(int n, u32 rd, u32 rn) 85 113 { 86 114 u32 insn = AARCH64_BREAK_FAULT; ··· 281 253 AARCH64_INSN_VARIANT_64BIT, 282 254 AARCH64_INSN_MOVEWIDE_KEEP); 283 255 *updptr++ = cpu_to_le32(insn); 284 - } 285 - 286 - void kvm_update_kimg_phys_offset(struct alt_instr *alt, 287 - __le32 *origptr, __le32 *updptr, int nr_inst) 288 - { 289 - generate_mov_q(kimage_voffset + PHYS_OFFSET, origptr, updptr, nr_inst); 290 256 } 291 257 292 258 void kvm_get_kimage_voffset(struct alt_instr *alt,
+18 -15
arch/arm64/mm/init.c
··· 53 53 EXPORT_SYMBOL(memstart_addr); 54 54 55 55 /* 56 - * We create both ZONE_DMA and ZONE_DMA32. ZONE_DMA covers the first 1G of 57 - * memory as some devices, namely the Raspberry Pi 4, have peripherals with 58 - * this limited view of the memory. ZONE_DMA32 will cover the rest of the 32 59 - * bit addressable memory area. 56 + * If the corresponding config options are enabled, we create both ZONE_DMA 57 + * and ZONE_DMA32. By default ZONE_DMA covers the 32-bit addressable memory 58 + * unless restricted on specific platforms (e.g. 30-bit on Raspberry Pi 4). 59 + * In such case, ZONE_DMA32 covers the rest of the 32-bit addressable memory, 60 + * otherwise it is empty. 60 61 */ 61 62 phys_addr_t arm64_dma_phys_limit __ro_after_init; 62 - phys_addr_t arm64_dma32_phys_limit __ro_after_init; 63 63 64 64 #ifdef CONFIG_KEXEC_CORE 65 65 /* ··· 84 84 85 85 if (crash_base == 0) { 86 86 /* Current arm64 boot protocol requires 2MB alignment */ 87 - crash_base = memblock_find_in_range(0, arm64_dma32_phys_limit, 87 + crash_base = memblock_find_in_range(0, arm64_dma_phys_limit, 88 88 crash_size, SZ_2M); 89 89 if (crash_base == 0) { 90 90 pr_warn("cannot allocate crashkernel (size:0x%llx)\n", ··· 196 196 unsigned long max_zone_pfns[MAX_NR_ZONES] = {0}; 197 197 unsigned int __maybe_unused acpi_zone_dma_bits; 198 198 unsigned int __maybe_unused dt_zone_dma_bits; 199 + phys_addr_t __maybe_unused dma32_phys_limit = max_zone_phys(32); 199 200 200 201 #ifdef CONFIG_ZONE_DMA 201 202 acpi_zone_dma_bits = fls64(acpi_iort_dma_get_max_cpu_address()); ··· 206 205 max_zone_pfns[ZONE_DMA] = PFN_DOWN(arm64_dma_phys_limit); 207 206 #endif 208 207 #ifdef CONFIG_ZONE_DMA32 209 - max_zone_pfns[ZONE_DMA32] = PFN_DOWN(arm64_dma32_phys_limit); 208 + max_zone_pfns[ZONE_DMA32] = PFN_DOWN(dma32_phys_limit); 209 + if (!arm64_dma_phys_limit) 210 + arm64_dma_phys_limit = dma32_phys_limit; 210 211 #endif 212 + if (!arm64_dma_phys_limit) 213 + arm64_dma_phys_limit = PHYS_MASK + 1; 211 214 
max_zone_pfns[ZONE_NORMAL] = max; 212 215 213 216 free_area_init(max_zone_pfns); ··· 399 394 400 395 early_init_fdt_scan_reserved_mem(); 401 396 402 - if (IS_ENABLED(CONFIG_ZONE_DMA32)) 403 - arm64_dma32_phys_limit = max_zone_phys(32); 404 - else 405 - arm64_dma32_phys_limit = PHYS_MASK + 1; 406 - 407 397 reserve_elfcorehdr(); 408 398 409 399 high_memory = __va(memblock_end_of_DRAM() - 1) + 1; 410 - 411 - dma_contiguous_reserve(arm64_dma32_phys_limit); 412 400 } 413 401 414 402 void __init bootmem_init(void) ··· 437 439 zone_sizes_init(min, max); 438 440 439 441 /* 442 + * Reserve the CMA area after arm64_dma_phys_limit was initialised. 443 + */ 444 + dma_contiguous_reserve(arm64_dma_phys_limit); 445 + 446 + /* 440 447 * request_standard_resources() depends on crashkernel's memory being 441 448 * reserved, so do it here. 442 449 */ ··· 458 455 void __init mem_init(void) 459 456 { 460 457 if (swiotlb_force == SWIOTLB_FORCE || 461 - max_pfn > PFN_DOWN(arm64_dma_phys_limit ? : arm64_dma32_phys_limit)) 458 + max_pfn > PFN_DOWN(arm64_dma_phys_limit)) 462 459 swiotlb_init(1); 463 460 else 464 461 swiotlb_force = SWIOTLB_NO_FORCE;
+2 -1
arch/mips/boot/compressed/decompress.c
··· 13 13 #include <linux/libfdt.h> 14 14 15 15 #include <asm/addrspace.h> 16 + #include <asm/unaligned.h> 16 17 17 18 /* 18 19 * These two variables specify the free mem region ··· 118 117 dtb_size = fdt_totalsize((void *)&__appended_dtb); 119 118 120 119 /* last four bytes is always image size in little endian */ 121 - image_size = le32_to_cpup((void *)&__image_end - 4); 120 + image_size = get_unaligned_le32((void *)&__image_end - 4); 122 121 123 122 /* copy dtb to where the booted kernel will expect it */ 124 123 memcpy((void *)VMLINUX_LOAD_ADDRESS_ULL + image_size,
+1 -1
arch/mips/cavium-octeon/octeon-irq.c
··· 1444 1444 static int __init octeon_irq_init_ciu( 1445 1445 struct device_node *ciu_node, struct device_node *parent) 1446 1446 { 1447 - unsigned int i, r; 1447 + int i, r; 1448 1448 struct irq_chip *chip; 1449 1449 struct irq_chip *chip_edge; 1450 1450 struct irq_chip *chip_mbox;
+7
arch/mips/kernel/binfmt_elfn32.c
··· 103 103 #undef ns_to_kernel_old_timeval 104 104 #define ns_to_kernel_old_timeval ns_to_old_timeval32 105 105 106 + /* 107 + * Some data types as stored in coredump. 108 + */ 109 + #define user_long_t compat_long_t 110 + #define user_siginfo_t compat_siginfo_t 111 + #define copy_siginfo_to_external copy_siginfo_to_external32 112 + 106 113 #include "../../../fs/binfmt_elf.c"
+7
arch/mips/kernel/binfmt_elfo32.c
··· 106 106 #undef ns_to_kernel_old_timeval 107 107 #define ns_to_kernel_old_timeval ns_to_old_timeval32 108 108 109 + /* 110 + * Some data types as stored in coredump. 111 + */ 112 + #define user_long_t compat_long_t 113 + #define user_siginfo_t compat_siginfo_t 114 + #define copy_siginfo_to_external copy_siginfo_to_external32 115 + 109 116 #include "../../../fs/binfmt_elf.c"
+8 -2
arch/mips/kernel/relocate.c
··· 187 187 static inline __init unsigned long rotate_xor(unsigned long hash, 188 188 const void *area, size_t size) 189 189 { 190 - size_t i; 191 - unsigned long *ptr = (unsigned long *)area; 190 + const typeof(hash) *ptr = PTR_ALIGN(area, sizeof(hash)); 191 + size_t diff, i; 192 + 193 + diff = (void *)ptr - area; 194 + if (unlikely(size < diff + sizeof(hash))) 195 + return hash; 196 + 197 + size = ALIGN_DOWN(size - diff, sizeof(hash)); 192 198 193 199 for (i = 0; i < size / sizeof(hash); i++) { 194 200 /* Rotate by odd number of bits and XOR. */
+15 -1
arch/powerpc/include/asm/vdso/gettimeofday.h
··· 103 103 return do_syscall_2(__NR_gettimeofday, (unsigned long)_tv, (unsigned long)_tz); 104 104 } 105 105 106 + #ifdef __powerpc64__ 107 + 106 108 static __always_inline 107 109 int clock_gettime_fallback(clockid_t _clkid, struct __kernel_timespec *_ts) 108 110 { ··· 117 115 return do_syscall_2(__NR_clock_getres, _clkid, (unsigned long)_ts); 118 116 } 119 117 120 - #ifdef CONFIG_VDSO32 118 + #else 121 119 122 120 #define BUILD_VDSO32 1 121 + 122 + static __always_inline 123 + int clock_gettime_fallback(clockid_t _clkid, struct __kernel_timespec *_ts) 124 + { 125 + return do_syscall_2(__NR_clock_gettime64, _clkid, (unsigned long)_ts); 126 + } 127 + 128 + static __always_inline 129 + int clock_getres_fallback(clockid_t _clkid, struct __kernel_timespec *_ts) 130 + { 131 + return do_syscall_2(__NR_clock_getres_time64, _clkid, (unsigned long)_ts); 132 + } 123 133 124 134 static __always_inline 125 135 int clock_gettime32_fallback(clockid_t _clkid, struct old_timespec32 *_ts)
+8
arch/powerpc/kernel/vmlinux.lds.S
··· 187 187 .init.text : AT(ADDR(.init.text) - LOAD_OFFSET) { 188 188 _sinittext = .; 189 189 INIT_TEXT 190 + 191 + /* 192 + *.init.text might be RO so we must ensure this section ends on 193 + * a page boundary. 194 + */ 195 + . = ALIGN(PAGE_SIZE); 190 196 _einittext = .; 191 197 #ifdef CONFIG_PPC64 192 198 *(.tramp.ftrace.init); ··· 205 199 .exit.text : AT(ADDR(.exit.text) - LOAD_OFFSET) { 206 200 EXIT_TEXT 207 201 } 202 + 203 + . = ALIGN(PAGE_SIZE); 208 204 209 205 INIT_DATA_SECTION(16) 210 206
+4 -2
arch/riscv/Kconfig
··· 137 137 138 138 config PAGE_OFFSET 139 139 hex 140 - default 0xC0000000 if 32BIT && MAXPHYSMEM_2GB 140 + default 0xC0000000 if 32BIT && MAXPHYSMEM_1GB 141 141 default 0x80000000 if 64BIT && !MMU 142 142 default 0xffffffff80000000 if 64BIT && MAXPHYSMEM_2GB 143 143 default 0xffffffe000000000 if 64BIT && MAXPHYSMEM_128GB ··· 247 247 248 248 choice 249 249 prompt "Maximum Physical Memory" 250 - default MAXPHYSMEM_2GB if 32BIT 250 + default MAXPHYSMEM_1GB if 32BIT 251 251 default MAXPHYSMEM_2GB if 64BIT && CMODEL_MEDLOW 252 252 default MAXPHYSMEM_128GB if 64BIT && CMODEL_MEDANY 253 253 254 + config MAXPHYSMEM_1GB 255 + bool "1GiB" 254 256 config MAXPHYSMEM_2GB 255 257 bool "2GiB" 256 258 config MAXPHYSMEM_128GB
+2
arch/riscv/boot/dts/sifive/hifive-unleashed-a00.dts
··· 88 88 phy-mode = "gmii"; 89 89 phy-handle = <&phy0>; 90 90 phy0: ethernet-phy@0 { 91 + compatible = "ethernet-phy-id0007.0771"; 91 92 reg = <0>; 93 + reset-gpios = <&gpio 12 GPIO_ACTIVE_LOW>; 92 94 }; 93 95 }; 94 96
+2
arch/riscv/configs/defconfig
··· 64 64 CONFIG_HW_RANDOM_VIRTIO=y 65 65 CONFIG_SPI=y 66 66 CONFIG_SPI_SIFIVE=y 67 + CONFIG_GPIOLIB=y 68 + CONFIG_GPIO_SIFIVE=y 67 69 # CONFIG_PTP_1588_CLOCK is not set 68 70 CONFIG_POWER_RESET=y 69 71 CONFIG_DRM=y
-1
arch/riscv/include/asm/pgtable.h
··· 99 99 | _PAGE_DIRTY) 100 100 101 101 #define PAGE_KERNEL __pgprot(_PAGE_KERNEL) 102 - #define PAGE_KERNEL_EXEC __pgprot(_PAGE_KERNEL | _PAGE_EXEC) 103 102 #define PAGE_KERNEL_READ __pgprot(_PAGE_KERNEL & ~_PAGE_WRITE) 104 103 #define PAGE_KERNEL_EXEC __pgprot(_PAGE_KERNEL | _PAGE_EXEC) 105 104 #define PAGE_KERNEL_READ_EXEC __pgprot((_PAGE_KERNEL & ~_PAGE_WRITE) \
+1 -1
arch/riscv/include/asm/vdso.h
··· 10 10 11 11 #include <linux/types.h> 12 12 13 - #ifndef GENERIC_TIME_VSYSCALL 13 + #ifndef CONFIG_GENERIC_TIME_VSYSCALL 14 14 struct vdso_data { 15 15 }; 16 16 #endif
+10 -1
arch/riscv/kernel/cacheinfo.c
··· 26 26 27 27 static struct cacheinfo *get_cacheinfo(u32 level, enum cache_type type) 28 28 { 29 - struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(smp_processor_id()); 29 + /* 30 + * Using raw_smp_processor_id() elides a preemptability check, but this 31 + * is really indicative of a larger problem: the cacheinfo UABI assumes 32 + * that cores have a homogeneous view of the cache hierarchy. That 33 + * happens to be the case for the current set of RISC-V systems, but 34 + * likely won't be true in general. Since there's no way to provide 35 + * correct information for these systems via the current UABI we're 36 + * just eliding the check for now. 37 + */ 38 + struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(raw_smp_processor_id()); 30 39 struct cacheinfo *this_leaf; 31 40 int index; 32 41
+13 -11
arch/riscv/kernel/entry.S
··· 124 124 REG_L a1, (a1) 125 125 jr a1 126 126 1: 127 - #ifdef CONFIG_TRACE_IRQFLAGS 128 - call trace_hardirqs_on 129 - #endif 130 127 /* 131 128 * Exceptions run with interrupts enabled or disabled depending on the 132 129 * state of SR_PIE in m/sstatus. 133 130 */ 134 131 andi t0, s1, SR_PIE 135 132 beqz t0, 1f 133 + #ifdef CONFIG_TRACE_IRQFLAGS 134 + call trace_hardirqs_on 135 + #endif 136 136 csrs CSR_STATUS, SR_IE 137 137 138 138 1: ··· 155 155 tail do_trap_unknown 156 156 157 157 handle_syscall: 158 + #ifdef CONFIG_RISCV_M_MODE 159 + /* 160 + * When running in M-Mode (no MMU config), MPIE does not get set. 161 + * As a result, we need to force enable interrupts here because 162 + * handle_exception did not set SR_IE as it always sees SR_PIE 163 + * being cleared. 164 + */ 165 + csrs CSR_STATUS, SR_IE 166 + #endif 158 167 #if defined(CONFIG_TRACE_IRQFLAGS) || defined(CONFIG_CONTEXT_TRACKING) 159 168 /* Recover a0 - a7 for system calls */ 160 169 REG_L a0, PT_A0(sp) ··· 195 186 * Syscall number held in a7. 196 187 * If syscall number is above allowed value, redirect to ni_syscall. 197 188 */ 198 - bge a7, t0, 1f 199 - /* 200 - * Check if syscall is rejected by tracer, i.e., a7 == -1. 201 - * If yes, we pretend it was executed. 202 - */ 203 - li t1, -1 204 - beq a7, t1, ret_from_syscall_rejected 205 - blt a7, t1, 1f 189 + bgeu a7, t0, 1f 206 190 /* Call syscall */ 207 191 la s0, sys_call_table 208 192 slli t0, a7, RISCV_LGPTR
+13 -11
arch/riscv/kernel/setup.c
··· 127 127 { 128 128 struct memblock_region *region = NULL; 129 129 struct resource *res = NULL; 130 - int ret = 0; 130 + struct resource *mem_res = NULL; 131 + size_t mem_res_sz = 0; 132 + int ret = 0, i = 0; 131 133 132 134 code_res.start = __pa_symbol(_text); 133 135 code_res.end = __pa_symbol(_etext) - 1; ··· 147 145 bss_res.end = __pa_symbol(__bss_stop) - 1; 148 146 bss_res.flags = IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY; 149 147 148 + mem_res_sz = (memblock.memory.cnt + memblock.reserved.cnt) * sizeof(*mem_res); 149 + mem_res = memblock_alloc(mem_res_sz, SMP_CACHE_BYTES); 150 + if (!mem_res) 151 + panic("%s: Failed to allocate %zu bytes\n", __func__, mem_res_sz); 150 152 /* 151 153 * Start by adding the reserved regions, if they overlap 152 154 * with /memory regions, insert_resource later on will take 153 155 * care of it. 154 156 */ 155 157 for_each_reserved_mem_region(region) { 156 - res = memblock_alloc(sizeof(struct resource), SMP_CACHE_BYTES); 157 - if (!res) 158 - panic("%s: Failed to allocate %zu bytes\n", __func__, 159 - sizeof(struct resource)); 158 + res = &mem_res[i++]; 160 159 161 160 res->name = "Reserved"; 162 161 res->flags = IORESOURCE_MEM | IORESOURCE_BUSY; ··· 174 171 * Ignore any other reserved regions within 175 172 * system memory. 
176 173 */ 177 - if (memblock_is_memory(res->start)) 174 + if (memblock_is_memory(res->start)) { 175 + memblock_free((phys_addr_t) res, sizeof(struct resource)); 178 176 continue; 177 + } 179 178 180 179 ret = add_resource(&iomem_resource, res); 181 180 if (ret < 0) ··· 186 181 187 182 /* Add /memory regions to the resource tree */ 188 183 for_each_mem_region(region) { 189 - res = memblock_alloc(sizeof(struct resource), SMP_CACHE_BYTES); 190 - if (!res) 191 - panic("%s: Failed to allocate %zu bytes\n", __func__, 192 - sizeof(struct resource)); 184 + res = &mem_res[i++]; 193 185 194 186 if (unlikely(memblock_is_nomap(region))) { 195 187 res->name = "Reserved"; ··· 207 205 return; 208 206 209 207 error: 210 - memblock_free((phys_addr_t) res, sizeof(struct resource)); 211 208 /* Better an empty resource tree than an inconsistent one */ 212 209 release_child_resources(&iomem_resource); 210 + memblock_free((phys_addr_t) mem_res, mem_res_sz); 213 211 } 214 212 215 213
+2 -3
arch/riscv/kernel/stacktrace.c
··· 14 14 15 15 #include <asm/stacktrace.h> 16 16 17 - register unsigned long sp_in_global __asm__("sp"); 17 + register const unsigned long sp_in_global __asm__("sp"); 18 18 19 19 #ifdef CONFIG_FRAME_POINTER 20 20 ··· 28 28 sp = user_stack_pointer(regs); 29 29 pc = instruction_pointer(regs); 30 30 } else if (task == NULL || task == current) { 31 - const register unsigned long current_sp = sp_in_global; 32 31 fp = (unsigned long)__builtin_frame_address(0); 33 - sp = current_sp; 32 + sp = sp_in_global; 34 33 pc = (unsigned long)walk_stackframe; 35 34 } else { 36 35 /* task blocked in __switch_to */
+3
arch/riscv/kernel/time.c
··· 4 4 * Copyright (C) 2017 SiFive 5 5 */ 6 6 7 + #include <linux/of_clk.h> 7 8 #include <linux/clocksource.h> 8 9 #include <linux/delay.h> 9 10 #include <asm/sbi.h> ··· 25 24 riscv_timebase = prop; 26 25 27 26 lpj_fine = riscv_timebase / HZ; 27 + 28 + of_clk_init(NULL); 28 29 timer_probe(); 29 30 } 30 31
+1 -1
arch/riscv/kernel/vdso.c
··· 12 12 #include <linux/binfmts.h> 13 13 #include <linux/err.h> 14 14 #include <asm/page.h> 15 - #ifdef GENERIC_TIME_VSYSCALL 15 + #ifdef CONFIG_GENERIC_TIME_VSYSCALL 16 16 #include <vdso/datapage.h> 17 17 #else 18 18 #include <asm/vdso.h>
+14 -2
arch/riscv/mm/init.c
··· 157 157 void __init setup_bootmem(void) 158 158 { 159 159 phys_addr_t mem_start = 0; 160 - phys_addr_t start, end = 0; 160 + phys_addr_t start, dram_end, end = 0; 161 161 phys_addr_t vmlinux_end = __pa_symbol(&_end); 162 162 phys_addr_t vmlinux_start = __pa_symbol(&_start); 163 + phys_addr_t max_mapped_addr = __pa(~(ulong)0); 163 164 u64 i; 164 165 165 166 /* Find the memory region containing the kernel */ ··· 182 181 /* Reserve from the start of the kernel to the end of the kernel */ 183 182 memblock_reserve(vmlinux_start, vmlinux_end - vmlinux_start); 184 183 185 - max_pfn = PFN_DOWN(memblock_end_of_DRAM()); 184 + dram_end = memblock_end_of_DRAM(); 185 + 186 + /* 187 + * memblock allocator is not aware of the fact that last 4K bytes of 188 + * the addressable memory can not be mapped because of IS_ERR_VALUE 189 + * macro. Make sure that last 4k bytes are not usable by memblock 190 + * if end of dram is equal to maximum addressable memory. 191 + */ 192 + if (max_mapped_addr == (dram_end - 1)) 193 + memblock_set_current_limit(max_mapped_addr - 4096); 194 + 195 + max_pfn = PFN_DOWN(dram_end); 186 196 max_low_pfn = max_pfn; 187 197 dma32_phys_limit = min(4UL * SZ_1G, (unsigned long)PFN_PHYS(max_low_pfn)); 188 198 set_max_mapnr(max_low_pfn);
+2 -2
arch/riscv/mm/kasan_init.c
··· 93 93 VMALLOC_END)); 94 94 95 95 for_each_mem_range(i, &_start, &_end) { 96 - void *start = (void *)_start; 97 - void *end = (void *)_end; 96 + void *start = (void *)__va(_start); 97 + void *end = (void *)__va(_end); 98 98 99 99 if (start >= end) 100 100 break;
+4
arch/x86/hyperv/hv_init.c
··· 16 16 #include <asm/hyperv-tlfs.h> 17 17 #include <asm/mshyperv.h> 18 18 #include <asm/idtentry.h> 19 + #include <linux/kexec.h> 19 20 #include <linux/version.h> 20 21 #include <linux/vmalloc.h> 21 22 #include <linux/mm.h> ··· 26 25 #include <linux/cpuhotplug.h> 27 26 #include <linux/syscore_ops.h> 28 27 #include <clocksource/hyperv_timer.h> 28 + 29 + int hyperv_init_cpuhp; 29 30 30 31 void *hv_hypercall_pg; 31 32 EXPORT_SYMBOL_GPL(hv_hypercall_pg); ··· 404 401 405 402 register_syscore_ops(&hv_syscore_ops); 406 403 404 + hyperv_init_cpuhp = cpuhp; 407 405 return; 408 406 409 407 remove_cpuhp_state:
+9 -3
arch/x86/hyperv/mmu.c
··· 66 66 if (!hv_hypercall_pg) 67 67 goto do_native; 68 68 69 - if (cpumask_empty(cpus)) 70 - return; 71 - 72 69 local_irq_save(flags); 70 + 71 + /* 72 + * Only check the mask _after_ interrupt has been disabled to avoid the 73 + * mask changing under our feet. 74 + */ 75 + if (cpumask_empty(cpus)) { 76 + local_irq_restore(flags); 77 + return; 78 + } 73 79 74 80 flush_pcpu = (struct hv_tlb_flush **) 75 81 this_cpu_ptr(hyperv_pcpu_input_arg);
+2
arch/x86/include/asm/mshyperv.h
··· 74 74 75 75 76 76 #if IS_ENABLED(CONFIG_HYPERV) 77 + extern int hyperv_init_cpuhp; 78 + 77 79 extern void *hv_hypercall_pg; 78 80 extern void __percpu **hyperv_pcpu_input_arg; 79 81
+18
arch/x86/kernel/cpu/mshyperv.c
··· 135 135 { 136 136 if (kexec_in_progress && hv_kexec_handler) 137 137 hv_kexec_handler(); 138 + 139 + /* 140 + * Call hv_cpu_die() on all the CPUs, otherwise later the hypervisor 141 + * corrupts the old VP Assist Pages and can crash the kexec kernel. 142 + */ 143 + if (kexec_in_progress && hyperv_init_cpuhp > 0) 144 + cpuhp_remove_state(hyperv_init_cpuhp); 145 + 146 + /* The function calls stop_other_cpus(). */ 138 147 native_machine_shutdown(); 148 + 149 + /* Disable the hypercall page when there is only 1 active CPU. */ 150 + if (kexec_in_progress) 151 + hyperv_cleanup(); 139 152 } 140 153 141 154 static void hv_machine_crash_shutdown(struct pt_regs *regs) 142 155 { 143 156 if (hv_crash_handler) 144 157 hv_crash_handler(regs); 158 + 159 + /* The function calls crash_smp_send_stop(). */ 145 160 native_machine_crash_shutdown(regs); 161 + 162 + /* Disable the hypercall page when there is only 1 active CPU. */ 163 + hyperv_cleanup(); 146 164 } 147 165 #endif /* CONFIG_KEXEC_CORE */ 148 166 #endif /* CONFIG_HYPERV */
+12 -3
arch/x86/xen/enlighten_hvm.c
··· 164 164 else 165 165 per_cpu(xen_vcpu_id, cpu) = cpu; 166 166 rc = xen_vcpu_setup(cpu); 167 - if (rc) 167 + if (rc || !xen_have_vector_callback) 168 168 return rc; 169 169 170 - if (xen_have_vector_callback && xen_feature(XENFEAT_hvm_safe_pvclock)) 170 + if (xen_feature(XENFEAT_hvm_safe_pvclock)) 171 171 xen_setup_timer(cpu); 172 172 173 173 rc = xen_smp_intr_init(cpu); ··· 188 188 return 0; 189 189 } 190 190 191 + static bool no_vector_callback __initdata; 192 + 191 193 static void __init xen_hvm_guest_init(void) 192 194 { 193 195 if (xen_pv_domain()) ··· 209 207 210 208 xen_panic_handler_init(); 211 209 212 - if (xen_feature(XENFEAT_hvm_callback_vector)) 210 + if (!no_vector_callback && xen_feature(XENFEAT_hvm_callback_vector)) 213 211 xen_have_vector_callback = 1; 214 212 215 213 xen_hvm_smp_init(); ··· 234 232 return 0; 235 233 } 236 234 early_param("xen_nopv", xen_parse_nopv); 235 + 236 + static __init int xen_parse_no_vector_callback(char *arg) 237 + { 238 + no_vector_callback = true; 239 + return 0; 240 + } 241 + early_param("xen_no_vector_callback", xen_parse_no_vector_callback); 237 242 238 243 bool __init xen_hvm_need_lapic(void) 239 244 {
+18 -11
arch/x86/xen/smp_hvm.c
··· 33 33 int cpu; 34 34 35 35 native_smp_prepare_cpus(max_cpus); 36 - WARN_ON(xen_smp_intr_init(0)); 37 36 38 - xen_init_lock_cpu(0); 37 + if (xen_have_vector_callback) { 38 + WARN_ON(xen_smp_intr_init(0)); 39 + xen_init_lock_cpu(0); 40 + } 39 41 40 42 for_each_possible_cpu(cpu) { 41 43 if (cpu == 0) ··· 52 50 static void xen_hvm_cpu_die(unsigned int cpu) 53 51 { 54 52 if (common_cpu_die(cpu) == 0) { 55 - xen_smp_intr_free(cpu); 56 - xen_uninit_lock_cpu(cpu); 57 - xen_teardown_timer(cpu); 53 + if (xen_have_vector_callback) { 54 + xen_smp_intr_free(cpu); 55 + xen_uninit_lock_cpu(cpu); 56 + xen_teardown_timer(cpu); 57 + } 58 58 } 59 59 } 60 60 #else ··· 68 64 69 65 void __init xen_hvm_smp_init(void) 70 66 { 71 - if (!xen_have_vector_callback) 72 - return; 73 - 67 + smp_ops.smp_prepare_boot_cpu = xen_hvm_smp_prepare_boot_cpu; 74 68 smp_ops.smp_prepare_cpus = xen_hvm_smp_prepare_cpus; 75 - smp_ops.smp_send_reschedule = xen_smp_send_reschedule; 69 + smp_ops.smp_cpus_done = xen_smp_cpus_done; 76 70 smp_ops.cpu_die = xen_hvm_cpu_die; 71 + 72 + if (!xen_have_vector_callback) { 73 + nopvspin = true; 74 + return; 75 + } 76 + 77 + smp_ops.smp_send_reschedule = xen_smp_send_reschedule; 77 78 smp_ops.send_call_func_ipi = xen_smp_send_call_function_ipi; 78 79 smp_ops.send_call_func_single_ipi = xen_smp_send_call_function_single_ipi; 79 - smp_ops.smp_prepare_boot_cpu = xen_hvm_smp_prepare_boot_cpu; 80 - smp_ops.smp_cpus_done = xen_smp_cpus_done; 81 80 }
+1 -1
drivers/acpi/internal.h
··· 97 97 extern struct list_head acpi_bus_id_list; 98 98 99 99 struct acpi_device_bus_id { 100 - char bus_id[15]; 100 + const char *bus_id; 101 101 unsigned int instance_no; 102 102 struct list_head node; 103 103 };
+14 -1
drivers/acpi/scan.c
··· 486 486 acpi_device_bus_id->instance_no--; 487 487 else { 488 488 list_del(&acpi_device_bus_id->node); 489 + kfree_const(acpi_device_bus_id->bus_id); 489 490 kfree(acpi_device_bus_id); 490 491 } 491 492 break; ··· 675 674 } 676 675 if (!found) { 677 676 acpi_device_bus_id = new_bus_id; 678 - strcpy(acpi_device_bus_id->bus_id, acpi_device_hid(device)); 677 + acpi_device_bus_id->bus_id = 678 + kstrdup_const(acpi_device_hid(device), GFP_KERNEL); 679 + if (!acpi_device_bus_id->bus_id) { 680 + pr_err(PREFIX "Memory allocation error for bus id\n"); 681 + result = -ENOMEM; 682 + goto err_free_new_bus_id; 683 + } 684 + 679 685 acpi_device_bus_id->instance_no = 0; 680 686 list_add_tail(&acpi_device_bus_id->node, &acpi_bus_id_list); 681 687 } ··· 717 709 if (device->parent) 718 710 list_del(&device->node); 719 711 list_del(&device->wakeup_list); 712 + 713 + err_free_new_bus_id: 714 + if (!found) 715 + kfree(new_bus_id); 716 + 720 717 mutex_unlock(&acpi_device_lock); 721 718 722 719 err_detach:
+2
drivers/clk/tegra/clk-tegra30.c
··· 1256 1256 { TEGRA30_CLK_I2S3_SYNC, TEGRA30_CLK_CLK_MAX, 24000000, 0 }, 1257 1257 { TEGRA30_CLK_I2S4_SYNC, TEGRA30_CLK_CLK_MAX, 24000000, 0 }, 1258 1258 { TEGRA30_CLK_VIMCLK_SYNC, TEGRA30_CLK_CLK_MAX, 24000000, 0 }, 1259 + { TEGRA30_CLK_HDA, TEGRA30_CLK_PLL_P, 102000000, 0 }, 1260 + { TEGRA30_CLK_HDA2CODEC_2X, TEGRA30_CLK_PLL_P, 48000000, 0 }, 1259 1261 /* must be the last entry */ 1260 1262 { TEGRA30_CLK_CLK_MAX, TEGRA30_CLK_CLK_MAX, 0, 0 }, 1261 1263 };
+3
drivers/dma-buf/heaps/cma_heap.c
··· 251 251 buffer->vaddr = NULL; 252 252 } 253 253 254 + /* free page list */ 255 + kfree(buffer->pages); 256 + /* release memory */ 254 257 cma_release(cma_heap->cma, buffer->cma_pages, buffer->pagecount); 255 258 kfree(buffer); 256 259 }
+36 -17
drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c
··· 112 112 union igp_info { 113 113 struct atom_integrated_system_info_v1_11 v11; 114 114 struct atom_integrated_system_info_v1_12 v12; 115 + struct atom_integrated_system_info_v2_1 v21; 115 116 }; 116 117 117 118 union umc_info { ··· 210 209 if (adev->flags & AMD_IS_APU) { 211 210 igp_info = (union igp_info *) 212 211 (mode_info->atom_context->bios + data_offset); 213 - switch (crev) { 214 - case 11: 215 - mem_channel_number = igp_info->v11.umachannelnumber; 216 - /* channel width is 64 */ 217 - if (vram_width) 218 - *vram_width = mem_channel_number * 64; 219 - mem_type = igp_info->v11.memorytype; 220 - if (vram_type) 221 - *vram_type = convert_atom_mem_type_to_vram_type(adev, mem_type); 212 + switch (frev) { 213 + case 1: 214 + switch (crev) { 215 + case 11: 216 + case 12: 217 + mem_channel_number = igp_info->v11.umachannelnumber; 218 + if (!mem_channel_number) 219 + mem_channel_number = 1; 220 + /* channel width is 64 */ 221 + if (vram_width) 222 + *vram_width = mem_channel_number * 64; 223 + mem_type = igp_info->v11.memorytype; 224 + if (vram_type) 225 + *vram_type = convert_atom_mem_type_to_vram_type(adev, mem_type); 226 + break; 227 + default: 228 + return -EINVAL; 229 + } 222 230 break; 223 - case 12: 224 - mem_channel_number = igp_info->v12.umachannelnumber; 225 - /* channel width is 64 */ 226 - if (vram_width) 227 - *vram_width = mem_channel_number * 64; 228 - mem_type = igp_info->v12.memorytype; 229 - if (vram_type) 230 - *vram_type = convert_atom_mem_type_to_vram_type(adev, mem_type); 231 + case 2: 232 + switch (crev) { 233 + case 1: 234 + case 2: 235 + mem_channel_number = igp_info->v21.umachannelnumber; 236 + if (!mem_channel_number) 237 + mem_channel_number = 1; 238 + /* channel width is 64 */ 239 + if (vram_width) 240 + *vram_width = mem_channel_number * 64; 241 + mem_type = igp_info->v21.memorytype; 242 + if (vram_type) 243 + *vram_type = convert_atom_mem_type_to_vram_type(adev, mem_type); 244 + break; 245 + default: 246 + return -EINVAL; 247 + } 
231 248 break; 232 249 default: 233 250 return -EINVAL;
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
··· 3034 3034 #endif 3035 3035 default: 3036 3036 if (amdgpu_dc > 0) 3037 - DRM_INFO("Display Core has been requested via kernel parameter " 3037 + DRM_INFO_ONCE("Display Core has been requested via kernel parameter " 3038 3038 "but isn't supported by ASIC, ignoring\n"); 3039 3039 return false; 3040 3040 }
+2
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
··· 1085 1085 1086 1086 /* Renoir */ 1087 1087 {0x1002, 0x1636, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RENOIR|AMD_IS_APU}, 1088 + {0x1002, 0x1638, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RENOIR|AMD_IS_APU}, 1089 + {0x1002, 0x164C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RENOIR|AMD_IS_APU}, 1088 1090 1089 1091 /* Navi12 */ 1090 1092 {0x1002, 0x7360, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_NAVI12},
+46 -2
drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
··· 99 99 #define mmGCR_GENERAL_CNTL_Sienna_Cichlid 0x1580 100 100 #define mmGCR_GENERAL_CNTL_Sienna_Cichlid_BASE_IDX 0 101 101 102 + #define mmGOLDEN_TSC_COUNT_UPPER_Vangogh 0x0025 103 + #define mmGOLDEN_TSC_COUNT_UPPER_Vangogh_BASE_IDX 1 104 + #define mmGOLDEN_TSC_COUNT_LOWER_Vangogh 0x0026 105 + #define mmGOLDEN_TSC_COUNT_LOWER_Vangogh_BASE_IDX 1 102 106 #define mmSPI_CONFIG_CNTL_1_Vangogh 0x2441 103 107 #define mmSPI_CONFIG_CNTL_1_Vangogh_BASE_IDX 1 104 108 #define mmVGT_TF_MEMORY_BASE_HI_Vangogh 0x2261 ··· 163 159 #define mmGCUTCL2_CGTT_CLK_CTRL_Sienna_Cichlid_BASE_IDX 0 164 160 #define mmGCVM_L2_CGTT_CLK_CTRL_Sienna_Cichlid 0x15db 165 161 #define mmGCVM_L2_CGTT_CLK_CTRL_Sienna_Cichlid_BASE_IDX 0 162 + 163 + #define mmGC_THROTTLE_CTRL_Sienna_Cichlid 0x2030 164 + #define mmGC_THROTTLE_CTRL_Sienna_Cichlid_BASE_IDX 0 166 165 167 166 MODULE_FIRMWARE("amdgpu/navi10_ce.bin"); 168 167 MODULE_FIRMWARE("amdgpu/navi10_pfp.bin"); ··· 3331 3324 static void gfx_v10_0_ring_emit_frame_cntl(struct amdgpu_ring *ring, bool start, bool secure); 3332 3325 static u32 gfx_v10_3_get_disabled_sa(struct amdgpu_device *adev); 3333 3326 static void gfx_v10_3_program_pbb_mode(struct amdgpu_device *adev); 3327 + static void gfx_v10_3_set_power_brake_sequence(struct amdgpu_device *adev); 3334 3328 3335 3329 static void gfx10_kiq_set_resources(struct amdgpu_ring *kiq_ring, uint64_t queue_mask) 3336 3330 { ··· 7200 7192 if (adev->asic_type == CHIP_SIENNA_CICHLID) 7201 7193 gfx_v10_3_program_pbb_mode(adev); 7202 7194 7195 + if (adev->asic_type >= CHIP_SIENNA_CICHLID) 7196 + gfx_v10_3_set_power_brake_sequence(adev); 7197 + 7203 7198 return r; 7204 7199 } 7205 7200 ··· 7388 7377 7389 7378 amdgpu_gfx_off_ctrl(adev, false); 7390 7379 mutex_lock(&adev->gfx.gpu_clock_mutex); 7391 - clock = (uint64_t)RREG32_SOC15(SMUIO, 0, mmGOLDEN_TSC_COUNT_LOWER) | 7392 - ((uint64_t)RREG32_SOC15(SMUIO, 0, mmGOLDEN_TSC_COUNT_UPPER) << 32ULL); 7380 + switch (adev->asic_type) { 7381 + case CHIP_VANGOGH: 7382 + clock = (uint64_t)RREG32_SOC15(SMUIO, 0, mmGOLDEN_TSC_COUNT_LOWER_Vangogh) | 7383 + ((uint64_t)RREG32_SOC15(SMUIO, 0, mmGOLDEN_TSC_COUNT_UPPER_Vangogh) << 32ULL); 7384 + break; 7385 + default: 7386 + clock = (uint64_t)RREG32_SOC15(SMUIO, 0, mmGOLDEN_TSC_COUNT_LOWER) | 7387 + ((uint64_t)RREG32_SOC15(SMUIO, 0, mmGOLDEN_TSC_COUNT_UPPER) << 32ULL); 7388 + break; 7389 + } 7393 7390 mutex_unlock(&adev->gfx.gpu_clock_mutex); 7394 7391 amdgpu_gfx_off_ctrl(adev, true); 7395 7392 return clock; ··· 9186 9167 break; 9187 9168 } 9188 9169 } 9170 + } 9171 + 9172 + static void gfx_v10_3_set_power_brake_sequence(struct amdgpu_device *adev) 9173 + { 9174 + WREG32_SOC15(GC, 0, mmGRBM_GFX_INDEX, 9175 + (0x1 << GRBM_GFX_INDEX__SA_BROADCAST_WRITES__SHIFT) | 9176 + (0x1 << GRBM_GFX_INDEX__INSTANCE_BROADCAST_WRITES__SHIFT) | 9177 + (0x1 << GRBM_GFX_INDEX__SE_BROADCAST_WRITES__SHIFT)); 9178 + 9179 + WREG32_SOC15(GC, 0, mmGC_CAC_IND_INDEX, ixPWRBRK_STALL_PATTERN_CTRL); 9180 + WREG32_SOC15(GC, 0, mmGC_CAC_IND_DATA, 9181 + (0x1 << PWRBRK_STALL_PATTERN_CTRL__PWRBRK_STEP_INTERVAL__SHIFT) | 9182 + (0x12 << PWRBRK_STALL_PATTERN_CTRL__PWRBRK_BEGIN_STEP__SHIFT) | 9183 + (0x13 << PWRBRK_STALL_PATTERN_CTRL__PWRBRK_END_STEP__SHIFT) | 9184 + (0xf << PWRBRK_STALL_PATTERN_CTRL__PWRBRK_THROTTLE_PATTERN_BIT_NUMS__SHIFT)); 9185 + 9186 + WREG32_SOC15(GC, 0, mmGC_THROTTLE_CTRL_Sienna_Cichlid, 9187 + (0x1 << GC_THROTTLE_CTRL__PWRBRK_STALL_EN__SHIFT) | 9188 + (0x1 << GC_THROTTLE_CTRL__PATTERN_MODE__SHIFT) | 9189 + (0x5 << GC_THROTTLE_CTRL__RELEASE_STEP_INTERVAL__SHIFT)); 9190 + 9191 + WREG32_SOC15(GC, 0, mmDIDT_IND_INDEX, ixDIDT_SQ_THROTTLE_CTRL); 9192 + 9193 + WREG32_SOC15(GC, 0, mmDIDT_IND_DATA, 9194 + (0x1 << DIDT_SQ_THROTTLE_CTRL__PWRBRK_STALL_EN__SHIFT)); 9189 9195 } 9190 9196 9191 9197 const struct amdgpu_ip_block_version gfx_v10_0_ip_block =

+1 -1
drivers/gpu/drm/amd/amdgpu/psp_gfx_if.h
··· 47 47 GFX_CTRL_CMD_ID_DISABLE_INT = 0x00060000, /* disable PSP-to-Gfx interrupt */ 48 48 GFX_CTRL_CMD_ID_MODE1_RST = 0x00070000, /* trigger the Mode 1 reset */ 49 49 GFX_CTRL_CMD_ID_GBR_IH_SET = 0x00080000, /* set Gbr IH_RB_CNTL registers */ 50 - GFX_CTRL_CMD_ID_CONSUME_CMD = 0x000A0000, /* send interrupt to psp for updating write pointer of vf */ 50 + GFX_CTRL_CMD_ID_CONSUME_CMD = 0x00090000, /* send interrupt to psp for updating write pointer of vf */ 51 51 GFX_CTRL_CMD_ID_DESTROY_GPCOM_RING = 0x000C0000, /* destroy GPCOM ring */ 52 52 53 53 GFX_CTRL_CMD_ID_MAX = 0x000F0000, /* max command ID */
+2 -1
drivers/gpu/drm/amd/amdgpu/soc15.c
··· 1239 1239 break; 1240 1240 case CHIP_RENOIR: 1241 1241 adev->asic_funcs = &soc15_asic_funcs; 1242 - if (adev->pdev->device == 0x1636) 1242 + if ((adev->pdev->device == 0x1636) || 1243 + (adev->pdev->device == 0x164c)) 1243 1244 adev->apu_flags |= AMD_APU_IS_RENOIR; 1244 1245 else 1245 1246 adev->apu_flags |= AMD_APU_IS_GREEN_SARDINE;
+7 -4
drivers/gpu/drm/amd/amdkfd/kfd_crat.c
··· 1040 1040 (struct crat_subtype_iolink *)sub_type_hdr); 1041 1041 if (ret < 0) 1042 1042 return ret; 1043 - crat_table->length += (sub_type_hdr->length * entries); 1044 - crat_table->total_entries += entries; 1045 1043 1046 - sub_type_hdr = (typeof(sub_type_hdr))((char *)sub_type_hdr + 1047 - sub_type_hdr->length * entries); 1044 + if (entries) { 1045 + crat_table->length += (sub_type_hdr->length * entries); 1046 + crat_table->total_entries += entries; 1047 + 1048 + sub_type_hdr = (typeof(sub_type_hdr))((char *)sub_type_hdr + 1049 + sub_type_hdr->length * entries); 1050 + } 1048 1051 #else 1049 1052 pr_info("IO link not available for non x86 platforms\n"); 1050 1053 #endif
+9 -133
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 939 939 } 940 940 #endif 941 941 942 - #ifdef CONFIG_DEBUG_FS 943 - static int create_crtc_crc_properties(struct amdgpu_display_manager *dm) 944 - { 945 - dm->crc_win_x_start_property = 946 - drm_property_create_range(adev_to_drm(dm->adev), 947 - DRM_MODE_PROP_ATOMIC, 948 - "AMD_CRC_WIN_X_START", 0, U16_MAX); 949 - if (!dm->crc_win_x_start_property) 950 - return -ENOMEM; 951 - 952 - dm->crc_win_y_start_property = 953 - drm_property_create_range(adev_to_drm(dm->adev), 954 - DRM_MODE_PROP_ATOMIC, 955 - "AMD_CRC_WIN_Y_START", 0, U16_MAX); 956 - if (!dm->crc_win_y_start_property) 957 - return -ENOMEM; 958 - 959 - dm->crc_win_x_end_property = 960 - drm_property_create_range(adev_to_drm(dm->adev), 961 - DRM_MODE_PROP_ATOMIC, 962 - "AMD_CRC_WIN_X_END", 0, U16_MAX); 963 - if (!dm->crc_win_x_end_property) 964 - return -ENOMEM; 965 - 966 - dm->crc_win_y_end_property = 967 - drm_property_create_range(adev_to_drm(dm->adev), 968 - DRM_MODE_PROP_ATOMIC, 969 - "AMD_CRC_WIN_Y_END", 0, U16_MAX); 970 - if (!dm->crc_win_y_end_property) 971 - return -ENOMEM; 972 - 973 - return 0; 974 - } 975 - #endif 976 - 977 942 static int amdgpu_dm_init(struct amdgpu_device *adev) 978 943 { 979 944 struct dc_init_data init_data; ··· 1085 1120 1086 1121 dc_init_callbacks(adev->dm.dc, &init_params); 1087 1122 } 1088 - #endif 1089 - #ifdef CONFIG_DEBUG_FS 1090 - if (create_crtc_crc_properties(&adev->dm)) 1091 - DRM_ERROR("amdgpu: failed to create crc property.\n"); 1092 1123 #endif 1093 1124 if (amdgpu_dm_initialize_drm_device(adev)) { 1094 1125 DRM_ERROR( ··· 5294 5333 state->crc_src = cur->crc_src; 5295 5334 state->cm_has_degamma = cur->cm_has_degamma; 5296 5335 state->cm_is_degamma_srgb = cur->cm_is_degamma_srgb; 5297 - #ifdef CONFIG_DEBUG_FS 5298 - state->crc_window = cur->crc_window; 5299 - #endif 5336 + 5300 5337 /* TODO Duplicate dc_stream after objects are stream object is flattened */ 5301 5338 5302 5339 return &state->base; 5303 5340 } 5304 - 5305 - #ifdef CONFIG_DEBUG_FS 5306 - static int amdgpu_dm_crtc_atomic_set_property(struct drm_crtc *crtc, 5307 - struct drm_crtc_state *crtc_state, 5308 - struct drm_property *property, 5309 - uint64_t val) 5310 - { 5311 - struct drm_device *dev = crtc->dev; 5312 - struct amdgpu_device *adev = drm_to_adev(dev); 5313 - struct dm_crtc_state *dm_new_state = 5314 - to_dm_crtc_state(crtc_state); 5315 - 5316 - if (property == adev->dm.crc_win_x_start_property) 5317 - dm_new_state->crc_window.x_start = val; 5318 - else if (property == adev->dm.crc_win_y_start_property) 5319 - dm_new_state->crc_window.y_start = val; 5320 - else if (property == adev->dm.crc_win_x_end_property) 5321 - dm_new_state->crc_window.x_end = val; 5322 - else if (property == adev->dm.crc_win_y_end_property) 5323 - dm_new_state->crc_window.y_end = val; 5324 - else 5325 - return -EINVAL; 5326 - 5327 - return 0; 5328 - } 5329 - 5330 - static int amdgpu_dm_crtc_atomic_get_property(struct drm_crtc *crtc, 5331 - const struct drm_crtc_state *state, 5332 - struct drm_property *property, 5333 - uint64_t *val) 5334 - { 5335 - struct drm_device *dev = crtc->dev; 5336 - struct amdgpu_device *adev = drm_to_adev(dev); 5337 - struct dm_crtc_state *dm_state = 5338 - to_dm_crtc_state(state); 5339 - 5340 - if (property == adev->dm.crc_win_x_start_property) 5341 - *val = dm_state->crc_window.x_start; 5342 - else if (property == adev->dm.crc_win_y_start_property) 5343 - *val = dm_state->crc_window.y_start; 5344 - else if (property == adev->dm.crc_win_x_end_property) 5345 - *val = dm_state->crc_window.x_end; 5346 - else if (property == adev->dm.crc_win_y_end_property) 5347 - *val = dm_state->crc_window.y_end; 5348 - else 5349 - return -EINVAL; 5350 - 5351 - return 0; 5352 - } 5353 - #endif 5354 5341 5355 5342 static inline int dm_set_vupdate_irq(struct drm_crtc *crtc, bool enable) 5356 5343 { ··· 5366 5457 .enable_vblank = dm_enable_vblank, 5367 5458 .disable_vblank = dm_disable_vblank, 5368 5459 .get_vblank_timestamp = drm_crtc_vblank_helper_get_vblank_timestamp,
5369 - #ifdef CONFIG_DEBUG_FS 5370 - .atomic_set_property = amdgpu_dm_crtc_atomic_set_property, 5371 - .atomic_get_property = amdgpu_dm_crtc_atomic_get_property, 5372 - #endif 5373 5460 }; 5374 5461 5375 5462 static enum drm_connector_status ··· 6567 6662 return 0; 6568 6663 } 6569 6664 6570 - #ifdef CONFIG_DEBUG_FS 6571 - static void attach_crtc_crc_properties(struct amdgpu_display_manager *dm, 6572 - struct amdgpu_crtc *acrtc) 6573 - { 6574 - drm_object_attach_property(&acrtc->base.base, 6575 - dm->crc_win_x_start_property, 6576 - 0); 6577 - drm_object_attach_property(&acrtc->base.base, 6578 - dm->crc_win_y_start_property, 6579 - 0); 6580 - drm_object_attach_property(&acrtc->base.base, 6581 - dm->crc_win_x_end_property, 6582 - 0); 6583 - drm_object_attach_property(&acrtc->base.base, 6584 - dm->crc_win_y_end_property, 6585 - 0); 6586 - } 6587 - #endif 6588 - 6589 6665 static int amdgpu_dm_crtc_init(struct amdgpu_display_manager *dm, 6590 6666 struct drm_plane *plane, 6591 6667 uint32_t crtc_index) ··· 6614 6728 drm_crtc_enable_color_mgmt(&acrtc->base, MAX_COLOR_LUT_ENTRIES, 6615 6729 true, MAX_COLOR_LUT_ENTRIES); 6616 6730 drm_mode_crtc_set_gamma_size(&acrtc->base, MAX_COLOR_LEGACY_LUT_ENTRIES); 6617 - #ifdef CONFIG_DEBUG_FS 6618 - attach_crtc_crc_properties(dm, acrtc); 6619 - #endif 6731 + 6620 6732 return 0; 6621 6733 6622 6734 fail: ··· 8251 8367 */ 8252 8368 for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) { 8253 8369 struct amdgpu_crtc *acrtc = to_amdgpu_crtc(crtc); 8254 - bool configure_crc = false; 8255 8370 8256 8371 dm_new_crtc_state = to_dm_crtc_state(new_crtc_state); 8257 8372 ··· 8260 8377 dc_stream_retain(dm_new_crtc_state->stream); 8261 8378 acrtc->dm_irq_params.stream = dm_new_crtc_state->stream; 8262 8379 manage_dm_interrupts(adev, acrtc, true); 8263 - } 8264 - if (IS_ENABLED(CONFIG_DEBUG_FS) && new_crtc_state->active && 8265 - amdgpu_dm_is_valid_crc_source(dm_new_crtc_state->crc_src)) { 8380 + 8381 + #ifdef CONFIG_DEBUG_FS 8266 8382 /** 8267 8383 * Frontend may have changed so reapply the CRC capture 8268 8384 * settings for the stream. 8269 8385 */ 8270 8386 dm_new_crtc_state = to_dm_crtc_state(new_crtc_state); 8271 - dm_old_crtc_state = to_dm_crtc_state(old_crtc_state); 8272 8387 8273 - if (amdgpu_dm_crc_window_is_default(dm_new_crtc_state)) { 8274 - if (!old_crtc_state->active || drm_atomic_crtc_needs_modeset(new_crtc_state)) 8275 - configure_crc = true; 8276 - } else { 8277 - if (amdgpu_dm_crc_window_changed(dm_new_crtc_state, dm_old_crtc_state)) 8278 - configure_crc = true; 8279 - } 8280 - 8281 - if (configure_crc) 8388 + if (amdgpu_dm_is_valid_crc_source(dm_new_crtc_state->crc_src)) { 8282 8389 amdgpu_dm_crtc_configure_crc_source( 8283 - crtc, dm_new_crtc_state, dm_new_crtc_state->crc_src); 8390 + crtc, dm_new_crtc_state, 8391 + dm_new_crtc_state->crc_src); 8392 + } 8393 + #endif 8284 8394 } 8285 8395 } 8286 8396
-38
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
··· 336 336 */ 337 337 const struct gpu_info_soc_bounding_box_v1_0 *soc_bounding_box; 338 338 339 - #ifdef CONFIG_DEBUG_FS 340 - /** 341 - * @crc_win_x_start_property: 342 - * 343 - * X start of the crc calculation window 344 - */ 345 - struct drm_property *crc_win_x_start_property; 346 - /** 347 - * @crc_win_y_start_property: 348 - * 349 - * Y start of the crc calculation window 350 - */ 351 - struct drm_property *crc_win_y_start_property; 352 - /** 353 - * @crc_win_x_end_property: 354 - * 355 - * X end of the crc calculation window 356 - */ 357 - struct drm_property *crc_win_x_end_property; 358 - /** 359 - * @crc_win_y_end_property: 360 - * 361 - * Y end of the crc calculation window 362 - */ 363 - struct drm_property *crc_win_y_end_property; 364 - #endif 365 339 /** 366 340 * @mst_encoders: 367 341 * ··· 422 448 struct dc_plane_state *dc_state; 423 449 }; 424 450 425 - #ifdef CONFIG_DEBUG_FS 426 - struct crc_rec { 427 - uint16_t x_start; 428 - uint16_t y_start; 429 - uint16_t x_end; 430 - uint16_t y_end; 431 - }; 432 - #endif 433 - 434 451 struct dm_crtc_state { 435 452 struct drm_crtc_state base; 436 453 struct dc_stream_state *stream; ··· 444 479 struct dc_info_packet vrr_infopacket; 445 480 446 481 int abm_level; 447 - #ifdef CONFIG_DEBUG_FS 448 - struct crc_rec crc_window; 449 - #endif 450 482 }; 451 483 452 484 #define to_dm_crtc_state(x) container_of(x, struct dm_crtc_state, base)
+1 -53
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c
··· 81 81 return pipe_crc_sources; 82 82 } 83 83 84 - static void amdgpu_dm_set_crc_window_default(struct dm_crtc_state *dm_crtc_state) 85 - { 86 - dm_crtc_state->crc_window.x_start = 0; 87 - dm_crtc_state->crc_window.y_start = 0; 88 - dm_crtc_state->crc_window.x_end = 0; 89 - dm_crtc_state->crc_window.y_end = 0; 90 - } 91 - 92 - bool amdgpu_dm_crc_window_is_default(struct dm_crtc_state *dm_crtc_state) 93 - { 94 - bool ret = true; 95 - 96 - if ((dm_crtc_state->crc_window.x_start != 0) || 97 - (dm_crtc_state->crc_window.y_start != 0) || 98 - (dm_crtc_state->crc_window.x_end != 0) || 99 - (dm_crtc_state->crc_window.y_end != 0)) 100 - ret = false; 101 - 102 - return ret; 103 - } 104 - 105 - bool amdgpu_dm_crc_window_changed(struct dm_crtc_state *dm_new_crtc_state, 106 - struct dm_crtc_state *dm_old_crtc_state) 107 - { 108 - bool ret = false; 109 - 110 - if ((dm_new_crtc_state->crc_window.x_start != dm_old_crtc_state->crc_window.x_start) || 111 - (dm_new_crtc_state->crc_window.y_start != dm_old_crtc_state->crc_window.y_start) || 112 - (dm_new_crtc_state->crc_window.x_end != dm_old_crtc_state->crc_window.x_end) || 113 - (dm_new_crtc_state->crc_window.y_end != dm_old_crtc_state->crc_window.y_end)) 114 - ret = true; 115 - 116 - return ret; 117 - } 118 - 119 84 int 120 85 amdgpu_dm_crtc_verify_crc_source(struct drm_crtc *crtc, const char *src_name, 121 86 size_t *values_cnt) ··· 105 140 struct dc_stream_state *stream_state = dm_crtc_state->stream; 106 141 bool enable = amdgpu_dm_is_valid_crc_source(source); 107 142 int ret = 0; 108 - struct crc_params *crc_window = NULL, tmp_window; 109 143 110 144 /* Configuration will be deferred to stream enable. */ 111 145 if (!stream_state) ··· 114 150 115 151 /* Enable CRTC CRC generation if necessary. */ 116 152 if (dm_is_crc_source_crtc(source) || source == AMDGPU_DM_PIPE_CRC_SOURCE_NONE) { 117 - if (!enable) 118 - amdgpu_dm_set_crc_window_default(dm_crtc_state); 119 - 120 - if (!amdgpu_dm_crc_window_is_default(dm_crtc_state)) { 121 - crc_window = &tmp_window; 122 - 123 - tmp_window.windowa_x_start = dm_crtc_state->crc_window.x_start; 124 - tmp_window.windowa_y_start = dm_crtc_state->crc_window.y_start; 125 - tmp_window.windowa_x_end = dm_crtc_state->crc_window.x_end; 126 - tmp_window.windowa_y_end = dm_crtc_state->crc_window.y_end; 127 - tmp_window.windowb_x_start = dm_crtc_state->crc_window.x_start; 128 - tmp_window.windowb_y_start = dm_crtc_state->crc_window.y_start; 129 - tmp_window.windowb_x_end = dm_crtc_state->crc_window.x_end; 130 - tmp_window.windowb_y_end = dm_crtc_state->crc_window.y_end; 131 - } 132 - 133 153 if (!dc_stream_configure_crc(stream_state->ctx->dc, 134 - stream_state, crc_window, enable, enable)) { 154 + stream_state, NULL, enable, enable)) { 135 155 ret = -EINVAL; 136 156 goto unlock; 137 157 }
+1 -4
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.h
··· 46 46 } 47 47 48 48 /* amdgpu_dm_crc.c */ 49 - bool amdgpu_dm_crc_window_is_default(struct dm_crtc_state *dm_crtc_state); 50 - bool amdgpu_dm_crc_window_changed(struct dm_crtc_state *dm_new_crtc_state, 51 - struct dm_crtc_state *dm_old_crtc_state); 49 + #ifdef CONFIG_DEBUG_FS 52 50 int amdgpu_dm_crtc_configure_crc_source(struct drm_crtc *crtc, 53 51 struct dm_crtc_state *dm_crtc_state, 54 52 enum amdgpu_dm_pipe_crc_source source); 55 - #ifdef CONFIG_DEBUG_FS 56 53 int amdgpu_dm_crtc_set_crc_source(struct drm_crtc *crtc, const char *src_name); 57 54 int amdgpu_dm_crtc_verify_crc_source(struct drm_crtc *crtc, 58 55 const char *src_name,
+1 -1
drivers/gpu/drm/amd/display/dc/dcn10/dcn10_mpc.c
··· 470 470 unsigned int mpc1_get_mpc_out_mux(struct mpc *mpc, int opp_id) 471 471 { 472 472 struct dcn10_mpc *mpc10 = TO_DCN10_MPC(mpc); 473 - uint32_t val = 0; 473 + uint32_t val = 0xf; 474 474 475 475 if (opp_id < MAX_OPP && REG(MUX[opp_id])) 476 476 REG_GET(MUX[opp_id], MPC_OUT_MUX, &val);
+2 -2
drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c
··· 608 608 .disable_pplib_clock_request = false, 609 609 .disable_pplib_wm_range = false, 610 610 .pplib_wm_report_mode = WM_REPORT_DEFAULT, 611 - .pipe_split_policy = MPC_SPLIT_DYNAMIC, 612 - .force_single_disp_pipe_split = true, 611 + .pipe_split_policy = MPC_SPLIT_AVOID, 612 + .force_single_disp_pipe_split = false, 613 613 .disable_dcc = DCC_ENABLE, 614 614 .voltage_align_fclk = true, 615 615 .disable_stereo_support = true,
+1
drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c
··· 1731 1731 .populate_dml_pipes = dcn30_populate_dml_pipes_from_context, 1732 1732 .acquire_idle_pipe_for_layer = dcn20_acquire_idle_pipe_for_layer, 1733 1733 .add_stream_to_ctx = dcn30_add_stream_to_ctx, 1734 + .add_dsc_to_stream_resource = dcn20_add_dsc_to_stream_resource, 1734 1735 .remove_stream_from_ctx = dcn20_remove_stream_from_ctx, 1735 1736 .populate_dml_writeback_from_context = dcn30_populate_dml_writeback_from_context, 1736 1737 .set_mcif_arb_params = dcn30_set_mcif_arb_params,
+6 -5
drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20v2.c
··· 2635 2635 } 2636 2636 2637 2637 if (mode_lib->vba.DRAMClockChangeSupportsVActive && 2638 - mode_lib->vba.MinActiveDRAMClockChangeMargin > 60 && 2639 - mode_lib->vba.PrefetchMode[mode_lib->vba.VoltageLevel][mode_lib->vba.maxMpcComb] == 0) { 2638 + mode_lib->vba.MinActiveDRAMClockChangeMargin > 60) { 2640 2639 mode_lib->vba.DRAMClockChangeWatermark += 25; 2641 2640 2642 2641 for (k = 0; k < mode_lib->vba.NumberOfActivePlanes; ++k) { 2643 - if (mode_lib->vba.DRAMClockChangeWatermark > 2644 - dml_max(mode_lib->vba.StutterEnterPlusExitWatermark, mode_lib->vba.UrgentWatermark)) 2645 - mode_lib->vba.MinTTUVBlank[k] += 25; 2642 + if (mode_lib->vba.PrefetchMode[mode_lib->vba.VoltageLevel][mode_lib->vba.maxMpcComb] == 0) { 2643 + if (mode_lib->vba.DRAMClockChangeWatermark > 2644 + dml_max(mode_lib->vba.StutterEnterPlusExitWatermark, mode_lib->vba.UrgentWatermark)) 2645 + mode_lib->vba.MinTTUVBlank[k] += 25; 2646 + } 2646 2647 } 2647 2648 2648 2649 mode_lib->vba.DRAMClockChangeSupport[0][0] = dm_dram_clock_change_vactive;
+8 -1
drivers/gpu/drm/drm_plane.c
··· 1163 1163 if (ret) 1164 1164 goto out; 1165 1165 1166 - if (old_fb->format != fb->format) { 1166 + /* 1167 + * Only check the FOURCC format code, excluding modifiers. This is 1168 + * enough for all legacy drivers. Atomic drivers have their own 1169 + * checks in their ->atomic_check implementation, which will 1170 + * return -EINVAL if any hw or driver constraint is violated due 1171 + * to modifier changes. 1172 + */ 1173 + if (old_fb->format->format != fb->format->format) { 1167 1174 DRM_DEBUG_KMS("Page flip is not allowed to change frame buffer format.\n"); 1168 1175 ret = -EINVAL; 1169 1176 goto out;
+1
drivers/gpu/drm/i915/Makefile
··· 38 38 i915_config.o \ 39 39 i915_irq.o \ 40 40 i915_getparam.o \ 41 + i915_mitigations.o \ 41 42 i915_params.o \ 42 43 i915_pci.o \ 43 44 i915_scatterlist.o \
-4
drivers/gpu/drm/i915/display/icl_dsi.c
··· 1616 1616 1617 1617 get_dsi_io_power_domains(i915, 1618 1618 enc_to_intel_dsi(encoder)); 1619 - 1620 - if (crtc_state->dsc.compression_enable) 1621 - intel_display_power_get(i915, 1622 - intel_dsc_power_domain(crtc_state)); 1623 1619 } 1624 1620 1625 1621 static bool gen11_dsi_get_hw_state(struct intel_encoder *encoder,
+5 -4
drivers/gpu/drm/i915/display/intel_panel.c
··· 1650 1650 val = pch_get_backlight(connector); 1651 1651 else 1652 1652 val = lpt_get_backlight(connector); 1653 - val = intel_panel_compute_brightness(connector, val); 1654 - panel->backlight.level = clamp(val, panel->backlight.min, 1655 - panel->backlight.max); 1656 1653 1657 1654 if (cpu_mode) { 1658 1655 drm_dbg_kms(&dev_priv->drm, 1659 1656 "CPU backlight register was enabled, switching to PCH override\n"); 1660 1657 1661 1658 /* Write converted CPU PWM value to PCH override register */ 1662 - lpt_set_backlight(connector->base.state, panel->backlight.level); 1659 + lpt_set_backlight(connector->base.state, val); 1663 1660 intel_de_write(dev_priv, BLC_PWM_PCH_CTL1, 1664 1661 pch_ctl1 | BLM_PCH_OVERRIDE_ENABLE); 1665 1662 1666 1663 intel_de_write(dev_priv, BLC_PWM_CPU_CTL2, 1667 1664 cpu_ctl2 & ~BLM_PWM_ENABLE); 1668 1665 } 1666 + 1667 + val = intel_panel_compute_brightness(connector, val); 1668 + panel->backlight.level = clamp(val, panel->backlight.min, 1669 + panel->backlight.max); 1669 1670 1670 1671 return 0; 1671 1672 }
+13 -3
drivers/gpu/drm/i915/display/vlv_dsi.c
··· 812 812 intel_dsi_prepare(encoder, pipe_config); 813 813 814 814 intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_POWER_ON); 815 - intel_dsi_msleep(intel_dsi, intel_dsi->panel_on_delay); 816 815 817 - /* Deassert reset */ 818 - intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_DEASSERT_RESET); 816 + /* 817 + * Give the panel time to power-on and then deassert its reset. 818 + * Depending on the VBT MIPI sequences version the deassert-seq 819 + * may contain the necessary delay, intel_dsi_msleep() will skip 820 + * the delay in that case. If there is no deassert-seq, then an 821 + * unconditional msleep is used to give the panel time to power-on. 822 + */ 823 + if (dev_priv->vbt.dsi.sequence[MIPI_SEQ_DEASSERT_RESET]) { 824 + intel_dsi_msleep(intel_dsi, intel_dsi->panel_on_delay); 825 + intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_DEASSERT_RESET); 826 + } else { 827 + msleep(intel_dsi->panel_on_delay); 828 + } 819 829 820 830 if (IS_GEMINILAKE(dev_priv)) { 821 831 glk_cold_boot = glk_dsi_enable_io(encoder);
+94 -63
drivers/gpu/drm/i915/gt/gen7_renderclear.c
··· 7 7 #include "i915_drv.h" 8 8 #include "intel_gpu_commands.h" 9 9 10 - #define MAX_URB_ENTRIES 64 11 - #define STATE_SIZE (4 * 1024) 12 10 #define GT3_INLINE_DATA_DELAYS 0x1E00 13 11 #define batch_advance(Y, CS) GEM_BUG_ON((Y)->end != (CS)) 14 12 ··· 32 34 }; 33 35 34 36 struct batch_vals { 35 - u32 max_primitives; 36 - u32 max_urb_entries; 37 - u32 cmd_size; 38 - u32 state_size; 37 + u32 max_threads; 39 38 u32 state_start; 40 - u32 batch_size; 39 + u32 surface_start; 41 40 u32 surface_height; 42 41 u32 surface_width; 43 - u32 scratch_size; 44 - u32 max_size; 42 + u32 size; 45 43 }; 44 + 45 + static inline int num_primitives(const struct batch_vals *bv) 46 + { 47 + /* 48 + * We need to saturate the GPU with work in order to dispatch 49 + * a shader on every HW thread, and clear the thread-local registers. 50 + * In short, we have to dispatch work faster than the shaders can 51 + * run in order to fill the EU and occupy each HW thread. 52 + */ 53 + return bv->max_threads; 54 + } 46 55 47 56 static void 48 57 batch_get_defaults(struct drm_i915_private *i915, struct batch_vals *bv) 49 58 { 50 59 if (IS_HASWELL(i915)) { 51 - bv->max_primitives = 280; 52 - bv->max_urb_entries = MAX_URB_ENTRIES; 60 + switch (INTEL_INFO(i915)->gt) { 61 + default: 62 + case 1: 63 + bv->max_threads = 70; 64 + break; 65 + case 2: 66 + bv->max_threads = 140; 67 + break; 68 + case 3: 69 + bv->max_threads = 280; 70 + break; 71 + } 53 72 bv->surface_height = 16 * 16; 54 73 bv->surface_width = 32 * 2 * 16; 55 74 } else { 56 - bv->max_primitives = 128; 57 - bv->max_urb_entries = MAX_URB_ENTRIES / 2; 75 + switch (INTEL_INFO(i915)->gt) { 76 + default: 77 + case 1: /* including vlv */ 78 + bv->max_threads = 36; 79 + break; 80 + case 2: 81 + bv->max_threads = 128; 82 + break; 83 + } 58 84 bv->surface_height = 16 * 8; 59 85 bv->surface_width = 32 * 16; 60 86 } 61 - bv->cmd_size = bv->max_primitives * 4096; 62 - bv->state_size = STATE_SIZE; 63 - bv->state_start = bv->cmd_size; 64 - bv->batch_size = 
bv->cmd_size + bv->state_size; 65 - bv->scratch_size = bv->surface_height * bv->surface_width; 66 - bv->max_size = bv->batch_size + bv->scratch_size; 87 + bv->state_start = round_up(SZ_1K + num_primitives(bv) * 64, SZ_4K); 88 + bv->surface_start = bv->state_start + SZ_4K; 89 + bv->size = bv->surface_start + bv->surface_height * bv->surface_width; 67 90 } 68 91 69 92 static void batch_init(struct batch_chunk *bc, ··· 174 155 gen7_fill_binding_table(struct batch_chunk *state, 175 156 const struct batch_vals *bv) 176 157 { 177 - u32 surface_start = gen7_fill_surface_state(state, bv->batch_size, bv); 158 + u32 surface_start = 159 + gen7_fill_surface_state(state, bv->surface_start, bv); 178 160 u32 *cs = batch_alloc_items(state, 32, 8); 179 161 u32 offset = batch_offset(state, cs); 180 162 ··· 234 214 gen7_emit_state_base_address(struct batch_chunk *batch, 235 215 u32 surface_state_base) 236 216 { 237 - u32 *cs = batch_alloc_items(batch, 0, 12); 217 + u32 *cs = batch_alloc_items(batch, 0, 10); 238 218 239 - *cs++ = STATE_BASE_ADDRESS | (12 - 2); 219 + *cs++ = STATE_BASE_ADDRESS | (10 - 2); 240 220 /* general */ 241 221 *cs++ = batch_addr(batch) | BASE_ADDRESS_MODIFY; 242 222 /* surface */ ··· 253 233 *cs++ = BASE_ADDRESS_MODIFY; 254 234 *cs++ = 0; 255 235 *cs++ = BASE_ADDRESS_MODIFY; 256 - *cs++ = 0; 257 - *cs++ = 0; 258 236 batch_advance(batch, cs); 259 237 } 260 238 ··· 262 244 u32 urb_size, u32 curbe_size, 263 245 u32 mode) 264 246 { 265 - u32 urb_entries = bv->max_urb_entries; 266 - u32 threads = bv->max_primitives - 1; 247 + u32 threads = bv->max_threads - 1; 267 248 u32 *cs = batch_alloc_items(batch, 32, 8); 268 249 269 250 *cs++ = MEDIA_VFE_STATE | (8 - 2); ··· 271 254 *cs++ = 0; 272 255 273 256 /* number of threads & urb entries for GPGPU vs Media Mode */ 274 - *cs++ = threads << 16 | urb_entries << 8 | mode << 2; 257 + *cs++ = threads << 16 | 1 << 8 | mode << 2; 275 258 276 259 *cs++ = 0; 277 260 ··· 310 293 { 311 294 unsigned int x_offset = (media_object_index 
% 16) * 64; 312 295 unsigned int y_offset = (media_object_index / 16) * 16; 313 - unsigned int inline_data_size; 314 - unsigned int media_batch_size; 315 - unsigned int i; 296 + unsigned int pkt = 6 + 3; 316 297 u32 *cs; 317 298 318 - inline_data_size = 112 * 8; 319 - media_batch_size = inline_data_size + 6; 299 + cs = batch_alloc_items(batch, 8, pkt); 320 300 321 - cs = batch_alloc_items(batch, 8, media_batch_size); 322 - 323 - *cs++ = MEDIA_OBJECT | (media_batch_size - 2); 301 + *cs++ = MEDIA_OBJECT | (pkt - 2); 324 302 325 303 /* interface descriptor offset */ 326 304 *cs++ = 0; ··· 329 317 *cs++ = 0; 330 318 331 319 /* inline */ 332 - *cs++ = (y_offset << 16) | (x_offset); 320 + *cs++ = y_offset << 16 | x_offset; 333 321 *cs++ = 0; 334 322 *cs++ = GT3_INLINE_DATA_DELAYS; 335 - for (i = 3; i < inline_data_size; i++) 336 - *cs++ = 0; 337 323 338 324 batch_advance(batch, cs); 339 325 } 340 326 341 327 static void gen7_emit_pipeline_flush(struct batch_chunk *batch) 342 328 { 343 - u32 *cs = batch_alloc_items(batch, 0, 5); 329 + u32 *cs = batch_alloc_items(batch, 0, 4); 344 330 345 - *cs++ = GFX_OP_PIPE_CONTROL(5); 346 - *cs++ = PIPE_CONTROL_STATE_CACHE_INVALIDATE | 347 - PIPE_CONTROL_GLOBAL_GTT_IVB; 331 + *cs++ = GFX_OP_PIPE_CONTROL(4); 332 + *cs++ = PIPE_CONTROL_RENDER_TARGET_CACHE_FLUSH | 333 + PIPE_CONTROL_DEPTH_CACHE_FLUSH | 334 + PIPE_CONTROL_DC_FLUSH_ENABLE | 335 + PIPE_CONTROL_CS_STALL; 348 336 *cs++ = 0; 349 337 *cs++ = 0; 338 + 339 + batch_advance(batch, cs); 340 + } 341 + 342 + static void gen7_emit_pipeline_invalidate(struct batch_chunk *batch) 343 + { 344 + u32 *cs = batch_alloc_items(batch, 0, 8); 345 + 346 + /* ivb: Stall before STATE_CACHE_INVALIDATE */ 347 + *cs++ = GFX_OP_PIPE_CONTROL(4); 348 + *cs++ = PIPE_CONTROL_STALL_AT_SCOREBOARD | 349 + PIPE_CONTROL_CS_STALL; 350 350 *cs++ = 0; 351 + *cs++ = 0; 352 + 353 + *cs++ = GFX_OP_PIPE_CONTROL(4); 354 + *cs++ = PIPE_CONTROL_STATE_CACHE_INVALIDATE; 355 + *cs++ = 0; 356 + *cs++ = 0; 357 + 351 358 
batch_advance(batch, cs); 352 359 } 353 360 ··· 375 344 const struct batch_vals *bv) 376 345 { 377 346 struct drm_i915_private *i915 = vma->vm->i915; 378 - unsigned int desc_count = 64; 379 - const u32 urb_size = 112; 347 + const unsigned int desc_count = 1; 348 + const unsigned int urb_size = 1; 380 349 struct batch_chunk cmds, state; 381 - u32 interface_descriptor; 350 + u32 descriptors; 382 351 unsigned int i; 383 352 384 - batch_init(&cmds, vma, start, 0, bv->cmd_size); 385 - batch_init(&state, vma, start, bv->state_start, bv->state_size); 353 + batch_init(&cmds, vma, start, 0, bv->state_start); 354 + batch_init(&state, vma, start, bv->state_start, SZ_4K); 386 355 387 - interface_descriptor = 388 - gen7_fill_interface_descriptor(&state, bv, 389 - IS_HASWELL(i915) ? 390 - &cb_kernel_hsw : 391 - &cb_kernel_ivb, 392 - desc_count); 393 - gen7_emit_pipeline_flush(&cmds); 356 + descriptors = gen7_fill_interface_descriptor(&state, bv, 357 + IS_HASWELL(i915) ? 358 + &cb_kernel_hsw : 359 + &cb_kernel_ivb, 360 + desc_count); 361 + 362 + gen7_emit_pipeline_invalidate(&cmds); 394 363 batch_add(&cmds, PIPELINE_SELECT | PIPELINE_SELECT_MEDIA); 395 364 batch_add(&cmds, MI_NOOP); 396 - gen7_emit_state_base_address(&cmds, interface_descriptor); 365 + gen7_emit_pipeline_invalidate(&cmds); 366 + 397 367 gen7_emit_pipeline_flush(&cmds); 368 + gen7_emit_state_base_address(&cmds, descriptors); 369 + gen7_emit_pipeline_invalidate(&cmds); 398 370 399 371 gen7_emit_vfe_state(&cmds, bv, urb_size - 1, 0, 0); 372 + gen7_emit_interface_descriptor_load(&cmds, descriptors, desc_count); 400 373 401 - gen7_emit_interface_descriptor_load(&cmds, 402 - interface_descriptor, 403 - desc_count); 404 - 405 - for (i = 0; i < bv->max_primitives; i++) 374 + for (i = 0; i < num_primitives(bv); i++) 406 375 gen7_emit_media_object(&cmds, i); 407 376 408 377 batch_add(&cmds, MI_BATCH_BUFFER_END); ··· 416 385 417 386 batch_get_defaults(engine->i915, &bv); 418 387 if (!vma) 419 - return bv.max_size; 388 + 
return bv.size; 420 389 421 - GEM_BUG_ON(vma->obj->base.size < bv.max_size); 390 + GEM_BUG_ON(vma->obj->base.size < bv.size); 422 391 423 392 batch = i915_gem_object_pin_map(vma->obj, I915_MAP_WC); 424 393 if (IS_ERR(batch)) 425 394 return PTR_ERR(batch); 426 395 427 - emit_batch(vma, memset(batch, 0, bv.max_size), &bv); 396 + emit_batch(vma, memset(batch, 0, bv.size), &bv); 428 397 429 398 i915_gem_object_flush_map(vma->obj); 430 399 __i915_gem_object_release_map(vma->obj);
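The reworked `batch_get_defaults()` above derives the batch layout directly from the per-GT thread count: commands up to a 4K-aligned `state_start`, one 4K page of state, then the clear surface. A standalone sketch of that arithmetic (with local stand-ins for the kernel's `round_up`/`SZ_*` helpers) makes the layout checkable:

```c
#include <assert.h>

#define SZ_1K 1024u
#define SZ_4K 4096u
/* Kernel-style round_up for power-of-two alignment. */
#define round_up(x, y) (((x) + (y) - 1) & ~((y) - 1))

struct batch_vals {
	unsigned int max_threads;
	unsigned int state_start;
	unsigned int surface_start;
	unsigned int surface_height;
	unsigned int surface_width;
	unsigned int size;
};

/* Mirror of the layout computed at the end of batch_get_defaults(). */
static void batch_layout(struct batch_vals *bv)
{
	/* num_primitives(bv) == bv->max_threads: one shader per HW thread */
	unsigned int num_primitives = bv->max_threads;

	bv->state_start = round_up(SZ_1K + num_primitives * 64, SZ_4K);
	bv->surface_start = bv->state_start + SZ_4K;
	bv->size = bv->surface_start + bv->surface_height * bv->surface_width;
}
```

For a Haswell GT3 (280 threads, 256x1024 surface) this works out to a 20K command block, a state page at 20K, and the scratch surface from 24K onward.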
+4 -2
drivers/gpu/drm/i915/gt/intel_ring_submission.c
··· 32 32 #include "gen6_ppgtt.h" 33 33 #include "gen7_renderclear.h" 34 34 #include "i915_drv.h" 35 + #include "i915_mitigations.h" 35 36 #include "intel_breadcrumbs.h" 36 37 #include "intel_context.h" 37 38 #include "intel_gt.h" ··· 887 886 GEM_BUG_ON(HAS_EXECLISTS(engine->i915)); 888 887 889 888 if (engine->wa_ctx.vma && ce != engine->kernel_context) { 890 - if (engine->wa_ctx.vma->private != ce) { 889 + if (engine->wa_ctx.vma->private != ce && 890 + i915_mitigate_clear_residuals()) { 891 891 ret = clear_residuals(rq); 892 892 if (ret) 893 893 return ret; ··· 1292 1290 1293 1291 GEM_BUG_ON(timeline->hwsp_ggtt != engine->status_page.vma); 1294 1292 1295 - if (IS_HASWELL(engine->i915) && engine->class == RENDER_CLASS) { 1293 + if (IS_GEN(engine->i915, 7) && engine->class == RENDER_CLASS) { 1296 1294 err = gen7_ctx_switch_bb_init(engine); 1297 1295 if (err) 1298 1296 goto err_ring_unpin;
+59 -26
drivers/gpu/drm/i915/gvt/display.c
··· 217 217 DDI_BUF_CTL_ENABLE); 218 218 vgpu_vreg_t(vgpu, DDI_BUF_CTL(port)) |= DDI_BUF_IS_IDLE; 219 219 } 220 + vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) &= 221 + ~(PORTA_HOTPLUG_ENABLE | PORTA_HOTPLUG_STATUS_MASK); 222 + vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) &= 223 + ~(PORTB_HOTPLUG_ENABLE | PORTB_HOTPLUG_STATUS_MASK); 224 + vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) &= 225 + ~(PORTC_HOTPLUG_ENABLE | PORTC_HOTPLUG_STATUS_MASK); 226 + /* No hpd_invert set in vgpu vbt, need to clear invert mask */ 227 + vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) &= ~BXT_DDI_HPD_INVERT_MASK; 228 + vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) &= ~BXT_DE_PORT_HOTPLUG_MASK; 220 229 221 230 vgpu_vreg_t(vgpu, BXT_P_CR_GT_DISP_PWRON) &= ~(BIT(0) | BIT(1)); 222 231 vgpu_vreg_t(vgpu, BXT_PORT_CL1CM_DW0(DPIO_PHY0)) &= ··· 282 273 vgpu_vreg_t(vgpu, TRANS_DDI_FUNC_CTL(TRANSCODER_EDP)) |= 283 274 (TRANS_DDI_BPC_8 | TRANS_DDI_MODE_SELECT_DP_SST | 284 275 TRANS_DDI_FUNC_ENABLE); 276 + vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) |= 277 + PORTA_HOTPLUG_ENABLE; 285 278 vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) |= 286 279 GEN8_DE_PORT_HOTPLUG(HPD_PORT_A); 287 280 } ··· 312 301 (TRANS_DDI_BPC_8 | TRANS_DDI_MODE_SELECT_DP_SST | 313 302 (PORT_B << TRANS_DDI_PORT_SHIFT) | 314 303 TRANS_DDI_FUNC_ENABLE); 304 + vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) |= 305 + PORTB_HOTPLUG_ENABLE; 315 306 vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) |= 316 307 GEN8_DE_PORT_HOTPLUG(HPD_PORT_B); 317 308 } ··· 342 329 (TRANS_DDI_BPC_8 | TRANS_DDI_MODE_SELECT_DP_SST | 343 330 (PORT_B << TRANS_DDI_PORT_SHIFT) | 344 331 TRANS_DDI_FUNC_ENABLE); 332 + vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) |= 333 + PORTC_HOTPLUG_ENABLE; 345 334 vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) |= 346 335 GEN8_DE_PORT_HOTPLUG(HPD_PORT_C); 347 336 } ··· 676 661 PORTD_HOTPLUG_STATUS_MASK; 677 662 intel_vgpu_trigger_virtual_event(vgpu, DP_D_HOTPLUG); 678 663 } else if (IS_BROXTON(i915)) { 679 - if (connected) { 680 - if (intel_vgpu_has_monitor_on_port(vgpu, PORT_A)) { 664 + if (intel_vgpu_has_monitor_on_port(vgpu, PORT_A)) 
{ 665 + if (connected) { 681 666 vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) |= 682 667 GEN8_DE_PORT_HOTPLUG(HPD_PORT_A); 683 - } 684 - if (intel_vgpu_has_monitor_on_port(vgpu, PORT_B)) { 685 - vgpu_vreg_t(vgpu, SFUSE_STRAP) |= 686 - SFUSE_STRAP_DDIB_DETECTED; 687 - vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) |= 688 - GEN8_DE_PORT_HOTPLUG(HPD_PORT_B); 689 - } 690 - if (intel_vgpu_has_monitor_on_port(vgpu, PORT_C)) { 691 - vgpu_vreg_t(vgpu, SFUSE_STRAP) |= 692 - SFUSE_STRAP_DDIC_DETECTED; 693 - vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) |= 694 - GEN8_DE_PORT_HOTPLUG(HPD_PORT_C); 695 - } 696 - } else { 697 - if (intel_vgpu_has_monitor_on_port(vgpu, PORT_A)) { 668 + } else { 698 669 vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) &= 699 670 ~GEN8_DE_PORT_HOTPLUG(HPD_PORT_A); 700 671 } 701 - if (intel_vgpu_has_monitor_on_port(vgpu, PORT_B)) { 702 - vgpu_vreg_t(vgpu, SFUSE_STRAP) &= 703 - ~SFUSE_STRAP_DDIB_DETECTED; 672 + vgpu_vreg_t(vgpu, GEN8_DE_PORT_IIR) |= 673 + GEN8_DE_PORT_HOTPLUG(HPD_PORT_A); 674 + vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) &= 675 + ~PORTA_HOTPLUG_STATUS_MASK; 676 + vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) |= 677 + PORTA_HOTPLUG_LONG_DETECT; 678 + intel_vgpu_trigger_virtual_event(vgpu, DP_A_HOTPLUG); 679 + } 680 + if (intel_vgpu_has_monitor_on_port(vgpu, PORT_B)) { 681 + if (connected) { 682 + vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) |= 683 + GEN8_DE_PORT_HOTPLUG(HPD_PORT_B); 684 + vgpu_vreg_t(vgpu, SFUSE_STRAP) |= 685 + SFUSE_STRAP_DDIB_DETECTED; 686 + } else { 704 687 vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) &= 705 688 ~GEN8_DE_PORT_HOTPLUG(HPD_PORT_B); 706 - } 707 - if (intel_vgpu_has_monitor_on_port(vgpu, PORT_C)) { 708 689 vgpu_vreg_t(vgpu, SFUSE_STRAP) &= 709 - ~SFUSE_STRAP_DDIC_DETECTED; 690 + ~SFUSE_STRAP_DDIB_DETECTED; 691 + } 692 + vgpu_vreg_t(vgpu, GEN8_DE_PORT_IIR) |= 693 + GEN8_DE_PORT_HOTPLUG(HPD_PORT_B); 694 + vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) &= 695 + ~PORTB_HOTPLUG_STATUS_MASK; 696 + vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) |= 697 + PORTB_HOTPLUG_LONG_DETECT; 698 + 
intel_vgpu_trigger_virtual_event(vgpu, DP_B_HOTPLUG); 699 + } 700 + if (intel_vgpu_has_monitor_on_port(vgpu, PORT_C)) { 701 + if (connected) { 702 + vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) |= 703 + GEN8_DE_PORT_HOTPLUG(HPD_PORT_C); 704 + vgpu_vreg_t(vgpu, SFUSE_STRAP) |= 705 + SFUSE_STRAP_DDIC_DETECTED; 706 + } else { 710 707 vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) &= 711 708 ~GEN8_DE_PORT_HOTPLUG(HPD_PORT_C); 709 + vgpu_vreg_t(vgpu, SFUSE_STRAP) &= 710 + ~SFUSE_STRAP_DDIC_DETECTED; 712 711 } 712 + vgpu_vreg_t(vgpu, GEN8_DE_PORT_IIR) |= 713 + GEN8_DE_PORT_HOTPLUG(HPD_PORT_C); 714 + vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) &= 715 + ~PORTC_HOTPLUG_STATUS_MASK; 716 + vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) |= 717 + PORTC_HOTPLUG_LONG_DETECT; 718 + intel_vgpu_trigger_virtual_event(vgpu, DP_C_HOTPLUG); 713 719 } 714 - vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) |= 715 - PORTB_HOTPLUG_STATUS_MASK; 716 - intel_vgpu_trigger_virtual_event(vgpu, DP_B_HOTPLUG); 717 720 } 718 721 } 719 722
+2 -3
drivers/gpu/drm/i915/gvt/vgpu.c
··· 437 437 if (ret) 438 438 goto out_clean_sched_policy; 439 439 440 - if (IS_BROADWELL(dev_priv)) 440 + if (IS_BROADWELL(dev_priv) || IS_BROXTON(dev_priv)) 441 441 ret = intel_gvt_hypervisor_set_edid(vgpu, PORT_B); 442 - /* FixMe: Re-enable APL/BXT once vfio_edid enabled */ 443 - else if (!IS_BROXTON(dev_priv)) 442 + else 444 443 ret = intel_gvt_hypervisor_set_edid(vgpu, PORT_D); 445 444 if (ret) 446 445 goto out_clean_sched_policy;
+4
drivers/gpu/drm/i915/i915_drv.c
··· 1047 1047 1048 1048 void i915_driver_shutdown(struct drm_i915_private *i915) 1049 1049 { 1050 + disable_rpm_wakeref_asserts(&i915->runtime_pm); 1051 + 1050 1052 i915_gem_suspend(i915); 1051 1053 1052 1054 drm_kms_helper_poll_disable(&i915->drm); ··· 1062 1060 1063 1061 intel_suspend_encoders(i915); 1064 1062 intel_shutdown_encoders(i915); 1063 + 1064 + enable_rpm_wakeref_asserts(&i915->runtime_pm); 1065 1065 } 1066 1066 1067 1067 static bool suspend_to_idle(struct drm_i915_private *dev_priv)
+146
drivers/gpu/drm/i915/i915_mitigations.c
··· 1 + // SPDX-License-Identifier: MIT 2 + /* 3 + * Copyright © 2021 Intel Corporation 4 + */ 5 + 6 + #include <linux/kernel.h> 7 + #include <linux/moduleparam.h> 8 + #include <linux/slab.h> 9 + #include <linux/string.h> 10 + 11 + #include "i915_drv.h" 12 + #include "i915_mitigations.h" 13 + 14 + static unsigned long mitigations __read_mostly = ~0UL; 15 + 16 + enum { 17 + CLEAR_RESIDUALS = 0, 18 + }; 19 + 20 + static const char * const names[] = { 21 + [CLEAR_RESIDUALS] = "residuals", 22 + }; 23 + 24 + bool i915_mitigate_clear_residuals(void) 25 + { 26 + return READ_ONCE(mitigations) & BIT(CLEAR_RESIDUALS); 27 + } 28 + 29 + static int mitigations_set(const char *val, const struct kernel_param *kp) 30 + { 31 + unsigned long new = ~0UL; 32 + char *str, *sep, *tok; 33 + bool first = true; 34 + int err = 0; 35 + 36 + BUILD_BUG_ON(ARRAY_SIZE(names) >= BITS_PER_TYPE(mitigations)); 37 + 38 + str = kstrdup(val, GFP_KERNEL); 39 + if (!str) 40 + return -ENOMEM; 41 + 42 + for (sep = str; (tok = strsep(&sep, ","));) { 43 + bool enable = true; 44 + int i; 45 + 46 + /* Be tolerant of leading/trailing whitespace */ 47 + tok = strim(tok); 48 + 49 + if (first) { 50 + first = false; 51 + 52 + if (!strcmp(tok, "auto")) 53 + continue; 54 + 55 + new = 0; 56 + if (!strcmp(tok, "off")) 57 + continue; 58 + } 59 + 60 + if (*tok == '!') { 61 + enable = !enable; 62 + tok++; 63 + } 64 + 65 + if (!strncmp(tok, "no", 2)) { 66 + enable = !enable; 67 + tok += 2; 68 + } 69 + 70 + if (*tok == '\0') 71 + continue; 72 + 73 + for (i = 0; i < ARRAY_SIZE(names); i++) { 74 + if (!strcmp(tok, names[i])) { 75 + if (enable) 76 + new |= BIT(i); 77 + else 78 + new &= ~BIT(i); 79 + break; 80 + } 81 + } 82 + if (i == ARRAY_SIZE(names)) { 83 + pr_err("Bad \"%s.mitigations=%s\", '%s' is unknown\n", 84 + DRIVER_NAME, val, tok); 85 + err = -EINVAL; 86 + break; 87 + } 88 + } 89 + kfree(str); 90 + if (err) 91 + return err; 92 + 93 + WRITE_ONCE(mitigations, new); 94 + return 0; 95 + } 96 + 97 + static int 
mitigations_get(char *buffer, const struct kernel_param *kp) 98 + { 99 + unsigned long local = READ_ONCE(mitigations); 100 + int count, i; 101 + bool enable; 102 + 103 + if (!local) 104 + return scnprintf(buffer, PAGE_SIZE, "%s\n", "off"); 105 + 106 + if (local & BIT(BITS_PER_LONG - 1)) { 107 + count = scnprintf(buffer, PAGE_SIZE, "%s,", "auto"); 108 + enable = false; 109 + } else { 110 + enable = true; 111 + count = 0; 112 + } 113 + 114 + for (i = 0; i < ARRAY_SIZE(names); i++) { 115 + if ((local & BIT(i)) != enable) 116 + continue; 117 + 118 + count += scnprintf(buffer + count, PAGE_SIZE - count, 119 + "%s%s,", enable ? "" : "!", names[i]); 120 + } 121 + 122 + buffer[count - 1] = '\n'; 123 + return count; 124 + } 125 + 126 + static const struct kernel_param_ops ops = { 127 + .set = mitigations_set, 128 + .get = mitigations_get, 129 + }; 130 + 131 + module_param_cb_unsafe(mitigations, &ops, NULL, 0600); 132 + MODULE_PARM_DESC(mitigations, 133 + "Selectively enable security mitigations for all Intel® GPUs in the system.\n" 134 + "\n" 135 + " auto -- enables all mitigations required for the platform [default]\n" 136 + " off -- disables all mitigations\n" 137 + "\n" 138 + "Individual mitigations can be enabled by passing a comma-separated string,\n" 139 + "e.g. mitigations=residuals to enable only clearing residuals or\n" 140 + "mitigations=auto,noresiduals to disable only the clear residual mitigation.\n" 141 + "Either '!' or 'no' may be used to switch from enabling the mitigation to\n" 142 + "disabling it.\n" 143 + "\n" 144 + "Active mitigations for Ivybridge, Baytrail, Haswell:\n" 145 + " residuals -- clear all thread-local registers between contexts" 146 + );
+13
drivers/gpu/drm/i915/i915_mitigations.h
··· 1 + /* SPDX-License-Identifier: MIT */ 2 + /* 3 + * Copyright © 2021 Intel Corporation 4 + */ 5 + 6 + #ifndef __I915_MITIGATIONS_H__ 7 + #define __I915_MITIGATIONS_H__ 8 + 9 + #include <linux/types.h> 10 + 11 + bool i915_mitigate_clear_residuals(void); 12 + 13 + #endif /* __I915_MITIGATIONS_H__ */
+1
drivers/gpu/drm/nouveau/dispnv50/Kbuild
··· 37 37 nouveau-y += dispnv50/wndw.o 38 38 nouveau-y += dispnv50/wndwc37e.o 39 39 nouveau-y += dispnv50/wndwc57e.o 40 + nouveau-y += dispnv50/wndwc67e.o 40 41 41 42 nouveau-y += dispnv50/base.o 42 43 nouveau-y += dispnv50/base507c.o
+1
drivers/gpu/drm/nouveau/dispnv50/core.c
··· 42 42 int version; 43 43 int (*new)(struct nouveau_drm *, s32, struct nv50_core **); 44 44 } cores[] = { 45 + { GA102_DISP_CORE_CHANNEL_DMA, 0, corec57d_new }, 45 46 { TU102_DISP_CORE_CHANNEL_DMA, 0, corec57d_new }, 46 47 { GV100_DISP_CORE_CHANNEL_DMA, 0, corec37d_new }, 47 48 { GP102_DISP_CORE_CHANNEL_DMA, 0, core917d_new },
+1
drivers/gpu/drm/nouveau/dispnv50/curs.c
··· 31 31 int version; 32 32 int (*new)(struct nouveau_drm *, int, s32, struct nv50_wndw **); 33 33 } curses[] = { 34 + { GA102_DISP_CURSOR, 0, cursc37a_new }, 34 35 { TU102_DISP_CURSOR, 0, cursc37a_new }, 35 36 { GV100_DISP_CURSOR, 0, cursc37a_new }, 36 37 { GK104_DISP_CURSOR, 0, curs907a_new },
+2 -2
drivers/gpu/drm/nouveau/dispnv50/disp.c
··· 222 222 223 223 int 224 224 nv50_dmac_create(struct nvif_device *device, struct nvif_object *disp, 225 - const s32 *oclass, u8 head, void *data, u32 size, u64 syncbuf, 225 + const s32 *oclass, u8 head, void *data, u32 size, s64 syncbuf, 226 226 struct nv50_dmac *dmac) 227 227 { 228 228 struct nouveau_cli *cli = (void *)device->object.client; ··· 271 271 if (ret) 272 272 return ret; 273 273 274 - if (!syncbuf) 274 + if (syncbuf < 0) 275 275 return 0; 276 276 277 277 ret = nvif_object_ctor(&dmac->base.user, "kmsSyncCtxDma", NV50_DISP_HANDLE_SYNCBUF,
+1 -1
drivers/gpu/drm/nouveau/dispnv50/disp.h
··· 95 95 96 96 int nv50_dmac_create(struct nvif_device *device, struct nvif_object *disp, 97 97 const s32 *oclass, u8 head, void *data, u32 size, 98 - u64 syncbuf, struct nv50_dmac *dmac); 98 + s64 syncbuf, struct nv50_dmac *dmac); 99 99 void nv50_dmac_destroy(struct nv50_dmac *); 100 100 101 101 /*
+1
drivers/gpu/drm/nouveau/dispnv50/wimm.c
··· 31 31 int version; 32 32 int (*init)(struct nouveau_drm *, s32, struct nv50_wndw *); 33 33 } wimms[] = { 34 + { GA102_DISP_WINDOW_IMM_CHANNEL_DMA, 0, wimmc37b_init }, 34 35 { TU102_DISP_WINDOW_IMM_CHANNEL_DMA, 0, wimmc37b_init }, 35 36 { GV100_DISP_WINDOW_IMM_CHANNEL_DMA, 0, wimmc37b_init }, 36 37 {}
+1 -1
drivers/gpu/drm/nouveau/dispnv50/wimmc37b.c
··· 76 76 int ret; 77 77 78 78 ret = nv50_dmac_create(&drm->client.device, &disp->disp->object, 79 - &oclass, 0, &args, sizeof(args), 0, 79 + &oclass, 0, &args, sizeof(args), -1, 80 80 &wndw->wimm); 81 81 if (ret) { 82 82 NV_ERROR(drm, "wimm%04x allocation failed: %d\n", oclass, ret);
+1
drivers/gpu/drm/nouveau/dispnv50/wndw.c
··· 784 784 int (*new)(struct nouveau_drm *, enum drm_plane_type, 785 785 int, s32, struct nv50_wndw **); 786 786 } wndws[] = { 787 + { GA102_DISP_WINDOW_CHANNEL_DMA, 0, wndwc67e_new }, 787 788 { TU102_DISP_WINDOW_CHANNEL_DMA, 0, wndwc57e_new }, 788 789 { GV100_DISP_WINDOW_CHANNEL_DMA, 0, wndwc37e_new }, 789 790 {}
+8
drivers/gpu/drm/nouveau/dispnv50/wndw.h
··· 129 129 130 130 int wndwc57e_new(struct nouveau_drm *, enum drm_plane_type, int, s32, 131 131 struct nv50_wndw **); 132 + bool wndwc57e_ilut(struct nv50_wndw *, struct nv50_wndw_atom *, int); 133 + int wndwc57e_ilut_set(struct nv50_wndw *, struct nv50_wndw_atom *); 134 + int wndwc57e_ilut_clr(struct nv50_wndw *); 135 + int wndwc57e_csc_set(struct nv50_wndw *, struct nv50_wndw_atom *); 136 + int wndwc57e_csc_clr(struct nv50_wndw *); 137 + 138 + int wndwc67e_new(struct nouveau_drm *, enum drm_plane_type, int, s32, 139 + struct nv50_wndw **); 132 140 133 141 int nv50_wndw_new(struct nouveau_drm *, enum drm_plane_type, int index, 134 142 struct nv50_wndw **);
+5 -5
drivers/gpu/drm/nouveau/dispnv50/wndwc57e.c
··· 80 80 return 0; 81 81 } 82 82 83 - static int 83 + int 84 84 wndwc57e_csc_clr(struct nv50_wndw *wndw) 85 85 { 86 86 struct nvif_push *push = wndw->wndw.push; ··· 98 98 return 0; 99 99 } 100 100 101 - static int 101 + int 102 102 wndwc57e_csc_set(struct nv50_wndw *wndw, struct nv50_wndw_atom *asyw) 103 103 { 104 104 struct nvif_push *push = wndw->wndw.push; ··· 111 111 return 0; 112 112 } 113 113 114 - static int 114 + int 115 115 wndwc57e_ilut_clr(struct nv50_wndw *wndw) 116 116 { 117 117 struct nvif_push *push = wndw->wndw.push; ··· 124 124 return 0; 125 125 } 126 126 127 - static int 127 + int 128 128 wndwc57e_ilut_set(struct nv50_wndw *wndw, struct nv50_wndw_atom *asyw) 129 129 { 130 130 struct nvif_push *push = wndw->wndw.push; ··· 179 179 writew(readw(mem - 4), mem + 4); 180 180 } 181 181 182 - static bool 182 + bool 183 183 wndwc57e_ilut(struct nv50_wndw *wndw, struct nv50_wndw_atom *asyw, int size) 184 184 { 185 185 if (size = size ? size : 1024, size != 256 && size != 1024)
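The now-exported `wndwc57e_ilut()` ends with a terse comma-operator check, `if (size = size ? size : 1024, size != 256 && size != 1024)`. A small standalone sketch of that idiom (the function name here is hypothetical) makes the two steps explicit: default a zero size to 1024, then reject anything other than the two supported LUT sizes.

```c
#include <assert.h>

/*
 * Sketch of the size check in wndwc57e_ilut(): the comma operator first
 * defaults a zero size to 1024, then the if() condition rejects anything
 * other than the two supported LUT sizes (256 or 1024).
 * Returns the normalized size, or 0 if unsupported.
 */
static int ilut_size_check(int size)
{
	if (size = size ? size : 1024, size != 256 && size != 1024)
		return 0;
	return size;
}
```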
+106
drivers/gpu/drm/nouveau/dispnv50/wndwc67e.c
··· 1 + /* 2 + * Copyright 2021 Red Hat Inc. 3 + * 4 + * Permission is hereby granted, free of charge, to any person obtaining a 5 + * copy of this software and associated documentation files (the "Software"), 6 + * to deal in the Software without restriction, including without limitation 7 + * the rights to use, copy, modify, merge, publish, distribute, sublicense, 8 + * and/or sell copies of the Software, and to permit persons to whom the 9 + * Software is furnished to do so, subject to the following conditions: 10 + * 11 + * The above copyright notice and this permission notice shall be included in 12 + * all copies or substantial portions of the Software. 13 + * 14 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 15 + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 16 + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL 17 + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR 18 + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, 19 + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 20 + * OTHER DEALINGS IN THE SOFTWARE. 
21 + */ 22 + #include "wndw.h" 23 + #include "atom.h" 24 + 25 + #include <nvif/pushc37b.h> 26 + 27 + #include <nvhw/class/clc57e.h> 28 + 29 + static int 30 + wndwc67e_image_set(struct nv50_wndw *wndw, struct nv50_wndw_atom *asyw) 31 + { 32 + struct nvif_push *push = wndw->wndw.push; 33 + int ret; 34 + 35 + if ((ret = PUSH_WAIT(push, 17))) 36 + return ret; 37 + 38 + PUSH_MTHD(push, NVC57E, SET_PRESENT_CONTROL, 39 + NVVAL(NVC57E, SET_PRESENT_CONTROL, MIN_PRESENT_INTERVAL, asyw->image.interval) | 40 + NVVAL(NVC57E, SET_PRESENT_CONTROL, BEGIN_MODE, asyw->image.mode) | 41 + NVDEF(NVC57E, SET_PRESENT_CONTROL, TIMESTAMP_MODE, DISABLE)); 42 + 43 + PUSH_MTHD(push, NVC57E, SET_SIZE, 44 + NVVAL(NVC57E, SET_SIZE, WIDTH, asyw->image.w) | 45 + NVVAL(NVC57E, SET_SIZE, HEIGHT, asyw->image.h), 46 + 47 + SET_STORAGE, 48 + NVVAL(NVC57E, SET_STORAGE, BLOCK_HEIGHT, asyw->image.blockh), 49 + 50 + SET_PARAMS, 51 + NVVAL(NVC57E, SET_PARAMS, FORMAT, asyw->image.format) | 52 + NVDEF(NVC57E, SET_PARAMS, CLAMP_BEFORE_BLEND, DISABLE) | 53 + NVDEF(NVC57E, SET_PARAMS, SWAP_UV, DISABLE) | 54 + NVDEF(NVC57E, SET_PARAMS, FMT_ROUNDING_MODE, ROUND_TO_NEAREST), 55 + 56 + SET_PLANAR_STORAGE(0), 57 + NVVAL(NVC57E, SET_PLANAR_STORAGE, PITCH, asyw->image.blocks[0]) | 58 + NVVAL(NVC57E, SET_PLANAR_STORAGE, PITCH, asyw->image.pitch[0] >> 6)); 59 + 60 + PUSH_MTHD(push, NVC57E, SET_CONTEXT_DMA_ISO(0), asyw->image.handle, 1); 61 + PUSH_MTHD(push, NVC57E, SET_OFFSET(0), asyw->image.offset[0] >> 8); 62 + 63 + PUSH_MTHD(push, NVC57E, SET_POINT_IN(0), 64 + NVVAL(NVC57E, SET_POINT_IN, X, asyw->state.src_x >> 16) | 65 + NVVAL(NVC57E, SET_POINT_IN, Y, asyw->state.src_y >> 16)); 66 + 67 + PUSH_MTHD(push, NVC57E, SET_SIZE_IN, 68 + NVVAL(NVC57E, SET_SIZE_IN, WIDTH, asyw->state.src_w >> 16) | 69 + NVVAL(NVC57E, SET_SIZE_IN, HEIGHT, asyw->state.src_h >> 16)); 70 + 71 + PUSH_MTHD(push, NVC57E, SET_SIZE_OUT, 72 + NVVAL(NVC57E, SET_SIZE_OUT, WIDTH, asyw->state.crtc_w) | 73 + NVVAL(NVC57E, SET_SIZE_OUT, HEIGHT, 
asyw->state.crtc_h)); 74 + return 0; 75 + } 76 + 77 + static const struct nv50_wndw_func 78 + wndwc67e = { 79 + .acquire = wndwc37e_acquire, 80 + .release = wndwc37e_release, 81 + .sema_set = wndwc37e_sema_set, 82 + .sema_clr = wndwc37e_sema_clr, 83 + .ntfy_set = wndwc37e_ntfy_set, 84 + .ntfy_clr = wndwc37e_ntfy_clr, 85 + .ntfy_reset = corec37d_ntfy_init, 86 + .ntfy_wait_begun = base507c_ntfy_wait_begun, 87 + .ilut = wndwc57e_ilut, 88 + .ilut_identity = true, 89 + .ilut_size = 1024, 90 + .xlut_set = wndwc57e_ilut_set, 91 + .xlut_clr = wndwc57e_ilut_clr, 92 + .csc = base907c_csc, 93 + .csc_set = wndwc57e_csc_set, 94 + .csc_clr = wndwc57e_csc_clr, 95 + .image_set = wndwc67e_image_set, 96 + .image_clr = wndwc37e_image_clr, 97 + .blend_set = wndwc37e_blend_set, 98 + .update = wndwc37e_update, 99 + }; 100 + 101 + int 102 + wndwc67e_new(struct nouveau_drm *drm, enum drm_plane_type type, int index, 103 + s32 oclass, struct nv50_wndw **pwndw) 104 + { 105 + return wndwc37e_new_(&wndwc67e, drm, type, index, oclass, BIT(index >> 1), pwndw); 106 + }
+1
drivers/gpu/drm/nouveau/include/nvif/cl0080.h
··· 33 33 #define NV_DEVICE_INFO_V0_PASCAL 0x0a 34 34 #define NV_DEVICE_INFO_V0_VOLTA 0x0b 35 35 #define NV_DEVICE_INFO_V0_TURING 0x0c 36 + #define NV_DEVICE_INFO_V0_AMPERE 0x0d 36 37 __u8 family; 37 38 __u8 pad06[2]; 38 39 __u64 ram_size;
+5
drivers/gpu/drm/nouveau/include/nvif/class.h
··· 88 88 #define GP102_DISP /* cl5070.h */ 0x00009870 89 89 #define GV100_DISP /* cl5070.h */ 0x0000c370 90 90 #define TU102_DISP /* cl5070.h */ 0x0000c570 91 + #define GA102_DISP /* cl5070.h */ 0x0000c670 91 92 92 93 #define GV100_DISP_CAPS 0x0000c373 93 94 ··· 104 103 #define GK104_DISP_CURSOR /* cl507a.h */ 0x0000917a 105 104 #define GV100_DISP_CURSOR /* cl507a.h */ 0x0000c37a 106 105 #define TU102_DISP_CURSOR /* cl507a.h */ 0x0000c57a 106 + #define GA102_DISP_CURSOR /* cl507a.h */ 0x0000c67a 107 107 108 108 #define NV50_DISP_OVERLAY /* cl507b.h */ 0x0000507b 109 109 #define G82_DISP_OVERLAY /* cl507b.h */ 0x0000827b ··· 114 112 115 113 #define GV100_DISP_WINDOW_IMM_CHANNEL_DMA /* clc37b.h */ 0x0000c37b 116 114 #define TU102_DISP_WINDOW_IMM_CHANNEL_DMA /* clc37b.h */ 0x0000c57b 115 + #define GA102_DISP_WINDOW_IMM_CHANNEL_DMA /* clc37b.h */ 0x0000c67b 117 116 118 117 #define NV50_DISP_BASE_CHANNEL_DMA /* cl507c.h */ 0x0000507c 119 118 #define G82_DISP_BASE_CHANNEL_DMA /* cl507c.h */ 0x0000827c ··· 138 135 #define GP102_DISP_CORE_CHANNEL_DMA /* cl507d.h */ 0x0000987d 139 136 #define GV100_DISP_CORE_CHANNEL_DMA /* cl507d.h */ 0x0000c37d 140 137 #define TU102_DISP_CORE_CHANNEL_DMA /* cl507d.h */ 0x0000c57d 138 + #define GA102_DISP_CORE_CHANNEL_DMA /* cl507d.h */ 0x0000c67d 141 139 142 140 #define NV50_DISP_OVERLAY_CHANNEL_DMA /* cl507e.h */ 0x0000507e 143 141 #define G82_DISP_OVERLAY_CHANNEL_DMA /* cl507e.h */ 0x0000827e ··· 149 145 150 146 #define GV100_DISP_WINDOW_CHANNEL_DMA /* clc37e.h */ 0x0000c37e 151 147 #define TU102_DISP_WINDOW_CHANNEL_DMA /* clc37e.h */ 0x0000c57e 148 + #define GA102_DISP_WINDOW_CHANNEL_DMA /* clc37e.h */ 0x0000c67e 152 149 153 150 #define NV50_TESLA 0x00005097 154 151 #define G82_TESLA 0x00008297
+1
drivers/gpu/drm/nouveau/include/nvkm/core/device.h
··· 120 120 GP100 = 0x130, 121 121 GV100 = 0x140, 122 122 TU100 = 0x160, 123 + GA100 = 0x170, 123 124 } card_type; 124 125 u32 chipset; 125 126 u8 chiprev;
+1
drivers/gpu/drm/nouveau/include/nvkm/engine/disp.h
··· 37 37 int gp102_disp_new(struct nvkm_device *, int, struct nvkm_disp **); 38 38 int gv100_disp_new(struct nvkm_device *, int, struct nvkm_disp **); 39 39 int tu102_disp_new(struct nvkm_device *, int, struct nvkm_disp **); 40 + int ga102_disp_new(struct nvkm_device *, int, struct nvkm_disp **); 40 41 #endif
+1
drivers/gpu/drm/nouveau/include/nvkm/subdev/devinit.h
··· 32 32 int gm200_devinit_new(struct nvkm_device *, int, struct nvkm_devinit **); 33 33 int gv100_devinit_new(struct nvkm_device *, int, struct nvkm_devinit **); 34 34 int tu102_devinit_new(struct nvkm_device *, int, struct nvkm_devinit **); 35 + int ga100_devinit_new(struct nvkm_device *, int, struct nvkm_devinit **); 35 36 #endif
+2
drivers/gpu/drm/nouveau/include/nvkm/subdev/fb.h
··· 86 86 int gp102_fb_new(struct nvkm_device *, int, struct nvkm_fb **); 87 87 int gp10b_fb_new(struct nvkm_device *, int, struct nvkm_fb **); 88 88 int gv100_fb_new(struct nvkm_device *, int, struct nvkm_fb **); 89 + int ga100_fb_new(struct nvkm_device *, int, struct nvkm_fb **); 90 + int ga102_fb_new(struct nvkm_device *, int, struct nvkm_fb **); 89 91 90 92 #include <subdev/bios.h> 91 93 #include <subdev/bios/ramcfg.h>
+1
drivers/gpu/drm/nouveau/include/nvkm/subdev/gpio.h
··· 37 37 int g94_gpio_new(struct nvkm_device *, int, struct nvkm_gpio **); 38 38 int gf119_gpio_new(struct nvkm_device *, int, struct nvkm_gpio **); 39 39 int gk104_gpio_new(struct nvkm_device *, int, struct nvkm_gpio **); 40 + int ga102_gpio_new(struct nvkm_device *, int, struct nvkm_gpio **); 40 41 #endif
+1
drivers/gpu/drm/nouveau/include/nvkm/subdev/i2c.h
··· 92 92 int gf117_i2c_new(struct nvkm_device *, int, struct nvkm_i2c **); 93 93 int gf119_i2c_new(struct nvkm_device *, int, struct nvkm_i2c **); 94 94 int gk104_i2c_new(struct nvkm_device *, int, struct nvkm_i2c **); 95 + int gk110_i2c_new(struct nvkm_device *, int, struct nvkm_i2c **); 95 96 int gm200_i2c_new(struct nvkm_device *, int, struct nvkm_i2c **); 96 97 97 98 static inline int
+1
drivers/gpu/drm/nouveau/include/nvkm/subdev/mc.h
··· 32 32 int gp100_mc_new(struct nvkm_device *, int, struct nvkm_mc **); 33 33 int gp10b_mc_new(struct nvkm_device *, int, struct nvkm_mc **); 34 34 int tu102_mc_new(struct nvkm_device *, int, struct nvkm_mc **); 35 + int ga100_mc_new(struct nvkm_device *, int, struct nvkm_mc **); 35 36 #endif
+1
drivers/gpu/drm/nouveau/nouveau_backlight.c
··· 256 256 case NV_DEVICE_INFO_V0_PASCAL: 257 257 case NV_DEVICE_INFO_V0_VOLTA: 258 258 case NV_DEVICE_INFO_V0_TURING: 259 + case NV_DEVICE_INFO_V0_AMPERE: //XXX: not confirmed 259 260 ret = nv50_backlight_init(nv_encoder, &props, &ops); 260 261 break; 261 262 default:
+1
drivers/gpu/drm/nouveau/nvif/disp.c
··· 35 35 struct nvif_disp *disp) 36 36 { 37 37 static const struct nvif_mclass disps[] = { 38 + { GA102_DISP, -1 }, 38 39 { TU102_DISP, -1 }, 39 40 { GV100_DISP, -1 }, 40 41 { GP102_DISP, -1 },
+78 -9
drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
··· 1815 1815 .fb = gk110_fb_new, 1816 1816 .fuse = gf100_fuse_new, 1817 1817 .gpio = gk104_gpio_new, 1818 - .i2c = gk104_i2c_new, 1818 + .i2c = gk110_i2c_new, 1819 1819 .ibus = gk104_ibus_new, 1820 1820 .iccsense = gf100_iccsense_new, 1821 1821 .imem = nv50_instmem_new, ··· 1853 1853 .fb = gk110_fb_new, 1854 1854 .fuse = gf100_fuse_new, 1855 1855 .gpio = gk104_gpio_new, 1856 - .i2c = gk104_i2c_new, 1856 + .i2c = gk110_i2c_new, 1857 1857 .ibus = gk104_ibus_new, 1858 1858 .iccsense = gf100_iccsense_new, 1859 1859 .imem = nv50_instmem_new, ··· 1891 1891 .fb = gk110_fb_new, 1892 1892 .fuse = gf100_fuse_new, 1893 1893 .gpio = gk104_gpio_new, 1894 - .i2c = gk104_i2c_new, 1894 + .i2c = gk110_i2c_new, 1895 1895 .ibus = gk104_ibus_new, 1896 1896 .iccsense = gf100_iccsense_new, 1897 1897 .imem = nv50_instmem_new, ··· 1929 1929 .fb = gk110_fb_new, 1930 1930 .fuse = gf100_fuse_new, 1931 1931 .gpio = gk104_gpio_new, 1932 - .i2c = gk104_i2c_new, 1932 + .i2c = gk110_i2c_new, 1933 1933 .ibus = gk104_ibus_new, 1934 1934 .iccsense = gf100_iccsense_new, 1935 1935 .imem = nv50_instmem_new, ··· 1967 1967 .fb = gm107_fb_new, 1968 1968 .fuse = gm107_fuse_new, 1969 1969 .gpio = gk104_gpio_new, 1970 - .i2c = gk104_i2c_new, 1970 + .i2c = gk110_i2c_new, 1971 1971 .ibus = gk104_ibus_new, 1972 1972 .iccsense = gf100_iccsense_new, 1973 1973 .imem = nv50_instmem_new, ··· 2003 2003 .fb = gm107_fb_new, 2004 2004 .fuse = gm107_fuse_new, 2005 2005 .gpio = gk104_gpio_new, 2006 - .i2c = gk104_i2c_new, 2006 + .i2c = gk110_i2c_new, 2007 2007 .ibus = gk104_ibus_new, 2008 2008 .iccsense = gf100_iccsense_new, 2009 2009 .imem = nv50_instmem_new, ··· 2652 2652 .sec2 = tu102_sec2_new, 2653 2653 }; 2654 2654 2655 + static const struct nvkm_device_chip 2656 + nv170_chipset = { 2657 + .name = "GA100", 2658 + .bar = tu102_bar_new, 2659 + .bios = nvkm_bios_new, 2660 + .devinit = ga100_devinit_new, 2661 + .fb = ga100_fb_new, 2662 + .gpio = gk104_gpio_new, 2663 + .i2c = gm200_i2c_new, 2664 + .ibus = gm200_ibus_new, 
2665 + .imem = nv50_instmem_new, 2666 + .mc = ga100_mc_new, 2667 + .mmu = tu102_mmu_new, 2668 + .pci = gp100_pci_new, 2669 + .timer = gk20a_timer_new, 2670 + }; 2671 + 2672 + static const struct nvkm_device_chip 2673 + nv172_chipset = { 2674 + .name = "GA102", 2675 + .bar = tu102_bar_new, 2676 + .bios = nvkm_bios_new, 2677 + .devinit = ga100_devinit_new, 2678 + .fb = ga102_fb_new, 2679 + .gpio = ga102_gpio_new, 2680 + .i2c = gm200_i2c_new, 2681 + .ibus = gm200_ibus_new, 2682 + .imem = nv50_instmem_new, 2683 + .mc = ga100_mc_new, 2684 + .mmu = tu102_mmu_new, 2685 + .pci = gp100_pci_new, 2686 + .timer = gk20a_timer_new, 2687 + .disp = ga102_disp_new, 2688 + .dma = gv100_dma_new, 2689 + }; 2690 + 2691 + static const struct nvkm_device_chip 2692 + nv174_chipset = { 2693 + .name = "GA104", 2694 + .bar = tu102_bar_new, 2695 + .bios = nvkm_bios_new, 2696 + .devinit = ga100_devinit_new, 2697 + .fb = ga102_fb_new, 2698 + .gpio = ga102_gpio_new, 2699 + .i2c = gm200_i2c_new, 2700 + .ibus = gm200_ibus_new, 2701 + .imem = nv50_instmem_new, 2702 + .mc = ga100_mc_new, 2703 + .mmu = tu102_mmu_new, 2704 + .pci = gp100_pci_new, 2705 + .timer = gk20a_timer_new, 2706 + .disp = ga102_disp_new, 2707 + .dma = gv100_dma_new, 2708 + }; 2709 + 2655 2710 static int 2656 2711 nvkm_device_event_ctor(struct nvkm_object *object, void *data, u32 size, 2657 2712 struct nvkm_notify *notify) ··· 3118 3063 case 0x130: device->card_type = GP100; break; 3119 3064 case 0x140: device->card_type = GV100; break; 3120 3065 case 0x160: device->card_type = TU100; break; 3066 + case 0x170: device->card_type = GA100; break; 3121 3067 default: 3122 3068 break; 3123 3069 } ··· 3216 3160 case 0x166: device->chip = &nv166_chipset; break; 3217 3161 case 0x167: device->chip = &nv167_chipset; break; 3218 3162 case 0x168: device->chip = &nv168_chipset; break; 3163 + case 0x172: device->chip = &nv172_chipset; break; 3164 + case 0x174: device->chip = &nv174_chipset; break; 3219 3165 default: 3220 - nvdev_error(device, 
"unknown chipset (%08x)\n", boot0); 3221 - ret = -ENODEV; 3222 - goto done; 3166 + if (nvkm_boolopt(device->cfgopt, "NvEnableUnsupportedChipsets", false)) { 3167 + switch (device->chipset) { 3168 + case 0x170: device->chip = &nv170_chipset; break; 3169 + default: 3170 + break; 3171 + } 3172 + } 3173 + 3174 + if (!device->chip) { 3175 + nvdev_error(device, "unknown chipset (%08x)\n", boot0); 3176 + ret = -ENODEV; 3177 + goto done; 3178 + } 3179 + break; 3223 3180 } 3224 3181 3225 3182 nvdev_info(device, "NVIDIA %s (%08x)\n",
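The hunk above gates the pre-release GA100 chipset behind an `nvkm_boolopt()` check on `device->cfgopt`. As a usage note: nouveau populates `cfgopt` from its `config` module parameter, so opting in to the GA100 bring-up path would look roughly like the fragment below. The parameter name `NvEnableUnsupportedChipsets` is taken directly from the diff; the `config=` plumbing is an assumption about nouveau's usual module-option handling, not something this diff shows.

```shell
# Sketch (config fragment, not confirmed by this diff): enable the
# otherwise-rejected GA100 chipset path when loading nouveau as a module.
# "config=" feeds device->cfgopt, which nvkm_boolopt() parses.
modprobe nouveau config=NvEnableUnsupportedChipsets=1

# Equivalent kernel command-line form when nouveau is built in:
#   nouveau.config=NvEnableUnsupportedChipsets=1
```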
+1
drivers/gpu/drm/nouveau/nvkm/engine/device/user.c
··· 176 176 case GP100: args->v0.family = NV_DEVICE_INFO_V0_PASCAL; break; 177 177 case GV100: args->v0.family = NV_DEVICE_INFO_V0_VOLTA; break; 178 178 case TU100: args->v0.family = NV_DEVICE_INFO_V0_TURING; break; 179 + case GA100: args->v0.family = NV_DEVICE_INFO_V0_AMPERE; break; 179 180 default: 180 181 args->v0.family = 0; 181 182 break;
+3
drivers/gpu/drm/nouveau/nvkm/engine/disp/Kbuild
··· 17 17 nvkm-y += nvkm/engine/disp/gp102.o 18 18 nvkm-y += nvkm/engine/disp/gv100.o 19 19 nvkm-y += nvkm/engine/disp/tu102.o 20 + nvkm-y += nvkm/engine/disp/ga102.o 20 21 nvkm-y += nvkm/engine/disp/vga.o 21 22 22 23 nvkm-y += nvkm/engine/disp/head.o ··· 43 42 nvkm-y += nvkm/engine/disp/sorgp100.o 44 43 nvkm-y += nvkm/engine/disp/sorgv100.o 45 44 nvkm-y += nvkm/engine/disp/sortu102.o 45 + nvkm-y += nvkm/engine/disp/sorga102.o 46 46 47 47 nvkm-y += nvkm/engine/disp/outp.o 48 48 nvkm-y += nvkm/engine/disp/dp.o ··· 77 75 nvkm-y += nvkm/engine/disp/rootgp102.o 78 76 nvkm-y += nvkm/engine/disp/rootgv100.o 79 77 nvkm-y += nvkm/engine/disp/roottu102.o 78 + nvkm-y += nvkm/engine/disp/rootga102.o 80 79 81 80 nvkm-y += nvkm/engine/disp/capsgv100.o 82 81
+27 -6
drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.c
··· 33 33 34 34 #include <nvif/event.h> 35 35 36 + /* IED scripts are no longer used by UEFI/RM from Ampere, but have been updated for 37 + * the x86 option ROM. However, the relevant VBIOS table versions weren't modified, 38 + * so we're unable to detect this in a nice way. 39 + */ 40 + #define AMPERE_IED_HACK(disp) ((disp)->engine.subdev.device->card_type >= GA100) 41 + 36 42 struct lt_state { 37 43 struct nvkm_dp *dp; 38 44 u8 stat[6]; ··· 244 238 dp->dpcd[DPCD_RC02] &= ~DPCD_RC02_TPS3_SUPPORTED; 245 239 lt.pc2 = dp->dpcd[DPCD_RC02] & DPCD_RC02_TPS3_SUPPORTED; 246 240 241 + if (AMPERE_IED_HACK(disp) && (lnkcmp = lt.dp->info.script[0])) { 242 + /* Execute BeforeLinkTraining script from DP Info table. */ 243 + while (ior->dp.bw < nvbios_rd08(bios, lnkcmp)) 244 + lnkcmp += 3; 245 + lnkcmp = nvbios_rd16(bios, lnkcmp + 1); 246 + 247 + nvbios_init(&dp->outp.disp->engine.subdev, lnkcmp, 248 + init.outp = &dp->outp.info; 249 + init.or = ior->id; 250 + init.link = ior->asy.link; 251 + ); 252 + } 253 + 247 254 /* Set desired link configuration on the source. */ 248 255 if ((lnkcmp = lt.dp->info.lnkcmp)) { 249 256 if (dp->version < 0x30) { ··· 335 316 ); 336 317 } 337 318 338 - /* Execute BeforeLinkTraining script from DP Info table. */ 339 - nvbios_init(&dp->outp.disp->engine.subdev, dp->info.script[0], 340 - init.outp = &dp->outp.info; 341 - init.or = dp->outp.ior->id; 342 - init.link = dp->outp.ior->asy.link; 343 - ); 319 + if (!AMPERE_IED_HACK(dp->outp.disp)) { 320 + /* Execute BeforeLinkTraining script from DP Info table. */ 321 + nvbios_init(&dp->outp.disp->engine.subdev, dp->info.script[0], 322 + init.outp = &dp->outp.info; 323 + init.or = dp->outp.ior->id; 324 + init.link = dp->outp.ior->asy.link; 325 + ); 326 + } 344 327 } 345 328 346 329 static const struct dp_rates {
+46
drivers/gpu/drm/nouveau/nvkm/engine/disp/ga102.c
··· 1 + /* 2 + * Copyright 2021 Red Hat Inc. 3 + * 4 + * Permission is hereby granted, free of charge, to any person obtaining a 5 + * copy of this software and associated documentation files (the "Software"), 6 + * to deal in the Software without restriction, including without limitation 7 + * the rights to use, copy, modify, merge, publish, distribute, sublicense, 8 + * and/or sell copies of the Software, and to permit persons to whom the 9 + * Software is furnished to do so, subject to the following conditions: 10 + * 11 + * The above copyright notice and this permission notice shall be included in 12 + * all copies or substantial portions of the Software. 13 + * 14 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 15 + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 16 + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL 17 + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR 18 + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, 19 + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 20 + * OTHER DEALINGS IN THE SOFTWARE. 21 + */ 22 + #include "nv50.h" 23 + #include "head.h" 24 + #include "ior.h" 25 + #include "channv50.h" 26 + #include "rootnv50.h" 27 + 28 + static const struct nv50_disp_func 29 + ga102_disp = { 30 + .init = tu102_disp_init, 31 + .fini = gv100_disp_fini, 32 + .intr = gv100_disp_intr, 33 + .uevent = &gv100_disp_chan_uevent, 34 + .super = gv100_disp_super, 35 + .root = &ga102_disp_root_oclass, 36 + .wndw = { .cnt = gv100_disp_wndw_cnt }, 37 + .head = { .cnt = gv100_head_cnt, .new = gv100_head_new }, 38 + .sor = { .cnt = gv100_sor_cnt, .new = ga102_sor_new }, 39 + .ramht_size = 0x2000, 40 + }; 41 + 42 + int 43 + ga102_disp_new(struct nvkm_device *device, int index, struct nvkm_disp **pdisp) 44 + { 45 + return nv50_disp_new_(&ga102_disp, device, index, pdisp); 46 + }
+4
drivers/gpu/drm/nouveau/nvkm/engine/disp/ior.h
··· 150 150 void gv100_sor_dp_audio_sym(struct nvkm_ior *, int, u16, u32); 151 151 void gv100_sor_dp_watermark(struct nvkm_ior *, int, u8); 152 152 153 + void tu102_sor_dp_vcpi(struct nvkm_ior *, int, u8, u8, u16, u16); 154 + 153 155 void g84_hdmi_ctrl(struct nvkm_ior *, int, bool, u8, u8, u8 *, u8 , u8 *, u8); 154 156 void gt215_hdmi_ctrl(struct nvkm_ior *, int, bool, u8, u8, u8 *, u8 , u8 *, u8); 155 157 void gf119_hdmi_ctrl(struct nvkm_ior *, int, bool, u8, u8, u8 *, u8 , u8 *, u8); ··· 209 207 int gv100_sor_new(struct nvkm_disp *, int); 210 208 211 209 int tu102_sor_new(struct nvkm_disp *, int); 210 + 211 + int ga102_sor_new(struct nvkm_disp *, int); 212 212 #endif
+2
drivers/gpu/drm/nouveau/nvkm/engine/disp/nv50.h
··· 86 86 void gv100_disp_super(struct work_struct *); 87 87 int gv100_disp_wndw_cnt(struct nvkm_disp *, unsigned long *); 88 88 89 + int tu102_disp_init(struct nv50_disp *); 90 + 89 91 void nv50_disp_dptmds_war_2(struct nv50_disp *, struct dcb_output *); 90 92 void nv50_disp_dptmds_war_3(struct nv50_disp *, struct dcb_output *); 91 93 void nv50_disp_update_sppll1(struct nv50_disp *);
+52
drivers/gpu/drm/nouveau/nvkm/engine/disp/rootga102.c
··· 1 + /* 2 + * Copyright 2021 Red Hat Inc. 3 + * 4 + * Permission is hereby granted, free of charge, to any person obtaining a 5 + * copy of this software and associated documentation files (the "Software"), 6 + * to deal in the Software without restriction, including without limitation 7 + * the rights to use, copy, modify, merge, publish, distribute, sublicense, 8 + * and/or sell copies of the Software, and to permit persons to whom the 9 + * Software is furnished to do so, subject to the following conditions: 10 + * 11 + * The above copyright notice and this permission notice shall be included in 12 + * all copies or substantial portions of the Software. 13 + * 14 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 15 + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 16 + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL 17 + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR 18 + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, 19 + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 20 + * OTHER DEALINGS IN THE SOFTWARE. 
21 + */ 22 + #include "rootnv50.h" 23 + #include "channv50.h" 24 + 25 + #include <nvif/class.h> 26 + 27 + static const struct nv50_disp_root_func 28 + ga102_disp_root = { 29 + .user = { 30 + {{-1,-1,GV100_DISP_CAPS }, gv100_disp_caps_new }, 31 + {{0,0,GA102_DISP_CURSOR }, gv100_disp_curs_new }, 32 + {{0,0,GA102_DISP_WINDOW_IMM_CHANNEL_DMA}, gv100_disp_wimm_new }, 33 + {{0,0,GA102_DISP_CORE_CHANNEL_DMA }, gv100_disp_core_new }, 34 + {{0,0,GA102_DISP_WINDOW_CHANNEL_DMA }, gv100_disp_wndw_new }, 35 + {} 36 + }, 37 + }; 38 + 39 + static int 40 + ga102_disp_root_new(struct nvkm_disp *disp, const struct nvkm_oclass *oclass, 41 + void *data, u32 size, struct nvkm_object **pobject) 42 + { 43 + return nv50_disp_root_new_(&ga102_disp_root, disp, oclass, data, size, pobject); 44 + } 45 + 46 + const struct nvkm_disp_oclass 47 + ga102_disp_root_oclass = { 48 + .base.oclass = GA102_DISP, 49 + .base.minver = -1, 50 + .base.maxver = -1, 51 + .ctor = ga102_disp_root_new, 52 + };
+1
drivers/gpu/drm/nouveau/nvkm/engine/disp/rootnv50.h
··· 41 41 extern const struct nvkm_disp_oclass gp102_disp_root_oclass; 42 42 extern const struct nvkm_disp_oclass gv100_disp_root_oclass; 43 43 extern const struct nvkm_disp_oclass tu102_disp_root_oclass; 44 + extern const struct nvkm_disp_oclass ga102_disp_root_oclass; 44 45 #endif
+140
drivers/gpu/drm/nouveau/nvkm/engine/disp/sorga102.c
··· 1 + /* 2 + * Copyright 2021 Red Hat Inc. 3 + * 4 + * Permission is hereby granted, free of charge, to any person obtaining a 5 + * copy of this software and associated documentation files (the "Software"), 6 + * to deal in the Software without restriction, including without limitation 7 + * the rights to use, copy, modify, merge, publish, distribute, sublicense, 8 + * and/or sell copies of the Software, and to permit persons to whom the 9 + * Software is furnished to do so, subject to the following conditions: 10 + * 11 + * The above copyright notice and this permission notice shall be included in 12 + * all copies or substantial portions of the Software. 13 + * 14 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 15 + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 16 + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL 17 + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR 18 + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, 19 + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 20 + * OTHER DEALINGS IN THE SOFTWARE. 
21 + */ 22 + #include "ior.h" 23 + 24 + #include <subdev/timer.h> 25 + 26 + static int 27 + ga102_sor_dp_links(struct nvkm_ior *sor, struct nvkm_i2c_aux *aux) 28 + { 29 + struct nvkm_device *device = sor->disp->engine.subdev.device; 30 + const u32 soff = nv50_ior_base(sor); 31 + const u32 loff = nv50_sor_link(sor); 32 + u32 dpctrl = 0x00000000; 33 + u32 clksor = 0x00000000; 34 + 35 + switch (sor->dp.bw) { 36 + case 0x06: clksor |= 0x00000000; break; 37 + case 0x0a: clksor |= 0x00040000; break; 38 + case 0x14: clksor |= 0x00080000; break; 39 + case 0x1e: clksor |= 0x000c0000; break; 40 + default: 41 + WARN_ON(1); 42 + return -EINVAL; 43 + } 44 + 45 + dpctrl |= ((1 << sor->dp.nr) - 1) << 16; 46 + if (sor->dp.mst) 47 + dpctrl |= 0x40000000; 48 + if (sor->dp.ef) 49 + dpctrl |= 0x00004000; 50 + 51 + nvkm_mask(device, 0x612300 + soff, 0x007c0000, clksor); 52 + 53 + /*XXX*/ 54 + nvkm_msec(device, 40, NVKM_DELAY); 55 + nvkm_mask(device, 0x612300 + soff, 0x00030000, 0x00010000); 56 + nvkm_mask(device, 0x61c10c + loff, 0x00000003, 0x00000001); 57 + 58 + nvkm_mask(device, 0x61c10c + loff, 0x401f4000, dpctrl); 59 + return 0; 60 + } 61 + 62 + static void 63 + ga102_sor_clock(struct nvkm_ior *sor) 64 + { 65 + struct nvkm_device *device = sor->disp->engine.subdev.device; 66 + u32 div2 = 0; 67 + if (sor->asy.proto == TMDS) { 68 + if (sor->tmds.high_speed) 69 + div2 = 1; 70 + } 71 + nvkm_wr32(device, 0x00ec08 + (sor->id * 0x10), 0x00000000); 72 + nvkm_wr32(device, 0x00ec04 + (sor->id * 0x10), div2); 73 + } 74 + 75 + static const struct nvkm_ior_func 76 + ga102_sor_hda = { 77 + .route = { 78 + .get = gm200_sor_route_get, 79 + .set = gm200_sor_route_set, 80 + }, 81 + .state = gv100_sor_state, 82 + .power = nv50_sor_power, 83 + .clock = ga102_sor_clock, 84 + .hdmi = { 85 + .ctrl = gv100_hdmi_ctrl, 86 + .scdc = gm200_hdmi_scdc, 87 + }, 88 + .dp = { 89 + .lanes = { 0, 1, 2, 3 }, 90 + .links = ga102_sor_dp_links, 91 + .power = g94_sor_dp_power, 92 + .pattern = gm107_sor_dp_pattern, 93 + 
.drive = gm200_sor_dp_drive, 94 + .vcpi = tu102_sor_dp_vcpi, 95 + .audio = gv100_sor_dp_audio, 96 + .audio_sym = gv100_sor_dp_audio_sym, 97 + .watermark = gv100_sor_dp_watermark, 98 + }, 99 + .hda = { 100 + .hpd = gf119_hda_hpd, 101 + .eld = gf119_hda_eld, 102 + .device_entry = gv100_hda_device_entry, 103 + }, 104 + }; 105 + 106 + static const struct nvkm_ior_func 107 + ga102_sor = { 108 + .route = { 109 + .get = gm200_sor_route_get, 110 + .set = gm200_sor_route_set, 111 + }, 112 + .state = gv100_sor_state, 113 + .power = nv50_sor_power, 114 + .clock = ga102_sor_clock, 115 + .hdmi = { 116 + .ctrl = gv100_hdmi_ctrl, 117 + .scdc = gm200_hdmi_scdc, 118 + }, 119 + .dp = { 120 + .lanes = { 0, 1, 2, 3 }, 121 + .links = ga102_sor_dp_links, 122 + .power = g94_sor_dp_power, 123 + .pattern = gm107_sor_dp_pattern, 124 + .drive = gm200_sor_dp_drive, 125 + .vcpi = tu102_sor_dp_vcpi, 126 + .audio = gv100_sor_dp_audio, 127 + .audio_sym = gv100_sor_dp_audio_sym, 128 + .watermark = gv100_sor_dp_watermark, 129 + }, 130 + }; 131 + 132 + int 133 + ga102_sor_new(struct nvkm_disp *disp, int id) 134 + { 135 + struct nvkm_device *device = disp->engine.subdev.device; 136 + u32 hda = nvkm_rd32(device, 0x08a15c); 137 + if (hda & BIT(id)) 138 + return nvkm_ior_new_(&ga102_sor_hda, disp, SOR, id); 139 + return nvkm_ior_new_(&ga102_sor, disp, SOR, id); 140 + }
+1 -1
drivers/gpu/drm/nouveau/nvkm/engine/disp/sortu102.c
··· 23 23 24 24 #include <subdev/timer.h> 25 25 26 - static void 26 + void 27 27 tu102_sor_dp_vcpi(struct nvkm_ior *sor, int head, 28 28 u8 slot, u8 slot_nr, u16 pbn, u16 aligned) 29 29 {
+1 -1
drivers/gpu/drm/nouveau/nvkm/engine/disp/tu102.c
··· 28 28 #include <core/gpuobj.h> 29 29 #include <subdev/timer.h> 30 30 31 - static int 31 + int 32 32 tu102_disp_init(struct nv50_disp *disp) 33 33 { 34 34 struct nvkm_device *device = disp->base.engine.subdev.device;
+1 -1
drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadow.c
··· 75 75 nvkm_debug(subdev, "%08x: type %02x, %d bytes\n", 76 76 image.base, image.type, image.size); 77 77 78 - if (!shadow_fetch(bios, mthd, image.size)) { 78 + if (!shadow_fetch(bios, mthd, image.base + image.size)) { 79 79 nvkm_debug(subdev, "%08x: fetch failed\n", image.base); 80 80 return 0; 81 81 }
+3
drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadowramin.c
··· 64 64 return NULL; 65 65 66 66 /* we can't get the bios image pointer without PDISP */ 67 + if (device->card_type >= GA100) 68 + addr = device->chipset == 0x170; /*XXX: find the fuse reg for this */ 69 + else 67 70 if (device->card_type >= GM100) 68 71 addr = nvkm_rd32(device, 0x021c04); 69 72 else
+1
drivers/gpu/drm/nouveau/nvkm/subdev/devinit/Kbuild
··· 15 15 nvkm-y += nvkm/subdev/devinit/gm200.o 16 16 nvkm-y += nvkm/subdev/devinit/gv100.o 17 17 nvkm-y += nvkm/subdev/devinit/tu102.o 18 + nvkm-y += nvkm/subdev/devinit/ga100.o
+76
drivers/gpu/drm/nouveau/nvkm/subdev/devinit/ga100.c
··· 1 + /* 2 + * Copyright 2021 Red Hat Inc. 3 + * 4 + * Permission is hereby granted, free of charge, to any person obtaining a 5 + * copy of this software and associated documentation files (the "Software"), 6 + * to deal in the Software without restriction, including without limitation 7 + * the rights to use, copy, modify, merge, publish, distribute, sublicense, 8 + * and/or sell copies of the Software, and to permit persons to whom the 9 + * Software is furnished to do so, subject to the following conditions: 10 + * 11 + * The above copyright notice and this permission notice shall be included in 12 + * all copies or substantial portions of the Software. 13 + * 14 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 15 + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 16 + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL 17 + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR 18 + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, 19 + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 20 + * OTHER DEALINGS IN THE SOFTWARE. 
21 + */ 22 + #include "nv50.h" 23 + 24 + #include <subdev/bios.h> 25 + #include <subdev/bios/pll.h> 26 + #include <subdev/clk/pll.h> 27 + 28 + static int 29 + ga100_devinit_pll_set(struct nvkm_devinit *init, u32 type, u32 freq) 30 + { 31 + struct nvkm_subdev *subdev = &init->subdev; 32 + struct nvkm_device *device = subdev->device; 33 + struct nvbios_pll info; 34 + int head = type - PLL_VPLL0; 35 + int N, fN, M, P; 36 + int ret; 37 + 38 + ret = nvbios_pll_parse(device->bios, type, &info); 39 + if (ret) 40 + return ret; 41 + 42 + ret = gt215_pll_calc(subdev, &info, freq, &N, &fN, &M, &P); 43 + if (ret < 0) 44 + return ret; 45 + 46 + switch (info.type) { 47 + case PLL_VPLL0: 48 + case PLL_VPLL1: 49 + case PLL_VPLL2: 50 + case PLL_VPLL3: 51 + nvkm_wr32(device, 0x00ef00 + (head * 0x40), 0x02080004); 52 + nvkm_wr32(device, 0x00ef18 + (head * 0x40), (N << 16) | fN); 53 + nvkm_wr32(device, 0x00ef04 + (head * 0x40), (P << 16) | M); 54 + nvkm_wr32(device, 0x00e9c0 + (head * 0x04), 0x00000001); 55 + break; 56 + default: 57 + nvkm_warn(subdev, "%08x/%dKhz unimplemented\n", type, freq); 58 + ret = -EINVAL; 59 + break; 60 + } 61 + 62 + return ret; 63 + } 64 + 65 + static const struct nvkm_devinit_func 66 + ga100_devinit = { 67 + .init = nv50_devinit_init, 68 + .post = tu102_devinit_post, 69 + .pll_set = ga100_devinit_pll_set, 70 + }; 71 + 72 + int 73 + ga100_devinit_new(struct nvkm_device *device, int index, struct nvkm_devinit **pinit) 74 + { 75 + return nv50_devinit_new_(&ga100_devinit, device, index, pinit); 76 + }
+1
drivers/gpu/drm/nouveau/nvkm/subdev/devinit/priv.h
··· 19 19 int index, struct nvkm_devinit *); 20 20 21 21 int nv04_devinit_post(struct nvkm_devinit *, bool); 22 + int tu102_devinit_post(struct nvkm_devinit *, bool); 22 23 #endif
+1 -1
drivers/gpu/drm/nouveau/nvkm/subdev/devinit/tu102.c
··· 65 65 return ret; 66 66 } 67 67 68 - static int 68 + int 69 69 tu102_devinit_post(struct nvkm_devinit *base, bool post) 70 70 { 71 71 struct nv50_devinit *init = nv50_devinit(base);
+3
drivers/gpu/drm/nouveau/nvkm/subdev/fb/Kbuild
··· 32 32 nvkm-y += nvkm/subdev/fb/gp102.o 33 33 nvkm-y += nvkm/subdev/fb/gp10b.o 34 34 nvkm-y += nvkm/subdev/fb/gv100.o 35 + nvkm-y += nvkm/subdev/fb/ga100.o 36 + nvkm-y += nvkm/subdev/fb/ga102.o 35 37 36 38 nvkm-y += nvkm/subdev/fb/ram.o 37 39 nvkm-y += nvkm/subdev/fb/ramnv04.o ··· 54 52 nvkm-y += nvkm/subdev/fb/ramgm107.o 55 53 nvkm-y += nvkm/subdev/fb/ramgm200.o 56 54 nvkm-y += nvkm/subdev/fb/ramgp100.o 55 + nvkm-y += nvkm/subdev/fb/ramga102.o 57 56 nvkm-y += nvkm/subdev/fb/sddr2.o 58 57 nvkm-y += nvkm/subdev/fb/sddr3.o 59 58 nvkm-y += nvkm/subdev/fb/gddr3.o
+40
drivers/gpu/drm/nouveau/nvkm/subdev/fb/ga100.c
··· 1 + /* 2 + * Copyright 2021 Red Hat Inc. 3 + * 4 + * Permission is hereby granted, free of charge, to any person obtaining a 5 + * copy of this software and associated documentation files (the "Software"), 6 + * to deal in the Software without restriction, including without limitation 7 + * the rights to use, copy, modify, merge, publish, distribute, sublicense, 8 + * and/or sell copies of the Software, and to permit persons to whom the 9 + * Software is furnished to do so, subject to the following conditions: 10 + * 11 + * The above copyright notice and this permission notice shall be included in 12 + * all copies or substantial portions of the Software. 13 + * 14 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 15 + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 16 + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL 17 + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR 18 + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, 19 + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 20 + * OTHER DEALINGS IN THE SOFTWARE. 21 + */ 22 + #include "gf100.h" 23 + #include "ram.h" 24 + 25 + static const struct nvkm_fb_func 26 + ga100_fb = { 27 + .dtor = gf100_fb_dtor, 28 + .oneinit = gf100_fb_oneinit, 29 + .init = gp100_fb_init, 30 + .init_page = gv100_fb_init_page, 31 + .init_unkn = gp100_fb_init_unkn, 32 + .ram_new = gp100_ram_new, 33 + .default_bigpage = 16, 34 + }; 35 + 36 + int 37 + ga100_fb_new(struct nvkm_device *device, int index, struct nvkm_fb **pfb) 38 + { 39 + return gp102_fb_new_(&ga100_fb, device, index, pfb); 40 + }
+40
drivers/gpu/drm/nouveau/nvkm/subdev/fb/ga102.c
··· 1 + /* 2 + * Copyright 2021 Red Hat Inc. 3 + * 4 + * Permission is hereby granted, free of charge, to any person obtaining a 5 + * copy of this software and associated documentation files (the "Software"), 6 + * to deal in the Software without restriction, including without limitation 7 + * the rights to use, copy, modify, merge, publish, distribute, sublicense, 8 + * and/or sell copies of the Software, and to permit persons to whom the 9 + * Software is furnished to do so, subject to the following conditions: 10 + * 11 + * The above copyright notice and this permission notice shall be included in 12 + * all copies or substantial portions of the Software. 13 + * 14 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 15 + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 16 + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL 17 + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR 18 + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, 19 + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 20 + * OTHER DEALINGS IN THE SOFTWARE. 21 + */ 22 + #include "gf100.h" 23 + #include "ram.h" 24 + 25 + static const struct nvkm_fb_func 26 + ga102_fb = { 27 + .dtor = gf100_fb_dtor, 28 + .oneinit = gf100_fb_oneinit, 29 + .init = gp100_fb_init, 30 + .init_page = gv100_fb_init_page, 31 + .init_unkn = gp100_fb_init_unkn, 32 + .ram_new = ga102_ram_new, 33 + .default_bigpage = 16, 34 + }; 35 + 36 + int 37 + ga102_fb_new(struct nvkm_device *device, int index, struct nvkm_fb **pfb) 38 + { 39 + return gp102_fb_new_(&ga102_fb, device, index, pfb); 40 + }
+1 -1
drivers/gpu/drm/nouveau/nvkm/subdev/fb/gv100.c
··· 22 22 #include "gf100.h" 23 23 #include "ram.h" 24 24 25 - static int 25 + int 26 26 gv100_fb_init_page(struct nvkm_fb *fb) 27 27 { 28 28 return (fb->page == 16) ? 0 : -EINVAL;
+2
drivers/gpu/drm/nouveau/nvkm/subdev/fb/priv.h
··· 82 82 struct nvkm_fb **); 83 83 bool gp102_fb_vpr_scrub_required(struct nvkm_fb *); 84 84 int gp102_fb_vpr_scrub(struct nvkm_fb *); 85 + 86 + int gv100_fb_init_page(struct nvkm_fb *); 85 87 #endif
+1
drivers/gpu/drm/nouveau/nvkm/subdev/fb/ram.h
··· 70 70 int gm107_ram_new(struct nvkm_fb *, struct nvkm_ram **); 71 71 int gm200_ram_new(struct nvkm_fb *, struct nvkm_ram **); 72 72 int gp100_ram_new(struct nvkm_fb *, struct nvkm_ram **); 73 + int ga102_ram_new(struct nvkm_fb *, struct nvkm_ram **); 73 74 #endif
+40
drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramga102.c
··· 1 + /* 2 + * Copyright 2021 Red Hat Inc. 3 + * 4 + * Permission is hereby granted, free of charge, to any person obtaining a 5 + * copy of this software and associated documentation files (the "Software"), 6 + * to deal in the Software without restriction, including without limitation 7 + * the rights to use, copy, modify, merge, publish, distribute, sublicense, 8 + * and/or sell copies of the Software, and to permit persons to whom the 9 + * Software is furnished to do so, subject to the following conditions: 10 + * 11 + * The above copyright notice and this permission notice shall be included in 12 + * all copies or substantial portions of the Software. 13 + * 14 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 15 + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 16 + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL 17 + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR 18 + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, 19 + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 20 + * OTHER DEALINGS IN THE SOFTWARE. 21 + */ 22 + #include "ram.h" 23 + 24 + #include <subdev/bios.h> 25 + #include <subdev/bios/init.h> 26 + #include <subdev/bios/rammap.h> 27 + 28 + static const struct nvkm_ram_func 29 + ga102_ram = { 30 + }; 31 + 32 + int 33 + ga102_ram_new(struct nvkm_fb *fb, struct nvkm_ram **pram) 34 + { 35 + struct nvkm_device *device = fb->subdev.device; 36 + enum nvkm_ram_type type = nvkm_fb_bios_memtype(device->bios); 37 + u32 size = nvkm_rd32(device, 0x1183a4); 38 + 39 + return nvkm_ram_new_(&ga102_ram, fb, type, (u64)size << 20, pram); 40 + }
+1
drivers/gpu/drm/nouveau/nvkm/subdev/gpio/Kbuild
··· 5 5 nvkm-y += nvkm/subdev/gpio/g94.o 6 6 nvkm-y += nvkm/subdev/gpio/gf119.o 7 7 nvkm-y += nvkm/subdev/gpio/gk104.o 8 + nvkm-y += nvkm/subdev/gpio/ga102.o
+118
drivers/gpu/drm/nouveau/nvkm/subdev/gpio/ga102.c
··· 1 + /* 2 + * Copyright 2021 Red Hat Inc. 3 + * 4 + * Permission is hereby granted, free of charge, to any person obtaining a 5 + * copy of this software and associated documentation files (the "Software"), 6 + * to deal in the Software without restriction, including without limitation 7 + * the rights to use, copy, modify, merge, publish, distribute, sublicense, 8 + * and/or sell copies of the Software, and to permit persons to whom the 9 + * Software is furnished to do so, subject to the following conditions: 10 + * 11 + * The above copyright notice and this permission notice shall be included in 12 + * all copies or substantial portions of the Software. 13 + * 14 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 15 + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 16 + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL 17 + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR 18 + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, 19 + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 20 + * OTHER DEALINGS IN THE SOFTWARE. 
21 + */ 22 + #include "priv.h" 23 + 24 + static void 25 + ga102_gpio_reset(struct nvkm_gpio *gpio, u8 match) 26 + { 27 + struct nvkm_device *device = gpio->subdev.device; 28 + struct nvkm_bios *bios = device->bios; 29 + u8 ver, len; 30 + u16 entry; 31 + int ent = -1; 32 + 33 + while ((entry = dcb_gpio_entry(bios, 0, ++ent, &ver, &len))) { 34 + u32 data = nvbios_rd32(bios, entry); 35 + u8 line = (data & 0x0000003f); 36 + u8 defs = !!(data & 0x00000080); 37 + u8 func = (data & 0x0000ff00) >> 8; 38 + u8 unk0 = (data & 0x00ff0000) >> 16; 39 + u8 unk1 = (data & 0x1f000000) >> 24; 40 + 41 + if ( func == DCB_GPIO_UNUSED || 42 + (match != DCB_GPIO_UNUSED && match != func)) 43 + continue; 44 + 45 + nvkm_gpio_set(gpio, 0, func, line, defs); 46 + 47 + nvkm_mask(device, 0x021200 + (line * 4), 0xff, unk0); 48 + if (unk1--) 49 + nvkm_mask(device, 0x00d740 + (unk1 * 4), 0xff, line); 50 + } 51 + } 52 + 53 + static int 54 + ga102_gpio_drive(struct nvkm_gpio *gpio, int line, int dir, int out) 55 + { 56 + struct nvkm_device *device = gpio->subdev.device; 57 + u32 data = ((dir ^ 1) << 13) | (out << 12); 58 + nvkm_mask(device, 0x021200 + (line * 4), 0x00003000, data); 59 + nvkm_mask(device, 0x00d604, 0x00000001, 0x00000001); /* update? 
*/ 60 + return 0; 61 + } 62 + 63 + static int 64 + ga102_gpio_sense(struct nvkm_gpio *gpio, int line) 65 + { 66 + struct nvkm_device *device = gpio->subdev.device; 67 + return !!(nvkm_rd32(device, 0x021200 + (line * 4)) & 0x00004000); 68 + } 69 + 70 + static void 71 + ga102_gpio_intr_stat(struct nvkm_gpio *gpio, u32 *hi, u32 *lo) 72 + { 73 + struct nvkm_device *device = gpio->subdev.device; 74 + u32 intr0 = nvkm_rd32(device, 0x021640); 75 + u32 intr1 = nvkm_rd32(device, 0x02164c); 76 + u32 stat0 = nvkm_rd32(device, 0x021648) & intr0; 77 + u32 stat1 = nvkm_rd32(device, 0x021654) & intr1; 78 + *lo = (stat1 & 0xffff0000) | (stat0 >> 16); 79 + *hi = (stat1 << 16) | (stat0 & 0x0000ffff); 80 + nvkm_wr32(device, 0x021640, intr0); 81 + nvkm_wr32(device, 0x02164c, intr1); 82 + } 83 + 84 + static void 85 + ga102_gpio_intr_mask(struct nvkm_gpio *gpio, u32 type, u32 mask, u32 data) 86 + { 87 + struct nvkm_device *device = gpio->subdev.device; 88 + u32 inte0 = nvkm_rd32(device, 0x021648); 89 + u32 inte1 = nvkm_rd32(device, 0x021654); 90 + if (type & NVKM_GPIO_LO) 91 + inte0 = (inte0 & ~(mask << 16)) | (data << 16); 92 + if (type & NVKM_GPIO_HI) 93 + inte0 = (inte0 & ~(mask & 0xffff)) | (data & 0xffff); 94 + mask >>= 16; 95 + data >>= 16; 96 + if (type & NVKM_GPIO_LO) 97 + inte1 = (inte1 & ~(mask << 16)) | (data << 16); 98 + if (type & NVKM_GPIO_HI) 99 + inte1 = (inte1 & ~mask) | data; 100 + nvkm_wr32(device, 0x021648, inte0); 101 + nvkm_wr32(device, 0x021654, inte1); 102 + } 103 + 104 + static const struct nvkm_gpio_func 105 + ga102_gpio = { 106 + .lines = 32, 107 + .intr_stat = ga102_gpio_intr_stat, 108 + .intr_mask = ga102_gpio_intr_mask, 109 + .drive = ga102_gpio_drive, 110 + .sense = ga102_gpio_sense, 111 + .reset = ga102_gpio_reset, 112 + }; 113 + 114 + int 115 + ga102_gpio_new(struct nvkm_device *device, int index, struct nvkm_gpio **pgpio) 116 + { 117 + return nvkm_gpio_new_(&ga102_gpio, device, index, pgpio); 118 + }
+1
drivers/gpu/drm/nouveau/nvkm/subdev/i2c/Kbuild
··· 7 7 nvkm-y += nvkm/subdev/i2c/gf117.o 8 8 nvkm-y += nvkm/subdev/i2c/gf119.o 9 9 nvkm-y += nvkm/subdev/i2c/gk104.o 10 + nvkm-y += nvkm/subdev/i2c/gk110.o 10 11 nvkm-y += nvkm/subdev/i2c/gm200.o 11 12 12 13 nvkm-y += nvkm/subdev/i2c/pad.o
+7
drivers/gpu/drm/nouveau/nvkm/subdev/i2c/aux.h
··· 3 3 #define __NVKM_I2C_AUX_H__ 4 4 #include "pad.h" 5 5 6 + static inline void 7 + nvkm_i2c_aux_autodpcd(struct nvkm_i2c *i2c, int aux, bool enable) 8 + { 9 + if (i2c->func->aux_autodpcd) 10 + i2c->func->aux_autodpcd(i2c, aux, enable); 11 + } 12 + 13 + struct nvkm_i2c_aux_func { 14 + bool address_only; 15 + int (*xfer)(struct nvkm_i2c_aux *, bool retry, u8 type,
+7 -3
drivers/gpu/drm/nouveau/nvkm/subdev/i2c/auxg94.c
··· 77 77 u8 type, u32 addr, u8 *data, u8 *size) 78 78 { 79 79 struct g94_i2c_aux *aux = g94_i2c_aux(obj); 80 - struct nvkm_device *device = aux->base.pad->i2c->subdev.device; 80 + struct nvkm_i2c *i2c = aux->base.pad->i2c; 81 + struct nvkm_device *device = i2c->subdev.device; 81 82 const u32 base = aux->ch * 0x50; 82 83 u32 ctrl, stat, timeout, retries = 0; 83 84 u32 xbuf[4] = {}; ··· 96 95 ret = -ENXIO; 97 96 goto out; 98 97 } 98 + 99 + nvkm_i2c_aux_autodpcd(i2c, aux->ch, false); 99 100 100 101 if (!(type & 1)) { 101 102 memcpy(xbuf, data, *size); ··· 131 128 if (!timeout--) { 132 129 AUX_ERR(&aux->base, "timeout %08x", ctrl); 133 130 ret = -EIO; 134 - goto out; 131 + goto out_err; 135 132 } 136 133 } while (ctrl & 0x00010000); 137 134 ret = 0; ··· 157 154 memcpy(data, xbuf, *size); 158 155 *size = stat & 0x0000001f; 159 156 } 160 - 157 + out_err: 158 + nvkm_i2c_aux_autodpcd(i2c, aux->ch, true); 161 159 out: 162 160 g94_i2c_aux_fini(aux); 163 161 return ret < 0 ? ret : (stat & 0x000f0000) >> 16;
+11 -6
drivers/gpu/drm/nouveau/nvkm/subdev/i2c/auxgm200.c
··· 33 33 gm200_i2c_aux_fini(struct gm200_i2c_aux *aux) 34 34 { 35 35 struct nvkm_device *device = aux->base.pad->i2c->subdev.device; 36 - nvkm_mask(device, 0x00d954 + (aux->ch * 0x50), 0x00310000, 0x00000000); 36 + nvkm_mask(device, 0x00d954 + (aux->ch * 0x50), 0x00710000, 0x00000000); 37 37 } 38 38 39 39 static int ··· 54 54 AUX_ERR(&aux->base, "begin idle timeout %08x", ctrl); 55 55 return -EBUSY; 56 56 } 57 - } while (ctrl & 0x03010000); 57 + } while (ctrl & 0x07010000); 58 58 59 59 /* set some magic, and wait up to 1ms for it to appear */ 60 - nvkm_mask(device, 0x00d954 + (aux->ch * 0x50), 0x00300000, ureq); 60 + nvkm_mask(device, 0x00d954 + (aux->ch * 0x50), 0x00700000, ureq); 61 61 timeout = 1000; 62 62 do { 63 63 ctrl = nvkm_rd32(device, 0x00d954 + (aux->ch * 0x50)); ··· 67 67 gm200_i2c_aux_fini(aux); 68 68 return -EBUSY; 69 69 } 70 - } while ((ctrl & 0x03000000) != urep); 70 + } while ((ctrl & 0x07000000) != urep); 71 71 72 72 return 0; 73 73 } ··· 77 77 u8 type, u32 addr, u8 *data, u8 *size) 78 78 { 79 79 struct gm200_i2c_aux *aux = gm200_i2c_aux(obj); 80 - struct nvkm_device *device = aux->base.pad->i2c->subdev.device; 80 + struct nvkm_i2c *i2c = aux->base.pad->i2c; 81 + struct nvkm_device *device = i2c->subdev.device; 81 82 const u32 base = aux->ch * 0x50; 82 83 u32 ctrl, stat, timeout, retries = 0; 83 84 u32 xbuf[4] = {}; ··· 96 95 ret = -ENXIO; 97 96 goto out; 98 97 } 98 + 99 + nvkm_i2c_aux_autodpcd(i2c, aux->ch, false); 99 100 100 101 if (!(type & 1)) { 101 102 memcpy(xbuf, data, *size); ··· 131 128 if (!timeout--) { 132 129 AUX_ERR(&aux->base, "timeout %08x", ctrl); 133 130 ret = -EIO; 134 - goto out; 131 + goto out_err; 135 132 } 136 133 } while (ctrl & 0x00010000); 137 134 ret = 0; ··· 158 155 *size = stat & 0x0000001f; 159 156 } 160 157 158 + out_err: 159 + nvkm_i2c_aux_autodpcd(i2c, aux->ch, true); 161 160 out: 162 161 gm200_i2c_aux_fini(aux); 163 162 return ret < 0 ? ret : (stat & 0x000f0000) >> 16;
+45
drivers/gpu/drm/nouveau/nvkm/subdev/i2c/gk110.c
··· 1 + /* 2 + * Copyright 2021 Red Hat Inc. 3 + * 4 + * Permission is hereby granted, free of charge, to any person obtaining a 5 + * copy of this software and associated documentation files (the "Software"), 6 + * to deal in the Software without restriction, including without limitation 7 + * the rights to use, copy, modify, merge, publish, distribute, sublicense, 8 + * and/or sell copies of the Software, and to permit persons to whom the 9 + * Software is furnished to do so, subject to the following conditions: 10 + * 11 + * The above copyright notice and this permission notice shall be included in 12 + * all copies or substantial portions of the Software. 13 + * 14 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 15 + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 16 + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL 17 + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR 18 + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, 19 + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 20 + * OTHER DEALINGS IN THE SOFTWARE. 21 + */ 22 + #include "priv.h" 23 + #include "pad.h" 24 + 25 + static void 26 + gk110_aux_autodpcd(struct nvkm_i2c *i2c, int aux, bool enable) 27 + { 28 + nvkm_mask(i2c->subdev.device, 0x00e4f8 + (aux * 0x50), 0x00010000, enable << 16); 29 + } 30 + 31 + static const struct nvkm_i2c_func 32 + gk110_i2c = { 33 + .pad_x_new = gf119_i2c_pad_x_new, 34 + .pad_s_new = gf119_i2c_pad_s_new, 35 + .aux = 4, 36 + .aux_stat = gk104_aux_stat, 37 + .aux_mask = gk104_aux_mask, 38 + .aux_autodpcd = gk110_aux_autodpcd, 39 + }; 40 + 41 + int 42 + gk110_i2c_new(struct nvkm_device *device, int index, struct nvkm_i2c **pi2c) 43 + { 44 + return nvkm_i2c_new_(&gk110_i2c, device, index, pi2c); 45 + }
+7
drivers/gpu/drm/nouveau/nvkm/subdev/i2c/gm200.c
··· 24 24 #include "priv.h" 25 25 #include "pad.h" 26 26 27 + static void 28 + gm200_aux_autodpcd(struct nvkm_i2c *i2c, int aux, bool enable) 29 + { 30 + nvkm_mask(i2c->subdev.device, 0x00d968 + (aux * 0x50), 0x00010000, enable << 16); 31 + } 32 + 27 33 static const struct nvkm_i2c_func 28 34 gm200_i2c = { 29 35 .pad_x_new = gf119_i2c_pad_x_new, ··· 37 31 .aux = 8, 38 32 .aux_stat = gk104_aux_stat, 39 33 .aux_mask = gk104_aux_mask, 34 + .aux_autodpcd = gm200_aux_autodpcd, 40 35 }; 41 36 42 37 int
+1 -1
drivers/gpu/drm/nouveau/nvkm/subdev/i2c/pad.h
··· 1 1 /* SPDX-License-Identifier: MIT */ 2 2 #ifndef __NVKM_I2C_PAD_H__ 3 3 #define __NVKM_I2C_PAD_H__ 4 - #include <subdev/i2c.h> 4 + #include "priv.h" 5 5 6 6 struct nvkm_i2c_pad { 7 7 const struct nvkm_i2c_pad_func *func;
+4
drivers/gpu/drm/nouveau/nvkm/subdev/i2c/priv.h
··· 23 23 /* mask on/off interrupt types for a given set of auxch 24 24 */ 25 25 void (*aux_mask)(struct nvkm_i2c *, u32, u32, u32); 26 + 27 + /* enable/disable HW-initiated DPCD reads 28 + */ 29 + void (*aux_autodpcd)(struct nvkm_i2c *, int aux, bool enable); 26 30 }; 27 31 28 32 void g94_aux_stat(struct nvkm_i2c *, u32 *, u32 *, u32 *, u32 *);
+7 -3
drivers/gpu/drm/nouveau/nvkm/subdev/ibus/gf100.c
··· 22 22 * Authors: Ben Skeggs 23 23 */ 24 24 #include "priv.h" 25 + #include <subdev/timer.h> 25 26 26 27 static void 27 28 gf100_ibus_intr_hub(struct nvkm_subdev *ibus, int i) ··· 32 31 u32 data = nvkm_rd32(device, 0x122124 + (i * 0x0400)); 33 32 u32 stat = nvkm_rd32(device, 0x122128 + (i * 0x0400)); 34 33 nvkm_debug(ibus, "HUB%d: %06x %08x (%08x)\n", i, addr, data, stat); 35 - nvkm_mask(device, 0x122128 + (i * 0x0400), 0x00000200, 0x00000000); 36 34 } 37 35 38 36 static void ··· 42 42 u32 data = nvkm_rd32(device, 0x124124 + (i * 0x0400)); 43 43 u32 stat = nvkm_rd32(device, 0x124128 + (i * 0x0400)); 44 44 nvkm_debug(ibus, "ROP%d: %06x %08x (%08x)\n", i, addr, data, stat); 45 - nvkm_mask(device, 0x124128 + (i * 0x0400), 0x00000200, 0x00000000); 46 45 } 47 46 48 47 static void ··· 52 53 u32 data = nvkm_rd32(device, 0x128124 + (i * 0x0400)); 53 54 u32 stat = nvkm_rd32(device, 0x128128 + (i * 0x0400)); 54 55 nvkm_debug(ibus, "GPC%d: %06x %08x (%08x)\n", i, addr, data, stat); 55 - nvkm_mask(device, 0x128128 + (i * 0x0400), 0x00000200, 0x00000000); 56 56 } 57 57 58 58 void ··· 88 90 intr1 &= ~stat; 89 91 } 90 92 } 93 + 94 + nvkm_mask(device, 0x121c4c, 0x0000003f, 0x00000002); 95 + nvkm_msec(device, 2000, 96 + if (!(nvkm_rd32(device, 0x121c4c) & 0x0000003f)) 97 + break; 98 + ); 91 99 } 92 100 93 101 static int
+7 -3
drivers/gpu/drm/nouveau/nvkm/subdev/ibus/gk104.c
··· 22 22 * Authors: Ben Skeggs 23 23 */ 24 24 #include "priv.h" 25 + #include <subdev/timer.h> 25 26 26 27 static void 27 28 gk104_ibus_intr_hub(struct nvkm_subdev *ibus, int i) ··· 32 31 u32 data = nvkm_rd32(device, 0x122124 + (i * 0x0800)); 33 32 u32 stat = nvkm_rd32(device, 0x122128 + (i * 0x0800)); 34 33 nvkm_debug(ibus, "HUB%d: %06x %08x (%08x)\n", i, addr, data, stat); 35 - nvkm_mask(device, 0x122128 + (i * 0x0800), 0x00000200, 0x00000000); 36 34 } 37 35 38 36 static void ··· 42 42 u32 data = nvkm_rd32(device, 0x124124 + (i * 0x0800)); 43 43 u32 stat = nvkm_rd32(device, 0x124128 + (i * 0x0800)); 44 44 nvkm_debug(ibus, "ROP%d: %06x %08x (%08x)\n", i, addr, data, stat); 45 - nvkm_mask(device, 0x124128 + (i * 0x0800), 0x00000200, 0x00000000); 46 45 } 47 46 48 47 static void ··· 52 53 u32 data = nvkm_rd32(device, 0x128124 + (i * 0x0800)); 53 54 u32 stat = nvkm_rd32(device, 0x128128 + (i * 0x0800)); 54 55 nvkm_debug(ibus, "GPC%d: %06x %08x (%08x)\n", i, addr, data, stat); 55 - nvkm_mask(device, 0x128128 + (i * 0x0800), 0x00000200, 0x00000000); 56 56 } 57 57 58 58 void ··· 88 90 intr1 &= ~stat; 89 91 } 90 92 } 93 + 94 + nvkm_mask(device, 0x12004c, 0x0000003f, 0x00000002); 95 + nvkm_msec(device, 2000, 96 + if (!(nvkm_rd32(device, 0x12004c) & 0x0000003f)) 97 + break; 98 + ); 91 99 } 92 100 93 101 static int
+1
drivers/gpu/drm/nouveau/nvkm/subdev/mc/Kbuild
··· 14 14 nvkm-y += nvkm/subdev/mc/gp100.o 15 15 nvkm-y += nvkm/subdev/mc/gp10b.o 16 16 nvkm-y += nvkm/subdev/mc/tu102.o 17 + nvkm-y += nvkm/subdev/mc/ga100.o
+74
drivers/gpu/drm/nouveau/nvkm/subdev/mc/ga100.c
··· 1 + /* 2 + * Copyright 2021 Red Hat Inc. 3 + * 4 + * Permission is hereby granted, free of charge, to any person obtaining a 5 + * copy of this software and associated documentation files (the "Software"), 6 + * to deal in the Software without restriction, including without limitation 7 + * the rights to use, copy, modify, merge, publish, distribute, sublicense, 8 + * and/or sell copies of the Software, and to permit persons to whom the 9 + * Software is furnished to do so, subject to the following conditions: 10 + * 11 + * The above copyright notice and this permission notice shall be included in 12 + * all copies or substantial portions of the Software. 13 + * 14 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 15 + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 16 + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL 17 + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR 18 + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, 19 + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 20 + * OTHER DEALINGS IN THE SOFTWARE. 
21 + */ 22 + #include "priv.h" 23 + 24 + static void 25 + ga100_mc_intr_unarm(struct nvkm_mc *mc) 26 + { 27 + nvkm_wr32(mc->subdev.device, 0xb81610, 0x00000004); 28 + } 29 + 30 + static void 31 + ga100_mc_intr_rearm(struct nvkm_mc *mc) 32 + { 33 + nvkm_wr32(mc->subdev.device, 0xb81608, 0x00000004); 34 + } 35 + 36 + static void 37 + ga100_mc_intr_mask(struct nvkm_mc *mc, u32 mask, u32 intr) 38 + { 39 + nvkm_wr32(mc->subdev.device, 0xb81210, mask & intr ); 40 + nvkm_wr32(mc->subdev.device, 0xb81410, mask & ~(mask & intr)); 41 + } 42 + 43 + static u32 44 + ga100_mc_intr_stat(struct nvkm_mc *mc) 45 + { 46 + u32 intr_top = nvkm_rd32(mc->subdev.device, 0xb81600), intr = 0x00000000; 47 + if (intr_top & 0x00000004) 48 + intr = nvkm_mask(mc->subdev.device, 0xb81010, 0x00000000, 0x00000000); 49 + return intr; 50 + } 51 + 52 + static void 53 + ga100_mc_init(struct nvkm_mc *mc) 54 + { 55 + nv50_mc_init(mc); 56 + nvkm_wr32(mc->subdev.device, 0xb81210, 0xffffffff); 57 + } 58 + 59 + static const struct nvkm_mc_func 60 + ga100_mc = { 61 + .init = ga100_mc_init, 62 + .intr = gp100_mc_intr, 63 + .intr_unarm = ga100_mc_intr_unarm, 64 + .intr_rearm = ga100_mc_intr_rearm, 65 + .intr_mask = ga100_mc_intr_mask, 66 + .intr_stat = ga100_mc_intr_stat, 67 + .reset = gk104_mc_reset, 68 + }; 69 + 70 + int 71 + ga100_mc_new(struct nvkm_device *device, int index, struct nvkm_mc **pmc) 72 + { 73 + return nvkm_mc_new_(&ga100_mc, device, index, pmc); 74 + }
+3 -3
drivers/gpu/drm/nouveau/nvkm/subdev/mmu/base.c
··· 316 316 { 317 317 struct nvkm_device *device = mmu->subdev.device; 318 318 struct nvkm_mm *mm = &device->fb->ram->vram; 319 - const u32 sizeN = nvkm_mm_heap_size(mm, NVKM_RAM_MM_NORMAL); 320 - const u32 sizeU = nvkm_mm_heap_size(mm, NVKM_RAM_MM_NOMAP); 321 - const u32 sizeM = nvkm_mm_heap_size(mm, NVKM_RAM_MM_MIXED); 319 + const u64 sizeN = nvkm_mm_heap_size(mm, NVKM_RAM_MM_NORMAL); 320 + const u64 sizeU = nvkm_mm_heap_size(mm, NVKM_RAM_MM_NOMAP); 321 + const u64 sizeM = nvkm_mm_heap_size(mm, NVKM_RAM_MM_MIXED); 322 322 u8 type = NVKM_MEM_KIND * !!mmu->func->kind; 323 323 u8 heap = NVKM_MEM_VRAM; 324 324 int heapM, heapN, heapU;
+11 -11
drivers/gpu/drm/ttm/ttm_pool.c
··· 66 66 static struct ttm_pool_type global_dma32_write_combined[MAX_ORDER]; 67 67 static struct ttm_pool_type global_dma32_uncached[MAX_ORDER]; 68 68 69 - static spinlock_t shrinker_lock; 69 + static struct mutex shrinker_lock; 70 70 static struct list_head shrinker_list; 71 71 static struct shrinker mm_shrinker; 72 72 ··· 190 190 size_t size = (1ULL << order) * PAGE_SIZE; 191 191 192 192 addr = dma_map_page(pool->dev, p, 0, size, DMA_BIDIRECTIONAL); 193 - if (dma_mapping_error(pool->dev, **dma_addr)) 193 + if (dma_mapping_error(pool->dev, addr)) 194 194 return -EFAULT; 195 195 } 196 196 ··· 249 249 spin_lock_init(&pt->lock); 250 250 INIT_LIST_HEAD(&pt->pages); 251 251 252 - spin_lock(&shrinker_lock); 252 + mutex_lock(&shrinker_lock); 253 253 list_add_tail(&pt->shrinker_list, &shrinker_list); 254 - spin_unlock(&shrinker_lock); 254 + mutex_unlock(&shrinker_lock); 255 255 } 256 256 257 257 /* Remove a pool_type from the global shrinker list and free all pages */ ··· 259 259 { 260 260 struct page *p, *tmp; 261 261 262 - spin_lock(&shrinker_lock); 262 + mutex_lock(&shrinker_lock); 263 263 list_del(&pt->shrinker_list); 264 - spin_unlock(&shrinker_lock); 264 + mutex_unlock(&shrinker_lock); 265 265 266 266 list_for_each_entry_safe(p, tmp, &pt->pages, lru) 267 267 ttm_pool_free_page(pt->pool, pt->caching, pt->order, p); ··· 302 302 unsigned int num_freed; 303 303 struct page *p; 304 304 305 - spin_lock(&shrinker_lock); 305 + mutex_lock(&shrinker_lock); 306 306 pt = list_first_entry(&shrinker_list, typeof(*pt), shrinker_list); 307 307 308 308 p = ttm_pool_type_take(pt); ··· 314 314 } 315 315 316 316 list_move_tail(&pt->shrinker_list, &shrinker_list); 317 - spin_unlock(&shrinker_lock); 317 + mutex_unlock(&shrinker_lock); 318 318 319 319 return num_freed; 320 320 } ··· 564 564 { 565 565 unsigned int i; 566 566 567 - spin_lock(&shrinker_lock); 567 + mutex_lock(&shrinker_lock); 568 568 569 569 seq_puts(m, "\t "); 570 570 for (i = 0; i < MAX_ORDER; ++i) ··· 600 600 
seq_printf(m, "\ntotal\t: %8lu of %8lu\n", 601 601 atomic_long_read(&allocated_pages), page_pool_size); 602 602 603 - spin_unlock(&shrinker_lock); 603 + mutex_unlock(&shrinker_lock); 604 604 605 605 return 0; 606 606 } ··· 644 644 if (!page_pool_size) 645 645 page_pool_size = num_pages; 646 646 647 - spin_lock_init(&shrinker_lock); 647 + mutex_init(&shrinker_lock); 648 648 INIT_LIST_HEAD(&shrinker_list); 649 649 650 650 for (i = 0; i < MAX_ORDER; ++i) {
+1
drivers/hid/Kconfig
··· 899 899 depends on NEW_LEDS 900 900 depends on LEDS_CLASS 901 901 select POWER_SUPPLY 902 + select CRC32 902 903 help 903 904 Support for 904 905
+4 -4
drivers/hid/amd-sfh-hid/amd_sfh_client.c
··· 154 154 155 155 for (i = 0; i < cl_data->num_hid_devices; i++) { 156 156 cl_data->sensor_virt_addr[i] = dma_alloc_coherent(dev, sizeof(int) * 8, 157 - &cl_data->sensor_phys_addr[i], 157 + &cl_data->sensor_dma_addr[i], 158 158 GFP_KERNEL); 159 159 cl_data->sensor_sts[i] = 0; 160 160 cl_data->sensor_requested_cnt[i] = 0; ··· 187 187 } 188 188 info.period = msecs_to_jiffies(AMD_SFH_IDLE_LOOP); 189 189 info.sensor_idx = cl_idx; 190 - info.phys_address = cl_data->sensor_phys_addr[i]; 190 + info.dma_address = cl_data->sensor_dma_addr[i]; 191 191 192 192 cl_data->report_descr[i] = kzalloc(cl_data->report_descr_sz[i], GFP_KERNEL); 193 193 if (!cl_data->report_descr[i]) { ··· 212 212 if (cl_data->sensor_virt_addr[i]) { 213 213 dma_free_coherent(&privdata->pdev->dev, 8 * sizeof(int), 214 214 cl_data->sensor_virt_addr[i], 215 - cl_data->sensor_phys_addr[i]); 215 + cl_data->sensor_dma_addr[i]); 216 216 } 217 217 kfree(cl_data->feature_report[i]); 218 218 kfree(cl_data->input_report[i]); ··· 238 238 if (cl_data->sensor_virt_addr[i]) { 239 239 dma_free_coherent(&privdata->pdev->dev, 8 * sizeof(int), 240 240 cl_data->sensor_virt_addr[i], 241 - cl_data->sensor_phys_addr[i]); 241 + cl_data->sensor_dma_addr[i]); 242 242 } 243 243 } 244 244 kfree(cl_data);
+1 -1
drivers/hid/amd-sfh-hid/amd_sfh_hid.h
··· 27 27 int hid_descr_size[MAX_HID_DEVICES]; 28 28 phys_addr_t phys_addr_base; 29 29 u32 *sensor_virt_addr[MAX_HID_DEVICES]; 30 - phys_addr_t sensor_phys_addr[MAX_HID_DEVICES]; 30 + dma_addr_t sensor_dma_addr[MAX_HID_DEVICES]; 31 31 u32 sensor_sts[MAX_HID_DEVICES]; 32 32 u32 sensor_requested_cnt[MAX_HID_DEVICES]; 33 33 u8 report_type[MAX_HID_DEVICES];
+1 -1
drivers/hid/amd-sfh-hid/amd_sfh_pcie.c
··· 41 41 cmd_param.s.buf_layout = 1; 42 42 cmd_param.s.buf_length = 16; 43 43 44 - writeq(info.phys_address, privdata->mmio + AMD_C2P_MSG2); 44 + writeq(info.dma_address, privdata->mmio + AMD_C2P_MSG2); 45 45 writel(cmd_param.ul, privdata->mmio + AMD_C2P_MSG1); 46 46 writel(cmd_base.ul, privdata->mmio + AMD_C2P_MSG0); 47 47 }
+1 -1
drivers/hid/amd-sfh-hid/amd_sfh_pcie.h
··· 67 67 struct amd_mp2_sensor_info { 68 68 u8 sensor_idx; 69 69 u32 period; 70 - phys_addr_t phys_address; 70 + dma_addr_t dma_address; 71 71 }; 72 72 73 73 void amd_start_sensor(struct amd_mp2_dev *privdata, struct amd_mp2_sensor_info info);
+1
drivers/hid/hid-ids.h
··· 389 389 #define USB_DEVICE_ID_TOSHIBA_CLICK_L9W 0x0401 390 390 #define USB_DEVICE_ID_HP_X2 0x074d 391 391 #define USB_DEVICE_ID_HP_X2_10_COVER 0x0755 392 + #define USB_DEVICE_ID_ASUS_UX550_TOUCHSCREEN 0x2706 392 393 393 394 #define USB_VENDOR_ID_ELECOM 0x056e 394 395 #define USB_DEVICE_ID_ELECOM_BM084 0x0061
+2
drivers/hid/hid-input.c
··· 322 322 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 323 323 USB_DEVICE_ID_LOGITECH_DINOVO_EDGE_KBD), 324 324 HID_BATTERY_QUIRK_IGNORE }, 325 + { HID_USB_DEVICE(USB_VENDOR_ID_ELAN, USB_DEVICE_ID_ASUS_UX550_TOUCHSCREEN), 326 + HID_BATTERY_QUIRK_IGNORE }, 325 327 {} 326 328 }; 327 329
+4
drivers/hid/hid-logitech-dj.c
··· 1869 1869 HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, 1870 1870 0xc531), 1871 1871 .driver_data = recvr_type_gaming_hidpp}, 1872 + { /* Logitech G602 receiver (0xc537) */ 1873 + HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, 1874 + 0xc537), 1875 + .driver_data = recvr_type_gaming_hidpp}, 1872 1876 { /* Logitech lightspeed receiver (0xc539) */ 1873 1877 HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, 1874 1878 USB_DEVICE_ID_LOGITECH_NANO_RECEIVER_LIGHTSPEED_1),
+2
drivers/hid/hid-logitech-hidpp.c
··· 4053 4053 { /* MX Master mouse over Bluetooth */ 4054 4054 HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 0xb012), 4055 4055 .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 }, 4056 + { /* MX Ergo trackball over Bluetooth */ 4057 + HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 0xb01d) }, 4056 4058 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 0xb01e), 4057 4059 .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 }, 4058 4060 { /* MX Master 3 mouse over Bluetooth */
+4
drivers/hid/hid-multitouch.c
··· 2054 2054 HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8, 2055 2055 USB_VENDOR_ID_SYNAPTICS, 0xce08) }, 2056 2056 2057 + { .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT, 2058 + HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8, 2059 + USB_VENDOR_ID_SYNAPTICS, 0xce09) }, 2060 + 2057 2061 /* TopSeed panels */ 2058 2062 { .driver_data = MT_CLS_TOPSEED, 2059 2063 MT_USB_DEVICE(USB_VENDOR_ID_TOPSEED2,
+1 -1
drivers/hid/hid-uclogic-params.c
··· 90 90 goto cleanup; 91 91 } else if (rc < 0) { 92 92 hid_err(hdev, 93 - "failed retrieving string descriptor #%hhu: %d\n", 93 + "failed retrieving string descriptor #%u: %d\n", 94 94 idx, rc); 95 95 goto cleanup; 96 96 }
+1 -1
drivers/hid/hid-wiimote-core.c
··· 1482 1482 wdata->state.cmd_err = err; 1483 1483 wiimote_cmd_complete(wdata); 1484 1484 } else if (err) { 1485 - hid_warn(wdata->hdev, "Remote error %hhu on req %hhu\n", err, 1485 + hid_warn(wdata->hdev, "Remote error %u on req %u\n", err, 1486 1486 cmd); 1487 1487 } 1488 1488 }
+32 -3
drivers/hid/wacom_sys.c
··· 1270 1270 group); 1271 1271 } 1272 1272 1273 + static void wacom_devm_kfifo_release(struct device *dev, void *res) 1274 + { 1275 + struct kfifo_rec_ptr_2 *devres = res; 1276 + 1277 + kfifo_free(devres); 1278 + } 1279 + 1280 + static int wacom_devm_kfifo_alloc(struct wacom *wacom) 1281 + { 1282 + struct wacom_wac *wacom_wac = &wacom->wacom_wac; 1283 + struct kfifo_rec_ptr_2 *pen_fifo = &wacom_wac->pen_fifo; 1284 + int error; 1285 + 1286 + pen_fifo = devres_alloc(wacom_devm_kfifo_release, 1287 + sizeof(struct kfifo_rec_ptr_2), 1288 + GFP_KERNEL); 1289 + 1290 + if (!pen_fifo) 1291 + return -ENOMEM; 1292 + 1293 + error = kfifo_alloc(pen_fifo, WACOM_PKGLEN_MAX, GFP_KERNEL); 1294 + if (error) { 1295 + devres_free(pen_fifo); 1296 + return error; 1297 + } 1298 + 1299 + devres_add(&wacom->hdev->dev, pen_fifo); 1300 + 1301 + return 0; 1302 + } 1303 + 1273 1304 enum led_brightness wacom_leds_brightness_get(struct wacom_led *led) 1274 1305 { 1275 1306 struct wacom *wacom = led->wacom; ··· 2755 2724 if (features->check_for_hid_type && features->hid_type != hdev->type) 2756 2725 return -ENODEV; 2757 2726 2758 - error = kfifo_alloc(&wacom_wac->pen_fifo, WACOM_PKGLEN_MAX, GFP_KERNEL); 2727 + error = wacom_devm_kfifo_alloc(wacom); 2759 2728 if (error) 2760 2729 return error; 2761 2730 ··· 2817 2786 2818 2787 if (wacom->wacom_wac.features.type != REMOTE) 2819 2788 wacom_release_resources(wacom); 2820 - 2821 - kfifo_free(&wacom_wac->pen_fifo); 2822 2789 } 2823 2790 2824 2791 #ifdef CONFIG_PM
-2
drivers/hv/vmbus_drv.c
··· 2550 2550 /* Make sure conn_state is set as hv_synic_cleanup checks for it */ 2551 2551 mb(); 2552 2552 cpuhp_remove_state(hyperv_cpuhp_online); 2553 - hyperv_cleanup(); 2554 2553 }; 2555 2554 2556 2555 static void hv_crash_handler(struct pt_regs *regs) ··· 2565 2566 cpu = smp_processor_id(); 2566 2567 hv_stimer_cleanup(cpu); 2567 2568 hv_synic_disable_regs(cpu); 2568 - hyperv_cleanup(); 2569 2569 }; 2570 2570 2571 2571 static int hv_synic_suspend(void)
+3 -1
drivers/infiniband/core/cma_configfs.c
··· 131 131 return ret; 132 132 133 133 gid_type = ib_cache_gid_parse_type_str(buf); 134 - if (gid_type < 0) 134 + if (gid_type < 0) { 135 + cma_configfs_params_put(cma_dev); 135 136 return -EINVAL; 137 + } 136 138 137 139 ret = cma_set_default_gid_type(cma_dev, group->port_num, gid_type); 138 140
+1
drivers/infiniband/core/restrack.c
··· 254 254 } else { 255 255 ret = xa_alloc_cyclic(&rt->xa, &res->id, res, xa_limit_32b, 256 256 &rt->next_id, GFP_KERNEL); 257 + ret = (ret < 0) ? ret : 0; 257 258 } 258 259 259 260 out:
+73 -64
drivers/infiniband/core/ucma.c
··· 95 95 u64 uid; 96 96 97 97 struct list_head list; 98 - /* sync between removal event and id destroy, protected by file mut */ 99 - int destroying; 100 98 struct work_struct close_work; 101 99 }; 102 100 ··· 120 122 static DEFINE_XARRAY_ALLOC(multicast_table); 121 123 122 124 static const struct file_operations ucma_fops; 123 - static int __destroy_id(struct ucma_context *ctx); 125 + static int ucma_destroy_private_ctx(struct ucma_context *ctx); 124 126 125 127 static inline struct ucma_context *_ucma_find_context(int id, 126 128 struct ucma_file *file) ··· 177 179 178 180 /* once all inflight tasks are finished, we close all underlying 179 181 * resources. The context is still alive till its explicit destryoing 180 - * by its creator. 182 + * by its creator. This puts back the xarray's reference. 181 183 */ 182 184 ucma_put_ctx(ctx); 183 185 wait_for_completion(&ctx->comp); 184 186 /* No new events will be generated after destroying the id. */ 185 187 rdma_destroy_id(ctx->cm_id); 186 188 187 - /* 188 - * At this point ctx->ref is zero so the only place the ctx can be is in 189 - * a uevent or in __destroy_id(). Since the former doesn't touch 190 - * ctx->cm_id and the latter sync cancels this, there is no races with 191 - * this store. 
192 - */ 189 + /* Reading the cm_id without holding a positive ref is not allowed */ 193 190 ctx->cm_id = NULL; 194 191 } 195 192 ··· 197 204 return NULL; 198 205 199 206 INIT_WORK(&ctx->close_work, ucma_close_id); 200 - refcount_set(&ctx->ref, 1); 201 207 init_completion(&ctx->comp); 202 208 /* So list_del() will work if we don't do ucma_finish_ctx() */ 203 209 INIT_LIST_HEAD(&ctx->list); ··· 208 216 return NULL; 209 217 } 210 218 return ctx; 219 + } 220 + 221 + static void ucma_set_ctx_cm_id(struct ucma_context *ctx, 222 + struct rdma_cm_id *cm_id) 223 + { 224 + refcount_set(&ctx->ref, 1); 225 + ctx->cm_id = cm_id; 211 226 } 212 227 213 228 static void ucma_finish_ctx(struct ucma_context *ctx) ··· 302 303 ctx = ucma_alloc_ctx(listen_ctx->file); 303 304 if (!ctx) 304 305 goto err_backlog; 305 - ctx->cm_id = cm_id; 306 + ucma_set_ctx_cm_id(ctx, cm_id); 306 307 307 308 uevent = ucma_create_uevent(listen_ctx, event); 308 309 if (!uevent) ··· 320 321 return 0; 321 322 322 323 err_alloc: 323 - xa_erase(&ctx_table, ctx->id); 324 - kfree(ctx); 324 + ucma_destroy_private_ctx(ctx); 325 325 err_backlog: 326 326 atomic_inc(&listen_ctx->backlog); 327 327 /* Returning error causes the new ID to be destroyed */ ··· 354 356 wake_up_interruptible(&ctx->file->poll_wait); 355 357 } 356 358 357 - if (event->event == RDMA_CM_EVENT_DEVICE_REMOVAL && !ctx->destroying) 358 - queue_work(system_unbound_wq, &ctx->close_work); 359 + if (event->event == RDMA_CM_EVENT_DEVICE_REMOVAL) { 360 + xa_lock(&ctx_table); 361 + if (xa_load(&ctx_table, ctx->id) == ctx) 362 + queue_work(system_unbound_wq, &ctx->close_work); 363 + xa_unlock(&ctx_table); 364 + } 359 365 return 0; 360 366 } 361 367 ··· 463 461 ret = PTR_ERR(cm_id); 464 462 goto err1; 465 463 } 466 - ctx->cm_id = cm_id; 464 + ucma_set_ctx_cm_id(ctx, cm_id); 467 465 468 466 resp.id = ctx->id; 469 467 if (copy_to_user(u64_to_user_ptr(cmd.response), 470 468 &resp, sizeof(resp))) { 471 - xa_erase(&ctx_table, ctx->id); 472 - __destroy_id(ctx); 
469 + ucma_destroy_private_ctx(ctx); 473 470 return -EFAULT; 474 471 } 475 472 ··· 478 477 return 0; 479 478 480 479 err1: 481 - xa_erase(&ctx_table, ctx->id); 482 - kfree(ctx); 480 + ucma_destroy_private_ctx(ctx); 483 481 return ret; 484 482 } 485 483 ··· 516 516 rdma_unlock_handler(mc->ctx->cm_id); 517 517 } 518 518 519 - /* 520 - * ucma_free_ctx is called after the underlying rdma CM-ID is destroyed. At 521 - * this point, no new events will be reported from the hardware. However, we 522 - * still need to cleanup the UCMA context for this ID. Specifically, there 523 - * might be events that have not yet been consumed by the user space software. 524 - * mutex. After that we release them as needed. 525 - */ 526 - static int ucma_free_ctx(struct ucma_context *ctx) 519 + static int ucma_cleanup_ctx_events(struct ucma_context *ctx) 527 520 { 528 521 int events_reported; 529 522 struct ucma_event *uevent, *tmp; 530 523 LIST_HEAD(list); 531 524 532 - ucma_cleanup_multicast(ctx); 533 - 534 - /* Cleanup events not yet reported to the user. */ 525 + /* Cleanup events not yet reported to the user.*/ 535 526 mutex_lock(&ctx->file->mut); 536 527 list_for_each_entry_safe(uevent, tmp, &ctx->file->event_list, list) { 537 - if (uevent->ctx == ctx || uevent->conn_req_ctx == ctx) 528 + if (uevent->ctx != ctx) 529 + continue; 530 + 531 + if (uevent->resp.event == RDMA_CM_EVENT_CONNECT_REQUEST && 532 + xa_cmpxchg(&ctx_table, uevent->conn_req_ctx->id, 533 + uevent->conn_req_ctx, XA_ZERO_ENTRY, 534 + GFP_KERNEL) == uevent->conn_req_ctx) { 538 535 list_move_tail(&uevent->list, &list); 536 + continue; 537 + } 538 + list_del(&uevent->list); 539 + kfree(uevent); 539 540 } 540 541 list_del(&ctx->list); 541 542 events_reported = ctx->events_reported; 542 543 mutex_unlock(&ctx->file->mut); 543 544 544 545 /* 545 - * If this was a listening ID then any connections spawned from it 546 - * that have not been delivered to userspace are cleaned up too. 547 - * Must be done outside any locks. 
546 + * If this was a listening ID then any connections spawned from it that 547 + * have not been delivered to userspace are cleaned up too. Must be done 548 + * outside any locks. 548 549 */ 549 550 list_for_each_entry_safe(uevent, tmp, &list, list) { 550 - list_del(&uevent->list); 551 - if (uevent->resp.event == RDMA_CM_EVENT_CONNECT_REQUEST && 552 - uevent->conn_req_ctx != ctx) 553 - __destroy_id(uevent->conn_req_ctx); 551 + ucma_destroy_private_ctx(uevent->conn_req_ctx); 554 552 kfree(uevent); 555 553 } 556 - 557 - mutex_destroy(&ctx->mutex); 558 - kfree(ctx); 559 554 return events_reported; 560 555 } 561 556 562 - static int __destroy_id(struct ucma_context *ctx) 557 + /* 558 + * When this is called the xarray must have a XA_ZERO_ENTRY in the ctx->id (ie 559 + * the ctx is not public to the user). This either because: 560 + * - ucma_finish_ctx() hasn't been called 561 + * - xa_cmpxchg() succeed to remove the entry (only one thread can succeed) 562 + */ 563 + static int ucma_destroy_private_ctx(struct ucma_context *ctx) 563 564 { 564 - /* 565 - * If the refcount is already 0 then ucma_close_id() has already 566 - * destroyed the cm_id, otherwise holding the refcount keeps cm_id 567 - * valid. Prevent queue_work() from being called. 568 - */ 569 - if (refcount_inc_not_zero(&ctx->ref)) { 570 - rdma_lock_handler(ctx->cm_id); 571 - ctx->destroying = 1; 572 - rdma_unlock_handler(ctx->cm_id); 573 - ucma_put_ctx(ctx); 574 - } 565 + int events_reported; 575 566 567 + /* 568 + * Destroy the underlying cm_id. New work queuing is prevented now by 569 + * the removal from the xarray. Once the work is cancled ref will either 570 + * be 0 because the work ran to completion and consumed the ref from the 571 + * xarray, or it will be positive because we still have the ref from the 572 + * xarray. 
This can also be 0 in cases where cm_id was never set 573 + */ 576 574 cancel_work_sync(&ctx->close_work); 577 - /* At this point it's guaranteed that there is no inflight closing task */ 578 - if (ctx->cm_id) 575 + if (refcount_read(&ctx->ref)) 579 576 ucma_close_id(&ctx->close_work); 580 - return ucma_free_ctx(ctx); 577 + 578 + events_reported = ucma_cleanup_ctx_events(ctx); 579 + ucma_cleanup_multicast(ctx); 580 + 581 + WARN_ON(xa_cmpxchg(&ctx_table, ctx->id, XA_ZERO_ENTRY, NULL, 582 + GFP_KERNEL) != NULL); 583 + mutex_destroy(&ctx->mutex); 584 + kfree(ctx); 585 + return events_reported; 581 586 } 582 587 583 588 static ssize_t ucma_destroy_id(struct ucma_file *file, const char __user *inbuf, ··· 601 596 602 597 xa_lock(&ctx_table); 603 598 ctx = _ucma_find_context(cmd.id, file); 604 - if (!IS_ERR(ctx)) 605 - __xa_erase(&ctx_table, ctx->id); 599 + if (!IS_ERR(ctx)) { 600 + if (__xa_cmpxchg(&ctx_table, ctx->id, ctx, XA_ZERO_ENTRY, 601 + GFP_KERNEL) != ctx) 602 + ctx = ERR_PTR(-ENOENT); 603 + } 606 604 xa_unlock(&ctx_table); 607 605 608 606 if (IS_ERR(ctx)) 609 607 return PTR_ERR(ctx); 610 608 611 - resp.events_reported = __destroy_id(ctx); 609 + resp.events_reported = ucma_destroy_private_ctx(ctx); 612 610 if (copy_to_user(u64_to_user_ptr(cmd.response), 613 611 &resp, sizeof(resp))) 614 612 ret = -EFAULT; ··· 1785 1777 * prevented by this being a FD release function. The list_add_tail() in 1786 1778 * ucma_connect_event_handler() can run concurrently, however it only 1787 1779 * adds to the list *after* a listening ID. By only reading the first of 1788 - * the list, and relying on __destroy_id() to block 1780 + * the list, and relying on ucma_destroy_private_ctx() to block 1789 1781 * ucma_connect_event_handler(), no additional locking is needed. 
1790 1782 */ 1791 1783 while (!list_empty(&file->ctx_list)) { 1792 1784 struct ucma_context *ctx = list_first_entry( 1793 1785 &file->ctx_list, struct ucma_context, list); 1794 1786 1795 - xa_erase(&ctx_table, ctx->id); 1796 - __destroy_id(ctx); 1787 + WARN_ON(xa_cmpxchg(&ctx_table, ctx->id, ctx, XA_ZERO_ENTRY, 1788 + GFP_KERNEL) != ctx); 1789 + ucma_destroy_private_ctx(ctx); 1797 1790 } 1798 1791 kfree(file); 1799 1792 return 0;
+1 -1
drivers/infiniband/core/umem.c
··· 135 135 */ 136 136 if (mask) 137 137 pgsz_bitmap &= GENMASK(count_trailing_zeros(mask), 0); 138 - return rounddown_pow_of_two(pgsz_bitmap); 138 + return pgsz_bitmap ? rounddown_pow_of_two(pgsz_bitmap) : 0; 139 139 } 140 140 EXPORT_SYMBOL(ib_umem_find_best_pgsz); 141 141
+2 -2
drivers/infiniband/hw/mlx5/main.c
··· 3956 3956 3957 3957 err = set_has_smi_cap(dev); 3958 3958 if (err) 3959 - return err; 3959 + goto err_mp; 3960 3960 3961 3961 if (!mlx5_core_mp_enabled(mdev)) { 3962 3962 for (i = 1; i <= dev->num_ports; i++) { ··· 4319 4319 4320 4320 err = mlx5_alloc_bfreg(dev->mdev, &dev->fp_bfreg, false, true); 4321 4321 if (err) 4322 - mlx5_free_bfreg(dev->mdev, &dev->fp_bfreg); 4322 + mlx5_free_bfreg(dev->mdev, &dev->bfreg); 4323 4323 4324 4324 return err; 4325 4325 }
+1 -1
drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
··· 434 434 pr_err("%s(%d) Freeing in use pdid=0x%x.\n", 435 435 __func__, dev->id, pd->id); 436 436 } 437 - kfree(uctx->cntxt_pd); 438 437 uctx->cntxt_pd = NULL; 439 438 _ocrdma_dealloc_pd(dev, pd); 439 + kfree(pd); 440 440 } 441 441 442 442 static struct ocrdma_pd *ocrdma_get_ucontext_pd(struct ocrdma_ucontext *uctx)
+3
drivers/infiniband/hw/usnic/usnic_ib_verbs.c
··· 214 214 215 215 } 216 216 usnic_uiom_free_dev_list(dev_list); 217 + dev_list = NULL; 217 218 } 218 219 219 220 /* Try to find resources on an unused vf */ ··· 240 239 qp_grp_check: 241 240 if (IS_ERR_OR_NULL(qp_grp)) { 242 241 usnic_err("Failed to allocate qp_grp\n"); 242 + if (usnic_ib_share_vf) 243 + usnic_uiom_free_dev_list(dev_list); 243 244 return ERR_PTR(qp_grp ? PTR_ERR(qp_grp) : -ENOMEM); 244 245 } 245 246 return qp_grp;
+2
drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
··· 325 325 } 326 326 327 327 static const struct of_device_id __maybe_unused qcom_smmu_impl_of_match[] = { 328 + { .compatible = "qcom,msm8998-smmu-v2" }, 328 329 { .compatible = "qcom,sc7180-smmu-500" }, 330 + { .compatible = "qcom,sdm630-smmu-v2" }, 329 331 { .compatible = "qcom,sdm845-smmu-500" }, 330 332 { .compatible = "qcom,sm8150-smmu-500" }, 331 333 { .compatible = "qcom,sm8250-smmu-500" },
-1
drivers/iommu/intel/iommu.c
··· 38 38 #include <linux/dmi.h> 39 39 #include <linux/pci-ats.h> 40 40 #include <linux/memblock.h> 41 - #include <linux/dma-map-ops.h> 42 41 #include <linux/dma-direct.h> 43 42 #include <linux/crash_dump.h> 44 43 #include <linux/numa.h>
+20 -2
drivers/iommu/intel/svm.c
··· 118 118 iommu->flags |= VTD_FLAG_SVM_CAPABLE; 119 119 } 120 120 121 - static void intel_flush_svm_range_dev (struct intel_svm *svm, struct intel_svm_dev *sdev, 122 - unsigned long address, unsigned long pages, int ih) 121 + static void __flush_svm_range_dev(struct intel_svm *svm, 122 + struct intel_svm_dev *sdev, 123 + unsigned long address, 124 + unsigned long pages, int ih) 123 125 { 124 126 struct qi_desc desc; 125 127 ··· 169 167 desc.qw2 = 0; 170 168 desc.qw3 = 0; 171 169 qi_submit_sync(sdev->iommu, &desc, 1, 0); 170 + } 171 + } 172 + 173 + static void intel_flush_svm_range_dev(struct intel_svm *svm, 174 + struct intel_svm_dev *sdev, 175 + unsigned long address, 176 + unsigned long pages, int ih) 177 + { 178 + unsigned long shift = ilog2(__roundup_pow_of_two(pages)); 179 + unsigned long align = (1ULL << (VTD_PAGE_SHIFT + shift)); 180 + unsigned long start = ALIGN_DOWN(address, align); 181 + unsigned long end = ALIGN(address + (pages << VTD_PAGE_SHIFT), align); 182 + 183 + while (start < end) { 184 + __flush_svm_range_dev(svm, sdev, start, align >> VTD_PAGE_SHIFT, ih); 185 + start += align; 172 186 } 173 187 } 174 188
+2
drivers/md/Kconfig
··· 605 605 select BLK_DEV_INTEGRITY 606 606 select DM_BUFIO 607 607 select CRYPTO 608 + select CRYPTO_SKCIPHER 608 609 select ASYNC_XOR 609 610 help 610 611 This device-mapper target emulates a block device that has ··· 623 622 tristate "Drive-managed zoned block device target support" 624 623 depends on BLK_DEV_DM 625 624 depends on BLK_DEV_ZONED 625 + select CRC32 626 626 help 627 627 This device-mapper target takes a host-managed or host-aware zoned 628 628 block device and exposes most of its capacity as a regular block
+6
drivers/md/dm-bufio.c
··· 1534 1534 } 1535 1535 EXPORT_SYMBOL_GPL(dm_bufio_get_device_size); 1536 1536 1537 + struct dm_io_client *dm_bufio_get_dm_io_client(struct dm_bufio_client *c) 1538 + { 1539 + return c->dm_io; 1540 + } 1541 + EXPORT_SYMBOL_GPL(dm_bufio_get_dm_io_client); 1542 + 1537 1543 sector_t dm_bufio_get_block_number(struct dm_buffer *b) 1538 1544 { 1539 1545 return b->block;
+152 -18
drivers/md/dm-crypt.c
··· 1454 1454 static void kcryptd_async_done(struct crypto_async_request *async_req, 1455 1455 int error); 1456 1456 1457 - static void crypt_alloc_req_skcipher(struct crypt_config *cc, 1457 + static int crypt_alloc_req_skcipher(struct crypt_config *cc, 1458 1458 struct convert_context *ctx) 1459 1459 { 1460 1460 unsigned key_index = ctx->cc_sector & (cc->tfms_count - 1); 1461 1461 1462 - if (!ctx->r.req) 1463 - ctx->r.req = mempool_alloc(&cc->req_pool, GFP_NOIO); 1462 + if (!ctx->r.req) { 1463 + ctx->r.req = mempool_alloc(&cc->req_pool, in_interrupt() ? GFP_ATOMIC : GFP_NOIO); 1464 + if (!ctx->r.req) 1465 + return -ENOMEM; 1466 + } 1464 1467 1465 1468 skcipher_request_set_tfm(ctx->r.req, cc->cipher_tfm.tfms[key_index]); 1466 1469 ··· 1474 1471 skcipher_request_set_callback(ctx->r.req, 1475 1472 CRYPTO_TFM_REQ_MAY_BACKLOG, 1476 1473 kcryptd_async_done, dmreq_of_req(cc, ctx->r.req)); 1474 + 1475 + return 0; 1477 1476 } 1478 1477 1479 - static void crypt_alloc_req_aead(struct crypt_config *cc, 1478 + static int crypt_alloc_req_aead(struct crypt_config *cc, 1480 1479 struct convert_context *ctx) 1481 1480 { 1482 - if (!ctx->r.req_aead) 1483 - ctx->r.req_aead = mempool_alloc(&cc->req_pool, GFP_NOIO); 1481 + if (!ctx->r.req) { 1482 + ctx->r.req = mempool_alloc(&cc->req_pool, in_interrupt() ? 
GFP_ATOMIC : GFP_NOIO); 1483 + if (!ctx->r.req) 1484 + return -ENOMEM; 1485 + } 1484 1486 1485 1487 aead_request_set_tfm(ctx->r.req_aead, cc->cipher_tfm.tfms_aead[0]); 1486 1488 ··· 1496 1488 aead_request_set_callback(ctx->r.req_aead, 1497 1489 CRYPTO_TFM_REQ_MAY_BACKLOG, 1498 1490 kcryptd_async_done, dmreq_of_req(cc, ctx->r.req_aead)); 1491 + 1492 + return 0; 1499 1493 } 1500 1494 1501 - static void crypt_alloc_req(struct crypt_config *cc, 1495 + static int crypt_alloc_req(struct crypt_config *cc, 1502 1496 struct convert_context *ctx) 1503 1497 { 1504 1498 if (crypt_integrity_aead(cc)) 1505 - crypt_alloc_req_aead(cc, ctx); 1499 + return crypt_alloc_req_aead(cc, ctx); 1506 1500 else 1507 - crypt_alloc_req_skcipher(cc, ctx); 1501 + return crypt_alloc_req_skcipher(cc, ctx); 1508 1502 } 1509 1503 1510 1504 static void crypt_free_req_skcipher(struct crypt_config *cc, ··· 1539 1529 * Encrypt / decrypt data from one bio to another one (can be the same one) 1540 1530 */ 1541 1531 static blk_status_t crypt_convert(struct crypt_config *cc, 1542 - struct convert_context *ctx, bool atomic) 1532 + struct convert_context *ctx, bool atomic, bool reset_pending) 1543 1533 { 1544 1534 unsigned int tag_offset = 0; 1545 1535 unsigned int sector_step = cc->sector_size >> SECTOR_SHIFT; 1546 1536 int r; 1547 1537 1548 - atomic_set(&ctx->cc_pending, 1); 1538 + /* 1539 + * if reset_pending is set we are dealing with the bio for the first time, 1540 + * else we're continuing to work on the previous bio, so don't mess with 1541 + * the cc_pending counter 1542 + */ 1543 + if (reset_pending) 1544 + atomic_set(&ctx->cc_pending, 1); 1549 1545 1550 1546 while (ctx->iter_in.bi_size && ctx->iter_out.bi_size) { 1551 1547 1552 - crypt_alloc_req(cc, ctx); 1548 + r = crypt_alloc_req(cc, ctx); 1549 + if (r) { 1550 + complete(&ctx->restart); 1551 + return BLK_STS_DEV_RESOURCE; 1552 + } 1553 + 1553 1554 atomic_inc(&ctx->cc_pending); 1554 1555 1555 1556 if (crypt_integrity_aead(cc)) ··· 1574 1553 * but 
the driver request queue is full, let's wait. 1575 1554 */ 1576 1555 case -EBUSY: 1577 - wait_for_completion(&ctx->restart); 1556 + if (in_interrupt()) { 1557 + if (try_wait_for_completion(&ctx->restart)) { 1558 + /* 1559 + * we don't have to block to wait for completion, 1560 + * so proceed 1561 + */ 1562 + } else { 1563 + /* 1564 + * we can't wait for completion without blocking 1565 + * exit and continue processing in a workqueue 1566 + */ 1567 + ctx->r.req = NULL; 1568 + ctx->cc_sector += sector_step; 1569 + tag_offset++; 1570 + return BLK_STS_DEV_RESOURCE; 1571 + } 1572 + } else { 1573 + wait_for_completion(&ctx->restart); 1574 + } 1578 1575 reinit_completion(&ctx->restart); 1579 1576 fallthrough; 1580 1577 /* ··· 1730 1691 atomic_inc(&io->io_pending); 1731 1692 } 1732 1693 1694 + static void kcryptd_io_bio_endio(struct work_struct *work) 1695 + { 1696 + struct dm_crypt_io *io = container_of(work, struct dm_crypt_io, work); 1697 + bio_endio(io->base_bio); 1698 + } 1699 + 1733 1700 /* 1734 1701 * One of the bios was finished. Check for completion of 1735 1702 * the whole request and correctly clean up the buffer. ··· 1758 1713 kfree(io->integrity_metadata); 1759 1714 1760 1715 base_bio->bi_status = error; 1761 - bio_endio(base_bio); 1716 + 1717 + /* 1718 + * If we are running this function from our tasklet, 1719 + * we can't call bio_endio() here, because it will call 1720 + * clone_endio() from dm.c, which in turn will 1721 + * free the current struct dm_crypt_io structure with 1722 + * our tasklet. In this case we need to delay bio_endio() 1723 + * execution to after the tasklet is done and dequeued. 
1724 + */ 1725 + if (tasklet_trylock(&io->tasklet)) { 1726 + tasklet_unlock(&io->tasklet); 1727 + bio_endio(base_bio); 1728 + return; 1729 + } 1730 + 1731 + INIT_WORK(&io->work, kcryptd_io_bio_endio); 1732 + queue_work(cc->io_queue, &io->work); 1762 1733 } 1763 1734 1764 1735 /* ··· 2006 1945 } 2007 1946 } 2008 1947 1948 + static void kcryptd_crypt_write_continue(struct work_struct *work) 1949 + { 1950 + struct dm_crypt_io *io = container_of(work, struct dm_crypt_io, work); 1951 + struct crypt_config *cc = io->cc; 1952 + struct convert_context *ctx = &io->ctx; 1953 + int crypt_finished; 1954 + sector_t sector = io->sector; 1955 + blk_status_t r; 1956 + 1957 + wait_for_completion(&ctx->restart); 1958 + reinit_completion(&ctx->restart); 1959 + 1960 + r = crypt_convert(cc, &io->ctx, true, false); 1961 + if (r) 1962 + io->error = r; 1963 + crypt_finished = atomic_dec_and_test(&ctx->cc_pending); 1964 + if (!crypt_finished && kcryptd_crypt_write_inline(cc, ctx)) { 1965 + /* Wait for completion signaled by kcryptd_async_done() */ 1966 + wait_for_completion(&ctx->restart); 1967 + crypt_finished = 1; 1968 + } 1969 + 1970 + /* Encryption was already finished, submit io now */ 1971 + if (crypt_finished) { 1972 + kcryptd_crypt_write_io_submit(io, 0); 1973 + io->sector = sector; 1974 + } 1975 + 1976 + crypt_dec_pending(io); 1977 + } 1978 + 2009 1979 static void kcryptd_crypt_write_convert(struct dm_crypt_io *io) 2010 1980 { 2011 1981 struct crypt_config *cc = io->cc; ··· 2065 1973 2066 1974 crypt_inc_pending(io); 2067 1975 r = crypt_convert(cc, ctx, 2068 - test_bit(DM_CRYPT_NO_WRITE_WORKQUEUE, &cc->flags)); 1976 + test_bit(DM_CRYPT_NO_WRITE_WORKQUEUE, &cc->flags), true); 1977 + /* 1978 + * Crypto API backlogged the request, because its queue was full 1979 + * and we're in softirq context, so continue from a workqueue 1980 + * (TODO: is it actually possible to be in softirq in the write path?) 
1981 + */ 1982 + if (r == BLK_STS_DEV_RESOURCE) { 1983 + INIT_WORK(&io->work, kcryptd_crypt_write_continue); 1984 + queue_work(cc->crypt_queue, &io->work); 1985 + return; 1986 + } 2069 1987 if (r) 2070 1988 io->error = r; 2071 1989 crypt_finished = atomic_dec_and_test(&ctx->cc_pending); ··· 2100 1998 crypt_dec_pending(io); 2101 1999 } 2102 2000 2001 + static void kcryptd_crypt_read_continue(struct work_struct *work) 2002 + { 2003 + struct dm_crypt_io *io = container_of(work, struct dm_crypt_io, work); 2004 + struct crypt_config *cc = io->cc; 2005 + blk_status_t r; 2006 + 2007 + wait_for_completion(&io->ctx.restart); 2008 + reinit_completion(&io->ctx.restart); 2009 + 2010 + r = crypt_convert(cc, &io->ctx, true, false); 2011 + if (r) 2012 + io->error = r; 2013 + 2014 + if (atomic_dec_and_test(&io->ctx.cc_pending)) 2015 + kcryptd_crypt_read_done(io); 2016 + 2017 + crypt_dec_pending(io); 2018 + } 2019 + 2103 2020 static void kcryptd_crypt_read_convert(struct dm_crypt_io *io) 2104 2021 { 2105 2022 struct crypt_config *cc = io->cc; ··· 2130 2009 io->sector); 2131 2010 2132 2011 r = crypt_convert(cc, &io->ctx, 2133 - test_bit(DM_CRYPT_NO_READ_WORKQUEUE, &cc->flags)); 2012 + test_bit(DM_CRYPT_NO_READ_WORKQUEUE, &cc->flags), true); 2013 + /* 2014 + * Crypto API backlogged the request, because its queue was full 2015 + * and we're in softirq context, so continue from a workqueue 2016 + */ 2017 + if (r == BLK_STS_DEV_RESOURCE) { 2018 + INIT_WORK(&io->work, kcryptd_crypt_read_continue); 2019 + queue_work(cc->crypt_queue, &io->work); 2020 + return; 2021 + } 2134 2022 if (r) 2135 2023 io->error = r; 2136 2024 ··· 2221 2091 2222 2092 if ((bio_data_dir(io->base_bio) == READ && test_bit(DM_CRYPT_NO_READ_WORKQUEUE, &cc->flags)) || 2223 2093 (bio_data_dir(io->base_bio) == WRITE && test_bit(DM_CRYPT_NO_WRITE_WORKQUEUE, &cc->flags))) { 2224 - if (in_irq()) { 2225 - /* Crypto API's "skcipher_walk_first() refuses to work in hard IRQ context */ 2094 + /* 2095 + * in_irq(): Crypto API's 
skcipher_walk_first() refuses to work in hard IRQ context. 2096 + * irqs_disabled(): the kernel may run some IO completion from the idle thread, but 2097 + * it is being executed with irqs disabled. 2098 + */ 2099 + if (in_irq() || irqs_disabled()) { 2226 2100 tasklet_init(&io->tasklet, kcryptd_crypt_tasklet, (unsigned long)&io->work); 2227 2101 tasklet_schedule(&io->tasklet); 2228 2102 return;
+50 -12
drivers/md/dm-integrity.c
··· 1379 1379 #undef MAY_BE_HASH 1380 1380 } 1381 1381 1382 - static void dm_integrity_flush_buffers(struct dm_integrity_c *ic) 1382 + struct flush_request { 1383 + struct dm_io_request io_req; 1384 + struct dm_io_region io_reg; 1385 + struct dm_integrity_c *ic; 1386 + struct completion comp; 1387 + }; 1388 + 1389 + static void flush_notify(unsigned long error, void *fr_) 1390 + { 1391 + struct flush_request *fr = fr_; 1392 + if (unlikely(error != 0)) 1393 + dm_integrity_io_error(fr->ic, "flusing disk cache", -EIO); 1394 + complete(&fr->comp); 1395 + } 1396 + 1397 + static void dm_integrity_flush_buffers(struct dm_integrity_c *ic, bool flush_data) 1383 1398 { 1384 1399 int r; 1400 + 1401 + struct flush_request fr; 1402 + 1403 + if (!ic->meta_dev) 1404 + flush_data = false; 1405 + if (flush_data) { 1406 + fr.io_req.bi_op = REQ_OP_WRITE, 1407 + fr.io_req.bi_op_flags = REQ_PREFLUSH | REQ_SYNC, 1408 + fr.io_req.mem.type = DM_IO_KMEM, 1409 + fr.io_req.mem.ptr.addr = NULL, 1410 + fr.io_req.notify.fn = flush_notify, 1411 + fr.io_req.notify.context = &fr; 1412 + fr.io_req.client = dm_bufio_get_dm_io_client(ic->bufio), 1413 + fr.io_reg.bdev = ic->dev->bdev, 1414 + fr.io_reg.sector = 0, 1415 + fr.io_reg.count = 0, 1416 + fr.ic = ic; 1417 + init_completion(&fr.comp); 1418 + r = dm_io(&fr.io_req, 1, &fr.io_reg, NULL); 1419 + BUG_ON(r); 1420 + } 1421 + 1385 1422 r = dm_bufio_write_dirty_buffers(ic->bufio); 1386 1423 if (unlikely(r)) 1387 1424 dm_integrity_io_error(ic, "writing tags", r); 1425 + 1426 + if (flush_data) 1427 + wait_for_completion(&fr.comp); 1388 1428 } 1389 1429 1390 1430 static void sleep_on_endio_wait(struct dm_integrity_c *ic) ··· 2150 2110 2151 2111 if (unlikely(dio->op == REQ_OP_DISCARD) && likely(ic->mode != 'D')) { 2152 2112 integrity_metadata(&dio->work); 2153 - dm_integrity_flush_buffers(ic); 2113 + dm_integrity_flush_buffers(ic, false); 2154 2114 2155 2115 dio->in_flight = (atomic_t)ATOMIC_INIT(1); 2156 2116 dio->completion = NULL; ··· 2235 2195 flushes 
= bio_list_get(&ic->flush_bio_list); 2236 2196 if (unlikely(ic->mode != 'J')) { 2237 2197 spin_unlock_irq(&ic->endio_wait.lock); 2238 - dm_integrity_flush_buffers(ic); 2198 + dm_integrity_flush_buffers(ic, true); 2239 2199 goto release_flush_bios; 2240 2200 } 2241 2201 ··· 2449 2409 complete_journal_op(&comp); 2450 2410 wait_for_completion_io(&comp.comp); 2451 2411 2452 - dm_integrity_flush_buffers(ic); 2412 + dm_integrity_flush_buffers(ic, true); 2453 2413 } 2454 2414 2455 2415 static void integrity_writer(struct work_struct *w) ··· 2491 2451 { 2492 2452 int r; 2493 2453 2494 - dm_integrity_flush_buffers(ic); 2454 + dm_integrity_flush_buffers(ic, false); 2495 2455 if (dm_integrity_failed(ic)) 2496 2456 return; 2497 2457 ··· 2694 2654 unsigned long limit; 2695 2655 struct bio *bio; 2696 2656 2697 - dm_integrity_flush_buffers(ic); 2657 + dm_integrity_flush_buffers(ic, false); 2698 2658 2699 2659 range.logical_sector = 0; 2700 2660 range.n_sectors = ic->provided_data_sectors; ··· 2703 2663 add_new_range_and_wait(ic, &range); 2704 2664 spin_unlock_irq(&ic->endio_wait.lock); 2705 2665 2706 - dm_integrity_flush_buffers(ic); 2707 - if (ic->meta_dev) 2708 - blkdev_issue_flush(ic->dev->bdev, GFP_NOIO); 2666 + dm_integrity_flush_buffers(ic, true); 2709 2667 2710 2668 limit = ic->provided_data_sectors; 2711 2669 if (ic->sb->flags & cpu_to_le32(SB_FLAG_RECALCULATING)) { ··· 2972 2934 if (ic->meta_dev) 2973 2935 queue_work(ic->writer_wq, &ic->writer_work); 2974 2936 drain_workqueue(ic->writer_wq); 2975 - dm_integrity_flush_buffers(ic); 2937 + dm_integrity_flush_buffers(ic, true); 2976 2938 } 2977 2939 2978 2940 if (ic->mode == 'B') { 2979 - dm_integrity_flush_buffers(ic); 2941 + dm_integrity_flush_buffers(ic, true); 2980 2942 #if 1 2981 2943 /* set to 0 to test bitmap replay code */ 2982 2944 init_journal(ic, 0, ic->journal_sections, 0); ··· 3792 3754 unsigned extra_args; 3793 3755 struct dm_arg_set as; 3794 3756 static const struct dm_arg _args[] = { 3795 - {0, 9, "Invalid 
number of feature args"}, 3757 + {0, 15, "Invalid number of feature args"}, 3796 3758 }; 3797 3759 unsigned journal_sectors, interleave_sectors, buffer_sectors, journal_watermark, sync_msec; 3798 3760 bool should_write_sb;
+3 -3
drivers/md/dm-raid.c
··· 3729 3729 blk_limits_io_opt(limits, chunk_size_bytes * mddev_data_stripes(rs)); 3730 3730 3731 3731 /* 3732 - * RAID1 and RAID10 personalities require bio splitting, 3733 - * RAID0/4/5/6 don't and process large discard bios properly. 3732 + * RAID0 and RAID10 personalities require bio splitting, 3733 + * RAID1/4/5/6 don't and process large discard bios properly. 3734 3734 */ 3735 - if (rs_is_raid1(rs) || rs_is_raid10(rs)) { 3735 + if (rs_is_raid0(rs) || rs_is_raid10(rs)) { 3736 3736 limits->discard_granularity = chunk_size_bytes; 3737 3737 limits->max_discard_sectors = rs->md.chunk_sectors; 3738 3738 }
+24
drivers/md/dm-snap.c
··· 141 141 * for them to be committed. 142 142 */ 143 143 struct bio_list bios_queued_during_merge; 144 + 145 + /* 146 + * Flush data after merge. 147 + */ 148 + struct bio flush_bio; 144 149 }; 145 150 146 151 /* ··· 1126 1121 1127 1122 static void error_bios(struct bio *bio); 1128 1123 1124 + static int flush_data(struct dm_snapshot *s) 1125 + { 1126 + struct bio *flush_bio = &s->flush_bio; 1127 + 1128 + bio_reset(flush_bio); 1129 + bio_set_dev(flush_bio, s->origin->bdev); 1130 + flush_bio->bi_opf = REQ_OP_WRITE | REQ_PREFLUSH; 1131 + 1132 + return submit_bio_wait(flush_bio); 1133 + } 1134 + 1129 1135 static void merge_callback(int read_err, unsigned long write_err, void *context) 1130 1136 { 1131 1137 struct dm_snapshot *s = context; ··· 1147 1131 DMERR("Read error: shutting down merge."); 1148 1132 else 1149 1133 DMERR("Write error: shutting down merge."); 1134 + goto shut; 1135 + } 1136 + 1137 + if (flush_data(s) < 0) { 1138 + DMERR("Flush after merge failed: shutting down merge"); 1150 1139 goto shut; 1151 1140 } 1152 1141 ··· 1339 1318 s->first_merging_chunk = 0; 1340 1319 s->num_merging_chunks = 0; 1341 1320 bio_list_init(&s->bios_queued_during_merge); 1321 + bio_init(&s->flush_bio, NULL, 0); 1342 1322 1343 1323 /* Allocate hash table for COW data */ 1344 1324 if (init_hash_tables(s)) { ··· 1525 1503 mempool_exit(&s->pending_pool); 1526 1504 1527 1505 dm_exception_store_destroy(s->store); 1506 + 1507 + bio_uninit(&s->flush_bio); 1528 1508 1529 1509 dm_put_device(ti, s->cow); 1530 1510
+1 -1
drivers/md/dm.c
··· 562 562 * subset of the parent bdev; require extra privileges. 563 563 */ 564 564 if (!capable(CAP_SYS_RAWIO)) { 565 - DMWARN_LIMIT( 565 + DMDEBUG_LIMIT( 566 566 "%s: sending ioctl %x to DM device without required privilege.", 567 567 current->comm, cmd); 568 568 r = -ENOIOCTLCMD;
+1 -1
drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
··· 1491 1491 else 1492 1492 skb = alloc_can_skb(priv->ndev, (struct can_frame **)&cfd); 1493 1493 1494 - if (!cfd) { 1494 + if (!skb) { 1495 1495 stats->rx_dropped++; 1496 1496 return 0; 1497 1497 }
+2 -1
drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
··· 2532 2532 2533 2533 if (rc && ((struct hwrm_err_output *)&resp)->cmd_err == 2534 2534 NVM_INSTALL_UPDATE_CMD_ERR_CODE_FRAG_ERR) { 2535 - install.flags |= 2535 + install.flags = 2536 2536 cpu_to_le16(NVM_INSTALL_UPDATE_REQ_FLAGS_ALLOWED_TO_DEFRAG); 2537 2537 2538 2538 rc = _hwrm_send_message_silent(bp, &install, ··· 2546 2546 * UPDATE directory and try the flash again 2547 2547 */ 2548 2548 defrag_attempted = true; 2549 + install.flags = 0; 2549 2550 rc = __bnxt_flash_nvram(bp->dev, 2550 2551 BNX_DIR_TYPE_UPDATE, 2551 2552 BNX_DIR_ORDINAL_FIRST,
+6 -2
drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
··· 222 222 223 223 int bnxt_get_ulp_stat_ctxs(struct bnxt *bp) 224 224 { 225 - if (bnxt_ulp_registered(bp->edev, BNXT_ROCE_ULP)) 226 - return BNXT_MIN_ROCE_STAT_CTXS; 225 + if (bnxt_ulp_registered(bp->edev, BNXT_ROCE_ULP)) { 226 + struct bnxt_en_dev *edev = bp->edev; 227 + 228 + if (edev->ulp_tbl[BNXT_ROCE_ULP].msix_requested) 229 + return BNXT_MIN_ROCE_STAT_CTXS; 230 + } 227 231 228 232 return 0; 229 233 }
+7
drivers/net/ethernet/chelsio/cxgb4/t4_tcb.h
··· 40 40 #define TCB_L2T_IX_M 0xfffULL 41 41 #define TCB_L2T_IX_V(x) ((x) << TCB_L2T_IX_S) 42 42 43 + #define TCB_T_FLAGS_W 1 44 + #define TCB_T_FLAGS_S 0 45 + #define TCB_T_FLAGS_M 0xffffffffffffffffULL 46 + #define TCB_T_FLAGS_V(x) ((__u64)(x) << TCB_T_FLAGS_S) 47 + 48 + #define TCB_FIELD_COOKIE_TFLAG 1 49 + 43 50 #define TCB_SMAC_SEL_W 0 44 51 #define TCB_SMAC_SEL_S 24 45 52 #define TCB_SMAC_SEL_M 0xffULL
+4
drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls.h
··· 575 575 void chtls_tcp_push(struct sock *sk, int flags); 576 576 int chtls_push_frames(struct chtls_sock *csk, int comp); 577 577 int chtls_set_tcb_tflag(struct sock *sk, unsigned int bit_pos, int val); 578 + void chtls_set_tcb_field_rpl_skb(struct sock *sk, u16 word, 579 + u64 mask, u64 val, u8 cookie, 580 + int through_l2t); 578 581 int chtls_setkey(struct chtls_sock *csk, u32 keylen, u32 mode, int cipher_type); 582 + void chtls_set_quiesce_ctrl(struct sock *sk, int val); 579 583 void skb_entail(struct sock *sk, struct sk_buff *skb, int flags); 580 584 unsigned int keyid_to_addr(int start_addr, int keyid); 581 585 void free_tls_keyid(struct sock *sk);
+30 -2
drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
··· 32 32 #include "chtls.h" 33 33 #include "chtls_cm.h" 34 34 #include "clip_tbl.h" 35 + #include "t4_tcb.h" 35 36 36 37 /* 37 38 * State transitions and actions for close. Note that if we are in SYN_SENT ··· 268 267 if (sk->sk_state != TCP_SYN_RECV) 269 268 chtls_send_abort(sk, mode, skb); 270 269 else 271 - goto out; 270 + chtls_set_tcb_field_rpl_skb(sk, TCB_T_FLAGS_W, 271 + TCB_T_FLAGS_V(TCB_T_FLAGS_M), 0, 272 + TCB_FIELD_COOKIE_TFLAG, 1); 272 273 273 274 return; 274 275 out: ··· 1952 1949 else if (tcp_sk(sk)->linger2 < 0 && 1953 1950 !csk_flag_nochk(csk, CSK_ABORT_SHUTDOWN)) 1954 1951 chtls_abort_conn(sk, skb); 1952 + else if (csk_flag_nochk(csk, CSK_TX_DATA_SENT)) 1953 + chtls_set_quiesce_ctrl(sk, 0); 1955 1954 break; 1956 1955 default: 1957 1956 pr_info("close_con_rpl in bad state %d\n", sk->sk_state); ··· 2297 2292 return 0; 2298 2293 } 2299 2294 2295 + static int chtls_set_tcb_rpl(struct chtls_dev *cdev, struct sk_buff *skb) 2296 + { 2297 + struct cpl_set_tcb_rpl *rpl = cplhdr(skb) + RSS_HDR; 2298 + unsigned int hwtid = GET_TID(rpl); 2299 + struct sock *sk; 2300 + 2301 + sk = lookup_tid(cdev->tids, hwtid); 2302 + 2303 + /* return EINVAL if socket doesn't exist */ 2304 + if (!sk) 2305 + return -EINVAL; 2306 + 2307 + /* Reusing the skb as size of cpl_set_tcb_field structure 2308 + * is greater than cpl_abort_req 2309 + */ 2310 + if (TCB_COOKIE_G(rpl->cookie) == TCB_FIELD_COOKIE_TFLAG) 2311 + chtls_send_abort(sk, CPL_ABORT_SEND_RST, NULL); 2312 + 2313 + kfree_skb(skb); 2314 + return 0; 2315 + } 2316 + 2300 2317 chtls_handler_func chtls_handlers[NUM_CPL_CMDS] = { 2301 2318 [CPL_PASS_OPEN_RPL] = chtls_pass_open_rpl, 2302 2319 [CPL_CLOSE_LISTSRV_RPL] = chtls_close_listsrv_rpl, ··· 2331 2304 [CPL_CLOSE_CON_RPL] = chtls_conn_cpl, 2332 2305 [CPL_ABORT_REQ_RSS] = chtls_conn_cpl, 2333 2306 [CPL_ABORT_RPL_RSS] = chtls_conn_cpl, 2334 - [CPL_FW4_ACK] = chtls_wr_ack, 2307 + [CPL_FW4_ACK] = chtls_wr_ack, 2308 + [CPL_SET_TCB_RPL] = chtls_set_tcb_rpl, 2335 2309 };
+41
drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_hw.c
··· 88 88 return ret < 0 ? ret : 0; 89 89 } 90 90 91 + void chtls_set_tcb_field_rpl_skb(struct sock *sk, u16 word, 92 + u64 mask, u64 val, u8 cookie, 93 + int through_l2t) 94 + { 95 + struct sk_buff *skb; 96 + unsigned int wrlen; 97 + 98 + wrlen = sizeof(struct cpl_set_tcb_field) + sizeof(struct ulptx_idata); 99 + wrlen = roundup(wrlen, 16); 100 + 101 + skb = alloc_skb(wrlen, GFP_KERNEL | __GFP_NOFAIL); 102 + if (!skb) 103 + return; 104 + 105 + __set_tcb_field(sk, skb, word, mask, val, cookie, 0); 106 + send_or_defer(sk, tcp_sk(sk), skb, through_l2t); 107 + } 108 + 91 109 /* 92 110 * Set one of the t_flags bits in the TCB. 93 111 */ ··· 129 111 { 130 112 return chtls_set_tcb_field(sk, 1, (1ULL << TF_RX_QUIESCE_S), 131 113 TF_RX_QUIESCE_V(val)); 114 + } 115 + 116 + void chtls_set_quiesce_ctrl(struct sock *sk, int val) 117 + { 118 + struct chtls_sock *csk; 119 + struct sk_buff *skb; 120 + unsigned int wrlen; 121 + int ret; 122 + 123 + wrlen = sizeof(struct cpl_set_tcb_field) + sizeof(struct ulptx_idata); 124 + wrlen = roundup(wrlen, 16); 125 + 126 + skb = alloc_skb(wrlen, GFP_ATOMIC); 127 + if (!skb) 128 + return; 129 + 130 + csk = rcu_dereference_sk_user_data(sk); 131 + 132 + __set_tcb_field(sk, skb, 1, TF_RX_QUIESCE_V(1), 0, 0, 1); 133 + set_wr_txq(skb, CPL_PRIORITY_CONTROL, csk->port_id); 134 + ret = cxgb4_ofld_send(csk->egress_dev, skb); 135 + if (ret < 0) 136 + kfree_skb(skb); 132 137 } 133 138 134 139 /* TLS Key bitmap processing */
+1 -1
drivers/net/ethernet/intel/i40e/i40e_xsk.c
··· 348 348 * SBP is *not* set in PRT_SBPVSI (default not set). 349 349 */ 350 350 skb = i40e_construct_skb_zc(rx_ring, *bi); 351 - *bi = NULL; 352 351 if (!skb) { 353 352 rx_ring->rx_stats.alloc_buff_failed++; 354 353 break; 355 354 } 356 355 356 + *bi = NULL; 357 357 cleaned_count++; 358 358 i40e_inc_ntc(rx_ring); 359 359
-2
drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
··· 5882 5882 5883 5883 phylink_set(mask, Autoneg); 5884 5884 phylink_set_port_modes(mask); 5885 - phylink_set(mask, Pause); 5886 - phylink_set(mask, Asym_Pause); 5887 5885 5888 5886 switch (state->interface) { 5889 5887 case PHY_INTERFACE_MODE_10GBASER:
+8 -5
drivers/net/ethernet/mellanox/mlxsw/core_thermal.c
··· 19 19 #define MLXSW_THERMAL_ASIC_TEMP_NORM 75000 /* 75C */ 20 20 #define MLXSW_THERMAL_ASIC_TEMP_HIGH 85000 /* 85C */ 21 21 #define MLXSW_THERMAL_ASIC_TEMP_HOT 105000 /* 105C */ 22 - #define MLXSW_THERMAL_ASIC_TEMP_CRIT 110000 /* 110C */ 22 + #define MLXSW_THERMAL_ASIC_TEMP_CRIT 140000 /* 140C */ 23 23 #define MLXSW_THERMAL_HYSTERESIS_TEMP 5000 /* 5C */ 24 24 #define MLXSW_THERMAL_MODULE_TEMP_SHIFT (MLXSW_THERMAL_HYSTERESIS_TEMP * 2) 25 25 #define MLXSW_THERMAL_ZONE_MAX_NAME 16 ··· 176 176 if (err) 177 177 return err; 178 178 179 + if (crit_temp > emerg_temp) { 180 + dev_warn(dev, "%s : Critical threshold %d is above emergency threshold %d\n", 181 + tz->tzdev->type, crit_temp, emerg_temp); 182 + return 0; 183 + } 184 + 179 185 /* According to the system thermal requirements, the thermal zones are 180 186 * defined with four trip points. The critical and emergency 181 187 * temperature thresholds, provided by QSFP module are set as "active" ··· 196 190 tz->trips[MLXSW_THERMAL_TEMP_TRIP_NORM].temp = crit_temp; 197 191 tz->trips[MLXSW_THERMAL_TEMP_TRIP_HIGH].temp = crit_temp; 198 192 tz->trips[MLXSW_THERMAL_TEMP_TRIP_HOT].temp = emerg_temp; 199 - if (emerg_temp > crit_temp) 200 - tz->trips[MLXSW_THERMAL_TEMP_TRIP_CRIT].temp = emerg_temp + 193 + tz->trips[MLXSW_THERMAL_TEMP_TRIP_CRIT].temp = emerg_temp + 201 194 MLXSW_THERMAL_MODULE_TEMP_SHIFT; 202 - else 203 - tz->trips[MLXSW_THERMAL_TEMP_TRIP_CRIT].temp = emerg_temp; 204 195 205 196 return 0; 206 197 }
+1 -6
drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c
··· 564 564 .ndo_set_features = netxen_set_features, 565 565 }; 566 566 567 - static inline bool netxen_function_zero(struct pci_dev *pdev) 568 - { 569 - return (PCI_FUNC(pdev->devfn) == 0) ? true : false; 570 - } 571 - 572 567 static inline void netxen_set_interrupt_mode(struct netxen_adapter *adapter, 573 568 u32 mode) 574 569 { ··· 659 664 netxen_initialize_interrupt_registers(adapter); 660 665 netxen_set_msix_bit(pdev, 0); 661 666 662 - if (netxen_function_zero(pdev)) { 667 + if (adapter->portnum == 0) { 663 668 if (!netxen_setup_msi_interrupts(adapter, num_msix)) 664 669 netxen_set_interrupt_mode(adapter, NETXEN_MSI_MODE); 665 670 else
+4 -48
drivers/net/ethernet/stmicro/stmmac/dwmac5.c
··· 568 568 int dwmac5_est_configure(void __iomem *ioaddr, struct stmmac_est *cfg, 569 569 unsigned int ptp_rate) 570 570 { 571 - u32 speed, total_offset, offset, ctrl, ctr_low; 572 - u32 extcfg = readl(ioaddr + GMAC_EXT_CONFIG); 573 - u32 mac_cfg = readl(ioaddr + GMAC_CONFIG); 574 571 int i, ret = 0x0; 575 - u64 total_ctr; 576 - 577 - if (extcfg & GMAC_CONFIG_EIPG_EN) { 578 - offset = (extcfg & GMAC_CONFIG_EIPG) >> GMAC_CONFIG_EIPG_SHIFT; 579 - offset = 104 + (offset * 8); 580 - } else { 581 - offset = (mac_cfg & GMAC_CONFIG_IPG) >> GMAC_CONFIG_IPG_SHIFT; 582 - offset = 96 - (offset * 8); 583 - } 584 - 585 - speed = mac_cfg & (GMAC_CONFIG_PS | GMAC_CONFIG_FES); 586 - speed = speed >> GMAC_CONFIG_FES_SHIFT; 587 - 588 - switch (speed) { 589 - case 0x0: 590 - offset = offset * 1000; /* 1G */ 591 - break; 592 - case 0x1: 593 - offset = offset * 400; /* 2.5G */ 594 - break; 595 - case 0x2: 596 - offset = offset * 100000; /* 10M */ 597 - break; 598 - case 0x3: 599 - offset = offset * 10000; /* 100M */ 600 - break; 601 - default: 602 - return -EINVAL; 603 - } 604 - 605 - offset = offset / 1000; 572 + u32 ctrl; 606 573 607 574 ret |= dwmac5_est_write(ioaddr, BTR_LOW, cfg->btr[0], false); 608 575 ret |= dwmac5_est_write(ioaddr, BTR_HIGH, cfg->btr[1], false); 609 576 ret |= dwmac5_est_write(ioaddr, TER, cfg->ter, false); 610 577 ret |= dwmac5_est_write(ioaddr, LLR, cfg->gcl_size, false); 578 + ret |= dwmac5_est_write(ioaddr, CTR_LOW, cfg->ctr[0], false); 579 + ret |= dwmac5_est_write(ioaddr, CTR_HIGH, cfg->ctr[1], false); 611 580 if (ret) 612 581 return ret; 613 582 614 - total_offset = 0; 615 583 for (i = 0; i < cfg->gcl_size; i++) { 616 - ret = dwmac5_est_write(ioaddr, i, cfg->gcl[i] + offset, true); 584 + ret = dwmac5_est_write(ioaddr, i, cfg->gcl[i], true); 617 585 if (ret) 618 586 return ret; 619 - 620 - total_offset += offset; 621 587 } 622 - 623 - total_ctr = cfg->ctr[0] + cfg->ctr[1] * 1000000000ULL; 624 - total_ctr += total_offset; 625 - 626 - ctr_low = do_div(total_ctr, 1000000000); 627 - 628 - ret |= dwmac5_est_write(ioaddr, CTR_LOW, ctr_low, false); 629 - ret |= dwmac5_est_write(ioaddr, CTR_HIGH, total_ctr, false); 630 - if (ret) 631 - return ret; 632 588 633 589 ctrl = readl(ioaddr + MTL_EST_CONTROL); 634 590 ctrl &= ~PTOV;
+4 -3
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 2184 2184 spin_lock_irqsave(&ch->lock, flags); 2185 2185 stmmac_disable_dma_irq(priv, priv->ioaddr, chan, 1, 0); 2186 2186 spin_unlock_irqrestore(&ch->lock, flags); 2187 - __napi_schedule_irqoff(&ch->rx_napi); 2187 + __napi_schedule(&ch->rx_napi); 2188 2188 } 2189 2189 } 2190 2190 ··· 2193 2193 spin_lock_irqsave(&ch->lock, flags); 2194 2194 stmmac_disable_dma_irq(priv, priv->ioaddr, chan, 0, 1); 2195 2195 spin_unlock_irqrestore(&ch->lock, flags); 2196 - __napi_schedule_irqoff(&ch->tx_napi); 2196 + __napi_schedule(&ch->tx_napi); 2197 2197 } 2198 2198 } 2199 2199 ··· 4026 4026 { 4027 4027 struct stmmac_priv *priv = netdev_priv(dev); 4028 4028 int txfifosz = priv->plat->tx_fifo_size; 4029 + const int mtu = new_mtu; 4029 4030 4030 4031 if (txfifosz == 0) 4031 4032 txfifosz = priv->dma_cap.tx_fifo_size; ··· 4044 4043 if ((txfifosz < new_mtu) || (new_mtu > BUF_SIZE_16KiB)) 4045 4044 return -EINVAL; 4046 4045 4047 - dev->mtu = new_mtu; 4046 + dev->mtu = mtu; 4048 4047 4049 4048 netdev_update_features(dev); 4050 4049
+18 -2
drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
··· 599 599 { 600 600 u32 size, wid = priv->dma_cap.estwid, dep = priv->dma_cap.estdep; 601 601 struct plat_stmmacenet_data *plat = priv->plat; 602 - struct timespec64 time; 602 + struct timespec64 time, current_time; 603 + ktime_t current_time_ns; 603 604 bool fpe = false; 604 605 int i, ret = 0; 605 606 u64 ctr; ··· 695 694 } 696 695 697 696 /* Adjust for real system time */ 698 - time = ktime_to_timespec64(qopt->base_time); 697 + priv->ptp_clock_ops.gettime64(&priv->ptp_clock_ops, &current_time); 698 + current_time_ns = timespec64_to_ktime(current_time); 699 + if (ktime_after(qopt->base_time, current_time_ns)) { 700 + time = ktime_to_timespec64(qopt->base_time); 701 + } else { 702 + ktime_t base_time; 703 + s64 n; 704 + 705 + n = div64_s64(ktime_sub_ns(current_time_ns, qopt->base_time), 706 + qopt->cycle_time); 707 + base_time = ktime_add_ns(qopt->base_time, 708 + (n + 1) * qopt->cycle_time); 709 + 710 + time = ktime_to_timespec64(base_time); 711 + } 712 + 699 713 priv->plat->est->btr[0] = (u32)time.tv_nsec; 700 714 priv->plat->est->btr[1] = (u32)time.tv_sec; 701 715
+1
drivers/net/ipa/ipa_modem.c
··· 216 216 ipa->name_map[IPA_ENDPOINT_AP_MODEM_TX]->netdev = netdev; 217 217 ipa->name_map[IPA_ENDPOINT_AP_MODEM_RX]->netdev = netdev; 218 218 219 + SET_NETDEV_DEV(netdev, &ipa->pdev->dev); 219 220 priv = netdev_priv(netdev); 220 221 priv->ipa = ipa; 221 222
+2 -1
drivers/net/phy/smsc.c
··· 317 317 /* Make clk optional to keep DTB backward compatibility. */ 318 318 priv->refclk = clk_get_optional(dev, NULL); 319 319 if (IS_ERR(priv->refclk)) 320 - dev_err_probe(dev, PTR_ERR(priv->refclk), "Failed to request clock\n"); 320 + return dev_err_probe(dev, PTR_ERR(priv->refclk), 321 + "Failed to request clock\n"); 321 322 322 323 ret = clk_prepare_enable(priv->refclk); 323 324 if (ret)
+9 -3
drivers/net/ppp/ppp_generic.c
··· 623 623 write_unlock_bh(&pch->upl); 624 624 return -EALREADY; 625 625 } 626 + refcount_inc(&pchb->file.refcnt); 626 627 rcu_assign_pointer(pch->bridge, pchb); 627 628 write_unlock_bh(&pch->upl); 628 629 ··· 633 632 write_unlock_bh(&pchb->upl); 634 633 goto err_unset; 635 634 } 635 + refcount_inc(&pch->file.refcnt); 636 636 rcu_assign_pointer(pchb->bridge, pch); 637 637 write_unlock_bh(&pchb->upl); 638 - 639 - refcount_inc(&pch->file.refcnt); 640 - refcount_inc(&pchb->file.refcnt); 641 638 642 639 return 0; 643 640 644 641 err_unset: 645 642 write_lock_bh(&pch->upl); 643 + /* Re-read pch->bridge with upl held in case it was modified concurrently */ 644 + pchb = rcu_dereference_protected(pch->bridge, lockdep_is_held(&pch->upl)); 646 645 RCU_INIT_POINTER(pch->bridge, NULL); 647 646 write_unlock_bh(&pch->upl); 648 647 synchronize_rcu(); 648 + 649 + if (pchb) 650 + if (refcount_dec_and_test(&pchb->file.refcnt)) 651 + ppp_destroy_channel(pchb); 652 + 649 653 return -EALREADY; 650 654 } 651 655
-1
drivers/net/usb/Kconfig
··· 631 631 config USB_RTL8153_ECM 632 632 tristate "RTL8153 ECM support" 633 633 depends on USB_NET_CDCETHER && (USB_RTL8152 || USB_RTL8152=n) 634 - default y 635 634 help 636 635 This option supports ECM mode for RTL8153 ethernet adapter, when 637 636 CONFIG_USB_RTL8152 is not set, or the RTL8153 device is not
+7
drivers/net/usb/cdc_ether.c
··· 793 793 .driver_info = 0, 794 794 }, 795 795 796 + /* Lenovo Powered USB-C Travel Hub (4X90S92381, based on Realtek RTL8153) */ 797 + { 798 + USB_DEVICE_AND_INTERFACE_INFO(LENOVO_VENDOR_ID, 0x721e, USB_CLASS_COMM, 799 + USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE), 800 + .driver_info = 0, 801 + }, 802 + 796 803 /* ThinkPad USB-C Dock Gen 2 (based on Realtek RTL8153) */ 797 804 { 798 805 USB_DEVICE_AND_INTERFACE_INFO(LENOVO_VENDOR_ID, 0xa387, USB_CLASS_COMM,
+1
drivers/net/usb/r8152.c
··· 6877 6877 {REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x7205)}, 6878 6878 {REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x720c)}, 6879 6879 {REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x7214)}, 6880 + {REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x721e)}, 6880 6881 {REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0xa387)}, 6881 6882 {REALTEK_USB_DEVICE(VENDOR_ID_LINKSYS, 0x0041)}, 6882 6883 {REALTEK_USB_DEVICE(VENDOR_ID_NVIDIA, 0x09ff)},
+8
drivers/net/usb/r8153_ecm.c
··· 122 122 }; 123 123 124 124 static const struct usb_device_id products[] = { 125 + /* Realtek RTL8153 Based USB 3.0 Ethernet Adapters */ 125 126 { 126 127 USB_DEVICE_AND_INTERFACE_INFO(VENDOR_ID_REALTEK, 0x8153, USB_CLASS_COMM, 128 + USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE), 129 + .driver_info = (unsigned long)&r8153_info, 130 + }, 131 + 132 + /* Lenovo Powered USB-C Travel Hub (4X90S92381, based on Realtek RTL8153) */ 133 + { 134 + USB_DEVICE_AND_INTERFACE_INFO(VENDOR_ID_LENOVO, 0x721e, USB_CLASS_COMM, 127 135 USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE), 128 136 .driver_info = (unsigned long)&r8153_info, 129 137 },
+1 -1
drivers/net/usb/rndis_host.c
··· 387 387 reply_len = sizeof *phym; 388 388 retval = rndis_query(dev, intf, u.buf, 389 389 RNDIS_OID_GEN_PHYSICAL_MEDIUM, 390 - 0, (void **) &phym, &reply_len); 390 + reply_len, (void **)&phym, &reply_len); 391 391 if (retval != 0 || !phym) { 392 392 /* OID is optional so don't fail here. */ 393 393 phym_unspec = cpu_to_le32(RNDIS_PHYSICAL_MEDIUM_UNSPECIFIED);
+8 -3
drivers/nvme/host/core.c
··· 2856 2856 NULL, 2857 2857 }; 2858 2858 2859 + static inline bool nvme_discovery_ctrl(struct nvme_ctrl *ctrl) 2860 + { 2861 + return ctrl->opts && ctrl->opts->discovery_nqn; 2862 + } 2863 + 2859 2864 static bool nvme_validate_cntlid(struct nvme_subsystem *subsys, 2860 2865 struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id) 2861 2866 { ··· 2880 2875 } 2881 2876 2882 2877 if ((id->cmic & NVME_CTRL_CMIC_MULTI_CTRL) || 2883 - (ctrl->opts && ctrl->opts->discovery_nqn)) 2878 + nvme_discovery_ctrl(ctrl)) 2884 2879 continue; 2885 2880 2886 2881 dev_err(ctrl->device, ··· 3149 3144 goto out_free; 3150 3145 } 3151 3146 3152 - if (!ctrl->opts->discovery_nqn && !ctrl->kas) { 3147 + if (!nvme_discovery_ctrl(ctrl) && !ctrl->kas) { 3153 3148 dev_err(ctrl->device, 3154 3149 "keep-alive support is mandatory for fabrics\n"); 3155 3150 ret = -EINVAL; ··· 3189 3184 if (ret < 0) 3190 3185 return ret; 3191 3186 3192 - if (!ctrl->identified) { 3187 + if (!ctrl->identified && !nvme_discovery_ctrl(ctrl)) { 3193 3188 ret = nvme_hwmon_init(ctrl); 3194 3189 if (ret < 0) 3195 3190 return ret;
+2 -2
drivers/nvme/host/tcp.c
··· 201 201 202 202 static inline size_t nvme_tcp_req_cur_length(struct nvme_tcp_request *req) 203 203 { 204 - return min_t(size_t, req->iter.bvec->bv_len - req->iter.iov_offset, 204 + return min_t(size_t, iov_iter_single_seg_count(&req->iter), 205 205 req->pdu_len - req->pdu_sent); 206 206 } 207 207 ··· 286 286 * directly, otherwise queue io_work. Also, only do that if we 287 287 * are on the same cpu, so we don't introduce contention. 288 288 */ 289 - if (queue->io_cpu == smp_processor_id() && 289 + if (queue->io_cpu == __smp_processor_id() && 290 290 sync && empty && mutex_trylock(&queue->send_mutex)) { 291 291 queue->more_requests = !last; 292 292 nvme_tcp_send_all(queue);
+8 -8
drivers/nvme/target/rdma.c
··· 1220 1220 } 1221 1221 ndev->inline_data_size = nport->inline_data_size; 1222 1222 ndev->inline_page_count = inline_page_count; 1223 + 1224 + if (nport->pi_enable && !(cm_id->device->attrs.device_cap_flags & 1225 + IB_DEVICE_INTEGRITY_HANDOVER)) { 1226 + pr_warn("T10-PI is not supported by device %s. Disabling it\n", 1227 + cm_id->device->name); 1228 + nport->pi_enable = false; 1229 + } 1230 + 1223 1231 ndev->device = cm_id->device; 1224 1232 kref_init(&ndev->ref); 1225 1233 ··· 1860 1852 ret = rdma_listen(cm_id, 128); 1861 1853 if (ret) { 1862 1854 pr_err("listening to %pISpcs failed (%d)\n", addr, ret); 1863 - goto out_destroy_id; 1864 - } 1865 - 1866 - if (port->nport->pi_enable && 1867 - !(cm_id->device->attrs.device_cap_flags & 1868 - IB_DEVICE_INTEGRITY_HANDOVER)) { 1869 - pr_err("T10-PI is not supported for %pISpcs\n", addr); 1870 - ret = -EINVAL; 1871 1855 goto out_destroy_id; 1872 1856 } 1873 1857
-5
drivers/perf/arm_pmu.c
··· 726 726 return per_cpu(hw_events->irq, cpu); 727 727 } 728 728 729 - bool arm_pmu_irq_is_nmi(void) 730 - { 731 - return has_nmi; 732 - } 733 - 734 729 /* 735 730 * PMU hardware loses all context when a CPU goes offline. 736 731 * When a CPU is hotplugged back in, since some hardware registers are
+1 -1
drivers/scsi/mpt3sas/Kconfig
··· 79 79 select SCSI_MPT3SAS 80 80 depends on PCI && SCSI 81 81 help 82 - Dummy config option for backwards compatiblity: configure the MPT3SAS 82 + Dummy config option for backwards compatibility: configure the MPT3SAS 83 83 driver instead.
+2 -2
drivers/scsi/qedi/qedi_main.c
··· 2245 2245 chap_name); 2246 2246 break; 2247 2247 case ISCSI_BOOT_TGT_CHAP_SECRET: 2248 - rc = sprintf(buf, "%.*s\n", NVM_ISCSI_CFG_CHAP_NAME_MAX_LEN, 2248 + rc = sprintf(buf, "%.*s\n", NVM_ISCSI_CFG_CHAP_PWD_MAX_LEN, 2249 2249 chap_secret); 2250 2250 break; 2251 2251 case ISCSI_BOOT_TGT_REV_CHAP_NAME: ··· 2253 2253 mchap_name); 2254 2254 break; 2255 2255 case ISCSI_BOOT_TGT_REV_CHAP_SECRET: 2256 - rc = sprintf(buf, "%.*s\n", NVM_ISCSI_CFG_CHAP_NAME_MAX_LEN, 2256 + rc = sprintf(buf, "%.*s\n", NVM_ISCSI_CFG_CHAP_PWD_MAX_LEN, 2257 2257 mchap_secret); 2258 2258 break; 2259 2259 case ISCSI_BOOT_TGT_FLAGS:
+3 -2
drivers/scsi/scsi_debug.c
··· 6740 6740 k = sdeb_zbc_model_str(sdeb_zbc_model_s); 6741 6741 if (k < 0) { 6742 6742 ret = k; 6743 - goto free_vm; 6743 + goto free_q_arr; 6744 6744 } 6745 6745 sdeb_zbc_model = k; 6746 6746 switch (sdeb_zbc_model) { ··· 6753 6753 break; 6754 6754 default: 6755 6755 pr_err("Invalid ZBC model\n"); 6756 - return -EINVAL; 6756 + ret = -EINVAL; 6757 + goto free_q_arr; 6757 6758 } 6758 6759 } 6759 6760 if (sdeb_zbc_model != BLK_ZONED_NONE) {
+3 -3
drivers/scsi/sd.c
··· 984 984 } 985 985 } 986 986 987 - if (sdp->no_write_same) 987 + if (sdp->no_write_same) { 988 + rq->rq_flags |= RQF_QUIET; 988 989 return BLK_STS_TARGET; 990 + } 989 991 990 992 if (sdkp->ws16 || lba > 0xffffffff || nr_blocks > 0xffff) 991 993 return sd_setup_write_same16_cmnd(cmd, false); ··· 3512 3510 static int sd_remove(struct device *dev) 3513 3511 { 3514 3512 struct scsi_disk *sdkp; 3515 - dev_t devt; 3516 3513 3517 3514 sdkp = dev_get_drvdata(dev); 3518 - devt = disk_devt(sdkp->disk); 3519 3515 scsi_autopm_get_device(sdkp->device); 3520 3516 3521 3517 async_synchronize_full_domain(&scsi_sd_pm_domain);
+10 -14
drivers/scsi/ufs/ufshcd.c
··· 289 289 if (ret) 290 290 dev_err(hba->dev, "%s: En WB flush during H8: failed: %d\n", 291 291 __func__, ret); 292 - ufshcd_wb_toggle_flush(hba, true); 292 + if (!(hba->quirks & UFSHCI_QUIRK_SKIP_MANUAL_WB_FLUSH_CTRL)) 293 + ufshcd_wb_toggle_flush(hba, true); 293 294 } 294 295 295 296 static void ufshcd_scsi_unblock_requests(struct ufs_hba *hba) ··· 5437 5436 5438 5437 static inline void ufshcd_wb_toggle_flush(struct ufs_hba *hba, bool enable) 5439 5438 { 5440 - if (hba->quirks & UFSHCI_QUIRK_SKIP_MANUAL_WB_FLUSH_CTRL) 5441 - return; 5442 - 5443 5439 if (enable) 5444 5440 ufshcd_wb_buf_flush_enable(hba); 5445 5441 else ··· 6659 6661 { 6660 6662 struct Scsi_Host *host; 6661 6663 struct ufs_hba *hba; 6662 - unsigned int tag; 6663 6664 u32 pos; 6664 6665 int err; 6665 - u8 resp = 0xF; 6666 - struct ufshcd_lrb *lrbp; 6666 + u8 resp = 0xF, lun; 6667 6667 unsigned long flags; 6668 6668 6669 6669 host = cmd->device->host; 6670 6670 hba = shost_priv(host); 6671 - tag = cmd->request->tag; 6672 6671 6673 - lrbp = &hba->lrb[tag]; 6674 - err = ufshcd_issue_tm_cmd(hba, lrbp->lun, 0, UFS_LOGICAL_RESET, &resp); 6672 + lun = ufshcd_scsi_to_upiu_lun(cmd->device->lun); 6673 + err = ufshcd_issue_tm_cmd(hba, lun, 0, UFS_LOGICAL_RESET, &resp); 6675 6674 if (err || resp != UPIU_TASK_MANAGEMENT_FUNC_COMPL) { 6676 6675 if (!err) 6677 6676 err = resp; ··· 6677 6682 6678 6683 /* clear the commands that were pending for corresponding LUN */ 6679 6684 for_each_set_bit(pos, &hba->outstanding_reqs, hba->nutrs) { 6680 - if (hba->lrb[pos].lun == lrbp->lun) { 6685 + if (hba->lrb[pos].lun == lun) { 6681 6686 err = ufshcd_clear_cmd(hba, pos); 6682 6687 if (err) 6683 6688 break; ··· 8693 8698 ufshcd_wb_need_flush(hba)); 8694 8699 } 8695 8700 8701 + flush_work(&hba->eeh_work); 8702 + 8696 8703 if (req_dev_pwr_mode != hba->curr_dev_pwr_mode) { 8697 8704 if (!ufshcd_is_runtime_pm(pm_op)) 8698 8705 /* ensure that bkops is disabled */ ··· 8706 8709 goto enable_gating; 8707 8710 } 8708 8711 } 8709 - 8710 - flush_work(&hba->eeh_work); 8711 8712 8712 8713 /* 8713 8714 * In the case of DeepSleep, the device is expected to remain powered ··· 8933 8938 if ((ufs_get_pm_lvl_to_dev_pwr_mode(hba->spm_lvl) == 8934 8939 hba->curr_dev_pwr_mode) && 8935 8940 (ufs_get_pm_lvl_to_link_pwr_state(hba->spm_lvl) == 8936 - hba->uic_link_state)) 8941 + hba->uic_link_state) && 8942 + !hba->dev_info.b_rpm_dev_flush_capable) 8937 8943 goto out; 8938 8944 8939 8945 if (pm_runtime_suspended(hba->dev)) {
+68 -47
drivers/target/target_core_xcopy.c
··· 46 46 return 0; 47 47 } 48 48 49 - struct xcopy_dev_search_info { 50 - const unsigned char *dev_wwn; 51 - struct se_device *found_dev; 52 - }; 53 - 49 + /** 50 + * target_xcopy_locate_se_dev_e4_iter - compare XCOPY NAA device identifiers 51 + * 52 + * @se_dev: device being considered for match 53 + * @dev_wwn: XCOPY requested NAA dev_wwn 54 + * @return: 1 on match, 0 on no-match 55 + */ 54 56 static int target_xcopy_locate_se_dev_e4_iter(struct se_device *se_dev, 55 - void *data) 57 + const unsigned char *dev_wwn) 56 58 { 57 - struct xcopy_dev_search_info *info = data; 58 59 unsigned char tmp_dev_wwn[XCOPY_NAA_IEEE_REGEX_LEN]; 59 60 int rc; 60 61 61 - if (!se_dev->dev_attrib.emulate_3pc) 62 + if (!se_dev->dev_attrib.emulate_3pc) { 63 + pr_debug("XCOPY: emulate_3pc disabled on se_dev %p\n", se_dev); 62 64 return 0; 65 + } 63 66 64 67 memset(&tmp_dev_wwn[0], 0, XCOPY_NAA_IEEE_REGEX_LEN); 65 68 target_xcopy_gen_naa_ieee(se_dev, &tmp_dev_wwn[0]); 66 69 67 - rc = memcmp(&tmp_dev_wwn[0], info->dev_wwn, XCOPY_NAA_IEEE_REGEX_LEN); 68 - if (rc != 0) 70 + rc = memcmp(&tmp_dev_wwn[0], dev_wwn, XCOPY_NAA_IEEE_REGEX_LEN); 71 + if (rc != 0) { 72 + pr_debug("XCOPY: skip non-matching: %*ph\n", 73 + XCOPY_NAA_IEEE_REGEX_LEN, tmp_dev_wwn); 69 74 return 0; 70 - 71 - info->found_dev = se_dev; 75 + } 72 76 pr_debug("XCOPY 0xe4: located se_dev: %p\n", se_dev); 73 77 74 - rc = target_depend_item(&se_dev->dev_group.cg_item); 75 - if (rc != 0) { 76 - pr_err("configfs_depend_item attempt failed: %d for se_dev: %p\n", 77 - rc, se_dev); 78 - return rc; 79 - } 80 - 81 - pr_debug("Called configfs_depend_item for se_dev: %p se_dev->se_dev_group: %p\n", 82 - se_dev, &se_dev->dev_group); 83 78 return 1; 84 79 } 85 80 86 - static int target_xcopy_locate_se_dev_e4(const unsigned char *dev_wwn, 87 - struct se_device **found_dev) 81 + static int target_xcopy_locate_se_dev_e4(struct se_session *sess, 82 + const unsigned char *dev_wwn, 83 + struct se_device **_found_dev, 84 + struct percpu_ref **_found_lun_ref) 88 85 { 89 - struct xcopy_dev_search_info info; 90 - int ret; 86 + struct se_dev_entry *deve; 87 + struct se_node_acl *nacl; 88 + struct se_lun *this_lun = NULL; 89 + struct se_device *found_dev = NULL; 91 90 92 - memset(&info, 0, sizeof(info)); 93 - info.dev_wwn = dev_wwn; 91 + /* cmd with NULL sess indicates no associated $FABRIC_MOD */ 92 + if (!sess) 93 + goto err_out; 94 94 95 - ret = target_for_each_device(target_xcopy_locate_se_dev_e4_iter, &info); 96 - if (ret == 1) { 97 - *found_dev = info.found_dev; 98 - return 0; 99 - } else { 100 - pr_debug_ratelimited("Unable to locate 0xe4 descriptor for EXTENDED_COPY\n"); 101 - return -EINVAL; 95 + pr_debug("XCOPY 0xe4: searching for: %*ph\n", 96 + XCOPY_NAA_IEEE_REGEX_LEN, dev_wwn); 97 + 98 + nacl = sess->se_node_acl; 99 + rcu_read_lock(); 100 + hlist_for_each_entry_rcu(deve, &nacl->lun_entry_hlist, link) { 101 + struct se_device *this_dev; 102 + int rc; 103 + 104 + this_lun = rcu_dereference(deve->se_lun); 105 + this_dev = rcu_dereference_raw(this_lun->lun_se_dev); 106 + 107 + rc = target_xcopy_locate_se_dev_e4_iter(this_dev, dev_wwn); 108 + if (rc) { 109 + if (percpu_ref_tryget_live(&this_lun->lun_ref)) 110 + found_dev = this_dev; 111 + break; 112 + } 102 113 } 114 + rcu_read_unlock(); 115 + if (found_dev == NULL) 116 + goto err_out; 117 + 118 + pr_debug("lun_ref held for se_dev: %p se_dev->se_dev_group: %p\n", 119 + found_dev, &found_dev->dev_group); 120 + *_found_dev = found_dev; 121 + *_found_lun_ref = &this_lun->lun_ref; 122 + return 0; 123 + err_out: 124 + pr_debug_ratelimited("Unable to locate 0xe4 descriptor for EXTENDED_COPY\n"); 125 + return -EINVAL; 103 126 } 104 127 105 128 static int target_xcopy_parse_tiddesc_e4(struct se_cmd *se_cmd, struct xcopy_op *xop, ··· 269 246 270 247 switch (xop->op_origin) { 271 248 case XCOL_SOURCE_RECV_OP: 272 - rc = target_xcopy_locate_se_dev_e4(xop->dst_tid_wwn, 273 - &xop->dst_dev); 249 + rc = target_xcopy_locate_se_dev_e4(se_cmd->se_sess, 250 + xop->dst_tid_wwn, 251 + &xop->dst_dev, 252 + &xop->remote_lun_ref); 274 253 break; 275 254 case XCOL_DEST_RECV_OP: 276 - rc = target_xcopy_locate_se_dev_e4(xop->src_tid_wwn, 277 - &xop->src_dev); 255 + rc = target_xcopy_locate_se_dev_e4(se_cmd->se_sess, 256 + xop->src_tid_wwn, 257 + &xop->src_dev, 258 + &xop->remote_lun_ref); 278 259 break; 279 260 default: 280 261 pr_err("XCOPY CSCD descriptor IDs not found in CSCD list - " ··· 418 391 419 392 static void xcopy_pt_undepend_remotedev(struct xcopy_op *xop) 420 393 { 421 - struct se_device *remote_dev; 422 - 423 394 if (xop->op_origin == XCOL_SOURCE_RECV_OP) 424 - remote_dev = xop->dst_dev; 395 + pr_debug("putting dst lun_ref for %p\n", xop->dst_dev); 425 396 else 426 - remote_dev = xop->src_dev; 397 + pr_debug("putting src lun_ref for %p\n", xop->src_dev); 427 398 428 - pr_debug("Calling configfs_undepend_item for" 429 - " remote_dev: %p remote_dev->dev_group: %p\n", 430 - remote_dev, &remote_dev->dev_group.cg_item); 431 - 432 - target_undepend_item(&remote_dev->dev_group.cg_item); 399 + percpu_ref_put(xop->remote_lun_ref); 433 400 } 434 401 435 402 static void xcopy_pt_release_cmd(struct se_cmd *se_cmd)
+1
drivers/target/target_core_xcopy.h
··· 27 27 struct se_device *dst_dev; 28 28 unsigned char dst_tid_wwn[XCOPY_NAA_IEEE_REGEX_LEN]; 29 29 unsigned char local_dev_wwn[XCOPY_NAA_IEEE_REGEX_LEN]; 30 + struct percpu_ref *remote_lun_ref; 30 31 31 32 sector_t src_lba; 32 33 sector_t dst_lba;
+1
drivers/tty/serial/sifive.c
··· 1000 1000 /* Set up clock divider */ 1001 1001 ssp->clkin_rate = clk_get_rate(ssp->clk); 1002 1002 ssp->baud_rate = SIFIVE_DEFAULT_BAUD_RATE; 1003 + ssp->port.uartclk = ssp->baud_rate * 16; 1003 1004 __ssp_update_div(ssp); 1004 1005 1005 1006 platform_set_drvdata(pdev, ssp);
-10
drivers/xen/events/events_base.c
··· 2060 2060 .irq_ack = ack_dynirq, 2061 2061 }; 2062 2062 2063 - int xen_set_callback_via(uint64_t via) 2064 - { 2065 - struct xen_hvm_param a; 2066 - a.domid = DOMID_SELF; 2067 - a.index = HVM_PARAM_CALLBACK_IRQ; 2068 - a.value = via; 2069 - return HYPERVISOR_hvm_op(HVMOP_set_param, &a); 2070 - } 2071 - EXPORT_SYMBOL_GPL(xen_set_callback_via); 2072 - 2073 2063 #ifdef CONFIG_XEN_PVHVM 2074 2064 /* Vector callbacks are better than PCI interrupts to receive event 2075 2065 * channel notifications because we can receive vector callbacks on any
+7 -1
drivers/xen/platform-pci.c
··· 132 132 dev_warn(&pdev->dev, "request_irq failed err=%d\n", ret); 133 133 goto out; 134 134 } 135 + /* 136 + * It doesn't strictly *have* to run on CPU0 but it sure 137 + * as hell better process the event channel ports delivered 138 + * to CPU0. 139 + */ 140 + irq_set_affinity(pdev->irq, cpumask_of(0)); 141 + 135 142 callback_via = get_callback_via(pdev); 136 143 ret = xen_set_callback_via(callback_via); 137 144 if (ret) { ··· 156 149 ret = gnttab_init(); 157 150 if (ret) 158 151 goto grant_out; 159 - xenbus_probe(NULL); 160 152 return 0; 161 153 grant_out: 162 154 gnttab_free_auto_xlat_frames();
+19 -6
drivers/xen/privcmd.c
··· 717 717 return 0; 718 718 } 719 719 720 - static long privcmd_ioctl_mmap_resource(struct file *file, void __user *udata) 720 + static long privcmd_ioctl_mmap_resource(struct file *file, 721 + struct privcmd_mmap_resource __user *udata) 721 722 { 722 723 struct privcmd_data *data = file->private_data; 723 724 struct mm_struct *mm = current->mm; 724 725 struct vm_area_struct *vma; 725 726 struct privcmd_mmap_resource kdata; 726 727 xen_pfn_t *pfns = NULL; 727 - struct xen_mem_acquire_resource xdata; 728 + struct xen_mem_acquire_resource xdata = { }; 728 729 int rc; 729 730 730 731 if (copy_from_user(&kdata, udata, sizeof(kdata))) ··· 734 733 /* If restriction is in place, check the domid matches */ 735 734 if (data->domid != DOMID_INVALID && data->domid != kdata.dom) 736 735 return -EPERM; 736 + 737 + /* Both fields must be set or unset */ 738 + if (!!kdata.addr != !!kdata.num) 739 + return -EINVAL; 740 + 741 + xdata.domid = kdata.dom; 742 + xdata.type = kdata.type; 743 + xdata.id = kdata.id; 744 + 745 + if (!kdata.addr && !kdata.num) { 746 + /* Query the size of the resource. */ 747 + rc = HYPERVISOR_memory_op(XENMEM_acquire_resource, &xdata); 748 + if (rc) 749 + return rc; 750 + return __put_user(xdata.nr_frames, &udata->num); 751 + } 737 752 738 753 mmap_write_lock(mm); 739 754 ··· 785 768 } else 786 769 vma->vm_private_data = PRIV_VMA_LOCKED; 787 770 788 - memset(&xdata, 0, sizeof(xdata)); 789 - xdata.domid = kdata.dom; 790 - xdata.type = kdata.type; 791 - xdata.id = kdata.id; 792 771 xdata.frame = kdata.idx; 793 772 xdata.nr_frames = kdata.num; 794 773 set_xen_guest_handle(xdata.frame_list, pfns);
+1
drivers/xen/xenbus/xenbus.h
··· 115 115 const char *type, 116 116 const char *nodename); 117 117 int xenbus_probe_devices(struct xen_bus_type *bus); 118 + void xenbus_probe(void); 118 119 119 120 void xenbus_dev_changed(const char *node, struct xen_bus_type *bus); 120 121
-8
drivers/xen/xenbus/xenbus_comms.c
··· 57 57 static int xenbus_irq; 58 58 static struct task_struct *xenbus_task; 59 59 60 - static DECLARE_WORK(probe_work, xenbus_probe); 61 - 62 - 63 60 static irqreturn_t wake_waiting(int irq, void *unused) 64 61 { 65 - if (unlikely(xenstored_ready == 0)) { 66 - xenstored_ready = 1; 67 - schedule_work(&probe_work); 68 - } 69 - 70 62 wake_up(&xb_waitq); 71 63 return IRQ_HANDLED; 72 64 }
+67 -14
drivers/xen/xenbus/xenbus_probe.c
··· 683 683 } 684 684 EXPORT_SYMBOL_GPL(unregister_xenstore_notifier); 685 685 686 - void xenbus_probe(struct work_struct *unused) 686 + void xenbus_probe(void) 687 687 { 688 688 xenstored_ready = 1; 689 + 690 + /* 691 + * In the HVM case, xenbus_init() deferred its call to 692 + * xs_init() in case callbacks were not operational yet. 693 + * So do it now. 694 + */ 695 + if (xen_store_domain_type == XS_HVM) 696 + xs_init(); 689 697 690 698 /* Notify others that xenstore is up */ 691 699 blocking_notifier_call_chain(&xenstore_chain, 0, NULL); 692 700 } 693 - EXPORT_SYMBOL_GPL(xenbus_probe); 701 + 702 + /* 703 + * Returns true when XenStore init must be deferred in order to 704 + * allow the PCI platform device to be initialised, before we 705 + * can actually have event channel interrupts working. 706 + */ 707 + static bool xs_hvm_defer_init_for_callback(void) 708 + { 709 + #ifdef CONFIG_XEN_PVHVM 710 + return xen_store_domain_type == XS_HVM && 711 + !xen_have_vector_callback; 712 + #else 713 + return false; 714 + #endif 715 + } 694 716 695 717 static int __init xenbus_probe_initcall(void) 696 718 { 697 - if (!xen_domain()) 698 - return -ENODEV; 719 + /* 720 + * Probe XenBus here in the XS_PV case, and also XS_HVM unless we 721 + * need to wait for the platform PCI device to come up. 722 + */ 723 + if (xen_store_domain_type == XS_PV || 724 + (xen_store_domain_type == XS_HVM && 725 + !xs_hvm_defer_init_for_callback())) 726 + xenbus_probe(); 699 727 700 - if (xen_initial_domain() || xen_hvm_domain()) 701 - return 0; 702 - 703 - xenbus_probe(NULL); 704 728 return 0; 705 729 } 706 - 707 730 device_initcall(xenbus_probe_initcall); 731 + 732 + int xen_set_callback_via(uint64_t via) 733 + { 734 + struct xen_hvm_param a; 735 + int ret; 736 + 737 + a.domid = DOMID_SELF; 738 + a.index = HVM_PARAM_CALLBACK_IRQ; 739 + a.value = via; 740 + 741 + ret = HYPERVISOR_hvm_op(HVMOP_set_param, &a); 742 + if (ret) 743 + return ret; 744 + 745 + /* 746 + * If xenbus_probe_initcall() deferred the xenbus_probe() 747 + * due to the callback not functioning yet, we can do it now. 748 + */ 749 + if (!xenstored_ready && xs_hvm_defer_init_for_callback()) 750 + xenbus_probe(); 751 + 752 + return ret; 753 + } 754 + EXPORT_SYMBOL_GPL(xen_set_callback_via); 708 755 709 756 /* Set up event channel for xenstored which is run as a local process 710 757 * (this is normally used only in dom0) ··· 865 818 break; 866 819 } 867 820 868 - /* Initialize the interface to xenstore. */ 869 - err = xs_init(); 870 - if (err) { 871 - pr_warn("Error initializing xenstore comms: %i\n", err); 872 - goto out_error; 821 + /* 822 + * HVM domains may not have a functional callback yet. In that 823 + * case let xs_init() be called from xenbus_probe(), which will 824 + * get invoked at an appropriate time. 825 + */ 826 + if (xen_store_domain_type != XS_HVM) { 827 + err = xs_init(); 828 + if (err) { 829 + pr_warn("Error initializing xenstore comms: %i\n", err); 830 + goto out_error; 831 + } 832 + } 873 833 874 834 875 835 if ((xen_store_domain_type != XS_LOCAL) &&
+1 -1
fs/btrfs/disk-io.c
··· 1457 1457 root = list_first_entry(&fs_info->allocated_roots, 1458 1458 struct btrfs_root, leak_list); 1459 1459 btrfs_err(fs_info, "leaked root %s refcount %d", 1460 - btrfs_root_name(root->root_key.objectid, buf), 1460 + btrfs_root_name(&root->root_key, buf), 1461 1461 refcount_read(&root->refs)); 1462 1462 while (refcount_read(&root->refs) > 1) 1463 1463 btrfs_put_root(root);
+1 -3
fs/btrfs/extent_io.c
··· 676 676 677 677 static void extent_io_tree_panic(struct extent_io_tree *tree, int err) 678 678 { 679 - struct inode *inode = tree->private_data; 680 - 681 - btrfs_panic(btrfs_sb(inode->i_sb), err, 679 + btrfs_panic(tree->fs_info, err, 682 680 "locking error: extent tree was modified by another thread while locked"); 683 681 } 684 682
+43 -17
fs/btrfs/inode.c
··· 9390 9390 * some fairly slow code that needs optimization. This walks the list 9391 9391 * of all the inodes with pending delalloc and forces them to disk. 9392 9392 */ 9393 - static int start_delalloc_inodes(struct btrfs_root *root, u64 *nr, bool snapshot, 9393 + static int start_delalloc_inodes(struct btrfs_root *root, 9394 + struct writeback_control *wbc, bool snapshot, 9394 9395 bool in_reclaim_context) 9395 9396 { 9396 9397 struct btrfs_inode *binode; ··· 9400 9399 struct list_head works; 9401 9400 struct list_head splice; 9402 9401 int ret = 0; 9402 + bool full_flush = wbc->nr_to_write == LONG_MAX; 9403 9403 9404 9404 INIT_LIST_HEAD(&works); 9405 9405 INIT_LIST_HEAD(&splice); ··· 9429 9427 if (snapshot) 9430 9428 set_bit(BTRFS_INODE_SNAPSHOT_FLUSH, 9431 9429 &binode->runtime_flags); 9432 - work = btrfs_alloc_delalloc_work(inode); 9433 - if (!work) { 9434 - iput(inode); 9435 - ret = -ENOMEM; 9436 - goto out; 9437 - } 9438 - list_add_tail(&work->list, &works); 9439 - btrfs_queue_work(root->fs_info->flush_workers, 9440 - &work->work); 9441 - if (*nr != U64_MAX) { 9442 - (*nr)--; 9443 - if (*nr == 0) 9430 + if (full_flush) { 9431 + work = btrfs_alloc_delalloc_work(inode); 9432 + if (!work) { 9433 + iput(inode); 9434 + ret = -ENOMEM; 9435 + goto out; 9436 + } 9437 + list_add_tail(&work->list, &works); 9438 + btrfs_queue_work(root->fs_info->flush_workers, 9439 + &work->work); 9440 + } else { 9441 + ret = sync_inode(inode, wbc); 9442 + if (!ret && 9443 + test_bit(BTRFS_INODE_HAS_ASYNC_EXTENT, 9444 + &BTRFS_I(inode)->runtime_flags)) 9445 + ret = sync_inode(inode, wbc); 9446 + btrfs_add_delayed_iput(inode); 9447 + if (ret || wbc->nr_to_write <= 0) 9444 9448 goto out; 9445 9449 } 9446 9450 cond_resched(); ··· 9472 9464 9473 9465 int btrfs_start_delalloc_snapshot(struct btrfs_root *root) 9474 9466 { 9467 + struct writeback_control wbc = { 9468 + .nr_to_write = LONG_MAX, 9469 + .sync_mode = WB_SYNC_NONE, 9470 + .range_start = 0, 9471 + .range_end = LLONG_MAX, 9472 + }; 9475 9473 struct btrfs_fs_info *fs_info = root->fs_info; 9476 - u64 nr = U64_MAX; 9477 9474 9478 9475 if (test_bit(BTRFS_FS_STATE_ERROR, &fs_info->fs_state)) 9479 9476 return -EROFS; 9480 9477 9481 - return start_delalloc_inodes(root, &nr, true, false); 9478 + return start_delalloc_inodes(root, &wbc, true, false); 9482 9479 } 9483 9480 9484 9481 int btrfs_start_delalloc_roots(struct btrfs_fs_info *fs_info, u64 nr, 9485 9482 bool in_reclaim_context) 9486 9483 { 9484 + struct writeback_control wbc = { 9485 + .nr_to_write = (nr == U64_MAX) ? LONG_MAX : (unsigned long)nr, 9486 + .sync_mode = WB_SYNC_NONE, 9487 + .range_start = 0, 9488 + .range_end = LLONG_MAX, 9489 + }; 9487 9490 struct btrfs_root *root; 9488 9491 struct list_head splice; 9489 9492 int ret; ··· 9508 9489 spin_lock(&fs_info->delalloc_root_lock); 9509 9490 list_splice_init(&fs_info->delalloc_roots, &splice); 9510 9491 while (!list_empty(&splice) && nr) { 9492 + /* 9493 + * Reset nr_to_write here so we know that we're doing a full 9494 + * flush. 9495 + */ 9496 + if (nr == U64_MAX) 9497 + wbc.nr_to_write = LONG_MAX; 9498 + 9511 9499 root = list_first_entry(&splice, struct btrfs_root, 9512 9500 delalloc_root); 9513 9501 root = btrfs_grab_root(root); ··· 9523 9497 &fs_info->delalloc_roots); 9524 9498 spin_unlock(&fs_info->delalloc_root_lock); 9525 9499 9526 - ret = start_delalloc_inodes(root, &nr, false, in_reclaim_context); 9500 + ret = start_delalloc_inodes(root, &wbc, false, in_reclaim_context); 9527 9501 btrfs_put_root(root); 9528 - if (ret < 0) 9502 + if (ret < 0 || wbc.nr_to_write <= 0) 9529 9503 goto out; 9530 9504 spin_lock(&fs_info->delalloc_root_lock); 9531 9505 }
+5 -5
fs/btrfs/print-tree.c
··· 26 26 { BTRFS_DATA_RELOC_TREE_OBJECTID, "DATA_RELOC_TREE" }, 27 27 }; 28 28 29 - const char *btrfs_root_name(u64 objectid, char *buf) 29 + const char *btrfs_root_name(const struct btrfs_key *key, char *buf) 30 30 { 31 31 int i; 32 32 33 - if (objectid == BTRFS_TREE_RELOC_OBJECTID) { 33 + if (key->objectid == BTRFS_TREE_RELOC_OBJECTID) { 34 34 snprintf(buf, BTRFS_ROOT_NAME_BUF_LEN, 35 - "TREE_RELOC offset=%llu", objectid); 35 + "TREE_RELOC offset=%llu", key->offset); 36 36 return buf; 37 37 } 38 38 39 39 for (i = 0; i < ARRAY_SIZE(root_map); i++) { 40 - if (root_map[i].id == objectid) 40 + if (root_map[i].id == key->objectid) 41 41 return root_map[i].name; 42 42 } 43 43 44 - snprintf(buf, BTRFS_ROOT_NAME_BUF_LEN, "%llu", objectid); 44 + snprintf(buf, BTRFS_ROOT_NAME_BUF_LEN, "%llu", key->objectid); 45 45 return buf; 46 46 } 47 47
+1 -1
fs/btrfs/print-tree.h
··· 11 11 12 12 void btrfs_print_leaf(struct extent_buffer *l); 13 13 void btrfs_print_tree(struct extent_buffer *c, bool follow); 14 - const char *btrfs_root_name(u64 objectid, char *buf); 14 + const char *btrfs_root_name(const struct btrfs_key *key, char *buf); 15 15 16 16 #endif
+6 -1
fs/btrfs/relocation.c
··· 2975 2975 return 0; 2976 2976 2977 2977 for (i = 0; i < btrfs_header_nritems(leaf); i++) { 2978 + u8 type; 2979 + 2978 2980 btrfs_item_key_to_cpu(leaf, &key, i); 2979 2981 if (key.type != BTRFS_EXTENT_DATA_KEY) 2980 2982 continue; 2981 2983 ei = btrfs_item_ptr(leaf, i, struct btrfs_file_extent_item); 2982 - if (btrfs_file_extent_type(leaf, ei) == BTRFS_FILE_EXTENT_REG && 2984 + type = btrfs_file_extent_type(leaf, ei); 2985 + 2986 + if ((type == BTRFS_FILE_EXTENT_REG || 2987 + type == BTRFS_FILE_EXTENT_PREALLOC) && 2983 2988 btrfs_file_extent_disk_bytenr(leaf, ei) == data_bytenr) { 2984 2989 found = true; 2985 2990 space_cache_ino = key.objectid;
+3 -1
fs/btrfs/space-info.c
··· 532 532 533 533 loops = 0; 534 534 while ((delalloc_bytes || dio_bytes) && loops < 3) { 535 - btrfs_start_delalloc_roots(fs_info, items, true); 535 + u64 nr_pages = min(delalloc_bytes, to_reclaim) >> PAGE_SHIFT; 536 + 537 + btrfs_start_delalloc_roots(fs_info, nr_pages, true); 536 538 537 539 loops++; 538 540 if (wait_ordered && !trans) {
+7
fs/btrfs/tree-checker.c
··· 760 760 { 761 761 struct btrfs_fs_info *fs_info = leaf->fs_info; 762 762 u64 length; 763 + u64 chunk_end; 763 764 u64 stripe_len; 764 765 u16 num_stripes; 765 766 u16 sub_stripes; ··· 813 812 if (unlikely(!length || !IS_ALIGNED(length, fs_info->sectorsize))) { 814 813 chunk_err(leaf, chunk, logical, 815 814 "invalid chunk length, have %llu", length); 815 + return -EUCLEAN; 816 + } 817 + if (unlikely(check_add_overflow(logical, length, &chunk_end))) { 818 + chunk_err(leaf, chunk, logical, 819 + "invalid chunk logical start and length, have logical start %llu length %llu", 820 + logical, length); 816 821 return -EUCLEAN; 817 822 } 818 823 if (unlikely(!is_power_of_2(stripe_len) || stripe_len != BTRFS_STRIPE_LEN)) {
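The tree-checker hunk above adds an overflow guard on `logical + length` via `check_add_overflow()`. As a hedged aside, the kernel macro is built on the compiler primitive shown below; this is a minimal userspace sketch of the same validation, with an illustrative function name rather than the btrfs code itself:

```c
#include <stdint.h>

/* Reject a chunk whose [logical, logical + length) range would wrap the
 * 64-bit space, mirroring the check_add_overflow() test added above.
 * __builtin_add_overflow (GCC/Clang) returns nonzero when the sum wraps
 * and stores the wrapped result through the third argument. */
static int chunk_range_ok(uint64_t logical, uint64_t length, uint64_t *chunk_end)
{
    return !__builtin_add_overflow(logical, length, chunk_end);
}
```

On failure the patched checker reports both operands before returning `-EUCLEAN`, so the log pinpoints which chunk item was corrupt.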
+1 -1
fs/cifs/connect.c
··· 3740 3740 3741 3741 if (!ses->binding) { 3742 3742 ses->capabilities = server->capabilities; 3743 - if (linuxExtEnabled == 0) 3743 + if (!linuxExtEnabled) 3744 3744 ses->capabilities &= (~server->vals->cap_unix); 3745 3745 3746 3746 if (ses->auth_key.response) {
+2 -1
fs/cifs/dfs_cache.c
··· 1260 1260 vi = find_vol(fullpath); 1261 1261 spin_unlock(&vol_list_lock); 1262 1262 1263 - kref_put(&vi->refcnt, vol_release); 1263 + if (!IS_ERR(vi)) 1264 + kref_put(&vi->refcnt, vol_release); 1264 1265 } 1265 1266 1266 1267 /**
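The dfs_cache fix guards `kref_put()` because `find_vol()` can return an `ERR_PTR`-encoded errno instead of a valid object. A small userspace model of that encoding (the real helpers live in `include/linux/err.h`; the lowercase names here are stand-ins):

```c
/* The kernel packs small negative errno values into the top page of the
 * pointer range, so one return value can carry either a valid object or
 * an error code. No real object is ever allocated up there. */
#define MAX_ERRNO 4095

static inline void *err_ptr(long error)      { return (void *)error; }
static inline long  ptr_err(const void *ptr) { return (long)ptr; }
static inline int   is_err(const void *ptr)
{
    return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}
```

A caller then mirrors the patched code: drop the reference only when the lookup did not fail, e.g. `if (!is_err(vi)) put(vi);`, instead of unconditionally dereferencing what may be an encoded errno.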
+1 -3
fs/cifs/fs_context.c
··· 303 303 int 304 304 smb3_fs_context_dup(struct smb3_fs_context *new_ctx, struct smb3_fs_context *ctx) 305 305 { 306 - int rc = 0; 307 - 308 306 memcpy(new_ctx, ctx, sizeof(*ctx)); 309 307 new_ctx->prepath = NULL; 310 308 new_ctx->mount_options = NULL; ··· 325 327 DUP_CTX_STR(nodename); 326 328 DUP_CTX_STR(iocharset); 327 329 328 - return rc; 330 + return 0; 329 331 } 330 332 331 333 static int
+1 -1
fs/cifs/smb2pdu.c
··· 3248 3248 free_rsp_buf(resp_buftype, rsp); 3249 3249 3250 3250 /* retry close in a worker thread if this one is interrupted */ 3251 - if (rc == -EINTR) { 3251 + if (is_interrupt_error(rc)) { 3252 3252 int tmp_rc; 3253 3253 3254 3254 tmp_rc = smb2_handle_cancelled_close(tcon, persistent_fid,
+1 -1
fs/cifs/smb2pdu.h
··· 424 424 __le16 TransformCount; 425 425 __u16 Reserved1; 426 426 __u32 Reserved2; 427 - __le16 RDMATransformIds[1]; 427 + __le16 RDMATransformIds[]; 428 428 } __packed; 429 429 430 430 /* Signing algorithms */
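The `RDMATransformIds[1]` to `RDMATransformIds[]` change converts a one-element array into a C99 flexible array member, so `sizeof` the struct no longer counts a phantom trailing element when sizing allocations. A sketch of the idiom (the struct name is illustrative, not the real SMB2 wire layout):

```c
#include <stdint.h>
#include <stdlib.h>

struct id_list {
    uint16_t count;
    uint16_t ids[];   /* flexible array member: contributes 0 bytes to sizeof */
};

/* Allocate the fixed header plus exactly n trailing elements. */
static struct id_list *id_list_alloc(uint16_t n)
{
    struct id_list *l = malloc(sizeof(*l) + n * sizeof(l->ids[0]));
    if (l)
        l->count = n;
    return l;
}
```

With the old `[1]` form, `sizeof(*l)` silently included one element, making length arithmetic on variable-size on-wire structures easy to get wrong by one entry.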
-17
fs/ext4/ext4_jbd2.c
··· 372 372 } 373 373 return err; 374 374 } 375 - 376 - int __ext4_handle_dirty_super(const char *where, unsigned int line, 377 - handle_t *handle, struct super_block *sb) 378 - { 379 - struct buffer_head *bh = EXT4_SB(sb)->s_sbh; 380 - int err = 0; 381 - 382 - ext4_superblock_csum_set(sb); 383 - if (ext4_handle_valid(handle)) { 384 - err = jbd2_journal_dirty_metadata(handle, bh); 385 - if (err) 386 - ext4_journal_abort_handle(where, line, __func__, 387 - bh, handle, err); 388 - } else 389 - mark_buffer_dirty(bh); 390 - return err; 391 - }
-5
fs/ext4/ext4_jbd2.h
··· 244 244 handle_t *handle, struct inode *inode, 245 245 struct buffer_head *bh); 246 246 247 - int __ext4_handle_dirty_super(const char *where, unsigned int line, 248 - handle_t *handle, struct super_block *sb); 249 - 250 247 #define ext4_journal_get_write_access(handle, bh) \ 251 248 __ext4_journal_get_write_access(__func__, __LINE__, (handle), (bh)) 252 249 #define ext4_forget(handle, is_metadata, inode, bh, block_nr) \ ··· 254 257 #define ext4_handle_dirty_metadata(handle, inode, bh) \ 255 258 __ext4_handle_dirty_metadata(__func__, __LINE__, (handle), (inode), \ 256 259 (bh)) 257 - #define ext4_handle_dirty_super(handle, sb) \ 258 - __ext4_handle_dirty_super(__func__, __LINE__, (handle), (sb)) 259 260 260 261 handle_t *__ext4_journal_start_sb(struct super_block *sb, unsigned int line, 261 262 int type, int blocks, int rsv_blocks,
+18 -17
fs/ext4/fast_commit.c
··· 604 604 trace_ext4_fc_track_range(inode, start, end, ret); 605 605 } 606 606 607 - static void ext4_fc_submit_bh(struct super_block *sb) 607 + static void ext4_fc_submit_bh(struct super_block *sb, bool is_tail) 608 608 { 609 609 int write_flags = REQ_SYNC; 610 610 struct buffer_head *bh = EXT4_SB(sb)->s_fc_bh; 611 611 612 - /* TODO: REQ_FUA | REQ_PREFLUSH is unnecessarily expensive. */ 613 - if (test_opt(sb, BARRIER)) 612 + /* Add REQ_FUA | REQ_PREFLUSH only its tail */ 613 + if (test_opt(sb, BARRIER) && is_tail) 614 614 write_flags |= REQ_FUA | REQ_PREFLUSH; 615 615 lock_buffer(bh); 616 616 set_buffer_dirty(bh); ··· 684 684 *crc = ext4_chksum(sbi, *crc, tl, sizeof(*tl)); 685 685 if (pad_len > 0) 686 686 ext4_fc_memzero(sb, tl + 1, pad_len, crc); 687 - ext4_fc_submit_bh(sb); 687 + ext4_fc_submit_bh(sb, false); 688 688 689 689 ret = jbd2_fc_get_buf(EXT4_SB(sb)->s_journal, &bh); 690 690 if (ret) ··· 741 741 tail.fc_crc = cpu_to_le32(crc); 742 742 ext4_fc_memcpy(sb, dst, &tail.fc_crc, sizeof(tail.fc_crc), NULL); 743 743 744 - ext4_fc_submit_bh(sb); 744 + ext4_fc_submit_bh(sb, true); 745 745 746 746 return 0; 747 747 } ··· 1268 1268 list_splice_init(&sbi->s_fc_dentry_q[FC_Q_STAGING], 1269 1269 &sbi->s_fc_dentry_q[FC_Q_MAIN]); 1270 1270 list_splice_init(&sbi->s_fc_q[FC_Q_STAGING], 1271 - &sbi->s_fc_q[FC_Q_STAGING]); 1271 + &sbi->s_fc_q[FC_Q_MAIN]); 1272 1272 1273 1273 ext4_clear_mount_flag(sb, EXT4_MF_FC_COMMITTING); 1274 1274 ext4_clear_mount_flag(sb, EXT4_MF_FC_INELIGIBLE); ··· 1318 1318 entry.len = darg.dname_len; 1319 1319 inode = ext4_iget(sb, darg.ino, EXT4_IGET_NORMAL); 1320 1320 1321 - if (IS_ERR_OR_NULL(inode)) { 1321 + if (IS_ERR(inode)) { 1322 1322 jbd_debug(1, "Inode %d not found", darg.ino); 1323 1323 return 0; 1324 1324 } 1325 1325 1326 1326 old_parent = ext4_iget(sb, darg.parent_ino, 1327 1327 EXT4_IGET_NORMAL); 1328 - if (IS_ERR_OR_NULL(old_parent)) { 1328 + if (IS_ERR(old_parent)) { 1329 1329 jbd_debug(1, "Dir with inode %d not found", 
darg.parent_ino); 1330 1330 iput(inode); 1331 1331 return 0; ··· 1410 1410 darg.parent_ino, darg.dname_len); 1411 1411 1412 1412 inode = ext4_iget(sb, darg.ino, EXT4_IGET_NORMAL); 1413 - if (IS_ERR_OR_NULL(inode)) { 1413 + if (IS_ERR(inode)) { 1414 1414 jbd_debug(1, "Inode not found."); 1415 1415 return 0; 1416 1416 } ··· 1466 1466 trace_ext4_fc_replay(sb, tag, ino, 0, 0); 1467 1467 1468 1468 inode = ext4_iget(sb, ino, EXT4_IGET_NORMAL); 1469 - if (!IS_ERR_OR_NULL(inode)) { 1469 + if (!IS_ERR(inode)) { 1470 1470 ext4_ext_clear_bb(inode); 1471 1471 iput(inode); 1472 1472 } 1473 + inode = NULL; 1473 1474 1474 1475 ext4_fc_record_modified_inode(sb, ino); 1475 1476 ··· 1513 1512 1514 1513 /* Given that we just wrote the inode on disk, this SHOULD succeed. */ 1515 1514 inode = ext4_iget(sb, ino, EXT4_IGET_NORMAL); 1516 - if (IS_ERR_OR_NULL(inode)) { 1515 + if (IS_ERR(inode)) { 1517 1516 jbd_debug(1, "Inode not found."); 1518 1517 return -EFSCORRUPTED; 1519 1518 } ··· 1565 1564 goto out; 1566 1565 1567 1566 inode = ext4_iget(sb, darg.ino, EXT4_IGET_NORMAL); 1568 - if (IS_ERR_OR_NULL(inode)) { 1567 + if (IS_ERR(inode)) { 1569 1568 jbd_debug(1, "inode %d not found.", darg.ino); 1570 1569 inode = NULL; 1571 1570 ret = -EINVAL; ··· 1578 1577 * dot and dot dot dirents are setup properly. 
1579 1578 */ 1580 1579 dir = ext4_iget(sb, darg.parent_ino, EXT4_IGET_NORMAL); 1581 - if (IS_ERR_OR_NULL(dir)) { 1580 + if (IS_ERR(dir)) { 1582 1581 jbd_debug(1, "Dir %d not found.", darg.ino); 1583 1582 goto out; 1584 1583 } ··· 1654 1653 1655 1654 inode = ext4_iget(sb, le32_to_cpu(fc_add_ex->fc_ino), 1656 1655 EXT4_IGET_NORMAL); 1657 - if (IS_ERR_OR_NULL(inode)) { 1656 + if (IS_ERR(inode)) { 1658 1657 jbd_debug(1, "Inode not found."); 1659 1658 return 0; 1660 1659 } ··· 1778 1777 le32_to_cpu(lrange->fc_ino), cur, remaining); 1779 1778 1780 1779 inode = ext4_iget(sb, le32_to_cpu(lrange->fc_ino), EXT4_IGET_NORMAL); 1781 - if (IS_ERR_OR_NULL(inode)) { 1780 + if (IS_ERR(inode)) { 1782 1781 jbd_debug(1, "Inode %d not found", le32_to_cpu(lrange->fc_ino)); 1783 1782 return 0; 1784 1783 } ··· 1833 1832 for (i = 0; i < state->fc_modified_inodes_used; i++) { 1834 1833 inode = ext4_iget(sb, state->fc_modified_inodes[i], 1835 1834 EXT4_IGET_NORMAL); 1836 - if (IS_ERR_OR_NULL(inode)) { 1835 + if (IS_ERR(inode)) { 1837 1836 jbd_debug(1, "Inode %d not found.", 1838 1837 state->fc_modified_inodes[i]); 1839 1838 continue; ··· 1850 1849 1851 1850 if (ret > 0) { 1852 1851 path = ext4_find_extent(inode, map.m_lblk, NULL, 0); 1853 - if (!IS_ERR_OR_NULL(path)) { 1852 + if (!IS_ERR(path)) { 1854 1853 for (j = 0; j < path->p_depth; j++) 1855 1854 ext4_mb_mark_bb(inode->i_sb, 1856 1855 path[j].p_block, 1, 1);
+5 -2
fs/ext4/file.c
··· 809 809 err = ext4_journal_get_write_access(handle, sbi->s_sbh); 810 810 if (err) 811 811 goto out_journal; 812 - strlcpy(sbi->s_es->s_last_mounted, cp, 812 + lock_buffer(sbi->s_sbh); 813 + strncpy(sbi->s_es->s_last_mounted, cp, 813 814 sizeof(sbi->s_es->s_last_mounted)); 814 - ext4_handle_dirty_super(handle, sb); 815 + ext4_superblock_csum_set(sb); 816 + unlock_buffer(sbi->s_sbh); 817 + ext4_handle_dirty_metadata(handle, NULL, sbi->s_sbh); 815 818 out_journal: 816 819 ext4_journal_stop(handle); 817 820 out:
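Several ext4 hunks in this series repeat one pattern: take the superblock buffer lock, mutate the on-disk fields, recompute the checksum, then unlock before marking the buffer dirty. A generic userspace model of why the checksum must be set under the same lock (toy names and a toy checksum, not ext4's crc32c):

```c
#include <pthread.h>
#include <stdint.h>

/* Mutate the structure and recompute its checksum inside one critical
 * section, so a concurrent flusher can never observe field values that
 * disagree with the stored checksum. */
struct record {
    pthread_mutex_t lock;
    uint32_t value;
    uint32_t csum;
};

static uint32_t csum_of(uint32_t v) { return v ^ 0xDEADBEEFu; } /* toy checksum */

static void record_update(struct record *r, uint32_t v)
{
    pthread_mutex_lock(&r->lock);
    r->value = v;
    r->csum  = csum_of(r->value);   /* checksum set before the unlock */
    pthread_mutex_unlock(&r->lock);
}
```

In the patches above the lock is `lock_buffer(sbi->s_sbh)`, the mutation is the superblock field update, and `ext4_superblock_csum_set()` plays the role of `csum_of()` before `unlock_buffer()`.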
+5 -1
fs/ext4/inode.c
··· 5150 5150 err = ext4_journal_get_write_access(handle, EXT4_SB(sb)->s_sbh); 5151 5151 if (err) 5152 5152 goto out_brelse; 5153 + lock_buffer(EXT4_SB(sb)->s_sbh); 5153 5154 ext4_set_feature_large_file(sb); 5155 + ext4_superblock_csum_set(sb); 5156 + unlock_buffer(EXT4_SB(sb)->s_sbh); 5154 5157 ext4_handle_sync(handle); 5155 - err = ext4_handle_dirty_super(handle, sb); 5158 + err = ext4_handle_dirty_metadata(handle, NULL, 5159 + EXT4_SB(sb)->s_sbh); 5156 5160 } 5157 5161 ext4_update_inode_fsync_trans(handle, inode, need_datasync); 5158 5162 out_brelse:
+3
fs/ext4/ioctl.c
··· 1157 1157 err = ext4_journal_get_write_access(handle, sbi->s_sbh); 1158 1158 if (err) 1159 1159 goto pwsalt_err_journal; 1160 + lock_buffer(sbi->s_sbh); 1160 1161 generate_random_uuid(sbi->s_es->s_encrypt_pw_salt); 1162 + ext4_superblock_csum_set(sb); 1163 + unlock_buffer(sbi->s_sbh); 1161 1164 err = ext4_handle_dirty_metadata(handle, NULL, 1162 1165 sbi->s_sbh); 1163 1166 pwsalt_err_journal:
+19 -12
fs/ext4/namei.c
··· 2976 2976 (le32_to_cpu(sbi->s_es->s_inodes_count))) { 2977 2977 /* Insert this inode at the head of the on-disk orphan list */ 2978 2978 NEXT_ORPHAN(inode) = le32_to_cpu(sbi->s_es->s_last_orphan); 2979 + lock_buffer(sbi->s_sbh); 2979 2980 sbi->s_es->s_last_orphan = cpu_to_le32(inode->i_ino); 2981 + ext4_superblock_csum_set(sb); 2982 + unlock_buffer(sbi->s_sbh); 2980 2983 dirty = true; 2981 2984 } 2982 2985 list_add(&EXT4_I(inode)->i_orphan, &sbi->s_orphan); 2983 2986 mutex_unlock(&sbi->s_orphan_lock); 2984 2987 2985 2988 if (dirty) { 2986 - err = ext4_handle_dirty_super(handle, sb); 2989 + err = ext4_handle_dirty_metadata(handle, NULL, sbi->s_sbh); 2987 2990 rc = ext4_mark_iloc_dirty(handle, inode, &iloc); 2988 2991 if (!err) 2989 2992 err = rc; ··· 3062 3059 mutex_unlock(&sbi->s_orphan_lock); 3063 3060 goto out_brelse; 3064 3061 } 3062 + lock_buffer(sbi->s_sbh); 3065 3063 sbi->s_es->s_last_orphan = cpu_to_le32(ino_next); 3064 + ext4_superblock_csum_set(inode->i_sb); 3065 + unlock_buffer(sbi->s_sbh); 3066 3066 mutex_unlock(&sbi->s_orphan_lock); 3067 - err = ext4_handle_dirty_super(handle, inode->i_sb); 3067 + err = ext4_handle_dirty_metadata(handle, NULL, sbi->s_sbh); 3068 3068 } else { 3069 3069 struct ext4_iloc iloc2; 3070 3070 struct inode *i_prev = ··· 3599 3593 return retval2; 3600 3594 } 3601 3595 } 3602 - brelse(ent->bh); 3603 - ent->bh = NULL; 3604 - 3605 3596 return retval; 3606 3597 } 3607 3598 ··· 3797 3794 } 3798 3795 } 3799 3796 3797 + old_file_type = old.de->file_type; 3800 3798 if (IS_DIRSYNC(old.dir) || IS_DIRSYNC(new.dir)) 3801 3799 ext4_handle_sync(handle); 3802 3800 ··· 3825 3821 force_reread = (new.dir->i_ino == old.dir->i_ino && 3826 3822 ext4_test_inode_flag(new.dir, EXT4_INODE_INLINE_DATA)); 3827 3823 3828 - old_file_type = old.de->file_type; 3829 3824 if (whiteout) { 3830 3825 /* 3831 3826 * Do this before adding a new entry, so the old entry is sure ··· 3922 3919 retval = 0; 3923 3920 3924 3921 end_rename: 3922 + if (whiteout) { 3923 + 
if (retval) { 3924 + ext4_setent(handle, &old, 3925 + old.inode->i_ino, old_file_type); 3926 + drop_nlink(whiteout); 3927 + } 3928 + unlock_new_inode(whiteout); 3929 + iput(whiteout); 3930 + 3931 + } 3925 3932 brelse(old.dir_bh); 3926 3933 brelse(old.bh); 3927 3934 brelse(new.bh); 3928 - if (whiteout) { 3929 - if (retval) 3930 - drop_nlink(whiteout); 3931 - unlock_new_inode(whiteout); 3932 - iput(whiteout); 3933 - } 3934 3935 if (handle) 3935 3936 ext4_journal_stop(handle); 3936 3937 return retval;
+16 -4
fs/ext4/resize.c
··· 899 899 EXT4_SB(sb)->s_gdb_count++; 900 900 ext4_kvfree_array_rcu(o_group_desc); 901 901 902 + lock_buffer(EXT4_SB(sb)->s_sbh); 902 903 le16_add_cpu(&es->s_reserved_gdt_blocks, -1); 903 - err = ext4_handle_dirty_super(handle, sb); 904 + ext4_superblock_csum_set(sb); 905 + unlock_buffer(EXT4_SB(sb)->s_sbh); 906 + err = ext4_handle_dirty_metadata(handle, NULL, EXT4_SB(sb)->s_sbh); 904 907 if (err) 905 908 ext4_std_error(sb, err); 906 909 return err; ··· 1387 1384 reserved_blocks *= blocks_count; 1388 1385 do_div(reserved_blocks, 100); 1389 1386 1387 + lock_buffer(sbi->s_sbh); 1390 1388 ext4_blocks_count_set(es, ext4_blocks_count(es) + blocks_count); 1391 1389 ext4_free_blocks_count_set(es, ext4_free_blocks_count(es) + free_blocks); 1392 1390 le32_add_cpu(&es->s_inodes_count, EXT4_INODES_PER_GROUP(sb) * ··· 1425 1421 * active. */ 1426 1422 ext4_r_blocks_count_set(es, ext4_r_blocks_count(es) + 1427 1423 reserved_blocks); 1424 + ext4_superblock_csum_set(sb); 1425 + unlock_buffer(sbi->s_sbh); 1428 1426 1429 1427 /* Update the free space counts */ 1430 1428 percpu_counter_add(&sbi->s_freeclusters_counter, ··· 1521 1515 1522 1516 ext4_update_super(sb, flex_gd); 1523 1517 1524 - err = ext4_handle_dirty_super(handle, sb); 1518 + err = ext4_handle_dirty_metadata(handle, NULL, sbi->s_sbh); 1525 1519 1526 1520 exit_journal: 1527 1521 err2 = ext4_journal_stop(handle); ··· 1723 1717 goto errout; 1724 1718 } 1725 1719 1720 + lock_buffer(EXT4_SB(sb)->s_sbh); 1726 1721 ext4_blocks_count_set(es, o_blocks_count + add); 1727 1722 ext4_free_blocks_count_set(es, ext4_free_blocks_count(es) + add); 1723 + ext4_superblock_csum_set(sb); 1724 + unlock_buffer(EXT4_SB(sb)->s_sbh); 1728 1725 ext4_debug("freeing blocks %llu through %llu\n", o_blocks_count, 1729 1726 o_blocks_count + add); 1730 1727 /* We add the blocks to the bitmap and set the group need init bit */ 1731 1728 err = ext4_group_add_blocks(handle, sb, o_blocks_count, add); 1732 1729 if (err) 1733 1730 goto errout; 1734 - 
ext4_handle_dirty_super(handle, sb); 1731 + ext4_handle_dirty_metadata(handle, NULL, EXT4_SB(sb)->s_sbh); 1735 1732 ext4_debug("freed blocks %llu through %llu\n", o_blocks_count, 1736 1733 o_blocks_count + add); 1737 1734 errout: ··· 1883 1874 if (err) 1884 1875 goto errout; 1885 1876 1877 + lock_buffer(sbi->s_sbh); 1886 1878 ext4_clear_feature_resize_inode(sb); 1887 1879 ext4_set_feature_meta_bg(sb); 1888 1880 sbi->s_es->s_first_meta_bg = 1889 1881 cpu_to_le32(num_desc_blocks(sb, sbi->s_groups_count)); 1882 + ext4_superblock_csum_set(sb); 1883 + unlock_buffer(sbi->s_sbh); 1890 1884 1891 - err = ext4_handle_dirty_super(handle, sb); 1885 + err = ext4_handle_dirty_metadata(handle, NULL, sbi->s_sbh); 1892 1886 if (err) { 1893 1887 ext4_std_error(sb, err); 1894 1888 goto errout;
+118 -72
fs/ext4/super.c
··· 65 65 static int ext4_load_journal(struct super_block *, struct ext4_super_block *, 66 66 unsigned long journal_devnum); 67 67 static int ext4_show_options(struct seq_file *seq, struct dentry *root); 68 - static int ext4_commit_super(struct super_block *sb, int sync); 68 + static void ext4_update_super(struct super_block *sb); 69 + static int ext4_commit_super(struct super_block *sb); 69 70 static int ext4_mark_recovery_complete(struct super_block *sb, 70 71 struct ext4_super_block *es); 71 72 static int ext4_clear_journal_err(struct super_block *sb, ··· 587 586 return EXT4_ERR_UNKNOWN; 588 587 } 589 588 590 - static void __save_error_info(struct super_block *sb, int error, 591 - __u32 ino, __u64 block, 592 - const char *func, unsigned int line) 589 + static void save_error_info(struct super_block *sb, int error, 590 + __u32 ino, __u64 block, 591 + const char *func, unsigned int line) 593 592 { 594 593 struct ext4_sb_info *sbi = EXT4_SB(sb); 595 594 596 - EXT4_SB(sb)->s_mount_state |= EXT4_ERROR_FS; 597 - if (bdev_read_only(sb->s_bdev)) 598 - return; 599 595 /* We default to EFSCORRUPTED error... */ 600 596 if (error == 0) 601 597 error = EFSCORRUPTED; ··· 616 618 spin_unlock(&sbi->s_error_lock); 617 619 } 618 620 619 - static void save_error_info(struct super_block *sb, int error, 620 - __u32 ino, __u64 block, 621 - const char *func, unsigned int line) 622 - { 623 - __save_error_info(sb, error, ino, block, func, line); 624 - if (!bdev_read_only(sb->s_bdev)) 625 - ext4_commit_super(sb, 1); 626 - } 627 - 628 621 /* Deal with the reporting of failure conditions on a filesystem such as 629 622 * inconsistencies detected or read IO failures. 630 623 * ··· 636 647 * used to deal with unrecoverable failures such as journal IO errors or ENOMEM 637 648 * at a critical moment in log management. 
638 649 */ 639 - static void ext4_handle_error(struct super_block *sb, bool force_ro) 650 + static void ext4_handle_error(struct super_block *sb, bool force_ro, int error, 651 + __u32 ino, __u64 block, 652 + const char *func, unsigned int line) 640 653 { 641 654 journal_t *journal = EXT4_SB(sb)->s_journal; 655 + bool continue_fs = !force_ro && test_opt(sb, ERRORS_CONT); 642 656 657 + EXT4_SB(sb)->s_mount_state |= EXT4_ERROR_FS; 643 658 if (test_opt(sb, WARN_ON_ERROR)) 644 659 WARN_ON_ONCE(1); 645 660 646 - if (sb_rdonly(sb) || (!force_ro && test_opt(sb, ERRORS_CONT))) 661 + if (!continue_fs && !sb_rdonly(sb)) { 662 + ext4_set_mount_flag(sb, EXT4_MF_FS_ABORTED); 663 + if (journal) 664 + jbd2_journal_abort(journal, -EIO); 665 + } 666 + 667 + if (!bdev_read_only(sb->s_bdev)) { 668 + save_error_info(sb, error, ino, block, func, line); 669 + /* 670 + * In case the fs should keep running, we need to writeout 671 + * superblock through the journal. Due to lock ordering 672 + * constraints, it may not be safe to do it right here so we 673 + * defer superblock flushing to a workqueue. 674 + */ 675 + if (continue_fs) 676 + schedule_work(&EXT4_SB(sb)->s_error_work); 677 + else 678 + ext4_commit_super(sb); 679 + } 680 + 681 + if (sb_rdonly(sb) || continue_fs) 647 682 return; 648 683 649 - ext4_set_mount_flag(sb, EXT4_MF_FS_ABORTED); 650 - if (journal) 651 - jbd2_journal_abort(journal, -EIO); 652 684 /* 653 685 * We force ERRORS_RO behavior when system is rebooting. Otherwise we 654 686 * could panic during 'reboot -f' as the underlying device got already ··· 692 682 { 693 683 struct ext4_sb_info *sbi = container_of(work, struct ext4_sb_info, 694 684 s_error_work); 685 + journal_t *journal = sbi->s_journal; 686 + handle_t *handle; 695 687 696 - ext4_commit_super(sbi->s_sb, 1); 688 + /* 689 + * If the journal is still running, we have to write out superblock 690 + * through the journal to avoid collisions of other journalled sb 691 + * updates. 
692 + * 693 + * We use directly jbd2 functions here to avoid recursing back into 694 + * ext4 error handling code during handling of previous errors. 695 + */ 696 + if (!sb_rdonly(sbi->s_sb) && journal) { 697 + handle = jbd2_journal_start(journal, 1); 698 + if (IS_ERR(handle)) 699 + goto write_directly; 700 + if (jbd2_journal_get_write_access(handle, sbi->s_sbh)) { 701 + jbd2_journal_stop(handle); 702 + goto write_directly; 703 + } 704 + ext4_update_super(sbi->s_sb); 705 + if (jbd2_journal_dirty_metadata(handle, sbi->s_sbh)) { 706 + jbd2_journal_stop(handle); 707 + goto write_directly; 708 + } 709 + jbd2_journal_stop(handle); 710 + return; 711 + } 712 + write_directly: 713 + /* 714 + * Write through journal failed. Write sb directly to get error info 715 + * out and hope for the best. 716 + */ 717 + ext4_commit_super(sbi->s_sb); 697 718 } 698 719 699 720 #define ext4_error_ratelimit(sb) \ ··· 751 710 sb->s_id, function, line, current->comm, &vaf); 752 711 va_end(args); 753 712 } 754 - save_error_info(sb, error, 0, block, function, line); 755 - ext4_handle_error(sb, force_ro); 713 + ext4_handle_error(sb, force_ro, error, 0, block, function, line); 756 714 } 757 715 758 716 void __ext4_error_inode(struct inode *inode, const char *function, ··· 781 741 current->comm, &vaf); 782 742 va_end(args); 783 743 } 784 - save_error_info(inode->i_sb, error, inode->i_ino, block, 785 - function, line); 786 - ext4_handle_error(inode->i_sb, false); 744 + ext4_handle_error(inode->i_sb, false, error, inode->i_ino, block, 745 + function, line); 787 746 } 788 747 789 748 void __ext4_error_file(struct file *file, const char *function, ··· 819 780 current->comm, path, &vaf); 820 781 va_end(args); 821 782 } 822 - save_error_info(inode->i_sb, EFSCORRUPTED, inode->i_ino, block, 823 - function, line); 824 - ext4_handle_error(inode->i_sb, false); 783 + ext4_handle_error(inode->i_sb, false, EFSCORRUPTED, inode->i_ino, block, 784 + function, line); 825 785 } 826 786 827 787 const char 
*ext4_decode_error(struct super_block *sb, int errno, ··· 887 849 sb->s_id, function, line, errstr); 888 850 } 889 851 890 - save_error_info(sb, -errno, 0, 0, function, line); 891 - ext4_handle_error(sb, false); 852 + ext4_handle_error(sb, false, -errno, 0, 0, function, line); 892 853 } 893 854 894 855 void __ext4_msg(struct super_block *sb, ··· 981 944 if (test_opt(sb, ERRORS_CONT)) { 982 945 if (test_opt(sb, WARN_ON_ERROR)) 983 946 WARN_ON_ONCE(1); 984 - __save_error_info(sb, EFSCORRUPTED, ino, block, function, line); 985 - schedule_work(&EXT4_SB(sb)->s_error_work); 947 + EXT4_SB(sb)->s_mount_state |= EXT4_ERROR_FS; 948 + if (!bdev_read_only(sb->s_bdev)) { 949 + save_error_info(sb, EFSCORRUPTED, ino, block, function, 950 + line); 951 + schedule_work(&EXT4_SB(sb)->s_error_work); 952 + } 986 953 return; 987 954 } 988 955 ext4_unlock_group(sb, grp); 989 - save_error_info(sb, EFSCORRUPTED, ino, block, function, line); 990 - ext4_handle_error(sb, false); 956 + ext4_handle_error(sb, false, EFSCORRUPTED, ino, block, function, line); 991 957 /* 992 958 * We only get here in the ERRORS_RO case; relocking the group 993 959 * may be dangerous, but nothing bad will happen since the ··· 1192 1152 es->s_state = cpu_to_le16(sbi->s_mount_state); 1193 1153 } 1194 1154 if (!sb_rdonly(sb)) 1195 - ext4_commit_super(sb, 1); 1155 + ext4_commit_super(sb); 1196 1156 1197 1157 rcu_read_lock(); 1198 1158 group_desc = rcu_dereference(sbi->s_group_desc); ··· 2682 2642 if (sbi->s_journal) 2683 2643 ext4_set_feature_journal_needs_recovery(sb); 2684 2644 2685 - err = ext4_commit_super(sb, 1); 2645 + err = ext4_commit_super(sb); 2686 2646 done: 2687 2647 if (test_opt(sb, DEBUG)) 2688 2648 printk(KERN_INFO "[EXT4 FS bs=%lu, gc=%u, " ··· 4908 4868 if (DUMMY_ENCRYPTION_ENABLED(sbi) && !sb_rdonly(sb) && 4909 4869 !ext4_has_feature_encrypt(sb)) { 4910 4870 ext4_set_feature_encrypt(sb); 4911 - ext4_commit_super(sb, 1); 4871 + ext4_commit_super(sb); 4912 4872 } 4913 4873 4914 4874 /* ··· 5458 5418 
es->s_journal_dev = cpu_to_le32(journal_devnum); 5459 5419 5460 5420 /* Make sure we flush the recovery flag to disk. */ 5461 - ext4_commit_super(sb, 1); 5421 + ext4_commit_super(sb); 5462 5422 } 5463 5423 5464 5424 return 0; ··· 5468 5428 return err; 5469 5429 } 5470 5430 5471 - static int ext4_commit_super(struct super_block *sb, int sync) 5431 + /* Copy state of EXT4_SB(sb) into buffer for on-disk superblock */ 5432 + static void ext4_update_super(struct super_block *sb) 5472 5433 { 5473 5434 struct ext4_sb_info *sbi = EXT4_SB(sb); 5474 - struct ext4_super_block *es = EXT4_SB(sb)->s_es; 5475 - struct buffer_head *sbh = EXT4_SB(sb)->s_sbh; 5476 - int error = 0; 5435 + struct ext4_super_block *es = sbi->s_es; 5436 + struct buffer_head *sbh = sbi->s_sbh; 5477 5437 5478 - if (!sbh || block_device_ejected(sb)) 5479 - return error; 5480 - 5438 + lock_buffer(sbh); 5481 5439 /* 5482 5440 * If the file system is mounted read-only, don't update the 5483 5441 * superblock write time. This avoids updating the superblock ··· 5489 5451 if (!(sb->s_flags & SB_RDONLY)) 5490 5452 ext4_update_tstamp(es, s_wtime); 5491 5453 es->s_kbytes_written = 5492 - cpu_to_le64(EXT4_SB(sb)->s_kbytes_written + 5454 + cpu_to_le64(sbi->s_kbytes_written + 5493 5455 ((part_stat_read(sb->s_bdev, sectors[STAT_WRITE]) - 5494 - EXT4_SB(sb)->s_sectors_written_start) >> 1)); 5495 - if (percpu_counter_initialized(&EXT4_SB(sb)->s_freeclusters_counter)) 5456 + sbi->s_sectors_written_start) >> 1)); 5457 + if (percpu_counter_initialized(&sbi->s_freeclusters_counter)) 5496 5458 ext4_free_blocks_count_set(es, 5497 - EXT4_C2B(EXT4_SB(sb), percpu_counter_sum_positive( 5498 - &EXT4_SB(sb)->s_freeclusters_counter))); 5499 - if (percpu_counter_initialized(&EXT4_SB(sb)->s_freeinodes_counter)) 5459 + EXT4_C2B(sbi, percpu_counter_sum_positive( 5460 + &sbi->s_freeclusters_counter))); 5461 + if (percpu_counter_initialized(&sbi->s_freeinodes_counter)) 5500 5462 es->s_free_inodes_count = 5501 5463 
cpu_to_le32(percpu_counter_sum_positive( 5502 - &EXT4_SB(sb)->s_freeinodes_counter)); 5464 + &sbi->s_freeinodes_counter)); 5503 5465 /* Copy error information to the on-disk superblock */ 5504 5466 spin_lock(&sbi->s_error_lock); 5505 5467 if (sbi->s_add_error_count > 0) { ··· 5540 5502 } 5541 5503 spin_unlock(&sbi->s_error_lock); 5542 5504 5543 - BUFFER_TRACE(sbh, "marking dirty"); 5544 5505 ext4_superblock_csum_set(sb); 5545 - if (sync) 5546 - lock_buffer(sbh); 5506 + unlock_buffer(sbh); 5507 + } 5508 + 5509 + static int ext4_commit_super(struct super_block *sb) 5510 + { 5511 + struct buffer_head *sbh = EXT4_SB(sb)->s_sbh; 5512 + int error = 0; 5513 + 5514 + if (!sbh || block_device_ejected(sb)) 5515 + return error; 5516 + 5517 + ext4_update_super(sb); 5518 + 5547 5519 if (buffer_write_io_error(sbh) || !buffer_uptodate(sbh)) { 5548 5520 /* 5549 5521 * Oh, dear. A previous attempt to write the ··· 5568 5520 clear_buffer_write_io_error(sbh); 5569 5521 set_buffer_uptodate(sbh); 5570 5522 } 5523 + BUFFER_TRACE(sbh, "marking dirty"); 5571 5524 mark_buffer_dirty(sbh); 5572 - if (sync) { 5573 - unlock_buffer(sbh); 5574 - error = __sync_dirty_buffer(sbh, 5575 - REQ_SYNC | (test_opt(sb, BARRIER) ? REQ_FUA : 0)); 5576 - if (buffer_write_io_error(sbh)) { 5577 - ext4_msg(sb, KERN_ERR, "I/O error while writing " 5578 - "superblock"); 5579 - clear_buffer_write_io_error(sbh); 5580 - set_buffer_uptodate(sbh); 5581 - } 5525 + error = __sync_dirty_buffer(sbh, 5526 + REQ_SYNC | (test_opt(sb, BARRIER) ? 
REQ_FUA : 0)); 5527 + if (buffer_write_io_error(sbh)) { 5528 + ext4_msg(sb, KERN_ERR, "I/O error while writing " 5529 + "superblock"); 5530 + clear_buffer_write_io_error(sbh); 5531 + set_buffer_uptodate(sbh); 5582 5532 } 5583 5533 return error; 5584 5534 } ··· 5607 5561 5608 5562 if (ext4_has_feature_journal_needs_recovery(sb) && sb_rdonly(sb)) { 5609 5563 ext4_clear_feature_journal_needs_recovery(sb); 5610 - ext4_commit_super(sb, 1); 5564 + ext4_commit_super(sb); 5611 5565 } 5612 5566 out: 5613 5567 jbd2_journal_unlock_updates(journal); ··· 5649 5603 5650 5604 EXT4_SB(sb)->s_mount_state |= EXT4_ERROR_FS; 5651 5605 es->s_state |= cpu_to_le16(EXT4_ERROR_FS); 5652 - ext4_commit_super(sb, 1); 5606 + ext4_commit_super(sb); 5653 5607 5654 5608 jbd2_journal_clear_err(journal); 5655 5609 jbd2_journal_update_sb_errno(journal); ··· 5751 5705 ext4_clear_feature_journal_needs_recovery(sb); 5752 5706 } 5753 5707 5754 - error = ext4_commit_super(sb, 1); 5708 + error = ext4_commit_super(sb); 5755 5709 out: 5756 5710 if (journal) 5757 5711 /* we rely on upper layer to stop further updates */ ··· 5773 5727 ext4_set_feature_journal_needs_recovery(sb); 5774 5728 } 5775 5729 5776 - ext4_commit_super(sb, 1); 5730 + ext4_commit_super(sb); 5777 5731 return 0; 5778 5732 } 5779 5733 ··· 6033 5987 } 6034 5988 6035 5989 if (sbi->s_journal == NULL && !(old_sb_flags & SB_RDONLY)) { 6036 - err = ext4_commit_super(sb, 1); 5990 + err = ext4_commit_super(sb); 6037 5991 if (err) 6038 5992 goto restore_opts; 6039 5993 }
+4 -1
fs/ext4/xattr.c
··· 792 792 793 793 BUFFER_TRACE(EXT4_SB(sb)->s_sbh, "get_write_access"); 794 794 if (ext4_journal_get_write_access(handle, EXT4_SB(sb)->s_sbh) == 0) { 795 + lock_buffer(EXT4_SB(sb)->s_sbh); 795 796 ext4_set_feature_xattr(sb); 796 - ext4_handle_dirty_super(handle, sb); 797 + ext4_superblock_csum_set(sb); 798 + unlock_buffer(EXT4_SB(sb)->s_sbh); 799 + ext4_handle_dirty_metadata(handle, NULL, EXT4_SB(sb)->s_sbh); 797 800 } 798 801 } 799 802
+41 -5
fs/io_uring.c
··· 354 354 unsigned cq_entries; 355 355 unsigned cq_mask; 356 356 atomic_t cq_timeouts; 357 + unsigned cq_last_tm_flush; 357 358 unsigned long cq_check_overflow; 358 359 struct wait_queue_head cq_wait; 359 360 struct fasync_struct *cq_fasync; ··· 1107 1106 1108 1107 static int __io_sq_thread_acquire_files(struct io_ring_ctx *ctx) 1109 1108 { 1109 + if (current->flags & PF_EXITING) 1110 + return -EFAULT; 1111 + 1110 1112 if (!current->files) { 1111 1113 struct files_struct *files; 1112 1114 struct nsproxy *nsproxy; ··· 1137 1133 { 1138 1134 struct mm_struct *mm; 1139 1135 1136 + if (current->flags & PF_EXITING) 1137 + return -EFAULT; 1140 1138 if (current->mm) 1141 1139 return 0; 1142 1140 ··· 1640 1634 1641 1635 static void io_flush_timeouts(struct io_ring_ctx *ctx) 1642 1636 { 1643 - while (!list_empty(&ctx->timeout_list)) { 1637 + u32 seq; 1638 + 1639 + if (list_empty(&ctx->timeout_list)) 1640 + return; 1641 + 1642 + seq = ctx->cached_cq_tail - atomic_read(&ctx->cq_timeouts); 1643 + 1644 + do { 1645 + u32 events_needed, events_got; 1644 1646 struct io_kiocb *req = list_first_entry(&ctx->timeout_list, 1645 1647 struct io_kiocb, timeout.list); 1646 1648 1647 1649 if (io_is_timeout_noseq(req)) 1648 1650 break; 1649 - if (req->timeout.target_seq != ctx->cached_cq_tail 1650 - - atomic_read(&ctx->cq_timeouts)) 1651 + 1652 + /* 1653 + * Since seq can easily wrap around over time, subtract 1654 + * the last seq at which timeouts were flushed before comparing. 1655 + * Assuming not more than 2^31-1 events have happened since, 1656 + * these subtractions won't have wrapped, so we can check if 1657 + * target is in [last_seq, current_seq] by comparing the two. 
1658 + */ 1659 + events_needed = req->timeout.target_seq - ctx->cq_last_tm_flush; 1660 + events_got = seq - ctx->cq_last_tm_flush; 1661 + if (events_got < events_needed) 1651 1662 break; 1652 1663 1653 1664 list_del_init(&req->timeout.list); 1654 1665 io_kill_timeout(req); 1655 - } 1666 + } while (!list_empty(&ctx->timeout_list)); 1667 + 1668 + ctx->cq_last_tm_flush = seq; 1656 1669 } 1657 1670 1658 1671 static void io_commit_cqring(struct io_ring_ctx *ctx) ··· 5857 5832 tail = ctx->cached_cq_tail - atomic_read(&ctx->cq_timeouts); 5858 5833 req->timeout.target_seq = tail + off; 5859 5834 5835 + /* Update the last seq here in case io_flush_timeouts() hasn't. 5836 + * This is safe because ->completion_lock is held, and submissions 5837 + * and completions are never mixed in the same ->completion_lock section. 5838 + */ 5839 + ctx->cq_last_tm_flush = tail; 5840 + 5860 5841 /* 5861 5842 * Insertion sort, ensuring the first entry in the list is always 5862 5843 * the one we need first. ··· 7087 7056 7088 7057 if (sqt_spin || !time_after(jiffies, timeout)) { 7089 7058 io_run_task_work(); 7059 + io_sq_thread_drop_mm_files(); 7090 7060 cond_resched(); 7091 7061 if (sqt_spin) 7092 7062 timeout = jiffies + sqd->sq_thread_idle; ··· 7125 7093 } 7126 7094 7127 7095 io_run_task_work(); 7096 + io_sq_thread_drop_mm_files(); 7128 7097 7129 7098 if (cur_css) 7130 7099 io_sq_thread_unassociate_blkcg(); ··· 8921 8888 mutex_unlock(&ctx->uring_lock); 8922 8889 8923 8890 /* make sure callers enter the ring to get error */ 8924 - io_ring_set_wakeup_flag(ctx); 8891 + if (ctx->rings) 8892 + io_ring_set_wakeup_flag(ctx); 8925 8893 } 8926 8894 8927 8895 /* ··· 9101 9067 finish_wait(&tctx->wait, &wait); 9102 9068 } while (1); 9103 9069 9070 + finish_wait(&tctx->wait, &wait); 9104 9071 atomic_dec(&tctx->in_idle); 9105 9072 9106 9073 io_uring_remove_task_files(tctx); ··· 9735 9700 */ 9736 9701 ret = io_uring_install_fd(ctx, file); 9737 9702 if (ret < 0) { 9703 + io_disable_sqo_submit(ctx); 9738 
9704 /* fput will clean it up */ 9739 9705 fput(file); 9740 9706 return ret;
+5 -2
fs/namespace.c
··· 1713 1713 { 1714 1714 struct mount *mnt = real_mount(path->mnt); 1715 1715 1716 - if (flags & ~(MNT_FORCE | MNT_DETACH | MNT_EXPIRE | UMOUNT_NOFOLLOW)) 1717 - return -EINVAL; 1718 1716 if (!may_mount()) 1719 1717 return -EPERM; 1720 1718 if (path->dentry != path->mnt->mnt_root) ··· 1726 1728 return 0; 1727 1729 } 1728 1730 1731 + // caller is responsible for flags being sane 1729 1732 int path_umount(struct path *path, int flags) 1730 1733 { 1731 1734 struct mount *mnt = real_mount(path->mnt); ··· 1747 1748 int lookup_flags = LOOKUP_MOUNTPOINT; 1748 1749 struct path path; 1749 1750 int ret; 1751 + 1752 + // basic validity checks done first 1753 + if (flags & ~(MNT_FORCE | MNT_DETACH | MNT_EXPIRE | UMOUNT_NOFOLLOW)) 1754 + return -EINVAL; 1750 1755 1751 1756 if (!(flags & UMOUNT_NOFOLLOW)) 1752 1757 lookup_flags |= LOOKUP_FOLLOW;
+7 -5
fs/nfs/delegation.c
··· 1011 1011 const struct nfs_fh *fhandle) 1012 1012 { 1013 1013 struct nfs_delegation *delegation; 1014 - struct inode *freeme, *res = NULL; 1014 + struct super_block *freeme = NULL; 1015 + struct inode *res = NULL; 1015 1016 1016 1017 list_for_each_entry_rcu(delegation, &server->delegations, super_list) { 1017 1018 spin_lock(&delegation->lock); 1018 1019 if (delegation->inode != NULL && 1019 1020 !test_bit(NFS_DELEGATION_REVOKED, &delegation->flags) && 1020 1021 nfs_compare_fh(fhandle, &NFS_I(delegation->inode)->fh) == 0) { 1021 - freeme = igrab(delegation->inode); 1022 - if (freeme && nfs_sb_active(freeme->i_sb)) 1023 - res = freeme; 1022 + if (nfs_sb_active(server->super)) { 1023 + freeme = server->super; 1024 + res = igrab(delegation->inode); 1025 + } 1024 1026 spin_unlock(&delegation->lock); 1025 1027 if (res != NULL) 1026 1028 return res; 1027 1029 if (freeme) { 1028 1030 rcu_read_unlock(); 1029 - iput(freeme); 1031 + nfs_sb_deactive(freeme); 1030 1032 rcu_read_lock(); 1031 1033 } 1032 1034 return ERR_PTR(-EAGAIN);
+30 -8
fs/nfs/internal.h
··· 136 136 } clone_data; 137 137 }; 138 138 139 - #define nfs_errorf(fc, fmt, ...) errorf(fc, fmt, ## __VA_ARGS__) 140 - #define nfs_invalf(fc, fmt, ...) invalf(fc, fmt, ## __VA_ARGS__) 141 - #define nfs_warnf(fc, fmt, ...) warnf(fc, fmt, ## __VA_ARGS__) 139 + #define nfs_errorf(fc, fmt, ...) ((fc)->log.log ? \ 140 + errorf(fc, fmt, ## __VA_ARGS__) : \ 141 + ({ dprintk(fmt "\n", ## __VA_ARGS__); })) 142 + 143 + #define nfs_ferrorf(fc, fac, fmt, ...) ((fc)->log.log ? \ 144 + errorf(fc, fmt, ## __VA_ARGS__) : \ 145 + ({ dfprintk(fac, fmt "\n", ## __VA_ARGS__); })) 146 + 147 + #define nfs_invalf(fc, fmt, ...) ((fc)->log.log ? \ 148 + invalf(fc, fmt, ## __VA_ARGS__) : \ 149 + ({ dprintk(fmt "\n", ## __VA_ARGS__); -EINVAL; })) 150 + 151 + #define nfs_finvalf(fc, fac, fmt, ...) ((fc)->log.log ? \ 152 + invalf(fc, fmt, ## __VA_ARGS__) : \ 153 + ({ dfprintk(fac, fmt "\n", ## __VA_ARGS__); -EINVAL; })) 154 + 155 + #define nfs_warnf(fc, fmt, ...) ((fc)->log.log ? \ 156 + warnf(fc, fmt, ## __VA_ARGS__) : \ 157 + ({ dprintk(fmt "\n", ## __VA_ARGS__); })) 158 + 159 + #define nfs_fwarnf(fc, fac, fmt, ...) ((fc)->log.log ? \ 160 + warnf(fc, fmt, ## __VA_ARGS__) : \ 161 + ({ dfprintk(fac, fmt "\n", ## __VA_ARGS__); })) 142 162 143 163 static inline struct nfs_fs_context *nfs_fc2context(const struct fs_context *fc) 144 164 { ··· 599 579 600 580 static inline struct inode *nfs_igrab_and_active(struct inode *inode) 601 581 { 602 - inode = igrab(inode); 603 - if (inode != NULL && !nfs_sb_active(inode->i_sb)) { 604 - iput(inode); 605 - inode = NULL; 582 + struct super_block *sb = inode->i_sb; 583 + 584 + if (sb && nfs_sb_active(sb)) { 585 + if (igrab(inode)) 586 + return inode; 587 + nfs_sb_deactive(sb); 606 588 } 607 - return inode; 589 + return NULL; 608 590 } 609 591 610 592 static inline void nfs_iput_and_deactive(struct inode *inode)
+11 -17
fs/nfs/nfs4proc.c
··· 3536 3536 trace_nfs4_close(state, &calldata->arg, &calldata->res, task->tk_status); 3537 3537 3538 3538 /* Handle Layoutreturn errors */ 3539 - if (pnfs_roc_done(task, calldata->inode, 3540 - &calldata->arg.lr_args, 3541 - &calldata->res.lr_res, 3542 - &calldata->res.lr_ret) == -EAGAIN) 3539 + if (pnfs_roc_done(task, &calldata->arg.lr_args, &calldata->res.lr_res, 3540 + &calldata->res.lr_ret) == -EAGAIN) 3543 3541 goto out_restart; 3544 3542 3545 3543 /* hmm. we are done with the inode, and in the process of freeing ··· 6382 6384 trace_nfs4_delegreturn_exit(&data->args, &data->res, task->tk_status); 6383 6385 6384 6386 /* Handle Layoutreturn errors */ 6385 - if (pnfs_roc_done(task, data->inode, 6386 - &data->args.lr_args, 6387 - &data->res.lr_res, 6388 - &data->res.lr_ret) == -EAGAIN) 6387 + if (pnfs_roc_done(task, &data->args.lr_args, &data->res.lr_res, 6388 + &data->res.lr_ret) == -EAGAIN) 6389 6389 goto out_restart; 6390 6390 6391 6391 switch (task->tk_status) { ··· 6437 6441 struct nfs4_delegreturndata *data = calldata; 6438 6442 struct inode *inode = data->inode; 6439 6443 6444 + if (data->lr.roc) 6445 + pnfs_roc_release(&data->lr.arg, &data->lr.res, 6446 + data->res.lr_ret); 6440 6447 if (inode) { 6441 - if (data->lr.roc) 6442 - pnfs_roc_release(&data->lr.arg, &data->lr.res, 6443 - data->res.lr_ret); 6444 6448 nfs_post_op_update_inode_force_wcc(inode, &data->fattr); 6445 6449 nfs_iput_and_deactive(inode); 6446 6450 } ··· 6516 6520 nfs_fattr_init(data->res.fattr); 6517 6521 data->timestamp = jiffies; 6518 6522 data->rpc_status = 0; 6519 - data->lr.roc = pnfs_roc(inode, &data->lr.arg, &data->lr.res, cred); 6520 6523 data->inode = nfs_igrab_and_active(inode); 6521 - if (data->inode) { 6524 + if (data->inode || issync) { 6525 + data->lr.roc = pnfs_roc(inode, &data->lr.arg, &data->lr.res, 6526 + cred); 6522 6527 if (data->lr.roc) { 6523 6528 data->args.lr_args = &data->lr.arg; 6524 6529 data->res.lr_res = &data->lr.res; 6525 6530 } 6526 - } else if 
(data->lr.roc) { 6527 - pnfs_roc_release(&data->lr.arg, &data->lr.res, 0); 6528 - data->lr.roc = false; 6529 6531 } 6530 6532 6531 6533 task_setup_data.callback_data = data; ··· 7105 7111 data->arg.new_lock_owner, ret); 7106 7112 } else 7107 7113 data->cancelled = true; 7114 + trace_nfs4_set_lock(fl, state, &data->res.stateid, cmd, ret); 7108 7115 rpc_put_task(task); 7109 7116 dprintk("%s: done, ret = %d!\n", __func__, ret); 7110 - trace_nfs4_set_lock(fl, state, &data->res.stateid, cmd, ret); 7111 7117 return ret; 7112 7118 } 7113 7119
+2 -2
fs/nfs/nfs4super.c
··· 227 227 fc, ctx->nfs_server.hostname, 228 228 ctx->nfs_server.export_path); 229 229 if (err) { 230 - nfs_errorf(fc, "NFS4: Couldn't follow remote path"); 230 + nfs_ferrorf(fc, MOUNT, "NFS4: Couldn't follow remote path"); 231 231 dfprintk(MOUNT, "<-- nfs4_try_get_tree() = %d [error]\n", err); 232 232 } else { 233 233 dfprintk(MOUNT, "<-- nfs4_try_get_tree() = 0\n"); ··· 250 250 fc, ctx->nfs_server.hostname, 251 251 ctx->nfs_server.export_path); 252 252 if (err) { 253 - nfs_errorf(fc, "NFS4: Couldn't follow remote path"); 253 + nfs_ferrorf(fc, MOUNT, "NFS4: Couldn't follow remote path"); 254 254 dfprintk(MOUNT, "<-- nfs4_get_referral_tree() = %d [error]\n", err); 255 255 } else { 256 256 dfprintk(MOUNT, "<-- nfs4_get_referral_tree() = 0\n");
+35 -32
fs/nfs/pnfs.c
··· 1152 1152 LIST_HEAD(freeme); 1153 1153 1154 1154 spin_lock(&inode->i_lock); 1155 - if (!pnfs_layout_is_valid(lo) || !arg_stateid || 1155 + if (!pnfs_layout_is_valid(lo) || 1156 1156 !nfs4_stateid_match_other(&lo->plh_stateid, arg_stateid)) 1157 1157 goto out_unlock; 1158 1158 if (stateid) { ··· 1509 1509 return false; 1510 1510 } 1511 1511 1512 - int pnfs_roc_done(struct rpc_task *task, struct inode *inode, 1513 - struct nfs4_layoutreturn_args **argpp, 1514 - struct nfs4_layoutreturn_res **respp, 1515 - int *ret) 1512 + int pnfs_roc_done(struct rpc_task *task, struct nfs4_layoutreturn_args **argpp, 1513 + struct nfs4_layoutreturn_res **respp, int *ret) 1516 1514 { 1517 1515 struct nfs4_layoutreturn_args *arg = *argpp; 1518 1516 int retval = -EAGAIN; ··· 1543 1545 return 0; 1544 1546 case -NFS4ERR_OLD_STATEID: 1545 1547 if (!nfs4_layout_refresh_old_stateid(&arg->stateid, 1546 - &arg->range, inode)) 1548 + &arg->range, arg->inode)) 1547 1549 break; 1548 1550 *ret = -NFS4ERR_NOMATCHING_LAYOUT; 1549 1551 return -EAGAIN; ··· 1558 1560 int ret) 1559 1561 { 1560 1562 struct pnfs_layout_hdr *lo = args->layout; 1561 - const nfs4_stateid *arg_stateid = NULL; 1563 + struct inode *inode = args->inode; 1562 1564 const nfs4_stateid *res_stateid = NULL; 1563 1565 struct nfs4_xdr_opaque_data *ld_private = args->ld_private; 1564 1566 1565 1567 switch (ret) { 1566 1568 case -NFS4ERR_NOMATCHING_LAYOUT: 1569 + spin_lock(&inode->i_lock); 1570 + if (pnfs_layout_is_valid(lo) && 1571 + nfs4_stateid_match_other(&args->stateid, &lo->plh_stateid)) 1572 + pnfs_set_plh_return_info(lo, args->range.iomode, 0); 1573 + pnfs_clear_layoutreturn_waitbit(lo); 1574 + spin_unlock(&inode->i_lock); 1567 1575 break; 1568 1576 case 0: 1569 1577 if (res->lrs_present) 1570 1578 res_stateid = &res->stateid; 1571 1579 fallthrough; 1572 1580 default: 1573 - arg_stateid = &args->stateid; 1581 + pnfs_layoutreturn_free_lsegs(lo, &args->stateid, &args->range, 1582 + res_stateid); 1574 1583 } 1575 1584 
trace_nfs4_layoutreturn_on_close(args->inode, &args->stateid, ret); 1576 - pnfs_layoutreturn_free_lsegs(lo, arg_stateid, &args->range, 1577 - res_stateid); 1578 1585 if (ld_private && ld_private->ops && ld_private->ops->free) 1579 1586 ld_private->ops->free(ld_private); 1580 1587 pnfs_put_layout_hdr(lo); ··· 2018 2015 goto lookup_again; 2019 2016 } 2020 2017 2018 + /* 2019 + * Because we free lsegs when sending LAYOUTRETURN, we need to wait 2020 + * for LAYOUTRETURN. 2021 + */ 2022 + if (test_bit(NFS_LAYOUT_RETURN, &lo->plh_flags)) { 2023 + spin_unlock(&ino->i_lock); 2024 + dprintk("%s wait for layoutreturn\n", __func__); 2025 + lseg = ERR_PTR(pnfs_prepare_to_retry_layoutget(lo)); 2026 + if (!IS_ERR(lseg)) { 2027 + pnfs_put_layout_hdr(lo); 2028 + dprintk("%s retrying\n", __func__); 2029 + trace_pnfs_update_layout(ino, pos, count, iomode, lo, 2030 + lseg, 2031 + PNFS_UPDATE_LAYOUT_RETRY); 2032 + goto lookup_again; 2033 + } 2034 + trace_pnfs_update_layout(ino, pos, count, iomode, lo, lseg, 2035 + PNFS_UPDATE_LAYOUT_RETURN); 2036 + goto out_put_layout_hdr; 2037 + } 2038 + 2021 2039 lseg = pnfs_find_lseg(lo, &arg, strict_iomode); 2022 2040 if (lseg) { 2023 2041 trace_pnfs_update_layout(ino, pos, count, iomode, lo, lseg, ··· 2089 2065 spin_lock(&ino->i_lock); 2090 2066 } else { 2091 2067 nfs4_stateid_copy(&stateid, &lo->plh_stateid); 2092 - } 2093 - 2094 - /* 2095 - * Because we free lsegs before sending LAYOUTRETURN, we need to wait 2096 - * for LAYOUTRETURN even if first is true. 
2097 - */ 2098 - if (test_bit(NFS_LAYOUT_RETURN, &lo->plh_flags)) { 2099 - spin_unlock(&ino->i_lock); 2100 - dprintk("%s wait for layoutreturn\n", __func__); 2101 - lseg = ERR_PTR(pnfs_prepare_to_retry_layoutget(lo)); 2102 - if (!IS_ERR(lseg)) { 2103 - if (first) 2104 - pnfs_clear_first_layoutget(lo); 2105 - pnfs_put_layout_hdr(lo); 2106 - dprintk("%s retrying\n", __func__); 2107 - trace_pnfs_update_layout(ino, pos, count, iomode, lo, 2108 - lseg, PNFS_UPDATE_LAYOUT_RETRY); 2109 - goto lookup_again; 2110 - } 2111 - trace_pnfs_update_layout(ino, pos, count, iomode, lo, lseg, 2112 - PNFS_UPDATE_LAYOUT_RETURN); 2113 - goto out_put_layout_hdr; 2114 2068 } 2115 2069 2116 2070 if (pnfs_layoutgets_blocked(lo)) { ··· 2244 2242 &rng, GFP_KERNEL); 2245 2243 if (!lgp) { 2246 2244 pnfs_clear_first_layoutget(lo); 2245 + nfs_layoutget_end(lo); 2247 2246 pnfs_put_layout_hdr(lo); 2248 2247 return; 2249 2248 }
+3 -5
fs/nfs/pnfs.h
··· 297 297 struct nfs4_layoutreturn_args *args, 298 298 struct nfs4_layoutreturn_res *res, 299 299 const struct cred *cred); 300 - int pnfs_roc_done(struct rpc_task *task, struct inode *inode, 301 - struct nfs4_layoutreturn_args **argpp, 302 - struct nfs4_layoutreturn_res **respp, 303 - int *ret); 300 + int pnfs_roc_done(struct rpc_task *task, struct nfs4_layoutreturn_args **argpp, 301 + struct nfs4_layoutreturn_res **respp, int *ret); 304 302 void pnfs_roc_release(struct nfs4_layoutreturn_args *args, 305 303 struct nfs4_layoutreturn_res *res, 306 304 int ret); ··· 770 772 } 771 773 772 774 static inline int 773 - pnfs_roc_done(struct rpc_task *task, struct inode *inode, 775 + pnfs_roc_done(struct rpc_task *task, 774 776 struct nfs4_layoutreturn_args **argpp, 775 777 struct nfs4_layoutreturn_res **respp, 776 778 int *ret)
+10 -12
fs/nfs/pnfs_nfs.c
··· 78 78 pnfs_generic_clear_request_commit(struct nfs_page *req, 79 79 struct nfs_commit_info *cinfo) 80 80 { 81 - struct pnfs_layout_segment *freeme = NULL; 81 + struct pnfs_commit_bucket *bucket = NULL; 82 82 83 83 if (!test_and_clear_bit(PG_COMMIT_TO_DS, &req->wb_flags)) 84 84 goto out; 85 85 cinfo->ds->nwritten--; 86 - if (list_is_singular(&req->wb_list)) { 87 - struct pnfs_commit_bucket *bucket; 88 - 86 + if (list_is_singular(&req->wb_list)) 89 87 bucket = list_first_entry(&req->wb_list, 90 - struct pnfs_commit_bucket, 91 - written); 92 - freeme = pnfs_free_bucket_lseg(bucket); 93 - } 88 + struct pnfs_commit_bucket, written); 94 89 out: 95 90 nfs_request_remove_commit_list(req, cinfo); 96 - pnfs_put_lseg(freeme); 91 + if (bucket) 92 + pnfs_put_lseg(pnfs_free_bucket_lseg(bucket)); 97 93 } 98 94 EXPORT_SYMBOL_GPL(pnfs_generic_clear_request_commit); 99 95 ··· 403 407 struct pnfs_commit_bucket *bucket, 404 408 struct nfs_commit_info *cinfo) 405 409 { 410 + struct pnfs_layout_segment *lseg; 406 411 struct list_head *pos; 407 412 408 413 list_for_each(pos, &bucket->committing) 409 414 cinfo->ds->ncommitting--; 410 415 list_splice_init(&bucket->committing, head); 411 - return pnfs_free_bucket_lseg(bucket); 416 + lseg = pnfs_free_bucket_lseg(bucket); 417 + if (!lseg) 418 + lseg = pnfs_get_lseg(bucket->lseg); 419 + return lseg; 412 420 } 413 421 414 422 static struct nfs_commit_data * ··· 424 424 if (!data) 425 425 return NULL; 426 426 data->lseg = pnfs_bucket_get_committing(&data->pages, bucket, cinfo); 427 - if (!data->lseg) 428 - data->lseg = pnfs_get_lseg(bucket->lseg); 429 427 return data; 430 428 } 431 429
+5
fs/nfsd/nfs4proc.c
··· 50 50 #include "pnfs.h" 51 51 #include "trace.h" 52 52 53 + static bool inter_copy_offload_enable; 54 + module_param(inter_copy_offload_enable, bool, 0644); 55 + MODULE_PARM_DESC(inter_copy_offload_enable, 56 + "Enable inter server to server copy offload. Default: false"); 57 + 53 58 #ifdef CONFIG_NFSD_V4_SECURITY_LABEL 54 59 #include <linux/security.h> 55 60
+36 -20
fs/nfsd/nfs4xdr.c
··· 147 147 return p; 148 148 } 149 149 150 + static void * 151 + svcxdr_savemem(struct nfsd4_compoundargs *argp, __be32 *p, u32 len) 152 + { 153 + __be32 *tmp; 154 + 155 + /* 156 + * The location of the decoded data item is stable, 157 + * so @p is OK to use. This is the common case. 158 + */ 159 + if (p != argp->xdr->scratch.iov_base) 160 + return p; 161 + 162 + tmp = svcxdr_tmpalloc(argp, len); 163 + if (!tmp) 164 + return NULL; 165 + memcpy(tmp, p, len); 166 + return tmp; 167 + } 168 + 150 169 /* 151 170 * NFSv4 basic data type decoders 152 171 */ ··· 202 183 p = xdr_inline_decode(argp->xdr, len); 203 184 if (!p) 204 185 return nfserr_bad_xdr; 205 - o->data = svcxdr_tmpalloc(argp, len); 186 + o->data = svcxdr_savemem(argp, p, len); 206 187 if (!o->data) 207 188 return nfserr_jukebox; 208 189 o->len = len; 209 - memcpy(o->data, p, len); 210 190 211 191 return nfs_ok; 212 192 } ··· 223 205 status = check_filename((char *)p, *lenp); 224 206 if (status) 225 207 return status; 226 - *namp = svcxdr_tmpalloc(argp, *lenp); 208 + *namp = svcxdr_savemem(argp, p, *lenp); 227 209 if (!*namp) 228 210 return nfserr_jukebox; 229 - memcpy(*namp, p, *lenp); 230 211 231 212 return nfs_ok; 232 213 } ··· 1217 1200 p = xdr_inline_decode(argp->xdr, putfh->pf_fhlen); 1218 1201 if (!p) 1219 1202 return nfserr_bad_xdr; 1220 - putfh->pf_fhval = svcxdr_tmpalloc(argp, putfh->pf_fhlen); 1203 + putfh->pf_fhval = svcxdr_savemem(argp, p, putfh->pf_fhlen); 1221 1204 if (!putfh->pf_fhval) 1222 1205 return nfserr_jukebox; 1223 - memcpy(putfh->pf_fhval, p, putfh->pf_fhlen); 1224 1206 1225 1207 return nfs_ok; 1226 1208 } ··· 1334 1318 p = xdr_inline_decode(argp->xdr, setclientid->se_callback_netid_len); 1335 1319 if (!p) 1336 1320 return nfserr_bad_xdr; 1337 - setclientid->se_callback_netid_val = svcxdr_tmpalloc(argp, 1321 + setclientid->se_callback_netid_val = svcxdr_savemem(argp, p, 1338 1322 setclientid->se_callback_netid_len); 1339 1323 if (!setclientid->se_callback_netid_val) 1340 1324 return 
nfserr_jukebox; 1341 - memcpy(setclientid->se_callback_netid_val, p, 1342 - setclientid->se_callback_netid_len); 1343 1325 1344 1326 if (xdr_stream_decode_u32(argp->xdr, &setclientid->se_callback_addr_len) < 0) 1345 1327 return nfserr_bad_xdr; 1346 1328 p = xdr_inline_decode(argp->xdr, setclientid->se_callback_addr_len); 1347 1329 if (!p) 1348 1330 return nfserr_bad_xdr; 1349 - setclientid->se_callback_addr_val = svcxdr_tmpalloc(argp, 1331 + setclientid->se_callback_addr_val = svcxdr_savemem(argp, p, 1350 1332 setclientid->se_callback_addr_len); 1351 1333 if (!setclientid->se_callback_addr_val) 1352 1334 return nfserr_jukebox; 1353 - memcpy(setclientid->se_callback_addr_val, p, 1354 - setclientid->se_callback_addr_len); 1355 1335 if (xdr_stream_decode_u32(argp->xdr, &setclientid->se_callback_ident) < 0) 1356 1336 return nfserr_bad_xdr; 1357 1337 ··· 1387 1375 p = xdr_inline_decode(argp->xdr, verify->ve_attrlen); 1388 1376 if (!p) 1389 1377 return nfserr_bad_xdr; 1390 - verify->ve_attrval = svcxdr_tmpalloc(argp, verify->ve_attrlen); 1378 + verify->ve_attrval = svcxdr_savemem(argp, p, verify->ve_attrlen); 1391 1379 if (!verify->ve_attrval) 1392 1380 return nfserr_jukebox; 1393 - memcpy(verify->ve_attrval, p, verify->ve_attrlen); 1394 1381 1395 1382 return nfs_ok; 1396 1383 } ··· 2344 2333 p = xdr_inline_decode(argp->xdr, argp->taglen); 2345 2334 if (!p) 2346 2335 return 0; 2347 - argp->tag = svcxdr_tmpalloc(argp, argp->taglen); 2336 + argp->tag = svcxdr_savemem(argp, p, argp->taglen); 2348 2337 if (!argp->tag) 2349 2338 return 0; 2350 - memcpy(argp->tag, p, argp->taglen); 2351 2339 max_reply += xdr_align_size(argp->taglen); 2352 2340 } 2353 2341 ··· 4766 4756 resp->rqstp->rq_vec, read->rd_vlen, maxcount, eof); 4767 4757 if (nfserr) 4768 4758 return nfserr; 4759 + xdr_truncate_encode(xdr, starting_len + 16 + xdr_align_size(*maxcount)); 4769 4760 4770 4761 tmp = htonl(NFS4_CONTENT_DATA); 4771 4762 write_bytes_to_xdr_buf(xdr->buf, starting_len, &tmp, 4); ··· 4774 4763 
write_bytes_to_xdr_buf(xdr->buf, starting_len + 4, &tmp64, 8); 4775 4764 tmp = htonl(*maxcount); 4776 4765 write_bytes_to_xdr_buf(xdr->buf, starting_len + 12, &tmp, 4); 4766 + 4767 + tmp = xdr_zero; 4768 + write_bytes_to_xdr_buf(xdr->buf, starting_len + 16 + *maxcount, &tmp, 4769 + xdr_pad_size(*maxcount)); 4777 4770 return nfs_ok; 4778 4771 } 4779 4772 ··· 4870 4855 if (nfserr && segments == 0) 4871 4856 xdr_truncate_encode(xdr, starting_len); 4872 4857 else { 4858 + if (nfserr) { 4859 + xdr_truncate_encode(xdr, last_segment); 4860 + nfserr = nfs_ok; 4861 + eof = 0; 4862 + } 4873 4863 tmp = htonl(eof); 4874 4864 write_bytes_to_xdr_buf(xdr->buf, starting_len, &tmp, 4); 4875 4865 tmp = htonl(segments); 4876 4866 write_bytes_to_xdr_buf(xdr->buf, starting_len + 4, &tmp, 4); 4877 - if (nfserr) { 4878 - xdr_truncate_encode(xdr, last_segment); 4879 - nfserr = nfs_ok; 4880 - } 4881 4867 } 4882 4868 4883 4869 return nfserr;
-6
fs/nfsd/nfssvc.c
··· 33 33 34 34 #define NFSDDBG_FACILITY NFSDDBG_SVC 35 35 36 - bool inter_copy_offload_enable; 37 - EXPORT_SYMBOL_GPL(inter_copy_offload_enable); 38 - module_param(inter_copy_offload_enable, bool, 0644); 39 - MODULE_PARM_DESC(inter_copy_offload_enable, 40 - "Enable inter server to server copy offload. Default: false"); 41 - 42 36 extern struct svc_program nfsd_program; 43 37 static int nfsd(void *vrqstp); 44 38 #if defined(CONFIG_NFSD_V2_ACL) || defined(CONFIG_NFSD_V3_ACL)
-1
fs/nfsd/xdr4.h
··· 568 568 struct nfs_fh c_fh; 569 569 nfs4_stateid stateid; 570 570 }; 571 - extern bool inter_copy_offload_enable; 572 571 573 572 struct nfsd4_seek { 574 573 /* request */
+30 -23
fs/proc/task_mmu.c
··· 1035 1035 }; 1036 1036 1037 1037 #ifdef CONFIG_MEM_SOFT_DIRTY 1038 + 1039 + #define is_cow_mapping(flags) (((flags) & (VM_SHARED | VM_MAYWRITE)) == VM_MAYWRITE) 1040 + 1041 + static inline bool pte_is_pinned(struct vm_area_struct *vma, unsigned long addr, pte_t pte) 1042 + { 1043 + struct page *page; 1044 + 1045 + if (!pte_write(pte)) 1046 + return false; 1047 + if (!is_cow_mapping(vma->vm_flags)) 1048 + return false; 1049 + if (likely(!atomic_read(&vma->vm_mm->has_pinned))) 1050 + return false; 1051 + page = vm_normal_page(vma, addr, pte); 1052 + if (!page) 1053 + return false; 1054 + return page_maybe_dma_pinned(page); 1055 + } 1056 + 1038 1057 static inline void clear_soft_dirty(struct vm_area_struct *vma, 1039 1058 unsigned long addr, pte_t *pte) 1040 1059 { ··· 1068 1049 if (pte_present(ptent)) { 1069 1050 pte_t old_pte; 1070 1051 1052 + if (pte_is_pinned(vma, addr, ptent)) 1053 + return; 1071 1054 old_pte = ptep_modify_prot_start(vma, addr, pte); 1072 1055 ptent = pte_wrprotect(old_pte); 1073 1056 ptent = pte_clear_soft_dirty(ptent); ··· 1236 1215 .type = type, 1237 1216 }; 1238 1217 1218 + if (mmap_write_lock_killable(mm)) { 1219 + count = -EINTR; 1220 + goto out_mm; 1221 + } 1239 1222 if (type == CLEAR_REFS_MM_HIWATER_RSS) { 1240 - if (mmap_write_lock_killable(mm)) { 1241 - count = -EINTR; 1242 - goto out_mm; 1243 - } 1244 - 1245 1223 /* 1246 1224 * Writing 5 to /proc/pid/clear_refs resets the peak 1247 1225 * resident set size to this mm's current rss value. 
1248 1226 */ 1249 1227 reset_mm_hiwater_rss(mm); 1250 - mmap_write_unlock(mm); 1251 - goto out_mm; 1228 + goto out_unlock; 1252 1229 } 1253 1230 1254 - if (mmap_read_lock_killable(mm)) { 1255 - count = -EINTR; 1256 - goto out_mm; 1257 - } 1258 1231 tlb_gather_mmu(&tlb, mm, 0, -1); 1259 1232 if (type == CLEAR_REFS_SOFT_DIRTY) { 1260 1233 for (vma = mm->mmap; vma; vma = vma->vm_next) { 1261 1234 if (!(vma->vm_flags & VM_SOFTDIRTY)) 1262 1235 continue; 1263 - mmap_read_unlock(mm); 1264 - if (mmap_write_lock_killable(mm)) { 1265 - count = -EINTR; 1266 - goto out_mm; 1267 - } 1268 - for (vma = mm->mmap; vma; vma = vma->vm_next) { 1269 - vma->vm_flags &= ~VM_SOFTDIRTY; 1270 - vma_set_page_prot(vma); 1271 - } 1272 - mmap_write_downgrade(mm); 1273 - break; 1236 + vma->vm_flags &= ~VM_SOFTDIRTY; 1237 + vma_set_page_prot(vma); 1274 1238 } 1275 1239 1276 1240 mmu_notifier_range_init(&range, MMU_NOTIFY_SOFT_DIRTY, ··· 1267 1261 if (type == CLEAR_REFS_SOFT_DIRTY) 1268 1262 mmu_notifier_invalidate_range_end(&range); 1269 1263 tlb_finish_mmu(&tlb, 0, -1); 1270 - mmap_read_unlock(mm); 1264 + out_unlock: 1265 + mmap_write_unlock(mm); 1271 1266 out_mm: 1272 1267 mmput(mm); 1273 1268 }
+3 -3
include/asm-generic/bitops/atomic.h
··· 11 11 * See Documentation/atomic_bitops.txt for details. 12 12 */ 13 13 14 - static inline void set_bit(unsigned int nr, volatile unsigned long *p) 14 + static __always_inline void set_bit(unsigned int nr, volatile unsigned long *p) 15 15 { 16 16 p += BIT_WORD(nr); 17 17 atomic_long_or(BIT_MASK(nr), (atomic_long_t *)p); 18 18 } 19 19 20 - static inline void clear_bit(unsigned int nr, volatile unsigned long *p) 20 + static __always_inline void clear_bit(unsigned int nr, volatile unsigned long *p) 21 21 { 22 22 p += BIT_WORD(nr); 23 23 atomic_long_andnot(BIT_MASK(nr), (atomic_long_t *)p); 24 24 } 25 25 26 - static inline void change_bit(unsigned int nr, volatile unsigned long *p) 26 + static __always_inline void change_bit(unsigned int nr, volatile unsigned long *p) 27 27 { 28 28 p += BIT_WORD(nr); 29 29 atomic_long_xor(BIT_MASK(nr), (atomic_long_t *)p);
+1 -1
include/asm-generic/export.h
··· 33 33 */ 34 34 35 35 .macro ___EXPORT_SYMBOL name,val,sec 36 - #ifdef CONFIG_MODULES 36 + #if defined(CONFIG_MODULES) && !defined(__DISABLE_EXPORTS) 37 37 .section ___ksymtab\sec+\name,"a" 38 38 .balign KSYM_ALIGN 39 39 __ksymtab_\name:
+31
include/linux/arm-smccc.h
··· 102 102 ARM_SMCCC_OWNER_STANDARD_HYP, \ 103 103 0x21) 104 104 105 + /* TRNG entropy source calls (defined by ARM DEN0098) */ 106 + #define ARM_SMCCC_TRNG_VERSION \ 107 + ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \ 108 + ARM_SMCCC_SMC_32, \ 109 + ARM_SMCCC_OWNER_STANDARD, \ 110 + 0x50) 111 + 112 + #define ARM_SMCCC_TRNG_FEATURES \ 113 + ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \ 114 + ARM_SMCCC_SMC_32, \ 115 + ARM_SMCCC_OWNER_STANDARD, \ 116 + 0x51) 117 + 118 + #define ARM_SMCCC_TRNG_GET_UUID \ 119 + ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \ 120 + ARM_SMCCC_SMC_32, \ 121 + ARM_SMCCC_OWNER_STANDARD, \ 122 + 0x52) 123 + 124 + #define ARM_SMCCC_TRNG_RND32 \ 125 + ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \ 126 + ARM_SMCCC_SMC_32, \ 127 + ARM_SMCCC_OWNER_STANDARD, \ 128 + 0x53) 129 + 130 + #define ARM_SMCCC_TRNG_RND64 \ 131 + ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \ 132 + ARM_SMCCC_SMC_64, \ 133 + ARM_SMCCC_OWNER_STANDARD, \ 134 + 0x53) 135 + 105 136 /* 106 137 * Return codes defined in ARM DEN 0070A 107 138 * ARM DEN 0070A is now merged/consolidated into ARM DEN 0028 C
+6
include/linux/compiler-gcc.h
··· 13 13 /* https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145 */ 14 14 #if GCC_VERSION < 40900 15 15 # error Sorry, your version of GCC is too old - please use 4.9 or newer. 16 + #elif defined(CONFIG_ARM64) && GCC_VERSION < 50100 17 + /* 18 + * https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63293 19 + * https://lore.kernel.org/r/20210107111841.GN1551@shell.armlinux.org.uk 20 + */ 21 + # error Sorry, your version of GCC is too old - please use 5.1 or newer. 16 22 #endif 17 23 18 24 /*
+1
include/linux/dm-bufio.h
··· 150 150 151 151 unsigned dm_bufio_get_block_size(struct dm_bufio_client *c); 152 152 sector_t dm_bufio_get_device_size(struct dm_bufio_client *c); 153 + struct dm_io_client *dm_bufio_get_dm_io_client(struct dm_bufio_client *c); 153 154 sector_t dm_bufio_get_block_number(struct dm_buffer *b); 154 155 void *dm_bufio_get_block_data(struct dm_buffer *b); 155 156 void *dm_bufio_get_aux_data(struct dm_buffer *b);
+5 -1
include/linux/kasan.h
··· 35 35 #define KASAN_SHADOW_INIT 0 36 36 #endif 37 37 38 + #ifndef PTE_HWTABLE_PTRS 39 + #define PTE_HWTABLE_PTRS 0 40 + #endif 41 + 38 42 extern unsigned char kasan_early_shadow_page[PAGE_SIZE]; 39 - extern pte_t kasan_early_shadow_pte[PTRS_PER_PTE]; 43 + extern pte_t kasan_early_shadow_pte[PTRS_PER_PTE + PTE_HWTABLE_PTRS]; 40 44 extern pmd_t kasan_early_shadow_pmd[PTRS_PER_PMD]; 41 45 extern pud_t kasan_early_shadow_pud[PTRS_PER_PUD]; 42 46 extern p4d_t kasan_early_shadow_p4d[MAX_PTRS_PER_P4D];
+1 -1
include/linux/memcontrol.h
··· 665 665 { 666 666 struct mem_cgroup *memcg = page_memcg(page); 667 667 668 - VM_WARN_ON_ONCE_PAGE(!memcg, page); 668 + VM_WARN_ON_ONCE_PAGE(!memcg && !mem_cgroup_disabled(), page); 669 669 return mem_cgroup_lruvec(memcg, pgdat); 670 670 } 671 671
-2
include/linux/perf/arm_pmu.h
··· 163 163 static inline int arm_pmu_acpi_probe(armpmu_init_fn init_fn) { return 0; } 164 164 #endif 165 165 166 - bool arm_pmu_irq_is_nmi(void); 167 - 168 166 /* Internal functions only for core arm_pmu code */ 169 167 struct arm_pmu *armpmu_alloc(void); 170 168 struct arm_pmu *armpmu_alloc_atomic(void);
+2 -1
include/linux/skbuff.h
··· 366 366 static inline bool skb_frag_must_loop(struct page *p) 367 367 { 368 368 #if defined(CONFIG_HIGHMEM) 369 - if (PageHighMem(p)) 369 + if (IS_ENABLED(CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP) || PageHighMem(p)) 370 370 return true; 371 371 #endif 372 372 return false; ··· 1203 1203 struct sk_buff *root_skb; 1204 1204 struct sk_buff *cur_skb; 1205 1205 __u8 *frag_data; 1206 + __u32 frag_off; 1206 1207 }; 1207 1208 1208 1209 void skb_prepare_seq_read(struct sk_buff *skb, unsigned int from,
+1 -1
include/xen/xenbus.h
··· 192 192 193 193 struct work_struct; 194 194 195 - void xenbus_probe(struct work_struct *); 195 + void xenbus_probe(void); 196 196 197 197 #define XENBUS_IS_ERR_READ(str) ({ \ 198 198 if (!IS_ERR(str) && strlen(str) == 0) { \
+1 -1
kernel/trace/Kconfig
··· 538 538 config KPROBE_EVENTS_ON_NOTRACE 539 539 bool "Do NOT protect notrace function from kprobe events" 540 540 depends on KPROBE_EVENTS 541 - depends on KPROBES_ON_FTRACE 541 + depends on DYNAMIC_FTRACE 542 542 default n 543 543 help 544 544 This is only for the developers who want to debug ftrace itself
+1 -1
kernel/trace/trace_kprobe.c
··· 434 434 return 0; 435 435 } 436 436 437 - #if defined(CONFIG_KPROBES_ON_FTRACE) && \ 437 + #if defined(CONFIG_DYNAMIC_FTRACE) && \ 438 438 !defined(CONFIG_KPROBE_EVENTS_ON_NOTRACE) 439 439 static bool __within_notrace_func(unsigned long addr) 440 440 {
+1 -1
lib/iov_iter.c
··· 1658 1658 (const struct compat_iovec __user *)uvec; 1659 1659 int ret = -EFAULT, i; 1660 1660 1661 - if (!user_access_begin(uvec, nr_segs * sizeof(*uvec))) 1661 + if (!user_access_begin(uiov, nr_segs * sizeof(*uiov))) 1662 1662 return -EFAULT; 1663 1663 1664 1664 for (i = 0; i < nr_segs; i++) {
+1 -1
mm/hugetlb.c
··· 4371 4371 * So we need to block hugepage fault by PG_hwpoison bit check. 4372 4372 */ 4373 4373 if (unlikely(PageHWPoison(page))) { 4374 - ret = VM_FAULT_HWPOISON | 4374 + ret = VM_FAULT_HWPOISON_LARGE | 4375 4375 VM_FAULT_SET_HINDEX(hstate_index(h)); 4376 4376 goto backout_unlocked; 4377 4377 }
+2 -1
mm/kasan/init.c
··· 64 64 return false; 65 65 } 66 66 #endif 67 - pte_t kasan_early_shadow_pte[PTRS_PER_PTE] __page_aligned_bss; 67 + pte_t kasan_early_shadow_pte[PTRS_PER_PTE + PTE_HWTABLE_PTRS] 68 + __page_aligned_bss; 68 69 69 70 static inline bool kasan_pte_table(pmd_t pmd) 70 71 {
+1 -1
mm/memory-failure.c
··· 1940 1940 goto retry; 1941 1941 } 1942 1942 } else if (ret == -EIO) { 1943 - pr_info("%s: %#lx: unknown page type: %lx (%pGP)\n", 1943 + pr_info("%s: %#lx: unknown page type: %lx (%pGp)\n", 1944 1944 __func__, pfn, page->flags, &page->flags); 1945 1945 } 1946 1946
+1 -1
mm/mempolicy.c
··· 1111 1111 const nodemask_t *to, int flags) 1112 1112 { 1113 1113 int busy = 0; 1114 - int err; 1114 + int err = 0; 1115 1115 nodemask_t tmp; 1116 1116 1117 1117 migrate_prep();
+16 -15
mm/page_alloc.c
··· 2862 2862 { 2863 2863 struct page *page; 2864 2864 2865 - #ifdef CONFIG_CMA 2866 - /* 2867 - * Balance movable allocations between regular and CMA areas by 2868 - * allocating from CMA when over half of the zone's free memory 2869 - * is in the CMA area. 2870 - */ 2871 - if (alloc_flags & ALLOC_CMA && 2872 - zone_page_state(zone, NR_FREE_CMA_PAGES) > 2873 - zone_page_state(zone, NR_FREE_PAGES) / 2) { 2874 - page = __rmqueue_cma_fallback(zone, order); 2875 - if (page) 2876 - return page; 2865 + if (IS_ENABLED(CONFIG_CMA)) { 2866 + /* 2867 + * Balance movable allocations between regular and CMA areas by 2868 + * allocating from CMA when over half of the zone's free memory 2869 + * is in the CMA area. 2870 + */ 2871 + if (alloc_flags & ALLOC_CMA && 2872 + zone_page_state(zone, NR_FREE_CMA_PAGES) > 2873 + zone_page_state(zone, NR_FREE_PAGES) / 2) { 2874 + page = __rmqueue_cma_fallback(zone, order); 2875 + if (page) 2876 + goto out; 2877 + } 2877 2878 } 2878 - #endif 2879 2879 retry: 2880 2880 page = __rmqueue_smallest(zone, order, migratetype); 2881 2881 if (unlikely(!page)) { ··· 2886 2886 alloc_flags)) 2887 2887 goto retry; 2888 2888 } 2889 - 2890 - trace_mm_page_alloc_zone_locked(page, order, migratetype); 2889 + out: 2890 + if (page) 2891 + trace_mm_page_alloc_zone_locked(page, order, migratetype); 2891 2892 return page; 2892 2893 } 2893 2894
+1
mm/process_vm_access.c
··· 9 9 #include <linux/mm.h> 10 10 #include <linux/uio.h> 11 11 #include <linux/sched.h> 12 + #include <linux/compat.h> 12 13 #include <linux/sched/mm.h> 13 14 #include <linux/highmem.h> 14 15 #include <linux/ptrace.h>
+1 -1
mm/slub.c
··· 1973 1973 1974 1974 t = acquire_slab(s, n, page, object == NULL, &objects); 1975 1975 if (!t) 1976 - break; 1976 + continue; /* cmpxchg raced */ 1977 1977 1978 1978 available += objects; 1979 1979 if (!object) {
+3 -1
mm/vmalloc.c
··· 2420 2420 return NULL; 2421 2421 } 2422 2422 2423 - if (flags & VM_MAP_PUT_PAGES) 2423 + if (flags & VM_MAP_PUT_PAGES) { 2424 2424 area->pages = pages; 2425 + area->nr_pages = count; 2426 + } 2425 2427 return area->addr; 2426 2428 } 2427 2429 EXPORT_SYMBOL(vmap);
+2
mm/vmscan.c
··· 1238 1238 if (!PageSwapCache(page)) { 1239 1239 if (!(sc->gfp_mask & __GFP_IO)) 1240 1240 goto keep_locked; 1241 + if (page_maybe_dma_pinned(page)) 1242 + goto keep_locked; 1241 1243 if (PageTransHuge(page)) { 1242 1244 /* cannot split THP, skip it */ 1243 1245 if (!can_split_huge_page(page, NULL))
+1 -3
net/8021q/vlan.c
··· 284 284 return 0; 285 285 286 286 out_free_newdev: 287 - if (new_dev->reg_state == NETREG_UNINITIALIZED || 288 - new_dev->reg_state == NETREG_UNREGISTERED) 289 - free_netdev(new_dev); 287 + free_netdev(new_dev); 290 288 return err; 291 289 } 292 290
+1
net/can/isotp.c
··· 1155 1155 if (peer) 1156 1156 return -EOPNOTSUPP; 1157 1157 1158 + memset(addr, 0, sizeof(*addr)); 1158 1159 addr->can_family = AF_CAN; 1159 1160 addr->can_ifindex = so->ifindex; 1160 1161 addr->can_addr.tp.rx_id = so->rxid;
+24 -13
net/core/dev.c
··· 9661 9661 } 9662 9662 } 9663 9663 9664 - if ((features & NETIF_F_HW_TLS_TX) && !(features & NETIF_F_HW_CSUM)) { 9665 - netdev_dbg(dev, "Dropping TLS TX HW offload feature since no CSUM feature.\n"); 9666 - features &= ~NETIF_F_HW_TLS_TX; 9664 + if (features & NETIF_F_HW_TLS_TX) { 9665 + bool ip_csum = (features & (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM)) == 9666 + (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM); 9667 + bool hw_csum = features & NETIF_F_HW_CSUM; 9668 + 9669 + if (!ip_csum && !hw_csum) { 9670 + netdev_dbg(dev, "Dropping TLS TX HW offload feature since no CSUM feature.\n"); 9671 + features &= ~NETIF_F_HW_TLS_TX; 9672 + } 9667 9673 } 9668 9674 9669 9675 return features; ··· 10083 10077 ret = call_netdevice_notifiers(NETDEV_REGISTER, dev); 10084 10078 ret = notifier_to_errno(ret); 10085 10079 if (ret) { 10080 + /* Expect explicit free_netdev() on failure */ 10081 + dev->needs_free_netdev = false; 10086 10082 rollback_registered(dev); 10087 - rcu_barrier(); 10088 - 10089 - dev->reg_state = NETREG_UNREGISTERED; 10090 - /* We should put the kobject that hold in 10091 - * netdev_unregister_kobject(), otherwise 10092 - * the net device cannot be freed when 10093 - * driver calls free_netdev(), because the 10094 - * kobject is being hold. 10095 - */ 10096 - kobject_put(&dev->dev.kobj); 10083 + net_set_todo(dev); 10084 + goto out; 10097 10085 } 10098 10086 /* 10099 10087 * Prevent userspace races by waiting until the network ··· 10631 10631 struct napi_struct *p, *n; 10632 10632 10633 10633 might_sleep(); 10634 + 10635 + /* When called immediately after register_netdevice() failed the unwind 10636 + * handling may still be dismantling the device. Handle that case by 10637 + * deferring the free. 10638 + */ 10639 + if (dev->reg_state == NETREG_UNREGISTERING) { 10640 + ASSERT_RTNL(); 10641 + dev->needs_free_netdev = true; 10642 + return; 10643 + } 10644 + 10634 10645 netif_free_tx_queues(dev); 10635 10646 netif_free_rx_queues(dev); 10636 10647
+6 -17
net/core/rtnetlink.c
··· 3439 3439 3440 3440 dev->ifindex = ifm->ifi_index; 3441 3441 3442 - if (ops->newlink) { 3442 + if (ops->newlink) 3443 3443 err = ops->newlink(link_net ? : net, dev, tb, data, extack); 3444 - /* Drivers should call free_netdev() in ->destructor 3445 - * and unregister it on failure after registration 3446 - * so that device could be finally freed in rtnl_unlock. 3447 - */ 3448 - if (err < 0) { 3449 - /* If device is not registered at all, free it now */ 3450 - if (dev->reg_state == NETREG_UNINITIALIZED || 3451 - dev->reg_state == NETREG_UNREGISTERED) 3452 - free_netdev(dev); 3453 - goto out; 3454 - } 3455 - } else { 3444 + else 3456 3445 err = register_netdevice(dev); 3457 - if (err < 0) { 3458 - free_netdev(dev); 3459 - goto out; 3460 - } 3446 + if (err < 0) { 3447 + free_netdev(dev); 3448 + goto out; 3461 3449 } 3450 + 3462 3451 err = rtnl_configure_link(dev, ifm); 3463 3452 if (err < 0) 3464 3453 goto out_unregister;
+50 -9
net/core/skbuff.c
··· 501 501 struct sk_buff *__napi_alloc_skb(struct napi_struct *napi, unsigned int len, 502 502 gfp_t gfp_mask) 503 503 { 504 - struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache); 504 + struct napi_alloc_cache *nc; 505 505 struct sk_buff *skb; 506 506 void *data; 507 507 508 508 len += NET_SKB_PAD + NET_IP_ALIGN; 509 509 510 - if ((len > SKB_WITH_OVERHEAD(PAGE_SIZE)) || 510 + /* If requested length is either too small or too big, 511 + * we use kmalloc() for skb->head allocation. 512 + */ 513 + if (len <= SKB_WITH_OVERHEAD(1024) || 514 + len > SKB_WITH_OVERHEAD(PAGE_SIZE) || 511 515 (gfp_mask & (__GFP_DIRECT_RECLAIM | GFP_DMA))) { 512 516 skb = __alloc_skb(len, gfp_mask, SKB_ALLOC_RX, NUMA_NO_NODE); 513 517 if (!skb) ··· 519 515 goto skb_success; 520 516 } 521 517 518 + nc = this_cpu_ptr(&napi_alloc_cache); 522 519 len += SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); 523 520 len = SKB_DATA_ALIGN(len); 524 521 ··· 3447 3442 st->root_skb = st->cur_skb = skb; 3448 3443 st->frag_idx = st->stepped_offset = 0; 3449 3444 st->frag_data = NULL; 3445 + st->frag_off = 0; 3450 3446 } 3451 3447 EXPORT_SYMBOL(skb_prepare_seq_read); 3452 3448 ··· 3502 3496 st->stepped_offset += skb_headlen(st->cur_skb); 3503 3497 3504 3498 while (st->frag_idx < skb_shinfo(st->cur_skb)->nr_frags) { 3505 - frag = &skb_shinfo(st->cur_skb)->frags[st->frag_idx]; 3506 - block_limit = skb_frag_size(frag) + st->stepped_offset; 3499 + unsigned int pg_idx, pg_off, pg_sz; 3507 3500 3501 + frag = &skb_shinfo(st->cur_skb)->frags[st->frag_idx]; 3502 + 3503 + pg_idx = 0; 3504 + pg_off = skb_frag_off(frag); 3505 + pg_sz = skb_frag_size(frag); 3506 + 3507 + if (skb_frag_must_loop(skb_frag_page(frag))) { 3508 + pg_idx = (pg_off + st->frag_off) >> PAGE_SHIFT; 3509 + pg_off = offset_in_page(pg_off + st->frag_off); 3510 + pg_sz = min_t(unsigned int, pg_sz - st->frag_off, 3511 + PAGE_SIZE - pg_off); 3512 + } 3513 + 3514 + block_limit = pg_sz + st->stepped_offset; 3508 3515 if (abs_offset < block_limit) { 3509 3516 if (!st->frag_data) 3510 - st->frag_data = kmap_atomic(skb_frag_page(frag)); 3517 + st->frag_data = kmap_atomic(skb_frag_page(frag) + pg_idx); 3511 3518 3512 - *data = (u8 *) st->frag_data + skb_frag_off(frag) + 3519 + *data = (u8 *)st->frag_data + pg_off + 3513 3520 (abs_offset - st->stepped_offset); 3514 3521 3515 3522 return block_limit - abs_offset; ··· 3533 3514 st->frag_data = NULL; 3534 3515 } 3535 3516 3536 - st->frag_idx++; 3537 - st->stepped_offset += skb_frag_size(frag); 3517 + st->stepped_offset += pg_sz; 3518 + st->frag_off += pg_sz; 3519 + if (st->frag_off == skb_frag_size(frag)) { 3520 + st->frag_off = 0; 3521 + st->frag_idx++; 3522 + } 3538 3523 } 3539 3524 3540 3525 if (st->frag_data) { ··· 3678 3655 unsigned int delta_truesize = 0; 3679 3656 unsigned int delta_len = 0; 3680 3657 struct sk_buff *tail = NULL; 3681 - struct sk_buff *nskb; 3658 + struct sk_buff *nskb, *tmp; 3659 + int err; 3682 3660 3683 3661 skb_push(skb, -skb_network_offset(skb) + offset); 3684 3662 ··· 3689 3665 nskb = list_skb; 3690 3666 list_skb = list_skb->next; 3691 3667 3668 + err = 0; 3669 + if (skb_shared(nskb)) { 3670 + tmp = skb_clone(nskb, GFP_ATOMIC); 3671 + if (tmp) { 3672 + consume_skb(nskb); 3673 + nskb = tmp; 3674 + err = skb_unclone(nskb, GFP_ATOMIC); 3675 + } else { 3676 + err = -ENOMEM; 3677 + } 3678 + } 3679 + 3692 3680 if (!tail) 3693 3681 skb->next = nskb; 3694 3682 else 3695 3683 tail->next = nskb; 3684 + 3685 + if (unlikely(err)) { 3686 + nskb->next = list_skb; 3687 + goto err_linearize; 3688 + } 3696 3689 3697 3690 tail = nskb; 3698 3691
+1 -1
net/core/sock_reuseport.c
··· 293 293 i = j = reciprocal_scale(hash, socks); 294 294 while (reuse->socks[i]->sk_state == TCP_ESTABLISHED) { 295 295 i++; 296 - if (i >= reuse->num_socks) 296 + if (i >= socks) 297 297 i = 0; 298 298 if (i == j) 299 299 goto out;
+1 -1
net/dcb/dcbnl.c
··· 1765 1765 fn = &reply_funcs[dcb->cmd]; 1766 1766 if (!fn->cb) 1767 1767 return -EOPNOTSUPP; 1768 - if (fn->type != nlh->nlmsg_type) 1768 + if (fn->type == RTM_SETDCB && !netlink_capable(skb, CAP_NET_ADMIN)) 1769 1769 return -EPERM; 1770 1770 1771 1771 if (!tb[DCB_ATTR_IFNAME])
+4
net/dsa/dsa2.c
··· 353 353 354 354 static void dsa_port_teardown(struct dsa_port *dp) 355 355 { 356 + struct devlink_port *dlp = &dp->devlink_port; 357 + 356 358 if (!dp->setup) 357 359 return; 360 + 361 + devlink_port_type_clear(dlp); 358 362 359 363 switch (dp->type) { 360 364 case DSA_PORT_TYPE_UNUSED:
+10
net/dsa/master.c
··· 309 309 int dsa_master_setup(struct net_device *dev, struct dsa_port *cpu_dp) 310 310 { 311 311 int mtu = ETH_DATA_LEN + cpu_dp->tag_ops->overhead; 312 + struct dsa_switch *ds = cpu_dp->ds; 313 + struct device_link *consumer_link; 312 314 int ret; 315 + 316 + /* The DSA master must use SET_NETDEV_DEV for this to work. */ 317 + consumer_link = device_link_add(ds->dev, dev->dev.parent, 318 + DL_FLAG_AUTOREMOVE_CONSUMER); 319 + if (!consumer_link) 320 + netdev_err(dev, 321 + "Failed to create a device link to DSA switch %s\n", 322 + dev_name(ds->dev)); 313 323 314 324 rtnl_lock(); 315 325 ret = dev_set_mtu(dev, mtu);
+1 -6
net/ipv4/esp4.c
··· 443 443 int esp_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info *esp) 444 444 { 445 445 u8 *tail; 446 - u8 *vaddr; 447 446 int nfrags; 448 447 int esph_offset; 449 448 struct page *page; ··· 484 485 page = pfrag->page; 485 486 get_page(page); 486 487 487 - vaddr = kmap_atomic(page); 488 - 489 - tail = vaddr + pfrag->offset; 488 + tail = page_address(page) + pfrag->offset; 490 489 491 490 esp_output_fill_trailer(tail, esp->tfclen, esp->plen, esp->proto); 492 - 493 - kunmap_atomic(vaddr); 494 491 495 492 nfrags = skb_shinfo(skb)->nr_frags; 496 493
+1 -6
net/ipv6/esp6.c
··· 478 478 int esp6_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info *esp) 479 479 { 480 480 u8 *tail; 481 - u8 *vaddr; 482 481 int nfrags; 483 482 int esph_offset; 484 483 struct page *page; ··· 518 519 page = pfrag->page; 519 520 get_page(page); 520 521 521 - vaddr = kmap_atomic(page); 522 - 523 - tail = vaddr + pfrag->offset; 522 + tail = page_address(page) + pfrag->offset; 524 523 525 524 esp_output_fill_trailer(tail, esp->tfclen, esp->plen, esp->proto); 526 - 527 - kunmap_atomic(vaddr); 528 525 529 526 nfrags = skb_shinfo(skb)->nr_frags; 530 527
+40 -1
net/ipv6/ip6_output.c
··· 125 125 return -EINVAL; 126 126 } 127 127 128 + static int 129 + ip6_finish_output_gso_slowpath_drop(struct net *net, struct sock *sk, 130 + struct sk_buff *skb, unsigned int mtu) 131 + { 132 + struct sk_buff *segs, *nskb; 133 + netdev_features_t features; 134 + int ret = 0; 135 + 136 + /* Please see corresponding comment in ip_finish_output_gso 137 + * describing the cases where GSO segment length exceeds the 138 + * egress MTU. 139 + */ 140 + features = netif_skb_features(skb); 141 + segs = skb_gso_segment(skb, features & ~NETIF_F_GSO_MASK); 142 + if (IS_ERR_OR_NULL(segs)) { 143 + kfree_skb(skb); 144 + return -ENOMEM; 145 + } 146 + 147 + consume_skb(skb); 148 + 149 + skb_list_walk_safe(segs, segs, nskb) { 150 + int err; 151 + 152 + skb_mark_not_on_list(segs); 153 + err = ip6_fragment(net, sk, segs, ip6_finish_output2); 154 + if (err && ret == 0) 155 + ret = err; 156 + } 157 + 158 + return ret; 159 + } 160 + 128 161 static int __ip6_finish_output(struct net *net, struct sock *sk, struct sk_buff *skb) 129 162 { 163 + unsigned int mtu; 164 + 130 165 #if defined(CONFIG_NETFILTER) && defined(CONFIG_XFRM) 131 166 /* Policy lookup after SNAT yielded a new policy */ 132 167 if (skb_dst(skb)->xfrm) { ··· 170 135 } 171 136 #endif 172 137 173 - if ((skb->len > ip6_skb_dst_mtu(skb) && !skb_is_gso(skb)) || 138 + mtu = ip6_skb_dst_mtu(skb); 139 + if (skb_is_gso(skb) && !skb_gso_validate_network_len(skb, mtu)) 140 + return ip6_finish_output_gso_slowpath_drop(net, sk, skb, mtu); 141 + 142 + if ((skb->len > mtu && !skb_is_gso(skb)) || 174 143 dst_allfrag(skb_dst(skb)) || 175 144 (IP6CB(skb)->frag_max_size && skb->len > IP6CB(skb)->frag_max_size)) 176 145 return ip6_fragment(net, sk, skb, ip6_finish_output2);
+4 -1
net/ipv6/sit.c
··· 1645 1645 } 1646 1646 1647 1647 #ifdef CONFIG_IPV6_SIT_6RD 1648 - if (ipip6_netlink_6rd_parms(data, &ip6rd)) 1648 + if (ipip6_netlink_6rd_parms(data, &ip6rd)) { 1649 1649 err = ipip6_tunnel_update_6rd(nt, &ip6rd); 1650 + if (err < 0) 1651 + unregister_netdevice_queue(dev, NULL); 1652 + } 1650 1653 #endif 1651 1654 1652 1655 return err;
+23 -46
net/mptcp/protocol.c
··· 427 427 static bool tcp_can_send_ack(const struct sock *ssk) 428 428 { 429 429 return !((1 << inet_sk_state_load(ssk)) & 430 - (TCPF_SYN_SENT | TCPF_SYN_RECV | TCPF_TIME_WAIT | TCPF_CLOSE)); 430 + (TCPF_SYN_SENT | TCPF_SYN_RECV | TCPF_TIME_WAIT | TCPF_CLOSE | TCPF_LISTEN)); 431 431 432 432 433 433 static void mptcp_send_ack(struct mptcp_sock *msk) ··· 2642 2642 2643 2643 static int mptcp_disconnect(struct sock *sk, int flags) 2644 2644 { 2645 - /* Should never be called. 2646 - * inet_stream_connect() calls ->disconnect, but that 2647 - * refers to the subflow socket, not the mptcp one. 2648 - */ 2649 - WARN_ON_ONCE(1); 2645 + struct mptcp_subflow_context *subflow; 2646 + struct mptcp_sock *msk = mptcp_sk(sk); 2647 + 2648 + __mptcp_flush_join_list(msk); 2649 + mptcp_for_each_subflow(msk, subflow) { 2650 + struct sock *ssk = mptcp_subflow_tcp_sock(subflow); 2651 + 2652 + lock_sock(ssk); 2653 + tcp_disconnect(ssk, flags); 2654 + release_sock(ssk); 2655 + } 2650 2656 return 0; 2651 2657 } 2652 2658 ··· 3095 3089 return true; 3096 3090 } 3097 3091 3092 + static void mptcp_shutdown(struct sock *sk, int how) 3093 + { 3094 + pr_debug("sk=%p, how=%d", sk, how); 3095 + 3096 + if ((how & SEND_SHUTDOWN) && mptcp_close_state(sk)) 3097 + __mptcp_wr_shutdown(sk); 3098 + } 3099 + 3098 3100 static struct proto mptcp_prot = { 3099 3101 .name = "MPTCP", 3100 3102 .owner = THIS_MODULE, ··· 3112 3098 .accept = mptcp_accept, 3113 3099 .setsockopt = mptcp_setsockopt, 3114 3100 .getsockopt = mptcp_getsockopt, 3115 - .shutdown = tcp_shutdown, 3101 + .shutdown = mptcp_shutdown, 3116 3102 .destroy = mptcp_destroy, 3117 3103 .sendmsg = mptcp_sendmsg, 3118 3104 .recvmsg = mptcp_recvmsg, ··· 3358 3344 return mask; 3359 3345 } 3360 3346 3361 - static int mptcp_shutdown(struct socket *sock, int how) 3362 - { 3363 - struct mptcp_sock *msk = mptcp_sk(sock->sk); 3364 - struct sock *sk = sock->sk; 3365 - int ret = 0; 3366 - 3367 - pr_debug("sk=%p, how=%d", msk, how); 3368 - 3369 - lock_sock(sk); 3370 - 3371 - how++; 3372 - if ((how & ~SHUTDOWN_MASK) || !how) { 3373 - ret = -EINVAL; 3374 - goto out_unlock; 3375 - } 3376 - 3377 - if (sock->state == SS_CONNECTING) { 3378 - if ((1 << sk->sk_state) & 3379 - (TCPF_SYN_SENT | TCPF_SYN_RECV | TCPF_CLOSE)) 3380 - sock->state = SS_DISCONNECTING; 3381 - else 3382 - sock->state = SS_CONNECTED; 3383 - } 3384 - 3385 - sk->sk_shutdown |= how; 3386 - if ((how & SEND_SHUTDOWN) && mptcp_close_state(sk)) 3387 - __mptcp_wr_shutdown(sk); 3388 - 3389 - /* Wake up anyone sleeping in poll. */ 3390 - sk->sk_state_change(sk); 3391 - 3392 - out_unlock: 3393 - release_sock(sk); 3394 - 3395 - return ret; 3396 - } 3397 - 3398 3347 static const struct proto_ops mptcp_stream_ops = { 3399 3348 .family = PF_INET, 3400 3349 .owner = THIS_MODULE, ··· 3371 3394 .ioctl = inet_ioctl, 3372 3395 .gettstamp = sock_gettstamp, 3373 3396 .listen = mptcp_listen, 3374 - .shutdown = mptcp_shutdown, 3397 + .shutdown = inet_shutdown, 3375 3398 .setsockopt = sock_common_setsockopt, 3376 3399 .getsockopt = sock_common_getsockopt, 3377 3400 .sendmsg = inet_sendmsg, ··· 3421 3444 .ioctl = inet6_ioctl, 3422 3445 .gettstamp = sock_gettstamp, 3423 3446 .listen = mptcp_listen, 3424 - .shutdown = mptcp_shutdown, 3447 + .shutdown = inet_shutdown, 3425 3448 .setsockopt = sock_common_setsockopt, 3426 3449 .getsockopt = sock_common_getsockopt, 3427 3450 .sendmsg = inet6_sendmsg,
+3
net/netfilter/nf_conntrack_standalone.c
··· 523 523 { 524 524 int ret; 525 525 526 + /* module_param hashsize could have changed value */ 527 + nf_conntrack_htable_size_user = nf_conntrack_htable_size; 528 + 526 529 ret = proc_dointvec(table, write, buffer, lenp, ppos); 527 530 if (ret < 0 || !write) 528 531 return ret;
+1
net/netfilter/nf_nat_core.c
··· 1174 1174 ret = register_pernet_subsys(&nat_net_ops); 1175 1175 if (ret < 0) { 1176 1176 nf_ct_extend_unregister(&nat_extend); 1177 + kvfree(nf_nat_bysource); 1177 1178 return ret; 1178 1179 } 1179 1180
+1 -1
net/rxrpc/input.c
··· 430 430 return; 431 431 } 432 432 433 - if (call->state == RXRPC_CALL_SERVER_RECV_REQUEST) { 433 + if (state == RXRPC_CALL_SERVER_RECV_REQUEST) { 434 434 unsigned long timo = READ_ONCE(call->next_req_timo); 435 435 unsigned long now, expect_req_by; 436 436
+4 -2
net/rxrpc/key.c
··· 598 598 default: /* we have a ticket we can't encode */ 599 599 pr_err("Unsupported key token type (%u)\n", 600 600 token->security_index); 601 - continue; 601 + return -ENOPKG; 602 602 } 603 603 604 604 _debug("token[%u]: toksize=%u", ntoks, toksize); ··· 674 674 break; 675 675 676 676 default: 677 - break; 677 + pr_err("Unsupported key token type (%u)\n", 678 + token->security_index); 679 + return -ENOPKG; 678 680 } 679 681 680 682 ASSERTCMP((unsigned long)xdr - (unsigned long)oldxdr, ==,
+13 -7
net/smc/smc_core.c
··· 246 246 goto errattr; 247 247 smc_clc_get_hostname(&host); 248 248 if (host) { 249 - snprintf(hostname, sizeof(hostname), "%s", host); 249 + memcpy(hostname, host, SMC_MAX_HOSTNAME_LEN); 250 + hostname[SMC_MAX_HOSTNAME_LEN] = 0; 250 251 if (nla_put_string(skb, SMC_NLA_SYS_LOCAL_HOST, hostname)) 251 252 goto errattr; 252 253 } ··· 258 257 smc_ism_get_system_eid(smcd_dev, &seid); 259 258 mutex_unlock(&smcd_dev_list.mutex); 260 259 if (seid && smc_ism_is_v2_capable()) { 261 - snprintf(smc_seid, sizeof(smc_seid), "%s", seid); 260 + memcpy(smc_seid, seid, SMC_MAX_EID_LEN); 261 + smc_seid[SMC_MAX_EID_LEN] = 0; 262 262 if (nla_put_string(skb, SMC_NLA_SYS_SEID, smc_seid)) 263 263 goto errattr; 264 264 } ··· 297 295 goto errattr; 298 296 if (nla_put_u8(skb, SMC_NLA_LGR_R_VLAN_ID, lgr->vlan_id)) 299 297 goto errattr; 300 - snprintf(smc_target, sizeof(smc_target), "%s", lgr->pnet_id); 298 + memcpy(smc_target, lgr->pnet_id, SMC_MAX_PNETID_LEN); 299 + smc_target[SMC_MAX_PNETID_LEN] = 0; 301 300 if (nla_put_string(skb, SMC_NLA_LGR_R_PNETID, smc_target)) 302 301 goto errattr; 303 302 ··· 315 312 struct sk_buff *skb, 316 313 struct netlink_callback *cb) 317 314 { 318 - char smc_ibname[IB_DEVICE_NAME_MAX + 1]; 315 + char smc_ibname[IB_DEVICE_NAME_MAX]; 319 316 u8 smc_gid_target[41]; 320 317 struct nlattr *attrs; 321 318 u32 link_uid = 0; ··· 464 461 goto errattr; 465 462 if (nla_put_u32(skb, SMC_NLA_LGR_D_CHID, smc_ism_get_chid(lgr->smcd))) 466 463 goto errattr; 467 - snprintf(smc_pnet, sizeof(smc_pnet), "%s", lgr->smcd->pnetid); 464 + memcpy(smc_pnet, lgr->smcd->pnetid, SMC_MAX_PNETID_LEN); 465 + smc_pnet[SMC_MAX_PNETID_LEN] = 0; 468 466 if (nla_put_string(skb, SMC_NLA_LGR_D_PNETID, smc_pnet)) 469 467 goto errattr; 470 468 ··· 478 474 goto errv2attr; 479 475 if (nla_put_u8(skb, SMC_NLA_LGR_V2_OS, lgr->peer_os)) 480 476 goto errv2attr; 481 - snprintf(smc_host, sizeof(smc_host), "%s", lgr->peer_hostname); 477 + memcpy(smc_host, lgr->peer_hostname, SMC_MAX_HOSTNAME_LEN); 478 + smc_host[SMC_MAX_HOSTNAME_LEN] = 0; 482 479 if (nla_put_string(skb, SMC_NLA_LGR_V2_PEER_HOST, smc_host)) 483 480 goto errv2attr; 484 - snprintf(smc_eid, sizeof(smc_eid), "%s", lgr->negotiated_eid); 481 + memcpy(smc_eid, lgr->negotiated_eid, SMC_MAX_EID_LEN); 482 + smc_eid[SMC_MAX_EID_LEN] = 0; 485 483 if (nla_put_string(skb, SMC_NLA_LGR_V2_NEG_EID, smc_eid)) 486 484 goto errv2attr; 487 485
+3 -3
net/smc/smc_ib.c
··· 371 371 if (nla_put_u8(skb, SMC_NLA_DEV_PORT_PNET_USR, 372 372 smcibdev->pnetid_by_user[port])) 373 373 goto errattr; 374 - snprintf(smc_pnet, sizeof(smc_pnet), "%s", 375 - (char *)&smcibdev->pnetid[port]); 374 + memcpy(smc_pnet, &smcibdev->pnetid[port], SMC_MAX_PNETID_LEN); 375 + smc_pnet[SMC_MAX_PNETID_LEN] = 0; 376 376 if (nla_put_string(skb, SMC_NLA_DEV_PORT_PNETID, smc_pnet)) 377 377 goto errattr; 378 378 if (nla_put_u32(skb, SMC_NLA_DEV_PORT_NETDEV, ··· 414 414 struct sk_buff *skb, 415 415 struct netlink_callback *cb) 416 416 { 417 - char smc_ibname[IB_DEVICE_NAME_MAX + 1]; 417 + char smc_ibname[IB_DEVICE_NAME_MAX]; 418 418 struct smc_pci_dev smc_pci_dev; 419 419 struct pci_dev *pci_dev; 420 420 unsigned char is_crit;
+2 -1
net/smc/smc_ism.c
··· 250 250 goto errattr; 251 251 if (nla_put_u8(skb, SMC_NLA_DEV_PORT_PNET_USR, smcd->pnetid_by_user)) 252 252 goto errportattr; 253 - snprintf(smc_pnet, sizeof(smc_pnet), "%s", smcd->pnetid); 253 + memcpy(smc_pnet, smcd->pnetid, SMC_MAX_PNETID_LEN); 254 + smc_pnet[SMC_MAX_PNETID_LEN] = 0; 254 255 if (nla_put_string(skb, SMC_NLA_DEV_PORT_PNETID, smc_pnet)) 255 256 goto errportattr; 256 257
+1 -1
net/sunrpc/addr.c
··· 185 185 scope_id = dev->ifindex; 186 186 dev_put(dev); 187 187 } else { 188 - if (kstrtou32(p, 10, &scope_id) == 0) { 188 + if (kstrtou32(p, 10, &scope_id) != 0) { 189 189 kfree(p); 190 190 return 0; 191 191 }
+85 -1
net/sunrpc/svcsock.c
··· 1062 1062 return 0; /* record not complete */ 1063 1063 } 1064 1064 1065 + static int svc_tcp_send_kvec(struct socket *sock, const struct kvec *vec, 1066 + int flags) 1067 + { 1068 + return kernel_sendpage(sock, virt_to_page(vec->iov_base), 1069 + offset_in_page(vec->iov_base), 1070 + vec->iov_len, flags); 1071 + } 1072 + 1073 + /* 1074 + * kernel_sendpage() is used exclusively to reduce the number of 1075 + * copy operations in this path. Therefore the caller must ensure 1076 + * that the pages backing @xdr are unchanging. 1077 + * 1078 + * In addition, the logic assumes that * .bv_len is never larger 1079 + * than PAGE_SIZE. 1080 + */ 1081 + static int svc_tcp_sendmsg(struct socket *sock, struct msghdr *msg, 1082 + struct xdr_buf *xdr, rpc_fraghdr marker, 1083 + unsigned int *sentp) 1084 + { 1085 + const struct kvec *head = xdr->head; 1086 + const struct kvec *tail = xdr->tail; 1087 + struct kvec rm = { 1088 + .iov_base = &marker, 1089 + .iov_len = sizeof(marker), 1090 + }; 1091 + int flags, ret; 1092 + 1093 + *sentp = 0; 1094 + xdr_alloc_bvec(xdr, GFP_KERNEL); 1095 + 1096 + msg->msg_flags = MSG_MORE; 1097 + ret = kernel_sendmsg(sock, msg, &rm, 1, rm.iov_len); 1098 + if (ret < 0) 1099 + return ret; 1100 + *sentp += ret; 1101 + if (ret != rm.iov_len) 1102 + return -EAGAIN; 1103 + 1104 + flags = head->iov_len < xdr->len ? MSG_MORE | MSG_SENDPAGE_NOTLAST : 0; 1105 + ret = svc_tcp_send_kvec(sock, head, flags); 1106 + if (ret < 0) 1107 + return ret; 1108 + *sentp += ret; 1109 + if (ret != head->iov_len) 1110 + goto out; 1111 + 1112 + if (xdr->page_len) { 1113 + unsigned int offset, len, remaining; 1114 + struct bio_vec *bvec; 1115 + 1116 + bvec = xdr->bvec; 1117 + offset = xdr->page_base; 1118 + remaining = xdr->page_len; 1119 + flags = MSG_MORE | MSG_SENDPAGE_NOTLAST; 1120 + while (remaining > 0) { 1121 + if (remaining <= PAGE_SIZE && tail->iov_len == 0) 1122 + flags = 0; 1123 + len = min(remaining, bvec->bv_len); 1124 + ret = kernel_sendpage(sock, bvec->bv_page, 1125 + bvec->bv_offset + offset, 1126 + len, flags); 1127 + if (ret < 0) 1128 + return ret; 1129 + *sentp += ret; 1130 + if (ret != len) 1131 + goto out; 1132 + remaining -= len; 1133 + offset = 0; 1134 + bvec++; 1135 + } 1136 + } 1137 + 1138 + if (tail->iov_len) { 1139 + ret = svc_tcp_send_kvec(sock, tail, 0); 1140 + if (ret < 0) 1141 + return ret; 1142 + *sentp += ret; 1143 + } 1144 + 1145 + out: 1146 + return 0; 1147 + } 1148 + 1065 1149 /** 1066 1150 * svc_tcp_sendto - Send out a reply on a TCP socket 1067 1151 * @rqstp: completed svc_rqst ··· 1173 1089 mutex_lock(&xprt->xpt_mutex); 1174 1090 if (svc_xprt_is_dead(xprt)) 1175 1091 goto out_notconn; 1176 - err = xprt_sock_sendmsg(svsk->sk_sock, &msg, xdr, 0, marker, &sent); 1092 + err = svc_tcp_sendmsg(svsk->sk_sock, &msg, xdr, marker, &sent); 1177 1093 xdr_free_bvec(xdr); 1178 1094 trace_svcsock_tcp_send(xprt, err < 0 ? err : sent); 1179 1095 if (err < 0 || sent != (xdr->len + sizeof(marker)))
+8 -3
net/tipc/link.c
··· 1030 1030 int tipc_link_xmit(struct tipc_link *l, struct sk_buff_head *list, 1031 1031 struct sk_buff_head *xmitq) 1032 1032 { 1033 - struct tipc_msg *hdr = buf_msg(skb_peek(list)); 1034 1033 struct sk_buff_head *backlogq = &l->backlogq; 1035 1034 struct sk_buff_head *transmq = &l->transmq; 1036 1035 struct sk_buff *skb, *_skb; ··· 1037 1038 u16 ack = l->rcv_nxt - 1; 1038 1039 u16 seqno = l->snd_nxt; 1039 1040 int pkt_cnt = skb_queue_len(list); 1040 - int imp = msg_importance(hdr); 1041 1041 unsigned int mss = tipc_link_mss(l); 1042 1042 unsigned int cwin = l->window; 1043 1043 unsigned int mtu = l->mtu; 1044 + struct tipc_msg *hdr; 1044 1045 bool new_bundle; 1045 1046 int rc = 0; 1047 + int imp; 1046 1048 1049 + if (pkt_cnt <= 0) 1050 + return 0; 1051 + 1052 + hdr = buf_msg(skb_peek(list)); 1047 1053 if (unlikely(msg_size(hdr) > mtu)) { 1048 1054 pr_warn("Too large msg, purging xmit list %d %d %d %d %d!\n", 1049 1055 skb_queue_len(list), msg_user(hdr), ··· 1057 1053 return -EMSGSIZE; 1058 1054 } 1059 1055 1056 + imp = msg_importance(hdr); 1060 1057 /* Allow oversubscription of one data msg per source at congestion */ 1061 1058 if (unlikely(l->backlog[imp].len >= l->backlog[imp].limit)) { 1062 1059 if (imp == TIPC_SYSTEM_IMPORTANCE) { ··· 2544 2539 } 2545 2540 2546 2541 /** 2547 - * link_reset_stats - reset link statistics 2542 + * tipc_link_reset_stats - reset link statistics 2548 2543 * @l: pointer to link 2549 2544 */ 2550 2545 void tipc_link_reset_stats(struct tipc_link *l)
+1 -1
net/tipc/node.c
··· 1665 1665 } 1666 1666 1667 1667 /** 1668 - * tipc_node_xmit() is the general link level function for message sending 1668 + * tipc_node_xmit() - general link level function for message sending 1669 1669 * @net: the applicable net namespace 1670 1670 * @list: chain of buffers containing message 1671 1671 * @dnode: address of destination node
+5 -2
security/lsm_audit.c
··· 275 275 struct inode *inode; 276 276 277 277 audit_log_format(ab, " name="); 278 + spin_lock(&a->u.dentry->d_lock); 278 279 audit_log_untrustedstring(ab, a->u.dentry->d_name.name); 280 + spin_unlock(&a->u.dentry->d_lock); 279 281 280 282 inode = d_backing_inode(a->u.dentry); 281 283 if (inode) { ··· 295 293 dentry = d_find_alias(inode); 296 294 if (dentry) { 297 295 audit_log_format(ab, " name="); 298 - audit_log_untrustedstring(ab, 299 - dentry->d_name.name); 296 + spin_lock(&dentry->d_lock); 297 + audit_log_untrustedstring(ab, dentry->d_name.name); 298 + spin_unlock(&dentry->d_lock); 300 299 dput(dentry); 301 300 } 302 301 audit_log_format(ab, " dev=");
+1 -1
sound/firewire/fireface/ff-transaction.c
··· 88 88 89 89 /* Set interval to next transaction. */ 90 90 ff->next_ktime[port] = ktime_add_ns(ktime_get(), 91 - ff->rx_bytes[port] * 8 * NSEC_PER_SEC / 31250); 91 + ff->rx_bytes[port] * 8 * (NSEC_PER_SEC / 31250)); 92 92 93 93 if (quad_count == 1) 94 94 tcode = TCODE_WRITE_QUADLET_REQUEST;
+1 -1
sound/firewire/tascam/tascam-transaction.c
··· 209 209 210 210 /* Set interval to next transaction. */ 211 211 port->next_ktime = ktime_add_ns(ktime_get(), 212 - port->consume_bytes * 8 * NSEC_PER_SEC / 31250); 212 + port->consume_bytes * 8 * (NSEC_PER_SEC / 31250)); 213 213 214 214 /* Start this transaction. */ 215 215 port->idling = false;
+6 -3
sound/pci/hda/hda_intel.c
··· 2598 2598 .driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_AMD_SB }, 2599 2599 /* ATI HDMI */ 2600 2600 { PCI_DEVICE(0x1002, 0x0002), 2601 - .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS }, 2601 + .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS | 2602 + AZX_DCAPS_PM_RUNTIME }, 2602 2603 { PCI_DEVICE(0x1002, 0x1308), 2603 2604 .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS }, 2604 2605 { PCI_DEVICE(0x1002, 0x157a), ··· 2661 2660 { PCI_DEVICE(0x1002, 0xaab0), 2662 2661 .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS }, 2663 2662 { PCI_DEVICE(0x1002, 0xaac0), 2664 - .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS }, 2663 + .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS | 2664 + AZX_DCAPS_PM_RUNTIME }, 2665 2665 { PCI_DEVICE(0x1002, 0xaac8), 2666 - .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS }, 2666 + .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS | 2667 + AZX_DCAPS_PM_RUNTIME }, 2667 2668 { PCI_DEVICE(0x1002, 0xaad8), 2668 2669 .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS | 2669 2670 AZX_DCAPS_PM_RUNTIME },
+1 -1
sound/pci/hda/hda_tegra.c
··· 388 388 * in powers of 2, next available ratio is 16 which can be 389 389 * used as a limiting factor here. 390 390 */ 391 - if (of_device_is_compatible(np, "nvidia,tegra194-hda")) 391 + if (of_device_is_compatible(np, "nvidia,tegra30-hda")) 392 392 chip->bus.core.sdo_limit = 16; 393 393 394 394 /* codec detection */
+4
sound/pci/hda/patch_realtek.c
··· 7970 7970 SND_PCI_QUIRK(0x103c, 0x8760, "HP", ALC285_FIXUP_HP_MUTE_LED), 7971 7971 SND_PCI_QUIRK(0x103c, 0x877a, "HP", ALC285_FIXUP_HP_MUTE_LED), 7972 7972 SND_PCI_QUIRK(0x103c, 0x877d, "HP", ALC236_FIXUP_HP_MUTE_LED), 7973 + SND_PCI_QUIRK(0x103c, 0x8780, "HP ZBook Fury 17 G7 Mobile Workstation", 7974 + ALC285_FIXUP_HP_GPIO_AMP_INIT), 7975 + SND_PCI_QUIRK(0x103c, 0x8783, "HP ZBook Fury 15 G7 Mobile Workstation", 7976 + ALC285_FIXUP_HP_GPIO_AMP_INIT), 7973 7977 SND_PCI_QUIRK(0x103c, 0x87c8, "HP", ALC287_FIXUP_HP_GPIO_LED), 7974 7978 SND_PCI_QUIRK(0x103c, 0x87f4, "HP", ALC287_FIXUP_HP_GPIO_LED), 7975 7979 SND_PCI_QUIRK(0x103c, 0x87f5, "HP", ALC287_FIXUP_HP_GPIO_LED),
+3 -13
sound/soc/amd/raven/pci-acp3x.c
··· 140 140 goto release_regions; 141 141 } 142 142 143 - /* check for msi interrupt support */ 144 - ret = pci_enable_msi(pci); 145 - if (ret) 146 - /* msi is not enabled */ 147 - irqflags = IRQF_SHARED; 148 - else 149 - /* msi is enabled */ 150 - irqflags = 0; 143 + irqflags = IRQF_SHARED; 151 144 152 145 addr = pci_resource_start(pci, 0); 153 146 adata->acp3x_base = devm_ioremap(&pci->dev, addr, 154 147 pci_resource_len(pci, 0)); 155 148 if (!adata->acp3x_base) { 156 149 ret = -ENOMEM; 157 - goto disable_msi; 150 + goto release_regions; 158 151 } 159 152 pci_set_master(pci); 160 153 pci_set_drvdata(pci, adata); ··· 155 162 adata->pme_en = rv_readl(adata->acp3x_base + mmACP_PME_EN); 156 163 ret = acp3x_init(adata); 157 164 if (ret) 158 - goto disable_msi; 165 + goto release_regions; 159 166 160 167 val = rv_readl(adata->acp3x_base + mmACP_I2S_PIN_CONFIG); 161 168 switch (val) { ··· 244 251 de_init: 245 252 if (acp3x_deinit(adata->acp3x_base)) 246 253 dev_err(&pci->dev, "ACP de-init failed\n"); 247 - disable_msi: 248 - pci_disable_msi(pci); 249 254 release_regions: 250 255 pci_release_regions(pci); 251 256 disable_pci: ··· 302 311 dev_err(&pci->dev, "ACP de-init failed\n"); 303 312 pm_runtime_forbid(&pci->dev); 304 313 pm_runtime_get_noresume(&pci->dev); 305 - pci_disable_msi(pci); 306 314 pci_release_regions(pci); 307 315 pci_disable_device(pci); 308 316 }
+14
sound/soc/amd/renoir/rn-pci-acp3x.c
··· 171 171 DMI_EXACT_MATCH(DMI_BOARD_NAME, "LNVNB161216"), 172 172 } 173 173 }, 174 + { 175 + /* Lenovo ThinkPad E14 Gen 2 */ 176 + .matches = { 177 + DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 178 + DMI_EXACT_MATCH(DMI_BOARD_NAME, "20T6CTO1WW"), 179 + } 180 + }, 181 + { 182 + /* Lenovo ThinkPad X395 */ 183 + .matches = { 184 + DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 185 + DMI_EXACT_MATCH(DMI_BOARD_NAME, "20NLCTO1WW"), 186 + } 187 + }, 174 188 {} 175 189 }; 176 190
+2 -2
sound/soc/atmel/Kconfig
··· 143 143 - sama7g5 144 144 145 145 This S/PDIF TX driver is compliant with IEC-60958 standard and 146 - includes programable User Data and Channel Status fields. 146 + includes programmable User Data and Channel Status fields. 147 147 148 148 config SND_MCHP_SOC_SPDIFRX 149 149 tristate "Microchip ASoC driver for boards using S/PDIF RX" ··· 157 157 - sama7g5 158 158 159 159 This S/PDIF RX driver is compliant with IEC-60958 standard and 160 - includes programable User Data and Channel Status fields. 160 + includes programmable User Data and Channel Status fields. 161 161 endif
+1 -1
sound/soc/codecs/Kconfig
··· 457 457 help 458 458 Enable support for the Analog Devices ADAU7118 8 Channel PDM-to-I2S/TDM 459 459 Converter. In this mode, the device works in standalone mode which 460 - means that there is no bus to comunicate with it. Stereo mode is not 460 + means that there is no bus to communicate with it. Stereo mode is not 461 461 supported in this mode. 462 462 463 463 To compile this driver as a module, choose M here: the module
+20
sound/soc/codecs/max98373-i2c.c
··· 19 19 #include <sound/tlv.h> 20 20 #include "max98373.h" 21 21 22 + static const u32 max98373_i2c_cache_reg[] = { 23 + MAX98373_R2054_MEAS_ADC_PVDD_CH_READBACK, 24 + MAX98373_R2055_MEAS_ADC_THERM_CH_READBACK, 25 + MAX98373_R20B6_BDE_CUR_STATE_READBACK, 26 + }; 27 + 22 28 static struct reg_default max98373_reg[] = { 23 29 {MAX98373_R2000_SW_RESET, 0x00}, 24 30 {MAX98373_R2001_INT_RAW1, 0x00}, ··· 478 472 static int max98373_suspend(struct device *dev) 479 473 { 480 474 struct max98373_priv *max98373 = dev_get_drvdata(dev); 475 + int i; 476 + 477 + /* cache feedback register values before suspend */ 478 + for (i = 0; i < max98373->cache_num; i++) 479 + regmap_read(max98373->regmap, max98373->cache[i].reg, &max98373->cache[i].val); 481 480 482 481 regcache_cache_only(max98373->regmap, true); 483 482 regcache_mark_dirty(max98373->regmap); ··· 520 509 { 521 510 int ret = 0; 522 511 int reg = 0; 512 + int i; 523 513 struct max98373_priv *max98373 = NULL; 524 514 525 515 max98373 = devm_kzalloc(&i2c->dev, sizeof(*max98373), GFP_KERNEL); ··· 545 533 "Failed to allocate regmap: %d\n", ret); 546 534 return ret; 547 535 } 536 + 537 + max98373->cache_num = ARRAY_SIZE(max98373_i2c_cache_reg); 538 + max98373->cache = devm_kcalloc(&i2c->dev, max98373->cache_num, 539 + sizeof(*max98373->cache), 540 + GFP_KERNEL); 541 + 542 + for (i = 0; i < max98373->cache_num; i++) 543 + max98373->cache[i].reg = max98373_i2c_cache_reg[i]; 548 544 549 545 /* voltage/current slot & gpio configuration */ 550 546 max98373_slot_config(&i2c->dev, max98373);
+20
sound/soc/codecs/max98373-sdw.c
··· 23 23 struct sdw_stream_runtime *sdw_stream; 24 24 }; 25 25 26 + static const u32 max98373_sdw_cache_reg[] = { 27 + MAX98373_R2054_MEAS_ADC_PVDD_CH_READBACK, 28 + MAX98373_R2055_MEAS_ADC_THERM_CH_READBACK, 29 + MAX98373_R20B6_BDE_CUR_STATE_READBACK, 30 + }; 31 + 26 32 static struct reg_default max98373_reg[] = { 27 33 {MAX98373_R0040_SCP_INIT_STAT_1, 0x00}, 28 34 {MAX98373_R0041_SCP_INIT_MASK_1, 0x00}, ··· 251 245 static __maybe_unused int max98373_suspend(struct device *dev) 252 246 { 253 247 struct max98373_priv *max98373 = dev_get_drvdata(dev); 248 + int i; 249 + 250 + /* cache feedback register values before suspend */ 251 + for (i = 0; i < max98373->cache_num; i++) 252 + regmap_read(max98373->regmap, max98373->cache[i].reg, &max98373->cache[i].val); 254 253 255 254 regcache_cache_only(max98373->regmap, true); 256 255 ··· 768 757 { 769 758 struct max98373_priv *max98373; 770 759 int ret; 760 + int i; 771 761 struct device *dev = &slave->dev; 772 762 773 763 /* Allocate and assign private driver data structure */ ··· 779 767 dev_set_drvdata(dev, max98373); 780 768 max98373->regmap = regmap; 781 769 max98373->slave = slave; 770 + 771 + max98373->cache_num = ARRAY_SIZE(max98373_sdw_cache_reg); 772 + max98373->cache = devm_kcalloc(dev, max98373->cache_num, 773 + sizeof(*max98373->cache), 774 + GFP_KERNEL); 775 + 776 + for (i = 0; i < max98373->cache_num; i++) 777 + max98373->cache[i].reg = max98373_sdw_cache_reg[i]; 782 778 783 779 /* Read voltage and slot configuration */ 784 780 max98373_slot_config(dev, max98373);
+31 -3
sound/soc/codecs/max98373.c
··· 168 168 MAX98373_R2051_MEAS_ADC_SAMPLING_RATE, 0, 169 169 max98373_ADC_samplerate_text); 170 170 171 + static int max98373_feedback_get(struct snd_kcontrol *kcontrol, 172 + struct snd_ctl_elem_value *ucontrol) 173 + { 174 + struct snd_soc_component *component = snd_kcontrol_chip(kcontrol); 175 + struct soc_mixer_control *mc = 176 + (struct soc_mixer_control *)kcontrol->private_value; 177 + struct max98373_priv *max98373 = snd_soc_component_get_drvdata(component); 178 + int i; 179 + 180 + if (snd_soc_component_get_bias_level(component) == SND_SOC_BIAS_OFF) { 181 + /* 182 + * Register values will be cached before suspend. The cached value 183 + * will be a valid value and userspace will happy with that. 184 + */ 185 + for (i = 0; i < max98373->cache_num; i++) { 186 + if (mc->reg == max98373->cache[i].reg) { 187 + ucontrol->value.integer.value[0] = max98373->cache[i].val; 188 + return 0; 189 + } 190 + } 191 + } 192 + 193 + return snd_soc_put_volsw(kcontrol, ucontrol); 194 + } 195 + 171 196 static const struct snd_kcontrol_new max98373_snd_controls[] = { 172 197 SOC_SINGLE("Digital Vol Sel Switch", MAX98373_R203F_AMP_DSP_CFG, 173 198 MAX98373_AMP_VOL_SEL_SHIFT, 1, 0), ··· 234 209 MAX98373_FLT_EN_SHIFT, 1, 0), 235 210 SOC_SINGLE("ADC TEMP FLT Switch", MAX98373_R2053_MEAS_ADC_THERM_FLT_CFG, 236 211 MAX98373_FLT_EN_SHIFT, 1, 0), 237 - SOC_SINGLE("ADC PVDD", MAX98373_R2054_MEAS_ADC_PVDD_CH_READBACK, 0, 0xFF, 0), 238 - SOC_SINGLE("ADC TEMP", MAX98373_R2055_MEAS_ADC_THERM_CH_READBACK, 0, 0xFF, 0), 212 + SOC_SINGLE_EXT("ADC PVDD", MAX98373_R2054_MEAS_ADC_PVDD_CH_READBACK, 0, 0xFF, 0, 213 + max98373_feedback_get, NULL), 214 + SOC_SINGLE_EXT("ADC TEMP", MAX98373_R2055_MEAS_ADC_THERM_CH_READBACK, 0, 0xFF, 0, 215 + max98373_feedback_get, NULL), 239 216 SOC_SINGLE("ADC PVDD FLT Coeff", MAX98373_R2052_MEAS_ADC_PVDD_FLT_CFG, 240 217 0, 0x3, 0), 241 218 SOC_SINGLE("ADC TEMP FLT Coeff", MAX98373_R2053_MEAS_ADC_THERM_FLT_CFG, ··· 253 226 SOC_SINGLE("BDE LVL2 Thresh", MAX98373_R2098_BDE_L2_THRESH, 0, 0xFF, 0), 254 227 SOC_SINGLE("BDE LVL3 Thresh", MAX98373_R2099_BDE_L3_THRESH, 0, 0xFF, 0), 255 228 SOC_SINGLE("BDE LVL4 Thresh", MAX98373_R209A_BDE_L4_THRESH, 0, 0xFF, 0), 256 - SOC_SINGLE("BDE Active Level", MAX98373_R20B6_BDE_CUR_STATE_READBACK, 0, 8, 0), 229 + SOC_SINGLE_EXT("BDE Active Level", MAX98373_R20B6_BDE_CUR_STATE_READBACK, 0, 8, 0, 230 + max98373_feedback_get, NULL), 257 231 SOC_SINGLE("BDE Clip Mode Switch", MAX98373_R2092_BDE_CLIPPER_MODE, 0, 1, 0), 258 232 SOC_SINGLE("BDE Thresh Hysteresis", MAX98373_R209B_BDE_THRESH_HYST, 0, 0xFF, 0), 259 233 SOC_SINGLE("BDE Hold Time", MAX98373_R2090_BDE_LVL_HOLD, 0, 0xFF, 0),
+8
sound/soc/codecs/max98373.h
··· 203 203 /* MAX98373_R2000_SW_RESET */ 204 204 #define MAX98373_SOFT_RESET (0x1 << 0) 205 205 206 + struct max98373_cache { 207 + u32 reg; 208 + u32 val; 209 + }; 210 + 206 211 struct max98373_priv { 207 212 struct regmap *regmap; 208 213 int reset_gpio; ··· 217 212 bool interleave_mode; 218 213 unsigned int ch_size; 219 214 bool tdm_mode; 215 + /* cache for reading a valid fake feedback value */ 216 + struct max98373_cache *cache; 217 + int cache_num; 220 218 /* variables to support soundwire */ 221 219 struct sdw_slave *slave; 222 220 bool hw_init;
+6
sound/soc/codecs/rt711.c
··· 462 462 unsigned int read_ll, read_rl; 463 463 int i; 464 464 465 + mutex_lock(&rt711->calibrate_mutex); 466 + 465 467 /* Can't use update bit function, so read the original value first */ 466 468 addr_h = mc->reg; 467 469 addr_l = mc->rreg; ··· 549 547 if (dapm->bias_level <= SND_SOC_BIAS_STANDBY) 550 548 regmap_write(rt711->regmap, 551 549 RT711_SET_AUDIO_POWER_STATE, AC_PWRST_D3); 550 + 551 + mutex_unlock(&rt711->calibrate_mutex); 552 552 return 0; 553 553 } 554 554 ··· 863 859 break; 864 860 865 861 case SND_SOC_BIAS_STANDBY: 862 + mutex_lock(&rt711->calibrate_mutex); 866 863 regmap_write(rt711->regmap, 867 864 RT711_SET_AUDIO_POWER_STATE, 868 865 AC_PWRST_D3); 866 + mutex_unlock(&rt711->calibrate_mutex); 869 867 break; 870 868 871 869 default:
+1
sound/soc/fsl/imx-hdmi.c
··· 164 164 165 165 if ((hdmi_out && hdmi_in) || (!hdmi_out && !hdmi_in)) { 166 166 dev_err(&pdev->dev, "Invalid HDMI DAI link\n"); 167 + ret = -EINVAL; 167 168 goto fail; 168 169 } 169 170
+1
sound/soc/intel/boards/haswell.c
··· 189 189 .probe = haswell_audio_probe, 190 190 .driver = { 191 191 .name = "haswell-audio", 192 + .pm = &snd_soc_pm_ops, 192 193 }, 193 194 }; 194 195
+1
sound/soc/intel/skylake/cnl-sst.c
··· 224 224 "dsp boot timeout, status=%#x error=%#x\n", 225 225 sst_dsp_shim_read(ctx, CNL_ADSP_FW_STATUS), 226 226 sst_dsp_shim_read(ctx, CNL_ADSP_ERROR_CODE)); 227 + ret = -ETIMEDOUT; 227 228 goto err; 228 229 } 229 230 } else {
+13 -1
sound/soc/meson/axg-tdm-interface.c
··· 467 467 return ret; 468 468 } 469 469 470 + static const struct snd_soc_dapm_widget axg_tdm_iface_dapm_widgets[] = { 471 + SND_SOC_DAPM_SIGGEN("Playback Signal"), 472 + }; 473 + 474 + static const struct snd_soc_dapm_route axg_tdm_iface_dapm_routes[] = { 475 + { "Loopback", NULL, "Playback Signal" }, 476 + }; 477 + 470 478 static const struct snd_soc_component_driver axg_tdm_iface_component_drv = { 471 - .set_bias_level = axg_tdm_iface_set_bias_level, 479 + .dapm_widgets = axg_tdm_iface_dapm_widgets, 480 + .num_dapm_widgets = ARRAY_SIZE(axg_tdm_iface_dapm_widgets), 481 + .dapm_routes = axg_tdm_iface_dapm_routes, 482 + .num_dapm_routes = ARRAY_SIZE(axg_tdm_iface_dapm_routes), 483 + .set_bias_level = axg_tdm_iface_set_bias_level, 472 484 }; 473 485 474 486 static const struct of_device_id axg_tdm_iface_of_match[] = {
+2 -11
sound/soc/meson/axg-tdmin.c
··· 228 228 .regmap_cfg = &axg_tdmin_regmap_cfg, 229 229 .ops = &axg_tdmin_ops, 230 230 .quirks = &(const struct axg_tdm_formatter_hw) { 231 - .skew_offset = 2, 232 - }, 233 - }; 234 - 235 - static const struct axg_tdm_formatter_driver g12a_tdmin_drv = { 236 - .component_drv = &axg_tdmin_component_drv, 237 - .regmap_cfg = &axg_tdmin_regmap_cfg, 238 - .ops = &axg_tdmin_ops, 239 - .quirks = &(const struct axg_tdm_formatter_hw) { 240 231 .skew_offset = 3, 241 232 }, 242 233 }; ··· 238 247 .data = &axg_tdmin_drv, 239 248 }, { 240 249 .compatible = "amlogic,g12a-tdmin", 241 - .data = &g12a_tdmin_drv, 250 + .data = &axg_tdmin_drv, 242 251 }, { 243 252 .compatible = "amlogic,sm1-tdmin", 244 - .data = &g12a_tdmin_drv, 253 + .data = &axg_tdmin_drv, 245 254 }, {} 246 255 }; 247 256 MODULE_DEVICE_TABLE(of, axg_tdmin_of_match);
+2 -18
sound/soc/qcom/lpass-cpu.c
··· 270 270 struct lpaif_i2sctl *i2sctl = drvdata->i2sctl; 271 271 unsigned int id = dai->driver->id; 272 272 int ret = -EINVAL; 273 - unsigned int val = 0; 274 - 275 - ret = regmap_read(drvdata->lpaif_map, 276 - LPAIF_I2SCTL_REG(drvdata->variant, dai->driver->id), &val); 277 - if (ret) { 278 - dev_err(dai->dev, "error reading from i2sctl reg: %d\n", ret); 279 - return ret; 280 - } 281 - if (val == LPAIF_I2SCTL_RESET_STATE) { 282 - dev_err(dai->dev, "error in i2sctl register state\n"); 283 - return -ENOTRECOVERABLE; 284 - } 285 273 286 274 switch (cmd) { 287 275 case SNDRV_PCM_TRIGGER_START: ··· 442 454 struct lpass_variant *v = drvdata->variant; 443 455 int i; 444 456 445 - for (i = 0; i < v->i2s_ports; ++i) 446 - if (reg == LPAIF_I2SCTL_REG(v, i)) 447 - return true; 448 457 for (i = 0; i < v->irq_ports; ++i) 449 458 if (reg == LPAIF_IRQSTAT_REG(v, i)) 450 459 return true; 451 460 452 461 for (i = 0; i < v->rdma_channels; ++i) 453 - if (reg == LPAIF_RDMACURR_REG(v, i) || reg == LPAIF_RDMACTL_REG(v, i)) 462 + if (reg == LPAIF_RDMACURR_REG(v, i)) 454 463 return true; 455 464 456 465 for (i = 0; i < v->wrdma_channels; ++i) 457 - if (reg == LPAIF_WRDMACURR_REG(v, i + v->wrdma_channel_start) || 458 - reg == LPAIF_WRDMACTL_REG(v, i + v->wrdma_channel_start)) 466 + if (reg == LPAIF_WRDMACURR_REG(v, i + v->wrdma_channel_start)) 459 467 return true; 460 468 461 469 return false;
+35 -15
sound/soc/qcom/lpass-platform.c
··· 452 452 unsigned int reg_irqclr = 0, val_irqclr = 0; 453 453 unsigned int reg_irqen = 0, val_irqen = 0, val_mask = 0; 454 454 unsigned int dai_id = cpu_dai->driver->id; 455 - unsigned int dma_ctrl_reg = 0; 456 455 457 456 ch = pcm_data->dma_ch; 458 457 if (dir == SNDRV_PCM_STREAM_PLAYBACK) { ··· 468 469 id = pcm_data->dma_ch - v->wrdma_channel_start; 469 470 map = drvdata->lpaif_map; 470 471 } 471 - ret = regmap_read(map, LPAIF_DMACTL_REG(v, ch, dir, dai_id), &dma_ctrl_reg); 472 - if (ret) { 473 - dev_err(soc_runtime->dev, "error reading from rdmactl reg: %d\n", ret); 474 - return ret; 475 - } 476 472 477 - if (dma_ctrl_reg == LPAIF_DMACTL_RESET_STATE || 478 - dma_ctrl_reg == LPAIF_DMACTL_RESET_STATE + 1) { 479 - dev_err(soc_runtime->dev, "error in rdmactl register state\n"); 480 - return -ENOTRECOVERABLE; 481 - } 482 473 switch (cmd) { 483 474 case SNDRV_PCM_TRIGGER_START: 484 475 case SNDRV_PCM_TRIGGER_RESUME: ··· 489 500 "error writing to rdmactl reg: %d\n", ret); 490 501 return ret; 491 502 } 492 - map = drvdata->hdmiif_map; 493 503 reg_irqclr = LPASS_HDMITX_APP_IRQCLEAR_REG(v); 494 504 val_irqclr = (LPAIF_IRQ_ALL(ch) | 495 505 LPAIF_IRQ_HDMI_REQ_ON_PRELOAD(ch) | ··· 507 519 break; 508 520 case MI2S_PRIMARY: 509 521 case MI2S_SECONDARY: 510 - map = drvdata->lpaif_map; 511 522 reg_irqclr = LPAIF_IRQCLEAR_REG(v, LPAIF_IRQ_PORT_HOST); 512 523 val_irqclr = LPAIF_IRQ_ALL(ch); 513 524 ··· 550 563 "error writing to rdmactl reg: %d\n", ret); 551 564 return ret; 552 565 } 553 - map = drvdata->hdmiif_map; 554 566 reg_irqen = LPASS_HDMITX_APP_IRQEN_REG(v); 555 567 val_mask = (LPAIF_IRQ_ALL(ch) | 556 568 LPAIF_IRQ_HDMI_REQ_ON_PRELOAD(ch) | ··· 559 573 break; 560 574 case MI2S_PRIMARY: 561 575 case MI2S_SECONDARY: 562 - map = drvdata->lpaif_map; 563 576 reg_irqen = LPAIF_IRQEN_REG(v, LPAIF_IRQ_PORT_HOST); 564 577 val_mask = LPAIF_IRQ_ALL(ch); 565 578 val_irqen = 0; ··· 823 838 } 824 839 } 825 840 841 + static int lpass_platform_pcmops_suspend(struct snd_soc_component *component) 842 + { 843 + struct lpass_data *drvdata = snd_soc_component_get_drvdata(component); 844 + struct regmap *map; 845 + unsigned int dai_id = component->id; 846 + 847 + if (dai_id == LPASS_DP_RX) 848 + map = drvdata->hdmiif_map; 849 + else 850 + map = drvdata->lpaif_map; 851 + 852 + regcache_cache_only(map, true); 853 + regcache_mark_dirty(map); 854 + 855 + return 0; 856 + } 857 + 858 + static int lpass_platform_pcmops_resume(struct snd_soc_component *component) 859 + { 860 + struct lpass_data *drvdata = snd_soc_component_get_drvdata(component); 861 + struct regmap *map; 862 + unsigned int dai_id = component->id; 863 + 864 + if (dai_id == LPASS_DP_RX) 865 + map = drvdata->hdmiif_map; 866 + else 867 + map = drvdata->lpaif_map; 868 + 869 + regcache_cache_only(map, false); 870 + return regcache_sync(map); 871 + } 872 + 873 + 826 874 static const struct snd_soc_component_driver lpass_component_driver = { 827 875 .name = DRV_NAME, 828 876 .open = lpass_platform_pcmops_open, ··· 868 850 .mmap = lpass_platform_pcmops_mmap, 869 851 .pcm_construct = lpass_platform_pcm_new, 870 852 .pcm_destruct = lpass_platform_pcm_free, 853 + .suspend = lpass_platform_pcmops_suspend, 854 + .resume = lpass_platform_pcmops_resume, 871 855 872 856 }; 873 857
+10 -8
sound/soc/sh/rcar/adg.c
··· 366 366 struct rsnd_adg *adg = rsnd_priv_to_adg(priv); 367 367 struct device *dev = rsnd_priv_to_dev(priv); 368 368 struct clk *clk; 369 - int i, ret; 369 + int i; 370 370 371 371 for_each_rsnd_clk(clk, adg, i) { 372 - ret = 0; 373 372 if (enable) { 374 - ret = clk_prepare_enable(clk); 373 + int ret = clk_prepare_enable(clk); 375 374 376 375 /* 377 376 * We shouldn't use clk_get_rate() under 378 377 * atomic context. Let's keep it when 379 378 * rsnd_adg_clk_enable() was called 380 379 */ 381 - adg->clk_rate[i] = clk_get_rate(adg->clk[i]); 380 + adg->clk_rate[i] = 0; 381 + if (ret < 0) 382 + dev_warn(dev, "can't use clk %d\n", i); 383 + else 384 + adg->clk_rate[i] = clk_get_rate(clk); 382 385 } else { 383 - clk_disable_unprepare(clk); 386 + if (adg->clk_rate[i]) 387 + clk_disable_unprepare(clk); 388 + adg->clk_rate[i] = 0; 384 389 } 385 - 386 - if (ret < 0) 387 - dev_warn(dev, "can't use clk %d\n", i); 388 390 } 389 391 } 390 392
+1
sound/soc/soc-dapm.c
··· 2486 2486 enum snd_soc_dapm_direction dir; 2487 2487 2488 2488 list_del(&w->list); 2489 + list_del(&w->dirty); 2489 2490 /* 2490 2491 * remove source and sink paths associated to this widget. 2491 2492 * While removing the path, remove reference to it from both
+1 -1
sound/soc/sof/Kconfig
··· 122 122 bool "SOF stop on XRUN" 123 123 help 124 124 This option forces PCMs to stop on any XRUN event. This is useful to 125 - preserve any trace data ond pipeline status prior to the XRUN. 125 + preserve any trace data and pipeline status prior to the XRUN. 126 126 Say Y if you are debugging SOF FW pipeline XRUNs. 127 127 If unsure select "N". 128 128
+2 -3
sound/usb/card.c
··· 450 450 static void snd_usb_audio_free(struct snd_card *card) 451 451 { 452 452 struct snd_usb_audio *chip = card->private_data; 453 - struct snd_usb_endpoint *ep, *n; 454 453 455 - list_for_each_entry_safe(ep, n, &chip->ep_list, list) 456 - snd_usb_endpoint_free(ep); 454 + snd_usb_endpoint_free_all(chip); 457 455 458 456 mutex_destroy(&chip->mutex); 459 457 if (!atomic_read(&chip->shutdown)) ··· 609 611 chip->usb_id = usb_id; 610 612 INIT_LIST_HEAD(&chip->pcm_list); 611 613 INIT_LIST_HEAD(&chip->ep_list); 614 + INIT_LIST_HEAD(&chip->iface_ref_list); 612 615 INIT_LIST_HEAD(&chip->midi_list); 613 616 INIT_LIST_HEAD(&chip->mixer_list); 614 617
+3
sound/usb/card.h
··· 18 18 unsigned int frame_size; /* samples per frame for non-audio */ 19 19 unsigned char iface; /* interface number */ 20 20 unsigned char altsetting; /* corresponding alternate setting */ 21 + unsigned char ep_idx; /* endpoint array index */ 21 22 unsigned char altset_idx; /* array index of altenate setting */ 22 23 unsigned char attributes; /* corresponding attributes of cs endpoint */ 23 24 unsigned char endpoint; /* endpoint */ ··· 43 42 }; 44 43 45 44 struct snd_usb_substream; 45 + struct snd_usb_iface_ref; 46 46 struct snd_usb_endpoint; 47 47 struct snd_usb_power_domain; 48 48 ··· 60 58 61 59 struct snd_usb_endpoint { 62 60 struct snd_usb_audio *chip; 61 + struct snd_usb_iface_ref *iface_ref; 63 62 64 63 int opened; /* open refcount; protect with chip->mutex */ 65 64 atomic_t running; /* running status */
+72 -12
sound/usb/endpoint.c
··· 24 24 #define EP_FLAG_RUNNING 1 25 25 #define EP_FLAG_STOPPING 2 26 26 27 + /* interface refcounting */ 28 + struct snd_usb_iface_ref { 29 + unsigned char iface; 30 + bool need_setup; 31 + int opened; 32 + struct list_head list; 33 + }; 34 + 27 35 /* 28 36 * snd_usb_endpoint is a model that abstracts everything related to an 29 37 * USB endpoint and its streaming. ··· 497 489 } 498 490 499 491 /* 492 + * Find or create a refcount object for the given interface 493 + * 494 + * The objects are released altogether in snd_usb_endpoint_free_all() 495 + */ 496 + static struct snd_usb_iface_ref * 497 + iface_ref_find(struct snd_usb_audio *chip, int iface) 498 + { 499 + struct snd_usb_iface_ref *ip; 500 + 501 + list_for_each_entry(ip, &chip->iface_ref_list, list) 502 + if (ip->iface == iface) 503 + return ip; 504 + 505 + ip = kzalloc(sizeof(*ip), GFP_KERNEL); 506 + if (!ip) 507 + return NULL; 508 + ip->iface = iface; 509 + list_add_tail(&ip->list, &chip->iface_ref_list); 510 + return ip; 511 + } 512 + 513 + /* 500 514 * Get the existing endpoint object corresponding EP 501 515 * Returns NULL if not present. 502 516 */ ··· 550 520 * 551 521 * Returns zero on success or a negative error code. 552 522 * 553 - * New endpoints will be added to chip->ep_list and must be freed by 554 - * calling snd_usb_endpoint_free(). 523 + * New endpoints will be added to chip->ep_list and freed by 524 + * calling snd_usb_endpoint_free_all(). 555 525 * 556 526 * For SND_USB_ENDPOINT_TYPE_SYNC, the caller needs to guarantee that 557 527 * bNumEndpoints > 1 beforehand. 
··· 683 653 } else { 684 654 ep->iface = fp->iface; 685 655 ep->altsetting = fp->altsetting; 686 - ep->ep_idx = 0; 656 + ep->ep_idx = fp->ep_idx; 687 657 } 688 658 usb_audio_dbg(chip, "Open EP 0x%x, iface=%d:%d, idx=%d\n", 689 659 ep_num, ep->iface, ep->altsetting, ep->ep_idx); 660 + 661 + ep->iface_ref = iface_ref_find(chip, ep->iface); 662 + if (!ep->iface_ref) { 663 + ep = NULL; 664 + goto unlock; 665 + } 690 666 691 667 ep->cur_audiofmt = fp; 692 668 ep->cur_channels = fp->channels; ··· 717 681 ep->implicit_fb_sync); 718 682 719 683 } else { 684 + if (WARN_ON(!ep->iface_ref)) { 685 + ep = NULL; 686 + goto unlock; 687 + } 688 + 720 689 if (!endpoint_compatible(ep, fp, params)) { 721 690 usb_audio_err(chip, "Incompatible EP setup for 0x%x\n", 722 691 ep_num); ··· 732 691 usb_audio_dbg(chip, "Reopened EP 0x%x (count %d)\n", 733 692 ep_num, ep->opened); 734 693 } 694 + 695 + if (!ep->iface_ref->opened++) 696 + ep->iface_ref->need_setup = true; 735 697 736 698 ep->opened++; 737 699 ··· 804 760 mutex_lock(&chip->mutex); 805 761 usb_audio_dbg(chip, "Closing EP 0x%x (count %d)\n", 806 762 ep->ep_num, ep->opened); 807 - if (!--ep->opened) { 763 + 764 + if (!--ep->iface_ref->opened) 808 765 endpoint_set_interface(chip, ep, false); 766 + 767 + if (!--ep->opened) { 809 768 ep->iface = 0; 810 769 ep->altsetting = 0; 811 770 ep->cur_audiofmt = NULL; 812 771 ep->cur_rate = 0; 772 + ep->iface_ref = NULL; 813 773 usb_audio_dbg(chip, "EP 0x%x closed\n", ep->ep_num); 814 774 } 815 775 mutex_unlock(&chip->mutex); ··· 823 775 void snd_usb_endpoint_suspend(struct snd_usb_endpoint *ep) 824 776 { 825 777 ep->need_setup = true; 778 + if (ep->iface_ref) 779 + ep->iface_ref->need_setup = true; 826 780 } 827 781 828 782 /* ··· 1245 1195 int err = 0; 1246 1196 1247 1197 mutex_lock(&chip->mutex); 1198 + if (WARN_ON(!ep->iface_ref)) 1199 + goto unlock; 1248 1200 if (!ep->need_setup) 1249 1201 goto unlock; 1250 1202 1251 - /* No need to (re-)configure the sync EP belonging to the same altset */ 1252 - if (ep->ep_idx) { 1203 + /* If the interface has been already set up, just set EP parameters */ 1204 + if (!ep->iface_ref->need_setup) { 1253 1205 err = snd_usb_endpoint_set_params(chip, ep); 1254 1206 if (err < 0) 1255 1207 goto unlock; ··· 1293 1241 if (err < 0) 1294 1242 goto unlock; 1295 1243 } 1244 + 1245 + ep->iface_ref->need_setup = false; 1296 1246 1297 1247 done: 1298 1248 ep->need_setup = false; ··· 1441 1387 } 1442 1388 1443 1389 /** 1444 - * snd_usb_endpoint_free: Free the resources of an snd_usb_endpoint 1390 + * snd_usb_endpoint_free_all: Free the resources of an snd_usb_endpoint 1391 + * @card: The chip 1445 1392 * 1446 - * @ep: the endpoint to free 1447 - * 1448 - * This free all resources of the given ep. 1393 + * This free all endpoints and those resources 1449 1394 */ 1450 - void snd_usb_endpoint_free(struct snd_usb_endpoint *ep) 1395 + void snd_usb_endpoint_free_all(struct snd_usb_audio *chip) 1451 1396 { 1452 - kfree(ep); 1397 + struct snd_usb_endpoint *ep, *en; 1398 + struct snd_usb_iface_ref *ip, *in; 1399 + 1400 + list_for_each_entry_safe(ep, en, &chip->ep_list, list) 1401 + kfree(ep); 1402 + 1403 + list_for_each_entry_safe(ip, in, &chip->iface_ref_list, list) 1404 + kfree(ip); 1453 1405 } 1454 1406 1455 1407 /*
+1 -1
sound/usb/endpoint.h
··· 42 42 void snd_usb_endpoint_suspend(struct snd_usb_endpoint *ep); 43 43 int snd_usb_endpoint_activate(struct snd_usb_endpoint *ep); 44 44 void snd_usb_endpoint_release(struct snd_usb_endpoint *ep); 45 - void snd_usb_endpoint_free(struct snd_usb_endpoint *ep); 45 + void snd_usb_endpoint_free_all(struct snd_usb_audio *chip); 46 46 47 47 int snd_usb_endpoint_implicit_feedback_sink(struct snd_usb_endpoint *ep); 48 48 int snd_usb_endpoint_next_packet_size(struct snd_usb_endpoint *ep,
+42 -15
sound/usb/implicit.c
··· 58 58 IMPLICIT_FB_FIXED_DEV(0x0499, 0x172f, 0x81, 2), /* Steinberg UR22C */ 59 59 IMPLICIT_FB_FIXED_DEV(0x0d9a, 0x00df, 0x81, 2), /* RTX6001 */ 60 60 IMPLICIT_FB_FIXED_DEV(0x22f0, 0x0006, 0x81, 3), /* Allen&Heath Qu-16 */ 61 - IMPLICIT_FB_FIXED_DEV(0x2b73, 0x000a, 0x82, 0), /* Pioneer DJ DJM-900NXS2 */ 62 - IMPLICIT_FB_FIXED_DEV(0x2b73, 0x0017, 0x82, 0), /* Pioneer DJ DJM-250MK2 */ 63 61 IMPLICIT_FB_FIXED_DEV(0x1686, 0xf029, 0x82, 2), /* Zoom UAC-2 */ 64 62 IMPLICIT_FB_FIXED_DEV(0x2466, 0x8003, 0x86, 2), /* Fractal Audio Axe-Fx II */ 65 63 IMPLICIT_FB_FIXED_DEV(0x0499, 0x172a, 0x86, 2), /* Yamaha MODX */ ··· 98 100 /* set up sync EP information on the audioformat */ 99 101 static int add_implicit_fb_sync_ep(struct snd_usb_audio *chip, 100 102 struct audioformat *fmt, 101 - int ep, int ifnum, 103 + int ep, int ep_idx, int ifnum, 102 104 const struct usb_host_interface *alts) 103 105 { 104 106 struct usb_interface *iface; ··· 113 115 fmt->sync_ep = ep; 114 116 fmt->sync_iface = ifnum; 115 117 fmt->sync_altsetting = alts->desc.bAlternateSetting; 116 - fmt->sync_ep_idx = 0; 118 + fmt->sync_ep_idx = ep_idx; 117 119 fmt->implicit_fb = 1; 118 120 usb_audio_dbg(chip, 119 121 "%d:%d: added %s implicit_fb sync_ep %x, iface %d:%d\n", ··· 145 147 (epd->bmAttributes & USB_ENDPOINT_USAGE_MASK) != 146 148 USB_ENDPOINT_USAGE_IMPLICIT_FB) 147 149 return 0; 148 - return add_implicit_fb_sync_ep(chip, fmt, epd->bEndpointAddress, 150 + return add_implicit_fb_sync_ep(chip, fmt, epd->bEndpointAddress, 0, 149 151 ifnum, alts); 150 152 } 151 153 ··· 171 173 (epd->bmAttributes & USB_ENDPOINT_USAGE_MASK) != 172 174 USB_ENDPOINT_USAGE_IMPLICIT_FB) 173 175 return 0; 174 - return add_implicit_fb_sync_ep(chip, fmt, epd->bEndpointAddress, 176 + return add_implicit_fb_sync_ep(chip, fmt, epd->bEndpointAddress, 0, 175 177 ifnum, alts); 176 178 } 177 179 180 + /* Pioneer devices: playback and capture streams sharing the same iface/altset 181 + */ 182 + static int add_pioneer_implicit_fb(struct snd_usb_audio *chip, 183 + struct audioformat *fmt, 184 + struct usb_host_interface *alts) 185 + { 186 + struct usb_endpoint_descriptor *epd; 187 + 188 + if (alts->desc.bNumEndpoints != 2) 189 + return 0; 190 + 191 + epd = get_endpoint(alts, 1); 192 + if (!usb_endpoint_is_isoc_in(epd) || 193 + (epd->bmAttributes & USB_ENDPOINT_SYNCTYPE) != USB_ENDPOINT_SYNC_ASYNC || 194 + ((epd->bmAttributes & USB_ENDPOINT_USAGE_MASK) != 195 + USB_ENDPOINT_USAGE_DATA && 196 + (epd->bmAttributes & USB_ENDPOINT_USAGE_MASK) != 197 + USB_ENDPOINT_USAGE_IMPLICIT_FB)) 198 + return 0; 199 + return add_implicit_fb_sync_ep(chip, fmt, epd->bEndpointAddress, 1, 200 + alts->desc.bInterfaceNumber, alts); 201 + } 178 202 179 203 static int __add_generic_implicit_fb(struct snd_usb_audio *chip, 180 204 struct audioformat *fmt, ··· 217 197 if (!usb_endpoint_is_isoc_in(epd) || 218 198 (epd->bmAttributes & USB_ENDPOINT_SYNCTYPE) != USB_ENDPOINT_SYNC_ASYNC) 219 199 return 0; 220 - return add_implicit_fb_sync_ep(chip, fmt, epd->bEndpointAddress, 200 + return add_implicit_fb_sync_ep(chip, fmt, epd->bEndpointAddress, 0, 221 201 iface, alts); 222 202 } 223 203 ··· 270 250 case IMPLICIT_FB_NONE: 271 251 return 0; /* No quirk */ 272 252 case IMPLICIT_FB_FIXED: 273 - return add_implicit_fb_sync_ep(chip, fmt, p->ep_num, 253 + return add_implicit_fb_sync_ep(chip, fmt, p->ep_num, 0, 274 254 p->iface, NULL); 275 255 } 276 256 } ··· 298 278 return 1; 299 279 } 300 280 281 + /* Pioneer devices implicit feedback with vendor spec class */ 282 + if (attr == USB_ENDPOINT_SYNC_ASYNC && 283 + alts->desc.bInterfaceClass == USB_CLASS_VENDOR_SPEC && 284 + USB_ID_VENDOR(chip->usb_id) == 0x2b73 /* Pioneer */) { 285 + if (add_pioneer_implicit_fb(chip, fmt, alts)) 286 + return 1; 287 + } 288 + 301 289 /* Try the generic implicit fb if available */ 302 290 if (chip->generic_implicit_fb) 303 291 return add_generic_implicit_fb(chip, fmt, alts); ··· 323 295 324 296 p = find_implicit_fb_entry(chip, capture_implicit_fb_quirks, alts); 325 297 if (p && p->type == IMPLICIT_FB_FIXED) 326 - return add_implicit_fb_sync_ep(chip, fmt, p->ep_num, p->iface, 327 - NULL); 298 + return add_implicit_fb_sync_ep(chip, fmt, p->ep_num, 0, 299 + p->iface, NULL); 328 300 return 0; 329 301 } 330 302 ··· 406 378 int stream) 407 379 { 408 380 struct snd_usb_substream *subs; 409 - const struct audioformat *fp, *sync_fmt; 381 + const struct audioformat *fp, *sync_fmt = NULL; 410 382 int score, high_score; 411 383 412 - /* When sharing the same altset, use the original audioformat */ 384 + /* Use the original audioformat as fallback for the shared altset */ 413 385 if (target->iface == target->sync_iface && 414 386 target->altsetting == target->sync_altsetting) 415 - return target; 387 + sync_fmt = target; 416 388 417 389 subs = find_matching_substream(chip, stream, target->sync_ep, 418 390 target->fmt_type); 419 391 if (!subs) 420 - return NULL; 392 + return sync_fmt; 421 393 422 - sync_fmt = NULL; 423 394 high_score = 0; 424 395 list_for_each_entry(fp, &subs->fmt_list, list) { 425 396 score = match_endpoint_audioformats(subs, fp,
+6
sound/usb/quirks-table.h
··· 3362 3362 .altsetting = 1, 3363 3363 .altset_idx = 1, 3364 3364 .endpoint = 0x86, 3365 + .ep_idx = 1, 3365 3366 .ep_attr = USB_ENDPOINT_XFER_ISOC| 3366 3367 USB_ENDPOINT_SYNC_ASYNC| 3367 3368 USB_ENDPOINT_USAGE_IMPLICIT_FB, ··· 3451 3450 .altsetting = 1, 3452 3451 .altset_idx = 1, 3453 3452 .endpoint = 0x82, 3453 + .ep_idx = 1, 3454 3454 .ep_attr = USB_ENDPOINT_XFER_ISOC| 3455 3455 USB_ENDPOINT_SYNC_ASYNC| 3456 3456 USB_ENDPOINT_USAGE_IMPLICIT_FB, ··· 3508 3506 .altsetting = 1, 3509 3507 .altset_idx = 1, 3510 3508 .endpoint = 0x82, 3509 + .ep_idx = 1, 3511 3510 .ep_attr = USB_ENDPOINT_XFER_ISOC| 3512 3511 USB_ENDPOINT_SYNC_ASYNC| 3513 3512 USB_ENDPOINT_USAGE_IMPLICIT_FB, ··· 3565 3562 .altsetting = 1, 3566 3563 .altset_idx = 1, 3567 3564 .endpoint = 0x82, 3565 + .ep_idx = 1, 3568 3566 .ep_attr = USB_ENDPOINT_XFER_ISOC| 3569 3567 USB_ENDPOINT_SYNC_ASYNC| 3570 3568 USB_ENDPOINT_USAGE_IMPLICIT_FB, ··· 3623 3619 .altsetting = 1, 3624 3620 .altset_idx = 1, 3625 3621 .endpoint = 0x82, 3622 + .ep_idx = 1, 3626 3623 .ep_attr = USB_ENDPOINT_XFER_ISOC| 3627 3624 USB_ENDPOINT_SYNC_ASYNC| 3628 3625 USB_ENDPOINT_USAGE_IMPLICIT_FB, ··· 3684 3679 .altsetting = 1, 3685 3680 .altset_idx = 1, 3686 3681 .endpoint = 0x82, 3682 + .ep_idx = 1, 3687 3683 .ep_attr = USB_ENDPOINT_XFER_ISOC| 3688 3684 USB_ENDPOINT_SYNC_ASYNC| 3689 3685 USB_ENDPOINT_USAGE_IMPLICIT_FB,
+46 -12
sound/usb/quirks.c
··· 120 120 return 0; 121 121 } 122 122 123 + /* create the audio stream and the corresponding endpoints from the fixed 124 + * audioformat object; this is used for quirks with the fixed EPs 125 + */ 126 + static int add_audio_stream_from_fixed_fmt(struct snd_usb_audio *chip, 127 + struct audioformat *fp) 128 + { 129 + int stream, err; 130 + 131 + stream = (fp->endpoint & USB_DIR_IN) ? 132 + SNDRV_PCM_STREAM_CAPTURE : SNDRV_PCM_STREAM_PLAYBACK; 133 + 134 + snd_usb_audioformat_set_sync_ep(chip, fp); 135 + 136 + err = snd_usb_add_audio_stream(chip, stream, fp); 137 + if (err < 0) 138 + return err; 139 + 140 + err = snd_usb_add_endpoint(chip, fp->endpoint, 141 + SND_USB_ENDPOINT_TYPE_DATA); 142 + if (err < 0) 143 + return err; 144 + 145 + if (fp->sync_ep) { 146 + err = snd_usb_add_endpoint(chip, fp->sync_ep, 147 + fp->implicit_fb ? 148 + SND_USB_ENDPOINT_TYPE_DATA : 149 + SND_USB_ENDPOINT_TYPE_SYNC); 150 + if (err < 0) 151 + return err; 152 + } 153 + 154 + return 0; 155 + } 156 + 123 157 /* 124 158 * create a stream for an endpoint/altsetting without proper descriptors 125 159 */ ··· 165 131 struct audioformat *fp; 166 132 struct usb_host_interface *alts; 167 133 struct usb_interface_descriptor *altsd; 168 - int stream, err; 169 134 unsigned *rate_table = NULL; 135 + int err; 170 136 171 137 fp = kmemdup(quirk->data, sizeof(*fp), GFP_KERNEL); 172 138 if (!fp) ··· 187 153 fp->rate_table = rate_table; 188 154 } 189 155 190 - stream = (fp->endpoint & USB_DIR_IN) 191 - ? 
SNDRV_PCM_STREAM_CAPTURE : SNDRV_PCM_STREAM_PLAYBACK; 192 - err = snd_usb_add_audio_stream(chip, stream, fp); 193 - if (err < 0) 194 - goto error; 195 156 if (fp->iface != get_iface_desc(&iface->altsetting[0])->bInterfaceNumber || 196 157 fp->altset_idx >= iface->num_altsetting) { 197 158 err = -EINVAL; ··· 194 165 } 195 166 alts = &iface->altsetting[fp->altset_idx]; 196 167 altsd = get_iface_desc(alts); 197 - if (altsd->bNumEndpoints < 1) { 168 + if (altsd->bNumEndpoints <= fp->ep_idx) { 198 169 err = -EINVAL; 199 170 goto error; 200 171 } ··· 204 175 if (fp->datainterval == 0) 205 176 fp->datainterval = snd_usb_parse_datainterval(chip, alts); 206 177 if (fp->maxpacksize == 0) 207 - fp->maxpacksize = le16_to_cpu(get_endpoint(alts, 0)->wMaxPacketSize); 178 + fp->maxpacksize = le16_to_cpu(get_endpoint(alts, fp->ep_idx)->wMaxPacketSize); 179 + if (!fp->fmt_type) 180 + fp->fmt_type = UAC_FORMAT_TYPE_I; 181 + 182 + err = add_audio_stream_from_fixed_fmt(chip, fp); 183 + if (err < 0) 184 + goto error; 185 + 208 186 usb_set_interface(chip->dev, fp->iface, 0); 209 187 snd_usb_init_pitch(chip, fp); 210 188 snd_usb_init_sample_rate(chip, fp, fp->rate_max); ··· 453 417 struct usb_host_interface *alts; 454 418 struct usb_interface_descriptor *altsd; 455 419 struct audioformat *fp; 456 - int stream, err; 420 + int err; 457 421 458 422 /* both PCM and MIDI interfaces have 2 or more altsettings */ 459 423 if (iface->num_altsetting < 2) ··· 518 482 return -ENXIO; 519 483 } 520 484 521 - stream = (fp->endpoint & USB_DIR_IN) 522 - ? SNDRV_PCM_STREAM_CAPTURE : SNDRV_PCM_STREAM_PLAYBACK; 523 - err = snd_usb_add_audio_stream(chip, stream, fp); 485 + err = add_audio_stream_from_fixed_fmt(chip, fp); 524 486 if (err < 0) { 525 487 list_del(&fp->list); /* unlink for avoiding double-free */ 526 488 kfree(fp);
+1
sound/usb/usbaudio.h
··· 44 44 45 45 struct list_head pcm_list; /* list of pcm streams */ 46 46 struct list_head ep_list; /* list of audio-related endpoints */ 47 + struct list_head iface_ref_list; /* list of interface refcounts */ 47 48 int pcm_devs; 48 49 49 50 struct list_head midi_list; /* list of midi interfaces */
+1
tools/bootconfig/scripts/bconf2ftrace.sh
··· 152 152 set_array_of ${instance}.options ${instancedir}/trace_options 153 153 set_value_of ${instance}.trace_clock ${instancedir}/trace_clock 154 154 set_value_of ${instance}.cpumask ${instancedir}/tracing_cpumask 155 + set_value_of ${instance}.tracing_on ${instancedir}/tracing_on 155 156 set_value_of ${instance}.tracer ${instancedir}/current_tracer 156 157 set_array_of ${instance}.ftrace.filters \ 157 158 ${instancedir}/set_ftrace_filter
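With `${instance}.tracing_on` now wired to `${instancedir}/tracing_on`, a bootconfig fragment along these lines (the instance name `foo` is illustrative) can switch tracing off for one instance at boot; bconf2ftrace.sh translates each key into a write to the matching tracefs file:

```
ftrace.instance.foo.tracer = function
ftrace.instance.foo.tracing_on = 0
```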
+4
tools/bootconfig/scripts/ftrace2bconf.sh
··· 221 221 if [ `echo $val | sed -e s/f//g`x != x ]; then 222 222 emit_kv $PREFIX.cpumask = $val 223 223 fi 224 + val=`cat $INSTANCE/tracing_on` 225 + if [ `echo $val | sed -e s/f//g`x != x ]; then 226 + emit_kv $PREFIX.tracing_on = $val 227 + fi 224 228 225 229 val= 226 230 for i in `cat $INSTANCE/set_event`; do
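The guard added above reuses the script's existing idiom: strip every `f` from the value and emit the key only if something is left, so an empty value or an all-`f` default (such as a full cpumask) is skipped. A standalone sketch of that test, with made-up key names:

```shell
# Emit "key = value" only when the value is non-empty and not all 'f's,
# mirroring the `sed -e s/f//g` guard in ftrace2bconf.sh above.
maybe_emit() { # key value
	if [ "`echo $2 | sed -e s/f//g`"x != x ]; then
		echo "$1 = $2"
	fi
}

maybe_emit kernel.cpumask ffffffff	# all 'f': suppressed
maybe_emit kernel.tracing_on 1		# emitted
```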
-5
tools/include/linux/build_bug.h
··· 79 79 #define __static_assert(expr, msg, ...) _Static_assert(expr, msg) 80 80 #endif // static_assert 81 81 82 - #ifdef __GENKSYMS__ 83 - /* genksyms gets confused by _Static_assert */ 84 - #define _Static_assert(expr, ...) 85 - #endif 86 - 87 82 #endif /* _LINUX_BUILD_BUG_H */
+2
tools/include/uapi/linux/kvm.h
··· 251 251 #define KVM_EXIT_X86_RDMSR 29 252 252 #define KVM_EXIT_X86_WRMSR 30 253 253 #define KVM_EXIT_DIRTY_RING_FULL 31 254 + #define KVM_EXIT_AP_RESET_HOLD 32 254 255 255 256 /* For KVM_EXIT_INTERNAL_ERROR */ 256 257 /* Emulate instruction failed. */ ··· 574 573 #define KVM_MP_STATE_CHECK_STOP 6 575 574 #define KVM_MP_STATE_OPERATING 7 576 575 #define KVM_MP_STATE_LOAD 8 576 + #define KVM_MP_STATE_AP_RESET_HOLD 9 577 577 578 578 struct kvm_mp_state { 579 579 __u32 mp_state;
+1 -1
tools/lib/perf/tests/test-cpumap.c
··· 27 27 perf_cpu_map__put(cpus); 28 28 29 29 __T_END; 30 - return 0; 30 + return tests_failed == 0 ? 0 : -1; 31 31 }
+4 -3
tools/lib/perf/tests/test-evlist.c
··· 208 208 char path[PATH_MAX]; 209 209 int id, err, pid, go_pipe[2]; 210 210 union perf_event *event; 211 - char bf; 212 211 int count = 0; 213 212 214 213 snprintf(path, PATH_MAX, "%s/kernel/debug/tracing/events/syscalls/sys_enter_prctl/id", 215 214 sysfs__mountpoint()); 216 215 217 216 if (filename__read_int(path, &id)) { 217 + tests_failed++; 218 218 fprintf(stderr, "error: failed to get tracepoint id: %s\n", path); 219 219 return -1; 220 220 } ··· 229 229 pid = fork(); 230 230 if (!pid) { 231 231 int i; 232 + char bf; 232 233 233 234 read(go_pipe[0], &bf, 1); 234 235 ··· 267 266 perf_evlist__enable(evlist); 268 267 269 268 /* kick the child and wait for it to finish */ 270 - write(go_pipe[1], &bf, 1); 269 + write(go_pipe[1], "A", 1); 271 270 waitpid(pid, NULL, 0); 272 271 273 272 /* ··· 410 409 test_mmap_cpus(); 411 410 412 411 __T_END; 413 - return 0; 412 + return tests_failed == 0 ? 0 : -1; 414 413 }
+1 -1
tools/lib/perf/tests/test-evsel.c
··· 131 131 test_stat_thread_enable(); 132 132 133 133 __T_END; 134 - return 0; 134 + return tests_failed == 0 ? 0 : -1; 135 135 }
+1 -1
tools/lib/perf/tests/test-threadmap.c
··· 27 27 perf_thread_map__put(threads); 28 28 29 29 __T_END; 30 - return 0; 30 + return tests_failed == 0 ? 0 : -1; 31 31 }
+1 -1
tools/perf/examples/bpf/5sec.c
··· 39 39 Copyright (C) 2018 Red Hat, Inc., Arnaldo Carvalho de Melo <acme@redhat.com> 40 40 */ 41 41 42 - #include <bpf/bpf.h> 42 + #include <bpf.h> 43 43 44 44 #define NSEC_PER_SEC 1000000000L 45 45
+14 -16
tools/perf/tests/shell/stat+shadow_stat.sh
··· 9 9 10 10 test_global_aggr() 11 11 { 12 - local cyc 13 - 14 12 perf stat -a --no-big-num -e cycles,instructions sleep 1 2>&1 | \ 15 13 grep -e cycles -e instructions | \ 16 14 while read num evt hash ipc rest 17 15 do 18 16 # skip not counted events 19 - if [[ $num == "<not" ]]; then 17 + if [ "$num" = "<not" ]; then 20 18 continue 21 19 fi 22 20 23 21 # save cycles count 24 - if [[ $evt == "cycles" ]]; then 22 + if [ "$evt" = "cycles" ]; then 25 23 cyc=$num 26 24 continue 27 25 fi 28 26 29 27 # skip if no cycles 30 - if [[ -z $cyc ]]; then 28 + if [ -z "$cyc" ]; then 31 29 continue 32 30 fi 33 31 34 32 # use printf for rounding and a leading zero 35 - local res=`printf "%.2f" $(echo "scale=6; $num / $cyc" | bc -q)` 36 - if [[ $ipc != $res ]]; then 33 + res=`printf "%.2f" $(echo "scale=6; $num / $cyc" | bc -q)` 34 + if [ "$ipc" != "$res" ]; then 37 35 echo "IPC is different: $res != $ipc ($num / $cyc)" 38 36 exit 1 39 37 fi ··· 40 42 41 43 test_no_aggr() 42 44 { 43 - declare -A results 44 - 45 45 perf stat -a -A --no-big-num -e cycles,instructions sleep 1 2>&1 | \ 46 46 grep ^CPU | \ 47 47 while read cpu num evt hash ipc rest 48 48 do 49 49 # skip not counted events 50 - if [[ $num == "<not" ]]; then 50 + if [ "$num" = "<not" ]; then 51 51 continue 52 52 fi 53 53 54 54 # save cycles count 55 - if [[ $evt == "cycles" ]]; then 56 - results[$cpu]=$num 55 + if [ "$evt" = "cycles" ]; then 56 + results="$results $cpu:$num" 57 57 continue 58 58 fi 59 59 60 + cyc=${results##* $cpu:} 61 + cyc=${cyc%% *} 62 + 60 63 # skip if no cycles 61 - local cyc=${results[$cpu]} 62 - if [[ -z $cyc ]]; then 64 + if [ -z "$cyc" ]; then 63 65 continue 64 66 fi 65 67 66 68 # use printf for rounding and a leading zero 67 - local res=`printf "%.2f" $(echo "scale=6; $num / $cyc" | bc -q)` 68 - if [[ $ipc != $res ]]; then 69 + res=`printf "%.2f" $(echo "scale=6; $num / $cyc" | bc -q)` 70 + if [ "$ipc" != "$res" ]; then 69 71 echo "IPC is different for $cpu: $res != $ipc ($num / $cyc)" 70 72 
exit 1 71 73 fi
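The capture loop above replaces a bash associative array (`results[$cpu]=$num`) with a flat "key:value" string plus parameter expansion, which any POSIX sh can run. A minimal sketch of that lookup, with made-up CPU names and counts:

```shell
# POSIX-sh stand-in for an associative array: append "key:value"
# pairs to one string, then pull a value back out with ## / %%
# parameter expansion, as the patched script does.
results=""
results="$results CPU0:1200"
results="$results CPU1:3400"

# look up CPU1's count
cyc=${results##* CPU1:}   # drop everything up to and including " CPU1:"
cyc=${cyc%% *}            # drop any trailing pairs
echo "$cyc"               # prints "3400"
```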
+8
tools/perf/util/header.c
··· 3323 3323 attr_offset = lseek(ff.fd, 0, SEEK_CUR); 3324 3324 3325 3325 evlist__for_each_entry(evlist, evsel) { 3326 + if (evsel->core.attr.size < sizeof(evsel->core.attr)) { 3327 + /* 3328 + * We are likely in "perf inject" and have read 3329 + * from an older file. Update attr size so that 3330 + * reader gets the right offset to the ids. 3331 + */ 3332 + evsel->core.attr.size = sizeof(evsel->core.attr); 3333 + } 3326 3334 f_attr = (struct perf_file_attr){ 3327 3335 .attr = evsel->core.attr, 3328 3336 .ids = {
+2 -2
tools/perf/util/machine.c
··· 2980 2980 2981 2981 pid_t machine__get_current_tid(struct machine *machine, int cpu) 2982 2982 { 2983 - int nr_cpus = min(machine->env->nr_cpus_online, MAX_NR_CPUS); 2983 + int nr_cpus = min(machine->env->nr_cpus_avail, MAX_NR_CPUS); 2984 2984 2985 2985 if (cpu < 0 || cpu >= nr_cpus || !machine->current_tid) 2986 2986 return -1; ··· 2992 2992 pid_t tid) 2993 2993 { 2994 2994 struct thread *thread; 2995 - int nr_cpus = min(machine->env->nr_cpus_online, MAX_NR_CPUS); 2995 + int nr_cpus = min(machine->env->nr_cpus_avail, MAX_NR_CPUS); 2996 2996 2997 2997 if (cpu < 0) 2998 2998 return -EINVAL;
+1 -1
tools/perf/util/session.c
··· 2404 2404 { 2405 2405 int i, err = -1; 2406 2406 struct perf_cpu_map *map; 2407 - int nr_cpus = min(session->header.env.nr_cpus_online, MAX_NR_CPUS); 2407 + int nr_cpus = min(session->header.env.nr_cpus_avail, MAX_NR_CPUS); 2408 2408 2409 2409 for (i = 0; i < PERF_TYPE_MAX; ++i) { 2410 2410 struct evsel *evsel;
+189 -177
tools/perf/util/stat-shadow.c
··· 8 8 #include "evlist.h" 9 9 #include "expr.h" 10 10 #include "metricgroup.h" 11 + #include "cgroup.h" 11 12 #include <linux/zalloc.h> 12 13 13 14 /* ··· 29 28 enum stat_type type; 30 29 int ctx; 31 30 int cpu; 31 + struct cgroup *cgrp; 32 32 struct runtime_stat *stat; 33 33 struct stats stats; 34 34 u64 metric_total; ··· 58 56 59 57 if (a->ctx != b->ctx) 60 58 return a->ctx - b->ctx; 59 + 60 + if (a->cgrp != b->cgrp) 61 + return (char *)a->cgrp < (char *)b->cgrp ? -1 : +1; 61 62 62 63 if (a->evsel == NULL && b->evsel == NULL) { 63 64 if (a->stat == b->stat) ··· 105 100 bool create, 106 101 enum stat_type type, 107 102 int ctx, 108 - struct runtime_stat *st) 103 + struct runtime_stat *st, 104 + struct cgroup *cgrp) 109 105 { 110 106 struct rblist *rblist; 111 107 struct rb_node *nd; ··· 116 110 .type = type, 117 111 .ctx = ctx, 118 112 .stat = st, 113 + .cgrp = cgrp, 119 114 }; 120 115 121 116 rblist = &st->value_list; 117 + 118 + /* don't use context info for clock events */ 119 + if (type == STAT_NSECS) 120 + dm.ctx = 0; 122 121 123 122 nd = rblist__find(rblist, &dm); 124 123 if (nd) ··· 202 191 reset_stat(st); 203 192 } 204 193 194 + struct runtime_stat_data { 195 + int ctx; 196 + struct cgroup *cgrp; 197 + }; 198 + 205 199 static void update_runtime_stat(struct runtime_stat *st, 206 200 enum stat_type type, 207 - int ctx, int cpu, u64 count) 201 + int cpu, u64 count, 202 + struct runtime_stat_data *rsd) 208 203 { 209 - struct saved_value *v = saved_value_lookup(NULL, cpu, true, 210 - type, ctx, st); 204 + struct saved_value *v = saved_value_lookup(NULL, cpu, true, type, 205 + rsd->ctx, st, rsd->cgrp); 211 206 212 207 if (v) 213 208 update_stats(&v->stats, count); ··· 227 210 void perf_stat__update_shadow_stats(struct evsel *counter, u64 count, 228 211 int cpu, struct runtime_stat *st) 229 212 { 230 - int ctx = evsel_context(counter); 231 213 u64 count_ns = count; 232 214 struct saved_value *v; 215 + struct runtime_stat_data rsd = { 216 + .ctx = 
evsel_context(counter), 217 + .cgrp = counter->cgrp, 218 + }; 233 219 234 220 count *= counter->scale; 235 221 236 222 if (evsel__is_clock(counter)) 237 - update_runtime_stat(st, STAT_NSECS, 0, cpu, count_ns); 223 + update_runtime_stat(st, STAT_NSECS, cpu, count_ns, &rsd); 238 224 else if (evsel__match(counter, HARDWARE, HW_CPU_CYCLES)) 239 - update_runtime_stat(st, STAT_CYCLES, ctx, cpu, count); 225 + update_runtime_stat(st, STAT_CYCLES, cpu, count, &rsd); 240 226 else if (perf_stat_evsel__is(counter, CYCLES_IN_TX)) 241 - update_runtime_stat(st, STAT_CYCLES_IN_TX, ctx, cpu, count); 227 + update_runtime_stat(st, STAT_CYCLES_IN_TX, cpu, count, &rsd); 242 228 else if (perf_stat_evsel__is(counter, TRANSACTION_START)) 243 - update_runtime_stat(st, STAT_TRANSACTION, ctx, cpu, count); 229 + update_runtime_stat(st, STAT_TRANSACTION, cpu, count, &rsd); 244 230 else if (perf_stat_evsel__is(counter, ELISION_START)) 245 - update_runtime_stat(st, STAT_ELISION, ctx, cpu, count); 231 + update_runtime_stat(st, STAT_ELISION, cpu, count, &rsd); 246 232 else if (perf_stat_evsel__is(counter, TOPDOWN_TOTAL_SLOTS)) 247 233 update_runtime_stat(st, STAT_TOPDOWN_TOTAL_SLOTS, 248 - ctx, cpu, count); 234 + cpu, count, &rsd); 249 235 else if (perf_stat_evsel__is(counter, TOPDOWN_SLOTS_ISSUED)) 250 236 update_runtime_stat(st, STAT_TOPDOWN_SLOTS_ISSUED, 251 - ctx, cpu, count); 237 + cpu, count, &rsd); 252 238 else if (perf_stat_evsel__is(counter, TOPDOWN_SLOTS_RETIRED)) 253 239 update_runtime_stat(st, STAT_TOPDOWN_SLOTS_RETIRED, 254 - ctx, cpu, count); 240 + cpu, count, &rsd); 255 241 else if (perf_stat_evsel__is(counter, TOPDOWN_FETCH_BUBBLES)) 256 242 update_runtime_stat(st, STAT_TOPDOWN_FETCH_BUBBLES, 257 - ctx, cpu, count); 243 + cpu, count, &rsd); 258 244 else if (perf_stat_evsel__is(counter, TOPDOWN_RECOVERY_BUBBLES)) 259 245 update_runtime_stat(st, STAT_TOPDOWN_RECOVERY_BUBBLES, 260 - ctx, cpu, count); 246 + cpu, count, &rsd); 261 247 else if (perf_stat_evsel__is(counter, 
TOPDOWN_RETIRING)) 262 248 update_runtime_stat(st, STAT_TOPDOWN_RETIRING, 263 - ctx, cpu, count); 249 + cpu, count, &rsd); 264 250 else if (perf_stat_evsel__is(counter, TOPDOWN_BAD_SPEC)) 265 251 update_runtime_stat(st, STAT_TOPDOWN_BAD_SPEC, 266 - ctx, cpu, count); 252 + cpu, count, &rsd); 267 253 else if (perf_stat_evsel__is(counter, TOPDOWN_FE_BOUND)) 268 254 update_runtime_stat(st, STAT_TOPDOWN_FE_BOUND, 269 - ctx, cpu, count); 255 + cpu, count, &rsd); 270 256 else if (perf_stat_evsel__is(counter, TOPDOWN_BE_BOUND)) 271 257 update_runtime_stat(st, STAT_TOPDOWN_BE_BOUND, 272 - ctx, cpu, count); 258 + cpu, count, &rsd); 273 259 else if (evsel__match(counter, HARDWARE, HW_STALLED_CYCLES_FRONTEND)) 274 260 update_runtime_stat(st, STAT_STALLED_CYCLES_FRONT, 275 - ctx, cpu, count); 261 + cpu, count, &rsd); 276 262 else if (evsel__match(counter, HARDWARE, HW_STALLED_CYCLES_BACKEND)) 277 263 update_runtime_stat(st, STAT_STALLED_CYCLES_BACK, 278 - ctx, cpu, count); 264 + cpu, count, &rsd); 279 265 else if (evsel__match(counter, HARDWARE, HW_BRANCH_INSTRUCTIONS)) 280 - update_runtime_stat(st, STAT_BRANCHES, ctx, cpu, count); 266 + update_runtime_stat(st, STAT_BRANCHES, cpu, count, &rsd); 281 267 else if (evsel__match(counter, HARDWARE, HW_CACHE_REFERENCES)) 282 - update_runtime_stat(st, STAT_CACHEREFS, ctx, cpu, count); 268 + update_runtime_stat(st, STAT_CACHEREFS, cpu, count, &rsd); 283 269 else if (evsel__match(counter, HW_CACHE, HW_CACHE_L1D)) 284 - update_runtime_stat(st, STAT_L1_DCACHE, ctx, cpu, count); 270 + update_runtime_stat(st, STAT_L1_DCACHE, cpu, count, &rsd); 285 271 else if (evsel__match(counter, HW_CACHE, HW_CACHE_L1I)) 286 - update_runtime_stat(st, STAT_L1_ICACHE, ctx, cpu, count); 272 + update_runtime_stat(st, STAT_L1_ICACHE, cpu, count, &rsd); 287 273 else if (evsel__match(counter, HW_CACHE, HW_CACHE_LL)) 288 - update_runtime_stat(st, STAT_LL_CACHE, ctx, cpu, count); 274 + update_runtime_stat(st, STAT_LL_CACHE, cpu, count, &rsd); 289 275 else if 
(evsel__match(counter, HW_CACHE, HW_CACHE_DTLB)) 290 - update_runtime_stat(st, STAT_DTLB_CACHE, ctx, cpu, count); 276 + update_runtime_stat(st, STAT_DTLB_CACHE, cpu, count, &rsd); 291 277 else if (evsel__match(counter, HW_CACHE, HW_CACHE_ITLB)) 292 - update_runtime_stat(st, STAT_ITLB_CACHE, ctx, cpu, count); 278 + update_runtime_stat(st, STAT_ITLB_CACHE, cpu, count, &rsd); 293 279 else if (perf_stat_evsel__is(counter, SMI_NUM)) 294 - update_runtime_stat(st, STAT_SMI_NUM, ctx, cpu, count); 280 + update_runtime_stat(st, STAT_SMI_NUM, cpu, count, &rsd); 295 281 else if (perf_stat_evsel__is(counter, APERF)) 296 - update_runtime_stat(st, STAT_APERF, ctx, cpu, count); 282 + update_runtime_stat(st, STAT_APERF, cpu, count, &rsd); 297 283 298 284 if (counter->collect_stat) { 299 - v = saved_value_lookup(counter, cpu, true, STAT_NONE, 0, st); 285 + v = saved_value_lookup(counter, cpu, true, STAT_NONE, 0, st, 286 + rsd.cgrp); 300 287 update_stats(&v->stats, count); 301 288 if (counter->metric_leader) 302 289 v->metric_total += count; 303 290 } else if (counter->metric_leader) { 304 291 v = saved_value_lookup(counter->metric_leader, 305 - cpu, true, STAT_NONE, 0, st); 292 + cpu, true, STAT_NONE, 0, st, rsd.cgrp); 306 293 v->metric_total += count; 307 294 v->metric_other++; 308 295 } ··· 443 422 } 444 423 445 424 static double runtime_stat_avg(struct runtime_stat *st, 446 - enum stat_type type, int ctx, int cpu) 425 + enum stat_type type, int cpu, 426 + struct runtime_stat_data *rsd) 447 427 { 448 428 struct saved_value *v; 449 429 450 - v = saved_value_lookup(NULL, cpu, false, type, ctx, st); 430 + v = saved_value_lookup(NULL, cpu, false, type, rsd->ctx, st, rsd->cgrp); 451 431 if (!v) 452 432 return 0.0; 453 433 ··· 456 434 } 457 435 458 436 static double runtime_stat_n(struct runtime_stat *st, 459 - enum stat_type type, int ctx, int cpu) 437 + enum stat_type type, int cpu, 438 + struct runtime_stat_data *rsd) 460 439 { 461 440 struct saved_value *v; 462 441 463 - v = 
saved_value_lookup(NULL, cpu, false, type, ctx, st); 442 + v = saved_value_lookup(NULL, cpu, false, type, rsd->ctx, st, rsd->cgrp); 464 443 if (!v) 465 444 return 0.0; 466 445 ··· 469 446 } 470 447 471 448 static void print_stalled_cycles_frontend(struct perf_stat_config *config, 472 - int cpu, 473 - struct evsel *evsel, double avg, 449 + int cpu, double avg, 474 450 struct perf_stat_output_ctx *out, 475 - struct runtime_stat *st) 451 + struct runtime_stat *st, 452 + struct runtime_stat_data *rsd) 476 453 { 477 454 double total, ratio = 0.0; 478 455 const char *color; 479 - int ctx = evsel_context(evsel); 480 456 481 - total = runtime_stat_avg(st, STAT_CYCLES, ctx, cpu); 457 + total = runtime_stat_avg(st, STAT_CYCLES, cpu, rsd); 482 458 483 459 if (total) 484 460 ratio = avg / total * 100.0; ··· 492 470 } 493 471 494 472 static void print_stalled_cycles_backend(struct perf_stat_config *config, 495 - int cpu, 496 - struct evsel *evsel, double avg, 473 + int cpu, double avg, 497 474 struct perf_stat_output_ctx *out, 498 - struct runtime_stat *st) 475 + struct runtime_stat *st, 476 + struct runtime_stat_data *rsd) 499 477 { 500 478 double total, ratio = 0.0; 501 479 const char *color; 502 - int ctx = evsel_context(evsel); 503 480 504 - total = runtime_stat_avg(st, STAT_CYCLES, ctx, cpu); 481 + total = runtime_stat_avg(st, STAT_CYCLES, cpu, rsd); 505 482 506 483 if (total) 507 484 ratio = avg / total * 100.0; ··· 511 490 } 512 491 513 492 static void print_branch_misses(struct perf_stat_config *config, 514 - int cpu, 515 - struct evsel *evsel, 516 - double avg, 493 + int cpu, double avg, 517 494 struct perf_stat_output_ctx *out, 518 - struct runtime_stat *st) 495 + struct runtime_stat *st, 496 + struct runtime_stat_data *rsd) 519 497 { 520 498 double total, ratio = 0.0; 521 499 const char *color; 522 - int ctx = evsel_context(evsel); 523 500 524 - total = runtime_stat_avg(st, STAT_BRANCHES, ctx, cpu); 501 + total = runtime_stat_avg(st, STAT_BRANCHES, cpu, rsd); 525 502 
526 503 if (total) 527 504 ratio = avg / total * 100.0; ··· 530 511 } 531 512 532 513 static void print_l1_dcache_misses(struct perf_stat_config *config, 533 - int cpu, 534 - struct evsel *evsel, 535 - double avg, 514 + int cpu, double avg, 536 515 struct perf_stat_output_ctx *out, 537 - struct runtime_stat *st) 538 - 516 + struct runtime_stat *st, 517 + struct runtime_stat_data *rsd) 539 518 { 540 519 double total, ratio = 0.0; 541 520 const char *color; 542 - int ctx = evsel_context(evsel); 543 521 544 - total = runtime_stat_avg(st, STAT_L1_DCACHE, ctx, cpu); 522 + total = runtime_stat_avg(st, STAT_L1_DCACHE, cpu, rsd); 545 523 546 524 if (total) 547 525 ratio = avg / total * 100.0; ··· 549 533 } 550 534 551 535 static void print_l1_icache_misses(struct perf_stat_config *config, 552 - int cpu, 553 - struct evsel *evsel, 554 - double avg, 536 + int cpu, double avg, 555 537 struct perf_stat_output_ctx *out, 556 - struct runtime_stat *st) 557 - 538 + struct runtime_stat *st, 539 + struct runtime_stat_data *rsd) 558 540 { 559 541 double total, ratio = 0.0; 560 542 const char *color; 561 - int ctx = evsel_context(evsel); 562 543 563 - total = runtime_stat_avg(st, STAT_L1_ICACHE, ctx, cpu); 544 + total = runtime_stat_avg(st, STAT_L1_ICACHE, cpu, rsd); 564 545 565 546 if (total) 566 547 ratio = avg / total * 100.0; ··· 567 554 } 568 555 569 556 static void print_dtlb_cache_misses(struct perf_stat_config *config, 570 - int cpu, 571 - struct evsel *evsel, 572 - double avg, 557 + int cpu, double avg, 573 558 struct perf_stat_output_ctx *out, 574 - struct runtime_stat *st) 559 + struct runtime_stat *st, 560 + struct runtime_stat_data *rsd) 575 561 { 576 562 double total, ratio = 0.0; 577 563 const char *color; 578 - int ctx = evsel_context(evsel); 579 564 580 - total = runtime_stat_avg(st, STAT_DTLB_CACHE, ctx, cpu); 565 + total = runtime_stat_avg(st, STAT_DTLB_CACHE, cpu, rsd); 581 566 582 567 if (total) 583 568 ratio = avg / total * 100.0; ··· 585 574 } 586 575 587 576 
static void print_itlb_cache_misses(struct perf_stat_config *config, 588 - int cpu, 589 - struct evsel *evsel, 590 - double avg, 577 + int cpu, double avg, 591 578 struct perf_stat_output_ctx *out, 592 - struct runtime_stat *st) 579 + struct runtime_stat *st, 580 + struct runtime_stat_data *rsd) 593 581 { 594 582 double total, ratio = 0.0; 595 583 const char *color; 596 - int ctx = evsel_context(evsel); 597 584 598 - total = runtime_stat_avg(st, STAT_ITLB_CACHE, ctx, cpu); 585 + total = runtime_stat_avg(st, STAT_ITLB_CACHE, cpu, rsd); 599 586 600 587 if (total) 601 588 ratio = avg / total * 100.0; ··· 603 594 } 604 595 605 596 static void print_ll_cache_misses(struct perf_stat_config *config, 606 - int cpu, 607 - struct evsel *evsel, 608 - double avg, 597 + int cpu, double avg, 609 598 struct perf_stat_output_ctx *out, 610 - struct runtime_stat *st) 599 + struct runtime_stat *st, 600 + struct runtime_stat_data *rsd) 611 601 { 612 602 double total, ratio = 0.0; 613 603 const char *color; 614 - int ctx = evsel_context(evsel); 615 604 616 - total = runtime_stat_avg(st, STAT_LL_CACHE, ctx, cpu); 605 + total = runtime_stat_avg(st, STAT_LL_CACHE, cpu, rsd); 617 606 618 607 if (total) 619 608 ratio = avg / total * 100.0; ··· 669 662 return x; 670 663 } 671 664 672 - static double td_total_slots(int ctx, int cpu, struct runtime_stat *st) 665 + static double td_total_slots(int cpu, struct runtime_stat *st, 666 + struct runtime_stat_data *rsd) 673 667 { 674 - return runtime_stat_avg(st, STAT_TOPDOWN_TOTAL_SLOTS, ctx, cpu); 668 + return runtime_stat_avg(st, STAT_TOPDOWN_TOTAL_SLOTS, cpu, rsd); 675 669 } 676 670 677 - static double td_bad_spec(int ctx, int cpu, struct runtime_stat *st) 671 + static double td_bad_spec(int cpu, struct runtime_stat *st, 672 + struct runtime_stat_data *rsd) 678 673 { 679 674 double bad_spec = 0; 680 675 double total_slots; 681 676 double total; 682 677 683 - total = runtime_stat_avg(st, STAT_TOPDOWN_SLOTS_ISSUED, ctx, cpu) - 684 - 
runtime_stat_avg(st, STAT_TOPDOWN_SLOTS_RETIRED, ctx, cpu) + 685 - runtime_stat_avg(st, STAT_TOPDOWN_RECOVERY_BUBBLES, ctx, cpu); 678 + total = runtime_stat_avg(st, STAT_TOPDOWN_SLOTS_ISSUED, cpu, rsd) - 679 + runtime_stat_avg(st, STAT_TOPDOWN_SLOTS_RETIRED, cpu, rsd) + 680 + runtime_stat_avg(st, STAT_TOPDOWN_RECOVERY_BUBBLES, cpu, rsd); 686 681 687 - total_slots = td_total_slots(ctx, cpu, st); 682 + total_slots = td_total_slots(cpu, st, rsd); 688 683 if (total_slots) 689 684 bad_spec = total / total_slots; 690 685 return sanitize_val(bad_spec); 691 686 } 692 687 693 - static double td_retiring(int ctx, int cpu, struct runtime_stat *st) 688 + static double td_retiring(int cpu, struct runtime_stat *st, 689 + struct runtime_stat_data *rsd) 694 690 { 695 691 double retiring = 0; 696 - double total_slots = td_total_slots(ctx, cpu, st); 692 + double total_slots = td_total_slots(cpu, st, rsd); 697 693 double ret_slots = runtime_stat_avg(st, STAT_TOPDOWN_SLOTS_RETIRED, 698 - ctx, cpu); 694 + cpu, rsd); 699 695 700 696 if (total_slots) 701 697 retiring = ret_slots / total_slots; 702 698 return retiring; 703 699 } 704 700 705 - static double td_fe_bound(int ctx, int cpu, struct runtime_stat *st) 701 + static double td_fe_bound(int cpu, struct runtime_stat *st, 702 + struct runtime_stat_data *rsd) 706 703 { 707 704 double fe_bound = 0; 708 - double total_slots = td_total_slots(ctx, cpu, st); 705 + double total_slots = td_total_slots(cpu, st, rsd); 709 706 double fetch_bub = runtime_stat_avg(st, STAT_TOPDOWN_FETCH_BUBBLES, 710 - ctx, cpu); 707 + cpu, rsd); 711 708 712 709 if (total_slots) 713 710 fe_bound = fetch_bub / total_slots; 714 711 return fe_bound; 715 712 } 716 713 717 - static double td_be_bound(int ctx, int cpu, struct runtime_stat *st) 714 + static double td_be_bound(int cpu, struct runtime_stat *st, 715 + struct runtime_stat_data *rsd) 718 716 { 719 - double sum = (td_fe_bound(ctx, cpu, st) + 720 - td_bad_spec(ctx, cpu, st) + 721 - td_retiring(ctx, cpu, st)); 717 
+ double sum = (td_fe_bound(cpu, st, rsd) + 718 + td_bad_spec(cpu, st, rsd) + 719 + td_retiring(cpu, st, rsd)); 722 720 if (sum == 0) 723 721 return 0; 724 722 return sanitize_val(1.0 - sum); ··· 734 722 * the ratios we need to recreate the sum. 735 723 */ 736 724 737 - static double td_metric_ratio(int ctx, int cpu, 738 - enum stat_type type, 739 - struct runtime_stat *stat) 725 + static double td_metric_ratio(int cpu, enum stat_type type, 726 + struct runtime_stat *stat, 727 + struct runtime_stat_data *rsd) 740 728 { 741 - double sum = runtime_stat_avg(stat, STAT_TOPDOWN_RETIRING, ctx, cpu) + 742 - runtime_stat_avg(stat, STAT_TOPDOWN_FE_BOUND, ctx, cpu) + 743 - runtime_stat_avg(stat, STAT_TOPDOWN_BE_BOUND, ctx, cpu) + 744 - runtime_stat_avg(stat, STAT_TOPDOWN_BAD_SPEC, ctx, cpu); 745 - double d = runtime_stat_avg(stat, type, ctx, cpu); 729 + double sum = runtime_stat_avg(stat, STAT_TOPDOWN_RETIRING, cpu, rsd) + 730 + runtime_stat_avg(stat, STAT_TOPDOWN_FE_BOUND, cpu, rsd) + 731 + runtime_stat_avg(stat, STAT_TOPDOWN_BE_BOUND, cpu, rsd) + 732 + runtime_stat_avg(stat, STAT_TOPDOWN_BAD_SPEC, cpu, rsd); 733 + double d = runtime_stat_avg(stat, type, cpu, rsd); 746 734 747 735 if (sum) 748 736 return d / sum; ··· 754 742 * We allow two missing. 
 */

-static bool full_td(int ctx, int cpu,
-		    struct runtime_stat *stat)
+static bool full_td(int cpu, struct runtime_stat *stat,
+		    struct runtime_stat_data *rsd)
 {
 	int c = 0;

-	if (runtime_stat_avg(stat, STAT_TOPDOWN_RETIRING, ctx, cpu) > 0)
+	if (runtime_stat_avg(stat, STAT_TOPDOWN_RETIRING, cpu, rsd) > 0)
 		c++;
-	if (runtime_stat_avg(stat, STAT_TOPDOWN_BE_BOUND, ctx, cpu) > 0)
+	if (runtime_stat_avg(stat, STAT_TOPDOWN_BE_BOUND, cpu, rsd) > 0)
 		c++;
-	if (runtime_stat_avg(stat, STAT_TOPDOWN_FE_BOUND, ctx, cpu) > 0)
+	if (runtime_stat_avg(stat, STAT_TOPDOWN_FE_BOUND, cpu, rsd) > 0)
 		c++;
-	if (runtime_stat_avg(stat, STAT_TOPDOWN_BAD_SPEC, ctx, cpu) > 0)
+	if (runtime_stat_avg(stat, STAT_TOPDOWN_BAD_SPEC, cpu, rsd) > 0)
 		c++;
 	return c >= 2;
 }

-static void print_smi_cost(struct perf_stat_config *config,
-			   int cpu, struct evsel *evsel,
+static void print_smi_cost(struct perf_stat_config *config, int cpu,
 			   struct perf_stat_output_ctx *out,
-			   struct runtime_stat *st)
+			   struct runtime_stat *st,
+			   struct runtime_stat_data *rsd)
 {
 	double smi_num, aperf, cycles, cost = 0.0;
-	int ctx = evsel_context(evsel);
 	const char *color = NULL;

-	smi_num = runtime_stat_avg(st, STAT_SMI_NUM, ctx, cpu);
-	aperf = runtime_stat_avg(st, STAT_APERF, ctx, cpu);
-	cycles = runtime_stat_avg(st, STAT_CYCLES, ctx, cpu);
+	smi_num = runtime_stat_avg(st, STAT_SMI_NUM, cpu, rsd);
+	aperf = runtime_stat_avg(st, STAT_APERF, cpu, rsd);
+	cycles = runtime_stat_avg(st, STAT_CYCLES, cpu, rsd);

 	if ((cycles == 0) || (aperf == 0))
 		return;
···
 			scale = 1e-9;
 		} else {
 			v = saved_value_lookup(metric_events[i], cpu, false,
-					       STAT_NONE, 0, st);
+					       STAT_NONE, 0, st,
+					       metric_events[i]->cgrp);
 			if (!v)
 				break;
 			stats = &v->stats;
···
 	print_metric_t print_metric = out->print_metric;
 	double total, ratio = 0.0, total2;
 	const char *color = NULL;
-	int ctx = evsel_context(evsel);
+	struct runtime_stat_data rsd = {
+		.ctx = evsel_context(evsel),
+		.cgrp = evsel->cgrp,
+	};
 	struct metric_event *me;
 	int num = 1;

 	if (evsel__match(evsel, HARDWARE, HW_INSTRUCTIONS)) {
-		total = runtime_stat_avg(st, STAT_CYCLES, ctx, cpu);
+		total = runtime_stat_avg(st, STAT_CYCLES, cpu, &rsd);

 		if (total) {
 			ratio = avg / total;
···
 			print_metric(config, ctxp, NULL, NULL, "insn per cycle", 0);
 		}

-		total = runtime_stat_avg(st, STAT_STALLED_CYCLES_FRONT,
-					 ctx, cpu);
+		total = runtime_stat_avg(st, STAT_STALLED_CYCLES_FRONT, cpu, &rsd);

 		total = max(total, runtime_stat_avg(st,
 						    STAT_STALLED_CYCLES_BACK,
-						    ctx, cpu));
+						    cpu, &rsd));

 		if (total && avg) {
 			out->new_line(config, ctxp);
···
 				       ratio);
 		}
 	} else if (evsel__match(evsel, HARDWARE, HW_BRANCH_MISSES)) {
-		if (runtime_stat_n(st, STAT_BRANCHES, ctx, cpu) != 0)
-			print_branch_misses(config, cpu, evsel, avg, out, st);
+		if (runtime_stat_n(st, STAT_BRANCHES, cpu, &rsd) != 0)
+			print_branch_misses(config, cpu, avg, out, st, &rsd);
 		else
 			print_metric(config, ctxp, NULL, NULL, "of all branches", 0);
 	} else if (
···
 					((PERF_COUNT_HW_CACHE_OP_READ) << 8) |
 					((PERF_COUNT_HW_CACHE_RESULT_MISS) << 16))) {

-		if (runtime_stat_n(st, STAT_L1_DCACHE, ctx, cpu) != 0)
-			print_l1_dcache_misses(config, cpu, evsel, avg, out, st);
+		if (runtime_stat_n(st, STAT_L1_DCACHE, cpu, &rsd) != 0)
+			print_l1_dcache_misses(config, cpu, avg, out, st, &rsd);
 		else
 			print_metric(config, ctxp, NULL, NULL, "of all L1-dcache accesses", 0);
 	} else if (
···
 					((PERF_COUNT_HW_CACHE_OP_READ) << 8) |
 					((PERF_COUNT_HW_CACHE_RESULT_MISS) << 16))) {

-		if (runtime_stat_n(st, STAT_L1_ICACHE, ctx, cpu) != 0)
-			print_l1_icache_misses(config, cpu, evsel, avg, out, st);
+		if (runtime_stat_n(st, STAT_L1_ICACHE, cpu, &rsd) != 0)
+			print_l1_icache_misses(config, cpu, avg, out, st, &rsd);
 		else
 			print_metric(config, ctxp, NULL, NULL, "of all L1-icache accesses", 0);
 	} else if (
···
 					((PERF_COUNT_HW_CACHE_OP_READ) << 8) |
 					((PERF_COUNT_HW_CACHE_RESULT_MISS) << 16))) {

-		if (runtime_stat_n(st, STAT_DTLB_CACHE, ctx, cpu) != 0)
-			print_dtlb_cache_misses(config, cpu, evsel, avg, out, st);
+		if (runtime_stat_n(st, STAT_DTLB_CACHE, cpu, &rsd) != 0)
+			print_dtlb_cache_misses(config, cpu, avg, out, st, &rsd);
 		else
 			print_metric(config, ctxp, NULL, NULL, "of all dTLB cache accesses", 0);
 	} else if (
···
 					((PERF_COUNT_HW_CACHE_OP_READ) << 8) |
 					((PERF_COUNT_HW_CACHE_RESULT_MISS) << 16))) {

-		if (runtime_stat_n(st, STAT_ITLB_CACHE, ctx, cpu) != 0)
-			print_itlb_cache_misses(config, cpu, evsel, avg, out, st);
+		if (runtime_stat_n(st, STAT_ITLB_CACHE, cpu, &rsd) != 0)
+			print_itlb_cache_misses(config, cpu, avg, out, st, &rsd);
 		else
 			print_metric(config, ctxp, NULL, NULL, "of all iTLB cache accesses", 0);
 	} else if (
···
 					((PERF_COUNT_HW_CACHE_OP_READ) << 8) |
 					((PERF_COUNT_HW_CACHE_RESULT_MISS) << 16))) {

-		if (runtime_stat_n(st, STAT_LL_CACHE, ctx, cpu) != 0)
-			print_ll_cache_misses(config, cpu, evsel, avg, out, st);
+		if (runtime_stat_n(st, STAT_LL_CACHE, cpu, &rsd) != 0)
+			print_ll_cache_misses(config, cpu, avg, out, st, &rsd);
 		else
 			print_metric(config, ctxp, NULL, NULL, "of all LL-cache accesses", 0);
 	} else if (evsel__match(evsel, HARDWARE, HW_CACHE_MISSES)) {
-		total = runtime_stat_avg(st, STAT_CACHEREFS, ctx, cpu);
+		total = runtime_stat_avg(st, STAT_CACHEREFS, cpu, &rsd);

 		if (total)
 			ratio = avg * 100 / total;

-		if (runtime_stat_n(st, STAT_CACHEREFS, ctx, cpu) != 0)
+		if (runtime_stat_n(st, STAT_CACHEREFS, cpu, &rsd) != 0)
 			print_metric(config, ctxp, NULL, "%8.3f %%",
 				     "of all cache refs", ratio);
 		else
 			print_metric(config, ctxp, NULL, NULL, "of all cache refs", 0);
 	} else if (evsel__match(evsel, HARDWARE, HW_STALLED_CYCLES_FRONTEND)) {
-		print_stalled_cycles_frontend(config, cpu, evsel, avg, out, st);
+		print_stalled_cycles_frontend(config, cpu, avg, out, st, &rsd);
 	} else if (evsel__match(evsel, HARDWARE, HW_STALLED_CYCLES_BACKEND)) {
-		print_stalled_cycles_backend(config, cpu, evsel, avg, out, st);
+		print_stalled_cycles_backend(config, cpu, avg, out, st, &rsd);
 	} else if (evsel__match(evsel, HARDWARE, HW_CPU_CYCLES)) {
-		total = runtime_stat_avg(st, STAT_NSECS, 0, cpu);
+		total = runtime_stat_avg(st, STAT_NSECS, cpu, &rsd);

 		if (total) {
 			ratio = avg / total;
···
 			print_metric(config, ctxp, NULL, NULL, "Ghz", 0);
 		}
 	} else if (perf_stat_evsel__is(evsel, CYCLES_IN_TX)) {
-		total = runtime_stat_avg(st, STAT_CYCLES, ctx, cpu);
+		total = runtime_stat_avg(st, STAT_CYCLES, cpu, &rsd);

 		if (total)
 			print_metric(config, ctxp, NULL,
···
 			print_metric(config, ctxp, NULL, NULL, "transactional cycles",
 				     0);
 	} else if (perf_stat_evsel__is(evsel, CYCLES_IN_TX_CP)) {
-		total = runtime_stat_avg(st, STAT_CYCLES, ctx, cpu);
-		total2 = runtime_stat_avg(st, STAT_CYCLES_IN_TX, ctx, cpu);
+		total = runtime_stat_avg(st, STAT_CYCLES, cpu, &rsd);
+		total2 = runtime_stat_avg(st, STAT_CYCLES_IN_TX, cpu, &rsd);

 		if (total2 < avg)
 			total2 = avg;
···
 		else
 			print_metric(config, ctxp, NULL, NULL, "aborted cycles", 0);
 	} else if (perf_stat_evsel__is(evsel, TRANSACTION_START)) {
-		total = runtime_stat_avg(st, STAT_CYCLES_IN_TX,
-					 ctx, cpu);
+		total = runtime_stat_avg(st, STAT_CYCLES_IN_TX, cpu, &rsd);

 		if (avg)
 			ratio = total / avg;

-		if (runtime_stat_n(st, STAT_CYCLES_IN_TX, ctx, cpu) != 0)
+		if (runtime_stat_n(st, STAT_CYCLES_IN_TX, cpu, &rsd) != 0)
 			print_metric(config, ctxp, NULL, "%8.0f",
 				     "cycles / transaction", ratio);
 		else
 			print_metric(config, ctxp, NULL, NULL, "cycles / transaction",
 				     0);
 	} else if (perf_stat_evsel__is(evsel, ELISION_START)) {
-		total = runtime_stat_avg(st, STAT_CYCLES_IN_TX,
-					 ctx, cpu);
+		total = runtime_stat_avg(st, STAT_CYCLES_IN_TX, cpu, &rsd);

 		if (avg)
 			ratio = total / avg;
···
 		else
 			print_metric(config, ctxp, NULL, NULL, "CPUs utilized", 0);
 	} else if (perf_stat_evsel__is(evsel, TOPDOWN_FETCH_BUBBLES)) {
-		double fe_bound = td_fe_bound(ctx, cpu, st);
+		double fe_bound = td_fe_bound(cpu, st, &rsd);

 		if (fe_bound > 0.2)
 			color = PERF_COLOR_RED;
 		print_metric(config, ctxp, color, "%8.1f%%", "frontend bound",
 			     fe_bound * 100.);
 	} else if (perf_stat_evsel__is(evsel, TOPDOWN_SLOTS_RETIRED)) {
-		double retiring = td_retiring(ctx, cpu, st);
+		double retiring = td_retiring(cpu, st, &rsd);

 		if (retiring > 0.7)
 			color = PERF_COLOR_GREEN;
 		print_metric(config, ctxp, color, "%8.1f%%", "retiring",
 			     retiring * 100.);
 	} else if (perf_stat_evsel__is(evsel, TOPDOWN_RECOVERY_BUBBLES)) {
-		double bad_spec = td_bad_spec(ctx, cpu, st);
+		double bad_spec = td_bad_spec(cpu, st, &rsd);

 		if (bad_spec > 0.1)
 			color = PERF_COLOR_RED;
 		print_metric(config, ctxp, color, "%8.1f%%", "bad speculation",
 			     bad_spec * 100.);
 	} else if (perf_stat_evsel__is(evsel, TOPDOWN_SLOTS_ISSUED)) {
-		double be_bound = td_be_bound(ctx, cpu, st);
+		double be_bound = td_be_bound(cpu, st, &rsd);
 		const char *name = "backend bound";
 		static int have_recovery_bubbles = -1;
···

 		if (be_bound > 0.2)
 			color = PERF_COLOR_RED;
-		if (td_total_slots(ctx, cpu, st) > 0)
+		if (td_total_slots(cpu, st, &rsd) > 0)
 			print_metric(config, ctxp, color, "%8.1f%%", name,
 				     be_bound * 100.);
 		else
 			print_metric(config, ctxp, NULL, NULL, name, 0);
 	} else if (perf_stat_evsel__is(evsel, TOPDOWN_RETIRING) &&
-		   full_td(ctx, cpu, st)) {
-		double retiring = td_metric_ratio(ctx, cpu,
-						  STAT_TOPDOWN_RETIRING, st);
-
+		   full_td(cpu, st, &rsd)) {
+		double retiring = td_metric_ratio(cpu,
+						  STAT_TOPDOWN_RETIRING, st,
+						  &rsd);
 		if (retiring > 0.7)
 			color = PERF_COLOR_GREEN;
 		print_metric(config, ctxp, color, "%8.1f%%", "retiring",
 			     retiring * 100.);
 	} else if (perf_stat_evsel__is(evsel, TOPDOWN_FE_BOUND) &&
-		   full_td(ctx, cpu, st)) {
-		double fe_bound = td_metric_ratio(ctx, cpu,
-						  STAT_TOPDOWN_FE_BOUND, st);
-
+		   full_td(cpu, st, &rsd)) {
+		double fe_bound = td_metric_ratio(cpu,
+						  STAT_TOPDOWN_FE_BOUND, st,
+						  &rsd);
 		if (fe_bound > 0.2)
 			color = PERF_COLOR_RED;
 		print_metric(config, ctxp, color, "%8.1f%%", "frontend bound",
 			     fe_bound * 100.);
 	} else if (perf_stat_evsel__is(evsel, TOPDOWN_BE_BOUND) &&
-		   full_td(ctx, cpu, st)) {
-		double be_bound = td_metric_ratio(ctx, cpu,
-						  STAT_TOPDOWN_BE_BOUND, st);
-
+		   full_td(cpu, st, &rsd)) {
+		double be_bound = td_metric_ratio(cpu,
+						  STAT_TOPDOWN_BE_BOUND, st,
+						  &rsd);
 		if (be_bound > 0.2)
 			color = PERF_COLOR_RED;
 		print_metric(config, ctxp, color, "%8.1f%%", "backend bound",
 			     be_bound * 100.);
 	} else if (perf_stat_evsel__is(evsel, TOPDOWN_BAD_SPEC) &&
-		   full_td(ctx, cpu, st)) {
-		double bad_spec = td_metric_ratio(ctx, cpu,
-						  STAT_TOPDOWN_BAD_SPEC, st);
-
+		   full_td(cpu, st, &rsd)) {
+		double bad_spec = td_metric_ratio(cpu,
+						  STAT_TOPDOWN_BAD_SPEC, st,
+						  &rsd);
 		if (bad_spec > 0.1)
 			color = PERF_COLOR_RED;
 		print_metric(config, ctxp, color, "%8.1f%%", "bad speculation",
···
 	} else if (evsel->metric_expr) {
 		generic_metric(config, evsel->metric_expr, evsel->metric_events, NULL,
 			       evsel->name, evsel->metric_name, NULL, 1, cpu, out, st);
-	} else if (runtime_stat_n(st, STAT_NSECS, 0, cpu) != 0) {
+	} else if (runtime_stat_n(st, STAT_NSECS, cpu, &rsd) != 0) {
 		char unit = 'M';
 		char unit_buf[10];

-		total = runtime_stat_avg(st, STAT_NSECS, 0, cpu);
+		total = runtime_stat_avg(st, STAT_NSECS, cpu, &rsd);

 		if (total)
 			ratio = 1000.0 * avg / total;
···
 		snprintf(unit_buf, sizeof(unit_buf), "%c/sec", unit);
 		print_metric(config, ctxp, NULL, "%8.3f", unit_buf, ratio);
 	} else if (perf_stat_evsel__is(evsel, SMI_NUM)) {
-		print_smi_cost(config, cpu, evsel, out, st);
+		print_smi_cost(config, cpu, out, st, &rsd);
 	} else {
 		num = 0;
 	}
+4 -2
tools/testing/selftests/Makefile
···
 TARGETS_HOTPLUG = cpu-hotplug
 TARGETS_HOTPLUG += memory-hotplug

-# User can optionally provide a TARGETS skiplist.
-SKIP_TARGETS ?=
+# User can optionally provide a TARGETS skiplist. By default we skip
+# BPF since it has cutting edge build time dependencies which require
+# more effort to install.
+SKIP_TARGETS ?= bpf
 ifneq ($(SKIP_TARGETS),)
 	TMP := $(filter-out $(SKIP_TARGETS), $(TARGETS))
 	override TARGETS := $(TMP)
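The skiplist mechanism this change relies on is GNU make's `filter-out` plus a `?=` default that the command line can override. A self-contained sketch (the target list and `/tmp` path are illustrative, not the real selftests list):

```shell
# Reproduce the Makefile's skiplist logic in a throwaway makefile.
cat > /tmp/skiplist-demo.mk <<'EOF'
TARGETS := arm64 bpf net netfilter
SKIP_TARGETS ?= bpf
ifneq ($(SKIP_TARGETS),)
TMP := $(filter-out $(SKIP_TARGETS), $(TARGETS))
override TARGETS := $(TMP)
endif
show: ; @echo $(TARGETS)
EOF

# Default: bpf is filtered out.
default_targets=$(make -s -f /tmp/skiplist-demo.mk show)
# An explicit empty skiplist on the command line overrides the ?= default.
all_targets=$(make -s -f /tmp/skiplist-demo.mk show SKIP_TARGETS=)
echo "$default_targets"
echo "$all_targets"
```

So after this change, building the bpf selftests again is a matter of passing `SKIP_TARGETS=""` (or a different skiplist) on the make command line.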
+1 -1
tools/testing/selftests/arm64/fp/fpsimd-test.S
···
 	mov	x11, x1			// actual data
 	mov	x12, x2			// data size

-	puts	"Mistatch: PID="
+	puts	"Mismatch: PID="
 	mov	x0, x20
 	bl	putdec
 	puts	", iteration="
+1 -1
tools/testing/selftests/arm64/fp/sve-test.S
···
 	mov	x11, x1			// actual data
 	mov	x12, x2			// data size

-	puts	"Mistatch: PID="
+	puts	"Mismatch: PID="
 	mov	x0, x20
 	bl	putdec
 	puts	", iteration="
+2 -2
tools/testing/selftests/net/tls.c
···

 FIXTURE_VARIANT(tls)
 {
-	u16 tls_version;
-	u16 cipher_type;
+	uint16_t tls_version;
+	uint16_t cipher_type;
 };

 FIXTURE_VARIANT_ADD(tls, 12_gcm)
+9 -3
tools/testing/selftests/netfilter/nft_conntrack_helper.sh
···
 	local message=$2
 	local port=$3

-	ip netns exec ${netns} conntrack -L -p tcp --dport $port 2> /dev/null |grep -q 'helper=ftp'
+	if echo $message |grep -q 'ipv6';then
+		local family="ipv6"
+	else
+		local family="ipv4"
+	fi
+
+	ip netns exec ${netns} conntrack -L -f $family -p tcp --dport $port 2> /dev/null |grep -q 'helper=ftp'
 	if [ $? -ne 0 ] ; then
 		echo "FAIL: ${netns} did not show attached helper $message" 1>&2
 		ret=1
···

 	sleep 3 | ip netns exec ${ns2} nc -w 2 -l -p $port > /dev/null &

-	sleep 1
 	sleep 1 | ip netns exec ${ns1} nc -w 2 10.0.1.2 $port > /dev/null &
+	sleep 1

 	check_for_helper "$ns1" "ip $msg" $port
 	check_for_helper "$ns2" "ip $msg" $port
···

 	sleep 3 | ip netns exec ${ns2} nc -w 2 -6 -l -p $port > /dev/null &

-	sleep 1
 	sleep 1 | ip netns exec ${ns1} nc -w 2 -6 dead:1::2 $port > /dev/null &
+	sleep 1

 	check_for_helper "$ns1" "ipv6 $msg" $port
 	check_for_helper "$ns2" "ipv6 $msg" $port
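The family-detection branch added above derives the `conntrack -f` argument (ipv4/ipv6) from the test's message string. Extracted as a standalone helper for illustration (the `detect_family` function name is ours, not part of the script):

```shell
# Pick the conntrack address family the same way the test does:
# any message mentioning ipv6 selects -f ipv6, everything else ipv4.
detect_family() {
	if echo "$1" | grep -q 'ipv6'; then
		echo ipv6
	else
		echo ipv4
	fi
}

fam6=$(detect_family "ipv6 test ftp")
fam4=$(detect_family "ip test ftp")
echo "$fam6 $fam4"
```

Without the `-f` filter, an unrelated entry from the other address family in the same netns could satisfy (or pollute) the `helper=ftp` grep; filtering by family keeps each check scoped to the traffic it just generated.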