Merge tag 'extcon-fixes-for-4.6-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/chanwoo/extcon into char-misc-linus

Chanwoo writes:

Update extcon for v4.6-rc3

This update fixes the following issue:
- In extcon-palmas.c, an external abort happens on wake-up from the suspend state
on the BeagleBoard-X15 platform, so drop the IRQF_EARLY_RESUME flag.
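
For context, dropping IRQF_EARLY_RESUME means the driver requests its interrupt without that bit in the flags, so the IRQ is re-enabled in the normal resume phase rather than during early (syscore) resume, presumably before the device's parent bus is ready on this board. A minimal, hypothetical C sketch of such a request is below; the handler and name strings are illustrative assumptions, not the literal extcon-palmas.c hunk.

	#include <linux/device.h>
	#include <linux/interrupt.h>

	/* Illustrative threaded handler; the name is an assumption. */
	static irqreturn_t palmas_id_irq_handler(int irq, void *data)
	{
		/* Read the ID latch/line state and report the cable state here. */
		return IRQ_HANDLED;
	}

	static int palmas_usb_request_id_irq(struct device *dev, int irq, void *priv)
	{
		/*
		 * IRQF_EARLY_RESUME is simply left out of these flags; the IRQ
		 * then comes back in the regular device resume phase.
		 */
		return devm_request_threaded_irq(dev, irq, NULL,
						 palmas_id_irq_handler,
						 IRQF_TRIGGER_FALLING |
						 IRQF_TRIGGER_RISING |
						 IRQF_ONESHOT,
						 "palmas_usb_id", priv);
	}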

+2162 -1425
+1
.mailmap
··· 33 Brian Avery <b.avery@hp.com> 34 Brian King <brking@us.ibm.com> 35 Christoph Hellwig <hch@lst.de> 36 Corey Minyard <minyard@acm.org> 37 Damian Hobson-Garcia <dhobsong@igel.co.jp> 38 David Brownell <david-b@pacbell.net>
··· 33 Brian Avery <b.avery@hp.com> 34 Brian King <brking@us.ibm.com> 35 Christoph Hellwig <hch@lst.de> 36 + Christophe Ricard <christophe.ricard@gmail.com> 37 Corey Minyard <minyard@acm.org> 38 Damian Hobson-Garcia <dhobsong@igel.co.jp> 39 David Brownell <david-b@pacbell.net>
+1 -1
Documentation/networking/switchdev.txt
··· 386 memory allocation, etc. The goal is to handle the stuff that is not unlikely 387 to fail here. The second phase is to "commit" the actual changes. 388 389 - Switchdev provides an inftrastructure for sharing items (for example memory 390 allocations) between the two phases. 391 392 The object created by a driver in "prepare" phase and it is queued up by:
··· 386 memory allocation, etc. The goal is to handle the stuff that is not unlikely 387 to fail here. The second phase is to "commit" the actual changes. 388 389 + Switchdev provides an infrastructure for sharing items (for example memory 390 allocations) between the two phases. 391 392 The object created by a driver in "prepare" phase and it is queued up by:
+208
Documentation/x86/topology.txt
···
··· 1 + x86 Topology 2 + ============ 3 + 4 + This documents and clarifies the main aspects of x86 topology modelling and 5 + representation in the kernel. Update/change when doing changes to the 6 + respective code. 7 + 8 + The architecture-agnostic topology definitions are in 9 + Documentation/cputopology.txt. This file holds x86-specific 10 + differences/specialities which must not necessarily apply to the generic 11 + definitions. Thus, the way to read up on Linux topology on x86 is to start 12 + with the generic one and look at this one in parallel for the x86 specifics. 13 + 14 + Needless to say, code should use the generic functions - this file is *only* 15 + here to *document* the inner workings of x86 topology. 16 + 17 + Started by Thomas Gleixner <tglx@linutronix.de> and Borislav Petkov <bp@alien8.de>. 18 + 19 + The main aim of the topology facilities is to present adequate interfaces to 20 + code which needs to know/query/use the structure of the running system wrt 21 + threads, cores, packages, etc. 22 + 23 + The kernel does not care about the concept of physical sockets because a 24 + socket has no relevance to software. It's an electromechanical component. In 25 + the past a socket always contained a single package (see below), but with the 26 + advent of Multi Chip Modules (MCM) a socket can hold more than one package. So 27 + there might be still references to sockets in the code, but they are of 28 + historical nature and should be cleaned up. 29 + 30 + The topology of a system is described in the units of: 31 + 32 + - packages 33 + - cores 34 + - threads 35 + 36 + * Package: 37 + 38 + Packages contain a number of cores plus shared resources, e.g. DRAM 39 + controller, shared caches etc. 40 + 41 + AMD nomenclature for package is 'Node'. 42 + 43 + Package-related topology information in the kernel: 44 + 45 + - cpuinfo_x86.x86_max_cores: 46 + 47 + The number of cores in a package. This information is retrieved via CPUID. 48 + 49 + - cpuinfo_x86.phys_proc_id: 50 + 51 + The physical ID of the package. This information is retrieved via CPUID 52 + and deduced from the APIC IDs of the cores in the package. 53 + 54 + - cpuinfo_x86.logical_id: 55 + 56 + The logical ID of the package. As we do not trust BIOSes to enumerate the 57 + packages in a consistent way, we introduced the concept of logical package 58 + ID so we can sanely calculate the number of maximum possible packages in 59 + the system and have the packages enumerated linearly. 60 + 61 + - topology_max_packages(): 62 + 63 + The maximum possible number of packages in the system. Helpful for per 64 + package facilities to preallocate per package information. 65 + 66 + 67 + * Cores: 68 + 69 + A core consists of 1 or more threads. It does not matter whether the threads 70 + are SMT- or CMT-type threads. 71 + 72 + AMDs nomenclature for a CMT core is "Compute Unit". The kernel always uses 73 + "core". 74 + 75 + Core-related topology information in the kernel: 76 + 77 + - smp_num_siblings: 78 + 79 + The number of threads in a core. The number of threads in a package can be 80 + calculated by: 81 + 82 + threads_per_package = cpuinfo_x86.x86_max_cores * smp_num_siblings 83 + 84 + 85 + * Threads: 86 + 87 + A thread is a single scheduling unit. It's the equivalent to a logical Linux 88 + CPU. 89 + 90 + AMDs nomenclature for CMT threads is "Compute Unit Core". The kernel always 91 + uses "thread". 
92 + 93 + Thread-related topology information in the kernel: 94 + 95 + - topology_core_cpumask(): 96 + 97 + The cpumask contains all online threads in the package to which a thread 98 + belongs. 99 + 100 + The number of online threads is also printed in /proc/cpuinfo "siblings." 101 + 102 + - topology_sibling_mask(): 103 + 104 + The cpumask contains all online threads in the core to which a thread 105 + belongs. 106 + 107 + - topology_logical_package_id(): 108 + 109 + The logical package ID to which a thread belongs. 110 + 111 + - topology_physical_package_id(): 112 + 113 + The physical package ID to which a thread belongs. 114 + 115 + - topology_core_id(); 116 + 117 + The ID of the core to which a thread belongs. It is also printed in /proc/cpuinfo 118 + "core_id." 119 + 120 + 121 + 122 + System topology examples 123 + 124 + Note: 125 + 126 + The alternative Linux CPU enumeration depends on how the BIOS enumerates the 127 + threads. Many BIOSes enumerate all threads 0 first and then all threads 1. 128 + That has the "advantage" that the logical Linux CPU numbers of threads 0 stay 129 + the same whether threads are enabled or not. That's merely an implementation 130 + detail and has no practical impact. 131 + 132 + 1) Single Package, Single Core 133 + 134 + [package 0] -> [core 0] -> [thread 0] -> Linux CPU 0 135 + 136 + 2) Single Package, Dual Core 137 + 138 + a) One thread per core 139 + 140 + [package 0] -> [core 0] -> [thread 0] -> Linux CPU 0 141 + -> [core 1] -> [thread 0] -> Linux CPU 1 142 + 143 + b) Two threads per core 144 + 145 + [package 0] -> [core 0] -> [thread 0] -> Linux CPU 0 146 + -> [thread 1] -> Linux CPU 1 147 + -> [core 1] -> [thread 0] -> Linux CPU 2 148 + -> [thread 1] -> Linux CPU 3 149 + 150 + Alternative enumeration: 151 + 152 + [package 0] -> [core 0] -> [thread 0] -> Linux CPU 0 153 + -> [thread 1] -> Linux CPU 2 154 + -> [core 1] -> [thread 0] -> Linux CPU 1 155 + -> [thread 1] -> Linux CPU 3 156 + 157 + AMD nomenclature for CMT systems: 158 + 159 + [node 0] -> [Compute Unit 0] -> [Compute Unit Core 0] -> Linux CPU 0 160 + -> [Compute Unit Core 1] -> Linux CPU 1 161 + -> [Compute Unit 1] -> [Compute Unit Core 0] -> Linux CPU 2 162 + -> [Compute Unit Core 1] -> Linux CPU 3 163 + 164 + 4) Dual Package, Dual Core 165 + 166 + a) One thread per core 167 + 168 + [package 0] -> [core 0] -> [thread 0] -> Linux CPU 0 169 + -> [core 1] -> [thread 0] -> Linux CPU 1 170 + 171 + [package 1] -> [core 0] -> [thread 0] -> Linux CPU 2 172 + -> [core 1] -> [thread 0] -> Linux CPU 3 173 + 174 + b) Two threads per core 175 + 176 + [package 0] -> [core 0] -> [thread 0] -> Linux CPU 0 177 + -> [thread 1] -> Linux CPU 1 178 + -> [core 1] -> [thread 0] -> Linux CPU 2 179 + -> [thread 1] -> Linux CPU 3 180 + 181 + [package 1] -> [core 0] -> [thread 0] -> Linux CPU 4 182 + -> [thread 1] -> Linux CPU 5 183 + -> [core 1] -> [thread 0] -> Linux CPU 6 184 + -> [thread 1] -> Linux CPU 7 185 + 186 + Alternative enumeration: 187 + 188 + [package 0] -> [core 0] -> [thread 0] -> Linux CPU 0 189 + -> [thread 1] -> Linux CPU 4 190 + -> [core 1] -> [thread 0] -> Linux CPU 1 191 + -> [thread 1] -> Linux CPU 5 192 + 193 + [package 1] -> [core 0] -> [thread 0] -> Linux CPU 2 194 + -> [thread 1] -> Linux CPU 6 195 + -> [core 1] -> [thread 0] -> Linux CPU 3 196 + -> [thread 1] -> Linux CPU 7 197 + 198 + AMD nomenclature for CMT systems: 199 + 200 + [node 0] -> [Compute Unit 0] -> [Compute Unit Core 0] -> Linux CPU 0 201 + -> [Compute Unit Core 1] -> Linux CPU 1 202 + -> [Compute Unit 1] -> [Compute 
Unit Core 0] -> Linux CPU 2 203 + -> [Compute Unit Core 1] -> Linux CPU 3 204 + 205 + [node 1] -> [Compute Unit 0] -> [Compute Unit Core 0] -> Linux CPU 4 206 + -> [Compute Unit Core 1] -> Linux CPU 5 207 + -> [Compute Unit 1] -> [Compute Unit Core 0] -> Linux CPU 6 208 + -> [Compute Unit Core 1] -> Linux CPU 7
+7 -4
MAINTAINERS
··· 5042 HARDWARE SPINLOCK CORE 5043 M: Ohad Ben-Cohen <ohad@wizery.com> 5044 M: Bjorn Andersson <bjorn.andersson@linaro.org> 5045 S: Maintained 5046 T: git git://git.kernel.org/pub/scm/linux/kernel/git/ohad/hwspinlock.git 5047 F: Documentation/hwspinlock.txt ··· 6403 M: Ananth N Mavinakayanahalli <ananth@in.ibm.com> 6404 M: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com> 6405 M: "David S. Miller" <davem@davemloft.net> 6406 - M: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> 6407 S: Maintained 6408 F: Documentation/kprobes.txt 6409 F: include/linux/kprobes.h ··· 8254 8255 ORANGEFS FILESYSTEM 8256 M: Mike Marshall <hubcap@omnibond.com> 8257 - L: pvfs2-developers@beowulf-underground.org 8258 T: git git://git.kernel.org/pub/scm/linux/kernel/git/hubcap/linux.git 8259 S: Supported 8260 F: fs/orangefs/ ··· 9315 REMOTE PROCESSOR (REMOTEPROC) SUBSYSTEM 9316 M: Ohad Ben-Cohen <ohad@wizery.com> 9317 M: Bjorn Andersson <bjorn.andersson@linaro.org> 9318 T: git git://git.kernel.org/pub/scm/linux/kernel/git/ohad/remoteproc.git 9319 S: Maintained 9320 F: drivers/remoteproc/ ··· 9325 REMOTE PROCESSOR MESSAGING (RPMSG) SUBSYSTEM 9326 M: Ohad Ben-Cohen <ohad@wizery.com> 9327 M: Bjorn Andersson <bjorn.andersson@linaro.org> 9328 T: git git://git.kernel.org/pub/scm/linux/kernel/git/ohad/rpmsg.git 9329 S: Maintained 9330 F: drivers/rpmsg/ ··· 11140 F: net/tipc/ 11141 11142 TILE ARCHITECTURE 11143 - M: Chris Metcalf <cmetcalf@ezchip.com> 11144 - W: http://www.ezchip.com/scm/ 11145 T: git git://git.kernel.org/pub/scm/linux/kernel/git/cmetcalf/linux-tile.git 11146 S: Supported 11147 F: arch/tile/
··· 5042 HARDWARE SPINLOCK CORE 5043 M: Ohad Ben-Cohen <ohad@wizery.com> 5044 M: Bjorn Andersson <bjorn.andersson@linaro.org> 5045 + L: linux-remoteproc@vger.kernel.org 5046 S: Maintained 5047 T: git git://git.kernel.org/pub/scm/linux/kernel/git/ohad/hwspinlock.git 5048 F: Documentation/hwspinlock.txt ··· 6402 M: Ananth N Mavinakayanahalli <ananth@in.ibm.com> 6403 M: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com> 6404 M: "David S. Miller" <davem@davemloft.net> 6405 + M: Masami Hiramatsu <mhiramat@kernel.org> 6406 S: Maintained 6407 F: Documentation/kprobes.txt 6408 F: include/linux/kprobes.h ··· 8253 8254 ORANGEFS FILESYSTEM 8255 M: Mike Marshall <hubcap@omnibond.com> 8256 + L: pvfs2-developers@beowulf-underground.org (subscribers-only) 8257 T: git git://git.kernel.org/pub/scm/linux/kernel/git/hubcap/linux.git 8258 S: Supported 8259 F: fs/orangefs/ ··· 9314 REMOTE PROCESSOR (REMOTEPROC) SUBSYSTEM 9315 M: Ohad Ben-Cohen <ohad@wizery.com> 9316 M: Bjorn Andersson <bjorn.andersson@linaro.org> 9317 + L: linux-remoteproc@vger.kernel.org 9318 T: git git://git.kernel.org/pub/scm/linux/kernel/git/ohad/remoteproc.git 9319 S: Maintained 9320 F: drivers/remoteproc/ ··· 9323 REMOTE PROCESSOR MESSAGING (RPMSG) SUBSYSTEM 9324 M: Ohad Ben-Cohen <ohad@wizery.com> 9325 M: Bjorn Andersson <bjorn.andersson@linaro.org> 9326 + L: linux-remoteproc@vger.kernel.org 9327 T: git git://git.kernel.org/pub/scm/linux/kernel/git/ohad/rpmsg.git 9328 S: Maintained 9329 F: drivers/rpmsg/ ··· 11137 F: net/tipc/ 11138 11139 TILE ARCHITECTURE 11140 + M: Chris Metcalf <cmetcalf@mellanox.com> 11141 + W: http://www.mellanox.com/repository/solutions/tile-scm/ 11142 T: git git://git.kernel.org/pub/scm/linux/kernel/git/cmetcalf/linux-tile.git 11143 S: Supported 11144 F: arch/tile/
+1 -1
Makefile
··· 1 VERSION = 4 2 PATCHLEVEL = 6 3 SUBLEVEL = 0 4 - EXTRAVERSION = -rc1 5 NAME = Blurry Fish Butt 6 7 # *DOCUMENTATION*
··· 1 VERSION = 4 2 PATCHLEVEL = 6 3 SUBLEVEL = 0 4 + EXTRAVERSION = -rc2 5 NAME = Blurry Fish Butt 6 7 # *DOCUMENTATION*
+20 -8
arch/arm64/configs/defconfig
··· 68 CONFIG_TRANSPARENT_HUGEPAGE=y 69 CONFIG_CMA=y 70 CONFIG_XEN=y 71 - CONFIG_CMDLINE="console=ttyAMA0" 72 # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set 73 CONFIG_COMPAT=y 74 CONFIG_CPU_IDLE=y 75 CONFIG_ARM_CPUIDLE=y 76 CONFIG_NET=y 77 CONFIG_PACKET=y 78 CONFIG_UNIX=y ··· 82 CONFIG_IP_PNP=y 83 CONFIG_IP_PNP_DHCP=y 84 CONFIG_IP_PNP_BOOTP=y 85 - # CONFIG_INET_LRO is not set 86 # CONFIG_IPV6 is not set 87 CONFIG_BPF_JIT=y 88 # CONFIG_WIRELESS is not set ··· 145 CONFIG_SERIAL_MVEBU_UART=y 146 CONFIG_VIRTIO_CONSOLE=y 147 # CONFIG_HW_RANDOM is not set 148 - CONFIG_I2C=y 149 CONFIG_I2C_CHARDEV=y 150 CONFIG_I2C_MV64XXX=y 151 CONFIG_I2C_QUP=y 152 CONFIG_I2C_UNIPHIER_F=y 153 CONFIG_I2C_RCAR=y 154 CONFIG_SPI=y 155 CONFIG_SPI_PL022=y 156 CONFIG_SPI_QUP=y 157 CONFIG_SPMI=y 158 CONFIG_PINCTRL_MSM8916=y 159 CONFIG_PINCTRL_QCOM_SPMI_PMIC=y 160 CONFIG_GPIO_SYSFS=y ··· 199 CONFIG_USB_OHCI_HCD=y 200 CONFIG_USB_OHCI_HCD_PLATFORM=y 201 CONFIG_USB_STORAGE=y 202 CONFIG_USB_CHIPIDEA=y 203 CONFIG_USB_CHIPIDEA_UDC=y 204 CONFIG_USB_CHIPIDEA_HOST=y ··· 209 CONFIG_USB_ULPI=y 210 CONFIG_USB_GADGET=y 211 CONFIG_MMC=y 212 - CONFIG_MMC_BLOCK_MINORS=16 213 CONFIG_MMC_ARMMMCI=y 214 CONFIG_MMC_SDHCI=y 215 CONFIG_MMC_SDHCI_PLTFM=y 216 CONFIG_MMC_SDHCI_TEGRA=y 217 CONFIG_MMC_SDHCI_MSM=y 218 CONFIG_MMC_SPI=y 219 - CONFIG_MMC_SUNXI=y 220 CONFIG_MMC_DW=y 221 CONFIG_MMC_DW_EXYNOS=y 222 - CONFIG_MMC_BLOCK_MINORS=16 223 CONFIG_NEW_LEDS=y 224 CONFIG_LEDS_CLASS=y 225 CONFIG_LEDS_SYSCON=y 226 CONFIG_LEDS_TRIGGERS=y 227 CONFIG_LEDS_TRIGGER_HEARTBEAT=y ··· 234 CONFIG_RTC_DRV_SUN6I=y 235 CONFIG_RTC_DRV_XGENE=y 236 CONFIG_DMADEVICES=y 237 - CONFIG_QCOM_BAM_DMA=y 238 CONFIG_TEGRA20_APB_DMA=y 239 CONFIG_RCAR_DMAC=y 240 CONFIG_VFIO=y 241 CONFIG_VFIO_PCI=y ··· 244 CONFIG_VIRTIO_MMIO=y 245 CONFIG_XEN_GNTDEV=y 246 CONFIG_XEN_GRANT_DEV_ALLOC=y 247 CONFIG_COMMON_CLK_CS2000_CP=y 248 CONFIG_COMMON_CLK_QCOM=y 249 CONFIG_MSM_GCC_8916=y 250 CONFIG_HWSPINLOCK_QCOM=y 251 CONFIG_ARM_SMMU=y 252 CONFIG_QCOM_SMEM=y 253 CONFIG_QCOM_SMD=y 254 CONFIG_QCOM_SMD_RPM=y 255 CONFIG_ARCH_TEGRA_132_SOC=y 256 CONFIG_ARCH_TEGRA_210_SOC=y 257 - CONFIG_HISILICON_IRQ_MBIGEN=y 258 CONFIG_EXTCON_USB_GPIO=y 259 CONFIG_PHY_RCAR_GEN3_USB2=y 260 CONFIG_PHY_XGENE=y 261 CONFIG_EXT2_FS=y 262 CONFIG_EXT3_FS=y 263 CONFIG_FANOTIFY=y ··· 275 CONFIG_VFAT_FS=y 276 CONFIG_TMPFS=y 277 CONFIG_HUGETLBFS=y 278 CONFIG_EFIVAR_FS=y 279 CONFIG_SQUASHFS=y 280 CONFIG_NFS_FS=y
··· 68 CONFIG_TRANSPARENT_HUGEPAGE=y 69 CONFIG_CMA=y 70 CONFIG_XEN=y 71 # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set 72 CONFIG_COMPAT=y 73 CONFIG_CPU_IDLE=y 74 CONFIG_ARM_CPUIDLE=y 75 + CONFIG_CPU_FREQ=y 76 + CONFIG_ARM_BIG_LITTLE_CPUFREQ=y 77 + CONFIG_ARM_SCPI_CPUFREQ=y 78 CONFIG_NET=y 79 CONFIG_PACKET=y 80 CONFIG_UNIX=y ··· 80 CONFIG_IP_PNP=y 81 CONFIG_IP_PNP_DHCP=y 82 CONFIG_IP_PNP_BOOTP=y 83 # CONFIG_IPV6 is not set 84 CONFIG_BPF_JIT=y 85 # CONFIG_WIRELESS is not set ··· 144 CONFIG_SERIAL_MVEBU_UART=y 145 CONFIG_VIRTIO_CONSOLE=y 146 # CONFIG_HW_RANDOM is not set 147 CONFIG_I2C_CHARDEV=y 148 + CONFIG_I2C_DESIGNWARE_PLATFORM=y 149 CONFIG_I2C_MV64XXX=y 150 CONFIG_I2C_QUP=y 151 + CONFIG_I2C_TEGRA=y 152 CONFIG_I2C_UNIPHIER_F=y 153 CONFIG_I2C_RCAR=y 154 CONFIG_SPI=y 155 CONFIG_SPI_PL022=y 156 CONFIG_SPI_QUP=y 157 CONFIG_SPMI=y 158 + CONFIG_PINCTRL_SINGLE=y 159 CONFIG_PINCTRL_MSM8916=y 160 CONFIG_PINCTRL_QCOM_SPMI_PMIC=y 161 CONFIG_GPIO_SYSFS=y ··· 196 CONFIG_USB_OHCI_HCD=y 197 CONFIG_USB_OHCI_HCD_PLATFORM=y 198 CONFIG_USB_STORAGE=y 199 + CONFIG_USB_DWC2=y 200 CONFIG_USB_CHIPIDEA=y 201 CONFIG_USB_CHIPIDEA_UDC=y 202 CONFIG_USB_CHIPIDEA_HOST=y ··· 205 CONFIG_USB_ULPI=y 206 CONFIG_USB_GADGET=y 207 CONFIG_MMC=y 208 + CONFIG_MMC_BLOCK_MINORS=32 209 CONFIG_MMC_ARMMMCI=y 210 CONFIG_MMC_SDHCI=y 211 CONFIG_MMC_SDHCI_PLTFM=y 212 CONFIG_MMC_SDHCI_TEGRA=y 213 CONFIG_MMC_SDHCI_MSM=y 214 CONFIG_MMC_SPI=y 215 CONFIG_MMC_DW=y 216 CONFIG_MMC_DW_EXYNOS=y 217 + CONFIG_MMC_DW_K3=y 218 + CONFIG_MMC_SUNXI=y 219 CONFIG_NEW_LEDS=y 220 CONFIG_LEDS_CLASS=y 221 + CONFIG_LEDS_GPIO=y 222 CONFIG_LEDS_SYSCON=y 223 CONFIG_LEDS_TRIGGERS=y 224 CONFIG_LEDS_TRIGGER_HEARTBEAT=y ··· 229 CONFIG_RTC_DRV_SUN6I=y 230 CONFIG_RTC_DRV_XGENE=y 231 CONFIG_DMADEVICES=y 232 CONFIG_TEGRA20_APB_DMA=y 233 + CONFIG_QCOM_BAM_DMA=y 234 CONFIG_RCAR_DMAC=y 235 CONFIG_VFIO=y 236 CONFIG_VFIO_PCI=y ··· 239 CONFIG_VIRTIO_MMIO=y 240 CONFIG_XEN_GNTDEV=y 241 CONFIG_XEN_GRANT_DEV_ALLOC=y 242 + CONFIG_COMMON_CLK_SCPI=y 243 CONFIG_COMMON_CLK_CS2000_CP=y 244 CONFIG_COMMON_CLK_QCOM=y 245 CONFIG_MSM_GCC_8916=y 246 CONFIG_HWSPINLOCK_QCOM=y 247 + CONFIG_MAILBOX=y 248 + CONFIG_ARM_MHU=y 249 + CONFIG_HI6220_MBOX=y 250 CONFIG_ARM_SMMU=y 251 CONFIG_QCOM_SMEM=y 252 CONFIG_QCOM_SMD=y 253 CONFIG_QCOM_SMD_RPM=y 254 CONFIG_ARCH_TEGRA_132_SOC=y 255 CONFIG_ARCH_TEGRA_210_SOC=y 256 CONFIG_EXTCON_USB_GPIO=y 257 + CONFIG_COMMON_RESET_HI6220=y 258 CONFIG_PHY_RCAR_GEN3_USB2=y 259 + CONFIG_PHY_HI6220_USB=y 260 CONFIG_PHY_XGENE=y 261 + CONFIG_ARM_SCPI_PROTOCOL=y 262 CONFIG_EXT2_FS=y 263 CONFIG_EXT3_FS=y 264 CONFIG_FANOTIFY=y ··· 264 CONFIG_VFAT_FS=y 265 CONFIG_TMPFS=y 266 CONFIG_HUGETLBFS=y 267 + CONFIG_CONFIGFS_FS=y 268 CONFIG_EFIVAR_FS=y 269 CONFIG_SQUASHFS=y 270 CONFIG_NFS_FS=y
-1
arch/arm64/include/asm/kvm_host.h
··· 27 #include <asm/kvm.h> 28 #include <asm/kvm_asm.h> 29 #include <asm/kvm_mmio.h> 30 - #include <asm/kvm_perf_event.h> 31 32 #define __KVM_HAVE_ARCH_INTC_INITIALIZED 33
··· 27 #include <asm/kvm.h> 28 #include <asm/kvm_asm.h> 29 #include <asm/kvm_mmio.h> 30 31 #define __KVM_HAVE_ARCH_INTC_INITIALIZED 32
-1
arch/arm64/include/asm/kvm_hyp.h
··· 21 #include <linux/compiler.h> 22 #include <linux/kvm_host.h> 23 #include <asm/kvm_mmu.h> 24 - #include <asm/kvm_perf_event.h> 25 #include <asm/sysreg.h> 26 27 #define __hyp_text __section(.hyp.text) notrace
··· 21 #include <linux/compiler.h> 22 #include <linux/kvm_host.h> 23 #include <asm/kvm_mmu.h> 24 #include <asm/sysreg.h> 25 26 #define __hyp_text __section(.hyp.text) notrace
-68
arch/arm64/include/asm/kvm_perf_event.h
··· 1 - /* 2 - * Copyright (C) 2012 ARM Ltd. 3 - * 4 - * This program is free software; you can redistribute it and/or modify 5 - * it under the terms of the GNU General Public License version 2 as 6 - * published by the Free Software Foundation. 7 - * 8 - * This program is distributed in the hope that it will be useful, 9 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 10 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 11 - * GNU General Public License for more details. 12 - * 13 - * You should have received a copy of the GNU General Public License 14 - * along with this program. If not, see <http://www.gnu.org/licenses/>. 15 - */ 16 - 17 - #ifndef __ASM_KVM_PERF_EVENT_H 18 - #define __ASM_KVM_PERF_EVENT_H 19 - 20 - #define ARMV8_PMU_MAX_COUNTERS 32 21 - #define ARMV8_PMU_COUNTER_MASK (ARMV8_PMU_MAX_COUNTERS - 1) 22 - 23 - /* 24 - * Per-CPU PMCR: config reg 25 - */ 26 - #define ARMV8_PMU_PMCR_E (1 << 0) /* Enable all counters */ 27 - #define ARMV8_PMU_PMCR_P (1 << 1) /* Reset all counters */ 28 - #define ARMV8_PMU_PMCR_C (1 << 2) /* Cycle counter reset */ 29 - #define ARMV8_PMU_PMCR_D (1 << 3) /* CCNT counts every 64th cpu cycle */ 30 - #define ARMV8_PMU_PMCR_X (1 << 4) /* Export to ETM */ 31 - #define ARMV8_PMU_PMCR_DP (1 << 5) /* Disable CCNT if non-invasive debug*/ 32 - /* Determines which bit of PMCCNTR_EL0 generates an overflow */ 33 - #define ARMV8_PMU_PMCR_LC (1 << 6) 34 - #define ARMV8_PMU_PMCR_N_SHIFT 11 /* Number of counters supported */ 35 - #define ARMV8_PMU_PMCR_N_MASK 0x1f 36 - #define ARMV8_PMU_PMCR_MASK 0x7f /* Mask for writable bits */ 37 - 38 - /* 39 - * PMOVSR: counters overflow flag status reg 40 - */ 41 - #define ARMV8_PMU_OVSR_MASK 0xffffffff /* Mask for writable bits */ 42 - #define ARMV8_PMU_OVERFLOWED_MASK ARMV8_PMU_OVSR_MASK 43 - 44 - /* 45 - * PMXEVTYPER: Event selection reg 46 - */ 47 - #define ARMV8_PMU_EVTYPE_MASK 0xc80003ff /* Mask for writable bits */ 48 - #define ARMV8_PMU_EVTYPE_EVENT 0x3ff /* Mask for EVENT bits */ 49 - 50 - #define ARMV8_PMU_EVTYPE_EVENT_SW_INCR 0 /* Software increment event */ 51 - 52 - /* 53 - * Event filters for PMUv3 54 - */ 55 - #define ARMV8_PMU_EXCLUDE_EL1 (1 << 31) 56 - #define ARMV8_PMU_EXCLUDE_EL0 (1 << 30) 57 - #define ARMV8_PMU_INCLUDE_EL2 (1 << 27) 58 - 59 - /* 60 - * PMUSERENR: user enable reg 61 - */ 62 - #define ARMV8_PMU_USERENR_MASK 0xf /* Mask for writable bits */ 63 - #define ARMV8_PMU_USERENR_EN (1 << 0) /* PMU regs can be accessed at EL0 */ 64 - #define ARMV8_PMU_USERENR_SW (1 << 1) /* PMSWINC can be written at EL0 */ 65 - #define ARMV8_PMU_USERENR_CR (1 << 2) /* Cycle counter can be read at EL0 */ 66 - #define ARMV8_PMU_USERENR_ER (1 << 3) /* Event counter can be read at EL0 */ 67 - 68 - #endif
···
+4
arch/arm64/include/asm/opcodes.h
··· 1 #include <../../arm/include/asm/opcodes.h>
··· 1 + #ifdef CONFIG_CPU_BIG_ENDIAN 2 + #define CONFIG_CPU_ENDIAN_BE8 CONFIG_CPU_BIG_ENDIAN 3 + #endif 4 + 5 #include <../../arm/include/asm/opcodes.h>
+47
arch/arm64/include/asm/perf_event.h
··· 17 #ifndef __ASM_PERF_EVENT_H 18 #define __ASM_PERF_EVENT_H 19 20 #ifdef CONFIG_PERF_EVENTS 21 struct pt_regs; 22 extern unsigned long perf_instruction_pointer(struct pt_regs *regs);
··· 17 #ifndef __ASM_PERF_EVENT_H 18 #define __ASM_PERF_EVENT_H 19 20 + #define ARMV8_PMU_MAX_COUNTERS 32 21 + #define ARMV8_PMU_COUNTER_MASK (ARMV8_PMU_MAX_COUNTERS - 1) 22 + 23 + /* 24 + * Per-CPU PMCR: config reg 25 + */ 26 + #define ARMV8_PMU_PMCR_E (1 << 0) /* Enable all counters */ 27 + #define ARMV8_PMU_PMCR_P (1 << 1) /* Reset all counters */ 28 + #define ARMV8_PMU_PMCR_C (1 << 2) /* Cycle counter reset */ 29 + #define ARMV8_PMU_PMCR_D (1 << 3) /* CCNT counts every 64th cpu cycle */ 30 + #define ARMV8_PMU_PMCR_X (1 << 4) /* Export to ETM */ 31 + #define ARMV8_PMU_PMCR_DP (1 << 5) /* Disable CCNT if non-invasive debug*/ 32 + #define ARMV8_PMU_PMCR_LC (1 << 6) /* Overflow on 64 bit cycle counter */ 33 + #define ARMV8_PMU_PMCR_N_SHIFT 11 /* Number of counters supported */ 34 + #define ARMV8_PMU_PMCR_N_MASK 0x1f 35 + #define ARMV8_PMU_PMCR_MASK 0x7f /* Mask for writable bits */ 36 + 37 + /* 38 + * PMOVSR: counters overflow flag status reg 39 + */ 40 + #define ARMV8_PMU_OVSR_MASK 0xffffffff /* Mask for writable bits */ 41 + #define ARMV8_PMU_OVERFLOWED_MASK ARMV8_PMU_OVSR_MASK 42 + 43 + /* 44 + * PMXEVTYPER: Event selection reg 45 + */ 46 + #define ARMV8_PMU_EVTYPE_MASK 0xc800ffff /* Mask for writable bits */ 47 + #define ARMV8_PMU_EVTYPE_EVENT 0xffff /* Mask for EVENT bits */ 48 + 49 + #define ARMV8_PMU_EVTYPE_EVENT_SW_INCR 0 /* Software increment event */ 50 + 51 + /* 52 + * Event filters for PMUv3 53 + */ 54 + #define ARMV8_PMU_EXCLUDE_EL1 (1 << 31) 55 + #define ARMV8_PMU_EXCLUDE_EL0 (1 << 30) 56 + #define ARMV8_PMU_INCLUDE_EL2 (1 << 27) 57 + 58 + /* 59 + * PMUSERENR: user enable reg 60 + */ 61 + #define ARMV8_PMU_USERENR_MASK 0xf /* Mask for writable bits */ 62 + #define ARMV8_PMU_USERENR_EN (1 << 0) /* PMU regs can be accessed at EL0 */ 63 + #define ARMV8_PMU_USERENR_SW (1 << 1) /* PMSWINC can be written at EL0 */ 64 + #define ARMV8_PMU_USERENR_CR (1 << 2) /* Cycle counter can be read at EL0 */ 65 + #define ARMV8_PMU_USERENR_ER (1 << 3) /* Event counter can be read at EL0 */ 66 + 67 #ifdef CONFIG_PERF_EVENTS 68 struct pt_regs; 69 extern unsigned long perf_instruction_pointer(struct pt_regs *regs);
+19 -53
arch/arm64/kernel/perf_event.c
··· 20 */ 21 22 #include <asm/irq_regs.h> 23 #include <asm/virt.h> 24 25 #include <linux/of.h> ··· 385 #define ARMV8_IDX_COUNTER_LAST(cpu_pmu) \ 386 (ARMV8_IDX_CYCLE_COUNTER + cpu_pmu->num_events - 1) 387 388 - #define ARMV8_MAX_COUNTERS 32 389 - #define ARMV8_COUNTER_MASK (ARMV8_MAX_COUNTERS - 1) 390 - 391 /* 392 * ARMv8 low level PMU access 393 */ ··· 393 * Perf Event to low level counters mapping 394 */ 395 #define ARMV8_IDX_TO_COUNTER(x) \ 396 - (((x) - ARMV8_IDX_COUNTER0) & ARMV8_COUNTER_MASK) 397 - 398 - /* 399 - * Per-CPU PMCR: config reg 400 - */ 401 - #define ARMV8_PMCR_E (1 << 0) /* Enable all counters */ 402 - #define ARMV8_PMCR_P (1 << 1) /* Reset all counters */ 403 - #define ARMV8_PMCR_C (1 << 2) /* Cycle counter reset */ 404 - #define ARMV8_PMCR_D (1 << 3) /* CCNT counts every 64th cpu cycle */ 405 - #define ARMV8_PMCR_X (1 << 4) /* Export to ETM */ 406 - #define ARMV8_PMCR_DP (1 << 5) /* Disable CCNT if non-invasive debug*/ 407 - #define ARMV8_PMCR_LC (1 << 6) /* Overflow on 64 bit cycle counter */ 408 - #define ARMV8_PMCR_N_SHIFT 11 /* Number of counters supported */ 409 - #define ARMV8_PMCR_N_MASK 0x1f 410 - #define ARMV8_PMCR_MASK 0x7f /* Mask for writable bits */ 411 - 412 - /* 413 - * PMOVSR: counters overflow flag status reg 414 - */ 415 - #define ARMV8_OVSR_MASK 0xffffffff /* Mask for writable bits */ 416 - #define ARMV8_OVERFLOWED_MASK ARMV8_OVSR_MASK 417 - 418 - /* 419 - * PMXEVTYPER: Event selection reg 420 - */ 421 - #define ARMV8_EVTYPE_MASK 0xc800ffff /* Mask for writable bits */ 422 - #define ARMV8_EVTYPE_EVENT 0xffff /* Mask for EVENT bits */ 423 - 424 - /* 425 - * Event filters for PMUv3 426 - */ 427 - #define ARMV8_EXCLUDE_EL1 (1 << 31) 428 - #define ARMV8_EXCLUDE_EL0 (1 << 30) 429 - #define ARMV8_INCLUDE_EL2 (1 << 27) 430 431 static inline u32 armv8pmu_pmcr_read(void) 432 { ··· 404 405 static inline void armv8pmu_pmcr_write(u32 val) 406 { 407 - val &= ARMV8_PMCR_MASK; 408 isb(); 409 asm volatile("msr pmcr_el0, %0" :: "r" (val)); 410 } 411 412 static inline int armv8pmu_has_overflowed(u32 pmovsr) 413 { 414 - return pmovsr & ARMV8_OVERFLOWED_MASK; 415 } 416 417 static inline int armv8pmu_counter_valid(struct arm_pmu *cpu_pmu, int idx) ··· 477 static inline void armv8pmu_write_evtype(int idx, u32 val) 478 { 479 if (armv8pmu_select_counter(idx) == idx) { 480 - val &= ARMV8_EVTYPE_MASK; 481 asm volatile("msr pmxevtyper_el0, %0" :: "r" (val)); 482 } 483 } ··· 523 asm volatile("mrs %0, pmovsclr_el0" : "=r" (value)); 524 525 /* Write to clear flags */ 526 - value &= ARMV8_OVSR_MASK; 527 asm volatile("msr pmovsclr_el0, %0" :: "r" (value)); 528 529 return value; ··· 661 662 raw_spin_lock_irqsave(&events->pmu_lock, flags); 663 /* Enable all counters */ 664 - armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMCR_E); 665 raw_spin_unlock_irqrestore(&events->pmu_lock, flags); 666 } 667 ··· 672 673 raw_spin_lock_irqsave(&events->pmu_lock, flags); 674 /* Disable all counters */ 675 - armv8pmu_pmcr_write(armv8pmu_pmcr_read() & ~ARMV8_PMCR_E); 676 raw_spin_unlock_irqrestore(&events->pmu_lock, flags); 677 } 678 ··· 682 int idx; 683 struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu); 684 struct hw_perf_event *hwc = &event->hw; 685 - unsigned long evtype = hwc->config_base & ARMV8_EVTYPE_EVENT; 686 687 /* Always place a cycle counter into the cycle counter. 
*/ 688 if (evtype == ARMV8_PMUV3_PERFCTR_CLOCK_CYCLES) { ··· 719 attr->exclude_kernel != attr->exclude_hv) 720 return -EINVAL; 721 if (attr->exclude_user) 722 - config_base |= ARMV8_EXCLUDE_EL0; 723 if (!is_kernel_in_hyp_mode() && attr->exclude_kernel) 724 - config_base |= ARMV8_EXCLUDE_EL1; 725 if (!attr->exclude_hv) 726 - config_base |= ARMV8_INCLUDE_EL2; 727 728 /* 729 * Install the filter into config_base as this is used to ··· 749 * Initialize & Reset PMNC. Request overflow interrupt for 750 * 64 bit cycle counter but cheat in armv8pmu_write_counter(). 751 */ 752 - armv8pmu_pmcr_write(ARMV8_PMCR_P | ARMV8_PMCR_C | ARMV8_PMCR_LC); 753 } 754 755 static int armv8_pmuv3_map_event(struct perf_event *event) 756 { 757 return armpmu_map_event(event, &armv8_pmuv3_perf_map, 758 &armv8_pmuv3_perf_cache_map, 759 - ARMV8_EVTYPE_EVENT); 760 } 761 762 static int armv8_a53_map_event(struct perf_event *event) 763 { 764 return armpmu_map_event(event, &armv8_a53_perf_map, 765 &armv8_a53_perf_cache_map, 766 - ARMV8_EVTYPE_EVENT); 767 } 768 769 static int armv8_a57_map_event(struct perf_event *event) 770 { 771 return armpmu_map_event(event, &armv8_a57_perf_map, 772 &armv8_a57_perf_cache_map, 773 - ARMV8_EVTYPE_EVENT); 774 } 775 776 static int armv8_thunder_map_event(struct perf_event *event) 777 { 778 return armpmu_map_event(event, &armv8_thunder_perf_map, 779 &armv8_thunder_perf_cache_map, 780 - ARMV8_EVTYPE_EVENT); 781 } 782 783 static void armv8pmu_read_num_pmnc_events(void *info) ··· 786 int *nb_cnt = info; 787 788 /* Read the nb of CNTx counters supported from PMNC */ 789 - *nb_cnt = (armv8pmu_pmcr_read() >> ARMV8_PMCR_N_SHIFT) & ARMV8_PMCR_N_MASK; 790 791 /* Add the CPU cycles counter */ 792 *nb_cnt += 1;
··· 20 */ 21 22 #include <asm/irq_regs.h> 23 + #include <asm/perf_event.h> 24 #include <asm/virt.h> 25 26 #include <linux/of.h> ··· 384 #define ARMV8_IDX_COUNTER_LAST(cpu_pmu) \ 385 (ARMV8_IDX_CYCLE_COUNTER + cpu_pmu->num_events - 1) 386 387 /* 388 * ARMv8 low level PMU access 389 */ ··· 395 * Perf Event to low level counters mapping 396 */ 397 #define ARMV8_IDX_TO_COUNTER(x) \ 398 + (((x) - ARMV8_IDX_COUNTER0) & ARMV8_PMU_COUNTER_MASK) 399 400 static inline u32 armv8pmu_pmcr_read(void) 401 { ··· 439 440 static inline void armv8pmu_pmcr_write(u32 val) 441 { 442 + val &= ARMV8_PMU_PMCR_MASK; 443 isb(); 444 asm volatile("msr pmcr_el0, %0" :: "r" (val)); 445 } 446 447 static inline int armv8pmu_has_overflowed(u32 pmovsr) 448 { 449 + return pmovsr & ARMV8_PMU_OVERFLOWED_MASK; 450 } 451 452 static inline int armv8pmu_counter_valid(struct arm_pmu *cpu_pmu, int idx) ··· 512 static inline void armv8pmu_write_evtype(int idx, u32 val) 513 { 514 if (armv8pmu_select_counter(idx) == idx) { 515 + val &= ARMV8_PMU_EVTYPE_MASK; 516 asm volatile("msr pmxevtyper_el0, %0" :: "r" (val)); 517 } 518 } ··· 558 asm volatile("mrs %0, pmovsclr_el0" : "=r" (value)); 559 560 /* Write to clear flags */ 561 + value &= ARMV8_PMU_OVSR_MASK; 562 asm volatile("msr pmovsclr_el0, %0" :: "r" (value)); 563 564 return value; ··· 696 697 raw_spin_lock_irqsave(&events->pmu_lock, flags); 698 /* Enable all counters */ 699 + armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E); 700 raw_spin_unlock_irqrestore(&events->pmu_lock, flags); 701 } 702 ··· 707 708 raw_spin_lock_irqsave(&events->pmu_lock, flags); 709 /* Disable all counters */ 710 + armv8pmu_pmcr_write(armv8pmu_pmcr_read() & ~ARMV8_PMU_PMCR_E); 711 raw_spin_unlock_irqrestore(&events->pmu_lock, flags); 712 } 713 ··· 717 int idx; 718 struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu); 719 struct hw_perf_event *hwc = &event->hw; 720 + unsigned long evtype = hwc->config_base & ARMV8_PMU_EVTYPE_EVENT; 721 722 /* Always place a cycle counter into the cycle counter. */ 723 if (evtype == ARMV8_PMUV3_PERFCTR_CLOCK_CYCLES) { ··· 754 attr->exclude_kernel != attr->exclude_hv) 755 return -EINVAL; 756 if (attr->exclude_user) 757 + config_base |= ARMV8_PMU_EXCLUDE_EL0; 758 if (!is_kernel_in_hyp_mode() && attr->exclude_kernel) 759 + config_base |= ARMV8_PMU_EXCLUDE_EL1; 760 if (!attr->exclude_hv) 761 + config_base |= ARMV8_PMU_INCLUDE_EL2; 762 763 /* 764 * Install the filter into config_base as this is used to ··· 784 * Initialize & Reset PMNC. Request overflow interrupt for 785 * 64 bit cycle counter but cheat in armv8pmu_write_counter(). 
786 */ 787 + armv8pmu_pmcr_write(ARMV8_PMU_PMCR_P | ARMV8_PMU_PMCR_C | 788 + ARMV8_PMU_PMCR_LC); 789 } 790 791 static int armv8_pmuv3_map_event(struct perf_event *event) 792 { 793 return armpmu_map_event(event, &armv8_pmuv3_perf_map, 794 &armv8_pmuv3_perf_cache_map, 795 + ARMV8_PMU_EVTYPE_EVENT); 796 } 797 798 static int armv8_a53_map_event(struct perf_event *event) 799 { 800 return armpmu_map_event(event, &armv8_a53_perf_map, 801 &armv8_a53_perf_cache_map, 802 + ARMV8_PMU_EVTYPE_EVENT); 803 } 804 805 static int armv8_a57_map_event(struct perf_event *event) 806 { 807 return armpmu_map_event(event, &armv8_a57_perf_map, 808 &armv8_a57_perf_cache_map, 809 + ARMV8_PMU_EVTYPE_EVENT); 810 } 811 812 static int armv8_thunder_map_event(struct perf_event *event) 813 { 814 return armpmu_map_event(event, &armv8_thunder_perf_map, 815 &armv8_thunder_perf_cache_map, 816 + ARMV8_PMU_EVTYPE_EVENT); 817 } 818 819 static void armv8pmu_read_num_pmnc_events(void *info) ··· 820 int *nb_cnt = info; 821 822 /* Read the nb of CNTx counters supported from PMNC */ 823 + *nb_cnt = (armv8pmu_pmcr_read() >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK; 824 825 /* Add the CPU cycles counter */ 826 *nb_cnt += 1;
+1 -2
arch/nios2/kernel/prom.c
··· 97 return 0; 98 #endif 99 100 - *addr64 = fdt_translate_address((const void *)initial_boot_params, 101 - node); 102 103 return *addr64 == OF_BAD_ADDR ? 0 : 1; 104 }
··· 97 return 0; 98 #endif 99 100 + *addr64 = of_flat_dt_translate_address(node); 101 102 return *addr64 == OF_BAD_ADDR ? 0 : 1; 103 }
+1
arch/parisc/Kconfig
··· 30 select TTY # Needed for pdc_cons.c 31 select HAVE_DEBUG_STACKOVERFLOW 32 select HAVE_ARCH_AUDITSYSCALL 33 select ARCH_NO_COHERENT_DMA_MMAP 34 35 help
··· 30 select TTY # Needed for pdc_cons.c 31 select HAVE_DEBUG_STACKOVERFLOW 32 select HAVE_ARCH_AUDITSYSCALL 33 + select HAVE_ARCH_SECCOMP_FILTER 34 select ARCH_NO_COHERENT_DMA_MMAP 35 36 help
+7
arch/parisc/include/asm/compat.h
··· 183 int _band; /* POLL_IN, POLL_OUT, POLL_MSG */ 184 int _fd; 185 } _sigpoll; 186 } _sifields; 187 } compat_siginfo_t; 188
··· 183 int _band; /* POLL_IN, POLL_OUT, POLL_MSG */ 184 int _fd; 185 } _sigpoll; 186 + 187 + /* SIGSYS */ 188 + struct { 189 + compat_uptr_t _call_addr; /* calling user insn */ 190 + int _syscall; /* triggering system call number */ 191 + compat_uint_t _arch; /* AUDIT_ARCH_* of syscall */ 192 + } _sigsys; 193 } _sifields; 194 } compat_siginfo_t; 195
+13
arch/parisc/include/asm/syscall.h
··· 39 } 40 } 41 42 static inline int syscall_get_arch(void) 43 { 44 int arch = AUDIT_ARCH_PARISC;
··· 39 } 40 } 41 42 + static inline void syscall_set_return_value(struct task_struct *task, 43 + struct pt_regs *regs, 44 + int error, long val) 45 + { 46 + regs->gr[28] = error ? error : val; 47 + } 48 + 49 + static inline void syscall_rollback(struct task_struct *task, 50 + struct pt_regs *regs) 51 + { 52 + /* do nothing */ 53 + } 54 + 55 static inline int syscall_get_arch(void) 56 { 57 int arch = AUDIT_ARCH_PARISC;
+7 -2
arch/parisc/kernel/ptrace.c
··· 270 long do_syscall_trace_enter(struct pt_regs *regs) 271 { 272 /* Do the secure computing check first. */ 273 - secure_computing_strict(regs->gr[20]); 274 275 if (test_thread_flag(TIF_SYSCALL_TRACE) && 276 tracehook_report_syscall_entry(regs)) { ··· 297 regs->gr[23] & 0xffffffff); 298 299 out: 300 - return regs->gr[20]; 301 } 302 303 void do_syscall_trace_exit(struct pt_regs *regs)
··· 270 long do_syscall_trace_enter(struct pt_regs *regs) 271 { 272 /* Do the secure computing check first. */ 273 + if (secure_computing() == -1) 274 + return -1; 275 276 if (test_thread_flag(TIF_SYSCALL_TRACE) && 277 tracehook_report_syscall_entry(regs)) { ··· 296 regs->gr[23] & 0xffffffff); 297 298 out: 299 + /* 300 + * Sign extend the syscall number to 64bit since it may have been 301 + * modified by a compat ptrace call 302 + */ 303 + return (int) ((u32) regs->gr[20]); 304 } 305 306 void do_syscall_trace_exit(struct pt_regs *regs)
+5
arch/parisc/kernel/signal32.c
··· 371 val = (compat_int_t)from->si_int; 372 err |= __put_user(val, &to->si_int); 373 break; 374 } 375 } 376 return err;
··· 371 val = (compat_int_t)from->si_int; 372 err |= __put_user(val, &to->si_int); 373 break; 374 + case __SI_SYS >> 16: 375 + err |= __put_user(ptr_to_compat(from->si_call_addr), &to->si_call_addr); 376 + err |= __put_user(from->si_syscall, &to->si_syscall); 377 + err |= __put_user(from->si_arch, &to->si_arch); 378 + break; 379 } 380 } 381 return err;
+2
arch/parisc/kernel/syscall.S
··· 329 330 ldo -THREAD_SZ_ALGN-FRAME_SIZE(%r30),%r1 /* get task ptr */ 331 LDREG TI_TASK(%r1), %r1 332 LDREG TASK_PT_GR26(%r1), %r26 /* Restore the users args */ 333 LDREG TASK_PT_GR25(%r1), %r25 334 LDREG TASK_PT_GR24(%r1), %r24 ··· 343 stw %r21, -56(%r30) /* 6th argument */ 344 #endif 345 346 comiclr,>>= __NR_Linux_syscalls, %r20, %r0 347 b,n .Ltracesys_nosys 348
··· 329 330 ldo -THREAD_SZ_ALGN-FRAME_SIZE(%r30),%r1 /* get task ptr */ 331 LDREG TI_TASK(%r1), %r1 332 + LDREG TASK_PT_GR28(%r1), %r28 /* Restore return value */ 333 LDREG TASK_PT_GR26(%r1), %r26 /* Restore the users args */ 334 LDREG TASK_PT_GR25(%r1), %r25 335 LDREG TASK_PT_GR24(%r1), %r24 ··· 342 stw %r21, -56(%r30) /* 6th argument */ 343 #endif 344 345 + cmpib,COND(=),n -1,%r20,tracesys_exit /* seccomp may have returned -1 */ 346 comiclr,>>= __NR_Linux_syscalls, %r20, %r0 347 b,n .Ltracesys_nosys 348
+1 -1
arch/powerpc/include/asm/processor.h
··· 246 #endif /* CONFIG_ALTIVEC */ 247 #ifdef CONFIG_VSX 248 /* VSR status */ 249 - int used_vsr; /* set if process has used altivec */ 250 #endif /* CONFIG_VSX */ 251 #ifdef CONFIG_SPE 252 unsigned long evr[32]; /* upper 32-bits of SPE regs */
··· 246 #endif /* CONFIG_ALTIVEC */ 247 #ifdef CONFIG_VSX 248 /* VSR status */ 249 + int used_vsr; /* set if process has used VSX */ 250 #endif /* CONFIG_VSX */ 251 #ifdef CONFIG_SPE 252 unsigned long evr[32]; /* upper 32-bits of SPE regs */
+1 -1
arch/powerpc/kernel/process.c
··· 983 static inline void save_sprs(struct thread_struct *t) 984 { 985 #ifdef CONFIG_ALTIVEC 986 - if (cpu_has_feature(cpu_has_feature(CPU_FTR_ALTIVEC))) 987 t->vrsave = mfspr(SPRN_VRSAVE); 988 #endif 989 #ifdef CONFIG_PPC_BOOK3S_64
··· 983 static inline void save_sprs(struct thread_struct *t) 984 { 985 #ifdef CONFIG_ALTIVEC 986 + if (cpu_has_feature(CPU_FTR_ALTIVEC)) 987 t->vrsave = mfspr(SPRN_VRSAVE); 988 #endif 989 #ifdef CONFIG_PPC_BOOK3S_64
+2 -2
arch/powerpc/mm/hugetlbpage.c
··· 413 { 414 struct hugepd_freelist **batchp; 415 416 - batchp = this_cpu_ptr(&hugepd_freelist_cur); 417 418 if (atomic_read(&tlb->mm->mm_users) < 2 || 419 cpumask_equal(mm_cpumask(tlb->mm), 420 cpumask_of(smp_processor_id()))) { 421 kmem_cache_free(hugepte_cache, hugepte); 422 - put_cpu_var(hugepd_freelist_cur); 423 return; 424 } 425
··· 413 { 414 struct hugepd_freelist **batchp; 415 416 + batchp = &get_cpu_var(hugepd_freelist_cur); 417 418 if (atomic_read(&tlb->mm->mm_users) < 2 || 419 cpumask_equal(mm_cpumask(tlb->mm), 420 cpumask_of(smp_processor_id()))) { 421 kmem_cache_free(hugepte_cache, hugepte); 422 + put_cpu_var(hugepd_freelist_cur); 423 return; 424 } 425
+3
arch/s390/Kconfig
··· 59 config ARCH_SUPPORTS_UPROBES 60 def_bool y 61 62 config S390 63 def_bool y 64 select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
··· 59 config ARCH_SUPPORTS_UPROBES 60 def_bool y 61 62 + config DEBUG_RODATA 63 + def_bool y 64 + 65 config S390 66 def_bool y 67 select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
+2
arch/s390/crypto/prng.c
··· 669 static struct miscdevice prng_sha512_dev = { 670 .name = "prandom", 671 .minor = MISC_DYNAMIC_MINOR, 672 .fops = &prng_sha512_fops, 673 }; 674 static struct miscdevice prng_tdes_dev = { 675 .name = "prandom", 676 .minor = MISC_DYNAMIC_MINOR, 677 .fops = &prng_tdes_fops, 678 }; 679
··· 669 static struct miscdevice prng_sha512_dev = { 670 .name = "prandom", 671 .minor = MISC_DYNAMIC_MINOR, 672 + .mode = 0644, 673 .fops = &prng_sha512_fops, 674 }; 675 static struct miscdevice prng_tdes_dev = { 676 .name = "prandom", 677 .minor = MISC_DYNAMIC_MINOR, 678 + .mode = 0644, 679 .fops = &prng_tdes_fops, 680 }; 681
+3
arch/s390/include/asm/cache.h
··· 15 16 #define __read_mostly __attribute__((__section__(".data..read_mostly"))) 17 18 #endif
··· 15 16 #define __read_mostly __attribute__((__section__(".data..read_mostly"))) 17 18 + /* Read-only memory is marked before mark_rodata_ro() is called. */ 19 + #define __ro_after_init __read_mostly 20 + 21 #endif
+3 -1
arch/s390/include/uapi/asm/unistd.h
··· 311 #define __NR_shutdown 373 312 #define __NR_mlock2 374 313 #define __NR_copy_file_range 375 314 - #define NR_syscalls 376 315 316 /* 317 * There are some system calls that are not present on 64 bit, some
··· 311 #define __NR_shutdown 373 312 #define __NR_mlock2 374 313 #define __NR_copy_file_range 375 314 + #define __NR_preadv2 376 315 + #define __NR_pwritev2 377 316 + #define NR_syscalls 378 317 318 /* 319 * There are some system calls that are not present on 64 bit, some
+1
arch/s390/kernel/perf_cpum_cf.c
··· 670 671 switch (action & ~CPU_TASKS_FROZEN) { 672 case CPU_ONLINE: 673 flags = PMC_INIT; 674 smp_call_function_single(cpu, setup_pmc_cpu, &flags, 1); 675 break;
··· 670 671 switch (action & ~CPU_TASKS_FROZEN) { 672 case CPU_ONLINE: 673 + case CPU_DOWN_FAILED: 674 flags = PMC_INIT; 675 smp_call_function_single(cpu, setup_pmc_cpu, &flags, 1); 676 break;
+1 -1
arch/s390/kernel/perf_cpum_sf.c
··· 1521 1522 switch (action & ~CPU_TASKS_FROZEN) { 1523 case CPU_ONLINE: 1524 - case CPU_ONLINE_FROZEN: 1525 flags = PMC_INIT; 1526 smp_call_function_single(cpu, setup_pmc_cpu, &flags, 1); 1527 break;
··· 1521 1522 switch (action & ~CPU_TASKS_FROZEN) { 1523 case CPU_ONLINE: 1524 + case CPU_DOWN_FAILED: 1525 flags = PMC_INIT; 1526 smp_call_function_single(cpu, setup_pmc_cpu, &flags, 1); 1527 break;
+2
arch/s390/kernel/syscalls.S
··· 384 SYSCALL(sys_shutdown,sys_shutdown) 385 SYSCALL(sys_mlock2,compat_sys_mlock2) 386 SYSCALL(sys_copy_file_range,compat_sys_copy_file_range) /* 375 */
··· 384 SYSCALL(sys_shutdown,sys_shutdown) 385 SYSCALL(sys_mlock2,compat_sys_mlock2) 386 SYSCALL(sys_copy_file_range,compat_sys_copy_file_range) /* 375 */ 387 + SYSCALL(sys_preadv2,compat_sys_preadv2) 388 + SYSCALL(sys_pwritev2,compat_sys_pwritev2)
+5 -3
arch/s390/mm/gup.c
··· 20 static inline int gup_pte_range(pmd_t *pmdp, pmd_t pmd, unsigned long addr, 21 unsigned long end, int write, struct page **pages, int *nr) 22 { 23 unsigned long mask; 24 pte_t *ptep, pte; 25 - struct page *page; 26 27 mask = (write ? _PAGE_PROTECT : 0) | _PAGE_INVALID | _PAGE_SPECIAL; 28 ··· 37 return 0; 38 VM_BUG_ON(!pfn_valid(pte_pfn(pte))); 39 page = pte_page(pte); 40 - if (!page_cache_get_speculative(page)) 41 return 0; 42 if (unlikely(pte_val(pte) != pte_val(*ptep))) { 43 - put_page(page); 44 return 0; 45 } 46 pages[*nr] = page; 47 (*nr)++; 48
··· 20 static inline int gup_pte_range(pmd_t *pmdp, pmd_t pmd, unsigned long addr, 21 unsigned long end, int write, struct page **pages, int *nr) 22 { 23 + struct page *head, *page; 24 unsigned long mask; 25 pte_t *ptep, pte; 26 27 mask = (write ? _PAGE_PROTECT : 0) | _PAGE_INVALID | _PAGE_SPECIAL; 28 ··· 37 return 0; 38 VM_BUG_ON(!pfn_valid(pte_pfn(pte))); 39 page = pte_page(pte); 40 + head = compound_head(page); 41 + if (!page_cache_get_speculative(head)) 42 return 0; 43 if (unlikely(pte_val(pte) != pte_val(*ptep))) { 44 + put_page(head); 45 return 0; 46 } 47 + VM_BUG_ON_PAGE(compound_head(page) != head, page); 48 pages[*nr] = page; 49 (*nr)++; 50
+7 -3
arch/s390/mm/init.c
··· 108 free_area_init_nodes(max_zone_pfns); 109 } 110 111 void __init mem_init(void) 112 { 113 if (MACHINE_HAS_TLB_LC) ··· 133 setup_zero_pages(); /* Setup zeroed pages. */ 134 135 mem_init_print_info(NULL); 136 - printk("Write protected kernel read-only data: %#lx - %#lx\n", 137 - (unsigned long)&_stext, 138 - PFN_ALIGN((unsigned long)&_eshared) - 1); 139 } 140 141 void free_initmem(void)
··· 108 free_area_init_nodes(max_zone_pfns); 109 } 110 111 + void mark_rodata_ro(void) 112 + { 113 + /* Text and rodata are already protected. Nothing to do here. */ 114 + pr_info("Write protecting the kernel read-only data: %luk\n", 115 + ((unsigned long)&_eshared - (unsigned long)&_stext) >> 10); 116 + } 117 + 118 void __init mem_init(void) 119 { 120 if (MACHINE_HAS_TLB_LC) ··· 126 setup_zero_pages(); /* Setup zeroed pages. */ 127 128 mem_init_print_info(NULL); 129 } 130 131 void free_initmem(void)
+1 -2
arch/s390/pci/pci_clp.c
··· 176 rc = clp_store_query_pci_fn(zdev, &rrb->response); 177 if (rc) 178 goto out; 179 - if (rrb->response.pfgid) 180 - rc = clp_query_pci_fngrp(zdev, rrb->response.pfgid); 181 } else { 182 zpci_err("Q PCI FN:\n"); 183 zpci_err_clp(rrb->response.hdr.rsp, rc);
··· 176 rc = clp_store_query_pci_fn(zdev, &rrb->response); 177 if (rc) 178 goto out; 179 + rc = clp_query_pci_fngrp(zdev, rrb->response.pfgid); 180 } else { 181 zpci_err("Q PCI FN:\n"); 182 zpci_err_clp(rrb->response.hdr.rsp, rc);
+4 -4
arch/sparc/include/asm/compat_signal.h
··· 6 7 #ifdef CONFIG_COMPAT 8 struct __new_sigaction32 { 9 - unsigned sa_handler; 10 unsigned int sa_flags; 11 - unsigned sa_restorer; /* not used by Linux/SPARC yet */ 12 compat_sigset_t sa_mask; 13 }; 14 15 struct __old_sigaction32 { 16 - unsigned sa_handler; 17 compat_old_sigset_t sa_mask; 18 unsigned int sa_flags; 19 - unsigned sa_restorer; /* not used by Linux/SPARC yet */ 20 }; 21 #endif 22
··· 6 7 #ifdef CONFIG_COMPAT 8 struct __new_sigaction32 { 9 + unsigned int sa_handler; 10 unsigned int sa_flags; 11 + unsigned int sa_restorer; /* not used by Linux/SPARC yet */ 12 compat_sigset_t sa_mask; 13 }; 14 15 struct __old_sigaction32 { 16 + unsigned int sa_handler; 17 compat_old_sigset_t sa_mask; 18 unsigned int sa_flags; 19 + unsigned int sa_restorer; /* not used by Linux/SPARC yet */ 20 }; 21 #endif 22
+16 -16
arch/sparc/include/asm/obio.h
··· 117 "i" (ASI_M_CTL)); 118 } 119 120 - static inline unsigned bw_get_prof_limit(int cpu) 121 { 122 - unsigned limit; 123 124 __asm__ __volatile__ ("lda [%1] %2, %0" : 125 "=r" (limit) : ··· 128 return limit; 129 } 130 131 - static inline void bw_set_prof_limit(int cpu, unsigned limit) 132 { 133 __asm__ __volatile__ ("sta %0, [%1] %2" : : 134 "r" (limit), ··· 136 "i" (ASI_M_CTL)); 137 } 138 139 - static inline unsigned bw_get_ctrl(int cpu) 140 { 141 - unsigned ctrl; 142 143 __asm__ __volatile__ ("lda [%1] %2, %0" : 144 "=r" (ctrl) : ··· 147 return ctrl; 148 } 149 150 - static inline void bw_set_ctrl(int cpu, unsigned ctrl) 151 { 152 __asm__ __volatile__ ("sta %0, [%1] %2" : : 153 "r" (ctrl), ··· 155 "i" (ASI_M_CTL)); 156 } 157 158 - static inline unsigned cc_get_ipen(void) 159 { 160 - unsigned pending; 161 162 __asm__ __volatile__ ("lduha [%1] %2, %0" : 163 "=r" (pending) : ··· 166 return pending; 167 } 168 169 - static inline void cc_set_iclr(unsigned clear) 170 { 171 __asm__ __volatile__ ("stha %0, [%1] %2" : : 172 "r" (clear), ··· 174 "i" (ASI_M_MXCC)); 175 } 176 177 - static inline unsigned cc_get_imsk(void) 178 { 179 - unsigned mask; 180 181 __asm__ __volatile__ ("lduha [%1] %2, %0" : 182 "=r" (mask) : ··· 185 return mask; 186 } 187 188 - static inline void cc_set_imsk(unsigned mask) 189 { 190 __asm__ __volatile__ ("stha %0, [%1] %2" : : 191 "r" (mask), ··· 193 "i" (ASI_M_MXCC)); 194 } 195 196 - static inline unsigned cc_get_imsk_other(int cpuid) 197 { 198 - unsigned mask; 199 200 __asm__ __volatile__ ("lduha [%1] %2, %0" : 201 "=r" (mask) : ··· 204 return mask; 205 } 206 207 - static inline void cc_set_imsk_other(int cpuid, unsigned mask) 208 { 209 __asm__ __volatile__ ("stha %0, [%1] %2" : : 210 "r" (mask), ··· 212 "i" (ASI_M_CTL)); 213 } 214 215 - static inline void cc_set_igen(unsigned gen) 216 { 217 __asm__ __volatile__ ("sta %0, [%1] %2" : : 218 "r" (gen),
··· 117 "i" (ASI_M_CTL)); 118 } 119 120 + static inline unsigned int bw_get_prof_limit(int cpu) 121 { 122 + unsigned int limit; 123 124 __asm__ __volatile__ ("lda [%1] %2, %0" : 125 "=r" (limit) : ··· 128 return limit; 129 } 130 131 + static inline void bw_set_prof_limit(int cpu, unsigned int limit) 132 { 133 __asm__ __volatile__ ("sta %0, [%1] %2" : : 134 "r" (limit), ··· 136 "i" (ASI_M_CTL)); 137 } 138 139 + static inline unsigned int bw_get_ctrl(int cpu) 140 { 141 + unsigned int ctrl; 142 143 __asm__ __volatile__ ("lda [%1] %2, %0" : 144 "=r" (ctrl) : ··· 147 return ctrl; 148 } 149 150 + static inline void bw_set_ctrl(int cpu, unsigned int ctrl) 151 { 152 __asm__ __volatile__ ("sta %0, [%1] %2" : : 153 "r" (ctrl), ··· 155 "i" (ASI_M_CTL)); 156 } 157 158 + static inline unsigned int cc_get_ipen(void) 159 { 160 + unsigned int pending; 161 162 __asm__ __volatile__ ("lduha [%1] %2, %0" : 163 "=r" (pending) : ··· 166 return pending; 167 } 168 169 + static inline void cc_set_iclr(unsigned int clear) 170 { 171 __asm__ __volatile__ ("stha %0, [%1] %2" : : 172 "r" (clear), ··· 174 "i" (ASI_M_MXCC)); 175 } 176 177 + static inline unsigned int cc_get_imsk(void) 178 { 179 + unsigned int mask; 180 181 __asm__ __volatile__ ("lduha [%1] %2, %0" : 182 "=r" (mask) : ··· 185 return mask; 186 } 187 188 + static inline void cc_set_imsk(unsigned int mask) 189 { 190 __asm__ __volatile__ ("stha %0, [%1] %2" : : 191 "r" (mask), ··· 193 "i" (ASI_M_MXCC)); 194 } 195 196 + static inline unsigned int cc_get_imsk_other(int cpuid) 197 { 198 + unsigned int mask; 199 200 __asm__ __volatile__ ("lduha [%1] %2, %0" : 201 "=r" (mask) : ··· 204 return mask; 205 } 206 207 + static inline void cc_set_imsk_other(int cpuid, unsigned int mask) 208 { 209 __asm__ __volatile__ ("stha %0, [%1] %2" : : 210 "r" (mask), ··· 212 "i" (ASI_M_CTL)); 213 } 214 215 + static inline void cc_set_igen(unsigned int gen) 216 { 217 __asm__ __volatile__ ("sta %0, [%1] %2" : : 218 "r" (gen),
+5 -5
arch/sparc/include/asm/openprom.h
··· 29 /* V2 and later prom device operations. */ 30 struct linux_dev_v2_funcs { 31 phandle (*v2_inst2pkg)(int d); /* Convert ihandle to phandle */ 32 - char * (*v2_dumb_mem_alloc)(char *va, unsigned sz); 33 - void (*v2_dumb_mem_free)(char *va, unsigned sz); 34 35 /* To map devices into virtual I/O space. */ 36 - char * (*v2_dumb_mmap)(char *virta, int which_io, unsigned paddr, unsigned sz); 37 - void (*v2_dumb_munmap)(char *virta, unsigned size); 38 39 int (*v2_dev_open)(char *devpath); 40 void (*v2_dev_close)(int d); ··· 50 struct linux_mlist_v0 { 51 struct linux_mlist_v0 *theres_more; 52 unsigned int start_adr; 53 - unsigned num_bytes; 54 }; 55 56 struct linux_mem_v0 {
··· 29 /* V2 and later prom device operations. */ 30 struct linux_dev_v2_funcs { 31 phandle (*v2_inst2pkg)(int d); /* Convert ihandle to phandle */ 32 + char * (*v2_dumb_mem_alloc)(char *va, unsigned int sz); 33 + void (*v2_dumb_mem_free)(char *va, unsigned int sz); 34 35 /* To map devices into virtual I/O space. */ 36 + char * (*v2_dumb_mmap)(char *virta, int which_io, unsigned int paddr, unsigned int sz); 37 + void (*v2_dumb_munmap)(char *virta, unsigned int size); 38 39 int (*v2_dev_open)(char *devpath); 40 void (*v2_dev_close)(int d); ··· 50 struct linux_mlist_v0 { 51 struct linux_mlist_v0 *theres_more; 52 unsigned int start_adr; 53 + unsigned int num_bytes; 54 }; 55 56 struct linux_mem_v0 {
+1 -1
arch/sparc/include/asm/pgtable_64.h
··· 218 extern pgprot_t PAGE_COPY; 219 extern pgprot_t PAGE_SHARED; 220 221 - /* XXX This uglyness is for the atyfb driver's sparc mmap() support. XXX */ 222 extern unsigned long _PAGE_IE; 223 extern unsigned long _PAGE_E; 224 extern unsigned long _PAGE_CACHE;
··· 218 extern pgprot_t PAGE_COPY; 219 extern pgprot_t PAGE_SHARED; 220 221 + /* XXX This ugliness is for the atyfb driver's sparc mmap() support. XXX */ 222 extern unsigned long _PAGE_IE; 223 extern unsigned long _PAGE_E; 224 extern unsigned long _PAGE_CACHE;
+1 -1
arch/sparc/include/asm/processor_64.h
··· 201 #define KSTK_ESP(tsk) (task_pt_regs(tsk)->u_regs[UREG_FP]) 202 203 /* Please see the commentary in asm/backoff.h for a description of 204 - * what these instructions are doing and how they have been choosen. 205 * To make a long story short, we are trying to yield the current cpu 206 * strand during busy loops. 207 */
··· 201 #define KSTK_ESP(tsk) (task_pt_regs(tsk)->u_regs[UREG_FP]) 202 203 /* Please see the commentary in asm/backoff.h for a description of 204 + * what these instructions are doing and how they have been chosen. 205 * To make a long story short, we are trying to yield the current cpu 206 * strand during busy loops. 207 */
+1 -1
arch/sparc/include/asm/sigcontext.h
··· 25 int sigc_oswins; /* outstanding windows */ 26 27 /* stack ptrs for each regwin buf */ 28 - unsigned sigc_spbuf[__SUNOS_MAXWIN]; 29 30 /* Windows to restore after signal */ 31 struct reg_window32 sigc_wbuf[__SUNOS_MAXWIN];
··· 25 int sigc_oswins; /* outstanding windows */ 26 27 /* stack ptrs for each regwin buf */ 28 + unsigned int sigc_spbuf[__SUNOS_MAXWIN]; 29 30 /* Windows to restore after signal */ 31 struct reg_window32 sigc_wbuf[__SUNOS_MAXWIN];
+1 -1
arch/sparc/include/asm/tsb.h
··· 149 * page size in question. So for PMD mappings (which fall on 150 * bit 23, for 8MB per PMD) we must propagate bit 22 for a 151 * 4MB huge page. For huge PUDs (which fall on bit 33, for 152 - * 8GB per PUD), we have to accomodate 256MB and 2GB huge 153 * pages. So for those we propagate bits 32 to 28. 154 */ 155 #define KERN_PGTABLE_WALK(VADDR, REG1, REG2, FAIL_LABEL) \
··· 149 * page size in question. So for PMD mappings (which fall on 150 * bit 23, for 8MB per PMD) we must propagate bit 22 for a 151 * 4MB huge page. For huge PUDs (which fall on bit 33, for 152 + * 8GB per PUD), we have to accommodate 256MB and 2GB huge 153 * pages. So for those we propagate bits 32 to 28. 154 */ 155 #define KERN_PGTABLE_WALK(VADDR, REG1, REG2, FAIL_LABEL) \
+2 -2
arch/sparc/include/uapi/asm/stat.h
··· 6 #if defined(__sparc__) && defined(__arch64__) 7 /* 64 bit sparc */ 8 struct stat { 9 - unsigned st_dev; 10 ino_t st_ino; 11 mode_t st_mode; 12 short st_nlink; 13 uid_t st_uid; 14 gid_t st_gid; 15 - unsigned st_rdev; 16 off_t st_size; 17 time_t st_atime; 18 time_t st_mtime;
··· 6 #if defined(__sparc__) && defined(__arch64__) 7 /* 64 bit sparc */ 8 struct stat { 9 + unsigned int st_dev; 10 ino_t st_ino; 11 mode_t st_mode; 12 short st_nlink; 13 uid_t st_uid; 14 gid_t st_gid; 15 + unsigned int st_rdev; 16 off_t st_size; 17 time_t st_atime; 18 time_t st_mtime;
+6 -6
arch/sparc/kernel/audit.c
··· 5 6 #include "kernel.h" 7 8 - static unsigned dir_class[] = { 9 #include <asm-generic/audit_dir_write.h> 10 ~0U 11 }; 12 13 - static unsigned read_class[] = { 14 #include <asm-generic/audit_read.h> 15 ~0U 16 }; 17 18 - static unsigned write_class[] = { 19 #include <asm-generic/audit_write.h> 20 ~0U 21 }; 22 23 - static unsigned chattr_class[] = { 24 #include <asm-generic/audit_change_attr.h> 25 ~0U 26 }; 27 28 - static unsigned signal_class[] = { 29 #include <asm-generic/audit_signal.h> 30 ~0U 31 }; ··· 39 return 0; 40 } 41 42 - int audit_classify_syscall(int abi, unsigned syscall) 43 { 44 #ifdef CONFIG_COMPAT 45 if (abi == AUDIT_ARCH_SPARC)
··· 5 6 #include "kernel.h" 7 8 + static unsigned int dir_class[] = { 9 #include <asm-generic/audit_dir_write.h> 10 ~0U 11 }; 12 13 + static unsigned int read_class[] = { 14 #include <asm-generic/audit_read.h> 15 ~0U 16 }; 17 18 + static unsigned int write_class[] = { 19 #include <asm-generic/audit_write.h> 20 ~0U 21 }; 22 23 + static unsigned int chattr_class[] = { 24 #include <asm-generic/audit_change_attr.h> 25 ~0U 26 }; 27 28 + static unsigned int signal_class[] = { 29 #include <asm-generic/audit_signal.h> 30 ~0U 31 }; ··· 39 return 0; 40 } 41 42 + int audit_classify_syscall(int abi, unsigned int syscall) 43 { 44 #ifdef CONFIG_COMPAT 45 if (abi == AUDIT_ARCH_SPARC)
+6 -6
arch/sparc/kernel/compat_audit.c
··· 2 #include <asm/unistd.h> 3 #include "kernel.h" 4 5 - unsigned sparc32_dir_class[] = { 6 #include <asm-generic/audit_dir_write.h> 7 ~0U 8 }; 9 10 - unsigned sparc32_chattr_class[] = { 11 #include <asm-generic/audit_change_attr.h> 12 ~0U 13 }; 14 15 - unsigned sparc32_write_class[] = { 16 #include <asm-generic/audit_write.h> 17 ~0U 18 }; 19 20 - unsigned sparc32_read_class[] = { 21 #include <asm-generic/audit_read.h> 22 ~0U 23 }; 24 25 - unsigned sparc32_signal_class[] = { 26 #include <asm-generic/audit_signal.h> 27 ~0U 28 }; 29 30 - int sparc32_classify_syscall(unsigned syscall) 31 { 32 switch(syscall) { 33 case __NR_open:
··· 2 #include <asm/unistd.h> 3 #include "kernel.h" 4 5 + unsigned int sparc32_dir_class[] = { 6 #include <asm-generic/audit_dir_write.h> 7 ~0U 8 }; 9 10 + unsigned int sparc32_chattr_class[] = { 11 #include <asm-generic/audit_change_attr.h> 12 ~0U 13 }; 14 15 + unsigned int sparc32_write_class[] = { 16 #include <asm-generic/audit_write.h> 17 ~0U 18 }; 19 20 + unsigned int sparc32_read_class[] = { 21 #include <asm-generic/audit_read.h> 22 ~0U 23 }; 24 25 + unsigned int sparc32_signal_class[] = { 26 #include <asm-generic/audit_signal.h> 27 ~0U 28 }; 29 30 + int sparc32_classify_syscall(unsigned int syscall) 31 { 32 switch(syscall) { 33 case __NR_open:
+1 -1
arch/sparc/kernel/entry.S
··· 1255 kuw_patch1_7win: sll %o3, 6, %o3 1256 1257 /* No matter how much overhead this routine has in the worst 1258 - * case scenerio, it is several times better than taking the 1259 * traps with the old method of just doing flush_user_windows(). 1260 */ 1261 kill_user_windows:
··· 1255 kuw_patch1_7win: sll %o3, 6, %o3 1256 1257 /* No matter how much overhead this routine has in the worst 1258 + * case scenario, it is several times better than taking the 1259 * traps with the old method of just doing flush_user_windows(). 1260 */ 1261 kill_user_windows:
+3 -3
arch/sparc/kernel/ioport.c
··· 131 EXPORT_SYMBOL(ioremap); 132 133 /* 134 - * Comlimentary to ioremap(). 135 */ 136 void iounmap(volatile void __iomem *virtual) 137 { ··· 233 } 234 235 /* 236 - * Comlimentary to _sparc_ioremap(). 237 */ 238 static void _sparc_free_io(struct resource *res) 239 { ··· 532 } 533 534 /* Map a set of buffers described by scatterlist in streaming 535 - * mode for DMA. This is the scather-gather version of the 536 * above pci_map_single interface. Here the scatter gather list 537 * elements are each tagged with the appropriate dma address 538 * and length. They are obtained via sg_dma_{address,length}(SG).
··· 131 EXPORT_SYMBOL(ioremap); 132 133 /* 134 + * Complementary to ioremap(). 135 */ 136 void iounmap(volatile void __iomem *virtual) 137 { ··· 233 } 234 235 /* 236 + * Complementary to _sparc_ioremap(). 237 */ 238 static void _sparc_free_io(struct resource *res) 239 { ··· 532 } 533 534 /* Map a set of buffers described by scatterlist in streaming 535 + * mode for DMA. This is the scatter-gather version of the 536 * above pci_map_single interface. Here the scatter gather list 537 * elements are each tagged with the appropriate dma address 538 * and length. They are obtained via sg_dma_{address,length}(SG).
+6 -6
arch/sparc/kernel/kernel.h
··· 54 asmlinkage int do_sys32_sigstack(u32 u_ssptr, u32 u_ossptr, unsigned long sp); 55 56 /* compat_audit.c */ 57 - extern unsigned sparc32_dir_class[]; 58 - extern unsigned sparc32_chattr_class[]; 59 - extern unsigned sparc32_write_class[]; 60 - extern unsigned sparc32_read_class[]; 61 - extern unsigned sparc32_signal_class[]; 62 - int sparc32_classify_syscall(unsigned syscall); 63 #endif 64 65 #ifdef CONFIG_SPARC32
··· 54 asmlinkage int do_sys32_sigstack(u32 u_ssptr, u32 u_ossptr, unsigned long sp); 55 56 /* compat_audit.c */ 57 + extern unsigned int sparc32_dir_class[]; 58 + extern unsigned int sparc32_chattr_class[]; 59 + extern unsigned int sparc32_write_class[]; 60 + extern unsigned int sparc32_read_class[]; 61 + extern unsigned int sparc32_signal_class[]; 62 + int sparc32_classify_syscall(unsigned int syscall); 63 #endif 64 65 #ifdef CONFIG_SPARC32
+1 -1
arch/sparc/kernel/leon_kernel.c
··· 203 204 /* 205 * Build a LEON IRQ for the edge triggered LEON IRQ controller: 206 - * Edge (normal) IRQ - handle_simple_irq, ack=DONT-CARE, never ack 207 * Level IRQ (PCI|Level-GPIO) - handle_fasteoi_irq, ack=1, ack after ISR 208 * Per-CPU Edge - handle_percpu_irq, ack=0 209 */
··· 203 204 /* 205 * Build a LEON IRQ for the edge triggered LEON IRQ controller: 206 + * Edge (normal) IRQ - handle_simple_irq, ack=DON'T-CARE, never ack 207 * Level IRQ (PCI|Level-GPIO) - handle_fasteoi_irq, ack=1, ack after ISR 208 * Per-CPU Edge - handle_percpu_irq, ack=0 209 */
+1 -1
arch/sparc/kernel/process_64.c
··· 103 mm_segment_t old_fs; 104 105 __asm__ __volatile__ ("flushw"); 106 - rw = compat_ptr((unsigned)regs->u_regs[14]); 107 old_fs = get_fs(); 108 set_fs (USER_DS); 109 if (copy_from_user (&r_w, rw, sizeof(r_w))) {
··· 103 mm_segment_t old_fs; 104 105 __asm__ __volatile__ ("flushw"); 106 + rw = compat_ptr((unsigned int)regs->u_regs[14]); 107 old_fs = get_fs(); 108 set_fs (USER_DS); 109 if (copy_from_user (&r_w, rw, sizeof(r_w))) {
+1 -1
arch/sparc/kernel/setup_32.c
··· 109 unsigned char boot_cpu_id = 0xff; /* 0xff will make it into DATA section... */ 110 111 static void 112 - prom_console_write(struct console *con, const char *s, unsigned n) 113 { 114 prom_write(s, n); 115 }
··· 109 unsigned char boot_cpu_id = 0xff; /* 0xff will make it into DATA section... */ 110 111 static void 112 + prom_console_write(struct console *con, const char *s, unsigned int n) 113 { 114 prom_write(s, n); 115 }
+1 -1
arch/sparc/kernel/setup_64.c
··· 77 }; 78 79 static void 80 - prom_console_write(struct console *con, const char *s, unsigned n) 81 { 82 prom_write(s, n); 83 }
··· 77 }; 78 79 static void 80 + prom_console_write(struct console *con, const char *s, unsigned int n) 81 { 82 prom_write(s, n); 83 }
+1 -1
arch/sparc/kernel/signal32.c
··· 144 compat_uptr_t fpu_save; 145 compat_uptr_t rwin_save; 146 unsigned int psr; 147 - unsigned pc, npc; 148 sigset_t set; 149 compat_sigset_t seta; 150 int err, i;
··· 144 compat_uptr_t fpu_save; 145 compat_uptr_t rwin_save; 146 unsigned int psr; 147 + unsigned int pc, npc; 148 sigset_t set; 149 compat_sigset_t seta; 150 int err, i;
+2 -2
arch/sparc/kernel/sys_sparc_64.c
··· 337 switch (call) { 338 case SEMOP: 339 err = sys_semtimedop(first, ptr, 340 - (unsigned)second, NULL); 341 goto out; 342 case SEMTIMEDOP: 343 - err = sys_semtimedop(first, ptr, (unsigned)second, 344 (const struct timespec __user *) 345 (unsigned long) fifth); 346 goto out;
··· 337 switch (call) { 338 case SEMOP: 339 err = sys_semtimedop(first, ptr, 340 + (unsigned int)second, NULL); 341 goto out; 342 case SEMTIMEDOP: 343 + err = sys_semtimedop(first, ptr, (unsigned int)second, 344 (const struct timespec __user *) 345 (unsigned long) fifth); 346 goto out;
+1 -1
arch/sparc/kernel/sysfs.c
··· 1 - /* sysfs.c: Toplogy sysfs support code for sparc64. 2 * 3 * Copyright (C) 2007 David S. Miller <davem@davemloft.net> 4 */
··· 1 + /* sysfs.c: Topology sysfs support code for sparc64. 2 * 3 * Copyright (C) 2007 David S. Miller <davem@davemloft.net> 4 */
+2 -2
arch/sparc/kernel/unaligned_64.c
··· 209 if (size == 16) { 210 size = 8; 211 zero = (((long)(reg_num ? 212 - (unsigned)fetch_reg(reg_num, regs) : 0)) << 32) | 213 - (unsigned)fetch_reg(reg_num + 1, regs); 214 } else if (reg_num) { 215 src_val_p = fetch_reg_addr(reg_num, regs); 216 }
··· 209 if (size == 16) { 210 size = 8; 211 zero = (((long)(reg_num ? 212 + (unsigned int)fetch_reg(reg_num, regs) : 0)) << 32) | 213 + (unsigned int)fetch_reg(reg_num + 1, regs); 214 } else if (reg_num) { 215 src_val_p = fetch_reg_addr(reg_num, regs); 216 }
+4 -4
arch/sparc/mm/fault_32.c
··· 303 fixup = search_extables_range(regs->pc, &g2); 304 /* Values below 10 are reserved for other things */ 305 if (fixup > 10) { 306 - extern const unsigned __memset_start[]; 307 - extern const unsigned __memset_end[]; 308 - extern const unsigned __csum_partial_copy_start[]; 309 - extern const unsigned __csum_partial_copy_end[]; 310 311 #ifdef DEBUG_EXCEPTIONS 312 printk("Exception: PC<%08lx> faddr<%08lx>\n",
··· 303 fixup = search_extables_range(regs->pc, &g2); 304 /* Values below 10 are reserved for other things */ 305 if (fixup > 10) { 306 + extern const unsigned int __memset_start[]; 307 + extern const unsigned int __memset_end[]; 308 + extern const unsigned int __csum_partial_copy_start[]; 309 + extern const unsigned int __csum_partial_copy_end[]; 310 311 #ifdef DEBUG_EXCEPTIONS 312 printk("Exception: PC<%08lx> faddr<%08lx>\n",
+1 -1
arch/sparc/net/bpf_jit_comp.c
··· 351 * 352 * Sometimes we need to emit a branch earlier in the code 353 * sequence. And in these situations we adjust "destination" 354 - * to accomodate this difference. For example, if we needed 355 * to emit a branch (and it's delay slot) right before the 356 * final instruction emitted for a BPF opcode, we'd use 357 * "destination + 4" instead of just plain "destination" above.
··· 351 * 352 * Sometimes we need to emit a branch earlier in the code 353 * sequence. And in these situations we adjust "destination" 354 + * to accommodate this difference. For example, if we needed 355 * to emit a branch (and it's delay slot) right before the 356 * final instruction emitted for a BPF opcode, we'd use 357 * "destination + 4" instead of just plain "destination" above.
+13 -13
arch/tile/include/hv/drv_mpipe_intf.h
··· 211 * request shared data permission on the same link. 212 * 213 * No more than one of ::GXIO_MPIPE_LINK_DATA, ::GXIO_MPIPE_LINK_NO_DATA, 214 - * or ::GXIO_MPIPE_LINK_EXCL_DATA may be specifed in a gxio_mpipe_link_open() 215 * call. If none are specified, ::GXIO_MPIPE_LINK_DATA is assumed. 216 */ 217 #define GXIO_MPIPE_LINK_DATA 0x00000001UL ··· 219 /** Do not request data permission on the specified link. 220 * 221 * No more than one of ::GXIO_MPIPE_LINK_DATA, ::GXIO_MPIPE_LINK_NO_DATA, 222 - * or ::GXIO_MPIPE_LINK_EXCL_DATA may be specifed in a gxio_mpipe_link_open() 223 * call. If none are specified, ::GXIO_MPIPE_LINK_DATA is assumed. 224 */ 225 #define GXIO_MPIPE_LINK_NO_DATA 0x00000002UL ··· 230 * data permission on it, this open will fail. 231 * 232 * No more than one of ::GXIO_MPIPE_LINK_DATA, ::GXIO_MPIPE_LINK_NO_DATA, 233 - * or ::GXIO_MPIPE_LINK_EXCL_DATA may be specifed in a gxio_mpipe_link_open() 234 * call. If none are specified, ::GXIO_MPIPE_LINK_DATA is assumed. 235 */ 236 #define GXIO_MPIPE_LINK_EXCL_DATA 0x00000004UL ··· 241 * permission on the same link. 242 * 243 * No more than one of ::GXIO_MPIPE_LINK_STATS, ::GXIO_MPIPE_LINK_NO_STATS, 244 - * or ::GXIO_MPIPE_LINK_EXCL_STATS may be specifed in a gxio_mpipe_link_open() 245 * call. If none are specified, ::GXIO_MPIPE_LINK_STATS is assumed. 246 */ 247 #define GXIO_MPIPE_LINK_STATS 0x00000008UL ··· 249 /** Do not request stats permission on the specified link. 250 * 251 * No more than one of ::GXIO_MPIPE_LINK_STATS, ::GXIO_MPIPE_LINK_NO_STATS, 252 - * or ::GXIO_MPIPE_LINK_EXCL_STATS may be specifed in a gxio_mpipe_link_open() 253 * call. If none are specified, ::GXIO_MPIPE_LINK_STATS is assumed. 254 */ 255 #define GXIO_MPIPE_LINK_NO_STATS 0x00000010UL ··· 267 * reset by other statistics programs. 268 * 269 * No more than one of ::GXIO_MPIPE_LINK_STATS, ::GXIO_MPIPE_LINK_NO_STATS, 270 - * or ::GXIO_MPIPE_LINK_EXCL_STATS may be specifed in a gxio_mpipe_link_open() 271 * call. If none are specified, ::GXIO_MPIPE_LINK_STATS is assumed. 272 */ 273 #define GXIO_MPIPE_LINK_EXCL_STATS 0x00000020UL ··· 278 * permission on the same link. 279 * 280 * No more than one of ::GXIO_MPIPE_LINK_CTL, ::GXIO_MPIPE_LINK_NO_CTL, 281 - * or ::GXIO_MPIPE_LINK_EXCL_CTL may be specifed in a gxio_mpipe_link_open() 282 * call. If none are specified, ::GXIO_MPIPE_LINK_CTL is assumed. 283 */ 284 #define GXIO_MPIPE_LINK_CTL 0x00000040UL ··· 286 /** Do not request control permission on the specified link. 287 * 288 * No more than one of ::GXIO_MPIPE_LINK_CTL, ::GXIO_MPIPE_LINK_NO_CTL, 289 - * or ::GXIO_MPIPE_LINK_EXCL_CTL may be specifed in a gxio_mpipe_link_open() 290 * call. If none are specified, ::GXIO_MPIPE_LINK_CTL is assumed. 291 */ 292 #define GXIO_MPIPE_LINK_NO_CTL 0x00000080UL ··· 301 * it prevents programs like mpipe-link from configuring the link. 302 * 303 * No more than one of ::GXIO_MPIPE_LINK_CTL, ::GXIO_MPIPE_LINK_NO_CTL, 304 - * or ::GXIO_MPIPE_LINK_EXCL_CTL may be specifed in a gxio_mpipe_link_open() 305 * call. If none are specified, ::GXIO_MPIPE_LINK_CTL is assumed. 306 */ 307 #define GXIO_MPIPE_LINK_EXCL_CTL 0x00000100UL ··· 311 * change the desired state of the link when it is closed or the process 312 * exits. No more than one of ::GXIO_MPIPE_LINK_AUTO_UP, 313 * ::GXIO_MPIPE_LINK_AUTO_UPDOWN, ::GXIO_MPIPE_LINK_AUTO_DOWN, or 314 - * ::GXIO_MPIPE_LINK_AUTO_NONE may be specifed in a gxio_mpipe_link_open() 315 * call. If none are specified, ::GXIO_MPIPE_LINK_AUTO_UPDOWN is assumed. 
316 */ 317 #define GXIO_MPIPE_LINK_AUTO_UP 0x00000200UL ··· 322 * open, set the desired state of the link to down. No more than one of 323 * ::GXIO_MPIPE_LINK_AUTO_UP, ::GXIO_MPIPE_LINK_AUTO_UPDOWN, 324 * ::GXIO_MPIPE_LINK_AUTO_DOWN, or ::GXIO_MPIPE_LINK_AUTO_NONE may be 325 - * specifed in a gxio_mpipe_link_open() call. If none are specified, 326 * ::GXIO_MPIPE_LINK_AUTO_UPDOWN is assumed. 327 */ 328 #define GXIO_MPIPE_LINK_AUTO_UPDOWN 0x00000400UL ··· 332 * process has the link open, set the desired state of the link to down. 333 * No more than one of ::GXIO_MPIPE_LINK_AUTO_UP, 334 * ::GXIO_MPIPE_LINK_AUTO_UPDOWN, ::GXIO_MPIPE_LINK_AUTO_DOWN, or 335 - * ::GXIO_MPIPE_LINK_AUTO_NONE may be specifed in a gxio_mpipe_link_open() 336 * call. If none are specified, ::GXIO_MPIPE_LINK_AUTO_UPDOWN is assumed. 337 */ 338 #define GXIO_MPIPE_LINK_AUTO_DOWN 0x00000800UL ··· 342 * closed or the process exits. No more than one of 343 * ::GXIO_MPIPE_LINK_AUTO_UP, ::GXIO_MPIPE_LINK_AUTO_UPDOWN, 344 * ::GXIO_MPIPE_LINK_AUTO_DOWN, or ::GXIO_MPIPE_LINK_AUTO_NONE may be 345 - * specifed in a gxio_mpipe_link_open() call. If none are specified, 346 * ::GXIO_MPIPE_LINK_AUTO_UPDOWN is assumed. 347 */ 348 #define GXIO_MPIPE_LINK_AUTO_NONE 0x00001000UL
··· 211 * request shared data permission on the same link. 212 * 213 * No more than one of ::GXIO_MPIPE_LINK_DATA, ::GXIO_MPIPE_LINK_NO_DATA, 214 + * or ::GXIO_MPIPE_LINK_EXCL_DATA may be specified in a gxio_mpipe_link_open() 215 * call. If none are specified, ::GXIO_MPIPE_LINK_DATA is assumed. 216 */ 217 #define GXIO_MPIPE_LINK_DATA 0x00000001UL ··· 219 /** Do not request data permission on the specified link. 220 * 221 * No more than one of ::GXIO_MPIPE_LINK_DATA, ::GXIO_MPIPE_LINK_NO_DATA, 222 + * or ::GXIO_MPIPE_LINK_EXCL_DATA may be specified in a gxio_mpipe_link_open() 223 * call. If none are specified, ::GXIO_MPIPE_LINK_DATA is assumed. 224 */ 225 #define GXIO_MPIPE_LINK_NO_DATA 0x00000002UL ··· 230 * data permission on it, this open will fail. 231 * 232 * No more than one of ::GXIO_MPIPE_LINK_DATA, ::GXIO_MPIPE_LINK_NO_DATA, 233 + * or ::GXIO_MPIPE_LINK_EXCL_DATA may be specified in a gxio_mpipe_link_open() 234 * call. If none are specified, ::GXIO_MPIPE_LINK_DATA is assumed. 235 */ 236 #define GXIO_MPIPE_LINK_EXCL_DATA 0x00000004UL ··· 241 * permission on the same link. 242 * 243 * No more than one of ::GXIO_MPIPE_LINK_STATS, ::GXIO_MPIPE_LINK_NO_STATS, 244 + * or ::GXIO_MPIPE_LINK_EXCL_STATS may be specified in a gxio_mpipe_link_open() 245 * call. If none are specified, ::GXIO_MPIPE_LINK_STATS is assumed. 246 */ 247 #define GXIO_MPIPE_LINK_STATS 0x00000008UL ··· 249 /** Do not request stats permission on the specified link. 250 * 251 * No more than one of ::GXIO_MPIPE_LINK_STATS, ::GXIO_MPIPE_LINK_NO_STATS, 252 + * or ::GXIO_MPIPE_LINK_EXCL_STATS may be specified in a gxio_mpipe_link_open() 253 * call. If none are specified, ::GXIO_MPIPE_LINK_STATS is assumed. 254 */ 255 #define GXIO_MPIPE_LINK_NO_STATS 0x00000010UL ··· 267 * reset by other statistics programs. 268 * 269 * No more than one of ::GXIO_MPIPE_LINK_STATS, ::GXIO_MPIPE_LINK_NO_STATS, 270 + * or ::GXIO_MPIPE_LINK_EXCL_STATS may be specified in a gxio_mpipe_link_open() 271 * call. If none are specified, ::GXIO_MPIPE_LINK_STATS is assumed. 272 */ 273 #define GXIO_MPIPE_LINK_EXCL_STATS 0x00000020UL ··· 278 * permission on the same link. 279 * 280 * No more than one of ::GXIO_MPIPE_LINK_CTL, ::GXIO_MPIPE_LINK_NO_CTL, 281 + * or ::GXIO_MPIPE_LINK_EXCL_CTL may be specified in a gxio_mpipe_link_open() 282 * call. If none are specified, ::GXIO_MPIPE_LINK_CTL is assumed. 283 */ 284 #define GXIO_MPIPE_LINK_CTL 0x00000040UL ··· 286 /** Do not request control permission on the specified link. 287 * 288 * No more than one of ::GXIO_MPIPE_LINK_CTL, ::GXIO_MPIPE_LINK_NO_CTL, 289 + * or ::GXIO_MPIPE_LINK_EXCL_CTL may be specified in a gxio_mpipe_link_open() 290 * call. If none are specified, ::GXIO_MPIPE_LINK_CTL is assumed. 291 */ 292 #define GXIO_MPIPE_LINK_NO_CTL 0x00000080UL ··· 301 * it prevents programs like mpipe-link from configuring the link. 302 * 303 * No more than one of ::GXIO_MPIPE_LINK_CTL, ::GXIO_MPIPE_LINK_NO_CTL, 304 + * or ::GXIO_MPIPE_LINK_EXCL_CTL may be specified in a gxio_mpipe_link_open() 305 * call. If none are specified, ::GXIO_MPIPE_LINK_CTL is assumed. 306 */ 307 #define GXIO_MPIPE_LINK_EXCL_CTL 0x00000100UL ··· 311 * change the desired state of the link when it is closed or the process 312 * exits. No more than one of ::GXIO_MPIPE_LINK_AUTO_UP, 313 * ::GXIO_MPIPE_LINK_AUTO_UPDOWN, ::GXIO_MPIPE_LINK_AUTO_DOWN, or 314 + * ::GXIO_MPIPE_LINK_AUTO_NONE may be specified in a gxio_mpipe_link_open() 315 * call. If none are specified, ::GXIO_MPIPE_LINK_AUTO_UPDOWN is assumed. 
316 */ 317 #define GXIO_MPIPE_LINK_AUTO_UP 0x00000200UL ··· 322 * open, set the desired state of the link to down. No more than one of 323 * ::GXIO_MPIPE_LINK_AUTO_UP, ::GXIO_MPIPE_LINK_AUTO_UPDOWN, 324 * ::GXIO_MPIPE_LINK_AUTO_DOWN, or ::GXIO_MPIPE_LINK_AUTO_NONE may be 325 + * specified in a gxio_mpipe_link_open() call. If none are specified, 326 * ::GXIO_MPIPE_LINK_AUTO_UPDOWN is assumed. 327 */ 328 #define GXIO_MPIPE_LINK_AUTO_UPDOWN 0x00000400UL ··· 332 * process has the link open, set the desired state of the link to down. 333 * No more than one of ::GXIO_MPIPE_LINK_AUTO_UP, 334 * ::GXIO_MPIPE_LINK_AUTO_UPDOWN, ::GXIO_MPIPE_LINK_AUTO_DOWN, or 335 + * ::GXIO_MPIPE_LINK_AUTO_NONE may be specified in a gxio_mpipe_link_open() 336 * call. If none are specified, ::GXIO_MPIPE_LINK_AUTO_UPDOWN is assumed. 337 */ 338 #define GXIO_MPIPE_LINK_AUTO_DOWN 0x00000800UL ··· 342 * closed or the process exits. No more than one of 343 * ::GXIO_MPIPE_LINK_AUTO_UP, ::GXIO_MPIPE_LINK_AUTO_UPDOWN, 344 * ::GXIO_MPIPE_LINK_AUTO_DOWN, or ::GXIO_MPIPE_LINK_AUTO_NONE may be 345 + * specified in a gxio_mpipe_link_open() call. If none are specified, 346 * ::GXIO_MPIPE_LINK_AUTO_UPDOWN is assumed. 347 */ 348 #define GXIO_MPIPE_LINK_AUTO_NONE 0x00001000UL
+8 -8
arch/tile/kernel/kgdb.c
··· 126 sleeping_thread_to_gdb_regs(unsigned long *gdb_regs, struct task_struct *task) 127 { 128 struct pt_regs *thread_regs; 129 130 if (task == NULL) 131 return; 132 133 - /* Initialize to zero. */ 134 - memset(gdb_regs, 0, NUMREGBYTES); 135 - 136 thread_regs = task_pt_regs(task); 137 - memcpy(gdb_regs, thread_regs, TREG_LAST_GPR * sizeof(unsigned long)); 138 gdb_regs[TILEGX_PC_REGNUM] = thread_regs->pc; 139 gdb_regs[TILEGX_FAULTNUM_REGNUM] = thread_regs->faultnum; 140 } ··· 433 struct kgdb_arch arch_kgdb_ops; 434 435 /* 436 - * kgdb_arch_init - Perform any architecture specific initalization. 437 * 438 - * This function will handle the initalization of any architecture 439 * specific callbacks. 440 */ 441 int kgdb_arch_init(void) ··· 447 } 448 449 /* 450 - * kgdb_arch_exit - Perform any architecture specific uninitalization. 451 * 452 - * This function will handle the uninitalization of any architecture 453 * specific callbacks, for dynamic registration and unregistration. 454 */ 455 void kgdb_arch_exit(void)
··· 126 sleeping_thread_to_gdb_regs(unsigned long *gdb_regs, struct task_struct *task) 127 { 128 struct pt_regs *thread_regs; 129 + const int NGPRS = TREG_LAST_GPR + 1; 130 131 if (task == NULL) 132 return; 133 134 thread_regs = task_pt_regs(task); 135 + memcpy(gdb_regs, thread_regs, NGPRS * sizeof(unsigned long)); 136 + memset(&gdb_regs[NGPRS], 0, 137 + (TILEGX_PC_REGNUM - NGPRS) * sizeof(unsigned long)); 138 gdb_regs[TILEGX_PC_REGNUM] = thread_regs->pc; 139 gdb_regs[TILEGX_FAULTNUM_REGNUM] = thread_regs->faultnum; 140 } ··· 433 struct kgdb_arch arch_kgdb_ops; 434 435 /* 436 + * kgdb_arch_init - Perform any architecture specific initialization. 437 * 438 + * This function will handle the initialization of any architecture 439 * specific callbacks. 440 */ 441 int kgdb_arch_init(void) ··· 447 } 448 449 /* 450 + * kgdb_arch_exit - Perform any architecture specific uninitialization. 451 * 452 + * This function will handle the uninitialization of any architecture 453 * specific callbacks, for dynamic registration and unregistration. 454 */ 455 void kgdb_arch_exit(void)
+1 -1
arch/tile/kernel/pci_gx.c
··· 1326 1327 1328 /* 1329 - * See tile_cfg_read() for relevent comments. 1330 * Note that "val" is the value to write, not a pointer to that value. 1331 */ 1332 static int tile_cfg_write(struct pci_bus *bus, unsigned int devfn, int offset,
··· 1326 1327 1328 /* 1329 + * See tile_cfg_read() for relevant comments. 1330 * Note that "val" is the value to write, not a pointer to that value. 1331 */ 1332 static int tile_cfg_write(struct pci_bus *bus, unsigned int devfn, int offset,
+18 -3
arch/x86/events/amd/core.c
··· 369 370 WARN_ON_ONCE(cpuc->amd_nb); 371 372 - if (boot_cpu_data.x86_max_cores < 2) 373 return NOTIFY_OK; 374 375 cpuc->amd_nb = amd_alloc_nb(cpu); ··· 388 389 cpuc->perf_ctr_virt_mask = AMD64_EVENTSEL_HOSTONLY; 390 391 - if (boot_cpu_data.x86_max_cores < 2) 392 return; 393 394 nb_id = amd_get_nb_id(cpu); ··· 414 { 415 struct cpu_hw_events *cpuhw; 416 417 - if (boot_cpu_data.x86_max_cores < 2) 418 return; 419 420 cpuhw = &per_cpu(cpu_hw_events, cpu); ··· 648 .cpu_prepare = amd_pmu_cpu_prepare, 649 .cpu_starting = amd_pmu_cpu_starting, 650 .cpu_dead = amd_pmu_cpu_dead, 651 }; 652 653 static int __init amd_core_pmu_init(void) ··· 676 x86_pmu.eventsel = MSR_F15H_PERF_CTL; 677 x86_pmu.perfctr = MSR_F15H_PERF_CTR; 678 x86_pmu.num_counters = AMD64_NUM_COUNTERS_CORE; 679 680 pr_cont("core perfctr, "); 681 return 0; ··· 699 ret = amd_core_pmu_init(); 700 if (ret) 701 return ret; 702 703 /* Events are common for all AMDs */ 704 memcpy(hw_cache_event_ids, amd_hw_cache_event_ids,
··· 369 370 WARN_ON_ONCE(cpuc->amd_nb); 371 372 + if (!x86_pmu.amd_nb_constraints) 373 return NOTIFY_OK; 374 375 cpuc->amd_nb = amd_alloc_nb(cpu); ··· 388 389 cpuc->perf_ctr_virt_mask = AMD64_EVENTSEL_HOSTONLY; 390 391 + if (!x86_pmu.amd_nb_constraints) 392 return; 393 394 nb_id = amd_get_nb_id(cpu); ··· 414 { 415 struct cpu_hw_events *cpuhw; 416 417 + if (!x86_pmu.amd_nb_constraints) 418 return; 419 420 cpuhw = &per_cpu(cpu_hw_events, cpu); ··· 648 .cpu_prepare = amd_pmu_cpu_prepare, 649 .cpu_starting = amd_pmu_cpu_starting, 650 .cpu_dead = amd_pmu_cpu_dead, 651 + 652 + .amd_nb_constraints = 1, 653 }; 654 655 static int __init amd_core_pmu_init(void) ··· 674 x86_pmu.eventsel = MSR_F15H_PERF_CTL; 675 x86_pmu.perfctr = MSR_F15H_PERF_CTR; 676 x86_pmu.num_counters = AMD64_NUM_COUNTERS_CORE; 677 + /* 678 + * AMD Core perfctr has separate MSRs for the NB events, see 679 + * the amd/uncore.c driver. 680 + */ 681 + x86_pmu.amd_nb_constraints = 0; 682 683 pr_cont("core perfctr, "); 684 return 0; ··· 692 ret = amd_core_pmu_init(); 693 if (ret) 694 return ret; 695 + 696 + if (num_possible_cpus() == 1) { 697 + /* 698 + * No point in allocating data structures to serialize 699 + * against other CPUs, when there is only the one CPU. 700 + */ 701 + x86_pmu.amd_nb_constraints = 0; 702 + } 703 704 /* Events are common for all AMDs */ 705 memcpy(hw_cache_event_ids, amd_hw_cache_event_ids,
+45 -7
arch/x86/events/amd/ibs.c
··· 28 #define IBS_FETCH_CONFIG_MASK (IBS_FETCH_RAND_EN | IBS_FETCH_MAX_CNT) 29 #define IBS_OP_CONFIG_MASK IBS_OP_MAX_CNT 30 31 enum ibs_states { 32 IBS_ENABLED = 0, 33 IBS_STARTED = 1, 34 IBS_STOPPING = 2, 35 36 IBS_MAX_STATES, 37 }; ··· 413 414 perf_ibs_set_period(perf_ibs, hwc, &period); 415 /* 416 - * Set STARTED before enabling the hardware, such that 417 - * a subsequent NMI must observe it. Then clear STOPPING 418 - * such that we don't consume NMIs by accident. 419 */ 420 - set_bit(IBS_STARTED, pcpu->state); 421 clear_bit(IBS_STOPPING, pcpu->state); 422 perf_ibs_enable_event(perf_ibs, hwc, period >> 4); 423 ··· 431 u64 config; 432 int stopping; 433 434 stopping = test_bit(IBS_STARTED, pcpu->state); 435 436 if (!stopping && (hwc->state & PERF_HES_UPTODATE)) ··· 443 444 if (stopping) { 445 /* 446 - * Set STOPPING before disabling the hardware, such that it 447 * must be visible to NMIs the moment we clear the EN bit, 448 * at which point we can generate an !VALID sample which 449 * we need to consume. 450 */ 451 - set_bit(IBS_STOPPING, pcpu->state); 452 perf_ibs_disable_event(perf_ibs, hwc, config); 453 /* 454 * Clear STARTED after disabling the hardware; if it were ··· 594 * with samples that even have the valid bit cleared. 595 * Mark all this NMIs as handled. 596 */ 597 - if (test_and_clear_bit(IBS_STOPPING, pcpu->state)) 598 return 1; 599 600 return 0;
··· 28 #define IBS_FETCH_CONFIG_MASK (IBS_FETCH_RAND_EN | IBS_FETCH_MAX_CNT) 29 #define IBS_OP_CONFIG_MASK IBS_OP_MAX_CNT 30 31 + 32 + /* 33 + * IBS states: 34 + * 35 + * ENABLED; tracks the pmu::add(), pmu::del() state, when set the counter is taken 36 + * and any further add()s must fail. 37 + * 38 + * STARTED/STOPPING/STOPPED; deal with pmu::start(), pmu::stop() state but are 39 + * complicated by the fact that the IBS hardware can send late NMIs (ie. after 40 + * we've cleared the EN bit). 41 + * 42 + * In order to consume these late NMIs we have the STOPPED state, any NMI that 43 + * happens after we've cleared the EN state will clear this bit and report the 44 + * NMI handled (this is fundamentally racy in the face or multiple NMI sources, 45 + * someone else can consume our BIT and our NMI will go unhandled). 46 + * 47 + * And since we cannot set/clear this separate bit together with the EN bit, 48 + * there are races; if we cleared STARTED early, an NMI could land in 49 + * between clearing STARTED and clearing the EN bit (in fact multiple NMIs 50 + * could happen if the period is small enough), and consume our STOPPED bit 51 + * and trigger streams of unhandled NMIs. 52 + * 53 + * If, however, we clear STARTED late, an NMI can hit between clearing the 54 + * EN bit and clearing STARTED, still see STARTED set and process the event. 55 + * If this event will have the VALID bit clear, we bail properly, but this 56 + * is not a given. With VALID set we can end up calling pmu::stop() again 57 + * (the throttle logic) and trigger the WARNs in there. 58 + * 59 + * So what we do is set STOPPING before clearing EN to avoid the pmu::stop() 60 + * nesting, and clear STARTED late, so that we have a well defined state over 61 + * the clearing of the EN bit. 62 + * 63 + * XXX: we could probably be using !atomic bitops for all this. 64 + */ 65 + 66 enum ibs_states { 67 IBS_ENABLED = 0, 68 IBS_STARTED = 1, 69 IBS_STOPPING = 2, 70 + IBS_STOPPED = 3, 71 72 IBS_MAX_STATES, 73 }; ··· 377 378 perf_ibs_set_period(perf_ibs, hwc, &period); 379 /* 380 + * Set STARTED before enabling the hardware, such that a subsequent NMI 381 + * must observe it. 382 */ 383 + set_bit(IBS_STARTED, pcpu->state); 384 clear_bit(IBS_STOPPING, pcpu->state); 385 perf_ibs_enable_event(perf_ibs, hwc, period >> 4); 386 ··· 396 u64 config; 397 int stopping; 398 399 + if (test_and_set_bit(IBS_STOPPING, pcpu->state)) 400 + return; 401 + 402 stopping = test_bit(IBS_STARTED, pcpu->state); 403 404 if (!stopping && (hwc->state & PERF_HES_UPTODATE)) ··· 405 406 if (stopping) { 407 /* 408 + * Set STOPPED before disabling the hardware, such that it 409 * must be visible to NMIs the moment we clear the EN bit, 410 * at which point we can generate an !VALID sample which 411 * we need to consume. 412 */ 413 + set_bit(IBS_STOPPED, pcpu->state); 414 perf_ibs_disable_event(perf_ibs, hwc, config); 415 /* 416 * Clear STARTED after disabling the hardware; if it were ··· 556 * with samples that even have the valid bit cleared. 557 * Mark all this NMIs as handled. 558 */ 559 + if (test_and_clear_bit(IBS_STOPPED, pcpu->state)) 560 return 1; 561 562 return 0;
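The comment block added to ibs.c above walks through why STOPPING is set before the EN bit is cleared and why STARTED is cleared last. As a reading aid only, here is a stand-alone, user-space C sketch of that ordering; it is not part of the patch, the helper names (ibs_stop, fake_nmi, is_set, hw_enabled) are made up for illustration, and plain non-atomic bit twiddling stands in for the kernel's set_bit()/test_and_set_bit() on the per-CPU state word.

#include <stdio.h>
#include <stdbool.h>

enum { IBS_ENABLED, IBS_STARTED, IBS_STOPPING, IBS_STOPPED, IBS_MAX_STATES };

static unsigned long state;     /* stands in for the per-CPU state bitmask */
static bool hw_enabled;         /* stands in for the EN bit in the control MSR */

static bool test_and_set(int bit)
{
        bool old = state & (1UL << bit);
        state |= 1UL << bit;
        return old;
}

static bool test_and_clear(int bit)
{
        bool old = state & (1UL << bit);
        state &= ~(1UL << bit);
        return old;
}

static bool is_set(int bit)
{
        return state & (1UL << bit);
}

/* NMI handler: a late NMI arriving after EN was cleared is consumed via STOPPED. */
static int fake_nmi(void)
{
        if (is_set(IBS_STARTED))
                return 1;               /* normal sample path */
        if (test_and_clear(IBS_STOPPED))
                return 1;               /* expected spurious NMI, report handled */
        return 0;                       /* not ours */
}

static void ibs_stop(void)
{
        if (test_and_set(IBS_STOPPING)) /* guard against nested stop calls */
                return;

        if (is_set(IBS_STARTED)) {
                test_and_set(IBS_STOPPED);      /* STOPPED set before clearing EN */
                hw_enabled = false;             /* "clear the EN bit" */
                test_and_clear(IBS_STARTED);    /* STARTED cleared last */
        }
}

int main(void)
{
        state = (1UL << IBS_ENABLED) | (1UL << IBS_STARTED);
        hw_enabled = true;

        ibs_stop();
        ibs_stop();     /* nested stop returns early thanks to STOPPING */

        printf("EN bit: %d\n", hw_enabled);
        printf("late NMI handled: %d\n", fake_nmi());   /* 1: consumed via STOPPED */
        printf("second NMI handled: %d\n", fake_nmi()); /* 0: nothing left to consume */
        return 0;
}

Built with any C99 compiler, the first simulated NMI after the stop is reported handled by consuming STOPPED, the second is not, and the nested ibs_stop() call bails out early on the STOPPING guard, which is the behaviour the new comment argues for.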
+8 -3
arch/x86/events/perf_event.h
··· 608 atomic_t lbr_exclusive[x86_lbr_exclusive_max]; 609 610 /* 611 * Extra registers for events 612 */ 613 struct extra_reg *extra_regs; ··· 800 801 struct attribute **merge_attr(struct attribute **a, struct attribute **b); 802 803 #ifdef CONFIG_CPU_SUP_AMD 804 805 int amd_pmu_init(void); ··· 932 int p6_pmu_init(void); 933 934 int knc_pmu_init(void); 935 - 936 - ssize_t events_sysfs_show(struct device *dev, struct device_attribute *attr, 937 - char *page); 938 939 static inline int is_ht_workaround_enabled(void) 940 {
··· 608 atomic_t lbr_exclusive[x86_lbr_exclusive_max]; 609 610 /* 611 + * AMD bits 612 + */ 613 + unsigned int amd_nb_constraints : 1; 614 + 615 + /* 616 * Extra registers for events 617 */ 618 struct extra_reg *extra_regs; ··· 795 796 struct attribute **merge_attr(struct attribute **a, struct attribute **b); 797 798 + ssize_t events_sysfs_show(struct device *dev, struct device_attribute *attr, 799 + char *page); 800 + 801 #ifdef CONFIG_CPU_SUP_AMD 802 803 int amd_pmu_init(void); ··· 924 int p6_pmu_init(void); 925 926 int knc_pmu_init(void); 927 928 static inline int is_ht_workaround_enabled(void) 929 {
+1 -7
arch/x86/include/asm/msr-index.h
··· 190 #define MSR_PP1_ENERGY_STATUS 0x00000641 191 #define MSR_PP1_POLICY 0x00000642 192 193 #define MSR_CONFIG_TDP_NOMINAL 0x00000648 194 #define MSR_CONFIG_TDP_LEVEL_1 0x00000649 195 #define MSR_CONFIG_TDP_LEVEL_2 0x0000064A ··· 210 #define MSR_CORE_PERF_LIMIT_REASONS 0x00000690 211 #define MSR_GFX_PERF_LIMIT_REASONS 0x000006B0 212 #define MSR_RING_PERF_LIMIT_REASONS 0x000006B1 213 - 214 - /* Config TDP MSRs */ 215 - #define MSR_CONFIG_TDP_NOMINAL 0x00000648 216 - #define MSR_CONFIG_TDP_LEVEL1 0x00000649 217 - #define MSR_CONFIG_TDP_LEVEL2 0x0000064A 218 - #define MSR_CONFIG_TDP_CONTROL 0x0000064B 219 - #define MSR_TURBO_ACTIVATION_RATIO 0x0000064C 220 221 /* Hardware P state interface */ 222 #define MSR_PPERF 0x0000064e
··· 190 #define MSR_PP1_ENERGY_STATUS 0x00000641 191 #define MSR_PP1_POLICY 0x00000642 192 193 + /* Config TDP MSRs */ 194 #define MSR_CONFIG_TDP_NOMINAL 0x00000648 195 #define MSR_CONFIG_TDP_LEVEL_1 0x00000649 196 #define MSR_CONFIG_TDP_LEVEL_2 0x0000064A ··· 209 #define MSR_CORE_PERF_LIMIT_REASONS 0x00000690 210 #define MSR_GFX_PERF_LIMIT_REASONS 0x000006B0 211 #define MSR_RING_PERF_LIMIT_REASONS 0x000006B1 212 213 /* Hardware P state interface */ 214 #define MSR_PPERF 0x0000064e
+9
arch/x86/include/asm/pmem.h
··· 47 BUG(); 48 } 49 50 /** 51 * arch_wmb_pmem - synchronize writes to persistent memory 52 *
··· 47 BUG(); 48 } 49 50 + static inline int arch_memcpy_from_pmem(void *dst, const void __pmem *src, 51 + size_t n) 52 + { 53 + if (static_cpu_has(X86_FEATURE_MCE_RECOVERY)) 54 + return memcpy_mcsafe(dst, (void __force *) src, n); 55 + memcpy(dst, (void __force *) src, n); 56 + return 0; 57 + } 58 + 59 /** 60 * arch_wmb_pmem - synchronize writes to persistent memory 61 *
-2
arch/x86/include/asm/processor.h
··· 132 u16 logical_proc_id; 133 /* Core id: */ 134 u16 cpu_core_id; 135 - /* Compute unit id */ 136 - u8 compute_unit_id; 137 /* Index into per_cpu list: */ 138 u16 cpu_index; 139 u32 microcode;
··· 132 u16 logical_proc_id; 133 /* Core id: */ 134 u16 cpu_core_id; 135 /* Index into per_cpu list: */ 136 u16 cpu_index; 137 u32 microcode;
+1
arch/x86/include/asm/smp.h
··· 155 wbinvd(); 156 return 0; 157 } 158 #endif /* CONFIG_SMP */ 159 160 extern unsigned disabled_cpus;
··· 155 wbinvd(); 156 return 0; 157 } 158 + #define smp_num_siblings 1 159 #endif /* CONFIG_SMP */ 160 161 extern unsigned disabled_cpus;
+2 -4
arch/x86/include/asm/thread_info.h
··· 276 */ 277 #define force_iret() set_thread_flag(TIF_NOTIFY_RESUME) 278 279 - #endif /* !__ASSEMBLY__ */ 280 - 281 - #ifndef __ASSEMBLY__ 282 extern void arch_task_cache_init(void); 283 extern int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src); 284 extern void arch_release_task_struct(struct task_struct *tsk); 285 - #endif 286 #endif /* _ASM_X86_THREAD_INFO_H */
··· 276 */ 277 #define force_iret() set_thread_flag(TIF_NOTIFY_RESUME) 278 279 extern void arch_task_cache_init(void); 280 extern int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src); 281 extern void arch_release_task_struct(struct task_struct *tsk); 282 + #endif /* !__ASSEMBLY__ */ 283 + 284 #endif /* _ASM_X86_THREAD_INFO_H */
-6
arch/x86/include/asm/tlbflush.h
··· 319 320 #endif /* SMP */ 321 322 - /* Not inlined due to inc_irq_stat not being defined yet */ 323 - #define flush_tlb_local() { \ 324 - inc_irq_stat(irq_tlb_count); \ 325 - local_flush_tlb(); \ 326 - } 327 - 328 #ifndef CONFIG_PARAVIRT 329 #define flush_tlb_others(mask, mm, start, end) \ 330 native_flush_tlb_others(mask, mm, start, end)
··· 319 320 #endif /* SMP */ 321 322 #ifndef CONFIG_PARAVIRT 323 #define flush_tlb_others(mask, mm, start, end) \ 324 native_flush_tlb_others(mask, mm, start, end)
+2 -4
arch/x86/kernel/amd_nb.c
··· 170 { 171 struct pci_dev *link = node_to_amd_nb(amd_get_nb_id(cpu))->link; 172 unsigned int mask; 173 - int cuid; 174 175 if (!amd_nb_has_feature(AMD_NB_L3_PARTITIONING)) 176 return 0; 177 178 pci_read_config_dword(link, 0x1d4, &mask); 179 180 - cuid = cpu_data(cpu).compute_unit_id; 181 - return (mask >> (4 * cuid)) & 0xf; 182 } 183 184 int amd_set_subcaches(int cpu, unsigned long mask) ··· 202 pci_write_config_dword(nb->misc, 0x1b8, reg & ~0x180000); 203 } 204 205 - cuid = cpu_data(cpu).compute_unit_id; 206 mask <<= 4 * cuid; 207 mask |= (0xf ^ (1 << cuid)) << 26; 208
··· 170 { 171 struct pci_dev *link = node_to_amd_nb(amd_get_nb_id(cpu))->link; 172 unsigned int mask; 173 174 if (!amd_nb_has_feature(AMD_NB_L3_PARTITIONING)) 175 return 0; 176 177 pci_read_config_dword(link, 0x1d4, &mask); 178 179 + return (mask >> (4 * cpu_data(cpu).cpu_core_id)) & 0xf; 180 } 181 182 int amd_set_subcaches(int cpu, unsigned long mask) ··· 204 pci_write_config_dword(nb->misc, 0x1b8, reg & ~0x180000); 205 } 206 207 + cuid = cpu_data(cpu).cpu_core_id; 208 mask <<= 4 * cuid; 209 mask |= (0xf ^ (1 << cuid)) << 26; 210
+4 -8
arch/x86/kernel/cpu/amd.c
··· 300 #ifdef CONFIG_SMP 301 static void amd_get_topology(struct cpuinfo_x86 *c) 302 { 303 - u32 cores_per_cu = 1; 304 u8 node_id; 305 int cpu = smp_processor_id(); 306 ··· 312 313 /* get compute unit information */ 314 smp_num_siblings = ((ebx >> 8) & 3) + 1; 315 - c->compute_unit_id = ebx & 0xff; 316 - cores_per_cu += ((ebx >> 8) & 3); 317 } else if (cpu_has(c, X86_FEATURE_NODEID_MSR)) { 318 u64 value; 319 ··· 324 325 /* fixup multi-node processor information */ 326 if (nodes_per_socket > 1) { 327 - u32 cores_per_node; 328 u32 cus_per_node; 329 330 set_cpu_cap(c, X86_FEATURE_AMD_DCM); 331 - cores_per_node = c->x86_max_cores / nodes_per_socket; 332 - cus_per_node = cores_per_node / cores_per_cu; 333 334 /* store NodeID, use llc_shared_map to store sibling info */ 335 per_cpu(cpu_llc_id, cpu) = node_id; 336 337 /* core id has to be in the [0 .. cores_per_node - 1] range */ 338 - c->cpu_core_id %= cores_per_node; 339 - c->compute_unit_id %= cus_per_node; 340 } 341 } 342 #endif
··· 300 #ifdef CONFIG_SMP 301 static void amd_get_topology(struct cpuinfo_x86 *c) 302 { 303 u8 node_id; 304 int cpu = smp_processor_id(); 305 ··· 313 314 /* get compute unit information */ 315 smp_num_siblings = ((ebx >> 8) & 3) + 1; 316 + c->x86_max_cores /= smp_num_siblings; 317 + c->cpu_core_id = ebx & 0xff; 318 } else if (cpu_has(c, X86_FEATURE_NODEID_MSR)) { 319 u64 value; 320 ··· 325 326 /* fixup multi-node processor information */ 327 if (nodes_per_socket > 1) { 328 u32 cus_per_node; 329 330 set_cpu_cap(c, X86_FEATURE_AMD_DCM); 331 + cus_per_node = c->x86_max_cores / nodes_per_socket; 332 333 /* store NodeID, use llc_shared_map to store sibling info */ 334 per_cpu(cpu_llc_id, cpu) = node_id; 335 336 /* core id has to be in the [0 .. cores_per_node - 1] range */ 337 + c->cpu_core_id %= cus_per_node; 338 } 339 } 340 #endif
+3
arch/x86/kernel/cpu/mcheck/therm_throt.c
··· 384 { 385 __u64 msr_val; 386 387 rdmsrl(MSR_IA32_THERM_STATUS, msr_val); 388 389 /* Check for violation of core thermal thresholds*/
··· 384 { 385 __u64 msr_val; 386 387 + if (static_cpu_has(X86_FEATURE_HWP)) 388 + wrmsrl_safe(MSR_HWP_STATUS, 0); 389 + 390 rdmsrl(MSR_IA32_THERM_STATUS, msr_val); 391 392 /* Check for violation of core thermal thresholds*/
+2
arch/x86/kernel/cpu/powerflags.c
··· 18 "", /* tsc invariant mapped to constant_tsc */ 19 "cpb", /* core performance boost */ 20 "eff_freq_ro", /* Readonly aperf/mperf */ 21 };
··· 18 "", /* tsc invariant mapped to constant_tsc */ 19 "cpb", /* core performance boost */ 20 "eff_freq_ro", /* Readonly aperf/mperf */ 21 + "proc_feedback", /* processor feedback interface */ 22 + "acc_power", /* accumulated power mechanism */ 23 };
+1 -1
arch/x86/kernel/smpboot.c
··· 422 423 if (c->phys_proc_id == o->phys_proc_id && 424 per_cpu(cpu_llc_id, cpu1) == per_cpu(cpu_llc_id, cpu2) && 425 - c->compute_unit_id == o->compute_unit_id) 426 return topology_sane(c, o, "smt"); 427 428 } else if (c->phys_proc_id == o->phys_proc_id &&
··· 422 423 if (c->phys_proc_id == o->phys_proc_id && 424 per_cpu(cpu_llc_id, cpu1) == per_cpu(cpu_llc_id, cpu2) && 425 + c->cpu_core_id == o->cpu_core_id) 426 return topology_sane(c, o, "smt"); 427 428 } else if (c->phys_proc_id == o->phys_proc_id &&
+10 -4
arch/x86/mm/tlb.c
··· 104 105 inc_irq_stat(irq_tlb_count); 106 107 - if (f->flush_mm != this_cpu_read(cpu_tlbstate.active_mm)) 108 return; 109 - if (!f->flush_end) 110 - f->flush_end = f->flush_start + PAGE_SIZE; 111 112 count_vm_tlb_event(NR_TLB_REMOTE_FLUSH_RECEIVED); 113 if (this_cpu_read(cpu_tlbstate.state) == TLBSTATE_OK) { ··· 133 unsigned long end) 134 { 135 struct flush_tlb_info info; 136 info.flush_mm = mm; 137 info.flush_start = start; 138 info.flush_end = end; 139 140 count_vm_tlb_event(NR_TLB_REMOTE_FLUSH); 141 - trace_tlb_flush(TLB_REMOTE_SEND_IPI, end - start); 142 if (is_uv_system()) { 143 unsigned int cpu; 144
··· 104 105 inc_irq_stat(irq_tlb_count); 106 107 + if (f->flush_mm && f->flush_mm != this_cpu_read(cpu_tlbstate.active_mm)) 108 return; 109 110 count_vm_tlb_event(NR_TLB_REMOTE_FLUSH_RECEIVED); 111 if (this_cpu_read(cpu_tlbstate.state) == TLBSTATE_OK) { ··· 135 unsigned long end) 136 { 137 struct flush_tlb_info info; 138 + 139 + if (end == 0) 140 + end = start + PAGE_SIZE; 141 info.flush_mm = mm; 142 info.flush_start = start; 143 info.flush_end = end; 144 145 count_vm_tlb_event(NR_TLB_REMOTE_FLUSH); 146 + if (end == TLB_FLUSH_ALL) 147 + trace_tlb_flush(TLB_REMOTE_SEND_IPI, TLB_FLUSH_ALL); 148 + else 149 + trace_tlb_flush(TLB_REMOTE_SEND_IPI, 150 + (end - start) >> PAGE_SHIFT); 151 + 152 if (is_uv_system()) { 153 unsigned int cpu; 154
+2 -1
arch/x86/ras/mce_amd_inj.c
··· 20 #include <linux/pci.h> 21 22 #include <asm/mce.h> 23 #include <asm/amd_nb.h> 24 #include <asm/irq_vectors.h> 25 ··· 207 struct cpuinfo_x86 *c = &boot_cpu_data; 208 u32 cores_per_node; 209 210 - cores_per_node = c->x86_max_cores / amd_get_nodes_per_socket(); 211 212 return cores_per_node * node_id; 213 }
··· 20 #include <linux/pci.h> 21 22 #include <asm/mce.h> 23 + #include <asm/smp.h> 24 #include <asm/amd_nb.h> 25 #include <asm/irq_vectors.h> 26 ··· 206 struct cpuinfo_x86 *c = &boot_cpu_data; 207 u32 cores_per_node; 208 209 + cores_per_node = (c->x86_max_cores * smp_num_siblings) / amd_get_nodes_per_socket(); 210 211 return cores_per_node * node_id; 212 }
+2
crypto/asymmetric_keys/pkcs7_trust.c
··· 178 int cached_ret = -ENOKEY; 179 int ret; 180 181 for (p = pkcs7->certs; p; p = p->next) 182 p->seen = false; 183
··· 178 int cached_ret = -ENOKEY; 179 int ret; 180 181 + *_trusted = false; 182 + 183 for (p = pkcs7->certs; p; p = p->next) 184 p->seen = false; 185
+52
drivers/acpi/acpi_processor.c
··· 491 } 492 #endif /* CONFIG_ACPI_HOTPLUG_CPU */ 493 494 /* 495 * The following ACPI IDs are known to be suitable for representing as 496 * processor devices.
··· 491 } 492 #endif /* CONFIG_ACPI_HOTPLUG_CPU */ 493 494 + #ifdef CONFIG_X86 495 + static bool acpi_hwp_native_thermal_lvt_set; 496 + static acpi_status __init acpi_hwp_native_thermal_lvt_osc(acpi_handle handle, 497 + u32 lvl, 498 + void *context, 499 + void **rv) 500 + { 501 + u8 sb_uuid_str[] = "4077A616-290C-47BE-9EBD-D87058713953"; 502 + u32 capbuf[2]; 503 + struct acpi_osc_context osc_context = { 504 + .uuid_str = sb_uuid_str, 505 + .rev = 1, 506 + .cap.length = 8, 507 + .cap.pointer = capbuf, 508 + }; 509 + 510 + if (acpi_hwp_native_thermal_lvt_set) 511 + return AE_CTRL_TERMINATE; 512 + 513 + capbuf[0] = 0x0000; 514 + capbuf[1] = 0x1000; /* set bit 12 */ 515 + 516 + if (ACPI_SUCCESS(acpi_run_osc(handle, &osc_context))) { 517 + if (osc_context.ret.pointer && osc_context.ret.length > 1) { 518 + u32 *capbuf_ret = osc_context.ret.pointer; 519 + 520 + if (capbuf_ret[1] & 0x1000) { 521 + acpi_handle_info(handle, 522 + "_OSC native thermal LVT Acked\n"); 523 + acpi_hwp_native_thermal_lvt_set = true; 524 + } 525 + } 526 + kfree(osc_context.ret.pointer); 527 + } 528 + 529 + return AE_OK; 530 + } 531 + 532 + void __init acpi_early_processor_osc(void) 533 + { 534 + if (boot_cpu_has(X86_FEATURE_HWP)) { 535 + acpi_walk_namespace(ACPI_TYPE_PROCESSOR, ACPI_ROOT_OBJECT, 536 + ACPI_UINT32_MAX, 537 + acpi_hwp_native_thermal_lvt_osc, 538 + NULL, NULL, NULL); 539 + acpi_get_devices(ACPI_PROCESSOR_DEVICE_HID, 540 + acpi_hwp_native_thermal_lvt_osc, 541 + NULL, NULL); 542 + } 543 + } 544 + #endif 545 + 546 /* 547 * The following ACPI IDs are known to be suitable for representing as 548 * processor devices.
+3
drivers/acpi/bus.c
··· 1019 goto error1; 1020 } 1021 1022 /* 1023 * _OSC method may exist in module level code, 1024 * so it must be run after ACPI_FULL_INITIALIZATION
··· 1019 goto error1; 1020 } 1021 1022 + /* Set capability bits for _OSC under processor scope */ 1023 + acpi_early_processor_osc(); 1024 + 1025 /* 1026 * _OSC method may exist in module level code, 1027 * so it must be run after ACPI_FULL_INITIALIZATION
+6
drivers/acpi/internal.h
··· 145 static inline void acpi_early_processor_set_pdc(void) {} 146 #endif 147 148 /* -------------------------------------------------------------------------- 149 Embedded Controller 150 -------------------------------------------------------------------------- */
··· 145 static inline void acpi_early_processor_set_pdc(void) {} 146 #endif 147 148 + #ifdef CONFIG_X86 149 + void acpi_early_processor_osc(void); 150 + #else 151 + static inline void acpi_early_processor_osc(void) {} 152 + #endif 153 + 154 /* -------------------------------------------------------------------------- 155 Embedded Controller 156 -------------------------------------------------------------------------- */
+1 -1
drivers/clk/mediatek/reset.c
··· 57 return mtk_reset_deassert(rcdev, id); 58 } 59 60 - static struct reset_control_ops mtk_reset_ops = { 61 .assert = mtk_reset_assert, 62 .deassert = mtk_reset_deassert, 63 .reset = mtk_reset,
··· 57 return mtk_reset_deassert(rcdev, id); 58 } 59 60 + static const struct reset_control_ops mtk_reset_ops = { 61 .assert = mtk_reset_assert, 62 .deassert = mtk_reset_deassert, 63 .reset = mtk_reset,
+1 -1
drivers/clk/mmp/reset.c
··· 74 return 0; 75 } 76 77 - static struct reset_control_ops mmp_clk_reset_ops = { 78 .assert = mmp_clk_reset_assert, 79 .deassert = mmp_clk_reset_deassert, 80 };
··· 74 return 0; 75 } 76 77 + static const struct reset_control_ops mmp_clk_reset_ops = { 78 .assert = mmp_clk_reset_assert, 79 .deassert = mmp_clk_reset_deassert, 80 };
+35 -35
drivers/clk/qcom/gcc-ipq4019.c
··· 129 }; 130 131 #define F(f, s, h, m, n) { (f), (s), (2 * (h) - 1), (m), (n) } 132 - #define P_XO 0 133 - #define FE_PLL_200 1 134 - #define FE_PLL_500 2 135 - #define DDRC_PLL_666 3 136 - 137 - #define DDRC_PLL_666_SDCC 1 138 - #define FE_PLL_125_DLY 1 139 - 140 - #define FE_PLL_WCSS2G 1 141 - #define FE_PLL_WCSS5G 1 142 143 static const struct freq_tbl ftbl_gcc_audio_pwm_clk[] = { 144 F(48000000, P_XO, 1, 0, 0), 145 - F(200000000, FE_PLL_200, 1, 0, 0), 146 { } 147 }; 148 ··· 324 }; 325 326 static const struct freq_tbl ftbl_gcc_blsp1_uart1_2_apps_clk[] = { 327 - F(1843200, FE_PLL_200, 1, 144, 15625), 328 - F(3686400, FE_PLL_200, 1, 288, 15625), 329 - F(7372800, FE_PLL_200, 1, 576, 15625), 330 - F(14745600, FE_PLL_200, 1, 1152, 15625), 331 - F(16000000, FE_PLL_200, 1, 2, 25), 332 F(24000000, P_XO, 1, 1, 2), 333 - F(32000000, FE_PLL_200, 1, 4, 25), 334 - F(40000000, FE_PLL_200, 1, 1, 5), 335 - F(46400000, FE_PLL_200, 1, 29, 125), 336 F(48000000, P_XO, 1, 0, 0), 337 { } 338 }; ··· 400 }; 401 402 static const struct freq_tbl ftbl_gcc_gp_clk[] = { 403 - F(1250000, FE_PLL_200, 1, 16, 0), 404 - F(2500000, FE_PLL_200, 1, 8, 0), 405 - F(5000000, FE_PLL_200, 1, 4, 0), 406 { } 407 }; 408 ··· 502 static const struct freq_tbl ftbl_gcc_sdcc1_apps_clk[] = { 503 F(144000, P_XO, 1, 3, 240), 504 F(400000, P_XO, 1, 1, 0), 505 - F(20000000, FE_PLL_500, 1, 1, 25), 506 - F(25000000, FE_PLL_500, 1, 1, 20), 507 - F(50000000, FE_PLL_500, 1, 1, 10), 508 - F(100000000, FE_PLL_500, 1, 1, 5), 509 - F(193000000, DDRC_PLL_666_SDCC, 1, 0, 0), 510 { } 511 }; 512 ··· 526 527 static const struct freq_tbl ftbl_gcc_apps_clk[] = { 528 F(48000000, P_XO, 1, 0, 0), 529 - F(200000000, FE_PLL_200, 1, 0, 0), 530 - F(500000000, FE_PLL_500, 1, 0, 0), 531 - F(626000000, DDRC_PLL_666, 1, 0, 0), 532 { } 533 }; 534 ··· 547 548 static const struct freq_tbl ftbl_gcc_apps_ahb_clk[] = { 549 F(48000000, P_XO, 1, 0, 0), 550 - F(100000000, FE_PLL_200, 2, 0, 0), 551 { } 552 }; 553 ··· 930 }; 931 932 static const struct freq_tbl ftbl_gcc_usb30_mock_utmi_clk[] = { 933 - F(2000000, FE_PLL_200, 10, 0, 0), 934 { } 935 }; 936 ··· 997 }; 998 999 static const struct freq_tbl ftbl_gcc_fephy_dly_clk[] = { 1000 - F(125000000, FE_PLL_125_DLY, 1, 0, 0), 1001 { } 1002 }; 1003 ··· 1017 1018 static const struct freq_tbl ftbl_gcc_wcss2g_clk[] = { 1019 F(48000000, P_XO, 1, 0, 0), 1020 - F(250000000, FE_PLL_WCSS2G, 1, 0, 0), 1021 { } 1022 }; 1023 ··· 1087 1088 static const struct freq_tbl ftbl_gcc_wcss5g_clk[] = { 1089 F(48000000, P_XO, 1, 0, 0), 1090 - F(250000000, FE_PLL_WCSS5G, 1, 0, 0), 1091 { } 1092 }; 1093 ··· 1315 1316 static int gcc_ipq4019_probe(struct platform_device *pdev) 1317 { 1318 return qcom_cc_probe(pdev, &gcc_ipq4019_desc); 1319 } 1320
··· 129 }; 130 131 #define F(f, s, h, m, n) { (f), (s), (2 * (h) - 1), (m), (n) } 132 133 static const struct freq_tbl ftbl_gcc_audio_pwm_clk[] = { 134 F(48000000, P_XO, 1, 0, 0), 135 + F(200000000, P_FEPLL200, 1, 0, 0), 136 { } 137 }; 138 ··· 334 }; 335 336 static const struct freq_tbl ftbl_gcc_blsp1_uart1_2_apps_clk[] = { 337 + F(1843200, P_FEPLL200, 1, 144, 15625), 338 + F(3686400, P_FEPLL200, 1, 288, 15625), 339 + F(7372800, P_FEPLL200, 1, 576, 15625), 340 + F(14745600, P_FEPLL200, 1, 1152, 15625), 341 + F(16000000, P_FEPLL200, 1, 2, 25), 342 F(24000000, P_XO, 1, 1, 2), 343 + F(32000000, P_FEPLL200, 1, 4, 25), 344 + F(40000000, P_FEPLL200, 1, 1, 5), 345 + F(46400000, P_FEPLL200, 1, 29, 125), 346 F(48000000, P_XO, 1, 0, 0), 347 { } 348 }; ··· 410 }; 411 412 static const struct freq_tbl ftbl_gcc_gp_clk[] = { 413 + F(1250000, P_FEPLL200, 1, 16, 0), 414 + F(2500000, P_FEPLL200, 1, 8, 0), 415 + F(5000000, P_FEPLL200, 1, 4, 0), 416 { } 417 }; 418 ··· 512 static const struct freq_tbl ftbl_gcc_sdcc1_apps_clk[] = { 513 F(144000, P_XO, 1, 3, 240), 514 F(400000, P_XO, 1, 1, 0), 515 + F(20000000, P_FEPLL500, 1, 1, 25), 516 + F(25000000, P_FEPLL500, 1, 1, 20), 517 + F(50000000, P_FEPLL500, 1, 1, 10), 518 + F(100000000, P_FEPLL500, 1, 1, 5), 519 + F(193000000, P_DDRPLL, 1, 0, 0), 520 { } 521 }; 522 ··· 536 537 static const struct freq_tbl ftbl_gcc_apps_clk[] = { 538 F(48000000, P_XO, 1, 0, 0), 539 + F(200000000, P_FEPLL200, 1, 0, 0), 540 + F(500000000, P_FEPLL500, 1, 0, 0), 541 + F(626000000, P_DDRPLLAPSS, 1, 0, 0), 542 { } 543 }; 544 ··· 557 558 static const struct freq_tbl ftbl_gcc_apps_ahb_clk[] = { 559 F(48000000, P_XO, 1, 0, 0), 560 + F(100000000, P_FEPLL200, 2, 0, 0), 561 { } 562 }; 563 ··· 940 }; 941 942 static const struct freq_tbl ftbl_gcc_usb30_mock_utmi_clk[] = { 943 + F(2000000, P_FEPLL200, 10, 0, 0), 944 { } 945 }; 946 ··· 1007 }; 1008 1009 static const struct freq_tbl ftbl_gcc_fephy_dly_clk[] = { 1010 + F(125000000, P_FEPLL125DLY, 1, 0, 0), 1011 { } 1012 }; 1013 ··· 1027 1028 static const struct freq_tbl ftbl_gcc_wcss2g_clk[] = { 1029 F(48000000, P_XO, 1, 0, 0), 1030 + F(250000000, P_FEPLLWCSS2G, 1, 0, 0), 1031 { } 1032 }; 1033 ··· 1097 1098 static const struct freq_tbl ftbl_gcc_wcss5g_clk[] = { 1099 F(48000000, P_XO, 1, 0, 0), 1100 + F(250000000, P_FEPLLWCSS5G, 1, 0, 0), 1101 { } 1102 }; 1103 ··· 1325 1326 static int gcc_ipq4019_probe(struct platform_device *pdev) 1327 { 1328 + struct device *dev = &pdev->dev; 1329 + 1330 + clk_register_fixed_rate(dev, "fepll125", "xo", 0, 200000000); 1331 + clk_register_fixed_rate(dev, "fepll125dly", "xo", 0, 200000000); 1332 + clk_register_fixed_rate(dev, "fepllwcss2g", "xo", 0, 200000000); 1333 + clk_register_fixed_rate(dev, "fepllwcss5g", "xo", 0, 200000000); 1334 + clk_register_fixed_rate(dev, "fepll200", "xo", 0, 200000000); 1335 + clk_register_fixed_rate(dev, "fepll500", "xo", 0, 200000000); 1336 + clk_register_fixed_rate(dev, "ddrpllapss", "xo", 0, 666000000); 1337 + 1338 return qcom_cc_probe(pdev, &gcc_ipq4019_desc); 1339 } 1340
+1 -1
drivers/clk/qcom/reset.c
··· 55 return regmap_update_bits(rst->regmap, map->reg, mask, 0); 56 } 57 58 - struct reset_control_ops qcom_reset_ops = { 59 .reset = qcom_reset, 60 .assert = qcom_reset_assert, 61 .deassert = qcom_reset_deassert,
··· 55 return regmap_update_bits(rst->regmap, map->reg, mask, 0); 56 } 57 58 + const struct reset_control_ops qcom_reset_ops = { 59 .reset = qcom_reset, 60 .assert = qcom_reset_assert, 61 .deassert = qcom_reset_deassert,
+1 -1
drivers/clk/qcom/reset.h
··· 32 #define to_qcom_reset_controller(r) \ 33 container_of(r, struct qcom_reset_controller, rcdev); 34 35 - extern struct reset_control_ops qcom_reset_ops; 36 37 #endif
··· 32 #define to_qcom_reset_controller(r) \ 33 container_of(r, struct qcom_reset_controller, rcdev); 34 35 + extern const struct reset_control_ops qcom_reset_ops; 36 37 #endif
+1 -1
drivers/clk/rockchip/softrst.c
··· 81 return 0; 82 } 83 84 - static struct reset_control_ops rockchip_softrst_ops = { 85 .assert = rockchip_softrst_assert, 86 .deassert = rockchip_softrst_deassert, 87 };
··· 81 return 0; 82 } 83 84 + static const struct reset_control_ops rockchip_softrst_ops = { 85 .assert = rockchip_softrst_assert, 86 .deassert = rockchip_softrst_deassert, 87 };
+1 -1
drivers/clk/sirf/clk-atlas7.c
··· 1423 return 0; 1424 } 1425 1426 - static struct reset_control_ops atlas7_rst_ops = { 1427 .reset = atlas7_reset_module, 1428 }; 1429
··· 1423 return 0; 1424 } 1425 1426 + static const struct reset_control_ops atlas7_rst_ops = { 1427 .reset = atlas7_reset_module, 1428 }; 1429
+1 -1
drivers/clk/sunxi/clk-a10-ve.c
··· 85 return 0; 86 } 87 88 - static struct reset_control_ops sunxi_ve_reset_ops = { 89 .assert = sunxi_ve_reset_assert, 90 .deassert = sunxi_ve_reset_deassert, 91 };
··· 85 return 0; 86 } 87 88 + static const struct reset_control_ops sunxi_ve_reset_ops = { 89 .assert = sunxi_ve_reset_assert, 90 .deassert = sunxi_ve_reset_deassert, 91 };
+1 -1
drivers/clk/sunxi/clk-sun9i-mmc.c
··· 83 return 0; 84 } 85 86 - static struct reset_control_ops sun9i_mmc_reset_ops = { 87 .assert = sun9i_mmc_reset_assert, 88 .deassert = sun9i_mmc_reset_deassert, 89 };
··· 83 return 0; 84 } 85 86 + static const struct reset_control_ops sun9i_mmc_reset_ops = { 87 .assert = sun9i_mmc_reset_assert, 88 .deassert = sun9i_mmc_reset_deassert, 89 };
+1 -1
drivers/clk/sunxi/clk-usb.c
··· 76 return 0; 77 } 78 79 - static struct reset_control_ops sunxi_usb_reset_ops = { 80 .assert = sunxi_usb_reset_assert, 81 .deassert = sunxi_usb_reset_deassert, 82 };
··· 76 return 0; 77 } 78 79 + static const struct reset_control_ops sunxi_usb_reset_ops = { 80 .assert = sunxi_usb_reset_assert, 81 .deassert = sunxi_usb_reset_deassert, 82 };
+1 -1
drivers/clk/tegra/clk.c
··· 271 } 272 } 273 274 - static struct reset_control_ops rst_ops = { 275 .assert = tegra_clk_rst_assert, 276 .deassert = tegra_clk_rst_deassert, 277 };
··· 271 } 272 } 273 274 + static const struct reset_control_ops rst_ops = { 275 .assert = tegra_clk_rst_assert, 276 .deassert = tegra_clk_rst_deassert, 277 };
+1 -2
drivers/extcon/extcon-palmas.c
··· 348 palmas_vbus_irq_handler, 349 IRQF_TRIGGER_FALLING | 350 IRQF_TRIGGER_RISING | 351 - IRQF_ONESHOT | 352 - IRQF_EARLY_RESUME, 353 "palmas_usb_vbus", 354 palmas_usb); 355 if (status < 0) {
··· 348 palmas_vbus_irq_handler, 349 IRQF_TRIGGER_FALLING | 350 IRQF_TRIGGER_RISING | 351 + IRQF_ONESHOT, 352 "palmas_usb_vbus", 353 palmas_usb); 354 if (status < 0) {
+4 -5
drivers/gpio/gpio-menz127.c
··· 37 void __iomem *reg_base; 38 struct mcb_device *mdev; 39 struct resource *mem; 40 - spinlock_t lock; 41 }; 42 43 static int men_z127_debounce(struct gpio_chip *gc, unsigned gpio, ··· 68 debounce /= 50; 69 } 70 71 - spin_lock(&priv->lock); 72 73 db_en = readl(priv->reg_base + MEN_Z127_DBER); 74 ··· 83 writel(db_en, priv->reg_base + MEN_Z127_DBER); 84 writel(db_cnt, priv->reg_base + GPIO_TO_DBCNT_REG(gpio)); 85 86 - spin_unlock(&priv->lock); 87 88 return 0; 89 } ··· 96 if (gpio_pin >= gc->ngpio) 97 return -EINVAL; 98 99 - spin_lock(&priv->lock); 100 od_en = readl(priv->reg_base + MEN_Z127_ODER); 101 102 if (gpiochip_line_is_open_drain(gc, gpio_pin)) ··· 105 od_en &= ~BIT(gpio_pin); 106 107 writel(od_en, priv->reg_base + MEN_Z127_ODER); 108 - spin_unlock(&priv->lock); 109 110 return 0; 111 }
··· 37 void __iomem *reg_base; 38 struct mcb_device *mdev; 39 struct resource *mem; 40 }; 41 42 static int men_z127_debounce(struct gpio_chip *gc, unsigned gpio, ··· 69 debounce /= 50; 70 } 71 72 + spin_lock(&gc->bgpio_lock); 73 74 db_en = readl(priv->reg_base + MEN_Z127_DBER); 75 ··· 84 writel(db_en, priv->reg_base + MEN_Z127_DBER); 85 writel(db_cnt, priv->reg_base + GPIO_TO_DBCNT_REG(gpio)); 86 87 + spin_unlock(&gc->bgpio_lock); 88 89 return 0; 90 } ··· 97 if (gpio_pin >= gc->ngpio) 98 return -EINVAL; 99 100 + spin_lock(&gc->bgpio_lock); 101 od_en = readl(priv->reg_base + MEN_Z127_ODER); 102 103 if (gpiochip_line_is_open_drain(gc, gpio_pin)) ··· 106 od_en &= ~BIT(gpio_pin); 107 108 writel(od_en, priv->reg_base + MEN_Z127_ODER); 109 + spin_unlock(&gc->bgpio_lock); 110 111 return 0; 112 }
+5
drivers/gpio/gpio-xgene.c
··· 173 } 174 175 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 176 gpio->base = devm_ioremap_nocache(&pdev->dev, res->start, 177 resource_size(res)); 178 if (!gpio->base) {
··· 173 } 174 175 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 176 + if (!res) { 177 + err = -EINVAL; 178 + goto err; 179 + } 180 + 181 gpio->base = devm_ioremap_nocache(&pdev->dev, res->start, 182 resource_size(res)); 183 if (!gpio->base) {
+6 -2
drivers/gpu/drm/amd/acp/Kconfig
··· 1 - menu "ACP Configuration" 2 3 config DRM_AMD_ACP 4 - bool "Enable ACP IP support" 5 select MFD_CORE 6 select PM_GENERIC_DOMAINS if PM 7 help 8 Choose this option to enable ACP IP support for AMD SOCs. 9 10 endmenu
··· 1 + menu "ACP (Audio CoProcessor) Configuration" 2 3 config DRM_AMD_ACP 4 + bool "Enable AMD Audio CoProcessor IP support" 5 select MFD_CORE 6 select PM_GENERIC_DOMAINS if PM 7 help 8 Choose this option to enable ACP IP support for AMD SOCs. 9 + This adds the ACP (Audio CoProcessor) IP driver and wires 10 + it up into the amdgpu driver. The ACP block provides the DMA 11 + engine for the i2s-based ALSA driver. It is required for audio 12 + on APUs which utilize an i2s codec. 13 14 endmenu
+4
drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
··· 608 if ((offset + size) <= adev->mc.visible_vram_size) 609 return 0; 610 611 /* hurrah the memory is not visible ! */ 612 amdgpu_ttm_placement_from_domain(abo, AMDGPU_GEM_DOMAIN_VRAM); 613 lpfn = adev->mc.visible_vram_size >> PAGE_SHIFT;
··· 608 if ((offset + size) <= adev->mc.visible_vram_size) 609 return 0; 610 611 + /* Can't move a pinned BO to visible VRAM */ 612 + if (abo->pin_count > 0) 613 + return -EINVAL; 614 + 615 /* hurrah the memory is not visible ! */ 616 amdgpu_ttm_placement_from_domain(abo, AMDGPU_GEM_DOMAIN_VRAM); 617 lpfn = adev->mc.visible_vram_size >> PAGE_SHIFT;
+6
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
··· 384 struct ttm_mem_reg *new_mem) 385 { 386 struct amdgpu_device *adev; 387 struct ttm_mem_reg *old_mem = &bo->mem; 388 int r; 389 390 adev = amdgpu_get_adev(bo->bdev); 391 if (old_mem->mem_type == TTM_PL_SYSTEM && bo->ttm == NULL) {
··· 384 struct ttm_mem_reg *new_mem) 385 { 386 struct amdgpu_device *adev; 387 + struct amdgpu_bo *abo; 388 struct ttm_mem_reg *old_mem = &bo->mem; 389 int r; 390 + 391 + /* Can't move a pinned BO */ 392 + abo = container_of(bo, struct amdgpu_bo, tbo); 393 + if (WARN_ON_ONCE(abo->pin_count > 0)) 394 + return -EINVAL; 395 396 adev = amdgpu_get_adev(bo->bdev); 397 if (old_mem->mem_type == TTM_PL_SYSTEM && bo->ttm == NULL) {
+17 -10
drivers/gpu/drm/drm_dp_helper.c
··· 179 { 180 struct drm_dp_aux_msg msg; 181 unsigned int retry; 182 - int err; 183 184 memset(&msg, 0, sizeof(msg)); 185 msg.address = offset; 186 msg.request = request; 187 msg.buffer = buffer; 188 msg.size = size; 189 190 /* 191 * The specification doesn't give any recommendation on how often to ··· 197 */ 198 for (retry = 0; retry < 32; retry++) { 199 200 - mutex_lock(&aux->hw_mutex); 201 err = aux->transfer(aux, &msg); 202 - mutex_unlock(&aux->hw_mutex); 203 if (err < 0) { 204 if (err == -EBUSY) 205 continue; 206 207 - return err; 208 } 209 210 211 switch (msg.reply & DP_AUX_NATIVE_REPLY_MASK) { 212 case DP_AUX_NATIVE_REPLY_ACK: 213 if (err < size) 214 - return -EPROTO; 215 - return err; 216 217 case DP_AUX_NATIVE_REPLY_NACK: 218 - return -EIO; 219 220 case DP_AUX_NATIVE_REPLY_DEFER: 221 usleep_range(AUX_RETRY_INTERVAL, AUX_RETRY_INTERVAL + 100); ··· 223 } 224 225 DRM_DEBUG_KMS("too many retries, giving up\n"); 226 - return -EIO; 227 } 228 229 /** ··· 549 int max_retries = max(7, drm_dp_i2c_retry_count(msg, dp_aux_i2c_speed_khz)); 550 551 for (retry = 0, defer_i2c = 0; retry < (max_retries + defer_i2c); retry++) { 552 - mutex_lock(&aux->hw_mutex); 553 ret = aux->transfer(aux, msg); 554 - mutex_unlock(&aux->hw_mutex); 555 if (ret < 0) { 556 if (ret == -EBUSY) 557 continue; ··· 688 689 memset(&msg, 0, sizeof(msg)); 690 691 for (i = 0; i < num; i++) { 692 msg.address = msgs[i].addr; 693 drm_dp_i2c_msg_set_request(&msg, &msgs[i]); ··· 743 msg.buffer = NULL; 744 msg.size = 0; 745 (void)drm_dp_i2c_do_msg(aux, &msg); 746 747 return err; 748 }
··· 179 { 180 struct drm_dp_aux_msg msg; 181 unsigned int retry; 182 + int err = 0; 183 184 memset(&msg, 0, sizeof(msg)); 185 msg.address = offset; 186 msg.request = request; 187 msg.buffer = buffer; 188 msg.size = size; 189 + 190 + mutex_lock(&aux->hw_mutex); 191 192 /* 193 * The specification doesn't give any recommendation on how often to ··· 195 */ 196 for (retry = 0; retry < 32; retry++) { 197 198 err = aux->transfer(aux, &msg); 199 if (err < 0) { 200 if (err == -EBUSY) 201 continue; 202 203 + goto unlock; 204 } 205 206 207 switch (msg.reply & DP_AUX_NATIVE_REPLY_MASK) { 208 case DP_AUX_NATIVE_REPLY_ACK: 209 if (err < size) 210 + err = -EPROTO; 211 + goto unlock; 212 213 case DP_AUX_NATIVE_REPLY_NACK: 214 + err = -EIO; 215 + goto unlock; 216 217 case DP_AUX_NATIVE_REPLY_DEFER: 218 usleep_range(AUX_RETRY_INTERVAL, AUX_RETRY_INTERVAL + 100); ··· 222 } 223 224 DRM_DEBUG_KMS("too many retries, giving up\n"); 225 + err = -EIO; 226 + 227 + unlock: 228 + mutex_unlock(&aux->hw_mutex); 229 + return err; 230 } 231 232 /** ··· 544 int max_retries = max(7, drm_dp_i2c_retry_count(msg, dp_aux_i2c_speed_khz)); 545 546 for (retry = 0, defer_i2c = 0; retry < (max_retries + defer_i2c); retry++) { 547 ret = aux->transfer(aux, msg); 548 if (ret < 0) { 549 if (ret == -EBUSY) 550 continue; ··· 685 686 memset(&msg, 0, sizeof(msg)); 687 688 + mutex_lock(&aux->hw_mutex); 689 + 690 for (i = 0; i < num; i++) { 691 msg.address = msgs[i].addr; 692 drm_dp_i2c_msg_set_request(&msg, &msgs[i]); ··· 738 msg.buffer = NULL; 739 msg.size = 0; 740 (void)drm_dp_i2c_do_msg(aux, &msg); 741 + 742 + mutex_unlock(&aux->hw_mutex); 743 744 return err; 745 }
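The drm_dp_helper.c change takes aux->hw_mutex once around the whole retry loop instead of per attempt, and funnels every exit through a single unlock label. A small userspace sketch of that shape, with a pthread mutex standing in for the hardware lock and a fake transfer callback:

#include <errno.h>
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t hw_lock = PTHREAD_MUTEX_INITIALIZER;

/* Placeholder transfer: busy on the first two attempts, then 4 bytes. */
static int fake_transfer(int attempt)
{
	return (attempt < 2) ? -EBUSY : 4;
}

/*
 * Hold the lock across the whole retry loop and leave through one
 * unlock label, so every early exit still drops the lock.
 */
static int locked_retry_transfer(void)
{
	int err = -EIO;
	int retry;

	pthread_mutex_lock(&hw_lock);

	for (retry = 0; retry < 32; retry++) {
		err = fake_transfer(retry);
		if (err == -EBUSY)
			continue;
		goto unlock;
	}
	err = -EIO;	/* every attempt came back busy */

unlock:
	pthread_mutex_unlock(&hw_lock);
	return err;
}

int main(void)
{
	printf("transferred %d bytes\n", locked_retry_transfer());
	return 0;
}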
+1 -1
drivers/gpu/drm/msm/hdmi/hdmi.h
··· 196 int msm_hdmi_pll_8960_init(struct platform_device *pdev); 197 int msm_hdmi_pll_8996_init(struct platform_device *pdev); 198 #else 199 - static inline int msm_hdmi_pll_8960_init(struct platform_device *pdev); 200 { 201 return -ENODEV; 202 }
··· 196 int msm_hdmi_pll_8960_init(struct platform_device *pdev); 197 int msm_hdmi_pll_8996_init(struct platform_device *pdev); 198 #else 199 + static inline int msm_hdmi_pll_8960_init(struct platform_device *pdev) 200 { 201 return -ENODEV; 202 }
-3
drivers/gpu/drm/msm/msm_drv.c
··· 467 struct msm_file_private *ctx = file->driver_priv; 468 struct msm_kms *kms = priv->kms; 469 470 - if (kms) 471 - kms->funcs->preclose(kms, file); 472 - 473 mutex_lock(&dev->struct_mutex); 474 if (ctx == priv->lastctx) 475 priv->lastctx = NULL;
··· 467 struct msm_file_private *ctx = file->driver_priv; 468 struct msm_kms *kms = priv->kms; 469 470 mutex_lock(&dev->struct_mutex); 471 if (ctx == priv->lastctx) 472 priv->lastctx = NULL;
-1
drivers/gpu/drm/msm/msm_kms.h
··· 55 struct drm_encoder *slave_encoder, 56 bool is_cmd_mode); 57 /* cleanup: */ 58 - void (*preclose)(struct msm_kms *kms, struct drm_file *file); 59 void (*destroy)(struct msm_kms *kms); 60 }; 61
··· 55 struct drm_encoder *slave_encoder, 56 bool is_cmd_mode); 57 /* cleanup: */ 58 void (*destroy)(struct msm_kms *kms); 59 }; 60
+4
drivers/gpu/drm/radeon/radeon_object.c
··· 799 if ((offset + size) <= rdev->mc.visible_vram_size) 800 return 0; 801 802 /* hurrah the memory is not visible ! */ 803 radeon_ttm_placement_from_domain(rbo, RADEON_GEM_DOMAIN_VRAM); 804 lpfn = rdev->mc.visible_vram_size >> PAGE_SHIFT;
··· 799 if ((offset + size) <= rdev->mc.visible_vram_size) 800 return 0; 801 802 + /* Can't move a pinned BO to visible VRAM */ 803 + if (rbo->pin_count > 0) 804 + return -EINVAL; 805 + 806 /* hurrah the memory is not visible ! */ 807 radeon_ttm_placement_from_domain(rbo, RADEON_GEM_DOMAIN_VRAM); 808 lpfn = rdev->mc.visible_vram_size >> PAGE_SHIFT;
+6
drivers/gpu/drm/radeon/radeon_ttm.c
··· 397 struct ttm_mem_reg *new_mem) 398 { 399 struct radeon_device *rdev; 400 struct ttm_mem_reg *old_mem = &bo->mem; 401 int r; 402 403 rdev = radeon_get_rdev(bo->bdev); 404 if (old_mem->mem_type == TTM_PL_SYSTEM && bo->ttm == NULL) {
··· 397 struct ttm_mem_reg *new_mem) 398 { 399 struct radeon_device *rdev; 400 + struct radeon_bo *rbo; 401 struct ttm_mem_reg *old_mem = &bo->mem; 402 int r; 403 + 404 + /* Can't move a pinned BO */ 405 + rbo = container_of(bo, struct radeon_bo, tbo); 406 + if (WARN_ON_ONCE(rbo->pin_count > 0)) 407 + return -EINVAL; 408 409 rdev = radeon_get_rdev(bo->bdev); 410 if (old_mem->mem_type == TTM_PL_SYSTEM && bo->ttm == NULL) {
+6
drivers/gpu/drm/radeon/si_dpm.c
··· 2926 /* PITCAIRN - https://bugs.freedesktop.org/show_bug.cgi?id=76490 */ 2927 { PCI_VENDOR_ID_ATI, 0x6810, 0x1462, 0x3036, 0, 120000 }, 2928 { PCI_VENDOR_ID_ATI, 0x6811, 0x174b, 0xe271, 0, 120000 }, 2929 { PCI_VENDOR_ID_ATI, 0x6810, 0x174b, 0xe271, 85000, 90000 }, 2930 { PCI_VENDOR_ID_ATI, 0x6811, 0x1462, 0x2015, 0, 120000 }, 2931 { PCI_VENDOR_ID_ATI, 0x6811, 0x1043, 0x2015, 0, 120000 }, 2932 { 0, 0, 0, 0 }, 2933 }; 2934 ··· 3010 } 3011 ++p; 3012 } 3013 3014 if (rps->vce_active) { 3015 rps->evclk = rdev->pm.dpm.vce_states[rdev->pm.dpm.vce_level].evclk;
··· 2926 /* PITCAIRN - https://bugs.freedesktop.org/show_bug.cgi?id=76490 */ 2927 { PCI_VENDOR_ID_ATI, 0x6810, 0x1462, 0x3036, 0, 120000 }, 2928 { PCI_VENDOR_ID_ATI, 0x6811, 0x174b, 0xe271, 0, 120000 }, 2929 + { PCI_VENDOR_ID_ATI, 0x6811, 0x174b, 0x2015, 0, 120000 }, 2930 { PCI_VENDOR_ID_ATI, 0x6810, 0x174b, 0xe271, 85000, 90000 }, 2931 { PCI_VENDOR_ID_ATI, 0x6811, 0x1462, 0x2015, 0, 120000 }, 2932 { PCI_VENDOR_ID_ATI, 0x6811, 0x1043, 0x2015, 0, 120000 }, 2933 + { PCI_VENDOR_ID_ATI, 0x6811, 0x148c, 0x2015, 0, 120000 }, 2934 { 0, 0, 0, 0 }, 2935 }; 2936 ··· 3008 } 3009 ++p; 3010 } 3011 + /* limit mclk on all R7 370 parts for stability */ 3012 + if (rdev->pdev->device == 0x6811 && 3013 + rdev->pdev->revision == 0x81) 3014 + max_mclk = 120000; 3015 3016 if (rps->vce_active) { 3017 rps->evclk = rdev->pm.dpm.vce_states[rdev->pm.dpm.vce_level].evclk;
+10 -3
drivers/gpu/drm/rockchip/dw_hdmi-rockchip.c
··· 271 if (!iores) 272 return -ENXIO; 273 274 - platform_set_drvdata(pdev, hdmi); 275 - 276 encoder->possible_crtcs = drm_of_find_possible_crtcs(drm, dev->of_node); 277 /* 278 * If we failed to find the CRTC(s) which this encoder is ··· 291 drm_encoder_init(drm, encoder, &dw_hdmi_rockchip_encoder_funcs, 292 DRM_MODE_ENCODER_TMDS, NULL); 293 294 - return dw_hdmi_bind(dev, master, data, encoder, iores, irq, plat_data); 295 } 296 297 static void dw_hdmi_rockchip_unbind(struct device *dev, struct device *master,
··· 271 if (!iores) 272 return -ENXIO; 273 274 encoder->possible_crtcs = drm_of_find_possible_crtcs(drm, dev->of_node); 275 /* 276 * If we failed to find the CRTC(s) which this encoder is ··· 293 drm_encoder_init(drm, encoder, &dw_hdmi_rockchip_encoder_funcs, 294 DRM_MODE_ENCODER_TMDS, NULL); 295 296 + ret = dw_hdmi_bind(dev, master, data, encoder, iores, irq, plat_data); 297 + 298 + /* 299 + * If dw_hdmi_bind() fails we'll never call dw_hdmi_unbind(), 300 + * which would have called the encoder cleanup. Do it manually. 301 + */ 302 + if (ret) 303 + drm_encoder_cleanup(encoder); 304 + 305 + return ret; 306 } 307 308 static void dw_hdmi_rockchip_unbind(struct device *dev, struct device *master,
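The dw_hdmi-rockchip hunk handles the case where dw_hdmi_bind() fails after the encoder has already been initialised: because the unbind callback will never run, the cleanup has to happen right there in the error path. A tiny sketch of that shape, with made-up helper names rather than the real drm calls:

#include <stdio.h>

/* Illustrative stand-ins, not the real drm/dw-hdmi API. */
static int encoder_initialized;

static void encoder_init(void)    { encoder_initialized = 1; }
static void encoder_cleanup(void) { encoder_initialized = 0; }

/* Pretend the core bind step fails. */
static int core_bind(void) { return -1; }

static int component_bind(void)
{
	int ret;

	encoder_init();

	ret = core_bind();
	/*
	 * On failure the matching unbind callback will never run, so the
	 * resources set up above must be torn down here by hand.
	 */
	if (ret)
		encoder_cleanup();

	return ret;
}

int main(void)
{
	printf("bind: %d, encoder left initialized: %d\n",
	       component_bind(), encoder_initialized);
	return 0;
}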
+22
drivers/gpu/drm/rockchip/rockchip_drm_drv.c
··· 251 return 0; 252 } 253 254 void rockchip_drm_lastclose(struct drm_device *dev) 255 { 256 struct rockchip_drm_private *priv = dev->dev_private; ··· 302 DRIVER_PRIME | DRIVER_ATOMIC, 303 .load = rockchip_drm_load, 304 .unload = rockchip_drm_unload, 305 .lastclose = rockchip_drm_lastclose, 306 .get_vblank_counter = drm_vblank_no_hw_counter, 307 .enable_vblank = rockchip_drm_crtc_enable_vblank,
··· 251 return 0; 252 } 253 254 + static void rockchip_drm_crtc_cancel_pending_vblank(struct drm_crtc *crtc, 255 + struct drm_file *file_priv) 256 + { 257 + struct rockchip_drm_private *priv = crtc->dev->dev_private; 258 + int pipe = drm_crtc_index(crtc); 259 + 260 + if (pipe < ROCKCHIP_MAX_CRTC && 261 + priv->crtc_funcs[pipe] && 262 + priv->crtc_funcs[pipe]->cancel_pending_vblank) 263 + priv->crtc_funcs[pipe]->cancel_pending_vblank(crtc, file_priv); 264 + } 265 + 266 + static void rockchip_drm_preclose(struct drm_device *dev, 267 + struct drm_file *file_priv) 268 + { 269 + struct drm_crtc *crtc; 270 + 271 + list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) 272 + rockchip_drm_crtc_cancel_pending_vblank(crtc, file_priv); 273 + } 274 + 275 void rockchip_drm_lastclose(struct drm_device *dev) 276 { 277 struct rockchip_drm_private *priv = dev->dev_private; ··· 281 DRIVER_PRIME | DRIVER_ATOMIC, 282 .load = rockchip_drm_load, 283 .unload = rockchip_drm_unload, 284 + .preclose = rockchip_drm_preclose, 285 .lastclose = rockchip_drm_lastclose, 286 .get_vblank_counter = drm_vblank_no_hw_counter, 287 .enable_vblank = rockchip_drm_crtc_enable_vblank,
+1
drivers/gpu/drm/rockchip/rockchip_drm_drv.h
··· 40 int (*enable_vblank)(struct drm_crtc *crtc); 41 void (*disable_vblank)(struct drm_crtc *crtc); 42 void (*wait_for_update)(struct drm_crtc *crtc); 43 }; 44 45 struct rockchip_atomic_commit {
··· 40 int (*enable_vblank)(struct drm_crtc *crtc); 41 void (*disable_vblank)(struct drm_crtc *crtc); 42 void (*wait_for_update)(struct drm_crtc *crtc); 43 + void (*cancel_pending_vblank)(struct drm_crtc *crtc, struct drm_file *file_priv); 44 }; 45 46 struct rockchip_atomic_commit {
+67 -12
drivers/gpu/drm/rockchip/rockchip_drm_vop.c
··· 499 static void vop_crtc_disable(struct drm_crtc *crtc) 500 { 501 struct vop *vop = to_vop(crtc); 502 503 if (!vop->is_enabled) 504 return; 505 506 drm_crtc_vblank_off(crtc); 507 ··· 564 struct drm_plane_state *state) 565 { 566 struct drm_crtc *crtc = state->crtc; 567 struct drm_framebuffer *fb = state->fb; 568 struct vop_win *vop_win = to_vop_win(plane); 569 struct vop_plane_state *vop_plane_state = to_vop_plane_state(state); ··· 579 int max_scale = win->phy->scl ? FRAC_16_16(8, 1) : 580 DRM_PLANE_HELPER_NO_SCALING; 581 582 - crtc = crtc ? crtc : plane->state->crtc; 583 - /* 584 - * Both crtc or plane->state->crtc can be null. 585 - */ 586 if (!crtc || !fb) 587 goto out_disable; 588 src->x1 = state->src_x; 589 src->y1 = state->src_y; 590 src->x2 = state->src_x + state->src_w; ··· 597 598 clip.x1 = 0; 599 clip.y1 = 0; 600 - clip.x2 = crtc->mode.hdisplay; 601 - clip.y2 = crtc->mode.vdisplay; 602 603 ret = drm_plane_helper_check_update(plane, crtc, state->fb, 604 src, dest, &clip, ··· 890 WARN_ON(!wait_for_completion_timeout(&vop->wait_update_complete, 100)); 891 } 892 893 static const struct rockchip_crtc_funcs private_crtc_funcs = { 894 .enable_vblank = vop_crtc_enable_vblank, 895 .disable_vblank = vop_crtc_disable_vblank, 896 .wait_for_update = vop_crtc_wait_for_update, 897 }; 898 899 static bool vop_crtc_mode_fixup(struct drm_crtc *crtc, ··· 921 struct drm_display_mode *adjusted_mode) 922 { 923 struct vop *vop = to_vop(crtc); 924 - 925 - if (adjusted_mode->htotal == 0 || adjusted_mode->vtotal == 0) 926 - return false; 927 928 adjusted_mode->clock = 929 clk_round_rate(vop->dclk, mode->clock * 1000) / 1000; ··· 1142 const struct vop_data *vop_data = vop->data; 1143 struct device *dev = vop->dev; 1144 struct drm_device *drm_dev = vop->drm_dev; 1145 - struct drm_plane *primary = NULL, *cursor = NULL, *plane; 1146 struct drm_crtc *crtc = &vop->crtc; 1147 struct device_node *port; 1148 int ret; ··· 1182 ret = drm_crtc_init_with_planes(drm_dev, crtc, primary, cursor, 1183 &vop_crtc_funcs, NULL); 1184 if (ret) 1185 - return ret; 1186 1187 drm_crtc_helper_add(crtc, &vop_crtc_helper_funcs); 1188 ··· 1215 if (!port) { 1216 DRM_ERROR("no port node found in %s\n", 1217 dev->of_node->full_name); 1218 goto err_cleanup_crtc; 1219 } 1220 ··· 1229 err_cleanup_crtc: 1230 drm_crtc_cleanup(crtc); 1231 err_cleanup_planes: 1232 - list_for_each_entry(plane, &drm_dev->mode_config.plane_list, head) 1233 drm_plane_cleanup(plane); 1234 return ret; 1235 } ··· 1238 static void vop_destroy_crtc(struct vop *vop) 1239 { 1240 struct drm_crtc *crtc = &vop->crtc; 1241 1242 rockchip_unregister_crtc_funcs(crtc); 1243 of_node_put(crtc->port); 1244 drm_crtc_cleanup(crtc); 1245 } 1246
··· 499 static void vop_crtc_disable(struct drm_crtc *crtc) 500 { 501 struct vop *vop = to_vop(crtc); 502 + int i; 503 504 if (!vop->is_enabled) 505 return; 506 + 507 + /* 508 + * We need to make sure that all windows are disabled before we 509 + * disable that crtc. Otherwise we might try to scan from a destroyed 510 + * buffer later. 511 + */ 512 + for (i = 0; i < vop->data->win_size; i++) { 513 + struct vop_win *vop_win = &vop->win[i]; 514 + const struct vop_win_data *win = vop_win->data; 515 + 516 + spin_lock(&vop->reg_lock); 517 + VOP_WIN_SET(vop, win, enable, 0); 518 + spin_unlock(&vop->reg_lock); 519 + } 520 521 drm_crtc_vblank_off(crtc); 522 ··· 549 struct drm_plane_state *state) 550 { 551 struct drm_crtc *crtc = state->crtc; 552 + struct drm_crtc_state *crtc_state; 553 struct drm_framebuffer *fb = state->fb; 554 struct vop_win *vop_win = to_vop_win(plane); 555 struct vop_plane_state *vop_plane_state = to_vop_plane_state(state); ··· 563 int max_scale = win->phy->scl ? FRAC_16_16(8, 1) : 564 DRM_PLANE_HELPER_NO_SCALING; 565 566 if (!crtc || !fb) 567 goto out_disable; 568 + 569 + crtc_state = drm_atomic_get_existing_crtc_state(state->state, crtc); 570 + if (WARN_ON(!crtc_state)) 571 + return -EINVAL; 572 + 573 src->x1 = state->src_x; 574 src->y1 = state->src_y; 575 src->x2 = state->src_x + state->src_w; ··· 580 581 clip.x1 = 0; 582 clip.y1 = 0; 583 + clip.x2 = crtc_state->adjusted_mode.hdisplay; 584 + clip.y2 = crtc_state->adjusted_mode.vdisplay; 585 586 ret = drm_plane_helper_check_update(plane, crtc, state->fb, 587 src, dest, &clip, ··· 873 WARN_ON(!wait_for_completion_timeout(&vop->wait_update_complete, 100)); 874 } 875 876 + static void vop_crtc_cancel_pending_vblank(struct drm_crtc *crtc, 877 + struct drm_file *file_priv) 878 + { 879 + struct drm_device *drm = crtc->dev; 880 + struct vop *vop = to_vop(crtc); 881 + struct drm_pending_vblank_event *e; 882 + unsigned long flags; 883 + 884 + spin_lock_irqsave(&drm->event_lock, flags); 885 + e = vop->event; 886 + if (e && e->base.file_priv == file_priv) { 887 + vop->event = NULL; 888 + 889 + e->base.destroy(&e->base); 890 + file_priv->event_space += sizeof(e->event); 891 + } 892 + spin_unlock_irqrestore(&drm->event_lock, flags); 893 + } 894 + 895 static const struct rockchip_crtc_funcs private_crtc_funcs = { 896 .enable_vblank = vop_crtc_enable_vblank, 897 .disable_vblank = vop_crtc_disable_vblank, 898 .wait_for_update = vop_crtc_wait_for_update, 899 + .cancel_pending_vblank = vop_crtc_cancel_pending_vblank, 900 }; 901 902 static bool vop_crtc_mode_fixup(struct drm_crtc *crtc, ··· 884 struct drm_display_mode *adjusted_mode) 885 { 886 struct vop *vop = to_vop(crtc); 887 888 adjusted_mode->clock = 889 clk_round_rate(vop->dclk, mode->clock * 1000) / 1000; ··· 1108 const struct vop_data *vop_data = vop->data; 1109 struct device *dev = vop->dev; 1110 struct drm_device *drm_dev = vop->drm_dev; 1111 + struct drm_plane *primary = NULL, *cursor = NULL, *plane, *tmp; 1112 struct drm_crtc *crtc = &vop->crtc; 1113 struct device_node *port; 1114 int ret; ··· 1148 ret = drm_crtc_init_with_planes(drm_dev, crtc, primary, cursor, 1149 &vop_crtc_funcs, NULL); 1150 if (ret) 1151 + goto err_cleanup_planes; 1152 1153 drm_crtc_helper_add(crtc, &vop_crtc_helper_funcs); 1154 ··· 1181 if (!port) { 1182 DRM_ERROR("no port node found in %s\n", 1183 dev->of_node->full_name); 1184 + ret = -ENOENT; 1185 goto err_cleanup_crtc; 1186 } 1187 ··· 1194 err_cleanup_crtc: 1195 drm_crtc_cleanup(crtc); 1196 err_cleanup_planes: 1197 + list_for_each_entry_safe(plane, tmp, 
&drm_dev->mode_config.plane_list, 1198 + head) 1199 drm_plane_cleanup(plane); 1200 return ret; 1201 } ··· 1202 static void vop_destroy_crtc(struct vop *vop) 1203 { 1204 struct drm_crtc *crtc = &vop->crtc; 1205 + struct drm_device *drm_dev = vop->drm_dev; 1206 + struct drm_plane *plane, *tmp; 1207 1208 rockchip_unregister_crtc_funcs(crtc); 1209 of_node_put(crtc->port); 1210 + 1211 + /* 1212 + * We need to cleanup the planes now. Why? 1213 + * 1214 + * The planes are "&vop->win[i].base". That means the memory is 1215 + * all part of the big "struct vop" chunk of memory. That memory 1216 + * was devm allocated and associated with this component. We need to 1217 + * free it ourselves before vop_unbind() finishes. 1218 + */ 1219 + list_for_each_entry_safe(plane, tmp, &drm_dev->mode_config.plane_list, 1220 + head) 1221 + vop_plane_destroy(plane); 1222 + 1223 + /* 1224 + * Destroy CRTC after vop_plane_destroy() since vop_disable_plane() 1225 + * references the CRTC. 1226 + */ 1227 drm_crtc_cleanup(crtc); 1228 } 1229
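The vop teardown comment above spells out an ordering constraint: planes are destroyed before the CRTC because disabling a plane still dereferences its CRTC, and the plane memory itself is devm-allocated so it must be released before the component goes away. A compact illustration of that destroy-dependents-first ordering, using invented demo types:

#include <stdio.h>

/*
 * Teardown-ordering sketch: objects that reference another object are
 * destroyed first. Names are illustrative, not the rockchip driver's.
 */
struct crtc_demo  { int alive; };
struct plane_demo { struct crtc_demo *crtc; };

static void plane_destroy(struct plane_demo *p)
{
	/* Disabling the plane still dereferences its CRTC. */
	if (p->crtc->alive)
		printf("plane disabled against a live CRTC\n");
	p->crtc = NULL;
}

static void crtc_destroy(struct crtc_demo *c)
{
	c->alive = 0;
	printf("CRTC destroyed\n");
}

int main(void)
{
	struct crtc_demo crtc = { .alive = 1 };
	struct plane_demo plane = { .crtc = &crtc };

	/* Planes first, CRTC last - the order the vop teardown relies on. */
	plane_destroy(&plane);
	crtc_destroy(&crtc);
	return 0;
}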
+1 -1
drivers/gpu/drm/udl/udl_fb.c
··· 536 out_destroy_fbi: 537 drm_fb_helper_release_fbi(helper); 538 out_gfree: 539 - drm_gem_object_unreference(&ufbdev->ufb.obj->base); 540 out: 541 return ret; 542 }
··· 536 out_destroy_fbi: 537 drm_fb_helper_release_fbi(helper); 538 out_gfree: 539 + drm_gem_object_unreference_unlocked(&ufbdev->ufb.obj->base); 540 out: 541 return ret; 542 }
+1 -1
drivers/gpu/drm/udl/udl_gem.c
··· 52 return ret; 53 } 54 55 - drm_gem_object_unreference(&obj->base); 56 *handle_p = handle; 57 return 0; 58 }
··· 52 return ret; 53 } 54 55 + drm_gem_object_unreference_unlocked(&obj->base); 56 *handle_p = handle; 57 return 0; 58 }
+6
drivers/hwmon/max1111.c
··· 85 86 int max1111_read_channel(int channel) 87 { 88 return max1111_read(&the_max1111->spi->dev, channel); 89 } 90 EXPORT_SYMBOL(max1111_read_channel); ··· 261 { 262 struct max1111_data *data = spi_get_drvdata(spi); 263 264 hwmon_device_unregister(data->hwmon_dev); 265 sysfs_remove_group(&spi->dev.kobj, &max1110_attr_group); 266 sysfs_remove_group(&spi->dev.kobj, &max1111_attr_group);
··· 85 86 int max1111_read_channel(int channel) 87 { 88 + if (!the_max1111 || !the_max1111->spi) 89 + return -ENODEV; 90 + 91 return max1111_read(&the_max1111->spi->dev, channel); 92 } 93 EXPORT_SYMBOL(max1111_read_channel); ··· 258 { 259 struct max1111_data *data = spi_get_drvdata(spi); 260 261 + #ifdef CONFIG_SHARPSL_PM 262 + the_max1111 = NULL; 263 + #endif 264 hwmon_device_unregister(data->hwmon_dev); 265 sysfs_remove_group(&spi->dev.kobj, &max1110_attr_group); 266 sysfs_remove_group(&spi->dev.kobj, &max1111_attr_group);
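The max1111 fix guards a module-global pointer that an exported accessor dereferences, and clears it on remove so a late caller gets -ENODEV instead of a stale pointer. A standalone sketch of the same guard, with hypothetical demo_* names:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustrative stand-in for a driver-global device pointer. */
struct demo_dev { int value; };

static struct demo_dev *the_dev;

/* Exported-style accessor: fail gracefully when no device is bound. */
static int demo_read_channel(void)
{
	if (!the_dev)
		return -ENODEV;
	return the_dev->value;
}

static void demo_probe(void)
{
	the_dev = malloc(sizeof(*the_dev));
	if (the_dev)
		the_dev->value = 42;
}

static void demo_remove(void)
{
	/* Clear the global before freeing so late callers see -ENODEV. */
	struct demo_dev *dev = the_dev;

	the_dev = NULL;
	free(dev);
}

int main(void)
{
	printf("before probe: %d\n", demo_read_channel());
	demo_probe();
	printf("after probe:  %d\n", demo_read_channel());
	demo_remove();
	printf("after remove: %d\n", demo_read_channel());
	return 0;
}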
+1 -1
drivers/ide/icside.c
··· 451 return ret; 452 } 453 454 - static const struct ide_port_info icside_v6_port_info __initconst = { 455 .init_dma = icside_dma_off_init, 456 .port_ops = &icside_v6_no_dma_port_ops, 457 .host_flags = IDE_HFLAG_SERIALIZE | IDE_HFLAG_MMIO,
··· 451 return ret; 452 } 453 454 + static const struct ide_port_info icside_v6_port_info = { 455 .init_dma = icside_dma_off_init, 456 .port_ops = &icside_v6_no_dma_port_ops, 457 .host_flags = IDE_HFLAG_SERIALIZE | IDE_HFLAG_MMIO,
+2
drivers/ide/palm_bk3710.c
··· 325 326 clk_enable(clk); 327 rate = clk_get_rate(clk); 328 329 /* NOTE: round *down* to meet minimum timings; we count in clocks */ 330 ideclk_period = 1000000000UL / rate;
··· 325 326 clk_enable(clk); 327 rate = clk_get_rate(clk); 328 + if (!rate) 329 + return -EINVAL; 330 331 /* NOTE: round *down* to meet minimum timings; we count in clocks */ 332 ideclk_period = 1000000000UL / rate;
+4 -35
drivers/infiniband/ulp/isert/ib_isert.c
··· 63 struct rdma_cm_id *isert_setup_id(struct isert_np *isert_np); 64 65 static void isert_release_work(struct work_struct *work); 66 - static void isert_wait4flush(struct isert_conn *isert_conn); 67 static void isert_recv_done(struct ib_cq *cq, struct ib_wc *wc); 68 static void isert_send_done(struct ib_cq *cq, struct ib_wc *wc); 69 static void isert_login_recv_done(struct ib_cq *cq, struct ib_wc *wc); ··· 140 attr.qp_context = isert_conn; 141 attr.send_cq = comp->cq; 142 attr.recv_cq = comp->cq; 143 - attr.cap.max_send_wr = ISERT_QP_MAX_REQ_DTOS; 144 attr.cap.max_recv_wr = ISERT_QP_MAX_RECV_DTOS + 1; 145 attr.cap.max_send_sge = device->ib_device->attrs.max_sge; 146 isert_conn->max_sge = min(device->ib_device->attrs.max_sge, ··· 886 break; 887 case ISER_CONN_UP: 888 isert_conn_terminate(isert_conn); 889 - isert_wait4flush(isert_conn); 890 isert_handle_unbound_conn(isert_conn); 891 break; 892 case ISER_CONN_BOUND: ··· 3212 } 3213 } 3214 3215 - static void 3216 - isert_beacon_done(struct ib_cq *cq, struct ib_wc *wc) 3217 - { 3218 - struct isert_conn *isert_conn = wc->qp->qp_context; 3219 - 3220 - isert_print_wc(wc, "beacon"); 3221 - 3222 - isert_info("conn %p completing wait_comp_err\n", isert_conn); 3223 - complete(&isert_conn->wait_comp_err); 3224 - } 3225 - 3226 - static void 3227 - isert_wait4flush(struct isert_conn *isert_conn) 3228 - { 3229 - struct ib_recv_wr *bad_wr; 3230 - static struct ib_cqe cqe = { .done = isert_beacon_done }; 3231 - 3232 - isert_info("conn %p\n", isert_conn); 3233 - 3234 - init_completion(&isert_conn->wait_comp_err); 3235 - isert_conn->beacon.wr_cqe = &cqe; 3236 - /* post an indication that all flush errors were consumed */ 3237 - if (ib_post_recv(isert_conn->qp, &isert_conn->beacon, &bad_wr)) { 3238 - isert_err("conn %p failed to post beacon", isert_conn); 3239 - return; 3240 - } 3241 - 3242 - wait_for_completion(&isert_conn->wait_comp_err); 3243 - } 3244 - 3245 /** 3246 * isert_put_unsol_pending_cmds() - Drop commands waiting for 3247 * unsolicitate dataout ··· 3257 isert_conn_terminate(isert_conn); 3258 mutex_unlock(&isert_conn->mutex); 3259 3260 - isert_wait4flush(isert_conn); 3261 isert_put_unsol_pending_cmds(conn); 3262 isert_wait4cmds(conn); 3263 isert_wait4logout(isert_conn); ··· 3269 { 3270 struct isert_conn *isert_conn = conn->context; 3271 3272 - isert_wait4flush(isert_conn); 3273 isert_put_conn(isert_conn); 3274 } 3275
··· 63 struct rdma_cm_id *isert_setup_id(struct isert_np *isert_np); 64 65 static void isert_release_work(struct work_struct *work); 66 static void isert_recv_done(struct ib_cq *cq, struct ib_wc *wc); 67 static void isert_send_done(struct ib_cq *cq, struct ib_wc *wc); 68 static void isert_login_recv_done(struct ib_cq *cq, struct ib_wc *wc); ··· 141 attr.qp_context = isert_conn; 142 attr.send_cq = comp->cq; 143 attr.recv_cq = comp->cq; 144 + attr.cap.max_send_wr = ISERT_QP_MAX_REQ_DTOS + 1; 145 attr.cap.max_recv_wr = ISERT_QP_MAX_RECV_DTOS + 1; 146 attr.cap.max_send_sge = device->ib_device->attrs.max_sge; 147 isert_conn->max_sge = min(device->ib_device->attrs.max_sge, ··· 887 break; 888 case ISER_CONN_UP: 889 isert_conn_terminate(isert_conn); 890 + ib_drain_qp(isert_conn->qp); 891 isert_handle_unbound_conn(isert_conn); 892 break; 893 case ISER_CONN_BOUND: ··· 3213 } 3214 } 3215 3216 /** 3217 * isert_put_unsol_pending_cmds() - Drop commands waiting for 3218 * unsolicitate dataout ··· 3288 isert_conn_terminate(isert_conn); 3289 mutex_unlock(&isert_conn->mutex); 3290 3291 + ib_drain_qp(isert_conn->qp); 3292 isert_put_unsol_pending_cmds(conn); 3293 isert_wait4cmds(conn); 3294 isert_wait4logout(isert_conn); ··· 3300 { 3301 struct isert_conn *isert_conn = conn->context; 3302 3303 + ib_drain_qp(isert_conn->qp); 3304 isert_put_conn(isert_conn); 3305 } 3306
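The isert change drops the hand-rolled completion beacon in favour of ib_drain_qp(), which encapsulates the same idea: queue a final marker and block until its completion proves everything ahead of it has been flushed. Below is only a userspace stand-in for that wait-for-drain handshake, using a condition variable rather than a work request:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  done_cond = PTHREAD_COND_INITIALIZER;
static int drained;

/* Consumer side: after processing all queued work, signal completion. */
static void *consumer(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock);
	drained = 1;
	pthread_cond_signal(&done_cond);
	pthread_mutex_unlock(&lock);
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, consumer, NULL);

	/* Equivalent of waiting for the flush/drain to finish. */
	pthread_mutex_lock(&lock);
	while (!drained)
		pthread_cond_wait(&done_cond, &lock);
	pthread_mutex_unlock(&lock);

	pthread_join(t, NULL);
	printf("queue drained\n");
	return 0;
}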
-2
drivers/infiniband/ulp/isert/ib_isert.h
··· 209 struct ib_qp *qp; 210 struct isert_device *device; 211 struct mutex mutex; 212 - struct completion wait_comp_err; 213 struct kref kref; 214 struct list_head fr_pool; 215 int fr_pool_size; 216 /* lock to protect fastreg pool */ 217 spinlock_t pool_lock; 218 struct work_struct release_work; 219 - struct ib_recv_wr beacon; 220 bool logout_posted; 221 bool snd_w_inv; 222 };
··· 209 struct ib_qp *qp; 210 struct isert_device *device; 211 struct mutex mutex; 212 struct kref kref; 213 struct list_head fr_pool; 214 int fr_pool_size; 215 /* lock to protect fastreg pool */ 216 spinlock_t pool_lock; 217 struct work_struct release_work; 218 bool logout_posted; 219 bool snd_w_inv; 220 };
+10 -5
drivers/isdn/hisax/isac.c
··· 215 if (count == 0) 216 count = 32; 217 isac_empty_fifo(cs, count); 218 - if ((count = cs->rcvidx) > 0) { 219 cs->rcvidx = 0; 220 - if (!(skb = alloc_skb(count, GFP_ATOMIC))) 221 printk(KERN_WARNING "HiSax: D receive out of memory\n"); 222 else { 223 memcpy(skb_put(skb, count), cs->rcvbuf, count); ··· 253 cs->tx_skb = NULL; 254 } 255 } 256 - if ((cs->tx_skb = skb_dequeue(&cs->sq))) { 257 cs->tx_cnt = 0; 258 isac_fill_fifo(cs); 259 } else ··· 316 #if ARCOFI_USE 317 if (v1 & 0x08) { 318 if (!cs->dc.isac.mon_rx) { 319 - if (!(cs->dc.isac.mon_rx = kmalloc(MAX_MON_FRAME, GFP_ATOMIC))) { 320 if (cs->debug & L1_DEB_WARN) 321 debugl1(cs, "ISAC MON RX out of memory!"); 322 cs->dc.isac.mocr &= 0xf0; ··· 347 afterMONR0: 348 if (v1 & 0x80) { 349 if (!cs->dc.isac.mon_rx) { 350 - if (!(cs->dc.isac.mon_rx = kmalloc(MAX_MON_FRAME, GFP_ATOMIC))) { 351 if (cs->debug & L1_DEB_WARN) 352 debugl1(cs, "ISAC MON RX out of memory!"); 353 cs->dc.isac.mocr &= 0x0f;
··· 215 if (count == 0) 216 count = 32; 217 isac_empty_fifo(cs, count); 218 + count = cs->rcvidx; 219 + if (count > 0) { 220 cs->rcvidx = 0; 221 + skb = alloc_skb(count, GFP_ATOMIC); 222 + if (!skb) 223 printk(KERN_WARNING "HiSax: D receive out of memory\n"); 224 else { 225 memcpy(skb_put(skb, count), cs->rcvbuf, count); ··· 251 cs->tx_skb = NULL; 252 } 253 } 254 + cs->tx_skb = skb_dequeue(&cs->sq); 255 + if (cs->tx_skb) { 256 cs->tx_cnt = 0; 257 isac_fill_fifo(cs); 258 } else ··· 313 #if ARCOFI_USE 314 if (v1 & 0x08) { 315 if (!cs->dc.isac.mon_rx) { 316 + cs->dc.isac.mon_rx = kmalloc(MAX_MON_FRAME, GFP_ATOMIC); 317 + if (!cs->dc.isac.mon_rx) { 318 if (cs->debug & L1_DEB_WARN) 319 debugl1(cs, "ISAC MON RX out of memory!"); 320 cs->dc.isac.mocr &= 0xf0; ··· 343 afterMONR0: 344 if (v1 & 0x80) { 345 if (!cs->dc.isac.mon_rx) { 346 + cs->dc.isac.mon_rx = kmalloc(MAX_MON_FRAME, GFP_ATOMIC); 347 + if (!cs->dc.isac.mon_rx) { 348 if (cs->debug & L1_DEB_WARN) 349 debugl1(cs, "ISAC MON RX out of memory!"); 350 cs->dc.isac.mocr &= 0x0f;
+1 -1
drivers/media/v4l2-core/v4l2-mc.c
··· 34 { 35 struct media_entity *entity; 36 struct media_entity *if_vid = NULL, *if_aud = NULL; 37 - struct media_entity *tuner = NULL, *decoder = NULL, *dtv_demod = NULL; 38 struct media_entity *io_v4l = NULL, *io_vbi = NULL, *io_swradio = NULL; 39 bool is_webcam = false; 40 u32 flags;
··· 34 { 35 struct media_entity *entity; 36 struct media_entity *if_vid = NULL, *if_aud = NULL; 37 + struct media_entity *tuner = NULL, *decoder = NULL; 38 struct media_entity *io_v4l = NULL, *io_vbi = NULL, *io_swradio = NULL; 39 bool is_webcam = false; 40 u32 flags;
+72 -13
drivers/net/dsa/mv88e6xxx.c
··· 2264 mutex_unlock(&ps->smi_mutex); 2265 } 2266 2267 static int mv88e6xxx_setup_port(struct dsa_switch *ds, int port) 2268 { 2269 struct mv88e6xxx_priv_state *ps = ds_to_priv(ds); ··· 2416 PORT_CONTROL, reg); 2417 if (ret) 2418 goto abort; 2419 } 2420 2421 /* Port Control 2: don't force a good FCS, set the maximum frame size to ··· 2782 int ret; 2783 2784 mutex_lock(&ps->smi_mutex); 2785 - ret = _mv88e6xxx_phy_write_indirect(ds, port, 0x16, page); 2786 - if (ret < 0) 2787 - goto error; 2788 - ret = _mv88e6xxx_phy_read_indirect(ds, port, reg); 2789 - error: 2790 - _mv88e6xxx_phy_write_indirect(ds, port, 0x16, 0x0); 2791 mutex_unlock(&ps->smi_mutex); 2792 return ret; 2793 } 2794 ··· 2795 int ret; 2796 2797 mutex_lock(&ps->smi_mutex); 2798 - ret = _mv88e6xxx_phy_write_indirect(ds, port, 0x16, page); 2799 - if (ret < 0) 2800 - goto error; 2801 - 2802 - ret = _mv88e6xxx_phy_write_indirect(ds, port, reg, val); 2803 - error: 2804 - _mv88e6xxx_phy_write_indirect(ds, port, 0x16, 0x0); 2805 mutex_unlock(&ps->smi_mutex); 2806 return ret; 2807 } 2808
··· 2264 mutex_unlock(&ps->smi_mutex); 2265 } 2266 2267 + static int _mv88e6xxx_phy_page_write(struct dsa_switch *ds, int port, int page, 2268 + int reg, int val) 2269 + { 2270 + int ret; 2271 + 2272 + ret = _mv88e6xxx_phy_write_indirect(ds, port, 0x16, page); 2273 + if (ret < 0) 2274 + goto restore_page_0; 2275 + 2276 + ret = _mv88e6xxx_phy_write_indirect(ds, port, reg, val); 2277 + restore_page_0: 2278 + _mv88e6xxx_phy_write_indirect(ds, port, 0x16, 0x0); 2279 + 2280 + return ret; 2281 + } 2282 + 2283 + static int _mv88e6xxx_phy_page_read(struct dsa_switch *ds, int port, int page, 2284 + int reg) 2285 + { 2286 + int ret; 2287 + 2288 + ret = _mv88e6xxx_phy_write_indirect(ds, port, 0x16, page); 2289 + if (ret < 0) 2290 + goto restore_page_0; 2291 + 2292 + ret = _mv88e6xxx_phy_read_indirect(ds, port, reg); 2293 + restore_page_0: 2294 + _mv88e6xxx_phy_write_indirect(ds, port, 0x16, 0x0); 2295 + 2296 + return ret; 2297 + } 2298 + 2299 + static int mv88e6xxx_power_on_serdes(struct dsa_switch *ds) 2300 + { 2301 + int ret; 2302 + 2303 + ret = _mv88e6xxx_phy_page_read(ds, REG_FIBER_SERDES, PAGE_FIBER_SERDES, 2304 + MII_BMCR); 2305 + if (ret < 0) 2306 + return ret; 2307 + 2308 + if (ret & BMCR_PDOWN) { 2309 + ret &= ~BMCR_PDOWN; 2310 + ret = _mv88e6xxx_phy_page_write(ds, REG_FIBER_SERDES, 2311 + PAGE_FIBER_SERDES, MII_BMCR, 2312 + ret); 2313 + } 2314 + 2315 + return ret; 2316 + } 2317 + 2318 static int mv88e6xxx_setup_port(struct dsa_switch *ds, int port) 2319 { 2320 struct mv88e6xxx_priv_state *ps = ds_to_priv(ds); ··· 2365 PORT_CONTROL, reg); 2366 if (ret) 2367 goto abort; 2368 + } 2369 + 2370 + /* If this port is connected to a SerDes, make sure the SerDes is not 2371 + * powered down. 2372 + */ 2373 + if (mv88e6xxx_6352_family(ds)) { 2374 + ret = _mv88e6xxx_reg_read(ds, REG_PORT(port), PORT_STATUS); 2375 + if (ret < 0) 2376 + goto abort; 2377 + ret &= PORT_STATUS_CMODE_MASK; 2378 + if ((ret == PORT_STATUS_CMODE_100BASE_X) || 2379 + (ret == PORT_STATUS_CMODE_1000BASE_X) || 2380 + (ret == PORT_STATUS_CMODE_SGMII)) { 2381 + ret = mv88e6xxx_power_on_serdes(ds); 2382 + if (ret < 0) 2383 + goto abort; 2384 + } 2385 } 2386 2387 /* Port Control 2: don't force a good FCS, set the maximum frame size to ··· 2714 int ret; 2715 2716 mutex_lock(&ps->smi_mutex); 2717 + ret = _mv88e6xxx_phy_page_read(ds, port, page, reg); 2718 mutex_unlock(&ps->smi_mutex); 2719 + 2720 return ret; 2721 } 2722 ··· 2731 int ret; 2732 2733 mutex_lock(&ps->smi_mutex); 2734 + ret = _mv88e6xxx_phy_page_write(ds, port, page, reg, val); 2735 mutex_unlock(&ps->smi_mutex); 2736 + 2737 return ret; 2738 } 2739
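The new _mv88e6xxx_phy_page_read/_write helpers always restore page 0 on the way out, and mv88e6xxx_power_on_serdes() only rewrites BMCR when the power-down bit is actually set. The read-modify-write half of that is sketched below; the register access is faked and only the BMCR_PDOWN value mirrors the kernel's mii.h:

#include <stdio.h>

/* Illustrative register model; not the real Marvell SMI interface. */
#define BMCR_PDOWN	0x0800

static unsigned int fake_bmcr = 0x0800 | 0x0100;

static unsigned int reg_read(void)     { return fake_bmcr; }
static void reg_write(unsigned int v)  { fake_bmcr = v; }

/* Read-modify-write: only touch the register when the bit is set. */
static void serdes_power_on(void)
{
	unsigned int val = reg_read();

	if (val & BMCR_PDOWN)
		reg_write(val & ~BMCR_PDOWN);
}

int main(void)
{
	printf("before: 0x%04x\n", fake_bmcr);
	serdes_power_on();
	printf("after:  0x%04x\n", fake_bmcr);
	return 0;
}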
+8
drivers/net/dsa/mv88e6xxx.h
··· 28 #define SMI_CMD_OP_45_READ_DATA_INC ((3 << 10) | SMI_CMD_BUSY) 29 #define SMI_DATA 0x01 30 31 #define REG_PORT(p) (0x10 + (p)) 32 #define PORT_STATUS 0x00 33 #define PORT_STATUS_PAUSE_EN BIT(15) ··· 49 #define PORT_STATUS_MGMII BIT(6) /* 6185 */ 50 #define PORT_STATUS_TX_PAUSED BIT(5) 51 #define PORT_STATUS_FLOW_CTRL BIT(4) 52 #define PORT_PCS_CTRL 0x01 53 #define PORT_PCS_CTRL_RGMII_DELAY_RXCLK BIT(15) 54 #define PORT_PCS_CTRL_RGMII_DELAY_TXCLK BIT(14)
··· 28 #define SMI_CMD_OP_45_READ_DATA_INC ((3 << 10) | SMI_CMD_BUSY) 29 #define SMI_DATA 0x01 30 31 + /* Fiber/SERDES Registers are located at SMI address F, page 1 */ 32 + #define REG_FIBER_SERDES 0x0f 33 + #define PAGE_FIBER_SERDES 0x01 34 + 35 #define REG_PORT(p) (0x10 + (p)) 36 #define PORT_STATUS 0x00 37 #define PORT_STATUS_PAUSE_EN BIT(15) ··· 45 #define PORT_STATUS_MGMII BIT(6) /* 6185 */ 46 #define PORT_STATUS_TX_PAUSED BIT(5) 47 #define PORT_STATUS_FLOW_CTRL BIT(4) 48 + #define PORT_STATUS_CMODE_MASK 0x0f 49 + #define PORT_STATUS_CMODE_100BASE_X 0x8 50 + #define PORT_STATUS_CMODE_1000BASE_X 0x9 51 + #define PORT_STATUS_CMODE_SGMII 0xa 52 #define PORT_PCS_CTRL 0x01 53 #define PORT_PCS_CTRL_RGMII_DELAY_RXCLK BIT(15) 54 #define PORT_PCS_CTRL_RGMII_DELAY_TXCLK BIT(14)
+7 -3
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 2653 /* Write request msg to hwrm channel */ 2654 __iowrite32_copy(bp->bar0, data, msg_len / 4); 2655 2656 - for (i = msg_len; i < HWRM_MAX_REQ_LEN; i += 4) 2657 writel(0, bp->bar0 + i); 2658 2659 /* currently supports only one outstanding message */ ··· 3391 struct bnxt_cp_ring_info *cpr = &bnapi->cp_ring; 3392 struct bnxt_ring_struct *ring = &cpr->cp_ring_struct; 3393 3394 rc = hwrm_ring_alloc_send_msg(bp, ring, HWRM_RING_ALLOC_CMPL, i, 3395 INVALID_STATS_CTX_ID); 3396 if (rc) 3397 goto err_out; 3398 - cpr->cp_doorbell = bp->bar1 + i * 0x80; 3399 BNXT_CP_DB(cpr->cp_doorbell, cpr->cp_raw_cons); 3400 bp->grp_info[i].cp_fw_ring_id = ring->fw_ring_id; 3401 } ··· 3830 struct hwrm_ver_get_input req = {0}; 3831 struct hwrm_ver_get_output *resp = bp->hwrm_cmd_resp_addr; 3832 3833 bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_VER_GET, -1, -1); 3834 req.hwrm_intf_maj = HWRM_VERSION_MAJOR; 3835 req.hwrm_intf_min = HWRM_VERSION_MINOR; ··· 3855 bp->hwrm_cmd_timeout = le16_to_cpu(resp->def_req_timeout); 3856 if (!bp->hwrm_cmd_timeout) 3857 bp->hwrm_cmd_timeout = DFLT_HWRM_CMD_TIMEOUT; 3858 3859 hwrm_ver_get_exit: 3860 mutex_unlock(&bp->hwrm_cmd_lock); ··· 4559 if (bp->link_info.req_flow_ctrl & BNXT_LINK_PAUSE_RX) 4560 req->auto_pause |= PORT_PHY_CFG_REQ_AUTO_PAUSE_RX; 4561 if (bp->link_info.req_flow_ctrl & BNXT_LINK_PAUSE_TX) 4562 - req->auto_pause |= PORT_PHY_CFG_REQ_AUTO_PAUSE_RX; 4563 req->enables |= 4564 cpu_to_le32(PORT_PHY_CFG_REQ_ENABLES_AUTO_PAUSE); 4565 } else {
··· 2653 /* Write request msg to hwrm channel */ 2654 __iowrite32_copy(bp->bar0, data, msg_len / 4); 2655 2656 + for (i = msg_len; i < BNXT_HWRM_MAX_REQ_LEN; i += 4) 2657 writel(0, bp->bar0 + i); 2658 2659 /* currently supports only one outstanding message */ ··· 3391 struct bnxt_cp_ring_info *cpr = &bnapi->cp_ring; 3392 struct bnxt_ring_struct *ring = &cpr->cp_ring_struct; 3393 3394 + cpr->cp_doorbell = bp->bar1 + i * 0x80; 3395 rc = hwrm_ring_alloc_send_msg(bp, ring, HWRM_RING_ALLOC_CMPL, i, 3396 INVALID_STATS_CTX_ID); 3397 if (rc) 3398 goto err_out; 3399 BNXT_CP_DB(cpr->cp_doorbell, cpr->cp_raw_cons); 3400 bp->grp_info[i].cp_fw_ring_id = ring->fw_ring_id; 3401 } ··· 3830 struct hwrm_ver_get_input req = {0}; 3831 struct hwrm_ver_get_output *resp = bp->hwrm_cmd_resp_addr; 3832 3833 + bp->hwrm_max_req_len = HWRM_MAX_REQ_LEN; 3834 bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_VER_GET, -1, -1); 3835 req.hwrm_intf_maj = HWRM_VERSION_MAJOR; 3836 req.hwrm_intf_min = HWRM_VERSION_MINOR; ··· 3854 bp->hwrm_cmd_timeout = le16_to_cpu(resp->def_req_timeout); 3855 if (!bp->hwrm_cmd_timeout) 3856 bp->hwrm_cmd_timeout = DFLT_HWRM_CMD_TIMEOUT; 3857 + 3858 + if (resp->hwrm_intf_maj >= 1) 3859 + bp->hwrm_max_req_len = le16_to_cpu(resp->max_req_win_len); 3860 3861 hwrm_ver_get_exit: 3862 mutex_unlock(&bp->hwrm_cmd_lock); ··· 4555 if (bp->link_info.req_flow_ctrl & BNXT_LINK_PAUSE_RX) 4556 req->auto_pause |= PORT_PHY_CFG_REQ_AUTO_PAUSE_RX; 4557 if (bp->link_info.req_flow_ctrl & BNXT_LINK_PAUSE_TX) 4558 + req->auto_pause |= PORT_PHY_CFG_REQ_AUTO_PAUSE_TX; 4559 req->enables |= 4560 cpu_to_le32(PORT_PHY_CFG_REQ_ENABLES_AUTO_PAUSE); 4561 } else {
+2
drivers/net/ethernet/broadcom/bnxt/bnxt.h
··· 477 #define RING_CMP(idx) ((idx) & bp->cp_ring_mask) 478 #define NEXT_CMP(idx) RING_CMP(ADV_RAW_CMP(idx, 1)) 479 480 #define DFLT_HWRM_CMD_TIMEOUT 500 481 #define HWRM_CMD_TIMEOUT (bp->hwrm_cmd_timeout) 482 #define HWRM_RESET_TIMEOUT ((HWRM_CMD_TIMEOUT) * 4) ··· 954 dma_addr_t hw_tx_port_stats_map; 955 int hw_port_stats_size; 956 957 int hwrm_cmd_timeout; 958 struct mutex hwrm_cmd_lock; /* serialize hwrm messages */ 959 struct hwrm_ver_get_output ver_resp;
··· 477 #define RING_CMP(idx) ((idx) & bp->cp_ring_mask) 478 #define NEXT_CMP(idx) RING_CMP(ADV_RAW_CMP(idx, 1)) 479 480 + #define BNXT_HWRM_MAX_REQ_LEN (bp->hwrm_max_req_len) 481 #define DFLT_HWRM_CMD_TIMEOUT 500 482 #define HWRM_CMD_TIMEOUT (bp->hwrm_cmd_timeout) 483 #define HWRM_RESET_TIMEOUT ((HWRM_CMD_TIMEOUT) * 4) ··· 953 dma_addr_t hw_tx_port_stats_map; 954 int hw_port_stats_size; 955 956 + u16 hwrm_max_req_len; 957 int hwrm_cmd_timeout; 958 struct mutex hwrm_cmd_lock; /* serialize hwrm messages */ 959 struct hwrm_ver_get_output ver_resp;
+2 -4
drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
··· 855 if (BNXT_VF(bp)) 856 return; 857 epause->autoneg = !!(link_info->autoneg & BNXT_AUTONEG_FLOW_CTRL); 858 - epause->rx_pause = 859 - ((link_info->auto_pause_setting & BNXT_LINK_PAUSE_RX) != 0); 860 - epause->tx_pause = 861 - ((link_info->auto_pause_setting & BNXT_LINK_PAUSE_TX) != 0); 862 } 863 864 static int bnxt_set_pauseparam(struct net_device *dev,
··· 855 if (BNXT_VF(bp)) 856 return; 857 epause->autoneg = !!(link_info->autoneg & BNXT_AUTONEG_FLOW_CTRL); 858 + epause->rx_pause = !!(link_info->req_flow_ctrl & BNXT_LINK_PAUSE_RX); 859 + epause->tx_pause = !!(link_info->req_flow_ctrl & BNXT_LINK_PAUSE_TX); 860 } 861 862 static int bnxt_set_pauseparam(struct net_device *dev,
+11 -5
drivers/net/ethernet/broadcom/genet/bcmgenet.c
··· 1171 struct enet_cb *tx_cb_ptr; 1172 struct netdev_queue *txq; 1173 unsigned int pkts_compl = 0; 1174 unsigned int c_index; 1175 unsigned int txbds_ready; 1176 unsigned int txbds_processed = 0; ··· 1194 tx_cb_ptr = &priv->tx_cbs[ring->clean_ptr]; 1195 if (tx_cb_ptr->skb) { 1196 pkts_compl++; 1197 - dev->stats.tx_packets++; 1198 - dev->stats.tx_bytes += tx_cb_ptr->skb->len; 1199 dma_unmap_single(&dev->dev, 1200 dma_unmap_addr(tx_cb_ptr, dma_addr), 1201 dma_unmap_len(tx_cb_ptr, dma_len), 1202 DMA_TO_DEVICE); 1203 bcmgenet_free_cb(tx_cb_ptr); 1204 } else if (dma_unmap_addr(tx_cb_ptr, dma_addr)) { 1205 - dev->stats.tx_bytes += 1206 - dma_unmap_len(tx_cb_ptr, dma_len); 1207 dma_unmap_page(&dev->dev, 1208 dma_unmap_addr(tx_cb_ptr, dma_addr), 1209 dma_unmap_len(tx_cb_ptr, dma_len), ··· 1217 1218 ring->free_bds += txbds_processed; 1219 ring->c_index = (ring->c_index + txbds_processed) & DMA_C_INDEX_MASK; 1220 1221 if (ring->free_bds > (MAX_SKB_FRAGS + 1)) { 1222 txq = netdev_get_tx_queue(dev, ring->queue); ··· 1297 1298 tx_cb_ptr->skb = skb; 1299 1300 - skb_len = skb_headlen(skb) < ETH_ZLEN ? ETH_ZLEN : skb_headlen(skb); 1301 1302 mapping = dma_map_single(kdev, skb->data, skb_len, DMA_TO_DEVICE); 1303 ret = dma_mapping_error(kdev, mapping); ··· 1464 ret = NETDEV_TX_OK; 1465 goto out; 1466 } 1467 1468 /* set the SKB transmit checksum */ 1469 if (priv->desc_64b_en) {
··· 1171 struct enet_cb *tx_cb_ptr; 1172 struct netdev_queue *txq; 1173 unsigned int pkts_compl = 0; 1174 + unsigned int bytes_compl = 0; 1175 unsigned int c_index; 1176 unsigned int txbds_ready; 1177 unsigned int txbds_processed = 0; ··· 1193 tx_cb_ptr = &priv->tx_cbs[ring->clean_ptr]; 1194 if (tx_cb_ptr->skb) { 1195 pkts_compl++; 1196 + bytes_compl += GENET_CB(tx_cb_ptr->skb)->bytes_sent; 1197 dma_unmap_single(&dev->dev, 1198 dma_unmap_addr(tx_cb_ptr, dma_addr), 1199 dma_unmap_len(tx_cb_ptr, dma_len), 1200 DMA_TO_DEVICE); 1201 bcmgenet_free_cb(tx_cb_ptr); 1202 } else if (dma_unmap_addr(tx_cb_ptr, dma_addr)) { 1203 dma_unmap_page(&dev->dev, 1204 dma_unmap_addr(tx_cb_ptr, dma_addr), 1205 dma_unmap_len(tx_cb_ptr, dma_len), ··· 1219 1220 ring->free_bds += txbds_processed; 1221 ring->c_index = (ring->c_index + txbds_processed) & DMA_C_INDEX_MASK; 1222 + 1223 + dev->stats.tx_packets += pkts_compl; 1224 + dev->stats.tx_bytes += bytes_compl; 1225 1226 if (ring->free_bds > (MAX_SKB_FRAGS + 1)) { 1227 txq = netdev_get_tx_queue(dev, ring->queue); ··· 1296 1297 tx_cb_ptr->skb = skb; 1298 1299 + skb_len = skb_headlen(skb); 1300 1301 mapping = dma_map_single(kdev, skb->data, skb_len, DMA_TO_DEVICE); 1302 ret = dma_mapping_error(kdev, mapping); ··· 1463 ret = NETDEV_TX_OK; 1464 goto out; 1465 } 1466 + 1467 + /* Retain how many bytes will be sent on the wire, without TSB inserted 1468 + * by transmit checksum offload 1469 + */ 1470 + GENET_CB(skb)->bytes_sent = skb->len; 1471 1472 /* set the SKB transmit checksum */ 1473 if (priv->desc_64b_en) {
+6
drivers/net/ethernet/broadcom/genet/bcmgenet.h
··· 531 u32 flags; 532 }; 533 534 struct bcmgenet_tx_ring { 535 spinlock_t lock; /* ring lock */ 536 struct napi_struct napi; /* NAPI per tx queue */
··· 531 u32 flags; 532 }; 533 534 + struct bcmgenet_skb_cb { 535 + unsigned int bytes_sent; /* bytes on the wire (no TSB) */ 536 + }; 537 + 538 + #define GENET_CB(skb) ((struct bcmgenet_skb_cb *)((skb)->cb)) 539 + 540 struct bcmgenet_tx_ring { 541 spinlock_t lock; /* ring lock */ 542 struct napi_struct napi; /* NAPI per tx queue */
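bcmgenet stashes the pre-offload length in the skb control block through a small cast macro (GENET_CB) and reads it back on completion, so tx_bytes counts what actually went on the wire rather than the post-TSB length. A self-contained sketch of that cb-area pattern with demo types; the 48-byte cb size matches struct sk_buff, everything else is invented:

#include <stdio.h>

/* Simplified stand-in for struct sk_buff and its 48-byte cb[] scratch area. */
struct skb_demo {
	unsigned int len;
	char cb[48];
};

struct demo_skb_cb {
	unsigned int bytes_sent;	/* bytes on the wire, pre-offload */
};

#define DEMO_CB(skb) ((struct demo_skb_cb *)((skb)->cb))

/* The per-skb metadata must fit inside the scratch area. */
_Static_assert(sizeof(struct demo_skb_cb) <= sizeof(((struct skb_demo *)0)->cb),
	       "cb area too small");

int main(void)
{
	struct skb_demo skb = { .len = 1514 };
	unsigned long long tx_bytes = 0;

	/* Transmit path: record the length before headers are rewritten. */
	DEMO_CB(&skb)->bytes_sent = skb.len;
	skb.len += 64;	/* e.g. a transmit status block gets prepended */

	/* Completion path: account what actually went on the wire. */
	tx_bytes += DEMO_CB(&skb)->bytes_sent;
	printf("tx_bytes = %llu\n", tx_bytes);
	return 0;
}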
+55 -14
drivers/net/ethernet/cadence/macb.c
··· 917 unsigned int frag_len = bp->rx_buffer_size; 918 919 if (offset + frag_len > len) { 920 - BUG_ON(frag != last_frag); 921 frag_len = len - offset; 922 } 923 skb_copy_to_linear_data_offset(skb, offset, ··· 948 return 0; 949 } 950 951 static int macb_rx(struct macb *bp, int budget) 952 { 953 int received = 0; 954 unsigned int tail; 955 int first_frag = -1; ··· 990 991 if (ctrl & MACB_BIT(RX_EOF)) { 992 int dropped; 993 - BUG_ON(first_frag == -1); 994 995 dropped = macb_rx_frame(bp, first_frag, tail); 996 first_frag = -1; 997 if (!dropped) { 998 received++; 999 budget--; 1000 } 1001 } 1002 } 1003 1004 if (first_frag != -1) ··· 1146 macb_writel(bp, NCR, ctrl | MACB_BIT(RE)); 1147 1148 if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE) 1149 - macb_writel(bp, ISR, MACB_BIT(RXUBR)); 1150 } 1151 1152 if (status & MACB_BIT(ISR_ROVR)) { ··· 1569 static void macb_init_rings(struct macb *bp) 1570 { 1571 int i; 1572 - dma_addr_t addr; 1573 1574 - addr = bp->rx_buffers_dma; 1575 - for (i = 0; i < RX_RING_SIZE; i++) { 1576 - bp->rx_ring[i].addr = addr; 1577 - bp->rx_ring[i].ctrl = 0; 1578 - addr += bp->rx_buffer_size; 1579 - } 1580 - bp->rx_ring[RX_RING_SIZE - 1].addr |= MACB_BIT(RX_WRAP); 1581 1582 for (i = 0; i < TX_RING_SIZE; i++) { 1583 bp->queues[0].tx_ring[i].addr = 0; ··· 2996 phy_node = of_get_next_available_child(np, NULL); 2997 if (phy_node) { 2998 int gpio = of_get_named_gpio(phy_node, "reset-gpios", 0); 2999 - if (gpio_is_valid(gpio)) 3000 bp->reset_gpio = gpio_to_desc(gpio); 3001 - gpiod_direction_output(bp->reset_gpio, 1); 3002 } 3003 of_node_put(phy_node); 3004 ··· 3069 mdiobus_free(bp->mii_bus); 3070 3071 /* Shutdown the PHY if there is a GPIO reset */ 3072 - gpiod_set_value(bp->reset_gpio, 0); 3073 3074 unregister_netdev(dev); 3075 clk_disable_unprepare(bp->tx_clk);
··· 917 unsigned int frag_len = bp->rx_buffer_size; 918 919 if (offset + frag_len > len) { 920 + if (unlikely(frag != last_frag)) { 921 + dev_kfree_skb_any(skb); 922 + return -1; 923 + } 924 frag_len = len - offset; 925 } 926 skb_copy_to_linear_data_offset(skb, offset, ··· 945 return 0; 946 } 947 948 + static inline void macb_init_rx_ring(struct macb *bp) 949 + { 950 + dma_addr_t addr; 951 + int i; 952 + 953 + addr = bp->rx_buffers_dma; 954 + for (i = 0; i < RX_RING_SIZE; i++) { 955 + bp->rx_ring[i].addr = addr; 956 + bp->rx_ring[i].ctrl = 0; 957 + addr += bp->rx_buffer_size; 958 + } 959 + bp->rx_ring[RX_RING_SIZE - 1].addr |= MACB_BIT(RX_WRAP); 960 + } 961 + 962 static int macb_rx(struct macb *bp, int budget) 963 { 964 + bool reset_rx_queue = false; 965 int received = 0; 966 unsigned int tail; 967 int first_frag = -1; ··· 972 973 if (ctrl & MACB_BIT(RX_EOF)) { 974 int dropped; 975 + 976 + if (unlikely(first_frag == -1)) { 977 + reset_rx_queue = true; 978 + continue; 979 + } 980 981 dropped = macb_rx_frame(bp, first_frag, tail); 982 first_frag = -1; 983 + if (unlikely(dropped < 0)) { 984 + reset_rx_queue = true; 985 + continue; 986 + } 987 if (!dropped) { 988 received++; 989 budget--; 990 } 991 } 992 + } 993 + 994 + if (unlikely(reset_rx_queue)) { 995 + unsigned long flags; 996 + u32 ctrl; 997 + 998 + netdev_err(bp->dev, "RX queue corruption: reset it\n"); 999 + 1000 + spin_lock_irqsave(&bp->lock, flags); 1001 + 1002 + ctrl = macb_readl(bp, NCR); 1003 + macb_writel(bp, NCR, ctrl & ~MACB_BIT(RE)); 1004 + 1005 + macb_init_rx_ring(bp); 1006 + macb_writel(bp, RBQP, bp->rx_ring_dma); 1007 + 1008 + macb_writel(bp, NCR, ctrl | MACB_BIT(RE)); 1009 + 1010 + spin_unlock_irqrestore(&bp->lock, flags); 1011 + return received; 1012 } 1013 1014 if (first_frag != -1) ··· 1100 macb_writel(bp, NCR, ctrl | MACB_BIT(RE)); 1101 1102 if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE) 1103 + queue_writel(queue, ISR, MACB_BIT(RXUBR)); 1104 } 1105 1106 if (status & MACB_BIT(ISR_ROVR)) { ··· 1523 static void macb_init_rings(struct macb *bp) 1524 { 1525 int i; 1526 1527 + macb_init_rx_ring(bp); 1528 1529 for (i = 0; i < TX_RING_SIZE; i++) { 1530 bp->queues[0].tx_ring[i].addr = 0; ··· 2957 phy_node = of_get_next_available_child(np, NULL); 2958 if (phy_node) { 2959 int gpio = of_get_named_gpio(phy_node, "reset-gpios", 0); 2960 + if (gpio_is_valid(gpio)) { 2961 bp->reset_gpio = gpio_to_desc(gpio); 2962 + gpiod_direction_output(bp->reset_gpio, 1); 2963 + } 2964 } 2965 of_node_put(phy_node); 2966 ··· 3029 mdiobus_free(bp->mii_bus); 3030 3031 /* Shutdown the PHY if there is a GPIO reset */ 3032 + if (bp->reset_gpio) 3033 + gpiod_set_value(bp->reset_gpio, 0); 3034 3035 unregister_netdev(dev); 3036 clk_disable_unprepare(bp->tx_clk);
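The macb fix factors RX ring setup into macb_init_rx_ring() so the same code can rebuild the ring when corruption is detected, with the receiver disabled and the queue base rewritten under the lock. A standalone version of just the ring-rebuild helper, using a simplified descriptor layout rather than the real macb DMA descriptors:

#include <stdio.h>

#define RX_RING_SIZE	8
#define RX_WRAP_FLAG	0x1u

/* Minimal descriptor model; not the real macb DMA descriptor layout. */
struct rx_desc_demo {
	unsigned long addr;
	unsigned long ctrl;
};

static struct rx_desc_demo rx_ring[RX_RING_SIZE];

/* Rebuild every descriptor and re-mark the wrap bit on the last one. */
static void demo_init_rx_ring(unsigned long buffers_dma,
			      unsigned long buffer_size)
{
	unsigned long addr = buffers_dma;
	int i;

	for (i = 0; i < RX_RING_SIZE; i++) {
		rx_ring[i].addr = addr;
		rx_ring[i].ctrl = 0;
		addr += buffer_size;
	}
	rx_ring[RX_RING_SIZE - 1].addr |= RX_WRAP_FLAG;
}

int main(void)
{
	demo_init_rx_ring(0x10000000UL, 2048);
	printf("last descriptor addr: 0x%lx\n",
	       rx_ring[RX_RING_SIZE - 1].addr);
	return 0;
}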
+1 -1
drivers/net/ethernet/freescale/fec_main.c
··· 943 else 944 val &= ~FEC_RACC_OPTIONS; 945 writel(val, fep->hwp + FEC_RACC); 946 } 947 - writel(PKT_MAXBUF_SIZE, fep->hwp + FEC_FTRL); 948 #endif 949 950 /*
··· 943 else 944 val &= ~FEC_RACC_OPTIONS; 945 writel(val, fep->hwp + FEC_RACC); 946 + writel(PKT_MAXBUF_SIZE, fep->hwp + FEC_FTRL); 947 } 948 #endif 949 950 /*
+1 -1
drivers/net/ethernet/hisilicon/hns/hnae.h
··· 469 u32 *tx_usecs, u32 *rx_usecs); 470 void (*get_rx_max_coalesced_frames)(struct hnae_handle *handle, 471 u32 *tx_frames, u32 *rx_frames); 472 - void (*set_coalesce_usecs)(struct hnae_handle *handle, u32 timeout); 473 int (*set_coalesce_frames)(struct hnae_handle *handle, 474 u32 coalesce_frames); 475 void (*set_promisc_mode)(struct hnae_handle *handle, u32 en);
··· 469 u32 *tx_usecs, u32 *rx_usecs); 470 void (*get_rx_max_coalesced_frames)(struct hnae_handle *handle, 471 u32 *tx_frames, u32 *rx_frames); 472 + int (*set_coalesce_usecs)(struct hnae_handle *handle, u32 timeout); 473 int (*set_coalesce_frames)(struct hnae_handle *handle, 474 u32 coalesce_frames); 475 void (*set_promisc_mode)(struct hnae_handle *handle, u32 en);
+23 -41
drivers/net/ethernet/hisilicon/hns/hns_ae_adapt.c
··· 159 ae_handle->qs[i]->tx_ring.q = ae_handle->qs[i]; 160 161 ring_pair_cb->used_by_vf = 1; 162 - if (port_idx < DSAF_SERVICE_PORT_NUM_PER_DSAF) 163 - ring_pair_cb->port_id_in_dsa = port_idx; 164 - else 165 - ring_pair_cb->port_id_in_dsa = 0; 166 - 167 ring_pair_cb++; 168 } 169 ··· 448 static void hns_ae_get_coalesce_usecs(struct hnae_handle *handle, 449 u32 *tx_usecs, u32 *rx_usecs) 450 { 451 - int port; 452 453 - port = hns_ae_map_eport_to_dport(handle->eport_id); 454 - 455 - *tx_usecs = hns_rcb_get_coalesce_usecs( 456 - hns_ae_get_dsaf_dev(handle->dev), 457 - hns_dsaf_get_comm_idx_by_port(port)); 458 - *rx_usecs = hns_rcb_get_coalesce_usecs( 459 - hns_ae_get_dsaf_dev(handle->dev), 460 - hns_dsaf_get_comm_idx_by_port(port)); 461 } 462 463 static void hns_ae_get_rx_max_coalesced_frames(struct hnae_handle *handle, 464 u32 *tx_frames, u32 *rx_frames) 465 { 466 - int port; 467 468 - assert(handle); 469 - 470 - port = hns_ae_map_eport_to_dport(handle->eport_id); 471 - 472 - *tx_frames = hns_rcb_get_coalesced_frames( 473 - hns_ae_get_dsaf_dev(handle->dev), port); 474 - *rx_frames = hns_rcb_get_coalesced_frames( 475 - hns_ae_get_dsaf_dev(handle->dev), port); 476 } 477 478 - static void hns_ae_set_coalesce_usecs(struct hnae_handle *handle, 479 - u32 timeout) 480 { 481 - int port; 482 483 - assert(handle); 484 - 485 - port = hns_ae_map_eport_to_dport(handle->eport_id); 486 - 487 - hns_rcb_set_coalesce_usecs(hns_ae_get_dsaf_dev(handle->dev), 488 - port, timeout); 489 } 490 491 static int hns_ae_set_coalesce_frames(struct hnae_handle *handle, 492 u32 coalesce_frames) 493 { 494 - int port; 495 - int ret; 496 497 - assert(handle); 498 - 499 - port = hns_ae_map_eport_to_dport(handle->eport_id); 500 - 501 - ret = hns_rcb_set_coalesced_frames(hns_ae_get_dsaf_dev(handle->dev), 502 - port, coalesce_frames); 503 - return ret; 504 } 505 506 void hns_ae_update_stats(struct hnae_handle *handle,
··· 159 ae_handle->qs[i]->tx_ring.q = ae_handle->qs[i]; 160 161 ring_pair_cb->used_by_vf = 1; 162 ring_pair_cb++; 163 } 164 ··· 453 static void hns_ae_get_coalesce_usecs(struct hnae_handle *handle, 454 u32 *tx_usecs, u32 *rx_usecs) 455 { 456 + struct ring_pair_cb *ring_pair = 457 + container_of(handle->qs[0], struct ring_pair_cb, q); 458 459 + *tx_usecs = hns_rcb_get_coalesce_usecs(ring_pair->rcb_common, 460 + ring_pair->port_id_in_comm); 461 + *rx_usecs = hns_rcb_get_coalesce_usecs(ring_pair->rcb_common, 462 + ring_pair->port_id_in_comm); 463 } 464 465 static void hns_ae_get_rx_max_coalesced_frames(struct hnae_handle *handle, 466 u32 *tx_frames, u32 *rx_frames) 467 { 468 + struct ring_pair_cb *ring_pair = 469 + container_of(handle->qs[0], struct ring_pair_cb, q); 470 471 + *tx_frames = hns_rcb_get_coalesced_frames(ring_pair->rcb_common, 472 + ring_pair->port_id_in_comm); 473 + *rx_frames = hns_rcb_get_coalesced_frames(ring_pair->rcb_common, 474 + ring_pair->port_id_in_comm); 475 } 476 477 + static int hns_ae_set_coalesce_usecs(struct hnae_handle *handle, 478 + u32 timeout) 479 { 480 + struct ring_pair_cb *ring_pair = 481 + container_of(handle->qs[0], struct ring_pair_cb, q); 482 483 + return hns_rcb_set_coalesce_usecs( 484 + ring_pair->rcb_common, ring_pair->port_id_in_comm, timeout); 485 } 486 487 static int hns_ae_set_coalesce_frames(struct hnae_handle *handle, 488 u32 coalesce_frames) 489 { 490 + struct ring_pair_cb *ring_pair = 491 + container_of(handle->qs[0], struct ring_pair_cb, q); 492 493 + return hns_rcb_set_coalesced_frames( 494 + ring_pair->rcb_common, 495 + ring_pair->port_id_in_comm, coalesce_frames); 496 } 497 498 void hns_ae_update_stats(struct hnae_handle *handle,
+2 -1
drivers/net/ethernet/hisilicon/hns/hns_dsaf_gmac.c
··· 664 return; 665 666 for (i = 0; i < ARRAY_SIZE(g_gmac_stats_string); i++) { 667 - snprintf(buff, ETH_GSTRING_LEN, g_gmac_stats_string[i].desc); 668 buff = buff + ETH_GSTRING_LEN; 669 } 670 }
··· 664 return; 665 666 for (i = 0; i < ARRAY_SIZE(g_gmac_stats_string); i++) { 667 + snprintf(buff, ETH_GSTRING_LEN, "%s", 668 + g_gmac_stats_string[i].desc); 669 buff = buff + ETH_GSTRING_LEN; 670 } 671 }
+6 -6
drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.c
··· 2219 /* dsaf onode registers */ 2220 for (i = 0; i < DSAF_XOD_NUM; i++) { 2221 p[311 + i] = dsaf_read_dev(ddev, 2222 - DSAF_XOD_ETS_TSA_TC0_TC3_CFG_0_REG + j * 0x90); 2223 p[319 + i] = dsaf_read_dev(ddev, 2224 - DSAF_XOD_ETS_TSA_TC4_TC7_CFG_0_REG + j * 0x90); 2225 p[327 + i] = dsaf_read_dev(ddev, 2226 - DSAF_XOD_ETS_BW_TC0_TC3_CFG_0_REG + j * 0x90); 2227 p[335 + i] = dsaf_read_dev(ddev, 2228 - DSAF_XOD_ETS_BW_TC4_TC7_CFG_0_REG + j * 0x90); 2229 p[343 + i] = dsaf_read_dev(ddev, 2230 - DSAF_XOD_ETS_BW_OFFSET_CFG_0_REG + j * 0x90); 2231 p[351 + i] = dsaf_read_dev(ddev, 2232 - DSAF_XOD_ETS_TOKEN_CFG_0_REG + j * 0x90); 2233 } 2234 2235 p[359] = dsaf_read_dev(ddev, DSAF_XOD_PFS_CFG_0_0_REG + port * 0x90);
··· 2219 /* dsaf onode registers */ 2220 for (i = 0; i < DSAF_XOD_NUM; i++) { 2221 p[311 + i] = dsaf_read_dev(ddev, 2222 + DSAF_XOD_ETS_TSA_TC0_TC3_CFG_0_REG + i * 0x90); 2223 p[319 + i] = dsaf_read_dev(ddev, 2224 + DSAF_XOD_ETS_TSA_TC4_TC7_CFG_0_REG + i * 0x90); 2225 p[327 + i] = dsaf_read_dev(ddev, 2226 + DSAF_XOD_ETS_BW_TC0_TC3_CFG_0_REG + i * 0x90); 2227 p[335 + i] = dsaf_read_dev(ddev, 2228 + DSAF_XOD_ETS_BW_TC4_TC7_CFG_0_REG + i * 0x90); 2229 p[343 + i] = dsaf_read_dev(ddev, 2230 + DSAF_XOD_ETS_BW_OFFSET_CFG_0_REG + i * 0x90); 2231 p[351 + i] = dsaf_read_dev(ddev, 2232 + DSAF_XOD_ETS_TOKEN_CFG_0_REG + i * 0x90); 2233 } 2234 2235 p[359] = dsaf_read_dev(ddev, DSAF_XOD_PFS_CFG_0_0_REG + port * 0x90);
+24 -20
drivers/net/ethernet/hisilicon/hns/hns_dsaf_misc.c
··· 244 */ 245 phy_interface_t hns_mac_get_phy_if(struct hns_mac_cb *mac_cb) 246 { 247 - u32 hilink3_mode; 248 - u32 hilink4_mode; 249 void __iomem *sys_ctl_vaddr = mac_cb->sys_ctl_vaddr; 250 - int dev_id = mac_cb->mac_id; 251 phy_interface_t phy_if = PHY_INTERFACE_MODE_NA; 252 253 - hilink3_mode = dsaf_read_reg(sys_ctl_vaddr, HNS_MAC_HILINK3_REG); 254 - hilink4_mode = dsaf_read_reg(sys_ctl_vaddr, HNS_MAC_HILINK4_REG); 255 - if (dev_id >= 0 && dev_id <= 3) { 256 - if (hilink4_mode == 0) 257 - phy_if = PHY_INTERFACE_MODE_SGMII; 258 - else 259 - phy_if = PHY_INTERFACE_MODE_XGMII; 260 - } else if (dev_id >= 4 && dev_id <= 5) { 261 - if (hilink3_mode == 0) 262 - phy_if = PHY_INTERFACE_MODE_SGMII; 263 - else 264 - phy_if = PHY_INTERFACE_MODE_XGMII; 265 - } else { 266 phy_if = PHY_INTERFACE_MODE_SGMII; 267 } 268 - 269 - dev_dbg(mac_cb->dev, 270 - "hilink3_mode=%d, hilink4_mode=%d dev_id=%d, phy_if=%d\n", 271 - hilink3_mode, hilink4_mode, dev_id, phy_if); 272 return phy_if; 273 } 274
··· 244 */ 245 phy_interface_t hns_mac_get_phy_if(struct hns_mac_cb *mac_cb) 246 { 247 + u32 mode; 248 + u32 reg; 249 + u32 shift; 250 + bool is_ver1 = AE_IS_VER1(mac_cb->dsaf_dev->dsaf_ver); 251 void __iomem *sys_ctl_vaddr = mac_cb->sys_ctl_vaddr; 252 + int mac_id = mac_cb->mac_id; 253 phy_interface_t phy_if = PHY_INTERFACE_MODE_NA; 254 255 + if (is_ver1 && (mac_id >= 6 && mac_id <= 7)) { 256 phy_if = PHY_INTERFACE_MODE_SGMII; 257 + } else if (mac_id >= 0 && mac_id <= 3) { 258 + reg = is_ver1 ? HNS_MAC_HILINK4_REG : HNS_MAC_HILINK4V2_REG; 259 + mode = dsaf_read_reg(sys_ctl_vaddr, reg); 260 + /* mac_id 0, 1, 2, 3 ---> hilink4 lane 0, 1, 2, 3 */ 261 + shift = is_ver1 ? 0 : mac_id; 262 + if (dsaf_get_bit(mode, shift)) 263 + phy_if = PHY_INTERFACE_MODE_XGMII; 264 + else 265 + phy_if = PHY_INTERFACE_MODE_SGMII; 266 + } else if (mac_id >= 4 && mac_id <= 7) { 267 + reg = is_ver1 ? HNS_MAC_HILINK3_REG : HNS_MAC_HILINK3V2_REG; 268 + mode = dsaf_read_reg(sys_ctl_vaddr, reg); 269 + /* mac_id 4, 5, 6, 7 ---> hilink3 lane 2, 3, 0, 1 */ 270 + shift = is_ver1 ? 0 : mac_id <= 5 ? mac_id - 2 : mac_id - 6; 271 + if (dsaf_get_bit(mode, shift)) 272 + phy_if = PHY_INTERFACE_MODE_XGMII; 273 + else 274 + phy_if = PHY_INTERFACE_MODE_SGMII; 275 } 276 return phy_if; 277 } 278
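The reworked hns_mac_get_phy_if() maps each MAC onto a hilink lane and tests one bit of the mode register to choose XGMII or SGMII. The lane arithmetic is easy to misread, so here is a small sketch of just that decode, following the lane mapping in the patch comments; the v1 special case for MACs 6 and 7 is left out and the register value is invented:

#include <stdio.h>

/*
 * Illustrative decode of "which hilink lane does this MAC use"; the
 * mapping mirrors the comments in the patch, the register value is fake.
 */
static int mac_uses_xgmii(unsigned int mode, int mac_id, int is_ver1)
{
	unsigned int shift;

	if (mac_id <= 3)
		shift = is_ver1 ? 0 : mac_id;	/* hilink4 lanes 0-3 */
	else
		shift = is_ver1 ? 0 :
			(mac_id <= 5 ? mac_id - 2 : mac_id - 6); /* hilink3 */

	return (mode >> shift) & 1;
}

int main(void)
{
	unsigned int hilink3_mode = 0x0c;	/* lanes 2 and 3 in XGMII */

	printf("mac 4 xgmii: %d\n", mac_uses_xgmii(hilink3_mode, 4, 0));
	printf("mac 6 xgmii: %d\n", mac_uses_xgmii(hilink3_mode, 6, 0));
	return 0;
}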
+92 -104
drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.c
··· 215 dsaf_write_dev(q, RCB_RING_RX_RING_BD_LEN_REG, 216 bd_size_type); 217 dsaf_write_dev(q, RCB_RING_RX_RING_BD_NUM_REG, 218 - ring_pair->port_id_in_dsa); 219 dsaf_write_dev(q, RCB_RING_RX_RING_PKTLINE_REG, 220 - ring_pair->port_id_in_dsa); 221 } else { 222 dsaf_write_dev(q, RCB_RING_TX_RING_BASEADDR_L_REG, 223 (u32)dma); ··· 227 dsaf_write_dev(q, RCB_RING_TX_RING_BD_LEN_REG, 228 bd_size_type); 229 dsaf_write_dev(q, RCB_RING_TX_RING_BD_NUM_REG, 230 - ring_pair->port_id_in_dsa); 231 dsaf_write_dev(q, RCB_RING_TX_RING_PKTLINE_REG, 232 - ring_pair->port_id_in_dsa); 233 } 234 } 235 ··· 256 desc_cnt); 257 } 258 259 - /** 260 - *hns_rcb_set_port_coalesced_frames - set rcb port coalesced frames 261 - *@rcb_common: rcb_common device 262 - *@port_idx:port index 263 - *@coalesced_frames:BD num for coalesced frames 264 - */ 265 - static int hns_rcb_set_port_coalesced_frames(struct rcb_common_cb *rcb_common, 266 - u32 port_idx, 267 - u32 coalesced_frames) 268 { 269 - if (coalesced_frames >= rcb_common->desc_num || 270 - coalesced_frames > HNS_RCB_MAX_COALESCED_FRAMES) 271 - return -EINVAL; 272 - 273 - dsaf_write_dev(rcb_common, RCB_CFG_PKTLINE_REG + port_idx * 4, 274 - coalesced_frames); 275 - return 0; 276 - } 277 - 278 - /** 279 - *hns_rcb_get_port_coalesced_frames - set rcb port coalesced frames 280 - *@rcb_common: rcb_common device 281 - *@port_idx:port index 282 - * return coaleseced frames value 283 - */ 284 - static u32 hns_rcb_get_port_coalesced_frames(struct rcb_common_cb *rcb_common, 285 - u32 port_idx) 286 - { 287 - if (port_idx >= HNS_RCB_SERVICE_NW_ENGINE_NUM) 288 - port_idx = 0; 289 - 290 - return dsaf_read_dev(rcb_common, 291 - RCB_CFG_PKTLINE_REG + port_idx * 4); 292 - } 293 - 294 - /** 295 - *hns_rcb_set_timeout - set rcb port coalesced time_out 296 - *@rcb_common: rcb_common device 297 - *@time_out:time for coalesced time_out 298 - */ 299 - static void hns_rcb_set_timeout(struct rcb_common_cb *rcb_common, 300 - u32 timeout) 301 - { 302 - dsaf_write_dev(rcb_common, RCB_CFG_OVERTIME_REG, timeout); 303 } 304 305 static int hns_rcb_common_get_port_num(struct rcb_common_cb *rcb_common) ··· 327 328 for (i = 0; i < port_num; i++) { 329 hns_rcb_set_port_desc_cnt(rcb_common, i, rcb_common->desc_num); 330 - (void)hns_rcb_set_port_coalesced_frames( 331 - rcb_common, i, rcb_common->coalesced_frames); 332 } 333 - hns_rcb_set_timeout(rcb_common, rcb_common->timeout); 334 335 dsaf_write_dev(rcb_common, RCB_COM_CFG_ENDIAN_REG, 336 HNS_RCB_COMMON_ENDIAN); ··· 427 hns_rcb_ring_get_cfg(&ring_pair_cb->q, TX_RING); 428 } 429 430 - static int hns_rcb_get_port(struct rcb_common_cb *rcb_common, int ring_idx) 431 { 432 int comm_index = rcb_common->comm_index; 433 int port; ··· 438 q_num = (int)rcb_common->max_q_per_vf * rcb_common->max_vfn; 439 port = ring_idx / q_num; 440 } else { 441 - port = HNS_RCB_SERVICE_NW_ENGINE_NUM + comm_index - 1; 442 } 443 444 return port; ··· 486 ring_pair_cb->index = i; 487 ring_pair_cb->q.io_base = 488 RCB_COMM_BASE_TO_RING_BASE(rcb_common->io_base, i); 489 - ring_pair_cb->port_id_in_dsa = hns_rcb_get_port(rcb_common, i); 490 ring_pair_cb->virq[HNS_RCB_IRQ_IDX_TX] = 491 is_ver1 ? 
irq_of_parse_and_map(np, base_irq_idx + i * 2) : 492 platform_get_irq(pdev, base_irq_idx + i * 3 + 1); ··· 503 /** 504 *hns_rcb_get_coalesced_frames - get rcb port coalesced frames 505 *@rcb_common: rcb_common device 506 - *@comm_index:port index 507 - *return coalesced_frames 508 */ 509 - u32 hns_rcb_get_coalesced_frames(struct dsaf_device *dsaf_dev, int port) 510 { 511 - int comm_index = hns_dsaf_get_comm_idx_by_port(port); 512 - struct rcb_common_cb *rcb_comm = dsaf_dev->rcb_common[comm_index]; 513 - 514 - return hns_rcb_get_port_coalesced_frames(rcb_comm, port); 515 } 516 517 /** 518 *hns_rcb_get_coalesce_usecs - get rcb port coalesced time_out 519 *@rcb_common: rcb_common device 520 - *@comm_index:port index 521 - *return time_out 522 */ 523 - u32 hns_rcb_get_coalesce_usecs(struct dsaf_device *dsaf_dev, int comm_index) 524 { 525 - struct rcb_common_cb *rcb_comm = dsaf_dev->rcb_common[comm_index]; 526 - 527 - return rcb_comm->timeout; 528 } 529 530 /** 531 *hns_rcb_set_coalesce_usecs - set rcb port coalesced time_out 532 *@rcb_common: rcb_common device 533 - *@comm_index: comm :index 534 - *@etx_usecs:tx time for coalesced time_out 535 - *@rx_usecs:rx time for coalesced time_out 536 */ 537 - void hns_rcb_set_coalesce_usecs(struct dsaf_device *dsaf_dev, 538 - int port, u32 timeout) 539 { 540 - int comm_index = hns_dsaf_get_comm_idx_by_port(port); 541 - struct rcb_common_cb *rcb_comm = dsaf_dev->rcb_common[comm_index]; 542 543 - if (rcb_comm->timeout == timeout) 544 - return; 545 546 - if (comm_index == HNS_DSAF_COMM_SERVICE_NW_IDX) { 547 - dev_err(dsaf_dev->dev, 548 - "error: not support coalesce_usecs setting!\n"); 549 - return; 550 } 551 - rcb_comm->timeout = timeout; 552 - hns_rcb_set_timeout(rcb_comm, rcb_comm->timeout); 553 } 554 555 /** 556 *hns_rcb_set_coalesced_frames - set rcb coalesced frames 557 *@rcb_common: rcb_common device 558 - *@tx_frames:tx BD num for coalesced frames 559 - *@rx_frames:rx BD num for coalesced frames 560 - *Return 0 on success, negative on failure 561 */ 562 - int hns_rcb_set_coalesced_frames(struct dsaf_device *dsaf_dev, 563 - int port, u32 coalesced_frames) 564 { 565 - int comm_index = hns_dsaf_get_comm_idx_by_port(port); 566 - struct rcb_common_cb *rcb_comm = dsaf_dev->rcb_common[comm_index]; 567 - u32 coalesced_reg_val; 568 - int ret; 569 570 - coalesced_reg_val = hns_rcb_get_port_coalesced_frames(rcb_comm, port); 571 - 572 - if (coalesced_reg_val == coalesced_frames) 573 return 0; 574 575 - if (coalesced_frames >= HNS_RCB_MIN_COALESCED_FRAMES) { 576 - ret = hns_rcb_set_port_coalesced_frames(rcb_comm, port, 577 - coalesced_frames); 578 - return ret; 579 - } else { 580 return -EINVAL; 581 } 582 } 583 584 /** ··· 731 rcb_common->dsaf_dev = dsaf_dev; 732 733 rcb_common->desc_num = dsaf_dev->desc_num; 734 - rcb_common->coalesced_frames = HNS_RCB_DEF_COALESCED_FRAMES; 735 - rcb_common->timeout = HNS_RCB_MAX_TIME_OUT; 736 737 hns_rcb_get_queue_mode(dsaf_mode, comm_index, &max_vfn, &max_q_per_vf); 738 rcb_common->max_vfn = max_vfn; ··· 931 void hns_rcb_get_common_regs(struct rcb_common_cb *rcb_com, void *data) 932 { 933 u32 *regs = data; 934 u32 i = 0; 935 936 /*rcb common registers */ ··· 988 = dsaf_read_dev(rcb_com, RCB_CFG_PKTLINE_REG + 4 * i); 989 } 990 991 - regs[70] = dsaf_read_dev(rcb_com, RCB_CFG_OVERTIME_REG); 992 - regs[71] = dsaf_read_dev(rcb_com, RCB_CFG_PKTLINE_INT_NUM_REG); 993 - regs[72] = dsaf_read_dev(rcb_com, RCB_CFG_OVERTIME_INT_NUM_REG); 994 995 /* mark end of rcb common regs */ 996 - for (i = 73; i < 80; i++) 997 regs[i] = 0xcccccccc; 
998 } 999
··· 215 dsaf_write_dev(q, RCB_RING_RX_RING_BD_LEN_REG, 216 bd_size_type); 217 dsaf_write_dev(q, RCB_RING_RX_RING_BD_NUM_REG, 218 + ring_pair->port_id_in_comm); 219 dsaf_write_dev(q, RCB_RING_RX_RING_PKTLINE_REG, 220 + ring_pair->port_id_in_comm); 221 } else { 222 dsaf_write_dev(q, RCB_RING_TX_RING_BASEADDR_L_REG, 223 (u32)dma); ··· 227 dsaf_write_dev(q, RCB_RING_TX_RING_BD_LEN_REG, 228 bd_size_type); 229 dsaf_write_dev(q, RCB_RING_TX_RING_BD_NUM_REG, 230 + ring_pair->port_id_in_comm); 231 dsaf_write_dev(q, RCB_RING_TX_RING_PKTLINE_REG, 232 + ring_pair->port_id_in_comm); 233 } 234 } 235 ··· 256 desc_cnt); 257 } 258 259 + static void hns_rcb_set_port_timeout( 260 + struct rcb_common_cb *rcb_common, u32 port_idx, u32 timeout) 261 { 262 + if (AE_IS_VER1(rcb_common->dsaf_dev->dsaf_ver)) 263 + dsaf_write_dev(rcb_common, RCB_CFG_OVERTIME_REG, 264 + timeout * HNS_RCB_CLK_FREQ_MHZ); 265 + else 266 + dsaf_write_dev(rcb_common, 267 + RCB_PORT_CFG_OVERTIME_REG + port_idx * 4, 268 + timeout); 269 } 270 271 static int hns_rcb_common_get_port_num(struct rcb_common_cb *rcb_common) ··· 361 362 for (i = 0; i < port_num; i++) { 363 hns_rcb_set_port_desc_cnt(rcb_common, i, rcb_common->desc_num); 364 + (void)hns_rcb_set_coalesced_frames( 365 + rcb_common, i, HNS_RCB_DEF_COALESCED_FRAMES); 366 + hns_rcb_set_port_timeout( 367 + rcb_common, i, HNS_RCB_DEF_COALESCED_USECS); 368 } 369 370 dsaf_write_dev(rcb_common, RCB_COM_CFG_ENDIAN_REG, 371 HNS_RCB_COMMON_ENDIAN); ··· 460 hns_rcb_ring_get_cfg(&ring_pair_cb->q, TX_RING); 461 } 462 463 + static int hns_rcb_get_port_in_comm( 464 + struct rcb_common_cb *rcb_common, int ring_idx) 465 { 466 int comm_index = rcb_common->comm_index; 467 int port; ··· 470 q_num = (int)rcb_common->max_q_per_vf * rcb_common->max_vfn; 471 port = ring_idx / q_num; 472 } else { 473 + port = 0; /* config debug-ports port_id_in_comm to 0*/ 474 } 475 476 return port; ··· 518 ring_pair_cb->index = i; 519 ring_pair_cb->q.io_base = 520 RCB_COMM_BASE_TO_RING_BASE(rcb_common->io_base, i); 521 + ring_pair_cb->port_id_in_comm = 522 + hns_rcb_get_port_in_comm(rcb_common, i); 523 ring_pair_cb->virq[HNS_RCB_IRQ_IDX_TX] = 524 is_ver1 ? 
irq_of_parse_and_map(np, base_irq_idx + i * 2) : 525 platform_get_irq(pdev, base_irq_idx + i * 3 + 1); ··· 534 /** 535 *hns_rcb_get_coalesced_frames - get rcb port coalesced frames 536 *@rcb_common: rcb_common device 537 + *@port_idx:port id in comm 538 + * 539 + *Returns: coalesced_frames 540 */ 541 + u32 hns_rcb_get_coalesced_frames( 542 + struct rcb_common_cb *rcb_common, u32 port_idx) 543 { 544 + return dsaf_read_dev(rcb_common, RCB_CFG_PKTLINE_REG + port_idx * 4); 545 } 546 547 /** 548 *hns_rcb_get_coalesce_usecs - get rcb port coalesced time_out 549 *@rcb_common: rcb_common device 550 + *@port_idx:port id in comm 551 + * 552 + *Returns: time_out 553 */ 554 + u32 hns_rcb_get_coalesce_usecs( 555 + struct rcb_common_cb *rcb_common, u32 port_idx) 556 { 557 + if (AE_IS_VER1(rcb_common->dsaf_dev->dsaf_ver)) 558 + return dsaf_read_dev(rcb_common, RCB_CFG_OVERTIME_REG) / 559 + HNS_RCB_CLK_FREQ_MHZ; 560 + else 561 + return dsaf_read_dev(rcb_common, 562 + RCB_PORT_CFG_OVERTIME_REG + port_idx * 4); 563 } 564 565 /** 566 *hns_rcb_set_coalesce_usecs - set rcb port coalesced time_out 567 *@rcb_common: rcb_common device 568 + *@port_idx:port id in comm 569 + *@timeout:tx/rx time for coalesced time_out 570 + * 571 + * Returns: 572 + * Zero for success, or an error code in case of failure 573 */ 574 + int hns_rcb_set_coalesce_usecs( 575 + struct rcb_common_cb *rcb_common, u32 port_idx, u32 timeout) 576 { 577 + u32 old_timeout = hns_rcb_get_coalesce_usecs(rcb_common, port_idx); 578 579 + if (timeout == old_timeout) 580 + return 0; 581 582 + if (AE_IS_VER1(rcb_common->dsaf_dev->dsaf_ver)) { 583 + if (rcb_common->comm_index == HNS_DSAF_COMM_SERVICE_NW_IDX) { 584 + dev_err(rcb_common->dsaf_dev->dev, 585 + "error: not support coalesce_usecs setting!\n"); 586 + return -EINVAL; 587 + } 588 } 589 + if (timeout > HNS_RCB_MAX_COALESCED_USECS) { 590 + dev_err(rcb_common->dsaf_dev->dev, 591 + "error: not support coalesce %dus!\n", timeout); 592 + return -EINVAL; 593 + } 594 + hns_rcb_set_port_timeout(rcb_common, port_idx, timeout); 595 + return 0; 596 } 597 598 /** 599 *hns_rcb_set_coalesced_frames - set rcb coalesced frames 600 *@rcb_common: rcb_common device 601 + *@port_idx:port id in comm 602 + *@coalesced_frames:tx/rx BD num for coalesced frames 603 + * 604 + * Returns: 605 + * Zero for success, or an error code in case of failure 606 */ 607 + int hns_rcb_set_coalesced_frames( 608 + struct rcb_common_cb *rcb_common, u32 port_idx, u32 coalesced_frames) 609 { 610 + u32 old_waterline = hns_rcb_get_coalesced_frames(rcb_common, port_idx); 611 612 + if (coalesced_frames == old_waterline) 613 return 0; 614 615 + if (coalesced_frames >= rcb_common->desc_num || 616 + coalesced_frames > HNS_RCB_MAX_COALESCED_FRAMES || 617 + coalesced_frames < HNS_RCB_MIN_COALESCED_FRAMES) { 618 + dev_err(rcb_common->dsaf_dev->dev, 619 + "error: not support coalesce_frames setting!\n"); 620 return -EINVAL; 621 } 622 + 623 + dsaf_write_dev(rcb_common, RCB_CFG_PKTLINE_REG + port_idx * 4, 624 + coalesced_frames); 625 + return 0; 626 } 627 628 /** ··· 749 rcb_common->dsaf_dev = dsaf_dev; 750 751 rcb_common->desc_num = dsaf_dev->desc_num; 752 753 hns_rcb_get_queue_mode(dsaf_mode, comm_index, &max_vfn, &max_q_per_vf); 754 rcb_common->max_vfn = max_vfn; ··· 951 void hns_rcb_get_common_regs(struct rcb_common_cb *rcb_com, void *data) 952 { 953 u32 *regs = data; 954 + bool is_ver1 = AE_IS_VER1(rcb_com->dsaf_dev->dsaf_ver); 955 + bool is_dbg = (rcb_com->comm_index != HNS_DSAF_COMM_SERVICE_NW_IDX); 956 + u32 reg_tmp; 957 + u32 reg_num_tmp; 958 
u32 i = 0; 959 960 /*rcb common registers */ ··· 1004 = dsaf_read_dev(rcb_com, RCB_CFG_PKTLINE_REG + 4 * i); 1005 } 1006 1007 + reg_tmp = is_ver1 ? RCB_CFG_OVERTIME_REG : RCB_PORT_CFG_OVERTIME_REG; 1008 + reg_num_tmp = (is_ver1 || is_dbg) ? 1 : 6; 1009 + for (i = 0; i < reg_num_tmp; i++) 1010 + regs[70 + i] = dsaf_read_dev(rcb_com, reg_tmp); 1011 + 1012 + regs[76] = dsaf_read_dev(rcb_com, RCB_CFG_PKTLINE_INT_NUM_REG); 1013 + regs[77] = dsaf_read_dev(rcb_com, RCB_CFG_OVERTIME_INT_NUM_REG); 1014 1015 /* mark end of rcb common regs */ 1016 + for (i = 78; i < 80; i++) 1017 regs[i] = 0xcccccccc; 1018 } 1019
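The hunks above replace the per-dsaf-device coalesce helpers with per-port ones that validate their arguments and return an error code. A minimal sketch of how a caller might combine the two new setters (the wrapper name example_set_coalesce() is hypothetical, not part of the patch):

        static int example_set_coalesce(struct rcb_common_cb *rcb_common,
                                        u32 port_idx, u32 usecs, u32 frames)
        {
                int ret;

                /* both setters range-check their argument and may reject it */
                ret = hns_rcb_set_coalesce_usecs(rcb_common, port_idx, usecs);
                if (ret)
                        return ret;

                return hns_rcb_set_coalesced_frames(rcb_common, port_idx, frames);
        }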
+12 -11
drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.h
··· 38 #define HNS_RCB_MAX_COALESCED_FRAMES 1023 39 #define HNS_RCB_MIN_COALESCED_FRAMES 1 40 #define HNS_RCB_DEF_COALESCED_FRAMES 50 41 - #define HNS_RCB_MAX_TIME_OUT 0x500 42 43 #define HNS_RCB_COMMON_ENDIAN 1 44 ··· 84 85 int virq[HNS_RCB_IRQ_NUM_PER_QUEUE]; 86 87 - u8 port_id_in_dsa; 88 u8 used_by_vf; 89 90 struct hns_ring_hw_stats hw_stats; ··· 99 100 u8 comm_index; 101 u32 ring_num; 102 - u32 coalesced_frames; /* frames threshold of rx interrupt */ 103 - u32 timeout; /* time threshold of rx interrupt */ 104 u32 desc_num; /* desc num per queue*/ 105 106 struct ring_pair_cb ring_pair_cb[0]; ··· 125 void hns_rcb_init_hw(struct ring_pair_cb *ring); 126 void hns_rcb_reset_ring_hw(struct hnae_queue *q); 127 void hns_rcb_wait_fbd_clean(struct hnae_queue **qs, int q_num, u32 flag); 128 - 129 - u32 hns_rcb_get_coalesced_frames(struct dsaf_device *dsaf_dev, int comm_index); 130 - u32 hns_rcb_get_coalesce_usecs(struct dsaf_device *dsaf_dev, int comm_index); 131 - void hns_rcb_set_coalesce_usecs(struct dsaf_device *dsaf_dev, 132 - int comm_index, u32 timeout); 133 - int hns_rcb_set_coalesced_frames(struct dsaf_device *dsaf_dev, 134 - int comm_index, u32 coalesce_frames); 135 void hns_rcb_update_stats(struct hnae_queue *queue); 136 137 void hns_rcb_get_stats(struct hnae_queue *queue, u64 *data);
··· 38 #define HNS_RCB_MAX_COALESCED_FRAMES 1023 39 #define HNS_RCB_MIN_COALESCED_FRAMES 1 40 #define HNS_RCB_DEF_COALESCED_FRAMES 50 41 + #define HNS_RCB_CLK_FREQ_MHZ 350 42 + #define HNS_RCB_MAX_COALESCED_USECS 0x3ff 43 + #define HNS_RCB_DEF_COALESCED_USECS 3 44 45 #define HNS_RCB_COMMON_ENDIAN 1 46 ··· 82 83 int virq[HNS_RCB_IRQ_NUM_PER_QUEUE]; 84 85 + u8 port_id_in_comm; 86 u8 used_by_vf; 87 88 struct hns_ring_hw_stats hw_stats; ··· 97 98 u8 comm_index; 99 u32 ring_num; 100 u32 desc_num; /* desc num per queue*/ 101 102 struct ring_pair_cb ring_pair_cb[0]; ··· 125 void hns_rcb_init_hw(struct ring_pair_cb *ring); 126 void hns_rcb_reset_ring_hw(struct hnae_queue *q); 127 void hns_rcb_wait_fbd_clean(struct hnae_queue **qs, int q_num, u32 flag); 128 + u32 hns_rcb_get_coalesced_frames( 129 + struct rcb_common_cb *rcb_common, u32 port_idx); 130 + u32 hns_rcb_get_coalesce_usecs( 131 + struct rcb_common_cb *rcb_common, u32 port_idx); 132 + int hns_rcb_set_coalesce_usecs( 133 + struct rcb_common_cb *rcb_common, u32 port_idx, u32 timeout); 134 + int hns_rcb_set_coalesced_frames( 135 + struct rcb_common_cb *rcb_common, u32 port_idx, u32 coalesced_frames); 136 void hns_rcb_update_stats(struct hnae_queue *queue); 137 138 void hns_rcb_get_stats(struct hnae_queue *queue, u64 *data);
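A worked example of the conversion these new constants feed, assuming RCB_CFG_OVERTIME_REG counts clock ticks on v1 hardware (which is what the multiplication in hns_rcb_set_port_timeout() suggests):

        3 us * HNS_RCB_CLK_FREQ_MHZ (350) = 1050 written to RCB_CFG_OVERTIME_REG   (v1, shared)
        3 us written unchanged to RCB_PORT_CFG_OVERTIME_REG + 4 * port_idx         (v2, per port)

Values above HNS_RCB_MAX_COALESCED_USECS (0x3ff, i.e. 1023 us) are rejected by the setter with -EINVAL rather than clamped.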
+3
drivers/net/ethernet/hisilicon/hns/hns_dsaf_reg.h
··· 103 /*serdes offset**/ 104 #define HNS_MAC_HILINK3_REG DSAF_SUB_SC_HILINK3_CRG_CTRL0_REG 105 #define HNS_MAC_HILINK4_REG DSAF_SUB_SC_HILINK4_CRG_CTRL0_REG 106 #define HNS_MAC_LANE0_CTLEDFE_REG 0x000BFFCCULL 107 #define HNS_MAC_LANE1_CTLEDFE_REG 0x000BFFBCULL 108 #define HNS_MAC_LANE2_CTLEDFE_REG 0x000BFFACULL ··· 406 #define RCB_CFG_OVERTIME_REG 0x9300 407 #define RCB_CFG_PKTLINE_INT_NUM_REG 0x9304 408 #define RCB_CFG_OVERTIME_INT_NUM_REG 0x9308 409 410 #define RCB_RING_RX_RING_BASEADDR_L_REG 0x00000 411 #define RCB_RING_RX_RING_BASEADDR_H_REG 0x00004
··· 103 /*serdes offset**/ 104 #define HNS_MAC_HILINK3_REG DSAF_SUB_SC_HILINK3_CRG_CTRL0_REG 105 #define HNS_MAC_HILINK4_REG DSAF_SUB_SC_HILINK4_CRG_CTRL0_REG 106 + #define HNS_MAC_HILINK3V2_REG DSAF_SUB_SC_HILINK3_CRG_CTRL1_REG 107 + #define HNS_MAC_HILINK4V2_REG DSAF_SUB_SC_HILINK4_CRG_CTRL1_REG 108 #define HNS_MAC_LANE0_CTLEDFE_REG 0x000BFFCCULL 109 #define HNS_MAC_LANE1_CTLEDFE_REG 0x000BFFBCULL 110 #define HNS_MAC_LANE2_CTLEDFE_REG 0x000BFFACULL ··· 404 #define RCB_CFG_OVERTIME_REG 0x9300 405 #define RCB_CFG_PKTLINE_INT_NUM_REG 0x9304 406 #define RCB_CFG_OVERTIME_INT_NUM_REG 0x9308 407 + #define RCB_PORT_CFG_OVERTIME_REG 0x9430 408 409 #define RCB_RING_RX_RING_BASEADDR_L_REG 0x00000 410 #define RCB_RING_RX_RING_BASEADDR_H_REG 0x00004
+7 -9
drivers/net/ethernet/hisilicon/hns/hns_enet.c
··· 913 static void hns_nic_tx_fini_pro(struct hns_nic_ring_data *ring_data) 914 { 915 struct hnae_ring *ring = ring_data->ring; 916 - int head = ring->next_to_clean; 917 - 918 - /* for hardware bug fixed */ 919 - head = readl_relaxed(ring->io_base + RCB_REG_HEAD); 920 921 if (head != ring->next_to_clean) { 922 ring_data->ring->q->handle->dev->ops->toggle_ring_irq( ··· 956 napi_complete(napi); 957 ring_data->ring->q->handle->dev->ops->toggle_ring_irq( 958 ring_data->ring, 0); 959 - 960 - ring_data->fini_process(ring_data); 961 return 0; 962 } 963 ··· 1720 { 1721 struct hnae_handle *h = priv->ae_handle; 1722 struct hns_nic_ring_data *rd; 1723 int i; 1724 1725 if (h->q_num > NIC_MAX_Q_PER_VF) { ··· 1738 rd->queue_index = i; 1739 rd->ring = &h->qs[i]->tx_ring; 1740 rd->poll_one = hns_nic_tx_poll_one; 1741 - rd->fini_process = hns_nic_tx_fini_pro; 1742 1743 netif_napi_add(priv->netdev, &rd->napi, 1744 hns_nic_common_poll, NIC_TX_CLEAN_MAX_NUM); ··· 1750 rd->ring = &h->qs[i - h->q_num]->rx_ring; 1751 rd->poll_one = hns_nic_rx_poll_one; 1752 rd->ex_process = hns_nic_rx_up_pro; 1753 - rd->fini_process = hns_nic_rx_fini_pro; 1754 1755 netif_napi_add(priv->netdev, &rd->napi, 1756 hns_nic_common_poll, NIC_RX_CLEAN_MAX_NUM); ··· 1814 h = hnae_get_handle(&priv->netdev->dev, 1815 priv->ae_node, priv->port_id, NULL); 1816 if (IS_ERR_OR_NULL(h)) { 1817 - ret = PTR_ERR(h); 1818 dev_dbg(priv->dev, "has not handle, register notifier!\n"); 1819 goto out; 1820 }
··· 913 static void hns_nic_tx_fini_pro(struct hns_nic_ring_data *ring_data) 914 { 915 struct hnae_ring *ring = ring_data->ring; 916 + int head = readl_relaxed(ring->io_base + RCB_REG_HEAD); 917 918 if (head != ring->next_to_clean) { 919 ring_data->ring->q->handle->dev->ops->toggle_ring_irq( ··· 959 napi_complete(napi); 960 ring_data->ring->q->handle->dev->ops->toggle_ring_irq( 961 ring_data->ring, 0); 962 + if (ring_data->fini_process) 963 + ring_data->fini_process(ring_data); 964 return 0; 965 } 966 ··· 1723 { 1724 struct hnae_handle *h = priv->ae_handle; 1725 struct hns_nic_ring_data *rd; 1726 + bool is_ver1 = AE_IS_VER1(priv->enet_ver); 1727 int i; 1728 1729 if (h->q_num > NIC_MAX_Q_PER_VF) { ··· 1740 rd->queue_index = i; 1741 rd->ring = &h->qs[i]->tx_ring; 1742 rd->poll_one = hns_nic_tx_poll_one; 1743 + rd->fini_process = is_ver1 ? hns_nic_tx_fini_pro : NULL; 1744 1745 netif_napi_add(priv->netdev, &rd->napi, 1746 hns_nic_common_poll, NIC_TX_CLEAN_MAX_NUM); ··· 1752 rd->ring = &h->qs[i - h->q_num]->rx_ring; 1753 rd->poll_one = hns_nic_rx_poll_one; 1754 rd->ex_process = hns_nic_rx_up_pro; 1755 + rd->fini_process = is_ver1 ? hns_nic_rx_fini_pro : NULL; 1756 1757 netif_napi_add(priv->netdev, &rd->napi, 1758 hns_nic_common_poll, NIC_RX_CLEAN_MAX_NUM); ··· 1816 h = hnae_get_handle(&priv->netdev->dev, 1817 priv->ae_node, priv->port_id, NULL); 1818 if (IS_ERR_OR_NULL(h)) { 1819 + ret = -ENODEV; 1820 dev_dbg(priv->dev, "has not handle, register notifier!\n"); 1821 goto out; 1822 }
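Two details in this file are easy to miss: fini_process may now be NULL (it is only installed for enet v1), so the NAPI completion path guards the call, and the error path after hnae_get_handle() returns a fixed -ENODEV instead of PTR_ERR(h), presumably because the helper can also return NULL, for which PTR_ERR() yields 0 and would wrongly signal success. A small standalone demonstration of that last point, using simplified userspace copies of the kernel macros (demonstration only, not kernel code):

        #include <stdio.h>

        #define MAX_ERRNO       4095
        #define IS_ERR_VALUE(x) ((unsigned long)(x) >= (unsigned long)-MAX_ERRNO)

        static inline long PTR_ERR(const void *ptr)
        {
                return (long)ptr;
        }

        static inline int IS_ERR_OR_NULL(const void *ptr)
        {
                return !ptr || IS_ERR_VALUE((unsigned long)ptr);
        }

        int main(void)
        {
                void *h = NULL;

                if (IS_ERR_OR_NULL(h))
                        printf("PTR_ERR(h) = %ld\n", PTR_ERR(h)); /* prints 0 */
                return 0;
        }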
+6 -4
drivers/net/ethernet/hisilicon/hns/hns_ethtool.c
··· 794 (!ops->set_coalesce_frames)) 795 return -ESRCH; 796 797 - ops->set_coalesce_usecs(priv->ae_handle, 798 - ec->rx_coalesce_usecs); 799 800 ret = ops->set_coalesce_frames( 801 priv->ae_handle, ··· 1015 struct phy_device *phy_dev = priv->phy; 1016 1017 retval = phy_write(phy_dev, HNS_PHY_PAGE_REG, HNS_PHY_PAGE_LED); 1018 - retval = phy_write(phy_dev, HNS_LED_FC_REG, value); 1019 - retval = phy_write(phy_dev, HNS_PHY_PAGE_REG, HNS_PHY_PAGE_COPPER); 1020 if (retval) { 1021 netdev_err(netdev, "mdiobus_write fail !\n"); 1022 return retval;
··· 794 (!ops->set_coalesce_frames)) 795 return -ESRCH; 796 797 + ret = ops->set_coalesce_usecs(priv->ae_handle, 798 + ec->rx_coalesce_usecs); 799 + if (ret) 800 + return ret; 801 802 ret = ops->set_coalesce_frames( 803 priv->ae_handle, ··· 1013 struct phy_device *phy_dev = priv->phy; 1014 1015 retval = phy_write(phy_dev, HNS_PHY_PAGE_REG, HNS_PHY_PAGE_LED); 1016 + retval |= phy_write(phy_dev, HNS_LED_FC_REG, value); 1017 + retval |= phy_write(phy_dev, HNS_PHY_PAGE_REG, HNS_PHY_PAGE_COPPER); 1018 if (retval) { 1019 netdev_err(netdev, "mdiobus_write fail !\n"); 1020 return retval;
+5 -5
drivers/net/ethernet/intel/ixgbe/ixgbe.h
··· 661 #define IXGBE_FLAG2_RSS_FIELD_IPV6_UDP (u32)(1 << 9) 662 #define IXGBE_FLAG2_PTP_PPS_ENABLED (u32)(1 << 10) 663 #define IXGBE_FLAG2_PHY_INTERRUPT (u32)(1 << 11) 664 - #ifdef CONFIG_IXGBE_VXLAN 665 #define IXGBE_FLAG2_VXLAN_REREG_NEEDED BIT(12) 666 - #endif 667 #define IXGBE_FLAG2_VLAN_PROMISC BIT(13) 668 669 /* Tx fast path data */ ··· 672 /* Rx fast path data */ 673 int num_rx_queues; 674 u16 rx_itr_setting; 675 676 /* TX */ 677 struct ixgbe_ring *tx_ring[MAX_TX_QUEUES] ____cacheline_aligned_in_smp; ··· 783 u32 timer_event_accumulator; 784 u32 vferr_refcount; 785 struct ixgbe_mac_addr *mac_table; 786 - #ifdef CONFIG_IXGBE_VXLAN 787 - u16 vxlan_port; 788 - #endif 789 struct kobject *info_kobj; 790 #ifdef CONFIG_IXGBE_HWMON 791 struct hwmon_buff *ixgbe_hwmon_buff; ··· 877 extern char ixgbe_default_device_descr[]; 878 #endif /* IXGBE_FCOE */ 879 880 void ixgbe_up(struct ixgbe_adapter *adapter); 881 void ixgbe_down(struct ixgbe_adapter *adapter); 882 void ixgbe_reinit_locked(struct ixgbe_adapter *adapter);
··· 661 #define IXGBE_FLAG2_RSS_FIELD_IPV6_UDP (u32)(1 << 9) 662 #define IXGBE_FLAG2_PTP_PPS_ENABLED (u32)(1 << 10) 663 #define IXGBE_FLAG2_PHY_INTERRUPT (u32)(1 << 11) 664 #define IXGBE_FLAG2_VXLAN_REREG_NEEDED BIT(12) 665 #define IXGBE_FLAG2_VLAN_PROMISC BIT(13) 666 667 /* Tx fast path data */ ··· 674 /* Rx fast path data */ 675 int num_rx_queues; 676 u16 rx_itr_setting; 677 + 678 + /* Port number used to identify VXLAN traffic */ 679 + __be16 vxlan_port; 680 681 /* TX */ 682 struct ixgbe_ring *tx_ring[MAX_TX_QUEUES] ____cacheline_aligned_in_smp; ··· 782 u32 timer_event_accumulator; 783 u32 vferr_refcount; 784 struct ixgbe_mac_addr *mac_table; 785 struct kobject *info_kobj; 786 #ifdef CONFIG_IXGBE_HWMON 787 struct hwmon_buff *ixgbe_hwmon_buff; ··· 879 extern char ixgbe_default_device_descr[]; 880 #endif /* IXGBE_FCOE */ 881 882 + int ixgbe_open(struct net_device *netdev); 883 + int ixgbe_close(struct net_device *netdev); 884 void ixgbe_up(struct ixgbe_adapter *adapter); 885 void ixgbe_down(struct ixgbe_adapter *adapter); 886 void ixgbe_reinit_locked(struct ixgbe_adapter *adapter);
+2 -2
drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
··· 2053 2054 if (if_running) 2055 /* indicate we're in test mode */ 2056 - dev_close(netdev); 2057 else 2058 ixgbe_reset(adapter); 2059 ··· 2091 /* clear testing bit and return adapter to previous state */ 2092 clear_bit(__IXGBE_TESTING, &adapter->state); 2093 if (if_running) 2094 - dev_open(netdev); 2095 else if (hw->mac.ops.disable_tx_laser) 2096 hw->mac.ops.disable_tx_laser(hw); 2097 } else {
··· 2053 2054 if (if_running) 2055 /* indicate we're in test mode */ 2056 + ixgbe_close(netdev); 2057 else 2058 ixgbe_reset(adapter); 2059 ··· 2091 /* clear testing bit and return adapter to previous state */ 2092 clear_bit(__IXGBE_TESTING, &adapter->state); 2093 if (if_running) 2094 + ixgbe_open(netdev); 2095 else if (hw->mac.ops.disable_tx_laser) 2096 hw->mac.ops.disable_tx_laser(hw); 2097 } else {
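Running the offline self-test used to go through dev_close()/dev_open(), which clears IFF_UP and with it the interface's routes and addresses; calling the driver's own ixgbe_close()/ixgbe_open() (now exported via ixgbe.h above) resets the hardware without touching the netdev's administrative state. This works because the driver's net_device_ops already point at the same functions; roughly, as a sketch for orientation only (the real table lives in ixgbe_main.c):

        static const struct net_device_ops ixgbe_netdev_ops = {
                .ndo_open       = ixgbe_open,
                .ndo_stop       = ixgbe_close,
                /* ... */
        };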
+76 -89
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
··· 4531 case ixgbe_mac_X550: 4532 case ixgbe_mac_X550EM_x: 4533 IXGBE_WRITE_REG(&adapter->hw, IXGBE_VXLANCTRL, 0); 4534 - #ifdef CONFIG_IXGBE_VXLAN 4535 adapter->vxlan_port = 0; 4536 - #endif 4537 break; 4538 default: 4539 break; ··· 5992 * handler is registered with the OS, the watchdog timer is started, 5993 * and the stack is notified that the interface is ready. 5994 **/ 5995 - static int ixgbe_open(struct net_device *netdev) 5996 { 5997 struct ixgbe_adapter *adapter = netdev_priv(netdev); 5998 struct ixgbe_hw *hw = &adapter->hw; ··· 6094 * needs to be disabled. A global MAC reset is issued to stop the 6095 * hardware, and all transmit and receive resources are freed. 6096 **/ 6097 - static int ixgbe_close(struct net_device *netdev) 6098 { 6099 struct ixgbe_adapter *adapter = netdev_priv(netdev); 6100 ··· 7558 struct ipv6hdr *ipv6; 7559 } hdr; 7560 struct tcphdr *th; 7561 struct sk_buff *skb; 7562 - #ifdef CONFIG_IXGBE_VXLAN 7563 - u8 encap = false; 7564 - #endif /* CONFIG_IXGBE_VXLAN */ 7565 __be16 vlan_id; 7566 7567 /* if ring doesn't have a interrupt vector, cannot perform ATR */ 7568 if (!q_vector) ··· 7573 7574 ring->atr_count++; 7575 7576 /* snag network header to get L4 type and address */ 7577 skb = first->skb; 7578 hdr.network = skb_network_header(skb); 7579 - if (!skb->encapsulation) { 7580 - th = tcp_hdr(skb); 7581 - } else { 7582 #ifdef CONFIG_IXGBE_VXLAN 7583 struct ixgbe_adapter *adapter = q_vector->adapter; 7584 7585 - if (!adapter->vxlan_port) 7586 - return; 7587 - if (first->protocol != htons(ETH_P_IP) || 7588 - hdr.ipv4->version != IPVERSION || 7589 - hdr.ipv4->protocol != IPPROTO_UDP) { 7590 - return; 7591 - } 7592 - if (ntohs(udp_hdr(skb)->dest) != adapter->vxlan_port) 7593 - return; 7594 - encap = true; 7595 - hdr.network = skb_inner_network_header(skb); 7596 - th = inner_tcp_hdr(skb); 7597 - #else 7598 - return; 7599 - #endif /* CONFIG_IXGBE_VXLAN */ 7600 } 7601 7602 /* Currently only IPv4/IPv6 with TCP is supported */ 7603 switch (hdr.ipv4->version) { 7604 case IPVERSION: 7605 - if (hdr.ipv4->protocol != IPPROTO_TCP) 7606 - return; 7607 break; 7608 case 6: 7609 - if (likely((unsigned char *)th - hdr.network == 7610 - sizeof(struct ipv6hdr))) { 7611 - if (hdr.ipv6->nexthdr != IPPROTO_TCP) 7612 - return; 7613 - } else { 7614 - __be16 frag_off; 7615 - u8 l4_hdr; 7616 - 7617 - ipv6_skip_exthdr(skb, hdr.network - skb->data + 7618 - sizeof(struct ipv6hdr), 7619 - &l4_hdr, &frag_off); 7620 - if (unlikely(frag_off)) 7621 - return; 7622 - if (l4_hdr != IPPROTO_TCP) 7623 - return; 7624 - } 7625 break; 7626 default: 7627 return; 7628 } 7629 7630 - /* skip this packet since it is invalid or the socket is closing */ 7631 - if (!th || th->fin) 7632 return; 7633 7634 /* sample on all syn packets or once every atr sample count */ ··· 7667 break; 7668 } 7669 7670 - #ifdef CONFIG_IXGBE_VXLAN 7671 - if (encap) 7672 input.formatted.flow_type |= IXGBE_ATR_L4TYPE_TUNNEL_MASK; 7673 - #endif /* CONFIG_IXGBE_VXLAN */ 7674 7675 /* This assumes the Rx queue and Tx queue are bound to the same CPU */ 7676 ixgbe_fdir_add_signature_filter_82599(&q_vector->adapter->hw, ··· 8192 static int ixgbe_delete_clsu32(struct ixgbe_adapter *adapter, 8193 struct tc_cls_u32_offload *cls) 8194 { 8195 int err; 8196 8197 spin_lock(&adapter->fdir_perfect_lock); 8198 - err = ixgbe_update_ethtool_fdir_entry(adapter, NULL, cls->knode.handle); 8199 spin_unlock(&adapter->fdir_perfect_lock); 8200 return err; 8201 } ··· 8211 __be16 protocol, 8212 struct tc_cls_u32_offload *cls) 8213 { 8214 /* This ixgbe devices do 
not support hash tables at the moment 8215 * so abort when given hash tables. 8216 */ 8217 if (cls->hnode.divisor > 0) 8218 return -EINVAL; 8219 8220 - set_bit(TC_U32_USERHTID(cls->hnode.handle), &adapter->tables); 8221 return 0; 8222 } 8223 8224 static int ixgbe_configure_clsu32_del_hnode(struct ixgbe_adapter *adapter, 8225 struct tc_cls_u32_offload *cls) 8226 { 8227 - clear_bit(TC_U32_USERHTID(cls->hnode.handle), &adapter->tables); 8228 return 0; 8229 } 8230 ··· 8252 #endif 8253 int i, err = 0; 8254 u8 queue; 8255 - u32 handle; 8256 8257 memset(&mask, 0, sizeof(union ixgbe_atr_input)); 8258 - handle = cls->knode.handle; 8259 8260 - /* At the moment cls_u32 jumps to transport layer and skips past 8261 * L2 headers. The canonical method to match L2 frames is to use 8262 * negative values. However this is error prone at best but really 8263 * just broken because there is no way to "know" what sort of hdr 8264 - * is in front of the transport layer. Fix cls_u32 to support L2 8265 * headers when needed. 8266 */ 8267 if (protocol != htons(ETH_P_IP)) 8268 return -EINVAL; 8269 8270 - if (cls->knode.link_handle || 8271 - cls->knode.link_handle >= IXGBE_MAX_LINK_HANDLE) { 8272 struct ixgbe_nexthdr *nexthdr = ixgbe_ipv4_jumps; 8273 - u32 uhtid = TC_U32_USERHTID(cls->knode.link_handle); 8274 8275 - if (!test_bit(uhtid, &adapter->tables)) 8276 return -EINVAL; 8277 8278 for (i = 0; nexthdr[i].jump; i++) { ··· 8290 nexthdr->mask != cls->knode.sel->keys[0].mask) 8291 return -EINVAL; 8292 8293 - if (uhtid >= IXGBE_MAX_LINK_HANDLE) 8294 - return -EINVAL; 8295 - 8296 - adapter->jump_tables[uhtid] = nexthdr->jump; 8297 } 8298 return 0; 8299 } ··· 8307 * To add support for new nodes update ixgbe_model.h parse structures 8308 * this function _should_ be generic try not to hardcode values here. 
8309 */ 8310 - if (TC_U32_USERHTID(handle) == 0x800) { 8311 field_ptr = adapter->jump_tables[0]; 8312 } else { 8313 - if (TC_U32_USERHTID(handle) >= ARRAY_SIZE(adapter->jump_tables)) 8314 return -EINVAL; 8315 8316 - field_ptr = adapter->jump_tables[TC_U32_USERHTID(handle)]; 8317 } 8318 8319 if (!field_ptr) ··· 8331 int j; 8332 8333 for (j = 0; field_ptr[j].val; j++) { 8334 - if (field_ptr[j].off == off && 8335 - field_ptr[j].mask == m) { 8336 field_ptr[j].val(input, &mask, val, m); 8337 input->filter.formatted.flow_type |= 8338 field_ptr[j].type; ··· 8391 return -EINVAL; 8392 } 8393 8394 - int __ixgbe_setup_tc(struct net_device *dev, u32 handle, __be16 proto, 8395 - struct tc_to_netdev *tc) 8396 { 8397 struct ixgbe_adapter *adapter = netdev_priv(dev); 8398 ··· 8552 { 8553 struct ixgbe_adapter *adapter = netdev_priv(dev); 8554 struct ixgbe_hw *hw = &adapter->hw; 8555 - u16 new_port = ntohs(port); 8556 8557 if (!(adapter->flags & IXGBE_FLAG_VXLAN_OFFLOAD_CAPABLE)) 8558 return; ··· 8559 if (sa_family == AF_INET6) 8560 return; 8561 8562 - if (adapter->vxlan_port == new_port) 8563 return; 8564 8565 if (adapter->vxlan_port) { 8566 netdev_info(dev, 8567 "Hit Max num of VXLAN ports, not adding port %d\n", 8568 - new_port); 8569 return; 8570 } 8571 8572 - adapter->vxlan_port = new_port; 8573 - IXGBE_WRITE_REG(hw, IXGBE_VXLANCTRL, new_port); 8574 } 8575 8576 /** ··· 8583 __be16 port) 8584 { 8585 struct ixgbe_adapter *adapter = netdev_priv(dev); 8586 - u16 new_port = ntohs(port); 8587 8588 if (!(adapter->flags & IXGBE_FLAG_VXLAN_OFFLOAD_CAPABLE)) 8589 return; ··· 8590 if (sa_family == AF_INET6) 8591 return; 8592 8593 - if (adapter->vxlan_port != new_port) { 8594 netdev_info(dev, "Port %d was not found, not deleting\n", 8595 - new_port); 8596 return; 8597 } 8598 ··· 9261 netdev->priv_flags |= IFF_UNICAST_FLT; 9262 netdev->priv_flags |= IFF_SUPP_NOFCS; 9263 9264 - #ifdef CONFIG_IXGBE_VXLAN 9265 - switch (adapter->hw.mac.type) { 9266 - case ixgbe_mac_X550: 9267 - case ixgbe_mac_X550EM_x: 9268 - netdev->hw_enc_features |= NETIF_F_RXCSUM; 9269 - break; 9270 - default: 9271 - break; 9272 - } 9273 - #endif /* CONFIG_IXGBE_VXLAN */ 9274 - 9275 #ifdef CONFIG_IXGBE_DCB 9276 netdev->dcbnl_ops = &dcbnl_ops; 9277 #endif ··· 9314 goto err_sw_init; 9315 } 9316 9317 ixgbe_mac_set_default_filter(adapter); 9318 9319 setup_timer(&adapter->service_timer, &ixgbe_service_timer,
··· 4531 case ixgbe_mac_X550: 4532 case ixgbe_mac_X550EM_x: 4533 IXGBE_WRITE_REG(&adapter->hw, IXGBE_VXLANCTRL, 0); 4534 adapter->vxlan_port = 0; 4535 break; 4536 default: 4537 break; ··· 5994 * handler is registered with the OS, the watchdog timer is started, 5995 * and the stack is notified that the interface is ready. 5996 **/ 5997 + int ixgbe_open(struct net_device *netdev) 5998 { 5999 struct ixgbe_adapter *adapter = netdev_priv(netdev); 6000 struct ixgbe_hw *hw = &adapter->hw; ··· 6096 * needs to be disabled. A global MAC reset is issued to stop the 6097 * hardware, and all transmit and receive resources are freed. 6098 **/ 6099 + int ixgbe_close(struct net_device *netdev) 6100 { 6101 struct ixgbe_adapter *adapter = netdev_priv(netdev); 6102 ··· 7560 struct ipv6hdr *ipv6; 7561 } hdr; 7562 struct tcphdr *th; 7563 + unsigned int hlen; 7564 struct sk_buff *skb; 7565 __be16 vlan_id; 7566 + int l4_proto; 7567 7568 /* if ring doesn't have a interrupt vector, cannot perform ATR */ 7569 if (!q_vector) ··· 7576 7577 ring->atr_count++; 7578 7579 + /* currently only IPv4/IPv6 with TCP is supported */ 7580 + if ((first->protocol != htons(ETH_P_IP)) && 7581 + (first->protocol != htons(ETH_P_IPV6))) 7582 + return; 7583 + 7584 /* snag network header to get L4 type and address */ 7585 skb = first->skb; 7586 hdr.network = skb_network_header(skb); 7587 #ifdef CONFIG_IXGBE_VXLAN 7588 + if (skb->encapsulation && 7589 + first->protocol == htons(ETH_P_IP) && 7590 + hdr.ipv4->protocol != IPPROTO_UDP) { 7591 struct ixgbe_adapter *adapter = q_vector->adapter; 7592 7593 + /* verify the port is recognized as VXLAN */ 7594 + if (adapter->vxlan_port && 7595 + udp_hdr(skb)->dest == adapter->vxlan_port) 7596 + hdr.network = skb_inner_network_header(skb); 7597 } 7598 + #endif /* CONFIG_IXGBE_VXLAN */ 7599 7600 /* Currently only IPv4/IPv6 with TCP is supported */ 7601 switch (hdr.ipv4->version) { 7602 case IPVERSION: 7603 + /* access ihl as u8 to avoid unaligned access on ia64 */ 7604 + hlen = (hdr.network[0] & 0x0F) << 2; 7605 + l4_proto = hdr.ipv4->protocol; 7606 break; 7607 case 6: 7608 + hlen = hdr.network - skb->data; 7609 + l4_proto = ipv6_find_hdr(skb, &hlen, IPPROTO_TCP, NULL, NULL); 7610 + hlen -= hdr.network - skb->data; 7611 break; 7612 default: 7613 return; 7614 } 7615 7616 + if (l4_proto != IPPROTO_TCP) 7617 + return; 7618 + 7619 + th = (struct tcphdr *)(hdr.network + hlen); 7620 + 7621 + /* skip this packet since the socket is closing */ 7622 + if (th->fin) 7623 return; 7624 7625 /* sample on all syn packets or once every atr sample count */ ··· 7682 break; 7683 } 7684 7685 + if (hdr.network != skb_network_header(skb)) 7686 input.formatted.flow_type |= IXGBE_ATR_L4TYPE_TUNNEL_MASK; 7687 7688 /* This assumes the Rx queue and Tx queue are bound to the same CPU */ 7689 ixgbe_fdir_add_signature_filter_82599(&q_vector->adapter->hw, ··· 8209 static int ixgbe_delete_clsu32(struct ixgbe_adapter *adapter, 8210 struct tc_cls_u32_offload *cls) 8211 { 8212 + u32 uhtid = TC_U32_USERHTID(cls->knode.handle); 8213 + u32 loc; 8214 int err; 8215 8216 + if ((uhtid != 0x800) && (uhtid >= IXGBE_MAX_LINK_HANDLE)) 8217 + return -EINVAL; 8218 + 8219 + loc = cls->knode.handle & 0xfffff; 8220 + 8221 spin_lock(&adapter->fdir_perfect_lock); 8222 + err = ixgbe_update_ethtool_fdir_entry(adapter, NULL, loc); 8223 spin_unlock(&adapter->fdir_perfect_lock); 8224 return err; 8225 } ··· 8221 __be16 protocol, 8222 struct tc_cls_u32_offload *cls) 8223 { 8224 + u32 uhtid = TC_U32_USERHTID(cls->hnode.handle); 8225 + 8226 + if (uhtid >= 
IXGBE_MAX_LINK_HANDLE) 8227 + return -EINVAL; 8228 + 8229 /* This ixgbe devices do not support hash tables at the moment 8230 * so abort when given hash tables. 8231 */ 8232 if (cls->hnode.divisor > 0) 8233 return -EINVAL; 8234 8235 + set_bit(uhtid - 1, &adapter->tables); 8236 return 0; 8237 } 8238 8239 static int ixgbe_configure_clsu32_del_hnode(struct ixgbe_adapter *adapter, 8240 struct tc_cls_u32_offload *cls) 8241 { 8242 + u32 uhtid = TC_U32_USERHTID(cls->hnode.handle); 8243 + 8244 + if (uhtid >= IXGBE_MAX_LINK_HANDLE) 8245 + return -EINVAL; 8246 + 8247 + clear_bit(uhtid - 1, &adapter->tables); 8248 return 0; 8249 } 8250 ··· 8252 #endif 8253 int i, err = 0; 8254 u8 queue; 8255 + u32 uhtid, link_uhtid; 8256 8257 memset(&mask, 0, sizeof(union ixgbe_atr_input)); 8258 + uhtid = TC_U32_USERHTID(cls->knode.handle); 8259 + link_uhtid = TC_U32_USERHTID(cls->knode.link_handle); 8260 8261 + /* At the moment cls_u32 jumps to network layer and skips past 8262 * L2 headers. The canonical method to match L2 frames is to use 8263 * negative values. However this is error prone at best but really 8264 * just broken because there is no way to "know" what sort of hdr 8265 + * is in front of the network layer. Fix cls_u32 to support L2 8266 * headers when needed. 8267 */ 8268 if (protocol != htons(ETH_P_IP)) 8269 return -EINVAL; 8270 8271 + if (link_uhtid) { 8272 struct ixgbe_nexthdr *nexthdr = ixgbe_ipv4_jumps; 8273 8274 + if (link_uhtid >= IXGBE_MAX_LINK_HANDLE) 8275 + return -EINVAL; 8276 + 8277 + if (!test_bit(link_uhtid - 1, &adapter->tables)) 8278 return -EINVAL; 8279 8280 for (i = 0; nexthdr[i].jump; i++) { ··· 8288 nexthdr->mask != cls->knode.sel->keys[0].mask) 8289 return -EINVAL; 8290 8291 + adapter->jump_tables[link_uhtid] = nexthdr->jump; 8292 } 8293 return 0; 8294 } ··· 8308 * To add support for new nodes update ixgbe_model.h parse structures 8309 * this function _should_ be generic try not to hardcode values here. 
8310 */ 8311 + if (uhtid == 0x800) { 8312 field_ptr = adapter->jump_tables[0]; 8313 } else { 8314 + if (uhtid >= IXGBE_MAX_LINK_HANDLE) 8315 return -EINVAL; 8316 8317 + field_ptr = adapter->jump_tables[uhtid]; 8318 } 8319 8320 if (!field_ptr) ··· 8332 int j; 8333 8334 for (j = 0; field_ptr[j].val; j++) { 8335 + if (field_ptr[j].off == off) { 8336 field_ptr[j].val(input, &mask, val, m); 8337 input->filter.formatted.flow_type |= 8338 field_ptr[j].type; ··· 8393 return -EINVAL; 8394 } 8395 8396 + static int __ixgbe_setup_tc(struct net_device *dev, u32 handle, __be16 proto, 8397 + struct tc_to_netdev *tc) 8398 { 8399 struct ixgbe_adapter *adapter = netdev_priv(dev); 8400 ··· 8554 { 8555 struct ixgbe_adapter *adapter = netdev_priv(dev); 8556 struct ixgbe_hw *hw = &adapter->hw; 8557 8558 if (!(adapter->flags & IXGBE_FLAG_VXLAN_OFFLOAD_CAPABLE)) 8559 return; ··· 8562 if (sa_family == AF_INET6) 8563 return; 8564 8565 + if (adapter->vxlan_port == port) 8566 return; 8567 8568 if (adapter->vxlan_port) { 8569 netdev_info(dev, 8570 "Hit Max num of VXLAN ports, not adding port %d\n", 8571 + ntohs(port)); 8572 return; 8573 } 8574 8575 + adapter->vxlan_port = port; 8576 + IXGBE_WRITE_REG(hw, IXGBE_VXLANCTRL, ntohs(port)); 8577 } 8578 8579 /** ··· 8586 __be16 port) 8587 { 8588 struct ixgbe_adapter *adapter = netdev_priv(dev); 8589 8590 if (!(adapter->flags & IXGBE_FLAG_VXLAN_OFFLOAD_CAPABLE)) 8591 return; ··· 8594 if (sa_family == AF_INET6) 8595 return; 8596 8597 + if (adapter->vxlan_port != port) { 8598 netdev_info(dev, "Port %d was not found, not deleting\n", 8599 + ntohs(port)); 8600 return; 8601 } 8602 ··· 9265 netdev->priv_flags |= IFF_UNICAST_FLT; 9266 netdev->priv_flags |= IFF_SUPP_NOFCS; 9267 9268 #ifdef CONFIG_IXGBE_DCB 9269 netdev->dcbnl_ops = &dcbnl_ops; 9270 #endif ··· 9329 goto err_sw_init; 9330 } 9331 9332 + /* Set hw->mac.addr to permanent MAC address */ 9333 + ether_addr_copy(hw->mac.addr, hw->mac.perm_addr); 9334 ixgbe_mac_set_default_filter(adapter); 9335 9336 setup_timer(&adapter->service_timer, &ixgbe_service_timer,
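Much of the ATR rewrite above centres on parsing the headers without unaligned loads: the IPv4 header length is taken from the low nibble of the first header byte rather than through the bit-field, and IPv6 extension headers are skipped with ipv6_find_hdr(). A worked example of the IHL extraction (helper name hypothetical):

        /* the low nibble of byte 0 is the header length in 32-bit words, so a
         * standard IPv4 header (IHL = 5) gives (5 & 0x0F) << 2 = 20 bytes */
        static inline unsigned int example_ipv4_hlen(const unsigned char *network)
        {
                return (network[0] & 0x0F) << 2;
        }

Note also that adapter->vxlan_port is now stored as __be16, so the hot path compares udp_hdr(skb)->dest against it directly and the ntohs() conversions are confined to the netdev_info() messages.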
+6 -15
drivers/net/ethernet/intel/ixgbe/ixgbe_model.h
··· 32 33 struct ixgbe_mat_field { 34 unsigned int off; 35 - unsigned int mask; 36 int (*val)(struct ixgbe_fdir_filter *input, 37 union ixgbe_atr_input *mask, 38 u32 val, u32 m); ··· 57 } 58 59 static struct ixgbe_mat_field ixgbe_ipv4_fields[] = { 60 - { .off = 12, .mask = -1, .val = ixgbe_mat_prgm_sip, 61 .type = IXGBE_ATR_FLOW_TYPE_IPV4}, 62 - { .off = 16, .mask = -1, .val = ixgbe_mat_prgm_dip, 63 .type = IXGBE_ATR_FLOW_TYPE_IPV4}, 64 { .val = NULL } /* terminal node */ 65 }; 66 67 - static inline int ixgbe_mat_prgm_sport(struct ixgbe_fdir_filter *input, 68 union ixgbe_atr_input *mask, 69 u32 val, u32 m) 70 { 71 input->filter.formatted.src_port = val & 0xffff; 72 mask->formatted.src_port = m & 0xffff; 73 - return 0; 74 - }; 75 76 - static inline int ixgbe_mat_prgm_dport(struct ixgbe_fdir_filter *input, 77 - union ixgbe_atr_input *mask, 78 - u32 val, u32 m) 79 - { 80 - input->filter.formatted.dst_port = val & 0xffff; 81 - mask->formatted.dst_port = m & 0xffff; 82 return 0; 83 }; 84 85 static struct ixgbe_mat_field ixgbe_tcp_fields[] = { 86 - {.off = 0, .mask = 0xffff, .val = ixgbe_mat_prgm_sport, 87 - .type = IXGBE_ATR_FLOW_TYPE_TCPV4}, 88 - {.off = 2, .mask = 0xffff, .val = ixgbe_mat_prgm_dport, 89 .type = IXGBE_ATR_FLOW_TYPE_TCPV4}, 90 { .val = NULL } /* terminal node */ 91 };
··· 32 33 struct ixgbe_mat_field { 34 unsigned int off; 35 int (*val)(struct ixgbe_fdir_filter *input, 36 union ixgbe_atr_input *mask, 37 u32 val, u32 m); ··· 58 } 59 60 static struct ixgbe_mat_field ixgbe_ipv4_fields[] = { 61 + { .off = 12, .val = ixgbe_mat_prgm_sip, 62 .type = IXGBE_ATR_FLOW_TYPE_IPV4}, 63 + { .off = 16, .val = ixgbe_mat_prgm_dip, 64 .type = IXGBE_ATR_FLOW_TYPE_IPV4}, 65 { .val = NULL } /* terminal node */ 66 }; 67 68 + static inline int ixgbe_mat_prgm_ports(struct ixgbe_fdir_filter *input, 69 union ixgbe_atr_input *mask, 70 u32 val, u32 m) 71 { 72 input->filter.formatted.src_port = val & 0xffff; 73 mask->formatted.src_port = m & 0xffff; 74 + input->filter.formatted.dst_port = val >> 16; 75 + mask->formatted.dst_port = m >> 16; 76 77 return 0; 78 }; 79 80 static struct ixgbe_mat_field ixgbe_tcp_fields[] = { 81 + {.off = 0, .val = ixgbe_mat_prgm_ports, 82 .type = IXGBE_ATR_FLOW_TYPE_TCPV4}, 83 { .val = NULL } /* terminal node */ 84 };
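Dropping the per-field mask and folding source and destination port into one handler follows from how cls_u32 hands keys to the driver: each key is a 32-bit value/mask pair, so a single key at TCP offset 0 covers both 16-bit ports. A standalone restatement of the combined handler's split, assuming (as the hunk does) that the low half of the key carries the source port and the high half the destination port:

        static inline void example_split_ports(u32 val, u32 m,
                                               u16 *src, u16 *src_m,
                                               u16 *dst, u16 *dst_m)
        {
                *src = val & 0xffff;
                *src_m = m & 0xffff;
                *dst = val >> 16;
                *dst_m = m >> 16;
        }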
+1 -1
drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c
··· 355 command = IXGBE_READ_REG(hw, IXGBE_SB_IOSF_INDIRECT_CTRL); 356 if (!(command & IXGBE_SB_IOSF_CTRL_BUSY)) 357 break; 358 - usleep_range(10, 20); 359 } 360 if (ctrl) 361 *ctrl = command;
··· 355 command = IXGBE_READ_REG(hw, IXGBE_SB_IOSF_INDIRECT_CTRL); 356 if (!(command & IXGBE_SB_IOSF_CTRL_BUSY)) 357 break; 358 + udelay(10); 359 } 360 if (ctrl) 361 *ctrl = command;
+2 -2
drivers/net/ethernet/intel/ixgbevf/ethtool.c
··· 680 681 if (if_running) 682 /* indicate we're in test mode */ 683 - dev_close(netdev); 684 else 685 ixgbevf_reset(adapter); 686 ··· 692 693 clear_bit(__IXGBEVF_TESTING, &adapter->state); 694 if (if_running) 695 - dev_open(netdev); 696 } else { 697 hw_dbg(&adapter->hw, "online testing starting\n"); 698 /* Online tests */
··· 680 681 if (if_running) 682 /* indicate we're in test mode */ 683 + ixgbevf_close(netdev); 684 else 685 ixgbevf_reset(adapter); 686 ··· 692 693 clear_bit(__IXGBEVF_TESTING, &adapter->state); 694 if (if_running) 695 + ixgbevf_open(netdev); 696 } else { 697 hw_dbg(&adapter->hw, "online testing starting\n"); 698 /* Online tests */
+2
drivers/net/ethernet/intel/ixgbevf/ixgbevf.h
··· 486 extern const char ixgbevf_driver_name[]; 487 extern const char ixgbevf_driver_version[]; 488 489 void ixgbevf_up(struct ixgbevf_adapter *adapter); 490 void ixgbevf_down(struct ixgbevf_adapter *adapter); 491 void ixgbevf_reinit_locked(struct ixgbevf_adapter *adapter);
··· 486 extern const char ixgbevf_driver_name[]; 487 extern const char ixgbevf_driver_version[]; 488 489 + int ixgbevf_open(struct net_device *netdev); 490 + int ixgbevf_close(struct net_device *netdev); 491 void ixgbevf_up(struct ixgbevf_adapter *adapter); 492 void ixgbevf_down(struct ixgbevf_adapter *adapter); 493 void ixgbevf_reinit_locked(struct ixgbevf_adapter *adapter);
+10 -6
drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
··· 3122 * handler is registered with the OS, the watchdog timer is started, 3123 * and the stack is notified that the interface is ready. 3124 **/ 3125 - static int ixgbevf_open(struct net_device *netdev) 3126 { 3127 struct ixgbevf_adapter *adapter = netdev_priv(netdev); 3128 struct ixgbe_hw *hw = &adapter->hw; ··· 3205 * needs to be disabled. A global MAC reset is issued to stop the 3206 * hardware, and all transmit and receive resources are freed. 3207 **/ 3208 - static int ixgbevf_close(struct net_device *netdev) 3209 { 3210 struct ixgbevf_adapter *adapter = netdev_priv(netdev); 3211 ··· 3692 struct ixgbevf_adapter *adapter = netdev_priv(netdev); 3693 struct ixgbe_hw *hw = &adapter->hw; 3694 struct sockaddr *addr = p; 3695 3696 if (!is_valid_ether_addr(addr->sa_data)) 3697 return -EADDRNOTAVAIL; 3698 3699 - ether_addr_copy(netdev->dev_addr, addr->sa_data); 3700 - ether_addr_copy(hw->mac.addr, addr->sa_data); 3701 - 3702 spin_lock_bh(&adapter->mbx_lock); 3703 3704 - hw->mac.ops.set_rar(hw, 0, hw->mac.addr, 0); 3705 3706 spin_unlock_bh(&adapter->mbx_lock); 3707 3708 return 0; 3709 }
··· 3122 * handler is registered with the OS, the watchdog timer is started, 3123 * and the stack is notified that the interface is ready. 3124 **/ 3125 + int ixgbevf_open(struct net_device *netdev) 3126 { 3127 struct ixgbevf_adapter *adapter = netdev_priv(netdev); 3128 struct ixgbe_hw *hw = &adapter->hw; ··· 3205 * needs to be disabled. A global MAC reset is issued to stop the 3206 * hardware, and all transmit and receive resources are freed. 3207 **/ 3208 + int ixgbevf_close(struct net_device *netdev) 3209 { 3210 struct ixgbevf_adapter *adapter = netdev_priv(netdev); 3211 ··· 3692 struct ixgbevf_adapter *adapter = netdev_priv(netdev); 3693 struct ixgbe_hw *hw = &adapter->hw; 3694 struct sockaddr *addr = p; 3695 + int err; 3696 3697 if (!is_valid_ether_addr(addr->sa_data)) 3698 return -EADDRNOTAVAIL; 3699 3700 spin_lock_bh(&adapter->mbx_lock); 3701 3702 + err = hw->mac.ops.set_rar(hw, 0, addr->sa_data, 0); 3703 3704 spin_unlock_bh(&adapter->mbx_lock); 3705 + 3706 + if (err) 3707 + return -EPERM; 3708 + 3709 + ether_addr_copy(hw->mac.addr, addr->sa_data); 3710 + ether_addr_copy(netdev->dev_addr, addr->sa_data); 3711 3712 return 0; 3713 }
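The reworked ixgbevf_set_mac() pushes the new address to the PF via set_rar() before committing it locally, so an address the PF refuses is reported as -EPERM instead of silently appearing to take effect (the vf.c hunk below makes the mailbox NACK visible as an error in the first place). The general shape, with hypothetical names:

        /* validate, ask the controlling entity, and only then update local state */
        static int example_set_mac(struct net_device *netdev, void *p)
        {
                struct sockaddr *addr = p;

                if (!is_valid_ether_addr(addr->sa_data))
                        return -EADDRNOTAVAIL;

                if (example_push_mac(netdev, addr->sa_data))    /* hypothetical */
                        return -EPERM;

                ether_addr_copy(netdev->dev_addr, addr->sa_data);
                return 0;
        }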
+3 -1
drivers/net/ethernet/intel/ixgbevf/vf.c
··· 408 409 /* if nacked the address was rejected, use "perm_addr" */ 410 if (!ret_val && 411 - (msgbuf[0] == (IXGBE_VF_SET_MAC_ADDR | IXGBE_VT_MSGTYPE_NACK))) 412 ixgbevf_get_mac_addr_vf(hw, hw->mac.addr); 413 414 return ret_val; 415 }
··· 408 409 /* if nacked the address was rejected, use "perm_addr" */ 410 if (!ret_val && 411 + (msgbuf[0] == (IXGBE_VF_SET_MAC_ADDR | IXGBE_VT_MSGTYPE_NACK))) { 412 ixgbevf_get_mac_addr_vf(hw, hw->mac.addr); 413 + return IXGBE_ERR_MBX; 414 + } 415 416 return ret_val; 417 }
+17 -23
drivers/net/ethernet/marvell/mvneta.c
··· 260 261 #define MVNETA_VLAN_TAG_LEN 4 262 263 - #define MVNETA_CPU_D_CACHE_LINE_SIZE 32 264 #define MVNETA_TX_CSUM_DEF_SIZE 1600 265 #define MVNETA_TX_CSUM_MAX_SIZE 9800 266 #define MVNETA_ACC_MODE_EXT1 1 ··· 299 #define MVNETA_RX_PKT_SIZE(mtu) \ 300 ALIGN((mtu) + MVNETA_MH_SIZE + MVNETA_VLAN_TAG_LEN + \ 301 ETH_HLEN + ETH_FCS_LEN, \ 302 - MVNETA_CPU_D_CACHE_LINE_SIZE) 303 304 #define IS_TSO_HEADER(txq, addr) \ 305 ((addr >= txq->tso_hdrs_phys) && \ ··· 2763 if (rxq->descs == NULL) 2764 return -ENOMEM; 2765 2766 - BUG_ON(rxq->descs != 2767 - PTR_ALIGN(rxq->descs, MVNETA_CPU_D_CACHE_LINE_SIZE)); 2768 - 2769 rxq->last_desc = rxq->size - 1; 2770 2771 /* Set Rx descriptors queue starting address */ ··· 2832 &txq->descs_phys, GFP_KERNEL); 2833 if (txq->descs == NULL) 2834 return -ENOMEM; 2835 - 2836 - /* Make sure descriptor address is cache line size aligned */ 2837 - BUG_ON(txq->descs != 2838 - PTR_ALIGN(txq->descs, MVNETA_CPU_D_CACHE_LINE_SIZE)); 2839 2840 txq->last_desc = txq->size - 1; 2841 ··· 3042 return mtu; 3043 } 3044 3045 /* Change the device mtu */ 3046 static int mvneta_change_mtu(struct net_device *dev, int mtu) 3047 { ··· 3080 * reallocation of the queues 3081 */ 3082 mvneta_stop_dev(pp); 3083 3084 mvneta_cleanup_txqs(pp); 3085 mvneta_cleanup_rxqs(pp); ··· 3104 return ret; 3105 } 3106 3107 mvneta_start_dev(pp); 3108 mvneta_port_up(pp); 3109 ··· 3256 { 3257 phy_disconnect(pp->phy_dev); 3258 pp->phy_dev = NULL; 3259 - } 3260 - 3261 - static void mvneta_percpu_enable(void *arg) 3262 - { 3263 - struct mvneta_port *pp = arg; 3264 - 3265 - enable_percpu_irq(pp->dev->irq, IRQ_TYPE_NONE); 3266 - } 3267 - 3268 - static void mvneta_percpu_disable(void *arg) 3269 - { 3270 - struct mvneta_port *pp = arg; 3271 - 3272 - disable_percpu_irq(pp->dev->irq); 3273 } 3274 3275 /* Electing a CPU must be done in an atomic way: it should be done
··· 260 261 #define MVNETA_VLAN_TAG_LEN 4 262 263 #define MVNETA_TX_CSUM_DEF_SIZE 1600 264 #define MVNETA_TX_CSUM_MAX_SIZE 9800 265 #define MVNETA_ACC_MODE_EXT1 1 ··· 300 #define MVNETA_RX_PKT_SIZE(mtu) \ 301 ALIGN((mtu) + MVNETA_MH_SIZE + MVNETA_VLAN_TAG_LEN + \ 302 ETH_HLEN + ETH_FCS_LEN, \ 303 + cache_line_size()) 304 305 #define IS_TSO_HEADER(txq, addr) \ 306 ((addr >= txq->tso_hdrs_phys) && \ ··· 2764 if (rxq->descs == NULL) 2765 return -ENOMEM; 2766 2767 rxq->last_desc = rxq->size - 1; 2768 2769 /* Set Rx descriptors queue starting address */ ··· 2836 &txq->descs_phys, GFP_KERNEL); 2837 if (txq->descs == NULL) 2838 return -ENOMEM; 2839 2840 txq->last_desc = txq->size - 1; 2841 ··· 3050 return mtu; 3051 } 3052 3053 + static void mvneta_percpu_enable(void *arg) 3054 + { 3055 + struct mvneta_port *pp = arg; 3056 + 3057 + enable_percpu_irq(pp->dev->irq, IRQ_TYPE_NONE); 3058 + } 3059 + 3060 + static void mvneta_percpu_disable(void *arg) 3061 + { 3062 + struct mvneta_port *pp = arg; 3063 + 3064 + disable_percpu_irq(pp->dev->irq); 3065 + } 3066 + 3067 /* Change the device mtu */ 3068 static int mvneta_change_mtu(struct net_device *dev, int mtu) 3069 { ··· 3074 * reallocation of the queues 3075 */ 3076 mvneta_stop_dev(pp); 3077 + on_each_cpu(mvneta_percpu_disable, pp, true); 3078 3079 mvneta_cleanup_txqs(pp); 3080 mvneta_cleanup_rxqs(pp); ··· 3097 return ret; 3098 } 3099 3100 + on_each_cpu(mvneta_percpu_enable, pp, true); 3101 mvneta_start_dev(pp); 3102 mvneta_port_up(pp); 3103 ··· 3248 { 3249 phy_disconnect(pp->phy_dev); 3250 pp->phy_dev = NULL; 3251 } 3252 3253 /* Electing a CPU must be done in an atomic way: it should be done
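Besides replacing the hard-coded 32-byte cache-line constant with cache_line_size() and dropping the BUG_ON() alignment asserts (dma_alloc_coherent() memory is already suitably aligned), the MTU path now brackets the queue teardown and rebuild with per-CPU interrupt disable/enable so no CPU services an interrupt for a ring that is being freed. A condensed sketch of that bracketing, using the names from the hunks above (the queue-setup step is elided):

        mvneta_stop_dev(pp);
        on_each_cpu(mvneta_percpu_disable, pp, true);

        mvneta_cleanup_txqs(pp);
        mvneta_cleanup_rxqs(pp);

        /* ... reallocate rx/tx queues for the new MTU ... */

        on_each_cpu(mvneta_percpu_enable, pp, true);
        mvneta_start_dev(pp);
        mvneta_port_up(pp);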
+4 -14
drivers/net/ethernet/marvell/mvpp2.c
··· 321 /* Lbtd 802.3 type */ 322 #define MVPP2_IP_LBDT_TYPE 0xfffa 323 324 - #define MVPP2_CPU_D_CACHE_LINE_SIZE 32 325 #define MVPP2_TX_CSUM_MAX_SIZE 9800 326 327 /* Timeout constants */ ··· 376 377 #define MVPP2_RX_PKT_SIZE(mtu) \ 378 ALIGN((mtu) + MVPP2_MH_SIZE + MVPP2_VLAN_TAG_LEN + \ 379 - ETH_HLEN + ETH_FCS_LEN, MVPP2_CPU_D_CACHE_LINE_SIZE) 380 381 #define MVPP2_RX_BUF_SIZE(pkt_size) ((pkt_size) + NET_SKB_PAD) 382 #define MVPP2_RX_TOTAL_SIZE(buf_size) ((buf_size) + MVPP2_SKB_SHINFO_SIZE) ··· 4492 if (!aggr_txq->descs) 4493 return -ENOMEM; 4494 4495 - /* Make sure descriptor address is cache line size aligned */ 4496 - BUG_ON(aggr_txq->descs != 4497 - PTR_ALIGN(aggr_txq->descs, MVPP2_CPU_D_CACHE_LINE_SIZE)); 4498 - 4499 aggr_txq->last_desc = aggr_txq->size - 1; 4500 4501 /* Aggr TXQ no reset WA */ ··· 4520 &rxq->descs_phys, GFP_KERNEL); 4521 if (!rxq->descs) 4522 return -ENOMEM; 4523 - 4524 - BUG_ON(rxq->descs != 4525 - PTR_ALIGN(rxq->descs, MVPP2_CPU_D_CACHE_LINE_SIZE)); 4526 4527 rxq->last_desc = rxq->size - 1; 4528 ··· 4607 &txq->descs_phys, GFP_KERNEL); 4608 if (!txq->descs) 4609 return -ENOMEM; 4610 - 4611 - /* Make sure descriptor address is cache line size aligned */ 4612 - BUG_ON(txq->descs != 4613 - PTR_ALIGN(txq->descs, MVPP2_CPU_D_CACHE_LINE_SIZE)); 4614 4615 txq->last_desc = txq->size - 1; 4616 ··· 6047 6048 /* Map physical Rx queue to port's logical Rx queue */ 6049 rxq = devm_kzalloc(dev, sizeof(*rxq), GFP_KERNEL); 6050 - if (!rxq) 6051 goto err_free_percpu; 6052 /* Map this Rx queue to a physical queue */ 6053 rxq->id = port->first_rxq + queue; 6054 rxq->port = port->id;
··· 321 /* Lbtd 802.3 type */ 322 #define MVPP2_IP_LBDT_TYPE 0xfffa 323 324 #define MVPP2_TX_CSUM_MAX_SIZE 9800 325 326 /* Timeout constants */ ··· 377 378 #define MVPP2_RX_PKT_SIZE(mtu) \ 379 ALIGN((mtu) + MVPP2_MH_SIZE + MVPP2_VLAN_TAG_LEN + \ 380 + ETH_HLEN + ETH_FCS_LEN, cache_line_size()) 381 382 #define MVPP2_RX_BUF_SIZE(pkt_size) ((pkt_size) + NET_SKB_PAD) 383 #define MVPP2_RX_TOTAL_SIZE(buf_size) ((buf_size) + MVPP2_SKB_SHINFO_SIZE) ··· 4493 if (!aggr_txq->descs) 4494 return -ENOMEM; 4495 4496 aggr_txq->last_desc = aggr_txq->size - 1; 4497 4498 /* Aggr TXQ no reset WA */ ··· 4525 &rxq->descs_phys, GFP_KERNEL); 4526 if (!rxq->descs) 4527 return -ENOMEM; 4528 4529 rxq->last_desc = rxq->size - 1; 4530 ··· 4615 &txq->descs_phys, GFP_KERNEL); 4616 if (!txq->descs) 4617 return -ENOMEM; 4618 4619 txq->last_desc = txq->size - 1; 4620 ··· 6059 6060 /* Map physical Rx queue to port's logical Rx queue */ 6061 rxq = devm_kzalloc(dev, sizeof(*rxq), GFP_KERNEL); 6062 + if (!rxq) { 6063 + err = -ENOMEM; 6064 goto err_free_percpu; 6065 + } 6066 /* Map this Rx queue to a physical queue */ 6067 rxq->id = port->first_rxq + queue; 6068 rxq->port = port->id;
+1 -1
drivers/net/ethernet/qlogic/qed/qed_int.c
··· 2750 int qed_int_igu_enable(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, 2751 enum qed_int_mode int_mode) 2752 { 2753 - int rc; 2754 2755 /* Configure AEU signal change to produce attentions */ 2756 qed_wr(p_hwfn, p_ptt, IGU_REG_ATTENTION_ENABLE, 0);
··· 2750 int qed_int_igu_enable(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, 2751 enum qed_int_mode int_mode) 2752 { 2753 + int rc = 0; 2754 2755 /* Configure AEU signal change to produce attentions */ 2756 qed_wr(p_hwfn, p_ptt, IGU_REG_ATTENTION_ENABLE, 0);
+1 -1
drivers/net/ethernet/qlogic/qlge/qlge.h
··· 18 */ 19 #define DRV_NAME "qlge" 20 #define DRV_STRING "QLogic 10 Gigabit PCI-E Ethernet Driver " 21 - #define DRV_VERSION "1.00.00.34" 22 23 #define WQ_ADDR_ALIGN 0x3 /* 4 byte alignment */ 24
··· 18 */ 19 #define DRV_NAME "qlge" 20 #define DRV_STRING "QLogic 10 Gigabit PCI-E Ethernet Driver " 21 + #define DRV_VERSION "1.00.00.35" 22 23 #define WQ_ADDR_ALIGN 0x3 /* 4 byte alignment */ 24
+1 -1
drivers/net/ethernet/renesas/ravb_main.c
··· 1377 1378 /* TAG and timestamp required flag */ 1379 skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS; 1380 - skb_tx_timestamp(skb); 1381 desc->tagh_tsr = (ts_skb->tag >> 4) | TX_TSR; 1382 desc->ds_tagl |= le16_to_cpu(ts_skb->tag << 12); 1383 } 1384 1385 /* Descriptor type must be set after all the above writes */ 1386 dma_wmb(); 1387 desc->die_dt = DT_FEND;
··· 1377 1378 /* TAG and timestamp required flag */ 1379 skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS; 1380 desc->tagh_tsr = (ts_skb->tag >> 4) | TX_TSR; 1381 desc->ds_tagl |= le16_to_cpu(ts_skb->tag << 12); 1382 } 1383 1384 + skb_tx_timestamp(skb); 1385 /* Descriptor type must be set after all the above writes */ 1386 dma_wmb(); 1387 desc->die_dt = DT_FEND;
+2 -2
drivers/net/ethernet/samsung/sxgbe/sxgbe_platform.c
··· 155 return 0; 156 157 err_rx_irq_unmap: 158 - while (--i) 159 irq_dispose_mapping(priv->rxq[i]->irq_no); 160 i = SXGBE_TX_QUEUES; 161 err_tx_irq_unmap: 162 - while (--i) 163 irq_dispose_mapping(priv->txq[i]->irq_no); 164 irq_dispose_mapping(priv->irq); 165 err_drv_remove:
··· 155 return 0; 156 157 err_rx_irq_unmap: 158 + while (i--) 159 irq_dispose_mapping(priv->rxq[i]->irq_no); 160 i = SXGBE_TX_QUEUES; 161 err_tx_irq_unmap: 162 + while (i--) 163 irq_dispose_mapping(priv->txq[i]->irq_no); 164 irq_dispose_mapping(priv->irq); 165 err_drv_remove:
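The unwind loops were off by one: "while (--i)" never disposes entry 0 and, if the failure happened at i == 0, decrements past zero and touches index -1, whereas "while (i--)" visits exactly the entries 0..i-1 that were mapped before the failure. A tiny standalone illustration:

        #include <stdio.h>

        int main(void)
        {
                int i = 4;      /* e.g. four queues were set up before the failure */

                while (i--)     /* visits 3, 2, 1, 0; "while (--i)" would stop at 1 */
                        printf("dispose %d\n", i);
                return 0;
        }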
+8 -8
drivers/net/ethernet/stmicro/stmmac/norm_desc.c
··· 199 { 200 unsigned int tdes1 = p->des1; 201 202 - if (mode == STMMAC_CHAIN_MODE) 203 - norm_set_tx_desc_len_on_chain(p, len); 204 - else 205 - norm_set_tx_desc_len_on_ring(p, len); 206 - 207 if (is_fs) 208 tdes1 |= TDES1_FIRST_SEGMENT; 209 else ··· 212 if (ls) 213 tdes1 |= TDES1_LAST_SEGMENT; 214 215 - if (tx_own) 216 - tdes1 |= TDES0_OWN; 217 - 218 p->des1 = tdes1; 219 } 220 221 static void ndesc_set_tx_ic(struct dma_desc *p)
··· 199 { 200 unsigned int tdes1 = p->des1; 201 202 if (is_fs) 203 tdes1 |= TDES1_FIRST_SEGMENT; 204 else ··· 217 if (ls) 218 tdes1 |= TDES1_LAST_SEGMENT; 219 220 p->des1 = tdes1; 221 + 222 + if (mode == STMMAC_CHAIN_MODE) 223 + norm_set_tx_desc_len_on_chain(p, len); 224 + else 225 + norm_set_tx_desc_len_on_ring(p, len); 226 + 227 + if (tx_own) 228 + p->des0 |= TDES0_OWN; 229 } 230 231 static void ndesc_set_tx_ic(struct dma_desc *p)
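The reordering matters for two reasons. First, the length helpers appear to write into des1 as well, and the old sequence cached des1 into tdes1 before calling them, so the final "p->des1 = tdes1" could overwrite the just-programmed length bits; setting the length after the flag writes avoids that. Second, TDES0_OWN is a des0 bit, so OR-ing it into des1 could not have set the ownership bit; it is now written into des0, and only as the last step, once every other field is in place. The general rule, as a hedged sketch with hypothetical field names:

        /* fill all descriptor fields first, then give ownership to the
         * hardware as the very last store (with a barrier where the
         * surrounding code requires one) */
        desc->buf_addr = dma_addr;
        desc->len_flags = len | FIRST_SEGMENT | LAST_SEGMENT;
        dma_wmb();
        desc->own_status |= DESC_OWN;   /* hardware may now consume it */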
+5 -11
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 278 */ 279 bool stmmac_eee_init(struct stmmac_priv *priv) 280 { 281 - char *phy_bus_name = priv->plat->phy_bus_name; 282 unsigned long flags; 283 bool ret = false; 284 ··· 289 goto out; 290 291 /* Never init EEE in case of a switch is attached */ 292 - if (phy_bus_name && (!strcmp(phy_bus_name, "fixed"))) 293 goto out; 294 295 /* MAC core supports the EEE feature. */ ··· 826 phydev = of_phy_connect(dev, priv->plat->phy_node, 827 &stmmac_adjust_link, 0, interface); 828 } else { 829 - if (priv->plat->phy_bus_name) 830 - snprintf(bus_id, MII_BUS_ID_SIZE, "%s-%x", 831 - priv->plat->phy_bus_name, priv->plat->bus_id); 832 - else 833 - snprintf(bus_id, MII_BUS_ID_SIZE, "stmmac-%x", 834 - priv->plat->bus_id); 835 836 snprintf(phy_id_fmt, MII_BUS_ID_SIZE + 3, PHY_ID_FMT, bus_id, 837 priv->plat->phy_addr); ··· 866 } 867 868 /* If attached to a switch, there is no reason to poll phy handler */ 869 - if (priv->plat->phy_bus_name) 870 - if (!strcmp(priv->plat->phy_bus_name, "fixed")) 871 - phydev->irq = PHY_IGNORE_INTERRUPT; 872 873 pr_debug("stmmac_init_phy: %s: attached to PHY (UID 0x%x)" 874 " Link = %d\n", dev->name, phydev->phy_id, phydev->link);
··· 278 */ 279 bool stmmac_eee_init(struct stmmac_priv *priv) 280 { 281 unsigned long flags; 282 bool ret = false; 283 ··· 290 goto out; 291 292 /* Never init EEE in case of a switch is attached */ 293 + if (priv->phydev->is_pseudo_fixed_link) 294 goto out; 295 296 /* MAC core supports the EEE feature. */ ··· 827 phydev = of_phy_connect(dev, priv->plat->phy_node, 828 &stmmac_adjust_link, 0, interface); 829 } else { 830 + snprintf(bus_id, MII_BUS_ID_SIZE, "stmmac-%x", 831 + priv->plat->bus_id); 832 833 snprintf(phy_id_fmt, MII_BUS_ID_SIZE + 3, PHY_ID_FMT, bus_id, 834 priv->plat->phy_addr); ··· 871 } 872 873 /* If attached to a switch, there is no reason to poll phy handler */ 874 + if (phydev->is_pseudo_fixed_link) 875 + phydev->irq = PHY_IGNORE_INTERRUPT; 876 877 pr_debug("stmmac_init_phy: %s: attached to PHY (UID 0x%x)" 878 " Link = %d\n", dev->name, phydev->phy_id, phydev->link);
+1 -9
drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
··· 198 struct mii_bus *new_bus; 199 struct stmmac_priv *priv = netdev_priv(ndev); 200 struct stmmac_mdio_bus_data *mdio_bus_data = priv->plat->mdio_bus_data; 201 - int addr, found; 202 struct device_node *mdio_node = priv->plat->mdio_node; 203 204 if (!mdio_bus_data) 205 return 0; 206 - 207 - if (IS_ENABLED(CONFIG_OF)) { 208 - if (mdio_node) { 209 - netdev_dbg(ndev, "FOUND MDIO subnode\n"); 210 - } else { 211 - netdev_warn(ndev, "No MDIO subnode found\n"); 212 - } 213 - } 214 215 new_bus = mdiobus_alloc(); 216 if (new_bus == NULL)
··· 198 struct mii_bus *new_bus; 199 struct stmmac_priv *priv = netdev_priv(ndev); 200 struct stmmac_mdio_bus_data *mdio_bus_data = priv->plat->mdio_bus_data; 201 struct device_node *mdio_node = priv->plat->mdio_node; 202 + int addr, found; 203 204 if (!mdio_bus_data) 205 return 0; 206 207 new_bus = mdiobus_alloc(); 208 if (new_bus == NULL)
+66 -25
drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
··· 132 } 133 134 /** 135 * stmmac_probe_config_dt - parse device-tree driver parameters 136 * @pdev: platform_device structure 137 * @plat: driver data platform structure ··· 209 struct device_node *np = pdev->dev.of_node; 210 struct plat_stmmacenet_data *plat; 211 struct stmmac_dma_cfg *dma_cfg; 212 - struct device_node *child_node = NULL; 213 214 plat = devm_kzalloc(&pdev->dev, sizeof(*plat), GFP_KERNEL); 215 if (!plat) ··· 228 /* Default to phy auto-detection */ 229 plat->phy_addr = -1; 230 231 - /* If we find a phy-handle property, use it as the PHY */ 232 - plat->phy_node = of_parse_phandle(np, "phy-handle", 0); 233 - 234 - /* If phy-handle is not specified, check if we have a fixed-phy */ 235 - if (!plat->phy_node && of_phy_is_fixed_link(np)) { 236 - if ((of_phy_register_fixed_link(np) < 0)) 237 - return ERR_PTR(-ENODEV); 238 - 239 - plat->phy_node = of_node_get(np); 240 - } 241 - 242 - for_each_child_of_node(np, child_node) 243 - if (of_device_is_compatible(child_node, "snps,dwmac-mdio")) { 244 - plat->mdio_node = child_node; 245 - break; 246 - } 247 - 248 /* "snps,phy-addr" is not a standard property. Mark it as deprecated 249 * and warn of its use. Remove this when phy node support is added. 250 */ 251 if (of_property_read_u32(np, "snps,phy-addr", &plat->phy_addr) == 0) 252 dev_warn(&pdev->dev, "snps,phy-addr property is deprecated\n"); 253 254 - if ((plat->phy_node && !of_phy_is_fixed_link(np)) || !plat->mdio_node) 255 - plat->mdio_bus_data = NULL; 256 - else 257 - plat->mdio_bus_data = 258 - devm_kzalloc(&pdev->dev, 259 - sizeof(struct stmmac_mdio_bus_data), 260 - GFP_KERNEL); 261 262 of_property_read_u32(np, "tx-fifo-depth", &plat->tx_fifo_size); 263
··· 132 } 133 134 /** 135 + * stmmac_dt_phy - parse device-tree driver parameters to allocate PHY resources 136 + * @plat: driver data platform structure 137 + * @np: device tree node 138 + * @dev: device pointer 139 + * Description: 140 + * The mdio bus will be allocated in case of a phy transceiver is on board; 141 + * it will be NULL if the fixed-link is configured. 142 + * If there is the "snps,dwmac-mdio" sub-node the mdio will be allocated 143 + * in any case (for DSA, mdio must be registered even if fixed-link). 144 + * The table below sums the supported configurations: 145 + * ------------------------------- 146 + * snps,phy-addr | Y 147 + * ------------------------------- 148 + * phy-handle | Y 149 + * ------------------------------- 150 + * fixed-link | N 151 + * ------------------------------- 152 + * snps,dwmac-mdio | 153 + * even if | Y 154 + * fixed-link | 155 + * ------------------------------- 156 + * 157 + * It returns 0 in case of success otherwise -ENODEV. 158 + */ 159 + static int stmmac_dt_phy(struct plat_stmmacenet_data *plat, 160 + struct device_node *np, struct device *dev) 161 + { 162 + bool mdio = true; 163 + 164 + /* If phy-handle property is passed from DT, use it as the PHY */ 165 + plat->phy_node = of_parse_phandle(np, "phy-handle", 0); 166 + if (plat->phy_node) 167 + dev_dbg(dev, "Found phy-handle subnode\n"); 168 + 169 + /* If phy-handle is not specified, check if we have a fixed-phy */ 170 + if (!plat->phy_node && of_phy_is_fixed_link(np)) { 171 + if ((of_phy_register_fixed_link(np) < 0)) 172 + return -ENODEV; 173 + 174 + dev_dbg(dev, "Found fixed-link subnode\n"); 175 + plat->phy_node = of_node_get(np); 176 + mdio = false; 177 + } 178 + 179 + /* If snps,dwmac-mdio is passed from DT, always register the MDIO */ 180 + for_each_child_of_node(np, plat->mdio_node) { 181 + if (of_device_is_compatible(plat->mdio_node, "snps,dwmac-mdio")) 182 + break; 183 + } 184 + 185 + if (plat->mdio_node) { 186 + dev_dbg(dev, "Found MDIO subnode\n"); 187 + mdio = true; 188 + } 189 + 190 + if (mdio) 191 + plat->mdio_bus_data = 192 + devm_kzalloc(dev, sizeof(struct stmmac_mdio_bus_data), 193 + GFP_KERNEL); 194 + return 0; 195 + } 196 + 197 + /** 198 * stmmac_probe_config_dt - parse device-tree driver parameters 199 * @pdev: platform_device structure 200 * @plat: driver data platform structure ··· 146 struct device_node *np = pdev->dev.of_node; 147 struct plat_stmmacenet_data *plat; 148 struct stmmac_dma_cfg *dma_cfg; 149 150 plat = devm_kzalloc(&pdev->dev, sizeof(*plat), GFP_KERNEL); 151 if (!plat) ··· 166 /* Default to phy auto-detection */ 167 plat->phy_addr = -1; 168 169 /* "snps,phy-addr" is not a standard property. Mark it as deprecated 170 * and warn of its use. Remove this when phy node support is added. 171 */ 172 if (of_property_read_u32(np, "snps,phy-addr", &plat->phy_addr) == 0) 173 dev_warn(&pdev->dev, "snps,phy-addr property is deprecated\n"); 174 175 + /* To Configure PHY by using all device-tree supported properties */ 176 + if (stmmac_dt_phy(plat, np, &pdev->dev)) 177 + return ERR_PTR(-ENODEV); 178 179 of_property_read_u32(np, "tx-fifo-depth", &plat->tx_fifo_size); 180
+4
drivers/net/phy/bcm7xxx.c
··· 339 BCM7XXX_28NM_GPHY(PHY_ID_BCM7439, "Broadcom BCM7439"), 340 BCM7XXX_28NM_GPHY(PHY_ID_BCM7439_2, "Broadcom BCM7439 (2)"), 341 BCM7XXX_28NM_GPHY(PHY_ID_BCM7445, "Broadcom BCM7445"), 342 BCM7XXX_40NM_EPHY(PHY_ID_BCM7425, "Broadcom BCM7425"), 343 BCM7XXX_40NM_EPHY(PHY_ID_BCM7429, "Broadcom BCM7429"), 344 BCM7XXX_40NM_EPHY(PHY_ID_BCM7435, "Broadcom BCM7435"), ··· 350 { PHY_ID_BCM7250, 0xfffffff0, }, 351 { PHY_ID_BCM7364, 0xfffffff0, }, 352 { PHY_ID_BCM7366, 0xfffffff0, }, 353 { PHY_ID_BCM7425, 0xfffffff0, }, 354 { PHY_ID_BCM7429, 0xfffffff0, }, 355 { PHY_ID_BCM7439, 0xfffffff0, },
··· 339 BCM7XXX_28NM_GPHY(PHY_ID_BCM7439, "Broadcom BCM7439"), 340 BCM7XXX_28NM_GPHY(PHY_ID_BCM7439_2, "Broadcom BCM7439 (2)"), 341 BCM7XXX_28NM_GPHY(PHY_ID_BCM7445, "Broadcom BCM7445"), 342 + BCM7XXX_40NM_EPHY(PHY_ID_BCM7346, "Broadcom BCM7346"), 343 + BCM7XXX_40NM_EPHY(PHY_ID_BCM7362, "Broadcom BCM7362"), 344 BCM7XXX_40NM_EPHY(PHY_ID_BCM7425, "Broadcom BCM7425"), 345 BCM7XXX_40NM_EPHY(PHY_ID_BCM7429, "Broadcom BCM7429"), 346 BCM7XXX_40NM_EPHY(PHY_ID_BCM7435, "Broadcom BCM7435"), ··· 348 { PHY_ID_BCM7250, 0xfffffff0, }, 349 { PHY_ID_BCM7364, 0xfffffff0, }, 350 { PHY_ID_BCM7366, 0xfffffff0, }, 351 + { PHY_ID_BCM7346, 0xfffffff0, }, 352 + { PHY_ID_BCM7362, 0xfffffff0, }, 353 { PHY_ID_BCM7425, 0xfffffff0, }, 354 { PHY_ID_BCM7429, 0xfffffff0, }, 355 { PHY_ID_BCM7439, 0xfffffff0, },
+5
drivers/net/team/team.c
··· 1198 goto err_dev_open; 1199 } 1200 1201 err = vlan_vids_add_by_dev(port_dev, dev); 1202 if (err) { 1203 netdev_err(dev, "Failed to add vlan ids to device %s\n", ··· 1264 vlan_vids_del_by_dev(port_dev, dev); 1265 1266 err_vids_add: 1267 dev_close(port_dev); 1268 1269 err_dev_open:
··· 1198 goto err_dev_open; 1199 } 1200 1201 + dev_uc_sync_multiple(port_dev, dev); 1202 + dev_mc_sync_multiple(port_dev, dev); 1203 + 1204 err = vlan_vids_add_by_dev(port_dev, dev); 1205 if (err) { 1206 netdev_err(dev, "Failed to add vlan ids to device %s\n", ··· 1261 vlan_vids_del_by_dev(port_dev, dev); 1262 1263 err_vids_add: 1264 + dev_uc_unsync(port_dev, dev); 1265 + dev_mc_unsync(port_dev, dev); 1266 dev_close(port_dev); 1267 1268 err_dev_open:
+5 -3
drivers/net/tun.c
··· 622 623 /* Re-attach the filter to persist device */ 624 if (!skip_filter && (tun->filter_attached == true)) { 625 - err = sk_attach_filter(&tun->fprog, tfile->socket.sk); 626 if (!err) 627 goto out; 628 } ··· 1823 1824 for (i = 0; i < n; i++) { 1825 tfile = rtnl_dereference(tun->tfiles[i]); 1826 - sk_detach_filter(tfile->socket.sk); 1827 } 1828 1829 tun->filter_attached = false; ··· 1836 1837 for (i = 0; i < tun->numqueues; i++) { 1838 tfile = rtnl_dereference(tun->tfiles[i]); 1839 - ret = sk_attach_filter(&tun->fprog, tfile->socket.sk); 1840 if (ret) { 1841 tun_detach_filter(tun, i); 1842 return ret;
··· 622 623 /* Re-attach the filter to persist device */ 624 if (!skip_filter && (tun->filter_attached == true)) { 625 + err = __sk_attach_filter(&tun->fprog, tfile->socket.sk, 626 + lockdep_rtnl_is_held()); 627 if (!err) 628 goto out; 629 } ··· 1822 1823 for (i = 0; i < n; i++) { 1824 tfile = rtnl_dereference(tun->tfiles[i]); 1825 + __sk_detach_filter(tfile->socket.sk, lockdep_rtnl_is_held()); 1826 } 1827 1828 tun->filter_attached = false; ··· 1835 1836 for (i = 0; i < tun->numqueues; i++) { 1837 tfile = rtnl_dereference(tun->tfiles[i]); 1838 + ret = __sk_attach_filter(&tun->fprog, tfile->socket.sk, 1839 + lockdep_rtnl_is_held()); 1840 if (ret) { 1841 tun_detach_filter(tun, i); 1842 return ret;
+7
drivers/net/usb/cdc_ncm.c
··· 1626 .driver_info = (unsigned long) &wwan_info, 1627 }, 1628 1629 /* DW5812 LTE Verizon Mobile Broadband Card 1630 * Unlike DW5550 this device requires FLAG_NOARP 1631 */
··· 1626 .driver_info = (unsigned long) &wwan_info, 1627 }, 1628 1629 + /* Telit LE910 V2 */ 1630 + { USB_DEVICE_AND_INTERFACE_INFO(0x1bc7, 0x0036, 1631 + USB_CLASS_COMM, 1632 + USB_CDC_SUBCLASS_NCM, USB_CDC_PROTO_NONE), 1633 + .driver_info = (unsigned long)&wwan_noarp_info, 1634 + }, 1635 + 1636 /* DW5812 LTE Verizon Mobile Broadband Card 1637 * Unlike DW5550 this device requires FLAG_NOARP 1638 */
+1 -1
drivers/net/usb/plusb.c
··· 38 * HEADS UP: this handshaking isn't all that robust. This driver 39 * gets confused easily if you unplug one end of the cable then 40 * try to connect it again; you'll need to restart both ends. The 41 - * "naplink" software (used by some PlayStation/2 deveopers) does 42 * the handshaking much better! Also, sometimes this hardware 43 * seems to get wedged under load. Prolific docs are weak, and 44 * don't identify differences between PL2301 and PL2302, much less
··· 38 * HEADS UP: this handshaking isn't all that robust. This driver 39 * gets confused easily if you unplug one end of the cable then 40 * try to connect it again; you'll need to restart both ends. The 41 + * "naplink" software (used by some PlayStation/2 developers) does 42 * the handshaking much better! Also, sometimes this hardware 43 * seems to get wedged under load. Prolific docs are weak, and 44 * don't identify differences between PL2301 and PL2302, much less
+1
drivers/net/usb/qmi_wwan.c
··· 844 {QMI_FIXED_INTF(0x19d2, 0x1426, 2)}, /* ZTE MF91 */ 845 {QMI_FIXED_INTF(0x19d2, 0x1428, 2)}, /* Telewell TW-LTE 4G v2 */ 846 {QMI_FIXED_INTF(0x19d2, 0x2002, 4)}, /* ZTE (Vodafone) K3765-Z */ 847 {QMI_FIXED_INTF(0x0f3d, 0x68a2, 8)}, /* Sierra Wireless MC7700 */ 848 {QMI_FIXED_INTF(0x114f, 0x68a2, 8)}, /* Sierra Wireless MC7750 */ 849 {QMI_FIXED_INTF(0x1199, 0x68a2, 8)}, /* Sierra Wireless MC7710 in QMI mode */
··· 844 {QMI_FIXED_INTF(0x19d2, 0x1426, 2)}, /* ZTE MF91 */ 845 {QMI_FIXED_INTF(0x19d2, 0x1428, 2)}, /* Telewell TW-LTE 4G v2 */ 846 {QMI_FIXED_INTF(0x19d2, 0x2002, 4)}, /* ZTE (Vodafone) K3765-Z */ 847 + {QMI_FIXED_INTF(0x2001, 0x7e19, 4)}, /* D-Link DWM-221 B1 */ 848 {QMI_FIXED_INTF(0x0f3d, 0x68a2, 8)}, /* Sierra Wireless MC7700 */ 849 {QMI_FIXED_INTF(0x114f, 0x68a2, 8)}, /* Sierra Wireless MC7750 */ 850 {QMI_FIXED_INTF(0x1199, 0x68a2, 8)}, /* Sierra Wireless MC7710 in QMI mode */
+2 -2
drivers/nvdimm/pmem.c
··· 99 if (unlikely(bad_pmem)) 100 rc = -EIO; 101 else { 102 - memcpy_from_pmem(mem + off, pmem_addr, len); 103 flush_dcache_page(page); 104 } 105 } else { ··· 295 296 if (unlikely(is_bad_pmem(&pmem->bb, offset / 512, sz_align))) 297 return -EIO; 298 - memcpy_from_pmem(buf, pmem->virt_addr + offset, size); 299 } else { 300 memcpy_to_pmem(pmem->virt_addr + offset, buf, size); 301 wmb_pmem();
··· 99 if (unlikely(bad_pmem)) 100 rc = -EIO; 101 else { 102 + rc = memcpy_from_pmem(mem + off, pmem_addr, len); 103 flush_dcache_page(page); 104 } 105 } else { ··· 295 296 if (unlikely(is_bad_pmem(&pmem->bb, offset / 512, sz_align))) 297 return -EIO; 298 + return memcpy_from_pmem(buf, pmem->virt_addr + offset, size); 299 } else { 300 memcpy_to_pmem(pmem->virt_addr + offset, buf, size); 301 wmb_pmem();
+1 -2
drivers/platform/goldfish/goldfish_pipe.c
··· 309 * much memory to the process. 310 */ 311 down_read(&current->mm->mmap_sem); 312 - ret = get_user_pages(current, current->mm, address, 1, 313 - !is_write, 0, &page, NULL); 314 up_read(&current->mm->mmap_sem); 315 if (ret < 0) 316 break;
··· 309 * much memory to the process. 310 */ 311 down_read(&current->mm->mmap_sem); 312 + ret = get_user_pages(address, 1, !is_write, 0, &page, NULL); 313 up_read(&current->mm->mmap_sem); 314 if (ret < 0) 315 break;
+1 -1
drivers/rapidio/devices/rio_mport_cdev.c
··· 886 } 887 888 down_read(&current->mm->mmap_sem); 889 - pinned = get_user_pages(current, current->mm, 890 (unsigned long)xfer->loc_addr & PAGE_MASK, 891 nr_pages, dir == DMA_FROM_DEVICE, 0, 892 page_list, NULL);
··· 886 } 887 888 down_read(&current->mm->mmap_sem); 889 + pinned = get_user_pages( 890 (unsigned long)xfer->loc_addr & PAGE_MASK, 891 nr_pages, dir == DMA_FROM_DEVICE, 0, 892 page_list, NULL);
+2 -2
drivers/remoteproc/st_remoteproc.c
··· 189 } 190 191 ddata->boot_base = syscon_regmap_lookup_by_phandle(np, "st,syscfg"); 192 - if (!ddata->boot_base) { 193 dev_err(dev, "Boot base not found\n"); 194 - return -EINVAL; 195 } 196 197 err = of_property_read_u32_index(np, "st,syscfg", 1,
··· 189 } 190 191 ddata->boot_base = syscon_regmap_lookup_by_phandle(np, "st,syscfg"); 192 + if (IS_ERR(ddata->boot_base)) { 193 dev_err(dev, "Boot base not found\n"); 194 + return PTR_ERR(ddata->boot_base); 195 } 196 197 err = of_property_read_u32_index(np, "st,syscfg", 1,
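syscon_regmap_lookup_by_phandle() reports failure through an ERR_PTR-encoded pointer rather than NULL, so the NULL test removed above could never fire. A simplified userspace model of that encoding (the idea behind <linux/err.h>, not the kernel header itself) shows why IS_ERR()/PTR_ERR() is the appropriate check:

#include <errno.h>
#include <stdio.h>

/* Simplified model: error codes are stashed in the top 4095 pointer values,
 * so a failed lookup returns a non-NULL pointer that carries a negative errno. */
#define MAX_ERRNO       4095

static inline void *ERR_PTR(long error)         { return (void *)error; }
static inline long PTR_ERR(const void *ptr)     { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
        return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

static void *lookup_regmap(int available)
{
        static int dummy_regmap;

        return available ? (void *)&dummy_regmap : ERR_PTR(-ENODEV);
}

int main(void)
{
        void *map = lookup_regmap(0);

        if (IS_ERR(map))        /* a NULL check would sail past this failure */
                printf("lookup failed: %ld\n", PTR_ERR(map));
        return 0;
}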
+53 -173
drivers/s390/block/dasd_alias.c
··· 317 struct alias_pav_group *group; 318 struct dasd_uid uid; 319 320 private->uid.type = lcu->uac->unit[private->uid.real_unit_addr].ua_type; 321 private->uid.base_unit_addr = 322 lcu->uac->unit[private->uid.real_unit_addr].base_ua; 323 uid = private->uid; 324 - 325 /* if we have no PAV anyway, we don't need to bother with PAV groups */ 326 if (lcu->pav == NO_PAV) { 327 list_move(&device->alias_list, &lcu->active_devices); 328 return 0; 329 } 330 - 331 group = _find_group(lcu, &uid); 332 if (!group) { 333 group = kzalloc(sizeof(*group), GFP_ATOMIC); ··· 395 return 1; 396 397 return 0; 398 - } 399 - 400 - /* 401 - * This function tries to lock all devices on an lcu via trylock 402 - * return NULL on success otherwise return first failed device 403 - */ 404 - static struct dasd_device *_trylock_all_devices_on_lcu(struct alias_lcu *lcu, 405 - struct dasd_device *pos) 406 - 407 - { 408 - struct alias_pav_group *pavgroup; 409 - struct dasd_device *device; 410 - 411 - list_for_each_entry(device, &lcu->active_devices, alias_list) { 412 - if (device == pos) 413 - continue; 414 - if (!spin_trylock(get_ccwdev_lock(device->cdev))) 415 - return device; 416 - } 417 - list_for_each_entry(device, &lcu->inactive_devices, alias_list) { 418 - if (device == pos) 419 - continue; 420 - if (!spin_trylock(get_ccwdev_lock(device->cdev))) 421 - return device; 422 - } 423 - list_for_each_entry(pavgroup, &lcu->grouplist, group) { 424 - list_for_each_entry(device, &pavgroup->baselist, alias_list) { 425 - if (device == pos) 426 - continue; 427 - if (!spin_trylock(get_ccwdev_lock(device->cdev))) 428 - return device; 429 - } 430 - list_for_each_entry(device, &pavgroup->aliaslist, alias_list) { 431 - if (device == pos) 432 - continue; 433 - if (!spin_trylock(get_ccwdev_lock(device->cdev))) 434 - return device; 435 - } 436 - } 437 - return NULL; 438 - } 439 - 440 - /* 441 - * unlock all devices except the one that is specified as pos 442 - * stop if enddev is specified and reached 443 - */ 444 - static void _unlock_all_devices_on_lcu(struct alias_lcu *lcu, 445 - struct dasd_device *pos, 446 - struct dasd_device *enddev) 447 - 448 - { 449 - struct alias_pav_group *pavgroup; 450 - struct dasd_device *device; 451 - 452 - list_for_each_entry(device, &lcu->active_devices, alias_list) { 453 - if (device == pos) 454 - continue; 455 - if (device == enddev) 456 - return; 457 - spin_unlock(get_ccwdev_lock(device->cdev)); 458 - } 459 - list_for_each_entry(device, &lcu->inactive_devices, alias_list) { 460 - if (device == pos) 461 - continue; 462 - if (device == enddev) 463 - return; 464 - spin_unlock(get_ccwdev_lock(device->cdev)); 465 - } 466 - list_for_each_entry(pavgroup, &lcu->grouplist, group) { 467 - list_for_each_entry(device, &pavgroup->baselist, alias_list) { 468 - if (device == pos) 469 - continue; 470 - if (device == enddev) 471 - return; 472 - spin_unlock(get_ccwdev_lock(device->cdev)); 473 - } 474 - list_for_each_entry(device, &pavgroup->aliaslist, alias_list) { 475 - if (device == pos) 476 - continue; 477 - if (device == enddev) 478 - return; 479 - spin_unlock(get_ccwdev_lock(device->cdev)); 480 - } 481 - } 482 - } 483 - 484 - /* 485 - * this function is needed because the locking order 486 - * device lock -> lcu lock 487 - * needs to be assured when iterating over devices in an LCU 488 - * 489 - * if a device is specified in pos then the device lock is already hold 490 - */ 491 - static void _trylock_and_lock_lcu_irqsave(struct alias_lcu *lcu, 492 - struct dasd_device *pos, 493 - unsigned long *flags) 494 - { 495 - 
struct dasd_device *failed; 496 - 497 - do { 498 - spin_lock_irqsave(&lcu->lock, *flags); 499 - failed = _trylock_all_devices_on_lcu(lcu, pos); 500 - if (failed) { 501 - _unlock_all_devices_on_lcu(lcu, pos, failed); 502 - spin_unlock_irqrestore(&lcu->lock, *flags); 503 - cpu_relax(); 504 - } 505 - } while (failed); 506 - } 507 - 508 - static void _trylock_and_lock_lcu(struct alias_lcu *lcu, 509 - struct dasd_device *pos) 510 - { 511 - struct dasd_device *failed; 512 - 513 - do { 514 - spin_lock(&lcu->lock); 515 - failed = _trylock_all_devices_on_lcu(lcu, pos); 516 - if (failed) { 517 - _unlock_all_devices_on_lcu(lcu, pos, failed); 518 - spin_unlock(&lcu->lock); 519 - cpu_relax(); 520 - } 521 - } while (failed); 522 } 523 524 static int read_unit_address_configuration(struct dasd_device *device, ··· 491 if (rc) 492 return rc; 493 494 - _trylock_and_lock_lcu_irqsave(lcu, NULL, &flags); 495 lcu->pav = NO_PAV; 496 for (i = 0; i < MAX_DEVICES_PER_LCU; ++i) { 497 switch (lcu->uac->unit[i].ua_type) { ··· 510 alias_list) { 511 _add_device_to_lcu(lcu, device, refdev); 512 } 513 - _unlock_all_devices_on_lcu(lcu, NULL, NULL); 514 spin_unlock_irqrestore(&lcu->lock, flags); 515 return 0; 516 } ··· 597 598 lcu = private->lcu; 599 rc = 0; 600 - spin_lock_irqsave(get_ccwdev_lock(device->cdev), flags); 601 - spin_lock(&lcu->lock); 602 if (!(lcu->flags & UPDATE_PENDING)) { 603 rc = _add_device_to_lcu(lcu, device, device); 604 if (rc) ··· 607 list_move(&device->alias_list, &lcu->active_devices); 608 _schedule_lcu_update(lcu, device); 609 } 610 - spin_unlock(&lcu->lock); 611 - spin_unlock_irqrestore(get_ccwdev_lock(device->cdev), flags); 612 return rc; 613 } 614 ··· 806 struct alias_pav_group *pavgroup; 807 struct dasd_device *device; 808 809 - list_for_each_entry(device, &lcu->active_devices, alias_list) 810 dasd_device_set_stop_bits(device, DASD_STOPPED_SU); 811 - list_for_each_entry(device, &lcu->inactive_devices, alias_list) 812 dasd_device_set_stop_bits(device, DASD_STOPPED_SU); 813 list_for_each_entry(pavgroup, &lcu->grouplist, group) { 814 - list_for_each_entry(device, &pavgroup->baselist, alias_list) 815 dasd_device_set_stop_bits(device, DASD_STOPPED_SU); 816 - list_for_each_entry(device, &pavgroup->aliaslist, alias_list) 817 dasd_device_set_stop_bits(device, DASD_STOPPED_SU); 818 } 819 } 820 ··· 835 struct alias_pav_group *pavgroup; 836 struct dasd_device *device; 837 838 - list_for_each_entry(device, &lcu->active_devices, alias_list) 839 dasd_device_remove_stop_bits(device, DASD_STOPPED_SU); 840 - list_for_each_entry(device, &lcu->inactive_devices, alias_list) 841 dasd_device_remove_stop_bits(device, DASD_STOPPED_SU); 842 list_for_each_entry(pavgroup, &lcu->grouplist, group) { 843 - list_for_each_entry(device, &pavgroup->baselist, alias_list) 844 dasd_device_remove_stop_bits(device, DASD_STOPPED_SU); 845 - list_for_each_entry(device, &pavgroup->aliaslist, alias_list) 846 dasd_device_remove_stop_bits(device, DASD_STOPPED_SU); 847 } 848 } 849 ··· 881 spin_unlock_irqrestore(get_ccwdev_lock(device->cdev), flags); 882 reset_summary_unit_check(lcu, device, suc_data->reason); 883 884 - _trylock_and_lock_lcu_irqsave(lcu, NULL, &flags); 885 _unstop_all_devices_on_lcu(lcu); 886 _restart_all_base_devices_on_lcu(lcu); 887 /* 3. 
read new alias configuration */ 888 _schedule_lcu_update(lcu, device); 889 lcu->suc_data.device = NULL; 890 dasd_put_device(device); 891 - _unlock_all_devices_on_lcu(lcu, NULL, NULL); 892 spin_unlock_irqrestore(&lcu->lock, flags); 893 } 894 895 - /* 896 - * note: this will be called from int handler context (cdev locked) 897 - */ 898 - void dasd_alias_handle_summary_unit_check(struct dasd_device *device, 899 - struct irb *irb) 900 { 901 struct dasd_eckd_private *private = device->private; 902 struct alias_lcu *lcu; 903 - char reason; 904 - char *sense; 905 - 906 - sense = dasd_get_sense(irb); 907 - if (sense) { 908 - reason = sense[8]; 909 - DBF_DEV_EVENT(DBF_NOTICE, device, "%s %x", 910 - "eckd handle summary unit check: reason", reason); 911 - } else { 912 - DBF_DEV_EVENT(DBF_WARNING, device, "%s", 913 - "eckd handle summary unit check:" 914 - " no reason code available"); 915 - return; 916 - } 917 918 lcu = private->lcu; 919 if (!lcu) { 920 DBF_DEV_EVENT(DBF_WARNING, device, "%s", 921 "device not ready to handle summary" 922 " unit check (no lcu structure)"); 923 - return; 924 } 925 - _trylock_and_lock_lcu(lcu, device); 926 /* If this device is about to be removed just return and wait for 927 * the next interrupt on a different device 928 */ ··· 914 DBF_DEV_EVENT(DBF_WARNING, device, "%s", 915 "device is in offline processing," 916 " don't do summary unit check handling"); 917 - _unlock_all_devices_on_lcu(lcu, device, NULL); 918 - spin_unlock(&lcu->lock); 919 - return; 920 } 921 if (lcu->suc_data.device) { 922 /* already scheduled or running */ 923 DBF_DEV_EVENT(DBF_WARNING, device, "%s", 924 "previous instance of summary unit check worker" 925 " still pending"); 926 - _unlock_all_devices_on_lcu(lcu, device, NULL); 927 - spin_unlock(&lcu->lock); 928 - return ; 929 } 930 _stop_all_devices_on_lcu(lcu); 931 /* prepare for lcu_update */ 932 - private->lcu->flags |= NEED_UAC_UPDATE | UPDATE_PENDING; 933 - lcu->suc_data.reason = reason; 934 lcu->suc_data.device = device; 935 dasd_get_device(device); 936 - _unlock_all_devices_on_lcu(lcu, device, NULL); 937 - spin_unlock(&lcu->lock); 938 if (!schedule_work(&lcu->suc_data.worker)) 939 dasd_put_device(device); 940 };
··· 317 struct alias_pav_group *group; 318 struct dasd_uid uid; 319 320 + spin_lock(get_ccwdev_lock(device->cdev)); 321 private->uid.type = lcu->uac->unit[private->uid.real_unit_addr].ua_type; 322 private->uid.base_unit_addr = 323 lcu->uac->unit[private->uid.real_unit_addr].base_ua; 324 uid = private->uid; 325 + spin_unlock(get_ccwdev_lock(device->cdev)); 326 /* if we have no PAV anyway, we don't need to bother with PAV groups */ 327 if (lcu->pav == NO_PAV) { 328 list_move(&device->alias_list, &lcu->active_devices); 329 return 0; 330 } 331 group = _find_group(lcu, &uid); 332 if (!group) { 333 group = kzalloc(sizeof(*group), GFP_ATOMIC); ··· 395 return 1; 396 397 return 0; 398 } 399 400 static int read_unit_address_configuration(struct dasd_device *device, ··· 615 if (rc) 616 return rc; 617 618 + spin_lock_irqsave(&lcu->lock, flags); 619 lcu->pav = NO_PAV; 620 for (i = 0; i < MAX_DEVICES_PER_LCU; ++i) { 621 switch (lcu->uac->unit[i].ua_type) { ··· 634 alias_list) { 635 _add_device_to_lcu(lcu, device, refdev); 636 } 637 spin_unlock_irqrestore(&lcu->lock, flags); 638 return 0; 639 } ··· 722 723 lcu = private->lcu; 724 rc = 0; 725 + spin_lock_irqsave(&lcu->lock, flags); 726 if (!(lcu->flags & UPDATE_PENDING)) { 727 rc = _add_device_to_lcu(lcu, device, device); 728 if (rc) ··· 733 list_move(&device->alias_list, &lcu->active_devices); 734 _schedule_lcu_update(lcu, device); 735 } 736 + spin_unlock_irqrestore(&lcu->lock, flags); 737 return rc; 738 } 739 ··· 933 struct alias_pav_group *pavgroup; 934 struct dasd_device *device; 935 936 + list_for_each_entry(device, &lcu->active_devices, alias_list) { 937 + spin_lock(get_ccwdev_lock(device->cdev)); 938 dasd_device_set_stop_bits(device, DASD_STOPPED_SU); 939 + spin_unlock(get_ccwdev_lock(device->cdev)); 940 + } 941 + list_for_each_entry(device, &lcu->inactive_devices, alias_list) { 942 + spin_lock(get_ccwdev_lock(device->cdev)); 943 dasd_device_set_stop_bits(device, DASD_STOPPED_SU); 944 + spin_unlock(get_ccwdev_lock(device->cdev)); 945 + } 946 list_for_each_entry(pavgroup, &lcu->grouplist, group) { 947 + list_for_each_entry(device, &pavgroup->baselist, alias_list) { 948 + spin_lock(get_ccwdev_lock(device->cdev)); 949 dasd_device_set_stop_bits(device, DASD_STOPPED_SU); 950 + spin_unlock(get_ccwdev_lock(device->cdev)); 951 + } 952 + list_for_each_entry(device, &pavgroup->aliaslist, alias_list) { 953 + spin_lock(get_ccwdev_lock(device->cdev)); 954 dasd_device_set_stop_bits(device, DASD_STOPPED_SU); 955 + spin_unlock(get_ccwdev_lock(device->cdev)); 956 + } 957 } 958 } 959 ··· 950 struct alias_pav_group *pavgroup; 951 struct dasd_device *device; 952 953 + list_for_each_entry(device, &lcu->active_devices, alias_list) { 954 + spin_lock(get_ccwdev_lock(device->cdev)); 955 dasd_device_remove_stop_bits(device, DASD_STOPPED_SU); 956 + spin_unlock(get_ccwdev_lock(device->cdev)); 957 + } 958 + list_for_each_entry(device, &lcu->inactive_devices, alias_list) { 959 + spin_lock(get_ccwdev_lock(device->cdev)); 960 dasd_device_remove_stop_bits(device, DASD_STOPPED_SU); 961 + spin_unlock(get_ccwdev_lock(device->cdev)); 962 + } 963 list_for_each_entry(pavgroup, &lcu->grouplist, group) { 964 + list_for_each_entry(device, &pavgroup->baselist, alias_list) { 965 + spin_lock(get_ccwdev_lock(device->cdev)); 966 dasd_device_remove_stop_bits(device, DASD_STOPPED_SU); 967 + spin_unlock(get_ccwdev_lock(device->cdev)); 968 + } 969 + list_for_each_entry(device, &pavgroup->aliaslist, alias_list) { 970 + spin_lock(get_ccwdev_lock(device->cdev)); 971 
dasd_device_remove_stop_bits(device, DASD_STOPPED_SU); 972 + spin_unlock(get_ccwdev_lock(device->cdev)); 973 + } 974 } 975 } 976 ··· 984 spin_unlock_irqrestore(get_ccwdev_lock(device->cdev), flags); 985 reset_summary_unit_check(lcu, device, suc_data->reason); 986 987 + spin_lock_irqsave(&lcu->lock, flags); 988 _unstop_all_devices_on_lcu(lcu); 989 _restart_all_base_devices_on_lcu(lcu); 990 /* 3. read new alias configuration */ 991 _schedule_lcu_update(lcu, device); 992 lcu->suc_data.device = NULL; 993 dasd_put_device(device); 994 spin_unlock_irqrestore(&lcu->lock, flags); 995 } 996 997 + void dasd_alias_handle_summary_unit_check(struct work_struct *work) 998 { 999 + struct dasd_device *device = container_of(work, struct dasd_device, 1000 + suc_work); 1001 struct dasd_eckd_private *private = device->private; 1002 struct alias_lcu *lcu; 1003 + unsigned long flags; 1004 1005 lcu = private->lcu; 1006 if (!lcu) { 1007 DBF_DEV_EVENT(DBF_WARNING, device, "%s", 1008 "device not ready to handle summary" 1009 " unit check (no lcu structure)"); 1010 + goto out; 1011 } 1012 + spin_lock_irqsave(&lcu->lock, flags); 1013 /* If this device is about to be removed just return and wait for 1014 * the next interrupt on a different device 1015 */ ··· 1033 DBF_DEV_EVENT(DBF_WARNING, device, "%s", 1034 "device is in offline processing," 1035 " don't do summary unit check handling"); 1036 + goto out_unlock; 1037 } 1038 if (lcu->suc_data.device) { 1039 /* already scheduled or running */ 1040 DBF_DEV_EVENT(DBF_WARNING, device, "%s", 1041 "previous instance of summary unit check worker" 1042 " still pending"); 1043 + goto out_unlock; 1044 } 1045 _stop_all_devices_on_lcu(lcu); 1046 /* prepare for lcu_update */ 1047 + lcu->flags |= NEED_UAC_UPDATE | UPDATE_PENDING; 1048 + lcu->suc_data.reason = private->suc_reason; 1049 lcu->suc_data.device = device; 1050 dasd_get_device(device); 1051 if (!schedule_work(&lcu->suc_data.worker)) 1052 dasd_put_device(device); 1053 + out_unlock: 1054 + spin_unlock_irqrestore(&lcu->lock, flags); 1055 + out: 1056 + clear_bit(DASD_FLAG_SUC, &device->flags); 1057 + dasd_put_device(device); 1058 };
+29 -9
drivers/s390/block/dasd_eckd.c
··· 1682 1683 /* setup work queue for validate server*/ 1684 INIT_WORK(&device->kick_validate, dasd_eckd_do_validate_server); 1685 1686 if (!ccw_device_is_pathgroup(device->cdev)) { 1687 dev_warn(&device->cdev->dev, ··· 2551 device->state == DASD_STATE_ONLINE && 2552 !test_bit(DASD_FLAG_OFFLINE, &device->flags) && 2553 !test_bit(DASD_FLAG_SUSPENDED, &device->flags)) { 2554 - /* 2555 - * the state change could be caused by an alias 2556 - * reassignment remove device from alias handling 2557 - * to prevent new requests from being scheduled on 2558 - * the wrong alias device 2559 - */ 2560 - dasd_alias_remove_device(device); 2561 - 2562 /* schedule worker to reload device */ 2563 dasd_reload_device(device); 2564 } ··· 2565 /* summary unit check */ 2566 if ((sense[27] & DASD_SENSE_BIT_0) && (sense[7] == 0x0D) && 2567 (scsw_dstat(&irb->scsw) & DEV_STAT_UNIT_CHECK)) { 2568 - dasd_alias_handle_summary_unit_check(device, irb); 2569 return; 2570 } 2571 ··· 4508 char print_uid[60]; 4509 struct dasd_uid uid; 4510 unsigned long flags; 4511 4512 spin_lock_irqsave(get_ccwdev_lock(device->cdev), flags); 4513 old_base = private->uid.base_unit_addr;
··· 1682 1683 /* setup work queue for validate server*/ 1684 INIT_WORK(&device->kick_validate, dasd_eckd_do_validate_server); 1685 + /* setup work queue for summary unit check */ 1686 + INIT_WORK(&device->suc_work, dasd_alias_handle_summary_unit_check); 1687 1688 if (!ccw_device_is_pathgroup(device->cdev)) { 1689 dev_warn(&device->cdev->dev, ··· 2549 device->state == DASD_STATE_ONLINE && 2550 !test_bit(DASD_FLAG_OFFLINE, &device->flags) && 2551 !test_bit(DASD_FLAG_SUSPENDED, &device->flags)) { 2552 /* schedule worker to reload device */ 2553 dasd_reload_device(device); 2554 } ··· 2571 /* summary unit check */ 2572 if ((sense[27] & DASD_SENSE_BIT_0) && (sense[7] == 0x0D) && 2573 (scsw_dstat(&irb->scsw) & DEV_STAT_UNIT_CHECK)) { 2574 + if (test_and_set_bit(DASD_FLAG_SUC, &device->flags)) { 2575 + DBF_DEV_EVENT(DBF_WARNING, device, "%s", 2576 + "eckd suc: device already notified"); 2577 + return; 2578 + } 2579 + sense = dasd_get_sense(irb); 2580 + if (!sense) { 2581 + DBF_DEV_EVENT(DBF_WARNING, device, "%s", 2582 + "eckd suc: no reason code available"); 2583 + clear_bit(DASD_FLAG_SUC, &device->flags); 2584 + return; 2585 + 2586 + } 2587 + private->suc_reason = sense[8]; 2588 + DBF_DEV_EVENT(DBF_NOTICE, device, "%s %x", 2589 + "eckd handle summary unit check: reason", 2590 + private->suc_reason); 2591 + dasd_get_device(device); 2592 + if (!schedule_work(&device->suc_work)) 2593 + dasd_put_device(device); 2594 + 2595 return; 2596 } 2597 ··· 4494 char print_uid[60]; 4495 struct dasd_uid uid; 4496 unsigned long flags; 4497 + 4498 + /* 4499 + * remove device from alias handling to prevent new requests 4500 + * from being scheduled on the wrong alias device 4501 + */ 4502 + dasd_alias_remove_device(device); 4503 4504 spin_lock_irqsave(get_ccwdev_lock(device->cdev), flags); 4505 old_base = private->uid.base_unit_addr;
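The interrupt handler now arms at most one summary unit check worker per device: DASD_FLAG_SUC is test-and-set before the work is scheduled and cleared again when dasd_alias_handle_summary_unit_check() finishes (see the dasd_alias.c hunk above). A small C11 sketch of that "single pending work item" idiom, with invented names standing in for the flag helpers and the workqueue:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* One atomic flag guards the work item: setting it a second time before the
 * worker has run simply drops the duplicate notification. */
static atomic_flag suc_pending = ATOMIC_FLAG_INIT;

static bool queue_suc_work(void)
{
        if (atomic_flag_test_and_set(&suc_pending)) {
                printf("already notified, dropping duplicate\n");
                return false;
        }
        printf("work queued\n");
        return true;
}

static void suc_worker(void)
{
        printf("handling summary unit check\n");
        atomic_flag_clear(&suc_pending);        /* allow the next notification */
}

int main(void)
{
        queue_suc_work();       /* queued */
        queue_suc_work();       /* dropped: still pending */
        suc_worker();
        queue_suc_work();       /* queued again */
        return 0;
}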
+2 -1
drivers/s390/block/dasd_eckd.h
··· 525 int count; 526 527 u32 fcx_max_data; 528 }; 529 530 ··· 535 int dasd_alias_add_device(struct dasd_device *); 536 int dasd_alias_remove_device(struct dasd_device *); 537 struct dasd_device *dasd_alias_get_start_dev(struct dasd_device *); 538 - void dasd_alias_handle_summary_unit_check(struct dasd_device *, struct irb *); 539 void dasd_eckd_reset_ccw_to_base_io(struct dasd_ccw_req *); 540 void dasd_alias_lcu_setup_complete(struct dasd_device *); 541 void dasd_alias_wait_for_lcu_setup(struct dasd_device *);
··· 525 int count; 526 527 u32 fcx_max_data; 528 + char suc_reason; 529 }; 530 531 ··· 534 int dasd_alias_add_device(struct dasd_device *); 535 int dasd_alias_remove_device(struct dasd_device *); 536 struct dasd_device *dasd_alias_get_start_dev(struct dasd_device *); 537 + void dasd_alias_handle_summary_unit_check(struct work_struct *); 538 void dasd_eckd_reset_ccw_to_base_io(struct dasd_ccw_req *); 539 void dasd_alias_lcu_setup_complete(struct dasd_device *); 540 void dasd_alias_wait_for_lcu_setup(struct dasd_device *);
+2
drivers/s390/block/dasd_int.h
··· 470 struct work_struct restore_device; 471 struct work_struct reload_device; 472 struct work_struct kick_validate; 473 struct timer_list timer; 474 475 debug_info_t *debug_area; ··· 543 #define DASD_FLAG_SAFE_OFFLINE_RUNNING 11 /* safe offline running */ 544 #define DASD_FLAG_ABORTALL 12 /* Abort all noretry requests */ 545 #define DASD_FLAG_PATH_VERIFY 13 /* Path verification worker running */ 546 547 #define DASD_SLEEPON_START_TAG ((void *) 1) 548 #define DASD_SLEEPON_END_TAG ((void *) 2)
··· 470 struct work_struct restore_device; 471 struct work_struct reload_device; 472 struct work_struct kick_validate; 473 + struct work_struct suc_work; 474 struct timer_list timer; 475 476 debug_info_t *debug_area; ··· 542 #define DASD_FLAG_SAFE_OFFLINE_RUNNING 11 /* safe offline running */ 543 #define DASD_FLAG_ABORTALL 12 /* Abort all noretry requests */ 544 #define DASD_FLAG_PATH_VERIFY 13 /* Path verification worker running */ 545 + #define DASD_FLAG_SUC 14 /* unhandled summary unit check */ 546 547 #define DASD_SLEEPON_START_TAG ((void *) 1) 548 #define DASD_SLEEPON_END_TAG ((void *) 2)
+11 -17
drivers/target/iscsi/iscsi_target_configfs.c
··· 779 return 0; 780 } 781 782 - static void lio_target_cleanup_nodeacl( struct se_node_acl *se_nacl) 783 - { 784 - struct iscsi_node_acl *acl = container_of(se_nacl, 785 - struct iscsi_node_acl, se_node_acl); 786 - 787 - configfs_remove_default_groups(&acl->se_node_acl.acl_fabric_stat_group); 788 - } 789 - 790 /* End items for lio_target_acl_cit */ 791 792 /* Start items for lio_target_tpg_attrib_cit */ ··· 1239 if (IS_ERR(tiqn)) 1240 return ERR_CAST(tiqn); 1241 1242 config_group_init_type_name(&tiqn->tiqn_stat_grps.iscsi_instance_group, 1243 "iscsi_instance", &iscsi_stat_instance_cit); 1244 configfs_add_default_group(&tiqn->tiqn_stat_grps.iscsi_instance_group, ··· 1273 "iscsi_logout_stats", &iscsi_stat_logout_cit); 1274 configfs_add_default_group(&tiqn->tiqn_stat_grps.iscsi_logout_stats_group, 1275 &tiqn->tiqn_wwn.fabric_stat_group); 1276 - 1277 - 1278 - pr_debug("LIO_Target_ConfigFS: REGISTER -> %s\n", tiqn->tiqn); 1279 - pr_debug("LIO_Target_ConfigFS: REGISTER -> Allocated Node:" 1280 - " %s\n", name); 1281 - return &tiqn->tiqn_wwn; 1282 } 1283 1284 static void lio_target_call_coredeltiqn( 1285 struct se_wwn *wwn) 1286 { 1287 struct iscsi_tiqn *tiqn = container_of(wwn, struct iscsi_tiqn, tiqn_wwn); 1288 - 1289 - configfs_remove_default_groups(&tiqn->tiqn_wwn.fabric_stat_group); 1290 1291 pr_debug("LIO_Target_ConfigFS: DEREGISTER -> %s\n", 1292 tiqn->tiqn); ··· 1654 .aborted_task = lio_aborted_task, 1655 .fabric_make_wwn = lio_target_call_coreaddtiqn, 1656 .fabric_drop_wwn = lio_target_call_coredeltiqn, 1657 .fabric_make_tpg = lio_target_tiqn_addtpg, 1658 .fabric_drop_tpg = lio_target_tiqn_deltpg, 1659 .fabric_make_np = lio_target_call_addnptotpg, 1660 .fabric_drop_np = lio_target_call_delnpfromtpg, 1661 .fabric_init_nodeacl = lio_target_init_nodeacl, 1662 - .fabric_cleanup_nodeacl = lio_target_cleanup_nodeacl, 1663 1664 .tfc_discovery_attrs = lio_target_discovery_auth_attrs, 1665 .tfc_wwn_attrs = lio_target_wwn_attrs,
··· 779 return 0; 780 } 781 782 /* End items for lio_target_acl_cit */ 783 784 /* Start items for lio_target_tpg_attrib_cit */ ··· 1247 if (IS_ERR(tiqn)) 1248 return ERR_CAST(tiqn); 1249 1250 + pr_debug("LIO_Target_ConfigFS: REGISTER -> %s\n", tiqn->tiqn); 1251 + pr_debug("LIO_Target_ConfigFS: REGISTER -> Allocated Node:" 1252 + " %s\n", name); 1253 + return &tiqn->tiqn_wwn; 1254 + } 1255 + 1256 + static void lio_target_add_wwn_groups(struct se_wwn *wwn) 1257 + { 1258 + struct iscsi_tiqn *tiqn = container_of(wwn, struct iscsi_tiqn, tiqn_wwn); 1259 + 1260 config_group_init_type_name(&tiqn->tiqn_stat_grps.iscsi_instance_group, 1261 "iscsi_instance", &iscsi_stat_instance_cit); 1262 configfs_add_default_group(&tiqn->tiqn_stat_grps.iscsi_instance_group, ··· 1271 "iscsi_logout_stats", &iscsi_stat_logout_cit); 1272 configfs_add_default_group(&tiqn->tiqn_stat_grps.iscsi_logout_stats_group, 1273 &tiqn->tiqn_wwn.fabric_stat_group); 1274 } 1275 1276 static void lio_target_call_coredeltiqn( 1277 struct se_wwn *wwn) 1278 { 1279 struct iscsi_tiqn *tiqn = container_of(wwn, struct iscsi_tiqn, tiqn_wwn); 1280 1281 pr_debug("LIO_Target_ConfigFS: DEREGISTER -> %s\n", 1282 tiqn->tiqn); ··· 1660 .aborted_task = lio_aborted_task, 1661 .fabric_make_wwn = lio_target_call_coreaddtiqn, 1662 .fabric_drop_wwn = lio_target_call_coredeltiqn, 1663 + .add_wwn_groups = lio_target_add_wwn_groups, 1664 .fabric_make_tpg = lio_target_tiqn_addtpg, 1665 .fabric_drop_tpg = lio_target_tiqn_deltpg, 1666 .fabric_make_np = lio_target_call_addnptotpg, 1667 .fabric_drop_np = lio_target_call_delnpfromtpg, 1668 .fabric_init_nodeacl = lio_target_init_nodeacl, 1669 1670 .tfc_discovery_attrs = lio_target_discovery_auth_attrs, 1671 .tfc_wwn_attrs = lio_target_wwn_attrs,
+13 -11
drivers/target/target_core_fabric_configfs.c
··· 338 { 339 struct se_node_acl *se_nacl = container_of(to_config_group(item), 340 struct se_node_acl, acl_group); 341 - struct target_fabric_configfs *tf = se_nacl->se_tpg->se_tpg_wwn->wwn_tf; 342 343 - if (tf->tf_ops->fabric_cleanup_nodeacl) 344 - tf->tf_ops->fabric_cleanup_nodeacl(se_nacl); 345 core_tpg_del_initiator_node_acl(se_nacl); 346 } 347 ··· 381 if (IS_ERR(se_nacl)) 382 return ERR_CAST(se_nacl); 383 384 - if (tf->tf_ops->fabric_init_nodeacl) { 385 - int ret = tf->tf_ops->fabric_init_nodeacl(se_nacl, name); 386 - if (ret) { 387 - core_tpg_del_initiator_node_acl(se_nacl); 388 - return ERR_PTR(ret); 389 - } 390 - } 391 - 392 config_group_init_type_name(&se_nacl->acl_group, name, 393 &tf->tf_tpg_nacl_base_cit); 394 ··· 403 "fabric_statistics", &tf->tf_tpg_nacl_stat_cit); 404 configfs_add_default_group(&se_nacl->acl_fabric_stat_group, 405 &se_nacl->acl_group); 406 407 return &se_nacl->acl_group; 408 } ··· 891 struct se_wwn, wwn_group); 892 struct target_fabric_configfs *tf = wwn->wwn_tf; 893 894 tf->tf_ops->fabric_drop_wwn(wwn); 895 } 896 ··· 945 &tf->tf_wwn_fabric_stats_cit); 946 configfs_add_default_group(&wwn->fabric_stat_group, &wwn->wwn_group); 947 948 return &wwn->wwn_group; 949 } 950
··· 338 { 339 struct se_node_acl *se_nacl = container_of(to_config_group(item), 340 struct se_node_acl, acl_group); 341 342 + configfs_remove_default_groups(&se_nacl->acl_fabric_stat_group); 343 core_tpg_del_initiator_node_acl(se_nacl); 344 } 345 ··· 383 if (IS_ERR(se_nacl)) 384 return ERR_CAST(se_nacl); 385 386 config_group_init_type_name(&se_nacl->acl_group, name, 387 &tf->tf_tpg_nacl_base_cit); 388 ··· 413 "fabric_statistics", &tf->tf_tpg_nacl_stat_cit); 414 configfs_add_default_group(&se_nacl->acl_fabric_stat_group, 415 &se_nacl->acl_group); 416 + 417 + if (tf->tf_ops->fabric_init_nodeacl) { 418 + int ret = tf->tf_ops->fabric_init_nodeacl(se_nacl, name); 419 + if (ret) { 420 + configfs_remove_default_groups(&se_nacl->acl_fabric_stat_group); 421 + core_tpg_del_initiator_node_acl(se_nacl); 422 + return ERR_PTR(ret); 423 + } 424 + } 425 426 return &se_nacl->acl_group; 427 } ··· 892 struct se_wwn, wwn_group); 893 struct target_fabric_configfs *tf = wwn->wwn_tf; 894 895 + configfs_remove_default_groups(&wwn->fabric_stat_group); 896 tf->tf_ops->fabric_drop_wwn(wwn); 897 } 898 ··· 945 &tf->tf_wwn_fabric_stats_cit); 946 configfs_add_default_group(&wwn->fabric_stat_group, &wwn->wwn_group); 947 948 + if (tf->tf_ops->add_wwn_groups) 949 + tf->tf_ops->add_wwn_groups(wwn); 950 return &wwn->wwn_group; 951 } 952
+25 -20
fs/btrfs/disk-io.c
··· 25 #include <linux/buffer_head.h> 26 #include <linux/workqueue.h> 27 #include <linux/kthread.h> 28 - #include <linux/freezer.h> 29 #include <linux/slab.h> 30 #include <linux/migrate.h> 31 #include <linux/ratelimit.h> ··· 302 err = map_private_extent_buffer(buf, offset, 32, 303 &kaddr, &map_start, &map_len); 304 if (err) 305 - return 1; 306 cur_len = min(len, map_len - (offset - map_start)); 307 crc = btrfs_csum_data(kaddr + offset - map_start, 308 crc, cur_len); ··· 312 if (csum_size > sizeof(inline_result)) { 313 result = kzalloc(csum_size, GFP_NOFS); 314 if (!result) 315 - return 1; 316 } else { 317 result = (char *)&inline_result; 318 } ··· 333 val, found, btrfs_header_level(buf)); 334 if (result != (char *)&inline_result) 335 kfree(result); 336 - return 1; 337 } 338 } else { 339 write_extent_buffer(buf, result, 0, csum_size); ··· 512 eb = (struct extent_buffer *)page->private; 513 if (page != eb->pages[0]) 514 return 0; 515 found_start = btrfs_header_bytenr(eb); 516 - if (WARN_ON(found_start != start || !PageUptodate(page))) 517 - return 0; 518 - csum_tree_block(fs_info, eb, 0); 519 - return 0; 520 } 521 522 static int check_tree_block_fsid(struct btrfs_fs_info *fs_info, ··· 670 eb, found_level); 671 672 ret = csum_tree_block(fs_info, eb, 1); 673 - if (ret) { 674 - ret = -EIO; 675 goto err; 676 - } 677 678 /* 679 * If this is a leaf block and it is corrupt, set the corrupt bit so ··· 1838 */ 1839 btrfs_delete_unused_bgs(root->fs_info); 1840 sleep: 1841 - if (!try_to_freeze() && !again) { 1842 set_current_state(TASK_INTERRUPTIBLE); 1843 if (!kthread_should_stop()) 1844 schedule(); ··· 1928 if (unlikely(test_bit(BTRFS_FS_STATE_ERROR, 1929 &root->fs_info->fs_state))) 1930 btrfs_cleanup_transaction(root); 1931 - if (!try_to_freeze()) { 1932 - set_current_state(TASK_INTERRUPTIBLE); 1933 - if (!kthread_should_stop() && 1934 - (!btrfs_transaction_blocked(root->fs_info) || 1935 - cannot_commit)) 1936 - schedule_timeout(delay); 1937 - __set_current_state(TASK_RUNNING); 1938 - } 1939 } while (!kthread_should_stop()); 1940 return 0; 1941 }
··· 25 #include <linux/buffer_head.h> 26 #include <linux/workqueue.h> 27 #include <linux/kthread.h> 28 #include <linux/slab.h> 29 #include <linux/migrate.h> 30 #include <linux/ratelimit.h> ··· 303 err = map_private_extent_buffer(buf, offset, 32, 304 &kaddr, &map_start, &map_len); 305 if (err) 306 + return err; 307 cur_len = min(len, map_len - (offset - map_start)); 308 crc = btrfs_csum_data(kaddr + offset - map_start, 309 crc, cur_len); ··· 313 if (csum_size > sizeof(inline_result)) { 314 result = kzalloc(csum_size, GFP_NOFS); 315 if (!result) 316 + return -ENOMEM; 317 } else { 318 result = (char *)&inline_result; 319 } ··· 334 val, found, btrfs_header_level(buf)); 335 if (result != (char *)&inline_result) 336 kfree(result); 337 + return -EUCLEAN; 338 } 339 } else { 340 write_extent_buffer(buf, result, 0, csum_size); ··· 513 eb = (struct extent_buffer *)page->private; 514 if (page != eb->pages[0]) 515 return 0; 516 + 517 found_start = btrfs_header_bytenr(eb); 518 + /* 519 + * Please do not consolidate these warnings into a single if. 520 + * It is useful to know what went wrong. 521 + */ 522 + if (WARN_ON(found_start != start)) 523 + return -EUCLEAN; 524 + if (WARN_ON(!PageUptodate(page))) 525 + return -EUCLEAN; 526 + 527 + ASSERT(memcmp_extent_buffer(eb, fs_info->fsid, 528 + btrfs_header_fsid(), BTRFS_FSID_SIZE) == 0); 529 + 530 + return csum_tree_block(fs_info, eb, 0); 531 } 532 533 static int check_tree_block_fsid(struct btrfs_fs_info *fs_info, ··· 661 eb, found_level); 662 663 ret = csum_tree_block(fs_info, eb, 1); 664 + if (ret) 665 goto err; 666 667 /* 668 * If this is a leaf block and it is corrupt, set the corrupt bit so ··· 1831 */ 1832 btrfs_delete_unused_bgs(root->fs_info); 1833 sleep: 1834 + if (!again) { 1835 set_current_state(TASK_INTERRUPTIBLE); 1836 if (!kthread_should_stop()) 1837 schedule(); ··· 1921 if (unlikely(test_bit(BTRFS_FS_STATE_ERROR, 1922 &root->fs_info->fs_state))) 1923 btrfs_cleanup_transaction(root); 1924 + set_current_state(TASK_INTERRUPTIBLE); 1925 + if (!kthread_should_stop() && 1926 + (!btrfs_transaction_blocked(root->fs_info) || 1927 + cannot_commit)) 1928 + schedule_timeout(delay); 1929 + __set_current_state(TASK_RUNNING); 1930 } while (!kthread_should_stop()); 1931 return 0; 1932 }
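Besides switching csum_tree_block() and its callers from a bare 1 to real negative error codes, the hunk deliberately keeps two separate WARN_ONs so the log records which invariant actually failed, as its new comment requests. A hypothetical userspace mock-up of that pattern (WARN_ON below is a stand-in, not the kernel macro):

#include <errno.h>
#include <stdio.h>

#ifndef EUCLEAN
#define EUCLEAN 117                     /* "structure needs cleaning" */
#endif

/* Stand-in for WARN_ON(): reports the exact condition that tripped. */
#define WARN_ON(cond) \
        ({ int __c = !!(cond); if (__c) fprintf(stderr, "warning: %s\n", #cond); __c; })

static int check_block(unsigned long found_start, unsigned long start, int uptodate)
{
        if (WARN_ON(found_start != start))      /* names exactly this failure */
                return -EUCLEAN;
        if (WARN_ON(!uptodate))
                return -EUCLEAN;
        return 0;
}

int main(void)
{
        printf("mismatched start: %d\n", check_block(4096, 8192, 1));
        printf("stale page:       %d\n", check_block(8192, 8192, 0));
        printf("good block:       %d\n", check_block(8192, 8192, 1));
        return 0;
}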
+1 -2
fs/dlm/config.c
··· 343 struct dlm_cluster *cl = NULL; 344 struct dlm_spaces *sps = NULL; 345 struct dlm_comms *cms = NULL; 346 - void *gps = NULL; 347 348 cl = kzalloc(sizeof(struct dlm_cluster), GFP_NOFS); 349 sps = kzalloc(sizeof(struct dlm_spaces), GFP_NOFS); 350 cms = kzalloc(sizeof(struct dlm_comms), GFP_NOFS); 351 352 - if (!cl || !gps || !sps || !cms) 353 goto fail; 354 355 config_group_init_type_name(&cl->group, name, &cluster_type);
··· 343 struct dlm_cluster *cl = NULL; 344 struct dlm_spaces *sps = NULL; 345 struct dlm_comms *cms = NULL; 346 347 cl = kzalloc(sizeof(struct dlm_cluster), GFP_NOFS); 348 sps = kzalloc(sizeof(struct dlm_spaces), GFP_NOFS); 349 cms = kzalloc(sizeof(struct dlm_comms), GFP_NOFS); 350 351 + if (!cl || !sps || !cms) 352 goto fail; 353 354 config_group_init_type_name(&cl->group, name, &cluster_type);
+6 -4
fs/namei.c
··· 1740 nd->flags); 1741 if (IS_ERR(path.dentry)) 1742 return PTR_ERR(path.dentry); 1743 - if (unlikely(d_is_negative(path.dentry))) { 1744 - dput(path.dentry); 1745 - return -ENOENT; 1746 - } 1747 path.mnt = nd->path.mnt; 1748 err = follow_managed(&path, nd); 1749 if (unlikely(err < 0)) 1750 return err; 1751 1752 seq = 0; /* we are already out of RCU mode */ 1753 inode = d_backing_inode(path.dentry);
··· 1740 nd->flags); 1741 if (IS_ERR(path.dentry)) 1742 return PTR_ERR(path.dentry); 1743 + 1744 path.mnt = nd->path.mnt; 1745 err = follow_managed(&path, nd); 1746 if (unlikely(err < 0)) 1747 return err; 1748 + 1749 + if (unlikely(d_is_negative(path.dentry))) { 1750 + path_to_nameidata(&path, nd); 1751 + return -ENOENT; 1752 + } 1753 1754 seq = 0; /* we are already out of RCU mode */ 1755 inode = d_backing_inode(path.dentry);
+3 -5
fs/orangefs/dir.c
··· 235 if (ret == -EIO && op_state_purged(new_op)) { 236 gossip_err("%s: Client is down. Aborting readdir call.\n", 237 __func__); 238 - goto out_slot; 239 } 240 241 if (ret < 0 || new_op->downcall.status != 0) { ··· 244 new_op->downcall.status); 245 if (ret >= 0) 246 ret = new_op->downcall.status; 247 - goto out_slot; 248 } 249 250 dents_buf = new_op->downcall.trailer_buf; 251 if (dents_buf == NULL) { 252 gossip_err("Invalid NULL buffer in readdir response\n"); 253 ret = -ENOMEM; 254 - goto out_slot; 255 } 256 257 bytes_decoded = decode_dirents(dents_buf, new_op->downcall.trailer_size, ··· 363 out_vfree: 364 gossip_debug(GOSSIP_DIR_DEBUG, "vfree %p\n", dents_buf); 365 vfree(dents_buf); 366 - out_slot: 367 - orangefs_readdir_index_put(buffer_index); 368 out_free_op: 369 op_release(new_op); 370 gossip_debug(GOSSIP_DIR_DEBUG, "orangefs_readdir returning %d\n", ret);
··· 235 if (ret == -EIO && op_state_purged(new_op)) { 236 gossip_err("%s: Client is down. Aborting readdir call.\n", 237 __func__); 238 + goto out_free_op; 239 } 240 241 if (ret < 0 || new_op->downcall.status != 0) { ··· 244 new_op->downcall.status); 245 if (ret >= 0) 246 ret = new_op->downcall.status; 247 + goto out_free_op; 248 } 249 250 dents_buf = new_op->downcall.trailer_buf; 251 if (dents_buf == NULL) { 252 gossip_err("Invalid NULL buffer in readdir response\n"); 253 ret = -ENOMEM; 254 + goto out_free_op; 255 } 256 257 bytes_decoded = decode_dirents(dents_buf, new_op->downcall.trailer_size, ··· 363 out_vfree: 364 gossip_debug(GOSSIP_DIR_DEBUG, "vfree %p\n", dents_buf); 365 vfree(dents_buf); 366 out_free_op: 367 op_release(new_op); 368 gossip_debug(GOSSIP_DIR_DEBUG, "orangefs_readdir returning %d\n", ret);
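The readdir error paths now jump straight to out_free_op instead of a separate slot-release label, but the surrounding shape is the usual goto-unwind idiom: each label releases only what had been acquired before the failing step, and the success path falls through the same cleanup. A generic sketch of that idiom with made-up resources (plain malloc/free, not the orangefs op and trailer buffer handling):

#include <stdio.h>
#include <stdlib.h>

static int do_op(int fail_early, int fail_late)
{
        char *op, *buf = NULL;
        int ret = 0;

        op = malloc(32);
        if (!op)
                return -1;

        if (fail_early) {
                ret = -1;
                goto out_free_op;       /* only the op exists so far */
        }

        buf = malloc(4096);
        if (!buf || fail_late) {
                ret = -1;
                goto out_vfree;         /* free(NULL) is a harmless no-op */
        }

        printf("operation succeeded\n");

out_vfree:
        free(buf);
out_free_op:
        free(op);
        return ret;
}

int main(void)
{
        do_op(1, 0);
        do_op(0, 1);
        do_op(0, 0);
        return 0;
}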
+1 -1
fs/orangefs/protocol.h
··· 407 * space. Zero signifies the upstream version of the kernel module. 408 */ 409 #define ORANGEFS_KERNEL_PROTO_VERSION 0 410 - #define ORANGEFS_MINIMUM_USERSPACE_VERSION 20904 411 412 /* 413 * describes memory regions to map in the ORANGEFS_DEV_MAP ioctl.
··· 407 * space. Zero signifies the upstream version of the kernel module. 408 */ 409 #define ORANGEFS_KERNEL_PROTO_VERSION 0 410 + #define ORANGEFS_MINIMUM_USERSPACE_VERSION 20903 411 412 /* 413 * describes memory regions to map in the ORANGEFS_DEV_MAP ioctl.
+17 -17
include/linux/atomic.h
··· 559 #endif 560 561 /** 562 - * fetch_or - perform *ptr |= mask and return old value of *ptr 563 - * @ptr: pointer to value 564 - * @mask: mask to OR on the value 565 - * 566 - * cmpxchg based fetch_or, macro so it works for different integer types 567 */ 568 - #ifndef fetch_or 569 - #define fetch_or(ptr, mask) \ 570 - ({ typeof(*(ptr)) __old, __val = *(ptr); \ 571 - for (;;) { \ 572 - __old = cmpxchg((ptr), __val, __val | (mask)); \ 573 - if (__old == __val) \ 574 - break; \ 575 - __val = __old; \ 576 - } \ 577 - __old; \ 578 - }) 579 - #endif 580 581 582 #ifdef CONFIG_GENERIC_ATOMIC64 583 #include <asm-generic/atomic64.h>
··· 559 #endif 560 561 /** 562 + * atomic_fetch_or - perform *p |= mask and return old value of *p 563 + * @p: pointer to atomic_t 564 + * @mask: mask to OR on the atomic_t 565 */ 566 + #ifndef atomic_fetch_or 567 + static inline int atomic_fetch_or(atomic_t *p, int mask) 568 + { 569 + int old, val = atomic_read(p); 570 571 + for (;;) { 572 + old = atomic_cmpxchg(p, val, val | mask); 573 + if (old == val) 574 + break; 575 + val = old; 576 + } 577 + 578 + return old; 579 + } 580 + #endif 581 582 #ifdef CONFIG_GENERIC_ATOMIC64 583 #include <asm-generic/atomic64.h>
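The fallback atomic_fetch_or() is the classic fetch-or built from a compare-and-swap loop, now typed for atomic_t instead of the removed generic macro. The same loop can be written in portable C11, where the standard library also offers a ready-made atomic_fetch_or(); a standalone sketch:

#include <stdatomic.h>
#include <stdio.h>

/* Fetch-or via compare-and-swap: retry until old | mask is installed over an
 * unchanged old value, then report the value seen before the update. */
static int fetch_or_cas(atomic_int *p, int mask)
{
        int old = atomic_load(p);

        while (!atomic_compare_exchange_weak(p, &old, old | mask))
                ;       /* 'old' was refreshed with the current value; retry */
        return old;
}

int main(void)
{
        atomic_int flags = 1;

        printf("previous value: %#x\n", fetch_or_cas(&flags, 0x4));    /* 0x1 */
        printf("new value:      %#x\n", atomic_load(&flags));          /* 0x5 */
        return 0;
}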
+2
include/linux/brcmphy.h
··· 24 #define PHY_ID_BCM7250 0xae025280 25 #define PHY_ID_BCM7364 0xae025260 26 #define PHY_ID_BCM7366 0x600d8490 27 #define PHY_ID_BCM7425 0x600d86b0 28 #define PHY_ID_BCM7429 0x600d8730 29 #define PHY_ID_BCM7435 0x600d8750
··· 24 #define PHY_ID_BCM7250 0xae025280 25 #define PHY_ID_BCM7364 0xae025260 26 #define PHY_ID_BCM7366 0x600d8490 27 + #define PHY_ID_BCM7346 0x600d8650 28 + #define PHY_ID_BCM7362 0x600d84b0 29 #define PHY_ID_BCM7425 0x600d86b0 30 #define PHY_ID_BCM7429 0x600d8730 31 #define PHY_ID_BCM7435 0x600d8750
+2 -2
include/linux/configfs.h
··· 188 } 189 190 #define CONFIGFS_BIN_ATTR_RO(_pfx, _name, _priv, _maxsz) \ 191 - static struct configfs_attribute _pfx##attr_##_name = { \ 192 .cb_attr = { \ 193 .ca_name = __stringify(_name), \ 194 .ca_mode = S_IRUGO, \ ··· 200 } 201 202 #define CONFIGFS_BIN_ATTR_WO(_pfx, _name, _priv, _maxsz) \ 203 - static struct configfs_attribute _pfx##attr_##_name = { \ 204 .cb_attr = { \ 205 .ca_name = __stringify(_name), \ 206 .ca_mode = S_IWUSR, \
··· 188 } 189 190 #define CONFIGFS_BIN_ATTR_RO(_pfx, _name, _priv, _maxsz) \ 191 + static struct configfs_bin_attribute _pfx##attr_##_name = { \ 192 .cb_attr = { \ 193 .ca_name = __stringify(_name), \ 194 .ca_mode = S_IRUGO, \ ··· 200 } 201 202 #define CONFIGFS_BIN_ATTR_WO(_pfx, _name, _priv, _maxsz) \ 203 + static struct configfs_bin_attribute _pfx##attr_##_name = { \ 204 .cb_attr = { \ 205 .ca_name = __stringify(_name), \ 206 .ca_mode = S_IWUSR, \
+4
include/linux/filter.h
··· 465 void bpf_prog_destroy(struct bpf_prog *fp); 466 467 int sk_attach_filter(struct sock_fprog *fprog, struct sock *sk); 468 int sk_attach_bpf(u32 ufd, struct sock *sk); 469 int sk_reuseport_attach_filter(struct sock_fprog *fprog, struct sock *sk); 470 int sk_reuseport_attach_bpf(u32 ufd, struct sock *sk); 471 int sk_detach_filter(struct sock *sk); 472 int sk_get_filter(struct sock *sk, struct sock_filter __user *filter, 473 unsigned int len); 474
··· 465 void bpf_prog_destroy(struct bpf_prog *fp); 466 467 int sk_attach_filter(struct sock_fprog *fprog, struct sock *sk); 468 + int __sk_attach_filter(struct sock_fprog *fprog, struct sock *sk, 469 + bool locked); 470 int sk_attach_bpf(u32 ufd, struct sock *sk); 471 int sk_reuseport_attach_filter(struct sock_fprog *fprog, struct sock *sk); 472 int sk_reuseport_attach_bpf(u32 ufd, struct sock *sk); 473 int sk_detach_filter(struct sock *sk); 474 + int __sk_detach_filter(struct sock *sk, bool locked); 475 + 476 int sk_get_filter(struct sock *sk, struct sock_filter __user *filter, 477 unsigned int len); 478
+1 -1
include/linux/huge_mm.h
··· 127 if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd)) 128 return __pmd_trans_huge_lock(pmd, vma); 129 else 130 - return false; 131 } 132 static inline int hpage_nr_pages(struct page *page) 133 {
··· 127 if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd)) 128 return __pmd_trans_huge_lock(pmd, vma); 129 else 130 + return NULL; 131 } 132 static inline int hpage_nr_pages(struct page *page) 133 {
+4
include/linux/netfilter/ipset/ip_set.h
··· 234 spinlock_t lock; 235 /* References to the set */ 236 u32 ref; 237 /* The core set type */ 238 struct ip_set_type *type; 239 /* The type variant doing the real job */
··· 234 spinlock_t lock; 235 /* References to the set */ 236 u32 ref; 237 + /* References to the set for netlink events like dump, 238 + * ref can be swapped out by ip_set_swap 239 + */ 240 + u32 ref_netlink; 241 /* The core set type */ 242 struct ip_set_type *type; 243 /* The type variant doing the real job */
+16 -6
include/linux/pmem.h
··· 42 BUG(); 43 } 44 45 static inline size_t arch_copy_from_iter_pmem(void __pmem *addr, size_t bytes, 46 struct iov_iter *i) 47 { ··· 73 #endif 74 75 /* 76 - * Architectures that define ARCH_HAS_PMEM_API must provide 77 - * implementations for arch_memcpy_to_pmem(), arch_wmb_pmem(), 78 - * arch_copy_from_iter_pmem(), arch_clear_pmem(), arch_wb_cache_pmem() 79 - * and arch_has_wmb_pmem(). 80 */ 81 - static inline void memcpy_from_pmem(void *dst, void __pmem const *src, size_t size) 82 { 83 - memcpy(dst, (void __force const *) src, size); 84 } 85 86 static inline bool arch_has_pmem_api(void)
··· 42 BUG(); 43 } 44 45 + static inline int arch_memcpy_from_pmem(void *dst, const void __pmem *src, 46 + size_t n) 47 + { 48 + BUG(); 49 + return -EFAULT; 50 + } 51 + 52 static inline size_t arch_copy_from_iter_pmem(void __pmem *addr, size_t bytes, 53 struct iov_iter *i) 54 { ··· 66 #endif 67 68 /* 69 + * memcpy_from_pmem - read from persistent memory with error handling 70 + * @dst: destination buffer 71 + * @src: source buffer 72 + * @size: transfer length 73 + * 74 + * Returns 0 on success negative error code on failure. 75 */ 76 + static inline int memcpy_from_pmem(void *dst, void __pmem const *src, 77 + size_t size) 78 { 79 + return arch_memcpy_from_pmem(dst, src, size); 80 } 81 82 static inline bool arch_has_pmem_api(void)
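With arch_memcpy_from_pmem() in the picture, memcpy_from_pmem() now returns a status instead of wrapping a plain memcpy(), and callers such as the drivers/nvdimm/pmem.c hunk earlier in this series propagate that return code rather than assuming the read succeeded. A userspace sketch of the calling convention, with an invented copy_from_media() standing in for the arch helper:

#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Invented stand-in for a copy primitive that can fail (say, on a media
 * error): 0 on success, negative errno on failure. */
static int copy_from_media(void *dst, const void *src, size_t n)
{
        if (!src)
                return -EFAULT;         /* pretend this range is poisoned */
        memcpy(dst, src, n);
        return 0;
}

/* Caller propagates the error instead of handing back stale data. */
static int read_block(void *buf, const void *media, size_t n)
{
        return copy_from_media(buf, media, n);
}

int main(void)
{
        char buf[8];
        static const char media[8] = "pmemdat";

        printf("good read: %d\n", read_block(buf, media, sizeof(buf)));
        printf("bad read:  %d\n", read_block(buf, NULL, sizeof(buf)));
        return 0;
}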
+2 -2
include/linux/sched.h
··· 720 struct task_cputime cputime_expires; 721 722 #ifdef CONFIG_NO_HZ_FULL 723 - unsigned long tick_dep_mask; 724 #endif 725 726 struct list_head cpu_timers[3]; ··· 1549 #endif 1550 1551 #ifdef CONFIG_NO_HZ_FULL 1552 - unsigned long tick_dep_mask; 1553 #endif 1554 unsigned long nvcsw, nivcsw; /* context switch counts */ 1555 u64 start_time; /* monotonic time in nsec */
··· 720 struct task_cputime cputime_expires; 721 722 #ifdef CONFIG_NO_HZ_FULL 723 + atomic_t tick_dep_mask; 724 #endif 725 726 struct list_head cpu_timers[3]; ··· 1549 #endif 1550 1551 #ifdef CONFIG_NO_HZ_FULL 1552 + atomic_t tick_dep_mask; 1553 #endif 1554 unsigned long nvcsw, nivcsw; /* context switch counts */ 1555 u64 start_time; /* monotonic time in nsec */
-1
include/linux/stmmac.h
··· 108 }; 109 110 struct plat_stmmacenet_data { 111 - char *phy_bus_name; 112 int bus_id; 113 int phy_addr; 114 int interface;
··· 108 }; 109 110 struct plat_stmmacenet_data { 111 int bus_id; 112 int phy_addr; 113 int interface;
+1 -1
include/target/target_core_fabric.h
··· 76 struct se_wwn *(*fabric_make_wwn)(struct target_fabric_configfs *, 77 struct config_group *, const char *); 78 void (*fabric_drop_wwn)(struct se_wwn *); 79 struct se_portal_group *(*fabric_make_tpg)(struct se_wwn *, 80 struct config_group *, const char *); 81 void (*fabric_drop_tpg)(struct se_portal_group *); ··· 88 struct config_group *, const char *); 89 void (*fabric_drop_np)(struct se_tpg_np *); 90 int (*fabric_init_nodeacl)(struct se_node_acl *, const char *); 91 - void (*fabric_cleanup_nodeacl)(struct se_node_acl *); 92 93 struct configfs_attribute **tfc_discovery_attrs; 94 struct configfs_attribute **tfc_wwn_attrs;
··· 76 struct se_wwn *(*fabric_make_wwn)(struct target_fabric_configfs *, 77 struct config_group *, const char *); 78 void (*fabric_drop_wwn)(struct se_wwn *); 79 + void (*add_wwn_groups)(struct se_wwn *); 80 struct se_portal_group *(*fabric_make_tpg)(struct se_wwn *, 81 struct config_group *, const char *); 82 void (*fabric_drop_tpg)(struct se_portal_group *); ··· 87 struct config_group *, const char *); 88 void (*fabric_drop_np)(struct se_tpg_np *); 89 int (*fabric_init_nodeacl)(struct se_node_acl *, const char *); 90 91 struct configfs_attribute **tfc_discovery_attrs; 92 struct configfs_attribute **tfc_wwn_attrs;
+1 -1
include/trace/events/page_isolation.h
··· 29 30 TP_printk("start_pfn=0x%lx end_pfn=0x%lx fin_pfn=0x%lx ret=%s", 31 __entry->start_pfn, __entry->end_pfn, __entry->fin_pfn, 32 - __entry->end_pfn == __entry->fin_pfn ? "success" : "fail") 33 ); 34 35 #endif /* _TRACE_PAGE_ISOLATION_H */
··· 29 30 TP_printk("start_pfn=0x%lx end_pfn=0x%lx fin_pfn=0x%lx ret=%s", 31 __entry->start_pfn, __entry->end_pfn, __entry->fin_pfn, 32 + __entry->end_pfn <= __entry->fin_pfn ? "success" : "fail") 33 ); 34 35 #endif /* _TRACE_PAGE_ISOLATION_H */
+1
include/uapi/linux/bpf.h
··· 375 }; 376 __u8 tunnel_tos; 377 __u8 tunnel_ttl; 378 __u32 tunnel_label; 379 }; 380
··· 375 }; 376 __u8 tunnel_tos; 377 __u8 tunnel_ttl; 378 + __u16 tunnel_ext; 379 __u32 tunnel_label; 380 }; 381
+4
include/uapi/linux/stddef.h
··· 1 #include <linux/compiler.h>
··· 1 #include <linux/compiler.h> 2 + 3 + #ifndef __always_inline 4 + #define __always_inline inline 5 + #endif
+2 -1
init/Kconfig
··· 272 See the man page for more details. 273 274 config FHANDLE 275 - bool "open by fhandle syscalls" 276 select EXPORTFS 277 help 278 If you say Y here, a user level program will be able to map 279 file names to handle and then later use the handle for
··· 272 See the man page for more details. 273 274 config FHANDLE 275 + bool "open by fhandle syscalls" if EXPERT 276 select EXPORTFS 277 + default y 278 help 279 If you say Y here, a user level program will be able to map 280 file names to handle and then later use the handle for
+4 -2
kernel/bpf/syscall.c
··· 137 "map_type:\t%u\n" 138 "key_size:\t%u\n" 139 "value_size:\t%u\n" 140 - "max_entries:\t%u\n", 141 map->map_type, 142 map->key_size, 143 map->value_size, 144 - map->max_entries); 145 } 146 #endif 147
··· 137 "map_type:\t%u\n" 138 "key_size:\t%u\n" 139 "value_size:\t%u\n" 140 + "max_entries:\t%u\n" 141 + "map_flags:\t%#x\n", 142 map->map_type, 143 map->key_size, 144 map->value_size, 145 + map->max_entries, 146 + map->map_flags); 147 } 148 #endif 149
+13 -2
kernel/events/core.c
··· 2417 cpuctx->task_ctx = NULL; 2418 } 2419 2420 - is_active ^= ctx->is_active; /* changed bits */ 2421 - 2422 if (is_active & EVENT_TIME) { 2423 /* update (and stop) ctx time */ 2424 update_context_time(ctx); 2425 update_cgrp_time_from_cpuctx(cpuctx); 2426 } 2427 2428 if (!ctx->nr_active || !(is_active & EVENT_ALL)) 2429 return; ··· 8542 f_flags); 8543 if (IS_ERR(event_file)) { 8544 err = PTR_ERR(event_file); 8545 goto err_context; 8546 } 8547
··· 2417 cpuctx->task_ctx = NULL; 2418 } 2419 2420 + /* 2421 + * Always update time if it was set; not only when it changes. 2422 + * Otherwise we can 'forget' to update time for any but the last 2423 + * context we sched out. For example: 2424 + * 2425 + * ctx_sched_out(.event_type = EVENT_FLEXIBLE) 2426 + * ctx_sched_out(.event_type = EVENT_PINNED) 2427 + * 2428 + * would only update time for the pinned events. 2429 + */ 2430 if (is_active & EVENT_TIME) { 2431 /* update (and stop) ctx time */ 2432 update_context_time(ctx); 2433 update_cgrp_time_from_cpuctx(cpuctx); 2434 } 2435 + 2436 + is_active ^= ctx->is_active; /* changed bits */ 2437 2438 if (!ctx->nr_active || !(is_active & EVENT_ALL)) 2439 return; ··· 8532 f_flags); 8533 if (IS_ERR(event_file)) { 8534 err = PTR_ERR(event_file); 8535 + event_file = NULL; 8536 goto err_context; 8537 } 8538
+77 -2
kernel/locking/lockdep.c
··· 2000 } 2001 2002 /* 2003 * Checks whether the chain and the current held locks are consistent 2004 * in depth and also in content. If they are not it most likely means 2005 * that there was a collision during the calculation of the chain_key. ··· 2085 2086 i = get_first_held_lock(curr, hlock); 2087 2088 - if (DEBUG_LOCKS_WARN_ON(chain->depth != curr->lockdep_depth - (i - 1))) 2089 return 0; 2090 2091 for (j = 0; j < chain->depth - 1; j++, i++) { 2092 id = curr->held_locks[i].class_idx - 1; 2093 2094 - if (DEBUG_LOCKS_WARN_ON(chain_hlocks[chain->base + j] != id)) 2095 return 0; 2096 } 2097 #endif 2098 return 1;
··· 2000 } 2001 2002 /* 2003 + * Returns the next chain_key iteration 2004 + */ 2005 + static u64 print_chain_key_iteration(int class_idx, u64 chain_key) 2006 + { 2007 + u64 new_chain_key = iterate_chain_key(chain_key, class_idx); 2008 + 2009 + printk(" class_idx:%d -> chain_key:%016Lx", 2010 + class_idx, 2011 + (unsigned long long)new_chain_key); 2012 + return new_chain_key; 2013 + } 2014 + 2015 + static void 2016 + print_chain_keys_held_locks(struct task_struct *curr, struct held_lock *hlock_next) 2017 + { 2018 + struct held_lock *hlock; 2019 + u64 chain_key = 0; 2020 + int depth = curr->lockdep_depth; 2021 + int i; 2022 + 2023 + printk("depth: %u\n", depth + 1); 2024 + for (i = get_first_held_lock(curr, hlock_next); i < depth; i++) { 2025 + hlock = curr->held_locks + i; 2026 + chain_key = print_chain_key_iteration(hlock->class_idx, chain_key); 2027 + 2028 + print_lock(hlock); 2029 + } 2030 + 2031 + print_chain_key_iteration(hlock_next->class_idx, chain_key); 2032 + print_lock(hlock_next); 2033 + } 2034 + 2035 + static void print_chain_keys_chain(struct lock_chain *chain) 2036 + { 2037 + int i; 2038 + u64 chain_key = 0; 2039 + int class_id; 2040 + 2041 + printk("depth: %u\n", chain->depth); 2042 + for (i = 0; i < chain->depth; i++) { 2043 + class_id = chain_hlocks[chain->base + i]; 2044 + chain_key = print_chain_key_iteration(class_id + 1, chain_key); 2045 + 2046 + print_lock_name(lock_classes + class_id); 2047 + printk("\n"); 2048 + } 2049 + } 2050 + 2051 + static void print_collision(struct task_struct *curr, 2052 + struct held_lock *hlock_next, 2053 + struct lock_chain *chain) 2054 + { 2055 + printk("\n"); 2056 + printk("======================\n"); 2057 + printk("[chain_key collision ]\n"); 2058 + print_kernel_ident(); 2059 + printk("----------------------\n"); 2060 + printk("%s/%d: ", current->comm, task_pid_nr(current)); 2061 + printk("Hash chain already cached but the contents don't match!\n"); 2062 + 2063 + printk("Held locks:"); 2064 + print_chain_keys_held_locks(curr, hlock_next); 2065 + 2066 + printk("Locks in cached chain:"); 2067 + print_chain_keys_chain(chain); 2068 + 2069 + printk("\nstack backtrace:\n"); 2070 + dump_stack(); 2071 + } 2072 + 2073 + /* 2074 * Checks whether the chain and the current held locks are consistent 2075 * in depth and also in content. If they are not it most likely means 2076 * that there was a collision during the calculation of the chain_key. ··· 2014 2015 i = get_first_held_lock(curr, hlock); 2016 2017 + if (DEBUG_LOCKS_WARN_ON(chain->depth != curr->lockdep_depth - (i - 1))) { 2018 + print_collision(curr, hlock, chain); 2019 return 0; 2020 + } 2021 2022 for (j = 0; j < chain->depth - 1; j++, i++) { 2023 id = curr->held_locks[i].class_idx - 1; 2024 2025 + if (DEBUG_LOCKS_WARN_ON(chain_hlocks[chain->base + j] != id)) { 2026 + print_collision(curr, hlock, chain); 2027 return 0; 2028 + } 2029 } 2030 #endif 2031 return 1;
+18
kernel/sched/core.c
··· 321 } 322 #endif /* CONFIG_SCHED_HRTICK */ 323 324 #if defined(CONFIG_SMP) && defined(TIF_POLLING_NRFLAG) 325 /* 326 * Atomically set TIF_NEED_RESCHED and test for TIF_POLLING_NRFLAG,
··· 321 } 322 #endif /* CONFIG_SCHED_HRTICK */ 323 324 + /* 325 + * cmpxchg based fetch_or, macro so it works for different integer types 326 + */ 327 + #define fetch_or(ptr, mask) \ 328 + ({ \ 329 + typeof(ptr) _ptr = (ptr); \ 330 + typeof(mask) _mask = (mask); \ 331 + typeof(*_ptr) _old, _val = *_ptr; \ 332 + \ 333 + for (;;) { \ 334 + _old = cmpxchg(_ptr, _val, _val | _mask); \ 335 + if (_old == _val) \ 336 + break; \ 337 + _val = _old; \ 338 + } \ 339 + _old; \ 340 + }) 341 + 342 #if defined(CONFIG_SMP) && defined(TIF_POLLING_NRFLAG) 343 /* 344 * Atomically set TIF_NEED_RESCHED and test for TIF_POLLING_NRFLAG,
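
fetch_or() above is a cmpxchg loop: OR a mask into a word atomically and hand back the value that was there before, so the caller knows whether it was the one that set the bit. A minimal user-space analogue with C11 atomics (function and variable names are ours, not the kernel's):

    #include <stdatomic.h>
    #include <stdio.h>

    /* Retry a compare-and-swap until it installs old | mask, then return
     * the old value: the same shape as the macro's cmpxchg loop. */
    static unsigned int fetch_or_u32(_Atomic unsigned int *p, unsigned int mask)
    {
            unsigned int old = atomic_load(p);

            while (!atomic_compare_exchange_weak(p, &old, old | mask))
                    ;       /* 'old' is refreshed with the current value */
            return old;
    }

    int main(void)
    {
            _Atomic unsigned int flags = 0x1;
            unsigned int prev = fetch_or_u32(&flags, 0x4);

            printf("prev=%#x now=%#x\n", prev, (unsigned int)atomic_load(&flags));
            return 0;
    }
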
+30 -31
kernel/time/tick-sched.c
··· 157 cpumask_var_t tick_nohz_full_mask; 158 cpumask_var_t housekeeping_mask; 159 bool tick_nohz_full_running; 160 - static unsigned long tick_dep_mask; 161 162 - static void trace_tick_dependency(unsigned long dep) 163 { 164 - if (dep & TICK_DEP_MASK_POSIX_TIMER) { 165 trace_tick_stop(0, TICK_DEP_MASK_POSIX_TIMER); 166 - return; 167 } 168 169 - if (dep & TICK_DEP_MASK_PERF_EVENTS) { 170 trace_tick_stop(0, TICK_DEP_MASK_PERF_EVENTS); 171 - return; 172 } 173 174 - if (dep & TICK_DEP_MASK_SCHED) { 175 trace_tick_stop(0, TICK_DEP_MASK_SCHED); 176 - return; 177 } 178 179 - if (dep & TICK_DEP_MASK_CLOCK_UNSTABLE) 180 trace_tick_stop(0, TICK_DEP_MASK_CLOCK_UNSTABLE); 181 } 182 183 static bool can_stop_full_tick(struct tick_sched *ts) 184 { 185 WARN_ON_ONCE(!irqs_disabled()); 186 187 - if (tick_dep_mask) { 188 - trace_tick_dependency(tick_dep_mask); 189 return false; 190 - } 191 192 - if (ts->tick_dep_mask) { 193 - trace_tick_dependency(ts->tick_dep_mask); 194 return false; 195 - } 196 197 - if (current->tick_dep_mask) { 198 - trace_tick_dependency(current->tick_dep_mask); 199 return false; 200 - } 201 202 - if (current->signal->tick_dep_mask) { 203 - trace_tick_dependency(current->signal->tick_dep_mask); 204 return false; 205 - } 206 207 return true; 208 } ··· 257 preempt_enable(); 258 } 259 260 - static void tick_nohz_dep_set_all(unsigned long *dep, 261 enum tick_dep_bits bit) 262 { 263 - unsigned long prev; 264 265 - prev = fetch_or(dep, BIT_MASK(bit)); 266 if (!prev) 267 tick_nohz_full_kick_all(); 268 } ··· 278 279 void tick_nohz_dep_clear(enum tick_dep_bits bit) 280 { 281 - clear_bit(bit, &tick_dep_mask); 282 } 283 284 /* ··· 287 */ 288 void tick_nohz_dep_set_cpu(int cpu, enum tick_dep_bits bit) 289 { 290 - unsigned long prev; 291 struct tick_sched *ts; 292 293 ts = per_cpu_ptr(&tick_cpu_sched, cpu); 294 295 - prev = fetch_or(&ts->tick_dep_mask, BIT_MASK(bit)); 296 if (!prev) { 297 preempt_disable(); 298 /* Perf needs local kick that is NMI safe */ ··· 311 { 312 struct tick_sched *ts = per_cpu_ptr(&tick_cpu_sched, cpu); 313 314 - clear_bit(bit, &ts->tick_dep_mask); 315 } 316 317 /* ··· 329 330 void tick_nohz_dep_clear_task(struct task_struct *tsk, enum tick_dep_bits bit) 331 { 332 - clear_bit(bit, &tsk->tick_dep_mask); 333 } 334 335 /* ··· 343 344 void tick_nohz_dep_clear_signal(struct signal_struct *sig, enum tick_dep_bits bit) 345 { 346 - clear_bit(bit, &sig->tick_dep_mask); 347 } 348 349 /* ··· 364 ts = this_cpu_ptr(&tick_cpu_sched); 365 366 if (ts->tick_stopped) { 367 - if (current->tick_dep_mask || current->signal->tick_dep_mask) 368 tick_nohz_full_kick(); 369 } 370 out:
··· 157 cpumask_var_t tick_nohz_full_mask; 158 cpumask_var_t housekeeping_mask; 159 bool tick_nohz_full_running; 160 + static atomic_t tick_dep_mask; 161 162 + static bool check_tick_dependency(atomic_t *dep) 163 { 164 + int val = atomic_read(dep); 165 + 166 + if (val & TICK_DEP_MASK_POSIX_TIMER) { 167 trace_tick_stop(0, TICK_DEP_MASK_POSIX_TIMER); 168 + return true; 169 } 170 171 + if (val & TICK_DEP_MASK_PERF_EVENTS) { 172 trace_tick_stop(0, TICK_DEP_MASK_PERF_EVENTS); 173 + return true; 174 } 175 176 + if (val & TICK_DEP_MASK_SCHED) { 177 trace_tick_stop(0, TICK_DEP_MASK_SCHED); 178 + return true; 179 } 180 181 + if (val & TICK_DEP_MASK_CLOCK_UNSTABLE) { 182 trace_tick_stop(0, TICK_DEP_MASK_CLOCK_UNSTABLE); 183 + return true; 184 + } 185 + 186 + return false; 187 } 188 189 static bool can_stop_full_tick(struct tick_sched *ts) 190 { 191 WARN_ON_ONCE(!irqs_disabled()); 192 193 + if (check_tick_dependency(&tick_dep_mask)) 194 return false; 195 196 + if (check_tick_dependency(&ts->tick_dep_mask)) 197 return false; 198 199 + if (check_tick_dependency(&current->tick_dep_mask)) 200 return false; 201 202 + if (check_tick_dependency(&current->signal->tick_dep_mask)) 203 return false; 204 205 return true; 206 } ··· 259 preempt_enable(); 260 } 261 262 + static void tick_nohz_dep_set_all(atomic_t *dep, 263 enum tick_dep_bits bit) 264 { 265 + int prev; 266 267 + prev = atomic_fetch_or(dep, BIT(bit)); 268 if (!prev) 269 tick_nohz_full_kick_all(); 270 } ··· 280 281 void tick_nohz_dep_clear(enum tick_dep_bits bit) 282 { 283 + atomic_andnot(BIT(bit), &tick_dep_mask); 284 } 285 286 /* ··· 289 */ 290 void tick_nohz_dep_set_cpu(int cpu, enum tick_dep_bits bit) 291 { 292 + int prev; 293 struct tick_sched *ts; 294 295 ts = per_cpu_ptr(&tick_cpu_sched, cpu); 296 297 + prev = atomic_fetch_or(&ts->tick_dep_mask, BIT(bit)); 298 if (!prev) { 299 preempt_disable(); 300 /* Perf needs local kick that is NMI safe */ ··· 313 { 314 struct tick_sched *ts = per_cpu_ptr(&tick_cpu_sched, cpu); 315 316 + atomic_andnot(BIT(bit), &ts->tick_dep_mask); 317 } 318 319 /* ··· 331 332 void tick_nohz_dep_clear_task(struct task_struct *tsk, enum tick_dep_bits bit) 333 { 334 + atomic_andnot(BIT(bit), &tsk->tick_dep_mask); 335 } 336 337 /* ··· 345 346 void tick_nohz_dep_clear_signal(struct signal_struct *sig, enum tick_dep_bits bit) 347 { 348 + atomic_andnot(BIT(bit), &sig->tick_dep_mask); 349 } 350 351 /* ··· 366 ts = this_cpu_ptr(&tick_cpu_sched); 367 368 if (ts->tick_stopped) { 369 + if (atomic_read(&current->tick_dep_mask) || 370 + atomic_read(&current->signal->tick_dep_mask)) 371 tick_nohz_full_kick(); 372 } 373 out:
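
With the dependency masks turned into atomic_t, setting a bit becomes atomic_fetch_or(), and the CPUs are kicked only when the mask goes from empty to non-empty, while clearing becomes atomic_andnot(). A small user-space analogue of that pattern, using plain C11 atomics in place of the kernel helpers and invented function names:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Returns true when the mask was previously empty, i.e. when the
     * caller would have to kick the nohz_full CPUs. */
    static bool dep_set(_Atomic int *dep, int bit)
    {
            return atomic_fetch_or(dep, 1 << bit) == 0;
    }

    /* atomic_fetch_and() with the complemented mask plays the role of the
     * kernel's atomic_andnot(). */
    static void dep_clear(_Atomic int *dep, int bit)
    {
            atomic_fetch_and(dep, ~(1 << bit));
    }

    int main(void)
    {
            _Atomic int mask = 0;

            printf("kick: %d\n", dep_set(&mask, 2));        /* 1: was empty */
            printf("kick: %d\n", dep_set(&mask, 3));        /* 0: already set */
            dep_clear(&mask, 2);
            return 0;
    }
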
+1 -1
kernel/time/tick-sched.h
··· 60 u64 next_timer; 61 ktime_t idle_expires; 62 int do_timer_last; 63 - unsigned long tick_dep_mask; 64 }; 65 66 extern struct tick_sched *tick_get_tick_sched(int cpu);
··· 60 u64 next_timer; 61 ktime_t idle_expires; 62 int do_timer_last; 63 + atomic_t tick_dep_mask; 64 }; 65 66 extern struct tick_sched *tick_get_tick_sched(int cpu);
+1 -1
mm/kasan/kasan.c
··· 498 struct kasan_alloc_meta *alloc_info = 499 get_alloc_info(cache, object); 500 alloc_info->state = KASAN_STATE_FREE; 501 - set_track(&free_info->track); 502 } 503 #endif 504
··· 498 struct kasan_alloc_meta *alloc_info = 499 get_alloc_info(cache, object); 500 alloc_info->state = KASAN_STATE_FREE; 501 + set_track(&free_info->track, GFP_NOWAIT); 502 } 503 #endif 504
+5 -1
mm/oom_kill.c
··· 547 548 static void wake_oom_reaper(struct task_struct *tsk) 549 { 550 - if (!oom_reaper_th || tsk->oom_reaper_list) 551 return; 552 553 get_task_struct(tsk);
··· 547 548 static void wake_oom_reaper(struct task_struct *tsk) 549 { 550 + if (!oom_reaper_th) 551 + return; 552 + 553 + /* tsk is already queued? */ 554 + if (tsk == oom_reaper_list || tsk->oom_reaper_list) 555 return; 556 557 get_task_struct(tsk);
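
The added test keeps a task from being queued on the OOM reaper's list twice: on that NULL-terminated, singly linked list a queued task either has a non-NULL ->oom_reaper_list link or is the current head, whose link is legitimately NULL when it is the only entry. A compact sketch of the invariant with made-up types:

    #include <stdbool.h>
    #include <stddef.h>

    struct task {
            struct task *next;      /* stands in for ->oom_reaper_list */
    };

    /* 'head' stands in for the global oom_reaper_list pointer. */
    static bool already_queued(const struct task *head, const struct task *tsk)
    {
            return tsk == head || tsk->next != NULL;
    }

    int main(void)
    {
            struct task only = { .next = NULL };

            /* The sole queued task has a NULL link but is still queued. */
            return already_queued(&only, &only) ? 0 : 1;
    }
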
+5 -5
mm/page_isolation.c
··· 215 * all pages in [start_pfn...end_pfn) must be in the same zone. 216 * zone->lock must be held before call this. 217 * 218 - * Returns 1 if all pages in the range are isolated. 219 */ 220 static unsigned long 221 __test_page_isolated_in_pageblock(unsigned long pfn, unsigned long end_pfn, ··· 289 * now as a simple work-around, we use the next node for destination. 290 */ 291 if (PageHuge(page)) { 292 - nodemask_t src = nodemask_of_node(page_to_nid(page)); 293 - nodemask_t dst; 294 - nodes_complement(dst, src); 295 return alloc_huge_page_node(page_hstate(compound_head(page)), 296 - next_node(page_to_nid(page), dst)); 297 } 298 299 if (PageHighMem(page))
··· 215 * all pages in [start_pfn...end_pfn) must be in the same zone. 216 * zone->lock must be held before call this. 217 * 218 + * Returns the last tested pfn. 219 */ 220 static unsigned long 221 __test_page_isolated_in_pageblock(unsigned long pfn, unsigned long end_pfn, ··· 289 * now as a simple work-around, we use the next node for destination. 290 */ 291 if (PageHuge(page)) { 292 + int node = next_online_node(page_to_nid(page)); 293 + if (node == MAX_NUMNODES) 294 + node = first_online_node; 295 return alloc_huge_page_node(page_hstate(compound_head(page)), 296 + node); 297 } 298 299 if (PageHighMem(page))
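
Rather than building a complement nodemask, the new hugepage path simply takes the next online node and wraps to the first online node when it runs off the end (next_online_node() returns MAX_NUMNODES past the last one). A user-space analogue of that wrap-around, with an array standing in for the online-node mask:

    #include <stdio.h>

    #define NR_NODES 8

    /* Return the next set slot after 'node', wrapping to the first set
     * slot; fall back to 'node' itself if nothing else is online. */
    static int next_online_wrap(const int online[NR_NODES], int node)
    {
            int i;

            for (i = node + 1; i < NR_NODES; i++)
                    if (online[i])
                            return i;
            for (i = 0; i <= node; i++)
                    if (online[i])
                            return i;
            return node;
    }

    int main(void)
    {
            int online[NR_NODES] = { 1, 0, 1, 0, 0, 0, 0, 0 };

            printf("next after 2 -> %d\n", next_online_wrap(online, 2)); /* 0 */
            return 0;
    }
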
+7 -21
mm/rmap.c
··· 569 } 570 571 #ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH 572 - static void percpu_flush_tlb_batch_pages(void *data) 573 - { 574 - /* 575 - * All TLB entries are flushed on the assumption that it is 576 - * cheaper to flush all TLBs and let them be refilled than 577 - * flushing individual PFNs. Note that we do not track mm's 578 - * to flush as that might simply be multiple full TLB flushes 579 - * for no gain. 580 - */ 581 - count_vm_tlb_event(NR_TLB_REMOTE_FLUSH_RECEIVED); 582 - flush_tlb_local(); 583 - } 584 - 585 /* 586 * Flush TLB entries for recently unmapped pages from remote CPUs. It is 587 * important if a PTE was dirty when it was unmapped that it's flushed ··· 585 586 cpu = get_cpu(); 587 588 - trace_tlb_flush(TLB_REMOTE_SHOOTDOWN, -1UL); 589 - 590 - if (cpumask_test_cpu(cpu, &tlb_ubc->cpumask)) 591 - percpu_flush_tlb_batch_pages(&tlb_ubc->cpumask); 592 - 593 - if (cpumask_any_but(&tlb_ubc->cpumask, cpu) < nr_cpu_ids) { 594 - smp_call_function_many(&tlb_ubc->cpumask, 595 - percpu_flush_tlb_batch_pages, (void *)tlb_ubc, true); 596 } 597 cpumask_clear(&tlb_ubc->cpumask); 598 tlb_ubc->flush_required = false; 599 tlb_ubc->writable = false;
··· 569 } 570 571 #ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH 572 /* 573 * Flush TLB entries for recently unmapped pages from remote CPUs. It is 574 * important if a PTE was dirty when it was unmapped that it's flushed ··· 598 599 cpu = get_cpu(); 600 601 + if (cpumask_test_cpu(cpu, &tlb_ubc->cpumask)) { 602 + count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL); 603 + local_flush_tlb(); 604 + trace_tlb_flush(TLB_LOCAL_SHOOTDOWN, TLB_FLUSH_ALL); 605 } 606 + 607 + if (cpumask_any_but(&tlb_ubc->cpumask, cpu) < nr_cpu_ids) 608 + flush_tlb_others(&tlb_ubc->cpumask, NULL, 0, TLB_FLUSH_ALL); 609 cpumask_clear(&tlb_ubc->cpumask); 610 tlb_ubc->flush_required = false; 611 tlb_ubc->writable = false;
+1 -1
net/bridge/br_stp.c
··· 582 int err; 583 584 err = switchdev_port_attr_set(br->dev, &attr); 585 - if (err) 586 return err; 587 588 br->ageing_time = t;
··· 582 int err; 583 584 err = switchdev_port_attr_set(br->dev, &attr); 585 + if (err && err != -EOPNOTSUPP) 586 return err; 587 588 br->ageing_time = t;
+4
net/bridge/netfilter/ebtables.c
··· 1521 if (copy_from_user(&tmp, user, sizeof(tmp))) 1522 return -EFAULT; 1523 1524 t = find_table_lock(net, tmp.name, &ret, &ebt_mutex); 1525 if (!t) 1526 return ret; ··· 2333 2334 if (copy_from_user(&tmp, user, sizeof(tmp))) 2335 return -EFAULT; 2336 2337 t = find_table_lock(net, tmp.name, &ret, &ebt_mutex); 2338 if (!t)
··· 1521 if (copy_from_user(&tmp, user, sizeof(tmp))) 1522 return -EFAULT; 1523 1524 + tmp.name[sizeof(tmp.name) - 1] = '\0'; 1525 + 1526 t = find_table_lock(net, tmp.name, &ret, &ebt_mutex); 1527 if (!t) 1528 return ret; ··· 2331 2332 if (copy_from_user(&tmp, user, sizeof(tmp))) 2333 return -EFAULT; 2334 + 2335 + tmp.name[sizeof(tmp.name) - 1] = '\0'; 2336 2337 t = find_table_lock(net, tmp.name, &ret, &ebt_mutex); 2338 if (!t)
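
Both ebtables ioctl paths copy a fixed-size table name in from user space and later treat it as a C string, so the fix forces a terminating NUL before the table lookup. The same defensive pattern in isolation (the struct and its size here are only placeholders):

    #include <stdio.h>
    #include <string.h>

    struct table_request {
            char name[32];          /* may arrive without a trailing NUL */
    };

    static void sanitize(struct table_request *req)
    {
            /* Never trust a user-supplied buffer to be NUL-terminated. */
            req->name[sizeof(req->name) - 1] = '\0';
    }

    int main(void)
    {
            struct table_request req;

            memset(req.name, 'A', sizeof(req.name));        /* worst case */
            sanitize(&req);
            printf("%zu\n", strlen(req.name));              /* 31, bounded */
            return 0;
    }
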
+10 -10
net/bridge/netfilter/nft_reject_bridge.c
··· 40 /* We cannot use oldskb->dev, it can be either bridge device (NF_BRIDGE INPUT) 41 * or the bridge port (NF_BRIDGE PREROUTING). 42 */ 43 - static void nft_reject_br_send_v4_tcp_reset(struct sk_buff *oldskb, 44 const struct net_device *dev, 45 int hook) 46 { ··· 49 struct iphdr *niph; 50 const struct tcphdr *oth; 51 struct tcphdr _oth; 52 - struct net *net = sock_net(oldskb->sk); 53 54 if (!nft_bridge_iphdr_validate(oldskb)) 55 return; ··· 75 br_deliver(br_port_get_rcu(dev), nskb); 76 } 77 78 - static void nft_reject_br_send_v4_unreach(struct sk_buff *oldskb, 79 const struct net_device *dev, 80 int hook, u8 code) 81 { ··· 87 void *payload; 88 __wsum csum; 89 u8 proto; 90 - struct net *net = sock_net(oldskb->sk); 91 92 if (oldskb->csum_bad || !nft_bridge_iphdr_validate(oldskb)) 93 return; ··· 273 case htons(ETH_P_IP): 274 switch (priv->type) { 275 case NFT_REJECT_ICMP_UNREACH: 276 - nft_reject_br_send_v4_unreach(pkt->skb, pkt->in, 277 - pkt->hook, 278 priv->icmp_code); 279 break; 280 case NFT_REJECT_TCP_RST: 281 - nft_reject_br_send_v4_tcp_reset(pkt->skb, pkt->in, 282 - pkt->hook); 283 break; 284 case NFT_REJECT_ICMPX_UNREACH: 285 - nft_reject_br_send_v4_unreach(pkt->skb, pkt->in, 286 - pkt->hook, 287 nft_reject_icmp_code(priv->icmp_code)); 288 break; 289 }
··· 40 /* We cannot use oldskb->dev, it can be either bridge device (NF_BRIDGE INPUT) 41 * or the bridge port (NF_BRIDGE PREROUTING). 42 */ 43 + static void nft_reject_br_send_v4_tcp_reset(struct net *net, 44 + struct sk_buff *oldskb, 45 const struct net_device *dev, 46 int hook) 47 { ··· 48 struct iphdr *niph; 49 const struct tcphdr *oth; 50 struct tcphdr _oth; 51 52 if (!nft_bridge_iphdr_validate(oldskb)) 53 return; ··· 75 br_deliver(br_port_get_rcu(dev), nskb); 76 } 77 78 + static void nft_reject_br_send_v4_unreach(struct net *net, 79 + struct sk_buff *oldskb, 80 const struct net_device *dev, 81 int hook, u8 code) 82 { ··· 86 void *payload; 87 __wsum csum; 88 u8 proto; 89 90 if (oldskb->csum_bad || !nft_bridge_iphdr_validate(oldskb)) 91 return; ··· 273 case htons(ETH_P_IP): 274 switch (priv->type) { 275 case NFT_REJECT_ICMP_UNREACH: 276 + nft_reject_br_send_v4_unreach(pkt->net, pkt->skb, 277 + pkt->in, pkt->hook, 278 priv->icmp_code); 279 break; 280 case NFT_REJECT_TCP_RST: 281 + nft_reject_br_send_v4_tcp_reset(pkt->net, pkt->skb, 282 + pkt->in, pkt->hook); 283 break; 284 case NFT_REJECT_ICMPX_UNREACH: 285 + nft_reject_br_send_v4_unreach(pkt->net, pkt->skb, 286 + pkt->in, pkt->hook, 287 nft_reject_icmp_code(priv->icmp_code)); 288 break; 289 }
+25 -13
net/core/filter.c
··· 1149 } 1150 EXPORT_SYMBOL_GPL(bpf_prog_destroy); 1151 1152 - static int __sk_attach_prog(struct bpf_prog *prog, struct sock *sk) 1153 { 1154 struct sk_filter *fp, *old_fp; 1155 ··· 1166 return -ENOMEM; 1167 } 1168 1169 - old_fp = rcu_dereference_protected(sk->sk_filter, 1170 - sock_owned_by_user(sk)); 1171 rcu_assign_pointer(sk->sk_filter, fp); 1172 - 1173 if (old_fp) 1174 sk_filter_uncharge(sk, old_fp); 1175 ··· 1246 * occurs or there is insufficient memory for the filter a negative 1247 * errno code is returned. On success the return is zero. 1248 */ 1249 - int sk_attach_filter(struct sock_fprog *fprog, struct sock *sk) 1250 { 1251 struct bpf_prog *prog = __get_filter(fprog, sk); 1252 int err; ··· 1255 if (IS_ERR(prog)) 1256 return PTR_ERR(prog); 1257 1258 - err = __sk_attach_prog(prog, sk); 1259 if (err < 0) { 1260 __bpf_prog_release(prog); 1261 return err; ··· 1263 1264 return 0; 1265 } 1266 - EXPORT_SYMBOL_GPL(sk_attach_filter); 1267 1268 int sk_reuseport_attach_filter(struct sock_fprog *fprog, struct sock *sk) 1269 { ··· 1314 if (IS_ERR(prog)) 1315 return PTR_ERR(prog); 1316 1317 - err = __sk_attach_prog(prog, sk); 1318 if (err < 0) { 1319 bpf_prog_put(prog); 1320 return err; ··· 1769 if (unlikely(size != sizeof(struct bpf_tunnel_key))) { 1770 switch (size) { 1771 case offsetof(struct bpf_tunnel_key, tunnel_label): 1772 goto set_compat; 1773 case offsetof(struct bpf_tunnel_key, remote_ipv6[1]): 1774 /* Fixup deprecated structure layouts here, so we have ··· 1855 if (unlikely(size != sizeof(struct bpf_tunnel_key))) { 1856 switch (size) { 1857 case offsetof(struct bpf_tunnel_key, tunnel_label): 1858 case offsetof(struct bpf_tunnel_key, remote_ipv6[1]): 1859 /* Fixup deprecated structure layouts here, so we have 1860 * a common path later on. ··· 1868 return -EINVAL; 1869 } 1870 } 1871 - if (unlikely(!(flags & BPF_F_TUNINFO_IPV6) && from->tunnel_label)) 1872 return -EINVAL; 1873 1874 skb_dst_drop(skb); ··· 2255 } 2256 late_initcall(register_sk_filter_ops); 2257 2258 - int sk_detach_filter(struct sock *sk) 2259 { 2260 int ret = -ENOENT; 2261 struct sk_filter *filter; ··· 2263 if (sock_flag(sk, SOCK_FILTER_LOCKED)) 2264 return -EPERM; 2265 2266 - filter = rcu_dereference_protected(sk->sk_filter, 2267 - sock_owned_by_user(sk)); 2268 if (filter) { 2269 RCU_INIT_POINTER(sk->sk_filter, NULL); 2270 sk_filter_uncharge(sk, filter); ··· 2272 2273 return ret; 2274 } 2275 - EXPORT_SYMBOL_GPL(sk_detach_filter); 2276 2277 int sk_get_filter(struct sock *sk, struct sock_filter __user *ubuf, 2278 unsigned int len)
··· 1149 } 1150 EXPORT_SYMBOL_GPL(bpf_prog_destroy); 1151 1152 + static int __sk_attach_prog(struct bpf_prog *prog, struct sock *sk, 1153 + bool locked) 1154 { 1155 struct sk_filter *fp, *old_fp; 1156 ··· 1165 return -ENOMEM; 1166 } 1167 1168 + old_fp = rcu_dereference_protected(sk->sk_filter, locked); 1169 rcu_assign_pointer(sk->sk_filter, fp); 1170 if (old_fp) 1171 sk_filter_uncharge(sk, old_fp); 1172 ··· 1247 * occurs or there is insufficient memory for the filter a negative 1248 * errno code is returned. On success the return is zero. 1249 */ 1250 + int __sk_attach_filter(struct sock_fprog *fprog, struct sock *sk, 1251 + bool locked) 1252 { 1253 struct bpf_prog *prog = __get_filter(fprog, sk); 1254 int err; ··· 1255 if (IS_ERR(prog)) 1256 return PTR_ERR(prog); 1257 1258 + err = __sk_attach_prog(prog, sk, locked); 1259 if (err < 0) { 1260 __bpf_prog_release(prog); 1261 return err; ··· 1263 1264 return 0; 1265 } 1266 + EXPORT_SYMBOL_GPL(__sk_attach_filter); 1267 + 1268 + int sk_attach_filter(struct sock_fprog *fprog, struct sock *sk) 1269 + { 1270 + return __sk_attach_filter(fprog, sk, sock_owned_by_user(sk)); 1271 + } 1272 1273 int sk_reuseport_attach_filter(struct sock_fprog *fprog, struct sock *sk) 1274 { ··· 1309 if (IS_ERR(prog)) 1310 return PTR_ERR(prog); 1311 1312 + err = __sk_attach_prog(prog, sk, sock_owned_by_user(sk)); 1313 if (err < 0) { 1314 bpf_prog_put(prog); 1315 return err; ··· 1764 if (unlikely(size != sizeof(struct bpf_tunnel_key))) { 1765 switch (size) { 1766 case offsetof(struct bpf_tunnel_key, tunnel_label): 1767 + case offsetof(struct bpf_tunnel_key, tunnel_ext): 1768 goto set_compat; 1769 case offsetof(struct bpf_tunnel_key, remote_ipv6[1]): 1770 /* Fixup deprecated structure layouts here, so we have ··· 1849 if (unlikely(size != sizeof(struct bpf_tunnel_key))) { 1850 switch (size) { 1851 case offsetof(struct bpf_tunnel_key, tunnel_label): 1852 + case offsetof(struct bpf_tunnel_key, tunnel_ext): 1853 case offsetof(struct bpf_tunnel_key, remote_ipv6[1]): 1854 /* Fixup deprecated structure layouts here, so we have 1855 * a common path later on. ··· 1861 return -EINVAL; 1862 } 1863 } 1864 + if (unlikely((!(flags & BPF_F_TUNINFO_IPV6) && from->tunnel_label) || 1865 + from->tunnel_ext)) 1866 return -EINVAL; 1867 1868 skb_dst_drop(skb); ··· 2247 } 2248 late_initcall(register_sk_filter_ops); 2249 2250 + int __sk_detach_filter(struct sock *sk, bool locked) 2251 { 2252 int ret = -ENOENT; 2253 struct sk_filter *filter; ··· 2255 if (sock_flag(sk, SOCK_FILTER_LOCKED)) 2256 return -EPERM; 2257 2258 + filter = rcu_dereference_protected(sk->sk_filter, locked); 2259 if (filter) { 2260 RCU_INIT_POINTER(sk->sk_filter, NULL); 2261 sk_filter_uncharge(sk, filter); ··· 2265 2266 return ret; 2267 } 2268 + EXPORT_SYMBOL_GPL(__sk_detach_filter); 2269 + 2270 + int sk_detach_filter(struct sock *sk) 2271 + { 2272 + return __sk_detach_filter(sk, sock_owned_by_user(sk)); 2273 + } 2274 2275 int sk_get_filter(struct sock *sk, struct sock_filter __user *ubuf, 2276 unsigned int len)
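
The filter attach/detach helpers gain a 'locked' argument that is passed straight to rcu_dereference_protected(), so a caller serialized by something other than the socket lock can state its own lockdep condition; the old sk_attach_filter()/sk_detach_filter() become wrappers that keep passing sock_owned_by_user(sk). The sketch below shows the kind of caller this enables; the function is hypothetical and not part of this patch set, while ASSERT_RTNL() and lockdep_rtnl_is_held() are the usual RTNL helpers:

    /* Hypothetical driver path that owns sk_filter via RTNL rather than
     * lock_sock(), and can now say so to RCU/lockdep directly. */
    static int attach_filter_under_rtnl(struct sock_fprog *fprog, struct sock *sk)
    {
            ASSERT_RTNL();
            return __sk_attach_filter(fprog, sk, lockdep_rtnl_is_held());
    }
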
+2 -1
net/core/netpoll.c
··· 603 const struct net_device_ops *ops; 604 int err; 605 606 - np->dev = ndev; 607 strlcpy(np->dev_name, ndev->name, IFNAMSIZ); 608 INIT_WORK(&np->cleanup_work, netpoll_async_cleanup); 609 ··· 669 goto unlock; 670 } 671 dev_hold(ndev); 672 673 if (netdev_master_upper_dev_get(ndev)) { 674 np_err(np, "%s is a slave device, aborting\n", np->dev_name); ··· 770 return 0; 771 772 put: 773 dev_put(ndev); 774 unlock: 775 rtnl_unlock();
··· 603 const struct net_device_ops *ops; 604 int err; 605 606 strlcpy(np->dev_name, ndev->name, IFNAMSIZ); 607 INIT_WORK(&np->cleanup_work, netpoll_async_cleanup); 608 ··· 670 goto unlock; 671 } 672 dev_hold(ndev); 673 + np->dev = ndev; 674 675 if (netdev_master_upper_dev_get(ndev)) { 676 np_err(np, "%s is a slave device, aborting\n", np->dev_name); ··· 770 return 0; 771 772 put: 773 + np->dev = NULL; 774 dev_put(ndev); 775 unlock: 776 rtnl_unlock();
+1
net/core/rtnetlink.c
··· 909 + rtnl_link_get_af_size(dev, ext_filter_mask) /* IFLA_AF_SPEC */ 910 + nla_total_size(MAX_PHYS_ITEM_ID_LEN) /* IFLA_PHYS_PORT_ID */ 911 + nla_total_size(MAX_PHYS_ITEM_ID_LEN) /* IFLA_PHYS_SWITCH_ID */ 912 + nla_total_size(1); /* IFLA_PROTO_DOWN */ 913 914 }
··· 909 + rtnl_link_get_af_size(dev, ext_filter_mask) /* IFLA_AF_SPEC */ 910 + nla_total_size(MAX_PHYS_ITEM_ID_LEN) /* IFLA_PHYS_PORT_ID */ 911 + nla_total_size(MAX_PHYS_ITEM_ID_LEN) /* IFLA_PHYS_SWITCH_ID */ 912 + + nla_total_size(IFNAMSIZ) /* IFLA_PHYS_PORT_NAME */ 913 + nla_total_size(1); /* IFLA_PROTO_DOWN */ 914 915 }
+16
net/ipv4/fou.c
··· 195 u8 proto = NAPI_GRO_CB(skb)->proto; 196 const struct net_offload **offloads; 197 198 rcu_read_lock(); 199 offloads = NAPI_GRO_CB(skb)->is_ipv6 ? inet6_offloads : inet_offloads; 200 ops = rcu_dereference(offloads[proto]); ··· 359 continue; 360 } 361 } 362 363 rcu_read_lock(); 364 offloads = NAPI_GRO_CB(skb)->is_ipv6 ? inet6_offloads : inet_offloads;
··· 195 u8 proto = NAPI_GRO_CB(skb)->proto; 196 const struct net_offload **offloads; 197 198 + /* We can clear the encap_mark for FOU as we are essentially doing 199 + * one of two possible things. We are either adding an L4 tunnel 200 + * header to the outer L3 tunnel header, or we are are simply 201 + * treating the GRE tunnel header as though it is a UDP protocol 202 + * specific header such as VXLAN or GENEVE. 203 + */ 204 + NAPI_GRO_CB(skb)->encap_mark = 0; 205 + 206 rcu_read_lock(); 207 offloads = NAPI_GRO_CB(skb)->is_ipv6 ? inet6_offloads : inet_offloads; 208 ops = rcu_dereference(offloads[proto]); ··· 351 continue; 352 } 353 } 354 + 355 + /* We can clear the encap_mark for GUE as we are essentially doing 356 + * one of two possible things. We are either adding an L4 tunnel 357 + * header to the outer L3 tunnel header, or we are are simply 358 + * treating the GRE tunnel header as though it is a UDP protocol 359 + * specific header such as VXLAN or GENEVE. 360 + */ 361 + NAPI_GRO_CB(skb)->encap_mark = 0; 362 363 rcu_read_lock(); 364 offloads = NAPI_GRO_CB(skb)->is_ipv6 ? inet6_offloads : inet_offloads;
+2 -2
net/ipv4/ip_tunnel_core.c
··· 372 if (nla_put_be64(skb, LWTUNNEL_IP6_ID, tun_info->key.tun_id) || 373 nla_put_in6_addr(skb, LWTUNNEL_IP6_DST, &tun_info->key.u.ipv6.dst) || 374 nla_put_in6_addr(skb, LWTUNNEL_IP6_SRC, &tun_info->key.u.ipv6.src) || 375 - nla_put_u8(skb, LWTUNNEL_IP6_HOPLIMIT, tun_info->key.tos) || 376 - nla_put_u8(skb, LWTUNNEL_IP6_TC, tun_info->key.ttl) || 377 nla_put_be16(skb, LWTUNNEL_IP6_FLAGS, tun_info->key.tun_flags)) 378 return -ENOMEM; 379
··· 372 if (nla_put_be64(skb, LWTUNNEL_IP6_ID, tun_info->key.tun_id) || 373 nla_put_in6_addr(skb, LWTUNNEL_IP6_DST, &tun_info->key.u.ipv6.dst) || 374 nla_put_in6_addr(skb, LWTUNNEL_IP6_SRC, &tun_info->key.u.ipv6.src) || 375 + nla_put_u8(skb, LWTUNNEL_IP6_TC, tun_info->key.tos) || 376 + nla_put_u8(skb, LWTUNNEL_IP6_HOPLIMIT, tun_info->key.ttl) || 377 nla_put_be16(skb, LWTUNNEL_IP6_FLAGS, tun_info->key.tun_flags)) 378 return -ENOMEM; 379
+23 -20
net/ipv4/netfilter/arp_tables.c
··· 359 } 360 361 /* All zeroes == unconditional rule. */ 362 - static inline bool unconditional(const struct arpt_arp *arp) 363 { 364 static const struct arpt_arp uncond; 365 366 - return memcmp(arp, &uncond, sizeof(uncond)) == 0; 367 } 368 369 /* Figures out from what hook each rule can be called: returns 0 if ··· 403 |= ((1 << hook) | (1 << NF_ARP_NUMHOOKS)); 404 405 /* Unconditional return/END. */ 406 - if ((e->target_offset == sizeof(struct arpt_entry) && 407 (strcmp(t->target.u.user.name, 408 XT_STANDARD_TARGET) == 0) && 409 - t->verdict < 0 && unconditional(&e->arp)) || 410 - visited) { 411 unsigned int oldpos, size; 412 413 if ((strcmp(t->target.u.user.name, ··· 474 return 1; 475 } 476 477 - static inline int check_entry(const struct arpt_entry *e, const char *name) 478 { 479 const struct xt_entry_target *t; 480 481 - if (!arp_checkentry(&e->arp)) { 482 - duprintf("arp_tables: arp check failed %p %s.\n", e, name); 483 return -EINVAL; 484 - } 485 486 if (e->target_offset + sizeof(struct xt_entry_target) > e->next_offset) 487 return -EINVAL; ··· 520 struct xt_target *target; 521 int ret; 522 523 - ret = check_entry(e, name); 524 - if (ret) 525 - return ret; 526 - 527 e->counters.pcnt = xt_percpu_counter_alloc(); 528 if (IS_ERR_VALUE(e->counters.pcnt)) 529 return -ENOMEM; ··· 551 const struct xt_entry_target *t; 552 unsigned int verdict; 553 554 - if (!unconditional(&e->arp)) 555 return false; 556 t = arpt_get_target_c(e); 557 if (strcmp(t->u.user.name, XT_STANDARD_TARGET) != 0) ··· 570 unsigned int valid_hooks) 571 { 572 unsigned int h; 573 574 if ((unsigned long)e % __alignof__(struct arpt_entry) != 0 || 575 - (unsigned char *)e + sizeof(struct arpt_entry) >= limit) { 576 duprintf("Bad offset %p\n", e); 577 return -EINVAL; 578 } ··· 586 return -EINVAL; 587 } 588 589 /* Check hooks & underflows */ 590 for (h = 0; h < NF_ARP_NUMHOOKS; h++) { 591 if (!(valid_hooks & (1 << h))) ··· 598 newinfo->hook_entry[h] = hook_entries[h]; 599 if ((unsigned char *)e - base == underflows[h]) { 600 if (!check_underflow(e)) { 601 - pr_err("Underflows must be unconditional and " 602 - "use the STANDARD target with " 603 - "ACCEPT/DROP\n"); 604 return -EINVAL; 605 } 606 newinfo->underflow[h] = underflows[h]; ··· 969 sizeof(struct arpt_get_entries) + get.size); 970 return -EINVAL; 971 } 972 973 t = xt_find_table_lock(net, NFPROTO_ARP, get.name); 974 if (!IS_ERR_OR_NULL(t)) { ··· 1234 1235 duprintf("check_compat_entry_size_and_hooks %p\n", e); 1236 if ((unsigned long)e % __alignof__(struct compat_arpt_entry) != 0 || 1237 - (unsigned char *)e + sizeof(struct compat_arpt_entry) >= limit) { 1238 duprintf("Bad offset %p, limit = %p\n", e, limit); 1239 return -EINVAL; 1240 } ··· 1248 } 1249 1250 /* For purposes of check_entry casting the compat entry is fine */ 1251 - ret = check_entry((struct arpt_entry *)e, name); 1252 if (ret) 1253 return ret; 1254 ··· 1664 *len, sizeof(get) + get.size); 1665 return -EINVAL; 1666 } 1667 1668 xt_compat_lock(NFPROTO_ARP); 1669 t = xt_find_table_lock(net, NFPROTO_ARP, get.name);
··· 359 } 360 361 /* All zeroes == unconditional rule. */ 362 + static inline bool unconditional(const struct arpt_entry *e) 363 { 364 static const struct arpt_arp uncond; 365 366 + return e->target_offset == sizeof(struct arpt_entry) && 367 + memcmp(&e->arp, &uncond, sizeof(uncond)) == 0; 368 } 369 370 /* Figures out from what hook each rule can be called: returns 0 if ··· 402 |= ((1 << hook) | (1 << NF_ARP_NUMHOOKS)); 403 404 /* Unconditional return/END. */ 405 + if ((unconditional(e) && 406 (strcmp(t->target.u.user.name, 407 XT_STANDARD_TARGET) == 0) && 408 + t->verdict < 0) || visited) { 409 unsigned int oldpos, size; 410 411 if ((strcmp(t->target.u.user.name, ··· 474 return 1; 475 } 476 477 + static inline int check_entry(const struct arpt_entry *e) 478 { 479 const struct xt_entry_target *t; 480 481 + if (!arp_checkentry(&e->arp)) 482 return -EINVAL; 483 484 if (e->target_offset + sizeof(struct xt_entry_target) > e->next_offset) 485 return -EINVAL; ··· 522 struct xt_target *target; 523 int ret; 524 525 e->counters.pcnt = xt_percpu_counter_alloc(); 526 if (IS_ERR_VALUE(e->counters.pcnt)) 527 return -ENOMEM; ··· 557 const struct xt_entry_target *t; 558 unsigned int verdict; 559 560 + if (!unconditional(e)) 561 return false; 562 t = arpt_get_target_c(e); 563 if (strcmp(t->u.user.name, XT_STANDARD_TARGET) != 0) ··· 576 unsigned int valid_hooks) 577 { 578 unsigned int h; 579 + int err; 580 581 if ((unsigned long)e % __alignof__(struct arpt_entry) != 0 || 582 + (unsigned char *)e + sizeof(struct arpt_entry) >= limit || 583 + (unsigned char *)e + e->next_offset > limit) { 584 duprintf("Bad offset %p\n", e); 585 return -EINVAL; 586 } ··· 590 return -EINVAL; 591 } 592 593 + err = check_entry(e); 594 + if (err) 595 + return err; 596 + 597 /* Check hooks & underflows */ 598 for (h = 0; h < NF_ARP_NUMHOOKS; h++) { 599 if (!(valid_hooks & (1 << h))) ··· 598 newinfo->hook_entry[h] = hook_entries[h]; 599 if ((unsigned char *)e - base == underflows[h]) { 600 if (!check_underflow(e)) { 601 + pr_debug("Underflows must be unconditional and " 602 + "use the STANDARD target with " 603 + "ACCEPT/DROP\n"); 604 return -EINVAL; 605 } 606 newinfo->underflow[h] = underflows[h]; ··· 969 sizeof(struct arpt_get_entries) + get.size); 970 return -EINVAL; 971 } 972 + get.name[sizeof(get.name) - 1] = '\0'; 973 974 t = xt_find_table_lock(net, NFPROTO_ARP, get.name); 975 if (!IS_ERR_OR_NULL(t)) { ··· 1233 1234 duprintf("check_compat_entry_size_and_hooks %p\n", e); 1235 if ((unsigned long)e % __alignof__(struct compat_arpt_entry) != 0 || 1236 + (unsigned char *)e + sizeof(struct compat_arpt_entry) >= limit || 1237 + (unsigned char *)e + e->next_offset > limit) { 1238 duprintf("Bad offset %p, limit = %p\n", e, limit); 1239 return -EINVAL; 1240 } ··· 1246 } 1247 1248 /* For purposes of check_entry casting the compat entry is fine */ 1249 + ret = check_entry((struct arpt_entry *)e); 1250 if (ret) 1251 return ret; 1252 ··· 1662 *len, sizeof(get) + get.size); 1663 return -EINVAL; 1664 } 1665 + get.name[sizeof(get.name) - 1] = '\0'; 1666 1667 xt_compat_lock(NFPROTO_ARP); 1668 t = xt_find_table_lock(net, NFPROTO_ARP, get.name);
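
Besides folding the standard-target test into unconditional(), the entry walker now also verifies that the user-supplied e->next_offset keeps each rule inside the blob that was copied in; the same check is repeated for ip_tables and ip6_tables below. A user-space analogue of that bounds test with a simplified entry layout (the struct here is a stand-in, not the real arpt/ipt_entry):

    #include <stdbool.h>
    #include <stdint.h>

    struct entry {
            uint16_t target_offset;
            uint16_t next_offset;   /* total rule size, user supplied */
    };

    /* The blob must have room for at least the fixed header, and the
     * claimed next_offset must not run past the end of the blob. */
    static bool entry_in_bounds(const unsigned char *e, const unsigned char *limit)
    {
            const struct entry *ent = (const struct entry *)e;

            if (e + sizeof(struct entry) >= limit)
                    return false;
            return e + ent->next_offset <= limit;
    }

    int main(void)
    {
            unsigned char blob[64] = { 0 };
            struct entry *ent = (struct entry *)blob;

            ent->next_offset = 128;         /* claims more than the blob holds */
            return entry_in_bounds(blob, blob + sizeof(blob)) ? 1 : 0;
    }
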
+25 -23
net/ipv4/netfilter/ip_tables.c
··· 168 169 /* All zeroes == unconditional rule. */ 170 /* Mildly perf critical (only if packet tracing is on) */ 171 - static inline bool unconditional(const struct ipt_ip *ip) 172 { 173 static const struct ipt_ip uncond; 174 175 - return memcmp(ip, &uncond, sizeof(uncond)) == 0; 176 #undef FWINV 177 } 178 ··· 230 } else if (s == e) { 231 (*rulenum)++; 232 233 - if (s->target_offset == sizeof(struct ipt_entry) && 234 strcmp(t->target.u.kernel.target->name, 235 XT_STANDARD_TARGET) == 0 && 236 - t->verdict < 0 && 237 - unconditional(&s->ip)) { 238 /* Tail of chains: STANDARD target (return/policy) */ 239 *comment = *chainname == hookname 240 ? comments[NF_IP_TRACE_COMMENT_POLICY] ··· 476 e->comefrom |= ((1 << hook) | (1 << NF_INET_NUMHOOKS)); 477 478 /* Unconditional return/END. */ 479 - if ((e->target_offset == sizeof(struct ipt_entry) && 480 (strcmp(t->target.u.user.name, 481 XT_STANDARD_TARGET) == 0) && 482 - t->verdict < 0 && unconditional(&e->ip)) || 483 - visited) { 484 unsigned int oldpos, size; 485 486 if ((strcmp(t->target.u.user.name, ··· 568 } 569 570 static int 571 - check_entry(const struct ipt_entry *e, const char *name) 572 { 573 const struct xt_entry_target *t; 574 575 - if (!ip_checkentry(&e->ip)) { 576 - duprintf("ip check failed %p %s.\n", e, name); 577 return -EINVAL; 578 - } 579 580 if (e->target_offset + sizeof(struct xt_entry_target) > 581 e->next_offset) ··· 663 struct xt_mtchk_param mtpar; 664 struct xt_entry_match *ematch; 665 666 - ret = check_entry(e, name); 667 - if (ret) 668 - return ret; 669 - 670 e->counters.pcnt = xt_percpu_counter_alloc(); 671 if (IS_ERR_VALUE(e->counters.pcnt)) 672 return -ENOMEM; ··· 714 const struct xt_entry_target *t; 715 unsigned int verdict; 716 717 - if (!unconditional(&e->ip)) 718 return false; 719 t = ipt_get_target_c(e); 720 if (strcmp(t->u.user.name, XT_STANDARD_TARGET) != 0) ··· 734 unsigned int valid_hooks) 735 { 736 unsigned int h; 737 738 if ((unsigned long)e % __alignof__(struct ipt_entry) != 0 || 739 - (unsigned char *)e + sizeof(struct ipt_entry) >= limit) { 740 duprintf("Bad offset %p\n", e); 741 return -EINVAL; 742 } ··· 750 return -EINVAL; 751 } 752 753 /* Check hooks & underflows */ 754 for (h = 0; h < NF_INET_NUMHOOKS; h++) { 755 if (!(valid_hooks & (1 << h))) ··· 762 newinfo->hook_entry[h] = hook_entries[h]; 763 if ((unsigned char *)e - base == underflows[h]) { 764 if (!check_underflow(e)) { 765 - pr_err("Underflows must be unconditional and " 766 - "use the STANDARD target with " 767 - "ACCEPT/DROP\n"); 768 return -EINVAL; 769 } 770 newinfo->underflow[h] = underflows[h]; ··· 1156 *len, sizeof(get) + get.size); 1157 return -EINVAL; 1158 } 1159 1160 t = xt_find_table_lock(net, AF_INET, get.name); 1161 if (!IS_ERR_OR_NULL(t)) { ··· 1493 1494 duprintf("check_compat_entry_size_and_hooks %p\n", e); 1495 if ((unsigned long)e % __alignof__(struct compat_ipt_entry) != 0 || 1496 - (unsigned char *)e + sizeof(struct compat_ipt_entry) >= limit) { 1497 duprintf("Bad offset %p, limit = %p\n", e, limit); 1498 return -EINVAL; 1499 } ··· 1507 } 1508 1509 /* For purposes of check_entry casting the compat entry is fine */ 1510 - ret = check_entry((struct ipt_entry *)e, name); 1511 if (ret) 1512 return ret; 1513 ··· 1936 *len, sizeof(get) + get.size); 1937 return -EINVAL; 1938 } 1939 1940 xt_compat_lock(AF_INET); 1941 t = xt_find_table_lock(net, AF_INET, get.name);
··· 168 169 /* All zeroes == unconditional rule. */ 170 /* Mildly perf critical (only if packet tracing is on) */ 171 + static inline bool unconditional(const struct ipt_entry *e) 172 { 173 static const struct ipt_ip uncond; 174 175 + return e->target_offset == sizeof(struct ipt_entry) && 176 + memcmp(&e->ip, &uncond, sizeof(uncond)) == 0; 177 #undef FWINV 178 } 179 ··· 229 } else if (s == e) { 230 (*rulenum)++; 231 232 + if (unconditional(s) && 233 strcmp(t->target.u.kernel.target->name, 234 XT_STANDARD_TARGET) == 0 && 235 + t->verdict < 0) { 236 /* Tail of chains: STANDARD target (return/policy) */ 237 *comment = *chainname == hookname 238 ? comments[NF_IP_TRACE_COMMENT_POLICY] ··· 476 e->comefrom |= ((1 << hook) | (1 << NF_INET_NUMHOOKS)); 477 478 /* Unconditional return/END. */ 479 + if ((unconditional(e) && 480 (strcmp(t->target.u.user.name, 481 XT_STANDARD_TARGET) == 0) && 482 + t->verdict < 0) || visited) { 483 unsigned int oldpos, size; 484 485 if ((strcmp(t->target.u.user.name, ··· 569 } 570 571 static int 572 + check_entry(const struct ipt_entry *e) 573 { 574 const struct xt_entry_target *t; 575 576 + if (!ip_checkentry(&e->ip)) 577 return -EINVAL; 578 579 if (e->target_offset + sizeof(struct xt_entry_target) > 580 e->next_offset) ··· 666 struct xt_mtchk_param mtpar; 667 struct xt_entry_match *ematch; 668 669 e->counters.pcnt = xt_percpu_counter_alloc(); 670 if (IS_ERR_VALUE(e->counters.pcnt)) 671 return -ENOMEM; ··· 721 const struct xt_entry_target *t; 722 unsigned int verdict; 723 724 + if (!unconditional(e)) 725 return false; 726 t = ipt_get_target_c(e); 727 if (strcmp(t->u.user.name, XT_STANDARD_TARGET) != 0) ··· 741 unsigned int valid_hooks) 742 { 743 unsigned int h; 744 + int err; 745 746 if ((unsigned long)e % __alignof__(struct ipt_entry) != 0 || 747 + (unsigned char *)e + sizeof(struct ipt_entry) >= limit || 748 + (unsigned char *)e + e->next_offset > limit) { 749 duprintf("Bad offset %p\n", e); 750 return -EINVAL; 751 } ··· 755 return -EINVAL; 756 } 757 758 + err = check_entry(e); 759 + if (err) 760 + return err; 761 + 762 /* Check hooks & underflows */ 763 for (h = 0; h < NF_INET_NUMHOOKS; h++) { 764 if (!(valid_hooks & (1 << h))) ··· 763 newinfo->hook_entry[h] = hook_entries[h]; 764 if ((unsigned char *)e - base == underflows[h]) { 765 if (!check_underflow(e)) { 766 + pr_debug("Underflows must be unconditional and " 767 + "use the STANDARD target with " 768 + "ACCEPT/DROP\n"); 769 return -EINVAL; 770 } 771 newinfo->underflow[h] = underflows[h]; ··· 1157 *len, sizeof(get) + get.size); 1158 return -EINVAL; 1159 } 1160 + get.name[sizeof(get.name) - 1] = '\0'; 1161 1162 t = xt_find_table_lock(net, AF_INET, get.name); 1163 if (!IS_ERR_OR_NULL(t)) { ··· 1493 1494 duprintf("check_compat_entry_size_and_hooks %p\n", e); 1495 if ((unsigned long)e % __alignof__(struct compat_ipt_entry) != 0 || 1496 + (unsigned char *)e + sizeof(struct compat_ipt_entry) >= limit || 1497 + (unsigned char *)e + e->next_offset > limit) { 1498 duprintf("Bad offset %p, limit = %p\n", e, limit); 1499 return -EINVAL; 1500 } ··· 1506 } 1507 1508 /* For purposes of check_entry casting the compat entry is fine */ 1509 + ret = check_entry((struct ipt_entry *)e); 1510 if (ret) 1511 return ret; 1512 ··· 1935 *len, sizeof(get) + get.size); 1936 return -EINVAL; 1937 } 1938 + get.name[sizeof(get.name) - 1] = '\0'; 1939 1940 xt_compat_lock(AF_INET); 1941 t = xt_find_table_lock(net, AF_INET, get.name);
+28 -26
net/ipv4/netfilter/ipt_SYNPROXY.c
··· 18 #include <net/netfilter/nf_conntrack_synproxy.h> 19 20 static struct iphdr * 21 - synproxy_build_ip(struct sk_buff *skb, __be32 saddr, __be32 daddr) 22 { 23 struct iphdr *iph; 24 - struct net *net = sock_net(skb->sk); 25 26 skb_reset_network_header(skb); 27 iph = (struct iphdr *)skb_put(skb, sizeof(*iph)); ··· 40 } 41 42 static void 43 - synproxy_send_tcp(const struct synproxy_net *snet, 44 const struct sk_buff *skb, struct sk_buff *nskb, 45 struct nf_conntrack *nfct, enum ip_conntrack_info ctinfo, 46 struct iphdr *niph, struct tcphdr *nth, 47 unsigned int tcp_hdr_size) 48 { 49 - struct net *net = nf_ct_net(snet->tmpl); 50 - 51 nth->check = ~tcp_v4_check(tcp_hdr_size, niph->saddr, niph->daddr, 0); 52 nskb->ip_summed = CHECKSUM_PARTIAL; 53 nskb->csum_start = (unsigned char *)nth - nskb->head; ··· 70 } 71 72 static void 73 - synproxy_send_client_synack(const struct synproxy_net *snet, 74 const struct sk_buff *skb, const struct tcphdr *th, 75 const struct synproxy_options *opts) 76 { ··· 89 return; 90 skb_reserve(nskb, MAX_TCP_HEADER); 91 92 - niph = synproxy_build_ip(nskb, iph->daddr, iph->saddr); 93 94 skb_reset_transport_header(nskb); 95 nth = (struct tcphdr *)skb_put(nskb, tcp_hdr_size); ··· 107 108 synproxy_build_options(nth, opts); 109 110 - synproxy_send_tcp(snet, skb, nskb, skb->nfct, IP_CT_ESTABLISHED_REPLY, 111 niph, nth, tcp_hdr_size); 112 } 113 114 static void 115 - synproxy_send_server_syn(const struct synproxy_net *snet, 116 const struct sk_buff *skb, const struct tcphdr *th, 117 const struct synproxy_options *opts, u32 recv_seq) 118 { 119 struct sk_buff *nskb; 120 struct iphdr *iph, *niph; 121 struct tcphdr *nth; ··· 131 return; 132 skb_reserve(nskb, MAX_TCP_HEADER); 133 134 - niph = synproxy_build_ip(nskb, iph->saddr, iph->daddr); 135 136 skb_reset_transport_header(nskb); 137 nth = (struct tcphdr *)skb_put(nskb, tcp_hdr_size); ··· 152 153 synproxy_build_options(nth, opts); 154 155 - synproxy_send_tcp(snet, skb, nskb, &snet->tmpl->ct_general, IP_CT_NEW, 156 niph, nth, tcp_hdr_size); 157 } 158 159 static void 160 - synproxy_send_server_ack(const struct synproxy_net *snet, 161 const struct ip_ct_tcp *state, 162 const struct sk_buff *skb, const struct tcphdr *th, 163 const struct synproxy_options *opts) ··· 176 return; 177 skb_reserve(nskb, MAX_TCP_HEADER); 178 179 - niph = synproxy_build_ip(nskb, iph->daddr, iph->saddr); 180 181 skb_reset_transport_header(nskb); 182 nth = (struct tcphdr *)skb_put(nskb, tcp_hdr_size); ··· 192 193 synproxy_build_options(nth, opts); 194 195 - synproxy_send_tcp(snet, skb, nskb, NULL, 0, niph, nth, tcp_hdr_size); 196 } 197 198 static void 199 - synproxy_send_client_ack(const struct synproxy_net *snet, 200 const struct sk_buff *skb, const struct tcphdr *th, 201 const struct synproxy_options *opts) 202 { ··· 214 return; 215 skb_reserve(nskb, MAX_TCP_HEADER); 216 217 - niph = synproxy_build_ip(nskb, iph->saddr, iph->daddr); 218 219 skb_reset_transport_header(nskb); 220 nth = (struct tcphdr *)skb_put(nskb, tcp_hdr_size); ··· 230 231 synproxy_build_options(nth, opts); 232 233 - synproxy_send_tcp(snet, skb, nskb, skb->nfct, IP_CT_ESTABLISHED_REPLY, 234 niph, nth, tcp_hdr_size); 235 } 236 237 static bool 238 - synproxy_recv_client_ack(const struct synproxy_net *snet, 239 const struct sk_buff *skb, const struct tcphdr *th, 240 struct synproxy_options *opts, u32 recv_seq) 241 { 242 int mss; 243 244 mss = __cookie_v4_check(ip_hdr(skb), th, ntohl(th->ack_seq) - 1); ··· 255 if (opts->options & XT_SYNPROXY_OPT_TIMESTAMP) 256 
synproxy_check_timestamp_cookie(opts); 257 258 - synproxy_send_server_syn(snet, skb, th, opts, recv_seq); 259 return true; 260 } 261 ··· 263 synproxy_tg4(struct sk_buff *skb, const struct xt_action_param *par) 264 { 265 const struct xt_synproxy_info *info = par->targinfo; 266 - struct synproxy_net *snet = synproxy_pernet(par->net); 267 struct synproxy_options opts = {}; 268 struct tcphdr *th, _th; 269 ··· 293 XT_SYNPROXY_OPT_SACK_PERM | 294 XT_SYNPROXY_OPT_ECN); 295 296 - synproxy_send_client_synack(snet, skb, th, &opts); 297 return NF_DROP; 298 299 } else if (th->ack && !(th->fin || th->rst || th->syn)) { 300 /* ACK from client */ 301 - synproxy_recv_client_ack(snet, skb, th, &opts, ntohl(th->seq)); 302 return NF_DROP; 303 } 304 ··· 309 struct sk_buff *skb, 310 const struct nf_hook_state *nhs) 311 { 312 - struct synproxy_net *snet = synproxy_pernet(nhs->net); 313 enum ip_conntrack_info ctinfo; 314 struct nf_conn *ct; 315 struct nf_conn_synproxy *synproxy; ··· 367 * therefore we need to add 1 to make the SYN sequence 368 * number match the one of first SYN. 369 */ 370 - if (synproxy_recv_client_ack(snet, skb, th, &opts, 371 ntohl(th->seq) + 1)) 372 this_cpu_inc(snet->stats->cookie_retrans); 373 ··· 393 XT_SYNPROXY_OPT_SACK_PERM); 394 395 swap(opts.tsval, opts.tsecr); 396 - synproxy_send_server_ack(snet, state, skb, th, &opts); 397 398 nf_ct_seqadj_init(ct, ctinfo, synproxy->isn - ntohl(th->seq)); 399 400 swap(opts.tsval, opts.tsecr); 401 - synproxy_send_client_ack(snet, skb, th, &opts); 402 403 consume_skb(skb); 404 return NF_STOLEN;
··· 18 #include <net/netfilter/nf_conntrack_synproxy.h> 19 20 static struct iphdr * 21 + synproxy_build_ip(struct net *net, struct sk_buff *skb, __be32 saddr, 22 + __be32 daddr) 23 { 24 struct iphdr *iph; 25 26 skb_reset_network_header(skb); 27 iph = (struct iphdr *)skb_put(skb, sizeof(*iph)); ··· 40 } 41 42 static void 43 + synproxy_send_tcp(struct net *net, 44 const struct sk_buff *skb, struct sk_buff *nskb, 45 struct nf_conntrack *nfct, enum ip_conntrack_info ctinfo, 46 struct iphdr *niph, struct tcphdr *nth, 47 unsigned int tcp_hdr_size) 48 { 49 nth->check = ~tcp_v4_check(tcp_hdr_size, niph->saddr, niph->daddr, 0); 50 nskb->ip_summed = CHECKSUM_PARTIAL; 51 nskb->csum_start = (unsigned char *)nth - nskb->head; ··· 72 } 73 74 static void 75 + synproxy_send_client_synack(struct net *net, 76 const struct sk_buff *skb, const struct tcphdr *th, 77 const struct synproxy_options *opts) 78 { ··· 91 return; 92 skb_reserve(nskb, MAX_TCP_HEADER); 93 94 + niph = synproxy_build_ip(net, nskb, iph->daddr, iph->saddr); 95 96 skb_reset_transport_header(nskb); 97 nth = (struct tcphdr *)skb_put(nskb, tcp_hdr_size); ··· 109 110 synproxy_build_options(nth, opts); 111 112 + synproxy_send_tcp(net, skb, nskb, skb->nfct, IP_CT_ESTABLISHED_REPLY, 113 niph, nth, tcp_hdr_size); 114 } 115 116 static void 117 + synproxy_send_server_syn(struct net *net, 118 const struct sk_buff *skb, const struct tcphdr *th, 119 const struct synproxy_options *opts, u32 recv_seq) 120 { 121 + struct synproxy_net *snet = synproxy_pernet(net); 122 struct sk_buff *nskb; 123 struct iphdr *iph, *niph; 124 struct tcphdr *nth; ··· 132 return; 133 skb_reserve(nskb, MAX_TCP_HEADER); 134 135 + niph = synproxy_build_ip(net, nskb, iph->saddr, iph->daddr); 136 137 skb_reset_transport_header(nskb); 138 nth = (struct tcphdr *)skb_put(nskb, tcp_hdr_size); ··· 153 154 synproxy_build_options(nth, opts); 155 156 + synproxy_send_tcp(net, skb, nskb, &snet->tmpl->ct_general, IP_CT_NEW, 157 niph, nth, tcp_hdr_size); 158 } 159 160 static void 161 + synproxy_send_server_ack(struct net *net, 162 const struct ip_ct_tcp *state, 163 const struct sk_buff *skb, const struct tcphdr *th, 164 const struct synproxy_options *opts) ··· 177 return; 178 skb_reserve(nskb, MAX_TCP_HEADER); 179 180 + niph = synproxy_build_ip(net, nskb, iph->daddr, iph->saddr); 181 182 skb_reset_transport_header(nskb); 183 nth = (struct tcphdr *)skb_put(nskb, tcp_hdr_size); ··· 193 194 synproxy_build_options(nth, opts); 195 196 + synproxy_send_tcp(net, skb, nskb, NULL, 0, niph, nth, tcp_hdr_size); 197 } 198 199 static void 200 + synproxy_send_client_ack(struct net *net, 201 const struct sk_buff *skb, const struct tcphdr *th, 202 const struct synproxy_options *opts) 203 { ··· 215 return; 216 skb_reserve(nskb, MAX_TCP_HEADER); 217 218 + niph = synproxy_build_ip(net, nskb, iph->saddr, iph->daddr); 219 220 skb_reset_transport_header(nskb); 221 nth = (struct tcphdr *)skb_put(nskb, tcp_hdr_size); ··· 231 232 synproxy_build_options(nth, opts); 233 234 + synproxy_send_tcp(net, skb, nskb, skb->nfct, IP_CT_ESTABLISHED_REPLY, 235 niph, nth, tcp_hdr_size); 236 } 237 238 static bool 239 + synproxy_recv_client_ack(struct net *net, 240 const struct sk_buff *skb, const struct tcphdr *th, 241 struct synproxy_options *opts, u32 recv_seq) 242 { 243 + struct synproxy_net *snet = synproxy_pernet(net); 244 int mss; 245 246 mss = __cookie_v4_check(ip_hdr(skb), th, ntohl(th->ack_seq) - 1); ··· 255 if (opts->options & XT_SYNPROXY_OPT_TIMESTAMP) 256 synproxy_check_timestamp_cookie(opts); 257 258 + 
synproxy_send_server_syn(net, skb, th, opts, recv_seq); 259 return true; 260 } 261 ··· 263 synproxy_tg4(struct sk_buff *skb, const struct xt_action_param *par) 264 { 265 const struct xt_synproxy_info *info = par->targinfo; 266 + struct net *net = par->net; 267 + struct synproxy_net *snet = synproxy_pernet(net); 268 struct synproxy_options opts = {}; 269 struct tcphdr *th, _th; 270 ··· 292 XT_SYNPROXY_OPT_SACK_PERM | 293 XT_SYNPROXY_OPT_ECN); 294 295 + synproxy_send_client_synack(net, skb, th, &opts); 296 return NF_DROP; 297 298 } else if (th->ack && !(th->fin || th->rst || th->syn)) { 299 /* ACK from client */ 300 + synproxy_recv_client_ack(net, skb, th, &opts, ntohl(th->seq)); 301 return NF_DROP; 302 } 303 ··· 308 struct sk_buff *skb, 309 const struct nf_hook_state *nhs) 310 { 311 + struct net *net = nhs->net; 312 + struct synproxy_net *snet = synproxy_pernet(net); 313 enum ip_conntrack_info ctinfo; 314 struct nf_conn *ct; 315 struct nf_conn_synproxy *synproxy; ··· 365 * therefore we need to add 1 to make the SYN sequence 366 * number match the one of first SYN. 367 */ 368 + if (synproxy_recv_client_ack(net, skb, th, &opts, 369 ntohl(th->seq) + 1)) 370 this_cpu_inc(snet->stats->cookie_retrans); 371 ··· 391 XT_SYNPROXY_OPT_SACK_PERM); 392 393 swap(opts.tsval, opts.tsecr); 394 + synproxy_send_server_ack(net, state, skb, th, &opts); 395 396 nf_ct_seqadj_init(ct, ctinfo, synproxy->isn - ntohl(th->seq)); 397 398 swap(opts.tsval, opts.tsecr); 399 + synproxy_send_client_ack(net, skb, th, &opts); 400 401 consume_skb(skb); 402 return NF_STOLEN;
+25 -23
net/ipv6/netfilter/ip6_tables.c
··· 198 199 /* All zeroes == unconditional rule. */ 200 /* Mildly perf critical (only if packet tracing is on) */ 201 - static inline bool unconditional(const struct ip6t_ip6 *ipv6) 202 { 203 static const struct ip6t_ip6 uncond; 204 205 - return memcmp(ipv6, &uncond, sizeof(uncond)) == 0; 206 } 207 208 static inline const struct xt_entry_target * ··· 259 } else if (s == e) { 260 (*rulenum)++; 261 262 - if (s->target_offset == sizeof(struct ip6t_entry) && 263 strcmp(t->target.u.kernel.target->name, 264 XT_STANDARD_TARGET) == 0 && 265 - t->verdict < 0 && 266 - unconditional(&s->ipv6)) { 267 /* Tail of chains: STANDARD target (return/policy) */ 268 *comment = *chainname == hookname 269 ? comments[NF_IP6_TRACE_COMMENT_POLICY] ··· 488 e->comefrom |= ((1 << hook) | (1 << NF_INET_NUMHOOKS)); 489 490 /* Unconditional return/END. */ 491 - if ((e->target_offset == sizeof(struct ip6t_entry) && 492 (strcmp(t->target.u.user.name, 493 XT_STANDARD_TARGET) == 0) && 494 - t->verdict < 0 && 495 - unconditional(&e->ipv6)) || visited) { 496 unsigned int oldpos, size; 497 498 if ((strcmp(t->target.u.user.name, ··· 580 } 581 582 static int 583 - check_entry(const struct ip6t_entry *e, const char *name) 584 { 585 const struct xt_entry_target *t; 586 587 - if (!ip6_checkentry(&e->ipv6)) { 588 - duprintf("ip_tables: ip check failed %p %s.\n", e, name); 589 return -EINVAL; 590 - } 591 592 if (e->target_offset + sizeof(struct xt_entry_target) > 593 e->next_offset) ··· 676 struct xt_mtchk_param mtpar; 677 struct xt_entry_match *ematch; 678 679 - ret = check_entry(e, name); 680 - if (ret) 681 - return ret; 682 - 683 e->counters.pcnt = xt_percpu_counter_alloc(); 684 if (IS_ERR_VALUE(e->counters.pcnt)) 685 return -ENOMEM; ··· 726 const struct xt_entry_target *t; 727 unsigned int verdict; 728 729 - if (!unconditional(&e->ipv6)) 730 return false; 731 t = ip6t_get_target_c(e); 732 if (strcmp(t->u.user.name, XT_STANDARD_TARGET) != 0) ··· 746 unsigned int valid_hooks) 747 { 748 unsigned int h; 749 750 if ((unsigned long)e % __alignof__(struct ip6t_entry) != 0 || 751 - (unsigned char *)e + sizeof(struct ip6t_entry) >= limit) { 752 duprintf("Bad offset %p\n", e); 753 return -EINVAL; 754 } ··· 762 return -EINVAL; 763 } 764 765 /* Check hooks & underflows */ 766 for (h = 0; h < NF_INET_NUMHOOKS; h++) { 767 if (!(valid_hooks & (1 << h))) ··· 774 newinfo->hook_entry[h] = hook_entries[h]; 775 if ((unsigned char *)e - base == underflows[h]) { 776 if (!check_underflow(e)) { 777 - pr_err("Underflows must be unconditional and " 778 - "use the STANDARD target with " 779 - "ACCEPT/DROP\n"); 780 return -EINVAL; 781 } 782 newinfo->underflow[h] = underflows[h]; ··· 1168 *len, sizeof(get) + get.size); 1169 return -EINVAL; 1170 } 1171 1172 t = xt_find_table_lock(net, AF_INET6, get.name); 1173 if (!IS_ERR_OR_NULL(t)) { ··· 1505 1506 duprintf("check_compat_entry_size_and_hooks %p\n", e); 1507 if ((unsigned long)e % __alignof__(struct compat_ip6t_entry) != 0 || 1508 - (unsigned char *)e + sizeof(struct compat_ip6t_entry) >= limit) { 1509 duprintf("Bad offset %p, limit = %p\n", e, limit); 1510 return -EINVAL; 1511 } ··· 1519 } 1520 1521 /* For purposes of check_entry casting the compat entry is fine */ 1522 - ret = check_entry((struct ip6t_entry *)e, name); 1523 if (ret) 1524 return ret; 1525 ··· 1945 *len, sizeof(get) + get.size); 1946 return -EINVAL; 1947 } 1948 1949 xt_compat_lock(AF_INET6); 1950 t = xt_find_table_lock(net, AF_INET6, get.name);
··· 198 199 /* All zeroes == unconditional rule. */ 200 /* Mildly perf critical (only if packet tracing is on) */ 201 + static inline bool unconditional(const struct ip6t_entry *e) 202 { 203 static const struct ip6t_ip6 uncond; 204 205 + return e->target_offset == sizeof(struct ip6t_entry) && 206 + memcmp(&e->ipv6, &uncond, sizeof(uncond)) == 0; 207 } 208 209 static inline const struct xt_entry_target * ··· 258 } else if (s == e) { 259 (*rulenum)++; 260 261 + if (unconditional(s) && 262 strcmp(t->target.u.kernel.target->name, 263 XT_STANDARD_TARGET) == 0 && 264 + t->verdict < 0) { 265 /* Tail of chains: STANDARD target (return/policy) */ 266 *comment = *chainname == hookname 267 ? comments[NF_IP6_TRACE_COMMENT_POLICY] ··· 488 e->comefrom |= ((1 << hook) | (1 << NF_INET_NUMHOOKS)); 489 490 /* Unconditional return/END. */ 491 + if ((unconditional(e) && 492 (strcmp(t->target.u.user.name, 493 XT_STANDARD_TARGET) == 0) && 494 + t->verdict < 0) || visited) { 495 unsigned int oldpos, size; 496 497 if ((strcmp(t->target.u.user.name, ··· 581 } 582 583 static int 584 + check_entry(const struct ip6t_entry *e) 585 { 586 const struct xt_entry_target *t; 587 588 + if (!ip6_checkentry(&e->ipv6)) 589 return -EINVAL; 590 591 if (e->target_offset + sizeof(struct xt_entry_target) > 592 e->next_offset) ··· 679 struct xt_mtchk_param mtpar; 680 struct xt_entry_match *ematch; 681 682 e->counters.pcnt = xt_percpu_counter_alloc(); 683 if (IS_ERR_VALUE(e->counters.pcnt)) 684 return -ENOMEM; ··· 733 const struct xt_entry_target *t; 734 unsigned int verdict; 735 736 + if (!unconditional(e)) 737 return false; 738 t = ip6t_get_target_c(e); 739 if (strcmp(t->u.user.name, XT_STANDARD_TARGET) != 0) ··· 753 unsigned int valid_hooks) 754 { 755 unsigned int h; 756 + int err; 757 758 if ((unsigned long)e % __alignof__(struct ip6t_entry) != 0 || 759 + (unsigned char *)e + sizeof(struct ip6t_entry) >= limit || 760 + (unsigned char *)e + e->next_offset > limit) { 761 duprintf("Bad offset %p\n", e); 762 return -EINVAL; 763 } ··· 767 return -EINVAL; 768 } 769 770 + err = check_entry(e); 771 + if (err) 772 + return err; 773 + 774 /* Check hooks & underflows */ 775 for (h = 0; h < NF_INET_NUMHOOKS; h++) { 776 if (!(valid_hooks & (1 << h))) ··· 775 newinfo->hook_entry[h] = hook_entries[h]; 776 if ((unsigned char *)e - base == underflows[h]) { 777 if (!check_underflow(e)) { 778 + pr_debug("Underflows must be unconditional and " 779 + "use the STANDARD target with " 780 + "ACCEPT/DROP\n"); 781 return -EINVAL; 782 } 783 newinfo->underflow[h] = underflows[h]; ··· 1169 *len, sizeof(get) + get.size); 1170 return -EINVAL; 1171 } 1172 + get.name[sizeof(get.name) - 1] = '\0'; 1173 1174 t = xt_find_table_lock(net, AF_INET6, get.name); 1175 if (!IS_ERR_OR_NULL(t)) { ··· 1505 1506 duprintf("check_compat_entry_size_and_hooks %p\n", e); 1507 if ((unsigned long)e % __alignof__(struct compat_ip6t_entry) != 0 || 1508 + (unsigned char *)e + sizeof(struct compat_ip6t_entry) >= limit || 1509 + (unsigned char *)e + e->next_offset > limit) { 1510 duprintf("Bad offset %p, limit = %p\n", e, limit); 1511 return -EINVAL; 1512 } ··· 1518 } 1519 1520 /* For purposes of check_entry casting the compat entry is fine */ 1521 + ret = check_entry((struct ip6t_entry *)e); 1522 if (ret) 1523 return ret; 1524 ··· 1944 *len, sizeof(get) + get.size); 1945 return -EINVAL; 1946 } 1947 + get.name[sizeof(get.name) - 1] = '\0'; 1948 1949 xt_compat_lock(AF_INET6); 1950 t = xt_find_table_lock(net, AF_INET6, get.name);
+2 -2
net/ipv6/udp.c
··· 843 flush_stack(stack, count, skb, count - 1); 844 } else { 845 if (!inner_flushed) 846 - UDP_INC_STATS_BH(net, UDP_MIB_IGNOREDMULTI, 847 - proto == IPPROTO_UDPLITE); 848 consume_skb(skb); 849 } 850 return 0;
··· 843 flush_stack(stack, count, skb, count - 1); 844 } else { 845 if (!inner_flushed) 846 + UDP6_INC_STATS_BH(net, UDP_MIB_IGNOREDMULTI, 847 + proto == IPPROTO_UDPLITE); 848 consume_skb(skb); 849 } 850 return 0;
+1 -1
net/netfilter/ipset/ip_set_bitmap_gen.h
··· 95 if (!nested) 96 goto nla_put_failure; 97 if (mtype_do_head(skb, map) || 98 - nla_put_net32(skb, IPSET_ATTR_REFERENCES, htonl(set->ref - 1)) || 99 nla_put_net32(skb, IPSET_ATTR_MEMSIZE, htonl(memsize))) 100 goto nla_put_failure; 101 if (unlikely(ip_set_put_flags(skb, set)))
··· 95 if (!nested) 96 goto nla_put_failure; 97 if (mtype_do_head(skb, map) || 98 + nla_put_net32(skb, IPSET_ATTR_REFERENCES, htonl(set->ref)) || 99 nla_put_net32(skb, IPSET_ATTR_MEMSIZE, htonl(memsize))) 100 goto nla_put_failure; 101 if (unlikely(ip_set_put_flags(skb, set)))
+28 -5
net/netfilter/ipset/ip_set_core.c
··· 497 write_unlock_bh(&ip_set_ref_lock); 498 } 499 500 /* Add, del and test set entries from kernel. 501 * 502 * The set behind the index must exist and must be referenced ··· 1022 if (!attr[IPSET_ATTR_SETNAME]) { 1023 for (i = 0; i < inst->ip_set_max; i++) { 1024 s = ip_set(inst, i); 1025 - if (s && s->ref) { 1026 ret = -IPSET_ERR_BUSY; 1027 goto out; 1028 } ··· 1044 if (!s) { 1045 ret = -ENOENT; 1046 goto out; 1047 - } else if (s->ref) { 1048 ret = -IPSET_ERR_BUSY; 1049 goto out; 1050 } ··· 1191 from->family == to->family)) 1192 return -IPSET_ERR_TYPE_MISMATCH; 1193 1194 strncpy(from_name, from->name, IPSET_MAXNAMELEN); 1195 strncpy(from->name, to->name, IPSET_MAXNAMELEN); 1196 strncpy(to->name, from_name, IPSET_MAXNAMELEN); ··· 1229 if (set->variant->uref) 1230 set->variant->uref(set, cb, false); 1231 pr_debug("release set %s\n", set->name); 1232 - __ip_set_put_byindex(inst, index); 1233 } 1234 return 0; 1235 } ··· 1351 if (!cb->args[IPSET_CB_ARG0]) { 1352 /* Start listing: make sure set won't be destroyed */ 1353 pr_debug("reference set\n"); 1354 - set->ref++; 1355 } 1356 write_unlock_bh(&ip_set_ref_lock); 1357 nlh = start_msg(skb, NETLINK_CB(cb->skb).portid, ··· 1419 if (set->variant->uref) 1420 set->variant->uref(set, cb, false); 1421 pr_debug("release set %s\n", set->name); 1422 - __ip_set_put_byindex(inst, index); 1423 cb->args[IPSET_CB_ARG0] = 0; 1424 } 1425 out:
··· 497 write_unlock_bh(&ip_set_ref_lock); 498 } 499 500 + /* set->ref can be swapped out by ip_set_swap, netlink events (like dump) need 501 + * a separate reference counter 502 + */ 503 + static inline void 504 + __ip_set_get_netlink(struct ip_set *set) 505 + { 506 + write_lock_bh(&ip_set_ref_lock); 507 + set->ref_netlink++; 508 + write_unlock_bh(&ip_set_ref_lock); 509 + } 510 + 511 + static inline void 512 + __ip_set_put_netlink(struct ip_set *set) 513 + { 514 + write_lock_bh(&ip_set_ref_lock); 515 + BUG_ON(set->ref_netlink == 0); 516 + set->ref_netlink--; 517 + write_unlock_bh(&ip_set_ref_lock); 518 + } 519 + 520 /* Add, del and test set entries from kernel. 521 * 522 * The set behind the index must exist and must be referenced ··· 1002 if (!attr[IPSET_ATTR_SETNAME]) { 1003 for (i = 0; i < inst->ip_set_max; i++) { 1004 s = ip_set(inst, i); 1005 + if (s && (s->ref || s->ref_netlink)) { 1006 ret = -IPSET_ERR_BUSY; 1007 goto out; 1008 } ··· 1024 if (!s) { 1025 ret = -ENOENT; 1026 goto out; 1027 + } else if (s->ref || s->ref_netlink) { 1028 ret = -IPSET_ERR_BUSY; 1029 goto out; 1030 } ··· 1171 from->family == to->family)) 1172 return -IPSET_ERR_TYPE_MISMATCH; 1173 1174 + if (from->ref_netlink || to->ref_netlink) 1175 + return -EBUSY; 1176 + 1177 strncpy(from_name, from->name, IPSET_MAXNAMELEN); 1178 strncpy(from->name, to->name, IPSET_MAXNAMELEN); 1179 strncpy(to->name, from_name, IPSET_MAXNAMELEN); ··· 1206 if (set->variant->uref) 1207 set->variant->uref(set, cb, false); 1208 pr_debug("release set %s\n", set->name); 1209 + __ip_set_put_netlink(set); 1210 } 1211 return 0; 1212 } ··· 1328 if (!cb->args[IPSET_CB_ARG0]) { 1329 /* Start listing: make sure set won't be destroyed */ 1330 pr_debug("reference set\n"); 1331 + set->ref_netlink++; 1332 } 1333 write_unlock_bh(&ip_set_ref_lock); 1334 nlh = start_msg(skb, NETLINK_CB(cb->skb).portid, ··· 1396 if (set->variant->uref) 1397 set->variant->uref(set, cb, false); 1398 pr_debug("release set %s\n", set->name); 1399 + __ip_set_put_netlink(set); 1400 cb->args[IPSET_CB_ARG0] = 0; 1401 } 1402 out:
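
The new ref_netlink counter separates "referenced from the kernel" (rules using the set; swap stays legal because such references follow the swapped indices) from "a netlink dump is walking the set" (swap and destroy must be refused until it finishes); it is also why the header dumps above now report set->ref without the old "- 1" correction. A rough sketch of the resulting rules with invented types:

    #include <errno.h>
    #include <stdbool.h>

    struct set_refs {
            unsigned int ref;           /* in-kernel users (rules, ...) */
            unsigned int ref_netlink;   /* userspace dumps in flight */
    };

    /* Destroy needs both counters at zero; swap only needs the netlink
     * side to be idle. */
    static int check_busy(const struct set_refs *r, bool destroy)
    {
            if (r->ref_netlink)
                    return -EBUSY;
            if (destroy && r->ref)
                    return -EBUSY;
            return 0;
    }

    int main(void)
    {
            struct set_refs r = { .ref = 2, .ref_netlink = 0 };

            return check_busy(&r, false);   /* swap is fine while rules hold ref */
    }
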
+1 -1
net/netfilter/ipset/ip_set_hash_gen.h
··· 1082 if (nla_put_u32(skb, IPSET_ATTR_MARKMASK, h->markmask)) 1083 goto nla_put_failure; 1084 #endif 1085 - if (nla_put_net32(skb, IPSET_ATTR_REFERENCES, htonl(set->ref - 1)) || 1086 nla_put_net32(skb, IPSET_ATTR_MEMSIZE, htonl(memsize))) 1087 goto nla_put_failure; 1088 if (unlikely(ip_set_put_flags(skb, set)))
··· 1082 if (nla_put_u32(skb, IPSET_ATTR_MARKMASK, h->markmask)) 1083 goto nla_put_failure; 1084 #endif 1085 + if (nla_put_net32(skb, IPSET_ATTR_REFERENCES, htonl(set->ref)) || 1086 nla_put_net32(skb, IPSET_ATTR_MEMSIZE, htonl(memsize))) 1087 goto nla_put_failure; 1088 if (unlikely(ip_set_put_flags(skb, set)))
+1 -1
net/netfilter/ipset/ip_set_list_set.c
··· 458 if (!nested) 459 goto nla_put_failure; 460 if (nla_put_net32(skb, IPSET_ATTR_SIZE, htonl(map->size)) || 461 - nla_put_net32(skb, IPSET_ATTR_REFERENCES, htonl(set->ref - 1)) || 462 nla_put_net32(skb, IPSET_ATTR_MEMSIZE, 463 htonl(sizeof(*map) + n * set->dsize))) 464 goto nla_put_failure;
··· 458 if (!nested) 459 goto nla_put_failure; 460 if (nla_put_net32(skb, IPSET_ATTR_SIZE, htonl(map->size)) || 461 + nla_put_net32(skb, IPSET_ATTR_REFERENCES, htonl(set->ref)) || 462 nla_put_net32(skb, IPSET_ATTR_MEMSIZE, 463 htonl(sizeof(*map) + n * set->dsize))) 464 goto nla_put_failure;
+3 -1
net/openvswitch/Kconfig
··· 7 depends on INET 8 depends on !NF_CONNTRACK || \ 9 (NF_CONNTRACK && ((!NF_DEFRAG_IPV6 || NF_DEFRAG_IPV6) && \ 10 - (!NF_NAT || NF_NAT))) 11 select LIBCRC32C 12 select MPLS 13 select NET_MPLS_GSO
··· 7 depends on INET 8 depends on !NF_CONNTRACK || \ 9 (NF_CONNTRACK && ((!NF_DEFRAG_IPV6 || NF_DEFRAG_IPV6) && \ 10 + (!NF_NAT || NF_NAT) && \ 11 + (!NF_NAT_IPV4 || NF_NAT_IPV4) && \ 12 + (!NF_NAT_IPV6 || NF_NAT_IPV6))) 13 select LIBCRC32C 14 select MPLS 15 select NET_MPLS_GSO
+13 -11
net/openvswitch/conntrack.c
··· 535 switch (ctinfo) { 536 case IP_CT_RELATED: 537 case IP_CT_RELATED_REPLY: 538 - if (skb->protocol == htons(ETH_P_IP) && 539 ip_hdr(skb)->protocol == IPPROTO_ICMP) { 540 if (!nf_nat_icmp_reply_translation(skb, ct, ctinfo, 541 hooknum)) 542 err = NF_DROP; 543 goto push; 544 - #if IS_ENABLED(CONFIG_NF_NAT_IPV6) 545 - } else if (skb->protocol == htons(ETH_P_IPV6)) { 546 __be16 frag_off; 547 u8 nexthdr = ipv6_hdr(skb)->nexthdr; 548 int hdrlen = ipv6_skip_exthdr(skb, ··· 558 err = NF_DROP; 559 goto push; 560 } 561 - #endif 562 } 563 /* Non-ICMP, fall thru to initialize if needed. */ 564 case IP_CT_NEW: ··· 664 665 /* Determine NAT type. 666 * Check if the NAT type can be deduced from the tracked connection. 667 - * Make sure expected traffic is NATted only when committing. 668 */ 669 if (info->nat & OVS_CT_NAT && ctinfo != IP_CT_NEW && 670 ct->status & IPS_NAT_MASK && 671 - (!(ct->status & IPS_EXPECTED_BIT) || info->commit)) { 672 /* NAT an established or related connection like before. */ 673 if (CTINFO2DIR(ctinfo) == IP_CT_DIR_REPLY) 674 /* This is the REPLY direction for a connection ··· 969 break; 970 971 case OVS_NAT_ATTR_IP_MIN: 972 - nla_memcpy(&info->range.min_addr, a, nla_len(a)); 973 info->range.flags |= NF_NAT_RANGE_MAP_IPS; 974 break; 975 ··· 1240 } 1241 1242 if (info->range.flags & NF_NAT_RANGE_MAP_IPS) { 1243 - if (info->family == NFPROTO_IPV4) { 1244 if (nla_put_in_addr(skb, OVS_NAT_ATTR_IP_MIN, 1245 info->range.min_addr.ip) || 1246 (info->range.max_addr.ip ··· 1249 (nla_put_in_addr(skb, OVS_NAT_ATTR_IP_MAX, 1250 info->range.max_addr.ip)))) 1251 return false; 1252 - #if IS_ENABLED(CONFIG_NF_NAT_IPV6) 1253 - } else if (info->family == NFPROTO_IPV6) { 1254 if (nla_put_in6_addr(skb, OVS_NAT_ATTR_IP_MIN, 1255 &info->range.min_addr.in6) || 1256 (memcmp(&info->range.max_addr.in6, ··· 1259 (nla_put_in6_addr(skb, OVS_NAT_ATTR_IP_MAX, 1260 &info->range.max_addr.in6)))) 1261 return false; 1262 - #endif 1263 } else { 1264 return false; 1265 }
··· 535 switch (ctinfo) { 536 case IP_CT_RELATED: 537 case IP_CT_RELATED_REPLY: 538 + if (IS_ENABLED(CONFIG_NF_NAT_IPV4) && 539 + skb->protocol == htons(ETH_P_IP) && 540 ip_hdr(skb)->protocol == IPPROTO_ICMP) { 541 if (!nf_nat_icmp_reply_translation(skb, ct, ctinfo, 542 hooknum)) 543 err = NF_DROP; 544 goto push; 545 + } else if (IS_ENABLED(CONFIG_NF_NAT_IPV6) && 546 + skb->protocol == htons(ETH_P_IPV6)) { 547 __be16 frag_off; 548 u8 nexthdr = ipv6_hdr(skb)->nexthdr; 549 int hdrlen = ipv6_skip_exthdr(skb, ··· 557 err = NF_DROP; 558 goto push; 559 } 560 } 561 /* Non-ICMP, fall thru to initialize if needed. */ 562 case IP_CT_NEW: ··· 664 665 /* Determine NAT type. 666 * Check if the NAT type can be deduced from the tracked connection. 667 + * Make sure new expected connections (IP_CT_RELATED) are NATted only 668 + * when committing. 669 */ 670 if (info->nat & OVS_CT_NAT && ctinfo != IP_CT_NEW && 671 ct->status & IPS_NAT_MASK && 672 + (ctinfo != IP_CT_RELATED || info->commit)) { 673 /* NAT an established or related connection like before. */ 674 if (CTINFO2DIR(ctinfo) == IP_CT_DIR_REPLY) 675 /* This is the REPLY direction for a connection ··· 968 break; 969 970 case OVS_NAT_ATTR_IP_MIN: 971 + nla_memcpy(&info->range.min_addr, a, 972 + sizeof(info->range.min_addr)); 973 info->range.flags |= NF_NAT_RANGE_MAP_IPS; 974 break; 975 ··· 1238 } 1239 1240 if (info->range.flags & NF_NAT_RANGE_MAP_IPS) { 1241 + if (IS_ENABLED(CONFIG_NF_NAT_IPV4) && 1242 + info->family == NFPROTO_IPV4) { 1243 if (nla_put_in_addr(skb, OVS_NAT_ATTR_IP_MIN, 1244 info->range.min_addr.ip) || 1245 (info->range.max_addr.ip ··· 1246 (nla_put_in_addr(skb, OVS_NAT_ATTR_IP_MAX, 1247 info->range.max_addr.ip)))) 1248 return false; 1249 + } else if (IS_ENABLED(CONFIG_NF_NAT_IPV6) && 1250 + info->family == NFPROTO_IPV6) { 1251 if (nla_put_in6_addr(skb, OVS_NAT_ATTR_IP_MIN, 1252 &info->range.min_addr.in6) || 1253 (memcmp(&info->range.max_addr.in6, ··· 1256 (nla_put_in6_addr(skb, OVS_NAT_ATTR_IP_MAX, 1257 &info->range.max_addr.in6)))) 1258 return false; 1259 } else { 1260 return false; 1261 }
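The conntrack.c hunks above replace #if IS_ENABLED(CONFIG_NF_NAT_IPV4/IPV6) preprocessor blocks with IS_ENABLED() tests inside ordinary if conditions, so both branches are always parsed and type-checked and the disabled one is removed as dead code. A standalone sketch of the idiom follows; CONFIG_DEMO_FEATURE and the simplified IS_ENABLED() macro are stand-ins for the kernel's kconfig.h machinery.

#include <stdio.h>

#define CONFIG_DEMO_FEATURE 1
/* simplified stand-in: the real kernel macro also copes with undefined or =m options */
#define IS_ENABLED(option) (option)

static void handle_packet(int is_v6)
{
	/* the optional branch compiles even when the feature is disabled,
	 * but the compiler eliminates it as dead code */
	if (IS_ENABLED(CONFIG_DEMO_FEATURE) && is_v6)
		puts("optional IPv6 handling compiled in and taken");
	else
		puts("common fallback path");
}

int main(void)
{
	handle_packet(1);
	handle_packet(0);
	return 0;
}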
+3 -3
net/sctp/output.c
··· 401 sk = chunk->skb->sk; 402 403 /* Allocate the new skb. */ 404 - nskb = alloc_skb(packet->size + MAX_HEADER, GFP_ATOMIC); 405 if (!nskb) 406 goto nomem; 407 ··· 523 */ 524 if (auth) 525 sctp_auth_calculate_hmac(asoc, nskb, 526 - (struct sctp_auth_chunk *)auth, 527 - GFP_ATOMIC); 528 529 /* 2) Calculate the Adler-32 checksum of the whole packet, 530 * including the SCTP common header and all the
··· 401 sk = chunk->skb->sk; 402 403 /* Allocate the new skb. */ 404 + nskb = alloc_skb(packet->size + MAX_HEADER, gfp); 405 if (!nskb) 406 goto nomem; 407 ··· 523 */ 524 if (auth) 525 sctp_auth_calculate_hmac(asoc, nskb, 526 + (struct sctp_auth_chunk *)auth, 527 + gfp); 528 529 /* 2) Calculate the Adler-32 checksum of the whole packet, 530 * including the SCTP common header and all the
+1 -1
net/switchdev/switchdev.c
··· 1079 * @filter_dev: filter device 1080 * @idx: 1081 * 1082 - * Delete FDB entry from switch device. 1083 */ 1084 int switchdev_port_fdb_dump(struct sk_buff *skb, struct netlink_callback *cb, 1085 struct net_device *dev,
··· 1079 * @filter_dev: filter device 1080 * @idx: 1081 * 1082 + * Dump FDB entries from switch device. 1083 */ 1084 int switchdev_port_fdb_dump(struct sk_buff *skb, struct netlink_callback *cb, 1085 struct net_device *dev,
+3
net/xfrm/xfrm_input.c
··· 292 XFRM_SKB_CB(skb)->seq.input.hi = seq_hi; 293 294 skb_dst_force(skb); 295 296 nexthdr = x->type->input(x, skb); 297 298 if (nexthdr == -EINPROGRESS) 299 return 0; 300 resume: 301 spin_lock(&x->lock); 302 if (nexthdr <= 0) { 303 if (nexthdr == -EBADMSG) {
··· 292 XFRM_SKB_CB(skb)->seq.input.hi = seq_hi; 293 294 skb_dst_force(skb); 295 + dev_hold(skb->dev); 296 297 nexthdr = x->type->input(x, skb); 298 299 if (nexthdr == -EINPROGRESS) 300 return 0; 301 resume: 302 + dev_put(skb->dev); 303 + 304 spin_lock(&x->lock); 305 if (nexthdr <= 0) { 306 if (nexthdr == -EBADMSG) {
+15 -9
sound/core/timer.c
··· 1019 njiff += timer->sticks - priv->correction; 1020 priv->correction = 0; 1021 } 1022 - priv->last_expires = priv->tlist.expires = njiff; 1023 - add_timer(&priv->tlist); 1024 return 0; 1025 } 1026 ··· 1502 return err; 1503 } 1504 1505 - static int snd_timer_user_gparams(struct file *file, 1506 - struct snd_timer_gparams __user *_gparams) 1507 { 1508 - struct snd_timer_gparams gparams; 1509 struct snd_timer *t; 1510 int err; 1511 1512 - if (copy_from_user(&gparams, _gparams, sizeof(gparams))) 1513 - return -EFAULT; 1514 mutex_lock(&register_mutex); 1515 - t = snd_timer_find(&gparams.tid); 1516 if (!t) { 1517 err = -ENODEV; 1518 goto _error; ··· 1521 err = -ENOSYS; 1522 goto _error; 1523 } 1524 - err = t->hw.set_period(t, gparams.period_num, gparams.period_den); 1525 _error: 1526 mutex_unlock(&register_mutex); 1527 return err; 1528 } 1529 1530 static int snd_timer_user_gstatus(struct file *file,
··· 1019 njiff += timer->sticks - priv->correction; 1020 priv->correction = 0; 1021 } 1022 + priv->last_expires = njiff; 1023 + mod_timer(&priv->tlist, njiff); 1024 return 0; 1025 } 1026 ··· 1502 return err; 1503 } 1504 1505 + static int timer_set_gparams(struct snd_timer_gparams *gparams) 1506 { 1507 struct snd_timer *t; 1508 int err; 1509 1510 mutex_lock(&register_mutex); 1511 + t = snd_timer_find(&gparams->tid); 1512 if (!t) { 1513 err = -ENODEV; 1514 goto _error; ··· 1525 err = -ENOSYS; 1526 goto _error; 1527 } 1528 + err = t->hw.set_period(t, gparams->period_num, gparams->period_den); 1529 _error: 1530 mutex_unlock(&register_mutex); 1531 return err; 1532 + } 1533 + 1534 + static int snd_timer_user_gparams(struct file *file, 1535 + struct snd_timer_gparams __user *_gparams) 1536 + { 1537 + struct snd_timer_gparams gparams; 1538 + 1539 + if (copy_from_user(&gparams, _gparams, sizeof(gparams))) 1540 + return -EFAULT; 1541 + return timer_set_gparams(&gparams); 1542 } 1543 1544 static int snd_timer_user_gstatus(struct file *file,
+29 -1
sound/core/timer_compat.c
··· 22 23 #include <linux/compat.h> 24 25 struct snd_timer_info32 { 26 u32 flags; 27 s32 card; ··· 44 u32 resolution; 45 unsigned char reserved[64]; 46 }; 47 48 static int snd_timer_user_info_compat(struct file *file, 49 struct snd_timer_info32 __user *_info) ··· 125 */ 126 127 enum { 128 SNDRV_TIMER_IOCTL_INFO32 = _IOR('T', 0x11, struct snd_timer_info32), 129 SNDRV_TIMER_IOCTL_STATUS32 = _IOW('T', 0x14, struct snd_timer_status32), 130 #ifdef CONFIG_X86_X32 ··· 141 case SNDRV_TIMER_IOCTL_PVERSION: 142 case SNDRV_TIMER_IOCTL_TREAD: 143 case SNDRV_TIMER_IOCTL_GINFO: 144 - case SNDRV_TIMER_IOCTL_GPARAMS: 145 case SNDRV_TIMER_IOCTL_GSTATUS: 146 case SNDRV_TIMER_IOCTL_SELECT: 147 case SNDRV_TIMER_IOCTL_PARAMS: ··· 154 case SNDRV_TIMER_IOCTL_PAUSE_OLD: 155 case SNDRV_TIMER_IOCTL_NEXT_DEVICE: 156 return snd_timer_user_ioctl(file, cmd, (unsigned long)argp); 157 case SNDRV_TIMER_IOCTL_INFO32: 158 return snd_timer_user_info_compat(file, argp); 159 case SNDRV_TIMER_IOCTL_STATUS32:
··· 22 23 #include <linux/compat.h> 24 25 + /* 26 + * ILP32/LP64 has different size for 'long' type. Additionally, the size 27 + * of storage alignment differs depending on architectures. Here, '__packed' 28 + * qualifier is used so that the size of this structure is multiple of 4 and 29 + * it fits to any architectures with 32 bit storage alignment. 30 + */ 31 + struct snd_timer_gparams32 { 32 + struct snd_timer_id tid; 33 + u32 period_num; 34 + u32 period_den; 35 + unsigned char reserved[32]; 36 + } __packed; 37 + 38 struct snd_timer_info32 { 39 u32 flags; 40 s32 card; ··· 31 u32 resolution; 32 unsigned char reserved[64]; 33 }; 34 + 35 + static int snd_timer_user_gparams_compat(struct file *file, 36 + struct snd_timer_gparams32 __user *user) 37 + { 38 + struct snd_timer_gparams gparams; 39 + 40 + if (copy_from_user(&gparams.tid, &user->tid, sizeof(gparams.tid)) || 41 + get_user(gparams.period_num, &user->period_num) || 42 + get_user(gparams.period_den, &user->period_den)) 43 + return -EFAULT; 44 + 45 + return timer_set_gparams(&gparams); 46 + } 47 48 static int snd_timer_user_info_compat(struct file *file, 49 struct snd_timer_info32 __user *_info) ··· 99 */ 100 101 enum { 102 + SNDRV_TIMER_IOCTL_GPARAMS32 = _IOW('T', 0x04, struct snd_timer_gparams32), 103 SNDRV_TIMER_IOCTL_INFO32 = _IOR('T', 0x11, struct snd_timer_info32), 104 SNDRV_TIMER_IOCTL_STATUS32 = _IOW('T', 0x14, struct snd_timer_status32), 105 #ifdef CONFIG_X86_X32 ··· 114 case SNDRV_TIMER_IOCTL_PVERSION: 115 case SNDRV_TIMER_IOCTL_TREAD: 116 case SNDRV_TIMER_IOCTL_GINFO: 117 case SNDRV_TIMER_IOCTL_GSTATUS: 118 case SNDRV_TIMER_IOCTL_SELECT: 119 case SNDRV_TIMER_IOCTL_PARAMS: ··· 128 case SNDRV_TIMER_IOCTL_PAUSE_OLD: 129 case SNDRV_TIMER_IOCTL_NEXT_DEVICE: 130 return snd_timer_user_ioctl(file, cmd, (unsigned long)argp); 131 + case SNDRV_TIMER_IOCTL_GPARAMS32: 132 + return snd_timer_user_gparams_compat(file, argp); 133 case SNDRV_TIMER_IOCTL_INFO32: 134 return snd_timer_user_info_compat(file, argp); 135 case SNDRV_TIMER_IOCTL_STATUS32:
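The timer_compat.c addition defines a separate 32-bit layout of the gparams ioctl argument because 'long' width and structure padding differ between 32-bit and 64-bit ABIs. The short sketch below only illustrates that size mismatch with fixed-width fields and a packed attribute; native_params and compat_params are made-up names, not the ALSA structures.

#include <stdint.h>
#include <stdio.h>

struct native_params {            /* layout as a 64-bit kernel would see it */
	long period_num;
	long period_den;
};

struct compat_params {            /* layout a 32-bit caller actually passed */
	uint32_t period_num;
	uint32_t period_den;
} __attribute__((packed));        /* guard against differing alignment/padding rules */

int main(void)
{
	printf("native: %zu bytes, compat: %zu bytes\n",
	       sizeof(struct native_params), sizeof(struct compat_params));
	return 0;
}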
+4 -10
sound/firewire/dice/dice-stream.c
··· 446 447 void snd_dice_stream_destroy_duplex(struct snd_dice *dice) 448 { 449 - struct reg_params tx_params, rx_params; 450 451 - snd_dice_transaction_clear_enable(dice); 452 - 453 - if (get_register_params(dice, &tx_params, &rx_params) == 0) { 454 - stop_streams(dice, AMDTP_IN_STREAM, &tx_params); 455 - stop_streams(dice, AMDTP_OUT_STREAM, &rx_params); 456 } 457 - 458 - release_resources(dice); 459 - 460 - dice->substreams_counter = 0; 461 } 462 463 void snd_dice_stream_update_duplex(struct snd_dice *dice)
··· 446 447 void snd_dice_stream_destroy_duplex(struct snd_dice *dice) 448 { 449 + unsigned int i; 450 451 + for (i = 0; i < MAX_STREAMS; i++) { 452 + destroy_stream(dice, AMDTP_IN_STREAM, i); 453 + destroy_stream(dice, AMDTP_OUT_STREAM, i); 454 } 455 } 456 457 void snd_dice_stream_update_duplex(struct snd_dice *dice)
+4
sound/pci/hda/hda_intel.c
··· 2361 .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS }, 2362 { PCI_DEVICE(0x1002, 0xaae8), 2363 .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS }, 2364 /* VIA VT8251/VT8237A */ 2365 { PCI_DEVICE(0x1106, 0x3288), .driver_data = AZX_DRIVER_VIA }, 2366 /* VIA GFX VT7122/VX900 */
··· 2361 .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS }, 2362 { PCI_DEVICE(0x1002, 0xaae8), 2363 .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS }, 2364 + { PCI_DEVICE(0x1002, 0xaae0), 2365 + .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS }, 2366 + { PCI_DEVICE(0x1002, 0xaaf0), 2367 + .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS }, 2368 /* VIA VT8251/VT8237A */ 2369 { PCI_DEVICE(0x1106, 0x3288), .driver_data = AZX_DRIVER_VIA }, 2370 /* VIA GFX VT7122/VX900 */
+18 -1
sound/pci/hda/patch_realtek.c
··· 4759 ALC255_FIXUP_DELL_SPK_NOISE, 4760 ALC225_FIXUP_DELL1_MIC_NO_PRESENCE, 4761 ALC280_FIXUP_HP_HEADSET_MIC, 4762 }; 4763 4764 static const struct hda_fixup alc269_fixups[] = { ··· 5402 .chained = true, 5403 .chain_id = ALC269_FIXUP_HEADSET_MIC, 5404 }, 5405 }; 5406 5407 static const struct snd_pci_quirk alc269_fixup_tbl[] = { ··· 5514 SND_PCI_QUIRK(0x103c, 0x2336, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1), 5515 SND_PCI_QUIRK(0x103c, 0x2337, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1), 5516 SND_PCI_QUIRK(0x103c, 0x221c, "HP EliteBook 755 G2", ALC280_FIXUP_HP_HEADSET_MIC), 5517 SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300), 5518 SND_PCI_QUIRK(0x1043, 0x106d, "Asus K53BE", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), 5519 SND_PCI_QUIRK(0x1043, 0x115d, "Asus 1015E", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), ··· 6415 ALC668_FIXUP_AUTO_MUTE, 6416 ALC668_FIXUP_DELL_DISABLE_AAMIX, 6417 ALC668_FIXUP_DELL_XPS13, 6418 }; 6419 6420 static const struct hda_fixup alc662_fixups[] = { ··· 6656 .type = HDA_FIXUP_FUNC, 6657 .v.func = alc_fixup_bass_chmap, 6658 }, 6659 }; 6660 6661 static const struct snd_pci_quirk alc662_fixup_tbl[] = { ··· 6684 SND_PCI_QUIRK(0x1028, 0x0698, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE), 6685 SND_PCI_QUIRK(0x1028, 0x069f, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE), 6686 SND_PCI_QUIRK(0x103c, 0x1632, "HP RP5800", ALC662_FIXUP_HP_RP5800), 6687 - SND_PCI_QUIRK(0x1043, 0x11cd, "Asus N550", ALC662_FIXUP_BASS_1A), 6688 SND_PCI_QUIRK(0x1043, 0x13df, "Asus N550JX", ALC662_FIXUP_BASS_1A), 6689 SND_PCI_QUIRK(0x1043, 0x1477, "ASUS N56VZ", ALC662_FIXUP_BASS_MODE4_CHMAP), 6690 SND_PCI_QUIRK(0x1043, 0x15a7, "ASUS UX51VZH", ALC662_FIXUP_BASS_16), 6691 SND_PCI_QUIRK(0x1043, 0x1b73, "ASUS N55SF", ALC662_FIXUP_BASS_16),
··· 4759 ALC255_FIXUP_DELL_SPK_NOISE, 4760 ALC225_FIXUP_DELL1_MIC_NO_PRESENCE, 4761 ALC280_FIXUP_HP_HEADSET_MIC, 4762 + ALC221_FIXUP_HP_FRONT_MIC, 4763 }; 4764 4765 static const struct hda_fixup alc269_fixups[] = { ··· 5401 .chained = true, 5402 .chain_id = ALC269_FIXUP_HEADSET_MIC, 5403 }, 5404 + [ALC221_FIXUP_HP_FRONT_MIC] = { 5405 + .type = HDA_FIXUP_PINS, 5406 + .v.pins = (const struct hda_pintbl[]) { 5407 + { 0x19, 0x02a19020 }, /* Front Mic */ 5408 + { } 5409 + }, 5410 + }, 5411 }; 5412 5413 static const struct snd_pci_quirk alc269_fixup_tbl[] = { ··· 5506 SND_PCI_QUIRK(0x103c, 0x2336, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1), 5507 SND_PCI_QUIRK(0x103c, 0x2337, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1), 5508 SND_PCI_QUIRK(0x103c, 0x221c, "HP EliteBook 755 G2", ALC280_FIXUP_HP_HEADSET_MIC), 5509 + SND_PCI_QUIRK(0x103c, 0x8256, "HP", ALC221_FIXUP_HP_FRONT_MIC), 5510 SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300), 5511 SND_PCI_QUIRK(0x1043, 0x106d, "Asus K53BE", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), 5512 SND_PCI_QUIRK(0x1043, 0x115d, "Asus 1015E", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), ··· 6406 ALC668_FIXUP_AUTO_MUTE, 6407 ALC668_FIXUP_DELL_DISABLE_AAMIX, 6408 ALC668_FIXUP_DELL_XPS13, 6409 + ALC662_FIXUP_ASUS_Nx50, 6410 }; 6411 6412 static const struct hda_fixup alc662_fixups[] = { ··· 6646 .type = HDA_FIXUP_FUNC, 6647 .v.func = alc_fixup_bass_chmap, 6648 }, 6649 + [ALC662_FIXUP_ASUS_Nx50] = { 6650 + .type = HDA_FIXUP_FUNC, 6651 + .v.func = alc_fixup_auto_mute_via_amp, 6652 + .chained = true, 6653 + .chain_id = ALC662_FIXUP_BASS_1A 6654 + }, 6655 }; 6656 6657 static const struct snd_pci_quirk alc662_fixup_tbl[] = { ··· 6668 SND_PCI_QUIRK(0x1028, 0x0698, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE), 6669 SND_PCI_QUIRK(0x1028, 0x069f, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE), 6670 SND_PCI_QUIRK(0x103c, 0x1632, "HP RP5800", ALC662_FIXUP_HP_RP5800), 6671 + SND_PCI_QUIRK(0x1043, 0x11cd, "Asus N550", ALC662_FIXUP_ASUS_Nx50), 6672 SND_PCI_QUIRK(0x1043, 0x13df, "Asus N550JX", ALC662_FIXUP_BASS_1A), 6673 + SND_PCI_QUIRK(0x1043, 0x129d, "Asus N750", ALC662_FIXUP_ASUS_Nx50), 6674 SND_PCI_QUIRK(0x1043, 0x1477, "ASUS N56VZ", ALC662_FIXUP_BASS_MODE4_CHMAP), 6675 SND_PCI_QUIRK(0x1043, 0x15a7, "ASUS UX51VZH", ALC662_FIXUP_BASS_16), 6676 SND_PCI_QUIRK(0x1043, 0x1b73, "ASUS N55SF", ALC662_FIXUP_BASS_16),
+4
sound/usb/quirks.c
··· 150 usb_audio_err(chip, "cannot memdup\n"); 151 return -ENOMEM; 152 } 153 if (fp->nr_rates > MAX_NR_RATES) { 154 kfree(fp); 155 return -EINVAL; ··· 194 return 0; 195 196 error: 197 kfree(fp); 198 kfree(rate_table); 199 return err; ··· 471 fp->ep_attr = get_endpoint(alts, 0)->bmAttributes; 472 fp->datainterval = 0; 473 fp->maxpacksize = le16_to_cpu(get_endpoint(alts, 0)->wMaxPacketSize); 474 475 switch (fp->maxpacksize) { 476 case 0x120: ··· 495 ? SNDRV_PCM_STREAM_CAPTURE : SNDRV_PCM_STREAM_PLAYBACK; 496 err = snd_usb_add_audio_stream(chip, stream, fp); 497 if (err < 0) { 498 kfree(fp); 499 return err; 500 }
··· 150 usb_audio_err(chip, "cannot memdup\n"); 151 return -ENOMEM; 152 } 153 + INIT_LIST_HEAD(&fp->list); 154 if (fp->nr_rates > MAX_NR_RATES) { 155 kfree(fp); 156 return -EINVAL; ··· 193 return 0; 194 195 error: 196 + list_del(&fp->list); /* unlink for avoiding double-free */ 197 kfree(fp); 198 kfree(rate_table); 199 return err; ··· 469 fp->ep_attr = get_endpoint(alts, 0)->bmAttributes; 470 fp->datainterval = 0; 471 fp->maxpacksize = le16_to_cpu(get_endpoint(alts, 0)->wMaxPacketSize); 472 + INIT_LIST_HEAD(&fp->list); 473 474 switch (fp->maxpacksize) { 475 case 0x120: ··· 492 ? SNDRV_PCM_STREAM_CAPTURE : SNDRV_PCM_STREAM_PLAYBACK; 493 err = snd_usb_add_audio_stream(chip, stream, fp); 494 if (err < 0) { 495 + list_del(&fp->list); /* unlink for avoiding double-free */ 496 kfree(fp); 497 return err; 498 }
+5 -1
sound/usb/stream.c
··· 316 /* 317 * add this endpoint to the chip instance. 318 * if a stream with the same endpoint already exists, append to it. 319 - * if not, create a new pcm stream. 320 */ 321 int snd_usb_add_audio_stream(struct snd_usb_audio *chip, 322 int stream, ··· 679 * (fp->maxpacksize & 0x7ff); 680 fp->attributes = parse_uac_endpoint_attributes(chip, alts, protocol, iface_no); 681 fp->clock = clock; 682 683 /* some quirks for attributes here */ 684 ··· 728 dev_dbg(&dev->dev, "%u:%d: add audio endpoint %#x\n", iface_no, altno, fp->endpoint); 729 err = snd_usb_add_audio_stream(chip, stream, fp); 730 if (err < 0) { 731 kfree(fp->rate_table); 732 kfree(fp->chmap); 733 kfree(fp);
··· 316 /* 317 * add this endpoint to the chip instance. 318 * if a stream with the same endpoint already exists, append to it. 319 + * if not, create a new pcm stream. note, fp is added to the substream 320 + * fmt_list and will be freed on the chip instance release. do not free 321 + * fp or do remove it from the substream fmt_list to avoid double-free. 322 */ 323 int snd_usb_add_audio_stream(struct snd_usb_audio *chip, 324 int stream, ··· 677 * (fp->maxpacksize & 0x7ff); 678 fp->attributes = parse_uac_endpoint_attributes(chip, alts, protocol, iface_no); 679 fp->clock = clock; 680 + INIT_LIST_HEAD(&fp->list); 681 682 /* some quirks for attributes here */ 683 ··· 725 dev_dbg(&dev->dev, "%u:%d: add audio endpoint %#x\n", iface_no, altno, fp->endpoint); 726 err = snd_usb_add_audio_stream(chip, stream, fp); 727 if (err < 0) { 728 + list_del(&fp->list); /* unlink for avoiding double-free */ 729 kfree(fp->rate_table); 730 kfree(fp->chmap); 731 kfree(fp);
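Both the quirks.c and stream.c hunks rely on the same ownership rule: fp->list is initialized up front, snd_usb_add_audio_stream() links fp into the substream's fmt_list, and the error paths unlink it before kfree() so the list never refers to freed memory. A userspace sketch of that unlink-before-free pattern, using a minimal stand-in for the kernel's list helpers:

#include <stdio.h>
#include <stdlib.h>

struct list_head { struct list_head *next, *prev; };

static void INIT_LIST_HEAD(struct list_head *h) { h->next = h->prev = h; }

static void list_add(struct list_head *n, struct list_head *head)
{
	n->next = head->next;
	n->prev = head;
	head->next->prev = n;
	head->next = n;
}

static void list_del(struct list_head *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
	INIT_LIST_HEAD(n);	/* simplified: stays safe if deleted again */
}

struct fmt { struct list_head list; int rate; };

int main(void)
{
	struct list_head fmt_list;
	struct fmt *fp = malloc(sizeof(*fp));

	if (!fp)
		return 1;
	INIT_LIST_HEAD(&fmt_list);
	INIT_LIST_HEAD(&fp->list);	/* list_del() is safe even if never linked */
	list_add(&fp->list, &fmt_list);

	/* error path: unlink before freeing so fmt_list never points at freed memory */
	list_del(&fp->list);
	free(fp);
	return 0;
}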
+8 -4
tools/lib/lockdep/run_tests.sh
··· 3 make &> /dev/null 4 5 for i in `ls tests/*.c`; do 6 - testname=$(basename -s .c "$i") 7 gcc -o tests/$testname -pthread -lpthread $i liblockdep.a -Iinclude -D__USE_LIBLOCKDEP &> /dev/null 8 echo -ne "$testname... " 9 if [ $(timeout 1 ./tests/$testname | wc -l) -gt 0 ]; then ··· 11 else 12 echo "FAILED!" 13 fi 14 - rm tests/$testname 15 done 16 17 for i in `ls tests/*.c`; do 18 - testname=$(basename -s .c "$i") 19 gcc -o tests/$testname -pthread -lpthread -Iinclude $i &> /dev/null 20 echo -ne "(PRELOAD) $testname... " 21 if [ $(timeout 1 ./lockdep ./tests/$testname | wc -l) -gt 0 ]; then ··· 25 else 26 echo "FAILED!" 27 fi 28 - rm tests/$testname 29 done
··· 3 make &> /dev/null 4 5 for i in `ls tests/*.c`; do 6 + testname=$(basename "$i" .c) 7 gcc -o tests/$testname -pthread -lpthread $i liblockdep.a -Iinclude -D__USE_LIBLOCKDEP &> /dev/null 8 echo -ne "$testname... " 9 if [ $(timeout 1 ./tests/$testname | wc -l) -gt 0 ]; then ··· 11 else 12 echo "FAILED!" 13 fi 14 + if [ -f "tests/$testname" ]; then 15 + rm tests/$testname 16 + fi 17 done 18 19 for i in `ls tests/*.c`; do 20 + testname=$(basename "$i" .c) 21 gcc -o tests/$testname -pthread -lpthread -Iinclude $i &> /dev/null 22 echo -ne "(PRELOAD) $testname... " 23 if [ $(timeout 1 ./lockdep ./tests/$testname | wc -l) -gt 0 ]; then ··· 23 else 24 echo "FAILED!" 25 fi 26 + if [ -f "tests/$testname" ]; then 27 + rm tests/$testname 28 + fi 29 done
+1
tools/perf/MANIFEST
··· 74 arch/*/include/uapi/asm/perf_regs.h 75 arch/*/lib/memcpy*.S 76 arch/*/lib/memset*.S 77 include/linux/poison.h 78 include/linux/hw_breakpoint.h 79 include/uapi/linux/perf_event.h
··· 74 arch/*/include/uapi/asm/perf_regs.h 75 arch/*/lib/memcpy*.S 76 arch/*/lib/memset*.S 77 + arch/*/include/asm/*features.h 78 include/linux/poison.h 79 include/linux/hw_breakpoint.h 80 include/uapi/linux/perf_event.h
+2
tools/perf/arch/powerpc/util/header.c
··· 4 #include <stdlib.h> 5 #include <string.h> 6 #include <linux/stringify.h> 7 8 #define mfspr(rn) ({unsigned long rval; \ 9 asm volatile("mfspr %0," __stringify(rn) \
··· 4 #include <stdlib.h> 5 #include <string.h> 6 #include <linux/stringify.h> 7 + #include "header.h" 8 + #include "util.h" 9 10 #define mfspr(rn) ({unsigned long rval; \ 11 asm volatile("mfspr %0," __stringify(rn) \
+1 -1
tools/perf/tests/perf-targz-src-pkg
··· 15 tar xf ${TARBALL} -C $TMP_DEST 16 rm -f ${TARBALL} 17 cd - > /dev/null 18 - make -C $TMP_DEST/perf*/tools/perf > /dev/null 2>&1 19 RC=$? 20 rm -rf ${TMP_DEST} 21 exit $RC
··· 15 tar xf ${TARBALL} -C $TMP_DEST 16 rm -f ${TARBALL} 17 cd - > /dev/null 18 + make -C $TMP_DEST/perf*/tools/perf > /dev/null 19 RC=$? 20 rm -rf ${TMP_DEST} 21 exit $RC
+1 -1
tools/perf/ui/browsers/hists.c
··· 337 chain = list_entry(node->val.next, struct callchain_list, list); 338 chain->has_children = has_sibling; 339 340 - if (node->val.next != node->val.prev) { 341 chain = list_entry(node->val.prev, struct callchain_list, list); 342 chain->has_children = !RB_EMPTY_ROOT(&node->rb_root); 343 }
··· 337 chain = list_entry(node->val.next, struct callchain_list, list); 338 chain->has_children = has_sibling; 339 340 + if (!list_empty(&node->val)) { 341 chain = list_entry(node->val.prev, struct callchain_list, list); 342 chain->has_children = !RB_EMPTY_ROOT(&node->rb_root); 343 }
+16 -7
tools/perf/util/event.c
··· 56 return perf_event__names[id]; 57 } 58 59 - static struct perf_sample synth_sample = { 60 .pid = -1, 61 .tid = -1, 62 .time = -1, 63 .stream_id = -1, 64 .cpu = -1, 65 .period = 1, 66 }; 67 68 /* ··· 195 if (perf_event__prepare_comm(event, pid, machine, &tgid, &ppid) != 0) 196 return -1; 197 198 - if (process(tool, event, &synth_sample, machine) != 0) 199 return -1; 200 201 return tgid; ··· 227 228 event->fork.header.size = (sizeof(event->fork) + machine->id_hdr_size); 229 230 - if (process(tool, event, &synth_sample, machine) != 0) 231 return -1; 232 233 return 0; ··· 353 event->mmap2.pid = tgid; 354 event->mmap2.tid = pid; 355 356 - if (process(tool, event, &synth_sample, machine) != 0) { 357 rc = -1; 358 break; 359 } ··· 411 412 memcpy(event->mmap.filename, pos->dso->long_name, 413 pos->dso->long_name_len + 1); 414 - if (process(tool, event, &synth_sample, machine) != 0) { 415 rc = -1; 416 break; 417 } ··· 481 /* 482 * Send the prepared comm event 483 */ 484 - if (process(tool, comm_event, &synth_sample, machine) != 0) 485 break; 486 487 rc = 0; ··· 710 event->mmap.len = map->end - event->mmap.start; 711 event->mmap.pid = machine->pid; 712 713 - err = process(tool, event, &synth_sample, machine); 714 free(event); 715 716 return err;
··· 56 return perf_event__names[id]; 57 } 58 59 + static int perf_tool__process_synth_event(struct perf_tool *tool, 60 + union perf_event *event, 61 + struct machine *machine, 62 + perf_event__handler_t process) 63 + { 64 + struct perf_sample synth_sample = { 65 .pid = -1, 66 .tid = -1, 67 .time = -1, 68 .stream_id = -1, 69 .cpu = -1, 70 .period = 1, 71 + .cpumode = event->header.misc & PERF_RECORD_MISC_CPUMODE_MASK, 72 + }; 73 + 74 + return process(tool, event, &synth_sample, machine); 75 }; 76 77 /* ··· 186 if (perf_event__prepare_comm(event, pid, machine, &tgid, &ppid) != 0) 187 return -1; 188 189 + if (perf_tool__process_synth_event(tool, event, machine, process) != 0) 190 return -1; 191 192 return tgid; ··· 218 219 event->fork.header.size = (sizeof(event->fork) + machine->id_hdr_size); 220 221 + if (perf_tool__process_synth_event(tool, event, machine, process) != 0) 222 return -1; 223 224 return 0; ··· 344 event->mmap2.pid = tgid; 345 event->mmap2.tid = pid; 346 347 + if (perf_tool__process_synth_event(tool, event, machine, process) != 0) { 348 rc = -1; 349 break; 350 } ··· 402 403 memcpy(event->mmap.filename, pos->dso->long_name, 404 pos->dso->long_name_len + 1); 405 + if (perf_tool__process_synth_event(tool, event, machine, process) != 0) { 406 rc = -1; 407 break; 408 } ··· 472 /* 473 * Send the prepared comm event 474 */ 475 + if (perf_tool__process_synth_event(tool, comm_event, machine, process) != 0) 476 break; 477 478 rc = 0; ··· 701 event->mmap.len = map->end - event->mmap.start; 702 event->mmap.pid = machine->pid; 703 704 + err = perf_tool__process_synth_event(tool, event, machine, process); 705 free(event); 706 707 return err;
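The event.c refactor routes every synthesized event through a helper that derives sample.cpumode from the CPUMODE bits of the event header's misc field instead of leaving it zero. The fragment below only demonstrates the masking; the constants mirror the perf ABI values and the extra flag bit is an arbitrary example.

#include <stdint.h>
#include <stdio.h>

#define PERF_RECORD_MISC_CPUMODE_MASK (7 << 0)
#define PERF_RECORD_MISC_USER         (2 << 0)

int main(void)
{
	/* misc carries the cpumode in its low bits plus unrelated flags */
	uint16_t misc = PERF_RECORD_MISC_USER | (1 << 13);
	uint16_t cpumode = misc & PERF_RECORD_MISC_CPUMODE_MASK;

	printf("cpumode = %u (user = %u)\n", cpumode, PERF_RECORD_MISC_USER);
	return 0;
}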
+10 -14
tools/perf/util/genelf.h
··· 9 10 #if defined(__arm__) 11 #define GEN_ELF_ARCH EM_ARM 12 - #define GEN_ELF_ENDIAN ELFDATA2LSB 13 #define GEN_ELF_CLASS ELFCLASS32 14 #elif defined(__aarch64__) 15 #define GEN_ELF_ARCH EM_AARCH64 16 - #define GEN_ELF_ENDIAN ELFDATA2LSB 17 #define GEN_ELF_CLASS ELFCLASS64 18 #elif defined(__x86_64__) 19 #define GEN_ELF_ARCH EM_X86_64 20 - #define GEN_ELF_ENDIAN ELFDATA2LSB 21 #define GEN_ELF_CLASS ELFCLASS64 22 #elif defined(__i386__) 23 #define GEN_ELF_ARCH EM_386 24 - #define GEN_ELF_ENDIAN ELFDATA2LSB 25 #define GEN_ELF_CLASS ELFCLASS32 26 - #elif defined(__ppcle__) 27 - #define GEN_ELF_ARCH EM_PPC 28 - #define GEN_ELF_ENDIAN ELFDATA2LSB 29 #define GEN_ELF_CLASS ELFCLASS64 30 #elif defined(__powerpc__) 31 - #define GEN_ELF_ARCH EM_PPC64 32 - #define GEN_ELF_ENDIAN ELFDATA2MSB 33 - #define GEN_ELF_CLASS ELFCLASS64 34 - #elif defined(__powerpcle__) 35 - #define GEN_ELF_ARCH EM_PPC64 36 - #define GEN_ELF_ENDIAN ELFDATA2LSB 37 - #define GEN_ELF_CLASS ELFCLASS64 38 #else 39 #error "unsupported architecture" 40 #endif 41 42 #if GEN_ELF_CLASS == ELFCLASS64
··· 9 10 #if defined(__arm__) 11 #define GEN_ELF_ARCH EM_ARM 12 #define GEN_ELF_CLASS ELFCLASS32 13 #elif defined(__aarch64__) 14 #define GEN_ELF_ARCH EM_AARCH64 15 #define GEN_ELF_CLASS ELFCLASS64 16 #elif defined(__x86_64__) 17 #define GEN_ELF_ARCH EM_X86_64 18 #define GEN_ELF_CLASS ELFCLASS64 19 #elif defined(__i386__) 20 #define GEN_ELF_ARCH EM_386 21 #define GEN_ELF_CLASS ELFCLASS32 22 + #elif defined(__powerpc64__) 23 + #define GEN_ELF_ARCH EM_PPC64 24 #define GEN_ELF_CLASS ELFCLASS64 25 #elif defined(__powerpc__) 26 + #define GEN_ELF_ARCH EM_PPC 27 + #define GEN_ELF_CLASS ELFCLASS32 28 #else 29 #error "unsupported architecture" 30 + #endif 31 + 32 + #if __BYTE_ORDER == __BIG_ENDIAN 33 + #define GEN_ELF_ENDIAN ELFDATA2MSB 34 + #else 35 + #define GEN_ELF_ENDIAN ELFDATA2LSB 36 #endif 37 38 #if GEN_ELF_CLASS == ELFCLASS64
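The genelf.h rework stops hard-coding the ELF data encoding per architecture and derives it from the compiler's byte-order macros, which also corrects the powerpc/powerpc64 cases. A compilable userspace equivalent of that selection, assuming a glibc-style <endian.h>:

#include <elf.h>
#include <endian.h>
#include <stdio.h>

#if __BYTE_ORDER == __BIG_ENDIAN
#define GEN_ELF_ENDIAN ELFDATA2MSB
#else
#define GEN_ELF_ENDIAN ELFDATA2LSB
#endif

int main(void)
{
	/* this value would go into e_ident[EI_DATA] of the generated ELF */
	printf("GEN_ELF_ENDIAN = %d (%s)\n", GEN_ELF_ENDIAN,
	       GEN_ELF_ENDIAN == ELFDATA2LSB ? "little endian" : "big endian");
	return 0;
}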
+1
tools/perf/util/intel-bts.c
··· 279 event.sample.header.misc = PERF_RECORD_MISC_USER; 280 event.sample.header.size = sizeof(struct perf_event_header); 281 282 sample.ip = le64_to_cpu(branch->from); 283 sample.pid = btsq->pid; 284 sample.tid = btsq->tid;
··· 279 event.sample.header.misc = PERF_RECORD_MISC_USER; 280 event.sample.header.size = sizeof(struct perf_event_header); 281 282 + sample.cpumode = PERF_RECORD_MISC_USER; 283 sample.ip = le64_to_cpu(branch->from); 284 sample.pid = btsq->pid; 285 sample.tid = btsq->tid;
+3
tools/perf/util/intel-pt.c
··· 979 if (!pt->timeless_decoding) 980 sample.time = tsc_to_perf_time(ptq->timestamp, &pt->tc); 981 982 sample.ip = ptq->state->from_ip; 983 sample.pid = ptq->pid; 984 sample.tid = ptq->tid; ··· 1036 if (!pt->timeless_decoding) 1037 sample.time = tsc_to_perf_time(ptq->timestamp, &pt->tc); 1038 1039 sample.ip = ptq->state->from_ip; 1040 sample.pid = ptq->pid; 1041 sample.tid = ptq->tid; ··· 1094 if (!pt->timeless_decoding) 1095 sample.time = tsc_to_perf_time(ptq->timestamp, &pt->tc); 1096 1097 sample.ip = ptq->state->from_ip; 1098 sample.pid = ptq->pid; 1099 sample.tid = ptq->tid;
··· 979 if (!pt->timeless_decoding) 980 sample.time = tsc_to_perf_time(ptq->timestamp, &pt->tc); 981 982 + sample.cpumode = PERF_RECORD_MISC_USER; 983 sample.ip = ptq->state->from_ip; 984 sample.pid = ptq->pid; 985 sample.tid = ptq->tid; ··· 1035 if (!pt->timeless_decoding) 1036 sample.time = tsc_to_perf_time(ptq->timestamp, &pt->tc); 1037 1038 + sample.cpumode = PERF_RECORD_MISC_USER; 1039 sample.ip = ptq->state->from_ip; 1040 sample.pid = ptq->pid; 1041 sample.tid = ptq->tid; ··· 1092 if (!pt->timeless_decoding) 1093 sample.time = tsc_to_perf_time(ptq->timestamp, &pt->tc); 1094 1095 + sample.cpumode = PERF_RECORD_MISC_USER; 1096 sample.ip = ptq->state->from_ip; 1097 sample.pid = ptq->pid; 1098 sample.tid = ptq->tid;
+2
tools/perf/util/jitdump.c
··· 417 * use first address as sample address 418 */ 419 memset(&sample, 0, sizeof(sample)); 420 sample.pid = pid; 421 sample.tid = tid; 422 sample.time = id->time; ··· 506 * use first address as sample address 507 */ 508 memset(&sample, 0, sizeof(sample)); 509 sample.pid = pid; 510 sample.tid = tid; 511 sample.time = id->time;
··· 417 * use first address as sample address 418 */ 419 memset(&sample, 0, sizeof(sample)); 420 + sample.cpumode = PERF_RECORD_MISC_USER; 421 sample.pid = pid; 422 sample.tid = tid; 423 sample.time = id->time; ··· 505 * use first address as sample address 506 */ 507 memset(&sample, 0, sizeof(sample)); 508 + sample.cpumode = PERF_RECORD_MISC_USER; 509 sample.pid = pid; 510 sample.tid = tid; 511 sample.time = id->time;