···
 - interrupts : identifier to the device interrupt
 - clocks : a list of phandle + clock-specifier pairs, one for each
   entry in clock names.
-- clocks-names :
+- clock-names :
   * "xtal" for external xtal clock identifier
   * "pclk" for the bus core clock, either the clk81 clock or the gate clock
   * "baud" for the source of the baudrate generator, can be either the xtal
···
   - Must contain two elements for the extended variant of the IP
     (marvell,armada-3700-uart-ext): "uart-tx" and "uart-rx",
     respectively the UART TX interrupt and the UART RX interrupt. A
-    corresponding interrupts-names property must be defined.
+    corresponding interrupt-names property must be defined.
   - For backward compatibility reasons, a single element interrupts
     property is also supported for the standard variant of the IP,
     containing only the UART sum interrupt. This form is deprecated
···
 - interrupts: one XHCI interrupt should be described here.

Optional properties:
-  - clocks: reference to a clock
+  - clocks: reference to the clocks
+  - clock-names: mandatory if there is a second clock, in this case
+    the name must be "core" for the first clock and "reg" for the
+    second one
   - usb2-lpm-disable: indicate if we don't want to enable USB2 HW LPM
   - usb3-lpm-capable: determines if platform is USB3 LPM capable
   - quirk-broken-port-ped: set if the controller has broken port disable mechanism
···

request_firmware
----------------
-.. kernel-doc:: drivers/base/firmware_class.c
+.. kernel-doc:: drivers/base/firmware_loader/main.c
   :functions: request_firmware

request_firmware_direct
-----------------------
-.. kernel-doc:: drivers/base/firmware_class.c
+.. kernel-doc:: drivers/base/firmware_loader/main.c
   :functions: request_firmware_direct

request_firmware_into_buf
-------------------------
-.. kernel-doc:: drivers/base/firmware_class.c
+.. kernel-doc:: drivers/base/firmware_loader/main.c
   :functions: request_firmware_into_buf

Asynchronous firmware requests
···

request_firmware_nowait
-----------------------
-.. kernel-doc:: drivers/base/firmware_class.c
+.. kernel-doc:: drivers/base/firmware_loader/main.c
   :functions: request_firmware_nowait

Special optimizations on reboot
···
Some devices have an optimization in place to enable the firmware to be
retained during system reboot. When such optimizations are used the driver
author must ensure the firmware is still available on resume from suspend,
-this can be done with firmware_request_cache() insted of requesting for the
-firmare to be loaded.
+this can be done with firmware_request_cache() instead of requesting for the
+firmware to be loaded.

firmware_request_cache()
-------------------------
-.. kernel-doc:: drivers/base/firmware_class.c
+------------------------
+.. kernel-doc:: drivers/base/firmware_loader/main.c
   :functions: firmware_request_cache

request firmware API expected driver use
···
 role. USB Type-C Connector Class does not supply separate API for them. The
 port drivers can use USB Role Class API with those.

-Illustration of the muxes behind a connector that supports an alternate mode:
+Illustration of the muxes behind a connector that supports an alternate mode::

                          ------------------------
                          |       Connector      |
Documentation/i2c/dev-interface (+14, -18)
···
 the i2c-tools package.

 I2C device files are character device files with major device number 89
-and a minor device number corresponding to the number assigned as 
-explained above. They should be called "i2c-%d" (i2c-0, i2c-1, ..., 
+and a minor device number corresponding to the number assigned as
+explained above. They should be called "i2c-%d" (i2c-0, i2c-1, ...,
 i2c-10, ...). All 256 minor device numbers are reserved for i2c.

···
   #include <linux/i2c-dev.h>
   #include <i2c/smbus.h>

-(Please note that there are two files named "i2c-dev.h" out there. One is
-distributed with the Linux kernel and the other one is included in the
-source tree of i2c-tools. They used to be different in content but since 2012
-they're identical. You should use "linux/i2c-dev.h").
-
 Now, you have to decide which adapter you want to access. You should
 inspect /sys/class/i2c-dev/ or run "i2cdetect -l" to decide this.
 Adapter numbers are assigned somewhat dynamically, so you can not
···
   int file;
   int adapter_nr = 2; /* probably dynamically determined */
   char filename[20];
-
+
   snprintf(filename, 19, "/dev/i2c-%d", adapter_nr);
   file = open(filename, O_RDWR);
   if (file < 0) {
···
     /* res contains the read word */
   }

-  /* Using I2C Write, equivalent of 
-     i2c_smbus_write_word_data(file, reg, 0x6543) */
+  /*
+   * Using I2C Write, equivalent of
+   * i2c_smbus_write_word_data(file, reg, 0x6543)
+   */
   buf[0] = reg;
   buf[1] = 0x43;
   buf[2] = 0x65;
···
   set in each message, overriding the values set with the above ioctl's.

 ioctl(file, I2C_SMBUS, struct i2c_smbus_ioctl_data *args)
-  Not meant to be called directly; instead, use the access functions
-  below.
+  If possible, use the provided i2c_smbus_* methods described below instead
+  of issuing direct ioctls.

 You can do plain i2c transactions by using read(2) and write(2) calls.
 You do not need to pass the address byte; instead, set it through
 ioctl I2C_SLAVE before you try to access the device.

-You can do SMBus level transactions (see documentation file smbus-protocol 
+You can do SMBus level transactions (see documentation file smbus-protocol
 for details) through the following functions:
   __s32 i2c_smbus_write_quick(int file, __u8 value);
   __s32 i2c_smbus_read_byte(int file);
···
   __s32 i2c_smbus_write_word_data(int file, __u8 command, __u16 value);
   __s32 i2c_smbus_process_call(int file, __u8 command, __u16 value);
   __s32 i2c_smbus_read_block_data(int file, __u8 command, __u8 *values);
-  __s32 i2c_smbus_write_block_data(int file, __u8 command, __u8 length, 
+  __s32 i2c_smbus_write_block_data(int file, __u8 command, __u8 length,
                                    __u8 *values);
 All these transactions return -1 on failure; you can read errno to see
 what happened. The 'write' transactions return 0 on success; the
···
 returns the number of values read. The block buffers need not be longer
 than 32 bytes.

-The above functions are all inline functions, that resolve to calls to
-the i2c_smbus_access function, that on its turn calls a specific ioctl
-with the data in a specific format. Read the source code if you
-want to know what happens behind the screens.
+The above functions are made available by linking against the libi2c library,
+which is provided by the i2c-tools project. See:
+https://git.kernel.org/pub/scm/utils/i2c-tools/i2c-tools.git/.


 Implementation details
···

 [Please bear in mind that the kernel requests the microcode images from
 userspace, using the request_firmware() function defined in
-drivers/base/firmware_class.c]
+drivers/base/firmware_loader/main.c]


 a. When all the CPUs are identical:
Documentation/process/magic-number.rst (-3)
···
 OSS sound drivers have their magic numbers constructed from the soundcard PCI
 ID - these are not listed here as well.

-IrDA subsystem also uses large number of own magic numbers, see
-``include/net/irda/irda.h`` for a complete list of them.
-
 HFS is another larger user of magic numbers - you can find them in
 ``fs/hfs/hfs.h``.
Documentation/trace/ftrace.rst (+11, -3)
···
        and ticks at the same rate as the hardware clocksource.

   boot:
-       Same as mono. Used to be a separate clock which accounted
-       for the time spent in suspend while CLOCK_MONOTONIC did
-       not.
+       This is the boot clock (CLOCK_BOOTTIME) and is based on the
+       fast monotonic clock, but also accounts for time spent in
+       suspend. Since the clock access is designed for use in
+       tracing in the suspend path, some side effects are possible
+       if clock is accessed after the suspend time is accounted before
+       the fast mono clock is updated. In this case, the clock update
+       appears to happen slightly sooner than it normally would have.
+       Also on 32-bit systems, it's possible that the 64-bit boot offset
+       sees a partial update. These effects are rare and post
+       processing should be able to handle them. See comments in the
+       ktime_get_boot_fast_ns() function for more information.

 To set a clock, simply echo the clock name into this file::
Documentation/virtual/kvm/api.txt (+8, -1)
···
 ARM 64-bit FP registers have the following id bit patterns:
   0x4030 0000 0012 0 <regno:12>

+ARM firmware pseudo-registers have the following bit pattern:
+  0x4030 0000 0014 <regno:16>
+

 arm64 registers are mapped using the lower 32 bits. The upper 16 of
 that is the register group type, or coprocessor number:
···

 arm64 system registers have the following id bit patterns:
   0x6030 0000 0013 <op0:2> <op1:3> <crn:4> <crm:4> <op2:3>
+
+arm64 firmware pseudo-registers have the following bit pattern:
+  0x6030 0000 0014 <regno:16>


 MIPS registers are mapped using the lower 32 bits. The upper 16 of that is
···
   and execute guest code when KVM_RUN is called.
         - KVM_ARM_VCPU_EL1_32BIT: Starts the CPU in a 32bit mode.
           Depends on KVM_CAP_ARM_EL1_32BIT (arm64 only).
-        - KVM_ARM_VCPU_PSCI_0_2: Emulate PSCI v0.2 for the CPU.
+        - KVM_ARM_VCPU_PSCI_0_2: Emulate PSCI v0.2 (or a future revision
+          backward compatible with v0.2) for the CPU.
           Depends on KVM_CAP_ARM_PSCI_0_2.
         - KVM_ARM_VCPU_PMU_V3: Emulate PMUv3 for the CPU.
           Depends on KVM_CAP_ARM_PMU_V3.
Documentation/virtual/kvm/arm/psci.txt (+30)
···
+KVM implements the PSCI (Power State Coordination Interface)
+specification in order to provide services such as CPU on/off, reset
+and power-off to the guest.
+
+The PSCI specification is regularly updated to provide new features,
+and KVM implements these updates if they make sense from a virtualization
+point of view.
+
+This means that a guest booted on two different versions of KVM can
+observe two different "firmware" revisions. This could cause issues if
+a given guest is tied to a particular PSCI revision (unlikely), or if
+a migration causes a different PSCI version to be exposed out of the
+blue to an unsuspecting guest.
+
+In order to remedy this situation, KVM exposes a set of "firmware
+pseudo-registers" that can be manipulated using the GET/SET_ONE_REG
+interface. These registers can be saved/restored by userspace, and set
+to a convenient value if required.
+
+The following register is defined:
+
+* KVM_REG_ARM_PSCI_VERSION:
+
+  - Only valid if the vcpu has the KVM_ARM_VCPU_PSCI_0_2 feature set
+    (and thus has already been initialized)
+  - Returns the current PSCI version on GET_ONE_REG (defaulting to the
+    highest PSCI version implemented by KVM and compatible with v0.2)
+  - Allows any PSCI version implemented by KVM and compatible with
+    v0.2 to be set with SET_ONE_REG
+  - Affects the whole VM (even if the register view is per-vcpu)
MAINTAINERS (+9, -18)
···
 F:     drivers/media/dvb-frontends/af9033*

 AFFS FILE SYSTEM
+M:     David Sterba <dsterba@suse.com>
 L:     linux-fsdevel@vger.kernel.org
-S:     Orphan
+S:     Odd Fixes
 F:     Documentation/filesystems/affs.txt
 F:     fs/affs/
···
 M:     Laura Abbott <labbott@redhat.com>
 M:     Sumit Semwal <sumit.semwal@linaro.org>
 L:     devel@driverdev.osuosl.org
+L:     dri-devel@lists.freedesktop.org
+L:     linaro-mm-sig@lists.linaro.org (moderated for non-subscribers)
 S:     Supported
 F:     drivers/staging/android/ion
 F:     drivers/staging/android/uapi/ion.h
···
 ARM/ARTPEC MACHINE SUPPORT
 M:     Jesper Nilsson <jesper.nilsson@axis.com>
 M:     Lars Persson <lars.persson@axis.com>
-M:     Niklas Cassel <niklas.cassel@axis.com>
 S:     Maintained
 L:     linux-arm-kernel@axis.com
 F:     arch/arm/mach-artpec
···
 F:     drivers/net/hamradio/baycom*

 BCACHE (BLOCK LAYER CACHE)
-M:     Michael Lyle <mlyle@lyle.org>
+M:     Coly Li <colyli@suse.de>
 M:     Kent Overstreet <kent.overstreet@gmail.com>
 L:     linux-bcache@vger.kernel.org
 W:     http://bcache.evilpiepirate.org
···
 F:     include/uapi/linux/ipx.h
 F:     drivers/staging/ipx/

-IRDA SUBSYSTEM
-M:     Samuel Ortiz <samuel@sortiz.org>
-L:     irda-users@lists.sourceforge.net (subscribers-only)
-L:     netdev@vger.kernel.org
-W:     http://irda.sourceforge.net/
-S:     Obsolete
-T:     git git://git.kernel.org/pub/scm/linux/kernel/git/sameo/irda-2.6.git
-F:     Documentation/networking/irda.txt
-F:     drivers/staging/irda/
-
 IRQ DOMAINS (IRQ NUMBER MAPPING LIBRARY)
 M:     Marc Zyngier <marc.zyngier@arm.com>
 S:     Maintained
···
 F:     arch/x86/kvm/svm.c

 KERNEL VIRTUAL MACHINE FOR ARM (KVM/arm)
-M:     Christoffer Dall <christoffer.dall@linaro.org>
+M:     Christoffer Dall <christoffer.dall@arm.com>
 M:     Marc Zyngier <marc.zyngier@arm.com>
 L:     linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 L:     kvmarm@lists.cs.columbia.edu
···
 F:     include/kvm/arm_*

 KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)
-M:     Christoffer Dall <christoffer.dall@linaro.org>
+M:     Christoffer Dall <christoffer.dall@arm.com>
 M:     Marc Zyngier <marc.zyngier@arm.com>
 L:     linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 L:     kvmarm@lists.cs.columbia.edu
···
 F:     drivers/pci/dwc/

 PCIE DRIVER FOR AXIS ARTPEC
-M:     Niklas Cassel <niklas.cassel@axis.com>
 M:     Jesper Nilsson <jesper.nilsson@axis.com>
 L:     linux-arm-kernel@axis.com
 L:     linux-pci@vger.kernel.org
···
 M:     Andreas Noever <andreas.noever@gmail.com>
 M:     Michael Jamet <michael.jamet@intel.com>
 M:     Mika Westerberg <mika.westerberg@linux.intel.com>
-M:     Yehezkel Bernat <yehezkel.bernat@intel.com>
+M:     Yehezkel Bernat <YehezkelShB@gmail.com>
 T:     git git://git.kernel.org/pub/scm/linux/kernel/git/westeri/thunderbolt.git
 S:     Maintained
 F:     Documentation/admin-guide/thunderbolt.rst
···
 THUNDERBOLT NETWORK DRIVER
 M:     Michael Jamet <michael.jamet@intel.com>
 M:     Mika Westerberg <mika.westerberg@linux.intel.com>
-M:     Yehezkel Bernat <yehezkel.bernat@intel.com>
+M:     Yehezkel Bernat <YehezkelShB@gmail.com>
 L:     netdev@vger.kernel.org
 S:     Maintained
 F:     drivers/net/thunderbolt.c
···
 # CONFIG_LOCALVERSION_AUTO is not set
 CONFIG_SYSVIPC=y
 CONFIG_NO_HZ_IDLE=y
+CONFIG_HIGH_RES_TIMERS=y
 CONFIG_BSD_PROCESS_ACCT=y
 CONFIG_USER_NS=y
 CONFIG_RELAY=y
···
 CONFIG_PCI=y
 CONFIG_PREEMPT=y
 CONFIG_AEABI=y
+CONFIG_HIGHMEM=y
+CONFIG_CMA=y
 CONFIG_CMDLINE="console=ttyS0,115200n8"
 CONFIG_KEXEC=y
 CONFIG_BINFMT_MISC=y
 CONFIG_PM=y
+CONFIG_NET=y
+CONFIG_UNIX=y
+CONFIG_INET=y
 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 CONFIG_DEVTMPFS=y
 CONFIG_MTD=y
 CONFIG_MTD_BLOCK=y
 CONFIG_MTD_CFI=y
+CONFIG_MTD_JEDECPROBE=y
 CONFIG_MTD_CFI_INTELEXT=y
 CONFIG_MTD_CFI_AMDSTD=y
 CONFIG_MTD_CFI_STAA=y
···
 # CONFIG_SCSI_LOWLEVEL is not set
 CONFIG_ATA=y
 CONFIG_PATA_FTIDE010=y
+CONFIG_NETDEVICES=y
+CONFIG_GEMINI_ETHERNET=y
+CONFIG_MDIO_BITBANG=y
+CONFIG_MDIO_GPIO=y
+CONFIG_REALTEK_PHY=y
 CONFIG_INPUT_EVDEV=y
 CONFIG_KEYBOARD_GPIO=y
 # CONFIG_INPUT_MOUSE is not set
···
 CONFIG_SERIAL_8250_RUNTIME_UARTS=1
 CONFIG_SERIAL_OF_PLATFORM=y
 # CONFIG_HW_RANDOM is not set
-# CONFIG_HWMON is not set
+CONFIG_I2C_GPIO=y
+CONFIG_SPI=y
+CONFIG_SPI_GPIO=y
+CONFIG_SENSORS_GPIO_FAN=y
+CONFIG_SENSORS_LM75=y
+CONFIG_THERMAL=y
 CONFIG_WATCHDOG=y
-CONFIG_GEMINI_WATCHDOG=y
+CONFIG_REGULATOR=y
+CONFIG_REGULATOR_FIXED_VOLTAGE=y
+CONFIG_DRM=y
+CONFIG_DRM_PANEL_ILITEK_IL9322=y
+CONFIG_DRM_TVE200=y
+CONFIG_LOGO=y
 CONFIG_USB=y
 CONFIG_USB_MON=y
 CONFIG_USB_FOTG210_HCD=y
···
 CONFIG_LEDS_CLASS=y
 CONFIG_LEDS_GPIO=y
 CONFIG_LEDS_TRIGGERS=y
+CONFIG_LEDS_TRIGGER_DISK=y
 CONFIG_LEDS_TRIGGER_HEARTBEAT=y
 CONFIG_RTC_CLASS=y
 CONFIG_DMADEVICES=y
arch/arm/configs/socfpga_defconfig (+1)
···
 CONFIG_MTD_NAND=y
 CONFIG_MTD_NAND_DENALI_DT=y
 CONFIG_MTD_SPI_NOR=y
+# CONFIG_MTD_SPI_NOR_USE_4K_SECTORS is not set
 CONFIG_SPI_CADENCE_QUADSPI=y
 CONFIG_OF_OVERLAY=y
 CONFIG_OF_CONFIGFS=y
arch/arm/include/asm/kvm_host.h (+3)
···
         /* Interrupt controller */
         struct vgic_dist vgic;
         int max_vcpus;
+
+        /* Mandated version of PSCI */
+        u32 psci_version;
 };

 #define KVM_NR_MEM_OBJS 40
···
         pinctrl-0 = <&uart_ao_a_pins>;
         pinctrl-names = "default";
 };
+
+&usb0 {
+        status = "okay";
+};
+
+&usb2_phy0 {
+        /*
+         * even though the schematics don't show it:
+         * HDMI_5V is also used as supply for the USB VBUS.
+         */
+        phy-supply = <&hdmi_5v>;
+};
···
         }
 }

-extern void __sync_icache_dcache(pte_t pteval, unsigned long addr);
+extern void __sync_icache_dcache(pte_t pteval);

 /*
  * PTE bits configuration in the presence of hardware Dirty Bit Management
···
         pte_t old_pte;

         if (pte_present(pte) && pte_user_exec(pte) && !pte_special(pte))
-                __sync_icache_dcache(pte, addr);
+                __sync_icache_dcache(pte);

         /*
          * If the existing pte is valid, check for potential race with
···
                 insn &= ~BIT(31);
         } else {
                 /* out of range for ADR -> emit a veneer */
-                val = module_emit_adrp_veneer(mod, place, val & ~0xfff);
+                val = module_emit_veneer_for_adrp(mod, place, val & ~0xfff);
                 if (!val)
                         return -ENOEXEC;
                 insn = aarch64_insn_gen_branch_imm((u64)place, val,
arch/arm64/kernel/ptrace.c (+10, -10)
···
 #include <linux/sched/signal.h>
 #include <linux/sched/task_stack.h>
 #include <linux/mm.h>
+#include <linux/nospec.h>
 #include <linux/smp.h>
 #include <linux/ptrace.h>
 #include <linux/user.h>
···

         switch (note_type) {
         case NT_ARM_HW_BREAK:
-                if (idx < ARM_MAX_BRP)
-                        bp = tsk->thread.debug.hbp_break[idx];
+                if (idx >= ARM_MAX_BRP)
+                        goto out;
+                idx = array_index_nospec(idx, ARM_MAX_BRP);
+                bp = tsk->thread.debug.hbp_break[idx];
                 break;
         case NT_ARM_HW_WATCH:
-                if (idx < ARM_MAX_WRP)
-                        bp = tsk->thread.debug.hbp_watch[idx];
+                if (idx >= ARM_MAX_WRP)
+                        goto out;
+                idx = array_index_nospec(idx, ARM_MAX_WRP);
+                bp = tsk->thread.debug.hbp_watch[idx];
                 break;
         }

+out:
         return bp;
 }
···
 {
         int ret;
         u32 kdata;
-        mm_segment_t old_fs = get_fs();

-        set_fs(KERNEL_DS);
         /* Watchpoint */
         if (num < 0) {
                 ret = compat_ptrace_hbp_get(NT_ARM_HW_WATCH, tsk, num, &kdata);
···
         } else {
                 ret = compat_ptrace_hbp_get(NT_ARM_HW_BREAK, tsk, num, &kdata);
         }
-        set_fs(old_fs);

         if (!ret)
                 ret = put_user(kdata, data);
···
 {
         int ret;
         u32 kdata = 0;
-        mm_segment_t old_fs = get_fs();

         if (num == 0)
                 return 0;
···
         if (ret)
                 return ret;

-        set_fs(KERNEL_DS);
         if (num < 0)
                 ret = compat_ptrace_hbp_set(NT_ARM_HW_WATCH, tsk, num, &kdata);
         else
                 ret = compat_ptrace_hbp_set(NT_ARM_HW_BREAK, tsk, num, &kdata);
-        set_fs(old_fs);

         return ret;
 }
arch/arm64/kernel/traps.c (+2, -1)
···
          * If we were single stepping, we want to get the step exception after
          * we return from the trap.
          */
-        user_fastforward_single_step(current);
+        if (user_mode(regs))
+                user_fastforward_single_step(current);
 }

 static LIST_HEAD(undef_hook);
arch/arm64/kvm/guest.c (+13, -1)
···
 #include <linux/module.h>
 #include <linux/vmalloc.h>
 #include <linux/fs.h>
+#include <kvm/arm_psci.h>
 #include <asm/cputype.h>
 #include <linux/uaccess.h>
 #include <asm/kvm.h>
···
 unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu)
 {
         return num_core_regs() + kvm_arm_num_sys_reg_descs(vcpu)
-                + NUM_TIMER_REGS;
+                + kvm_arm_get_fw_num_regs(vcpu) + NUM_TIMER_REGS;
 }

 /**
···
                 uindices++;
         }

+        ret = kvm_arm_copy_fw_reg_indices(vcpu, uindices);
+        if (ret)
+                return ret;
+        uindices += kvm_arm_get_fw_num_regs(vcpu);
+
         ret = copy_timer_indices(vcpu, uindices);
         if (ret)
                 return ret;
···
         if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_CORE)
                 return get_core_reg(vcpu, reg);

+        if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_FW)
+                return kvm_arm_get_fw_reg(vcpu, reg);
+
         if (is_timer_reg(reg->id))
                 return get_timer_reg(vcpu, reg);
···
         /* Register group 16 means we set a core register. */
         if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_CORE)
                 return set_core_reg(vcpu, reg);
+
+        if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_FW)
+                return kvm_arm_set_fw_reg(vcpu, reg);

         if (is_timer_reg(reg->id))
                 return set_timer_reg(vcpu, reg);
arch/arm64/kvm/sys_regs.c (+2, -4)
···

         if (id == SYS_ID_AA64PFR0_EL1) {
                 if (val & (0xfUL << ID_AA64PFR0_SVE_SHIFT))
-                        pr_err_once("kvm [%i]: SVE unsupported for guests, suppressing\n",
-                                    task_pid_nr(current));
+                        kvm_debug("SVE unsupported for guests, suppressing\n");

                 val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT);
         } else if (id == SYS_ID_AA64MMFR1_EL1) {
                 if (val & (0xfUL << ID_AA64MMFR1_LOR_SHIFT))
-                        pr_err_once("kvm [%i]: LORegions unsupported for guests, suppressing\n",
-                                    task_pid_nr(current));
+                        kvm_debug("LORegions unsupported for guests, suppressing\n");

                 val &= ~(0xfUL << ID_AA64MMFR1_LOR_SHIFT);
         }
···
 #endif

 #ifdef CONFIG_NMI_IPI
-static void stop_this_cpu(struct pt_regs *regs)
-#else
+static void nmi_stop_this_cpu(struct pt_regs *regs)
+{
+        /*
+         * This is a special case because it never returns, so the NMI IPI
+         * handling would never mark it as done, which makes any later
+         * smp_send_nmi_ipi() call spin forever. Mark it done now.
+         *
+         * IRQs are already hard disabled by the smp_handle_nmi_ipi.
+         */
+        nmi_ipi_lock();
+        nmi_ipi_busy_count--;
+        nmi_ipi_unlock();
+
+        /* Remove this CPU */
+        set_cpu_online(smp_processor_id(), false);
+
+        spin_begin();
+        while (1)
+                spin_cpu_relax();
+}
+
+void smp_send_stop(void)
+{
+        smp_send_nmi_ipi(NMI_IPI_ALL_OTHERS, nmi_stop_this_cpu, 1000000);
+}
+
+#else /* CONFIG_NMI_IPI */
+
 static void stop_this_cpu(void *dummy)
-#endif
 {
         /* Remove this CPU */
         set_cpu_online(smp_processor_id(), false);
···

 void smp_send_stop(void)
 {
-#ifdef CONFIG_NMI_IPI
-        smp_send_nmi_ipi(NMI_IPI_ALL_OTHERS, stop_this_cpu, 1000000);
-#else
+        static bool stopped = false;
+
+        /*
+         * Prevent waiting on csd lock from a previous smp_send_stop.
+         * This is racy, but in general callers try to do the right
+         * thing and only fire off one smp_send_stop (e.g., see
+         * kernel/panic.c)
+         */
+        if (stopped)
+                return;
+
+        stopped = true;
+
         smp_call_function(stop_this_cpu, NULL, 0);
-#endif
 }
+#endif /* CONFIG_NMI_IPI */

 struct thread_info *current_set[NR_CPUS];
···
 #define npu_to_phb(x) container_of(x, struct pnv_phb, npu)

 /*
+ * spinlock to protect initialisation of an npu_context for a particular
+ * mm_struct.
+ */
+static DEFINE_SPINLOCK(npu_context_lock);
+
+/*
+ * When an address shootdown range exceeds this threshold we invalidate the
+ * entire TLB on the GPU for the given PID rather than each specific address in
+ * the range.
+ */
+#define ATSD_THRESHOLD (2*1024*1024)
+
+/*
  * Other types of TCE cache invalidation are not functional in the
  * hardware.
  */
···
         bool nmmu_flush;

         /* Callback to stop translation requests on a given GPU */
-        struct npu_context *(*release_cb)(struct npu_context *, void *);
+        void (*release_cb)(struct npu_context *context, void *priv);

         /*
          * Private pointer passed to the above callback for usage by
···
         struct npu_context *npu_context = mn_to_npu_context(mn);
         unsigned long address;

-        for (address = start; address < end; address += PAGE_SIZE)
-                mmio_invalidate(npu_context, 1, address, false);
+        if (end - start > ATSD_THRESHOLD) {
+                /*
+                 * Just invalidate the entire PID if the address range is too
+                 * large.
+                 */
+                mmio_invalidate(npu_context, 0, 0, true);
+        } else {
+                for (address = start; address < end; address += PAGE_SIZE)
+                        mmio_invalidate(npu_context, 1, address, false);

-        /* Do the flush only on the final addess == end */
-        mmio_invalidate(npu_context, 1, address, true);
+                /* Do the flush only on the final addess == end */
+                mmio_invalidate(npu_context, 1, address, true);
+        }
 }

 static const struct mmu_notifier_ops nv_nmmu_notifier_ops = {
···
  * Returns an error if there no contexts are currently available or a
  * npu_context which should be passed to pnv_npu2_handle_fault().
  *
- * mmap_sem must be held in write mode.
+ * mmap_sem must be held in write mode and must not be called from interrupt
+ * context.
  */
 struct npu_context *pnv_npu2_init_context(struct pci_dev *gpdev,
                         unsigned long flags,
-                        struct npu_context *(*cb)(struct npu_context *, void *),
+                        void (*cb)(struct npu_context *, void *),
                         void *priv)
 {
         int rc;
···
         /*
          * Setup the NPU context table for a particular GPU. These need to be
          * per-GPU as we need the tables to filter ATSDs when there are no
-         * active contexts on a particular GPU.
+         * active contexts on a particular GPU. It is safe for these to be
+         * called concurrently with destroy as the OPAL call takes appropriate
+         * locks and refcounts on init/destroy.
          */
         rc = opal_npu_init_context(nphb->opal_id, mm->context.id, flags,
                                 PCI_DEVID(gpdev->bus->number, gpdev->devfn));
···
          * We store the npu pci device so we can more easily get at the
          * associated npus.
          */
+        spin_lock(&npu_context_lock);
         npu_context = mm->context.npu_context;
+        if (npu_context) {
+                if (npu_context->release_cb != cb ||
+                        npu_context->priv != priv) {
+                        spin_unlock(&npu_context_lock);
+                        opal_npu_destroy_context(nphb->opal_id, mm->context.id,
+                                                PCI_DEVID(gpdev->bus->number,
+                                                        gpdev->devfn));
+                        return ERR_PTR(-EINVAL);
+                }
+
+                WARN_ON(!kref_get_unless_zero(&npu_context->kref));
+        }
+        spin_unlock(&npu_context_lock);
+
         if (!npu_context) {
+                /*
+                 * We can set up these fields without holding the
+                 * npu_context_lock as the npu_context hasn't been returned to
+                 * the caller meaning it can't be destroyed. Parallel allocation
+                 * is protected against by mmap_sem.
+                 */
                 rc = -ENOMEM;
                 npu_context = kzalloc(sizeof(struct npu_context), GFP_KERNEL);
                 if (npu_context) {
···
                 }

                 mm->context.npu_context = npu_context;
-        } else {
-                WARN_ON(!kref_get_unless_zero(&npu_context->kref));
         }

         npu_context->release_cb = cb;
···
         mm_context_remove_copro(npu_context->mm);

         npu_context->mm->context.npu_context = NULL;
-        mmu_notifier_unregister(&npu_context->mn,
-                                npu_context->mm);
-
-        kfree(npu_context);
 }

+/*
+ * Destroy a context on the given GPU. May free the npu_context if it is no
+ * longer active on any GPUs. Must not be called from interrupt context.
+ */
 void pnv_npu2_destroy_context(struct npu_context *npu_context,
                         struct pci_dev *gpdev)
 {
+        int removed;
         struct pnv_phb *nphb;
         struct npu *npu;
         struct pci_dev *npdev = pnv_pci_get_npu_dev(gpdev, 0);
···
         WRITE_ONCE(npu_context->npdev[npu->index][nvlink_index], NULL);
         opal_npu_destroy_context(nphb->opal_id, npu_context->mm->context.id,
                                 PCI_DEVID(gpdev->bus->number, gpdev->devfn));
-        kref_put(&npu_context->kref, pnv_npu2_release_context);
+        spin_lock(&npu_context_lock);
+        removed = kref_put(&npu_context->kref, pnv_npu2_release_context);
+        spin_unlock(&npu_context_lock);
+
+        /*
+         * We need to do this outside of pnv_npu2_release_context so that it is
+         * outside the spinlock as mmu_notifier_destroy uses SRCU.
+         */
+        if (removed) {
+                mmu_notifier_unregister(&npu_context->mn,
+                                        npu_context->mm);
+
+                kfree(npu_context);
+        }
+
 }
 EXPORT_SYMBOL(pnv_npu2_destroy_context);

arch/powerpc/platforms/powernv/opal-rtc.c (+5, -3)
···

         while (rc == OPAL_BUSY || rc == OPAL_BUSY_EVENT) {
                 rc = opal_rtc_read(&__y_m_d, &__h_m_s_ms);
-                if (rc == OPAL_BUSY_EVENT)
+                if (rc == OPAL_BUSY_EVENT) {
+                        mdelay(OPAL_BUSY_DELAY_MS);
                         opal_poll_events(NULL);
-                else if (rc == OPAL_BUSY)
-                        mdelay(10);
+                } else if (rc == OPAL_BUSY) {
+                        mdelay(OPAL_BUSY_DELAY_MS);
+                }
         }
         if (rc != OPAL_SUCCESS)
                 return 0;
···

         cpuc->lbr_sel = NULL;

-        flip_smm_bit(&x86_pmu.attr_freeze_on_smi);
+        if (x86_pmu.version > 1)
+                flip_smm_bit(&x86_pmu.attr_freeze_on_smi);

         if (!cpuc->shared_regs)
                 return;
···
         .cpu_dying              = intel_pmu_cpu_dying,
 };

+static struct attribute *intel_pmu_attrs[];
+
 static __initconst const struct x86_pmu intel_pmu = {
         .name                   = "Intel",
         .handle_irq             = intel_pmu_handle_irq,
···
         .format_attrs           = intel_arch3_formats_attr,
         .events_sysfs_show      = intel_event_sysfs_show,
+
+        .attrs                  = intel_pmu_attrs,

         .cpu_prepare            = intel_pmu_cpu_prepare,
         .cpu_starting           = intel_pmu_cpu_starting,
···
         x86_pmu.max_pebs_events = min_t(unsigned, MAX_PEBS_EVENTS, x86_pmu.num_counters);

-
-        x86_pmu.attrs = intel_pmu_attrs;
         /*
          * Quirk: v2 perfmon does not report fixed-purpose events, so
          * assume at least 3 events, when not running in a hypervisor:
arch/x86/include/asm/cpufeatures.h (+1)
···
 #define X86_FEATURE_AVX512_VPOPCNTDQ (16*32+14) /* POPCNT for vectors of DW/QW */
 #define X86_FEATURE_LA57             (16*32+16) /* 5-level page tables */
 #define X86_FEATURE_RDPID            (16*32+22) /* RDPID instruction */
+#define X86_FEATURE_CLDEMOTE         (16*32+25) /* CLDEMOTE instruction */

 /* AMD-defined CPU features, CPUID level 0x80000007 (EBX), word 17 */
 #define X86_FEATURE_OVERFLOW_RECOV   (17*32+ 0) /* MCA overflow recovery support */
arch/x86/include/asm/ftrace.h (+17, -2)
@@ -46,7 +46,21 @@
 #endif /* CONFIG_FUNCTION_TRACER */
 
 
-#if !defined(__ASSEMBLY__) && !defined(COMPILE_OFFSETS)
+#ifndef __ASSEMBLY__
+
+#define ARCH_HAS_SYSCALL_MATCH_SYM_NAME
+static inline bool arch_syscall_match_sym_name(const char *sym, const char *name)
+{
+	/*
+	 * Compare the symbol name with the system call name. Skip the
+	 * "__x64_sys", "__ia32_sys" or simple "sys" prefix.
+	 */
+	return !strcmp(sym + 3, name + 3) ||
+		(!strncmp(sym, "__x64_", 6) && !strcmp(sym + 9, name + 3)) ||
+		(!strncmp(sym, "__ia32_", 7) && !strcmp(sym + 10, name + 3));
+}
+
+#ifndef COMPILE_OFFSETS
 
 #if defined(CONFIG_FTRACE_SYSCALLS) && defined(CONFIG_IA32_EMULATION)
 #include <asm/compat.h>
@@ -81,6 +67,7 @@
 	return false;
 }
 #endif /* CONFIG_FTRACE_SYSCALLS && CONFIG_IA32_EMULATION */
-#endif /* !__ASSEMBLY__ && !COMPILE_OFFSETS */
+#endif /* !COMPILE_OFFSETS */
+#endif /* !__ASSEMBLY__ */
 
 #endif /* _ASM_X86_FTRACE_H */
-7
arch/x86/include/asm/irq_vectors.h
@@ -34,11 +34,6 @@
  * (0x80 is the syscall vector, 0x30-0x3f are for ISA)
  */
 #define FIRST_EXTERNAL_VECTOR		0x20
-/*
- * We start allocating at 0x21 to spread out vectors evenly between
- * priority levels. (0x80 is the syscall vector)
- */
-#define VECTOR_OFFSET_START		1
 
 /*
  * Reserve the lowest usable vector (and hence lowest priority)  0x20 for
@@ -113,8 +118,6 @@
 #else
 #define FIRST_SYSTEM_VECTOR		NR_VECTORS
 #endif
-
-#define FPU_IRQ				13
 
 /*
 * Size the maximum number of interrupts.
@@ -1 +1,32 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+#ifndef __ASM_X64_MSGBUF_H
+#define __ASM_X64_MSGBUF_H
+
+#if !defined(__x86_64__) || !defined(__ILP32__)
 #include <asm-generic/msgbuf.h>
+#else
+/*
+ * The msqid64_ds structure for x86 architecture with x32 ABI.
+ *
+ * On x86-32 and x86-64 we can just use the generic definition, but
+ * x32 uses the same binary layout as x86_64, which is different
+ * from other 32-bit architectures.
+ */
+
+struct msqid64_ds {
+	struct ipc64_perm msg_perm;
+	__kernel_time_t msg_stime;	/* last msgsnd time */
+	__kernel_time_t msg_rtime;	/* last msgrcv time */
+	__kernel_time_t msg_ctime;	/* last change time */
+	__kernel_ulong_t msg_cbytes;	/* current number of bytes on queue */
+	__kernel_ulong_t msg_qnum;	/* number of messages in queue */
+	__kernel_ulong_t msg_qbytes;	/* max number of bytes on queue */
+	__kernel_pid_t msg_lspid;	/* pid of last msgsnd */
+	__kernel_pid_t msg_lrpid;	/* last receive pid */
+	__kernel_ulong_t __unused4;
+	__kernel_ulong_t __unused5;
+};
+
+#endif
+
+#endif /* __ASM_X64_MSGBUF_H */
+42
arch/x86/include/uapi/asm/shmbuf.h
@@ -1 +1,43 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+#ifndef __ASM_X86_SHMBUF_H
+#define __ASM_X86_SHMBUF_H
+
+#if !defined(__x86_64__) || !defined(__ILP32__)
 #include <asm-generic/shmbuf.h>
+#else
+/*
+ * The shmid64_ds structure for x86 architecture with x32 ABI.
+ *
+ * On x86-32 and x86-64 we can just use the generic definition, but
+ * x32 uses the same binary layout as x86_64, which is different
+ * from other 32-bit architectures.
+ */
+
+struct shmid64_ds {
+	struct ipc64_perm	shm_perm;	/* operation perms */
+	size_t			shm_segsz;	/* size of segment (bytes) */
+	__kernel_time_t		shm_atime;	/* last attach time */
+	__kernel_time_t		shm_dtime;	/* last detach time */
+	__kernel_time_t		shm_ctime;	/* last change time */
+	__kernel_pid_t		shm_cpid;	/* pid of creator */
+	__kernel_pid_t		shm_lpid;	/* pid of last operator */
+	__kernel_ulong_t	shm_nattch;	/* no. of current attaches */
+	__kernel_ulong_t	__unused4;
+	__kernel_ulong_t	__unused5;
+};
+
+struct shminfo64 {
+	__kernel_ulong_t	shmmax;
+	__kernel_ulong_t	shmmin;
+	__kernel_ulong_t	shmmni;
+	__kernel_ulong_t	shmseg;
+	__kernel_ulong_t	shmall;
+	__kernel_ulong_t	__unused1;
+	__kernel_ulong_t	__unused2;
+	__kernel_ulong_t	__unused3;
+	__kernel_ulong_t	__unused4;
+};
+
+#endif
+
+#endif /* __ASM_X86_SHMBUF_H */
@@ -50,6 +50,7 @@
 #include <linux/init_ohci1394_dma.h>
 #include <linux/kvm_para.h>
 #include <linux/dma-contiguous.h>
+#include <xen/xen.h>
 
 #include <linux/errno.h>
 #include <linux/kernel.h>
@@ -533,6 +532,11 @@
 		if (ret != 0 || crash_size <= 0)
 			return;
 		high = true;
+	}
+
+	if (xen_pv_domain()) {
+		pr_info("Ignoring crashkernel for a Xen PV domain\n");
+		return;
 	}
 
 	/* 0 means: find the address automatically */
+2
arch/x86/kernel/smpboot.c
@@ -1571,6 +1571,8 @@
 	void *mwait_ptr;
 	int i;
 
+	if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD)
+		return;
 	if (!this_cpu_has(X86_FEATURE_MWAIT))
 		return;
 	if (!this_cpu_has(X86_FEATURE_CLFLUSH))
+4-10
arch/x86/kvm/vmx.c
@@ -4544,12 +4544,6 @@
 	__vmx_flush_tlb(vcpu, to_vmx(vcpu)->vpid, invalidate_gpa);
 }
 
-static void vmx_flush_tlb_ept_only(struct kvm_vcpu *vcpu)
-{
-	if (enable_ept)
-		vmx_flush_tlb(vcpu, true);
-}
-
 static void vmx_decache_cr0_guest_bits(struct kvm_vcpu *vcpu)
 {
 	ulong cr0_guest_owned_bits = vcpu->arch.cr0_guest_owned_bits;
@@ -9272,7 +9278,7 @@
 	} else {
 		sec_exec_control &= ~SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE;
 		sec_exec_control |= SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
-		vmx_flush_tlb_ept_only(vcpu);
+		vmx_flush_tlb(vcpu, true);
 	}
 	vmcs_write32(SECONDARY_VM_EXEC_CONTROL, sec_exec_control);
 
@@ -9300,7 +9306,7 @@
 	    !nested_cpu_has2(get_vmcs12(&vmx->vcpu),
 			     SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES)) {
 		vmcs_write64(APIC_ACCESS_ADDR, hpa);
-		vmx_flush_tlb_ept_only(vcpu);
+		vmx_flush_tlb(vcpu, true);
 	}
 }
 
@@ -11214,8 +11220,8 @@
 		}
 	} else if (nested_cpu_has2(vmcs12,
				   SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES)) {
-		vmx_flush_tlb_ept_only(vcpu);
+		vmx_flush_tlb(vcpu, true);
 	}
 
 	/*
@@ -12067,7 +12073,7 @@
 	} else if (!nested_cpu_has_ept(vmcs12) &&
 		   nested_cpu_has2(vmcs12,
 				   SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES)) {
-		vmx_flush_tlb_ept_only(vcpu);
+		vmx_flush_tlb(vcpu, true);
 	}
 
 	/* This is needed for same reason as it was needed in prepare_vmcs02 */
@@ -93,6 +93,18 @@
 static inline void split_page_count(int level) { }
 #endif
 
+static inline int
+within(unsigned long addr, unsigned long start, unsigned long end)
+{
+	return addr >= start && addr < end;
+}
+
+static inline int
+within_inclusive(unsigned long addr, unsigned long start, unsigned long end)
+{
+	return addr >= start && addr <= end;
+}
+
 #ifdef CONFIG_X86_64
 
 static inline unsigned long highmap_start_pfn(void)
@@ -118,19 +106,24 @@
 	return __pa_symbol(roundup(_brk_end, PMD_SIZE) - 1) >> PAGE_SHIFT;
 }
 
+static bool __cpa_pfn_in_highmap(unsigned long pfn)
+{
+	/*
+	 * Kernel text has an alias mapping at a high address, known
+	 * here as "highmap".
+	 */
+	return within_inclusive(pfn, highmap_start_pfn(), highmap_end_pfn());
+}
+
+#else
+
+static bool __cpa_pfn_in_highmap(unsigned long pfn)
+{
+	/* There is no highmap on 32-bit */
+	return false;
+}
+
 #endif
-
-static inline int
-within(unsigned long addr, unsigned long start, unsigned long end)
-{
-	return addr >= start && addr < end;
-}
-
-static inline int
-within_inclusive(unsigned long addr, unsigned long start, unsigned long end)
-{
-	return addr >= start && addr <= end;
-}
 
 /*
  * Flushing functions
@@ -189,7 +172,7 @@
 
 static void cpa_flush_all(unsigned long cache)
 {
-	BUG_ON(irqs_disabled());
+	BUG_ON(irqs_disabled() && !early_boot_irqs_disabled);
 
 	on_each_cpu(__cpa_flush_all, (void *) cache, 1);
 }
@@ -253,7 +236,7 @@
 	unsigned long do_wbinvd = cache && numpages >= 1024; /* 4M threshold */
 #endif
 
-	BUG_ON(irqs_disabled());
+	BUG_ON(irqs_disabled() && !early_boot_irqs_disabled);
 
 	on_each_cpu(__cpa_flush_all, (void *) do_wbinvd, 1);
@@ -1200,6 +1183,10 @@
 		cpa->numpages = 1;
 		cpa->pfn = __pa(vaddr) >> PAGE_SHIFT;
 		return 0;
+
+	} else if (__cpa_pfn_in_highmap(cpa->pfn)) {
+		/* Faults in the highmap are OK, so do not warn: */
+		return -EFAULT;
 	} else {
 		WARN(1, KERN_WARNING "CPA: called for zero pte. "
 		     "vaddr = %lx cpa->vaddr = %lx\n", vaddr,
@@ -1356,8 +1335,7 @@
 	 * to touch the high mapped kernel as well:
 	 */
 	if (!within(vaddr, (unsigned long)_text, _brk_end) &&
-	    within_inclusive(cpa->pfn, highmap_start_pfn(),
-			     highmap_end_pfn())) {
+	    __cpa_pfn_in_highmap(cpa->pfn)) {
 		unsigned long temp_cpa_vaddr = (cpa->pfn << PAGE_SHIFT) +
 					       __START_KERNEL_map - phys_base;
 		alias_cpa = *cpa;
+23-3
arch/x86/mm/pti.c
@@ -421,6 +421,16 @@
 	if (boot_cpu_has(X86_FEATURE_K8))
 		return false;
 
+	/*
+	 * RANDSTRUCT derives its hardening benefits from the
+	 * attacker's lack of knowledge about the layout of kernel
+	 * data structures.  Keep the kernel image non-global in
+	 * cases where RANDSTRUCT is in use to help keep the layout a
+	 * secret.
+	 */
+	if (IS_ENABLED(CONFIG_GCC_PLUGIN_RANDSTRUCT))
+		return false;
+
 	return true;
 }
 
@@ -440,12 +430,24 @@
  */
 void pti_clone_kernel_text(void)
 {
+	/*
+	 * rodata is part of the kernel image and is normally
+	 * readable on the filesystem or on the web.  But, do not
+	 * clone the areas past rodata, they might contain secrets.
+	 */
 	unsigned long start = PFN_ALIGN(_text);
-	unsigned long end = ALIGN((unsigned long)_end, PMD_PAGE_SIZE);
+	unsigned long end = (unsigned long)__end_rodata_hpage_align;
 
 	if (!pti_kernel_image_global_ok())
 		return;
 
+	pr_debug("mapping partial kernel image into user address space\n");
+
+	/*
+	 * Note that this will undo _some_ of the work that
+	 * pti_set_kernel_image_nonglobal() did to clear the
+	 * global bit.
+	 */
 	pti_clone_pmds(start, end, _PAGE_RW);
 }
 
@@ -479,8 +457,6 @@
 
 	if (pti_kernel_image_global_ok())
 		return;
-
-	pr_debug("set kernel image non-global\n");
 
 	set_memory_nonglobal(start, (end - start) >> PAGE_SHIFT);
 }
+9-1
block/bfq-iosched.c
@@ -4934,8 +4934,16 @@
 	bool new_queue = false;
 	bool bfqq_already_existing = false, split = false;
 
-	if (!rq->elv.icq)
+	/*
+	 * Even if we don't have an icq attached, we should still clear
+	 * the scheduler pointers, as they might point to previously
+	 * allocated bic/bfqq structs.
+	 */
+	if (!rq->elv.icq) {
+		rq->elv.priv[0] = rq->elv.priv[1] = NULL;
 		return;
+	}
+
 	bic = icq_to_bic(rq->elv.icq);
 
 	spin_lock_irq(&bfqd->lock);
+12-16
block/blk-cgroup.c
@@ -1177,25 +1177,19 @@
 
 	preloaded = !radix_tree_preload(GFP_KERNEL);
 
-	/*
-	 * Make sure the root blkg exists and count the existing blkgs.  As
-	 * @q is bypassing at this point, blkg_lookup_create() can't be
-	 * used.  Open code insertion.
-	 */
+	/* Make sure the root blkg exists. */
 	rcu_read_lock();
 	spin_lock_irq(q->queue_lock);
 	blkg = blkg_create(&blkcg_root, q, new_blkg);
+	if (IS_ERR(blkg))
+		goto err_unlock;
+	q->root_blkg = blkg;
+	q->root_rl.blkg = blkg;
 	spin_unlock_irq(q->queue_lock);
 	rcu_read_unlock();
 
 	if (preloaded)
 		radix_tree_preload_end();
-
-	if (IS_ERR(blkg))
-		return PTR_ERR(blkg);
-
-	q->root_blkg = blkg;
-	q->root_rl.blkg = blkg;
 
 	ret = blk_throtl_init(q);
 	if (ret) {
@@ -1198,6 +1204,13 @@
 		spin_unlock_irq(q->queue_lock);
 	}
 	return ret;
+
+err_unlock:
+	spin_unlock_irq(q->queue_lock);
+	rcu_read_unlock();
+	if (preloaded)
+		radix_tree_preload_end();
+	return PTR_ERR(blkg);
 }
 
 /**
@@ -1411,9 +1410,6 @@
 	__clear_bit(pol->plid, q->blkcg_pols);
 
 	list_for_each_entry(blkg, &q->blkg_list, q_node) {
-		/* grab blkcg lock too while removing @pd from @blkg */
-		spin_lock(&blkg->blkcg->lock);
-
 		if (blkg->pd[pol->plid]) {
 			if (!blkg->pd[pol->plid]->offline &&
 			    pol->pd_offline_fn) {
@@ -1420,8 +1422,6 @@
 			pol->pd_free_fn(blkg->pd[pol->plid]);
 			blkg->pd[pol->plid] = NULL;
 		}
-
-		spin_unlock(&blkg->blkcg->lock);
 	}
 
 	spin_unlock_irq(q->queue_lock);
+8-7
block/blk-core.c
@@ -201,6 +201,10 @@
 	rq->part = NULL;
 	seqcount_init(&rq->gstate_seq);
 	u64_stats_init(&rq->aborted_gstate_sync);
+	/*
+	 * See comment of blk_mq_init_request
+	 */
+	WRITE_ONCE(rq->gstate, MQ_RQ_GEN_INC);
 }
 EXPORT_SYMBOL(blk_rq_init);
 
@@ -919,7 +915,6 @@
 
 	while (true) {
 		bool success = false;
-		int ret;
 
 		rcu_read_lock();
 		if (percpu_ref_tryget_live(&q->q_usage_counter)) {
@@ -950,14 +947,11 @@
 		 */
 		smp_rmb();
 
-		ret = wait_event_interruptible(q->mq_freeze_wq,
-				(atomic_read(&q->mq_freeze_depth) == 0 &&
-				 (preempt || !blk_queue_preempt_only(q))) ||
-				blk_queue_dying(q));
+		wait_event(q->mq_freeze_wq,
+			   (atomic_read(&q->mq_freeze_depth) == 0 &&
+			    (preempt || !blk_queue_preempt_only(q))) ||
+			   blk_queue_dying(q));
 		if (blk_queue_dying(q))
 			return -ENODEV;
-		if (ret)
-			return ret;
 	}
 }
 
+38-3
block/blk-mq.c
@@ -2042,6 +2042,13 @@
 
 	seqcount_init(&rq->gstate_seq);
 	u64_stats_init(&rq->aborted_gstate_sync);
+	/*
+	 * Start gstate with gen 1 instead of 0, otherwise it will be
+	 * equal to aborted_gstate, and be identified as timed out by
+	 * blk_mq_terminate_expired.
+	 */
+	WRITE_ONCE(rq->gstate, MQ_RQ_GEN_INC);
+
 	return 0;
 }
 
@@ -2336,7 +2329,7 @@
 
 static void blk_mq_map_swqueue(struct request_queue *q)
 {
-	unsigned int i;
+	unsigned int i, hctx_idx;
 	struct blk_mq_hw_ctx *hctx;
 	struct blk_mq_ctx *ctx;
 	struct blk_mq_tag_set *set = q->tag_set;
@@ -2353,8 +2346,23 @@
 
 	/*
 	 * Map software to hardware queues.
+	 *
+	 * If the cpu isn't present, the cpu is mapped to first hctx.
 	 */
 	for_each_possible_cpu(i) {
+		hctx_idx = q->mq_map[i];
+		/* unmapped hw queue can be remapped after CPU topo changed */
+		if (!set->tags[hctx_idx] &&
+		    !__blk_mq_alloc_rq_map(set, hctx_idx)) {
+			/*
+			 * If tags initialization fails for some hctx,
+			 * that hctx won't be brought online.  In this
+			 * case, remap the current ctx to hctx[0] which
+			 * is guaranteed to always have tags allocated.
+			 */
+			q->mq_map[i] = 0;
+		}
+
 		ctx = per_cpu_ptr(q->queue_ctx, i);
 		hctx = blk_mq_map_queue(q, i);
 
@@ -2381,8 +2359,21 @@
 	mutex_unlock(&q->sysfs_lock);
 
 	queue_for_each_hw_ctx(q, hctx, i) {
-		/* every hctx should get mapped by at least one CPU */
-		WARN_ON(!hctx->nr_ctx);
+		/*
+		 * If no software queues are mapped to this hardware queue,
+		 * disable it and free the request entries.
+		 */
+		if (!hctx->nr_ctx) {
+			/*
+			 * Never unmap queue 0.  We need it as a
+			 * fallback in case a new remap fails to
+			 * allocate.
+			 */
+			if (i && set->tags[i])
+				blk_mq_free_map_and_requests(set, i);
+
+			hctx->tags = NULL;
+			continue;
+		}
 
 		hctx->tags = set->tags[i];
 		WARN_ON(!hctx->tags);
+3
block/blk-mq.h
@@ -7,6 +7,9 @@
 
 struct blk_mq_tag_set;
 
+/**
+ * struct blk_mq_ctx - State for a software queue facing the submitting CPUs
+ */
 struct blk_mq_ctx {
 	struct {
 		spinlock_t		lock;
@@ -2123,6 +2123,25 @@
 	return opregion;
 }
 
+static bool dmi_is_desktop(void)
+{
+	const char *chassis_type;
+
+	chassis_type = dmi_get_system_info(DMI_CHASSIS_TYPE);
+	if (!chassis_type)
+		return false;
+
+	if (!strcmp(chassis_type, "3") ||	/*  3: Desktop */
+	    !strcmp(chassis_type, "4") ||	/*  4: Low Profile Desktop */
+	    !strcmp(chassis_type, "5") ||	/*  5: Pizza Box */
+	    !strcmp(chassis_type, "6") ||	/*  6: Mini Tower */
+	    !strcmp(chassis_type, "7") ||	/*  7: Tower */
+	    !strcmp(chassis_type, "11"))	/* 11: Main Server Chassis */
+		return true;
+
+	return false;
+}
+
 int acpi_video_register(void)
 {
 	int ret = 0;
@@ -2162,8 +2143,11 @@
 	 * win8 ready (where we also prefer the native backlight driver, so
 	 * normally the acpi_video code should not register there anyways).
 	 */
-	if (only_lcd == -1)
-		only_lcd = acpi_osi_is_win8();
+	if (only_lcd == -1) {
+		if (dmi_is_desktop() && acpi_osi_is_win8())
+			only_lcd = true;
+		else
+			only_lcd = false;
+	}
 
 	dmi_check_system(video_dmi_table);
 
+49-10
drivers/acpi/acpi_watchdog.c
@@ -12,8 +12,54 @@
 #define pr_fmt(fmt) "ACPI: watchdog: " fmt
 
 #include <linux/acpi.h>
+#include <linux/dmi.h>
 #include <linux/ioport.h>
 #include <linux/platform_device.h>
 
 #include "internal.h"
+
+static const struct dmi_system_id acpi_watchdog_skip[] = {
+	{
+		/*
+		 * On Lenovo Z50-70 there are two issues with the WDAT
+		 * table.  First some of the instructions use RTC SRAM
+		 * to store persistent information.  This does not work well
+		 * with Linux RTC driver.  Second, more important thing is
+		 * that the instructions do not actually reset the system.
+		 *
+		 * On this particular system iTCO_wdt seems to work just
+		 * fine so we prefer that over WDAT for now.
+		 *
+		 * See also https://bugzilla.kernel.org/show_bug.cgi?id=199033.
+		 */
+		.ident = "Lenovo Z50-70",
+		.matches = {
+			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+			DMI_MATCH(DMI_PRODUCT_NAME, "20354"),
+			DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo Z50-70"),
+		},
+	},
+	{}
+};
+
+static const struct acpi_table_wdat *acpi_watchdog_get_wdat(void)
+{
+	const struct acpi_table_wdat *wdat = NULL;
+	acpi_status status;
+
+	if (acpi_disabled)
+		return NULL;
+
+	if (dmi_check_system(acpi_watchdog_skip))
+		return NULL;
+
+	status = acpi_get_table(ACPI_SIG_WDAT, 0,
+				(struct acpi_table_header **)&wdat);
+	if (ACPI_FAILURE(status)) {
+		/* It is fine if there is no WDAT */
+		return NULL;
+	}
+
+	return wdat;
+}
 
 /**
  * Returns true if this system should prefer ACPI based watchdog instead of
@@ -69,12 +23,7 @@
  */
 bool acpi_has_watchdog(void)
 {
-	struct acpi_table_header hdr;
-
-	if (acpi_disabled)
-		return false;
-
-	return ACPI_SUCCESS(acpi_get_table_header(ACPI_SIG_WDAT, 0, &hdr));
+	return !!acpi_watchdog_get_wdat();
 }
 EXPORT_SYMBOL_GPL(acpi_has_watchdog);
 
@@ -82,12 +41,10 @@
 	struct platform_device *pdev;
 	struct resource *resources;
 	size_t nresources = 0;
-	acpi_status status;
 	int i;
 
-	status = acpi_get_table(ACPI_SIG_WDAT, 0,
-				(struct acpi_table_header **)&wdat);
-	if (ACPI_FAILURE(status)) {
+	wdat = acpi_watchdog_get_wdat();
+	if (!wdat) {
 		/* It is fine if there is no WDAT */
 		return;
 	}
+23-1
drivers/acpi/button.c
@@ -635,4 +635,26 @@
 		  NULL, 0644);
 MODULE_PARM_DESC(lid_init_state, "Behavior for reporting LID initial state");
 
-module_acpi_driver(acpi_button_driver);
+static int acpi_button_register_driver(struct acpi_driver *driver)
+{
+	/*
+	 * Modules such as nouveau.ko and i915.ko have a link time dependency
+	 * on acpi_lid_open(), and would therefore not be loadable on ACPI
+	 * capable kernels booted in non-ACPI mode if the return value of
+	 * acpi_bus_register_driver() is returned from here with ACPI disabled
+	 * when this driver is built as a module.
+	 */
+	if (acpi_disabled)
+		return 0;
+
+	return acpi_bus_register_driver(driver);
+}
+
+static void acpi_button_unregister_driver(struct acpi_driver *driver)
+{
+	if (!acpi_disabled)
+		acpi_bus_unregister_driver(driver);
+}
+
+module_driver(acpi_button_driver, acpi_button_register_driver,
+	       acpi_button_unregister_driver);
@@ -69,11 +69,13 @@
 			     struct device_attribute *attr, char *buf)
 {
 	struct amba_device *dev = to_amba_device(_dev);
+	ssize_t len;
 
-	if (!dev->driver_override)
-		return 0;
-
-	return sprintf(buf, "%s\n", dev->driver_override);
+	device_lock(_dev);
+	len = sprintf(buf, "%s\n", dev->driver_override);
+	device_unlock(_dev);
+	return len;
 }
 
 static ssize_t driver_override_store(struct device *_dev,
@@ -82,10 +81,11 @@
 				     const char *buf, size_t count)
 {
 	struct amba_device *dev = to_amba_device(_dev);
-	char *driver_override, *old = dev->driver_override, *cp;
+	char *driver_override, *old, *cp;
 
-	if (count > PATH_MAX)
+	/* We need to keep extra room for a newline */
+	if (count >= (PAGE_SIZE - 1))
 		return -EINVAL;
 
 	driver_override = kstrndup(buf, count, GFP_KERNEL);
@@ -96,12 +94,15 @@
 	if (cp)
 		*cp = '\0';
 
+	device_lock(_dev);
+	old = dev->driver_override;
 	if (strlen(driver_override)) {
 		dev->driver_override = driver_override;
 	} else {
 		kfree(driver_override);
 		dev->driver_override = NULL;
 	}
+	device_unlock(_dev);
 
 	kfree(old);
 
+8
drivers/android/binder.c
@@ -2839,6 +2839,14 @@
 		else
 			return_error = BR_DEAD_REPLY;
 		mutex_unlock(&context->context_mgr_node_lock);
+		if (target_node && target_proc == proc) {
+			binder_user_error("%d:%d got transaction to context manager from process owning it\n",
+					  proc->pid, thread->pid);
+			return_error = BR_FAILED_REPLY;
+			return_error_param = -EINVAL;
+			return_error_line = __LINE__;
+			goto err_invalid_target_handle;
+		}
 	}
 	if (!target_node) {
 		/*
+3-2
drivers/base/dma-coherent.c
@@ -312,8 +312,9 @@
  * This checks whether the memory was allocated from the per-device
  * coherent memory pool and if so, maps that memory to the provided vma.
  *
- * Returns 1 if we correctly mapped the memory, or 0 if the caller should
- * proceed with mapping memory from generic pools.
+ * Returns 1 if @vaddr belongs to the device coherent pool and the caller
+ * should return @ret, or 0 if they should proceed with mapping memory from
+ * generic areas.
 */
 int dma_mmap_from_dev_coherent(struct device *dev, struct vm_area_struct *vma,
 			       void *vaddr, size_t size, int *ret)
+2-4
drivers/base/dma-mapping.c
@@ -226,7 +226,6 @@
 #ifndef CONFIG_ARCH_NO_COHERENT_DMA_MMAP
 	unsigned long user_count = vma_pages(vma);
 	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
-	unsigned long pfn = page_to_pfn(virt_to_page(cpu_addr));
 	unsigned long off = vma->vm_pgoff;
 
 	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
@@ -233,12 +234,11 @@
 	if (dma_mmap_from_dev_coherent(dev, vma, cpu_addr, size, &ret))
 		return ret;
 
-	if (off < count && user_count <= (count - off)) {
+	if (off < count && user_count <= (count - off))
 		ret = remap_pfn_range(vma, vma->vm_start,
-				      pfn + off,
+				      page_to_pfn(virt_to_page(cpu_addr)) + off,
 				      user_count << PAGE_SHIFT,
 				      vma->vm_page_prot);
-	}
 #endif	/* !CONFIG_ARCH_NO_COHERENT_DMA_MMAP */
 
 	return ret;
+2-2
drivers/base/firmware_loader/fallback.c
@@ -537,8 +537,8 @@
 }
 
 /**
- * fw_load_sysfs_fallback - load a firmware via the syfs fallback mechanism
- * @fw_sysfs: firmware syfs information for the firmware to load
+ * fw_load_sysfs_fallback - load a firmware via the sysfs fallback mechanism
+ * @fw_sysfs: firmware sysfs information for the firmware to load
 * @opt_flags: flags of options, FW_OPT_*
 * @timeout: timeout to wait for the load
 *
+1-1
drivers/base/firmware_loader/fallback.h
@@ -6,7 +6,7 @@
 #include <linux/device.h>
 
 /**
- * struct firmware_fallback_config - firmware fallback configuratioon settings
+ * struct firmware_fallback_config - firmware fallback configuration settings
 *
 * Helps describe and fine tune the fallback mechanism.
 *
+43-21
drivers/block/loop.c
@@ -451,25 +451,47 @@
 static void lo_complete_rq(struct request *rq)
 {
 	struct loop_cmd *cmd = blk_mq_rq_to_pdu(rq);
+	blk_status_t ret = BLK_STS_OK;
 
-	if (unlikely(req_op(cmd->rq) == REQ_OP_READ && cmd->use_aio &&
-		     cmd->ret >= 0 && cmd->ret < blk_rq_bytes(cmd->rq))) {
-		struct bio *bio = cmd->rq->bio;
-
-		bio_advance(bio, cmd->ret);
-		zero_fill_bio(bio);
+	if (!cmd->use_aio || cmd->ret < 0 || cmd->ret == blk_rq_bytes(rq) ||
+	    req_op(rq) != REQ_OP_READ) {
+		if (cmd->ret < 0)
+			ret = BLK_STS_IOERR;
+		goto end_io;
 	}
 
-	blk_mq_end_request(rq, cmd->ret < 0 ? BLK_STS_IOERR : BLK_STS_OK);
+	/*
+	 * Short READ - if we got some data, advance our request and
+	 * retry it. If we got no data, end the rest with EIO.
+	 */
+	if (cmd->ret) {
+		blk_update_request(rq, BLK_STS_OK, cmd->ret);
+		cmd->ret = 0;
+		blk_mq_requeue_request(rq, true);
+	} else {
+		if (cmd->use_aio) {
+			struct bio *bio = rq->bio;
+
+			while (bio) {
+				zero_fill_bio(bio);
+				bio = bio->bi_next;
+			}
+		}
+		ret = BLK_STS_IOERR;
+end_io:
+		blk_mq_end_request(rq, ret);
+	}
 }
 
 static void lo_rw_aio_do_completion(struct loop_cmd *cmd)
 {
+	struct request *rq = blk_mq_rq_from_pdu(cmd);
+
 	if (!atomic_dec_and_test(&cmd->ref))
 		return;
 	kfree(cmd->bvec);
 	cmd->bvec = NULL;
-	blk_mq_complete_request(cmd->rq);
+	blk_mq_complete_request(rq);
 }
 
 static void lo_rw_aio_complete(struct kiocb *iocb, long ret, long ret2)
@@ -509,7 +487,7 @@
 {
 	struct iov_iter iter;
 	struct bio_vec *bvec;
-	struct request *rq = cmd->rq;
+	struct request *rq = blk_mq_rq_from_pdu(cmd);
 	struct bio *bio = rq->bio;
 	struct file *file = lo->lo_backing_file;
 	unsigned int offset;
@@ -1724,15 +1702,16 @@
 static blk_status_t loop_queue_rq(struct blk_mq_hw_ctx *hctx,
 		const struct blk_mq_queue_data *bd)
 {
-	struct loop_cmd *cmd = blk_mq_rq_to_pdu(bd->rq);
-	struct loop_device *lo = cmd->rq->q->queuedata;
+	struct request *rq = bd->rq;
+	struct loop_cmd *cmd = blk_mq_rq_to_pdu(rq);
+	struct loop_device *lo = rq->q->queuedata;
 
-	blk_mq_start_request(bd->rq);
+	blk_mq_start_request(rq);
 
 	if (lo->lo_state != Lo_bound)
 		return BLK_STS_IOERR;
 
-	switch (req_op(cmd->rq)) {
+	switch (req_op(rq)) {
 	case REQ_OP_FLUSH:
 	case REQ_OP_DISCARD:
 	case REQ_OP_WRITE_ZEROES:
@@ -1746,8 +1723,8 @@
 
 	/* always use the first bio's css */
 #ifdef CONFIG_BLK_CGROUP
-	if (cmd->use_aio && cmd->rq->bio && cmd->rq->bio->bi_css) {
-		cmd->css = cmd->rq->bio->bi_css;
+	if (cmd->use_aio && rq->bio && rq->bio->bi_css) {
+		cmd->css = rq->bio->bi_css;
 		css_get(cmd->css);
 	} else
 #endif
@@ -1759,8 +1736,9 @@
 
 static void loop_handle_cmd(struct loop_cmd *cmd)
 {
-	const bool write = op_is_write(req_op(cmd->rq));
-	struct loop_device *lo = cmd->rq->q->queuedata;
+	struct request *rq = blk_mq_rq_from_pdu(cmd);
+	const bool write = op_is_write(req_op(rq));
+	struct loop_device *lo = rq->q->queuedata;
 	int ret = 0;
 
 	if (write && (lo->lo_flags & LO_FLAGS_READ_ONLY)) {
@@ -1769,12 +1745,12 @@
 		goto failed;
 	}
 
-	ret = do_req_filebacked(lo, cmd->rq);
+	ret = do_req_filebacked(lo, rq);
 failed:
 	/* complete non-aio request */
 	if (!cmd->use_aio || ret) {
 		cmd->ret = ret ? -EIO : 0;
-		blk_mq_complete_request(cmd->rq);
+		blk_mq_complete_request(rq);
 	}
 }
 
@@ -1791,9 +1767,7 @@
 {
 	struct loop_cmd *cmd = blk_mq_rq_to_pdu(rq);
 
-	cmd->rq = rq;
 	kthread_init_work(&cmd->work, loop_queue_work);
-
 	return 0;
 }
 
-1
drivers/block/loop.h
@@ -66,7 +66,6 @@
 
 struct loop_cmd {
 	struct kthread_work work;
-	struct request *rq;
 	bool use_aio; /* use AIO interface to handle I/O */
 	atomic_t ref; /* only for aio */
 	long ret;
@@ -148,7 +148,7 @@
 #define MOTOR_ON	2
 #define RELAX		3	/* also eject in progress */
 #define READ_DATA_0	4
-#define TWOMEG_DRIVE	5
+#define ONEMEG_DRIVE	5
 #define SINGLE_SIDED	6	/* drive or diskette is 4MB type? */
 #define DRIVE_PRESENT	7
 #define DISK_IN		8
@@ -156,9 +156,9 @@
 #define TRACK_ZERO	10
 #define TACHO		11
 #define READ_DATA_1	12
-#define MFM_MODE	13
+#define GCR_MODE	13
 #define SEEK_COMPLETE	14
-#define ONEMEG_MEDIA	15
+#define TWOMEG_MEDIA	15
 
 /* Definitions of values used in writing and formatting */
 #define DATA_ESCAPE	0x99
+1
drivers/bus/Kconfig
@@ -33,6 +33,7 @@
 	bool "Support for ISA I/O space on HiSilicon Hip06/7"
 	depends on ARM64 && (ARCH_HISI || COMPILE_TEST)
 	select INDIRECT_PIO
+	select MFD_CORE if ACPI
 	help
 	  Driver to enable I/O access to devices attached to the Low Pin
 	  Count bus on the HiSilicon Hip06/7 SoC.
+1-1
drivers/cdrom/cdrom.c
@@ -2371,7 +2371,7 @@
 	if (!CDROM_CAN(CDC_SELECT_DISC) || arg == CDSL_CURRENT)
 		return media_changed(cdi, 1);
 
-	if ((unsigned int)arg >= cdi->capacity)
+	if (arg >= cdi->capacity)
 		return -EINVAL;
 
 	info = kmalloc(sizeof(*info), GFP_KERNEL);
@@ -422,7 +422,7 @@
 	}
 }
 
-static struct port_buffer *alloc_buf(struct virtqueue *vq, size_t buf_size,
+static struct port_buffer *alloc_buf(struct virtio_device *vdev, size_t buf_size,
 				     int pages)
 {
 	struct port_buffer *buf;
@@ -445,13 +445,13 @@
 		return buf;
 	}
 
-	if (is_rproc_serial(vq->vdev)) {
+	if (is_rproc_serial(vdev)) {
 		/*
 		 * Allocate DMA memory from ancestor. When a virtio
 		 * device is created by remoteproc, the DMA memory is
 		 * associated with the grandparent device:
 		 * vdev => rproc => platform-dev.
 		 */
-		if (!vq->vdev->dev.parent || !vq->vdev->dev.parent->parent)
+		if (!vdev->dev.parent || !vdev->dev.parent->parent)
 			goto free_buf;
-		buf->dev = vq->vdev->dev.parent->parent;
+		buf->dev = vdev->dev.parent->parent;
 
 		/* Increase device refcnt to avoid freeing it */
 		get_device(buf->dev);
@@ -838,7 +838,7 @@
 
 	count = min((size_t)(32 * 1024), count);
 
-	buf = alloc_buf(port->out_vq, count, 0);
+	buf = alloc_buf(port->portdev->vdev, count, 0);
 	if (!buf)
 		return -ENOMEM;
 
@@ -957,7 +957,7 @@
 	if (ret < 0)
 		goto error_out;
 
-	buf = alloc_buf(port->out_vq, 0, pipe->nrbufs);
+	buf = alloc_buf(port->portdev->vdev, 0, pipe->nrbufs);
 	if (!buf) {
 		ret = -ENOMEM;
 		goto error_out;
@@ -1374,7 +1374,7 @@
 
 	nr_added_bufs = 0;
 	do {
-		buf = alloc_buf(vq, PAGE_SIZE, 0);
+		buf = alloc_buf(vq->vdev, PAGE_SIZE, 0);
 		if (!buf)
 			break;
 
@@ -1402,7 +1402,6 @@
 {
 	char debugfs_name[16];
 	struct port *port;
-	struct port_buffer *buf;
 	dev_t devt;
 	unsigned int nr_added_bufs;
 	int err;
@@ -1512,8 +1513,6 @@
 	return 0;
 
 free_inbufs:
-	while ((buf = virtqueue_detach_unused_buf(port->in_vq)))
-		free_buf(buf, true);
 free_device:
 	device_destroy(pdrvdata.class, port->dev->devt);
 free_cdev:
@@ -1536,34 +1539,14 @@
 
 static void remove_port_data(struct port *port)
 {
-	struct port_buffer *buf;
-
 	spin_lock_irq(&port->inbuf_lock);
 	/* Remove unused data this port might have received. */
 	discard_port_data(port);
 	spin_unlock_irq(&port->inbuf_lock);
 
-	/* Remove buffers we queued up for the Host to send us data in. */
-	do {
-		spin_lock_irq(&port->inbuf_lock);
-		buf = virtqueue_detach_unused_buf(port->in_vq);
-		spin_unlock_irq(&port->inbuf_lock);
-		if (buf)
-			free_buf(buf, true);
-	} while (buf);
-
 	spin_lock_irq(&port->outvq_lock);
 	reclaim_consumed_buffers(port);
 	spin_unlock_irq(&port->outvq_lock);
-
-	/* Free pending buffers from the out-queue. */
-	do {
-		spin_lock_irq(&port->outvq_lock);
-		buf = virtqueue_detach_unused_buf(port->out_vq);
-		spin_unlock_irq(&port->outvq_lock);
-		if (buf)
-			free_buf(buf, true);
-	} while (buf);
 }
 
 /*
@@ -1768,16 +1791,26 @@
 	spin_unlock(&portdev->c_ivq_lock);
 }
 
+static void flush_bufs(struct virtqueue *vq, bool can_sleep)
+{
+	struct port_buffer *buf;
+	unsigned int len;
+
+	while ((buf = virtqueue_get_buf(vq, &len)))
+		free_buf(buf, can_sleep);
+}
+
 static void out_intr(struct virtqueue *vq)
 {
 	struct port *port;
 
 	port = find_port_by_vq(vq->vdev->priv, vq);
-	if (!port)
+	if (!port) {
+		flush_bufs(vq, false);
 		return;
+	}
 
 	wake_up_interruptible(&port->waitqueue);
 }
@@ -1796,8 +1808,10 @@
 	unsigned long flags;
 
 	port = find_port_by_vq(vq->vdev->priv, vq);
-	if (!port)
+	if (!port) {
+		flush_bufs(vq, false);
 		return;
+	}
 
 	spin_lock_irqsave(&port->inbuf_lock, flags);
 	port->inbuf = get_inbuf(port);
@@ -1974,2 +1984,2 @@
 
 static void 
remove_vqs(struct ports_device *portdev)19761986{19871987+ struct virtqueue *vq;19881988+19891989+ virtio_device_for_each_vq(portdev->vdev, vq) {19901990+ struct port_buffer *buf;19911991+19921992+ flush_bufs(vq, true);19931993+ while ((buf = virtqueue_detach_unused_buf(vq)))19941994+ free_buf(buf, true);19951995+ }19771996 portdev->vdev->config->del_vqs(portdev->vdev);19781997 kfree(portdev->in_vqs);19791998 kfree(portdev->out_vqs);19801999}1981200019821982-static void remove_controlq_data(struct ports_device *portdev)20012001+static void virtcons_remove(struct virtio_device *vdev)19832002{19841984- struct port_buffer *buf;19851985- unsigned int len;20032003+ struct ports_device *portdev;20042004+ struct port *port, *port2;1986200519871987- if (!use_multiport(portdev))19881988- return;20062006+ portdev = vdev->priv;1989200719901990- while ((buf = virtqueue_get_buf(portdev->c_ivq, &len)))19911991- free_buf(buf, true);20082008+ spin_lock_irq(&pdrvdata_lock);20092009+ list_del(&portdev->list);20102010+ spin_unlock_irq(&pdrvdata_lock);1992201119931993- while ((buf = virtqueue_detach_unused_buf(portdev->c_ivq)))19941994- free_buf(buf, true);20122012+ /* Disable interrupts for vqs */20132013+ vdev->config->reset(vdev);20142014+ /* Finish up work that's lined up */20152015+ if (use_multiport(portdev))20162016+ cancel_work_sync(&portdev->control_work);20172017+ else20182018+ cancel_work_sync(&portdev->config_work);20192019+20202020+ list_for_each_entry_safe(port, port2, &portdev->ports, list)20212021+ unplug_port(port);20222022+20232023+ unregister_chrdev(portdev->chr_major, "virtio-portsdev");20242024+20252025+ /*20262026+ * When yanking out a device, we immediately lose the20272027+ * (device-side) queues. So there's no point in keeping the20282028+ * guest side around till we drop our final reference. 
This20292029+ * also means that any ports which are in an open state will20302030+ * have to just stop using the port, as the vqs are going20312031+ * away.20322032+ */20332033+ remove_vqs(portdev);20342034+ kfree(portdev);19952035}1996203619972037/*···2090207020912071 spin_lock_init(&portdev->ports_lock);20922072 INIT_LIST_HEAD(&portdev->ports);20732073+ INIT_LIST_HEAD(&portdev->list);2093207420942075 virtio_device_ready(portdev->vdev);20952076···21082087 if (!nr_added_bufs) {21092088 dev_err(&vdev->dev,21102089 "Error allocating buffers for control queue\n");21112111- err = -ENOMEM;21122112- goto free_vqs;20902090+ /*20912091+ * The host might want to notify mgmt sw about device20922092+ * add failure.20932093+ */20942094+ __send_control_msg(portdev, VIRTIO_CONSOLE_BAD_ID,20952095+ VIRTIO_CONSOLE_DEVICE_READY, 0);20962096+ /* Device was functional: we need full cleanup. */20972097+ virtcons_remove(vdev);20982098+ return -ENOMEM;21132099 }21142100 } else {21152101 /*···2147211921482120 return 0;2149212121502150-free_vqs:21512151- /* The host might want to notify mgmt sw about device add failure */21522152- __send_control_msg(portdev, VIRTIO_CONSOLE_BAD_ID,21532153- VIRTIO_CONSOLE_DEVICE_READY, 0);21542154- remove_vqs(portdev);21552122free_chrdev:21562123 unregister_chrdev(portdev->chr_major, "virtio-portsdev");21572124free:21582125 kfree(portdev);21592126fail:21602127 return err;21612161-}21622162-21632163-static void virtcons_remove(struct virtio_device *vdev)21642164-{21652165- struct ports_device *portdev;21662166- struct port *port, *port2;21672167-21682168- portdev = vdev->priv;21692169-21702170- spin_lock_irq(&pdrvdata_lock);21712171- list_del(&portdev->list);21722172- spin_unlock_irq(&pdrvdata_lock);21732173-21742174- /* Disable interrupts for vqs */21752175- vdev->config->reset(vdev);21762176- /* Finish up work that's lined up */21772177- if (use_multiport(portdev))21782178- cancel_work_sync(&portdev->control_work);21792179- else21802180- 
cancel_work_sync(&portdev->config_work);21812181-21822182- list_for_each_entry_safe(port, port2, &portdev->ports, list)21832183- unplug_port(port);21842184-21852185- unregister_chrdev(portdev->chr_major, "virtio-portsdev");21862186-21872187- /*21882188- * When yanking out a device, we immediately lose the21892189- * (device-side) queues. So there's no point in keeping the21902190- * guest side around till we drop our final reference. This21912191- * also means that any ports which are in an open state will21922192- * have to just stop using the port, as the vqs are going21932193- * away.21942194- */21952195- remove_controlq_data(portdev);21962196- remove_vqs(portdev);21972197- kfree(portdev);21982128}2199212922002130static struct virtio_device_id id_table[] = {···21952209 */21962210 if (use_multiport(portdev))21972211 virtqueue_disable_cb(portdev->c_ivq);21982198- remove_controlq_data(portdev);2199221222002213 list_for_each_entry(port, &portdev->ports, list) {22012214 virtqueue_disable_cb(port->in_vq);
-10
drivers/cpufreq/Kconfig.arm
···71717272 Say Y, if you have a Broadcom SoC with AVS support for DFS or DVFS.73737474-config ARM_BRCMSTB_AVS_CPUFREQ_DEBUG7575- bool "Broadcom STB AVS CPUfreq driver sysfs debug capability"7676- depends on ARM_BRCMSTB_AVS_CPUFREQ7777- help7878- Enabling this option turns on debug support via sysfs under7979- /sys/kernel/debug/brcmstb-avs-cpufreq. It is possible to read all and8080- write some AVS mailbox registers through sysfs entries.8181-8282- If in doubt, say N.8383-8474config ARM_EXYNOS5440_CPUFREQ8575 tristate "SAMSUNG EXYNOS5440"8676 depends on SOC_EXYNOS5440
+1-322
drivers/cpufreq/brcmstb-avs-cpufreq.c
···4949#include <linux/platform_device.h>5050#include <linux/semaphore.h>51515252-#ifdef CONFIG_ARM_BRCMSTB_AVS_CPUFREQ_DEBUG5353-#include <linux/ctype.h>5454-#include <linux/debugfs.h>5555-#include <linux/slab.h>5656-#include <linux/uaccess.h>5757-#endif5858-5952/* Max number of arguments AVS calls take */6053#define AVS_MAX_CMD_ARGS 46154/*···175182 void __iomem *base;176183 void __iomem *avs_intr_base;177184 struct device *dev;178178-#ifdef CONFIG_ARM_BRCMSTB_AVS_CPUFREQ_DEBUG179179- struct dentry *debugfs;180180-#endif181185 struct completion done;182186 struct semaphore sem;183187 struct pmap pmap;184188};185185-186186-#ifdef CONFIG_ARM_BRCMSTB_AVS_CPUFREQ_DEBUG187187-188188-enum debugfs_format {189189- DEBUGFS_NORMAL,190190- DEBUGFS_FLOAT,191191- DEBUGFS_REV,192192-};193193-194194-struct debugfs_data {195195- struct debugfs_entry *entry;196196- struct private_data *priv;197197-};198198-199199-struct debugfs_entry {200200- char *name;201201- u32 offset;202202- fmode_t mode;203203- enum debugfs_format format;204204-};205205-206206-#define DEBUGFS_ENTRY(name, mode, format) { \207207- #name, AVS_MBOX_##name, mode, format \208208-}209209-210210-/*211211- * These are used for debugfs only. 
Otherwise we use AVS_MBOX_PARAM() directly.212212- */213213-#define AVS_MBOX_PARAM1 AVS_MBOX_PARAM(0)214214-#define AVS_MBOX_PARAM2 AVS_MBOX_PARAM(1)215215-#define AVS_MBOX_PARAM3 AVS_MBOX_PARAM(2)216216-#define AVS_MBOX_PARAM4 AVS_MBOX_PARAM(3)217217-218218-/*219219- * This table stores the name, access permissions and offset for each hardware220220- * register and is used to generate debugfs entries.221221- */222222-static struct debugfs_entry debugfs_entries[] = {223223- DEBUGFS_ENTRY(COMMAND, S_IWUSR, DEBUGFS_NORMAL),224224- DEBUGFS_ENTRY(STATUS, S_IWUSR, DEBUGFS_NORMAL),225225- DEBUGFS_ENTRY(VOLTAGE0, 0, DEBUGFS_FLOAT),226226- DEBUGFS_ENTRY(TEMP0, 0, DEBUGFS_FLOAT),227227- DEBUGFS_ENTRY(PV0, 0, DEBUGFS_FLOAT),228228- DEBUGFS_ENTRY(MV0, 0, DEBUGFS_FLOAT),229229- DEBUGFS_ENTRY(PARAM1, S_IWUSR, DEBUGFS_NORMAL),230230- DEBUGFS_ENTRY(PARAM2, S_IWUSR, DEBUGFS_NORMAL),231231- DEBUGFS_ENTRY(PARAM3, S_IWUSR, DEBUGFS_NORMAL),232232- DEBUGFS_ENTRY(PARAM4, S_IWUSR, DEBUGFS_NORMAL),233233- DEBUGFS_ENTRY(REVISION, 0, DEBUGFS_REV),234234- DEBUGFS_ENTRY(PSTATE, 0, DEBUGFS_NORMAL),235235- DEBUGFS_ENTRY(HEARTBEAT, 0, DEBUGFS_NORMAL),236236- DEBUGFS_ENTRY(MAGIC, S_IWUSR, DEBUGFS_NORMAL),237237- DEBUGFS_ENTRY(SIGMA_HVT, 0, DEBUGFS_NORMAL),238238- DEBUGFS_ENTRY(SIGMA_SVT, 0, DEBUGFS_NORMAL),239239- DEBUGFS_ENTRY(VOLTAGE1, 0, DEBUGFS_FLOAT),240240- DEBUGFS_ENTRY(TEMP1, 0, DEBUGFS_FLOAT),241241- DEBUGFS_ENTRY(PV1, 0, DEBUGFS_FLOAT),242242- DEBUGFS_ENTRY(MV1, 0, DEBUGFS_FLOAT),243243- DEBUGFS_ENTRY(FREQUENCY, 0, DEBUGFS_NORMAL),244244-};245245-246246-static int brcm_avs_target_index(struct cpufreq_policy *, unsigned int);247247-248248-static char *__strtolower(char *s)249249-{250250- char *p;251251-252252- for (p = s; *p; p++)253253- *p = tolower(*p);254254-255255- return s;256256-}257257-258258-#endif /* CONFIG_ARM_BRCMSTB_AVS_CPUFREQ_DEBUG */259189260190static void __iomem *__map_region(const char *name)261191{···431515432516 return table;433517}434434-435435-#ifdef 
CONFIG_ARM_BRCMSTB_AVS_CPUFREQ_DEBUG436436-437437-#define MANT(x) (unsigned int)(abs((x)) / 1000)438438-#define FRAC(x) (unsigned int)(abs((x)) - abs((x)) / 1000 * 1000)439439-440440-static int brcm_avs_debug_show(struct seq_file *s, void *data)441441-{442442- struct debugfs_data *dbgfs = s->private;443443- void __iomem *base;444444- u32 val, offset;445445-446446- if (!dbgfs) {447447- seq_puts(s, "No device pointer\n");448448- return 0;449449- }450450-451451- base = dbgfs->priv->base;452452- offset = dbgfs->entry->offset;453453- val = readl(base + offset);454454- switch (dbgfs->entry->format) {455455- case DEBUGFS_NORMAL:456456- seq_printf(s, "%u\n", val);457457- break;458458- case DEBUGFS_FLOAT:459459- seq_printf(s, "%d.%03d\n", MANT(val), FRAC(val));460460- break;461461- case DEBUGFS_REV:462462- seq_printf(s, "%c.%c.%c.%c\n", (val >> 24 & 0xff),463463- (val >> 16 & 0xff), (val >> 8 & 0xff),464464- val & 0xff);465465- break;466466- }467467- seq_printf(s, "0x%08x\n", val);468468-469469- return 0;470470-}471471-472472-#undef MANT473473-#undef FRAC474474-475475-static ssize_t brcm_avs_seq_write(struct file *file, const char __user *buf,476476- size_t size, loff_t *ppos)477477-{478478- struct seq_file *s = file->private_data;479479- struct debugfs_data *dbgfs = s->private;480480- struct private_data *priv = dbgfs->priv;481481- void __iomem *base, *avs_intr_base;482482- bool use_issue_command = false;483483- unsigned long val, offset;484484- char str[128];485485- int ret;486486- char *str_ptr = str;487487-488488- if (size >= sizeof(str))489489- return -E2BIG;490490-491491- memset(str, 0, sizeof(str));492492- ret = copy_from_user(str, buf, size);493493- if (ret)494494- return ret;495495-496496- base = priv->base;497497- avs_intr_base = priv->avs_intr_base;498498- offset = dbgfs->entry->offset;499499- /*500500- * Special case writing to "command" entry only: if the string starts501501- * with a 'c', we use the driver's __issue_avs_command() function.502502- * Otherwise, 
we perform a raw write. This should allow testing of raw503503- * access as well as using the higher level function. (Raw access504504- * doesn't clear the firmware return status after issuing the command.)505505- */506506- if (str_ptr[0] == 'c' && offset == AVS_MBOX_COMMAND) {507507- use_issue_command = true;508508- str_ptr++;509509- }510510- if (kstrtoul(str_ptr, 0, &val) != 0)511511- return -EINVAL;512512-513513- /*514514- * Setting the P-state is a special case. We need to update the CPU515515- * frequency we report.516516- */517517- if (val == AVS_CMD_SET_PSTATE) {518518- struct cpufreq_policy *policy;519519- unsigned int pstate;520520-521521- policy = cpufreq_cpu_get(smp_processor_id());522522- /* Read back the P-state we are about to set */523523- pstate = readl(base + AVS_MBOX_PARAM(0));524524- if (use_issue_command) {525525- ret = brcm_avs_target_index(policy, pstate);526526- return ret ? ret : size;527527- }528528- policy->cur = policy->freq_table[pstate].frequency;529529- }530530-531531- if (use_issue_command) {532532- ret = __issue_avs_command(priv, val, false, NULL);533533- } else {534534- /* Locking here is not perfect, but is only for debug. */535535- ret = down_interruptible(&priv->sem);536536- if (ret)537537- return ret;538538-539539- writel(val, base + offset);540540- /* We have to wake up the firmware to process a command. */541541- if (offset == AVS_MBOX_COMMAND)542542- writel(AVS_CPU_L2_INT_MASK,543543- avs_intr_base + AVS_CPU_L2_SET0);544544- up(&priv->sem);545545- }546546-547547- return ret ? 
ret : size;548548-}549549-550550-static struct debugfs_entry *__find_debugfs_entry(const char *name)551551-{552552- int i;553553-554554- for (i = 0; i < ARRAY_SIZE(debugfs_entries); i++)555555- if (strcasecmp(debugfs_entries[i].name, name) == 0)556556- return &debugfs_entries[i];557557-558558- return NULL;559559-}560560-561561-static int brcm_avs_debug_open(struct inode *inode, struct file *file)562562-{563563- struct debugfs_data *data;564564- fmode_t fmode;565565- int ret;566566-567567- /*568568- * seq_open(), which is called by single_open(), clears "write" access.569569- * We need write access to some files, so we preserve our access mode570570- * and restore it.571571- */572572- fmode = file->f_mode;573573- /*574574- * Check access permissions even for root. We don't want to be writing575575- * to read-only registers. Access for regular users has already been576576- * checked by the VFS layer.577577- */578578- if ((fmode & FMODE_WRITER) && !(inode->i_mode & S_IWUSR))579579- return -EACCES;580580-581581- data = kmalloc(sizeof(*data), GFP_KERNEL);582582- if (!data)583583- return -ENOMEM;584584- /*585585- * We use the same file system operations for all our debug files. To586586- * produce specific output, we look up the file name upon opening a587587- * debugfs entry and map it to a memory offset. 
This offset is then used588588- * in the generic "show" function to read a specific register.589589- */590590- data->entry = __find_debugfs_entry(file->f_path.dentry->d_iname);591591- data->priv = inode->i_private;592592-593593- ret = single_open(file, brcm_avs_debug_show, data);594594- if (ret)595595- kfree(data);596596- file->f_mode = fmode;597597-598598- return ret;599599-}600600-601601-static int brcm_avs_debug_release(struct inode *inode, struct file *file)602602-{603603- struct seq_file *seq_priv = file->private_data;604604- struct debugfs_data *data = seq_priv->private;605605-606606- kfree(data);607607- return single_release(inode, file);608608-}609609-610610-static const struct file_operations brcm_avs_debug_ops = {611611- .open = brcm_avs_debug_open,612612- .read = seq_read,613613- .write = brcm_avs_seq_write,614614- .llseek = seq_lseek,615615- .release = brcm_avs_debug_release,616616-};617617-618618-static void brcm_avs_cpufreq_debug_init(struct platform_device *pdev)619619-{620620- struct private_data *priv = platform_get_drvdata(pdev);621621- struct dentry *dir;622622- int i;623623-624624- if (!priv)625625- return;626626-627627- dir = debugfs_create_dir(BRCM_AVS_CPUFREQ_NAME, NULL);628628- if (IS_ERR_OR_NULL(dir))629629- return;630630- priv->debugfs = dir;631631-632632- for (i = 0; i < ARRAY_SIZE(debugfs_entries); i++) {633633- /*634634- * The DEBUGFS_ENTRY macro generates uppercase strings. 
We635635- * convert them to lowercase before creating the debugfs636636- * entries.637637- */638638- char *entry = __strtolower(debugfs_entries[i].name);639639- fmode_t mode = debugfs_entries[i].mode;640640-641641- if (!debugfs_create_file(entry, S_IFREG | S_IRUGO | mode,642642- dir, priv, &brcm_avs_debug_ops)) {643643- priv->debugfs = NULL;644644- debugfs_remove_recursive(dir);645645- break;646646- }647647- }648648-}649649-650650-static void brcm_avs_cpufreq_debug_exit(struct platform_device *pdev)651651-{652652- struct private_data *priv = platform_get_drvdata(pdev);653653-654654- if (priv && priv->debugfs) {655655- debugfs_remove_recursive(priv->debugfs);656656- priv->debugfs = NULL;657657- }658658-}659659-660660-#else661661-662662-static void brcm_avs_cpufreq_debug_init(struct platform_device *pdev) {}663663-static void brcm_avs_cpufreq_debug_exit(struct platform_device *pdev) {}664664-665665-#endif /* CONFIG_ARM_BRCMSTB_AVS_CPUFREQ_DEBUG */666518667519/*668520 * To ensure the right firmware is running we need to···7001016 return ret;70110177021018 brcm_avs_driver.driver_data = pdev;703703- ret = cpufreq_register_driver(&brcm_avs_driver);704704- if (!ret)705705- brcm_avs_cpufreq_debug_init(pdev);7061019707707- return ret;10201020+ return cpufreq_register_driver(&brcm_avs_driver);7081021}70910227101023static int brcm_avs_cpufreq_remove(struct platform_device *pdev)···7121031 ret = cpufreq_unregister_driver(&brcm_avs_driver);7131032 if (ret)7141033 return ret;715715-716716- brcm_avs_cpufreq_debug_exit(pdev);71710347181035 priv = platform_get_drvdata(pdev);7191036 iounmap(priv->base);
+11-3
drivers/cpufreq/powernv-cpufreq.c
···679679680680 if (!spin_trylock(&gpstates->gpstate_lock))681681 return;682682+ /*683683+ * If the timer has migrated to the different cpu then bring684684+ * it back to one of the policy->cpus685685+ */686686+ if (!cpumask_test_cpu(raw_smp_processor_id(), policy->cpus)) {687687+ gpstates->timer.expires = jiffies + msecs_to_jiffies(1);688688+ add_timer_on(&gpstates->timer, cpumask_first(policy->cpus));689689+ spin_unlock(&gpstates->gpstate_lock);690690+ return;691691+ }682692683693 /*684694 * If PMCR was last updated was using fast_swtich then···728718 if (gpstate_idx != gpstates->last_lpstate_idx)729719 queue_gpstate_timer(gpstates);730720721721+ set_pstate(&freq_data);731722 spin_unlock(&gpstates->gpstate_lock);732732-733733- /* Timer may get migrated to a different cpu on cpu hot unplug */734734- smp_call_function_any(policy->cpus, set_pstate, &freq_data, 1);735723}736724737725/*
···66 tristate "HSA kernel driver for AMD GPU devices"77 depends on DRM_AMDGPU && X86_6488 imply AMD_IOMMU_V299+ select MMU_NOTIFIER910 help1011 Enable this if you want to use HSA features on AMD GPU devices.
+9-8
drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
···749749 struct timespec64 time;750750751751 dev = kfd_device_by_id(args->gpu_id);752752- if (dev == NULL)753753- return -EINVAL;754754-755755- /* Reading GPU clock counter from KGD */756756- args->gpu_clock_counter =757757- dev->kfd2kgd->get_gpu_clock_counter(dev->kgd);752752+ if (dev)753753+ /* Reading GPU clock counter from KGD */754754+ args->gpu_clock_counter =755755+ dev->kfd2kgd->get_gpu_clock_counter(dev->kgd);756756+ else757757+ /* Node without GPU resource */758758+ args->gpu_clock_counter = 0;758759759760 /* No access to rdtsc. Using raw monotonic time */760761 getrawmonotonic64(&time);···11481147 return ret;11491148}1150114911511151-bool kfd_dev_is_large_bar(struct kfd_dev *dev)11501150+static bool kfd_dev_is_large_bar(struct kfd_dev *dev)11521151{11531152 struct kfd_local_mem_info mem_info;11541153···1422142114231422 pdd = kfd_get_process_device_data(dev, p);14241423 if (!pdd) {14251425- err = PTR_ERR(pdd);14241424+ err = -EINVAL;14261425 goto bind_process_to_device_failed;14271426 }14281427
···329329{330330 int src;331331 struct irq_list_head *lh;332332+ unsigned long irq_table_flags;332333 DRM_DEBUG_KMS("DM_IRQ: releasing resources.\n");333333-334334 for (src = 0; src < DAL_IRQ_SOURCES_NUMBER; src++) {335335-335335+ DM_IRQ_TABLE_LOCK(adev, irq_table_flags);336336 /* The handler was removed from the table,337337 * it means it is safe to flush all the 'work'338338 * (because no code can schedule a new one). */339339 lh = &adev->dm.irq_handler_list_low_tab[src];340340+ DM_IRQ_TABLE_UNLOCK(adev, irq_table_flags);340341 flush_work(&lh->work);341342 }342343}
···21402140 }21412141 }2142214221432143- /* According to BSpec, "The CD clock frequency must be at least twice21432143+ /*21442144+ * According to BSpec, "The CD clock frequency must be at least twice21442145 * the frequency of the Azalia BCLK." and BCLK is 96 MHz by default.21462146+ *21472147+ * FIXME: Check the actual, not default, BCLK being used.21482148+ *21492149+ * FIXME: This does not depend on ->has_audio because the higher CDCLK21502150+ * is required for audio probe, also when there are no audio capable21512151+ * displays connected at probe time. This leads to unnecessarily high21522152+ * CDCLK when audio is not required.21532153+ *21542154+ * FIXME: This limit is only applied when there are displays connected21552155+ * at probe time. If we probe without displays, we'll still end up using21562156+ * the platform minimum CDCLK, failing audio probe.21452157 */21462146- if (crtc_state->has_audio && INTEL_GEN(dev_priv) >= 9)21582158+ if (INTEL_GEN(dev_priv) >= 9)21472159 min_cdclk = max(2 * 96000, min_cdclk);2148216021492161 /*
+2-2
drivers/gpu/drm/i915/intel_drv.h
···4949 * check the condition before the timeout.5050 */5151#define __wait_for(OP, COND, US, Wmin, Wmax) ({ \5252- unsigned long timeout__ = jiffies + usecs_to_jiffies(US) + 1; \5252+ const ktime_t end__ = ktime_add_ns(ktime_get_raw(), 1000ll * (US)); \5353 long wait__ = (Wmin); /* recommended min for usleep is 10 us */ \5454 int ret__; \5555 might_sleep(); \5656 for (;;) { \5757- bool expired__ = time_after(jiffies, timeout__); \5757+ const bool expired__ = ktime_after(ktime_get_raw(), end__); \5858 OP; \5959 if (COND) { \6060 ret__ = 0; \
+1-1
drivers/gpu/drm/i915/intel_fbdev.c
···806806 return;807807808808 intel_fbdev_sync(ifbdev);809809- if (ifbdev->vma)809809+ if (ifbdev->vma || ifbdev->helper.deferred_setup)810810 drm_fb_helper_hotplug_event(&ifbdev->helper);811811}812812
+5-6
drivers/gpu/drm/i915/intel_runtime_pm.c
···641641642642 DRM_DEBUG_KMS("Enabling DC6\n");643643644644- gen9_set_dc_state(dev_priv, DC_STATE_EN_UPTO_DC6);644644+ /* Wa Display #1183: skl,kbl,cfl */645645+ if (IS_GEN9_BC(dev_priv))646646+ I915_WRITE(GEN8_CHICKEN_DCPR_1, I915_READ(GEN8_CHICKEN_DCPR_1) |647647+ SKL_SELECT_ALTERNATE_DC_EXIT);645648649649+ gen9_set_dc_state(dev_priv, DC_STATE_EN_UPTO_DC6);646650}647651648652void skl_disable_dc6(struct drm_i915_private *dev_priv)649653{650654 DRM_DEBUG_KMS("Disabling DC6\n");651651-652652- /* Wa Display #1183: skl,kbl,cfl */653653- if (IS_GEN9_BC(dev_priv))654654- I915_WRITE(GEN8_CHICKEN_DCPR_1, I915_READ(GEN8_CHICKEN_DCPR_1) |655655- SKL_SELECT_ALTERNATE_DC_EXIT);656655657656 gen9_set_dc_state(dev_priv, DC_STATE_DISABLE);658657}
···7979 dsi_phy_write(lane_base + REG_DSI_10nm_PHY_LN_TX_DCTRL(3), 0x04);8080}81818282-static int msm_dsi_dphy_timing_calc_v3(struct msm_dsi_dphy_timing *timing,8383- struct msm_dsi_phy_clk_request *clk_req)8484-{8585- /*8686- * TODO: These params need to be computed, they're currently hardcoded8787- * for a 1440x2560@60Hz panel with a byteclk of 100.618 Mhz, and a8888- * default escape clock of 19.2 Mhz.8989- */9090-9191- timing->hs_halfbyte_en = 0;9292- timing->clk_zero = 0x1c;9393- timing->clk_prepare = 0x07;9494- timing->clk_trail = 0x07;9595- timing->hs_exit = 0x23;9696- timing->hs_zero = 0x21;9797- timing->hs_prepare = 0x07;9898- timing->hs_trail = 0x07;9999- timing->hs_rqst = 0x05;100100- timing->ta_sure = 0x00;101101- timing->ta_go = 0x03;102102- timing->ta_get = 0x04;103103-104104- timing->shared_timings.clk_pre = 0x2d;105105- timing->shared_timings.clk_post = 0x0d;106106-107107- return 0;108108-}109109-11082static int dsi_10nm_phy_enable(struct msm_dsi_phy *phy, int src_pll_id,11183 struct msm_dsi_phy_clk_request *clk_req)11284{
+2-1
drivers/gpu/drm/msm/msm_fb.c
···183183 hsub = drm_format_horz_chroma_subsampling(mode_cmd->pixel_format);184184 vsub = drm_format_vert_chroma_subsampling(mode_cmd->pixel_format);185185186186- format = kms->funcs->get_format(kms, mode_cmd->pixel_format);186186+ format = kms->funcs->get_format(kms, mode_cmd->pixel_format,187187+ mode_cmd->modifier[0]);187188 if (!format) {188189 dev_err(dev->dev, "unsupported pixel format: %4.4s\n",189190 (char *)&mode_cmd->pixel_format);
+2-9
drivers/gpu/drm/msm/msm_fbdev.c
···92929393 if (IS_ERR(fb)) {9494 dev_err(dev->dev, "failed to allocate fb\n");9595- ret = PTR_ERR(fb);9696- goto fail;9595+ return PTR_ERR(fb);9796 }98979998 bo = msm_framebuffer_bo(fb, 0);···150151151152fail_unlock:152153 mutex_unlock(&dev->struct_mutex);153153-fail:154154-155155- if (ret) {156156- if (fb)157157- drm_framebuffer_remove(fb);158158- }159159-154154+ drm_framebuffer_remove(fb);160155 return ret;161156}162157
+11-9
drivers/gpu/drm/msm/msm_gem.c
···132132 struct msm_gem_object *msm_obj = to_msm_bo(obj);133133134134 if (msm_obj->pages) {135135- /* For non-cached buffers, ensure the new pages are clean136136- * because display controller, GPU, etc. are not coherent:137137- */138138- if (msm_obj->flags & (MSM_BO_WC|MSM_BO_UNCACHED))139139- dma_unmap_sg(obj->dev->dev, msm_obj->sgt->sgl,140140- msm_obj->sgt->nents, DMA_BIDIRECTIONAL);135135+ if (msm_obj->sgt) {136136+ /* For non-cached buffers, ensure the new137137+ * pages are clean because display controller,138138+ * GPU, etc. are not coherent:139139+ */140140+ if (msm_obj->flags & (MSM_BO_WC|MSM_BO_UNCACHED))141141+ dma_unmap_sg(obj->dev->dev, msm_obj->sgt->sgl,142142+ msm_obj->sgt->nents,143143+ DMA_BIDIRECTIONAL);141144142142- if (msm_obj->sgt)143145 sg_free_table(msm_obj->sgt);144144-145145- kfree(msm_obj->sgt);146146+ kfree(msm_obj->sgt);147147+ }146148147149 if (use_pages(obj))148150 drm_gem_put_pages(obj, msm_obj->pages, true, false);
+4-1
drivers/gpu/drm/msm/msm_kms.h
···4848 /* functions to wait for atomic commit completed on each CRTC */4949 void (*wait_for_crtc_commit_done)(struct msm_kms *kms,5050 struct drm_crtc *crtc);5151+ /* get msm_format w/ optional format modifiers from drm_mode_fb_cmd2 */5252+ const struct msm_format *(*get_format)(struct msm_kms *kms,5353+ const uint32_t format,5454+ const uint64_t modifiers);5155 /* misc: */5252- const struct msm_format *(*get_format)(struct msm_kms *kms, uint32_t format);5356 long (*round_pixclk)(struct msm_kms *kms, unsigned long rate,5457 struct drm_encoder *encoder);5558 int (*set_split_display)(struct msm_kms *kms,
···13801380 /* Activate logical device if needed */13811381 val = superio_inb(sioaddr, SIO_REG_ENABLE);13821382 if (!(val & 0x01)) {13831383- pr_err("EC is disabled\n");13841384- goto fail;13831383+ pr_warn("Forcibly enabling EC access. Data may be unusable.\n");13841384+ superio_outb(sioaddr, SIO_REG_ENABLE, val | 0x01);13851385 }1386138613871387 superio_exit(sioaddr);
+4-1
drivers/hwmon/scmi-hwmon.c
···170170 scmi_chip_info.info = ptr_scmi_ci;171171 chip_info = &scmi_chip_info;172172173173- for (type = 0; type < hwmon_max && nr_count[type]; type++) {173173+ for (type = 0; type < hwmon_max; type++) {174174+ if (!nr_count[type])175175+ continue;176176+174177 scmi_hwmon_add_chan_info(scmi_hwmon_chan, dev, nr_count[type],175178 type, hwmon_attributes[type]);176179 *ptr_scmi_ci++ = scmi_hwmon_chan++;
-3
drivers/i2c/busses/Kconfig
···707707config I2C_MT65XX708708 tristate "MediaTek I2C adapter"709709 depends on ARCH_MEDIATEK || COMPILE_TEST710710- depends on HAS_DMA711710 help712711 This selects the MediaTek(R) Integrated Inter Circuit bus driver713712 for MT65xx and MT81xx.···884885885886config I2C_SH_MOBILE886887 tristate "SuperH Mobile I2C Controller"887887- depends on HAS_DMA888888 depends on ARCH_SHMOBILE || ARCH_RENESAS || COMPILE_TEST889889 help890890 If you say yes to this option, support will be included for the···1096109810971099config I2C_RCAR10981100 tristate "Renesas R-Car I2C Controller"10991099- depends on HAS_DMA11001101 depends on ARCH_RENESAS || COMPILE_TEST11011102 select I2C_SLAVE11021103 help
+18-4
drivers/i2c/busses/i2c-sprd.c
···8686 u32 count;8787 int irq;8888 int err;8989+ bool is_suspended;8990};90919192static void sprd_i2c_set_count(struct sprd_i2c *i2c_dev, u32 count)···284283 struct sprd_i2c *i2c_dev = i2c_adap->algo_data;285284 int im, ret;286285286286+ if (i2c_dev->is_suspended)287287+ return -EBUSY;288288+287289 ret = pm_runtime_get_sync(i2c_dev->dev);288290 if (ret < 0)289291 return ret;···368364 struct sprd_i2c *i2c_dev = dev_id;369365 struct i2c_msg *msg = i2c_dev->msg;370366 bool ack = !(readl(i2c_dev->base + I2C_STATUS) & I2C_RX_ACK);371371- u32 i2c_count = readl(i2c_dev->base + I2C_COUNT);372367 u32 i2c_tran;373368374369 if (msg->flags & I2C_M_RD)375370 i2c_tran = i2c_dev->count >= I2C_FIFO_FULL_THLD;376371 else377377- i2c_tran = i2c_count;372372+ i2c_tran = i2c_dev->count;378373379374 /*380375 * If we got one ACK from slave when writing data, and we did not···411408{412409 struct sprd_i2c *i2c_dev = dev_id;413410 struct i2c_msg *msg = i2c_dev->msg;414414- u32 i2c_count = readl(i2c_dev->base + I2C_COUNT);415411 bool ack = !(readl(i2c_dev->base + I2C_STATUS) & I2C_RX_ACK);416412 u32 i2c_tran;417413418414 if (msg->flags & I2C_M_RD)419415 i2c_tran = i2c_dev->count >= I2C_FIFO_FULL_THLD;420416 else421421- i2c_tran = i2c_count;417417+ i2c_tran = i2c_dev->count;422418423419 /*424420 * If we did not get one ACK from slave when writing data, then we···588586589587static int __maybe_unused sprd_i2c_suspend_noirq(struct device *pdev)590588{589589+ struct sprd_i2c *i2c_dev = dev_get_drvdata(pdev);590590+591591+ i2c_lock_adapter(&i2c_dev->adap);592592+ i2c_dev->is_suspended = true;593593+ i2c_unlock_adapter(&i2c_dev->adap);594594+591595 return pm_runtime_force_suspend(pdev);592596}593597594598static int __maybe_unused sprd_i2c_resume_noirq(struct device *pdev)595599{600600+ struct sprd_i2c *i2c_dev = dev_get_drvdata(pdev);601601+602602+ i2c_lock_adapter(&i2c_dev->adap);603603+ i2c_dev->is_suspended = false;604604+ i2c_unlock_adapter(&i2c_dev->adap);605605+596606 return 
pm_runtime_force_resume(pdev);597607}598608
···
 #define I82802AB	0x00ad
 #define I82802AC	0x00ac
 #define PF38F4476	0x881c
+#define M28F00AP30	0x8963
 /* STMicroelectronics chips */
 #define M50LPW080	0x002F
 #define M50FLW080A	0x0080
···
 	if (cfi->mfr == CFI_MFR_INTEL &&
 	    cfi->id == PF38F4476 && extp->MinorVersion == '3')
 		extp->MinorVersion = '1';
+}
+
+static int cfi_is_micron_28F00AP30(struct cfi_private *cfi, struct flchip *chip)
+{
+	/*
+	 * Micron(was Numonyx) 1Gbit bottom boot are buggy w.r.t
+	 * Erase Suspend for their small Erase Blocks(0x8000)
+	 */
+	if (cfi->mfr == CFI_MFR_INTEL && cfi->id == M28F00AP30)
+		return 1;
+	return 0;
 }
 
 static inline struct cfi_pri_intelext *
···
 		    (mode == FL_WRITING && (cfip->SuspendCmdSupport & 1))))
 			goto sleep;
 
+		/* Do not allow suspend iff read/write to EB address */
+		if ((adr & chip->in_progress_block_mask) ==
+		    chip->in_progress_block_addr)
+			goto sleep;
+
+		/* do not suspend small EBs, buggy Micron Chips */
+		if (cfi_is_micron_28F00AP30(cfi, chip) &&
+		    (chip->in_progress_block_mask == ~(0x8000-1)))
+			goto sleep;
 
 		/* Erase suspend */
-		map_write(map, CMD(0xB0), adr);
+		map_write(map, CMD(0xB0), chip->in_progress_block_addr);
 
 		/* If the flash has finished erasing, then 'erase suspend'
 		 * appears to make some (28F320) flash devices switch to
 		 * 'read' mode. Make sure that we switch to 'read status'
 		 * mode so we get the right data. --rmk
 		 */
-		map_write(map, CMD(0x70), adr);
+		map_write(map, CMD(0x70), chip->in_progress_block_addr);
 		chip->oldstate = FL_ERASING;
 		chip->state = FL_ERASE_SUSPENDING;
 		chip->erase_suspended = 1;
 		for (;;) {
-			status = map_read(map, adr);
+			status = map_read(map, chip->in_progress_block_addr);
 			if (map_word_andequal(map, status, status_OK, status_OK))
 				break;
···
 		   sending the 0x70 (Read Status) command to an erasing
 		   chip and expecting it to be ignored, that's what we
 		   do. */
-		map_write(map, CMD(0xd0), adr);
-		map_write(map, CMD(0x70), adr);
+		map_write(map, CMD(0xd0), chip->in_progress_block_addr);
+		map_write(map, CMD(0x70), chip->in_progress_block_addr);
 		chip->oldstate = FL_READY;
 		chip->state = FL_ERASING;
 		break;
···
 	map_write(map, CMD(0xD0), adr);
 	chip->state = FL_ERASING;
 	chip->erase_suspended = 0;
+	chip->in_progress_block_addr = adr;
+	chip->in_progress_block_mask = ~(len - 1);
 
 	ret = INVAL_CACHE_AND_WAIT(map, chip, adr,
 				   adr, len,
+6-3
drivers/mtd/chips/cfi_cmdset_0002.c
···
 		    (mode == FL_WRITING && (cfip->EraseSuspend & 0x2))))
 			goto sleep;
 
-		/* We could check to see if we're trying to access the sector
-		 * that is currently being erased. However, no user will try
-		 * anything like that so we just wait for the timeout. */
+		/* Do not allow suspend iff read/write to EB address */
+		if ((adr & chip->in_progress_block_mask) ==
+		    chip->in_progress_block_addr)
+			goto sleep;
 
 		/* Erase suspend */
 		/* It's harmless to issue the Erase-Suspend and Erase-Resume
···
 	chip->state = FL_ERASING;
 	chip->erase_suspended = 0;
 	chip->in_progress_block_addr = adr;
+	chip->in_progress_block_mask = ~(map->size - 1);
 
 	INVALIDATE_CACHE_UDELAY(map, chip,
 				adr, map->size,
···
 	chip->state = FL_ERASING;
 	chip->erase_suspended = 0;
 	chip->in_progress_block_addr = adr;
+	chip->in_progress_block_mask = ~(len - 1);
 
 	INVALIDATE_CACHE_UDELAY(map, chip,
 				adr, len,
···
 	/*
 	 * The legacy "num-cs" property indicates the number of CS on the only
 	 * chip connected to the controller (legacy bindings does not support
-	 * more than one chip). CS are only incremented one by one while the RB
-	 * pin is always the #0.
+	 * more than one chip). The CS and RB pins are always the #0.
 	 *
 	 * When not using legacy bindings, a couple of "reg" and "nand-rb"
 	 * properties must be filled. For each chip, expressed as a subnode,
 	 * "reg" points to the CS lines and "nand-rb" to the RB line.
 	 */
-	if (pdata) {
+	if (pdata || nfc->caps->legacy_of_bindings) {
 		nsels = 1;
-	} else if (nfc->caps->legacy_of_bindings &&
-		   !of_get_property(np, "num-cs", &nsels)) {
-		dev_err(dev, "missing num-cs property\n");
-		return -EINVAL;
-	} else if (!of_get_property(np, "reg", &nsels)) {
-		dev_err(dev, "missing reg property\n");
-		return -EINVAL;
-	}
-
-	if (!pdata)
-		nsels /= sizeof(u32);
-	if (!nsels) {
-		dev_err(dev, "invalid reg property size\n");
-		return -EINVAL;
+	} else {
+		nsels = of_property_count_elems_of_size(np, "reg", sizeof(u32));
+		if (nsels <= 0) {
+			dev_err(dev, "missing/invalid reg property\n");
+			return -EINVAL;
+		}
 	}
 
 	/* Alloc the nand chip structure */
···
 	void __iomem *reg_base = cqspi->iobase;
 	void __iomem *ahb_base = cqspi->ahb_base;
 	unsigned int remaining = n_rx;
+	unsigned int mod_bytes = n_rx % 4;
 	unsigned int bytes_to_read = 0;
+	u8 *rxbuf_end = rxbuf + n_rx;
 	int ret = 0;
 
 	writel(from_addr, reg_base + CQSPI_REG_INDIRECTRDSTARTADDR);
···
 	}
 
 	while (bytes_to_read != 0) {
+		unsigned int word_remain = round_down(remaining, 4);
+
 		bytes_to_read *= cqspi->fifo_width;
 		bytes_to_read = bytes_to_read > remaining ?
 				remaining : bytes_to_read;
-		ioread32_rep(ahb_base, rxbuf,
-			     DIV_ROUND_UP(bytes_to_read, 4));
+		bytes_to_read = round_down(bytes_to_read, 4);
+		/* Read 4 byte word chunks then single bytes */
+		if (bytes_to_read) {
+			ioread32_rep(ahb_base, rxbuf,
+				     (bytes_to_read / 4));
+		} else if (!word_remain && mod_bytes) {
+			unsigned int temp = ioread32(ahb_base);
+
+			bytes_to_read = mod_bytes;
+			memcpy(rxbuf, &temp, min((unsigned int)
+						 (rxbuf_end - rxbuf),
+						 bytes_to_read));
+		}
 		rxbuf += bytes_to_read;
 		remaining -= bytes_to_read;
 		bytes_to_read = cqspi_get_rd_sram_level(cqspi);
+1-2
drivers/net/bonding/bond_main.c
···
 	} /* switch(bond_mode) */
 
 #ifdef CONFIG_NET_POLL_CONTROLLER
-	slave_dev->npinfo = bond->dev->npinfo;
-	if (slave_dev->npinfo) {
+	if (bond->dev->npinfo) {
 		if (slave_enable_netpoll(new_slave)) {
 			netdev_info(bond_dev, "master_dev is using netpoll, but new slave device does not support netpoll\n");
 			res = -EBUSY;
+8
drivers/net/ethernet/amd/xgbe/xgbe-common.h
···
 #define MDIO_VEND2_AN_STAT		0x8002
 #endif
 
+#ifndef MDIO_VEND2_PMA_CDR_CONTROL
+#define MDIO_VEND2_PMA_CDR_CONTROL	0x8056
+#endif
+
 #ifndef MDIO_CTRL1_SPEED1G
 #define MDIO_CTRL1_SPEED1G	(MDIO_CTRL1_SPEED10G & ~BMCR_SPEED100)
 #endif
···
 #define XGBE_AN_CL37_PCS_MODE_SGMII	0x04
 #define XGBE_AN_CL37_TX_CONFIG_MASK	0x08
 #define XGBE_AN_CL37_MII_CTRL_8BIT	0x0100
+
+#define XGBE_PMA_CDR_TRACK_EN_MASK	0x01
+#define XGBE_PMA_CDR_TRACK_EN_OFF	0x00
+#define XGBE_PMA_CDR_TRACK_EN_ON	0x01
 
 /* Bit setting and getting macros
  *  The get macro will extract the current bit field value from within
···
 /* Rate-change complete wait/retry count */
 #define XGBE_RATECHANGE_COUNT		500
 
+/* CDR delay values for KR support (in usec) */
+#define XGBE_CDR_DELAY_INIT		10000
+#define XGBE_CDR_DELAY_INC		10000
+#define XGBE_CDR_DELAY_MAX		100000
+
+/* RRC frequency during link status check */
+#define XGBE_RRC_FREQUENCY		10
+
 enum xgbe_port_mode {
 	XGBE_PORT_MODE_RSVD = 0,
 	XGBE_PORT_MODE_BACKPLANE,
···
 #define XGBE_SFP_BASE_VENDOR_SN		4
 #define XGBE_SFP_BASE_VENDOR_SN_LEN	16
 
+#define XGBE_SFP_EXTD_OPT1		1
+#define XGBE_SFP_EXTD_OPT1_RX_LOS	BIT(1)
+#define XGBE_SFP_EXTD_OPT1_TX_FAULT	BIT(3)
+
 #define XGBE_SFP_EXTD_DIAG		28
 #define XGBE_SFP_EXTD_DIAG_ADDR_CHANGE	BIT(2)
···
 
 	unsigned int sfp_gpio_address;
 	unsigned int sfp_gpio_mask;
+	unsigned int sfp_gpio_inputs;
 	unsigned int sfp_gpio_rx_los;
 	unsigned int sfp_gpio_tx_fault;
 	unsigned int sfp_gpio_mod_absent;
···
 	unsigned int redrv_addr;
 	unsigned int redrv_lane;
 	unsigned int redrv_model;
+
+	/* KR AN support */
+	unsigned int phy_cdr_notrack;
+	unsigned int phy_cdr_delay;
 };
 
 /* I2C, MDIO and GPIO lines are muxed, so only one device at a time */
···
 	phy_data->sfp_phy_avail = 1;
 }
 
+static bool xgbe_phy_check_sfp_rx_los(struct xgbe_phy_data *phy_data)
+{
+	u8 *sfp_extd = phy_data->sfp_eeprom.extd;
+
+	if (!(sfp_extd[XGBE_SFP_EXTD_OPT1] & XGBE_SFP_EXTD_OPT1_RX_LOS))
+		return false;
+
+	if (phy_data->sfp_gpio_mask & XGBE_GPIO_NO_RX_LOS)
+		return false;
+
+	if (phy_data->sfp_gpio_inputs & (1 << phy_data->sfp_gpio_rx_los))
+		return true;
+
+	return false;
+}
+
+static bool xgbe_phy_check_sfp_tx_fault(struct xgbe_phy_data *phy_data)
+{
+	u8 *sfp_extd = phy_data->sfp_eeprom.extd;
+
+	if (!(sfp_extd[XGBE_SFP_EXTD_OPT1] & XGBE_SFP_EXTD_OPT1_TX_FAULT))
+		return false;
+
+	if (phy_data->sfp_gpio_mask & XGBE_GPIO_NO_TX_FAULT)
+		return false;
+
+	if (phy_data->sfp_gpio_inputs & (1 << phy_data->sfp_gpio_tx_fault))
+		return true;
+
+	return false;
+}
+
+static bool xgbe_phy_check_sfp_mod_absent(struct xgbe_phy_data *phy_data)
+{
+	if (phy_data->sfp_gpio_mask & XGBE_GPIO_NO_MOD_ABSENT)
+		return false;
+
+	if (phy_data->sfp_gpio_inputs & (1 << phy_data->sfp_gpio_mod_absent))
+		return true;
+
+	return false;
+}
+
 static bool xgbe_phy_belfuse_parse_quirks(struct xgbe_prv_data *pdata)
 {
 	struct xgbe_phy_data *phy_data = pdata->phy_data;
···
 	if (sfp_base[XGBE_SFP_BASE_EXT_ID] != XGBE_SFP_EXT_ID_SFP)
 		return;
+
+	/* Update transceiver signals (eeprom extd/options) */
+	phy_data->sfp_tx_fault = xgbe_phy_check_sfp_tx_fault(phy_data);
+	phy_data->sfp_rx_los = xgbe_phy_check_sfp_rx_los(phy_data);
 
 	if (xgbe_phy_sfp_parse_quirks(pdata))
 		return;
···
 static void xgbe_phy_sfp_signals(struct xgbe_prv_data *pdata)
 {
 	struct xgbe_phy_data *phy_data = pdata->phy_data;
-	unsigned int gpio_input;
 	u8 gpio_reg, gpio_ports[2];
 	int ret;
···
 		return;
 	}
 
-	gpio_input = (gpio_ports[1] << 8) | gpio_ports[0];
+	phy_data->sfp_gpio_inputs = (gpio_ports[1] << 8) | gpio_ports[0];
 
-	if (phy_data->sfp_gpio_mask & XGBE_GPIO_NO_MOD_ABSENT) {
-		/* No GPIO, just assume the module is present for now */
-		phy_data->sfp_mod_absent = 0;
-	} else {
-		if (!(gpio_input & (1 << phy_data->sfp_gpio_mod_absent)))
-			phy_data->sfp_mod_absent = 0;
-	}
-
-	if (!(phy_data->sfp_gpio_mask & XGBE_GPIO_NO_RX_LOS) &&
-	    (gpio_input & (1 << phy_data->sfp_gpio_rx_los)))
-		phy_data->sfp_rx_los = 1;
-
-	if (!(phy_data->sfp_gpio_mask & XGBE_GPIO_NO_TX_FAULT) &&
-	    (gpio_input & (1 << phy_data->sfp_gpio_tx_fault)))
-		phy_data->sfp_tx_fault = 1;
+	phy_data->sfp_mod_absent = xgbe_phy_check_sfp_mod_absent(phy_data);
 }
 
 static void xgbe_phy_sfp_mod_absent(struct xgbe_prv_data *pdata)
···
 		return 1;
 
 	/* No link, attempt a receiver reset cycle */
-	if (phy_data->rrc_count++) {
+	if (phy_data->rrc_count++ > XGBE_RRC_FREQUENCY) {
 		phy_data->rrc_count = 0;
 		xgbe_phy_rrc(pdata);
 	}
···
 	return true;
 }
 
+static void xgbe_phy_cdr_track(struct xgbe_prv_data *pdata)
+{
+	struct xgbe_phy_data *phy_data = pdata->phy_data;
+
+	if (!pdata->debugfs_an_cdr_workaround)
+		return;
+
+	if (!phy_data->phy_cdr_notrack)
+		return;
+
+	usleep_range(phy_data->phy_cdr_delay,
+		     phy_data->phy_cdr_delay + 500);
+
+	XMDIO_WRITE_BITS(pdata, MDIO_MMD_PMAPMD, MDIO_VEND2_PMA_CDR_CONTROL,
+			 XGBE_PMA_CDR_TRACK_EN_MASK,
+			 XGBE_PMA_CDR_TRACK_EN_ON);
+
+	phy_data->phy_cdr_notrack = 0;
+}
+
+static void xgbe_phy_cdr_notrack(struct xgbe_prv_data *pdata)
+{
+	struct xgbe_phy_data *phy_data = pdata->phy_data;
+
+	if (!pdata->debugfs_an_cdr_workaround)
+		return;
+
+	if (phy_data->phy_cdr_notrack)
+		return;
+
+	XMDIO_WRITE_BITS(pdata, MDIO_MMD_PMAPMD, MDIO_VEND2_PMA_CDR_CONTROL,
+			 XGBE_PMA_CDR_TRACK_EN_MASK,
+			 XGBE_PMA_CDR_TRACK_EN_OFF);
+
+	xgbe_phy_rrc(pdata);
+
+	phy_data->phy_cdr_notrack = 1;
+}
+
+static void xgbe_phy_kr_training_post(struct xgbe_prv_data *pdata)
+{
+	if (!pdata->debugfs_an_cdr_track_early)
+		xgbe_phy_cdr_track(pdata);
+}
+
+static void xgbe_phy_kr_training_pre(struct xgbe_prv_data *pdata)
+{
+	if (pdata->debugfs_an_cdr_track_early)
+		xgbe_phy_cdr_track(pdata);
+}
+
+static void xgbe_phy_an_post(struct xgbe_prv_data *pdata)
+{
+	struct xgbe_phy_data *phy_data = pdata->phy_data;
+
+	switch (pdata->an_mode) {
+	case XGBE_AN_MODE_CL73:
+	case XGBE_AN_MODE_CL73_REDRV:
+		if (phy_data->cur_mode != XGBE_MODE_KR)
+			break;
+
+		xgbe_phy_cdr_track(pdata);
+
+		switch (pdata->an_result) {
+		case XGBE_AN_READY:
+		case XGBE_AN_COMPLETE:
+			break;
+		default:
+			if (phy_data->phy_cdr_delay < XGBE_CDR_DELAY_MAX)
+				phy_data->phy_cdr_delay += XGBE_CDR_DELAY_INC;
+			else
+				phy_data->phy_cdr_delay = XGBE_CDR_DELAY_INIT;
+			break;
+		}
+		break;
+	default:
+		break;
+	}
+}
+
+static void xgbe_phy_an_pre(struct xgbe_prv_data *pdata)
+{
+	struct xgbe_phy_data *phy_data = pdata->phy_data;
+
+	switch (pdata->an_mode) {
+	case XGBE_AN_MODE_CL73:
+	case XGBE_AN_MODE_CL73_REDRV:
+		if (phy_data->cur_mode != XGBE_MODE_KR)
+			break;
+
+		xgbe_phy_cdr_notrack(pdata);
+		break;
+	default:
+		break;
+	}
+}
+
 static void xgbe_phy_stop(struct xgbe_prv_data *pdata)
 {
 	struct xgbe_phy_data *phy_data = pdata->phy_data;
···
 	/* Reset SFP data */
 	xgbe_phy_sfp_reset(phy_data);
 	xgbe_phy_sfp_mod_absent(pdata);
+
+	/* Reset CDR support */
+	xgbe_phy_cdr_track(pdata);
 
 	/* Power off the PHY */
 	xgbe_phy_power_off(pdata);
···
 
 	/* Start in highest supported mode */
 	xgbe_phy_set_mode(pdata, phy_data->start_mode);
+
+	/* Reset CDR support */
+	xgbe_phy_cdr_track(pdata);
 
 	/* After starting the I2C controller, we can check for an SFP */
 	switch (phy_data->port_mode) {
···
 	}
 	}
 
+	phy_data->phy_cdr_delay = XGBE_CDR_DELAY_INIT;
+
 	/* Register for driving external PHYs */
 	mii = devm_mdiobus_alloc(pdata->dev);
 	if (!mii) {
···
 	phy_impl->an_advertising	= xgbe_phy_an_advertising;
 
 	phy_impl->an_outcome		= xgbe_phy_an_outcome;
+
+	phy_impl->an_pre		= xgbe_phy_an_pre;
+	phy_impl->an_post		= xgbe_phy_an_post;
+
+	phy_impl->kr_training_pre	= xgbe_phy_kr_training_pre;
+	phy_impl->kr_training_post	= xgbe_phy_kr_training_post;
 }
+9
drivers/net/ethernet/amd/xgbe/xgbe.h
···
 /* This structure represents implementation specific routines for an
  * implementation of a PHY. All routines are required unless noted below.
  *   Optional routines:
+ *     an_pre, an_post
  *     kr_training_pre, kr_training_post
  */
 struct xgbe_phy_impl_if {
···
 
 	/* Process results of auto-negotiation */
 	enum xgbe_mode (*an_outcome)(struct xgbe_prv_data *);
+
+	/* Pre/Post auto-negotiation support */
+	void (*an_pre)(struct xgbe_prv_data *);
+	void (*an_post)(struct xgbe_prv_data *);
 
 	/* Pre/Post KR training enablement support */
 	void (*kr_training_pre)(struct xgbe_prv_data *);
···
 	unsigned int irq_reissue_support;
 	unsigned int tx_desc_prefetch;
 	unsigned int rx_desc_prefetch;
+	unsigned int an_cdr_workaround;
 };
 
 struct xgbe_vxlan_data {
···
 	unsigned int debugfs_xprop_reg;
 
 	unsigned int debugfs_xi2c_reg;
+
+	bool debugfs_an_cdr_workaround;
+	bool debugfs_an_cdr_track_early;
 };
 
 /* Function prototypes*/
+2-2
drivers/net/ethernet/ibm/ibmvnic.c
···
 	if (!adapter->rx_pool)
 		return;
 
-	rx_scrqs = be32_to_cpu(adapter->login_rsp_buf->num_rxadd_subcrqs);
+	rx_scrqs = adapter->num_active_rx_pools;
 	rx_entries = adapter->req_rx_add_entries_per_subcrq;
 
 	/* Free any remaining skbs in the rx buffer pools */
···
 	if (!adapter->tx_pool || !adapter->tso_pool)
 		return;
 
-	tx_scrqs = be32_to_cpu(adapter->login_rsp_buf->num_txsubm_subcrqs);
+	tx_scrqs = adapter->num_active_tx_pools;
 
 	/* Free any remaining skbs in the tx buffer pools */
 	for (i = 0; i < tx_scrqs; i++) {
+1-1
drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
···
 #define ICE_LG_ACT_MIRROR_VSI_ID_S	3
 #define ICE_LG_ACT_MIRROR_VSI_ID_M	(0x3FF << ICE_LG_ACT_MIRROR_VSI_ID_S)
 
-	/* Action type = 5 - Large Action */
+	/* Action type = 5 - Generic Value */
 #define ICE_LG_ACT_GENERIC		0x5
 #define ICE_LG_ACT_GENERIC_VALUE_S	3
 #define ICE_LG_ACT_GENERIC_VALUE_M	(0xFFFF << ICE_LG_ACT_GENERIC_VALUE_S)
+17-5
drivers/net/ethernet/intel/ice/ice_common.c
···
 	struct ice_aq_desc desc;
 	enum ice_status status;
 	u16 flags;
+	u8 i;
 
 	cmd = &desc.params.mac_read;
···
 		return ICE_ERR_CFG;
 	}
 
-	ether_addr_copy(hw->port_info->mac.lan_addr, resp->mac_addr);
-	ether_addr_copy(hw->port_info->mac.perm_addr, resp->mac_addr);
+	/* A single port can report up to two (LAN and WoL) addresses */
+	for (i = 0; i < cmd->num_addr; i++)
+		if (resp[i].addr_type == ICE_AQC_MAN_MAC_ADDR_TYPE_LAN) {
+			ether_addr_copy(hw->port_info->mac.lan_addr,
+					resp[i].mac_addr);
+			ether_addr_copy(hw->port_info->mac.perm_addr,
+					resp[i].mac_addr);
+			break;
+		}
+
 	return 0;
 }
···
 	if (status)
 		goto err_unroll_sched;
 
-	/* Get port MAC information */
-	mac_buf_len = sizeof(struct ice_aqc_manage_mac_read_resp);
-	mac_buf = devm_kzalloc(ice_hw_to_dev(hw), mac_buf_len, GFP_KERNEL);
+	/* Get MAC information */
+	/* A single port can report up to two (LAN and WoL) addresses */
+	mac_buf = devm_kcalloc(ice_hw_to_dev(hw), 2,
+			       sizeof(struct ice_aqc_manage_mac_read_resp),
+			       GFP_KERNEL);
+	mac_buf_len = 2 * sizeof(struct ice_aqc_manage_mac_read_resp);
 
 	if (!mac_buf) {
 		status = ICE_ERR_NO_MEMORY;
···
 	oicr = rd32(hw, PFINT_OICR);
 	ena_mask = rd32(hw, PFINT_OICR_ENA);
 
-	if (!(oicr & PFINT_OICR_INTEVENT_M))
-		goto ena_intr;
-
 	if (oicr & PFINT_OICR_GRST_M) {
 		u32 reset;
 		/* we have a reset warning */
···
 	}
 	ret = IRQ_HANDLED;
 
-ena_intr:
 	/* re-enable interrupt causes that are not handled during this pass */
 	wr32(hw, PFINT_OICR_ENA, ena_mask);
 	if (!test_bit(__ICE_DOWN, pf->state)) {
+2-2
drivers/net/ethernet/intel/ice/ice_sched.c
···
 	u16 num_added = 0;
 	u32 temp;
 
+	*num_nodes_added = 0;
+
 	if (!num_nodes)
 		return status;
 
 	if (!parent || layer < hw->sw_entry_point_layer)
 		return ICE_ERR_PARAM;
-
-	*num_nodes_added = 0;
 
 	/* max children per node per layer */
 	max_child_nodes =
+16-1
drivers/net/ethernet/intel/igb/igb_main.c
···
 	WARN_ON(hw->mac.type != e1000_i210);
 	WARN_ON(queue < 0 || queue > 1);
 
-	if (enable) {
+	if (enable || queue == 0) {
+		/* i210 does not allow the queue 0 to be in the Strict
+		 * Priority mode while the Qav mode is enabled, so,
+		 * instead of disabling strict priority mode, we give
+		 * queue 0 the maximum of credits possible.
+		 *
+		 * See section 8.12.19 of the i210 datasheet, "Note:
+		 * Queue0 QueueMode must be set to 1b when
+		 * TransmitMode is set to Qav."
+		 */
+		if (queue == 0 && !enable) {
+			/* max "linkspeed" idleslope in kbps */
+			idleslope = 1000000;
+			hicredit = ETH_FRAME_LEN;
+		}
+
 		set_tx_desc_fetch_prio(hw, queue, TX_QUEUE_PRIO_HIGH);
 		set_queue_mode(hw, queue, QUEUE_MODE_STREAM_RESERVATION);
···
 	atomic_set(&efx->active_queues, 0);
 }
 
-static bool efx_ef10_filter_equal(const struct efx_filter_spec *left,
-				  const struct efx_filter_spec *right)
-{
-	if ((left->match_flags ^ right->match_flags) |
-	    ((left->flags ^ right->flags) &
-	     (EFX_FILTER_FLAG_RX | EFX_FILTER_FLAG_TX)))
-		return false;
-
-	return memcmp(&left->outer_vid, &right->outer_vid,
-		      sizeof(struct efx_filter_spec) -
-		      offsetof(struct efx_filter_spec, outer_vid)) == 0;
-}
-
-static unsigned int efx_ef10_filter_hash(const struct efx_filter_spec *spec)
-{
-	BUILD_BUG_ON(offsetof(struct efx_filter_spec, outer_vid) & 3);
-	return jhash2((const u32 *)&spec->outer_vid,
-		      (sizeof(struct efx_filter_spec) -
-		       offsetof(struct efx_filter_spec, outer_vid)) / 4,
-		      0);
-	/* XXX should we randomise the initval? */
-}
-
 /* Decide whether a filter should be exclusive or else should allow
  * delivery to additional recipients.  Currently we decide that
  * filters for specific local unicast MAC and IP addresses are
···
 		goto out_unlock;
 	match_pri = rc;
 
-	hash = efx_ef10_filter_hash(spec);
+	hash = efx_filter_spec_hash(spec);
 	is_mc_recip = efx_filter_is_mc_recipient(spec);
 	if (is_mc_recip)
 		bitmap_zero(mc_rem_map, EFX_EF10_FILTER_SEARCH_LIMIT);
···
 		if (!saved_spec) {
 			if (ins_index < 0)
 				ins_index = i;
-		} else if (efx_ef10_filter_equal(spec, saved_spec)) {
+		} else if (efx_filter_spec_equal(spec, saved_spec)) {
 			if (spec->priority < saved_spec->priority &&
 			    spec->priority != EFX_FILTER_PRI_AUTO) {
 				rc = -EPERM;
···
 static bool efx_ef10_filter_rfs_expire_one(struct efx_nic *efx, u32 flow_id,
 					   unsigned int filter_idx)
 {
+	struct efx_filter_spec *spec, saved_spec;
 	struct efx_ef10_filter_table *table;
-	struct efx_filter_spec *spec;
-	bool ret;
+	struct efx_arfs_rule *rule = NULL;
+	bool ret = true, force = false;
+	u16 arfs_id;
 
 	down_read(&efx->filter_sem);
 	table = efx->filter_state;
 	down_write(&table->lock);
 	spec = efx_ef10_filter_entry_spec(table, filter_idx);
 
-	if (!spec || spec->priority != EFX_FILTER_PRI_HINT) {
-		ret = true;
+	if (!spec || spec->priority != EFX_FILTER_PRI_HINT)
 		goto out_unlock;
-	}
 
-	if (!rps_may_expire_flow(efx->net_dev, spec->dmaq_id, flow_id, 0)) {
+	spin_lock_bh(&efx->rps_hash_lock);
+	if (!efx->rps_hash_table) {
+		/* In the absence of the table, we always return 0 to ARFS. */
+		arfs_id = 0;
+	} else {
+		rule = efx_rps_hash_find(efx, spec);
+		if (!rule)
+			/* ARFS table doesn't know of this filter, so remove it */
+			goto expire;
+		arfs_id = rule->arfs_id;
+		ret = efx_rps_check_rule(rule, filter_idx, &force);
+		if (force)
+			goto expire;
+		if (!ret) {
+			spin_unlock_bh(&efx->rps_hash_lock);
+			goto out_unlock;
+		}
+	}
+	if (!rps_may_expire_flow(efx->net_dev, spec->dmaq_id, flow_id, arfs_id))
 		ret = false;
-		goto out_unlock;
-	}
-
+	else if (rule)
+		rule->filter_id = EFX_ARFS_FILTER_ID_REMOVING;
+expire:
+	saved_spec = *spec; /* remove operation will kfree spec */
+	spin_unlock_bh(&efx->rps_hash_lock);
+	/* At this point (since we dropped the lock), another thread might queue
+	 * up a fresh insertion request (but the actual insertion will be held
+	 * up by our possession of the filter table lock).  In that case, it
+	 * will set rule->filter_id to EFX_ARFS_FILTER_ID_PENDING, meaning that
+	 * the rule is not removed by efx_rps_hash_del() below.
+	 */
 	ret = efx_ef10_filter_remove_internal(efx, 1U << spec->priority,
 					      filter_idx, true) == 0;
+	/* While we can't safely dereference rule (we dropped the lock), we can
+	 * still test it for NULL.
+	 */
+	if (ret && rule) {
+		/* Expiring, so remove entry from ARFS table */
+		spin_lock_bh(&efx->rps_hash_lock);
+		efx_rps_hash_del(efx, &saved_spec);
+		spin_unlock_bh(&efx->rps_hash_lock);
+	}
 out_unlock:
 	up_write(&table->lock);
 	up_read(&efx->filter_sem);
+143
drivers/net/ethernet/sfc/efx.c
···
 	mutex_init(&efx->mac_lock);
 #ifdef CONFIG_RFS_ACCEL
 	mutex_init(&efx->rps_mutex);
+	spin_lock_init(&efx->rps_hash_lock);
+	/* Failure to allocate is not fatal, but may degrade ARFS performance */
+	efx->rps_hash_table = kcalloc(EFX_ARFS_HASH_TABLE_SIZE,
+				      sizeof(*efx->rps_hash_table), GFP_KERNEL);
 #endif
 	efx->phy_op = &efx_dummy_phy_operations;
 	efx->mdio.dev = net_dev;
···
 {
 	int i;
 
+#ifdef CONFIG_RFS_ACCEL
+	kfree(efx->rps_hash_table);
+#endif
+
 	for (i = 0; i < EFX_MAX_CHANNELS; i++)
 		kfree(efx->channel[i]);
···
 	stats[GENERIC_STAT_rx_nodesc_trunc] = n_rx_nodesc_trunc;
 	stats[GENERIC_STAT_rx_noskb_drops] = atomic_read(&efx->n_rx_noskb_drops);
 }
+
+bool efx_filter_spec_equal(const struct efx_filter_spec *left,
+			   const struct efx_filter_spec *right)
+{
+	if ((left->match_flags ^ right->match_flags) |
+	    ((left->flags ^ right->flags) &
+	     (EFX_FILTER_FLAG_RX | EFX_FILTER_FLAG_TX)))
+		return false;
+
+	return memcmp(&left->outer_vid, &right->outer_vid,
+		      sizeof(struct efx_filter_spec) -
+		      offsetof(struct efx_filter_spec, outer_vid)) == 0;
+}
+
+u32 efx_filter_spec_hash(const struct efx_filter_spec *spec)
+{
+	BUILD_BUG_ON(offsetof(struct efx_filter_spec, outer_vid) & 3);
+	return jhash2((const u32 *)&spec->outer_vid,
+		      (sizeof(struct efx_filter_spec) -
+		       offsetof(struct efx_filter_spec, outer_vid)) / 4,
+		      0);
+}
+
+#ifdef CONFIG_RFS_ACCEL
+bool efx_rps_check_rule(struct efx_arfs_rule *rule, unsigned int filter_idx,
+			bool *force)
+{
+	if (rule->filter_id == EFX_ARFS_FILTER_ID_PENDING) {
+		/* ARFS is currently updating this entry, leave it */
+		return false;
+	}
+	if (rule->filter_id == EFX_ARFS_FILTER_ID_ERROR) {
+		/* ARFS tried and failed to update this, so it's probably out
+		 * of date.  Remove the filter and the ARFS rule entry.
+		 */
+		rule->filter_id = EFX_ARFS_FILTER_ID_REMOVING;
+		*force = true;
+		return true;
+	} else if (WARN_ON(rule->filter_id != filter_idx)) { /* can't happen */
+		/* ARFS has moved on, so old filter is not needed.  Since we did
+		 * not mark the rule with EFX_ARFS_FILTER_ID_REMOVING, it will
+		 * not be removed by efx_rps_hash_del() subsequently.
+		 */
+		*force = true;
+		return true;
+	}
+	/* Remove it iff ARFS wants to. */
+	return true;
+}
+
+struct hlist_head *efx_rps_hash_bucket(struct efx_nic *efx,
+				       const struct efx_filter_spec *spec)
+{
+	u32 hash = efx_filter_spec_hash(spec);
+
+	WARN_ON(!spin_is_locked(&efx->rps_hash_lock));
+	if (!efx->rps_hash_table)
+		return NULL;
+	return &efx->rps_hash_table[hash % EFX_ARFS_HASH_TABLE_SIZE];
+}
+
+struct efx_arfs_rule *efx_rps_hash_find(struct efx_nic *efx,
+					const struct efx_filter_spec *spec)
+{
+	struct efx_arfs_rule *rule;
+	struct hlist_head *head;
+	struct hlist_node *node;
+
+	head = efx_rps_hash_bucket(efx, spec);
+	if (!head)
+		return NULL;
+	hlist_for_each(node, head) {
+		rule = container_of(node, struct efx_arfs_rule, node);
+		if (efx_filter_spec_equal(spec, &rule->spec))
+			return rule;
+	}
+	return NULL;
+}
+
+struct efx_arfs_rule *efx_rps_hash_add(struct efx_nic *efx,
+				       const struct efx_filter_spec *spec,
+				       bool *new)
+{
+	struct efx_arfs_rule *rule;
+	struct hlist_head *head;
+	struct hlist_node *node;
+
+	head = efx_rps_hash_bucket(efx, spec);
+	if (!head)
+		return NULL;
+	hlist_for_each(node, head) {
+		rule = container_of(node, struct efx_arfs_rule, node);
+		if (efx_filter_spec_equal(spec, &rule->spec)) {
+			*new = false;
+			return rule;
+		}
+	}
+	rule = kmalloc(sizeof(*rule), GFP_ATOMIC);
+	*new = true;
+	if (rule) {
+		memcpy(&rule->spec, spec, sizeof(rule->spec));
+		hlist_add_head(&rule->node, head);
+	}
+	return rule;
+}
+
+void efx_rps_hash_del(struct efx_nic *efx, const struct efx_filter_spec *spec)
+{
+	struct efx_arfs_rule *rule;
+	struct hlist_head *head;
+	struct hlist_node *node;
+
+	head = efx_rps_hash_bucket(efx, spec);
+	if (WARN_ON(!head))
+		return;
+	hlist_for_each(node, head) {
+		rule = container_of(node, struct efx_arfs_rule, node);
+		if (efx_filter_spec_equal(spec, &rule->spec)) {
+			/* Someone already reused the entry.  We know that if
+			 * this check doesn't fire (i.e. filter_id == REMOVING)
+			 * then the REMOVING mark was put there by our caller,
+			 * because caller is holding a lock on filter table and
+			 * only holders of that lock set REMOVING.
+			 */
+			if (rule->filter_id != EFX_ARFS_FILTER_ID_REMOVING)
+				return;
+			hlist_del(node);
+			kfree(rule);
+			return;
+		}
+	}
+	/* We didn't find it. */
+	WARN_ON(1);
+}
+#endif
 
 /* RSS contexts.  We're using linked lists and crappy O(n) algorithms, because
  * (a) this is an infrequent control-plane operation and (b) n is small (max 64)
+21
drivers/net/ethernet/sfc/efx.h
···
 #endif
 bool efx_filter_is_mc_recipient(const struct efx_filter_spec *spec);
 
+bool efx_filter_spec_equal(const struct efx_filter_spec *left,
+			   const struct efx_filter_spec *right);
+u32 efx_filter_spec_hash(const struct efx_filter_spec *spec);
+
+#ifdef CONFIG_RFS_ACCEL
+bool efx_rps_check_rule(struct efx_arfs_rule *rule, unsigned int filter_idx,
+			bool *force);
+
+struct efx_arfs_rule *efx_rps_hash_find(struct efx_nic *efx,
+					const struct efx_filter_spec *spec);
+
+/* @new is written to indicate if entry was newly added (true) or if an old
+ * entry was found and returned (false).
+ */
+struct efx_arfs_rule *efx_rps_hash_add(struct efx_nic *efx,
+				       const struct efx_filter_spec *spec,
+				       bool *new);
+
+void efx_rps_hash_del(struct efx_nic *efx, const struct efx_filter_spec *spec);
+#endif
+
 /* RSS contexts */
 struct efx_rss_context *efx_alloc_rss_context_entry(struct efx_nic *efx);
 struct efx_rss_context *efx_find_rss_context_entry(struct efx_nic *efx, u32 id);
+34-7
drivers/net/ethernet/sfc/farch.c
···
 {
 	struct efx_farch_filter_state *state = efx->filter_state;
 	struct efx_farch_filter_table *table;
-	bool ret = false;
+	bool ret = false, force = false;
+	u16 arfs_id;
 
 	down_write(&state->lock);
+	spin_lock_bh(&efx->rps_hash_lock);
 	table = &state->table[EFX_FARCH_FILTER_TABLE_RX_IP];
 	if (test_bit(index, table->used_bitmap) &&
-	    table->spec[index].priority == EFX_FILTER_PRI_HINT &&
-	    rps_may_expire_flow(efx->net_dev, table->spec[index].dmaq_id,
-				flow_id, 0)) {
-		efx_farch_filter_table_clear_entry(efx, table, index);
-		ret = true;
-	}
+	    table->spec[index].priority == EFX_FILTER_PRI_HINT) {
+		struct efx_arfs_rule *rule = NULL;
+		struct efx_filter_spec spec;
+
+		efx_farch_filter_to_gen_spec(&spec, &table->spec[index]);
+		if (!efx->rps_hash_table) {
+			/* In the absence of the table, we always returned 0 to
+			 * ARFS, so use the same to query it.
+			 */
+			arfs_id = 0;
+		} else {
+			rule = efx_rps_hash_find(efx, &spec);
+			if (!rule) {
+				/* ARFS table doesn't know of this filter, remove it */
+				force = true;
+			} else {
+				arfs_id = rule->arfs_id;
+				if (!efx_rps_check_rule(rule, index, &force))
+					goto out_unlock;
+			}
+		}
+		if (force || rps_may_expire_flow(efx->net_dev, spec.dmaq_id,
+						 flow_id, arfs_id)) {
+			if (rule)
+				rule->filter_id = EFX_ARFS_FILTER_ID_REMOVING;
+			efx_rps_hash_del(efx, &spec);
+			efx_farch_filter_table_clear_entry(efx, table, index);
+			ret = true;
+		}
+	}
+out_unlock:
+	spin_unlock_bh(&efx->rps_hash_lock);
 	up_write(&state->lock);
 	return ret;
 }
+36
drivers/net/ethernet/sfc/net_driver.h
···734734};735735736736#ifdef CONFIG_RFS_ACCEL737737+/* Order of these is important, since filter_id >= %EFX_ARFS_FILTER_ID_PENDING738738+ * is used to test if filter does or will exist.739739+ */740740+#define EFX_ARFS_FILTER_ID_PENDING -1741741+#define EFX_ARFS_FILTER_ID_ERROR -2742742+#define EFX_ARFS_FILTER_ID_REMOVING -3743743+/**744744+ * struct efx_arfs_rule - record of an ARFS filter and its IDs745745+ * @node: linkage into hash table746746+ * @spec: details of the filter (used as key for hash table). Use efx->type to747747+ * determine which member to use.748748+ * @rxq_index: channel to which the filter will steer traffic.749749+ * @arfs_id: filter ID which was returned to ARFS750750+ * @filter_id: index in software filter table. May be751751+ * %EFX_ARFS_FILTER_ID_PENDING if filter was not inserted yet,752752+ * %EFX_ARFS_FILTER_ID_ERROR if filter insertion failed, or753753+ * %EFX_ARFS_FILTER_ID_REMOVING if expiry is currently removing the filter.754754+ */755755+struct efx_arfs_rule {756756+ struct hlist_node node;757757+ struct efx_filter_spec spec;758758+ u16 rxq_index;759759+ u16 arfs_id;760760+ s32 filter_id;761761+};762762+763763+/* Size chosen so that the table is one page (4kB) */764764+#define EFX_ARFS_HASH_TABLE_SIZE 512765765+737766/**738767 * struct efx_async_filter_insertion - Request to asynchronously insert a filter739768 * @net_dev: Reference to the netdevice···902873 * @rps_expire_channel's @rps_flow_id903874 * @rps_slot_map: bitmap of in-flight entries in @rps_slot904875 * @rps_slot: array of ARFS insertion requests for efx_filter_rfs_work()876876+ * @rps_hash_lock: Protects ARFS filter mapping state (@rps_hash_table and877877+ * @rps_next_id).878878+ * @rps_hash_table: Mapping between ARFS filters and their various IDs879879+ * @rps_next_id: next arfs_id for an ARFS filter905880 * @active_queues: Count of RX and TX queues that haven't been flushed and drained.906881 * @rxq_flush_pending: Count of number of receive queues that need to 
be flushed.907882 * Decremented when the efx_flush_rx_queue() is called.···10621029 unsigned int rps_expire_index;10631030 unsigned long rps_slot_map;10641031 struct efx_async_filter_insertion rps_slot[EFX_RPS_MAX_IN_FLIGHT];10321032+ spinlock_t rps_hash_lock;10331033+ struct hlist_head *rps_hash_table;10341034+ u32 rps_next_id;10651035#endif1066103610671037 atomic_t active_queues;
+57-5
drivers/net/ethernet/sfc/rx.c
···834834 struct efx_nic *efx = netdev_priv(req->net_dev);835835 struct efx_channel *channel = efx_get_channel(efx, req->rxq_index);836836 int slot_idx = req - efx->rps_slot;837837+ struct efx_arfs_rule *rule;838838+ u16 arfs_id = 0;837839 int rc;838840839841 rc = efx->type->filter_insert(efx, &req->spec, true);842842+ if (efx->rps_hash_table) {843843+ spin_lock_bh(&efx->rps_hash_lock);844844+ rule = efx_rps_hash_find(efx, &req->spec);845845+ /* The rule might have already gone, if someone else's request846846+ * for the same spec was already worked and then expired before847847+ * we got around to our work. In that case we have nothing848848+ * tying us to an arfs_id, meaning that as soon as the filter849849+ * is considered for expiry it will be removed.850850+ */851851+ if (rule) {852852+ if (rc < 0)853853+ rule->filter_id = EFX_ARFS_FILTER_ID_ERROR;854854+ else855855+ rule->filter_id = rc;856856+ arfs_id = rule->arfs_id;857857+ }858858+ spin_unlock_bh(&efx->rps_hash_lock);859859+ }840860 if (rc >= 0) {841861 /* Remember this so we can check whether to expire the filter842862 * later.···868848869849 if (req->spec.ether_type == htons(ETH_P_IP))870850 netif_info(efx, rx_status, efx->net_dev,871871- "steering %s %pI4:%u:%pI4:%u to queue %u [flow %u filter %d]\n",851851+ "steering %s %pI4:%u:%pI4:%u to queue %u [flow %u filter %d id %u]\n",872852 (req->spec.ip_proto == IPPROTO_TCP) ? "TCP" : "UDP",873853 req->spec.rem_host, ntohs(req->spec.rem_port),874854 req->spec.loc_host, ntohs(req->spec.loc_port),875875- req->rxq_index, req->flow_id, rc);855855+ req->rxq_index, req->flow_id, rc, arfs_id);876856 else877857 netif_info(efx, rx_status, efx->net_dev,878878- "steering %s [%pI6]:%u:[%pI6]:%u to queue %u [flow %u filter %d]\n",858858+ "steering %s [%pI6]:%u:[%pI6]:%u to queue %u [flow %u filter %d id %u]\n",879859 (req->spec.ip_proto == IPPROTO_TCP) ? 
"TCP" : "UDP",880860 req->spec.rem_host, ntohs(req->spec.rem_port),881861 req->spec.loc_host, ntohs(req->spec.loc_port),882882- req->rxq_index, req->flow_id, rc);862862+ req->rxq_index, req->flow_id, rc, arfs_id);883863 }884864885865 /* Release references */···892872{893873 struct efx_nic *efx = netdev_priv(net_dev);894874 struct efx_async_filter_insertion *req;875875+ struct efx_arfs_rule *rule;895876 struct flow_keys fk;896877 int slot_idx;878878+ bool new;897879 int rc;898880899881 /* find a free slot */···948926 req->spec.rem_port = fk.ports.src;949927 req->spec.loc_port = fk.ports.dst;950928929929+ if (efx->rps_hash_table) {930930+ /* Add it to ARFS hash table */931931+ spin_lock(&efx->rps_hash_lock);932932+ rule = efx_rps_hash_add(efx, &req->spec, &new);933933+ if (!rule) {934934+ rc = -ENOMEM;935935+ goto out_unlock;936936+ }937937+ if (new)938938+ rule->arfs_id = efx->rps_next_id++ % RPS_NO_FILTER;939939+ rc = rule->arfs_id;940940+ /* Skip if existing or pending filter already does the right thing */941941+ if (!new && rule->rxq_index == rxq_index &&942942+ rule->filter_id >= EFX_ARFS_FILTER_ID_PENDING)943943+ goto out_unlock;944944+ rule->rxq_index = rxq_index;945945+ rule->filter_id = EFX_ARFS_FILTER_ID_PENDING;946946+ spin_unlock(&efx->rps_hash_lock);947947+ } else {948948+ /* Without an ARFS hash table, we just use arfs_id 0 for all949949+ * filters. This means if multiple flows hash to the same950950+ * flow_id, all but the most recently touched will be eligible951951+ * for expiry.952952+ */953953+ rc = 0;954954+ }955955+956956+ /* Queue the request */951957 dev_hold(req->net_dev = net_dev);952958 INIT_WORK(&req->work, efx_filter_rfs_work);953959 req->rxq_index = rxq_index;954960 req->flow_id = flow_id;955961 schedule_work(&req->work);956956- return 0;962962+ return rc;963963+out_unlock:964964+ spin_unlock(&efx->rps_hash_lock);957965out_clear:958966 clear_bit(slot_idx, &efx->rps_slot_map);959967 return rc;
···13931393 if (err < 0)13941394 goto error;1395139513961396+ /* If WOL event happened once, the LED[2] interrupt pin13971397+ * will not be cleared unless we read the interrupt status13981398+ * register. If interrupts are in use, the normal interrupt13991399+ * handling will clear the WOL event. Clear the WOL event14001400+ * before enabling it if !phy_interrupt_is_valid()14011401+ */14021402+ if (!phy_interrupt_is_valid(phydev))14031403+ phy_read(phydev, MII_M1011_IEVENT);14041404+13961405 /* Enable the WOL interrupt */13971406 err = __phy_modify(phydev, MII_88E1318S_PHY_CSIER, 0,13981407 MII_88E1318S_PHY_CSIER_WOL_EIE);
+4
drivers/net/ppp/pppoe.c
···620620 lock_sock(sk);621621622622 error = -EINVAL;623623+624624+ if (sockaddr_len != sizeof(struct sockaddr_pppox))625625+ goto end;626626+623627 if (sp->sa_protocol != PX_PROTO_OE)624628 goto end;625629
+12-7
drivers/net/team/team.c
···10721072}1073107310741074#ifdef CONFIG_NET_POLL_CONTROLLER10751075-static int team_port_enable_netpoll(struct team *team, struct team_port *port)10751075+static int __team_port_enable_netpoll(struct team_port *port)10761076{10771077 struct netpoll *np;10781078 int err;10791079-10801080- if (!team->dev->npinfo)10811081- return 0;1082107910831080 np = kzalloc(sizeof(*np), GFP_KERNEL);10841081 if (!np)···10881091 }10891092 port->np = np;10901093 return err;10941094+}10951095+10961096+static int team_port_enable_netpoll(struct team_port *port)10971097+{10981098+ if (!port->team->dev->npinfo)10991099+ return 0;11001100+11011101+ return __team_port_enable_netpoll(port);10911102}1092110310931104static void team_port_disable_netpoll(struct team_port *port)···11121107 kfree(np);11131108}11141109#else11151115-static int team_port_enable_netpoll(struct team *team, struct team_port *port)11101110+static int team_port_enable_netpoll(struct team_port *port)11161111{11171112 return 0;11181113}···12261221 goto err_vids_add;12271222 }1228122312291229- err = team_port_enable_netpoll(team, port);12241224+ err = team_port_enable_netpoll(port);12301225 if (err) {12311226 netdev_err(dev, "Failed to enable netpoll on device %s\n",12321227 portname);···1923191819241919 mutex_lock(&team->lock);19251920 list_for_each_entry(port, &team->port_list, list) {19261926- err = team_port_enable_netpoll(team, port);19211921+ err = __team_port_enable_netpoll(port);19271922 if (err) {19281923 __team_netpoll_cleanup(team);19291924 break;
+5-2
drivers/of/fdt.c
···942942 int offset;943943 const char *p, *q, *options = NULL;944944 int l;945945- const struct earlycon_id *match;945945+ const struct earlycon_id **p_match;946946 const void *fdt = initial_boot_params;947947948948 offset = fdt_path_offset(fdt, "/chosen");···969969 return 0;970970 }971971972972- for (match = __earlycon_table; match < __earlycon_table_end; match++) {972972+ for (p_match = __earlycon_table; p_match < __earlycon_table_end;973973+ p_match++) {974974+ const struct earlycon_id *match = *p_match;975975+973976 if (!match->compatible[0])974977 continue;975978
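The fdt.c change reflects that `__earlycon_table` now holds pointers to `earlycon_id` entries rather than the entries themselves, so the walk needs one extra dereference. A tiny standalone illustration of the pointer-table iteration (hypothetical data, not the kernel linker-section table):

```c
#include <string.h>

struct earlycon_id_like {
	const char *compatible;
};

static const struct earlycon_id_like uart_a = { "ns16550a" };
static const struct earlycon_id_like uart_b = { "arm,pl011" };

/* With a table of pointers, iteration dereferences twice: once to
 * step through the table, once to reach the entry itself. */
static const struct earlycon_id_like *table[] = { &uart_a, &uart_b };

static const struct earlycon_id_like *find(const char *compat)
{
	const struct earlycon_id_like **p_match;

	for (p_match = table; p_match < table + 2; p_match++) {
		const struct earlycon_id_like *match = *p_match;

		if (!strcmp(match->compatible, compat))
			return match;
	}
	return NULL;
}
```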
···958958 * devices should not be touched during freeze/thaw transitions,959959 * however.960960 */961961- if (!dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND))961961+ if (!dev_pm_smart_suspend_and_suspended(dev)) {962962 pm_runtime_resume(dev);963963+ pci_dev->state_saved = false;964964+ }963965964964- pci_dev->state_saved = false;965966 if (pm->freeze) {966967 int error;967968
+2-2
drivers/pci/pci.c
···52735273 bw_avail = pcie_bandwidth_available(dev, &limiting_dev, &speed, &width);5274527452755275 if (bw_avail >= bw_cap)52765276- pci_info(dev, "%u.%03u Gb/s available bandwidth (%s x%d link)\n",52765276+ pci_info(dev, "%u.%03u Gb/s available PCIe bandwidth (%s x%d link)\n",52775277 bw_cap / 1000, bw_cap % 1000,52785278 PCIE_SPEED2STR(speed_cap), width_cap);52795279 else52805280- pci_info(dev, "%u.%03u Gb/s available bandwidth, limited by %s x%d link at %s (capable of %u.%03u Gb/s with %s x%d link)\n",52805280+ pci_info(dev, "%u.%03u Gb/s available PCIe bandwidth, limited by %s x%d link at %s (capable of %u.%03u Gb/s with %s x%d link)\n",52815281 bw_avail / 1000, bw_avail % 1000,52825282 PCIE_SPEED2STR(speed), width,52835283 limiting_dev ? pci_name(limiting_dev) : "<unknown>",
+23-14
drivers/rtc/rtc-opal.c
···57575858static int opal_get_rtc_time(struct device *dev, struct rtc_time *tm)5959{6060- long rc = OPAL_BUSY;6060+ s64 rc = OPAL_BUSY;6161 int retries = 10;6262 u32 y_m_d;6363 u64 h_m_s_ms;···66666767 while (rc == OPAL_BUSY || rc == OPAL_BUSY_EVENT) {6868 rc = opal_rtc_read(&__y_m_d, &__h_m_s_ms);6969- if (rc == OPAL_BUSY_EVENT)6969+ if (rc == OPAL_BUSY_EVENT) {7070+ msleep(OPAL_BUSY_DELAY_MS);7071 opal_poll_events(NULL);7171- else if (retries-- && (rc == OPAL_HARDWARE7272- || rc == OPAL_INTERNAL_ERROR))7373- msleep(10);7474- else if (rc != OPAL_BUSY && rc != OPAL_BUSY_EVENT)7575- break;7272+ } else if (rc == OPAL_BUSY) {7373+ msleep(OPAL_BUSY_DELAY_MS);7474+ } else if (rc == OPAL_HARDWARE || rc == OPAL_INTERNAL_ERROR) {7575+ if (retries--) {7676+ msleep(10); /* Wait 10ms before retry */7777+ rc = OPAL_BUSY; /* go around again */7878+ }7979+ }7680 }77817882 if (rc != OPAL_SUCCESS)···91879288static int opal_set_rtc_time(struct device *dev, struct rtc_time *tm)9389{9494- long rc = OPAL_BUSY;9090+ s64 rc = OPAL_BUSY;9591 int retries = 10;9692 u32 y_m_d = 0;9793 u64 h_m_s_ms = 0;98949995 tm_to_opal(tm, &y_m_d, &h_m_s_ms);9696+10097 while (rc == OPAL_BUSY || rc == OPAL_BUSY_EVENT) {10198 rc = opal_rtc_write(y_m_d, h_m_s_ms);102102- if (rc == OPAL_BUSY_EVENT)9999+ if (rc == OPAL_BUSY_EVENT) {100100+ msleep(OPAL_BUSY_DELAY_MS);103101 opal_poll_events(NULL);104104- else if (retries-- && (rc == OPAL_HARDWARE105105- || rc == OPAL_INTERNAL_ERROR))106106- msleep(10);107107- else if (rc != OPAL_BUSY && rc != OPAL_BUSY_EVENT)108108- break;102102+ } else if (rc == OPAL_BUSY) {103103+ msleep(OPAL_BUSY_DELAY_MS);104104+ } else if (rc == OPAL_HARDWARE || rc == OPAL_INTERNAL_ERROR) {105105+ if (retries--) {106106+ msleep(10); /* Wait 10ms before retry */107107+ rc = OPAL_BUSY; /* go around again */108108+ }109109+ }109110 }110111111112 return rc == OPAL_SUCCESS ? 0 : -EIO;
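The two reworked rtc-opal loops share one shape: transient codes sleep and loop, while hardware errors are converted back to BUSY a bounded number of times so the loop retries instead of falling through or spinning forever. A condensed sketch of that control flow (stub firmware call and illustrative constants, not the OPAL API):

```c
enum { OP_SUCCESS = 0, OP_BUSY = -1, OP_BUSY_EVENT = -2, OP_HARDWARE = -3 };

/* Stub "firmware" call: fails with OP_HARDWARE a few times, then succeeds. */
static int fails_left = 3;
static int op_call(void)
{
	return fails_left-- > 0 ? OP_HARDWARE : OP_SUCCESS;
}

static void sleep_ms(int ms) { (void)ms; /* placeholder for msleep() */ }

static int do_op_with_retries(void)
{
	int rc = OP_BUSY;
	int retries = 10;

	while (rc == OP_BUSY || rc == OP_BUSY_EVENT) {
		rc = op_call();
		if (rc == OP_BUSY_EVENT || rc == OP_BUSY) {
			sleep_ms(10);		/* transient: wait and loop */
		} else if (rc == OP_HARDWARE) {
			if (retries--) {
				sleep_ms(10);	/* wait 10ms before retry */
				rc = OP_BUSY;	/* go around again */
			}
			/* retries exhausted: fall out with the error */
		}
	}
	return rc == OP_SUCCESS ? 0 : -5;	/* -EIO stand-in */
}
```

Resetting `rc` to BUSY on a retryable hardware error is what keeps the `while` condition true; once the retry budget is spent, `rc` retains the error and the loop exits naturally.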
+11-2
drivers/s390/block/dasd_alias.c
···592592int dasd_alias_add_device(struct dasd_device *device)593593{594594 struct dasd_eckd_private *private = device->private;595595- struct alias_lcu *lcu;595595+ __u8 uaddr = private->uid.real_unit_addr;596596+ struct alias_lcu *lcu = private->lcu;596597 unsigned long flags;597598 int rc;598599599599- lcu = private->lcu;600600 rc = 0;601601 spin_lock_irqsave(&lcu->lock, flags);602602+ /*603603+ * Check if device and lcu type differ. If so, the uac data may be604604+ * outdated and needs to be updated.605605+ */606606+ if (private->uid.type != lcu->uac->unit[uaddr].ua_type) {607607+ lcu->flags |= UPDATE_PENDING;608608+ DBF_DEV_EVENT(DBF_WARNING, device, "%s",609609+ "uid type mismatch - trigger rescan");610610+ }602611 if (!(lcu->flags & UPDATE_PENDING)) {603612 rc = _add_device_to_lcu(lcu, device, device);604613 if (rc)
+11-3
drivers/s390/cio/chsc.c
···452452453453static void chsc_process_sei_res_acc(struct chsc_sei_nt0_area *sei_area)454454{455455+ struct channel_path *chp;455456 struct chp_link link;456457 struct chp_id chpid;457458 int status;···465464 chpid.id = sei_area->rsid;466465 /* allocate a new channel path structure, if needed */467466 status = chp_get_status(chpid);468468- if (status < 0)469469- chp_new(chpid);470470- else if (!status)467467+ if (!status)471468 return;469469+470470+ if (status < 0) {471471+ chp_new(chpid);472472+ } else {473473+ chp = chpid_to_chp(chpid);474474+ mutex_lock(&chp->lock);475475+ chp_update_desc(chp);476476+ mutex_unlock(&chp->lock);477477+ }472478 memset(&link, 0, sizeof(struct chp_link));473479 link.chpid = chpid;474480 if ((sei_area->vf & 0xc0) != 0) {
+12-7
drivers/s390/cio/vfio_ccw_fsm.c
···2020 int ccode;2121 __u8 lpm;2222 unsigned long flags;2323+ int ret;23242425 sch = private->sch;25262627 spin_lock_irqsave(sch->lock, flags);2728 private->state = VFIO_CCW_STATE_BUSY;2828- spin_unlock_irqrestore(sch->lock, flags);29293030 orb = cp_get_orb(&private->cp, (u32)(addr_t)sch, sch->lpm);3131···3838 * Initialize device status information3939 */4040 sch->schib.scsw.cmd.actl |= SCSW_ACTL_START_PEND;4141- return 0;4141+ ret = 0;4242+ break;4243 case 1: /* Status pending */4344 case 2: /* Busy */4444- return -EBUSY;4545+ ret = -EBUSY;4646+ break;4547 case 3: /* Device/path not operational */4648 {4749 lpm = orb->cmd.lpm;···5351 sch->lpm = 0;54525553 if (cio_update_schib(sch))5656- return -ENODEV;5757-5858- return sch->lpm ? -EACCES : -ENODEV;5454+ ret = -ENODEV;5555+ else5656+ ret = sch->lpm ? -EACCES : -ENODEV;5757+ break;5958 }6059 default:6161- return ccode;6060+ ret = ccode;6261 }6262+ spin_unlock_irqrestore(sch->lock, flags);6363+ return ret;6364}64656566static void fsm_notoper(struct vfio_ccw_private *private,
···3535#define QETH_HALT_CHANNEL_PARM -113636#define QETH_RCD_PARM -1237373838+static inline bool qeth_intparm_is_iob(unsigned long intparm)3939+{4040+ switch (intparm) {4141+ case QETH_CLEAR_CHANNEL_PARM:4242+ case QETH_HALT_CHANNEL_PARM:4343+ case QETH_RCD_PARM:4444+ case 0:4545+ return false;4646+ }4747+ return true;4848+}4949+3850/*****************************************************************************/3951/* IP Assist related definitions */4052/*****************************************************************************/
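The new `qeth_intparm_is_iob()` helper is a switch-based predicate over a handful of magic interrupt parameters; any value not in the list is treated as a real I/O buffer pointer. A compilable sketch of the same idiom (values here are illustrative stand-ins for the `QETH_*_PARM` constants):

```c
#include <stdbool.h>

/* Stand-ins for QETH_CLEAR_CHANNEL_PARM etc.; values illustrative. */
#define CLEAR_CHANNEL_PARM	((unsigned long)-10)
#define HALT_CHANNEL_PARM	((unsigned long)-11)
#define RCD_PARM		((unsigned long)-12)

/* A switch with fall-through cases and no default cleanly encodes
 * "is this one of the known magic values?"; everything else is
 * assumed to be a pointer to a command buffer (iob). */
static inline bool intparm_is_iob(unsigned long intparm)
{
	switch (intparm) {
	case CLEAR_CHANNEL_PARM:
	case HALT_CHANNEL_PARM:
	case RCD_PARM:
	case 0:
		return false;
	}
	return true;
}
```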
+33-26
drivers/s390/net/qeth_l2_main.c
···121121 QETH_CARD_TEXT(card, 2, "L2Setmac");122122 rc = qeth_l2_send_setdelmac(card, mac, IPA_CMD_SETVMAC);123123 if (rc == 0) {124124- card->info.mac_bits |= QETH_LAYER2_MAC_REGISTERED;125125- ether_addr_copy(card->dev->dev_addr, mac);126124 dev_info(&card->gdev->dev,127127- "MAC address %pM successfully registered on device %s\n",128128- card->dev->dev_addr, card->dev->name);125125+ "MAC address %pM successfully registered on device %s\n",126126+ mac, card->dev->name);129127 } else {130130- card->info.mac_bits &= ~QETH_LAYER2_MAC_REGISTERED;131128 switch (rc) {132129 case -EEXIST:133130 dev_warn(&card->gdev->dev,···136139 break;137140 }138141 }139139- return rc;140140-}141141-142142-static int qeth_l2_send_delmac(struct qeth_card *card, __u8 *mac)143143-{144144- int rc;145145-146146- QETH_CARD_TEXT(card, 2, "L2Delmac");147147- if (!(card->info.mac_bits & QETH_LAYER2_MAC_REGISTERED))148148- return 0;149149- rc = qeth_l2_send_setdelmac(card, mac, IPA_CMD_DELVMAC);150150- if (rc == 0)151151- card->info.mac_bits &= ~QETH_LAYER2_MAC_REGISTERED;152142 return rc;153143}154144···503519{504520 struct sockaddr *addr = p;505521 struct qeth_card *card = dev->ml_priv;522522+ u8 old_addr[ETH_ALEN];506523 int rc = 0;507524508525 QETH_CARD_TEXT(card, 3, "setmac");···515530 return -EOPNOTSUPP;516531 }517532 QETH_CARD_HEX(card, 3, addr->sa_data, ETH_ALEN);533533+ if (!is_valid_ether_addr(addr->sa_data))534534+ return -EADDRNOTAVAIL;535535+518536 if (qeth_wait_for_threads(card, QETH_RECOVER_THREAD)) {519537 QETH_CARD_TEXT(card, 3, "setmcREC");520538 return -ERESTARTSYS;521539 }522522- rc = qeth_l2_send_delmac(card, &card->dev->dev_addr[0]);523523- if (!rc || (rc == -ENOENT))524524- rc = qeth_l2_send_setmac(card, addr->sa_data);525525- return rc ? 
-EINVAL : 0;540540+541541+ if (!qeth_card_hw_is_reachable(card)) {542542+ ether_addr_copy(dev->dev_addr, addr->sa_data);543543+ return 0;544544+ }545545+546546+ /* don't register the same address twice */547547+ if (ether_addr_equal_64bits(dev->dev_addr, addr->sa_data) &&548548+ (card->info.mac_bits & QETH_LAYER2_MAC_REGISTERED))549549+ return 0;550550+551551+ /* add the new address, switch over, drop the old */552552+ rc = qeth_l2_send_setmac(card, addr->sa_data);553553+ if (rc)554554+ return rc;555555+ ether_addr_copy(old_addr, dev->dev_addr);556556+ ether_addr_copy(dev->dev_addr, addr->sa_data);557557+558558+ if (card->info.mac_bits & QETH_LAYER2_MAC_REGISTERED)559559+ qeth_l2_remove_mac(card, old_addr);560560+ card->info.mac_bits |= QETH_LAYER2_MAC_REGISTERED;561561+ return 0;526562}527563528564static void qeth_promisc_to_bridge(struct qeth_card *card)···10731067 goto out_remove;10741068 }1075106910761076- if (card->info.type != QETH_CARD_TYPE_OSN)10771077- qeth_l2_send_setmac(card, &card->dev->dev_addr[0]);10701070+ if (card->info.type != QETH_CARD_TYPE_OSN &&10711071+ !qeth_l2_send_setmac(card, card->dev->dev_addr))10721072+ card->info.mac_bits |= QETH_LAYER2_MAC_REGISTERED;1078107310791074 if (qeth_is_diagass_supported(card, QETH_DIAGS_CMD_TRAP)) {10801075 if (card->info.hwtrap &&···13451338 qeth_prepare_control_data(card, len, iob);13461339 QETH_CARD_TEXT(card, 6, "osnoirqp");13471340 spin_lock_irqsave(get_ccwdev_lock(card->write.ccwdev), flags);13481348- rc = ccw_device_start(card->write.ccwdev, &card->write.ccw,13491349- (addr_t) iob, 0, 0);13411341+ rc = ccw_device_start_timeout(CARD_WDEV(card), &card->write.ccw,13421342+ (addr_t) iob, 0, 0, QETH_IPA_TIMEOUT);13501343 spin_unlock_irqrestore(get_ccwdev_lock(card->write.ccwdev), flags);13511344 if (rc) {13521345 QETH_DBF_MESSAGE(2, "qeth_osn_send_control_data: "
···21212121 break; /* standby */21222122 if (sshdr.asc == 4 && sshdr.ascq == 0xc)21232123 break; /* unavailable */21242124+ if (sshdr.asc == 4 && sshdr.ascq == 0x1b)21252125+ break; /* sanitize in progress */21242126 /*21252127 * Issue command to spin up drive when not ready21262128 */
+80-56
drivers/scsi/sd_zbc.c
···400400 *401401 * Check that all zones of the device are equal. The last zone can however402402 * be smaller. The zone size must also be a power of two number of LBAs.403403+ *404404+ * Returns the zone size in bytes upon success or an error code upon failure.403405 */404404-static int sd_zbc_check_zone_size(struct scsi_disk *sdkp)406406+static s64 sd_zbc_check_zone_size(struct scsi_disk *sdkp)405407{406408 u64 zone_blocks = 0;407409 sector_t block = 0;···413411 unsigned int list_length;414412 int ret;415413 u8 same;416416-417417- sdkp->zone_blocks = 0;418414419415 /* Get a buffer */420416 buf = kmalloc(SD_ZBC_BUF_SIZE, GFP_KERNEL);···445445446446 /* Parse zone descriptors */447447 while (rec < buf + buf_len) {448448- zone_blocks = get_unaligned_be64(&rec[8]);449449- if (sdkp->zone_blocks == 0) {450450- sdkp->zone_blocks = zone_blocks;451451- } else if (zone_blocks != sdkp->zone_blocks &&452452- (block + zone_blocks < sdkp->capacity453453- || zone_blocks > sdkp->zone_blocks)) {454454- zone_blocks = 0;448448+ u64 this_zone_blocks = get_unaligned_be64(&rec[8]);449449+450450+ if (zone_blocks == 0) {451451+ zone_blocks = this_zone_blocks;452452+ } else if (this_zone_blocks != zone_blocks &&453453+ (block + this_zone_blocks < sdkp->capacity454454+ || this_zone_blocks > zone_blocks)) {455455+ this_zone_blocks = 0;455456 goto out;456457 }457457- block += zone_blocks;458458+ block += this_zone_blocks;458459 rec += 64;459460 }460461···467466 }468467469468 } while (block < sdkp->capacity);470470-471471- zone_blocks = sdkp->zone_blocks;472469473470out:474471 if (!zone_blocks) {···487488 "Zone size too large\n");488489 ret = -ENODEV;489490 } else {490490- sdkp->zone_blocks = zone_blocks;491491- sdkp->zone_shift = ilog2(zone_blocks);491491+ ret = zone_blocks;492492 }493493494494out_free:···498500499501/**500502 * sd_zbc_alloc_zone_bitmap - Allocate a zone bitmap (one bit per zone).501501- * @sdkp: The disk of the bitmap503503+ * @nr_zones: Number of zones to allocate space 
for.504504+ * @numa_node: NUMA node to allocate the memory from.502505 */503503-static inline unsigned long *sd_zbc_alloc_zone_bitmap(struct scsi_disk *sdkp)506506+static inline unsigned long *507507+sd_zbc_alloc_zone_bitmap(u32 nr_zones, int numa_node)504508{505505- struct request_queue *q = sdkp->disk->queue;506506-507507- return kzalloc_node(BITS_TO_LONGS(sdkp->nr_zones)508508- * sizeof(unsigned long),509509- GFP_KERNEL, q->node);509509+ return kzalloc_node(BITS_TO_LONGS(nr_zones) * sizeof(unsigned long),510510+ GFP_KERNEL, numa_node);510511}511512512513/**···513516 * @sdkp: disk used514517 * @buf: report reply buffer515518 * @buflen: length of @buf519519+ * @zone_shift: logarithm base 2 of the number of blocks in a zone516520 * @seq_zones_bitmap: bitmap of sequential zones to set517521 *518522 * Parse reported zone descriptors in @buf to identify sequential zones and···523525 * Return the LBA after the last zone reported.524526 */525527static sector_t sd_zbc_get_seq_zones(struct scsi_disk *sdkp, unsigned char *buf,526526- unsigned int buflen,528528+ unsigned int buflen, u32 zone_shift,527529 unsigned long *seq_zones_bitmap)528530{529531 sector_t lba, next_lba = sdkp->capacity;···542544 if (type != ZBC_ZONE_TYPE_CONV &&543545 cond != ZBC_ZONE_COND_READONLY &&544546 cond != ZBC_ZONE_COND_OFFLINE)545545- set_bit(lba >> sdkp->zone_shift, seq_zones_bitmap);547547+ set_bit(lba >> zone_shift, seq_zones_bitmap);546548 next_lba = lba + get_unaligned_be64(&rec[8]);547549 rec += 64;548550 }···551553}552554553555/**554554- * sd_zbc_setup_seq_zones_bitmap - Initialize the disk seq zone bitmap.556556+ * sd_zbc_setup_seq_zones_bitmap - Initialize a seq zone bitmap.555557 * @sdkp: target disk558558+ * @zone_shift: logarithm base 2 of the number of blocks in a zone559559+ * @nr_zones: number of zones to set up a seq zone bitmap for556560 *557561 * Allocate a zone bitmap and initialize it by identifying sequential zones.558562 */559559-static int 
sd_zbc_setup_seq_zones_bitmap(struct scsi_disk *sdkp)563563+static unsigned long *564564+sd_zbc_setup_seq_zones_bitmap(struct scsi_disk *sdkp, u32 zone_shift,565565+ u32 nr_zones)560566{561567 struct request_queue *q = sdkp->disk->queue;562568 unsigned long *seq_zones_bitmap;···568566 unsigned char *buf;569567 int ret = -ENOMEM;570568571571- seq_zones_bitmap = sd_zbc_alloc_zone_bitmap(sdkp);569569+ seq_zones_bitmap = sd_zbc_alloc_zone_bitmap(nr_zones, q->node);572570 if (!seq_zones_bitmap)573573- return -ENOMEM;571571+ return ERR_PTR(-ENOMEM);574572575573 buf = kmalloc(SD_ZBC_BUF_SIZE, GFP_KERNEL);576574 if (!buf)···581579 if (ret)582580 goto out;583581 lba = sd_zbc_get_seq_zones(sdkp, buf, SD_ZBC_BUF_SIZE,584584- seq_zones_bitmap);582582+ zone_shift, seq_zones_bitmap);585583 }586584587585 if (lba != sdkp->capacity) {···593591 kfree(buf);594592 if (ret) {595593 kfree(seq_zones_bitmap);596596- return ret;594594+ return ERR_PTR(ret);597595 }598598-599599- q->seq_zones_bitmap = seq_zones_bitmap;600600-601601- return 0;596596+ return seq_zones_bitmap;602597}603598604599static void sd_zbc_cleanup(struct scsi_disk *sdkp)···611612 q->nr_zones = 0;612613}613614614614-static int sd_zbc_setup(struct scsi_disk *sdkp)615615+static int sd_zbc_setup(struct scsi_disk *sdkp, u32 zone_blocks)615616{616617 struct request_queue *q = sdkp->disk->queue;618618+ u32 zone_shift = ilog2(zone_blocks);619619+ u32 nr_zones;617620 int ret;618621619619- /* READ16/WRITE16 is mandatory for ZBC disks */620620- sdkp->device->use_16_for_rw = 1;621621- sdkp->device->use_10_for_rw = 0;622622-623622 /* chunk_sectors indicates the zone size */624624- blk_queue_chunk_sectors(sdkp->disk->queue,625625- logical_to_sectors(sdkp->device, sdkp->zone_blocks));626626- sdkp->nr_zones =627627- round_up(sdkp->capacity, sdkp->zone_blocks) >> sdkp->zone_shift;623623+ blk_queue_chunk_sectors(q,624624+ logical_to_sectors(sdkp->device, zone_blocks));625625+ nr_zones = round_up(sdkp->capacity, zone_blocks) >> 
zone_shift;628626629627 /*630628 * Initialize the device request queue information if the number631629 * of zones changed.632630 */633633- if (sdkp->nr_zones != q->nr_zones) {631631+ if (nr_zones != sdkp->nr_zones || nr_zones != q->nr_zones) {632632+ unsigned long *seq_zones_wlock = NULL, *seq_zones_bitmap = NULL;633633+ size_t zone_bitmap_size;634634635635- sd_zbc_cleanup(sdkp);636636-637637- q->nr_zones = sdkp->nr_zones;638638- if (sdkp->nr_zones) {639639- q->seq_zones_wlock = sd_zbc_alloc_zone_bitmap(sdkp);640640- if (!q->seq_zones_wlock) {635635+ if (nr_zones) {636636+ seq_zones_wlock = sd_zbc_alloc_zone_bitmap(nr_zones,637637+ q->node);638638+ if (!seq_zones_wlock) {641639 ret = -ENOMEM;642640 goto err;643641 }644642645645- ret = sd_zbc_setup_seq_zones_bitmap(sdkp);646646- if (ret) {647647- sd_zbc_cleanup(sdkp);643643+ seq_zones_bitmap = sd_zbc_setup_seq_zones_bitmap(sdkp,644644+ zone_shift, nr_zones);645645+ if (IS_ERR(seq_zones_bitmap)) {646646+ ret = PTR_ERR(seq_zones_bitmap);647647+ kfree(seq_zones_wlock);648648 goto err;649649 }650650 }651651+ zone_bitmap_size = BITS_TO_LONGS(nr_zones) *652652+ sizeof(unsigned long);653653+ blk_mq_freeze_queue(q);654654+ if (q->nr_zones != nr_zones) {655655+ /* READ16/WRITE16 is mandatory for ZBC disks */656656+ sdkp->device->use_16_for_rw = 1;657657+ sdkp->device->use_10_for_rw = 0;651658659659+ sdkp->zone_blocks = zone_blocks;660660+ sdkp->zone_shift = zone_shift;661661+ sdkp->nr_zones = nr_zones;662662+ q->nr_zones = nr_zones;663663+ swap(q->seq_zones_wlock, seq_zones_wlock);664664+ swap(q->seq_zones_bitmap, seq_zones_bitmap);665665+ } else if (memcmp(q->seq_zones_bitmap, seq_zones_bitmap,666666+ zone_bitmap_size) != 0) {667667+ memcpy(q->seq_zones_bitmap, seq_zones_bitmap,668668+ zone_bitmap_size);669669+ }670670+ blk_mq_unfreeze_queue(q);671671+ kfree(seq_zones_wlock);672672+ kfree(seq_zones_bitmap);652673 }653674654675 return 0;···680661681662int sd_zbc_read_zones(struct scsi_disk *sdkp, unsigned char 
*buf)682663{664664+ int64_t zone_blocks;683665 int ret;684666685667 if (!sd_is_zoned(sdkp))···717697 * Check zone size: only devices with a constant zone size (except718698 * an eventual last runt zone) that is a power of 2 are supported.719699 */720720- ret = sd_zbc_check_zone_size(sdkp);721721- if (ret)700700+ zone_blocks = sd_zbc_check_zone_size(sdkp);701701+ ret = -EFBIG;702702+ if (zone_blocks != (u32)zone_blocks)703703+ goto err;704704+ ret = zone_blocks;705705+ if (ret < 0)722706 goto err;723707724708 /* The drive satisfies the kernel restrictions: set it up */725725- ret = sd_zbc_setup(sdkp);709709+ ret = sd_zbc_setup(sdkp, zone_blocks);726710 if (ret)727711 goto err;728712
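In the hunk above, `sd_zbc_read_zones()` now receives the zone size as a signed 64-bit value and rejects it with `-EFBIG` when it does not fit in 32 bits; `zone_blocks != (u32)zone_blocks` is a compact truncation test, since the cast-and-compare only round-trips for values that fit. A standalone sketch of that check (error value illustrative):

```c
#include <stdint.h>

#define ERR_FBIG (-27)	/* stand-in for -EFBIG */

/* Returns the value if it fits in u32, ERR_FBIG if truncation would
 * lose bits. Negative inputs (earlier error codes) pass through. */
static int64_t check_fits_u32(int64_t zone_blocks)
{
	if (zone_blocks < 0)
		return zone_blocks;		/* propagate earlier error */
	if (zone_blocks != (uint32_t)zone_blocks)
		return ERR_FBIG;		/* wider than 32 bits */
	return zone_blocks;
}
```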
···4545struct rpi_power_domain_packet {4646 u32 domain;4747 u32 on;4848-} __packet;4848+};49495050/*5151 * Asks the firmware to enable or disable power on a specific power
+1-1
drivers/staging/wilc1000/host_interface.c
···13901390 }1391139113921392 if (hif_drv->usr_conn_req.ies) {13931393- conn_info.req_ies = kmemdup(conn_info.req_ies,13931393+ conn_info.req_ies = kmemdup(hif_drv->usr_conn_req.ies,13941394 hif_drv->usr_conn_req.ies_len,13951395 GFP_KERNEL);13961396 if (conn_info.req_ies)
···121121 struct mutex mutex;122122123123 /* Link layer */124124+ int mode;125125+#define DLCI_MODE_ABM 0 /* Normal Asynchronous Balanced Mode */126126+#define DLCI_MODE_ADM 1 /* Asynchronous Disconnected Mode */124127 spinlock_t lock; /* Protects the internal state */125128 struct timer_list t1; /* Retransmit timer for SABM and UA */126129 int retries;···13671364 ctrl->data = data;13681365 ctrl->len = clen;13691366 gsm->pending_cmd = ctrl;13701370- gsm->cretries = gsm->n2;13671367+13681368+ /* If DLCI0 is in ADM mode skip retries, it won't respond */13691369+ if (gsm->dlci[0]->mode == DLCI_MODE_ADM)13701370+ gsm->cretries = 1;13711371+ else13721372+ gsm->cretries = gsm->n2;13731373+13711374 mod_timer(&gsm->t2_timer, jiffies + gsm->t2 * HZ / 100);13721375 gsm_control_transmit(gsm, ctrl);13731376 spin_unlock_irqrestore(&gsm->control_lock, flags);···14811472 if (debug & 8)14821473 pr_info("DLCI %d opening in ADM mode.\n",14831474 dlci->addr);14751475+ dlci->mode = DLCI_MODE_ADM;14841476 gsm_dlci_open(dlci);14851477 } else {14861478 gsm_dlci_close(dlci);···28712861static int gsm_carrier_raised(struct tty_port *port)28722862{28732863 struct gsm_dlci *dlci = container_of(port, struct gsm_dlci, port);28642864+ struct gsm_mux *gsm = dlci->gsm;28652865+28742866 /* Not yet open so no carrier info */28752867 if (dlci->state != DLCI_OPEN)28762868 return 0;28772869 if (debug & 2)28782870 return 1;28712871+28722872+ /*28732873+ * Basic mode with control channel in ADM mode may not respond28742874+ * to CMD_MSC at all and modem_rx is empty.28752875+ */28762876+ if (gsm->encoding == 0 && gsm->dlci[0]->mode == DLCI_MODE_ADM &&28772877+ !dlci->modem_rx)28782878+ return 1;28792879+28792880 return dlci->modem_rx & TIOCM_CD;28802881}28812882
+4-2
drivers/tty/serial/earlycon.c
···169169 */170170int __init setup_earlycon(char *buf)171171{172172- const struct earlycon_id *match;172172+ const struct earlycon_id **p_match;173173174174 if (!buf || !buf[0])175175 return -EINVAL;···177177 if (early_con.flags & CON_ENABLED)178178 return -EALREADY;179179180180- for (match = __earlycon_table; match < __earlycon_table_end; match++) {180180+ for (p_match = __earlycon_table; p_match < __earlycon_table_end;181181+ p_match++) {182182+ const struct earlycon_id *match = *p_match;181183 size_t len = strlen(match->name);182184183185 if (strncmp(buf, match->name, len))
+18-1
drivers/tty/serial/imx.c
···316316 * differ from the value that was last written. As it only317317 * clears after being set, reread conditionally.318318 */319319- if (sport->ucr2 & UCR2_SRST)319319+ if (!(sport->ucr2 & UCR2_SRST))320320 sport->ucr2 = readl(sport->port.membase + offset);321321 return sport->ucr2;322322 break;···18331833 rs485conf->flags &= ~SER_RS485_ENABLED;1834183418351835 if (rs485conf->flags & SER_RS485_ENABLED) {18361836+ /* Enable receiver if low-active RTS signal is requested */18371837+ if (sport->have_rtscts && !sport->have_rtsgpio &&18381838+ !(rs485conf->flags & SER_RS485_RTS_ON_SEND))18391839+ rs485conf->flags |= SER_RS485_RX_DURING_TX;18401840+18361841 /* disable transmitter */18371842 ucr2 = imx_uart_readl(sport, UCR2);18381843 if (rs485conf->flags & SER_RS485_RTS_AFTER_SEND)···22692264 if (sport->port.rs485.flags & SER_RS485_ENABLED &&22702265 (!sport->have_rtscts && !sport->have_rtsgpio))22712266 dev_err(&pdev->dev, "no RTS control, disabling rs485\n");22672267+22682268+ /*22692269+ * If using the i.MX UART RTS/CTS control then the RTS (CTS_B)22702270+ * signal cannot be set low during transmission in case the22712271+ * receiver is off (limitation of the i.MX UART IP).22722272+ */22732273+ if (sport->port.rs485.flags & SER_RS485_ENABLED &&22742274+ sport->have_rtscts && !sport->have_rtsgpio &&22752275+ (!(sport->port.rs485.flags & SER_RS485_RTS_ON_SEND) &&22762276+ !(sport->port.rs485.flags & SER_RS485_RX_DURING_TX)))22772277+ dev_err(&pdev->dev,22782278+ "low-active RTS not possible when receiver is off, enabling receiver\n");2272227922732280 imx_uart_rs485_config(&sport->port, &sport->port.rs485);22742281
···10221022 struct qcom_geni_serial_port *port;10231023 struct uart_port *uport;10241024 struct resource *res;10251025+ int irq;1025102610261027 if (pdev->dev.of_node)10271028 line = of_alias_get_id(pdev->dev.of_node, "serial");···10621061 port->rx_fifo_depth = DEF_FIFO_DEPTH_WORDS;10631062 port->tx_fifo_width = DEF_FIFO_WIDTH_BITS;1064106310651065- uport->irq = platform_get_irq(pdev, 0);10661066- if (uport->irq < 0) {10671067- dev_err(&pdev->dev, "Failed to get IRQ %d\n", uport->irq);10681068- return uport->irq;10641064+ irq = platform_get_irq(pdev, 0);10651065+ if (irq < 0) {10661066+ dev_err(&pdev->dev, "Failed to get IRQ %d\n", irq);10671067+ return irq;10691068 }10691069+ uport->irq = irq;1070107010711071 uport->private_data = &qcom_geni_console_driver;10721072 platform_set_drvdata(pdev, port);
+1-1
drivers/tty/serial/xilinx_uartps.c
···11811181 /* only set baud if specified on command line - otherwise11821182 * assume it has been initialized by a boot loader.11831183 */11841184- if (device->baud) {11841184+ if (port->uartclk && device->baud) {11851185 u32 cd = 0, bdiv = 0;11861186 u32 mr;11871187 int div8;
···176176 return ERR_CAST(ldops);177177 }178178179179- ld = kmalloc(sizeof(struct tty_ldisc), GFP_KERNEL);180180- if (ld == NULL) {181181- put_ldops(ldops);182182- return ERR_PTR(-ENOMEM);183183- }184184-179179+ /*180180+ * There is no way to handle allocation failure of only 16 bytes.181181+ * Let's simplify error handling and save more memory.182182+ */183183+ ld = kmalloc(sizeof(struct tty_ldisc), GFP_KERNEL | __GFP_NOFAIL);185184 ld->ops = ldops;186185 ld->tty = tty;187186···526527static void tty_ldisc_restore(struct tty_struct *tty, struct tty_ldisc *old)527528{528529 /* There is an outstanding reference here so this is safe */529529- old = tty_ldisc_get(tty, old->ops->num);530530- WARN_ON(IS_ERR(old));531531- tty->ldisc = old;532532- tty_set_termios_ldisc(tty, old->ops->num);533533- if (tty_ldisc_open(tty, old) < 0) {534534- tty_ldisc_put(old);530530+ if (tty_ldisc_failto(tty, old->ops->num) < 0) {531531+ const char *name = tty_name(tty);532532+533533+ pr_warn("Falling back ldisc for %s.\n", name);535534 /* The traditional behaviour is to fall back to N_TTY, we536535 want to avoid falling back to N_NULL unless we have no537536 choice to avoid the risk of breaking anything */538537 if (tty_ldisc_failto(tty, N_TTY) < 0 &&539538 tty_ldisc_failto(tty, N_NULL) < 0)540540- panic("Couldn't open N_NULL ldisc for %s.",541541- tty_name(tty));539539+ panic("Couldn't open N_NULL ldisc for %s.", name);542540 }543541}544542···820824 * the tty structure is not completely set up when this call is made.821825 */822826823823-void tty_ldisc_init(struct tty_struct *tty)827827+int tty_ldisc_init(struct tty_struct *tty)824828{825829 struct tty_ldisc *ld = tty_ldisc_get(tty, N_TTY);826830 if (IS_ERR(ld))827827- panic("n_tty: init_tty");831831+ return PTR_ERR(ld);828832 tty->ldisc = ld;833833+ return 0;829834}830835831836/**
+23-49
drivers/uio/uio_hv_generic.c
···1919 * # echo -n "ed963694-e847-4b2a-85af-bc9cfc11d6f3" \2020 * > /sys/bus/vmbus/drivers/uio_hv_generic/bind2121 */2222-2222+#define DEBUG 12323#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt24242525#include <linux/device.h>···9494 */9595static void hv_uio_channel_cb(void *context)9696{9797- struct hv_uio_private_data *pdata = context;9898- struct hv_device *dev = pdata->device;9797+ struct vmbus_channel *chan = context;9898+ struct hv_device *hv_dev = chan->device_obj;9999+ struct hv_uio_private_data *pdata = hv_get_drvdata(hv_dev);99100100100- dev->channel->inbound.ring_buffer->interrupt_mask = 1;101101+ chan->inbound.ring_buffer->interrupt_mask = 1;101102 virt_mb();102103103104 uio_event_notify(&pdata->info);···122121 uio_event_notify(&pdata->info);123122}124123125125-/*126126- * Handle fault when looking for sub channel ring buffer127127- * Subchannel ring buffer is same as resource 0 which is main ring buffer128128- * This is derived from uio_vma_fault124124+/* Sysfs API to allow mmap of the ring buffers125125+ * The ring buffer is allocated as contiguous memory by vmbus_open129126 */130130-static int hv_uio_vma_fault(struct vm_fault *vmf)131131-{132132- struct vm_area_struct *vma = vmf->vma;133133- void *ring_buffer = vma->vm_private_data;134134- struct page *page;135135- void *addr;136136-137137- addr = ring_buffer + (vmf->pgoff << PAGE_SHIFT);138138- page = virt_to_page(addr);139139- get_page(page);140140- vmf->page = page;141141- return 0;142142-}143143-144144-static const struct vm_operations_struct hv_uio_vm_ops = {145145- .fault = hv_uio_vma_fault,146146-};147147-148148-/* Sysfs API to allow mmap of the ring buffers */149127static int hv_uio_ring_mmap(struct file *filp, struct kobject *kobj,150128 struct bin_attribute *attr,151129 struct vm_area_struct *vma)152130{153131 struct vmbus_channel *channel154132 = container_of(kobj, struct vmbus_channel, kobj);155155- unsigned long requested_pages, actual_pages;133133+ struct hv_device *dev = 
channel->primary_channel->device_obj;134134+ u16 q_idx = channel->offermsg.offer.sub_channel_index;156135157157- if (vma->vm_end < vma->vm_start)158158- return -EINVAL;136136+ dev_dbg(&dev->device, "mmap channel %u pages %#lx at %#lx\n",137137+ q_idx, vma_pages(vma), vma->vm_pgoff);159138160160- /* only allow 0 for now */161161- if (vma->vm_pgoff > 0)162162- return -EINVAL;163163-164164- requested_pages = vma_pages(vma);165165- actual_pages = 2 * HV_RING_SIZE;166166- if (requested_pages > actual_pages)167167- return -EINVAL;168168-169169- vma->vm_private_data = channel->ringbuffer_pages;170170- vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;171171- vma->vm_ops = &hv_uio_vm_ops;172172- return 0;139139+ return vm_iomap_memory(vma, virt_to_phys(channel->ringbuffer_pages),140140+ channel->ringbuffer_pagecount << PAGE_SHIFT);173141}174142175175-static struct bin_attribute ring_buffer_bin_attr __ro_after_init = {143143+static const struct bin_attribute ring_buffer_bin_attr = {176144 .attr = {177145 .name = "ring",178146 .mode = 0600,179179- /* size is set at init time */180147 },148148+ .size = 2 * HV_RING_SIZE * PAGE_SIZE,181149 .mmap = hv_uio_ring_mmap,182150};183151184184-/* Callback from VMBUS subystem when new channel created. */152152+/* Callback from VMBUS subsystem when new channel created. 
*/185153static void186154hv_uio_new_channel(struct vmbus_channel *new_sc)187155{188156 struct hv_device *hv_dev = new_sc->primary_channel->device_obj;189157 struct device *device = &hv_dev->device;190190- struct hv_uio_private_data *pdata = hv_get_drvdata(hv_dev);191158 const size_t ring_bytes = HV_RING_SIZE * PAGE_SIZE;192159 int ret;193160194161 /* Create host communication ring */195162 ret = vmbus_open(new_sc, ring_bytes, ring_bytes, NULL, 0,196196- hv_uio_channel_cb, pdata);163163+ hv_uio_channel_cb, new_sc);197164 if (ret) {198165 dev_err(device, "vmbus_open subchannel failed: %d\n", ret);199166 return;···203234204235 ret = vmbus_open(dev->channel, HV_RING_SIZE * PAGE_SIZE,205236 HV_RING_SIZE * PAGE_SIZE, NULL, 0,206206- hv_uio_channel_cb, pdata);237237+ hv_uio_channel_cb, dev->channel);207238 if (ret)208239 goto fail;209240···294325295326 vmbus_set_chn_rescind_callback(dev->channel, hv_uio_rescind);296327 vmbus_set_sc_create_callback(dev->channel, hv_uio_new_channel);328328+329329+ ret = sysfs_create_bin_file(&dev->channel->kobj, &ring_buffer_bin_attr);330330+ if (ret)331331+ dev_notice(&dev->device,332332+ "sysfs create ring bin file failed; %d\n", ret);297333298334 hv_set_drvdata(dev, pdata);299335
···6262 - Fundamental Software dongle.6363 - Google USB serial devices6464 - HP4x calculators6565+ - Libtransistor USB console6566 - a number of Motorola phones6667 - Motorola Tetra devices6768 - Novatel Wireless GPS receivers
+1
drivers/usb/serial/cp210x.c
···214214 { USB_DEVICE(0x3195, 0xF190) }, /* Link Instruments MSO-19 */215215 { USB_DEVICE(0x3195, 0xF280) }, /* Link Instruments MSO-28 */216216 { USB_DEVICE(0x3195, 0xF281) }, /* Link Instruments MSO-28 */217217+ { USB_DEVICE(0x3923, 0x7A0B) }, /* National Instruments USB Serial Console */217218 { USB_DEVICE(0x413C, 0x9500) }, /* DW700 GPS USB interface */218219 { } /* Terminating Entry */219220};
···2828 * difficult to estimate the time it takes for the system to process the command2929 * before it is actually passed to the PPM.3030 */3131-#define UCSI_TIMEOUT_MS 10003131+#define UCSI_TIMEOUT_MS 500032323333/*3434 * UCSI_SWAP_TIMEOUT_MS - Timeout for role swap requests
+5
drivers/usb/usbip/stub_main.c
···186186 if (!bid)187187 return -ENODEV;188188189189+ /* device_attach() callers should hold parent lock for USB */190190+ if (bid->udev->dev.parent)191191+ device_lock(bid->udev->dev.parent);189192 ret = device_attach(&bid->udev->dev);193193+ if (bid->udev->dev.parent)194194+ device_unlock(bid->udev->dev.parent);190195 if (ret < 0) {191196 dev_err(&bid->udev->dev, "rebind failed\n");192197 return ret;
···8787 struct vbg_session *session = filp->private_data;8888 size_t returned_size, size;8989 struct vbg_ioctl_hdr hdr;9090+ bool is_vmmdev_req;9091 int ret = 0;9192 void *buf;9293···107106 if (size > SZ_16M)108107 return -E2BIG;109108110110- /* __GFP_DMA32 because IOCTL_VMMDEV_REQUEST passes this to the host */111111- buf = kmalloc(size, GFP_KERNEL | __GFP_DMA32);109109+ /*110110+ * IOCTL_VMMDEV_REQUEST needs the buffer to be below 4G to avoid111111+ * the need for a bounce-buffer and another copy later on.112112+ */113113+ is_vmmdev_req = (req & ~IOCSIZE_MASK) == VBG_IOCTL_VMMDEV_REQUEST(0) ||114114+ req == VBG_IOCTL_VMMDEV_REQUEST_BIG;115115+116116+ if (is_vmmdev_req)117117+ buf = vbg_req_alloc(size, VBG_IOCTL_HDR_TYPE_DEFAULT);118118+ else119119+ buf = kmalloc(size, GFP_KERNEL);112120 if (!buf)113121 return -ENOMEM;114122···142132 ret = -EFAULT;143133144134out:145145- kfree(buf);135135+ if (is_vmmdev_req)136136+ vbg_req_free(buf, size);137137+ else138138+ kfree(buf);146139147140 return ret;148141}
+13-4
drivers/virt/vboxguest/vboxguest_utils.c
···6565void *vbg_req_alloc(size_t len, enum vmmdev_request_type req_type)6666{6767 struct vmmdev_request_header *req;6868+ int order = get_order(PAGE_ALIGN(len));68696969- req = kmalloc(len, GFP_KERNEL | __GFP_DMA32);7070+ req = (void *)__get_free_pages(GFP_KERNEL | GFP_DMA32, order);7071 if (!req)7172 return NULL;7273···8180 req->reserved2 = 0;82818382 return req;8383+}8484+8585+void vbg_req_free(void *req, size_t len)8686+{8787+ if (!req)8888+ return;8989+9090+ free_pages((unsigned long)req, get_order(PAGE_ALIGN(len)));8491}85928693/* Note this function returns a VBox status code, not a negative errno!! */···146137 rc = hgcm_connect->header.result;147138 }148139149149- kfree(hgcm_connect);140140+ vbg_req_free(hgcm_connect, sizeof(*hgcm_connect));150141151142 *vbox_status = rc;152143 return 0;···175166 if (rc >= 0)176167 rc = hgcm_disconnect->header.result;177168178178- kfree(hgcm_disconnect);169169+ vbg_req_free(hgcm_disconnect, sizeof(*hgcm_disconnect));179170180171 *vbox_status = rc;181172 return 0;···632623 }633624634625 if (!leak_it)635635- kfree(call);626626+ vbg_req_free(call, size);636627637628free_bounce_bufs:638629 if (bounce_bufs) {
+25-3
fs/ceph/xattr.c
···228228229229static bool ceph_vxattrcb_quota_exists(struct ceph_inode_info *ci)230230{231231- return (ci->i_max_files || ci->i_max_bytes);231231+ bool ret = false;232232+ spin_lock(&ci->i_ceph_lock);233233+ if ((ci->i_max_files || ci->i_max_bytes) &&234234+ ci->i_vino.snap == CEPH_NOSNAP &&235235+ ci->i_snap_realm &&236236+ ci->i_snap_realm->ino == ci->i_vino.ino)237237+ ret = true;238238+ spin_unlock(&ci->i_ceph_lock);239239+ return ret;232240}233241234242static size_t ceph_vxattrcb_quota(struct ceph_inode_info *ci, char *val,···10161008 char *newval = NULL;10171009 struct ceph_inode_xattr *xattr = NULL;10181010 int required_blob_size;10111011+ bool check_realm = false;10191012 bool lock_snap_rwsem = false;1020101310211014 if (ceph_snap(inode) != CEPH_NOSNAP)10221015 return -EROFS;1023101610241017 vxattr = ceph_match_vxattr(inode, name);10251025- if (vxattr && vxattr->readonly)10261026- return -EOPNOTSUPP;10181018+ if (vxattr) {10191019+ if (vxattr->readonly)10201020+ return -EOPNOTSUPP;10211021+ if (value && !strncmp(vxattr->name, "ceph.quota", 10))10221022+ check_realm = true;10231023+ }1027102410281025 /* pass any unhandled ceph.* xattrs through to the MDS */10291026 if (!strncmp(name, XATTR_CEPH_PREFIX, XATTR_CEPH_PREFIX_LEN))···11221109 err = -EBUSY;11231110 } else {11241111 err = ceph_sync_setxattr(inode, name, value, size, flags);11121112+ if (err >= 0 && check_realm) {11131113+ /* check if snaprealm was created for quota inode */11141114+ spin_lock(&ci->i_ceph_lock);11151115+ if ((ci->i_max_files || ci->i_max_bytes) &&11161116+ !(ci->i_snap_realm &&11171117+ ci->i_snap_realm->ino == ci->i_vino.ino))11181118+ err = -EOPNOTSUPP;11191119+ spin_unlock(&ci->i_ceph_lock);11201120+ }11251121 }11261122out:11271123 ceph_free_cap_flush(prealloc_cf);
+3
fs/cifs/cifssmb.c
···455455 server->sign = true;456456 }457457458458+ if (cifs_rdma_enabled(server) && server->sign)459459+ cifs_dbg(VFS, "Signing is enabled, and RDMA read/write will be disabled");460460+458461 return 0;459462}460463
+16-16
fs/cifs/connect.c
···29592959 }29602960 }2961296129622962+ if (volume_info->seal) {29632963+ if (ses->server->vals->protocol_id == 0) {29642964+ cifs_dbg(VFS,29652965+ "SMB3 or later required for encryption\n");29662966+ rc = -EOPNOTSUPP;29672967+ goto out_fail;29682968+ } else if (tcon->ses->server->capabilities &29692969+ SMB2_GLOBAL_CAP_ENCRYPTION)29702970+ tcon->seal = true;29712971+ else {29722972+ cifs_dbg(VFS, "Encryption is not supported on share\n");29732973+ rc = -EOPNOTSUPP;29742974+ goto out_fail;29752975+ }29762976+ }29772977+29622978 /*29632979 * BB Do we need to wrap session_mutex around this TCon call and Unix29642980 * SetFS as we do on SessSetup and reconnect?···30213005 goto out_fail;30223006 }30233007 tcon->use_resilient = true;30243024- }30253025-30263026- if (volume_info->seal) {30273027- if (ses->server->vals->protocol_id == 0) {30283028- cifs_dbg(VFS,30293029- "SMB3 or later required for encryption\n");30303030- rc = -EOPNOTSUPP;30313031- goto out_fail;30323032- } else if (tcon->ses->server->capabilities &30333033- SMB2_GLOBAL_CAP_ENCRYPTION)30343034- tcon->seal = true;30353035- else {30363036- cifs_dbg(VFS, "Encryption is not supported on share\n");30373037- rc = -EOPNOTSUPP;30383038- goto out_fail;30393039- }30403008 }3041300930423010 /*
···383383build_encrypt_ctxt(struct smb2_encryption_neg_context *pneg_ctxt)384384{385385 pneg_ctxt->ContextType = SMB2_ENCRYPTION_CAPABILITIES;386386- pneg_ctxt->DataLength = cpu_to_le16(6);387387- pneg_ctxt->CipherCount = cpu_to_le16(2);388388- pneg_ctxt->Ciphers[0] = SMB2_ENCRYPTION_AES128_GCM;389389- pneg_ctxt->Ciphers[1] = SMB2_ENCRYPTION_AES128_CCM;386386+ pneg_ctxt->DataLength = cpu_to_le16(4); /* Cipher Count + le16 cipher */387387+ pneg_ctxt->CipherCount = cpu_to_le16(1);388388+/* pneg_ctxt->Ciphers[0] = SMB2_ENCRYPTION_AES128_GCM;*/ /* not supported yet */389389+ pneg_ctxt->Ciphers[0] = SMB2_ENCRYPTION_AES128_CCM;390390}391391392392static void···444444 return -EINVAL;445445 }446446 server->cipher_type = ctxt->Ciphers[0];447447+ server->capabilities |= SMB2_GLOBAL_CAP_ENCRYPTION;447448 return 0;448449}449450···25912590 * If we want to do a RDMA write, fill in and append25922591 * smbd_buffer_descriptor_v1 to the end of read request25932592 */25942594- if (server->rdma && rdata &&25932593+ if (server->rdma && rdata && !server->sign &&25952594 rdata->bytes >= server->smbd_conn->rdma_readwrite_threshold) {2596259525972596 struct smbd_buffer_descriptor_v1 *v1;···29692968 * If we want to do a server RDMA read, fill in and append29702969 * smbd_buffer_descriptor_v1 to the end of write request29712970 */29722972- if (server->rdma && wdata->bytes >=29712971+ if (server->rdma && !server->sign && wdata->bytes >=29732972 server->smbd_conn->rdma_readwrite_threshold) {2974297329752974 struct smbd_buffer_descriptor_v1 *v1;
+1-1
fs/cifs/smb2pdu.h
···297297 __le16 DataLength;298298 __le32 Reserved;299299 __le16 CipherCount; /* AES-128-GCM and AES-128-CCM */300300- __le16 Ciphers[2]; /* Ciphers[0] since only one used now */300300+ __le16 Ciphers[1]; /* Ciphers[0] since only one used now */301301} __packed;302302303303struct smb2_negotiate_rsp {
+12-24
fs/cifs/smbdirect.c
···20862086 int start, i, j;20872087 int max_iov_size =20882088 info->max_send_size - sizeof(struct smbd_data_transfer);20892089- struct kvec iov[SMBDIRECT_MAX_SGE];20892089+ struct kvec *iov;20902090 int rc;2091209120922092 info->smbd_send_pending++;···20962096 }2097209720982098 /*20992099- * This usually means a configuration error21002100- * We use RDMA read/write for packet size > rdma_readwrite_threshold21012101- * as long as it's properly configured we should never get into this21022102- * situation21032103- */21042104- if (rqst->rq_nvec + rqst->rq_npages > SMBDIRECT_MAX_SGE) {21052105- log_write(ERR, "maximum send segment %x exceeding %x\n",21062106- rqst->rq_nvec + rqst->rq_npages, SMBDIRECT_MAX_SGE);21072107- rc = -EINVAL;21082108- goto done;21092109- }21102110-21112111- /*21122112- * Remove the RFC1002 length defined in MS-SMB2 section 2.121132113- * It is used only for TCP transport20992099+ * Skip the RFC1002 length defined in MS-SMB2 section 2.121002100+ * It is used only for TCP transport in the iov[0]21142101 * In future we may want to add a transport layer under protocol21152102 * layer so this will only be issued to TCP transport21162103 */21172117- iov[0].iov_base = (char *)rqst->rq_iov[0].iov_base + 4;21182118- iov[0].iov_len = rqst->rq_iov[0].iov_len - 4;21192119- buflen += iov[0].iov_len;21042104+21052105+ if (rqst->rq_iov[0].iov_len != 4) {21062106+ log_write(ERR, "expected the pdu length in 1st iov, but got %zu\n", rqst->rq_iov[0].iov_len);21072107+ return -EINVAL;21082108+ }21092109+ iov = &rqst->rq_iov[1];2120211021212111 /* total up iov array first */21222122- for (i = 1; i < rqst->rq_nvec; i++) {21232123- iov[i].iov_base = rqst->rq_iov[i].iov_base;21242124- iov[i].iov_len = rqst->rq_iov[i].iov_len;21122112+ for (i = 0; i < rqst->rq_nvec-1; i++) {21252113 buflen += iov[i].iov_len;21262114 }21272115···21862198 goto done;21872199 }21882200 i++;21892189- if (i == rqst->rq_nvec)22012201+ if (i == rqst->rq_nvec-1)21902202 break;21912203 
}21922204 start = i;21932205 buflen = 0;21942206 } else {21952207 i++;21962196- if (i == rqst->rq_nvec) {22082208+ if (i == rqst->rq_nvec-1) {21972209 /* send out all remaining vecs */21982210 remaining_data_length -= buflen;21992211 log_write(INFO,
···321321 struct ext4_sb_info *sbi = EXT4_SB(sb);322322 ext4_grpblk_t offset;323323 ext4_grpblk_t next_zero_bit;324324+ ext4_grpblk_t max_bit = EXT4_CLUSTERS_PER_GROUP(sb);324325 ext4_fsblk_t blk;325326 ext4_fsblk_t group_first_block;326327···339338 /* check whether block bitmap block number is set */340339 blk = ext4_block_bitmap(sb, desc);341340 offset = blk - group_first_block;342342- if (offset < 0 || EXT4_B2C(sbi, offset) >= sb->s_blocksize ||341341+ if (offset < 0 || EXT4_B2C(sbi, offset) >= max_bit ||343342 !ext4_test_bit(EXT4_B2C(sbi, offset), bh->b_data))344343 /* bad block bitmap */345344 return blk;···347346 /* check whether the inode bitmap block number is set */348347 blk = ext4_inode_bitmap(sb, desc);349348 offset = blk - group_first_block;350350- if (offset < 0 || EXT4_B2C(sbi, offset) >= sb->s_blocksize ||349349+ if (offset < 0 || EXT4_B2C(sbi, offset) >= max_bit ||351350 !ext4_test_bit(EXT4_B2C(sbi, offset), bh->b_data))352351 /* bad block bitmap */353352 return blk;···355354 /* check whether the inode table block number is set */356355 blk = ext4_inode_table(sb, desc);357356 offset = blk - group_first_block;358358- if (offset < 0 || EXT4_B2C(sbi, offset) >= sb->s_blocksize ||359359- EXT4_B2C(sbi, offset + sbi->s_itb_per_group) >= sb->s_blocksize)357357+ if (offset < 0 || EXT4_B2C(sbi, offset) >= max_bit ||358358+ EXT4_B2C(sbi, offset + sbi->s_itb_per_group) >= max_bit)360359 return blk;361360 next_zero_bit = ext4_find_next_zero_bit(bh->b_data,362361 EXT4_B2C(sbi, offset + sbi->s_itb_per_group),
+11-5
fs/ext4/extents.c
···53295329 stop = le32_to_cpu(extent->ee_block);5330533053315331 /*53325332- * In case of left shift, Don't start shifting extents until we make53335333- * sure the hole is big enough to accommodate the shift.53325332+ * For left shifts, make sure the hole on the left is big enough to53335333+ * accommodate the shift. For right shifts, make sure the last extent53345334+ * won't be shifted beyond EXT_MAX_BLOCKS.53345335 */53355336 if (SHIFT == SHIFT_LEFT) {53365337 path = ext4_find_extent(inode, start - 1, &path,···5351535053525351 if ((start == ex_start && shift > ex_start) ||53535352 (shift > start - ex_end)) {53545354- ext4_ext_drop_refs(path);53555355- kfree(path);53565356- return -EINVAL;53535353+ ret = -EINVAL;53545354+ goto out;53555355+ }53565356+ } else {53575357+ if (shift > EXT_MAX_BLOCKS -53585358+ (stop + ext4_ext_get_actual_len(extent))) {53595359+ ret = -EINVAL;53605360+ goto out;53575361 }53585362 }53595363
+1
fs/ext4/super.c
···58865886MODULE_AUTHOR("Remy Card, Stephen Tweedie, Andrew Morton, Andreas Dilger, Theodore Ts'o and others");58875887MODULE_DESCRIPTION("Fourth Extended Filesystem");58885888MODULE_LICENSE("GPL");58895889+MODULE_SOFTDEP("pre: crc32c");58895890module_init(ext4_init_fs)58905891module_exit(ext4_exit_fs)
+1
fs/jbd2/transaction.c
···532532 */533533 ret = start_this_handle(journal, handle, GFP_NOFS);534534 if (ret < 0) {535535+ handle->h_journal = journal;535536 jbd2_journal_free_reserved(handle);536537 return ret;537538 }
···3737 * Our PSCI implementation stays the same across versions from3838 * v0.2 onward, only adding the few mandatory functions (such3939 * as FEATURES with 1.0) that are required by newer4040- * revisions. It is thus safe to return the latest.4040+ * revisions. It is thus safe to return the latest, unless4141+ * userspace has instructed us otherwise.4142 */4242- if (test_bit(KVM_ARM_VCPU_PSCI_0_2, vcpu->arch.features))4343+ if (test_bit(KVM_ARM_VCPU_PSCI_0_2, vcpu->arch.features)) {4444+ if (vcpu->kvm->arch.psci_version)4545+ return vcpu->kvm->arch.psci_version;4646+4347 return KVM_ARM_PSCI_LATEST;4848+ }44494550 return KVM_ARM_PSCI_0_1;4651}475248534954int kvm_hvc_call_handler(struct kvm_vcpu *vcpu);5555+5656+struct kvm_one_reg;5757+5858+int kvm_arm_get_fw_num_regs(struct kvm_vcpu *vcpu);5959+int kvm_arm_copy_fw_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices);6060+int kvm_arm_get_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);6161+int kvm_arm_set_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);50625163#endif /* __KVM_ARM_PSCI_H__ */
+3
include/linux/blk-mq.h
···99struct blk_mq_tags;1010struct blk_flush_queue;11111212+/**1313+ * struct blk_mq_hw_ctx - State for a hardware queue facing the hardware block device1414+ */1215struct blk_mq_hw_ctx {1316 struct {1417 spinlock_t lock;
+6
include/linux/blkdev.h
···605605 * initialized by the low level device driver (e.g. scsi/sd.c).606606 * Stacking drivers (device mappers) may or may not initialize607607 * these fields.608608+ *609609+ * Reads of this information must be protected with blk_queue_enter() /610610+ * blk_queue_exit(). Modifying this information is only allowed while611611+ * no requests are being processed. See also blk_mq_freeze_queue() and612612+ * blk_mq_unfreeze_queue().608613 */609614 unsigned int nr_zones;610615 unsigned long *seq_zones_bitmap;···742737#define blk_queue_quiesced(q) test_bit(QUEUE_FLAG_QUIESCED, &(q)->queue_flags)743738#define blk_queue_preempt_only(q) \744739 test_bit(QUEUE_FLAG_PREEMPT_ONLY, &(q)->queue_flags)740740+#define blk_queue_fua(q) test_bit(QUEUE_FLAG_FUA, &(q)->queue_flags)745741746742extern int blk_set_preempt_only(struct request_queue *q);747743extern void blk_clear_preempt_only(struct request_queue *q);
···256256 * automatically.257257 * @pm: Power management operations of the device which matched258258 * this driver.259259- * @coredump: Called through sysfs to initiate a device coredump.259259+ * @coredump: Called when sysfs entry is written to. The device driver260260+ * is expected to call the dev_coredump API resulting in a261261+ * uevent.260262 * @p: Driver core's private data, no one other than the driver261263 * core can touch this.262264 *···290288 const struct attribute_group **groups;291289292290 const struct dev_pm_ops *pm;293293- int (*coredump) (struct device *dev);291291+ void (*coredump) (struct device *dev);294292295293 struct driver_private *p;296294};
+2
include/linux/ethtool.h
···310310 * fields should be ignored (use %__ETHTOOL_LINK_MODE_MASK_NBITS311311 * instead of the latter), any change to them will be overwritten312312 * by kernel. Returns a negative error code or zero.313313+ * @get_fecparam: Get the network device Forward Error Correction parameters.314314+ * @set_fecparam: Set the network device Forward Error Correction parameters.313315 *314316 * All operations are optional (i.e. the function pointer may be set315317 * to %NULL) and callers must take this into account. Callers must
+1-3
include/linux/fsnotify_backend.h
···217217 union { /* Object pointer [lock] */218218 struct inode *inode;219219 struct vfsmount *mnt;220220- };221221- union {222222- struct hlist_head list;223220 /* Used listing heads to free after srcu period expires */224221 struct fsnotify_mark_connector *destroy_next;225222 };223223+ struct hlist_head list;226224};227225228226/*
···8585 unsigned int write_suspended:1;8686 unsigned int erase_suspended:1;8787 unsigned long in_progress_block_addr;8888+ unsigned long in_progress_block_mask;88898990 struct mutex mutex;9091 wait_queue_head_t wq; /* Wait on here when we're waiting for the chip
···5050 * losing bits). This also has the property (wanted by the dcache)5151 * that the msbits make a good hash table index.5252 */5353-static inline unsigned long end_name_hash(unsigned long hash)5353+static inline unsigned int end_name_hash(unsigned long hash)5454{5555- return __hash_32((unsigned int)hash);5555+ return hash_long(hash, 32);5656}57575858/*
···5252 * @offs_real: Offset clock monotonic -> clock realtime5353 * @offs_boot: Offset clock monotonic -> clock boottime5454 * @offs_tai: Offset clock monotonic -> clock tai5555- * @time_suspended: Accumulated suspend time5655 * @tai_offset: The current UTC to TAI offset in seconds5756 * @clock_was_set_seq: The sequence number of clock was set events5857 * @cs_was_changed_seq: The sequence number of clocksource change events···9495 ktime_t offs_real;9596 ktime_t offs_boot;9697 ktime_t offs_tai;9797- ktime_t time_suspended;9898 s32 tai_offset;9999 unsigned int clock_was_set_seq;100100 u8 cs_was_changed_seq;
+25-12
include/linux/timekeeping.h
···3333extern time64_t ktime_get_seconds(void);3434extern time64_t __ktime_get_real_seconds(void);3535extern time64_t ktime_get_real_seconds(void);3636-extern void ktime_get_active_ts64(struct timespec64 *ts);37363837extern int __getnstimeofday64(struct timespec64 *tv);3938extern void getnstimeofday64(struct timespec64 *tv);4039extern void getboottime64(struct timespec64 *ts);41404242-#define ktime_get_real_ts64(ts) getnstimeofday64(ts)4343-4444-/* Clock BOOTTIME compatibility wrappers */4545-static inline void get_monotonic_boottime64(struct timespec64 *ts)4646-{4747- ktime_get_ts64(ts);4848-}4141+#define ktime_get_real_ts64(ts) getnstimeofday64(ts)49425043/*5144 * ktime_t based interfaces5245 */4646+5347enum tk_offsets {5448 TK_OFFS_REAL,4949+ TK_OFFS_BOOT,5550 TK_OFFS_TAI,5651 TK_OFFS_MAX,5752};···5762extern ktime_t ktime_get_raw(void);5863extern u32 ktime_get_resolution_ns(void);59646060-/* Clock BOOTTIME compatibility wrappers */6161-static inline ktime_t ktime_get_boottime(void) { return ktime_get(); }6262-static inline u64 ktime_get_boot_ns(void) { return ktime_get(); }6363-6465/**6566 * ktime_get_real - get the real (wall-) time in ktime_t format6667 */6768static inline ktime_t ktime_get_real(void)6869{6970 return ktime_get_with_offset(TK_OFFS_REAL);7171+}7272+7373+/**7474+ * ktime_get_boottime - Returns monotonic time since boot in ktime_t format7575+ *7676+ * This is similar to CLOCK_MONTONIC/ktime_get, but also includes the7777+ * time spent in suspend.7878+ */7979+static inline ktime_t ktime_get_boottime(void)8080+{8181+ return ktime_get_with_offset(TK_OFFS_BOOT);7082}71837284/**···102100 return ktime_to_ns(ktime_get_real());103101}104102103103+static inline u64 ktime_get_boot_ns(void)104104+{105105+ return ktime_to_ns(ktime_get_boottime());106106+}107107+105108static inline u64 ktime_get_tai_ns(void)106109{107110 return ktime_to_ns(ktime_get_clocktai());···119112120113extern u64 ktime_get_mono_fast_ns(void);121114extern u64 
ktime_get_raw_fast_ns(void);115115+extern u64 ktime_get_boot_fast_ns(void);122116extern u64 ktime_get_real_fast_ns(void);123117124118/*125119 * timespec64 interfaces utilizing the ktime based ones126120 */121121+static inline void get_monotonic_boottime64(struct timespec64 *ts)122122+{123123+ *ts = ktime_to_timespec64(ktime_get_boottime());124124+}125125+127126static inline void timekeeping_clocktai64(struct timespec64 *ts)128127{129128 *ts = ktime_to_timespec64(ktime_get_clocktai());
+1-1
include/linux/tty.h
···701701extern int tty_set_ldisc(struct tty_struct *tty, int disc);702702extern int tty_ldisc_setup(struct tty_struct *tty, struct tty_struct *o_tty);703703extern void tty_ldisc_release(struct tty_struct *tty);704704-extern void tty_ldisc_init(struct tty_struct *tty);704704+extern int __must_check tty_ldisc_init(struct tty_struct *tty);705705extern void tty_ldisc_deinit(struct tty_struct *tty);706706extern int tty_ldisc_receive_buf(struct tty_ldisc *ld, const unsigned char *p,707707 char *f, int count);
-23
include/linux/vbox_utils.h
···2424#define vbg_debug pr_debug2525#endif26262727-/**2828- * Allocate memory for generic request and initialize the request header.2929- *3030- * Return: the allocated memory3131- * @len: Size of memory block required for the request.3232- * @req_type: The generic request type.3333- */3434-void *vbg_req_alloc(size_t len, enum vmmdev_request_type req_type);3535-3636-/**3737- * Perform a generic request.3838- *3939- * Return: VBox status code4040- * @gdev: The Guest extension device.4141- * @req: Pointer to the request structure.4242- */4343-int vbg_req_perform(struct vbg_dev *gdev, void *req);4444-4527int vbg_hgcm_connect(struct vbg_dev *gdev,4628 struct vmmdev_hgcm_service_location *loc,4729 u32 *client_id, int *vbox_status);···3351int vbg_hgcm_call(struct vbg_dev *gdev, u32 client_id, u32 function,3452 u32 timeout_ms, struct vmmdev_hgcm_function_parameter *parms,3553 u32 parm_count, int *vbox_status);3636-3737-int vbg_hgcm_call32(3838- struct vbg_dev *gdev, u32 client_id, u32 function, u32 timeout_ms,3939- struct vmmdev_hgcm_function_parameter32 *parm32, u32 parm_count,4040- int *vbox_status);41544255/**4356 * Convert a VirtualBox status code to a standard Linux kernel return value.
+3
include/linux/virtio.h
···157157int virtio_device_restore(struct virtio_device *dev);158158#endif159159160160+#define virtio_device_for_each_vq(vdev, vq) \161161+ list_for_each_entry(vq, &vdev->vqs, list)162162+160163/**161164 * virtio_driver - operations for a virtio I/O driver162165 * @driver: underlying device driver (populate name and owner).
···2525 TP_printk("work struct %p", __entry->work)2626);27272828+struct pool_workqueue;2929+2830/**2931 * workqueue_queue_work - called when a work gets queued3032 * @req_cpu: the requested cpu
···5252static ktime_t last_jiffies_update;53535454/*5555- * Called after resume. Make sure that jiffies are not fast forwarded due to5656- * clock monotonic being forwarded by the suspended time.5757- */5858-void tick_sched_forward_next_period(void)5959-{6060- last_jiffies_update = tick_next_period;6161-}6262-6363-/*6455 * Must be called with interrupts disabled !6556 */6657static void tick_do_update_jiffies64(ktime_t now)···795804 return;796805 }797806798798- hrtimer_set_expires(&ts->sched_timer, tick);799799-800800- if (ts->nohz_mode == NOHZ_MODE_HIGHRES)801801- hrtimer_start_expires(&ts->sched_timer, HRTIMER_MODE_ABS_PINNED);802802- else807807+ if (ts->nohz_mode == NOHZ_MODE_HIGHRES) {808808+ hrtimer_start(&ts->sched_timer, tick, HRTIMER_MODE_ABS_PINNED);809809+ } else {810810+ hrtimer_set_expires(&ts->sched_timer, tick);803811 tick_program_event(tick, 1);812812+ }804813}805814806815static void tick_nohz_retain_tick(struct tick_sched *ts)
+37-41
kernel/time/timekeeping.c
···138138139139static inline void tk_update_sleep_time(struct timekeeper *tk, ktime_t delta)140140{141141- /* Update both bases so mono and raw stay coupled. */142142- tk->tkr_mono.base += delta;143143- tk->tkr_raw.base += delta;144144-145145- /* Accumulate time spent in suspend */146146- tk->time_suspended += delta;141141+ tk->offs_boot = ktime_add(tk->offs_boot, delta);147142}148143149144/*···468473}469474EXPORT_SYMBOL_GPL(ktime_get_raw_fast_ns);470475476476+/**477477+ * ktime_get_boot_fast_ns - NMI safe and fast access to boot clock.478478+ *479479+ * To keep it NMI safe since we're accessing from tracing, we're not using a480480+ * separate timekeeper with updates to monotonic clock and boot offset481481+ * protected with seqlocks. This has the following minor side effects:482482+ *483483+ * (1) Its possible that a timestamp be taken after the boot offset is updated484484+ * but before the timekeeper is updated. If this happens, the new boot offset485485+ * is added to the old timekeeping making the clock appear to update slightly486486+ * earlier:487487+ * CPU 0 CPU 1488488+ * timekeeping_inject_sleeptime64()489489+ * __timekeeping_inject_sleeptime(tk, delta);490490+ * timestamp();491491+ * timekeeping_update(tk, TK_CLEAR_NTP...);492492+ *493493+ * (2) On 32-bit systems, the 64-bit boot offset (tk->offs_boot) may be494494+ * partially updated. Since the tk->offs_boot update is a rare event, this495495+ * should be a rare occurrence which postprocessing should be able to handle.496496+ */497497+u64 notrace ktime_get_boot_fast_ns(void)498498+{499499+ struct timekeeper *tk = &tk_core.timekeeper;500500+501501+ return (ktime_get_mono_fast_ns() + ktime_to_ns(tk->offs_boot));502502+}503503+EXPORT_SYMBOL_GPL(ktime_get_boot_fast_ns);504504+505505+471506/*472507 * See comment for __ktime_get_fast_ns() vs. timestamp ordering473508 */···789764790765static ktime_t *offsets[TK_OFFS_MAX] = {791766 [TK_OFFS_REAL] = &tk_core.timekeeper.offs_real,767767+ [TK_OFFS_BOOT] = &tk_core.timekeeper.offs_boot,792768 [TK_OFFS_TAI] = &tk_core.timekeeper.offs_tai,793769};794770···885859 timespec64_add_ns(ts, nsec + tomono.tv_nsec);886860}887861EXPORT_SYMBOL_GPL(ktime_get_ts64);888888-889889-/**890890- * ktime_get_active_ts64 - Get the active non-suspended monotonic clock891891- * @ts: pointer to timespec variable892892- *893893- * The function calculates the monotonic clock from the realtime clock and894894- * the wall_to_monotonic offset, subtracts the accumulated suspend time and895895- * stores the result in normalized timespec64 format in the variable896896- * pointed to by @ts.897897- */898898-void ktime_get_active_ts64(struct timespec64 *ts)899899-{900900- struct timekeeper *tk = &tk_core.timekeeper;901901- struct timespec64 tomono, tsusp;902902- u64 nsec, nssusp;903903- unsigned int seq;904904-905905- WARN_ON(timekeeping_suspended);906906-907907- do {908908- seq = read_seqcount_begin(&tk_core.seq);909909- ts->tv_sec = tk->xtime_sec;910910- nsec = timekeeping_get_ns(&tk->tkr_mono);911911- tomono = tk->wall_to_monotonic;912912- nssusp = tk->time_suspended;913913- } while (read_seqcount_retry(&tk_core.seq, seq));914914-915915- ts->tv_sec += tomono.tv_sec;916916- ts->tv_nsec = 0;917917- timespec64_add_ns(ts, nsec + tomono.tv_nsec);918918- tsusp = ns_to_timespec64(nssusp);919919- *ts = timespec64_sub(*ts, tsusp);920920-}921862922863/**923864 * ktime_get_seconds - Get the seconds portion of CLOCK_MONOTONIC···15861593 return;15871594 }15881595 tk_xtime_add(tk, delta);15961596+ tk_set_wall_to_mono(tk, timespec64_sub(tk->wall_to_monotonic, *delta));15891597 tk_update_sleep_time(tk, timespec64_to_ktime(*delta));15901598 tk_debug_account_sleep_time(delta);15911599}···21192125void getboottime64(struct timespec64 *ts)21202126{21212127 struct timekeeper *tk = &tk_core.timekeeper;21222122- ktime_t t = ktime_sub(tk->offs_real, tk->time_suspended);21282128+ ktime_t t = ktime_sub(tk->offs_real, tk->offs_boot);2123212921242130 *ts = ktime_to_timespec64(t);21252131}···21822188 * ktime_get_update_offsets_now - hrtimer helper21832189 * @cwsseq: pointer to check and store the clock was set sequence number21842190 * @offs_real: pointer to storage for monotonic -> realtime offset21912191+ * @offs_boot: pointer to storage for monotonic -> boottime offset21852192 * @offs_tai: pointer to storage for monotonic -> clock tai offset21862193 *21872194 * Returns current monotonic time and updates the offsets if the···21922197 * Called from hrtimer_interrupt() or retrigger_next_event()21932198 */21942199ktime_t ktime_get_update_offsets_now(unsigned int *cwsseq, ktime_t *offs_real,21952195- ktime_t *offs_tai)22002200+ ktime_t *offs_boot, ktime_t *offs_tai)21962201{21972202 struct timekeeper *tk = &tk_core.timekeeper;21982203 unsigned int seq;···22092214 if (*cwsseq != tk->clock_was_set_seq) {22102215 *cwsseq = tk->clock_was_set_seq;22112216 *offs_real = tk->offs_real;22172217+ *offs_boot = tk->offs_boot;22122218 *offs_tai = tk->offs_tai;22132219 }22142220
+1
kernel/time/timekeeping.h
···66 */77extern ktime_t ktime_get_update_offsets_now(unsigned int *cwsseq,88 ktime_t *offs_real,99+ ktime_t *offs_boot,910 ktime_t *offs_tai);10111112extern int timekeeping_valid_for_hres(void);
+21-4
kernel/trace/bpf_trace.c
···977977{978978 struct perf_event_query_bpf __user *uquery = info;979979 struct perf_event_query_bpf query = {};980980+ u32 *ids, prog_cnt, ids_len;980981 int ret;981982982983 if (!capable(CAP_SYS_ADMIN))···986985 return -EINVAL;987986 if (copy_from_user(&query, uquery, sizeof(query)))988987 return -EFAULT;989989- if (query.ids_len > BPF_TRACE_MAX_PROGS)988988+989989+ ids_len = query.ids_len;990990+ if (ids_len > BPF_TRACE_MAX_PROGS)990991 return -E2BIG;992992+ ids = kcalloc(ids_len, sizeof(u32), GFP_USER | __GFP_NOWARN);993993+ if (!ids)994994+ return -ENOMEM;995995+ /*996996+ * The above kcalloc returns ZERO_SIZE_PTR when ids_len = 0, which997997+ * is required when user only wants to check for uquery->prog_cnt.998998+ * There is no need to check for it since the case is handled999999+ * gracefully in bpf_prog_array_copy_info.10001000+ */99110019921002 mutex_lock(&bpf_event_mutex);9931003 ret = bpf_prog_array_copy_info(event->tp_event->prog_array,994994- uquery->ids,995995- query.ids_len,996996- &uquery->prog_cnt);10041004+ ids,10051005+ ids_len,10061006+ &prog_cnt);9971007 mutex_unlock(&bpf_event_mutex);998100810091009+ if (copy_to_user(&uquery->prog_cnt, &prog_cnt, sizeof(prog_cnt)) ||10101010+ copy_to_user(uquery->ids, ids, ids_len * sizeof(u32)))10111011+ ret = -EFAULT;10121012+10131013+ kfree(ids);9991014 return ret;10001015}10011016
···233233234234 /* be noisy on error issues */235235 if (error == -EEXIST)236236- WARN(1,237237- "%s failed for %s with -EEXIST, don't try to register things with the same name in the same directory.\n",238238- __func__, kobject_name(kobj));236236+ pr_err("%s failed for %s with -EEXIST, don't try to register things with the same name in the same directory.\n",237237+ __func__, kobject_name(kobj));239238 else240240- WARN(1, "%s failed for %s (error: %d parent: %s)\n",241241- __func__, kobject_name(kobj), error,242242- parent ? kobject_name(parent) : "'none'");239239+ pr_err("%s failed for %s (error: %d parent: %s)\n",240240+ __func__, kobject_name(kobj), error,241241+ parent ? kobject_name(parent) : "'none'");243242 } else244243 kobj->state_in_sysfs = 1;245244
···38683868 int length = (th->doff << 2) - sizeof(*th);38693869 const u8 *ptr = (const u8 *)(th + 1);3870387038713871- /* If the TCP option is too short, we can short cut */38723872- if (length < TCPOLEN_MD5SIG)38733873- return NULL;38743874-38753875- while (length > 0) {38713871+ /* If not enough data remaining, we can short cut */38723872+ while (length >= TCPOLEN_MD5SIG) {38763873 int opcode = *ptr++;38773874 int opsize;38783875
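The tcp.c hunk above tightens the MD5 option scan so the loop runs only while a full MD5 signature option could still fit in the remaining bytes. A minimal userspace sketch of the same bounds discipline is below; the constants and layout are simplified stand-ins, not the kernel's `TCPOLEN_*` definitions:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical option codes/sizes for illustration only. */
#define OPT_EOL    0
#define OPT_NOP    1
#define OPT_MD5SIG 19
#define LEN_MD5SIG 18

/* Scan a TCP-style option list; return a pointer to the MD5 digest,
 * or NULL. The loop guard guarantees every read stays in bounds:
 * once fewer than LEN_MD5SIG bytes remain, no full match is possible. */
static const unsigned char *find_md5_opt(const unsigned char *ptr, int length)
{
	while (length >= LEN_MD5SIG) {
		int opcode = *ptr++;
		int opsize;

		switch (opcode) {
		case OPT_EOL:
			return NULL;
		case OPT_NOP:
			length--;		/* one-byte option */
			continue;
		default:
			opsize = *ptr++;
			if (opsize < 2 || opsize > length)
				return NULL;	/* malformed or truncated */
			if (opcode == OPT_MD5SIG && opsize == LEN_MD5SIG)
				return ptr;	/* digest starts here */
			ptr += opsize - 2;	/* skip value bytes */
			length -= opsize;
		}
	}
	return NULL;
}
```

As in the patch, a buffer shorter than one MD5 option is rejected by the loop condition itself, so no separate pre-check is needed.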
+28-27
net/ipv6/netfilter/Kconfig
···4848 fields such as the source, destination, flowlabel, hop-limit and4949 the packet mark.50505151+if NF_NAT_IPV65252+5353+config NFT_CHAIN_NAT_IPV65454+ tristate "IPv6 nf_tables nat chain support"5555+ help5656+ This option enables the "nat" chain for IPv6 in nf_tables. This5757+ chain type is used to perform Network Address Translation (NAT)5858+ packet transformations such as the source, destination address and5959+ source and destination ports.6060+6161+config NFT_MASQ_IPV66262+ tristate "IPv6 masquerade support for nf_tables"6363+ depends on NFT_MASQ6464+ select NF_NAT_MASQUERADE_IPV66565+ help6666+ This is the expression that provides IPv4 masquerading support for6767+ nf_tables.6868+6969+config NFT_REDIR_IPV67070+ tristate "IPv6 redirect support for nf_tables"7171+ depends on NFT_REDIR7272+ select NF_NAT_REDIRECT7373+ help7474+ This is the expression that provides IPv4 redirect support for7575+ nf_tables.7676+7777+endif # NF_NAT_IPV67878+5179config NFT_REJECT_IPV65280 select NF_REJECT_IPV65381 default NFT_REJECT···135107136108if NF_NAT_IPV6137109138138-config NFT_CHAIN_NAT_IPV6139139- depends on NF_TABLES_IPV6140140- tristate "IPv6 nf_tables nat chain support"141141- help142142- This option enables the "nat" chain for IPv6 in nf_tables. This143143- chain type is used to perform Network Address Translation (NAT)144144- packet transformations such as the source, destination address and145145- source and destination ports.146146-147110config NF_NAT_MASQUERADE_IPV6148111 tristate "IPv6 masquerade support"149112 help150113 This is the kernel functionality to provide NAT in the masquerade151114 flavour (automatic source address selection) for IPv6.152152-153153-config NFT_MASQ_IPV6154154- tristate "IPv6 masquerade support for nf_tables"155155- depends on NF_TABLES_IPV6156156- depends on NFT_MASQ157157- select NF_NAT_MASQUERADE_IPV6158158- help159159- This is the expression that provides IPv4 masquerading support for160160- nf_tables.161161-162162-config NFT_REDIR_IPV6163163- tristate "IPv6 redirect support for nf_tables"164164- depends on NF_TABLES_IPV6165165- depends on NFT_REDIR166166- select NF_NAT_REDIRECT167167- help168168- This is the expression that provides IPv4 redirect support for169169- nf_tables.170115171116endif # NF_NAT_IPV6172117
···106106 return;107107108108 /* Drop reference taken by last invocation of l2tp_dfs_next_tunnel() */109109- if (pd->tunnel)109109+ if (pd->tunnel) {110110 l2tp_tunnel_dec_refcount(pd->tunnel);111111+ pd->tunnel = NULL;112112+ pd->session = NULL;113113+ }111114}112115113116static void l2tp_dfs_seq_tunnel_show(struct seq_file *m, void *v)
+11-1
net/l2tp/l2tp_ppp.c
···619619 lock_sock(sk);620620621621 error = -EINVAL;622622+623623+ if (sockaddr_len != sizeof(struct sockaddr_pppol2tp) &&624624+ sockaddr_len != sizeof(struct sockaddr_pppol2tpv3) &&625625+ sockaddr_len != sizeof(struct sockaddr_pppol2tpin6) &&626626+ sockaddr_len != sizeof(struct sockaddr_pppol2tpv3in6))627627+ goto end;628628+622629 if (sp->sa_protocol != PX_PROTO_OL2TP)623630 goto end;624631···16251618 return;1626161916271620 /* Drop reference taken by last invocation of pppol2tp_next_tunnel() */16281628- if (pd->tunnel)16211621+ if (pd->tunnel) {16291622 l2tp_tunnel_dec_refcount(pd->tunnel);16231623+ pd->tunnel = NULL;16241624+ pd->session = NULL;16251625+ }16301626}1631162716321628static void pppol2tp_seq_tunnel_show(struct seq_file *m, void *v)
+12-9
net/llc/af_llc.c
···189189{190190 struct sock *sk = sock->sk;191191 struct llc_sock *llc;192192- struct llc_sap *sap;193192194193 if (unlikely(sk == NULL))195194 goto out;···199200 llc->laddr.lsap, llc->daddr.lsap);200201 if (!llc_send_disc(sk))201202 llc_ui_wait_for_disc(sk, sk->sk_rcvtimeo);202202- sap = llc->sap;203203- /* Hold this for release_sock(), so that llc_backlog_rcv() could still204204- * use it.205205- */206206- llc_sap_hold(sap);207207- if (!sock_flag(sk, SOCK_ZAPPED))203203+ if (!sock_flag(sk, SOCK_ZAPPED)) {204204+ struct llc_sap *sap = llc->sap;205205+206206+ /* Hold this for release_sock(), so that llc_backlog_rcv()207207+ * could still use it.208208+ */209209+ llc_sap_hold(sap);208210 llc_sap_remove_socket(llc->sap, sk);209209- release_sock(sk);210210- llc_sap_put(sap);211211+ release_sock(sk);212212+ llc_sap_put(sap);213213+ } else {214214+ release_sock(sk);215215+ }211216 if (llc->dev)212217 dev_put(llc->dev);213218 sock_put(sk);
···594594config NFT_REJECT595595 default m if NETFILTER_ADVANCED=n596596 tristate "Netfilter nf_tables reject support"597597+ depends on !NF_TABLES_INET || (IPV6!=m || m)597598 help598599 This option adds the "reject" expression that you can use to599600 explicitly deny and notify via TCP reset/ICMP informational errors
···99 * 2 of the License, or (at your option) any later version.1010 */1111#include <linux/kernel.h>1212+#include <linux/kmemleak.h>1213#include <linux/module.h>1314#include <linux/mutex.h>1415#include <linux/rcupdate.h>···7271 rcu_read_unlock();73727473 alloc = max(newlen, NF_CT_EXT_PREALLOC);7474+ kmemleak_not_leak(old);7575 new = __krealloc(old, alloc, gfp);7676 if (!new)7777 return NULL;
+12-4
net/netfilter/nf_conntrack_sip.c
···938938 datalen, rtp_exp, rtcp_exp,939939 mediaoff, medialen, daddr);940940 else {941941- if (nf_ct_expect_related(rtp_exp) == 0) {942942- if (nf_ct_expect_related(rtcp_exp) != 0)943943- nf_ct_unexpect_related(rtp_exp);944944- else941941+ /* -EALREADY handling works around end-points that send942942+ * SDP messages with identical port but different media type,943943+ * we pretend expectation was set up.944944+ */945945+ int errp = nf_ct_expect_related(rtp_exp);946946+947947+ if (errp == 0 || errp == -EALREADY) {948948+ int errcp = nf_ct_expect_related(rtcp_exp);949949+950950+ if (errcp == 0 || errcp == -EALREADY)945951 ret = NF_ACCEPT;952952+ else if (errp == 0)953953+ nf_ct_unexpect_related(rtp_exp);946954 }947955 }948956 nf_ct_expect_put(rtcp_exp);
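The nf_conntrack_sip hunk above treats `-EALREADY` from `nf_ct_expect_related()` as success for both the RTP and RTCP expectations, and tears the RTP one down only when this call actually created it. A small model of that decision table follows; the `EALREADY` value is the usual Linux errno and the helper is illustrative, not kernel API:

```c
#include <assert.h>

#define EALREADY 114	/* Linux errno value, used here for illustration */

/* Given the results of setting up the RTP and RTCP expectations
 * (0 on success, -EALREADY if an identical one exists, other negative
 * on failure), return 1 to accept the packet. *undo_first is set when
 * the RTP expectation must be removed because only RTCP failed and the
 * RTP one was created by this call (not pre-existing). */
static int pair_outcome(int err_rtp, int err_rtcp, int *undo_first)
{
	*undo_first = 0;
	if (err_rtp != 0 && err_rtp != -EALREADY)
		return 0;		/* first setup failed outright */
	if (err_rtcp == 0 || err_rtcp == -EALREADY)
		return 1;		/* both expectations in place */
	if (err_rtp == 0)
		*undo_first = 1;	/* undo only what we created */
	return 0;
}
```

This mirrors why the patch checks `errp == 0` before calling `nf_ct_unexpect_related()`: a pre-existing expectation (the `-EALREADY` case) must not be removed.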
···329329 skb_set_queue_mapping(skb, queue_index);330330}331331332332-/* register_prot_hook must be invoked with the po->bind_lock held,332332+/* __register_prot_hook must be invoked through register_prot_hook333333 * or from a context in which asynchronous accesses to the packet334334 * socket is not possible (packet_create()).335335 */336336-static void register_prot_hook(struct sock *sk)336336+static void __register_prot_hook(struct sock *sk)337337{338338 struct packet_sock *po = pkt_sk(sk);339339···348348 }349349}350350351351-/* {,__}unregister_prot_hook() must be invoked with the po->bind_lock352352- * held. If the sync parameter is true, we will temporarily drop351351+static void register_prot_hook(struct sock *sk)352352+{353353+ lockdep_assert_held_once(&pkt_sk(sk)->bind_lock);354354+ __register_prot_hook(sk);355355+}356356+357357+/* If the sync parameter is true, we will temporarily drop353358 * the po->bind_lock and do a synchronize_net to make sure no354359 * asynchronous packet processing paths still refer to the elements355360 * of po->prot_hook. If the sync parameter is false, it is the···363358static void __unregister_prot_hook(struct sock *sk, bool sync)364359{365360 struct packet_sock *po = pkt_sk(sk);361361+362362+ lockdep_assert_held_once(&po->bind_lock);366363367364 po->running = 0;368365···3259325232603253 if (proto) {32613254 po->prot_hook.type = proto;32623262- register_prot_hook(sk);32553255+ __register_prot_hook(sk);32633256 }3264325732653258 mutex_lock(&net->packet.sklist_lock);···3739373237403733 if (optlen != sizeof(val))37413734 return -EINVAL;37423742- if (po->rx_ring.pg_vec || po->tx_ring.pg_vec)37433743- return -EBUSY;37443735 if (copy_from_user(&val, optval, sizeof(val)))37453736 return -EFAULT;37463746- po->tp_loss = !!val;37473747- return 0;37373737+37383738+ lock_sock(sk);37393739+ if (po->rx_ring.pg_vec || po->tx_ring.pg_vec) {37403740+ ret = -EBUSY;37413741+ } else {37423742+ po->tp_loss = !!val;37433743+ ret = 0;37443744+ }37453745+ release_sock(sk);37463746+ return ret;37483747 }37493748 case PACKET_AUXDATA:37503749 {···37613748 if (copy_from_user(&val, optval, sizeof(val)))37623749 return -EFAULT;3763375037513751+ lock_sock(sk);37643752 po->auxdata = !!val;37533753+ release_sock(sk);37653754 return 0;37663755 }37673756 case PACKET_ORIGDEV:···37753760 if (copy_from_user(&val, optval, sizeof(val)))37763761 return -EFAULT;3777376237633763+ lock_sock(sk);37783764 po->origdev = !!val;37653765+ release_sock(sk);37793766 return 0;37803767 }37813768 case PACKET_VNET_HDR:···3786376937873770 if (sock->type != SOCK_RAW)37883771 return -EINVAL;37893789- if (po->rx_ring.pg_vec || po->tx_ring.pg_vec)37903790- return -EBUSY;37913772 if (optlen < sizeof(val))37923773 return -EINVAL;37933774 if (copy_from_user(&val, optval, sizeof(val)))37943775 return -EFAULT;3795377637963796- po->has_vnet_hdr = !!val;37973797- return 0;37773777+ lock_sock(sk);37783778+ if (po->rx_ring.pg_vec || po->tx_ring.pg_vec) {37793779+ ret = -EBUSY;37803780+ } else {37813781+ po->has_vnet_hdr = !!val;37823782+ ret = 0;37833783+ }37843784+ release_sock(sk);37853785+ return ret;37983786 }37993787 case PACKET_TIMESTAMP:38003788 {···3837381538383816 if (optlen != sizeof(val))38393817 return -EINVAL;38403840- if (po->rx_ring.pg_vec || po->tx_ring.pg_vec)38413841- return -EBUSY;38423818 if (copy_from_user(&val, optval, sizeof(val)))38433819 return -EFAULT;38443844- po->tp_tx_has_off = !!val;38203820+38213821+ lock_sock(sk);38223822+ if (po->rx_ring.pg_vec || po->tx_ring.pg_vec) {38233823+ ret = -EBUSY;38243824+ } else {38253825+ po->tp_tx_has_off = !!val;38263826+ ret = 0;38273827+ }38283828+ release_sock(sk);38453829 return 0;38463830 }38473831 case PACKET_QDISC_BYPASS:
+5-5
net/packet/internal.h
···112112 int copy_thresh;113113 spinlock_t bind_lock;114114 struct mutex pg_vec_lock;115115- unsigned int running:1, /* prot_hook is attached*/116116- auxdata:1,115115+ unsigned int running; /* bind_lock must be held */116116+ unsigned int auxdata:1, /* writer must hold sock lock */117117 origdev:1,118118- has_vnet_hdr:1;118118+ has_vnet_hdr:1,119119+ tp_loss:1,120120+ tp_tx_has_off:1;119121 int pressure;120122 int ifindex; /* bound device */121123 __be16 num;···127125 enum tpacket_versions tp_version;128126 unsigned int tp_hdrlen;129127 unsigned int tp_reserve;130130- unsigned int tp_loss:1;131131- unsigned int tp_tx_has_off:1;132128 unsigned int tp_tstamp;133129 struct net_device __rcu *cached_dev;134130 int (*xmit)(struct sk_buff *skb);
+7-2
net/sched/act_ife.c
···652652 }653653 }654654655655- return 0;655655+ return -ENOENT;656656}657657658658static int tcf_ife_decode(struct sk_buff *skb, const struct tc_action *a,···682682 u16 mtype;683683 u16 dlen;684684685685- curr_data = ife_tlv_meta_decode(tlv_data, &mtype, &dlen, NULL);685685+ curr_data = ife_tlv_meta_decode(tlv_data, ifehdr_end, &mtype,686686+ &dlen, NULL);687687+ if (!curr_data) {688688+ qstats_drop_inc(this_cpu_ptr(ife->common.cpu_qstats));689689+ return TC_ACT_SHOT;690690+ }686691687692 if (find_decode_metaid(skb, ife, mtype, dlen, curr_data)) {688693 /* abuse overlimits to count when we receive metadata
+1-1
net/strparser/strparser.c
···67676868static void strp_start_timer(struct strparser *strp, long timeo)6969{7070- if (timeo)7070+ if (timeo && timeo != LONG_MAX)7171 mod_delayed_work(strp_wq, &strp->msg_timer_work, timeo);7272}7373
···8989 {RT5514_PLL3_CALIB_CTRL5, 0x40220012},9090 {RT5514_DELAY_BUF_CTRL1, 0x7fff006a},9191 {RT5514_DELAY_BUF_CTRL3, 0x00000000},9292+ {RT5514_ASRC_IN_CTRL1, 0x00000003},9293 {RT5514_DOWNFILTER0_CTRL1, 0x00020c2f},9394 {RT5514_DOWNFILTER0_CTRL2, 0x00020c2f},9495 {RT5514_DOWNFILTER0_CTRL3, 0x10000362},···182181 case RT5514_PLL3_CALIB_CTRL5:183182 case RT5514_DELAY_BUF_CTRL1:184183 case RT5514_DELAY_BUF_CTRL3:184184+ case RT5514_ASRC_IN_CTRL1:185185 case RT5514_DOWNFILTER0_CTRL1:186186 case RT5514_DOWNFILTER0_CTRL2:187187 case RT5514_DOWNFILTER0_CTRL3:···240238 case RT5514_DSP_MAPPING | RT5514_PLL3_CALIB_CTRL5:241239 case RT5514_DSP_MAPPING | RT5514_DELAY_BUF_CTRL1:242240 case RT5514_DSP_MAPPING | RT5514_DELAY_BUF_CTRL3:241241+ case RT5514_DSP_MAPPING | RT5514_ASRC_IN_CTRL1:243242 case RT5514_DSP_MAPPING | RT5514_DOWNFILTER0_CTRL1:244243 case RT5514_DSP_MAPPING | RT5514_DOWNFILTER0_CTRL2:245244 case RT5514_DSP_MAPPING | RT5514_DOWNFILTER0_CTRL3:
+7
sound/soc/fsl/fsl_esai.c
···144144145145 psr = ratio <= 256 * maxfp ? ESAI_xCCR_xPSR_BYPASS : ESAI_xCCR_xPSR_DIV8;146146147147+ /* Do not loop-search if PM (1 ~ 256) alone can serve the ratio */148148+ if (ratio <= 256) {149149+ pm = ratio;150150+ fp = 1;151151+ goto out;152152+ }153153+147154 /* Set the max fluctuation -- 0.1% of the max devisor */148155 savesub = (psr ? 1 : 8) * 256 * maxfp / 1000;149156
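The fsl_esai change above short-circuits the divider search when the PM prescaler range (1 to 256) alone can represent the ratio exactly. A toy model of such a search is sketched below; the real driver's fluctuation accounting and register encoding are omitted, and the function name and bounds are assumptions for illustration:

```c
#include <assert.h>

/* Pick pm (1..256) and fp (1..maxfp) whose product best matches ratio.
 * Returns 0 on success, -1 if the ratio is out of range. The early
 * exit mirrors the patch: when pm alone can serve the ratio, take it
 * exactly instead of loop-searching. */
static int pick_dividers(unsigned int ratio, unsigned int maxfp,
			 unsigned int *pm, unsigned int *fp)
{
	unsigned int p, f, best_err = ~0u;

	if (ratio == 0 || ratio > 256 * maxfp)
		return -1;

	if (ratio <= 256) {		/* PM alone serves the ratio */
		*pm = ratio;
		*fp = 1;
		return 0;
	}

	for (f = 1; f <= maxfp; f++) {
		p = (ratio + f / 2) / f;	/* nearest pm for this fp */
		if (p < 1 || p > 256)
			continue;
		unsigned int got = p * f;
		unsigned int err = got > ratio ? got - ratio : ratio - got;
		if (err < best_err) {
			best_err = err;
			*pm = p;
			*fp = f;
		}
	}
	return best_err == ~0u ? -1 : 0;
}
```

The early exit both avoids needless iteration and guarantees an exact divider whenever one exists in the prescaler range alone.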
+11-3
sound/soc/fsl/fsl_ssi.c
···217217 * @dai_fmt: DAI configuration this device is currently used with218218 * @streams: Mask of current active streams: BIT(TX) and BIT(RX)219219 * @i2s_net: I2S and Network mode configurations of SCR register220220+ * (this is the initial settings based on the DAI format)220221 * @synchronous: Use synchronous mode - both of TX and RX use STCK and SFCK221222 * @use_dma: DMA is used or FIQ with stream filter222223 * @use_dual_fifo: DMA with support for dual FIFO mode···830829 }831830832831 if (!fsl_ssi_is_ac97(ssi)) {832832+ /*833833+ * Keep the ssi->i2s_net intact while having a local variable834834+ * to override settings for special use cases. Otherwise, the835835+ * ssi->i2s_net will lose the settings for regular use cases.836836+ */837837+ u8 i2s_net = ssi->i2s_net;838838+833839 /* Normal + Network mode to send 16-bit data in 32-bit frames */834840 if (fsl_ssi_is_i2s_cbm_cfs(ssi) && sample_size == 16)835835- ssi->i2s_net = SSI_SCR_I2S_MODE_NORMAL | SSI_SCR_NET;841841+ i2s_net = SSI_SCR_I2S_MODE_NORMAL | SSI_SCR_NET;836842837843 /* Use Normal mode to send mono data at 1st slot of 2 slots */838844 if (channels == 1)839839- ssi->i2s_net = SSI_SCR_I2S_MODE_NORMAL;845845+ i2s_net = SSI_SCR_I2S_MODE_NORMAL;840846841847 regmap_update_bits(regs, REG_SSI_SCR,842842- SSI_SCR_I2S_NET_MASK, ssi->i2s_net);848848+ SSI_SCR_I2S_NET_MASK, i2s_net);843849 }844850845851 /* In synchronous mode, the SSI uses STCCR for capture */
+13-9
sound/soc/intel/Kconfig
···7272 for Baytrail Chromebooks but this option is now deprecated and is7373 not recommended, use SND_SST_ATOM_HIFI2_PLATFORM instead.74747575+config SND_SST_ATOM_HIFI2_PLATFORM7676+ tristate7777+ select SND_SOC_COMPRESS7878+7579config SND_SST_ATOM_HIFI2_PLATFORM_PCI7676- tristate "PCI HiFi2 (Medfield, Merrifield) Platforms"8080+ tristate "PCI HiFi2 (Merrifield) Platforms"7781 depends on X86 && PCI7882 select SND_SST_IPC_PCI7979- select SND_SOC_COMPRESS8383+ select SND_SST_ATOM_HIFI2_PLATFORM8084 help8181- If you have a Intel Medfield or Merrifield/Edison platform, then8585+ If you have a Intel Merrifield/Edison platform, then8286 enable this option by saying Y or m. Distros will typically not8383- enable this option: Medfield devices are not available to8484- developers and while Merrifield/Edison can run a mainline kernel with8585- limited functionality it will require a firmware file which8686- is not in the standard firmware tree8787+ enable this option: while Merrifield/Edison can run a mainline8888+ kernel with limited functionality it will require a firmware file8989+ which is not in the standard firmware tree87908888-config SND_SST_ATOM_HIFI2_PLATFORM9191+config SND_SST_ATOM_HIFI2_PLATFORM_ACPI8992 tristate "ACPI HiFi2 (Baytrail, Cherrytrail) Platforms"9393+ default ACPI9094 depends on X86 && ACPI9195 select SND_SST_IPC_ACPI9292- select SND_SOC_COMPRESS9696+ select SND_SST_ATOM_HIFI2_PLATFORM9397 select SND_SOC_ACPI_INTEL_MATCH9498 select IOSF_MBI9599 help
+11-3
sound/soc/omap/omap-dmic.c
···281281static int omap_dmic_select_fclk(struct omap_dmic *dmic, int clk_id,282282 unsigned int freq)283283{284284- struct clk *parent_clk;284284+ struct clk *parent_clk, *mux;285285 char *parent_clk_name;286286 int ret = 0;287287···329329 return -ENODEV;330330 }331331332332+ mux = clk_get_parent(dmic->fclk);333333+ if (IS_ERR(mux)) {334334+ dev_err(dmic->dev, "can't get fck mux parent\n");335335+ clk_put(parent_clk);336336+ return -ENODEV;337337+ }338338+332339 mutex_lock(&dmic->mutex);333340 if (dmic->active) {334341 /* disable clock while reparenting */335342 pm_runtime_put_sync(dmic->dev);336336- ret = clk_set_parent(dmic->fclk, parent_clk);343343+ ret = clk_set_parent(mux, parent_clk);337344 pm_runtime_get_sync(dmic->dev);338345 } else {339339- ret = clk_set_parent(dmic->fclk, parent_clk);346346+ ret = clk_set_parent(mux, parent_clk);340347 }341348 mutex_unlock(&dmic->mutex);342349···356349 dmic->fclk_freq = freq;357350358351err_busy:352352+ clk_put(mux);359353 clk_put(parent_clk);360354361355 return ret;
+2-2
sound/soc/sh/rcar/core.c
···15361536 return ret;15371537}1538153815391539-static int rsnd_suspend(struct device *dev)15391539+static int __maybe_unused rsnd_suspend(struct device *dev)15401540{15411541 struct rsnd_priv *priv = dev_get_drvdata(dev);15421542···15451545 return 0;15461546}1547154715481548-static int rsnd_resume(struct device *dev)15481548+static int __maybe_unused rsnd_resume(struct device *dev)15491549{15501550 struct rsnd_priv *priv = dev_get_drvdata(dev);15511551
+9-5
sound/soc/soc-topology.c
···513513 */514514 if (dobj->widget.kcontrol_type == SND_SOC_TPLG_TYPE_ENUM) {515515 /* enumerated widget mixer */516516- for (i = 0; i < w->num_kcontrols; i++) {516516+ for (i = 0; w->kcontrols != NULL && i < w->num_kcontrols; i++) {517517 struct snd_kcontrol *kcontrol = w->kcontrols[i];518518 struct soc_enum *se =519519 (struct soc_enum *)kcontrol->private_value;···530530 }531531 } else {532532 /* volume mixer or bytes controls */533533- for (i = 0; i < w->num_kcontrols; i++) {533533+ for (i = 0; w->kcontrols != NULL && i < w->num_kcontrols; i++) {534534 struct snd_kcontrol *kcontrol = w->kcontrols[i];535535536536 if (dobj->widget.kcontrol_type···13251325 ec->hdr.name);1326132613271327 kc[i].name = kstrdup(ec->hdr.name, GFP_KERNEL);13281328- if (kc[i].name == NULL)13281328+ if (kc[i].name == NULL) {13291329+ kfree(se);13291330 goto err_se;13311331+ }13301332 kc[i].private_value = (long)se;13311333 kc[i].iface = SNDRV_CTL_ELEM_IFACE_MIXER;13321334 kc[i].access = ec->hdr.access;···14441442 be->hdr.name, be->hdr.access);1445144314461444 kc[i].name = kstrdup(be->hdr.name, GFP_KERNEL);14471447- if (kc[i].name == NULL)14451445+ if (kc[i].name == NULL) {14461446+ kfree(sbe);14481447 goto err;14481448+ }14491449 kc[i].private_value = (long)sbe;14501450 kc[i].iface = SNDRV_CTL_ELEM_IFACE_MIXER;14511451 kc[i].access = be->hdr.access;···2580257625812577 /* match index */25822578 if (dobj->index != index &&25832583- dobj->index != SND_SOC_TPLG_INDEX_ALL)25792579+ index != SND_SOC_TPLG_INDEX_ALL)25842580 continue;2585258125862582 switch (dobj->type) {
+4-3
sound/usb/mixer.c
···17761776 build_feature_ctl(state, _ftr, ch_bits, control,17771777 &iterm, unitid, ch_read_only);17781778 if (uac_v2v3_control_is_readable(master_bits, control))17791779- build_feature_ctl(state, _ftr, 0, i, &iterm, unitid,17791779+ build_feature_ctl(state, _ftr, 0, control,17801780+ &iterm, unitid,17801781 !uac_v2v3_control_is_writeable(master_bits,17811782 control));17821783 }···18601859 check_input_term(state, d->bTerminalID, &iterm);18611860 if (state->mixer->protocol == UAC_VERSION_2) {18621861 /* Check for jack detection. */18631863- if (uac_v2v3_control_is_readable(d->bmControls,18621862+ if (uac_v2v3_control_is_readable(le16_to_cpu(d->bmControls),18641863 UAC2_TE_CONNECTOR)) {18651864 build_connector_control(state, &iterm, true);18661865 }···25622561 if (err < 0 && err != -EINVAL)25632562 return err;2564256325652565- if (uac_v2v3_control_is_readable(desc->bmControls,25642564+ if (uac_v2v3_control_is_readable(le16_to_cpu(desc->bmControls),25662565 UAC2_TE_CONNECTOR)) {25672566 build_connector_control(&state, &state.oterm,25682567 false);
+3
sound/usb/mixer_maps.c
···353353/*354354 * Dell usb dock with ALC4020 codec had a firmware problem where it got355355 * screwed up when zero volume is passed; just skip it as a workaround356356+ *357357+ * Also the extension unit gives an access error, so skip it as well.356358 */357359static const struct usbmix_name_map dell_alc4020_map[] = {360360+ { 4, NULL }, /* extension unit */358361 { 16, NULL },359362 { 19, NULL },360363 { 0 }
+1-1
sound/usb/stream.c
···349349 * TODO: this conversion is not complete, update it350350 * after adding UAC3 values to asound.h351351 */352352- switch (is->bChPurpose) {352352+ switch (is->bChRelationship) {353353 case UAC3_CH_MONO:354354 map = SNDRV_CHMAP_MONO;355355 break;
+1-1
sound/usb/usx2y/us122l.c
···139139 snd_printdd(KERN_DEBUG "%i\n", atomic_read(&us122l->mmap_count));140140}141141142142-static int usb_stream_hwdep_vm_fault(struct vm_fault *vmf)142142+static vm_fault_t usb_stream_hwdep_vm_fault(struct vm_fault *vmf)143143{144144 unsigned long offset;145145 struct page *page;
+1-1
sound/usb/usx2y/usX2Yhwdep.c
···3131#include "usbusx2y.h"3232#include "usX2Yhwdep.h"33333434-static int snd_us428ctls_vm_fault(struct vm_fault *vmf)3434+static vm_fault_t snd_us428ctls_vm_fault(struct vm_fault *vmf)3535{3636 unsigned long offset;3737 struct page * page;
+1-1
sound/usb/usx2y/usx2yhwdeppcm.c
···652652}653653654654655655-static int snd_usX2Y_hwdep_pcm_vm_fault(struct vm_fault *vmf)655655+static vm_fault_t snd_usX2Y_hwdep_pcm_vm_fault(struct vm_fault *vmf)656656{657657 unsigned long offset;658658 void *vaddr;
+29-12
tools/perf/Documentation/perf-mem.txt
···2828<command>...::2929 Any command you can specify in a shell.30303131+-i::3232+--input=<file>::3333+ Input file name.3434+3135-f::3236--force::3337 Don't do ownership validation34383539-t::3636---type=::4040+--type=<type>::3741 Select the memory operation type: load or store (default: load,store)38423943-D::4040---dump-raw-samples=::4444+--dump-raw-samples::4145 Dump the raw decoded samples on the screen in a format that is easy to parse with4246 one sample per line.43474448-x::4545---field-separator::4949+--field-separator=<separator>::4650 Specify the field separator used when dump raw samples (-D option). By default,4751 The separator is the space character.48524953-C::5050---cpu-list::5151- Restrict dump of raw samples to those provided via this option. Note that the same5252- option can be passed in record mode. It will be interpreted the same way as perf5353- record.5454+--cpu=<cpu>::5555+ Monitor only on the list of CPUs provided. Multiple CPUs can be provided as a5656+ comma-separated list with no space: 0,1. Ranges of CPUs are specified with -: 0-2. Default5757+ is to monitor all CPUS.5858+-U::5959+--hide-unresolved::6060+ Only display entries resolved to a symbol.6161+6262+-p::6363+--phys-data::6464+ Record/Report sample physical addresses6565+6666+RECORD OPTIONS6767+--------------6868+-e::6969+--event <event>::7070+ Event selector. Use 'perf mem record -e list' to list available events.54715572-K::5673--all-kernel::···7760--all-user::7861 Configure all used events to run in user space.79628080---ldload::8181- Specify desired latency for loads event.6363+-v::6464+--verbose::6565+ Be more verbose (show counter open errors, etc)82668383--p::8484---phys-data::8585- Record/Report sample physical addresses6767+--ldlat <n>::6868+ Specify desired latency for loads event.86698770In addition, for report all perf report options are valid, and for record8871all perf record options.
+1
tools/perf/arch/s390/util/auxtrace.c
···8787 struct perf_evsel *pos;8888 int diagnose = 0;89899090+ *err = 0;9091 if (evlist->nr_entries == 0)9192 return NULL;9293
-18
tools/perf/arch/s390/util/header.c
···146146 zfree(&buf);147147 return buf;148148}149149-150150-/*151151- * Compare the cpuid string returned by get_cpuid() function152152- * with the name generated by the jevents file read from153153- * pmu-events/arch/s390/mapfile.csv.154154- *155155- * Parameter mapcpuid is the cpuid as stored in the156156- * pmu-events/arch/s390/mapfile.csv. This is just the type number.157157- * Parameter cpuid is the cpuid returned by function get_cpuid().158158- */159159-int strcmp_cpuid_str(const char *mapcpuid, const char *cpuid)160160-{161161- char *cp = strchr(cpuid, ',');162162-163163- if (cp == NULL)164164- return -1;165165- return strncmp(cp + 1, mapcpuid, strlen(mapcpuid));166166-}
tools/perf/builtin-stat.c (+38, -2)
···
 static const char *output_name;
 static int output_fd;
 static int print_free_counters_hint;
+static int print_mixed_hw_group_error;

 struct perf_stat {
	bool			 record;
···
		fprintf(output, "%s%s", csv_sep, evsel->cgrp->name);
 }

+static bool is_mixed_hw_group(struct perf_evsel *counter)
+{
+	struct perf_evlist *evlist = counter->evlist;
+	u32 pmu_type = counter->attr.type;
+	struct perf_evsel *pos;
+
+	if (counter->nr_members < 2)
+		return false;
+
+	evlist__for_each_entry(evlist, pos) {
+		/* software events can be part of any hardware group */
+		if (pos->attr.type == PERF_TYPE_SOFTWARE)
+			continue;
+		if (pmu_type == PERF_TYPE_SOFTWARE) {
+			pmu_type = pos->attr.type;
+			continue;
+		}
+		if (pmu_type != pos->attr.type)
+			return true;
+	}
+
+	return false;
+}
+
 static void printout(int id, int nr, struct perf_evsel *counter, double uval,
		     char *prefix, u64 run, u64 ena, double noise,
		     struct runtime_stat *st)
···
			counter->supported ? CNTR_NOT_COUNTED : CNTR_NOT_SUPPORTED,
			csv_sep);

-		if (counter->supported)
+		if (counter->supported) {
			print_free_counters_hint = 1;
+			if (is_mixed_hw_group(counter))
+				print_mixed_hw_group_error = 1;
+		}

		fprintf(stat_config.output, "%-*s%s",
			csv_output ? 0 : unit_width,
···
	char *new_name;
	char *config;

-	if (!counter->pmu_name || !strncmp(counter->name, counter->pmu_name,
+	if (counter->uniquified_name ||
+	    !counter->pmu_name || !strncmp(counter->name, counter->pmu_name,
					   strlen(counter->pmu_name)))
		return;
···
			counter->name = new_name;
		}
	}
+
+	counter->uniquified_name = true;
 }

 static void collect_all_aliases(struct perf_evsel *counter,
···
 "	echo 0 > /proc/sys/kernel/nmi_watchdog\n"
 "	perf stat ...\n"
 "	echo 1 > /proc/sys/kernel/nmi_watchdog\n");
+
+	if (print_mixed_hw_group_error)
+		fprintf(output,
+			"The events in group usually have to be from "
+			"the same PMU. Try reorganizing the group.\n");
 }

 static void print_counters(struct timespec *ts, int argc, const char **argv)
···
	 * than leader in case leader 'leads' the sampling.
	 */
	if ((leader != evsel) && leader->sample_read) {
-		attr->sample_freq   = 0;
-		attr->sample_period = 0;
+		attr->freq           = 0;
+		attr->sample_freq    = 0;
+		attr->sample_period  = 0;
+		attr->write_backward = 0;
+		attr->sample_id_all  = 0;
	}

	if (opts->no_samples)
···
		goto fallback_missing_features;
	} else if (!perf_missing_features.group_read &&
		    evsel->attr.inherit &&
-		   (evsel->attr.read_format & PERF_FORMAT_GROUP)) {
+		   (evsel->attr.read_format & PERF_FORMAT_GROUP) &&
+		   perf_evsel__is_group_leader(evsel)) {
		perf_missing_features.group_read = true;
		pr_debug2("switching off group read\n");
		goto fallback_missing_features;
···
	    (paranoid = perf_event_paranoid()) > 1) {
		const char *name = perf_evsel__name(evsel);
		char *new_name;
+		const char *sep = ":";

-		if (asprintf(&new_name, "%s%su", name, strchr(name, ':') ? "" : ":") < 0)
+		/* Is there already the separator in the name. */
+		if (strchr(name, '/') ||
+		    strchr(name, ':'))
+			sep = "";
+
+		if (asprintf(&new_name, "%s%su", name, sep) < 0)
			return false;

		if (evsel->name)
tools/perf/util/evsel.h (+1)
···
	unsigned int		sample_size;
	int			id_pos;
	int			is_pos;
+	bool			uniquified_name;
	bool			snapshot;
	bool			supported;
	bool			needs_swap;
tools/perf/util/machine.c (+18, -12)
···
	return ret;
 }

-static void map_groups__fixup_end(struct map_groups *mg)
-{
-	int i;
-	for (i = 0; i < MAP__NR_TYPES; ++i)
-		__map_groups__fixup_end(mg, i);
-}
-
 static char *get_kernel_version(const char *root_dir)
 {
	char version[PATH_MAX];
···
 {
	struct dso *kernel = machine__get_kernel(machine);
	const char *name = NULL;
+	struct map *map;
	u64 addr = 0;
	int ret;
···
		machine__destroy_kernel_maps(machine);
		return -1;
	}
-	machine__set_kernel_mmap(machine, addr, 0);
+
+	/* we have a real start address now, so re-order the kmaps */
+	map = machine__kernel_map(machine);
+
+	map__get(map);
+	map_groups__remove(&machine->kmaps, map);
+
+	/* assume it's the last in the kmaps */
+	machine__set_kernel_mmap(machine, addr, ~0ULL);
+
+	map_groups__insert(&machine->kmaps, map);
+	map__put(map);
	}

-	/*
-	 * Now that we have all the maps created, just set the ->end of them:
-	 */
-	map_groups__fixup_end(&machine->kmaps);
+	/* update end address of the kernel map using adjacent module address */
+	map = map__next(machine__kernel_map(machine));
+	if (map)
+		machine__set_kernel_mmap(machine, addr, map->start);
+
	return 0;
 }

tools/perf/util/parse-events.y (+4, -4)
···
	event_bpf_file

event_pmu:
-PE_NAME opt_event_config
+PE_NAME '/' event_config '/'
{
	struct list_head *list, *orig_terms, *terms;

-	if (parse_events_copy_term_list($2, &orig_terms))
+	if (parse_events_copy_term_list($3, &orig_terms))
		YYABORT;

	ALLOC_LIST(list);
-	if (parse_events_add_pmu(_parse_state, list, $1, $2, false)) {
+	if (parse_events_add_pmu(_parse_state, list, $1, $3, false)) {
		struct perf_pmu *pmu = NULL;
		int ok = 0;
		char *pattern;
···
		if (!ok)
			YYABORT;
	}
-	parse_events_terms__delete($2);
+	parse_events_terms__delete($3);
	parse_events_terms__delete(orig_terms);
	$$ = list;
}
tools/perf/util/pmu.c (+8, -14)
···

 /*
  * PMU CORE devices have different name other than cpu in sysfs on some
- * platforms. looking for possible sysfs files to identify as core device.
+ * platforms.
+ * Looking for possible sysfs files to identify the arm core device.
  */
-static int is_pmu_core(const char *name)
+static int is_arm_pmu_core(const char *name)
 {
	struct stat st;
	char path[PATH_MAX];
···

	if (!sysfs)
		return 0;
-
-	/* Look for cpu sysfs (x86 and others) */
-	scnprintf(path, PATH_MAX, "%s/bus/event_source/devices/cpu", sysfs);
-	if ((stat(path, &st) == 0) &&
-			(strncmp(name, "cpu", strlen("cpu")) == 0))
-		return 1;

	/* Look for cpu sysfs (specific to arm) */
	scnprintf(path, PATH_MAX, "%s/bus/event_source/devices/%s/cpus",
···
  * cpuid string generated on this platform.
  * Otherwise return non-zero.
  */
-int __weak strcmp_cpuid_str(const char *mapcpuid, const char *cpuid)
+int strcmp_cpuid_str(const char *mapcpuid, const char *cpuid)
 {
	regex_t re;
	regmatch_t pmatch[1];
···
	struct pmu_events_map *map;
	struct pmu_event *pe;
	const char *name = pmu->name;
+	const char *pname;

	map = perf_pmu__find_map(pmu);
	if (!map)
···
			break;
		}

-		if (!is_pmu_core(name)) {
-			/* check for uncore devices */
-			if (pe->pmu == NULL)
-				continue;
-			if (strncmp(pe->pmu, name, strlen(pe->pmu)))
+		if (!is_arm_pmu_core(name)) {
+			pname = pe->pmu ? pe->pmu : "cpu";
+			if (strncmp(pname, name, strlen(pname)))
				continue;
		}

···
+#!/bin/sh
+# description: event trigger - test multiple actions on hist trigger
+
+
+do_reset() {
+    reset_trigger
+    echo > set_event
+    clear_trace
+}
+
+fail() { #msg
+    do_reset
+    echo $1
+    exit_fail
+}
+
+if [ ! -f set_event ]; then
+    echo "event tracing is not supported"
+    exit_unsupported
+fi
+
+if [ ! -f synthetic_events ]; then
+    echo "synthetic event is not supported"
+    exit_unsupported
+fi
+
+clear_synthetic_events
+reset_tracer
+do_reset
+
+echo "Test multiple actions on hist trigger"
+echo 'wakeup_latency u64 lat; pid_t pid' >> synthetic_events
+TRIGGER1=events/sched/sched_wakeup/trigger
+TRIGGER2=events/sched/sched_switch/trigger
+
+echo 'hist:keys=pid:ts0=common_timestamp.usecs if comm=="cyclictest"' > $TRIGGER1
+echo 'hist:keys=next_pid:wakeup_lat=common_timestamp.usecs-$ts0 if next_comm=="cyclictest"' >> $TRIGGER2
+echo 'hist:keys=next_pid:onmatch(sched.sched_wakeup).wakeup_latency(sched.sched_switch.$wakeup_lat,next_pid) if next_comm=="cyclictest"' >> $TRIGGER2
+echo 'hist:keys=next_pid:onmatch(sched.sched_wakeup).wakeup_latency(sched.sched_switch.$wakeup_lat,prev_pid) if next_comm=="cyclictest"' >> $TRIGGER2
+echo 'hist:keys=next_pid if next_comm=="cyclictest"' >> $TRIGGER2
+
+do_reset
+
+exit 0
tools/testing/selftests/x86/test_syscall_vdso.c (+21, -14)
···
	"	shl	$32, %r8\n"
	"	orq	$0x7f7f7f7f, %r8\n"
	"	movq	%r8, %r9\n"
-	"	movq	%r8, %r10\n"
-	"	movq	%r8, %r11\n"
-	"	movq	%r8, %r12\n"
-	"	movq	%r8, %r13\n"
-	"	movq	%r8, %r14\n"
-	"	movq	%r8, %r15\n"
+	"	incq	%r9\n"
+	"	movq	%r9, %r10\n"
+	"	incq	%r10\n"
+	"	movq	%r10, %r11\n"
+	"	incq	%r11\n"
+	"	movq	%r11, %r12\n"
+	"	incq	%r12\n"
+	"	movq	%r12, %r13\n"
+	"	incq	%r13\n"
+	"	movq	%r13, %r14\n"
+	"	incq	%r14\n"
+	"	movq	%r14, %r15\n"
+	"	incq	%r15\n"
	"	ret\n"
	"	.code32\n"
	"	.popsection\n"
···
	int err = 0;
	int num = 8;
	uint64_t *r64 = &regs64.r8;
+	uint64_t expected = 0x7f7f7f7f7f7f7f7fULL;

	if (!kernel_is_64bit)
		return 0;

	do {
-		if (*r64 == 0x7f7f7f7f7f7f7f7fULL)
+		if (*r64 == expected++)
			continue;	/* register did not change */
		if (syscall_addr != (long)&int80) {
			/*
···
				continue;
			}
		} else {
-			/* INT80 syscall entrypoint can be used by
+			/*
+			 * INT80 syscall entrypoint can be used by
			 * 64-bit programs too, unlike SYSCALL/SYSENTER.
			 * Therefore it must preserve R12+
			 * (they are callee-saved registers in 64-bit C ABI).
			 *
-			 * This was probably historically not intended,
-			 * but R8..11 are clobbered (cleared to 0).
-			 * IOW: they are the only registers which aren't
-			 * preserved across INT80 syscall.
+			 * Starting in Linux 4.17 (and any kernel that
+			 * backports the change), R8..11 are preserved.
+			 * Historically (and probably unintentionally), they
+			 * were clobbered or zeroed.
			 */
-			if (*r64 == 0 && num <= 11)
-				continue;
		}
		printf("[FAIL]\tR%d has changed:%016llx\n", num, *r64);
		err++;
virt/kvm/arm/arm.c (+10, -5)
···
 static atomic64_t kvm_vmid_gen = ATOMIC64_INIT(1);
 static u32 kvm_next_vmid;
 static unsigned int kvm_vmid_bits __read_mostly;
-static DEFINE_SPINLOCK(kvm_vmid_lock);
+static DEFINE_RWLOCK(kvm_vmid_lock);

 static bool vgic_present;
···
 {
	phys_addr_t pgd_phys;
	u64 vmid;
+	bool new_gen;

-	if (!need_new_vmid_gen(kvm))
+	read_lock(&kvm_vmid_lock);
+	new_gen = need_new_vmid_gen(kvm);
+	read_unlock(&kvm_vmid_lock);
+
+	if (!new_gen)
		return;

-	spin_lock(&kvm_vmid_lock);
+	write_lock(&kvm_vmid_lock);

	/*
	 * We need to re-check the vmid_gen here to ensure that if another vcpu
···
	 * use the same vmid.
	 */
	if (!need_new_vmid_gen(kvm)) {
-		spin_unlock(&kvm_vmid_lock);
+		write_unlock(&kvm_vmid_lock);
		return;
	}
···
	vmid = ((u64)(kvm->arch.vmid) << VTTBR_VMID_SHIFT) & VTTBR_VMID_MASK(kvm_vmid_bits);
	kvm->arch.vttbr = kvm_phys_to_vttbr(pgd_phys) | vmid;

-	spin_unlock(&kvm_vmid_lock);
+	write_unlock(&kvm_vmid_lock);
 }

 static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)