···
 pulls in some header files containing file scope host assembly codes.
 - You can add "-fno-jump-tables" to work around the switch table issue.
 
-Otherwise, you can use bpf target.
+Otherwise, you can use bpf target. Additionally, you _must_ use bpf target
+when:
+
+- Your program uses data structures with pointer or long / unsigned long
+  types that interface with BPF helpers or context data structures. Access
+  into these structures is verified by the BPF verifier and may result
+  in verification failures if the native architecture is not aligned with
+  the BPF architecture, e.g. 64-bit. An example of this is
+  BPF_PROG_TYPE_SK_MSG require '-target bpf'
 
 Happy BPF hacking!
···
 - compatible:
     atmel,maxtouch
 
+  The following compatibles have been used in various products but are
+  deprecated:
+    atmel,qt602240_ts
+    atmel,atmel_mxt_ts
+    atmel,atmel_mxt_tp
+    atmel,mXT224
+
 - reg: The I2C address of the device
 
 - interrupts: The sink for the touchpad's IRQ output
···
 - interrupts : identifier to the device interrupt
 - clocks : a list of phandle + clock-specifier pairs, one for each
   entry in clock names.
-- clocks-names :
+- clock-names :
   * "xtal" for external xtal clock identifier
   * "pclk" for the bus core clock, either the clk81 clock or the gate clock
   * "baud" for the source of the baudrate generator, can be either the xtal
···
   - Must contain two elements for the extended variant of the IP
     (marvell,armada-3700-uart-ext): "uart-tx" and "uart-rx",
     respectively the UART TX interrupt and the UART RX interrupt. A
-    corresponding interrupts-names property must be defined.
+    corresponding interrupt-names property must be defined.
   - For backward compatibility reasons, a single element interrupts
     property is also supported for the standard variant of the IP,
     containing only the UART sum interrupt. This form is deprecated
···
 - interrupts: one XHCI interrupt should be described here.
 
 Optional properties:
- - clocks: reference to a clock
+ - clocks: reference to the clocks
+ - clock-names: mandatory if there is a second clock, in this case
+   the name must be "core" for the first clock and "reg" for the
+   second one
 - usb2-lpm-disable: indicate if we don't want to enable USB2 HW LPM
 - usb3-lpm-capable: determines if platform is USB3 LPM capable
 - quirk-broken-port-ped: set if the controller has broken port disable mechanism
···
 
 request_firmware
 ----------------
-.. kernel-doc:: drivers/base/firmware_class.c
+.. kernel-doc:: drivers/base/firmware_loader/main.c
    :functions: request_firmware
 
 request_firmware_direct
 -----------------------
-.. kernel-doc:: drivers/base/firmware_class.c
+.. kernel-doc:: drivers/base/firmware_loader/main.c
    :functions: request_firmware_direct
 
 request_firmware_into_buf
 -------------------------
-.. kernel-doc:: drivers/base/firmware_class.c
+.. kernel-doc:: drivers/base/firmware_loader/main.c
    :functions: request_firmware_into_buf
 
 Asynchronous firmware requests
···
 
 request_firmware_nowait
 -----------------------
-.. kernel-doc:: drivers/base/firmware_class.c
+.. kernel-doc:: drivers/base/firmware_loader/main.c
    :functions: request_firmware_nowait
 
 Special optimizations on reboot
···
 Some devices have an optimization in place to enable the firmware to be
 retained during system reboot. When such optimizations are used the driver
 author must ensure the firmware is still available on resume from suspend,
-this can be done with firmware_request_cache() insted of requesting for the
-firmare to be loaded.
+this can be done with firmware_request_cache() instead of requesting for the
+firmware to be loaded.
 
 firmware_request_cache()
-------------------------
-.. kernel-doc:: drivers/base/firmware_class.c
+------------------------
+.. kernel-doc:: drivers/base/firmware_loader/main.c
    :functions: firmware_request_cache
 
 request firmware API expected driver use
···
 role. USB Type-C Connector Class does not supply separate API for them. The
 port drivers can use USB Role Class API with those.
 
-Illustration of the muxes behind a connector that supports an alternate mode:
+Illustration of the muxes behind a connector that supports an alternate mode::
 
                          ------------------------
                          |       Connector      |
Documentation/i2c/dev-interface (+14 -18)
···
 the i2c-tools package.
 
 I2C device files are character device files with major device number 89
-and a minor device number corresponding to the number assigned as 
-explained above. They should be called "i2c-%d" (i2c-0, i2c-1, ..., 
+and a minor device number corresponding to the number assigned as
+explained above. They should be called "i2c-%d" (i2c-0, i2c-1, ...,
 i2c-10, ...). All 256 minor device numbers are reserved for i2c.
 
···
   #include <linux/i2c-dev.h>
   #include <i2c/smbus.h>
 
-(Please note that there are two files named "i2c-dev.h" out there. One is
-distributed with the Linux kernel and the other one is included in the
-source tree of i2c-tools. They used to be different in content but since 2012
-they're identical. You should use "linux/i2c-dev.h").
-
 Now, you have to decide which adapter you want to access. You should
 inspect /sys/class/i2c-dev/ or run "i2cdetect -l" to decide this.
 Adapter numbers are assigned somewhat dynamically, so you can not
···
   int file;
   int adapter_nr = 2; /* probably dynamically determined */
   char filename[20];
-  
+
   snprintf(filename, 19, "/dev/i2c-%d", adapter_nr);
   file = open(filename, O_RDWR);
   if (file < 0) {
···
     /* res contains the read word */
   }
 
-  /* Using I2C Write, equivalent of 
-     i2c_smbus_write_word_data(file, reg, 0x6543) */
+  /*
+   * Using I2C Write, equivalent of
+   * i2c_smbus_write_word_data(file, reg, 0x6543)
+   */
   buf[0] = reg;
   buf[1] = 0x43;
   buf[2] = 0x65;
···
   set in each message, overriding the values set with the above ioctl's.
 
 ioctl(file, I2C_SMBUS, struct i2c_smbus_ioctl_data *args)
-  Not meant to be called directly; instead, use the access functions
-  below.
+  If possible, use the provided i2c_smbus_* methods described below instead
+  of issuing direct ioctls.
 
 You can do plain i2c transactions by using read(2) and write(2) calls.
 You do not need to pass the address byte; instead, set it through
 ioctl I2C_SLAVE before you try to access the device.
 
-You can do SMBus level transactions (see documentation file smbus-protocol 
+You can do SMBus level transactions (see documentation file smbus-protocol
 for details) through the following functions:
   __s32 i2c_smbus_write_quick(int file, __u8 value);
   __s32 i2c_smbus_read_byte(int file);
···
   __s32 i2c_smbus_write_word_data(int file, __u8 command, __u16 value);
   __s32 i2c_smbus_process_call(int file, __u8 command, __u16 value);
   __s32 i2c_smbus_read_block_data(int file, __u8 command, __u8 *values);
-  __s32 i2c_smbus_write_block_data(int file, __u8 command, __u8 length, 
+  __s32 i2c_smbus_write_block_data(int file, __u8 command, __u8 length,
                                    __u8 *values);
 All these transactions return -1 on failure; you can read errno to see
 what happened. The 'write' transactions return 0 on success; the
···
 returns the number of values read. The block buffers need not be longer
 than 32 bytes.
 
-The above functions are all inline functions, that resolve to calls to
-the i2c_smbus_access function, that on its turn calls a specific ioctl
-with the data in a specific format. Read the source code if you
-want to know what happens behind the screens.
+The above functions are made available by linking against the libi2c library,
+which is provided by the i2c-tools project. See:
+https://git.kernel.org/pub/scm/utils/i2c-tools/i2c-tools.git/.
 
 
 Implementation details
···
 
 [Please bear in mind that the kernel requests the microcode images from
 userspace, using the request_firmware() function defined in
-drivers/base/firmware_class.c]
+drivers/base/firmware_loader/main.c]
 
 
 a. When all the CPUs are identical:
Documentation/process/magic-number.rst (-3)
···
 OSS sound drivers have their magic numbers constructed from the soundcard PCI
 ID - these are not listed here as well.
 
-IrDA subsystem also uses large number of own magic numbers, see
-``include/net/irda/irda.h`` for a complete list of them.
-
 HFS is another larger user of magic numbers - you can find them in
 ``fs/hfs/hfs.h``.
Documentation/trace/ftrace.rst (+11 -3)
···
 	  and ticks at the same rate as the hardware clocksource.
 
 	boot:
-	  Same as mono. Used to be a separate clock which accounted
-	  for the time spent in suspend while CLOCK_MONOTONIC did
-	  not.
+	  This is the boot clock (CLOCK_BOOTTIME) and is based on the
+	  fast monotonic clock, but also accounts for time spent in
+	  suspend. Since the clock access is designed for use in
+	  tracing in the suspend path, some side effects are possible
+	  if clock is accessed after the suspend time is accounted before
+	  the fast mono clock is updated. In this case, the clock update
+	  appears to happen slightly sooner than it normally would have.
+	  Also on 32-bit systems, it's possible that the 64-bit boot offset
+	  sees a partial update. These effects are rare and post
+	  processing should be able to handle them. See comments in the
+	  ktime_get_boot_fast_ns() function for more information.
 
 To set a clock, simply echo the clock name into this file::
Documentation/virtual/kvm/api.txt (+8 -1)
···
 ARM 64-bit FP registers have the following id bit patterns:
   0x4030 0000 0012 0 <regno:12>
 
+ARM firmware pseudo-registers have the following bit pattern:
+  0x4030 0000 0014 <regno:16>
+
 
 arm64 registers are mapped using the lower 32 bits. The upper 16 of
 that is the register group type, or coprocessor number:
···
 
 arm64 system registers have the following id bit patterns:
   0x6030 0000 0013 <op0:2> <op1:3> <crn:4> <crm:4> <op2:3>
+
+arm64 firmware pseudo-registers have the following bit pattern:
+  0x6030 0000 0014 <regno:16>
 
 
 MIPS registers are mapped using the lower 32 bits. The upper 16 of that is
···
   and execute guest code when KVM_RUN is called.
 	- KVM_ARM_VCPU_EL1_32BIT: Starts the CPU in a 32bit mode.
 	  Depends on KVM_CAP_ARM_EL1_32BIT (arm64 only).
-	- KVM_ARM_VCPU_PSCI_0_2: Emulate PSCI v0.2 for the CPU.
+	- KVM_ARM_VCPU_PSCI_0_2: Emulate PSCI v0.2 (or a future revision
+          backward compatible with v0.2) for the CPU.
 	  Depends on KVM_CAP_ARM_PSCI_0_2.
 	- KVM_ARM_VCPU_PMU_V3: Emulate PMUv3 for the CPU.
 	  Depends on KVM_CAP_ARM_PMU_V3.
Documentation/virtual/kvm/arm/psci.txt (+30, new file)
+KVM implements the PSCI (Power State Coordination Interface)
+specification in order to provide services such as CPU on/off, reset
+and power-off to the guest.
+
+The PSCI specification is regularly updated to provide new features,
+and KVM implements these updates if they make sense from a virtualization
+point of view.
+
+This means that a guest booted on two different versions of KVM can
+observe two different "firmware" revisions. This could cause issues if
+a given guest is tied to a particular PSCI revision (unlikely), or if
+a migration causes a different PSCI version to be exposed out of the
+blue to an unsuspecting guest.
+
+In order to remedy this situation, KVM exposes a set of "firmware
+pseudo-registers" that can be manipulated using the GET/SET_ONE_REG
+interface. These registers can be saved/restored by userspace, and set
+to a convenient value if required.
+
+The following register is defined:
+
+* KVM_REG_ARM_PSCI_VERSION:
+
+  - Only valid if the vcpu has the KVM_ARM_VCPU_PSCI_0_2 feature set
+    (and thus has already been initialized)
+  - Returns the current PSCI version on GET_ONE_REG (defaulting to the
+    highest PSCI version implemented by KVM and compatible with v0.2)
+  - Allows any PSCI version implemented by KVM and compatible with
+    v0.2 to be set with SET_ONE_REG
+  - Affects the whole VM (even if the register view is per-vcpu)
MAINTAINERS (+10 -18)
···
 F:	drivers/media/dvb-frontends/af9033*
 
 AFFS FILE SYSTEM
+M:	David Sterba <dsterba@suse.com>
 L:	linux-fsdevel@vger.kernel.org
-S:	Orphan
+S:	Odd Fixes
 F:	Documentation/filesystems/affs.txt
 F:	fs/affs/
···
 M:	Laura Abbott <labbott@redhat.com>
 M:	Sumit Semwal <sumit.semwal@linaro.org>
 L:	devel@driverdev.osuosl.org
+L:	dri-devel@lists.freedesktop.org
+L:	linaro-mm-sig@lists.linaro.org (moderated for non-subscribers)
 S:	Supported
 F:	drivers/staging/android/ion
 F:	drivers/staging/android/uapi/ion.h
···
 ARM/ARTPEC MACHINE SUPPORT
 M:	Jesper Nilsson <jesper.nilsson@axis.com>
 M:	Lars Persson <lars.persson@axis.com>
-M:	Niklas Cassel <niklas.cassel@axis.com>
 S:	Maintained
 L:	linux-arm-kernel@axis.com
 F:	arch/arm/mach-artpec
···
 F:	include/uapi/linux/ipx.h
 F:	drivers/staging/ipx/
 
-IRDA SUBSYSTEM
-M:	Samuel Ortiz <samuel@sortiz.org>
-L:	irda-users@lists.sourceforge.net (subscribers-only)
-L:	netdev@vger.kernel.org
-W:	http://irda.sourceforge.net/
-S:	Obsolete
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/sameo/irda-2.6.git
-F:	Documentation/networking/irda.txt
-F:	drivers/staging/irda/
-
 IRQ DOMAINS (IRQ NUMBER MAPPING LIBRARY)
 M:	Marc Zyngier <marc.zyngier@arm.com>
 S:	Maintained
···
 F:	arch/x86/kvm/svm.c
 
 KERNEL VIRTUAL MACHINE FOR ARM (KVM/arm)
-M:	Christoffer Dall <christoffer.dall@linaro.org>
+M:	Christoffer Dall <christoffer.dall@arm.com>
 M:	Marc Zyngier <marc.zyngier@arm.com>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 L:	kvmarm@lists.cs.columbia.edu
···
 F:	include/kvm/arm_*
 
 KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)
-M:	Christoffer Dall <christoffer.dall@linaro.org>
+M:	Christoffer Dall <christoffer.dall@arm.com>
 M:	Marc Zyngier <marc.zyngier@arm.com>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 L:	kvmarm@lists.cs.columbia.edu
···
 F:	net/core/drop_monitor.c
 
 NETWORKING DRIVERS
+M:	"David S. Miller" <davem@davemloft.net>
 L:	netdev@vger.kernel.org
 W:	http://www.linuxfoundation.org/en/Net
 Q:	http://patchwork.ozlabs.org/project/netdev/list/
···
 F:	drivers/pci/dwc/
 
 PCIE DRIVER FOR AXIS ARTPEC
-M:	Niklas Cassel <niklas.cassel@axis.com>
 M:	Jesper Nilsson <jesper.nilsson@axis.com>
 L:	linux-arm-kernel@axis.com
 L:	linux-pci@vger.kernel.org
···
 SCTP PROTOCOL
 M:	Vlad Yasevich <vyasevich@gmail.com>
 M:	Neil Horman <nhorman@tuxdriver.com>
+M:	Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
 L:	linux-sctp@vger.kernel.org
 W:	http://lksctp.sourceforge.net
 S:	Maintained
···
 F:	drivers/iommu/tegra*
 
 TEGRA KBC DRIVER
-M:	Rakesh Iyer <riyer@nvidia.com>
 M:	Laxman Dewangan <ldewangan@nvidia.com>
 S:	Supported
 F:	drivers/input/keyboard/tegra-kbc.c
···
 M:	Andreas Noever <andreas.noever@gmail.com>
 M:	Michael Jamet <michael.jamet@intel.com>
 M:	Mika Westerberg <mika.westerberg@linux.intel.com>
-M:	Yehezkel Bernat <yehezkel.bernat@intel.com>
+M:	Yehezkel Bernat <YehezkelShB@gmail.com>
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/westeri/thunderbolt.git
 S:	Maintained
 F:	Documentation/admin-guide/thunderbolt.rst
···
 THUNDERBOLT NETWORK DRIVER
 M:	Michael Jamet <michael.jamet@intel.com>
 M:	Mika Westerberg <mika.westerberg@linux.intel.com>
-M:	Yehezkel Bernat <yehezkel.bernat@intel.com>
+M:	Yehezkel Bernat <YehezkelShB@gmail.com>
 L:	netdev@vger.kernel.org
 S:	Maintained
 F:	drivers/net/thunderbolt.c
···
 # CONFIG_LOCALVERSION_AUTO is not set
 CONFIG_SYSVIPC=y
 CONFIG_NO_HZ_IDLE=y
+CONFIG_HIGH_RES_TIMERS=y
 CONFIG_BSD_PROCESS_ACCT=y
 CONFIG_USER_NS=y
 CONFIG_RELAY=y
···
 CONFIG_PCI=y
 CONFIG_PREEMPT=y
 CONFIG_AEABI=y
+CONFIG_HIGHMEM=y
+CONFIG_CMA=y
 CONFIG_CMDLINE="console=ttyS0,115200n8"
 CONFIG_KEXEC=y
 CONFIG_BINFMT_MISC=y
 CONFIG_PM=y
+CONFIG_NET=y
+CONFIG_UNIX=y
+CONFIG_INET=y
 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 CONFIG_DEVTMPFS=y
 CONFIG_MTD=y
 CONFIG_MTD_BLOCK=y
 CONFIG_MTD_CFI=y
+CONFIG_MTD_JEDECPROBE=y
 CONFIG_MTD_CFI_INTELEXT=y
 CONFIG_MTD_CFI_AMDSTD=y
 CONFIG_MTD_CFI_STAA=y
···
 # CONFIG_SCSI_LOWLEVEL is not set
 CONFIG_ATA=y
 CONFIG_PATA_FTIDE010=y
+CONFIG_NETDEVICES=y
+CONFIG_GEMINI_ETHERNET=y
+CONFIG_MDIO_BITBANG=y
+CONFIG_MDIO_GPIO=y
+CONFIG_REALTEK_PHY=y
 CONFIG_INPUT_EVDEV=y
 CONFIG_KEYBOARD_GPIO=y
 # CONFIG_INPUT_MOUSE is not set
···
 CONFIG_SERIAL_8250_RUNTIME_UARTS=1
 CONFIG_SERIAL_OF_PLATFORM=y
 # CONFIG_HW_RANDOM is not set
-# CONFIG_HWMON is not set
+CONFIG_I2C_GPIO=y
+CONFIG_SPI=y
+CONFIG_SPI_GPIO=y
+CONFIG_SENSORS_GPIO_FAN=y
+CONFIG_SENSORS_LM75=y
+CONFIG_THERMAL=y
 CONFIG_WATCHDOG=y
-CONFIG_GEMINI_WATCHDOG=y
+CONFIG_REGULATOR=y
+CONFIG_REGULATOR_FIXED_VOLTAGE=y
+CONFIG_DRM=y
+CONFIG_DRM_PANEL_ILITEK_IL9322=y
+CONFIG_DRM_TVE200=y
+CONFIG_LOGO=y
 CONFIG_USB=y
 CONFIG_USB_MON=y
 CONFIG_USB_FOTG210_HCD=y
···
 CONFIG_LEDS_CLASS=y
 CONFIG_LEDS_GPIO=y
 CONFIG_LEDS_TRIGGERS=y
+CONFIG_LEDS_TRIGGER_DISK=y
 CONFIG_LEDS_TRIGGER_HEARTBEAT=y
 CONFIG_RTC_CLASS=y
 CONFIG_DMADEVICES=y
arch/arm/configs/socfpga_defconfig (+1)
···
 CONFIG_MTD_NAND=y
 CONFIG_MTD_NAND_DENALI_DT=y
 CONFIG_MTD_SPI_NOR=y
+# CONFIG_MTD_SPI_NOR_USE_4K_SECTORS is not set
 CONFIG_SPI_CADENCE_QUADSPI=y
 CONFIG_OF_OVERLAY=y
 CONFIG_OF_CONFIGFS=y
arch/arm/include/asm/kvm_host.h (+3)
···
 	/* Interrupt controller */
 	struct vgic_dist vgic;
 	int max_vcpus;
+
+	/* Mandated version of PSCI */
+	u32 psci_version;
 };
 
 #define KVM_NR_MEM_OBJS 40
···
 	pinctrl-0 = <&uart_ao_a_pins>;
 	pinctrl-names = "default";
 };
+
+&usb0 {
+	status = "okay";
+};
+
+&usb2_phy0 {
+	/*
+	 * even though the schematics don't show it:
+	 * HDMI_5V is also used as supply for the USB VBUS.
+	 */
+	phy-supply = <&hdmi_5v>;
+};
···
 	}
 }
 
-extern void __sync_icache_dcache(pte_t pteval, unsigned long addr);
+extern void __sync_icache_dcache(pte_t pteval);
 
 /*
  * PTE bits configuration in the presence of hardware Dirty Bit Management
···
 	pte_t old_pte;
 
 	if (pte_present(pte) && pte_user_exec(pte) && !pte_special(pte))
-		__sync_icache_dcache(pte, addr);
+		__sync_icache_dcache(pte);
 
 	/*
 	 * If the existing pte is valid, check for potential race with
···
 		insn &= ~BIT(31);
 	} else {
 		/* out of range for ADR -> emit a veneer */
-		val = module_emit_adrp_veneer(mod, place, val & ~0xfff);
+		val = module_emit_veneer_for_adrp(mod, place, val & ~0xfff);
 		if (!val)
 			return -ENOEXEC;
 		insn = aarch64_insn_gen_branch_imm((u64)place, val,
arch/arm64/kernel/ptrace.c (+10 -10)
···
 #include <linux/sched/signal.h>
 #include <linux/sched/task_stack.h>
 #include <linux/mm.h>
+#include <linux/nospec.h>
 #include <linux/smp.h>
 #include <linux/ptrace.h>
 #include <linux/user.h>
···
 
 	switch (note_type) {
 	case NT_ARM_HW_BREAK:
-		if (idx < ARM_MAX_BRP)
-			bp = tsk->thread.debug.hbp_break[idx];
+		if (idx >= ARM_MAX_BRP)
+			goto out;
+		idx = array_index_nospec(idx, ARM_MAX_BRP);
+		bp = tsk->thread.debug.hbp_break[idx];
 		break;
 	case NT_ARM_HW_WATCH:
-		if (idx < ARM_MAX_WRP)
-			bp = tsk->thread.debug.hbp_watch[idx];
+		if (idx >= ARM_MAX_WRP)
+			goto out;
+		idx = array_index_nospec(idx, ARM_MAX_WRP);
+		bp = tsk->thread.debug.hbp_watch[idx];
 		break;
 	}
 
+out:
 	return bp;
 }
···
 {
 	int ret;
 	u32 kdata;
-	mm_segment_t old_fs = get_fs();
 
-	set_fs(KERNEL_DS);
 	/* Watchpoint */
 	if (num < 0) {
 		ret = compat_ptrace_hbp_get(NT_ARM_HW_WATCH, tsk, num, &kdata);
···
 	} else {
 		ret = compat_ptrace_hbp_get(NT_ARM_HW_BREAK, tsk, num, &kdata);
 	}
-	set_fs(old_fs);
 
 	if (!ret)
 		ret = put_user(kdata, data);
···
 {
 	int ret;
 	u32 kdata = 0;
-	mm_segment_t old_fs = get_fs();
 
 	if (num == 0)
 		return 0;
···
 	if (ret)
 		return ret;
 
-	set_fs(KERNEL_DS);
 	if (num < 0)
 		ret = compat_ptrace_hbp_set(NT_ARM_HW_WATCH, tsk, num, &kdata);
 	else
 		ret = compat_ptrace_hbp_set(NT_ARM_HW_BREAK, tsk, num, &kdata);
-	set_fs(old_fs);
 
 	return ret;
 }
arch/arm64/kernel/traps.c (+2 -1)
···
 	 * If we were single stepping, we want to get the step exception after
 	 * we return from the trap.
 	 */
-	user_fastforward_single_step(current);
+	if (user_mode(regs))
+		user_fastforward_single_step(current);
 }
 
 static LIST_HEAD(undef_hook);
arch/arm64/kvm/guest.c (+13 -1)
···
 #include <linux/module.h>
 #include <linux/vmalloc.h>
 #include <linux/fs.h>
+#include <kvm/arm_psci.h>
 #include <asm/cputype.h>
 #include <linux/uaccess.h>
 #include <asm/kvm.h>
···
 unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu)
 {
 	return num_core_regs() + kvm_arm_num_sys_reg_descs(vcpu)
-		+ NUM_TIMER_REGS;
+		+ kvm_arm_get_fw_num_regs(vcpu) + NUM_TIMER_REGS;
 }
 
 /**
···
 		uindices++;
 	}
 
+	ret = kvm_arm_copy_fw_reg_indices(vcpu, uindices);
+	if (ret)
+		return ret;
+	uindices += kvm_arm_get_fw_num_regs(vcpu);
+
 	ret = copy_timer_indices(vcpu, uindices);
 	if (ret)
 		return ret;
···
 	if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_CORE)
 		return get_core_reg(vcpu, reg);
 
+	if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_FW)
+		return kvm_arm_get_fw_reg(vcpu, reg);
+
 	if (is_timer_reg(reg->id))
 		return get_timer_reg(vcpu, reg);
···
 	/* Register group 16 means we set a core register. */
 	if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_CORE)
 		return set_core_reg(vcpu, reg);
+
+	if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_FW)
+		return kvm_arm_set_fw_reg(vcpu, reg);
 
 	if (is_timer_reg(reg->id))
 		return set_timer_reg(vcpu, reg);
arch/arm64/kvm/sys_regs.c (+2 -4)
···
 
 	if (id == SYS_ID_AA64PFR0_EL1) {
 		if (val & (0xfUL << ID_AA64PFR0_SVE_SHIFT))
-			pr_err_once("kvm [%i]: SVE unsupported for guests, suppressing\n",
-				    task_pid_nr(current));
+			kvm_debug("SVE unsupported for guests, suppressing\n");
 
 		val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT);
 	} else if (id == SYS_ID_AA64MMFR1_EL1) {
 		if (val & (0xfUL << ID_AA64MMFR1_LOR_SHIFT))
-			pr_err_once("kvm [%i]: LORegions unsupported for guests, suppressing\n",
-				    task_pid_nr(current));
+			kvm_debug("LORegions unsupported for guests, suppressing\n");
 
 		val &= ~(0xfUL << ID_AA64MMFR1_LOR_SHIFT);
 	}
···
  * Checks all the children of @parent for a matching @id. If none
  * found, it allocates a new device and returns it.
  */
-static struct parisc_device * alloc_tree_node(struct device *parent, char id)
+static struct parisc_device * __init alloc_tree_node(
+			struct device *parent, char id)
 {
 	struct match_id_data d = {
 		.id = id,
···
  * devices which are not physically connected (such as extra serial &
  * keyboard ports). This problem is not yet solved.
  */
-static void walk_native_bus(unsigned long io_io_low, unsigned long io_io_high,
-			    struct device *parent)
+static void __init walk_native_bus(unsigned long io_io_low,
+	unsigned long io_io_high, struct device *parent)
 {
 	int i, devices_found = 0;
 	unsigned long hpa = io_io_low;
arch/parisc/kernel/pci.c (+1 -1)
···
  * pcibios_init_bridge() initializes cache line and default latency
  * for pci controllers and pci-pci bridges
  */
-void __init pcibios_init_bridge(struct pci_dev *dev)
+void __ref pcibios_init_bridge(struct pci_dev *dev)
 {
 	unsigned short bridge_ctl, bridge_ctl_new;
 
···
 	if (pdc_instr(&instr) == PDC_OK)
 		ivap[0] = instr;
 
+	/*
+	 * Rules for the checksum of the HPMC handler:
+	 * 1. The IVA does not point to PDC/PDH space (ie: the OS has installed
+	 *    its own IVA).
+	 * 2. The word at IVA + 32 is nonzero.
+	 * 3. If Length (IVA + 60) is not zero, then Length (IVA + 60) and
+	 *    Address (IVA + 56) are word-aligned.
+	 * 4. The checksum of the 8 words starting at IVA + 32 plus the sum of
+	 *    the Length/4 words starting at Address is zero.
+	 */
+
 	/* Compute Checksum for HPMC handler */
 	length = os_hpmc_size;
 	ivap[7] = length;
arch/parisc/mm/init.c (+1 -1)
···
 	}
 }
 
-void free_initmem(void)
+void __ref free_initmem(void)
 {
 	unsigned long init_begin = (unsigned long)__init_begin;
 	unsigned long init_end = (unsigned long)__init_end;
···
 #endif
 
 #ifdef CONFIG_NMI_IPI
-static void stop_this_cpu(struct pt_regs *regs)
-#else
+static void nmi_stop_this_cpu(struct pt_regs *regs)
+{
+	/*
+	 * This is a special case because it never returns, so the NMI IPI
+	 * handling would never mark it as done, which makes any later
+	 * smp_send_nmi_ipi() call spin forever. Mark it done now.
+	 *
+	 * IRQs are already hard disabled by the smp_handle_nmi_ipi.
+	 */
+	nmi_ipi_lock();
+	nmi_ipi_busy_count--;
+	nmi_ipi_unlock();
+
+	/* Remove this CPU */
+	set_cpu_online(smp_processor_id(), false);
+
+	spin_begin();
+	while (1)
+		spin_cpu_relax();
+}
+
+void smp_send_stop(void)
+{
+	smp_send_nmi_ipi(NMI_IPI_ALL_OTHERS, nmi_stop_this_cpu, 1000000);
+}
+
+#else /* CONFIG_NMI_IPI */
+
 static void stop_this_cpu(void *dummy)
-#endif
 {
 	/* Remove this CPU */
 	set_cpu_online(smp_processor_id(), false);
···
 
 void smp_send_stop(void)
 {
-#ifdef CONFIG_NMI_IPI
-	smp_send_nmi_ipi(NMI_IPI_ALL_OTHERS, stop_this_cpu, 1000000);
-#else
+	static bool stopped = false;
+
+	/*
+	 * Prevent waiting on csd lock from a previous smp_send_stop.
+	 * This is racy, but in general callers try to do the right
+	 * thing and only fire off one smp_send_stop (e.g., see
+	 * kernel/panic.c)
+	 */
+	if (stopped)
+		return;
+
+	stopped = true;
+
 	smp_call_function(stop_this_cpu, NULL, 0);
-#endif
 }
+#endif /* CONFIG_NMI_IPI */
 
 struct thread_info *current_set[NR_CPUS];
 
···
 #define npu_to_phb(x) container_of(x, struct pnv_phb, npu)
 
 /*
+ * spinlock to protect initialisation of an npu_context for a particular
+ * mm_struct.
+ */
+static DEFINE_SPINLOCK(npu_context_lock);
+
+/*
+ * When an address shootdown range exceeds this threshold we invalidate the
+ * entire TLB on the GPU for the given PID rather than each specific address in
+ * the range.
+ */
+#define ATSD_THRESHOLD (2*1024*1024)
+
+/*
  * Other types of TCE cache invalidation are not functional in the
  * hardware.
  */
···
 	bool nmmu_flush;
 
 	/* Callback to stop translation requests on a given GPU */
-	struct npu_context *(*release_cb)(struct npu_context *, void *);
+	void (*release_cb)(struct npu_context *context, void *priv);
 
 	/*
 	 * Private pointer passed to the above callback for usage by
···
 	struct npu_context *npu_context = mn_to_npu_context(mn);
 	unsigned long address;
 
-	for (address = start; address < end; address += PAGE_SIZE)
-		mmio_invalidate(npu_context, 1, address, false);
+	if (end - start > ATSD_THRESHOLD) {
+		/*
+		 * Just invalidate the entire PID if the address range is too
+		 * large.
+		 */
+		mmio_invalidate(npu_context, 0, 0, true);
+	} else {
+		for (address = start; address < end; address += PAGE_SIZE)
+			mmio_invalidate(npu_context, 1, address, false);
 
-	/* Do the flush only on the final addess == end */
-	mmio_invalidate(npu_context, 1, address, true);
+		/* Do the flush only on the final addess == end */
+		mmio_invalidate(npu_context, 1, address, true);
+	}
 }
 
 static const struct mmu_notifier_ops nv_nmmu_notifier_ops = {
···
  * Returns an error if there no contexts are currently available or a
  * npu_context which should be passed to pnv_npu2_handle_fault().
  *
- * mmap_sem must be held in write mode.
+ * mmap_sem must be held in write mode and must not be called from interrupt
+ * context.
  */
 struct npu_context *pnv_npu2_init_context(struct pci_dev *gpdev,
 			unsigned long flags,
-			struct npu_context *(*cb)(struct npu_context *, void *),
+			void (*cb)(struct npu_context *, void *),
 			void *priv)
 {
 	int rc;
···
 	/*
 	 * Setup the NPU context table for a particular GPU. These need to be
 	 * per-GPU as we need the tables to filter ATSDs when there are no
-	 * active contexts on a particular GPU.
+	 * active contexts on a particular GPU. It is safe for these to be
+	 * called concurrently with destroy as the OPAL call takes appropriate
+	 * locks and refcounts on init/destroy.
 	 */
 	rc = opal_npu_init_context(nphb->opal_id, mm->context.id, flags,
 			PCI_DEVID(gpdev->bus->number, gpdev->devfn));
···
 	 * We store the npu pci device so we can more easily get at the
 	 * associated npus.
 	 */
+	spin_lock(&npu_context_lock);
 	npu_context = mm->context.npu_context;
+	if (npu_context) {
+		if (npu_context->release_cb != cb ||
+			npu_context->priv != priv) {
+			spin_unlock(&npu_context_lock);
+			opal_npu_destroy_context(nphb->opal_id, mm->context.id,
+						PCI_DEVID(gpdev->bus->number,
+							gpdev->devfn));
+			return ERR_PTR(-EINVAL);
+		}
+
+		WARN_ON(!kref_get_unless_zero(&npu_context->kref));
+	}
+	spin_unlock(&npu_context_lock);
+
 	if (!npu_context) {
+		/*
+		 * We can set up these fields without holding the
+		 * npu_context_lock as the npu_context hasn't been returned to
+		 * the caller meaning it can't be destroyed. Parallel allocation
+		 * is protected against by mmap_sem.
+		 */
 		rc = -ENOMEM;
 		npu_context = kzalloc(sizeof(struct npu_context), GFP_KERNEL);
 		if (npu_context) {
···
 		}
 
 		mm->context.npu_context = npu_context;
-	} else {
-		WARN_ON(!kref_get_unless_zero(&npu_context->kref));
 	}
 
 	npu_context->release_cb = cb;
···
 	mm_context_remove_copro(npu_context->mm);
 
 	npu_context->mm->context.npu_context = NULL;
-	mmu_notifier_unregister(&npu_context->mn,
-				npu_context->mm);
-
-	kfree(npu_context);
 }
 
+/*
+ * Destroy a context on the given GPU. May free the npu_context if it is no
+ * longer active on any GPUs. Must not be called from interrupt context.
+ */
 void pnv_npu2_destroy_context(struct npu_context *npu_context,
 			struct pci_dev *gpdev)
 {
+	int removed;
 	struct pnv_phb *nphb;
 	struct npu *npu;
 	struct pci_dev *npdev = pnv_pci_get_npu_dev(gpdev, 0);
···
 	WRITE_ONCE(npu_context->npdev[npu->index][nvlink_index], NULL);
 	opal_npu_destroy_context(nphb->opal_id, npu_context->mm->context.id,
 				PCI_DEVID(gpdev->bus->number, gpdev->devfn));
-	kref_put(&npu_context->kref, pnv_npu2_release_context);
+	spin_lock(&npu_context_lock);
+	removed = kref_put(&npu_context->kref, pnv_npu2_release_context);
+	spin_unlock(&npu_context_lock);
+
+	/*
+	 * We need to do this outside of pnv_npu2_release_context so that it is
+	 * outside the spinlock as mmu_notifier_destroy uses SRCU.
+	 */
+	if (removed) {
+		mmu_notifier_unregister(&npu_context->mn,
+					npu_context->mm);
+
+		kfree(npu_context);
+	}
+
 }
 EXPORT_SYMBOL(pnv_npu2_destroy_context);
 
+5-3
arch/powerpc/platforms/powernv/opal-rtc.c
···48484949 while (rc == OPAL_BUSY || rc == OPAL_BUSY_EVENT) {5050 rc = opal_rtc_read(&__y_m_d, &__h_m_s_ms);5151- if (rc == OPAL_BUSY_EVENT)5151+ if (rc == OPAL_BUSY_EVENT) {5252+ mdelay(OPAL_BUSY_DELAY_MS);5253 opal_poll_events(NULL);5353- else if (rc == OPAL_BUSY)5454- mdelay(10);5454+ } else if (rc == OPAL_BUSY) {5555+ mdelay(OPAL_BUSY_DELAY_MS);5656+ }5557 }5658 if (rc != OPAL_SUCCESS)5759 return 0;
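The opal-rtc change makes both the OPAL_BUSY and OPAL_BUSY_EVENT paths sleep for OPAL_BUSY_DELAY_MS before retrying instead of spinning. The retry shape can be sketched generically; the firmware call and the delay are simulated here, so the names and counts are illustrative only:

```c
#include <assert.h>

#define BUSY       1
#define BUSY_EVENT 2
#define SUCCESS    0

/* Simulated firmware call: reports busy a fixed number of times. */
static int busy_left;

static int fake_opal_call(void)
{
    return busy_left-- > 0 ? BUSY : SUCCESS;
}

/* Retry until the call stops reporting busy; delays are counted
 * rather than slept, standing in for mdelay(OPAL_BUSY_DELAY_MS). */
static int call_with_retry(int *delays)
{
    int rc = fake_opal_call();

    while (rc == BUSY || rc == BUSY_EVENT) {
        (*delays)++;
        rc = fake_opal_call();
    }
    return rc;
}
```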
+1-1
arch/sparc/include/uapi/asm/oradax.h
···33 *44 * This program is free software: you can redistribute it and/or modify55 * it under the terms of the GNU General Public License as published by66- * the Free Software Foundation, either version 3 of the License, or66+ * the Free Software Foundation, either version 2 of the License, or77 * (at your option) any later version.88 *99 * This program is distributed in the hope that it will be useful,
+1-1
arch/sparc/kernel/vio.c
···403403 if (err) {404404 printk(KERN_ERR "VIO: Could not register device %s, err=%d\n",405405 dev_name(&vdev->dev), err);406406- kfree(vdev);406406+ put_device(&vdev->dev);407407 return NULL;408408 }409409 if (vdev->dp)
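The vio.c fix replaces kfree(vdev) with put_device(&vdev->dev): once a device has been initialized, its memory must be released through the refcount's release callback, never freed directly, or other reference holders are left with a dangling pointer. A hedged sketch of the pattern (illustrative names, not the driver-core API):

```c
#include <assert.h>
#include <stdlib.h>

static int releases;

struct dev {
    int refcount;
    void (*release)(struct dev *);
};

static void dev_release(struct dev *d)
{
    releases++;
    free(d);
}

static struct dev *dev_create(void)
{
    struct dev *d = malloc(sizeof(*d));

    d->refcount = 1;          /* initialization takes the first reference */
    d->release = dev_release;
    return d;
}

/* The only correct way to dispose of an initialized device:
 * drop the reference and let the release callback free it. */
static void dev_put(struct dev *d)
{
    if (--d->refcount == 0)
        d->release(d);
}
```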
···3339333933403340 cpuc->lbr_sel = NULL;3341334133423342- flip_smm_bit(&x86_pmu.attr_freeze_on_smi);33423342+ if (x86_pmu.version > 1)33433343+ flip_smm_bit(&x86_pmu.attr_freeze_on_smi);3343334433443345 if (!cpuc->shared_regs)33453346 return;···35033502 .cpu_dying = intel_pmu_cpu_dying,35043503};3505350435053505+static struct attribute *intel_pmu_attrs[];35063506+35063507static __initconst const struct x86_pmu intel_pmu = {35073508 .name = "Intel",35083509 .handle_irq = intel_pmu_handle_irq,···3535353235363533 .format_attrs = intel_arch3_formats_attr,35373534 .events_sysfs_show = intel_event_sysfs_show,35353535+35363536+ .attrs = intel_pmu_attrs,3538353735393538 .cpu_prepare = intel_pmu_cpu_prepare,35403539 .cpu_starting = intel_pmu_cpu_starting,···3916391139173912 x86_pmu.max_pebs_events = min_t(unsigned, MAX_PEBS_EVENTS, x86_pmu.num_counters);3918391339193919-39203920- x86_pmu.attrs = intel_pmu_attrs;39213914 /*39223915 * Quirk: v2 perfmon does not report fixed-purpose events, so39233916 * assume at least 3 events, when not running in a hypervisor:
+1
arch/x86/include/asm/cpufeatures.h
···320320#define X86_FEATURE_AVX512_VPOPCNTDQ (16*32+14) /* POPCNT for vectors of DW/QW */321321#define X86_FEATURE_LA57 (16*32+16) /* 5-level page tables */322322#define X86_FEATURE_RDPID (16*32+22) /* RDPID instruction */323323+#define X86_FEATURE_CLDEMOTE (16*32+25) /* CLDEMOTE instruction */323324324325/* AMD-defined CPU features, CPUID level 0x80000007 (EBX), word 17 */325326#define X86_FEATURE_OVERFLOW_RECOV (17*32+ 0) /* MCA overflow recovery support */
-7
arch/x86/include/asm/irq_vectors.h
···3434 * (0x80 is the syscall vector, 0x30-0x3f are for ISA)3535 */3636#define FIRST_EXTERNAL_VECTOR 0x203737-/*3838- * We start allocating at 0x21 to spread out vectors evenly between3939- * priority levels. (0x80 is the syscall vector)4040- */4141-#define VECTOR_OFFSET_START 142374338/*4439 * Reserve the lowest usable vector (and hence lowest priority) 0x20 for···113118#else114119#define FIRST_SYSTEM_VECTOR NR_VECTORS115120#endif116116-117117-#define FPU_IRQ 13118121119122/*120123 * Size the maximum number of interrupts.
···11+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */22+#ifndef __ASM_X64_MSGBUF_H33+#define __ASM_X64_MSGBUF_H44+55+#if !defined(__x86_64__) || !defined(__ILP32__)16#include <asm-generic/msgbuf.h>77+#else88+/*99+ * The msqid64_ds structure for x86 architecture with x32 ABI.1010+ *1111+ * On x86-32 and x86-64 we can just use the generic definition, but1212+ * x32 uses the same binary layout as x86_64, which is different1313+ * from other 32-bit architectures.1414+ */1515+1616+struct msqid64_ds {1717+ struct ipc64_perm msg_perm;1818+ __kernel_time_t msg_stime; /* last msgsnd time */1919+ __kernel_time_t msg_rtime; /* last msgrcv time */2020+ __kernel_time_t msg_ctime; /* last change time */2121+ __kernel_ulong_t msg_cbytes; /* current number of bytes on queue */2222+ __kernel_ulong_t msg_qnum; /* number of messages in queue */2323+ __kernel_ulong_t msg_qbytes; /* max number of bytes on queue */2424+ __kernel_pid_t msg_lspid; /* pid of last msgsnd */2525+ __kernel_pid_t msg_lrpid; /* last receive pid */2626+ __kernel_ulong_t __unused4;2727+ __kernel_ulong_t __unused5;2828+};2929+3030+#endif3131+3232+#endif /* __ASM_X64_MSGBUF_H */
+42
arch/x86/include/uapi/asm/shmbuf.h
···11+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */22+#ifndef __ASM_X86_SHMBUF_H33+#define __ASM_X86_SHMBUF_H44+55+#if !defined(__x86_64__) || !defined(__ILP32__)16#include <asm-generic/shmbuf.h>77+#else88+/*99+ * The shmid64_ds structure for x86 architecture with x32 ABI.1010+ *1111+ * On x86-32 and x86-64 we can just use the generic definition, but1212+ * x32 uses the same binary layout as x86_64, which is different1313+ * from other 32-bit architectures.1414+ */1515+1616+struct shmid64_ds {1717+ struct ipc64_perm shm_perm; /* operation perms */1818+ size_t shm_segsz; /* size of segment (bytes) */1919+ __kernel_time_t shm_atime; /* last attach time */2020+ __kernel_time_t shm_dtime; /* last detach time */2121+ __kernel_time_t shm_ctime; /* last change time */2222+ __kernel_pid_t shm_cpid; /* pid of creator */2323+ __kernel_pid_t shm_lpid; /* pid of last operator */2424+ __kernel_ulong_t shm_nattch; /* no. of current attaches */2525+ __kernel_ulong_t __unused4;2626+ __kernel_ulong_t __unused5;2727+};2828+2929+struct shminfo64 {3030+ __kernel_ulong_t shmmax;3131+ __kernel_ulong_t shmmin;3232+ __kernel_ulong_t shmmni;3333+ __kernel_ulong_t shmseg;3434+ __kernel_ulong_t shmall;3535+ __kernel_ulong_t __unused1;3636+ __kernel_ulong_t __unused2;3737+ __kernel_ulong_t __unused3;3838+ __kernel_ulong_t __unused4;3939+};4040+4141+#endif4242+4343+#endif /* __ASM_X86_SHMBUF_H */
···5050#include <linux/init_ohci1394_dma.h>5151#include <linux/kvm_para.h>5252#include <linux/dma-contiguous.h>5353+#include <xen/xen.h>53545455#include <linux/errno.h>5556#include <linux/kernel.h>···533532 if (ret != 0 || crash_size <= 0)534533 return;535534 high = true;535535+ }536536+537537+ if (xen_pv_domain()) {538538+ pr_info("Ignoring crashkernel for a Xen PV domain\n");539539+ return;536540 }537541538542 /* 0 means: find the address automatically */
+2
arch/x86/kernel/smpboot.c
···15711571 void *mwait_ptr;15721572 int i;1573157315741574+ if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD)15751575+ return;15741576 if (!this_cpu_has(X86_FEATURE_MWAIT))15751577 return;15761578 if (!this_cpu_has(X86_FEATURE_CLFLUSH))
+4-10
arch/x86/kvm/vmx.c
···45444544 __vmx_flush_tlb(vcpu, to_vmx(vcpu)->vpid, invalidate_gpa);45454545}4546454645474547-static void vmx_flush_tlb_ept_only(struct kvm_vcpu *vcpu)45484548-{45494549- if (enable_ept)45504550- vmx_flush_tlb(vcpu, true);45514551-}45524552-45534547static void vmx_decache_cr0_guest_bits(struct kvm_vcpu *vcpu)45544548{45554549 ulong cr0_guest_owned_bits = vcpu->arch.cr0_guest_owned_bits;···92729278 } else {92739279 sec_exec_control &= ~SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE;92749280 sec_exec_control |= SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;92759275- vmx_flush_tlb_ept_only(vcpu);92819281+ vmx_flush_tlb(vcpu, true);92769282 }92779283 vmcs_write32(SECONDARY_VM_EXEC_CONTROL, sec_exec_control);92789284···93009306 !nested_cpu_has2(get_vmcs12(&vmx->vcpu),93019307 SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES)) {93029308 vmcs_write64(APIC_ACCESS_ADDR, hpa);93039303- vmx_flush_tlb_ept_only(vcpu);93099309+ vmx_flush_tlb(vcpu, true);93049310 }93059311}93069312···1121411220 }1121511221 } else if (nested_cpu_has2(vmcs12,1121611222 SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES)) {1121711217- vmx_flush_tlb_ept_only(vcpu);1122311223+ vmx_flush_tlb(vcpu, true);1121811224 }11219112251122011226 /*···1206712073 } else if (!nested_cpu_has_ept(vmcs12) &&1206812074 nested_cpu_has2(vmcs12,1206912075 SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES)) {1207012070- vmx_flush_tlb_ept_only(vcpu);1207612076+ vmx_flush_tlb(vcpu, true);1207112077 }12072120781207312079 /* This is needed for same reason as it was needed in prepare_vmcs02 */
···9393static inline void split_page_count(int level) { }9494#endif95959696+static inline int9797+within(unsigned long addr, unsigned long start, unsigned long end)9898+{9999+ return addr >= start && addr < end;100100+}101101+102102+static inline int103103+within_inclusive(unsigned long addr, unsigned long start, unsigned long end)104104+{105105+ return addr >= start && addr <= end;106106+}107107+96108#ifdef CONFIG_X86_649710998110static inline unsigned long highmap_start_pfn(void)···118106 return __pa_symbol(roundup(_brk_end, PMD_SIZE) - 1) >> PAGE_SHIFT;119107}120108109109+static bool __cpa_pfn_in_highmap(unsigned long pfn)110110+{111111+ /*112112+ * Kernel text has an alias mapping at a high address, known113113+ * here as "highmap".114114+ */115115+ return within_inclusive(pfn, highmap_start_pfn(), highmap_end_pfn());116116+}117117+118118+#else119119+120120+static bool __cpa_pfn_in_highmap(unsigned long pfn)121121+{122122+ /* There is no highmap on 32-bit */123123+ return false;124124+}125125+121126#endif122122-123123-static inline int124124-within(unsigned long addr, unsigned long start, unsigned long end)125125-{126126- return addr >= start && addr < end;127127-}128128-129129-static inline int130130-within_inclusive(unsigned long addr, unsigned long start, unsigned long end)131131-{132132- return addr >= start && addr <= end;133133-}134127135128/*136129 * Flushing functions···189172190173static void cpa_flush_all(unsigned long cache)191174{192192- BUG_ON(irqs_disabled());175175+ BUG_ON(irqs_disabled() && !early_boot_irqs_disabled);193176194177 on_each_cpu(__cpa_flush_all, (void *) cache, 1);195178}···253236 unsigned long do_wbinvd = cache && numpages >= 1024; /* 4M threshold */254237#endif255238256256- BUG_ON(irqs_disabled());239239+ BUG_ON(irqs_disabled() && !early_boot_irqs_disabled);257240258241 on_each_cpu(__cpa_flush_all, (void *) do_wbinvd, 1);259242···12001183 cpa->numpages = 1;12011184 cpa->pfn = __pa(vaddr) >> PAGE_SHIFT;12021185 return 
0;11861186+11871187+ } else if (__cpa_pfn_in_highmap(cpa->pfn)) {11881188+ /* Faults in the highmap are OK, so do not warn: */11891189+ return -EFAULT;12031190 } else {12041191 WARN(1, KERN_WARNING "CPA: called for zero pte. "12051192 "vaddr = %lx cpa->vaddr = %lx\n", vaddr,···13561335 * to touch the high mapped kernel as well:13571336 */13581337 if (!within(vaddr, (unsigned long)_text, _brk_end) &&13591359- within_inclusive(cpa->pfn, highmap_start_pfn(),13601360- highmap_end_pfn())) {13381338+ __cpa_pfn_in_highmap(cpa->pfn)) {13611339 unsigned long temp_cpa_vaddr = (cpa->pfn << PAGE_SHIFT) +13621340 __START_KERNEL_map - phys_base;13631341 alias_cpa = *cpa;
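The pageattr.c refactor hoists within() and within_inclusive() above the CONFIG_X86_64 block so that __cpa_pfn_in_highmap() can use them, with a stub returning false on 32-bit. The two helpers differ only at the upper bound, which is easy to pin down with a quick check (the helpers ported verbatim to userspace):

```c
#include <assert.h>

/* Half-open range check: start <= addr < end. */
static inline int
within(unsigned long addr, unsigned long start, unsigned long end)
{
    return addr >= start && addr < end;
}

/* Closed range check: start <= addr <= end, used for PFN ranges whose
 * end is the last valid frame rather than one past it. */
static inline int
within_inclusive(unsigned long addr, unsigned long start, unsigned long end)
{
    return addr >= start && addr <= end;
}
```

This is why the highmap test uses within_inclusive(): highmap_end_pfn() names the last frame of the alias mapping, not one past it.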
+23-3
arch/x86/mm/pti.c
···421421 if (boot_cpu_has(X86_FEATURE_K8))422422 return false;423423424424+ /*425425+ * RANDSTRUCT derives its hardening benefits from the426426+ * attacker's lack of knowledge about the layout of kernel427427+ * data structures. Keep the kernel image non-global in428428+ * cases where RANDSTRUCT is in use to help keep the layout a429429+ * secret.430430+ */431431+ if (IS_ENABLED(CONFIG_GCC_PLUGIN_RANDSTRUCT))432432+ return false;433433+424434 return true;425435}426436···440430 */441431void pti_clone_kernel_text(void)442432{433433+ /*434434+ * rodata is part of the kernel image and is normally435435+ * readable on the filesystem or on the web. But, do not436436+ * clone the areas past rodata, they might contain secrets.437437+ */443438 unsigned long start = PFN_ALIGN(_text);444444- unsigned long end = ALIGN((unsigned long)_end, PMD_PAGE_SIZE);439439+ unsigned long end = (unsigned long)__end_rodata_hpage_align;445440446441 if (!pti_kernel_image_global_ok())447442 return;448443444444+ pr_debug("mapping partial kernel image into user address space\n");445445+446446+ /*447447+ * Note that this will undo _some_ of the work that448448+ * pti_set_kernel_image_nonglobal() did to clear the449449+ * global bit.450450+ */449451 pti_clone_pmds(start, end, _PAGE_RW);450452}451453···479457480458 if (pti_kernel_image_global_ok())481459 return;482482-483483- pr_debug("set kernel image non-global\n");484460485461 set_memory_nonglobal(start, (end - start) >> PAGE_SHIFT);486462}
+14-4
arch/x86/net/bpf_jit_comp.c
···10271027 break;1028102810291029 case BPF_JMP | BPF_JA:10301030- jmp_offset = addrs[i + insn->off] - addrs[i];10301030+ if (insn->off == -1)10311031+ /* -1 jmp instructions will always jump10321032+ * backwards two bytes. Explicitly handling10331033+ * this case avoids wasting too many passes10341034+ * when there are long sequences of replaced10351035+ * dead code.10361036+ */10371037+ jmp_offset = -2;10381038+ else10391039+ jmp_offset = addrs[i + insn->off] - addrs[i];10401040+10311041 if (!jmp_offset)10321042 /* optimize out nop jumps */10331043 break;···12361226 for (pass = 0; pass < 20 || image; pass++) {12371227 proglen = do_jit(prog, addrs, image, oldproglen, &ctx);12381228 if (proglen <= 0) {12291229+out_image:12391230 image = NULL;12401231 if (header)12411232 bpf_jit_binary_free(header);···12471236 if (proglen != oldproglen) {12481237 pr_err("bpf_jit: proglen=%d != oldproglen=%d\n",12491238 proglen, oldproglen);12501250- prog = orig_prog;12511251- goto out_addrs;12391239+ goto out_image;12521240 }12531241 break;12541242 }···12831273 prog = orig_prog;12841274 }1285127512861286- if (!prog->is_func || extra_pass) {12761276+ if (!image || !prog->is_func || extra_pass) {12871277out_addrs:12881278 kfree(addrs);12891279 kfree(jit_data);
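The JIT change special-cases BPF_JA with off == -1: a jump to itself always assembles to a two-byte backward jump, regardless of what addrs[] currently holds, which keeps image sizes stable across passes when dead code has been replaced by long runs of such jumps. The offset computation can be sketched as follows (the addrs[] values in the test are made up):

```c
#include <assert.h>

/* Offset of an unconditional jump, mirroring the logic above:
 * a self-jump (off == -1) is pinned at -2 bytes; everything else
 * is the distance between the resolved instruction addresses. */
static int ja_offset(const int *addrs, int i, int off)
{
    if (off == -1)
        return -2;
    return addrs[i + off] - addrs[i];
}
```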
···6969 struct device_attribute *attr, char *buf)7070{7171 struct amba_device *dev = to_amba_device(_dev);7272+ ssize_t len;72737373- if (!dev->driver_override)7474- return 0;7575-7676- return sprintf(buf, "%s\n", dev->driver_override);7474+ device_lock(_dev);7575+ len = sprintf(buf, "%s\n", dev->driver_override);7676+ device_unlock(_dev);7777+ return len;7778}78797980static ssize_t driver_override_store(struct device *_dev,···8281 const char *buf, size_t count)8382{8483 struct amba_device *dev = to_amba_device(_dev);8585- char *driver_override, *old = dev->driver_override, *cp;8484+ char *driver_override, *old, *cp;86858787- if (count > PATH_MAX)8686+ /* We need to keep extra room for a newline */8787+ if (count >= (PAGE_SIZE - 1))8888 return -EINVAL;89899090 driver_override = kstrndup(buf, count, GFP_KERNEL);···9694 if (cp)9795 *cp = '\0';98969797+ device_lock(_dev);9898+ old = dev->driver_override;9999 if (strlen(driver_override)) {100100 dev->driver_override = driver_override;101101 } else {102102 kfree(driver_override);103103 dev->driver_override = NULL;104104 }105105+ device_unlock(_dev);105106106107 kfree(old);107108
+8
drivers/android/binder.c
···28392839 else28402840 return_error = BR_DEAD_REPLY;28412841 mutex_unlock(&context->context_mgr_node_lock);28422842+ if (target_node && target_proc == proc) {28432843+ binder_user_error("%d:%d got transaction to context manager from process owning it\n",28442844+ proc->pid, thread->pid);28452845+ return_error = BR_FAILED_REPLY;28462846+ return_error_param = -EINVAL;28472847+ return_error_line = __LINE__;28482848+ goto err_invalid_target_handle;28492849+ }28422850 }28432851 if (!target_node) {28442852 /*
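The binder fix rejects transactions whose target node resolves back to the sending process, which would otherwise let the context manager transact with itself. Stripped of the binder machinery, it is a plain identity check on sender and target (sketch, not the binder API):

```c
#include <assert.h>
#include <errno.h>

struct proc { int pid; };

/* A transaction aimed at the sender's own process is invalid. */
static int check_target(const struct proc *sender, const struct proc *target)
{
    if (target == sender)
        return -EINVAL;
    return 0;
}
```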
+2-2
drivers/base/firmware_loader/fallback.c
···537537}538538539539/**540540- * fw_load_sysfs_fallback - load a firmware via the syfs fallback mechanism541541- * @fw_sysfs: firmware syfs information for the firmware to load540540+ * fw_load_sysfs_fallback - load a firmware via the sysfs fallback mechanism541541+ * @fw_sysfs: firmware sysfs information for the firmware to load542542 * @opt_flags: flags of options, FW_OPT_*543543 * @timeout: timeout to wait for the load544544 *
+1-1
drivers/base/firmware_loader/fallback.h
···66#include <linux/device.h>7788/**99- * struct firmware_fallback_config - firmware fallback configuratioon settings99+ * struct firmware_fallback_config - firmware fallback configuration settings1010 *1111 * Helps describe and fine tune the fallback mechanism.1212 *
+1
drivers/bus/Kconfig
···3333 bool "Support for ISA I/O space on HiSilicon Hip06/7"3434 depends on ARM64 && (ARCH_HISI || COMPILE_TEST)3535 select INDIRECT_PIO3636+ select MFD_CORE if ACPI3637 help3738 Driver to enable I/O access to devices attached to the Low Pin3839 Count bus on the HiSilicon Hip06/7 SoC.
+11-3
drivers/cpufreq/powernv-cpufreq.c
···679679680680 if (!spin_trylock(&gpstates->gpstate_lock))681681 return;682682+ /*683683+ * If the timer has migrated to the different cpu then bring684684+ * it back to one of the policy->cpus685685+ */686686+ if (!cpumask_test_cpu(raw_smp_processor_id(), policy->cpus)) {687687+ gpstates->timer.expires = jiffies + msecs_to_jiffies(1);688688+ add_timer_on(&gpstates->timer, cpumask_first(policy->cpus));689689+ spin_unlock(&gpstates->gpstate_lock);690690+ return;691691+ }682692683693 /*684694 * If PMCR was last updated was using fast_swtich then···728718 if (gpstate_idx != gpstates->last_lpstate_idx)729719 queue_gpstate_timer(gpstates);730720721721+ set_pstate(&freq_data);731722 spin_unlock(&gpstates->gpstate_lock);732732-733733- /* Timer may get migrated to a different cpu on cpu hot unplug */734734- smp_call_function_any(policy->cpus, set_pstate, &freq_data, 1);735723}736724737725/*
···66 tristate "HSA kernel driver for AMD GPU devices"77 depends on DRM_AMDGPU && X86_6488 imply AMD_IOMMU_V299+ select MMU_NOTIFIER910 help1011 Enable this if you want to use HSA features on AMD GPU devices.
+9-8
drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
···749749 struct timespec64 time;750750751751 dev = kfd_device_by_id(args->gpu_id);752752- if (dev == NULL)753753- return -EINVAL;754754-755755- /* Reading GPU clock counter from KGD */756756- args->gpu_clock_counter =757757- dev->kfd2kgd->get_gpu_clock_counter(dev->kgd);752752+ if (dev)753753+ /* Reading GPU clock counter from KGD */754754+ args->gpu_clock_counter =755755+ dev->kfd2kgd->get_gpu_clock_counter(dev->kgd);756756+ else757757+ /* Node without GPU resource */758758+ args->gpu_clock_counter = 0;758759759760 /* No access to rdtsc. Using raw monotonic time */760761 getrawmonotonic64(&time);···11481147 return ret;11491148}1150114911511151-bool kfd_dev_is_large_bar(struct kfd_dev *dev)11501150+static bool kfd_dev_is_large_bar(struct kfd_dev *dev)11521151{11531152 struct kfd_local_mem_info mem_info;11541153···1422142114231422 pdd = kfd_get_process_device_data(dev, p);14241423 if (!pdd) {14251425- err = PTR_ERR(pdd);14241424+ err = -EINVAL;14261425 goto bind_process_to_device_failed;14271426 }14281427
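The kfd_chardev fix replaces `err = PTR_ERR(pdd)` with `err = -EINVAL`: kfd_get_process_device_data() returns NULL on failure rather than an ERR_PTR, and PTR_ERR(NULL) evaluates to 0, silently turning the failure into success. The hazard is easy to demonstrate with PTR_ERR reimplemented the way the kernel defines it:

```c
#include <assert.h>
#include <errno.h>

/* Userspace copy of the kernel's PTR_ERR(): the error code is the
 * pointer value itself. Only valid for genuine ERR_PTR values. */
static long ptr_err(const void *ptr)
{
    return (long)ptr;
}
```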
···329329{330330 int src;331331 struct irq_list_head *lh;332332+ unsigned long irq_table_flags;332333 DRM_DEBUG_KMS("DM_IRQ: releasing resources.\n");333333-334334 for (src = 0; src < DAL_IRQ_SOURCES_NUMBER; src++) {335335-335335+ DM_IRQ_TABLE_LOCK(adev, irq_table_flags);336336 /* The handler was removed from the table,337337 * it means it is safe to flush all the 'work'338338 * (because no code can schedule a new one). */339339 lh = &adev->dm.irq_handler_list_low_tab[src];340340+ DM_IRQ_TABLE_UNLOCK(adev, irq_table_flags);340341 flush_work(&lh->work);341342 }342343}
···21402140 }21412141 }2142214221432143- /* According to BSpec, "The CD clock frequency must be at least twice21432143+ /*21442144+ * According to BSpec, "The CD clock frequency must be at least twice21442145 * the frequency of the Azalia BCLK." and BCLK is 96 MHz by default.21462146+ *21472147+ * FIXME: Check the actual, not default, BCLK being used.21482148+ *21492149+ * FIXME: This does not depend on ->has_audio because the higher CDCLK21502150+ * is required for audio probe, also when there are no audio capable21512151+ * displays connected at probe time. This leads to unnecessarily high21522152+ * CDCLK when audio is not required.21532153+ *21542154+ * FIXME: This limit is only applied when there are displays connected21552155+ * at probe time. If we probe without displays, we'll still end up using21562156+ * the platform minimum CDCLK, failing audio probe.21452157 */21462146- if (crtc_state->has_audio && INTEL_GEN(dev_priv) >= 9)21582158+ if (INTEL_GEN(dev_priv) >= 9)21472159 min_cdclk = max(2 * 96000, min_cdclk);2148216021492161 /*
+2-2
drivers/gpu/drm/i915/intel_drv.h
···4949 * check the condition before the timeout.5050 */5151#define __wait_for(OP, COND, US, Wmin, Wmax) ({ \5252- unsigned long timeout__ = jiffies + usecs_to_jiffies(US) + 1; \5252+ const ktime_t end__ = ktime_add_ns(ktime_get_raw(), 1000ll * (US)); \5353 long wait__ = (Wmin); /* recommended min for usleep is 10 us */ \5454 int ret__; \5555 might_sleep(); \5656 for (;;) { \5757- bool expired__ = time_after(jiffies, timeout__); \5757+ const bool expired__ = ktime_after(ktime_get_raw(), end__); \5858 OP; \5959 if (COND) { \6060 ret__ = 0; \
+1-1
drivers/gpu/drm/i915/intel_fbdev.c
···806806 return;807807808808 intel_fbdev_sync(ifbdev);809809- if (ifbdev->vma)809809+ if (ifbdev->vma || ifbdev->helper.deferred_setup)810810 drm_fb_helper_hotplug_event(&ifbdev->helper);811811}812812
+5-6
drivers/gpu/drm/i915/intel_runtime_pm.c
···641641642642 DRM_DEBUG_KMS("Enabling DC6\n");643643644644- gen9_set_dc_state(dev_priv, DC_STATE_EN_UPTO_DC6);644644+ /* Wa Display #1183: skl,kbl,cfl */645645+ if (IS_GEN9_BC(dev_priv))646646+ I915_WRITE(GEN8_CHICKEN_DCPR_1, I915_READ(GEN8_CHICKEN_DCPR_1) |647647+ SKL_SELECT_ALTERNATE_DC_EXIT);645648649649+ gen9_set_dc_state(dev_priv, DC_STATE_EN_UPTO_DC6);646650}647651648652void skl_disable_dc6(struct drm_i915_private *dev_priv)649653{650654 DRM_DEBUG_KMS("Disabling DC6\n");651651-652652- /* Wa Display #1183: skl,kbl,cfl */653653- if (IS_GEN9_BC(dev_priv))654654- I915_WRITE(GEN8_CHICKEN_DCPR_1, I915_READ(GEN8_CHICKEN_DCPR_1) |655655- SKL_SELECT_ALTERNATE_DC_EXIT);656655657656 gen9_set_dc_state(dev_priv, DC_STATE_DISABLE);658657}
···7979 dsi_phy_write(lane_base + REG_DSI_10nm_PHY_LN_TX_DCTRL(3), 0x04);8080}81818282-static int msm_dsi_dphy_timing_calc_v3(struct msm_dsi_dphy_timing *timing,8383- struct msm_dsi_phy_clk_request *clk_req)8484-{8585- /*8686- * TODO: These params need to be computed, they're currently hardcoded8787- * for a 1440x2560@60Hz panel with a byteclk of 100.618 Mhz, and a8888- * default escape clock of 19.2 Mhz.8989- */9090-9191- timing->hs_halfbyte_en = 0;9292- timing->clk_zero = 0x1c;9393- timing->clk_prepare = 0x07;9494- timing->clk_trail = 0x07;9595- timing->hs_exit = 0x23;9696- timing->hs_zero = 0x21;9797- timing->hs_prepare = 0x07;9898- timing->hs_trail = 0x07;9999- timing->hs_rqst = 0x05;100100- timing->ta_sure = 0x00;101101- timing->ta_go = 0x03;102102- timing->ta_get = 0x04;103103-104104- timing->shared_timings.clk_pre = 0x2d;105105- timing->shared_timings.clk_post = 0x0d;106106-107107- return 0;108108-}109109-11082static int dsi_10nm_phy_enable(struct msm_dsi_phy *phy, int src_pll_id,11183 struct msm_dsi_phy_clk_request *clk_req)11284{
+2-1
drivers/gpu/drm/msm/msm_fb.c
···183183 hsub = drm_format_horz_chroma_subsampling(mode_cmd->pixel_format);184184 vsub = drm_format_vert_chroma_subsampling(mode_cmd->pixel_format);185185186186- format = kms->funcs->get_format(kms, mode_cmd->pixel_format);186186+ format = kms->funcs->get_format(kms, mode_cmd->pixel_format,187187+ mode_cmd->modifier[0]);187188 if (!format) {188189 dev_err(dev->dev, "unsupported pixel format: %4.4s\n",189190 (char *)&mode_cmd->pixel_format);
+2-9
drivers/gpu/drm/msm/msm_fbdev.c
···92929393 if (IS_ERR(fb)) {9494 dev_err(dev->dev, "failed to allocate fb\n");9595- ret = PTR_ERR(fb);9696- goto fail;9595+ return PTR_ERR(fb);9796 }98979998 bo = msm_framebuffer_bo(fb, 0);···150151151152fail_unlock:152153 mutex_unlock(&dev->struct_mutex);153153-fail:154154-155155- if (ret) {156156- if (fb)157157- drm_framebuffer_remove(fb);158158- }159159-154154+ drm_framebuffer_remove(fb);160155 return ret;161156}162157
+11-9
drivers/gpu/drm/msm/msm_gem.c
···132132 struct msm_gem_object *msm_obj = to_msm_bo(obj);133133134134 if (msm_obj->pages) {135135- /* For non-cached buffers, ensure the new pages are clean136136- * because display controller, GPU, etc. are not coherent:137137- */138138- if (msm_obj->flags & (MSM_BO_WC|MSM_BO_UNCACHED))139139- dma_unmap_sg(obj->dev->dev, msm_obj->sgt->sgl,140140- msm_obj->sgt->nents, DMA_BIDIRECTIONAL);135135+ if (msm_obj->sgt) {136136+ /* For non-cached buffers, ensure the new137137+ * pages are clean because display controller,138138+ * GPU, etc. are not coherent:139139+ */140140+ if (msm_obj->flags & (MSM_BO_WC|MSM_BO_UNCACHED))141141+ dma_unmap_sg(obj->dev->dev, msm_obj->sgt->sgl,142142+ msm_obj->sgt->nents,143143+ DMA_BIDIRECTIONAL);141144142142- if (msm_obj->sgt)143145 sg_free_table(msm_obj->sgt);144144-145145- kfree(msm_obj->sgt);146146+ kfree(msm_obj->sgt);147147+ }146148147149 if (use_pages(obj))148150 drm_gem_put_pages(obj, msm_obj->pages, true, false);
+4-1
drivers/gpu/drm/msm/msm_kms.h
···4848 /* functions to wait for atomic commit completed on each CRTC */4949 void (*wait_for_crtc_commit_done)(struct msm_kms *kms,5050 struct drm_crtc *crtc);5151+ /* get msm_format w/ optional format modifiers from drm_mode_fb_cmd2 */5252+ const struct msm_format *(*get_format)(struct msm_kms *kms,5353+ const uint32_t format,5454+ const uint64_t modifiers);5155 /* misc: */5252- const struct msm_format *(*get_format)(struct msm_kms *kms, uint32_t format);5356 long (*round_pixclk)(struct msm_kms *kms, unsigned long rate,5457 struct drm_encoder *encoder);5558 int (*set_split_display)(struct msm_kms *kms,
···707707config I2C_MT65XX708708 tristate "MediaTek I2C adapter"709709 depends on ARCH_MEDIATEK || COMPILE_TEST710710- depends on HAS_DMA711710 help712711 This selects the MediaTek(R) Integrated Inter Circuit bus driver713712 for MT65xx and MT81xx.···884885885886config I2C_SH_MOBILE886887 tristate "SuperH Mobile I2C Controller"887887- depends on HAS_DMA888888 depends on ARCH_SHMOBILE || ARCH_RENESAS || COMPILE_TEST889889 help890890 If you say yes to this option, support will be included for the···1096109810971099config I2C_RCAR10981100 tristate "Renesas R-Car I2C Controller"10991099- depends on HAS_DMA11001101 depends on ARCH_RENESAS || COMPILE_TEST11011102 select I2C_SLAVE11021103 help
+18-4
drivers/i2c/busses/i2c-sprd.c
···8686 u32 count;8787 int irq;8888 int err;8989+ bool is_suspended;8990};90919192static void sprd_i2c_set_count(struct sprd_i2c *i2c_dev, u32 count)···284283 struct sprd_i2c *i2c_dev = i2c_adap->algo_data;285284 int im, ret;286285286286+ if (i2c_dev->is_suspended)287287+ return -EBUSY;288288+287289 ret = pm_runtime_get_sync(i2c_dev->dev);288290 if (ret < 0)289291 return ret;···368364 struct sprd_i2c *i2c_dev = dev_id;369365 struct i2c_msg *msg = i2c_dev->msg;370366 bool ack = !(readl(i2c_dev->base + I2C_STATUS) & I2C_RX_ACK);371371- u32 i2c_count = readl(i2c_dev->base + I2C_COUNT);372367 u32 i2c_tran;373368374369 if (msg->flags & I2C_M_RD)375370 i2c_tran = i2c_dev->count >= I2C_FIFO_FULL_THLD;376371 else377377- i2c_tran = i2c_count;372372+ i2c_tran = i2c_dev->count;378373379374 /*380375 * If we got one ACK from slave when writing data, and we did not···411408{412409 struct sprd_i2c *i2c_dev = dev_id;413410 struct i2c_msg *msg = i2c_dev->msg;414414- u32 i2c_count = readl(i2c_dev->base + I2C_COUNT);415411 bool ack = !(readl(i2c_dev->base + I2C_STATUS) & I2C_RX_ACK);416412 u32 i2c_tran;417413418414 if (msg->flags & I2C_M_RD)419415 i2c_tran = i2c_dev->count >= I2C_FIFO_FULL_THLD;420416 else421421- i2c_tran = i2c_count;417417+ i2c_tran = i2c_dev->count;422418423419 /*424420 * If we did not get one ACK from slave when writing data, then we···588586589587static int __maybe_unused sprd_i2c_suspend_noirq(struct device *pdev)590588{589589+ struct sprd_i2c *i2c_dev = dev_get_drvdata(pdev);590590+591591+ i2c_lock_adapter(&i2c_dev->adap);592592+ i2c_dev->is_suspended = true;593593+ i2c_unlock_adapter(&i2c_dev->adap);594594+591595 return pm_runtime_force_suspend(pdev);592596}593597594598static int __maybe_unused sprd_i2c_resume_noirq(struct device *pdev)595599{600600+ struct sprd_i2c *i2c_dev = dev_get_drvdata(pdev);601601+602602+ i2c_lock_adapter(&i2c_dev->adap);603603+ i2c_dev->is_suspended = false;604604+ i2c_unlock_adapter(&i2c_dev->adap);605605+596606 return 
pm_runtime_force_resume(pdev);597607}598608
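The i2c-sprd patch adds an is_suspended flag, set and cleared under the adapter lock in the noirq suspend/resume hooks, and makes master_xfer bail out with -EBUSY while it is set, so no transfer can race with suspend. A userspace sketch with a mutex standing in for the adapter lock (illustrative names):

```c
#include <assert.h>
#include <errno.h>
#include <pthread.h>

struct bus {
    pthread_mutex_t lock;
    int is_suspended;   /* protected by lock */
};

/* Refuse transfers while the controller is suspended. */
static int bus_xfer(struct bus *b)
{
    int busy;

    pthread_mutex_lock(&b->lock);
    busy = b->is_suspended;
    pthread_mutex_unlock(&b->lock);

    return busy ? -EBUSY : 0;
}

/* Called from the (simulated) suspend/resume paths. */
static void bus_set_suspended(struct bus *b, int suspended)
{
    pthread_mutex_lock(&b->lock);
    b->is_suspended = suspended;
    pthread_mutex_unlock(&b->lock);
}
```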
···47574757{47584758 struct mlx5_ib_dev *dev = to_mdev(ibdev);4759475947604760- return mlx5_get_vector_affinity(dev->mdev, comp_vector);47604760+ return mlx5_get_vector_affinity_hint(dev->mdev, comp_vector);47614761}4762476247634763/* The mlx5_ib_multiport_mutex should be held when calling this function */
···583583584584 x = (s8)(((packet[0] & 0x20) << 2) | (packet[1] & 0x7f));585585 y = (s8)(((packet[0] & 0x10) << 3) | (packet[2] & 0x7f));586586- z = packet[4] & 0x7c;586586+ z = packet[4] & 0x7f;587587588588 /*589589 * The x and y values tend to be quite large, and when used
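The alps fix widens the pressure mask from 0x7c to 0x7f, keeping the two low bits of the 7-bit z value instead of discarding them. The difference in one line (the sample byte is made up):

```c
#include <assert.h>

/* Extract the full 7-bit pressure field from the raw packet byte. */
static int alps_z(unsigned char byte)
{
    return byte & 0x7f;
}
```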
+5-2
drivers/input/rmi4/rmi_spi.c
···147147 if (len > RMI_SPI_XFER_SIZE_LIMIT)148148 return -EINVAL;149149150150- if (rmi_spi->xfer_buf_size < len)151151- rmi_spi_manage_pools(rmi_spi, len);150150+ if (rmi_spi->xfer_buf_size < len) {151151+ ret = rmi_spi_manage_pools(rmi_spi, len);152152+ if (ret < 0)153153+ return ret;154154+ }152155153156 if (addr == 0)154157 /*
+1-1
drivers/input/touchscreen/Kconfig
···362362363363 If unsure, say N.364364365365- To compile this driver as a moudle, choose M here : the365365+ To compile this driver as a module, choose M here : the366366 module will be called hideep_ts.367367368368config TOUCHSCREEN_ILI210X
+124-76
drivers/input/touchscreen/atmel_mxt_ts.c
···280280 struct input_dev *input_dev;281281 char phys[64]; /* device physical location */282282 struct mxt_object *object_table;283283- struct mxt_info info;283283+ struct mxt_info *info;284284+ void *raw_info_block;284285 unsigned int irq;285286 unsigned int max_x;286287 unsigned int max_y;···461460{462461 u8 appmode = data->client->addr;463462 u8 bootloader;463463+ u8 family_id = data->info ? data->info->family_id : 0;464464465465 switch (appmode) {466466 case 0x4a:467467 case 0x4b:468468 /* Chips after 1664S use different scheme */469469- if (retry || data->info.family_id >= 0xa2) {469469+ if (retry || family_id >= 0xa2) {470470 bootloader = appmode - 0x24;471471 break;472472 }···694692 struct mxt_object *object;695693 int i;696694697697- for (i = 0; i < data->info.object_num; i++) {695695+ for (i = 0; i < data->info->object_num; i++) {698696 object = data->object_table + i;699697 if (object->type == type)700698 return object;···14641462 data_pos += offset;14651463 }1466146414671467- if (cfg_info.family_id != data->info.family_id) {14651465+ if (cfg_info.family_id != data->info->family_id) {14681466 dev_err(dev, "Family ID mismatch!\n");14691467 return -EINVAL;14701468 }1471146914721472- if (cfg_info.variant_id != data->info.variant_id) {14701470+ if (cfg_info.variant_id != data->info->variant_id) {14731471 dev_err(dev, "Variant ID mismatch!\n");14741472 return -EINVAL;14751473 }···1514151215151513 /* Malloc memory to store configuration */15161514 cfg_start_ofs = MXT_OBJECT_START +15171517- data->info.object_num * sizeof(struct mxt_object) +15151515+ data->info->object_num * sizeof(struct mxt_object) +15181516 MXT_INFO_CHECKSUM_SIZE;15191517 config_mem_size = data->mem_size - cfg_start_ofs;15201518 config_mem = kzalloc(config_mem_size, GFP_KERNEL);···15651563 return ret;15661564}1567156515681568-static int mxt_get_info(struct mxt_data *data)15691569-{15701570- struct i2c_client *client = data->client;15711571- struct mxt_info *info = &data->info;15721572- int 
error;15731573-15741574- /* Read 7-byte info block starting at address 0 */15751575- error = __mxt_read_reg(client, 0, sizeof(*info), info);15761576- if (error)15771577- return error;15781578-15791579- return 0;15801580-}15811581-15821566static void mxt_free_input_device(struct mxt_data *data)15831567{15841568 if (data->input_dev) {···15791591 video_unregister_device(&data->dbg.vdev);15801592 v4l2_device_unregister(&data->dbg.v4l2);15811593#endif15821582-15831583- kfree(data->object_table);15841594 data->object_table = NULL;15951595+ data->info = NULL;15961596+ kfree(data->raw_info_block);15971597+ data->raw_info_block = NULL;15851598 kfree(data->msg_buf);15861599 data->msg_buf = NULL;15871600 data->T5_address = 0;···15981609 data->max_reportid = 0;15991610}1600161116011601-static int mxt_get_object_table(struct mxt_data *data)16121612+static int mxt_parse_object_table(struct mxt_data *data,16131613+ struct mxt_object *object_table)16021614{16031615 struct i2c_client *client = data->client;16041604- size_t table_size;16051605- struct mxt_object *object_table;16061606- int error;16071616 int i;16081617 u8 reportid;16091618 u16 end_address;1610161916111611- table_size = data->info.object_num * sizeof(struct mxt_object);16121612- object_table = kzalloc(table_size, GFP_KERNEL);16131613- if (!object_table) {16141614- dev_err(&data->client->dev, "Failed to allocate memory\n");16151615- return -ENOMEM;16161616- }16171617-16181618- error = __mxt_read_reg(client, MXT_OBJECT_START, table_size,16191619- object_table);16201620- if (error) {16211621- kfree(object_table);16221622- return error;16231623- }16241624-16251620 /* Valid Report IDs start counting from 1 */16261621 reportid = 1;16271622 data->mem_size = 0;16281628- for (i = 0; i < data->info.object_num; i++) {16231623+ for (i = 0; i < data->info->object_num; i++) {16291624 struct mxt_object *object = object_table + i;16301625 u8 min_id, max_id;16311626···1633166016341661 switch (object->type) {16351662 case 
MXT_GEN_MESSAGE_T5:16361636- if (data->info.family_id == 0x80 &&16371637- data->info.version < 0x20) {16631663+ if (data->info->family_id == 0x80 &&16641664+ data->info->version < 0x20) {16381665 /*16391666 * On mXT224 firmware versions prior to V2.016401667 * read and discard unused CRC byte otherwise···16891716 /* If T44 exists, T5 position has to be directly after */16901717 if (data->T44_address && (data->T5_address != data->T44_address + 1)) {16911718 dev_err(&client->dev, "Invalid T44 position\n");16921692- error = -EINVAL;16931693- goto free_object_table;17191719+ return -EINVAL;16941720 }1695172116961722 data->msg_buf = kcalloc(data->max_reportid,16971723 data->T5_msg_size, GFP_KERNEL);16981698- if (!data->msg_buf) {16991699- dev_err(&client->dev, "Failed to allocate message buffer\n");17241724+ if (!data->msg_buf)17251725+ return -ENOMEM;17261726+17271727+ return 0;17281728+}17291729+17301730+static int mxt_read_info_block(struct mxt_data *data)17311731+{17321732+ struct i2c_client *client = data->client;17331733+ int error;17341734+ size_t size;17351735+ void *id_buf, *buf;17361736+ uint8_t num_objects;17371737+ u32 calculated_crc;17381738+ u8 *crc_ptr;17391739+17401740+ /* If info block already allocated, free it */17411741+ if (data->raw_info_block)17421742+ mxt_free_object_table(data);17431743+17441744+ /* Read 7-byte ID information block starting at address 0 */17451745+ size = sizeof(struct mxt_info);17461746+ id_buf = kzalloc(size, GFP_KERNEL);17471747+ if (!id_buf)17481748+ return -ENOMEM;17491749+17501750+ error = __mxt_read_reg(client, 0, size, id_buf);17511751+ if (error)17521752+ goto err_free_mem;17531753+17541754+ /* Resize buffer to give space for rest of info block */17551755+ num_objects = ((struct mxt_info *)id_buf)->object_num;17561756+ size += (num_objects * sizeof(struct mxt_object))17571757+ + MXT_INFO_CHECKSUM_SIZE;17581758+17591759+ buf = krealloc(id_buf, size, GFP_KERNEL);17601760+ if (!buf) {17001761 error = -ENOMEM;17011701- goto 
free_object_table;17621762+ goto err_free_mem;17631763+ }17641764+ id_buf = buf;17651765+17661766+ /* Read rest of info block */17671767+ error = __mxt_read_reg(client, MXT_OBJECT_START,17681768+ size - MXT_OBJECT_START,17691769+ id_buf + MXT_OBJECT_START);17701770+ if (error)17711771+ goto err_free_mem;17721772+17731773+ /* Extract & calculate checksum */17741774+ crc_ptr = id_buf + size - MXT_INFO_CHECKSUM_SIZE;17751775+ data->info_crc = crc_ptr[0] | (crc_ptr[1] << 8) | (crc_ptr[2] << 16);17761776+17771777+ calculated_crc = mxt_calculate_crc(id_buf, 0,17781778+ size - MXT_INFO_CHECKSUM_SIZE);17791779+17801780+ /*17811781+ * CRC mismatch can be caused by data corruption due to I2C comms17821782+ * issue or else device is not using Object Based Protocol (eg i2c-hid)17831783+ */17841784+ if ((data->info_crc == 0) || (data->info_crc != calculated_crc)) {17851785+ dev_err(&client->dev,17861786+ "Info Block CRC error calculated=0x%06X read=0x%06X\n",17871787+ calculated_crc, data->info_crc);17881788+ error = -EIO;17891789+ goto err_free_mem;17021790 }1703179117041704- data->object_table = object_table;17921792+ data->raw_info_block = id_buf;17931793+ data->info = (struct mxt_info *)id_buf;17941794+17951795+ dev_info(&client->dev,17961796+ "Family: %u Variant: %u Firmware V%u.%u.%02X Objects: %u\n",17971797+ data->info->family_id, data->info->variant_id,17981798+ data->info->version >> 4, data->info->version & 0xf,17991799+ data->info->build, data->info->object_num);18001800+18011801+ /* Parse object table information */18021802+ error = mxt_parse_object_table(data, id_buf + MXT_OBJECT_START);18031803+ if (error) {18041804+ dev_err(&client->dev, "Error %d parsing object table\n", error);18051805+ mxt_free_object_table(data);18061806+ goto err_free_mem;18071807+ }18081808+18091809+ data->object_table = (struct mxt_object *)(id_buf + MXT_OBJECT_START);1705181017061811 return 0;1707181217081708-free_object_table:17091709- 
mxt_free_object_table(data);18131813+err_free_mem:18141814+ kfree(id_buf);17101815 return error;17111816}17121817···20972046 int error;2098204720992048 while (1) {21002100- error = mxt_get_info(data);20492049+ error = mxt_read_info_block(data);21012050 if (!error)21022051 break;21032052···21282077 msleep(MXT_FW_RESET_TIME);21292078 }2130207921312131- /* Get object table information */21322132- error = mxt_get_object_table(data);21332133- if (error) {21342134- dev_err(&client->dev, "Error %d reading object table\n", error);21352135- return error;21362136- }21372137-21382080 error = mxt_acquire_irq(data);21392081 if (error)21402140- goto err_free_object_table;20822082+ return error;2141208321422084 error = request_firmware_nowait(THIS_MODULE, true, MXT_CFG_NAME,21432085 &client->dev, GFP_KERNEL, data,···21382094 if (error) {21392095 dev_err(&client->dev, "Failed to invoke firmware loader: %d\n",21402096 error);21412141- goto err_free_object_table;20972097+ return error;21422098 }2143209921442100 return 0;21452145-21462146-err_free_object_table:21472147- mxt_free_object_table(data);21482148- return error;21492101}2150210221512103static int mxt_set_t7_power_cfg(struct mxt_data *data, u8 sleep)···22022162static u16 mxt_get_debug_value(struct mxt_data *data, unsigned int x,22032163 unsigned int y)22042164{22052205- struct mxt_info *info = &data->info;21652165+ struct mxt_info *info = data->info;22062166 struct mxt_dbg *dbg = &data->dbg;22072167 unsigned int ofs, page;22082168 unsigned int col = 0;···2530249025312491static void mxt_debug_init(struct mxt_data *data)25322492{25332533- struct mxt_info *info = &data->info;24932493+ struct mxt_info *info = data->info;25342494 struct mxt_dbg *dbg = &data->dbg;25352495 struct mxt_object *object;25362496 int error;···26162576 const struct firmware *cfg)26172577{26182578 struct device *dev = &data->client->dev;26192619- struct mxt_info *info = &data->info;26202579 int error;2621258026222581 error = 
mxt_init_t7_power_cfg(data);···2640260126412602 mxt_debug_init(data);2642260326432643- dev_info(dev,26442644- "Family: %u Variant: %u Firmware V%u.%u.%02X Objects: %u\n",26452645- info->family_id, info->variant_id, info->version >> 4,26462646- info->version & 0xf, info->build, info->object_num);26472647-26482604 return 0;26492605}26502606···26482614 struct device_attribute *attr, char *buf)26492615{26502616 struct mxt_data *data = dev_get_drvdata(dev);26512651- struct mxt_info *info = &data->info;26172617+ struct mxt_info *info = data->info;26522618 return scnprintf(buf, PAGE_SIZE, "%u.%u.%02X\n",26532619 info->version >> 4, info->version & 0xf, info->build);26542620}···26582624 struct device_attribute *attr, char *buf)26592625{26602626 struct mxt_data *data = dev_get_drvdata(dev);26612661- struct mxt_info *info = &data->info;26272627+ struct mxt_info *info = data->info;26622628 return scnprintf(buf, PAGE_SIZE, "%u.%u\n",26632629 info->family_id, info->variant_id);26642630}···26972663 return -ENOMEM;2698266426992665 error = 0;27002700- for (i = 0; i < data->info.object_num; i++) {26662666+ for (i = 0; i < data->info->object_num; i++) {27012667 object = data->object_table + i;2702266827032669 if (!mxt_object_readable(object->type))···30693035 .driver_data = samus_platform_data,30703036 },30713037 {30383038+ /* Samsung Chromebook Pro */30393039+ .ident = "Samsung Chromebook Pro",30403040+ .matches = {30413041+ DMI_MATCH(DMI_SYS_VENDOR, "Google"),30423042+ DMI_MATCH(DMI_PRODUCT_NAME, "Caroline"),30433043+ },30443044+ .driver_data = samus_platform_data,30453045+ },30463046+ {30723047 /* Other Google Chromebooks */30733048 .ident = "Chromebook",30743049 .matches = {···3297325432983255static const struct of_device_id mxt_of_match[] = {32993256 { .compatible = "atmel,maxtouch", },32573257+ /* Compatibles listed below are deprecated */32583258+ { .compatible = "atmel,qt602240_ts", },32593259+ { .compatible = "atmel,atmel_mxt_ts", },32603260+ { .compatible = 
"atmel,atmel_mxt_tp", },32613261+ { .compatible = "atmel,mXT224", },33003262 {},33013263};33023264MODULE_DEVICE_TABLE(of, mxt_of_match);
···4545#define I82802AB 0x00ad4646#define I82802AC 0x00ac4747#define PF38F4476 0x881c4848+#define M28F00AP30 0x89634849/* STMicroelectronics chips */4950#define M50LPW080 0x002F5051#define M50FLW080A 0x0080···374373 if (cfi->mfr == CFI_MFR_INTEL &&375374 cfi->id == PF38F4476 && extp->MinorVersion == '3')376375 extp->MinorVersion = '1';376376+}377377+378378+static int cfi_is_micron_28F00AP30(struct cfi_private *cfi, struct flchip *chip)379379+{380380+ /*381381+ * Micron (was Numonyx) 1Gbit bottom boot are buggy w.r.t382382+ * Erase Suspend for their small Erase Blocks (0x8000)383383+ */384384+ if (cfi->mfr == CFI_MFR_INTEL && cfi->id == M28F00AP30)385385+ return 1;386386+ return 0;377387}378388379389static inline struct cfi_pri_intelext *···843831 (mode == FL_WRITING && (cfip->SuspendCmdSupport & 1))))844832 goto sleep;845833834834+ /* Do not allow suspend if read/write to EB address */835835+ if ((adr & chip->in_progress_block_mask) ==836836+ chip->in_progress_block_addr)837837+ goto sleep;838838+839839+ /* do not suspend small EBs, buggy Micron Chips */840840+ if (cfi_is_micron_28F00AP30(cfi, chip) &&841841+ (chip->in_progress_block_mask == ~(0x8000-1)))842842+ goto sleep;846843847844 /* Erase suspend */848848- map_write(map, CMD(0xB0), adr);845845+ map_write(map, CMD(0xB0), chip->in_progress_block_addr);849846850847 /* If the flash has finished erasing, then 'erase suspend'851848 * appears to make some (28F320) flash devices switch to852849 * 'read' mode. Make sure that we switch to 'read status'853850 * mode so we get the right data. 
--rmk854851 */855855- map_write(map, CMD(0x70), adr);852852+ map_write(map, CMD(0x70), chip->in_progress_block_addr);856853 chip->oldstate = FL_ERASING;857854 chip->state = FL_ERASE_SUSPENDING;858855 chip->erase_suspended = 1;859856 for (;;) {860860- status = map_read(map, adr);857857+ status = map_read(map, chip->in_progress_block_addr);861858 if (map_word_andequal(map, status, status_OK, status_OK))862859 break;863860···10621041 sending the 0x70 (Read Status) command to an erasing10631042 chip and expecting it to be ignored, that's what we10641043 do. */10651065- map_write(map, CMD(0xd0), adr);10661066- map_write(map, CMD(0x70), adr);10441044+ map_write(map, CMD(0xd0), chip->in_progress_block_addr);10451045+ map_write(map, CMD(0x70), chip->in_progress_block_addr);10671046 chip->oldstate = FL_READY;10681047 chip->state = FL_ERASING;10691048 break;···19541933 map_write(map, CMD(0xD0), adr);19551934 chip->state = FL_ERASING;19561935 chip->erase_suspended = 0;19361936+ chip->in_progress_block_addr = adr;19371937+ chip->in_progress_block_mask = ~(len - 1);1957193819581939 ret = INVAL_CACHE_AND_WAIT(map, chip, adr,19591940 adr, len,
drivers/mtd/chips/cfi_cmdset_0002.c
···816816 (mode == FL_WRITING && (cfip->EraseSuspend & 0x2))))817817 goto sleep;818818819819- /* We could check to see if we're trying to access the sector820820- * that is currently being erased. However, no user will try821821- * anything like that so we just wait for the timeout. */819819+ /* Do not allow suspend if read/write to EB address */820820+ if ((adr & chip->in_progress_block_mask) ==821821+ chip->in_progress_block_addr)822822+ goto sleep;822823823824 /* Erase suspend */824825 /* It's harmless to issue the Erase-Suspend and Erase-Resume···22682267 chip->state = FL_ERASING;22692268 chip->erase_suspended = 0;22702269 chip->in_progress_block_addr = adr;22702270+ chip->in_progress_block_mask = ~(map->size - 1);2271227122722272 INVALIDATE_CACHE_UDELAY(map, chip,22732273 adr, map->size,···23582356 chip->state = FL_ERASING;23592357 chip->erase_suspended = 0;23602358 chip->in_progress_block_addr = adr;23592359+ chip->in_progress_block_mask = ~(len - 1);2361236023622361 INVALIDATE_CACHE_UDELAY(map, chip,23632362 adr, len,
···22992299 /*23002300 * The legacy "num-cs" property indicates the number of CS on the only23012301 * chip connected to the controller (legacy bindings does not support23022302- * more than one chip). CS are only incremented one by one while the RB23032303- * pin is always the #0.23022302+ * more than one chip). The CS and RB pins are always the #0.23042303 *23052304 * When not using legacy bindings, a couple of "reg" and "nand-rb"23062305 * properties must be filled. For each chip, expressed as a subnode,23072306 * "reg" points to the CS lines and "nand-rb" to the RB line.23082307 */23092309- if (pdata) {23082308+ if (pdata || nfc->caps->legacy_of_bindings) {23102309 nsels = 1;23112311- } else if (nfc->caps->legacy_of_bindings &&23122312- !of_get_property(np, "num-cs", &nsels)) {23132313- dev_err(dev, "missing num-cs property\n");23142314- return -EINVAL;23152315- } else if (!of_get_property(np, "reg", &nsels)) {23162316- dev_err(dev, "missing reg property\n");23172317- return -EINVAL;23182318- }23192319-23202320- if (!pdata)23212321- nsels /= sizeof(u32);23222322- if (!nsels) {23232323- dev_err(dev, "invalid reg property size\n");23242324- return -EINVAL;23102310+ } else {23112311+ nsels = of_property_count_elems_of_size(np, "reg", sizeof(u32));23122312+ if (nsels <= 0) {23132313+ dev_err(dev, "missing/invalid reg property\n");23142314+ return -EINVAL;23152315+ }23252316 }2326231723272318 /* Alloc the nand chip structure */
···1007100710081008 mutex_lock(&priv->state_lock);1009100910101010- if (!test_bit(MLX5E_STATE_OPENED, &priv->state))10111011- goto out;10121012-10131010 new_channels.params = priv->channels.params;10141011 mlx5e_trust_update_tx_min_inline_mode(priv, &new_channels.params);10121012+10131013+ if (!test_bit(MLX5E_STATE_OPENED, &priv->state)) {10141014+ priv->channels.params = new_channels.params;10151015+ goto out;10161016+ }1015101710161018 /* Skip if tx_min_inline is the same */10171019 if (new_channels.params.tx_min_inline_mode ==
···290290291291 if (!test_bit(MLX5E_STATE_OPENED, &priv->state)) {292292 netdev_err(priv->netdev,293293- "\tCan't perform loobpack test while device is down\n");293293+ "\tCan't perform loopback test while device is down\n");294294 return -ENODEV;295295 }296296
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
···18641864 }1865186518661866 ip_proto = MLX5_GET(fte_match_set_lyr_2_4, headers_v, ip_protocol);18671867- if (modify_ip_header && ip_proto != IPPROTO_TCP && ip_proto != IPPROTO_UDP) {18671867+ if (modify_ip_header && ip_proto != IPPROTO_TCP &&18681868+ ip_proto != IPPROTO_UDP && ip_proto != IPPROTO_ICMP) {18681869 pr_info("can't offload re-write of ip proto %d\n", ip_proto);18691870 return false;18701871 }
···17181718 struct net_device *dev = mlxsw_sp_port->dev;17191719 int err;1720172017211721- if (bridge_port->bridge_device->multicast_enabled) {17221722- if (bridge_port->bridge_device->multicast_enabled) {17231723- err = mlxsw_sp_port_smid_set(mlxsw_sp_port, mid->mid,17241724- false);17251725- if (err)17261726- netdev_err(dev, "Unable to remove port from SMID\n");17271727- }17211721+ if (bridge_port->bridge_device->multicast_enabled &&17221722+ !bridge_port->mrouter) {17231723+ err = mlxsw_sp_port_smid_set(mlxsw_sp_port, mid->mid, false);17241724+ if (err)17251725+ netdev_err(dev, "Unable to remove port from SMID\n");17281726 }1729172717301728 err = mlxsw_sp_port_remove_from_mid(mlxsw_sp_port, mid);
···6767/**6868 * nfp_net_get_mac_addr() - Get the MAC address.6969 * @pf: NFP PF handle7070+ * @netdev: net_device to set MAC address on7071 * @port: NFP port structure7172 *7273 * First try to get the MAC address from NSP ETH table. If that7374 * fails generate a random address.7475 */7575-void nfp_net_get_mac_addr(struct nfp_pf *pf, struct nfp_port *port)7676+void7777+nfp_net_get_mac_addr(struct nfp_pf *pf, struct net_device *netdev,7878+ struct nfp_port *port)7679{7780 struct nfp_eth_table_port *eth_port;78817982 eth_port = __nfp_port_get_eth_port(port);8083 if (!eth_port) {8181- eth_hw_addr_random(port->netdev);8484+ eth_hw_addr_random(netdev);8285 return;8386 }84878585- ether_addr_copy(port->netdev->dev_addr, eth_port->mac_addr);8686- ether_addr_copy(port->netdev->perm_addr, eth_port->mac_addr);8888+ ether_addr_copy(netdev->dev_addr, eth_port->mac_addr);8989+ ether_addr_copy(netdev->perm_addr, eth_port->mac_addr);8790}88918992static struct nfp_eth_table_port *···514511 return PTR_ERR(mem);515512 }516513517517- min_size = NFP_MAC_STATS_SIZE * (pf->eth_tbl->max_index + 1);518518- pf->mac_stats_mem = nfp_rtsym_map(pf->rtbl, "_mac_stats",519519- "net.macstats", min_size,520520- &pf->mac_stats_bar);521521- if (IS_ERR(pf->mac_stats_mem)) {522522- if (PTR_ERR(pf->mac_stats_mem) != -ENOENT) {523523- err = PTR_ERR(pf->mac_stats_mem);524524- goto err_unmap_ctrl;514514+ if (pf->eth_tbl) {515515+ min_size = NFP_MAC_STATS_SIZE * (pf->eth_tbl->max_index + 1);516516+ pf->mac_stats_mem = nfp_rtsym_map(pf->rtbl, "_mac_stats",517517+ "net.macstats", min_size,518518+ &pf->mac_stats_bar);519519+ if (IS_ERR(pf->mac_stats_mem)) {520520+ if (PTR_ERR(pf->mac_stats_mem) != -ENOENT) {521521+ err = PTR_ERR(pf->mac_stats_mem);522522+ goto err_unmap_ctrl;523523+ }524524+ pf->mac_stats_mem = NULL;525525 }526526- pf->mac_stats_mem = NULL;527526 }528527529528 pf->vf_cfg_mem = nfp_net_pf_map_rtsym(pf, "net.vfcfg",
drivers/net/ethernet/qlogic/qed/qed_ll2.c
···23702370 u8 flags = 0;2371237123722372 if (unlikely(skb->ip_summed != CHECKSUM_NONE)) {23732373- DP_INFO(cdev, "Cannot transmit a checksumed packet\n");23732373+ DP_INFO(cdev, "Cannot transmit a checksummed packet\n");23742374 return -EINVAL;23752375 }23762376
drivers/net/ethernet/qlogic/qed/qed_roce.c
···848848849849 if (!(qp->resp_offloaded)) {850850 DP_NOTICE(p_hwfn,851851- "The responder's qp should be offloded before requester's\n");851851+ "The responder's qp should be offloaded before requester's\n");852852 return -EINVAL;853853 }854854
···47844784 * will set rule->filter_id to EFX_ARFS_FILTER_ID_PENDING, meaning that47854785 * the rule is not removed by efx_rps_hash_del() below.47864786 */47874787- ret = efx_ef10_filter_remove_internal(efx, 1U << spec->priority,47884788- filter_idx, true) == 0;47874787+ if (ret)47884788+ ret = efx_ef10_filter_remove_internal(efx, 1U << spec->priority,47894789+ filter_idx, true) == 0;47894790 /* While we can't safely dereference rule (we dropped the lock), we can47904791 * still test it for NULL.47914792 */
drivers/net/ethernet/sfc/rx.c
···839839 int rc;840840841841 rc = efx->type->filter_insert(efx, &req->spec, true);842842+ if (rc >= 0)843843+ rc %= efx->type->max_rx_ip_filters;842844 if (efx->rps_hash_table) {843845 spin_lock_bh(&efx->rps_hash_lock);844846 rule = efx_rps_hash_find(efx, &req->spec);
···535535536536 /* Grab the bits from PHYIR1, and put them in the upper half */537537 phy_reg = mdiobus_read(bus, addr, MII_PHYSID1);538538- if (phy_reg < 0)538538+ if (phy_reg < 0) {539539+ /* if there is no device, return without an error so scanning540540+ * the bus works properly541541+ */542542+ if (phy_reg == -EIO || phy_reg == -ENODEV) {543543+ *phy_id = 0xffffffff;544544+ return 0;545545+ }546546+539547 return -EIO;548548+ }540549541550 *phy_id = (phy_reg & 0xffff) << 16;542551
drivers/net/usb/qmi_wwan.c
···10981098 {QMI_FIXED_INTF(0x05c6, 0x9080, 8)},10991099 {QMI_FIXED_INTF(0x05c6, 0x9083, 3)},11001100 {QMI_FIXED_INTF(0x05c6, 0x9084, 4)},11011101+ {QMI_FIXED_INTF(0x05c6, 0x90b2, 3)}, /* ublox R410M */11011102 {QMI_FIXED_INTF(0x05c6, 0x920d, 0)},11021103 {QMI_FIXED_INTF(0x05c6, 0x920d, 5)},11031104 {QMI_QUIRK_SET_DTR(0x05c6, 0x9625, 4)}, /* YUGA CLM920-NC5 */···13421341 if (!id->driver_info) {13431342 dev_dbg(&intf->dev, "setting defaults for dynamic device id\n");13441343 id->driver_info = (unsigned long)&qmi_wwan_info;13441344+ }13451345+13461346+ /* There are devices where the same interface number can be13471347+ * configured as different functions. We should only bind to13481348+ * vendor specific functions when matching on interface number13491349+ */13501350+ if (id->match_flags & USB_DEVICE_ID_MATCH_INT_NUMBER &&13511351+ desc->bInterfaceClass != USB_CLASS_VENDOR_SPEC) {13521352+ dev_dbg(&intf->dev,13531353+ "Rejecting interface number match for class %02x\n",13541354+ desc->bInterfaceClass);13551355+ return -ENODEV;13451356 }1346135713471358 /* Quectel EC20 quirk where we've QMI on interface 4 instead of 0 */
···101101 *102102 * This function parses the regulatory channel data received as a103103 * MCC_UPDATE_CMD command. It returns a newly allocation regulatory domain,104104- * to be fed into the regulatory core. An ERR_PTR is returned on error.104104+ * to be fed into the regulatory core. If geo_info is set, it is handled105105+ * accordingly. An ERR_PTR is returned on error.105106 * If not given to the regulatory core, the user is responsible for freeing106107 * the regdomain returned here with kfree.107108 */108109struct ieee80211_regdomain *109110iwl_parse_nvm_mcc_info(struct device *dev, const struct iwl_cfg *cfg,110110- int num_of_ch, __le32 *channels, u16 fw_mcc);111111+ int num_of_ch, __le32 *channels, u16 fw_mcc,112112+ u16 geo_info);111113112114#endif /* __iwl_nvm_parse_h__ */
drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
···311311 regd = iwl_parse_nvm_mcc_info(mvm->trans->dev, mvm->cfg,312312 __le32_to_cpu(resp->n_channels),313313 resp->channels,314314- __le16_to_cpu(resp->mcc));314314+ __le16_to_cpu(resp->mcc),315315+ __le16_to_cpu(resp->geo_info));315316 /* Store the return source id */316317 src_id = resp->source_id;317318 kfree(resp);
···942942 int offset;943943 const char *p, *q, *options = NULL;944944 int l;945945- const struct earlycon_id *match;945945+ const struct earlycon_id **p_match;946946 const void *fdt = initial_boot_params;947947948948 offset = fdt_path_offset(fdt, "/chosen");···969969 return 0;970970 }971971972972- for (match = __earlycon_table; match < __earlycon_table_end; match++) {972972+ for (p_match = __earlycon_table; p_match < __earlycon_table_end;973973+ p_match++) {974974+ const struct earlycon_id *match = *p_match;975975+973976 if (!match->compatible[0])974977 continue;975978
drivers/parisc/ccio-dma.c
···12631263 * I/O Page Directory, the resource map, and initalizing the12641264 * U2/Uturn chip into virtual mode.12651265 */12661266-static void12661266+static void __init12671267ccio_ioc_init(struct ioc *ioc)12681268{12691269 int i;
drivers/rtc/rtc-opal.c
···57575858static int opal_get_rtc_time(struct device *dev, struct rtc_time *tm)5959{6060- long rc = OPAL_BUSY;6060+ s64 rc = OPAL_BUSY;6161 int retries = 10;6262 u32 y_m_d;6363 u64 h_m_s_ms;···66666767 while (rc == OPAL_BUSY || rc == OPAL_BUSY_EVENT) {6868 rc = opal_rtc_read(&__y_m_d, &__h_m_s_ms);6969- if (rc == OPAL_BUSY_EVENT)6969+ if (rc == OPAL_BUSY_EVENT) {7070+ msleep(OPAL_BUSY_DELAY_MS);7071 opal_poll_events(NULL);7171- else if (retries-- && (rc == OPAL_HARDWARE7272- || rc == OPAL_INTERNAL_ERROR))7373- msleep(10);7474- else if (rc != OPAL_BUSY && rc != OPAL_BUSY_EVENT)7575- break;7272+ } else if (rc == OPAL_BUSY) {7373+ msleep(OPAL_BUSY_DELAY_MS);7474+ } else if (rc == OPAL_HARDWARE || rc == OPAL_INTERNAL_ERROR) {7575+ if (retries--) {7676+ msleep(10); /* Wait 10ms before retry */7777+ rc = OPAL_BUSY; /* go around again */7878+ }7979+ }7680 }77817882 if (rc != OPAL_SUCCESS)···91879288static int opal_set_rtc_time(struct device *dev, struct rtc_time *tm)9389{9494- long rc = OPAL_BUSY;9090+ s64 rc = OPAL_BUSY;9591 int retries = 10;9692 u32 y_m_d = 0;9793 u64 h_m_s_ms = 0;98949995 tm_to_opal(tm, &y_m_d, &h_m_s_ms);9696+10097 while (rc == OPAL_BUSY || rc == OPAL_BUSY_EVENT) {10198 rc = opal_rtc_write(y_m_d, h_m_s_ms);102102- if (rc == OPAL_BUSY_EVENT)9999+ if (rc == OPAL_BUSY_EVENT) {100100+ msleep(OPAL_BUSY_DELAY_MS);103101 opal_poll_events(NULL);104104- else if (retries-- && (rc == OPAL_HARDWARE105105- || rc == OPAL_INTERNAL_ERROR))106106- msleep(10);107107- else if (rc != OPAL_BUSY && rc != OPAL_BUSY_EVENT)108108- break;102102+ } else if (rc == OPAL_BUSY) {103103+ msleep(OPAL_BUSY_DELAY_MS);104104+ } else if (rc == OPAL_HARDWARE || rc == OPAL_INTERNAL_ERROR) {105105+ if (retries--) {106106+ msleep(10); /* Wait 10ms before retry */107107+ rc = OPAL_BUSY; /* go around again */108108+ }109109+ }109110 }110111111112 return rc == OPAL_SUCCESS ? 0 : -EIO;
drivers/sbus/char/oradax.c
···33 *44 * This program is free software: you can redistribute it and/or modify55 * it under the terms of the GNU General Public License as published by66- * the Free Software Foundation, either version 3 of the License, or66+ * the Free Software Foundation, either version 2 of the License, or77 * (at your option) any later version.88 *99 * This program is distributed in the hope that it will be useful,
drivers/scsi/isci/port_config.c
···291291 * Note: We have not moved the current phy_index so we will actually292292 * compare the startting phy with itself.293293 * This is expected and required to add the phy to the port. */294294- while (phy_index < SCI_MAX_PHYS) {294294+ for (; phy_index < SCI_MAX_PHYS; phy_index++) {295295 if ((phy_mask & (1 << phy_index)) == 0)296296 continue;297297 sci_phy_get_sas_address(&ihost->phys[phy_index],···311311 &ihost->phys[phy_index]);312312313313 assigned_phy_mask |= (1 << phy_index);314314- phy_index++;315314 }316315317316 }
drivers/scsi/storvsc_drv.c
···17221722 max_targets = STORVSC_MAX_TARGETS;17231723 max_channels = STORVSC_MAX_CHANNELS;17241724 /*17251725- * On Windows8 and above, we support sub-channels for storage.17251725+ * On Windows8 and above, we support sub-channels for storage17261726+ * on SCSI and FC controllers.17261727 * The number of sub-channels offerred is based on the number of17271728 * VCPUs in the guest.17281729 */17291729- max_sub_channels = (num_cpus / storvsc_vcpus_per_sub_channel);17301730+ if (!dev_is_ide)17311731+ max_sub_channels =17321732+ (num_cpus - 1) / storvsc_vcpus_per_sub_channel;17301733 }1731173417321735 scsi_driver.can_queue = (max_outstanding_req_per_channel *
···4545struct rpi_power_domain_packet {4646 u32 domain;4747 u32 on;4848-} __packet;4848+};49495050/*5151 * Asks the firmware to enable or disable power on a specific power
drivers/staging/wilc1000/host_interface.c
···13901390 }1391139113921392 if (hif_drv->usr_conn_req.ies) {13931393- conn_info.req_ies = kmemdup(conn_info.req_ies,13931393+ conn_info.req_ies = kmemdup(hif_drv->usr_conn_req.ies,13941394 hif_drv->usr_conn_req.ies_len,13951395 GFP_KERNEL);13961396 if (conn_info.req_ies)
drivers/target/target_core_iblock.c
···427427{428428 struct se_device *dev = cmd->se_dev;429429 struct scatterlist *sg = &cmd->t_data_sg[0];430430- unsigned char *buf, zero = 0x00, *p = &zero;431431- int rc, ret;430430+ unsigned char *buf, *not_zero;431431+ int ret;432432433433 buf = kmap(sg_page(sg)) + sg->offset;434434 if (!buf)···437437 * Fall back to block_execute_write_same() slow-path if438438 * incoming WRITE_SAME payload does not contain zeros.439439 */440440- rc = memcmp(buf, p, cmd->data_length);440440+ not_zero = memchr_inv(buf, 0x00, cmd->data_length);441441 kunmap(sg_page(sg));442442443443- if (rc)443443+ if (not_zero)444444 return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;445445446446 ret = blkdev_issue_zeroout(bdev,
drivers/tty/n_gsm.c
···121121 struct mutex mutex;122122123123 /* Link layer */124124+ int mode;125125+#define DLCI_MODE_ABM 0 /* Normal Asynchronous Balanced Mode */126126+#define DLCI_MODE_ADM 1 /* Asynchronous Disconnected Mode */124127 spinlock_t lock; /* Protects the internal state */125128 struct timer_list t1; /* Retransmit timer for SABM and UA */126129 int retries;···13671364 ctrl->data = data;13681365 ctrl->len = clen;13691366 gsm->pending_cmd = ctrl;13701370- gsm->cretries = gsm->n2;13671367+13681368+ /* If DLCI0 is in ADM mode skip retries, it won't respond */13691369+ if (gsm->dlci[0]->mode == DLCI_MODE_ADM)13701370+ gsm->cretries = 1;13711371+ else13721372+ gsm->cretries = gsm->n2;13731373+13711374 mod_timer(&gsm->t2_timer, jiffies + gsm->t2 * HZ / 100);13721375 gsm_control_transmit(gsm, ctrl);13731376 spin_unlock_irqrestore(&gsm->control_lock, flags);···14811472 if (debug & 8)14821473 pr_info("DLCI %d opening in ADM mode.\n",14831474 dlci->addr);14751475+ dlci->mode = DLCI_MODE_ADM;14841476 gsm_dlci_open(dlci);14851477 } else {14861478 gsm_dlci_close(dlci);···28712861static int gsm_carrier_raised(struct tty_port *port)28722862{28732863 struct gsm_dlci *dlci = container_of(port, struct gsm_dlci, port);28642864+ struct gsm_mux *gsm = dlci->gsm;28652865+28742866 /* Not yet open so no carrier info */28752867 if (dlci->state != DLCI_OPEN)28762868 return 0;28772869 if (debug & 2)28782870 return 1;28712871+28722872+ /*28732873+ * Basic mode with control channel in ADM mode may not respond28742874+ * to CMD_MSC at all and modem_rx is empty.28752875+ */28762876+ if (gsm->encoding == 0 && gsm->dlci[0]->mode == DLCI_MODE_ADM &&28772877+ !dlci->modem_rx)28782878+ return 1;28792879+28792880 return dlci->modem_rx & TIOCM_CD;28802881}28812882
drivers/tty/serial/earlycon.c
···169169 */170170int __init setup_earlycon(char *buf)171171{172172- const struct earlycon_id *match;172172+ const struct earlycon_id **p_match;173173174174 if (!buf || !buf[0])175175 return -EINVAL;···177177 if (early_con.flags & CON_ENABLED)178178 return -EALREADY;179179180180- for (match = __earlycon_table; match < __earlycon_table_end; match++) {180180+ for (p_match = __earlycon_table; p_match < __earlycon_table_end;181181+ p_match++) {182182+ const struct earlycon_id *match = *p_match;181183 size_t len = strlen(match->name);182184183185 if (strncmp(buf, match->name, len))
drivers/tty/serial/imx.c
···316316 * differ from the value that was last written. As it only317317 * clears after being set, reread conditionally.318318 */319319- if (sport->ucr2 & UCR2_SRST)319319+ if (!(sport->ucr2 & UCR2_SRST))320320 sport->ucr2 = readl(sport->port.membase + offset);321321 return sport->ucr2;322322 break;···18331833 rs485conf->flags &= ~SER_RS485_ENABLED;1834183418351835 if (rs485conf->flags & SER_RS485_ENABLED) {18361836+ /* Enable receiver if low-active RTS signal is requested */18371837+ if (sport->have_rtscts && !sport->have_rtsgpio &&18381838+ !(rs485conf->flags & SER_RS485_RTS_ON_SEND))18391839+ rs485conf->flags |= SER_RS485_RX_DURING_TX;18401840+18361841 /* disable transmitter */18371842 ucr2 = imx_uart_readl(sport, UCR2);18381843 if (rs485conf->flags & SER_RS485_RTS_AFTER_SEND)···22692264 if (sport->port.rs485.flags & SER_RS485_ENABLED &&22702265 (!sport->have_rtscts && !sport->have_rtsgpio))22712266 dev_err(&pdev->dev, "no RTS control, disabling rs485\n");22672267+22682268+ /*22692269+ * If using the i.MX UART RTS/CTS control then the RTS (CTS_B)22702270+ * signal cannot be set low during transmission in case the22712271+ * receiver is off (limitation of the i.MX UART IP).22722272+ */22732273+ if (sport->port.rs485.flags & SER_RS485_ENABLED &&22742274+ sport->have_rtscts && !sport->have_rtsgpio &&22752275+ (!(sport->port.rs485.flags & SER_RS485_RTS_ON_SEND) &&22762276+ !(sport->port.rs485.flags & SER_RS485_RX_DURING_TX)))22772277+ dev_err(&pdev->dev,22782278+ "low-active RTS not possible when receiver is off, enabling receiver\n");2272227922732280 imx_uart_rs485_config(&sport->port, &sport->port.rs485);22742281
···10221022 struct qcom_geni_serial_port *port;10231023 struct uart_port *uport;10241024 struct resource *res;10251025+ int irq;1025102610261027 if (pdev->dev.of_node)10271028 line = of_alias_get_id(pdev->dev.of_node, "serial");···10621061 port->rx_fifo_depth = DEF_FIFO_DEPTH_WORDS;10631062 port->tx_fifo_width = DEF_FIFO_WIDTH_BITS;1064106310651065- uport->irq = platform_get_irq(pdev, 0);10661066- if (uport->irq < 0) {10671067- dev_err(&pdev->dev, "Failed to get IRQ %d\n", uport->irq);10681068- return uport->irq;10641064+ irq = platform_get_irq(pdev, 0);10651065+ if (irq < 0) {10661066+ dev_err(&pdev->dev, "Failed to get IRQ %d\n", irq);10671067+ return irq;10691068 }10691069+ uport->irq = irq;1070107010711071 uport->private_data = &qcom_geni_console_driver;10721072 platform_set_drvdata(pdev, port);
+1-1
drivers/tty/serial/xilinx_uartps.c
···11811181 /* only set baud if specified on command line - otherwise11821182 * assume it has been initialized by a boot loader.11831183 */11841184- if (device->baud) {11841184+ if (port->uartclk && device->baud) {11851185 u32 cd = 0, bdiv = 0;11861186 u32 mr;11871187 int div8;
···176176 return ERR_CAST(ldops);177177 }178178179179- ld = kmalloc(sizeof(struct tty_ldisc), GFP_KERNEL);180180- if (ld == NULL) {181181- put_ldops(ldops);182182- return ERR_PTR(-ENOMEM);183183- }184184-179179+ /*180180+ * There is no way to handle allocation failure of only 16 bytes.181181+ * Let's simplify error handling and save more memory.182182+ */183183+ ld = kmalloc(sizeof(struct tty_ldisc), GFP_KERNEL | __GFP_NOFAIL);185184 ld->ops = ldops;186185 ld->tty = tty;187186···526527static void tty_ldisc_restore(struct tty_struct *tty, struct tty_ldisc *old)527528{528529 /* There is an outstanding reference here so this is safe */529529- old = tty_ldisc_get(tty, old->ops->num);530530- WARN_ON(IS_ERR(old));531531- tty->ldisc = old;532532- tty_set_termios_ldisc(tty, old->ops->num);533533- if (tty_ldisc_open(tty, old) < 0) {534534- tty_ldisc_put(old);530530+ if (tty_ldisc_failto(tty, old->ops->num) < 0) {531531+ const char *name = tty_name(tty);532532+533533+ pr_warn("Falling back ldisc for %s.\n", name);535534 /* The traditional behaviour is to fall back to N_TTY, we536535 want to avoid falling back to N_NULL unless we have no537536 choice to avoid the risk of breaking anything */538537 if (tty_ldisc_failto(tty, N_TTY) < 0 &&539538 tty_ldisc_failto(tty, N_NULL) < 0)540540- panic("Couldn't open N_NULL ldisc for %s.",541541- tty_name(tty));539539+ panic("Couldn't open N_NULL ldisc for %s.", name);542540 }543541}544542···820824 * the tty structure is not completely set up when this call is made.821825 */822826823823-void tty_ldisc_init(struct tty_struct *tty)827827+int tty_ldisc_init(struct tty_struct *tty)824828{825829 struct tty_ldisc *ld = tty_ldisc_get(tty, N_TTY);826830 if (IS_ERR(ld))827827- panic("n_tty: init_tty");831831+ return PTR_ERR(ld);828832 tty->ldisc = ld;833833+ return 0;829834}830835831836/**
+23-49
drivers/uio/uio_hv_generic.c
···1919 * # echo -n "ed963694-e847-4b2a-85af-bc9cfc11d6f3" \2020 * > /sys/bus/vmbus/drivers/uio_hv_generic/bind2121 */2222-2222+#define DEBUG 12323#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt24242525#include <linux/device.h>···9494 */9595static void hv_uio_channel_cb(void *context)9696{9797- struct hv_uio_private_data *pdata = context;9898- struct hv_device *dev = pdata->device;9797+ struct vmbus_channel *chan = context;9898+ struct hv_device *hv_dev = chan->device_obj;9999+ struct hv_uio_private_data *pdata = hv_get_drvdata(hv_dev);99100100100- dev->channel->inbound.ring_buffer->interrupt_mask = 1;101101+ chan->inbound.ring_buffer->interrupt_mask = 1;101102 virt_mb();102103103104 uio_event_notify(&pdata->info);···122121 uio_event_notify(&pdata->info);123122}124123125125-/*126126- * Handle fault when looking for sub channel ring buffer127127- * Subchannel ring buffer is same as resource 0 which is main ring buffer128128- * This is derived from uio_vma_fault124124+/* Sysfs API to allow mmap of the ring buffers125125+ * The ring buffer is allocated as contiguous memory by vmbus_open129126 */130130-static int hv_uio_vma_fault(struct vm_fault *vmf)131131-{132132- struct vm_area_struct *vma = vmf->vma;133133- void *ring_buffer = vma->vm_private_data;134134- struct page *page;135135- void *addr;136136-137137- addr = ring_buffer + (vmf->pgoff << PAGE_SHIFT);138138- page = virt_to_page(addr);139139- get_page(page);140140- vmf->page = page;141141- return 0;142142-}143143-144144-static const struct vm_operations_struct hv_uio_vm_ops = {145145- .fault = hv_uio_vma_fault,146146-};147147-148148-/* Sysfs API to allow mmap of the ring buffers */149127static int hv_uio_ring_mmap(struct file *filp, struct kobject *kobj,150128 struct bin_attribute *attr,151129 struct vm_area_struct *vma)152130{153131 struct vmbus_channel *channel154132 = container_of(kobj, struct vmbus_channel, kobj);155155- unsigned long requested_pages, actual_pages;133133+ struct hv_device *dev = channel->primary_channel->device_obj;134134+ u16 q_idx = channel->offermsg.offer.sub_channel_index;156135157157- if (vma->vm_end < vma->vm_start)158158- return -EINVAL;136136+ dev_dbg(&dev->device, "mmap channel %u pages %#lx at %#lx\n",137137+ q_idx, vma_pages(vma), vma->vm_pgoff);159138160160- /* only allow 0 for now */161161- if (vma->vm_pgoff > 0)162162- return -EINVAL;163163-164164- requested_pages = vma_pages(vma);165165- actual_pages = 2 * HV_RING_SIZE;166166- if (requested_pages > actual_pages)167167- return -EINVAL;168168-169169- vma->vm_private_data = channel->ringbuffer_pages;170170- vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;171171- vma->vm_ops = &hv_uio_vm_ops;172172- return 0;139139+ return vm_iomap_memory(vma, virt_to_phys(channel->ringbuffer_pages),140140+ channel->ringbuffer_pagecount << PAGE_SHIFT);173141}174142175175-static struct bin_attribute ring_buffer_bin_attr __ro_after_init = {143143+static const struct bin_attribute ring_buffer_bin_attr = {176144 .attr = {177145 .name = "ring",178146 .mode = 0600,179179- /* size is set at init time */180147 },148148+ .size = 2 * HV_RING_SIZE * PAGE_SIZE,181149 .mmap = hv_uio_ring_mmap,182150};183151184184-/* Callback from VMBUS subystem when new channel created. */152152+/* Callback from VMBUS subsystem when new channel created. */185153static void186154hv_uio_new_channel(struct vmbus_channel *new_sc)187155{188156 struct hv_device *hv_dev = new_sc->primary_channel->device_obj;189157 struct device *device = &hv_dev->device;190190- struct hv_uio_private_data *pdata = hv_get_drvdata(hv_dev);191158 const size_t ring_bytes = HV_RING_SIZE * PAGE_SIZE;192159 int ret;193160194161 /* Create host communication ring */195162 ret = vmbus_open(new_sc, ring_bytes, ring_bytes, NULL, 0,196196- hv_uio_channel_cb, pdata);163163+ hv_uio_channel_cb, new_sc);197164 if (ret) {198165 dev_err(device, "vmbus_open subchannel failed: %d\n", ret);199166 return;···203234204235 ret = vmbus_open(dev->channel, HV_RING_SIZE * PAGE_SIZE,205236 HV_RING_SIZE * PAGE_SIZE, NULL, 0,206206- hv_uio_channel_cb, pdata);237237+ hv_uio_channel_cb, dev->channel);207238 if (ret)208239 goto fail;209240···294325295326 vmbus_set_chn_rescind_callback(dev->channel, hv_uio_rescind);296327 vmbus_set_sc_create_callback(dev->channel, hv_uio_new_channel);328328+329329+ ret = sysfs_create_bin_file(&dev->channel->kobj, &ring_buffer_bin_attr);330330+ if (ret)331331+ dev_notice(&dev->device,332332+ "sysfs create ring bin file failed; %d\n", ret);297333298334 hv_set_drvdata(dev, pdata);299335
···6262 - Fundamental Software dongle.6363 - Google USB serial devices6464 - HP4x calculators6565+ - Libtransistor USB console6566 - a number of Motorola phones6667 - Motorola Tetra devices6768 - Novatel Wireless GPS receivers
+1
drivers/usb/serial/cp210x.c
···214214 { USB_DEVICE(0x3195, 0xF190) }, /* Link Instruments MSO-19 */215215 { USB_DEVICE(0x3195, 0xF280) }, /* Link Instruments MSO-28 */216216 { USB_DEVICE(0x3195, 0xF281) }, /* Link Instruments MSO-28 */217217+ { USB_DEVICE(0x3923, 0x7A0B) }, /* National Instruments USB Serial Console */217218 { USB_DEVICE(0x413C, 0x9500) }, /* DW700 GPS USB interface */218219 { } /* Terminating Entry */219220};
···2828 * difficult to estimate the time it takes for the system to process the command2929 * before it is actually passed to the PPM.3030 */3131-#define UCSI_TIMEOUT_MS 10003131+#define UCSI_TIMEOUT_MS 500032323333/*3434 * UCSI_SWAP_TIMEOUT_MS - Timeout for role swap requests
+5
drivers/usb/usbip/stub_main.c
···186186 if (!bid)187187 return -ENODEV;188188189189+ /* device_attach() callers should hold parent lock for USB */190190+ if (bid->udev->dev.parent)191191+ device_lock(bid->udev->dev.parent);189192 ret = device_attach(&bid->udev->dev);193193+ if (bid->udev->dev.parent)194194+ device_unlock(bid->udev->dev.parent);190195 if (ret < 0) {191196 dev_err(&bid->udev->dev, "rebind failed\n");192197 return ret;
···8787 struct vbg_session *session = filp->private_data;8888 size_t returned_size, size;8989 struct vbg_ioctl_hdr hdr;9090+ bool is_vmmdev_req;9091 int ret = 0;9192 void *buf;9293···107106 if (size > SZ_16M)108107 return -E2BIG;109108110110- /* __GFP_DMA32 because IOCTL_VMMDEV_REQUEST passes this to the host */111111- buf = kmalloc(size, GFP_KERNEL | __GFP_DMA32);109109+ /*110110+ * IOCTL_VMMDEV_REQUEST needs the buffer to be below 4G to avoid111111+ * the need for a bounce-buffer and another copy later on.112112+ */113113+ is_vmmdev_req = (req & ~IOCSIZE_MASK) == VBG_IOCTL_VMMDEV_REQUEST(0) ||114114+ req == VBG_IOCTL_VMMDEV_REQUEST_BIG;115115+116116+ if (is_vmmdev_req)117117+ buf = vbg_req_alloc(size, VBG_IOCTL_HDR_TYPE_DEFAULT);118118+ else119119+ buf = kmalloc(size, GFP_KERNEL);112120 if (!buf)113121 return -ENOMEM;114122···142132 ret = -EFAULT;143133144134out:145145- kfree(buf);135135+ if (is_vmmdev_req)136136+ vbg_req_free(buf, size);137137+ else138138+ kfree(buf);146139147140 return ret;148141}
+13-4
drivers/virt/vboxguest/vboxguest_utils.c
···6565void *vbg_req_alloc(size_t len, enum vmmdev_request_type req_type)6666{6767 struct vmmdev_request_header *req;6868+ int order = get_order(PAGE_ALIGN(len));68696969- req = kmalloc(len, GFP_KERNEL | __GFP_DMA32);7070+ req = (void *)__get_free_pages(GFP_KERNEL | GFP_DMA32, order);7071 if (!req)7172 return NULL;7273···8180 req->reserved2 = 0;82818382 return req;8383+}8484+8585+void vbg_req_free(void *req, size_t len)8686+{8787+ if (!req)8888+ return;8989+9090+ free_pages((unsigned long)req, get_order(PAGE_ALIGN(len)));8491}85928693/* Note this function returns a VBox status code, not a negative errno!! */···146137 rc = hgcm_connect->header.result;147138 }148139149149- kfree(hgcm_connect);140140+ vbg_req_free(hgcm_connect, sizeof(*hgcm_connect));150141151142 *vbox_status = rc;152143 return 0;···175166 if (rc >= 0)176167 rc = hgcm_disconnect->header.result;177168178178- kfree(hgcm_disconnect);169169+ vbg_req_free(hgcm_disconnect, sizeof(*hgcm_disconnect));179170180171 *vbox_status = rc;181172 return 0;···632623 }633624634625 if (!leak_it)635635- kfree(call);626626+ vbg_req_free(call, size);636627637628free_bounce_bufs:638629 if (bounce_bufs) {
+25-3
fs/ceph/xattr.c
···228228229229static bool ceph_vxattrcb_quota_exists(struct ceph_inode_info *ci)230230{231231- return (ci->i_max_files || ci->i_max_bytes);231231+ bool ret = false;232232+ spin_lock(&ci->i_ceph_lock);233233+ if ((ci->i_max_files || ci->i_max_bytes) &&234234+ ci->i_vino.snap == CEPH_NOSNAP &&235235+ ci->i_snap_realm &&236236+ ci->i_snap_realm->ino == ci->i_vino.ino)237237+ ret = true;238238+ spin_unlock(&ci->i_ceph_lock);239239+ return ret;232240}233241234242static size_t ceph_vxattrcb_quota(struct ceph_inode_info *ci, char *val,···10161008 char *newval = NULL;10171009 struct ceph_inode_xattr *xattr = NULL;10181010 int required_blob_size;10111011+ bool check_realm = false;10191012 bool lock_snap_rwsem = false;1020101310211014 if (ceph_snap(inode) != CEPH_NOSNAP)10221015 return -EROFS;1023101610241017 vxattr = ceph_match_vxattr(inode, name);10251025- if (vxattr && vxattr->readonly)10261026- return -EOPNOTSUPP;10181018+ if (vxattr) {10191019+ if (vxattr->readonly)10201020+ return -EOPNOTSUPP;10211021+ if (value && !strncmp(vxattr->name, "ceph.quota", 10))10221022+ check_realm = true;10231023+ }1027102410281025 /* pass any unhandled ceph.* xattrs through to the MDS */10291026 if (!strncmp(name, XATTR_CEPH_PREFIX, XATTR_CEPH_PREFIX_LEN))···11221109 err = -EBUSY;11231110 } else {11241111 err = ceph_sync_setxattr(inode, name, value, size, flags);11121112+ if (err >= 0 && check_realm) {11131113+ /* check if snaprealm was created for quota inode */11141114+ spin_lock(&ci->i_ceph_lock);11151115+ if ((ci->i_max_files || ci->i_max_bytes) &&11161116+ !(ci->i_snap_realm &&11171117+ ci->i_snap_realm->ino == ci->i_vino.ino))11181118+ err = -EOPNOTSUPP;11191119+ spin_unlock(&ci->i_ceph_lock);11201120+ }11251121 }11261122out:11271123 ceph_free_cap_flush(prealloc_cf);
+3
fs/cifs/cifssmb.c
···455455 server->sign = true;456456 }457457458458+ if (cifs_rdma_enabled(server) && server->sign)459459+ cifs_dbg(VFS, "Signing is enabled, and RDMA read/write will be disabled");460460+458461 return 0;459462}460463
+16-16
fs/cifs/connect.c
···29592959 }29602960 }2961296129622962+ if (volume_info->seal) {29632963+ if (ses->server->vals->protocol_id == 0) {29642964+ cifs_dbg(VFS,29652965+ "SMB3 or later required for encryption\n");29662966+ rc = -EOPNOTSUPP;29672967+ goto out_fail;29682968+ } else if (tcon->ses->server->capabilities &29692969+ SMB2_GLOBAL_CAP_ENCRYPTION)29702970+ tcon->seal = true;29712971+ else {29722972+ cifs_dbg(VFS, "Encryption is not supported on share\n");29732973+ rc = -EOPNOTSUPP;29742974+ goto out_fail;29752975+ }29762976+ }29772977+29622978 /*29632979 * BB Do we need to wrap session_mutex around this TCon call and Unix29642980 * SetFS as we do on SessSetup and reconnect?···30213005 goto out_fail;30223006 }30233007 tcon->use_resilient = true;30243024- }30253025-30263026- if (volume_info->seal) {30273027- if (ses->server->vals->protocol_id == 0) {30283028- cifs_dbg(VFS,30293029- "SMB3 or later required for encryption\n");30303030- rc = -EOPNOTSUPP;30313031- goto out_fail;30323032- } else if (tcon->ses->server->capabilities &30333033- SMB2_GLOBAL_CAP_ENCRYPTION)30343034- tcon->seal = true;30353035- else {30363036- cifs_dbg(VFS, "Encryption is not supported on share\n");30373037- rc = -EOPNOTSUPP;30383038- goto out_fail;30393039- }30403008 }3041300930423010 /*
···383383build_encrypt_ctxt(struct smb2_encryption_neg_context *pneg_ctxt)384384{385385 pneg_ctxt->ContextType = SMB2_ENCRYPTION_CAPABILITIES;386386- pneg_ctxt->DataLength = cpu_to_le16(6);387387- pneg_ctxt->CipherCount = cpu_to_le16(2);388388- pneg_ctxt->Ciphers[0] = SMB2_ENCRYPTION_AES128_GCM;389389- pneg_ctxt->Ciphers[1] = SMB2_ENCRYPTION_AES128_CCM;386386+ pneg_ctxt->DataLength = cpu_to_le16(4); /* Cipher Count + le16 cipher */387387+ pneg_ctxt->CipherCount = cpu_to_le16(1);388388+/* pneg_ctxt->Ciphers[0] = SMB2_ENCRYPTION_AES128_GCM;*/ /* not supported yet */389389+ pneg_ctxt->Ciphers[0] = SMB2_ENCRYPTION_AES128_CCM;390390}391391392392static void···444444 return -EINVAL;445445 }446446 server->cipher_type = ctxt->Ciphers[0];447447+ server->capabilities |= SMB2_GLOBAL_CAP_ENCRYPTION;447448 return 0;448449}449450···25912590 * If we want to do a RDMA write, fill in and append25922591 * smbd_buffer_descriptor_v1 to the end of read request25932592 */25942594- if (server->rdma && rdata &&25932593+ if (server->rdma && rdata && !server->sign &&25952594 rdata->bytes >= server->smbd_conn->rdma_readwrite_threshold) {2596259525972596 struct smbd_buffer_descriptor_v1 *v1;···29692968 * If we want to do a server RDMA read, fill in and append29702969 * smbd_buffer_descriptor_v1 to the end of write request29712970 */29722972- if (server->rdma && wdata->bytes >=29712971+ if (server->rdma && !server->sign && wdata->bytes >=29732972 server->smbd_conn->rdma_readwrite_threshold) {2974297329752974 struct smbd_buffer_descriptor_v1 *v1;
+1-1
fs/cifs/smb2pdu.h
···297297 __le16 DataLength;298298 __le32 Reserved;299299 __le16 CipherCount; /* AES-128-GCM and AES-128-CCM */300300- __le16 Ciphers[2]; /* Ciphers[0] since only one used now */300300+ __le16 Ciphers[1]; /* Ciphers[0] since only one used now */301301} __packed;302302303303struct smb2_negotiate_rsp {
+12-24
fs/cifs/smbdirect.c
···20862086 int start, i, j;20872087 int max_iov_size =20882088 info->max_send_size - sizeof(struct smbd_data_transfer);20892089- struct kvec iov[SMBDIRECT_MAX_SGE];20892089+ struct kvec *iov;20902090 int rc;2091209120922092 info->smbd_send_pending++;···20962096 }2097209720982098 /*20992099- * This usually means a configuration error21002100- * We use RDMA read/write for packet size > rdma_readwrite_threshold21012101- * as long as it's properly configured we should never get into this21022102- * situation21032103- */21042104- if (rqst->rq_nvec + rqst->rq_npages > SMBDIRECT_MAX_SGE) {21052105- log_write(ERR, "maximum send segment %x exceeding %x\n",21062106- rqst->rq_nvec + rqst->rq_npages, SMBDIRECT_MAX_SGE);21072107- rc = -EINVAL;21082108- goto done;21092109- }21102110-21112111- /*21122112- * Remove the RFC1002 length defined in MS-SMB2 section 2.121132113- * It is used only for TCP transport20992099+ * Skip the RFC1002 length defined in MS-SMB2 section 2.121002100+ * It is used only for TCP transport in the iov[0]21142101 * In future we may want to add a transport layer under protocol21152102 * layer so this will only be issued to TCP transport21162103 */21172117- iov[0].iov_base = (char *)rqst->rq_iov[0].iov_base + 4;21182118- iov[0].iov_len = rqst->rq_iov[0].iov_len - 4;21192119- buflen += iov[0].iov_len;21042104+21052105+ if (rqst->rq_iov[0].iov_len != 4) {21062106+ log_write(ERR, "expected the pdu length in 1st iov, but got %zu\n", rqst->rq_iov[0].iov_len);21072107+ return -EINVAL;21082108+ }21092109+ iov = &rqst->rq_iov[1];2120211021212111 /* total up iov array first */21222122- for (i = 1; i < rqst->rq_nvec; i++) {21232123- iov[i].iov_base = rqst->rq_iov[i].iov_base;21242124- iov[i].iov_len = rqst->rq_iov[i].iov_len;21122112+ for (i = 0; i < rqst->rq_nvec-1; i++) {21252113 buflen += iov[i].iov_len;21262114 }21272115···21862198 goto done;21872199 }21882200 i++;21892189- if (i == rqst->rq_nvec)22012201+ if (i == rqst->rq_nvec-1)21902202 break;21912203 }21922204 start = i;21932205 buflen = 0;21942206 } else {21952207 i++;21962196- if (i == rqst->rq_nvec) {22082208+ if (i == rqst->rq_nvec-1) {21972209 /* send out all remaining vecs */21982210 remaining_data_length -= buflen;21992211 log_write(INFO,
···321321 struct ext4_sb_info *sbi = EXT4_SB(sb);322322 ext4_grpblk_t offset;323323 ext4_grpblk_t next_zero_bit;324324+ ext4_grpblk_t max_bit = EXT4_CLUSTERS_PER_GROUP(sb);324325 ext4_fsblk_t blk;325326 ext4_fsblk_t group_first_block;326327···339338 /* check whether block bitmap block number is set */340339 blk = ext4_block_bitmap(sb, desc);341340 offset = blk - group_first_block;342342- if (offset < 0 || EXT4_B2C(sbi, offset) >= sb->s_blocksize ||341341+ if (offset < 0 || EXT4_B2C(sbi, offset) >= max_bit ||343342 !ext4_test_bit(EXT4_B2C(sbi, offset), bh->b_data))344343 /* bad block bitmap */345344 return blk;···347346 /* check whether the inode bitmap block number is set */348347 blk = ext4_inode_bitmap(sb, desc);349348 offset = blk - group_first_block;350350- if (offset < 0 || EXT4_B2C(sbi, offset) >= sb->s_blocksize ||349349+ if (offset < 0 || EXT4_B2C(sbi, offset) >= max_bit ||351350 !ext4_test_bit(EXT4_B2C(sbi, offset), bh->b_data))352351 /* bad block bitmap */353352 return blk;···355354 /* check whether the inode table block number is set */356355 blk = ext4_inode_table(sb, desc);357356 offset = blk - group_first_block;358358- if (offset < 0 || EXT4_B2C(sbi, offset) >= sb->s_blocksize ||359359- EXT4_B2C(sbi, offset + sbi->s_itb_per_group) >= sb->s_blocksize)357357+ if (offset < 0 || EXT4_B2C(sbi, offset) >= max_bit ||358358+ EXT4_B2C(sbi, offset + sbi->s_itb_per_group) >= max_bit)360359 return blk;361360 next_zero_bit = ext4_find_next_zero_bit(bh->b_data,362361 EXT4_B2C(sbi, offset + sbi->s_itb_per_group),
+11-5
fs/ext4/extents.c
···53295329 stop = le32_to_cpu(extent->ee_block);5330533053315331 /*53325332- * In case of left shift, Don't start shifting extents until we make53335333- * sure the hole is big enough to accommodate the shift.53325332+ * For left shifts, make sure the hole on the left is big enough to53335333+ * accommodate the shift. For right shifts, make sure the last extent53345334+ * won't be shifted beyond EXT_MAX_BLOCKS.53345335 */53355336 if (SHIFT == SHIFT_LEFT) {53365337 path = ext4_find_extent(inode, start - 1, &path,···5351535053525351 if ((start == ex_start && shift > ex_start) ||53535352 (shift > start - ex_end)) {53545354- ext4_ext_drop_refs(path);53555355- kfree(path);53565356- return -EINVAL;53535353+ ret = -EINVAL;53545354+ goto out;53555355+ }53565356+ } else {53575357+ if (shift > EXT_MAX_BLOCKS -53585358+ (stop + ext4_ext_get_actual_len(extent))) {53595359+ ret = -EINVAL;53605360+ goto out;53575361 }53585362 }53595363
+1
fs/ext4/super.c
···58865886MODULE_AUTHOR("Remy Card, Stephen Tweedie, Andrew Morton, Andreas Dilger, Theodore Ts'o and others");58875887MODULE_DESCRIPTION("Fourth Extended Filesystem");58885888MODULE_LICENSE("GPL");58895889+MODULE_SOFTDEP("pre: crc32c");58895890module_init(ext4_init_fs)58905891module_exit(ext4_exit_fs)
+1
fs/jbd2/transaction.c
···532532 */533533 ret = start_this_handle(journal, handle, GFP_NOFS);534534 if (ret < 0) {535535+ handle->h_journal = journal;535536 jbd2_journal_free_reserved(handle);536537 return ret;537538 }
+8-1
fs/xfs/libxfs/xfs_attr.c
···511511 if (args->flags & ATTR_CREATE)512512 return retval;513513 retval = xfs_attr_shortform_remove(args);514514- ASSERT(retval == 0);514514+ if (retval)515515+ return retval;516516+ /*517517+ * Since we have removed the old attr, clear ATTR_REPLACE so518518+ * that the leaf format add routine won't trip over the attr519519+ * not being around.520520+ */521521+ args->flags &= ~ATTR_REPLACE;515522 }516523517524 if (args->namelen >= XFS_ATTR_SF_ENTSIZE_MAX ||
···466466 return __this_address;467467 if (di_size > XFS_DFORK_DSIZE(dip, mp))468468 return __this_address;469469+ if (dip->di_nextents)470470+ return __this_address;469471 /* fall through */470472 case XFS_DINODE_FMT_EXTENTS:471473 case XFS_DINODE_FMT_BTREE:···486484 if (XFS_DFORK_Q(dip)) {487485 switch (dip->di_aformat) {488486 case XFS_DINODE_FMT_LOCAL:487487+ if (dip->di_anextents)488488+ return __this_address;489489+ /* fall through */489490 case XFS_DINODE_FMT_EXTENTS:490491 case XFS_DINODE_FMT_BTREE:491492 break;492493 default:493494 return __this_address;494495 }496496+ } else {497497+ /*498498+ * If there is no fork offset, this may be a freshly-made inode499499+ * in a new disk cluster, in which case di_aformat is zeroed.500500+ * Otherwise, such an inode must be in EXTENTS format; this goes501501+ * for freed inodes as well.502502+ */503503+ switch (dip->di_aformat) {504504+ case 0:505505+ case XFS_DINODE_FMT_EXTENTS:506506+ break;507507+ default:508508+ return __this_address;509509+ }510510+ if (dip->di_anextents)511511+ return __this_address;495512 }496513497514 /* only version 3 or greater inodes are extensively verified here */
+9-5
fs/xfs/xfs_file.c
···778778 if (error)779779 goto out_unlock;780780 } else if (mode & FALLOC_FL_INSERT_RANGE) {781781- unsigned int blksize_mask = i_blocksize(inode) - 1;781781+ unsigned int blksize_mask = i_blocksize(inode) - 1;782782+ loff_t isize = i_size_read(inode);782783783783- new_size = i_size_read(inode) + len;784784 if (offset & blksize_mask || len & blksize_mask) {785785 error = -EINVAL;786786 goto out_unlock;787787 }788788789789- /* check the new inode size does not wrap through zero */790790- if (new_size > inode->i_sb->s_maxbytes) {789789+ /*790790+ * New inode size must not exceed ->s_maxbytes, accounting for791791+ * possible signed overflow.792792+ */793793+ if (inode->i_sb->s_maxbytes - isize < len) {791794 error = -EFBIG;792795 goto out_unlock;793796 }797797+ new_size = isize + len;794798795799 /* Offset should be less than i_size */796796- if (offset >= i_size_read(inode)) {800800+ if (offset >= isize) {797801 error = -EINVAL;798802 goto out_unlock;799803 }
···3737 * Our PSCI implementation stays the same across versions from3838 * v0.2 onward, only adding the few mandatory functions (such3939 * as FEATURES with 1.0) that are required by newer4040- * revisions. It is thus safe to return the latest.4040+ * revisions. It is thus safe to return the latest, unless4141+ * userspace has instructed us otherwise.4142 */4242- if (test_bit(KVM_ARM_VCPU_PSCI_0_2, vcpu->arch.features))4343+ if (test_bit(KVM_ARM_VCPU_PSCI_0_2, vcpu->arch.features)) {4444+ if (vcpu->kvm->arch.psci_version)4545+ return vcpu->kvm->arch.psci_version;4646+4347 return KVM_ARM_PSCI_LATEST;4848+ }44494550 return KVM_ARM_PSCI_0_1;4651}475248534954int kvm_hvc_call_handler(struct kvm_vcpu *vcpu);5555+5656+struct kvm_one_reg;5757+5858+int kvm_arm_get_fw_num_regs(struct kvm_vcpu *vcpu);5959+int kvm_arm_copy_fw_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices);6060+int kvm_arm_get_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);6161+int kvm_arm_set_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);50625163#endif /* __KVM_ARM_PSCI_H__ */
···256256 * automatically.257257 * @pm: Power management operations of the device which matched258258 * this driver.259259- * @coredump: Called through sysfs to initiate a device coredump.259259+ * @coredump: Called when sysfs entry is written to. The device driver260260+ * is expected to call the dev_coredump API resulting in a261261+ * uevent.260262 * @p: Driver core's private data, no one other than the driver261263 * core can touch this.262264 *···290288 const struct attribute_group **groups;291289292290 const struct dev_pm_ops *pm;293293- int (*coredump) (struct device *dev);291291+ void (*coredump) (struct device *dev);294292295293 struct driver_private *p;296294};
···8585 unsigned int write_suspended:1;8686 unsigned int erase_suspended:1;8787 unsigned long in_progress_block_addr;8888+ unsigned long in_progress_block_mask;88898990 struct mutex mutex;9091 wait_queue_head_t wq; /* Wait on here when we're waiting for the chip
···5050 * losing bits). This also has the property (wanted by the dcache)5151 * that the msbits make a good hash table index.5252 */5353-static inline unsigned long end_name_hash(unsigned long hash)5353+static inline unsigned int end_name_hash(unsigned long hash)5454{5555- return __hash_32((unsigned int)hash);5555+ return hash_long(hash, 32);5656}57575858/*
···5252 * @offs_real: Offset clock monotonic -> clock realtime5353 * @offs_boot: Offset clock monotonic -> clock boottime5454 * @offs_tai: Offset clock monotonic -> clock tai5555- * @time_suspended: Accumulated suspend time5655 * @tai_offset: The current UTC to TAI offset in seconds5756 * @clock_was_set_seq: The sequence number of clock was set events5857 * @cs_was_changed_seq: The sequence number of clocksource change events···9495 ktime_t offs_real;9596 ktime_t offs_boot;9697 ktime_t offs_tai;9797- ktime_t time_suspended;9898 s32 tai_offset;9999 unsigned int clock_was_set_seq;100100 u8 cs_was_changed_seq;
+25-12
include/linux/timekeeping.h
···3333extern time64_t ktime_get_seconds(void);3434extern time64_t __ktime_get_real_seconds(void);3535extern time64_t ktime_get_real_seconds(void);3636-extern void ktime_get_active_ts64(struct timespec64 *ts);37363837extern int __getnstimeofday64(struct timespec64 *tv);3938extern void getnstimeofday64(struct timespec64 *tv);4039extern void getboottime64(struct timespec64 *ts);41404242-#define ktime_get_real_ts64(ts) getnstimeofday64(ts)4343-4444-/* Clock BOOTTIME compatibility wrappers */4545-static inline void get_monotonic_boottime64(struct timespec64 *ts)4646-{4747- ktime_get_ts64(ts);4848-}4141+#define ktime_get_real_ts64(ts) getnstimeofday64(ts)49425043/*5144 * ktime_t based interfaces5245 */4646+5347enum tk_offsets {5448 TK_OFFS_REAL,4949+ TK_OFFS_BOOT,5550 TK_OFFS_TAI,5651 TK_OFFS_MAX,5752};···5762extern ktime_t ktime_get_raw(void);5863extern u32 ktime_get_resolution_ns(void);59646060-/* Clock BOOTTIME compatibility wrappers */6161-static inline ktime_t ktime_get_boottime(void) { return ktime_get(); }6262-static inline u64 ktime_get_boot_ns(void) { return ktime_get(); }6363-6465/**6566 * ktime_get_real - get the real (wall-) time in ktime_t format6667 */6768static inline ktime_t ktime_get_real(void)6869{6970 return ktime_get_with_offset(TK_OFFS_REAL);7171+}7272+7373+/**7474+ * ktime_get_boottime - Returns monotonic time since boot in ktime_t format7575+ *7676+ * This is similar to CLOCK_MONTONIC/ktime_get, but also includes the7777+ * time spent in suspend.7878+ */7979+static inline ktime_t ktime_get_boottime(void)8080+{8181+ return ktime_get_with_offset(TK_OFFS_BOOT);7082}71837284/**···102100 return ktime_to_ns(ktime_get_real());103101}104102103103+static inline u64 ktime_get_boot_ns(void)104104+{105105+ return ktime_to_ns(ktime_get_boottime());106106+}107107+105108static inline u64 ktime_get_tai_ns(void)106109{107110 return ktime_to_ns(ktime_get_clocktai());···119112120113extern u64 ktime_get_mono_fast_ns(void);121114extern u64 ktime_get_raw_fast_ns(void);115115+extern u64 ktime_get_boot_fast_ns(void);122116extern u64 ktime_get_real_fast_ns(void);123117124118/*125119 * timespec64 interfaces utilizing the ktime based ones126120 */121121+static inline void get_monotonic_boottime64(struct timespec64 *ts)122122+{123123+ *ts = ktime_to_timespec64(ktime_get_boottime());124124+}125125+127126static inline void timekeeping_clocktai64(struct timespec64 *ts)128127{129128 *ts = ktime_to_timespec64(ktime_get_clocktai());
+1-1
include/linux/tty.h
···701701extern int tty_set_ldisc(struct tty_struct *tty, int disc);702702extern int tty_ldisc_setup(struct tty_struct *tty, struct tty_struct *o_tty);703703extern void tty_ldisc_release(struct tty_struct *tty);704704-extern void tty_ldisc_init(struct tty_struct *tty);704704+extern int __must_check tty_ldisc_init(struct tty_struct *tty);705705extern void tty_ldisc_deinit(struct tty_struct *tty);706706extern int tty_ldisc_receive_buf(struct tty_ldisc *ld, const unsigned char *p,707707 char *f, int count);
-23
include/linux/vbox_utils.h
···2424#define vbg_debug pr_debug2525#endif26262727-/**2828- * Allocate memory for generic request and initialize the request header.2929- *3030- * Return: the allocated memory3131- * @len: Size of memory block required for the request.3232- * @req_type: The generic request type.3333- */3434-void *vbg_req_alloc(size_t len, enum vmmdev_request_type req_type);3535-3636-/**3737- * Perform a generic request.3838- *3939- * Return: VBox status code4040- * @gdev: The Guest extension device.4141- * @req: Pointer to the request structure.4242- */4343-int vbg_req_perform(struct vbg_dev *gdev, void *req);4444-4527int vbg_hgcm_connect(struct vbg_dev *gdev,4628 struct vmmdev_hgcm_service_location *loc,4729 u32 *client_id, int *vbox_status);···3351int vbg_hgcm_call(struct vbg_dev *gdev, u32 client_id, u32 function,3452 u32 timeout_ms, struct vmmdev_hgcm_function_parameter *parms,3553 u32 parm_count, int *vbox_status);3636-3737-int vbg_hgcm_call32(3838- struct vbg_dev *gdev, u32 client_id, u32 function, u32 timeout_ms,3939- struct vmmdev_hgcm_function_parameter32 *parm32, u32 parm_count,4040- int *vbox_status);41544255/**4356 * Convert a VirtualBox status code to a standard Linux kernel return value.
+1
include/net/tls.h
···148148 struct scatterlist *partially_sent_record;149149 u16 partially_sent_offset;150150 unsigned long flags;151151+ bool in_tcp_sendpages;151152152153 u16 pending_open_record_frags;153154 int (*push_pending_record)(struct sock *sk, int flags);
···3131 TP_ARGS(func),32323333 TP_STRUCT__entry(3434- __field(initcall_t, func)3434+ /*3535+ * Use field_struct to avoid is_signed_type()3636+ * comparison of a function pointer3737+ */3838+ __field_struct(initcall_t, func)3539 ),36403741 TP_fast_assign(···5248 TP_ARGS(func, ret),53495450 TP_STRUCT__entry(5555- __field(initcall_t, func)5656- __field(int, ret)5151+ /*5252+ * Use field_struct to avoid is_signed_type()5353+ * comparison of a function pointer5454+ */5555+ __field_struct(initcall_t, func)5656+ __field(int, ret)5757 ),58585959 TP_fast_assign(
···476476}477477478478/* decrement refcnt of all bpf_progs that are stored in this map */479479-void bpf_fd_array_map_clear(struct bpf_map *map)479479+static void bpf_fd_array_map_clear(struct bpf_map *map)480480{481481 struct bpf_array *array = container_of(map, struct bpf_array, map);482482 int i;···495495 .map_fd_get_ptr = prog_fd_array_get_ptr,496496 .map_fd_put_ptr = prog_fd_array_put_ptr,497497 .map_fd_sys_lookup_elem = prog_fd_array_sys_lookup_elem,498498+ .map_release_uref = bpf_fd_array_map_clear,498499};499500500501static struct bpf_event_entry *bpf_event_entry_gen(struct file *perf_file,
+73-26
kernel/bpf/sockmap.c
···
 #include <net/tcp.h>
 #include <linux/ptr_ring.h>
 #include <net/inet_common.h>
+#include <linux/sched/signal.h>
 
 #define SOCK_CREATE_FLAG_MASK \
 	(BPF_F_NUMA_NODE | BPF_F_RDONLY | BPF_F_WRONLY)
···
 	if (ret > 0) {
 		if (apply)
 			apply_bytes -= ret;
+
+		sg->offset += ret;
+		sg->length -= ret;
 		size -= ret;
 		offset += ret;
 		if (uncharge)
···
 		goto retry;
 	}
 
-	sg->length = size;
-	sg->offset = offset;
 	return ret;
 }
···
 	} while (i != md->sg_end);
 }
 
-static void free_bytes_sg(struct sock *sk, int bytes, struct sk_msg_buff *md)
+static void free_bytes_sg(struct sock *sk, int bytes,
+			  struct sk_msg_buff *md, bool charge)
 {
 	struct scatterlist *sg = md->sg_data;
 	int i = md->sg_start, free;
···
 		if (bytes < free) {
 			sg[i].length -= bytes;
 			sg[i].offset += bytes;
-			sk_mem_uncharge(sk, bytes);
+			if (charge)
+				sk_mem_uncharge(sk, bytes);
 			break;
 		}
 
-		sk_mem_uncharge(sk, sg[i].length);
+		if (charge)
+			sk_mem_uncharge(sk, sg[i].length);
 		put_page(sg_page(&sg[i]));
 		bytes -= sg[i].length;
 		sg[i].length = 0;
···
 		if (i == MAX_SKB_FRAGS)
 			i = 0;
 	}
+	md->sg_start = i;
 }
 
 static int free_sg(struct sock *sk, int start, struct sk_msg_buff *md)
···
 	i = md->sg_start;
 
 	do {
-		r->sg_data[i] = md->sg_data[i];
-
 		size = (apply && apply_bytes < md->sg_data[i].length) ?
 			apply_bytes : md->sg_data[i].length;
···
 		}
 
 		sk_mem_charge(sk, size);
+		r->sg_data[i] = md->sg_data[i];
 		r->sg_data[i].length = size;
 		md->sg_data[i].length -= size;
 		md->sg_data[i].offset += size;
···
 				   struct sk_msg_buff *md,
 				   int flags)
 {
+	bool ingress = !!(md->flags & BPF_F_INGRESS);
 	struct smap_psock *psock;
 	struct scatterlist *sg;
-	int i, err, free = 0;
-	bool ingress = !!(md->flags & BPF_F_INGRESS);
+	int err = 0;
 
 	sg = md->sg_data;
···
 out_rcu:
 	rcu_read_unlock();
 out:
-	i = md->sg_start;
-	while (sg[i].length) {
-		free += sg[i].length;
-		put_page(sg_page(&sg[i]));
-		sg[i].length = 0;
-		i++;
-		if (i == MAX_SKB_FRAGS)
-			i = 0;
-	}
-	return free;
+	free_bytes_sg(NULL, send, md, false);
+	return err;
 }
 
 static inline void bpf_md_init(struct smap_psock *psock)
···
 		err = bpf_tcp_sendmsg_do_redirect(redir, send, m, flags);
 		lock_sock(sk);
 
+		if (unlikely(err < 0)) {
+			free_start_sg(sk, m);
+			psock->sg_size = 0;
+			if (!cork)
+				*copied -= send;
+		} else {
+			psock->sg_size -= send;
+		}
+
 		if (cork) {
 			free_start_sg(sk, m);
+			psock->sg_size = 0;
 			kfree(m);
 			m = NULL;
+			err = 0;
 		}
-		if (unlikely(err))
-			*copied -= err;
-		else
-			psock->sg_size -= send;
 		break;
 	case __SK_DROP:
 	default:
-		free_bytes_sg(sk, send, m);
+		free_bytes_sg(sk, send, m, true);
 		apply_bytes_dec(psock, send);
 		*copied -= send;
 		psock->sg_size -= send;
···
 
 out_err:
 	return err;
+}
+
+static int bpf_wait_data(struct sock *sk,
+			 struct smap_psock *psk, int flags,
+			 long timeo, int *err)
+{
+	int rc;
+
+	DEFINE_WAIT_FUNC(wait, woken_wake_function);
+
+	add_wait_queue(sk_sleep(sk), &wait);
+	sk_set_bit(SOCKWQ_ASYNC_WAITDATA, sk);
+	rc = sk_wait_event(sk, &timeo,
+			   !list_empty(&psk->ingress) ||
+			   !skb_queue_empty(&sk->sk_receive_queue),
+			   &wait);
+	sk_clear_bit(SOCKWQ_ASYNC_WAITDATA, sk);
+	remove_wait_queue(sk_sleep(sk), &wait);
+
+	return rc;
 }
 
 static int bpf_tcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
···
 		return tcp_recvmsg(sk, msg, len, nonblock, flags, addr_len);
 
 	lock_sock(sk);
+bytes_ready:
 	while (copied != len) {
 		struct scatterlist *sg;
 		struct sk_msg_buff *md;
···
 			consume_skb(md->skb);
 			kfree(md);
 		}
+	}
+
+	if (!copied) {
+		long timeo;
+		int data;
+		int err = 0;
+
+		timeo = sock_rcvtimeo(sk, nonblock);
+		data = bpf_wait_data(sk, psock, flags, timeo, &err);
+
+		if (data) {
+			if (!skb_queue_empty(&sk->sk_receive_queue)) {
+				release_sock(sk);
+				smap_release_sock(psock, sk);
+				copied = tcp_recvmsg(sk, msg, len, nonblock, flags, addr_len);
+				return copied;
+			}
+			goto bytes_ready;
+		}
+
+		if (err)
+			copied = err;
 	}
 
 	release_sock(sk);
···
 	return err;
 }
 
-static void sock_map_release(struct bpf_map *map, struct file *map_file)
+static void sock_map_release(struct bpf_map *map)
 {
 	struct bpf_stab *stab = container_of(map, struct bpf_stab, map);
 	struct bpf_prog *orig;
···
 	.map_get_next_key = sock_map_get_next_key,
 	.map_update_elem = sock_map_update_elem,
 	.map_delete_elem = sock_map_delete_elem,
-	.map_release = sock_map_release,
+	.map_release_uref = sock_map_release,
 };
 
 BPF_CALL_4(bpf_sock_map_update, struct bpf_sock_ops_kern *, bpf_sock,
+2-2
kernel/bpf/syscall.c
···
 static void bpf_map_put_uref(struct bpf_map *map)
 {
 	if (atomic_dec_and_test(&map->usercnt)) {
-		if (map->map_type == BPF_MAP_TYPE_PROG_ARRAY)
-			bpf_fd_array_map_clear(map);
+		if (map->ops->map_release_uref)
+			map->ops->map_release_uref(map);
 	}
 }
 
+3-4
kernel/events/uprobes.c
···
 	if (!uprobe)
 		return NULL;
 
-	uprobe->inode = igrab(inode);
+	uprobe->inode = inode;
 	uprobe->offset = offset;
 	init_rwsem(&uprobe->register_rwsem);
 	init_rwsem(&uprobe->consumer_rwsem);
···
 	if (cur_uprobe) {
 		kfree(uprobe);
 		uprobe = cur_uprobe;
-		iput(inode);
 	}
 
 	return uprobe;
···
 	rb_erase(&uprobe->rb_node, &uprobes_tree);
 	spin_unlock(&uprobes_treelock);
 	RB_CLEAR_NODE(&uprobe->rb_node); /* for uprobe_is_active() */
-	iput(uprobe->inode);
 	put_uprobe(uprobe);
 }
···
  * tuple).  Creation refcount stops uprobe_unregister from freeing the
  * @uprobe even before the register operation is complete. Creation
  * refcount is released when the last @uc for the @uprobe
- * unregisters.
+ * unregisters. Caller of uprobe_register() is required to keep @inode
+ * (and the containing mount) referenced.
  *
  * Return errno if it cannot successully install probes
  * else return 0 (success)
···
 static ktime_t last_jiffies_update;
 
 /*
- * Called after resume. Make sure that jiffies are not fast forwarded due to
- * clock monotonic being forwarded by the suspended time.
- */
-void tick_sched_forward_next_period(void)
-{
-	last_jiffies_update = tick_next_period;
-}
-
-/*
  * Must be called with interrupts disabled !
  */
 static void tick_do_update_jiffies64(ktime_t now)
···
 		return;
 	}
 
-	hrtimer_set_expires(&ts->sched_timer, tick);
-
-	if (ts->nohz_mode == NOHZ_MODE_HIGHRES)
-		hrtimer_start_expires(&ts->sched_timer, HRTIMER_MODE_ABS_PINNED);
-	else
+	if (ts->nohz_mode == NOHZ_MODE_HIGHRES) {
+		hrtimer_start(&ts->sched_timer, tick, HRTIMER_MODE_ABS_PINNED);
+	} else {
+		hrtimer_set_expires(&ts->sched_timer, tick);
 		tick_program_event(tick, 1);
+	}
 }
 
 static void tick_nohz_retain_tick(struct tick_sched *ts)
+37-41
kernel/time/timekeeping.c
···
 
 static inline void tk_update_sleep_time(struct timekeeper *tk, ktime_t delta)
 {
-	/* Update both bases so mono and raw stay coupled. */
-	tk->tkr_mono.base += delta;
-	tk->tkr_raw.base += delta;
-
-	/* Accumulate time spent in suspend */
-	tk->time_suspended += delta;
+	tk->offs_boot = ktime_add(tk->offs_boot, delta);
 }
 
 /*
···
 }
 EXPORT_SYMBOL_GPL(ktime_get_raw_fast_ns);
 
+/**
+ * ktime_get_boot_fast_ns - NMI safe and fast access to boot clock.
+ *
+ * To keep it NMI safe since we're accessing from tracing, we're not using a
+ * separate timekeeper with updates to monotonic clock and boot offset
+ * protected with seqlocks. This has the following minor side effects:
+ *
+ * (1) Its possible that a timestamp be taken after the boot offset is updated
+ * but before the timekeeper is updated. If this happens, the new boot offset
+ * is added to the old timekeeping making the clock appear to update slightly
+ * earlier:
+ *    CPU 0                                        CPU 1
+ *    timekeeping_inject_sleeptime64()
+ *    __timekeeping_inject_sleeptime(tk, delta);
+ *                                                 timestamp();
+ *    timekeeping_update(tk, TK_CLEAR_NTP...);
+ *
+ * (2) On 32-bit systems, the 64-bit boot offset (tk->offs_boot) may be
+ * partially updated.  Since the tk->offs_boot update is a rare event, this
+ * should be a rare occurrence which postprocessing should be able to handle.
+ */
+u64 notrace ktime_get_boot_fast_ns(void)
+{
+	struct timekeeper *tk = &tk_core.timekeeper;
+
+	return (ktime_get_mono_fast_ns() + ktime_to_ns(tk->offs_boot));
+}
+EXPORT_SYMBOL_GPL(ktime_get_boot_fast_ns);
+
+
 /*
  * See comment for __ktime_get_fast_ns() vs. timestamp ordering
  */
···
 
 static ktime_t *offsets[TK_OFFS_MAX] = {
 	[TK_OFFS_REAL]	= &tk_core.timekeeper.offs_real,
+	[TK_OFFS_BOOT]	= &tk_core.timekeeper.offs_boot,
 	[TK_OFFS_TAI]	= &tk_core.timekeeper.offs_tai,
 };
 
···
 	timespec64_add_ns(ts, nsec + tomono.tv_nsec);
 }
 EXPORT_SYMBOL_GPL(ktime_get_ts64);
-
-/**
- * ktime_get_active_ts64 - Get the active non-suspended monotonic clock
- * @ts: pointer to timespec variable
- *
- * The function calculates the monotonic clock from the realtime clock and
- * the wall_to_monotonic offset, subtracts the accumulated suspend time and
- * stores the result in normalized timespec64 format in the variable
- * pointed to by @ts.
- */
-void ktime_get_active_ts64(struct timespec64 *ts)
-{
-	struct timekeeper *tk = &tk_core.timekeeper;
-	struct timespec64 tomono, tsusp;
-	u64 nsec, nssusp;
-	unsigned int seq;
-
-	WARN_ON(timekeeping_suspended);
-
-	do {
-		seq = read_seqcount_begin(&tk_core.seq);
-		ts->tv_sec = tk->xtime_sec;
-		nsec = timekeeping_get_ns(&tk->tkr_mono);
-		tomono = tk->wall_to_monotonic;
-		nssusp = tk->time_suspended;
-	} while (read_seqcount_retry(&tk_core.seq, seq));
-
-	ts->tv_sec += tomono.tv_sec;
-	ts->tv_nsec = 0;
-	timespec64_add_ns(ts, nsec + tomono.tv_nsec);
-	tsusp = ns_to_timespec64(nssusp);
-	*ts = timespec64_sub(*ts, tsusp);
-}
 
 /**
  * ktime_get_seconds - Get the seconds portion of CLOCK_MONOTONIC
···
 		return;
 	}
 	tk_xtime_add(tk, delta);
+	tk_set_wall_to_mono(tk, timespec64_sub(tk->wall_to_monotonic, *delta));
 	tk_update_sleep_time(tk, timespec64_to_ktime(*delta));
 	tk_debug_account_sleep_time(delta);
 }
···
 void getboottime64(struct timespec64 *ts)
 {
 	struct timekeeper *tk = &tk_core.timekeeper;
-	ktime_t t = ktime_sub(tk->offs_real, tk->time_suspended);
+	ktime_t t = ktime_sub(tk->offs_real, tk->offs_boot);
 
 	*ts = ktime_to_timespec64(t);
 }
···
  * ktime_get_update_offsets_now - hrtimer helper
  * @cwsseq:	pointer to check and store the clock was set sequence number
  * @offs_real:	pointer to storage for monotonic -> realtime offset
+ * @offs_boot:	pointer to storage for monotonic -> boottime offset
  * @offs_tai:	pointer to storage for monotonic -> clock tai offset
  *
  * Returns current monotonic time and updates the offsets if the
···
  * Called from hrtimer_interrupt() or retrigger_next_event()
  */
 ktime_t ktime_get_update_offsets_now(unsigned int *cwsseq, ktime_t *offs_real,
-				     ktime_t *offs_tai)
+				     ktime_t *offs_boot, ktime_t *offs_tai)
 {
 	struct timekeeper *tk = &tk_core.timekeeper;
 	unsigned int seq;
···
 	if (*cwsseq != tk->clock_was_set_seq) {
 		*cwsseq = tk->clock_was_set_seq;
 		*offs_real = tk->offs_real;
+		*offs_boot = tk->offs_boot;
 		*offs_tai = tk->offs_tai;
 	}
 
+1
kernel/time/timekeeping.h
···
  */
 extern ktime_t ktime_get_update_offsets_now(unsigned int *cwsseq,
 					    ktime_t *offs_real,
+					    ktime_t *offs_boot,
 					    ktime_t *offs_tai);
 
 extern int timekeeping_valid_for_hres(void);
···
 	else if (strcmp(modifier, "usecs") == 0)
 		*flags |= HIST_FIELD_FL_TIMESTAMP_USECS;
 	else {
+		hist_err("Invalid field modifier: ", modifier);
 		field = ERR_PTR(-EINVAL);
 		goto out;
 	}
···
 	else {
 		field = trace_find_event_field(file->event_call, field_name);
 		if (!field || !field->size) {
+			hist_err("Couldn't find field: ", field_name);
 			field = ERR_PTR(-EINVAL);
 			goto out;
 		}
···
 		seq_printf(m, "%s", field_name);
 	} else if (hist_field->flags & HIST_FIELD_FL_TIMESTAMP)
 		seq_puts(m, "common_timestamp");
+
+	if (hist_field->flags) {
+		if (!(hist_field->flags & HIST_FIELD_FL_VAR_REF) &&
+		    !(hist_field->flags & HIST_FIELD_FL_EXPR)) {
+			const char *flags = get_hist_field_flags(hist_field);
+
+			if (flags)
+				seq_printf(m, ".%s", flags);
+		}
+	}
 }
 
 static int event_hist_trigger_print(struct seq_file *m,
+14-21
kernel/trace/trace_uprobe.c
···
 	struct list_head		list;
 	struct trace_uprobe_filter	filter;
 	struct uprobe_consumer		consumer;
+	struct path			path;
 	struct inode			*inode;
 	char				*filename;
 	unsigned long			offset;
···
 	for (i = 0; i < tu->tp.nr_args; i++)
 		traceprobe_free_probe_arg(&tu->tp.args[i]);
 
-	iput(tu->inode);
+	path_put(&tu->path);
 	kfree(tu->tp.call.class->system);
 	kfree(tu->tp.call.name);
 	kfree(tu->filename);
···
 static int create_trace_uprobe(int argc, char **argv)
 {
 	struct trace_uprobe *tu;
-	struct inode *inode;
 	char *arg, *event, *group, *filename;
 	char buf[MAX_EVENT_NAME_LEN];
 	struct path path;
···
 	bool is_delete, is_return;
 	int i, ret;
 
-	inode = NULL;
 	ret = 0;
 	is_delete = false;
 	is_return = false;
···
 	}
 	/* Find the last occurrence, in case the path contains ':' too. */
 	arg = strrchr(argv[1], ':');
-	if (!arg) {
-		ret = -EINVAL;
-		goto fail_address_parse;
-	}
+	if (!arg)
+		return -EINVAL;
 
 	*arg++ = '\0';
 	filename = argv[1];
 	ret = kern_path(filename, LOOKUP_FOLLOW, &path);
 	if (ret)
-		goto fail_address_parse;
+		return ret;
 
-	inode = igrab(d_real_inode(path.dentry));
-	path_put(&path);
-
-	if (!inode || !S_ISREG(inode->i_mode)) {
+	if (!d_is_reg(path.dentry)) {
 		ret = -EINVAL;
 		goto fail_address_parse;
 	}
···
 		goto fail_address_parse;
 	}
 	tu->offset = offset;
-	tu->inode = inode;
+	tu->path = path;
 	tu->filename = kstrdup(filename, GFP_KERNEL);
 
 	if (!tu->filename) {
···
 	return ret;
 
 fail_address_parse:
-	iput(inode);
+	path_put(&path);
 
 	pr_info("Failed to parse address or file.\n");
 
···
 		goto err_flags;
 
 	tu->consumer.filter = filter;
+	tu->inode = d_real_inode(tu->path.dentry);
 	ret = uprobe_register(tu->inode, tu->offset, &tu->consumer);
 	if (ret)
 		goto err_buffer;
···
 	WARN_ON(!uprobe_filter_is_empty(&tu->filter));
 
 	uprobe_unregister(tu->inode, tu->offset, &tu->consumer);
+	tu->inode = NULL;
 	tu->tp.flags &= file ? ~TP_FLAG_TRACE : ~TP_FLAG_PROFILE;
 
 	uprobe_buffer_disable();
···
 create_local_trace_uprobe(char *name, unsigned long offs, bool is_return)
 {
 	struct trace_uprobe *tu;
-	struct inode *inode;
 	struct path path;
 	int ret;
 
···
 	if (ret)
 		return ERR_PTR(ret);
 
-	inode = igrab(d_inode(path.dentry));
-	path_put(&path);
-
-	if (!inode || !S_ISREG(inode->i_mode)) {
-		iput(inode);
+	if (!d_is_reg(path.dentry)) {
+		path_put(&path);
 		return ERR_PTR(-EINVAL);
 	}
 
···
 	if (IS_ERR(tu)) {
 		pr_info("Failed to allocate trace_uprobe.(%d)\n",
 			(int)PTR_ERR(tu));
+		path_put(&path);
 		return ERR_CAST(tu);
 	}
 
 	tu->offset = offs;
-	tu->inode = inode;
+	tu->path = path;
 	tu->filename = kstrdup(name, GFP_KERNEL);
 	init_trace_event_call(tu, &tu->tp.call);
 
+2-2
kernel/tracepoint.c
···
 			lockdep_is_held(&tracepoints_mutex));
 	old = func_add(&tp_funcs, func, prio);
 	if (IS_ERR(old)) {
-		WARN_ON_ONCE(1);
+		WARN_ON_ONCE(PTR_ERR(old) != -ENOMEM);
 		return PTR_ERR(old);
 	}
···
 			lockdep_is_held(&tracepoints_mutex));
 	old = func_remove(&tp_funcs, func);
 	if (IS_ERR(old)) {
-		WARN_ON_ONCE(1);
+		WARN_ON_ONCE(PTR_ERR(old) != -ENOMEM);
 		return PTR_ERR(old);
 	}
+9-14
lib/errseq.c
···
  * errseq_sample() - Grab current errseq_t value.
  * @eseq: Pointer to errseq_t to be sampled.
  *
- * This function allows callers to sample an errseq_t value, marking it as
- * "seen" if required.
+ * This function allows callers to initialise their errseq_t variable.
+ * If the error has been "seen", new callers will not see an old error.
+ * If there is an unseen error in @eseq, the caller of this function will
+ * see it the next time it checks for an error.
  *
+ * Context: Any context.
  * Return: The current errseq value.
  */
 errseq_t errseq_sample(errseq_t *eseq)
 {
 	errseq_t old = READ_ONCE(*eseq);
-	errseq_t new = old;
 
-	/*
-	 * For the common case of no errors ever having been set, we can skip
-	 * marking the SEEN bit. Once an error has been set, the value will
-	 * never go back to zero.
-	 */
-	if (old != 0) {
-		new |= ERRSEQ_SEEN;
-		if (old != new)
-			cmpxchg(eseq, old, new);
-	}
-	return new;
+	/* If nobody has seen this error yet, then we can be the first. */
+	if (!(old & ERRSEQ_SEEN))
+		old = 0;
+	return old;
 }
 EXPORT_SYMBOL(errseq_sample);
 
+5-6
lib/kobject.c
···
 
 		/* be noisy on error issues */
 		if (error == -EEXIST)
-			WARN(1,
-			     "%s failed for %s with -EEXIST, don't try to register things with the same name in the same directory.\n",
-			     __func__, kobject_name(kobj));
+			pr_err("%s failed for %s with -EEXIST, don't try to register things with the same name in the same directory.\n",
+			       __func__, kobject_name(kobj));
 		else
-			WARN(1, "%s failed for %s (error: %d parent: %s)\n",
-			     __func__, kobject_name(kobj), error,
-			     parent ? kobject_name(parent) : "'none'");
+			pr_err("%s failed for %s (error: %d parent: %s)\n",
+			       __func__, kobject_name(kobj), error,
+			       parent ? kobject_name(parent) : "'none'");
 	} else
 		kobj->state_in_sysfs = 1;
···
 		return -ELOOP;
 	}
 
-	/* Device is already being bridged */
-	if (br_port_exists(dev))
+	/* Device has master upper dev */
+	if (netdev_master_upper_dev_get(dev))
 		return -EBUSY;
 
 	/* No bridging devices that dislike that (e.g. wireless) */
+7
net/ceph/messenger.c
···
 	int ret = 1;
 
 	dout("try_write start %p state %lu\n", con, con->state);
+	if (con->state != CON_STATE_PREOPEN &&
+	    con->state != CON_STATE_CONNECTING &&
+	    con->state != CON_STATE_NEGOTIATING &&
+	    con->state != CON_STATE_OPEN)
+		return 0;
 
 more:
 	dout("try_write out_kvec_bytes %d\n", con->out_kvec_bytes);
···
 	}
 
 more_kvec:
+	BUG_ON(!con->sock);
+
 	/* kvec data queued? */
 	if (con->out_kvec_left) {
 		ret = write_partial_kvec(con);
+11-3
net/ceph/mon_client.c
···
 	__open_session(monc);
 }
 
+static void un_backoff(struct ceph_mon_client *monc)
+{
+	monc->hunt_mult /= 2; /* reduce by 50% */
+	if (monc->hunt_mult < 1)
+		monc->hunt_mult = 1;
+	dout("%s hunt_mult now %d\n", __func__, monc->hunt_mult);
+}
+
 /*
  * Reschedule delayed work timer.
  */
···
 	if (!monc->hunting) {
 		ceph_con_keepalive(&monc->con);
 		__validate_auth(monc);
+		un_backoff(monc);
 	}
 
 	if (is_auth &&
···
 		dout("%s found mon%d\n", __func__, monc->cur_mon);
 		monc->hunting = false;
 		monc->had_a_connection = true;
-		monc->hunt_mult /= 2; /* reduce by 50% */
-		if (monc->hunt_mult < 1)
-			monc->hunt_mult = 1;
+		un_backoff(monc);
+		__schedule_delayed(monc);
 	}
 }
 
···
 		info_size = sizeof(info);
 		if (copy_from_user(&info, useraddr, info_size))
 			return -EFAULT;
+		/* Since malicious users may modify the original data,
+		 * we need to check whether FLOW_RSS is still requested.
+		 */
+		if (!(info.flow_type & FLOW_RSS))
+			return -EINVAL;
 	}
 
 	if (info.cmd == ETHTOOL_GRXCLSRLALL) {
···
 			}
 		}
 	}
-	bbr->idle_restart = 0;
+	/* Restart after idle ends only once we process a new S/ACK for data */
+	if (rs->delivered > 0)
+		bbr->idle_restart = 0;
 }
 
 static void bbr_update_model(struct sock *sk, const struct rate_sample *rs)
···
 	struct rds_cmsg_rx_trace t;
 	int i, j;
 
+	memset(&t, 0, sizeof(t));
 	inc->i_rx_lat_trace[RDS_MSG_RX_CMSG] = local_clock();
 	t.rx_traces = rs->rs_rx_traces;
 	for (i = 0; i < rs->rs_rx_traces; i++) {
···
 	skb_pull(chunk->skb, sizeof(*ch));
 	chunk->subh.v = NULL; /* Subheader is no longer valid.  */
 
-	if (chunk->chunk_end + sizeof(*ch) < skb_tail_pointer(chunk->skb)) {
+	if (chunk->chunk_end + sizeof(*ch) <= skb_tail_pointer(chunk->skb)) {
 		/* This is not a singleton */
 		chunk->singleton = 0;
 	} else if (chunk->chunk_end > skb_tail_pointer(chunk->skb)) {
···
 			       GFP_ATOMIC))
 		goto nomem;
 
+	if (sctp_auth_asoc_init_active_key(new_asoc, GFP_ATOMIC))
+		goto nomem;
+
 	/* Make sure no new addresses are being added during the
 	 * restart.  Though this is a pretty complicated attack
 	 * since you'd have to get inside the cookie.
···
 	peer_init = &chunk->subh.cookie_hdr->c.peer_init[0];
 	if (!sctp_process_init(new_asoc, chunk, sctp_source(chunk), peer_init,
 			       GFP_ATOMIC))
+		goto nomem;
+
+	if (sctp_auth_asoc_init_active_key(new_asoc, GFP_ATOMIC))
 		goto nomem;
 
 	/* Update the content of current association.  */
···
 		}
 	}
 
-	repl = sctp_make_cookie_ack(new_asoc, chunk);
+	repl = sctp_make_cookie_ack(asoc, chunk);
 	if (!repl)
 		goto nomem;
 
···
 	size = sg->length - offset;
 	offset += sg->offset;
 
+	ctx->in_tcp_sendpages = true;
 	while (1) {
 		if (sg_is_last(sg))
 			sendpage_flags = flags;
···
 	}
 
 	clear_bit(TLS_PENDING_CLOSED_RECORD, &ctx->flags);
+	ctx->in_tcp_sendpages = false;
+	ctx->sk_write_space(sk);
 
 	return 0;
 }
···
 static void tls_write_space(struct sock *sk)
 {
 	struct tls_context *ctx = tls_get_ctx(sk);
+
+	/* We are already sending pages, ignore notification */
+	if (ctx->in_tcp_sendpages)
+		return;
 
 	if (!sk->sk_write_pending && tls_is_pending_closed_record(ctx)) {
 		gfp_t sk_allocation = sk->sk_allocation;
+5-2
samples/sockmap/Makefile
···
 # asm/sysreg.h - inline assembly used by it is incompatible with llvm.
 # But, there is no easy way to fix it, so just exclude it since it is
 # useless for BPF samples.
+#
+# -target bpf option required with SK_MSG programs, this is to ensure
+# reading 'void *' data types for data and data_end are __u64 reads.
 $(obj)/%.o: $(src)/%.c
 	$(CLANG) $(NOSTDINC_FLAGS) $(LINUXINCLUDE) $(EXTRA_CFLAGS) -I$(obj) \
 		-D__KERNEL__ -D__ASM_SYSREG_H -Wno-unused-value -Wno-pointer-sign \
 		-Wno-compare-distinct-pointer-types \
 		-Wno-gnu-variable-sized-type-not-at-end \
 		-Wno-address-of-packed-member -Wno-tautological-compare \
-		-Wno-unknown-warning-option \
-		-O2 -emit-llvm -c $< -o -| $(LLC) -march=bpf -filetype=obj -o $@
+		-Wno-unknown-warning-option -O2 -target bpf \
+		-emit-llvm -c $< -o -| $(LLC) -march=bpf -filetype=obj -o $@
+1-1
sound/core/control.c
···
 			     int op_flag)
 {
 	struct snd_ctl_tlv header;
-	unsigned int *container;
+	unsigned int __user *container;
 	unsigned int container_size;
 	struct snd_kcontrol *kctl;
 	struct snd_ctl_elem_id id;
+4-3
sound/core/pcm_compat.c
···
 				      s32 __user *src)
 {
 	snd_pcm_sframes_t delay;
+	int err;
 
-	delay = snd_pcm_delay(substream);
-	if (delay < 0)
-		return delay;
+	err = snd_pcm_delay(substream, &delay);
+	if (err)
+		return err;
 	if (put_user(delay, src))
 		return -EFAULT;
 	return 0;
+15-15
sound/core/pcm_native.c
···
 	return err;
 }
 
-static snd_pcm_sframes_t snd_pcm_delay(struct snd_pcm_substream *substream)
+static int snd_pcm_delay(struct snd_pcm_substream *substream,
+			 snd_pcm_sframes_t *delay)
 {
 	struct snd_pcm_runtime *runtime = substream->runtime;
 	int err;
···
 		n += runtime->delay;
 	}
 	snd_pcm_stream_unlock_irq(substream);
-	return err < 0 ? err : n;
+	if (!err)
+		*delay = n;
+	return err;
 }
 
 static int snd_pcm_sync_ptr(struct snd_pcm_substream *substream,
···
 	sync_ptr.s.status.hw_ptr = status->hw_ptr;
 	sync_ptr.s.status.tstamp = status->tstamp;
 	sync_ptr.s.status.suspended_state = status->suspended_state;
+	sync_ptr.s.status.audio_tstamp = status->audio_tstamp;
 	snd_pcm_stream_unlock_irq(substream);
 	if (copy_to_user(_sync_ptr, &sync_ptr, sizeof(sync_ptr)))
 		return -EFAULT;
···
 		return snd_pcm_hwsync(substream);
 	case SNDRV_PCM_IOCTL_DELAY:
 	{
-		snd_pcm_sframes_t delay = snd_pcm_delay(substream);
+		snd_pcm_sframes_t delay;
 		snd_pcm_sframes_t __user *res = arg;
+		int err;
 
-		if (delay < 0)
-			return delay;
+		err = snd_pcm_delay(substream, &delay);
+		if (err)
+			return err;
 		if (put_user(delay, res))
 			return -EFAULT;
 		return 0;
···
 	case SNDRV_PCM_IOCTL_DROP:
 		return snd_pcm_drop(substream);
 	case SNDRV_PCM_IOCTL_DELAY:
-	{
-		result = snd_pcm_delay(substream);
-		if (result < 0)
-			return result;
-		*frames = result;
-		return 0;
-	}
+		return snd_pcm_delay(substream, frames);
 	default:
 		return -EINVAL;
 	}
···
 /*
  * mmap status record
  */
-static int snd_pcm_mmap_status_fault(struct vm_fault *vmf)
+static vm_fault_t snd_pcm_mmap_status_fault(struct vm_fault *vmf)
 {
 	struct snd_pcm_substream *substream = vmf->vma->vm_private_data;
 	struct snd_pcm_runtime *runtime;
···
 /*
  * mmap control record
  */
-static int snd_pcm_mmap_control_fault(struct vm_fault *vmf)
+static vm_fault_t snd_pcm_mmap_control_fault(struct vm_fault *vmf)
 {
 	struct snd_pcm_substream *substream = vmf->vma->vm_private_data;
 	struct snd_pcm_runtime *runtime;
···
 /*
  * fault callback for mmapping a RAM page
  */
-static int snd_pcm_mmap_data_fault(struct vm_fault *vmf)
+static vm_fault_t snd_pcm_mmap_data_fault(struct vm_fault *vmf)
 {
 	struct snd_pcm_substream *substream = vmf->vma->vm_private_data;
 	struct snd_pcm_runtime *runtime;
+9-6
sound/core/seq/oss/seq_oss_event.c
···
 #include <sound/seq_oss_legacy.h>
 #include "seq_oss_readq.h"
 #include "seq_oss_writeq.h"
+#include <linux/nospec.h>
 
 
 /*
···
 {
 	struct seq_oss_synthinfo *info;
 
-	if (!snd_seq_oss_synth_is_valid(dp, dev))
+	info = snd_seq_oss_synth_info(dp, dev);
+	if (!info)
 		return -ENXIO;
 
-	info = &dp->synths[dev];
 	switch (info->arg.event_passing) {
 	case SNDRV_SEQ_OSS_PROCESS_EVENTS:
 		if (! info->ch || ch < 0 || ch >= info->nr_voices) {
···
 			return set_note_event(dp, dev, SNDRV_SEQ_EVENT_NOTEON, ch, note, vel, ev);
 		}
 
+		ch = array_index_nospec(ch, info->nr_voices);
 		if (note == 255 && info->ch[ch].note >= 0) {
 			/* volume control */
 			int type;
···
 {
 	struct seq_oss_synthinfo *info;
 
-	if (!snd_seq_oss_synth_is_valid(dp, dev))
+	info = snd_seq_oss_synth_info(dp, dev);
+	if (!info)
 		return -ENXIO;
 
-	info = &dp->synths[dev];
 	switch (info->arg.event_passing) {
 	case SNDRV_SEQ_OSS_PROCESS_EVENTS:
 		if (! info->ch || ch < 0 || ch >= info->nr_voices) {
···
 			return set_note_event(dp, dev, SNDRV_SEQ_EVENT_NOTEON, ch, note, vel, ev);
 		}
 
+		ch = array_index_nospec(ch, info->nr_voices);
 		if (info->ch[ch].note >= 0) {
 			note = info->ch[ch].note;
 			info->ch[ch].vel = 0;
···
 static int
 set_note_event(struct seq_oss_devinfo *dp, int dev, int type, int ch, int note, int vel, struct snd_seq_event *ev)
 {
-	if (! snd_seq_oss_synth_is_valid(dp, dev))
+	if (!snd_seq_oss_synth_info(dp, dev))
 		return -ENXIO;
 
 	ev->type = type;
···
 static int
 set_control_event(struct seq_oss_devinfo *dp, int dev, int type, int ch, int param, int val, struct snd_seq_event *ev)
 {
-	if (! snd_seq_oss_synth_is_valid(dp, dev))
+	if (!snd_seq_oss_synth_info(dp, dev))
 		return -ENXIO;
 
 	ev->type = type;
+2
sound/core/seq/oss/seq_oss_midi.c
···
 #include "../seq_lock.h"
 #include <linux/init.h>
 #include <linux/slab.h>
+#include <linux/nospec.h>
 
 
 /*
···
 {
 	if (dev < 0 || dev >= dp->max_mididev)
 		return NULL;
+	dev = array_index_nospec(dev, dp->max_mididev);
 	return get_mdev(dev);
 }
 
+49-36
sound/core/seq/oss/seq_oss_synth.c
···
 #include <linux/init.h>
 #include <linux/module.h>
 #include <linux/slab.h>
+#include <linux/nospec.h>

 /*
  * constants
···
        dp->max_synthdev = 0;
 }

-/*
- * check if the specified device is MIDI mapped device
- */
-static int
-is_midi_dev(struct seq_oss_devinfo *dp, int dev)
+static struct seq_oss_synthinfo *
+get_synthinfo_nospec(struct seq_oss_devinfo *dp, int dev)
 {
        if (dev < 0 || dev >= dp->max_synthdev)
-               return 0;
-       if (dp->synths[dev].is_midi)
-               return 1;
-       return 0;
+               return NULL;
+       dev = array_index_nospec(dev, SNDRV_SEQ_OSS_MAX_SYNTH_DEVS);
+       return &dp->synths[dev];
 }

 /*
···
 get_synthdev(struct seq_oss_devinfo *dp, int dev)
 {
        struct seq_oss_synth *rec;
-       if (dev < 0 || dev >= dp->max_synthdev)
+       struct seq_oss_synthinfo *info = get_synthinfo_nospec(dp, dev);
+
+       if (!info)
                return NULL;
-       if (! dp->synths[dev].opened)
+       if (!info->opened)
                return NULL;
-       if (dp->synths[dev].is_midi)
-               return &midi_synth_dev;
-       if ((rec = get_sdev(dev)) == NULL)
-               return NULL;
+       if (info->is_midi) {
+               rec = &midi_synth_dev;
+               snd_use_lock_use(&rec->use_lock);
+       } else {
+               rec = get_sdev(dev);
+               if (!rec)
+                       return NULL;
+       }
        if (! rec->opened) {
                snd_use_lock_free(&rec->use_lock);
                return NULL;
···
        struct seq_oss_synth *rec;
        struct seq_oss_synthinfo *info;

-       if (snd_BUG_ON(dev < 0 || dev >= dp->max_synthdev))
-               return;
-       info = &dp->synths[dev];
-       if (! info->opened)
+       info = get_synthinfo_nospec(dp, dev);
+       if (!info || !info->opened)
                return;
        if (info->sysex)
                info->sysex->len = 0; /* reset sysex */
···
                const char __user *buf, int p, int c)
 {
        struct seq_oss_synth *rec;
+       struct seq_oss_synthinfo *info;
        int rc;

-       if (dev < 0 || dev >= dp->max_synthdev)
+       info = get_synthinfo_nospec(dp, dev);
+       if (!info)
                return -ENXIO;

-       if (is_midi_dev(dp, dev))
+       if (info->is_midi)
                return 0;
        if ((rec = get_synthdev(dp, dev)) == NULL)
                return -ENXIO;
···
        if (rec->oper.load_patch == NULL)
                rc = -ENXIO;
        else
-               rc = rec->oper.load_patch(&dp->synths[dev].arg, fmt, buf, p, c);
+               rc = rec->oper.load_patch(&info->arg, fmt, buf, p, c);
        snd_use_lock_free(&rec->use_lock);
        return rc;
 }

 /*
- * check if the device is valid synth device
+ * check if the device is valid synth device and return the synth info
  */
-int
-snd_seq_oss_synth_is_valid(struct seq_oss_devinfo *dp, int dev)
+struct seq_oss_synthinfo *
+snd_seq_oss_synth_info(struct seq_oss_devinfo *dp, int dev)
 {
        struct seq_oss_synth *rec;
+
        rec = get_synthdev(dp, dev);
        if (rec) {
                snd_use_lock_free(&rec->use_lock);
-               return 1;
+               return get_synthinfo_nospec(dp, dev);
        }
-       return 0;
+       return NULL;
 }

···
        int i, send;
        unsigned char *dest;
        struct seq_oss_synth_sysex *sysex;
+       struct seq_oss_synthinfo *info;

-       if (! snd_seq_oss_synth_is_valid(dp, dev))
+       info = snd_seq_oss_synth_info(dp, dev);
+       if (!info)
                return -ENXIO;

-       sysex = dp->synths[dev].sysex;
+       sysex = info->sysex;
        if (sysex == NULL) {
                sysex = kzalloc(sizeof(*sysex), GFP_KERNEL);
                if (sysex == NULL)
                        return -ENOMEM;
-               dp->synths[dev].sysex = sysex;
+               info->sysex = sysex;
        }

        send = 0;
···
 int
 snd_seq_oss_synth_addr(struct seq_oss_devinfo *dp, int dev, struct snd_seq_event *ev)
 {
-       if (! snd_seq_oss_synth_is_valid(dp, dev))
+       struct seq_oss_synthinfo *info = snd_seq_oss_synth_info(dp, dev);
+
+       if (!info)
                return -EINVAL;
-       snd_seq_oss_fill_addr(dp, ev, dp->synths[dev].arg.addr.client,
-                             dp->synths[dev].arg.addr.port);
+       snd_seq_oss_fill_addr(dp, ev, info->arg.addr.client,
+                             info->arg.addr.port);
        return 0;
 }
···
 snd_seq_oss_synth_ioctl(struct seq_oss_devinfo *dp, int dev, unsigned int cmd, unsigned long addr)
 {
        struct seq_oss_synth *rec;
+       struct seq_oss_synthinfo *info;
        int rc;

-       if (is_midi_dev(dp, dev))
+       info = get_synthinfo_nospec(dp, dev);
+       if (!info || info->is_midi)
                return -ENXIO;
        if ((rec = get_synthdev(dp, dev)) == NULL)
                return -ENXIO;
        if (rec->oper.ioctl == NULL)
                rc = -ENXIO;
        else
-               rc = rec->oper.ioctl(&dp->synths[dev].arg, cmd, addr);
+               rc = rec->oper.ioctl(&info->arg, cmd, addr);
        snd_use_lock_free(&rec->use_lock);
        return rc;
 }
···
 int
 snd_seq_oss_synth_raw_event(struct seq_oss_devinfo *dp, int dev, unsigned char *data, struct snd_seq_event *ev)
 {
-       if (! snd_seq_oss_synth_is_valid(dp, dev) || is_midi_dev(dp, dev))
+       struct seq_oss_synthinfo *info;
+
+       info = snd_seq_oss_synth_info(dp, dev);
+       if (!info || info->is_midi)
                return -ENXIO;
        ev->type = SNDRV_SEQ_EVENT_OSS;
        memcpy(ev->data.raw8.d, data, 8);
+2-1
sound/core/seq/oss/seq_oss_synth.h
···
 void snd_seq_oss_synth_reset(struct seq_oss_devinfo *dp, int dev);
 int snd_seq_oss_synth_load_patch(struct seq_oss_devinfo *dp, int dev, int fmt,
                                 const char __user *buf, int p, int c);
-int snd_seq_oss_synth_is_valid(struct seq_oss_devinfo *dp, int dev);
+struct seq_oss_synthinfo *snd_seq_oss_synth_info(struct seq_oss_devinfo *dp,
+                                                int dev);
 int snd_seq_oss_synth_sysex(struct seq_oss_devinfo *dp, int dev, unsigned char *buf,
                            struct snd_seq_event *ev);
 int snd_seq_oss_synth_addr(struct seq_oss_devinfo *dp, int dev, struct snd_seq_event *ev);
···
        {RT5514_PLL3_CALIB_CTRL5,       0x40220012},
        {RT5514_DELAY_BUF_CTRL1,        0x7fff006a},
        {RT5514_DELAY_BUF_CTRL3,        0x00000000},
+       {RT5514_ASRC_IN_CTRL1,          0x00000003},
        {RT5514_DOWNFILTER0_CTRL1,      0x00020c2f},
        {RT5514_DOWNFILTER0_CTRL2,      0x00020c2f},
        {RT5514_DOWNFILTER0_CTRL3,      0x10000362},
···
        case RT5514_PLL3_CALIB_CTRL5:
        case RT5514_DELAY_BUF_CTRL1:
        case RT5514_DELAY_BUF_CTRL3:
+       case RT5514_ASRC_IN_CTRL1:
        case RT5514_DOWNFILTER0_CTRL1:
        case RT5514_DOWNFILTER0_CTRL2:
        case RT5514_DOWNFILTER0_CTRL3:
···
        case RT5514_DSP_MAPPING | RT5514_PLL3_CALIB_CTRL5:
        case RT5514_DSP_MAPPING | RT5514_DELAY_BUF_CTRL1:
        case RT5514_DSP_MAPPING | RT5514_DELAY_BUF_CTRL3:
+       case RT5514_DSP_MAPPING | RT5514_ASRC_IN_CTRL1:
        case RT5514_DSP_MAPPING | RT5514_DOWNFILTER0_CTRL1:
        case RT5514_DSP_MAPPING | RT5514_DOWNFILTER0_CTRL2:
        case RT5514_DSP_MAPPING | RT5514_DOWNFILTER0_CTRL3:
+7
sound/soc/fsl/fsl_esai.c
···

        psr = ratio <= 256 * maxfp ? ESAI_xCCR_xPSR_BYPASS : ESAI_xCCR_xPSR_DIV8;

+       /* Do not loop-search if PM (1 ~ 256) alone can serve the ratio */
+       if (ratio <= 256) {
+               pm = ratio;
+               fp = 1;
+               goto out;
+       }
+
        /* Set the max fluctuation -- 0.1% of the max devisor */
        savesub = (psr ? 1 : 8) * 256 * maxfp / 1000;
+11-3
sound/soc/fsl/fsl_ssi.c
···
  * @dai_fmt: DAI configuration this device is currently used with
  * @streams: Mask of current active streams: BIT(TX) and BIT(RX)
  * @i2s_net: I2S and Network mode configurations of SCR register
+ *           (this is the initial settings based on the DAI format)
  * @synchronous: Use synchronous mode - both of TX and RX use STCK and SFCK
  * @use_dma: DMA is used or FIQ with stream filter
  * @use_dual_fifo: DMA with support for dual FIFO mode
···
        }

        if (!fsl_ssi_is_ac97(ssi)) {
+               /*
+                * Keep the ssi->i2s_net intact while having a local variable
+                * to override settings for special use cases. Otherwise, the
+                * ssi->i2s_net will lose the settings for regular use cases.
+                */
+               u8 i2s_net = ssi->i2s_net;
+
                /* Normal + Network mode to send 16-bit data in 32-bit frames */
                if (fsl_ssi_is_i2s_cbm_cfs(ssi) && sample_size == 16)
-                       ssi->i2s_net = SSI_SCR_I2S_MODE_NORMAL | SSI_SCR_NET;
+                       i2s_net = SSI_SCR_I2S_MODE_NORMAL | SSI_SCR_NET;

                /* Use Normal mode to send mono data at 1st slot of 2 slots */
                if (channels == 1)
-                       ssi->i2s_net = SSI_SCR_I2S_MODE_NORMAL;
+                       i2s_net = SSI_SCR_I2S_MODE_NORMAL;

                regmap_update_bits(regs, REG_SSI_SCR,
-                                  SSI_SCR_I2S_NET_MASK, ssi->i2s_net);
+                                  SSI_SCR_I2S_NET_MASK, i2s_net);
        }

        /* In synchronous mode, the SSI uses STCCR for capture */
+13-9
sound/soc/intel/Kconfig
···
          for Baytrail Chromebooks but this option is now deprecated and is
          not recommended, use SND_SST_ATOM_HIFI2_PLATFORM instead.

+config SND_SST_ATOM_HIFI2_PLATFORM
+       tristate
+       select SND_SOC_COMPRESS
+
 config SND_SST_ATOM_HIFI2_PLATFORM_PCI
-       tristate "PCI HiFi2 (Medfield, Merrifield) Platforms"
+       tristate "PCI HiFi2 (Merrifield) Platforms"
        depends on X86 && PCI
        select SND_SST_IPC_PCI
-       select SND_SOC_COMPRESS
+       select SND_SST_ATOM_HIFI2_PLATFORM
        help
-         If you have a Intel Medfield or Merrifield/Edison platform, then
+         If you have a Intel Merrifield/Edison platform, then
          enable this option by saying Y or m. Distros will typically not
-         enable this option: Medfield devices are not available to
-         developers and while Merrifield/Edison can run a mainline kernel with
-         limited functionality it will require a firmware file which
-         is not in the standard firmware tree
+         enable this option: while Merrifield/Edison can run a mainline
+         kernel with limited functionality it will require a firmware file
+         which is not in the standard firmware tree

-config SND_SST_ATOM_HIFI2_PLATFORM
+config SND_SST_ATOM_HIFI2_PLATFORM_ACPI
        tristate "ACPI HiFi2 (Baytrail, Cherrytrail) Platforms"
+       default ACPI
        depends on X86 && ACPI
        select SND_SST_IPC_ACPI
-       select SND_SOC_COMPRESS
+       select SND_SST_ATOM_HIFI2_PLATFORM
        select SND_SOC_ACPI_INTEL_MATCH
        select IOSF_MBI
        help
+11-3
sound/soc/omap/omap-dmic.c
···
 static int omap_dmic_select_fclk(struct omap_dmic *dmic, int clk_id,
                                 unsigned int freq)
 {
-       struct clk *parent_clk;
+       struct clk *parent_clk, *mux;
        char *parent_clk_name;
        int ret = 0;

···
                return -ENODEV;
        }

+       mux = clk_get_parent(dmic->fclk);
+       if (IS_ERR(mux)) {
+               dev_err(dmic->dev, "can't get fck mux parent\n");
+               clk_put(parent_clk);
+               return -ENODEV;
+       }
+
        mutex_lock(&dmic->mutex);
        if (dmic->active) {
                /* disable clock while reparenting */
                pm_runtime_put_sync(dmic->dev);
-               ret = clk_set_parent(dmic->fclk, parent_clk);
+               ret = clk_set_parent(mux, parent_clk);
                pm_runtime_get_sync(dmic->dev);
        } else {
-               ret = clk_set_parent(dmic->fclk, parent_clk);
+               ret = clk_set_parent(mux, parent_clk);
        }
        mutex_unlock(&dmic->mutex);
···
        dmic->fclk_freq = freq;

 err_busy:
+       clk_put(mux);
        clk_put(parent_clk);

        return ret;
+2-2
sound/soc/sh/rcar/core.c
···
        return ret;
 }

-static int rsnd_suspend(struct device *dev)
+static int __maybe_unused rsnd_suspend(struct device *dev)
 {
        struct rsnd_priv *priv = dev_get_drvdata(dev);

···
        return 0;
 }

-static int rsnd_resume(struct device *dev)
+static int __maybe_unused rsnd_resume(struct device *dev)
 {
        struct rsnd_priv *priv = dev_get_drvdata(dev);

+9-5
sound/soc/soc-topology.c
···
         */
        if (dobj->widget.kcontrol_type == SND_SOC_TPLG_TYPE_ENUM) {
                /* enumerated widget mixer */
-               for (i = 0; i < w->num_kcontrols; i++) {
+               for (i = 0; w->kcontrols != NULL && i < w->num_kcontrols; i++) {
                        struct snd_kcontrol *kcontrol = w->kcontrols[i];
                        struct soc_enum *se =
                                (struct soc_enum *)kcontrol->private_value;
···
                }
        } else {
                /* volume mixer or bytes controls */
-               for (i = 0; i < w->num_kcontrols; i++) {
+               for (i = 0; w->kcontrols != NULL && i < w->num_kcontrols; i++) {
                        struct snd_kcontrol *kcontrol = w->kcontrols[i];

                        if (dobj->widget.kcontrol_type
···
                        ec->hdr.name);

                kc[i].name = kstrdup(ec->hdr.name, GFP_KERNEL);
-               if (kc[i].name == NULL)
+               if (kc[i].name == NULL) {
+                       kfree(se);
                        goto err_se;
+               }
                kc[i].private_value = (long)se;
                kc[i].iface = SNDRV_CTL_ELEM_IFACE_MIXER;
                kc[i].access = ec->hdr.access;
···
                        be->hdr.name, be->hdr.access);

                kc[i].name = kstrdup(be->hdr.name, GFP_KERNEL);
-               if (kc[i].name == NULL)
+               if (kc[i].name == NULL) {
+                       kfree(sbe);
                        goto err;
+               }
                kc[i].private_value = (long)sbe;
                kc[i].iface = SNDRV_CTL_ELEM_IFACE_MIXER;
                kc[i].access = be->hdr.access;
···

                /* match index */
                if (dobj->index != index &&
-                   dobj->index != SND_SOC_TPLG_INDEX_ALL)
+                   index != SND_SOC_TPLG_INDEX_ALL)
                        continue;

                switch (dobj->type) {
+4-3
sound/usb/mixer.c
···
                        build_feature_ctl(state, _ftr, ch_bits, control,
                                          &iterm, unitid, ch_read_only);
                if (uac_v2v3_control_is_readable(master_bits, control))
-                       build_feature_ctl(state, _ftr, 0, i, &iterm, unitid,
+                       build_feature_ctl(state, _ftr, 0, control,
+                                         &iterm, unitid,
                                          !uac_v2v3_control_is_writeable(master_bits,
                                                                         control));
        }
···
        check_input_term(state, d->bTerminalID, &iterm);
        if (state->mixer->protocol == UAC_VERSION_2) {
                /* Check for jack detection. */
-               if (uac_v2v3_control_is_readable(d->bmControls,
+               if (uac_v2v3_control_is_readable(le16_to_cpu(d->bmControls),
                                                 UAC2_TE_CONNECTOR)) {
                        build_connector_control(state, &iterm, true);
                }
···
                if (err < 0 && err != -EINVAL)
                        return err;

-               if (uac_v2v3_control_is_readable(desc->bmControls,
+               if (uac_v2v3_control_is_readable(le16_to_cpu(desc->bmControls),
                                                 UAC2_TE_CONNECTOR)) {
                        build_connector_control(&state, &state.oterm,
                                                false);
+3
sound/usb/mixer_maps.c
···
 /*
  * Dell usb dock with ALC4020 codec had a firmware problem where it got
  * screwed up when zero volume is passed; just skip it as a workaround
+ *
+ * Also the extension unit gives an access error, so skip it as well.
  */
 static const struct usbmix_name_map dell_alc4020_map[] = {
+       { 4, NULL },    /* extension unit */
        { 16, NULL },
        { 19, NULL },
        { 0 }
+1-1
sound/usb/stream.c
···
         * TODO: this conversion is not complete, update it
         * after adding UAC3 values to asound.h
         */
-       switch (is->bChPurpose) {
+       switch (is->bChRelationship) {
        case UAC3_CH_MONO:
                map = SNDRV_CHMAP_MONO;
                break;
+1-1
sound/usb/usx2y/us122l.c
···
        snd_printdd(KERN_DEBUG "%i\n", atomic_read(&us122l->mmap_count));
 }

-static int usb_stream_hwdep_vm_fault(struct vm_fault *vmf)
+static vm_fault_t usb_stream_hwdep_vm_fault(struct vm_fault *vmf)
 {
        unsigned long offset;
        struct page *page;
+1-1
sound/usb/usx2y/usX2Yhwdep.c
···
 #include "usbusx2y.h"
 #include "usX2Yhwdep.h"

-static int snd_us428ctls_vm_fault(struct vm_fault *vmf)
+static vm_fault_t snd_us428ctls_vm_fault(struct vm_fault *vmf)
 {
        unsigned long offset;
        struct page * page;
+1-1
sound/usb/usx2y/usx2yhwdeppcm.c
···
 }


-static int snd_usX2Y_hwdep_pcm_vm_fault(struct vm_fault *vmf)
+static vm_fault_t snd_usX2Y_hwdep_pcm_vm_fault(struct vm_fault *vmf)
 {
        unsigned long offset;
        void *vaddr;
···

 static int cmd_load(char *arg)
 {
-       char *subcmd, *cont, *tmp = strdup(arg);
+       char *subcmd, *cont = NULL, *tmp = strdup(arg);
        int ret = CMD_OK;

        subcmd = strtok_r(tmp, " ", &cont);
···
                bpf_reset();
                bpf_reset_breakpoints();

-               ret = cmd_load_bpf(cont);
+               if (!cont)
+                       ret = CMD_ERR;
+               else
+                       ret = cmd_load_bpf(cont);
        } else if (matches(subcmd, "pcap") == 0) {
                ret = cmd_load_pcap(cont);
        } else {
+29-12
tools/perf/Documentation/perf-mem.txt
···
 <command>...::
        Any command you can specify in a shell.

+-i::
+--input=<file>::
+       Input file name.
+
 -f::
 --force::
        Don't do ownership validation

 -t::
---type=::
+--type=<type>::
        Select the memory operation type: load or store (default: load,store)

 -D::
---dump-raw-samples=::
+--dump-raw-samples::
        Dump the raw decoded samples on the screen in a format that is easy to parse with
        one sample per line.

 -x::
---field-separator::
+--field-separator=<separator>::
        Specify the field separator used when dump raw samples (-D option). By default,
        The separator is the space character.

 -C::
---cpu-list::
-       Restrict dump of raw samples to those provided via this option. Note that the same
-       option can be passed in record mode. It will be interpreted the same way as perf
-       record.
+--cpu=<cpu>::
+       Monitor only on the list of CPUs provided. Multiple CPUs can be provided as a
+       comma-separated list with no space: 0,1. Ranges of CPUs are specified with -: 0-2. Default
+       is to monitor all CPUS.
+-U::
+--hide-unresolved::
+       Only display entries resolved to a symbol.
+
+-p::
+--phys-data::
+       Record/Report sample physical addresses
+
+RECORD OPTIONS
+--------------
+-e::
+--event <event>::
+       Event selector. Use 'perf mem record -e list' to list available events.

 -K::
 --all-kernel::
···
 --all-user::
        Configure all used events to run in user space.

---ldload::
-       Specify desired latency for loads event.
+-v::
+--verbose::
+       Be more verbose (show counter open errors, etc)

--p::
---phys-data::
-       Record/Report sample physical addresses
+--ldlat <n>::
+       Specify desired latency for loads event.

 In addition, for report all perf report options are valid, and for record
 all perf record options.
+1
tools/perf/arch/s390/util/auxtrace.c
···
        struct perf_evsel *pos;
        int diagnose = 0;

+       *err = 0;
        if (evlist->nr_entries == 0)
                return NULL;

-18
tools/perf/arch/s390/util/header.c
···
        zfree(&buf);
        return buf;
 }
-
-/*
- * Compare the cpuid string returned by get_cpuid() function
- * with the name generated by the jevents file read from
- * pmu-events/arch/s390/mapfile.csv.
- *
- * Parameter mapcpuid is the cpuid as stored in the
- * pmu-events/arch/s390/mapfile.csv. This is just the type number.
- * Parameter cpuid is the cpuid returned by function get_cpuid().
- */
-int strcmp_cpuid_str(const char *mapcpuid, const char *cpuid)
-{
-       char *cp = strchr(cpuid, ',');
-
-       if (cp == NULL)
-               return -1;
-       return strncmp(cp + 1, mapcpuid, strlen(mapcpuid));
-}
+38-2
tools/perf/builtin-stat.c
···
 static const char      *output_name;
 static int             output_fd;
 static int             print_free_counters_hint;
+static int             print_mixed_hw_group_error;

 struct perf_stat {
        bool                     record;
···
                fprintf(output, "%s%s", csv_sep, evsel->cgrp->name);
 }

+static bool is_mixed_hw_group(struct perf_evsel *counter)
+{
+       struct perf_evlist *evlist = counter->evlist;
+       u32 pmu_type = counter->attr.type;
+       struct perf_evsel *pos;
+
+       if (counter->nr_members < 2)
+               return false;
+
+       evlist__for_each_entry(evlist, pos) {
+               /* software events can be part of any hardware group */
+               if (pos->attr.type == PERF_TYPE_SOFTWARE)
+                       continue;
+               if (pmu_type == PERF_TYPE_SOFTWARE) {
+                       pmu_type = pos->attr.type;
+                       continue;
+               }
+               if (pmu_type != pos->attr.type)
+                       return true;
+       }
+
+       return false;
+}
+
 static void printout(int id, int nr, struct perf_evsel *counter, double uval,
                     char *prefix, u64 run, u64 ena, double noise,
                     struct runtime_stat *st)
···
                        counter->supported ? CNTR_NOT_COUNTED : CNTR_NOT_SUPPORTED,
                        csv_sep);

-               if (counter->supported)
+               if (counter->supported) {
                        print_free_counters_hint = 1;
+                       if (is_mixed_hw_group(counter))
+                               print_mixed_hw_group_error = 1;
+               }

                fprintf(stat_config.output, "%-*s%s",
                        csv_output ? 0 : unit_width,
···
        char *new_name;
        char *config;

-       if (!counter->pmu_name || !strncmp(counter->name, counter->pmu_name,
+       if (counter->uniquified_name ||
+           !counter->pmu_name || !strncmp(counter->name, counter->pmu_name,
                                strlen(counter->pmu_name)))
                return;

···
                        counter->name = new_name;
                }
        }
+
+       counter->uniquified_name = true;
 }

 static void collect_all_aliases(struct perf_evsel *counter,
···
 "      echo 0 > /proc/sys/kernel/nmi_watchdog\n"
 "      perf stat ...\n"
 "      echo 1 > /proc/sys/kernel/nmi_watchdog\n");
+
+       if (print_mixed_hw_group_error)
+               fprintf(output,
+                       "The events in group usually have to be from "
+                       "the same PMU. Try reorganizing the group.\n");
 }

 static void print_counters(struct timespec *ts, int argc, const char **argv)
···
         * than leader in case leader 'leads' the sampling.
         */
        if ((leader != evsel) && leader->sample_read) {
-               attr->sample_freq   = 0;
-               attr->sample_period = 0;
+               attr->freq           = 0;
+               attr->sample_freq    = 0;
+               attr->sample_period  = 0;
+               attr->write_backward = 0;
+               attr->sample_id_all  = 0;
        }

        if (opts->no_samples)
···
                goto fallback_missing_features;
        } else if (!perf_missing_features.group_read &&
                    evsel->attr.inherit &&
-                  (evsel->attr.read_format & PERF_FORMAT_GROUP)) {
+                  (evsel->attr.read_format & PERF_FORMAT_GROUP) &&
+                  perf_evsel__is_group_leader(evsel)) {
                perf_missing_features.group_read = true;
                pr_debug2("switching off group read\n");
                goto fallback_missing_features;
···
            (paranoid = perf_event_paranoid()) > 1) {
                const char *name = perf_evsel__name(evsel);
                char *new_name;
+               const char *sep = ":";

-               if (asprintf(&new_name, "%s%su", name, strchr(name, ':') ? "" : ":") < 0)
+               /* Is there already the separator in the name. */
+               if (strchr(name, '/') ||
+                   strchr(name, ':'))
+                       sep = "";
+
+               if (asprintf(&new_name, "%s%su", name, sep) < 0)
                        return false;

                if (evsel->name)
+1
tools/perf/util/evsel.h
···
        unsigned int            sample_size;
        int                     id_pos;
        int                     is_pos;
+       bool                    uniquified_name;
        bool                    snapshot;
        bool                    supported;
        bool                    needs_swap;
+18-12
tools/perf/util/machine.c
···
        return ret;
 }

-static void map_groups__fixup_end(struct map_groups *mg)
-{
-       int i;
-       for (i = 0; i < MAP__NR_TYPES; ++i)
-               __map_groups__fixup_end(mg, i);
-}
-
 static char *get_kernel_version(const char *root_dir)
 {
        char version[PATH_MAX];
···
 {
        struct dso *kernel = machine__get_kernel(machine);
        const char *name = NULL;
+       struct map *map;
        u64 addr = 0;
        int ret;

···
                        machine__destroy_kernel_maps(machine);
                        return -1;
                }
-               machine__set_kernel_mmap(machine, addr, 0);
+
+               /* we have a real start address now, so re-order the kmaps */
+               map = machine__kernel_map(machine);
+
+               map__get(map);
+               map_groups__remove(&machine->kmaps, map);
+
+               /* assume it's the last in the kmaps */
+               machine__set_kernel_mmap(machine, addr, ~0ULL);
+
+               map_groups__insert(&machine->kmaps, map);
+               map__put(map);
        }

-       /*
-        * Now that we have all the maps created, just set the ->end of them:
-        */
-       map_groups__fixup_end(&machine->kmaps);
+       /* update end address of the kernel map using adjacent module address */
+       map = map__next(machine__kernel_map(machine));
+       if (map)
+               machine__set_kernel_mmap(machine, addr, map->start);
+
        return 0;
 }
+4-4
tools/perf/util/parse-events.y
···
        event_bpf_file

 event_pmu:
-PE_NAME opt_event_config
+PE_NAME '/' event_config '/'
 {
        struct list_head *list, *orig_terms, *terms;

-       if (parse_events_copy_term_list($2, &orig_terms))
+       if (parse_events_copy_term_list($3, &orig_terms))
                YYABORT;

        ALLOC_LIST(list);
-       if (parse_events_add_pmu(_parse_state, list, $1, $2, false)) {
+       if (parse_events_add_pmu(_parse_state, list, $1, $3, false)) {
                struct perf_pmu *pmu = NULL;
                int ok = 0;
                char *pattern;
···
                if (!ok)
                        YYABORT;
        }
-       parse_events_terms__delete($2);
+       parse_events_terms__delete($3);
        parse_events_terms__delete(orig_terms);
        $$ = list;
 }
+8-14
tools/perf/util/pmu.c
···

 /*
  * PMU CORE devices have different name other than cpu in sysfs on some
- * platforms. looking for possible sysfs files to identify as core device.
+ * platforms.
+ * Looking for possible sysfs files to identify the arm core device.
  */
-static int is_pmu_core(const char *name)
+static int is_arm_pmu_core(const char *name)
 {
        struct stat st;
        char path[PATH_MAX];
···

        if (!sysfs)
                return 0;
-
-       /* Look for cpu sysfs (x86 and others) */
-       scnprintf(path, PATH_MAX, "%s/bus/event_source/devices/cpu", sysfs);
-       if ((stat(path, &st) == 0) &&
-                       (strncmp(name, "cpu", strlen("cpu")) == 0))
-               return 1;

        /* Look for cpu sysfs (specific to arm) */
        scnprintf(path, PATH_MAX, "%s/bus/event_source/devices/%s/cpus",
···
  * cpuid string generated on this platform.
  * Otherwise return non-zero.
  */
-int __weak strcmp_cpuid_str(const char *mapcpuid, const char *cpuid)
+int strcmp_cpuid_str(const char *mapcpuid, const char *cpuid)
 {
        regex_t re;
        regmatch_t pmatch[1];
···
        struct pmu_events_map *map;
        struct pmu_event *pe;
        const char *name = pmu->name;
+       const char *pname;

        map = perf_pmu__find_map(pmu);
        if (!map)
···
                        break;
                }

-               if (!is_pmu_core(name)) {
-                       /* check for uncore devices */
-                       if (pe->pmu == NULL)
-                               continue;
-                       if (strncmp(pe->pmu, name, strlen(pe->pmu)))
+               if (!is_arm_pmu_core(name)) {
+                       pname = pe->pmu ? pe->pmu : "cpu";
+                       if (strncmp(pname, name, strlen(pname)))
                                continue;
                }

+2-2
tools/testing/selftests/bpf/test_progs.c
···

        assert(system("dd if=/dev/urandom of=/dev/zero count=4 2> /dev/null")
               == 0);
-       assert(system("./urandom_read if=/dev/urandom of=/dev/zero count=4 2> /dev/null") == 0);
+       assert(system("./urandom_read") == 0);

        /* disable stack trace collection */
        key = 0;
        val = 1;
···
        } while (bpf_map_get_next_key(stackmap_fd, &previous_key, &key) == 0);

        CHECK(build_id_matches < 1, "build id match",
-             "Didn't find expected build ID from the map");
+             "Didn't find expected build ID from the map\n");

 disable_pmu:
        ioctl(pmu_fd, PERF_EVENT_IOC_DISABLE);
···
        "       shl     $32, %r8\n"
        "       orq     $0x7f7f7f7f, %r8\n"
        "       movq    %r8, %r9\n"
-       "       movq    %r8, %r10\n"
-       "       movq    %r8, %r11\n"
-       "       movq    %r8, %r12\n"
-       "       movq    %r8, %r13\n"
-       "       movq    %r8, %r14\n"
-       "       movq    %r8, %r15\n"
+       "       incq    %r9\n"
+       "       movq    %r9, %r10\n"
+       "       incq    %r10\n"
+       "       movq    %r10, %r11\n"
+       "       incq    %r11\n"
+       "       movq    %r11, %r12\n"
+       "       incq    %r12\n"
+       "       movq    %r12, %r13\n"
+       "       incq    %r13\n"
+       "       movq    %r13, %r14\n"
+       "       incq    %r14\n"
+       "       movq    %r14, %r15\n"
+       "       incq    %r15\n"
        "       ret\n"
        "       .code32\n"
        "       .popsection\n"
···
        int err = 0;
        int num = 8;
        uint64_t *r64 = &regs64.r8;
+       uint64_t expected = 0x7f7f7f7f7f7f7f7fULL;

        if (!kernel_is_64bit)
                return 0;

        do {
-               if (*r64 == 0x7f7f7f7f7f7f7f7fULL)
+               if (*r64 == expected++)
                        continue;       /* register did not change */
                if (syscall_addr != (long)&int80) {
                        /*
···
                                continue;
                        }
                } else {
-                       /* INT80 syscall entrypoint can be used by
+                       /*
+                        * INT80 syscall entrypoint can be used by
                         * 64-bit programs too, unlike SYSCALL/SYSENTER.
                         * Therefore it must preserve R12+
                         * (they are callee-saved registers in 64-bit C ABI).
                         *
-                        * This was probably historically not intended,
-                        * but R8..11 are clobbered (cleared to 0).
-                        * IOW: they are the only registers which aren't
-                        * preserved across INT80 syscall.
+                        * Starting in Linux 4.17 (and any kernel that
+                        * backports the change), R8..11 are preserved.
+                        * Historically (and probably unintentionally), they
+                        * were clobbered or zeroed.
                         */
-                       if (*r64 == 0 && num <= 11)
-                               continue;
                }
                printf("[FAIL]\tR%d has changed:%016llx\n", num, *r64);
                err++;
+10-5
virt/kvm/arm/arm.c
···
 static atomic64_t kvm_vmid_gen = ATOMIC64_INIT(1);
 static u32 kvm_next_vmid;
 static unsigned int kvm_vmid_bits __read_mostly;
-static DEFINE_SPINLOCK(kvm_vmid_lock);
+static DEFINE_RWLOCK(kvm_vmid_lock);

 static bool vgic_present;

···
 {
        phys_addr_t pgd_phys;
        u64 vmid;
+       bool new_gen;

-       if (!need_new_vmid_gen(kvm))
+       read_lock(&kvm_vmid_lock);
+       new_gen = need_new_vmid_gen(kvm);
+       read_unlock(&kvm_vmid_lock);
+
+       if (!new_gen)
                return;

-       spin_lock(&kvm_vmid_lock);
+       write_lock(&kvm_vmid_lock);

        /*
         * We need to re-check the vmid_gen here to ensure that if another vcpu
···
         * use the same vmid.
         */
        if (!need_new_vmid_gen(kvm)) {
-               spin_unlock(&kvm_vmid_lock);
+               write_unlock(&kvm_vmid_lock);
                return;
        }

···
        vmid = ((u64)(kvm->arch.vmid) << VTTBR_VMID_SHIFT) & VTTBR_VMID_MASK(kvm_vmid_bits);
        kvm->arch.vttbr = kvm_phys_to_vttbr(pgd_phys) | vmid;

-       spin_unlock(&kvm_vmid_lock);
+       write_unlock(&kvm_vmid_lock);
 }

 static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)