···
  The /sys/class/mei/meiN directory is created for
  each probed mei device
 
+What:		/sys/class/mei/meiN/fw_status
+Date:		Nov 2014
+KernelVersion:	3.19
+Contact:	Tomas Winkler <tomas.winkler@intel.com>
+Description:	Display fw status registers content
+
+		The ME FW writes its status information into fw status
+		registers for BIOS and OS to monitor fw health.
+
+		The register contains running state, power management
+		state, error codes, and others. The way the registers
+		are decoded depends on PCH or SoC generation.
+		Also number of registers varies between 1 and 6
+		depending on generation.
+
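As a userspace sketch of consuming the attribute described above: the ABI entry does not show the exact output format, so this assumes one hex register value per line, which is how fw status registers are conventionally dumped. The function name and format are illustrative, not taken from the driver.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/*
 * Hypothetical helper: parse a fw_status-style dump into 32-bit register
 * values, assuming one hexadecimal value per line.  Returns the number of
 * registers parsed (the ABI text says 1 to 6 depending on generation).
 */
static size_t parse_fw_status(const char *buf, uint32_t *regs, size_t max_regs)
{
	size_t n = 0;
	char *end;

	while (n < max_regs && *buf) {
		unsigned long v = strtoul(buf, &end, 16);

		if (end == buf)		/* no hex digits: stop */
			break;
		regs[n++] = (uint32_t)v;
		buf = (*end == '\n') ? end + 1 : end;
	}
	return n;
}
```

A caller would read `/sys/class/mei/mei0/fw_status` into a buffer first; the parsing itself is independent of sysfs.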
···
 Each button (key) is represented as a sub-node of "gpio-keys":
 Subnode properties:
 
+	- gpios: OF device-tree gpio specification.
+	- interrupts: the interrupt line for that input.
 	- label: Descriptive name of the key.
 	- linux,code: Keycode to emit.
 
-Required mutual exclusive subnode-properties:
-	- gpios: OF device-tree gpio specification.
-	- interrupts: the interrupt line for that input
+Note that either "interrupts" or "gpios" properties can be omitted, but not
+both at the same time. Specifying both properties is allowed.
 
 Optional subnode-properties:
 	- linux,input-type: Specify event type this button/key generates.
···
 	- debounce-interval: Debouncing interval time in milliseconds.
 	  If not specified defaults to 5.
 	- gpio-key,wakeup: Boolean, button can wake-up the system.
+	- linux,can-disable: Boolean, indicates that button is connected
+	  to dedicated (not shared) interrupt which can be disabled to
+	  suppress events from the button.
 
 Example nodes:
···
 - debounce-interval	: Debouncing interval time in milliseconds
 - st,scan-count		: Scanning cycles elapsed before key data is updated
 - st,no-autorepeat	: If specified device will not autorepeat
+- keypad,num-rows	: See ./matrix-keymap.txt
+- keypad,num-columns	: See ./matrix-keymap.txt
 
 Example:
+2
Documentation/networking/ip-sysctl.txt
···
 route/max_size - INTEGER
 	Maximum number of routes allowed in the kernel.  Increase
 	this when using large numbers of interfaces and/or routes.
+	From linux kernel 3.6 onwards, this is deprecated for ipv4
+	as route cache is no longer used.
 
 neigh/default/gc_thresh1 - INTEGER
 	Minimum number of entries to keep.  Garbage collector will not
···
 Written by Amit Daniel Kachhap <amit.kachhap@linaro.org>
 
-Updated: 12 May 2012
+Updated: 6 Jan 2015
 
 Copyright (c)  2012 Samsung Electronics Co., Ltd(http://www.samsung.com)
···
 
    clip_cpus: cpumask of cpus where the frequency constraints will happen.
 
-1.1.2 void cpufreq_cooling_unregister(struct thermal_cooling_device *cdev)
+1.1.2 struct thermal_cooling_device *of_cpufreq_cooling_register(
+	struct device_node *np, const struct cpumask *clip_cpus)
+
+    This interface function registers the cpufreq cooling device with
+    the name "thermal-cpufreq-%x" linking it with a device tree node, in
+    order to bind it via the thermal DT code. This api can support multiple
+    instances of cpufreq cooling devices.
+
+   np: pointer to the cooling device device tree node
+   clip_cpus: cpumask of cpus where the frequency constraints will happen.
+
+1.1.3 void cpufreq_cooling_unregister(struct thermal_cooling_device *cdev)
 
    This interface function unregisters the "thermal-cpufreq-%x" cooling device.
+21-11
MAINTAINERS
···
 F:	drivers/char/apm-emulation.c
 
 APPLE BCM5974 MULTITOUCH DRIVER
-M:	Henrik Rydberg <rydberg@euromail.se>
+M:	Henrik Rydberg <rydberg@bitmath.org>
 L:	linux-input@vger.kernel.org
-S:	Maintained
+S:	Odd fixes
 F:	drivers/input/mouse/bcm5974.c
 
 APPLE SMC DRIVER
-M:	Henrik Rydberg <rydberg@euromail.se>
+M:	Henrik Rydberg <rydberg@bitmath.org>
 L:	lm-sensors@lm-sensors.org
-S:	Maintained
+S:	Odd fixes
 F:	drivers/hwmon/applesmc.c
 
 APPLETALK NETWORK LAYER
···
 BTRFS FILE SYSTEM
 M:	Chris Mason <clm@fb.com>
 M:	Josef Bacik <jbacik@fb.com>
+M:	David Sterba <dsterba@suse.cz>
 L:	linux-btrfs@vger.kernel.org
 W:	http://btrfs.wiki.kernel.org/
 Q:	http://patchwork.kernel.org/project/linux-btrfs/list/
···
 Q:	https://patchwork.kernel.org/project/linux-dmaengine/list/
 S:	Maintained
 F:	drivers/dma/
-F:	include/linux/dma*
+F:	include/linux/dmaengine.h
 F:	Documentation/dmaengine/
 T:	git git://git.infradead.org/users/vkoul/slave-dma.git
···
 F:	drivers/scsi/ipr.*
 
 IBM Power Virtual Ethernet Device Driver
-M:	Santiago Leon <santil@linux.vnet.ibm.com>
+M:	Thomas Falcon <tlfalcon@linux.vnet.ibm.com>
 L:	netdev@vger.kernel.org
 S:	Supported
 F:	drivers/net/ethernet/ibm/ibmveth.*
···
 F:	include/linux/input/
 
 INPUT MULTITOUCH (MT) PROTOCOL
-M:	Henrik Rydberg <rydberg@euromail.se>
+M:	Henrik Rydberg <rydberg@bitmath.org>
 L:	linux-input@vger.kernel.org
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/rydberg/input-mt.git
-S:	Maintained
+S:	Odd fixes
 F:	Documentation/input/multi-touch-protocol.txt
 F:	drivers/input/input-mt.c
 K:	\b(ABS|SYN)_MT_
···
 W:	www.open-iscsi.org
 Q:	http://patchwork.kernel.org/project/linux-rdma/list/
 F:	drivers/infiniband/ulp/iser/
+
+ISCSI EXTENSIONS FOR RDMA (ISER) TARGET
+M:	Sagi Grimberg <sagig@mellanox.com>
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/nab/target-pending.git master
+L:	linux-rdma@vger.kernel.org
+L:	target-devel@vger.kernel.org
+S:	Supported
+W:	http://www.linux-iscsi.org
+F:	drivers/infiniband/ulp/isert
 
 ISDN SUBSYSTEM
 M:	Karsten Keil <isdn@linux-pingi.de>
···
 F:	drivers/scsi/qla2xxx/
 
 QLOGIC QLA4XXX iSCSI DRIVER
-M:	Vikas Chaudhary <vikas.chaudhary@qlogic.com>
-M:	iscsi-driver@qlogic.com
+M:	QLogic-Storage-Upstream@qlogic.com
 L:	linux-scsi@vger.kernel.org
 S:	Supported
 F:	Documentation/scsi/LICENSE.qla4xxx
···
 TI BANDGAP AND THERMAL DRIVER
 M:	Eduardo Valentin <edubezval@gmail.com>
 L:	linux-pm@vger.kernel.org
-S:	Supported
+L:	linux-omap@vger.kernel.org
+S:	Maintained
 F:	drivers/thermal/ti-soc-thermal/
 
 TI CLOCK DRIVER
+2-1
Makefile
···
 VERSION = 3
 PATCHLEVEL = 19
 SUBLEVEL = 0
-EXTRAVERSION = -rc2
+EXTRAVERSION = -rc5
 NAME = Diseased Newt
 
 # *DOCUMENTATION*
···
 # Needed to be compatible with the O= option
 LINUXINCLUDE    := \
 		-I$(srctree)/arch/$(hdr-arch)/include \
+		-Iarch/$(hdr-arch)/include/generated/uapi \
 		-Iarch/$(hdr-arch)/include/generated \
 		$(if $(KBUILD_SRC), -I$(srctree)/include) \
 		-Iinclude \
-24
arch/arm/boot/dts/armada-370-db.dts
···
 		compatible = "linux,spdif-dir";
 	};
 };
-
-&pinctrl {
-	/*
-	 * These pins might be muxed as I2S by
-	 * the bootloader, but it conflicts
-	 * with the real I2S pins that are
-	 * muxed using i2s_pins. We must mux
-	 * those pins to a function other than
-	 * I2S.
-	 */
-	pinctrl-0 = <&hog_pins1 &hog_pins2>;
-	pinctrl-names = "default";
-
-	hog_pins1: hog-pins1 {
-		marvell,pins = "mpp6", "mpp8", "mpp10",
-			       "mpp12", "mpp13";
-		marvell,function = "gpio";
-	};
-
-	hog_pins2: hog-pins2 {
-		marvell,pins = "mpp5", "mpp7", "mpp9";
-		marvell,function = "gpo";
-	};
-};
···
 CONFIG_POWER_SUPPLY=y
 CONFIG_BATTERY_SBS=y
 CONFIG_CHARGER_TPS65090=y
-# CONFIG_HWMON is not set
+CONFIG_HWMON=y
+CONFIG_SENSORS_LM90=y
 CONFIG_THERMAL=y
 CONFIG_EXYNOS_THERMAL=y
 CONFIG_EXYNOS_THERMAL_CORE=y
···
 CONFIG_REGULATOR_S2MPS11=y
 CONFIG_REGULATOR_S5M8767=y
 CONFIG_REGULATOR_TPS65090=y
+CONFIG_DRM=y
+CONFIG_DRM_BRIDGE=y
+CONFIG_DRM_PTN3460=y
+CONFIG_DRM_PS8622=y
+CONFIG_DRM_EXYNOS=y
+CONFIG_DRM_EXYNOS_FIMD=y
+CONFIG_DRM_EXYNOS_DP=y
+CONFIG_DRM_PANEL=y
+CONFIG_DRM_PANEL_SIMPLE=y
 CONFIG_FB=y
 CONFIG_FB_MODE_HELPERS=y
 CONFIG_FB_SIMPLE=y
 CONFIG_EXYNOS_VIDEO=y
 CONFIG_EXYNOS_MIPI_DSI=y
+CONFIG_BACKLIGHT_LCD_SUPPORT=y
+CONFIG_LCD_CLASS_DEVICE=y
+CONFIG_LCD_PLATFORM=y
+CONFIG_BACKLIGHT_CLASS_DEVICE=y
+CONFIG_BACKLIGHT_GENERIC=y
+CONFIG_BACKLIGHT_PWM=y
 CONFIG_FRAMEBUFFER_CONSOLE=y
 CONFIG_FONTS=y
 CONFIG_FONT_7x14=y
···
 CONFIG_CPU_FREQ_GOV_POWERSAVE=y
 CONFIG_CPU_FREQ_GOV_USERSPACE=y
 CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
-CONFIG_GENERIC_CPUFREQ_CPU0=y
+CONFIG_CPUFREQ_DT=y
 # CONFIG_ARM_OMAP2PLUS_CPUFREQ is not set
 CONFIG_CPU_IDLE=y
 CONFIG_BINFMT_MISC=y
+1
arch/arm/include/uapi/asm/unistd.h
···
 #define __NR_getrandom			(__NR_SYSCALL_BASE+384)
 #define __NR_memfd_create		(__NR_SYSCALL_BASE+385)
 #define __NR_bpf			(__NR_SYSCALL_BASE+386)
+#define __NR_execveat			(__NR_SYSCALL_BASE+387)
 
 /*
  * The following SWIs are ARM private.
···
 #define OMAP5XXX_CONTROL_STATUS		0x134
 #define OMAP5_DEVICETYPE_MASK		(0x7 << 6)
 
+/* DRA7XX CONTROL CORE BOOTSTRAP */
+#define DRA7_CTRL_CORE_BOOTSTRAP	0x6c4
+#define DRA7_SPEEDSELECT_MASK		(0x3 << 8)
+
 /*
  * REVISIT: This list of registers is not comprehensive - there are more
  * that should be added.
+21
arch/arm/mach-omap2/omap-headsmp.S
···
 
 /* Physical address needed since MMU not enabled yet on secondary core */
 #define AUX_CORE_BOOT0_PA			0x48281800
+#define API_HYP_ENTRY				0x102
 
 /*
  * OMAP5 specific entry point for secondary CPU to jump from ROM
···
 	bne	wait
 	b	secondary_startup
 ENDPROC(omap5_secondary_startup)
+/*
+ * Same as omap5_secondary_startup except we call into the ROM to
+ * enable HYP mode first.  This is called instead of
+ * omap5_secondary_startup if the primary CPU was put into HYP mode by
+ * the boot loader.
+ */
+ENTRY(omap5_secondary_hyp_startup)
+wait_2:	ldr	r2, =AUX_CORE_BOOT0_PA	@ read from AuxCoreBoot0
+	ldr	r0, [r2]
+	mov	r0, r0, lsr #5
+	mrc	p15, 0, r4, c0, c0, 5
+	and	r4, r4, #0x0f
+	cmp	r0, r4
+	bne	wait_2
+	ldr	r12, =API_HYP_ENTRY
+	adr	r0, hyp_boot
+	smc	#0
+hyp_boot:
+	b	secondary_startup
+ENDPROC(omap5_secondary_hyp_startup)
 /*
  * OMAP4 specific entry point for secondary CPU to jump from ROM
  * code.  This routine also provides a holding flag into which
+11-2
arch/arm/mach-omap2/omap-smp.c
···
 #include <linux/irqchip/arm-gic.h>
 
 #include <asm/smp_scu.h>
+#include <asm/virt.h>
 
 #include "omap-secure.h"
 #include "omap-wakeupgen.h"
···
 	if (omap_secure_apis_support())
 		omap_auxcoreboot_addr(virt_to_phys(startup_addr));
 	else
-		writel_relaxed(virt_to_phys(omap5_secondary_startup),
-			       base + OMAP_AUX_CORE_BOOT_1);
+		/*
+		 * If the boot CPU is in HYP mode then start secondary
+		 * CPU in HYP mode as well.
+		 */
+		if ((__boot_cpu_mode & MODE_MASK) == HYP_MODE)
+			writel_relaxed(virt_to_phys(omap5_secondary_hyp_startup),
+				       base + OMAP_AUX_CORE_BOOT_1);
+		else
+			writel_relaxed(virt_to_phys(omap5_secondary_startup),
+				       base + OMAP_AUX_CORE_BOOT_1);
 
 }
+38-6
arch/arm/mach-omap2/timer.c
···
 
 #include "soc.h"
 #include "common.h"
+#include "control.h"
 #include "powerdomain.h"
 #include "omap-secure.h"
 
···
 	void __iomem *base;
 	static struct clk *sys_clk;
 	unsigned long rate;
-	unsigned int reg, num, den;
+	unsigned int reg;
+	unsigned long long num, den;
 
 	base = ioremap(REALTIME_COUNTER_BASE, SZ_32);
 	if (!base) {
···
 	}
 
 	rate = clk_get_rate(sys_clk);
+
+	if (soc_is_dra7xx()) {
+		/*
+		 * Errata i856 says the 32.768KHz crystal does not start at
+		 * power on, so the CPU falls back to an emulated 32KHz clock
+		 * based on sysclk / 610 instead. This causes the master counter
+		 * frequency to not be 6.144MHz but at sysclk / 610 * 375 / 2
+		 * (OR sysclk * 75 / 244)
+		 *
+		 * This affects at least the DRA7/AM572x 1.0, 1.1 revisions.
+		 * Of course any board built without a populated 32.768KHz
+		 * crystal would also need this fix even if the CPU is fixed
+		 * later.
+		 *
+		 * Either case can be detected by using the two speedselect bits
+		 * If they are not 0, then the 32.768KHz clock driving the
+		 * coarse counter that corrects the fine counter every time it
+		 * ticks is actually rate/610 rather than 32.768KHz and we
+		 * should compensate to avoid the 570ppm (at 20MHz, much worse
+		 * at other rates) too fast system time.
+		 */
+		reg = omap_ctrl_readl(DRA7_CTRL_CORE_BOOTSTRAP);
+		if (reg & DRA7_SPEEDSELECT_MASK) {
+			num = 75;
+			den = 244;
+			goto sysclk1_based;
+		}
+	}
+
 	/* Numerator/denumerator values refer TRM Realtime Counter section */
 	switch (rate) {
-	case 1200000:
+	case 12000000:
 		num = 64;
 		den = 125;
 		break;
-	case 1300000:
+	case 13000000:
 		num = 768;
 		den = 1625;
 		break;
···
 		num = 192;
 		den = 625;
 		break;
-	case 2600000:
+	case 26000000:
 		num = 384;
 		den = 1625;
 		break;
-	case 2700000:
+	case 27000000:
 		num = 256;
 		den = 1125;
 		break;
···
 		break;
 	}
 
+sysclk1_based:
 	/* Program numerator and denumerator registers */
 	reg = readl_relaxed(base + INCREMENTER_NUMERATOR_OFFSET) &
 			NUMERATOR_DENUMERATOR_MASK;
···
 	reg |= den;
 	writel_relaxed(reg, base + INCREMENTER_DENUMERATOR_RELOAD_OFFSET);
 
-	arch_timer_freq = (rate / den) * num;
+	arch_timer_freq = DIV_ROUND_UP_ULL(rate * num, den);
 	set_cntfreq();
 
 	iounmap(base);
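A userspace sketch of the arithmetic fixed in the timer.c hunk above (not the kernel code itself): the new `DIV_ROUND_UP_ULL(rate * num, den)` keeps full precision, while the old `(rate / den) * num` truncated before multiplying, which matters for ratios like the DRA7 errata case of sysclk * 75 / 244.

```c
#include <assert.h>

/* Same rounding as the kernel's DIV_ROUND_UP_ULL: divide, rounding up. */
static unsigned long long div_round_up_ull(unsigned long long n,
					   unsigned long long d)
{
	return (n + d - 1) / d;
}

/* Corrected master counter frequency computation from the hunk above. */
static unsigned long long rt_counter_freq(unsigned long long rate,
					  unsigned long long num,
					  unsigned long long den)
{
	return div_round_up_ull(rate * num, den);
}
```

For a 26 MHz sysclk with num=384, den=1625 this gives exactly 6.144 MHz; for the errata path at 20 MHz with 75/244 it gives 6147541 Hz, whereas the old truncating form would have produced 6147525 Hz.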
+27
arch/arm/mach-rockchip/rockchip.c
···
 #include <linux/init.h>
 #include <linux/of_platform.h>
 #include <linux/irqchip.h>
+#include <linux/clk-provider.h>
+#include <linux/clocksource.h>
+#include <linux/mfd/syscon.h>
+#include <linux/regmap.h>
 #include <asm/mach/arch.h>
 #include <asm/mach/map.h>
 #include <asm/hardware/cache-l2x0.h>
 #include "core.h"
+
+#define RK3288_GRF_SOC_CON0 0x244
+
+static void __init rockchip_timer_init(void)
+{
+	if (of_machine_is_compatible("rockchip,rk3288")) {
+		struct regmap *grf;
+
+		/*
+		 * Disable auto jtag/sdmmc switching that causes issues
+		 * with the mmc controllers making them unreliable
+		 */
+		grf = syscon_regmap_lookup_by_compatible("rockchip,rk3288-grf");
+		if (!IS_ERR(grf))
+			regmap_write(grf, RK3288_GRF_SOC_CON0, 0x10000000);
+		else
+			pr_err("rockchip: could not get grf syscon\n");
+	}
+
+	of_clk_init(NULL);
+	clocksource_of_init();
+}
 
 static void __init rockchip_dt_init(void)
 {
···
 DT_MACHINE_START(ROCKCHIP_DT, "Rockchip Cortex-A9 (Device Tree)")
 	.l2c_aux_val = 0,
 	.l2c_aux_mask = ~0,
+	.init_time = rockchip_timer_init,
 	.dt_compat = rockchip_board_dt_compat,
 	.init_machine = rockchip_dt_init,
MACHINE_END
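The single value `0x10000000` written above relies on the Rockchip GRF register convention (assumed here, not stated in the hunk): the upper 16 bits of the written word act as a write-enable mask for the corresponding lower 16 data bits, so one 32-bit write can clear bit 12 without a read-modify-write. A small sketch of constructing such values, with a hypothetical helper name:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical helper for the assumed Rockchip GRF write convention:
 * bits [31:16] select which of bits [15:0] take effect.
 */
static uint32_t grf_write_val(uint16_t mask, uint16_t value)
{
	return ((uint32_t)mask << 16) | value;
}
```

Under this convention, `grf_write_val(0x1000, 0x0000)` yields the `0x10000000` used above: "update bit 12, set it to 0".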
···
 
 #include <asm/fpsimd.h>
 #include <asm/hw_breakpoint.h>
+#include <asm/pgtable-hwdef.h>
 #include <asm/ptrace.h>
 #include <asm/types.h>
 
···
 
 /* Free all resources held by a thread. */
 extern void release_thread(struct task_struct *);
-
-/* Prepare to copy thread state - unlazy all lazy status */
-#define prepare_to_copy(tsk)	do { } while (0)
 
 unsigned long get_wchan(struct task_struct *p);
···
 	 * If we have AArch32, we care about 32-bit features for compat. These
 	 * registers should be RES0 otherwise.
 	 */
+	diff |= CHECK(id_dfr0, boot, cur, cpu);
 	diff |= CHECK(id_isar0, boot, cur, cpu);
 	diff |= CHECK(id_isar1, boot, cur, cpu);
 	diff |= CHECK(id_isar2, boot, cur, cpu);
···
 	diff |= CHECK(id_mmfr3, boot, cur, cpu);
 	diff |= CHECK(id_pfr0, boot, cur, cpu);
 	diff |= CHECK(id_pfr1, boot, cur, cpu);
+
+	diff |= CHECK(mvfr0, boot, cur, cpu);
+	diff |= CHECK(mvfr1, boot, cur, cpu);
+	diff |= CHECK(mvfr2, boot, cur, cpu);
 
 	/*
 	 * Mismatched CPU features are a recipe for disaster. Don't even
···
 	info->reg_id_aa64pfr0 = read_cpuid(ID_AA64PFR0_EL1);
 	info->reg_id_aa64pfr1 = read_cpuid(ID_AA64PFR1_EL1);
 
+	info->reg_id_dfr0 = read_cpuid(ID_DFR0_EL1);
 	info->reg_id_isar0 = read_cpuid(ID_ISAR0_EL1);
 	info->reg_id_isar1 = read_cpuid(ID_ISAR1_EL1);
 	info->reg_id_isar2 = read_cpuid(ID_ISAR2_EL1);
···
 	info->reg_id_mmfr3 = read_cpuid(ID_MMFR3_EL1);
 	info->reg_id_pfr0 = read_cpuid(ID_PFR0_EL1);
 	info->reg_id_pfr1 = read_cpuid(ID_PFR1_EL1);
+
+	info->reg_mvfr0 = read_cpuid(MVFR0_EL1);
+	info->reg_mvfr1 = read_cpuid(MVFR1_EL1);
+	info->reg_mvfr2 = read_cpuid(MVFR2_EL1);
 
 	cpuinfo_detect_icache_policy(info);
+1-1
arch/arm64/kernel/efi.c
···
 
 	/* boot time idmap_pg_dir is incomplete, so fill in missing parts */
 	efi_setup_idmap();
+	early_memunmap(memmap.map, memmap.map_end - memmap.map);
 }
 
 static int __init remap_region(efi_memory_desc_t *md, void **new)
···
 	}
 
 	mapsize = memmap.map_end - memmap.map;
-	early_memunmap(memmap.map, mapsize);
 
 	if (efi_runtime_disabled()) {
 		pr_info("EFI runtime services will be disabled.\n");
···
 	 * Instead, we invalidate Stage-2 for this IPA, and the
 	 * whole of Stage-1. Weep...
 	 */
+	lsr	x1, x1, #12
 	tlbi	ipas2e1is, x1
 	/*
 	 * We have to ensure completion of the invalidation at Stage-2,
···
 
 void free_initrd_mem(unsigned long start, unsigned long end)
 {
-	if (!keep_initrd) {
-		if (start == initrd_start)
-			start = round_down(start, PAGE_SIZE);
-		if (end == initrd_end)
-			end = round_up(end, PAGE_SIZE);
-
+	if (!keep_initrd)
 		free_reserved_area((void *)start, (void *)end, 0, "initrd");
-	}
 }
 
 static int __init keepinitrd_setup(char *__unused)
···
 
 
 
-#define NR_syscalls			318 /* length of syscall table */
+#define NR_syscalls			319 /* length of syscall table */
 
 /*
  * The following defines stop scripts/checksyscalls.sh from complaining about
···
 	 * using debugger IPI.
 	 */
 
-	if (crashing_cpu == -1)
+	if (!kdump_in_progress())
 		kexec_prepare_cpus();
 
 	pr_debug("kexec: Starting switchover sequence.\n");
+1-8
arch/powerpc/kernel/smp.c
···
 	smp_store_cpu_info(cpu);
 	set_dec(tb_ticks_per_jiffy);
 	preempt_disable();
+	cpu_callin_map[cpu] = 1;
 
 	if (smp_ops->setup_cpu)
 		smp_ops->setup_cpu(cpu);
···
 	smp_wmb();
 	notify_cpu_starting(cpu);
 	set_cpu_online(cpu, true);
-
-	/*
-	 * CPU must be marked active and online before we signal back to the
-	 * master, because the scheduler needs to see the cpu_online and
-	 * cpu_active bits set.
-	 */
-	smp_wmb();
-	cpu_callin_map[cpu] = 1;
 
 	local_irq_enable();
···
 #include <asm/trace.h>
 #include <asm/firmware.h>
 #include <asm/plpar_wrappers.h>
+#include <asm/kexec.h>
 #include <asm/fadump.h>
 
 #include "pseries.h"
···
 		 * out to the user, but at least this will stop us from
 		 * continuing on further and creating an even more
 		 * difficult to debug situation.
+		 *
+		 * There is a known problem when kdump'ing, if cpus are offline
+		 * the above call will fail. Rather than panicking again, keep
+		 * going and hope the kdump kernel is also little endian, which
+		 * it usually is.
 		 */
-		if (rc)
+		if (rc && !kdump_in_progress())
 			panic("Could not enable big endian exceptions");
 	}
 #endif
+1-1
arch/s390/hypfs/hypfs_vm.c
···
 struct dbfs_d2fc_hdr {
 	u64	len;		/* Length of d2fc buffer without header */
 	u16	version;	/* Version of header */
-	char	tod_ext[16];	/* TOD clock for d2fc */
+	char	tod_ext[STORE_CLOCK_EXT_SIZE];	/* TOD clock for d2fc */
 	u64	count;		/* Number of VM guests in d2fc buffer */
 	char	reserved[30];
 } __attribute__ ((packed));
+1-1
arch/s390/include/asm/irqflags.h
···
 
 static inline notrace unsigned long arch_local_save_flags(void)
 {
-	return __arch_local_irq_stosm(0x00);
+	return __arch_local_irq_stnsm(0xff);
 }
 
 static inline notrace unsigned long arch_local_irq_save(void)
···
 #define __NR_bpf		351
 #define __NR_s390_pci_mmio_write	352
 #define __NR_s390_pci_mmio_read		353
-#define NR_syscalls 354
+#define __NR_execveat		354
+#define NR_syscalls 355
 
 /* 
  * There are some system calls that are not present on 64 bit, some
···
 
 	/*
 	 * Load per CPU data from GDT.  LSL is faster than RDTSCP and
-	 * works on all CPUs.
+	 * works on all CPUs.  This is volatile so that it orders
+	 * correctly wrt barrier() and to keep gcc from cleverly
+	 * hoisting it out of the calling function.
 	 */
-	asm("lsl %1,%0" : "=r" (p) : "r" (__PER_CPU_SEG));
+	asm volatile ("lsl %1,%0" : "=r" (p) : "r" (__PER_CPU_SEG));
 
 	return p;
 }
+4-5
arch/x86/kernel/acpi/boot.c
···
 }
 
 /* wrapper to silence section mismatch warning */
-int __ref acpi_map_lsapic(acpi_handle handle, int physid, int *pcpu)
+int __ref acpi_map_cpu(acpi_handle handle, int physid, int *pcpu)
 {
 	return _acpi_map_lsapic(handle, physid, pcpu);
 }
-EXPORT_SYMBOL(acpi_map_lsapic);
+EXPORT_SYMBOL(acpi_map_cpu);
 
-int acpi_unmap_lsapic(int cpu)
+int acpi_unmap_cpu(int cpu)
 {
 #ifdef CONFIG_ACPI_NUMA
 	set_apicid_to_node(per_cpu(x86_cpu_to_apicid, cpu), NUMA_NO_NODE);
···
 
 	return (0);
 }
-
-EXPORT_SYMBOL(acpi_unmap_lsapic);
+EXPORT_SYMBOL(acpi_unmap_cpu);
 #endif				/* CONFIG_ACPI_HOTPLUG_CPU */
 
 int acpi_register_ioapic(acpi_handle handle, u64 phys_addr, u32 gsi_base)
···
 enum {
 	SNBEP_PCI_QPI_PORT0_FILTER,
 	SNBEP_PCI_QPI_PORT1_FILTER,
+	HSWEP_PCI_PCU_3,
 };
 
 static int snbep_qpi_hw_config(struct intel_uncore_box *box, struct perf_event *event)
···
 {
 	if (hswep_uncore_cbox.num_boxes > boot_cpu_data.x86_max_cores)
 		hswep_uncore_cbox.num_boxes = boot_cpu_data.x86_max_cores;
+
+	/* Detect 6-8 core systems with only two SBOXes */
+	if (uncore_extra_pci_dev[0][HSWEP_PCI_PCU_3]) {
+		u32 capid4;
+
+		pci_read_config_dword(uncore_extra_pci_dev[0][HSWEP_PCI_PCU_3],
+				      0x94, &capid4);
+		if (((capid4 >> 6) & 0x3) == 0)
+			hswep_uncore_sbox.num_boxes = 2;
+	}
+
 	uncore_msr_uncores = hswep_msr_uncores;
 }
···
 		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x2f96),
 		.driver_data = UNCORE_PCI_DEV_DATA(UNCORE_EXTRA_PCI_DEV,
 						   SNBEP_PCI_QPI_PORT1_FILTER),
+	},
+	{ /* PCU.3 (for Capability registers) */
+		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x2fc0),
+		.driver_data = UNCORE_PCI_DEV_DATA(UNCORE_EXTRA_PCI_DEV,
+						   HSWEP_PCI_PCU_3),
 	},
 	{ /* end: all zeroes */ }
 };
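The SBOX detection above hinges on a 2-bit field at bits [7:6] of the CAPID4 register (offset 0x94 in the PCU.3 config space); a value of 0 marks parts with only two SBOX units. A minimal sketch of that extraction, with the helper name chosen here for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* Extract the 2-bit field at bits [7:6], as the hunk above does. */
static unsigned int capid4_sbox_field(uint32_t capid4)
{
	return (capid4 >> 6) & 0x3;
}
```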
+15-5
arch/x86/kernel/kprobes/core.c
···
 	regs->flags &= ~X86_EFLAGS_IF;
 	trace_hardirqs_off();
 	regs->ip = (unsigned long)(jp->entry);
+
+	/*
+	 * jprobes use jprobe_return() which skips the normal return
+	 * path of the function, and this messes up the accounting of the
+	 * function graph tracer to get messed up.
+	 *
+	 * Pause function graph tracing while performing the jprobe function.
+	 */
+	pause_graph_tracing();
 	return 1;
 }
 NOKPROBE_SYMBOL(setjmp_pre_handler);
···
 	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
 	u8 *addr = (u8 *) (regs->ip - 1);
 	struct jprobe *jp = container_of(p, struct jprobe, kp);
+	void *saved_sp = kcb->jprobe_saved_sp;
 
 	if ((addr > (u8 *) jprobe_return) &&
 	    (addr < (u8 *) jprobe_return_end)) {
-		if (stack_addr(regs) != kcb->jprobe_saved_sp) {
+		if (stack_addr(regs) != saved_sp) {
 			struct pt_regs *saved_regs = &kcb->jprobe_saved_regs;
 			printk(KERN_ERR
 			       "current sp %p does not match saved sp %p\n",
-			       stack_addr(regs), kcb->jprobe_saved_sp);
+			       stack_addr(regs), saved_sp);
 			printk(KERN_ERR "Saved registers for jprobe %p\n", jp);
 			show_regs(saved_regs);
 			printk(KERN_ERR "Current registers\n");
 			show_regs(regs);
 			BUG();
 		}
+		/* It's OK to start function graph tracing again */
+		unpause_graph_tracing();
 		*regs = kcb->jprobe_saved_regs;
-		memcpy((kprobe_opcode_t *)(kcb->jprobe_saved_sp),
-		       kcb->jprobes_stack,
-		       MIN_STACK_SIZE(kcb->jprobe_saved_sp));
+		memcpy(saved_sp, kcb->jprobes_stack, MIN_STACK_SIZE(saved_sp));
 		preempt_enable_no_resched();
 		return 1;
 	}
+90
arch/x86/kernel/perf_regs.c
···
 {
 	return PERF_SAMPLE_REGS_ABI_32;
 }
+
+void perf_get_regs_user(struct perf_regs *regs_user,
+			struct pt_regs *regs,
+			struct pt_regs *regs_user_copy)
+{
+	regs_user->regs = task_pt_regs(current);
+	regs_user->abi = perf_reg_abi(current);
+}
 #else /* CONFIG_X86_64 */
 #define REG_NOSUPPORT ((1ULL << PERF_REG_X86_DS) | \
 		       (1ULL << PERF_REG_X86_ES) | \
···
 		return PERF_SAMPLE_REGS_ABI_32;
 	else
 		return PERF_SAMPLE_REGS_ABI_64;
+}
+
+void perf_get_regs_user(struct perf_regs *regs_user,
+			struct pt_regs *regs,
+			struct pt_regs *regs_user_copy)
+{
+	struct pt_regs *user_regs = task_pt_regs(current);
+
+	/*
+	 * If we're in an NMI that interrupted task_pt_regs setup, then
+	 * we can't sample user regs at all.  This check isn't really
+	 * sufficient, though, as we could be in an NMI inside an interrupt
+	 * that happened during task_pt_regs setup.
+	 */
+	if (regs->sp > (unsigned long)&user_regs->r11 &&
+	    regs->sp <= (unsigned long)(user_regs + 1)) {
+		regs_user->abi = PERF_SAMPLE_REGS_ABI_NONE;
+		regs_user->regs = NULL;
+		return;
+	}
+
+	/*
+	 * RIP, flags, and the argument registers are usually saved.
+	 * orig_ax is probably okay, too.
+	 */
+	regs_user_copy->ip = user_regs->ip;
+	regs_user_copy->cx = user_regs->cx;
+	regs_user_copy->dx = user_regs->dx;
+	regs_user_copy->si = user_regs->si;
+	regs_user_copy->di = user_regs->di;
+	regs_user_copy->r8 = user_regs->r8;
+	regs_user_copy->r9 = user_regs->r9;
+	regs_user_copy->r10 = user_regs->r10;
+	regs_user_copy->r11 = user_regs->r11;
+	regs_user_copy->orig_ax = user_regs->orig_ax;
+	regs_user_copy->flags = user_regs->flags;
+
+	/*
+	 * Don't even try to report the "rest" regs.
+	 */
+	regs_user_copy->bx = -1;
+	regs_user_copy->bp = -1;
+	regs_user_copy->r12 = -1;
+	regs_user_copy->r13 = -1;
+	regs_user_copy->r14 = -1;
+	regs_user_copy->r15 = -1;
+
+	/*
+	 * For this to be at all useful, we need a reasonable guess for
+	 * sp and the ABI.  Be careful: we're in NMI context, and we're
+	 * considering current to be the current task, so we should
+	 * be careful not to look at any other percpu variables that might
+	 * change during context switches.
+	 */
+	if (IS_ENABLED(CONFIG_IA32_EMULATION) &&
+	    task_thread_info(current)->status & TS_COMPAT) {
+		/* Easy case: we're in a compat syscall. */
+		regs_user->abi = PERF_SAMPLE_REGS_ABI_32;
+		regs_user_copy->sp = user_regs->sp;
+		regs_user_copy->cs = user_regs->cs;
+		regs_user_copy->ss = user_regs->ss;
+	} else if (user_regs->orig_ax != -1) {
+		/*
+		 * We're probably in a 64-bit syscall.
+		 * Warning: this code is severely racy.  At least it's better
+		 * than just blindly copying user_regs.
+		 */
+		regs_user->abi = PERF_SAMPLE_REGS_ABI_64;
+		regs_user_copy->sp = this_cpu_read(old_rsp);
+		regs_user_copy->cs = __USER_CS;
+		regs_user_copy->ss = __USER_DS;
+		regs_user_copy->cx = -1; /* usually contains garbage */
+	} else {
+		/* We're probably in an interrupt or exception. */
+		regs_user->abi = user_64bit_mode(user_regs) ?
+			PERF_SAMPLE_REGS_ABI_64 : PERF_SAMPLE_REGS_ABI_32;
+		regs_user_copy->sp = user_regs->sp;
+		regs_user_copy->cs = user_regs->cs;
+		regs_user_copy->ss = user_regs->ss;
+	}
+
+	regs_user->regs = regs_user_copy;
 }
 #endif	/* CONFIG_X86_32 */
+1-1
arch/x86/lib/insn.c
···
 
 /* Verify next sizeof(t) bytes can be on the same instruction */
 #define validate_next(t, insn, n)	\
-	((insn)->next_byte + sizeof(t) + n < (insn)->end_kaddr)
+	((insn)->next_byte + sizeof(t) + n <= (insn)->end_kaddr)
 
 #define __get_next(t, insn)	\
 	({ t r = *(t*)insn->next_byte; insn->next_byte += sizeof(t); r; })
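The one-character fix above is a classic off-by-one: with `end_kaddr` pointing one past the last valid byte, a read of `size` bytes starting at `next` is in bounds exactly when `next + size <= end`. The old `<` wrongly rejected a read that ends at the buffer boundary, i.e. the last byte of the instruction. A standalone sketch of the check:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Bounds check in the corrected form: end points one past the last
 * valid byte, so "<=" admits a read ending exactly at the boundary.
 */
static int can_read(const unsigned char *next, size_t size,
		    const unsigned char *end)
{
	return next + size <= end;
}
```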
+17-20
arch/x86/mm/init.c
···
 static unsigned long __init get_new_step_size(unsigned long step_size)
 {
 	/*
-	 * Explain why we shift by 5 and why we don't have to worry about
-	 * 'step_size << 5' overflowing:
-	 *
-	 * initial mapped size is PMD_SIZE (2M).
+	 * Initial mapped size is PMD_SIZE (2M).
 	 * We can not set step_size to be PUD_SIZE (1G) yet.
 	 * In worse case, when we cross the 1G boundary, and
 	 * PG_LEVEL_2M is not set, we will need 1+1+512 pages (2M + 8k)
-	 * to map 1G range with PTE. Use 5 as shift for now.
+	 * to map 1G range with PTE. Hence we use one less than the
+	 * difference of page table level shifts.
 	 *
-	 * Don't need to worry about overflow, on 32bit, when step_size
-	 * is 0, round_down() returns 0 for start, and that turns it
-	 * into 0x100000000ULL.
+	 * Don't need to worry about overflow in the top-down case, on 32bit,
+	 * when step_size is 0, round_down() returns 0 for start, and that
+	 * turns it into 0x100000000ULL.
+	 * In the bottom-up case, round_up(x, 0) returns 0 though too, which
+	 * needs to be taken into consideration by the code below.
 	 */
-	return step_size << 5;
+	return step_size << (PMD_SHIFT - PAGE_SHIFT - 1);
 }
 
 /**
···
 	unsigned long step_size;
 	unsigned long addr;
 	unsigned long mapped_ram_size = 0;
-	unsigned long new_mapped_ram_size;
 
 	/* xen has big range in reserved near end of ram, skip it at first.*/
 	addr = memblock_find_in_range(map_start, map_end, PMD_SIZE, PMD_SIZE);
···
 			start = map_start;
 		} else
 			start = map_start;
-		new_mapped_ram_size = init_range_memory_mapping(start,
+		mapped_ram_size += init_range_memory_mapping(start,
 							last_start);
 		last_start = start;
 		min_pfn_mapped = last_start >> PAGE_SHIFT;
-		/* only increase step_size after big range get mapped */
-		if (new_mapped_ram_size > mapped_ram_size)
+		if (mapped_ram_size >= step_size)
 			step_size = get_new_step_size(step_size);
-		mapped_ram_size += new_mapped_ram_size;
 	}
 
 	if (real_end < map_end)
···
 static void __init memory_map_bottom_up(unsigned long map_start,
 					unsigned long map_end)
 {
-	unsigned long next, new_mapped_ram_size, start;
+	unsigned long next, start;
 	unsigned long mapped_ram_size = 0;
 	/* step_size need to be small so pgt_buf from BRK could cover it */
 	unsigned long step_size = PMD_SIZE;
···
 	 * for page table.
 	 */
 	while (start < map_end) {
-		if (map_end - start > step_size) {
+		if (step_size && map_end - start > step_size) {
 			next = round_up(start + 1, step_size);
 			if (next > map_end)
 				next = map_end;
-		} else
+		} else {
 			next = map_end;
+		}
 
-		new_mapped_ram_size = init_range_memory_mapping(start, next);
+		mapped_ram_size += init_range_memory_mapping(start, next);
 		start = next;
 
-		if (new_mapped_ram_size > mapped_ram_size)
+		if (mapped_ram_size >= step_size)
 			step_size = get_new_step_size(step_size);
-		mapped_ram_size += new_mapped_ram_size;
 	}
}
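The new shift can be checked with x86 4K-page constants (assumed here): each page table level covers 512 entries (a shift of 9), and using one less than that (8) grows the step from 2M to 512M per round, versus 64M with the old fixed shift of 5. A small sketch of the growth function:

```c
#include <assert.h>

#define PAGE_SHIFT 12
#define PMD_SHIFT  21
#define PMD_SIZE   (1ULL << PMD_SHIFT)

/*
 * Step-size growth as in the hunk above: one less than the difference
 * of page table level shifts (9 - 1 = 8 with 4K pages).
 */
static unsigned long long get_new_step_size(unsigned long long step_size)
{
	return step_size << (PMD_SHIFT - PAGE_SHIFT - 1);
}
```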
+1-1
arch/x86/um/sys_call_table_32.c
···
 
 extern asmlinkage void sys_ni_syscall(void);
 
-const sys_call_ptr_t sys_call_table[] __cacheline_aligned = {
+const sys_call_ptr_t sys_call_table[] ____cacheline_aligned = {
         /*
          * Smells like a compiler bug -- it doesn't work
          * when the & below is removed.
+1-1
arch/x86/um/sys_call_table_64.c
···
 
 extern void sys_ni_syscall(void);
 
-const sys_call_ptr_t sys_call_table[] __cacheline_aligned = {
+const sys_call_ptr_t sys_call_table[] ____cacheline_aligned = {
         /*
          * Smells like a compiler bug -- it doesn't work
          * when the & below is removed.
+29-16
arch/x86/vdso/vma.c
···
 
 struct linux_binprm;
 
-/* Put the vdso above the (randomized) stack with another randomized offset.
-   This way there is no hole in the middle of address space.
-   To save memory make sure it is still in the same PTE as the stack top.
-   This doesn't give that many random bits.
-
-   Only used for the 64-bit and x32 vdsos. */
+/*
+ * Put the vdso above the (randomized) stack with another randomized
+ * offset. This way there is no hole in the middle of address space.
+ * To save memory make sure it is still in the same PTE as the stack
+ * top. This doesn't give that many random bits.
+ *
+ * Note that this algorithm is imperfect: the distribution of the vdso
+ * start address within a PMD is biased toward the end.
+ *
+ * Only used for the 64-bit and x32 vdsos.
+ */
 static unsigned long vdso_addr(unsigned long start, unsigned len)
 {
 #ifdef CONFIG_X86_32
···
 #else
         unsigned long addr, end;
         unsigned offset;
-        end = (start + PMD_SIZE - 1) & PMD_MASK;
+
+        /*
+         * Round up the start address. It can start out unaligned as a result
+         * of stack start randomization.
+         */
+        start = PAGE_ALIGN(start);
+
+        /* Round the lowest possible end address up to a PMD boundary. */
+        end = (start + len + PMD_SIZE - 1) & PMD_MASK;
         if (end >= TASK_SIZE_MAX)
                 end = TASK_SIZE_MAX;
         end -= len;
-        /* This loses some more bits than a modulo, but is cheaper */
-        offset = get_random_int() & (PTRS_PER_PTE - 1);
-        addr = start + (offset << PAGE_SHIFT);
-        if (addr >= end)
-                addr = end;
+
+        if (end > start) {
+                offset = get_random_int() % (((end - start) >> PAGE_SHIFT) + 1);
+                addr = start + (offset << PAGE_SHIFT);
+        } else {
+                addr = start;
+        }
 
         /*
-         * page-align it here so that get_unmapped_area doesn't
-         * align it wrongfully again to the next page. addr can come in 4K
-         * unaligned here as a result of stack start randomization.
+         * Forcibly align the final address in case we have a hardware
+         * issue that requires alignment for performance reasons.
          */
-        addr = PAGE_ALIGN(addr);
         addr = align_vdso_addr(addr);
 
         return addr;
+21-1
arch/x86/xen/enlighten.c
···
 #include <xen/interface/physdev.h>
 #include <xen/interface/vcpu.h>
 #include <xen/interface/memory.h>
+#include <xen/interface/nmi.h>
 #include <xen/interface/xen-mca.h>
 #include <xen/features.h>
 #include <xen/page.h>
···
 #include <asm/reboot.h>
 #include <asm/stackprotector.h>
 #include <asm/hypervisor.h>
+#include <asm/mach_traps.h>
 #include <asm/mwait.h>
 #include <asm/pci_x86.h>
 #include <asm/pat.h>
···
         .emergency_restart = xen_emergency_restart,
 };
 
+static unsigned char xen_get_nmi_reason(void)
+{
+        unsigned char reason = 0;
+
+        /* Construct a value which looks like it came from port 0x61. */
+        if (test_bit(_XEN_NMIREASON_io_error,
+                     &HYPERVISOR_shared_info->arch.nmi_reason))
+                reason |= NMI_REASON_IOCHK;
+        if (test_bit(_XEN_NMIREASON_pci_serr,
+                     &HYPERVISOR_shared_info->arch.nmi_reason))
+                reason |= NMI_REASON_SERR;
+
+        return reason;
+}
+
 static void __init xen_boot_params_init_edd(void)
 {
 #if IS_ENABLED(CONFIG_EDD)
···
         pv_info = xen_info;
         pv_init_ops = xen_init_ops;
         pv_apic_ops = xen_apic_ops;
-        if (!xen_pvh_domain())
+        if (!xen_pvh_domain()) {
                 pv_cpu_ops = xen_cpu_ops;
+
+                x86_platform.get_nmi_reason = xen_get_nmi_reason;
+        }
 
         if (xen_feature(XENFEAT_auto_translated_physmap))
                 x86_init.resources.memory_setup = xen_auto_xlated_memory_setup;
+10-10
arch/x86/xen/p2m.c
···
         return (void *)__get_free_page(GFP_KERNEL | __GFP_REPEAT);
 }
 
-/* Only to be called in case of a race for a page just allocated! */
-static void free_p2m_page(void *p)
+static void __ref free_p2m_page(void *p)
 {
-        BUG_ON(!slab_is_available());
+        if (unlikely(!slab_is_available())) {
+                free_bootmem((unsigned long)p, PAGE_SIZE);
+                return;
+        }
+
         free_page((unsigned long)p);
 }
 
···
                         p2m_missing_pte : p2m_identity_pte;
                 for (i = 0; i < PMDS_PER_MID_PAGE; i++) {
                         pmdp = populate_extra_pmd(
-                                (unsigned long)(p2m + pfn + i * PTRS_PER_PTE));
+                                (unsigned long)(p2m + pfn) + i * PMD_SIZE);
                         set_pmd(pmdp, __pmd(__pa(ptep) | _KERNPG_TABLE));
                 }
         }
···
  * a new pmd is to replace p2m_missing_pte or p2m_identity_pte by a individual
  * pmd. In case of PAE/x86-32 there are multiple pmds to allocate!
  */
-static pte_t *alloc_p2m_pmd(unsigned long addr, pte_t *ptep, pte_t *pte_pg)
+static pte_t *alloc_p2m_pmd(unsigned long addr, pte_t *pte_pg)
 {
         pte_t *ptechk;
-        pte_t *pteret = ptep;
         pte_t *pte_newpg[PMDS_PER_MID_PAGE];
         pmd_t *pmdp;
         unsigned int level;
···
                 if (ptechk == pte_pg) {
                         set_pmd(pmdp,
                                 __pmd(__pa(pte_newpg[i]) | _KERNPG_TABLE));
-                        if (vaddr == (addr & ~(PMD_SIZE - 1)))
-                                pteret = pte_offset_kernel(pmdp, addr);
                         pte_newpg[i] = NULL;
                 }
 
···
                 vaddr += PMD_SIZE;
         }
 
-        return pteret;
+        return lookup_address(addr, &level);
 }
 
 /*
···
 
         if (pte_pg == p2m_missing_pte || pte_pg == p2m_identity_pte) {
                 /* PMD level is missing, allocate a new one */
-                ptep = alloc_p2m_pmd(addr, ptep, pte_pg);
+                ptep = alloc_p2m_pmd(addr, pte_pg);
                 if (!ptep)
                         return false;
         }
+20-22
arch/x86/xen/setup.c
···
 unsigned long __ref xen_chk_extra_mem(unsigned long pfn)
 {
         int i;
-        unsigned long addr = PFN_PHYS(pfn);
+        phys_addr_t addr = PFN_PHYS(pfn);
 
         for (i = 0; i < XEN_EXTRA_MEM_MAX_REGIONS; i++) {
                 if (addr >= xen_extra_mem[i].start &&
···
         int i;
 
         for (i = 0; i < XEN_EXTRA_MEM_MAX_REGIONS; i++) {
+                if (!xen_extra_mem[i].size)
+                        continue;
                 pfn_s = PFN_DOWN(xen_extra_mem[i].start);
                 pfn_e = PFN_UP(xen_extra_mem[i].start + xen_extra_mem[i].size);
                 for (pfn = pfn_s; pfn < pfn_e; pfn++)
···
  * as a fallback if the remapping fails.
  */
 static void __init xen_set_identity_and_release_chunk(unsigned long start_pfn,
-        unsigned long end_pfn, unsigned long nr_pages, unsigned long *identity,
-        unsigned long *released)
+        unsigned long end_pfn, unsigned long nr_pages, unsigned long *released)
 {
-        unsigned long len = 0;
         unsigned long pfn, end;
         int ret;
 
         WARN_ON(start_pfn > end_pfn);
 
+        /* Release pages first. */
         end = min(end_pfn, nr_pages);
         for (pfn = start_pfn; pfn < end; pfn++) {
                 unsigned long mfn = pfn_to_mfn(pfn);
···
                 WARN(ret != 1, "Failed to release pfn %lx err=%d\n", pfn, ret);
 
                 if (ret == 1) {
+                        (*released)++;
                         if (!__set_phys_to_machine(pfn, INVALID_P2M_ENTRY))
                                 break;
-                        len++;
                 } else
                         break;
         }
 
-        /* Need to release pages first */
-        *released += len;
-        *identity += set_phys_range_identity(start_pfn, end_pfn);
+        set_phys_range_identity(start_pfn, end_pfn);
 }
 
 /*
···
         }
 
         /* Update kernel mapping, but not for highmem. */
-        if ((pfn << PAGE_SHIFT) >= __pa(high_memory))
+        if (pfn >= PFN_UP(__pa(high_memory - 1)))
                 return;
 
         if (HYPERVISOR_update_va_mapping((unsigned long)__va(pfn << PAGE_SHIFT),
···
         unsigned long ident_pfn_iter, remap_pfn_iter;
         unsigned long ident_end_pfn = start_pfn + size;
         unsigned long left = size;
-        unsigned long ident_cnt = 0;
         unsigned int i, chunk;
 
         WARN_ON(size == 0);
···
         xen_remap_mfn = mfn;
 
         /* Set identity map */
-        ident_cnt += set_phys_range_identity(ident_pfn_iter,
-                ident_pfn_iter + chunk);
+        set_phys_range_identity(ident_pfn_iter, ident_pfn_iter + chunk);
 
         left -= chunk;
 }
···
 static unsigned long __init xen_set_identity_and_remap_chunk(
         const struct e820entry *list, size_t map_size, unsigned long start_pfn,
         unsigned long end_pfn, unsigned long nr_pages, unsigned long remap_pfn,
-        unsigned long *identity, unsigned long *released)
+        unsigned long *released, unsigned long *remapped)
 {
         unsigned long pfn;
         unsigned long i = 0;
···
         /* Do not remap pages beyond the current allocation */
         if (cur_pfn >= nr_pages) {
                 /* Identity map remaining pages */
-                *identity += set_phys_range_identity(cur_pfn,
-                        cur_pfn + size);
+                set_phys_range_identity(cur_pfn, cur_pfn + size);
                 break;
         }
         if (cur_pfn + size > nr_pages)
···
         if (!remap_range_size) {
                 pr_warning("Unable to find available pfn range, not remapping identity pages\n");
                 xen_set_identity_and_release_chunk(cur_pfn,
-                        cur_pfn + left, nr_pages, identity, released);
+                        cur_pfn + left, nr_pages, released);
                 break;
         }
         /* Adjust size to fit in current e820 RAM region */
···
         /* Update variables to reflect new mappings. */
         i += size;
         remap_pfn += size;
-        *identity += size;
+        *remapped += size;
 }
 
 /*
···
 
 static void __init xen_set_identity_and_remap(
         const struct e820entry *list, size_t map_size, unsigned long nr_pages,
-        unsigned long *released)
+        unsigned long *released, unsigned long *remapped)
 {
         phys_addr_t start = 0;
-        unsigned long identity = 0;
         unsigned long last_pfn = nr_pages;
         const struct e820entry *entry;
         unsigned long num_released = 0;
+        unsigned long num_remapped = 0;
         int i;
 
         /*
···
                         last_pfn = xen_set_identity_and_remap_chunk(
                                         list, map_size, start_pfn,
                                         end_pfn, nr_pages, last_pfn,
-                                        &identity, &num_released);
+                                        &num_released, &num_remapped);
                         start = end;
                 }
         }
 
         *released = num_released;
+        *remapped = num_remapped;
 
-        pr_info("Set %ld page(s) to 1-1 mapping\n", identity);
         pr_info("Released %ld page(s)\n", num_released);
 }
···
         struct xen_memory_map memmap;
         unsigned long max_pages;
         unsigned long extra_pages = 0;
+        unsigned long remapped_pages;
         int i;
         int op;
···
          * underlying RAM.
          */
         xen_set_identity_and_remap(map, memmap.nr_entries, max_pfn,
-                                   &xen_released_pages);
+                                   &xen_released_pages, &remapped_pages);
 
         extra_pages += xen_released_pages;
+        extra_pages += remapped_pages;
 
         /*
          * Clamp the amount of extra memory to a EXTRA_MEM_RATIO
···
 }
 EXPORT_SYMBOL_GPL(blk_queue_bypass_end);
 
+void blk_set_queue_dying(struct request_queue *q)
+{
+        queue_flag_set_unlocked(QUEUE_FLAG_DYING, q);
+
+        if (q->mq_ops)
+                blk_mq_wake_waiters(q);
+        else {
+                struct request_list *rl;
+
+                blk_queue_for_each_rl(rl, q) {
+                        if (rl->rq_pool) {
+                                wake_up(&rl->wait[BLK_RW_SYNC]);
+                                wake_up(&rl->wait[BLK_RW_ASYNC]);
+                        }
+                }
+        }
+}
+EXPORT_SYMBOL_GPL(blk_set_queue_dying);
+
 /**
  * blk_cleanup_queue - shutdown a request queue
  * @q: request queue to shutdown
···
 
         /* mark @q DYING, no new request or merges will be allowed afterwards */
         mutex_lock(&q->sysfs_lock);
-        queue_flag_set_unlocked(QUEUE_FLAG_DYING, q);
+        blk_set_queue_dying(q);
         spin_lock_irq(lock);
 
         /*
+10-4
block/blk-mq-tag.c
···
 }
 
 /*
- * Wakeup all potentially sleeping on normal (non-reserved) tags
+ * Wakeup all potentially sleeping on tags
  */
-static void blk_mq_tag_wakeup_all(struct blk_mq_tags *tags)
+void blk_mq_tag_wakeup_all(struct blk_mq_tags *tags, bool include_reserve)
 {
         struct blk_mq_bitmap_tags *bt;
         int i, wake_index;
···
                         wake_up(&bs->wait);
 
                 wake_index = bt_index_inc(wake_index);
+        }
+
+        if (include_reserve) {
+                bt = &tags->breserved_tags;
+                if (waitqueue_active(&bt->bs[0].wait))
+                        wake_up(&bt->bs[0].wait);
         }
 }
···
 
         atomic_dec(&tags->active_queues);
 
-        blk_mq_tag_wakeup_all(tags);
+        blk_mq_tag_wakeup_all(tags, false);
 }
 
 /*
···
          * static and should never need resizing.
          */
         bt_update_count(&tags->bitmap_tags, tdepth);
-        blk_mq_tag_wakeup_all(tags);
+        blk_mq_tag_wakeup_all(tags, false);
         return 0;
 }
+1
block/blk-mq-tag.h
···
 extern ssize_t blk_mq_tag_sysfs_show(struct blk_mq_tags *tags, char *page);
 extern void blk_mq_tag_init_last_tag(struct blk_mq_tags *tags, unsigned int *last_tag);
 extern int blk_mq_tag_update_depth(struct blk_mq_tags *tags, unsigned int depth);
+extern void blk_mq_tag_wakeup_all(struct blk_mq_tags *tags, bool);
 
 enum {
         BLK_MQ_TAG_CACHE_MIN    = 1,
+69-6
block/blk-mq.c
···
         wake_up_all(&q->mq_freeze_wq);
 }
 
-static void blk_mq_freeze_queue_start(struct request_queue *q)
+void blk_mq_freeze_queue_start(struct request_queue *q)
 {
         bool freeze;
···
                 blk_mq_run_queues(q, false);
         }
 }
+EXPORT_SYMBOL_GPL(blk_mq_freeze_queue_start);
 
 static void blk_mq_freeze_queue_wait(struct request_queue *q)
 {
···
         blk_mq_freeze_queue_wait(q);
 }
 
-static void blk_mq_unfreeze_queue(struct request_queue *q)
+void blk_mq_unfreeze_queue(struct request_queue *q)
 {
         bool wake;
···
                 percpu_ref_reinit(&q->mq_usage_counter);
                 wake_up_all(&q->mq_freeze_wq);
         }
+}
+EXPORT_SYMBOL_GPL(blk_mq_unfreeze_queue);
+
+void blk_mq_wake_waiters(struct request_queue *q)
+{
+        struct blk_mq_hw_ctx *hctx;
+        unsigned int i;
+
+        queue_for_each_hw_ctx(q, hctx, i)
+                if (blk_mq_hw_queue_mapped(hctx))
+                        blk_mq_tag_wakeup_all(hctx->tags, true);
+
+        /*
+         * If we are called because the queue has now been marked as
+         * dying, we need to ensure that processes currently waiting on
+         * the queue are notified as well.
+         */
+        wake_up_all(&q->mq_freeze_wq);
 }
 
 bool blk_mq_can_queue(struct blk_mq_hw_ctx *hctx)
···
                 ctx = alloc_data.ctx;
         }
         blk_mq_put_ctx(ctx);
-        if (!rq)
+        if (!rq) {
+                blk_mq_queue_exit(q);
                 return ERR_PTR(-EWOULDBLOCK);
+        }
         return rq;
 }
 EXPORT_SYMBOL(blk_mq_alloc_request);
···
 }
 EXPORT_SYMBOL(blk_mq_complete_request);
 
+int blk_mq_request_started(struct request *rq)
+{
+        return test_bit(REQ_ATOM_STARTED, &rq->atomic_flags);
+}
+EXPORT_SYMBOL_GPL(blk_mq_request_started);
+
 void blk_mq_start_request(struct request *rq)
 {
         struct request_queue *q = rq->q;
···
 }
 EXPORT_SYMBOL(blk_mq_add_to_requeue_list);
 
+void blk_mq_cancel_requeue_work(struct request_queue *q)
+{
+        cancel_work_sync(&q->requeue_work);
+}
+EXPORT_SYMBOL_GPL(blk_mq_cancel_requeue_work);
+
 void blk_mq_kick_requeue_list(struct request_queue *q)
 {
         kblockd_schedule_work(&q->requeue_work);
 }
 EXPORT_SYMBOL(blk_mq_kick_requeue_list);
+
+void blk_mq_abort_requeue_list(struct request_queue *q)
+{
+        unsigned long flags;
+        LIST_HEAD(rq_list);
+
+        spin_lock_irqsave(&q->requeue_lock, flags);
+        list_splice_init(&q->requeue_list, &rq_list);
+        spin_unlock_irqrestore(&q->requeue_lock, flags);
+
+        while (!list_empty(&rq_list)) {
+                struct request *rq;
+
+                rq = list_first_entry(&rq_list, struct request, queuelist);
+                list_del_init(&rq->queuelist);
+                rq->errors = -EIO;
+                blk_mq_end_request(rq, rq->errors);
+        }
+}
+EXPORT_SYMBOL(blk_mq_abort_requeue_list);
 
 static inline bool is_flush_request(struct request *rq,
                 struct blk_flush_queue *fq, unsigned int tag)
···
                 break;
         }
 }
-
+
 static void blk_mq_check_expired(struct blk_mq_hw_ctx *hctx,
                 struct request *rq, void *priv, bool reserved)
 {
         struct blk_mq_timeout_data *data = priv;
 
-        if (!test_bit(REQ_ATOM_STARTED, &rq->atomic_flags))
+        if (!test_bit(REQ_ATOM_STARTED, &rq->atomic_flags)) {
+                /*
+                 * If a request wasn't started before the queue was
+                 * marked dying, kill it here or it'll go unnoticed.
+                 */
+                if (unlikely(blk_queue_dying(rq->q))) {
+                        rq->errors = -EIO;
+                        blk_mq_complete_request(rq);
+                }
+                return;
+        }
+        if (rq->cmd_flags & REQ_NO_TIMEOUT)
                 return;
 
         if (time_after_eq(jiffies, rq->deadline)) {
···
         hctx->queue = q;
         hctx->queue_num = hctx_idx;
         hctx->flags = set->flags;
-        hctx->cmd_size = set->cmd_size;
 
         blk_mq_init_cpu_notifier(&hctx->cpu_notifier,
                                         blk_mq_hctx_notify, hctx);
+1
block/blk-mq.h
···
 void blk_mq_clone_flush_request(struct request *flush_rq,
                 struct request *orig_rq);
 int blk_mq_update_nr_requests(struct request_queue *q, unsigned int nr);
+void blk_mq_wake_waiters(struct request_queue *q);
 
 /*
  * CPU hotplug helpers
+3
block/blk-timeout.c
···
         struct request_queue *q = req->q;
         unsigned long expiry;
 
+        if (req->cmd_flags & REQ_NO_TIMEOUT)
+                return;
+
         /* blk-mq has its own handler, so we don't need ->rq_timed_out_fn */
         if (!q->mq_ops && !q->rq_timed_out_fn)
                 return;
+4-2
drivers/Makefile
···
 obj-y                           += tty/
 obj-y                           += char/
 
-# gpu/ comes after char for AGP vs DRM startup
+# iommu/ comes before gpu as gpu are using iommu controllers
+obj-$(CONFIG_IOMMU_SUPPORT)     += iommu/
+
+# gpu/ comes after char for AGP vs DRM startup and after iommu
 obj-y                           += gpu/
 
 obj-$(CONFIG_CONNECTOR)         += connector/
···
 
 obj-$(CONFIG_MAILBOX)           += mailbox/
 obj-$(CONFIG_HWSPINLOCK)        += hwspinlock/
-obj-$(CONFIG_IOMMU_SUPPORT)     += iommu/
 obj-$(CONFIG_REMOTEPROC)        += remoteproc/
 obj-$(CONFIG_RPMSG)             += rpmsg/
+14-11
drivers/acpi/acpi_processor.c
···
         acpi_status status;
         int ret;
 
-        if (pr->apic_id == -1)
+        if (pr->phys_id == -1)
                 return -ENODEV;
 
         status = acpi_evaluate_integer(pr->handle, "_STA", NULL, &sta);
···
         cpu_maps_update_begin();
         cpu_hotplug_begin();
 
-        ret = acpi_map_lsapic(pr->handle, pr->apic_id, &pr->id);
+        ret = acpi_map_cpu(pr->handle, pr->phys_id, &pr->id);
         if (ret)
                 goto out;
 
         ret = arch_register_cpu(pr->id);
         if (ret) {
-                acpi_unmap_lsapic(pr->id);
+                acpi_unmap_cpu(pr->id);
                 goto out;
         }
···
         union acpi_object object = { 0 };
         struct acpi_buffer buffer = { sizeof(union acpi_object), &object };
         struct acpi_processor *pr = acpi_driver_data(device);
-        int apic_id, cpu_index, device_declaration = 0;
+        int phys_id, cpu_index, device_declaration = 0;
         acpi_status status = AE_OK;
         static int cpu0_initialized;
         unsigned long long value;
···
                 pr->acpi_id = value;
         }
 
-        apic_id = acpi_get_apicid(pr->handle, device_declaration, pr->acpi_id);
-        if (apic_id < 0)
-                acpi_handle_debug(pr->handle, "failed to get CPU APIC ID.\n");
-        pr->apic_id = apic_id;
+        phys_id = acpi_get_phys_id(pr->handle, device_declaration, pr->acpi_id);
+        if (phys_id < 0)
+                acpi_handle_debug(pr->handle, "failed to get CPU physical ID.\n");
+        pr->phys_id = phys_id;
 
-        cpu_index = acpi_map_cpuid(pr->apic_id, pr->acpi_id);
+        cpu_index = acpi_map_cpuid(pr->phys_id, pr->acpi_id);
         if (!cpu0_initialized && !acpi_has_cpu_in_madt()) {
                 cpu0_initialized = 1;
-                /* Handle UP system running SMP kernel, with no LAPIC in MADT */
+                /*
+                 * Handle UP system running SMP kernel, with no CPU
+                 * entry in MADT
+                 */
                 if ((cpu_index == -1) && (num_online_cpus() == 1))
                         cpu_index = 0;
         }
···
 
         /* Remove the CPU. */
         arch_unregister_cpu(pr->id);
-        acpi_unmap_lsapic(pr->id);
+        acpi_unmap_cpu(pr->id);
 
         cpu_hotplug_done();
         cpu_maps_update_done();
+1-1
drivers/acpi/device_pm.c
···
 
         device->power.state = ACPI_STATE_UNKNOWN;
         if (!acpi_device_is_present(device))
-                return 0;
+                return -ENXIO;
 
         result = acpi_device_get_power(device, &state);
         if (result)
+2-1
drivers/base/power/domain.c
···
  * Returns a valid pointer to struct generic_pm_domain on success or ERR_PTR()
  * on failure.
  */
-static struct generic_pm_domain *of_genpd_get_from_provider(
+struct generic_pm_domain *of_genpd_get_from_provider(
                                         struct of_phandle_args *genpdspec)
 {
         struct generic_pm_domain *genpd = ERR_PTR(-ENOENT);
···
 
         return genpd;
 }
+EXPORT_SYMBOL_GPL(of_genpd_get_from_provider);
 
 /**
  * genpd_dev_pm_detach - Detach a device from its PM domain.
+31-8
drivers/base/power/opp.c
···
 /* Lock to allow exclusive modification to the device and opp lists */
 static DEFINE_MUTEX(dev_opp_list_lock);
 
+#define opp_rcu_lockdep_assert()                                        \
+do {                                                                    \
+        rcu_lockdep_assert(rcu_read_lock_held() ||                      \
+                                lockdep_is_held(&dev_opp_list_lock),    \
+                           "Missing rcu_read_lock() or "                \
+                           "dev_opp_list_lock protection");             \
+} while (0)
+
 /**
  * find_device_opp() - find device_opp struct using device pointer
  * @dev:        device pointer used to lookup device OPPs
···
  * This function returns the number of available opps if there are any,
  * else returns 0 if none or the corresponding error value.
  *
- * Locking: This function must be called under rcu_read_lock(). This function
- * internally references two RCU protected structures: device_opp and opp which
- * are safe as long as we are under a common RCU locked section.
+ * Locking: This function takes rcu_read_lock().
  */
 int dev_pm_opp_get_opp_count(struct device *dev)
 {
···
         struct dev_pm_opp *temp_opp;
         int count = 0;
 
+        rcu_read_lock();
+
         dev_opp = find_device_opp(dev);
         if (IS_ERR(dev_opp)) {
-                int r = PTR_ERR(dev_opp);
-                dev_err(dev, "%s: device OPP not found (%d)\n", __func__, r);
-                return r;
+                count = PTR_ERR(dev_opp);
+                dev_err(dev, "%s: device OPP not found (%d)\n",
+                        __func__, count);
+                goto out_unlock;
         }
 
         list_for_each_entry_rcu(temp_opp, &dev_opp->opp_list, node) {
···
                 count++;
         }
 
+out_unlock:
+        rcu_read_unlock();
         return count;
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_get_opp_count);
···
 {
         struct device_opp *dev_opp;
         struct dev_pm_opp *temp_opp, *opp = ERR_PTR(-ERANGE);
+
+        opp_rcu_lockdep_assert();
 
         dev_opp = find_device_opp(dev);
         if (IS_ERR(dev_opp)) {
···
 {
         struct device_opp *dev_opp;
         struct dev_pm_opp *temp_opp, *opp = ERR_PTR(-ERANGE);
+
+        opp_rcu_lockdep_assert();
 
         if (!dev || !freq) {
                 dev_err(dev, "%s: Invalid argument freq=%p\n", __func__, freq);
···
 {
         struct device_opp *dev_opp;
         struct dev_pm_opp *temp_opp, *opp = ERR_PTR(-ERANGE);
+
+        opp_rcu_lockdep_assert();
 
         if (!dev || !freq) {
                 dev_err(dev, "%s: Invalid argument freq=%p\n", __func__, freq);
···
 
         /* Check for existing list for 'dev' */
         dev_opp = find_device_opp(dev);
-        if (WARN(IS_ERR(dev_opp), "%s: dev_opp: %ld\n", dev_name(dev),
-                 PTR_ERR(dev_opp)))
+        if (IS_ERR(dev_opp)) {
+                int error = PTR_ERR(dev_opp);
+                if (error != -ENODEV)
+                        WARN(1, "%s: dev_opp: %d\n",
+                             IS_ERR_OR_NULL(dev) ?
+                                        "Invalid device" : dev_name(dev),
+                             error);
                 return;
+        }
 
         /* Hold our list modification lock here */
         mutex_lock(&dev_opp_list_lock);
+1-1
drivers/block/null_blk.c
···
                 goto out_cleanup_queues;
 
         nullb->q = blk_mq_init_queue(&nullb->tag_set);
-        if (!nullb->q) {
+        if (IS_ERR(nullb->q)) {
                 rv = -ENOMEM;
                 goto out_cleanup_tags;
         }
+126-49
drivers/block/nvme-core.c
···
         cmd->fn = handler;
         cmd->ctx = ctx;
         cmd->aborted = 0;
+        blk_mq_start_request(blk_mq_rq_from_pdu(cmd));
 }
 
 /* Special values must be less than 0x1000 */
···
         if (unlikely(status)) {
                 if (!(status & NVME_SC_DNR || blk_noretry_request(req))
                     && (jiffies - req->start_time) < req->timeout) {
+                        unsigned long flags;
+
                         blk_mq_requeue_request(req);
-                        blk_mq_kick_requeue_list(req->q);
+                        spin_lock_irqsave(req->q->queue_lock, flags);
+                        if (!blk_queue_stopped(req->q))
+                                blk_mq_kick_requeue_list(req->q);
+                        spin_unlock_irqrestore(req->q->queue_lock, flags);
                         return;
                 }
                 req->errors = nvme_error_status(status);
···
                 }
         }
 
-        blk_mq_start_request(req);
-
         nvme_set_info(cmd, iod, req_completion);
         spin_lock_irq(&nvmeq->q_lock);
         if (req->cmd_flags & REQ_DISCARD)
···
         if (IS_ERR(req))
                 return PTR_ERR(req);
 
+        req->cmd_flags |= REQ_NO_TIMEOUT;
         cmd_info = blk_mq_rq_to_pdu(req);
         nvme_set_info(cmd_info, req, async_req_completion);
···
         struct nvme_command cmd;
 
         if (!nvmeq->qid || cmd_rq->aborted) {
+                unsigned long flags;
+
+                spin_lock_irqsave(&dev_list_lock, flags);
                 if (work_busy(&dev->reset_work))
-                        return;
+                        goto out;
                 list_del_init(&dev->node);
                 dev_warn(&dev->pci_dev->dev,
                         "I/O %d QID %d timeout, reset controller\n",
                         req->tag, nvmeq->qid);
                 dev->reset_workfn = nvme_reset_failed_dev;
                 queue_work(nvme_workq, &dev->reset_work);
+ out:
+                spin_unlock_irqrestore(&dev_list_lock, flags);
                 return;
         }
···
         void *ctx;
         nvme_completion_fn fn;
         struct nvme_cmd_info *cmd;
-        static struct nvme_completion cqe = {
-                .status = cpu_to_le16(NVME_SC_ABORT_REQ << 1),
-        };
+        struct nvme_completion cqe;
+
+        if (!blk_mq_request_started(req))
+                return;
 
         cmd = blk_mq_rq_to_pdu(req);
 
         if (cmd->ctx == CMD_CTX_CANCELLED)
                 return;
+
+        if (blk_queue_dying(req->q))
+                cqe.status = cpu_to_le16((NVME_SC_ABORT_REQ | NVME_SC_DNR) << 1);
+        else
+                cqe.status = cpu_to_le16(NVME_SC_ABORT_REQ << 1);
+
 
         dev_warn(nvmeq->q_dmadev, "Cancelling I/O %d QID %d\n",
                                                 req->tag, nvmeq->qid);
···
         struct nvme_cmd_info *cmd = blk_mq_rq_to_pdu(req);
         struct nvme_queue *nvmeq = cmd->nvmeq;
 
-        dev_warn(nvmeq->q_dmadev, "Timeout I/O %d QID %d\n", req->tag,
-                                                        nvmeq->qid);
-        if (nvmeq->dev->initialized)
-                nvme_abort_req(req);
-
         /*
          * The aborted req will be completed on receiving the abort req.
          * We enable the timer again. If hit twice, it'll cause a device reset,
          * as the device then is in a faulty state.
          */
-        return BLK_EH_RESET_TIMER;
+        int ret = BLK_EH_RESET_TIMER;
+
+        dev_warn(nvmeq->q_dmadev, "Timeout I/O %d QID %d\n", req->tag,
+                                                        nvmeq->qid);
+
+        spin_lock_irq(&nvmeq->q_lock);
+        if (!nvmeq->dev->initialized) {
+                /*
+                 * Force cancelled command frees the request, which requires we
+                 * return BLK_EH_NOT_HANDLED.
+                 */
+                nvme_cancel_queue_ios(nvmeq->hctx, req, nvmeq, reserved);
+                ret = BLK_EH_NOT_HANDLED;
+        } else
+                nvme_abort_req(req);
+        spin_unlock_irq(&nvmeq->q_lock);
+
+        return ret;
 }
 
 static void nvme_free_queue(struct nvme_queue *nvmeq)
···
  */
 static int nvme_suspend_queue(struct nvme_queue *nvmeq)
 {
-        int vector = nvmeq->dev->entry[nvmeq->cq_vector].vector;
+        int vector;
 
         spin_lock_irq(&nvmeq->q_lock);
+        if (nvmeq->cq_vector == -1) {
+                spin_unlock_irq(&nvmeq->q_lock);
+                return 1;
+        }
+        vector = nvmeq->dev->entry[nvmeq->cq_vector].vector;
         nvmeq->dev->online_queues--;
+        nvmeq->cq_vector = -1;
         spin_unlock_irq(&nvmeq->q_lock);
 
         irq_set_affinity_hint(vector, NULL);
···
                 adapter_delete_sq(dev, qid);
                 adapter_delete_cq(dev, qid);
         }
+        if (!qid && dev->admin_q)
+                blk_mq_freeze_queue_start(dev->admin_q);
         nvme_clear_queue(nvmeq);
 }
 
 static struct nvme_queue *nvme_alloc_queue(struct nvme_dev *dev, int qid,
-                                                        int depth, int vector)
+                                                        int depth)
 {
         struct device *dmadev = &dev->pci_dev->dev;
         struct nvme_queue *nvmeq = kzalloc(sizeof(*nvmeq), GFP_KERNEL);
···
         nvmeq->cq_phase = 1;
         nvmeq->q_db = &dev->dbs[qid * 2 * dev->db_stride];
         nvmeq->q_depth = depth;
-        nvmeq->cq_vector = vector;
         nvmeq->qid = qid;
         dev->queue_count++;
         dev->queues[qid] = nvmeq;
···
         struct nvme_dev *dev = nvmeq->dev;
         int result;
 
+        nvmeq->cq_vector = qid - 1;
         result = adapter_alloc_cq(dev, qid, nvmeq);
         if (result < 0)
                 return result;
···
         .timeout        = nvme_timeout,
 };
 
+static void nvme_dev_remove_admin(struct nvme_dev *dev)
+{
+        if (dev->admin_q && !blk_queue_dying(dev->admin_q)) {
+                blk_cleanup_queue(dev->admin_q);
+                blk_mq_free_tag_set(&dev->admin_tagset);
+        }
+}
+
 static int nvme_alloc_admin_tags(struct nvme_dev *dev)
 {
         if (!dev->admin_q) {
···
                         return -ENOMEM;
 
                 dev->admin_q = blk_mq_init_queue(&dev->admin_tagset);
-                if (!dev->admin_q) {
+                if (IS_ERR(dev->admin_q)) {
                         blk_mq_free_tag_set(&dev->admin_tagset);
                         return -ENOMEM;
                 }
-        }
+                if (!blk_get_queue(dev->admin_q)) {
+                        nvme_dev_remove_admin(dev);
+                        return -ENODEV;
+                }
+        } else
+                blk_mq_unfreeze_queue(dev->admin_q);
 
         return 0;
-}
-
-static void nvme_free_admin_tags(struct nvme_dev *dev)
-{
-        if (dev->admin_q)
-                blk_mq_free_tag_set(&dev->admin_tagset);
 }
 
 static int nvme_configure_admin_queue(struct nvme_dev *dev)
···
 
         nvmeq = dev->queues[0];
         if (!nvmeq) {
-                nvmeq = nvme_alloc_queue(dev, 0, NVME_AQ_DEPTH, 0);
+                nvmeq = nvme_alloc_queue(dev, 0, NVME_AQ_DEPTH);
                 if (!nvmeq)
                         return -ENOMEM;
         }
···
         if (result)
                 goto free_nvmeq;
 
-        result = nvme_alloc_admin_tags(dev);
+        nvmeq->cq_vector = 0;
+        result = queue_request_irq(dev, nvmeq, nvmeq->irqname);
         if (result)
                 goto free_nvmeq;
 
-        result = queue_request_irq(dev, nvmeq, nvmeq->irqname);
-        if (result)
-                goto free_tags;
-
         return result;
 
- free_tags:
-        nvme_free_admin_tags(dev);
  free_nvmeq:
         nvme_free_queues(dev, 0);
         return result;
···
         unsigned i;
 
         for (i = dev->queue_count; i <= dev->max_qid; i++)
-                if (!nvme_alloc_queue(dev, i, dev->q_depth, i - 1))
+                if (!nvme_alloc_queue(dev, i, dev->q_depth))
                         break;
 
         for (i = dev->online_queues; i <= dev->queue_count - 1; i++)
···
                         break;
                 if (!schedule_timeout(ADMIN_TIMEOUT) ||
                                         fatal_signal_pending(current)) {
+                        /*
+                         * Disable the controller first since we can't trust it
+                         * at this point, but leave the admin queue enabled
+                         * until all queue deletion requests are flushed.
+                         * FIXME: This may take a while if there are more h/w
+                         * queues than admin tags.
+                         */
                         set_current_state(TASK_RUNNING);
-
                         nvme_disable_ctrl(dev, readq(&dev->bar->cap));
-                        nvme_disable_queue(dev, 0);
-
-                        send_sig(SIGKILL, dq->worker->task, 1);
+                        nvme_clear_queue(dev->queues[0]);
                         flush_kthread_worker(dq->worker);
+                        nvme_disable_queue(dev, 0);
                         return;
                 }
         }
···
 {
         struct nvme_queue *nvmeq = container_of(work, struct nvme_queue,
                                                         cmdinfo.work);
-        allow_signal(SIGKILL);
         if (nvme_delete_sq(nvmeq))
                 nvme_del_queue_end(nvmeq);
 }
···
                 kthread_stop(tmp);
 }
 
+static void nvme_freeze_queues(struct nvme_dev *dev)
+{
+        struct nvme_ns *ns;
+
+        list_for_each_entry(ns, &dev->namespaces, list) {
+                blk_mq_freeze_queue_start(ns->queue);
+
+                spin_lock(ns->queue->queue_lock);
+                queue_flag_set(QUEUE_FLAG_STOPPED, ns->queue);
+                spin_unlock(ns->queue->queue_lock);
+
+                blk_mq_cancel_requeue_work(ns->queue);
+                blk_mq_stop_hw_queues(ns->queue);
+        }
+}
+
+static void nvme_unfreeze_queues(struct nvme_dev *dev)
+{
+        struct nvme_ns *ns;
+
+        list_for_each_entry(ns, &dev->namespaces, list) {
+                queue_flag_clear_unlocked(QUEUE_FLAG_STOPPED, ns->queue);
+                blk_mq_unfreeze_queue(ns->queue);
+                blk_mq_start_stopped_hw_queues(ns->queue, true);
+                blk_mq_kick_requeue_list(ns->queue);
+        }
+}
+
 static void nvme_dev_shutdown(struct nvme_dev *dev)
 {
         int i;
···
         dev->initialized = 0;
         nvme_dev_list_remove(dev);
 
-        if (dev->bar)
+        if (dev->bar) {
+                nvme_freeze_queues(dev);
                 csts = readl(&dev->bar->csts);
+        }
         if (csts & NVME_CSTS_CFS || !(csts & NVME_CSTS_RDY)) {
                 for (i = dev->queue_count - 1; i >= 0; i--) {
                         struct nvme_queue *nvmeq = dev->queues[i];
···
         nvme_dev_unmap(dev);
 }
 
-static void nvme_dev_remove_admin(struct nvme_dev *dev)
-{
-        if (dev->admin_q && !blk_queue_dying(dev->admin_q))
-                blk_cleanup_queue(dev->admin_q);
-}
-
 static void nvme_dev_remove(struct nvme_dev *dev)
 {
         struct nvme_ns *ns;
···
         list_for_each_entry(ns, &dev->namespaces, list) {
                 if (ns->disk->flags & GENHD_FL_UP)
                         del_gendisk(ns->disk);
-                if (!blk_queue_dying(ns->queue))
+                if (!blk_queue_dying(ns->queue)) {
+                        blk_mq_abort_requeue_list(ns->queue);
                         blk_cleanup_queue(ns->queue);
+                }
         }
 }
···
         nvme_free_namespaces(dev);
         nvme_release_instance(dev);
         blk_mq_free_tag_set(&dev->tagset);
+        blk_put_queue(dev->admin_q);
         kfree(dev->queues);
         kfree(dev->entry);
         kfree(dev);
···
         }
 
         nvme_init_queue(dev->queues[0], 0);
+        result = nvme_alloc_admin_tags(dev);
+        if (result)
+                goto disable;
 
         result = nvme_setup_io_queues(dev);
         if (result)
-                goto disable;
+                goto free_tags;
 
         nvme_set_irq_hints(dev);
 
         return result;
 
+ free_tags:
+        nvme_dev_remove_admin(dev);
  disable:
         nvme_disable_queue(dev, 0);
         nvme_dev_list_remove(dev);
···
                 dev->reset_workfn = nvme_remove_disks;
                 queue_work(nvme_workq, &dev->reset_work);
                 spin_unlock(&dev_list_lock);
+        } else {
+                nvme_unfreeze_queues(dev);
+                nvme_set_irq_hints(dev);
         }
         dev->initialized = 1;
         return 0;
···
         pci_set_drvdata(pdev, NULL);
         flush_work(&dev->reset_work);
         misc_deregister(&dev->miscdev);
-        nvme_dev_remove(dev);
         nvme_dev_shutdown(dev);
+        nvme_dev_remove(dev);
         nvme_dev_remove_admin(dev);
         nvme_free_queues(dev, 0);
-        nvme_free_admin_tags(dev);
         nvme_release_prp_pools(dev);
         kref_put(&dev->kref, nvme_free_dev);
 }
drivers/block/virtio_blk.c (+1 -1)
···
 		goto out_put_disk;
 
 	q = vblk->disk->queue = blk_mq_init_queue(&vblk->tag_set);
-	if (!q) {
+	if (IS_ERR(q)) {
 		err = -ENOMEM;
 		goto out_free_tags;
 	}
drivers/bus/arm-cci.c (+3)
···
 	if (!np)
 		return -ENODEV;
 
+	if (!of_device_is_available(np))
+		return -ENODEV;
+
 	cci_config = of_match_node(arm_cci_matches, np)->data;
 	if (!cci_config)
 		return -ENODEV;
···
 
 #define to_clk_sam9x5_slow(hw) container_of(hw, struct clk_sam9x5_slow, hw)
 
+static struct clk *slow_clk;
+
 static int clk_slow_osc_prepare(struct clk_hw *hw)
 {
···
 	clk = clk_register(NULL, &slowck->hw);
 	if (IS_ERR(clk))
 		kfree(slowck);
+	else
+		slow_clk = clk;
 
 	return clk;
 }
···
 	clk = clk_register(NULL, &slowck->hw);
 	if (IS_ERR(clk))
 		kfree(slowck);
+	else
+		slow_clk = clk;
 
 	return clk;
 }
···
 
 	of_clk_add_provider(np, of_clk_src_simple_get, clk);
 }
+
+/*
+ * FIXME: All slow clk users are not properly claiming it (get + prepare +
+ * enable) before using it.
+ * If all users properly claiming this clock decide that they don't need it
+ * anymore (or are removed), it is disabled while faulty users are still
+ * requiring it, and the system hangs.
+ * Prevent this clock from being disabled until all users are properly
+ * requesting it.
+ * Once this is done we should remove this function and the slow_clk variable.
+ */
+static int __init of_at91_clk_slow_retain(void)
+{
+	if (!slow_clk)
+		return 0;
+
+	__clk_get(slow_clk);
+	clk_prepare_enable(slow_clk);
+
+	return 0;
+}
+arch_initcall(of_at91_clk_slow_retain);
···
 
 	/* Register the CP15 based counter if we have one */
 	if (type & ARCH_CP15_TIMER) {
-		if (arch_timer_use_virtual)
+		if (IS_ENABLED(CONFIG_ARM64) || arch_timer_use_virtual)
 			arch_timer_read_counter = arch_counter_get_cntvct;
 		else
 			arch_timer_read_counter = arch_counter_get_cntpct;
drivers/cpufreq/cpufreq-dt.c (+11)
···
 	/* OPPs might be populated at runtime, don't check for error here */
 	of_init_opp_table(cpu_dev);
 
+	/*
+	 * But we need OPP table to function so if it is not there let's
+	 * give platform code chance to provide it for us.
+	 */
+	ret = dev_pm_opp_get_opp_count(cpu_dev);
+	if (ret <= 0) {
+		pr_debug("OPP table is not ready, deferring probe\n");
+		ret = -EPROBE_DEFER;
+		goto out_free_opp;
+	}
+
 	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
 	if (!priv) {
 		ret = -ENOMEM;
drivers/cpufreq/cpufreq.c (+6)
···
 	/* Don't start any governor operations if we are entering suspend */
 	if (cpufreq_suspended)
 		return 0;
+	/*
+	 * Governor might not be initiated here if ACPI _PPC changed
+	 * notification happened, so check it.
+	 */
+	if (!policy->governor)
+		return -EINVAL;
 
 	if (policy->governor->max_transition_latency &&
 	    policy->cpuinfo.transition_latency >
···
 	 * power state and occurrence of the wakeup event.
 	 *
 	 * If the entered idle state didn't support residency measurements,
-	 * we are basically lost in the dark how much time passed.
-	 * As a compromise, assume we slept for the whole expected time.
+	 * we use them anyway if they are short, and if long,
+	 * truncate to the whole expected time.
 	 *
 	 * Any measured amount of time will include the exit latency.
 	 * Since we are interested in when the wakeup begun, not when it
···
 	 * the measured amount of time is less than the exit latency,
 	 * assume the state was never reached and the exit latency is 0.
 	 */
-	if (unlikely(target->flags & CPUIDLE_FLAG_TIME_INVALID)) {
-		/* Use timer value as is */
+
+	/* measured value */
+	measured_us = cpuidle_get_last_residency(dev);
+
+	/* Deduct exit latency */
+	if (measured_us > target->exit_latency)
+		measured_us -= target->exit_latency;
+
+	/* Make sure our coefficients do not exceed unity */
+	if (measured_us > data->next_timer_us)
 		measured_us = data->next_timer_us;
-
-	} else {
-		/* Use measured value */
-		measured_us = cpuidle_get_last_residency(dev);
-
-		/* Deduct exit latency */
-		if (measured_us > target->exit_latency)
-			measured_us -= target->exit_latency;
-
-		/* Make sure our coefficients do not exceed unity */
-		if (measured_us > data->next_timer_us)
-			measured_us = data->next_timer_us;
-	}
 
 	/* Update our correction ratio */
 	new_factor = data->correction_factor[data->bucket];
···
 	bool is_32bit_user_mode;
 };
 
+/**
+ * Ioctl function type.
+ *
+ * \param filep pointer to file structure.
+ * \param p amdkfd process pointer.
+ * \param data pointer to arg that was copied from user.
+ */
+typedef int amdkfd_ioctl_t(struct file *filep, struct kfd_process *p,
+			   void *data);
+
+struct amdkfd_ioctl_desc {
+	unsigned int cmd;
+	int flags;
+	amdkfd_ioctl_t *func;
+	unsigned int cmd_drv;
+	const char *name;
+};
+
 void kfd_process_create_wq(void);
 void kfd_process_destroy_wq(void);
 struct kfd_process *kfd_create_process(const struct task_struct *);
drivers/gpu/drm/amd/amdkfd/kfd_topology.c (+1 -1)
···
 	uint32_t i = 0;
 
 	list_for_each_entry(dev, &topology_device_list, list) {
-		ret = kfd_build_sysfs_node_entry(dev, 0);
+		ret = kfd_build_sysfs_node_entry(dev, i);
 		if (ret < 0)
 			return ret;
 		i++;
···
 	if ((iir & flip_pending) == 0)
 		goto check_page_flip;
 
-	intel_prepare_page_flip(dev, plane);
-
 	/* We detect FlipDone by looking for the change in PendingFlip from '1'
 	 * to '0' on the following vblank, i.e. IIR has the Pendingflip
 	 * asserted following the MI_DISPLAY_FLIP, but ISR is deasserted, hence
···
 	if (I915_READ16(ISR) & flip_pending)
 		goto check_page_flip;
 
+	intel_prepare_page_flip(dev, plane);
 	intel_finish_page_flip(dev, pipe);
 	return true;
 
···
 	if ((iir & flip_pending) == 0)
 		goto check_page_flip;
 
-	intel_prepare_page_flip(dev, plane);
-
 	/* We detect FlipDone by looking for the change in PendingFlip from '1'
 	 * to '0' on the following vblank, i.e. IIR has the Pendingflip
 	 * asserted following the MI_DISPLAY_FLIP, but ISR is deasserted, hence
···
 	if (I915_READ(ISR) & flip_pending)
 		goto check_page_flip;
 
+	intel_prepare_page_flip(dev, plane);
 	intel_finish_page_flip(dev, pipe);
 	return true;
 
drivers/gpu/drm/i915/intel_display.c (+1 -7)
···
 	vga_put(dev->pdev, VGA_RSRC_LEGACY_IO);
 	udelay(300);
 
-	/*
-	 * Fujitsu-Siemens Lifebook S6010 (830) has problems resuming
-	 * from S3 without preserving (some of?) the other bits.
-	 */
-	I915_WRITE(vga_reg, dev_priv->bios_vgacntr | VGA_DISP_DISABLE);
+	I915_WRITE(vga_reg, VGA_DISP_DISABLE);
 	POSTING_READ(vga_reg);
 }
···
 
 	intel_shared_dpll_init(dev);
 
-	/* save the BIOS value before clobbering it */
-	dev_priv->bios_vgacntr = I915_READ(i915_vgacntrl_reg(dev));
 	/* Just disable it once at startup */
 	i915_disable_vga(dev);
 	intel_setup_outputs(dev);
···
 	 * so use the DMA API for them.
 	 */
 	if (!nv_device_is_cpu_coherent(device) &&
-	    ttm->caching_state == tt_uncached)
+	    ttm->caching_state == tt_uncached) {
 		ttm_dma_unpopulate(ttm_dma, dev->dev);
+		return;
+	}
 
 #if __OS_HAS_AGP
 	if (drm->agp.stat == ENABLED) {
drivers/gpu/drm/nouveau/nouveau_gem.c (+31 -6)
···
 nouveau_gem_object_del(struct drm_gem_object *gem)
 {
 	struct nouveau_bo *nvbo = nouveau_gem_object(gem);
+	struct nouveau_drm *drm = nouveau_bdev(nvbo->bo.bdev);
 	struct ttm_buffer_object *bo = &nvbo->bo;
+	struct device *dev = drm->dev->dev;
+	int ret;
+
+	ret = pm_runtime_get_sync(dev);
+	if (WARN_ON(ret < 0 && ret != -EACCES))
+		return;
 
 	if (gem->import_attach)
 		drm_prime_gem_destroy(gem, nvbo->bo.sg);
···
 	/* reset filp so nouveau_bo_del_ttm() can test for it */
 	gem->filp = NULL;
 	ttm_bo_unref(&bo);
+
+	pm_runtime_mark_last_busy(dev);
+	pm_runtime_put_autosuspend(dev);
 }
 
 int
···
 {
 	struct nouveau_cli *cli = nouveau_cli(file_priv);
 	struct nouveau_bo *nvbo = nouveau_gem_object(gem);
+	struct nouveau_drm *drm = nouveau_bdev(nvbo->bo.bdev);
 	struct nouveau_vma *vma;
+	struct device *dev = drm->dev->dev;
 	int ret;
 
 	if (!cli->vm)
···
 			goto out;
 		}
 
-		ret = nouveau_bo_vma_add(nvbo, cli->vm, vma);
-		if (ret) {
-			kfree(vma);
+		ret = pm_runtime_get_sync(dev);
+		if (ret < 0 && ret != -EACCES)
 			goto out;
-		}
+
+		ret = nouveau_bo_vma_add(nvbo, cli->vm, vma);
+		if (ret)
+			kfree(vma);
+
+		pm_runtime_mark_last_busy(dev);
+		pm_runtime_put_autosuspend(dev);
 	} else {
 		vma->refcount++;
 	}
···
 {
 	struct nouveau_cli *cli = nouveau_cli(file_priv);
 	struct nouveau_bo *nvbo = nouveau_gem_object(gem);
+	struct nouveau_drm *drm = nouveau_bdev(nvbo->bo.bdev);
+	struct device *dev = drm->dev->dev;
 	struct nouveau_vma *vma;
 	int ret;
 
···
 
 	vma = nouveau_bo_vma_find(nvbo, cli->vm);
 	if (vma) {
-		if (--vma->refcount == 0)
-			nouveau_gem_object_unmap(nvbo, vma);
+		if (--vma->refcount == 0) {
+			ret = pm_runtime_get_sync(dev);
+			if (!WARN_ON(ret < 0 && ret != -EACCES)) {
+				nouveau_gem_object_unmap(nvbo, vma);
+				pm_runtime_mark_last_busy(dev);
+				pm_runtime_put_autosuspend(dev);
+			}
+		}
 	}
 	ttm_bo_unreserve(&nvbo->bo);
 }
drivers/gpu/drm/radeon/atombios_crtc.c (+4 -4)
···
 		return pll;
 	}
 	/* otherwise, pick one of the plls */
-	if ((rdev->family == CHIP_KAVERI) ||
-	    (rdev->family == CHIP_KABINI) ||
+	if ((rdev->family == CHIP_KABINI) ||
 	    (rdev->family == CHIP_MULLINS)) {
-		/* KB/KV/ML has PPLL1 and PPLL2 */
+		/* KB/ML has PPLL1 and PPLL2 */
 		pll_in_use = radeon_get_pll_use_mask(crtc);
 		if (!(pll_in_use & (1 << ATOM_PPLL2)))
 			return ATOM_PPLL2;
···
 		DRM_ERROR("unable to allocate a PPLL\n");
 		return ATOM_PPLL_INVALID;
 	} else {
-		/* CI has PPLL0, PPLL1, and PPLL2 */
+		/* CI/KV has PPLL0, PPLL1, and PPLL2 */
 		pll_in_use = radeon_get_pll_use_mask(crtc);
 		if (!(pll_in_use & (1 << ATOM_PPLL2)))
 			return ATOM_PPLL2;
···
 	case ATOM_PPLL0:
 		/* disable the ppll */
 		if ((rdev->family == CHIP_ARUBA) ||
+		    (rdev->family == CHIP_KAVERI) ||
 		    (rdev->family == CHIP_BONAIRE) ||
 		    (rdev->family == CHIP_HAWAII))
 			atombios_crtc_program_pll(crtc, radeon_crtc->crtc_id, radeon_crtc->pll_id,
drivers/gpu/drm/radeon/atombios_dp.c (+4)
···
 	struct radeon_connector_atom_dig *dig_connector;
 	int dp_clock;
 
+	if ((mode->clock > 340000) &&
+	    (!radeon_connector_is_dp12_capable(connector)))
+		return MODE_CLOCK_HIGH;
+
 	if (!radeon_connector->con_priv)
 		return MODE_CLOCK_HIGH;
 	dig_connector = radeon_connector->con_priv;
···
 	case ad7998:
 		return i2c_smbus_write_word_swapped(st->client, AD7998_CONF_REG,
 			val);
-	default:
+	case ad7992:
+	case ad7993:
+	case ad7994:
 		return i2c_smbus_write_byte_data(st->client, AD7998_CONF_REG,
 			val);
+	default:
+		/* Will be written when doing a conversion */
+		st->config = val;
+		return 0;
 	}
 }
···
 	case ad7997:
 	case ad7998:
 		return i2c_smbus_read_word_swapped(st->client, AD7998_CONF_REG);
-	default:
+	case ad7992:
+	case ad7993:
+	case ad7994:
 		return i2c_smbus_read_byte_data(st->client, AD7998_CONF_REG);
+	default:
+		/* No readback support */
+		return st->config;
 	}
 }
 
drivers/iio/inkern.c (+3)
···
 	if (val2 == NULL)
 		val2 = &unused;
 
+	if (!iio_channel_has_info(chan->channel, info))
+		return -EINVAL;
+
 	if (chan->indio_dev->info->read_raw_multi) {
 		ret = chan->indio_dev->info->read_raw_multi(chan->indio_dev,
 					chan->channel, INDIO_MAX_RAW_ELEMENTS,
drivers/input/evdev.c (+44 -16)
···
 #include <linux/cdev.h>
 #include "input-compat.h"
 
+enum evdev_clock_type {
+	EV_CLK_REAL = 0,
+	EV_CLK_MONO,
+	EV_CLK_BOOT,
+	EV_CLK_MAX
+};
+
 struct evdev {
 	int open;
 	struct input_handle handle;
···
 	struct fasync_struct *fasync;
 	struct evdev *evdev;
 	struct list_head node;
-	int clkid;
+	int clk_type;
 	bool revoked;
 	unsigned int bufsize;
 	struct input_event buffer[];
 };
+
+static int evdev_set_clk_type(struct evdev_client *client, unsigned int clkid)
+{
+	switch (clkid) {
+
+	case CLOCK_REALTIME:
+		client->clk_type = EV_CLK_REAL;
+		break;
+	case CLOCK_MONOTONIC:
+		client->clk_type = EV_CLK_MONO;
+		break;
+	case CLOCK_BOOTTIME:
+		client->clk_type = EV_CLK_BOOT;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
 
 /* flush queued events of type @type, caller must hold client->buffer_lock */
 static void __evdev_flush_queue(struct evdev_client *client, unsigned int type)
···
 	struct input_event ev;
 	ktime_t time;
 
-	time = (client->clkid == CLOCK_MONOTONIC) ?
-		ktime_get() : ktime_get_real();
+	time = client->clk_type == EV_CLK_REAL ?
+			ktime_get_real() :
+			client->clk_type == EV_CLK_MONO ?
+				ktime_get() :
+				ktime_get_boottime();
 
 	ev.time = ktime_to_timeval(time);
 	ev.type = EV_SYN;
···
 
 static void evdev_pass_values(struct evdev_client *client,
 			const struct input_value *vals, unsigned int count,
-			ktime_t mono, ktime_t real)
+			ktime_t *ev_time)
 {
 	struct evdev *evdev = client->evdev;
 	const struct input_value *v;
···
 	if (client->revoked)
 		return;
 
-	event.time = ktime_to_timeval(client->clkid == CLOCK_MONOTONIC ?
-				      mono : real);
+	event.time = ktime_to_timeval(ev_time[client->clk_type]);
 
 	/* Interrupts are disabled, just acquire the lock. */
 	spin_lock(&client->buffer_lock);
···
 {
 	struct evdev *evdev = handle->private;
 	struct evdev_client *client;
-	ktime_t time_mono, time_real;
+	ktime_t ev_time[EV_CLK_MAX];
 
-	time_mono = ktime_get();
-	time_real = ktime_mono_to_real(time_mono);
+	ev_time[EV_CLK_MONO] = ktime_get();
+	ev_time[EV_CLK_REAL] = ktime_mono_to_real(ev_time[EV_CLK_MONO]);
+	ev_time[EV_CLK_BOOT] = ktime_mono_to_any(ev_time[EV_CLK_MONO],
+						 TK_OFFS_BOOT);
 
 	rcu_read_lock();
 
 	client = rcu_dereference(evdev->grab);
 
 	if (client)
-		evdev_pass_values(client, vals, count, time_mono, time_real);
+		evdev_pass_values(client, vals, count, ev_time);
 	else
 		list_for_each_entry_rcu(client, &evdev->client_list, node)
-			evdev_pass_values(client, vals, count,
-					  time_mono, time_real);
+			evdev_pass_values(client, vals, count, ev_time);
 
 	rcu_read_unlock();
 }
···
 	case EVIOCSCLOCKID:
 		if (copy_from_user(&i, p, sizeof(unsigned int)))
 			return -EFAULT;
-		if (i != CLOCK_MONOTONIC && i != CLOCK_REALTIME)
-			return -EINVAL;
-		client->clkid = i;
-		return 0;
+
+		return evdev_set_clk_type(client, i);
 
 	case EVIOCGKEYCODE:
 		return evdev_handle_get_keycode(dev, p);
drivers/input/input.c (+13 -9)
···
 
 	events = mt_slots + 1; /* count SYN_MT_REPORT and SYN_REPORT */
 
-	for (i = 0; i < ABS_CNT; i++) {
-		if (test_bit(i, dev->absbit)) {
-			if (input_is_mt_axis(i))
-				events += mt_slots;
-			else
-				events++;
+	if (test_bit(EV_ABS, dev->evbit)) {
+		for (i = 0; i < ABS_CNT; i++) {
+			if (test_bit(i, dev->absbit)) {
+				if (input_is_mt_axis(i))
+					events += mt_slots;
+				else
+					events++;
+			}
 		}
 	}
 
-	for (i = 0; i < REL_CNT; i++)
-		if (test_bit(i, dev->relbit))
-			events++;
+	if (test_bit(EV_REL, dev->evbit)) {
+		for (i = 0; i < REL_CNT; i++)
+			if (test_bit(i, dev->relbit))
+				events++;
+	}
 
 	/* Make room for KEY and MSC events */
 	events += 7;
drivers/input/keyboard/Kconfig (+1)
···
 config KEYBOARD_STMPE
 	tristate "STMPE keypad support"
 	depends on MFD_STMPE
+	depends on OF
 	select INPUT_MATRIXKMAP
 	help
 	  Say Y here if you want to use the keypad controller on STMPE I/O
drivers/input/keyboard/gpio_keys.c (+57 -57)
···
 struct gpio_button_data {
 	const struct gpio_keys_button *button;
 	struct input_dev *input;
-	struct timer_list timer;
-	struct work_struct work;
-	unsigned int timer_debounce;	/* in msecs */
+
+	struct timer_list release_timer;
+	unsigned int release_delay;	/* in msecs, for IRQ-only buttons */
+
+	struct delayed_work work;
+	unsigned int software_debounce;	/* in msecs, for GPIO-driven buttons */
+
 	unsigned int irq;
 	spinlock_t lock;
 	bool disabled;
···
 {
 	if (!bdata->disabled) {
 		/*
-		 * Disable IRQ and possible debouncing timer.
+		 * Disable IRQ and associated timer/work structure.
 		 */
 		disable_irq(bdata->irq);
-		if (bdata->timer_debounce)
-			del_timer_sync(&bdata->timer);
+
+		if (gpio_is_valid(bdata->button->gpio))
+			cancel_delayed_work_sync(&bdata->work);
+		else
+			del_timer_sync(&bdata->release_timer);
 
 		bdata->disabled = true;
 	}
···
 static void gpio_keys_gpio_work_func(struct work_struct *work)
 {
 	struct gpio_button_data *bdata =
-		container_of(work, struct gpio_button_data, work);
+		container_of(work, struct gpio_button_data, work.work);
 
 	gpio_keys_gpio_report_event(bdata);
 
 	if (bdata->button->wakeup)
 		pm_relax(bdata->input->dev.parent);
-}
-
-static void gpio_keys_gpio_timer(unsigned long _data)
-{
-	struct gpio_button_data *bdata = (struct gpio_button_data *)_data;
-
-	schedule_work(&bdata->work);
 }
 
 static irqreturn_t gpio_keys_gpio_isr(int irq, void *dev_id)
···
 
 	if (bdata->button->wakeup)
 		pm_stay_awake(bdata->input->dev.parent);
-	if (bdata->timer_debounce)
-		mod_timer(&bdata->timer,
-			jiffies + msecs_to_jiffies(bdata->timer_debounce));
-	else
-		schedule_work(&bdata->work);
+
+	mod_delayed_work(system_wq,
+			 &bdata->work,
+			 msecs_to_jiffies(bdata->software_debounce));
 
 	return IRQ_HANDLED;
 }
···
 		input_event(input, EV_KEY, button->code, 1);
 		input_sync(input);
 
-		if (!bdata->timer_debounce) {
+		if (!bdata->release_delay) {
 			input_event(input, EV_KEY, button->code, 0);
 			input_sync(input);
 			goto out;
···
 		bdata->key_pressed = true;
 	}
 
-	if (bdata->timer_debounce)
-		mod_timer(&bdata->timer,
-			jiffies + msecs_to_jiffies(bdata->timer_debounce));
+	if (bdata->release_delay)
+		mod_timer(&bdata->release_timer,
+			jiffies + msecs_to_jiffies(bdata->release_delay));
 out:
 	spin_unlock_irqrestore(&bdata->lock, flags);
 	return IRQ_HANDLED;
···
 {
 	struct gpio_button_data *bdata = data;
 
-	if (bdata->timer_debounce)
-		del_timer_sync(&bdata->timer);
-
-	cancel_work_sync(&bdata->work);
+	if (gpio_is_valid(bdata->button->gpio))
+		cancel_delayed_work_sync(&bdata->work);
+	else
+		del_timer_sync(&bdata->release_timer);
 }
 
 static int gpio_keys_setup_key(struct platform_device *pdev,
···
 					button->debounce_interval * 1000);
 		/* use timer if gpiolib doesn't provide debounce */
 		if (error < 0)
-			bdata->timer_debounce =
+			bdata->software_debounce =
 					button->debounce_interval;
 	}
 
-	irq = gpio_to_irq(button->gpio);
-	if (irq < 0) {
-		error = irq;
-		dev_err(dev,
-			"Unable to get irq number for GPIO %d, error %d\n",
-			button->gpio, error);
-		return error;
+	if (button->irq) {
+		bdata->irq = button->irq;
+	} else {
+		irq = gpio_to_irq(button->gpio);
+		if (irq < 0) {
+			error = irq;
+			dev_err(dev,
+				"Unable to get irq number for GPIO %d, error %d\n",
+				button->gpio, error);
+			return error;
+		}
+		bdata->irq = irq;
 	}
-	bdata->irq = irq;
 
-	INIT_WORK(&bdata->work, gpio_keys_gpio_work_func);
-	setup_timer(&bdata->timer,
-		    gpio_keys_gpio_timer, (unsigned long)bdata);
+	INIT_DELAYED_WORK(&bdata->work, gpio_keys_gpio_work_func);
 
 	isr = gpio_keys_gpio_isr;
 	irqflags = IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING;
···
 			return -EINVAL;
 		}
 
-		bdata->timer_debounce = button->debounce_interval;
-		setup_timer(&bdata->timer,
+		bdata->release_delay = button->debounce_interval;
+		setup_timer(&bdata->release_timer,
 			    gpio_keys_irq_timer, (unsigned long)bdata);
 
 		isr = gpio_keys_irq_isr;
···
 	input_set_capability(input, button->type ?: EV_KEY, button->code);
 
 	/*
-	 * Install custom action to cancel debounce timer and
+	 * Install custom action to cancel release timer and
 	 * workqueue item.
 	 */
 	error = devm_add_action(&pdev->dev, gpio_keys_quiesce_key, bdata);
···
 
 	i = 0;
 	for_each_child_of_node(node, pp) {
-		int gpio = -1;
 		enum of_gpio_flags flags;
 
 		button = &pdata->buttons[i++];
 
-		if (!of_find_property(pp, "gpios", NULL)) {
-			button->irq = irq_of_parse_and_map(pp, 0);
-			if (button->irq == 0) {
-				i--;
-				pdata->nbuttons--;
-				dev_warn(dev, "Found button without gpios or irqs\n");
-				continue;
-			}
-		} else {
-			gpio = of_get_gpio_flags(pp, 0, &flags);
-			if (gpio < 0) {
-				error = gpio;
+		button->gpio = of_get_gpio_flags(pp, 0, &flags);
+		if (button->gpio < 0) {
+			error = button->gpio;
+			if (error != -ENOENT) {
 				if (error != -EPROBE_DEFER)
 					dev_err(dev,
 						"Failed to get gpio flags, error: %d\n",
 						error);
 				return ERR_PTR(error);
 			}
+		} else {
+			button->active_low = flags & OF_GPIO_ACTIVE_LOW;
 		}
 
-		button->gpio = gpio;
-		button->active_low = flags & OF_GPIO_ACTIVE_LOW;
+		button->irq = irq_of_parse_and_map(pp, 0);
+
+		if (!gpio_is_valid(button->gpio) && !button->irq) {
+			dev_err(dev, "Found button without gpios or irqs\n");
+			return ERR_PTR(-EINVAL);
+		}
 
 		if (of_property_read_u32(pp, "linux,code", &button->code)) {
 			dev_err(dev, "Button without keycode: 0x%x\n",
···
 			button->type = EV_KEY;
 
 		button->wakeup = !!of_get_property(pp, "gpio-key,wakeup", NULL);
+
+		button->can_disable = !!of_get_property(pp, "linux,can-disable", NULL);
 
 		if (of_property_read_u32(pp, "debounce-interval",
 					 &button->debounce_interval))
···4545#define STMPE_KEYPAD_MAX_ROWS 84646#define STMPE_KEYPAD_MAX_COLS 84747#define STMPE_KEYPAD_ROW_SHIFT 34848-#define STMPE_KEYPAD_KEYMAP_SIZE \4848+#define STMPE_KEYPAD_KEYMAP_MAX_SIZE \4949 (STMPE_KEYPAD_MAX_ROWS * STMPE_KEYPAD_MAX_COLS)50505151/**5252 * struct stmpe_keypad_variant - model-specific attributes5353 * @auto_increment: whether the KPC_DATA_BYTE register address5454 * auto-increments on multiple read5555+ * @set_pullup: whether the pins need to have their pull-ups set5556 * @num_data: number of data bytes5657 * @num_normal_data: number of normal keys' data bytes5758 * @max_cols: maximum number of columns supported···6261 */6362struct stmpe_keypad_variant {6463 bool auto_increment;6464+ bool set_pullup;6565 int num_data;6666 int num_normal_data;6767 int max_cols;···8381 },8482 [STMPE2401] = {8583 .auto_increment = false,8484+ .set_pullup = true,8685 .num_data = 3,8786 .num_normal_data = 2,8887 .max_cols = 8,···9390 },9491 [STMPE2403] = {9592 .auto_increment = true,9393+ .set_pullup = true,9694 .num_data = 5,9795 .num_normal_data = 3,9896 .max_cols = 8,···10399 },104100};105101102102+/**103103+ * struct stmpe_keypad - STMPE keypad state container104104+ * @stmpe: pointer to parent STMPE device105105+ * @input: spawned input device106106+ * @variant: STMPE variant107107+ * @debounce_ms: debounce interval, in ms. 
+ *		Maximum is %STMPE_KEYPAD_MAX_DEBOUNCE.
+ * @scan_count: number of key scanning cycles to confirm key data.
+ *		Maximum is %STMPE_KEYPAD_MAX_SCAN_COUNT.
+ * @no_autorepeat: disable key autorepeat
+ * @rows: bitmask for the rows
+ * @cols: bitmask for the columns
+ * @keymap: the keymap
+ */
 struct stmpe_keypad {
     struct stmpe *stmpe;
     struct input_dev *input;
     const struct stmpe_keypad_variant *variant;
-    const struct stmpe_keypad_platform_data *plat;
-
+    unsigned int debounce_ms;
+    unsigned int scan_count;
+    bool no_autorepeat;
     unsigned int rows;
     unsigned int cols;
-
-    unsigned short keymap[STMPE_KEYPAD_KEYMAP_SIZE];
+    unsigned short keymap[STMPE_KEYPAD_KEYMAP_MAX_SIZE];
 };

 static int stmpe_keypad_read_data(struct stmpe_keypad *keypad, u8 *data)
···
     unsigned int col_gpios = variant->col_gpios;
     unsigned int row_gpios = variant->row_gpios;
     struct stmpe *stmpe = keypad->stmpe;
+    u8 pureg = stmpe->regs[STMPE_IDX_GPPUR_LSB];
     unsigned int pins = 0;
+    unsigned int pu_pins = 0;
+    int ret;
     int i;

     /*
···
     for (i = 0; i < variant->max_cols; i++) {
         int num = __ffs(col_gpios);

-        if (keypad->cols & (1 << i))
+        if (keypad->cols & (1 << i)) {
             pins |= 1 << num;
+            pu_pins |= 1 << num;
+        }

         col_gpios &= ~(1 << num);
     }
···
         row_gpios &= ~(1 << num);
     }

-    return stmpe_set_altfunc(stmpe, pins, STMPE_BLOCK_KEYPAD);
+    ret = stmpe_set_altfunc(stmpe, pins, STMPE_BLOCK_KEYPAD);
+    if (ret)
+        return ret;
+
+    /*
+     * On STMPE24xx, set pin bias to pull-up on all keypad input
+     * pins (columns), this incidentally happen to be maximum 8 pins
+     * and placed at GPIO0-7 so only the LSB of the pull up register
+     * ever needs to be written.
+     */
+    if (variant->set_pullup) {
+        u8 val;
+
+        ret = stmpe_reg_read(stmpe, pureg);
+        if (ret)
+            return ret;
+
+        /* Do not touch unused pins, may be used for GPIO */
+        val = ret & ~pu_pins;
+        val |= pu_pins;
+
+        ret = stmpe_reg_write(stmpe, pureg, val);
+    }
+
+    return 0;
 }

 static int stmpe_keypad_chip_init(struct stmpe_keypad *keypad)
 {
-    const struct stmpe_keypad_platform_data *plat = keypad->plat;
     const struct stmpe_keypad_variant *variant = keypad->variant;
     struct stmpe *stmpe = keypad->stmpe;
     int ret;

-    if (plat->debounce_ms > STMPE_KEYPAD_MAX_DEBOUNCE)
+    if (keypad->debounce_ms > STMPE_KEYPAD_MAX_DEBOUNCE)
         return -EINVAL;

-    if (plat->scan_count > STMPE_KEYPAD_MAX_SCAN_COUNT)
+    if (keypad->scan_count > STMPE_KEYPAD_MAX_SCAN_COUNT)
         return -EINVAL;

     ret = stmpe_enable(stmpe, STMPE_BLOCK_KEYPAD);
···
     ret = stmpe_set_bits(stmpe, STMPE_KPC_CTRL_MSB,
                  STMPE_KPC_CTRL_MSB_SCAN_COUNT,
-                 plat->scan_count << 4);
+                 keypad->scan_count << 4);
     if (ret < 0)
         return ret;
···
                  STMPE_KPC_CTRL_LSB_SCAN |
                  STMPE_KPC_CTRL_LSB_DEBOUNCE,
                  STMPE_KPC_CTRL_LSB_SCAN |
-                 (plat->debounce_ms << 1));
+                 (keypad->debounce_ms << 1));
 }

-static void stmpe_keypad_fill_used_pins(struct stmpe_keypad *keypad)
+static void stmpe_keypad_fill_used_pins(struct stmpe_keypad *keypad,
+                    u32 used_rows, u32 used_cols)
 {
     int row, col;

-    for (row = 0; row < STMPE_KEYPAD_MAX_ROWS; row++) {
-        for (col = 0; col < STMPE_KEYPAD_MAX_COLS; col++) {
+    for (row = 0; row < used_rows; row++) {
+        for (col = 0; col < used_cols; col++) {
             int code = MATRIX_SCAN_CODE(row, col,
-                            STMPE_KEYPAD_ROW_SHIFT);
+                            STMPE_KEYPAD_ROW_SHIFT);
             if (keypad->keymap[code] != KEY_RESERVED) {
                 keypad->rows |= 1 << row;
                 keypad->cols |= 1 << col;
···
     }
 }

-#ifdef CONFIG_OF
-static const struct stmpe_keypad_platform_data *
-stmpe_keypad_of_probe(struct device *dev)
-{
-    struct device_node *np = dev->of_node;
-    struct stmpe_keypad_platform_data *plat;
-
-    if (!np)
-        return ERR_PTR(-ENODEV);
-
-    plat = devm_kzalloc(dev, sizeof(*plat), GFP_KERNEL);
-    if (!plat)
-        return ERR_PTR(-ENOMEM);
-
-    of_property_read_u32(np, "debounce-interval", &plat->debounce_ms);
-    of_property_read_u32(np, "st,scan-count", &plat->scan_count);
-
-    plat->no_autorepeat = of_property_read_bool(np, "st,no-autorepeat");
-
-    return plat;
-}
-#else
-static inline const struct stmpe_keypad_platform_data *
-stmpe_keypad_of_probe(struct device *dev)
-{
-    return ERR_PTR(-EINVAL);
-}
-#endif
-
 static int stmpe_keypad_probe(struct platform_device *pdev)
 {
     struct stmpe *stmpe = dev_get_drvdata(pdev->dev.parent);
-    const struct stmpe_keypad_platform_data *plat;
+    struct device_node *np = pdev->dev.of_node;
     struct stmpe_keypad *keypad;
     struct input_dev *input;
+    u32 rows;
+    u32 cols;
     int error;
     int irq;
-
-    plat = stmpe->pdata->keypad;
-    if (!plat) {
-        plat = stmpe_keypad_of_probe(&pdev->dev);
-        if (IS_ERR(plat))
-            return PTR_ERR(plat);
-    }

     irq = platform_get_irq(pdev, 0);
     if (irq < 0)
···
     if (!keypad)
         return -ENOMEM;

+    keypad->stmpe = stmpe;
+    keypad->variant = &stmpe_keypad_variants[stmpe->partnum];
+
+    of_property_read_u32(np, "debounce-interval", &keypad->debounce_ms);
+    of_property_read_u32(np, "st,scan-count", &keypad->scan_count);
+    keypad->no_autorepeat = of_property_read_bool(np, "st,no-autorepeat");
+
     input = devm_input_allocate_device(&pdev->dev);
     if (!input)
         return -ENOMEM;
···
     input->id.bustype = BUS_I2C;
     input->dev.parent = &pdev->dev;

-    error = matrix_keypad_build_keymap(plat->keymap_data, NULL,
-                       STMPE_KEYPAD_MAX_ROWS,
-                       STMPE_KEYPAD_MAX_COLS,
+    error = matrix_keypad_parse_of_params(&pdev->dev, &rows, &cols);
+    if (error)
+        return error;
+
+    error = matrix_keypad_build_keymap(NULL, NULL, rows, cols,
                        keypad->keymap, input);
     if (error)
         return error;

     input_set_capability(input, EV_MSC, MSC_SCAN);
-    if (!plat->no_autorepeat)
+    if (!keypad->no_autorepeat)
         __set_bit(EV_REP, input->evbit);

-    stmpe_keypad_fill_used_pins(keypad);
+    stmpe_keypad_fill_used_pins(keypad, rows, cols);

-    keypad->stmpe = stmpe;
-    keypad->plat = plat;
     keypad->input = input;
-    keypad->variant = &stmpe_keypad_variants[stmpe->partnum];

     error = stmpe_keypad_chip_init(keypad);
     if (error < 0)
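The pull-up update above is a read-modify-write that must leave non-keypad pins alone, since they may be in use as plain GPIOs. A minimal standalone sketch of that masking step (names here are illustrative, not the driver's API):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the pull-up register update above: preserve the bits of
 * pins not used by the keypad (they may be plain GPIOs) and force the
 * pull-up bits for the keypad column pins. */
static uint8_t apply_pullups(uint8_t current, uint8_t pu_pins)
{
    uint8_t val;

    /* Do not touch unused pins, may be used for GPIO */
    val = current & ~pu_pins;
    val |= pu_pins;
    return val;
}
```

Note the two steps collapse to `current | pu_pins`; the driver spells them out to mirror the "clear, then set" intent.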
+74-10
drivers/input/mouse/alps.c
···
                  unsigned char *pkt,
                  unsigned char pkt_id)
 {
+    /*
+     *       packet-fmt    b7   b6    b5   b4   b3   b2   b1   b0
+     * Byte0 TWO & MULTI    L    1     R    M    1 Y0-2 Y0-1 Y0-0
+     * Byte0 NEW            L    1  X1-5    1    1 Y0-2 Y0-1 Y0-0
+     * Byte1            Y0-10 Y0-9  Y0-8 Y0-7 Y0-6 Y0-5 Y0-4 Y0-3
+     * Byte2            X0-11    1 X0-10 X0-9 X0-8 X0-7 X0-6 X0-5
+     * Byte3            X1-11    1  X0-4 X0-3    1 X0-2 X0-1 X0-0
+     * Byte4 TWO        X1-10  TWO  X1-9 X1-8 X1-7 X1-6 X1-5 X1-4
+     * Byte4 MULTI      X1-10  TWO  X1-9 X1-8 X1-7 X1-6 Y1-5    1
+     * Byte4 NEW        X1-10  TWO  X1-9 X1-8 X1-7 X1-6    0    0
+     * Byte5 TWO & NEW  Y1-10    0  Y1-9 Y1-8 Y1-7 Y1-6 Y1-5 Y1-4
+     * Byte5 MULTI      Y1-10    0  Y1-9 Y1-8 Y1-7 Y1-6  F-1  F-0
+     * L:         Left button
+     * R / M:     Non-clickpads: Right / Middle button
+     *            Clickpads: When > 2 fingers are down, and some fingers
+     *            are in the button area, then the 2 coordinates reported
+     *            are for fingers outside the button area and these report
+     *            extra fingers being present in the right / left button
+     *            area. Note these fingers are not added to the F field!
+     *            so if a TWO packet is received and R = 1 then there are
+     *            3 fingers down, etc.
+     * TWO:       1: Two touches present, byte 0/4/5 are in TWO fmt
+     *            0: If byte 4 bit 0 is 1, then byte 0/4/5 are in MULTI fmt
+     *               otherwise byte 0 bit 4 must be set and byte 0/4/5 are
+     *               in NEW fmt
+     * F:         Number of fingers - 3, 0 means 3 fingers, 1 means 4 ...
+     */
+
     mt[0].x = ((pkt[2] & 0x80) << 4);
     mt[0].x |= ((pkt[2] & 0x3F) << 5);
     mt[0].x |= ((pkt[3] & 0x30) >> 1);
···
 }

 static int alps_get_mt_count(struct input_mt_pos *mt)
 {
-    int i;
+    int i, fingers = 0;

-    for (i = 0; i < MAX_TOUCHES && mt[i].x != 0 && mt[i].y != 0; i++)
-        /* empty */;
+    for (i = 0; i < MAX_TOUCHES; i++) {
+        if (mt[i].x != 0 || mt[i].y != 0)
+            fingers++;
+    }

-    return i;
+    return fingers;
 }

 static int alps_decode_packet_v7(struct alps_fields *f,
                  unsigned char *p,
                  struct psmouse *psmouse)
 {
+    struct alps_data *priv = psmouse->private;
     unsigned char pkt_id;

     pkt_id = alps_get_packet_id_v7(p);
···
         return 0;
     if (pkt_id == V7_PACKET_ID_UNKNOWN)
         return -1;
+    /*
+     * NEW packets are send to indicate a discontinuity in the finger
+     * coordinate reporting. Specifically a finger may have moved from
+     * slot 0 to 1 or vice versa. INPUT_MT_TRACK takes care of this for
+     * us.
+     *
+     * NEW packets have 3 problems:
+     * 1) They do not contain middle / right button info (on non clickpads)
+     *    this can be worked around by preserving the old button state
+     * 2) They do not contain an accurate fingercount, and they are
+     *    typically send when the number of fingers changes. We cannot use
+     *    the old finger count as that may mismatch with the amount of
+     *    touch coordinates we've available in the NEW packet
+     * 3) Their x data for the second touch is inaccurate leading to
+     *    a possible jump of the x coordinate by 16 units when the first
+     *    non NEW packet comes in
+     * Since problems 2 & 3 cannot be worked around, just ignore them.
+     */
+    if (pkt_id == V7_PACKET_ID_NEW)
+        return 1;

     alps_get_finger_coordinate_v7(f->mt, p, pkt_id);

-    if (pkt_id == V7_PACKET_ID_TWO || pkt_id == V7_PACKET_ID_MULTI) {
-        f->left = (p[0] & 0x80) >> 7;
+    if (pkt_id == V7_PACKET_ID_TWO)
+        f->fingers = alps_get_mt_count(f->mt);
+    else /* pkt_id == V7_PACKET_ID_MULTI */
+        f->fingers = 3 + (p[5] & 0x03);
+
+    f->left = (p[0] & 0x80) >> 7;
+    if (priv->flags & ALPS_BUTTONPAD) {
+        if (p[0] & 0x20)
+            f->fingers++;
+        if (p[0] & 0x10)
+            f->fingers++;
+    } else {
         f->right = (p[0] & 0x20) >> 5;
         f->middle = (p[0] & 0x10) >> 4;
     }

-    if (pkt_id == V7_PACKET_ID_TWO)
-        f->fingers = alps_get_mt_count(f->mt);
-    else if (pkt_id == V7_PACKET_ID_MULTI)
-        f->fingers = 3 + (p[5] & 0x03);
+    /* Sometimes a single touch is reported in mt[1] rather then mt[0] */
+    if (f->fingers == 1 && f->mt[0].x == 0 && f->mt[0].y == 0) {
+        f->mt[0].x = f->mt[1].x;
+        f->mt[0].y = f->mt[1].y;
+        f->mt[1].x = 0;
+        f->mt[1].y = 0;
+    }

     return 0;
 }
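The rewritten `alps_get_mt_count()` above exists because a single touch can arrive in slot 1 with slot 0 empty, so a loop that stops at the first empty slot undercounts. A standalone sketch contrasting the two behaviours (struct and names are illustrative; `MAX_TOUCHES` is 4 in the driver):

```c
#include <assert.h>

#define MAX_TOUCHES 4

struct pos { int x, y; };

/* Fixed logic: inspect every slot, count the non-empty ones. */
static int count_fingers(const struct pos *mt)
{
    int i, fingers = 0;

    for (i = 0; i < MAX_TOUCHES; i++)
        if (mt[i].x != 0 || mt[i].y != 0)
            fingers++;
    return fingers;
}

/* Old logic, for contrast: stops at the first empty slot. */
static int count_fingers_old(const struct pos *mt)
{
    int i;

    for (i = 0; i < MAX_TOUCHES && mt[i].x != 0 && mt[i].y != 0; i++)
        /* empty */;
    return i;
}

/* A touch reported in mt[1] while mt[0] is empty. */
static int demo_new(void)
{
    struct pos mt[MAX_TOUCHES] = { { 0, 0 }, { 10, 20 }, { 0, 0 }, { 0, 0 } };
    return count_fingers(mt);
}

static int demo_old(void)
{
    struct pos mt[MAX_TOUCHES] = { { 0, 0 }, { 10, 20 }, { 0, 0 }, { 0, 0 } };
    return count_fingers_old(mt);
}
```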
···
     if (action != BUS_NOTIFY_REMOVED_DEVICE)
         return 0;

-    /*
-     * If the device is still attached to a device driver we can't
-     * tear down the domain yet as DMA mappings may still be in use.
-     * Wait for the BUS_NOTIFY_UNBOUND_DRIVER event to do that.
-     */
-    if (action == BUS_NOTIFY_DEL_DEVICE && dev->driver != NULL)
-        return 0;
-
     domain = find_domain(dev);
     if (!domain)
         return 0;
···
             domain_remove_one_dev_info(old_domain, dev);
         else
             domain_remove_dev_info(old_domain);
+
+        if (!domain_type_is_vm_or_si(old_domain) &&
+            list_empty(&old_domain->devices))
+            domain_exit(old_domain);
     }
 }
+3-3
drivers/iommu/ipmmu-vmsa.c
···

 static u64 ipmmu_page_prot(unsigned int prot, u64 type)
 {
-    u64 pgprot = ARM_VMSA_PTE_XN | ARM_VMSA_PTE_nG | ARM_VMSA_PTE_AF
+    u64 pgprot = ARM_VMSA_PTE_nG | ARM_VMSA_PTE_AF
            | ARM_VMSA_PTE_SH_IS | ARM_VMSA_PTE_AP_UNPRIV
            | ARM_VMSA_PTE_NS | type;
···
     if (prot & IOMMU_CACHE)
         pgprot |= IMMAIR_ATTR_IDX_WBRWA << ARM_VMSA_PTE_ATTRINDX_SHIFT;

-    if (prot & IOMMU_EXEC)
-        pgprot &= ~ARM_VMSA_PTE_XN;
+    if (prot & IOMMU_NOEXEC)
+        pgprot |= ARM_VMSA_PTE_XN;
     else if (!(prot & (IOMMU_READ | IOMMU_WRITE)))
         /* If no access create a faulting entry to avoid TLB fills. */
         pgprot &= ~ARM_VMSA_PTE_PAGE;
···
     return 0;
 }

+static int cxl_mmap_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
+{
+    struct cxl_context *ctx = vma->vm_file->private_data;
+    unsigned long address = (unsigned long)vmf->virtual_address;
+    u64 area, offset;
+
+    offset = vmf->pgoff << PAGE_SHIFT;
+
+    pr_devel("%s: pe: %i address: 0x%lx offset: 0x%llx\n",
+            __func__, ctx->pe, address, offset);
+
+    if (ctx->afu->current_mode == CXL_MODE_DEDICATED) {
+        area = ctx->afu->psn_phys;
+        if (offset > ctx->afu->adapter->ps_size)
+            return VM_FAULT_SIGBUS;
+    } else {
+        area = ctx->psn_phys;
+        if (offset > ctx->psn_size)
+            return VM_FAULT_SIGBUS;
+    }
+
+    mutex_lock(&ctx->status_mutex);
+
+    if (ctx->status != STARTED) {
+        mutex_unlock(&ctx->status_mutex);
+        pr_devel("%s: Context not started, failing problem state access\n", __func__);
+        return VM_FAULT_SIGBUS;
+    }
+
+    vm_insert_pfn(vma, address, (area + offset) >> PAGE_SHIFT);
+
+    mutex_unlock(&ctx->status_mutex);
+
+    return VM_FAULT_NOPAGE;
+}
+
+static const struct vm_operations_struct cxl_mmap_vmops = {
+    .fault = cxl_mmap_fault,
+};
+
 /*
  * Map a per-context mmio space into the given vma.
  */
···
     u64 len = vma->vm_end - vma->vm_start;
     len = min(len, ctx->psn_size);

-    if (ctx->afu->current_mode == CXL_MODE_DEDICATED) {
-        vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
-        return vm_iomap_memory(vma, ctx->afu->psn_phys, ctx->afu->adapter->ps_size);
-    }
+    if (ctx->afu->current_mode != CXL_MODE_DEDICATED) {
+        /* make sure there is a valid per process space for this AFU */
+        if ((ctx->master && !ctx->afu->psa) || (!ctx->afu->pp_psa)) {
+            pr_devel("AFU doesn't support mmio space\n");
+            return -EINVAL;
+        }

-    /* make sure there is a valid per process space for this AFU */
-    if ((ctx->master && !ctx->afu->psa) || (!ctx->afu->pp_psa)) {
-        pr_devel("AFU doesn't support mmio space\n");
-        return -EINVAL;
+        /* Can't mmap until the AFU is enabled */
+        if (!ctx->afu->enabled)
+            return -EBUSY;
     }
-
-    /* Can't mmap until the AFU is enabled */
-    if (!ctx->afu->enabled)
-        return -EBUSY;

     pr_devel("%s: mmio physical: %llx pe: %i master:%i\n", __func__,
          ctx->psn_phys, ctx->pe , ctx->master);

+    vma->vm_flags |= VM_IO | VM_PFNMAP;
     vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
-    return vm_iomap_memory(vma, ctx->psn_phys, len);
+    vma->vm_ops = &cxl_mmap_vmops;
+    return 0;
 }

 /*
···
     afu_release_irqs(ctx);
     flush_work(&ctx->fault_work); /* Only needed for dedicated process */
     wake_up_all(&ctx->wq);
-
-    /* Release Problem State Area mapping */
-    mutex_lock(&ctx->mapping_lock);
-    if (ctx->mapping)
-        unmap_mapping_range(ctx->mapping, 0, 0, 1);
-    mutex_unlock(&ctx->mapping_lock);
 }

 /*
···
      * created and torn down after the IDR removed
      */
     __detach_context(ctx);
+
+        /*
+         * We are force detaching - remove any active PSA mappings so
+         * userspace cannot interfere with the card if it comes back.
+         * Easiest way to exercise this is to unbind and rebind the
+         * driver via sysfs while it is in use.
+         */
+        mutex_lock(&ctx->mapping_lock);
+        if (ctx->mapping)
+            unmap_mapping_range(ctx->mapping, 0, 0, 1);
+        mutex_unlock(&ctx->mapping_lock);
     }
     mutex_unlock(&afu->contexts_lock);
 }
+8-6
drivers/misc/cxl/file.c
···

     pr_devel("%s: pe: %i\n", __func__, ctx->pe);

-    mutex_lock(&ctx->status_mutex);
-    if (ctx->status != OPENED) {
-        rc = -EIO;
-        goto out;
-    }
-
+    /* Do this outside the status_mutex to avoid a circular dependency with
+     * the locking in cxl_mmap_fault() */
     if (copy_from_user(&work, uwork,
                sizeof(struct cxl_ioctl_start_work))) {
         rc = -EFAULT;
+        goto out;
+    }
+
+    mutex_lock(&ctx->status_mutex);
+    if (ctx->status != OPENED) {
+        rc = -EIO;
         goto out;
     }
+12
drivers/misc/mei/hw-me.c
···
     struct mei_me_hw *hw = to_me_hw(dev);
     u32 hcsr = mei_hcsr_read(hw);

+    /* H_RST may be found lit before reset is started,
+     * for example if preceding reset flow hasn't completed.
+     * In that case asserting H_RST will be ignored, therefore
+     * we need to clean H_RST bit to start a successful reset sequence.
+     */
+    if ((hcsr & H_RST) == H_RST) {
+        dev_warn(dev->dev, "H_RST is set = 0x%08X", hcsr);
+        hcsr &= ~H_RST;
+        mei_me_reg_write(hw, H_CSR, hcsr);
+        hcsr = mei_hcsr_read(hw);
+    }
+
     hcsr |= H_RST | H_IG | H_IS;

     if (intr_enable)
+1-1
drivers/mmc/core/mmc.c
···
     unsigned idx, bus_width = 0;
     int err = 0;

-    if (!mmc_can_ext_csd(card) &&
+    if (!mmc_can_ext_csd(card) ||
         !(host->caps & (MMC_CAP_4_BIT_DATA | MMC_CAP_8_BIT_DATA)))
         return 0;
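The one-character `&&` to `||` fix above changes when bus-width selection is skipped. Under the old condition, the early return fired only when *both* EXT_CSD was unavailable *and* the host lacked wide-bus caps; either condition alone should suffice. A sketch of the two guards (illustrative names):

```c
#include <assert.h>
#include <stdbool.h>

/* Fixed guard: skip bus-width selection when EXT_CSD is unavailable OR
 * the host supports neither 4-bit nor 8-bit data. */
static bool skip_bus_width(bool can_ext_csd, bool has_wide_caps)
{
    return !can_ext_csd || !has_wide_caps;
}

/* Old guard, for contrast: skipped only when both conditions held. */
static bool skip_bus_width_old(bool can_ext_csd, bool has_wide_caps)
{
    return !can_ext_csd && !has_wide_caps;
}
```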
···
     if (IS_ERR(host))
         return PTR_ERR(host);

-    if (of_device_is_compatible(np, "marvell,armada-380-sdhci")) {
-        ret = mv_conf_mbus_windows(pdev, mv_mbus_dram_info());
-        if (ret < 0)
-            goto err_mbus_win;
-    }
-
-
     pltfm_host = sdhci_priv(host);
     pltfm_host->priv = pxa;
···
     pxa->clk_core = devm_clk_get(dev, "core");
     if (!IS_ERR(pxa->clk_core))
         clk_prepare_enable(pxa->clk_core);
+
+    if (of_device_is_compatible(np, "marvell,armada-380-sdhci")) {
+        ret = mv_conf_mbus_windows(pdev, mv_mbus_dram_info());
+        if (ret < 0)
+            goto err_mbus_win;
+    }

     /* enable 1/8V DDR capable */
     host->mmc->caps |= MMC_CAP_1_8V_DDR;
···
     pm_runtime_disable(&pdev->dev);
 err_of_parse:
 err_cd_req:
+err_mbus_win:
     clk_disable_unprepare(pxa->clk_io);
     if (!IS_ERR(pxa->clk_core))
         clk_disable_unprepare(pxa->clk_core);
 err_clk_get:
-err_mbus_win:
     sdhci_pltfm_free(pdev);
     return ret;
 }
+54-26
drivers/mmc/host/sdhci.c
···

         del_timer_sync(&host->tuning_timer);
         host->flags &= ~SDHCI_NEEDS_RETUNING;
-        host->mmc->max_blk_count =
-            (host->quirks & SDHCI_QUIRK_NO_MULTIBLOCK) ? 1 : 65535;
     }
     sdhci_enable_card_detection(host);
 }
···
         spin_unlock_irq(&host->lock);
         mmc_regulator_set_ocr(mmc, mmc->supply.vmmc, vdd);
         spin_lock_irq(&host->lock);
+
+        if (mode != MMC_POWER_OFF)
+            sdhci_writeb(host, SDHCI_POWER_ON, SDHCI_POWER_CONTROL);
+        else
+            sdhci_writeb(host, 0, SDHCI_POWER_CONTROL);
+
         return;
     }
···

     sdhci_runtime_pm_get(host);

+    present = mmc_gpio_get_cd(host->mmc);
+
     spin_lock_irqsave(&host->lock, flags);

     WARN_ON(host->mrq != NULL);
···
      *     zero: cd-gpio is used, and card is removed
      *     one: cd-gpio is used, and card is present
      */
-    present = mmc_gpio_get_cd(host->mmc);
     if (present < 0) {
         /* If polling, assume that the card is always present. */
         if (host->quirks & SDHCI_QUIRK_BROKEN_CARD_DETECTION)
···
     return !(present_state & SDHCI_DATA_LVL_MASK);
 }

+static int sdhci_prepare_hs400_tuning(struct mmc_host *mmc, struct mmc_ios *ios)
+{
+    struct sdhci_host *host = mmc_priv(mmc);
+    unsigned long flags;
+
+    spin_lock_irqsave(&host->lock, flags);
+    host->flags |= SDHCI_HS400_TUNING;
+    spin_unlock_irqrestore(&host->lock, flags);
+
+    return 0;
+}
+
 static int sdhci_execute_tuning(struct mmc_host *mmc, u32 opcode)
 {
     struct sdhci_host *host = mmc_priv(mmc);
···
     int tuning_loop_counter = MAX_TUNING_LOOP;
     int err = 0;
     unsigned long flags;
+    unsigned int tuning_count = 0;
+    bool hs400_tuning;

     sdhci_runtime_pm_get(host);
     spin_lock_irqsave(&host->lock, flags);
+
+    hs400_tuning = host->flags & SDHCI_HS400_TUNING;
+    host->flags &= ~SDHCI_HS400_TUNING;
+
+    if (host->tuning_mode == SDHCI_TUNING_MODE_1)
+        tuning_count = host->tuning_count;

     /*
      * The Host Controller needs tuning only in case of SDR104 mode
···
      * tuning function has to be executed.
      */
     switch (host->timing) {
+    /* HS400 tuning is done in HS200 mode */
     case MMC_TIMING_MMC_HS400:
+        err = -EINVAL;
+        goto out_unlock;
+
     case MMC_TIMING_MMC_HS200:
+        /*
+         * Periodic re-tuning for HS400 is not expected to be needed, so
+         * disable it here.
+         */
+        if (hs400_tuning)
+            tuning_count = 0;
+        break;
+
     case MMC_TIMING_UHS_SDR104:
         break;
···
     /* FALLTHROUGH */

     default:
-        spin_unlock_irqrestore(&host->lock, flags);
-        sdhci_runtime_pm_put(host);
-        return 0;
+        goto out_unlock;
     }

     if (host->ops->platform_execute_tuning) {
···
     }

out:
-    /*
-     * If this is the very first time we are here, we start the retuning
-     * timer. Since only during the first time, SDHCI_NEEDS_RETUNING
-     * flag won't be set, we check this condition before actually starting
-     * the timer.
-     */
-    if (!(host->flags & SDHCI_NEEDS_RETUNING) && host->tuning_count &&
-        (host->tuning_mode == SDHCI_TUNING_MODE_1)) {
+    host->flags &= ~SDHCI_NEEDS_RETUNING;
+
+    if (tuning_count) {
         host->flags |= SDHCI_USING_RETUNING_TIMER;
-        mod_timer(&host->tuning_timer, jiffies +
-            host->tuning_count * HZ);
-        /* Tuning mode 1 limits the maximum data length to 4MB */
-        mmc->max_blk_count = (4 * 1024 * 1024) / mmc->max_blk_size;
-    } else if (host->flags & SDHCI_USING_RETUNING_TIMER) {
-        host->flags &= ~SDHCI_NEEDS_RETUNING;
-        /* Reload the new initial value for timer */
-        mod_timer(&host->tuning_timer, jiffies +
-            host->tuning_count * HZ);
+        mod_timer(&host->tuning_timer, jiffies + tuning_count * HZ);
     }

     /*
···

     sdhci_writel(host, host->ier, SDHCI_INT_ENABLE);
     sdhci_writel(host, host->ier, SDHCI_SIGNAL_ENABLE);
+out_unlock:
     spin_unlock_irqrestore(&host->lock, flags);
     sdhci_runtime_pm_put(host);
···
 {
     struct sdhci_host *host = mmc_priv(mmc);
     unsigned long flags;
+    int present;

     /* First check if client has provided their own card event */
     if (host->ops->card_event)
         host->ops->card_event(host);

+    present = sdhci_do_get_cd(host);
+
     spin_lock_irqsave(&host->lock, flags);

     /* Check host->mrq first in case we are runtime suspended */
-    if (host->mrq && !sdhci_do_get_cd(host)) {
+    if (host->mrq && !present) {
         pr_err("%s: Card removed during transfer!\n",
                mmc_hostname(host->mmc));
         pr_err("%s: Resetting controller.\n",
···
     .hw_reset = sdhci_hw_reset,
     .enable_sdio_irq = sdhci_enable_sdio_irq,
     .start_signal_voltage_switch = sdhci_start_signal_voltage_switch,
+    .prepare_hs400_tuning = sdhci_prepare_hs400_tuning,
     .execute_tuning = sdhci_execute_tuning,
     .card_event = sdhci_card_event,
     .card_busy = sdhci_card_busy,
···
     mmc->max_segs = SDHCI_MAX_SEGS;

     /*
-     * Maximum number of sectors in one transfer. Limited by DMA boundary
-     * size (512KiB).
+     * Maximum number of sectors in one transfer. Limited by SDMA boundary
+     * size (512KiB). Note some tuning modes impose a 4MiB limit, but this
+     * is less anyway.
      */
     mmc->max_req_size = 524288;
+1-1
drivers/net/bonding/bond_main.c
···
     /* slave is not a slave or master is not master of this slave */
     if (!(slave_dev->flags & IFF_SLAVE) ||
         !netdev_has_upper_dev(slave_dev, bond_dev)) {
-        netdev_err(bond_dev, "cannot release %s\n",
+        netdev_dbg(bond_dev, "cannot release %s\n",
                slave_dev->name);
         return -EINVAL;
     }
···
 source "drivers/net/ethernet/renesas/Kconfig"
 source "drivers/net/ethernet/rdc/Kconfig"
 source "drivers/net/ethernet/rocker/Kconfig"
-
-config S6GMAC
-    tristate "S6105 GMAC ethernet support"
-    depends on XTENSA_VARIANT_S6000
-    select PHYLIB
-    ---help---
-      This driver supports the on chip ethernet device on the
-      S6105 xtensa processor.
-
-      To compile this driver as a module, choose M here. The module
-      will be called s6gmac.
-
 source "drivers/net/ethernet/samsung/Kconfig"
 source "drivers/net/ethernet/seeq/Kconfig"
 source "drivers/net/ethernet/silan/Kconfig"
···
     s16 xact_addr_filt;    /* index of our MAC address filter */
     u16 rss_size;          /* size of VI's RSS table slice */
     u8 pidx;               /* index into adapter port[] */
+    s8 mdio_addr;
+    u8 port_type;          /* firmware port type */
+    u8 mod_type;           /* firmware module type */
     u8 port_id;            /* physical port ID */
     u8 nqsets;             /* # of "Queue Sets" */
     u8 first_qset;         /* index of first "Queue Set" */
···
  * is "contracted" to provide for the common code.
  */
 void t4vf_os_link_changed(struct adapter *, int, int);
+void t4vf_os_portmod_changed(struct adapter *, int);

 /*
  * SGE function prototype declarations.
···
  * (40ns * 6).
  */
 #define FEC_QUIRK_BUG_CAPTURE       (1 << 10)
+/* Controller has only one MDIO bus */
+#define FEC_QUIRK_SINGLE_MDIO       (1 << 11)

 struct fec_enet_priv_tx_q {
     int index;
+6-4
drivers/net/ethernet/freescale/fec_main.c
···
         .driver_data = 0,
     }, {
         .name = "imx28-fec",
-        .driver_data = FEC_QUIRK_ENET_MAC | FEC_QUIRK_SWAP_FRAME,
+        .driver_data = FEC_QUIRK_ENET_MAC | FEC_QUIRK_SWAP_FRAME |
+                FEC_QUIRK_SINGLE_MDIO,
     }, {
         .name = "imx6q-fec",
         .driver_data = FEC_QUIRK_ENET_MAC | FEC_QUIRK_HAS_GBIT |
···
     int err = -ENXIO, i;

     /*
-     * The dual fec interfaces are not equivalent with enet-mac.
+     * The i.MX28 dual fec interfaces are not equal.
      * Here are the differences:
      *
      *  - fec0 supports MII & RMII modes while fec1 only supports RMII
···
      * mdio interface in board design, and need to be configured by
      * fec0 mii_bus.
      */
-    if ((fep->quirks & FEC_QUIRK_ENET_MAC) && fep->dev_id > 0) {
+    if ((fep->quirks & FEC_QUIRK_SINGLE_MDIO) && fep->dev_id > 0) {
         /* fec1 uses fec0 mii_bus */
         if (mii_cnt && fec0_mii_bus) {
             fep->mii_bus = fec0_mii_bus;
···
     mii_cnt++;

     /* save fec0 mii_bus */
-    if (fep->quirks & FEC_QUIRK_ENET_MAC)
+    if (fep->quirks & FEC_QUIRK_SINGLE_MDIO)
         fec0_mii_bus = fep->mii_bus;

     return 0;
···
     pdev->id_entry = of_id->data;
     fep->quirks = pdev->id_entry->driver_data;

+    fep->netdev = ndev;
     fep->num_rx_queues = num_rx_qs;
     fep->num_tx_queues = num_tx_qs;
+11
drivers/net/ethernet/intel/Kconfig
···

       If unsure, say N.

+config I40E_FCOE
+    bool "Fibre Channel over Ethernet (FCoE)"
+    default n
+    depends on I40E && DCB && FCOE
+    ---help---
+      Say Y here if you want to use Fibre Channel over Ethernet (FCoE)
+      in the driver. This will create new netdev for exclusive FCoE
+      use with XL710 FCoE offloads enabled.
+
+      If unsure, say N.
+
 config I40EVF
     tristate "Intel(R) XL710 X710 Virtual Function Ethernet support"
     depends on PCI_MSI
···
 } while (0)

 typedef enum i40e_status_code i40e_status;
-#if defined(CONFIG_FCOE) || defined(CONFIG_FCOE_MODULE)
+#ifdef CONFIG_I40E_FCOE
 #define I40E_FCOE
-#endif /* CONFIG_FCOE or CONFIG_FCOE_MODULE */
+#endif
 #endif /* _I40E_OSDEP_H_ */
+72-32
drivers/net/ethernet/intel/i40e/i40e_txrx.c
···
     return le32_to_cpu(*(volatile __le32 *)head);
 }

+#define WB_STRIDE 0x3
+
 /**
  * i40e_clean_tx_irq - Reclaim resources after transmit completes
  * @tx_ring:  tx ring to clean
···
     tx_ring->q_vector->tx.total_bytes += total_bytes;
     tx_ring->q_vector->tx.total_packets += total_packets;

+    /* check to see if there are any non-cache aligned descriptors
+     * waiting to be written back, and kick the hardware to force
+     * them to be written back in case of napi polling
+     */
+    if (budget &&
+        !((i & WB_STRIDE) == WB_STRIDE) &&
+        !test_bit(__I40E_DOWN, &tx_ring->vsi->state) &&
+        (I40E_DESC_UNUSED(tx_ring) != tx_ring->count))
+        tx_ring->arm_wb = true;
+    else
+        tx_ring->arm_wb = false;
+
     if (check_for_tx_hang(tx_ring) && i40e_check_tx_hang(tx_ring)) {
         /* schedule immediate reset if we believe we hung */
         dev_info(tx_ring->dev, "Detected Tx Unit Hang\n"
···
         netif_stop_subqueue(tx_ring->netdev, tx_ring->queue_index);

         dev_info(tx_ring->dev,
-             "tx hang detected on queue %d, resetting adapter\n",
+             "tx hang detected on queue %d, reset requested\n",
              tx_ring->queue_index);

-        tx_ring->netdev->netdev_ops->ndo_tx_timeout(tx_ring->netdev);
+        /* do not fire the reset immediately, wait for the stack to
+         * decide we are truly stuck, also prevents every queue from
+         * simultaneously requesting a reset
+         */

-        /* the adapter is about to reset, no point in enabling stuff */
-        return true;
+        /* the adapter is about to reset, no point in enabling polling */
+        budget = 1;
     }

     netdev_tx_completed_queue(netdev_get_tx_queue(tx_ring->netdev,
···
         }
     }

-    return budget > 0;
+    return !!budget;
+}
+
+/**
+ * i40e_force_wb - Arm hardware to do a wb on noncache aligned descriptors
+ * @vsi: the VSI we care about
+ * @q_vector: the vector on which to force writeback
+ *
+ **/
+static void i40e_force_wb(struct i40e_vsi *vsi, struct i40e_q_vector *q_vector)
+{
+    u32 val = I40E_PFINT_DYN_CTLN_INTENA_MASK |
+          I40E_PFINT_DYN_CTLN_SWINT_TRIG_MASK |
+          I40E_PFINT_DYN_CTLN_SW_ITR_INDX_ENA_MASK
+          /* allow 00 to be written to the index */;
+
+    wr32(&vsi->back->hw,
+         I40E_PFINT_DYN_CTLN(q_vector->v_idx + vsi->base_vector - 1),
+         val);
 }

 /**
···
      * so the total length of IPv4 header is IHL*4 bytes
      * The UDP_0 bit *may* bet set if the *inner* header is UDP
      */
-    if (ipv4_tunnel &&
-        (decoded.inner_prot != I40E_RX_PTYPE_INNER_PROT_UDP) &&
-        !(rx_status & (1 << I40E_RX_DESC_STATUS_UDP_0_SHIFT))) {
+    if (ipv4_tunnel) {
         skb->transport_header = skb->mac_header +
                     sizeof(struct ethhdr) +
                     (ip_hdr(skb)->ihl * 4);
···
                   skb->protocol == htons(ETH_P_8021AD))
                       ? VLAN_HLEN : 0;

-        rx_udp_csum = udp_csum(skb);
-        iph = ip_hdr(skb);
-        csum = csum_tcpudp_magic(
-                iph->saddr, iph->daddr,
-                (skb->len - skb_transport_offset(skb)),
-                IPPROTO_UDP, rx_udp_csum);
+        if ((ip_hdr(skb)->protocol == IPPROTO_UDP) &&
+            (udp_hdr(skb)->check != 0)) {
+            rx_udp_csum = udp_csum(skb);
+            iph = ip_hdr(skb);
+            csum = csum_tcpudp_magic(
+                    iph->saddr, iph->daddr,
+                    (skb->len - skb_transport_offset(skb)),
+                    IPPROTO_UDP, rx_udp_csum);

-        if (udp_hdr(skb)->check != csum)
-            goto checksum_fail;
+            if (udp_hdr(skb)->check != csum)
+                goto checksum_fail;
+
+        } /* else its GRE and so no outer UDP header */
     }

     skb->ip_summed = CHECKSUM_UNNECESSARY;
···
     struct i40e_vsi *vsi = q_vector->vsi;
     struct i40e_ring *ring;
     bool clean_complete = true;
+    bool arm_wb = false;
     int budget_per_ring;

     if (test_bit(__I40E_DOWN, &vsi->state)) {
···
     /* Since the actual Tx work is minimal, we can give the Tx a larger
      * budget and be more aggressive about cleaning up the Tx descriptors.
      */
-    i40e_for_each_ring(ring, q_vector->tx)
+    i40e_for_each_ring(ring, q_vector->tx) {
         clean_complete &= i40e_clean_tx_irq(ring, vsi->work_limit);
+        arm_wb |= ring->arm_wb;
+    }

     /* We attempt to distribute budget to each Rx queue fairly, but don't
      * allow the budget to go below 1 because that would exit polling early.
···
         clean_complete &= i40e_clean_rx_irq(ring, budget_per_ring);

     /* If work not completed, return budget and polling will return */
-    if (!clean_complete)
+    if (!clean_complete) {
+        if (arm_wb)
+            i40e_force_wb(vsi, q_vector);
         return budget;
+    }

     /* Work is done so exit the polling mode and re-enable the interrupt */
     napi_complete(napi);
···
     if (err < 0)
         return err;

-    if (protocol == htons(ETH_P_IP)) {
-        iph = skb->encapsulation ? inner_ip_hdr(skb) : ip_hdr(skb);
+    iph = skb->encapsulation ? inner_ip_hdr(skb) : ip_hdr(skb);
+    ipv6h = skb->encapsulation ? inner_ipv6_hdr(skb) : ipv6_hdr(skb);
+
+    if (iph->version == 4) {
         tcph = skb->encapsulation ? inner_tcp_hdr(skb) : tcp_hdr(skb);
         iph->tot_len = 0;
         iph->check = 0;
         tcph->check = ~csum_tcpudp_magic(iph->saddr, iph->daddr,
                          0, IPPROTO_TCP, 0);
-    } else if (skb_is_gso_v6(skb)) {
-
-        ipv6h = skb->encapsulation ? inner_ipv6_hdr(skb)
-                       : ipv6_hdr(skb);
+    } else if (ipv6h->version == 6) {
         tcph = skb->encapsulation ? inner_tcp_hdr(skb) : tcp_hdr(skb);
         ipv6h->payload_len = 0;
         tcph->check = ~csum_ipv6_magic(&ipv6h->saddr, &ipv6h->daddr,
···
                     I40E_TX_CTX_EXT_IP_IPV4_NO_CSUM;
         }
     } else if (tx_flags & I40E_TX_FLAGS_IPV6) {
-        if (tx_flags & I40E_TX_FLAGS_TSO) {
-            *cd_tunneling |= I40E_TX_CTX_EXT_IP_IPV6;
+        *cd_tunneling |= I40E_TX_CTX_EXT_IP_IPV6;
+        if (tx_flags & I40E_TX_FLAGS_TSO)
             ip_hdr(skb)->check = 0;
-        } else {
-            *cd_tunneling |=
-                    I40E_TX_CTX_EXT_IP_IPV4_NO_CSUM;
-        }
     }

     /* Now set the ctx descriptor fields */
···
             ((skb_inner_network_offset(skb) -
               skb_transport_offset(skb)) >> 1) <<
              I40E_TXD_CTX_QW0_NATLEN_SHIFT;
-
+        if (this_ip_hdr->version == 6) {
+            tx_flags &= ~I40E_TX_FLAGS_IPV4;
+            tx_flags |= I40E_TX_FLAGS_IPV6;
+        }
     } else {
         network_hdr_len = skb_network_header_len(skb);
         this_ip_hdr = ip_hdr(skb);
···
     /* Place RS bit on last descriptor of any packet that spans across the
      * 4th descriptor (WB_STRIDE aka 0x3) in a 64B cacheline.
      */
-#define WB_STRIDE 0x3
     if (((i & WB_STRIDE) != WB_STRIDE) &&
         (first <= &tx_ring->tx_bi[i]) &&
         (first >= &tx_ring->tx_bi[i & ~WB_STRIDE])) {
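The `WB_STRIDE` test used in both hunks above relies on a small bit trick: with 16-byte descriptors packed four to a 64-byte cacheline, index `i` lands on the last descriptor of a cacheline exactly when its two low bits are both set. A standalone sketch of that check:

```c
#include <assert.h>
#include <stdbool.h>

#define WB_STRIDE 0x3 /* four 16-byte descriptors per 64-byte cacheline */

/* Descriptor index i ends a 64-byte cacheline exactly when
 * (i & WB_STRIDE) == WB_STRIDE, i.e. i is 3, 7, 11, ...
 * The clean-up path above arms a forced writeback only when the tail
 * does NOT end a cacheline, since those descriptors may otherwise sit
 * unwritten while NAPI keeps polling. */
static bool ends_cacheline(unsigned int i)
{
    return (i & WB_STRIDE) == WB_STRIDE;
}
```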
+1
drivers/net/ethernet/intel/i40e/i40e_txrx.h
···241241 unsigned long last_rx_timestamp;242242243243 bool ring_active; /* is ring online or not */244244+ bool arm_wb; /* do something to arm write back */244245245246 /* stats structs */246247 struct i40e_queue_stats stats;
+1-1
drivers/net/ethernet/intel/igb/e1000_82575.c
···11251125 u32 swmask = mask;11261126 u32 fwmask = mask << 16;11271127 s32 ret_val = 0;11281128- s32 i = 0, timeout = 200; /* FIXME: find real value to use here */11281128+ s32 i = 0, timeout = 200;1129112911301130 while (i < timeout) {11311131 if (igb_get_hw_semaphore(hw)) {
···962962 tx_desc->ctrl.owner_opcode = op_own;963963 if (send_doorbell) {964964 wmb();965965- iowrite32(ring->doorbell_qpn,965965+ /* Since there is no iowrite*_native() that writes the966966+ * value as is, without byteswapping - using the one967967+ * that doesn't do byteswapping in the relevant arch968968+ * endianness.969969+ */970970+#if defined(__LITTLE_ENDIAN)971971+ iowrite32(972972+#else973973+ iowrite32be(974974+#endif975975+ ring->doorbell_qpn,966976 ring->bf.uar->map + MLX4_SEND_DOORBELL);967977 } else {968978 ring->xmit_more++;
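The mlx4 hunk wants the doorbell value to hit the bus byte-for-byte as it sits in memory, so it selects whichever io accessor performs no swap for the build's endianness. A userspace model of that choice (the `*_sim` helpers are ours; in the kernel, `iowrite32()` emits little-endian and `iowrite32be()` big-endian regardless of host order):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Model of iowrite32(): always little-endian on the bus. */
static void iowrite32_sim(uint32_t v, uint8_t out[4])
{
    out[0] = v & 0xff;
    out[1] = (v >> 8) & 0xff;
    out[2] = (v >> 16) & 0xff;
    out[3] = (v >> 24) & 0xff;
}

/* Model of iowrite32be(): always big-endian on the bus. */
static void iowrite32be_sim(uint32_t v, uint8_t out[4])
{
    out[0] = (v >> 24) & 0xff;
    out[1] = (v >> 16) & 0xff;
    out[2] = (v >> 8) & 0xff;
    out[3] = v & 0xff;
}

static int host_is_little_endian(void)
{
    uint32_t probe = 1;
    uint8_t first;

    memcpy(&first, &probe, 1);
    return first == 1;
}

/* The patch's #if __LITTLE_ENDIAN selection, done at run time here:
 * pick the accessor whose byteswap is a no-op on this host, so the
 * doorbell bytes land exactly as stored in memory. */
static void iowrite32_native_sim(uint32_t v, uint8_t out[4])
{
    if (host_is_little_endian())
        iowrite32_sim(v, out);
    else
        iowrite32be_sim(v, out);
}
```

Whichever branch the host takes, the written bytes match a plain memory copy of the value.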
···2303230323042304/* Spanning Tree */2305230523062306-static inline void port_cfg_dis_learn(struct ksz_hw *hw, int p, int set)23072307-{23082308- port_cfg(hw, p,23092309- KS8842_PORT_CTRL_2_OFFSET, PORT_LEARN_DISABLE, set);23102310-}23112311-23122306static inline void port_cfg_rx(struct ksz_hw *hw, int p, int set)23132307{23142308 port_cfg(hw, p,
+3-1
drivers/net/ethernet/myricom/myri10ge/myri10ge.c
···40334033 (void)pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64));40344034 mgp->cmd = dma_alloc_coherent(&pdev->dev, sizeof(*mgp->cmd),40354035 &mgp->cmd_bus, GFP_KERNEL);40364036- if (mgp->cmd == NULL)40364036+ if (!mgp->cmd) {40374037+ status = -ENOMEM;40374038 goto abort_with_enabled;40394039+ }4038404040394041 mgp->board_span = pci_resource_len(pdev, 0);40404042 mgp->iomem_base = pci_resource_start(pdev, 0);
+3-5
drivers/net/ethernet/qlogic/qla3xxx.c
···146146{147147 int i = 0;148148149149- while (i < 10) {150150- if (i)151151- ssleep(1);152152-149149+ do {153150 if (ql_sem_lock(qdev,154151 QL_DRVR_SEM_MASK,155152 (QL_RESOURCE_BITS_BASE_CODE | (qdev->mac_index)···155158 "driver lock acquired\n");156159 return 1;157160 }158158- }161161+ ssleep(1);162162+ } while (++i < 10);159163160164 netdev_err(qdev->ndev, "Timed out waiting for driver lock...\n");161165 return 0;
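The qla3xxx rewrite turns a while loop that slept before every attempt but the first into a do/while that attempts immediately and sleeps only after a failure. The shape as a generic sketch (`retry_lock` and the `fake_*` hooks are ours, standing in for `ql_sem_lock()` and `ssleep()`):

```c
#include <assert.h>
#include <stddef.h>

/* Attempt first, back off after a failure, give up after max_tries. */
static int retry_lock(int (*try_fn)(void *), void (*sleep_fn)(void),
                      void *ctx, int max_tries)
{
    int i = 0;

    do {
        if (try_fn(ctx))
            return 1;   /* lock acquired */
        sleep_fn();     /* e.g. ssleep(1) between attempts */
    } while (++i < max_tries);

    return 0;           /* timed out */
}

/* Test hooks: succeed on the succeed_on-th attempt. */
static int attempts, sleeps, succeed_on;

static int fake_try(void *ctx)
{
    (void)ctx;
    return ++attempts == succeed_on;
}

static void fake_sleep(void)
{
    sleeps++;
}
```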
···16711671 * 0 on success and an appropriate (-)ve integer as defined in errno.h16721672 * file on failure.16731673 */16741674-static int stmmac_hw_setup(struct net_device *dev)16741674+static int stmmac_hw_setup(struct net_device *dev, bool init_ptp)16751675{16761676 struct stmmac_priv *priv = netdev_priv(dev);16771677 int ret;···1708170817091709 stmmac_mmc_setup(priv);1710171017111711- ret = stmmac_init_ptp(priv);17121712- if (ret && ret != -EOPNOTSUPP)17131713- pr_warn("%s: failed PTP initialisation\n", __func__);17111711+ if (init_ptp) {17121712+ ret = stmmac_init_ptp(priv);17131713+ if (ret && ret != -EOPNOTSUPP)17141714+ pr_warn("%s: failed PTP initialisation\n", __func__);17151715+ }1714171617151717#ifdef CONFIG_DEBUG_FS17161718 ret = stmmac_init_fs(dev);···17891787 goto init_error;17901788 }1791178917921792- ret = stmmac_hw_setup(dev);17901790+ ret = stmmac_hw_setup(dev, true);17931791 if (ret < 0) {17941792 pr_err("%s: Hw setup failed\n", __func__);17951793 goto init_error;···30383036 netif_device_attach(ndev);3039303730403038 init_dma_desc_rings(ndev, GFP_ATOMIC);30413041- stmmac_hw_setup(ndev);30393039+ stmmac_hw_setup(ndev, false);30423040 stmmac_init_tx_coalesce(priv);3043304130443042 napi_enable(&priv->napi);
···610610611611 /* Clear all mcast from ALE */612612 cpsw_ale_flush_multicast(ale, ALE_ALL_PORTS <<613613- priv->host_port);613613+ priv->host_port, -1);614614615615 /* Flood All Unicast Packets to Host port */616616 cpsw_ale_control_set(ale, 0, ALE_P0_UNI_FLOOD, 1);···634634static void cpsw_ndo_set_rx_mode(struct net_device *ndev)635635{636636 struct cpsw_priv *priv = netdev_priv(ndev);637637+ int vid;638638+639639+ if (priv->data.dual_emac)640640+ vid = priv->slaves[priv->emac_port].port_vlan;641641+ else642642+ vid = priv->data.default_vlan;637643638644 if (ndev->flags & IFF_PROMISC) {639645 /* Enable promiscuous mode */···655649 cpsw_ale_set_allmulti(priv->ale, priv->ndev->flags & IFF_ALLMULTI);656650657651 /* Clear all mcast from ALE */658658- cpsw_ale_flush_multicast(priv->ale, ALE_ALL_PORTS << priv->host_port);652652+ cpsw_ale_flush_multicast(priv->ale, ALE_ALL_PORTS << priv->host_port,653653+ vid);659654660655 if (!netdev_mc_empty(ndev)) {661656 struct netdev_hw_addr *ha;···764757static irqreturn_t cpsw_interrupt(int irq, void *dev_id)765758{766759 struct cpsw_priv *priv = dev_id;760760+ int value = irq - priv->irqs_table[0];761761+762762+ /* NOTICE: Ending IRQ here. The trick with the 'value' variable above763763+ * is to make sure we will always write the correct value to the EOI764764+ * register. 
Namely 0 for RX_THRESH Interrupt, 1 for RX Interrupt, 2765765+ * for TX Interrupt and 3 for MISC Interrupt.766766+ */767767+ cpdma_ctlr_eoi(priv->dma, value);767768768769 cpsw_intr_disable(priv);769770 if (priv->irq_enabled == true) {···801786 int num_tx, num_rx;802787803788 num_tx = cpdma_chan_process(priv->txch, 128);804804- if (num_tx)805805- cpdma_ctlr_eoi(priv->dma, CPDMA_EOI_TX);806789807790 num_rx = cpdma_chan_process(priv->rxch, budget);808791 if (num_rx < budget) {···808795809796 napi_complete(napi);810797 cpsw_intr_enable(priv);811811- cpdma_ctlr_eoi(priv->dma, CPDMA_EOI_RX);812798 prim_cpsw = cpsw_get_slave_priv(priv, 0);813799 if (prim_cpsw->irq_enabled == false) {814800 prim_cpsw->irq_enabled = true;···13221310 napi_enable(&priv->napi);13231311 cpdma_ctlr_start(priv->dma);13241312 cpsw_intr_enable(priv);13251325- cpdma_ctlr_eoi(priv->dma, CPDMA_EOI_RX);13261326- cpdma_ctlr_eoi(priv->dma, CPDMA_EOI_TX);1327131313281314 prim_cpsw = cpsw_get_slave_priv(priv, 0);13291315 if (prim_cpsw->irq_enabled == false) {···15881578 cpdma_chan_start(priv->txch);15891579 cpdma_ctlr_int_ctrl(priv->dma, true);15901580 cpsw_intr_enable(priv);15911591- cpdma_ctlr_eoi(priv->dma, CPDMA_EOI_RX);15921592- cpdma_ctlr_eoi(priv->dma, CPDMA_EOI_TX);15931593-15941581}1595158215961583static int cpsw_ndo_set_mac_address(struct net_device *ndev, void *p)···16271620 cpsw_interrupt(ndev->irq, priv);16281621 cpdma_ctlr_int_ctrl(priv->dma, true);16291622 cpsw_intr_enable(priv);16301630- cpdma_ctlr_eoi(priv->dma, CPDMA_EOI_RX);16311631- cpdma_ctlr_eoi(priv->dma, CPDMA_EOI_TX);16321632-16331623}16341624#endif16351625
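The cpsw interrupt handler above derives the EOI code purely from the firing IRQ's position in the device's IRQ table, relying on the four lines being requested consecutively. A sketch of that arithmetic (the enum names paraphrase the driver comment; the table contents in the test are made up):

```c
#include <assert.h>

/* EOI codes per the driver comment: 0 RX_THRESH, 1 RX, 2 TX, 3 MISC. */
enum cpsw_eoi {
    CPSW_EOI_RX_THRESH = 0,
    CPSW_EOI_RX        = 1,
    CPSW_EOI_TX        = 2,
    CPSW_EOI_MISC      = 3,
};

/* Works only because the four IRQs are consecutive, so the offset from
 * the first table entry is exactly the EOI register value. */
static int cpsw_eoi_value(int irq, const int *irqs_table)
{
    return irq - irqs_table[0];
}
```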
+9-1
drivers/net/ethernet/ti/cpsw_ale.c
···234234 cpsw_ale_set_entry_type(ale_entry, ALE_TYPE_FREE);235235}236236237237-int cpsw_ale_flush_multicast(struct cpsw_ale *ale, int port_mask)237237+int cpsw_ale_flush_multicast(struct cpsw_ale *ale, int port_mask, int vid)238238{239239 u32 ale_entry[ALE_ENTRY_WORDS];240240 int ret, idx;···243243 cpsw_ale_read(ale, idx, ale_entry);244244 ret = cpsw_ale_get_entry_type(ale_entry);245245 if (ret != ALE_TYPE_ADDR && ret != ALE_TYPE_VLAN_ADDR)246246+ continue;247247+248248+ /* If the vid passed is -1, remove all multicast entries from249249+ * the table irrespective of vlan id. If a valid vlan id is250250+ * passed, remove only multicast entries added to that vlan id;251251+ * if the vlan id doesn't match, move on to the next entry.252252+ */253253+ if (vid != -1 && cpsw_ale_get_vlan_id(ale_entry) != vid)246254 continue;247255248256 if (cpsw_ale_get_mcast(ale_entry)) {
+1-1
drivers/net/ethernet/ti/cpsw_ale.h
···92929393int cpsw_ale_set_ageout(struct cpsw_ale *ale, int ageout);9494int cpsw_ale_flush(struct cpsw_ale *ale, int port_mask);9595-int cpsw_ale_flush_multicast(struct cpsw_ale *ale, int port_mask);9595+int cpsw_ale_flush_multicast(struct cpsw_ale *ale, int port_mask, int vid);9696int cpsw_ale_add_ucast(struct cpsw_ale *ale, u8 *addr, int port,9797 int flags, u16 vid);9898int cpsw_ale_del_ucast(struct cpsw_ale *ale, u8 *addr, int port,
···388388 * @dma_err_tasklet: Tasklet structure to process Axi DMA errors389389 * @tx_irq: Axidma TX IRQ number390390 * @rx_irq: Axidma RX IRQ number391391- * @temac_type: axienet type to identify between soft and hard temac392391 * @phy_type: Phy type to identify between MII/GMII/RGMII/SGMII/1000 Base-X393392 * @options: AxiEthernet option word394393 * @last_link: Phy link state in which the PHY was negotiated earlier···430431431432 int tx_irq;432433 int rx_irq;433433- u32 temac_type;434434 u32 phy_type;435435436436 u32 options; /* Current options word */
+2-4
drivers/net/ethernet/xilinx/xilinx_axienet_main.c
···15011501 lp->regs = of_iomap(op->dev.of_node, 0);15021502 if (!lp->regs) {15031503 dev_err(&op->dev, "could not map Axi Ethernet regs.\n");15041504+ ret = -ENOMEM;15041505 goto nodev;15051506 }15061507 /* Setup checksum offload, but default to off if not specified */···15561555 if ((be32_to_cpup(p)) >= 0x4000)15571556 lp->jumbo_support = 1;15581557 }15591559- p = (__be32 *) of_get_property(op->dev.of_node, "xlnx,temac-type",15601560- NULL);15611561- if (p)15621562- lp->temac_type = be32_to_cpup(p);15631558 p = (__be32 *) of_get_property(op->dev.of_node, "xlnx,phy-type", NULL);15641559 if (p)15651560 lp->phy_type = be32_to_cpup(p);···15641567 np = of_parse_phandle(op->dev.of_node, "axistream-connected", 0);15651568 if (!np) {15661569 dev_err(&op->dev, "could not find DMA node\n");15701570+ ret = -ENODEV;15671571 goto err_iounmap;15681572 }15691573 lp->dma_regs = of_iomap(np, 0);
···629629static void team_notify_peers_work(struct work_struct *work)630630{631631 struct team *team;632632+ int val;632633633634 team = container_of(work, struct team, notify_peers.dw.work);634635···637636 schedule_delayed_work(&team->notify_peers.dw, 0);638637 return;639638 }639639+ val = atomic_dec_if_positive(&team->notify_peers.count_pending);640640+ if (val < 0) {641641+ rtnl_unlock();642642+ return;643643+ }640644 call_netdevice_notifiers(NETDEV_NOTIFY_PEERS, team->dev);641645 rtnl_unlock();642642- if (!atomic_dec_and_test(&team->notify_peers.count_pending))646646+ if (val)643647 schedule_delayed_work(&team->notify_peers.dw,644648 msecs_to_jiffies(team->notify_peers.interval));645649}···675669static void team_mcast_rejoin_work(struct work_struct *work)676670{677671 struct team *team;672672+ int val;678673679674 team = container_of(work, struct team, mcast_rejoin.dw.work);680675···683676 schedule_delayed_work(&team->mcast_rejoin.dw, 0);684677 return;685678 }679679+ val = atomic_dec_if_positive(&team->mcast_rejoin.count_pending);680680+ if (val < 0) {681681+ rtnl_unlock();682682+ return;683683+ }686684 call_netdevice_notifiers(NETDEV_RESEND_IGMP, team->dev);687685 rtnl_unlock();688688- if (!atomic_dec_and_test(&team->mcast_rejoin.count_pending))686686+ if (val)689687 schedule_delayed_work(&team->mcast_rejoin.dw,690688 msecs_to_jiffies(team->mcast_rejoin.interval));691689}
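The team driver now decrements `count_pending` with `atomic_dec_if_positive()`, so a concurrent cancel that resets the counter to 0 shows up as a negative return instead of an underflow to -1 via `atomic_dec_and_test()`. A CAS-loop sketch of those semantics in C11 (our stand-in, not the kernel implementation):

```c
#include <assert.h>
#include <stdatomic.h>

/* Decrement only if the result stays >= 0; return the (possibly not
 * stored) decremented value, so a negative return means the counter
 * was already zero and the caller should bail out. */
static int dec_if_positive(atomic_int *v)
{
    int old = atomic_load(v);

    while (old > 0) {
        /* On CAS failure `old` is reloaded and re-checked. */
        if (atomic_compare_exchange_weak(v, &old, old - 1))
            return old - 1;
    }
    return old - 1;   /* was <= 0: nothing was decremented */
}
```

This mirrors the work-function logic: a positive return reschedules the delayed work, zero finishes the sequence, negative means the run was cancelled.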
+1-1
drivers/net/usb/kaweth.c
···12761276 awd.done = 0;1277127712781278 urb->context = &awd;12791279- status = usb_submit_urb(urb, GFP_NOIO);12791279+ status = usb_submit_urb(urb, GFP_ATOMIC);12801280 if (status) {12811281 // something went wrong12821282 usb_free_urb(urb);
+7-3
drivers/net/usb/qmi_wwan.c
···5656/* default ethernet address used by the modem */5757static const u8 default_modem_addr[ETH_ALEN] = {0x02, 0x50, 0xf3};58585959+static const u8 buggy_fw_addr[ETH_ALEN] = {0x00, 0xa0, 0xc6, 0x00, 0x00, 0x00};6060+5961/* Make up an ethernet header if the packet doesn't have one.6062 *6163 * A firmware bug common among several devices causes them to send raw···334332 usb_driver_release_interface(driver, info->data);335333 }336334337337- /* Never use the same address on both ends of the link, even338338- * if the buggy firmware told us to.335335+ /* Never use the same address on both ends of the link, even if the336336+ * buggy firmware told us to. Or, if the device is assigned the337337+ * well-known buggy firmware MAC address, replace it with a random one.339338 */340340- if (ether_addr_equal(dev->net->dev_addr, default_modem_addr))339339+ if (ether_addr_equal(dev->net->dev_addr, default_modem_addr) ||340340+ ether_addr_equal(dev->net->dev_addr, buggy_fw_addr))341341+ eth_hw_addr_random(dev->net);342342343343 /* make MAC addr easily distinguishable from an IP header */
···65656666config IPW22006767 tristate "Intel PRO/Wireless 2200BG and 2915ABG Network Connection"6868- depends on PCI && CFG80211 && CFG80211_WEXT6868+ depends on PCI && CFG802116969+ select CFG80211_WEXT6970 select WIRELESS_EXT7071 select WEXT_SPY7172 select WEXT_PRIV
+3-3
drivers/net/wireless/iwlwifi/iwl-7000.c
···6969#include "iwl-agn-hw.h"70707171/* Highest firmware API version supported */7272-#define IWL7260_UCODE_API_MAX 107373-#define IWL3160_UCODE_API_MAX 107272+#define IWL7260_UCODE_API_MAX 127373+#define IWL3160_UCODE_API_MAX 1274747575/* Oldest version we won't warn about */7676#define IWL7260_UCODE_API_OK 10···105105#define IWL7265_MODULE_FIRMWARE(api) IWL7265_FW_PRE __stringify(api) ".ucode"106106107107#define IWL7265D_FW_PRE "iwlwifi-7265D-"108108-#define IWL7265D_MODULE_FIRMWARE(api) IWL7265_FW_PRE __stringify(api) ".ucode"108108+#define IWL7265D_MODULE_FIRMWARE(api) IWL7265D_FW_PRE __stringify(api) ".ucode"109109110110#define NVM_HW_SECTION_NUM_FAMILY_7000 0111111
+1-1
drivers/net/wireless/iwlwifi/iwl-8000.c
···6969#include "iwl-agn-hw.h"70707171/* Highest firmware API version supported */7272-#define IWL8000_UCODE_API_MAX 107272+#define IWL8000_UCODE_API_MAX 1273737474/* Oldest version we won't warn about */7575#define IWL8000_UCODE_API_OK 10
···243243 * @IWL_UCODE_TLV_API_SF_NO_DUMMY_NOTIF: ucode supports disabling dummy notif.244244 * @IWL_UCODE_TLV_API_FRAGMENTED_SCAN: This ucode supports active dwell time245245 * longer than the passive one, which is essential for fragmented scan.246246+ * @IWL_UCODE_TLV_API_BASIC_DWELL: use only basic dwell time in scan command,247247+ * regardless of the band or the number of the probes. FW will calculate248248+ * the actual dwell time.246249 */247250enum iwl_ucode_tlv_api {248251 IWL_UCODE_TLV_API_WOWLAN_CONFIG_TID = BIT(0),···256253 IWL_UCODE_TLV_API_LMAC_SCAN = BIT(6),257254 IWL_UCODE_TLV_API_SF_NO_DUMMY_NOTIF = BIT(7),258255 IWL_UCODE_TLV_API_FRAGMENTED_SCAN = BIT(8),256256+ IWL_UCODE_TLV_API_BASIC_DWELL = BIT(13),259257};260258261259/**
+2
drivers/net/wireless/iwlwifi/mvm/fw-api-scan.h
···672672 * @IWL_MVM_LMAC_SCAN_FLAG_FRAGMENTED: all passive scans will be fragmented673673 * @IWL_MVM_LMAC_SCAN_FLAGS_RRM_ENABLED: insert WFA vendor-specific TPC report674674 * and DS parameter set IEs into probe requests.675675+ * @IWL_MVM_LMAC_SCAN_FLAG_MATCH: Send match found notification on matches675676 */676677enum iwl_mvm_lmac_scan_flags {677678 IWL_MVM_LMAC_SCAN_FLAG_PASS_ALL = BIT(0),···682681 IWL_MVM_LMAC_SCAN_FLAG_MULTIPLE_SSIDS = BIT(4),683682 IWL_MVM_LMAC_SCAN_FLAG_FRAGMENTED = BIT(5),684683 IWL_MVM_LMAC_SCAN_FLAGS_RRM_ENABLED = BIT(6),684684+ IWL_MVM_LMAC_SCAN_FLAG_MATCH = BIT(9),685685};686686687687enum iwl_scan_priority {
+13-2
drivers/net/wireless/iwlwifi/mvm/mac80211.c
···10041004{10051005 lockdep_assert_held(&mvm->mutex);1006100610071007- /* disallow low power states when the FW is down */10081008- iwl_mvm_ref(mvm, IWL_MVM_REF_UCODE_DOWN);10071007+ /*10081008+ * Disallow low power states when the FW is down by taking10091009+ * the UCODE_DOWN ref. in case of ongoing hw restart the10101010+ * ref is already taken, so don't take it again.10111011+ */10121012+ if (!test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status))10131013+ iwl_mvm_ref(mvm, IWL_MVM_REF_UCODE_DOWN);1009101410101015 /* async_handlers_wk is now blocked */10111016···1027102210281023 /* the fw is stopped, the aux sta is dead: clean up driver state */10291024 iwl_mvm_del_aux_sta(mvm);10251025+10261026+ /*10271027+ * Clear IN_HW_RESTART flag when stopping the hw (as restart_complete()10281028+ * won't be called in this case).10291029+ */10301030+ clear_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status);1030103110311032 mvm->ucode_loaded = false;10321033}
+14-5
drivers/net/wireless/iwlwifi/mvm/scan.c
···171171 * already included in the probe template, so we need to set only172172 * req->n_ssids - 1 bits in addition to the first bit.173173 */174174-static u16 iwl_mvm_get_active_dwell(enum ieee80211_band band, int n_ssids)174174+static u16 iwl_mvm_get_active_dwell(struct iwl_mvm *mvm,175175+ enum ieee80211_band band, int n_ssids)175176{177177+ if (mvm->fw->ucode_capa.api[0] & IWL_UCODE_TLV_API_BASIC_DWELL)178178+ return 10;176179 if (band == IEEE80211_BAND_2GHZ)177180 return 20 + 3 * (n_ssids + 1);178181 return 10 + 2 * (n_ssids + 1);179182}180183181181-static u16 iwl_mvm_get_passive_dwell(enum ieee80211_band band)184184+static u16 iwl_mvm_get_passive_dwell(struct iwl_mvm *mvm,185185+ enum ieee80211_band band)182186{187187+ if (mvm->fw->ucode_capa.api[0] & IWL_UCODE_TLV_API_BASIC_DWELL)188188+ return 110;183189 return band == IEEE80211_BAND_2GHZ ? 100 + 20 : 100 + 10;184190}185191···337331 */338332 if (vif->type == NL80211_IFTYPE_P2P_DEVICE) {339333 u32 passive_dwell =340340- iwl_mvm_get_passive_dwell(IEEE80211_BAND_2GHZ);334334+ iwl_mvm_get_passive_dwell(mvm,335335+ IEEE80211_BAND_2GHZ);341336 params->max_out_time = passive_dwell;342337 } else {343338 params->passive_fragmented = true;···355348 params->dwell[band].passive = frag_passive_dwell;356349 else357350 params->dwell[band].passive =358358- iwl_mvm_get_passive_dwell(band);359359- params->dwell[band].active = iwl_mvm_get_active_dwell(band,351351+ iwl_mvm_get_passive_dwell(mvm, band);352352+ params->dwell[band].active = iwl_mvm_get_active_dwell(mvm, band,360353 n_ssids);361354 }362355}···1455144814561449 if (iwl_mvm_scan_pass_all(mvm, req))14571450 flags |= IWL_MVM_LMAC_SCAN_FLAG_PASS_ALL;14511451+ else14521452+ flags |= IWL_MVM_LMAC_SCAN_FLAG_MATCH;1458145314591454 if (req->n_ssids == 1 && req->ssids[0].ssid_len != 0)14601455 flags |= IWL_MVM_LMAC_SCAN_FLAG_PRE_CONNECTION;
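The scan change gates the dwell-time formulas on the new BASIC_DWELL capability: firmware that advertises it computes the real dwell itself, so the driver sends fixed basic values; otherwise the old band- and SSID-count-dependent math applies. The selection, modeled in plain C (constants are taken from the patch; the function and enum names are ours):

```c
#include <assert.h>

enum band { BAND_2GHZ, BAND_5GHZ };

/* With IWL_UCODE_TLV_API_BASIC_DWELL the firmware derives the actual
 * dwell itself, so the driver sends a fixed basic value. */
static int active_dwell(int fw_basic_dwell, enum band band, int n_ssids)
{
    if (fw_basic_dwell)
        return 10;
    if (band == BAND_2GHZ)
        return 20 + 3 * (n_ssids + 1);
    return 10 + 2 * (n_ssids + 1);
}

static int passive_dwell(int fw_basic_dwell, enum band band)
{
    if (fw_basic_dwell)
        return 110;
    return band == BAND_2GHZ ? 100 + 20 : 100 + 10;
}
```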
+6-2
drivers/net/wireless/iwlwifi/mvm/tx.c
···108108 tx_flags &= ~TX_CMD_FLG_SEQ_CTL;109109 }110110111111- /* tid_tspec will default to 0 = BE when QOS isn't enabled */112112- ac = tid_to_mac80211_ac[tx_cmd->tid_tspec];111111+ /* Default to 0 (BE) when tid_spec is set to IWL_TID_NON_QOS */112112+ if (tx_cmd->tid_tspec < IWL_MAX_TID_COUNT)113113+ ac = tid_to_mac80211_ac[tx_cmd->tid_tspec];114114+ else115115+ ac = tid_to_mac80211_ac[0];116116+113117 tx_flags |= iwl_mvm_bt_coex_tx_prio(mvm, hdr, info, ac) <<114118 TX_CMD_FLG_BT_PRIO_POS;115119
+1-1
drivers/net/wireless/iwlwifi/mvm/utils.c
···665665 if (num_of_ant(mvm->fw->valid_rx_ant) == 1)666666 return false;667667668668- if (!mvm->cfg->rx_with_siso_diversity)668668+ if (mvm->cfg->rx_with_siso_diversity)669669 return false;670670671671 ieee80211_iterate_active_interfaces_atomic(
···614614{615615 u8 *v_addr;616616 dma_addr_t p_addr;617617- u32 offset, chunk_sz = section->len;617617+ u32 offset, chunk_sz = min_t(u32, FH_MEM_TB_MAX_LENGTH, section->len);618618 int ret = 0;619619620620 IWL_DEBUG_FW(trans, "[%d] uCode section being loaded...\n",···10121012 /* Stop the device, and put it in low power state */10131013 iwl_pcie_apm_stop(trans);1014101410151015- /* Upon stop, the APM issues an interrupt if HW RF kill is set.10161016- * Clean again the interrupt here10151015+ /* stop and reset the on-board processor */10161016+ iwl_write32(trans, CSR_RESET, CSR_RESET_REG_FLAG_SW_RESET);10171017+ udelay(20);10181018+10191019+ /*10201020+ * Upon stop, the APM issues an interrupt if HW RF kill is set.10211021+ * This is a bug in certain versions of the hardware.10221022+ * Certain devices also keep sending HW RF kill interrupts all10231023+ * the time, unless the interrupt is ACKed even if the interrupt10241024+ * should be masked. Re-ACK all the interrupts here.10171025 */10181026 spin_lock(&trans_pcie->irq_lock);10191027 iwl_disable_interrupts(trans);10201028 spin_unlock(&trans_pcie->irq_lock);1021102910221022- /* stop and reset the on-board processor */10231023- iwl_write32(trans, CSR_RESET, CSR_RESET_REG_FLAG_SW_RESET);10241024- udelay(20);1025103010261031 /* clear all status bits */10271032 clear_bit(STATUS_SYNC_HCMD_ACTIVE, &trans->status);
+25-9
drivers/net/wireless/rtlwifi/pci.c
···666666}667667668668static int _rtl_pci_init_one_rxdesc(struct ieee80211_hw *hw,669669- u8 *entry, int rxring_idx, int desc_idx)669669+ struct sk_buff *new_skb, u8 *entry,670670+ int rxring_idx, int desc_idx)670671{671672 struct rtl_priv *rtlpriv = rtl_priv(hw);672673 struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw));···675674 u8 tmp_one = 1;676675 struct sk_buff *skb;677676677677+ if (likely(new_skb)) {678678+ skb = new_skb;679679+ goto remap;680680+ }678681 skb = dev_alloc_skb(rtlpci->rxbuffersize);679682 if (!skb)680683 return 0;681681- rtlpci->rx_ring[rxring_idx].rx_buf[desc_idx] = skb;682684685685+remap:683686 /* just set skb->cb to mapping addr for pci_unmap_single use */684687 *((dma_addr_t *)skb->cb) =685688 pci_map_single(rtlpci->pdev, skb_tail_pointer(skb),···691686 bufferaddress = *((dma_addr_t *)skb->cb);692687 if (pci_dma_mapping_error(rtlpci->pdev, bufferaddress))693688 return 0;689689+ rtlpci->rx_ring[rxring_idx].rx_buf[desc_idx] = skb;694690 if (rtlpriv->use_new_trx_flow) {695691 rtlpriv->cfg->ops->set_desc(hw, (u8 *)entry, false,696692 HW_DESC_RX_PREPARE,···787781 /*rx pkt */788782 struct sk_buff *skb = rtlpci->rx_ring[rxring_idx].rx_buf[789783 rtlpci->rx_ring[rxring_idx].idx];784784+ struct sk_buff *new_skb;790785791786 if (rtlpriv->use_new_trx_flow) {792787 rx_remained_cnt =···814807 pci_unmap_single(rtlpci->pdev, *((dma_addr_t *)skb->cb),815808 rtlpci->rxbuffersize, PCI_DMA_FROMDEVICE);816809810810+ /* get a new skb - if fail, old one will be reused */811811+ new_skb = dev_alloc_skb(rtlpci->rxbuffersize);812812+ if (unlikely(!new_skb)) {813813+ pr_err("Allocation of new skb failed in %s\n",814814+ __func__);815815+ goto no_new;816816+ }817817 if (rtlpriv->use_new_trx_flow) {818818 buffer_desc =819819 &rtlpci->rx_ring[rxring_idx].buffer_desc···925911 schedule_work(&rtlpriv->works.lps_change_work);926912 }927913end:914914+ skb = new_skb;915915+no_new:928916 if (rtlpriv->use_new_trx_flow) {929929- _rtl_pci_init_one_rxdesc(hw, (u8 
*)buffer_desc,917917+ _rtl_pci_init_one_rxdesc(hw, skb, (u8 *)buffer_desc,930918 rxring_idx,931931- rtlpci->rx_ring[rxring_idx].idx);932932- } else {933933- _rtl_pci_init_one_rxdesc(hw, (u8 *)pdesc, rxring_idx,934919 rtlpci->rx_ring[rxring_idx].idx);935935-920920+ } else {921921+ _rtl_pci_init_one_rxdesc(hw, skb, (u8 *)pdesc,922922+ rxring_idx,923923+ rtlpci->rx_ring[rxring_idx].idx);936924 if (rtlpci->rx_ring[rxring_idx].idx ==937925 rtlpci->rxringcount - 1)938926 rtlpriv->cfg->ops->set_desc(hw, (u8 *)pdesc,···13231307 rtlpci->rx_ring[rxring_idx].idx = 0;13241308 for (i = 0; i < rtlpci->rxringcount; i++) {13251309 entry = &rtlpci->rx_ring[rxring_idx].buffer_desc[i];13261326- if (!_rtl_pci_init_one_rxdesc(hw, (u8 *)entry,13101310+ if (!_rtl_pci_init_one_rxdesc(hw, NULL, (u8 *)entry,13271311 rxring_idx, i))13281312 return -ENOMEM;13291313 }···1348133213491333 for (i = 0; i < rtlpci->rxringcount; i++) {13501334 entry = &rtlpci->rx_ring[rxring_idx].desc[i];13511351- if (!_rtl_pci_init_one_rxdesc(hw, (u8 *)entry,13351335+ if (!_rtl_pci_init_one_rxdesc(hw, NULL, (u8 *)entry,13521336 rxring_idx, i))13531337 return -ENOMEM;13541338 }
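The rtlwifi rework allocates the replacement rx buffer before consuming the old one; if the allocation fails the old skb stays in the ring (this packet is dropped) so a slot is never left empty. The policy as a tiny sketch (`struct slot` and the alloc hooks are ours, standing in for the rx ring and `dev_alloc_skb()`):

```c
#include <assert.h>
#include <stddef.h>

struct slot {
    void *buf;   /* stands in for rx_ring[...].rx_buf[idx] */
};

/* Test hooks simulating the allocator. */
static void *fake_new;
static void *alloc_ok(void)   { return fake_new; }
static void *alloc_fail(void) { return NULL; }

/* Allocate the replacement first; only if that succeeds is the old
 * buffer handed up the stack. On failure the caller drops the packet
 * and the slot keeps its old, still-mapped buffer. */
static void *refill_slot(struct slot *s, void *(*alloc_buf)(void))
{
    void *new_buf = alloc_buf();
    void *done;

    if (!new_buf)
        return NULL;      /* reuse old buffer, drop this packet */

    done = s->buf;        /* old buffer goes to the stack */
    s->buf = new_buf;     /* ring slot never left empty */
    return done;
}
```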
···2929/**3030 * omap_control_pcie_pcs - set the PCS delay count3131 * @dev: the control module device3232- * @id: index of the pcie PHY (should be 1 or 2)3332 * @delay: 8 bit delay value3433 */3535-void omap_control_pcie_pcs(struct device *dev, u8 id, u8 delay)3434+void omap_control_pcie_pcs(struct device *dev, u8 delay)3635{3736 u32 val;3837 struct omap_control_phy *control_phy;···54555556 val = readl(control_phy->pcie_pcs);5657 val &= ~(OMAP_CTRL_PCIE_PCS_MASK <<5757- (id * OMAP_CTRL_PCIE_PCS_DELAY_COUNT_SHIFT));5858- val |= delay << (id * OMAP_CTRL_PCIE_PCS_DELAY_COUNT_SHIFT);5858+ OMAP_CTRL_PCIE_PCS_DELAY_COUNT_SHIFT);5959+ val |= (delay << OMAP_CTRL_PCIE_PCS_DELAY_COUNT_SHIFT);5960 writel(val, control_phy->pcie_pcs);6061}6162EXPORT_SYMBOL_GPL(omap_control_pcie_pcs);
···18921892 goto fnic_abort_cmd_end;18931893 }1894189418951895+ /* IO out of order */18961896+18971897+ if (!(CMD_FLAGS(sc) & (FNIC_IO_ABORTED | FNIC_IO_DONE))) {18981898+ spin_unlock_irqrestore(io_lock, flags);18991899+ FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,19001900+ "Issuing Host reset due to out of order IO\n");19011901+19021902+ if (fnic_host_reset(sc) == FAILED) {19031903+ FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,19041904+ "fnic_host_reset failed.\n");19051905+ }19061906+ ret = FAILED;19071907+ goto fnic_abort_cmd_end;19081908+ }19091909+18951910 CMD_STATE(sc) = FNIC_IOREQ_ABTS_COMPLETE;1896191118971912 /*
+3-1
drivers/scsi/qla2xxx/qla_os.c
···734734 * Return target busy if we've received a non-zero retry_delay_timer735735 * in a FCP_RSP.736736 */737737- if (time_after(jiffies, fcport->retry_delay_timestamp))737737+ if (fcport->retry_delay_timestamp == 0) {738738+ /* retry delay not set */739739+ } else if (time_after(jiffies, fcport->retry_delay_timestamp))738740 fcport->retry_delay_timestamp = 0;739741 else740742 goto qc24_target_busy;
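The qla2xxx fix treats `retry_delay_timestamp == 0` as "no retry delay armed" and checks it explicitly before the jiffies comparison, since `time_after()` against a zero sentinel is false for any jiffies value whose signed representation is negative, which would wrongly report the target busy. A userspace model (`jiffies_t` and `time_after_j` mimic the kernel helpers; they are not the kernel's):

```c
#include <assert.h>
#include <stdbool.h>

typedef unsigned long jiffies_t;

/* time_after(a, b): true if a is after b, wrap-safe via signed diff. */
static bool time_after_j(jiffies_t a, jiffies_t b)
{
    return (long)(b - a) < 0;
}

/* 0 means "no retry delay armed"; it must be tested explicitly rather
 * than fed to time_after(), per the comment above. */
static bool target_busy(jiffies_t now, jiffies_t retry_delay_timestamp)
{
    if (retry_delay_timestamp == 0)
        return false;                         /* delay never set */
    if (time_after_j(now, retry_delay_timestamp))
        return false;                         /* delay expired */
    return true;                              /* still inside delay */
}
```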
···591591static int scsi_alloc_sgtable(struct scsi_data_buffer *sdb, int nents, bool mq)592592{593593 struct scatterlist *first_chunk = NULL;594594- gfp_t gfp_mask = mq ? GFP_NOIO : GFP_ATOMIC;595594 int ret;596595597596 BUG_ON(!nents);···605606 }606607607608 ret = __sg_alloc_table(&sdb->table, nents, SCSI_MAX_SG_SEGMENTS,608608- first_chunk, gfp_mask, scsi_sg_alloc);609609+ first_chunk, GFP_ATOMIC, scsi_sg_alloc);609610 if (unlikely(ret))610611 scsi_free_sgtable(sdb, mq);611612 return ret;
+3-2
drivers/scsi/sd.c
···26232623 sd_config_discard(sdkp, SD_LBP_WS16);2624262426252625 } else { /* LBP VPD page tells us what to use */26262626-26272627- if (sdkp->lbpws)26262626+ if (sdkp->lbpu && sdkp->max_unmap_blocks && !sdkp->lbprz)26272627+ sd_config_discard(sdkp, SD_LBP_UNMAP);26282628+ else if (sdkp->lbpws)26282629 sd_config_discard(sdkp, SD_LBP_WS16);26292630 else if (sdkp->lbpws10)26302631 sd_config_discard(sdkp, SD_LBP_WS10);
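The sd.c hunk changes the order in which a discard method is chosen from the LBP VPD page: UNMAP is now preferred when the device reports LBPU with a usable maximum-unmap count and lbprz is clear, before falling back through the WRITE SAME variants. A decision-table sketch (enum and function names are ours; the final DISABLE fallback is an assumption, since that branch is outside the hunk):

```c
#include <assert.h>

enum lbp_mode { LBP_UNMAP, LBP_WS16, LBP_WS10, LBP_DISABLE };

/* Mirrors the branch order in the hunk; lbpu/lbpws/lbpws10/lbprz are
 * the LBP VPD capability bits, max_unmap_blocks the reported limit. */
static enum lbp_mode pick_discard_mode(int lbpu, unsigned int max_unmap_blocks,
                                       int lbprz, int lbpws, int lbpws10)
{
    if (lbpu && max_unmap_blocks && !lbprz)
        return LBP_UNMAP;
    if (lbpws)
        return LBP_WS16;
    if (lbpws10)
        return LBP_WS10;
    return LBP_DISABLE;   /* assumed fallback, outside the hunk */
}
```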
···11031103}11041104EXPORT_SYMBOL(se_dev_set_queue_depth);1105110511061106-int se_dev_set_fabric_max_sectors(struct se_device *dev, u32 fabric_max_sectors)11071107-{11081108- int block_size = dev->dev_attrib.block_size;11091109-11101110- if (dev->export_count) {11111111- pr_err("dev[%p]: Unable to change SE Device"11121112- " fabric_max_sectors while export_count is %d\n",11131113- dev, dev->export_count);11141114- return -EINVAL;11151115- }11161116- if (!fabric_max_sectors) {11171117- pr_err("dev[%p]: Illegal ZERO value for"11181118- " fabric_max_sectors\n", dev);11191119- return -EINVAL;11201120- }11211121- if (fabric_max_sectors < DA_STATUS_MAX_SECTORS_MIN) {11221122- pr_err("dev[%p]: Passed fabric_max_sectors: %u less than"11231123- " DA_STATUS_MAX_SECTORS_MIN: %u\n", dev, fabric_max_sectors,11241124- DA_STATUS_MAX_SECTORS_MIN);11251125- return -EINVAL;11261126- }11271127- if (fabric_max_sectors > DA_STATUS_MAX_SECTORS_MAX) {11281128- pr_err("dev[%p]: Passed fabric_max_sectors: %u"11291129- " greater than DA_STATUS_MAX_SECTORS_MAX:"11301130- " %u\n", dev, fabric_max_sectors,11311131- DA_STATUS_MAX_SECTORS_MAX);11321132- return -EINVAL;11331133- }11341134- /*11351135- * Align max_sectors down to PAGE_SIZE to follow transport_allocate_data_tasks()11361136- */11371137- if (!block_size) {11381138- block_size = 512;11391139- pr_warn("Defaulting to 512 for zero block_size\n");11401140- }11411141- fabric_max_sectors = se_dev_align_max_sectors(fabric_max_sectors,11421142- block_size);11431143-11441144- dev->dev_attrib.fabric_max_sectors = fabric_max_sectors;11451145- pr_debug("dev[%p]: SE Device max_sectors changed to %u\n",11461146- dev, fabric_max_sectors);11471147- return 0;11481148-}11491149-EXPORT_SYMBOL(se_dev_set_fabric_max_sectors);11501150-11511106int se_dev_set_optimal_sectors(struct se_device *dev, u32 optimal_sectors)11521107{11531108 if (dev->export_count) {···11111156 dev, dev->export_count);11121157 return -EINVAL;11131158 }11141114- if (optimal_sectors 
> dev->dev_attrib.fabric_max_sectors) {11591159+ if (optimal_sectors > dev->dev_attrib.hw_max_sectors) {11151160 pr_err("dev[%p]: Passed optimal_sectors %u cannot be"11161116- " greater than fabric_max_sectors: %u\n", dev,11171117- optimal_sectors, dev->dev_attrib.fabric_max_sectors);11611161+ " greater than hw_max_sectors: %u\n", dev,11621162+ optimal_sectors, dev->dev_attrib.hw_max_sectors);11181163 return -EINVAL;11191164 }11201165···15081553 dev->dev_attrib.unmap_granularity_alignment =15091554 DA_UNMAP_GRANULARITY_ALIGNMENT_DEFAULT;15101555 dev->dev_attrib.max_write_same_len = DA_MAX_WRITE_SAME_LEN;15111511- dev->dev_attrib.fabric_max_sectors = DA_FABRIC_MAX_SECTORS;15121512- dev->dev_attrib.optimal_sectors = DA_FABRIC_MAX_SECTORS;1513155615141557 xcopy_lun = &dev->xcopy_lun;15151558 xcopy_lun->lun_se_dev = dev;···15481595 dev->dev_attrib.hw_max_sectors =15491596 se_dev_align_max_sectors(dev->dev_attrib.hw_max_sectors,15501597 dev->dev_attrib.hw_block_size);15981598+ dev->dev_attrib.optimal_sectors = dev->dev_attrib.hw_max_sectors;1551159915521600 dev->dev_index = scsi_get_new_index(SCSI_DEVICE_INDEX);15531601 dev->creation_time = get_jiffies_64();
+10-2
drivers/target/target_core_file.c
···621621 struct fd_prot fd_prot;622622 sense_reason_t rc;623623 int ret = 0;624624-624624+ /*625625+ * We are currently limited by the number of iovecs (2048) per626626+ * single vfs_[writev,readv] call.627627+ */628628+ if (cmd->data_length > FD_MAX_BYTES) {629629+ pr_err("FILEIO: Not able to process I/O of %u bytes due to "630630+ "FD_MAX_BYTES: %u iovec count limitation\n",631631+ cmd->data_length, FD_MAX_BYTES);632632+ return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;633633+ }625634 /*626635 * Call vectorized fileio functions to map struct scatterlist627636 * physical memory addresses to struct iovec virtual memory.···968959 &fileio_dev_attrib_hw_block_size.attr,969960 &fileio_dev_attrib_block_size.attr,970961 &fileio_dev_attrib_hw_max_sectors.attr,971971- &fileio_dev_attrib_fabric_max_sectors.attr,972962 &fileio_dev_attrib_optimal_sectors.attr,973963 &fileio_dev_attrib_hw_queue_depth.attr,974964 &fileio_dev_attrib_queue_depth.attr,
@@ -4 +4 @@
  * Copyright (C) 2012	Samsung Electronics Co., Ltd(http://www.samsung.com)
  * Copyright (C) 2012  Amit Daniel <amit.kachhap@linaro.org>
  *
+ * Copyright (C) 2014  Viresh Kumar <viresh.kumar@linaro.org>
+ *
  * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of the GNU General Public License as published by
@@ -30 +28 @@
 #include <linux/cpu.h>
 #include <linux/cpu_cooling.h>

+/*
+ * Cooling state <-> CPUFreq frequency
+ *
+ * Cooling states are translated to frequencies throughout this driver and this
+ * is the relation between them.
+ *
+ * Highest cooling state corresponds to lowest possible frequency.
+ *
+ * i.e.
+ *	level 0 --> 1st Max Freq
+ *	level 1 --> 2nd Max Freq
+ *	...
+ */
+
 /**
  * struct cpufreq_cooling_device - data for cooling device with cpufreq
  * @id: unique integer value corresponding to each cpufreq_cooling_device
@@ -54 +38 @@
  *	cooling devices.
  * @cpufreq_val: integer value representing the absolute value of the clipped
  *	frequency.
+ * @max_level: maximum cooling level. One less than total number of valid
+ *	cpufreq frequencies.
  * @allowed_cpus: all the cpus involved for this cpufreq_cooling_device.
+ * @node: list_head to link all cpufreq_cooling_device together.
  *
- * This structure is required for keeping information of each
- * cpufreq_cooling_device registered. In order to prevent corruption of this a
- * mutex lock cooling_cpufreq_lock is used.
+ * This structure is required for keeping information of each registered
+ * cpufreq_cooling_device.
  */
 struct cpufreq_cooling_device {
	int id;
	struct thermal_cooling_device *cool_dev;
	unsigned int cpufreq_state;
	unsigned int cpufreq_val;
+	unsigned int max_level;
+	unsigned int *freq_table;	/* In descending order */
	struct cpumask allowed_cpus;
	struct list_head node;
 };
 static DEFINE_IDR(cpufreq_idr);
 static DEFINE_MUTEX(cooling_cpufreq_lock);
-
-static unsigned int cpufreq_dev_count;

 static LIST_HEAD(cpufreq_dev_list);

@@ -116 +98 @@
 /* Below code defines functions to be used for cpufreq as cooling device */

 /**
- * is_cpufreq_valid - function to check frequency transitioning capability.
- * @cpu: cpu for which check is needed.
+ * get_level: Find the level for a particular frequency
+ * @cpufreq_dev: cpufreq_dev for which the property is required
+ * @freq: Frequency
  *
- * This function will check the current state of the system if
- * it is capable of changing the frequency for a given @cpu.
- *
- * Return: 0 if the system is not currently capable of changing
- * the frequency of given cpu. !0 in case the frequency is changeable.
+ * Return: level on success, THERMAL_CSTATE_INVALID on error.
  */
-static int is_cpufreq_valid(int cpu)
+static unsigned long get_level(struct cpufreq_cooling_device *cpufreq_dev,
+			       unsigned int freq)
 {
-	struct cpufreq_policy policy;
+	unsigned long level;

-	return !cpufreq_get_policy(&policy, cpu);
-}
+	for (level = 0; level <= cpufreq_dev->max_level; level++) {
+		if (freq == cpufreq_dev->freq_table[level])
+			return level;

-enum cpufreq_cooling_property {
-	GET_LEVEL,
-	GET_FREQ,
-	GET_MAXL,
-};
-
-/**
- * get_property - fetch a property of interest for a give cpu.
- * @cpu: cpu for which the property is required
- * @input: query parameter
- * @output: query return
- * @property: type of query (frequency, level, max level)
- *
- * This is the common function to
- * 1. get maximum cpu cooling states
- * 2. translate frequency to cooling state
- * 3. translate cooling state to frequency
- * Note that the code may be not in good shape
- * but it is written in this way in order to:
- * a) reduce duplicate code as most of the code can be shared.
- * b) make sure the logic is consistent when translating between
- * cooling states and frequencies.
- *
- * Return: 0 on success, -EINVAL when invalid parameters are passed.
- */
-static int get_property(unsigned int cpu, unsigned long input,
-			unsigned int *output,
-			enum cpufreq_cooling_property property)
-{
-	int i;
-	unsigned long max_level = 0, level = 0;
-	unsigned int freq = CPUFREQ_ENTRY_INVALID;
-	int descend = -1;
-	struct cpufreq_frequency_table *pos, *table =
-					cpufreq_frequency_get_table(cpu);
-
-	if (!output)
-		return -EINVAL;
-
-	if (!table)
-		return -EINVAL;
-
-	cpufreq_for_each_valid_entry(pos, table) {
-		/* ignore duplicate entry */
-		if (freq == pos->frequency)
-			continue;
-
-		/* get the frequency order */
-		if (freq != CPUFREQ_ENTRY_INVALID && descend == -1)
-			descend = freq > pos->frequency;
-
-		freq = pos->frequency;
-		max_level++;
+		if (freq > cpufreq_dev->freq_table[level])
+			break;
 	}

-	/* No valid cpu frequency entry */
-	if (max_level == 0)
-		return -EINVAL;
-
-	/* max_level is an index, not a counter */
-	max_level--;
-
-	/* get max level */
-	if (property == GET_MAXL) {
-		*output = (unsigned int)max_level;
-		return 0;
-	}
-
-	if (property == GET_FREQ)
-		level = descend ? input : (max_level - input);
-
-	i = 0;
-	cpufreq_for_each_valid_entry(pos, table) {
-		/* ignore duplicate entry */
-		if (freq == pos->frequency)
-			continue;
-
-		/* now we have a valid frequency entry */
-		freq = pos->frequency;
-
-		if (property == GET_LEVEL && (unsigned int)input == freq) {
-			/* get level by frequency */
-			*output = descend ? i : (max_level - i);
-			return 0;
-		}
-		if (property == GET_FREQ && level == i) {
-			/* get frequency by level */
-			*output = freq;
-			return 0;
-		}
-		i++;
-	}
-
-	return -EINVAL;
+	return THERMAL_CSTATE_INVALID;
 }

 /**
- * cpufreq_cooling_get_level - for a give cpu, return the cooling level.
+ * cpufreq_cooling_get_level - for a given cpu, return the cooling level.
  * @cpu: cpu for which the level is required
  * @freq: the frequency of interest
  *
@@ -151 +223 @@
  */
 unsigned long cpufreq_cooling_get_level(unsigned int cpu, unsigned int freq)
 {
-	unsigned int val;
+	struct cpufreq_cooling_device *cpufreq_dev;

-	if (get_property(cpu, (unsigned long)freq, &val, GET_LEVEL))
-		return THERMAL_CSTATE_INVALID;
+	mutex_lock(&cooling_cpufreq_lock);
+	list_for_each_entry(cpufreq_dev, &cpufreq_dev_list, node) {
+		if (cpumask_test_cpu(cpu, &cpufreq_dev->allowed_cpus)) {
+			mutex_unlock(&cooling_cpufreq_lock);
+			return get_level(cpufreq_dev, freq);
+		}
+	}
+	mutex_unlock(&cooling_cpufreq_lock);

-	return (unsigned long)val;
+	pr_err("%s: cpu:%d not part of any cooling device\n", __func__, cpu);
+	return THERMAL_CSTATE_INVALID;
 }
 EXPORT_SYMBOL_GPL(cpufreq_cooling_get_level);
-
-/**
- * get_cpu_frequency - get the absolute value of frequency from level.
- * @cpu: cpu for which frequency is fetched.
- * @level: cooling level
- *
- * This function matches cooling level with frequency. Based on a cooling level
- * of frequency, equals cooling state of cpu cooling device, it will return
- * the corresponding frequency.
- * e.g level=0 --> 1st MAX FREQ, level=1 ---> 2nd MAX FREQ, .... etc
- *
- * Return: 0 on error, the corresponding frequency otherwise.
- */
-static unsigned int get_cpu_frequency(unsigned int cpu, unsigned long level)
-{
-	int ret = 0;
-	unsigned int freq;
-
-	ret = get_property(cpu, level, &freq, GET_FREQ);
-	if (ret)
-		return 0;
-
-	return freq;
-}
-
-/**
- * cpufreq_apply_cooling - function to apply frequency clipping.
- * @cpufreq_device: cpufreq_cooling_device pointer containing frequency
- *	clipping data.
- * @cooling_state: value of the cooling state.
- *
- * Function used to make sure the cpufreq layer is aware of current thermal
- * limits. The limits are applied by updating the cpufreq policy.
- *
- * Return: 0 on success, an error code otherwise (-EINVAL in case wrong
- * cooling state).
- */
-static int cpufreq_apply_cooling(struct cpufreq_cooling_device *cpufreq_device,
-				 unsigned long cooling_state)
-{
-	unsigned int cpuid, clip_freq;
-	struct cpumask *mask = &cpufreq_device->allowed_cpus;
-	unsigned int cpu = cpumask_any(mask);
-
-
-	/* Check if the old cooling action is same as new cooling action */
-	if (cpufreq_device->cpufreq_state == cooling_state)
-		return 0;
-
-	clip_freq = get_cpu_frequency(cpu, cooling_state);
-	if (!clip_freq)
-		return -EINVAL;
-
-	cpufreq_device->cpufreq_state = cooling_state;
-	cpufreq_device->cpufreq_val = clip_freq;
-
-	for_each_cpu(cpuid, mask) {
-		if (is_cpufreq_valid(cpuid))
-			cpufreq_update_policy(cpuid);
-	}
-
-	return 0;
-}

 /**
  * cpufreq_thermal_notifier - notifier callback for cpufreq policy change.
@@ -195 +323 @@
			&cpufreq_dev->allowed_cpus))
			continue;

-		if (!cpufreq_dev->cpufreq_val)
-			cpufreq_dev->cpufreq_val = get_cpu_frequency(
-					cpumask_any(&cpufreq_dev->allowed_cpus),
-					cpufreq_dev->cpufreq_state);
-
		max_freq = cpufreq_dev->cpufreq_val;

		if (policy->max != max_freq)
@@ -221 +354 @@
				 unsigned long *state)
 {
	struct cpufreq_cooling_device *cpufreq_device = cdev->devdata;
-	struct cpumask *mask = &cpufreq_device->allowed_cpus;
-	unsigned int cpu;
-	unsigned int count = 0;
-	int ret;

-	cpu = cpumask_any(mask);
-
-	ret = get_property(cpu, 0, &count, GET_MAXL);
-
-	if (count > 0)
-		*state = count;
-
-	return ret;
+	*state = cpufreq_device->max_level;
+	return 0;
 }

 /**
@@ -260 +403 @@
				 unsigned long state)
 {
	struct cpufreq_cooling_device *cpufreq_device = cdev->devdata;
+	unsigned int cpu = cpumask_any(&cpufreq_device->allowed_cpus);
+	unsigned int clip_freq;

-	return cpufreq_apply_cooling(cpufreq_device, state);
+	/* Request state should be less than max_level */
+	if (WARN_ON(state > cpufreq_device->max_level))
+		return -EINVAL;
+
+	/* Check if the old cooling action is same as new cooling action */
+	if (cpufreq_device->cpufreq_state == state)
+		return 0;
+
+	clip_freq = cpufreq_device->freq_table[state];
+	cpufreq_device->cpufreq_state = state;
+	cpufreq_device->cpufreq_val = clip_freq;
+
+	cpufreq_update_policy(cpu);
+
+	return 0;
 }

 /* Bind cpufreq callbacks to thermal cooling device ops */
@@ -292 +419 @@
	.notifier_call = cpufreq_thermal_notifier,
 };

+static unsigned int find_next_max(struct cpufreq_frequency_table *table,
+				  unsigned int prev_max)
+{
+	struct cpufreq_frequency_table *pos;
+	unsigned int max = 0;
+
+	cpufreq_for_each_valid_entry(pos, table) {
+		if (pos->frequency > max && pos->frequency < prev_max)
+			max = pos->frequency;
+	}
+
+	return max;
+}
+
 /**
  * __cpufreq_cooling_register - helper function to create cpufreq cooling device
  * @np: a valid struct device_node to the cooling device device tree node
  * @clip_cpus: cpumask of cpus where the frequency constraints will happen.
+ *	Normally this should be same as cpufreq policy->related_cpus.
  *
  * This interface function registers the cpufreq cooling device with the name
  * "thermal-cpufreq-%x". This api can support multiple instances of cpufreq
@@ -325 +437 @@
			   const struct cpumask *clip_cpus)
 {
	struct thermal_cooling_device *cool_dev;
-	struct cpufreq_cooling_device *cpufreq_dev = NULL;
-	unsigned int min = 0, max = 0;
+	struct cpufreq_cooling_device *cpufreq_dev;
	char dev_name[THERMAL_NAME_LENGTH];
-	int ret = 0, i;
-	struct cpufreq_policy policy;
+	struct cpufreq_frequency_table *pos, *table;
+	unsigned int freq, i;
+	int ret;

-	/* Verify that all the clip cpus have same freq_min, freq_max limit */
-	for_each_cpu(i, clip_cpus) {
-		/* continue if cpufreq policy not found and not return error */
-		if (!cpufreq_get_policy(&policy, i))
-			continue;
-		if (min == 0 && max == 0) {
-			min = policy.cpuinfo.min_freq;
-			max = policy.cpuinfo.max_freq;
-		} else {
-			if (min != policy.cpuinfo.min_freq ||
-			    max != policy.cpuinfo.max_freq)
-				return ERR_PTR(-EINVAL);
-		}
+	table = cpufreq_frequency_get_table(cpumask_first(clip_cpus));
+	if (!table) {
+		pr_debug("%s: CPUFreq table not found\n", __func__);
+		return ERR_PTR(-EPROBE_DEFER);
	}
-	cpufreq_dev = kzalloc(sizeof(struct cpufreq_cooling_device),
-			      GFP_KERNEL);
+
+	cpufreq_dev = kzalloc(sizeof(*cpufreq_dev), GFP_KERNEL);
	if (!cpufreq_dev)
		return ERR_PTR(-ENOMEM);
+
+	/* Find max levels */
+	cpufreq_for_each_valid_entry(pos, table)
+		cpufreq_dev->max_level++;
+
+	cpufreq_dev->freq_table = kmalloc(sizeof(*cpufreq_dev->freq_table) *
+					  cpufreq_dev->max_level, GFP_KERNEL);
+	if (!cpufreq_dev->freq_table) {
+		cool_dev = ERR_PTR(-ENOMEM);
+		goto free_cdev;
+	}
+
+	/* max_level is an index, not a counter */
+	cpufreq_dev->max_level--;

	cpumask_copy(&cpufreq_dev->allowed_cpus, clip_cpus);

	ret = get_idr(&cpufreq_idr, &cpufreq_dev->id);
	if (ret) {
-		kfree(cpufreq_dev);
-		return ERR_PTR(-EINVAL);
+		cool_dev = ERR_PTR(ret);
+		goto free_table;
	}

	snprintf(dev_name, sizeof(dev_name), "thermal-cpufreq-%d",
@@ -368 +475 @@

	cool_dev = thermal_of_cooling_device_register(np, dev_name, cpufreq_dev,
						      &cpufreq_cooling_ops);
-	if (IS_ERR(cool_dev)) {
-		release_idr(&cpufreq_idr, cpufreq_dev->id);
-		kfree(cpufreq_dev);
-		return cool_dev;
+	if (IS_ERR(cool_dev))
+		goto remove_idr;
+
+	/* Fill freq-table in descending order of frequencies */
+	for (i = 0, freq = -1; i <= cpufreq_dev->max_level; i++) {
+		freq = find_next_max(table, freq);
+		cpufreq_dev->freq_table[i] = freq;
+
+		/* Warn for duplicate entries */
+		if (!freq)
+			pr_warn("%s: table has duplicate entries\n", __func__);
+		else
+			pr_debug("%s: freq:%u KHz\n", __func__, freq);
	}
+
+	cpufreq_dev->cpufreq_val = cpufreq_dev->freq_table[0];
	cpufreq_dev->cool_dev = cool_dev;
-	cpufreq_dev->cpufreq_state = 0;
+
	mutex_lock(&cooling_cpufreq_lock);

	/* Register the notifier for first cpufreq cooling device */
-	if (cpufreq_dev_count == 0)
+	if (list_empty(&cpufreq_dev_list))
		cpufreq_register_notifier(&thermal_cpufreq_notifier_block,
					  CPUFREQ_POLICY_NOTIFIER);
-	cpufreq_dev_count++;
	list_add(&cpufreq_dev->node, &cpufreq_dev_list);

	mutex_unlock(&cooling_cpufreq_lock);
+
+	return cool_dev;
+
+remove_idr:
+	release_idr(&cpufreq_idr, cpufreq_dev->id);
+free_table:
+	kfree(cpufreq_dev->freq_table);
+free_cdev:
+	kfree(cpufreq_dev);

	return cool_dev;
 }
@@ -466 +554 @@
	cpufreq_dev = cdev->devdata;
	mutex_lock(&cooling_cpufreq_lock);
	list_del(&cpufreq_dev->node);
-	cpufreq_dev_count--;

	/* Unregister the notifier for the last cpufreq cooling device */
-	if (cpufreq_dev_count == 0)
+	if (list_empty(&cpufreq_dev_list))
		cpufreq_unregister_notifier(&thermal_cpufreq_notifier_block,
					    CPUFREQ_POLICY_NOTIFIER);
	mutex_unlock(&cooling_cpufreq_lock);

	thermal_cooling_device_unregister(cpufreq_dev->cool_dev);
	release_idr(&cpufreq_idr, cpufreq_dev->id);
+	kfree(cpufreq_dev->freq_table);
	kfree(cpufreq_dev);
 }
 EXPORT_SYMBOL_GPL(cpufreq_cooling_unregister);
drivers/thermal/db8500_cpufreq_cooling.c | +9 -11

@@ -18 +18 @@
  */

 #include <linux/cpu_cooling.h>
-#include <linux/cpufreq.h>
 #include <linux/err.h>
 #include <linux/module.h>
 #include <linux/of.h>
@@ -27 +28 @@
 static int db8500_cpufreq_cooling_probe(struct platform_device *pdev)
 {
	struct thermal_cooling_device *cdev;
-	struct cpumask mask_val;

-	/* make sure cpufreq driver has been initialized */
-	if (!cpufreq_frequency_get_table(0))
-		return -EPROBE_DEFER;
-
-	cpumask_set_cpu(0, &mask_val);
-	cdev = cpufreq_cooling_register(&mask_val);
-
+	cdev = cpufreq_cooling_register(cpu_present_mask);
	if (IS_ERR(cdev)) {
-		dev_err(&pdev->dev, "Failed to register cooling device\n");
-		return PTR_ERR(cdev);
+		int ret = PTR_ERR(cdev);
+
+		if (ret != -EPROBE_DEFER)
+			dev_err(&pdev->dev,
+				"Failed to register cooling device %d\n",
+				ret);
+
+		return ret;
	}

	platform_set_drvdata(pdev, cdev);
@@ -82 +82 @@
	struct acpi_buffer trt_format = { sizeof("RRNNNNNN"), "RRNNNNNN" };

	if (!acpi_has_method(handle, "_TRT"))
-		return 0;
+		return -ENODEV;

	status = acpi_evaluate_object(handle, "_TRT", NULL, &buffer);
	if (ACPI_FAILURE(status))
@@ -119 +119 @@
			continue;

		result = acpi_bus_get_device(trt->source, &adev);
-		if (!result)
-			acpi_create_platform_device(adev);
-		else
+		if (result)
			pr_warn("Failed to get source ACPI device\n");

		result = acpi_bus_get_device(trt->target, &adev);
-		if (!result)
-			acpi_create_platform_device(adev);
-		else
+		if (result)
			pr_warn("Failed to get target ACPI device\n");
	}
@@ -163 +167 @@
		sizeof("RRNNNNNNNNNNN"), "RRNNNNNNNNNNN" };

	if (!acpi_has_method(handle, "_ART"))
-		return 0;
+		return -ENODEV;

	status = acpi_evaluate_object(handle, "_ART", NULL, &buffer);
	if (ACPI_FAILURE(status))
@@ -202 +206 @@

		if (art->source) {
			result = acpi_bus_get_device(art->source, &adev);
-			if (!result)
-				acpi_create_platform_device(adev);
-			else
+			if (result)
				pr_warn("Failed to get source ACPI device\n");
		}
		if (art->target) {
			result = acpi_bus_get_device(art->target, &adev);
-			if (!result)
-				acpi_create_platform_device(adev);
-			else
+			if (result)
				pr_warn("Failed to get source ACPI device\n");
		}
	}
@@ -313 +321 @@
	unsigned long length = 0;
	int count = 0;
	char __user *arg = (void __user *)__arg;
-	struct trt *trts;
-	struct art *arts;
+	struct trt *trts = NULL;
+	struct art *arts = NULL;

	switch (cmd) {
	case ACPI_THERMAL_GET_TRT_COUNT:
@@ -1 +1 @@
 config EXYNOS_THERMAL
	tristate "Exynos thermal management unit driver"
-	depends on ARCH_HAS_BANDGAP && OF
+	depends on OF
	help
	  If you say yes here you get support for the TMU (Thermal Management
	  Unit) driver for SAMSUNG EXYNOS series of SoCs. This driver initialises
drivers/thermal/samsung/exynos_thermal_common.c | +6 -6

@@ -347 +347 @@
 int exynos_register_thermal(struct thermal_sensor_conf *sensor_conf)
 {
	int ret;
-	struct cpumask mask_val;
	struct exynos_thermal_zone *th_zone;

	if (!sensor_conf || !sensor_conf->read_temperature) {
@@ -366 +367 @@
	 * sensor
	 */
	if (sensor_conf->cooling_data.freq_clip_count > 0) {
-		cpumask_set_cpu(0, &mask_val);
		th_zone->cool_dev[th_zone->cool_dev_size] =
-					cpufreq_cooling_register(&mask_val);
+				cpufreq_cooling_register(cpu_present_mask);
		if (IS_ERR(th_zone->cool_dev[th_zone->cool_dev_size])) {
-			dev_err(sensor_conf->dev,
-				"Failed to register cpufreq cooling device\n");
-			ret = -EINVAL;
+			ret = PTR_ERR(th_zone->cool_dev[th_zone->cool_dev_size]);
+			if (ret != -EPROBE_DEFER)
+				dev_err(sensor_conf->dev,
+					"Failed to register cpufreq cooling device: %d\n",
+					ret);
			goto err_unregister;
		}
		th_zone->cool_dev_size++;
drivers/thermal/samsung/exynos_tmu.c | +4 -1

@@ -927 +927 @@
	/* Register the sensor with thermal management interface */
	ret = exynos_register_thermal(sensor_conf);
	if (ret) {
-		dev_err(&pdev->dev, "Failed to register thermal interface\n");
+		if (ret != -EPROBE_DEFER)
+			dev_err(&pdev->dev,
+				"Failed to register thermal interface: %d\n",
+				ret);
		goto err_clk;
	}
	data->reg_conf = sensor_conf;
drivers/thermal/thermal_core.c | +4 -2

@@ -930 +930 @@
	struct thermal_zone_device *pos1;
	struct thermal_cooling_device *pos2;
	unsigned long max_state;
-	int result;
+	int result, ret;

	if (trip >= tz->trips || (trip < 0 && trip != THERMAL_TRIPS_NONE))
		return -EINVAL;
@@ -947 +947 @@
	if (tz != pos1 || cdev != pos2)
		return -EINVAL;

-	cdev->ops->get_max_state(cdev, &max_state);
+	ret = cdev->ops->get_max_state(cdev, &max_state);
+	if (ret)
+		return ret;

	/* lower default 0, upper default max_state */
	lower = lower == THERMAL_NO_LIMIT ? 0 : lower;
@@ -716 +716 @@
	req->using_dma = 1;
	req->ctrl = USBA_BF(DMA_BUF_LEN, req->req.length)
			| USBA_DMA_CH_EN | USBA_DMA_END_BUF_IE
-			| USBA_DMA_END_TR_EN | USBA_DMA_END_TR_IE;
+			| USBA_DMA_END_BUF_EN;

-	if (ep->is_in)
-		req->ctrl |= USBA_DMA_END_BUF_EN;
+	if (!ep->is_in)
+		req->ctrl |= USBA_DMA_END_TR_EN | USBA_DMA_END_TR_IE;

	/*
	 * Add this request to the queue and submit for DMA if
@@ -828 +828 @@
 {
	struct usba_ep *ep = to_usba_ep(_ep);
	struct usba_udc *udc = ep->udc;
-	struct usba_request *req = to_usba_req(_req);
+	struct usba_request *req;
	unsigned long flags;
	u32 status;
@@ -836 +836 @@
		ep->ep.name, req);

	spin_lock_irqsave(&udc->lock, flags);
+
+	list_for_each_entry(req, &ep->queue, queue) {
+		if (&req->req == _req)
+			break;
+	}
+
+	if (&req->req != _req) {
+		spin_unlock_irqrestore(&udc->lock, flags);
+		return -EINVAL;
+	}

	if (req->using_dma) {
		/*
@@ -1573 +1563 @@
	if ((epstatus & epctrl) & USBA_RX_BK_RDY) {
		DBG(DBG_BUS, "%s: RX data ready\n", ep->ep.name);
		receive_data(ep);
-		usba_ep_writel(ep, CLR_STA, USBA_RX_BK_RDY);
	}
 }
drivers/usb/gadget/udc/bdc/bdc_ep.c | +2 -1

@@ -718 +718 @@
	struct bdc *bdc;
	int ret = 0;

-	bdc = ep->bdc;
	if (!req || !ep || !ep->usb_ep.desc)
		return -EINVAL;
+
+	bdc = ep->bdc;

	req->usb_req.actual = 0;
	req->usb_req.status = -EINPROGRESS;
drivers/usb/host/ehci-sched.c | +7 -7

@@ -1581 +1581 @@
	else
		next = (now + 2 + 7) & ~0x07;	/* full frame cache */

+	/* If needed, initialize last_iso_frame so that this URB will be seen */
+	if (ehci->isoc_count == 0)
+		ehci->last_iso_frame = now >> 3;
+
	/*
	 * Use ehci->last_iso_frame as the base.  There can't be any
	 * TDs scheduled for earlier than that.
@@ -1604 +1600 @@
	 */
	now2 = (now - base) & (mod - 1);

-	/* Is the schedule already full? */
+	/* Is the schedule about to wrap around? */
	if (unlikely(!empty && start < period)) {
-		ehci_dbg(ehci, "iso sched full %p (%u-%u < %u mod %u)\n",
+		ehci_dbg(ehci, "request %p would overflow (%u-%u < %u mod %u)\n",
			urb, stream->next_uframe, base, period, mod);
-		status = -ENOSPC;
+		status = -EFBIG;
		goto fail;
	}
@@ -1675 +1671 @@
	urb->start_frame = start & (mod - 1);
	if (!stream->highspeed)
		urb->start_frame >>= 3;
-
-	/* Make sure scan_isoc() sees these */
-	if (ehci->isoc_count == 0)
-		ehci->last_iso_frame = now >> 3;
	return status;

 fail:
@@ -567 +567 @@
 {
	void __iomem *base;
	u32 control;
-	u32 fminterval;
+	u32 fminterval = 0;
+	bool no_fminterval = false;
	int cnt;

	if (!mmio_resource_enabled(pdev, 0))
@@ -577 +576 @@
	base = pci_ioremap_bar(pdev, 0);
	if (base == NULL)
		return;
+
+	/*
+	 * ULi M5237 OHCI controller locks the whole system when accessing
+	 * the OHCI_FMINTERVAL offset.
+	 */
+	if (pdev->vendor == PCI_VENDOR_ID_AL && pdev->device == 0x5237)
+		no_fminterval = true;

	control = readl(base + OHCI_CONTROL);

@@ -623 +615 @@
	}

	/* software reset of the controller, preserving HcFmInterval */
-	fminterval = readl(base + OHCI_FMINTERVAL);
+	if (!no_fminterval)
+		fminterval = readl(base + OHCI_FMINTERVAL);
+
	writel(OHCI_HCR, base + OHCI_CMDSTATUS);

	/* reset requires max 10 us delay */
@@ -634 +624 @@
			break;
		udelay(1);
	}
-	writel(fminterval, base + OHCI_FMINTERVAL);
+
+	if (!no_fminterval)
+		writel(fminterval, base + OHCI_FMINTERVAL);

	/* Now the controller is safely in SUSPEND and nothing can wake it up */
	iounmap(base);
drivers/usb/host/xhci-pci.c | +2

@@ -82 +82 @@
			"must be suspended extra slowly",
			pdev->revision);
	}
+	if (pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_PDK)
+		xhci->quirks |= XHCI_BROKEN_STREAMS;
	/* Fresco Logic confirms: all revisions of this chip do not
	 * support MSI, even though some of them claim to in their PCI
	 * capabilities.
drivers/usb/host/xhci.c | +9

@@ -3803 +3803 @@
		return -EINVAL;
	}

+	if (setup == SETUP_CONTEXT_ONLY) {
+		slot_ctx = xhci_get_slot_ctx(xhci, virt_dev->out_ctx);
+		if (GET_SLOT_STATE(le32_to_cpu(slot_ctx->dev_state)) ==
+		    SLOT_STATE_DEFAULT) {
+			xhci_dbg(xhci, "Slot already in default state\n");
+			return 0;
+		}
+	}
+
	command = xhci_alloc_command(xhci, false, false, GFP_KERNEL);
	if (!command)
		return -ENOMEM;
drivers/usb/musb/Kconfig | +4

@@ -72 +72 @@

 config USB_MUSB_TUSB6010
	tristate "TUSB6010"
+	depends on ARCH_OMAP2PLUS || COMPILE_TEST
+	depends on NOP_USB_XCEIV = USB_MUSB_HDRC # both built-in or both modules

 config USB_MUSB_OMAP2PLUS
	tristate "OMAP2430 and onwards"
@@ -87 +85 @@
 config USB_MUSB_DSPS
	tristate "TI DSPS platforms"
	select USB_MUSB_AM335X_CHILD
+	depends on ARCH_OMAP2PLUS || COMPILE_TEST
	depends on OF_IRQ

 config USB_MUSB_BLACKFIN
@@ -96 +93 @@

 config USB_MUSB_UX500
	tristate "Ux500 platforms"
+	depends on ARCH_U8500 || COMPILE_TEST

 config USB_MUSB_JZ4740
	tristate "JZ4740"
@@ -142 +142 @@
	{DEVICE_SWI(0x0f3d, 0x68a2)},	/* Sierra Wireless MC7700 */
	{DEVICE_SWI(0x114f, 0x68a2)},	/* Sierra Wireless MC7750 */
	{DEVICE_SWI(0x1199, 0x68a2)},	/* Sierra Wireless MC7710 */
-	{DEVICE_SWI(0x1199, 0x68c0)},	/* Sierra Wireless MC73xx */
	{DEVICE_SWI(0x1199, 0x901c)},	/* Sierra Wireless EM7700 */
	{DEVICE_SWI(0x1199, 0x901f)},	/* Sierra Wireless EM7355 */
	{DEVICE_SWI(0x1199, 0x9040)},	/* Sierra Wireless Modem */
drivers/usb/storage/uas-detect.h | +28 -5

@@ -69 +69 @@
		return 0;

	/*
-	 * ASM1051 and older ASM1053 devices have the same usb-id, and UAS is
-	 * broken on the ASM1051, use the number of streams to differentiate.
-	 * New ASM1053-s also support 32 streams, but have a different prod-id.
+	 * ASMedia has a number of usb3 to sata bridge chips, at the time of
+	 * this writing the following versions exist:
+	 * ASM1051 - no uas support version
+	 * ASM1051 - with broken (*) uas support
+	 * ASM1053 - with working uas support
+	 * ASM1153 - with working uas support
+	 *
+	 * Devices with these chips re-use a number of device-ids over the
+	 * entire line, so the device-id is useless to determine if we're
+	 * dealing with an ASM1051 (which we want to avoid).
+	 *
+	 * The ASM1153 can be identified by config.MaxPower == 0,
+	 * where as the ASM105x models have config.MaxPower == 36.
+	 *
+	 * Differentiating between the ASM1053 and ASM1051 is trickier, when
+	 * connected over USB-3 we can look at the number of streams supported,
+	 * ASM1051 supports 32 streams, where as early ASM1053 versions support
+	 * 16 streams, newer ASM1053-s also support 32 streams, but have a
+	 * different prod-id.
+	 *
+	 * (*) ASM1051 chips do work with UAS with some disks (with the
+	 *     US_FL_NO_REPORT_OPCODES quirk), but are broken with other disks
	 */
	if (le16_to_cpu(udev->descriptor.idVendor) == 0x174c &&
-	    le16_to_cpu(udev->descriptor.idProduct) == 0x55aa) {
-		if (udev->speed < USB_SPEED_SUPER) {
+	    (le16_to_cpu(udev->descriptor.idProduct) == 0x5106 ||
+	     le16_to_cpu(udev->descriptor.idProduct) == 0x55aa)) {
+		if (udev->actconfig->desc.bMaxPower == 0) {
+			/* ASM1153, do nothing */
+		} else if (udev->speed < USB_SPEED_SUPER) {
			/* No streams info, assume ASM1051 */
			flags |= US_FL_IGNORE_UAS;
		} else if (usb_ss_max_streams(&eps[1]->ss_ep_comp) == 32) {
+			/* Possibly an ASM1051, disable uas */
			flags |= US_FL_IGNORE_UAS;
		}
	}
drivers/usb/storage/unusual_uas.h | +38 -8

@@ -40 +40 @@
 * and don't forget to CC: the USB development list <linux-usb@vger.kernel.org>
 */

+/*
+ * Apricorn USB3 dongle sometimes returns "USBSUSBSUSBS" in response to SCSI
+ * commands in UAS mode.  Observed with the 1.28 firmware; are there others?
+ */
+UNUSUAL_DEV(0x0984, 0x0301, 0x0128, 0x0128,
+		"Apricorn",
+		"",
+		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+		US_FL_IGNORE_UAS),
+
 /* https://bugzilla.kernel.org/show_bug.cgi?id=79511 */
 UNUSUAL_DEV(0x0bc2, 0x2312, 0x0000, 0x9999,
		"Seagate",
@@ -78 +68 @@
		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
		US_FL_NO_ATA_1X),

+/* Reported-by: Marcin Zajączkowski <mszpak@wp.pl> */
+UNUSUAL_DEV(0x0bc2, 0xa013, 0x0000, 0x9999,
+		"Seagate",
+		"Backup Plus",
+		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+		US_FL_NO_ATA_1X),
+
+/* Reported-by: Hans de Goede <hdegoede@redhat.com> */
+UNUSUAL_DEV(0x0bc2, 0xa0a4, 0x0000, 0x9999,
+		"Seagate",
+		"Backup Plus Desk",
+		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+		US_FL_NO_ATA_1X),
+
 /* https://bbs.archlinux.org/viewtopic.php?id=183190 */
 UNUSUAL_DEV(0x0bc2, 0xab20, 0x0000, 0x9999,
		"Seagate",
@@ -106 +82 @@
		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
		US_FL_NO_ATA_1X),

+/* Reported-by: G. Richard Bellamy <rbellamy@pteradigm.com> */
+UNUSUAL_DEV(0x0bc2, 0xab2a, 0x0000, 0x9999,
+		"Seagate",
+		"BUP Fast HDD",
+		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+		US_FL_NO_ATA_1X),
+
 /* Reported-by: Claudio Bizzarri <claudio.bizzarri@gmail.com> */
 UNUSUAL_DEV(0x152d, 0x0567, 0x0000, 0x9999,
		"JMicron",
@@ -120 +89 @@
		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
		US_FL_NO_REPORT_OPCODES),

-/* Most ASM1051 based devices have issues with uas, blacklist them all */
-/* Reported-by: Hans de Goede <hdegoede@redhat.com> */
-UNUSUAL_DEV(0x174c, 0x5106, 0x0000, 0x9999,
-		"ASMedia",
-		"ASM1051",
-		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
-		US_FL_IGNORE_UAS),
-
 /* Reported-by: Hans de Goede <hdegoede@redhat.com> */
 UNUSUAL_DEV(0x2109, 0x0711, 0x0000, 0x9999,
		"VIA",
		"VL711",
		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
		US_FL_NO_ATA_1X),
+
+/* Reported-by: Takeo Nakayama <javhera@gmx.com> */
+UNUSUAL_DEV(0x357d, 0x7788, 0x0000, 0x9999,
+		"JMicron",
+		"JMS566",
+		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+		US_FL_NO_REPORT_OPCODES),

 /* Reported-by: Hans de Goede <hdegoede@redhat.com> */
 UNUSUAL_DEV(0x4971, 0x1012, 0x0000, 0x9999,
drivers/vfio/pci/vfio_pci.c | +1 -3

@@ -840 +840 @@

 static int vfio_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 {
-	u8 type;
	struct vfio_pci_device *vdev;
	struct iommu_group *group;
	int ret;

-	pci_read_config_byte(pdev, PCI_HEADER_TYPE, &type);
-	if ((type & PCI_HEADER_TYPE) != PCI_HEADER_TYPE_NORMAL)
+	if (pdev->hdr_type != PCI_HEADER_TYPE_NORMAL)
		return -EINVAL;

	group = iommu_group_get(&pdev->dev);
drivers/vhost/net.c | +1 -1

@@ -538 +538 @@
		++headcount;
		seg += in;
	}
-	heads[headcount - 1].len = cpu_to_vhost32(vq, len - datalen);
+	heads[headcount - 1].len = cpu_to_vhost32(vq, len + datalen);
	*iovcount = seg;
	if (unlikely(log))
		*log_num = nlogs;
@@ -636 +636 @@
		err = broadsheet_spiflash_read_range(par, start_sector_addr,
						data_start_addr, sector_buffer);
		if (err)
-			return err;
+			goto out;
	}

	/* now we copy our data into the right place in the sector buffer */
@@ -657 +657 @@
		err = broadsheet_spiflash_read_range(par, tail_start_addr,
			tail_len, sector_buffer + tail_start_addr);
		if (err)
-			return err;
+			goto out;
	}

	/* if we got here we have the full sector that we want to rewrite. */
@@ -665 +665 @@
	/* first erase the sector */
	err = broadsheet_spiflash_erase_sector(par, start_sector_addr);
	if (err)
-		return err;
+		goto out;

	/* now write it */
	err = broadsheet_spiflash_write_sector(par, start_sector_addr,
					sector_buffer, sector_size);
+out:
+	kfree(sector_buffer);
	return err;
 }

drivers/video/fbdev/core/fb_defio.c | +3 -2

@@ -83 +83 @@
	cancel_delayed_work_sync(&info->deferred_work);

	/* Run it immediately */
-	err = schedule_delayed_work(&info->deferred_work, 0);
+	schedule_delayed_work(&info->deferred_work, 0);
	mutex_unlock(&inode->i_mutex);
-	return err;
+
+	return 0;
 }
 EXPORT_SYMBOL_GPL(fb_deferred_io_fsync);

@@ -97 +97 @@
	return 0;

 err_enable:
-	regulator_disable(pll->regulator);
+	if (pll->regulator)
+		regulator_disable(pll->regulator);
 err_reg:
	clk_disable_unprepare(pll->clkin);
	return r;
drivers/video/fbdev/omap2/dss/sdi.c | +2

@@ -342 +342 @@
	out->output_type = OMAP_DISPLAY_TYPE_SDI;
	out->name = "sdi.0";
	out->dispc_channel = OMAP_DSS_CHANNEL_LCD;
+	/* We have SDI only on OMAP3, where it's on port 1 */
+	out->port_num = 1;
	out->ops.sdi = &sdi_ops;
	out->owner = THIS_MODULE;

drivers/video/fbdev/simplefb.c | +1 -1

@@ -402 +402 @@
	if (ret)
		return ret;

-	if (IS_ENABLED(CONFIG_OF) && of_chosen) {
+	if (IS_ENABLED(CONFIG_OF_ADDRESS) && of_chosen) {
		for_each_child_of_node(of_chosen, np) {
			if (of_device_is_compatible(np, "simple-framebuffer"))
				of_platform_device_create(np, NULL, NULL);
+16-1
drivers/video/logo/logo.c
···2121module_param(nologo, bool, 0);2222MODULE_PARM_DESC(nologo, "Disables startup logo");23232424+/*2525+ * Logos are located in the initdata, and will be freed in kernel_init.2626+ * Use late_init to mark the logos as freed to prevent any further use.2727+ */2828+2929+static bool logos_freed;3030+3131+static int __init fb_logo_late_init(void)3232+{3333+ logos_freed = true;3434+ return 0;3535+}3636+3737+late_initcall(fb_logo_late_init);3838+2439/* logo's are marked __initdata. Use __init_refok to tell2540 * modpost that it is intended that this function uses data2641 * marked __initdata.···4429{4530 const struct linux_logo *logo = NULL;46314747- if (nologo)3232+ if (nologo || logos_freed)4833 return NULL;49345035 if (depth >= 1) {
+1-9
drivers/virtio/virtio_pci_common.c
···282282283283 vp_free_vectors(vdev);284284 kfree(vp_dev->vqs);285285+ vp_dev->vqs = NULL;285286}286287287288static int vp_try_to_find_vqs(struct virtio_device *vdev, unsigned nvqs,···420419 }421420 }422421 return 0;423423-}424424-425425-void virtio_pci_release_dev(struct device *_d)426426-{427427- /*428428- * No need for a release method as we allocate/free429429- * all devices together with the pci devices.430430- * Provide an empty one to avoid getting a warning from core.431431- */432422}433423434424#ifdef CONFIG_PM_SLEEP
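The `vp_dev->vqs = NULL` line above follows the usual null-after-free discipline: a later cleanup pass then sees a NULL pointer instead of a dangling one. A small sketch of the idea (types hypothetical):

```c
#include <assert.h>
#include <stdlib.h>

struct dev_state {
    int *vqs;
};

/* free() tolerates NULL, so nulling the pointer right after freeing it
 * makes a repeated cleanup call harmless instead of a double free. */
void del_vqs(struct dev_state *d)
{
    free(d->vqs);
    d->vqs = NULL;
}
```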
-1
drivers/virtio/virtio_pci_common.h
···126126 * - ignore the affinity request if we're using INTX127127 */128128int vp_set_vq_affinity(struct virtqueue *vq, int cpu);129129-void virtio_pci_release_dev(struct device *);130129131130int virtio_pci_legacy_probe(struct pci_dev *pci_dev,132131 const struct pci_device_id *id);
+11-1
drivers/virtio/virtio_pci_legacy.c
···211211 .set_vq_affinity = vp_set_vq_affinity,212212};213213214214+static void virtio_pci_release_dev(struct device *_d)215215+{216216+ struct virtio_device *vdev = dev_to_virtio(_d);217217+ struct virtio_pci_device *vp_dev = to_vp_device(vdev);218218+219219+ /* As struct device is a kobject, it's not safe to220220+ * free the memory (including the reference counter itself)221221+ * until it's release callback. */222222+ kfree(vp_dev);223223+}224224+214225/* the PCI probing function */215226int virtio_pci_legacy_probe(struct pci_dev *pci_dev,216227 const struct pci_device_id *id)···313302 pci_iounmap(pci_dev, vp_dev->ioaddr);314303 pci_release_regions(pci_dev);315304 pci_disable_device(pci_dev);316316- kfree(vp_dev);317305}
+10-3
fs/btrfs/backref.c
···15521552{15531553 int ret;15541554 int type;15551555- struct btrfs_tree_block_info *info;15561555 struct btrfs_extent_inline_ref *eiref;1557155615581557 if (*ptr == (unsigned long)-1)···15721573 }1573157415741575 /* we can treat both ref types equally here */15751575- info = (struct btrfs_tree_block_info *)(ei + 1);15761576 *out_root = btrfs_extent_inline_ref_offset(eb, eiref);15771577- *out_level = btrfs_tree_block_level(eb, info);15771577+15781578+ if (key->type == BTRFS_EXTENT_ITEM_KEY) {15791579+ struct btrfs_tree_block_info *info;15801580+15811581+ info = (struct btrfs_tree_block_info *)(ei + 1);15821582+ *out_level = btrfs_tree_block_level(eb, info);15831583+ } else {15841584+ ASSERT(key->type == BTRFS_METADATA_ITEM_KEY);15851585+ *out_level = (u8)key->offset;15861586+ }1578158715791588 if (ret == 1)15801589 *ptr = (unsigned long)-1;
+8
fs/btrfs/delayed-inode.c
···18571857{18581858 struct btrfs_delayed_node *delayed_node;1859185918601860+ /*18611861+ * we don't do delayed inode updates during log recovery because it18621862+ * leads to enospc problems. This means we also can't do18631863+ * delayed inode refs18641864+ */18651865+ if (BTRFS_I(inode)->root->fs_info->log_root_recovering)18661866+ return -EAGAIN;18671867+18601868 delayed_node = btrfs_get_or_create_delayed_node(inode);18611869 if (IS_ERR(delayed_node))18621870 return PTR_ERR(delayed_node);
+6-6
fs/btrfs/extent-tree.c
···31393139 struct extent_buffer *leaf;3140314031413141 ret = btrfs_search_slot(trans, extent_root, &cache->key, path, 0, 1);31423142- if (ret < 0)31423142+ if (ret) {31433143+ if (ret > 0)31443144+ ret = -ENOENT;31433145 goto fail;31443144- BUG_ON(ret); /* Corruption */31463146+ }3145314731463148 leaf = path->nodes[0];31473149 bi = btrfs_item_ptr_offset(leaf, path->slots[0]);···31513149 btrfs_mark_buffer_dirty(leaf);31523150 btrfs_release_path(path);31533151fail:31543154- if (ret) {31523152+ if (ret)31553153 btrfs_abort_transaction(trans, root, ret);31563156- return ret;31573157- }31583158- return 0;31543154+ return ret;3159315531603156}31613157
+3-1
fs/btrfs/inode.c
···6255625562566256out_fail:62576257 btrfs_end_transaction(trans, root);62586258- if (drop_on_err)62586258+ if (drop_on_err) {62596259+ inode_dec_link_count(inode);62596260 iput(inode);62616261+ }62606262 btrfs_balance_delayed_items(root);62616263 btrfs_btree_balance_dirty(root);62626264 return err;
···14161416 }14171417 }1418141814191419- dout("fill_inline_data %p %llx.%llx len %lu locked_page %p\n",14191419+ dout("fill_inline_data %p %llx.%llx len %zu locked_page %p\n",14201420 inode, ceph_vinop(inode), len, locked_page);1421142114221422 if (len > 0) {
+3-3
fs/cifs/cifsglob.h
···661661 server->ops->set_credits(server, val);662662}663663664664-static inline __u64664664+static inline __le64665665get_next_mid64(struct TCP_Server_Info *server)666666{667667- return server->ops->get_next_mid(server);667667+ return cpu_to_le64(server->ops->get_next_mid(server));668668}669669670670static inline __le16671671get_next_mid(struct TCP_Server_Info *server)672672{673673- __u16 mid = get_next_mid64(server);673673+ __u16 mid = server->ops->get_next_mid(server);674674 /*675675 * The value in the SMB header should be little endian for easy676676 * on-the-wire decoding.
+7-5
fs/cifs/netmisc.c
···926926927927 /* Subtract the NTFS time offset, then convert to 1s intervals. */928928 s64 t = le64_to_cpu(ntutc) - NTFS_TIME_OFFSET;929929+ u64 abs_t;929930930931 /*931932 * Unfortunately can not use normal 64 bit division on 32 bit arch, but···934933 * to special case them935934 */936935 if (t < 0) {937937- t = -t;938938- ts.tv_nsec = (long)(do_div(t, 10000000) * 100);936936+ abs_t = -t;937937+ ts.tv_nsec = (long)(do_div(abs_t, 10000000) * 100);939938 ts.tv_nsec = -ts.tv_nsec;940940- ts.tv_sec = -t;939939+ ts.tv_sec = -abs_t;941940 } else {942942- ts.tv_nsec = (long)do_div(t, 10000000) * 100;943943- ts.tv_sec = t;941941+ abs_t = t;942942+ ts.tv_nsec = (long)do_div(abs_t, 10000000) * 100;943943+ ts.tv_sec = abs_t;944944 }945945946946 return ts;
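The netmisc.c hunk above is needed because `do_div()` takes an unsigned dividend, so a negative 64-bit time must be made positive before the division and the results negated afterwards. A userspace sketch of that conversion using plain 64-bit division in place of the kernel-only `do_div()` (the offset constant is the conventional 1601-to-1970 NTFS epoch offset in 100 ns units):

```c
#include <assert.h>
#include <stdint.h>

/* 100 ns intervals between 1601-01-01 and the Unix epoch:
 * 369 years plus 89 leap days. */
#define NTFS_TIME_OFFSET ((int64_t)(369 * 365 + 89) * 24 * 3600 * 10000000)

struct unix_ts {
    int64_t sec;
    long nsec;
};

/* Divide on the absolute value, then restore the sign, mirroring the
 * abs_t handling in the patch above. */
struct unix_ts nt_to_unix(uint64_t ntutc)
{
    struct unix_ts r;
    int64_t t = (int64_t)ntutc - NTFS_TIME_OFFSET;
    uint64_t abs_t = t < 0 ? (uint64_t)-t : (uint64_t)t;

    r.sec = (int64_t)(abs_t / 10000000);
    r.nsec = (long)(abs_t % 10000000) * 100;
    if (t < 0) {
        r.sec = -r.sec;
        r.nsec = -r.nsec;
    }
    return r;
}
```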
+7-3
fs/cifs/readdir.c
···6969 * Attempt to preload the dcache with the results from the FIND_FIRST/NEXT7070 *7171 * Find the dentry that matches "name". If there isn't one, create one. If it's7272- * a negative dentry or the uniqueid changed, then drop it and recreate it.7272+ * a negative dentry or the uniqueid or filetype(mode) changed,7373+ * then drop it and recreate it.7374 */7475static void7576cifs_prime_dcache(struct dentry *parent, struct qstr *name,···9897 if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM))9998 fattr->cf_uniqueid = CIFS_I(inode)->uniqueid;10099101101- /* update inode in place if i_ino didn't change */102102- if (CIFS_I(inode)->uniqueid == fattr->cf_uniqueid) {100100+ /* update inode in place101101+ * if both i_ino and i_mode didn't change */102102+ if (CIFS_I(inode)->uniqueid == fattr->cf_uniqueid &&103103+ (inode->i_mode & S_IFMT) ==104104+ (fattr->cf_mode & S_IFMT)) {103105 cifs_fattr_to_inode(inode, fattr);104106 goto out;105107 }
+7-5
fs/cifs/smb2misc.c
···3232static int3333check_smb2_hdr(struct smb2_hdr *hdr, __u64 mid)3434{3535+ __u64 wire_mid = le64_to_cpu(hdr->MessageId);3636+3537 /*3638 * Make sure that this really is an SMB, that it is a response,3739 * and that the message ids match.3840 */3941 if ((*(__le32 *)hdr->ProtocolId == SMB2_PROTO_NUMBER) &&4040- (mid == hdr->MessageId)) {4242+ (mid == wire_mid)) {4143 if (hdr->Flags & SMB2_FLAGS_SERVER_TO_REDIR)4244 return 0;4345 else {···5351 if (*(__le32 *)hdr->ProtocolId != SMB2_PROTO_NUMBER)5452 cifs_dbg(VFS, "Bad protocol string signature header %x\n",5553 *(unsigned int *) hdr->ProtocolId);5656- if (mid != hdr->MessageId)5454+ if (mid != wire_mid)5755 cifs_dbg(VFS, "Mids do not match: %llu and %llu\n",5858- mid, hdr->MessageId);5656+ mid, wire_mid);5957 }6060- cifs_dbg(VFS, "Bad SMB detected. The Mid=%llu\n", hdr->MessageId);5858+ cifs_dbg(VFS, "Bad SMB detected. The Mid=%llu\n", wire_mid);6159 return 1;6260}6361···9795{9896 struct smb2_hdr *hdr = (struct smb2_hdr *)buf;9997 struct smb2_pdu *pdu = (struct smb2_pdu *)hdr;100100- __u64 mid = hdr->MessageId;9898+ __u64 mid = le64_to_cpu(hdr->MessageId);10199 __u32 len = get_rfc1002_length(buf);102100 __u32 clc_len; /* calculated length */103101 int command;
···110110 __le16 CreditRequest; /* CreditResponse */111111 __le32 Flags;112112 __le32 NextCommand;113113- __u64 MessageId; /* opaque - so can stay little endian */113113+ __le64 MessageId;114114 __le32 ProcessId;115115 __u32 TreeId; /* opaque - so do not make little endian */116116 __u64 SessionId; /* opaque - so do not make little endian */
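The smb2misc.c/smb2pdu.h pair above makes `MessageId` a proper `__le64` and decodes it with `le64_to_cpu()` before any comparison or logging. A portable byte-wise sketch of that decode, independent of host endianness (helper names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Assemble a little-endian 64-bit wire field into a host-order value,
 * least significant byte first. */
uint64_t wire_le64_to_cpu(const unsigned char *p)
{
    uint64_t v = 0;
    int i;

    for (i = 7; i >= 0; i--)
        v = (v << 8) | p[i];
    return v;
}

/* Compare a host-order mid with the on-the-wire field, as the patched
 * check_smb2_hdr() does after conversion. */
int mids_match(uint64_t host_mid, const unsigned char *wire_mid_field)
{
    return host_mid == wire_le64_to_cpu(wire_mid_field);
}
```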
···5166516651675167 /* fallback to generic here if not in extents fmt */51685168 if (!(ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)))51695169- return __generic_block_fiemap(inode, fieinfo, start, len,51705170- ext4_get_block);51695169+ return generic_block_fiemap(inode, fieinfo, start, len,51705170+ ext4_get_block);5171517151725172 if (fiemap_check_flags(fieinfo, EXT4_FIEMAP_FLAGS))51735173 return -EBADR;
+116-108
fs/ext4/file.c
···273273 * we determine this extent as a data or a hole according to whether the274274 * page cache has data or not.275275 */276276-static int ext4_find_unwritten_pgoff(struct inode *inode, int whence,277277- loff_t endoff, loff_t *offset)276276+static int ext4_find_unwritten_pgoff(struct inode *inode,277277+ int whence,278278+ struct ext4_map_blocks *map,279279+ loff_t *offset)278280{279281 struct pagevec pvec;282282+ unsigned int blkbits;280283 pgoff_t index;281284 pgoff_t end;285285+ loff_t endoff;282286 loff_t startoff;283287 loff_t lastoff;284288 int found = 0;285289290290+ blkbits = inode->i_sb->s_blocksize_bits;286291 startoff = *offset;287292 lastoff = startoff;288288-293293+ endoff = (loff_t)(map->m_lblk + map->m_len) << blkbits;289294290295 index = startoff >> PAGE_CACHE_SHIFT;291296 end = endoff >> PAGE_CACHE_SHIFT;···408403static loff_t ext4_seek_data(struct file *file, loff_t offset, loff_t maxsize)409404{410405 struct inode *inode = file->f_mapping->host;411411- struct fiemap_extent_info fie;412412- struct fiemap_extent ext[2];413413- loff_t next;414414- int i, ret = 0;406406+ struct ext4_map_blocks map;407407+ struct extent_status es;408408+ ext4_lblk_t start, last, end;409409+ loff_t dataoff, isize;410410+ int blkbits;411411+ int ret = 0;415412416413 mutex_lock(&inode->i_mutex);417417- if (offset >= inode->i_size) {414414+415415+ isize = i_size_read(inode);416416+ if (offset >= isize) {418417 mutex_unlock(&inode->i_mutex);419418 return -ENXIO;420419 }421421- fie.fi_flags = 0;422422- fie.fi_extents_max = 2;423423- fie.fi_extents_start = (struct fiemap_extent __user *) &ext;424424- while (1) {425425- mm_segment_t old_fs = get_fs();426420427427- fie.fi_extents_mapped = 0;428428- memset(ext, 0, sizeof(*ext) * fie.fi_extents_max);421421+ blkbits = inode->i_sb->s_blocksize_bits;422422+ start = offset >> blkbits;423423+ last = start;424424+ end = isize >> blkbits;425425+ dataoff = offset;429426430430- set_fs(get_ds());431431- ret = ext4_fiemap(inode, 
&fie, offset, maxsize - offset);432432- set_fs(old_fs);433433- if (ret)434434- break;435435-436436- /* No extents found, EOF */437437- if (!fie.fi_extents_mapped) {438438- ret = -ENXIO;427427+ do {428428+ map.m_lblk = last;429429+ map.m_len = end - last + 1;430430+ ret = ext4_map_blocks(NULL, inode, &map, 0);431431+ if (ret > 0 && !(map.m_flags & EXT4_MAP_UNWRITTEN)) {432432+ if (last != start)433433+ dataoff = (loff_t)last << blkbits;439434 break;440435 }441441- for (i = 0; i < fie.fi_extents_mapped; i++) {442442- next = (loff_t)(ext[i].fe_length + ext[i].fe_logical);443436444444- if (offset < (loff_t)ext[i].fe_logical)445445- offset = (loff_t)ext[i].fe_logical;446446- /*447447- * If extent is not unwritten, then it contains valid448448- * data, mapped or delayed.449449- */450450- if (!(ext[i].fe_flags & FIEMAP_EXTENT_UNWRITTEN))451451- goto out;452452-453453- /*454454- * If there is a unwritten extent at this offset,455455- * it will be as a data or a hole according to page456456- * cache that has data or not.457457- */458458- if (ext4_find_unwritten_pgoff(inode, SEEK_DATA,459459- next, &offset))460460- goto out;461461-462462- if (ext[i].fe_flags & FIEMAP_EXTENT_LAST) {463463- ret = -ENXIO;464464- goto out;465465- }466466- offset = next;437437+ /*438438+ * If there is a delay extent at this offset,439439+ * it will be as a data.440440+ */441441+ ext4_es_find_delayed_extent_range(inode, last, last, &es);442442+ if (es.es_len != 0 && in_range(last, es.es_lblk, es.es_len)) {443443+ if (last != start)444444+ dataoff = (loff_t)last << blkbits;445445+ break;467446 }468468- }469469- if (offset > inode->i_size)470470- offset = inode->i_size;471471-out:447447+448448+ /*449449+ * If there is a unwritten extent at this offset,450450+ * it will be as a data or a hole according to page451451+ * cache that has data or not.452452+ */453453+ if (map.m_flags & EXT4_MAP_UNWRITTEN) {454454+ int unwritten;455455+ unwritten = ext4_find_unwritten_pgoff(inode, SEEK_DATA,456456+ &map, 
&dataoff);457457+ if (unwritten)458458+ break;459459+ }460460+461461+ last++;462462+ dataoff = (loff_t)last << blkbits;463463+ } while (last <= end);464464+472465 mutex_unlock(&inode->i_mutex);473473- if (ret)474474- return ret;475466476476- return vfs_setpos(file, offset, maxsize);467467+ if (dataoff > isize)468468+ return -ENXIO;469469+470470+ return vfs_setpos(file, dataoff, maxsize);477471}478472479473/*480480- * ext4_seek_hole() retrieves the offset for SEEK_HOLE474474+ * ext4_seek_hole() retrieves the offset for SEEK_HOLE.481475 */482476static loff_t ext4_seek_hole(struct file *file, loff_t offset, loff_t maxsize)483477{484478 struct inode *inode = file->f_mapping->host;485485- struct fiemap_extent_info fie;486486- struct fiemap_extent ext[2];487487- loff_t next;488488- int i, ret = 0;479479+ struct ext4_map_blocks map;480480+ struct extent_status es;481481+ ext4_lblk_t start, last, end;482482+ loff_t holeoff, isize;483483+ int blkbits;484484+ int ret = 0;489485490486 mutex_lock(&inode->i_mutex);491491- if (offset >= inode->i_size) {487487+488488+ isize = i_size_read(inode);489489+ if (offset >= isize) {492490 mutex_unlock(&inode->i_mutex);493491 return -ENXIO;494492 }495493496496- fie.fi_flags = 0;497497- fie.fi_extents_max = 2;498498- fie.fi_extents_start = (struct fiemap_extent __user *)&ext;499499- while (1) {500500- mm_segment_t old_fs = get_fs();494494+ blkbits = inode->i_sb->s_blocksize_bits;495495+ start = offset >> blkbits;496496+ last = start;497497+ end = isize >> blkbits;498498+ holeoff = offset;501499502502- fie.fi_extents_mapped = 0;503503- memset(ext, 0, sizeof(*ext));500500+ do {501501+ map.m_lblk = last;502502+ map.m_len = end - last + 1;503503+ ret = ext4_map_blocks(NULL, inode, &map, 0);504504+ if (ret > 0 && !(map.m_flags & EXT4_MAP_UNWRITTEN)) {505505+ last += ret;506506+ holeoff = (loff_t)last << blkbits;507507+ continue;508508+ }504509505505- set_fs(get_ds());506506- ret = ext4_fiemap(inode, &fie, offset, maxsize - offset);507507- 
set_fs(old_fs);508508- if (ret)509509- break;510510+ /*511511+ * If there is a delay extent at this offset,512512+ * we will skip this extent.513513+ */514514+ ext4_es_find_delayed_extent_range(inode, last, last, &es);515515+ if (es.es_len != 0 && in_range(last, es.es_lblk, es.es_len)) {516516+ last = es.es_lblk + es.es_len;517517+ holeoff = (loff_t)last << blkbits;518518+ continue;519519+ }510520511511- /* No extents found */512512- if (!fie.fi_extents_mapped)513513- break;514514-515515- for (i = 0; i < fie.fi_extents_mapped; i++) {516516- next = (loff_t)(ext[i].fe_logical + ext[i].fe_length);517517- /*518518- * If extent is not unwritten, then it contains valid519519- * data, mapped or delayed.520520- */521521- if (!(ext[i].fe_flags & FIEMAP_EXTENT_UNWRITTEN)) {522522- if (offset < (loff_t)ext[i].fe_logical)523523- goto out;524524- offset = next;521521+ /*522522+ * If there is a unwritten extent at this offset,523523+ * it will be as a data or a hole according to page524524+ * cache that has data or not.525525+ */526526+ if (map.m_flags & EXT4_MAP_UNWRITTEN) {527527+ int unwritten;528528+ unwritten = ext4_find_unwritten_pgoff(inode, SEEK_HOLE,529529+ &map, &holeoff);530530+ if (!unwritten) {531531+ last += ret;532532+ holeoff = (loff_t)last << blkbits;525533 continue;526534 }527527- /*528528- * If there is a unwritten extent at this offset,529529- * it will be as a data or a hole according to page530530- * cache that has data or not.531531- */532532- if (ext4_find_unwritten_pgoff(inode, SEEK_HOLE,533533- next, &offset))534534- goto out;535535-536536- offset = next;537537- if (ext[i].fe_flags & FIEMAP_EXTENT_LAST)538538- goto out;539535 }540540- }541541- if (offset > inode->i_size)542542- offset = inode->i_size;543543-out:544544- mutex_unlock(&inode->i_mutex);545545- if (ret)546546- return ret;547536548548- return vfs_setpos(file, offset, maxsize);537537+ /* find a hole */538538+ break;539539+ } while (last <= end);540540+541541+ 
mutex_unlock(&inode->i_mutex);542542+543543+ if (holeoff > isize)544544+ holeoff = isize;545545+546546+ return vfs_setpos(file, holeoff, maxsize);549547}550548551549/*
+12-12
fs/ext4/resize.c
···2424 return -EPERM;25252626 /*2727+ * If we are not using the primary superblock/GDT copy don't resize,2828+ * because the user tools have no way of handling this. Probably a2929+ * bad time to do it anyways.3030+ */3131+ if (EXT4_SB(sb)->s_sbh->b_blocknr !=3232+ le32_to_cpu(EXT4_SB(sb)->s_es->s_first_data_block)) {3333+ ext4_warning(sb, "won't resize using backup superblock at %llu",3434+ (unsigned long long)EXT4_SB(sb)->s_sbh->b_blocknr);3535+ return -EPERM;3636+ }3737+3838+ /*2739 * We are not allowed to do online-resizing on a filesystem mounted2840 * with error, because it can destroy the filesystem easily.2941 */···769757 printk(KERN_DEBUG770758 "EXT4-fs: ext4_add_new_gdb: adding group block %lu\n",771759 gdb_num);772772-773773- /*774774- * If we are not using the primary superblock/GDT copy don't resize,775775- * because the user tools have no way of handling this. Probably a776776- * bad time to do it anyways.777777- */778778- if (EXT4_SB(sb)->s_sbh->b_blocknr !=779779- le32_to_cpu(EXT4_SB(sb)->s_es->s_first_data_block)) {780780- ext4_warning(sb, "won't resize using backup superblock at %llu",781781- (unsigned long long)EXT4_SB(sb)->s_sbh->b_blocknr);782782- return -EPERM;783783- }784760785761 gdb_bh = sb_bread(sb, gdblock);786762 if (!gdb_bh)
+1-1
fs/ext4/super.c
···34823482 if (EXT4_HAS_RO_COMPAT_FEATURE(sb,34833483 EXT4_FEATURE_RO_COMPAT_METADATA_CSUM) &&34843484 EXT4_HAS_RO_COMPAT_FEATURE(sb, EXT4_FEATURE_RO_COMPAT_GDT_CSUM))34853485- ext4_warning(sb, KERN_INFO "metadata_csum and uninit_bg are "34853485+ ext4_warning(sb, "metadata_csum and uninit_bg are "34863486 "redundant flags; please run fsck.");3487348734883488 /* Check for a known checksum algorithm */
+3-2
fs/fcntl.c
···740740 * Exceptions: O_NONBLOCK is a two bit define on parisc; O_NDELAY741741 * is defined as O_NONBLOCK on some platforms and not on others.742742 */743743- BUILD_BUG_ON(20 - 1 /* for O_RDONLY being 0 */ != HWEIGHT32(743743+ BUILD_BUG_ON(21 - 1 /* for O_RDONLY being 0 */ != HWEIGHT32(744744 O_RDONLY | O_WRONLY | O_RDWR |745745 O_CREAT | O_EXCL | O_NOCTTY |746746 O_TRUNC | O_APPEND | /* O_NONBLOCK | */747747 __O_SYNC | O_DSYNC | FASYNC |748748 O_DIRECT | O_LARGEFILE | O_DIRECTORY |749749 O_NOFOLLOW | O_NOATIME | O_CLOEXEC |750750- __FMODE_EXEC | O_PATH | __O_TMPFILE750750+ __FMODE_EXEC | O_PATH | __O_TMPFILE |751751+ __FMODE_NONOTIFY751752 ));752753753754 fasync_cache = kmem_cache_create("fasync_cache",
+49-2
fs/fuse/dev.c
···131131 req->in.h.pid = current->pid;132132}133133134134+void fuse_set_initialized(struct fuse_conn *fc)135135+{136136+ /* Make sure stores before this are seen on another CPU */137137+ smp_wmb();138138+ fc->initialized = 1;139139+}140140+134141static bool fuse_block_alloc(struct fuse_conn *fc, bool for_background)135142{136143 return !fc->initialized || (for_background && fc->blocked);···162155 if (intr)163156 goto out;164157 }158158+ /* Matches smp_wmb() in fuse_set_initialized() */159159+ smp_rmb();165160166161 err = -ENOTCONN;167162 if (!fc->connected)···262253263254 atomic_inc(&fc->num_waiting);264255 wait_event(fc->blocked_waitq, fc->initialized);256256+ /* Matches smp_wmb() in fuse_set_initialized() */257257+ smp_rmb();265258 req = fuse_request_alloc(0);266259 if (!req)267260 req = get_reserved_req(fc, file);···522511}523512EXPORT_SYMBOL_GPL(fuse_request_send);524513514514+static void fuse_adjust_compat(struct fuse_conn *fc, struct fuse_args *args)515515+{516516+ if (fc->minor < 4 && args->in.h.opcode == FUSE_STATFS)517517+ args->out.args[0].size = FUSE_COMPAT_STATFS_SIZE;518518+519519+ if (fc->minor < 9) {520520+ switch (args->in.h.opcode) {521521+ case FUSE_LOOKUP:522522+ case FUSE_CREATE:523523+ case FUSE_MKNOD:524524+ case FUSE_MKDIR:525525+ case FUSE_SYMLINK:526526+ case FUSE_LINK:527527+ args->out.args[0].size = FUSE_COMPAT_ENTRY_OUT_SIZE;528528+ break;529529+ case FUSE_GETATTR:530530+ case FUSE_SETATTR:531531+ args->out.args[0].size = FUSE_COMPAT_ATTR_OUT_SIZE;532532+ break;533533+ }534534+ }535535+ if (fc->minor < 12) {536536+ switch (args->in.h.opcode) {537537+ case FUSE_CREATE:538538+ args->in.args[0].size = sizeof(struct fuse_open_in);539539+ break;540540+ case FUSE_MKNOD:541541+ args->in.args[0].size = FUSE_COMPAT_MKNOD_IN_SIZE;542542+ break;543543+ }544544+ }545545+}546546+525547ssize_t fuse_simple_request(struct fuse_conn *fc, struct fuse_args *args)526548{527549 struct fuse_req *req;···563519 req = fuse_get_req(fc, 0);564520 if 
(IS_ERR(req))565521 return PTR_ERR(req);522522+523523+ /* Needs to be done after fuse_get_req() so that fc->minor is valid */524524+ fuse_adjust_compat(fc, args);566525567526 req->in.h.opcode = args->in.h.opcode;568527 req->in.h.nodeid = args->in.h.nodeid;···21742127 if (fc->connected) {21752128 fc->connected = 0;21762129 fc->blocked = 0;21772177- fc->initialized = 1;21302130+ fuse_set_initialized(fc);21782131 end_io_requests(fc);21792132 end_queued_requests(fc);21802133 end_polls(fc);···21932146 spin_lock(&fc->lock);21942147 fc->connected = 0;21952148 fc->blocked = 0;21962196- fc->initialized = 1;21492149+ fuse_set_initialized(fc);21972150 end_queued_requests(fc);21982151 end_polls(fc);21992152 wake_up_all(&fc->blocked_waitq);
···362362 rs.cont_size = isonum_733(rr->u.CE.size);363363 break;364364 case SIG('E', 'R'):365365+ /* Invalid length of ER tag id? */366366+ if (rr->u.ER.len_id + offsetof(struct rock_ridge, u.ER.data) > rr->len)367367+ goto out;365368 ISOFS_SB(inode->i_sb)->s_rock = 1;366369 printk(KERN_DEBUG "ISO 9660 Extensions: ");367370 {
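The rock.c hunk above rejects an ER entry whose claimed identifier length would run past the record itself. A simplified stand-in for that bounds check (the struct here is illustrative, not the real on-disk `rock_ridge` layout):

```c
#include <assert.h>
#include <stddef.h>

struct er_record {
    unsigned char len;     /* total record length, read from the media */
    unsigned char len_id;  /* claimed identifier length, read from the media */
    unsigned char data[];  /* identifier bytes follow */
};

/* A length field from untrusted media must be validated against the
 * enclosing record before the payload is touched, or a crafted image
 * can walk the parser out of bounds. */
int er_record_valid(const struct er_record *rr)
{
    return (size_t)rr->len_id + offsetof(struct er_record, data) <= rr->len;
}
```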
+8-4
fs/kernfs/dir.c
···201201static int kernfs_name_compare(unsigned int hash, const char *name,202202 const void *ns, const struct kernfs_node *kn)203203{204204- if (hash != kn->hash)205205- return hash - kn->hash;206206- if (ns != kn->ns)207207- return ns - kn->ns;204204+ if (hash < kn->hash)205205+ return -1;206206+ if (hash > kn->hash)207207+ return 1;208208+ if (ns < kn->ns)209209+ return -1;210210+ if (ns > kn->ns)211211+ return 1;208212 return strcmp(name, kn->name);209213}210214
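The kernfs hunk above replaces `hash - kn->hash` with explicit three-way comparisons: returning the difference of unsigned (or pointer-width) values as an `int` can wrap or truncate and flip the sign of the result. A small demonstration with hypothetical helpers:

```c
#include <assert.h>

/* Buggy: for a = 0, b = 0x80000001 the unsigned difference is
 * 0x7fffffff, which converts to a positive int even though a < b. */
int cmp_by_subtraction(unsigned int a, unsigned int b)
{
    return (int)(a - b);
}

/* Safe: the sign of the result depends only on the actual ordering. */
int cmp_three_way(unsigned int a, unsigned int b)
{
    if (a < b)
        return -1;
    if (a > b)
        return 1;
    return 0;
}
```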
+4-4
fs/lockd/svc.c
···138138139139 dprintk("NFS locking service started (ver " LOCKD_VERSION ").\n");140140141141- if (!nlm_timeout)142142- nlm_timeout = LOCKD_DFLT_TIMEO;143143- nlmsvc_timeout = nlm_timeout * HZ;144144-145141 /*146142 * The main request loop. We don't terminate until the last147143 * NFS mount or NFS daemon has gone away.···345349 if (nlmsvc_users)346350 printk(KERN_WARNING347351 "lockd_up: no pid, %d users??\n", nlmsvc_users);352352+353353+ if (!nlm_timeout)354354+ nlm_timeout = LOCKD_DFLT_TIMEO;355355+ nlmsvc_timeout = nlm_timeout * HZ;348356349357 serv = svc_create(&nlmsvc_program, LOCKD_BUFSIZE, svc_rpcb_cleanup);350358 if (!serv) {
+1-1
fs/locks.c
···17021702 break;17031703 }17041704 trace_generic_delete_lease(inode, fl);17051705- if (fl)17051705+ if (fl && IS_LEASE(fl))17061706 error = fl->fl_lmops->lm_change(before, F_UNLCK, &dispose);17071707 spin_unlock(&inode->i_lock);17081708 locks_dispose_list(&dispose);
+27-15
fs/nfs/nfs4client.c
···228228 kfree(clp->cl_serverowner);229229 kfree(clp->cl_serverscope);230230 kfree(clp->cl_implid);231231+ kfree(clp->cl_owner_id);231232}232233233234void nfs4_free_client(struct nfs_client *clp)···453452 spin_unlock(&nn->nfs_client_lock);454453}455454455455+static bool nfs4_match_client_owner_id(const struct nfs_client *clp1,456456+ const struct nfs_client *clp2)457457+{458458+ if (clp1->cl_owner_id == NULL || clp2->cl_owner_id == NULL)459459+ return true;460460+ return strcmp(clp1->cl_owner_id, clp2->cl_owner_id) == 0;461461+}462462+456463/**457464 * nfs40_walk_client_list - Find server that recognizes a client ID458465 *···492483 if (pos->rpc_ops != new->rpc_ops)493484 continue;494485495495- if (pos->cl_proto != new->cl_proto)496496- continue;497497-498486 if (pos->cl_minorversion != new->cl_minorversion)499487 continue;500488···514508 continue;515509516510 if (pos->cl_clientid != new->cl_clientid)511511+ continue;512512+513513+ if (!nfs4_match_client_owner_id(pos, new))517514 continue;518515519516 atomic_inc(&pos->cl_count);···575566}576567577568/*578578- * Returns true if the server owners match569569+ * Returns true if the server major ids match579570 */580571static bool581581-nfs4_match_serverowners(struct nfs_client *a, struct nfs_client *b)572572+nfs4_check_clientid_trunking(struct nfs_client *a, struct nfs_client *b)582573{583574 struct nfs41_server_owner *o1 = a->cl_serverowner;584575 struct nfs41_server_owner *o2 = b->cl_serverowner;585585-586586- if (o1->minor_id != o2->minor_id) {587587- dprintk("NFS: --> %s server owner minor IDs do not match\n",588588- __func__);589589- return false;590590- }591576592577 if (o1->major_id_sz != o2->major_id_sz)593578 goto out_major_mismatch;···624621 if (pos->rpc_ops != new->rpc_ops)625622 continue;626623627627- if (pos->cl_proto != new->cl_proto)628628- continue;629629-630624 if (pos->cl_minorversion != new->cl_minorversion)631625 continue;632626···654654 if (!nfs4_match_clientids(pos, new))655655 
continue;656656657657- if (!nfs4_match_serverowners(pos, new))657657+ /*658658+ * Note that session trunking is just a special subcase of659659+ * client id trunking. In either case, we want to fall back660660+ * to using the existing nfs_client.661661+ */662662+ if (!nfs4_check_clientid_trunking(pos, new))663663+ continue;664664+665665+ /* Unlike NFSv4.0, we know that NFSv4.1 always uses the666666+ * uniform string, however someone might switch the667667+ * uniquifier string on us.668668+ */669669+ if (!nfs4_match_client_owner_id(pos, new))658670 continue;659671660672 atomic_inc(&pos->cl_count);
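The `nfs4_match_client_owner_id()` helper introduced above treats a missing uniquifier on either side as a match and otherwise compares the strings. A direct userspace sketch of that rule (function name and sample ids are hypothetical):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Two client owner ids match when either side has no uniquifier string,
 * or when the strings compare equal. */
bool owner_ids_match(const char *a, const char *b)
{
    if (a == NULL || b == NULL)
        return true;
    return strcmp(a, b) == 0;
}
```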
···9494 struct inode *inode,9595 const char *symname);96969797+static int ocfs2_double_lock(struct ocfs2_super *osb,9898+ struct buffer_head **bh1,9999+ struct inode *inode1,100100+ struct buffer_head **bh2,101101+ struct inode *inode2,102102+ int rename);103103+104104+static void ocfs2_double_unlock(struct inode *inode1, struct inode *inode2);97105/* An orphan dir name is an 8 byte value, printed as a hex string */98106#define OCFS2_ORPHAN_NAMELEN ((int)(2 * sizeof(u64)))99107···686678{687679 handle_t *handle;688680 struct inode *inode = old_dentry->d_inode;681681+ struct inode *old_dir = old_dentry->d_parent->d_inode;689682 int err;690683 struct buffer_head *fe_bh = NULL;684684+ struct buffer_head *old_dir_bh = NULL;691685 struct buffer_head *parent_fe_bh = NULL;692686 struct ocfs2_dinode *fe = NULL;693687 struct ocfs2_super *osb = OCFS2_SB(dir->i_sb);···706696707697 dquot_initialize(dir);708698709709- err = ocfs2_inode_lock_nested(dir, &parent_fe_bh, 1, OI_LS_PARENT);699699+ err = ocfs2_double_lock(osb, &old_dir_bh, old_dir,700700+ &parent_fe_bh, dir, 0);710701 if (err < 0) {711702 if (err != -ENOENT)712703 mlog_errno(err);713704 return err;705705+ }706706+707707+ /* make sure both dirs have bhs708708+ * get an extra ref on old_dir_bh if old==new */709709+ if (!parent_fe_bh) {710710+ if (old_dir_bh) {711711+ parent_fe_bh = old_dir_bh;712712+ get_bh(parent_fe_bh);713713+ } else {714714+ mlog(ML_ERROR, "%s: no old_dir_bh!\n", osb->uuid_str);715715+ err = -EIO;716716+ goto out;717717+ }714718 }715719716720 if (!dir->i_nlink) {···732708 goto out;733709 }734710735735- err = ocfs2_lookup_ino_from_name(dir, old_dentry->d_name.name,711711+ err = ocfs2_lookup_ino_from_name(old_dir, old_dentry->d_name.name,736712 old_dentry->d_name.len, &old_de_ino);737713 if (err) {738714 err = -ENOENT;···825801 ocfs2_inode_unlock(inode, 1);826802827803out:828828- ocfs2_inode_unlock(dir, 1);804804+ ocfs2_double_unlock(old_dir, dir);829805830806 brelse(fe_bh);831807 
brelse(parent_fe_bh);808808+ brelse(old_dir_bh);832809833810 ocfs2_free_dir_lookup_result(&lookup);834811···10971072}1098107310991074/*11001100- * The only place this should be used is rename!10751075+ * The only place this should be used is rename and link!11011076 * if they have the same id, then the 1st one is the only one locked.11021077 */11031078static int ocfs2_double_lock(struct ocfs2_super *osb,11041079 struct buffer_head **bh1,11051080 struct inode *inode1,11061081 struct buffer_head **bh2,11071107- struct inode *inode2)10821082+ struct inode *inode2,10831083+ int rename)11081084{11091085 int status;11101086 int inode1_is_ancestor, inode2_is_ancestor;···11531127 }11541128 /* lock id2 */11551129 status = ocfs2_inode_lock_nested(inode2, bh2, 1,11561156- OI_LS_RENAME1);11301130+ rename == 1 ? OI_LS_RENAME1 : OI_LS_PARENT);11571131 if (status < 0) {11581132 if (status != -ENOENT)11591133 mlog_errno(status);···11621136 }1163113711641138 /* lock id1 */11651165- status = ocfs2_inode_lock_nested(inode1, bh1, 1, OI_LS_RENAME2);11391139+ status = ocfs2_inode_lock_nested(inode1, bh1, 1,11401140+ rename == 1 ? OI_LS_RENAME2 : OI_LS_PARENT);11661141 if (status < 0) {11671142 /*11681143 * An error return must mean that no cluster locks···1279125212801253 /* if old and new are the same, this'll just do one lock. */12811254 status = ocfs2_double_lock(osb, &old_dir_bh, old_dir,12821282- &new_dir_bh, new_dir);12551255+ &new_dir_bh, new_dir, 1);12831256 if (status < 0) {12841257 mlog_errno(status);12851258 goto bail;
+16-15
fs/udf/dir.c
···
 	sector_t offset;
 	int i, num, ret = 0;
 	struct extent_position epos = { NULL, 0, {0, 0} };
+	struct super_block *sb = dir->i_sb;

 	if (ctx->pos == 0) {
 		if (!dir_emit_dot(file, ctx))
···
 	if (nf_pos == 0)
 		nf_pos = udf_ext0_offset(dir);

-	fibh.soffset = fibh.eoffset = nf_pos & (dir->i_sb->s_blocksize - 1);
+	fibh.soffset = fibh.eoffset = nf_pos & (sb->s_blocksize - 1);
 	if (iinfo->i_alloc_type != ICBTAG_FLAG_AD_IN_ICB) {
-		if (inode_bmap(dir, nf_pos >> dir->i_sb->s_blocksize_bits,
+		if (inode_bmap(dir, nf_pos >> sb->s_blocksize_bits,
 			       &epos, &eloc, &elen, &offset)
 		    != (EXT_RECORDED_ALLOCATED >> 30)) {
 			ret = -ENOENT;
 			goto out;
 		}
-		block = udf_get_lb_pblock(dir->i_sb, &eloc, offset);
-		if ((++offset << dir->i_sb->s_blocksize_bits) < elen) {
+		block = udf_get_lb_pblock(sb, &eloc, offset);
+		if ((++offset << sb->s_blocksize_bits) < elen) {
 			if (iinfo->i_alloc_type == ICBTAG_FLAG_AD_SHORT)
 				epos.offset -= sizeof(struct short_ad);
 			else if (iinfo->i_alloc_type ==
···
 			offset = 0;
 		}

-		if (!(fibh.sbh = fibh.ebh = udf_tread(dir->i_sb, block))) {
+		if (!(fibh.sbh = fibh.ebh = udf_tread(sb, block))) {
 			ret = -EIO;
 			goto out;
 		}

-		if (!(offset & ((16 >> (dir->i_sb->s_blocksize_bits - 9)) - 1))) {
-			i = 16 >> (dir->i_sb->s_blocksize_bits - 9);
-			if (i + offset > (elen >> dir->i_sb->s_blocksize_bits))
-				i = (elen >> dir->i_sb->s_blocksize_bits) - offset;
+		if (!(offset & ((16 >> (sb->s_blocksize_bits - 9)) - 1))) {
+			i = 16 >> (sb->s_blocksize_bits - 9);
+			if (i + offset > (elen >> sb->s_blocksize_bits))
+				i = (elen >> sb->s_blocksize_bits) - offset;
 			for (num = 0; i > 0; i--) {
-				block = udf_get_lb_pblock(dir->i_sb, &eloc, offset + i);
-				tmp = udf_tgetblk(dir->i_sb, block);
+				block = udf_get_lb_pblock(sb, &eloc, offset + i);
+				tmp = udf_tgetblk(sb, block);
 				if (tmp && !buffer_uptodate(tmp) && !buffer_locked(tmp))
 					bha[num++] = tmp;
 				else
···
 		}

 		if ((cfi.fileCharacteristics & FID_FILE_CHAR_DELETED) != 0) {
-			if (!UDF_QUERY_FLAG(dir->i_sb, UDF_FLAG_UNDELETE))
+			if (!UDF_QUERY_FLAG(sb, UDF_FLAG_UNDELETE))
 				continue;
 		}

 		if ((cfi.fileCharacteristics & FID_FILE_CHAR_HIDDEN) != 0) {
-			if (!UDF_QUERY_FLAG(dir->i_sb, UDF_FLAG_UNHIDE))
+			if (!UDF_QUERY_FLAG(sb, UDF_FLAG_UNHIDE))
 				continue;
 		}
···
 			continue;
 		}

-		flen = udf_get_filename(dir->i_sb, nameptr, fname, lfi);
+		flen = udf_get_filename(sb, nameptr, lfi, fname, UDF_NAME_LEN);
 		if (!flen)
 			continue;

 		tloc = lelb_to_cpu(cfi.icb.extLocation);
-		iblock = udf_get_lb_pblock(dir->i_sb, &tloc, 0);
+		iblock = udf_get_lb_pblock(sb, &tloc, 0);
 		if (!dir_emit(ctx, fname, flen, iblock, DT_UNKNOWN))
 			goto out;
 	} /* end while */
+14
fs/udf/inode.c
···
 	}
 	inode->i_generation = iinfo->i_unique;

+	/* Sanity checks for files in ICB so that we don't get confused later */
+	if (iinfo->i_alloc_type == ICBTAG_FLAG_AD_IN_ICB) {
+		/*
+		 * For file in ICB data is stored in allocation descriptor
+		 * so sizes should match
+		 */
+		if (iinfo->i_lenAlloc != inode->i_size)
+			goto out;
+		/* File in ICB has to fit in there... */
+		if (inode->i_size > inode->i_sb->s_blocksize -
+					udf_file_entry_alloc_offset(inode))
+			goto out;
+	}
+
 	switch (fe->icbTag.fileType) {
 	case ICBTAG_FILE_TYPE_DIRECTORY:
 		inode->i_op = &udf_dir_inode_operations;
fs/udf/symlink.c
···
 #include <linux/buffer_head.h>
 #include "udf_i.h"

-static void udf_pc_to_char(struct super_block *sb, unsigned char *from,
-			   int fromlen, unsigned char *to)
+static int udf_pc_to_char(struct super_block *sb, unsigned char *from,
+			  int fromlen, unsigned char *to, int tolen)
 {
 	struct pathComponent *pc;
 	int elen = 0;
+	int comp_len;
 	unsigned char *p = to;

+	/* Reserve one byte for terminating \0 */
+	tolen--;
 	while (elen < fromlen) {
 		pc = (struct pathComponent *)(from + elen);
+		elen += sizeof(struct pathComponent);
 		switch (pc->componentType) {
 		case 1:
 			/*
 			 * Symlink points to some place which should be agreed
 			 * upon between originator and receiver of the media. Ignore.
 			 */
-			if (pc->lengthComponentIdent > 0)
+			if (pc->lengthComponentIdent > 0) {
+				elen += pc->lengthComponentIdent;
 				break;
+			}
 			/* Fall through */
 		case 2:
+			if (tolen == 0)
+				return -ENAMETOOLONG;
 			p = to;
 			*p++ = '/';
+			tolen--;
 			break;
 		case 3:
+			if (tolen < 3)
+				return -ENAMETOOLONG;
 			memcpy(p, "../", 3);
 			p += 3;
+			tolen -= 3;
 			break;
 		case 4:
+			if (tolen < 2)
+				return -ENAMETOOLONG;
 			memcpy(p, "./", 2);
 			p += 2;
+			tolen -= 2;
 			/* that would be . - just ignore */
 			break;
 		case 5:
-			p += udf_get_filename(sb, pc->componentIdent, p,
-					      pc->lengthComponentIdent);
+			elen += pc->lengthComponentIdent;
+			if (elen > fromlen)
+				return -EIO;
+			comp_len = udf_get_filename(sb, pc->componentIdent,
+						    pc->lengthComponentIdent,
+						    p, tolen);
+			p += comp_len;
+			tolen -= comp_len;
+			if (tolen == 0)
+				return -ENAMETOOLONG;
 			*p++ = '/';
+			tolen--;
 			break;
 		}
-		elen += sizeof(struct pathComponent) + pc->lengthComponentIdent;
 	}
 	if (p > to + 1)
 		p[-1] = '\0';
 	else
 		p[0] = '\0';
+	return 0;
 }

 static int udf_symlink_filler(struct file *file, struct page *page)
···
 	struct inode *inode = page->mapping->host;
 	struct buffer_head *bh = NULL;
 	unsigned char *symlink;
-	int err = -EIO;
+	int err;
 	unsigned char *p = kmap(page);
 	struct udf_inode_info *iinfo;
 	uint32_t pos;
+
+	/* We don't support symlinks longer than one block */
+	if (inode->i_size > inode->i_sb->s_blocksize) {
+		err = -ENAMETOOLONG;
+		goto out_unmap;
+	}

 	iinfo = UDF_I(inode);
 	pos = udf_block_map(inode, 0);
···
 	} else {
 		bh = sb_bread(inode->i_sb, pos);

-		if (!bh)
-			goto out;
+		if (!bh) {
+			err = -EIO;
+			goto out_unlock_inode;
+		}

 		symlink = bh->b_data;
 	}

-	udf_pc_to_char(inode->i_sb, symlink, inode->i_size, p);
+	err = udf_pc_to_char(inode->i_sb, symlink, inode->i_size, p, PAGE_SIZE);
 	brelse(bh);
+	if (err)
+		goto out_unlock_inode;

 	up_read(&iinfo->i_data_sem);
 	SetPageUptodate(page);
···
 	unlock_page(page);
 	return 0;

-out:
+out_unlock_inode:
 	up_read(&iinfo->i_data_sem);
 	SetPageError(page);
+out_unmap:
 	kunmap(page);
 	unlock_page(page);
 	return err;
+2-1
fs/udf/udfdecl.h
···
 }

 /* unicode.c */
-extern int udf_get_filename(struct super_block *, uint8_t *, uint8_t *, int);
+extern int udf_get_filename(struct super_block *, uint8_t *, int, uint8_t *,
+			    int);
 extern int udf_put_filename(struct super_block *, const uint8_t *, uint8_t *,
 			    int);
 extern int udf_build_ustr(struct ustr *, dstring *, int);
+16-12
fs/udf/unicode.c
···

 #include "udf_sb.h"

-static int udf_translate_to_linux(uint8_t *, uint8_t *, int, uint8_t *, int);
+static int udf_translate_to_linux(uint8_t *, int, uint8_t *, int, uint8_t *,
+				  int);

 static int udf_char_to_ustr(struct ustr *dest, const uint8_t *src, int strlen)
 {
···
 	return u_len + 1;
 }

-int udf_get_filename(struct super_block *sb, uint8_t *sname, uint8_t *dname,
-		     int flen)
+int udf_get_filename(struct super_block *sb, uint8_t *sname, int slen,
+		     uint8_t *dname, int dlen)
 {
 	struct ustr *filename, *unifilename;
 	int len = 0;
···
 	if (!unifilename)
 		goto out1;

-	if (udf_build_ustr_exact(unifilename, sname, flen))
+	if (udf_build_ustr_exact(unifilename, sname, slen))
 		goto out2;

 	if (UDF_QUERY_FLAG(sb, UDF_FLAG_UTF8)) {
···
 	} else
 		goto out2;

-	len = udf_translate_to_linux(dname, filename->u_name, filename->u_len,
+	len = udf_translate_to_linux(dname, dlen,
+				     filename->u_name, filename->u_len,
 				     unifilename->u_name, unifilename->u_len);
 out2:
 	kfree(unifilename);
···
 #define EXT_MARK	'.'
 #define CRC_MARK	'#'
 #define EXT_SIZE	5
+/* Number of chars we need to store generated CRC to make filename unique */
+#define CRC_LEN		5

-static int udf_translate_to_linux(uint8_t *newName, uint8_t *udfName,
-				  int udfLen, uint8_t *fidName,
-				  int fidNameLen)
+static int udf_translate_to_linux(uint8_t *newName, int newLen,
+				  uint8_t *udfName, int udfLen,
+				  uint8_t *fidName, int fidNameLen)
 {
 	int index, newIndex = 0, needsCRC = 0;
 	int extIndex = 0, newExtIndex = 0, hasExt = 0;
···
 				newExtIndex = newIndex;
 			}
 		}
-		if (newIndex < 256)
+		if (newIndex < newLen)
 			newName[newIndex++] = curr;
 		else
 			needsCRC = 1;
···
 			}
 			ext[localExtIndex++] = curr;
 		}
-		maxFilenameLen = 250 - localExtIndex;
+		maxFilenameLen = newLen - CRC_LEN - localExtIndex;
 		if (newIndex > maxFilenameLen)
 			newIndex = maxFilenameLen;
 		else
 			newIndex = newExtIndex;
-	} else if (newIndex > 250)
-		newIndex = 250;
+	} else if (newIndex > newLen - CRC_LEN)
+		newIndex = newLen - CRC_LEN;
 	newName[newIndex++] = CRC_MARK;
 	valueCRC = crc_itu_t(0, fidName, fidNameLen);
 	newName[newIndex++] = hex_asc_upper_hi(valueCRC >> 8);
+4-4
include/acpi/processor.h
···
 struct acpi_processor {
 	acpi_handle handle;
 	u32 acpi_id;
-	u32 apic_id;
-	u32 id;
+	u32 phys_id;	/* CPU hardware ID such as APIC ID for x86 */
+	u32 id;		/* CPU logical ID allocated by OS */
 	u32 pblk;
 	int performance_platform_limit;
 	int throttling_platform_limit;
···
 #endif				/* CONFIG_CPU_FREQ */

 /* in processor_core.c */
-int acpi_get_apicid(acpi_handle, int type, u32 acpi_id);
-int acpi_map_cpuid(int apic_id, u32 acpi_id);
+int acpi_get_phys_id(acpi_handle, int type, u32 acpi_id);
+int acpi_map_cpuid(int phys_id, u32 acpi_id);
 int acpi_get_cpuid(acpi_handle, int type, u32 acpi_id);

 /* in processor_pdc.c */
include/dt-bindings/thermal/thermal.h
···
 #define _DT_BINDINGS_THERMAL_THERMAL_H

 /* On cooling devices upper and lower limits */
-#define THERMAL_NO_LIMIT		(-1UL)
+#define THERMAL_NO_LIMIT		(~0)

 #endif
+2-2
include/linux/acpi.h
···

 #ifdef CONFIG_ACPI_HOTPLUG_CPU
 /* Arch dependent functions for cpu hotplug support */
-int acpi_map_lsapic(acpi_handle handle, int physid, int *pcpu);
-int acpi_unmap_lsapic(int cpu);
+int acpi_map_cpu(acpi_handle handle, int physid, int *pcpu);
+int acpi_unmap_cpu(int cpu);
 #endif /* CONFIG_ACPI_HOTPLUG_CPU */

 int acpi_register_ioapic(acpi_handle handle, u64 phys_addr, u32 gsi_base);
+6-2
include/linux/blk-mq.h
···
 	unsigned long		flags;		/* BLK_MQ_F_* flags */

 	struct request_queue	*queue;
-	unsigned int		queue_num;
 	struct blk_flush_queue	*fq;

 	void			*driver_data;
···
 	unsigned long		dispatched[BLK_MQ_MAX_DISPATCH_ORDER];

 	unsigned int		numa_node;
-	unsigned int		cmd_size;	/* per-request extra data */
+	unsigned int		queue_num;

 	atomic_t		nr_active;
···
 struct blk_mq_hw_ctx *blk_mq_map_queue(struct request_queue *, const int ctx_index);
 struct blk_mq_hw_ctx *blk_mq_alloc_single_hw_queue(struct blk_mq_tag_set *, unsigned int, int);

+int blk_mq_request_started(struct request *rq);
 void blk_mq_start_request(struct request *rq);
 void blk_mq_end_request(struct request *rq, int error);
 void __blk_mq_end_request(struct request *rq, int error);

 void blk_mq_requeue_request(struct request *rq);
 void blk_mq_add_to_requeue_list(struct request *rq, bool at_head);
+void blk_mq_cancel_requeue_work(struct request_queue *q);
 void blk_mq_kick_requeue_list(struct request_queue *q);
+void blk_mq_abort_requeue_list(struct request_queue *q);
 void blk_mq_complete_request(struct request *rq);

 void blk_mq_stop_hw_queue(struct blk_mq_hw_ctx *hctx);
···
 void blk_mq_delay_queue(struct blk_mq_hw_ctx *hctx, unsigned long msecs);
 void blk_mq_tag_busy_iter(struct blk_mq_hw_ctx *hctx, busy_iter_fn *fn,
 		void *priv);
+void blk_mq_unfreeze_queue(struct request_queue *q);
+void blk_mq_freeze_queue_start(struct request_queue *q);

 /*
  * Driver command data is immediately after the request. So subtract request
include/linux/compiler.h
···
 	}
 }

-static __always_inline void __assign_once_size(volatile void *p, void *res, int size)
+static __always_inline void __write_once_size(volatile void *p, void *res, int size)
 {
 	switch (size) {
 	case 1: *(volatile __u8 *)p = *(__u8 *)res; break;
···
 /*
  * Prevent the compiler from merging or refetching reads or writes. The
  * compiler is also forbidden from reordering successive instances of
- * READ_ONCE, ASSIGN_ONCE and ACCESS_ONCE (see below), but only when the
+ * READ_ONCE, WRITE_ONCE and ACCESS_ONCE (see below), but only when the
  * compiler is aware of some particular ordering. One way to make the
  * compiler aware of ordering is to put the two invocations of READ_ONCE,
- * ASSIGN_ONCE or ACCESS_ONCE() in different C statements.
+ * WRITE_ONCE or ACCESS_ONCE() in different C statements.
  *
  * In contrast to ACCESS_ONCE these two macros will also work on aggregate
  * data types like structs or unions. If the size of the accessed data
  * type exceeds the word size of the machine (e.g., 32 bits or 64 bits)
- * READ_ONCE() and ASSIGN_ONCE() will fall back to memcpy and print a
+ * READ_ONCE() and WRITE_ONCE() will fall back to memcpy and print a
  * compile-time warning.
  *
  * Their two major use cases are: (1) Mediating communication between
···
 #define READ_ONCE(x) \
 	({ typeof(x) __val; __read_once_size(&x, &__val, sizeof(__val)); __val; })

-#define ASSIGN_ONCE(val, x) \
-	({ typeof(x) __val; __val = val; __assign_once_size(&x, &__val, sizeof(__val)); __val; })
+#define WRITE_ONCE(x, val) \
+	({ typeof(x) __val; __val = val; __write_once_size(&x, &__val, sizeof(__val)); __val; })

 #endif /* __KERNEL__ */
include/linux/cpuidle.h
···
 };

 /* Idle State Flags */
-#define CPUIDLE_FLAG_TIME_INVALID	(0x01) /* is residency time measurable? */
 #define CPUIDLE_FLAG_COUPLED	(0x02) /* state applies to multiple cpus */
 #define CPUIDLE_FLAG_TIMER_STOP (0x04)  /* timer is stopped on this state */
···
 /**
  * cpuidle_get_last_residency - retrieves the last state's residency time
  * @dev: the target CPU
- *
- * NOTE: this value is invalid if CPUIDLE_FLAG_TIME_INVALID is set
  */
 static inline int cpuidle_get_last_residency(struct cpuidle_device *dev)
 {
+1-1
include/linux/fs.h
···
 #define FMODE_CAN_WRITE         ((__force fmode_t)0x40000)

 /* File was opened by fanotify and shouldn't generate fanotify events */
-#define FMODE_NONOTIFY		((__force fmode_t)0x1000000)
+#define FMODE_NONOTIFY		((__force fmode_t)0x4000000)

 /*
  * Flag for rw_copy_check_uvector and compat_rw_copy_check_uvector
+53-9
include/linux/kdb.h
···
  * Copyright (C) 2009 Jason Wessel <jason.wessel@windriver.com>
  */

+/* Shifted versions of the command enable bits are be used if the command
+ * has no arguments (see kdb_check_flags). This allows commands, such as
+ * go, to have different permissions depending upon whether it is called
+ * with an argument.
+ */
+#define KDB_ENABLE_NO_ARGS_SHIFT 10
+
 typedef enum {
-	KDB_REPEAT_NONE = 0,	/* Do not repeat this command */
-	KDB_REPEAT_NO_ARGS,	/* Repeat the command without arguments */
-	KDB_REPEAT_WITH_ARGS,	/* Repeat the command including its arguments */
-} kdb_repeat_t;
+	KDB_ENABLE_ALL = (1 << 0), /* Enable everything */
+	KDB_ENABLE_MEM_READ = (1 << 1),
+	KDB_ENABLE_MEM_WRITE = (1 << 2),
+	KDB_ENABLE_REG_READ = (1 << 3),
+	KDB_ENABLE_REG_WRITE = (1 << 4),
+	KDB_ENABLE_INSPECT = (1 << 5),
+	KDB_ENABLE_FLOW_CTRL = (1 << 6),
+	KDB_ENABLE_SIGNAL = (1 << 7),
+	KDB_ENABLE_REBOOT = (1 << 8),
+	/* User exposed values stop here, all remaining flags are
+	 * exclusively used to describe a commands behaviour.
+	 */
+
+	KDB_ENABLE_ALWAYS_SAFE = (1 << 9),
+	KDB_ENABLE_MASK = (1 << KDB_ENABLE_NO_ARGS_SHIFT) - 1,
+
+	KDB_ENABLE_ALL_NO_ARGS = KDB_ENABLE_ALL << KDB_ENABLE_NO_ARGS_SHIFT,
+	KDB_ENABLE_MEM_READ_NO_ARGS = KDB_ENABLE_MEM_READ
+				      << KDB_ENABLE_NO_ARGS_SHIFT,
+	KDB_ENABLE_MEM_WRITE_NO_ARGS = KDB_ENABLE_MEM_WRITE
+				       << KDB_ENABLE_NO_ARGS_SHIFT,
+	KDB_ENABLE_REG_READ_NO_ARGS = KDB_ENABLE_REG_READ
+				      << KDB_ENABLE_NO_ARGS_SHIFT,
+	KDB_ENABLE_REG_WRITE_NO_ARGS = KDB_ENABLE_REG_WRITE
+				       << KDB_ENABLE_NO_ARGS_SHIFT,
+	KDB_ENABLE_INSPECT_NO_ARGS = KDB_ENABLE_INSPECT
+				     << KDB_ENABLE_NO_ARGS_SHIFT,
+	KDB_ENABLE_FLOW_CTRL_NO_ARGS = KDB_ENABLE_FLOW_CTRL
+				       << KDB_ENABLE_NO_ARGS_SHIFT,
+	KDB_ENABLE_SIGNAL_NO_ARGS = KDB_ENABLE_SIGNAL
+				    << KDB_ENABLE_NO_ARGS_SHIFT,
+	KDB_ENABLE_REBOOT_NO_ARGS = KDB_ENABLE_REBOOT
+				    << KDB_ENABLE_NO_ARGS_SHIFT,
+	KDB_ENABLE_ALWAYS_SAFE_NO_ARGS = KDB_ENABLE_ALWAYS_SAFE
+					 << KDB_ENABLE_NO_ARGS_SHIFT,
+	KDB_ENABLE_MASK_NO_ARGS = KDB_ENABLE_MASK << KDB_ENABLE_NO_ARGS_SHIFT,
+
+	KDB_REPEAT_NO_ARGS = 0x40000000, /* Repeat the command w/o arguments */
+	KDB_REPEAT_WITH_ARGS = 0x80000000, /* Repeat the command with args */
+} kdb_cmdflags_t;

 typedef int (*kdb_func_t)(int, const char **);

···
 #define KDB_BADLENGTH	(-19)
 #define KDB_NOBP	(-20)
 #define KDB_BADADDR	(-21)
+#define KDB_NOPERM	(-22)

 /*
  * kdb_diemsg
···

 /* Dynamic kdb shell command registration */
 extern int kdb_register(char *, kdb_func_t, char *, char *, short);
-extern int kdb_register_repeat(char *, kdb_func_t, char *, char *,
-			       short, kdb_repeat_t);
+extern int kdb_register_flags(char *, kdb_func_t, char *, char *,
+			      short, kdb_cmdflags_t);
 extern int kdb_unregister(char *);
 #else /* ! CONFIG_KGDB_KDB */
 static inline __printf(1, 2) int kdb_printf(const char *fmt, ...) { return 0; }
 static inline void kdb_init(int level) {}
 static inline int kdb_register(char *cmd, kdb_func_t func, char *usage,
 			       char *help, short minlen) { return 0; }
-static inline int kdb_register_repeat(char *cmd, kdb_func_t func, char *usage,
-				      char *help, short minlen,
-				      kdb_repeat_t repeat) { return 0; }
+static inline int kdb_register_flags(char *cmd, kdb_func_t func, char *usage,
+				     char *help, short minlen,
+				     kdb_cmdflags_t flags) { return 0; }
 static inline int kdb_unregister(char *cmd) { return 0; }
 #endif	/* CONFIG_KGDB_KDB */
 enum {
+2-20
include/linux/mfd/stmpe.h
···
 	STMPE_IDX_GPEDR_MSB,
 	STMPE_IDX_GPRER_LSB,
 	STMPE_IDX_GPFER_LSB,
+	STMPE_IDX_GPPUR_LSB,
+	STMPE_IDX_GPPDR_LSB,
 	STMPE_IDX_GPAFR_U_MSB,
 	STMPE_IDX_IEGPIOR_LSB,
 	STMPE_IDX_ISGPIOR_LSB,
···
 			      enum stmpe_block block);
 extern int stmpe_enable(struct stmpe *stmpe, unsigned int blocks);
 extern int stmpe_disable(struct stmpe *stmpe, unsigned int blocks);
-
-struct matrix_keymap_data;
-
-/**
- * struct stmpe_keypad_platform_data - STMPE keypad platform data
- * @keymap_data: key map table and size
- * @debounce_ms: debounce interval, in ms.  Maximum is
- *		 %STMPE_KEYPAD_MAX_DEBOUNCE.
- * @scan_count: number of key scanning cycles to confirm key data.
- *		Maximum is %STMPE_KEYPAD_MAX_SCAN_COUNT.
- * @no_autorepeat: disable key autorepeat
- */
-struct stmpe_keypad_platform_data {
-	const struct matrix_keymap_data *keymap_data;
-	unsigned int debounce_ms;
-	unsigned int scan_count;
-	bool no_autorepeat;
-};

 #define STMPE_GPIO_NOREQ_811_TOUCH	(0xf0)

···
  * @irq_gpio: gpio number over which irq will be requested (significant only if
  *	      irq_over_gpio is true)
  * @gpio: GPIO-specific platform data
- * @keypad: keypad-specific platform data
  * @ts: touchscreen-specific platform data
  */
 struct stmpe_platform_data {
···
 	int autosleep_timeout;

 	struct stmpe_gpio_platform_data *gpio;
-	struct stmpe_keypad_platform_data *keypad;
 	struct stmpe_ts_platform_data *ts;
 };
+1-1
include/linux/mm.h
···
 #if VM_GROWSUP
 extern int expand_upwards(struct vm_area_struct *vma, unsigned long address);
 #else
-  #define expand_upwards(vma, address) do { } while (0)
+  #define expand_upwards(vma, address) (0)
 #endif

 /* Look up the first VMA which satisfies addr < vm_end, NULL if none. */
+1
include/linux/mmc/sdhci.h
···
 #define SDHCI_SDR104_NEEDS_TUNING (1<<10)	/* SDR104/HS200 needs tuning */
 #define SDHCI_USING_RETUNING_TIMER (1<<11)	/* Host is using a retuning timer for the card */
 #define SDHCI_USE_64_BIT_DMA	(1<<12)	/* Use 64-bit DMA */
+#define SDHCI_HS400_TUNING	(1<<13)	/* Tuning for HS400 */

 	unsigned int version;	/* SDHCI spec. version */
+14-12
include/linux/netdevice.h
···
  *	3. Update dev->stats asynchronously and atomically, and define
  *	   neither operation.
  *
- * int (*ndo_vlan_rx_add_vid)(struct net_device *dev, __be16 proto, u16t vid);
+ * int (*ndo_vlan_rx_add_vid)(struct net_device *dev, __be16 proto, u16 vid);
  *	If device support VLAN filtering this function is called when a
  *	VLAN id is registered.
  *
- * int (*ndo_vlan_rx_kill_vid)(struct net_device *dev, unsigned short vid);
+ * int (*ndo_vlan_rx_kill_vid)(struct net_device *dev, __be16 proto, u16 vid);
  *	If device support VLAN filtering this function is called when a
  *	VLAN id is unregistered.
  *
···
  *	Callback to use for xmit over the accelerated station. This
  *	is used in place of ndo_start_xmit on accelerated net
  *	devices.
- * bool	(*ndo_gso_check) (struct sk_buff *skb,
- *			  struct net_device *dev);
+ * netdev_features_t (*ndo_features_check) (struct sk_buff *skb,
+ *					    struct net_device *dev
+ *					    netdev_features_t features);
  *	Called by core transmit path to determine if device is capable of
- *	performing GSO on a packet. The device returns true if it is
- *	able to GSO the packet, false otherwise. If the return value is
- *	false the stack will do software GSO.
+ *	performing offload operations on a given packet. This is to give
+ *	the device an opportunity to implement any restrictions that cannot
+ *	be otherwise expressed by feature flags. The check is called with
+ *	the set of features that the stack has calculated and it returns
+ *	those the driver believes to be appropriate.
  *
  * int (*ndo_switch_parent_id_get)(struct net_device *dev,
  *				   struct netdev_phys_item_id *psid);
···
 						    struct net_device *dev,
 						    void *priv);
 	int			(*ndo_get_lock_subclass)(struct net_device *dev);
-	bool			(*ndo_gso_check) (struct sk_buff *skb,
-						  struct net_device *dev);
+	netdev_features_t	(*ndo_features_check) (struct sk_buff *skb,
+						       struct net_device *dev,
+						       netdev_features_t features);
 #ifdef CONFIG_NET_SWITCHDEV
 	int			(*ndo_switch_parent_id_get)(struct net_device *dev,
 							    struct netdev_phys_item_id *psid);
···
 	list_for_each_entry_continue_rcu(d, &(net)->dev_base_head, dev_list)
 #define for_each_netdev_in_bond_rcu(bond, slave)	\
 		for_each_netdev_rcu(&init_net, slave)	\
-			if (netdev_master_upper_dev_get_rcu(slave) == bond)
+			if (netdev_master_upper_dev_get_rcu(slave) == (bond))
 #define net_device_entry(lh)	list_entry(lh, struct net_device, dev_list)

 static inline struct net_device *next_net_device(struct net_device *dev)
···
 					 netdev_features_t features)
 {
 	return skb_is_gso(skb) && (!skb_gso_ok(skb, features) ||
-		(dev->netdev_ops->ndo_gso_check &&
-		 !dev->netdev_ops->ndo_gso_check(skb, dev)) ||
 		unlikely((skb->ip_summed != CHECKSUM_PARTIAL) &&
 			 (skb->ip_summed != CHECKSUM_UNNECESSARY)));
 }
+2-2
include/linux/netlink.h
···
 	unsigned int	flags;
 	void		(*input)(struct sk_buff *skb);
 	struct mutex	*cb_mutex;
-	int		(*bind)(int group);
-	void		(*unbind)(int group);
+	int		(*bind)(struct net *net, int group);
+	void		(*unbind)(struct net *net, int group);
 	bool		(*compare)(struct net *net, struct sock *sk);
 };
+3
include/linux/nfs_fs_sb.h
···
 	/* idmapper */
 	struct idmap *		cl_idmap;

+	/* Client owner identifier */
+	const char *		cl_owner_id;
+
 	/* Our own IP address, as a null-terminated string.
 	 * This is used to generate the mv0 callback address.
 	 */
include/linux/rmap.h
···
 	atomic_t refcount;

 	/*
+	 * Count of child anon_vmas and VMAs which points to this anon_vma.
+	 *
+	 * This counter is used for making decision about reusing anon_vma
+	 * instead of forking new one. See comments in function anon_vma_clone.
+	 */
+	unsigned degree;
+
+	struct anon_vma *parent;	/* Parent of this anon_vma */
+
+	/*
 	 * NOTE: the LSB of the rb_root.rb_node is set by
 	 * mm_take_all_locks() _after_ taking the above lock. So the
 	 * rb_root must only be read/written after taking the above lock
include/net/genetlink.h
···
  *	do additional, common, filtering and return an error
  * @post_doit: called after an operation's doit callback, it may
  *	undo operations done by pre_doit, for example release locks
+ * @mcast_bind: a socket bound to the given multicast group (which
+ *	is given as the offset into the groups array)
+ * @mcast_unbind: a socket was unbound from the given multicast group
  * @attrbuf: buffer to store parsed attributes
  * @family_list: family list
  * @mcgrps: multicast groups used by this family (private)
···
 	void			(*post_doit)(const struct genl_ops *ops,
 					     struct sk_buff *skb,
 					     struct genl_info *info);
+	int			(*mcast_bind)(struct net *net, int group);
+	void			(*mcast_unbind)(struct net *net, int group);
 	struct nlattr **	attrbuf;	/* private */
 	const struct genl_ops *	ops;		/* private */
 	const struct genl_multicast_group *mcgrps; /* private */
···
 }

 static inline int genl_has_listeners(struct genl_family *family,
-				     struct sock *sk, unsigned int group)
+				     struct net *net, unsigned int group)
 {
 	if (WARN_ON_ONCE(group >= family->n_mcgrps))
 		return -EINVAL;
 	group = family->mcgrp_offset + group;
-	return netlink_has_listeners(sk, group);
+	return netlink_has_listeners(net->genl_sock, group);
 }
 #endif	/* __NET_GENERIC_NETLINK_H */
+2-5
include/net/mac80211.h
···
  *
  * @IEEE80211_KEY_FLAG_GENERATE_IV: This flag should be set by the
  *	driver to indicate that it requires IV generation for this
- *	particular key. Setting this flag does not necessarily mean that SKBs
- *	will have sufficient tailroom for ICV or MIC.
+ *	particular key.
  * @IEEE80211_KEY_FLAG_GENERATE_MMIC: This flag should be set by
  *	the driver for a TKIP key if it requires Michael MIC
  *	generation in software.
···
  * @IEEE80211_KEY_FLAG_PUT_IV_SPACE: This flag should be set by the driver
  *	if space should be prepared for the IV, but the IV
  *	itself should not be generated. Do not set together with
- *	@IEEE80211_KEY_FLAG_GENERATE_IV on the same key. Setting this flag does
- *	not necessarily mean that SKBs will have sufficient tailroom for ICV or
- *	MIC.
+ *	@IEEE80211_KEY_FLAG_GENERATE_IV on the same key.
  * @IEEE80211_KEY_FLAG_RX_MGMT: This key will be used to decrypt received
  *	management frames. The flag can help drivers that have a hardware
  *	crypto implementation that doesn't deal with management frames
include/sound/pcm_params.h
···
 }

 /**
- * params_channels - Get the sample rate from the hw params
+ * params_rate - Get the sample rate from the hw params
  * @p: hw params
  */
 static inline unsigned int params_rate(const struct snd_pcm_hw_params *p)
···
 }

 /**
- * params_channels - Get the period size (in frames) from the hw params
+ * params_period_size - Get the period size (in frames) from the hw params
  * @p: hw params
  */
 static inline unsigned int params_period_size(const struct snd_pcm_hw_params *p)
···
 }

 /**
- * params_channels - Get the number of periods from the hw params
+ * params_periods - Get the number of periods from the hw params
  * @p: hw params
  */
 static inline unsigned int params_periods(const struct snd_pcm_hw_params *p)
···
 }

 /**
- * params_channels - Get the buffer size (in frames) from the hw params
+ * params_buffer_size - Get the buffer size (in frames) from the hw params
  * @p: hw params
  */
 static inline unsigned int params_buffer_size(const struct snd_pcm_hw_params *p)
···
 }

 /**
- * params_channels - Get the buffer size (in bytes) from the hw params
+ * params_buffer_bytes - Get the buffer size (in bytes) from the hw params
  * @p: hw params
  */
 static inline unsigned int params_buffer_bytes(const struct snd_pcm_hw_params *p)
include/target/target_core_base.h
···
 #define DA_UNMAP_GRANULARITY_ALIGNMENT_DEFAULT 0
 /* Default max_write_same_len, disabled by default */
 #define DA_MAX_WRITE_SAME_LEN 0
-/* Default max transfer length */
-#define DA_FABRIC_MAX_SECTORS 8192
 /* Use a model alias based on the configfs backend device name */
 #define DA_EMULATE_MODEL_ALIAS 0
 /* Emulation for Direct Page Out */
···
 	u32		hw_block_size;
 	u32		block_size;
 	u32		hw_max_sectors;
-	u32		fabric_max_sectors;
 	u32		optimal_sectors;
 	u32		hw_queue_depth;
 	u32		queue_depth;
+1-1
include/uapi/asm-generic/fcntl.h
···

 /*
  * FMODE_EXEC is 0x20
- * FMODE_NONOTIFY is 0x1000000
+ * FMODE_NONOTIFY is 0x4000000
  * These cannot be used by userspace O_* until internal and external open
  * flags are split.
  * -Eric Paris
include/uapi/linux/openvswitch.h
···
 	OVS_PACKET_ATTR_USERDATA,    /* OVS_ACTION_ATTR_USERSPACE arg. */
 	OVS_PACKET_ATTR_EGRESS_TUN_KEY,  /* Nested OVS_TUNNEL_KEY_ATTR_*
 					    attributes. */
+	OVS_PACKET_ATTR_UNUSED1,
+	OVS_PACKET_ATTR_UNUSED2,
+	OVS_PACKET_ATTR_PROBE,      /* Packet operation is a feature probe,
+				       error logging should be suppressed. */
 	__OVS_PACKET_ATTR_MAX
 };
+7
include/uapi/linux/virtio_ring.h
···
 	struct vring_used *used;
 };

+/* Alignment requirements for vring elements.
+ * When using pre-virtio 1.0 layout, these fall out naturally.
+ */
+#define VRING_AVAIL_ALIGN_SIZE 2
+#define VRING_USED_ALIGN_SIZE 4
+#define VRING_DESC_ALIGN_SIZE 16
+
 /* The standard layout for the ring is a continuous chunk of memory which looks
  * like this. We assume num is a power of 2.
  *
+51
include/xen/interface/nmi.h
···
+/******************************************************************************
+ * nmi.h
+ *
+ * NMI callback registration and reason codes.
+ *
+ * Copyright (c) 2005, Keir Fraser <keir@xensource.com>
+ */
+
+#ifndef __XEN_PUBLIC_NMI_H__
+#define __XEN_PUBLIC_NMI_H__
+
+#include <xen/interface/xen.h>
+
+/*
+ * NMI reason codes:
+ * Currently these are x86-specific, stored in arch_shared_info.nmi_reason.
+ */
+ /* I/O-check error reported via ISA port 0x61, bit 6. */
+#define _XEN_NMIREASON_io_error     0
+#define XEN_NMIREASON_io_error      (1UL << _XEN_NMIREASON_io_error)
+ /* PCI SERR reported via ISA port 0x61, bit 7. */
+#define _XEN_NMIREASON_pci_serr     1
+#define XEN_NMIREASON_pci_serr      (1UL << _XEN_NMIREASON_pci_serr)
+ /* Unknown hardware-generated NMI. */
+#define _XEN_NMIREASON_unknown      2
+#define XEN_NMIREASON_unknown       (1UL << _XEN_NMIREASON_unknown)
+
+/*
+ * long nmi_op(unsigned int cmd, void *arg)
+ * NB. All ops return zero on success, else a negative error code.
+ */
+
+/*
+ * Register NMI callback for this (calling) VCPU. Currently this only makes
+ * sense for domain 0, vcpu 0. All other callers will be returned EINVAL.
+ * arg == pointer to xennmi_callback structure.
+ */
+#define XENNMI_register_callback   0
+struct xennmi_callback {
+	unsigned long handler_address;
+	unsigned long pad;
+};
+DEFINE_GUEST_HANDLE_STRUCT(xennmi_callback);
+
+/*
+ * Deregister NMI callback for this (calling) VCPU.
+ * arg == NULL.
+ */
+#define XENNMI_unregister_callback 1
+
+#endif /* __XEN_PUBLIC_NMI_H__ */
+1-1
kernel/audit.c
···
 }

 /* Run custom bind function on netlink socket group connect or bind requests. */
-static int audit_bind(int group)
+static int audit_bind(struct net *net, int group)
 {
 	if (!capable(CAP_AUDIT_READ))
 		return -EPERM;
+40-9
kernel/auditsc.c
···
 #include <linux/fs_struct.h>
 #include <linux/compat.h>
 #include <linux/ctype.h>
+#include <linux/string.h>
+#include <uapi/linux/limits.h>

 #include "audit.h"
···
 	}

 	list_for_each_entry_reverse(n, &context->names_list, list) {
-		/* does the name pointer match? */
-		if (!n->name || n->name->name != name->name)
+		if (!n->name || strcmp(n->name->name, name->name))
 			continue;

 		/* match the correct record type */
···
 	n = audit_alloc_name(context, AUDIT_TYPE_UNKNOWN);
 	if (!n)
 		return;
-	if (name)
-		/* since name is not NULL we know there is already a matching
-		 * name record, see audit_getname(), so there must be a type
-		 * mismatch; reuse the string path since the original name
-		 * record will keep the string valid until we free it in
-		 * audit_free_names() */
-		n->name = name;
+	/* unfortunately, while we may have a path name to record with the
+	 * inode, we can't always rely on the string lasting until the end of
+	 * the syscall so we need to create our own copy, it may fail due to
+	 * memory allocation issues, but we do our best */
+	if (name) {
+		/* we can't use getname_kernel() due to size limits */
+		size_t len = strlen(name->name) + 1;
+		struct filename *new = __getname();
+
+		if (unlikely(!new))
+			goto out;
+
+		if (len <= (PATH_MAX - sizeof(*new))) {
+			new->name = (char *)(new) + sizeof(*new);
+			new->separate = false;
+		} else if (len <= PATH_MAX) {
+			/* this looks odd, but is due to final_putname() */
+			struct filename *new2;
+
+			new2 = kmalloc(sizeof(*new2), GFP_KERNEL);
+			if (unlikely(!new2)) {
+				__putname(new);
+				goto out;
+			}
+			new2->name = (char *)new;
+			new2->separate = true;
+			new = new2;
+		} else {
+			/* we should never get here, but let's be safe */
+			__putname(new);
+			goto out;
+		}
+		strlcpy((char *)new->name, name->name, len);
+		new->uptr = NULL;
+		new->aname = n;
+		n->name = new;
+		n->name_put = true;
+	}
 out:
 	if (parent) {
 		n->name_len = n->name ? parent_len(n->name->name) : AUDIT_NAME_FULL;
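The auditsc.c change copies the path into a `struct filename` whose string is stored inline right after the struct when it fits, and in a separate allocation otherwise. A minimal userspace sketch of that inline-vs-separate decision, with `demo_filename`, `demo_copy_name` and `PATH_MAX_DEMO` as local stand-ins for the kernel types:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in for the kernel's struct filename; PATH_MAX_DEMO plays the
 * role of PATH_MAX in the patch's size checks. */
#define PATH_MAX_DEMO 64

struct demo_filename {
	char *name;
	bool separate;	/* is the string stored outside this allocation? */
};

/* Copy src into a freshly allocated demo_filename, placing the string
 * inline after the struct when it fits, as the audit patch does. */
static struct demo_filename *demo_copy_name(const char *src)
{
	size_t len = strlen(src) + 1;
	struct demo_filename *new;

	if (len > PATH_MAX_DEMO)
		return NULL;	/* refuse over-long names outright */

	if (len <= PATH_MAX_DEMO - sizeof(*new)) {
		/* one allocation: struct immediately followed by string */
		new = malloc(sizeof(*new) + len);
		if (!new)
			return NULL;
		new->name = (char *)(new + 1);
		new->separate = false;
	} else {
		/* string too long to share the buffer: allocate separately */
		new = malloc(sizeof(*new));
		if (!new)
			return NULL;
		new->name = malloc(len);
		if (!new->name) {
			free(new);
			return NULL;
		}
		new->separate = true;
	}
	strcpy(new->name, src);
	return new;
}
```

The single-allocation fast path is why the kernel code sets `new->name = (char *)(new) + sizeof(*new)` — struct and string share one `__getname()` buffer, and `separate` records which free path to take later.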
+28-24
kernel/debug/debug_core.c
···
  * version 2. This program is licensed "as is" without any warranty of any
  * kind, whether express or implied.
  */
+
+#define pr_fmt(fmt) "KGDB: " fmt
+
 #include <linux/pid_namespace.h>
 #include <linux/clocksource.h>
 #include <linux/serial_core.h>
···
 		return err;
 	err = kgdb_arch_remove_breakpoint(&tmp);
 	if (err)
-		printk(KERN_ERR "KGDB: Critical breakpoint error, kernel "
-		   "memory destroyed at: %lx", addr);
+		pr_err("Critical breakpoint error, kernel memory destroyed at: %lx\n",
+		       addr);
 	return err;
 }
···
 		error = kgdb_arch_set_breakpoint(&kgdb_break[i]);
 		if (error) {
 			ret = error;
-			printk(KERN_INFO "KGDB: BP install failed: %lx",
-			       kgdb_break[i].bpt_addr);
+			pr_info("BP install failed: %lx\n",
+				kgdb_break[i].bpt_addr);
 			continue;
 		}
···
 			continue;
 		error = kgdb_arch_remove_breakpoint(&kgdb_break[i]);
 		if (error) {
-			printk(KERN_INFO "KGDB: BP remove failed: %lx\n",
-			       kgdb_break[i].bpt_addr);
+			pr_info("BP remove failed: %lx\n",
+				kgdb_break[i].bpt_addr);
 			ret = error;
 		}
···
 			goto setundefined;
 		error = kgdb_arch_remove_breakpoint(&kgdb_break[i]);
 		if (error)
-			printk(KERN_ERR "KGDB: breakpoint remove failed: %lx\n",
+			pr_err("breakpoint remove failed: %lx\n",
 			       kgdb_break[i].bpt_addr);
 setundefined:
 		kgdb_break[i].state = BP_UNDEFINED;
···
 	if (print_wait) {
 #ifdef CONFIG_KGDB_KDB
 		if (!dbg_kdb_mode)
-			printk(KERN_CRIT "KGDB: waiting... or $3#33 for KDB\n");
+			pr_crit("waiting... or $3#33 for KDB\n");
 #else
-		printk(KERN_CRIT "KGDB: Waiting for remote debugger\n");
+		pr_crit("Waiting for remote debugger\n");
 #endif
 	}
 	return 1;
···
 		exception_level = 0;
 		kgdb_skipexception(ks->ex_vector, ks->linux_regs);
 		dbg_activate_sw_breakpoints();
-		printk(KERN_CRIT "KGDB: re-enter error: breakpoint removed %lx\n",
-			addr);
+		pr_crit("re-enter error: breakpoint removed %lx\n", addr);
 		WARN_ON_ONCE(1);

 		return 1;
···
 		panic("Recursive entry to debugger");
 	}

-	printk(KERN_CRIT "KGDB: re-enter exception: ALL breakpoints killed\n");
+	pr_crit("re-enter exception: ALL breakpoints killed\n");
 #ifdef CONFIG_KGDB_KDB
 	/* Allow kdb to debug itself one level */
 	return 0;
···
 	int cpu;
 	int trace_on = 0;
 	int online_cpus = num_online_cpus();
+	u64 time_left;

 	kgdb_info[ks->cpu].enter_kgdb++;
 	kgdb_info[ks->cpu].exception_state |= exception_state;
···
 	/*
 	 * Wait for the other CPUs to be notified and be waiting for us:
 	 */
-	while (kgdb_do_roundup && (atomic_read(&masters_in_kgdb) +
-				atomic_read(&slaves_in_kgdb)) != online_cpus)
+	time_left = loops_per_jiffy * HZ;
+	while (kgdb_do_roundup && --time_left &&
+	       (atomic_read(&masters_in_kgdb) + atomic_read(&slaves_in_kgdb)) !=
+		   online_cpus)
 		cpu_relax();
+	if (!time_left)
+		pr_crit("KGDB: Timed out waiting for secondary CPUs.\n");

 	/*
 	 * At this point the primary processor is completely
···
 static void sysrq_handle_dbg(int key)
 {
 	if (!dbg_io_ops) {
-		printk(KERN_CRIT "ERROR: No KGDB I/O module available\n");
+		pr_crit("ERROR: No KGDB I/O module available\n");
 		return;
 	}
 	if (!kgdb_connected) {
 #ifdef CONFIG_KGDB_KDB
 		if (!dbg_kdb_mode)
-			printk(KERN_CRIT "KGDB or $3#33 for KDB\n");
+			pr_crit("KGDB or $3#33 for KDB\n");
 #else
-		printk(KERN_CRIT "Entering KGDB\n");
+		pr_crit("Entering KGDB\n");
 #endif
 	}
···
 {
 	kgdb_break_asap = 0;

-	printk(KERN_CRIT "kgdb: Waiting for connection from remote gdb...\n");
+	pr_crit("Waiting for connection from remote gdb...\n");
 	kgdb_breakpoint();
 }
···
 	if (dbg_io_ops) {
 		spin_unlock(&kgdb_registration_lock);

-		printk(KERN_ERR "kgdb: Another I/O driver is already "
-		       "registered with KGDB.\n");
+		pr_err("Another I/O driver is already registered with KGDB\n");
 		return -EBUSY;
 	}
···

 	spin_unlock(&kgdb_registration_lock);

-	printk(KERN_INFO "kgdb: Registered I/O driver %s.\n",
-	       new_dbg_io_ops->name);
+	pr_info("Registered I/O driver %s\n", new_dbg_io_ops->name);

 	/* Arm KGDB now. */
 	kgdb_register_callbacks();
···

 	spin_unlock(&kgdb_registration_lock);

-	printk(KERN_INFO
-		"kgdb: Unregistered I/O driver %s, debugger disabled.\n",
+	pr_info("Unregistered I/O driver %s, debugger disabled\n",
 		old_dbg_io_ops->name);
 }
 EXPORT_SYMBOL_GPL(kgdb_unregister_io_module);
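The debug_core.c conversion relies on the kernel convention that `pr_err()` and friends expand to `printk()` with `pr_fmt(fmt)` applied, so one `#define pr_fmt(...)` at the top of the file prefixes every message and the repeated `"KGDB: "` literals can be dropped. A userspace sketch of that mechanism, with the message captured into `log_buf` so it can be checked (the buffer-based `pr_err` is a local stand-in, not the kernel macro):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* The file-wide prefix, exactly as the patch defines it. */
#define pr_fmt(fmt) "KGDB: " fmt

static char log_buf[256];

/* Stand-in for the kernel's pr_err(): expand pr_fmt() around the
 * format string at the call site, writing into log_buf instead of
 * the kernel log so the result can be inspected. */
#define pr_err(fmt, ...) \
	snprintf(log_buf, sizeof(log_buf), pr_fmt(fmt), ##__VA_ARGS__)
```

Because the prefix is pasted into the format string at compile time, every call site in the file picks it up automatically, which is why the patch can shorten each message without losing the "KGDB: " tag.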
···
 		ks->pass_exception = 1;
 		KDB_FLAG_SET(CATASTROPHIC);
 	}
+	/* set CATASTROPHIC if the system contains unresponsive processors */
+	for_each_online_cpu(i)
+		if (!kgdb_info[i].enter_kgdb)
+			KDB_FLAG_SET(CATASTROPHIC);
 	if (KDB_STATE(SSBPT) && reason == KDB_REASON_SSTEP) {
 		KDB_STATE_CLEAR(SSBPT);
 		KDB_STATE_CLEAR(DOING_SS);
+170-95
kernel/debug/kdb/kdb_main.c
···
  */

 #include <linux/ctype.h>
+#include <linux/types.h>
 #include <linux/string.h>
 #include <linux/kernel.h>
 #include <linux/kmsg_dump.h>
···
 #include <linux/vmalloc.h>
 #include <linux/atomic.h>
 #include <linux/module.h>
+#include <linux/moduleparam.h>
 #include <linux/mm.h>
 #include <linux/init.h>
 #include <linux/kallsyms.h>
···
 #include <linux/uaccess.h>
 #include <linux/slab.h>
 #include "kdb_private.h"
+
+#undef	MODULE_PARAM_PREFIX
+#define	MODULE_PARAM_PREFIX "kdb."
+
+static int kdb_cmd_enabled = CONFIG_KDB_DEFAULT_ENABLE;
+module_param_named(cmd_enable, kdb_cmd_enabled, int, 0600);

 #define GREP_LEN 256
 char kdb_grep_string[GREP_LEN];
···
 	KDBMSG(BADLENGTH, "Invalid length field"),
 	KDBMSG(NOBP, "No Breakpoint exists"),
 	KDBMSG(BADADDR, "Invalid address"),
+	KDBMSG(NOPERM, "Permission denied"),
 };
 #undef KDBMSG
···
 	p = krp->p;
 #endif
 	return p;
+}
+
+/*
+ * Check whether the flags of the current command and the permissions
+ * of the kdb console has allow a command to be run.
+ */
+static inline bool kdb_check_flags(kdb_cmdflags_t flags, int permissions,
+				   bool no_args)
+{
+	/* permissions comes from userspace so needs massaging slightly */
+	permissions &= KDB_ENABLE_MASK;
+	permissions |= KDB_ENABLE_ALWAYS_SAFE;
+
+	/* some commands change group when launched with no arguments */
+	if (no_args)
+		permissions |= permissions << KDB_ENABLE_NO_ARGS_SHIFT;
+
+	flags |= KDB_ENABLE_ALL;
+
+	return permissions & flags;
 }

 /*
···
 	kdb_symtab_t symtab;

 	/*
+	 * If the enable flags prohibit both arbitrary memory access
+	 * and flow control then there are no reasonable grounds to
+	 * provide symbol lookup.
+	 */
+	if (!kdb_check_flags(KDB_ENABLE_MEM_READ | KDB_ENABLE_FLOW_CTRL,
+			     kdb_cmd_enabled, false))
+		return KDB_NOPERM;
+
+	/*
 	 * Process arguments which follow the following syntax:
 	 *
 	 *  symbol | numeric-address [+/- numeric-offset]
···
 		if (!s->count)
 			s->usable = 0;
 		if (s->usable)
-			kdb_register(s->name, kdb_exec_defcmd,
-				     s->usage, s->help, 0);
+			/* macros are always safe because when executed each
+			 * internal command re-enters kdb_parse() and is
+			 * safety checked individually.
+			 */
+			kdb_register_flags(s->name, kdb_exec_defcmd, s->usage,
+					   s->help, 0,
+					   KDB_ENABLE_ALWAYS_SAFE);
 		return 0;
 	}
 	if (!s->usable)
···

 	if (i < kdb_max_commands) {
 		int result;
+
+		if (!kdb_check_flags(tp->cmd_flags, kdb_cmd_enabled, argc <= 1))
+			return KDB_NOPERM;
+
 		KDB_STATE_SET(CMD);
 		result = (*tp->cmd_func)(argc-1, (const char **)argv);
 		if (result && ignore_errors && result > KDB_CMD_GO)
 			result = 0;
 		KDB_STATE_CLEAR(CMD);
-		switch (tp->cmd_repeat) {
-		case KDB_REPEAT_NONE:
-			argc = 0;
-			if (argv[0])
-				*(argv[0]) = '\0';
-			break;
-		case KDB_REPEAT_NO_ARGS:
-			argc = 1;
-			if (argv[1])
-				*(argv[1]) = '\0';
-			break;
-		case KDB_REPEAT_WITH_ARGS:
-			break;
-		}
+
+		if (tp->cmd_flags & KDB_REPEAT_WITH_ARGS)
+			return result;
+
+		argc = tp->cmd_flags & KDB_REPEAT_NO_ARGS ? 1 : 0;
+		if (argv[argc])
+			*(argv[argc]) = '\0';
 		return result;
 	}
···
 */
static int kdb_sr(int argc, const char **argv)
{
+	bool check_mask =
+		!kdb_check_flags(KDB_ENABLE_ALL, kdb_cmd_enabled, false);
+
 	if (argc != 1)
 		return KDB_ARGCOUNT;
+
 	kdb_trap_printk++;
-	__handle_sysrq(*argv[1], false);
+	__handle_sysrq(*argv[1], check_mask);
 	kdb_trap_printk--;

 	return 0;
···
 	for (start_cpu = -1, i = 0; i < NR_CPUS; i++) {
 		if (!cpu_online(i)) {
 			state = 'F';	/* cpu is offline */
+		} else if (!kgdb_info[i].enter_kgdb) {
+			state = 'D';	/* cpu is online but unresponsive */
 		} else {
 			state = ' ';	/* cpu is responding to kdb */
 			if (kdb_task_state_char(KDB_TSK(i)) == 'I')
···
 	/*
 	 * Validate cpunum
 	 */
-	if ((cpunum > NR_CPUS) || !cpu_online(cpunum))
+	if ((cpunum > NR_CPUS) || !kgdb_info[cpunum].enter_kgdb)
 		return KDB_BADCPUNUM;

 	dbg_switch_cpu = cpunum;
···
 		if (KDB_FLAG(CMD_INTERRUPT))
 			return 0;
 		if (!kt->cmd_name)
+			continue;
+		if (!kdb_check_flags(kt->cmd_flags, kdb_cmd_enabled, true))
 			continue;
 		if (strlen(kt->cmd_usage) > 20)
 			space = "\n                                    ";
···
 }

 /*
- * kdb_register_repeat - This function is used to register a kernel
+ * kdb_register_flags - This function is used to register a kernel
  * 	debugger command.
  * Inputs:
  *	cmd	Command name
···
  *	zero for success, one if a duplicate command.
  */
 #define kdb_command_extend 50	/* arbitrary */
-int kdb_register_repeat(char *cmd,
-			kdb_func_t func,
-			char *usage,
-			char *help,
-			short minlen,
-			kdb_repeat_t repeat)
+int kdb_register_flags(char *cmd,
+		       kdb_func_t func,
+		       char *usage,
+		       char *help,
+		       short minlen,
+		       kdb_cmdflags_t flags)
 {
 	int i;
 	kdbtab_t *kp;
···
 	kp->cmd_func   = func;
 	kp->cmd_usage  = usage;
 	kp->cmd_help   = help;
-	kp->cmd_flags  = 0;
 	kp->cmd_minlen = minlen;
-	kp->cmd_repeat = repeat;
+	kp->cmd_flags  = flags;

 	return 0;
 }
-EXPORT_SYMBOL_GPL(kdb_register_repeat);
+EXPORT_SYMBOL_GPL(kdb_register_flags);


 /*
  * kdb_register - Compatibility register function for commands that do
  *	not need to specify a repeat state.  Equivalent to
- *	kdb_register_repeat with KDB_REPEAT_NONE.
+ *	kdb_register_flags with flags set to 0.
  * Inputs:
  *	cmd	Command name
  *	func	Function to execute the command
···
 			char *help,
 			short minlen)
 {
-	return kdb_register_repeat(cmd, func, usage, help, minlen,
-				   KDB_REPEAT_NONE);
+	return kdb_register_flags(cmd, func, usage, help, minlen, 0);
 }
 EXPORT_SYMBOL_GPL(kdb_register);
···
 	for_each_kdbcmd(kp, i)
 		kp->cmd_name = NULL;

-	kdb_register_repeat("md", kdb_md, "<vaddr>",
+	kdb_register_flags("md", kdb_md, "<vaddr>",
 	  "Display Memory Contents, also mdWcN, e.g. md8c1", 1,
-	  KDB_REPEAT_NO_ARGS);
-	kdb_register_repeat("mdr", kdb_md, "<vaddr> <bytes>",
-	  "Display Raw Memory", 0, KDB_REPEAT_NO_ARGS);
-	kdb_register_repeat("mdp", kdb_md, "<paddr> <bytes>",
-	  "Display Physical Memory", 0, KDB_REPEAT_NO_ARGS);
-	kdb_register_repeat("mds", kdb_md, "<vaddr>",
-	  "Display Memory Symbolically", 0, KDB_REPEAT_NO_ARGS);
-	kdb_register_repeat("mm", kdb_mm, "<vaddr> <contents>",
-	  "Modify Memory Contents", 0, KDB_REPEAT_NO_ARGS);
-	kdb_register_repeat("go", kdb_go, "[<vaddr>]",
-	  "Continue Execution", 1, KDB_REPEAT_NONE);
-	kdb_register_repeat("rd", kdb_rd, "",
-	  "Display Registers", 0, KDB_REPEAT_NONE);
-	kdb_register_repeat("rm", kdb_rm, "<reg> <contents>",
-	  "Modify Registers", 0, KDB_REPEAT_NONE);
-	kdb_register_repeat("ef", kdb_ef, "<vaddr>",
-	  "Display exception frame", 0, KDB_REPEAT_NONE);
-	kdb_register_repeat("bt", kdb_bt, "[<vaddr>]",
-	  "Stack traceback", 1, KDB_REPEAT_NONE);
-	kdb_register_repeat("btp", kdb_bt, "<pid>",
-	  "Display stack for process <pid>", 0, KDB_REPEAT_NONE);
-	kdb_register_repeat("bta", kdb_bt, "[D|R|S|T|C|Z|E|U|I|M|A]",
-	  "Backtrace all processes matching state flag", 0, KDB_REPEAT_NONE);
-	kdb_register_repeat("btc", kdb_bt, "",
-	  "Backtrace current process on each cpu", 0, KDB_REPEAT_NONE);
-	kdb_register_repeat("btt", kdb_bt, "<vaddr>",
+	  KDB_ENABLE_MEM_READ | KDB_REPEAT_NO_ARGS);
+	kdb_register_flags("mdr", kdb_md, "<vaddr> <bytes>",
+	  "Display Raw Memory", 0,
+	  KDB_ENABLE_MEM_READ | KDB_REPEAT_NO_ARGS);
+	kdb_register_flags("mdp", kdb_md, "<paddr> <bytes>",
+	  "Display Physical Memory", 0,
+	  KDB_ENABLE_MEM_READ | KDB_REPEAT_NO_ARGS);
+	kdb_register_flags("mds", kdb_md, "<vaddr>",
+	  "Display Memory Symbolically", 0,
+	  KDB_ENABLE_MEM_READ | KDB_REPEAT_NO_ARGS);
+	kdb_register_flags("mm", kdb_mm, "<vaddr> <contents>",
+	  "Modify Memory Contents", 0,
+	  KDB_ENABLE_MEM_WRITE | KDB_REPEAT_NO_ARGS);
+	kdb_register_flags("go", kdb_go, "[<vaddr>]",
+	  "Continue Execution", 1,
+	  KDB_ENABLE_REG_WRITE | KDB_ENABLE_ALWAYS_SAFE_NO_ARGS);
+	kdb_register_flags("rd", kdb_rd, "",
+	  "Display Registers", 0,
+	  KDB_ENABLE_REG_READ);
+	kdb_register_flags("rm", kdb_rm, "<reg> <contents>",
+	  "Modify Registers", 0,
+	  KDB_ENABLE_REG_WRITE);
+	kdb_register_flags("ef", kdb_ef, "<vaddr>",
+	  "Display exception frame", 0,
+	  KDB_ENABLE_MEM_READ);
+	kdb_register_flags("bt", kdb_bt, "[<vaddr>]",
+	  "Stack traceback", 1,
+	  KDB_ENABLE_MEM_READ | KDB_ENABLE_INSPECT_NO_ARGS);
+	kdb_register_flags("btp", kdb_bt, "<pid>",
+	  "Display stack for process <pid>", 0,
+	  KDB_ENABLE_INSPECT);
+	kdb_register_flags("bta", kdb_bt, "[D|R|S|T|C|Z|E|U|I|M|A]",
+	  "Backtrace all processes matching state flag", 0,
+	  KDB_ENABLE_INSPECT);
+	kdb_register_flags("btc", kdb_bt, "",
+	  "Backtrace current process on each cpu", 0,
+	  KDB_ENABLE_INSPECT);
+	kdb_register_flags("btt", kdb_bt, "<vaddr>",
 	  "Backtrace process given its struct task address", 0,
-	  KDB_REPEAT_NONE);
-	kdb_register_repeat("env", kdb_env, "",
-	  "Show environment variables", 0, KDB_REPEAT_NONE);
-	kdb_register_repeat("set", kdb_set, "",
-	  "Set environment variables", 0, KDB_REPEAT_NONE);
-	kdb_register_repeat("help", kdb_help, "",
-	  "Display Help Message", 1, KDB_REPEAT_NONE);
-	kdb_register_repeat("?", kdb_help, "",
-	  "Display Help Message", 0, KDB_REPEAT_NONE);
-	kdb_register_repeat("cpu", kdb_cpu, "<cpunum>",
-	  "Switch to new cpu", 0, KDB_REPEAT_NONE);
-	kdb_register_repeat("kgdb", kdb_kgdb, "",
-	  "Enter kgdb mode", 0, KDB_REPEAT_NONE);
-	kdb_register_repeat("ps", kdb_ps, "[<flags>|A]",
-	  "Display active task list", 0, KDB_REPEAT_NONE);
-	kdb_register_repeat("pid", kdb_pid, "<pidnum>",
-	  "Switch to another task", 0, KDB_REPEAT_NONE);
-	kdb_register_repeat("reboot", kdb_reboot, "",
-	  "Reboot the machine immediately", 0, KDB_REPEAT_NONE);
+	  KDB_ENABLE_MEM_READ | KDB_ENABLE_INSPECT_NO_ARGS);
+	kdb_register_flags("env", kdb_env, "",
+	  "Show environment variables", 0,
+	  KDB_ENABLE_ALWAYS_SAFE);
+	kdb_register_flags("set", kdb_set, "",
+	  "Set environment variables", 0,
+	  KDB_ENABLE_ALWAYS_SAFE);
+	kdb_register_flags("help", kdb_help, "",
+	  "Display Help Message", 1,
+	  KDB_ENABLE_ALWAYS_SAFE);
+	kdb_register_flags("?", kdb_help, "",
+	  "Display Help Message", 0,
+	  KDB_ENABLE_ALWAYS_SAFE);
+	kdb_register_flags("cpu", kdb_cpu, "<cpunum>",
+	  "Switch to new cpu", 0,
+	  KDB_ENABLE_ALWAYS_SAFE_NO_ARGS);
+	kdb_register_flags("kgdb", kdb_kgdb, "",
+	  "Enter kgdb mode", 0, 0);
+	kdb_register_flags("ps", kdb_ps, "[<flags>|A]",
+	  "Display active task list", 0,
+	  KDB_ENABLE_INSPECT);
+	kdb_register_flags("pid", kdb_pid, "<pidnum>",
+	  "Switch to another task", 0,
+	  KDB_ENABLE_INSPECT);
+	kdb_register_flags("reboot", kdb_reboot, "",
+	  "Reboot the machine immediately", 0,
+	  KDB_ENABLE_REBOOT);
 #if defined(CONFIG_MODULES)
-	kdb_register_repeat("lsmod", kdb_lsmod, "",
-	  "List loaded kernel modules", 0, KDB_REPEAT_NONE);
+	kdb_register_flags("lsmod", kdb_lsmod, "",
+	  "List loaded kernel modules", 0,
+	  KDB_ENABLE_INSPECT);
 #endif
 #if defined(CONFIG_MAGIC_SYSRQ)
-	kdb_register_repeat("sr", kdb_sr, "<key>",
-	  "Magic SysRq key", 0, KDB_REPEAT_NONE);
+	kdb_register_flags("sr", kdb_sr, "<key>",
+	  "Magic SysRq key", 0,
+	  KDB_ENABLE_ALWAYS_SAFE);
 #endif
 #if defined(CONFIG_PRINTK)
-	kdb_register_repeat("dmesg", kdb_dmesg, "[lines]",
-	  "Display syslog buffer", 0, KDB_REPEAT_NONE);
+	kdb_register_flags("dmesg", kdb_dmesg, "[lines]",
+	  "Display syslog buffer", 0,
+	  KDB_ENABLE_ALWAYS_SAFE);
 #endif
 	if (arch_kgdb_ops.enable_nmi) {
-		kdb_register_repeat("disable_nmi", kdb_disable_nmi, "",
-		  "Disable NMI entry to KDB", 0, KDB_REPEAT_NONE);
+		kdb_register_flags("disable_nmi", kdb_disable_nmi, "",
+		  "Disable NMI entry to KDB", 0,
+		  KDB_ENABLE_ALWAYS_SAFE);
 	}
-	kdb_register_repeat("defcmd", kdb_defcmd, "name \"usage\" \"help\"",
-	  "Define a set of commands, down to endefcmd", 0, KDB_REPEAT_NONE);
-	kdb_register_repeat("kill", kdb_kill, "<-signal> <pid>",
-	  "Send a signal to a process", 0, KDB_REPEAT_NONE);
-	kdb_register_repeat("summary", kdb_summary, "",
-	  "Summarize the system", 4, KDB_REPEAT_NONE);
-	kdb_register_repeat("per_cpu", kdb_per_cpu, "<sym> [<bytes>] [<cpu>]",
-	  "Display per_cpu variables", 3, KDB_REPEAT_NONE);
-	kdb_register_repeat("grephelp", kdb_grep_help, "",
-	  "Display help on | grep", 0, KDB_REPEAT_NONE);
+	kdb_register_flags("defcmd", kdb_defcmd, "name \"usage\" \"help\"",
+	  "Define a set of commands, down to endefcmd", 0,
+	  KDB_ENABLE_ALWAYS_SAFE);
+	kdb_register_flags("kill", kdb_kill, "<-signal> <pid>",
+	  "Send a signal to a process", 0,
+	  KDB_ENABLE_SIGNAL);
+	kdb_register_flags("summary", kdb_summary, "",
+	  "Summarize the system", 4,
+	  KDB_ENABLE_ALWAYS_SAFE);
+	kdb_register_flags("per_cpu", kdb_per_cpu, "<sym> [<bytes>] [<cpu>]",
+	  "Display per_cpu variables", 3,
+	  KDB_ENABLE_MEM_READ);
+	kdb_register_flags("grephelp", kdb_grep_help, "",
+	  "Display help on | grep", 0,
+	  KDB_ENABLE_ALWAYS_SAFE);
 }

 /* Execute any commands defined in kdb_cmds. */
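The core of the kdb change is `kdb_check_flags()`: a command runs only if the console's enable mask (from the `kdb.cmd_enable` parameter) overlaps the command's flags, with an always-on "safe" group and a shifted group for commands that become safe when given no arguments. A userspace sketch of that check; the bit values follow the Kconfig help text below, but `KDB_ENABLE_ALWAYS_SAFE` and `KDB_ENABLE_NO_ARGS_SHIFT` here are local stand-ins, not the kernel's exact constants:

```c
#include <assert.h>
#include <stdbool.h>

/* Permission bits, mirroring the KDB_DEFAULT_ENABLE bitmask. */
#define KDB_ENABLE_ALL		0x0001	/* implicitly set on every command */
#define KDB_ENABLE_MEM_READ	0x0002
#define KDB_ENABLE_MEM_WRITE	0x0004
#define KDB_ENABLE_MASK		0x01ff	/* valid bits from userspace */
#define KDB_ENABLE_ALWAYS_SAFE	0x0200	/* stand-in for the always-on bit */
#define KDB_ENABLE_NO_ARGS_SHIFT 10	/* stand-in for the no-args group */

/* A command may run if any permission bit overlaps its flags. */
static bool check_flags(int flags, int permissions, bool no_args)
{
	/* permissions comes from userspace so needs massaging slightly */
	permissions &= KDB_ENABLE_MASK;
	permissions |= KDB_ENABLE_ALWAYS_SAFE;

	/* some commands change group when launched with no arguments */
	if (no_args)
		permissions |= permissions << KDB_ENABLE_NO_ARGS_SHIFT;

	/* setting permission bit 0x1 enables every command */
	flags |= KDB_ENABLE_ALL;

	return permissions & flags;
}
```

Since every command implicitly carries `KDB_ENABLE_ALL`, a permission mask of `0x1` enables everything, matching the "set to 1 to enable all commands" behaviour the Kconfig help describes.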
+1-2
kernel/debug/kdb/kdb_private.h
···
 	kdb_func_t cmd_func;	/* Function to execute command */
 	char *cmd_usage;	/* Usage String for this command */
 	char *cmd_help;		/* Help message for this command */
-	short cmd_flags;	/* Parsing flags */
 	short cmd_minlen;	/* Minimum legal # command
 				 * chars required */
-	kdb_repeat_t cmd_repeat; /* Does command auto repeat on enter? */
+	kdb_cmdflags_t cmd_flags; /* Command behaviour flags */
 } kdbtab_t;

 extern int kdb_bt(int, const char **);	/* KDB display back trace */
+8-11
kernel/events/core.c
···
 }

 static void perf_sample_regs_user(struct perf_regs *regs_user,
-				  struct pt_regs *regs)
+				  struct pt_regs *regs,
+				  struct pt_regs *regs_user_copy)
 {
-	if (!user_mode(regs)) {
-		if (current->mm)
-			regs = task_pt_regs(current);
-		else
-			regs = NULL;
-	}
-
-	if (regs) {
-		regs_user->abi  = perf_reg_abi(current);
+	if (user_mode(regs)) {
+		regs_user->abi = perf_reg_abi(current);
 		regs_user->regs = regs;
+	} else if (current->mm) {
+		perf_get_regs_user(regs_user, regs, regs_user_copy);
 	} else {
 		regs_user->abi = PERF_SAMPLE_REGS_ABI_NONE;
 		regs_user->regs = NULL;
···
 	}

 	if (sample_type & (PERF_SAMPLE_REGS_USER | PERF_SAMPLE_STACK_USER))
-		perf_sample_regs_user(&data->regs_user, regs);
+		perf_sample_regs_user(&data->regs_user, regs,
+				      &data->regs_user_copy);

 	if (sample_type & PERF_SAMPLE_REGS_USER) {
 		/* regs dump ABI info */
+9-3
kernel/exit.c
···
 static int wait_consider_task(struct wait_opts *wo, int ptrace,
 				struct task_struct *p)
 {
+	/*
+	 * We can race with wait_task_zombie() from another thread.
+	 * Ensure that EXIT_ZOMBIE -> EXIT_DEAD/EXIT_TRACE transition
+	 * can't confuse the checks below.
+	 */
+	int exit_state = ACCESS_ONCE(p->exit_state);
 	int ret;

-	if (unlikely(p->exit_state == EXIT_DEAD))
+	if (unlikely(exit_state == EXIT_DEAD))
 		return 0;

 	ret = eligible_child(wo, p);
···
 		return 0;
 	}

-	if (unlikely(p->exit_state == EXIT_TRACE)) {
+	if (unlikely(exit_state == EXIT_TRACE)) {
 		/*
 		 * ptrace == 0 means we are the natural parent. In this case
 		 * we should clear notask_error, debugger will notify us.
···
 	}

 	/* slay zombie? */
-	if (p->exit_state == EXIT_ZOMBIE) {
+	if (exit_state == EXIT_ZOMBIE) {
 		/* we don't reap group leaders with subthreads */
 		if (!delay_group_leader(p)) {
 			/*
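The exit.c fix snapshots `p->exit_state` into a local once, so a concurrent transition cannot make later checks in the same function see a different value than earlier ones. A single-threaded userspace sketch of the hazard, where `racy_read()` stands in for another thread flipping the state between two reads (all names here are local to the sketch):

```c
#include <assert.h>

enum { EXIT_ZOMBIE = 1, EXIT_DEAD = 2, EXIT_TRACE = 3 };

static int shared_state = EXIT_ZOMBIE;

/* Simulates a racing writer: every read also advances the state,
 * standing in for wait_task_zombie() running on another CPU. */
static int racy_read(void)
{
	int v = shared_state;

	shared_state = EXIT_DEAD;
	return v;
}

/* Buggy pattern: each check re-reads the shared value, so the two
 * tests can disagree about what state the task is in. */
static int classify_rereading(void)
{
	if (racy_read() == EXIT_DEAD)
		return 0;
	if (racy_read() == EXIT_ZOMBIE)	/* second read sees a new value */
		return 1;
	return -1;			/* fell through both checks */
}

/* Fixed pattern: snapshot once, as the patch does with ACCESS_ONCE(),
 * then test the local copy consistently. */
static int classify_snapshot(void)
{
	int exit_state = racy_read();

	if (exit_state == EXIT_DEAD)
		return 0;
	if (exit_state == EXIT_ZOMBIE)
		return 1;
	return -1;
}
```

In the kernel, `ACCESS_ONCE()` additionally stops the compiler from re-loading the field itself; the sketch only illustrates the decision-consistency half of the problem.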
+1-1
kernel/locking/mutex-debug.c
···
 		DEBUG_LOCKS_WARN_ON(lock->owner != current);

 		DEBUG_LOCKS_WARN_ON(!lock->wait_list.prev && !lock->wait_list.next);
-		mutex_clear_owner(lock);
 	}

 	/*
 	 * __mutex_slowpath_needs_to_unlock() is explicitly 0 for debug
 	 * mutexes so that we can do it here after we've verified state.
 	 */
+	mutex_clear_owner(lock);
 	atomic_set(&lock->count, 1);
 }
+5-5
kernel/range.c
···
 {
 	const struct range *r1 = x1;
 	const struct range *r2 = x2;
-	s64 start1, start2;

-	start1 = r1->start;
-	start2 = r2->start;
-
-	return start1 - start2;
+	if (r1->start < r2->start)
+		return -1;
+	if (r1->start > r2->start)
+		return 1;
+	return 0;
 }

 int clean_sort_range(struct range *range, int az)
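The range.c fix replaces `return start1 - start2` with explicit branches because a comparator returns `int`: truncating a 64-bit difference to 32 bits can zero it out or flip its sign, producing an inconsistent sort order. A minimal sketch of both versions (`cmp_subtract`, `cmp_branch` are local names; the truncation behaviour assumes a typical two's-complement target):

```c
#include <assert.h>
#include <stdint.h>

/* Buggy: the 64-bit difference is truncated to the int return value,
 * so e.g. a difference of exactly 2^32 compares as "equal". */
static int cmp_subtract(uint64_t a, uint64_t b)
{
	return (int)((int64_t)a - (int64_t)b);
}

/* Fixed: compare explicitly, return only -1/0/1. */
static int cmp_branch(uint64_t a, uint64_t b)
{
	if (a < b)
		return -1;
	if (a > b)
		return 1;
	return 0;
}
```

The branching form is the standard safe comparator shape for any key wider than `int`, and it also sidesteps signed-overflow issues in the subtraction itself.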
···
 static
 int dl_runtime_exceeded(struct rq *rq, struct sched_dl_entity *dl_se)
 {
-	int dmiss = dl_time_before(dl_se->deadline, rq_clock(rq));
-	int rorun = dl_se->runtime <= 0;
-
-	if (!rorun && !dmiss)
-		return 0;
-
-	/*
-	 * If we are beyond our current deadline and we are still
-	 * executing, then we have already used some of the runtime of
-	 * the next instance. Thus, if we do not account that, we are
-	 * stealing bandwidth from the system at each deadline miss!
-	 */
-	if (dmiss) {
-		dl_se->runtime = rorun ? dl_se->runtime : 0;
-		dl_se->runtime -= rq_clock(rq) - dl_se->deadline;
-	}
-
-	return 1;
+	return (dl_se->runtime <= 0);
 }

 extern bool sched_rt_bandwidth_account(struct rt_rq *rt_rq);
···
 	 * parameters of the task might need updating. Otherwise,
 	 * we want a replenishment of its runtime.
 	 */
-	if (!dl_se->dl_new && flags & ENQUEUE_REPLENISH)
-		replenish_dl_entity(dl_se, pi_se);
-	else
+	if (dl_se->dl_new || flags & ENQUEUE_WAKEUP)
 		update_dl_entity(dl_se, pi_se);
+	else if (flags & ENQUEUE_REPLENISH)
+		replenish_dl_entity(dl_se, pi_se);

 	__enqueue_dl_entity(dl_se);
 }
+5-1
kernel/sched/fair.c
···

 static void destroy_cfs_bandwidth(struct cfs_bandwidth *cfs_b)
 {
+	/* init_cfs_bandwidth() was not called */
+	if (!cfs_b->throttled_cfs_rq.next)
+		return;
+
 	hrtimer_cancel(&cfs_b->period_timer);
 	hrtimer_cancel(&cfs_b->slack_timer);
 }
···
 		 *   wl = S * s'_i; see (2)
 		 */
 		if (W > 0 && w < W)
-			wl = (w * tg->shares) / W;
+			wl = (w * (long)tg->shares) / W;
 		else
 			wl = tg->shares;
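The `(long)tg->shares` cast matters because `shares` is unsigned: multiplying a negative signed `w` by an unsigned operand converts `w` to unsigned, so the product and quotient come out huge and positive instead of negative. A small sketch of the bug and the fix (`wl_buggy`/`wl_fixed` are local names; the concrete values assume 64-bit `long`):

```c
#include <assert.h>

/* Buggy: w is converted to unsigned long by the usual arithmetic
 * conversions, so a negative weight delta yields a huge positive
 * product before the division. */
static long wl_buggy(long w, unsigned long shares, long W)
{
	return (w * shares) / W;
}

/* Fixed: casting shares to long keeps the whole expression signed,
 * so negative weight deltas divide to negative results. */
static long wl_fixed(long w, unsigned long shares, long W)
{
	return (w * (long)shares) / W;
}
```

This is the same class of bug wherever a signed quantity meets an unsigned struct field in one expression; the fix is to force the signed interpretation explicitly at the use site.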
+45-8
kernel/trace/ftrace.c
···
 }

 static void ftrace_run_modify_code(struct ftrace_ops *ops, int command,
-				   struct ftrace_hash *old_hash)
+				   struct ftrace_ops_hash *old_hash)
 {
 	ops->flags |= FTRACE_OPS_FL_MODIFYING;
-	ops->old_hash.filter_hash = old_hash;
+	ops->old_hash.filter_hash = old_hash->filter_hash;
+	ops->old_hash.notrace_hash = old_hash->notrace_hash;
 	ftrace_run_update_code(command);
 	ops->old_hash.filter_hash = NULL;
+	ops->old_hash.notrace_hash = NULL;
 	ops->flags &= ~FTRACE_OPS_FL_MODIFYING;
 }
···

 static int ftrace_probe_registered;

-static void __enable_ftrace_function_probe(struct ftrace_hash *old_hash)
+static void __enable_ftrace_function_probe(struct ftrace_ops_hash *old_hash)
 {
 	int ret;
 	int i;
···
 register_ftrace_function_probe(char *glob, struct ftrace_probe_ops *ops,
 			       void *data)
 {
+	struct ftrace_ops_hash old_hash_ops;
 	struct ftrace_func_probe *entry;
 	struct ftrace_hash **orig_hash = &trace_probe_ops.func_hash->filter_hash;
 	struct ftrace_hash *old_hash = *orig_hash;
···
 		return -EINVAL;

 	mutex_lock(&trace_probe_ops.func_hash->regex_lock);
+
+	old_hash_ops.filter_hash = old_hash;
+	/* Probes only have filters */
+	old_hash_ops.notrace_hash = NULL;

 	hash = alloc_and_copy_ftrace_hash(FTRACE_HASH_DEFAULT_BITS, old_hash);
 	if (!hash) {
···

 	ret = ftrace_hash_move(&trace_probe_ops, 1, orig_hash, hash);

-	__enable_ftrace_function_probe(old_hash);
+	__enable_ftrace_function_probe(&old_hash_ops);

 	if (!ret)
 		free_ftrace_hash_rcu(old_hash);
···
 }

 static void ftrace_ops_update_code(struct ftrace_ops *ops,
-				   struct ftrace_hash *old_hash)
+				   struct ftrace_ops_hash *old_hash)
 {
-	if (ops->flags & FTRACE_OPS_FL_ENABLED && ftrace_enabled)
+	struct ftrace_ops *op;
+
+	if (!ftrace_enabled)
+		return;
+
+	if (ops->flags & FTRACE_OPS_FL_ENABLED) {
 		ftrace_run_modify_code(ops, FTRACE_UPDATE_CALLS, old_hash);
+		return;
+	}
+
+	/*
+	 * If this is the shared global_ops filter, then we need to
+	 * check if there is another ops that shares it, is enabled.
+	 * If so, we still need to run the modify code.
+	 */
+	if (ops->func_hash != &global_ops.local_hash)
+		return;
+
+	do_for_each_ftrace_op(op, ftrace_ops_list) {
+		if (op->func_hash == &global_ops.local_hash &&
+		    op->flags & FTRACE_OPS_FL_ENABLED) {
+			ftrace_run_modify_code(op, FTRACE_UPDATE_CALLS, old_hash);
+			/* Only need to do this once */
+			return;
+		}
+	} while_for_each_ftrace_op(op);
 }

 static int
···
 		unsigned long ip, int remove, int reset, int enable)
 {
 	struct ftrace_hash **orig_hash;
+	struct ftrace_ops_hash old_hash_ops;
 	struct ftrace_hash *old_hash;
 	struct ftrace_hash *hash;
 	int ret;
···

 	mutex_lock(&ftrace_lock);
 	old_hash = *orig_hash;
+	old_hash_ops.filter_hash = ops->func_hash->filter_hash;
+	old_hash_ops.notrace_hash = ops->func_hash->notrace_hash;
 	ret = ftrace_hash_move(ops, enable, orig_hash, hash);
 	if (!ret) {
-		ftrace_ops_update_code(ops, old_hash);
+		ftrace_ops_update_code(ops, &old_hash_ops);
 		free_ftrace_hash_rcu(old_hash);
 	}
 	mutex_unlock(&ftrace_lock);
···
 int ftrace_regex_release(struct inode *inode, struct file *file)
 {
 	struct seq_file *m = (struct seq_file *)file->private_data;
+	struct ftrace_ops_hash old_hash_ops;
 	struct ftrace_iterator *iter;
 	struct ftrace_hash **orig_hash;
 	struct ftrace_hash *old_hash;
···

 		mutex_lock(&ftrace_lock);
 		old_hash = *orig_hash;
+		old_hash_ops.filter_hash = iter->ops->func_hash->filter_hash;
+		old_hash_ops.notrace_hash = iter->ops->func_hash->notrace_hash;
 		ret = ftrace_hash_move(iter->ops, filter_hash,
 				       orig_hash, iter->hash);
 		if (!ret) {
-			ftrace_ops_update_code(iter->ops, old_hash);
+			ftrace_ops_update_code(iter->ops, &old_hash_ops);
 			free_ftrace_hash_rcu(old_hash);
 		}
 		mutex_unlock(&ftrace_lock);
···
 	return 0;
 }

+static __init void
+early_enable_events(struct trace_array *tr, bool disable_first)
+{
+	char *buf = bootup_event_buf;
+	char *token;
+	int ret;
+
+	while (true) {
+		token = strsep(&buf, ",");
+
+		if (!token)
+			break;
+		if (!*token)
+			continue;
+
+		/* Restarting syscalls requires that we stop them first */
+		if (disable_first)
+			ftrace_set_clr_event(tr, token, 0);
+
+		ret = ftrace_set_clr_event(tr, token, 1);
+		if (ret)
+			pr_warn("Failed to enable trace event: %s\n", token);
+
+		/* Put back the comma to allow this to be called again */
+		if (buf)
+			*(buf - 1) = ',';
+	}
+}
+
 static __init int event_trace_enable(void)
 {
 	struct trace_array *tr = top_trace_array();
 	struct ftrace_event_call **iter, *call;
-	char *buf = bootup_event_buf;
-	char *token;
 	int ret;

 	if (!tr)
···
 	 */
 	__trace_early_add_events(tr);

-	while (true) {
-		token = strsep(&buf, ",");
-
-		if (!token)
-			break;
-		if (!*token)
-			continue;
-
-		ret = ftrace_set_clr_event(tr, token, 1);
-		if (ret)
-			pr_warn("Failed to enable trace event: %s\n", token);
-	}
+	early_enable_events(tr, false);

 	trace_printk_start_comm();
···

 	return 0;
 }
+
+/*
+ * event_trace_enable() is called from trace_event_init() first to
+ * initialize events and perhaps start any events that are on the
+ * command line. Unfortunately, there are some events that will not
+ * start this early, like the system call tracepoints that need
+ * to set the TIF_SYSCALL_TRACEPOINT flag of pid 1. But event_trace_enable()
+ * is called before pid 1 starts, and this flag is never set, making
+ * the syscall tracepoint never get reached, but the event is enabled
+ * regardless (and not doing anything).
+ */
+static __init int event_trace_enable_again(void)
+{
+	struct trace_array *tr;
+
+	tr = top_trace_array();
+	if (!tr)
+		return -ENODEV;
+
+	early_enable_events(tr, true);
+
+	return 0;
+}
+
+early_initcall(event_trace_enable_again);

 static __init int event_trace_init(void)
 {
···
 	help
 	  KDB frontend for kernel
 
+config KDB_DEFAULT_ENABLE
+	hex "KDB: Select kdb command functions to be enabled by default"
+	depends on KGDB_KDB
+	default 0x1
+	help
+	  Specifies which kdb commands are enabled by default. This may
+	  be set to 1 or 0 to enable all commands or disable almost all
+	  commands.
+
+	  Alternatively the following bitmask applies:
+
+	    0x0002 - allow arbitrary reads from memory and symbol lookup
+	    0x0004 - allow arbitrary writes to memory
+	    0x0008 - allow current register state to be inspected
+	    0x0010 - allow current register state to be modified
+	    0x0020 - allow passive inspection (backtrace, process list, lsmod)
+	    0x0040 - allow flow control management (breakpoint, single step)
+	    0x0080 - enable signalling of processes
+	    0x0100 - allow machine to be rebooted
+
+	  The config option merely sets the default at boot time. Either
+	  issuing 'echo X > /sys/module/kdb/parameters/cmd_enable' or
+	  setting the kdb.cmd_enable=X kernel command line option will
+	  override the default settings.
+
 config KDB_KEYBOARD
 	bool "KGDB_KDB: keyboard as input device"
 	depends on VT && KGDB_KDB
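The cmd_enable value is an OR of the permission bits listed in the help text above. A minimal sketch (illustrative Python, not kernel code; the constant names are hypothetical, only the hex values come from the Kconfig help) of how a policy mask is composed and tested:

```python
# Permission bits from the KDB_DEFAULT_ENABLE help text above.
KDB_ENABLE_MEM_READ  = 0x0002  # arbitrary reads from memory and symbol lookup
KDB_ENABLE_MEM_WRITE = 0x0004  # arbitrary writes to memory
KDB_ENABLE_REG_READ  = 0x0008  # inspect current register state
KDB_ENABLE_REG_WRITE = 0x0010  # modify current register state
KDB_ENABLE_INSPECT   = 0x0020  # passive inspection (backtrace, ps, lsmod)
KDB_ENABLE_FLOW_CTRL = 0x0040  # breakpoints, single step
KDB_ENABLE_SIGNAL    = 0x0080  # signalling of processes
KDB_ENABLE_REBOOT    = 0x0100  # reboot the machine

# Example read-only debugging policy: inspection plus memory/register reads.
read_only_mask = KDB_ENABLE_MEM_READ | KDB_ENABLE_REG_READ | KDB_ENABLE_INSPECT

def allowed(mask, bit):
    """True if the given permission bit is enabled in mask."""
    return bool(mask & bit)
```

The resulting mask (here 0x2a) would be the value written to /sys/module/kdb/parameters/cmd_enable or passed as kdb.cmd_enable= on the command line.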
+1
lib/assoc_array.c
···
  * 2 of the Licence, or (at your option) any later version.
  */
 //#define DEBUG
+#include <linux/rcupdate.h>
 #include <linux/slab.h>
 #include <linux/err.h>
 #include <linux/assoc_array_priv.h>
-9
mm/Kconfig.debug
···
 	depends on !KMEMCHECK
 	select PAGE_EXTENSION
 	select PAGE_POISONING if !ARCH_SUPPORTS_DEBUG_PAGEALLOC
-	select PAGE_GUARD if ARCH_SUPPORTS_DEBUG_PAGEALLOC
 	---help---
 	  Unmap pages from the kernel linear mapping after free_pages().
 	  This results in a large slowdown, but helps to find certain types
···
 	  that would result in incorrect warnings of memory corruption after
 	  a resume because free pages are not saved to the suspend image.
 
-config WANT_PAGE_DEBUG_FLAGS
-	bool
-
 config PAGE_POISONING
 	bool
-	select WANT_PAGE_DEBUG_FLAGS
-
-config PAGE_GUARD
-	bool
-	select WANT_PAGE_DEBUG_FLAGS
+12-17
mm/filemap.c
···
  * @mapping: the address_space to search
  * @offset: the page index
  * @fgp_flags: PCG flags
- * @cache_gfp_mask: gfp mask to use for the page cache data page allocation
- * @radix_gfp_mask: gfp mask to use for radix tree node allocation
+ * @gfp_mask: gfp mask to use for the page cache data page allocation
  *
  * Looks up the page cache slot at @mapping & @offset.
  *
···
  * FGP_ACCESSED: the page will be marked accessed
  * FGP_LOCK: Page is return locked
  * FGP_CREAT: If page is not present then a new page is allocated using
- *		@cache_gfp_mask and added to the page cache and the VM's LRU
- *		list. If radix tree nodes are allocated during page cache
- *		insertion then @radix_gfp_mask is used. The page is returned
- *		locked and with an increased refcount. Otherwise, %NULL is
- *		returned.
+ *		@gfp_mask and added to the page cache and the VM's LRU
+ *		list. The page is returned locked and with an increased
+ *		refcount. Otherwise, %NULL is returned.
  *
  * If FGP_LOCK or FGP_CREAT are specified then the function may sleep even
  * if the GFP flags specified for FGP_CREAT are atomic.
···
  * If there is a page cache page, it is returned with an increased refcount.
  */
 struct page *pagecache_get_page(struct address_space *mapping, pgoff_t offset,
-	int fgp_flags, gfp_t cache_gfp_mask, gfp_t radix_gfp_mask)
+	int fgp_flags, gfp_t gfp_mask)
 {
 	struct page *page;
 
···
 	if (!page && (fgp_flags & FGP_CREAT)) {
 		int err;
 		if ((fgp_flags & FGP_WRITE) && mapping_cap_account_dirty(mapping))
-			cache_gfp_mask |= __GFP_WRITE;
-		if (fgp_flags & FGP_NOFS) {
-			cache_gfp_mask &= ~__GFP_FS;
-			radix_gfp_mask &= ~__GFP_FS;
-		}
+			gfp_mask |= __GFP_WRITE;
+		if (fgp_flags & FGP_NOFS)
+			gfp_mask &= ~__GFP_FS;
 
-		page = __page_cache_alloc(cache_gfp_mask);
+		page = __page_cache_alloc(gfp_mask);
 		if (!page)
 			return NULL;
 
···
 		if (fgp_flags & FGP_ACCESSED)
 			__SetPageReferenced(page);
 
-		err = add_to_page_cache_lru(page, mapping, offset, radix_gfp_mask);
+		err = add_to_page_cache_lru(page, mapping, offset,
+				gfp_mask & GFP_RECLAIM_MASK);
 		if (unlikely(err)) {
 			page_cache_release(page);
 			page = NULL;
···
 		fgp_flags |= FGP_NOFS;
 
 	page = pagecache_get_page(mapping, index, fgp_flags,
-			mapping_gfp_mask(mapping),
-			GFP_KERNEL);
+			mapping_gfp_mask(mapping));
 	if (page)
 		wait_for_stable_page(page);
 
+4-13
mm/memcontrol.c
···
 	if (swap_cgroup_cmpxchg(entry, old_id, new_id) == old_id) {
 		mem_cgroup_swap_statistics(from, false);
 		mem_cgroup_swap_statistics(to, true);
-		/*
-		 * This function is only called from task migration context now.
-		 * It postpones page_counter and refcount handling till the end
-		 * of task migration(mem_cgroup_clear_mc()) for performance
-		 * improvement. But we cannot postpone css_get(to) because if
-		 * the process that has been moved to @to does swap-in, the
-		 * refcount of @to might be decreased to 0.
-		 *
-		 * We are in attach() phase, so the cgroup is guaranteed to be
-		 * alive, so we can just call css_get().
-		 */
-		css_get(&to->css);
 		return 0;
 	}
 	return -EINVAL;
···
 	if (parent_css == NULL) {
 		root_mem_cgroup = memcg;
 		page_counter_init(&memcg->memory, NULL);
+		memcg->soft_limit = PAGE_COUNTER_MAX;
 		page_counter_init(&memcg->memsw, NULL);
 		page_counter_init(&memcg->kmem, NULL);
 	}
···
 
 	if (parent->use_hierarchy) {
 		page_counter_init(&memcg->memory, &parent->memory);
+		memcg->soft_limit = PAGE_COUNTER_MAX;
 		page_counter_init(&memcg->memsw, &parent->memsw);
 		page_counter_init(&memcg->kmem, &parent->kmem);
 
···
 	 */
 	} else {
 		page_counter_init(&memcg->memory, NULL);
+		memcg->soft_limit = PAGE_COUNTER_MAX;
 		page_counter_init(&memcg->memsw, NULL);
 		page_counter_init(&memcg->kmem, NULL);
 		/*
···
 	mem_cgroup_resize_limit(memcg, PAGE_COUNTER_MAX);
 	mem_cgroup_resize_memsw_limit(memcg, PAGE_COUNTER_MAX);
 	memcg_update_kmem_limit(memcg, PAGE_COUNTER_MAX);
-	memcg->soft_limit = 0;
+	memcg->soft_limit = PAGE_COUNTER_MAX;
 }
 
 #ifdef CONFIG_MMU
+23-16
mm/memory.c
···
 
 static void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
 {
+	if (!tlb->end)
+		return;
+
 	tlb_flush(tlb);
 	mmu_notifier_invalidate_range(tlb->mm, tlb->start, tlb->end);
 #ifdef CONFIG_HAVE_RCU_TABLE_FREE
···
 {
 	struct mmu_gather_batch *batch;
 
-	for (batch = &tlb->local; batch; batch = batch->next) {
+	for (batch = &tlb->local; batch && batch->nr; batch = batch->next) {
 		free_pages_and_swap_cache(batch->pages, batch->nr);
 		batch->nr = 0;
 	}
···
 
 void tlb_flush_mmu(struct mmu_gather *tlb)
 {
-	if (!tlb->end)
-		return;
-
 	tlb_flush_mmu_tlbonly(tlb);
 	tlb_flush_mmu_free(tlb);
 }
···
 	if (!dirty_page)
 		return ret;
 
-	/*
-	 * Yes, Virginia, this is actually required to prevent a race
-	 * with clear_page_dirty_for_io() from clearing the page dirty
-	 * bit after it clear all dirty ptes, but before a racing
-	 * do_wp_page installs a dirty pte.
-	 *
-	 * do_shared_fault is protected similarly.
-	 */
 	if (!page_mkwrite) {
-		wait_on_page_locked(dirty_page);
-		set_page_dirty_balance(dirty_page);
+		struct address_space *mapping;
+		int dirtied;
+
+		lock_page(dirty_page);
+		dirtied = set_page_dirty(dirty_page);
+		VM_BUG_ON_PAGE(PageAnon(dirty_page), dirty_page);
+		mapping = dirty_page->mapping;
+		unlock_page(dirty_page);
+
+		if (dirtied && mapping) {
+			/*
+			 * Some device drivers do not set page.mapping
+			 * but still dirty their pages
+			 */
+			balance_dirty_pages_ratelimited(mapping);
+		}
+
 		/* file_update_time outside page_lock */
 		if (vma->vm_file)
 			file_update_time(vma->vm_file);
···
 	if (prev && prev->vm_end == address)
 		return prev->vm_flags & VM_GROWSDOWN ? 0 : -ENOMEM;
 
-		expand_downwards(vma, address - PAGE_SIZE);
+		return expand_downwards(vma, address - PAGE_SIZE);
 	}
 	if ((vma->vm_flags & VM_GROWSUP) && address + PAGE_SIZE == vma->vm_end) {
 		struct vm_area_struct *next = vma->vm_next;
···
 		if (next && next->vm_start == address + PAGE_SIZE)
 			return next->vm_flags & VM_GROWSUP ? 0 : -ENOMEM;
 
-		expand_upwards(vma, address + PAGE_SIZE);
+		return expand_upwards(vma, address + PAGE_SIZE);
 	}
 	return 0;
 }
+10-5
mm/mmap.c
···
 		if (exporter && exporter->anon_vma && !importer->anon_vma) {
 			int error;
 
-			error = anon_vma_clone(importer, exporter);
-			if (error)
-				return error;
 			importer->anon_vma = exporter->anon_vma;
+			error = anon_vma_clone(importer, exporter);
+			if (error) {
+				importer->anon_vma = NULL;
+				return error;
+			}
 		}
 	}
 
···
 {
 	struct mm_struct *mm = vma->vm_mm;
 	struct rlimit *rlim = current->signal->rlim;
-	unsigned long new_start;
+	unsigned long new_start, actual_size;
 
 	/* address space limit tests */
 	if (!may_expand_vm(mm, grow))
 		return -ENOMEM;
 
 	/* Stack limit test */
-	if (size > ACCESS_ONCE(rlim[RLIMIT_STACK].rlim_cur))
+	actual_size = size;
+	if (size && (vma->vm_flags & (VM_GROWSUP | VM_GROWSDOWN)))
+		actual_size -= PAGE_SIZE;
+	if (actual_size > ACCESS_ONCE(rlim[RLIMIT_STACK].rlim_cur))
 		return -ENOMEM;
 
 	/* mlock limit tests */
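The stack-limit change above subtracts one guard page from the size compared against RLIMIT_STACK for growable VMAs. A rough sketch (illustrative Python, not kernel code; function and parameter names are hypothetical) of the check:

```python
# Sketch of the acct_stack_growth() limit test above: for a stack VMA
# (VM_GROWSUP/VM_GROWSDOWN), one guard page is excluded from the size
# compared against RLIMIT_STACK, so the guard page no longer triggers a
# spurious -ENOMEM when the stack sits exactly at its limit.
PAGE_SIZE = 4096

def stack_within_limit(size, is_stack_vma, rlimit_stack):
    """Return True if growing the VMA to `size` bytes stays within the limit."""
    actual_size = size
    if size and is_stack_vma:
        actual_size -= PAGE_SIZE  # ignore the guard page
    return actual_size <= rlimit_stack
```

With an 8 MiB RLIMIT_STACK, a stack VMA of 8 MiB plus the guard page still passes, while a non-stack mapping of the same size would not.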
+12-31
mm/page-writeback.c
···
 		bdi_start_background_writeback(bdi);
 }
 
-void set_page_dirty_balance(struct page *page)
-{
-	if (set_page_dirty(page)) {
-		struct address_space *mapping = page_mapping(page);
-
-		if (mapping)
-			balance_dirty_pages_ratelimited(mapping);
-	}
-}
-
 static DEFINE_PER_CPU(int, bdp_ratelimits);
 
 /*
···
  * page dirty in that case, but not all the buffers. This is a "bottom-up"
  * dirtying, whereas __set_page_dirty_buffers() is a "top-down" dirtying.
  *
- * Most callers have locked the page, which pins the address_space in memory.
- * But zap_pte_range() does not lock the page, however in that case the
- * mapping is pinned by the vma's ->vm_file reference.
- *
- * We take care to handle the case where the page was truncated from the
- * mapping by re-checking page_mapping() inside tree_lock.
+ * The caller must ensure this doesn't race with truncation. Most will simply
+ * hold the page lock, but e.g. zap_pte_range() calls with the page mapped and
+ * the pte lock held, which also locks out truncation.
  */
 int __set_page_dirty_nobuffers(struct page *page)
 {
 	if (!TestSetPageDirty(page)) {
 		struct address_space *mapping = page_mapping(page);
-		struct address_space *mapping2;
 		unsigned long flags;
 
 		if (!mapping)
 			return 1;
 
 		spin_lock_irqsave(&mapping->tree_lock, flags);
-		mapping2 = page_mapping(page);
-		if (mapping2) { /* Race with truncate? */
-			BUG_ON(mapping2 != mapping);
-			WARN_ON_ONCE(!PagePrivate(page) && !PageUptodate(page));
-			account_page_dirtied(page, mapping);
-			radix_tree_tag_set(&mapping->page_tree,
-				page_index(page), PAGECACHE_TAG_DIRTY);
-		}
+		BUG_ON(page_mapping(page) != mapping);
+		WARN_ON_ONCE(!PagePrivate(page) && !PageUptodate(page));
+		account_page_dirtied(page, mapping);
+		radix_tree_tag_set(&mapping->page_tree, page_index(page),
+				   PAGECACHE_TAG_DIRTY);
 		spin_unlock_irqrestore(&mapping->tree_lock, flags);
 		if (mapping->host) {
 			/* !PageAnon && !swapper_space */
···
 	/*
 	 * We carefully synchronise fault handlers against
 	 * installing a dirty pte and marking the page dirty
-	 * at this point. We do this by having them hold the
-	 * page lock at some point after installing their
-	 * pte, but before marking the page dirty.
-	 * Pages are always locked coming in here, so we get
-	 * the desired exclusion. See mm/memory.c:do_wp_page()
-	 * for more comments.
+	 * at this point. We do this by having them hold the
+	 * page lock while dirtying the page, and pages are
+	 * always locked coming in here, so we get the desired
+	 * exclusion.
 	 */
 	if (TestClearPageDirty(page)) {
 		dec_zone_page_state(page, NR_FILE_DIRTY);
+41-1
mm/rmap.c
···
 	anon_vma = kmem_cache_alloc(anon_vma_cachep, GFP_KERNEL);
 	if (anon_vma) {
 		atomic_set(&anon_vma->refcount, 1);
+		anon_vma->degree = 1;	/* Reference for first vma */
+		anon_vma->parent = anon_vma;
 		/*
 		 * Initialise the anon_vma root to point to itself. If called
 		 * from fork, the root will be reset to the parents anon_vma.
···
 	if (likely(!vma->anon_vma)) {
 		vma->anon_vma = anon_vma;
 		anon_vma_chain_link(vma, avc, anon_vma);
+		/* vma reference or self-parent link for new root */
+		anon_vma->degree++;
 		allocated = NULL;
 		avc = NULL;
 	}
···
 /*
  * Attach the anon_vmas from src to dst.
  * Returns 0 on success, -ENOMEM on failure.
+ *
+ * If dst->anon_vma is NULL this function tries to find and reuse existing
+ * anon_vma which has no vmas and only one child anon_vma. This prevents
+ * degradation of anon_vma hierarchy to endless linear chain in case of
+ * constantly forking task. On the other hand, an anon_vma with more than one
+ * child isn't reused even if there was no alive vma, thus rmap walker has a
+ * good chance of avoiding scanning the whole hierarchy when it searches where
+ * page is mapped.
  */
 int anon_vma_clone(struct vm_area_struct *dst, struct vm_area_struct *src)
 {
···
 		anon_vma = pavc->anon_vma;
 		root = lock_anon_vma_root(root, anon_vma);
 		anon_vma_chain_link(dst, avc, anon_vma);
+
+		/*
+		 * Reuse existing anon_vma if its degree lower than two,
+		 * that means it has no vma and only one anon_vma child.
+		 *
+		 * Do not chose parent anon_vma, otherwise first child
+		 * will always reuse it. Root anon_vma is never reused:
+		 * it has self-parent reference and at least one child.
+		 */
+		if (!dst->anon_vma && anon_vma != src->anon_vma &&
+				anon_vma->degree < 2)
+			dst->anon_vma = anon_vma;
 	}
+	if (dst->anon_vma)
+		dst->anon_vma->degree++;
 	unlock_anon_vma_root(root);
 	return 0;
 
···
 	if (!pvma->anon_vma)
 		return 0;
 
+	/* Drop inherited anon_vma, we'll reuse existing or allocate new. */
+	vma->anon_vma = NULL;
+
 	/*
 	 * First, attach the new VMA to the parent VMA's anon_vmas,
 	 * so rmap can find non-COWed pages in child processes.
···
 	error = anon_vma_clone(vma, pvma);
 	if (error)
 		return error;
+
+	/* An existing anon_vma has been reused, all done then. */
+	if (vma->anon_vma)
+		return 0;
 
 	/* Then add our own anon_vma. */
 	anon_vma = anon_vma_alloc();
···
 	 * lock any of the anon_vmas in this anon_vma tree.
 	 */
 	anon_vma->root = pvma->anon_vma->root;
+	anon_vma->parent = pvma->anon_vma;
 	/*
 	 * With refcounts, an anon_vma can stay around longer than the
 	 * process it belongs to. The root anon_vma needs to be pinned until
···
 	vma->anon_vma = anon_vma;
 	anon_vma_lock_write(anon_vma);
 	anon_vma_chain_link(vma, avc, anon_vma);
+	anon_vma->parent->degree++;
 	anon_vma_unlock_write(anon_vma);
 
 	return 0;
···
 		 * Leave empty anon_vmas on the list - we'll need
 		 * to free them outside the lock.
 		 */
-		if (RB_EMPTY_ROOT(&anon_vma->rb_root))
+		if (RB_EMPTY_ROOT(&anon_vma->rb_root)) {
+			anon_vma->parent->degree--;
 			continue;
+		}
 
 		list_del(&avc->same_vma);
 		anon_vma_chain_free(avc);
 	}
+	if (vma->anon_vma)
+		vma->anon_vma->degree--;
 	unlock_anon_vma_root(root);
 
 	/*
···
 	list_for_each_entry_safe(avc, next, &vma->anon_vma_chain, same_vma) {
 		struct anon_vma *anon_vma = avc->anon_vma;
 
+		BUG_ON(anon_vma->degree);
 		put_anon_vma(anon_vma);
 
 		list_del(&avc->same_vma);
+13-11
mm/vmscan.c
···
 		return false;
 
 	/*
-	 * There is a potential race between when kswapd checks its watermarks
-	 * and a process gets throttled. There is also a potential race if
-	 * processes get throttled, kswapd wakes, a large process exits therby
-	 * balancing the zones that causes kswapd to miss a wakeup. If kswapd
-	 * is going to sleep, no process should be sleeping on pfmemalloc_wait
-	 * so wake them now if necessary. If necessary, processes will wake
-	 * kswapd and get throttled again
+	 * The throttled processes are normally woken up in balance_pgdat() as
+	 * soon as pfmemalloc_watermark_ok() is true. But there is a potential
+	 * race between when kswapd checks the watermarks and a process gets
+	 * throttled. There is also a potential race if processes get
+	 * throttled, kswapd wakes, a large process exits thereby balancing the
+	 * zones, which causes kswapd to exit balance_pgdat() before reaching
+	 * the wake up checks. If kswapd is going to sleep, no process should
+	 * be sleeping on pfmemalloc_wait, so wake them now if necessary. If
+	 * the wake up is premature, processes will wake kswapd and get
+	 * throttled again. The difference from wake ups in balance_pgdat() is
+	 * that here we are under prepare_to_wait().
 	 */
-	if (waitqueue_active(&pgdat->pfmemalloc_wait)) {
-		wake_up(&pgdat->pfmemalloc_wait);
-		return false;
-	}
+	if (waitqueue_active(&pgdat->pfmemalloc_wait))
+		wake_up_all(&pgdat->pfmemalloc_wait);
 
 	return pgdat_balanced(pgdat, order, classzone_idx);
 }
+2-2
net/batman-adv/fragmentation.c
···
 	kfree(entry);
 
 	/* Make room for the rest of the fragments. */
-	if (pskb_expand_head(skb_out, 0, size - skb->len, GFP_ATOMIC) < 0) {
+	if (pskb_expand_head(skb_out, 0, size - skb_out->len, GFP_ATOMIC) < 0) {
 		kfree_skb(skb_out);
 		skb_out = NULL;
 		goto free;
···
 	 * fragments larger than BATADV_FRAG_MAX_FRAG_SIZE
 	 */
 	mtu = min_t(unsigned, mtu, BATADV_FRAG_MAX_FRAG_SIZE);
-	max_fragment_size = (mtu - header_size - ETH_HLEN);
+	max_fragment_size = mtu - header_size;
 	max_packet_size = max_fragment_size * BATADV_FRAG_MAX_FRAGMENTS;
 
 	/* Don't even try to fragment, if we need more than 16 fragments */
+1-1
net/batman-adv/gateway_client.c
···
 		goto out;
 
 	gw_node = batadv_gw_node_get(bat_priv, orig_dst_node);
-	if (!gw_node->bandwidth_down == 0)
+	if (!gw_node)
 		goto out;
 
 	switch (atomic_read(&bat_priv->gw_mode)) {
+7-4
net/batman-adv/multicast.c
···
 		if (orig_initialized)
 			atomic_dec(&bat_priv->mcast.num_disabled);
 		orig->capabilities |= BATADV_ORIG_CAPA_HAS_MCAST;
-	/* If mcast support is being switched off increase the disabled
-	 * mcast node counter.
+	/* If mcast support is being switched off or if this is an initial
+	 * OGM without mcast support then increase the disabled mcast
+	 * node counter.
 	 */
 	} else if (!orig_mcast_enabled &&
-		   orig->capabilities & BATADV_ORIG_CAPA_HAS_MCAST) {
+		   (orig->capabilities & BATADV_ORIG_CAPA_HAS_MCAST ||
+		    !orig_initialized)) {
 		atomic_inc(&bat_priv->mcast.num_disabled);
 		orig->capabilities &= ~BATADV_ORIG_CAPA_HAS_MCAST;
 	}
···
 {
 	struct batadv_priv *bat_priv = orig->bat_priv;
 
-	if (!(orig->capabilities & BATADV_ORIG_CAPA_HAS_MCAST))
+	if (!(orig->capabilities & BATADV_ORIG_CAPA_HAS_MCAST) &&
+	    orig->capa_initialized & BATADV_ORIG_CAPA_HAS_MCAST)
 		atomic_dec(&bat_priv->mcast.num_disabled);
 
 	batadv_mcast_want_unsnoop_update(bat_priv, orig, BATADV_NO_FLAGS);
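The fix above changes when batman-adv's num_disabled counter moves: it must also be raised for an initial OGM that arrives without multicast support, not only when support is switched off. A simplified sketch (illustrative Python, not the batman-adv source; names are hypothetical) of the resulting counter transitions:

```python
CAPA_HAS_MCAST = 0x01  # multicast capability bit

def update_mcast_capability(capabilities, initialized, mcast_enabled,
                            num_disabled):
    """Return (new_capabilities, new_num_disabled) for one received OGM."""
    if mcast_enabled and not (capabilities & CAPA_HAS_MCAST):
        if initialized:
            num_disabled -= 1   # node was previously counted as disabled
        capabilities |= CAPA_HAS_MCAST
    elif not mcast_enabled and ((capabilities & CAPA_HAS_MCAST) or
                                not initialized):
        num_disabled += 1       # switched off, or initial OGM without mcast
        capabilities &= ~CAPA_HAS_MCAST
    return capabilities, num_disabled
```

An initial OGM without multicast support now increments the counter once; repeated OGMs from the same (initialized) node leave it unchanged, and a later capability announcement decrements it again.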
+1-1
net/batman-adv/network-coding.c
···
 	if (!bat_priv->nc.decoding_hash)
 		goto err;
 
-	batadv_hash_set_lock_class(bat_priv->nc.coding_hash,
+	batadv_hash_set_lock_class(bat_priv->nc.decoding_hash,
 				   &batadv_nc_decoding_hash_lock_class_key);
 
 	INIT_DELAYED_WORK(&bat_priv->nc.work, batadv_nc_worker);
···
 
 	router = batadv_orig_router_get(orig_node, recv_if);
 
+	if (!router)
+		return router;
+
 	/* only consider bonding for recv_if == BATADV_IF_DEFAULT (first hop)
 	 * and if activated.
 	 */
-	if (recv_if == BATADV_IF_DEFAULT || !atomic_read(&bat_priv->bonding) ||
-	    !router)
+	if (!(recv_if == BATADV_IF_DEFAULT && atomic_read(&bat_priv->bonding)))
 		return router;
 
 	/* bonding: loop through the list of possible routers found
···
 
 	BT_DBG("");
 
+	if (!l2cap_is_socket(sock))
+		return -EBADFD;
+
 	session = kzalloc(sizeof(struct cmtp_session), GFP_KERNEL);
 	if (!session)
 		return -ENOMEM;
+12-4
net/bluetooth/hci_event.c
···
 	if (rp->status)
 		return;
 
-	if (test_bit(HCI_SETUP, &hdev->dev_flags))
+	if (test_bit(HCI_SETUP, &hdev->dev_flags) ||
+	    test_bit(HCI_CONFIG, &hdev->dev_flags))
 		memcpy(hdev->dev_name, rp->name, HCI_MAX_NAME_LENGTH);
 }
 
···
 	if (rp->status)
 		return;
 
-	if (test_bit(HCI_SETUP, &hdev->dev_flags)) {
+	if (test_bit(HCI_SETUP, &hdev->dev_flags) ||
+	    test_bit(HCI_CONFIG, &hdev->dev_flags)) {
 		hdev->hci_ver = rp->hci_ver;
 		hdev->hci_rev = __le16_to_cpu(rp->hci_rev);
 		hdev->lmp_ver = rp->lmp_ver;
···
 	if (rp->status)
 		return;
 
-	if (test_bit(HCI_SETUP, &hdev->dev_flags))
+	if (test_bit(HCI_SETUP, &hdev->dev_flags) ||
+	    test_bit(HCI_CONFIG, &hdev->dev_flags))
 		memcpy(hdev->commands, rp->commands, sizeof(hdev->commands));
 }
 
···
 		return;
 	}
 
-	if (!test_bit(HCI_CONNECTABLE, &hdev->dev_flags) &&
+	/* Require HCI_CONNECTABLE or a whitelist entry to accept the
+	 * connection. These features are only touched through mgmt so
+	 * only do the checks if HCI_MGMT is set.
+	 */
+	if (test_bit(HCI_MGMT, &hdev->dev_flags) &&
+	    !test_bit(HCI_CONNECTABLE, &hdev->dev_flags) &&
 	    !hci_bdaddr_list_lookup(&hdev->whitelist, &ev->bdaddr,
 				    BDADDR_BREDR)) {
 		hci_reject_conn(hdev, &ev->bdaddr);
+2-1
net/bluetooth/hidp/core.c
···
 {
 	struct hidp_session *session;
 	struct l2cap_conn *conn;
-	struct l2cap_chan *chan = l2cap_pi(ctrl_sock->sk)->chan;
+	struct l2cap_chan *chan;
 	int ret;
 
 	ret = hidp_verify_sockets(ctrl_sock, intr_sock);
 	if (ret)
 		return ret;
 
+	chan = l2cap_pi(ctrl_sock->sk)->chan;
 	conn = NULL;
 	l2cap_chan_lock(chan);
 	if (chan->conn)
+2-1
net/bridge/br_input.c
···
 		dst = NULL;
 
 	if (is_broadcast_ether_addr(dest)) {
-		if (p->flags & BR_PROXYARP &&
+		if (IS_ENABLED(CONFIG_INET) &&
+		    p->flags & BR_PROXYARP &&
 		    skb->protocol == htons(ETH_P_ARP))
 			br_do_proxy_arp(skb, br, vid);
 
···
 
 	skb_scrub_packet(skb, true);
 	skb->protocol = eth_type_trans(skb, dev);
+	skb_postpull_rcsum(skb, eth_hdr(skb), ETH_HLEN);
 
 	return 0;
 }
···
 /* If MPLS offload request, verify we are testing hardware MPLS features
  * instead of standard features for the netdev.
  */
-#ifdef CONFIG_NET_MPLS_GSO
+#if IS_ENABLED(CONFIG_NET_MPLS_GSO)
 static netdev_features_t net_mpls_features(struct sk_buff *skb,
 					   netdev_features_t features,
 					   __be16 type)
···
 
 netdev_features_t netif_skb_features(struct sk_buff *skb)
 {
-	const struct net_device *dev = skb->dev;
+	struct net_device *dev = skb->dev;
 	netdev_features_t features = dev->features;
 	u16 gso_segs = skb_shinfo(skb)->gso_segs;
 	__be16 protocol = skb->protocol;
···
 	if (gso_segs > dev->gso_max_segs || gso_segs < dev->gso_min_segs)
 		features &= ~NETIF_F_GSO_MASK;
 
-	if (protocol == htons(ETH_P_8021Q) || protocol == htons(ETH_P_8021AD)) {
-		struct vlan_ethhdr *veh = (struct vlan_ethhdr *)skb->data;
-		protocol = veh->h_vlan_encapsulated_proto;
-	} else if (!vlan_tx_tag_present(skb)) {
-		return harmonize_features(skb, features);
+	/* If encapsulation offload request, verify we are testing
+	 * hardware encapsulation features instead of standard
+	 * features for the netdev
+	 */
+	if (skb->encapsulation)
+		features &= dev->hw_enc_features;
+
+	if (!vlan_tx_tag_present(skb)) {
+		if (unlikely(protocol == htons(ETH_P_8021Q) ||
+			     protocol == htons(ETH_P_8021AD))) {
+			struct vlan_ethhdr *veh = (struct vlan_ethhdr *)skb->data;
+			protocol = veh->h_vlan_encapsulated_proto;
+		} else {
+			goto finalize;
+		}
 	}
 
 	features = netdev_intersect_features(features,
···
 						     NETIF_F_GEN_CSUM |
 						     NETIF_F_HW_VLAN_CTAG_TX |
 						     NETIF_F_HW_VLAN_STAG_TX);
+
+finalize:
+	if (dev->netdev_ops->ndo_features_check)
+		features &= dev->netdev_ops->ndo_features_check(skb, dev,
+								features);
 
 	return harmonize_features(skb, features);
 }
···
 	if (unlikely(!skb))
 		goto out_null;
 
-	/* If encapsulation offload request, verify we are testing
-	 * hardware encapsulation features instead of standard
-	 * features for the netdev
-	 */
-	if (skb->encapsulation)
-		features &= dev->hw_enc_features;
-
 	if (netif_needs_gso(dev, skb, features)) {
 		struct sk_buff *segs;
 
 		segs = skb_gso_segment(skb, features);
 		if (IS_ERR(segs)) {
-			segs = NULL;
+			goto out_kfree_skb;
 		} else if (segs) {
 			consume_skb(skb);
 			skb = segs;
···
 }
 EXPORT_SYMBOL(netif_napi_del);
 
+static int napi_poll(struct napi_struct *n, struct list_head *repoll)
+{
+	void *have;
+	int work, weight;
+
+	list_del_init(&n->poll_list);
+
+	have = netpoll_poll_lock(n);
+
+	weight = n->weight;
+
+	/* This NAPI_STATE_SCHED test is for avoiding a race
+	 * with netpoll's poll_napi(). Only the entity which
+	 * obtains the lock and sees NAPI_STATE_SCHED set will
+	 * actually make the ->poll() call. Therefore we avoid
+	 * accidentally calling ->poll() when NAPI is not scheduled.
+	 */
+	work = 0;
+	if (test_bit(NAPI_STATE_SCHED, &n->state)) {
+		work = n->poll(n, weight);
+		trace_napi_poll(n);
+	}
+
+	WARN_ON_ONCE(work > weight);
+
+	if (likely(work < weight))
+		goto out_unlock;
+
+	/* Drivers must not modify the NAPI state if they
+	 * consume the entire weight. In such cases this code
+	 * still "owns" the NAPI instance and therefore can
+	 * move the instance around on the list at-will.
+	 */
+	if (unlikely(napi_disable_pending(n))) {
+		napi_complete(n);
+		goto out_unlock;
+	}
+
+	if (n->gro_list) {
+		/* flush too old packets
+		 * If HZ < 1000, flush all packets.
+		 */
+		napi_gro_flush(n, HZ >= 1000);
+	}
+
+	/* Some drivers may have called napi_schedule
+	 * prior to exhausting their budget.
+	 */
+	if (unlikely(!list_empty(&n->poll_list))) {
+		pr_warn_once("%s: Budget exhausted after napi rescheduled\n",
+			     n->dev ? n->dev->name : "backlog");
+		goto out_unlock;
+	}
+
+	list_add_tail(&n->poll_list, repoll);
+
+out_unlock:
+	netpoll_poll_unlock(have);
+
+	return work;
+}
+
 static void net_rx_action(struct softirq_action *h)
 {
 	struct softnet_data *sd = this_cpu_ptr(&softnet_data);
···
 	int budget = netdev_budget;
 	LIST_HEAD(list);
 	LIST_HEAD(repoll);
-	void *have;
 
 	local_irq_disable();
 	list_splice_init(&sd->poll_list, &list);
 	local_irq_enable();
 
-	while (!list_empty(&list)) {
+	for (;;) {
 		struct napi_struct *n;
-		int work, weight;
+
+		if (list_empty(&list)) {
+			if (!sd_has_rps_ipi_waiting(sd) && list_empty(&repoll))
+				return;
+			break;
+		}
+
+		n = list_first_entry(&list, struct napi_struct, poll_list);
+		budget -= napi_poll(n, &repoll);
 
 		/* If softirq window is exhausted then punt.
 		 * Allow this to run for 2 jiffies since which will allow
 		 * an average latency of 1.5/HZ.
 		 */
-		if (unlikely(budget <= 0 || time_after_eq(jiffies, time_limit)))
-			goto softnet_break;
-
-
-		n = list_first_entry(&list, struct napi_struct, poll_list);
-		list_del_init(&n->poll_list);
-
-		have = netpoll_poll_lock(n);
-
-		weight = n->weight;
-
-		/* This NAPI_STATE_SCHED test is for avoiding a race
-		 * with netpoll's poll_napi(). Only the entity which
-		 * obtains the lock and sees NAPI_STATE_SCHED set will
-		 * actually make the ->poll() call. Therefore we avoid
-		 * accidentally calling ->poll() when NAPI is not scheduled.
-		 */
-		work = 0;
-		if (test_bit(NAPI_STATE_SCHED, &n->state)) {
-			work = n->poll(n, weight);
-			trace_napi_poll(n);
+		if (unlikely(budget <= 0 ||
+			     time_after_eq(jiffies, time_limit))) {
+			sd->time_squeeze++;
+			break;
 		}
-
-		WARN_ON_ONCE(work > weight);
-
-		budget -= work;
-
-		/* Drivers must not modify the NAPI state if they
-		 * consume the entire weight. In such cases this code
-		 * still "owns" the NAPI instance and therefore can
-		 * move the instance around on the list at-will.
-		 */
-		if (unlikely(work == weight)) {
-			if (unlikely(napi_disable_pending(n))) {
-				napi_complete(n);
-			} else {
-				if (n->gro_list) {
-					/* flush too old packets
-					 * If HZ < 1000, flush all packets.
-					 */
-					napi_gro_flush(n, HZ >= 1000);
-				}
-				list_add_tail(&n->poll_list, &repoll);
-			}
-		}
-
-		netpoll_poll_unlock(have);
 	}
 
-	if (!sd_has_rps_ipi_waiting(sd) &&
-	    list_empty(&list) &&
-	    list_empty(&repoll))
-		return;
-out:
 	local_irq_disable();
 
 	list_splice_tail_init(&sd->poll_list, &list);
···
 	__raise_softirq_irqoff(NET_RX_SOFTIRQ);
 
 	net_rps_action_and_irq_enable(sd);
-
-	return;
-
-softnet_break:
-	sd->time_squeeze++;
-	goto out;
 }
 
 struct netdev_adjacent {
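The net_rx_action() refactoring above factors per-instance polling into napi_poll() and keeps the budget/time-limit bookkeeping in the loop. A rough sketch (illustrative Python, not the kernel loop; the callback signature is an assumption for the sketch) of that control flow:

```python
def net_rx_action(poll_list, napi_poll, budget):
    """Drain poll_list with a work budget.

    napi_poll(n) -> (work, done): `work` packets processed, `done` False
    if the instance consumed its full weight and must be polled again.
    Returns (repoll, squeezed): instances to repoll, and whether the
    softirq window was exhausted before the list emptied.
    """
    repoll = []
    squeezed = False
    while poll_list:
        n = poll_list.pop(0)
        work, done = napi_poll(n)
        budget -= work
        if not done:
            repoll.append(n)   # consumed full weight: poll again later
        if budget <= 0:
            squeezed = True    # budget exhausted: punt, count a squeeze
            break
    return repoll, squeezed
```

As in the kernel code, instances left on the list when the budget runs out are simply carried over to the next softirq round rather than dropped.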
+44
net/core/neighbour.c
···
 		case NDTPA_BASE_REACHABLE_TIME:
 			NEIGH_VAR_SET(p, BASE_REACHABLE_TIME,
 				      nla_get_msecs(tbp[i]));
+			/* update reachable_time as well, otherwise, the change will
+			 * only be effective after the next time neigh_periodic_work
+			 * decides to recompute it (can be multiple minutes)
+			 */
+			p->reachable_time =
+				neigh_rand_reach_time(NEIGH_VAR(p, BASE_REACHABLE_TIME));
 			break;
 		case NDTPA_GC_STALETIME:
 			NEIGH_VAR_SET(p, GC_STALETIME,
···
 	return ret;
 }

+static int neigh_proc_base_reachable_time(struct ctl_table *ctl, int write,
+					  void __user *buffer,
+					  size_t *lenp, loff_t *ppos)
+{
+	struct neigh_parms *p = ctl->extra2;
+	int ret;
+
+	if (strcmp(ctl->procname, "base_reachable_time") == 0)
+		ret = neigh_proc_dointvec_jiffies(ctl, write, buffer, lenp, ppos);
+	else if (strcmp(ctl->procname, "base_reachable_time_ms") == 0)
+		ret = neigh_proc_dointvec_ms_jiffies(ctl, write, buffer, lenp, ppos);
+	else
+		ret = -1;
+
+	if (write && ret == 0) {
+		/* update reachable_time as well, otherwise, the change will
+		 * only be effective after the next time neigh_periodic_work
+		 * decides to recompute it
+		 */
+		p->reachable_time =
+			neigh_rand_reach_time(NEIGH_VAR(p, BASE_REACHABLE_TIME));
+	}
+	return ret;
+}
+
 #define NEIGH_PARMS_DATA_OFFSET(index)	\
 	(&((struct neigh_parms *) 0)->data[index])
···
 		t->neigh_vars[NEIGH_VAR_RETRANS_TIME_MS].proc_handler = handler;
 		/* ReachableTime (in milliseconds) */
 		t->neigh_vars[NEIGH_VAR_BASE_REACHABLE_TIME_MS].proc_handler = handler;
+	} else {
+		/* Those handlers will update p->reachable_time after
+		 * base_reachable_time(_ms) is set to ensure the new timer starts being
+		 * applied after the next neighbour update instead of waiting for
+		 * neigh_periodic_work to update its value (can be multiple minutes)
+		 * So any handler that replaces them should do this as well
+		 */
+		/* ReachableTime */
+		t->neigh_vars[NEIGH_VAR_BASE_REACHABLE_TIME].proc_handler =
+			neigh_proc_base_reachable_time;
+		/* ReachableTime (in milliseconds) */
+		t->neigh_vars[NEIGH_VAR_BASE_REACHABLE_TIME_MS].proc_handler =
+			neigh_proc_base_reachable_time;
 	}

 	/* Don't export sysctls to unprivileged users */
net/netfilter/ipvs/ip_vs_ftp.c
···
 	struct nf_conn *ct;
 	struct net *net;

+	*diff = 0;
+
 #ifdef CONFIG_IP_VS_IPV6
 	/* This application helper doesn't work with IPv6 yet,
 	 * so turn this into a no-op for IPv6 packets
···
 	if (cp->af == AF_INET6)
 		return 1;
 #endif
-
-	*diff = 0;

 	/* Only useful for established sessions */
 	if (cp->state != IP_VS_TCP_S_ESTABLISHED)
···
 	struct ip_vs_conn *n_cp;
 	struct net *net;

+	/* no diff required for incoming packets */
+	*diff = 0;
+
 #ifdef CONFIG_IP_VS_IPV6
 	/* This application helper doesn't work with IPv6 yet,
 	 * so turn this into a no-op for IPv6 packets
···
 	if (cp->af == AF_INET6)
 		return 1;
 #endif
-
-	/* no diff required for incoming packets */
-	*diff = 0;

 	/* Only useful for established sessions */
 	if (cp->state != IP_VS_TCP_S_ESTABLISHED)
+9-11
net/netfilter/nf_conntrack_core.c
···
 	 */
 	NF_CT_ASSERT(!nf_ct_is_confirmed(ct));
 	pr_debug("Confirming conntrack %p\n", ct);
-	/* We have to check the DYING flag inside the lock to prevent
-	   a race against nf_ct_get_next_corpse() possibly called from
-	   user context, else we insert an already 'dead' hash, blocking
-	   further use of that particular connection -JM */
+	/* We have to check the DYING flag after unlink to prevent
+	 * a race against nf_ct_get_next_corpse() possibly called from
+	 * user context, else we insert an already 'dead' hash, blocking
+	 * further use of that particular connection -JM.
+	 */
+	nf_ct_del_from_dying_or_unconfirmed_list(ct);

-	if (unlikely(nf_ct_is_dying(ct))) {
-		nf_conntrack_double_unlock(hash, reply_hash);
-		local_bh_enable();
-		return NF_ACCEPT;
-	}
+	if (unlikely(nf_ct_is_dying(ct)))
+		goto out;

 	/* See if there's one in the list already, including reverse:
 	   NAT could have grabbed it without realizing, since we're
···
 				      &h->tuple) &&
 		    zone == nf_ct_zone(nf_ct_tuplehash_to_ctrack(h)))
 			goto out;
-
-	nf_ct_del_from_dying_or_unconfirmed_list(ct);

 	/* Timer relative to confirmation time, not original
 	   setting time, otherwise we'd get timer wrap in
···
 	return NF_ACCEPT;

 out:
+	nf_ct_add_to_dying_list(ct);
 	nf_conntrack_double_unlock(hash, reply_hash);
 	NF_CT_STAT_INC(net, insert_failed);
 	local_bh_enable();
net/netlink/af_netlink.c
···
 	mutex_unlock(&nl_sk_hash_lock);

 	netlink_table_grab();
-	if (nlk_sk(sk)->subscriptions)
+	if (nlk_sk(sk)->subscriptions) {
 		__sk_del_bind_node(sk);
+		netlink_update_listeners(sk);
+	}
 	netlink_table_ungrab();
 }
···
 	struct module *module = NULL;
 	struct mutex *cb_mutex;
 	struct netlink_sock *nlk;
-	int (*bind)(int group);
-	void (*unbind)(int group);
+	int (*bind)(struct net *net, int group);
+	void (*unbind)(struct net *net, int group);
 	int err = 0;

 	sock->state = SS_UNCONNECTED;
···

 	module_put(nlk->module);

-	netlink_table_grab();
 	if (netlink_is_kernel(sk)) {
+		netlink_table_grab();
 		BUG_ON(nl_table[sk->sk_protocol].registered == 0);
 		if (--nl_table[sk->sk_protocol].registered == 0) {
 			struct listeners *old;
···
 			nl_table[sk->sk_protocol].flags = 0;
 			nl_table[sk->sk_protocol].registered = 0;
 		}
-	} else if (nlk->subscriptions) {
-		netlink_update_listeners(sk);
+		netlink_table_ungrab();
 	}
-	netlink_table_ungrab();

+	if (nlk->netlink_unbind) {
+		int i;
+
+		for (i = 0; i < nlk->ngroups; i++)
+			if (test_bit(i, nlk->groups))
+				nlk->netlink_unbind(sock_net(sk), i + 1);
+	}
 	kfree(nlk->groups);
 	nlk->groups = NULL;
···
 	return err;
 }

-static void netlink_unbind(int group, long unsigned int groups,
-			   struct netlink_sock *nlk)
+static void netlink_undo_bind(int group, long unsigned int groups,
+			      struct sock *sk)
 {
+	struct netlink_sock *nlk = nlk_sk(sk);
 	int undo;

 	if (!nlk->netlink_unbind)
···

 	for (undo = 0; undo < group; undo++)
 		if (test_bit(undo, &groups))
-			nlk->netlink_unbind(undo);
+			nlk->netlink_unbind(sock_net(sk), undo);
 }

 static int netlink_bind(struct socket *sock, struct sockaddr *addr,
···
 	for (group = 0; group < nlk->ngroups; group++) {
 		if (!test_bit(group, &groups))
 			continue;
-		err = nlk->netlink_bind(group);
+		err = nlk->netlink_bind(net, group);
 		if (!err)
 			continue;
-		netlink_unbind(group, groups, nlk);
+		netlink_undo_bind(group, groups, sk);
 		return err;
 	}
 }
···
 		netlink_insert(sk, net, nladdr->nl_pid) :
 		netlink_autobind(sock);
 	if (err) {
-		netlink_unbind(nlk->ngroups, groups, nlk);
+		netlink_undo_bind(nlk->ngroups, groups, sk);
 		return err;
 	}
 }
···
 	if (!val || val - 1 >= nlk->ngroups)
 		return -EINVAL;
 	if (optname == NETLINK_ADD_MEMBERSHIP && nlk->netlink_bind) {
-		err = nlk->netlink_bind(val);
+		err = nlk->netlink_bind(sock_net(sk), val);
 		if (err)
 			return err;
 	}
···
 		    optname == NETLINK_ADD_MEMBERSHIP);
 	netlink_table_ungrab();
 	if (optname == NETLINK_DROP_MEMBERSHIP && nlk->netlink_unbind)
-		nlk->netlink_unbind(val);
+		nlk->netlink_unbind(sock_net(sk), val);

 	err = 0;
 	break;
+4-4
net/netlink/af_netlink.h
···
 	struct mutex		*cb_mutex;
 	struct mutex		cb_def_mutex;
 	void			(*netlink_rcv)(struct sk_buff *skb);
-	int			(*netlink_bind)(int group);
-	void			(*netlink_unbind)(int group);
+	int			(*netlink_bind)(struct net *net, int group);
+	void			(*netlink_unbind)(struct net *net, int group);
 	struct module		*module;
 #ifdef CONFIG_NETLINK_MMAP
 	struct mutex		pg_vec_lock;
···
 	unsigned int		groups;
 	struct mutex		*cb_mutex;
 	struct module		*module;
-	int			(*bind)(int group);
-	void			(*unbind)(int group);
+	int			(*bind)(struct net *net, int group);
+	void			(*unbind)(struct net *net, int group);
 	bool			(*compare)(struct net *net, struct sock *sock);
 	int			registered;
 };
+56
net/netlink/genetlink.c
···
 	{ .name = "notify", },
 };

+static int genl_bind(struct net *net, int group)
+{
+	int i, err = 0;
+
+	down_read(&cb_lock);
+	for (i = 0; i < GENL_FAM_TAB_SIZE; i++) {
+		struct genl_family *f;
+
+		list_for_each_entry(f, genl_family_chain(i), family_list) {
+			if (group >= f->mcgrp_offset &&
+			    group < f->mcgrp_offset + f->n_mcgrps) {
+				int fam_grp = group - f->mcgrp_offset;
+
+				if (!f->netnsok && net != &init_net)
+					err = -ENOENT;
+				else if (f->mcast_bind)
+					err = f->mcast_bind(net, fam_grp);
+				else
+					err = 0;
+				break;
+			}
+		}
+	}
+	up_read(&cb_lock);
+
+	return err;
+}
+
+static void genl_unbind(struct net *net, int group)
+{
+	int i;
+	bool found = false;
+
+	down_read(&cb_lock);
+	for (i = 0; i < GENL_FAM_TAB_SIZE; i++) {
+		struct genl_family *f;
+
+		list_for_each_entry(f, genl_family_chain(i), family_list) {
+			if (group >= f->mcgrp_offset &&
+			    group < f->mcgrp_offset + f->n_mcgrps) {
+				int fam_grp = group - f->mcgrp_offset;
+
+				if (f->mcast_unbind)
+					f->mcast_unbind(net, fam_grp);
+				found = true;
+				break;
+			}
+		}
+	}
+	up_read(&cb_lock);
+
+	WARN_ON(!found);
+}
+
 static int __net_init genl_pernet_init(struct net *net)
 {
 	struct netlink_kernel_cfg cfg = {
 		.input		= genl_rcv,
 		.flags		= NL_CFG_F_NONROOT_RECV,
+		.bind		= genl_bind,
+		.unbind		= genl_unbind,
 	};

 	/* we'll bump the group number right afterwards */
net/wireless/Kconfig
···
 	  Most distributions have a CRDA package.  So if unsure, say N.

 config CFG80211_WEXT
-	bool
+	bool "cfg80211 wireless extensions compatibility"
 	depends on CFG80211
 	select WEXT_CORE
 	help
+8-8
scripts/Makefile.clean
···

 __clean-files	:= $(filter-out $(no-clean-files), $(__clean-files))

-# as clean-files is given relative to the current directory, this adds
-# a $(obj) prefix, except for absolute paths
+# clean-files is given relative to the current directory, unless it
+# starts with $(objtree)/ (which means "./", so do not add "./" unless
+# you want to delete a file from the toplevel object directory).

 __clean-files   := $(wildcard \
-		   $(addprefix $(obj)/, $(filter-out /%, $(__clean-files))) \
-		   $(filter /%, $(__clean-files)))
+		   $(addprefix $(obj)/, $(filter-out $(objtree)/%, $(__clean-files))) \
+		   $(filter $(objtree)/%, $(__clean-files)))

-# as clean-dirs is given relative to the current directory, this adds
-# a $(obj) prefix, except for absolute paths
+# same as clean-files

 __clean-dirs    := $(wildcard \
-		   $(addprefix $(obj)/, $(filter-out /%, $(clean-dirs))) \
-		   $(filter /%, $(clean-dirs)))
+		   $(addprefix $(obj)/, $(filter-out $(objtree)/%, $(clean-dirs))) \
+		   $(filter $(objtree)/%, $(clean-dirs)))

 # ==========================================================================
+2-2
security/keys/gc.c
···
 	if (test_bit(KEY_FLAG_INSTANTIATED, &key->flags))
 		atomic_dec(&key->user->nikeys);

-	key_user_put(key->user);
-
 	/* now throw away the key memory */
 	if (key->type->destroy)
 		key->type->destroy(key);
+
+	key_user_put(key->user);

 	kfree(key->description);
sound/soc/soc-core.c
···
 			  const char *propname)
 {
 	struct device_node *np = card->dev->of_node;
-	int num_routes, old_routes;
+	int num_routes;
 	struct snd_soc_dapm_route *routes;
 	int i, ret;
···
 		return -EINVAL;
 	}

-	old_routes = card->num_dapm_routes;
-	routes = devm_kzalloc(card->dev,
-			      (old_routes + num_routes) * sizeof(*routes),
+	routes = devm_kzalloc(card->dev, num_routes * sizeof(*routes),
 			      GFP_KERNEL);
 	if (!routes) {
 		dev_err(card->dev,
···
 		return -EINVAL;
 	}

-	memcpy(routes, card->dapm_routes, old_routes * sizeof(*routes));
-
 	for (i = 0; i < num_routes; i++) {
 		ret = of_property_read_string_index(np, propname,
-			2 * i, &routes[old_routes + i].sink);
+			2 * i, &routes[i].sink);
 		if (ret) {
 			dev_err(card->dev,
 				"ASoC: Property '%s' index %d could not be read: %d\n",
···
 			return -EINVAL;
 		}
 		ret = of_property_read_string_index(np, propname,
-			(2 * i) + 1, &routes[old_routes + i].source);
+			(2 * i) + 1, &routes[i].source);
 		if (ret) {
 			dev_err(card->dev,
 				"ASoC: Property '%s' index %d could not be read: %d\n",
···
 		}
 	}

-	card->num_dapm_routes += num_routes;
+	card->num_dapm_routes = num_routes;
 	card->dapm_routes = routes;

 	return 0;
+1-1
sound/usb/caiaq/audio.c
···
 		return -EINVAL;
 	}

-	if (cdev->n_streams < 2) {
+	if (cdev->n_streams < 1) {
 		dev_err(dev, "bogus number of streams: %d\n", cdev->n_streams);
 		return -EINVAL;
 	}
+2
tools/include/asm-generic/bitops.h
···
 #error only <linux/bitops.h> can be included directly
 #endif

+#include <asm-generic/bitops/hweight.h>
+
 #include <asm-generic/bitops/atomic.h>

 #endif /* __TOOLS_ASM_GENERIC_BITOPS_H */
tools/include/linux/bitops.h
···
 #ifndef _TOOLS_LINUX_BITOPS_H_
 #define _TOOLS_LINUX_BITOPS_H_

+#include <asm/types.h>
 #include <linux/kernel.h>
 #include <linux/compiler.h>
-#include <asm/hweight.h>

 #ifndef __WORDSIZE
 #define __WORDSIZE (__SIZEOF_LONG__ * 8)
···
 #define BITS_TO_U64(nr)		DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(u64))
 #define BITS_TO_U32(nr)		DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(u32))
 #define BITS_TO_BYTES(nr)	DIV_ROUND_UP(nr, BITS_PER_BYTE)
+
+extern unsigned int __sw_hweight8(unsigned int w);
+extern unsigned int __sw_hweight16(unsigned int w);
+extern unsigned int __sw_hweight32(unsigned int w);
+extern unsigned long __sw_hweight64(__u64 w);

 /*
  * Include this here because some architectures need generic_ffs/fls in
+1-1
tools/lib/api/fs/debugfs.c
···

 	if (statfs(debugfs, &st_fs) < 0)
 		return -ENOENT;
-	else if (st_fs.f_type != (long) DEBUGFS_MAGIC)
+	else if ((long)st_fs.f_type != (long)DEBUGFS_MAGIC)
 		return -ENOENT;

 	return 0;
+1-1
tools/lib/api/fs/fs.c
···

 	if (statfs(fs, &st_fs) < 0)
 		return -ENOENT;
-	else if (st_fs.f_type != magic)
+	else if ((long)st_fs.f_type != magic)
 		return -ENOENT;

 	return 0;
+2-2
tools/lib/lockdep/preload.c
···
 	 *
 	 * TODO: Hook into free() and add that check there as well.
 	 */
-	debug_check_no_locks_freed(mutex, mutex + sizeof(*mutex));
+	debug_check_no_locks_freed(mutex, sizeof(*mutex));
 	__del_lock(__get_lock(mutex));
 	return ll_pthread_mutex_destroy(mutex);
 }
···
 {
 	try_init_preload();

-	debug_check_no_locks_freed(rwlock, rwlock + sizeof(*rwlock));
+	debug_check_no_locks_freed(rwlock, sizeof(*rwlock));
 	__del_lock(__get_lock(rwlock));
 	return ll_pthread_rwlock_destroy(rwlock);
 }
tools/perf/util/hweight.c
···
-#include <linux/bitops.h>
-
-/**
- * hweightN - returns the hamming weight of a N-bit word
- * @x: the word to weigh
- *
- * The Hamming Weight of a number is the total number of bits set in it.
- */
-
-unsigned int hweight32(unsigned int w)
-{
-	unsigned int res = w - ((w >> 1) & 0x55555555);
-	res = (res & 0x33333333) + ((res >> 2) & 0x33333333);
-	res = (res + (res >> 4)) & 0x0F0F0F0F;
-	res = res + (res >> 8);
-	return (res + (res >> 16)) & 0x000000FF;
-}
-
-unsigned long hweight64(__u64 w)
-{
-#if BITS_PER_LONG == 32
-	return hweight32((unsigned int)(w >> 32)) + hweight32((unsigned int)w);
-#elif BITS_PER_LONG == 64
-	__u64 res = w - ((w >> 1) & 0x5555555555555555ul);
-	res = (res & 0x3333333333333333ul) + ((res >> 2) & 0x3333333333333333ul);
-	res = (res + (res >> 4)) & 0x0F0F0F0F0F0F0F0Ful;
-	res = res + (res >> 8);
-	res = res + (res >> 16);
-	return (res + (res >> 32)) & 0x00000000000000FFul;
-#endif
-}
-8
tools/perf/util/include/asm/hweight.h
···
-#ifndef PERF_HWEIGHT_H
-#define PERF_HWEIGHT_H
-
-#include <linux/types.h>
-unsigned int hweight32(unsigned int w);
-unsigned long hweight64(__u64 w);
-
-#endif /* PERF_HWEIGHT_H */
+3-1
tools/perf/util/machine.c
···
 	if (th != NULL) {
 		rb_link_node(&th->rb_node, parent, p);
 		rb_insert_color(&th->rb_node, &machine->threads);
-		machine->last_match = th;

 		/*
 		 * We have to initialize map_groups separately
···
 		 * leader and that would screwed the rb tree.
 		 */
 		if (thread__init_map_groups(th, machine)) {
+			rb_erase(&th->rb_node, &machine->threads);
 			thread__delete(th);
 			return NULL;
 		}
+
+		machine->last_match = th;
 	}

 	return th;
+7-3
tools/perf/util/probe-event.c
···
 	}

 	if (ntevs == 0)	{	/* No error but failed to find probe point. */
-		pr_warning("Probe point '%s' not found.\n",
+		pr_warning("Probe point '%s' not found in debuginfo.\n",
 			   synthesize_perf_probe_point(&pev->point));
-		return -ENOENT;
+		if (need_dwarf)
+			return -ENOENT;
+		return 0;
 	}
 	/* Error path : ntevs < 0 */
 	pr_debug("An error occurred in debuginfo analysis (%d).\n", ntevs);
···
 	pr_debug("Writing event: %s\n", buf);
 	if (!probe_event_dry_run) {
 		ret = write(fd, buf, strlen(buf));
-		if (ret <= 0)
+		if (ret <= 0) {
+			ret = -errno;
 			pr_warning("Failed to write event: %s\n",
 				   strerror_r(errno, sbuf, sizeof(sbuf)));
+		}
 	}
 	free(buf);
 	return ret;
+17-1
tools/perf/util/probe-finder.c
···
 	int ret = 0;

 #if _ELFUTILS_PREREQ(0, 142)
+	Elf *elf;
+	GElf_Ehdr ehdr;
+	GElf_Shdr shdr;
+
 	/* Get the call frame information from this dwarf */
-	pf->cfi = dwarf_getcfi_elf(dwarf_getelf(dbg->dbg));
+	elf = dwarf_getelf(dbg->dbg);
+	if (elf == NULL)
+		return -EINVAL;
+
+	if (gelf_getehdr(elf, &ehdr) == NULL)
+		return -EINVAL;
+
+	if (elf_section_by_name(elf, &ehdr, &shdr, ".eh_frame", NULL) &&
+	    shdr.sh_type == SHT_PROGBITS) {
+		pf->cfi = dwarf_getcfi_elf(elf);
+	} else {
+		pf->cfi = dwarf_getcfi(dbg->dbg);
+	}
 #endif

 	off = 0;
tools/testing/selftests/exec/execveat.c
···
 }

 static int check_execveat_invoked_rc(int fd, const char *path, int flags,
-				     int expected_rc)
+				     int expected_rc, int expected_rc2)
 {
 	int status;
 	int rc;
···
 		       child, status);
 		return 1;
 	}
-	if (WEXITSTATUS(status) != expected_rc) {
-		printf("[FAIL] (child %d exited with %d not %d)\n",
-		       child, WEXITSTATUS(status), expected_rc);
+	if ((WEXITSTATUS(status) != expected_rc) &&
+	    (WEXITSTATUS(status) != expected_rc2)) {
+		printf("[FAIL] (child %d exited with %d not %d nor %d)\n",
+		       child, WEXITSTATUS(status), expected_rc, expected_rc2);
 		return 1;
 	}
 	printf("[OK]\n");
···

 static int check_execveat(int fd, const char *path, int flags)
 {
-	return check_execveat_invoked_rc(fd, path, flags, 99);
+	return check_execveat_invoked_rc(fd, path, flags, 99, 99);
 }

 static char *concat(const char *left, const char *right)
···
 	 */
 	fd = open(longpath, O_RDONLY);
 	if (fd > 0) {
-		printf("Invoke copy of '%s' via filename of length %lu:\n",
+		printf("Invoke copy of '%s' via filename of length %zu:\n",
 		       src, strlen(longpath));
 		fail += check_execveat(fd, "", AT_EMPTY_PATH);
 	} else {
-		printf("Failed to open length %lu filename, errno=%d (%s)\n",
+		printf("Failed to open length %zu filename, errno=%d (%s)\n",
 		       strlen(longpath), errno, strerror(errno));
 		fail++;
 	}
···
 	 * Execute as a long pathname relative to ".".  If this is a script,
 	 * the interpreter will launch but fail to open the script because its
 	 * name ("/dev/fd/5/xxx....") is bigger than PATH_MAX.
+	 *
+	 * The failure code is usually 127 (POSIX: "If a command is not found,
+	 * the exit status shall be 127."), but some systems give 126 (POSIX:
+	 * "If the command name is found, but it is not an executable utility,
+	 * the exit status shall be 126."), so allow either.
 	 */
 	if (is_script)
-		fail += check_execveat_invoked_rc(dot_dfd, longpath, 0, 127);
+		fail += check_execveat_invoked_rc(dot_dfd, longpath, 0,
+						  127, 126);
 	else
 		fail += check_execveat(dot_dfd, longpath, 0);
+1-2
tools/testing/selftests/mqueue/mq_perf_tests.c
···
 {
 	struct mq_attr attr;
 	char *option, *next_option;
-	int i, cpu;
+	int i, cpu, rc;
 	struct sigaction sa;
 	poptContext popt_context;
-	char rc;
 	void *retval;

 	main_thread = pthread_self();