···
 
 - clock-frequency : The frequency of the main counter, in Hz. Optional.
 
+- always-on : a boolean property. If present, the timer is powered through an
+  always-on power domain, therefore it never loses context.
+
 Example:
 
 	timer {
···
 - max-frame-size: See ethernet.txt file in the same directory
 - clocks: If present, the first clock should be the GMAC main clock,
   further clocks may be specified in derived bindings.
-- clocks-names: One name for each entry in the clocks property, the
+- clock-names: One name for each entry in the clocks property, the
   first one should be "stmmaceth".
 
 Examples:
···
   "ti,tlv320aic3111" - TLV320AIC3111 (stereo speaker amp, MiniDSP)
 
 - reg - <int> -  I2C slave address
+- HPVDD-supply, SPRVDD-supply, SPLVDD-supply, AVDD-supply, IOVDD-supply,
+  DVDD-supply : power supplies for the device as covered in
+  Documentation/devicetree/bindings/regulator/regulator.txt
 
 
 Optional properties:
···
     3 or MICBIAS_AVDD  - MICBIAS output is connected to AVDD
     If this node is not mentioned or if the value is unknown, then
     micbias is set to 2.0V.
-- HPVDD-supply, SPRVDD-supply, SPLVDD-supply, AVDD-supply, IOVDD-supply,
-  DVDD-supply : power supplies for the device as covered in
-  Documentation/devicetree/bindings/regulator/regulator.txt
 
 CODEC output pins:
   * HPL
+3 -3
Documentation/timers/timer_stats.txt
···
 The statistics can be retrieved by:
 # cat /proc/timer_stats
 
-The readout of /proc/timer_stats automatically disables sampling. The sampled
-information is kept until a new sample period is started. This allows multiple
-readouts.
+While sampling is enabled, each readout from /proc/timer_stats will see
+newly updated statistics. Once sampling is disabled, the sampled information
+is kept until a new sample period is started. This allows multiple readouts.
 
 Sample output of /proc/timer_stats:
 
+11
MAINTAINERS
···
 F:	drivers/extcon/
 F:	Documentation/extcon/
 
+EXYNOS DP DRIVER
+M:	Jingoo Han <jg1.han@samsung.com>
+L:	dri-devel@lists.freedesktop.org
+S:	Maintained
+F:	drivers/gpu/drm/exynos/exynos_dp*
+
 EXYNOS MIPI DISPLAY DRIVERS
 M:	Inki Dae <inki.dae@samsung.com>
 M:	Donghwa Lee <dh09.lee@samsung.com>
···
 
 KERNEL VIRTUAL MACHINE (KVM) FOR ARM
 M:	Christoffer Dall <christoffer.dall@linaro.org>
+M:	Marc Zyngier <marc.zyngier@arm.com>
+L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 L:	kvmarm@lists.cs.columbia.edu
 W:	http://systems.cs.columbia.edu/projects/kvm-arm
 S:	Supported
 F:	arch/arm/include/uapi/asm/kvm*
 F:	arch/arm/include/asm/kvm*
 F:	arch/arm/kvm/
+F:	virt/kvm/arm/
+F:	include/kvm/arm_*
 
 KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)
+M:	Christoffer Dall <christoffer.dall@linaro.org>
 M:	Marc Zyngier <marc.zyngier@arm.com>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 L:	kvmarm@lists.cs.columbia.edu
···
 
 resume_kernel_mode:
 
-#ifdef CONFIG_PREEMPT
-
-	; This is a must for preempt_schedule_irq()
+	; Disable Interrupts from this point on
+	; CONFIG_PREEMPT: This is a must for preempt_schedule_irq()
+	; !CONFIG_PREEMPT: To ensure restore_regs is intr safe
 	IRQ_DISABLE	r9
+
+#ifdef CONFIG_PREEMPT
 
 	; Can't preempt if preemption disabled
 	GET_CURR_THR_INFO_FROM_SP   r10
+9 -8
arch/arm/Kconfig
···
 	select HAVE_ARCH_SECCOMP_FILTER if (AEABI && !OABI_COMPAT)
 	select HAVE_ARCH_TRACEHOOK
 	select HAVE_BPF_JIT
+	select HAVE_CC_STACKPROTECTOR
 	select HAVE_CONTEXT_TRACKING
 	select HAVE_C_RECORDMCOUNT
-	select HAVE_CC_STACKPROTECTOR
 	select HAVE_DEBUG_KMEMLEAK
 	select HAVE_DMA_API_DEBUG
 	select HAVE_DMA_ATTRS
···
 	select ARM_HAS_SG_CHAIN
 	select ARM_PATCH_PHYS_VIRT
 	select AUTO_ZRELADDR
+	select CLKSRC_OF
 	select COMMON_CLK
 	select GENERIC_CLOCKEVENTS
 	select MULTI_IRQ_HANDLER
···
 	bool "Energy Micro efm32"
 	depends on !MMU
 	select ARCH_REQUIRE_GPIOLIB
-	select AUTO_ZRELADDR
 	select ARM_NVIC
+	select AUTO_ZRELADDR
 	select CLKSRC_OF
 	select COMMON_CLK
 	select CPU_V7M
···
 	bool "IXP4xx-based"
 	depends on MMU
 	select ARCH_HAS_DMA_SET_COHERENT_MASK
-	select ARCH_SUPPORTS_BIG_ENDIAN
 	select ARCH_REQUIRE_GPIOLIB
+	select ARCH_SUPPORTS_BIG_ENDIAN
 	select CLKSRC_MMIO
 	select CPU_XSCALE
 	select DMABOUNCE if PCI
···
 	default 8
 
 config IWMMXT
-	bool "Enable iWMMXt support" if !CPU_PJ4
-	depends on CPU_XSCALE || CPU_XSC3 || CPU_MOHAWK || CPU_PJ4
-	default y if PXA27x || PXA3xx || ARCH_MMP || CPU_PJ4
+	bool "Enable iWMMXt support"
+	depends on CPU_XSCALE || CPU_XSC3 || CPU_MOHAWK || CPU_PJ4 || CPU_PJ4B
+	default y if PXA27x || PXA3xx || ARCH_MMP || CPU_PJ4 || CPU_PJ4B
 	help
 	  Enable support for iWMMXt context switching at run time if
 	  running on a CPU that supports it.
···
 config BL_SWITCHER
 	bool "big.LITTLE switcher support"
 	depends on BIG_LITTLE && MCPM && HOTPLUG_CPU
-	select CPU_PM
 	select ARM_CPU_SUSPEND
+	select CPU_PM
 	help
 	  The big.LITTLE "switcher" provides the core functionality to
 	  transparently handle transition between a cluster of A15's
···
 	depends on CPU_V7 && !CPU_V6
 	depends on !GENERIC_ATOMIC64
 	depends on MMU
+	select ARCH_DMA_ADDR_T_64BIT
 	select ARM_PSCI
 	select SWIOTLB_XEN
-	select ARCH_DMA_ADDR_T_64BIT
 	help
 	  Say Y if you want to run Linux in a Virtual Machine on Xen on ARM.
 
+6 -6
arch/arm/Kconfig.debug
···
 	default 0x40100000 if DEBUG_PXA_UART1
 	default 0x42000000 if ARCH_GEMINI
 	default 0x7c0003f8 if FOOTBRIDGE
-	default 0x80230000 if DEBUG_PICOXCELL_UART
 	default 0x80070000 if DEBUG_IMX23_UART
 	default 0x80074000 if DEBUG_IMX28_UART
+	default 0x80230000 if DEBUG_PICOXCELL_UART
 	default 0x808c0000 if ARCH_EP93XX
 	default 0x90020000 if DEBUG_NSPIRE_CLASSIC_UART || DEBUG_NSPIRE_CX_UART
 	default 0xb0090000 if DEBUG_VEXPRESS_UART0_CRX
···
 	default 0xfeb26000 if DEBUG_RK3X_UART1
 	default 0xfeb30c00 if DEBUG_KEYSTONE_UART0
 	default 0xfeb31000 if DEBUG_KEYSTONE_UART1
-	default 0xfec12000 if DEBUG_MVEBU_UART || DEBUG_MVEBU_UART_ALTERNATE
-	default 0xfed60000 if DEBUG_RK29_UART0
-	default 0xfed64000 if DEBUG_RK29_UART1 || DEBUG_RK3X_UART2
-	default 0xfed68000 if DEBUG_RK29_UART2 || DEBUG_RK3X_UART3
 	default 0xfec02000 if DEBUG_SOCFPGA_UART
+	default 0xfec12000 if DEBUG_MVEBU_UART || DEBUG_MVEBU_UART_ALTERNATE
 	default 0xfec20000 if DEBUG_DAVINCI_DMx_UART0
 	default 0xfed0c000 if DEBUG_DAVINCI_DA8XX_UART1
 	default 0xfed0d000 if DEBUG_DAVINCI_DA8XX_UART2
 	default 0xfed12000 if ARCH_KIRKWOOD
+	default 0xfed60000 if DEBUG_RK29_UART0
+	default 0xfed64000 if DEBUG_RK29_UART1 || DEBUG_RK3X_UART2
+	default 0xfed68000 if DEBUG_RK29_UART2 || DEBUG_RK3X_UART3
 	default 0xfedc0000 if ARCH_EP93XX
 	default 0xfee003f8 if FOOTBRIDGE
 	default 0xfee20000 if DEBUG_NSPIRE_CLASSIC_UART || DEBUG_NSPIRE_CX_UART
-	default 0xfef36000 if DEBUG_HIGHBANK_UART
 	default 0xfee82340 if ARCH_IOP13XX
 	default 0xfef00000 if ARCH_IXP4XX && !CPU_BIG_ENDIAN
 	default 0xfef00003 if ARCH_IXP4XX && CPU_BIG_ENDIAN
+	default 0xfef36000 if DEBUG_HIGHBANK_UART
 	default 0xfefff700 if ARCH_IOP33X
 	default 0xff003000 if DEBUG_U300_UART
 	default DEBUG_UART_PHYS if !MMU
···
 	};
 
 	/*
-	 * The soc node represents the soc top level view. It is uses for IPs
+	 * The soc node represents the soc top level view. It is used for IPs
 	 * that are not memory mapped in the MPU view or for the MPU itself.
 	 */
 	soc {
···
 
 	/*
 	 * XXX: Use a flat representation of the AM33XX interconnect.
-	 * The real AM33XX interconnect network is quite complex.Since
-	 * that will not bring real advantage to represent that in DT
+	 * The real AM33XX interconnect network is quite complex. Since
+	 * it will not bring real advantage to represent that in DT
 	 * for the moment, just use a fake OCP bus entry to represent
 	 * the whole bus hierarchy.
 	 */
···
 			      <0x46000000 0x400000>;
 			reg-names = "mpu", "dat";
 			interrupts = <80>, <81>;
-			interrupts-names = "tx", "rx";
+			interrupt-names = "tx", "rx";
 			status = "disabled";
 			dmas = <&edma 8>,
 			       <&edma 9>;
···
 			      <0x46400000 0x400000>;
 			reg-names = "mpu", "dat";
 			interrupts = <82>, <83>;
-			interrupts-names = "tx", "rx";
+			interrupt-names = "tx", "rx";
 			status = "disabled";
 			dmas = <&edma 10>,
 			       <&edma 11>;
···
 	};
 
 	/*
-	 * The soc node represents the soc top level view. It is uses for IPs
+	 * The soc node represents the soc top level view. It is used for IPs
 	 * that are not memory mapped in the MPU view or for the MPU itself.
 	 */
 	soc {
···
 	/*
 	 * XXX: Use a flat representation of the SOC interconnect.
 	 * The real OMAP interconnect network is quite complex.
-	 * Since that will not bring real advantage to represent that in DT for
+	 * Since it will not bring real advantage to represent that in DT for
 	 * the moment, just use a fake OCP bus entry to represent the whole bus
 	 * hierarchy.
 	 */
···
+/*
+ * Copyright (C) 2011 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include "omap3-beagle-xm.dts"
+
+/ {
+	/* HS USB Port 2 Power enable was inverted with the xM C */
+	hsusb2_power: hsusb2_power_reg {
+		enable-active-high;
+	};
+};
···
 			/* no elm on omap3 */
 
 			gpmc,mux-add-data = <0>;
-			gpmc,device-nand;
 			gpmc,device-width = <2>;
 			gpmc,wait-pin = <0>;
 			gpmc,wait-monitoring-ns = <0>;
+1 -1
arch/arm/boot/dts/omap3.dtsi
···
 	/*
 	 * XXX: Use a flat representation of the OMAP3 interconnect.
 	 * The real OMAP interconnect network is quite complex.
-	 * Since that will not bring real advantage to represent that in DT for
+	 * Since it will not bring real advantage to represent that in DT for
 	 * the moment, just use a fake OCP bus entry to represent the whole bus
 	 * hierarchy.
 	 */
+2 -2
arch/arm/boot/dts/omap4.dtsi
···
 	};
 
 	/*
-	 * The soc node represents the soc top level view. It is uses for IPs
+	 * The soc node represents the soc top level view. It is used for IPs
 	 * that are not memory mapped in the MPU view or for the MPU itself.
 	 */
 	soc {
···
 	/*
 	 * XXX: Use a flat representation of the OMAP4 interconnect.
 	 * The real OMAP interconnect network is quite complex.
-	 * Since that will not bring real advantage to represent that in DT for
+	 * Since it will not bring real advantage to represent that in DT for
 	 * the moment, just use a fake OCP bus entry to represent the whole bus
 	 * hierarchy.
 	 */
+8 -2
arch/arm/boot/dts/omap5.dtsi
···
 	};
 
 	/*
-	 * The soc node represents the soc top level view. It is uses for IPs
+	 * The soc node represents the soc top level view. It is used for IPs
 	 * that are not memory mapped in the MPU view or for the MPU itself.
 	 */
 	soc {
···
 	/*
 	 * XXX: Use a flat representation of the OMAP3 interconnect.
 	 * The real OMAP interconnect network is quite complex.
-	 * Since that will not bring real advantage to represent that in DT for
+	 * Since it will not bring real advantage to represent that in DT for
 	 * the moment, just use a fake OCP bus entry to represent the whole bus
 	 * hierarchy.
 	 */
···
 			      <0x4a084c00 0x40>;
 			reg-names = "phy_rx", "phy_tx", "pll_ctrl";
 			ctrl-module = <&omap_control_usb3phy>;
+			clocks = <&usb_phy_cm_clk32k>,
+				 <&sys_clkin>,
+				 <&usb_otg_ss_refclk960m>;
+			clock-names = "wkupclk",
+				      "sysclk",
+				      "refclk";
 			#phy-cells = <0>;
 		};
 	};
···
 {
 	int ret;
 
-	if (MAX_NR_CLUSTERS != 2) {
-		pr_err("%s: only dual cluster systems are supported\n", __func__);
-		return -EINVAL;
-	}
+	if (!mcpm_is_available())
+		return -ENODEV;
 
 	cpu_notifier(bL_switcher_hotplug_callback, 0);
 
+5
arch/arm/common/mcpm_entry.c
···
 	return 0;
 }
 
+bool mcpm_is_available(void)
+{
+	return (platform_ops) ? true : false;
+}
+
 int mcpm_cpu_power_up(unsigned int cpu, unsigned int cluster)
 {
 	if (!platform_ops)
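The guard added here, and used by the bL_switcher hunk above, is just a NULL check on the registered backend. A minimal sketch of the pattern in plain C follows; the struct name, the registration helper, the dummy ops, and `switcher_init` are hypothetical stand-ins for illustration, not the real MCPM API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in for the MCPM backend ops table. */
struct mcpm_ops {
	int (*power_up)(unsigned int cpu, unsigned int cluster);
};

static const struct mcpm_ops *platform_ops;  /* NULL until a backend registers */

/* Mirrors the new helper: MCPM is available once a backend has registered. */
static bool mcpm_is_available(void)
{
	return platform_ops != NULL;
}

static void mcpm_register(const struct mcpm_ops *ops)
{
	platform_ops = ops;
}

/* A caller like bL_switcher_init() can now fail cleanly with -ENODEV
 * instead of inferring support from unrelated constants. */
static int switcher_init(void)
{
	if (!mcpm_is_available())
		return -19;  /* -ENODEV */
	return 0;
}

static int dummy_power_up(unsigned int cpu, unsigned int cluster)
{
	(void)cpu; (void)cluster;
	return 0;
}

static const struct mcpm_ops dummy_ops = { dummy_power_up };
```

The design point of the patch is exactly this: callers probe the backend pointer through one helper rather than open-coding assumptions about the platform.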
···
 CONFIG_MODULE_UNLOAD=y
 # CONFIG_LBDAF is not set
 # CONFIG_BLK_DEV_BSG is not set
+CONFIG_PARTITION_ADVANCED=y
 # CONFIG_IOSCHED_CFQ is not set
 # CONFIG_ARCH_MULTI_V7 is not set
 CONFIG_ARCH_U300=y
···
 CONFIG_ZBOOT_ROM_BSS=0x0
 CONFIG_CMDLINE="root=/dev/ram0 rw rootfstype=rootfs console=ttyAMA0,115200n8 lpj=515072"
 CONFIG_CPU_IDLE=y
-CONFIG_FPE_NWFPE=y
 # CONFIG_SUSPEND is not set
 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 # CONFIG_PREVENT_FIRMWARE_BUILD is not set
···
 CONFIG_NLS_CODEPAGE_437=y
 CONFIG_NLS_ISO8859_1=y
 CONFIG_PRINTK_TIME=y
+CONFIG_DEBUG_INFO=y
 CONFIG_DEBUG_FS=y
 # CONFIG_SCHED_DEBUG is not set
 CONFIG_TIMER_STATS=y
 # CONFIG_DEBUG_PREEMPT is not set
-CONFIG_DEBUG_INFO=y
+15 -9
arch/arm/configs/u8500_defconfig
···
 # CONFIG_SWAP is not set
 CONFIG_SYSVIPC=y
-CONFIG_NO_HZ=y
+CONFIG_NO_HZ_IDLE=y
 CONFIG_HIGH_RES_TIMERS=y
 CONFIG_BLK_DEV_INITRD=y
 CONFIG_KALLSYMS_ALL=y
 CONFIG_MODULES=y
 CONFIG_MODULE_UNLOAD=y
 # CONFIG_BLK_DEV_BSG is not set
+CONFIG_PARTITION_ADVANCED=y
 CONFIG_ARCH_U8500=y
 CONFIG_MACH_HREFV60=y
 CONFIG_MACH_SNOWBALL=y
-CONFIG_MACH_UX500_DT=y
 CONFIG_SMP=y
 CONFIG_NR_CPUS=2
 CONFIG_PREEMPT=y
···
 CONFIG_IP_PNP_DHCP=y
 CONFIG_NETFILTER=y
 CONFIG_PHONET=y
-# CONFIG_WIRELESS is not set
+CONFIG_CFG80211=y
+CONFIG_CFG80211_DEBUGFS=y
+CONFIG_MAC80211=y
+CONFIG_MAC80211_LEDS=y
 CONFIG_CAIF=y
 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
+CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
 CONFIG_BLK_DEV_RAM=y
 CONFIG_BLK_DEV_RAM_SIZE=65536
 CONFIG_SENSORS_BH1780=y
 CONFIG_NETDEVICES=y
 CONFIG_SMSC911X=y
 CONFIG_SMSC_PHY=y
-# CONFIG_WLAN is not set
+CONFIG_CW1200=y
+CONFIG_CW1200_WLAN_SDIO=y
 # CONFIG_INPUT_MOUSEDEV_PSAUX is not set
 CONFIG_INPUT_EVDEV=y
 # CONFIG_KEYBOARD_ATKBD is not set
···
 CONFIG_USB_GADGET=y
 CONFIG_USB_ETH=m
 CONFIG_MMC=y
-CONFIG_MMC_UNSAFE_RESUME=y
-# CONFIG_MMC_BLOCK_BOUNCE is not set
 CONFIG_MMC_ARMMMCI=y
 CONFIG_NEW_LEDS=y
 CONFIG_LEDS_CLASS=y
 CONFIG_LEDS_LM3530=y
 CONFIG_LEDS_GPIO=y
 CONFIG_LEDS_LP5521=y
-CONFIG_LEDS_TRIGGERS=y
 CONFIG_LEDS_TRIGGER_HEARTBEAT=y
 CONFIG_RTC_CLASS=y
 CONFIG_RTC_DRV_AB8500=y
···
 CONFIG_STAGING=y
 CONFIG_TOUCHSCREEN_SYNAPTICS_I2C_RMI4=y
 CONFIG_HSEM_U8500=y
+CONFIG_IIO=y
+CONFIG_IIO_ST_ACCEL_3AXIS=y
+CONFIG_IIO_ST_GYRO_3AXIS=y
+CONFIG_IIO_ST_MAGN_3AXIS=y
+CONFIG_IIO_ST_PRESS=y
 CONFIG_EXT2_FS=y
 CONFIG_EXT2_FS_XATTR=y
 CONFIG_EXT2_FS_POSIX_ACL=y
···
 CONFIG_EXT3_FS=y
 CONFIG_EXT4_FS=y
 CONFIG_VFAT_FS=y
-CONFIG_DEVTMPFS=y
-CONFIG_DEVTMPFS_MOUNT=y
 CONFIG_TMPFS=y
 CONFIG_TMPFS_POSIX_ACL=y
 # CONFIG_MISC_FILESYSTEMS is not set
+7 -7
arch/arm/include/asm/cputype.h
···
 #endif
 
 /*
- * Marvell's PJ4 core is based on V7 version. It has some modification
- * for coprocessor setting. For this reason, we need a way to distinguish
- * it.
+ * Marvell's PJ4 and PJ4B cores are based on V7 version,
+ * but require a special sequence for enabling coprocessors.
+ * For this reason, we need a way to distinguish them.
 */
-#ifndef CONFIG_CPU_PJ4
-#define cpu_is_pj4()	0
-#else
+#if defined(CONFIG_CPU_PJ4) || defined(CONFIG_CPU_PJ4B)
 static inline int cpu_is_pj4(void)
 {
 	unsigned int id;
 
 	id = read_cpuid_id();
-	if ((id & 0xfffffff0) == 0x562f5840)
+	if ((id & 0xff0fff00) == 0x560f5800)
 		return 1;
 
 	return 0;
 }
+#else
+#define cpu_is_pj4()	0
 #endif
 #endif
+1 -1
arch/arm/include/asm/div64.h
···
 		/* Select the best insn combination to perform the   */	\
 		/* actual __m * __n / (__p << 64) operation.         */	\
 		if (!__c) {						\
-			asm (	"umull	%Q0, %R0, %1, %Q2\n\t"		\
+			asm (	"umull	%Q0, %R0, %Q1, %Q2\n\t"		\
 				"mov	%Q0, #0"			\
 				: "=&r" (__res)				\
 				: "r" (__m), "r" (__n)			\
+7
arch/arm/include/asm/mcpm.h
···
  */
 
 /**
+ * mcpm_is_available - returns whether MCPM is initialized and available
+ *
+ * This returns true or false accordingly.
+ */
+bool mcpm_is_available(void);
+
+/**
  * mcpm_cpu_power_up - make given CPU in given cluster runable
  *
  * @cpu: CPU number within given cluster
···
 #define __NR_finit_module		(__NR_SYSCALL_BASE+379)
 #define __NR_sched_setattr		(__NR_SYSCALL_BASE+380)
 #define __NR_sched_getattr		(__NR_SYSCALL_BASE+381)
+#define __NR_renameat2			(__NR_SYSCALL_BASE+382)
 
 /*
  * This may need to be greater than __NR_last_syscall+1 in order to
···
 	return NOTIFY_DONE;
 }
 
-static struct notifier_block iwmmxt_notifier_block = {
+static struct notifier_block __maybe_unused iwmmxt_notifier_block = {
 	.notifier_call	= iwmmxt_do,
 };
 
···
 			: "=r" (temp) : "r" (value));
 }
 
+static int __init pj4_get_iwmmxt_version(void)
+{
+	u32 cp_access, wcid;
+
+	cp_access = pj4_cp_access_read();
+	pj4_cp_access_write(cp_access | 0xf);
+
+	/* check if coprocessor 0 and 1 are available */
+	if ((pj4_cp_access_read() & 0xf) != 0xf) {
+		pj4_cp_access_write(cp_access);
+		return -ENODEV;
+	}
+
+	/* read iWMMXt coprocessor id register p1, c0 */
+	__asm__ __volatile__ ("mrc	p1, 0, %0, c0, c0, 0\n" : "=r" (wcid));
+
+	pj4_cp_access_write(cp_access);
+
+	/* iWMMXt v1 */
+	if ((wcid & 0xffffff00) == 0x56051000)
+		return 1;
+	/* iWMMXt v2 */
+	if ((wcid & 0xffffff00) == 0x56052000)
+		return 2;
+
+	return -EINVAL;
+}
 
 /*
  * Disable CP0/CP1 on boot, and let call_fpe() and the iWMMXt lazy
···
  */
 static int __init pj4_cp0_init(void)
 {
-	u32 cp_access;
+	u32 __maybe_unused cp_access;
+	int vers;
 
 	if (!cpu_is_pj4())
 		return 0;
 
+	vers = pj4_get_iwmmxt_version();
+	if (vers < 0)
+		return 0;
+
+#ifndef CONFIG_IWMMXT
+	pr_info("PJ4 iWMMXt coprocessor detected, but kernel support is missing.\n");
+#else
 	cp_access = pj4_cp_access_read() & ~0xf;
 	pj4_cp_access_write(cp_access);
 
-	printk(KERN_INFO "PJ4 iWMMXt coprocessor enabled.\n");
+	pr_info("PJ4 iWMMXt v%d coprocessor enabled.\n", vers);
 	elf_hwcap |= HWCAP_IWMMXT;
 	thread_register_notifier(&iwmmxt_notifier_block);
+#endif
 
 	return 0;
 }
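The version decode in the new pj4_get_iwmmxt_version() depends only on the top 24 bits of the coprocessor ID register, so it can be exercised on its own with the two constants from the hunk. In the sketch below, the sample wCID values in the test vary only in the masked-off low byte and are otherwise made up:

```c
#include <assert.h>
#include <stdint.h>

/* Same decode as pj4_get_iwmmxt_version(): the wCID's upper 24 bits
 * select iWMMXt v1 vs v2; anything else is rejected. */
static int iwmmxt_version(uint32_t wcid)
{
	if ((wcid & 0xffffff00) == 0x56051000)
		return 1;
	if ((wcid & 0xffffff00) == 0x56052000)
		return 2;
	return -22;  /* -EINVAL */
}
```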
+3 -3
arch/arm/kernel/sys_oabi-compat.c
···
 	int ret;
 
 	switch (cmd) {
-	case F_GETLKP:
-	case F_SETLKP:
-	case F_SETLKPW:
+	case F_OFD_GETLK:
+	case F_OFD_SETLK:
+	case F_OFD_SETLKW:
 	case F_GETLK64:
 	case F_SETLK64:
 	case F_SETLKW64:
+1 -1
arch/arm/kvm/Kconfig
···
 	select HAVE_KVM_CPU_RELAX_INTERCEPT
 	select KVM_MMIO
 	select KVM_ARM_HOST
-	depends on ARM_VIRT_EXT && ARM_LPAE
+	depends on ARM_VIRT_EXT && ARM_LPAE && !CPU_BIG_ENDIAN
 	---help---
 	  Support hosting virtualized guest machines. You will also
 	  need to select one or more of the processor modules below.
···
 	 * the "output_enable" bit as a gate, even though it's really just
 	 * enabling clock output.
 	 */
-	clk[lvds1_gate] = imx_clk_gate("lvds1_gate", "dummy", base + 0x160, 10);
-	clk[lvds2_gate] = imx_clk_gate("lvds2_gate", "dummy", base + 0x160, 11);
+	clk[lvds1_gate] = imx_clk_gate("lvds1_gate", "lvds1_sel", base + 0x160, 10);
+	clk[lvds2_gate] = imx_clk_gate("lvds2_gate", "lvds2_sel", base + 0x160, 11);
 
 	/*                                name              parent_name        reg       idx */
 	clk[pll2_pfd0_352m] = imx_clk_pfd("pll2_pfd0_352m", "pll2_bus",     base + 0x100, 0);
···
 	clk[ipu2_sel]         = imx_clk_mux("ipu2_sel",         base + 0x3c, 14, 2, ipu_sels,          ARRAY_SIZE(ipu_sels));
 	clk[ldb_di0_sel]      = imx_clk_mux_flags("ldb_di0_sel", base + 0x2c, 9,  3, ldb_di_sels,      ARRAY_SIZE(ldb_di_sels), CLK_SET_RATE_PARENT);
 	clk[ldb_di1_sel]      = imx_clk_mux_flags("ldb_di1_sel", base + 0x2c, 12, 3, ldb_di_sels,      ARRAY_SIZE(ldb_di_sels), CLK_SET_RATE_PARENT);
-	clk[ipu1_di0_pre_sel] = imx_clk_mux("ipu1_di0_pre_sel", base + 0x34, 6,  3, ipu_di_pre_sels,   ARRAY_SIZE(ipu_di_pre_sels));
-	clk[ipu1_di1_pre_sel] = imx_clk_mux("ipu1_di1_pre_sel", base + 0x34, 15, 3, ipu_di_pre_sels,   ARRAY_SIZE(ipu_di_pre_sels));
-	clk[ipu2_di0_pre_sel] = imx_clk_mux("ipu2_di0_pre_sel", base + 0x38, 6,  3, ipu_di_pre_sels,   ARRAY_SIZE(ipu_di_pre_sels));
-	clk[ipu2_di1_pre_sel] = imx_clk_mux("ipu2_di1_pre_sel", base + 0x38, 15, 3, ipu_di_pre_sels,   ARRAY_SIZE(ipu_di_pre_sels));
-	clk[ipu1_di0_sel]     = imx_clk_mux("ipu1_di0_sel",     base + 0x34, 0,  3, ipu1_di0_sels,     ARRAY_SIZE(ipu1_di0_sels));
-	clk[ipu1_di1_sel]     = imx_clk_mux("ipu1_di1_sel",     base + 0x34, 9,  3, ipu1_di1_sels,     ARRAY_SIZE(ipu1_di1_sels));
-	clk[ipu2_di0_sel]     = imx_clk_mux("ipu2_di0_sel",     base + 0x38, 0,  3, ipu2_di0_sels,     ARRAY_SIZE(ipu2_di0_sels));
-	clk[ipu2_di1_sel]     = imx_clk_mux("ipu2_di1_sel",     base + 0x38, 9,  3, ipu2_di1_sels,     ARRAY_SIZE(ipu2_di1_sels));
+	clk[ipu1_di0_pre_sel] = imx_clk_mux_flags("ipu1_di0_pre_sel", base + 0x34, 6,  3, ipu_di_pre_sels, ARRAY_SIZE(ipu_di_pre_sels), CLK_SET_RATE_PARENT);
+	clk[ipu1_di1_pre_sel] = imx_clk_mux_flags("ipu1_di1_pre_sel", base + 0x34, 15, 3, ipu_di_pre_sels, ARRAY_SIZE(ipu_di_pre_sels), CLK_SET_RATE_PARENT);
+	clk[ipu2_di0_pre_sel] = imx_clk_mux_flags("ipu2_di0_pre_sel", base + 0x38, 6,  3, ipu_di_pre_sels, ARRAY_SIZE(ipu_di_pre_sels), CLK_SET_RATE_PARENT);
+	clk[ipu2_di1_pre_sel] = imx_clk_mux_flags("ipu2_di1_pre_sel", base + 0x38, 15, 3, ipu_di_pre_sels, ARRAY_SIZE(ipu_di_pre_sels), CLK_SET_RATE_PARENT);
+	clk[ipu1_di0_sel]     = imx_clk_mux_flags("ipu1_di0_sel",     base + 0x34, 0,  3, ipu1_di0_sels,   ARRAY_SIZE(ipu1_di0_sels),   CLK_SET_RATE_PARENT);
+	clk[ipu1_di1_sel]     = imx_clk_mux_flags("ipu1_di1_sel",     base + 0x34, 9,  3, ipu1_di1_sels,   ARRAY_SIZE(ipu1_di1_sels),   CLK_SET_RATE_PARENT);
+	clk[ipu2_di0_sel]     = imx_clk_mux_flags("ipu2_di0_sel",     base + 0x38, 0,  3, ipu2_di0_sels,   ARRAY_SIZE(ipu2_di0_sels),   CLK_SET_RATE_PARENT);
+	clk[ipu2_di1_sel]     = imx_clk_mux_flags("ipu2_di1_sel",     base + 0x38, 9,  3, ipu2_di1_sels,   ARRAY_SIZE(ipu2_di1_sels),   CLK_SET_RATE_PARENT);
 	clk[hsi_tx_sel]       = imx_clk_mux("hsi_tx_sel",       base + 0x30, 28, 1, hsi_tx_sels,       ARRAY_SIZE(hsi_tx_sels));
 	clk[pcie_axi_sel]     = imx_clk_mux("pcie_axi_sel",     base + 0x18, 10, 1, pcie_axi_sels,     ARRAY_SIZE(pcie_axi_sels));
 	clk[ssi1_sel]         = imx_clk_fixup_mux("ssi1_sel",   base + 0x1c, 10, 2, ssi_sels,          ARRAY_SIZE(ssi_sels), imx_cscmr1_fixup);
···
 		clk_set_parent(clk[ldb_di0_sel], clk[pll5_video_div]);
 		clk_set_parent(clk[ldb_di1_sel], clk[pll5_video_div]);
 	}
+
+	clk_set_parent(clk[ipu1_di0_pre_sel], clk[pll5_video_div]);
+	clk_set_parent(clk[ipu1_di1_pre_sel], clk[pll5_video_div]);
+	clk_set_parent(clk[ipu2_di0_pre_sel], clk[pll5_video_div]);
+	clk_set_parent(clk[ipu2_di1_pre_sel], clk[pll5_video_div]);
+	clk_set_parent(clk[ipu1_di0_sel], clk[ipu1_di0_pre]);
+	clk_set_parent(clk[ipu1_di1_sel], clk[ipu1_di1_pre]);
+	clk_set_parent(clk[ipu2_di0_sel], clk[ipu2_di0_pre]);
+	clk_set_parent(clk[ipu2_di1_sel], clk[ipu2_di1_pre]);
 
 	/*
 	 * The gpmi needs 100MHz frequency in the EDO/Sync mode,
+1 -1
arch/arm/mach-omap2/board-rx51-video.c
···
 
 static int __init rx51_video_init(void)
 {
-	if (!machine_is_nokia_rx51() && !of_machine_is_compatible("nokia,omap3-n900"))
+	if (!machine_is_nokia_rx51())
 		return 0;
 
 	if (omap_mux_init_gpio(RX51_LCD_RESET_GPIO, OMAP_PIN_OUTPUT)) {
+2 -2
arch/arm/mach-omap2/clkt_dpll.c
···
 		if (v == OMAP3XXX_EN_DPLL_LPBYPASS ||
 		    v == OMAP3XXX_EN_DPLL_FRBYPASS)
 			return 1;
-	} else if (soc_is_am33xx() || cpu_is_omap44xx()) {
+	} else if (soc_is_am33xx() || cpu_is_omap44xx() || soc_is_am43xx()) {
 		if (v == OMAP4XXX_EN_DPLL_LPBYPASS ||
 		    v == OMAP4XXX_EN_DPLL_FRBYPASS ||
 		    v == OMAP4XXX_EN_DPLL_MNBYPASS)
···
 		if (v == OMAP3XXX_EN_DPLL_LPBYPASS ||
 		    v == OMAP3XXX_EN_DPLL_FRBYPASS)
 			return __clk_get_rate(dd->clk_bypass);
-	} else if (soc_is_am33xx() || cpu_is_omap44xx()) {
+	} else if (soc_is_am33xx() || cpu_is_omap44xx() || soc_is_am43xx()) {
 		if (v == OMAP4XXX_EN_DPLL_LPBYPASS ||
 		    v == OMAP4XXX_EN_DPLL_FRBYPASS ||
 		    v == OMAP4XXX_EN_DPLL_MNBYPASS)
+13 -2
arch/arm/mach-omap2/gpmc.c
···
 	int r;
 
 	spin_lock(&gpmc_mem_lock);
-	r = release_resource(&gpmc_cs_mem[cs]);
+	r = release_resource(res);
 	res->start = 0;
 	res->end = 0;
 	spin_unlock(&gpmc_mem_lock);
···
 		pr_err("%s: requested chip-select is disabled\n", __func__);
 		return -ENODEV;
 	}
+
+	/*
+	 * Make sure we ignore any device offsets from the GPMC partition
+	 * allocated for the chip select and that the new base conforms
+	 * to the GPMC 16MB minimum granularity.
+	 */
+	base &= ~(SZ_16M - 1);
+
 	gpmc_cs_get_memconf(cs, &old_base, &size);
 	if (base == old_base)
 		return 0;
···
 
 void gpmc_cs_free(int cs)
 {
+	struct resource *res = &gpmc_cs_mem[cs];
+
 	spin_lock(&gpmc_mem_lock);
 	if (cs >= gpmc_cs_num || cs < 0 || !gpmc_cs_reserved(cs)) {
 		printk(KERN_ERR "Trying to free non-reserved GPMC CS%d\n", cs);
···
 		return;
 	}
 	gpmc_cs_disable_mem(cs);
-	release_resource(&gpmc_cs_mem[cs]);
+	if (res->flags)
+		release_resource(res);
 	gpmc_cs_set_reserved(cs, 0);
 	spin_unlock(&gpmc_mem_lock);
 }
···
 static int clockevent_next_event(unsigned long evt,
 				 struct clock_event_device *clk_event_dev);
 
-static void spear_clocksource_init(void)
+static void __init spear_clocksource_init(void)
 {
 	u32 tick_rate;
 	u16 val;
-3
arch/arm/mach-tegra/Kconfig
···
 	  which controls AHB bus master arbitration and some
 	  performance parameters(priority, prefech size).
 
-config TEGRA_EMC_SCALING_ENABLE
-	bool "Enable scaling the memory frequency"
-
 endmenu
+5 -2
arch/arm/mach-vexpress/dcscb.c
···
 static int dcscb_power_up(unsigned int cpu, unsigned int cluster)
 {
 	unsigned int rst_hold, cpumask = (1 << cpu);
-	unsigned int all_mask = dcscb_allcpus_mask[cluster];
+	unsigned int all_mask;
 
 	pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster);
 	if (cpu >= 4 || cluster >= 2)
 		return -EINVAL;
+
+	all_mask = dcscb_allcpus_mask[cluster];
 
 	/*
 	 * Since this is called with IRQs enabled, and no arch_spin_lock_irq
···
 	cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
 	cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
 	cpumask = (1 << cpu);
-	all_mask = dcscb_allcpus_mask[cluster];
 
 	pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster);
 	BUG_ON(cpu >= 4 || cluster >= 2);
+
+	all_mask = dcscb_allcpus_mask[cluster];
 
 	__mcpm_cpu_going_down(cpu, cluster);
 
···
-/*
- * Memory barrier definitions for the Hexagon architecture
- *
- * Copyright (c) 2010-2011, The Linux Foundation. All rights reserved.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 and
- * only version 2 as published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
- * 02110-1301, USA.
- */
-
-#ifndef _ASM_BARRIER_H
-#define _ASM_BARRIER_H
-
-#define rmb()				barrier()
-#define read_barrier_depends()		barrier()
-#define wmb()				barrier()
-#define mb()				barrier()
-#define smp_rmb()			barrier()
-#define smp_read_barrier_depends()	barrier()
-#define smp_wmb()			barrier()
-#define smp_mb()			barrier()
-
-/* Set a value and use a memory barrier.  Used by the scheduler somewhere. */
-#define set_mb(var, value) \
-	do { var = value; mb(); } while (0)
-
-#endif /* _ASM_BARRIER_H */
+32 -10
arch/ia64/include/asm/tlb.h
···
 #define RR_RID_MASK	0x00000000ffffff00L
 #define RR_TO_RID(val)	((val >> 8) & 0xffffff)
 
-/*
- * Flush the TLB for address range START to END and, if not in fast mode, release the
- * freed pages that where gathered up to this point.
- */
 static inline void
-ia64_tlb_flush_mmu (struct mmu_gather *tlb, unsigned long start, unsigned long end)
+ia64_tlb_flush_mmu_tlbonly(struct mmu_gather *tlb, unsigned long start, unsigned long end)
 {
-	unsigned long i;
-	unsigned int nr;
-
-	if (!tlb->need_flush)
-		return;
 	tlb->need_flush = 0;
 
 	if (tlb->fullmm) {
···
 		flush_tlb_range(&vma, ia64_thash(start), ia64_thash(end));
 	}
 
+}
+
+static inline void
+ia64_tlb_flush_mmu_free(struct mmu_gather *tlb)
+{
+	unsigned long i;
+	unsigned int nr;
+
 	/* lastly, release the freed pages */
 	nr = tlb->nr;
 
···
 	tlb->start_addr = ~0UL;
 	for (i = 0; i < nr; ++i)
 		free_page_and_swap_cache(tlb->pages[i]);
+}
+
+/*
+ * Flush the TLB for address range START to END and, if not in fast mode, release the
+ * freed pages that where gathered up to this point.
+ */
+static inline void
+ia64_tlb_flush_mmu (struct mmu_gather *tlb, unsigned long start, unsigned long end)
+{
+	if (!tlb->need_flush)
+		return;
+	ia64_tlb_flush_mmu_tlbonly(tlb, start, end);
+	ia64_tlb_flush_mmu_free(tlb);
 }
 
 static inline void __tlb_alloc_page(struct mmu_gather *tlb)
···
 	VM_BUG_ON(tlb->nr > tlb->max);
 
 	return tlb->max - tlb->nr;
+}
+
+static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
+{
+	ia64_tlb_flush_mmu_tlbonly(tlb, tlb->start_addr, tlb->end_addr);
+}
+
+static inline void tlb_flush_mmu_free(struct mmu_gather *tlb)
+{
+	ia64_tlb_flush_mmu_free(tlb);
 }
 
 static inline void tlb_flush_mmu(struct mmu_gather *tlb)
···
  * edit the command line passed to vmlinux (by setting /chosen/bootargs).
  * The buffer is put in it's own section so that tools may locate it easier.
  */
-static char cmdline[COMMAND_LINE_SIZE]
+static char cmdline[BOOT_COMMAND_LINE_SIZE]
 	__attribute__((__section__("__builtin_cmdline")));
 
 static void prep_cmdline(void *chosen)
 {
 	if (cmdline[0] == '\0')
-		getprop(chosen, "bootargs", cmdline, COMMAND_LINE_SIZE-1);
+		getprop(chosen, "bootargs", cmdline, BOOT_COMMAND_LINE_SIZE-1);
 
 	printf("\n\rLinux/PowerPC load: %s", cmdline);
 	/* If possible, edit the command line */
 	if (console_ops.edit_cmdline)
-		console_ops.edit_cmdline(cmdline, COMMAND_LINE_SIZE);
+		console_ops.edit_cmdline(cmdline, BOOT_COMMAND_LINE_SIZE);
 	printf("\n\r");
 
 	/* Put the command line back into the devtree for the kernel */
···
 	 * built-in command line wasn't set by an external tool */
 	if ((loader_info.cmdline_len > 0) && (cmdline[0] == '\0'))
 		memmove(cmdline, loader_info.cmdline,
-			min(loader_info.cmdline_len, COMMAND_LINE_SIZE-1));
+			min(loader_info.cmdline_len, BOOT_COMMAND_LINE_SIZE-1));
 
 	if (console_ops.open && (console_ops.open() < 0))
 		exit();
+1-1
arch/powerpc/boot/ops.h
···1515#include "types.h"1616#include "string.h"17171818-#define COMMAND_LINE_SIZE 5121818+#define BOOT_COMMAND_LINE_SIZE 20481919#define MAX_PATH_LEN 2562020#define MAX_PROP_LEN 256 /* What should this be? */2121
+2-2
arch/powerpc/boot/ps3.c
···4747 * The buffer is put in it's own section so that tools may locate it easier.4848 */49495050-static char cmdline[COMMAND_LINE_SIZE]5050+static char cmdline[BOOT_COMMAND_LINE_SIZE]5151 __attribute__((__section__("__builtin_cmdline")));52525353static void prep_cmdline(void *chosen)5454{5555 if (cmdline[0] == '\0')5656- getprop(chosen, "bootargs", cmdline, COMMAND_LINE_SIZE-1);5656+ getprop(chosen, "bootargs", cmdline, BOOT_COMMAND_LINE_SIZE-1);5757 else5858 setprop_str(chosen, "bootargs", cmdline);5959
+19-23
arch/powerpc/include/asm/opal.h
···4141 * size except the last one in the list to be as well.4242 */4343struct opal_sg_entry {4444- void *data;4545- long length;4444+ __be64 data;4545+ __be64 length;4646};47474848-/* sg list */4848+/* SG list */4949struct opal_sg_list {5050- unsigned long num_entries;5151- struct opal_sg_list *next;5050+ __be64 length;5151+ __be64 next;5252 struct opal_sg_entry entry[];5353};5454···858858int64_t opal_lpc_read(uint32_t chip_id, enum OpalLPCAddressType addr_type,859859 uint32_t addr, __be32 *data, uint32_t sz);860860861861-int64_t opal_read_elog(uint64_t buffer, size_t size, uint64_t log_id);862862-int64_t opal_get_elog_size(uint64_t *log_id, size_t *size, uint64_t *elog_type);861861+int64_t opal_read_elog(uint64_t buffer, uint64_t size, uint64_t log_id);862862+int64_t opal_get_elog_size(__be64 *log_id, __be64 *size, __be64 *elog_type);863863int64_t opal_write_elog(uint64_t buffer, uint64_t size, uint64_t offset);864864int64_t opal_send_ack_elog(uint64_t log_id);865865void opal_resend_pending_logs(void);···868868int64_t opal_manage_flash(uint8_t op);869869int64_t opal_update_flash(uint64_t blk_list);870870int64_t opal_dump_init(uint8_t dump_type);871871-int64_t opal_dump_info(uint32_t *dump_id, uint32_t *dump_size);872872-int64_t opal_dump_info2(uint32_t *dump_id, uint32_t *dump_size, uint32_t *dump_type);871871+int64_t opal_dump_info(__be32 *dump_id, __be32 *dump_size);872872+int64_t opal_dump_info2(__be32 *dump_id, __be32 *dump_size, __be32 *dump_type);873873int64_t opal_dump_read(uint32_t dump_id, uint64_t buffer);874874int64_t opal_dump_ack(uint32_t dump_id);875875int64_t opal_dump_resend_notification(void);876876877877-int64_t opal_get_msg(uint64_t buffer, size_t size);878878-int64_t opal_check_completion(uint64_t buffer, size_t size, uint64_t token);877877+int64_t opal_get_msg(uint64_t buffer, uint64_t size);878878+int64_t opal_check_completion(uint64_t buffer, uint64_t size, uint64_t token);879879int64_t opal_sync_host_reboot(void);880880int64_t 
opal_get_param(uint64_t token, uint32_t param_id, uint64_t buffer,881881- size_t length);881881+ uint64_t length);882882int64_t opal_set_param(uint64_t token, uint32_t param_id, uint64_t buffer,883883- size_t length);883883+ uint64_t length);884884int64_t opal_sensor_read(uint32_t sensor_hndl, int token, __be32 *sensor_data);885885886886/* Internal functions */887887-extern int early_init_dt_scan_opal(unsigned long node, const char *uname, int depth, void *data);887887+extern int early_init_dt_scan_opal(unsigned long node, const char *uname,888888+ int depth, void *data);888889extern int early_init_dt_scan_recoverable_ranges(unsigned long node,889890 const char *uname, int depth, void *data);890891···893892extern int opal_put_chars(uint32_t vtermno, const char *buf, int total_len);894893895894extern void hvc_opal_init_early(void);896896-897897-/* Internal functions */898898-extern int early_init_dt_scan_opal(unsigned long node, const char *uname,899899- int depth, void *data);900895901896extern int opal_notifier_register(struct notifier_block *nb);902897extern int opal_notifier_unregister(struct notifier_block *nb);···903906extern void opal_notifier_disable(void);904907extern void opal_notifier_update_evt(uint64_t evt_mask, uint64_t evt_val);905908906906-extern int opal_get_chars(uint32_t vtermno, char *buf, int count);907907-extern int opal_put_chars(uint32_t vtermno, const char *buf, int total_len);908908-909909extern int __opal_async_get_token(void);910910extern int opal_async_get_token_interruptible(void);911911extern int __opal_async_release_token(int token);912912extern int opal_async_release_token(int token);913913extern int opal_async_wait_response(uint64_t token, struct opal_msg *msg);914914extern int opal_get_sensor_data(u32 sensor_hndl, u32 *sensor_data);915915-916916-extern void hvc_opal_init_early(void);917915918916struct rtc_time;919917extern int opal_set_rtc_time(struct rtc_time *tm);···928936extern int opal_resync_timebase(void);929937930938extern 
void opal_lpc_init(void);939939+940940+struct opal_sg_list *opal_vmalloc_to_sg_list(void *vmalloc_addr,941941+ unsigned long vmalloc_size);942942+void opal_free_sg_list(struct opal_sg_list *sg);931943932944#endif /* __ASSEMBLY__ */933945
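The opal.h hunks change the SG-list fields and several out-parameters from native types to explicit big-endian ones (`__be64`/`__be32`), since OPAL firmware expects big-endian data regardless of kernel endianness. A userspace sketch of what that round-trip means, with my own helper names standing in for the kernel's `cpu_to_be64()`/`be64_to_cpu()`:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Userspace stand-ins for cpu_to_be64()/be64_to_cpu(): build and read a
 * most-significant-byte-first byte pattern, independent of host order. */
static uint64_t to_be64(uint64_t v)
{
    uint8_t b[8];
    uint64_t out;
    int i;

    for (i = 0; i < 8; i++)
        b[i] = (uint8_t)(v >> (56 - 8 * i));  /* MSB stored first */
    memcpy(&out, b, 8);
    return out;
}

static uint64_t from_be64(uint64_t v)
{
    uint8_t b[8];
    uint64_t out = 0;
    int i;

    memcpy(b, &v, 8);
    for (i = 0; i < 8; i++)
        out = (out << 8) | b[i];
    return out;
}

/* A simplified big-endian SG entry, mirroring the patched opal_sg_entry. */
struct be_sg_entry {
    uint64_t data;    /* __be64 in the real header */
    uint64_t length;  /* __be64 in the real header */
};

static void fill_entry(struct be_sg_entry *e, uint64_t phys, uint64_t len)
{
    e->data = to_be64(phys);
    e->length = to_be64(len);
}
```

Making the endianness explicit in the types lets sparse catch places where a raw value is passed to firmware without conversion.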
···705705 if (rtas_token("ibm,update-flash-64-and-reboot") ==706706 RTAS_UNKNOWN_SERVICE) {707707 pr_info("rtas_flash: no firmware flash support\n");708708- return 1;708708+ return -EINVAL;709709 }710710711711 rtas_validate_flash_data.buf = kzalloc(VALIDATE_BUF_SIZE, GFP_KERNEL);
+17-1
arch/powerpc/kvm/book3s_hv_rmhandlers.S
···242242 */243243 .globl kvm_start_guest244244kvm_start_guest:245245+246246+ /* Set runlatch bit the minute you wake up from nap */247247+ mfspr r1, SPRN_CTRLF248248+ ori r1, r1, 1249249+ mtspr SPRN_CTRLT, r1250250+245251 ld r2,PACATOC(r13)246252247253 li r0,KVM_HWTHREAD_IN_KVM···315309 li r0, KVM_HWTHREAD_IN_NAP316310 stb r0, HSTATE_HWTHREAD_STATE(r13)317311kvm_do_nap:312312+ /* Clear the runlatch bit before napping */313313+ mfspr r2, SPRN_CTRLF314314+ clrrdi r2, r2, 1315315+ mtspr SPRN_CTRLT, r2316316+318317 li r3, LPCR_PECE0319318 mfspr r4, SPRN_LPCR320319 rlwimi r4, r3, 0, LPCR_PECE0 | LPCR_PECE1···2010199920112000 /*20122001 * Take a nap until a decrementer or external or doobell interrupt20132013- * occurs, with PECE1, PECE0 and PECEDP set in LPCR20022002+ * occurs, with PECE1, PECE0 and PECEDP set in LPCR. Also clear the20032003+ * runlatch bit before napping.20142004 */20052005+ mfspr r2, SPRN_CTRLF20062006+ clrrdi r2, r2, 120072007+ mtspr SPRN_CTRLT, r220082008+20152009 li r0,120162010 stb r0,HSTATE_HWTHREAD_REQ(r13)20172011 mfspr r5,SPRN_LPCR
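The nap/wake hunks above bracket the nap with CTRL register updates: `ori rX, rX, 1` sets the runlatch bit on wakeup and `clrrdi rX, rX, 1` clears it before napping (`clrrdi` clears the rightmost n bits). The bit arithmetic those two mnemonics perform, expressed in C (helper names are mine, not kernel API):

```c
#include <assert.h>
#include <stdint.h>

/* clrrdi rA, rS, n: clear the rightmost n bits (sketch assumes n < 64). */
static uint64_t clrrdi(uint64_t v, unsigned int n)
{
    return v & ~((1ULL << n) - 1);
}

/* ori rA, rS, imm: OR in a 16-bit immediate. */
static uint64_t ori(uint64_t v, uint16_t imm)
{
    return v | imm;
}
```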
+16-22
arch/powerpc/mm/hash_native_64.c
···8282 va &= ~((1ul << mmu_psize_defs[apsize].shift) - 1);8383 va |= penc << 12;8484 va |= ssize << 8;8585- /* Add AVAL part */8686- if (psize != apsize) {8787- /*8888- * MPSS, 64K base page size and 16MB parge page size8989- * We don't need all the bits, but rest of the bits9090- * must be ignored by the processor.9191- * vpn cover upto 65 bits of va. (0...65) and we need9292- * 58..64 bits of va.9393- */9494- va |= (vpn & 0xfe);9595- }8585+ /*8686+ * AVAL bits:8787+ * We don't need all the bits, but rest of the bits8888+ * must be ignored by the processor.8989+ * vpn cover upto 65 bits of va. (0...65) and we need9090+ * 58..64 bits of va.9191+ */9292+ va |= (vpn & 0xfe); /* AVAL */9693 va |= 1; /* L */9794 asm volatile(ASM_FTR_IFCLR("tlbie %0,1", PPC_TLBIE(%1,%0), %2)9895 : : "r" (va), "r"(0), "i" (CPU_FTR_ARCH_206)···130133 va &= ~((1ul << mmu_psize_defs[apsize].shift) - 1);131134 va |= penc << 12;132135 va |= ssize << 8;133133- /* Add AVAL part */134134- if (psize != apsize) {135135- /*136136- * MPSS, 64K base page size and 16MB parge page size137137- * We don't need all the bits, but rest of the bits138138- * must be ignored by the processor.139139- * vpn cover upto 65 bits of va. (0...65) and we need140140- * 58..64 bits of va.141141- */142142- va |= (vpn & 0xfe);143143- }136136+ /*137137+ * AVAL bits:138138+ * We don't need all the bits, but rest of the bits139139+ * must be ignored by the processor.140140+ * vpn cover upto 65 bits of va. (0...65) and we need141141+ * 58..64 bits of va.142142+ */143143+ va |= (vpn & 0xfe);144144 va |= 1; /* L */145145 asm volatile(".long 0x7c000224 | (%0 << 11) | (1 << 21)"146146 : : "r"(va) : "memory");
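The hash_native_64.c hunks make the AVAL bits unconditional instead of only supplying them when base and actual page size differ. The field packing of the `tlbie` "va" operand that both hunks build (penc at bit 12, ssize at bit 8, AVAL in bits 1..7, L at bit 0) can be sketched as plain bit arithmetic (a simplified model of the code above, not a complete encoding of the instruction operand):

```c
#include <assert.h>
#include <stdint.h>

/* Pack the tlbie "va" operand fields as the hunk above does. */
static uint64_t pack_tlbie_va(uint64_t va, uint64_t vpn,
                              unsigned int penc, unsigned int ssize,
                              unsigned int shift)
{
    va &= ~((1ULL << shift) - 1);  /* clear bits below the actual page size */
    va |= (uint64_t)penc << 12;
    va |= (uint64_t)ssize << 8;
    va |= vpn & 0xfe;              /* AVAL bits, now always supplied */
    va |= 1;                       /* L: large page */
    return va;
}
```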
+25-12
arch/powerpc/perf/hv-24x7.c
···155155 return copy_len;156156}157157158158-static unsigned long h_get_24x7_catalog_page(char page[static 4096],159159- u32 version, u32 index)158158+static unsigned long h_get_24x7_catalog_page_(unsigned long phys_4096,159159+ unsigned long version,160160+ unsigned long index)160161{161161- WARN_ON(!IS_ALIGNED((unsigned long)page, 4096));162162- return plpar_hcall_norets(H_GET_24X7_CATALOG_PAGE,163163- virt_to_phys(page),162162+ pr_devel("h_get_24x7_catalog_page(0x%lx, %lu, %lu)",163163+ phys_4096,164164 version,165165 index);166166+ WARN_ON(!IS_ALIGNED(phys_4096, 4096));167167+ return plpar_hcall_norets(H_GET_24X7_CATALOG_PAGE,168168+ phys_4096,169169+ version,170170+ index);171171+}172172+173173+static unsigned long h_get_24x7_catalog_page(char page[],174174+ u64 version, u32 index)175175+{176176+ return h_get_24x7_catalog_page_(virt_to_phys(page),177177+ version, index);166178}167179168180static ssize_t catalog_read(struct file *filp, struct kobject *kobj,···185173 ssize_t ret = 0;186174 size_t catalog_len = 0, catalog_page_len = 0, page_count = 0;187175 loff_t page_offset = 0;188188- uint32_t catalog_version_num = 0;176176+ uint64_t catalog_version_num = 0;189177 void *page = kmem_cache_alloc(hv_page_cache, GFP_USER);190178 struct hv_24x7_catalog_page_0 *page_0 = page;191179 if (!page)···197185 goto e_free;198186 }199187200200- catalog_version_num = be32_to_cpu(page_0->version);188188+ catalog_version_num = be64_to_cpu(page_0->version);201189 catalog_page_len = be32_to_cpu(page_0->length);202190 catalog_len = catalog_page_len * 4096;203191···220208 page, 4096, page_offset * 4096);221209e_free:222210 if (hret)223223- pr_err("h_get_24x7_catalog_page(ver=%d, page=%lld) failed: rc=%ld\n",224224- catalog_version_num, page_offset, hret);211211+ pr_err("h_get_24x7_catalog_page(ver=%lld, page=%lld) failed:"212212+ " rc=%ld\n",213213+ catalog_version_num, page_offset, hret);225214 kfree(page);226215227216 pr_devel("catalog_read: offset=%lld(%lld) count=%zu(%zu) 
catalog_len=%zu(%zu) => %zd\n",···256243static DEVICE_ATTR_RO(_name)257244258245PAGE_0_ATTR(catalog_version, "%lld\n",259259- (unsigned long long)be32_to_cpu(page_0->version));246246+ (unsigned long long)be64_to_cpu(page_0->version));260247PAGE_0_ATTR(catalog_len, "%lld\n",261248 (unsigned long long)be32_to_cpu(page_0->length) * 4096);262249static BIN_ATTR_RO(catalog, 0/* real length varies */);···498485 struct hv_perf_caps caps;499486500487 if (!firmware_has_feature(FW_FEATURE_LPAR)) {501501- pr_info("not a virtualized system, not enabling\n");488488+ pr_debug("not a virtualized system, not enabling\n");502489 return -ENODEV;503490 }504491505492 hret = hv_perf_caps_get(&caps);506493 if (hret) {507507- pr_info("could not obtain capabilities, error 0x%80lx, not enabling\n",494494+ pr_debug("could not obtain capabilities, not enabling, rc=%ld\n",508495 hret);509496 return -ENODEV;510497 }
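`catalog_read()` above fetches the hypervisor catalog one 4 KiB page at a time, so each read must stop at a page boundary and at the catalog's end. The offset arithmetic it relies on, as a hypothetical standalone helper (not the driver's code):

```c
#include <assert.h>
#include <stddef.h>

#define CATALOG_PAGE_SZ 4096UL

/* Given a byte offset/count into a catalog of catalog_len bytes, return how
 * many bytes of the page containing `offset` the caller may copy out. */
static size_t catalog_chunk(size_t catalog_len, size_t offset, size_t count)
{
    size_t in_page = offset % CATALOG_PAGE_SZ;
    size_t avail;

    if (offset >= catalog_len)
        return 0;                       /* past the end */
    avail = CATALOG_PAGE_SZ - in_page;  /* rest of this page */
    if (avail > catalog_len - offset)
        avail = catalog_len - offset;   /* clamp to catalog end */
    return avail < count ? avail : count;
}
```

The real driver additionally re-fetches page 0 each call to confirm the catalog version has not changed between pages.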
+3-3
arch/powerpc/perf/hv-gpci.c
···7878 return sprintf(page, "0x%x\n", COUNTER_INFO_VERSION_CURRENT);7979}80808181-DEVICE_ATTR_RO(kernel_version);8181+static DEVICE_ATTR_RO(kernel_version);8282HV_CAPS_ATTR(version, "0x%x\n");8383HV_CAPS_ATTR(ga, "%d\n");8484HV_CAPS_ATTR(expanded, "%d\n");···273273 struct hv_perf_caps caps;274274275275 if (!firmware_has_feature(FW_FEATURE_LPAR)) {276276- pr_info("not a virtualized system, not enabling\n");276276+ pr_debug("not a virtualized system, not enabling\n");277277 return -ENODEV;278278 }279279280280 hret = hv_perf_caps_get(&caps);281281 if (hret) {282282- pr_info("could not obtain capabilities, error 0x%80lx, not enabling\n",282282+ pr_debug("could not obtain capabilities, not enabling, rc=%ld\n",283283 hret);284284 return -ENODEV;285285 }
+11-83
arch/powerpc/platforms/powernv/opal-dump.c
···209209 .default_attrs = dump_default_attrs,210210};211211212212-static void free_dump_sg_list(struct opal_sg_list *list)212212+static int64_t dump_read_info(uint32_t *dump_id, uint32_t *dump_size, uint32_t *dump_type)213213{214214- struct opal_sg_list *sg1;215215- while (list) {216216- sg1 = list->next;217217- kfree(list);218218- list = sg1;219219- }220220- list = NULL;221221-}222222-223223-static struct opal_sg_list *dump_data_to_sglist(struct dump_obj *dump)224224-{225225- struct opal_sg_list *sg1, *list = NULL;226226- void *addr;227227- int64_t size;228228-229229- addr = dump->buffer;230230- size = dump->size;231231-232232- sg1 = kzalloc(PAGE_SIZE, GFP_KERNEL);233233- if (!sg1)234234- goto nomem;235235-236236- list = sg1;237237- sg1->num_entries = 0;238238- while (size > 0) {239239- /* Translate virtual address to physical address */240240- sg1->entry[sg1->num_entries].data =241241- (void *)(vmalloc_to_pfn(addr) << PAGE_SHIFT);242242-243243- if (size > PAGE_SIZE)244244- sg1->entry[sg1->num_entries].length = PAGE_SIZE;245245- else246246- sg1->entry[sg1->num_entries].length = size;247247-248248- sg1->num_entries++;249249- if (sg1->num_entries >= SG_ENTRIES_PER_NODE) {250250- sg1->next = kzalloc(PAGE_SIZE, GFP_KERNEL);251251- if (!sg1->next)252252- goto nomem;253253-254254- sg1 = sg1->next;255255- sg1->num_entries = 0;256256- }257257- addr += PAGE_SIZE;258258- size -= PAGE_SIZE;259259- }260260- return list;261261-262262-nomem:263263- pr_err("%s : Failed to allocate memory\n", __func__);264264- free_dump_sg_list(list);265265- return NULL;266266-}267267-268268-static void sglist_to_phy_addr(struct opal_sg_list *list)269269-{270270- struct opal_sg_list *sg, *next;271271-272272- for (sg = list; sg; sg = next) {273273- next = sg->next;274274- /* Don't translate NULL pointer for last entry */275275- if (sg->next)276276- sg->next = (struct opal_sg_list *)__pa(sg->next);277277- else278278- sg->next = NULL;279279-280280- /* Convert num_entries to length */281281- 
sg->num_entries =282282- sg->num_entries * sizeof(struct opal_sg_entry) + 16;283283- }284284-}285285-286286-static int64_t dump_read_info(uint32_t *id, uint32_t *size, uint32_t *type)287287-{214214+ __be32 id, size, type;288215 int rc;289289- *type = 0xffffffff;290216291291- rc = opal_dump_info2(id, size, type);217217+ type = cpu_to_be32(0xffffffff);292218219219+ rc = opal_dump_info2(&id, &size, &type);293220 if (rc == OPAL_PARAMETER)294294- rc = opal_dump_info(id, size);221221+ rc = opal_dump_info(&id, &size);222222+223223+ *dump_id = be32_to_cpu(id);224224+ *dump_size = be32_to_cpu(size);225225+ *dump_type = be32_to_cpu(type);295226296227 if (rc)297228 pr_warn("%s: Failed to get dump info (%d)\n",···245314 }246315247316 /* Generate SG list */248248- list = dump_data_to_sglist(dump);317317+ list = opal_vmalloc_to_sg_list(dump->buffer, dump->size);249318 if (!list) {250319 rc = -ENOMEM;251320 goto out;252321 }253253-254254- /* Translate sg list addr to real address */255255- sglist_to_phy_addr(list);256322257323 /* First entry address */258324 addr = __pa(list);···269341 __func__, dump->id);270342271343 /* Free SG list */272272- free_dump_sg_list(list);344344+ opal_free_sg_list(list);273345274346out:275347 return rc;
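The opal-dump hunk deletes the driver-local SG-list builder in favour of the shared `opal_vmalloc_to_sg_list()`. The chunk-and-chain shape of such a builder, sketched in userspace with an identity "physical" translation and invented names (the real helper translates vmalloc addresses page by page and stores big-endian fields):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define SG_PAGE       4096UL
#define SG_MAX_ENTRY  4        /* kept tiny so the test exercises chaining */

struct sg_entry { uint64_t data, length; };
struct sg_node  {
    uint64_t length;           /* bytes described by this node's entries */
    struct sg_node *next;
    struct sg_entry entry[SG_MAX_ENTRY];
    unsigned int nr;           /* helper field for the sketch only */
};

/* Chop [addr, addr+size) into page-sized SG entries, chaining a new node
 * when one fills up. Returns NULL on allocation failure (this sketch leaks
 * the partial list in that case; the real helper frees it). */
static struct sg_node *buf_to_sg_list(char *addr, size_t size)
{
    struct sg_node *head, *sg;

    head = sg = calloc(1, sizeof(*sg));
    if (!sg)
        return NULL;

    while (size > 0) {
        size_t chunk = size > SG_PAGE ? SG_PAGE : size;

        if (sg->nr == SG_MAX_ENTRY) {
            sg->next = calloc(1, sizeof(*sg));
            if (!sg->next)
                return NULL;
            sg = sg->next;
        }
        sg->entry[sg->nr].data = (uint64_t)(uintptr_t)addr; /* real code: phys addr */
        sg->entry[sg->nr].length = chunk;
        sg->length += chunk;
        sg->nr++;
        addr += chunk;
        size -= chunk;
    }
    return head;
}
```

Centralising this removed the duplicated translate-then-fix-up pass (`sglist_to_phy_addr()`) that the deleted code needed.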
···162162}163163164164#ifdef CONFIG_KEXEC165165+static void pnv_kexec_wait_secondaries_down(void)166166+{167167+ int my_cpu, i, notified = -1;168168+169169+ my_cpu = get_cpu();170170+171171+ for_each_online_cpu(i) {172172+ uint8_t status;173173+ int64_t rc;174174+175175+ if (i == my_cpu)176176+ continue;177177+178178+ for (;;) {179179+ rc = opal_query_cpu_status(get_hard_smp_processor_id(i),180180+ &status);181181+ if (rc != OPAL_SUCCESS || status != OPAL_THREAD_STARTED)182182+ break;183183+ barrier();184184+ if (i != notified) {185185+ printk(KERN_INFO "kexec: waiting for cpu %d "186186+ "(physical %d) to enter OPAL\n",187187+ i, paca[i].hw_cpu_id);188188+ notified = i;189189+ }190190+ }191191+ }192192+}193193+165194static void pnv_kexec_cpu_down(int crash_shutdown, int secondary)166195{167196 xics_kexec_teardown_cpu(secondary);168197169169- /* Return secondary CPUs to firmware on OPAL v3 */170170- if (firmware_has_feature(FW_FEATURE_OPALv3) && secondary) {198198+ /* On OPAL v3, we return all CPUs to firmware */199199+200200+ if (!firmware_has_feature(FW_FEATURE_OPALv3))201201+ return;202202+203203+ if (secondary) {204204+ /* Return secondary CPUs to firmware on OPAL v3 */171205 mb();172206 get_paca()->kexec_state = KEXEC_STATE_REAL_MODE;173207 mb();174208175209 /* Return the CPU to OPAL */176210 opal_return_cpu();211211+ } else if (crash_shutdown) {212212+ /*213213+ * On crash, we don't wait for secondaries to go214214+ * down as they might be unreachable or hung, so215215+ * instead we just wait a bit and move on.216216+ */217217+ mdelay(1);218218+ } else {219219+ /* Primary waits for the secondaries to have reached OPAL */220220+ pnv_kexec_wait_secondaries_down();177221 }178222}179223#endif /* CONFIG_KEXEC */
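`pnv_kexec_wait_secondaries_down()` above polls each CPU's firmware status until it leaves `OPAL_THREAD_STARTED`, except on crash where it just delays and moves on. The generic poll-until shape, with a pluggable status callback and a bail-out bound standing in for the crash case (all names mine):

```c
#include <assert.h>

enum { THREAD_STARTED = 1, THREAD_IN_FW = 2 };

/* Poll status_fn for each cpu in cpus[] until it stops reporting
 * THREAD_STARTED, giving up after max_polls attempts per cpu.
 * Returns the number of cpus still up. */
static int wait_cpus_down(const int *cpus, int ncpus,
                          int (*status_fn)(int cpu), unsigned int max_polls)
{
    int still_up = 0;
    int i;

    for (i = 0; i < ncpus; i++) {
        unsigned int polls = 0;

        while (status_fn(cpus[i]) == THREAD_STARTED) {
            if (++polls >= max_polls) {  /* bail instead of spinning forever */
                still_up++;
                break;
            }
        }
    }
    return still_up;
}

/* Fake firmware status for the sketch: cpu N enters firmware after N polls. */
static int fake_polls[4];
static int fake_status(int cpu)
{
    return fake_polls[cpu]++ < cpu ? THREAD_STARTED : THREAD_IN_FW;
}
```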
+3
arch/powerpc/platforms/powernv/smp.c
···3030#include <asm/cputhreads.h>3131#include <asm/xics.h>3232#include <asm/opal.h>3333+#include <asm/runlatch.h>33343435#include "powernv.h"3536···157156 */158157 mtspr(SPRN_LPCR, mfspr(SPRN_LPCR) & ~(u64)LPCR_PECE1);159158 while (!generic_check_cpu_restart(cpu)) {159159+ ppc64_runlatch_off();160160 power7_nap();161161+ ppc64_runlatch_on();161162 if (!generic_check_cpu_restart(cpu)) {162163 DBG("CPU%d Unexpected exit while offline !\n", cpu);163164 /* We may be getting an IPI, so we re-enable
···276276 case BPF_S_LD_W_IND:277277 case BPF_S_LD_H_IND:278278 case BPF_S_LD_B_IND:279279- case BPF_S_LDX_B_MSH:280279 case BPF_S_LD_IMM:281280 case BPF_S_LD_MEM:282281 case BPF_S_MISC_TXA:
···136136extern int os_get_ifname(int fd, char *namebuf);137137extern int os_set_slip(int fd);138138extern int os_mode_fd(int fd, int mode);139139+extern int os_fsync_file(int fd);139140140141extern int os_seek_file(int fd, unsigned long long offset);141142extern int os_open_file(const char *file, struct openflags flags, int mode);
···1212#include <string.h>1313#include <sys/stat.h>1414#include <sys/mman.h>1515-#include <sys/param.h>1515+#include <sys/vfs.h>1616+#include <linux/magic.h>1617#include <init.h>1718#include <os.h>18191919-/* Modified by which_tmpdir, which is called during early boot */2020-static char *default_tmpdir = "/tmp";2121-2222-/*2323- * Modified when creating the physical memory file and when checking2424- * the tmp filesystem for usability, both happening during early boot.2525- */2020+/* Set by make_tempfile() during early boot. */2621static char *tempdir = NULL;27222828-static void __init find_tempdir(void)2323+/* Check if dir is on tmpfs. Return 0 if yes, -1 if no or error. */2424+static int __init check_tmpfs(const char *dir)2925{3030- const char *dirs[] = { "TMP", "TEMP", "TMPDIR", NULL };3131- int i;3232- char *dir = NULL;2626+ struct statfs st;33273434- if (tempdir != NULL)3535- /* We've already been called */3636- return;3737- for (i = 0; dirs[i]; i++) {3838- dir = getenv(dirs[i]);3939- if ((dir != NULL) && (*dir != '\0'))4040- break;4141- }4242- if ((dir == NULL) || (*dir == '\0'))4343- dir = default_tmpdir;4444-4545- tempdir = malloc(strlen(dir) + 2);4646- if (tempdir == NULL) {4747- fprintf(stderr, "Failed to malloc tempdir, "4848- "errno = %d\n", errno);4949- return;5050- }5151- strcpy(tempdir, dir);5252- strcat(tempdir, "/");5353-}5454-5555-/*5656- * Remove bytes from the front of the buffer and refill it so that if there's a5757- * partial string that we care about, it will be completed, and we can recognize5858- * it.5959- */6060-static int pop(int fd, char *buf, size_t size, size_t npop)6161-{6262- ssize_t n;6363- size_t len = strlen(&buf[npop]);6464-6565- memmove(buf, &buf[npop], len + 1);6666- n = read(fd, &buf[len], size - len - 1);6767- if (n < 0)6868- return -errno;6969-7070- buf[len + n] = '\0';7171- return 1;7272-}7373-7474-/*7575- * This will return 1, with the first character in buf being the7676- * character following the next instance of c in 
the file. This will7777- * read the file as needed. If there's an error, -errno is returned;7878- * if the end of the file is reached, 0 is returned.7979- */8080-static int next(int fd, char *buf, size_t size, char c)8181-{8282- ssize_t n;8383- char *ptr;8484-8585- while ((ptr = strchr(buf, c)) == NULL) {8686- n = read(fd, buf, size - 1);8787- if (n == 0)8888- return 0;8989- else if (n < 0)9090- return -errno;9191-9292- buf[n] = '\0';9393- }9494-9595- return pop(fd, buf, size, ptr - buf + 1);9696-}9797-9898-/*9999- * Decode an octal-escaped and space-terminated path of the form used by100100- * /proc/mounts. May be used to decode a path in-place. "out" must be at least101101- * as large as the input. The output is always null-terminated. "len" gets the102102- * length of the output, excluding the trailing null. Returns 0 if a full path103103- * was successfully decoded, otherwise an error.104104- */105105-static int decode_path(const char *in, char *out, size_t *len)106106-{107107- char *first = out;108108- int c;109109- int i;110110- int ret = -EINVAL;111111- while (1) {112112- switch (*in) {113113- case '\0':114114- goto out;115115-116116- case ' ':117117- ret = 0;118118- goto out;119119-120120- case '\\':121121- in++;122122- c = 0;123123- for (i = 0; i < 3; i++) {124124- if (*in < '0' || *in > '7')125125- goto out;126126- c = (c << 3) | (*in++ - '0');127127- }128128- *(unsigned char *)out++ = (unsigned char) c;129129- break;130130-131131- default:132132- *out++ = *in++;133133- break;134134- }135135- }136136-137137-out:138138- *out = '\0';139139- *len = out - first;140140- return ret;141141-}142142-143143-/*144144- * Computes the length of s when encoded with three-digit octal escape sequences145145- * for the characters in chars.146146- */147147-static size_t octal_encoded_length(const char *s, const char *chars)148148-{149149- size_t len = strlen(s);150150- while ((s = strpbrk(s, chars)) != NULL) {151151- len += 3;152152- s++;153153- }154154-155155- return 
len;156156-}157157-158158-enum {159159- OUTCOME_NOTHING_MOUNTED,160160- OUTCOME_TMPFS_MOUNT,161161- OUTCOME_NON_TMPFS_MOUNT,162162-};163163-164164-/* Read a line of /proc/mounts data looking for a tmpfs mount at "path". */165165-static int read_mount(int fd, char *buf, size_t bufsize, const char *path,166166- int *outcome)167167-{168168- int found;169169- int match;170170- char *space;171171- size_t len;172172-173173- enum {174174- MATCH_NONE,175175- MATCH_EXACT,176176- MATCH_PARENT,177177- };178178-179179- found = next(fd, buf, bufsize, ' ');180180- if (found != 1)181181- return found;182182-183183- /*184184- * If there's no following space in the buffer, then this path is185185- * truncated, so it can't be the one we're looking for.186186- */187187- space = strchr(buf, ' ');188188- if (space) {189189- match = MATCH_NONE;190190- if (!decode_path(buf, buf, &len)) {191191- if (!strcmp(buf, path))192192- match = MATCH_EXACT;193193- else if (!strncmp(buf, path, len)194194- && (path[len] == '/' || !strcmp(buf, "/")))195195- match = MATCH_PARENT;196196- }197197-198198- found = pop(fd, buf, bufsize, space - buf + 1);199199- if (found != 1)200200- return found;201201-202202- switch (match) {203203- case MATCH_EXACT:204204- if (!strncmp(buf, "tmpfs", strlen("tmpfs")))205205- *outcome = OUTCOME_TMPFS_MOUNT;206206- else207207- *outcome = OUTCOME_NON_TMPFS_MOUNT;208208- break;209209-210210- case MATCH_PARENT:211211- /* This mount obscures any previous ones. */212212- *outcome = OUTCOME_NOTHING_MOUNTED;213213- break;214214- }215215- }216216-217217- return next(fd, buf, bufsize, '\n');218218-}219219-220220-/* which_tmpdir is called only during early boot */221221-static int checked_tmpdir = 0;222222-223223-/*224224- * Look for a tmpfs mounted at /dev/shm. I couldn't find a cleaner225225- * way to do this than to parse /proc/mounts. 
statfs will return the226226- * same filesystem magic number and fs id for both /dev and /dev/shm227227- * when they are both tmpfs, so you can't tell if they are different228228- * filesystems. Also, there seems to be no other way of finding the229229- * mount point of a filesystem from within it.230230- *231231- * If a /dev/shm tmpfs entry is found, then we switch to using it.232232- * Otherwise, we stay with the default /tmp.233233- */234234-static void which_tmpdir(void)235235-{236236- int fd;237237- int found;238238- int outcome;239239- char *path;240240- char *buf;241241- size_t bufsize;242242-243243- if (checked_tmpdir)244244- return;245245-246246- checked_tmpdir = 1;247247-248248- printf("Checking for tmpfs mount on /dev/shm...");249249-250250- path = realpath("/dev/shm", NULL);251251- if (!path) {252252- printf("failed to check real path, errno = %d\n", errno);253253- return;254254- }255255- printf("%s...", path);256256-257257- /*258258- * The buffer needs to be able to fit the full octal-escaped path, a259259- * space, and a trailing null in order to successfully decode it.260260- */261261- bufsize = octal_encoded_length(path, " \t\n\\") + 2;262262-263263- if (bufsize < 128)264264- bufsize = 128;265265-266266- buf = malloc(bufsize);267267- if (!buf) {268268- printf("malloc failed, errno = %d\n", errno);269269- goto out;270270- }271271- buf[0] = '\0';272272-273273- fd = open("/proc/mounts", O_RDONLY);274274- if (fd < 0) {275275- printf("failed to open /proc/mounts, errno = %d\n", errno);276276- goto out1;277277- }278278-279279- outcome = OUTCOME_NOTHING_MOUNTED;280280- while (1) {281281- found = read_mount(fd, buf, bufsize, path, &outcome);282282- if (found != 1)283283- break;284284- }285285-286286- if (found < 0) {287287- printf("read returned errno %d\n", -found);2828+ printf("Checking if %s is on tmpfs...", dir);2929+ if (statfs(dir, &st) < 0) {3030+ printf("%s\n", strerror(errno));3131+ } else if (st.f_type != TMPFS_MAGIC) {3232+ printf("no\n");28833 } 
else {289289- switch (outcome) {290290- case OUTCOME_TMPFS_MOUNT:291291- printf("OK\n");292292- default_tmpdir = "/dev/shm";293293- break;294294-295295- case OUTCOME_NON_TMPFS_MOUNT:296296- printf("not tmpfs\n");297297- break;298298-299299- default:300300- printf("nothing mounted on /dev/shm\n");301301- break;302302- }3434+ printf("OK\n");3535+ return 0;30336 }304304-305305- close(fd);306306-out1:307307- free(buf);308308-out:309309- free(path);3737+ return -1;31038}31139312312-static int __init make_tempfile(const char *template, char **out_tempname,313313- int do_unlink)4040+/*4141+ * Choose the tempdir to use. We want something on tmpfs so that our memory is4242+ * not subject to the host's vm.dirty_ratio. If a tempdir is specified in the4343+ * environment, we use that even if it's not on tmpfs, but we warn the user.4444+ * Otherwise, we try common tmpfs locations, and if no tmpfs directory is found4545+ * then we fall back to /tmp.4646+ */4747+static char * __init choose_tempdir(void)4848+{4949+ static const char * const vars[] = {5050+ "TMPDIR",5151+ "TMP",5252+ "TEMP",5353+ NULL5454+ };5555+ static const char fallback_dir[] = "/tmp";5656+ static const char * const tmpfs_dirs[] = {5757+ "/dev/shm",5858+ fallback_dir,5959+ NULL6060+ };6161+ int i;6262+ const char *dir;6363+6464+ printf("Checking environment variables for a tempdir...");6565+ for (i = 0; vars[i]; i++) {6666+ dir = getenv(vars[i]);6767+ if ((dir != NULL) && (*dir != '\0')) {6868+ printf("%s\n", dir);6969+ if (check_tmpfs(dir) >= 0)7070+ goto done;7171+ else7272+ goto warn;7373+ }7474+ }7575+ printf("none found\n");7676+7777+ for (i = 0; tmpfs_dirs[i]; i++) {7878+ dir = tmpfs_dirs[i];7979+ if (check_tmpfs(dir) >= 0)8080+ goto done;8181+ }8282+8383+ dir = fallback_dir;8484+warn:8585+ printf("Warning: tempdir %s is not on tmpfs\n", dir);8686+done:8787+ /* Make a copy since getenv results may not remain valid forever. 
*/8888+ return strdup(dir);8989+}9090+9191+/*9292+ * Create an unlinked tempfile in a suitable tempdir. template must be the9393+ * basename part of the template with a leading '/'.9494+ */9595+static int __init make_tempfile(const char *template)31496{31597 char *tempname;31698 int fd;31799318318- which_tmpdir();319319- tempname = malloc(MAXPATHLEN);100100+ if (tempdir == NULL) {101101+ tempdir = choose_tempdir();102102+ if (tempdir == NULL) {103103+ fprintf(stderr, "Failed to choose tempdir: %s\n",104104+ strerror(errno));105105+ return -1;106106+ }107107+ }108108+109109+ tempname = malloc(strlen(tempdir) + strlen(template) + 1);320110 if (tempname == NULL)321111 return -1;322112323323- find_tempdir();324324- if ((tempdir == NULL) || (strlen(tempdir) >= MAXPATHLEN))325325- goto out;326326-327327- if (template[0] != '/')328328- strcpy(tempname, tempdir);329329- else330330- tempname[0] = '\0';331331- strncat(tempname, template, MAXPATHLEN-1-strlen(tempname));113113+ strcpy(tempname, tempdir);114114+ strcat(tempname, template);332115 fd = mkstemp(tempname);333116 if (fd < 0) {334117 fprintf(stderr, "open - cannot create %s: %s\n", tempname,335118 strerror(errno));336119 goto out;337120 }338338- if (do_unlink && (unlink(tempname) < 0)) {121121+ if (unlink(tempname) < 0) {339122 perror("unlink");340123 goto close;341124 }342342- if (out_tempname) {343343- *out_tempname = tempname;344344- } else345345- free(tempname);125125+ free(tempname);346126 return fd;347127close:348128 close(fd);···131351 return -1;132352}133353134134-#define TEMPNAME_TEMPLATE "vm_file-XXXXXX"354354+#define TEMPNAME_TEMPLATE "/vm_file-XXXXXX"135355136356static int __init create_tmp_file(unsigned long long len)137357{138358 int fd, err;139359 char zero;140360141141- fd = make_tempfile(TEMPNAME_TEMPLATE, NULL, 1);361361+ fd = make_tempfile(TEMPNAME_TEMPLATE);142362 if (fd < 0)143363 exit(1);144364···182402 return fd;183403}184404185185-186405void __init check_tmpexec(void)187406{188407 void 
*addr;···189410190411 addr = mmap(NULL, UM_KERN_PAGE_SIZE,191412 PROT_READ | PROT_WRITE | PROT_EXEC, MAP_PRIVATE, fd, 0);192192- printf("Checking PROT_EXEC mmap in %s...",tempdir);193193- fflush(stdout);413413+ printf("Checking PROT_EXEC mmap in %s...", tempdir);194414 if (addr == MAP_FAILED) {195415 err = errno;196196- perror("failed");416416+ printf("%s\n", strerror(err));197417 close(fd);198418 if (err == EPERM)199199- printf("%s must be not mounted noexec\n",tempdir);419419+ printf("%s must be not mounted noexec\n", tempdir);200420 exit(1);201421 }202422 printf("OK\n");
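The rewritten UML code above replaces /proc/mounts parsing with a simple `statfs()` check against `TMPFS_MAGIC`, then walks environment variables and known tmpfs locations before falling back to /tmp. The selection policy with an injectable checker, so it can be exercised without real mounts (helper names are mine, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Return the first candidate dir the checker accepts (0 == on tmpfs),
 * falling back to the list's last entry, mirroring choose_tempdir()'s
 * prefer-tmpfs-else-/tmp policy. */
static const char *pick_tempdir(const char *const *dirs,
                                int (*on_tmpfs)(const char *dir))
{
    int i;

    for (i = 0; dirs[i + 1]; i++)        /* last entry is the fallback */
        if (on_tmpfs(dirs[i]) == 0)
            return dirs[i];
    return dirs[i];
}

/* Fake checker for the sketch: only /dev/shm counts as tmpfs. */
static int fake_on_tmpfs(const char *dir)
{
    return strcmp(dir, "/dev/shm") == 0 ? 0 : -1;
}

static int reject_all(const char *dir)
{
    (void)dir;
    return -1;
}
```

The real `check_tmpfs()` would be the checker here: `statfs(dir, &st)` followed by `st.f_type != TMPFS_MAGIC`.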
+3-1
arch/x86/Makefile
···8383 KBUILD_CFLAGS += -m6484848585 # Don't autogenerate traditional x87, MMX or SSE instructions8686- KBUILD_CFLAGS += -mno-mmx -mno-sse -mno-80387 -mno-fp-ret-in-3878686+ KBUILD_CFLAGS += -mno-mmx -mno-sse8787+ KBUILD_CFLAGS += $(call cc-option,-mno-80387)8888+ KBUILD_CFLAGS += $(call cc-option,-mno-fp-ret-in-387)87898890 # Use -mpreferred-stack-boundary=3 if supported.8991 KBUILD_CFLAGS += $(call cc-option,-mpreferred-stack-boundary=3)
···543543 if (phys_id < 0)544544 return -1;545545546546- if (!rdmsrl_safe(MSR_RAPL_POWER_UNIT, &msr_rapl_power_unit_bits))546546+ /* protect rdmsrl() to handle virtualization */547547+ if (rdmsrl_safe(MSR_RAPL_POWER_UNIT, &msr_rapl_power_unit_bits))547548 return -1;548549549550 pmu = kzalloc_node(sizeof(*pmu), GFP_KERNEL, cpu_to_node(cpu));
+10-1
arch/x86/kernel/vsmp_64.c
···26262727#define TOPOLOGY_REGISTER_OFFSET 0x1028282929+/* Flag below is initialized once during vSMP PCI initialization. */3030+static int irq_routing_comply = 1;3131+2932#if defined CONFIG_PCI && defined CONFIG_PARAVIRT3033/*3134 * Interrupt control on vSMPowered systems:···104101#ifdef CONFIG_SMP105102 if (cap & ctl & BIT(8)) {106103 ctl &= ~BIT(8);104104+105105+ /* Interrupt routing set to ignore */106106+ irq_routing_comply = 0;107107+107108#ifdef CONFIG_PROC_FS108109 /* Don't let users change irq affinity via procfs */109110 no_irq_affinity = 1;···225218{226219 /* need to update phys_pkg_id */227220 apic->phys_pkg_id = apicid_phys_pkg_id;228228- apic->vector_allocation_domain = fill_vector_allocation_domain;221221+222222+ if (!irq_routing_comply)223223+ apic->vector_allocation_domain = fill_vector_allocation_domain;229224}230225231226void __init vsmp_init(void)
+41-12
arch/x86/kvm/vmx.c
···503503 [number##_HIGH] = VMCS12_OFFSET(name)+4504504505505506506-static const unsigned long shadow_read_only_fields[] = {506506+static unsigned long shadow_read_only_fields[] = {507507 /*508508 * We do NOT shadow fields that are modified when L0509509 * traps and emulates any vmx instruction (e.g. VMPTRLD,···526526 GUEST_LINEAR_ADDRESS,527527 GUEST_PHYSICAL_ADDRESS528528};529529-static const int max_shadow_read_only_fields =529529+static int max_shadow_read_only_fields =530530 ARRAY_SIZE(shadow_read_only_fields);531531532532-static const unsigned long shadow_read_write_fields[] = {532532+static unsigned long shadow_read_write_fields[] = {533533 GUEST_RIP,534534 GUEST_RSP,535535 GUEST_CR0,···558558 HOST_FS_SELECTOR,559559 HOST_GS_SELECTOR560560};561561-static const int max_shadow_read_write_fields =561561+static int max_shadow_read_write_fields =562562 ARRAY_SIZE(shadow_read_write_fields);563563564564static const unsigned short vmcs_field_to_offset_table[] = {···30093009 }30103010}3011301130123012+static void init_vmcs_shadow_fields(void)30133013+{30143014+ int i, j;30153015+30163016+ /* No checks for read only fields yet */30173017+30183018+ for (i = j = 0; i < max_shadow_read_write_fields; i++) {30193019+ switch (shadow_read_write_fields[i]) {30203020+ case GUEST_BNDCFGS:30213021+ if (!vmx_mpx_supported())30223022+ continue;30233023+ break;30243024+ default:30253025+ break;30263026+ }30273027+30283028+ if (j < i)30293029+ shadow_read_write_fields[j] =30303030+ shadow_read_write_fields[i];30313031+ j++;30323032+ }30333033+ max_shadow_read_write_fields = j;30343034+30353035+ /* shadowed fields guest access without vmexit */30363036+ for (i = 0; i < max_shadow_read_write_fields; i++) {30373037+ clear_bit(shadow_read_write_fields[i],30383038+ vmx_vmwrite_bitmap);30393039+ clear_bit(shadow_read_write_fields[i],30403040+ vmx_vmread_bitmap);30413041+ }30423042+ for (i = 0; i < max_shadow_read_only_fields; i++)30433043+ clear_bit(shadow_read_only_fields[i],30443044+ 
vmx_vmread_bitmap);30453045+}30463046+30123047static __init int alloc_kvm_area(void)30133048{30143049 int cpu;···30743039 enable_vpid = 0;30753040 if (!cpu_has_vmx_shadow_vmcs())30763041 enable_shadow_vmcs = 0;30423042+ if (enable_shadow_vmcs)30433043+ init_vmcs_shadow_fields();3077304430783045 if (!cpu_has_vmx_ept() ||30793046 !cpu_has_vmx_ept_4levels()) {···8840880388418804 memset(vmx_vmread_bitmap, 0xff, PAGE_SIZE);88428805 memset(vmx_vmwrite_bitmap, 0xff, PAGE_SIZE);88438843- /* shadowed read/write fields */88448844- for (i = 0; i < max_shadow_read_write_fields; i++) {88458845- clear_bit(shadow_read_write_fields[i], vmx_vmwrite_bitmap);88468846- clear_bit(shadow_read_write_fields[i], vmx_vmread_bitmap);88478847- }88488848- /* shadowed read only fields */88498849- for (i = 0; i < max_shadow_read_only_fields; i++)88508850- clear_bit(shadow_read_only_fields[i], vmx_vmread_bitmap);8851880688528807 /*88538808 * Allow direct access to the PC debug port (it is often used for I/O
···206206 spin_unlock_irqrestore(&ec->lock, flags);207207}208208209209-static int acpi_ec_sync_query(struct acpi_ec *ec);209209+static int acpi_ec_sync_query(struct acpi_ec *ec, u8 *data);210210211211static int ec_check_sci_sync(struct acpi_ec *ec, u8 state)212212{213213 if (state & ACPI_EC_FLAG_SCI) {214214 if (!test_and_set_bit(EC_FLAGS_QUERY_PENDING, &ec->flags))215215- return acpi_ec_sync_query(ec);215215+ return acpi_ec_sync_query(ec, NULL);216216 }217217 return 0;218218}···443443444444EXPORT_SYMBOL(ec_get_handle);445445446446-static int acpi_ec_query_unlocked(struct acpi_ec *ec, u8 *data);447447-448446/*449449- * Clears stale _Q events that might have accumulated in the EC.447447+ * Process _Q events that might have accumulated in the EC.450448 * Run with locked ec mutex.451449 */452450static void acpi_ec_clear(struct acpi_ec *ec)···453455 u8 value = 0;454456455457 for (i = 0; i < ACPI_EC_CLEAR_MAX; i++) {456456- status = acpi_ec_query_unlocked(ec, &value);458458+ status = acpi_ec_sync_query(ec, &value);457459 if (status || !value)458460 break;459461 }···580582 kfree(handler);581583}582584583583-static int acpi_ec_sync_query(struct acpi_ec *ec)585585+static int acpi_ec_sync_query(struct acpi_ec *ec, u8 *data)584586{585587 u8 value = 0;586588 int status;587589 struct acpi_ec_query_handler *handler, *copy;588588- if ((status = acpi_ec_query_unlocked(ec, &value)))590590+591591+ status = acpi_ec_query_unlocked(ec, &value);592592+ if (data)593593+ *data = value;594594+ if (status)589595 return status;596596+590597 list_for_each_entry(handler, &ec->list, node) {591598 if (value == handler->query_bit) {592599 /* have custom handler for this bit */···615612 if (!ec)616613 return;617614 mutex_lock(&ec->mutex);618618- acpi_ec_sync_query(ec);615615+ acpi_ec_sync_query(ec, NULL);619616 mutex_unlock(&ec->mutex);620617}621618
+2-3
drivers/ata/Kconfig
···116116
117117config AHCI_IMX
118118	tristate "Freescale i.MX AHCI SATA support"
119119-	depends on MFD_SYSCON
119119+	depends on MFD_SYSCON && (ARCH_MXC || COMPILE_TEST)
120120	help
121121	  This option enables support for the Freescale i.MX SoC's
122122	  onboard AHCI SATA.
···134134
135135config AHCI_XGENE
136136	tristate "APM X-Gene 6.0Gbps AHCI SATA host controller support"
137137-	depends on ARM64 || COMPILE_TEST
138138-	select PHY_XGENE
137137+	depends on PHY_XGENE
139138	help
140139	  This option enables support for APM X-Gene SoC SATA host controller.
141140
+21-14
drivers/ata/ahci.c
···11641164#endif1165116511661166static int ahci_init_interrupts(struct pci_dev *pdev, unsigned int n_ports,11671167- struct ahci_host_priv *hpriv)11671167+ struct ahci_host_priv *hpriv)11681168{11691169- int nvec;11691169+ int rc, nvec;1170117011711171 if (hpriv->flags & AHCI_HFLAG_NO_MSI)11721172 goto intx;···11831183 if (nvec < n_ports)11841184 goto single_msi;1185118511861186- nvec = pci_enable_msi_range(pdev, nvec, nvec);11871187- if (nvec == -ENOSPC)11861186+ rc = pci_enable_msi_exact(pdev, nvec);11871187+ if (rc == -ENOSPC)11881188 goto single_msi;11891189- else if (nvec < 0)11891189+ else if (rc < 0)11901190 goto intx;11911191+11921192+ /* fallback to single MSI mode if the controller enforced MRSM mode */11931193+ if (readl(hpriv->mmio + HOST_CTL) & HOST_MRSM) {11941194+ pci_disable_msi(pdev);11951195+ printk(KERN_INFO "ahci: MRSM is on, fallback to single MSI\n");11961196+ goto single_msi;11971197+ }1191119811921199 return nvec;11931200···12391232 return rc;1240123312411234 for (i = 0; i < host->n_ports; i++) {12421242- const char* desc;12431235 struct ahci_port_priv *pp = host->ports[i]->private_data;1244123612451245- /* pp is NULL for dummy ports */12461246- if (pp)12471247- desc = pp->irq_desc;12481248- else12491249- desc = dev_driver_string(host->dev);12371237+ /* Do not receive interrupts sent by dummy ports */12381238+ if (!pp) {12391239+ disable_irq(irq + i);12401240+ continue;12411241+ }1250124212511251- rc = devm_request_threaded_irq(host->dev,12521252- irq + i, ahci_hw_interrupt, ahci_thread_fn, IRQF_SHARED,12531253- desc, host->ports[i]);12431243+ rc = devm_request_threaded_irq(host->dev, irq + i,12441244+ ahci_hw_interrupt,12451245+ ahci_thread_fn, IRQF_SHARED,12461246+ pp->irq_desc, host->ports[i]);12541247 if (rc)12551248 goto out_free_irqs;12561249 }
···42244224	{ "PIONEER DVD-RW  DVR-216D",	NULL,	ATA_HORKAGE_NOSETXFER },
42254225
42264226	/* devices that don't properly handle queued TRIM commands */
42274227-	{ "Micron_M500*",		NULL,	ATA_HORKAGE_NO_NCQ_TRIM, },
42284228-	{ "Crucial_CT???M500SSD*",	NULL,	ATA_HORKAGE_NO_NCQ_TRIM, },
42274227+	{ "Micron_M500*",		"MU0[1-4]*",	ATA_HORKAGE_NO_NCQ_TRIM, },
42284228+	{ "Crucial_CT???M500SSD*",	"MU0[1-4]*",	ATA_HORKAGE_NO_NCQ_TRIM, },
42294229+	{ "Micron_M550*",		NULL,	ATA_HORKAGE_NO_NCQ_TRIM, },
42304230+	{ "Crucial_CT???M550SSD*",	NULL,	ATA_HORKAGE_NO_NCQ_TRIM, },
42294231
42304232	/*
42314233	 * Some WD SATA-I drives spin up and down erratically when the link
···47944792static struct ata_queued_cmd *ata_qc_new(struct ata_port *ap)
47954793{
47964794	struct ata_queued_cmd *qc = NULL;
47974797-	unsigned int i;
47954795+	unsigned int i, tag;
47984796
47994797	/* no command while frozen */
48004798	if (unlikely(ap->pflags & ATA_PFLAG_FROZEN))
48014799		return NULL;
48024800
48034803-	/* the last tag is reserved for internal command. */
48044804-	for (i = 0; i < ATA_MAX_QUEUE - 1; i++)
48054805-		if (!test_and_set_bit(i, &ap->qc_allocated)) {
48064806-			qc = __ata_qc_from_tag(ap, i);
48014801+	for (i = 0; i < ATA_MAX_QUEUE; i++) {
48024802+		tag = (i + ap->last_tag + 1) % ATA_MAX_QUEUE;
48034803+
48044804+		/* the last tag is reserved for internal command. */
48054805+		if (tag == ATA_TAG_INTERNAL)
48064806+			continue;
48074807+
48084808+		if (!test_and_set_bit(tag, &ap->qc_allocated)) {
48094809+			qc = __ata_qc_from_tag(ap, tag);
48104810+			qc->tag = tag;
48114811+			ap->last_tag = tag;
48074812			break;
48084813		}
48094809-
48104810-	if (qc)
48114811-		qc->tag = i;
48144814+	}
48124815
48134816	return qc;
48144817}
···5252static LIST_HEAD(deferred_probe_pending_list);
5353static LIST_HEAD(deferred_probe_active_list);
5454static struct workqueue_struct *deferred_wq;
5555+static atomic_t deferred_trigger_count = ATOMIC_INIT(0);
5556
5657/**
5758 * deferred_probe_work_func() - Retry probing devices in the active list.
···136135 * This functions moves all devices from the pending list to the active
137136 * list and schedules the deferred probe workqueue to process them. It
138137 * should be called anytime a driver is successfully bound to a device.
138138+ *
139139+ * Note, there is a race condition in multi-threaded probe. In the case where
140140+ * more than one device is probing at the same time, it is possible for one
141141+ * probe to complete successfully while another is about to defer. If the second
142142+ * depends on the first, then it will get put on the pending list after the
143143+ * trigger event has already occurred and will be stuck there.
144144+ *
145145+ * The atomic 'deferred_trigger_count' is used to determine if a successful
146146+ * trigger has occurred in the midst of probing a driver. If the trigger count
147147+ * changes in the midst of a probe, then deferred processing should be triggered
148148+ * again.
139149 */
140150static void driver_deferred_probe_trigger(void)
141151{
···159147	 * into the active list so they can be retried by the workqueue
160148	 */
161149	mutex_lock(&deferred_probe_mutex);
150150+	atomic_inc(&deferred_trigger_count);
162151	list_splice_tail_init(&deferred_probe_pending_list,
163152			      &deferred_probe_active_list);
164153	mutex_unlock(&deferred_probe_mutex);
···278265static int really_probe(struct device *dev, struct device_driver *drv)
279266{
280267	int ret = 0;
268268+	int local_trigger_count = atomic_read(&deferred_trigger_count);
281269
282270	atomic_inc(&probe_count);
283271	pr_debug("bus: '%s': %s: probing driver %s with device %s\n",
···324310		/* Driver requested deferred probing */
325311		dev_info(dev, "Driver %s requests probe deferral\n", drv->name);
326312		driver_deferred_probe_add(dev);
313313+		/* Did a trigger occur while probing? Need to re-trigger if yes */
314314+		if (local_trigger_count != atomic_read(&deferred_trigger_count))
315315+			driver_deferred_probe_trigger();
327316	} else if (ret != -ENODEV && ret != -ENXIO) {
328317		/* driver matched but the probe failed */
329318		printk(KERN_WARNING
···9292
9393config ARM_HIGHBANK_CPUFREQ
9494	tristate "Calxeda Highbank-based"
9595-	depends on ARCH_HIGHBANK
9696-	select GENERIC_CPUFREQ_CPU0
9797-	select PM_OPP
9898-	select REGULATOR
9999-
9595+	depends on ARCH_HIGHBANK && GENERIC_CPUFREQ_CPU0 && REGULATOR
10096	default m
10197	help
10298	  This adds the CPUFreq driver for Calxeda Highbank SoC
+24-12
drivers/cpufreq/longhaul.c
···242242 * Sets a new clock ratio.
243243 */
244244
245245-static void longhaul_setstate(struct cpufreq_policy *policy,
245245+static int longhaul_setstate(struct cpufreq_policy *policy,
246246		unsigned int table_index)
247247{
248248	unsigned int mults_index;
···258258	/* Safety precautions */
259259	mult = mults[mults_index & 0x1f];
260260	if (mult == -1)
261261-		return;
261261+		return -EINVAL;
262262+
262263	speed = calc_speed(mult);
263264	if ((speed > highest_speed) || (speed < lowest_speed))
264264-		return;
265265+		return -EINVAL;
266266+
265267	/* Voltage transition before frequency transition? */
266268	if (can_scale_voltage && longhaul_index < table_index)
267269		dir = 1;
268270
269271	freqs.old = calc_speed(longhaul_get_cpu_mult());
270272	freqs.new = speed;
271271-
272272-	cpufreq_freq_transition_begin(policy, &freqs);
273273
274274	pr_debug("Setting to FSB:%dMHz Mult:%d.%dx (%s)\n",
275275	       fsb, mult/10, mult%10, print_speed(speed/1000));
···385385			goto retry_loop;
386386		}
387387	}
388388-	/* Report true CPU frequency */
389389-	cpufreq_freq_transition_end(policy, &freqs, 0);
390388
391391-	if (!bm_timeout)
389389+	if (!bm_timeout) {
392390		printk(KERN_INFO PFX "Warning: Timeout while waiting for "
393391				"idle PCI bus.\n");
392392+		return -EBUSY;
393393+	}
394394+
395395+	return 0;
394396}
395397
396398/*
···633631	unsigned int i;
634632	unsigned int dir = 0;
635633	u8 vid, current_vid;
634634+	int retval = 0;
636635
637636	if (!can_scale_voltage)
638638-		longhaul_setstate(policy, table_index);
637637+		retval = longhaul_setstate(policy, table_index);
639638	else {
640639		/* On test system voltage transitions exceeding single
641640		 * step up or down were turning motherboard off. Both
···651648		while (i != table_index) {
652649			vid = (longhaul_table[i].driver_data >> 8) & 0x1f;
653650			if (vid != current_vid) {
654654-				longhaul_setstate(policy, i);
651651+				retval = longhaul_setstate(policy, i);
655652				current_vid = vid;
656653				msleep(200);
657654			}
···660657			else
661658				i--;
662659		}
663663-		longhaul_setstate(policy, table_index);
660660+		retval = longhaul_setstate(policy, table_index);
664661	}
662662+
665663	longhaul_index = table_index;
666666-	return 0;
664664+	return retval;
667665}
668666
669667
···972968
973969	for (i = 0; i < numscales; i++) {
974970		if (mults[i] == maxmult) {
971971+			struct cpufreq_freqs freqs;
972972+
973973+			freqs.old = policy->cur;
974974+			freqs.new = longhaul_table[i].frequency;
975975+			freqs.flags = 0;
976976+
977977+			cpufreq_freq_transition_begin(policy, &freqs);
975978			longhaul_setstate(policy, i);
979979+			cpufreq_freq_transition_end(policy, &freqs, 0);
976980			break;
977981		}
978982	}
+13-10
drivers/cpufreq/powernow-k6.c
···138138static int powernow_k6_target(struct cpufreq_policy *policy,
139139		unsigned int best_i)
140140{
141141-	struct cpufreq_freqs freqs;
142141
143142	if (clock_ratio[best_i].driver_data > max_multiplier) {
144143		printk(KERN_ERR PFX "invalid target frequency\n");
145144		return -EINVAL;
146145	}
147146
148148-	freqs.old = busfreq * powernow_k6_get_cpu_multiplier();
149149-	freqs.new = busfreq * clock_ratio[best_i].driver_data;
150150-
151151-	cpufreq_freq_transition_begin(policy, &freqs);
152152-
153147	powernow_k6_set_cpu_multiplier(best_i);
154154-
155155-	cpufreq_freq_transition_end(policy, &freqs, 0);
156148
157149	return 0;
158150}
···219227static int powernow_k6_cpu_exit(struct cpufreq_policy *policy)
220228{
221229	unsigned int i;
222222-	for (i = 0; i < 8; i++) {
223223-		if (i == max_multiplier)
230230+
231231+	for (i = 0; (clock_ratio[i].frequency != CPUFREQ_TABLE_END); i++) {
232232+		if (clock_ratio[i].driver_data == max_multiplier) {
233233+			struct cpufreq_freqs freqs;
234234+
235235+			freqs.old = policy->cur;
236236+			freqs.new = clock_ratio[i].frequency;
237237+			freqs.flags = 0;
238238+
239239+			cpufreq_freq_transition_begin(policy, &freqs);
224240			powernow_k6_target(policy, i);
241241+			cpufreq_freq_transition_end(policy, &freqs, 0);
242242+			break;
243243+		}
225244	}
226245	return 0;
227246}
-4
drivers/cpufreq/powernow-k7.c
···269269
270270	freqs.new = powernow_table[index].frequency;
271271
272272-	cpufreq_freq_transition_begin(policy, &freqs);
273273-
274272	/* Now do the magic poking into the MSRs.  */
275273
276274	if (have_a0 == 1)	/* A0 errata 5 */
···287289
288290	if (have_a0 == 1)
289291		local_irq_enable();
290290-
291291-	cpufreq_freq_transition_end(policy, &freqs, 0);
292292
293293	return 0;
294294}
+1
drivers/cpufreq/powernv-cpufreq.c
···2929
3030#include <asm/cputhreads.h>
3131#include <asm/reg.h>
3232+#include <asm/smp.h> /* Required for cpu_sibling_mask() in UP configs */
3233
3334#define POWERNV_MAX_PSTATES	256
3435
···5050
5151	/* Full ppgtt disabled by default for now due to issues. */
5252	if (full)
5353-		return false; /* HAS_PPGTT(dev) */
5353+		return HAS_PPGTT(dev) && (i915.enable_ppgtt == 2);
5454	else
5555		return HAS_ALIASING_PPGTT(dev);
5656}
+14-4
drivers/gpu/drm/i915/i915_irq.c
···13621362	spin_lock(&dev_priv->irq_lock);
13631363	for (i = 1; i < HPD_NUM_PINS; i++) {
13641364
13651365-		WARN_ONCE(hpd[i] & hotplug_trigger &&
13661366-			  dev_priv->hpd_stats[i].hpd_mark == HPD_DISABLED,
13671367-			  "Received HPD interrupt (0x%08x) on pin %d (0x%08x) although disabled\n",
13681368-			  hotplug_trigger, i, hpd[i]);
13651365+		if (hpd[i] & hotplug_trigger &&
13661366+		    dev_priv->hpd_stats[i].hpd_mark == HPD_DISABLED) {
13671367+			/*
13681368+			 * On GMCH platforms the interrupt mask bits only
13691369+			 * prevent irq generation, not the setting of the
13701370+			 * hotplug bits itself. So only WARN about unexpected
13711371+			 * interrupts on saner platforms.
13721372+			 */
13731373+			WARN_ONCE(INTEL_INFO(dev)->gen >= 5 && !IS_VALLEYVIEW(dev),
13741374+			    "Received HPD interrupt (0x%08x) on pin %d (0x%08x) although disabled\n",
13751375+			    hotplug_trigger, i, hpd[i]);
13761376+
13771377+			continue;
13781378+		}
13691379
13701380		if (!(hpd[i] & hotplug_trigger) ||
13711381		    dev_priv->hpd_stats[i].hpd_mark != HPD_ENABLED)
···96549654 PIPE_CONF_CHECK_I(pipe_src_w);96559655 PIPE_CONF_CHECK_I(pipe_src_h);9656965696579657- PIPE_CONF_CHECK_I(gmch_pfit.control);96589658- /* pfit ratios are autocomputed by the hw on gen4+ */96599659- if (INTEL_INFO(dev)->gen < 4)96609660- PIPE_CONF_CHECK_I(gmch_pfit.pgm_ratios);96619661- PIPE_CONF_CHECK_I(gmch_pfit.lvds_border_bits);96579657+ /*96589658+ * FIXME: BIOS likes to set up a cloned config with lvds+external96599659+ * screen. Since we don't yet re-compute the pipe config when moving96609660+ * just the lvds port away to another pipe the sw tracking won't match.96619661+ *96629662+ * Proper atomic modesets with recomputed global state will fix this.96639663+ * Until then just don't check gmch state for inherited modes.96649664+ */96659665+ if (!PIPE_CONF_QUIRK(PIPE_CONFIG_QUIRK_INHERITED_MODE)) {96669666+ PIPE_CONF_CHECK_I(gmch_pfit.control);96679667+ /* pfit ratios are autocomputed by the hw on gen4+ */96689668+ if (INTEL_INFO(dev)->gen < 4)96699669+ PIPE_CONF_CHECK_I(gmch_pfit.pgm_ratios);96709670+ PIPE_CONF_CHECK_I(gmch_pfit.lvds_border_bits);96719671+ }96729672+96629673 PIPE_CONF_CHECK_I(pch_pfit.enabled);96639674 if (current_config->pch_pfit.enabled) {96649675 PIPE_CONF_CHECK_I(pch_pfit.pos);···1162611615 list_for_each_entry(crtc, &dev->mode_config.crtc_list,1162711616 base.head) {1162811617 memset(&crtc->config, 0, sizeof(crtc->config));1161811618+1161911619+ crtc->config.quirks |= PIPE_CONFIG_QUIRK_INHERITED_MODE;11629116201163011621 crtc->active = dev_priv->display.get_pipe_config(crtc,1163111622 &crtc->config);
+10-1
drivers/gpu/drm/i915/intel_dp.c
···36193619{
36203620	struct drm_connector *connector = &intel_connector->base;
36213621	struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
36223622-	struct drm_device *dev = intel_dig_port->base.base.dev;
36223622+	struct intel_encoder *intel_encoder = &intel_dig_port->base;
36233623+	struct drm_device *dev = intel_encoder->base.dev;
36233624	struct drm_i915_private *dev_priv = dev->dev_private;
36243625	struct drm_display_mode *fixed_mode = NULL;
36253626	bool has_dpcd;
···36293628
36303629	if (!is_edp(intel_dp))
36313630		return true;
36313631+
36323632+	/* The VDD bit needs a power domain reference, so if the bit is already
36333633+	 * enabled when we boot, grab this reference. */
36343634+	if (edp_have_panel_vdd(intel_dp)) {
36353635+		enum intel_display_power_domain power_domain;
36363636+		power_domain = intel_display_port_power_domain(intel_encoder);
36373637+		intel_display_power_get(dev_priv, power_domain);
36383638+	}
36323639
36333640	/* Cache DPCD and EDID for edp. */
36343641	intel_edp_panel_vdd_on(intel_dp);
+2-1
drivers/gpu/drm/i915/intel_drv.h
···236236	 * tracked with quirk flags so that fastboot and state checker can act
237237	 * accordingly.
238238	 */
239239-#define PIPE_CONFIG_QUIRK_MODE_SYNC_FLAGS (1<<0) /* unreliable sync mode.flags */
239239+#define PIPE_CONFIG_QUIRK_MODE_SYNC_FLAGS	(1<<0) /* unreliable sync mode.flags */
240240+#define PIPE_CONFIG_QUIRK_INHERITED_MODE	(1<<1) /* mode inherited from firmware */
240241	unsigned long quirks;
241242
242243	/* User requested mode, only valid as a starting point to
+10
drivers/gpu/drm/i915/intel_fbdev.c
···132132
133133	mutex_lock(&dev->struct_mutex);
134134
135135+	if (intel_fb &&
136136+	    (sizes->fb_width > intel_fb->base.width ||
137137+	     sizes->fb_height > intel_fb->base.height)) {
138138+		DRM_DEBUG_KMS("BIOS fb too small (%dx%d), we require (%dx%d),"
139139+			      " releasing it\n",
140140+			      intel_fb->base.width, intel_fb->base.height,
141141+			      sizes->fb_width, sizes->fb_height);
142142+		drm_framebuffer_unreference(&intel_fb->base);
143143+		intel_fb = ifbdev->fb = NULL;
144144+	}
135145	if (!intel_fb || WARN_ON(!intel_fb->obj)) {
136146		DRM_DEBUG_KMS("no BIOS fb, allocating a new one\n");
137147		ret = intelfb_alloc(helper, sizes);
+5-4
drivers/gpu/drm/i915/intel_hdmi.c
···821821	}
822822}
823823
824824-static int hdmi_portclock_limit(struct intel_hdmi *hdmi)
824824+static int hdmi_portclock_limit(struct intel_hdmi *hdmi, bool respect_dvi_limit)
825825{
826826	struct drm_device *dev = intel_hdmi_to_dev(hdmi);
827827
828828-	if (!hdmi->has_hdmi_sink || IS_G4X(dev))
828828+	if ((respect_dvi_limit && !hdmi->has_hdmi_sink) || IS_G4X(dev))
829829		return 165000;
830830	else if (IS_HASWELL(dev) || INTEL_INFO(dev)->gen >= 8)
831831		return 300000;
···837837intel_hdmi_mode_valid(struct drm_connector *connector,
838838		      struct drm_display_mode *mode)
839839{
840840-	if (mode->clock > hdmi_portclock_limit(intel_attached_hdmi(connector)))
840840+	if (mode->clock > hdmi_portclock_limit(intel_attached_hdmi(connector),
841841+					       true))
841842		return MODE_CLOCK_HIGH;
842843	if (mode->clock < 20000)
843844		return MODE_CLOCK_LOW;
···880879	struct drm_device *dev = encoder->base.dev;
881880	struct drm_display_mode *adjusted_mode = &pipe_config->adjusted_mode;
882881	int clock_12bpc = pipe_config->adjusted_mode.crtc_clock * 3 / 2;
883883-	int portclock_limit = hdmi_portclock_limit(intel_hdmi);
882882+	int portclock_limit = hdmi_portclock_limit(intel_hdmi, false);
884883	int desired_bpp;
885884
886885	if (intel_hdmi->color_range_auto) {
+34-20
drivers/gpu/drm/i915/intel_ringbuffer.c
···437437	I915_WRITE(HWS_PGA, addr);
438438}
439439
440440+static bool stop_ring(struct intel_ring_buffer *ring)
441441+{
442442+	struct drm_i915_private *dev_priv = to_i915(ring->dev);
443443+
444444+	if (!IS_GEN2(ring->dev)) {
445445+		I915_WRITE_MODE(ring, _MASKED_BIT_ENABLE(STOP_RING));
446446+		if (wait_for_atomic((I915_READ_MODE(ring) & MODE_IDLE) != 0, 1000)) {
447447+			DRM_ERROR("%s :timed out trying to stop ring\n", ring->name);
448448+			return false;
449449+		}
450450+	}
451451+
452452+	I915_WRITE_CTL(ring, 0);
453453+	I915_WRITE_HEAD(ring, 0);
454454+	ring->write_tail(ring, 0);
455455+
456456+	if (!IS_GEN2(ring->dev)) {
457457+		(void)I915_READ_CTL(ring);
458458+		I915_WRITE_MODE(ring, _MASKED_BIT_DISABLE(STOP_RING));
459459+	}
460460+
461461+	return (I915_READ_HEAD(ring) & HEAD_ADDR) == 0;
462462+}
463463+
440464static int init_ring_common(struct intel_ring_buffer *ring)
441465{
442466	struct drm_device *dev = ring->dev;
443467	struct drm_i915_private *dev_priv = dev->dev_private;
444468	struct drm_i915_gem_object *obj = ring->obj;
445469	int ret = 0;
446446-	u32 head;
447470
448471	gen6_gt_force_wake_get(dev_priv, FORCEWAKE_ALL);
449472
450450-	/* Stop the ring if it's running. */
451451-	I915_WRITE_CTL(ring, 0);
452452-	I915_WRITE_HEAD(ring, 0);
453453-	ring->write_tail(ring, 0);
454454-	if (wait_for_atomic((I915_READ_MODE(ring) & MODE_IDLE) != 0, 1000))
455455-		DRM_ERROR("%s :timed out trying to stop ring\n", ring->name);
456456-
457457-	if (I915_NEED_GFX_HWS(dev))
458458-		intel_ring_setup_status_page(ring);
459459-	else
460460-		ring_setup_phys_status_page(ring);
461461-
462462-	head = I915_READ_HEAD(ring) & HEAD_ADDR;
463463-
464464-	/* G45 ring initialization fails to reset head to zero */
465465-	if (head != 0) {
473473+	if (!stop_ring(ring)) {
474474+		/* G45 ring initialization often fails to reset head to zero */
466475		DRM_DEBUG_KMS("%s head not reset to zero "
467476			      "ctl %08x head %08x tail %08x start %08x\n",
468477			      ring->name,
···480471			      I915_READ_TAIL(ring),
481472			      I915_READ_START(ring));
482473
483483-		I915_WRITE_HEAD(ring, 0);
484484-
485485-		if (I915_READ_HEAD(ring) & HEAD_ADDR) {
474474+		if (!stop_ring(ring)) {
486475			DRM_ERROR("failed to set %s head to zero "
487476				  "ctl %08x head %08x tail %08x start %08x\n",
488477				  ring->name,
···488481				  I915_READ_HEAD(ring),
489482				  I915_READ_TAIL(ring),
490483				  I915_READ_START(ring));
484484+			ret = -EIO;
485485+			goto out;
491486		}
492487	}
488488+
489489+	if (I915_NEED_GFX_HWS(dev))
490490+		intel_ring_setup_status_page(ring);
491491+	else
492492+		ring_setup_phys_status_page(ring);
493493
494494	/* Initialize the ring. This must happen _after_ we've cleared the ring
495495	 * registers with the above sequence (the readback of the HEAD registers
···510510				MDP4_DMA_CURSOR_BLEND_CONFIG_CURSOR_EN);
511511	} else {
512512		/* disable cursor: */
513513-		mdp4_write(mdp4_kms, REG_MDP4_DMA_CURSOR_BASE(dma), 0);
514514-		mdp4_write(mdp4_kms, REG_MDP4_DMA_CURSOR_BLEND_CONFIG(dma),
515515-			MDP4_DMA_CURSOR_BLEND_CONFIG_FORMAT(CURSOR_ARGB));
513513+		mdp4_write(mdp4_kms, REG_MDP4_DMA_CURSOR_BASE(dma),
514514+			mdp4_kms->blank_cursor_iova);
516515	}
517516
518517	/* and drop the iova ref + obj rev when done scanning out: */
···573574
574575	if (old_bo) {
575576		/* drop our previous reference: */
576576-		msm_gem_put_iova(old_bo, mdp4_kms->id);
577577-		drm_gem_object_unreference_unlocked(old_bo);
577577+		drm_flip_work_queue(&mdp4_crtc->unref_cursor_work, old_bo);
578578	}
579579
580580-	crtc_flush(crtc);
581580	request_pending(crtc, PENDING_CURSOR);
582581
583582	return 0;
+2-2
drivers/gpu/drm/msm/mdp/mdp4/mdp4_irq.c
···7070
7171	VERB("status=%08x", status);
7272
7373+	mdp_dispatch_irqs(mdp_kms, status);
7474+
7375	for (id = 0; id < priv->num_crtcs; id++)
7476		if (status & mdp4_crtc_vblank(priv->crtcs[id]))
7577			drm_handle_vblank(dev, id);
7676-
7777-	mdp_dispatch_irqs(mdp_kms, status);
7878
7979	return IRQ_HANDLED;
8080}
+21
drivers/gpu/drm/msm/mdp/mdp4/mdp4_kms.c
···144144static void mdp4_destroy(struct msm_kms *kms)
145145{
146146	struct mdp4_kms *mdp4_kms = to_mdp4_kms(to_mdp_kms(kms));
147147+	if (mdp4_kms->blank_cursor_iova)
148148+		msm_gem_put_iova(mdp4_kms->blank_cursor_bo, mdp4_kms->id);
149149+	if (mdp4_kms->blank_cursor_bo)
150150+		drm_gem_object_unreference(mdp4_kms->blank_cursor_bo);
147151	kfree(mdp4_kms);
148152}
149153
···373369	ret = modeset_init(mdp4_kms);
374370	if (ret) {
375371		dev_err(dev->dev, "modeset_init failed: %d\n", ret);
372372+		goto fail;
373373+	}
374374+
375375+	mutex_lock(&dev->struct_mutex);
376376+	mdp4_kms->blank_cursor_bo = msm_gem_new(dev, SZ_16K, MSM_BO_WC);
377377+	mutex_unlock(&dev->struct_mutex);
378378+	if (IS_ERR(mdp4_kms->blank_cursor_bo)) {
379379+		ret = PTR_ERR(mdp4_kms->blank_cursor_bo);
380380+		dev_err(dev->dev, "could not allocate blank-cursor bo: %d\n", ret);
381381+		mdp4_kms->blank_cursor_bo = NULL;
382382+		goto fail;
383383+	}
384384+
385385+	ret = msm_gem_get_iova(mdp4_kms->blank_cursor_bo, mdp4_kms->id,
386386+			&mdp4_kms->blank_cursor_iova);
387387+	if (ret) {
388388+		dev_err(dev->dev, "could not pin blank-cursor bo: %d\n", ret);
376389		goto fail;
377390	}
378391
+4
drivers/gpu/drm/msm/mdp/mdp4/mdp4_kms.h
···4444	struct clk *lut_clk;
4545
4646	struct mdp_irq error_handler;
4747+
4848+	/* empty/blank cursor bo to use when cursor is "disabled" */
4949+	struct drm_gem_object *blank_cursor_bo;
5050+	uint32_t blank_cursor_iova;
4751};
4852#define to_mdp4_kms(x) container_of(x, struct mdp4_kms, base)
4953
+2-2
drivers/gpu/drm/msm/mdp/mdp5/mdp5_irq.c
···7171
7272	VERB("status=%08x", status);
7373
7474+	mdp_dispatch_irqs(mdp_kms, status);
7575+
7476	for (id = 0; id < priv->num_crtcs; id++)
7577		if (status & mdp5_crtc_vblank(priv->crtcs[id]))
7678			drm_handle_vblank(dev, id);
7777-
7878-	mdp_dispatch_irqs(mdp_kms, status);
7979}
8080
8181irqreturn_t mdp5_irq(struct msm_kms *kms)
+1-4
drivers/gpu/drm/msm/msm_fbdev.c
···6262	dma_addr_t paddr;
6363	int ret, size;
6464
6565-	/* only doing ARGB32 since this is what is needed to alpha-blend
6666-	 * with video overlays:
6767-	 */
6865	sizes->surface_bpp = 32;
6969-	sizes->surface_depth = 32;
6666+	sizes->surface_depth = 24;
7067
7168	DBG("create fbdev: %dx%d@%d (%dx%d)", sizes->surface_width,
7269		sizes->surface_height, sizes->surface_bpp,
···12141214 SVGA3dCmdSurfaceDMA dma;12151215 } *cmd;12161216 int ret;12171217+ SVGA3dCmdSurfaceDMASuffix *suffix;12181218+ uint32_t bo_size;1217121912181220 cmd = container_of(header, struct vmw_dma_cmd, header);12211221+ suffix = (SVGA3dCmdSurfaceDMASuffix *)((unsigned long) &cmd->dma +12221222+ header->size - sizeof(*suffix));12231223+12241224+ /* Make sure device and verifier stays in sync. */12251225+ if (unlikely(suffix->suffixSize != sizeof(*suffix))) {12261226+ DRM_ERROR("Invalid DMA suffix size.\n");12271227+ return -EINVAL;12281228+ }12291229+12191230 ret = vmw_translate_guest_ptr(dev_priv, sw_context,12201231 &cmd->dma.guest.ptr,12211232 &vmw_bo);12221233 if (unlikely(ret != 0))12231234 return ret;12351235+12361236+ /* Make sure DMA doesn't cross BO boundaries. */12371237+ bo_size = vmw_bo->base.num_pages * PAGE_SIZE;12381238+ if (unlikely(cmd->dma.guest.ptr.offset > bo_size)) {12391239+ DRM_ERROR("Invalid DMA offset.\n");12401240+ return -EINVAL;12411241+ }12421242+12431243+ bo_size -= cmd->dma.guest.ptr.offset;12441244+ if (unlikely(suffix->maximumOffset > bo_size))12451245+ suffix->maximumOffset = bo_size;1224124612251247 ret = vmw_cmd_res_check(dev_priv, sw_context, vmw_res_surface,12261248 user_surface_converter, &cmd->dma.host.sid,
+2-2
drivers/hwmon/coretemp.c
···365365		if (cpu_has_tjmax(c))
366366			dev_warn(dev, "Unable to read TjMax from CPU %u\n", id);
367367	} else {
368368-		val = (eax >> 16) & 0x7f;
368368+		val = (eax >> 16) & 0xff;
369369		/*
370370		 * If the TjMax is not plausible, an assumption
371371		 * will be used
372372		 */
373373-		if (val >= 85) {
373373+		if (val) {
374374			dev_dbg(dev, "TjMax is %d degrees C\n", val);
375375			return val * 1000;
376376		}
+3-3
drivers/hwmon/ltc2945.c
···11-/*
11+ /*
22 * Driver for Linear Technology LTC2945 I2C Power Monitor
33 *
44 * Copyright (c) 2014 Guenter Roeck
···314314		reg = LTC2945_MAX_ADIN_H;
315315		break;
316316	default:
317317-		BUG();
318318-		break;
317317+		WARN_ONCE(1, "Bad register: 0x%x\n", reg);
318318+		return -EINVAL;
319319	}
320320	/* Reset maximum */
321321	ret = regmap_bulk_write(regmap, reg, buf_max, num_regs);
···165165	int ret;
166166	struct iio_dev *indio_dev = dev_to_iio_dev(dev);
167167
168168-	ret = test_bit(to_iio_dev_attr(attr)->address,
168168+	/* Ensure ret is 0 or 1. */
169169+	ret = !!test_bit(to_iio_dev_attr(attr)->address,
169170		       indio_dev->buffer->scan_mask);
170171
171172	return sprintf(buf, "%d\n", ret);
···863862	if (!buffer->scan_mask)
864863		return 0;
865864
866866-	return test_bit(bit, buffer->scan_mask);
865865+	/* Ensure return value is 0 or 1. */
866866+	return !!test_bit(bit, buffer->scan_mask);
867867};
868868EXPORT_SYMBOL_GPL(iio_scan_mask_query);
869869
+1
drivers/iio/light/cm32181.c
···221221		*val = cm32181->calibscale;
222222		return IIO_VAL_INT;
223223	case IIO_CHAN_INFO_INT_TIME:
224224+		*val = 0;
224225		ret = cm32181_read_als_it(cm32181, val2);
225226		return ret;
226227	}
+20-2
drivers/iio/light/cm36651.c
···652652 cm36651->client = client;653653 cm36651->ps_client = i2c_new_dummy(client->adapter,654654 CM36651_I2C_ADDR_PS);655655+ if (!cm36651->ps_client) {656656+ dev_err(&client->dev, "%s: new i2c device failed\n", __func__);657657+ ret = -ENODEV;658658+ goto error_disable_reg;659659+ }660660+655661 cm36651->ara_client = i2c_new_dummy(client->adapter, CM36651_ARA);662662+ if (!cm36651->ara_client) {663663+ dev_err(&client->dev, "%s: new i2c device failed\n", __func__);664664+ ret = -ENODEV;665665+ goto error_i2c_unregister_ps;666666+ }667667+656668 mutex_init(&cm36651->lock);657669 indio_dev->dev.parent = &client->dev;658670 indio_dev->channels = cm36651_channels;···676664 ret = cm36651_setup_reg(cm36651);677665 if (ret) {678666 dev_err(&client->dev, "%s: register setup failed\n", __func__);679679- goto error_disable_reg;667667+ goto error_i2c_unregister_ara;680668 }681669682670 ret = request_threaded_irq(client->irq, NULL, cm36651_irq_handler,···684672 "cm36651", indio_dev);685673 if (ret) {686674 dev_err(&client->dev, "%s: request irq failed\n", __func__);687687- goto error_disable_reg;675675+ goto error_i2c_unregister_ara;688676 }689677690678 ret = iio_device_register(indio_dev);···697685698686error_free_irq:699687 free_irq(client->irq, indio_dev);688688+error_i2c_unregister_ara:689689+ i2c_unregister_device(cm36651->ara_client);690690+error_i2c_unregister_ps:691691+ i2c_unregister_device(cm36651->ps_client);700692error_disable_reg:701693 regulator_disable(cm36651->vled_reg);702694 return ret;···714698 iio_device_unregister(indio_dev);715699 regulator_disable(cm36651->vled_reg);716700 free_irq(client->irq, indio_dev);701701+ i2c_unregister_device(cm36651->ps_client);702702+ i2c_unregister_device(cm36651->ara_client);717703718704 return 0;719705}
+3-3
drivers/infiniband/hw/cxgb4/Kconfig
···11config INFINIBAND_CXGB422- tristate "Chelsio T4 RDMA Driver"22+ tristate "Chelsio T4/T5 RDMA Driver"33 depends on CHELSIO_T4 && INET && (IPV6 || IPV6=n)44 select GENERIC_ALLOCATOR55 ---help---66- This is an iWARP/RDMA driver for the Chelsio T4 1GbE and77- 10GbE adapters.66+ This is an iWARP/RDMA driver for the Chelsio T4 and T577+ 1GbE and 10GbE adapters, and the T5 40GbE adapter.8899 For general information about Chelsio and our products, visit1010 our website at <http://www.chelsio.com>.
+28-11
drivers/infiniband/hw/cxgb4/cm.c
···587587 opt2 |= SACK_EN(1);588588 if (wscale && enable_tcp_window_scaling)589589 opt2 |= WND_SCALE_EN(1);590590+ if (is_t5(ep->com.dev->rdev.lldi.adapter_type)) {591591+ opt2 |= T5_OPT_2_VALID;592592+ opt2 |= V_CONG_CNTRL(CONG_ALG_TAHOE);593593+ }590594 t4_set_arp_err_handler(skb, NULL, act_open_req_arp_failure);591595592596 if (is_t4(ep->com.dev->rdev.lldi.adapter_type)) {···1000996static int abort_connection(struct c4iw_ep *ep, struct sk_buff *skb, gfp_t gfp)1001997{1002998 PDBG("%s ep %p tid %u\n", __func__, ep, ep->hwtid);10031003- state_set(&ep->com, ABORTING);999999+ __state_set(&ep->com, ABORTING);10041000 set_bit(ABORT_CONN, &ep->com.history);10051001 return send_abort(ep, skb, gfp);10061002}···11581154 return credits;11591155}1160115611611161-static void process_mpa_reply(struct c4iw_ep *ep, struct sk_buff *skb)11571157+static int process_mpa_reply(struct c4iw_ep *ep, struct sk_buff *skb)11621158{11631159 struct mpa_message *mpa;11641160 struct mpa_v2_conn_params *mpa_v2_params;···11681164 struct c4iw_qp_attributes attrs;11691165 enum c4iw_qp_attr_mask mask;11701166 int err;11671167+ int disconnect = 0;1171116811721169 PDBG("%s ep %p tid %u\n", __func__, ep, ep->hwtid);11731170···11781173 * will abort the connection.11791174 */11801175 if (stop_ep_timer(ep))11811181- return;11761176+ return 0;1182117711831178 /*11841179 * If we get more than the supported amount of private data···12001195 * if we don't even have the mpa message, then bail.12011196 */12021197 if (ep->mpa_pkt_len < sizeof(*mpa))12031203- return;11981198+ return 0;12041199 mpa = (struct mpa_message *) ep->mpa_pkt;1205120012061201 /* Validate MPA header. */···12401235 * We'll continue process when more data arrives.12411236 */12421237 if (ep->mpa_pkt_len < (sizeof(*mpa) + plen))12431243- return;12381238+ return 0;1244123912451240 if (mpa->flags & MPA_REJECT) {12461241 err = -ECONNREFUSED;···13421337 attrs.layer_etype = LAYER_MPA | DDP_LLP;13431338 attrs.ecode = MPA_NOMATCH_RTR;13441339 attrs.next_state = C4IW_QP_STATE_TERMINATE;13401340+ attrs.send_term = 1;13451341 err = c4iw_modify_qp(ep->com.qp->rhp, ep->com.qp,13461346- C4IW_QP_ATTR_NEXT_STATE, &attrs, 0);13421342+ C4IW_QP_ATTR_NEXT_STATE, &attrs, 1);13471343 err = -ENOMEM;13441344+ disconnect = 1;13481345 goto out;13491346 }13501347···13621355 attrs.layer_etype = LAYER_MPA | DDP_LLP;13631356 attrs.ecode = MPA_INSUFF_IRD;13641357 attrs.next_state = C4IW_QP_STATE_TERMINATE;13581358+ attrs.send_term = 1;13651359 err = c4iw_modify_qp(ep->com.qp->rhp, ep->com.qp,13661366- C4IW_QP_ATTR_NEXT_STATE, &attrs, 0);13601360+ C4IW_QP_ATTR_NEXT_STATE, &attrs, 1);13671361 err = -ENOMEM;13621362+ disconnect = 1;13681363 goto out;13691364 }13701365 goto out;···13751366 send_abort(ep, skb, GFP_KERNEL);13761367out:13771368 connect_reply_upcall(ep, err);13781378- return;13691369+ return disconnect;13791370}1380137113811372static void process_mpa_request(struct c4iw_ep *ep, struct sk_buff *skb)···15331524 unsigned int tid = GET_TID(hdr);15341525 struct tid_info *t = dev->rdev.lldi.tids;15351526 __u8 status = hdr->status;15271527+ int disconnect = 0;1536152815371529 ep = lookup_tid(t, tid);15381530 if (!ep)···15491539 switch (ep->com.state) {15501540 case MPA_REQ_SENT:15511541 ep->rcv_seq += dlen;15521552- process_mpa_reply(ep, skb);15421542+ disconnect = process_mpa_reply(ep, skb);15531543 break;15541544 case MPA_REQ_WAIT:15551545 ep->rcv_seq += dlen;···15651555 ep->com.state, ep->hwtid, status);15661556 attrs.next_state = C4IW_QP_STATE_TERMINATE;15671557 c4iw_modify_qp(ep->com.qp->rhp, ep->com.qp,15681568- C4IW_QP_ATTR_NEXT_STATE, &attrs, 0);15581558+ C4IW_QP_ATTR_NEXT_STATE, &attrs, 1);15591559+ disconnect = 1;15691560 break;15701561 }15711562 default:15721563 break;15731564 }15741565 mutex_unlock(&ep->com.mutex);15661566+ if (disconnect)15671567+ c4iw_ep_disconnect(ep, 0, GFP_KERNEL);15751568 return 0;15761569}15771570···20212008 G_IP_HDR_LEN(hlen);20222009 if (tcph->ece && tcph->cwr)20232010 opt2 |= CCTRL_ECN(1);20112011+ }20122012+ if (is_t5(ep->com.dev->rdev.lldi.adapter_type)) {20132013+ opt2 |= T5_OPT_2_VALID;20142014+ opt2 |= V_CONG_CNTRL(CONG_ALG_TAHOE);20242015 }2025201620262017 rpl = cplhdr(skb);···34993482 __func__, ep, ep->hwtid, ep->com.state);35003483 abort = 0;35013484 }35023502- mutex_unlock(&ep->com.mutex);35033485 if (abort)35043486 abort_connection(ep, NULL, GFP_KERNEL);34873487+ mutex_unlock(&ep->com.mutex);35053488 c4iw_put_ep(&ep->com);35063489}35073490
···232232 struct bio_list deferred_bio_list;233233 struct bio_list retry_on_resume_list;234234 struct rb_root sort_bio_list; /* sorted list of deferred bios */235235+236236+ /*237237+ * Ensures the thin is not destroyed until the worker has finished238238+ * iterating the active_thins list.239239+ */240240+ atomic_t refcount;241241+ struct completion can_destroy;235242};236243237244/*----------------------------------------------------------------*/···14931486 blk_finish_plug(&plug);14941487}1495148814891489+static void thin_get(struct thin_c *tc);14901490+static void thin_put(struct thin_c *tc);14911491+14921492+/*14931493+ * We can't hold rcu_read_lock() around code that can block. So we14941494+ * find a thin with the rcu lock held; bump a refcount; then drop14951495+ * the lock.14961496+ */14971497+static struct thin_c *get_first_thin(struct pool *pool)14981498+{14991499+ struct thin_c *tc = NULL;15001500+15011501+ rcu_read_lock();15021502+ if (!list_empty(&pool->active_thins)) {15031503+ tc = list_entry_rcu(pool->active_thins.next, struct thin_c, list);15041504+ thin_get(tc);15051505+ }15061506+ rcu_read_unlock();15071507+15081508+ return tc;15091509+}15101510+15111511+static struct thin_c *get_next_thin(struct pool *pool, struct thin_c *tc)15121512+{15131513+ struct thin_c *old_tc = tc;15141514+15151515+ rcu_read_lock();15161516+ list_for_each_entry_continue_rcu(tc, &pool->active_thins, list) {15171517+ thin_get(tc);15181518+ thin_put(old_tc);15191519+ rcu_read_unlock();15201520+ return tc;15211521+ }15221522+ thin_put(old_tc);15231523+ rcu_read_unlock();15241524+15251525+ return NULL;15261526+}15271527+14961528static void process_deferred_bios(struct pool *pool)14971529{14981530 unsigned long flags;···15391493 struct bio_list bios;15401494 struct thin_c *tc;1541149515421542- rcu_read_lock();15431543- list_for_each_entry_rcu(tc, &pool->active_thins, list)14961496+ tc = get_first_thin(pool);14971497+ while (tc) {15441498 process_thin_deferred_bios(tc);15451545- rcu_read_unlock();14991499+ tc = get_next_thin(pool, tc);15001500+ }1546150115471502 /*15481503 * If there are any deferred flush bios, we must commit···16251578{16261579 struct noflush_work w;1627158016281628- INIT_WORK(&w.worker, fn);15811581+ INIT_WORK_ONSTACK(&w.worker, fn);16291582 w.tc = tc;16301583 atomic_set(&w.complete, 0);16311584 init_waitqueue_head(&w.wait);···31083061/*----------------------------------------------------------------31093062 * Thin target methods31103063 *--------------------------------------------------------------*/30643064+static void thin_get(struct thin_c *tc)30653065+{30663066+ atomic_inc(&tc->refcount);30673067+}30683068+30693069+static void thin_put(struct thin_c *tc)30703070+{30713071+ if (atomic_dec_and_test(&tc->refcount))30723072+ complete(&tc->can_destroy);30733073+}30743074+31113075static void thin_dtr(struct dm_target *ti)31123076{31133077 struct thin_c *tc = ti->private;31143078 unsigned long flags;30793079+30803080+ thin_put(tc);30813081+ wait_for_completion(&tc->can_destroy);3115308231163083 spin_lock_irqsave(&tc->pool->lock, flags);31173084 list_del_rcu(&tc->list);···31623101 struct thin_c *tc;31633102 struct dm_dev *pool_dev, *origin_dev;31643103 struct mapped_device *pool_md;31043104+ unsigned long flags;3165310531663106 mutex_lock(&dm_thin_pool_table.mutex);31673107···3253319132543192 mutex_unlock(&dm_thin_pool_table.mutex);3255319332563256- spin_lock(&tc->pool->lock);31943194+ atomic_set(&tc->refcount, 1);31953195+ init_completion(&tc->can_destroy);31963196+31973197+ spin_lock_irqsave(&tc->pool->lock, flags);32573198 list_add_tail_rcu(&tc->list, &tc->pool->active_thins);32583258- spin_unlock(&tc->pool->lock);31993199+ spin_unlock_irqrestore(&tc->pool->lock, flags);32593200 /*32603201 * This synchronize_rcu() call is needed here otherwise we risk a32613202 * wake_worker() call finding no bios to process (because the newly
+9-6
drivers/md/dm-verity.c
···330330 return r;331331 }332332 }333333-334333 todo = 1 << v->data_dev_block_bits;335335- while (io->iter.bi_size) {334334+ do {336335 u8 *page;336336+ unsigned len;337337 struct bio_vec bv = bio_iter_iovec(bio, io->iter);338338339339 page = kmap_atomic(bv.bv_page);340340- r = crypto_shash_update(desc, page + bv.bv_offset,341341- bv.bv_len);340340+ len = bv.bv_len;341341+ if (likely(len >= todo))342342+ len = todo;343343+ r = crypto_shash_update(desc, page + bv.bv_offset, len);342344 kunmap_atomic(page);343345344346 if (r < 0) {···348346 return r;349347 }350348351351- bio_advance_iter(bio, &io->iter, bv.bv_len);352352- }349349+ bio_advance_iter(bio, &io->iter, len);350350+ todo -= len;351351+ } while (todo);353352354353 if (!v->version) {355354 r = crypto_shash_update(desc, v->salt, v->salt_size);
+27-1
drivers/of/irq.c
···364364365365 memset(r, 0, sizeof(*r));366366 /*367367- * Get optional "interrupts-names" property to add a name367367+ * Get optional "interrupt-names" property to add a name368368 * to the resource.369369 */370370 of_property_read_string_index(dev, "interrupt-names", index,···378378 return irq;379379}380380EXPORT_SYMBOL_GPL(of_irq_to_resource);381381+382382+/**383383+ * of_irq_get - Decode a node's IRQ and return it as a Linux irq number384384+ * @dev: pointer to device tree node385385+ * @index: zero-based index of the irq386386+ *387387+ * Returns Linux irq number on success, or -EPROBE_DEFER if the irq domain388388+ * is not yet created.389389+ *390390+ */391391+int of_irq_get(struct device_node *dev, int index)392392+{393393+ int rc;394394+ struct of_phandle_args oirq;395395+ struct irq_domain *domain;396396+397397+ rc = of_irq_parse_one(dev, index, &oirq);398398+ if (rc)399399+ return rc;400400+401401+ domain = irq_find_host(oirq.np);402402+ if (!domain)403403+ return -EPROBE_DEFER;404404+405405+ return irq_create_of_mapping(&oirq);406406+}381407382408/**383409 * of_irq_count - Count the number of IRQs a node uses
+3-1
drivers/of/platform.c
···168168 rc = of_address_to_resource(np, i, res);169169 WARN_ON(rc);170170 }171171- WARN_ON(of_irq_to_resource_table(np, res, num_irq) != num_irq);171171+ if (of_irq_to_resource_table(np, res, num_irq) != num_irq)172172+ pr_debug("not all legacy IRQ resources mapped for %s\n",173173+ np->name);172174 }173175174176 dev->dev.of_node = of_node_get(np);
+32
drivers/of/selftest.c
···1010#include <linux/module.h>1111#include <linux/of.h>1212#include <linux/of_irq.h>1313+#include <linux/of_platform.h>1314#include <linux/list.h>1415#include <linux/mutex.h>1516#include <linux/slab.h>···428427 }429428}430429430430+static void __init of_selftest_platform_populate(void)431431+{432432+ int irq;433433+ struct device_node *np;434434+ struct platform_device *pdev;435435+436436+ np = of_find_node_by_path("/testcase-data");437437+ of_platform_populate(np, of_default_bus_match_table, NULL, NULL);438438+439439+ /* Test that a missing irq domain returns -EPROBE_DEFER */440440+ np = of_find_node_by_path("/testcase-data/testcase-device1");441441+ pdev = of_find_device_by_node(np);442442+ if (!pdev)443443+ selftest(0, "device 1 creation failed\n");444444+ irq = platform_get_irq(pdev, 0);445445+ if (irq != -EPROBE_DEFER)446446+ selftest(0, "device deferred probe failed - %d\n", irq);447447+448448+ /* Test that a parsing failure does not return -EPROBE_DEFER */449449+ np = of_find_node_by_path("/testcase-data/testcase-device2");450450+ pdev = of_find_device_by_node(np);451451+ if (!pdev)452452+ selftest(0, "device 2 creation failed\n");453453+ irq = platform_get_irq(pdev, 0);454454+ if (irq >= 0 || irq == -EPROBE_DEFER)455455+ selftest(0, "device parsing error failed - %d\n", irq);456456+457457+ selftest(1, "passed");458458+}459459+431460static int __init of_selftest(void)432461{433462 struct device_node *np;···476445 of_selftest_parse_interrupts();477446 of_selftest_parse_interrupts_extended();478447 of_selftest_match_node();448448+ of_selftest_platform_populate();479449 pr_info("end of selftest - %i passed, %i failed\n",480450 selftest_results.passed, selftest_results.failed);481451 return 0;
···33333434config OMAP_CONTROL_PHY3535 tristate "OMAP CONTROL PHY Driver"3636+ depends on ARCH_OMAP2PLUS || COMPILE_TEST3637 help3738 Enable this to add support for the PHY part present in the control3839 module. This driver has API to power on the USB2 PHY and to write to
···6464 class_dev_iter_init(&iter, phy_class, NULL, NULL);6565 while ((dev = class_dev_iter_next(&iter))) {6666 phy = to_phy(dev);6767+6868+ if (!phy->init_data)6969+ continue;6770 count = phy->init_data->num_consumers;6871 consumers = phy->init_data->consumers;6972 while (count--) {
+11-6
drivers/pinctrl/pinctrl-as3722.c
···6464};65656666struct as3722_gpio_pin_control {6767- bool enable_gpio_invert;6867 unsigned mode_prop;6968 int io_function;7069};···319320 return mode;320321 }321322322322- if (as_pci->gpio_control[offset].enable_gpio_invert)323323- mode |= AS3722_GPIO_INV;324324-325325- return as3722_write(as3722, AS3722_GPIOn_CONTROL_REG(offset), mode);323323+ return as3722_update_bits(as3722, AS3722_GPIOn_CONTROL_REG(offset),324324+ AS3722_GPIO_MODE_MASK, mode);326325}327326328327static const struct pinmux_ops as3722_pinmux_ops = {···493496{494497 struct as3722_pctrl_info *as_pci = to_as_pci(chip);495498 struct as3722 *as3722 = as_pci->as3722;496496- int en_invert = as_pci->gpio_control[offset].enable_gpio_invert;499499+ int en_invert;497500 u32 val;498501 int ret;502502+503503+ ret = as3722_read(as3722, AS3722_GPIOn_CONTROL_REG(offset), &val);504504+ if (ret < 0) {505505+ dev_err(as_pci->dev,506506+ "GPIO_CONTROL%d_REG read failed: %d\n", offset, ret);507507+ return;508508+ }509509+ en_invert = !!(val & AS3722_GPIO_INV);499510500511 if (value)501512 val = (en_invert) ? 0 : AS3722_GPIOn_SIGNAL(offset);
+13
drivers/pinctrl/pinctrl-single.c
···810810static int pcs_add_pin(struct pcs_device *pcs, unsigned offset,811811 unsigned pin_pos)812812{813813+ struct pcs_soc_data *pcs_soc = &pcs->socdata;813814 struct pinctrl_pin_desc *pin;814815 struct pcs_name *pn;815816 int i;···820819 dev_err(pcs->dev, "too many pins, max %i\n",821820 pcs->desc.npins);822821 return -ENOMEM;822822+ }823823+824824+ if (pcs_soc->irq_enable_mask) {825825+ unsigned val;826826+827827+ val = pcs->read(pcs->base + offset);828828+ if (val & pcs_soc->irq_enable_mask) {829829+ dev_dbg(pcs->dev, "irq enabled at boot for pin at %lx (%x), clearing\n",830830+ (unsigned long)pcs->res->start + offset, val);831831+ val &= ~pcs_soc->irq_enable_mask;832832+ pcs->write(val, pcs->base + offset);833833+ }823834 }824835825836 pin = &pcs->pins.pa[i];
+1-2
drivers/pinctrl/pinctrl-tb10x.c
···629629 */630630 for (i = 0; i < state->pinfuncgrpcnt; i++) {631631 const struct tb10x_pinfuncgrp *pfg = &state->pingroups[i];632632- unsigned int port = pfg->port;633632 unsigned int mode = pfg->mode;634634- int j;633633+ int j, port = pfg->port;635634636635 /*637636 * Skip pin groups which are always mapped and don't need
···8383{8484 struct acpi_device *acpi_dev;8585 acpi_handle handle;8686- struct acpi_buffer buffer;8787- int ret;8686+ int ret = 0;88878988 pnp_dbg(&dev->dev, "set resources\n");9089···9697 if (WARN_ON_ONCE(acpi_dev != dev->data))9798 dev->data = acpi_dev;98999999- ret = pnpacpi_build_resource_template(dev, &buffer);100100- if (ret)101101- return ret;102102- ret = pnpacpi_encode_resources(dev, &buffer);103103- if (ret) {100100+ if (acpi_has_method(handle, METHOD_NAME__SRS)) {101101+ struct acpi_buffer buffer;102102+103103+ ret = pnpacpi_build_resource_template(dev, &buffer);104104+ if (ret)105105+ return ret;106106+107107+ ret = pnpacpi_encode_resources(dev, &buffer);108108+ if (!ret) {109109+ acpi_status status;110110+111111+ status = acpi_set_current_resources(handle, &buffer);112112+ if (ACPI_FAILURE(status))113113+ ret = -EIO;114114+ }104115 kfree(buffer.pointer);105105- return ret;106116 }107107- if (ACPI_FAILURE(acpi_set_current_resources(handle, &buffer)))108108- ret = -EINVAL;109109- else if (acpi_bus_power_manageable(handle))117117+ if (!ret && acpi_bus_power_manageable(handle))110118 ret = acpi_bus_set_power(handle, ACPI_STATE_D0);111111- kfree(buffer.pointer);119119+112120 return ret;113121}114122···123117{124118 struct acpi_device *acpi_dev;125119 acpi_handle handle;126126- int ret;120120+ acpi_status status;127121128122 dev_dbg(&dev->dev, "disable resources\n");129123···134128 }135129136130 /* acpi_unregister_gsi(pnp_irq(dev, 0)); */137137- ret = 0;138131 if (acpi_bus_power_manageable(handle))139132 acpi_bus_set_power(handle, ACPI_STATE_D3_COLD);140140- /* continue even if acpi_bus_set_power() fails */141141- if (ACPI_FAILURE(acpi_evaluate_object(handle, "_DIS", NULL, NULL)))142142- ret = -ENODEV;143143- return ret;133133+134134+ /* continue even if acpi_bus_set_power() fails */135135+ status = acpi_evaluate_object(handle, "_DIS", NULL, NULL);136136+ if (ACPI_FAILURE(status) && status != AE_NOT_FOUND)137137+ return -ENODEV;138138+139139+ return 0;144140}145141146142#ifdef CONFIG_ACPI_SLEEP
+79
drivers/pnp/quirks.c
···15151616#include <linux/types.h>1717#include <linux/kernel.h>1818+#include <linux/pci.h>1819#include <linux/string.h>1920#include <linux/slab.h>2021#include <linux/pnp.h>···335334}336335#endif337336337337+#ifdef CONFIG_PCI338338+/* Device IDs of parts that have 32KB MCH space */339339+static const unsigned int mch_quirk_devices[] = {340340+ 0x0154, /* Ivy Bridge */341341+ 0x0c00, /* Haswell */342342+};343343+344344+static struct pci_dev *get_intel_host(void)345345+{346346+ int i;347347+ struct pci_dev *host;348348+349349+ for (i = 0; i < ARRAY_SIZE(mch_quirk_devices); i++) {350350+ host = pci_get_device(PCI_VENDOR_ID_INTEL, mch_quirk_devices[i],351351+ NULL);352352+ if (host)353353+ return host;354354+ }355355+ return NULL;356356+}357357+358358+static void quirk_intel_mch(struct pnp_dev *dev)359359+{360360+ struct pci_dev *host;361361+ u32 addr_lo, addr_hi;362362+ struct pci_bus_region region;363363+ struct resource mch;364364+ struct pnp_resource *pnp_res;365365+ struct resource *res;366366+367367+ host = get_intel_host();368368+ if (!host)369369+ return;370370+371371+ /*372372+ * MCHBAR is not an architected PCI BAR, so MCH space is usually373373+ * reported as a PNP0C02 resource. The MCH space was originally374374+ * 16KB, but is 32KB in newer parts. Some BIOSes still report a375375+ * PNP0C02 resource that is only 16KB, which means the rest of the376376+ * MCH space is consumed but unreported.377377+ */378378+379379+ /*380380+ * Read MCHBAR for Host Member Mapped Register Range Base381381+ * https://www-ssl.intel.com/content/www/us/en/processors/core/4th-gen-core-family-desktop-vol-2-datasheet382382+ * Sec 3.1.12.383383+ */384384+ pci_read_config_dword(host, 0x48, &addr_lo);385385+ region.start = addr_lo & ~0x7fff;386386+ pci_read_config_dword(host, 0x4c, &addr_hi);387387+ region.start |= (u64) addr_hi << 32;388388+ region.end = region.start + 32*1024 - 1;389389+390390+ memset(&mch, 0, sizeof(mch));391391+ mch.flags = IORESOURCE_MEM;392392+ pcibios_bus_to_resource(host->bus, &mch, &region);393393+394394+ list_for_each_entry(pnp_res, &dev->resources, list) {395395+ res = &pnp_res->res;396396+ if (res->end < mch.start || res->start > mch.end)397397+ continue; /* no overlap */398398+ if (res->start == mch.start && res->end == mch.end)399399+ continue; /* exact match */400400+401401+ dev_info(&dev->dev, FW_BUG "PNP resource %pR covers only part of %s Intel MCH; extending to %pR\n",402402+ res, pci_name(host), &mch);403403+ res->start = mch.start;404404+ res->end = mch.end;405405+ break;406406+ }407407+408408+ pci_dev_put(host);409409+}410410+#endif411411+338412/*339413 * PnP Quirks340414 * Cards or devices that need some tweaking due to incomplete resource info···439363 {"PNP0c02", quirk_system_pci_resources},440364#ifdef CONFIG_AMD_NB441365 {"PNP0c01", quirk_amd_mmconfig_area},366366+#endif367367+#ifdef CONFIG_PCI368368+ {"PNP0c02", quirk_intel_mch},442369#endif443370 {""}444371};
···541541542542static void chsc_process_event_information(struct chsc_sei *sei, u64 ntsm)543543{544544- do {544544+ static int ntsm_unsupported;545545+546546+ while (true) {545547 memset(sei, 0, sizeof(*sei));546548 sei->request.length = 0x0010;547549 sei->request.code = 0x000e;548548- sei->ntsm = ntsm;550550+ if (!ntsm_unsupported)551551+ sei->ntsm = ntsm;549552550553 if (chsc(sei))551554 break;552555553556 if (sei->response.code != 0x0001) {554554- CIO_CRW_EVENT(2, "chsc: sei failed (rc=%04x)\n",555555- sei->response.code);557557+ CIO_CRW_EVENT(2, "chsc: sei failed (rc=%04x, ntsm=%llx)\n",558558+ sei->response.code, sei->ntsm);559559+560560+ if (sei->response.code == 3 && sei->ntsm) {561561+ /* Fallback for old firmware. */562562+ ntsm_unsupported = 1;563563+ continue;564564+ }556565 break;557566 }558567···577568 CIO_CRW_EVENT(2, "chsc: unhandled nt: %d\n", sei->nt);578569 break;579570 }580580- } while (sei->u.nt0_area.flags & 0x80);571571+572572+ if (!(sei->u.nt0_area.flags & 0x80))573573+ break;574574+ }581575}582576583577/*
+4-4
drivers/scsi/hpsa.c
···74637463 if (hpsa_simple_mode)74647464 return;7465746574667466+ trans_support = readl(&(h->cfgtable->TransportSupport));74677467+ if (!(trans_support & PERFORMANT_MODE))74687468+ return;74697469+74667470 /* Check for I/O accelerator mode support */74677471 if (trans_support & CFGTBL_Trans_io_accel1) {74687472 transMethod |= CFGTBL_Trans_io_accel1 |···74837479 }7484748074857481 /* TODO, check that this next line h->nreply_queues is correct */74867486- trans_support = readl(&(h->cfgtable->TransportSupport));74877487- if (!(trans_support & PERFORMANT_MODE))74887488- return;74897489-74907482 h->nreply_queues = h->msix_vector > 0 ? h->msix_vector : 1;74917483 hpsa_get_max_perf_mode_cmds(h);74927484 /* Performant mode ring buffer and supporting data structures */
+12
drivers/scsi/scsi_error.c
···189189 /*190190 * Retry after abort failed, escalate to next level.191191 */192192+ scmd->eh_eflags &= ~SCSI_EH_ABORT_SCHEDULED;192193 SCSI_LOG_ERROR_RECOVERY(3,193194 scmd_printk(KERN_INFO, scmd,194195 "scmd %p previous abort failed\n", scmd));···921920 ses->prot_op = scmd->prot_op;922921923922 scmd->prot_op = SCSI_PROT_NORMAL;923923+ scmd->eh_eflags = 0;924924 scmd->cmnd = ses->eh_cmnd;925925 memset(scmd->cmnd, 0, BLK_MAX_CDB);926926 memset(&scmd->sdb, 0, sizeof(scmd->sdb));927927 scmd->request->next_rq = NULL;928928+ scmd->result = 0;928929929930 if (sense_bytes) {930931 scmd->sdb.length = min_t(unsigned, SCSI_SENSE_BUFFERSIZE,···11601157 __func__));11611158 break;11621159 }11601160+ if (status_byte(scmd->result) != CHECK_CONDITION)11611161+ /*11621162+ * don't request sense if there's no check condition11631163+ * status because the error we're processing isn't one11641164+ * that has a sense code (and some devices get11651165+ * confused by sense requests out of the blue)11661166+ */11671167+ continue;11681168+11631169 SCSI_LOG_ERROR_RECOVERY(2, scmd_printk(KERN_INFO, scmd,11641170 "%s: requesting sense\n",11651171 current->comm));
···244244 return -ENOMEM;245245 }246246247247- clk = clk_get(NULL, "shyway_clk");247247+ clk = clk_get(&pdev->dev, NULL);248248 if (IS_ERR(clk)) {249249- dev_err(&pdev->dev, "shyway_clk is required\n");249249+ dev_err(&pdev->dev, "couldn't get clock\n");250250 ret = -EINVAL;251251 goto error0;252252 }
+17-3
drivers/spi/spi-sirf.c
···287287 sspi->left_rx_word)288288 sspi->rx_word(sspi);289289290290- if (spi_stat & (SIRFSOC_SPI_FIFO_EMPTY291291- | SIRFSOC_SPI_TXFIFO_THD_REACH))290290+ if (spi_stat & (SIRFSOC_SPI_TXFIFO_EMPTY |291291+ SIRFSOC_SPI_TXFIFO_THD_REACH))292292 while (!((readl(sspi->base + SIRFSOC_SPI_TXFIFO_STATUS)293293 & SIRFSOC_SPI_FIFO_FULL)) &&294294 sspi->left_tx_word)···470470 writel(regval, sspi->base + SIRFSOC_SPI_CTRL);471471 } else {472472 int gpio = sspi->chipselect[spi->chip_select];473473- gpio_direction_output(gpio, spi->mode & SPI_CS_HIGH ? 0 : 1);473473+ switch (value) {474474+ case BITBANG_CS_ACTIVE:475475+ gpio_direction_output(gpio,476476+ spi->mode & SPI_CS_HIGH ? 1 : 0);477477+ break;478478+ case BITBANG_CS_INACTIVE:479479+ gpio_direction_output(gpio,480480+ spi->mode & SPI_CS_HIGH ? 0 : 1);481481+ break;482482+ }474483 }475484}476485···568559 regval &= ~SIRFSOC_SPI_CMD_MODE;569560 sspi->tx_by_cmd = false;570561 }562562+ /*563563+ * set spi controller in RISC chipselect mode, we are controlling CS by564564+ * software BITBANG_CS_ACTIVE and BITBANG_CS_INACTIVE.565565+ */566566+ regval |= SIRFSOC_SPI_CS_IO_MODE;571567 writel(regval, sspi->base + SIRFSOC_SPI_CTRL);572568573569 if (IS_DMA_VALID(t)) {
+3-6
drivers/staging/comedi/drivers/usbdux.c
···493493 /* pointer to the DA */494494 *datap++ = val & 0xff;495495 *datap++ = (val >> 8) & 0xff;496496- *datap++ = chan;496496+ *datap++ = chan << 6;497497 devpriv->ao_readback[chan] = val;498498499499 s->async->events |= COMEDI_CB_BLOCK;···10401040 /* set current channel of the running acquisition to zero */10411041 s->async->cur_chan = 0;1042104210431043- for (i = 0; i < cmd->chanlist_len; ++i) {10441044- unsigned int chan = CR_CHAN(cmd->chanlist[i]);10451045-10461046- devpriv->ao_chanlist[i] = chan << 6;10471047- }10431043+ for (i = 0; i < cmd->chanlist_len; ++i)10441044+ devpriv->ao_chanlist[i] = CR_CHAN(cmd->chanlist[i]);1048104510491046 /* we count in steps of 1ms (125us) */10501047 /* 125us mode not used yet */
+1-1
drivers/staging/iio/adc/mxs-lradc.c
···15261526 struct resource *iores;15271527 int ret = 0, touch_ret;15281528 int i, s;15291529- unsigned int scale_uv;15291529+ uint64_t scale_uv;1530153015311531 /* Allocate the IIO device. */15321532 iio = devm_iio_device_alloc(dev, sizeof(*lradc));
+1
drivers/staging/iio/resolver/ad2s1200.c
···7070 vel = (((s16)(st->rx[0])) << 4) | ((st->rx[1] & 0xF0) >> 4);7171 vel = (vel << 4) >> 4;7272 *val = vel;7373+ break;7374 default:7475 mutex_unlock(&st->lock);7576 return -EINVAL;
+1-1
drivers/tty/serial/8250/8250_core.c
···15201520 status = serial8250_rx_chars(up, status);15211521 }15221522 serial8250_modem_status(up);15231523- if (status & UART_LSR_THRE)15231523+ if (!up->dma && (status & UART_LSR_THRE))15241524 serial8250_tx_chars(up);1525152515261526 spin_unlock_irqrestore(&port->lock, flags);
···14461446static void s3c24xx_serial_put_poll_char(struct uart_port *port,14471447 unsigned char c)14481448{14491449- unsigned int ufcon = rd_regl(cons_uart, S3C2410_UFCON);14501450- unsigned int ucon = rd_regl(cons_uart, S3C2410_UCON);14491449+ unsigned int ufcon = rd_regl(port, S3C2410_UFCON);14501450+ unsigned int ucon = rd_regl(port, S3C2410_UCON);1451145114521452 /* not possible to xmit on unconfigured port */14531453 if (!s3c24xx_port_configured(ucon))···1455145514561456 while (!s3c24xx_serial_console_txrdy(port, ufcon))14571457 cpu_relax();14581458- wr_regb(cons_uart, S3C2410_UTXH, c);14581458+ wr_regb(port, S3C2410_UTXH, c);14591459}1460146014611461#endif /* CONFIG_CONSOLE_POLL */···14631463static void14641464s3c24xx_serial_console_putchar(struct uart_port *port, int ch)14651465{14661466- unsigned int ufcon = rd_regl(cons_uart, S3C2410_UFCON);14671467- unsigned int ucon = rd_regl(cons_uart, S3C2410_UCON);14681468-14691469- /* not possible to xmit on unconfigured port */14701470- if (!s3c24xx_port_configured(ucon))14711471- return;14661466+ unsigned int ufcon = rd_regl(port, S3C2410_UFCON);1472146714731468 while (!s3c24xx_serial_console_txrdy(port, ufcon))14741474- barrier();14751475- wr_regb(cons_uart, S3C2410_UTXH, ch);14691469+ cpu_relax();14701470+ wr_regb(port, S3C2410_UTXH, ch);14761471}1477147214781473static void14791474s3c24xx_serial_console_write(struct console *co, const char *s,14801475 unsigned int count)14811476{14771477+ unsigned int ucon = rd_regl(cons_uart, S3C2410_UCON);14781478+14791479+ /* not possible to xmit on unconfigured port */14801480+ if (!s3c24xx_port_configured(ucon))14811481+ return;14821482+14821483 uart_console_write(cons_uart, s, count, s3c24xx_serial_console_putchar);14831484}14841485
+21-18
drivers/tty/serial/serial_core.c
···137137 return 1;138138139139 /*140140+ * Make sure the device is in D0 state.141141+ */142142+ uart_change_pm(state, UART_PM_STATE_ON);143143+144144+ /*140145 * Initialise and allocate the transmit and temporary141146 * buffer.142147 */···830825 * If we fail to request resources for the831826 * new port, try to restore the old settings.832827 */833833- if (retval && old_type != PORT_UNKNOWN) {828828+ if (retval) {834829 uport->iobase = old_iobase;835830 uport->type = old_type;836831 uport->hub6 = old_hub6;837832 uport->iotype = old_iotype;838833 uport->regshift = old_shift;839834 uport->mapbase = old_mapbase;840840- retval = uport->ops->request_port(uport);841841- /*842842- * If we failed to restore the old settings,843843- * we fail like this.844844- */845845- if (retval)846846- uport->type = PORT_UNKNOWN;847835848848- /*849849- * We failed anyway.850850- */851851- retval = -EBUSY;836836+ if (old_type != PORT_UNKNOWN) {837837+ retval = uport->ops->request_port(uport);838838+ /*839839+ * If we failed to restore the old settings,840840+ * we fail like this.841841+ */842842+ if (retval)843843+ uport->type = PORT_UNKNOWN;844844+845845+ /*846846+ * We failed anyway.847847+ */848848+ retval = -EBUSY;849849+ }850850+852851 /* Added to return the correct error -Ram Gupta */853852 goto exit;854853 }···15781569 retval = -EAGAIN;15791570 goto err_dec_count;15801571 }15811581-15821582- /*15831583- * Make sure the device is in D0 state.15841584- */15851585- if (port->count == 1)15861586- uart_change_pm(state, UART_PM_STATE_ON);1587157215881573 /*15891574 * Start up the serial port.
+14-2
drivers/tty/tty_buffer.c
···255255 if (change || left < size) {256256 /* This is the slow path - looking for new buffers to use */257257 if ((n = tty_buffer_alloc(port, size)) != NULL) {258258+ unsigned long iflags;259259+258260 n->flags = flags;259261 buf->tail = n;262262+263263+ spin_lock_irqsave(&buf->flush_lock, iflags);260264 b->commit = b->used;261261- smp_mb();262265 b->next = n;266266+ spin_unlock_irqrestore(&buf->flush_lock, iflags);267267+263268 } else if (change)264269 size = 0;265270 else···448443 mutex_lock(&buf->lock);449444450445 while (1) {446446+ unsigned long flags;451447 struct tty_buffer *head = buf->head;452448 int count;453449···456450 if (atomic_read(&buf->priority))457451 break;458452453453+ spin_lock_irqsave(&buf->flush_lock, flags);459454 count = head->commit - head->read;460455 if (!count) {461461- if (head->next == NULL)456456+ if (head->next == NULL) {457457+ spin_unlock_irqrestore(&buf->flush_lock, flags);462458 break;459459+ }463460 buf->head = head->next;461461+ spin_unlock_irqrestore(&buf->flush_lock, flags);464462 tty_buffer_free(port, head);465463 continue;466464 }465465+ spin_unlock_irqrestore(&buf->flush_lock, flags);467466468467 count = receive_buf(tty, head, count);469468 if (!count)···523512 struct tty_bufhead *buf = &port->buf;524513525514 mutex_init(&buf->lock);515515+ spin_lock_init(&buf->flush_lock);526516 tty_buffer_reset(&buf->sentinel, 0);527517 buf->head = &buf->sentinel;528518 buf->tail = &buf->sentinel;
+34 -3
drivers/usb/chipidea/core.c
···
 }

 /**
+ * ci_usb_phy_init: initialize phy according to different phy type
+ * @ci: the controller
+ *
+ * This function returns an error code if usb_phy_init has failed
+ */
+static int ci_usb_phy_init(struct ci_hdrc *ci)
+{
+	int ret;
+
+	switch (ci->platdata->phy_mode) {
+	case USBPHY_INTERFACE_MODE_UTMI:
+	case USBPHY_INTERFACE_MODE_UTMIW:
+	case USBPHY_INTERFACE_MODE_HSIC:
+		ret = usb_phy_init(ci->transceiver);
+		if (ret)
+			return ret;
+		hw_phymode_configure(ci);
+		break;
+	case USBPHY_INTERFACE_MODE_ULPI:
+	case USBPHY_INTERFACE_MODE_SERIAL:
+		hw_phymode_configure(ci);
+		ret = usb_phy_init(ci->transceiver);
+		if (ret)
+			return ret;
+		break;
+	default:
+		ret = usb_phy_init(ci->transceiver);
+	}
+
+	return ret;
+}
+
+/**
  * hw_device_reset: resets chip (execute without interruption)
  * @ci: the controller
  *
···
 		return -ENODEV;
 	}

-	hw_phymode_configure(ci);
-
 	if (ci->platdata->phy)
 		ci->transceiver = ci->platdata->phy;
 	else
···
 		return -EPROBE_DEFER;
 	}

-	ret = usb_phy_init(ci->transceiver);
+	ret = ci_usb_phy_init(ci);
 	if (ret) {
 		dev_err(dev, "unable to init phy: %d\n", ret);
 		return ret;
+1 -1
drivers/usb/dwc3/core.c
···

 	spin_lock_irqsave(&dwc->lock, flags);

+	dwc3_event_buffers_setup(dwc);
 	switch (dwc->dr_mode) {
 	case USB_DR_MODE_PERIPHERAL:
 	case USB_DR_MODE_OTG:
···
 		/* FALLTHROUGH */
 	case USB_DR_MODE_HOST:
 	default:
-		dwc3_event_buffers_setup(dwc);
 		break;
 	}
+4 -8
drivers/usb/dwc3/gadget.c
···
 	 * improve this algorithm so that we better use the internal
 	 * FIFO space
 	 */
-	for (num = 0; num < DWC3_ENDPOINTS_NUM; num++) {
-		struct dwc3_ep	*dep = dwc->eps[num];
-		int		fifo_number = dep->number >> 1;
+	for (num = 0; num < dwc->num_in_eps; num++) {
+		/* bit0 indicates direction; 1 means IN ep */
+		struct dwc3_ep	*dep = dwc->eps[(num << 1) | 1];
 		int		mult = 1;
 		int		tmp;
-
-		if (!(dep->number & 1))
-			continue;

 		if (!(dep->flags & DWC3_EP_ENABLED))
 			continue;
···
 		dev_vdbg(dwc->dev, "%s: Fifo Addr %04x Size %d\n",
 				dep->name, last_fifo_depth, fifo_size & 0xffff);

-		dwc3_writel(dwc->regs, DWC3_GTXFIFOSIZ(fifo_number),
-				fifo_size);
+		dwc3_writel(dwc->regs, DWC3_GTXFIFOSIZ(num), fifo_size);

 		last_fifo_depth += (fifo_size & 0xffff);
 	}
+7
drivers/usb/gadget/f_fs.c
···
 		 */
 		struct usb_gadget *gadget = epfile->ffs->gadget;

+		spin_lock_irq(&epfile->ffs->eps_lock);
+		/* In the meantime, endpoint got disabled or changed. */
+		if (epfile->ep != ep) {
+			spin_unlock_irq(&epfile->ffs->eps_lock);
+			return -ESHUTDOWN;
+		}
 		/*
 		 * Controller may require buffer size to be aligned to
 		 * maxpacketsize of an out endpoint.
···
 		data_len = io_data->read ?
 			usb_ep_align_maybe(gadget, ep->ep, io_data->len) :
 			io_data->len;
+		spin_unlock_irq(&epfile->ffs->eps_lock);

 		data = kmalloc(data_len, GFP_KERNEL);
 		if (unlikely(!data))
···
 	struct xhci_ring *ep_ring;
 	struct xhci_generic_trb *trb;
 	dma_addr_t addr;
+	u64 hw_dequeue;

 	ep_ring = xhci_triad_to_transfer_ring(xhci, slot_id,
 			ep_index, stream_id);
···
 		xhci_warn(xhci, "WARN can't find new dequeue state "
 				"for invalid stream ID %u.\n",
 				stream_id);
-		return;
-	}
-	state->new_cycle_state = 0;
-	xhci_dbg_trace(xhci, trace_xhci_dbg_cancel_urb,
-			"Finding segment containing stopped TRB.");
-	state->new_deq_seg = find_trb_seg(cur_td->start_seg,
-			dev->eps[ep_index].stopped_trb,
-			&state->new_cycle_state);
-	if (!state->new_deq_seg) {
-		WARN_ON(1);
 		return;
 	}
···
 	if (ep->ep_state & EP_HAS_STREAMS) {
 		struct xhci_stream_ctx *ctx =
 			&ep->stream_info->stream_ctx_array[stream_id];
-		state->new_cycle_state = 0x1 & le64_to_cpu(ctx->stream_ring);
+		hw_dequeue = le64_to_cpu(ctx->stream_ring);
 	} else {
 		struct xhci_ep_ctx *ep_ctx
 			= xhci_get_ep_ctx(xhci, dev->out_ctx, ep_index);
-		state->new_cycle_state = 0x1 & le64_to_cpu(ep_ctx->deq);
+		hw_dequeue = le64_to_cpu(ep_ctx->deq);
 	}
+
+	/* Find virtual address and segment of hardware dequeue pointer */
+	state->new_deq_seg = ep_ring->deq_seg;
+	state->new_deq_ptr = ep_ring->dequeue;
+	while (xhci_trb_virt_to_dma(state->new_deq_seg, state->new_deq_ptr)
+			!= (dma_addr_t)(hw_dequeue & ~0xf)) {
+		next_trb(xhci, ep_ring, &state->new_deq_seg,
+					&state->new_deq_ptr);
+		if (state->new_deq_ptr == ep_ring->dequeue) {
+			WARN_ON(1);
+			return;
+		}
+	}
+	/*
+	 * Find cycle state for last_trb, starting at old cycle state of
+	 * hw_dequeue. If there is only one segment ring, find_trb_seg() will
+	 * return immediately and cannot toggle the cycle state if this search
+	 * wraps around, so add one more toggle manually in that case.
+	 */
+	state->new_cycle_state = hw_dequeue & 0x1;
+	if (ep_ring->first_seg == ep_ring->first_seg->next &&
+			cur_td->last_trb < state->new_deq_ptr)
+		state->new_cycle_state ^= 0x1;

 	state->new_deq_ptr = cur_td->last_trb;
 	xhci_dbg_trace(xhci, trace_xhci_dbg_cancel_urb,
 			"Finding segment containing last TRB in TD.");
 	state->new_deq_seg = find_trb_seg(state->new_deq_seg,
-			state->new_deq_ptr,
-			&state->new_cycle_state);
+			state->new_deq_ptr, &state->new_cycle_state);
 	if (!state->new_deq_seg) {
 		WARN_ON(1);
 		return;
 	}

+	/* Increment to find next TRB after last_trb. Cycle if appropriate. */
 	trb = &state->new_deq_ptr->generic;
 	if (TRB_TYPE_LINK_LE32(trb->field[3]) &&
 	    (trb->field[3] & cpu_to_le32(LINK_TOGGLE)))
 		state->new_cycle_state ^= 0x1;
 	next_trb(xhci, ep_ring, &state->new_deq_seg, &state->new_deq_ptr);

-	/*
-	 * If there is only one segment in a ring, find_trb_seg()'s while loop
-	 * will not run, and it will return before it has a chance to see if it
-	 * needs to toggle the cycle bit. It can't tell if the stalled transfer
-	 * ended just before the link TRB on a one-segment ring, or if the TD
-	 * wrapped around the top of the ring, because it doesn't have the TD in
-	 * question. Look for the one-segment case where stalled TRB's address
-	 * is greater than the new dequeue pointer address.
-	 */
-	if (ep_ring->first_seg == ep_ring->first_seg->next &&
-			state->new_deq_ptr < dev->eps[ep_index].stopped_trb)
-		state->new_cycle_state ^= 0x1;
+	/* Don't update the ring cycle state for the producer (us). */
 	xhci_dbg_trace(xhci, trace_xhci_dbg_cancel_urb,
 			"Cycle state = 0x%x", state->new_cycle_state);

-	/* Don't update the ring cycle state for the producer (us). */
 	xhci_dbg_trace(xhci, trace_xhci_dbg_cancel_urb,
 			"New dequeue segment = %p (virtual)",
 			state->new_deq_seg);
···
 	if (list_empty(&ep->cancelled_td_list)) {
 		xhci_stop_watchdog_timer_in_irq(xhci, ep);
 		ep->stopped_td = NULL;
-		ep->stopped_trb = NULL;
 		ring_doorbell_for_active_rings(xhci, slot_id, ep_index);
 		return;
 	}
···
 		ring_doorbell_for_active_rings(xhci, slot_id, ep_index);
 	}

-	/* Clear stopped_td and stopped_trb if endpoint is not halted */
-	if (!(ep->ep_state & EP_HALTED)) {
+	/* Clear stopped_td if endpoint is not halted */
+	if (!(ep->ep_state & EP_HALTED))
 		ep->stopped_td = NULL;
-		ep->stopped_trb = NULL;
-	}

 	/*
 	 * Drop the lock and complete the URBs in the cancelled TD list.
···
 	struct xhci_virt_ep *ep = &xhci->devs[slot_id]->eps[ep_index];
 	ep->ep_state |= EP_HALTED;
 	ep->stopped_td = td;
-	ep->stopped_trb = event_trb;
 	ep->stopped_stream = stream_id;

 	xhci_queue_reset_ep(xhci, slot_id, ep_index);
 	xhci_cleanup_stalled_ring(xhci, td->urb->dev, ep_index);

 	ep->stopped_td = NULL;
-	ep->stopped_trb = NULL;
 	ep->stopped_stream = 0;

 	xhci_ring_cmd_db(xhci);
···
 		 * the ring dequeue pointer or take this TD off any lists yet.
 		 */
 		ep->stopped_td = td;
-		ep->stopped_trb = event_trb;
 		return 0;
 	} else {
 		if (trb_comp_code == COMP_STALL) {
···
 			 * USB class driver clear the stall later.
 			 */
 			ep->stopped_td = td;
-			ep->stopped_trb = event_trb;
 			ep->stopped_stream = ep_ring->stream_id;
 		} else if (xhci_requires_manual_halt_cleanup(xhci,
 					ep_ctx, trb_comp_code)) {
···
 #define EP_GETTING_NO_STREAMS	(1 << 5)
 	/* ---- Related to URB cancellation ---- */
 	struct list_head	cancelled_td_list;
-	/* The TRB that was last reported in a stopped endpoint ring */
-	union xhci_trb		*stopped_trb;
 	struct xhci_td		*stopped_td;
 	unsigned int		stopped_stream;
 	/* Watchdog timer for stop endpoint command to cancel URBs */
···
 #include <linux/err.h>
 #include <linux/of.h>
 #include <linux/io.h>
+#include <linux/delay.h>
 #include "am35x-phy-control.h"

 struct am335x_control_usb {
···
 	}

 	writel(val, usb_ctrl->phy_reg + reg);
+
+	/*
+	 * Give the PHY ~1ms to complete the power up operation.
+	 * Tests have shown unstable behaviour if other USB PHY related
+	 * registers are written too shortly after such a transition.
+	 */
+	if (on)
+		mdelay(1);
 }

 static const struct phy_control ctrl_am335x = {
+3
drivers/usb/phy/phy.c
···
 	if (IS_ERR(phy) || !try_module_get(phy->dev->driver->owner)) {
 		pr_debug("PHY: unable to find transceiver of type %s\n",
 			usb_phy_type_string(type));
+		if (!IS_ERR(phy))
+			phy = ERR_PTR(-ENODEV);
+
 		goto err0;
 	}
+33 -17
drivers/usb/serial/io_ti.c
···
 #include <linux/spinlock.h>
 #include <linux/mutex.h>
 #include <linux/serial.h>
+#include <linux/swab.h>
 #include <linux/kfifo.h>
 #include <linux/ioctl.h>
 #include <linux/firmware.h>
···
 {
 	int status = 0;
 	__u8 read_length;
-	__be16 be_start_address;
+	u16 be_start_address;

 	dev_dbg(&dev->dev, "%s - @ %x for %d\n", __func__, start_address, length);
···
 	if (read_length > 1) {
 		dev_dbg(&dev->dev, "%s - @ %x for %d\n", __func__, start_address, read_length);
 	}
-	be_start_address = cpu_to_be16(start_address);
+	/*
+	 * NOTE: Must use swab as wIndex is sent in little-endian
+	 * byte order regardless of host byte order.
+	 */
+	be_start_address = swab16((u16)start_address);
 	status = ti_vread_sync(dev, UMPC_MEMORY_READ,
 				(__u16)address_type,
-				(__force __u16)be_start_address,
+				be_start_address,
 				buffer, read_length);

 	if (status) {
···
 	struct device *dev = &serial->serial->dev->dev;
 	int status = 0;
 	int write_length;
-	__be16 be_start_address;
+	u16 be_start_address;

 	/* We can only send a maximum of 1 aligned byte page at a time */
···
 		__func__, start_address, write_length);
 	usb_serial_debug_data(dev, __func__, write_length, buffer);

-	/* Write first page */
-	be_start_address = cpu_to_be16(start_address);
+	/*
+	 * Write first page.
+	 *
+	 * NOTE: Must use swab as wIndex is sent in little-endian byte order
+	 * regardless of host byte order.
+	 */
+	be_start_address = swab16((u16)start_address);
 	status = ti_vsend_sync(serial->serial->dev,
 				UMPC_MEMORY_WRITE, (__u16)address_type,
-				(__force __u16)be_start_address,
+				be_start_address,
 				buffer, write_length);
 	if (status) {
 		dev_dbg(dev, "%s - ERROR %d\n", __func__, status);
···
 			__func__, start_address, write_length);
 		usb_serial_debug_data(dev, __func__, write_length, buffer);

-		/* Write next page */
-		be_start_address = cpu_to_be16(start_address);
+		/*
+		 * Write next page.
+		 *
+		 * NOTE: Must use swab as wIndex is sent in little-endian byte
+		 * order regardless of host byte order.
+		 */
+		be_start_address = swab16((u16)start_address);
 		status = ti_vsend_sync(serial->serial->dev, UMPC_MEMORY_WRITE,
 					(__u16)address_type,
-					(__force __u16)be_start_address,
+					be_start_address,
 					buffer, write_length);
 		if (status) {
 			dev_err(dev, "%s - ERROR %d\n", __func__, status);
···
 		if (rom_desc->Type == desc_type)
 			return start_address;

-		start_address = start_address + sizeof(struct ti_i2c_desc)
-							+ rom_desc->Size;
+		start_address = start_address + sizeof(struct ti_i2c_desc) +
+						le16_to_cpu(rom_desc->Size);

 	} while ((start_address < TI_MAX_I2C_SIZE) && rom_desc->Type);
···
 	__u16 i;
 	__u8 cs = 0;

-	for (i = 0; i < rom_desc->Size; i++)
+	for (i = 0; i < le16_to_cpu(rom_desc->Size); i++)
 		cs = (__u8)(cs + buffer[i]);

 	if (cs != rom_desc->CheckSum) {
···
 			break;

 		if ((start_address + sizeof(struct ti_i2c_desc) +
-					rom_desc->Size) > TI_MAX_I2C_SIZE) {
+			le16_to_cpu(rom_desc->Size)) > TI_MAX_I2C_SIZE) {
 			status = -ENODEV;
 			dev_dbg(dev, "%s - structure too big, erroring out.\n", __func__);
 			break;
···
 		/* Read the descriptor data */
 		status = read_rom(serial, start_address +
 					sizeof(struct ti_i2c_desc),
-					rom_desc->Size, buffer);
+					le16_to_cpu(rom_desc->Size),
+					buffer);
 		if (status)
 			break;
···
 			break;
 		}
 		start_address = start_address + sizeof(struct ti_i2c_desc) +
-							rom_desc->Size;
+						le16_to_cpu(rom_desc->Size);

 	} while ((rom_desc->Type != I2C_DESC_TYPE_ION) &&
 			(start_address < TI_MAX_I2C_SIZE));
···

 	/* Read the descriptor data */
 	status = read_rom(serial, start_address+sizeof(struct ti_i2c_desc),
-					rom_desc->Size, buffer);
+					le16_to_cpu(rom_desc->Size), buffer);
 	if (status)
 		goto exit;
···

 	if (chid)
 		result = uwb_radio_start(&wusbhc->pal);
-	else
+	else if (wusbhc->uwb_rc)
 		uwb_radio_stop(&wusbhc->pal);

 	return result;
···

 	struct work_struct	free_work;

+	/*
+	 * signals when all in-flight requests are done
+	 */
+	struct completion *requests_done;
+
 	struct {
 		/*
 		 * This counts the number of available slots in the ringbuffer,
···
 {
 	struct kioctx *ctx = container_of(ref, struct kioctx, reqs);

+	/* At this point we know that there are no any in-flight requests */
+	if (ctx->requests_done)
+		complete(ctx->requests_done);
+
 	INIT_WORK(&ctx->free_work, free_ioctx);
 	schedule_work(&ctx->free_work);
 }
···
  * when the processes owning a context have all exited to encourage
  * the rapid destruction of the kioctx.
  */
-static void kill_ioctx(struct mm_struct *mm, struct kioctx *ctx)
+static void kill_ioctx(struct mm_struct *mm, struct kioctx *ctx,
+		struct completion *requests_done)
 {
 	if (!atomic_xchg(&ctx->dead, 1)) {
 		struct kioctx_table *table;
···
 		if (ctx->mmap_size)
 			vm_munmap(ctx->mmap_base, ctx->mmap_size);

+		ctx->requests_done = requests_done;
 		percpu_ref_kill(&ctx->users);
+	} else {
+		if (requests_done)
+			complete(requests_done);
 	}
 }
···
 		 */
 		ctx->mmap_size = 0;

-		kill_ioctx(mm, ctx);
+		kill_ioctx(mm, ctx, NULL);
 	}
 }
···
 	if (!IS_ERR(ioctx)) {
 		ret = put_user(ioctx->user_id, ctxp);
 		if (ret)
-			kill_ioctx(current->mm, ioctx);
+			kill_ioctx(current->mm, ioctx, NULL);
 		percpu_ref_put(&ioctx->users);
 	}
···
 {
 	struct kioctx *ioctx = lookup_ioctx(ctx);
 	if (likely(NULL != ioctx)) {
-		kill_ioctx(current->mm, ioctx);
+		struct completion requests_done =
+			COMPLETION_INITIALIZER_ONSTACK(requests_done);
+
+		/* Pass requests_done to kill_ioctx() where it can be set
+		 * in a thread-safe way. If we try to set it here then we have
+		 * a race condition if two io_destroy() called simultaneously.
+		 */
+		kill_ioctx(current->mm, ioctx, &requests_done);
 		percpu_ref_put(&ioctx->users);
+
+		/* Wait until all IO for the context are done. Otherwise kernel
+		 * keep using user-space buffers even if user thinks the context
+		 * is destroyed.
+		 */
+		wait_for_completion(&requests_done);
+
 		return 0;
 	}
 	pr_debug("EINVAL: io_destroy: invalid context id\n");
···
 					&iovec, compat)
 		: aio_setup_single_vector(req, rw, buf, &nr_segs,
 					iovec);
-	if (ret)
-		return ret;
-
-	ret = rw_verify_area(rw, file, &req->ki_pos, req->ki_nbytes);
+	if (!ret)
+		ret = rw_verify_area(rw, file, &req->ki_pos, req->ki_nbytes);
 	if (ret < 0) {
 		if (iovec != &inline_vec)
 			kfree(iovec);
···
 	if (start > key.offset && end < extent_end) {
 		BUG_ON(del_nr > 0);
 		if (extent_type == BTRFS_FILE_EXTENT_INLINE) {
-			ret = -EINVAL;
+			ret = -EOPNOTSUPP;
 			break;
 		}
···
 		 */
 		if (start <= key.offset && end < extent_end) {
 			if (extent_type == BTRFS_FILE_EXTENT_INLINE) {
-				ret = -EINVAL;
+				ret = -EOPNOTSUPP;
 				break;
 			}
···
 		if (start > key.offset && end >= extent_end) {
 			BUG_ON(del_nr > 0);
 			if (extent_type == BTRFS_FILE_EXTENT_INLINE) {
-				ret = -EINVAL;
+				ret = -EOPNOTSUPP;
 				break;
 			}
···
 	start_pos = round_down(pos, root->sectorsize);
 	if (start_pos > i_size_read(inode)) {
 		/* Expand hole size to cover write data, preventing empty gap */
-		end_pos = round_up(pos + iov->iov_len, root->sectorsize);
+		end_pos = round_up(pos + count, root->sectorsize);
 		err = btrfs_cont_expand(inode, i_size_read(inode), end_pos);
 		if (err) {
 			mutex_unlock(&inode->i_mutex);
+7 -17
fs/btrfs/inode-map.c
···

 	tsk = kthread_run(caching_kthread, root, "btrfs-ino-cache-%llu\n",
 			  root->root_key.objectid);
-	BUG_ON(IS_ERR(tsk)); /* -ENOMEM */
+	if (IS_ERR(tsk)) {
+		btrfs_warn(root->fs_info, "failed to start inode caching task");
+		btrfs_clear_and_info(root, CHANGE_INODE_CACHE,
+				"disabling inode map caching");
+	}
 }

 int btrfs_find_free_ino(struct btrfs_root *root, u64 *objectid)
···

 void btrfs_return_ino(struct btrfs_root *root, u64 objectid)
 {
-	struct btrfs_free_space_ctl *ctl = root->free_ino_ctl;
 	struct btrfs_free_space_ctl *pinned = root->free_ino_pinned;

 	if (!btrfs_test_opt(root, INODE_MAP_CACHE))
 		return;
-
 again:
 	if (root->cached == BTRFS_CACHE_FINISHED) {
-		__btrfs_add_free_space(ctl, objectid, 1);
+		__btrfs_add_free_space(pinned, objectid, 1);
 	} else {
-		/*
-		 * If we are in the process of caching free ino chunks,
-		 * to avoid adding the same inode number to the free_ino
-		 * tree twice due to cross transaction, we'll leave it
-		 * in the pinned tree until a transaction is committed
-		 * or the caching work is done.
-		 */
-
 		down_write(&root->fs_info->commit_root_sem);
 		spin_lock(&root->cache_lock);
 		if (root->cached == BTRFS_CACHE_FINISHED) {
···

 		start_caching(root);

-		if (objectid <= root->cache_progress ||
-		    objectid >= root->highest_objectid)
-			__btrfs_add_free_space(ctl, objectid, 1);
-		else
-			__btrfs_add_free_space(pinned, objectid, 1);
+		__btrfs_add_free_space(pinned, objectid, 1);

 		up_write(&root->fs_info->commit_root_sem);
 	}
+2 -2
fs/btrfs/ioctl.c
···
 						 new_key.offset + datal,
 						 1);
 			if (ret) {
-				if (ret != -EINVAL)
+				if (ret != -EOPNOTSUPP)
 					btrfs_abort_transaction(trans,
 						root, ret);
 				btrfs_end_transaction(trans, root);
···
 						 new_key.offset + datal,
 						 1);
 			if (ret) {
-				if (ret != -EINVAL)
+				if (ret != -EOPNOTSUPP)
 					btrfs_abort_transaction(trans,
 						root, ret);
 				btrfs_end_transaction(trans, root);
···
 	case F_GETLK64:
 	case F_SETLK64:
 	case F_SETLKW64:
-	case F_GETLKP:
-	case F_SETLKP:
-	case F_SETLKPW:
+	case F_OFD_GETLK:
+	case F_OFD_SETLK:
+	case F_OFD_SETLKW:
 		ret = get_compat_flock64(&f, compat_ptr(arg));
 		if (ret != 0)
 			break;
···
 		conv_cmd = convert_fcntl_cmd(cmd);
 		ret = sys_fcntl(fd, conv_cmd, (unsigned long)&f);
 		set_fs(old_fs);
-		if ((conv_cmd == F_GETLK || conv_cmd == F_GETLKP) && ret == 0) {
+		if ((conv_cmd == F_GETLK || conv_cmd == F_OFD_GETLK) && ret == 0) {
 			/* need to return lock information - see above for commentary */
 			if (f.l_start > COMPAT_LOFF_T_MAX)
 				ret = -EOVERFLOW;
···
 	case F_GETLK64:
 	case F_SETLK64:
 	case F_SETLKW64:
-	case F_GETLKP:
-	case F_SETLKP:
-	case F_SETLKPW:
+	case F_OFD_GETLK:
+	case F_OFD_SETLK:
+	case F_OFD_SETLKW:
 		return -EINVAL;
 	}
 	return compat_sys_fcntl64(fd, cmd, arg);
···
 	size_t count = iov_length(iov, nr_segs);
 	loff_t final_size = pos + count;

-	if (pos >= inode->i_size)
+	if (pos >= i_size_read(inode))
 		return 0;

 	if ((pos & blockmask) || (final_size & blockmask))
+27 -24
fs/ext4/inode.c
···
 	if (unlikely(map->m_len > INT_MAX))
 		map->m_len = INT_MAX;

+	/* We can handle the block number less than EXT_MAX_BLOCKS */
+	if (unlikely(map->m_lblk >= EXT_MAX_BLOCKS))
+		return -EIO;
+
 	/* Lookup extent status tree firstly */
 	if (ext4_es_lookup_extent(inode, map->m_lblk, &es)) {
 		ext4_es_lru_add(inode);
···
 			return err;
 	} while (map->m_len);

-	/* Update on-disk size after IO is submitted */
+	/*
+	 * Update on-disk size after IO is submitted. Races with
+	 * truncate are avoided by checking i_size under i_data_sem.
+	 */
 	disksize = ((loff_t)mpd->first_page) << PAGE_CACHE_SHIFT;
 	if (disksize > EXT4_I(inode)->i_disksize) {
 		int err2;
+		loff_t i_size;

-		ext4_wb_update_i_disksize(inode, disksize);
+		down_write(&EXT4_I(inode)->i_data_sem);
+		i_size = i_size_read(inode);
+		if (disksize > i_size)
+			disksize = i_size;
+		if (disksize > EXT4_I(inode)->i_disksize)
+			EXT4_I(inode)->i_disksize = disksize;
 		err2 = ext4_mark_inode_dirty(handle, inode);
+		up_write(&EXT4_I(inode)->i_data_sem);
 		if (err2)
 			ext4_error(inode->i_sb,
 				   "Failed to mark inode %lu dirty",
···
 	}

 	mutex_lock(&inode->i_mutex);
-	/* It's not possible punch hole on append only file */
-	if (IS_APPEND(inode) || IS_IMMUTABLE(inode)) {
-		ret = -EPERM;
-		goto out_mutex;
-	}
-	if (IS_SWAPFILE(inode)) {
-		ret = -ETXTBSY;
-		goto out_mutex;
-	}

 	/* No need to punch hole beyond i_size */
 	if (offset >= inode->i_size)
···
 		ret = ext4_free_hole_blocks(handle, inode, first_block,
 					    stop_block);

-	ext4_discard_preallocations(inode);
 	up_write(&EXT4_I(inode)->i_data_sem);
 	if (IS_SYNC(inode))
 		ext4_handle_sync(handle);
···
 *
 * We are called from a few places:
 *
- * - Within generic_file_write() for O_SYNC files.
+ * - Within generic_file_aio_write() -> generic_write_sync() for O_SYNC files.
 *   Here, there will be no transaction running. We wait for any running
 *   transaction to commit.
 *
- * - Within sys_sync(), kupdate and such.
- *   We wait on commit, if tol to.
+ * - Within flush work (sys_sync(), kupdate and such).
+ *   We wait on commit, if told to.
 *
- * - Within prune_icache() (PF_MEMALLOC == true)
- *   Here we simply return.  We can't afford to block kswapd on the
- *   journal commit.
+ * - Within iput_final() -> write_inode_now()
+ *   We wait on commit, if told to.
 *
 * In all cases it is actually safe for us to return without doing anything,
 * because the inode has been copied into a raw inode buffer in
- * ext4_mark_inode_dirty().  This is a correctness thing for O_SYNC and for
- * knfsd.
+ * ext4_mark_inode_dirty(). This is a correctness thing for WB_SYNC_ALL
+ * writeback.
 *
 * Note that we are absolutely dependent upon all inode dirtiers doing the
 * right thing: they *must* call mark_inode_dirty() after dirtying info in
···
 *	stuff();
 *	inode->i_size = expr;
 *
- * is in error because a kswapd-driven write_inode() could occur while
- * `stuff()' is running, and the new i_size will be lost.  Plus the inode
- * will no longer be on the superblock's dirty inode list.
+ * is in error because write_inode() could occur while `stuff()' is running,
+ * and the new i_size will be lost. Plus the inode will no longer be on the
+ * superblock's dirty inode list.
 */
 int ext4_write_inode(struct inode *inode, struct writeback_control *wbc)
 {
 	int err;

-	if (current->flags & PF_MEMALLOC)
+	if (WARN_ON_ONCE(current->flags & PF_MEMALLOC))
 		return 0;

 	if (EXT4_SB(inode->i_sb)->s_journal) {
+14 -4
fs/ext4/mballoc.c
···
 	poff = block % blocks_per_page;
 	page = find_or_create_page(inode->i_mapping, pnum, GFP_NOFS);
 	if (!page)
-		return -EIO;
+		return -ENOMEM;
 	BUG_ON(page->mapping != inode->i_mapping);
 	e4b->bd_bitmap_page = page;
 	e4b->bd_bitmap = page_address(page) + (poff * sb->s_blocksize);
···
 	pnum = block / blocks_per_page;
 	page = find_or_create_page(inode->i_mapping, pnum, GFP_NOFS);
 	if (!page)
-		return -EIO;
+		return -ENOMEM;
 	BUG_ON(page->mapping != inode->i_mapping);
 	e4b->bd_buddy_page = page;
 	return 0;
···
 			unlock_page(page);
 		}
 	}
-	if (page == NULL || !PageUptodate(page)) {
+	if (page == NULL) {
+		ret = -ENOMEM;
+		goto err;
+	}
+	if (!PageUptodate(page)) {
 		ret = -EIO;
 		goto err;
 	}
···
 			unlock_page(page);
 		}
 	}
-	if (page == NULL || !PageUptodate(page)) {
+	if (page == NULL) {
+		ret = -ENOMEM;
+		goto err;
+	}
+	if (!PageUptodate(page)) {
 		ret = -EIO;
 		goto err;
 	}
···
 */
 static int ext4_trim_extent(struct super_block *sb, int start, int count,
 			     ext4_group_t group, struct ext4_buddy *e4b)
+__releases(bitlock)
+__acquires(bitlock)
 {
 	struct ext4_free_extent ex;
 	int ret = 0;
+3 -2
fs/ext4/page-io.c
···
 	if (error) {
 		struct inode *inode = io_end->inode;

-		ext4_warning(inode->i_sb, "I/O error writing to inode %lu "
+		ext4_warning(inode->i_sb, "I/O error %d writing to inode %lu "
 			     "(offset %llu size %ld starting block %llu)",
-			     inode->i_ino,
+			     error, inode->i_ino,
 			     (unsigned long long) io_end->offset,
 			     (long) io_end->size,
 			     (unsigned long long)
 			     bi_sector >> (inode->i_blkbits - 9));
+		mapping_set_error(inode->i_mapping, error);
 	}

 	if (io_end->flag & EXT4_IO_END_UNWRITTEN) {
+27 -24
fs/ext4/super.c
···
 			goto failed_mount2;
 		}
 	}
+
+	/*
+	 * set up enough so that it can read an inode,
+	 * and create new inode for buddy allocator
+	 */
+	sbi->s_gdb_count = db_count;
+	if (!test_opt(sb, NOLOAD) &&
+	    EXT4_HAS_COMPAT_FEATURE(sb, EXT4_FEATURE_COMPAT_HAS_JOURNAL))
+		sb->s_op = &ext4_sops;
+	else
+		sb->s_op = &ext4_nojournal_sops;
+
+	ext4_ext_init(sb);
+	err = ext4_mb_init(sb);
+	if (err) {
+		ext4_msg(sb, KERN_ERR, "failed to initialize mballoc (%d)",
+			 err);
+		goto failed_mount2;
+	}
+
 	if (!ext4_check_descriptors(sb, &first_not_zeroed)) {
 		ext4_msg(sb, KERN_ERR, "group descriptors corrupted!");
-		goto failed_mount2;
+		goto failed_mount2a;
 	}
 	if (EXT4_HAS_INCOMPAT_FEATURE(sb, EXT4_FEATURE_INCOMPAT_FLEX_BG))
 		if (!ext4_fill_flex_info(sb)) {
 			ext4_msg(sb, KERN_ERR,
 				"unable to initialize "
 				"flex_bg meta info!");
-			goto failed_mount2;
+			goto failed_mount2a;
 		}

-	sbi->s_gdb_count = db_count;
 	get_random_bytes(&sbi->s_next_generation, sizeof(u32));
 	spin_lock_init(&sbi->s_next_gen_lock);
···
 	sbi->s_stripe = ext4_get_stripe_size(sbi);
 	sbi->s_extent_max_zeroout_kb = 32;

-	/*
-	 * set up enough so that it can read an inode
-	 */
-	if (!test_opt(sb, NOLOAD) &&
-	    EXT4_HAS_COMPAT_FEATURE(sb, EXT4_FEATURE_COMPAT_HAS_JOURNAL))
-		sb->s_op = &ext4_sops;
-	else
-		sb->s_op = &ext4_nojournal_sops;
 	sb->s_export_op = &ext4_export_ops;
 	sb->s_xattr = ext4_xattr_handlers;
 #ifdef CONFIG_QUOTA
···
 	if (err) {
 		ext4_msg(sb, KERN_ERR, "failed to reserve %llu clusters for "
 			 "reserved pool", ext4_calculate_resv_clusters(sb));
-		goto failed_mount4a;
+		goto failed_mount5;
 	}

 	err = ext4_setup_system_zone(sb);
 	if (err) {
 		ext4_msg(sb, KERN_ERR, "failed to initialize system "
 			 "zone (%d)", err);
-		goto failed_mount4a;
-	}
-
-	ext4_ext_init(sb);
-	err = ext4_mb_init(sb);
-	if (err) {
-		ext4_msg(sb, KERN_ERR, "failed to initialize mballoc (%d)",
-			 err);
 		goto failed_mount5;
 	}
···
 failed_mount7:
 	ext4_unregister_li_request(sb);
 failed_mount6:
-	ext4_mb_release(sb);
-failed_mount5:
-	ext4_ext_release(sb);
 	ext4_release_system_zone(sb);
-failed_mount4a:
+failed_mount5:
 	dput(sb->s_root);
 	sb->s_root = NULL;
 failed_mount4:
···
 	percpu_counter_destroy(&sbi->s_extent_cache_cnt);
 	if (sbi->s_mmp_tsk)
 		kthread_stop(sbi->s_mmp_tsk);
+failed_mount2a:
+	ext4_mb_release(sb);
 failed_mount2:
 	for (i = 0; i < db_count; i++)
 		brelse(sbi->s_group_desc[i]);
 	ext4_kvfree(sbi->s_group_desc);
 failed_mount:
+	ext4_ext_release(sb);
 	if (sbi->s_chksum_driver)
 		crypto_free_shash(sbi->s_chksum_driver);
 	if (sbi->s_proc) {
+19 -4
fs/ext4/xattr.c
···
 }

 /*
- * Release the xattr block BH: If the reference count is > 1, decrement
- * it; otherwise free the block.
+ * Release the xattr block BH: If the reference count is > 1, decrement it;
+ * otherwise free the block.
 */
 static void
 ext4_xattr_release_block(handle_t *handle, struct inode *inode,
···
 		if (ce)
 			mb_cache_entry_free(ce);
 		get_bh(bh);
+		unlock_buffer(bh);
 		ext4_free_blocks(handle, inode, bh, 0, 1,
 				 EXT4_FREE_BLOCKS_METADATA |
 				 EXT4_FREE_BLOCKS_FORGET);
-		unlock_buffer(bh);
 	} else {
 		le32_add_cpu(&BHDR(bh)->h_refcount, -1);
 		if (ce)
 			mb_cache_entry_release(ce);
+		/*
+		 * Beware of this ugliness: Releasing of xattr block references
+		 * from different inodes can race and so we have to protect
+		 * from a race where someone else frees the block (and releases
+		 * its journal_head) before we are done dirtying the buffer. In
+		 * nojournal mode this race is harmless and we actually cannot
+		 * call ext4_handle_dirty_xattr_block() with locked buffer as
+		 * that function can call sync_dirty_buffer() so for that case
+		 * we handle the dirtying after unlocking the buffer.
+		 */
+		if (ext4_handle_valid(handle))
+			error = ext4_handle_dirty_xattr_block(handle, inode,
+							      bh);
 		unlock_buffer(bh);
-		error = ext4_handle_dirty_xattr_block(handle, inode, bh);
+		if (!ext4_handle_valid(handle))
+			error = ext4_handle_dirty_xattr_block(handle, inode,
+							      bh);
 		if (IS_SYNC(inode))
 			ext4_handle_sync(handle);
 		dquot_free_block(inode, EXT4_C2B(EXT4_SB(inode->i_sb), 1));
+6-6
fs/fcntl.c
···
         break;
 #if BITS_PER_LONG != 32
     /* 32-bit arches must use fcntl64() */
-    case F_GETLKP:
+    case F_OFD_GETLK:
 #endif
     case F_GETLK:
         err = fcntl_getlk(filp, cmd, (struct flock __user *) arg);
         break;
 #if BITS_PER_LONG != 32
     /* 32-bit arches must use fcntl64() */
-    case F_SETLKP:
-    case F_SETLKPW:
+    case F_OFD_SETLK:
+    case F_OFD_SETLKW:
 #endif
     /* Fallthrough */
     case F_SETLK:
···
     switch (cmd) {
     case F_GETLK64:
-    case F_GETLKP:
+    case F_OFD_GETLK:
         err = fcntl_getlk64(f.file, cmd, (struct flock64 __user *) arg);
         break;
     case F_SETLK64:
     case F_SETLKW64:
-    case F_SETLKP:
-    case F_SETLKPW:
+    case F_OFD_SETLK:
+    case F_OFD_SETLKW:
         err = fcntl_setlk64(fd, f.file, cmd,
                 (struct flock64 __user *) arg);
         break;
+6-3
fs/kernfs/dir.c
···
     struct rb_node **node = &kn->parent->dir.children.rb_node;
     struct rb_node *parent = NULL;

-    if (kernfs_type(kn) == KERNFS_DIR)
-        kn->parent->dir.subdirs++;
-
     while (*node) {
         struct kernfs_node *pos;
         int result;
···
         else
             return -EEXIST;
     }
+
     /* add new node and rebalance the tree */
     rb_link_node(&kn->rb, parent, node);
     rb_insert_color(&kn->rb, &kn->parent->dir.children);
+
+    /* successfully added, account subdir number */
+    if (kernfs_type(kn) == KERNFS_DIR)
+        kn->parent->dir.subdirs++;
+
     return 0;
 }
+2
fs/kernfs/file.c
···

     ops = kernfs_ops(of->kn);
     rc = ops->mmap(of, vma);
+    if (rc)
+        goto out_put;

     /*
      * PowerPC's pci_mmap of legacy_mem uses shmem_zero_setup()
+27-28
fs/locks.c
···
 #define IS_POSIX(fl)    (fl->fl_flags & FL_POSIX)
 #define IS_FLOCK(fl)    (fl->fl_flags & FL_FLOCK)
 #define IS_LEASE(fl)    (fl->fl_flags & (FL_LEASE|FL_DELEG))
-#define IS_FILE_PVT(fl) (fl->fl_flags & FL_FILE_PVT)
+#define IS_OFDLCK(fl)   (fl->fl_flags & FL_OFDLCK)

 static bool lease_breaking(struct file_lock *fl)
 {
···
     BUG_ON(!list_empty(&waiter->fl_block));
     waiter->fl_next = blocker;
     list_add_tail(&waiter->fl_block, &blocker->fl_block);
-    if (IS_POSIX(blocker) && !IS_FILE_PVT(blocker))
+    if (IS_POSIX(blocker) && !IS_OFDLCK(blocker))
         locks_insert_global_blocked(waiter);
 }
···
  * of tasks (such as posix threads) sharing the same open file table.
  * To handle those cases, we just bail out after a few iterations.
  *
- * For FL_FILE_PVT locks, the owner is the filp, not the files_struct.
+ * For FL_OFDLCK locks, the owner is the filp, not the files_struct.
  * Because the owner is not even nominally tied to a thread of
  * execution, the deadlock detection below can't reasonably work well. Just
  * skip it for those.
  *
- * In principle, we could do a more limited deadlock detection on FL_FILE_PVT
+ * In principle, we could do a more limited deadlock detection on FL_OFDLCK
  * locks that just checks for the case where two tasks are attempting to
  * upgrade from read to write locks on the same inode.
  */
···
     /*
      * This deadlock detector can't reasonably detect deadlocks with
-     * FL_FILE_PVT locks, since they aren't owned by a process, per-se.
+     * FL_OFDLCK locks, since they aren't owned by a process, per-se.
      */
-    if (IS_FILE_PVT(caller_fl))
+    if (IS_OFDLCK(caller_fl))
         return 0;

     while ((block_fl = what_owner_is_waiting_for(block_fl))) {
···
 restart:
     break_time = flock->fl_break_time;
-    if (break_time != 0) {
+    if (break_time != 0)
         break_time -= jiffies;
-        if (break_time == 0)
-            break_time++;
-    }
+    if (break_time == 0)
+        break_time++;
     locks_insert_block(flock, new_fl);
     spin_unlock(&inode->i_lock);
     error = wait_event_interruptible_timeout(new_fl->fl_wait,
···
 static int posix_lock_to_flock(struct flock *flock, struct file_lock *fl)
 {
-    flock->l_pid = IS_FILE_PVT(fl) ? -1 : fl->fl_pid;
+    flock->l_pid = IS_OFDLCK(fl) ? -1 : fl->fl_pid;
 #if BITS_PER_LONG == 32
     /*
      * Make sure we can represent the posix lock via
···
 #if BITS_PER_LONG == 32
 static void posix_lock_to_flock64(struct flock64 *flock, struct file_lock *fl)
 {
-    flock->l_pid = IS_FILE_PVT(fl) ? -1 : fl->fl_pid;
+    flock->l_pid = IS_OFDLCK(fl) ? -1 : fl->fl_pid;
     flock->l_start = fl->fl_start;
     flock->l_len = fl->fl_end == OFFSET_MAX ? 0 :
         fl->fl_end - fl->fl_start + 1;
···
     if (error)
         goto out;

-    if (cmd == F_GETLKP) {
+    if (cmd == F_OFD_GETLK) {
         error = -EINVAL;
         if (flock.l_pid != 0)
             goto out;

         cmd = F_GETLK;
-        file_lock.fl_flags |= FL_FILE_PVT;
+        file_lock.fl_flags |= FL_OFDLCK;
         file_lock.fl_owner = (fl_owner_t)filp;
     }
···
     /*
      * If the cmd is requesting file-private locks, then set the
-     * FL_FILE_PVT flag and override the owner.
+     * FL_OFDLCK flag and override the owner.
      */
     switch (cmd) {
-    case F_SETLKP:
+    case F_OFD_SETLK:
         error = -EINVAL;
         if (flock.l_pid != 0)
             goto out;

         cmd = F_SETLK;
-        file_lock->fl_flags |= FL_FILE_PVT;
+        file_lock->fl_flags |= FL_OFDLCK;
         file_lock->fl_owner = (fl_owner_t)filp;
         break;
-    case F_SETLKPW:
+    case F_OFD_SETLKW:
         error = -EINVAL;
         if (flock.l_pid != 0)
             goto out;

         cmd = F_SETLKW;
-        file_lock->fl_flags |= FL_FILE_PVT;
+        file_lock->fl_flags |= FL_OFDLCK;
         file_lock->fl_owner = (fl_owner_t)filp;
         /* Fallthrough */
     case F_SETLKW:
···
     if (error)
         goto out;

-    if (cmd == F_GETLKP) {
+    if (cmd == F_OFD_GETLK) {
         error = -EINVAL;
         if (flock.l_pid != 0)
             goto out;

         cmd = F_GETLK64;
-        file_lock.fl_flags |= FL_FILE_PVT;
+        file_lock.fl_flags |= FL_OFDLCK;
         file_lock.fl_owner = (fl_owner_t)filp;
     }
···
     /*
      * If the cmd is requesting file-private locks, then set the
-     * FL_FILE_PVT flag and override the owner.
+     * FL_OFDLCK flag and override the owner.
      */
     switch (cmd) {
-    case F_SETLKP:
+    case F_OFD_SETLK:
         error = -EINVAL;
         if (flock.l_pid != 0)
             goto out;

         cmd = F_SETLK64;
-        file_lock->fl_flags |= FL_FILE_PVT;
+        file_lock->fl_flags |= FL_OFDLCK;
         file_lock->fl_owner = (fl_owner_t)filp;
         break;
-    case F_SETLKPW:
+    case F_OFD_SETLKW:
         error = -EINVAL;
         if (flock.l_pid != 0)
             goto out;

         cmd = F_SETLKW64;
-        file_lock->fl_flags |= FL_FILE_PVT;
+        file_lock->fl_flags |= FL_OFDLCK;
         file_lock->fl_owner = (fl_owner_t)filp;
         /* Fallthrough */
     case F_SETLKW64:
···
     if (IS_POSIX(fl)) {
         if (fl->fl_flags & FL_ACCESS)
             seq_printf(f, "ACCESS");
-        else if (IS_FILE_PVT(fl))
-            seq_printf(f, "FLPVT ");
+        else if (IS_OFDLCK(fl))
+            seq_printf(f, "OFDLCK");
         else
             seq_printf(f, "POSIX ");
···
     /* nfsd4_check_resp_size guarantees enough room for error status */
     if (!op->status)
         op->status = nfsd4_check_resp_size(resp, 0);
-    if (op->status == nfserr_resource && nfsd4_has_session(&resp->cstate)) {
-        struct nfsd4_slot *slot = resp->cstate.slot;
-
-        if (slot->sl_flags & NFSD4_SLOT_CACHETHIS)
-            op->status = nfserr_rep_too_big_to_cache;
-        else
-            op->status = nfserr_rep_too_big;
-    }
     if (so) {
         so->so_replay.rp_status = op->status;
         so->so_replay.rp_buflen = (char *)resp->p - (char *)(statp+1);
+9-12
fs/open.c
···
         return -EBADF;

     /*
-     * It's not possible to punch hole or perform collapse range
-     * on append only file
+     * We can only allow pure fallocate on append only files
      */
-    if (mode & (FALLOC_FL_PUNCH_HOLE | FALLOC_FL_COLLAPSE_RANGE)
-        && IS_APPEND(inode))
+    if ((mode & ~FALLOC_FL_KEEP_SIZE) && IS_APPEND(inode))
         return -EPERM;

     if (IS_IMMUTABLE(inode))
         return -EPERM;

+    /*
+     * We can not allow to do any fallocate operation on an active
+     * swapfile
+     */
+    if (IS_SWAPFILE(inode))
+        ret = -ETXTBSY;

     /*
      * Revalidate the write permissions, in case security policy has
···
     /* Check for wrap through zero too */
     if (((offset + len) > inode->i_sb->s_maxbytes) || ((offset + len) < 0))
         return -EFBIG;
-
-    /*
-     * There is no need to overlap collapse range with EOF, in which case
-     * it is effectively a truncate operation
-     */
-    if ((mode & FALLOC_FL_COLLAPSE_RANGE) &&
-        (offset + len >= i_size_read(inode)))
-        return -EINVAL;

     if (!file->f_op->fallocate)
         return -EOPNOTSUPP;
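With the tightened check, the only fallocate mode still permitted on an append-only file is plain preallocation with FALLOC_FL_KEEP_SIZE. A minimal userspace sketch of that surviving case (Linux-specific fallocate(2); assumes glibc with _GNU_SOURCE; the helper name is ours, not kernel API):

```c
#define _GNU_SOURCE
#include <assert.h>
#include <fcntl.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

/*
 * Preallocate 'len' bytes of disk space for 'fd' without changing the
 * file size visible in st_size. This is the mode that remains legal
 * even on append-only inodes after the check above.
 */
static int prealloc_keep_size(int fd, off_t len)
{
    return fallocate(fd, FALLOC_FL_KEEP_SIZE, 0, len);
}
```

Filesystems without fallocate support return EOPNOTSUPP, so callers should be prepared to fall back to plain writes.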
+9-1
fs/xfs/xfs_file.c
···
         goto out_unlock;
     }

-    ASSERT(offset + len < i_size_read(inode));
+    /*
+     * There is no need to overlap collapse range with EOF,
+     * in which case it is effectively a truncate operation
+     */
+    if (offset + len >= i_size_read(inode)) {
+        error = -EINVAL;
+        goto out_unlock;
+    }
+
     new_size = i_size_read(inode) - len;

     error = xfs_collapse_file_space(ip, offset, len);
···
 #define FL_SLEEP    128 /* A blocking lock */
 #define FL_DOWNGRADE_PENDING    256 /* Lease is being downgraded */
 #define FL_UNLOCK_PENDING   512 /* Lease is being broken */
-#define FL_FILE_PVT 1024    /* lock is private to the file */
+#define FL_OFDLCK   1024    /* lock is "owned" by struct file */

 /*
  * Special return value from posix_lock_file() and vfs_lock_file() for
+2
include/linux/ftrace.h
···
 extern int ftrace_arch_read_dyn_info(char *buf, int size);

 extern int skip_trace(unsigned long ip);
+extern void ftrace_module_init(struct module *mod);

 extern void ftrace_disable_daemon(void);
 extern void ftrace_enable_daemon(void);
···
 static inline void ftrace_disable_daemon(void) { }
 static inline void ftrace_enable_daemon(void) { }
 static inline void ftrace_release_mod(struct module *mod) {}
+static inline void ftrace_module_init(struct module *mod) {}
 static inline __init int register_ftrace_command(struct ftrace_func_command *cmd)
 {
     return -EINVAL;
+34-1
include/linux/interrupt.h
···

 extern cpumask_var_t irq_default_affinity;

-extern int irq_set_affinity(unsigned int irq, const struct cpumask *cpumask);
+/* Internal implementation. Use the helpers below */
+extern int __irq_set_affinity(unsigned int irq, const struct cpumask *cpumask,
+                  bool force);
+
+/**
+ * irq_set_affinity - Set the irq affinity of a given irq
+ * @irq:    Interrupt to set affinity
+ * @cpumask:    cpumask
+ *
+ * Fails if cpumask does not contain an online CPU
+ */
+static inline int
+irq_set_affinity(unsigned int irq, const struct cpumask *cpumask)
+{
+    return __irq_set_affinity(irq, cpumask, false);
+}
+
+/**
+ * irq_force_affinity - Force the irq affinity of a given irq
+ * @irq:    Interrupt to set affinity
+ * @cpumask:    cpumask
+ *
+ * Same as irq_set_affinity, but without checking the mask against
+ * online cpus.
+ *
+ * Solely for low level cpu hotplug code, where we need to make per
+ * cpu interrupts affine before the cpu becomes online.
+ */
+static inline int
+irq_force_affinity(unsigned int irq, const struct cpumask *cpumask)
+{
+    return __irq_set_affinity(irq, cpumask, true);
+}
+
 extern int irq_can_set_affinity(unsigned int irq);
 extern int irq_select_affinity(unsigned int irq);
+4-1
include/linux/irq.h
···

 extern void irq_cpu_online(void);
 extern void irq_cpu_offline(void);
-extern int __irq_set_affinity_locked(struct irq_data *data, const struct cpumask *cpumask);
+extern int irq_set_affinity_locked(struct irq_data *data,
+                   const struct cpumask *cpumask, bool force);

 #if defined(CONFIG_SMP) && defined(CONFIG_GENERIC_PENDING_IRQ)
 void irq_move_irq(struct irq_data *data);
···
     struct irq_data *d = irq_get_irq_data(irq);
     return d ? irqd_get_trigger_type(d) : 0;
 }
+
+unsigned int arch_dynirq_lower_bound(unsigned int from);

 int __irq_alloc_descs(int irq, unsigned int from, unsigned int cnt, int node,
               struct module *owner);
+1
include/linux/libata.h
···
     unsigned long       qc_allocated;
     unsigned int        qc_active;
     int         nr_active_links; /* #links with active qcs */
+    unsigned int        last_tag;   /* track next tag hw expects */

     struct ata_link     link;       /* host default link */
     struct ata_link     *slave_link;    /* see ata_slave_link_init() */
+5
include/linux/of_irq.h
···

 #ifdef CONFIG_OF_IRQ
 extern int of_irq_count(struct device_node *dev);
+extern int of_irq_get(struct device_node *dev, int index);
 #else
 static inline int of_irq_count(struct device_node *dev)
+{
+    return 0;
+}
+static inline int of_irq_get(struct device_node *dev, int index)
 {
     return 0;
 }
···
 #endif

 /*
- * fd "private" POSIX locks.
+ * Open File Description Locks
  *
- * Usually POSIX locks held by a process are released on *any* close and are
+ * Usually record locks held by a process are released on *any* close and are
  * not inherited across a fork().
  *
- * These cmd values will set locks that conflict with normal POSIX locks, but
- * are "owned" by the opened file, not the process. This means that they are
- * inherited across fork() like BSD (flock) locks, and they are only released
- * automatically when the last reference to the the open file against which
- * they were acquired is put.
+ * These cmd values will set locks that conflict with process-associated
+ * record locks, but are "owned" by the open file description, not the
+ * process. This means that they are inherited across fork() like BSD (flock)
+ * locks, and they are only released automatically when the last reference to
+ * the the open file against which they were acquired is put.
  */
-#define F_GETLKP    36
-#define F_SETLKP    37
-#define F_SETLKPW   38
+#define F_OFD_GETLK 36
+#define F_OFD_SETLK 37
+#define F_OFD_SETLKW    38

 #define F_OWNER_TID 0
 #define F_OWNER_PID 1
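These definitions are the userspace-visible API. A minimal sketch of the semantics (assumes a Linux 3.15+ kernel and a libc that exposes the F_OFD_* constants under _GNU_SOURCE; helper names are ours): two descriptors obtained by separate open() calls are distinct open file descriptions, so their OFD locks conflict even within a single process, which classic POSIX locks would not.

```c
#define _GNU_SOURCE
#include <assert.h>
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Take a whole-file OFD write lock on fd; returns 0 on success. */
static int ofd_wrlock(int fd)
{
    struct flock fl;

    memset(&fl, 0, sizeof(fl));
    fl.l_type = F_WRLCK;
    fl.l_whence = SEEK_SET; /* l_start = l_len = 0: whole file */
    /* l_pid must be 0 on input for OFD commands (else EINVAL) */
    return fcntl(fd, F_OFD_SETLK, &fl);
}

/* Probe via another open file description: F_UNLCK means no conflict. */
static short ofd_probe(int fd)
{
    struct flock fl;

    memset(&fl, 0, sizeof(fl));
    fl.l_type = F_WRLCK;
    fl.l_whence = SEEK_SET;
    if (fcntl(fd, F_OFD_GETLK, &fl) == -1)
        return -1;
    return fl.l_type;
}
```

When a conflict is reported, l_pid comes back as -1 because the lock is owned by an open file description rather than a process, matching the IS_OFDLCK handling in fs/locks.c above.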
+1
include/uapi/linux/input.h
···
 #define INPUT_PROP_DIRECT       0x01    /* direct input devices */
 #define INPUT_PROP_BUTTONPAD        0x02    /* has button(s) under pad */
 #define INPUT_PROP_SEMI_MT      0x03    /* touch rectangle only */
+#define INPUT_PROP_TOPBUTTONPAD     0x04    /* softbuttons at top of pad */

 #define INPUT_PROP_MAX          0x1f
 #define INPUT_PROP_CNT          (INPUT_PROP_MAX + 1)
+22
kernel/hrtimer.c
···
             goto again;
         }
         timer->base = new_base;
+    } else {
+        if (cpu != this_cpu && hrtimer_check_target(timer, new_base)) {
+            cpu = this_cpu;
+            goto again;
+        }
     }
     return new_base;
 }
···
         return;

     cpu_base->expires_next.tv64 = expires_next.tv64;
+
+    /*
+     * If a hang was detected in the last timer interrupt then we
+     * leave the hang delay active in the hardware. We want the
+     * system to make progress. That also prevents the following
+     * scenario:
+     * T1 expires 50ms from now
+     * T2 expires 5s from now
+     *
+     * T1 is removed, so this code is called and would reprogram
+     * the hardware to 5s from now. Any hrtimer_start after that
+     * will not reprogram the hardware due to hang_detected being
+     * set. So we'd effectivly block all timers until the T2 event
+     * fires.
+     */
+    if (cpu_base->hang_detected)
+        return;

     if (cpu_base->expires_next.tv64 != KTIME_MAX)
         tick_program_event(cpu_base->expires_next, 1);
+7
kernel/irq/irqdesc.c
···
         if (from > irq)
             return -EINVAL;
         from = irq;
+    } else {
+        /*
+         * For interrupts which are freely allocated the
+         * architecture can force a lower bound to the @from
+         * argument. x86 uses this to exclude the GSI space.
+         */
+        from = arch_dynirq_lower_bound(from);
     }

     mutex_lock(&sparse_irq_lock);
+6-11
kernel/irq/manage.c
···
     struct irq_chip *chip = irq_data_get_irq_chip(data);
     int ret;

-    ret = chip->irq_set_affinity(data, mask, false);
+    ret = chip->irq_set_affinity(data, mask, force);
     switch (ret) {
     case IRQ_SET_MASK_OK:
         cpumask_copy(data->affinity, mask);
···
     return ret;
 }

-int __irq_set_affinity_locked(struct irq_data *data, const struct cpumask *mask)
+int irq_set_affinity_locked(struct irq_data *data, const struct cpumask *mask,
+                bool force)
 {
     struct irq_chip *chip = irq_data_get_irq_chip(data);
     struct irq_desc *desc = irq_data_to_desc(data);
···
         return -EINVAL;

     if (irq_can_move_pcntxt(data)) {
-        ret = irq_do_set_affinity(data, mask, false);
+        ret = irq_do_set_affinity(data, mask, force);
     } else {
         irqd_set_move_pending(data);
         irq_copy_pending(desc, mask);
···
     return ret;
 }

-/**
- *  irq_set_affinity - Set the irq affinity of a given irq
- *  @irq:       Interrupt to set affinity
- *  @mask:      cpumask
- *
- */
-int irq_set_affinity(unsigned int irq, const struct cpumask *mask)
+int __irq_set_affinity(unsigned int irq, const struct cpumask *mask, bool force)
 {
     struct irq_desc *desc = irq_to_desc(irq);
     unsigned long flags;
···
         return -EINVAL;

     raw_spin_lock_irqsave(&desc->lock, flags);
-    ret = __irq_set_affinity_locked(irq_desc_get_irq_data(desc), mask);
+    ret = irq_set_affinity_locked(irq_desc_get_irq_data(desc), mask, force);
     raw_spin_unlock_irqrestore(&desc->lock, flags);
     return ret;
 }
+3-3
kernel/module.c
···
         return -EFAULT;
     name[MODULE_NAME_LEN-1] = '\0';

-    if (!(flags & O_NONBLOCK))
-        pr_warn("waiting module removal not supported: please upgrade\n");
-
     if (mutex_lock_interruptible(&module_mutex) != 0)
         return -EINTR;
···
     }

     dynamic_debug_setup(info->debug, info->num_debug);
+
+    /* Ftrace init must be called in the MODULE_STATE_UNFORMED state */
+    ftrace_module_init(mod);

     /* Finally it's fully formed, ready to start executing. */
     err = complete_formation(mod, info);
···
     ftrace_process_locs(mod, start, end);
 }

-static int ftrace_module_notify_enter(struct notifier_block *self,
-                      unsigned long val, void *data)
+void ftrace_module_init(struct module *mod)
 {
-    struct module *mod = data;
-
-    if (val == MODULE_STATE_COMING)
-        ftrace_init_module(mod, mod->ftrace_callsites,
-                   mod->ftrace_callsites +
-                   mod->num_ftrace_callsites);
-    return 0;
+    ftrace_init_module(mod, mod->ftrace_callsites,
+               mod->ftrace_callsites +
+               mod->num_ftrace_callsites);
 }

 static int ftrace_module_notify_exit(struct notifier_block *self,
···
     return 0;
 }
 #else
-static int ftrace_module_notify_enter(struct notifier_block *self,
-                      unsigned long val, void *data)
-{
-    return 0;
-}
 static int ftrace_module_notify_exit(struct notifier_block *self,
                      unsigned long val, void *data)
 {
     return 0;
 }
 #endif /* CONFIG_MODULES */
-
-struct notifier_block ftrace_module_enter_nb = {
-    .notifier_call = ftrace_module_notify_enter,
-    .priority = INT_MAX,    /* Run before anything that can use kprobes */
-};

 struct notifier_block ftrace_module_exit_nb = {
     .notifier_call = ftrace_module_notify_exit,
···
     ret = ftrace_process_locs(NULL,
                   __start_mcount_loc,
                   __stop_mcount_loc);
-
-    ret = register_module_notifier(&ftrace_module_enter_nb);
-    if (ret)
-        pr_warning("Failed to register trace ftrace module enter notifier\n");

     ret = register_module_notifier(&ftrace_module_exit_nb);
     if (ret)
+1-1
kernel/trace/trace_events_trigger.c
···
             data->ops->func(data);
             continue;
         }
-        filter = rcu_dereference(data->filter);
+        filter = rcu_dereference_sched(data->filter);
         if (filter && !filter_match_preds(filter, rec))
             continue;
         if (data->cmd_ops->post_trigger) {
+39-19
mm/memory.c
···
 #endif
 }

-void tlb_flush_mmu(struct mmu_gather *tlb)
+static void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
 {
-    struct mmu_gather_batch *batch;
-
-    if (!tlb->need_flush)
-        return;
     tlb->need_flush = 0;
     tlb_flush(tlb);
 #ifdef CONFIG_HAVE_RCU_TABLE_FREE
     tlb_table_flush(tlb);
 #endif
+}
+
+static void tlb_flush_mmu_free(struct mmu_gather *tlb)
+{
+    struct mmu_gather_batch *batch;

     for (batch = &tlb->local; batch; batch = batch->next) {
         free_pages_and_swap_cache(batch->pages, batch->nr);
         batch->nr = 0;
     }
     tlb->active = &tlb->local;
+}
+
+void tlb_flush_mmu(struct mmu_gather *tlb)
+{
+    if (!tlb->need_flush)
+        return;
+    tlb_flush_mmu_tlbonly(tlb);
+    tlb_flush_mmu_free(tlb);
 }

 /* tlb_finish_mmu
···
             if (PageAnon(page))
                 rss[MM_ANONPAGES]--;
             else {
-                if (pte_dirty(ptent))
+                if (pte_dirty(ptent)) {
+                    force_flush = 1;
                     set_page_dirty(page);
+                }
                 if (pte_young(ptent) &&
                     likely(!(vma->vm_flags & VM_SEQ_READ)))
                     mark_page_accessed(page);
···
             page_remove_rmap(page);
             if (unlikely(page_mapcount(page) < 0))
                 print_bad_pte(vma, addr, ptent, page);
-            force_flush = !__tlb_remove_page(tlb, page);
-            if (force_flush)
+            if (unlikely(!__tlb_remove_page(tlb, page))) {
+                force_flush = 1;
                 break;
+            }
             continue;
         }
         /*
···

     add_mm_rss_vec(mm, rss);
     arch_leave_lazy_mmu_mode();
-    pte_unmap_unlock(start_pte, ptl);

-    /*
-     * mmu_gather ran out of room to batch pages, we break out of
-     * the PTE lock to avoid doing the potential expensive TLB invalidate
-     * and page-free while holding it.
-     */
+    /* Do the actual TLB flush before dropping ptl */
     if (force_flush) {
         unsigned long old_end;
-
-        force_flush = 0;

         /*
          * Flush the TLB just for the previous segment,
···
          */
         old_end = tlb->end;
         tlb->end = addr;
-
-        tlb_flush_mmu(tlb);
-
+        tlb_flush_mmu_tlbonly(tlb);
         tlb->start = addr;
         tlb->end = old_end;
+    }
+    pte_unmap_unlock(start_pte, ptl);
+
+    /*
+     * If we forced a TLB flush (either due to running out of
+     * batch buffers or because we needed to flush dirty TLB
+     * entries before releasing the ptl), free the batched
+     * memory too. Restart if we didn't do everything.
+     */
+    if (force_flush) {
+        force_flush = 0;
+        tlb_flush_mmu_free(tlb);

         if (addr != end)
             goto again;
···
              unsigned long address, unsigned int fault_flags)
 {
     struct vm_area_struct *vma;
+    vm_flags_t vm_flags;
     int ret;

     vma = find_extend_vma(mm, address);
     if (!vma || address < vma->vm_start)
+        return -EFAULT;
+
+    vm_flags = (fault_flags & FAULT_FLAG_WRITE) ? VM_WRITE : VM_READ;
+    if (!(vm_flags & vma->vm_flags))
         return -EFAULT;

     ret = handle_mm_fault(mm, vma, address, fault_flags);
+5-3
mm/vmacache.c
···
     for (i = 0; i < VMACACHE_SIZE; i++) {
         struct vm_area_struct *vma = current->vmacache[i];

-        if (vma && vma->vm_start <= addr && vma->vm_end > addr) {
-            BUG_ON(vma->vm_mm != mm);
+        if (!vma)
+            continue;
+        if (WARN_ON_ONCE(vma->vm_mm != mm))
+            break;
+        if (vma->vm_start <= addr && vma->vm_end > addr)
             return vma;
-        }
     }

     return NULL;
+3-3
security/selinux/hooks.c
···
     case F_GETLK:
     case F_SETLK:
     case F_SETLKW:
-    case F_GETLKP:
-    case F_SETLKP:
-    case F_SETLKPW:
+    case F_OFD_GETLK:
+    case F_OFD_SETLK:
+    case F_OFD_SETLKW:
 #if BITS_PER_LONG == 32
     case F_GETLK64:
     case F_SETLK64:
···

     sp = (unsigned long) regs[PERF_REG_X86_SP];

-    map = map_groups__find(&thread->mg, MAP__FUNCTION, (u64) sp);
+    map = map_groups__find(&thread->mg, MAP__VARIABLE, (u64) sp);
     if (!map) {
         pr_debug("failed to get stack map\n");
+        free(buf);
         return -1;
     }
+7-1
tools/perf/arch/x86/tests/regs_load.S
···
-
 #include <linux/linkage.h>

 #define AX  0
···
     ret
 ENDPROC(perf_regs_load)
 #endif
+
+/*
+ * We need to provide note.GNU-stack section, saying that we want
+ * NOT executable stack. Otherwise the final linking will assume that
+ * the ELF stack should not be restricted at all and set it RWX.
+ */
+.section .note.GNU-stack,"",@progbits
+35-11
tools/perf/config/Makefile
···
     LIBUNWIND_LIBS = -lunwind -lunwind-arm
 endif

+# So far there's only x86 libdw unwind support merged in perf.
+# Disable it on all other architectures in case libdw unwind
+# support is detected in system. Add supported architectures
+# to the check.
+ifneq ($(ARCH),x86)
+  NO_LIBDW_DWARF_UNWIND := 1
+endif
+
 ifeq ($(LIBUNWIND_LIBS),)
   NO_LIBUNWIND := 1
 else
···
 CFLAGS += -Wall
 CFLAGS += -Wextra
 CFLAGS += -std=gnu99
+
+# Enforce a non-executable stack, as we may regress (again) in the future by
+# adding assembler files missing the .GNU-stack linker note.
+LDFLAGS += -Wl,-z,noexecstack

 EXTLIBS = -lelf -lpthread -lrt -lm -ldl
···
     stackprotector-all  \
     timerfd         \
     libunwind-debug-frame   \
-    bionic
+    bionic          \
+    liberty         \
+    liberty-z       \
+    cplus-demangle

 # Set FEATURE_CHECK_(C|LD)FLAGS-all for all CORE_FEATURE_TESTS features.
 # If in the future we need per-feature checks/flags for features not
···
 endif

 ifeq ($(feature-libbfd), 1)
-  EXTLIBS += -lbfd -lz -liberty
+  EXTLIBS += -lbfd
+
+  # call all detections now so we get correct
+  # status in VF output
+  $(call feature_check,liberty)
+  $(call feature_check,liberty-z)
+  $(call feature_check,cplus-demangle)
+
+  ifeq ($(feature-liberty), 1)
+    EXTLIBS += -liberty
+  else
+    ifeq ($(feature-liberty-z), 1)
+      EXTLIBS += -liberty -lz
+    endif
+  endif
 endif

 ifdef NO_DEMANGLE
···
     CFLAGS += -DHAVE_CPLUS_DEMANGLE_SUPPORT
   else
     ifneq ($(feature-libbfd), 1)
-      $(call feature_check,liberty)
-      ifeq ($(feature-liberty), 1)
-        EXTLIBS += -lbfd -liberty
-      else
-        $(call feature_check,liberty-z)
-        ifeq ($(feature-liberty-z), 1)
-          EXTLIBS += -lbfd -liberty -lz
-        else
-          $(call feature_check,cplus-demangle)
+      ifneq ($(feature-liberty), 1)
+        ifneq ($(feature-liberty-z), 1)
+          # we dont have neither HAVE_CPLUS_DEMANGLE_SUPPORT
+          # or any of 'bfd iberty z' trinity
           ifeq ($(feature-cplus-demangle), 1)
             EXTLIBS += -liberty
             CFLAGS += -DHAVE_CPLUS_DEMANGLE_SUPPORT
+2
tools/perf/tests/make
···
 make_install_html   := install-html
 make_install_info   := install-info
 make_install_pdf    := install-pdf
+make_static         := LDFLAGS=-static

 # all the NO_* variable combined
 make_minimal        := NO_LIBPERL=1 NO_LIBPYTHON=1 NO_NEWT=1 NO_GTK2=1
···
 # run += make_install_info
 # run += make_install_pdf
 run += make_minimal
+run += make_static

 ifneq ($(call has,ctags),)
 run += make_tags
+12-4
tools/perf/util/machine.c
···
 }

 static int map_groups__set_modules_path_dir(struct map_groups *mg,
-                const char *dir_name)
+                const char *dir_name, int depth)
 {
     struct dirent *dent;
     DIR *dir = opendir(dir_name);
···
                 !strcmp(dent->d_name, ".."))
                 continue;

-            ret = map_groups__set_modules_path_dir(mg, path);
+            /* Do not follow top-level source and build symlinks */
+            if (depth == 0) {
+                if (!strcmp(dent->d_name, "source") ||
+                    !strcmp(dent->d_name, "build"))
+                    continue;
+            }
+
+            ret = map_groups__set_modules_path_dir(mg, path,
+                                   depth + 1);
             if (ret < 0)
                 goto out;
         } else {
···
     if (!version)
         return -1;

-    snprintf(modules_path, sizeof(modules_path), "%s/lib/modules/%s/kernel",
+    snprintf(modules_path, sizeof(modules_path), "%s/lib/modules/%s",
          machine->root_dir, version);
     free(version);

-    return map_groups__set_modules_path_dir(&machine->kmaps, modules_path);
+    return map_groups__set_modules_path_dir(&machine->kmaps, modules_path, 0);
 }

 static int machine__create_module(void *arg, const char *name, u64 start)
+1-10
tools/power/acpi/Makefile
···
     STRIPCMD = $(STRIP) -s --remove-section=.note --remove-section=.comment
 endif

-# if DEBUG is enabled, then we do not strip or optimize
-ifeq ($(strip $(DEBUG)),true)
-    CFLAGS += -O1 -g -DDEBUG
-    STRIPCMD = /bin/true -Since_we_are_debugging
-else
-    CFLAGS += $(OPTIMIZATION) -fomit-frame-pointer
-    STRIPCMD = $(STRIP) -s --remove-section=.note --remove-section=.comment
-endif
-
 # --- ACPIDUMP BEGIN ---

 vpath %.c \
···
     -rm -f $(OUTPUT)acpidump

 install-tools:
-    $(INSTALL) -d $(DESTDIR)${bindir}
+    $(INSTALL) -d $(DESTDIR)${sbindir}
     $(INSTALL_PROGRAM) $(OUTPUT)acpidump $(DESTDIR)${sbindir}

 install-man:
+8-7
virt/kvm/arm/vgic.c
···
     u32 val;
     u32 *reg;

-    offset >>= 1;
     reg = vgic_bitmap_get_reg(&vcpu->kvm->arch.vgic.irq_cfg,
-                  vcpu->vcpu_id, offset);
+                  vcpu->vcpu_id, offset >> 1);

-    if (offset & 2)
+    if (offset & 4)
         val = *reg >> 16;
     else
         val = *reg & 0xffff;
···
     vgic_reg_access(mmio, &val, offset,
             ACCESS_READ_VALUE | ACCESS_WRITE_VALUE);
     if (mmio->is_write) {
-        if (offset < 4) {
+        if (offset < 8) {
             *reg = ~0U; /* Force PPIs/SGIs to 1 */
             return false;
         }

         val = vgic_cfg_compress(val);
-        if (offset & 2) {
+        if (offset & 4) {
             *reg &= 0xffff;
             *reg |= val << 16;
         } else {
···
     case 0:
         if (!target_cpus)
             return;
+        break;

     case 1:
         target_cpus = ((1 << nrcpus) - 1) & ~(1 << vcpu_id) & 0xff;
···
     if (addr + size < addr)
         return -EINVAL;

+    *ioaddr = addr;
     ret = vgic_ioaddr_overlap(kvm);
     if (ret)
-        return ret;
-    *ioaddr = addr;
+        *ioaddr = VGIC_ADDR_UNDEF;
+
     return ret;
 }
+2-1
virt/kvm/assigned-dev.c
···
     if (dev->entries_nr == 0)
         return r;

-    r = pci_enable_msix(dev->dev, dev->host_msix_entries, dev->entries_nr);
+    r = pci_enable_msix_exact(dev->dev,
+                  dev->host_msix_entries, dev->entries_nr);
     if (r)
         return r;