+* Rockchip RK3xxx I2C controller
+
+This driver interfaces with the native I2C controller present in Rockchip
+RK3xxx SoCs.
+
+Required properties :
+
+ - reg : Offset and length of the register set for the device
+ - compatible : should be "rockchip,rk3066-i2c", "rockchip,rk3188-i2c" or
+   "rockchip,rk3288-i2c".
+ - interrupts : interrupt number
+ - clocks : parent clock
+
+Required on RK3066, RK3188 :
+
+ - rockchip,grf : the phandle of the syscon node for the general register
+   file (GRF)
+ - on those SoCs an alias with the correct I2C bus ID (bit offset in the GRF)
+   is also required.
+
+Optional properties :
+
+ - clock-frequency : SCL frequency to use (in Hz). If omitted, 100kHz is used.
+
+Example:
+
+aliases {
+	i2c0 = &i2c0;
+}
+
+i2c0: i2c@2002d000 {
+	compatible = "rockchip,rk3188-i2c";
+	reg = <0x2002d000 0x1000>;
+	interrupts = <GIC_SPI 40 IRQ_TYPE_LEVEL_HIGH>;
+	#address-cells = <1>;
+	#size-cells = <0>;
+
+	rockchip,grf = <&grf>;
+
+	clock-names = "i2c";
+	clocks = <&cru PCLK_I2C0>;
+};
+
+* Allwinner P2WI (Push/Pull 2 Wire Interface) controller
+
+Required properties :
+
+ - reg : Offset and length of the register set for the device.
+ - compatible : Should be one of the following:
+   - "allwinner,sun6i-a31-p2wi"
+ - interrupts : The interrupt line connected to the P2WI peripheral.
+ - clocks : The gate clk connected to the P2WI peripheral.
+ - resets : The reset line connected to the P2WI peripheral.
+
+Optional properties :
+
+ - clock-frequency : Desired P2WI bus clock frequency in Hz. If not set the
+   default frequency is 100kHz.
+
+A P2WI may contain one child node encoding a P2WI slave device.
+
+Slave device properties:
+  Required properties:
+   - reg : the I2C slave address used during the initialization
+           process to switch from I2C to P2WI mode
+
+Example:
+
+	p2wi@01f03400 {
+		compatible = "allwinner,sun6i-a31-p2wi";
+		reg = <0x01f03400 0x400>;
+		interrupts = <0 39 4>;
+		clocks = <&apb0_gates 3>;
+		clock-frequency = <6000000>;
+		resets = <&apb0_rst 3>;
+
+		axp221: pmic@68 {
+			compatible = "x-powers,axp221";
+			reg = <0x68>;
+
+			/* ... */
+		};
+	};
+1-1
Documentation/kbuild/makefiles.txt
···
 	  obvious reason.

 	dtc
-		Create flattend device tree blob object suitable for linking
+		Create flattened device tree blob object suitable for linking
 		into vmlinux. Device tree blobs linked into vmlinux are placed
 		in an init section in the image. Platform code *must* copy the
 		blob to non-init memory prior to calling unflatten_device_tree().
+10-4
Documentation/kernel-parameters.txt
···
 	js=		[HW,JOY] Analog joystick
 			See Documentation/input/joystick.txt.

+	kaslr/nokaslr	[X86]
+			Enable/disable kernel and module base offset ASLR
+			(Address Space Layout Randomization) if built into
+			the kernel. When CONFIG_HIBERNATION is selected,
+			kASLR is disabled by default. When kASLR is enabled,
+			hibernation will be disabled.
+
 	keepinitrd	[HW,ARM]

 	kernelcore=nn[KMG]	[KNL,X86,IA-64,PPC] This parameter
···
 	noapic		[SMP,APIC] Tells the kernel to not make use of any
 			IOAPICs that may be present in the system.

-	nokaslr		[X86]
-			Disable kernel and module base offset ASLR (Address
-			Space Layout Randomization) if built into the kernel.
-
 	noautogroup	Disable scheduler automatic task group creation.

 	nobats		[PPC] Do not use BATs for mapping kernel lowmem
···
 			interrupt wake-up latency, which may improve performance
 			in certain environments such as networked servers or
 			real-time systems.
+
+	nohibernate	[HIBERNATION] Disable hibernation and resume.

 	nohz=		[KNL] Boottime enable/disable dynamic ticks
 			Valid arguments: on, off
···
 		noresume	Don't check if there's a hibernation image
 				present during boot.
 		nocompress	Don't compress/decompress hibernation images.
+		no		Disable hibernation and resume.

 	retain_initrd	[RAM] Keep initrd memory after extraction
+4-3
Documentation/thermal/nouveau_thermal
···
 Supported chips:
 * NV43+

-Authors: Martin Peres (mupuf) <martin.peres@labri.fr>
+Authors: Martin Peres (mupuf) <martin.peres@free.fr>

 Description
 -----------
···

 NOTE: Be sure to use the manual mode if you want to drive the fan speed manually

-NOTE2: Not all fan management modes may be supported on all chipsets. We are
-working on it.
+NOTE2: When operating in manual mode outside the vbios-defined
+[PWM_min, PWM_max] range, the reported fan speed (RPM) may not be accurate
+depending on your hardware.

 Bug reports
 -----------
···
-config ARCH_BCM
+menuconfig ARCH_BCM
 	bool "Broadcom SoC Support" if ARCH_MULTI_V6_V7
 	help
 	  This enables support for Broadcom ARM based SoC chips

-menu "Broadcom SoC Selection"
-	depends on ARCH_BCM
+if ARCH_BCM

 config ARCH_BCM_MOBILE
 	bool "Broadcom Mobile SoC Support" if ARCH_MULTI_V7
···
 	  different SoC or with the older BCM47XX and BCM53XX based
 	  network SoC using a MIPS CPU, they are supported by arch/mips/bcm47xx

-endmenu
+endif
+1-5
arch/arm/mach-berlin/Kconfig
···
-config ARCH_BERLIN
+menuconfig ARCH_BERLIN
 	bool "Marvell Berlin SoCs" if ARCH_MULTI_V7
 	select ARCH_REQUIRE_GPIOLIB
 	select ARM_GIC
···
 	select PINCTRL

 if ARCH_BERLIN
-
-menu "Marvell Berlin SoC variants"

 config MACH_BERLIN_BG2
 	bool "Marvell Armada 1500 (BG2)"
···
 	select CACHE_L2X0
 	select HAVE_ARM_TWD if SMP
 	select PINCTRL_BERLIN_BG2Q
-
-endmenu

 endif
+3-4
arch/arm/mach-cns3xxx/Kconfig
···
-config ARCH_CNS3XXX
+menuconfig ARCH_CNS3XXX
 	bool "Cavium Networks CNS3XXX family" if ARCH_MULTI_V6
 	select ARM_GIC
 	select PCI_DOMAINS if PCI
 	help
 	  Support for Cavium Networks CNS3XXX platform.

-menu "CNS3XXX platform type"
-	depends on ARCH_CNS3XXX
+if ARCH_CNS3XXX

 config MACH_CNS3420VB
 	bool "Support for CNS3420 Validation Board"
···
 	  This is a platform with an on-board ARM11 MPCore and has support
 	  for USB, USB-OTG, MMC/SD/SDIO, SATA, PCI-E, etc.

-endmenu
+endif
···
 	platform_device_register_simple("exynos-cpufreq", -1, NULL, 0);
 }

+void __iomem *sysram_base_addr;
+void __iomem *sysram_ns_base_addr;
+
+void __init exynos_sysram_init(void)
+{
+	struct device_node *node;
+
+	for_each_compatible_node(node, NULL, "samsung,exynos4210-sysram") {
+		if (!of_device_is_available(node))
+			continue;
+		sysram_base_addr = of_iomap(node, 0);
+		break;
+	}
+
+	for_each_compatible_node(node, NULL, "samsung,exynos4210-sysram-ns") {
+		if (!of_device_is_available(node))
+			continue;
+		sysram_ns_base_addr = of_iomap(node, 0);
+		break;
+	}
+}
+
 void __init exynos_init_late(void)
 {
 	if (of_machine_is_compatible("samsung,exynos5440"))
···
 				int depth, void *data)
 {
 	struct map_desc iodesc;
-	__be32 *reg;
+	const __be32 *reg;
 	int len;

 	if (!of_flat_dt_is_compatible(node, "samsung,exynos4210-chipid") &&
···
 			}
 		}
 	}
+
+	/*
+	 * This is called from smp_prepare_cpus if we've built for SMP, but
+	 * we still need to set it up for PM and firmware ops if not.
+	 */
+	if (!IS_ENABLED(CONFIG_SMP))
+		exynos_sysram_init();

 	exynos_cpuidle_init();
 	exynos_cpufreq_init();
+2-24
arch/arm/mach-exynos/platsmp.c
···

 extern void exynos4_secondary_startup(void);

-void __iomem *sysram_base_addr;
-void __iomem *sysram_ns_base_addr;
-
-static void __init exynos_smp_prepare_sysram(void)
-{
-	struct device_node *node;
-
-	for_each_compatible_node(node, NULL, "samsung,exynos4210-sysram") {
-		if (!of_device_is_available(node))
-			continue;
-		sysram_base_addr = of_iomap(node, 0);
-		break;
-	}
-
-	for_each_compatible_node(node, NULL, "samsung,exynos4210-sysram-ns") {
-		if (!of_device_is_available(node))
-			continue;
-		sysram_ns_base_addr = of_iomap(node, 0);
-		break;
-	}
-}
-
 static inline void __iomem *cpu_boot_reg_base(void)
 {
 	if (soc_is_exynos4210() && samsung_rev() == EXYNOS4210_REV_1_1)
···
 {
 	int i;

+	exynos_sysram_init();
+
 	if (read_cpuid_part_number() == ARM_CPU_PART_CORTEX_A9)
 		scu_enable(scu_base_addr());
-
-	exynos_smp_prepare_sysram();

 	/*
 	 * Write the address of secondary startup into the
-1
arch/arm/mach-highbank/Kconfig
···
 config ARCH_HIGHBANK
 	bool "Calxeda ECX-1000/2000 (Highbank/Midway)" if ARCH_MULTI_V7
 	select ARCH_DMA_ADDR_T_64BIT if ARM_LPAE
-	select ARCH_HAS_CPUFREQ
 	select ARCH_HAS_HOLES_MEMORYMODEL
 	select ARCH_HAS_OPP
 	select ARCH_SUPPORTS_BIG_ENDIAN
+3-7
arch/arm/mach-imx/Kconfig
···
-config ARCH_MXC
+menuconfig ARCH_MXC
 	bool "Freescale i.MX family" if ARCH_MULTI_V4_V5 || ARCH_MULTI_V6_V7
-	select ARCH_HAS_CPUFREQ
 	select ARCH_HAS_OPP
 	select ARCH_REQUIRE_GPIOLIB
 	select ARM_CPU_SUSPEND if PM
···
 	help
 	  Support for Freescale MXC/iMX-based family of processors

-menu "Freescale i.MX support"
-	depends on ARCH_MXC
+if ARCH_MXC

 config MXC_TZIC
 	bool
···
 config SOC_IMX27
 	bool
-	select ARCH_HAS_CPUFREQ
 	select ARCH_HAS_OPP
 	select CPU_ARM926T
 	select IMX_HAVE_IOMUX_V1
···
 config SOC_IMX5
 	bool
-	select ARCH_HAS_CPUFREQ
 	select ARCH_HAS_OPP
 	select ARCH_MXC_IOMUX_V3
 	select MXC_TZIC
···

 source "arch/arm/mach-imx/devices/Kconfig"

-endmenu
+endif
+1-1
arch/arm/mach-integrator/Kconfig
···
 	bool

 config INTEGRATOR_IMPD1
-	tristate "Include support for Integrator/IM-PD1"
+	bool "Include support for Integrator/IM-PD1"
 	depends on ARCH_INTEGRATOR_AP
 	select ARCH_REQUIRE_GPIOLIB
 	select ARM_VIC
+11-1
arch/arm/mach-integrator/impd1.c
···
  */
 #define IMPD1_VALID_IRQS 0x00000bffU

-static int __init impd1_probe(struct lm_device *dev)
+/*
+ * As this module is bool, it is OK to have this as __init_refok() - no
+ * probe calls will be done after the initial system bootup, as devices
+ * are discovered as part of the machine startup.
+ */
+static int __init_refok impd1_probe(struct lm_device *dev)
 {
 	struct impd1_module *impd1;
 	int irq_base;
···
 static struct lm_driver impd1_driver = {
 	.drv = {
 		.name = "impd1",
+		/*
+		 * As we're dropping the probe() function, suppress driver
+		 * binding from sysfs.
+		 */
+		.suppress_bind_attrs = true,
 	},
 	.probe = impd1_probe,
 	.remove = impd1_remove,
+1
arch/arm/mach-keystone/Kconfig
···
 config ARCH_KEYSTONE
 	bool "Texas Instruments Keystone Devices"
 	depends on ARCH_MULTI_V7
+	depends on ARM_PATCH_PHYS_VIRT
 	select ARM_GIC
 	select HAVE_ARM_ARCH_TIMER
 	select CLKSRC_MMIO
+1-1
arch/arm/mach-moxart/Kconfig
···
-config ARCH_MOXART
+menuconfig ARCH_MOXART
 	bool "MOXA ART SoC" if ARCH_MULTI_V4
 	select CPU_FA526
 	select ARM_DMA_MEM_BUFFERABLE
+1-6
arch/arm/mach-mvebu/Kconfig
···
-config ARCH_MVEBU
+menuconfig ARCH_MVEBU
 	bool "Marvell Engineering Business Unit (MVEBU) SoCs" if (ARCH_MULTI_V7 || ARCH_MULTI_V5)
 	select ARCH_SUPPORTS_BIG_ENDIAN
 	select CLKSRC_MMIO
···
 	select PCI_QUIRKS if PCI

 if ARCH_MVEBU
-
-menu "Marvell EBU SoC variants"

 config MACH_MVEBU_V7
 	bool
···

 config MACH_KIRKWOOD
 	bool "Marvell Kirkwood boards" if ARCH_MULTI_V5
-	select ARCH_HAS_CPUFREQ
 	select ARCH_REQUIRE_GPIOLIB
 	select CPU_FEROCEON
 	select KIRKWOOD_CLK
···
 	help
 	  Say 'Y' here if you want your kernel to support boards based
 	  on the Marvell Kirkwood device tree.
-
-endmenu

 endif
+1-3
arch/arm/mach-nomadik/Kconfig
···
-config ARCH_NOMADIK
+menuconfig ARCH_NOMADIK
 	bool "ST-Ericsson Nomadik"
 	depends on ARCH_MULTI_V5
 	select ARCH_REQUIRE_GPIOLIB
···
 	  Support for the Nomadik platform by ST-Ericsson

 if ARCH_NOMADIK
-menu "Nomadik boards"

 config MACH_NOMADIK_8815NHK
 	bool "ST 8815 Nomadik Hardware Kit (evaluation board)"
···
 	select I2C_ALGOBIT
 	select I2C_NOMADIK

-endmenu
 endif

 config NOMADIK_8815
···
-config ARCH_SIRF
+menuconfig ARCH_SIRF
 	bool "CSR SiRF" if ARCH_MULTI_V7
 	select ARCH_HAS_RESET_CONTROLLER
 	select ARCH_REQUIRE_GPIOLIB
···

 if ARCH_SIRF

-menu "CSR SiRF atlas6/primaII/Marco/Polo Specific Features"
+comment "CSR SiRF atlas6/primaII/Marco/Polo Specific Features"

 config ARCH_ATLAS6
 	bool "CSR SiRFSoC ATLAS6 ARM Cortex A9 Platform"
···
 	select SMP_ON_UP if SMP
 	help
 	  Support for CSR SiRFSoC ARM Cortex A9 Platform
-
-endmenu

 config SIRF_IRQ
 	bool
+1-5
arch/arm/mach-qcom/Kconfig
···
-config ARCH_QCOM
+menuconfig ARCH_QCOM
 	bool "Qualcomm Support" if ARCH_MULTI_V7
 	select ARCH_REQUIRE_GPIOLIB
 	select ARM_GIC
···

 if ARCH_QCOM

-menu "Qualcomm SoC Selection"
-
 config ARCH_MSM8X60
 	bool "Enable support for MSM8X60"
 	select CLKSRC_QCOM
···
 config ARCH_MSM8974
 	bool "Enable support for MSM8974"
 	select HAVE_ARM_ARCH_TIMER
-
-endmenu

 config QCOM_SCM
 	bool
+1-1
arch/arm/mach-s3c24xx/Kconfig
···
 	  Compile in platform device definition for Samsung TouchScreen.

 config S3C24XX_DMA
-	bool "S3C2410 DMA support"
+	bool "S3C2410 DMA support (deprecated)"
 	select S3C_DMA
 	help
 	  S3C2410 DMA support. This is needed for drivers like sound which
···
 config CPU_S5P6440
 	bool
+	select ARM_AMBA
+	select PL330_DMA if DMADEVICES
 	select S5P_SLEEP if PM
-	select SAMSUNG_DMADEV
 	select SAMSUNG_WAKEMASK if PM
 	help
 	  Enable S5P6440 CPU support

 config CPU_S5P6450
 	bool
+	select ARM_AMBA
+	select PL330_DMA if DMADEVICES
 	select S5P_SLEEP if PM
-	select SAMSUNG_DMADEV
 	select SAMSUNG_WAKEMASK if PM
 	help
 	  Enable S5P6450 CPU support
+2-1
arch/arm/mach-s5pc100/Kconfig
···
 config CPU_S5PC100
 	bool
+	select ARM_AMBA
+	select PL330_DMA if DMADEVICES
 	select S5P_EXT_INT
-	select SAMSUNG_DMADEV
 	help
 	  Enable S5PC100 CPU support
+2-1
arch/arm/mach-s5pv210/Kconfig
···
 config CPU_S5PV210
 	bool
+	select ARM_AMBA
+	select PL330_DMA if DMADEVICES
 	select S5P_EXT_INT
 	select S5P_PM if PM
 	select S5P_SLEEP if PM
-	select SAMSUNG_DMADEV
 	help
 	  Enable S5PV210 CPU support
+2-4
arch/arm/mach-shmobile/Kconfig
···
 config ARCH_SHMOBILE
 	bool

-config ARCH_SHMOBILE_MULTI
+menuconfig ARCH_SHMOBILE_MULTI
 	bool "Renesas ARM SoCs" if ARCH_MULTI_V7
 	depends on MMU
 	select ARCH_SHMOBILE
···

 if ARCH_SHMOBILE_MULTI

-comment "Renesas ARM SoCs System Type"
+#comment "Renesas ARM SoCs System Type"

 config ARCH_EMEV2
 	bool "Emma Mobile EV2"
···
 	select CPU_V7
 	select SH_CLK_CPG
 	select RENESAS_IRQC
-	select ARCH_HAS_CPUFREQ
 	select ARCH_HAS_OPP
 	select SYS_SUPPORTS_SH_CMT
 	select SYS_SUPPORTS_SH_TMU
···
 config MACH_KZM9G
 	bool "KZM-A9-GT board"
 	depends on ARCH_SH73A0
-	select ARCH_HAS_CPUFREQ
 	select ARCH_HAS_OPP
 	select ARCH_REQUIRE_GPIOLIB
 	select REGULATOR_FIXED_VOLTAGE if REGULATOR
-1
arch/arm/mach-spear/Kconfig
···
 config ARCH_SPEAR13XX
 	bool "ST SPEAr13xx"
 	depends on ARCH_MULTI_V7 || PLAT_SPEAR_SINGLE
-	select ARCH_HAS_CPUFREQ
 	select ARM_GIC
 	select GPIO_SPEAR_SPICS
 	select HAVE_ARM_SCU if SMP
+1-1
arch/arm/mach-sti/Kconfig
···
 menuconfig ARCH_STI
-	bool "STMicroelectronics Consumer Electronics SOCs with Device Trees" if ARCH_MULTI_V7
+	bool "STMicroelectronics Consumer Electronics SOCs" if ARCH_MULTI_V7
 	select ARM_GIC
 	select ARM_GLOBAL_TIMER
 	select PINCTRL
+3-5
arch/arm/mach-tegra/Kconfig
···
-config ARCH_TEGRA
+menuconfig ARCH_TEGRA
 	bool "NVIDIA Tegra" if ARCH_MULTI_V7
-	select ARCH_HAS_CPUFREQ
 	select ARCH_REQUIRE_GPIOLIB
 	select ARCH_SUPPORTS_TRUSTED_FOUNDATIONS
 	select ARM_GIC
···
 	help
 	  This enables support for NVIDIA Tegra based systems.

-menu "NVIDIA Tegra options"
-	depends on ARCH_TEGRA
+if ARCH_TEGRA

 config ARCH_TEGRA_2x_SOC
 	bool "Enable support for Tegra20 family"
···
 	  which controls AHB bus master arbitration and some
 	  performance parameters(priority, prefech size).

-endmenu
+endif
+1-5
arch/arm/mach-u300/Kconfig
···
-config ARCH_U300
+menuconfig ARCH_U300
 	bool "ST-Ericsson U300 Series" if ARCH_MULTI_V5
 	depends on MMU
 	select ARCH_REQUIRE_GPIOLIB
···
 	  Support for ST-Ericsson U300 series mobile platforms.

 if ARCH_U300
-
-menu "ST-Ericsson AB U300/U335 Platform"

 config MACH_U300
 	depends on ARCH_U300
···
 	  to test reference designs. If you're not testing SPI,
 	  you don't need it. Selecting this will activate the
 	  SPI framework and ARM PL022 support.
-
-endmenu

 endif
+1-6
arch/arm/mach-ux500/Kconfig
···
-config ARCH_U8500
+menuconfig ARCH_U8500
 	bool "ST-Ericsson U8500 Series" if ARCH_MULTI_V7
 	depends on MMU
 	select AB8500_CORE
 	select ABX500_CORE
-	select ARCH_HAS_CPUFREQ
 	select ARCH_REQUIRE_GPIOLIB
 	select ARM_AMBA
 	select ARM_ERRATA_754322
···
 	select PINCTRL_AB8540
 	select REGULATOR
 	select REGULATOR_DB8500_PRCMU
-
-menu "Ux500 target platform (boards)"

 config MACH_MOP500
 	bool "U8500 Development platform, MOP500 versions"
···
 	  At least one platform needs to be selected in order to build
 	  a working kernel. If everything else is disabled, this
 	  automatically enables MACH_MOP500.
-
-endmenu

 config UX500_DEBUG_UART
 	int "Ux500 UART to use for low-level debug"
+3-5
arch/arm/mach-vexpress/Kconfig
···
-config ARCH_VEXPRESS
+menuconfig ARCH_VEXPRESS
 	bool "ARM Ltd. Versatile Express family" if ARCH_MULTI_V7
 	select ARCH_REQUIRE_GPIOLIB
 	select ARCH_SUPPORTS_BIG_ENDIAN
···
 	  platforms. The traditional (ATAGs) boot method is not usable on
 	  these boards with this option.

-menu "Versatile Express platform type"
-	depends on ARCH_VEXPRESS
+if ARCH_VEXPRESS

 config ARCH_VEXPRESS_CORTEX_A5_A9_ERRATA
 	bool "Enable A5 and A9 only errata work-arounds"
···

 config ARCH_VEXPRESS_SPC
 	bool "Versatile Express Serial Power Controller (SPC)"
-	select ARCH_HAS_CPUFREQ
 	select ARCH_HAS_OPP
 	select PM_OPP
 	help
···
 	  Support for CPU and cluster power management on Versatile Express
 	  with a TC2 (A15x2 A7x3) big.LITTLE core tile.

-endmenu
+endif
···
 config ARCH_ZYNQ
 	bool "Xilinx Zynq ARM Cortex A9 Platform" if ARCH_MULTI_V7
-	select ARCH_HAS_CPUFREQ
 	select ARCH_HAS_OPP
 	select ARCH_SUPPORTS_BIG_ENDIAN
 	select ARM_AMBA
+8-20
arch/arm/plat-samsung/Kconfig
···
 	  Base platform power management code for samsung code

 if PLAT_SAMSUNG
+menu "Samsung Common options"

 # boot configurations

 comment "Boot options"

-config S3C_BOOT_ERROR_RESET
-	bool "S3C Reboot on decompression error"
-	help
-	  Say y here to use the watchdog to reset the system if the
-	  kernel decompressor detects an error during decompression.
-
-config S3C_BOOT_UART_FORCE_FIFO
-	bool "Force UART FIFO on during boot process"
-	default y
-	help
-	  Say Y here to force the UART FIFOs on during the kernel
-	  uncompressor
-
-
 config S3C_LOWLEVEL_UART_PORT
 	int "S3C UART to use for low-level messages"
+	depends on ARCH_S3C64XX
 	default 0
 	help
 	  Choice of which UART port to use for the low-level messages,
···
 	  Include legacy GPIO power management code for platforms not using
 	  pinctrl-samsung driver.

-endif
-
 config SAMSUNG_DMADEV
-	bool
-	select ARM_AMBA
+	bool "Use legacy Samsung DMA abstraction"
+	depends on CPU_S5PV210 || CPU_S5PC100 || ARCH_S5P64X0 || ARCH_S3C64XX
 	select DMADEVICES
-	select PL330_DMA if (ARCH_EXYNOS5 || ARCH_EXYNOS4 || CPU_S5PV210 || CPU_S5PC100 || \
-		CPU_S5P6450 || CPU_S5P6440)
+	default y
 	help
 	  Use DMA device engine for PL330 DMAC.
+
+endif

 config S5P_DEV_MFC
 	bool
···
 	default "2" if DEBUG_S3C_UART2
 	default "3" if DEBUG_S3C_UART3

+endmenu
 endif
···
 CONFIG_HIGH_RES_TIMERS=y
 CONFIG_BSD_PROCESS_ACCT=y
 CONFIG_BSD_PROCESS_ACCT_V3=y
+CONFIG_TASKSTATS=y
+CONFIG_TASK_DELAY_ACCT=y
+CONFIG_TASK_XACCT=y
+CONFIG_TASK_IO_ACCOUNTING=y
 CONFIG_IKCONFIG=y
 CONFIG_IKCONFIG_PROC=y
 CONFIG_LOG_BUF_SHIFT=14
+CONFIG_RESOURCE_COUNTERS=y
+CONFIG_MEMCG=y
+CONFIG_MEMCG_SWAP=y
+CONFIG_MEMCG_KMEM=y
+CONFIG_CGROUP_HUGETLB=y
 # CONFIG_UTS_NS is not set
 # CONFIG_IPC_NS is not set
 # CONFIG_PID_NS is not set
···
 CONFIG_ARCH_XGENE=y
 CONFIG_SMP=y
 CONFIG_PREEMPT=y
+CONFIG_KSM=y
 CONFIG_TRANSPARENT_HUGEPAGE=y
 CONFIG_CMA=y
 CONFIG_CMDLINE="console=ttyAMA0"
···
 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 CONFIG_DEVTMPFS=y
 CONFIG_DMA_CMA=y
+CONFIG_BLK_DEV_LOOP=y
 CONFIG_VIRTIO_BLK=y
 # CONFIG_SCSI_PROC_FS is not set
 CONFIG_BLK_DEV_SD=y
···
 CONFIG_PATA_PLATFORM=y
 CONFIG_PATA_OF_PLATFORM=y
 CONFIG_NETDEVICES=y
+CONFIG_TUN=y
 CONFIG_SMC91X=y
 CONFIG_SMSC911X=y
 # CONFIG_WLAN is not set
···
 # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
 # CONFIG_EXT3_FS_XATTR is not set
 CONFIG_EXT4_FS=y
+CONFIG_FANOTIFY=y
+CONFIG_FANOTIFY_ACCESS_PERMISSIONS=y
 CONFIG_FUSE_FS=y
 CONFIG_CUSE=y
 CONFIG_VFAT_FS=y
···
 CONFIG_LOCKUP_DETECTOR=y
 # CONFIG_SCHED_DEBUG is not set
 # CONFIG_FTRACE is not set
+CONFIG_SECURITY=y
 CONFIG_CRYPTO_ANSI_CPRNG=y
 CONFIG_ARM64_CRYPTO=y
 CONFIG_CRYPTO_SHA1_ARM64_CE=y
···
  *
  * Run ftrace_return_to_handler() before going back to parent.
  * @fp is checked against the value passed by ftrace_graph_caller()
- * only when CONFIG_FUNCTION_GRAPH_FP_TEST is enabled.
+ * only when CONFIG_HAVE_FUNCTION_GRAPH_FP_TEST is enabled.
  */
 ENTRY(return_to_handler)
 	str	x0, [sp, #-16]!
···
 			reg = task_pt_regs(target)->regs[idx];
 		}

-		ret = copy_to_user(ubuf, &reg, sizeof(reg));
-		if (ret)
-			break;
+		if (kbuf) {
+			memcpy(kbuf, &reg, sizeof(reg));
+			kbuf += sizeof(reg);
+		} else {
+			ret = copy_to_user(ubuf, &reg, sizeof(reg));
+			if (ret)
+				break;

-		ubuf += sizeof(reg);
+			ubuf += sizeof(reg);
+		}
 	}

 	return ret;
···
 		unsigned int idx = start + i;
 		compat_ulong_t reg;

-		ret = copy_from_user(&reg, ubuf, sizeof(reg));
-		if (ret)
-			return ret;
+		if (kbuf) {
+			memcpy(&reg, kbuf, sizeof(reg));
+			kbuf += sizeof(reg);
+		} else {
+			ret = copy_from_user(&reg, ubuf, sizeof(reg));
+			if (ret)
+				return ret;

-		ubuf += sizeof(reg);
+			ubuf += sizeof(reg);
+		}

 		switch (idx) {
 		case 15:
···
 			      compat_ulong_t val)
 {
 	int ret;
+	mm_segment_t old_fs = get_fs();

 	if (off & 3 || off >= COMPAT_USER_SZ)
 		return -EIO;
···
 	if (off >= sizeof(compat_elf_gregset_t))
 		return 0;

+	set_fs(KERNEL_DS);
 	ret = copy_regset_from_user(tsk, &user_aarch32_view,
 				    REGSET_COMPAT_GPR, off,
 				    sizeof(compat_ulong_t),
 				    &val);
+	set_fs(old_fs);
+
 	return ret;
 }
+8-2
arch/arm64/mm/init.c
···
 	/* 4GB maximum for 32-bit only capable devices */
 	if (IS_ENABLED(CONFIG_ZONE_DMA)) {
 		unsigned long max_dma_phys =
-			(unsigned long)dma_to_phys(NULL, DMA_BIT_MASK(32) + 1);
+			(unsigned long)(dma_to_phys(NULL, DMA_BIT_MASK(32)) + 1);
 		max_dma = max(min, min(max, max_dma_phys >> PAGE_SHIFT));
 		zone_size[ZONE_DMA] = max_dma - min;
 	}
···

 void __init arm64_memblock_init(void)
 {
+	phys_addr_t dma_phys_limit = 0;
+
 	/* Register the kernel text, kernel data and initrd with memblock */
 	memblock_reserve(__pa(_text), _end - _text);
 #ifdef CONFIG_BLK_DEV_INITRD
···
 	memblock_reserve(__pa(idmap_pg_dir), IDMAP_DIR_SIZE);

 	early_init_fdt_scan_reserved_mem();
-	dma_contiguous_reserve(0);
+
+	/* 4GB maximum for 32-bit only capable devices */
+	if (IS_ENABLED(CONFIG_ZONE_DMA))
+		dma_phys_limit = dma_to_phys(NULL, DMA_BIT_MASK(32)) + 1;
+	dma_contiguous_reserve(dma_phys_limit);

 	memblock_allow_resize();
 	memblock_dump_all();
+37-27
arch/ia64/hp/common/sba_iommu.c
···
 	struct pci_dev	*sac_only_dev;
 };

-static struct ioc *ioc_list;
+static struct ioc *ioc_list, *ioc_found;
 static int reserve_sba_gart = 1;

 static SBA_INLINE void sba_mark_invalid(struct ioc *, dma_addr_t, size_t);
···
 	{ SX2000_IOC_ID, "sx2000", NULL },
 };

-static struct ioc *
-ioc_init(unsigned long hpa, void *handle)
+static void ioc_init(unsigned long hpa, struct ioc *ioc)
 {
-	struct ioc *ioc;
 	struct ioc_iommu *info;
-
-	ioc = kzalloc(sizeof(*ioc), GFP_KERNEL);
-	if (!ioc)
-		return NULL;

 	ioc->next = ioc_list;
 	ioc_list = ioc;

-	ioc->handle = handle;
 	ioc->ioc_hpa = ioremap(hpa, 0x1000);

 	ioc->func_id = READ_REG(ioc->ioc_hpa + IOC_FUNC_ID);
···
 		"%s %d.%d HPA 0x%lx IOVA space %dMb at 0x%lx\n",
 		ioc->name, (ioc->rev >> 4) & 0xF, ioc->rev & 0xF,
 		hpa, ioc->iov_size >> 20, ioc->ibase);
-
-	return ioc;
 }

···
 #endif
 }

-static int
-acpi_sba_ioc_add(struct acpi_device *device,
-		 const struct acpi_device_id *not_used)
+static void acpi_sba_ioc_add(struct ioc *ioc)
 {
-	struct ioc *ioc;
+	acpi_handle handle = ioc->handle;
 	acpi_status status;
 	u64 hpa, length;
 	struct acpi_device_info *adi;

-	status = hp_acpi_csr_space(device->handle, &hpa, &length);
+	ioc_found = ioc->next;
+	status = hp_acpi_csr_space(handle, &hpa, &length);
 	if (ACPI_FAILURE(status))
-		return 1;
+		goto err;

-	status = acpi_get_object_info(device->handle, &adi);
+	status = acpi_get_object_info(handle, &adi);
 	if (ACPI_FAILURE(status))
-		return 1;
+		goto err;

 	/*
 	 * For HWP0001, only SBA appears in ACPI namespace.  It encloses the PCI
···
 	if (!iovp_shift)
 		iovp_shift = 12;

-	ioc = ioc_init(hpa, device->handle);
-	if (!ioc)
-		return 1;
-
+	ioc_init(hpa, ioc);
 	/* setup NUMA node association */
-	sba_map_ioc_to_node(ioc, device->handle);
-	return 0;
+	sba_map_ioc_to_node(ioc, handle);
+	return;
+
+ err:
+	kfree(ioc);
 }

 static const struct acpi_device_id hp_ioc_iommu_device_ids[] = {
···
 	{"HWP0004", 0},
 	{"", 0},
 };
+
+static int acpi_sba_ioc_attach(struct acpi_device *device,
+			       const struct acpi_device_id *not_used)
+{
+	struct ioc *ioc;
+
+	ioc = kzalloc(sizeof(*ioc), GFP_KERNEL);
+	if (!ioc)
+		return -ENOMEM;
+
+	ioc->next = ioc_found;
+	ioc_found = ioc;
+	ioc->handle = device->handle;
+	return 1;
+}
+
 static struct acpi_scan_handler acpi_sba_ioc_handler = {
 	.ids = hp_ioc_iommu_device_ids,
-	.attach = acpi_sba_ioc_add,
+	.attach = acpi_sba_ioc_attach,
 };

 static int __init acpi_sba_ioc_init_acpi(void)
···
 #endif

 	/*
-	 * ioc_list should be populated by the acpi_sba_ioc_handler's .attach()
+	 * ioc_found should be populated by the acpi_sba_ioc_handler's .attach()
 	 * routine, but that only happens if acpi_scan_init() has already run.
 	 */
+	while (ioc_found)
+		acpi_sba_ioc_add(ioc_found);
+
 	if (!ioc_list) {
 #ifdef CONFIG_IA64_GENERIC
 		/*
+3-2
arch/s390/configs/default_defconfig
···
 CONFIG_UNIXWARE_DISKLABEL=y
 CONFIG_CFQ_GROUP_IOSCHED=y
 CONFIG_DEFAULT_DEADLINE=y
-CONFIG_MARCH_Z9_109=y
+CONFIG_MARCH_Z196=y
+CONFIG_TUNE_ZEC12=y
 CONFIG_NR_CPUS=256
 CONFIG_PREEMPT=y
 CONFIG_HZ_100=y
···
 CONFIG_NF_CONNTRACK_IPV4=m
 # CONFIG_NF_CONNTRACK_PROC_COMPAT is not set
 CONFIG_NF_TABLES_IPV4=m
-CONFIG_NFT_REJECT_IPV4=m
 CONFIG_NFT_CHAIN_ROUTE_IPV4=m
 CONFIG_NFT_CHAIN_NAT_IPV4=m
 CONFIG_NF_TABLES_ARP=m
···
 CONFIG_WATCHDOG=y
 CONFIG_WATCHDOG_NOWAYOUT=y
 CONFIG_SOFT_WATCHDOG=m
+CONFIG_DIAG288_WATCHDOG=m
 # CONFIG_HID is not set
 # CONFIG_USB_SUPPORT is not set
 CONFIG_INFINIBAND=m
+3-2
arch/s390/configs/gcov_defconfig
···
 CONFIG_UNIXWARE_DISKLABEL=y
 CONFIG_CFQ_GROUP_IOSCHED=y
 CONFIG_DEFAULT_DEADLINE=y
-CONFIG_MARCH_Z9_109=y
+CONFIG_MARCH_Z196=y
+CONFIG_TUNE_ZEC12=y
 CONFIG_NR_CPUS=256
 CONFIG_HZ_100=y
 CONFIG_MEMORY_HOTPLUG=y
···
 CONFIG_NF_CONNTRACK_IPV4=m
 # CONFIG_NF_CONNTRACK_PROC_COMPAT is not set
 CONFIG_NF_TABLES_IPV4=m
-CONFIG_NFT_REJECT_IPV4=m
 CONFIG_NFT_CHAIN_ROUTE_IPV4=m
 CONFIG_NFT_CHAIN_NAT_IPV4=m
 CONFIG_NF_TABLES_ARP=m
···
 CONFIG_WATCHDOG=y
 CONFIG_WATCHDOG_NOWAYOUT=y
 CONFIG_SOFT_WATCHDOG=m
+CONFIG_DIAG288_WATCHDOG=m
 # CONFIG_HID is not set
 # CONFIG_USB_SUPPORT is not set
 CONFIG_INFINIBAND=m
+3-2
arch/s390/configs/performance_defconfig
···
 CONFIG_UNIXWARE_DISKLABEL=y
 CONFIG_CFQ_GROUP_IOSCHED=y
 CONFIG_DEFAULT_DEADLINE=y
-CONFIG_MARCH_Z9_109=y
+CONFIG_MARCH_Z196=y
+CONFIG_TUNE_ZEC12=y
 CONFIG_NR_CPUS=256
 CONFIG_HZ_100=y
 CONFIG_MEMORY_HOTPLUG=y
···
 CONFIG_NF_CONNTRACK_IPV4=m
 # CONFIG_NF_CONNTRACK_PROC_COMPAT is not set
 CONFIG_NF_TABLES_IPV4=m
-CONFIG_NFT_REJECT_IPV4=m
 CONFIG_NFT_CHAIN_ROUTE_IPV4=m
 CONFIG_NFT_CHAIN_NAT_IPV4=m
 CONFIG_NF_TABLES_ARP=m
···
 CONFIG_WATCHDOG=y
 CONFIG_WATCHDOG_NOWAYOUT=y
 CONFIG_SOFT_WATCHDOG=m
+CONFIG_DIAG288_WATCHDOG=m
 # CONFIG_HID is not set
 # CONFIG_USB_SUPPORT is not set
 CONFIG_INFINIBAND=m
+2-1
arch/s390/configs/zfcpdump_defconfig
···
 CONFIG_PARTITION_ADVANCED=y
 CONFIG_IBM_PARTITION=y
 CONFIG_DEFAULT_DEADLINE=y
-CONFIG_MARCH_Z9_109=y
+CONFIG_MARCH_Z196=y
+CONFIG_TUNE_ZEC12=y
 # CONFIG_COMPAT is not set
 CONFIG_NR_CPUS=2
 # CONFIG_HOTPLUG_CPU is not set
+7-1
arch/s390/defconfig
···
 CONFIG_LOCK_STAT=y
 CONFIG_DEBUG_LOCKDEP=y
 CONFIG_DEBUG_ATOMIC_SLEEP=y
-CONFIG_DEBUG_WRITECOUNT=y
 CONFIG_DEBUG_LIST=y
+CONFIG_DEBUG_PI_LIST=y
 CONFIG_DEBUG_SG=y
 CONFIG_DEBUG_NOTIFIERS=y
 CONFIG_PROVE_RCU=y
···
 CONFIG_CRYPTO_DES_S390=m
 CONFIG_CRYPTO_AES_S390=m
 CONFIG_CRC7=m
+# CONFIG_XZ_DEC_X86 is not set
+# CONFIG_XZ_DEC_POWERPC is not set
+# CONFIG_XZ_DEC_IA64 is not set
+# CONFIG_XZ_DEC_ARM is not set
+# CONFIG_XZ_DEC_ARMTHUMB is not set
+# CONFIG_XZ_DEC_SPARC is not set
 CONFIG_CMM=m
+15-16
arch/s390/include/asm/mmu_context.h
···

 static inline void set_user_asce(struct mm_struct *mm)
 {
-	pgd_t *pgd = mm->pgd;
-
-	S390_lowcore.user_asce = mm->context.asce_bits | __pa(pgd);
-	set_fs(current->thread.mm_segment);
+	S390_lowcore.user_asce = mm->context.asce_bits | __pa(mm->pgd);
+	if (current->thread.mm_segment.ar4)
+		__ctl_load(S390_lowcore.user_asce, 7, 7);
 	set_cpu_flag(CIF_ASCE);
 }

···
 	/* Clear old ASCE by loading the kernel ASCE. */
 	__ctl_load(S390_lowcore.kernel_asce, 1, 1);
 	__ctl_load(S390_lowcore.kernel_asce, 7, 7);
-	/* Delay loading of the new ASCE to control registers CR1 & CR7 */
-	set_cpu_flag(CIF_ASCE);
 	atomic_inc(&next->context.attach_count);
 	atomic_dec(&prev->context.attach_count);
 	if (MACHINE_HAS_TLB_LC)
 		cpumask_clear_cpu(cpu, &prev->context.cpu_attach_mask);
+	S390_lowcore.user_asce = next->context.asce_bits | __pa(next->pgd);
 }

 #define finish_arch_post_lock_switch finish_arch_post_lock_switch
···
 	struct task_struct *tsk = current;
 	struct mm_struct *mm = tsk->mm;

-	if (!mm)
-		return;
-	preempt_disable();
-	while (atomic_read(&mm->context.attach_count) >> 16)
-		cpu_relax();
+	load_kernel_asce();
+	if (mm) {
+		preempt_disable();
+		while (atomic_read(&mm->context.attach_count) >> 16)
+			cpu_relax();

-	cpumask_set_cpu(smp_processor_id(), mm_cpumask(mm));
-	set_user_asce(mm);
-	if (mm->context.flush_mm)
-		__tlb_flush_mm(mm);
-	preempt_enable();
+		cpumask_set_cpu(smp_processor_id(), mm_cpumask(mm));
+		if (mm->context.flush_mm)
+			__tlb_flush_mm(mm);
+		preempt_enable();
+	}
+	set_fs(current->thread.mm_segment);
 }

 #define enter_lazy_tlb(mm,tsk)	do { } while (0)
-4
arch/s390/include/asm/switch_to.h
···
 	prev = __switch_to(prev,next);					\
 } while (0)
 
-#define finish_arch_switch(prev) do {					\
-	set_fs(current->thread.mm_segment);				\
-} while (0)
-
 #endif /* __ASM_SWITCH_TO_H */
+6-2
arch/s390/include/uapi/asm/ucontext.h
···
 	struct ucontext  *uc_link;
 	stack_t		  uc_stack;
 	_sigregs	  uc_mcontext;
-	unsigned long	  uc_sigmask[2];
+	sigset_t	  uc_sigmask;
+	/* Allow for uc_sigmask growth.  Glibc uses a 1024-bit sigset_t. */
+	unsigned char	  __unused[128 - sizeof(sigset_t)];
 	unsigned long	  uc_gprs_high[16];
 };
···
 	struct ucontext  *uc_link;
 	stack_t		  uc_stack;
 	_sigregs	  uc_mcontext;
-	sigset_t	  uc_sigmask; /* mask last for extensibility */
+	sigset_t	  uc_sigmask;
+	/* Allow for uc_sigmask growth.  Glibc uses a 1024-bit sigset_t. */
+	unsigned char	  __unused[128 - sizeof(sigset_t)];
 };
 
 #endif /* !_ASM_S390_UCONTEXT_H */
+3-1
arch/s390/kernel/compat_linux.h
···
 	__u32			uc_link;	/* pointer */
 	compat_stack_t		uc_stack;
 	_sigregs32		uc_mcontext;
-	compat_sigset_t		uc_sigmask;	/* mask last for extensibility */
+	compat_sigset_t		uc_sigmask;
+	/* Allow for uc_sigmask growth.  Glibc uses a 1024-bit sigset_t. */
+	unsigned char		__unused[128 - sizeof(compat_sigset_t)];
 };
 
 struct stat64_emu31;
arch/sparc/include/asm/bitext.h
···
 	int num_colors;
 };
 
-extern int bit_map_string_get(struct bit_map *t, int len, int align);
-extern void bit_map_clear(struct bit_map *t, int offset, int len);
-extern void bit_map_init(struct bit_map *t, unsigned long *map, int size);
+int bit_map_string_get(struct bit_map *t, int len, int align);
+void bit_map_clear(struct bit_map *t, int offset, int len);
+void bit_map_init(struct bit_map *t, unsigned long *map, int size);
 
 #endif /* defined(_SPARC_BITEXT_H) */
+3-3
arch/sparc/include/asm/bitops_32.h
···
 #error only <linux/bitops.h> can be included directly
 #endif
 
-extern unsigned long ___set_bit(unsigned long *addr, unsigned long mask);
-extern unsigned long ___clear_bit(unsigned long *addr, unsigned long mask);
-extern unsigned long ___change_bit(unsigned long *addr, unsigned long mask);
+unsigned long ___set_bit(unsigned long *addr, unsigned long mask);
+unsigned long ___clear_bit(unsigned long *addr, unsigned long mask);
+unsigned long ___change_bit(unsigned long *addr, unsigned long mask);
 
 /*
  * Set bit 'nr' in 32-bit quantity at address 'addr' where bit '0'
+12-12
arch/sparc/include/asm/bitops_64.h
···
 #include <asm/byteorder.h>
 #include <asm/barrier.h>
 
-extern int test_and_set_bit(unsigned long nr, volatile unsigned long *addr);
-extern int test_and_clear_bit(unsigned long nr, volatile unsigned long *addr);
-extern int test_and_change_bit(unsigned long nr, volatile unsigned long *addr);
-extern void set_bit(unsigned long nr, volatile unsigned long *addr);
-extern void clear_bit(unsigned long nr, volatile unsigned long *addr);
-extern void change_bit(unsigned long nr, volatile unsigned long *addr);
+int test_and_set_bit(unsigned long nr, volatile unsigned long *addr);
+int test_and_clear_bit(unsigned long nr, volatile unsigned long *addr);
+int test_and_change_bit(unsigned long nr, volatile unsigned long *addr);
+void set_bit(unsigned long nr, volatile unsigned long *addr);
+void clear_bit(unsigned long nr, volatile unsigned long *addr);
+void change_bit(unsigned long nr, volatile unsigned long *addr);
 
 #include <asm-generic/bitops/non-atomic.h>
···
 
 #ifdef __KERNEL__
 
-extern int ffs(int x);
-extern unsigned long __ffs(unsigned long);
+int ffs(int x);
+unsigned long __ffs(unsigned long);
 
 #include <asm-generic/bitops/ffz.h>
 #include <asm-generic/bitops/sched.h>
···
  * of bits set) of a N-bit word
  */
 
-extern unsigned long __arch_hweight64(__u64 w);
-extern unsigned int __arch_hweight32(unsigned int w);
-extern unsigned int __arch_hweight16(unsigned int w);
-extern unsigned int __arch_hweight8(unsigned int w);
+unsigned long __arch_hweight64(__u64 w);
+unsigned int __arch_hweight32(unsigned int w);
+unsigned int __arch_hweight16(unsigned int w);
+unsigned int __arch_hweight8(unsigned int w);
 
 #include <asm-generic/bitops/const_hweight.h>
 #include <asm-generic/bitops/lock.h>
+1-1
arch/sparc/include/asm/btext.h
 #ifndef _SPARC_BTEXT_H
 #define _SPARC_BTEXT_H
 
-extern int btext_find_display(void);
+int btext_find_display(void);
 
 #endif /* _SPARC_BTEXT_H */
arch/sparc/include/asm/cacheflush_32.h
···
 #define flush_page_for_dma(addr) \
 	sparc32_cachetlb_ops->page_for_dma(addr)
 
-extern void sparc_flush_page_to_ram(struct page *page);
+void sparc_flush_page_to_ram(struct page *page);
 
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
 #define flush_dcache_page(page)	sparc_flush_page_to_ram(page)
···
  * way the windows are all clean for the next process and the stack
  * frames are up to date.
  */
-extern void flush_user_windows(void);
-extern void kill_user_windows(void);
-extern void flushw_all(void);
+void flush_user_windows(void);
+void kill_user_windows(void);
+void flushw_all(void);
 
 #endif /* _SPARC_CACHEFLUSH_H */
+12-12
arch/sparc/include/asm/cacheflush_64.h
···
 /* Cache flush operations. */
 #define flushw_all()	__asm__ __volatile__("flushw")
 
-extern void __flushw_user(void);
+void __flushw_user(void);
 #define flushw_user() __flushw_user()
 
 #define flush_user_windows flushw_user
···
  * use block commit stores (which invalidate icache lines) during
  * module load, so we need this.
  */
-extern void flush_icache_range(unsigned long start, unsigned long end);
-extern void __flush_icache_page(unsigned long);
+void flush_icache_range(unsigned long start, unsigned long end);
+void __flush_icache_page(unsigned long);
 
-extern void __flush_dcache_page(void *addr, int flush_icache);
-extern void flush_dcache_page_impl(struct page *page);
+void __flush_dcache_page(void *addr, int flush_icache);
+void flush_dcache_page_impl(struct page *page);
 #ifdef CONFIG_SMP
-extern void smp_flush_dcache_page_impl(struct page *page, int cpu);
-extern void flush_dcache_page_all(struct mm_struct *mm, struct page *page);
+void smp_flush_dcache_page_impl(struct page *page, int cpu);
+void flush_dcache_page_all(struct mm_struct *mm, struct page *page);
 #else
 #define smp_flush_dcache_page_impl(page,cpu) flush_dcache_page_impl(page)
 #define flush_dcache_page_all(mm,page) flush_dcache_page_impl(page)
 #endif
 
-extern void __flush_dcache_range(unsigned long start, unsigned long end);
+void __flush_dcache_range(unsigned long start, unsigned long end);
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
-extern void flush_dcache_page(struct page *page);
+void flush_dcache_page(struct page *page);
 
 #define flush_icache_page(vma, pg)	do { } while(0)
 #define flush_icache_user_range(vma,pg,adr,len)	do { } while (0)
 
-extern void flush_ptrace_access(struct vm_area_struct *, struct page *,
-				unsigned long uaddr, void *kaddr,
-				unsigned long len, int write);
+void flush_ptrace_access(struct vm_area_struct *, struct page *,
+			 unsigned long uaddr, void *kaddr,
+			 unsigned long len, int write);
 
 #define copy_to_user_page(vma, page, vaddr, dst, src, len)		\
 	do {								\
+2-2
arch/sparc/include/asm/checksum_32.h
···
  *
  * it's best to have buff aligned on a 32-bit boundary
  */
-extern __wsum csum_partial(const void *buff, int len, __wsum sum);
+__wsum csum_partial(const void *buff, int len, __wsum sum);
 
 /* the same as csum_partial, but copies from fs:src while it
  * checksums
···
  * better 64-bit) boundary
  */
 
-extern unsigned int __csum_partial_copy_sparc_generic (const unsigned char *, unsigned char *);
+unsigned int __csum_partial_copy_sparc_generic (const unsigned char *, unsigned char *);
 
 static inline __wsum
 csum_partial_copy_nocheck(const void *src, void *dst, int len, __wsum sum)
+16-16
arch/sparc/include/asm/checksum_64.h
···
  *
  * it's best to have buff aligned on a 32-bit boundary
  */
-extern __wsum csum_partial(const void * buff, int len, __wsum sum);
+__wsum csum_partial(const void * buff, int len, __wsum sum);
 
 /* the same as csum_partial, but copies from user space while it
  * checksums
···
  * here even more important to align src and dst on a 32-bit (or even
  * better 64-bit) boundary
  */
-extern __wsum csum_partial_copy_nocheck(const void *src, void *dst,
-					int len, __wsum sum);
+__wsum csum_partial_copy_nocheck(const void *src, void *dst,
+				 int len, __wsum sum);
 
-extern long __csum_partial_copy_from_user(const void __user *src,
-					  void *dst, int len,
-					  __wsum sum);
+long __csum_partial_copy_from_user(const void __user *src,
+				   void *dst, int len,
+				   __wsum sum);
 
 static inline __wsum
 csum_partial_copy_from_user(const void __user *src,
···
  * Copy and checksum to user
  */
 #define HAVE_CSUM_COPY_USER
-extern long __csum_partial_copy_to_user(const void *src,
-					void __user *dst, int len,
-					__wsum sum);
+long __csum_partial_copy_to_user(const void *src,
+				 void __user *dst, int len,
+				 __wsum sum);
 
 static inline __wsum
 csum_and_copy_to_user(const void *src,
···
 /* ihl is always 5 or greater, almost always is 5, and iph is word aligned
  * the majority of the time.
  */
-extern __sum16 ip_fast_csum(const void *iph, unsigned int ihl);
+__sum16 ip_fast_csum(const void *iph, unsigned int ihl);
 
 /* Fold a partial checksum without adding pseudo headers. */
 static inline __sum16 csum_fold(__wsum sum)
···
 }
 
 static inline __wsum csum_tcpudp_nofold(__be32 saddr, __be32 daddr,
-					       unsigned int len,
-					       unsigned short proto,
-					       __wsum sum)
+					unsigned int len,
+					unsigned short proto,
+					__wsum sum)
 {
 	__asm__ __volatile__(
 "	addcc		%1, %0, %0\n"
···
  * returns a 16-bit checksum, already complemented
  */
 static inline __sum16 csum_tcpudp_magic(__be32 saddr, __be32 daddr,
-					       unsigned short len,
-					       unsigned short proto,
-					       __wsum sum)
+					unsigned short len,
+					unsigned short proto,
+					__wsum sum)
 {
 	return csum_fold(csum_tcpudp_nofold(saddr,daddr,len,proto,sum));
 }
+3-3
arch/sparc/include/asm/cmpxchg_32.h
···
 	return val;
 }
 
-extern void __xchg_called_with_bad_pointer(void);
+void __xchg_called_with_bad_pointer(void);
 
 static inline unsigned long __xchg(unsigned long x, __volatile__ void * ptr, int size)
 {
···
 #define __HAVE_ARCH_CMPXCHG	1
 
 /* bug catcher for when unsupported size is used - won't link */
-extern void __cmpxchg_called_with_bad_pointer(void);
+void __cmpxchg_called_with_bad_pointer(void);
 /* we only need to support cmpxchg of a u32 on sparc */
-extern unsigned long __cmpxchg_u32(volatile u32 *m, u32 old, u32 new_);
+unsigned long __cmpxchg_u32(volatile u32 *m, u32 old, u32 new_);
 
 /* don't worry...optimizer will get rid of most of this */
 static inline unsigned long
+2-2
arch/sparc/include/asm/cmpxchg_64.h
···
 
 #define xchg(ptr,x) ((__typeof__(*(ptr)))__xchg((unsigned long)(x),(ptr),sizeof(*(ptr))))
 
-extern void __xchg_called_with_bad_pointer(void);
+void __xchg_called_with_bad_pointer(void);
 
 static inline unsigned long __xchg(unsigned long x, __volatile__ void * ptr,
 				   int size)
···
 
 /* This function doesn't exist, so you'll get a linker error
    if something tries to do an invalid cmpxchg().  */
-extern void __cmpxchg_called_with_bad_pointer(void);
+void __cmpxchg_called_with_bad_pointer(void);
 
 static inline unsigned long
 __cmpxchg(volatile void *ptr, unsigned long old, unsigned long new, int size)
arch/sparc/include/asm/cpudata_64.h
···
 
 #ifndef __ASSEMBLY__
 
-#include <linux/percpu.h>
-#include <linux/threads.h>
-
 typedef struct {
 	/* Dcache line 1 */
 	unsigned int	__softirq_pending; /* must be 1st, see rtrap.S */
···
 DECLARE_PER_CPU(cpuinfo_sparc, __cpu_data);
 #define cpu_data(__cpu)		per_cpu(__cpu_data, (__cpu))
 #define local_cpu_data()	__get_cpu_var(__cpu_data)
-
-extern const struct seq_operations cpuinfo_op;
 
 #endif /* !(__ASSEMBLY__) */
 
+2-2
arch/sparc/include/asm/delay_32.h
···
 }
 
 /* This is too messy with inline asm on the Sparc. */
-extern void __udelay(unsigned long usecs, unsigned long lpj);
-extern void __ndelay(unsigned long nsecs, unsigned long lpj);
+void __udelay(unsigned long usecs, unsigned long lpj);
+void __ndelay(unsigned long nsecs, unsigned long lpj);
 
 #ifdef CONFIG_SMP
 #define __udelay_val cpu_data(smp_processor_id()).udelay_val
+2-2
arch/sparc/include/asm/delay_64.h
···
 
 #ifndef __ASSEMBLY__
 
-extern void __delay(unsigned long loops);
-extern void udelay(unsigned long usecs);
+void __delay(unsigned long loops);
+void udelay(unsigned long usecs);
 #define mdelay(n)	udelay((n) * 1000)
 
 #endif /* !__ASSEMBLY__ */
arch/sparc/include/asm/ebus_dma.h
···
 	unsigned char	name[64];
 };
 
-extern int ebus_dma_register(struct ebus_dma_info *p);
-extern int ebus_dma_irq_enable(struct ebus_dma_info *p, int on);
-extern void ebus_dma_unregister(struct ebus_dma_info *p);
-extern int ebus_dma_request(struct ebus_dma_info *p, dma_addr_t bus_addr,
+int ebus_dma_register(struct ebus_dma_info *p);
+int ebus_dma_irq_enable(struct ebus_dma_info *p, int on);
+void ebus_dma_unregister(struct ebus_dma_info *p);
+int ebus_dma_request(struct ebus_dma_info *p, dma_addr_t bus_addr,
 			    size_t len);
-extern void ebus_dma_prepare(struct ebus_dma_info *p, int write);
-extern unsigned int ebus_dma_residue(struct ebus_dma_info *p);
-extern unsigned int ebus_dma_addr(struct ebus_dma_info *p);
-extern void ebus_dma_enable(struct ebus_dma_info *p, int on);
+void ebus_dma_prepare(struct ebus_dma_info *p, int write);
+unsigned int ebus_dma_residue(struct ebus_dma_info *p);
+unsigned int ebus_dma_addr(struct ebus_dma_info *p);
+void ebus_dma_enable(struct ebus_dma_info *p, int on);
 
 #endif /* __ASM_SPARC_EBUS_DMA_H */
+3-11
arch/sparc/include/asm/floppy_32.h
···
 #include <linux/of.h>
 #include <linux/of_device.h>
 
-#include <asm/page.h>
 #include <asm/pgtable.h>
 #include <asm/idprom.h>
 #include <asm/oplib.h>
 #include <asm/auxio.h>
+#include <asm/setup.h>
+#include <asm/page.h>
 #include <asm/irq.h>
 
 /* We don't need no stinkin' I/O port allocation crap. */
···
 
 /* You'll only ever find one controller on a SparcStation anyways. */
 static struct sun_flpy_controller *sun_fdc = NULL;
-extern volatile unsigned char *fdc_status;
 
 struct sun_floppy_ops {
 	unsigned char	(*fd_inb)(int port);
···
  * underruns. If non-zero, doing_pdma encodes the direction of
  * the transfer for debugging.  1=read 2=write
  */
-extern char *pdma_vaddr;
-extern unsigned long pdma_size;
-extern volatile int doing_pdma;
-
-/* This is software state */
-extern char *pdma_base;
-extern unsigned long pdma_areasize;
 
 /* Common routines to all controller types on the Sparc. */
 static inline void virtual_dma_init(void)
···
 	pdma_areasize = pdma_size;
 }
 
-extern int sparc_floppy_request_irq(unsigned int irq,
-				    irq_handler_t irq_handler);
+int sparc_floppy_request_irq(unsigned int irq, irq_handler_t irq_handler);
 
 static int sun_fd_request_irq(void)
 {
arch/sparc/include/asm/ftrace.h
···
 #define MCOUNT_INSN_SIZE	4 /* sizeof mcount call */
 
 #ifndef __ASSEMBLY__
-extern void _mcount(void);
+void _mcount(void);
 #endif
 
 #endif
···
 struct dyn_arch_ftrace {
 };
 #endif /* CONFIG_DYNAMIC_FTRACE */
+
+unsigned long prepare_ftrace_return(unsigned long parent,
+				    unsigned long self_addr,
+				    unsigned long frame_pointer);
 
 #endif /* _ASM_SPARC64_FTRACE */
+5-5
arch/sparc/include/asm/highmem.h
···
 extern pgprot_t kmap_prot;
 extern pte_t *pkmap_page_table;
 
-extern void kmap_init(void) __init;
+void kmap_init(void) __init;
 
 /*
  * Right now we initialize only a single pte table. It can be extended
···
 
 #define PKMAP_END (PKMAP_ADDR(LAST_PKMAP))
 
-extern void *kmap_high(struct page *page);
-extern void kunmap_high(struct page *page);
+void *kmap_high(struct page *page);
+void kunmap_high(struct page *page);
 
 static inline void *kmap(struct page *page)
 {
···
 	kunmap_high(page);
 }
 
-extern void *kmap_atomic(struct page *page);
-extern void __kunmap_atomic(void *kvaddr);
+void *kmap_atomic(struct page *page);
+void __kunmap_atomic(void *kvaddr);
 
 #define flush_cache_kmaps()	flush_cache_all()
 
+1-1
arch/sparc/include/asm/hvtramp.h
···
 	struct hvtramp_mapping	maps[1];
 };
 
-extern void hv_cpu_startup(unsigned long hvdescr_pa);
+void hv_cpu_startup(unsigned long hvdescr_pa);
 
 #endif
+163-160
arch/sparc/include/asm/hypervisor.h
···
 #define HV_FAST_MACH_EXIT	0x00
 
 #ifndef __ASSEMBLY__
-extern void sun4v_mach_exit(unsigned long exit_code);
+void sun4v_mach_exit(unsigned long exit_code);
 #endif
 
 /* Domain services. */
···
 #define HV_FAST_MACH_DESC	0x01
 
 #ifndef __ASSEMBLY__
-extern unsigned long sun4v_mach_desc(unsigned long buffer_pa,
-				     unsigned long buf_len,
-				     unsigned long *real_buf_len);
+unsigned long sun4v_mach_desc(unsigned long buffer_pa,
+			      unsigned long buf_len,
+			      unsigned long *real_buf_len);
 #endif
 
 /* mach_sir()
···
 #define HV_FAST_MACH_SIR	0x02
 
 #ifndef __ASSEMBLY__
-extern void sun4v_mach_sir(void);
+void sun4v_mach_sir(void);
 #endif
 
 /* mach_set_watchdog()
···
 #define HV_FAST_MACH_SET_WATCHDOG	0x05
 
 #ifndef __ASSEMBLY__
-extern unsigned long sun4v_mach_set_watchdog(unsigned long timeout,
-					     unsigned long *orig_timeout);
+unsigned long sun4v_mach_set_watchdog(unsigned long timeout,
+				      unsigned long *orig_timeout);
 #endif
 
 /* CPU services.
···
 #define HV_FAST_CPU_START	0x10
 
 #ifndef __ASSEMBLY__
-extern unsigned long sun4v_cpu_start(unsigned long cpuid,
-				     unsigned long pc,
-				     unsigned long rtba,
-				     unsigned long arg0);
+unsigned long sun4v_cpu_start(unsigned long cpuid,
+			      unsigned long pc,
+			      unsigned long rtba,
+			      unsigned long arg0);
 #endif
 
 /* cpu_stop()
···
 #define HV_FAST_CPU_STOP	0x11
 
 #ifndef __ASSEMBLY__
-extern unsigned long sun4v_cpu_stop(unsigned long cpuid);
+unsigned long sun4v_cpu_stop(unsigned long cpuid);
 #endif
 
 /* cpu_yield()
···
 #define HV_FAST_CPU_YIELD	0x12
 
 #ifndef __ASSEMBLY__
-extern unsigned long sun4v_cpu_yield(void);
+unsigned long sun4v_cpu_yield(void);
 #endif
 
 /* cpu_qconf()
···
 #define HV_CPU_QUEUE_NONRES_ERROR	0x3f
 
 #ifndef __ASSEMBLY__
-extern unsigned long sun4v_cpu_qconf(unsigned long type,
-				     unsigned long queue_paddr,
-				     unsigned long num_queue_entries);
+unsigned long sun4v_cpu_qconf(unsigned long type,
+			      unsigned long queue_paddr,
+			      unsigned long num_queue_entries);
 #endif
 
 /* cpu_qinfo()
···
 #define HV_FAST_CPU_MONDO_SEND	0x42
 
 #ifndef __ASSEMBLY__
-extern unsigned long sun4v_cpu_mondo_send(unsigned long cpu_count, unsigned long cpu_list_pa, unsigned long mondo_block_pa);
+unsigned long sun4v_cpu_mondo_send(unsigned long cpu_count,
+				   unsigned long cpu_list_pa,
+				   unsigned long mondo_block_pa);
 #endif
 
 /* cpu_myid()
···
 #define HV_CPU_STATE_ERROR	0x03
 
 #ifndef __ASSEMBLY__
-extern long sun4v_cpu_state(unsigned long cpuid);
+long sun4v_cpu_state(unsigned long cpuid);
 #endif
 
 /* cpu_set_rtba()
···
 #define HV_FAST_MMU_TSB_CTX0	0x20
 
 #ifndef __ASSEMBLY__
-extern unsigned long sun4v_mmu_tsb_ctx0(unsigned long num_descriptions,
-					unsigned long tsb_desc_ra);
+unsigned long sun4v_mmu_tsb_ctx0(unsigned long num_descriptions,
+				 unsigned long tsb_desc_ra);
 #endif
 
 /* mmu_tsb_ctxnon0()
···
 #define HV_FAST_MMU_DEMAP_ALL	0x24
 
 #ifndef __ASSEMBLY__
-extern void sun4v_mmu_demap_all(void);
+void sun4v_mmu_demap_all(void);
 #endif
 
 /* mmu_map_perm_addr()
···
 #define HV_FAST_MMU_MAP_PERM_ADDR	0x25
 
 #ifndef __ASSEMBLY__
-extern unsigned long sun4v_mmu_map_perm_addr(unsigned long vaddr,
-					     unsigned long set_to_zero,
-					     unsigned long tte,
-					     unsigned long flags);
+unsigned long sun4v_mmu_map_perm_addr(unsigned long vaddr,
+				      unsigned long set_to_zero,
+				      unsigned long tte,
+				      unsigned long flags);
 #endif
 
 /* mmu_fault_area_conf()
···
 #define HV_FAST_TOD_GET	0x50
 
 #ifndef __ASSEMBLY__
-extern unsigned long sun4v_tod_get(unsigned long *time);
+unsigned long sun4v_tod_get(unsigned long *time);
 #endif
 
 /* tod_set()
···
 #define HV_FAST_TOD_SET	0x51
 
 #ifndef __ASSEMBLY__
-extern unsigned long sun4v_tod_set(unsigned long time);
+unsigned long sun4v_tod_set(unsigned long time);
 #endif
 
 /* Console services */
···
 #define HV_FAST_CONS_WRITE	0x63
 
 #ifndef __ASSEMBLY__
-extern long sun4v_con_getchar(long *status);
-extern long sun4v_con_putchar(long c);
-extern long sun4v_con_read(unsigned long buffer,
-			   unsigned long size,
-			   unsigned long *bytes_read);
-extern unsigned long sun4v_con_write(unsigned long buffer,
-				     unsigned long size,
-				     unsigned long *bytes_written);
+long sun4v_con_getchar(long *status);
+long sun4v_con_putchar(long c);
+long sun4v_con_read(unsigned long buffer,
+		    unsigned long size,
+		    unsigned long *bytes_read);
+unsigned long sun4v_con_write(unsigned long buffer,
+			      unsigned long size,
+			      unsigned long *bytes_written);
 #endif
 
 /* mach_set_soft_state()
···
 #define HV_SOFT_STATE_TRANSITION	0x02
 
 #ifndef __ASSEMBLY__
-extern unsigned long sun4v_mach_set_soft_state(unsigned long soft_state,
-					       unsigned long msg_string_ra);
+unsigned long sun4v_mach_set_soft_state(unsigned long soft_state,
+					unsigned long msg_string_ra);
 #endif
 
 /* mach_get_soft_state()
···
 #define HV_FAST_SVC_CLRSTATUS	0x84
 
 #ifndef __ASSEMBLY__
-extern unsigned long sun4v_svc_send(unsigned long svc_id,
-				    unsigned long buffer,
-				    unsigned long buffer_size,
-				    unsigned long *sent_bytes);
-extern unsigned long sun4v_svc_recv(unsigned long svc_id,
-				    unsigned long buffer,
-				    unsigned long buffer_size,
-				    unsigned long *recv_bytes);
-extern unsigned long sun4v_svc_getstatus(unsigned long svc_id,
-					 unsigned long *status_bits);
-extern unsigned long sun4v_svc_setstatus(unsigned long svc_id,
-					 unsigned long status_bits);
-extern unsigned long sun4v_svc_clrstatus(unsigned long svc_id,
-					 unsigned long status_bits);
+unsigned long sun4v_svc_send(unsigned long svc_id,
+			     unsigned long buffer,
+			     unsigned long buffer_size,
+			     unsigned long *sent_bytes);
+unsigned long sun4v_svc_recv(unsigned long svc_id,
+			     unsigned long buffer,
+			     unsigned long buffer_size,
+			     unsigned long *recv_bytes);
+unsigned long sun4v_svc_getstatus(unsigned long svc_id,
+				  unsigned long *status_bits);
+unsigned long sun4v_svc_setstatus(unsigned long svc_id,
+				  unsigned long status_bits);
+unsigned long sun4v_svc_clrstatus(unsigned long svc_id,
+				  unsigned long status_bits);
 #endif
 
 /* Trap trace services.
···
 #define HV_FAST_INTR_DEVINO2SYSINO	0xa0
 
 #ifndef __ASSEMBLY__
-extern unsigned long sun4v_devino_to_sysino(unsigned long devhandle,
-					    unsigned long devino);
+unsigned long sun4v_devino_to_sysino(unsigned long devhandle,
+				     unsigned long devino);
 #endif
 
 /* intr_getenabled()
···
 #define HV_FAST_INTR_GETENABLED	0xa1
 
 #ifndef __ASSEMBLY__
-extern unsigned long sun4v_intr_getenabled(unsigned long sysino);
+unsigned long sun4v_intr_getenabled(unsigned long sysino);
 #endif
 
 /* intr_setenabled()
···
 #define HV_FAST_INTR_SETENABLED	0xa2
 
 #ifndef __ASSEMBLY__
-extern unsigned long sun4v_intr_setenabled(unsigned long sysino, unsigned long intr_enabled);
+unsigned long sun4v_intr_setenabled(unsigned long sysino,
+				    unsigned long intr_enabled);
 #endif
 
 /* intr_getstate()
···
 #define HV_FAST_INTR_GETSTATE	0xa3
 
 #ifndef __ASSEMBLY__
-extern unsigned long sun4v_intr_getstate(unsigned long sysino);
+unsigned long sun4v_intr_getstate(unsigned long sysino);
 #endif
 
 /* intr_setstate()
···
 #define HV_FAST_INTR_SETSTATE	0xa4
 
 #ifndef __ASSEMBLY__
-extern unsigned long sun4v_intr_setstate(unsigned long sysino, unsigned long intr_state);
+unsigned long sun4v_intr_setstate(unsigned long sysino, unsigned long intr_state);
 #endif
 
 /* intr_gettarget()
···
 #define HV_FAST_INTR_GETTARGET	0xa5
 
 #ifndef __ASSEMBLY__
-extern unsigned long sun4v_intr_gettarget(unsigned long sysino);
+unsigned long sun4v_intr_gettarget(unsigned long sysino);
 #endif
 
 /* intr_settarget()
···
 #define HV_FAST_INTR_SETTARGET	0xa6
 
 #ifndef __ASSEMBLY__
-extern unsigned long sun4v_intr_settarget(unsigned long sysino, unsigned long cpuid);
+unsigned long sun4v_intr_settarget(unsigned long sysino, unsigned long cpuid);
 #endif
 
 /* vintr_get_cookie()
···
 #define HV_FAST_VINTR_SET_TARGET	0xae
 
 #ifndef __ASSEMBLY__
-extern unsigned long sun4v_vintr_get_cookie(unsigned long dev_handle,
-					    unsigned long dev_ino,
-					    unsigned long *cookie);
-extern unsigned long sun4v_vintr_set_cookie(unsigned long dev_handle,
-					    unsigned long dev_ino,
-					    unsigned long cookie);
-extern unsigned long sun4v_vintr_get_valid(unsigned long dev_handle,
-					   unsigned long dev_ino,
-					   unsigned long *valid);
-extern unsigned long sun4v_vintr_set_valid(unsigned long dev_handle,
-					   unsigned long dev_ino,
-					   unsigned long valid);
-extern unsigned long sun4v_vintr_get_state(unsigned long dev_handle,
-					   unsigned long dev_ino,
-					   unsigned long *state);
-extern unsigned long sun4v_vintr_set_state(unsigned long dev_handle,
-					   unsigned long dev_ino,
-					   unsigned long state);
-extern unsigned long sun4v_vintr_get_target(unsigned long dev_handle,
-					    unsigned long dev_ino,
-					    unsigned long *cpuid);
-extern unsigned long sun4v_vintr_set_target(unsigned long dev_handle,
-					    unsigned long dev_ino,
-					    unsigned long cpuid);
+unsigned long sun4v_vintr_get_cookie(unsigned long dev_handle,
+				     unsigned long dev_ino,
+				     unsigned long *cookie);
+unsigned long sun4v_vintr_set_cookie(unsigned long dev_handle,
+				     unsigned long dev_ino,
+				     unsigned long cookie);
+unsigned long sun4v_vintr_get_valid(unsigned long dev_handle,
+				    unsigned long dev_ino,
+				    unsigned long *valid);
+unsigned long sun4v_vintr_set_valid(unsigned long dev_handle,
+				    unsigned long dev_ino,
+				    unsigned long valid);
+unsigned long sun4v_vintr_get_state(unsigned long dev_handle,
+				    unsigned long dev_ino,
+				    unsigned long *state);
+unsigned long sun4v_vintr_set_state(unsigned long dev_handle,
+				    unsigned long dev_ino,
+				    unsigned long state);
+unsigned long sun4v_vintr_get_target(unsigned long dev_handle,
+				     unsigned long dev_ino,
+				     unsigned long *cpuid);
+unsigned long sun4v_vintr_set_target(unsigned long dev_handle,
+				     unsigned long dev_ino,
+				     unsigned long cpuid);
 #endif
 
 /* PCI IO services.
···
 #define HV_FAST_LDC_REVOKE	0xef
 
 #ifndef __ASSEMBLY__
-extern unsigned long sun4v_ldc_tx_qconf(unsigned long channel,
-					unsigned long ra,
-					unsigned long num_entries);
-extern unsigned long sun4v_ldc_tx_qinfo(unsigned long channel,
-					unsigned long *ra,
-					unsigned long *num_entries);
-extern unsigned long sun4v_ldc_tx_get_state(unsigned long channel,
-					    unsigned long *head_off,
-					    unsigned long *tail_off,
-					    unsigned long *chan_state);
-extern unsigned long sun4v_ldc_tx_set_qtail(unsigned long channel,
-					    unsigned long tail_off);
-extern unsigned long sun4v_ldc_rx_qconf(unsigned long channel,
-					unsigned long ra,
-					unsigned long num_entries);
-extern unsigned long sun4v_ldc_rx_qinfo(unsigned long channel,
-					unsigned long *ra,
-					unsigned long *num_entries);
-extern unsigned long sun4v_ldc_rx_get_state(unsigned long channel,
-					    unsigned long *head_off,
-					    unsigned long *tail_off,
-					    unsigned long *chan_state);
-extern unsigned long sun4v_ldc_rx_set_qhead(unsigned long channel,
-					    unsigned long head_off);
-extern unsigned long sun4v_ldc_set_map_table(unsigned long channel,
-					     unsigned long ra,
-					     unsigned long num_entries);
-extern unsigned long sun4v_ldc_get_map_table(unsigned long channel,
-					     unsigned long *ra,
-					     unsigned long *num_entries);
-extern unsigned long sun4v_ldc_copy(unsigned long channel,
-				    unsigned long dir_code,
-				    unsigned long tgt_raddr,
-				    unsigned long lcl_raddr,
-				    unsigned long len,
-				    unsigned long *actual_len);
-extern unsigned long sun4v_ldc_mapin(unsigned long channel,
-				     unsigned long cookie,
-				     unsigned long *ra,
-				     unsigned long *perm);
-extern unsigned long sun4v_ldc_unmap(unsigned long ra);
-extern unsigned long sun4v_ldc_revoke(unsigned long channel,
-				      unsigned long cookie,
-				      unsigned long mte_cookie);
+unsigned long sun4v_ldc_tx_qconf(unsigned long channel,
+				 unsigned long ra,
+				 unsigned long num_entries);
+unsigned long sun4v_ldc_tx_qinfo(unsigned long channel,
+				 unsigned long *ra,
+				 unsigned long *num_entries);
+unsigned long sun4v_ldc_tx_get_state(unsigned long channel,
+				     unsigned long *head_off,
+				     unsigned long *tail_off,
+				     unsigned long *chan_state);
+unsigned long sun4v_ldc_tx_set_qtail(unsigned long channel,
+				     unsigned long tail_off);
+unsigned long sun4v_ldc_rx_qconf(unsigned long channel,
+				 unsigned long ra,
+				 unsigned long num_entries);
+unsigned long sun4v_ldc_rx_qinfo(unsigned long channel,
+				 unsigned long *ra,
+				 unsigned long *num_entries);
+unsigned long sun4v_ldc_rx_get_state(unsigned long channel,
+				     unsigned long *head_off,
+				     unsigned long *tail_off,
+				     unsigned long *chan_state);
+unsigned long sun4v_ldc_rx_set_qhead(unsigned long channel,
+				     unsigned long head_off);
+unsigned long sun4v_ldc_set_map_table(unsigned long channel,
+				      unsigned long ra,
+				      unsigned long num_entries);
+unsigned long sun4v_ldc_get_map_table(unsigned long channel,
+				      unsigned long *ra,
+				      unsigned long *num_entries);
+unsigned long sun4v_ldc_copy(unsigned long channel,
+			     unsigned long dir_code,
+			     unsigned long tgt_raddr,
+			     unsigned long lcl_raddr,
+			     unsigned long len,
+			     unsigned long *actual_len);
+unsigned long sun4v_ldc_mapin(unsigned long channel,
+			      unsigned long cookie,
+			      unsigned long *ra,
+			      unsigned long *perm);
+unsigned long sun4v_ldc_unmap(unsigned long ra);
+unsigned long sun4v_ldc_revoke(unsigned long channel,
+			       unsigned long cookie,
+			       unsigned long mte_cookie);
 #endif
 
 /* Performance counter services. */
···
 #define HV_FAST_N2_SET_PERFREG	0x105
 
 #ifndef __ASSEMBLY__
-extern unsigned long sun4v_niagara_getperf(unsigned long reg,
-					   unsigned long *val);
-extern unsigned long sun4v_niagara_setperf(unsigned long reg,
-					   unsigned long val);
-extern unsigned long sun4v_niagara2_getperf(unsigned long reg,
-					    unsigned long *val);
-extern unsigned long sun4v_niagara2_setperf(unsigned long reg,
-					    unsigned long val);
+unsigned long sun4v_niagara_getperf(unsigned long reg,
+				    unsigned long *val);
+unsigned long sun4v_niagara_setperf(unsigned long reg,
+				    unsigned long val);
+unsigned long sun4v_niagara2_getperf(unsigned long reg,
+				     unsigned long *val);
+unsigned long sun4v_niagara2_setperf(unsigned long reg,
+				     unsigned long val);
 #endif
 
 /* MMU statistics services.
···
 #define HV_FAST_MMUSTAT_INFO	0x103
 
 #ifndef __ASSEMBLY__
-extern unsigned long sun4v_mmustat_conf(unsigned long ra, unsigned long *orig_ra);
-extern unsigned long sun4v_mmustat_info(unsigned long *ra);
+unsigned long sun4v_mmustat_conf(unsigned long ra, unsigned long *orig_ra);
+unsigned long sun4v_mmustat_info(unsigned long *ra);
 #endif
 
 /* NCS crypto services */
···
 #define HV_FAST_NCS_REQUEST	0x110
 
 #ifndef __ASSEMBLY__
-extern unsigned long sun4v_ncs_request(unsigned long request,
-				       unsigned long arg_ra,
-				       unsigned long arg_size);
+unsigned long sun4v_ncs_request(unsigned long request,
+				unsigned long arg_ra,
+				unsigned long arg_size);
 #endif
 
 #define HV_FAST_FIRE_GET_PERFREG	0x120
···
 #define HV_FAST_REBOOT_DATA_SET	0x172
 
 #ifndef __ASSEMBLY__
-extern unsigned long sun4v_reboot_data_set(unsigned long ra,
-					   unsigned long len);
+unsigned long sun4v_reboot_data_set(unsigned long ra,
+				    unsigned long len);
 #endif
 
 #define HV_FAST_VT_GET_PERFREG		0x184
 #define HV_FAST_VT_SET_PERFREG		0x185
 
 #ifndef __ASSEMBLY__
-extern unsigned long sun4v_vt_get_perfreg(unsigned long reg_num,
-					  unsigned long *reg_val);
-extern unsigned long sun4v_vt_set_perfreg(unsigned long reg_num,
-					  unsigned long reg_val);
+unsigned long sun4v_vt_get_perfreg(unsigned long reg_num,
+				   unsigned long *reg_val);
+unsigned long sun4v_vt_set_perfreg(unsigned long reg_num,
+				   unsigned long reg_val);
 #endif
 
 /* Function numbers for HV_CORE_TRAP. */
···
 #define HV_GRP_DIAG	0x0300
 
 #ifndef __ASSEMBLY__
-extern unsigned long sun4v_get_version(unsigned long group,
-				       unsigned long *major,
-				       unsigned long *minor);
-extern unsigned long sun4v_set_version(unsigned long group,
-				       unsigned long major,
-				       unsigned long minor,
-				       unsigned long *actual_minor);
+unsigned long sun4v_get_version(unsigned long group,
+				unsigned long *major,
+				unsigned long *minor);
+unsigned long sun4v_set_version(unsigned long group,
+				unsigned long major,
+				unsigned long minor,
+				unsigned long *actual_minor);
 
-extern int sun4v_hvapi_register(unsigned long group, unsigned long major,
-				unsigned long *minor);
-extern void sun4v_hvapi_unregister(unsigned long group);
-extern int sun4v_hvapi_get(unsigned long group,
-			   unsigned long *major,
-			   unsigned long *minor);
-extern void sun4v_hvapi_init(void);
+int sun4v_hvapi_register(unsigned long group, unsigned long major,
+			 unsigned long *minor);
+void sun4v_hvapi_unregister(unsigned long group);
+int sun4v_hvapi_get(unsigned long group,
+		    unsigned long *major,
+		    unsigned long *minor);
+void sun4v_hvapi_init(void);
sun4v_hvapi_init(void);29992996#endif3000299730012998#endif /* !(_SPARC64_HYPERVISOR_H) */
···
  */
 #define NR_IRQS		255
 
-extern void irq_install_pre_handler(int irq,
-				    void (*func)(unsigned int, void *, void *),
-				    void *arg1, void *arg2);
+void irq_install_pre_handler(int irq,
+			     void (*func)(unsigned int, void *, void *),
+			     void *arg1, void *arg2);
 #define irq_canonicalize(irq)	(irq)
-extern unsigned int build_irq(int inofixup, unsigned long iclr, unsigned long imap);
-extern unsigned int sun4v_build_irq(u32 devhandle, unsigned int devino);
-extern unsigned int sun4v_build_virq(u32 devhandle, unsigned int devino);
-extern unsigned int sun4v_build_msi(u32 devhandle, unsigned int *irq_p,
-				    unsigned int msi_devino_start,
-				    unsigned int msi_devino_end);
-extern void sun4v_destroy_msi(unsigned int irq);
-extern unsigned int sun4u_build_msi(u32 portid, unsigned int *irq_p,
-				    unsigned int msi_devino_start,
-				    unsigned int msi_devino_end,
-				    unsigned long imap_base,
-				    unsigned long iclr_base);
-extern void sun4u_destroy_msi(unsigned int irq);
+unsigned int build_irq(int inofixup, unsigned long iclr, unsigned long imap);
+unsigned int sun4v_build_irq(u32 devhandle, unsigned int devino);
+unsigned int sun4v_build_virq(u32 devhandle, unsigned int devino);
+unsigned int sun4v_build_msi(u32 devhandle, unsigned int *irq_p,
+			     unsigned int msi_devino_start,
+			     unsigned int msi_devino_end);
+void sun4v_destroy_msi(unsigned int irq);
+unsigned int sun4u_build_msi(u32 portid, unsigned int *irq_p,
+			     unsigned int msi_devino_start,
+			     unsigned int msi_devino_end,
+			     unsigned long imap_base,
+			     unsigned long iclr_base);
+void sun4u_destroy_msi(unsigned int irq);
 
-extern unsigned char irq_alloc(unsigned int dev_handle,
-			       unsigned int dev_ino);
+unsigned char irq_alloc(unsigned int dev_handle,
+			unsigned int dev_ino);
 #ifdef CONFIG_PCI_MSI
-extern void irq_free(unsigned int irq);
+void irq_free(unsigned int irq);
 #endif
 
-extern void __init init_IRQ(void);
-extern void fixup_irqs(void);
+void __init init_IRQ(void);
+void fixup_irqs(void);
 
 static inline void set_softint(unsigned long bits)
 {
+3 -3  arch/sparc/include/asm/irqflags_32.h
···
 #include <linux/types.h>
 #include <asm/psr.h>
 
-extern void arch_local_irq_restore(unsigned long);
-extern unsigned long arch_local_irq_save(void);
-extern void arch_local_irq_enable(void);
+void arch_local_irq_restore(unsigned long);
+unsigned long arch_local_irq_save(void);
+void arch_local_irq_enable(void);
 
 static inline notrace unsigned long arch_local_save_flags(void)
 {
···
 	struct prev_kprobe prev_kprobe;
 };
 
-extern int kprobe_exceptions_notify(struct notifier_block *self,
-				    unsigned long val, void *data);
-extern int kprobe_fault_handler(struct pt_regs *regs, int trapnr);
+int kprobe_exceptions_notify(struct notifier_block *self,
+			     unsigned long val, void *data);
+int kprobe_fault_handler(struct pt_regs *regs, int trapnr);
+asmlinkage void __kprobes kprobe_trap(unsigned long trap_level,
+				      struct pt_regs *regs);
 #endif /* _SPARC64_KPROBES_H */
+33 -33  arch/sparc/include/asm/ldc.h
···
 #include <asm/hypervisor.h>
 
 extern int ldom_domaining_enabled;
-extern void ldom_set_var(const char *var, const char *value);
-extern void ldom_reboot(const char *boot_command);
-extern void ldom_power_off(void);
+void ldom_set_var(const char *var, const char *value);
+void ldom_reboot(const char *boot_command);
+void ldom_power_off(void);
 
 /* The event handler will be evoked when link state changes
  * or data becomes available on the receive side.
···
 struct ldc_channel;
 
 /* Allocate state for a channel.  */
-extern struct ldc_channel *ldc_alloc(unsigned long id,
-				     const struct ldc_channel_config *cfgp,
-				     void *event_arg);
+struct ldc_channel *ldc_alloc(unsigned long id,
+			      const struct ldc_channel_config *cfgp,
+			      void *event_arg);
 
 /* Shut down and free state for a channel.  */
-extern void ldc_free(struct ldc_channel *lp);
+void ldc_free(struct ldc_channel *lp);
 
 /* Register TX and RX queues of the link with the hypervisor.  */
-extern int ldc_bind(struct ldc_channel *lp, const char *name);
+int ldc_bind(struct ldc_channel *lp, const char *name);
 
 /* For non-RAW protocols we need to complete a handshake before
  * communication can proceed.  ldc_connect() does that, if the
  * handshake completes successfully, an LDC_EVENT_UP event will
  * be sent up to the driver.
  */
-extern int ldc_connect(struct ldc_channel *lp);
-extern int ldc_disconnect(struct ldc_channel *lp);
+int ldc_connect(struct ldc_channel *lp);
+int ldc_disconnect(struct ldc_channel *lp);
 
-extern int ldc_state(struct ldc_channel *lp);
+int ldc_state(struct ldc_channel *lp);
 
 /* Read and write operations.  Only valid when the link is up.  */
-extern int ldc_write(struct ldc_channel *lp, const void *buf,
-		     unsigned int size);
-extern int ldc_read(struct ldc_channel *lp, void *buf, unsigned int size);
+int ldc_write(struct ldc_channel *lp, const void *buf,
+	      unsigned int size);
+int ldc_read(struct ldc_channel *lp, void *buf, unsigned int size);
 
 #define LDC_MAP_SHADOW	0x01
 #define LDC_MAP_DIRECT	0x02
···
 };
 
 struct scatterlist;
-extern int ldc_map_sg(struct ldc_channel *lp,
-		      struct scatterlist *sg, int num_sg,
-		      struct ldc_trans_cookie *cookies, int ncookies,
-		      unsigned int map_perm);
+int ldc_map_sg(struct ldc_channel *lp,
+	       struct scatterlist *sg, int num_sg,
+	       struct ldc_trans_cookie *cookies, int ncookies,
+	       unsigned int map_perm);
 
-extern int ldc_map_single(struct ldc_channel *lp,
-			  void *buf, unsigned int len,
-			  struct ldc_trans_cookie *cookies, int ncookies,
-			  unsigned int map_perm);
+int ldc_map_single(struct ldc_channel *lp,
+		   void *buf, unsigned int len,
+		   struct ldc_trans_cookie *cookies, int ncookies,
+		   unsigned int map_perm);
 
-extern void ldc_unmap(struct ldc_channel *lp, struct ldc_trans_cookie *cookies,
-		      int ncookies);
+void ldc_unmap(struct ldc_channel *lp, struct ldc_trans_cookie *cookies,
+	       int ncookies);
 
-extern int ldc_copy(struct ldc_channel *lp, int copy_dir,
-		    void *buf, unsigned int len, unsigned long offset,
-		    struct ldc_trans_cookie *cookies, int ncookies);
+int ldc_copy(struct ldc_channel *lp, int copy_dir,
+	     void *buf, unsigned int len, unsigned long offset,
+	     struct ldc_trans_cookie *cookies, int ncookies);
 
 static inline int ldc_get_dring_entry(struct ldc_channel *lp,
 				      void *buf, unsigned int len,
···
 	return ldc_copy(lp, LDC_COPY_OUT, buf, len, offset, cookies, ncookies);
 }
 
-extern void *ldc_alloc_exp_dring(struct ldc_channel *lp, unsigned int len,
-				 struct ldc_trans_cookie *cookies,
-				 int *ncookies, unsigned int map_perm);
+void *ldc_alloc_exp_dring(struct ldc_channel *lp, unsigned int len,
+			  struct ldc_trans_cookie *cookies,
+			  int *ncookies, unsigned int map_perm);
 
-extern void ldc_free_exp_dring(struct ldc_channel *lp, void *buf,
-			       unsigned int len,
-			       struct ldc_trans_cookie *cookies, int ncookies);
+void ldc_free_exp_dring(struct ldc_channel *lp, void *buf,
+			unsigned int len,
+			struct ldc_trans_cookie *cookies, int ncookies);
 
 #endif /* _SPARC64_LDC_H */
+27 -27  arch/sparc/include/asm/leon.h
···
 #define LEON_BYPASS_LOAD_PA(x)		leon_load_reg((unsigned long)(x))
 #define LEON_BYPASS_STORE_PA(x, v)	leon_store_reg((unsigned long)(x), (unsigned long)(v))
 
-extern void leon_switch_mm(void);
-extern void leon_init_IRQ(void);
+void leon_switch_mm(void);
+void leon_init_IRQ(void);
 
 static inline unsigned long sparc_leon3_get_dcachecfg(void)
 {
···
 #ifndef __ASSEMBLY__
 struct vm_area_struct;
 
-extern unsigned long leon_swprobe(unsigned long vaddr, unsigned long *paddr);
-extern void leon_flush_icache_all(void);
-extern void leon_flush_dcache_all(void);
-extern void leon_flush_cache_all(void);
-extern void leon_flush_tlb_all(void);
+unsigned long leon_swprobe(unsigned long vaddr, unsigned long *paddr);
+void leon_flush_icache_all(void);
+void leon_flush_dcache_all(void);
+void leon_flush_cache_all(void);
+void leon_flush_tlb_all(void);
 extern int leon_flush_during_switch;
-extern int leon_flush_needed(void);
-extern void leon_flush_pcache_all(struct vm_area_struct *vma, unsigned long page);
+int leon_flush_needed(void);
+void leon_flush_pcache_all(struct vm_area_struct *vma, unsigned long page);
 
 /* struct that hold LEON3 cache configuration registers */
 struct leon3_cacheregs {
···
 
 struct device_node;
 struct task_struct;
-extern unsigned int leon_build_device_irq(unsigned int real_irq,
-					  irq_flow_handler_t flow_handler,
-					  const char *name, int do_ack);
-extern void leon_update_virq_handling(unsigned int virq,
-				      irq_flow_handler_t flow_handler,
-				      const char *name, int do_ack);
-extern void leon_init_timers(void);
-extern void leon_trans_init(struct device_node *dp);
-extern void leon_node_init(struct device_node *dp, struct device_node ***nextp);
-extern void init_leon(void);
-extern void poke_leonsparc(void);
-extern void leon3_getCacheRegs(struct leon3_cacheregs *regs);
+unsigned int leon_build_device_irq(unsigned int real_irq,
+				   irq_flow_handler_t flow_handler,
+				   const char *name, int do_ack);
+void leon_update_virq_handling(unsigned int virq,
+			       irq_flow_handler_t flow_handler,
+			       const char *name, int do_ack);
+void leon_init_timers(void);
+void leon_trans_init(struct device_node *dp);
+void leon_node_init(struct device_node *dp, struct device_node ***nextp);
+void init_leon(void);
+void poke_leonsparc(void);
+void leon3_getCacheRegs(struct leon3_cacheregs *regs);
 extern int leon3_ticker_irq;
 
 #ifdef CONFIG_SMP
-extern int leon_smp_nrcpus(void);
-extern void leon_clear_profile_irq(int cpu);
-extern void leon_smp_done(void);
-extern void leon_boot_cpus(void);
-extern int leon_boot_one_cpu(int i, struct task_struct *);
+int leon_smp_nrcpus(void);
+void leon_clear_profile_irq(int cpu);
+void leon_smp_done(void);
+void leon_boot_cpus(void);
+int leon_boot_one_cpu(int i, struct task_struct *);
 void leon_init_smp(void);
 void leon_enable_irq_cpu(unsigned int irq_nr, unsigned int cpu);
-extern irqreturn_t leon_percpu_timer_interrupt(int irq, void *unused);
+irqreturn_t leon_percpu_timer_interrupt(int irq, void *unused);
 
 extern unsigned int smpleon_ipi[];
 extern unsigned int linux_trap_ipi15_leon[];
···
 	unsigned long pte;
 } __attribute__((aligned(TSB_ENTRY_ALIGNMENT)));
 
-extern void __tsb_insert(unsigned long ent, unsigned long tag, unsigned long pte);
-extern void tsb_flush(unsigned long ent, unsigned long tag);
-extern void tsb_init(struct tsb *tsb, unsigned long size);
+void __tsb_insert(unsigned long ent, unsigned long tag, unsigned long pte);
+void tsb_flush(unsigned long ent, unsigned long tag);
+void tsb_init(struct tsb *tsb, unsigned long size);
 
 struct tsb_config {
 	struct tsb *tsb;
+13 -11  arch/sparc/include/asm/mmu_context_64.h
···
 extern unsigned long tlb_context_cache;
 extern unsigned long mmu_context_bmap[];
 
-extern void get_new_mmu_context(struct mm_struct *mm);
+void get_new_mmu_context(struct mm_struct *mm);
 #ifdef CONFIG_SMP
-extern void smp_new_mmu_context_version(void);
+void smp_new_mmu_context_version(void);
 #else
 #define smp_new_mmu_context_version() do { } while (0)
 #endif
 
-extern int init_new_context(struct task_struct *tsk, struct mm_struct *mm);
-extern void destroy_context(struct mm_struct *mm);
+int init_new_context(struct task_struct *tsk, struct mm_struct *mm);
+void destroy_context(struct mm_struct *mm);
 
-extern void __tsb_context_switch(unsigned long pgd_pa,
-				 struct tsb_config *tsb_base,
-				 struct tsb_config *tsb_huge,
-				 unsigned long tsb_descr_pa);
+void __tsb_context_switch(unsigned long pgd_pa,
+			  struct tsb_config *tsb_base,
+			  struct tsb_config *tsb_huge,
+			  unsigned long tsb_descr_pa);
 
 static inline void tsb_context_switch(struct mm_struct *mm)
 {
···
 			     , __pa(&mm->context.tsb_descr[0]));
 }
 
-extern void tsb_grow(struct mm_struct *mm, unsigned long tsb_index, unsigned long mm_rss);
+void tsb_grow(struct mm_struct *mm,
+	      unsigned long tsb_index,
+	      unsigned long mm_rss);
 #ifdef CONFIG_SMP
-extern void smp_tsb_sync(struct mm_struct *mm);
+void smp_tsb_sync(struct mm_struct *mm);
 #else
 #define smp_tsb_sync(__mm) do { } while (0)
 #endif
···
 			     : "r" (CTX_HWBITS((__mm)->context)), \
 			       "r" (SECONDARY_CONTEXT), "i" (ASI_DMMU), "i" (ASI_MMU))
 
-extern void __flush_tlb_mm(unsigned long, unsigned long);
+void __flush_tlb_mm(unsigned long, unsigned long);
 
 /* Switch the current MM context. */
 static inline void switch_mm(struct mm_struct *old_mm, struct mm_struct *mm, struct task_struct *tsk)
···
 /* You must call prom_init() before using any of the library services,
  * preferably as early as possible.  Pass it the romvec pointer.
  */
-extern void prom_init(struct linux_romvec *rom_ptr);
+void prom_init(struct linux_romvec *rom_ptr);
 
 /* Boot argument acquisition, returns the boot command line string. */
-extern char *prom_getbootargs(void);
+char *prom_getbootargs(void);
 
 /* Miscellaneous routines, don't really fit in any category per se. */
 
 /* Reboot the machine with the command line passed. */
-extern void prom_reboot(char *boot_command);
+void prom_reboot(char *boot_command);
 
 /* Evaluate the forth string passed. */
-extern void prom_feval(char *forth_string);
+void prom_feval(char *forth_string);
 
 /* Enter the prom, with possibility of continuation with the 'go'
  * command in newer proms.
  */
-extern void prom_cmdline(void);
+void prom_cmdline(void);
 
 /* Enter the prom, with no chance of continuation for the stand-alone
  * which calls this.
  */
-extern void __noreturn prom_halt(void);
+void __noreturn prom_halt(void);
 
 /* Set the PROM 'sync' callback function to the passed function pointer.
  * When the user gives the 'sync' command at the prom prompt while the
···
  * XXX The arguments are different on V0 vs. V2->higher proms, grrr! XXX
  */
 typedef void (*sync_func_t)(void);
-extern void prom_setsync(sync_func_t func_ptr);
+void prom_setsync(sync_func_t func_ptr);
 
 /* Acquire the IDPROM of the root node in the prom device tree.  This
  * gets passed a buffer where you would like it stuffed.  The return value
  * is the format type of this idprom or 0xff on error.
  */
-extern unsigned char prom_get_idprom(char *idp_buffer, int idpbuf_size);
+unsigned char prom_get_idprom(char *idp_buffer, int idpbuf_size);
 
 /* Get the prom major version. */
-extern int prom_version(void);
+int prom_version(void);
 
 /* Get the prom plugin revision. */
-extern int prom_getrev(void);
+int prom_getrev(void);
 
 /* Get the prom firmware revision. */
-extern int prom_getprev(void);
+int prom_getprev(void);
 
 /* Write a buffer of characters to the console. */
-extern void prom_console_write_buf(const char *buf, int len);
+void prom_console_write_buf(const char *buf, int len);
 
 /* Prom's internal routines, don't use in kernel/boot code. */
-extern __printf(1, 2) void prom_printf(const char *fmt, ...);
-extern void prom_write(const char *buf, unsigned int len);
+__printf(1, 2) void prom_printf(const char *fmt, ...);
+void prom_write(const char *buf, unsigned int len);
 
 /* Multiprocessor operations... */
 
 /* Start the CPU with the given device tree node, context table, and context
  * at the passed program counter.
  */
-extern int prom_startcpu(int cpunode, struct linux_prom_registers *context_table,
-			 int context, char *program_counter);
+int prom_startcpu(int cpunode, struct linux_prom_registers *context_table,
+		  int context, char *program_counter);
 
 /* Initialize the memory lists based upon the prom version. */
 void prom_meminit(void);
···
 /* PROM device tree traversal functions... */
 
 /* Get the child node of the given node, or zero if no child exists. */
-extern phandle prom_getchild(phandle parent_node);
+phandle prom_getchild(phandle parent_node);
 
 /* Get the next sibling node of the given node, or zero if no further
  * siblings exist.
  */
-extern phandle prom_getsibling(phandle node);
+phandle prom_getsibling(phandle node);
 
 /* Get the length, at the passed node, of the given property type.
  * Returns -1 on error (ie. no such property at this node).
  */
-extern int prom_getproplen(phandle thisnode, const char *property);
+int prom_getproplen(phandle thisnode, const char *property);
 
 /* Fetch the requested property using the given buffer.  Returns
  * the number of bytes the prom put into your buffer or -1 on error.
  */
-extern int __must_check prom_getproperty(phandle thisnode, const char *property,
-					 char *prop_buffer, int propbuf_size);
+int __must_check prom_getproperty(phandle thisnode, const char *property,
+				  char *prop_buffer, int propbuf_size);
 
 /* Acquire an integer property. */
-extern int prom_getint(phandle node, char *property);
+int prom_getint(phandle node, char *property);
 
 /* Acquire an integer property, with a default value. */
-extern int prom_getintdefault(phandle node, char *property, int defval);
+int prom_getintdefault(phandle node, char *property, int defval);
 
 /* Acquire a boolean property, 0=FALSE 1=TRUE. */
-extern int prom_getbool(phandle node, char *prop);
+int prom_getbool(phandle node, char *prop);
 
 /* Acquire a string property, null string on error. */
-extern void prom_getstring(phandle node, char *prop, char *buf, int bufsize);
+void prom_getstring(phandle node, char *prop, char *buf, int bufsize);
 
 /* Search all siblings starting at the passed node for "name" matching
  * the given string.  Returns the node on success, zero on failure.
  */
-extern phandle prom_searchsiblings(phandle node_start, char *name);
+phandle prom_searchsiblings(phandle node_start, char *name);
 
 /* Returns the next property after the passed property for the given
  * node.  Returns null string on failure.
  */
-extern char *prom_nextprop(phandle node, char *prev_property, char *buffer);
+char *prom_nextprop(phandle node, char *prev_property, char *buffer);
 
 /* Returns phandle of the path specified */
-extern phandle prom_finddevice(char *name);
+phandle prom_finddevice(char *name);
 
 /* Set the indicated property at the given node with the passed value.
  * Returns the number of bytes of your value that the prom took.
  */
-extern int prom_setprop(phandle node, const char *prop_name, char *prop_value,
-			int value_size);
+int prom_setprop(phandle node, const char *prop_name, char *prop_value,
+		 int value_size);
 
-extern phandle prom_inst2pkg(int);
+phandle prom_inst2pkg(int);
 
 /* Dorking with Bus ranges... */
 
 /* Apply promlib probes OBIO ranges to registers. */
-extern void prom_apply_obio_ranges(struct linux_prom_registers *obioregs, int nregs);
+void prom_apply_obio_ranges(struct linux_prom_registers *obioregs, int nregs);
 
 /* Apply ranges of any prom node (and optionally parent node as well) to registers. */
-extern void prom_apply_generic_ranges(phandle node, phandle parent,
-				      struct linux_prom_registers *sbusregs, int nregs);
+void prom_apply_generic_ranges(phandle node, phandle parent,
+			       struct linux_prom_registers *sbusregs, int nregs);
 
 void prom_ranges_init(void);
+56 -56  arch/sparc/include/asm/oplib_64.h
···
 /* You must call prom_init() before using any of the library services,
  * preferably as early as possible.  Pass it the romvec pointer.
  */
-extern void prom_init(void *cif_handler, void *cif_stack);
+void prom_init(void *cif_handler, void *cif_stack);
 
 /* Boot argument acquisition, returns the boot command line string. */
-extern char *prom_getbootargs(void);
+char *prom_getbootargs(void);
 
 /* Miscellaneous routines, don't really fit in any category per se. */
 
 /* Reboot the machine with the command line passed. */
-extern void prom_reboot(const char *boot_command);
+void prom_reboot(const char *boot_command);
 
 /* Evaluate the forth string passed. */
-extern void prom_feval(const char *forth_string);
+void prom_feval(const char *forth_string);
 
 /* Enter the prom, with possibility of continuation with the 'go'
  * command in newer proms.
  */
-extern void prom_cmdline(void);
+void prom_cmdline(void);
 
 /* Enter the prom, with no chance of continuation for the stand-alone
  * which calls this.
  */
-extern void prom_halt(void) __attribute__ ((noreturn));
+void prom_halt(void) __attribute__ ((noreturn));
 
 /* Halt and power-off the machine. */
-extern void prom_halt_power_off(void) __attribute__ ((noreturn));
+void prom_halt_power_off(void) __attribute__ ((noreturn));
 
 /* Acquire the IDPROM of the root node in the prom device tree.  This
  * gets passed a buffer where you would like it stuffed.  The return value
  * is the format type of this idprom or 0xff on error.
  */
-extern unsigned char prom_get_idprom(char *idp_buffer, int idpbuf_size);
+unsigned char prom_get_idprom(char *idp_buffer, int idpbuf_size);
 
 /* Write a buffer of characters to the console. */
-extern void prom_console_write_buf(const char *buf, int len);
+void prom_console_write_buf(const char *buf, int len);
 
 /* Prom's internal routines, don't use in kernel/boot code. */
-extern __printf(1, 2) void prom_printf(const char *fmt, ...);
-extern void prom_write(const char *buf, unsigned int len);
+__printf(1, 2) void prom_printf(const char *fmt, ...);
+void prom_write(const char *buf, unsigned int len);
 
 /* Multiprocessor operations... */
 #ifdef CONFIG_SMP
 /* Start the CPU with the given device tree node at the passed program
  * counter with the given arg passed in via register %o0.
  */
-extern void prom_startcpu(int cpunode, unsigned long pc, unsigned long arg);
+void prom_startcpu(int cpunode, unsigned long pc, unsigned long arg);
 
 /* Start the CPU with the given cpu ID at the passed program
  * counter with the given arg passed in via register %o0.
  */
-extern void prom_startcpu_cpuid(int cpuid, unsigned long pc, unsigned long arg);
+void prom_startcpu_cpuid(int cpuid, unsigned long pc, unsigned long arg);
 
 /* Stop the CPU with the given cpu ID. */
-extern void prom_stopcpu_cpuid(int cpuid);
+void prom_stopcpu_cpuid(int cpuid);
 
 /* Stop the current CPU. */
-extern void prom_stopself(void);
+void prom_stopself(void);
 
 /* Idle the current CPU. */
-extern void prom_idleself(void);
+void prom_idleself(void);
 
 /* Resume the CPU with the passed device tree node. */
-extern void prom_resumecpu(int cpunode);
+void prom_resumecpu(int cpunode);
 #endif
 
 /* Power management interfaces. */
 
 /* Put the current CPU to sleep. */
-extern void prom_sleepself(void);
+void prom_sleepself(void);
 
 /* Put the entire system to sleep. */
-extern int prom_sleepsystem(void);
+int prom_sleepsystem(void);
 
 /* Initiate a wakeup event. */
-extern int prom_wakeupsystem(void);
+int prom_wakeupsystem(void);
 
 /* MMU and memory related OBP interfaces. */
 
 /* Get unique string identifying SIMM at given physical address. */
-extern int prom_getunumber(int syndrome_code,
-			   unsigned long phys_addr,
-			   char *buf, int buflen);
+int prom_getunumber(int syndrome_code,
+		    unsigned long phys_addr,
+		    char *buf, int buflen);
 
 /* Retain physical memory to the caller across soft resets. */
-extern int prom_retain(const char *name, unsigned long size,
-		       unsigned long align, unsigned long *paddr);
+int prom_retain(const char *name, unsigned long size,
+		unsigned long align, unsigned long *paddr);
 
 /* Load explicit I/D TLB entries into the calling processor. */
-extern long prom_itlb_load(unsigned long index,
-			   unsigned long tte_data,
-			   unsigned long vaddr);
+long prom_itlb_load(unsigned long index,
+		    unsigned long tte_data,
+		    unsigned long vaddr);
 
-extern long prom_dtlb_load(unsigned long index,
-			   unsigned long tte_data,
-			   unsigned long vaddr);
+long prom_dtlb_load(unsigned long index,
+		    unsigned long tte_data,
+		    unsigned long vaddr);
 
 /* Map/Unmap client program address ranges.  First the format of
  * the mapping mode argument.
···
 #define PROM_MAP_IE	0x0100 /* Invert-Endianness */
 #define PROM_MAP_DEFAULT (PROM_MAP_WRITE | PROM_MAP_READ | PROM_MAP_EXEC | PROM_MAP_CACHED)
 
-extern int prom_map(int mode, unsigned long size,
-		    unsigned long vaddr, unsigned long paddr);
-extern void prom_unmap(unsigned long size, unsigned long vaddr);
+int prom_map(int mode, unsigned long size,
+	     unsigned long vaddr, unsigned long paddr);
+void prom_unmap(unsigned long size, unsigned long vaddr);
 
 
 /* PROM device tree traversal functions... */
 
 /* Get the child node of the given node, or zero if no child exists. */
-extern phandle prom_getchild(phandle parent_node);
+phandle prom_getchild(phandle parent_node);
 
 /* Get the next sibling node of the given node, or zero if no further
  * siblings exist.
  */
-extern phandle prom_getsibling(phandle node);
+phandle prom_getsibling(phandle node);
 
 /* Get the length, at the passed node, of the given property type.
  * Returns -1 on error (ie. no such property at this node).
  */
-extern int prom_getproplen(phandle thisnode, const char *property);
+int prom_getproplen(phandle thisnode, const char *property);
 
 /* Fetch the requested property using the given buffer.  Returns
  * the number of bytes the prom put into your buffer or -1 on error.
  */
-extern int prom_getproperty(phandle thisnode, const char *property,
-			    char *prop_buffer, int propbuf_size);
+int prom_getproperty(phandle thisnode, const char *property,
+		     char *prop_buffer, int propbuf_size);
 
 /* Acquire an integer property. */
-extern int prom_getint(phandle node, const char *property);
+int prom_getint(phandle node, const char *property);
 
 /* Acquire an integer property, with a default value. */
-extern int prom_getintdefault(phandle node, const char *property, int defval);
+int prom_getintdefault(phandle node, const char *property, int defval);
 
 /* Acquire a boolean property, 0=FALSE 1=TRUE. */
-extern int prom_getbool(phandle node, const char *prop);
+int prom_getbool(phandle node, const char *prop);
 
 /* Acquire a string property, null string on error. */
-extern void prom_getstring(phandle node, const char *prop, char *buf,
-			   int bufsize);
+void prom_getstring(phandle node, const char *prop, char *buf,
+		    int bufsize);
 
 /* Does the passed node have the given "name"? YES=1 NO=0 */
-extern int prom_nodematch(phandle thisnode, const char *name);
+int prom_nodematch(phandle thisnode, const char *name);
 
 /* Search all siblings starting at the passed node for "name" matching
  * the given string.  Returns the node on success, zero on failure.
  */
-extern phandle prom_searchsiblings(phandle node_start, const char *name);
+phandle prom_searchsiblings(phandle node_start, const char *name);
 
 /* Return the first property type, as a string, for the given node.
  * Returns a null string on error.  Buffer should be at least 32B long.
  */
-extern char *prom_firstprop(phandle node, char *buffer);
+char *prom_firstprop(phandle node, char *buffer);
 
 /* Returns the next property after the passed property for the given
  * node.  Returns null string on failure.  Buffer should be at least 32B long.
  */
-extern char *prom_nextprop(phandle node, const char *prev_property, char *buf);
+char *prom_nextprop(phandle node, const char *prev_property, char *buf);
 
 /* Returns 1 if the specified node has given property. */
-extern int prom_node_has_property(phandle node, const char *property);
+int prom_node_has_property(phandle node, const char *property);
 
 /* Returns phandle of the path specified */
-extern phandle prom_finddevice(const char *name);
+phandle prom_finddevice(const char *name);
 
 /* Set the indicated property at the given node with the passed value.
  * Returns the number of bytes of your value that the prom took.
  */
-extern int prom_setprop(phandle node, const char *prop_name, char *prop_value,
-			int value_size);
+int prom_setprop(phandle node, const char *prop_name, char *prop_value,
+		 int value_size);
 
-extern phandle prom_inst2pkg(int);
-extern void prom_sun4v_guest_soft_state(void);
+phandle prom_inst2pkg(int);
+void prom_sun4v_guest_soft_state(void);
 
-extern int prom_ihandle2path(int handle, char *buffer, int bufsize);
+int prom_ihandle2path(int handle, char *buffer, int bufsize);
 
 /* Client interface level routines. */
-extern void p1275_cmd_direct(unsigned long *);
+void p1275_cmd_direct(unsigned long *);
 
 #endif /* !(__SPARC64_OPLIB_H) */
···
 void *srmmu_get_nocache(int size, int align);
 void srmmu_free_nocache(void *addr, int size);
 
+extern struct resource sparc_iomap;
+
 #define check_pgt_cache()	do { } while (0)
 
 pgd_t *get_pgd_fast(void);
···
 struct vm_area_struct;
 struct page;
 
-extern void load_mmu(void);
-extern unsigned long calc_highpages(void);
+void load_mmu(void);
+unsigned long calc_highpages(void);
+unsigned long __init bootmem_init(unsigned long *pages_avail);
 
 #define pte_ERROR(e)	__builtin_trap()
 #define pmd_ERROR(e)	__builtin_trap()
···
  * srmmu.c will assign the real one (which is dynamically sized) */
 #define swapper_pg_dir NULL
 
-extern void paging_init(void);
+void paging_init(void);
 
 extern unsigned long ptr_in_current_pgd;
 
···
 #define GET_IOSPACE(pfn)	(pfn >> (BITS_PER_LONG - 4))
 #define GET_PFN(pfn)		(pfn & 0x0fffffffUL)
 
-extern int remap_pfn_range(struct vm_area_struct *, unsigned long, unsigned long,
-			   unsigned long, pgprot_t);
+int remap_pfn_range(struct vm_area_struct *, unsigned long, unsigned long,
+		    unsigned long, pgprot_t);
 
 static inline int io_remap_pfn_range(struct vm_area_struct *vma,
 				     unsigned long from, unsigned long pfn,
+29-29
arch/sparc/include/asm/pgtable_64.h
···
 
 #ifndef __ASSEMBLY__
 
-extern pte_t mk_pte_io(unsigned long, pgprot_t, int, unsigned long);
+pte_t mk_pte_io(unsigned long, pgprot_t, int, unsigned long);
 
-extern unsigned long pte_sz_bits(unsigned long size);
+unsigned long pte_sz_bits(unsigned long size);
 
 extern pgprot_t PAGE_KERNEL;
 extern pgprot_t PAGE_KERNEL_LOCKED;
···
 				  !__kern_addr_valid(pud_val(pud)))
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-extern void set_pmd_at(struct mm_struct *mm, unsigned long addr,
-		       pmd_t *pmdp, pmd_t pmd);
+void set_pmd_at(struct mm_struct *mm, unsigned long addr,
+		pmd_t *pmdp, pmd_t pmd);
 #else
 static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr,
 			      pmd_t *pmdp, pmd_t pmd)
···
 #define pte_unmap(pte)			do { } while (0)
 
 /* Actual page table PTE updates.  */
-extern void tlb_batch_add(struct mm_struct *mm, unsigned long vaddr,
-			  pte_t *ptep, pte_t orig, int fullmm);
+void tlb_batch_add(struct mm_struct *mm, unsigned long vaddr,
+		   pte_t *ptep, pte_t orig, int fullmm);
 
 #define __HAVE_ARCH_PMDP_GET_AND_CLEAR
 static inline pmd_t pmdp_get_and_clear(struct mm_struct *mm,
···
 extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
 extern pmd_t swapper_low_pmd_dir[PTRS_PER_PMD];
 
-extern void paging_init(void);
-extern unsigned long find_ecache_flush_span(unsigned long size);
+void paging_init(void);
+unsigned long find_ecache_flush_span(unsigned long size);
 
 struct seq_file;
-extern void mmu_info(struct seq_file *);
+void mmu_info(struct seq_file *);
 
 struct vm_area_struct;
-extern void update_mmu_cache(struct vm_area_struct *, unsigned long, pte_t *);
+void update_mmu_cache(struct vm_area_struct *, unsigned long, pte_t *);
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-extern void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr,
-				 pmd_t *pmd);
+void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr,
+			  pmd_t *pmd);
 
 #define __HAVE_ARCH_PMDP_INVALIDATE
 extern void pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
 			    pmd_t *pmdp);
 
 #define __HAVE_ARCH_PGTABLE_DEPOSIT
-extern void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
-				       pgtable_t pgtable);
+void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
+				pgtable_t pgtable);
 
 #define __HAVE_ARCH_PGTABLE_WITHDRAW
-extern pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp);
+pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp);
 #endif
 
 /* Encode and de-code a swap entry */
···
 #define __swp_entry_to_pte(x)	((pte_t) { (x).val })
 
 /* File offset in PTE support. */
-extern unsigned long pte_file(pte_t);
+unsigned long pte_file(pte_t);
 #define pte_to_pgoff(pte)	(pte_val(pte) >> PAGE_SHIFT)
-extern pte_t pgoff_to_pte(unsigned long);
+pte_t pgoff_to_pte(unsigned long);
 #define PTE_FILE_MAX_BITS	(64UL - PAGE_SHIFT - 1UL)
 
-extern int page_in_phys_avail(unsigned long paddr);
+int page_in_phys_avail(unsigned long paddr);
 
 /*
  * For sparc32&64, the pfn in io_remap_pfn_range() carries <iospace> in
···
 #define GET_IOSPACE(pfn)	(pfn >> (BITS_PER_LONG - 4))
 #define GET_PFN(pfn)		(pfn & 0x0fffffffffffffffUL)
 
-extern int remap_pfn_range(struct vm_area_struct *, unsigned long, unsigned long,
-			   unsigned long, pgprot_t);
+int remap_pfn_range(struct vm_area_struct *, unsigned long, unsigned long,
+		    unsigned long, pgprot_t);
 
 static inline int io_remap_pfn_range(struct vm_area_struct *vma,
 				     unsigned long from, unsigned long pfn,
···
 /* We provide a special get_unmapped_area for framebuffer mmaps to try and use
  * the largest alignment possible such that larget PTEs can be used.
  */
-extern unsigned long get_fb_unmapped_area(struct file *filp, unsigned long,
-					  unsigned long, unsigned long,
-					  unsigned long);
+unsigned long get_fb_unmapped_area(struct file *filp, unsigned long,
+				   unsigned long, unsigned long,
+				   unsigned long);
 #define HAVE_ARCH_FB_UNMAPPED_AREA
 
-extern void pgtable_cache_init(void);
-extern void sun4v_register_fault_status(void);
-extern void sun4v_ktsb_register(void);
-extern void __init cheetah_ecache_flush_init(void);
-extern void sun4v_patch_tlb_handlers(void);
+void pgtable_cache_init(void);
+void sun4v_register_fault_status(void);
+void sun4v_ktsb_register(void);
+void __init cheetah_ecache_flush_init(void);
+void sun4v_patch_tlb_handlers(void);
 
 extern unsigned long cmdline_memory_size;
 
-extern asmlinkage void do_sparc64_fault(struct pt_regs *regs);
+asmlinkage void do_sparc64_fault(struct pt_regs *regs);
 
 #endif /* !(__ASSEMBLY__) */
+3-2
arch/sparc/include/asm/processor_32.h
···
 }
 
 /* Return saved PC of a blocked thread. */
-extern unsigned long thread_saved_pc(struct task_struct *t);
+unsigned long thread_saved_pc(struct task_struct *t);
 
 /* Do necessary setup to start up a newly executed thread. */
 static inline void start_thread(struct pt_regs * regs, unsigned long pc,
···
 /* Free all resources held by a thread. */
 #define release_thread(tsk)		do { } while(0)
 
-extern unsigned long get_wchan(struct task_struct *);
+unsigned long get_wchan(struct task_struct *);
 
 #define task_pt_regs(tsk) ((tsk)->thread.kregs)
 #define KSTK_EIP(tsk)  ((tsk)->thread.kregs->pc)
···
 #ifdef __KERNEL__
 
 extern struct task_struct *last_task_used_math;
+int do_mathemu(struct pt_regs *regs, struct task_struct *fpt);
 
 #define cpu_relax()	barrier()
 extern void (*sparc_idle)(void);
+4-2
arch/sparc/include/asm/processor_64.h
···
 
 /* Return saved PC of a blocked thread. */
 struct task_struct;
-extern unsigned long thread_saved_pc(struct task_struct *);
+unsigned long thread_saved_pc(struct task_struct *);
 
 /* On Uniprocessor, even in RMO processes see TSO semantics */
 #ifdef CONFIG_SMP
···
 /* Free all resources held by a thread. */
 #define release_thread(tsk)		do { } while (0)
 
-extern unsigned long get_wchan(struct task_struct *task);
+unsigned long get_wchan(struct task_struct *task);
 
 #define task_pt_regs(tsk) (task_thread_info(tsk)->kregs)
 #define KSTK_EIP(tsk)  (task_pt_regs(tsk)->tpc)
···
 #define spin_lock_prefetch(x)	prefetchw(x)
 
 #define HAVE_ARCH_PICK_MMAP_LAYOUT
+
+int do_mathemu(struct pt_regs *regs, struct fpustate *f, bool illegal_insn_trap);
 
 #endif /* !(__ASSEMBLY__) */
···
 
 extern int this_is_starfire;
 
-extern void check_if_starfire(void);
-extern int starfire_hard_smp_processor_id(void);
-extern void starfire_hookup(int);
-extern unsigned int starfire_translate(unsigned long imap, unsigned int upaid);
+void check_if_starfire(void);
+int starfire_hard_smp_processor_id(void);
+void starfire_hookup(int);
+unsigned int starfire_translate(unsigned long imap, unsigned int upaid);
 
 #endif
 #endif
···
 
 struct pt_regs;
 
-extern asmlinkage long sparc_do_fork(unsigned long clone_flags,
-				     unsigned long stack_start,
-				     struct pt_regs *regs,
-				     unsigned long stack_size);
+asmlinkage long sparc_do_fork(unsigned long clone_flags,
+			      unsigned long stack_start,
+			      struct pt_regs *regs,
+			      unsigned long stack_size);
 
 #endif /* _SPARC64_SYSCALLS_H */
···
 
 extern struct sparc64_tick_ops *tick_ops;
 
-extern unsigned long sparc64_get_clock_tick(unsigned int cpu);
-extern void setup_sparc64_timer(void);
-extern void __init time_init(void);
+unsigned long sparc64_get_clock_tick(unsigned int cpu);
+void setup_sparc64_timer(void);
+void __init time_init(void);
 
 #endif /* _SPARC64_TIMER_H */
+4-4
arch/sparc/include/asm/tlb_64.h
···
 #include <asm/mmu_context.h>
 
 #ifdef CONFIG_SMP
-extern void smp_flush_tlb_pending(struct mm_struct *,
+void smp_flush_tlb_pending(struct mm_struct *,
 			  unsigned long, unsigned long *);
 #endif
 
 #ifdef CONFIG_SMP
-extern void smp_flush_tlb_mm(struct mm_struct *mm);
+void smp_flush_tlb_mm(struct mm_struct *mm);
 #define do_flush_tlb_mm(mm) smp_flush_tlb_mm(mm)
 #else
 #define do_flush_tlb_mm(mm) __flush_tlb_mm(CTX_HWBITS(mm->context), SECONDARY_CONTEXT)
 #endif
 
-extern void __flush_tlb_pending(unsigned long, unsigned long, unsigned long *);
-extern void flush_tlb_pending(void);
+void __flush_tlb_pending(unsigned long, unsigned long, unsigned long *);
+void flush_tlb_pending(void);
 
 #define tlb_start_vma(tlb, vma) do { } while (0)
 #define tlb_end_vma(tlb, vma)	do { } while (0)
+11-11
arch/sparc/include/asm/tlbflush_64.h
···
 	unsigned long vaddrs[TLB_BATCH_NR];
 };
 
-extern void flush_tsb_kernel_range(unsigned long start, unsigned long end);
-extern void flush_tsb_user(struct tlb_batch *tb);
-extern void flush_tsb_user_page(struct mm_struct *mm, unsigned long vaddr);
+void flush_tsb_kernel_range(unsigned long start, unsigned long end);
+void flush_tsb_user(struct tlb_batch *tb);
+void flush_tsb_user_page(struct mm_struct *mm, unsigned long vaddr);
 
 /* TLB flush operations. */
 
···
 
 #define __HAVE_ARCH_ENTER_LAZY_MMU_MODE
 
-extern void flush_tlb_pending(void);
-extern void arch_enter_lazy_mmu_mode(void);
-extern void arch_leave_lazy_mmu_mode(void);
+void flush_tlb_pending(void);
+void arch_enter_lazy_mmu_mode(void);
+void arch_leave_lazy_mmu_mode(void);
 #define arch_flush_lazy_mmu_mode()      do {} while (0)
 
 /* Local cpu only.  */
-extern void __flush_tlb_all(void);
-extern void __flush_tlb_page(unsigned long context, unsigned long vaddr);
-extern void __flush_tlb_kernel_range(unsigned long start, unsigned long end);
+void __flush_tlb_all(void);
+void __flush_tlb_page(unsigned long context, unsigned long vaddr);
+void __flush_tlb_kernel_range(unsigned long start, unsigned long end);
 
 #ifndef CONFIG_SMP
 
···
 
 #else /* CONFIG_SMP */
 
-extern void smp_flush_tlb_kernel_range(unsigned long start, unsigned long end);
-extern void smp_flush_tlb_page(struct mm_struct *mm, unsigned long vaddr);
+void smp_flush_tlb_kernel_range(unsigned long start, unsigned long end);
+void smp_flush_tlb_page(struct mm_struct *mm, unsigned long vaddr);
 
 #define flush_tlb_kernel_range(start, end) \
 do {	flush_tsb_kernel_range(start,end); \
+1-1
arch/sparc/include/asm/topology_64.h
···
 
 struct pci_bus;
 #ifdef CONFIG_PCI
-extern int pcibus_to_node(struct pci_bus *pbus);
+int pcibus_to_node(struct pci_bus *pbus);
 #else
 static inline int pcibus_to_node(struct pci_bus *pbus)
 {
+3-3
arch/sparc/include/asm/trap_block.h
···
 	unsigned long		__per_cpu_base;
 } __attribute__((aligned(64)));
 extern struct trap_per_cpu trap_block[NR_CPUS];
-extern void init_cur_cpu_trap(struct thread_info *);
-extern void setup_tba(void);
+void init_cur_cpu_trap(struct thread_info *);
+void setup_tba(void);
 extern int ncpus_probed;
 
-extern unsigned long real_hard_smp_processor_id(void);
+unsigned long real_hard_smp_processor_id(void);
 
 struct cpuid_patch_entry {
 	unsigned int	addr;
+1-1
arch/sparc/include/asm/uaccess.h
···
 #define user_addr_max() \
 	(segment_eq(get_fs(), USER_DS) ? TASK_SIZE : ~0UL)
 
-extern long strncpy_from_user(char *dest, const char __user *src, long count);
+long strncpy_from_user(char *dest, const char __user *src, long count);
 
 #endif
+7-7
arch/sparc/include/asm/uaccess_32.h
···
 };
 
 /* Returns 0 if exception not found and fixup otherwise.  */
-extern unsigned long search_extables_range(unsigned long addr, unsigned long *g2);
+unsigned long search_extables_range(unsigned long addr, unsigned long *g2);
 
-extern void __ret_efault(void);
+void __ret_efault(void);
 
 /* Uh, these should become the main single-value transfer routines..
  * They automatically use the right size if we just have the right
···
 		       : "=&r" (ret) : "r" (x), "m" (*__m(addr)),	\
 		       "i" (-EFAULT))
 
-extern int __put_user_bad(void);
+int __put_user_bad(void);
 
 #define __get_user_check(x,addr,size,type) ({ \
 register int __gu_ret; \
···
 		       ".previous\n\t"					\
 		       : "=&r" (x) : "m" (*__m(addr)), "i" (retval))
 
-extern int __get_user_bad(void);
+int __get_user_bad(void);
 
-extern unsigned long __copy_user(void __user *to, const void __user *from, unsigned long size);
+unsigned long __copy_user(void __user *to, const void __user *from, unsigned long size);
 
 static inline unsigned long copy_to_user(void __user *to, const void *from, unsigned long n)
 {
···
 	return n;
 }
 
-extern __must_check long strlen_user(const char __user *str);
-extern __must_check long strnlen_user(const char __user *str, long n);
+__must_check long strlen_user(const char __user *str);
+__must_check long strnlen_user(const char __user *str, long n);
 
 #endif  /* __ASSEMBLY__ */
+25-25
arch/sparc/include/asm/uaccess_64.h
···
 	unsigned int insn, fixup;
 };
 
-extern void __ret_efault(void);
-extern void __retl_efault(void);
+void __ret_efault(void);
+void __retl_efault(void);
 
 /* Uh, these should become the main single-value transfer routines..
  * They automatically use the right size if we just have the right
···
 	       : "=r" (ret) : "r" (x), "r" (__m(addr)),			 \
 	       "i" (-EFAULT))
 
-extern int __put_user_bad(void);
+int __put_user_bad(void);
 
 #define __get_user_nocheck(data,addr,size,type) ({			     \
 register int __gu_ret;						             \
···
 	       ".previous\n\t"						    \
 	       : "=r" (x) : "r" (__m(addr)), "i" (retval))
 
-extern int __get_user_bad(void);
+int __get_user_bad(void);
 
-extern unsigned long __must_check ___copy_from_user(void *to,
-						    const void __user *from,
-						    unsigned long size);
-extern unsigned long copy_from_user_fixup(void *to, const void __user *from,
-					  unsigned long size);
+unsigned long __must_check ___copy_from_user(void *to,
+					     const void __user *from,
+					     unsigned long size);
+unsigned long copy_from_user_fixup(void *to, const void __user *from,
+				   unsigned long size);
 static inline unsigned long __must_check
 copy_from_user(void *to, const void __user *from, unsigned long size)
 {
···
 }
 #define __copy_from_user copy_from_user
 
-extern unsigned long __must_check ___copy_to_user(void __user *to,
-						  const void *from,
-						  unsigned long size);
-extern unsigned long copy_to_user_fixup(void __user *to, const void *from,
-					unsigned long size);
+unsigned long __must_check ___copy_to_user(void __user *to,
+					   const void *from,
+					   unsigned long size);
+unsigned long copy_to_user_fixup(void __user *to, const void *from,
+				 unsigned long size);
 static inline unsigned long __must_check
 copy_to_user(void __user *to, const void *from, unsigned long size)
 {
···
 }
 #define __copy_to_user copy_to_user
 
-extern unsigned long __must_check ___copy_in_user(void __user *to,
-						  const void __user *from,
-						  unsigned long size);
-extern unsigned long copy_in_user_fixup(void __user *to, void __user *from,
-					unsigned long size);
+unsigned long __must_check ___copy_in_user(void __user *to,
+					   const void __user *from,
+					   unsigned long size);
+unsigned long copy_in_user_fixup(void __user *to, void __user *from,
+				 unsigned long size);
 static inline unsigned long __must_check
 copy_in_user(void __user *to, void __user *from, unsigned long size)
 {
···
 }
 #define __copy_in_user copy_in_user
 
-extern unsigned long __must_check __clear_user(void __user *, unsigned long);
+unsigned long __must_check __clear_user(void __user *, unsigned long);
 
 #define clear_user __clear_user
 
-extern __must_check long strlen_user(const char __user *str);
-extern __must_check long strnlen_user(const char __user *str, long n);
+__must_check long strlen_user(const char __user *str);
+__must_check long strnlen_user(const char __user *str, long n);
 
 #define __copy_to_user_inatomic __copy_to_user
 #define __copy_from_user_inatomic __copy_from_user
 
 struct pt_regs;
-extern unsigned long compute_effective_address(struct pt_regs *,
-					       unsigned int insn,
-					       unsigned int rd);
+unsigned long compute_effective_address(struct pt_regs *,
+					unsigned int insn,
+					unsigned int rd);
 
 #endif /* __ASSEMBLY__ */
+17-17
arch/sparc/include/asm/vio.h
···
 			vio->vdev->channel_id, ## a);		\
 } while (0)
 
-extern int __vio_register_driver(struct vio_driver *drv, struct module *owner,
+int __vio_register_driver(struct vio_driver *drv, struct module *owner,
 				 const char *mod_name);
 /*
  * vio_register_driver must be a macro so that KBUILD_MODNAME can be expanded
  */
 #define vio_register_driver(driver)		\
 	__vio_register_driver(driver, THIS_MODULE, KBUILD_MODNAME)
-extern void vio_unregister_driver(struct vio_driver *drv);
+void vio_unregister_driver(struct vio_driver *drv);
 
 static inline struct vio_driver *to_vio_driver(struct device_driver *drv)
 {
···
 	return container_of(dev, struct vio_dev, dev);
 }
 
-extern int vio_ldc_send(struct vio_driver_state *vio, void *data, int len);
-extern void vio_link_state_change(struct vio_driver_state *vio, int event);
-extern void vio_conn_reset(struct vio_driver_state *vio);
-extern int vio_control_pkt_engine(struct vio_driver_state *vio, void *pkt);
-extern int vio_validate_sid(struct vio_driver_state *vio,
-			    struct vio_msg_tag *tp);
-extern u32 vio_send_sid(struct vio_driver_state *vio);
-extern int vio_ldc_alloc(struct vio_driver_state *vio,
-			 struct ldc_channel_config *base_cfg, void *event_arg);
-extern void vio_ldc_free(struct vio_driver_state *vio);
-extern int vio_driver_init(struct vio_driver_state *vio, struct vio_dev *vdev,
-			   u8 dev_class, struct vio_version *ver_table,
-			   int ver_table_size, struct vio_driver_ops *ops,
-			   char *name);
+int vio_ldc_send(struct vio_driver_state *vio, void *data, int len);
+void vio_link_state_change(struct vio_driver_state *vio, int event);
+void vio_conn_reset(struct vio_driver_state *vio);
+int vio_control_pkt_engine(struct vio_driver_state *vio, void *pkt);
+int vio_validate_sid(struct vio_driver_state *vio,
+		     struct vio_msg_tag *tp);
+u32 vio_send_sid(struct vio_driver_state *vio);
+int vio_ldc_alloc(struct vio_driver_state *vio,
+		  struct ldc_channel_config *base_cfg, void *event_arg);
+void vio_ldc_free(struct vio_driver_state *vio);
+int vio_driver_init(struct vio_driver_state *vio, struct vio_dev *vdev,
+		    u8 dev_class, struct vio_version *ver_table,
+		    int ver_table_size, struct vio_driver_ops *ops,
+		    char *name);
 
-extern void vio_port_up(struct vio_driver_state *vio);
+void vio_port_up(struct vio_driver_state *vio);
 
 #endif /* _SPARC64_VIO_H */
···
 
 #include <asm/spitfire.h>
 
-extern void xor_vis_2(unsigned long, unsigned long *, unsigned long *);
-extern void xor_vis_3(unsigned long, unsigned long *, unsigned long *,
-		      unsigned long *);
-extern void xor_vis_4(unsigned long, unsigned long *, unsigned long *,
-		      unsigned long *, unsigned long *);
-extern void xor_vis_5(unsigned long, unsigned long *, unsigned long *,
-		      unsigned long *, unsigned long *, unsigned long *);
+void xor_vis_2(unsigned long, unsigned long *, unsigned long *);
+void xor_vis_3(unsigned long, unsigned long *, unsigned long *,
+	       unsigned long *);
+void xor_vis_4(unsigned long, unsigned long *, unsigned long *,
+	       unsigned long *, unsigned long *);
+void xor_vis_5(unsigned long, unsigned long *, unsigned long *,
+	       unsigned long *, unsigned long *, unsigned long *);
 
 /* XXX Ugh, write cheetah versions... -DaveM */
 
···
 	.do_5	= xor_vis_5,
 };
 
-extern void xor_niagara_2(unsigned long, unsigned long *, unsigned long *);
-extern void xor_niagara_3(unsigned long, unsigned long *, unsigned long *,
-			  unsigned long *);
-extern void xor_niagara_4(unsigned long, unsigned long *, unsigned long *,
-			  unsigned long *, unsigned long *);
-extern void xor_niagara_5(unsigned long, unsigned long *, unsigned long *,
-			  unsigned long *, unsigned long *, unsigned long *);
+void xor_niagara_2(unsigned long, unsigned long *, unsigned long *);
+void xor_niagara_3(unsigned long, unsigned long *, unsigned long *,
+		   unsigned long *);
+void xor_niagara_4(unsigned long, unsigned long *, unsigned long *,
+		   unsigned long *, unsigned long *);
+void xor_niagara_5(unsigned long, unsigned long *, unsigned long *,
+		   unsigned long *, unsigned long *, unsigned long *);
 
 static struct xor_block_template xor_block_niagara = {
 	.name	= "Niagara",
···
 #define _CPUMAP_H
 
 #ifdef CONFIG_SMP
-extern void cpu_map_rebuild(void);
-extern int map_to_cpu(unsigned int index);
+void cpu_map_rebuild(void);
+int map_to_cpu(unsigned int index);
 #define cpu_map_init() cpu_map_rebuild()
 #else
 #define cpu_map_init() do {} while (0)
···
 	return iommu_is_span_boundary(entry, nr, shift, boundary_size);
 }
 
-extern unsigned long iommu_range_alloc(struct device *dev,
-				       struct iommu *iommu,
-				       unsigned long npages,
-				       unsigned long *handle);
-extern void iommu_range_free(struct iommu *iommu,
-			     dma_addr_t dma_addr,
-			     unsigned long npages);
+unsigned long iommu_range_alloc(struct device *dev,
+				struct iommu *iommu,
+				unsigned long npages,
+				unsigned long *handle);
+void iommu_range_free(struct iommu *iommu,
+		      dma_addr_t dma_addr,
+		      unsigned long npages);
 
 #endif /* _IOMMU_COMMON_H */
+3-3
arch/sparc/kernel/ioport.c
···
 
 	if (name == NULL) name = "???";
 
-	if ((xres = xres_alloc()) != 0) {
+	if ((xres = xres_alloc()) != NULL) {
 		tack = xres->xname;
 		res = &xres->xres;
 	} else {
···
 	BUG();
 }
 
-struct dma_map_ops sbus_dma_ops = {
+static struct dma_map_ops sbus_dma_ops = {
 	.alloc			= sbus_alloc_coherent,
 	.free			= sbus_free_coherent,
 	.map_page		= sbus_map_page,
···
 	const char *nm;
 
 	for (r = root->child; r != NULL; r = r->sibling) {
-		if ((nm = r->name) == 0) nm = "???";
+		if ((nm = r->name) == NULL) nm = "???";
 		seq_printf(m, "%016llx-%016llx: %s\n",
 				(unsigned long long)r->start,
 				(unsigned long long)r->end, nm);
+10-1
arch/sparc/kernel/irq.h
···
 
 unsigned long leon_get_irqmask(unsigned int irq);
 
+/* irq_32.c */
+void sparc_floppy_irq(int irq, void *dev_id, struct pt_regs *regs);
+
+/* sun4m_irq.c */
+void sun4m_nmi(struct pt_regs *regs);
+
+/* sun4d_irq.c */
+void sun4d_handler_irq(unsigned int pil, struct pt_regs *regs);
+
 #ifdef CONFIG_SMP
 
 /* All SUN4D IPIs are sent on this IRQ, may be shared with hard IRQs */
 #define SUN4D_IPI_IRQ 13
 
-extern void sun4d_ipi_interrupt(void);
+void sun4d_ipi_interrupt(void);
 
 #endif
···
 /*
  * Called when the probe at kretprobe trampoline is hit
  */
-int __kprobes trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs)
+static int __kprobes trampoline_probe_handler(struct kprobe *p,
+					      struct pt_regs *regs)
 {
 	struct kretprobe_instance *ri = NULL;
 	struct hlist_head *head, empty_rp;
···
 	return 1;
 }
 
-void kretprobe_trampoline_holder(void)
+static void __used kretprobe_trampoline_holder(void)
 {
 	asm volatile(".global kretprobe_trampoline\n"
 		     "kretprobe_trampoline:\n"
+5-5
arch/sparc/kernel/leon_kernel.c
···
 
 int leondebug_irq_disable;
 int leon_debug_irqout;
-static int dummy_master_l10_counter;
+static volatile u32 dummy_master_l10_counter;
 unsigned long amba_system_id;
 static DEFINE_SPINLOCK(leon_irq_lock);
 
+static unsigned long leon3_gptimer_idx; /* Timer Index (0..6) within Timer Core */
 unsigned long leon3_gptimer_irq; /* interrupt controller irq number */
-unsigned long leon3_gptimer_idx; /* Timer Index (0..6) within Timer Core */
 unsigned int sparc_leon_eirq;
 #define LEON_IMASK(cpu) (&leon3_irqctrl_regs->mask[cpu])
 #define LEON_IACK (&leon3_irqctrl_regs->iclear)
···
 }
 
 /* The extended IRQ controller has been found, this function registers it */
-void leon_eirq_setup(unsigned int eirq)
+static void leon_eirq_setup(unsigned int eirq)
 {
 	unsigned long mask, oldmask;
 	unsigned int veirq;
···
 #ifdef CONFIG_SMP
 
 /* smp clockevent irq */
-irqreturn_t leon_percpu_timer_ce_interrupt(int irq, void *unused)
+static irqreturn_t leon_percpu_timer_ce_interrupt(int irq, void *unused)
 {
 	struct clock_event_device *ce;
 	int cpu = smp_processor_id();
···
 
 	leondebug_irq_disable = 0;
 	leon_debug_irqout = 0;
-	master_l10_counter = (unsigned int *)&dummy_master_l10_counter;
+	master_l10_counter = (u32 __iomem *)&dummy_master_l10_counter;
 	dummy_master_l10_counter = 0;
 
 	rootnp = of_find_node_by_path("/ambapp0");
-79
arch/sparc/kernel/leon_pci.c
···
 {
 	return res->start;
 }
-
-/* in/out routines taken from pcic.c
- *
- * This probably belongs here rather than ioport.c because
- * we do not want this crud linked into SBus kernels.
- * Also, think for a moment about likes of floppy.c that
- * include architecture specific parts. They may want to redefine ins/outs.
- *
- * We do not use horrible macros here because we want to
- * advance pointer by sizeof(size).
- */
-void outsb(unsigned long addr, const void *src, unsigned long count)
-{
-	while (count) {
-		count -= 1;
-		outb(*(const char *)src, addr);
-		src += 1;
-		/* addr += 1; */
-	}
-}
-EXPORT_SYMBOL(outsb);
-
-void outsw(unsigned long addr, const void *src, unsigned long count)
-{
-	while (count) {
-		count -= 2;
-		outw(*(const short *)src, addr);
-		src += 2;
-		/* addr += 2; */
-	}
-}
-EXPORT_SYMBOL(outsw);
-
-void outsl(unsigned long addr, const void *src, unsigned long count)
-{
-	while (count) {
-		count -= 4;
-		outl(*(const long *)src, addr);
-		src += 4;
-		/* addr += 4; */
-	}
-}
-EXPORT_SYMBOL(outsl);
-
-void insb(unsigned long addr, void *dst, unsigned long count)
-{
-	while (count) {
-		count -= 1;
-		*(unsigned char *)dst = inb(addr);
-		dst += 1;
-		/* addr += 1; */
-	}
-}
-EXPORT_SYMBOL(insb);
-
-void insw(unsigned long addr, void *dst, unsigned long count)
-{
-	while (count) {
-		count -= 2;
-		*(unsigned short *)dst = inw(addr);
-		dst += 2;
-		/* addr += 2; */
-	}
-}
-EXPORT_SYMBOL(insw);
-
-void insl(unsigned long addr, void *dst, unsigned long count)
-{
-	while (count) {
-		count -= 4;
-		/*
-		 * XXX I am sure we are in for an unaligned trap here.
-		 */
-		*(unsigned long *)dst = inl(addr);
-		dst += 4;
-		/* addr += 4; */
-	}
-}
-EXPORT_SYMBOL(insl);
+8-8
arch/sparc/kernel/leon_pci_grpci1.c
···
 
 struct grpci1_priv {
 	struct leon_pci_info	info; /* must be on top of this structure */
-	struct grpci1_regs	*regs;		/* GRPCI register map */
+	struct grpci1_regs __iomem *regs;	/* GRPCI register map */
 	struct device		*dev;
 	int			pci_err_mask;	/* STATUS register error mask */
 	int			irq;		/* LEON irqctrl GRPCI IRQ */
···
 static int grpci1_cfg_w32(struct grpci1_priv *priv, unsigned int bus,
 			  unsigned int devfn, int where, u32 val);
 
-int grpci1_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+static int grpci1_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
 {
 	struct grpci1_priv *priv = dev->bus->sysdata;
 	int irq_group;
···
 		grpci1_cfg_w32(priv, TGT, 0, PCI_COMMAND, tmp);
 	} else {
 		/* Bus always little endian (unaffected by byte-swapping) */
-		*val = flip_dword(tmp);
+		*val = swab32(tmp);
 	}
 
 	return 0;
···
 
 	pci_conf = (unsigned int *) (priv->pci_conf |
 						(devfn << 8) | (where & 0xfc));
-	LEON3_BYPASS_STORE_PA(pci_conf, flip_dword(val));
+	LEON3_BYPASS_STORE_PA(pci_conf, swab32(val));
 
 	return 0;
 }
···
  *  BAR1: peripheral DMA to host's memory (size at least 256MByte)
  *  BAR2..BAR5: not implemented in hardware
  */
-void grpci1_hw_init(struct grpci1_priv *priv)
+static void grpci1_hw_init(struct grpci1_priv *priv)
 {
 	u32 ahbadr, bar_sz, data, pciadr;
-	struct grpci1_regs *regs = priv->regs;
+	struct grpci1_regs __iomem *regs = priv->regs;
 
 	/* set 1:1 mapping between AHB -> PCI memory space */
 	REGSTORE(regs->cfg_stat, priv->pci_area & 0xf0000000);
···
 
 static int grpci1_of_probe(struct platform_device *ofdev)
 {
-	struct grpci1_regs *regs;
+	struct grpci1_regs __iomem *regs;
 	struct grpci1_priv *priv;
 	int err, len;
 	const int *tmp;
···
 err2:
 	release_resource(&priv->info.mem_space);
 err1:
-	iounmap((void *)priv->pci_io_va);
+	iounmap((void __iomem *)priv->pci_io_va);
 	grpci1priv = NULL;
 	return err;
 }
arch/sparc/kernel/leon_pci_grpci2.c (+11, -11)
@@ -191,7 +191,7 @@
 
 struct grpci2_priv {
 	struct leon_pci_info info;	/* must be on top of this structure */
-	struct grpci2_regs *regs;
+	struct grpci2_regs __iomem *regs;
 	char irq;
 	char irq_mode;		/* IRQ Mode from CAPSTS REG */
 	char bt_enabled;
@@ -215,10 +215,10 @@
 	struct grpci2_barcfg tgtbars[6];
 };
 
-DEFINE_SPINLOCK(grpci2_dev_lock);
-struct grpci2_priv *grpci2priv;
+static DEFINE_SPINLOCK(grpci2_dev_lock);
+static struct grpci2_priv *grpci2priv;
 
-int grpci2_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+static int grpci2_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
 {
 	struct grpci2_priv *priv = dev->bus->sysdata;
 	int irq_group;
@@ -270,7 +270,7 @@
 		*val = 0xffffffff;
 	} else {
 		/* Bus always little endian (unaffected by byte-swapping) */
-		*val = flip_dword(tmp);
+		*val = swab32(tmp);
 	}
 
 	return 0;
@@ -328,7 +328,7 @@
 
 	pci_conf = (unsigned int *) (priv->pci_conf |
 				     (devfn << 8) | (where & 0xfc));
-	LEON3_BYPASS_STORE_PA(pci_conf, flip_dword(val));
+	LEON3_BYPASS_STORE_PA(pci_conf, swab32(val));
 
 	/* Wait until GRPCI2 signals that CFG access is done, it should be
 	 * done instantaneously unless a DMA operation is ongoing...
@@ -561,10 +561,10 @@
 	return virq;
 }
 
-void grpci2_hw_init(struct grpci2_priv *priv)
+static void grpci2_hw_init(struct grpci2_priv *priv)
 {
 	u32 ahbadr, pciadr, bar_sz, capptr, io_map, data;
-	struct grpci2_regs *regs = priv->regs;
+	struct grpci2_regs __iomem *regs = priv->regs;
 	int i;
 	struct grpci2_barcfg *barcfg = priv->tgtbars;
 
@@ -655,7 +655,7 @@
 static irqreturn_t grpci2_err_interrupt(int irq, void *arg)
 {
 	struct grpci2_priv *priv = arg;
-	struct grpci2_regs *regs = priv->regs;
+	struct grpci2_regs __iomem *regs = priv->regs;
 	unsigned int status;
 
 	status = REGLOAD(regs->sts_cap);
@@ -682,7 +682,7 @@
 
 static int grpci2_of_probe(struct platform_device *ofdev)
 {
-	struct grpci2_regs *regs;
+	struct grpci2_regs __iomem *regs;
 	struct grpci2_priv *priv;
 	int err, i, len;
 	const int *tmp;
@@ -878,7 +878,7 @@
 	release_resource(&priv->info.mem_space);
 err3:
 	err = -ENOMEM;
-	iounmap((void *)priv->pci_io_va);
+	iounmap((void __iomem *)priv->pci_io_va);
 err2:
 	kfree(priv);
 err1:
arch/sparc/kernel/leon_pmc.c (+4, -4)
@@ -12,14 +12,14 @@
 #include <asm/processor.h>
 
 /* List of Systems that need fixup instructions around power-down instruction */
-unsigned int pmc_leon_fixup_ids[] = {
+static unsigned int pmc_leon_fixup_ids[] = {
 	AEROFLEX_UT699,
 	GAISLER_GR712RC,
 	LEON4_NEXTREME1,
 	0
 };
 
-int pmc_leon_need_fixup(void)
+static int pmc_leon_need_fixup(void)
 {
 	unsigned int systemid = amba_system_id >> 16;
 	unsigned int *id;
@@ -38,7 +38,7 @@
  * CPU idle callback function for systems that need some extra handling
  * See .../arch/sparc/kernel/process.c
  */
-void pmc_leon_idle_fixup(void)
+static void pmc_leon_idle_fixup(void)
 {
 	/* Prepare an address to a non-cachable region. APB is always
 	 * none-cachable. One instruction is executed after the Sleep
@@ -62,7 +62,7 @@
  * CPU idle callback function
  * See .../arch/sparc/kernel/process.c
  */
-void pmc_leon_idle(void)
+static void pmc_leon_idle(void)
 {
 	/* Interrupts need to be enabled to not hang the CPU */
 	local_irq_enable();
arch/sparc/kernel/leon_smp.c (+1, -12)
@@ -130,7 +130,7 @@
 	local_ops->tlb_all();
 }
 
-void leon_smp_setbroadcast(unsigned int mask)
+static void leon_smp_setbroadcast(unsigned int mask)
 {
 	int broadcast =
 	    ((LEON3_BYPASS_LOAD_PA(&(leon3_irqctrl_regs->mpstatus)) >>
@@ -146,13 +146,6 @@
 		}
 	}
 	LEON_BYPASS_STORE_PA(&(leon3_irqctrl_regs->mpbroadcast), mask);
-}
-
-unsigned int leon_smp_getbroadcast(void)
-{
-	unsigned int mask;
-	mask = LEON_BYPASS_LOAD_PA(&(leon3_irqctrl_regs->mpbroadcast));
-	return mask;
 }
 
 int leon_smp_nrcpus(void)
@@ -257,10 +264,6 @@
 	/* Ok, they are spinning and ready to go. */
 	smp_processors_ready = 1;
 
-}
-
-void leon_irq_rotate(int cpu)
-{
 }
 
 struct leon_ipi_work {
@@ -28,6 +28,7 @@
 #include <asm/apb.h>
 
 #include "pci_impl.h"
+#include "kernel.h"
 
 /* List of all PCI controllers found in the system. */
 struct pci_pbm_info *pci_pbm_root = NULL;
@@ -6,87 +6,87 @@
 #ifndef _PCI_SUN4V_H
 #define _PCI_SUN4V_H
 
-extern long pci_sun4v_iommu_map(unsigned long devhandle,
-				unsigned long tsbid,
-				unsigned long num_ttes,
-				unsigned long io_attributes,
-				unsigned long io_page_list_pa);
-extern unsigned long pci_sun4v_iommu_demap(unsigned long devhandle,
-					   unsigned long tsbid,
-					   unsigned long num_ttes);
-extern unsigned long pci_sun4v_iommu_getmap(unsigned long devhandle,
-					    unsigned long tsbid,
-					    unsigned long *io_attributes,
-					    unsigned long *real_address);
-extern unsigned long pci_sun4v_config_get(unsigned long devhandle,
-					  unsigned long pci_device,
-					  unsigned long config_offset,
-					  unsigned long size);
-extern int pci_sun4v_config_put(unsigned long devhandle,
-				unsigned long pci_device,
-				unsigned long config_offset,
-				unsigned long size,
-				unsigned long data);
+long pci_sun4v_iommu_map(unsigned long devhandle,
+			 unsigned long tsbid,
+			 unsigned long num_ttes,
+			 unsigned long io_attributes,
+			 unsigned long io_page_list_pa);
+unsigned long pci_sun4v_iommu_demap(unsigned long devhandle,
+				    unsigned long tsbid,
+				    unsigned long num_ttes);
+unsigned long pci_sun4v_iommu_getmap(unsigned long devhandle,
+				     unsigned long tsbid,
+				     unsigned long *io_attributes,
+				     unsigned long *real_address);
+unsigned long pci_sun4v_config_get(unsigned long devhandle,
+				   unsigned long pci_device,
+				   unsigned long config_offset,
+				   unsigned long size);
+int pci_sun4v_config_put(unsigned long devhandle,
+			 unsigned long pci_device,
+			 unsigned long config_offset,
+			 unsigned long size,
+			 unsigned long data);
 
-extern unsigned long pci_sun4v_msiq_conf(unsigned long devhandle,
+unsigned long pci_sun4v_msiq_conf(unsigned long devhandle,
 				  unsigned long msiqid,
 				  unsigned long msiq_paddr,
 				  unsigned long num_entries);
-extern unsigned long pci_sun4v_msiq_info(unsigned long devhandle,
-					 unsigned long msiqid,
-					 unsigned long *msiq_paddr,
-					 unsigned long *num_entries);
-extern unsigned long pci_sun4v_msiq_getvalid(unsigned long devhandle,
-					     unsigned long msiqid,
-					     unsigned long *valid);
-extern unsigned long pci_sun4v_msiq_setvalid(unsigned long devhandle,
-					     unsigned long msiqid,
-					     unsigned long valid);
-extern unsigned long pci_sun4v_msiq_getstate(unsigned long devhandle,
-					     unsigned long msiqid,
-					     unsigned long *state);
-extern unsigned long pci_sun4v_msiq_setstate(unsigned long devhandle,
-					     unsigned long msiqid,
-					     unsigned long state);
-extern unsigned long pci_sun4v_msiq_gethead(unsigned long devhandle,
-					    unsigned long msiqid,
-					    unsigned long *head);
-extern unsigned long pci_sun4v_msiq_sethead(unsigned long devhandle,
-					    unsigned long msiqid,
-					    unsigned long head);
-extern unsigned long pci_sun4v_msiq_gettail(unsigned long devhandle,
-					    unsigned long msiqid,
-					    unsigned long *head);
-extern unsigned long pci_sun4v_msi_getvalid(unsigned long devhandle,
-					    unsigned long msinum,
-					    unsigned long *valid);
-extern unsigned long pci_sun4v_msi_setvalid(unsigned long devhandle,
-					    unsigned long msinum,
-					    unsigned long valid);
-extern unsigned long pci_sun4v_msi_getmsiq(unsigned long devhandle,
-					   unsigned long msinum,
-					   unsigned long *msiq);
-extern unsigned long pci_sun4v_msi_setmsiq(unsigned long devhandle,
-					   unsigned long msinum,
-					   unsigned long msiq,
-					   unsigned long msitype);
-extern unsigned long pci_sun4v_msi_getstate(unsigned long devhandle,
-					    unsigned long msinum,
-					    unsigned long *state);
-extern unsigned long pci_sun4v_msi_setstate(unsigned long devhandle,
-					    unsigned long msinum,
-					    unsigned long state);
-extern unsigned long pci_sun4v_msg_getmsiq(unsigned long devhandle,
-					   unsigned long msinum,
-					   unsigned long *msiq);
-extern unsigned long pci_sun4v_msg_setmsiq(unsigned long devhandle,
-					   unsigned long msinum,
-					   unsigned long msiq);
-extern unsigned long pci_sun4v_msg_getvalid(unsigned long devhandle,
-					    unsigned long msinum,
-					    unsigned long *valid);
-extern unsigned long pci_sun4v_msg_setvalid(unsigned long devhandle,
-					    unsigned long msinum,
-					    unsigned long valid);
+unsigned long pci_sun4v_msiq_info(unsigned long devhandle,
+				  unsigned long msiqid,
+				  unsigned long *msiq_paddr,
+				  unsigned long *num_entries);
+unsigned long pci_sun4v_msiq_getvalid(unsigned long devhandle,
+				      unsigned long msiqid,
+				      unsigned long *valid);
+unsigned long pci_sun4v_msiq_setvalid(unsigned long devhandle,
+				      unsigned long msiqid,
+				      unsigned long valid);
+unsigned long pci_sun4v_msiq_getstate(unsigned long devhandle,
+				      unsigned long msiqid,
+				      unsigned long *state);
+unsigned long pci_sun4v_msiq_setstate(unsigned long devhandle,
+				      unsigned long msiqid,
+				      unsigned long state);
+unsigned long pci_sun4v_msiq_gethead(unsigned long devhandle,
+				     unsigned long msiqid,
+				     unsigned long *head);
+unsigned long pci_sun4v_msiq_sethead(unsigned long devhandle,
+				     unsigned long msiqid,
+				     unsigned long head);
+unsigned long pci_sun4v_msiq_gettail(unsigned long devhandle,
+				     unsigned long msiqid,
+				     unsigned long *head);
+unsigned long pci_sun4v_msi_getvalid(unsigned long devhandle,
+				     unsigned long msinum,
+				     unsigned long *valid);
+unsigned long pci_sun4v_msi_setvalid(unsigned long devhandle,
+				     unsigned long msinum,
+				     unsigned long valid);
+unsigned long pci_sun4v_msi_getmsiq(unsigned long devhandle,
+				    unsigned long msinum,
+				    unsigned long *msiq);
+unsigned long pci_sun4v_msi_setmsiq(unsigned long devhandle,
+				    unsigned long msinum,
+				    unsigned long msiq,
+				    unsigned long msitype);
+unsigned long pci_sun4v_msi_getstate(unsigned long devhandle,
+				     unsigned long msinum,
+				     unsigned long *state);
+unsigned long pci_sun4v_msi_setstate(unsigned long devhandle,
+				     unsigned long msinum,
+				     unsigned long state);
+unsigned long pci_sun4v_msg_getmsiq(unsigned long devhandle,
+				    unsigned long msinum,
+				    unsigned long *msiq);
+unsigned long pci_sun4v_msg_setmsiq(unsigned long devhandle,
+				    unsigned long msinum,
+				    unsigned long msiq);
+unsigned long pci_sun4v_msg_getvalid(unsigned long devhandle,
+				     unsigned long msinum,
+				     unsigned long *valid);
+unsigned long pci_sun4v_msg_setvalid(unsigned long devhandle,
+				     unsigned long msinum,
+				     unsigned long valid);
 
 #endif /* !(_PCI_SUN4V_H) */
arch/sparc/kernel/pcic.c (+7, -109)
@@ -36,6 +36,7 @@
 #include <asm/uaccess.h>
 #include <asm/irq_regs.h>
 
+#include "kernel.h"
 #include "irq.h"
 
 /*
@@ -163,8 +162,8 @@
 static struct linux_pcic pcic0;
 
 void __iomem *pcic_regs;
-volatile int pcic_speculative;
-volatile int pcic_trapped;
+static volatile int pcic_speculative;
+static volatile int pcic_trapped;
 
 /* forward */
 unsigned int pcic_build_device_irq(struct platform_device *op,
@@ -330,7 +329,7 @@
 
 	pcic->pcic_res_cfg_addr.name = "pcic_cfg_addr";
 	if ((pcic->pcic_config_space_addr =
-	    ioremap(regs[2].phys_addr, regs[2].reg_size * 2)) == 0) {
+	    ioremap(regs[2].phys_addr, regs[2].reg_size * 2)) == NULL) {
 		prom_printf("PCIC: Error, cannot map "
 			    "PCI Configuration Space Address.\n");
 		prom_halt();
@@ -342,7 +341,7 @@
 	 */
 	pcic->pcic_res_cfg_data.name = "pcic_cfg_data";
 	if ((pcic->pcic_config_space_data =
-	    ioremap(regs[3].phys_addr, regs[3].reg_size * 2)) == 0) {
+	    ioremap(regs[3].phys_addr, regs[3].reg_size * 2)) == NULL) {
 		prom_printf("PCIC: Error, cannot map "
 			    "PCI Configuration Space Data.\n");
 		prom_halt();
@@ -354,7 +353,6 @@
 	strcpy(pbm->prom_name, namebuf);
 
 	{
-		extern volatile int t_nmi[4];
 		extern int pcic_nmi_trap_patch[4];
 
 		t_nmi[0] = pcic_nmi_trap_patch[0];
@@ -536,7 +536,7 @@
 		prom_getstring(node, "name", namebuf, sizeof(namebuf));
 	}
 
-	if ((p = pcic->pcic_imap) == 0) {
+	if ((p = pcic->pcic_imap) == NULL) {
 		dev->irq = 0;
 		return;
 	}
@@ -670,30 +670,6 @@
 	}
 }
 
-/*
- * pcic_pin_to_irq() is exported to bus probing code
- */
-unsigned int
-pcic_pin_to_irq(unsigned int pin, const char *name)
-{
-	struct linux_pcic *pcic = &pcic0;
-	unsigned int irq;
-	unsigned int ivec;
-
-	if (pin < 4) {
-		ivec = readw(pcic->pcic_regs+PCI_INT_SELECT_LO);
-		irq = ivec >> (pin << 2) & 0xF;
-	} else if (pin < 8) {
-		ivec = readw(pcic->pcic_regs+PCI_INT_SELECT_HI);
-		irq = ivec >> ((pin-4) << 2) & 0xF;
-	} else {	/* Corrupted map */
-		printk("PCIC: BAD PIN %d FOR %s\n", pin, name);
-		for (;;) {}	/* XXX Cannot panic properly in case of PROLL */
-	}
-/* P3 */ /* printk("PCIC: dev %s pin %d ivec 0x%x irq %x\n", name, pin, ivec, irq); */
-	return irq;
-}
-
 /* Makes compiler happy */
 static volatile int pcic_timer_dummy;
 
@@ -759,7 +783,7 @@
 void pcic_nmi(unsigned int pend, struct pt_regs *regs)
 {
 
-	pend = flip_dword(pend);
+	pend = swab32(pend);
 
 	if (!pcic_speculative || (pend & PCI_SYS_INT_PENDING_PIO) == 0) {
 		/*
@@ -850,83 +874,5 @@
 	sparc_config.clear_clock_irq  = pcic_clear_clock_irq;
 	sparc_config.load_profile_irq = pcic_load_profile_irq;
 }
-
-/*
- * This probably belongs here rather than ioport.c because
- * we do not want this crud linked into SBus kernels.
- * Also, think for a moment about likes of floppy.c that
- * include architecture specific parts. They may want to redefine ins/outs.
- *
- * We do not use horrible macros here because we want to
- * advance pointer by sizeof(size).
- */
-void outsb(unsigned long addr, const void *src, unsigned long count)
-{
-	while (count) {
-		count -= 1;
-		outb(*(const char *)src, addr);
-		src += 1;
-		/* addr += 1; */
-	}
-}
-EXPORT_SYMBOL(outsb);
-
-void outsw(unsigned long addr, const void *src, unsigned long count)
-{
-	while (count) {
-		count -= 2;
-		outw(*(const short *)src, addr);
-		src += 2;
-		/* addr += 2; */
-	}
-}
-EXPORT_SYMBOL(outsw);
-
-void outsl(unsigned long addr, const void *src, unsigned long count)
-{
-	while (count) {
-		count -= 4;
-		outl(*(const long *)src, addr);
-		src += 4;
-		/* addr += 4; */
-	}
-}
-EXPORT_SYMBOL(outsl);
-
-void insb(unsigned long addr, void *dst, unsigned long count)
-{
-	while (count) {
-		count -= 1;
-		*(unsigned char *)dst = inb(addr);
-		dst += 1;
-		/* addr += 1; */
-	}
-}
-EXPORT_SYMBOL(insb);
-
-void insw(unsigned long addr, void *dst, unsigned long count)
-{
-	while (count) {
-		count -= 2;
-		*(unsigned short *)dst = inw(addr);
-		dst += 2;
-		/* addr += 2; */
-	}
-}
-EXPORT_SYMBOL(insw);
-
-void insl(unsigned long addr, void *dst, unsigned long count)
-{
-	while (count) {
-		count -= 4;
-		/*
-		 * XXX I am sure we are in for an unaligned trap here.
-		 */
-		*(unsigned long *)dst = inl(addr);
-		dst += 4;
-		/* addr += 4; */
-	}
-}
-EXPORT_SYMBOL(insl);
 
 subsys_initcall(pcic_init);
arch/sparc/kernel/perf_event.c (+13, -10)
@@ -110,7 +110,7 @@
 
 	unsigned int group_flag;
 };
-DEFINE_PER_CPU(struct cpu_hw_events, cpu_hw_events) = { .enabled = 1, };
+static DEFINE_PER_CPU(struct cpu_hw_events, cpu_hw_events) = { .enabled = 1, };
 
 /* An event map describes the characteristics of a performance
  * counter event. In particular it gives the encoding as well as
@@ -1153,7 +1153,7 @@
 		cpuc->pcr[i] = pcr_ops->read_pcr(i);
 }
 
-void perf_event_grab_pmc(void)
+static void perf_event_grab_pmc(void)
 {
 	if (atomic_inc_not_zero(&active_events))
 		return;
@@ -1169,7 +1169,7 @@
 	mutex_unlock(&pmc_grab_mutex);
 }
 
-void perf_event_release_pmc(void)
+static void perf_event_release_pmc(void)
 {
 	if (atomic_dec_and_mutex_lock(&active_events, &pmc_grab_mutex)) {
 		if (atomic_read(&nmi_active) == 0)
@@ -1669,7 +1669,7 @@
 	return false;
 }
 
-int __init init_hw_perf_events(void)
+static int __init init_hw_perf_events(void)
 {
 	pr_info("Performance events: ");
 
@@ -1742,10 +1742,11 @@
 
 	ufp = regs->u_regs[UREG_I6] + STACK_BIAS;
 	do {
-		struct sparc_stackf *usf, sf;
+		struct sparc_stackf __user *usf;
+		struct sparc_stackf sf;
 		unsigned long pc;
 
-		usf = (struct sparc_stackf *) ufp;
+		usf = (struct sparc_stackf __user *)ufp;
 		if (__copy_from_user_inatomic(&sf, usf, sizeof(sf)))
 			break;
 
@@ -1766,17 +1765,19 @@
 		unsigned long pc;
 
 		if (thread32_stack_is_64bit(ufp)) {
-			struct sparc_stackf *usf, sf;
+			struct sparc_stackf __user *usf;
+			struct sparc_stackf sf;
 
 			ufp += STACK_BIAS;
-			usf = (struct sparc_stackf *) ufp;
+			usf = (struct sparc_stackf __user *)ufp;
 			if (__copy_from_user_inatomic(&sf, usf, sizeof(sf)))
 				break;
 			pc = sf.callers_pc & 0xffffffff;
 			ufp = ((unsigned long) sf.fp) & 0xffffffff;
 		} else {
-			struct sparc_stackf32 *usf, sf;
-			usf = (struct sparc_stackf32 *) ufp;
+			struct sparc_stackf32 __user *usf;
+			struct sparc_stackf32 sf;
+			usf = (struct sparc_stackf32 __user *)ufp;
 			if (__copy_from_user_inatomic(&sf, usf, sizeof(sf)))
 				break;
 			pc = sf.callers_pc;
@@ -4,7 +4,7 @@
 #include <linux/spinlock.h>
 #include <asm/prom.h>
 
-extern void of_console_init(void);
+void of_console_init(void);
 
 extern unsigned int prom_early_allocated;
 
arch/sparc/kernel/prom_64.c (+5, -4)
@@ -15,11 +15,12 @@
  * 2 of the License, or (at your option) any later version.
  */
 
-#include <linux/kernel.h>
-#include <linux/types.h>
-#include <linux/string.h>
-#include <linux/mm.h>
 #include <linux/memblock.h>
+#include <linux/kernel.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/cpu.h>
+#include <linux/mm.h>
 #include <linux/of.h>
 
 #include <asm/prom.h>
arch/sparc/kernel/psycho_common.h (+11, -11)
@@ -30,19 +30,19 @@
 	UE_ERR, CE_ERR, PCI_ERR
 };
 
-extern void psycho_check_iommu_error(struct pci_pbm_info *pbm,
-				     unsigned long afsr,
-				     unsigned long afar,
-				     enum psycho_error_type type);
+void psycho_check_iommu_error(struct pci_pbm_info *pbm,
+			      unsigned long afsr,
+			      unsigned long afar,
+			      enum psycho_error_type type);
 
-extern irqreturn_t psycho_pcierr_intr(int irq, void *dev_id);
+irqreturn_t psycho_pcierr_intr(int irq, void *dev_id);
 
-extern int psycho_iommu_init(struct pci_pbm_info *pbm, int tsbsize,
-			     u32 dvma_offset, u32 dma_mask,
-			     unsigned long write_complete_offset);
+int psycho_iommu_init(struct pci_pbm_info *pbm, int tsbsize,
+		      u32 dvma_offset, u32 dma_mask,
+		      unsigned long write_complete_offset);
 
-extern void psycho_pbm_init_common(struct pci_pbm_info *pbm,
-				   struct platform_device *op,
-				   const char *chip_name, int chip_type);
+void psycho_pbm_init_common(struct pci_pbm_info *pbm,
+			    struct platform_device *op,
+			    const char *chip_name, int chip_type);
 
 #endif /* _PSYCHO_COMMON_H */
@@ -267,7 +267,7 @@
 }
 
 struct tt_entry *sparc_ttable;
-struct pt_regs fake_swapper_regs;
+static struct pt_regs fake_swapper_regs;
 
 /* Called from head_32.S - before we have setup anything
  * in the kernel. Be very careful with what you do here.
@@ -365,7 +365,7 @@
 
 	prom_setsync(prom_sync_me);
 
-	if((boot_flags&BOOTME_DEBUG) && (linux_dbvec!=0) && 
+	if((boot_flags & BOOTME_DEBUG) && (linux_dbvec != NULL) &&
 	   ((*(short *)linux_dbvec) != -1)) {
 		printk("Booted under KADB. Syncing trap table.\n");
 		(*(linux_dbvec->teach_debugger))();
arch/sparc/kernel/signal32.c (+18, -38)
@@ -31,6 +31,7 @@
 #include <asm/switch_to.h>
 
 #include "sigutil.h"
+#include "kernel.h"
 
 /* This magic should be in g_upper[0] for all upper parts
  * to be valid.
@@ -146,7 +145,7 @@
 	unsigned int psr;
 	unsigned pc, npc;
 	sigset_t set;
-	unsigned seta[_COMPAT_NSIG_WORDS];
+	compat_sigset_t seta;
 	int err, i;
 
 	/* Always make any pending restarted system calls return -EINTR */
@@ -210,17 +209,13 @@
 		if (restore_rwin_state(compat_ptr(rwin_save)))
 			goto segv;
 	}
-	err |= __get_user(seta[0], &sf->info.si_mask);
-	err |= copy_from_user(seta+1, &sf->extramask,
+	err |= __get_user(seta.sig[0], &sf->info.si_mask);
+	err |= copy_from_user(&seta.sig[1], &sf->extramask,
 			      (_COMPAT_NSIG_WORDS - 1) * sizeof(unsigned int));
 	if (err)
 		goto segv;
-	switch (_NSIG_WORDS) {
-	case 4: set.sig[3] = seta[6] + (((long)seta[7]) << 32);
-	case 3: set.sig[2] = seta[4] + (((long)seta[5]) << 32);
-	case 2: set.sig[1] = seta[2] + (((long)seta[3]) << 32);
-	case 1: set.sig[0] = seta[0] + (((long)seta[1]) << 32);
-	}
+
+	set.sig[0] = seta.sig[0] + (((long)seta.sig[1]) << 32);
 	set_current_blocked(&set);
 	return;
 
@@ -300,12 +303,7 @@
 		goto segv;
 	}
 
-	switch (_NSIG_WORDS) {
-	case 4: set.sig[3] = seta.sig[6] + (((long)seta.sig[7]) << 32);
-	case 3: set.sig[2] = seta.sig[4] + (((long)seta.sig[5]) << 32);
-	case 2: set.sig[1] = seta.sig[2] + (((long)seta.sig[3]) << 32);
-	case 1: set.sig[0] = seta.sig[0] + (((long)seta.sig[1]) << 32);
-	}
+	set.sig[0] = seta.sig[0] + (((long)seta.sig[1]) << 32);
 	set_current_blocked(&set);
 	return;
 segv:
@@ -409,7 +417,7 @@
 	void __user *tail;
 	int sigframe_size;
 	u32 psr;
-	unsigned int seta[_COMPAT_NSIG_WORDS];
+	compat_sigset_t seta;
 
 	/* 1. Make sure everything is clean */
 	synchronize_user_stack();
@@ -473,18 +481,14 @@
 		err |= __put_user(0, &sf->rwin_save);
 	}
 
-	switch (_NSIG_WORDS) {
-	case 4: seta[7] = (oldset->sig[3] >> 32);
-		seta[6] = oldset->sig[3];
-	case 3: seta[5] = (oldset->sig[2] >> 32);
-		seta[4] = oldset->sig[2];
-	case 2: seta[3] = (oldset->sig[1] >> 32);
-		seta[2] = oldset->sig[1];
-	case 1: seta[1] = (oldset->sig[0] >> 32);
-		seta[0] = oldset->sig[0];
-	}
-	err |= __put_user(seta[0], &sf->info.si_mask);
-	err |= __copy_to_user(sf->extramask, seta + 1,
+	/* If these change we need to know - assignments to seta relies on these sizes */
+	BUILD_BUG_ON(_NSIG_WORDS != 1);
+	BUILD_BUG_ON(_COMPAT_NSIG_WORDS != 2);
+	seta.sig[1] = (oldset->sig[0] >> 32);
+	seta.sig[0] = oldset->sig[0];
+
+	err |= __put_user(seta.sig[0], &sf->info.si_mask);
+	err |= __copy_to_user(sf->extramask, &seta.sig[1],
 			      (_COMPAT_NSIG_WORDS - 1) * sizeof(unsigned int));
 
 	if (!wsaved) {
@@ -610,16 +622,8 @@
 	/* Setup sigaltstack */
 	err |= __compat_save_altstack(&sf->stack, regs->u_regs[UREG_FP]);
 
-	switch (_NSIG_WORDS) {
-	case 4: seta.sig[7] = (oldset->sig[3] >> 32);
-		seta.sig[6] = oldset->sig[3];
-	case 3: seta.sig[5] = (oldset->sig[2] >> 32);
-		seta.sig[4] = oldset->sig[2];
-	case 2: seta.sig[3] = (oldset->sig[1] >> 32);
-		seta.sig[2] = oldset->sig[1];
-	case 1: seta.sig[1] = (oldset->sig[0] >> 32);
-		seta.sig[0] = oldset->sig[0];
-	}
+	seta.sig[1] = (oldset->sig[0] >> 32);
+	seta.sig[0] = oldset->sig[0];
 	err |= __copy_to_user(&sf->mask, &seta, sizeof(compat_sigset_t));
 
 	if (!wsaved) {
@@ -20,6 +20,7 @@
 #include <linux/seq_file.h>
 #include <linux/cache.h>
 #include <linux/delay.h>
+#include <linux/profile.h>
 #include <linux/cpu.h>
 
 #include <asm/ptrace.h>
@@ -76,8 +75,6 @@
 
 void __init smp_cpus_done(unsigned int max_cpus)
 {
-	extern void smp4m_smp_done(void);
-	extern void smp4d_smp_done(void);
 	unsigned long bogosum = 0;
 	int cpu, num = 0;
 
@@ -182,8 +183,6 @@
 
 void __init smp_prepare_cpus(unsigned int max_cpus)
 {
-	extern void __init smp4m_boot_cpus(void);
-	extern void __init smp4d_boot_cpus(void);
 	int i, cpuid, extra;
 
 	printk("Entering SMP Mode...\n");
@@ -258,8 +261,6 @@
 
 int __cpu_up(unsigned int cpu, struct task_struct *tidle)
 {
-	extern int smp4m_boot_one_cpu(int, struct task_struct *);
-	extern int smp4d_boot_one_cpu(int, struct task_struct *);
 	int ret=0;
 
 	switch(sparc_cpu_model) {
@@ -292,7 +297,7 @@
 	return ret;
 }
 
-void arch_cpu_pre_starting(void *arg)
+static void arch_cpu_pre_starting(void *arg)
 {
 	local_ops->cache_all();
 	local_ops->tlb_all();
@@ -312,7 +317,7 @@
 	}
 }
 
-void arch_cpu_pre_online(void *arg)
+static void arch_cpu_pre_online(void *arg)
 {
 	unsigned int cpuid = hard_smp_processor_id();
 
@@ -339,7 +344,7 @@
 	}
 }
 
-void sparc_start_secondary(void *arg)
+static void sparc_start_secondary(void *arg)
 {
 	unsigned int cpu;
 
arch/sparc/kernel/smp_64.c (+3, -13)
@@ -25,6 +25,7 @@
 #include <linux/ftrace.h>
 #include <linux/cpu.h>
 #include <linux/slab.h>
+#include <linux/kgdb.h>
 
 #include <asm/head.h>
 #include <asm/ptrace.h>
@@ -36,6 +35,7 @@
 #include <asm/hvtramp.h>
 #include <asm/io.h>
 #include <asm/timer.h>
+#include <asm/setup.h>
 
 #include <asm/irq.h>
 #include <asm/irq_regs.h>
@@ -54,6 +52,7 @@
 #include <asm/pcr.h>
 
 #include "cpumap.h"
+#include "kernel.h"
 
 DEFINE_PER_CPU(cpumask_t, cpu_sibling_map) = CPU_MASK_NONE;
 cpumask_t cpu_core_map[NR_CPUS] __read_mostly =
@@ -275,14 +272,6 @@
 }
 
 #if defined(CONFIG_SUN_LDOMS) && defined(CONFIG_HOTPLUG_CPU)
-/* XXX Put this in some common place. XXX */
-static unsigned long kimage_addr_to_ra(void *p)
-{
-	unsigned long val = (unsigned long) p;
-
-	return kern_base + (val - KERNBASE);
-}
-
 static void ldom_startcpu_cpuid(unsigned int cpu, unsigned long thread_reg,
 				void **descrp)
 {
@@ -861,11 +866,6 @@
 extern unsigned long xcall_flush_dcache_page_cheetah;
 #endif
 extern unsigned long xcall_flush_dcache_page_spitfire;
-
-#ifdef CONFIG_DEBUG_DCFLUSH
-extern atomic_t dcpage_flushes;
-extern atomic_t dcpage_flushes_xcall;
-#endif
 
 static inline void __local_flush_dcache_page(struct page *page)
 {
arch/sparc/kernel/sun4d_irq.c (+9, -8)
@@ -143,7 +143,7 @@
 	}
 }
 
-void sun4d_handler_irq(int pil, struct pt_regs *regs)
+void sun4d_handler_irq(unsigned int pil, struct pt_regs *regs)
 {
 	struct pt_regs *old_regs;
 	/* SBUS IRQ level (1 - 7) */
@@ -236,7 +236,7 @@
 	irq_unlink(data->irq);
 }
 
-struct irq_chip sun4d_irq = {
+static struct irq_chip sun4d_irq = {
 	.name		= "sun4d",
 	.irq_startup	= sun4d_startup_irq,
 	.irq_shutdown	= sun4d_shutdown_irq,
@@ -285,9 +285,9 @@
 	}
 }
 
-unsigned int _sun4d_build_device_irq(unsigned int real_irq,
-				     unsigned int pil,
-				     unsigned int board)
+static unsigned int _sun4d_build_device_irq(unsigned int real_irq,
+					    unsigned int pil,
+					    unsigned int board)
 {
 	struct sun4d_handler_data *handler_data;
 	unsigned int irq;
@@ -320,8 +320,8 @@
 
 
 
-unsigned int sun4d_build_device_irq(struct platform_device *op,
-				    unsigned int real_irq)
+static unsigned int sun4d_build_device_irq(struct platform_device *op,
+					   unsigned int real_irq)
 {
 	struct device_node *dp = op->dev.of_node;
 	struct device_node *board_parent, *bus = dp->parent;
@@ -383,7 +383,8 @@
 	return irq;
 }
 
-unsigned int sun4d_build_timer_irq(unsigned int board, unsigned int real_irq)
+static unsigned int sun4d_build_timer_irq(unsigned int board,
+					  unsigned int real_irq)
 {
 	return _sun4d_build_device_irq(real_irq, real_irq, board);
 }
arch/sparc/kernel/sys_sparc32.c (+2)
@@ -49,6 +49,8 @@
 #include <asm/mmu_context.h>
 #include <asm/compat_signal.h>
 
+#include "systbls.h"
+
 asmlinkage long sys32_truncate64(const char __user * path, unsigned long high, unsigned long low)
 {
 	if ((int)high < 0)
arch/sparc/kernel/sys_sparc_32.c (+6, -4)
@@ -24,6 +24,8 @@
 #include <asm/uaccess.h>
 #include <asm/unistd.h>
 
+#include "systbls.h"
+
 /* #define DEBUG_UNIMP_SYSCALL */
 
 /* XXX Make this per-binary type, this way we can detect the type of
@@ -70,7 +68,7 @@
  * sys_pipe() is the normal C calling standard for creating
  * a pipe. It's not the way unix traditionally does this, though.
  */
-asmlinkage int sparc_pipe(struct pt_regs *regs)
+asmlinkage long sparc_pipe(struct pt_regs *regs)
 {
 	int fd[2];
 	int error;
@@ -95,7 +93,7 @@
 
 /* Linux version of mmap */
 
-asmlinkage unsigned long sys_mmap2(unsigned long addr, unsigned long len,
+asmlinkage long sys_mmap2(unsigned long addr, unsigned long len,
 	unsigned long prot, unsigned long flags, unsigned long fd,
 	unsigned long pgoff)
 {
@@ -105,7 +103,7 @@
 		pgoff >> (PAGE_SHIFT - 12));
 }
 
-asmlinkage unsigned long sys_mmap(unsigned long addr, unsigned long len,
+asmlinkage long sys_mmap(unsigned long addr, unsigned long len,
 	unsigned long prot, unsigned long flags, unsigned long fd,
 	unsigned long off)
 {
@@ -199,7 +197,7 @@
 	return ret;
 }
 
-asmlinkage int sys_getdomainname(char __user *name, int len)
+asmlinkage long sys_getdomainname(char __user *name, int len)
 {
 	int nlen, err;
 
···
 #ifndef _SYSTBLS_H
 #define _SYSTBLS_H

-#include <linux/kernel.h>
-#include <linux/types.h>
 #include <linux/signal.h>
+#include <linux/kernel.h>
+#include <linux/compat.h>
+#include <linux/types.h>
+
 #include <asm/utrap.h>

-extern asmlinkage unsigned long sys_getpagesize(void);
-extern asmlinkage long sparc_pipe(struct pt_regs *regs);
-extern asmlinkage long sys_sparc_ipc(unsigned int call, int first,
-                                     unsigned long second,
-                                     unsigned long third,
-                                     void __user *ptr, long fifth);
-extern asmlinkage long sparc64_personality(unsigned long personality);
-extern asmlinkage long sys64_munmap(unsigned long addr, size_t len);
-extern asmlinkage unsigned long sys64_mremap(unsigned long addr,
-                                             unsigned long old_len,
-                                             unsigned long new_len,
-                                             unsigned long flags,
-                                             unsigned long new_addr);
-extern asmlinkage unsigned long c_sys_nis_syscall(struct pt_regs *regs);
-extern asmlinkage long sys_getdomainname(char __user *name, int len);
-extern asmlinkage long sys_utrap_install(utrap_entry_t type,
-                                         utrap_handler_t new_p,
-                                         utrap_handler_t new_d,
-                                         utrap_handler_t __user *old_p,
-                                         utrap_handler_t __user *old_d);
-extern asmlinkage long sparc_memory_ordering(unsigned long model,
-                                             struct pt_regs *regs);
-extern asmlinkage long sys_rt_sigaction(int sig,
-                                        const struct sigaction __user *act,
-                                        struct sigaction __user *oact,
-                                        void __user *restorer,
-                                        size_t sigsetsize);
+asmlinkage unsigned long sys_getpagesize(void);
+asmlinkage long sparc_pipe(struct pt_regs *regs);
+asmlinkage unsigned long c_sys_nis_syscall(struct pt_regs *regs);
+asmlinkage long sys_getdomainname(char __user *name, int len);
+void do_rt_sigreturn(struct pt_regs *regs);
+asmlinkage long sys_mmap(unsigned long addr, unsigned long len,
+                         unsigned long prot, unsigned long flags,
+                         unsigned long fd, unsigned long off);
+asmlinkage void sparc_breakpoint(struct pt_regs *regs);

-extern asmlinkage void sparc64_set_context(struct pt_regs *regs);
-extern asmlinkage void sparc64_get_context(struct pt_regs *regs);
-extern void do_rt_sigreturn(struct pt_regs *regs);
+#ifdef CONFIG_SPARC32
+asmlinkage long sys_mmap2(unsigned long addr, unsigned long len,
+                          unsigned long prot, unsigned long flags,
+                          unsigned long fd, unsigned long pgoff);
+long sparc_remap_file_pages(unsigned long start, unsigned long size,
+                            unsigned long prot, unsigned long pgoff,
+                            unsigned long flags);

+#endif /* CONFIG_SPARC32 */
+
+#ifdef CONFIG_SPARC64
+asmlinkage long sys_sparc_ipc(unsigned int call, int first,
+                              unsigned long second,
+                              unsigned long third,
+                              void __user *ptr, long fifth);
+asmlinkage long sparc64_personality(unsigned long personality);
+asmlinkage long sys64_munmap(unsigned long addr, size_t len);
+asmlinkage unsigned long sys64_mremap(unsigned long addr,
+                                      unsigned long old_len,
+                                      unsigned long new_len,
+                                      unsigned long flags,
+                                      unsigned long new_addr);
+asmlinkage long sys_utrap_install(utrap_entry_t type,
+                                  utrap_handler_t new_p,
+                                  utrap_handler_t new_d,
+                                  utrap_handler_t __user *old_p,
+                                  utrap_handler_t __user *old_d);
+asmlinkage long sparc_memory_ordering(unsigned long model,
+                                      struct pt_regs *regs);
+asmlinkage void sparc64_set_context(struct pt_regs *regs);
+asmlinkage void sparc64_get_context(struct pt_regs *regs);
+asmlinkage long sys32_truncate64(const char __user * path,
+                                 unsigned long high,
+                                 unsigned long low);
+asmlinkage long sys32_ftruncate64(unsigned int fd,
+                                  unsigned long high,
+                                  unsigned long low);
+struct compat_stat64;
+asmlinkage long compat_sys_stat64(const char __user * filename,
+                                  struct compat_stat64 __user *statbuf);
+asmlinkage long compat_sys_lstat64(const char __user * filename,
+                                   struct compat_stat64 __user *statbuf);
+asmlinkage long compat_sys_fstat64(unsigned int fd,
+                                   struct compat_stat64 __user * statbuf);
+asmlinkage long compat_sys_fstatat64(unsigned int dfd,
+                                     const char __user *filename,
+                                     struct compat_stat64 __user * statbuf, int flag);
+asmlinkage compat_ssize_t sys32_pread64(unsigned int fd,
+                                        char __user *ubuf,
+                                        compat_size_t count,
+                                        unsigned long poshi,
+                                        unsigned long poslo);
+asmlinkage compat_ssize_t sys32_pwrite64(unsigned int fd,
+                                         char __user *ubuf,
+                                         compat_size_t count,
+                                         unsigned long poshi,
+                                         unsigned long poslo);
+asmlinkage long compat_sys_readahead(int fd,
+                                     unsigned long offhi,
+                                     unsigned long offlo,
+                                     compat_size_t count);
+long compat_sys_fadvise64(int fd,
+                          unsigned long offhi,
+                          unsigned long offlo,
+                          compat_size_t len, int advice);
+long compat_sys_fadvise64_64(int fd,
+                             unsigned long offhi, unsigned long offlo,
+                             unsigned long lenhi, unsigned long lenlo,
+                             int advice);
+long sys32_sync_file_range(unsigned int fd,
+                           unsigned long off_high, unsigned long off_low,
+                           unsigned long nb_high, unsigned long nb_low,
+                           unsigned int flags);
+asmlinkage long compat_sys_fallocate(int fd, int mode, u32 offhi, u32 offlo,
+                                     u32 lenhi, u32 lenlo);
+asmlinkage long compat_sys_fstat64(unsigned int fd,
+                                   struct compat_stat64 __user * statbuf);
+asmlinkage long compat_sys_fstatat64(unsigned int dfd,
+                                     const char __user *filename,
+                                     struct compat_stat64 __user * statbuf,
+                                     int flag);
+#endif /* CONFIG_SPARC64 */
 #endif /* _SYSTBLS_H */
···
 #include <linux/mm.h>
 #include <linux/smp.h>

+#include <asm/cacheflush.h>
 #include <asm/uaccess.h>
+
+#include "kernel.h"

 /* Do save's until all user register windows are out of the cpu. */
 void flush_user_windows(void)
···
 #include <asm/lsu.h>
 #include <asm/sections.h>
 #include <asm/mmu_context.h>
+#include <asm/setup.h>

 int show_unhandled_signals = 1;

···
         force_sig_info(sig, &info, current);
 }
-
-extern int handle_ldf_stq(u32, struct pt_regs *);
-extern int handle_ld_nf(u32, struct pt_regs *);

 static unsigned int get_fault_insn(struct pt_regs *regs, unsigned int insn)
 {
arch/sparc/mm/init_32.c (+3, -4)
···
 #include <asm/pgtable.h>
 #include <asm/vaddrs.h>
 #include <asm/pgalloc.h>	/* bug in asm-generic/tlb.h: check_pgt_cache */
+#include <asm/setup.h>
 #include <asm/tlb.h>
 #include <asm/prom.h>
 #include <asm/leon.h>
+
+#include "mm_32.h"

 unsigned long *sparc_valid_addr_bitmap;
 EXPORT_SYMBOL(sparc_valid_addr_bitmap);
···
 }

-extern unsigned long cmdline_memory_size;
 unsigned long last_valid_pfn;

 unsigned long calc_highpages(void)
···
  * init routine based upon the Sun model type on the Sparc.
  *
  */
-extern void srmmu_paging_init(void);
-extern void device_scan(void);
-
 void __init paging_init(void)
 {
         srmmu_paging_init();
arch/sparc/mm/init_64.c (+7, -2)
···
 #include <asm/prom.h>
 #include <asm/mdesc.h>
 #include <asm/cpudata.h>
+#include <asm/setup.h>
 #include <asm/irq.h>

 #include "init_64.h"
···
 static struct node_mem_mask node_masks[MAX_NUMNODES];
 static int num_node_masks;

+#ifdef CONFIG_NEED_MULTIPLE_NODES
+
 int numa_cpu_lookup_table[NR_CPUS];
 cpumask_t numa_cpumask_lookup_table[MAX_NUMNODES];
-
-#ifdef CONFIG_NEED_MULTIPLE_NODES

 struct mdesc_mblock {
         u64 base;
···

 static void init_node_masks_nonnuma(void)
 {
+#ifdef CONFIG_NEED_MULTIPLE_NODES
         int i;
+#endif

         numadbg("Initializing tables for non-numa.\n");

         node_masks[0].mask = node_masks[0].val = 0;
         num_node_masks = 1;

+#ifdef CONFIG_NEED_MULTIPLE_NODES
         for (i = 0; i < NR_CPUS; i++)
                 numa_cpu_lookup_table[i] = 0;

         cpumask_setall(&numa_cpumask_lookup_table[0]);
+#endif
 }

 #ifdef CONFIG_NEED_MULTIPLE_NODES
arch/sparc/mm/init_64.h (+2, -2)
···
 extern unsigned long sparc64_kern_pri_context;
 extern unsigned long sparc64_kern_pri_nuc_bits;
 extern unsigned long sparc64_kern_sec_context;
-extern void mmu_info(struct seq_file *m);
+void mmu_info(struct seq_file *m);

 struct linux_prom_translation {
         unsigned long virt;
···
 /* Exported for SMP bootup purposes. */
 extern unsigned long kern_locked_tte_data;

-extern void prom_world(int enter);
+void prom_world(int enter);

 #ifdef CONFIG_SPARSEMEM_VMEMMAP
 #define VMEMMAP_CHUNK_SHIFT	22
···
 #include <asm/iommu.h>
 #include <asm/dma.h>

+#include "mm_32.h"
+
 /*
  * This can be sized dynamically, but we will do this
  * only when we have a guidance about actual I/O pressures.
···
 #define IOMMU_NPTES	(IOMMU_WINSIZE/PAGE_SIZE)	/* 64K PTEs, 256KB */
 #define IOMMU_ORDER	6				/* 4096 * (1<<6) */

-/* srmmu.c */
-extern int viking_mxcc_present;
-extern int flush_page_for_dma_global;
 static int viking_flush;
 /* viking.S */
 extern void viking_flush_page(unsigned long page);
···
         struct iommu_struct *iommu;
         unsigned int impl, vers;
         unsigned long *bitmap;
+        unsigned long control;
+        unsigned long base;
         unsigned long tmp;

         iommu = kmalloc(sizeof(struct iommu_struct), GFP_KERNEL);
···
                 prom_printf("Cannot map IOMMU registers\n");
                 prom_halt();
         }
-        impl = (iommu->regs->control & IOMMU_CTRL_IMPL) >> 28;
-        vers = (iommu->regs->control & IOMMU_CTRL_VERS) >> 24;
-        tmp = iommu->regs->control;
-        tmp &= ~(IOMMU_CTRL_RNGE);
-        tmp |= (IOMMU_RNGE_256MB | IOMMU_CTRL_ENAB);
-        iommu->regs->control = tmp;
+
+        control = sbus_readl(&iommu->regs->control);
+        impl = (control & IOMMU_CTRL_IMPL) >> 28;
+        vers = (control & IOMMU_CTRL_VERS) >> 24;
+        control &= ~(IOMMU_CTRL_RNGE);
+        control |= (IOMMU_RNGE_256MB | IOMMU_CTRL_ENAB);
+        sbus_writel(control, &iommu->regs->control);
+
         iommu_invalidate(iommu->regs);
         iommu->start = IOMMU_START;
         iommu->end = 0xffffffff;
···
         memset(iommu->page_table, 0, IOMMU_NPTES*sizeof(iopte_t));
         flush_cache_all();
         flush_tlb_all();
-        iommu->regs->base = __pa((unsigned long) iommu->page_table) >> 4;
+
+        base = __pa((unsigned long)iommu->page_table) >> 4;
+        sbus_writel(base, &iommu->regs->base);
         iommu_invalidate(iommu->regs);

         bitmap = kmalloc(IOMMU_NPTES>>3, GFP_KERNEL);
arch/sparc/mm/leon_mm.c (+2, -2)
···
 #include <asm/leon.h>
 #include <asm/tlbflush.h>

-#include "srmmu.h"
+#include "mm_32.h"

 int leon_flush_during_switch = 1;
-int srmmu_swprobe_trace;
+static int srmmu_swprobe_trace;

 static inline unsigned long leon_get_ctable_ptr(void)
 {
arch/sparc/mm/mm_32.h (new file, +24)
···
+/* fault_32.c - visible as they are called from assembler */
+asmlinkage int lookup_fault(unsigned long pc, unsigned long ret_pc,
+                            unsigned long address);
+asmlinkage void do_sparc_fault(struct pt_regs *regs, int text_fault, int write,
+                               unsigned long address);
+
+void window_overflow_fault(void);
+void window_underflow_fault(unsigned long sp);
+void window_ret_fault(struct pt_regs *regs);
+
+/* srmmu.c */
+extern char *srmmu_name;
+extern int viking_mxcc_present;
+extern int flush_page_for_dma_global;
+
+extern void (*poke_srmmu)(void);
+
+void __init srmmu_paging_init(void);
+
+/* iommu.c */
+void ld_mmu_iommu(void);
+
+/* io-unit.c */
+void ld_mmu_iounit(void);
···
 }
 EXPORT_SYMBOL(prom_feval);

-#ifdef CONFIG_SMP
-extern void smp_capture(void);
-extern void smp_release(void);
-#endif
-
 /* Drop into the prom, with the chance to continue with the 'go'
  * prom command.
  */
arch/unicore32/Kconfig (-6)
···
 config ARCH_HAS_ILOG2_U64
         bool

-config ARCH_HAS_CPUFREQ
-        bool
-
 config GENERIC_HWEIGHT
         def_bool y
···
         select GENERIC_CLOCKEVENTS
         select HAVE_CLK
         select ARCH_REQUIRE_GPIOLIB
-        select ARCH_HAS_CPUFREQ

 # CONFIGs for ARCH_PUV3
···
 source "kernel/power/Kconfig"

-if ARCH_HAS_CPUFREQ
 source "drivers/cpufreq/Kconfig"
-endif

 config ARCH_SUSPEND_POSSIBLE
         def_bool y if !ARCH_FPGA
arch/unicore32/include/asm/io.h (+27)
···
 #define ioremap_nocache(cookie, size)	__uc32_ioremap(cookie, size)
 #define iounmap(cookie)			__uc32_iounmap(cookie)

+#define readb_relaxed readb
+#define readw_relaxed readw
+#define readl_relaxed readl
+
 #define HAVE_ARCH_PIO_SIZE
 #define PIO_OFFSET		(unsigned int)(PCI_IOBASE)
 #define PIO_MASK		(unsigned int)(IO_SPACE_LIMIT)
 #define PIO_RESERVED		(PIO_OFFSET + PIO_MASK + 1)
+
+#ifdef CONFIG_STRICT_DEVMEM
+
+#include <linux/ioport.h>
+#include <linux/mm.h>
+
+/*
+ * devmem_is_allowed() checks to see if /dev/mem access to a certain
+ * address is valid. The argument is a physical page number.
+ * We mimic x86 here by disallowing access to system RAM as well as
+ * device-exclusive MMIO regions. This effectively disable read()/write()
+ * on /dev/mem.
+ */
+static inline int devmem_is_allowed(unsigned long pfn)
+{
+        if (iomem_is_exclusive(pfn << PAGE_SHIFT))
+                return 0;
+        if (!page_is_ram(pfn))
+                return 1;
+        return 0;
+}
+
+#endif /* CONFIG_STRICT_DEVMEM */

 #endif /* __KERNEL__ */
 #endif /* __UNICORE_IO_H__ */
···
 /*
  * Function pointers to optional machine specific functions
  */
 void (*pm_power_off)(void) = NULL;
+EXPORT_SYMBOL(pm_power_off);

 void machine_power_off(void)
 {
···
 config RANDOMIZE_BASE
         bool "Randomize the address of the kernel image"
         depends on RELOCATABLE
-        depends on !HIBERNATION
         default n
         ---help---
           Randomizes the physical and virtual address at which the
arch/x86/boot/compressed/aslr.c (+9, -2)
···
         unsigned long choice = (unsigned long)output;
         unsigned long random;

-        if (cmdline_find_option_bool("nokaslr")) {
-                debug_putstr("KASLR disabled...\n");
+#ifdef CONFIG_HIBERNATION
+        if (!cmdline_find_option_bool("kaslr")) {
+                debug_putstr("KASLR disabled by default...\n");
                 goto out;
         }
+#else
+        if (cmdline_find_option_bool("nokaslr")) {
+                debug_putstr("KASLR disabled by cmdline...\n");
+                goto out;
+        }
+#endif

         /* Record the various known unsafe memory ranges. */
         mem_avoid_init((unsigned long)input, input_size,
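The hunk above keys the default on whether a boolean flag appears on the kernel command line. As a rough illustration of that kind of whole-word flag lookup, here is a self-contained userspace sketch; `find_option_bool` is an illustrative stand-in, not the kernel's actual `cmdline_find_option_bool` implementation (which also handles quoting and `=`-valued options):

```c
#include <string.h>

/* Illustrative sketch: return 1 if `opt` appears as a whole
 * space-delimited word in `cmdline`, 0 otherwise. */
static int find_option_bool(const char *cmdline, const char *opt)
{
        size_t optlen = strlen(opt);
        const char *p = cmdline;

        while ((p = strstr(p, opt)) != NULL) {
                int at_start = (p == cmdline) || (p[-1] == ' ');
                int at_end = (p[optlen] == '\0') || (p[optlen] == ' ');

                if (at_start && at_end)
                        return 1;
                p += optlen;   /* substring match only; keep scanning */
        }
        return 0;
}
```

The whole-word check matters here: a naive substring search would treat `nokaslr` as containing `kaslr` and report the opt-in flag as present.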
arch/x86/kernel/traps.c (+4, -3)
···
         if (poke_int3_handler(regs))
                 return;

+        prev_state = exception_enter();
 #ifdef CONFIG_KGDB_LOW_LEVEL_TRAP
         if (kgdb_ll_trap(DIE_INT3, "int3", regs, error_code, X86_TRAP_BP,
                         SIGTRAP) == NOTIFY_STOP)
···

 #ifdef CONFIG_KPROBES
         if (kprobe_int3_handler(regs))
-                return;
+                goto exit;
 #endif
-        prev_state = exception_enter();

         if (notify_die(DIE_INT3, "int3", regs, error_code, X86_TRAP_BP,
                         SIGTRAP) == NOTIFY_STOP)
···
         unsigned long dr6;
         int si_code;

+        prev_state = exception_enter();
+
         get_debugreg(dr6, 6);

         /* Filter out all the reserved bits which are preset to 1 */
···
         if (kprobe_debug_handler(regs))
                 goto exit;
 #endif
-        prev_state = exception_enter();

         if (notify_die(DIE_DEBUG, "debug", regs, (long)&dr6, error_code,
                         SIGTRAP) == NOTIFY_STOP)
···
 #include <xen/interface/memory.h>
 #include <xen/interface/physdev.h>
 #include <xen/features.h>
-#include "mmu.h"
 #include "xen-ops.h"
 #include "vdso.h"
···

         memblock_reserve(start, size);

-        if (xen_feature(XENFEAT_auto_translated_physmap))
-                return;
-
         xen_max_p2m_pfn = PFN_DOWN(start + size);
         for (pfn = PFN_DOWN(start); pfn < xen_max_p2m_pfn; pfn++) {
                 unsigned long mfn = pfn_to_mfn(pfn);
···
                 .domid        = DOMID_SELF
         };
         unsigned long len = 0;
-        int xlated_phys = xen_feature(XENFEAT_auto_translated_physmap);
         unsigned long pfn;
         int ret;
···
                                 continue;
                         frame = mfn;
                 } else {
-                        if (!xlated_phys && mfn != INVALID_P2M_ENTRY)
+                        if (mfn != INVALID_P2M_ENTRY)
                                 continue;
                         frame = pfn;
                 }
···
 static unsigned long __init xen_release_chunk(unsigned long start,
                                               unsigned long end)
 {
-        /*
-         * Xen already ballooned out the E820 non RAM regions for us
-         * and set them up properly in EPT.
-         */
-        if (xen_feature(XENFEAT_auto_translated_physmap))
-                return end - start;
-
         return xen_do_chunk(start, end, true);
 }
···
          * (except for the ISA region which must be 1:1 mapped) to
          * release the refcounts (in Xen) on the original frames.
          */
-
-        /*
-         * PVH E820 matches the hypervisor's P2M which means we need to
-         * account for the proper values of *release and *identity.
-         */
-        for (pfn = start_pfn; !xen_feature(XENFEAT_auto_translated_physmap) &&
-             pfn <= max_pfn_mapped && pfn < end_pfn; pfn++) {
+        for (pfn = start_pfn; pfn <= max_pfn_mapped && pfn < end_pfn; pfn++) {
                 pte_t pte = __pte_ma(0);

                 if (pfn < PFN_UP(ISA_END_ADDRESS))
···
 }

 /*
+ * Machine specific memory setup for auto-translated guests.
+ */
+char * __init xen_auto_xlated_memory_setup(void)
+{
+        static struct e820entry map[E820MAX] __initdata;
+
+        struct xen_memory_map memmap;
+        int i;
+        int rc;
+
+        memmap.nr_entries = E820MAX;
+        set_xen_guest_handle(memmap.buffer, map);
+
+        rc = HYPERVISOR_memory_op(XENMEM_memory_map, &memmap);
+        if (rc < 0)
+                panic("No memory map (%d)\n", rc);
+
+        sanitize_e820_map(map, ARRAY_SIZE(map), &memmap.nr_entries);
+
+        for (i = 0; i < memmap.nr_entries; i++)
+                e820_add_region(map[i].addr, map[i].size, map[i].type);
+
+        memblock_reserve(__pa(xen_start_info->mfn_list),
+                         xen_start_info->pt_base - xen_start_info->mfn_list);
+
+        return "Xen";
+}
+
+/*
  * Set the bit indicating "nosegneg" library variants should be used.
  * We only need to bother in pure 32-bit mode; compat 32-bit processes
  * can have un-truncated segments, so wrapping around is allowed.
···
         }
 #endif /* CONFIG_X86_64 */
 }
-void xen_enable_nmi(void)
-{
-#ifdef CONFIG_X86_64
-        if (register_callback(CALLBACKTYPE_nmi, (char *)nmi))
-                BUG();
-#endif
-}
+
 void __init xen_pvmmu_arch_setup(void)
 {
         HYPERVISOR_vm_assist(VMASST_CMD_enable, VMASST_TYPE_4gb_segments);
···

         xen_enable_sysenter();
         xen_enable_syscall();
-        xen_enable_nmi();
 }

 /* This function is not called for HVM domains */
···
         /* used for unplugging and affects IO latency/throughput - HIGHPRI */
         kblockd_workqueue = alloc_workqueue("kblockd",
-                                            WQ_MEM_RECLAIM | WQ_HIGHPRI |
-                                            WQ_POWER_EFFICIENT, 0);
+                                            WQ_MEM_RECLAIM | WQ_HIGHPRI, 0);
         if (!kblockd_workqueue)
                 panic("Failed to create kblockd\n");
block/blk-flush.c (-38)
···
 }

 /**
- * blk_abort_flushes - @q is being aborted, abort flush requests
- * @q: request_queue being aborted
- *
- * To be called from elv_abort_queue(). @q is being aborted. Prepare all
- * FLUSH/FUA requests for abortion.
- *
- * CONTEXT:
- * spin_lock_irq(q->queue_lock)
- */
-void blk_abort_flushes(struct request_queue *q)
-{
-        struct request *rq, *n;
-        int i;
-
-        /*
-         * Requests in flight for data are already owned by the dispatch
-         * queue or the device driver. Just restore for normal completion.
-         */
-        list_for_each_entry_safe(rq, n, &q->flush_data_in_flight, flush.list) {
-                list_del_init(&rq->flush.list);
-                blk_flush_restore_request(rq);
-        }
-
-        /*
-         * We need to give away requests on flush queues. Restore for
-         * normal completion and put them on the dispatch queue.
-         */
-        for (i = 0; i < ARRAY_SIZE(q->flush_queue); i++) {
-                list_for_each_entry_safe(rq, n, &q->flush_queue[i],
-                                         flush.list) {
-                        list_del_init(&rq->flush.list);
-                        blk_flush_restore_request(rq);
-                        list_add_tail(&rq->queuelist, &q->queue_head);
-                }
-        }
-}
-
-/**
  * blkdev_issue_flush - queue a flush
  * @bdev:	blockdev to issue flush for
  * @gfp_mask:	memory allocation flags (for bio_alloc)
block/blk-mq-tag.c (+39, -22)
···
         return bt_has_free_tags(&tags->bitmap_tags);
 }

-static inline void bt_index_inc(unsigned int *index)
+static inline int bt_index_inc(int index)
 {
-        *index = (*index + 1) & (BT_WAIT_QUEUES - 1);
+        return (index + 1) & (BT_WAIT_QUEUES - 1);
+}
+
+static inline void bt_index_atomic_inc(atomic_t *index)
+{
+        int old = atomic_read(index);
+        int new = bt_index_inc(old);
+        atomic_cmpxchg(index, old, new);
 }

 /*
···
         int i, wake_index;

         bt = &tags->bitmap_tags;
-        wake_index = bt->wake_index;
+        wake_index = atomic_read(&bt->wake_index);
         for (i = 0; i < BT_WAIT_QUEUES; i++) {
                 struct bt_wait_state *bs = &bt->bs[wake_index];

                 if (waitqueue_active(&bs->wait))
                         wake_up(&bs->wait);

-                bt_index_inc(&wake_index);
+                wake_index = bt_index_inc(wake_index);
         }
 }
···
                                   struct blk_mq_hw_ctx *hctx)
 {
         struct bt_wait_state *bs;
+        int wait_index;

         if (!hctx)
                 return &bt->bs[0];

-        bs = &bt->bs[hctx->wait_index];
-        bt_index_inc(&hctx->wait_index);
+        wait_index = atomic_read(&hctx->wait_index);
+        bs = &bt->bs[wait_index];
+        bt_index_atomic_inc(&hctx->wait_index);
         return bs;
 }
···

         bs = bt_wait_ptr(bt, hctx);
         do {
-                bool was_empty;
-
-                was_empty = list_empty(&wait.task_list);
                 prepare_to_wait(&bs->wait, &wait, TASK_UNINTERRUPTIBLE);

                 tag = __bt_get(hctx, bt, last_tag);
                 if (tag != -1)
                         break;
-
-                if (was_empty)
-                        atomic_set(&bs->wait_cnt, bt->wake_cnt);

                 blk_mq_put_ctx(data->ctx);
···
 {
         int i, wake_index;

-        wake_index = bt->wake_index;
+        wake_index = atomic_read(&bt->wake_index);
         for (i = 0; i < BT_WAIT_QUEUES; i++) {
                 struct bt_wait_state *bs = &bt->bs[wake_index];

                 if (waitqueue_active(&bs->wait)) {
-                        if (wake_index != bt->wake_index)
-                                bt->wake_index = wake_index;
+                        int o = atomic_read(&bt->wake_index);
+                        if (wake_index != o)
+                                atomic_cmpxchg(&bt->wake_index, o, wake_index);

                         return bs;
                 }

-                bt_index_inc(&wake_index);
+                wake_index = bt_index_inc(wake_index);
         }

         return NULL;
···
 {
         const int index = TAG_TO_INDEX(bt, tag);
         struct bt_wait_state *bs;
+        int wait_cnt;

         /*
          * The unlock memory barrier need to order access to req in free
···
         clear_bit_unlock(TAG_TO_BIT(bt, tag), &bt->map[index].word);

         bs = bt_wake_ptr(bt);
-        if (bs && atomic_dec_and_test(&bs->wait_cnt)) {
-                atomic_set(&bs->wait_cnt, bt->wake_cnt);
-                bt_index_inc(&bt->wake_index);
+        if (!bs)
+                return;
+
+        wait_cnt = atomic_dec_return(&bs->wait_cnt);
+        if (wait_cnt == 0) {
+wake:
+                atomic_add(bt->wake_cnt, &bs->wait_cnt);
+                bt_index_atomic_inc(&bt->wake_index);
                 wake_up(&bs->wait);
+        } else if (wait_cnt < 0) {
+                wait_cnt = atomic_inc_return(&bs->wait_cnt);
+                if (!wait_cnt)
+                        goto wake;
         }
 }
···
                 return -ENOMEM;
         }

-        for (i = 0; i < BT_WAIT_QUEUES; i++)
-                init_waitqueue_head(&bt->bs[i].wait);
-
         bt_update_count(bt, depth);
+
+        for (i = 0; i < BT_WAIT_QUEUES; i++) {
+                init_waitqueue_head(&bt->bs[i].wait);
+                atomic_set(&bt->bs[i].wait_cnt, bt->wake_cnt);
+        }
+
         return 0;
 }
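The conversion above turns `wake_index` into an atomic updated with compare-and-swap, where a lost race is deliberately tolerated: the index is only a round-robin hint, so if another CPU advanced it first, dropping our update is fine. A minimal userspace model of that pattern with C11 atomics (the names mirror the patch, but this is a sketch, not the kernel code):

```c
#include <stdatomic.h>

#define BT_WAIT_QUEUES 8        /* power of two, as in the patch */

/* Wrap-around increment over the wait-queue array. */
static int bt_index_inc(int index)
{
        return (index + 1) & (BT_WAIT_QUEUES - 1);
}

/* Advance the shared index by one slot. If another thread raced us
 * and the CAS fails, we simply drop our update: the index is only a
 * fairness hint, so a lost increment is harmless. */
static void bt_index_atomic_inc(atomic_int *index)
{
        int old = atomic_load(index);
        int new = bt_index_inc(old);

        atomic_compare_exchange_strong(index, &old, new);
}
```

The benefit over a plain `unsigned int` is that concurrent readers always see a consistent value and never observe a torn or out-of-range index.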
block/blk-mq-tag.h (+1, -1)
···
         unsigned int map_nr;
         struct blk_align_bitmap *map;

-        unsigned int wake_index;
+        atomic_t wake_index;
         struct bt_wait_state *bs;
 };
···
 #include <linux/jiffies.h>
 #include <linux/async.h>
 #include <linux/dmi.h>
+#include <linux/delay.h>
 #include <linux/slab.h>
 #include <linux/suspend.h>
 #include <asm/unaligned.h>
···
 MODULE_LICENSE("GPL");

 static int battery_bix_broken_package;
+static int battery_notification_delay_ms;
 static unsigned int cache_time = 1000;
 module_param(cache_time, uint, 0644);
 MODULE_PARM_DESC(cache_time, "cache time in milliseconds");
···
                 goto end;
         }
         alarm_string[count] = '\0';
-        battery->alarm = simple_strtol(alarm_string, NULL, 0);
+        if (kstrtoint(alarm_string, 0, &battery->alarm)) {
+                result = -EINVAL;
+                goto end;
+        }
         result = acpi_battery_set_alarm(battery);
 end:
         if (!result)
···
         if (!battery)
                 return;
         old = battery->bat.dev;
+        /*
+         * On Acer Aspire V5-573G notifications are sometimes triggered too
+         * early. For example, when AC is unplugged and notification is
+         * triggered, battery state is still reported as "Full", and changes to
+         * "Discharging" only after short delay, without any notification.
+         */
+        if (battery_notification_delay_ms > 0)
+                msleep(battery_notification_delay_ms);
         if (event == ACPI_BATTERY_NOTIFY_INFO)
                 acpi_battery_refresh(battery);
         acpi_battery_update(battery, false);
···
         return 0;
 }

+static int battery_bix_broken_package_quirk(const struct dmi_system_id *d)
+{
+        battery_bix_broken_package = 1;
+        return 0;
+}
+
+static int battery_notification_delay_quirk(const struct dmi_system_id *d)
+{
+        battery_notification_delay_ms = 1000;
+        return 0;
+}
+
 static struct dmi_system_id bat_dmi_table[] = {
         {
+                .callback = battery_bix_broken_package_quirk,
                 .ident = "NEC LZ750/LS",
                 .matches = {
                         DMI_MATCH(DMI_SYS_VENDOR, "NEC"),
                         DMI_MATCH(DMI_PRODUCT_NAME, "PC-LZ750LS"),
+                },
+        },
+        {
+                .callback = battery_notification_delay_quirk,
+                .ident = "Acer Aspire V5-573G",
+                .matches = {
+                        DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+                        DMI_MATCH(DMI_PRODUCT_NAME, "Aspire V5-573G"),
                 },
         },
         {},
···
         if (acpi_disabled)
                 return;

-        if (dmi_check_system(bat_dmi_table))
-                battery_bix_broken_package = 1;
+        dmi_check_system(bat_dmi_table);

 #ifdef CONFIG_ACPI_PROCFS_POWER
         acpi_battery_dir = acpi_lock_battery_dir();
drivers/acpi/osl.c (+2, -1)
···
 static unsigned long acpi_rsdp;
 static int __init setup_acpi_rsdp(char *arg)
 {
-        acpi_rsdp = simple_strtoul(arg, NULL, 16);
+        if (kstrtoul(arg, 16, &acpi_rsdp))
+                return -EINVAL;
         return 0;
 }
 early_param("acpi_rsdp", setup_acpi_rsdp);
drivers/acpi/tables.c (+2, -1)
···
         if (!str)
                 return -EINVAL;

-        acpi_apic_instance = simple_strtoul(str, NULL, 0);
+        if (kstrtoint(str, 0, &acpi_apic_instance))
+                return -EINVAL;

         pr_notice("Shall use APIC/MADT table %d\n", acpi_apic_instance);
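The ACPI hunks above all make the same change: `simple_strtoul`/`simple_strtol`, which silently accept trailing garbage, are replaced with `kstrto*` helpers that reject malformed input outright. The spirit of that strict parse can be sketched in plain userspace C; `strict_parse_ulong` here is an illustrative stand-in, not the kernel's `kstrtoul` (which is stricter still about leading characters):

```c
#include <errno.h>
#include <stdlib.h>

/* Sketch of a kstrtoul()-style strict parse: the entire string must
 * be a valid number in `base`, otherwise the input is rejected and
 * *res is left untouched. */
static int strict_parse_ulong(const char *s, unsigned int base,
                              unsigned long *res)
{
        char *end;
        unsigned long val;

        if (s == NULL || *s == '\0')
                return -EINVAL;

        errno = 0;
        val = strtoul(s, &end, base);
        if (errno == ERANGE)
                return -ERANGE;
        if (*end != '\0')       /* trailing junk: reject, unlike strtoul */
                return -EINVAL;

        *res = val;
        return 0;
}
```

This is exactly the failure mode the patches close: `simple_strtoul("12x", ...)` happily returns 12, while the strict variant reports an error the caller can propagate (as `setup_acpi_rsdp()` now does with `-EINVAL`).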
drivers/block/null_blk.c (+5, -2)
···

 static int queue_mode = NULL_Q_MQ;
 module_param(queue_mode, int, S_IRUGO);
-MODULE_PARM_DESC(use_mq, "Use blk-mq interface (0=bio,1=rq,2=multiqueue)");
+MODULE_PARM_DESC(queue_mode, "Block interface to use (0=bio,1=rq,2=multiqueue)");

 static int gb = 250;
 module_param(gb, int, S_IRUGO);
···

 static void null_softirq_done_fn(struct request *rq)
 {
-        end_cmd(blk_mq_rq_to_pdu(rq));
+        if (queue_mode == NULL_Q_MQ)
+                end_cmd(blk_mq_rq_to_pdu(rq));
+        else
+                end_cmd(rq->special);
 }

 static inline void null_handle_cmd(struct nullb_cmd *cmd)
drivers/bus/Kconfig (+1, -1)
···

 config ARM_CCI
         bool "ARM CCI driver support"
-        depends on ARM
+        depends on ARM && OF && CPU_V7
         help
           Driver supporting the CCI cache coherent interconnect for ARM
           platforms.
drivers/char/random.c (+9, -8)
···
 static size_t account(struct entropy_store *r, size_t nbytes, int min,
                       int reserved)
 {
-        int have_bytes;
         int entropy_count, orig;
         size_t ibytes;
···
         /* Can we pull enough? */
 retry:
         entropy_count = orig = ACCESS_ONCE(r->entropy_count);
-        have_bytes = entropy_count >> (ENTROPY_SHIFT + 3);
         ibytes = nbytes;
         /* If limited, never pull more than available */
-        if (r->limit)
-                ibytes = min_t(size_t, ibytes, have_bytes - reserved);
+        if (r->limit) {
+                int have_bytes = entropy_count >> (ENTROPY_SHIFT + 3);
+
+                if ((have_bytes -= reserved) < 0)
+                        have_bytes = 0;
+                ibytes = min_t(size_t, ibytes, have_bytes);
+        }
         if (ibytes < min)
                 ibytes = 0;
-        if (have_bytes >= ibytes + reserved)
-                entropy_count -= ibytes << (ENTROPY_SHIFT + 3);
-        else
-                entropy_count = reserved << (ENTROPY_SHIFT + 3);
+        if ((entropy_count -= ibytes << (ENTROPY_SHIFT + 3)) < 0)
+                entropy_count = 0;

         if (cmpxchg(&r->entropy_count, orig, entropy_count) != orig)
                 goto retry;
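The reworked `account()` keeps the lockless update protocol: take a snapshot of the counter, compute the debited value clamped at zero, and retry the whole calculation if `cmpxchg` sees that another CPU changed the counter in the meantime. A minimal userspace model of that snapshot/clamp/CAS-retry loop with C11 atomics (the entropy-fraction shift bookkeeping is omitted; this is a sketch of the pattern, not the driver):

```c
#include <stdatomic.h>

/* Debit up to `nbytes` from a shared credit counter without a lock,
 * never letting it go below zero. Returns the amount actually debited. */
static int debit(atomic_int *credit, int nbytes)
{
        int orig, new;

        do {
                orig = atomic_load(credit);     /* snapshot */
                new = orig - nbytes;
                if (new < 0)                    /* clamp, as the patch does */
                        new = 0;
                /* retry from the snapshot if someone raced us */
        } while (!atomic_compare_exchange_weak(credit, &orig, new));

        return orig - new;
}
```

Clamping inside the retry loop is the point of the fix: computing `have_bytes - reserved` outside it could go negative and, via the unsigned `min_t`, over-debit the pool.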
drivers/cpufreq/Kconfig (+2)
···
 config GENERIC_CPUFREQ_CPU0
         tristate "Generic CPU0 cpufreq driver"
         depends on HAVE_CLK && OF
+        # if CPU_THERMAL is on and THERMAL=m, CPU0 cannot be =y:
+        depends on !CPU_THERMAL || THERMAL
         select PM_OPP
         help
           This adds a generic cpufreq driver for CPU0 frequency management.
···
         for (i = 0; i < I915_NUM_RINGS; i++) {
                 struct intel_engine_cs *ring = &dev_priv->ring[i];

+                error->ring[i].pid = -1;
+
                 if (ring->dev == NULL)
                         continue;
···

                 i915_record_ring_state(dev, ring, &error->ring[i]);

-                error->ring[i].pid = -1;
                 request = i915_gem_find_active_request(ring);
                 if (request) {
                         /* We need to copy these to an anonymous buffer
drivers/gpu/drm/i915/i915_irq.c (+14, -4)
···
         struct intel_engine_cs *signaller;
         u32 seqno, ctl;

-        ring->hangcheck.deadlock = true;
+        ring->hangcheck.deadlock++;

         signaller = semaphore_waits_for(ring, &seqno);
-        if (signaller == NULL || signaller->hangcheck.deadlock)
+        if (signaller == NULL)
+                return -1;
+
+        /* Prevent pathological recursion due to driver bugs */
+        if (signaller->hangcheck.deadlock >= I915_NUM_RINGS)
                 return -1;

         /* cursory check for an unkickable deadlock */
···
         if (ctl & RING_WAIT_SEMAPHORE && semaphore_passed(signaller) < 0)
                 return -1;

-        return i915_seqno_passed(signaller->get_seqno(signaller, false), seqno);
+        if (i915_seqno_passed(signaller->get_seqno(signaller, false), seqno))
+                return 1;
+
+        if (signaller->hangcheck.deadlock)
+                return -1;
+
+        return 0;
 }

 static void semaphore_clear_deadlocks(struct drm_i915_private *dev_priv)
···
         int i;

         for_each_ring(ring, dev_priv, i)
-                ring->hangcheck.deadlock = false;
+                ring->hangcheck.deadlock = 0;
 }

 static enum intel_ring_hangcheck_action
···
         struct drm_i915_private *dev_priv = dev->dev_private;
         unsigned long irqflags;

-        del_timer_sync(&dev_priv->uncore.force_wake_timer);
+        if (del_timer_sync(&dev_priv->uncore.force_wake_timer))
+                gen6_force_wake_timer((unsigned long)dev_priv);

         /* Hold uncore.lock across reset to prevent any register access
          * with forcewake not set correctly
···
 #ifdef INCLUDE_CODE
 // reports an exception to the host
 //
-// In: $r15 error code (see nvc0.fuc)
+// In: $r15 error code (see os.h)
 //
 error:
         push $r14
··· 262
 	struct nve0_ram *ram = (void *)pfb->ram;
 	struct nve0_ramfuc *fuc = &ram->fuc;
 	struct nouveau_ram_data *next = ram->base.next;
-	int vc = !(next->bios.ramcfg_11_02_08);
-	int mv = !(next->bios.ramcfg_11_02_04);
+	int vc = !next->bios.ramcfg_11_02_08;
+	int mv = !next->bios.ramcfg_11_02_04;
 	u32 mask, data;

 	ram_mask(fuc, 0x10f808, 0x40000000, 0x40000000);
··· 370
 		}
 	}

-	if ( (next->bios.ramcfg_11_02_40) ||
-	     (next->bios.ramcfg_11_07_10)) {
+	if (next->bios.ramcfg_11_02_40 ||
+	    next->bios.ramcfg_11_07_10) {
 		ram_mask(fuc, 0x132040, 0x00010000, 0x00010000);
 		ram_nsec(fuc, 20000);
 	}
··· 417
 		ram_mask(fuc, 0x10f694, 0xff00ff00, data);
 	}

-	if (ram->mode == 2 && (next->bios.ramcfg_11_08_10))
+	if (ram->mode == 2 && next->bios.ramcfg_11_08_10)
 		data = 0x00000080;
 	else
 		data = 0x00000000;
··· 425

 	mask = 0x00070000;
 	data = 0x00000000;
-	if (!(next->bios.ramcfg_11_02_80))
+	if (!next->bios.ramcfg_11_02_80)
 		data |= 0x03000000;
-	if (!(next->bios.ramcfg_11_02_40))
+	if (!next->bios.ramcfg_11_02_40)
 		data |= 0x00002000;
-	if (!(next->bios.ramcfg_11_07_10))
+	if (!next->bios.ramcfg_11_07_10)
 		data |= 0x00004000;
-	if (!(next->bios.ramcfg_11_07_08))
+	if (!next->bios.ramcfg_11_07_08)
 		data |= 0x00000003;
 	else
 		data |= 0x74000000;
··· 486

 	data = mask = 0x00000000;
 	if (NOTE00(ramcfg_02_03 != 0)) {
-		data |= (next->bios.ramcfg_11_02_03) << 8;
+		data |= next->bios.ramcfg_11_02_03 << 8;
 		mask |= 0x00000300;
 	}
 	if (NOTE00(ramcfg_01_10)) {
··· 498

 	data = mask = 0x00000000;
 	if (NOTE00(timing_30_07 != 0)) {
-		data |= (next->bios.timing_20_30_07) << 28;
+		data |= next->bios.timing_20_30_07 << 28;
 		mask |= 0x70000000;
 	}
 	if (NOTE00(ramcfg_01_01)) {
··· 510

 	data = mask = 0x00000000;
 	if (NOTE00(timing_30_07 != 0)) {
-		data |= (next->bios.timing_20_30_07) << 28;
+		data |= next->bios.timing_20_30_07 << 28;
 		mask |= 0x70000000;
 	}
 	if (NOTE00(ramcfg_01_02)) {
··· 523

 	mask = 0x33f00000;
 	data = 0x00000000;
-	if (!(next->bios.ramcfg_11_01_04))
+	if (!next->bios.ramcfg_11_01_04)
 		data |= 0x20200000;
-	if (!(next->bios.ramcfg_11_07_80))
+	if (!next->bios.ramcfg_11_07_80)
 		data |= 0x12800000;
 	/*XXX: see note above about there probably being some condition
 	 * for the 10f824 stuff that uses ramcfg 3...
 	 */
-	if ( (next->bios.ramcfg_11_03_f0)) {
+	if (next->bios.ramcfg_11_03_f0) {
 		if (next->bios.rammap_11_08_0c) {
-			if (!(next->bios.ramcfg_11_07_80))
+			if (!next->bios.ramcfg_11_07_80)
 				mask |= 0x00000020;
 			else
 				data |= 0x00000020;
··· 563
 		ram_wait(fuc, 0x100710, 0x80000000, 0x80000000, 200000);
 	}

-	data = (next->bios.timing_20_30_07) << 8;
+	data = next->bios.timing_20_30_07 << 8;
 	if (next->bios.ramcfg_11_01_01)
 		data |= 0x80000000;
 	ram_mask(fuc, 0x100778, 0x00000700, data);
··· 588
 	ram_wr32(fuc, 0x10f310, 0x00000001); /* REFRESH */
 	ram_wr32(fuc, 0x10f210, 0x80000000); /* REFRESH_AUTO = 1 */

-	if ((next->bios.ramcfg_11_08_10) && (ram->mode == 2) /*XXX*/) {
+	if (next->bios.ramcfg_11_08_10 && (ram->mode == 2) /*XXX*/) {
 		u32 temp = ram_mask(fuc, 0x10f294, 0xff000000, 0x24000000);
 		nve0_ram_train(fuc, 0xbc0e0000, 0xa4010000); /*XXX*/
 		ram_nsec(fuc, 1000);
··· 621
 	data = ram_rd32(fuc, 0x10f978);
 	data &= ~0x00046144;
 	data |=  0x0000000b;
-	if (!(next->bios.ramcfg_11_07_08)) {
-		if (!(next->bios.ramcfg_11_07_04))
+	if (!next->bios.ramcfg_11_07_08) {
+		if (!next->bios.ramcfg_11_07_04)
 			data |= 0x0000200c;
 		else
 			data |= 0x00000000;
··· 636
 		ram_wr32(fuc, 0x10f830, data);
 	}

-	if (!(next->bios.ramcfg_11_07_08)) {
+	if (!next->bios.ramcfg_11_07_08) {
 		data = 0x88020000;
-		if ( (next->bios.ramcfg_11_07_04))
+		if ( next->bios.ramcfg_11_07_04)
 			data |= 0x10000000;
-		if (!(next->bios.rammap_11_08_10))
+		if (!next->bios.rammap_11_08_10)
 			data |= 0x00080000;
 	} else {
 		data = 0xa40e0000;
··· 689
 	const u32 runk0 = ram->fN1 << 16;
 	const u32 runk1 = ram->fN1;
 	struct nouveau_ram_data *next = ram->base.next;
-	int vc = !(next->bios.ramcfg_11_02_08);
-	int mv = !(next->bios.ramcfg_11_02_04);
+	int vc = !next->bios.ramcfg_11_02_08;
+	int mv = !next->bios.ramcfg_11_02_04;
 	u32 mask, data;

 	ram_mask(fuc, 0x10f808, 0x40000000, 0x40000000);
··· 705
 	}

 	ram_mask(fuc, 0x10f200, 0x00000800, 0x00000000);
-	if ((next->bios.ramcfg_11_03_f0))
+	if (next->bios.ramcfg_11_03_f0)
 		ram_mask(fuc, 0x10f808, 0x04000000, 0x04000000);

 	ram_wr32(fuc, 0x10f314, 0x00000001); /* PRECHARGE */
··· 761

 	ram_mask(fuc, 0x1373f4, 0x00000000, 0x00010010);
 	data  = ram_rd32(fuc, 0x1373ec) & ~0x00030000;
-	data |= (next->bios.ramcfg_11_03_30) << 12;
+	data |= next->bios.ramcfg_11_03_30 << 16;
 	ram_wr32(fuc, 0x1373ec, data);
 	ram_mask(fuc, 0x1373f4, 0x00000003, 0x00000000);
 	ram_mask(fuc, 0x1373f4, 0x00000010, 0x00000000);
··· 793
 		}
 	}

-	if ( (next->bios.ramcfg_11_02_40) ||
-	     (next->bios.ramcfg_11_07_10)) {
+	if (next->bios.ramcfg_11_02_40 ||
+	    next->bios.ramcfg_11_07_10) {
 		ram_mask(fuc, 0x132040, 0x00010000, 0x00010000);
 		ram_nsec(fuc, 20000);
 	}
··· 810

 	mask = 0x00010000;
 	data = 0x00000000;
-	if (!(next->bios.ramcfg_11_02_80))
+	if (!next->bios.ramcfg_11_02_80)
 		data |= 0x03000000;
-	if (!(next->bios.ramcfg_11_02_40))
+	if (!next->bios.ramcfg_11_02_40)
 		data |= 0x00002000;
-	if (!(next->bios.ramcfg_11_07_10))
+	if (!next->bios.ramcfg_11_07_10)
 		data |= 0x00004000;
-	if (!(next->bios.ramcfg_11_07_08))
+	if (!next->bios.ramcfg_11_07_08)
 		data |= 0x00000003;
 	else
 		data |= 0x14000000;
··· 844

 	mask = 0x33f00000;
 	data = 0x00000000;
-	if (!(next->bios.ramcfg_11_01_04))
+	if (!next->bios.ramcfg_11_01_04)
 		data |= 0x20200000;
-	if (!(next->bios.ramcfg_11_07_80))
+	if (!next->bios.ramcfg_11_07_80)
 		data |= 0x12800000;
 	/*XXX: see note above about there probably being some condition
 	 * for the 10f824 stuff that uses ramcfg 3...
 	 */
-	if ( (next->bios.ramcfg_11_03_f0)) {
+	if (next->bios.ramcfg_11_03_f0) {
 		if (next->bios.rammap_11_08_0c) {
-			if (!(next->bios.ramcfg_11_07_80))
+			if (!next->bios.ramcfg_11_07_80)
 				mask |= 0x00000020;
 			else
 				data |= 0x00000020;
··· 876
 	data = next->bios.timing_20_2c_1fc0;
 	ram_mask(fuc, 0x10f24c, 0x7f000000, data << 24);

-	ram_mask(fuc, 0x10f224, 0x001f0000, next->bios.timing_20_30_f8);
+	ram_mask(fuc, 0x10f224, 0x001f0000, next->bios.timing_20_30_f8 << 16);

 	ram_wr32(fuc, 0x10f090, 0x4000007f);
 	ram_nsec(fuc, 1000);
+39
drivers/gpu/drm/nouveau/core/subdev/i2c/gf117.c
··· 1
+/*
+ * Copyright 2012 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Ben Skeggs
+ */
+
+#include "nv50.h"
+
+struct nouveau_oclass *
+gf117_i2c_oclass = &(struct nouveau_i2c_impl) {
+	.base.handle = NV_SUBDEV(I2C, 0xd7),
+	.base.ofuncs = &(struct nouveau_ofuncs) {
+		.ctor = _nouveau_i2c_ctor,
+		.dtor = _nouveau_i2c_dtor,
+		.init = _nouveau_i2c_init,
+		.fini = _nouveau_i2c_fini,
+	},
+	.sclass = nvd0_i2c_sclass,
+	.pad_x = &nv04_i2c_pad_oclass,
+	.pad_s = &nv04_i2c_pad_oclass,
+}.base;
··· 736
 		fb->bits_per_pixel, fb->pitches[0], crtc->x, crtc->y,
 		new_bo->bo.offset };

+	/* Keep vblanks on during flip, for the target crtc of this flip */
+	drm_vblank_get(dev, nouveau_crtc(crtc)->index);
+
 	/* Emit a page flip */
 	if (nv_device(drm->device)->card_type >= NV_50) {
 		ret = nv50_display_flip_next(crtc, fb, chan, swap_interval);
··· 782
 	return 0;

 fail_unreserve:
+	drm_vblank_put(dev, nouveau_crtc(crtc)->index);
 	ttm_bo_unreserve(&old_bo->bo);
 fail_unpin:
 	mutex_unlock(&chan->cli->mutex);
··· 820

 		drm_send_vblank_event(dev, crtcid, s->event);
 	}
+
+	/* Give up ownership of vblank for page-flipped crtc */
+	drm_vblank_put(dev, s->crtc);

 	list_del(&s->head);
 	if (ps)
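The hunks above pair every `drm_vblank_get()` with a `drm_vblank_put()` on both the completion path and the `fail_unreserve` error path. The underlying discipline is plain reference counting on the vblank interrupt. A minimal userspace model of that pairing (the names `vblank_get`, `vblank_put` and `page_flip` are illustrative, not the DRM API):

```c
#include <assert.h>

/* Toy per-CRTC vblank reference count (illustrative only). */
static int vblank_refs;

static void vblank_get(void) { vblank_refs++; }  /* vblank irq stays enabled */
static void vblank_put(void) { vblank_refs--; }  /* may be disabled at zero */

/* Mirrors the flip path: take a reference before emitting the flip, and
 * drop it either in the completion handler or on the failure path. */
static int page_flip(int fail)
{
	vblank_get();
	if (fail) {
		vblank_put();	/* fail_unreserve: undo the get */
		return -1;
	}
	vblank_put();		/* flip completion */
	return 0;
}
```

Whichever path is taken, the count returns to its starting value, which is exactly the invariant the patch restores.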
+101-17
drivers/gpu/drm/radeon/atombios_crtc.c
··· 1052
 	int encoder_mode = atombios_get_encoder_mode(radeon_crtc->encoder);

 	/* pass the actual clock to atombios_crtc_program_pll for DCE5,6 for HDMI */
-	if (ASIC_IS_DCE5(rdev) && !ASIC_IS_DCE8(rdev) &&
+	if (ASIC_IS_DCE5(rdev) &&
 	    (encoder_mode == ATOM_ENCODER_MODE_HDMI) &&
 	    (radeon_crtc->bpc > 8))
 		clock = radeon_crtc->adjusted_clock;
··· 1136
 	u32 fb_swap = EVERGREEN_GRPH_ENDIAN_SWAP(EVERGREEN_GRPH_ENDIAN_NONE);
 	u32 tmp, viewport_w, viewport_h;
 	int r;
+	bool bypass_lut = false;

 	/* no fb bound */
 	if (!atomic && !crtc->primary->fb) {
··· 1175
 	radeon_bo_get_tiling_flags(rbo, &tiling_flags, NULL);
 	radeon_bo_unreserve(rbo);

-	switch (target_fb->bits_per_pixel) {
-	case 8:
+	switch (target_fb->pixel_format) {
+	case DRM_FORMAT_C8:
 		fb_format = (EVERGREEN_GRPH_DEPTH(EVERGREEN_GRPH_DEPTH_8BPP) |
 			     EVERGREEN_GRPH_FORMAT(EVERGREEN_GRPH_FORMAT_INDEXED));
 		break;
-	case 15:
+	case DRM_FORMAT_XRGB4444:
+	case DRM_FORMAT_ARGB4444:
+		fb_format = (EVERGREEN_GRPH_DEPTH(EVERGREEN_GRPH_DEPTH_16BPP) |
+			     EVERGREEN_GRPH_FORMAT(EVERGREEN_GRPH_FORMAT_ARGB4444));
+#ifdef __BIG_ENDIAN
+		fb_swap = EVERGREEN_GRPH_ENDIAN_SWAP(EVERGREEN_GRPH_ENDIAN_8IN16);
+#endif
+		break;
+	case DRM_FORMAT_XRGB1555:
+	case DRM_FORMAT_ARGB1555:
 		fb_format = (EVERGREEN_GRPH_DEPTH(EVERGREEN_GRPH_DEPTH_16BPP) |
 			     EVERGREEN_GRPH_FORMAT(EVERGREEN_GRPH_FORMAT_ARGB1555));
+#ifdef __BIG_ENDIAN
+		fb_swap = EVERGREEN_GRPH_ENDIAN_SWAP(EVERGREEN_GRPH_ENDIAN_8IN16);
+#endif
 		break;
-	case 16:
+	case DRM_FORMAT_BGRX5551:
+	case DRM_FORMAT_BGRA5551:
+		fb_format = (EVERGREEN_GRPH_DEPTH(EVERGREEN_GRPH_DEPTH_16BPP) |
+			     EVERGREEN_GRPH_FORMAT(EVERGREEN_GRPH_FORMAT_BGRA5551));
+#ifdef __BIG_ENDIAN
+		fb_swap = EVERGREEN_GRPH_ENDIAN_SWAP(EVERGREEN_GRPH_ENDIAN_8IN16);
+#endif
+		break;
+	case DRM_FORMAT_RGB565:
 		fb_format = (EVERGREEN_GRPH_DEPTH(EVERGREEN_GRPH_DEPTH_16BPP) |
 			     EVERGREEN_GRPH_FORMAT(EVERGREEN_GRPH_FORMAT_ARGB565));
 #ifdef __BIG_ENDIAN
 		fb_swap = EVERGREEN_GRPH_ENDIAN_SWAP(EVERGREEN_GRPH_ENDIAN_8IN16);
 #endif
 		break;
-	case 24:
-	case 32:
+	case DRM_FORMAT_XRGB8888:
+	case DRM_FORMAT_ARGB8888:
 		fb_format = (EVERGREEN_GRPH_DEPTH(EVERGREEN_GRPH_DEPTH_32BPP) |
 			     EVERGREEN_GRPH_FORMAT(EVERGREEN_GRPH_FORMAT_ARGB8888));
 #ifdef __BIG_ENDIAN
 		fb_swap = EVERGREEN_GRPH_ENDIAN_SWAP(EVERGREEN_GRPH_ENDIAN_8IN32);
 #endif
 		break;
+	case DRM_FORMAT_XRGB2101010:
+	case DRM_FORMAT_ARGB2101010:
+		fb_format = (EVERGREEN_GRPH_DEPTH(EVERGREEN_GRPH_DEPTH_32BPP) |
+			     EVERGREEN_GRPH_FORMAT(EVERGREEN_GRPH_FORMAT_ARGB2101010));
+#ifdef __BIG_ENDIAN
+		fb_swap = EVERGREEN_GRPH_ENDIAN_SWAP(EVERGREEN_GRPH_ENDIAN_8IN32);
+#endif
+		/* Greater 8 bpc fb needs to bypass hw-lut to retain precision */
+		bypass_lut = true;
+		break;
+	case DRM_FORMAT_BGRX1010102:
+	case DRM_FORMAT_BGRA1010102:
+		fb_format = (EVERGREEN_GRPH_DEPTH(EVERGREEN_GRPH_DEPTH_32BPP) |
+			     EVERGREEN_GRPH_FORMAT(EVERGREEN_GRPH_FORMAT_BGRA1010102));
+#ifdef __BIG_ENDIAN
+		fb_swap = EVERGREEN_GRPH_ENDIAN_SWAP(EVERGREEN_GRPH_ENDIAN_8IN32);
+#endif
+		/* Greater 8 bpc fb needs to bypass hw-lut to retain precision */
+		bypass_lut = true;
+		break;
 	default:
-		DRM_ERROR("Unsupported screen depth %d\n",
-			  target_fb->bits_per_pixel);
+		DRM_ERROR("Unsupported screen format %s\n",
+			  drm_get_format_name(target_fb->pixel_format));
 		return -EINVAL;
 	}
··· 1370
 	WREG32(EVERGREEN_GRPH_CONTROL + radeon_crtc->crtc_offset, fb_format);
 	WREG32(EVERGREEN_GRPH_SWAP_CONTROL + radeon_crtc->crtc_offset, fb_swap);

+	/*
+	 * The LUT only has 256 slots for indexing by a 8 bpc fb. Bypass the LUT
+	 * for > 8 bpc scanout to avoid truncation of fb indices to 8 msb's, to
+	 * retain the full precision throughout the pipeline.
+	 */
+	WREG32_P(EVERGREEN_GRPH_LUT_10BIT_BYPASS_CONTROL + radeon_crtc->crtc_offset,
+		 (bypass_lut ? EVERGREEN_LUT_10BIT_BYPASS_EN : 0),
+		 ~EVERGREEN_LUT_10BIT_BYPASS_EN);
+
+	if (bypass_lut)
+		DRM_DEBUG_KMS("Bypassing hardware LUT due to 10 bit fb scanout.\n");
+
 	WREG32(EVERGREEN_GRPH_SURFACE_OFFSET_X + radeon_crtc->crtc_offset, 0);
 	WREG32(EVERGREEN_GRPH_SURFACE_OFFSET_Y + radeon_crtc->crtc_offset, 0);
 	WREG32(EVERGREEN_GRPH_X_START + radeon_crtc->crtc_offset, 0);
··· 1449
 	u32 fb_swap = R600_D1GRPH_SWAP_ENDIAN_NONE;
 	u32 tmp, viewport_w, viewport_h;
 	int r;
+	bool bypass_lut = false;

 	/* no fb bound */
 	if (!atomic && !crtc->primary->fb) {
··· 1487
 	radeon_bo_get_tiling_flags(rbo, &tiling_flags, NULL);
 	radeon_bo_unreserve(rbo);

-	switch (target_fb->bits_per_pixel) {
-	case 8:
+	switch (target_fb->pixel_format) {
+	case DRM_FORMAT_C8:
 		fb_format =
 		    AVIVO_D1GRPH_CONTROL_DEPTH_8BPP |
 		    AVIVO_D1GRPH_CONTROL_8BPP_INDEXED;
 		break;
-	case 15:
+	case DRM_FORMAT_XRGB4444:
+	case DRM_FORMAT_ARGB4444:
+		fb_format =
+		    AVIVO_D1GRPH_CONTROL_DEPTH_16BPP |
+		    AVIVO_D1GRPH_CONTROL_16BPP_ARGB4444;
+#ifdef __BIG_ENDIAN
+		fb_swap = R600_D1GRPH_SWAP_ENDIAN_16BIT;
+#endif
+		break;
+	case DRM_FORMAT_XRGB1555:
 		fb_format =
 		    AVIVO_D1GRPH_CONTROL_DEPTH_16BPP |
 		    AVIVO_D1GRPH_CONTROL_16BPP_ARGB1555;
+#ifdef __BIG_ENDIAN
+		fb_swap = R600_D1GRPH_SWAP_ENDIAN_16BIT;
+#endif
 		break;
-	case 16:
+	case DRM_FORMAT_RGB565:
 		fb_format =
 		    AVIVO_D1GRPH_CONTROL_DEPTH_16BPP |
 		    AVIVO_D1GRPH_CONTROL_16BPP_RGB565;
··· 1518
 		fb_swap = R600_D1GRPH_SWAP_ENDIAN_16BIT;
 #endif
 		break;
-	case 24:
-	case 32:
+	case DRM_FORMAT_XRGB8888:
+	case DRM_FORMAT_ARGB8888:
 		fb_format =
 		    AVIVO_D1GRPH_CONTROL_DEPTH_32BPP |
 		    AVIVO_D1GRPH_CONTROL_32BPP_ARGB8888;
··· 1527
 		fb_swap = R600_D1GRPH_SWAP_ENDIAN_32BIT;
 #endif
 		break;
+	case DRM_FORMAT_XRGB2101010:
+	case DRM_FORMAT_ARGB2101010:
+		fb_format =
+		    AVIVO_D1GRPH_CONTROL_DEPTH_32BPP |
+		    AVIVO_D1GRPH_CONTROL_32BPP_ARGB2101010;
+#ifdef __BIG_ENDIAN
+		fb_swap = R600_D1GRPH_SWAP_ENDIAN_32BIT;
+#endif
+		/* Greater 8 bpc fb needs to bypass hw-lut to retain precision */
+		bypass_lut = true;
+		break;
 	default:
-		DRM_ERROR("Unsupported screen depth %d\n",
-			  target_fb->bits_per_pixel);
+		DRM_ERROR("Unsupported screen format %s\n",
+			  drm_get_format_name(target_fb->pixel_format));
 		return -EINVAL;
 	}
··· 1578
 	WREG32(AVIVO_D1GRPH_CONTROL + radeon_crtc->crtc_offset, fb_format);
 	if (rdev->family >= CHIP_R600)
 		WREG32(R600_D1GRPH_SWAP_CONTROL + radeon_crtc->crtc_offset, fb_swap);
+
+	/* LUT only has 256 slots for 8 bpc fb. Bypass for > 8 bpc scanout for precision */
+	WREG32_P(AVIVO_D1GRPH_LUT_SEL + radeon_crtc->crtc_offset,
+		 (bypass_lut ? AVIVO_LUT_10BIT_BYPASS_EN : 0), ~AVIVO_LUT_10BIT_BYPASS_EN);
+
+	if (bypass_lut)
+		DRM_DEBUG_KMS("Bypassing hardware LUT due to 10 bit fb scanout.\n");

 	WREG32(AVIVO_D1GRPH_SURFACE_OFFSET_X + radeon_crtc->crtc_offset, 0);
 	WREG32(AVIVO_D1GRPH_SURFACE_OFFSET_Y + radeon_crtc->crtc_offset, 0);
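The switch above moves from `bits_per_pixel` to `pixel_format` because a bare depth is ambiguous: ARGB1555, RGB565 and ARGB4444 are all 16 bpp but need different hardware format codes, and only the format says whether a channel exceeds 8 bits and must bypass the LUT. A toy sketch of that decision (the enum values and `pick_cfg` are invented for illustration; the real codes live in `drm_fourcc.h`):

```c
#include <assert.h>

/* Illustrative fourcc-style codes, not the real DRM_FORMAT_* values. */
enum fmt { FMT_ARGB1555, FMT_RGB565, FMT_ARGB2101010 };

struct hw_cfg { int depth_bpp; int bypass_lut; };

/* Keying on the format, not bits-per-pixel, distinguishes the 16-bit
 * variants and lets >8 bpc formats request LUT bypass. */
static struct hw_cfg pick_cfg(enum fmt f)
{
	struct hw_cfg c = { 0, 0 };
	switch (f) {
	case FMT_ARGB1555: c.depth_bpp = 16; break;
	case FMT_RGB565:   c.depth_bpp = 16; break;	/* same bpp, other layout */
	case FMT_ARGB2101010:
		c.depth_bpp = 32;
		c.bypass_lut = 1;	/* 10 bpc would be truncated by a 256-entry LUT */
		break;
	}
	return c;
}
```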
··· 402
  * block and vice versa. This applies to GRPH, CUR, etc.
  */
 #define AVIVO_D1GRPH_LUT_SEL                                    0x6108
+#       define AVIVO_LUT_10BIT_BYPASS_EN                        (1 << 8)
 #define AVIVO_D1GRPH_PRIMARY_SURFACE_ADDRESS                    0x6110
 #define R700_D1GRPH_PRIMARY_SURFACE_ADDRESS_HIGH                0x6914
 #define R700_D2GRPH_PRIMARY_SURFACE_ADDRESS_HIGH                0x6114
+22-13
drivers/gpu/drm/radeon/radeon_connectors.c
··· 1288
 	    (radeon_connector->connector_object_id == CONNECTOR_OBJECT_ID_DUAL_LINK_DVI_D) ||
 	    (radeon_connector->connector_object_id == CONNECTOR_OBJECT_ID_HDMI_TYPE_B))
 		return MODE_OK;
-	else if (radeon_connector->connector_object_id == CONNECTOR_OBJECT_ID_HDMI_TYPE_A) {
-		if (ASIC_IS_DCE6(rdev)) {
-			/* HDMI 1.3+ supports max clock of 340 Mhz */
-			if (mode->clock > 340000)
-				return MODE_CLOCK_HIGH;
-			else
-				return MODE_OK;
-		} else
+	else if (ASIC_IS_DCE6(rdev) && drm_detect_hdmi_monitor(radeon_connector->edid)) {
+		/* HDMI 1.3+ supports max clock of 340 Mhz */
+		if (mode->clock > 340000)
 			return MODE_CLOCK_HIGH;
-	} else
+		else
+			return MODE_OK;
+	} else {
 		return MODE_CLOCK_HIGH;
+	}
 }

 /* check against the max pixel clock */
··· 1547
 static int radeon_dp_mode_valid(struct drm_connector *connector,
 				struct drm_display_mode *mode)
 {
+	struct drm_device *dev = connector->dev;
+	struct radeon_device *rdev = dev->dev_private;
 	struct radeon_connector *radeon_connector = to_radeon_connector(connector);
 	struct radeon_connector_atom_dig *radeon_dig_connector = radeon_connector->con_priv;
··· 1579
 			return MODE_PANEL;
 		}
 	}
-		return MODE_OK;
 	} else {
 		if ((radeon_dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_DISPLAYPORT) ||
-		    (radeon_dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_eDP))
+		    (radeon_dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_eDP)) {
 			return radeon_dp_mode_valid_helper(connector, mode);
-		else
-			return MODE_OK;
+		} else {
+			if (ASIC_IS_DCE6(rdev) && drm_detect_hdmi_monitor(radeon_connector->edid)) {
+				/* HDMI 1.3+ supports max clock of 340 Mhz */
+				if (mode->clock > 340000)
+					return MODE_CLOCK_HIGH;
+			} else {
+				if (mode->clock > 165000)
+					return MODE_CLOCK_HIGH;
+			}
+		}
 	}
+
+	return MODE_OK;
 }

 static const struct drm_connector_helper_funcs radeon_dp_connector_helper_funcs = {
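The clock checks added above encode two limits: HDMI 1.3+ capable hardware (DCE6 with an HDMI sink) validates modes up to 340 MHz, while the older path caps at the 165 MHz single-link limit. A minimal sketch of that validation, with clocks in kHz as in `struct drm_display_mode` (the helper name and the plain `int` flags are illustrative):

```c
#include <assert.h>

#define MODE_OK         0
#define MODE_CLOCK_HIGH 1

/* Sketch of the pixel-clock check: pick the limit from sink/ASIC
 * capability, then reject anything above it. */
static int hdmi_mode_valid(int dce6_hdmi_sink, int clock_khz)
{
	int limit_khz = dce6_hdmi_sink ? 340000 : 165000;

	return clock_khz > limit_khz ? MODE_CLOCK_HIGH : MODE_OK;
}
```

For example a 4K@30 mode (~297 MHz) passes only on the HDMI 1.3+ path, which is exactly the behavior the patch is after.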
+20-2
drivers/gpu/drm/radeon/radeon_display.c
··· 66
 			     (radeon_crtc->lut_b[i] << 0));
 	}

-	WREG32(AVIVO_D1GRPH_LUT_SEL + radeon_crtc->crtc_offset, radeon_crtc->crtc_id);
+	/* Only change bit 0 of LUT_SEL, other bits are set elsewhere */
+	WREG32_P(AVIVO_D1GRPH_LUT_SEL + radeon_crtc->crtc_offset, radeon_crtc->crtc_id, ~1);
 }

 static void dce4_crtc_load_lut(struct drm_crtc *crtc)
··· 358

 	spin_unlock_irqrestore(&rdev->ddev->event_lock, flags);

+	drm_vblank_put(rdev->ddev, radeon_crtc->crtc_id);
 	radeon_fence_unref(&work->fence);
-	radeon_irq_kms_pflip_irq_get(rdev, work->crtc_id);
+	radeon_irq_kms_pflip_irq_put(rdev, work->crtc_id);
 	queue_work(radeon_crtc->flip_queue, &work->unpin_work);
 }
··· 461
 		base &= ~7;
 	}

+	r = drm_vblank_get(crtc->dev, radeon_crtc->crtc_id);
+	if (r) {
+		DRM_ERROR("failed to get vblank before flip\n");
+		goto pflip_cleanup;
+	}
+
 	/* We borrow the event spin lock for protecting flip_work */
 	spin_lock_irqsave(&crtc->dev->event_lock, flags);
··· 480
 	up_read(&rdev->exclusive_lock);

 	return;
+
+pflip_cleanup:
+	if (unlikely(radeon_bo_reserve(work->new_rbo, false) != 0)) {
+		DRM_ERROR("failed to reserve new rbo in error path\n");
+		goto cleanup;
+	}
+	if (unlikely(radeon_bo_unpin(work->new_rbo) != 0)) {
+		DRM_ERROR("failed to unpin new rbo in error path\n");
+	}
+	radeon_bo_unreserve(work->new_rbo);

 cleanup:
 	drm_gem_object_unreference_unlocked(&work->old_rbo->gem_base);
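The LUT_SEL fix above switches from a full `WREG32` to the masked `WREG32_P(reg, val, mask)` so that only bit 0 (the CRTC id) is rewritten and bit 8 (the 10-bit LUT bypass, programmed elsewhere) survives. The semantics are a plain read-modify-write: keep the bits selected by `mask`, take the rest from `val`. A userspace model (`masked_write` is an illustrative name, not a kernel function):

```c
#include <assert.h>
#include <stdint.h>

/* Model of WREG32_P: bits set in `mask` are preserved from the old
 * register value; the remaining bits come from `val`. */
static uint32_t masked_write(uint32_t old, uint32_t val, uint32_t mask)
{
	uint32_t tmp = old;

	tmp &= mask;	/* keep the protected bits */
	tmp |= val;	/* merge in the new bits */
	return tmp;
}
```

With `mask = ~1`, writing the CRTC id into a register that already has bit 8 set leaves bit 8 intact, which is the whole point of the one-line change.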
+23
drivers/i2c/busses/Kconfig
··· 676
 	  This driver can also be built as a module. If so, the module
 	  will be called i2c-riic.

+config I2C_RK3X
+	tristate "Rockchip RK3xxx I2C adapter"
+	depends on OF
+	help
+	  Say Y here to include support for the I2C adapter in Rockchip RK3xxx
+	  SoCs.
+
+	  This driver can also be built as a module. If so, the module will
+	  be called i2c-rk3x.
+
 config HAVE_S3C2410_I2C
 	bool
 	help
··· 773

 	  This driver can also be built as a module. If so, the module
 	  will be called i2c-stu300.
+
+config I2C_SUN6I_P2WI
+	tristate "Allwinner sun6i internal P2WI controller"
+	depends on RESET_CONTROLLER
+	depends on MACH_SUN6I || COMPILE_TEST
+	help
+	  If you say yes to this option, support will be included for the
+	  P2WI (Push/Pull 2 Wire Interface) controller embedded in some sunxi
+	  SOCs.
+	  The P2WI looks like an SMBus controller (which supports only byte
+	  accesses), except that it only supports one slave device.
+	  This interface is used to connect to specific PMIC devices (like the
+	  AXP221).

 config I2C_TEGRA
 	tristate "NVIDIA Tegra internal I2C controller"
··· 17
  * along with this program; if not, write to the Free Software
  * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
  *
- * Maintained by: Dmitry Torokhov <dtor@vmware.com>
+ * Maintained by:	Xavier Deguillard <xdeguillard@vmware.com>
+ *			Philip Moltmann <moltmann@vmware.com>
  */

 /*
+4-3
drivers/of/base.c
··· 227
 	np->kobj.kset = of_kset;
 	if (!np->parent) {
 		/* Nodes without parents are new top level trees */
-		rc = kobject_add(&np->kobj, NULL, safe_name(&of_kset->kobj, "base"));
+		rc = kobject_add(&np->kobj, NULL, "%s",
+				 safe_name(&of_kset->kobj, "base"));
 	} else {
 		name = safe_name(&np->parent->kobj, kbasename(np->full_name));
 		if (!name || !name[0])
··· 1961

 	raw_spin_lock_irqsave(&devtree_lock, flags);
 	np->sibling = np->parent->child;
-	np->allnext = of_allnodes;
+	np->allnext = np->parent->allnext;
+	np->parent->allnext = np;
 	np->parent->child = np;
-	of_allnodes = np;
 	of_node_clear_flag(np, OF_DETACHED);
 	raw_spin_unlock_irqrestore(&devtree_lock, flags);
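The second hunk above changes where a re-attached node lands in the all-nodes list: instead of being pushed onto the global head, it is spliced in directly after its parent, so the flat list keeps tree order. The splice is the classic two-step singly-linked-list insert-after. A minimal model (the `node` struct and function names are invented for illustration):

```c
#include <assert.h>
#include <stddef.h>

/* Toy singly linked list standing in for the allnext chain. */
struct node {
	struct node *allnext;
	int id;
};

/* Splice np in right after parent: np inherits parent's old successor,
 * then parent points at np. Order of the two stores matters. */
static void attach_after_parent(struct node *parent, struct node *np)
{
	np->allnext = parent->allnext;
	parent->allnext = np;
}

/* Attach a, then b, under root: the most recently attached child sits
 * closest to its parent, and nothing ends up before root. */
static int demo_order(void)
{
	struct node root = { NULL, 0 }, a = { NULL, 1 }, b = { NULL, 2 };

	attach_after_parent(&root, &a);	/* root -> a */
	attach_after_parent(&root, &b);	/* root -> b -> a */
	return root.allnext == &b && b.allnext == &a && a.allnext == NULL;
}
```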
-4
drivers/of/platform.c
··· 166
 	int ret;
 	struct device *dev = &pdev->dev;

-#if defined(CONFIG_MICROBLAZE)
-	pdev->archdata.dma_mask = 0xffffffffUL;
-#endif
-
 	/*
 	 * Set default dma-mask to 32 bit. Drivers are expected to setup
 	 * the correct supported dma_mask.
··· 1
-/*
- * Watchdog implementation based on z/VM Watchdog Timer API
- *
- * Copyright IBM Corp. 2004, 2009
- *
- * The user space watchdog daemon can use this driver as
- * /dev/vmwatchdog to have z/VM execute the specified CP
- * command when the timeout expires. The default command is
- * "IPL", which which cause an immediate reboot.
- */
-#define KMSG_COMPONENT "vmwatchdog"
-#define pr_fmt(fmt) KMSG_COMPONENT ": " fmt
-
-#include <linux/init.h>
-#include <linux/fs.h>
-#include <linux/kernel.h>
-#include <linux/miscdevice.h>
-#include <linux/module.h>
-#include <linux/moduleparam.h>
-#include <linux/slab.h>
-#include <linux/suspend.h>
-#include <linux/watchdog.h>
-
-#include <asm/ebcdic.h>
-#include <asm/io.h>
-#include <asm/uaccess.h>
-
-#define MAX_CMDLEN 240
-#define MIN_INTERVAL 15
-static char vmwdt_cmd[MAX_CMDLEN] = "IPL";
-static bool vmwdt_conceal;
-
-static bool vmwdt_nowayout = WATCHDOG_NOWAYOUT;
-
-MODULE_LICENSE("GPL");
-MODULE_AUTHOR("Arnd Bergmann <arndb@de.ibm.com>");
-MODULE_DESCRIPTION("z/VM Watchdog Timer");
-module_param_string(cmd, vmwdt_cmd, MAX_CMDLEN, 0644);
-MODULE_PARM_DESC(cmd, "CP command that is run when the watchdog triggers");
-module_param_named(conceal, vmwdt_conceal, bool, 0644);
-MODULE_PARM_DESC(conceal, "Enable the CONCEAL CP option while the watchdog "
-		" is active");
-module_param_named(nowayout, vmwdt_nowayout, bool, 0);
-MODULE_PARM_DESC(nowayout, "Watchdog cannot be stopped once started"
-		" (default=CONFIG_WATCHDOG_NOWAYOUT)");
-MODULE_ALIAS_MISCDEV(WATCHDOG_MINOR);
-
-static unsigned int vmwdt_interval = 60;
-static unsigned long vmwdt_is_open;
-static int vmwdt_expect_close;
-
-static DEFINE_MUTEX(vmwdt_mutex);
-
-#define VMWDT_OPEN	0	/* devnode is open or suspend in progress */
-#define VMWDT_RUNNING	1	/* The watchdog is armed */
-
-enum vmwdt_func {
-	/* function codes */
-	wdt_init   = 0,
-	wdt_change = 1,
-	wdt_cancel = 2,
-	/* flags */
-	wdt_conceal = 0x80000000,
-};
-
-static int __diag288(enum vmwdt_func func, unsigned int timeout,
-			    char *cmd, size_t len)
-{
-	register unsigned long __func asm("2") = func;
-	register unsigned long __timeout asm("3") = timeout;
-	register unsigned long __cmdp asm("4") = virt_to_phys(cmd);
-	register unsigned long __cmdl asm("5") = len;
-	int err;
-
-	err = -EINVAL;
-	asm volatile(
-		"	diag	%1,%3,0x288\n"
-		"0:	la	%0,0\n"
-		"1:\n"
-		EX_TABLE(0b,1b)
-		: "+d" (err) : "d"(__func), "d"(__timeout),
-		  "d"(__cmdp), "d"(__cmdl) : "1", "cc");
-	return err;
-}
-
-static int vmwdt_keepalive(void)
-{
-	/* we allocate new memory every time to avoid having
-	 * to track the state. static allocation is not an
-	 * option since that might not be contiguous in real
-	 * storage in case of a modular build */
-	static char *ebc_cmd;
-	size_t len;
-	int ret;
-	unsigned int func;
-
-	ebc_cmd = kmalloc(MAX_CMDLEN, GFP_KERNEL);
-	if (!ebc_cmd)
-		return -ENOMEM;
-
-	len = strlcpy(ebc_cmd, vmwdt_cmd, MAX_CMDLEN);
-	ASCEBC(ebc_cmd, MAX_CMDLEN);
-	EBC_TOUPPER(ebc_cmd, MAX_CMDLEN);
-
-	func = vmwdt_conceal ? (wdt_init | wdt_conceal) : wdt_init;
-	set_bit(VMWDT_RUNNING, &vmwdt_is_open);
-	ret = __diag288(func, vmwdt_interval, ebc_cmd, len);
-	WARN_ON(ret != 0);
-	kfree(ebc_cmd);
-	return ret;
-}
-
-static int vmwdt_disable(void)
-{
-	char cmd[] = {'\0'};
-	int ret = __diag288(wdt_cancel, 0, cmd, 0);
-	WARN_ON(ret != 0);
-	clear_bit(VMWDT_RUNNING, &vmwdt_is_open);
-	return ret;
-}
-
-static int __init vmwdt_probe(void)
-{
-	/* there is no real way to see if the watchdog is supported,
-	 * so we try initializing it with a NOP command ("BEGIN")
-	 * that won't cause any harm even if the following disable
-	 * fails for some reason */
-	char ebc_begin[] = {
-		194, 197, 199, 201, 213
-	};
-	if (__diag288(wdt_init, 15, ebc_begin, sizeof(ebc_begin)) != 0)
-		return -EINVAL;
-	return vmwdt_disable();
-}
-
-static int vmwdt_open(struct inode *i, struct file *f)
-{
-	int ret;
-	if (test_and_set_bit(VMWDT_OPEN, &vmwdt_is_open))
-		return -EBUSY;
-	ret = vmwdt_keepalive();
-	if (ret)
-		clear_bit(VMWDT_OPEN, &vmwdt_is_open);
-	return ret ? ret : nonseekable_open(i, f);
-}
-
-static int vmwdt_close(struct inode *i, struct file *f)
-{
-	if (vmwdt_expect_close == 42)
-		vmwdt_disable();
-	vmwdt_expect_close = 0;
-	clear_bit(VMWDT_OPEN, &vmwdt_is_open);
-	return 0;
-}
-
-static struct watchdog_info vmwdt_info = {
-	.options = WDIOF_SETTIMEOUT | WDIOF_KEEPALIVEPING | WDIOF_MAGICCLOSE,
-	.firmware_version = 0,
-	.identity = "z/VM Watchdog Timer",
-};
-
-static int __vmwdt_ioctl(unsigned int cmd, unsigned long arg)
-{
-	switch (cmd) {
-	case WDIOC_GETSUPPORT:
-		if (copy_to_user((void __user *)arg, &vmwdt_info,
-					sizeof(vmwdt_info)))
-			return -EFAULT;
-		return 0;
-	case WDIOC_GETSTATUS:
-	case WDIOC_GETBOOTSTATUS:
-		return put_user(0, (int __user *)arg);
-	case WDIOC_GETTEMP:
-		return -EINVAL;
-	case WDIOC_SETOPTIONS:
-	{
-		int options, ret;
-		if (get_user(options, (int __user *)arg))
-			return -EFAULT;
-		ret = -EINVAL;
-		if (options & WDIOS_DISABLECARD) {
-			ret = vmwdt_disable();
-			if (ret)
-				return ret;
-		}
-		if (options & WDIOS_ENABLECARD) {
-			ret = vmwdt_keepalive();
-		}
-		return ret;
-	}
-	case WDIOC_GETTIMEOUT:
-		return put_user(vmwdt_interval, (int __user *)arg);
-	case WDIOC_SETTIMEOUT:
-	{
-		int interval;
-		if (get_user(interval, (int __user *)arg))
-			return -EFAULT;
-		if (interval < MIN_INTERVAL)
-			return -EINVAL;
-		vmwdt_interval = interval;
-	}
-		return vmwdt_keepalive();
-	case WDIOC_KEEPALIVE:
-		return vmwdt_keepalive();
-	}
-	return -EINVAL;
-}
-
-static long vmwdt_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
-{
-	int rc;
-
-	mutex_lock(&vmwdt_mutex);
-	rc = __vmwdt_ioctl(cmd, arg);
-	mutex_unlock(&vmwdt_mutex);
-	return (long) rc;
-}
-
-static ssize_t vmwdt_write(struct file *f, const char __user *buf,
-				size_t count, loff_t *ppos)
-{
-	if(count) {
-		if (!vmwdt_nowayout) {
-			size_t i;
-
-			/* note: just in case someone wrote the magic character
-			 * five months ago... */
-			vmwdt_expect_close = 0;
-
-			for (i = 0; i != count; i++) {
-				char c;
-				if (get_user(c, buf+i))
-					return -EFAULT;
-				if (c == 'V')
-					vmwdt_expect_close = 42;
-			}
-		}
-		/* someone wrote to us, we should restart timer */
-		vmwdt_keepalive();
-	}
-	return count;
-}
-
-static int vmwdt_resume(void)
-{
-	clear_bit(VMWDT_OPEN, &vmwdt_is_open);
-	return NOTIFY_DONE;
-}
-
-/*
- * It makes no sense to go into suspend while the watchdog is running.
- * Depending on the memory size, the watchdog might trigger, while we
- * are still saving the memory.
- * We reuse the open flag to ensure that suspend and watchdog open are
- * exclusive operations
- */
-static int vmwdt_suspend(void)
-{
-	if (test_and_set_bit(VMWDT_OPEN, &vmwdt_is_open)) {
-		pr_err("The system cannot be suspended while the watchdog"
-			" is in use\n");
-		return notifier_from_errno(-EBUSY);
-	}
-	if (test_bit(VMWDT_RUNNING, &vmwdt_is_open)) {
-		clear_bit(VMWDT_OPEN, &vmwdt_is_open);
-		pr_err("The system cannot be suspended while the watchdog"
-			" is running\n");
-		return notifier_from_errno(-EBUSY);
-	}
-	return NOTIFY_DONE;
-}
-
-/*
- * This function is called for suspend and resume.
- */
-static int vmwdt_power_event(struct notifier_block *this, unsigned long event,
-			     void *ptr)
-{
-	switch (event) {
-	case PM_POST_HIBERNATION:
-	case PM_POST_SUSPEND:
-		return vmwdt_resume();
-	case PM_HIBERNATION_PREPARE:
-	case PM_SUSPEND_PREPARE:
-		return vmwdt_suspend();
-	default:
-		return NOTIFY_DONE;
-	}
-}
-
-static struct notifier_block vmwdt_power_notifier = {
-	.notifier_call = vmwdt_power_event,
-};
-
-static const struct file_operations vmwdt_fops = {
-	.open    = &vmwdt_open,
-	.release = &vmwdt_close,
-	.unlocked_ioctl = &vmwdt_ioctl,
-	.write   = &vmwdt_write,
-	.owner   = THIS_MODULE,
-	.llseek  = noop_llseek,
-};
-
-static struct miscdevice vmwdt_dev = {
-	.minor      = WATCHDOG_MINOR,
-	.name       = "watchdog",
-	.fops       = &vmwdt_fops,
-};
-
-static int __init vmwdt_init(void)
-{
-	int ret;
-
-	ret = vmwdt_probe();
-	if (ret)
-		return ret;
-	ret = register_pm_notifier(&vmwdt_power_notifier);
-	if (ret)
-		return ret;
-	/*
-	 * misc_register() has to be the last action in module_init(), because
-	 * file operations will be available right after this.
-	 */
-	ret = misc_register(&vmwdt_dev);
-	if (ret) {
-		unregister_pm_notifier(&vmwdt_power_notifier);
-		return ret;
-	}
-	return 0;
-}
-module_init(vmwdt_init);
-
-static void __exit vmwdt_exit(void)
-{
-	unregister_pm_notifier(&vmwdt_power_notifier);
-	misc_deregister(&vmwdt_dev);
-}
-module_exit(vmwdt_exit);
+6-7
drivers/s390/cio/airq.c
@@ -196 +196 @@
  */
 unsigned long airq_iv_alloc(struct airq_iv *iv, unsigned long num)
 {
-	unsigned long bit, i;
+	unsigned long bit, i, flags;
 
 	if (!iv->avail || num == 0)
 		return -1UL;
-	spin_lock(&iv->lock);
+	spin_lock_irqsave(&iv->lock, flags);
 	bit = find_first_bit_inv(iv->avail, iv->bits);
 	while (bit + num <= iv->bits) {
 		for (i = 1; i < num; i++)
@@ -218 +218 @@
 	}
 	if (bit + num > iv->bits)
 		bit = -1UL;
-	spin_unlock(&iv->lock);
+	spin_unlock_irqrestore(&iv->lock, flags);
 	return bit;
-
 }
 EXPORT_SYMBOL(airq_iv_alloc);
@@ -231 +232 @@
  */
 void airq_iv_free(struct airq_iv *iv, unsigned long bit, unsigned long num)
 {
-	unsigned long i;
+	unsigned long i, flags;
 
 	if (!iv->avail || num == 0)
 		return;
-	spin_lock(&iv->lock);
+	spin_lock_irqsave(&iv->lock, flags);
 	for (i = 0; i < num; i++) {
 		/* Clear (possibly left over) interrupt bit */
 		clear_bit_inv(bit + i, iv->vector);
@@ -247 +248 @@
 		while (iv->end > 0 && !test_bit_inv(iv->end - 1, iv->avail))
 			iv->end--;
 	}
-	spin_unlock(&iv->lock);
+	spin_unlock_irqrestore(&iv->lock, flags);
 }
 EXPORT_SYMBOL(airq_iv_free);
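The allocator being fixed above scans a bitmap for a contiguous run of free bits under a lock. A minimal userspace sketch of that idea follows; it is an illustrative analogue only (a pthread mutex stands in for the kernel spinlock, and `iv_alloc`/`iv_free` and `IV_BITS` are invented names, not the s390 API):

```c
#include <assert.h>
#include <pthread.h>
#include <string.h>

#define IV_BITS 64

/* One byte per slot for simplicity: 0 = free, 1 = allocated. */
static unsigned char avail[IV_BITS];
static pthread_mutex_t iv_lock = PTHREAD_MUTEX_INITIALIZER;

/* Find a run of 'num' free slots, mark them used; -1 if none. */
long iv_alloc(unsigned long num)
{
    long bit = -1;
    unsigned long i, run = 0;

    pthread_mutex_lock(&iv_lock);
    for (i = 0; i < IV_BITS && num; i++) {
        run = avail[i] ? 0 : run + 1;
        if (run == num) {
            bit = (long)(i - num + 1);
            memset(&avail[bit], 1, num);
            break;
        }
    }
    pthread_mutex_unlock(&iv_lock);
    return bit;
}

/* Release a previously allocated run. */
void iv_free(unsigned long bit, unsigned long num)
{
    pthread_mutex_lock(&iv_lock);
    memset(&avail[bit], 0, num);
    pthread_mutex_unlock(&iv_lock);
}
```

The patch itself changes only the locking discipline (plain `spin_lock` to `spin_lock_irqsave`), which has no userspace equivalent; the sketch shows the allocation logic the lock protects.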
+11-17
drivers/s390/cio/ccwgroup.c
@@ -184 +184 @@
 			    const char *buf, size_t count)
 {
 	struct ccwgroup_device *gdev = to_ccwgroupdev(dev);
-	int rc;
+	int rc = 0;
 
 	/* Prevent concurrent online/offline processing and ungrouping. */
 	if (atomic_cmpxchg(&gdev->onoff, 0, 1) != 0)
@@ -196 +196 @@
 
 	if (device_remove_file_self(dev, attr))
 		ccwgroup_ungroup(gdev);
+	else
+		rc = -ENODEV;
 out:
 	if (rc) {
-		if (rc != -EAGAIN)
-			/* Release onoff "lock" when ungrouping failed. */
-			atomic_set(&gdev->onoff, 0);
+		/* Release onoff "lock" when ungrouping failed. */
+		atomic_set(&gdev->onoff, 0);
 		return rc;
 	}
 	return count;
@@ -228 +227 @@
 		container_of(work, struct ccwgroup_device, ungroup_work);
 
 	ccwgroup_ungroup(gdev);
+	put_device(&gdev->dev);
 }
 
 static void ccwgroup_release(struct device *dev)
@@ -414 +412 @@
 {
 	struct ccwgroup_device *gdev = to_ccwgroupdev(data);
 
-	if (action == BUS_NOTIFY_UNBIND_DRIVER)
+	if (action == BUS_NOTIFY_UNBIND_DRIVER) {
+		get_device(&gdev->dev);
 		schedule_work(&gdev->ungroup_work);
+	}
 
 	return NOTIFY_OK;
 }
@@ -586 +582 @@
 				      __ccwgroup_match_all))) {
 		struct ccwgroup_device *gdev = to_ccwgroupdev(dev);
 
-		mutex_lock(&gdev->reg_mutex);
-		__ccwgroup_remove_symlinks(gdev);
-		device_unregister(dev);
-		__ccwgroup_remove_cdev_refs(gdev);
-		mutex_unlock(&gdev->reg_mutex);
+		ccwgroup_ungroup(gdev);
 		put_device(dev);
 	}
 	driver_unregister(&cdriver->driver);
@@ -633 +633 @@
 	get_device(&gdev->dev);
 	spin_unlock_irq(cdev->ccwlock);
 	/* Unregister group device. */
-	mutex_lock(&gdev->reg_mutex);
-	if (device_is_registered(&gdev->dev)) {
-		__ccwgroup_remove_symlinks(gdev);
-		device_unregister(&gdev->dev);
-		__ccwgroup_remove_cdev_refs(gdev);
-	}
-	mutex_unlock(&gdev->reg_mutex);
+	ccwgroup_ungroup(gdev);
 	/* Release ccwgroup device reference for local processing. */
 	put_device(&gdev->dev);
 }
+2
drivers/s390/cio/cio.c
@@ -602 +602 @@
 
 #ifdef CONFIG_CCW_CONSOLE
 static struct subchannel *console_sch;
+static struct lock_class_key console_sch_key;
 
 /*
  * Use cio_tsch to update the subchannel status and call the interrupt handler
@@ -687 +686 @@
 	if (IS_ERR(sch))
 		return sch;
 
+	lockdep_set_class(sch->lock, &console_sch_key);
 	isc_register(CONSOLE_ISC);
 	sch->config.isc = CONSOLE_ISC;
 	sch->config.intparm = (u32)(addr_t)sch;
+36-35
drivers/s390/cio/device.c
@@ -678 +678 @@
 	NULL,
 };
 
-/* this is a simple abstraction for device_register that sets the
- * correct bus type and adds the bus specific files */
-static int ccw_device_register(struct ccw_device *cdev)
+static int ccw_device_add(struct ccw_device *cdev)
 {
 	struct device *dev = &cdev->dev;
-	int ret;
 
 	dev->bus = &ccw_bus_type;
-	ret = dev_set_name(&cdev->dev, "0.%x.%04x", cdev->private->dev_id.ssid,
-			   cdev->private->dev_id.devno);
-	if (ret)
-		return ret;
 	return device_add(dev);
 }
 
@@ -757 +764 @@
 static int io_subchannel_initialize_dev(struct subchannel *sch,
 					struct ccw_device *cdev)
 {
-	cdev->private->cdev = cdev;
-	cdev->private->int_class = IRQIO_CIO;
-	atomic_set(&cdev->private->onoff, 0);
+	struct ccw_device_private *priv = cdev->private;
+	int ret;
+
+	priv->cdev = cdev;
+	priv->int_class = IRQIO_CIO;
+	priv->state = DEV_STATE_NOT_OPER;
+	priv->dev_id.devno = sch->schib.pmcw.dev;
+	priv->dev_id.ssid = sch->schid.ssid;
+	priv->schid = sch->schid;
+
+	INIT_WORK(&priv->todo_work, ccw_device_todo);
+	INIT_LIST_HEAD(&priv->cmb_list);
+	init_waitqueue_head(&priv->wait_q);
+	init_timer(&priv->timer);
+
+	atomic_set(&priv->onoff, 0);
+	cdev->ccwlock = sch->lock;
 	cdev->dev.parent = &sch->dev;
 	cdev->dev.release = ccw_device_release;
-	INIT_WORK(&cdev->private->todo_work, ccw_device_todo);
 	cdev->dev.groups = ccwdev_attr_groups;
 	/* Do first half of device_register. */
 	device_initialize(&cdev->dev);
+	ret = dev_set_name(&cdev->dev, "0.%x.%04x", cdev->private->dev_id.ssid,
+			   cdev->private->dev_id.devno);
+	if (ret)
+		goto out_put;
 	if (!get_device(&sch->dev)) {
-		/* Release reference from device_initialize(). */
-		put_device(&cdev->dev);
-		return -ENODEV;
+		ret = -ENODEV;
+		goto out_put;
 	}
-	cdev->private->flags.initialized = 1;
+	priv->flags.initialized = 1;
+	spin_lock_irq(sch->lock);
+	sch_set_cdev(sch, cdev);
+	spin_unlock_irq(sch->lock);
 	return 0;
+
+out_put:
+	/* Release reference from device_initialize(). */
+	put_device(&cdev->dev);
+	return ret;
 }
 
 static struct ccw_device * io_subchannel_create_ccwdev(struct subchannel *sch)
@@ -875 +858 @@
 	dev_set_uevent_suppress(&sch->dev, 0);
 	kobject_uevent(&sch->dev.kobj, KOBJ_ADD);
 	/* make it known to the system */
-	ret = ccw_device_register(cdev);
+	ret = ccw_device_add(cdev);
 	if (ret) {
 		CIO_MSG_EVENT(0, "Could not register ccw dev 0.%x.%04x: %d\n",
 			      cdev->private->dev_id.ssid,
@@ -940 +923 @@
 
 static void io_subchannel_recog(struct ccw_device *cdev, struct subchannel *sch)
 {
-	struct ccw_device_private *priv;
-
-	cdev->ccwlock = sch->lock;
-
-	/* Init private data. */
-	priv = cdev->private;
-	priv->dev_id.devno = sch->schib.pmcw.dev;
-	priv->dev_id.ssid = sch->schid.ssid;
-	priv->schid = sch->schid;
-	priv->state = DEV_STATE_NOT_OPER;
-	INIT_LIST_HEAD(&priv->cmb_list);
-	init_waitqueue_head(&priv->wait_q);
-	init_timer(&priv->timer);
-
 	/* Increase counter of devices currently in recognition. */
 	atomic_inc(&ccw_device_init_count);
 
 	/* Start async. device sensing. */
 	spin_lock_irq(sch->lock);
-	sch_set_cdev(sch, cdev);
 	ccw_device_recognition(cdev);
 	spin_unlock_irq(sch->lock);
 }
@@ -1085 +1083 @@
 	dev_set_uevent_suppress(&sch->dev, 0);
 	kobject_uevent(&sch->dev.kobj, KOBJ_ADD);
 	cdev = sch_get_cdev(sch);
-	rc = ccw_device_register(cdev);
+	rc = ccw_device_add(cdev);
 	if (rc) {
 		/* Release online reference. */
 		put_device(&cdev->dev);
@@ -1599 +1597 @@
 	if (rc)
 		return rc;
 	sch->driver = &io_subchannel_driver;
-	sch_set_cdev(sch, cdev);
 	io_subchannel_recog(cdev, sch);
 	/* Now wait for the async. recognition to come to an end. */
 	spin_lock_irq(cdev->ccwlock);
@@ -1640 +1639 @@
 		put_device(&sch->dev);
 		return ERR_PTR(-ENOMEM);
 	}
+	set_io_private(sch, io_priv);
 	cdev = io_subchannel_create_ccwdev(sch);
 	if (IS_ERR(cdev)) {
 		put_device(&sch->dev);
@@ -1648 +1646 @@
 		return cdev;
 	}
 	cdev->drv = drv;
-	set_io_private(sch, io_priv);
 	ccw_device_set_int_class(cdev);
 	return cdev;
 }
@@ -368 +368 @@
 	 * otherwise we use the default. Also we use the default FIFO
 	 * thresholds for now.
 	 */
-	*burst_code = chip_info ? chip_info->dma_burst_size : 16;
+	*burst_code = chip_info ? chip_info->dma_burst_size : 1;
 	*threshold = SSCR1_RxTresh(RX_THRESH_DFLT)
 		   | SSCR1_TxTresh(TX_THRESH_DFLT);
@@ -653 +653 @@
 
 config COMEDI_ADDI_APCI_1564
 	tristate "ADDI-DATA APCI_1564 support"
+	select COMEDI_ADDI_WATCHDOG
 	---help---
 	  Enable support for ADDI-DATA APCI_1564 cards
 
+5-4
drivers/staging/iio/Kconfig
@@ -36 +36 @@
 	  Add some dummy events to the simple dummy driver.
 
 config IIO_SIMPLE_DUMMY_BUFFER
-       boolean "Buffered capture support"
-       select IIO_KFIFO_BUF
-       help
-	 Add buffered data capture to the simple dummy driver.
+	boolean "Buffered capture support"
+	select IIO_BUFFER
+	select IIO_KFIFO_BUF
+	help
+	  Add buffered data capture to the simple dummy driver.
 
 endif # IIO_SIMPLE_DUMMY
 
+8-4
drivers/staging/iio/adc/mxs-lradc.c
@@ -846 +846 @@
 		      LRADC_CTRL1);
 	mxs_lradc_reg_clear(lradc, 0xff, LRADC_CTRL0);
 
+	/* Enable / disable the divider per requirement */
+	if (test_bit(chan, &lradc->is_divided))
+		mxs_lradc_reg_set(lradc, 1 << LRADC_CTRL2_DIVIDE_BY_TWO_OFFSET,
+				  LRADC_CTRL2);
+	else
+		mxs_lradc_reg_clear(lradc,
+				    1 << LRADC_CTRL2_DIVIDE_BY_TWO_OFFSET, LRADC_CTRL2);
+
 	/* Clean the slot's previous content, then set new one. */
 	mxs_lradc_reg_clear(lradc, LRADC_CTRL4_LRADCSELECT_MASK(0),
 			    LRADC_CTRL4);
@@ -969 +961 @@
 	if (val == scale_avail[MXS_LRADC_DIV_DISABLED].integer &&
 	    val2 == scale_avail[MXS_LRADC_DIV_DISABLED].nano) {
 		/* divider by two disabled */
-		writel(1 << LRADC_CTRL2_DIVIDE_BY_TWO_OFFSET,
-		       lradc->base + LRADC_CTRL2 + STMP_OFFSET_REG_CLR);
 		clear_bit(chan->channel, &lradc->is_divided);
 		ret = 0;
 	} else if (val == scale_avail[MXS_LRADC_DIV_ENABLED].integer &&
 		   val2 == scale_avail[MXS_LRADC_DIV_ENABLED].nano) {
 		/* divider by two enabled */
-		writel(1 << LRADC_CTRL2_DIVIDE_BY_TWO_OFFSET,
-		       lradc->base + LRADC_CTRL2 + STMP_OFFSET_REG_SET);
 		set_bit(chan->channel, &lradc->is_divided);
 		ret = 0;
 	}
+6-2
drivers/staging/iio/light/tsl2x7x_core.c
@@ -667 +667 @@
 	chip->tsl2x7x_config[TSL2X7X_PRX_COUNT] =
 			chip->tsl2x7x_settings.prox_pulse_count;
 	chip->tsl2x7x_config[TSL2X7X_PRX_MINTHRESHLO] =
-			chip->tsl2x7x_settings.prox_thres_low;
+			(chip->tsl2x7x_settings.prox_thres_low) & 0xFF;
+	chip->tsl2x7x_config[TSL2X7X_PRX_MINTHRESHHI] =
+			(chip->tsl2x7x_settings.prox_thres_low >> 8) & 0xFF;
 	chip->tsl2x7x_config[TSL2X7X_PRX_MAXTHRESHLO] =
-			chip->tsl2x7x_settings.prox_thres_high;
+			(chip->tsl2x7x_settings.prox_thres_high) & 0xFF;
+	chip->tsl2x7x_config[TSL2X7X_PRX_MAXTHRESHHI] =
+			(chip->tsl2x7x_settings.prox_thres_high >> 8) & 0xFF;
 
 	/* and make sure we're not already on */
 	if (chip->tsl2x7x_chip_status == TSL2X7X_CHIP_WORKING) {
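The fix above stores a 16-bit proximity threshold as two 8-bit register values instead of truncating it to the low byte. A standalone sketch of that byte split (the struct and field names here are illustrative, not the driver's register map):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative shadow of four 8-bit threshold registers. */
struct prox_regs {
    uint8_t minthreshlo, minthreshhi;
    uint8_t maxthreshlo, maxthreshhi;
};

/* Split each 16-bit threshold into its low and high register bytes. */
void prox_set_thresholds(struct prox_regs *r, uint16_t lo, uint16_t hi)
{
    r->minthreshlo = lo & 0xFF;
    r->minthreshhi = (lo >> 8) & 0xFF;
    r->maxthreshlo = hi & 0xFF;
    r->maxthreshhi = (hi >> 8) & 0xFF;
}
```

Before the patch only the `*LO` halves were written, so any threshold above 255 was silently truncated.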
+7
drivers/staging/imx-drm/parallel-display.c
@@ -173 +173 @@
 	if (ret)
 		return ret;
 
+	/* set the connector's dpms to OFF so that
+	 * drm_helper_connector_dpms() won't return
+	 * immediately since the current state is ON
+	 * at this point.
+	 */
+	imxpd->connector.dpms = DRM_MODE_DPMS_OFF;
+
 	drm_encoder_helper_add(&imxpd->encoder, &imx_pd_encoder_helper_funcs);
 	drm_encoder_init(drm, &imxpd->encoder, &imx_pd_encoder_funcs,
 			 DRM_MODE_ENCODER_NONE);
+10-9
drivers/tty/n_tty.c
@@ -1214 +1214 @@
 {
 	struct n_tty_data *ldata = tty->disc_data;
 
-	if (I_IGNPAR(tty))
-		return;
-	if (I_PARMRK(tty)) {
-		put_tty_queue('\377', ldata);
-		put_tty_queue('\0', ldata);
-		put_tty_queue(c, ldata);
-	} else if (I_INPCK(tty))
-		put_tty_queue('\0', ldata);
-	else
+	if (I_INPCK(tty)) {
+		if (I_IGNPAR(tty))
+			return;
+		if (I_PARMRK(tty)) {
+			put_tty_queue('\377', ldata);
+			put_tty_queue('\0', ldata);
+			put_tty_queue(c, ldata);
+		} else
+			put_tty_queue('\0', ldata);
+	} else
 		put_tty_queue(c, ldata);
 	if (waitqueue_active(&tty->read_wait))
 		wake_up_interruptible(&tty->read_wait);
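The restructuring above makes `IGNPAR`/`PARMRK` handling conditional on `INPCK`: when input parity checking is off, a character with a parity error passes through untouched. The decision table can be modeled as a small pure function (a sketch with invented flag constants mirroring termios semantics, not the n_tty code):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-ins for the termios input flags. */
#define FL_INPCK  0x1   /* enable input parity checking */
#define FL_IGNPAR 0x2   /* ignore characters with parity errors */
#define FL_PARMRK 0x4   /* mark parity errors in-band */

/* Returns the number of bytes queued for a parity-errored char 'c';
 * 'out' must have room for 3 bytes. */
size_t receive_parity_error(unsigned flags, unsigned char c,
                            unsigned char *out)
{
    if (flags & FL_INPCK) {
        if (flags & FL_IGNPAR)
            return 0;          /* drop the bad character */
        if (flags & FL_PARMRK) {
            out[0] = 0377;     /* \377 \0 prefix marks the error */
            out[1] = 0;
            out[2] = c;
            return 3;
        }
        out[0] = 0;            /* plain INPCK: substitute NUL */
        return 1;
    }
    out[0] = c;                /* INPCK off: pass through unchanged */
    return 1;
}
```

The last branch is the behavioral change the patch introduces: previously `IGNPAR` and `PARMRK` were honored even with `INPCK` cleared.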
@@ -3226 +3226 @@
 	for (i = 0; i < MAX_NR_CON_DRIVER; i++) {
 		con_back = &registered_con_driver[i];
 
-		if (con_back->con &&
-		    !(con_back->flag & CON_DRIVER_FLAG_MODULE)) {
+		if (con_back->con && con_back->con != csw) {
 			defcsw = con_back->con;
 			retval = 0;
 			break;
@@ -3331 +3332 @@
 {
 	const struct consw *csw = NULL;
 	int i, more = 1, first = -1, last = -1, deflt = 0;
+	int ret;
 
 	if (!con->con || !(con->flag & CON_DRIVER_FLAG_MODULE) ||
 	    con_is_graphics(con->con, con->first, con->last))
@@ -3357 +3357 @@
 
 	if (first != -1) {
 		console_lock();
-		do_unbind_con_driver(csw, first, last, deflt);
+		ret = do_unbind_con_driver(csw, first, last, deflt);
 		console_unlock();
+		if (ret != 0)
+			return ret;
 	}
 
 	first = -1;
@@ -3647 +3645 @@
  */
 int do_unregister_con_driver(const struct consw *csw)
 {
-	int i, retval = -ENODEV;
+	int i;
 
 	/* cannot unregister a bound driver */
 	if (con_is_bound(csw))
-		goto err;
+		return -EBUSY;
+
+	if (csw == conswitchp)
+		return -EINVAL;
 
 	for (i = 0; i < MAX_NR_CON_DRIVER; i++) {
 		struct con_driver *con_driver = &registered_con_driver[i];
 
 		if (con_driver->con == csw &&
-		    con_driver->flag & CON_DRIVER_FLAG_MODULE) {
+		    con_driver->flag & CON_DRIVER_FLAG_INIT) {
 			vtconsole_deinit_device(con_driver);
 			device_destroy(vtconsole_class,
 				       MKDEV(0, con_driver->node));
@@ -3671 +3666 @@
 			con_driver->flag = 0;
 			con_driver->first = 0;
 			con_driver->last = 0;
-			retval = 0;
-			break;
+			return 0;
 		}
 	}
-err:
-	return retval;
+
+	return -ENODEV;
 }
 EXPORT_SYMBOL_GPL(do_unregister_con_driver);
 
+1-1
drivers/uio/uio.c
@@ -655 +655 @@
 
 	if (mem->addr & ~PAGE_MASK)
 		return -ENODEV;
-	if (vma->vm_end - vma->vm_start > PAGE_ALIGN(mem->size))
+	if (vma->vm_end - vma->vm_start > mem->size)
 		return -EINVAL;
 
 	vma->vm_ops = &uio_physical_vm_ops;
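The one-liner above tightens the mmap length check: comparing against `PAGE_ALIGN(mem->size)` lets a whole-page mapping cover bytes past the end of a sub-page region. A small sketch of the two policies side by side (constants and names here are illustrative, not the uio internals):

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SIZE_SKETCH 4096UL
/* Round x up to the next page boundary. */
#define ALIGN_UP(x) (((x) + PAGE_SIZE_SKETCH - 1) & ~(PAGE_SIZE_SKETCH - 1))

/* Return 0 if a mapping of vma_len bytes over a region of mem_size
 * bytes is accepted, -1 otherwise. 'align_limit' selects the old
 * (page-aligned) vs. new (exact) limit. */
int check_mmap_len(unsigned long vma_len, unsigned long mem_size,
                   int align_limit)
{
    unsigned long limit = align_limit ? ALIGN_UP(mem_size) : mem_size;

    return vma_len > limit ? -1 : 0;
}
```

With a 100-byte region, the old rule accepts a full 4096-byte mapping while the new rule rejects it.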
+20-13
drivers/usb/core/hub.c
@@ -1526 +1526 @@
 	dev_dbg(hub_dev, "%umA bus power budget for each child\n",
 			hub->mA_per_port);
 
-	/* Update the HCD's internal representation of this hub before khubd
-	 * starts getting port status changes for devices under the hub.
-	 */
-	if (hcd->driver->update_hub_device) {
-		ret = hcd->driver->update_hub_device(hcd, hdev,
-				&hub->tt, GFP_KERNEL);
-		if (ret < 0) {
-			message = "can't update HCD hub info";
-			goto fail;
-		}
-	}
-
 	ret = hub_hub_status(hub, &hubstatus, &hubchange);
 	if (ret < 0) {
 		message = "can't get hub status";
@@ -1577 +1589 @@
 		}
 	}
 	hdev->maxchild = i;
+	for (i = 0; i < hdev->maxchild; i++) {
+		struct usb_port *port_dev = hub->ports[i];
+
+		pm_runtime_put(&port_dev->dev);
+	}
+
 	mutex_unlock(&usb_port_peer_mutex);
 	if (ret < 0)
 		goto fail;
+
+	/* Update the HCD's internal representation of this hub before khubd
+	 * starts getting port status changes for devices under the hub.
+	 */
+	if (hcd->driver->update_hub_device) {
+		ret = hcd->driver->update_hub_device(hcd, hdev,
+				&hub->tt, GFP_KERNEL);
+		if (ret < 0) {
+			message = "can't update HCD hub info";
+			goto fail;
+		}
+	}
 
 	usb_hub_adjust_deviceremovable(hdev, hub->descriptor);
 
@@ -3464 +3458 @@
 	struct usb_device *udev = port_dev->child;
 
 	if (udev && udev->can_submit) {
-		dev_warn(&port_dev->dev, "not suspended yet\n");
+		dev_warn(&port_dev->dev, "device %s not suspended yet\n",
+				dev_name(&udev->dev));
 		if (PMSG_IS_AUTO(msg))
 			return -EBUSY;
 	}
+2
drivers/usb/core/hub.h
@@ -84 +84 @@
  * @dev: generic device interface
  * @port_owner: port's owner
  * @peer: related usb2 and usb3 ports (share the same connector)
+ * @req: default pm qos request for hubs without port power control
  * @connect_type: port's connect type
  * @location: opaque representation of platform connector location
  * @status_lock: synchronize port_event() vs usb_port_{suspend|resume}
@@ -96 +95 @@
 	struct device dev;
 	struct usb_dev_state *port_owner;
 	struct usb_port *peer;
+	struct dev_pm_qos_request *req;
 	enum usb_port_connect_type connect_type;
 	usb_port_location_t location;
 	struct mutex status_lock;
+64-23
drivers/usb/core/port.c
@@ -21 +21 @@
 
 #include "hub.h"
 
+static int usb_port_block_power_off;
+
 static const struct attribute_group *port_dev_group[];
 
 static ssize_t connect_type_show(struct device *dev,
@@ -68 +66 @@
 {
 	struct usb_port *port_dev = to_usb_port(dev);
 
+	kfree(port_dev->req);
 	kfree(port_dev);
 }
 
@@ -145 +142 @@
 			== PM_QOS_FLAGS_ALL)
 		return -EAGAIN;
 
+	if (usb_port_block_power_off)
+		return -EBUSY;
+
 	usb_autopm_get_interface(intf);
 	retval = usb_hub_set_port_power(hdev, hub, port1, false);
 	usb_clear_port_feature(hdev, port1, USB_PORT_FEAT_C_CONNECTION);
@@ -196 +190 @@
 	if (left->peer || right->peer) {
 		struct usb_port *lpeer = left->peer;
 		struct usb_port *rpeer = right->peer;
+		char *method;
 
-		WARN(1, "failed to peer %s and %s (%s -> %p) (%s -> %p)\n",
-			dev_name(&left->dev), dev_name(&right->dev),
-			dev_name(&left->dev), lpeer,
-			dev_name(&right->dev), rpeer);
+		if (left->location && left->location == right->location)
+			method = "location";
+		else
+			method = "default";
+
+		pr_warn("usb: failed to peer %s and %s by %s (%s:%s) (%s:%s)\n",
+			dev_name(&left->dev), dev_name(&right->dev), method,
+			dev_name(&left->dev),
+			lpeer ? dev_name(&lpeer->dev) : "none",
+			dev_name(&right->dev),
+			rpeer ? dev_name(&rpeer->dev) : "none");
 		return -EBUSY;
 	}
 
@@ -265 +251 @@
 		dev_warn(&left->dev, "failed to peer to %s (%d)\n",
 			dev_name(&right->dev), rc);
 		pr_warn_once("usb: port power management may be unreliable\n");
+		usb_port_block_power_off = 1;
 	}
 }
 
@@ -401 +386 @@
 	int retval;
 
 	port_dev = kzalloc(sizeof(*port_dev), GFP_KERNEL);
-	if (!port_dev) {
-		retval = -ENOMEM;
-		goto exit;
+	if (!port_dev)
+		return -ENOMEM;
+
+	port_dev->req = kzalloc(sizeof(*(port_dev->req)), GFP_KERNEL);
+	if (!port_dev->req) {
+		kfree(port_dev);
+		return -ENOMEM;
 	}
 
 	hub->ports[port1 - 1] = port_dev;
@@ -423 +404 @@
 		 port1);
 	mutex_init(&port_dev->status_lock);
 	retval = device_register(&port_dev->dev);
-	if (retval)
-		goto error_register;
+	if (retval) {
+		put_device(&port_dev->dev);
+		return retval;
+	}
+
+	/* Set default policy of port-poweroff disabled. */
+	retval = dev_pm_qos_add_request(&port_dev->dev, port_dev->req,
+			DEV_PM_QOS_FLAGS, PM_QOS_FLAG_NO_POWER_OFF);
+	if (retval < 0) {
+		device_unregister(&port_dev->dev);
+		return retval;
+	}
 
 	find_and_link_peer(hub, port1);
 
+	/*
+	 * Enable runtime pm and hold a reference that hub_configure()
+	 * will drop once the PM_QOS_NO_POWER_OFF flag state has been set
+	 * and the hub has been fully registered (hdev->maxchild set).
+	 */
 	pm_runtime_set_active(&port_dev->dev);
+	pm_runtime_get_noresume(&port_dev->dev);
+	pm_runtime_enable(&port_dev->dev);
+	device_enable_async_suspend(&port_dev->dev);
 
 	/*
-	 * Do not enable port runtime pm if the hub does not support
-	 * power switching. Also, userspace must have final say of
-	 * whether a port is permitted to power-off. Do not enable
-	 * runtime pm if we fail to expose pm_qos_no_power_off.
+	 * Keep hidden the ability to enable port-poweroff if the hub
+	 * does not support power switching.
 	 */
-	if (hub_is_port_power_switchable(hub)
-			&& dev_pm_qos_expose_flags(&port_dev->dev,
-			PM_QOS_FLAG_NO_POWER_OFF) == 0)
-		pm_runtime_enable(&port_dev->dev);
+	if (!hub_is_port_power_switchable(hub))
+		return 0;
 
-	device_enable_async_suspend(&port_dev->dev);
+	/* Attempt to let userspace take over the policy. */
+	retval = dev_pm_qos_expose_flags(&port_dev->dev,
+			PM_QOS_FLAG_NO_POWER_OFF);
+	if (retval < 0) {
+		dev_warn(&port_dev->dev, "failed to expose pm_qos_no_poweroff\n");
+		return 0;
+	}
+
+	/* Userspace owns the policy, drop the kernel 'no_poweroff' request. */
+	retval = dev_pm_qos_remove_request(port_dev->req);
+	if (retval >= 0) {
+		kfree(port_dev->req);
+		port_dev->req = NULL;
+	}
 	return 0;
-
-error_register:
-	put_device(&port_dev->dev);
-exit:
-	return retval;
 }
 
 void usb_hub_remove_port_device(struct usb_hub *hub, int port1)
+16-3
drivers/usb/host/pci-quirks.c
@@ -656 +656 @@
 			DMI_MATCH(DMI_BIOS_VERSION, "Lucid-"),
 		},
 	},
+	{
+		/* HASEE E200 */
+		.matches = {
+			DMI_MATCH(DMI_BOARD_VENDOR, "HASEE"),
+			DMI_MATCH(DMI_BOARD_NAME, "E210"),
+			DMI_MATCH(DMI_BIOS_VERSION, "6.00"),
+		},
+	},
 	{ }
 };
 
@@ -673 +665 @@
 {
 	int try_handoff = 1, tried_handoff = 0;
 
-	/* The Pegatron Lucid tablet sporadically waits for 98 seconds trying
-	 * the handoff on its unused controller. Skip it. */
-	if (pdev->vendor == 0x8086 && pdev->device == 0x283a) {
+	/*
+	 * The Pegatron Lucid tablet sporadically waits for 98 seconds trying
+	 * the handoff on its unused controller. Skip it.
+	 *
+	 * The HASEE E200 hangs when the semaphore is set (bugzilla #77021).
+	 */
+	if (pdev->vendor == 0x8086 && (pdev->device == 0x283a ||
+			pdev->device == 0x27cc)) {
 		if (dmi_check_system(ehci_dmi_nohandoff_table))
 			try_handoff = 0;
 	}
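A DMI quirk table like the one extended above is scanned until an empty terminator entry, and an entry matches when every populated field is a prefix of the corresponding system string. The toy model below illustrates that lookup shape only; it is not the kernel's `dmi_check_system()` implementation, and `is_quirked` and its table are invented for the example:

```c
#include <assert.h>
#include <string.h>

/* One quirk entry; NULL fields are wildcards. */
struct quirk {
    const char *vendor, *board, *bios;
};

/* A NULL field matches anything; otherwise prefix-match. */
static int field_matches(const char *want, const char *have)
{
    return !want || strncmp(have, want, strlen(want)) == 0;
}

/* Scan until the all-NULL terminator, like a DMI table walk. */
static int quirk_table_match(const struct quirk *tbl,
                             const char *vendor, const char *board,
                             const char *bios)
{
    for (; tbl->vendor || tbl->board || tbl->bios; tbl++)
        if (field_matches(tbl->vendor, vendor) &&
            field_matches(tbl->board, board) &&
            field_matches(tbl->bios, bios))
            return 1;
    return 0;
}

int is_quirked(const char *vendor, const char *board, const char *bios)
{
    static const struct quirk table[] = {
        { "HASEE", "E210", "6.00" },  /* mirrors the entry in the patch */
        { NULL, NULL, "Lucid-" },     /* Pegatron Lucid: BIOS match only */
        { NULL, NULL, NULL },         /* terminator */
    };

    return quirk_table_match(table, vendor, board, bios);
}
```

Note the prefix semantics: `"6.00"` also matches a BIOS string like `"6.00.07"`, which is why DMI quirks often pin only the version prefix.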
@@ -1280 +1280 @@
 
 # S390 Architecture
 
-config ZVM_WATCHDOG
-	tristate "z/VM Watchdog Timer"
+config DIAG288_WATCHDOG
+	tristate "System z diag288 Watchdog"
 	depends on S390
+	select WATCHDOG_CORE
 	help
 	  IBM s/390 and zSeries machines running under z/VM 5.1 or later
 	  provide a virtual watchdog timer to their guest that cause a
 	  user define Control Program command to be executed after a
 	  timeout.
+	  LPAR provides a very similar interface. This driver handles
+	  both.
 
 	  To compile this driver as a module, choose M here. The module
 	  will be called vmwatchdog.
+/*
+ * Watchdog driver for z/VM and LPAR using the diag 288 interface.
+ *
+ * Under z/VM, expiration of the watchdog will send a "system restart" command
+ * to CP.
+ *
+ * The command can be altered using the module parameter "cmd". This is
+ * not recommended because it's only supported on z/VM but not with LPAR.
+ *
+ * On LPAR, the watchdog will always trigger a system restart. The module
+ * parameter cmd is meaningless here.
+ *
+ *
+ * Copyright IBM Corp. 2004, 2013
+ * Author(s): Arnd Bergmann (arndb@de.ibm.com)
+ *	      Philipp Hachtmann (phacht@de.ibm.com)
+ *
+ */
+
+#define KMSG_COMPONENT "diag288_wdt"
+#define pr_fmt(fmt) KMSG_COMPONENT ": " fmt
+
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/slab.h>
+#include <linux/miscdevice.h>
+#include <linux/watchdog.h>
+#include <linux/suspend.h>
+#include <asm/ebcdic.h>
+#include <linux/io.h>
+#include <linux/uaccess.h>
+
+#define MAX_CMDLEN 240
+#define DEFAULT_CMD "SYSTEM RESTART"
+
+#define MIN_INTERVAL 15     /* Minimal time supported by diag288 */
+#define MAX_INTERVAL 3600   /* One hour should be enough - pure estimation */
+
+#define WDT_DEFAULT_TIMEOUT 30
+
+/* Function codes - init, change, cancel */
+#define WDT_FUNC_INIT 0
+#define WDT_FUNC_CHANGE 1
+#define WDT_FUNC_CANCEL 2
+#define WDT_FUNC_CONCEAL 0x80000000
+
+/* Action codes for LPAR watchdog */
+#define LPARWDT_RESTART 0
+
+static char wdt_cmd[MAX_CMDLEN] = DEFAULT_CMD;
+static bool conceal_on;
+static bool nowayout_info = WATCHDOG_NOWAYOUT;
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Arnd Bergmann <arndb@de.ibm.com>");
+MODULE_AUTHOR("Philipp Hachtmann <phacht@de.ibm.com>");
+
+MODULE_DESCRIPTION("System z diag288 Watchdog Timer");
+
+module_param_string(cmd, wdt_cmd, MAX_CMDLEN, 0644);
+MODULE_PARM_DESC(cmd, "CP command that is run when the watchdog triggers (z/VM only)");
+
+module_param_named(conceal, conceal_on, bool, 0644);
+MODULE_PARM_DESC(conceal, "Enable the CONCEAL CP option while the watchdog is active (z/VM only)");
+
+module_param_named(nowayout, nowayout_info, bool, 0444);
+MODULE_PARM_DESC(nowayout, "Watchdog cannot be stopped once started (default = CONFIG_WATCHDOG_NOWAYOUT)");
+
+MODULE_ALIAS_MISCDEV(WATCHDOG_MINOR);
+MODULE_ALIAS("vmwatchdog");
+
+static int __diag288(unsigned int func, unsigned int timeout,
+		     unsigned long action, unsigned int len)
+{
+	register unsigned long __func asm("2") = func;
+	register unsigned long __timeout asm("3") = timeout;
+	register unsigned long __action asm("4") = action;
+	register unsigned long __len asm("5") = len;
+	int err;
+
+	err = -EINVAL;
+	asm volatile(
+		"	diag	%1, %3, 0x288\n"
+		"0:	la	%0, 0\n"
+		"1:\n"
+		EX_TABLE(0b, 1b)
+		: "+d" (err) : "d"(__func), "d"(__timeout),
+		  "d"(__action), "d"(__len) : "1", "cc");
+	return err;
+}
+
+static int __diag288_vm(unsigned int func, unsigned int timeout,
+			char *cmd, size_t len)
+{
+	return __diag288(func, timeout, virt_to_phys(cmd), len);
+}
+
+static int __diag288_lpar(unsigned int func, unsigned int timeout,
+			  unsigned long action)
+{
+	return __diag288(func, timeout, action, 0);
+}
+
+static int wdt_start(struct watchdog_device *dev)
+{
+	char *ebc_cmd;
+	size_t len;
+	int ret;
+	unsigned int func;
+
+	ret = -ENODEV;
+
+	if (MACHINE_IS_VM) {
+		ebc_cmd = kmalloc(MAX_CMDLEN, GFP_KERNEL);
+		if (!ebc_cmd)
+			return -ENOMEM;
+		len = strlcpy(ebc_cmd, wdt_cmd, MAX_CMDLEN);
+		ASCEBC(ebc_cmd, MAX_CMDLEN);
+		EBC_TOUPPER(ebc_cmd, MAX_CMDLEN);
+
+		func = conceal_on ? (WDT_FUNC_INIT | WDT_FUNC_CONCEAL)
+			: WDT_FUNC_INIT;
+		ret = __diag288_vm(func, dev->timeout, ebc_cmd, len);
+		WARN_ON(ret != 0);
+		kfree(ebc_cmd);
+	}
+
+	if (MACHINE_IS_LPAR) {
+		ret = __diag288_lpar(WDT_FUNC_INIT,
+				     dev->timeout, LPARWDT_RESTART);
+	}
+
+	if (ret) {
+		pr_err("The watchdog cannot be activated\n");
+		return ret;
+	}
+	pr_info("The watchdog was activated\n");
+	return 0;
+}
+
+static int wdt_stop(struct watchdog_device *dev)
+{
+	int ret;
+
+	ret = __diag288(WDT_FUNC_CANCEL, 0, 0, 0);
+	pr_info("The watchdog was deactivated\n");
+	return ret;
+}
+
+static int wdt_ping(struct watchdog_device *dev)
+{
+	char *ebc_cmd;
+	size_t len;
+	int ret;
+	unsigned int func;
+
+	ret = -ENODEV;
+
+	if (MACHINE_IS_VM) {
+		ebc_cmd = kmalloc(MAX_CMDLEN, GFP_KERNEL);
+		if (!ebc_cmd)
+			return -ENOMEM;
+		len = strlcpy(ebc_cmd, wdt_cmd, MAX_CMDLEN);
+		ASCEBC(ebc_cmd, MAX_CMDLEN);
+		EBC_TOUPPER(ebc_cmd, MAX_CMDLEN);
+
+		/*
+		 * It seems to be ok for z/VM to use the init function to
+		 * retrigger the watchdog. On LPAR WDT_FUNC_CHANGE must
+		 * be used when the watchdog is running.
+		 */
+		func = conceal_on ? (WDT_FUNC_INIT | WDT_FUNC_CONCEAL)
+			: WDT_FUNC_INIT;
+
+		ret = __diag288_vm(func, dev->timeout, ebc_cmd, len);
+		WARN_ON(ret != 0);
+		kfree(ebc_cmd);
+	}
+
+	if (MACHINE_IS_LPAR)
+		ret = __diag288_lpar(WDT_FUNC_CHANGE, dev->timeout, 0);
+
+	if (ret)
+		pr_err("The watchdog timer cannot be started or reset\n");
+	return ret;
+}
+
+static int wdt_set_timeout(struct watchdog_device *dev, unsigned int new_to)
+{
+	dev->timeout = new_to;
+	return wdt_ping(dev);
+}
+
+static struct watchdog_ops wdt_ops = {
+	.owner = THIS_MODULE,
+	.start = wdt_start,
+	.stop = wdt_stop,
+	.ping = wdt_ping,
+	.set_timeout = wdt_set_timeout,
+};
+
+static struct watchdog_info wdt_info = {
+	.options = WDIOF_SETTIMEOUT | WDIOF_MAGICCLOSE,
+	.firmware_version = 0,
+	.identity = "z Watchdog",
+};
+
+static struct watchdog_device wdt_dev = {
+	.parent = NULL,
+	.info = &wdt_info,
+	.ops = &wdt_ops,
+	.bootstatus = 0,
+	.timeout = WDT_DEFAULT_TIMEOUT,
+	.min_timeout = MIN_INTERVAL,
+	.max_timeout = MAX_INTERVAL,
+};
+
+/*
+ * It makes no sense to go into suspend while the watchdog is running.
+ * Depending on the memory size, the watchdog might trigger, while we
+ * are still saving the memory.
+ * We reuse the open flag to ensure that suspend and watchdog open are
+ * exclusive operations
+ */
+static int wdt_suspend(void)
+{
+	if (test_and_set_bit(WDOG_DEV_OPEN, &wdt_dev.status)) {
+		pr_err("Linux cannot be suspended while the watchdog is in use\n");
+		return notifier_from_errno(-EBUSY);
+	}
+	if (test_bit(WDOG_ACTIVE, &wdt_dev.status)) {
+		clear_bit(WDOG_DEV_OPEN, &wdt_dev.status);
+		pr_err("Linux cannot be suspended while the watchdog is in use\n");
+		return notifier_from_errno(-EBUSY);
+	}
+	return NOTIFY_DONE;
+}
+
+static int wdt_resume(void)
+{
+	clear_bit(WDOG_DEV_OPEN, &wdt_dev.status);
+	return NOTIFY_DONE;
+}
+
+static int wdt_power_event(struct notifier_block *this, unsigned long event,
+			   void *ptr)
+{
+	switch (event) {
+	case PM_POST_HIBERNATION:
+	case PM_POST_SUSPEND:
+		return wdt_resume();
+	case PM_HIBERNATION_PREPARE:
+	case PM_SUSPEND_PREPARE:
+		return wdt_suspend();
+	default:
+		return NOTIFY_DONE;
+	}
+}
+
+static struct notifier_block wdt_power_notifier = {
+	.notifier_call = wdt_power_event,
+};
+
+static int __init diag288_init(void)
+{
+	int ret;
+	char ebc_begin[] = {
+		194, 197, 199, 201, 213
+	};
+
+	watchdog_set_nowayout(&wdt_dev, nowayout_info);
+
+	if (MACHINE_IS_VM) {
+		pr_info("The watchdog device driver detected a z/VM environment\n");
+		if (__diag288_vm(WDT_FUNC_INIT, 15,
+				 ebc_begin, sizeof(ebc_begin)) != 0) {
+			pr_err("The watchdog cannot be initialized\n");
+			return -EINVAL;
+		}
+	} else if (MACHINE_IS_LPAR) {
+		pr_info("The watchdog device driver detected an LPAR environment\n");
+		if (__diag288_lpar(WDT_FUNC_INIT, 30, LPARWDT_RESTART)) {
+			pr_err("The watchdog cannot be initialized\n");
+			return -EINVAL;
+		}
+	} else {
+		pr_err("Linux runs in an environment that does not support the diag288 watchdog\n");
+		return -ENODEV;
+	}
+
+	if (__diag288_lpar(WDT_FUNC_CANCEL, 0, 0)) {
+		pr_err("The watchdog cannot be deactivated\n");
+		return -EINVAL;
+	}
+
+	ret = register_pm_notifier(&wdt_power_notifier);
+	if (ret)
+		return ret;
+
+	ret = watchdog_register_device(&wdt_dev);
+	if (ret)
+		unregister_pm_notifier(&wdt_power_notifier);
+
+	return ret;
+}
+
+static void __exit diag288_exit(void)
+{
+	watchdog_unregister_device(&wdt_dev);
+	unregister_pm_notifier(&wdt_power_notifier);
+}
+
+module_init(diag288_init);
+module_exit(diag288_exit);
···
 	spinlock_t lock;
 	u64 pinned;
 	u64 reserved;
+	u64 delalloc_bytes;
 	u64 bytes_super;
 	u64 flags;
 	u64 sectorsize;
 	u64 cache_generation;
+
+	/*
+	 * Used for the delayed data space allocation, because only the
+	 * data space allocation and the related metadata update can be
+	 * done across transactions.
+	 */
+	struct rw_semaphore data_rwsem;

 	/* for raid56, this is a full stripe, without parity */
 	unsigned long full_stripe_len;
···
 			 struct btrfs_key *ins);
 int btrfs_reserve_extent(struct btrfs_root *root, u64 num_bytes,
 			 u64 min_alloc_size, u64 empty_size, u64 hint_byte,
-			 struct btrfs_key *ins, int is_data);
+			 struct btrfs_key *ins, int is_data, int delalloc);
 int btrfs_inc_ref(struct btrfs_trans_handle *trans, struct btrfs_root *root,
 		  struct extent_buffer *buf, int full_backref, int no_quota);
 int btrfs_dec_ref(struct btrfs_trans_handle *trans, struct btrfs_root *root,
···
 			u64 bytenr, u64 num_bytes, u64 parent, u64 root_objectid,
 			u64 owner, u64 offset, int no_quota);

-int btrfs_free_reserved_extent(struct btrfs_root *root, u64 start, u64 len);
+int btrfs_free_reserved_extent(struct btrfs_root *root, u64 start, u64 len,
+			       int delalloc);
 int btrfs_free_and_pin_reserved_extent(struct btrfs_root *root,
 				       u64 start, u64 len);
 void btrfs_prepare_extent_commit(struct btrfs_trans_handle *trans,
fs/btrfs/extent-tree.c | +112 -31
···
 static void dump_space_info(struct btrfs_space_info *info, u64 bytes,
 			    int dump_block_groups);
 static int btrfs_update_reserved_bytes(struct btrfs_block_group_cache *cache,
-				       u64 num_bytes, int reserve);
+				       u64 num_bytes, int reserve,
+				       int delalloc);
 static int block_rsv_use_bytes(struct btrfs_block_rsv *block_rsv,
 			       u64 num_bytes);
 int btrfs_pin_extent(struct btrfs_root *root,
···

 	spin_lock(&block_group->lock);
 	if (block_group->cached != BTRFS_CACHE_FINISHED ||
-	    !btrfs_test_opt(root, SPACE_CACHE)) {
+	    !btrfs_test_opt(root, SPACE_CACHE) ||
+	    block_group->delalloc_bytes) {
 		/*
 		 * don't bother trying to write stuff out _if_
 		 * a) we're not cached,
···
  * @cache:	The cache we are manipulating
  * @num_bytes:	The number of bytes in question
  * @reserve:	One of the reservation enums
+ * @delalloc:   The blocks are allocated for the delalloc write
  *
  * This is called by the allocator when it reserves space, or by somebody who is
  * freeing space that was never actually used on disk.  For example if you
···
  * succeeds.
  */
 static int btrfs_update_reserved_bytes(struct btrfs_block_group_cache *cache,
-				       u64 num_bytes, int reserve)
+				       u64 num_bytes, int reserve, int delalloc)
 {
 	struct btrfs_space_info *space_info = cache->space_info;
 	int ret = 0;
···
 						      num_bytes, 0);
 				space_info->bytes_may_use -= num_bytes;
 			}
+
+			if (delalloc)
+				cache->delalloc_bytes += num_bytes;
 		}
 	} else {
 		if (cache->ro)
 			space_info->bytes_readonly += num_bytes;
 		cache->reserved -= num_bytes;
 		space_info->bytes_reserved -= num_bytes;
+
+		if (delalloc)
+			cache->delalloc_bytes -= num_bytes;
 	}
 	spin_unlock(&cache->lock);
 	spin_unlock(&space_info->lock);
···
 		WARN_ON(test_bit(EXTENT_BUFFER_DIRTY, &buf->bflags));

 		btrfs_add_free_space(cache, buf->start, buf->len);
-		btrfs_update_reserved_bytes(cache, buf->len, RESERVE_FREE);
+		btrfs_update_reserved_bytes(cache, buf->len, RESERVE_FREE, 0);
 		trace_btrfs_reserved_extent_free(root, buf->start, buf->len);
 		pin = 0;
 	}
···
 	LOOP_NO_EMPTY_SIZE = 3,
 };

+static inline void
+btrfs_lock_block_group(struct btrfs_block_group_cache *cache,
+		       int delalloc)
+{
+	if (delalloc)
+		down_read(&cache->data_rwsem);
+}
+
+static inline void
+btrfs_grab_block_group(struct btrfs_block_group_cache *cache,
+		       int delalloc)
+{
+	btrfs_get_block_group(cache);
+	if (delalloc)
+		down_read(&cache->data_rwsem);
+}
+
+static struct btrfs_block_group_cache *
+btrfs_lock_cluster(struct btrfs_block_group_cache *block_group,
+		   struct btrfs_free_cluster *cluster,
+		   int delalloc)
+{
+	struct btrfs_block_group_cache *used_bg;
+	bool locked = false;
+again:
+	spin_lock(&cluster->refill_lock);
+	if (locked) {
+		if (used_bg == cluster->block_group)
+			return used_bg;
+
+		up_read(&used_bg->data_rwsem);
+		btrfs_put_block_group(used_bg);
+	}
+
+	used_bg = cluster->block_group;
+	if (!used_bg)
+		return NULL;
+
+	if (used_bg == block_group)
+		return used_bg;
+
+	btrfs_get_block_group(used_bg);
+
+	if (!delalloc)
+		return used_bg;
+
+	if (down_read_trylock(&used_bg->data_rwsem))
+		return used_bg;
+
+	spin_unlock(&cluster->refill_lock);
+	down_read(&used_bg->data_rwsem);
+	locked = true;
+	goto again;
+}
+
+static inline void
+btrfs_release_block_group(struct btrfs_block_group_cache *cache,
+			  int delalloc)
+{
+	if (delalloc)
+		up_read(&cache->data_rwsem);
+	btrfs_put_block_group(cache);
+}
+
 /*
  * walks the btree of allocated extents and find a hole of a given size.
  * The key ins is changed to record the hole:
···
 static noinline int find_free_extent(struct btrfs_root *orig_root,
 				     u64 num_bytes, u64 empty_size,
 				     u64 hint_byte, struct btrfs_key *ins,
-				     u64 flags)
+				     u64 flags, int delalloc)
 {
 	int ret = 0;
 	struct btrfs_root *root = orig_root->fs_info->extent_root;
···
 			up_read(&space_info->groups_sem);
 		} else {
 			index = get_block_group_index(block_group);
+			btrfs_lock_block_group(block_group, delalloc);
 			goto have_block_group;
 		}
 	} else if (block_group) {
···
 		u64 offset;
 		int cached;

-		btrfs_get_block_group(block_group);
+		btrfs_grab_block_group(block_group, delalloc);
 		search_start = block_group->key.objectid;

 		/*
···
 			 * the refill lock keeps out other
 			 * people trying to start a new cluster
 			 */
-			spin_lock(&last_ptr->refill_lock);
-			used_block_group = last_ptr->block_group;
-			if (used_block_group != block_group &&
-			    (!used_block_group ||
-			     used_block_group->ro ||
-			     !block_group_bits(used_block_group, flags)))
+			used_block_group = btrfs_lock_cluster(block_group,
+							      last_ptr,
+							      delalloc);
+			if (!used_block_group)
 				goto refill_cluster;

-			if (used_block_group != block_group)
-				btrfs_get_block_group(used_block_group);
+			if (used_block_group != block_group &&
+			    (used_block_group->ro ||
+			     !block_group_bits(used_block_group, flags)))
+				goto release_cluster;

 			offset = btrfs_alloc_from_cluster(used_block_group,
 							  last_ptr,
···
 						used_block_group,
 						search_start, num_bytes);
 				if (used_block_group != block_group) {
-					btrfs_put_block_group(block_group);
+					btrfs_release_block_group(block_group,
+								  delalloc);
 					block_group = used_block_group;
 				}
 				goto checks;
 			}

 			WARN_ON(last_ptr->block_group != used_block_group);
-			if (used_block_group != block_group)
-				btrfs_put_block_group(used_block_group);
-refill_cluster:
+release_cluster:
 			/* If we are on LOOP_NO_EMPTY_SIZE, we can't
 			 * set up a new clusters, so lets just skip it
 			 * and let the allocator find whatever block
···
 			 * succeeding in the unclustered
 			 * allocation.  */
 			if (loop >= LOOP_NO_EMPTY_SIZE &&
-			    last_ptr->block_group != block_group) {
+			    used_block_group != block_group) {
 				spin_unlock(&last_ptr->refill_lock);
+				btrfs_release_block_group(used_block_group,
+							  delalloc);
 				goto unclustered_alloc;
 			}
···
 			 */
 			btrfs_return_cluster_to_free_space(NULL, last_ptr);

+			if (used_block_group != block_group)
+				btrfs_release_block_group(used_block_group,
+							  delalloc);
+refill_cluster:
 			if (loop >= LOOP_NO_EMPTY_SIZE) {
 				spin_unlock(&last_ptr->refill_lock);
 				goto unclustered_alloc;
···
 		BUG_ON(offset > search_start);

 		ret = btrfs_update_reserved_bytes(block_group, num_bytes,
-						  alloc_type);
+						  alloc_type, delalloc);
 		if (ret == -EAGAIN) {
 			btrfs_add_free_space(block_group, offset, num_bytes);
 			goto loop;
···

 		trace_btrfs_reserve_extent(orig_root, block_group,
 					   search_start, num_bytes);
-		btrfs_put_block_group(block_group);
+		btrfs_release_block_group(block_group, delalloc);
 		break;
loop:
 		failed_cluster_refill = false;
 		failed_alloc = false;
 		BUG_ON(index != get_block_group_index(block_group));
-		btrfs_put_block_group(block_group);
+		btrfs_release_block_group(block_group, delalloc);
 	}
 	up_read(&space_info->groups_sem);
···
 int btrfs_reserve_extent(struct btrfs_root *root,
 			 u64 num_bytes, u64 min_alloc_size,
 			 u64 empty_size, u64 hint_byte,
-			 struct btrfs_key *ins, int is_data)
+			 struct btrfs_key *ins, int is_data, int delalloc)
 {
 	bool final_tried = false;
 	u64 flags;
···
again:
 	WARN_ON(num_bytes < root->sectorsize);
 	ret = find_free_extent(root, num_bytes, empty_size, hint_byte, ins,
-			       flags);
+			       flags, delalloc);

 	if (ret == -ENOSPC) {
 		if (!final_tried && ins->offset) {
···
 }

 static int __btrfs_free_reserved_extent(struct btrfs_root *root,
-					u64 start, u64 len, int pin)
+					u64 start, u64 len,
+					int pin, int delalloc)
 {
 	struct btrfs_block_group_cache *cache;
 	int ret = 0;
···
 		pin_down_extent(root, cache, start, len, 1);
 	else {
 		btrfs_add_free_space(cache, start, len);
-		btrfs_update_reserved_bytes(cache, len, RESERVE_FREE);
+		btrfs_update_reserved_bytes(cache, len, RESERVE_FREE, delalloc);
 	}
 	btrfs_put_block_group(cache);
···
 }

 int btrfs_free_reserved_extent(struct btrfs_root *root,
-			       u64 start, u64 len)
+			       u64 start, u64 len, int delalloc)
 {
-	return __btrfs_free_reserved_extent(root, start, len, 0);
+	return __btrfs_free_reserved_extent(root, start, len, 0, delalloc);
 }

 int btrfs_free_and_pin_reserved_extent(struct btrfs_root *root,
 				       u64 start, u64 len)
 {
-	return __btrfs_free_reserved_extent(root, start, len, 1);
+	return __btrfs_free_reserved_extent(root, start, len, 1, 0);
 }

 static int alloc_reserved_file_extent(struct btrfs_trans_handle *trans,
···
 		return -EINVAL;

 	ret = btrfs_update_reserved_bytes(block_group, ins->offset,
-					  RESERVE_ALLOC_NO_ACCOUNT);
+					  RESERVE_ALLOC_NO_ACCOUNT, 0);
 	BUG_ON(ret); /* logic error */
 	ret = alloc_reserved_file_extent(trans, root, 0, root_objectid,
 					 0, owner, offset, ins, 1);
···
 		return ERR_CAST(block_rsv);

 	ret = btrfs_reserve_extent(root, blocksize, blocksize,
-				   empty_size, hint, &ins, 0);
+				   empty_size, hint, &ins, 0, 0);
 	if (ret) {
 		unuse_block_rsv(root->fs_info, block_rsv, blocksize);
 		return ERR_PTR(ret);
···
 					       start);
 	atomic_set(&cache->count, 1);
 	spin_lock_init(&cache->lock);
+	init_rwsem(&cache->data_rwsem);
 	INIT_LIST_HEAD(&cache->list);
 	INIT_LIST_HEAD(&cache->cluster_list);
 	INIT_LIST_HEAD(&cache->new_bg_list);
fs/btrfs/extent_map.c
···
 	if (atomic_dec_and_test(&em->refs)) {
 		WARN_ON(extent_map_in_tree(em));
 		WARN_ON(!list_empty(&em->list));
+		if (test_bit(EXTENT_FLAG_FS_MAPPING, &em->flags))
+			kfree(em->bdev);
 		kmem_cache_free(extent_map_cache, em);
 	}
 }
fs/btrfs/extent_map.h | +1
···
 #define EXTENT_FLAG_PREALLOC 3 /* pre-allocated extent */
 #define EXTENT_FLAG_LOGGING 4 /* Logging this extent */
 #define EXTENT_FLAG_FILLING 5 /* Filling in a preallocated extent */
+#define EXTENT_FLAG_FS_MAPPING 6 /* filesystem extent mapping type */

 struct extent_map {
 	struct rb_node rb_node;
fs/btrfs/free-space-cache.c | +127 -67
···
 };

 static int io_ctl_init(struct io_ctl *io_ctl, struct inode *inode,
-		       struct btrfs_root *root)
+		       struct btrfs_root *root, int write)
 {
+	int num_pages;
+	int check_crcs = 0;
+
+	num_pages = (i_size_read(inode) + PAGE_CACHE_SIZE - 1) >>
+		    PAGE_CACHE_SHIFT;
+
+	if (btrfs_ino(inode) != BTRFS_FREE_INO_OBJECTID)
+		check_crcs = 1;
+
+	/* Make sure we can fit our crcs into the first page */
+	if (write && check_crcs &&
+	    (num_pages * sizeof(u32)) >= PAGE_CACHE_SIZE)
+		return -ENOSPC;
+
 	memset(io_ctl, 0, sizeof(struct io_ctl));
-	io_ctl->num_pages = (i_size_read(inode) + PAGE_CACHE_SIZE - 1) >>
-		PAGE_CACHE_SHIFT;
-	io_ctl->pages = kzalloc(sizeof(struct page *) * io_ctl->num_pages,
-				GFP_NOFS);
+
+	io_ctl->pages = kzalloc(sizeof(struct page *) * num_pages, GFP_NOFS);
 	if (!io_ctl->pages)
 		return -ENOMEM;
+
+	io_ctl->num_pages = num_pages;
 	io_ctl->root = root;
-	if (btrfs_ino(inode) != BTRFS_FREE_INO_OBJECTID)
-		io_ctl->check_crcs = 1;
+	io_ctl->check_crcs = check_crcs;
+
 	return 0;
 }
···
 	generation = btrfs_free_space_generation(leaf, header);
 	btrfs_release_path(path);

+	if (!BTRFS_I(inode)->generation) {
+		btrfs_info(root->fs_info,
+			   "The free space cache file (%llu) is invalid, skip it\n",
+			   offset);
+		return 0;
+	}
+
 	if (BTRFS_I(inode)->generation != generation) {
 		btrfs_err(root->fs_info,
 			  "free space inode generation (%llu) "
···
 	if (!num_entries)
 		return 0;

-	ret = io_ctl_init(&io_ctl, inode, root);
+	ret = io_ctl_init(&io_ctl, inode, root, 0);
 	if (ret)
 		return ret;
···
 }

 static noinline_for_stack int
-add_ioctl_entries(struct btrfs_root *root,
-		  struct inode *inode,
-		  struct btrfs_block_group_cache *block_group,
-		  struct io_ctl *io_ctl,
-		  struct extent_state **cached_state,
-		  struct list_head *bitmap_list,
-		  int *entries)
+write_pinned_extent_entries(struct btrfs_root *root,
+			    struct btrfs_block_group_cache *block_group,
+			    struct io_ctl *io_ctl,
+			    int *entries)
 {
 	u64 start, extent_start, extent_end, len;
-	struct list_head *pos, *n;
 	struct extent_io_tree *unpin = NULL;
 	int ret;
+
+	if (!block_group)
+		return 0;

 	/*
 	 * We want to add any pinned extents to our free space cache
···
 	 */
 	unpin = root->fs_info->pinned_extents;

-	if (block_group)
-		start = block_group->key.objectid;
+	start = block_group->key.objectid;

-	while (block_group && (start < block_group->key.objectid +
-			       block_group->key.offset)) {
+	while (start < block_group->key.objectid + block_group->key.offset) {
 		ret = find_first_extent_bit(unpin, start,
 					    &extent_start, &extent_end,
 					    EXTENT_DIRTY, NULL);
-		if (ret) {
-			ret = 0;
-			break;
-		}
+		if (ret)
+			return 0;

 		/* This pinned extent is out of our range */
 		if (extent_start >= block_group->key.objectid +
 		    block_group->key.offset)
-			break;
+			return 0;

 		extent_start = max(extent_start, start);
 		extent_end = min(block_group->key.objectid +
···
 		*entries += 1;
 		ret = io_ctl_add_entry(io_ctl, extent_start, len, NULL);
 		if (ret)
-			goto out_nospc;
+			return -ENOSPC;

 		start = extent_end;
 	}
+
+	return 0;
+}
+
+static noinline_for_stack int
+write_bitmap_entries(struct io_ctl *io_ctl, struct list_head *bitmap_list)
+{
+	struct list_head *pos, *n;
+	int ret;

 	/* Write out the bitmaps */
 	list_for_each_safe(pos, n, bitmap_list) {
···
 		ret = io_ctl_add_bitmap(io_ctl, entry->bitmap);
 		if (ret)
-			goto out_nospc;
+			return -ENOSPC;
 		list_del_init(&entry->list);
 	}

-	/* Zero out the rest of the pages just to make sure */
-	io_ctl_zero_remaining_pages(io_ctl);
+	return 0;
+}

-	ret = btrfs_dirty_pages(root, inode, io_ctl->pages, io_ctl->num_pages,
-				0, i_size_read(inode), cached_state);
-	io_ctl_drop_pages(io_ctl);
-	unlock_extent_cached(&BTRFS_I(inode)->io_tree, 0,
-			     i_size_read(inode) - 1, cached_state, GFP_NOFS);
-
-	if (ret)
-		goto fail;
+static int flush_dirty_cache(struct inode *inode)
+{
+	int ret;

 	ret = btrfs_wait_ordered_range(inode, 0, (u64)-1);
-	if (ret) {
+	if (ret)
 		clear_extent_bit(&BTRFS_I(inode)->io_tree, 0, inode->i_size - 1,
 				 EXTENT_DIRTY | EXTENT_DELALLOC, 0, 0, NULL,
 				 GFP_NOFS);
-		goto fail;
-	}
-	return 0;

-fail:
-	return -1;
-
-out_nospc:
-	return -ENOSPC;
+	return ret;
 }

 static void noinline_for_stack
···
 			   struct list_head *bitmap_list)
 {
 	struct list_head *pos, *n;
+
 	list_for_each_safe(pos, n, bitmap_list) {
 		struct btrfs_free_space *entry =
 			list_entry(pos, struct btrfs_free_space, list);
···
 {
 	struct extent_state *cached_state = NULL;
 	struct io_ctl io_ctl;
-	struct list_head bitmap_list;
+	LIST_HEAD(bitmap_list);
 	int entries = 0;
 	int bitmaps = 0;
 	int ret;
-	int err = -1;
-
-	INIT_LIST_HEAD(&bitmap_list);

 	if (!i_size_read(inode))
 		return -1;

-	ret = io_ctl_init(&io_ctl, inode, root);
+	ret = io_ctl_init(&io_ctl, inode, root, 1);
 	if (ret)
 		return -1;
+
+	if (block_group && (block_group->flags & BTRFS_BLOCK_GROUP_DATA)) {
+		down_write(&block_group->data_rwsem);
+		spin_lock(&block_group->lock);
+		if (block_group->delalloc_bytes) {
+			block_group->disk_cache_state = BTRFS_DC_WRITTEN;
+			spin_unlock(&block_group->lock);
+			up_write(&block_group->data_rwsem);
+			BTRFS_I(inode)->generation = 0;
+			ret = 0;
+			goto out;
+		}
+		spin_unlock(&block_group->lock);
+	}

 	/* Lock all pages first so we can lock the extent safely. */
 	io_ctl_prepare_pages(&io_ctl, inode, 0);
···
 	lock_extent_bits(&BTRFS_I(inode)->io_tree, 0, i_size_read(inode) - 1,
 			 0, &cached_state);

-
-	/* Make sure we can fit our crcs into the first page */
-	if (io_ctl.check_crcs &&
-	    (io_ctl.num_pages * sizeof(u32)) >= PAGE_CACHE_SIZE)
-		goto out_nospc;
-
 	io_ctl_set_generation(&io_ctl, trans->transid);

+	/* Write out the extent entries in the free space cache */
 	ret = write_cache_extent_entries(&io_ctl, ctl,
 					 block_group, &entries, &bitmaps,
 					 &bitmap_list);
 	if (ret)
 		goto out_nospc;

-	ret = add_ioctl_entries(root, inode, block_group, &io_ctl,
-				&cached_state, &bitmap_list, &entries);
-
-	if (ret == -ENOSPC)
+	/*
+	 * Some spaces that are freed in the current transaction are pinned,
+	 * they will be added into free space cache after the transaction is
+	 * committed, we shouldn't lose them.
+	 */
+	ret = write_pinned_extent_entries(root, block_group, &io_ctl, &entries);
+	if (ret)
 		goto out_nospc;
-	else if (ret)
+
+	/* At last, we write out all the bitmaps. */
+	ret = write_bitmap_entries(&io_ctl, &bitmap_list);
+	if (ret)
+		goto out_nospc;
+
+	/* Zero out the rest of the pages just to make sure */
+	io_ctl_zero_remaining_pages(&io_ctl);
+
+	/* Everything is written out, now we dirty the pages in the file. */
+	ret = btrfs_dirty_pages(root, inode, io_ctl.pages, io_ctl.num_pages,
+				0, i_size_read(inode), &cached_state);
+	if (ret)
+		goto out_nospc;
+
+	if (block_group && (block_group->flags & BTRFS_BLOCK_GROUP_DATA))
+		up_write(&block_group->data_rwsem);
+	/*
+	 * Release the pages and unlock the extent, we will flush
+	 * them out later
+	 */
+	io_ctl_drop_pages(&io_ctl);
+
+	unlock_extent_cached(&BTRFS_I(inode)->io_tree, 0,
+			     i_size_read(inode) - 1, &cached_state, GFP_NOFS);
+
+	/* Flush the dirty pages in the cache file. */
+	ret = flush_dirty_cache(inode);
+	if (ret)
 		goto out;

-	err = update_cache_item(trans, root, inode, path, offset,
+	/* Update the cache item to tell everyone this cache file is valid. */
+	ret = update_cache_item(trans, root, inode, path, offset,
 				entries, bitmaps);
-
out:
 	io_ctl_free(&io_ctl);
-	if (err) {
+	if (ret) {
 		invalidate_inode_pages2(inode->i_mapping);
 		BTRFS_I(inode)->generation = 0;
 	}
 	btrfs_update_inode(trans, root, inode);
-	return err;
+	return ret;

out_nospc:
-
 	cleanup_write_cache_enospc(inode, &io_ctl, &cached_state, &bitmap_list);
+
+	if (block_group && (block_group->flags & BTRFS_BLOCK_GROUP_DATA))
+		up_write(&block_group->data_rwsem);
+
 	goto out;
 }
···
 	spin_lock(&block_group->lock);
 	if (block_group->disk_cache_state < BTRFS_DC_SETUP) {
+		spin_unlock(&block_group->lock);
+		return 0;
+	}
+
+	if (block_group->delalloc_bytes) {
+		block_group->disk_cache_state = BTRFS_DC_WRITTEN;
 		spin_unlock(&block_group->lock);
 		return 0;
 	}
fs/btrfs/inode.c | +30 -11
···
 		ret = btrfs_reserve_extent(root,
 					   async_extent->compressed_size,
 					   async_extent->compressed_size,
-					   0, alloc_hint, &ins, 1);
+					   0, alloc_hint, &ins, 1, 1);
 		if (ret) {
 			int i;
···
out:
 	return ret;
out_free_reserve:
-	btrfs_free_reserved_extent(root, ins.objectid, ins.offset);
+	btrfs_free_reserved_extent(root, ins.objectid, ins.offset, 1);
out_free:
 	extent_clear_unlock_delalloc(inode, async_extent->start,
 				     async_extent->start +
···
 		cur_alloc_size = disk_num_bytes;
 		ret = btrfs_reserve_extent(root, cur_alloc_size,
 					   root->sectorsize, 0, alloc_hint,
-					   &ins, 1);
+					   &ins, 1, 1);
 		if (ret < 0)
 			goto out_unlock;
···
 	return ret;

out_reserve:
-	btrfs_free_reserved_extent(root, ins.objectid, ins.offset);
+	btrfs_free_reserved_extent(root, ins.objectid, ins.offset, 1);
out_unlock:
 	extent_clear_unlock_delalloc(inode, start, end, locked_page,
 				     EXTENT_LOCKED | EXTENT_DO_ACCOUNTING |
···
 	return NULL;
 }

+static void btrfs_release_delalloc_bytes(struct btrfs_root *root,
+					 u64 start, u64 len)
+{
+	struct btrfs_block_group_cache *cache;
+
+	cache = btrfs_lookup_block_group(root->fs_info, start);
+	ASSERT(cache);
+
+	spin_lock(&cache->lock);
+	cache->delalloc_bytes -= len;
+	spin_unlock(&cache->lock);
+
+	btrfs_put_block_group(cache);
+}
+
 /* as ordered data IO finishes, this gets called so we can finish
  * an ordered extent if the range of bytes in the file it covers are
  * fully written.
···
 						logical_len, logical_len,
 						compress_type, 0, 0,
 						BTRFS_FILE_EXTENT_REG);
+		if (!ret)
+			btrfs_release_delalloc_bytes(root,
+						     ordered_extent->start,
+						     ordered_extent->disk_len);
 	}
 	unpin_extent_cache(&BTRFS_I(inode)->extent_tree,
 			   ordered_extent->file_offset, ordered_extent->len,
···
 		    !test_bit(BTRFS_ORDERED_NOCOW, &ordered_extent->flags) &&
 		    !test_bit(BTRFS_ORDERED_PREALLOC, &ordered_extent->flags))
 			btrfs_free_reserved_extent(root, ordered_extent->start,
-						   ordered_extent->disk_len);
+						   ordered_extent->disk_len, 1);
 	}

···
 	alloc_hint = get_extent_allocation_hint(inode, start, len);
 	ret = btrfs_reserve_extent(root, len, root->sectorsize, 0,
-				   alloc_hint, &ins, 1);
+				   alloc_hint, &ins, 1, 1);
 	if (ret)
 		return ERR_PTR(ret);

 	em = create_pinned_em(inode, start, ins.offset, start, ins.objectid,
 			      ins.offset, ins.offset, ins.offset, 0);
 	if (IS_ERR(em)) {
-		btrfs_free_reserved_extent(root, ins.objectid, ins.offset);
+		btrfs_free_reserved_extent(root, ins.objectid, ins.offset, 1);
 		return em;
 	}

 	ret = btrfs_add_ordered_extent_dio(inode, start, ins.objectid,
 					   ins.offset, ins.offset, 0);
 	if (ret) {
-		btrfs_free_reserved_extent(root, ins.objectid, ins.offset);
+		btrfs_free_reserved_extent(root, ins.objectid, ins.offset, 1);
 		free_extent_map(em);
 		return ERR_PTR(ret);
 	}
···
 		if (!test_bit(BTRFS_ORDERED_PREALLOC, &ordered->flags) &&
 		    !test_bit(BTRFS_ORDERED_NOCOW, &ordered->flags))
 			btrfs_free_reserved_extent(root, ordered->start,
-						   ordered->disk_len);
+						   ordered->disk_len, 1);
 		btrfs_put_ordered_extent(ordered);
 		btrfs_put_ordered_extent(ordered);
 	}
···
 		cur_bytes = min(num_bytes, 256ULL * 1024 * 1024);
 		cur_bytes = max(cur_bytes, min_size);
 		ret = btrfs_reserve_extent(root, cur_bytes, min_size, 0,
-					   *alloc_hint, &ins, 1);
+					   *alloc_hint, &ins, 1, 0);
 		if (ret) {
 			if (own_trans)
 				btrfs_end_transaction(trans, root);
···
 					  BTRFS_FILE_EXTENT_PREALLOC);
 		if (ret) {
 			btrfs_free_reserved_extent(root, ins.objectid,
-						   ins.offset);
+						   ins.offset, 0);
 			btrfs_abort_transaction(trans, root, ret);
 			if (own_trans)
 				btrfs_end_transaction(trans, root);
fs/btrfs/locking.c | +46 -34
···
  */
 void btrfs_set_lock_blocking_rw(struct extent_buffer *eb, int rw)
 {
-	if (eb->lock_nested) {
-		read_lock(&eb->lock);
-		if (eb->lock_nested && current->pid == eb->lock_owner) {
-			read_unlock(&eb->lock);
-			return;
-		}
-		read_unlock(&eb->lock);
-	}
+	/*
+	 * no lock is required.  The lock owner may change if
+	 * we have a read lock, but it won't change to or away
+	 * from us.  If we have the write lock, we are the owner
+	 * and it'll never change.
+	 */
+	if (eb->lock_nested && current->pid == eb->lock_owner)
+		return;
 	if (rw == BTRFS_WRITE_LOCK) {
 		if (atomic_read(&eb->blocking_writers) == 0) {
 			WARN_ON(atomic_read(&eb->spinning_writers) != 1);
···
  */
 void btrfs_clear_lock_blocking_rw(struct extent_buffer *eb, int rw)
 {
-	if (eb->lock_nested) {
-		read_lock(&eb->lock);
-		if (eb->lock_nested && current->pid == eb->lock_owner) {
-			read_unlock(&eb->lock);
-			return;
-		}
-		read_unlock(&eb->lock);
-	}
+	/*
+	 * no lock is required.  The lock owner may change if
+	 * we have a read lock, but it won't change to or away
+	 * from us.  If we have the write lock, we are the owner
+	 * and it'll never change.
+	 */
+	if (eb->lock_nested && current->pid == eb->lock_owner)
+		return;
+
 	if (rw == BTRFS_WRITE_LOCK_BLOCKING) {
 		BUG_ON(atomic_read(&eb->blocking_writers) != 1);
 		write_lock(&eb->lock);
···
 void btrfs_tree_read_lock(struct extent_buffer *eb)
 {
again:
+	BUG_ON(!atomic_read(&eb->blocking_writers) &&
+	       current->pid == eb->lock_owner);
+
 	read_lock(&eb->lock);
 	if (atomic_read(&eb->blocking_writers) &&
 	    current->pid == eb->lock_owner) {
···
 	if (atomic_read(&eb->blocking_writers))
 		return 0;

-	read_lock(&eb->lock);
+	if (!read_trylock(&eb->lock))
+		return 0;
+
 	if (atomic_read(&eb->blocking_writers)) {
 		read_unlock(&eb->lock);
 		return 0;
···
 	if (atomic_read(&eb->blocking_writers) ||
 	    atomic_read(&eb->blocking_readers))
 		return 0;
-	write_lock(&eb->lock);
+
+	if (!write_trylock(&eb->lock))
+		return 0;
+
 	if (atomic_read(&eb->blocking_writers) ||
 	    atomic_read(&eb->blocking_readers)) {
 		write_unlock(&eb->lock);
···
  */
 void btrfs_tree_read_unlock(struct extent_buffer *eb)
 {
-	if (eb->lock_nested) {
-		read_lock(&eb->lock);
-		if (eb->lock_nested && current->pid == eb->lock_owner) {
-			eb->lock_nested = 0;
-			read_unlock(&eb->lock);
-			return;
-		}
-		read_unlock(&eb->lock);
+	/*
+	 * if we're nested, we have the write lock.  No new locking
+	 * is needed as long as we are the lock owner.
+	 * The write unlock will do a barrier for us, and the lock_nested
+	 * field only matters to the lock owner.
+	 */
+	if (eb->lock_nested && current->pid == eb->lock_owner) {
+		eb->lock_nested = 0;
+		return;
 	}
 	btrfs_assert_tree_read_locked(eb);
 	WARN_ON(atomic_read(&eb->spinning_readers) == 0);
···
  */
 void btrfs_tree_read_unlock_blocking(struct extent_buffer *eb)
 {
-	if (eb->lock_nested) {
-		read_lock(&eb->lock);
-		if (eb->lock_nested && current->pid == eb->lock_owner) {
-			eb->lock_nested = 0;
-			read_unlock(&eb->lock);
-			return;
-		}
-		read_unlock(&eb->lock);
+	/*
+	 * if we're nested, we have the write lock.  No new locking
+	 * is needed as long as we are the lock owner.
+	 * The write unlock will do a barrier for us, and the lock_nested
+	 * field only matters to the lock owner.
+	 */
+	if (eb->lock_nested && current->pid == eb->lock_owner) {
+		eb->lock_nested = 0;
+		return;
 	}
 	btrfs_assert_tree_read_locked(eb);
 	WARN_ON(atomic_read(&eb->blocking_readers) == 0);
···
 	BUG_ON(blockers > 1);

 	btrfs_assert_tree_locked(eb);
+	eb->lock_owner = 0;
 	atomic_dec(&eb->write_locks);

 	if (blockers) {
fs/btrfs/scrub.c | +9 -10
···
 		dev_extent = btrfs_item_ptr(l, slot, struct btrfs_dev_extent);
 		length = btrfs_dev_extent_length(l, dev_extent);

-		if (found_key.offset + length <= start) {
-			key.offset = found_key.offset + length;
-			btrfs_release_path(path);
-			continue;
-		}
+		if (found_key.offset + length <= start)
+			goto skip;

 		chunk_tree = btrfs_dev_extent_chunk_tree(l, dev_extent);
 		chunk_objectid = btrfs_dev_extent_chunk_objectid(l, dev_extent);
···
 		 * the chunk from going away while we scrub it
 		 */
 		cache = btrfs_lookup_block_group(fs_info, chunk_offset);
-		if (!cache) {
-			ret = -ENOENT;
-			break;
-		}
+
+		/* some chunks are removed but not committed to disk yet,
+		 * continue scrubbing */
+		if (!cache)
+			goto skip;
+
 		dev_replace->cursor_right = found_key.offset + length;
 		dev_replace->cursor_left = found_key.offset;
 		dev_replace->item_needs_writeback = 1;
···

 		dev_replace->cursor_left = dev_replace->cursor_right;
 		dev_replace->item_needs_writeback = 1;
-
+skip:
 		key.offset = found_key.offset + length;
 		btrfs_release_path(path);
 	}
+19-17
fs/btrfs/volumes.c
···
 	remove_extent_mapping(em_tree, em);
 	write_unlock(&em_tree->lock);

-	kfree(map);
-	em->bdev = NULL;
-
 	/* once for the tree */
 	free_extent_map(em);
 	/* once for us */
···
 	em = alloc_extent_map();
 	if (!em) {
+		kfree(map);
 		ret = -ENOMEM;
 		goto error;
 	}
+	set_bit(EXTENT_FLAG_FS_MAPPING, &em->flags);
 	em->bdev = (struct block_device *)map;
 	em->start = start;
 	em->len = num_bytes;
···
 	/* One for the tree reference */
 	free_extent_map(em);
 error:
-	kfree(map);
 	kfree(devices_info);
 	return ret;
 }
···
 		write_unlock(&tree->map_tree.lock);
 		if (!em)
 			break;
-		kfree(em->bdev);
 		/* once for us */
 		free_extent_map(em);
 		/* once for the tree */
···
 	return 0;
 }

+static inline void btrfs_end_bbio(struct btrfs_bio *bbio, struct bio *bio, int err)
+{
+	if (likely(bbio->flags & BTRFS_BIO_ORIG_BIO_SUBMITTED))
+		bio_endio_nodec(bio, err);
+	else
+		bio_endio(bio, err);
+	kfree(bbio);
+}
+
 static void btrfs_end_bio(struct bio *bio, int err)
 {
 	struct btrfs_bio *bbio = bio->bi_private;
···
 			bio = bbio->orig_bio;
 		}

-		/*
-		 * We have original bio now. So increment bi_remaining to
-		 * account for it in endio
-		 */
-		atomic_inc(&bio->bi_remaining);
-
 		bio->bi_private = bbio->private;
 		bio->bi_end_io = bbio->end_io;
 		btrfs_io_bio(bio)->mirror_num = bbio->mirror_num;
···
 			set_bit(BIO_UPTODATE, &bio->bi_flags);
 			err = 0;
 		}
-		kfree(bbio);

-		bio_endio(bio, err);
+		btrfs_end_bbio(bbio, bio, err);
 	} else if (!is_orig_bio) {
 		bio_put(bio);
 	}
···
 {
 	atomic_inc(&bbio->error);
 	if (atomic_dec_and_test(&bbio->stripes_pending)) {
+		/* Should be the original bio. */
+		WARN_ON(bio != bbio->orig_bio);
+
 		bio->bi_private = bbio->private;
 		bio->bi_end_io = bbio->end_io;
 		btrfs_io_bio(bio)->mirror_num = bbio->mirror_num;
 		bio->bi_iter.bi_sector = logical >> 9;
-		kfree(bbio);
-		bio_endio(bio, -EIO);
+
+		btrfs_end_bbio(bbio, bio, -EIO);
 	}
 }
···
 			BUG_ON(!bio); /* -ENOMEM */
 		} else {
 			bio = first_bio;
+			bbio->flags |= BTRFS_BIO_ORIG_BIO_SUBMITTED;
 		}

 		submit_stripe_bio(root, bbio, bio,
···
 		return -ENOMEM;
 	}

+	set_bit(EXTENT_FLAG_FS_MAPPING, &em->flags);
 	em->bdev = (struct block_device *)map;
 	em->start = logical;
 	em->len = length;
···
 		map->stripes[i].dev = btrfs_find_device(root->fs_info, devid,
 							uuid, NULL);
 		if (!map->stripes[i].dev && !btrfs_test_opt(root, DEGRADED)) {
-			kfree(map);
 			free_extent_map(em);
 			return -EIO;
 		}
···
 			map->stripes[i].dev =
 				add_missing_dev(root, devid, uuid);
 			if (!map->stripes[i].dev) {
-				kfree(map);
 				free_extent_map(em);
 				return -EIO;
 			}
+3
fs/btrfs/volumes.h
···
 struct btrfs_bio;
 typedef void (btrfs_bio_end_io_t) (struct btrfs_bio *bio, int err);

+#define BTRFS_BIO_ORIG_BIO_SUBMITTED	0x1
+
 struct btrfs_bio {
 	atomic_t stripes_pending;
 	struct btrfs_fs_info *fs_info;
 	bio_end_io_t *end_io;
 	struct bio *orig_bio;
+	unsigned long flags;
 	void *private;
 	atomic_t error;
 	int max_errors;
+2-2
fs/eventpoll.c
···
 void eventpoll_release_file(struct file *file)
 {
 	struct eventpoll *ep;
-	struct epitem *epi;
+	struct epitem *epi, *next;

 	/*
 	 * We don't want to get "file->f_lock" because it is not
···
 	 * Besides, ep_remove() acquires the lock, so we can't hold it here.
 	 */
 	mutex_lock(&epmutex);
-	list_for_each_entry_rcu(epi, &file->f_ep_links, fllink) {
+	list_for_each_entry_safe(epi, next, &file->f_ep_links, fllink) {
 		ep = epi->ep;
 		mutex_lock_nested(&ep->mtx, 0);
 		ep_remove(ep, epi);
···
 #include <linux/ratelimit.h>
 #include <linux/sunrpc/svcauth_gss.h>
 #include <linux/sunrpc/addr.h>
+#include <linux/hash.h>
 #include "xdr4.h"
 #include "xdr4cb.h"
 #include "vfs.h"
···
 	return openlockstateid(nfs4_alloc_stid(clp, stateid_slab));
 }

+/*
+ * When we recall a delegation, we should be careful not to hand it
+ * out again straight away.
+ * To ensure this we keep a pair of bloom filters ('new' and 'old')
+ * in which the filehandles of recalled delegations are "stored".
+ * If a filehandle appears in either filter, a delegation is blocked.
+ * When a delegation is recalled, the filehandle is stored in the "new"
+ * filter.
+ * Every 30 seconds we swap the filters and clear the "new" one,
+ * unless both are empty of course.
+ *
+ * Each filter is 256 bits.  We hash the filehandle to 32bit and use the
+ * low 3 bytes as hash-table indices.
+ *
+ * 'state_lock', which is always held when block_delegations() is called,
+ * is used to manage concurrent access.  Testing does not need the lock
+ * except when swapping the two filters.
+ */
+static struct bloom_pair {
+	int	entries, old_entries;
+	time_t	swap_time;
+	int	new; /* index into 'set' */
+	DECLARE_BITMAP(set[2], 256);
+} blocked_delegations;
+
+static int delegation_blocked(struct knfsd_fh *fh)
+{
+	u32 hash;
+	struct bloom_pair *bd = &blocked_delegations;
+
+	if (bd->entries == 0)
+		return 0;
+	if (seconds_since_boot() - bd->swap_time > 30) {
+		spin_lock(&state_lock);
+		if (seconds_since_boot() - bd->swap_time > 30) {
+			bd->entries -= bd->old_entries;
+			bd->old_entries = bd->entries;
+			memset(bd->set[bd->new], 0,
+			       sizeof(bd->set[0]));
+			bd->new = 1-bd->new;
+			bd->swap_time = seconds_since_boot();
+		}
+		spin_unlock(&state_lock);
+	}
+	hash = arch_fast_hash(&fh->fh_base, fh->fh_size, 0);
+	if (test_bit(hash&255, bd->set[0]) &&
+	    test_bit((hash>>8)&255, bd->set[0]) &&
+	    test_bit((hash>>16)&255, bd->set[0]))
+		return 1;
+
+	if (test_bit(hash&255, bd->set[1]) &&
+	    test_bit((hash>>8)&255, bd->set[1]) &&
+	    test_bit((hash>>16)&255, bd->set[1]))
+		return 1;
+
+	return 0;
+}
+
+static void block_delegations(struct knfsd_fh *fh)
+{
+	u32 hash;
+	struct bloom_pair *bd = &blocked_delegations;
+
+	hash = arch_fast_hash(&fh->fh_base, fh->fh_size, 0);
+
+	__set_bit(hash&255, bd->set[bd->new]);
+	__set_bit((hash>>8)&255, bd->set[bd->new]);
+	__set_bit((hash>>16)&255, bd->set[bd->new]);
+	if (bd->entries == 0)
+		bd->swap_time = seconds_since_boot();
+	bd->entries += 1;
+}
+
 static struct nfs4_delegation *
 alloc_init_deleg(struct nfs4_client *clp, struct nfs4_ol_stateid *stp, struct svc_fh *current_fh)
 {
···

 	dprintk("NFSD alloc_init_deleg\n");
 	if (num_delegations > max_delegations)
+		return NULL;
+	if (delegation_blocked(&current_fh->fh_handle))
 		return NULL;
 	dp = delegstateid(nfs4_alloc_stid(clp, deleg_slab));
 	if (dp == NULL)
···
 	/* Only place dl_time is set; protected by i_lock: */
 	dp->dl_time = get_seconds();
+
+	block_delegations(&dp->dl_fh);

 	nfsd4_cb_recall(dp);
 }
···
 #define _I915_POWERWELL_H_

 /* For use by hda_i915 driver */
-extern void i915_request_power_well(void);
-extern void i915_release_power_well(void);
+extern int i915_request_power_well(void);
+extern int i915_release_power_well(void);

 #endif /* _I915_POWERWELL_H_ */
+1-1
include/linux/blk-mq.h
···
 	unsigned int		nr_ctx;
 	struct blk_mq_ctx	**ctxs;

-	unsigned int		wait_index;
+	atomic_t		wait_index;

 	struct blk_mq_tags	*tags;

···
 extern int elv_register_queue(struct request_queue *q);
 extern void elv_unregister_queue(struct request_queue *q);
 extern int elv_may_queue(struct request_queue *, int);
-extern void elv_abort_queue(struct request_queue *);
 extern void elv_completed_request(struct request_queue *, struct request *);
 extern int elv_set_request(struct request_queue *q, struct request *rq,
 			   struct bio *bio, gfp_t gfp_mask);
+6
include/linux/fs.h
···

 static inline int break_deleg(struct inode *inode, unsigned int mode)
 {
+	/*
+	 * Since this check is lockless, we must ensure that any refcounts
+	 * taken are done before checking inode->i_flock. Otherwise, we could
+	 * end up racing with tasks trying to set a new lease on this file.
+	 */
+	smp_mb();
 	if (inode->i_flock)
 		return __break_lease(inode, mode, FL_DELEG);
 	return 0;
+1
include/linux/profile.h
···
 int profile_init(void);
 int profile_setup(char *str);
 void profile_tick(int type);
+int setup_profiling_timer(unsigned int multiplier);

 /*
  * Add multiple profiler hits to a given address:
+5
include/linux/regulator/consumer.h
···
 {
 }

+static inline int regulator_can_change_voltage(struct regulator *regulator)
+{
+	return 0;
+}
+
 static inline int regulator_set_voltage(struct regulator *regulator,
 					int min_uV, int max_uV)
 {
···
 	int user_ctl_count;		/* count of all user controls */
 	struct list_head controls;	/* all controls for this card */
 	struct list_head ctl_files;	/* active control files */
+	struct mutex user_ctl_lock;	/* protects user controls against
+					   concurrent access */

 	struct snd_info_entry *proc_root;	/* root for soundcard specific files */
 	struct snd_info_entry *proc_id;	/* the card id */
···
 		owner = *p;
 	} while (cmpxchg(p, owner, owner | RT_MUTEX_HAS_WAITERS) != owner);
 }
+
+/*
+ * Safe fastpath aware unlock:
+ * 1) Clear the waiters bit
+ * 2) Drop lock->wait_lock
+ * 3) Try to unlock the lock with cmpxchg
+ */
+static inline bool unlock_rt_mutex_safe(struct rt_mutex *lock)
+	__releases(lock->wait_lock)
+{
+	struct task_struct *owner = rt_mutex_owner(lock);
+
+	clear_rt_mutex_waiters(lock);
+	raw_spin_unlock(&lock->wait_lock);
+	/*
+	 * If a new waiter comes in between the unlock and the cmpxchg
+	 * we have two situations:
+	 *
+	 * unlock(wait_lock);
+	 *					lock(wait_lock);
+	 * cmpxchg(p, owner, 0) == owner
+	 *					mark_rt_mutex_waiters(lock);
+	 *					acquire(lock);
+	 * or:
+	 *
+	 * unlock(wait_lock);
+	 *					lock(wait_lock);
+	 *					mark_rt_mutex_waiters(lock);
+	 *
+	 * cmpxchg(p, owner, 0) != owner
+	 *					enqueue_waiter();
+	 *					unlock(wait_lock);
+	 * lock(wait_lock);
+	 * wake waiter();
+	 * unlock(wait_lock);
+	 *					lock(wait_lock);
+	 *					acquire(lock);
+	 */
+	return rt_mutex_cmpxchg(lock, owner, NULL);
+}
+
 #else
 # define rt_mutex_cmpxchg(l,c,n)	(0)
 static inline void mark_rt_mutex_waiters(struct rt_mutex *lock)
 {
 	lock->owner = (struct task_struct *)
 			((unsigned long)lock->owner | RT_MUTEX_HAS_WAITERS);
+}
+
+/*
+ * Simple slow path only version: lock->owner is protected by lock->wait_lock.
+ */
+static inline bool unlock_rt_mutex_safe(struct rt_mutex *lock)
+	__releases(lock->wait_lock)
+{
+	lock->owner = NULL;
+	raw_spin_unlock(&lock->wait_lock);
+	return true;
 }
 #endif
···
  */
 int max_lock_depth = 1024;

+static inline struct rt_mutex *task_blocked_on_lock(struct task_struct *p)
+{
+	return p->pi_blocked_on ? p->pi_blocked_on->lock : NULL;
+}
+
 /*
  * Adjust the priority chain. Also used for deadlock detection.
  * Decreases task's usage by one - may thus free the task.
  *
- * @task: the task owning the mutex (owner) for which a chain walk is probably
- *	  needed
+ * @task:	the task owning the mutex (owner) for which a chain walk is
+ *		probably needed
  * @deadlock_detect: do we have to carry out deadlock detection?
- * @orig_lock: the mutex (can be NULL if we are walking the chain to recheck
- *	       things for a task that has just got its priority adjusted, and
- *	       is waiting on a mutex)
+ * @orig_lock:	the mutex (can be NULL if we are walking the chain to recheck
+ *		things for a task that has just got its priority adjusted, and
+ *		is waiting on a mutex)
+ * @next_lock:	the mutex on which the owner of @orig_lock was blocked before
+ *		we dropped its pi_lock. Is never dereferenced, only used for
+ *		comparison to detect lock chain changes.
  * @orig_waiter: rt_mutex_waiter struct for the task that has just donated
- *		 its priority to the mutex owner (can be NULL in the case
- *		 depicted above or if the top waiter is gone away and we are
- *		 actually deboosting the owner)
- * @top_task: the current top waiter
+ *		its priority to the mutex owner (can be NULL in the case
+ *		depicted above or if the top waiter is gone away and we are
+ *		actually deboosting the owner)
+ * @top_task:	the current top waiter
  *
  * Returns 0 or -EDEADLK.
  */
 static int rt_mutex_adjust_prio_chain(struct task_struct *task,
 				      int deadlock_detect,
 				      struct rt_mutex *orig_lock,
+				      struct rt_mutex *next_lock,
 				      struct rt_mutex_waiter *orig_waiter,
 				      struct task_struct *top_task)
 {
···
 		}
 		put_task_struct(task);

-		return deadlock_detect ? -EDEADLK : 0;
+		return -EDEADLK;
 	}
  retry:
 	/*
···
 	 * the previous owner of the lock might have released the lock.
 	 */
 	if (orig_waiter && !rt_mutex_owner(orig_lock))
+		goto out_unlock_pi;
+
+	/*
+	 * We dropped all locks after taking a refcount on @task, so
+	 * the task might have moved on in the lock chain or even left
+	 * the chain completely and blocks now on an unrelated lock or
+	 * on @orig_lock.
+	 *
+	 * We stored the lock on which @task was blocked in @next_lock,
+	 * so we can detect the chain change.
+	 */
+	if (next_lock != waiter->lock)
 		goto out_unlock_pi;

 	/*
···
 	if (lock == orig_lock || rt_mutex_owner(lock) == top_task) {
 		debug_rt_mutex_deadlock(deadlock_detect, orig_waiter, lock);
 		raw_spin_unlock(&lock->wait_lock);
-		ret = deadlock_detect ? -EDEADLK : 0;
+		ret = -EDEADLK;
 		goto out_unlock_pi;
 	}
···
 		__rt_mutex_adjust_prio(task);
 	}

+	/*
+	 * Check whether the task which owns the current lock is pi
+	 * blocked itself. If yes we store a pointer to the lock for
+	 * the lock chain change detection above. After we dropped
+	 * task->pi_lock next_lock cannot be dereferenced anymore.
+	 */
+	next_lock = task_blocked_on_lock(task);
+
 	raw_spin_unlock_irqrestore(&task->pi_lock, flags);

 	top_waiter = rt_mutex_top_waiter(lock);
 	raw_spin_unlock(&lock->wait_lock);
+
+	/*
+	 * We reached the end of the lock chain. Stop right here. No
+	 * point to go back just to figure that out.
+	 */
+	if (!next_lock)
+		goto out_put_task;

 	if (!detect_deadlock && waiter != top_waiter)
 		goto out_put_task;
···
 {
 	struct task_struct *owner = rt_mutex_owner(lock);
 	struct rt_mutex_waiter *top_waiter = waiter;
-	unsigned long flags;
+	struct rt_mutex *next_lock;
 	int chain_walk = 0, res;
+	unsigned long flags;

 	/*
 	 * Early deadlock detection. We really don't want the task to
···
 	 * which is wrong, as the other waiter is not in a deadlock
 	 * situation.
 	 */
-	if (detect_deadlock && owner == task)
+	if (owner == task)
 		return -EDEADLK;

 	raw_spin_lock_irqsave(&task->pi_lock, flags);
···
 	if (!owner)
 		return 0;

+	raw_spin_lock_irqsave(&owner->pi_lock, flags);
 	if (waiter == rt_mutex_top_waiter(lock)) {
-		raw_spin_lock_irqsave(&owner->pi_lock, flags);
 		rt_mutex_dequeue_pi(owner, top_waiter);
 		rt_mutex_enqueue_pi(owner, waiter);

 		__rt_mutex_adjust_prio(owner);
 		if (owner->pi_blocked_on)
 			chain_walk = 1;
-		raw_spin_unlock_irqrestore(&owner->pi_lock, flags);
-	}
-	else if (debug_rt_mutex_detect_deadlock(waiter, detect_deadlock))
+	} else if (debug_rt_mutex_detect_deadlock(waiter, detect_deadlock)) {
 		chain_walk = 1;
+	}

-	if (!chain_walk)
+	/* Store the lock on which owner is blocked or NULL */
+	next_lock = task_blocked_on_lock(owner);
+
+	raw_spin_unlock_irqrestore(&owner->pi_lock, flags);
+	/*
+	 * Even if full deadlock detection is on, if the owner is not
+	 * blocked itself, we can avoid finding this out in the chain
+	 * walk.
+	 */
+	if (!chain_walk || !next_lock)
 		return 0;

 	/*
···

 	raw_spin_unlock(&lock->wait_lock);

-	res = rt_mutex_adjust_prio_chain(owner, detect_deadlock, lock, waiter,
-					 task);
+	res = rt_mutex_adjust_prio_chain(owner, detect_deadlock, lock,
+					 next_lock, waiter, task);

 	raw_spin_lock(&lock->wait_lock);
···
 /*
  * Wake up the next waiter on the lock.
  *
- * Remove the top waiter from the current tasks waiter list and wake it up.
+ * Remove the top waiter from the current tasks pi waiter list and
+ * wake it up.
  *
  * Called with lock->wait_lock held.
  */
···
 	 */
 	rt_mutex_dequeue_pi(current, waiter);

-	rt_mutex_set_owner(lock, NULL);
+	/*
+	 * As we are waking up the top waiter, and the waiter stays
+	 * queued on the lock until it gets the lock, this lock
+	 * obviously has waiters. Just set the bit here and this has
+	 * the added benefit of forcing all new tasks into the
+	 * slow path making sure no task of lower priority than
+	 * the top waiter can steal this lock.
+	 */
+	lock->owner = (void *) RT_MUTEX_HAS_WAITERS;

 	raw_spin_unlock_irqrestore(&current->pi_lock, flags);

+	/*
+	 * It's safe to dereference waiter as it cannot go away as
+	 * long as we hold lock->wait_lock. The waiter task needs to
+	 * acquire it in order to dequeue the waiter.
+	 */
 	wake_up_process(waiter->task);
 }
···
 {
 	int first = (waiter == rt_mutex_top_waiter(lock));
 	struct task_struct *owner = rt_mutex_owner(lock);
+	struct rt_mutex *next_lock = NULL;
 	unsigned long flags;
-	int chain_walk = 0;

 	raw_spin_lock_irqsave(&current->pi_lock, flags);
 	rt_mutex_dequeue(lock, waiter);
···
 		}
 		__rt_mutex_adjust_prio(owner);

-		if (owner->pi_blocked_on)
-			chain_walk = 1;
+		/* Store the lock on which owner is blocked or NULL */
+		next_lock = task_blocked_on_lock(owner);

 		raw_spin_unlock_irqrestore(&owner->pi_lock, flags);
 	}

-	if (!chain_walk)
+	if (!next_lock)
 		return;

 	/* gets dropped in rt_mutex_adjust_prio_chain()! */
···

 	raw_spin_unlock(&lock->wait_lock);

-	rt_mutex_adjust_prio_chain(owner, 0, lock, NULL, current);
+	rt_mutex_adjust_prio_chain(owner, 0, lock, next_lock, NULL, current);

 	raw_spin_lock(&lock->wait_lock);
 }
···
 void rt_mutex_adjust_pi(struct task_struct *task)
 {
 	struct rt_mutex_waiter *waiter;
+	struct rt_mutex *next_lock;
 	unsigned long flags;

 	raw_spin_lock_irqsave(&task->pi_lock, flags);
···
 		raw_spin_unlock_irqrestore(&task->pi_lock, flags);
 		return;
 	}
-
+	next_lock = waiter->lock;
 	raw_spin_unlock_irqrestore(&task->pi_lock, flags);

 	/* gets dropped in rt_mutex_adjust_prio_chain()! */
 	get_task_struct(task);
-	rt_mutex_adjust_prio_chain(task, 0, NULL, NULL, task);
+
+	rt_mutex_adjust_prio_chain(task, 0, NULL, next_lock, NULL, task);
 }

 /**
···
 	return ret;
 }

+static void rt_mutex_handle_deadlock(int res, int detect_deadlock,
+				     struct rt_mutex_waiter *w)
+{
+	/*
+	 * If the result is not -EDEADLOCK or the caller requested
+	 * deadlock detection, nothing to do here.
+	 */
+	if (res != -EDEADLOCK || detect_deadlock)
+		return;
+
+	/*
+	 * Yell loudly and stop the task right here.
+	 */
+	rt_mutex_print_deadlock(w);
+	while (1) {
+		set_current_state(TASK_INTERRUPTIBLE);
+		schedule();
+	}
+}
+
 /*
  * Slow path lock function:
  */
···

 	set_current_state(TASK_RUNNING);

-	if (unlikely(ret))
+	if (unlikely(ret)) {
 		remove_waiter(lock, &waiter);
+		rt_mutex_handle_deadlock(ret, detect_deadlock, &waiter);
+	}

 	/*
 	 * try_to_take_rt_mutex() sets the waiter bit
···

 	rt_mutex_deadlock_account_unlock(current);

-	if (!rt_mutex_has_waiters(lock)) {
-		lock->owner = NULL;
-		raw_spin_unlock(&lock->wait_lock);
-		return;
+	/*
+	 * We must be careful here if the fast path is enabled. If we
+	 * have no waiters queued we cannot set owner to NULL here
+	 * because of:
+	 *
+	 * foo->lock->owner = NULL;
+	 *			rtmutex_lock(foo->lock); <- fast path
+	 *			free = atomic_dec_and_test(foo->refcnt);
+	 *			rtmutex_unlock(foo->lock); <- fast path
+	 *			if (free)
+	 *				kfree(foo);
+	 * raw_spin_unlock(foo->lock->wait_lock);
+	 *
+	 * So for the fastpath enabled kernel:
+	 *
+	 * Nothing can set the waiters bit as long as we hold
+	 * lock->wait_lock. So we do the following sequence:
+	 *
+	 *	owner = rt_mutex_owner(lock);
+	 *	clear_rt_mutex_waiters(lock);
+	 *	raw_spin_unlock(&lock->wait_lock);
+	 *	if (cmpxchg(&lock->owner, owner, 0) == owner)
+	 *		return;
+	 *	goto retry;
+	 *
+	 * The fastpath disabled variant is simple as all access to
+	 * lock->owner is serialized by lock->wait_lock:
+	 *
+	 *	lock->owner = NULL;
+	 *	raw_spin_unlock(&lock->wait_lock);
+	 */
+	while (!rt_mutex_has_waiters(lock)) {
+		/* Drops lock->wait_lock ! */
+		if (unlock_rt_mutex_safe(lock) == true)
+			return;
+		/* Relock the rtmutex and try again */
+		raw_spin_lock(&lock->wait_lock);
 	}

+	/*
+	 * The wakeup next waiter path does not suffer from the above
+	 * race. See the comments there.
+	 */
 	wakeup_next_waiter(lock);

 	raw_spin_unlock(&lock->wait_lock);
···
 		return 1;
 	}

-	ret = task_blocks_on_rt_mutex(lock, waiter, task, detect_deadlock);
+	/* We enforce deadlock detection for futexes */
+	ret = task_blocks_on_rt_mutex(lock, waiter, task, 1);

 	if (ret && !rt_mutex_owner(lock)) {
 		/*
+5
kernel/locking/rtmutex.h
···
 #define debug_rt_mutex_print_deadlock(w)		do { } while (0)
 #define debug_rt_mutex_detect_deadlock(w,d)		(d)
 #define debug_rt_mutex_reset_waiter(w)			do { } while (0)
+
+static inline void rt_mutex_print_deadlock(struct rt_mutex_waiter *w)
+{
+	WARN(1, "rtmutex deadlock detected\n");
+}
+36-1
kernel/power/hibernate.c
···

 static int nocompress;
 static int noresume;
+static int nohibernate;
 static int resume_wait;
 static unsigned int resume_delay;
 static char resume_file[256] = CONFIG_PM_STD_PARTITION;
···
 bool freezer_test_done;

 static const struct platform_hibernation_ops *hibernation_ops;
+
+bool hibernation_available(void)
+{
+	return (nohibernate == 0);
+}

 /**
  * hibernation_set_ops - Set the global hibernate operations.
···
 {
 	int error;

+	if (!hibernation_available()) {
+		pr_debug("PM: Hibernation not available.\n");
+		return -EPERM;
+	}
+
 	lock_system_sleep();
 	/* The snapshot device should not be opened while we're running */
 	if (!atomic_add_unless(&snapshot_device_available, -1, 0)) {
···
 	/*
 	 * If the user said "noresume".. bail out early.
 	 */
-	if (noresume)
+	if (noresume || !hibernation_available())
 		return 0;

 	/*
···
 	int i;
 	char *start = buf;

+	if (!hibernation_available())
+		return sprintf(buf, "[disabled]\n");
+
 	for (i = HIBERNATION_FIRST; i <= HIBERNATION_MAX; i++) {
 		if (!hibernation_modes[i])
 			continue;
···
 	int len;
 	char *p;
 	int mode = HIBERNATION_INVALID;
+
+	if (!hibernation_available())
+		return -EPERM;

 	p = memchr(buf, '\n', n);
 	len = p ? p - buf : n;
···
 		noresume = 1;
 	else if (!strncmp(str, "nocompress", 10))
 		nocompress = 1;
+	else if (!strncmp(str, "no", 2)) {
+		noresume = 1;
+		nohibernate = 1;
+	}
 	return 1;
 }
···
 	return 1;
 }

+static int __init nohibernate_setup(char *str)
+{
+	noresume = 1;
+	nohibernate = 1;
+	return 1;
+}
+
+static int __init kaslr_nohibernate_setup(char *str)
+{
+	return nohibernate_setup(str);
+}
+
 __setup("noresume", noresume_setup);
 __setup("resume_offset=", resume_offset_setup);
 __setup("resume=", resume_setup);
 __setup("hibernate=", hibernate_setup);
 __setup("resumewait", resumewait_setup);
 __setup("resumedelay=", resumedelay_setup);
+__setup("nohibernate", nohibernate_setup);
+__setup("kaslr", kaslr_nohibernate_setup);
+2-4
kernel/power/main.c
···
 			s += sprintf(s,"%s ", pm_states[i].label);

 #endif
-#ifdef CONFIG_HIBERNATION
-	s += sprintf(s, "%s\n", "disk");
-#else
+	if (hibernation_available())
+		s += sprintf(s, "disk ");
 	if (s != buf)
 		/* convert the last space to a newline */
 		*(s-1) = '\n';
-#endif
 	return (s - buf);
 }

+3
kernel/power/user.c
···
 	struct snapshot_data *data;
 	int error;

+	if (!hibernation_available())
+		return -EPERM;
+
 	lock_system_sleep();

 	if (!atomic_add_unless(&snapshot_device_available, -1, 0)) {
-4
kernel/sysctl.c
···
 #ifdef CONFIG_SPARC
 #endif

-#ifdef CONFIG_SPARC64
-extern int sysctl_tsb_ratio;
-#endif
-
 #ifdef __hppa__
 extern int pwrsw_enabled;
 #endif
+8-6
scripts/package/builddeb
···

 fi

-# Build header package
-(cd $srctree; find . -name Makefile\* -o -name Kconfig\* -o -name \*.pl > "$objtree/debian/hdrsrcfiles")
-(cd $srctree; find arch/$SRCARCH/include include scripts -type f >> "$objtree/debian/hdrsrcfiles")
-(cd $objtree; find arch/$SRCARCH/include Module.symvers include scripts -type f >> "$objtree/debian/hdrobjfiles")
+# Build kernel header package
+(cd $srctree; find . -name Makefile\* -o -name Kconfig\* -o -name \*.pl) > "$objtree/debian/hdrsrcfiles"
+(cd $srctree; find arch/$SRCARCH/include include scripts -type f) >> "$objtree/debian/hdrsrcfiles"
+(cd $srctree; find arch/$SRCARCH -name module.lds -o -name Kbuild.platforms -o -name Platform) >> "$objtree/debian/hdrsrcfiles"
+(cd $srctree; find $(find arch/$SRCARCH -name include -o -name scripts -type d) -type f) >> "$objtree/debian/hdrsrcfiles"
+(cd $objtree; find arch/$SRCARCH/include Module.symvers include scripts -type f) >> "$objtree/debian/hdrobjfiles"
 destdir=$kernel_headers_dir/usr/src/linux-headers-$version
 mkdir -p "$destdir"
-(cd $srctree; tar -c -f - -T "$objtree/debian/hdrsrcfiles") | (cd $destdir; tar -xf -)
-(cd $objtree; tar -c -f - -T "$objtree/debian/hdrobjfiles") | (cd $destdir; tar -xf -)
+(cd $srctree; tar -c -f - -T -) < "$objtree/debian/hdrsrcfiles" | (cd $destdir; tar -xf -)
+(cd $objtree; tar -c -f - -T -) < "$objtree/debian/hdrobjfiles" | (cd $destdir; tar -xf -)
 (cd $objtree; cp $KCONFIG_CONFIG $destdir/.config) # copy .config manually to be where it's expected to be
 ln -sf "/usr/src/linux-headers-$version" "$kernel_headers_dir/lib/modules/$version/build"
 rm -f "$objtree/debian/hdrsrcfiles" "$objtree/debian/hdrobjfiles"
+1-2
scripts/package/buildtar
···
 # Create the tarball
 #
 (
-	cd "${tmpdir}"
 	opts=
 	if tar --owner=root --group=root --help >/dev/null 2>&1; then
 		opts="--owner=root --group=root"
 	fi
-	tar cf - boot/* lib/* $opts | ${compress} > "${tarball}${file_ext}"
+	tar cf - -C "$tmpdir" boot/ lib/ $opts | ${compress} > "${tarball}${file_ext}"
 )

 echo "Tarball successfully created in ${tarball}${file_ext}"
+51-27
sound/core/control.c
···
 {
 	struct snd_kcontrol *kctl;

+	/* Make sure that the ids assigned to the control do not wrap around */
+	if (card->last_numid >= UINT_MAX - count)
+		card->last_numid = 0;
+
 	list_for_each_entry(kctl, &card->controls, list) {
 		if (kctl->id.numid < card->last_numid + 1 + count &&
 		    kctl->id.numid + kctl->count > card->last_numid + 1) {
···
 {
 	struct snd_ctl_elem_id id;
 	unsigned int idx;
+	unsigned int count;
 	int err = -EINVAL;

 	if (! kcontrol)
···
 	if (snd_BUG_ON(!card || !kcontrol->info))
 		goto error;
 	id = kcontrol->id;
+	if (id.index > UINT_MAX - kcontrol->count)
+		goto error;
+
 	down_write(&card->controls_rwsem);
 	if (snd_ctl_find_id(card, &id)) {
 		up_write(&card->controls_rwsem);
···
 	card->controls_count += kcontrol->count;
 	kcontrol->id.numid = card->last_numid + 1;
 	card->last_numid += kcontrol->count;
+	count = kcontrol->count;
 	up_write(&card->controls_rwsem);
-	for (idx = 0; idx < kcontrol->count; idx++, id.index++, id.numid++)
+	for (idx = 0; idx < count; idx++, id.index++, id.numid++)
 		snd_ctl_notify(card, SNDRV_CTL_EVENT_MASK_ADD, &id);
 	return 0;
···
 		   bool add_on_replace)
 {
 	struct snd_ctl_elem_id id;
+	unsigned int count;
 	unsigned int idx;
 	struct snd_kcontrol *old;
 	int ret;
···
 	card->controls_count += kcontrol->count;
 	kcontrol->id.numid = card->last_numid + 1;
 	card->last_numid += kcontrol->count;
+	count = kcontrol->count;
 	up_write(&card->controls_rwsem);
-	for (idx = 0; idx < kcontrol->count; idx++, id.index++, id.numid++)
+	for (idx = 0; idx < count; idx++, id.index++, id.numid++)
 		snd_ctl_notify(card, SNDRV_CTL_EVENT_MASK_ADD, &id);
 	return 0;
···
 			result = kctl->put(kctl, control);
 		}
 		if (result > 0) {
+			struct snd_ctl_elem_id id = control->id;
 			up_read(&card->controls_rwsem);
-			snd_ctl_notify(card, SNDRV_CTL_EVENT_MASK_VALUE,
-				       &control->id);
+			snd_ctl_notify(card, SNDRV_CTL_EVENT_MASK_VALUE, &id);
 			return 0;
 		}
 	}
···
 struct user_element {
 	struct snd_ctl_elem_info info;
+	struct snd_card *card;
 	void *elem_data;		/* element data */
 	unsigned long elem_data_size;	/* size of element data in bytes */
 	void *tlv_data;			/* TLV data */
···
 {
 	struct user_element *ue = kcontrol->private_data;

+	mutex_lock(&ue->card->user_ctl_lock);
 	memcpy(&ucontrol->value, ue->elem_data, ue->elem_data_size);
+	mutex_unlock(&ue->card->user_ctl_lock);
 	return 0;
 }
···
 {
 	int change;
 	struct user_element *ue = kcontrol->private_data;
-
+
+	mutex_lock(&ue->card->user_ctl_lock);
 	change = memcmp(&ucontrol->value, ue->elem_data, ue->elem_data_size) != 0;
 	if (change)
 		memcpy(ue->elem_data, &ucontrol->value, ue->elem_data_size);
+	mutex_unlock(&ue->card->user_ctl_lock);
 	return change;
 }
···
 		new_data = memdup_user(tlv, size);
 		if (IS_ERR(new_data))
 			return PTR_ERR(new_data);
+		mutex_lock(&ue->card->user_ctl_lock);
 		change = ue->tlv_data_size != size;
 		if (!change)
 			change = memcmp(ue->tlv_data, new_data, size);
 		kfree(ue->tlv_data);
 		ue->tlv_data = new_data;
 		ue->tlv_data_size = size;
+		mutex_unlock(&ue->card->user_ctl_lock);
 	} else {
-		if (! ue->tlv_data_size || ! ue->tlv_data)
-			return -ENXIO;
-		if (size < ue->tlv_data_size)
-			return -ENOSPC;
+		int ret = 0;
+
+		mutex_lock(&ue->card->user_ctl_lock);
+		if (!ue->tlv_data_size || !ue->tlv_data) {
+			ret = -ENXIO;
+			goto err_unlock;
+		}
+		if (size < ue->tlv_data_size) {
+			ret = -ENOSPC;
+			goto err_unlock;
+		}
 		if (copy_to_user(tlv, ue->tlv_data, ue->tlv_data_size))
-			return -EFAULT;
+			ret = -EFAULT;
+err_unlock:
+		mutex_unlock(&ue->card->user_ctl_lock);
+		if (ret)
+			return ret;
 	}
 	return change;
 }
···
 	struct user_element *ue;
 	int idx, err;

-	if (!replace && card->user_ctl_count >= MAX_USER_CONTROLS)
-		return -ENOMEM;
 	if (info->count < 1)
 		return -EINVAL;
 	access = info->access == 0 ? SNDRV_CTL_ELEM_ACCESS_READWRITE :
···
 				 SNDRV_CTL_ELEM_ACCESS_TLV_READWRITE));
 	info->id.numid = 0;
 	memset(&kctl, 0, sizeof(kctl));
-	down_write(&card->controls_rwsem);
-	_kctl = snd_ctl_find_id(card, &info->id);
-	err = 0;
-	if (_kctl) {
-		if (replace)
-			err = snd_ctl_remove(card, _kctl);
-		else
-			err = -EBUSY;
-	} else {
-		if (replace)
-			err = -ENOENT;
+
+	if (replace) {
+		err = snd_ctl_remove_user_ctl(file, &info->id);
+		if (err)
+			return err;
 	}
-	up_write(&card->controls_rwsem);
-	if (err < 0)
-		return err;
+
+	if (card->user_ctl_count >= MAX_USER_CONTROLS)
+		return -ENOMEM;
+
 	memcpy(&kctl.id, &info->id, sizeof(info->id));
 	kctl.count = info->owner ? info->owner : 1;
 	access |= SNDRV_CTL_ELEM_ACCESS_USER;
···
 	ue = kzalloc(sizeof(struct user_element) + private_size, GFP_KERNEL);
 	if (ue == NULL)
 		return -ENOMEM;
+	ue->card = card;
 	ue->info = *info;
 	ue->info.access = 0;
 	ue->elem_data = (char *)ue + sizeof(*ue);
···
 		}
 		err = kctl->tlv.c(kctl, op_flag, tlv.length, _tlv->tlv);
 		if (err > 0) {
+			struct snd_ctl_elem_id id = kctl->id;
 			up_read(&card->controls_rwsem);
-			snd_ctl_notify(card, SNDRV_CTL_EVENT_MASK_TLV, &kctl->id);
+			snd_ctl_notify(card, SNDRV_CTL_EVENT_MASK_TLV, &id);
 			return 0;
 		}
 	} else {
···27552755 unsigned int mask = (1 << fls(max)) - 1;27562756 unsigned int invert = mc->invert;27572757 unsigned int val;27582758- int connect, change;27582758+ int connect, change, reg_change = 0;27592759 struct snd_soc_dapm_update update;27602760 int ret = 0;27612761···27732773 mutex_lock_nested(&card->dapm_mutex, SND_SOC_DAPM_CLASS_RUNTIME);2774277427752775 change = dapm_kcontrol_set_value(kcontrol, val);27762776- if (change) {27772777- if (reg != SND_SOC_NOPM) {27782778- mask = mask << shift;27792779- val = val << shift;2780277627812781- if (snd_soc_test_bits(codec, reg, mask, val)) {27822782- update.kcontrol = kcontrol;27832783- update.reg = reg;27842784- update.mask = mask;27852785- update.val = val;27862786- card->update = &update;27872787- }27772777+ if (reg != SND_SOC_NOPM) {27782778+ mask = mask << shift;27792779+ val = val << shift;2788278027812781+ reg_change = snd_soc_test_bits(codec, reg, mask, val);27822782+ }27832783+27842784+ if (change || reg_change) {27852785+ if (reg_change) {27862786+ update.kcontrol = kcontrol;27872787+ update.reg = reg;27882788+ update.mask = mask;27892789+ update.val = val;27902790+ card->update = &update;27892791 }27922792+ change |= reg_change;2790279327912794 ret = soc_dapm_mixer_update_power(card, kcontrol, connect);27922795
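The DAPM hunk above decouples the kcontrol-value change from the register change: `snd_soc_test_bits()` is now evaluated unconditionally, so a register-only delta still triggers the power update. A minimal sketch of that test-and-update-bits idea, using a hypothetical in-memory register file rather than the real codec API:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical 8-entry register file standing in for a codec's register map. */
static unsigned int regs[8];

/*
 * Minimal sketch of the test-and-set pattern the fix relies on: report
 * whether the masked bits actually change, and apply the update. This
 * mirrors the spirit of snd_soc_test_bits(), not its exact kernel API.
 */
static bool test_bits(unsigned int reg, unsigned int mask, unsigned int val)
{
	unsigned int old = regs[reg];
	unsigned int updated = (old & ~mask) | (val & mask);

	regs[reg] = updated;
	return old != updated;
}
```

The return value plays the role of `reg_change` in the patch: it can be true even when the cached kcontrol value did not move, which is exactly the case the original code missed.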
+113
tools/lib/traceevent/event-parse.c
···765765 case PRINT_BSTRING:766766 free(arg->string.string);767767 break;768768+ case PRINT_BITMASK:769769+ free(arg->bitmask.bitmask);770770+ break;768771 case PRINT_DYNAMIC_ARRAY:769772 free(arg->dynarray.index);770773 break;···22712268 case PRINT_FIELD ... PRINT_SYMBOL:22722269 case PRINT_STRING:22732270 case PRINT_BSTRING:22712271+ case PRINT_BITMASK:22742272 default:22752273 do_warning("invalid eval type %d", arg->type);22762274 ret = 0;···23002296 case PRINT_FIELD ... PRINT_SYMBOL:23012297 case PRINT_STRING:23022298 case PRINT_BSTRING:22992299+ case PRINT_BITMASK:23032300 default:23042301 do_warning("invalid eval type %d", arg->type);23052302 break;···26882683 return EVENT_ERROR;26892684}2690268526862686+static enum event_type26872687+process_bitmask(struct event_format *event __maybe_unused, struct print_arg *arg,26882688+ char **tok)26892689+{26902690+ enum event_type type;26912691+ char *token;26922692+26932693+ if (read_expect_type(EVENT_ITEM, &token) < 0)26942694+ goto out_free;26952695+26962696+ arg->type = PRINT_BITMASK;26972697+ arg->bitmask.bitmask = token;26982698+ arg->bitmask.offset = -1;26992699+27002700+ if (read_expected(EVENT_DELIM, ")") < 0)27012701+ goto out_err;27022702+27032703+ type = read_token(&token);27042704+ *tok = token;27052705+27062706+ return type;27072707+27082708+ out_free:27092709+ free_token(token);27102710+ out_err:27112711+ *tok = NULL;27122712+ return EVENT_ERROR;27132713+}27142714+26912715static struct pevent_function_handler *26922716find_func_handler(struct pevent *pevent, char *func_name)26932717{···28302796 if (strcmp(token, "__get_str") == 0) {28312797 free_token(token);28322798 return process_str(event, arg, tok);27992799+ }28002800+ if (strcmp(token, "__get_bitmask") == 0) {28012801+ free_token(token);28022802+ return process_bitmask(event, arg, tok);28332803 }28342804 if (strcmp(token, "__get_dynamic_array") == 0) {28352805 free_token(token);···33623324 return eval_type(val, arg, 0);33633325 case 
PRINT_STRING:33643326 case PRINT_BSTRING:33273327+ case PRINT_BITMASK:33653328 return 0;33663329 case PRINT_FUNC: {33673330 struct trace_seq s;···35953556 trace_seq_printf(s, format, str);35963557}3597355835593559+static void print_bitmask_to_seq(struct pevent *pevent,35603560+ struct trace_seq *s, const char *format,35613561+ int len_arg, const void *data, int size)35623562+{35633563+ int nr_bits = size * 8;35643564+ int str_size = (nr_bits + 3) / 4;35653565+ int len = 0;35663566+ char buf[3];35673567+ char *str;35683568+ int index;35693569+ int i;35703570+35713571+ /*35723572+ * The kernel likes to put in commas every 32 bits, we35733573+ * can do the same.35743574+ */35753575+ str_size += (nr_bits - 1) / 32;35763576+35773577+ str = malloc(str_size + 1);35783578+ if (!str) {35793579+ do_warning("%s: not enough memory!", __func__);35803580+ return;35813581+ }35823582+ str[str_size] = 0;35833583+35843584+ /* Start out with -2 for the two chars per byte */35853585+ for (i = str_size - 2; i >= 0; i -= 2) {35863586+ /*35873587+ * data points to a bit mask of size bytes.35883588+ * In the kernel, this is an array of long words, thus35893589+ * endianess is very important.35903590+ */35913591+ if (pevent->file_bigendian)35923592+ index = size - (len + 1);35933593+ else35943594+ index = len;35953595+35963596+ snprintf(buf, 3, "%02x", *((unsigned char *)data + index));35973597+ memcpy(str + i, buf, 2);35983598+ len++;35993599+ if (!(len & 3) && i > 0) {36003600+ i--;36013601+ str[i] = ',';36023602+ }36033603+ }36043604+36053605+ if (len_arg >= 0)36063606+ trace_seq_printf(s, format, len_arg, str);36073607+ else36083608+ trace_seq_printf(s, format, str);36093609+36103610+ free(str);36113611+}36123612+35983613static void print_str_arg(struct trace_seq *s, void *data, int size,35993614 struct event_format *event, const char *format,36003615 int len_arg, struct print_arg *arg)···37843691 case PRINT_BSTRING:37853692 print_str_to_seq(s, format, len_arg, 
arg->string.string);37863693 break;36943694+ case PRINT_BITMASK: {36953695+ int bitmask_offset;36963696+ int bitmask_size;36973697+36983698+ if (arg->bitmask.offset == -1) {36993699+ struct format_field *f;37003700+37013701+ f = pevent_find_any_field(event, arg->bitmask.bitmask);37023702+ arg->bitmask.offset = f->offset;37033703+ }37043704+ bitmask_offset = data2host4(pevent, data + arg->bitmask.offset);37053705+ bitmask_size = bitmask_offset >> 16;37063706+ bitmask_offset &= 0xffff;37073707+ print_bitmask_to_seq(pevent, s, format, len_arg,37083708+ data + bitmask_offset, bitmask_size);37093709+ break;37103710+ }37873711 case PRINT_OP:37883712 /*37893713 * The only op for string should be ? :···49314821 case PRINT_STRING:49324822 case PRINT_BSTRING:49334823 printf("__get_str(%s)", args->string.string);48244824+ break;48254825+ case PRINT_BITMASK:48264826+ printf("__get_bitmask(%s)", args->bitmask.bitmask);49344827 break;49354828 case PRINT_TYPE:49364829 printf("(%s)", args->typecast.type);
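The new `print_bitmask_to_seq()` above renders a little-endian bitmask as hex, most-significant byte first, with a comma every 32 bits to match the kernel's formatting. A simplified standalone helper (not the libtraceevent function itself) showing the same conversion:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Sketch of the formatting done by print_bitmask_to_seq(): render a
 * little-endian byte array as hex, most-significant byte first, with a
 * comma between each 32-bit group, as the kernel does.
 */
static void bitmask_to_str(const unsigned char *data, int size, char *out)
{
	int pos = 0;

	for (int i = size - 1; i >= 0; i--) {
		pos += sprintf(out + pos, "%02x", data[i]);
		/* comma between 32-bit groups, but not at the end */
		if (i > 0 && (i % 4) == 0)
			out[pos++] = ',';
	}
	out[pos] = '\0';
}
```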
···1818 * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~1919 */20202121+#include <stdio.h>2122#include <string.h>2223#include <dlfcn.h>2324#include <stdlib.h>···31303231#define LOCAL_PLUGIN_DIR ".traceevent/plugins"33323333+static struct registered_plugin_options {3434+ struct registered_plugin_options *next;3535+ struct pevent_plugin_option *options;3636+} *registered_options;3737+3838+static struct trace_plugin_options {3939+ struct trace_plugin_options *next;4040+ char *plugin;4141+ char *option;4242+ char *value;4343+} *trace_plugin_options;4444+3445struct plugin_list {3546 struct plugin_list *next;3647 char *name;3748 void *handle;3849};5050+5151+/**5252+ * traceevent_plugin_list_options - get list of plugin options5353+ *5454+ * Returns an array of char strings that list the currently registered5555+ * plugin options in the format of <plugin>:<option>. This list can be5656+ * used by toggling the option.5757+ *5858+ * Returns NULL if there's no options registered. On error it returns5959+ * INVALID_PLUGIN_LIST_OPTION6060+ *6161+ * Must be freed with traceevent_plugin_free_options_list().6262+ */6363+char **traceevent_plugin_list_options(void)6464+{6565+ struct registered_plugin_options *reg;6666+ struct pevent_plugin_option *op;6767+ char **list = NULL;6868+ char *name;6969+ int count = 0;7070+7171+ for (reg = registered_options; reg; reg = reg->next) {7272+ for (op = reg->options; op->name; op++) {7373+ char *alias = op->plugin_alias ? 
op->plugin_alias : op->file;7474+ char **temp = list;7575+7676+ name = malloc(strlen(op->name) + strlen(alias) + 2);7777+ if (!name)7878+ goto err;7979+8080+ sprintf(name, "%s:%s", alias, op->name);8181+ list = realloc(list, count + 2);8282+ if (!list) {8383+ list = temp;8484+ free(name);8585+ goto err;8686+ }8787+ list[count++] = name;8888+ list[count] = NULL;8989+ }9090+ }9191+ return list;9292+9393+ err:9494+ while (--count >= 0)9595+ free(list[count]);9696+ free(list);9797+9898+ return INVALID_PLUGIN_LIST_OPTION;9999+}100100+101101+void traceevent_plugin_free_options_list(char **list)102102+{103103+ int i;104104+105105+ if (!list)106106+ return;107107+108108+ if (list == INVALID_PLUGIN_LIST_OPTION)109109+ return;110110+111111+ for (i = 0; list[i]; i++)112112+ free(list[i]);113113+114114+ free(list);115115+}116116+117117+static int118118+update_option(const char *file, struct pevent_plugin_option *option)119119+{120120+ struct trace_plugin_options *op;121121+ char *plugin;122122+123123+ if (option->plugin_alias) {124124+ plugin = strdup(option->plugin_alias);125125+ if (!plugin)126126+ return -1;127127+ } else {128128+ char *p;129129+ plugin = strdup(file);130130+ if (!plugin)131131+ return -1;132132+ p = strstr(plugin, ".");133133+ if (p)134134+ *p = '\0';135135+ }136136+137137+ /* first look for named options */138138+ for (op = trace_plugin_options; op; op = op->next) {139139+ if (!op->plugin)140140+ continue;141141+ if (strcmp(op->plugin, plugin) != 0)142142+ continue;143143+ if (strcmp(op->option, option->name) != 0)144144+ continue;145145+146146+ option->value = op->value;147147+ option->set ^= 1;148148+ goto out;149149+ }150150+151151+ /* first look for unnamed options */152152+ for (op = trace_plugin_options; op; op = op->next) {153153+ if (op->plugin)154154+ continue;155155+ if (strcmp(op->option, option->name) != 0)156156+ continue;157157+158158+ option->value = op->value;159159+ option->set ^= 1;160160+ break;161161+ }162162+163163+ out:164164+ 
free(plugin);165165+ return 0;166166+}167167+168168+/**169169+ * traceevent_plugin_add_options - Add a set of options by a plugin170170+ * @name: The name of the plugin adding the options171171+ * @options: The set of options being loaded172172+ *173173+ * Sets the options with the values that have been added by user.174174+ */175175+int traceevent_plugin_add_options(const char *name,176176+ struct pevent_plugin_option *options)177177+{178178+ struct registered_plugin_options *reg;179179+180180+ reg = malloc(sizeof(*reg));181181+ if (!reg)182182+ return -1;183183+ reg->next = registered_options;184184+ reg->options = options;185185+ registered_options = reg;186186+187187+ while (options->name) {188188+ update_option(name, options);189189+ options++;190190+ }191191+ return 0;192192+}193193+194194+/**195195+ * traceevent_plugin_remove_options - remove plugin options that were registered196196+ * @options: Options to removed that were registered with traceevent_plugin_add_options197197+ */198198+void traceevent_plugin_remove_options(struct pevent_plugin_option *options)199199+{200200+ struct registered_plugin_options **last;201201+ struct registered_plugin_options *reg;202202+203203+ for (last = ®istered_options; *last; last = &(*last)->next) {204204+ if ((*last)->options == options) {205205+ reg = *last;206206+ *last = reg->next;207207+ free(reg);208208+ return;209209+ }210210+ }211211+}212212+213213+/**214214+ * traceevent_print_plugins - print out the list of plugins loaded215215+ * @s: the trace_seq descripter to write to216216+ * @prefix: The prefix string to add before listing the option name217217+ * @suffix: The suffix string ot append after the option name218218+ * @list: The list of plugins (usually returned by traceevent_load_plugins()219219+ *220220+ * Writes to the trace_seq @s the list of plugins (files) that is221221+ * returned by traceevent_load_plugins(). 
Use @prefix and @suffix for formating:222222+ * @prefix = " ", @suffix = "\n".223223+ */224224+void traceevent_print_plugins(struct trace_seq *s,225225+ const char *prefix, const char *suffix,226226+ const struct plugin_list *list)227227+{228228+ while (list) {229229+ trace_seq_printf(s, "%s%s%s", prefix, list->name, suffix);230230+ list = list->next;231231+ }232232+}3923340234static void41235load_plugin(struct pevent *pevent, const char *path,···344148 char *path;345149 char *envdir;346150151151+ if (pevent->flags & PEVENT_DISABLE_PLUGINS)152152+ return;153153+347154 /*348155 * If a system plugin directory was defined,349156 * check that first.350157 */351158#ifdef PLUGIN_DIR352352- load_plugins_dir(pevent, suffix, PLUGIN_DIR, load_plugin, data);159159+ if (!(pevent->flags & PEVENT_DISABLE_SYS_PLUGINS))160160+ load_plugins_dir(pevent, suffix, PLUGIN_DIR,161161+ load_plugin, data);353162#endif354163355164 /*
+37-6
tools/lib/traceevent/plugin_function.c
···33333434#define STK_BLK 1035353636+struct pevent_plugin_option plugin_options[] =3737+{3838+ {3939+ .name = "parent",4040+ .plugin_alias = "ftrace",4141+ .description =4242+ "Print parent of functions for function events",4343+ },4444+ {4545+ .name = "indent",4646+ .plugin_alias = "ftrace",4747+ .description =4848+ "Try to show function call indents, based on parents",4949+ .set = 1,5050+ },5151+ {5252+ .name = NULL,5353+ }5454+};5555+5656+static struct pevent_plugin_option *ftrace_parent = &plugin_options[0];5757+static struct pevent_plugin_option *ftrace_indent = &plugin_options[1];5858+3659static void add_child(struct func_stack *stack, const char *child, int pos)3760{3861 int i;···142119143120 parent = pevent_find_function(pevent, pfunction);144121145145- index = add_and_get_index(parent, func, record->cpu);122122+ if (parent && ftrace_indent->set)123123+ index = add_and_get_index(parent, func, record->cpu);146124147125 trace_seq_printf(s, "%*s", index*3, "");148126···152128 else153129 trace_seq_printf(s, "0x%llx", function);154130155155- trace_seq_printf(s, " <-- ");156156- if (parent)157157- trace_seq_printf(s, "%s", parent);158158- else159159- trace_seq_printf(s, "0x%llx", pfunction);131131+ if (ftrace_parent->set) {132132+ trace_seq_printf(s, " <-- ");133133+ if (parent)134134+ trace_seq_printf(s, "%s", parent);135135+ else136136+ trace_seq_printf(s, "0x%llx", pfunction);137137+ }160138161139 return 0;162140}···167141{168142 pevent_register_event_handler(pevent, -1, "ftrace", "function",169143 function_handler, NULL);144144+145145+ traceevent_plugin_add_options("ftrace", plugin_options);146146+170147 return 0;171148}172149···185156 free(fstack[i].stack[x]);186157 free(fstack[i].stack);187158 }159159+160160+ traceevent_plugin_remove_options(plugin_options);188161189162 free(fstack);190163 fstack = NULL;
+23
tools/perf/Documentation/perf-report.txt
···
 	By default, every sort keys not specified in -F will be appended
 	automatically.
 
+	If --mem-mode option is used, following sort keys are also available
+	(incompatible with --branch-stack):
+	symbol_daddr, dso_daddr, locked, tlb, mem, snoop, dcacheline.
+
+	- symbol_daddr: name of data symbol being executed on at the time of sample
+	- dso_daddr: name of library or module containing the data being executed
+	  on at the time of sample
+	- locked: whether the bus was locked at the time of sample
+	- tlb: type of tlb access for the data at the time of sample
+	- mem: type of memory access for the data at the time of sample
+	- snoop: type of snoop (if any) for the data at the time of sample
+	- dcacheline: the cacheline the data address is on at the time of sample
+
+	And default sort keys are changed to local_weight, mem, sym, dso,
+	symbol_daddr, dso_daddr, snoop, tlb, locked, see '--mem-mode'.
+
 -p::
 --parent=<regex>::
 	A regex filter to identify parent. The parent is a caller of this
···
 --demangle::
 	Demangle symbol names to human readable form. It's enabled by default,
 	disable with --no-demangle.
+
+--mem-mode::
+	Use the data addresses of samples in addition to instruction addresses
+	to build the histograms. To generate meaningful output, the perf.data
+	file must have been obtained using perf record -d -W and using a
+	special event -e cpu/mem-loads/ or -e cpu/mem-stores/. See
+	'perf mem' for simpler access.
 
 --percent-limit::
 	Do not show entries which have an overhead under that percent.
+20-21
tools/perf/Documentation/perf-timechart.txt
···43434444--symfs=<directory>::4545 Look for files with symbols relative to this directory.4646-4747-EXAMPLES4848---------4949-5050-$ perf timechart record git pull5151-5252- [ perf record: Woken up 13 times to write data ]5353- [ perf record: Captured and wrote 4.253 MB perf.data (~185801 samples) ]5454-5555-$ perf timechart5656-5757- Written 10.2 seconds of trace to output.svg.5858-5959-Record system-wide timechart:6060-6161- $ perf timechart record6262-6363- then generate timechart and highlight 'gcc' tasks:6464-6565- $ perf timechart --highlight gcc6666-6746-n::6847--proc-num::6948 Print task info for at least given number of tasks.···6687-g::6788--callchain::6889 Do call-graph (stack chain/backtrace) recording9090+9191+EXAMPLES9292+--------9393+9494+$ perf timechart record git pull9595+9696+ [ perf record: Woken up 13 times to write data ]9797+ [ perf record: Captured and wrote 4.253 MB perf.data (~185801 samples) ]9898+9999+$ perf timechart100100+101101+ Written 10.2 seconds of trace to output.svg.102102+103103+Record system-wide timechart:104104+105105+ $ perf timechart record106106+107107+ then generate timechart and highlight 'gcc' tasks:108108+109109+ $ perf timechart --highlight gcc6911070111SEE ALSO71112--------
···
 	if (ret)
 		return ret;
 
-	if (&inject->output.is_pipe)
+	if (!inject->output.is_pipe)
 		return 0;
 
 	return perf_event__repipe_synth(tool, event);
+14-9
tools/perf/builtin-probe.c
···288288 memset(¶ms, 0, sizeof(params));289289}290290291291+static void pr_err_with_code(const char *msg, int err)292292+{293293+ pr_err("%s", msg);294294+ pr_debug(" Reason: %s (Code: %d)", strerror(-err), err);295295+ pr_err("\n");296296+}297297+291298static int292299__cmd_probe(int argc, const char **argv, const char *prefix __maybe_unused)293300{···386379 }387380 ret = parse_probe_event_argv(argc, argv);388381 if (ret < 0) {389389- pr_err(" Error: Parse Error. (%d)\n", ret);382382+ pr_err_with_code(" Error: Command Parse Error.", ret);390383 return ret;391384 }392385 }···426419 }427420 ret = show_perf_probe_events();428421 if (ret < 0)429429- pr_err(" Error: Failed to show event list. (%d)\n",430430- ret);422422+ pr_err_with_code(" Error: Failed to show event list.", ret);431423 return ret;432424 }433425 if (params.show_funcs) {···451445 strfilter__delete(params.filter);452446 params.filter = NULL;453447 if (ret < 0)454454- pr_err(" Error: Failed to show functions."455455- " (%d)\n", ret);448448+ pr_err_with_code(" Error: Failed to show functions.", ret);456449 return ret;457450 }458451···469464470465 ret = show_line_range(¶ms.line_range, params.target);471466 if (ret < 0)472472- pr_err(" Error: Failed to show lines. (%d)\n", ret);467467+ pr_err_with_code(" Error: Failed to show lines.", ret);473468 return ret;474469 }475470 if (params.show_vars) {···490485 strfilter__delete(params.filter);491486 params.filter = NULL;492487 if (ret < 0)493493- pr_err(" Error: Failed to show vars. (%d)\n", ret);488488+ pr_err_with_code(" Error: Failed to show vars.", ret);494489 return ret;495490 }496491#endif···498493 if (params.dellist) {499494 ret = del_perf_probe_events(params.dellist);500495 if (ret < 0) {501501- pr_err(" Error: Failed to delete events. 
(%d)\n", ret);496496+ pr_err_with_code(" Error: Failed to delete events.", ret);502497 return ret;503498 }504499 }···509504 params.target,510505 params.force_add);511506 if (ret < 0) {512512- pr_err(" Error: Failed to add events. (%d)\n", ret);507507+ pr_err_with_code(" Error: Failed to add events.", ret);513508 return ret;514509 }515510 }
···
 	/* The page_size is placed in util object. */
 	page_size = sysconf(_SC_PAGE_SIZE);
+	cacheline_size = sysconf(_SC_LEVEL1_DCACHE_LINESIZE);
 
 	cmd = perf_extract_argv0_path(argv[0]);
 	if (!cmd)
+40-2
tools/perf/tests/builtin-test.c
···33 *44 * Builtin regression testing command: ever growing number of sanity tests55 */66+#include <unistd.h>77+#include <string.h>68#include "builtin.h"79#include "intlist.h"810#include "tests.h"···5250 .func = test__pmu,5351 },5452 {5555- .desc = "Test dso data interface",5353+ .desc = "Test dso data read",5654 .func = test__dso_data,5555+ },5656+ {5757+ .desc = "Test dso data cache",5858+ .func = test__dso_data_cache,5959+ },6060+ {6161+ .desc = "Test dso data reopen",6262+ .func = test__dso_data_reopen,5763 },5864 {5965 .desc = "roundtrip evsel->name check",···182172 return false;183173}184174175175+static int run_test(struct test *test)176176+{177177+ int status, err = -1, child = fork();178178+179179+ if (child < 0) {180180+ pr_err("failed to fork test: %s\n", strerror(errno));181181+ return -1;182182+ }183183+184184+ if (!child) {185185+ pr_debug("test child forked, pid %d\n", getpid());186186+ err = test->func();187187+ exit(err);188188+ }189189+190190+ wait(&status);191191+192192+ if (WIFEXITED(status)) {193193+ err = WEXITSTATUS(status);194194+ pr_debug("test child finished with %d\n", err);195195+ } else if (WIFSIGNALED(status)) {196196+ err = -1;197197+ pr_debug("test child interrupted\n");198198+ }199199+200200+ return err;201201+}202202+185203static int __cmd_test(int argc, const char *argv[], struct intlist *skiplist)186204{187205 int i = 0;···238200 }239201240202 pr_debug("\n--- start ---\n");241241- err = tests[curr].func();203203+ err = run_test(&tests[curr]);242204 pr_debug("---- end ----\n%s:", tests[curr].desc);243205244206 switch (err) {
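The new `run_test()` above isolates each test in a child process so a crashing test cannot take down the whole harness. A minimal, self-contained version of that fork/wait pattern, with toy test functions standing in for `test->func()`:

```c
#include <assert.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/*
 * Run fn() in a forked child and recover its return value from the
 * exit status; a signal-killed child is reported as failure (-1).
 */
static int run_isolated(int (*fn)(void))
{
	int status;
	pid_t child = fork();

	if (child < 0)
		return -1;
	if (!child)
		exit(fn());        /* child: run the test, report via exit code */

	wait(&status);             /* parent: collect the child's verdict */
	if (WIFEXITED(status))
		return WEXITSTATUS(status);
	return -1;                 /* killed by a signal: treat as failure */
}

static int passing_test(void) { return 0; }
static int failing_test(void) { return 2; }
```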
+210-4
tools/perf/tests/dso-data.c
···11-#include "util.h"22-31#include <stdlib.h>42#include <linux/types.h>53#include <sys/stat.h>64#include <fcntl.h>75#include <string.h>88-66+#include <sys/time.h>77+#include <sys/resource.h>88+#include <api/fs/fs.h>99+#include "util.h"910#include "machine.h"1011#include "symbol.h"1112#include "tests.h"12131314static char *test_file(int size)1415{1515- static char buf_templ[] = "/tmp/test-XXXXXX";1616+#define TEMPL "/tmp/perf-test-XXXXXX"1717+ static char buf_templ[sizeof(TEMPL)];1618 char *templ = buf_templ;1719 int fd, i;1820 unsigned char *buf;2121+2222+ strcpy(buf_templ, TEMPL);2323+#undef TEMPL19242025 fd = mkstemp(templ);2126 if (fd < 0) {···153148154149 dso__delete(dso);155150 unlink(file);151151+ return 0;152152+}153153+154154+static long open_files_cnt(void)155155+{156156+ char path[PATH_MAX];157157+ struct dirent *dent;158158+ DIR *dir;159159+ long nr = 0;160160+161161+ scnprintf(path, PATH_MAX, "%s/self/fd", procfs__mountpoint());162162+ pr_debug("fd path: %s\n", path);163163+164164+ dir = opendir(path);165165+ TEST_ASSERT_VAL("failed to open fd directory", dir);166166+167167+ while ((dent = readdir(dir)) != NULL) {168168+ if (!strcmp(dent->d_name, ".") ||169169+ !strcmp(dent->d_name, ".."))170170+ continue;171171+172172+ nr++;173173+ }174174+175175+ closedir(dir);176176+ return nr - 1;177177+}178178+179179+static struct dso **dsos;180180+181181+static int dsos__create(int cnt, int size)182182+{183183+ int i;184184+185185+ dsos = malloc(sizeof(dsos) * cnt);186186+ TEST_ASSERT_VAL("failed to alloc dsos array", dsos);187187+188188+ for (i = 0; i < cnt; i++) {189189+ char *file;190190+191191+ file = test_file(size);192192+ TEST_ASSERT_VAL("failed to get dso file", file);193193+194194+ dsos[i] = dso__new(file);195195+ TEST_ASSERT_VAL("failed to get dso", dsos[i]);196196+ }197197+198198+ return 0;199199+}200200+201201+static void dsos__delete(int cnt)202202+{203203+ int i;204204+205205+ for (i = 0; i < cnt; i++) {206206+ struct dso *dso = 
dsos[i];207207+208208+ unlink(dso->name);209209+ dso__delete(dso);210210+ }211211+212212+ free(dsos);213213+}214214+215215+static int set_fd_limit(int n)216216+{217217+ struct rlimit rlim;218218+219219+ if (getrlimit(RLIMIT_NOFILE, &rlim))220220+ return -1;221221+222222+ pr_debug("file limit %ld, new %d\n", (long) rlim.rlim_cur, n);223223+224224+ rlim.rlim_cur = n;225225+ return setrlimit(RLIMIT_NOFILE, &rlim);226226+}227227+228228+int test__dso_data_cache(void)229229+{230230+ struct machine machine;231231+ long nr_end, nr = open_files_cnt();232232+ int dso_cnt, limit, i, fd;233233+234234+ memset(&machine, 0, sizeof(machine));235235+236236+ /* set as system limit */237237+ limit = nr * 4;238238+ TEST_ASSERT_VAL("failed to set file limit", !set_fd_limit(limit));239239+240240+ /* and this is now our dso open FDs limit + 1 extra */241241+ dso_cnt = limit / 2 + 1;242242+ TEST_ASSERT_VAL("failed to create dsos\n",243243+ !dsos__create(dso_cnt, TEST_FILE_SIZE));244244+245245+ for (i = 0; i < (dso_cnt - 1); i++) {246246+ struct dso *dso = dsos[i];247247+248248+ /*249249+ * Open dsos via dso__data_fd or dso__data_read_offset.250250+ * Both opens the data file and keep it open.251251+ */252252+ if (i % 2) {253253+ fd = dso__data_fd(dso, &machine);254254+ TEST_ASSERT_VAL("failed to get fd", fd > 0);255255+ } else {256256+ #define BUFSIZE 10257257+ u8 buf[BUFSIZE];258258+ ssize_t n;259259+260260+ n = dso__data_read_offset(dso, &machine, 0, buf, BUFSIZE);261261+ TEST_ASSERT_VAL("failed to read dso", n == BUFSIZE);262262+ }263263+ }264264+265265+ /* open +1 dso over the allowed limit */266266+ fd = dso__data_fd(dsos[i], &machine);267267+ TEST_ASSERT_VAL("failed to get fd", fd > 0);268268+269269+ /* should force the first one to be closed */270270+ TEST_ASSERT_VAL("failed to close dsos[0]", dsos[0]->data.fd == -1);271271+272272+ /* cleanup everything */273273+ dsos__delete(dso_cnt);274274+275275+ /* Make sure we did not leak any file descriptor. 
*/276276+ nr_end = open_files_cnt();277277+ pr_debug("nr start %ld, nr stop %ld\n", nr, nr_end);278278+ TEST_ASSERT_VAL("failed leadking files", nr == nr_end);279279+ return 0;280280+}281281+282282+int test__dso_data_reopen(void)283283+{284284+ struct machine machine;285285+ long nr_end, nr = open_files_cnt();286286+ int fd, fd_extra;287287+288288+#define dso_0 (dsos[0])289289+#define dso_1 (dsos[1])290290+#define dso_2 (dsos[2])291291+292292+ memset(&machine, 0, sizeof(machine));293293+294294+ /*295295+ * Test scenario:296296+ * - create 3 dso objects297297+ * - set process file descriptor limit to current298298+ * files count + 3299299+ * - test that the first dso gets closed when we300300+ * reach the files count limit301301+ */302302+303303+ /* Make sure we are able to open 3 fds anyway */304304+ TEST_ASSERT_VAL("failed to set file limit",305305+ !set_fd_limit((nr + 3)));306306+307307+ TEST_ASSERT_VAL("failed to create dsos\n", !dsos__create(3, TEST_FILE_SIZE));308308+309309+ /* open dso_0 */310310+ fd = dso__data_fd(dso_0, &machine);311311+ TEST_ASSERT_VAL("failed to get fd", fd > 0);312312+313313+ /* open dso_1 */314314+ fd = dso__data_fd(dso_1, &machine);315315+ TEST_ASSERT_VAL("failed to get fd", fd > 0);316316+317317+ /*318318+ * open extra file descriptor and we just319319+ * reached the files count limit320320+ */321321+ fd_extra = open("/dev/null", O_RDONLY);322322+ TEST_ASSERT_VAL("failed to open extra fd", fd_extra > 0);323323+324324+ /* open dso_2 */325325+ fd = dso__data_fd(dso_2, &machine);326326+ TEST_ASSERT_VAL("failed to get fd", fd > 0);327327+328328+ /*329329+ * dso_0 should get closed, because we reached330330+ * the file descriptor limit331331+ */332332+ TEST_ASSERT_VAL("failed to close dso_0", dso_0->data.fd == -1);333333+334334+ /* open dso_0 */335335+ fd = dso__data_fd(dso_0, &machine);336336+ TEST_ASSERT_VAL("failed to get fd", fd > 0);337337+338338+ /*339339+ * dso_1 should get closed, because we reached340340+ * the file descriptor 
limit341341+ */342342+ TEST_ASSERT_VAL("failed to close dso_1", dso_1->data.fd == -1);343343+344344+ /* cleanup everything */345345+ close(fd_extra);346346+ dsos__delete(3);347347+348348+ /* Make sure we did not leak any file descriptor. */349349+ nr_end = open_files_cnt();350350+ pr_debug("nr start %ld, nr stop %ld\n", nr, nr_end);351351+ TEST_ASSERT_VAL("failed leadking files", nr == nr_end);156352 return 0;157353}
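The dso-data tests above detect file-descriptor leaks by counting entries in `/proc/self/fd` before and after the run. A standalone version of that `open_files_cnt()` helper (Linux-specific; it subtracts one for the fd that `opendir()` itself holds while counting):

```c
#include <dirent.h>
#include <string.h>

/* Count this process's open file descriptors via /proc/self/fd (Linux). */
static long open_files_cnt(void)
{
	struct dirent *dent;
	long nr = 0;
	DIR *dir = opendir("/proc/self/fd");

	if (!dir)
		return -1;

	while ((dent = readdir(dir)) != NULL) {
		if (!strcmp(dent->d_name, ".") || !strcmp(dent->d_name, ".."))
			continue;
		nr++;
	}

	closedir(dir);
	return nr - 1;             /* don't count the directory fd itself */
}
```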
tools/perf/util/dso.c

···
+#include <asm/bug.h>
+#include <sys/time.h>
+#include <sys/resource.h>
 #include "symbol.h"
 #include "dso.h"
 #include "machine.h"
···
        return ret;
 }

-static int open_dso(struct dso *dso, struct machine *machine)
+/*
+ * Global list of open DSOs and the counter.
+ */
+static LIST_HEAD(dso__data_open);
+static long dso__data_open_cnt;
+
+static void dso__list_add(struct dso *dso)
+{
+       list_add_tail(&dso->data.open_entry, &dso__data_open);
+       dso__data_open_cnt++;
+}
+
+static void dso__list_del(struct dso *dso)
+{
+       list_del(&dso->data.open_entry);
+       WARN_ONCE(dso__data_open_cnt <= 0,
+                 "DSO data fd counter out of bounds.");
+       dso__data_open_cnt--;
+}
+
+static void close_first_dso(void);
+
+static int do_open(char *name)
+{
+       int fd;
+
+       do {
+               fd = open(name, O_RDONLY);
+               if (fd >= 0)
+                       return fd;
+
+               pr_debug("dso open failed, mmap: %s\n", strerror(errno));
+               if (!dso__data_open_cnt || errno != EMFILE)
+                       break;
+
+               close_first_dso();
+       } while (1);
+
+       return -1;
+}
+
+static int __open_dso(struct dso *dso, struct machine *machine)
 {
        int fd;
        char *root_dir = (char *)"";
···
                return -EINVAL;
        }

-       fd = open(name, O_RDONLY);
+       fd = do_open(name);
        free(name);
        return fd;
 }

+static void check_data_close(void);
+
+/**
+ * open_dso - Open DSO data file
+ * @dso: dso object
+ *
+ * Open @dso's data file descriptor and update the
+ * list/count of open DSO objects.
+ */
+static int open_dso(struct dso *dso, struct machine *machine)
+{
+       int fd = __open_dso(dso, machine);
+
+       if (fd > 0) {
+               dso__list_add(dso);
+               /*
+                * Check if we crossed the allowed number
+                * of opened DSOs and close one if needed.
+                */
+               check_data_close();
+       }
+
+       return fd;
+}
+
+static void close_data_fd(struct dso *dso)
+{
+       if (dso->data.fd >= 0) {
+               close(dso->data.fd);
+               dso->data.fd = -1;
+               dso->data.file_size = 0;
+               dso__list_del(dso);
+       }
+}
+
+/**
+ * close_dso - Close DSO data file
+ * @dso: dso object
+ *
+ * Close @dso's data file descriptor and update the
+ * list/count of open DSO objects.
+ */
+static void close_dso(struct dso *dso)
+{
+       close_data_fd(dso);
+}
+
+static void close_first_dso(void)
+{
+       struct dso *dso;
+
+       dso = list_first_entry(&dso__data_open, struct dso, data.open_entry);
+       close_dso(dso);
+}
+
+static rlim_t get_fd_limit(void)
+{
+       struct rlimit l;
+       rlim_t limit = 0;
+
+       /* Allow half of the current open fd limit. */
+       if (getrlimit(RLIMIT_NOFILE, &l) == 0) {
+               if (l.rlim_cur == RLIM_INFINITY)
+                       limit = l.rlim_cur;
+               else
+                       limit = l.rlim_cur / 2;
+       } else {
+               pr_err("failed to get fd limit\n");
+               limit = 1;
+       }
+
+       return limit;
+}
+
+static bool may_cache_fd(void)
+{
+       static rlim_t limit;
+
+       if (!limit)
+               limit = get_fd_limit();
+
+       if (limit == RLIM_INFINITY)
+               return true;
+
+       return limit > (rlim_t) dso__data_open_cnt;
+}
+
+/*
+ * Check and close LRU dso if we crossed the allowed limit
+ * for opened dso file descriptors. The limit is half
+ * of the RLIMIT_NOFILE files opened.
+ */
+static void check_data_close(void)
+{
+       bool cache_fd = may_cache_fd();
+
+       if (!cache_fd)
+               close_first_dso();
+}
+
+/**
+ * dso__data_close - Close DSO data file
+ * @dso: dso object
+ *
+ * External interface to close @dso's data file descriptor.
+ */
+void dso__data_close(struct dso *dso)
+{
+       close_dso(dso);
+}
+
+/**
+ * dso__data_fd - Get dso's data file descriptor
+ * @dso: dso object
+ * @machine: machine object
+ *
+ * External interface to find dso's file, open it and
+ * return its file descriptor.
+ */
 int dso__data_fd(struct dso *dso, struct machine *machine)
 {
        enum dso_binary_type binary_type_data[] = {
···
        };
        int i = 0;

-       if (dso->binary_type != DSO_BINARY_TYPE__NOT_FOUND)
-               return open_dso(dso, machine);
+       if (dso->data.fd >= 0)
+               return dso->data.fd;
+
+       if (dso->binary_type != DSO_BINARY_TYPE__NOT_FOUND) {
+               dso->data.fd = open_dso(dso, machine);
+               return dso->data.fd;
+       }

        do {
                int fd;
···
                fd = open_dso(dso, machine);
                if (fd >= 0)
-                       return fd;
+                       return dso->data.fd = fd;

        } while (dso->binary_type != DSO_BINARY_TYPE__NOT_FOUND);
···
 }

 static ssize_t
-dso_cache__read(struct dso *dso, struct machine *machine,
-               u64 offset, u8 *data, ssize_t size)
+dso_cache__read(struct dso *dso, u64 offset, u8 *data, ssize_t size)
 {
        struct dso_cache *cache;
        ssize_t ret;
-       int fd;
-
-       fd = dso__data_fd(dso, machine);
-       if (fd < 0)
-               return -1;

        do {
                u64 cache_offset;
···
                cache_offset = offset & DSO__DATA_CACHE_MASK;
                ret = -EINVAL;

-               if (-1 == lseek(fd, cache_offset, SEEK_SET))
+               if (-1 == lseek(dso->data.fd, cache_offset, SEEK_SET))
                        break;

-               ret = read(fd, cache->data, DSO__DATA_CACHE_SIZE);
+               ret = read(dso->data.fd, cache->data, DSO__DATA_CACHE_SIZE);
                if (ret <= 0)
                        break;

                cache->offset = cache_offset;
                cache->size   = ret;
-               dso_cache__insert(&dso->cache, cache);
+               dso_cache__insert(&dso->data.cache, cache);

                ret = dso_cache__memcpy(cache, offset, data, size);
···
        if (ret <= 0)
                free(cache);

-       close(fd);
        return ret;
 }

-static ssize_t dso_cache_read(struct dso *dso, struct machine *machine,
-                             u64 offset, u8 *data, ssize_t size)
+static ssize_t dso_cache_read(struct dso *dso, u64 offset,
+                             u8 *data, ssize_t size)
 {
        struct dso_cache *cache;

-       cache = dso_cache__find(&dso->cache, offset);
+       cache = dso_cache__find(&dso->data.cache, offset);
        if (cache)
                return dso_cache__memcpy(cache, offset, data, size);
        else
-               return dso_cache__read(dso, machine, offset, data, size);
+               return dso_cache__read(dso, offset, data, size);
 }

-ssize_t dso__data_read_offset(struct dso *dso, struct machine *machine,
-                             u64 offset, u8 *data, ssize_t size)
+/*
+ * Reads and caches dso data in DSO__DATA_CACHE_SIZE-sized chunks
+ * in the rb_tree. Any read of already cached data is served
+ * from the cache.
+ */
+static ssize_t cached_read(struct dso *dso, u64 offset, u8 *data, ssize_t size)
 {
        ssize_t r = 0;
        u8 *p = data;
···
        do {
                ssize_t ret;

-               ret = dso_cache_read(dso, machine, offset, p, size);
+               ret = dso_cache_read(dso, offset, p, size);
                if (ret < 0)
                        return ret;
···
        return r;
 }

+static int data_file_size(struct dso *dso)
+{
+       struct stat st;
+
+       if (!dso->data.file_size) {
+               if (fstat(dso->data.fd, &st)) {
+                       pr_err("dso mmap failed, fstat: %s\n", strerror(errno));
+                       return -1;
+               }
+               dso->data.file_size = st.st_size;
+       }
+
+       return 0;
+}
+
+static ssize_t data_read_offset(struct dso *dso, u64 offset,
+                               u8 *data, ssize_t size)
+{
+       if (data_file_size(dso))
+               return -1;
+
+       /* Check the offset sanity. */
+       if (offset > dso->data.file_size)
+               return -1;
+
+       if (offset + size < offset)
+               return -1;
+
+       return cached_read(dso, offset, data, size);
+}
+
+/**
+ * dso__data_read_offset - Read data from dso file offset
+ * @dso: dso object
+ * @machine: machine object
+ * @offset: file offset
+ * @data: buffer to store data
+ * @size: size of the @data buffer
+ *
+ * External interface to read data from dso file offset. Opens
+ * the dso data file and uses cached_read to get the data.
+ */
+ssize_t dso__data_read_offset(struct dso *dso, struct machine *machine,
+                             u64 offset, u8 *data, ssize_t size)
+{
+       if (dso__data_fd(dso, machine) < 0)
+               return -1;
+
+       return data_read_offset(dso, offset, data, size);
+}
+
+/**
+ * dso__data_read_addr - Read data from dso address
+ * @dso: dso object
+ * @machine: machine object
+ * @addr: virtual memory address
+ * @data: buffer to store data
+ * @size: size of the @data buffer
+ *
+ * External interface to read data from dso address.
+ */
 ssize_t dso__data_read_addr(struct dso *dso, struct map *map,
                            struct machine *machine, u64 addr,
                            u8 *data, ssize_t size)
···
        dso__set_short_name(dso, dso->name, false);
        for (i = 0; i < MAP__NR_TYPES; ++i)
                dso->symbols[i] = dso->symbol_names[i] = RB_ROOT;
-       dso->cache = RB_ROOT;
+       dso->data.cache = RB_ROOT;
+       dso->data.fd = -1;
        dso->symtab_type = DSO_BINARY_TYPE__NOT_FOUND;
        dso->binary_type = DSO_BINARY_TYPE__NOT_FOUND;
        dso->loaded = 0;
···
        dso->kernel = DSO_TYPE_USER;
        dso->needs_swap = DSO_SWAP__UNSET;
        INIT_LIST_HEAD(&dso->node);
+       INIT_LIST_HEAD(&dso->data.open_entry);
        }

        return dso;
···
                dso->long_name_allocated = false;
        }

-       dso_cache__free(&dso->cache);
+       dso__data_close(dso);
+       dso_cache__free(&dso->data.cache);
        dso__free_a2l(dso);
        zfree(&dso->symsrc_filename);
        free(dso);
tools/perf/util/dso.h (+49, -1)
···
        struct list_head node;
        struct rb_root   symbols[MAP__NR_TYPES];
        struct rb_root   symbol_names[MAP__NR_TYPES];
-       struct rb_root   cache;
        void             *a2l;
        char             *symsrc_filename;
        unsigned int     a2l_fails;
···
        const char       *long_name;
        u16              long_name_len;
        u16              short_name_len;
+
+       /* dso data file */
+       struct {
+               struct rb_root   cache;
+               int              fd;
+               size_t           file_size;
+               struct list_head open_entry;
+       } data;
+
        char             name[0];
 };
···
 int dso__read_binary_type_filename(const struct dso *dso, enum dso_binary_type type,
                                   char *root_dir, char *filename, size_t size);

+/*
+ * The dso__data_* external interface provides the following functions:
+ *   dso__data_fd
+ *   dso__data_close
+ *   dso__data_read_offset
+ *   dso__data_read_addr
+ *
+ * Please refer to the dso.c object code for each function's
+ * argument documentation. The following text explains the
+ * dso file descriptor caching.
+ *
+ * The dso__data* interface allows caching of open file descriptors
+ * to speed up dso data accesses. The idea is to leave the file
+ * descriptor open, ideally for the whole life of the dso object.
+ *
+ * The current usage of the dso__data_* interface is as follows:
+ *
+ * Get DSO's fd:
+ *   int fd = dso__data_fd(dso, machine);
+ *   USE 'fd' SOMEHOW
+ *
+ * Read DSO's data:
+ *   n = dso__data_read_offset(dso_0, &machine, 0, buf, BUFSIZE);
+ *   n = dso__data_read_addr(dso_0, &machine, 0, buf, BUFSIZE);
+ *
+ * Eventually close DSO's fd:
+ *   dso__data_close(dso);
+ *
+ * It is not necessary to close the DSO object data file. Each time a
+ * new DSO data file is opened, the limit (RLIMIT_NOFILE/2) is checked.
+ * Once it is crossed, the oldest opened DSO object is closed.
+ *
+ * The dso__delete function calls the close_dso function to ensure the
+ * data file descriptor gets closed/unmapped before the dso object
+ * is freed.
+ *
+ * TODO
+ */
 int dso__data_fd(struct dso *dso, struct machine *machine);
+void dso__data_close(struct dso *dso);
+
 ssize_t dso__data_read_offset(struct dso *dso, struct machine *machine,
                              u64 offset, u8 *data, ssize_t size);
 ssize_t dso__data_read_addr(struct dso *dso, struct map *map,
tools/perf/util/evsel.c

···
        }

        /*
-        * We default some events to a 1 default interval. But keep
+        * We default some events to have a default interval. But keep
         * it a weak assumption overridable by the user.
         */
-       if (!attr->sample_period || (opts->user_freq != UINT_MAX &&
+       if (!attr->sample_period || (opts->user_freq != UINT_MAX ||
            opts->user_interval != ULLONG_MAX)) {
                if (opts->freq) {
                        perf_evsel__set_sample_bit(evsel, PERIOD);
···
                perf_evsel__set_sample_bit(evsel, WEIGHT);

        attr->mmap  = track;
+       attr->mmap2 = track && !perf_missing_features.mmap2;
        attr->comm  = track;

        if (opts->sample_transaction)
tools/perf/util/probe-event.c

···
        ret = debuginfo__find_line_range(dinfo, lr);
        debuginfo__delete(dinfo);
-       if (ret == 0) {
+       if (ret == 0 || ret == -ENOENT) {
                pr_warning("Specified source line is not found.\n");
                return -ENOENT;
        } else if (ret < 0) {
-               pr_warning("Debuginfo analysis failed. (%d)\n", ret);
+               pr_warning("Debuginfo analysis failed.\n");
                return ret;
        }
···
        ret = get_real_path(tmp, lr->comp_dir, &lr->path);
        free(tmp);      /* Free old path */
        if (ret < 0) {
-               pr_warning("Failed to find source file. (%d)\n", ret);
+               pr_warning("Failed to find source file path.\n");
                return ret;
        }
···
        ret = debuginfo__find_available_vars_at(dinfo, pev, &vls,
                                                max_vls, externs);
        if (ret <= 0) {
-               pr_err("Failed to find variables at %s (%d)\n", buf, ret);
+               if (ret == 0 || ret == -ENOENT) {
+                       pr_err("Failed to find the address of %s\n", buf);
+                       ret = -ENOENT;
+               } else
+                       pr_warning("Debuginfo analysis failed.\n");
                goto end;
        }
+
        /* Some variables are found */
        fprintf(stdout, "Available variables at %s\n", buf);
        for (i = 0; i < ret; i++) {
tools/perf/util/probe-finder.c (+7, -4)
···
        if (!die_find_variable_at(sc_die, pf->pvar->var, pf->addr, &vr_die)) {
                /* Search again in global variables */
                if (!die_find_variable_at(&pf->cu_die, pf->pvar->var, 0, &vr_die))
+                       pr_warning("Failed to find '%s' in this function.\n",
+                                  pf->pvar->var);
                ret = -ENOENT;
        }
        if (ret >= 0)
                ret = convert_variable(&vr_die, pf);

-       if (ret < 0)
-               pr_warning("Failed to find '%s' in this function.\n",
-                          pf->pvar->var);
        return ret;
 }
···
        return ret;
 }

-/* Find available variables at given probe point */
+/*
+ * Find available variables at given probe point
+ * Return the number of found probe points. Return 0 if there is no
+ * matched probe point. Return <0 if an error occurs.
+ */
 int debuginfo__find_available_vars_at(struct debuginfo *dbg,
                                      struct perf_probe_event *pev,
                                      struct variable_list **vls,
···
        case PRINT_BSTRING:
        case PRINT_DYNAMIC_ARRAY:
        case PRINT_STRING:
+       case PRINT_BITMASK:
                break;
        case PRINT_TYPE:
                define_event_symbols(event, ev_name, args->typecast.item);
···
        /* Check the .eh_frame section for unwinding info */
        offset = elf_section_offset(fd, ".eh_frame_hdr");
-       close(fd);

        if (offset)
                ret = unwind_spec_ehframe(dso, machine, offset,
···
        /* Check the .debug_frame section for unwinding info */
        *offset = elf_section_offset(fd, ".debug_frame");
-       close(fd);

        if (*offset)
                return 0;
tools/perf/util/util.c (+1)
···
  * XXX We need to find a better place for these things...
  */
 unsigned int page_size;
+int cacheline_size;

 bool test_attr__enabled;
tools/perf/util/util.h (+1)
···
 void dump_stack(void);

 extern unsigned int page_size;
+extern int cacheline_size;

 void get_term_dimensions(struct winsize *ws);