Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branches 'x86/cleanups', 'x86/kexec', 'x86/mce2' and 'linus' into x86/core

+6539 -3665
+12
Documentation/RCU/checklist.txt
··· 298 298 299 299 Note that, rcu_assign_pointer() and rcu_dereference() relate to 300 300 SRCU just as they do to other forms of RCU. 301 + 302 + 15. The whole point of call_rcu(), synchronize_rcu(), and friends 303 + is to wait until all pre-existing readers have finished before 304 + carrying out some otherwise-destructive operation. It is 305 + therefore critically important to -first- remove any path 306 + that readers can follow that could be affected by the 307 + destructive operation, and -only- -then- invoke call_rcu(), 308 + synchronize_rcu(), or friends. 309 + 310 + Because these primitives only wait for pre-existing readers, 311 + it is the caller's responsibility to guarantee safety to 312 + any subsequent readers.
+9
Documentation/feature-removal-schedule.txt
··· 335 335 Secmark, it is time to deprecate the older mechanism and start the 336 336 process of removing the old code. 337 337 Who: Paul Moore <paul.moore@hp.com> 338 + --------------------------- 339 + 340 + What: sysfs ui for changing p4-clockmod parameters 341 + When: September 2009 342 + Why: See commits 129f8ae9b1b5be94517da76009ea956e89104ce8 and 343 + e088e4c9cdb618675874becb91b2fd581ee707e6. 344 + Removal is subject to fixing any remaining bugs in ACPI which may 345 + cause the thermal throttling not to happen at the right time. 346 + Who: Dave Jones <davej@redhat.com>, Matthew Garrett <mjg@redhat.com>
+1 -1
Documentation/filesystems/squashfs.txt
··· 22 22 23 23 Squashfs Cramfs 24 24 25 - Max filesystem size: 2^64 16 MiB 25 + Max filesystem size: 2^64 256 MiB 26 26 Max file size: ~ 2 TiB 16 MiB 27 27 Max files: unlimited unlimited 28 28 Max directories: unlimited unlimited
+35
Documentation/networking/ipv6.txt
··· 1 + 2 + Options for the ipv6 module are supplied as parameters at load time. 3 + 4 + Module options may be given as command line arguments to the insmod 5 + or modprobe command, but are usually specified in either the 6 + /etc/modules.conf or /etc/modprobe.conf configuration file, or in a 7 + distro-specific configuration file. 8 + 9 + The available ipv6 module parameters are listed below. If a parameter 10 + is not specified the default value is used. 11 + 12 + The parameters are as follows: 13 + 14 + disable 15 + 16 + Specifies whether to load the IPv6 module, but disable all 17 + its functionality. This might be used when another module 18 + has a dependency on the IPv6 module being loaded, but no 19 + IPv6 addresses or operations are desired. 20 + 21 + The possible values and their effects are: 22 + 23 + 0 24 + IPv6 is enabled. 25 + 26 + This is the default value. 27 + 28 + 1 29 + IPv6 is disabled. 30 + 31 + No IPv6 addresses will be added to interfaces, and 32 + it will not be possible to open an IPv6 socket. 33 + 34 + A reboot is required to enable IPv6. 35 +
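For reference, the `disable` parameter described above would typically be set in the module configuration file named in the text (exact path depends on the distro):

```
# /etc/modprobe.conf (or /etc/modules.conf on older systems)
# Load the ipv6 module but disable all its functionality:
options ipv6 disable=1
```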
+101
Documentation/x86/earlyprintk.txt
··· 1 + 2 + Mini-HOWTO for using the earlyprintk=dbgp boot option with a 3 + USB2 Debug port key and a debug cable, on x86 systems. 4 + 5 + You need two computers, the 'USB debug key' special gadget and 6 + two USB cables, connected like this: 7 + 8 + [host/target] <-------> [USB debug key] <-------> [client/console] 9 + 10 + 1. There are three specific hardware requirements: 11 + 12 + a.) Host/target system needs to have USB debug port capability. 13 + 14 + You can check this capability by looking at a 'Debug port' bit in 15 + the lspci -vvv output: 16 + 17 + # lspci -vvv 18 + ... 19 + 00:1d.7 USB Controller: Intel Corporation 82801H (ICH8 Family) USB2 EHCI Controller #1 (rev 03) (prog-if 20 [EHCI]) 20 + Subsystem: Lenovo ThinkPad T61 21 + Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx- 22 + Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- 23 + Latency: 0 24 + Interrupt: pin D routed to IRQ 19 25 + Region 0: Memory at fe227000 (32-bit, non-prefetchable) [size=1K] 26 + Capabilities: [50] Power Management version 2 27 + Flags: PMEClk- DSI- D1- D2- AuxCurrent=375mA PME(D0+,D1-,D2-,D3hot+,D3cold+) 28 + Status: D0 PME-Enable- DSel=0 DScale=0 PME+ 29 + Capabilities: [58] Debug port: BAR=1 offset=00a0 30 + ^^^^^^^^^^^ <==================== [ HERE ] 31 + Kernel driver in use: ehci_hcd 32 + Kernel modules: ehci-hcd 33 + ... 34 + 35 + ( If your system does not list a debug port capability then you probably 36 + won't be able to use the USB debug key. ) 37 + 38 + b.) You also need a Netchip USB debug cable/key: 39 + 40 + http://www.plxtech.com/products/NET2000/NET20DC/default.asp 41 + 42 + This is a small blue plastic connector with two USB connections; 43 + it draws power from its USB connections. 44 + 45 + c.) Thirdly, you need a second client/console system with a regular USB port. 46 + 47 + 2. Software requirements: 48 + 49 + a.) 
On the host/target system: 50 + 51 + You need to enable the following kernel config option: 52 + 53 + CONFIG_EARLY_PRINTK_DBGP=y 54 + 55 + And you need to add the boot command line: "earlyprintk=dbgp". 56 + (If you are using Grub, append it to the 'kernel' line in 57 + /etc/grub.conf) 58 + 59 + NOTE: normally earlyprintk console gets turned off once the 60 + regular console is alive - use "earlyprintk=dbgp,keep" to keep 61 + this channel open beyond early bootup. This can be useful for 62 + debugging crashes under Xorg, etc. 63 + 64 + b.) On the client/console system: 65 + 66 + You should enable the following kernel config option: 67 + 68 + CONFIG_USB_SERIAL_DEBUG=y 69 + 70 + On the next bootup with the modified kernel you should 71 + get a /dev/ttyUSBx device(s). 72 + 73 + Now this channel of kernel messages is ready to be used: start 74 + your favorite terminal emulator (minicom, etc.) and set 75 + it up to use /dev/ttyUSB0 - or use a raw 'cat /dev/ttyUSBx' to 76 + see the raw output. 77 + 78 + c.) On Nvidia Southbridge based systems: the kernel will try to probe 79 + and find out which port has debug device connected. 80 + 81 + 3. Testing that it works fine: 82 + 83 + You can test the output by using earlyprintk=dbgp,keep and provoking 84 + kernel messages on the host/target system. 
You can provoke a harmless 85 + kernel message by, for example, doing: 86 + 87 + echo h > /proc/sysrq-trigger 88 + 89 + On the host/target system you should see this help line in "dmesg" output: 90 + 91 + SysRq : HELP : loglevel(0-9) reBoot Crashdump terminate-all-tasks(E) memory-full-oom-kill(F) kill-all-tasks(I) saK show-backtrace-all-active-cpus(L) show-memory-usage(M) nice-all-RT-tasks(N) powerOff show-registers(P) show-all-timers(Q) unRaw Sync show-task-states(T) Unmount show-blocked-tasks(W) dump-ftrace-buffer(Z) 92 + 93 + On the client/console system do: 94 + 95 + cat /dev/ttyUSB0 96 + 97 + And you should see the help line above displayed shortly after you've 98 + provoked it on the host system. 99 + 100 + If it does not work then please ask about it on the linux-kernel@vger.kernel.org 101 + mailing list or contact the x86 maintainers.
+1 -1
Makefile
··· 1 1 VERSION = 2 2 2 PATCHLEVEL = 6 3 3 SUBLEVEL = 29 4 - EXTRAVERSION = -rc6 4 + EXTRAVERSION = -rc7 5 5 NAME = Erotic Pickled Herring 6 6 7 7 # *DOCUMENTATION*
+13 -7
arch/alpha/mm/init.c
··· 189 189 190 190 if (alpha_using_srm) { 191 191 static struct vm_struct console_remap_vm; 192 - unsigned long vaddr = VMALLOC_START; 192 + unsigned long nr_pages = 0; 193 + unsigned long vaddr; 193 194 unsigned long i, j; 195 + 196 + /* calculate needed size */ 197 + for (i = 0; i < crb->map_entries; ++i) 198 + nr_pages += crb->map[i].count; 199 + 200 + /* register the vm area */ 201 + console_remap_vm.flags = VM_ALLOC; 202 + console_remap_vm.size = nr_pages << PAGE_SHIFT; 203 + vm_area_register_early(&console_remap_vm, PAGE_SIZE); 204 + 205 + vaddr = (unsigned long)console_remap_vm.addr; 194 206 195 207 /* Set up the third level PTEs and update the virtual 196 208 addresses of the CRB entries. */ ··· 225 213 vaddr += PAGE_SIZE; 226 214 } 227 215 } 228 - 229 - /* Let vmalloc know that we've allocated some space. */ 230 - console_remap_vm.flags = VM_ALLOC; 231 - console_remap_vm.addr = (void *) VMALLOC_START; 232 - console_remap_vm.size = vaddr - VMALLOC_START; 233 - vmlist = &console_remap_vm; 234 216 } 235 217 236 218 callback_init_done = 1;
+7 -6
arch/arm/kernel/setup.c
··· 233 233 unsigned int cachetype = read_cpuid_cachetype(); 234 234 unsigned int arch = cpu_architecture(); 235 235 236 - if (arch >= CPU_ARCH_ARMv7) { 237 - cacheid = CACHEID_VIPT_NONALIASING; 238 - if ((cachetype & (3 << 14)) == 1 << 14) 239 - cacheid |= CACHEID_ASID_TAGGED; 240 - } else if (arch >= CPU_ARCH_ARMv6) { 241 - if (cachetype & (1 << 23)) 236 + if (arch >= CPU_ARCH_ARMv6) { 237 + if ((cachetype & (7 << 29)) == 4 << 29) { 238 + /* ARMv7 register format */ 239 + cacheid = CACHEID_VIPT_NONALIASING; 240 + if ((cachetype & (3 << 14)) == 1 << 14) 241 + cacheid |= CACHEID_ASID_TAGGED; 242 + } else if (cachetype & (1 << 23)) 242 243 cacheid = CACHEID_VIPT_ALIASING; 243 244 else 244 245 cacheid = CACHEID_VIPT_NONALIASING;
+105
arch/arm/mach-at91/at91sam9263_devices.c
··· 347 347 void __init at91_add_device_mmc(short mmc_id, struct at91_mmc_data *data) {} 348 348 #endif 349 349 350 + /* -------------------------------------------------------------------- 351 + * Compact Flash (PCMCIA or IDE) 352 + * -------------------------------------------------------------------- */ 353 + 354 + #if defined(CONFIG_AT91_CF) || defined(CONFIG_AT91_CF_MODULE) || \ 355 + defined(CONFIG_BLK_DEV_IDE_AT91) || defined(CONFIG_BLK_DEV_IDE_AT91_MODULE) 356 + 357 + static struct at91_cf_data cf0_data; 358 + 359 + static struct resource cf0_resources[] = { 360 + [0] = { 361 + .start = AT91_CHIPSELECT_4, 362 + .end = AT91_CHIPSELECT_4 + SZ_256M - 1, 363 + .flags = IORESOURCE_MEM | IORESOURCE_MEM_8AND16BIT, 364 + } 365 + }; 366 + 367 + static struct platform_device cf0_device = { 368 + .id = 0, 369 + .dev = { 370 + .platform_data = &cf0_data, 371 + }, 372 + .resource = cf0_resources, 373 + .num_resources = ARRAY_SIZE(cf0_resources), 374 + }; 375 + 376 + static struct at91_cf_data cf1_data; 377 + 378 + static struct resource cf1_resources[] = { 379 + [0] = { 380 + .start = AT91_CHIPSELECT_5, 381 + .end = AT91_CHIPSELECT_5 + SZ_256M - 1, 382 + .flags = IORESOURCE_MEM | IORESOURCE_MEM_8AND16BIT, 383 + } 384 + }; 385 + 386 + static struct platform_device cf1_device = { 387 + .id = 1, 388 + .dev = { 389 + .platform_data = &cf1_data, 390 + }, 391 + .resource = cf1_resources, 392 + .num_resources = ARRAY_SIZE(cf1_resources), 393 + }; 394 + 395 + void __init at91_add_device_cf(struct at91_cf_data *data) 396 + { 397 + unsigned long ebi0_csa; 398 + struct platform_device *pdev; 399 + 400 + if (!data) 401 + return; 402 + 403 + /* 404 + * assign CS4 or CS5 to SMC with Compact Flash logic support, 405 + * we assume SMC timings are configured by board code, 406 + * except True IDE where timings are controlled by driver 407 + */ 408 + ebi0_csa = at91_sys_read(AT91_MATRIX_EBI0CSA); 409 + switch (data->chipselect) { 410 + case 4: 411 + at91_set_A_periph(AT91_PIN_PD6, 0); /* 
EBI0_NCS4/CFCS0 */ 412 + ebi0_csa |= AT91_MATRIX_EBI0_CS4A_SMC_CF1; 413 + cf0_data = *data; 414 + pdev = &cf0_device; 415 + break; 416 + case 5: 417 + at91_set_A_periph(AT91_PIN_PD7, 0); /* EBI0_NCS5/CFCS1 */ 418 + ebi0_csa |= AT91_MATRIX_EBI0_CS5A_SMC_CF2; 419 + cf1_data = *data; 420 + pdev = &cf1_device; 421 + break; 422 + default: 423 + printk(KERN_ERR "AT91 CF: bad chip-select requested (%u)\n", 424 + data->chipselect); 425 + return; 426 + } 427 + at91_sys_write(AT91_MATRIX_EBI0CSA, ebi0_csa); 428 + 429 + if (data->det_pin) { 430 + at91_set_gpio_input(data->det_pin, 1); 431 + at91_set_deglitch(data->det_pin, 1); 432 + } 433 + 434 + if (data->irq_pin) { 435 + at91_set_gpio_input(data->irq_pin, 1); 436 + at91_set_deglitch(data->irq_pin, 1); 437 + } 438 + 439 + if (data->vcc_pin) 440 + /* initially off */ 441 + at91_set_gpio_output(data->vcc_pin, 0); 442 + 443 + /* enable EBI controlled pins */ 444 + at91_set_A_periph(AT91_PIN_PD5, 1); /* NWAIT */ 445 + at91_set_A_periph(AT91_PIN_PD8, 0); /* CFCE1 */ 446 + at91_set_A_periph(AT91_PIN_PD9, 0); /* CFCE2 */ 447 + at91_set_A_periph(AT91_PIN_PD14, 0); /* CFNRW */ 448 + 449 + pdev->name = (data->flags & AT91_CF_TRUE_IDE) ? "at91_ide" : "at91_cf"; 450 + platform_device_register(pdev); 451 + } 452 + #else 453 + void __init at91_add_device_cf(struct at91_cf_data *data) {} 454 + #endif 350 455 351 456 /* -------------------------------------------------------------------- 352 457 * NAND / SmartMedia
+3
arch/arm/mach-at91/include/mach/board.h
··· 56 56 u8 vcc_pin; /* power switching */ 57 57 u8 rst_pin; /* card reset */ 58 58 u8 chipselect; /* EBI Chip Select number */ 59 + u8 flags; 60 + #define AT91_CF_TRUE_IDE 0x01 61 + #define AT91_IDE_SWAP_A0_A2 0x02 59 62 }; 60 63 extern void __init at91_add_device_cf(struct at91_cf_data *data); 61 64
-1
arch/arm/mach-at91/pm.c
··· 332 332 at91_sys_read(AT91_AIC_IPR) & at91_sys_read(AT91_AIC_IMR)); 333 333 334 334 error: 335 - sdram_selfrefresh_disable(); 336 335 target_state = PM_SUSPEND_ON; 337 336 at91_irq_resume(); 338 337 at91_gpio_resume();
+1 -1
arch/arm/mach-omap2/board-ldp.c
··· 81 81 } 82 82 83 83 ldp_smc911x_resources[0].start = cs_mem_base + 0x0; 84 - ldp_smc911x_resources[0].end = cs_mem_base + 0xf; 84 + ldp_smc911x_resources[0].end = cs_mem_base + 0xff; 85 85 udelay(100); 86 86 87 87 eth_gpio = LDP_SMC911X_GPIO;
+2 -1
arch/arm/mm/abort-ev6.S
··· 23 23 #ifdef CONFIG_CPU_32v6K 24 24 clrex 25 25 #else 26 - strex r0, r1, [sp] @ Clear the exclusive monitor 26 + sub r1, sp, #4 @ Get unused stack location 27 + strex r0, r1, [r1] @ Clear the exclusive monitor 27 28 #endif 28 29 mrc p15, 0, r1, c5, c0, 0 @ get FSR 29 30 mrc p15, 0, r0, c6, c0, 0 @ get FAR
+1 -1
arch/arm/plat-s3c64xx/irq-eint.c
··· 55 55 u32 mask; 56 56 57 57 mask = __raw_readl(S3C64XX_EINT0MASK); 58 - mask |= eint_irq_to_bit(irq); 58 + mask &= ~eint_irq_to_bit(irq); 59 59 __raw_writel(mask, S3C64XX_EINT0MASK); 60 60 } 61 61
+1 -1
arch/avr32/Kconfig
··· 181 181 config QUICKLIST 182 182 def_bool y 183 183 184 - config HAVE_ARCH_BOOTMEM_NODE 184 + config HAVE_ARCH_BOOTMEM 185 185 def_bool n 186 186 187 187 config ARCH_HAVE_MEMORY_PRESENT
+7
arch/blackfin/Kconfig
··· 1129 1129 1130 1130 config PM_WAKEUP_BY_GPIO 1131 1131 bool "Allow Wakeup from Standby by GPIO" 1132 + depends on PM && !BF54x 1132 1133 1133 1134 config PM_WAKEUP_GPIO_NUMBER 1134 1135 int "GPIO number" ··· 1169 1168 default n 1170 1169 help 1171 1170 Enable General-Purpose Wake-Up (Voltage Regulator Power-Up) 1171 + (all processors, except ADSP-BF549). This option sets 1172 + the general-purpose wake-up enable (GPWE) control bit to enable 1173 + wake-up upon detection of an active low signal on the /GPW (PH7) pin. 1174 + On ADSP-BF549 this option enables the same functionality on the 1175 + /MRXON pin (also PH7). 1176 + 1172 1177 endmenu 1173 1178 1174 1179 menu "CPU Frequency scaling"
-6
arch/blackfin/Kconfig.debug
··· 21 21 config HAVE_ARCH_KGDB 22 22 def_bool y 23 23 24 - config KGDB_TESTCASE 25 - tristate "KGDB: for test case in expect" 26 - default n 27 - help 28 - This is a kgdb test case for automated testing. 29 - 30 24 config DEBUG_VERBOSE 31 25 bool "Verbose fault messages" 32 26 default y
+59 -4
arch/blackfin/configs/BF518F-EZBRD_defconfig
··· 1 1 # 2 2 # Automatically generated make config: don't edit 3 - # Linux kernel version: 2.6.28-rc2 4 - # Fri Jan 9 17:58:41 2009 3 + # Linux kernel version: 2.6.28 4 + # Fri Feb 20 10:01:44 2009 5 5 # 6 6 # CONFIG_MMU is not set 7 7 # CONFIG_FPU is not set ··· 133 133 # CONFIG_BF538 is not set 134 134 # CONFIG_BF539 is not set 135 135 # CONFIG_BF542 is not set 136 + # CONFIG_BF542M is not set 136 137 # CONFIG_BF544 is not set 138 + # CONFIG_BF544M is not set 137 139 # CONFIG_BF547 is not set 140 + # CONFIG_BF547M is not set 138 141 # CONFIG_BF548 is not set 142 + # CONFIG_BF548M is not set 139 143 # CONFIG_BF549 is not set 144 + # CONFIG_BF549M is not set 140 145 # CONFIG_BF561 is not set 141 146 CONFIG_BF_REV_MIN=0 142 147 CONFIG_BF_REV_MAX=2 ··· 431 426 # CONFIG_TIPC is not set 432 427 # CONFIG_ATM is not set 433 428 # CONFIG_BRIDGE is not set 434 - # CONFIG_NET_DSA is not set 429 + CONFIG_NET_DSA=y 430 + # CONFIG_NET_DSA_TAG_DSA is not set 431 + # CONFIG_NET_DSA_TAG_EDSA is not set 432 + # CONFIG_NET_DSA_TAG_TRAILER is not set 433 + CONFIG_NET_DSA_TAG_STPID=y 434 + # CONFIG_NET_DSA_MV88E6XXX is not set 435 + # CONFIG_NET_DSA_MV88E6060 is not set 436 + # CONFIG_NET_DSA_MV88E6XXX_NEED_PPU is not set 437 + # CONFIG_NET_DSA_MV88E6131 is not set 438 + # CONFIG_NET_DSA_MV88E6123_61_65 is not set 439 + CONFIG_NET_DSA_KSZ8893M=y 435 440 # CONFIG_VLAN_8021Q is not set 436 441 # CONFIG_DECNET is not set 437 442 # CONFIG_LLC2 is not set ··· 544 529 # 545 530 # Self-contained MTD device drivers 546 531 # 532 + # CONFIG_MTD_DATAFLASH is not set 533 + # CONFIG_MTD_M25P80 is not set 547 534 # CONFIG_MTD_SLRAM is not set 548 535 # CONFIG_MTD_PHRAM is not set 549 536 # CONFIG_MTD_MTDRAM is not set ··· 578 561 # CONFIG_BLK_DEV_HD is not set 579 562 CONFIG_MISC_DEVICES=y 580 563 # CONFIG_EEPROM_93CX6 is not set 564 + # CONFIG_ICS932S401 is not set 581 565 # CONFIG_ENCLOSURE_SERVICES is not set 566 + # CONFIG_C2PORT is not set 582 567 CONFIG_HAVE_IDE=y 583 568 # CONFIG_IDE is 
not set 584 569 ··· 626 607 # CONFIG_SMC91X is not set 627 608 # CONFIG_SMSC911X is not set 628 609 # CONFIG_DM9000 is not set 610 + # CONFIG_ENC28J60 is not set 629 611 # CONFIG_IBM_NEW_EMAC_ZMII is not set 630 612 # CONFIG_IBM_NEW_EMAC_RGMII is not set 631 613 # CONFIG_IBM_NEW_EMAC_TAH is not set ··· 784 764 # CONFIG_I2C_DEBUG_ALGO is not set 785 765 # CONFIG_I2C_DEBUG_BUS is not set 786 766 # CONFIG_I2C_DEBUG_CHIP is not set 787 - # CONFIG_SPI is not set 767 + CONFIG_SPI=y 768 + # CONFIG_SPI_DEBUG is not set 769 + CONFIG_SPI_MASTER=y 770 + 771 + # 772 + # SPI Master Controller Drivers 773 + # 774 + CONFIG_SPI_BFIN=y 775 + # CONFIG_SPI_BFIN_LOCK is not set 776 + # CONFIG_SPI_BITBANG is not set 777 + 778 + # 779 + # SPI Protocol Masters 780 + # 781 + # CONFIG_SPI_AT25 is not set 782 + # CONFIG_SPI_SPIDEV is not set 783 + # CONFIG_SPI_TLE62X0 is not set 788 784 CONFIG_ARCH_WANT_OPTIONAL_GPIOLIB=y 789 785 # CONFIG_GPIOLIB is not set 790 786 # CONFIG_W1 is not set ··· 824 788 # CONFIG_MFD_SM501 is not set 825 789 # CONFIG_HTC_PASIC3 is not set 826 790 # CONFIG_MFD_TMIO is not set 791 + # CONFIG_PMIC_DA903X is not set 827 792 # CONFIG_MFD_WM8400 is not set 828 793 # CONFIG_MFD_WM8350_I2C is not set 794 + # CONFIG_REGULATOR is not set 829 795 830 796 # 831 797 # Multimedia devices ··· 899 861 # CONFIG_RTC_DRV_M41T80 is not set 900 862 # CONFIG_RTC_DRV_S35390A is not set 901 863 # CONFIG_RTC_DRV_FM3130 is not set 864 + # CONFIG_RTC_DRV_RX8581 is not set 902 865 903 866 # 904 867 # SPI RTC drivers 905 868 # 869 + # CONFIG_RTC_DRV_M41T94 is not set 870 + # CONFIG_RTC_DRV_DS1305 is not set 871 + # CONFIG_RTC_DRV_DS1390 is not set 872 + # CONFIG_RTC_DRV_MAX6902 is not set 873 + # CONFIG_RTC_DRV_R9701 is not set 874 + # CONFIG_RTC_DRV_RS5C348 is not set 875 + # CONFIG_RTC_DRV_DS3234 is not set 906 876 907 877 # 908 878 # Platform RTC drivers ··· 1108 1062 # CONFIG_DEBUG_BLOCK_EXT_DEVT is not set 1109 1063 # CONFIG_FAULT_INJECTION is not set 1110 1064 
CONFIG_SYSCTL_SYSCALL_CHECK=y 1065 + 1066 + # 1067 + # Tracers 1068 + # 1069 + # CONFIG_SCHED_TRACER is not set 1070 + # CONFIG_CONTEXT_SWITCH_TRACER is not set 1071 + # CONFIG_BOOT_TRACER is not set 1111 1072 # CONFIG_DYNAMIC_PRINTK_DEBUG is not set 1112 1073 # CONFIG_SAMPLES is not set 1113 1074 CONFIG_HAVE_ARCH_KGDB=y 1114 1075 # CONFIG_KGDB is not set 1115 1076 # CONFIG_DEBUG_STACKOVERFLOW is not set 1116 1077 # CONFIG_DEBUG_STACK_USAGE is not set 1078 + # CONFIG_KGDB_TESTCASE is not set 1117 1079 CONFIG_DEBUG_VERBOSE=y 1118 1080 CONFIG_DEBUG_MMRS=y 1119 1081 # CONFIG_DEBUG_HWERR is not set ··· 1154 1100 # 1155 1101 # CONFIG_CRYPTO_FIPS is not set 1156 1102 # CONFIG_CRYPTO_MANAGER is not set 1103 + # CONFIG_CRYPTO_MANAGER2 is not set 1157 1104 # CONFIG_CRYPTO_GF128MUL is not set 1158 1105 # CONFIG_CRYPTO_NULL is not set 1159 1106 # CONFIG_CRYPTO_CRYPTD is not set
+2 -2
arch/blackfin/configs/BF527-EZKIT_defconfig
··· 327 327 CONFIG_BFIN_DCACHE=y 328 328 # CONFIG_BFIN_DCACHE_BANKA is not set 329 329 # CONFIG_BFIN_ICACHE_LOCK is not set 330 - # CONFIG_BFIN_WB is not set 331 - CONFIG_BFIN_WT=y 330 + CONFIG_BFIN_WB=y 331 + # CONFIG_BFIN_WT is not set 332 332 # CONFIG_MPU is not set 333 333 334 334 #
+2 -2
arch/blackfin/configs/BF533-EZKIT_defconfig
··· 290 290 CONFIG_BFIN_DCACHE=y 291 291 # CONFIG_BFIN_DCACHE_BANKA is not set 292 292 # CONFIG_BFIN_ICACHE_LOCK is not set 293 - # CONFIG_BFIN_WB is not set 294 - CONFIG_BFIN_WT=y 293 + CONFIG_BFIN_WB=y 294 + # CONFIG_BFIN_WT is not set 295 295 # CONFIG_MPU is not set 296 296 297 297 #
+2 -2
arch/blackfin/configs/BF533-STAMP_defconfig
··· 290 290 CONFIG_BFIN_DCACHE=y 291 291 # CONFIG_BFIN_DCACHE_BANKA is not set 292 292 # CONFIG_BFIN_ICACHE_LOCK is not set 293 - # CONFIG_BFIN_WB is not set 294 - CONFIG_BFIN_WT=y 293 + CONFIG_BFIN_WB=y 294 + # CONFIG_BFIN_WT is not set 295 295 # CONFIG_MPU is not set 296 296 297 297 #
+3 -11
arch/blackfin/configs/BF537-STAMP_defconfig
··· 298 298 CONFIG_BFIN_DCACHE=y 299 299 # CONFIG_BFIN_DCACHE_BANKA is not set 300 300 # CONFIG_BFIN_ICACHE_LOCK is not set 301 - # CONFIG_BFIN_WB is not set 302 - CONFIG_BFIN_WT=y 301 + CONFIG_BFIN_WB=y 302 + # CONFIG_BFIN_WT is not set 303 303 # CONFIG_MPU is not set 304 304 305 305 # ··· 568 568 # CONFIG_MTD_DOC2000 is not set 569 569 # CONFIG_MTD_DOC2001 is not set 570 570 # CONFIG_MTD_DOC2001PLUS is not set 571 - CONFIG_MTD_NAND=m 572 - # CONFIG_MTD_NAND_VERIFY_WRITE is not set 573 - # CONFIG_MTD_NAND_ECC_SMC is not set 574 - # CONFIG_MTD_NAND_MUSEUM_IDS is not set 575 - # CONFIG_MTD_NAND_BFIN is not set 576 - CONFIG_MTD_NAND_IDS=m 577 - # CONFIG_MTD_NAND_DISKONCHIP is not set 578 - # CONFIG_MTD_NAND_NANDSIM is not set 579 - CONFIG_MTD_NAND_PLATFORM=m 571 + # CONFIG_MTD_NAND is not set 580 572 # CONFIG_MTD_ONENAND is not set 581 573 582 574 #
+2 -2
arch/blackfin/configs/BF538-EZKIT_defconfig
··· 306 306 CONFIG_BFIN_DCACHE=y 307 307 # CONFIG_BFIN_DCACHE_BANKA is not set 308 308 # CONFIG_BFIN_ICACHE_LOCK is not set 309 - # CONFIG_BFIN_WB is not set 310 - CONFIG_BFIN_WT=y 309 + CONFIG_BFIN_WB=y 310 + # CONFIG_BFIN_WT is not set 311 311 # CONFIG_MPU is not set 312 312 313 313 #
+3 -3
arch/blackfin/configs/BF548-EZKIT_defconfig
··· 361 361 CONFIG_BFIN_DCACHE=y 362 362 # CONFIG_BFIN_DCACHE_BANKA is not set 363 363 # CONFIG_BFIN_ICACHE_LOCK is not set 364 - # CONFIG_BFIN_WB is not set 365 - CONFIG_BFIN_WT=y 364 + CONFIG_BFIN_WB=y 365 + # CONFIG_BFIN_WT is not set 366 366 # CONFIG_BFIN_L2_CACHEABLE is not set 367 367 # CONFIG_MPU is not set 368 368 ··· 680 680 CONFIG_SCSI_DMA=y 681 681 # CONFIG_SCSI_TGT is not set 682 682 # CONFIG_SCSI_NETLINK is not set 683 - CONFIG_SCSI_PROC_FS=y 683 + # CONFIG_SCSI_PROC_FS is not set 684 684 685 685 # 686 686 # SCSI support type (disk, tape, CD-ROM)
+2 -2
arch/blackfin/configs/BF561-EZKIT_defconfig
··· 329 329 CONFIG_BFIN_DCACHE=y 330 330 # CONFIG_BFIN_DCACHE_BANKA is not set 331 331 # CONFIG_BFIN_ICACHE_LOCK is not set 332 - # CONFIG_BFIN_WB is not set 333 - CONFIG_BFIN_WT=y 332 + CONFIG_BFIN_WB=y 333 + # CONFIG_BFIN_WT is not set 334 334 # CONFIG_BFIN_L2_CACHEABLE is not set 335 335 # CONFIG_MPU is not set 336 336
+2 -2
arch/blackfin/configs/BlackStamp_defconfig
··· 288 288 CONFIG_BFIN_DCACHE=y 289 289 # CONFIG_BFIN_DCACHE_BANKA is not set 290 290 # CONFIG_BFIN_ICACHE_LOCK is not set 291 - # CONFIG_BFIN_WB is not set 292 - CONFIG_BFIN_WT=y 291 + CONFIG_BFIN_WB=y 292 + # CONFIG_BFIN_WT is not set 293 293 # CONFIG_MPU is not set 294 294 295 295 #
+2 -2
arch/blackfin/configs/CM-BF527_defconfig
··· 332 332 CONFIG_BFIN_DCACHE=y 333 333 # CONFIG_BFIN_DCACHE_BANKA is not set 334 334 # CONFIG_BFIN_ICACHE_LOCK is not set 335 - # CONFIG_BFIN_WB is not set 336 - CONFIG_BFIN_WT=y 335 + CONFIG_BFIN_WB=y 336 + # CONFIG_BFIN_WT is not set 337 337 # CONFIG_MPU is not set 338 338 339 339 #
+3 -3
arch/blackfin/configs/CM-BF548_defconfig
··· 336 336 CONFIG_BFIN_DCACHE=y 337 337 # CONFIG_BFIN_DCACHE_BANKA is not set 338 338 # CONFIG_BFIN_ICACHE_LOCK is not set 339 - # CONFIG_BFIN_WB is not set 340 - CONFIG_BFIN_WT=y 339 + CONFIG_BFIN_WB=y 340 + # CONFIG_BFIN_WT is not set 341 341 CONFIG_L1_MAX_PIECE=16 342 342 # CONFIG_MPU is not set 343 343 ··· 595 595 CONFIG_SCSI_DMA=y 596 596 # CONFIG_SCSI_TGT is not set 597 597 # CONFIG_SCSI_NETLINK is not set 598 - CONFIG_SCSI_PROC_FS=y 598 + # CONFIG_SCSI_PROC_FS is not set 599 599 600 600 # 601 601 # SCSI support type (disk, tape, CD-ROM)
+1 -1
arch/blackfin/configs/IP0X_defconfig
··· 612 612 CONFIG_SCSI=y 613 613 # CONFIG_SCSI_TGT is not set 614 614 # CONFIG_SCSI_NETLINK is not set 615 - CONFIG_SCSI_PROC_FS=y 615 + # CONFIG_SCSI_PROC_FS is not set 616 616 617 617 # 618 618 # SCSI support type (disk, tape, CD-ROM)
+2 -2
arch/blackfin/configs/SRV1_defconfig
··· 282 282 CONFIG_BFIN_DCACHE=y 283 283 # CONFIG_BFIN_DCACHE_BANKA is not set 284 284 # CONFIG_BFIN_ICACHE_LOCK is not set 285 - # CONFIG_BFIN_WB is not set 286 - CONFIG_BFIN_WT=y 285 + CONFIG_BFIN_WB=y 286 + # CONFIG_BFIN_WT is not set 287 287 CONFIG_L1_MAX_PIECE=16 288 288 289 289 #
+1
arch/blackfin/include/asm/Kbuild
··· 1 1 include include/asm-generic/Kbuild.asm 2 2 3 + unifdef-y += bfin_sport.h 3 4 unifdef-y += fixed_code.h
+14 -31
arch/blackfin/include/asm/bfin_sport.h
··· 1 1 /* 2 - * File: include/asm-blackfin/bfin_sport.h 3 - * Based on: 4 - * Author: Roy Huang (roy.huang@analog.com) 2 + * bfin_sport.h - userspace header for bfin sport driver 5 3 * 6 - * Created: Thu Aug. 24 2006 7 - * Description: 4 + * Copyright 2004-2008 Analog Devices Inc. 8 5 * 9 - * Modified: 10 - * Copyright 2004-2006 Analog Devices Inc. 11 - * 12 - * Bugs: Enter bugs at http://blackfin.uclinux.org/ 13 - * 14 - * This program is free software; you can redistribute it and/or modify 15 - * it under the terms of the GNU General Public License as published by 16 - * the Free Software Foundation; either version 2 of the License, or 17 - * (at your option) any later version. 18 - * 19 - * This program is distributed in the hope that it will be useful, 20 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 21 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 22 - * GNU General Public License for more details. 23 - * 24 - * You should have received a copy of the GNU General Public License 25 - * along with this program; if not, see the file COPYING, or write 26 - * to the Free Software Foundation, Inc., 27 - * 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA 6 + * Licensed under the GPL-2 or later. 
28 7 */ 29 8 30 9 #ifndef __BFIN_SPORT_H__ ··· 21 42 #define NORM_FORMAT 0x0 22 43 #define ALAW_FORMAT 0x2 23 44 #define ULAW_FORMAT 0x3 24 - struct sport_register; 25 45 26 46 /* Function driver which use sport must initialize the structure */ 27 47 struct sport_config { 28 - /*TDM (multichannels), I2S or other mode */ 48 + /* TDM (multichannels), I2S or other mode */ 29 49 unsigned int mode:3; 30 50 31 51 /* if TDM mode is selected, channels must be set */ ··· 50 72 int serial_clk; 51 73 int fsync_clk; 52 74 53 - unsigned int data_format:2; /*Normal, u-law or a-law */ 75 + unsigned int data_format:2; /* Normal, u-law or a-law */ 54 76 55 77 int word_len; /* How length of the word in bits, 3-32 bits */ 56 78 int dma_enabled; 57 79 }; 80 + 81 + /* Userspace interface */ 82 + #define SPORT_IOC_MAGIC 'P' 83 + #define SPORT_IOC_CONFIG _IOWR('P', 0x01, struct sport_config) 84 + 85 + #ifdef __KERNEL__ 58 86 59 87 struct sport_register { 60 88 unsigned short tcr1; ··· 101 117 unsigned long mrcs3; 102 118 }; 103 119 104 - #define SPORT_IOC_MAGIC 'P' 105 - #define SPORT_IOC_CONFIG _IOWR('P', 0x01, struct sport_config) 106 - 107 120 struct sport_dev { 108 121 struct cdev cdev; /* Char device structure */ 109 122 ··· 130 149 struct sport_config config; 131 150 }; 132 151 152 + #endif 153 + 133 154 #define SPORT_TCR1 0 134 155 #define SPORT_TCR2 1 135 156 #define SPORT_TCLKDIV 2 ··· 152 169 #define SPORT_MRCS2 22 153 170 #define SPORT_MRCS3 23 154 171 155 - #endif /*__BFIN_SPORT_H__*/ 172 + #endif
+36 -64
arch/blackfin/include/asm/ipipe.h
··· 35 35 #include <asm/atomic.h> 36 36 #include <asm/traps.h> 37 37 38 - #define IPIPE_ARCH_STRING "1.8-00" 38 + #define IPIPE_ARCH_STRING "1.9-00" 39 39 #define IPIPE_MAJOR_NUMBER 1 40 - #define IPIPE_MINOR_NUMBER 8 40 + #define IPIPE_MINOR_NUMBER 9 41 41 #define IPIPE_PATCH_NUMBER 0 42 42 43 43 #ifdef CONFIG_SMP ··· 83 83 "%2 = CYCLES2\n" \ 84 84 "CC = %2 == %0\n" \ 85 85 "if ! CC jump 1b\n" \ 86 - : "=r" (((unsigned long *)&t)[1]), \ 87 - "=r" (((unsigned long *)&t)[0]), \ 88 - "=r" (__cy2) \ 86 + : "=d,a" (((unsigned long *)&t)[1]), \ 87 + "=d,a" (((unsigned long *)&t)[0]), \ 88 + "=d,a" (__cy2) \ 89 89 : /*no input*/ : "CC"); \ 90 90 t; \ 91 91 }) ··· 118 118 119 119 #define __ipipe_disable_irq(irq) (irq_desc[irq].chip->mask(irq)) 120 120 121 - #define __ipipe_lock_root() \ 122 - set_bit(IPIPE_ROOTLOCK_FLAG, &ipipe_root_domain->flags) 121 + static inline int __ipipe_check_tickdev(const char *devname) 122 + { 123 + return 1; 124 + } 123 125 124 - #define __ipipe_unlock_root() \ 125 - clear_bit(IPIPE_ROOTLOCK_FLAG, &ipipe_root_domain->flags) 126 + static inline void __ipipe_lock_root(void) 127 + { 128 + set_bit(IPIPE_SYNCDEFER_FLAG, &ipipe_root_cpudom_var(status)); 129 + } 130 + 131 + static inline void __ipipe_unlock_root(void) 132 + { 133 + clear_bit(IPIPE_SYNCDEFER_FLAG, &ipipe_root_cpudom_var(status)); 134 + } 126 135 127 136 void __ipipe_enable_pipeline(void); 128 137 129 138 #define __ipipe_hook_critical_ipi(ipd) do { } while (0) 130 139 131 - #define __ipipe_sync_pipeline(syncmask) \ 132 - do { \ 133 - struct ipipe_domain *ipd = ipipe_current_domain; \ 134 - if (likely(ipd != ipipe_root_domain || !test_bit(IPIPE_ROOTLOCK_FLAG, &ipd->flags))) \ 135 - __ipipe_sync_stage(syncmask); \ 136 - } while (0) 140 + #define __ipipe_sync_pipeline ___ipipe_sync_pipeline 141 + void ___ipipe_sync_pipeline(unsigned long syncmask); 137 142 138 143 void __ipipe_handle_irq(unsigned irq, struct pt_regs *regs); 139 144 140 145 int __ipipe_get_irq_priority(unsigned irq); 141 - 
142 - int __ipipe_get_irqthread_priority(unsigned irq); 143 146 144 147 void __ipipe_stall_root_raw(void); 145 148 146 149 void __ipipe_unstall_root_raw(void); 147 150 148 151 void __ipipe_serial_debug(const char *fmt, ...); 152 + 153 + asmlinkage void __ipipe_call_irqtail(unsigned long addr); 149 154 150 155 DECLARE_PER_CPU(struct pt_regs, __ipipe_tick_regs); 151 156 ··· 167 162 168 163 #define __ipipe_run_irqtail() /* Must be a macro */ \ 169 164 do { \ 170 - asmlinkage void __ipipe_call_irqtail(void); \ 171 165 unsigned long __pending; \ 172 - CSYNC(); \ 166 + CSYNC(); \ 173 167 __pending = bfin_read_IPEND(); \ 174 168 if (__pending & 0x8000) { \ 175 169 __pending &= ~0x8010; \ 176 170 if (__pending && (__pending & (__pending - 1)) == 0) \ 177 - __ipipe_call_irqtail(); \ 171 + __ipipe_call_irqtail(__ipipe_irq_tail_hook); \ 178 172 } \ 179 173 } while (0) 180 174 181 175 #define __ipipe_run_isr(ipd, irq) \ 182 176 do { \ 183 177 if (ipd == ipipe_root_domain) { \ 184 - /* \ 185 - * Note: the I-pipe implements a threaded interrupt model on \ 186 - * this arch for Linux external IRQs. The interrupt handler we \ 187 - * call here only wakes up the associated IRQ thread. \ 188 - */ \ 189 - if (ipipe_virtual_irq_p(irq)) { \ 190 - /* No irqtail here; virtual interrupts have no effect \ 191 - on IPEND so there is no need for processing \ 192 - deferral. */ \ 193 - local_irq_enable_nohead(ipd); \ 178 + local_irq_enable_hw(); \ 179 + if (ipipe_virtual_irq_p(irq)) \ 194 180 ipd->irqs[irq].handler(irq, ipd->irqs[irq].cookie); \ 195 - local_irq_disable_nohead(ipd); \ 196 - } else \ 197 - /* \ 198 - * No need to run the irqtail here either; \ 199 - * we can't be preempted by hw IRQs, so \ 200 - * non-Linux IRQs cannot stack over the short \ 201 - * thread wakeup code. Which in turn means \ 202 - * that no irqtail condition could be pending \ 203 - * for domains above Linux in the pipeline. 
\ 204 - */ \ 181 + else \ 205 182 ipd->irqs[irq].handler(irq, &__raw_get_cpu_var(__ipipe_tick_regs)); \ 183 + local_irq_disable_hw(); \ 206 184 } else { \ 207 185 __clear_bit(IPIPE_SYNC_FLAG, &ipipe_cpudom_var(ipd, status)); \ 208 186 local_irq_enable_nohead(ipd); \ ··· 205 217 206 218 int ipipe_start_irq_thread(unsigned irq, struct irq_desc *desc); 207 219 208 - #define IS_SYSIRQ(irq) ((irq) > IRQ_CORETMR && (irq) <= SYS_IRQS) 209 - #define IS_GPIOIRQ(irq) ((irq) >= GPIO_IRQ_BASE && (irq) < NR_IRQS) 210 - 220 + #ifdef CONFIG_GENERIC_CLOCKEVENTS 221 + #define IRQ_SYSTMR IRQ_CORETMR 222 + #define IRQ_PRIOTMR IRQ_CORETMR 223 + #else 211 224 #define IRQ_SYSTMR IRQ_TIMER0 212 225 #define IRQ_PRIOTMR CONFIG_IRQ_TIMER0 226 + #endif 213 227 214 - #if defined(CONFIG_BF531) || defined(CONFIG_BF532) || defined(CONFIG_BF533) 215 - #define PRIO_GPIODEMUX(irq) CONFIG_PFA 216 - #elif defined(CONFIG_BF534) || defined(CONFIG_BF536) || defined(CONFIG_BF537) 217 - #define PRIO_GPIODEMUX(irq) CONFIG_IRQ_PROG_INTA 218 - #elif defined(CONFIG_BF52x) 219 - #define PRIO_GPIODEMUX(irq) ((irq) == IRQ_PORTF_INTA ? CONFIG_IRQ_PORTF_INTA : \ 220 - (irq) == IRQ_PORTG_INTA ? CONFIG_IRQ_PORTG_INTA : \ 221 - (irq) == IRQ_PORTH_INTA ? CONFIG_IRQ_PORTH_INTA : \ 222 - -1) 223 - #elif defined(CONFIG_BF561) 224 - #define PRIO_GPIODEMUX(irq) ((irq) == IRQ_PROG0_INTA ? CONFIG_IRQ_PROG0_INTA : \ 225 - (irq) == IRQ_PROG1_INTA ? CONFIG_IRQ_PROG1_INTA : \ 226 - (irq) == IRQ_PROG2_INTA ? CONFIG_IRQ_PROG2_INTA : \ 227 - -1) 228 + #ifdef CONFIG_BF561 228 229 #define bfin_write_TIMER_DISABLE(val) bfin_write_TMRS8_DISABLE(val) 229 230 #define bfin_write_TIMER_ENABLE(val) bfin_write_TMRS8_ENABLE(val) 230 231 #define bfin_write_TIMER_STATUS(val) bfin_write_TMRS8_STATUS(val) 231 232 #define bfin_read_TIMER_STATUS() bfin_read_TMRS8_STATUS() 232 233 #elif defined(CONFIG_BF54x) 233 - #define PRIO_GPIODEMUX(irq) ((irq) == IRQ_PINT0 ? CONFIG_IRQ_PINT0 : \ 234 - (irq) == IRQ_PINT1 ? 
CONFIG_IRQ_PINT1 : \ 235 - (irq) == IRQ_PINT2 ? CONFIG_IRQ_PINT2 : \ 236 - (irq) == IRQ_PINT3 ? CONFIG_IRQ_PINT3 : \ 237 - -1) 238 234 #define bfin_write_TIMER_DISABLE(val) bfin_write_TIMER_DISABLE0(val) 239 235 #define bfin_write_TIMER_ENABLE(val) bfin_write_TIMER_ENABLE0(val) 240 236 #define bfin_write_TIMER_STATUS(val) bfin_write_TIMER_STATUS0(val) 241 237 #define bfin_read_TIMER_STATUS(val) bfin_read_TIMER_STATUS0(val) 242 - #else 243 - # error "no PRIO_GPIODEMUX() for this part" 244 238 #endif 245 239 246 240 #define __ipipe_root_tick_p(regs) ((regs->ipend & 0x10) != 0) ··· 244 274 #define __ipipe_root_tick_p(regs) 1 245 275 246 276 #endif /* !CONFIG_IPIPE */ 277 + 278 + #define ipipe_update_tick_evtdev(evtdev) do { } while (0) 247 279 248 280 #endif /* !__ASM_BLACKFIN_IPIPE_H */
+4 -8
arch/blackfin/include/asm/ipipe_base.h
··· 1 1 /* -*- linux-c -*- 2 - * include/asm-blackfin/_baseipipe.h 2 + * include/asm-blackfin/ipipe_base.h 3 3 * 4 4 * Copyright (C) 2007 Philippe Gerum. 5 5 * ··· 27 27 #define IPIPE_NR_XIRQS NR_IRQS 28 28 #define IPIPE_IRQ_ISHIFT 5 /* 2^5 for 32bits arch. */ 29 29 30 - /* Blackfin-specific, global domain flags */ 31 - #define IPIPE_ROOTLOCK_FLAG 1 /* Lock pipeline for root */ 30 + /* Blackfin-specific, per-cpu pipeline status */ 31 + #define IPIPE_SYNCDEFER_FLAG 15 32 + #define IPIPE_SYNCDEFER_MASK (1L << IPIPE_SYNCDEFER_FLAG) 32 33 33 34 /* Blackfin traps -- i.e. exception vector numbers */ 34 35 #define IPIPE_NR_FAULTS 52 /* We leave a gap after VEC_ILL_RES. */ ··· 48 47 #define IPIPE_TIMER_IRQ IRQ_CORETMR 49 48 50 49 #ifndef __ASSEMBLY__ 51 - 52 - #include <linux/bitops.h> 53 - 54 - extern int test_bit(int nr, const void *addr); 55 - 56 50 57 51 extern unsigned long __ipipe_root_status; /* Alias to ipipe_root_cpudom_var(status) */ 58 52
+27 -9
arch/blackfin/include/asm/irq.h
··· 61 61 #define raw_irqs_disabled_flags(flags) (!irqs_enabled_from_flags_hw(flags)) 62 62 #define local_test_iflag_hw(x) irqs_enabled_from_flags_hw(x) 63 63 64 - #define local_save_flags(x) \ 65 - do { \ 66 - (x) = __ipipe_test_root() ? \ 64 + #define local_save_flags(x) \ 65 + do { \ 66 + (x) = __ipipe_test_root() ? \ 67 67 __all_masked_irq_flags : bfin_irq_flags; \ 68 + barrier(); \ 68 69 } while (0) 69 70 70 - #define local_irq_save(x) \ 71 - do { \ 72 - (x) = __ipipe_test_and_stall_root(); \ 71 + #define local_irq_save(x) \ 72 + do { \ 73 + (x) = __ipipe_test_and_stall_root() ? \ 74 + __all_masked_irq_flags : bfin_irq_flags; \ 75 + barrier(); \ 73 76 } while (0) 74 77 75 - #define local_irq_restore(x) __ipipe_restore_root(x) 76 - #define local_irq_disable() __ipipe_stall_root() 77 - #define local_irq_enable() __ipipe_unstall_root() 78 + static inline void local_irq_restore(unsigned long x) 79 + { 80 + barrier(); 81 + __ipipe_restore_root(x == __all_masked_irq_flags); 82 + } 83 + 84 + #define local_irq_disable() \ 85 + do { \ 86 + __ipipe_stall_root(); \ 87 + barrier(); \ 88 + } while (0) 89 + 90 + static inline void local_irq_enable(void) 91 + { 92 + barrier(); 93 + __ipipe_unstall_root(); 94 + } 95 + 78 96 #define irqs_disabled() __ipipe_test_root() 79 97 80 98 #define local_save_flags_hw(x) \
-10
arch/blackfin/include/asm/percpu.h
··· 3 3 4 4 #include <asm-generic/percpu.h> 5 5 6 - #ifdef CONFIG_MODULES 7 - #define PERCPU_MODULE_RESERVE 8192 8 - #else 9 - #define PERCPU_MODULE_RESERVE 0 10 - #endif 11 - 12 - #define PERCPU_ENOUGH_ROOM \ 13 - (ALIGN(__per_cpu_end - __per_cpu_start, SMP_CACHE_BYTES) + \ 14 - PERCPU_MODULE_RESERVE) 15 - 16 6 #endif /* __ARCH_BLACKFIN_PERCPU__ */
+2
arch/blackfin/include/asm/thread_info.h
··· 122 122 #define TIF_MEMDIE 4 123 123 #define TIF_RESTORE_SIGMASK 5 /* restore signal mask in do_signal() */ 124 124 #define TIF_FREEZE 6 /* is freezing for suspend */ 125 + #define TIF_IRQ_SYNC 7 /* sync pipeline stage */ 125 126 126 127 /* as above, but as bit values */ 127 128 #define _TIF_SYSCALL_TRACE (1<<TIF_SYSCALL_TRACE) ··· 131 130 #define _TIF_POLLING_NRFLAG (1<<TIF_POLLING_NRFLAG) 132 131 #define _TIF_RESTORE_SIGMASK (1<<TIF_RESTORE_SIGMASK) 133 132 #define _TIF_FREEZE (1<<TIF_FREEZE) 133 + #define _TIF_IRQ_SYNC (1<<TIF_IRQ_SYNC) 134 134 135 135 #define _TIF_WORK_MASK 0x0000FFFE /* work to do on interrupt/exception return */ 136 136
+5 -3
arch/blackfin/kernel/Makefile
··· 15 15 obj-y += time.o 16 16 endif 17 17 18 - CFLAGS_kgdb_test.o := -mlong-calls -O0 19 - 20 18 obj-$(CONFIG_IPIPE) += ipipe.o 21 19 obj-$(CONFIG_IPIPE_TRACE_MCOUNT) += mcount.o 22 20 obj-$(CONFIG_BFIN_GPTIMERS) += gptimers.o 23 21 obj-$(CONFIG_CPLB_INFO) += cplbinfo.o 24 22 obj-$(CONFIG_MODULES) += module.o 25 23 obj-$(CONFIG_KGDB) += kgdb.o 26 - obj-$(CONFIG_KGDB_TESTCASE) += kgdb_test.o 24 + obj-$(CONFIG_KGDB_TESTS) += kgdb_test.o 27 25 obj-$(CONFIG_EARLY_PRINTK) += early_printk.o 26 + 27 + # the kgdb test puts code into L2 and without linker 28 + # relaxation, we need to force long calls to/from it 29 + CFLAGS_kgdb_test.o := -mlong-calls -O0
+4
arch/blackfin/kernel/cplb-nompu/cplbinit.c
··· 53 53 54 54 i_d = i_i = 0; 55 55 56 + #ifdef CONFIG_DEBUG_HUNT_FOR_ZERO 56 57 /* Set up the zero page. */ 57 58 d_tbl[i_d].addr = 0; 58 59 d_tbl[i_d++].data = SDRAM_OOPS | PAGE_SIZE_1KB; 60 + i_tbl[i_i].addr = 0; 61 + i_tbl[i_i++].data = SDRAM_OOPS | PAGE_SIZE_1KB; 62 + #endif 59 63 60 64 /* Cover kernel memory with 4M pages. */ 61 65 addr = 0;
+48 -132
arch/blackfin/kernel/ipipe.c
··· 35 35 #include <asm/atomic.h> 36 36 #include <asm/io.h> 37 37 38 - static int create_irq_threads; 39 - 40 38 DEFINE_PER_CPU(struct pt_regs, __ipipe_tick_regs); 41 - 42 - static DEFINE_PER_CPU(unsigned long, pending_irqthread_mask); 43 - 44 - static DEFINE_PER_CPU(int [IVG13 + 1], pending_irq_count); 45 39 46 40 asmlinkage void asm_do_IRQ(unsigned int irq, struct pt_regs *regs); 47 41 ··· 87 93 */ 88 94 void __ipipe_handle_irq(unsigned irq, struct pt_regs *regs) 89 95 { 96 + struct ipipe_percpu_domain_data *p = ipipe_root_cpudom_ptr(); 90 97 struct ipipe_domain *this_domain, *next_domain; 91 98 struct list_head *head, *pos; 92 99 int m_ack, s = -1; ··· 99 104 * interrupt. 100 105 */ 101 106 m_ack = (regs == NULL || irq == IRQ_SYSTMR || irq == IRQ_CORETMR); 102 - 103 107 this_domain = ipipe_current_domain; 104 108 105 109 if (unlikely(test_bit(IPIPE_STICKY_FLAG, &this_domain->irqs[irq].control))) ··· 108 114 next_domain = list_entry(head, struct ipipe_domain, p_link); 109 115 if (likely(test_bit(IPIPE_WIRED_FLAG, &next_domain->irqs[irq].control))) { 110 116 if (!m_ack && next_domain->irqs[irq].acknowledge != NULL) 111 - next_domain->irqs[irq].acknowledge(irq, irq_desc + irq); 112 - if (test_bit(IPIPE_ROOTLOCK_FLAG, &ipipe_root_domain->flags)) 113 - s = __test_and_set_bit(IPIPE_STALL_FLAG, 114 - &ipipe_root_cpudom_var(status)); 117 + next_domain->irqs[irq].acknowledge(irq, irq_to_desc(irq)); 118 + if (test_bit(IPIPE_SYNCDEFER_FLAG, &p->status)) 119 + s = __test_and_set_bit(IPIPE_STALL_FLAG, &p->status); 115 120 __ipipe_dispatch_wired(next_domain, irq); 116 - goto finalize; 117 - return; 121 + goto out; 118 122 } 119 123 } 120 124 121 125 /* Ack the interrupt. */ 122 126 123 127 pos = head; 124 - 125 128 while (pos != &__ipipe_pipeline) { 126 129 next_domain = list_entry(pos, struct ipipe_domain, p_link); 127 - /* 128 - * For each domain handling the incoming IRQ, mark it 129 - * as pending in its log. 
130 - */ 131 130 if (test_bit(IPIPE_HANDLE_FLAG, &next_domain->irqs[irq].control)) { 132 - /* 133 - * Domains that handle this IRQ are polled for 134 - * acknowledging it by decreasing priority 135 - * order. The interrupt must be made pending 136 - * _first_ in the domain's status flags before 137 - * the PIC is unlocked. 138 - */ 139 131 __ipipe_set_irq_pending(next_domain, irq); 140 - 141 132 if (!m_ack && next_domain->irqs[irq].acknowledge != NULL) { 142 - next_domain->irqs[irq].acknowledge(irq, irq_desc + irq); 133 + next_domain->irqs[irq].acknowledge(irq, irq_to_desc(irq)); 143 134 m_ack = 1; 144 135 } 145 136 } 146 - 147 - /* 148 - * If the domain does not want the IRQ to be passed 149 - * down the interrupt pipe, exit the loop now. 150 - */ 151 137 if (!test_bit(IPIPE_PASS_FLAG, &next_domain->irqs[irq].control)) 152 138 break; 153 - 154 139 pos = next_domain->p_link.next; 155 140 } 156 141 ··· 139 166 * immediately to the current domain if the interrupt has been 140 167 * marked as 'sticky'. This search does not go beyond the 141 168 * current domain in the pipeline. We also enforce the 142 - * additional root stage lock (blackfin-specific). */ 169 + * additional root stage lock (blackfin-specific). 170 + */ 171 + if (test_bit(IPIPE_SYNCDEFER_FLAG, &p->status)) 172 + s = __test_and_set_bit(IPIPE_STALL_FLAG, &p->status); 143 173 144 - if (test_bit(IPIPE_ROOTLOCK_FLAG, &ipipe_root_domain->flags)) 145 - s = __test_and_set_bit(IPIPE_STALL_FLAG, 146 - &ipipe_root_cpudom_var(status)); 147 - finalize: 174 + /* 175 + * If the interrupt preempted the head domain, then do not 176 + * even try to walk the pipeline, unless an interrupt is 177 + * pending for it. 
178 + */ 179 + if (test_bit(IPIPE_AHEAD_FLAG, &this_domain->flags) && 180 + ipipe_head_cpudom_var(irqpend_himask) == 0) 181 + goto out; 148 182 149 183 __ipipe_walk_pipeline(head); 150 - 184 + out: 151 185 if (!s) 152 - __clear_bit(IPIPE_STALL_FLAG, 153 - &ipipe_root_cpudom_var(status)); 186 + __clear_bit(IPIPE_STALL_FLAG, &p->status); 154 187 } 155 188 156 189 int __ipipe_check_root(void) ··· 166 187 167 188 void __ipipe_enable_irqdesc(struct ipipe_domain *ipd, unsigned irq) 168 189 { 169 - struct irq_desc *desc = irq_desc + irq; 190 + struct irq_desc *desc = irq_to_desc(irq); 170 191 int prio = desc->ic_prio; 171 192 172 193 desc->depth = 0; ··· 178 199 179 200 void __ipipe_disable_irqdesc(struct ipipe_domain *ipd, unsigned irq) 180 201 { 181 - struct irq_desc *desc = irq_desc + irq; 202 + struct irq_desc *desc = irq_to_desc(irq); 182 203 int prio = desc->ic_prio; 183 204 184 205 if (ipd != &ipipe_root && ··· 215 236 { 216 237 unsigned long flags; 217 238 218 - /* We need to run the IRQ tail hook whenever we don't 239 + /* 240 + * We need to run the IRQ tail hook whenever we don't 219 241 * propagate a syscall to higher domains, because we know that 220 242 * important operations might be pending there (e.g. Xenomai 221 - * deferred rescheduling). */ 243 + * deferred rescheduling). 
244 + */ 222 245 223 - if (!__ipipe_syscall_watched_p(current, regs->orig_p0)) { 246 + if (regs->orig_p0 < NR_syscalls) { 224 247 void (*hook)(void) = (void (*)(void))__ipipe_irq_tail_hook; 225 248 hook(); 226 - return 0; 249 + if ((current->flags & PF_EVNOTIFY) == 0) 250 + return 0; 227 251 } 228 252 229 253 /* ··· 294 312 { 295 313 unsigned long flags; 296 314 315 + #ifdef CONFIG_IPIPE_DEBUG 297 316 if (irq >= IPIPE_NR_IRQS || 298 317 (ipipe_virtual_irq_p(irq) 299 318 && !test_bit(irq - IPIPE_VIRQ_BASE, &__ipipe_virtual_irq_map))) 300 319 return -EINVAL; 320 + #endif 301 321 302 322 local_irq_save_hw(flags); 303 - 304 323 __ipipe_handle_irq(irq, NULL); 305 - 306 324 local_irq_restore_hw(flags); 307 325 308 326 return 1; 309 327 } 310 328 311 - /* Move Linux IRQ to threads. */ 312 - 313 - static int do_irqd(void *__desc) 329 + asmlinkage void __ipipe_sync_root(void) 314 330 { 315 - struct irq_desc *desc = __desc; 316 - unsigned irq = desc - irq_desc; 317 - int thrprio = desc->thr_prio; 318 - int thrmask = 1 << thrprio; 319 - int cpu = smp_processor_id(); 320 - cpumask_t cpumask; 331 + unsigned long flags; 321 332 322 - sigfillset(&current->blocked); 323 - current->flags |= PF_NOFREEZE; 324 - cpumask = cpumask_of_cpu(cpu); 325 - set_cpus_allowed(current, cpumask); 326 - ipipe_setscheduler_root(current, SCHED_FIFO, 50 + thrprio); 333 + BUG_ON(irqs_disabled()); 327 334 328 - while (!kthread_should_stop()) { 329 - local_irq_disable(); 330 - if (!(desc->status & IRQ_SCHEDULED)) { 331 - set_current_state(TASK_INTERRUPTIBLE); 332 - resched: 333 - local_irq_enable(); 334 - schedule(); 335 - local_irq_disable(); 336 - } 337 - __set_current_state(TASK_RUNNING); 338 - /* 339 - * If higher priority interrupt servers are ready to 340 - * run, reschedule immediately. We need this for the 341 - * GPIO demux IRQ handler to unmask the interrupt line 342 - * _last_, after all GPIO IRQs have run. 
343 - */ 344 - if (per_cpu(pending_irqthread_mask, cpu) & ~(thrmask|(thrmask-1))) 345 - goto resched; 346 - if (--per_cpu(pending_irq_count[thrprio], cpu) == 0) 347 - per_cpu(pending_irqthread_mask, cpu) &= ~thrmask; 348 - desc->status &= ~IRQ_SCHEDULED; 349 - desc->thr_handler(irq, &__raw_get_cpu_var(__ipipe_tick_regs)); 350 - local_irq_enable(); 351 - } 352 - __set_current_state(TASK_RUNNING); 353 - return 0; 335 + local_irq_save_hw(flags); 336 + 337 + clear_thread_flag(TIF_IRQ_SYNC); 338 + 339 + if (ipipe_root_cpudom_var(irqpend_himask) != 0) 340 + __ipipe_sync_pipeline(IPIPE_IRQMASK_ANY); 341 + 342 + local_irq_restore_hw(flags); 354 343 } 355 344 356 - static void kick_irqd(unsigned irq, void *cookie) 345 + void ___ipipe_sync_pipeline(unsigned long syncmask) 357 346 { 358 - struct irq_desc *desc = irq_desc + irq; 359 - int thrprio = desc->thr_prio; 360 - int thrmask = 1 << thrprio; 361 - int cpu = smp_processor_id(); 347 + struct ipipe_domain *ipd = ipipe_current_domain; 362 348 363 - if (!(desc->status & IRQ_SCHEDULED)) { 364 - desc->status |= IRQ_SCHEDULED; 365 - per_cpu(pending_irqthread_mask, cpu) |= thrmask; 366 - ++per_cpu(pending_irq_count[thrprio], cpu); 367 - wake_up_process(desc->thread); 368 - } 369 - } 370 - 371 - int ipipe_start_irq_thread(unsigned irq, struct irq_desc *desc) 372 - { 373 - if (desc->thread || !create_irq_threads) 374 - return 0; 375 - 376 - desc->thread = kthread_create(do_irqd, desc, "IRQ %d", irq); 377 - if (desc->thread == NULL) { 378 - printk(KERN_ERR "irqd: could not create IRQ thread %d!\n", irq); 379 - return -ENOMEM; 349 + if (ipd == ipipe_root_domain) { 350 + if (test_bit(IPIPE_SYNCDEFER_FLAG, &ipipe_root_cpudom_var(status))) 351 + return; 380 352 } 381 353 382 - wake_up_process(desc->thread); 383 - 384 - desc->thr_handler = ipipe_root_domain->irqs[irq].handler; 385 - ipipe_root_domain->irqs[irq].handler = &kick_irqd; 386 - 387 - return 0; 388 - } 389 - 390 - void __init ipipe_init_irq_threads(void) 391 - { 392 - unsigned 
irq; 393 - struct irq_desc *desc; 394 - 395 - create_irq_threads = 1; 396 - 397 - for (irq = 0; irq < NR_IRQS; irq++) { 398 - desc = irq_desc + irq; 399 - if (desc->action != NULL || 400 - (desc->status & IRQ_NOREQUEST) != 0) 401 - ipipe_start_irq_thread(irq, desc); 402 - } 354 + __ipipe_sync_stage(syncmask); 403 355 } 404 356 405 357 EXPORT_SYMBOL(show_stack);
+9 -5
arch/blackfin/kernel/irqchip.c
··· 149 149 #endif 150 150 generic_handle_irq(irq); 151 151 152 - #ifndef CONFIG_IPIPE /* Useless and bugous over the I-pipe: IRQs are threaded. */ 153 - /* If we're the only interrupt running (ignoring IRQ15 which is for 154 - syscalls), lower our priority to IRQ14 so that softirqs run at 155 - that level. If there's another, lower-level interrupt, irq_exit 156 - will defer softirqs to that. */ 152 + #ifndef CONFIG_IPIPE 153 + /* 154 + * If we're the only interrupt running (ignoring IRQ15 which 155 + * is for syscalls), lower our priority to IRQ14 so that 156 + * softirqs run at that level. If there's another, 157 + * lower-level interrupt, irq_exit will defer softirqs to 158 + * that. If the interrupt pipeline is enabled, we are already 159 + * running at IRQ14 priority, so we don't need this code. 160 + */ 157 161 CSYNC(); 158 162 pending = bfin_read_IPEND() & ~0x8000; 159 163 other_ints = pending & (pending - 1);
+7 -2
arch/blackfin/kernel/kgdb_test.c
··· 20 20 static char cmdline[256]; 21 21 static unsigned long len; 22 22 23 + #ifndef CONFIG_SMP 23 24 static int num1 __attribute__((l1_data)); 24 25 25 26 void kgdb_l1_test(void) __attribute__((l1_text)); ··· 33 32 printk(KERN_ALERT "L1(after change) : data variable addr = 0x%p, data value is %d\n", &num1, num1); 34 33 return ; 35 34 } 35 + #endif 36 + 36 37 #if L2_LENGTH 37 38 38 39 static int num2 __attribute__((l2)); ··· 62 59 static int test_proc_output(char *buf) 63 60 { 64 61 kgdb_test("hello world!", 12, 0x55, 0x10); 62 + #ifndef CONFIG_SMP 65 63 kgdb_l1_test(); 66 - #if L2_LENGTH 64 + #endif 65 + #if L2_LENGTH 67 66 kgdb_l2_test(); 68 - #endif 67 + #endif 69 68 70 69 return 0; 71 70 }
+3 -2
arch/blackfin/kernel/ptrace.c
··· 45 45 #include <asm/asm-offsets.h> 46 46 #include <asm/dma.h> 47 47 #include <asm/fixed_code.h> 48 + #include <asm/cacheflush.h> 48 49 #include <asm/mem_map.h> 49 50 50 51 #define TEXT_OFFSET 0 ··· 241 240 242 241 } else if (addr >= FIXED_CODE_START 243 242 && addr + sizeof(tmp) <= FIXED_CODE_END) { 244 - memcpy(&tmp, (const void *)(addr), sizeof(tmp)); 243 + copy_from_user_page(0, 0, 0, &tmp, (const void *)(addr), sizeof(tmp)); 245 244 copied = sizeof(tmp); 246 245 247 246 } else ··· 321 320 322 321 } else if (addr >= FIXED_CODE_START 323 322 && addr + sizeof(data) <= FIXED_CODE_END) { 324 - memcpy((void *)(addr), &data, sizeof(data)); 323 + copy_to_user_page(0, 0, 0, (void *)(addr), &data, sizeof(data)); 325 324 copied = sizeof(data); 326 325 327 326 } else
+7 -3
arch/blackfin/kernel/setup.c
··· 889 889 CPU, bfin_revid()); 890 890 } 891 891 892 + /* We can't run on BF548-0.1 due to ANOMALY 05000448 */ 893 + if (bfin_cpuid() == 0x27de && bfin_revid() == 1) 894 + panic("You can't run on this processor due to 05000448\n"); 895 + 892 896 printk(KERN_INFO "Blackfin Linux support by http://blackfin.uclinux.org/\n"); 893 897 894 898 printk(KERN_INFO "Processor Speed: %lu MHz core clock and %lu MHz System Clock\n", ··· 1145 1141 icache_size = 0; 1146 1142 1147 1143 seq_printf(m, "cache size\t: %d KB(L1 icache) " 1148 - "%d KB(L1 dcache-%s) %d KB(L2 cache)\n", 1144 + "%d KB(L1 dcache%s) %d KB(L2 cache)\n", 1149 1145 icache_size, dcache_size, 1150 1146 #if defined CONFIG_BFIN_WB 1151 - "wb" 1147 + "-wb" 1152 1148 #elif defined CONFIG_BFIN_WT 1153 - "wt" 1149 + "-wt" 1154 1150 #endif 1155 1151 "", 0); 1156 1152
+4 -1
arch/blackfin/kernel/time.c
··· 134 134 135 135 write_seqlock(&xtime_lock); 136 136 #if defined(CONFIG_TICK_SOURCE_SYSTMR0) && !defined(CONFIG_IPIPE) 137 - /* FIXME: Here TIMIL0 is not set when IPIPE enabled, why? */ 137 + /* 138 + * TIMIL0 is latched in __ipipe_grab_irq() when the I-Pipe is 139 + * enabled. 140 + */ 138 141 if (get_gptimer_status(0) & TIMER_STATUS_TIMIL0) { 139 142 #endif 140 143 do_timer(1);
+13 -20
arch/blackfin/mach-bf518/boards/ezbrd.c
··· 113 113 .name = "bfin_mac", 114 114 .dev.platform_data = &bfin_mii_bus, 115 115 }; 116 - #endif 117 116 118 117 #if defined(CONFIG_NET_DSA_KSZ8893M) || defined(CONFIG_NET_DSA_KSZ8893M_MODULE) 119 118 static struct dsa_platform_data ksz8893m_switch_data = { ··· 130 131 .num_resources = 0, 131 132 .dev.platform_data = &ksz8893m_switch_data, 132 133 }; 134 + #endif 133 135 #endif 134 136 135 137 #if defined(CONFIG_MTD_M25P80) \ ··· 171 171 }; 172 172 #endif 173 173 174 + #if defined(CONFIG_BFIN_MAC) || defined(CONFIG_BFIN_MAC_MODULE) 174 175 #if defined(CONFIG_NET_DSA_KSZ8893M) \ 175 176 || defined(CONFIG_NET_DSA_KSZ8893M_MODULE) 176 177 /* SPI SWITCH CHIP */ ··· 180 179 .bits_per_word = 8, 181 180 }; 182 181 #endif 182 + #endif 183 183 184 - #if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE) 185 - static struct bfin5xx_spi_chip spi_mmc_chip_info = { 186 - .enable_dma = 1, 184 + #if defined(CONFIG_MMC_SPI) || defined(CONFIG_MMC_SPI_MODULE) 185 + static struct bfin5xx_spi_chip mmc_spi_chip_info = { 186 + .enable_dma = 0, 187 187 .bits_per_word = 8, 188 188 }; 189 189 #endif ··· 261 259 }, 262 260 #endif 263 261 262 + #if defined(CONFIG_BFIN_MAC) || defined(CONFIG_BFIN_MAC_MODULE) 264 263 #if defined(CONFIG_NET_DSA_KSZ8893M) \ 265 264 || defined(CONFIG_NET_DSA_KSZ8893M_MODULE) 266 265 { ··· 274 271 .mode = SPI_MODE_3, 275 272 }, 276 273 #endif 274 + #endif 277 275 278 - #if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE) 276 + #if defined(CONFIG_MMC_SPI) || defined(CONFIG_MMC_SPI_MODULE) 279 277 { 280 - .modalias = "spi_mmc_dummy", 278 + .modalias = "mmc_spi", 281 279 .max_speed_hz = 25000000, /* max spi clock (SCK) speed in HZ */ 282 280 .bus_num = 0, 283 - .chip_select = 0, 284 - .platform_data = NULL, 285 - .controller_data = &spi_mmc_chip_info, 286 - .mode = SPI_MODE_3, 287 - }, 288 - { 289 - .modalias = "spi_mmc", 290 - .max_speed_hz = 25000000, /* max spi clock (SCK) speed in HZ */ 291 - .bus_num = 0, 292 - .chip_select = 
CONFIG_SPI_MMC_CS_CHAN, 293 - .platform_data = NULL, 294 - .controller_data = &spi_mmc_chip_info, 281 + .chip_select = 5, 282 + .controller_data = &mmc_spi_chip_info, 295 283 .mode = SPI_MODE_3, 296 284 }, 297 285 #endif ··· 624 630 #if defined(CONFIG_BFIN_MAC) || defined(CONFIG_BFIN_MAC_MODULE) 625 631 &bfin_mii_bus, 626 632 &bfin_mac_device, 627 - #endif 628 - 629 633 #if defined(CONFIG_NET_DSA_KSZ8893M) || defined(CONFIG_NET_DSA_KSZ8893M_MODULE) 630 634 &ksz8893m_switch_device, 635 + #endif 631 636 #endif 632 637 633 638 #if defined(CONFIG_SPI_BFIN) || defined(CONFIG_SPI_BFIN_MODULE)
+15 -2
arch/blackfin/mach-bf518/include/mach/anomaly.h
··· 2 2 * File: include/asm-blackfin/mach-bf518/anomaly.h 3 3 * Bugs: Enter bugs at http://blackfin.uclinux.org/ 4 4 * 5 - * Copyright (C) 2004-2008 Analog Devices Inc. 5 + * Copyright (C) 2004-2009 Analog Devices Inc. 6 6 * Licensed under the GPL-2 or later. 7 7 */ 8 8 9 9 /* This file shoule be up to date with: 10 - * - ???? 10 + * - Revision B, 02/03/2009; ADSP-BF512/BF514/BF516/BF518 Blackfin Processor Anomaly List 11 11 */ 12 12 13 13 #ifndef _MACH_ANOMALY_H_ ··· 19 19 #define ANOMALY_05000122 (1) 20 20 /* False Hardware Error from an Access in the Shadow of a Conditional Branch */ 21 21 #define ANOMALY_05000245 (1) 22 + /* Incorrect Timer Pulse Width in Single-Shot PWM_OUT Mode with External Clock */ 23 + #define ANOMALY_05000254 (1) 22 24 /* Sensitivity To Noise with Slow Input Edge Rates on External SPORT TX and RX Clocks */ 23 25 #define ANOMALY_05000265 (1) 24 26 /* False Hardware Errors Caused by Fetches at the Boundary of Reserved Memory */ ··· 55 53 #define ANOMALY_05000443 (1) 56 54 /* Incorrect L1 Instruction Bank B Memory Map Location */ 57 55 #define ANOMALY_05000444 (1) 56 + /* Incorrect Default Hysteresis Setting for RESET, NMI, and BMODE Signals */ 57 + #define ANOMALY_05000452 (1) 58 + /* PWM_TRIPB Signal Not Available on PG10 */ 59 + #define ANOMALY_05000453 (1) 60 + /* PPI_FS3 is Driven One Half Cycle Later Than PPI Data */ 61 + #define ANOMALY_05000455 (1) 58 62 59 63 /* Anomalies that don't exist on this proc */ 60 64 #define ANOMALY_05000125 (0) ··· 73 65 #define ANOMALY_05000263 (0) 74 66 #define ANOMALY_05000266 (0) 75 67 #define ANOMALY_05000273 (0) 68 + #define ANOMALY_05000278 (0) 76 69 #define ANOMALY_05000285 (0) 70 + #define ANOMALY_05000305 (0) 77 71 #define ANOMALY_05000307 (0) 78 72 #define ANOMALY_05000311 (0) 79 73 #define ANOMALY_05000312 (0) 80 74 #define ANOMALY_05000323 (0) 81 75 #define ANOMALY_05000353 (0) 82 76 #define ANOMALY_05000363 (0) 77 + #define ANOMALY_05000380 (0) 83 78 #define ANOMALY_05000386 (0) 84 79 
#define ANOMALY_05000412 (0) 85 80 #define ANOMALY_05000432 (0) 81 + #define ANOMALY_05000447 (0) 82 + #define ANOMALY_05000448 (0) 86 83 87 84 #endif
+2 -2
arch/blackfin/mach-bf518/include/mach/bfin_serial_5xx.h
··· 144 144 CH_UART0_TX, 145 145 CH_UART0_RX, 146 146 #endif 147 - #ifdef CONFIG_BFIN_UART0_CTSRTS 147 + #ifdef CONFIG_SERIAL_BFIN_CTSRTS 148 148 CONFIG_UART0_CTS_PIN, 149 149 CONFIG_UART0_RTS_PIN, 150 150 #endif ··· 158 158 CH_UART1_TX, 159 159 CH_UART1_RX, 160 160 #endif 161 - #ifdef CONFIG_BFIN_UART1_CTSRTS 161 + #ifdef CONFIG_SERIAL_BFIN_CTSRTS 162 162 CONFIG_UART1_CTS_PIN, 163 163 CONFIG_UART1_RTS_PIN, 164 164 #endif
+8 -18
arch/blackfin/mach-bf527/boards/cm_bf527.c
··· 487 487 }; 488 488 #endif 489 489 490 - #if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE) 491 - static struct bfin5xx_spi_chip spi_mmc_chip_info = { 492 - .enable_dma = 1, 490 + #if defined(CONFIG_MMC_SPI) || defined(CONFIG_MMC_SPI_MODULE) 491 + static struct bfin5xx_spi_chip mmc_spi_chip_info = { 492 + .enable_dma = 0, 493 493 .bits_per_word = 8, 494 494 }; 495 495 #endif ··· 585 585 .controller_data = &ad9960_spi_chip_info, 586 586 }, 587 587 #endif 588 - #if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE) 588 + #if defined(CONFIG_MMC_SPI) || defined(CONFIG_MMC_SPI_MODULE) 589 589 { 590 - .modalias = "spi_mmc_dummy", 591 - .max_speed_hz = 25000000, /* max spi clock (SCK) speed in HZ */ 590 + .modalias = "mmc_spi", 591 + .max_speed_hz = 20000000, /* max spi clock (SCK) speed in HZ */ 592 592 .bus_num = 0, 593 - .chip_select = 0, 594 - .platform_data = NULL, 595 - .controller_data = &spi_mmc_chip_info, 596 - .mode = SPI_MODE_3, 597 - }, 598 - { 599 - .modalias = "spi_mmc", 600 - .max_speed_hz = 25000000, /* max spi clock (SCK) speed in HZ */ 601 - .bus_num = 0, 602 - .chip_select = CONFIG_SPI_MMC_CS_CHAN, 603 - .platform_data = NULL, 604 - .controller_data = &spi_mmc_chip_info, 593 + .chip_select = 5, 594 + .controller_data = &mmc_spi_chip_info, 605 595 .mode = SPI_MODE_3, 606 596 }, 607 597 #endif
+7 -17
arch/blackfin/mach-bf527/boards/ezbrd.c
··· 256 256 }; 257 257 #endif 258 258 259 - #if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE) 260 - static struct bfin5xx_spi_chip spi_mmc_chip_info = { 261 - .enable_dma = 1, 259 + #if defined(CONFIG_MMC_SPI) || defined(CONFIG_MMC_SPI_MODULE) 260 + static struct bfin5xx_spi_chip mmc_spi_chip_info = { 261 + .enable_dma = 0, 262 262 .bits_per_word = 8, 263 263 }; 264 264 #endif ··· 366 366 }, 367 367 #endif 368 368 369 - #if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE) 369 + #if defined(CONFIG_MMC_SPI) || defined(CONFIG_MMC_SPI_MODULE) 370 370 { 371 - .modalias = "spi_mmc_dummy", 371 + .modalias = "mmc_spi", 372 372 .max_speed_hz = 25000000, /* max spi clock (SCK) speed in HZ */ 373 373 .bus_num = 0, 374 - .chip_select = 0, 375 - .platform_data = NULL, 376 - .controller_data = &spi_mmc_chip_info, 377 - .mode = SPI_MODE_3, 378 - }, 379 - { 380 - .modalias = "spi_mmc", 381 - .max_speed_hz = 25000000, /* max spi clock (SCK) speed in HZ */ 382 - .bus_num = 0, 383 - .chip_select = CONFIG_SPI_MMC_CS_CHAN, 384 - .platform_data = NULL, 385 - .controller_data = &spi_mmc_chip_info, 374 + .chip_select = 5, 375 + .controller_data = &mmc_spi_chip_info, 386 376 .mode = SPI_MODE_3, 387 377 }, 388 378 #endif
+5 -1
arch/blackfin/mach-bf527/include/mach/anomaly.h
··· 2 2 * File: include/asm-blackfin/mach-bf527/anomaly.h 3 3 * Bugs: Enter bugs at http://blackfin.uclinux.org/ 4 4 * 5 - * Copyright (C) 2004-2008 Analog Devices Inc. 5 + * Copyright (C) 2004-2009 Analog Devices Inc. 6 6 * Licensed under the GPL-2 or later. 7 7 */ 8 8 ··· 167 167 #define ANOMALY_05000263 (0) 168 168 #define ANOMALY_05000266 (0) 169 169 #define ANOMALY_05000273 (0) 170 + #define ANOMALY_05000278 (0) 170 171 #define ANOMALY_05000285 (0) 172 + #define ANOMALY_05000305 (0) 171 173 #define ANOMALY_05000307 (0) 172 174 #define ANOMALY_05000311 (0) 173 175 #define ANOMALY_05000312 (0) 174 176 #define ANOMALY_05000323 (0) 175 177 #define ANOMALY_05000363 (0) 176 178 #define ANOMALY_05000412 (0) 179 + #define ANOMALY_05000447 (0) 180 + #define ANOMALY_05000448 (0) 177 181 178 182 #endif
+2 -2
arch/blackfin/mach-bf527/include/mach/bfin_serial_5xx.h
··· 144 144 CH_UART0_TX, 145 145 CH_UART0_RX, 146 146 #endif 147 - #ifdef CONFIG_BFIN_UART0_CTSRTS 147 + #ifdef CONFIG_SERIAL_BFIN_CTSRTS 148 148 CONFIG_UART0_CTS_PIN, 149 149 CONFIG_UART0_RTS_PIN, 150 150 #endif ··· 158 158 CH_UART1_TX, 159 159 CH_UART1_RX, 160 160 #endif 161 - #ifdef CONFIG_BFIN_UART1_CTSRTS 161 + #ifdef CONFIG_SERIAL_BFIN_CTSRTS 162 162 CONFIG_UART1_CTS_PIN, 163 163 CONFIG_UART1_RTS_PIN, 164 164 #endif
-5
arch/blackfin/mach-bf533/boards/Kconfig
··· 38 38 help 39 39 Core support for IP04/IP04 open hardware IP-PBX. 40 40 41 - config GENERIC_BF533_BOARD 42 - bool "Generic" 43 - help 44 - Generic or Custom board support. 45 - 46 41 endchoice
-1
arch/blackfin/mach-bf533/boards/Makefile
··· 2 2 # arch/blackfin/mach-bf533/boards/Makefile 3 3 # 4 4 5 - obj-$(CONFIG_GENERIC_BF533_BOARD) += generic_board.o 6 5 obj-$(CONFIG_BFIN533_STAMP) += stamp.o 7 6 obj-$(CONFIG_BFIN532_IP0X) += ip0x.o 8 7 obj-$(CONFIG_BFIN533_EZKIT) += ezkit.o
+7 -17
arch/blackfin/mach-bf533/boards/blackstamp.c
··· 101 101 }; 102 102 #endif 103 103 104 - #if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE) 105 - static struct bfin5xx_spi_chip spi_mmc_chip_info = { 106 - .enable_dma = 1, 104 + #if defined(CONFIG_MMC_SPI) || defined(CONFIG_MMC_SPI_MODULE) 105 + static struct bfin5xx_spi_chip mmc_spi_chip_info = { 106 + .enable_dma = 0, 107 107 .bits_per_word = 8, 108 108 }; 109 109 #endif ··· 129 129 }, 130 130 #endif 131 131 132 - #if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE) 132 + #if defined(CONFIG_MMC_SPI) || defined(CONFIG_MMC_SPI_MODULE) 133 133 { 134 - .modalias = "spi_mmc_dummy", 134 + .modalias = "mmc_spi", 135 135 .max_speed_hz = 20000000, /* max spi clock (SCK) speed in HZ */ 136 136 .bus_num = 0, 137 - .chip_select = 0, 138 - .platform_data = NULL, 139 - .controller_data = &spi_mmc_chip_info, 140 - .mode = SPI_MODE_3, 141 - }, 142 - { 143 - .modalias = "spi_mmc", 144 - .max_speed_hz = 20000000, /* max spi clock (SCK) speed in HZ */ 145 - .bus_num = 0, 146 - .chip_select = CONFIG_SPI_MMC_CS_CHAN, 147 - .platform_data = NULL, 148 - .controller_data = &spi_mmc_chip_info, 137 + .chip_select = 5, 138 + .controller_data = &mmc_spi_chip_info, 149 139 .mode = SPI_MODE_3, 150 140 }, 151 141 #endif
+7 -17
arch/blackfin/mach-bf533/boards/cm_bf533.c
··· 96 96 }; 97 97 #endif 98 98 99 - #if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE) 100 - static struct bfin5xx_spi_chip spi_mmc_chip_info = { 101 - .enable_dma = 1, 99 + #if defined(CONFIG_MMC_SPI) || defined(CONFIG_MMC_SPI_MODULE) 100 + static struct bfin5xx_spi_chip mmc_spi_chip_info = { 101 + .enable_dma = 0, 102 102 .bits_per_word = 8, 103 103 }; 104 104 #endif ··· 138 138 }, 139 139 #endif 140 140 141 - #if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE) 141 + #if defined(CONFIG_MMC_SPI) || defined(CONFIG_MMC_SPI_MODULE) 142 142 { 143 - .modalias = "spi_mmc_dummy", 143 + .modalias = "mmc_spi", 144 144 .max_speed_hz = 25000000, /* max spi clock (SCK) speed in HZ */ 145 145 .bus_num = 0, 146 - .chip_select = 0, 147 - .platform_data = NULL, 148 - .controller_data = &spi_mmc_chip_info, 149 - .mode = SPI_MODE_3, 150 - }, 151 - { 152 - .modalias = "spi_mmc", 153 - .max_speed_hz = 25000000, /* max spi clock (SCK) speed in HZ */ 154 - .bus_num = 0, 155 - .chip_select = CONFIG_SPI_MMC_CS_CHAN, 156 - .platform_data = NULL, 157 - .controller_data = &spi_mmc_chip_info, 146 + .chip_select = 5, 147 + .controller_data = &mmc_spi_chip_info, 158 148 .mode = SPI_MODE_3, 159 149 }, 160 150 #endif
-126
arch/blackfin/mach-bf533/boards/generic_board.c
··· 1 - /*
2 -  * File:         arch/blackfin/mach-bf533/generic_board.c
3 -  * Based on:     arch/blackfin/mach-bf533/ezkit.c
4 -  * Author:       Aidan Williams <aidan@nicta.com.au>
5 -  *
6 -  * Created:      2005
7 -  * Description:
8 -  *
9 -  * Modified:
10 -  *               Copyright 2005 National ICT Australia (NICTA)
11 -  *               Copyright 2004-2006 Analog Devices Inc.
12 -  *
13 -  * Bugs:         Enter bugs at http://blackfin.uclinux.org/
14 -  *
15 -  * This program is free software; you can redistribute it and/or modify
16 -  * it under the terms of the GNU General Public License as published by
17 -  * the Free Software Foundation; either version 2 of the License, or
18 -  * (at your option) any later version.
19 -  *
20 -  * This program is distributed in the hope that it will be useful,
21 -  * but WITHOUT ANY WARRANTY; without even the implied warranty of
22 -  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
23 -  * GNU General Public License for more details.
24 -  *
25 -  * You should have received a copy of the GNU General Public License
26 -  * along with this program; if not, see the file COPYING, or write
27 -  * to the Free Software Foundation, Inc.,
28 -  * 51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
29 -  */
30 - 
31 - #include <linux/device.h>
32 - #include <linux/platform_device.h>
33 - #include <linux/irq.h>
34 - 
35 - /*
36 -  * Name the Board for the /proc/cpuinfo
37 -  */
38 - const char bfin_board_name[] = "UNKNOWN BOARD";
39 - 
40 - #if defined(CONFIG_RTC_DRV_BFIN) || defined(CONFIG_RTC_DRV_BFIN_MODULE)
41 - static struct platform_device rtc_device = {
42 - 	.name = "rtc-bfin",
43 - 	.id   = -1,
44 - };
45 - #endif
46 - 
47 - /*
48 -  * Driver needs to know address, irq and flag pin.
49 -  */
50 - #if defined(CONFIG_SMC91X) || defined(CONFIG_SMC91X_MODULE)
51 - static struct resource smc91x_resources[] = {
52 - 	{
53 - 		.start = 0x20300300,
54 - 		.end = 0x20300300 + 16,
55 - 		.flags = IORESOURCE_MEM,
56 - 	}, {
57 - 		.start = IRQ_PROG_INTB,
58 - 		.end = IRQ_PROG_INTB,
59 - 		.flags = IORESOURCE_IRQ | IORESOURCE_IRQ_HIGHLEVEL,
60 - 	}, {
61 - 		.start = IRQ_PF7,
62 - 		.end = IRQ_PF7,
63 - 		.flags = IORESOURCE_IRQ | IORESOURCE_IRQ_HIGHLEVEL,
64 - 	},
65 - };
66 - 
67 - static struct platform_device smc91x_device = {
68 - 	.name = "smc91x",
69 - 	.id = 0,
70 - 	.num_resources = ARRAY_SIZE(smc91x_resources),
71 - 	.resource = smc91x_resources,
72 - };
73 - #endif
74 - 
75 - #if defined(CONFIG_BFIN_SIR) || defined(CONFIG_BFIN_SIR_MODULE)
76 - #ifdef CONFIG_BFIN_SIR0
77 - static struct resource bfin_sir0_resources[] = {
78 - 	{
79 - 		.start = 0xFFC00400,
80 - 		.end = 0xFFC004FF,
81 - 		.flags = IORESOURCE_MEM,
82 - 	},
83 - 	{
84 - 		.start = IRQ_UART0_RX,
85 - 		.end = IRQ_UART0_RX+1,
86 - 		.flags = IORESOURCE_IRQ,
87 - 	},
88 - 	{
89 - 		.start = CH_UART0_RX,
90 - 		.end = CH_UART0_RX+1,
91 - 		.flags = IORESOURCE_DMA,
92 - 	},
93 - };
94 - 
95 - static struct platform_device bfin_sir0_device = {
96 - 	.name = "bfin_sir",
97 - 	.id = 0,
98 - 	.num_resources = ARRAY_SIZE(bfin_sir0_resources),
99 - 	.resource = bfin_sir0_resources,
100 - };
101 - #endif
102 - #endif
103 - 
104 - static struct platform_device *generic_board_devices[] __initdata = {
105 - #if defined(CONFIG_RTC_DRV_BFIN) || defined(CONFIG_RTC_DRV_BFIN_MODULE)
106 - 	&rtc_device,
107 - #endif
108 - 
109 - #if defined(CONFIG_SMC91X) || defined(CONFIG_SMC91X_MODULE)
110 - 	&smc91x_device,
111 - #endif
112 - 
113 - #if defined(CONFIG_BFIN_SIR) || defined(CONFIG_BFIN_SIR_MODULE)
114 - #ifdef CONFIG_BFIN_SIR0
115 - 	&bfin_sir0_device,
116 - #endif
117 - #endif
118 - };
119 - 
120 - static int __init generic_board_init(void)
121 - {
122 - 	printk(KERN_INFO "%s(): registering device resources\n", __func__);
123 - 	return platform_add_devices(generic_board_devices, ARRAY_SIZE(generic_board_devices));
124 - }
125 - 
126 - arch_initcall(generic_board_init);
+6 -7
arch/blackfin/mach-bf533/boards/ip0x.c
··· 127 127 #if defined(CONFIG_SPI_BFIN) || defined(CONFIG_SPI_BFIN_MODULE)
128 128 /* all SPI peripherals info goes here */
129 129 
130 - #if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE)
131 - static struct bfin5xx_spi_chip spi_mmc_chip_info = {
130 + #if defined(CONFIG_MMC_SPI) || defined(CONFIG_MMC_SPI_MODULE)
131 + static struct bfin5xx_spi_chip mmc_spi_chip_info = {
132 132 /*
133 133  * CPOL (Clock Polarity)
134 134  *  0 - Active high SCK
··· 152 152 /* Notice: for blackfin, the speed_hz is the value of register
153 153  * SPI_BAUD, not the real baudrate */
154 154 static struct spi_board_info bfin_spi_board_info[] __initdata = {
155 - #if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE)
155 + #if defined(CONFIG_MMC_SPI) || defined(CONFIG_MMC_SPI_MODULE)
156 156 	{
157 - 		.modalias = "spi_mmc",
157 + 		.modalias = "mmc_spi",
158 158 		.max_speed_hz = 2,
159 159 		.bus_num = 1,
160 - 		.chip_select = CONFIG_SPI_MMC_CS_CHAN,
161 - 		.platform_data = NULL,
162 - 		.controller_data = &spi_mmc_chip_info,
160 + 		.chip_select = 5,
161 + 		.controller_data = &mmc_spi_chip_info,
163 162 	},
164 163 #endif
165 164 };
+5 -2
arch/blackfin/mach-bf533/include/mach/anomaly.h
··· 2 2  * File: include/asm-blackfin/mach-bf533/anomaly.h
3 3  * Bugs: Enter bugs at http://blackfin.uclinux.org/
4 4  *
5 - * Copyright (C) 2004-2008 Analog Devices Inc.
5 + * Copyright (C) 2004-2009 Analog Devices Inc.
6 6  * Licensed under the GPL-2 or later.
7 7  */
8 8 
··· 160 160 #define ANOMALY_05000301 (__SILICON_REVISION__ < 6)
161 161 /* SSYNCs After Writes To DMA MMR Registers May Not Be Handled Correctly */
162 162 #define ANOMALY_05000302 (__SILICON_REVISION__ < 5)
163 - /* New Feature: Additional Hysteresis on SPORT Input Pins (Not Available On Older Silicon) */
163 + /* SPORT_HYS Bit in PLL_CTL Register Is Not Functional */
164 164 #define ANOMALY_05000305 (__SILICON_REVISION__ < 5)
165 165 /* New Feature: Additional PPI Frame Sync Sampling Options (Not Available On Older Silicon) */
166 166 #define ANOMALY_05000306 (__SILICON_REVISION__ < 5)
··· 278 278 #define ANOMALY_05000266 (0)
279 279 #define ANOMALY_05000323 (0)
280 280 #define ANOMALY_05000353 (1)
281 + #define ANOMALY_05000380 (0)
281 282 #define ANOMALY_05000386 (1)
282 283 #define ANOMALY_05000412 (0)
283 284 #define ANOMALY_05000432 (0)
284 285 #define ANOMALY_05000435 (0)
286 + #define ANOMALY_05000447 (0)
287 + #define ANOMALY_05000448 (0)
285 288 
286 289 #endif
+1 -1
arch/blackfin/mach-bf533/include/mach/bfin_serial_5xx.h
··· 134 134 	CH_UART_TX,
135 135 	CH_UART_RX,
136 136 #endif
137 - #ifdef CONFIG_BFIN_UART0_CTSRTS
137 + #ifdef CONFIG_SERIAL_BFIN_CTSRTS
138 138 	CONFIG_UART0_CTS_PIN,
139 139 	CONFIG_UART0_RTS_PIN,
140 140 #endif
-5
arch/blackfin/mach-bf537/boards/Kconfig
··· 33 33 	help
34 34 	  Board supply package for CSP Minotaur
35 35 
36 - config GENERIC_BF537_BOARD
37 - 	bool "Generic"
38 - 	help
39 - 	  Generic or Custom board support.
40 - 
41 36 endchoice
-1
arch/blackfin/mach-bf537/boards/Makefile
··· 2 2 # arch/blackfin/mach-bf537/boards/Makefile
3 3 #
4 4 
5 - obj-$(CONFIG_GENERIC_BF537_BOARD) += generic_board.o
6 5 obj-$(CONFIG_BFIN537_STAMP) += stamp.o
7 6 obj-$(CONFIG_BFIN537_BLUETECHNIX_CM) += cm_bf537.o
8 7 obj-$(CONFIG_BFIN537_BLUETECHNIX_TCM) += tcm_bf537.o
+8 -18
arch/blackfin/mach-bf537/boards/cm_bf537.c
··· 108 108 };
109 109 #endif
110 110 
111 - #if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE)
112 - static struct bfin5xx_spi_chip spi_mmc_chip_info = {
113 - 	.enable_dma = 1,
111 + #if defined(CONFIG_MMC_SPI) || defined(CONFIG_MMC_SPI_MODULE)
112 + static struct bfin5xx_spi_chip mmc_spi_chip_info = {
113 + 	.enable_dma = 0,
114 114 	.bits_per_word = 8,
115 115 };
116 116 #endif
··· 160 160 	},
161 161 #endif
162 162 
163 - #if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE)
163 + #if defined(CONFIG_MMC_SPI) || defined(CONFIG_MMC_SPI_MODULE)
164 164 	{
165 - 		.modalias = "spi_mmc_dummy",
166 - 		.max_speed_hz = 25000000, /* max spi clock (SCK) speed in HZ */
165 + 		.modalias = "mmc_spi",
166 + 		.max_speed_hz = 20000000, /* max spi clock (SCK) speed in HZ */
167 167 		.bus_num = 0,
168 - 		.chip_select = 7,
169 - 		.platform_data = NULL,
170 - 		.controller_data = &spi_mmc_chip_info,
171 - 		.mode = SPI_MODE_3,
172 - 	},
173 - 	{
174 - 		.modalias = "spi_mmc",
175 - 		.max_speed_hz = 25000000, /* max spi clock (SCK) speed in HZ */
176 - 		.bus_num = 0,
177 - 		.chip_select = CONFIG_SPI_MMC_CS_CHAN,
178 - 		.platform_data = NULL,
179 - 		.controller_data = &spi_mmc_chip_info,
168 + 		.chip_select = 1,
169 + 		.controller_data = &mmc_spi_chip_info,
180 170 		.mode = SPI_MODE_3,
181 171 	},
182 172 #endif
-745
arch/blackfin/mach-bf537/boards/generic_board.c
··· 1 - /*
2 -  * File:         arch/blackfin/mach-bf537/boards/generic_board.c
3 -  * Based on:     arch/blackfin/mach-bf533/boards/ezkit.c
4 -  * Author:       Aidan Williams <aidan@nicta.com.au>
5 -  *
6 -  * Created:
7 -  * Description:
8 -  *
9 -  * Modified:
10 -  *               Copyright 2005 National ICT Australia (NICTA)
11 -  *               Copyright 2004-2008 Analog Devices Inc.
12 -  *
13 -  * Bugs:         Enter bugs at http://blackfin.uclinux.org/
14 -  *
15 -  * This program is free software; you can redistribute it and/or modify
16 -  * it under the terms of the GNU General Public License as published by
17 -  * the Free Software Foundation; either version 2 of the License, or
18 -  * (at your option) any later version.
19 -  *
20 -  * This program is distributed in the hope that it will be useful,
21 -  * but WITHOUT ANY WARRANTY; without even the implied warranty of
22 -  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
23 -  * GNU General Public License for more details.
24 -  *
25 -  * You should have received a copy of the GNU General Public License
26 -  * along with this program; if not, see the file COPYING, or write
27 -  * to the Free Software Foundation, Inc.,
28 -  * 51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
29 -  */
30 - 
31 - #include <linux/device.h>
32 - #include <linux/etherdevice.h>
33 - #include <linux/platform_device.h>
34 - #include <linux/mtd/mtd.h>
35 - #include <linux/mtd/partitions.h>
36 - #include <linux/spi/spi.h>
37 - #include <linux/spi/flash.h>
38 - #if defined(CONFIG_USB_ISP1362_HCD) || defined(CONFIG_USB_ISP1362_HCD_MODULE)
39 - #include <linux/usb/isp1362.h>
40 - #endif
41 - #include <linux/irq.h>
42 - #include <linux/interrupt.h>
43 - #include <linux/usb/sl811.h>
44 - #include <asm/dma.h>
45 - #include <asm/bfin5xx_spi.h>
46 - #include <asm/reboot.h>
47 - #include <asm/portmux.h>
48 - #include <linux/spi/ad7877.h>
49 - 
50 - /*
51 -  * Name the Board for the /proc/cpuinfo
52 -  */
53 - const char bfin_board_name[] = "UNKNOWN BOARD";
54 - 
55 - /*
56 -  * Driver needs to know address, irq and flag pin.
57 -  */
58 - 
59 - #if defined(CONFIG_USB_ISP1760_HCD) || defined(CONFIG_USB_ISP1760_HCD_MODULE)
60 - #include <linux/usb/isp1760.h>
61 - static struct resource bfin_isp1760_resources[] = {
62 - 	[0] = {
63 - 		.start  = 0x203C0000,
64 - 		.end    = 0x203C0000 + 0x000fffff,
65 - 		.flags  = IORESOURCE_MEM,
66 - 	},
67 - 	[1] = {
68 - 		.start  = IRQ_PF7,
69 - 		.end    = IRQ_PF7,
70 - 		.flags  = IORESOURCE_IRQ,
71 - 	},
72 - };
73 - 
74 - static struct isp1760_platform_data isp1760_priv = {
75 - 	.is_isp1761 = 0,
76 - 	.port1_disable = 0,
77 - 	.bus_width_16 = 1,
78 - 	.port1_otg = 0,
79 - 	.analog_oc = 0,
80 - 	.dack_polarity_high = 0,
81 - 	.dreq_polarity_high = 0,
82 - };
83 - 
84 - static struct platform_device bfin_isp1760_device = {
85 - 	.name           = "isp1760-hcd",
86 - 	.id             = 0,
87 - 	.dev = {
88 - 		.platform_data = &isp1760_priv,
89 - 	},
90 - 	.num_resources  = ARRAY_SIZE(bfin_isp1760_resources),
91 - 	.resource       = bfin_isp1760_resources,
92 - };
93 - #endif
94 - 
95 - #if defined(CONFIG_BFIN_CFPCMCIA) || defined(CONFIG_BFIN_CFPCMCIA_MODULE)
96 - static struct resource bfin_pcmcia_cf_resources[] = {
97 - 	{
98 - 		.start = 0x20310000, /* IO PORT */
99 - 		.end = 0x20312000,
100 - 		.flags = IORESOURCE_MEM,
101 - 	}, {
102 - 		.start = 0x20311000, /* Attribute Memory */
103 - 		.end = 0x20311FFF,
104 - 		.flags = IORESOURCE_MEM,
105 - 	}, {
106 - 		.start = IRQ_PF4,
107 - 		.end = IRQ_PF4,
108 - 		.flags = IORESOURCE_IRQ | IORESOURCE_IRQ_LOWLEVEL,
109 - 	}, {
110 - 		.start = 6, /* Card Detect PF6 */
111 - 		.end = 6,
112 - 		.flags = IORESOURCE_IRQ,
113 - 	},
114 - };
115 - 
116 - static struct platform_device bfin_pcmcia_cf_device = {
117 - 	.name = "bfin_cf_pcmcia",
118 - 	.id = -1,
119 - 	.num_resources = ARRAY_SIZE(bfin_pcmcia_cf_resources),
120 - 	.resource = bfin_pcmcia_cf_resources,
121 - };
122 - #endif
123 - 
124 - #if defined(CONFIG_RTC_DRV_BFIN) || defined(CONFIG_RTC_DRV_BFIN_MODULE)
125 - static struct platform_device rtc_device = {
126 - 	.name = "rtc-bfin",
127 - 	.id   = -1,
128 - };
129 - #endif
130 - 
131 - #if defined(CONFIG_SMC91X) || defined(CONFIG_SMC91X_MODULE)
132 - static struct resource smc91x_resources[] = {
133 - 	{
134 - 		.name = "smc91x-regs",
135 - 		.start = 0x20300300,
136 - 		.end = 0x20300300 + 16,
137 - 		.flags = IORESOURCE_MEM,
138 - 	}, {
139 - 
140 - 		.start = IRQ_PF7,
141 - 		.end = IRQ_PF7,
142 - 		.flags = IORESOURCE_IRQ | IORESOURCE_IRQ_HIGHLEVEL,
143 - 	},
144 - };
145 - static struct platform_device smc91x_device = {
146 - 	.name = "smc91x",
147 - 	.id = 0,
148 - 	.num_resources = ARRAY_SIZE(smc91x_resources),
149 - 	.resource = smc91x_resources,
150 - };
151 - #endif
152 - 
153 - #if defined(CONFIG_DM9000) || defined(CONFIG_DM9000_MODULE)
154 - static struct resource dm9000_resources[] = {
155 - 	[0] = {
156 - 		.start  = 0x203FB800,
157 - 		.end    = 0x203FB800 + 1,
158 - 		.flags  = IORESOURCE_MEM,
159 - 	},
160 - 	[1] = {
161 - 		.start  = 0x203FB800 + 4,
162 - 		.end    = 0x203FB800 + 5,
163 - 		.flags  = IORESOURCE_MEM,
164 - 	},
165 - 	[2] = {
166 - 		.start  = IRQ_PF9,
167 - 		.end    = IRQ_PF9,
168 - 		.flags  = (IORESOURCE_IRQ | IORESOURCE_IRQ_HIGHEDGE),
169 - 	},
170 - };
171 - 
172 - static struct platform_device dm9000_device = {
173 - 	.name           = "dm9000",
174 - 	.id             = -1,
175 - 	.num_resources  = ARRAY_SIZE(dm9000_resources),
176 - 	.resource       = dm9000_resources,
177 - };
178 - #endif
179 - 
180 - #if defined(CONFIG_USB_SL811_HCD) || defined(CONFIG_USB_SL811_HCD_MODULE)
181 - static struct resource sl811_hcd_resources[] = {
182 - 	{
183 - 		.start = 0x20340000,
184 - 		.end = 0x20340000,
185 - 		.flags = IORESOURCE_MEM,
186 - 	}, {
187 - 		.start = 0x20340004,
188 - 		.end = 0x20340004,
189 - 		.flags = IORESOURCE_MEM,
190 - 	}, {
191 - 		.start = CONFIG_USB_SL811_BFIN_IRQ,
192 - 		.end = CONFIG_USB_SL811_BFIN_IRQ,
193 - 		.flags = IORESOURCE_IRQ | IORESOURCE_IRQ_HIGHLEVEL,
194 - 	},
195 - };
196 - 
197 - #if defined(CONFIG_USB_SL811_BFIN_USE_VBUS)
198 - void sl811_port_power(struct device *dev, int is_on)
199 - {
200 - 	gpio_request(CONFIG_USB_SL811_BFIN_GPIO_VBUS, "usb:SL811_VBUS");
201 - 	gpio_direction_output(CONFIG_USB_SL811_BFIN_GPIO_VBUS, is_on);
202 - 
203 - }
204 - #endif
205 - 
206 - static struct sl811_platform_data sl811_priv = {
207 - 	.potpg = 10,
208 - 	.power = 250, /* == 500mA */
209 - #if defined(CONFIG_USB_SL811_BFIN_USE_VBUS)
210 - 	.port_power = &sl811_port_power,
211 - #endif
212 - };
213 - 
214 - static struct platform_device sl811_hcd_device = {
215 - 	.name = "sl811-hcd",
216 - 	.id = 0,
217 - 	.dev = {
218 - 		.platform_data = &sl811_priv,
219 - 	},
220 - 	.num_resources = ARRAY_SIZE(sl811_hcd_resources),
221 - 	.resource = sl811_hcd_resources,
222 - };
223 - #endif
224 - 
225 - #if defined(CONFIG_USB_ISP1362_HCD) || defined(CONFIG_USB_ISP1362_HCD_MODULE)
226 - static struct resource isp1362_hcd_resources[] = {
227 - 	{
228 - 		.start = 0x20360000,
229 - 		.end = 0x20360000,
230 - 		.flags = IORESOURCE_MEM,
231 - 	}, {
232 - 		.start = 0x20360004,
233 - 		.end = 0x20360004,
234 - 		.flags = IORESOURCE_MEM,
235 - 	}, {
236 - 		.start = CONFIG_USB_ISP1362_BFIN_GPIO_IRQ,
237 - 		.end = CONFIG_USB_ISP1362_BFIN_GPIO_IRQ,
238 - 		.flags = IORESOURCE_IRQ | IORESOURCE_IRQ_HIGHLEVEL,
239 - 	},
240 - };
241 - 
242 - static struct isp1362_platform_data isp1362_priv = {
243 - 	.sel15Kres = 1,
244 - 	.clknotstop = 0,
245 - 	.oc_enable = 0,
246 - 	.int_act_high = 0,
247 - 	.int_edge_triggered = 0,
248 - 	.remote_wakeup_connected = 0,
249 - 	.no_power_switching = 1,
250 - 	.power_switching_mode = 0,
251 - };
252 - 
253 - static struct platform_device isp1362_hcd_device = {
254 - 	.name = "isp1362-hcd",
255 - 	.id = 0,
256 - 	.dev = {
257 - 		.platform_data = &isp1362_priv,
258 - 	},
259 - 	.num_resources = ARRAY_SIZE(isp1362_hcd_resources),
260 - 	.resource = isp1362_hcd_resources,
261 - };
262 - #endif
263 - 
264 - #if defined(CONFIG_BFIN_MAC) || defined(CONFIG_BFIN_MAC_MODULE)
265 - static struct platform_device bfin_mii_bus = {
266 - 	.name = "bfin_mii_bus",
267 - };
268 - 
269 - static struct platform_device bfin_mac_device = {
270 - 	.name = "bfin_mac",
271 - 	.dev.platform_data = &bfin_mii_bus,
272 - };
273 - #endif
274 - 
275 - #if defined(CONFIG_USB_NET2272) || defined(CONFIG_USB_NET2272_MODULE)
276 - static struct resource net2272_bfin_resources[] = {
277 - 	{
278 - 		.start = 0x20300000,
279 - 		.end = 0x20300000 + 0x100,
280 - 		.flags = IORESOURCE_MEM,
281 - 	}, {
282 - 		.start = IRQ_PF7,
283 - 		.end = IRQ_PF7,
284 - 		.flags = IORESOURCE_IRQ | IORESOURCE_IRQ_HIGHLEVEL,
285 - 	},
286 - };
287 - 
288 - static struct platform_device net2272_bfin_device = {
289 - 	.name = "net2272",
290 - 	.id = -1,
291 - 	.num_resources = ARRAY_SIZE(net2272_bfin_resources),
292 - 	.resource = net2272_bfin_resources,
293 - };
294 - #endif
295 - 
296 - #if defined(CONFIG_SPI_BFIN) || defined(CONFIG_SPI_BFIN_MODULE)
297 - /* all SPI peripherals info goes here */
298 - 
299 - #if defined(CONFIG_MTD_M25P80) \
300 - 	|| defined(CONFIG_MTD_M25P80_MODULE)
301 - static struct mtd_partition bfin_spi_flash_partitions[] = {
302 - 	{
303 - 		.name = "bootloader(spi)",
304 - 		.size = 0x00020000,
305 - 		.offset = 0,
306 - 		.mask_flags = MTD_CAP_ROM
307 - 	}, {
308 - 		.name = "linux kernel(spi)",
309 - 		.size = 0xe0000,
310 - 		.offset = 0x20000
311 - 	}, {
312 - 		.name = "file system(spi)",
313 - 		.size = 0x700000,
314 - 		.offset = 0x00100000,
315 - 	}
316 - };
317 - 
318 - static struct flash_platform_data bfin_spi_flash_data = {
319 - 	.name = "m25p80",
320 - 	.parts = bfin_spi_flash_partitions,
321 - 	.nr_parts = ARRAY_SIZE(bfin_spi_flash_partitions),
322 - 	.type = "m25p64",
323 - };
324 - 
325 - /* SPI flash chip (m25p64) */
326 - static struct bfin5xx_spi_chip spi_flash_chip_info = {
327 - 	.enable_dma = 0, /* use dma transfer with this chip*/
328 - 	.bits_per_word = 8,
329 - };
330 - #endif
331 - 
332 - #if defined(CONFIG_SPI_ADC_BF533) \
333 - 	|| defined(CONFIG_SPI_ADC_BF533_MODULE)
334 - /* SPI ADC chip */
335 - static struct bfin5xx_spi_chip spi_adc_chip_info = {
336 - 	.enable_dma = 1, /* use dma transfer with this chip*/
337 - 	.bits_per_word = 16,
338 - };
339 - #endif
340 - 
341 - #if defined(CONFIG_SND_BLACKFIN_AD1836) \
342 - 	|| defined(CONFIG_SND_BLACKFIN_AD1836_MODULE)
343 - static struct bfin5xx_spi_chip ad1836_spi_chip_info = {
344 - 	.enable_dma = 0,
345 - 	.bits_per_word = 16,
346 - };
347 - #endif
348 - 
349 - #if defined(CONFIG_AD9960) || defined(CONFIG_AD9960_MODULE)
350 - static struct bfin5xx_spi_chip ad9960_spi_chip_info = {
351 - 	.enable_dma = 0,
352 - 	.bits_per_word = 16,
353 - };
354 - #endif
355 - 
356 - #if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE)
357 - static struct bfin5xx_spi_chip spi_mmc_chip_info = {
358 - 	.enable_dma = 1,
359 - 	.bits_per_word = 8,
360 - };
361 - #endif
362 - 
363 - #if defined(CONFIG_PBX)
364 - static struct bfin5xx_spi_chip spi_si3xxx_chip_info = {
365 - 	.ctl_reg = 0x4, /* send zero */
366 - 	.enable_dma = 0,
367 - 	.bits_per_word = 8,
368 - 	.cs_change_per_word = 1,
369 - };
370 - #endif
371 - 
372 - #if defined(CONFIG_TOUCHSCREEN_AD7877) || defined(CONFIG_TOUCHSCREEN_AD7877_MODULE)
373 - static struct bfin5xx_spi_chip spi_ad7877_chip_info = {
374 - 	.enable_dma = 0,
375 - 	.bits_per_word = 16,
376 - };
377 - 
378 - static const struct ad7877_platform_data bfin_ad7877_ts_info = {
379 - 	.model = 7877,
380 - 	.vref_delay_usecs = 50, /* internal, no capacitor */
381 - 	.x_plate_ohms = 419,
382 - 	.y_plate_ohms = 486,
383 - 	.pressure_max = 1000,
384 - 	.pressure_min = 0,
385 - 	.stopacq_polarity = 1,
386 - 	.first_conversion_delay = 3,
387 - 	.acquisition_time = 1,
388 - 	.averaging = 1,
389 - 	.pen_down_acc_interval = 1,
390 - };
391 - #endif
392 - 
393 - static struct spi_board_info bfin_spi_board_info[] __initdata = {
394 - #if defined(CONFIG_MTD_M25P80) \
395 - 	|| defined(CONFIG_MTD_M25P80_MODULE)
396 - 	{
397 - 		/* the modalias must be the same as spi device driver name */
398 - 		.modalias = "m25p80", /* Name of spi_driver for this device */
399 - 		.max_speed_hz = 25000000, /* max spi clock (SCK) speed in HZ */
400 - 		.bus_num = 0, /* Framework bus number */
401 - 		.chip_select = 1, /* Framework chip select. On STAMP537 it is SPISSEL1*/
402 - 		.platform_data = &bfin_spi_flash_data,
403 - 		.controller_data = &spi_flash_chip_info,
404 - 		.mode = SPI_MODE_3,
405 - 	},
406 - #endif
407 - 
408 - #if defined(CONFIG_SPI_ADC_BF533) \
409 - 	|| defined(CONFIG_SPI_ADC_BF533_MODULE)
410 - 	{
411 - 		.modalias = "bfin_spi_adc", /* Name of spi_driver for this device */
412 - 		.max_speed_hz = 6250000, /* max spi clock (SCK) speed in HZ */
413 - 		.bus_num = 0, /* Framework bus number */
414 - 		.chip_select = 1, /* Framework chip select. */
415 - 		.platform_data = NULL, /* No spi_driver specific config */
416 - 		.controller_data = &spi_adc_chip_info,
417 - 	},
418 - #endif
419 - 
420 - #if defined(CONFIG_SND_BLACKFIN_AD1836) \
421 - 	|| defined(CONFIG_SND_BLACKFIN_AD1836_MODULE)
422 - 	{
423 - 		.modalias = "ad1836-spi",
424 - 		.max_speed_hz = 3125000, /* max spi clock (SCK) speed in HZ */
425 - 		.bus_num = 0,
426 - 		.chip_select = CONFIG_SND_BLACKFIN_SPI_PFBIT,
427 - 		.controller_data = &ad1836_spi_chip_info,
428 - 	},
429 - #endif
430 - #if defined(CONFIG_AD9960) || defined(CONFIG_AD9960_MODULE)
431 - 	{
432 - 		.modalias = "ad9960-spi",
433 - 		.max_speed_hz = 10000000, /* max spi clock (SCK) speed in HZ */
434 - 		.bus_num = 0,
435 - 		.chip_select = 1,
436 - 		.controller_data = &ad9960_spi_chip_info,
437 - 	},
438 - #endif
439 - #if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE)
440 - 	{
441 - 		.modalias = "spi_mmc_dummy",
442 - 		.max_speed_hz = 25000000, /* max spi clock (SCK) speed in HZ */
443 - 		.bus_num = 0,
444 - 		.chip_select = 0,
445 - 		.platform_data = NULL,
446 - 		.controller_data = &spi_mmc_chip_info,
447 - 		.mode = SPI_MODE_3,
448 - 	},
449 - 	{
450 - 		.modalias = "spi_mmc",
451 - 		.max_speed_hz = 25000000, /* max spi clock (SCK) speed in HZ */
452 - 		.bus_num = 0,
453 - 		.chip_select = CONFIG_SPI_MMC_CS_CHAN,
454 - 		.platform_data = NULL,
455 - 		.controller_data = &spi_mmc_chip_info,
456 - 		.mode = SPI_MODE_3,
457 - 	},
458 - #endif
459 - #if defined(CONFIG_PBX)
460 - 	{
461 - 		.modalias = "fxs-spi",
462 - 		.max_speed_hz = 12500000, /* max spi clock (SCK) speed in HZ */
463 - 		.bus_num = 0,
464 - 		.chip_select = 8 - CONFIG_J11_JUMPER,
465 - 		.controller_data = &spi_si3xxx_chip_info,
466 - 		.mode = SPI_MODE_3,
467 - 	},
468 - 	{
469 - 		.modalias = "fxo-spi",
470 - 		.max_speed_hz = 12500000, /* max spi clock (SCK) speed in HZ */
471 - 		.bus_num = 0,
472 - 		.chip_select = 8 - CONFIG_J19_JUMPER,
473 - 		.controller_data = &spi_si3xxx_chip_info,
474 - 		.mode = SPI_MODE_3,
475 - 	},
476 - #endif
477 - #if defined(CONFIG_TOUCHSCREEN_AD7877) || defined(CONFIG_TOUCHSCREEN_AD7877_MODULE)
478 - 	{
479 - 		.modalias = "ad7877",
480 - 		.platform_data = &bfin_ad7877_ts_info,
481 - 		.irq = IRQ_PF6,
482 - 		.max_speed_hz = 12500000, /* max spi clock (SCK) speed in HZ */
483 - 		.bus_num = 0,
484 - 		.chip_select = 1,
485 - 		.controller_data = &spi_ad7877_chip_info,
486 - 	},
487 - #endif
488 - };
489 - 
490 - /* SPI controller data */
491 - static struct bfin5xx_spi_master bfin_spi0_info = {
492 - 	.num_chipselect = 8,
493 - 	.enable_dma = 1, /* master has the ability to do dma transfer */
494 - 	.pin_req = {P_SPI0_SCK, P_SPI0_MISO, P_SPI0_MOSI, 0},
495 - };
496 - 
497 - /* SPI (0) */
498 - static struct resource bfin_spi0_resource[] = {
499 - 	[0] = {
500 - 		.start = SPI0_REGBASE,
501 - 		.end   = SPI0_REGBASE + 0xFF,
502 - 		.flags = IORESOURCE_MEM,
503 - 	},
504 - 	[1] = {
505 - 		.start = CH_SPI,
506 - 		.end   = CH_SPI,
507 - 		.flags = IORESOURCE_IRQ,
508 - 	},
509 - };
510 - 
511 - static struct platform_device bfin_spi0_device = {
512 - 	.name = "bfin-spi",
513 - 	.id = 0, /* Bus number */
514 - 	.num_resources = ARRAY_SIZE(bfin_spi0_resource),
515 - 	.resource = bfin_spi0_resource,
516 - 	.dev = {
517 - 		.platform_data = &bfin_spi0_info, /* Passed to driver */
518 - 	},
519 - };
520 - #endif  /* spi master and devices */
521 - 
522 - #if defined(CONFIG_FB_BF537_LQ035) || defined(CONFIG_FB_BF537_LQ035_MODULE)
523 - static struct platform_device bfin_fb_device = {
524 - 	.name = "bf537-lq035",
525 - };
526 - #endif
527 - 
528 - #if defined(CONFIG_FB_BFIN_7393) || defined(CONFIG_FB_BFIN_7393_MODULE)
529 - static struct platform_device bfin_fb_adv7393_device = {
530 - 	.name = "bfin-adv7393",
531 - };
532 - #endif
533 - 
534 - #if defined(CONFIG_SERIAL_BFIN) || defined(CONFIG_SERIAL_BFIN_MODULE)
535 - static struct resource bfin_uart_resources[] = {
536 - 	{
537 - 		.start = 0xFFC00400,
538 - 		.end = 0xFFC004FF,
539 - 		.flags = IORESOURCE_MEM,
540 - 	}, {
541 - 		.start = 0xFFC02000,
542 - 		.end = 0xFFC020FF,
543 - 		.flags = IORESOURCE_MEM,
544 - 	},
545 - };
546 - 
547 - static struct platform_device bfin_uart_device = {
548 - 	.name = "bfin-uart",
549 - 	.id = 1,
550 - 	.num_resources = ARRAY_SIZE(bfin_uart_resources),
551 - 	.resource = bfin_uart_resources,
552 - };
553 - #endif
554 - 
555 - #if defined(CONFIG_BFIN_SIR) || defined(CONFIG_BFIN_SIR_MODULE)
556 - #ifdef CONFIG_BFIN_SIR0
557 - static struct resource bfin_sir0_resources[] = {
558 - 	{
559 - 		.start = 0xFFC00400,
560 - 		.end = 0xFFC004FF,
561 - 		.flags = IORESOURCE_MEM,
562 - 	},
563 - 	{
564 - 		.start = IRQ_UART0_RX,
565 - 		.end = IRQ_UART0_RX+1,
566 - 		.flags = IORESOURCE_IRQ,
567 - 	},
568 - 	{
569 - 		.start = CH_UART0_RX,
570 - 		.end = CH_UART0_RX+1,
571 - 		.flags = IORESOURCE_DMA,
572 - 	},
573 - };
574 - 
575 - static struct platform_device bfin_sir0_device = {
576 - 	.name = "bfin_sir",
577 - 	.id = 0,
578 - 	.num_resources = ARRAY_SIZE(bfin_sir0_resources),
579 - 	.resource = bfin_sir0_resources,
580 - };
581 - #endif
582 - #ifdef CONFIG_BFIN_SIR1
583 - static struct resource bfin_sir1_resources[] = {
584 - 	{
585 - 		.start = 0xFFC02000,
586 - 		.end = 0xFFC020FF,
587 - 		.flags = IORESOURCE_MEM,
588 - 	},
589 - 	{
590 - 		.start = IRQ_UART1_RX,
591 - 		.end = IRQ_UART1_RX+1,
592 - 		.flags = IORESOURCE_IRQ,
593 - 	},
594 - 	{
595 - 		.start = CH_UART1_RX,
596 - 		.end = CH_UART1_RX+1,
597 - 		.flags = IORESOURCE_DMA,
598 - 	},
599 - };
600 - 
601 - static struct platform_device bfin_sir1_device = {
602 - 	.name = "bfin_sir",
603 - 	.id = 1,
604 - 	.num_resources = ARRAY_SIZE(bfin_sir1_resources),
605 - 	.resource = bfin_sir1_resources,
606 - };
607 - #endif
608 - #endif
609 - 
610 - #if defined(CONFIG_I2C_BLACKFIN_TWI) || defined(CONFIG_I2C_BLACKFIN_TWI_MODULE)
611 - static struct resource bfin_twi0_resource[] = {
612 - 	[0] = {
613 - 		.start = TWI0_REGBASE,
614 - 		.end   = TWI0_REGBASE + 0xFF,
615 - 		.flags = IORESOURCE_MEM,
616 - 	},
617 - 	[1] = {
618 - 		.start = IRQ_TWI,
619 - 		.end   = IRQ_TWI,
620 - 		.flags = IORESOURCE_IRQ,
621 - 	},
622 - };
623 - 
624 - static struct platform_device i2c_bfin_twi_device = {
625 - 	.name = "i2c-bfin-twi",
626 - 	.id = 0,
627 - 	.num_resources = ARRAY_SIZE(bfin_twi0_resource),
628 - 	.resource = bfin_twi0_resource,
629 - };
630 - #endif
631 - 
632 - #if defined(CONFIG_SERIAL_BFIN_SPORT) || defined(CONFIG_SERIAL_BFIN_SPORT_MODULE)
633 - static struct platform_device bfin_sport0_uart_device = {
634 - 	.name = "bfin-sport-uart",
635 - 	.id = 0,
636 - };
637 - 
638 - static struct platform_device bfin_sport1_uart_device = {
639 - 	.name = "bfin-sport-uart",
640 - 	.id = 1,
641 - };
642 - #endif
643 - 
644 - static struct platform_device *stamp_devices[] __initdata = {
645 - #if defined(CONFIG_BFIN_CFPCMCIA) || defined(CONFIG_BFIN_CFPCMCIA_MODULE)
646 - 	&bfin_pcmcia_cf_device,
647 - #endif
648 - 
649 - #if defined(CONFIG_RTC_DRV_BFIN) || defined(CONFIG_RTC_DRV_BFIN_MODULE)
650 - 	&rtc_device,
651 - #endif
652 - 
653 - #if defined(CONFIG_USB_SL811_HCD) || defined(CONFIG_USB_SL811_HCD_MODULE)
654 - 	&sl811_hcd_device,
655 - #endif
656 - 
657 - #if defined(CONFIG_USB_ISP1362_HCD) || defined(CONFIG_USB_ISP1362_HCD_MODULE)
658 - 	&isp1362_hcd_device,
659 - #endif
660 - 
661 - #if defined(CONFIG_SMC91X) || defined(CONFIG_SMC91X_MODULE)
662 - 	&smc91x_device,
663 - #endif
664 - 
665 - #if defined(CONFIG_DM9000) || defined(CONFIG_DM9000_MODULE)
666 - 	&dm9000_device,
667 - #endif
668 - 
669 - #if defined(CONFIG_BFIN_MAC) || defined(CONFIG_BFIN_MAC_MODULE)
670 - 	&bfin_mii_bus,
671 - 	&bfin_mac_device,
672 - #endif
673 - 
674 - #if defined(CONFIG_USB_NET2272) || defined(CONFIG_USB_NET2272_MODULE)
675 - 	&net2272_bfin_device,
676 - #endif
677 - 
678 - #if defined(CONFIG_USB_ISP1760_HCD) || defined(CONFIG_USB_ISP1760_HCD_MODULE)
679 - 	&bfin_isp1760_device,
680 - #endif
681 - 
682 - #if defined(CONFIG_SPI_BFIN) || defined(CONFIG_SPI_BFIN_MODULE)
683 - 	&bfin_spi0_device,
684 - #endif
685 - 
686 - #if defined(CONFIG_FB_BF537_LQ035) || defined(CONFIG_FB_BF537_LQ035_MODULE)
687 - 	&bfin_fb_device,
688 - #endif
689 - 
690 - #if defined(CONFIG_FB_BFIN_7393) || defined(CONFIG_FB_BFIN_7393_MODULE)
691 - 	&bfin_fb_adv7393_device,
692 - #endif
693 - 
694 - #if defined(CONFIG_SERIAL_BFIN) || defined(CONFIG_SERIAL_BFIN_MODULE)
695 - 	&bfin_uart_device,
696 - #endif
697 - 
698 - #if defined(CONFIG_BFIN_SIR) || defined(CONFIG_BFIN_SIR_MODULE)
699 - #ifdef CONFIG_BFIN_SIR0
700 - 	&bfin_sir0_device,
701 - #endif
702 - #ifdef CONFIG_BFIN_SIR1
703 - 	&bfin_sir1_device,
704 - #endif
705 - #endif
706 - 
707 - #if defined(CONFIG_I2C_BLACKFIN_TWI) || defined(CONFIG_I2C_BLACKFIN_TWI_MODULE)
708 - 	&i2c_bfin_twi_device,
709 - #endif
710 - 
711 - #if defined(CONFIG_SERIAL_BFIN_SPORT) || defined(CONFIG_SERIAL_BFIN_SPORT_MODULE)
712 - 	&bfin_sport0_uart_device,
713 - 	&bfin_sport1_uart_device,
714 - #endif
715 - };
716 - 
717 - static int __init generic_init(void)
718 - {
719 - 	printk(KERN_INFO "%s(): registering device resources\n", __func__);
720 - 	platform_add_devices(stamp_devices, ARRAY_SIZE(stamp_devices));
721 - #if defined(CONFIG_SPI_BFIN) || defined(CONFIG_SPI_BFIN_MODULE)
722 - 	spi_register_board_info(bfin_spi_board_info,
723 - 				ARRAY_SIZE(bfin_spi_board_info));
724 - #endif
725 - 
726 - 	return 0;
727 - }
728 - 
729 - arch_initcall(generic_init);
730 - 
731 - void native_machine_restart(char *cmd)
732 - {
733 - 	/* workaround reboot hang when booting from SPI */
734 - 	if ((bfin_read_SYSCR() & 0x7) == 0x3)
735 - 		bfin_reset_boot_spi_cs(P_DEFAULT_BOOT_SPI_CS);
736 - }
737 - 
738 - #if defined(CONFIG_BFIN_MAC) || defined(CONFIG_BFIN_MAC_MODULE)
739 - void bfin_get_ether_addr(char *addr)
740 - {
741 - 	random_ether_addr(addr);
742 - 	printk(KERN_WARNING "%s:%s: Setting Ethernet MAC to a random one\n", __FILE__, __func__);
743 - }
744 - EXPORT_SYMBOL(bfin_get_ether_addr);
745 - #endif
+7 -17
arch/blackfin/mach-bf537/boards/minotaur.c
··· 134 134 };
135 135 #endif
136 136 
137 - #if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE)
138 - static struct bfin5xx_spi_chip spi_mmc_chip_info = {
139 - 	.enable_dma = 1,
137 + #if defined(CONFIG_MMC_SPI) || defined(CONFIG_MMC_SPI_MODULE)
138 + static struct bfin5xx_spi_chip mmc_spi_chip_info = {
139 + 	.enable_dma = 0,
140 140 	.bits_per_word = 8,
141 141 };
142 142 #endif
··· 156 156 	},
157 157 #endif
158 158 
159 - #if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE)
159 + #if defined(CONFIG_MMC_SPI) || defined(CONFIG_MMC_SPI_MODULE)
160 160 	{
161 - 		.modalias = "spi_mmc_dummy",
161 + 		.modalias = "mmc_spi",
162 162 		.max_speed_hz = 5000000, /* max spi clock (SCK) speed in HZ */
163 163 		.bus_num = 0,
164 - 		.chip_select = 0,
165 - 		.platform_data = NULL,
166 - 		.controller_data = &spi_mmc_chip_info,
167 - 		.mode = SPI_MODE_3,
168 - 	},
169 - 	{
170 - 		.modalias = "spi_mmc",
171 - 		.max_speed_hz = 5000000, /* max spi clock (SCK) speed in HZ */
172 - 		.bus_num = 0,
173 - 		.chip_select = CONFIG_SPI_MMC_CS_CHAN,
174 - 		.platform_data = NULL,
175 - 		.controller_data = &spi_mmc_chip_info,
164 + 		.chip_select = 5,
165 + 		.controller_data = &mmc_spi_chip_info,
176 166 		.mode = SPI_MODE_3,
177 167 	},
178 168 #endif
+7 -17
arch/blackfin/mach-bf537/boards/pnav10.c
··· 289 289 }; 290 290 #endif 291 291 292 - #if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE) 293 - static struct bfin5xx_spi_chip spi_mmc_chip_info = { 294 - .enable_dma = 1, 292 + #if defined(CONFIG_MMC_SPI) || defined(CONFIG_MMC_SPI_MODULE) 293 + static struct bfin5xx_spi_chip mmc_spi_chip_info = { 294 + .enable_dma = 0, 295 295 .bits_per_word = 8, 296 296 }; 297 297 #endif ··· 364 364 .controller_data = &ad9960_spi_chip_info, 365 365 }, 366 366 #endif 367 - #if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE) 367 + #if defined(CONFIG_MMC_SPI) || defined(CONFIG_MMC_SPI_MODULE) 368 368 { 369 - .modalias = "spi_mmc_dummy", 369 + .modalias = "mmc_spi", 370 370 .max_speed_hz = 25000000, /* max spi clock (SCK) speed in HZ */ 371 371 .bus_num = 0, 372 - .chip_select = 7, 373 - .platform_data = NULL, 374 - .controller_data = &spi_mmc_chip_info, 375 - .mode = SPI_MODE_3, 376 - }, 377 - { 378 - .modalias = "spi_mmc", 379 - .max_speed_hz = 25000000, /* max spi clock (SCK) speed in HZ */ 380 - .bus_num = 0, 381 - .chip_select = CONFIG_SPI_MMC_CS_CHAN, 382 - .platform_data = NULL, 383 - .controller_data = &spi_mmc_chip_info, 372 + .chip_select = 5, 373 + .controller_data = &mmc_spi_chip_info, 384 374 .mode = SPI_MODE_3, 385 375 }, 386 376 #endif
+7 -17
arch/blackfin/mach-bf537/boards/tcm_bf537.c
··· 108 108 }; 109 109 #endif 110 110 111 - #if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE) 112 - static struct bfin5xx_spi_chip spi_mmc_chip_info = { 113 - .enable_dma = 1, 111 + #if defined(CONFIG_MMC_SPI) || defined(CONFIG_MMC_SPI_MODULE) 112 + static struct bfin5xx_spi_chip mmc_spi_chip_info = { 113 + .enable_dma = 0, 114 114 .bits_per_word = 8, 115 115 }; 116 116 #endif ··· 160 160 }, 161 161 #endif 162 162 163 - #if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE) 163 + #if defined(CONFIG_MMC_SPI) || defined(CONFIG_MMC_SPI_MODULE) 164 164 { 165 - .modalias = "spi_mmc_dummy", 165 + .modalias = "mmc_spi", 166 166 .max_speed_hz = 25000000, /* max spi clock (SCK) speed in HZ */ 167 167 .bus_num = 0, 168 - .chip_select = 7, 169 - .platform_data = NULL, 170 - .controller_data = &spi_mmc_chip_info, 171 - .mode = SPI_MODE_3, 172 - }, 173 - { 174 - .modalias = "spi_mmc", 175 - .max_speed_hz = 25000000, /* max spi clock (SCK) speed in HZ */ 176 - .bus_num = 0, 177 - .chip_select = CONFIG_SPI_MMC_CS_CHAN, 178 - .platform_data = NULL, 179 - .controller_data = &spi_mmc_chip_info, 168 + .chip_select = 5, 169 + .controller_data = &mmc_spi_chip_info, 180 170 .mode = SPI_MODE_3, 181 171 }, 182 172 #endif
+5 -2
arch/blackfin/mach-bf537/include/mach/anomaly.h
··· 2 2 * File: include/asm-blackfin/mach-bf537/anomaly.h 3 3 * Bugs: Enter bugs at http://blackfin.uclinux.org/ 4 4 * 5 - * Copyright (C) 2004-2008 Analog Devices Inc. 5 + * Copyright (C) 2004-2009 Analog Devices Inc. 6 6 * Licensed under the GPL-2 or later. 7 7 */ 8 8 ··· 110 110 #define ANOMALY_05000301 (1) 111 111 /* SSYNCs After Writes To CAN/DMA MMR Registers Are Not Always Handled Correctly */ 112 112 #define ANOMALY_05000304 (__SILICON_REVISION__ < 3) 113 - /* New Feature: Additional Hysteresis on SPORT Input Pins (Not Available On Older Silicon) */ 113 + /* SPORT_HYS Bit in PLL_CTL Register Is Not Functional */ 114 114 #define ANOMALY_05000305 (__SILICON_REVISION__ < 3) 115 115 /* SCKELOW Bit Does Not Maintain State Through Hibernate */ 116 116 #define ANOMALY_05000307 (__SILICON_REVISION__ < 3) ··· 168 168 #define ANOMALY_05000323 (0) 169 169 #define ANOMALY_05000353 (1) 170 170 #define ANOMALY_05000363 (0) 171 + #define ANOMALY_05000380 (0) 171 172 #define ANOMALY_05000386 (1) 172 173 #define ANOMALY_05000412 (0) 173 174 #define ANOMALY_05000432 (0) 174 175 #define ANOMALY_05000435 (0) 176 + #define ANOMALY_05000447 (0) 177 + #define ANOMALY_05000448 (0) 175 178 176 179 #endif
+2 -2
arch/blackfin/mach-bf537/include/mach/bfin_serial_5xx.h
··· 144 144 CH_UART0_TX, 145 145 CH_UART0_RX, 146 146 #endif 147 - #ifdef CONFIG_BFIN_UART0_CTSRTS 147 + #ifdef CONFIG_SERIAL_BFIN_CTSRTS 148 148 CONFIG_UART0_CTS_PIN, 149 149 CONFIG_UART0_RTS_PIN, 150 150 #endif ··· 158 158 CH_UART1_TX, 159 159 CH_UART1_RX, 160 160 #endif 161 - #ifdef CONFIG_BFIN_UART1_CTSRTS 161 + #ifdef CONFIG_SERIAL_BFIN_CTSRTS 162 162 CONFIG_UART1_CTS_PIN, 163 163 CONFIG_UART1_RTS_PIN, 164 164 #endif
+5 -1
arch/blackfin/mach-bf538/include/mach/anomaly.h
··· 2 2 * File: include/asm-blackfin/mach-bf538/anomaly.h 3 3 * Bugs: Enter bugs at http://blackfin.uclinux.org/ 4 4 * 5 - * Copyright (C) 2004-2008 Analog Devices Inc. 5 + * Copyright (C) 2004-2009 Analog Devices Inc. 6 6 * Licensed under the GPL-2 or later. 7 7 */ 8 8 ··· 120 120 #define ANOMALY_05000198 (0) 121 121 #define ANOMALY_05000230 (0) 122 122 #define ANOMALY_05000263 (0) 123 + #define ANOMALY_05000305 (0) 123 124 #define ANOMALY_05000311 (0) 124 125 #define ANOMALY_05000323 (0) 125 126 #define ANOMALY_05000353 (1) 126 127 #define ANOMALY_05000363 (0) 128 + #define ANOMALY_05000380 (0) 127 129 #define ANOMALY_05000386 (1) 128 130 #define ANOMALY_05000412 (0) 129 131 #define ANOMALY_05000432 (0) 130 132 #define ANOMALY_05000435 (0) 133 + #define ANOMALY_05000447 (0) 134 + #define ANOMALY_05000448 (0) 131 135 132 136 #endif
+2 -2
arch/blackfin/mach-bf538/include/mach/bfin_serial_5xx.h
··· 144 144 CH_UART0_TX, 145 145 CH_UART0_RX, 146 146 #endif 147 - #ifdef CONFIG_BFIN_UART0_CTSRTS 147 + #ifdef CONFIG_SERIAL_BFIN_CTSRTS 148 148 CONFIG_UART0_CTS_PIN, 149 149 CONFIG_UART0_RTS_PIN, 150 150 #endif ··· 158 158 CH_UART1_TX, 159 159 CH_UART1_RX, 160 160 #endif 161 - #ifdef CONFIG_BFIN_UART1_CTSRTS 161 + #ifdef CONFIG_SERIAL_BFIN_CTSRTS 162 162 CONFIG_UART1_CTS_PIN, 163 163 CONFIG_UART1_RTS_PIN, 164 164 #endif
+18 -4
arch/blackfin/mach-bf548/include/mach/anomaly.h
··· 2 2 * File: include/asm-blackfin/mach-bf548/anomaly.h 3 3 * Bugs: Enter bugs at http://blackfin.uclinux.org/ 4 4 * 5 - * Copyright (C) 2004-2008 Analog Devices Inc. 5 + * Copyright (C) 2004-2009 Analog Devices Inc. 6 6 * Licensed under the GPL-2 or later. 7 7 */ 8 8 9 9 /* This file shoule be up to date with: 10 - * - Revision G, 08/07/2008; ADSP-BF542/BF544/BF547/BF548/BF549 Blackfin Processor Anomaly List 10 + * - Revision H, 01/16/2009; ADSP-BF542/BF544/BF547/BF548/BF549 Blackfin Processor Anomaly List 11 11 */ 12 12 13 13 #ifndef _MACH_ANOMALY_H_ ··· 91 91 #define ANOMALY_05000371 (__SILICON_REVISION__ < 2) 92 92 /* USB DP/DM Data Pins May Lose State When Entering Hibernate */ 93 93 #define ANOMALY_05000372 (__SILICON_REVISION__ < 1) 94 - /* Mobile DDR Operation Not Functional */ 95 - #define ANOMALY_05000377 (1) 96 94 /* Security/Authentication Speedpath Causes Authentication To Fail To Initiate */ 97 95 #define ANOMALY_05000378 (__SILICON_REVISION__ < 2) 98 96 /* 16-Bit NAND FLASH Boot Mode Is Not Functional */ ··· 155 157 #define ANOMALY_05000429 (__SILICON_REVISION__ < 2) 156 158 /* Software System Reset Corrupts PLL_LOCKCNT Register */ 157 159 #define ANOMALY_05000430 (__SILICON_REVISION__ >= 2) 160 + /* Incorrect Use of Stack in Lockbox Firmware During Authentication */ 161 + #define ANOMALY_05000431 (__SILICON_REVISION__ < 3) 162 + /* OTP Write Accesses Not Supported */ 163 + #define ANOMALY_05000442 (__SILICON_REVISION__ < 1) 158 164 /* IFLUSH Instruction at End of Hardware Loop Causes Infinite Stall */ 159 165 #define ANOMALY_05000443 (1) 166 + /* CDMAPRIO and L2DMAPRIO Bits in the SYSCR Register Are Not Functional */ 167 + #define ANOMALY_05000446 (1) 168 + /* UART IrDA Receiver Fails on Extended Bit Pulses */ 169 + #define ANOMALY_05000447 (1) 170 + /* DDR Clock Duty Cycle Spec Violation (tCH, tCL) */ 171 + #define ANOMALY_05000448 (__SILICON_REVISION__ == 1) 172 + /* Reduced Timing Margins on DDR Output Setup and Hold (tDS and tDH) */ 173 + 
#define ANOMALY_05000449 (__SILICON_REVISION__ == 1) 174 + /* USB DMA Mode 1 Short Packet Data Corruption */ 175 + #define ANOMALY_05000450 (1) 160 176 161 177 /* Anomalies that don't exist on this proc */ 162 178 #define ANOMALY_05000125 (0) ··· 183 171 #define ANOMALY_05000263 (0) 184 172 #define ANOMALY_05000266 (0) 185 173 #define ANOMALY_05000273 (0) 174 + #define ANOMALY_05000278 (0) 175 + #define ANOMALY_05000305 (0) 186 176 #define ANOMALY_05000307 (0) 187 177 #define ANOMALY_05000311 (0) 188 178 #define ANOMALY_05000323 (0)
+15 -7
arch/blackfin/mach-bf548/include/mach/bfin_serial_5xx.h
··· 63 63 #define UART_ENABLE_INTS(x, v) UART_SET_IER(x, v) 64 64 #define UART_DISABLE_INTS(x) UART_CLEAR_IER(x, 0xF) 65 65 66 - #if defined(CONFIG_BFIN_UART0_CTSRTS) || defined(CONFIG_BFIN_UART1_CTSRTS) 66 + #if defined(CONFIG_BFIN_UART0_CTSRTS) || defined(CONFIG_BFIN_UART2_CTSRTS) 67 67 # define CONFIG_SERIAL_BFIN_CTSRTS 68 68 69 69 # ifndef CONFIG_UART0_CTS_PIN ··· 74 74 # define CONFIG_UART0_RTS_PIN -1 75 75 # endif 76 76 77 - # ifndef CONFIG_UART1_CTS_PIN 78 - # define CONFIG_UART1_CTS_PIN -1 77 + # ifndef CONFIG_UART2_CTS_PIN 78 + # define CONFIG_UART2_CTS_PIN -1 79 79 # endif 80 80 81 - # ifndef CONFIG_UART1_RTS_PIN 82 - # define CONFIG_UART1_RTS_PIN -1 81 + # ifndef CONFIG_UART2_RTS_PIN 82 + # define CONFIG_UART2_RTS_PIN -1 83 83 # endif 84 84 #endif 85 85 ··· 130 130 CH_UART0_TX, 131 131 CH_UART0_RX, 132 132 #endif 133 - #ifdef CONFIG_BFIN_UART0_CTSRTS 133 + #ifdef CONFIG_SERIAL_BFIN_CTSRTS 134 134 CONFIG_UART0_CTS_PIN, 135 135 CONFIG_UART0_RTS_PIN, 136 136 #endif ··· 144 144 CH_UART1_TX, 145 145 CH_UART1_RX, 146 146 #endif 147 + #ifdef CONFIG_SERIAL_BFIN_CTSRTS 148 + 0, 149 + 0, 150 + #endif 147 151 }, 148 152 #endif 149 153 #ifdef CONFIG_SERIAL_BFIN_UART2 ··· 158 154 CH_UART2_TX, 159 155 CH_UART2_RX, 160 156 #endif 161 - #ifdef CONFIG_BFIN_UART2_CTSRTS 157 + #ifdef CONFIG_SERIAL_BFIN_CTSRTS 162 158 CONFIG_UART2_CTS_PIN, 163 159 CONFIG_UART2_RTS_PIN, 164 160 #endif ··· 171 167 #ifdef CONFIG_SERIAL_BFIN_DMA 172 168 CH_UART3_TX, 173 169 CH_UART3_RX, 170 + #endif 171 + #ifdef CONFIG_SERIAL_BFIN_CTSRTS 172 + 0, 173 + 0, 174 174 #endif 175 175 }, 176 176 #endif
+4 -4
arch/blackfin/mach-bf548/include/mach/irq.h
··· 123 123 #define IRQ_MXVR_ERROR BFIN_IRQ(51) /* MXVR Status (Error) Interrupt */ 124 124 #define IRQ_MXVR_MSG BFIN_IRQ(52) /* MXVR Message Interrupt */ 125 125 #define IRQ_MXVR_PKT BFIN_IRQ(53) /* MXVR Packet Interrupt */ 126 - #define IRQ_EPP1_ERROR BFIN_IRQ(54) /* EPPI1 Error Interrupt */ 127 - #define IRQ_EPP2_ERROR BFIN_IRQ(55) /* EPPI2 Error Interrupt */ 126 + #define IRQ_EPPI1_ERROR BFIN_IRQ(54) /* EPPI1 Error Interrupt */ 127 + #define IRQ_EPPI2_ERROR BFIN_IRQ(55) /* EPPI2 Error Interrupt */ 128 128 #define IRQ_UART3_ERROR BFIN_IRQ(56) /* UART3 Status (Error) Interrupt */ 129 129 #define IRQ_HOST_ERROR BFIN_IRQ(57) /* HOST Status (Error) Interrupt */ 130 130 #define IRQ_PIXC_ERROR BFIN_IRQ(59) /* PIXC Status (Error) Interrupt */ ··· 361 361 #define IRQ_UART2_ERR IRQ_UART2_ERROR 362 362 #define IRQ_CAN0_ERR IRQ_CAN0_ERROR 363 363 #define IRQ_MXVR_ERR IRQ_MXVR_ERROR 364 - #define IRQ_EPP1_ERR IRQ_EPP1_ERROR 365 - #define IRQ_EPP2_ERR IRQ_EPP2_ERROR 364 + #define IRQ_EPPI1_ERR IRQ_EPPI1_ERROR 365 + #define IRQ_EPPI2_ERR IRQ_EPPI2_ERROR 366 366 #define IRQ_UART3_ERR IRQ_UART3_ERROR 367 367 #define IRQ_HOST_ERR IRQ_HOST_ERROR 368 368 #define IRQ_PIXC_ERR IRQ_PIXC_ERROR
-5
arch/blackfin/mach-bf561/boards/Kconfig
··· 19 19 help 20 20 CM-BF561 support for EVAL- and DEV-Board. 21 21 22 - config GENERIC_BF561_BOARD 23 - bool "Generic" 24 - help 25 - Generic or Custom board support. 26 - 27 22 endchoice
-1
arch/blackfin/mach-bf561/boards/Makefile
··· 2 2 # arch/blackfin/mach-bf561/boards/Makefile 3 3 # 4 4 5 - obj-$(CONFIG_GENERIC_BF561_BOARD) += generic_board.o 6 5 obj-$(CONFIG_BFIN561_BLUETECHNIX_CM) += cm_bf561.o 7 6 obj-$(CONFIG_BFIN561_EZKIT) += ezkit.o 8 7 obj-$(CONFIG_BFIN561_TEPLA) += tepla.o
+7 -8
arch/blackfin/mach-bf561/boards/cm_bf561.c
··· 105 105 }; 106 106 #endif 107 107 108 - #if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE) 109 - static struct bfin5xx_spi_chip spi_mmc_chip_info = { 110 - .enable_dma = 1, 108 + #if defined(CONFIG_MMC_SPI) || defined(CONFIG_MMC_SPI_MODULE) 109 + static struct bfin5xx_spi_chip mmc_spi_chip_info = { 110 + .enable_dma = 0, 111 111 .bits_per_word = 8, 112 112 }; 113 113 #endif ··· 155 155 .controller_data = &ad9960_spi_chip_info, 156 156 }, 157 157 #endif 158 - #if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE) 158 + #if defined(CONFIG_MMC_SPI) || defined(CONFIG_MMC_SPI_MODULE) 159 159 { 160 - .modalias = "spi_mmc", 160 + .modalias = "mmc_spi", 161 161 .max_speed_hz = 25000000, /* max spi clock (SCK) speed in HZ */ 162 162 .bus_num = 0, 163 - .chip_select = CONFIG_SPI_MMC_CS_CHAN, 164 - .platform_data = NULL, 165 - .controller_data = &spi_mmc_chip_info, 163 + .chip_select = 5, 164 + .controller_data = &mmc_spi_chip_info, 166 165 .mode = SPI_MODE_3, 167 166 }, 168 167 #endif
-113
arch/blackfin/mach-bf561/boards/generic_board.c
··· 1 - /* 2 - * File: arch/blackfin/mach-bf561/generic_board.c 3 - * Based on: arch/blackfin/mach-bf533/ezkit.c 4 - * Author: Aidan Williams <aidan@nicta.com.au> 5 - * 6 - * Created: 7 - * Description: 8 - * 9 - * Modified: 10 - * Copyright 2005 National ICT Australia (NICTA) 11 - * Copyright 2004-2006 Analog Devices Inc. 12 - * 13 - * Bugs: Enter bugs at http://blackfin.uclinux.org/ 14 - * 15 - * This program is free software; you can redistribute it and/or modify 16 - * it under the terms of the GNU General Public License as published by 17 - * the Free Software Foundation; either version 2 of the License, or 18 - * (at your option) any later version. 19 - * 20 - * This program is distributed in the hope that it will be useful, 21 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 22 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 23 - * GNU General Public License for more details. 24 - * 25 - * You should have received a copy of the GNU General Public License 26 - * along with this program; if not, see the file COPYING, or write 27 - * to the Free Software Foundation, Inc., 28 - * 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA 29 - */ 30 - 31 - #include <linux/device.h> 32 - #include <linux/platform_device.h> 33 - #include <linux/irq.h> 34 - 35 - const char bfin_board_name[] = "UNKNOWN BOARD"; 36 - 37 - /* 38 - * Driver needs to know address, irq and flag pin. 
39 - */ 40 - #if defined(CONFIG_SMC91X) || defined(CONFIG_SMC91X_MODULE) 41 - static struct resource smc91x_resources[] = { 42 - { 43 - .start = 0x2C010300, 44 - .end = 0x2C010300 + 16, 45 - .flags = IORESOURCE_MEM, 46 - }, { 47 - .start = IRQ_PROG_INTB, 48 - .end = IRQ_PROG_INTB, 49 - .flags = IORESOURCE_IRQ | IORESOURCE_IRQ_HIGHLEVEL, 50 - }, { 51 - .start = IRQ_PF9, 52 - .end = IRQ_PF9, 53 - .flags = IORESOURCE_IRQ | IORESOURCE_IRQ_HIGHLEVEL, 54 - }, 55 - }; 56 - 57 - static struct platform_device smc91x_device = { 58 - .name = "smc91x", 59 - .id = 0, 60 - .num_resources = ARRAY_SIZE(smc91x_resources), 61 - .resource = smc91x_resources, 62 - }; 63 - #endif 64 - 65 - #if defined(CONFIG_BFIN_SIR) || defined(CONFIG_BFIN_SIR_MODULE) 66 - #ifdef CONFIG_BFIN_SIR0 67 - static struct resource bfin_sir0_resources[] = { 68 - { 69 - .start = 0xFFC00400, 70 - .end = 0xFFC004FF, 71 - .flags = IORESOURCE_MEM, 72 - }, 73 - { 74 - .start = IRQ_UART0_RX, 75 - .end = IRQ_UART0_RX+1, 76 - .flags = IORESOURCE_IRQ, 77 - }, 78 - { 79 - .start = CH_UART0_RX, 80 - .end = CH_UART0_RX+1, 81 - .flags = IORESOURCE_DMA, 82 - }, 83 - }; 84 - 85 - static struct platform_device bfin_sir0_device = { 86 - .name = "bfin_sir", 87 - .id = 0, 88 - .num_resources = ARRAY_SIZE(bfin_sir0_resources), 89 - .resource = bfin_sir0_resources, 90 - }; 91 - #endif 92 - #endif 93 - 94 - static struct platform_device *generic_board_devices[] __initdata = { 95 - #if defined(CONFIG_SMC91X) || defined(CONFIG_SMC91X_MODULE) 96 - &smc91x_device, 97 - #endif 98 - 99 - #if defined(CONFIG_BFIN_SIR) || defined(CONFIG_BFIN_SIR_MODULE) 100 - #ifdef CONFIG_BFIN_SIR0 101 - &bfin_sir0_device, 102 - #endif 103 - #endif 104 - }; 105 - 106 - static int __init generic_board_init(void) 107 - { 108 - printk(KERN_INFO "%s(): registering device resources\n", __func__); 109 - return platform_add_devices(generic_board_devices, 110 - ARRAY_SIZE(generic_board_devices)); 111 - } 112 - 113 - arch_initcall(generic_board_init);
+5 -2
arch/blackfin/mach-bf561/include/mach/anomaly.h
··· 2 2 * File: include/asm-blackfin/mach-bf561/anomaly.h 3 3 * Bugs: Enter bugs at http://blackfin.uclinux.org/ 4 4 * 5 - * Copyright (C) 2004-2008 Analog Devices Inc. 5 + * Copyright (C) 2004-2009 Analog Devices Inc. 6 6 * Licensed under the GPL-2 or later. 7 7 */ 8 8 ··· 224 224 #define ANOMALY_05000301 (1) 225 225 /* SSYNCs After Writes To DMA MMR Registers May Not Be Handled Correctly */ 226 226 #define ANOMALY_05000302 (1) 227 - /* New Feature: Additional Hysteresis on SPORT Input Pins (Not Available On Older Silicon) */ 227 + /* SPORT_HYS Bit in PLL_CTL Register Is Not Functional */ 228 228 #define ANOMALY_05000305 (__SILICON_REVISION__ < 5) 229 229 /* SCKELOW Bit Does Not Maintain State Through Hibernate */ 230 230 #define ANOMALY_05000307 (__SILICON_REVISION__ < 5) ··· 283 283 #define ANOMALY_05000273 (0) 284 284 #define ANOMALY_05000311 (0) 285 285 #define ANOMALY_05000353 (1) 286 + #define ANOMALY_05000380 (0) 286 287 #define ANOMALY_05000386 (1) 287 288 #define ANOMALY_05000432 (0) 288 289 #define ANOMALY_05000435 (0) 290 + #define ANOMALY_05000447 (0) 291 + #define ANOMALY_05000448 (0) 289 292 290 293 #endif
+1 -1
arch/blackfin/mach-bf561/include/mach/bfin_serial_5xx.h
··· 134 134 CH_UART_TX, 135 135 CH_UART_RX, 136 136 #endif 137 - #ifdef CONFIG_BFIN_UART0_CTSRTS 137 + #ifdef CONFIG_SERIAL_BFIN_CTSRTS 138 138 CONFIG_UART0_CTS_PIN, 139 139 CONFIG_UART0_RTS_PIN, 140 140 #endif
+9
arch/blackfin/mach-common/arch_checks.c
··· 62 62 #if (CONFIG_BOOT_LOAD & 0x3) 63 63 # error "The kernel load address must be 4 byte aligned" 64 64 #endif 65 + 66 + /* The entire kernel must be able to make a 24-bit pcrel call to the start of L1 */ 67 + #if ((0xffffffff - L1_CODE_START + 1) + CONFIG_BOOT_LOAD) > 0x1000000 68 + # error "The kernel load address is too high; keep it below 10meg for safety" 69 + #endif 70 + 71 + #if ANOMALY_05000448 72 + # error You are using a part with anomaly 05000448; this issue causes random memory read/write failures, which means random crashes. 73 + #endif
+22
arch/blackfin/mach-common/cache.S
··· 66 66 67 67 /* Invalidate all instruction cache lines associated with this memory area */ 68 68 ENTRY(_blackfin_icache_flush_range) 69 + /* 70 + * Workaround to avoid loading a wrong instruction after invalidating the icache 71 + * when the following sequence is met: 72 + * 73 + * 1) One instruction address is cached in the instruction cache. 74 + * 2) This instruction in SDRAM is changed. 75 + * 3) IFLUSH[P0] is executed only once in blackfin_icache_flush_range(). 76 + * 4) This instruction is executed again, but the old one is loaded. 77 + */ 78 + P0 = R0; 79 + IFLUSH[P0]; 69 80 do_flush IFLUSH, , nop 70 81 ENDPROC(_blackfin_icache_flush_range) 71 82 72 83 /* Flush all cache lines associated with this area of memory. */ 73 84 ENTRY(_blackfin_icache_dcache_flush_range) 85 + /* 86 + * Workaround to avoid loading a wrong instruction after invalidating the icache 87 + * when the following sequence is met: 88 + * 89 + * 1) One instruction address is cached in the instruction cache. 90 + * 2) This instruction in SDRAM is changed. 91 + * 3) IFLUSH[P0] is executed only once in blackfin_icache_dcache_flush_range(). 92 + * 4) This instruction is executed again, but the old one is loaded. 93 + */ 94 + P0 = R0; 95 + IFLUSH[P0]; 74 96 do_flush FLUSH, IFLUSH 75 97 ENDPROC(_blackfin_icache_dcache_flush_range) 76 98
+1 -1
arch/blackfin/mach-common/clocks-init.c
··· 17 17 #define SDGCTL_WIDTH (1 << 31) /* SDRAM external data path width */ 18 18 #define PLL_CTL_VAL \ 19 19 (((CONFIG_VCO_MULT & 63) << 9) | CLKIN_HALF | \ 20 - (PLL_BYPASS << 8) | (ANOMALY_05000265 ? 0x8000 : 0)) 20 + (PLL_BYPASS << 8) | (ANOMALY_05000305 ? 0 : 0x8000)) 21 21 22 22 __attribute__((l1_text)) 23 23 static void do_sync(void)
+24
arch/blackfin/mach-common/dpmc_modes.S
··· 376 376 #endif 377 377 378 378 #ifdef PINT0_ASSIGN 379 + PM_SYS_PUSH(PINT0_MASK_SET) 380 + PM_SYS_PUSH(PINT1_MASK_SET) 381 + PM_SYS_PUSH(PINT2_MASK_SET) 382 + PM_SYS_PUSH(PINT3_MASK_SET) 379 383 PM_SYS_PUSH(PINT0_ASSIGN) 380 384 PM_SYS_PUSH(PINT1_ASSIGN) 381 385 PM_SYS_PUSH(PINT2_ASSIGN) 382 386 PM_SYS_PUSH(PINT3_ASSIGN) 387 + PM_SYS_PUSH(PINT0_INVERT_SET) 388 + PM_SYS_PUSH(PINT1_INVERT_SET) 389 + PM_SYS_PUSH(PINT2_INVERT_SET) 390 + PM_SYS_PUSH(PINT3_INVERT_SET) 391 + PM_SYS_PUSH(PINT0_EDGE_SET) 392 + PM_SYS_PUSH(PINT1_EDGE_SET) 393 + PM_SYS_PUSH(PINT2_EDGE_SET) 394 + PM_SYS_PUSH(PINT3_EDGE_SET) 383 395 #endif 384 396 385 397 PM_SYS_PUSH(EBIU_AMBCTL0) ··· 726 714 PM_SYS_POP(EBIU_AMBCTL0) 727 715 728 716 #ifdef PINT0_ASSIGN 717 + PM_SYS_POP(PINT3_EDGE_SET) 718 + PM_SYS_POP(PINT2_EDGE_SET) 719 + PM_SYS_POP(PINT1_EDGE_SET) 720 + PM_SYS_POP(PINT0_EDGE_SET) 721 + PM_SYS_POP(PINT3_INVERT_SET) 722 + PM_SYS_POP(PINT2_INVERT_SET) 723 + PM_SYS_POP(PINT1_INVERT_SET) 724 + PM_SYS_POP(PINT0_INVERT_SET) 729 725 PM_SYS_POP(PINT3_ASSIGN) 730 726 PM_SYS_POP(PINT2_ASSIGN) 731 727 PM_SYS_POP(PINT1_ASSIGN) 732 728 PM_SYS_POP(PINT0_ASSIGN) 729 + PM_SYS_POP(PINT3_MASK_SET) 730 + PM_SYS_POP(PINT2_MASK_SET) 731 + PM_SYS_POP(PINT1_MASK_SET) 732 + PM_SYS_POP(PINT0_MASK_SET) 733 733 #endif 734 734 735 735 #ifdef SICA_IWR1
+60 -1
arch/blackfin/mach-common/entry.S
··· 600 600 p2 = [p2]; 601 601 602 602 [p2+(TASK_THREAD+THREAD_KSP)] = sp; 603 + #ifdef CONFIG_IPIPE 604 + r0 = sp; 605 + SP += -12; 606 + call ___ipipe_syscall_root; 607 + SP += 12; 608 + cc = r0 == 1; 609 + if cc jump .Lsyscall_really_exit; 610 + cc = r0 == -1; 611 + if cc jump .Lresume_userspace; 612 + r3 = [sp + PT_R3]; 613 + r4 = [sp + PT_R4]; 614 + p0 = [sp + PT_ORIG_P0]; 615 + #endif /* CONFIG_IPIPE */ 603 616 604 617 /* Check the System Call */ 605 618 r7 = __NR_syscall; ··· 667 654 r7 = r7 & r4; 668 655 669 656 .Lsyscall_resched: 657 + #ifdef CONFIG_IPIPE 658 + cc = BITTST(r7, TIF_IRQ_SYNC); 659 + if !cc jump .Lsyscall_no_irqsync; 660 + [--sp] = reti; 661 + r0 = [sp++]; 662 + SP += -12; 663 + call ___ipipe_sync_root; 664 + SP += 12; 665 + jump .Lresume_userspace_1; 666 + .Lsyscall_no_irqsync: 667 + #endif 670 668 cc = BITTST(r7, TIF_NEED_RESCHED); 671 669 if !cc jump .Lsyscall_sigpending; 672 670 ··· 709 685 .Lsyscall_really_exit: 710 686 r5 = [sp + PT_RESERVED]; 711 687 rets = r5; 688 + #ifdef CONFIG_IPIPE 689 + [--sp] = reti; 690 + r5 = [sp++]; 691 + #endif /* CONFIG_IPIPE */ 712 692 rts; 713 693 ENDPROC(_system_call) 714 694 ··· 799 771 ENDPROC(_resume) 800 772 801 773 ENTRY(_ret_from_exception) 774 + #ifdef CONFIG_IPIPE 775 + [--sp] = rets; 776 + SP += -12; 777 + call ___ipipe_check_root 778 + SP += 12 779 + rets = [sp++]; 780 + cc = r0 == 0; 781 + if cc jump 4f; /* not on behalf of Linux, get out */ 782 + #endif /* CONFIG_IPIPE */ 802 783 p2.l = lo(IPEND); 803 784 p2.h = hi(IPEND); 804 785 ··· 864 827 rts; 865 828 ENDPROC(_ret_from_exception) 866 829 830 + #ifdef CONFIG_IPIPE 831 + 832 + _sync_root_irqs: 833 + [--sp] = reti; /* Reenable interrupts */ 834 + r0 = [sp++]; 835 + jump.l ___ipipe_sync_root 836 + 837 + _resume_kernel_from_int: 838 + r0.l = _sync_root_irqs 839 + r0.h = _sync_root_irqs 840 + [--sp] = rets; 841 + [--sp] = ( r7:4, p5:3 ); 842 + SP += -12; 843 + call ___ipipe_call_irqtail 844 + SP += 12; 845 + ( r7:4, p5:3 ) = [sp++]; 846 + rets 
= [sp++]; 847 + rts 848 + #else 849 + #define _resume_kernel_from_int 2f 850 + #endif 851 + 867 852 ENTRY(_return_from_int) 868 853 /* If someone else already raised IRQ 15, do nothing. */ 869 854 csync; ··· 907 848 r1 = r0 - r1; 908 849 r2 = r0 & r1; 909 850 cc = r2 == 0; 910 - if !cc jump 2f; 851 + if !cc jump _resume_kernel_from_int; 911 852 912 853 /* Lower the interrupt level to 15. */ 913 854 p0.l = lo(EVT15);
+3 -9
arch/blackfin/mach-common/interrupt.S
··· 235 235 236 236 #ifdef CONFIG_IPIPE 237 237 ENTRY(___ipipe_call_irqtail) 238 + p0 = r0; 238 239 r0.l = 1f; 239 240 r0.h = 1f; 240 241 reti = r0; ··· 243 242 1: 244 243 [--sp] = rets; 245 244 [--sp] = ( r7:4, p5:3 ); 246 - p0.l = ___ipipe_irq_tail_hook; 247 - p0.h = ___ipipe_irq_tail_hook; 248 - p0 = [p0]; 249 245 sp += -12; 250 246 call (p0); 251 247 sp += 12; ··· 257 259 p0.h = hi(EVT14); 258 260 [p0] = r0; 259 261 csync; 260 - r0 = 0x401f; 262 + r0 = 0x401f (z); 261 263 sti r0; 262 264 raise 14; 263 265 [--sp] = reti; /* IRQs on. */ ··· 275 277 p0.h = _bfin_irq_flags; 276 278 r0 = [p0]; 277 279 sti r0; 278 - #if 0 /* FIXME: this actually raises scheduling latencies */ 279 - /* Reenable interrupts */ 280 - [--sp] = reti; 281 - r0 = [sp++]; 282 - #endif 283 280 rts; 284 281 ENDPROC(___ipipe_call_irqtail) 282 + 285 283 #endif /* CONFIG_IPIPE */
+50 -76
arch/blackfin/mach-common/ints-priority.c
··· 161 161 162 162 static void bfin_internal_mask_irq(unsigned int irq) 163 163 { 164 + unsigned long flags; 165 + 164 166 #ifdef CONFIG_BF53x 167 + local_irq_save_hw(flags); 165 168 bfin_write_SIC_IMASK(bfin_read_SIC_IMASK() & 166 169 ~(1 << SIC_SYSIRQ(irq))); 167 170 #else 168 171 unsigned mask_bank, mask_bit; 172 + local_irq_save_hw(flags); 169 173 mask_bank = SIC_SYSIRQ(irq) / 32; 170 174 mask_bit = SIC_SYSIRQ(irq) % 32; 171 175 bfin_write_SIC_IMASK(mask_bank, bfin_read_SIC_IMASK(mask_bank) & ··· 179 175 ~(1 << mask_bit)); 180 176 #endif 181 177 #endif 178 + local_irq_restore_hw(flags); 182 179 } 183 180 184 181 static void bfin_internal_unmask_irq(unsigned int irq) 185 182 { 183 + unsigned long flags; 184 + 186 185 #ifdef CONFIG_BF53x 186 + local_irq_save_hw(flags); 187 187 bfin_write_SIC_IMASK(bfin_read_SIC_IMASK() | 188 188 (1 << SIC_SYSIRQ(irq))); 189 189 #else 190 190 unsigned mask_bank, mask_bit; 191 + local_irq_save_hw(flags); 191 192 mask_bank = SIC_SYSIRQ(irq) / 32; 192 193 mask_bit = SIC_SYSIRQ(irq) % 32; 193 194 bfin_write_SIC_IMASK(mask_bank, bfin_read_SIC_IMASK(mask_bank) | ··· 202 193 (1 << mask_bit)); 203 194 #endif 204 195 #endif 196 + local_irq_restore_hw(flags); 205 197 } 206 198 207 199 #ifdef CONFIG_PM ··· 400 390 static inline void bfin_set_irq_handler(unsigned irq, irq_flow_handler_t handle) 401 391 { 402 392 #ifdef CONFIG_IPIPE 403 - _set_irq_handler(irq, handle_edge_irq); 393 + _set_irq_handler(irq, handle_level_irq); 404 394 #else 405 395 struct irq_desc *desc = irq_desc + irq; 406 396 /* May not call generic set_irq_handler() due to spinlock ··· 1065 1055 #endif 1066 1056 default: 1067 1057 #ifdef CONFIG_IPIPE 1068 - /* 1069 - * We want internal interrupt sources to be masked, because 1070 - * ISRs may trigger interrupts recursively (e.g. DMA), but 1071 - * interrupts are _not_ masked at CPU level. So let's handle 1072 - * them as level interrupts. 
1073 - */ 1074 - set_irq_handler(irq, handle_level_irq); 1058 + /* 1059 + * We want internal interrupt sources to be 1060 + * masked, because ISRs may trigger interrupts 1061 + * recursively (e.g. DMA), but interrupts are 1062 + * _not_ masked at CPU level. So let's handle 1063 + * most of them as level interrupts, except 1064 + * the timer interrupt which is special. 1065 + */ 1066 + if (irq == IRQ_SYSTMR || irq == IRQ_CORETMR) 1067 + set_irq_handler(irq, handle_simple_irq); 1068 + else 1069 + set_irq_handler(irq, handle_level_irq); 1075 1070 #else /* !CONFIG_IPIPE */ 1076 1071 set_irq_handler(irq, handle_simple_irq); 1077 1072 #endif /* !CONFIG_IPIPE */ ··· 1138 1123 1139 1124 #ifdef CONFIG_IPIPE 1140 1125 for (irq = 0; irq < NR_IRQS; irq++) { 1141 - struct irq_desc *desc = irq_desc + irq; 1126 + struct irq_desc *desc = irq_to_desc(irq); 1142 1127 desc->ic_prio = __ipipe_get_irq_priority(irq); 1143 - desc->thr_prio = __ipipe_get_irqthread_priority(irq); 1144 1128 } 1145 1129 #endif /* CONFIG_IPIPE */ 1146 1130 ··· 1222 1208 return IVG15; 1223 1209 } 1224 1210 1225 - int __ipipe_get_irqthread_priority(unsigned irq) 1226 - { 1227 - int ient, prio; 1228 - int demux_irq; 1229 - 1230 - /* The returned priority value is rescaled to [0..IVG13+1] 1231 - * with 0 being the lowest effective priority level. */ 1232 - 1233 - if (irq <= IRQ_CORETMR) 1234 - return IVG13 - irq + 1; 1235 - 1236 - /* GPIO IRQs are given the priority of the demux 1237 - * interrupt. */ 1238 - if (IS_GPIOIRQ(irq)) { 1239 - #if defined(CONFIG_BF54x) 1240 - u32 bank = PINT_2_BANK(irq2pint_lut[irq - SYS_IRQS]); 1241 - demux_irq = (bank == 0 ? IRQ_PINT0 : 1242 - bank == 1 ? IRQ_PINT1 : 1243 - bank == 2 ? IRQ_PINT2 : 1244 - IRQ_PINT3); 1245 - #elif defined(CONFIG_BF561) 1246 - demux_irq = (irq >= IRQ_PF32 ? IRQ_PROG2_INTA : 1247 - irq >= IRQ_PF16 ? IRQ_PROG1_INTA : 1248 - IRQ_PROG0_INTA); 1249 - #elif defined(CONFIG_BF52x) 1250 - demux_irq = (irq >= IRQ_PH0 ? IRQ_PORTH_INTA : 1251 - irq >= IRQ_PG0 ? 
IRQ_PORTG_INTA : 1252 - IRQ_PORTF_INTA); 1253 - #else 1254 - demux_irq = irq; 1255 - #endif 1256 - return IVG13 - PRIO_GPIODEMUX(demux_irq) + 1; 1257 - } 1258 - 1259 - /* The GPIO demux interrupt is given a lower priority 1260 - * than the GPIO IRQs, so that its threaded handler 1261 - * unmasks the interrupt line after the decoded IRQs 1262 - * have been processed. */ 1263 - prio = PRIO_GPIODEMUX(irq); 1264 - /* demux irq? */ 1265 - if (prio != -1) 1266 - return IVG13 - prio; 1267 - 1268 - for (ient = 0; ient < NR_PERI_INTS; ient++) { 1269 - struct ivgx *ivg = ivg_table + ient; 1270 - if (ivg->irqno == irq) { 1271 - for (prio = 0; prio <= IVG13-IVG7; prio++) { 1272 - if (ivg7_13[prio].ifirst <= ivg && 1273 - ivg7_13[prio].istop > ivg) 1274 - return IVG7 - prio; 1275 - } 1276 - } 1277 - } 1278 - 1279 - return 0; 1280 - } 1281 - 1282 1211 /* Hw interrupts are disabled on entry (check SAVE_CONTEXT). */ 1283 1212 #ifdef CONFIG_DO_IRQ_L1 1284 1213 __attribute__((l1_text)) 1285 1214 #endif 1286 1215 asmlinkage int __ipipe_grab_irq(int vec, struct pt_regs *regs) 1287 1216 { 1217 + struct ipipe_percpu_domain_data *p = ipipe_root_cpudom_ptr(); 1218 + struct ipipe_domain *this_domain = ipipe_current_domain; 1288 1219 struct ivgx *ivg_stop = ivg7_13[vec-IVG7].istop; 1289 1220 struct ivgx *ivg = ivg7_13[vec-IVG7].ifirst; 1290 - int irq; 1221 + int irq, s; 1291 1222 1292 1223 if (likely(vec == EVT_IVTMR_P)) { 1293 1224 irq = IRQ_CORETMR; 1294 - goto handle_irq; 1225 + goto core_tick; 1295 1226 } 1296 1227 1297 1228 SSYNC(); ··· 1278 1319 irq = ivg->irqno; 1279 1320 1280 1321 if (irq == IRQ_SYSTMR) { 1322 + #ifdef CONFIG_GENERIC_CLOCKEVENTS 1323 + core_tick: 1324 + #else 1281 1325 bfin_write_TIMER_STATUS(1); /* Latch TIMIL0 */ 1326 + #endif 1282 1327 /* This is basically what we need from the register frame. 
*/ 1283 1328 __raw_get_cpu_var(__ipipe_tick_regs).ipend = regs->ipend; 1284 1329 __raw_get_cpu_var(__ipipe_tick_regs).pc = regs->pc; 1285 - if (!ipipe_root_domain_p) 1286 - __raw_get_cpu_var(__ipipe_tick_regs).ipend |= 0x10; 1287 - else 1330 + if (this_domain != ipipe_root_domain) 1288 1331 __raw_get_cpu_var(__ipipe_tick_regs).ipend &= ~0x10; 1332 + else 1333 + __raw_get_cpu_var(__ipipe_tick_regs).ipend |= 0x10; 1289 1334 } 1290 1335 1291 - handle_irq: 1336 + #ifndef CONFIG_GENERIC_CLOCKEVENTS 1337 + core_tick: 1338 + #endif 1339 + if (this_domain == ipipe_root_domain) { 1340 + s = __test_and_set_bit(IPIPE_SYNCDEFER_FLAG, &p->status); 1341 + barrier(); 1342 + } 1292 1343 1293 1344 ipipe_trace_irq_entry(irq); 1294 1345 __ipipe_handle_irq(irq, regs); 1295 - ipipe_trace_irq_exit(irq); 1346 + ipipe_trace_irq_exit(irq); 1296 1347 1297 - if (ipipe_root_domain_p) 1298 - return !test_bit(IPIPE_STALL_FLAG, &ipipe_root_cpudom_var(status)); 1348 + if (this_domain == ipipe_root_domain) { 1349 + set_thread_flag(TIF_IRQ_SYNC); 1350 + if (!s) { 1351 + __clear_bit(IPIPE_SYNCDEFER_FLAG, &p->status); 1352 + return !test_bit(IPIPE_STALL_FLAG, &p->status); 1353 + } 1354 + } 1299 1355 1300 1356 return 0; 1301 1357 }
+5 -1
arch/blackfin/mach-common/smp.c
··· 158 158 kfree(msg); 159 159 break; 160 160 case BFIN_IPI_CALL_FUNC: 161 + spin_unlock(&msg_queue->lock); 161 162 ipi_call_function(cpu, msg); 163 + spin_lock(&msg_queue->lock); 162 164 break; 163 165 case BFIN_IPI_CPU_STOP: 166 + spin_unlock(&msg_queue->lock); 164 167 ipi_cpu_stop(cpu); 168 + spin_lock(&msg_queue->lock); 165 169 kfree(msg); 166 170 break; 167 171 default: ··· 461 457 smp_flush_data.start = start; 462 458 smp_flush_data.end = end; 463 459 464 - if (smp_call_function(&ipi_flush_icache, &smp_flush_data, 1)) 460 + if (smp_call_function(&ipi_flush_icache, &smp_flush_data, 0)) 465 461 printk(KERN_WARNING "SMP: failed to run I-cache flush request on other CPUs\n"); 466 462 } 467 463 EXPORT_SYMBOL_GPL(smp_icache_flush_range_others);
+1 -1
arch/blackfin/mm/init.c
··· 104 104 } 105 105 } 106 106 107 - asmlinkage void init_pda(void) 107 + asmlinkage void __init init_pda(void) 108 108 { 109 109 unsigned int cpu = raw_smp_processor_id(); 110 110
+3 -4
arch/ia64/sn/pci/pcibr/pcibr_dma.c
··· 135 135 if (SN_DMA_ADDRTYPE(dma_flags) == SN_DMA_ADDR_PHYS) 136 136 pci_addr = IS_PIC_SOFT(pcibus_info) ? 137 137 PHYS_TO_DMA(paddr) : 138 - PHYS_TO_TIODMA(paddr) | dma_attributes; 138 + PHYS_TO_TIODMA(paddr); 139 139 else 140 - pci_addr = IS_PIC_SOFT(pcibus_info) ? 141 - paddr : 142 - paddr | dma_attributes; 140 + pci_addr = paddr; 141 + pci_addr |= dma_attributes; 143 142 144 143 /* Handle Bus mode */ 145 144 if (IS_PCIX(pcibus_info))
+1
arch/m68knommu/platform/5206e/config.c
··· 17 17 #include <asm/coldfire.h> 18 18 #include <asm/mcfsim.h> 19 19 #include <asm/mcfdma.h> 20 + #include <asm/mcfuart.h> 20 21 21 22 /***************************************************************************/ 22 23
-228
arch/m68knommu/platform/528x/config.c
··· 24 24 #include <asm/coldfire.h> 25 25 #include <asm/mcfsim.h> 26 26 #include <asm/mcfuart.h> 27 - #include <asm/mcfqspi.h> 28 27 29 28 #ifdef CONFIG_MTD_PARTITIONS 30 29 #include <linux/mtd/partitions.h> ··· 32 33 /***************************************************************************/ 33 34 34 35 void coldfire_reset(void); 35 - static void coldfire_qspi_cs_control(u8 cs, u8 command); 36 - 37 - /***************************************************************************/ 38 - 39 - #if defined(CONFIG_SPI) 40 - 41 - #if defined(CONFIG_WILDFIRE) 42 - #define SPI_NUM_CHIPSELECTS 0x02 43 - #define SPI_PAR_VAL 0x07 /* Enable DIN, DOUT, CLK */ 44 - #define SPI_CS_MASK 0x18 45 - 46 - #define FLASH_BLOCKSIZE (1024*64) 47 - #define FLASH_NUMBLOCKS 16 48 - #define FLASH_TYPE "m25p80" 49 - 50 - #define M25P80_CS 0 51 - #define MMC_CS 1 52 - 53 - #ifdef CONFIG_MTD_PARTITIONS 54 - static struct mtd_partition stm25p_partitions[] = { 55 - /* sflash */ 56 - [0] = { 57 - .name = "stm25p80", 58 - .offset = 0x00000000, 59 - .size = FLASH_BLOCKSIZE * FLASH_NUMBLOCKS, 60 - .mask_flags = 0 61 - } 62 - }; 63 - 64 - #endif 65 - 66 - #elif defined(CONFIG_WILDFIREMOD) 67 - 68 - #define SPI_NUM_CHIPSELECTS 0x08 69 - #define SPI_PAR_VAL 0x07 /* Enable DIN, DOUT, CLK */ 70 - #define SPI_CS_MASK 0x78 71 - 72 - #define FLASH_BLOCKSIZE (1024*64) 73 - #define FLASH_NUMBLOCKS 64 74 - #define FLASH_TYPE "m25p32" 75 - /* Reserve 1M for the kernel parition */ 76 - #define FLASH_KERNEL_SIZE (1024 * 1024) 77 - 78 - #define M25P80_CS 5 79 - #define MMC_CS 6 80 - 81 - #ifdef CONFIG_MTD_PARTITIONS 82 - static struct mtd_partition stm25p_partitions[] = { 83 - /* sflash */ 84 - [0] = { 85 - .name = "kernel", 86 - .offset = FLASH_BLOCKSIZE * FLASH_NUMBLOCKS - FLASH_KERNEL_SIZE, 87 - .size = FLASH_KERNEL_SIZE, 88 - .mask_flags = 0 89 - }, 90 - [1] = { 91 - .name = "image", 92 - .offset = 0x00000000, 93 - .size = FLASH_BLOCKSIZE * FLASH_NUMBLOCKS - FLASH_KERNEL_SIZE, 94 - .mask_flags = 0 95 - }, 96 - [2] = { 97 - .name = "all", 98 - .offset = 0x00000000, 99 - .size = FLASH_BLOCKSIZE * FLASH_NUMBLOCKS, 100 - .mask_flags = 0 101 - } 102 - }; 103 - #endif 104 - 105 - #else 106 - #define SPI_NUM_CHIPSELECTS 0x04 107 - #define SPI_PAR_VAL 0x7F /* Enable DIN, DOUT, CLK, CS0 - CS4 */ 108 - #endif 109 - 110 - #ifdef MMC_CS 111 - static struct coldfire_spi_chip flash_chip_info = { 112 - .mode = SPI_MODE_0, 113 - .bits_per_word = 16, 114 - .del_cs_to_clk = 17, 115 - .del_after_trans = 1, 116 - .void_write_data = 0 117 - }; 118 - 119 - static struct coldfire_spi_chip mmc_chip_info = { 120 - .mode = SPI_MODE_0, 121 - .bits_per_word = 16, 122 - .del_cs_to_clk = 17, 123 - .del_after_trans = 1, 124 - .void_write_data = 0xFFFF 125 - }; 126 - #endif 127 - 128 - #ifdef M25P80_CS 129 - static struct flash_platform_data stm25p80_platform_data = { 130 - .name = "ST M25P80 SPI Flash chip", 131 - #ifdef CONFIG_MTD_PARTITIONS 132 - .parts = stm25p_partitions, 133 - .nr_parts = sizeof(stm25p_partitions) / sizeof(*stm25p_partitions), 134 - #endif 135 - .type = FLASH_TYPE 136 - }; 137 - #endif 138 - 139 - static struct spi_board_info spi_board_info[] __initdata = { 140 - #ifdef M25P80_CS 141 - { 142 - .modalias = "m25p80", 143 - .max_speed_hz = 16000000, 144 - .bus_num = 1, 145 - .chip_select = M25P80_CS, 146 - .platform_data = &stm25p80_platform_data, 147 - .controller_data = &flash_chip_info 148 - }, 149 - #endif 150 - #ifdef MMC_CS 151 - { 152 - .modalias = "mmc_spi", 153 - .max_speed_hz = 16000000, 154 - .bus_num = 1, 155 - .chip_select = MMC_CS, 156 - .controller_data = &mmc_chip_info 157 - } 158 - #endif 159 - }; 160 - 161 - static struct coldfire_spi_master coldfire_master_info = { 162 - .bus_num = 1, 163 - .num_chipselect = SPI_NUM_CHIPSELECTS, 164 - .irq_source = MCF5282_QSPI_IRQ_SOURCE, 165 - .irq_vector = MCF5282_QSPI_IRQ_VECTOR, 166 - .irq_mask = ((0x01 << MCF5282_QSPI_IRQ_SOURCE) | 0x01), 167 - .irq_lp = 0x2B, /* Level 5 and Priority 3 */ 168 - .par_val = SPI_PAR_VAL, 169 - .cs_control = coldfire_qspi_cs_control, 170 - }; 171 - 172 - static struct resource coldfire_spi_resources[] = { 173 - [0] = { 174 - .name = "qspi-par", 175 - .start = MCF5282_QSPI_PAR, 176 - .end = MCF5282_QSPI_PAR, 177 - .flags = IORESOURCE_MEM 178 - }, 179 - 180 - [1] = { 181 - .name = "qspi-module", 182 - .start = MCF5282_QSPI_QMR, 183 - .end = MCF5282_QSPI_QMR + 0x18, 184 - .flags = IORESOURCE_MEM 185 - }, 186 - 187 - [2] = { 188 - .name = "qspi-int-level", 189 - .start = MCF5282_INTC0 + MCFINTC_ICR0 + MCF5282_QSPI_IRQ_SOURCE, 190 - .end = MCF5282_INTC0 + MCFINTC_ICR0 + MCF5282_QSPI_IRQ_SOURCE, 191 - .flags = IORESOURCE_MEM 192 - }, 193 - 194 - [3] = { 195 - .name = "qspi-int-mask", 196 - .start = MCF5282_INTC0 + MCFINTC_IMRL, 197 - .end = MCF5282_INTC0 + MCFINTC_IMRL, 198 - .flags = IORESOURCE_MEM 199 - } 200 - }; 201 - 202 - static struct platform_device coldfire_spi = { 203 - .name = "spi_coldfire", 204 - .id = -1, 205 - .resource = coldfire_spi_resources, 206 - .num_resources = ARRAY_SIZE(coldfire_spi_resources), 207 - .dev = { 208 - .platform_data = &coldfire_master_info, 209 - } 210 - }; 211 - 212 - static void coldfire_qspi_cs_control(u8 cs, u8 command) 213 - { 214 - u8 cs_bit = ((0x01 << cs) << 3) & SPI_CS_MASK; 215 - 216 - #if defined(CONFIG_WILDFIRE) 217 - u8 cs_mask = ~(((0x01 << cs) << 3) & SPI_CS_MASK); 218 - #endif 219 - #if defined(CONFIG_WILDFIREMOD) 220 - u8 cs_mask = (cs << 3) & SPI_CS_MASK; 221 - #endif 222 - 223 - /* 224 - * Don't do anything if the chip select is not 225 - * one of the port qs pins. 226 - */ 227 - if (command & QSPI_CS_INIT) { 228 - #if defined(CONFIG_WILDFIRE) 229 - MCF5282_GPIO_DDRQS |= cs_bit; 230 - MCF5282_GPIO_PQSPAR &= ~cs_bit; 231 - #endif 232 - 233 - #if defined(CONFIG_WILDFIREMOD) 234 - MCF5282_GPIO_DDRQS |= SPI_CS_MASK; 235 - MCF5282_GPIO_PQSPAR &= ~SPI_CS_MASK; 236 - #endif 237 - } 238 - 239 - if (command & QSPI_CS_ASSERT) { 240 - MCF5282_GPIO_PORTQS &= ~SPI_CS_MASK; 241 - MCF5282_GPIO_PORTQS |= cs_mask; 242 - } else if (command & QSPI_CS_DROP) { 243 - MCF5282_GPIO_PORTQS |= SPI_CS_MASK; 244 - } 245 - } 246 - 247 - static int __init spi_dev_init(void) 248 - { 249 - int retval; 250 - 251 - retval = platform_device_register(&coldfire_spi); 252 - if (retval < 0) 253 - return retval; 254 - 255 - if (ARRAY_SIZE(spi_board_info)) 256 - retval = spi_register_board_info(spi_board_info, ARRAY_SIZE(spi_board_info)); 257 - 258 - return retval; 259 - } 260 - 261 - #endif /* CONFIG_SPI */ 262 36 263 37 /***************************************************************************/ 264 38
+7
arch/mips/include/asm/compat.h
··· 3 3 /* 4 4 * Architecture specific compatibility types 5 5 */ 6 + #include <linux/seccomp.h> 7 + #include <linux/thread_info.h> 6 8 #include <linux/types.h> 7 9 #include <asm/page.h> 8 10 #include <asm/ptrace.h> ··· 219 217 compat_ulong_t __unused1; 220 218 compat_ulong_t __unused2; 221 219 }; 220 + 221 + static inline int is_compat_task(void) 222 + { 223 + return test_thread_flag(TIF_32BIT); 224 + } 222 225 223 226 #endif /* _ASM_COMPAT_H */
+4
arch/powerpc/platforms/86xx/gef_sbc610.c
··· 142 142 { 143 143 unsigned int val; 144 144 145 + /* Do not do the fixup on other platforms! */ 146 + if (!machine_is(gef_sbc610)) 147 + return; 148 + 145 149 printk(KERN_INFO "Running NEC uPD720101 Fixup\n"); 146 150 147 151 /* Ensure ports 1, 2, 3, 4 & 5 are enabled */
+1 -1
arch/s390/crypto/aes_s390.c
··· 556 556 module_init(aes_s390_init); 557 557 module_exit(aes_s390_fini); 558 558 559 - MODULE_ALIAS("aes"); 559 + MODULE_ALIAS("aes-all"); 560 560 561 561 MODULE_DESCRIPTION("Rijndael (AES) Cipher Algorithm"); 562 562 MODULE_LICENSE("GPL");
+1
arch/sh/boards/board-ap325rxa.c
··· 22 22 #include <linux/gpio.h> 23 23 #include <linux/spi/spi.h> 24 24 #include <linux/spi/spi_gpio.h> 25 + #include <media/soc_camera.h> 25 26 #include <media/soc_camera_platform.h> 26 27 #include <media/sh_mobile_ceu.h> 27 28 #include <video/sh_mobile_lcdc.h>
+10 -2
arch/x86/Kconfig
··· 138 138 config HAVE_SETUP_PER_CPU_AREA 139 139 def_bool y 140 140 141 + config HAVE_DYNAMIC_PER_CPU_AREA 142 + def_bool y 143 + 141 144 config HAVE_CPUMASK_OF_CPU_MAP 142 145 def_bool X86_64_SMP 143 146 ··· 783 780 Additional support for AMD specific MCE features such as 784 781 the DRAM Error Threshold. 785 782 783 + config X86_MCE_THRESHOLD 784 + depends on X86_MCE_AMD || X86_MCE_INTEL 785 + bool 786 + default y 787 + 786 788 config X86_MCE_NONFATAL 787 789 tristate "Check for non-fatal errors on AMD Athlon/Duron / Intel Pentium 4" 788 790 depends on X86_32 && X86_MCE ··· 1133 1125 Specify the maximum number of NUMA Nodes available on the target 1134 1126 system. Increases memory reserved to accommodate various tables. 1135 1127 1136 - config HAVE_ARCH_BOOTMEM_NODE 1128 + config HAVE_ARCH_BOOTMEM 1137 1129 def_bool y 1138 1130 depends on X86_32 && NUMA 1139 1131 ··· 1431 1423 config KEXEC_JUMP 1432 1424 bool "kexec jump (EXPERIMENTAL)" 1433 1425 depends on EXPERIMENTAL 1434 1426 depends on KEXEC && HIBERNATION 1426 hmm
+2
arch/x86/include/asm/apic.h
··· 379 379 380 380 static inline void ack_APIC_irq(void) 381 381 { 382 + #ifdef CONFIG_X86_LOCAL_APIC 382 383 /* 383 384 * ack_APIC_irq() actually gets compiled as a single instruction 384 385 * ... yummie. ··· 387 386 388 387 /* Docs say use 0 for future compatibility */ 389 388 apic_write(APIC_EOI, 0); 389 + #endif 390 390 } 391 391 392 392 static inline unsigned default_get_apic_id(unsigned long x)
+1
arch/x86/include/asm/apicdef.h
··· 53 53 #define APIC_ESR_SENDILL 0x00020 54 54 #define APIC_ESR_RECVILL 0x00040 55 55 #define APIC_ESR_ILLREGA 0x00080 56 + #define APIC_LVTCMCI 0x2f0 56 57 #define APIC_ICR 0x300 57 58 #define APIC_DEST_SELF 0x40000 58 59 #define APIC_DEST_ALLINC 0x80000
+36 -17
arch/x86/include/asm/cacheflush.h
··· 5 5 #include <linux/mm.h> 6 6 7 7 /* Caches aren't brain-dead on the intel. */ 8 - #define flush_cache_all() do { } while (0) 9 - #define flush_cache_mm(mm) do { } while (0) 10 - #define flush_cache_dup_mm(mm) do { } while (0) 11 - #define flush_cache_range(vma, start, end) do { } while (0) 12 - #define flush_cache_page(vma, vmaddr, pfn) do { } while (0) 13 - #define flush_dcache_page(page) do { } while (0) 14 - #define flush_dcache_mmap_lock(mapping) do { } while (0) 15 - #define flush_dcache_mmap_unlock(mapping) do { } while (0) 16 - #define flush_icache_range(start, end) do { } while (0) 17 - #define flush_icache_page(vma, pg) do { } while (0) 18 - #define flush_icache_user_range(vma, pg, adr, len) do { } while (0) 19 - #define flush_cache_vmap(start, end) do { } while (0) 20 - #define flush_cache_vunmap(start, end) do { } while (0) 8 + static inline void flush_cache_all(void) { } 9 + static inline void flush_cache_mm(struct mm_struct *mm) { } 10 + static inline void flush_cache_dup_mm(struct mm_struct *mm) { } 11 + static inline void flush_cache_range(struct vm_area_struct *vma, 12 + unsigned long start, unsigned long end) { } 13 + static inline void flush_cache_page(struct vm_area_struct *vma, 14 + unsigned long vmaddr, unsigned long pfn) { } 15 + static inline void flush_dcache_page(struct page *page) { } 16 + static inline void flush_dcache_mmap_lock(struct address_space *mapping) { } 17 + static inline void flush_dcache_mmap_unlock(struct address_space *mapping) { } 18 + static inline void flush_icache_range(unsigned long start, 19 + unsigned long end) { } 20 + static inline void flush_icache_page(struct vm_area_struct *vma, 21 + struct page *page) { } 22 + static inline void flush_icache_user_range(struct vm_area_struct *vma, 23 + struct page *page, 24 + unsigned long addr, 25 + unsigned long len) { } 26 + static inline void flush_cache_vmap(unsigned long start, unsigned long end) { } 27 + static inline void flush_cache_vunmap(unsigned long start, 28 + unsigned long end) { } 21 29 22 - #define copy_to_user_page(vma, page, vaddr, dst, src, len) \ 23 - memcpy((dst), (src), (len)) 24 - #define copy_from_user_page(vma, page, vaddr, dst, src, len) \ 25 - memcpy((dst), (src), (len)) 30 + static inline void copy_to_user_page(struct vm_area_struct *vma, 31 + struct page *page, unsigned long vaddr, 32 + void *dst, const void *src, 33 + unsigned long len) 34 + { 35 + memcpy(dst, src, len); 36 + } 37 + 38 + static inline void copy_from_user_page(struct vm_area_struct *vma, 39 + struct page *page, unsigned long vaddr, 40 + void *dst, const void *src, 41 + unsigned long len) 42 + { 43 + memcpy(dst, src, len); 44 + } 26 45 27 46 #define PG_non_WB PG_arch_1 28 47 PAGEFLAG(NonWB, non_WB)
-2
arch/x86/include/asm/efi.h
··· 37 37 38 38 #else /* !CONFIG_X86_32 */ 39 39 40 - #define MAX_EFI_IO_PAGES 100 41 - 42 40 extern u64 efi_call0(void *fp); 43 41 extern u64 efi_call1(void *fp, u64 arg1); 44 42 extern u64 efi_call2(void *fp, u64 arg1, u64 arg2);
+2
arch/x86/include/asm/entry_arch.h
··· 33 33 smp_invalidate_interrupt) 34 34 #endif 35 35 36 + BUILD_INTERRUPT(generic_interrupt, GENERIC_INTERRUPT_VECTOR) 37 + 36 38 /* 37 39 * every pentium local APIC has two 'local interrupts', with a 38 40 * soft-definable vector attached to both interrupts, one of
-10
arch/x86/include/asm/fixmap.h
··· 24 24 #include <asm/kmap_types.h> 25 25 #else 26 26 #include <asm/vsyscall.h> 27 - #ifdef CONFIG_EFI 28 - #include <asm/efi.h> 29 - #endif 30 27 #endif 31 28 32 29 /* ··· 88 91 #ifdef CONFIG_X86_IO_APIC 89 92 FIX_IO_APIC_BASE_0, 90 93 FIX_IO_APIC_BASE_END = FIX_IO_APIC_BASE_0 + MAX_IO_APICS - 1, 91 - #endif 92 - #ifdef CONFIG_X86_64 93 - #ifdef CONFIG_EFI 94 - FIX_EFI_IO_MAP_LAST_PAGE, 95 - FIX_EFI_IO_MAP_FIRST_PAGE = FIX_EFI_IO_MAP_LAST_PAGE 96 - + MAX_EFI_IO_PAGES - 1, 97 - #endif 98 94 #endif 99 95 #ifdef CONFIG_X86_VISWS_APIC 100 96 FIX_CO_CPU, /* Cobalt timer */
+1
arch/x86/include/asm/hardirq.h
··· 12 12 unsigned int apic_timer_irqs; /* arch dependent */ 13 13 unsigned int irq_spurious_count; 14 14 #endif 15 + unsigned int generic_irqs; /* arch dependent */ 15 16 #ifdef CONFIG_SMP 16 17 unsigned int irq_resched_count; 17 18 unsigned int irq_call_count;
+1
arch/x86/include/asm/hw_irq.h
··· 27 27 28 28 /* Interrupt handlers registered during init_IRQ */ 29 29 extern void apic_timer_interrupt(void); 30 + extern void generic_interrupt(void); 30 31 extern void error_interrupt(void); 31 32 extern void spurious_interrupt(void); 32 33 extern void thermal_interrupt(void);
+7 -1
arch/x86/include/asm/i387.h
··· 172 172 173 173 #else /* CONFIG_X86_32 */ 174 174 175 - extern void finit(void); 175 + #ifdef CONFIG_MATH_EMULATION 176 + extern void finit_task(struct task_struct *tsk); 177 + #else 178 + static inline void finit_task(struct task_struct *tsk) 179 + { 180 + } 181 + #endif 176 182 177 183 static inline void tolerant_fwait(void) 178 184 {
+18
arch/x86/include/asm/init.h
··· 1 + #ifndef _ASM_X86_INIT_32_H 2 + #define _ASM_X86_INIT_32_H 3 + 4 + #ifdef CONFIG_X86_32 5 + extern void __init early_ioremap_page_table_range_init(void); 6 + #endif 7 + 8 + extern unsigned long __init 9 + kernel_physical_mapping_init(unsigned long start, 10 + unsigned long end, 11 + unsigned long page_size_mask); 12 + 13 + 14 + extern unsigned long __initdata e820_table_start; 15 + extern unsigned long __meminitdata e820_table_end; 16 + extern unsigned long __meminitdata e820_table_top; 17 + 18 + #endif /* _ASM_X86_INIT_32_H */
-3
arch/x86/include/asm/io.h
··· 172 172 173 173 extern void iounmap(volatile void __iomem *addr); 174 174 175 - extern void __iomem *fix_ioremap(unsigned idx, unsigned long phys); 176 - 177 175 178 176 #ifdef CONFIG_X86_32 179 177 # include "io_32.h" ··· 196 198 extern void __iomem *early_ioremap(unsigned long offset, unsigned long size); 197 199 extern void __iomem *early_memremap(unsigned long offset, unsigned long size); 198 200 extern void early_iounmap(void __iomem *addr, unsigned long size); 199 - extern void __iomem *fix_ioremap(unsigned idx, unsigned long phys); 200 201 201 202 #define IO_SPACE_LIMIT 0xffff 202 203
+1
arch/x86/include/asm/irq.h
··· 36 36 extern void fixup_irqs(void); 37 37 #endif 38 38 39 + extern void (*generic_interrupt_extension)(void); 39 40 extern void init_IRQ(void); 40 41 extern void native_init_IRQ(void); 41 42 extern bool handle_irq(unsigned irq, struct pt_regs *regs);
+5
arch/x86/include/asm/irq_vectors.h
··· 112 112 #define LOCAL_PERF_VECTOR 0xee 113 113 114 114 /* 115 + * Generic system vector for platform specific use 116 + */ 117 + #define GENERIC_INTERRUPT_VECTOR 0xed 118 + 119 + /* 115 120 * First APIC vector available to drivers: (vectors 0x30-0xee) we 116 121 * start at 0x31(0x41) to spread out vectors evenly between priority 117 122 * levels. (0x80 is the syscall vector)
+7 -6
arch/x86/include/asm/kexec.h
··· 9 9 # define PAGES_NR 4 10 10 #else 11 11 # define PA_CONTROL_PAGE 0 12 - # define PA_TABLE_PAGE 1 13 - # define PAGES_NR 2 12 + # define VA_CONTROL_PAGE 1 13 + # define PA_TABLE_PAGE 2 14 + # define PA_SWAP_PAGE 3 15 + # define PAGES_NR 4 14 16 #endif 15 17 16 - #ifdef CONFIG_X86_32 17 18 # define KEXEC_CONTROL_CODE_MAX_SIZE 2048 18 - #endif 19 19 20 20 #ifndef __ASSEMBLY__ 21 21 ··· 136 136 unsigned int has_pae, 137 137 unsigned int preserve_context); 138 138 #else 139 - NORET_TYPE void 139 + unsigned long 140 140 relocate_kernel(unsigned long indirection_page, 141 141 unsigned long page_list, 142 - unsigned long start_address) ATTRIB_NORET; 142 + unsigned long start_address, 143 + unsigned int preserve_context); 143 144 #endif 144 145 145 146 #define ARCH_HAS_KIMAGE_ARCH
+10 -6
arch/x86/include/asm/linkage.h
··· 4 4 #undef notrace 5 5 #define notrace __attribute__((no_instrument_function)) 6 6 7 - #ifdef CONFIG_X86_64 8 - #define __ALIGN .p2align 4,,15 9 - #define __ALIGN_STR ".p2align 4,,15" 10 - #endif 11 - 12 7 #ifdef CONFIG_X86_32 13 8 #define asmlinkage CPP_ASMLINKAGE __attribute__((regparm(0))) 14 9 /* ··· 45 50 __asmlinkage_protect_n(ret, "g" (arg1), "g" (arg2), "g" (arg3), \ 46 51 "g" (arg4), "g" (arg5), "g" (arg6)) 47 52 48 - #endif 53 + #endif /* CONFIG_X86_32 */ 54 + 55 + #ifdef __ASSEMBLY__ 49 56 50 57 #define GLOBAL(name) \ 51 58 .globl name; \ 52 59 name: 53 60 61 + #ifdef CONFIG_X86_64 62 + #define __ALIGN .p2align 4,,15 63 + #define __ALIGN_STR ".p2align 4,,15" 64 + #endif 65 + 54 66 #ifdef CONFIG_X86_ALIGNMENT_16 55 67 #define __ALIGN .align 16,0x90 56 68 #define __ALIGN_STR ".align 16,0x90" 57 69 #endif 70 + 71 + #endif /* __ASSEMBLY__ */ 58 72 59 73 #endif /* _ASM_X86_LINKAGE_H */ 60 74
+32 -3
arch/x86/include/asm/mce.h
··· 11 11 */ 12 12 13 13 #define MCG_CTL_P (1UL<<8) /* MCG_CAP register available */ 14 + #define MCG_EXT_P (1ULL<<9) /* Extended registers available */ 15 + #define MCG_CMCI_P (1ULL<<10) /* CMCI supported */ 14 16 15 17 #define MCG_STATUS_RIPV (1UL<<0) /* restart ip valid */ 16 18 #define MCG_STATUS_EIPV (1UL<<1) /* ip points to correct instruction */ ··· 92 90 93 91 #include <asm/atomic.h> 94 92 93 + void mce_setup(struct mce *m); 95 94 void mce_log(struct mce *m); 96 95 DECLARE_PER_CPU(struct sys_device, device_mce); 97 96 extern void (*threshold_cpu_callback)(unsigned long action, unsigned int cpu); 98 97 98 + /* 99 + * To support more than 128 would need to escape the predefined 100 + * Linux defined extended banks first. 101 + */ 102 + #define MAX_NR_BANKS (MCE_EXTENDED_BANK - 1) 103 + 99 104 #ifdef CONFIG_X86_MCE_INTEL 100 105 void mce_intel_feature_init(struct cpuinfo_x86 *c); 106 + void cmci_clear(void); 107 + void cmci_reenable(void); 108 + void cmci_rediscover(int dying); 109 + void cmci_recheck(void); 101 110 #else 102 111 static inline void mce_intel_feature_init(struct cpuinfo_x86 *c) { } 112 + static inline void cmci_clear(void) {} 113 + static inline void cmci_reenable(void) {} 114 + static inline void cmci_rediscover(int dying) {} 115 + static inline void cmci_recheck(void) {} 103 116 #endif 104 117 105 118 #ifdef CONFIG_X86_MCE_AMD ··· 123 106 static inline void mce_amd_feature_init(struct cpuinfo_x86 *c) { } 124 107 #endif 125 108 126 - void mce_log_therm_throt_event(unsigned int cpu, __u64 status); 109 + extern int mce_available(struct cpuinfo_x86 *c); 110 + 111 + void mce_log_therm_throt_event(__u64 status); 127 112 128 113 extern atomic_t mce_entry; 129 114 130 115 extern void do_machine_check(struct pt_regs *, long); 116 + 117 + typedef DECLARE_BITMAP(mce_banks_t, MAX_NR_BANKS); 118 + DECLARE_PER_CPU(mce_banks_t, mce_poll_banks); 119 + 120 + enum mcp_flags { 121 + MCP_TIMESTAMP = (1 << 0), /* log time stamp */ 122 + MCP_UC = (1 << 1), /* log uncorrected errors */ 123 + }; 124 + extern void machine_check_poll(enum mcp_flags flags, mce_banks_t *b); 125 + 131 126 extern int mce_notify_user(void); 132 127 133 128 #endif /* !CONFIG_X86_32 */ ··· 149 120 #else 150 121 #define mcheck_init(c) do { } while (0) 151 122 #endif 152 - extern void stop_mce(void); 153 - extern void restart_mce(void); 123 + 124 + extern void (*mce_threshold_vector)(void); 154 125 155 126 #endif /* __KERNEL__ */ 156 127 #endif /* _ASM_X86_MCE_H */
+3 -40
arch/x86/include/asm/mmzone_32.h
··· 91 91 #endif /* CONFIG_DISCONTIGMEM */ 92 92 93 93 #ifdef CONFIG_NEED_MULTIPLE_NODES 94 - 95 - /* 96 - * Following are macros that are specific to this numa platform. 97 - */ 98 - #define reserve_bootmem(addr, size, flags) \ 99 - reserve_bootmem_node(NODE_DATA(0), (addr), (size), (flags)) 100 - #define alloc_bootmem(x) \ 101 - __alloc_bootmem_node(NODE_DATA(0), (x), SMP_CACHE_BYTES, __pa(MAX_DMA_ADDRESS)) 102 - #define alloc_bootmem_nopanic(x) \ 103 - __alloc_bootmem_node_nopanic(NODE_DATA(0), (x), SMP_CACHE_BYTES, \ 104 - __pa(MAX_DMA_ADDRESS)) 105 - #define alloc_bootmem_low(x) \ 106 - __alloc_bootmem_node(NODE_DATA(0), (x), SMP_CACHE_BYTES, 0) 107 - #define alloc_bootmem_pages(x) \ 108 - __alloc_bootmem_node(NODE_DATA(0), (x), PAGE_SIZE, __pa(MAX_DMA_ADDRESS)) 109 - #define alloc_bootmem_pages_nopanic(x) \ 110 - __alloc_bootmem_node_nopanic(NODE_DATA(0), (x), PAGE_SIZE, \ 111 - __pa(MAX_DMA_ADDRESS)) 112 - #define alloc_bootmem_low_pages(x) \ 113 - __alloc_bootmem_node(NODE_DATA(0), (x), PAGE_SIZE, 0) 114 - #define alloc_bootmem_node(pgdat, x) \ 115 - ({ \ 116 - struct pglist_data __maybe_unused \ 117 - *__alloc_bootmem_node__pgdat = (pgdat); \ 118 - __alloc_bootmem_node(NODE_DATA(0), (x), SMP_CACHE_BYTES, \ 119 - __pa(MAX_DMA_ADDRESS)); \ 120 - }) 121 - #define alloc_bootmem_pages_node(pgdat, x) \ 122 - ({ \ 123 - struct pglist_data __maybe_unused \ 124 - *__alloc_bootmem_node__pgdat = (pgdat); \ 125 - __alloc_bootmem_node(NODE_DATA(0), (x), PAGE_SIZE, \ 126 - __pa(MAX_DMA_ADDRESS)); \ 127 - }) 128 - #define alloc_bootmem_low_pages_node(pgdat, x) \ 129 - ({ \ 130 - struct pglist_data __maybe_unused \ 131 - *__alloc_bootmem_node__pgdat = (pgdat); \ 132 - __alloc_bootmem_node(NODE_DATA(0), (x), PAGE_SIZE, 0); \ 133 - }) 94 + /* always use node 0 for bootmem on this numa platform */ 95 + #define bootmem_arch_preferred_node(__bdata, size, align, goal, limit) \ 96 + (NODE_DATA(0)->bdata) 134 97 #endif /* CONFIG_NEED_MULTIPLE_NODES */ 135 98 136 99 #endif /* _ASM_X86_MMZONE_32_H */
+5
arch/x86/include/asm/msr-index.h
··· 77 77 #define MSR_IA32_MC0_ADDR 0x00000402 78 78 #define MSR_IA32_MC0_MISC 0x00000403 79 79 80 + /* These are consecutive and not in the normal 4er MCE bank block */ 81 + #define MSR_IA32_MC0_CTL2 0x00000280 82 + #define CMCI_EN (1ULL << 30) 83 + #define CMCI_THRESHOLD_MASK 0xffffULL 84 + 80 85 #define MSR_P6_PERFCTR0 0x000000c1 81 86 #define MSR_P6_PERFCTR1 0x000000c2 82 87 #define MSR_P6_EVNTSEL0 0x00000186
-6
arch/x86/include/asm/page_types.h
··· 40 40 41 41 #ifndef __ASSEMBLY__ 42 42 43 - struct pgprot; 44 - 45 43 extern int page_is_ram(unsigned long pagenr); 46 44 extern int devmem_is_allowed(unsigned long pagenr); 47 - extern void map_devmem(unsigned long pfn, unsigned long size, 48 - struct pgprot vma_prot); 49 - extern void unmap_devmem(unsigned long pfn, unsigned long size, 50 - struct pgprot vma_prot); 51 45 52 46 extern unsigned long max_low_pfn_mapped; 53 47 extern unsigned long max_pfn_mapped;
+5
arch/x86/include/asm/pat.h
··· 2 2 #define _ASM_X86_PAT_H 3 3 4 4 #include <linux/types.h> 5 + #include <asm/pgtable_types.h> 5 6 6 7 #ifdef CONFIG_X86_PAT 7 8 extern int pat_enabled; ··· 18 17 19 18 extern int kernel_map_sync_memtype(u64 base, unsigned long size, 20 19 unsigned long flag); 20 + extern void map_devmem(unsigned long pfn, unsigned long size, 21 + struct pgprot vma_prot); 22 + extern void unmap_devmem(unsigned long pfn, unsigned long size, 23 + struct pgprot vma_prot); 21 24 22 25 #endif /* _ASM_X86_PAT_H */
+8
arch/x86/include/asm/percpu.h
··· 43 43 #else /* ...!ASSEMBLY */ 44 44 45 45 #include <linux/stringify.h> 46 + #include <asm/sections.h> 47 + 48 + #define __addr_to_pcpu_ptr(addr) \ 49 + (void *)((unsigned long)(addr) - (unsigned long)pcpu_base_addr \ 50 + + (unsigned long)__per_cpu_start) 51 + #define __pcpu_ptr_to_addr(ptr) \ 52 + (void *)((unsigned long)(ptr) + (unsigned long)pcpu_base_addr \ 53 + - (unsigned long)__per_cpu_start) 46 54 47 55 #ifdef CONFIG_SMP 48 56 #define __percpu_arg(x) "%%"__stringify(__percpu_seg)":%P" #x
+2
arch/x86/include/asm/pgtable.h
··· 288 288 return 1; 289 289 } 290 290 291 + pmd_t *populate_extra_pmd(unsigned long vaddr); 292 + pte_t *populate_extra_pte(unsigned long vaddr); 291 293 #endif /* __ASSEMBLY__ */ 292 294 293 295 #ifdef CONFIG_X86_32
+5
arch/x86/include/asm/pgtable_32_types.h
··· 25 25 * area for the same reason. ;) 26 26 */ 27 27 #define VMALLOC_OFFSET (8 * 1024 * 1024) 28 + 29 + #ifndef __ASSEMBLER__ 30 + extern bool __vmalloc_start_set; /* set once high_memory is set */ 31 + #endif 32 + 28 33 #define VMALLOC_START ((unsigned long)high_memory + VMALLOC_OFFSET) 29 34 #ifdef CONFIG_X86_PAE 30 35 #define LAST_PKMAP 512
+1
arch/x86/include/asm/pgtable_types.h
··· 273 273 274 274 extern pteval_t __supported_pte_mask; 275 275 extern int nx_enabled; 276 + extern void set_nx(void); 276 277 277 278 #define pgprot_writecombine pgprot_writecombine 278 279 extern pgprot_t pgprot_writecombine(pgprot_t prot);
+4
arch/x86/include/asm/uv/uv_hub.h
··· 199 199 #define SCIR_CPU_ACTIVITY 0x02 /* not idle */ 200 200 #define SCIR_CPU_HB_INTERVAL (HZ) /* once per second */ 201 201 202 + /* Loop through all installed blades */ 203 + #define for_each_possible_blade(bid) \ 204 + for ((bid) = 0; (bid) < uv_num_possible_blades(); (bid)++) 205 + 202 206 /* 203 207 * Macros for converting between kernel virtual addresses, socket local physical 204 208 * addresses, and UV global physical addresses.
+1
arch/x86/include/asm/xen/page.h
··· 164 164 165 165 166 166 xmaddr_t arbitrary_virt_to_machine(void *address); 167 + unsigned long arbitrary_virt_to_mfn(void *vaddr); 167 168 void make_lowmem_page_readonly(void *vaddr); 168 169 void make_lowmem_page_readwrite(void *vaddr); 169 170
+1 -1
arch/x86/kernel/Makefile
··· 111 111 ### 112 112 # 64 bit specific files 113 113 ifeq ($(CONFIG_X86_64),y) 114 - obj-$(CONFIG_X86_UV) += tlb_uv.o bios_uv.o uv_irq.o uv_sysfs.o 114 + obj-$(CONFIG_X86_UV) += tlb_uv.o bios_uv.o uv_irq.o uv_sysfs.o uv_time.o 115 115 obj-$(CONFIG_X86_PM_TIMER) += pmtimer_64.o 116 116 obj-$(CONFIG_AUDIT) += audit_64.o 117 117
+11 -6
arch/x86/kernel/alternative.c
··· 414 414 that might execute the to be patched code. 415 415 Other CPUs are not running. */ 416 416 stop_nmi(); 417 - #ifdef CONFIG_X86_MCE 418 - stop_mce(); 419 - #endif 417 + 418 + /* 419 + * Don't stop machine check exceptions while patching. 420 + * MCEs only happen when something got corrupted and in this 421 + * case we must do something about the corruption. 422 + * Ignoring it is worse than an unlikely patching race. 423 + * Also machine checks tend to be broadcast and if one CPU 424 + * goes into machine check the others follow quickly, so we don't 425 + * expect a machine check to cause undue problems during code 426 + * patching. 427 + */ 420 428 421 429 apply_alternatives(__alt_instructions, __alt_instructions_end); 422 430 ··· 464 456 (unsigned long)__smp_locks_end); 465 457 466 458 restart_nmi(); 467 - #ifdef CONFIG_X86_MCE 468 - restart_mce(); 469 - #endif 470 459 } 471 460 472 461 /**
+15
arch/x86/kernel/apic/apic.c
··· 46 46 #include <asm/idle.h> 47 47 #include <asm/mtrr.h> 48 48 #include <asm/smp.h> 49 + #include <asm/mce.h> 49 50 50 51 unsigned int num_processors; 51 52 ··· 843 842 apic_write(APIC_LVTTHMR, v | APIC_LVT_MASKED); 844 843 } 845 844 #endif 845 + #ifdef CONFIG_X86_MCE_INTEL 846 + if (maxlvt >= 6) { 847 + v = apic_read(APIC_LVTCMCI); 848 + if (!(v & APIC_LVT_MASKED)) 849 + apic_write(APIC_LVTCMCI, v | APIC_LVT_MASKED); 850 + } 851 + #endif 852 + 846 853 /* 847 854 * Clean APIC state for other OSs: 848 855 */ ··· 1250 1241 apic_write(APIC_LVT1, value); 1251 1242 1252 1243 preempt_enable(); 1244 + 1245 + #ifdef CONFIG_X86_MCE_INTEL 1246 + /* Recheck CMCI information after local APIC is up on CPU #0 */ 1247 + if (smp_processor_id() == 0) 1248 + cmci_recheck(); 1249 + #endif 1253 1250 } 1254 1251 1255 1252 void __cpuinit end_local_APIC_setup(void)
+52
arch/x86/kernel/cpu/amd.c
··· 5 5 #include <asm/io.h> 6 6 #include <asm/processor.h> 7 7 #include <asm/apic.h> 8 + #include <asm/cpu.h> 8 9 9 10 #ifdef CONFIG_X86_64 10 11 # include <asm/numa_64.h> ··· 142 141 } 143 142 } 144 143 144 + static void __cpuinit amd_k7_smp_check(struct cpuinfo_x86 *c) 145 + { 146 + #ifdef CONFIG_SMP 147 + /* calling is from identify_secondary_cpu() ? */ 148 + if (c->cpu_index == boot_cpu_id) 149 + return; 150 + 151 + /* 152 + * Certain Athlons might work (for various values of 'work') in SMP 153 + * but they are not certified as MP capable. 154 + */ 155 + /* Athlon 660/661 is valid. */ 156 + if ((c->x86_model == 6) && ((c->x86_mask == 0) || 157 + (c->x86_mask == 1))) 158 + goto valid_k7; 159 + 160 + /* Duron 670 is valid */ 161 + if ((c->x86_model == 7) && (c->x86_mask == 0)) 162 + goto valid_k7; 163 + 164 + /* 165 + * Athlon 662, Duron 671, and Athlon >model 7 have capability 166 + * bit. It's worth noting that the A5 stepping (662) of some 167 + * Athlon XP's have the MP bit set. 168 + * See http://www.heise.de/newsticker/data/jow-18.10.01-000 for 169 + * more. 170 + */ 171 + if (((c->x86_model == 6) && (c->x86_mask >= 2)) || 172 + ((c->x86_model == 7) && (c->x86_mask >= 1)) || 173 + (c->x86_model > 7)) 174 + if (cpu_has_mp) 175 + goto valid_k7; 176 + 177 + /* If we get here, not a certified SMP capable AMD system. */ 178 + 179 + /* 180 + * Don't taint if we are running SMP kernel on a single non-MP 181 + * approved Athlon 182 + */ 183 + WARN_ONCE(1, "WARNING: This combination of AMD " 184 + "processors is not suitable for SMP.\n"); 185 + if (!test_taint(TAINT_UNSAFE_SMP)) 186 + add_taint(TAINT_UNSAFE_SMP); 187 + 188 + valid_k7: 189 + ; 190 + #endif 191 + } 192 + 145 193 static void __cpuinit init_amd_k7(struct cpuinfo_x86 *c) 146 194 { 147 195 u32 l, h; ··· 225 175 } 226 176 227 177 set_cpu_cap(c, X86_FEATURE_K7); 178 + 179 + amd_k7_smp_check(c); 228 180 } 229 181 #endif 230 182
+1 -1
arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c
··· 601 601 if (!data) 602 602 return -ENOMEM; 603 603 604 - data->acpi_data = percpu_ptr(acpi_perf_data, cpu); 604 + data->acpi_data = per_cpu_ptr(acpi_perf_data, cpu); 605 605 per_cpu(drv_data, cpu) = data; 606 606 607 607 if (cpu_has(c, X86_FEATURE_CONSTANT_TSC))
-1
arch/x86/kernel/cpu/cpufreq/p4-clockmod.c
··· 277 277 .name = "p4-clockmod", 278 278 .owner = THIS_MODULE, 279 279 .attr = p4clockmod_attr, 280 - .hide_interface = 1, 281 280 }; 282 281 283 282
+25
arch/x86/kernel/cpu/intel.c
··· 13 13 #include <asm/uaccess.h> 14 14 #include <asm/ds.h> 15 15 #include <asm/bugs.h> 16 + #include <asm/cpu.h> 16 17 17 18 #ifdef CONFIG_X86_64 18 19 #include <asm/topology.h> ··· 111 110 } 112 111 #endif 113 112 113 + static void __cpuinit intel_smp_check(struct cpuinfo_x86 *c) 114 + { 115 + #ifdef CONFIG_SMP 116 + /* calling is from identify_secondary_cpu() ? */ 117 + if (c->cpu_index == boot_cpu_id) 118 + return; 119 + 120 + /* 121 + * Mask B, Pentium, but not Pentium MMX 122 + */ 123 + if (c->x86 == 5 && 124 + c->x86_mask >= 1 && c->x86_mask <= 4 && 125 + c->x86_model <= 3) { 126 + /* 127 + * Remember we have B step Pentia with bugs 128 + */ 129 + WARN_ONCE(1, "WARNING: SMP operation may be unreliable " 130 + "with B stepping processors.\n"); 131 + } 132 + #endif 133 + } 134 + 114 135 static void __cpuinit intel_workarounds(struct cpuinfo_x86 *c) 115 136 { 116 137 unsigned long lo, hi; ··· 209 186 #ifdef CONFIG_X86_NUMAQ 210 187 numaq_tsc_disable(); 211 188 #endif 189 + 190 + intel_smp_check(c); 212 191 } 213 192 #else 214 193 static void __cpuinit intel_workarounds(struct cpuinfo_x86 *c)
+1
arch/x86/kernel/cpu/mcheck/Makefile
··· 4 4 obj-$(CONFIG_X86_MCE_INTEL) += mce_intel_64.o 5 5 obj-$(CONFIG_X86_MCE_AMD) += mce_amd_64.o 6 6 obj-$(CONFIG_X86_MCE_NONFATAL) += non-fatal.o 7 + obj-$(CONFIG_X86_MCE_THRESHOLD) += threshold.o
-14
arch/x86/kernel/cpu/mcheck/mce_32.c
··· 60 60 } 61 61 } 62 62 63 - static unsigned long old_cr4 __initdata; 64 - 65 - void __init stop_mce(void) 66 - { 67 - old_cr4 = read_cr4(); 68 - clear_in_cr4(X86_CR4_MCE); 69 - } 70 - 71 - void __init restart_mce(void) 72 - { 73 - if (old_cr4 & X86_CR4_MCE) 74 - set_in_cr4(X86_CR4_MCE); 75 - } 76 - 77 63 static int __init mcheck_disable(char *str) 78 64 { 79 65 mce_disabled = 1;
+395 -135
arch/x86/kernel/cpu/mcheck/mce_64.c
··· 3 3 * K8 parts Copyright 2002,2003 Andi Kleen, SuSE Labs. 4 4 * Rest from unknown author(s). 5 5 * 2004 Andi Kleen. Rewrote most of it. 6 + * Copyright 2008 Intel Corporation 7 + * Author: Andi Kleen 6 8 */ 7 9 8 10 #include <linux/init.h> ··· 26 24 #include <linux/ctype.h> 27 25 #include <linux/kmod.h> 28 26 #include <linux/kdebug.h> 27 + #include <linux/kobject.h> 28 + #include <linux/sysfs.h> 29 + #include <linux/ratelimit.h> 29 30 #include <asm/processor.h> 30 31 #include <asm/msr.h> 31 32 #include <asm/mce.h> ··· 37 32 #include <asm/idle.h> 38 33 39 34 #define MISC_MCELOG_MINOR 227 40 - #define NR_SYSFS_BANKS 6 41 35 42 36 atomic_t mce_entry; 43 37 ··· 51 47 */ 52 48 static int tolerant = 1; 53 49 static int banks; 54 - static unsigned long bank[NR_SYSFS_BANKS] = { [0 ... NR_SYSFS_BANKS-1] = ~0UL }; 50 + static u64 *bank; 55 51 static unsigned long notify_user; 56 52 static int rip_msr; 57 53 static int mce_bootlog = -1; ··· 61 57 static char *trigger_argv[2] = { trigger, NULL }; 62 58 63 59 static DECLARE_WAIT_QUEUE_HEAD(mce_wait); 60 + 61 + /* MCA banks polled by the period polling timer for corrected events */ 62 + DEFINE_PER_CPU(mce_banks_t, mce_poll_banks) = { 63 + [0 ... BITS_TO_LONGS(MAX_NR_BANKS)-1] = ~0UL 64 + }; 65 + 66 + /* Do initial initialization of a struct mce */ 67 + void mce_setup(struct mce *m) 68 + { 69 + memset(m, 0, sizeof(struct mce)); 70 + m->cpu = smp_processor_id(); 71 + rdtscll(m->tsc); 72 + } 64 73 65 74 /* 66 75 * Lockless MCE logging infrastructure. 
··· 136 119 print_symbol("{%s}", m->ip); 137 120 printk("\n"); 138 121 } 139 - printk(KERN_EMERG "TSC %Lx ", m->tsc); 122 + printk(KERN_EMERG "TSC %llx ", m->tsc); 140 123 if (m->addr) 141 - printk("ADDR %Lx ", m->addr); 124 + printk("ADDR %llx ", m->addr); 142 125 if (m->misc) 143 - printk("MISC %Lx ", m->misc); 126 + printk("MISC %llx ", m->misc); 144 127 printk("\n"); 145 128 printk(KERN_EMERG "This is not a software problem!\n"); 146 129 printk(KERN_EMERG "Run through mcelog --ascii to decode " ··· 166 149 panic(msg); 167 150 } 168 151 169 - static int mce_available(struct cpuinfo_x86 *c) 152 + int mce_available(struct cpuinfo_x86 *c) 170 153 { 154 + if (mce_dont_init) 155 + return 0; 171 156 return cpu_has(c, X86_FEATURE_MCE) && cpu_has(c, X86_FEATURE_MCA); 172 157 } 173 158 ··· 191 172 } 192 173 193 174 /* 194 - * The actual machine check handler 175 + * Poll for corrected events or events that happened before reset. 176 + * Those are just logged through /dev/mcelog. 177 + * 178 + * This is executed in standard interrupt context. 179 + */ 180 + void machine_check_poll(enum mcp_flags flags, mce_banks_t *b) 181 + { 182 + struct mce m; 183 + int i; 184 + 185 + mce_setup(&m); 186 + 187 + rdmsrl(MSR_IA32_MCG_STATUS, m.mcgstatus); 188 + for (i = 0; i < banks; i++) { 189 + if (!bank[i] || !test_bit(i, *b)) 190 + continue; 191 + 192 + m.misc = 0; 193 + m.addr = 0; 194 + m.bank = i; 195 + m.tsc = 0; 196 + 197 + barrier(); 198 + rdmsrl(MSR_IA32_MC0_STATUS + i*4, m.status); 199 + if (!(m.status & MCI_STATUS_VAL)) 200 + continue; 201 + 202 + /* 203 + * Uncorrected events are handled by the exception handler 204 + * when it is enabled. But when the exception is disabled log 205 + * everything. 206 + * 207 + * TBD do the same check for MCI_STATUS_EN here? 
208 + */ 209 + if ((m.status & MCI_STATUS_UC) && !(flags & MCP_UC)) 210 + continue; 211 + 212 + if (m.status & MCI_STATUS_MISCV) 213 + rdmsrl(MSR_IA32_MC0_MISC + i*4, m.misc); 214 + if (m.status & MCI_STATUS_ADDRV) 215 + rdmsrl(MSR_IA32_MC0_ADDR + i*4, m.addr); 216 + 217 + if (!(flags & MCP_TIMESTAMP)) 218 + m.tsc = 0; 219 + /* 220 + * Don't get the IP here because it's unlikely to 221 + * have anything to do with the actual error location. 222 + */ 223 + 224 + mce_log(&m); 225 + add_taint(TAINT_MACHINE_CHECK); 226 + 227 + /* 228 + * Clear state for this bank. 229 + */ 230 + wrmsrl(MSR_IA32_MC0_STATUS+4*i, 0); 231 + } 232 + 233 + /* 234 + * Don't clear MCG_STATUS here because it's only defined for 235 + * exceptions. 236 + */ 237 + } 238 + 239 + /* 240 + * The actual machine check handler. This only handles real 241 + * exceptions when something got corrupted coming in through int 18. 242 + * 243 + * This is executed in NMI context not subject to normal locking rules. This 244 + * implies that most kernel services cannot be safely used. Don't even 245 + * think about putting a printk in there! 195 246 */ 196 247 void do_machine_check(struct pt_regs * regs, long error_code) 197 248 { ··· 279 190 * error. 
280 191 */ 281 192 int kill_it = 0; 193 + DECLARE_BITMAP(toclear, MAX_NR_BANKS); 282 194 283 195 atomic_inc(&mce_entry); 284 196 285 - if ((regs 286 - && notify_die(DIE_NMI, "machine check", regs, error_code, 197 + if (notify_die(DIE_NMI, "machine check", regs, error_code, 287 198 18, SIGKILL) == NOTIFY_STOP) 288 - || !banks) 199 + goto out2; 200 + if (!banks) 289 201 goto out2; 290 202 291 - memset(&m, 0, sizeof(struct mce)); 292 - m.cpu = smp_processor_id(); 203 + mce_setup(&m); 204 + 293 205 rdmsrl(MSR_IA32_MCG_STATUS, m.mcgstatus); 294 206 /* if the restart IP is not valid, we're done for */ 295 207 if (!(m.mcgstatus & MCG_STATUS_RIPV)) ··· 300 210 barrier(); 301 211 302 212 for (i = 0; i < banks; i++) { 303 - if (i < NR_SYSFS_BANKS && !bank[i]) 213 + __clear_bit(i, toclear); 214 + if (!bank[i]) 304 215 continue; 305 216 306 217 m.misc = 0; 307 218 m.addr = 0; 308 219 m.bank = i; 309 - m.tsc = 0; 310 220 311 221 rdmsrl(MSR_IA32_MC0_STATUS + i*4, m.status); 312 222 if ((m.status & MCI_STATUS_VAL) == 0) 313 223 continue; 224 + 225 + /* 226 + * Non uncorrected errors are handled by machine_check_poll 227 + * Leave them alone. 228 + */ 229 + if ((m.status & MCI_STATUS_UC) == 0) 230 + continue; 231 + 232 + /* 233 + * Set taint even when machine check was not enabled. 234 + */ 235 + add_taint(TAINT_MACHINE_CHECK); 236 + 237 + __set_bit(i, toclear); 314 238 315 239 if (m.status & MCI_STATUS_EN) { 316 240 /* if PCC was set, there's no way out */ ··· 339 235 no_way_out = 1; 340 236 kill_it = 1; 341 237 } 238 + } else { 239 + /* 240 + * Machine check event was not enabled. Clear, but 241 + * ignore. 242 + */ 243 + continue; 342 244 } 343 245 344 246 if (m.status & MCI_STATUS_MISCV) ··· 353 243 rdmsrl(MSR_IA32_MC0_ADDR + i*4, m.addr); 354 244 355 245 mce_get_rip(&m, regs); 356 - if (error_code >= 0) 357 - rdtscll(m.tsc); 358 - if (error_code != -2) 359 - mce_log(&m); 246 + mce_log(&m); 360 247 361 248 /* Did this bank cause the exception? 
*/ 362 249 /* Assume that the bank with uncorrectable errors did it, ··· 362 255 panicm = m; 363 256 panicm_found = 1; 364 257 } 365 - 366 - add_taint(TAINT_MACHINE_CHECK); 367 258 } 368 - 369 - /* Never do anything final in the polling timer */ 370 - if (!regs) 371 - goto out; 372 259 373 260 /* If we didn't find an uncorrectable error, pick 374 261 the last one (shouldn't happen, just being safe). */ ··· 410 309 /* notify userspace ASAP */ 411 310 set_thread_flag(TIF_MCE_NOTIFY); 412 311 413 - out: 414 312 /* the last thing we do is clear state */ 415 - for (i = 0; i < banks; i++) 416 - wrmsrl(MSR_IA32_MC0_STATUS+4*i, 0); 313 + for (i = 0; i < banks; i++) { 314 + if (test_bit(i, toclear)) 315 + wrmsrl(MSR_IA32_MC0_STATUS+4*i, 0); 316 + } 417 317 wrmsrl(MSR_IA32_MCG_STATUS, 0); 418 318 out2: 419 319 atomic_dec(&mce_entry); ··· 434 332 * and historically has been the register value of the 435 333 * MSR_IA32_THERMAL_STATUS (Intel) msr. 436 334 */ 437 - void mce_log_therm_throt_event(unsigned int cpu, __u64 status) 335 + void mce_log_therm_throt_event(__u64 status) 438 336 { 439 337 struct mce m; 440 338 441 - memset(&m, 0, sizeof(m)); 442 - m.cpu = cpu; 339 + mce_setup(&m); 443 340 m.bank = MCE_THERMAL_BANK; 444 341 m.status = status; 445 - rdtscll(m.tsc); 446 342 mce_log(&m); 447 343 } 448 344 #endif /* CONFIG_X86_MCE_INTEL */ ··· 453 353 454 354 static int check_interval = 5 * 60; /* 5 minutes */ 455 355 static int next_interval; /* in jiffies */ 456 - static void mcheck_timer(struct work_struct *work); 457 - static DECLARE_DELAYED_WORK(mcheck_work, mcheck_timer); 356 + static void mcheck_timer(unsigned long); 357 + static DEFINE_PER_CPU(struct timer_list, mce_timer); 458 358 459 - static void mcheck_check_cpu(void *info) 359 + static void mcheck_timer(unsigned long data) 460 360 { 361 + struct timer_list *t = &per_cpu(mce_timer, data); 362 + 363 + WARN_ON(smp_processor_id() != data); 364 + 461 365 if (mce_available(&current_cpu_data)) 462 - do_machine_check(NULL, 
0); 463 - } 464 - 465 - static void mcheck_timer(struct work_struct *work) 466 - { 467 - on_each_cpu(mcheck_check_cpu, NULL, 1); 366 + machine_check_poll(MCP_TIMESTAMP, 367 + &__get_cpu_var(mce_poll_banks)); 468 368 469 369 /* 470 370 * Alert userspace if needed. If we logged an MCE, reduce the ··· 477 377 (int)round_jiffies_relative(check_interval*HZ)); 478 378 } 479 379 480 - schedule_delayed_work(&mcheck_work, next_interval); 380 + t->expires = jiffies + next_interval; 381 + add_timer(t); 481 382 } 482 383 384 + static void mce_do_trigger(struct work_struct *work) 385 + { 386 + call_usermodehelper(trigger, trigger_argv, NULL, UMH_NO_WAIT); 387 + } 388 + 389 + static DECLARE_WORK(mce_trigger_work, mce_do_trigger); 390 + 483 391 /* 484 - * This is only called from process context. This is where we do 485 - * anything we need to alert userspace about new MCEs. This is called 486 - * directly from the poller and also from entry.S and idle, thanks to 487 - * TIF_MCE_NOTIFY. 392 + * Notify the user(s) about new machine check events. 393 + * Can be called from interrupt context, but not from machine check/NMI 394 + * context. 488 395 */ 489 396 int mce_notify_user(void) 490 397 { 398 + /* Not more than two messages every minute */ 399 + static DEFINE_RATELIMIT_STATE(ratelimit, 60*HZ, 2); 400 + 491 401 clear_thread_flag(TIF_MCE_NOTIFY); 492 402 if (test_and_clear_bit(0, &notify_user)) { 493 - static unsigned long last_print; 494 - unsigned long now = jiffies; 495 - 496 403 wake_up_interruptible(&mce_wait); 497 - if (trigger[0]) 498 - call_usermodehelper(trigger, trigger_argv, NULL, 499 - UMH_NO_WAIT); 500 404 501 - if (time_after_eq(now, last_print + (check_interval*HZ))) { 502 - last_print = now; 405 + /* 406 + * There is no risk of missing notifications because 407 + * work_pending is always cleared before the function is 408 + * executed. 
409 + */ 410 + if (trigger[0] && !work_pending(&mce_trigger_work)) 411 + schedule_work(&mce_trigger_work); 412 + 413 + if (__ratelimit(&ratelimit)) 503 414 printk(KERN_INFO "Machine check events logged\n"); 504 - } 505 415 506 416 return 1; 507 417 } ··· 535 425 536 426 static __init int periodic_mcheck_init(void) 537 427 { 538 - next_interval = check_interval * HZ; 539 - if (next_interval) 540 - schedule_delayed_work(&mcheck_work, 541 - round_jiffies_relative(next_interval)); 542 - idle_notifier_register(&mce_idle_notifier); 543 - return 0; 428 + idle_notifier_register(&mce_idle_notifier); 429 + return 0; 544 430 } 545 431 __initcall(periodic_mcheck_init); 546 - 547 432 548 433 /* 549 434 * Initialize Machine Checks for a CPU. 550 435 */ 551 - static void mce_init(void *dummy) 436 + static int mce_cap_init(void) 552 437 { 553 438 u64 cap; 554 - int i; 439 + unsigned b; 555 440 556 441 rdmsrl(MSR_IA32_MCG_CAP, cap); 557 - banks = cap & 0xff; 558 - if (banks > MCE_EXTENDED_BANK) { 559 - banks = MCE_EXTENDED_BANK; 560 - printk(KERN_INFO "MCE: warning: using only %d banks\n", 561 - MCE_EXTENDED_BANK); 442 + b = cap & 0xff; 443 + if (b > MAX_NR_BANKS) { 444 + printk(KERN_WARNING 445 + "MCE: Using only %u machine check banks out of %u\n", 446 + MAX_NR_BANKS, b); 447 + b = MAX_NR_BANKS; 562 448 } 449 + 450 + /* Don't support asymmetric configurations today */ 451 + WARN_ON(banks != 0 && b != banks); 452 + banks = b; 453 + if (!bank) { 454 + bank = kmalloc(banks * sizeof(u64), GFP_KERNEL); 455 + if (!bank) 456 + return -ENOMEM; 457 + memset(bank, 0xff, banks * sizeof(u64)); 458 + } 459 + 563 460 /* Use accurate RIP reporting if available. */ 564 461 if ((cap & (1<<9)) && ((cap >> 16) & 0xff) >= 9) 565 462 rip_msr = MSR_IA32_MCG_EIP; 566 463 567 - /* Log the machine checks left over from the previous reset. 568 - This also clears all registers */ 569 - do_machine_check(NULL, mce_bootlog ? 
-1 : -2); 464 + return 0; 465 + } 466 + 467 + static void mce_init(void *dummy) 468 + { 469 + u64 cap; 470 + int i; 471 + mce_banks_t all_banks; 472 + 473 + /* 474 + * Log the machine checks left over from the previous reset. 475 + */ 476 + bitmap_fill(all_banks, MAX_NR_BANKS); 477 + machine_check_poll(MCP_UC, &all_banks); 570 478 571 479 set_in_cr4(X86_CR4_MCE); 572 480 481 + rdmsrl(MSR_IA32_MCG_CAP, cap); 573 482 if (cap & MCG_CTL_P) 574 483 wrmsr(MSR_IA32_MCG_CTL, 0xffffffff, 0xffffffff); 575 484 576 485 for (i = 0; i < banks; i++) { 577 - if (i < NR_SYSFS_BANKS) 578 - wrmsrl(MSR_IA32_MC0_CTL+4*i, bank[i]); 579 - else 580 - wrmsrl(MSR_IA32_MC0_CTL+4*i, ~0UL); 581 - 486 + wrmsrl(MSR_IA32_MC0_CTL+4*i, bank[i]); 582 487 wrmsrl(MSR_IA32_MC0_STATUS+4*i, 0); 583 488 } 584 489 } 585 490 586 491 /* Add per CPU specific workarounds here */ 587 - static void __cpuinit mce_cpu_quirks(struct cpuinfo_x86 *c) 492 + static void mce_cpu_quirks(struct cpuinfo_x86 *c) 588 493 { 589 494 /* This should be disabled by the BIOS, but isn't always */ 590 495 if (c->x86_vendor == X86_VENDOR_AMD) { 591 - if(c->x86 == 15) 496 + if (c->x86 == 15 && banks > 4) 592 497 /* disable GART TBL walk error reporting, which trips off 593 498 incorrectly with the IOMMU & 3ware & Cerberus. */ 594 - clear_bit(10, &bank[4]); 499 + clear_bit(10, (unsigned long *)&bank[4]); 595 500 if(c->x86 <= 17 && mce_bootlog < 0) 596 501 /* Lots of broken BIOS around that don't clear them 597 502 by default and leave crap in there. Don't log. 
*/ ··· 629 504 } 630 505 } 631 506 507 + static void mce_init_timer(void) 508 + { 509 + struct timer_list *t = &__get_cpu_var(mce_timer); 510 + 511 + /* data race harmless because everyone sets to the same value */ 512 + if (!next_interval) 513 + next_interval = check_interval * HZ; 514 + if (!next_interval) 515 + return; 516 + setup_timer(t, mcheck_timer, smp_processor_id()); 517 + t->expires = round_jiffies(jiffies + next_interval); 518 + add_timer(t); 519 + } 520 + 632 521 /* 633 522 * Called for each booted CPU to set up machine checks. 634 523 * Must be called with preempt off. 635 524 */ 636 525 void __cpuinit mcheck_init(struct cpuinfo_x86 *c) 637 526 { 638 - mce_cpu_quirks(c); 639 - 640 - if (mce_dont_init || 641 - !mce_available(c)) 527 + if (!mce_available(c)) 642 528 return; 529 + 530 + if (mce_cap_init() < 0) { 531 + mce_dont_init = 1; 532 + return; 533 + } 534 + mce_cpu_quirks(c); 643 535 644 536 mce_init(NULL); 645 537 mce_cpu_features(c); 538 + mce_init_timer(); 646 539 } 647 540 648 541 /* ··· 716 573 { 717 574 unsigned long *cpu_tsc; 718 575 static DEFINE_MUTEX(mce_read_mutex); 719 - unsigned next; 576 + unsigned prev, next; 720 577 char __user *buf = ubuf; 721 578 int i, err; 722 579 ··· 735 592 } 736 593 737 594 err = 0; 738 - for (i = 0; i < next; i++) { 739 - unsigned long start = jiffies; 595 + prev = 0; 596 + do { 597 + for (i = prev; i < next; i++) { 598 + unsigned long start = jiffies; 740 599 741 - while (!mcelog.entry[i].finished) { 742 - if (time_after_eq(jiffies, start + 2)) { 743 - memset(mcelog.entry + i,0, sizeof(struct mce)); 744 - goto timeout; 600 + while (!mcelog.entry[i].finished) { 601 + if (time_after_eq(jiffies, start + 2)) { 602 + memset(mcelog.entry + i, 0, 603 + sizeof(struct mce)); 604 + goto timeout; 605 + } 606 + cpu_relax(); 745 607 } 746 - cpu_relax(); 608 + smp_rmb(); 609 + err |= copy_to_user(buf, mcelog.entry + i, 610 + sizeof(struct mce)); 611 + buf += sizeof(struct mce); 612 + timeout: 613 + ; 747 614 } 748 - 
smp_rmb(); 749 - err |= copy_to_user(buf, mcelog.entry + i, sizeof(struct mce)); 750 - buf += sizeof(struct mce); 751 - timeout: 752 - ; 753 - } 754 615 755 - memset(mcelog.entry, 0, next * sizeof(struct mce)); 756 - mcelog.next = 0; 616 + memset(mcelog.entry + prev, 0, 617 + (next - prev) * sizeof(struct mce)); 618 + prev = next; 619 + next = cmpxchg(&mcelog.next, prev, 0); 620 + } while (next != prev); 757 621 758 622 synchronize_sched(); 759 623 ··· 830 680 &mce_chrdev_ops, 831 681 }; 832 682 833 - static unsigned long old_cr4 __initdata; 834 - 835 - void __init stop_mce(void) 836 - { 837 - old_cr4 = read_cr4(); 838 - clear_in_cr4(X86_CR4_MCE); 839 - } 840 - 841 - void __init restart_mce(void) 842 - { 843 - if (old_cr4 & X86_CR4_MCE) 844 - set_in_cr4(X86_CR4_MCE); 845 - } 846 - 847 683 /* 848 684 * Old style boot options parsing. Only for compatibility. 849 685 */ ··· 839 703 return 1; 840 704 } 841 705 842 - /* mce=off disables machine check. Note you can re-enable it later 843 - using sysfs. 706 + /* mce=off disables machine check. 844 707 mce=TOLERANCELEVEL (number, see above) 845 708 mce=bootlog Log MCEs from before booting. Disabled by default on AMD. 846 709 mce=nobootlog Don't log MCEs from before booting. */ ··· 863 728 * Sysfs support 864 729 */ 865 730 731 + /* 732 + * Disable machine checks on suspend and shutdown. We can't really handle 733 + * them later. 734 + */ 735 + static int mce_disable(void) 736 + { 737 + int i; 738 + 739 + for (i = 0; i < banks; i++) 740 + wrmsrl(MSR_IA32_MC0_CTL + i*4, 0); 741 + return 0; 742 + } 743 + 744 + static int mce_suspend(struct sys_device *dev, pm_message_t state) 745 + { 746 + return mce_disable(); 747 + } 748 + 749 + static int mce_shutdown(struct sys_device *dev) 750 + { 751 + return mce_disable(); 752 + } 753 + 866 754 /* On resume clear all MCE state. Don't want to see leftovers from the BIOS. 867 755 Only one CPU is active at this time, the others get readded later using 868 756 CPU hotplug. 
*/ ··· 896 738 return 0; 897 739 } 898 740 741 + static void mce_cpu_restart(void *data) 742 + { 743 + del_timer_sync(&__get_cpu_var(mce_timer)); 744 + if (mce_available(&current_cpu_data)) 745 + mce_init(NULL); 746 + mce_init_timer(); 747 + } 748 + 899 749 /* Reinit MCEs after user configuration changes */ 900 750 static void mce_restart(void) 901 751 { 902 - if (next_interval) 903 - cancel_delayed_work(&mcheck_work); 904 - /* Timer race is harmless here */ 905 - on_each_cpu(mce_init, NULL, 1); 906 752 next_interval = check_interval * HZ; 907 - if (next_interval) 908 - schedule_delayed_work(&mcheck_work, 909 - round_jiffies_relative(next_interval)); 753 + on_each_cpu(mce_cpu_restart, NULL, 1); 910 754 } 911 755 912 756 static struct sysdev_class mce_sysclass = { 757 + .suspend = mce_suspend, 758 + .shutdown = mce_shutdown, 913 759 .resume = mce_resume, 914 760 .name = "machinecheck", 915 761 }; ··· 940 778 } \ 941 779 static SYSDEV_ATTR(name, 0644, show_ ## name, set_ ## name); 942 780 943 - /* 944 - * TBD should generate these dynamically based on number of available banks. 945 - * Have only 6 contol banks in /sysfs until then. 
946 - */ 947 - ACCESSOR(bank0ctl,bank[0],mce_restart()) 948 - ACCESSOR(bank1ctl,bank[1],mce_restart()) 949 - ACCESSOR(bank2ctl,bank[2],mce_restart()) 950 - ACCESSOR(bank3ctl,bank[3],mce_restart()) 951 - ACCESSOR(bank4ctl,bank[4],mce_restart()) 952 - ACCESSOR(bank5ctl,bank[5],mce_restart()) 781 + static struct sysdev_attribute *bank_attrs; 782 + 783 + static ssize_t show_bank(struct sys_device *s, struct sysdev_attribute *attr, 784 + char *buf) 785 + { 786 + u64 b = bank[attr - bank_attrs]; 787 + return sprintf(buf, "%llx\n", b); 788 + } 789 + 790 + static ssize_t set_bank(struct sys_device *s, struct sysdev_attribute *attr, 791 + const char *buf, size_t siz) 792 + { 793 + char *end; 794 + u64 new = simple_strtoull(buf, &end, 0); 795 + if (end == buf) 796 + return -EINVAL; 797 + bank[attr - bank_attrs] = new; 798 + mce_restart(); 799 + return end-buf; 800 + } 953 801 954 802 static ssize_t show_trigger(struct sys_device *s, struct sysdev_attribute *attr, 955 803 char *buf) ··· 986 814 static SYSDEV_INT_ATTR(tolerant, 0644, tolerant); 987 815 ACCESSOR(check_interval,check_interval,mce_restart()) 988 816 static struct sysdev_attribute *mce_attributes[] = { 989 - &attr_bank0ctl, &attr_bank1ctl, &attr_bank2ctl, 990 - &attr_bank3ctl, &attr_bank4ctl, &attr_bank5ctl, 991 817 &attr_tolerant.attr, &attr_check_interval, &attr_trigger, 992 818 NULL 993 819 }; ··· 1015 845 if (err) 1016 846 goto error; 1017 847 } 848 + for (i = 0; i < banks; i++) { 849 + err = sysdev_create_file(&per_cpu(device_mce, cpu), 850 + &bank_attrs[i]); 851 + if (err) 852 + goto error2; 853 + } 1018 854 cpu_set(cpu, mce_device_initialized); 1019 855 1020 856 return 0; 857 + error2: 858 + while (--i >= 0) { 859 + sysdev_remove_file(&per_cpu(device_mce, cpu), 860 + &bank_attrs[i]); 861 + } 1021 862 error: 1022 - while (i--) { 863 + while (--i >= 0) { 1023 864 sysdev_remove_file(&per_cpu(device_mce,cpu), 1024 865 mce_attributes[i]); 1025 866 } ··· 1049 868 for (i = 0; mce_attributes[i]; i++) 1050 869 
sysdev_remove_file(&per_cpu(device_mce,cpu), 1051 870 mce_attributes[i]); 871 + for (i = 0; i < banks; i++) 872 + sysdev_remove_file(&per_cpu(device_mce, cpu), 873 + &bank_attrs[i]); 1052 874 sysdev_unregister(&per_cpu(device_mce,cpu)); 1053 875 cpu_clear(cpu, mce_device_initialized); 876 + } 877 + 878 + /* Make sure there are no machine checks on offlined CPUs. */ 879 + static void mce_disable_cpu(void *h) 880 + { 881 + int i; 882 + unsigned long action = *(unsigned long *)h; 883 + 884 + if (!mce_available(&current_cpu_data)) 885 + return; 886 + if (!(action & CPU_TASKS_FROZEN)) 887 + cmci_clear(); 888 + for (i = 0; i < banks; i++) 889 + wrmsrl(MSR_IA32_MC0_CTL + i*4, 0); 890 + } 891 + 892 + static void mce_reenable_cpu(void *h) 893 + { 894 + int i; 895 + unsigned long action = *(unsigned long *)h; 896 + 897 + if (!mce_available(&current_cpu_data)) 898 + return; 899 + if (!(action & CPU_TASKS_FROZEN)) 900 + cmci_reenable(); 901 + for (i = 0; i < banks; i++) 902 + wrmsrl(MSR_IA32_MC0_CTL + i*4, bank[i]); 1054 903 } 1055 904 1056 905 /* Get notified when a cpu comes on/off. Be hotplug friendly. 
*/ ··· 1088 877 unsigned long action, void *hcpu) 1089 878 { 1090 879 unsigned int cpu = (unsigned long)hcpu; 880 + struct timer_list *t = &per_cpu(mce_timer, cpu); 1091 881 1092 882 switch (action) { 1093 883 case CPU_ONLINE: ··· 1103 891 threshold_cpu_callback(action, cpu); 1104 892 mce_remove_device(cpu); 1105 893 break; 894 + case CPU_DOWN_PREPARE: 895 + case CPU_DOWN_PREPARE_FROZEN: 896 + del_timer_sync(t); 897 + smp_call_function_single(cpu, mce_disable_cpu, &action, 1); 898 + break; 899 + case CPU_DOWN_FAILED: 900 + case CPU_DOWN_FAILED_FROZEN: 901 + t->expires = round_jiffies(jiffies + next_interval); 902 + add_timer_on(t, cpu); 903 + smp_call_function_single(cpu, mce_reenable_cpu, &action, 1); 904 + break; 905 + case CPU_POST_DEAD: 906 + /* intentionally ignoring frozen here */ 907 + cmci_rediscover(cpu); 908 + break; 1106 909 } 1107 910 return NOTIFY_OK; 1108 911 } ··· 1126 899 .notifier_call = mce_cpu_callback, 1127 900 }; 1128 901 902 + static __init int mce_init_banks(void) 903 + { 904 + int i; 905 + 906 + bank_attrs = kzalloc(sizeof(struct sysdev_attribute) * banks, 907 + GFP_KERNEL); 908 + if (!bank_attrs) 909 + return -ENOMEM; 910 + 911 + for (i = 0; i < banks; i++) { 912 + struct sysdev_attribute *a = &bank_attrs[i]; 913 + a->attr.name = kasprintf(GFP_KERNEL, "bank%d", i); 914 + if (!a->attr.name) 915 + goto nomem; 916 + a->attr.mode = 0644; 917 + a->show = show_bank; 918 + a->store = set_bank; 919 + } 920 + return 0; 921 + 922 + nomem: 923 + while (--i >= 0) 924 + kfree(bank_attrs[i].attr.name); 925 + kfree(bank_attrs); 926 + bank_attrs = NULL; 927 + return -ENOMEM; 928 + } 929 + 1129 930 static __init int mce_init_device(void) 1130 931 { 1131 932 int err; ··· 1161 906 1162 907 if (!mce_available(&boot_cpu_data)) 1163 908 return -EIO; 909 + 910 + err = mce_init_banks(); 911 + if (err) 912 + return err; 913 + 1164 914 err = sysdev_class_register(&mce_sysclass); 1165 915 if (err) 1166 916 return err;
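The reworked /dev/mcelog read path above drains the lockless log in a loop: copy out everything up to the observed `next` index, then try to reset `next` with `cmpxchg()`; if a producer appended entries in the meantime, `cmpxchg()` fails and returns the newer index, and the loop drains the remainder. A rough userspace sketch of that retry shape, with an invented buffer and names, using GCC's `__sync` builtins in place of the kernel's `cmpxchg()`:

```c
#include <assert.h>
#include <string.h>

#define LOG_LEN 32

struct entry { int finished; int data; };

static struct entry log_buf[LOG_LEN];
static unsigned log_next;               /* index of first free slot */

/* Producer: reserve a slot, fill it, then mark it finished. */
static void log_event(int data)
{
	unsigned i = __sync_fetch_and_add(&log_next, 1);
	log_buf[i].data = data;
	__sync_synchronize();           /* order payload before 'finished' */
	log_buf[i].finished = 1;
}

/* Consumer: drain everything, retrying if producers raced in. */
static int drain(int *out)
{
	unsigned prev = 0, next = log_next;
	int n = 0;

	do {
		unsigned i;

		for (i = prev; i < next; i++) {
			if (!log_buf[i].finished)
				continue; /* kernel version spins then times out */
			out[n++] = log_buf[i].data;
		}
		memset(log_buf + prev, 0, (next - prev) * sizeof(struct entry));
		prev = next;
		/* Reset the index only if nobody appended meanwhile. */
		next = __sync_val_compare_and_swap(&log_next, prev, 0);
	} while (next != prev);

	return n;
}
```

This mirrors only the retry structure, not the real mcelog layout or its timeout handling of half-written entries.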
+9 -13
arch/x86/kernel/cpu/mcheck/mce_amd_64.c
··· 79 79 80 80 static DEFINE_PER_CPU(unsigned char, bank_map); /* see which banks are on */ 81 81 82 + static void amd_threshold_interrupt(void); 83 + 82 84 /* 83 85 * CPU Initialization 84 86 */ ··· 176 174 tr.reset = 0; 177 175 tr.old_limit = 0; 178 176 threshold_restart_bank(&tr); 177 + 178 + mce_threshold_vector = amd_threshold_interrupt; 179 179 } 180 180 } 181 181 } ··· 191 187 * the interrupt goes off when error_count reaches threshold_limit. 192 188 * the handler will simply log mcelog w/ software defined bank number. 193 189 */ 194 - asmlinkage void mce_threshold_interrupt(void) 190 + static void amd_threshold_interrupt(void) 195 191 { 196 192 unsigned int bank, block; 197 193 struct mce m; 198 194 u32 low = 0, high = 0, address = 0; 199 195 200 - ack_APIC_irq(); 201 - exit_idle(); 202 - irq_enter(); 203 - 204 - memset(&m, 0, sizeof(m)); 205 - rdtscll(m.tsc); 206 - m.cpu = smp_processor_id(); 196 + mce_setup(&m); 207 197 208 198 /* assume first bank caused it */ 209 199 for (bank = 0; bank < NR_BANKS; ++bank) { ··· 231 233 232 234 /* Log the machine check that caused the threshold 233 235 event. */ 234 - do_machine_check(NULL, 0); 236 + machine_check_poll(MCP_TIMESTAMP, 237 + &__get_cpu_var(mce_poll_banks)); 235 238 236 239 if (high & MASK_OVERFLOW_HI) { 237 240 rdmsrl(address, m.misc); ··· 242 243 + bank * NR_BLOCKS 243 244 + block; 244 245 mce_log(&m); 245 - goto out; 246 + return; 246 247 } 247 248 } 248 249 } 249 - out: 250 - inc_irq_stat(irq_threshold_count); 251 - irq_exit(); 252 250 } 253 251 254 252 /*
+206 -1
arch/x86/kernel/cpu/mcheck/mce_intel_64.c
··· 1 1 /* 2 2 * Intel specific MCE features. 3 3 * Copyright 2004 Zwane Mwaikambo <zwane@linuxpower.ca> 4 + * Copyright (C) 2008, 2009 Intel Corporation 5 + * Author: Andi Kleen 4 6 */ 5 7 6 8 #include <linux/init.h> ··· 15 13 #include <asm/hw_irq.h> 16 14 #include <asm/idle.h> 17 15 #include <asm/therm_throt.h> 16 + #include <asm/apic.h> 18 17 19 18 asmlinkage void smp_thermal_interrupt(void) 20 19 { ··· 28 25 29 26 rdmsrl(MSR_IA32_THERM_STATUS, msr_val); 30 27 if (therm_throt_process(msr_val & 1)) 31 - mce_log_therm_throt_event(smp_processor_id(), msr_val); 28 + mce_log_therm_throt_event(msr_val); 32 29 33 30 inc_irq_stat(irq_thermal_count); 34 31 irq_exit(); ··· 88 85 return; 89 86 } 90 87 88 + /* 89 + * Support for Intel Correct Machine Check Interrupts. This allows 90 + * the CPU to raise an interrupt when a corrected machine check happened. 91 + * Normally we pick those up using a regular polling timer. 92 + * Also supports reliable discovery of shared banks. 93 + */ 94 + 95 + static DEFINE_PER_CPU(mce_banks_t, mce_banks_owned); 96 + 97 + /* 98 + * cmci_discover_lock protects against parallel discovery attempts 99 + * which could race against each other. 100 + */ 101 + static DEFINE_SPINLOCK(cmci_discover_lock); 102 + 103 + #define CMCI_THRESHOLD 1 104 + 105 + static int cmci_supported(int *banks) 106 + { 107 + u64 cap; 108 + 109 + /* 110 + * Vendor check is not strictly needed, but the initial 111 + * initialization is vendor keyed and this 112 + * makes sure none of the backdoors are entered otherwise. 113 + */ 114 + if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL) 115 + return 0; 116 + if (!cpu_has_apic || lapic_get_maxlvt() < 6) 117 + return 0; 118 + rdmsrl(MSR_IA32_MCG_CAP, cap); 119 + *banks = min_t(unsigned, MAX_NR_BANKS, cap & 0xff); 120 + return !!(cap & MCG_CMCI_P); 121 + } 122 + 123 + /* 124 + * The interrupt handler. This is called on every event. 125 + * Just call the poller directly to log any events. 
126 + * This could in theory increase the threshold under high load, 127 + * but doesn't for now. 128 + */ 129 + static void intel_threshold_interrupt(void) 130 + { 131 + machine_check_poll(MCP_TIMESTAMP, &__get_cpu_var(mce_banks_owned)); 132 + mce_notify_user(); 133 + } 134 + 135 + static void print_update(char *type, int *hdr, int num) 136 + { 137 + if (*hdr == 0) 138 + printk(KERN_INFO "CPU %d MCA banks", smp_processor_id()); 139 + *hdr = 1; 140 + printk(KERN_CONT " %s:%d", type, num); 141 + } 142 + 143 + /* 144 + * Enable CMCI (Corrected Machine Check Interrupt) for available MCE banks 145 + * on this CPU. Use the algorithm recommended in the SDM to discover shared 146 + * banks. 147 + */ 148 + static void cmci_discover(int banks, int boot) 149 + { 150 + unsigned long *owned = (void *)&__get_cpu_var(mce_banks_owned); 151 + int hdr = 0; 152 + int i; 153 + 154 + spin_lock(&cmci_discover_lock); 155 + for (i = 0; i < banks; i++) { 156 + u64 val; 157 + 158 + if (test_bit(i, owned)) 159 + continue; 160 + 161 + rdmsrl(MSR_IA32_MC0_CTL2 + i, val); 162 + 163 + /* Already owned by someone else? */ 164 + if (val & CMCI_EN) { 165 + if (test_and_clear_bit(i, owned) || boot) 166 + print_update("SHD", &hdr, i); 167 + __clear_bit(i, __get_cpu_var(mce_poll_banks)); 168 + continue; 169 + } 170 + 171 + val |= CMCI_EN | CMCI_THRESHOLD; 172 + wrmsrl(MSR_IA32_MC0_CTL2 + i, val); 173 + rdmsrl(MSR_IA32_MC0_CTL2 + i, val); 174 + 175 + /* Did the enable bit stick? -- the bank supports CMCI */ 176 + if (val & CMCI_EN) { 177 + if (!test_and_set_bit(i, owned) || boot) 178 + print_update("CMCI", &hdr, i); 179 + __clear_bit(i, __get_cpu_var(mce_poll_banks)); 180 + } else { 181 + WARN_ON(!test_bit(i, __get_cpu_var(mce_poll_banks))); 182 + } 183 + } 184 + spin_unlock(&cmci_discover_lock); 185 + if (hdr) 186 + printk(KERN_CONT "\n"); 187 + } 188 + 189 + /* 190 + * Just in case we missed an event during initialization check 191 + * all the CMCI owned banks. 
192 + */ 193 + void cmci_recheck(void) 194 + { 195 + unsigned long flags; 196 + int banks; 197 + 198 + if (!mce_available(&current_cpu_data) || !cmci_supported(&banks)) 199 + return; 200 + local_irq_save(flags); 201 + machine_check_poll(MCP_TIMESTAMP, &__get_cpu_var(mce_banks_owned)); 202 + local_irq_restore(flags); 203 + } 204 + 205 + /* 206 + * Disable CMCI on this CPU for all banks it owns when it goes down. 207 + * This allows other CPUs to claim the banks on rediscovery. 208 + */ 209 + void cmci_clear(void) 210 + { 211 + int i; 212 + int banks; 213 + u64 val; 214 + 215 + if (!cmci_supported(&banks)) 216 + return; 217 + spin_lock(&cmci_discover_lock); 218 + for (i = 0; i < banks; i++) { 219 + if (!test_bit(i, __get_cpu_var(mce_banks_owned))) 220 + continue; 221 + /* Disable CMCI */ 222 + rdmsrl(MSR_IA32_MC0_CTL2 + i, val); 223 + val &= ~(CMCI_EN|CMCI_THRESHOLD_MASK); 224 + wrmsrl(MSR_IA32_MC0_CTL2 + i, val); 225 + __clear_bit(i, __get_cpu_var(mce_banks_owned)); 226 + } 227 + spin_unlock(&cmci_discover_lock); 228 + } 229 + 230 + /* 231 + * After a CPU went down cycle through all the others and rediscover 232 + * Must run in process context. 233 + */ 234 + void cmci_rediscover(int dying) 235 + { 236 + int banks; 237 + int cpu; 238 + cpumask_var_t old; 239 + 240 + if (!cmci_supported(&banks)) 241 + return; 242 + if (!alloc_cpumask_var(&old, GFP_KERNEL)) 243 + return; 244 + cpumask_copy(old, &current->cpus_allowed); 245 + 246 + for_each_online_cpu (cpu) { 247 + if (cpu == dying) 248 + continue; 249 + if (set_cpus_allowed_ptr(current, &cpumask_of_cpu(cpu))) 250 + continue; 251 + /* Recheck banks in case CPUs don't all have the same */ 252 + if (cmci_supported(&banks)) 253 + cmci_discover(banks, 0); 254 + } 255 + 256 + set_cpus_allowed_ptr(current, old); 257 + free_cpumask_var(old); 258 + } 259 + 260 + /* 261 + * Reenable CMCI on this CPU in case a CPU down failed. 
262 + */ 263 + void cmci_reenable(void) 264 + { 265 + int banks; 266 + if (cmci_supported(&banks)) 267 + cmci_discover(banks, 0); 268 + } 269 + 270 + static __cpuinit void intel_init_cmci(void) 271 + { 272 + int banks; 273 + 274 + if (!cmci_supported(&banks)) 275 + return; 276 + 277 + mce_threshold_vector = intel_threshold_interrupt; 278 + cmci_discover(banks, 1); 279 + /* 280 + * For CPU #0 this runs with still disabled APIC, but that's 281 + * ok because only the vector is set up. We still do another 282 + * check for the banks later for CPU #0 just to make sure 283 + * to not miss any events. 284 + */ 285 + apic_write(APIC_LVTCMCI, THRESHOLD_APIC_VECTOR|APIC_DM_FIXED); 286 + cmci_recheck(); 287 + } 288 + 91 289 void mce_intel_feature_init(struct cpuinfo_x86 *c) 92 290 { 93 291 intel_init_thermal(c); 292 + intel_init_cmci(); 94 293 }
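The discovery loop in cmci_discover() above leans on one hardware property: the CMCI enable bit in MSR_IA32_MCx_CTL2 only sticks on banks that actually support CMCI, and finding the bit already set means another CPU owns the bank. A minimal user-space model of that loop (illustrative only: the msr array, bank_has_cmci mask, and discover() stand in for rdmsrl/wrmsrl, real silicon, and the locked per-cpu bitmap):

```c
#include <assert.h>
#include <stdint.h>

#define CMCI_EN        (1ULL << 30)
#define CMCI_THRESHOLD 1ULL
#define NBANKS         4

/* Simulated MSR_IA32_MCx_CTL2 registers.  A bank "supports" CMCI only
 * if the enable bit sticks after a write, modeled here by a per-bank
 * capability mask. */
static uint64_t msr[NBANKS];
static const int bank_has_cmci[NBANKS] = { 1, 0, 1, 1 };

static void wrmsr_sim(int i, uint64_t val)
{
	if (!bank_has_cmci[i])
		val &= ~CMCI_EN;		/* the enable bit does not stick */
	msr[i] = val;
}

/* One CPU's discovery pass; returns a bitmask of banks it now owns. */
static unsigned discover(void)
{
	unsigned owned = 0;
	int i;

	for (i = 0; i < NBANKS; i++) {
		uint64_t val = msr[i];

		if (val & CMCI_EN)		/* already owned by another CPU */
			continue;

		wrmsr_sim(i, val | CMCI_EN | CMCI_THRESHOLD);
		if (msr[i] & CMCI_EN)		/* did the enable bit stick? */
			owned |= 1u << i;
		/* else: the bank stays with the MCE polling machinery */
	}
	return owned;
}
```

A second pass claims nothing: every CMCI-capable bank is already enabled, which is exactly why cmci_rediscover() can safely rerun discovery on every remaining CPU.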
+29
arch/x86/kernel/cpu/mcheck/threshold.c
··· 1 + /* 2 + * Common corrected MCE threshold handler code: 3 + */ 4 + #include <linux/interrupt.h> 5 + #include <linux/kernel.h> 6 + 7 + #include <asm/irq_vectors.h> 8 + #include <asm/apic.h> 9 + #include <asm/idle.h> 10 + #include <asm/mce.h> 11 + 12 + static void default_threshold_interrupt(void) 13 + { 14 + printk(KERN_ERR "Unexpected threshold interrupt at vector %x\n", 15 + THRESHOLD_APIC_VECTOR); 16 + } 17 + 18 + void (*mce_threshold_vector)(void) = default_threshold_interrupt; 19 + 20 + asmlinkage void mce_threshold_interrupt(void) 21 + { 22 + exit_idle(); 23 + irq_enter(); 24 + inc_irq_stat(irq_threshold_count); 25 + mce_threshold_vector(); 26 + irq_exit(); 27 + /* Ack only at the end to avoid potential reentry */ 28 + ack_APIC_irq(); 29 + }
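The mce_threshold_interrupt() entry point above routes through a single writable function pointer so that model-specific code (the Intel CMCI init earlier in this merge) can install a real handler while everyone else keeps the warning stub. The shape of that dispatch, reduced to a sketch (counters replace printk, and all names are illustrative):

```c
#include <assert.h>

static int unexpected_count;	/* the kernel would printk() here */
static int cmci_events;

static void default_threshold_handler(void)
{
	unexpected_count++;
}

static void cmci_handler(void)
{
	cmci_events++;
}

/* Single dispatch slot; CPU-init code may override it, the way
 * intel_init_cmci() overrides mce_threshold_vector. */
static void (*threshold_vector)(void) = default_threshold_handler;

static void threshold_interrupt(void)
{
	/* irq_enter()/irq_exit() bookkeeping elided in this sketch */
	threshold_vector();
	/* ack only after the handler ran, to avoid reentry */
}
```

Until a handler is registered every interrupt lands in the stub; after registration the stub is never called again.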
+1 -2
arch/x86/kernel/ds.c
··· 729 729 730 730 spin_unlock_irqrestore(&ds_lock, irq); 731 731 732 - ds_write_config(tracer->ds.context, &tracer->trace.ds, ds_bts); 732 + ds_write_config(tracer->ds.context, &tracer->trace.ds, ds_pebs); 733 733 ds_resume_pebs(tracer); 734 734 735 735 return tracer; ··· 1029 1029 1030 1030 void ds_exit_thread(struct task_struct *tsk) 1031 1031 { 1032 - WARN_ON(tsk->thread.ds_ctx); 1033 1032 }
+5 -2
arch/x86/kernel/efi.c
··· 469 469 efi_memory_desc_t *md; 470 470 efi_status_t status; 471 471 unsigned long size; 472 - u64 end, systab, addr, npages; 472 + u64 end, systab, addr, npages, end_pfn; 473 473 void *p, *va; 474 474 475 475 efi.systab = NULL; ··· 481 481 size = md->num_pages << EFI_PAGE_SHIFT; 482 482 end = md->phys_addr + size; 483 483 484 - if (PFN_UP(end) <= max_low_pfn_mapped) 484 + end_pfn = PFN_UP(end); 485 + if (end_pfn <= max_low_pfn_mapped 486 + || (end_pfn > (1UL << (32 - PAGE_SHIFT)) 487 + && end_pfn <= max_pfn_mapped)) 485 488 va = __va(md->phys_addr); 486 489 else 487 490 va = efi_ioremap(md->phys_addr, size);
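The widened check in efi.c asks whether a descriptor is already reachable through the kernel's direct mapping: it must end below max_low_pfn_mapped, or lie wholly in the mapped window above 4 GiB; a region ending in the unmapped hole between the two still goes through efi_ioremap(). The predicate, lifted into a testable function (the two limit values are illustrative, not what any real machine reports):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SHIFT 12

/* Illustrative mapping limits: direct map covers RAM below 3 GiB and
 * a second window from 4 GiB up to 8 GiB. */
static const uint64_t max_low_pfn_mapped = 0xC0000000ULL  >> PAGE_SHIFT;
static const uint64_t max_pfn_mapped     = 0x200000000ULL >> PAGE_SHIFT;

/* Mirrors the new test: reachable via __va() iff the region ends in
 * the low mapping, or sits entirely in the mapped area above 4 GiB. */
static bool covered_by_direct_map(uint64_t end_pfn)
{
	return end_pfn <= max_low_pfn_mapped ||
	       (end_pfn > (1ULL << (32 - PAGE_SHIFT)) &&
		end_pfn <= max_pfn_mapped);
}
```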
+4 -17
arch/x86/kernel/efi_64.c
··· 100 100 101 101 void __iomem *__init efi_ioremap(unsigned long phys_addr, unsigned long size) 102 102 { 103 - static unsigned pages_mapped __initdata; 104 - unsigned i, pages; 105 - unsigned long offset; 103 + unsigned long last_map_pfn; 106 104 107 - pages = PFN_UP(phys_addr + size) - PFN_DOWN(phys_addr); 108 - offset = phys_addr & ~PAGE_MASK; 109 - phys_addr &= PAGE_MASK; 110 - 111 - if (pages_mapped + pages > MAX_EFI_IO_PAGES) 105 + last_map_pfn = init_memory_mapping(phys_addr, phys_addr + size); 106 + if ((last_map_pfn << PAGE_SHIFT) < phys_addr + size) 112 107 return NULL; 113 108 114 - for (i = 0; i < pages; i++) { 115 - __set_fixmap(FIX_EFI_IO_MAP_FIRST_PAGE - pages_mapped, 116 - phys_addr, PAGE_KERNEL); 117 - phys_addr += PAGE_SIZE; 118 - pages_mapped++; 119 - } 120 - 121 - return (void __iomem *)__fix_to_virt(FIX_EFI_IO_MAP_FIRST_PAGE - \ 122 - (pages_mapped - pages)) + offset; 109 + return (void __iomem *)__va(phys_addr); 123 110 }
+2
arch/x86/kernel/entry_64.S
··· 984 984 #endif 985 985 apicinterrupt LOCAL_TIMER_VECTOR \ 986 986 apic_timer_interrupt smp_apic_timer_interrupt 987 + apicinterrupt GENERIC_INTERRUPT_VECTOR \ 988 + generic_interrupt smp_generic_interrupt 987 989 988 990 #ifdef CONFIG_SMP 989 991 apicinterrupt INVALIDATE_TLB_VECTOR_START+0 \
+1 -1
arch/x86/kernel/i387.c
··· 136 136 #ifdef CONFIG_X86_32 137 137 if (!HAVE_HWFP) { 138 138 memset(tsk->thread.xstate, 0, xstate_size); 139 - finit(); 139 + finit_task(tsk); 140 140 set_stopped_child_used_math(tsk); 141 141 return 0; 142 142 }
+34
arch/x86/kernel/irq.c
··· 15 15 16 16 atomic_t irq_err_count; 17 17 18 + /* Function pointer for generic interrupt vector handling */ 19 + void (*generic_interrupt_extension)(void) = NULL; 20 + 18 21 /* 19 22 * 'what should we do if we get a hw irq event on an illegal vector'. 20 23 * each architecture has to answer this themselves. ··· 59 56 seq_printf(p, "%10u ", irq_stats(j)->apic_timer_irqs); 60 57 seq_printf(p, " Local timer interrupts\n"); 61 58 #endif 59 + if (generic_interrupt_extension) { 60 + seq_printf(p, "PLT: "); 61 + for_each_online_cpu(j) 62 + seq_printf(p, "%10u ", irq_stats(j)->generic_irqs); 63 + seq_printf(p, " Platform interrupts\n"); 64 + } 62 65 #ifdef CONFIG_SMP 63 66 seq_printf(p, "RES: "); 64 67 for_each_online_cpu(j) ··· 172 163 #ifdef CONFIG_X86_LOCAL_APIC 173 164 sum += irq_stats(cpu)->apic_timer_irqs; 174 165 #endif 166 + if (generic_interrupt_extension) 167 + sum += irq_stats(cpu)->generic_irqs; 175 168 #ifdef CONFIG_SMP 176 169 sum += irq_stats(cpu)->irq_resched_count; 177 170 sum += irq_stats(cpu)->irq_call_count; ··· 235 224 236 225 set_irq_regs(old_regs); 237 226 return 1; 227 + } 228 + 229 + /* 230 + * Handler for GENERIC_INTERRUPT_VECTOR. 231 + */ 232 + void smp_generic_interrupt(struct pt_regs *regs) 233 + { 234 + struct pt_regs *old_regs = set_irq_regs(regs); 235 + 236 + ack_APIC_irq(); 237 + 238 + exit_idle(); 239 + 240 + irq_enter(); 241 + 242 + inc_irq_stat(generic_irqs); 243 + 244 + if (generic_interrupt_extension) 245 + generic_interrupt_extension(); 246 + 247 + irq_exit(); 248 + 249 + set_irq_regs(old_regs); 238 250 } 239 251 240 252 EXPORT_SYMBOL_GPL(vector_used_by_percpu_irq);
+15 -14
arch/x86/kernel/irq_32.c
··· 16 16 #include <linux/cpu.h> 17 17 #include <linux/delay.h> 18 18 #include <linux/uaccess.h> 19 + #include <linux/percpu.h> 19 20 20 21 #include <asm/apic.h> 21 22 ··· 56 55 union irq_ctx { 57 56 struct thread_info tinfo; 58 57 u32 stack[THREAD_SIZE/sizeof(u32)]; 59 - }; 58 + } __attribute__((aligned(PAGE_SIZE))); 60 59 61 - static union irq_ctx *hardirq_ctx[NR_CPUS] __read_mostly; 62 - static union irq_ctx *softirq_ctx[NR_CPUS] __read_mostly; 60 + static DEFINE_PER_CPU(union irq_ctx *, hardirq_ctx); 61 + static DEFINE_PER_CPU(union irq_ctx *, softirq_ctx); 63 62 64 - static char softirq_stack[NR_CPUS * THREAD_SIZE] __page_aligned_bss; 65 - static char hardirq_stack[NR_CPUS * THREAD_SIZE] __page_aligned_bss; 63 + static DEFINE_PER_CPU_PAGE_ALIGNED(union irq_ctx, hardirq_stack); 64 + static DEFINE_PER_CPU_PAGE_ALIGNED(union irq_ctx, softirq_stack); 66 65 67 66 static void call_on_stack(void *func, void *stack) 68 67 { ··· 82 81 u32 *isp, arg1, arg2; 83 82 84 83 curctx = (union irq_ctx *) current_thread_info(); 85 - irqctx = hardirq_ctx[smp_processor_id()]; 84 + irqctx = __get_cpu_var(hardirq_ctx); 86 85 87 86 /* 88 87 * this is where we switch to the IRQ stack. 
However, if we are ··· 126 125 { 127 126 union irq_ctx *irqctx; 128 127 129 - if (hardirq_ctx[cpu]) 128 + if (per_cpu(hardirq_ctx, cpu)) 130 129 return; 131 130 132 - irqctx = (union irq_ctx*) &hardirq_stack[cpu*THREAD_SIZE]; 131 + irqctx = &per_cpu(hardirq_stack, cpu); 133 132 irqctx->tinfo.task = NULL; 134 133 irqctx->tinfo.exec_domain = NULL; 135 134 irqctx->tinfo.cpu = cpu; 136 135 irqctx->tinfo.preempt_count = HARDIRQ_OFFSET; 137 136 irqctx->tinfo.addr_limit = MAKE_MM_SEG(0); 138 137 139 - hardirq_ctx[cpu] = irqctx; 138 + per_cpu(hardirq_ctx, cpu) = irqctx; 140 139 141 - irqctx = (union irq_ctx *) &softirq_stack[cpu*THREAD_SIZE]; 140 + irqctx = &per_cpu(softirq_stack, cpu); 142 141 irqctx->tinfo.task = NULL; 143 142 irqctx->tinfo.exec_domain = NULL; 144 143 irqctx->tinfo.cpu = cpu; 145 144 irqctx->tinfo.preempt_count = 0; 146 145 irqctx->tinfo.addr_limit = MAKE_MM_SEG(0); 147 146 148 - softirq_ctx[cpu] = irqctx; 147 + per_cpu(softirq_ctx, cpu) = irqctx; 149 148 150 149 printk(KERN_DEBUG "CPU %u irqstacks, hard=%p soft=%p\n", 151 - cpu, hardirq_ctx[cpu], softirq_ctx[cpu]); 150 + cpu, per_cpu(hardirq_ctx, cpu), per_cpu(softirq_ctx, cpu)); 152 151 } 153 152 154 153 void irq_ctx_exit(int cpu) 155 154 { 156 - hardirq_ctx[cpu] = NULL; 155 + per_cpu(hardirq_ctx, cpu) = NULL; 157 156 } 158 157 159 158 asmlinkage void do_softirq(void) ··· 170 169 171 170 if (local_softirq_pending()) { 172 171 curctx = current_thread_info(); 173 - irqctx = softirq_ctx[smp_processor_id()]; 172 + irqctx = __get_cpu_var(softirq_ctx); 174 173 irqctx->tinfo.task = curctx->task; 175 174 irqctx->tinfo.previous_esp = current_stack_pointer; 176 175
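One detail worth noting in the irq_32.c conversion: union irq_ctx gains __attribute__((aligned(PAGE_SIZE))) because 32-bit current_thread_info() recovers the thread_info by masking the stack pointer with ~(THREAD_SIZE - 1), which only works if every context starts on a THREAD_SIZE boundary. A user-space model of that invariant (tinfo reduced to two fields, aligned_alloc standing in for the per-cpu placement):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define THREAD_SIZE 4096

/* Cut-down irq context: thread_info at the base of an aligned
 * THREAD_SIZE block, the rest used as stack. */
union irq_ctx {
	struct { int cpu; int preempt_count; } tinfo;
	uint32_t stack[THREAD_SIZE / sizeof(uint32_t)];
};

/* What current_thread_info() does on 32-bit: mask the stack pointer
 * down to the base of the aligned context. */
static union irq_ctx *ctx_from_sp(uintptr_t sp)
{
	return (union irq_ctx *)(sp & ~(uintptr_t)(THREAD_SIZE - 1));
}
```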
+3
arch/x86/kernel/irqinit_32.c
··· 175 175 /* self generated IPI for local APIC timer */ 176 176 alloc_intr_gate(LOCAL_TIMER_VECTOR, apic_timer_interrupt); 177 177 178 + /* generic IPI for platform specific use */ 179 + alloc_intr_gate(GENERIC_INTERRUPT_VECTOR, generic_interrupt); 180 + 178 181 /* IPI vectors for APIC spurious and error interrupts */ 179 182 alloc_intr_gate(SPURIOUS_APIC_VECTOR, spurious_interrupt); 180 183 alloc_intr_gate(ERROR_APIC_VECTOR, error_interrupt);
+3
arch/x86/kernel/irqinit_64.c
··· 147 147 /* self generated IPI for local APIC timer */ 148 148 alloc_intr_gate(LOCAL_TIMER_VECTOR, apic_timer_interrupt); 149 149 150 + /* generic IPI for platform specific use */ 151 + alloc_intr_gate(GENERIC_INTERRUPT_VECTOR, generic_interrupt); 152 + 150 153 /* IPI vectors for APIC spurious and error interrupts */ 151 154 alloc_intr_gate(SPURIOUS_APIC_VECTOR, spurious_interrupt); 152 155 alloc_intr_gate(ERROR_APIC_VECTOR, error_interrupt);
+10 -7
arch/x86/kernel/machine_kexec_32.c
··· 14 14 #include <linux/ftrace.h> 15 15 #include <linux/suspend.h> 16 16 #include <linux/gfp.h> 17 + #include <linux/io.h> 17 18 18 19 #include <asm/pgtable.h> 19 20 #include <asm/pgalloc.h> 20 21 #include <asm/tlbflush.h> 21 22 #include <asm/mmu_context.h> 22 - #include <asm/io.h> 23 23 #include <asm/apic.h> 24 24 #include <asm/cpufeature.h> 25 25 #include <asm/desc.h> ··· 63 63 "\tmovl %%eax,%%fs\n" 64 64 "\tmovl %%eax,%%gs\n" 65 65 "\tmovl %%eax,%%ss\n" 66 - ::: "eax", "memory"); 66 + : : : "eax", "memory"); 67 67 #undef STR 68 68 #undef __STR 69 69 } ··· 205 205 206 206 if (image->preserve_context) { 207 207 #ifdef CONFIG_X86_IO_APIC 208 - /* We need to put APICs in legacy mode so that we can 208 + /* 209 + * We need to put APICs in legacy mode so that we can 209 210 * get timer interrupts in second kernel. kexec/kdump 210 211 * paths already have calls to disable_IO_APIC() in 211 212 * one form or other. kexec jump path also need ··· 228 227 page_list[PA_SWAP_PAGE] = (page_to_pfn(image->swap_page) 229 228 << PAGE_SHIFT); 230 229 231 - /* The segment registers are funny things, they have both a 230 + /* 231 + * The segment registers are funny things, they have both a 232 232 * visible and an invisible part. Whenever the visible part is 233 233 * set to a specific selector, the invisible part is loaded 234 234 * with from a table in memory. At no other time is the ··· 239 237 * segments, before I zap the gdt with an invalid value. 240 238 */ 241 239 load_segments(); 242 - /* The gdt & idt are now invalid. 240 + /* 241 + * The gdt & idt are now invalid. 243 242 * If you want to load them you must set up your own idt & gdt. 244 243 */ 245 - set_gdt(phys_to_virt(0),0); 246 - set_idt(phys_to_virt(0),0); 244 + set_gdt(phys_to_virt(0), 0); 245 + set_idt(phys_to_virt(0), 0); 247 246 248 247 /* now call it */ 249 248 image->start = relocate_kernel_ptr((unsigned long)image->head,
+88 -11
arch/x86/kernel/machine_kexec_64.c
··· 12 12 #include <linux/reboot.h> 13 13 #include <linux/numa.h> 14 14 #include <linux/ftrace.h> 15 + #include <linux/io.h> 16 + #include <linux/suspend.h> 15 17 16 18 #include <asm/pgtable.h> 17 19 #include <asm/tlbflush.h> 18 20 #include <asm/mmu_context.h> 19 - #include <asm/io.h> 21 + 22 + static int init_one_level2_page(struct kimage *image, pgd_t *pgd, 23 + unsigned long addr) 24 + { 25 + pud_t *pud; 26 + pmd_t *pmd; 27 + struct page *page; 28 + int result = -ENOMEM; 29 + 30 + addr &= PMD_MASK; 31 + pgd += pgd_index(addr); 32 + if (!pgd_present(*pgd)) { 33 + page = kimage_alloc_control_pages(image, 0); 34 + if (!page) 35 + goto out; 36 + pud = (pud_t *)page_address(page); 37 + memset(pud, 0, PAGE_SIZE); 38 + set_pgd(pgd, __pgd(__pa(pud) | _KERNPG_TABLE)); 39 + } 40 + pud = pud_offset(pgd, addr); 41 + if (!pud_present(*pud)) { 42 + page = kimage_alloc_control_pages(image, 0); 43 + if (!page) 44 + goto out; 45 + pmd = (pmd_t *)page_address(page); 46 + memset(pmd, 0, PAGE_SIZE); 47 + set_pud(pud, __pud(__pa(pmd) | _KERNPG_TABLE)); 48 + } 49 + pmd = pmd_offset(pud, addr); 50 + if (!pmd_present(*pmd)) 51 + set_pmd(pmd, __pmd(addr | __PAGE_KERNEL_LARGE_EXEC)); 52 + result = 0; 53 + out: 54 + return result; 55 + } 20 56 21 57 static void init_level2_page(pmd_t *level2p, unsigned long addr) 22 58 { ··· 119 83 } 120 84 level3p = (pud_t *)page_address(page); 121 85 result = init_level3_page(image, level3p, addr, last_addr); 122 - if (result) { 86 + if (result) 123 87 goto out; 124 - } 125 88 set_pgd(level4p++, __pgd(__pa(level3p) | _KERNPG_TABLE)); 126 89 addr += PGDIR_SIZE; 127 90 } ··· 189 154 int result; 190 155 level4p = (pgd_t *)__va(start_pgtable); 191 156 result = init_level4_page(image, level4p, 0, max_pfn << PAGE_SHIFT); 157 + if (result) 158 + return result; 159 + /* 160 + * image->start may be outside 0 ~ max_pfn, for example when 161 + * jump back to original kernel from kexeced kernel 162 + */ 163 + result = init_one_level2_page(image, level4p, 
image->start); 192 164 if (result) 193 165 return result; 194 166 return init_transition_pgtable(image, level4p); ··· 271 229 { 272 230 unsigned long page_list[PAGES_NR]; 273 231 void *control_page; 232 + int save_ftrace_enabled; 274 233 275 - tracer_disable(); 234 + #ifdef CONFIG_KEXEC_JUMP 235 + if (kexec_image->preserve_context) 236 + save_processor_state(); 237 + #endif 238 + 239 + save_ftrace_enabled = __ftrace_enabled_save(); 276 240 277 241 /* Interrupts aren't acceptable while we reboot */ 278 242 local_irq_disable(); 279 243 244 + if (image->preserve_context) { 245 + #ifdef CONFIG_X86_IO_APIC 246 + /* 247 + * We need to put APICs in legacy mode so that we can 248 + * get timer interrupts in second kernel. kexec/kdump 249 + * paths already have calls to disable_IO_APIC() in 250 + * one form or other. kexec jump path also need 251 + * one. 252 + */ 253 + disable_IO_APIC(); 254 + #endif 255 + } 256 + 280 257 control_page = page_address(image->control_code_page) + PAGE_SIZE; 281 - memcpy(control_page, relocate_kernel, PAGE_SIZE); 258 + memcpy(control_page, relocate_kernel, KEXEC_CONTROL_CODE_MAX_SIZE); 282 259 283 260 page_list[PA_CONTROL_PAGE] = virt_to_phys(control_page); 261 + page_list[VA_CONTROL_PAGE] = (unsigned long)control_page; 284 262 page_list[PA_TABLE_PAGE] = 285 263 (unsigned long)__pa(page_address(image->control_code_page)); 286 264 287 - /* The segment registers are funny things, they have both a 265 + if (image->type == KEXEC_TYPE_DEFAULT) 266 + page_list[PA_SWAP_PAGE] = (page_to_pfn(image->swap_page) 267 + << PAGE_SHIFT); 268 + 269 + /* 270 + * The segment registers are funny things, they have both a 288 271 * visible and an invisible part. Whenever the visible part is 289 272 * set to a specific selector, the invisible part is loaded 290 273 * with from a table in memory. At no other time is the ··· 319 252 * segments, before I zap the gdt with an invalid value. 320 253 */ 321 254 load_segments(); 322 - /* The gdt & idt are now invalid. 
255 + /* 256 + * The gdt & idt are now invalid. 323 257 * If you want to load them you must set up your own idt & gdt. 324 258 */ 325 - set_gdt(phys_to_virt(0),0); 326 - set_idt(phys_to_virt(0),0); 259 + set_gdt(phys_to_virt(0), 0); 260 + set_idt(phys_to_virt(0), 0); 327 261 328 262 /* now call it */ 329 - relocate_kernel((unsigned long)image->head, (unsigned long)page_list, 330 - image->start); 263 + image->start = relocate_kernel((unsigned long)image->head, 264 + (unsigned long)page_list, 265 + image->start, 266 + image->preserve_context); 267 + 268 + #ifdef CONFIG_KEXEC_JUMP 269 + if (kexec_image->preserve_context) 270 + restore_processor_state(); 271 + #endif 272 + 273 + __ftrace_enabled_restore(save_ftrace_enabled); 331 274 } 332 275 333 276 void arch_crash_save_vmcoreinfo(void)
+22 -3
arch/x86/kernel/mpparse.c
··· 558 558 559 559 static struct mpf_intel *mpf_found; 560 560 561 + static unsigned long __init get_mpc_size(unsigned long physptr) 562 + { 563 + struct mpc_table *mpc; 564 + unsigned long size; 565 + 566 + mpc = early_ioremap(physptr, PAGE_SIZE); 567 + size = mpc->length; 568 + early_iounmap(mpc, PAGE_SIZE); 569 + apic_printk(APIC_VERBOSE, " mpc: %lx-%lx\n", physptr, physptr + size); 570 + 571 + return size; 572 + } 573 + 561 574 /* 562 575 * Scan the memory blocks for an SMP configuration block. 563 576 */ ··· 624 611 construct_default_ISA_mptable(mpf->feature1); 625 612 626 613 } else if (mpf->physptr) { 614 + struct mpc_table *mpc; 615 + unsigned long size; 627 616 617 + size = get_mpc_size(mpf->physptr); 618 + mpc = early_ioremap(mpf->physptr, size); 628 619 /* 629 620 * Read the physical hardware table. Anything here will 630 621 * override the defaults. 631 622 */ 632 - if (!smp_read_mpc(phys_to_virt(mpf->physptr), early)) { 623 + if (!smp_read_mpc(mpc, early)) { 633 624 #ifdef CONFIG_X86_LOCAL_APIC 634 625 smp_found_config = 0; 635 626 #endif ··· 641 624 "BIOS bug, MP table errors detected!...\n"); 642 625 printk(KERN_ERR "... disabling SMP support. " 643 626 "(tell your hw vendor)\n"); 627 + early_iounmap(mpc, size); 644 628 return; 645 629 } 630 + early_iounmap(mpc, size); 646 631 647 632 if (early) 648 633 return; ··· 716 697 717 698 if (!reserve) 718 699 return 1; 719 - reserve_bootmem_generic(virt_to_phys(mpf), PAGE_SIZE, 700 + reserve_bootmem_generic(virt_to_phys(mpf), sizeof(*mpf), 720 701 BOOTMEM_DEFAULT); 721 702 if (mpf->physptr) { 722 - unsigned long size = PAGE_SIZE; 703 + unsigned long size = get_mpc_size(mpf->physptr); 723 704 #ifdef CONFIG_X86_32 724 705 /* 725 706 * We cannot access to MPC table to compute
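get_mpc_size() exists because the MP configuration table's true length is only known after the table is mapped: the code maps a single page, reads the header's length field, unmaps, and lets callers remap exactly that many bytes (replacing the old hard-coded PAGE_SIZE reservation). The two-stage pattern against a flat buffer that stands in for physical memory and early_ioremap() (all names here are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Toy MP config table header: only the length field matters here. */
struct mpc_table {
	char     signature[4];
	uint16_t length;
};

/* "Physical memory" is a flat buffer; map/unmap are trivial. */
static uint8_t phys_mem[8192];

static void *ioremap_sim(unsigned long phys, unsigned long size)
{
	(void)size;
	return &phys_mem[phys];
}

static void iounmap_sim(void *p, unsigned long size)
{
	(void)p;
	(void)size;
}

/* Two-stage read, as in get_mpc_size(): map one page to learn the
 * table's real length, unmap, return the size for the full mapping. */
static unsigned long get_table_size(unsigned long phys)
{
	struct mpc_table *mpc = ioremap_sim(phys, 4096);
	unsigned long size = mpc->length;

	iounmap_sim(mpc, 4096);
	return size;
}
```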
+8
arch/x86/kernel/reboot.c
··· 216 216 DMI_MATCH(DMI_PRODUCT_NAME, "HP Compaq"), 217 217 }, 218 218 }, 219 + { /* Handle problems with rebooting on Dell XPS710 */ 220 + .callback = set_bios_reboot, 221 + .ident = "Dell XPS710", 222 + .matches = { 223 + DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."), 224 + DMI_MATCH(DMI_PRODUCT_NAME, "Dell XPS710"), 225 + }, 226 + }, 219 227 { } 220 228 }; 221 229
+16 -8
arch/x86/kernel/relocate_kernel_32.S
··· 17 17 18 18 #define PTR(x) (x << 2) 19 19 20 - /* control_page + KEXEC_CONTROL_CODE_MAX_SIZE 20 + /* 21 + * control_page + KEXEC_CONTROL_CODE_MAX_SIZE 21 22 * ~ control_page + PAGE_SIZE are used as data storage and stack for 22 23 * jumping back 23 24 */ ··· 77 76 movl %eax, CP_PA_SWAP_PAGE(%edi) 78 77 movl %ebx, CP_PA_BACKUP_PAGES_MAP(%edi) 79 78 80 - /* get physical address of control page now */ 81 - /* this is impossible after page table switch */ 79 + /* 80 + * get physical address of control page now 81 + * this is impossible after page table switch 82 + */ 82 83 movl PTR(PA_CONTROL_PAGE)(%ebp), %edi 83 84 84 85 /* switch to new set of page tables */ ··· 100 97 /* store the start address on the stack */ 101 98 pushl %edx 102 99 103 - /* Set cr0 to a known state: 100 + /* 101 + * Set cr0 to a known state: 104 102 * - Paging disabled 105 103 * - Alignment check disabled 106 104 * - Write protect disabled ··· 117 113 /* clear cr4 if applicable */ 118 114 testl %ecx, %ecx 119 115 jz 1f 120 - /* Set cr4 to a known state: 116 + /* 117 + * Set cr4 to a known state: 121 118 * Setting everything to zero seems safe. 122 119 */ 123 120 xorl %eax, %eax ··· 137 132 call swap_pages 138 133 addl $8, %esp 139 134 140 - /* To be certain of avoiding problems with self-modifying code 135 + /* 136 + * To be certain of avoiding problems with self-modifying code 141 137 * I need to execute a serializing instruction here. 142 138 * So I flush the TLB, it's handy, and not processor dependent. 143 139 */ 144 140 xorl %eax, %eax 145 141 movl %eax, %cr3 146 142 147 - /* set all of the registers to known values */ 148 - /* leave %esp alone */ 143 + /* 144 + * set all of the registers to known values 145 + * leave %esp alone 146 + */ 149 147 150 148 testl %esi, %esi 151 149 jnz 1f
+155 -36
arch/x86/kernel/relocate_kernel_64.S
··· 19 19 #define PTR(x) (x << 3) 20 20 #define PAGE_ATTR (_PAGE_PRESENT | _PAGE_RW | _PAGE_ACCESSED | _PAGE_DIRTY) 21 21 22 + /* 23 + * control_page + KEXEC_CONTROL_CODE_MAX_SIZE 24 + * ~ control_page + PAGE_SIZE are used as data storage and stack for 25 + * jumping back 26 + */ 27 + #define DATA(offset) (KEXEC_CONTROL_CODE_MAX_SIZE+(offset)) 28 + 29 + /* Minimal CPU state */ 30 + #define RSP DATA(0x0) 31 + #define CR0 DATA(0x8) 32 + #define CR3 DATA(0x10) 33 + #define CR4 DATA(0x18) 34 + 35 + /* other data */ 36 + #define CP_PA_TABLE_PAGE DATA(0x20) 37 + #define CP_PA_SWAP_PAGE DATA(0x28) 38 + #define CP_PA_BACKUP_PAGES_MAP DATA(0x30) 39 + 22 40 .text 23 41 .align PAGE_SIZE 24 42 .code64 25 43 .globl relocate_kernel 26 44 relocate_kernel: 27 - /* %rdi indirection_page 45 + /* 46 + * %rdi indirection_page 28 47 * %rsi page_list 29 48 * %rdx start address 49 + * %rcx preserve_context 30 50 */ 51 + 52 + /* Save the CPU context, used for jumping back */ 53 + pushq %rbx 54 + pushq %rbp 55 + pushq %r12 56 + pushq %r13 57 + pushq %r14 58 + pushq %r15 59 + pushf 60 + 61 + movq PTR(VA_CONTROL_PAGE)(%rsi), %r11 62 + movq %rsp, RSP(%r11) 63 + movq %cr0, %rax 64 + movq %rax, CR0(%r11) 65 + movq %cr3, %rax 66 + movq %rax, CR3(%r11) 67 + movq %cr4, %rax 68 + movq %rax, CR4(%r11) 31 69 32 70 /* zero out flags, and disable interrupts */ 33 71 pushq $0 34 72 popfq 35 73 36 - /* get physical address of control page now */ 37 - /* this is impossible after page table switch */ 74 + /* 75 + * get physical address of control page now 76 + * this is impossible after page table switch 77 + */ 38 78 movq PTR(PA_CONTROL_PAGE)(%rsi), %r8 39 79 40 80 /* get physical address of page table now too */ 41 - movq PTR(PA_TABLE_PAGE)(%rsi), %rcx 81 + movq PTR(PA_TABLE_PAGE)(%rsi), %r9 82 + 83 + /* get physical address of swap page now */ 84 + movq PTR(PA_SWAP_PAGE)(%rsi), %r10 85 + 86 + /* save some information for jumping back */ 87 + movq %r9, CP_PA_TABLE_PAGE(%r11) 88 + movq %r10, 
CP_PA_SWAP_PAGE(%r11) 89 + movq %rdi, CP_PA_BACKUP_PAGES_MAP(%r11) 42 90 43 91 /* Switch to the identity mapped page tables */ 44 - movq %rcx, %cr3 92 + movq %r9, %cr3 45 93 46 94 /* setup a new stack at the end of the physical control page */ 47 95 lea PAGE_SIZE(%r8), %rsp ··· 103 55 /* store the start address on the stack */ 104 56 pushq %rdx 105 57 106 - /* Set cr0 to a known state: 58 + /* 59 + * Set cr0 to a known state: 107 60 * - Paging enabled 108 61 * - Alignment check disabled 109 62 * - Write protect disabled ··· 117 68 orl $(X86_CR0_PG | X86_CR0_PE), %eax 118 69 movq %rax, %cr0 119 70 120 - /* Set cr4 to a known state: 71 + /* 72 + * Set cr4 to a known state: 121 73 * - physical address extension enabled 122 74 */ 123 75 movq $X86_CR4_PAE, %rax ··· 128 78 1: 129 79 130 80 /* Flush the TLB (needed?) */ 131 - movq %rcx, %cr3 81 + movq %r9, %cr3 82 + 83 + movq %rcx, %r11 84 + call swap_pages 85 + 86 + /* 87 + * To be certain of avoiding problems with self-modifying code 88 + * I need to execute a serializing instruction here. 89 + * So I flush the TLB by reloading %cr3 here, it's handy, 90 + * and not processor dependent. 
91 + */ 92 + movq %cr3, %rax 93 + movq %rax, %cr3 94 + 95 + /* 96 + * set all of the registers to known values 97 + * leave %rsp alone 98 + */ 99 + 100 + testq %r11, %r11 101 + jnz 1f 102 + xorq %rax, %rax 103 + xorq %rbx, %rbx 104 + xorq %rcx, %rcx 105 + xorq %rdx, %rdx 106 + xorq %rsi, %rsi 107 + xorq %rdi, %rdi 108 + xorq %rbp, %rbp 109 + xorq %r8, %r8 110 + xorq %r9, %r9 111 + xorq %r10, %r9 112 + xorq %r11, %r11 113 + xorq %r12, %r12 114 + xorq %r13, %r13 115 + xorq %r14, %r14 116 + xorq %r15, %r15 117 + 118 + ret 119 + 120 + 1: 121 + popq %rdx 122 + leaq PAGE_SIZE(%r10), %rsp 123 + call *%rdx 124 + 125 + /* get the re-entry point of the peer system */ 126 + movq 0(%rsp), %rbp 127 + call 1f 128 + 1: 129 + popq %r8 130 + subq $(1b - relocate_kernel), %r8 131 + movq CP_PA_SWAP_PAGE(%r8), %r10 132 + movq CP_PA_BACKUP_PAGES_MAP(%r8), %rdi 133 + movq CP_PA_TABLE_PAGE(%r8), %rax 134 + movq %rax, %cr3 135 + lea PAGE_SIZE(%r8), %rsp 136 + call swap_pages 137 + movq $virtual_mapped, %rax 138 + pushq %rax 139 + ret 140 + 141 + virtual_mapped: 142 + movq RSP(%r8), %rsp 143 + movq CR4(%r8), %rax 144 + movq %rax, %cr4 145 + movq CR3(%r8), %rax 146 + movq CR0(%r8), %r8 147 + movq %rax, %cr3 148 + movq %r8, %cr0 149 + movq %rbp, %rax 150 + 151 + popf 152 + popq %r15 153 + popq %r14 154 + popq %r13 155 + popq %r12 156 + popq %rbp 157 + popq %rbx 158 + ret 132 159 133 160 /* Do the copies */ 161 + swap_pages: 134 162 movq %rdi, %rcx /* Put the page_list in %rcx */ 135 163 xorq %rdi, %rdi 136 164 xorq %rsi, %rsi ··· 240 112 movq %rcx, %rsi /* For ever source page do a copy */ 241 113 andq $0xfffffffffffff000, %rsi 242 114 115 + movq %rdi, %rdx 116 + movq %rsi, %rax 117 + 118 + movq %r10, %rdi 243 119 movq $512, %rcx 244 120 rep ; movsq 121 + 122 + movq %rax, %rdi 123 + movq %rdx, %rsi 124 + movq $512, %rcx 125 + rep ; movsq 126 + 127 + movq %rdx, %rdi 128 + movq %r10, %rsi 129 + movq $512, %rcx 130 + rep ; movsq 131 + 132 + lea PAGE_SIZE(%rax), %rsi 245 133 jmp 0b 246 134 3: 
247 - 248 - /* To be certain of avoiding problems with self-modifying code 249 - * I need to execute a serializing instruction here. 250 - * So I flush the TLB by reloading %cr3 here, it's handy, 251 - * and not processor dependent. 252 - */ 253 - movq %cr3, %rax 254 - movq %rax, %cr3 255 - 256 - /* set all of the registers to known values */ 257 - /* leave %rsp alone */ 258 - 259 - xorq %rax, %rax 260 - xorq %rbx, %rbx 261 - xorq %rcx, %rcx 262 - xorq %rdx, %rdx 263 - xorq %rsi, %rsi 264 - xorq %rdi, %rdi 265 - xorq %rbp, %rbp 266 - xorq %r8, %r8 267 - xorq %r9, %r9 268 - xorq %r10, %r9 269 - xorq %r11, %r11 270 - xorq %r12, %r12 271 - xorq %r13, %r13 272 - xorq %r14, %r14 273 - xorq %r15, %r15 274 - 275 135 ret 136 + 137 + .globl kexec_control_code_size 138 + .set kexec_control_code_size, . - relocate_kernel
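The new swap_pages routine is what lets kexec jump preserve both kernel images: each indirection-page entry is exchanged with its destination through the scratch page held in %r10, three rep movsq copies per page. The same exchange in C, with memcpy standing in for rep movsq and the copy order matching the assembly:

```c
#include <assert.h>
#include <string.h>

#define PAGE_SIZE 4096

/* Exchange one source/destination page pair via a scratch page, in
 * the same order as swap_pages: stash the source, back up the old
 * destination into the source slot, then install the stashed copy. */
static void swap_page(unsigned char *dst, unsigned char *src,
		      unsigned char *scratch)
{
	memcpy(scratch, src, PAGE_SIZE);	/* scratch <- new page */
	memcpy(src, dst, PAGE_SIZE);		/* source  <- old destination */
	memcpy(dst, scratch, PAGE_SIZE);	/* dest    <- new page */
}
```

After the call neither image is lost, so the relocated code can later run the same loop to swap everything back when jumping to the original kernel.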
+6 -3
arch/x86/kernel/setup.c
··· 202 202 #endif 203 203 204 204 #else 205 - struct cpuinfo_x86 boot_cpu_data __read_mostly; 205 + struct cpuinfo_x86 boot_cpu_data __read_mostly = { 206 + .x86_phys_bits = MAX_PHYSMEM_BITS, 207 + }; 206 208 EXPORT_SYMBOL(boot_cpu_data); 207 209 #endif 208 210 ··· 772 770 773 771 finish_e820_parsing(); 774 772 773 + if (efi_enabled) 774 + efi_init(); 775 + 775 776 dmi_scan_machine(); 776 777 777 778 dmi_check_system(bad_bios_dmi_table); ··· 794 789 insert_resource(&iomem_resource, &data_resource); 795 790 insert_resource(&iomem_resource, &bss_resource); 796 791 797 - if (efi_enabled) 798 - efi_init(); 799 792 800 793 #ifdef CONFIG_X86_32 801 794 if (ppro_with_ram_bug()) {
+370 -28
arch/x86/kernel/setup_percpu.c
··· 7 7 #include <linux/crash_dump.h> 8 8 #include <linux/smp.h> 9 9 #include <linux/topology.h> 10 + #include <linux/pfn.h> 10 11 #include <asm/sections.h> 11 12 #include <asm/processor.h> 12 13 #include <asm/setup.h> ··· 42 41 }; 43 42 EXPORT_SYMBOL(__per_cpu_offset); 44 43 44 + /* 45 + * On x86_64 symbols referenced from code should be reachable using 46 + * 32bit relocations. Reserve space for static percpu variables in 47 + * modules so that they are always served from the first chunk which 48 + * is located at the percpu segment base. On x86_32, anything can 49 + * address anywhere. No need to reserve space in the first chunk. 50 + */ 51 + #ifdef CONFIG_X86_64 52 + #define PERCPU_FIRST_CHUNK_RESERVE PERCPU_MODULE_RESERVE 53 + #else 54 + #define PERCPU_FIRST_CHUNK_RESERVE 0 55 + #endif 56 + 57 + /** 58 + * pcpu_need_numa - determine percpu allocation needs to consider NUMA 59 + * 60 + * If NUMA is not configured or there is only one NUMA node available, 61 + * there is no reason to consider NUMA. This function determines 62 + * whether percpu allocation should consider NUMA or not. 63 + * 64 + * RETURNS: 65 + * true if NUMA should be considered; otherwise, false. 66 + */ 67 + static bool __init pcpu_need_numa(void) 68 + { 69 + #ifdef CONFIG_NEED_MULTIPLE_NODES 70 + pg_data_t *last = NULL; 71 + unsigned int cpu; 72 + 73 + for_each_possible_cpu(cpu) { 74 + int node = early_cpu_to_node(cpu); 75 + 76 + if (node_online(node) && NODE_DATA(node) && 77 + last && last != NODE_DATA(node)) 78 + return true; 79 + 80 + last = NODE_DATA(node); 81 + } 82 + #endif 83 + return false; 84 + } 85 + 86 + /** 87 + * pcpu_alloc_bootmem - NUMA friendly alloc_bootmem wrapper for percpu 88 + * @cpu: cpu to allocate for 89 + * @size: size allocation in bytes 90 + * @align: alignment 91 + * 92 + * Allocate @size bytes aligned at @align for cpu @cpu. This wrapper 93 + * does the right thing for NUMA regardless of the current 94 + * configuration. 
95 + * 96 + * RETURNS: 97 + * Pointer to the allocated area on success, NULL on failure. 98 + */ 99 + static void * __init pcpu_alloc_bootmem(unsigned int cpu, unsigned long size, 100 + unsigned long align) 101 + { 102 + const unsigned long goal = __pa(MAX_DMA_ADDRESS); 103 + #ifdef CONFIG_NEED_MULTIPLE_NODES 104 + int node = early_cpu_to_node(cpu); 105 + void *ptr; 106 + 107 + if (!node_online(node) || !NODE_DATA(node)) { 108 + ptr = __alloc_bootmem_nopanic(size, align, goal); 109 + pr_info("cpu %d has no node %d or node-local memory\n", 110 + cpu, node); 111 + pr_debug("per cpu data for cpu%d %lu bytes at %016lx\n", 112 + cpu, size, __pa(ptr)); 113 + } else { 114 + ptr = __alloc_bootmem_node_nopanic(NODE_DATA(node), 115 + size, align, goal); 116 + pr_debug("per cpu data for cpu%d %lu bytes on node%d at " 117 + "%016lx\n", cpu, size, node, __pa(ptr)); 118 + } 119 + return ptr; 120 + #else 121 + return __alloc_bootmem_nopanic(size, align, goal); 122 + #endif 123 + } 124 + 125 + /* 126 + * Remap allocator 127 + * 128 + * This allocator uses PMD page as unit. A PMD page is allocated for 129 + * each cpu and each is remapped into vmalloc area using PMD mapping. 130 + * As PMD page is quite large, only part of it is used for the first 131 + * chunk. Unused part is returned to the bootmem allocator. 132 + * 133 + * So, the PMD pages are mapped twice - once to the physical mapping 134 + * and to the vmalloc area for the first percpu chunk. The double 135 + * mapping does add one more PMD TLB entry pressure but still is much 136 + * better than only using 4k mappings while still being NUMA friendly. 
137 + */ 138 + #ifdef CONFIG_NEED_MULTIPLE_NODES 139 + static size_t pcpur_size __initdata; 140 + static void **pcpur_ptrs __initdata; 141 + 142 + static struct page * __init pcpur_get_page(unsigned int cpu, int pageno) 143 + { 144 + size_t off = (size_t)pageno << PAGE_SHIFT; 145 + 146 + if (off >= pcpur_size) 147 + return NULL; 148 + 149 + return virt_to_page(pcpur_ptrs[cpu] + off); 150 + } 151 + 152 + static ssize_t __init setup_pcpu_remap(size_t static_size) 153 + { 154 + static struct vm_struct vm; 155 + pg_data_t *last; 156 + size_t ptrs_size, dyn_size; 157 + unsigned int cpu; 158 + ssize_t ret; 159 + 160 + /* 161 + * If large page isn't supported, there's no benefit in doing 162 + * this. Also, on non-NUMA, embedding is better. 163 + */ 164 + if (!cpu_has_pse || pcpu_need_numa()) 165 + return -EINVAL; 166 + 167 + last = NULL; 168 + for_each_possible_cpu(cpu) { 169 + int node = early_cpu_to_node(cpu); 170 + 171 + if (node_online(node) && NODE_DATA(node) && 172 + last && last != NODE_DATA(node)) 173 + goto proceed; 174 + 175 + last = NODE_DATA(node); 176 + } 177 + return -EINVAL; 178 + 179 + proceed: 180 + /* 181 + * Currently supports only single page. Supporting multiple 182 + * pages won't be too difficult if it ever becomes necessary. 
183 + */ 184 + pcpur_size = PFN_ALIGN(static_size + PERCPU_MODULE_RESERVE + 185 + PERCPU_DYNAMIC_RESERVE); 186 + if (pcpur_size > PMD_SIZE) { 187 + pr_warning("PERCPU: static data is larger than large page, " 188 + "can't use large page\n"); 189 + return -EINVAL; 190 + } 191 + dyn_size = pcpur_size - static_size - PERCPU_FIRST_CHUNK_RESERVE; 192 + 193 + /* allocate pointer array and alloc large pages */ 194 + ptrs_size = PFN_ALIGN(num_possible_cpus() * sizeof(pcpur_ptrs[0])); 195 + pcpur_ptrs = alloc_bootmem(ptrs_size); 196 + 197 + for_each_possible_cpu(cpu) { 198 + pcpur_ptrs[cpu] = pcpu_alloc_bootmem(cpu, PMD_SIZE, PMD_SIZE); 199 + if (!pcpur_ptrs[cpu]) 200 + goto enomem; 201 + 202 + /* 203 + * Only use pcpur_size bytes and give back the rest. 204 + * 205 + * Ingo: The 2MB up-rounding bootmem is needed to make 206 + * sure the partial 2MB page is still fully RAM - it's 207 + * not well-specified to have a PAT-incompatible area 208 + * (unmapped RAM, device memory, etc.) in that hole. 209 + */ 210 + free_bootmem(__pa(pcpur_ptrs[cpu] + pcpur_size), 211 + PMD_SIZE - pcpur_size); 212 + 213 + memcpy(pcpur_ptrs[cpu], __per_cpu_load, static_size); 214 + } 215 + 216 + /* allocate address and map */ 217 + vm.flags = VM_ALLOC; 218 + vm.size = num_possible_cpus() * PMD_SIZE; 219 + vm_area_register_early(&vm, PMD_SIZE); 220 + 221 + for_each_possible_cpu(cpu) { 222 + pmd_t *pmd; 223 + 224 + pmd = populate_extra_pmd((unsigned long)vm.addr 225 + + cpu * PMD_SIZE); 226 + set_pmd(pmd, pfn_pmd(page_to_pfn(virt_to_page(pcpur_ptrs[cpu])), 227 + PAGE_KERNEL_LARGE)); 228 + } 229 + 230 + /* we're ready, commit */ 231 + pr_info("PERCPU: Remapped at %p with large pages, static data " 232 + "%zu bytes\n", vm.addr, static_size); 233 + 234 + ret = pcpu_setup_first_chunk(pcpur_get_page, static_size, 235 + PERCPU_FIRST_CHUNK_RESERVE, 236 + PMD_SIZE, dyn_size, vm.addr, NULL); 237 + goto out_free_ar; 238 + 239 + enomem: 240 + for_each_possible_cpu(cpu) 241 + if (pcpur_ptrs[cpu]) 242 + 
free_bootmem(__pa(pcpur_ptrs[cpu]), PMD_SIZE); 243 + ret = -ENOMEM; 244 + out_free_ar: 245 + free_bootmem(__pa(pcpur_ptrs), ptrs_size); 246 + return ret; 247 + } 248 + #else 249 + static ssize_t __init setup_pcpu_remap(size_t static_size) 250 + { 251 + return -EINVAL; 252 + } 253 + #endif 254 + 255 + /* 256 + * Embedding allocator 257 + * 258 + * The first chunk is sized to just contain the static area plus 259 + * module and dynamic reserves, and allocated as a contiguous area 260 + * using bootmem allocator and used as-is without being mapped into 261 + * vmalloc area. This enables the first chunk to piggy back on the 262 + * linear physical PMD mapping and doesn't add any additional pressure 263 + * to TLB. Note that if the needed size is smaller than the minimum 264 + * unit size, the leftover is returned to the bootmem allocator. 265 + */ 266 + static void *pcpue_ptr __initdata; 267 + static size_t pcpue_size __initdata; 268 + static size_t pcpue_unit_size __initdata; 269 + 270 + static struct page * __init pcpue_get_page(unsigned int cpu, int pageno) 271 + { 272 + size_t off = (size_t)pageno << PAGE_SHIFT; 273 + 274 + if (off >= pcpue_size) 275 + return NULL; 276 + 277 + return virt_to_page(pcpue_ptr + cpu * pcpue_unit_size + off); 278 + } 279 + 280 + static ssize_t __init setup_pcpu_embed(size_t static_size) 281 + { 282 + unsigned int cpu; 283 + size_t dyn_size; 284 + 285 + /* 286 + * If large page isn't supported, there's no benefit in doing 287 + * this. Also, embedding allocation doesn't play well with 288 + * NUMA. 
289 + */ 290 + if (!cpu_has_pse || pcpu_need_numa()) 291 + return -EINVAL; 292 + 293 + /* allocate and copy */ 294 + pcpue_size = PFN_ALIGN(static_size + PERCPU_MODULE_RESERVE + 295 + PERCPU_DYNAMIC_RESERVE); 296 + pcpue_unit_size = max_t(size_t, pcpue_size, PCPU_MIN_UNIT_SIZE); 297 + dyn_size = pcpue_size - static_size - PERCPU_FIRST_CHUNK_RESERVE; 298 + 299 + pcpue_ptr = pcpu_alloc_bootmem(0, num_possible_cpus() * pcpue_unit_size, 300 + PAGE_SIZE); 301 + if (!pcpue_ptr) 302 + return -ENOMEM; 303 + 304 + for_each_possible_cpu(cpu) { 305 + void *ptr = pcpue_ptr + cpu * pcpue_unit_size; 306 + 307 + free_bootmem(__pa(ptr + pcpue_size), 308 + pcpue_unit_size - pcpue_size); 309 + memcpy(ptr, __per_cpu_load, static_size); 310 + } 311 + 312 + /* we're ready, commit */ 313 + pr_info("PERCPU: Embedded %zu pages at %p, static data %zu bytes\n", 314 + pcpue_size >> PAGE_SHIFT, pcpue_ptr, static_size); 315 + 316 + return pcpu_setup_first_chunk(pcpue_get_page, static_size, 317 + PERCPU_FIRST_CHUNK_RESERVE, 318 + pcpue_unit_size, dyn_size, 319 + pcpue_ptr, NULL); 320 + } 321 + 322 + /* 323 + * 4k page allocator 324 + * 325 + * This is the basic allocator. Static percpu area is allocated 326 + * page-by-page and most of initialization is done by the generic 327 + * setup function. 
328 + */ 329 + static struct page **pcpu4k_pages __initdata; 330 + static int pcpu4k_nr_static_pages __initdata; 331 + 332 + static struct page * __init pcpu4k_get_page(unsigned int cpu, int pageno) 333 + { 334 + if (pageno < pcpu4k_nr_static_pages) 335 + return pcpu4k_pages[cpu * pcpu4k_nr_static_pages + pageno]; 336 + return NULL; 337 + } 338 + 339 + static void __init pcpu4k_populate_pte(unsigned long addr) 340 + { 341 + populate_extra_pte(addr); 342 + } 343 + 344 + static ssize_t __init setup_pcpu_4k(size_t static_size) 345 + { 346 + size_t pages_size; 347 + unsigned int cpu; 348 + int i, j; 349 + ssize_t ret; 350 + 351 + pcpu4k_nr_static_pages = PFN_UP(static_size); 352 + 353 + /* unaligned allocations can't be freed, round up to page size */ 354 + pages_size = PFN_ALIGN(pcpu4k_nr_static_pages * num_possible_cpus() 355 + * sizeof(pcpu4k_pages[0])); 356 + pcpu4k_pages = alloc_bootmem(pages_size); 357 + 358 + /* allocate and copy */ 359 + j = 0; 360 + for_each_possible_cpu(cpu) 361 + for (i = 0; i < pcpu4k_nr_static_pages; i++) { 362 + void *ptr; 363 + 364 + ptr = pcpu_alloc_bootmem(cpu, PAGE_SIZE, PAGE_SIZE); 365 + if (!ptr) 366 + goto enomem; 367 + 368 + memcpy(ptr, __per_cpu_load + i * PAGE_SIZE, PAGE_SIZE); 369 + pcpu4k_pages[j++] = virt_to_page(ptr); 370 + } 371 + 372 + /* we're ready, commit */ 373 + pr_info("PERCPU: Allocated %d 4k pages, static data %zu bytes\n", 374 + pcpu4k_nr_static_pages, static_size); 375 + 376 + ret = pcpu_setup_first_chunk(pcpu4k_get_page, static_size, 377 + PERCPU_FIRST_CHUNK_RESERVE, -1, -1, NULL, 378 + pcpu4k_populate_pte); 379 + goto out_free_ar; 380 + 381 + enomem: 382 + while (--j >= 0) 383 + free_bootmem(__pa(page_address(pcpu4k_pages[j])), PAGE_SIZE); 384 + ret = -ENOMEM; 385 + out_free_ar: 386 + free_bootmem(__pa(pcpu4k_pages), pages_size); 387 + return ret; 388 + } 389 + 45 390 static inline void setup_percpu_segment(int cpu) 46 391 { 47 392 #ifdef CONFIG_X86_32 ··· 408 61 */ 409 62 void __init setup_per_cpu_areas(void) 
410 63 { 411 - ssize_t size; 412 - char *ptr; 413 - int cpu; 414 - 415 - /* Copy section for each CPU (we discard the original) */ 416 - size = roundup(PERCPU_ENOUGH_ROOM, PAGE_SIZE); 64 + size_t static_size = __per_cpu_end - __per_cpu_start; 65 + unsigned int cpu; 66 + unsigned long delta; 67 + size_t pcpu_unit_size; 68 + ssize_t ret; 417 69 418 70 pr_info("NR_CPUS:%d nr_cpumask_bits:%d nr_cpu_ids:%d nr_node_ids:%d\n", 419 71 NR_CPUS, nr_cpumask_bits, nr_cpu_ids, nr_node_ids); 420 72 421 - pr_info("PERCPU: Allocating %zd bytes of per cpu data\n", size); 73 + /* 74 + * Allocate percpu area. If PSE is supported, try to make use 75 + * of large page mappings. Please read comments on top of 76 + * each allocator for details. 77 + */ 78 + ret = setup_pcpu_remap(static_size); 79 + if (ret < 0) 80 + ret = setup_pcpu_embed(static_size); 81 + if (ret < 0) 82 + ret = setup_pcpu_4k(static_size); 83 + if (ret < 0) 84 + panic("cannot allocate static percpu area (%zu bytes, err=%zd)", 85 + static_size, ret); 422 86 87 + pcpu_unit_size = ret; 88 + 89 + /* alrighty, percpu areas up and running */ 90 + delta = (unsigned long)pcpu_base_addr - (unsigned long)__per_cpu_start; 423 91 for_each_possible_cpu(cpu) { 424 - #ifndef CONFIG_NEED_MULTIPLE_NODES 425 - ptr = alloc_bootmem_pages(size); 426 - #else 427 - int node = early_cpu_to_node(cpu); 428 - if (!node_online(node) || !NODE_DATA(node)) { 429 - ptr = alloc_bootmem_pages(size); 430 - pr_info("cpu %d has no node %d or node-local memory\n", 431 - cpu, node); 432 - pr_debug("per cpu data for cpu%d at %016lx\n", 433 - cpu, __pa(ptr)); 434 - } else { 435 - ptr = alloc_bootmem_pages_node(NODE_DATA(node), size); 436 - pr_debug("per cpu data for cpu%d on node%d at %016lx\n", 437 - cpu, node, __pa(ptr)); 438 - } 439 - #endif 440 - 441 - memcpy(ptr, __per_cpu_load, __per_cpu_end - __per_cpu_start); 442 - per_cpu_offset(cpu) = ptr - __per_cpu_start; 92 + per_cpu_offset(cpu) = delta + cpu * pcpu_unit_size; 443 93 per_cpu(this_cpu_off, cpu) = 
per_cpu_offset(cpu); 444 94 per_cpu(cpu_number, cpu) = cpu; 445 95 setup_percpu_segment(cpu); ··· 469 125 */ 470 126 if (cpu == boot_cpu_id) 471 127 switch_to_new_gdt(cpu); 472 - 473 - DBG("PERCPU: cpu %4d %p\n", cpu, ptr); 474 128 } 475 129 476 130 /* indicate the early static arrays will soon be gone */
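The new setup_per_cpu_areas() above tries the three first-chunk allocators in order of preference (remap, then embed, then 4k) and panics only if all three fail. A minimal stand-alone sketch of that fallback chain, with hypothetical stub allocators in place of the real setup_pcpu_remap()/setup_pcpu_embed()/setup_pcpu_4k() (each returns the unit size on success or a negative errno on failure):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for the three allocators in the hunk above.
 * Here remap and embed fail as they would on a non-PSE or NUMA box,
 * and the 4k allocator succeeds with some unit size. */
static long try_remap(size_t static_size) { (void)static_size; return -22; /* -EINVAL */ }
static long try_embed(size_t static_size) { (void)static_size; return -22; /* -EINVAL */ }
static long try_4k(size_t static_size)    { return (long)(static_size + 4096); }

/* Mirrors the fallback chain in setup_per_cpu_areas(): each allocator
 * is tried in order and the first non-negative return wins. */
static long pick_first_chunk(size_t static_size)
{
	long ret = try_remap(static_size);

	if (ret < 0)
		ret = try_embed(static_size);
	if (ret < 0)
		ret = try_4k(static_size);
	return ret;	/* unit size, or negative errno if even 4k failed */
}
```

The design point is that the preferred allocators are allowed to bail early (e.g. no PSE, NUMA layout unsuitable) and the 4k page-by-page path is always a valid last resort.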
-78
arch/x86/kernel/smpboot.c
··· 114 114 115 115 atomic_t init_deasserted; 116 116 117 - 118 - /* Set if we find a B stepping CPU */ 119 - static int __cpuinitdata smp_b_stepping; 120 - 121 117 #if defined(CONFIG_NUMA) && defined(CONFIG_X86_32) 122 118 123 119 /* which logical CPUs are on which nodes */ ··· 267 271 cpumask_set_cpu(cpuid, cpu_callin_mask); 268 272 } 269 273 270 - static int __cpuinitdata unsafe_smp; 271 - 272 274 /* 273 275 * Activate a secondary processor. 274 276 */ ··· 334 340 cpu_idle(); 335 341 } 336 342 337 - static void __cpuinit smp_apply_quirks(struct cpuinfo_x86 *c) 338 - { 339 - /* 340 - * Mask B, Pentium, but not Pentium MMX 341 - */ 342 - if (c->x86_vendor == X86_VENDOR_INTEL && 343 - c->x86 == 5 && 344 - c->x86_mask >= 1 && c->x86_mask <= 4 && 345 - c->x86_model <= 3) 346 - /* 347 - * Remember we have B step Pentia with bugs 348 - */ 349 - smp_b_stepping = 1; 350 - 351 - /* 352 - * Certain Athlons might work (for various values of 'work') in SMP 353 - * but they are not certified as MP capable. 354 - */ 355 - if ((c->x86_vendor == X86_VENDOR_AMD) && (c->x86 == 6)) { 356 - 357 - if (num_possible_cpus() == 1) 358 - goto valid_k7; 359 - 360 - /* Athlon 660/661 is valid. */ 361 - if ((c->x86_model == 6) && ((c->x86_mask == 0) || 362 - (c->x86_mask == 1))) 363 - goto valid_k7; 364 - 365 - /* Duron 670 is valid */ 366 - if ((c->x86_model == 7) && (c->x86_mask == 0)) 367 - goto valid_k7; 368 - 369 - /* 370 - * Athlon 662, Duron 671, and Athlon >model 7 have capability 371 - * bit. It's worth noting that the A5 stepping (662) of some 372 - * Athlon XP's have the MP bit set. 373 - * See http://www.heise.de/newsticker/data/jow-18.10.01-000 for 374 - * more. 375 - */ 376 - if (((c->x86_model == 6) && (c->x86_mask >= 2)) || 377 - ((c->x86_model == 7) && (c->x86_mask >= 1)) || 378 - (c->x86_model > 7)) 379 - if (cpu_has_mp) 380 - goto valid_k7; 381 - 382 - /* If we get here, not a certified SMP capable AMD system. 
*/ 383 - unsafe_smp = 1; 384 - } 385 - 386 - valid_k7: 387 - ; 388 - } 389 - 390 - static void __cpuinit smp_checks(void) 391 - { 392 - if (smp_b_stepping) 393 - printk(KERN_WARNING "WARNING: SMP operation may be unreliable" 394 - "with B stepping processors.\n"); 395 - 396 - /* 397 - * Don't taint if we are running SMP kernel on a single non-MP 398 - * approved Athlon 399 - */ 400 - if (unsafe_smp && num_online_cpus() > 1) { 401 - printk(KERN_INFO "WARNING: This combination of AMD" 402 - "processors is not suitable for SMP.\n"); 403 - add_taint(TAINT_UNSAFE_SMP); 404 - } 405 - } 406 - 407 343 /* 408 344 * The bootstrap kernel entry code has set these up. Save them for 409 345 * a given CPU ··· 347 423 c->cpu_index = id; 348 424 if (id != 0) 349 425 identify_secondary_cpu(c); 350 - smp_apply_quirks(c); 351 426 } 352 427 353 428 ··· 1116 1193 pr_debug("Boot done.\n"); 1117 1194 1118 1195 impress_friends(); 1119 - smp_checks(); 1120 1196 #ifdef CONFIG_X86_IO_APIC 1121 1197 setup_ioapic_dest(); 1122 1198 #endif
-2
arch/x86/kernel/tlb_uv.c
··· 314 314 int locals = 0; 315 315 struct bau_desc *bau_desc; 316 316 317 - WARN_ON(!in_atomic()); 318 - 319 317 cpumask_andnot(flush_mask, cpumask, cpumask_of(cpu)); 320 318 321 319 uv_cpu = uv_blade_processor_id();
+393
arch/x86/kernel/uv_time.c
··· 1 + /* 2 + * SGI RTC clock/timer routines. 3 + * 4 + * This program is free software; you can redistribute it and/or modify 5 + * it under the terms of the GNU General Public License as published by 6 + * the Free Software Foundation; either version 2 of the License, or 7 + * (at your option) any later version. 8 + * 9 + * This program is distributed in the hope that it will be useful, 10 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 11 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 12 + * GNU General Public License for more details. 13 + * 14 + * You should have received a copy of the GNU General Public License 15 + * along with this program; if not, write to the Free Software 16 + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 17 + * 18 + * Copyright (c) 2009 Silicon Graphics, Inc. All Rights Reserved. 19 + * Copyright (c) Dimitri Sivanich 20 + */ 21 + #include <linux/clockchips.h> 22 + 23 + #include <asm/uv/uv_mmrs.h> 24 + #include <asm/uv/uv_hub.h> 25 + #include <asm/uv/bios.h> 26 + #include <asm/uv/uv.h> 27 + #include <asm/apic.h> 28 + #include <asm/cpu.h> 29 + 30 + #define RTC_NAME "sgi_rtc" 31 + 32 + static cycle_t uv_read_rtc(void); 33 + static int uv_rtc_next_event(unsigned long, struct clock_event_device *); 34 + static void uv_rtc_timer_setup(enum clock_event_mode, 35 + struct clock_event_device *); 36 + 37 + static struct clocksource clocksource_uv = { 38 + .name = RTC_NAME, 39 + .rating = 400, 40 + .read = uv_read_rtc, 41 + .mask = (cycle_t)UVH_RTC_REAL_TIME_CLOCK_MASK, 42 + .shift = 10, 43 + .flags = CLOCK_SOURCE_IS_CONTINUOUS, 44 + }; 45 + 46 + static struct clock_event_device clock_event_device_uv = { 47 + .name = RTC_NAME, 48 + .features = CLOCK_EVT_FEAT_ONESHOT, 49 + .shift = 20, 50 + .rating = 400, 51 + .irq = -1, 52 + .set_next_event = uv_rtc_next_event, 53 + .set_mode = uv_rtc_timer_setup, 54 + .event_handler = NULL, 55 + }; 56 + 57 + static DEFINE_PER_CPU(struct 
clock_event_device, cpu_ced); 58 + 59 + /* There is one of these allocated per node */ 60 + struct uv_rtc_timer_head { 61 + spinlock_t lock; 62 + /* next cpu waiting for timer, local node relative: */ 63 + int next_cpu; 64 + /* number of cpus on this node: */ 65 + int ncpus; 66 + struct { 67 + int lcpu; /* systemwide logical cpu number */ 68 + u64 expires; /* next timer expiration for this cpu */ 69 + } cpu[1]; 70 + }; 71 + 72 + /* 73 + * Access to uv_rtc_timer_head via blade id. 74 + */ 75 + static struct uv_rtc_timer_head **blade_info __read_mostly; 76 + 77 + static int uv_rtc_enable; 78 + 79 + /* 80 + * Hardware interface routines 81 + */ 82 + 83 + /* Send IPIs to another node */ 84 + static void uv_rtc_send_IPI(int cpu) 85 + { 86 + unsigned long apicid, val; 87 + int pnode; 88 + 89 + apicid = cpu_physical_id(cpu); 90 + pnode = uv_apicid_to_pnode(apicid); 91 + val = (1UL << UVH_IPI_INT_SEND_SHFT) | 92 + (apicid << UVH_IPI_INT_APIC_ID_SHFT) | 93 + (GENERIC_INTERRUPT_VECTOR << UVH_IPI_INT_VECTOR_SHFT); 94 + 95 + uv_write_global_mmr64(pnode, UVH_IPI_INT, val); 96 + } 97 + 98 + /* Check for an RTC interrupt pending */ 99 + static int uv_intr_pending(int pnode) 100 + { 101 + return uv_read_global_mmr64(pnode, UVH_EVENT_OCCURRED0) & 102 + UVH_EVENT_OCCURRED0_RTC1_MASK; 103 + } 104 + 105 + /* Setup interrupt and return non-zero if early expiration occurred. 
*/ 106 + static int uv_setup_intr(int cpu, u64 expires) 107 + { 108 + u64 val; 109 + int pnode = uv_cpu_to_pnode(cpu); 110 + 111 + uv_write_global_mmr64(pnode, UVH_RTC1_INT_CONFIG, 112 + UVH_RTC1_INT_CONFIG_M_MASK); 113 + uv_write_global_mmr64(pnode, UVH_INT_CMPB, -1L); 114 + 115 + uv_write_global_mmr64(pnode, UVH_EVENT_OCCURRED0_ALIAS, 116 + UVH_EVENT_OCCURRED0_RTC1_MASK); 117 + 118 + val = (GENERIC_INTERRUPT_VECTOR << UVH_RTC1_INT_CONFIG_VECTOR_SHFT) | 119 + ((u64)cpu_physical_id(cpu) << UVH_RTC1_INT_CONFIG_APIC_ID_SHFT); 120 + 121 + /* Set configuration */ 122 + uv_write_global_mmr64(pnode, UVH_RTC1_INT_CONFIG, val); 123 + /* Initialize comparator value */ 124 + uv_write_global_mmr64(pnode, UVH_INT_CMPB, expires); 125 + 126 + return (expires < uv_read_rtc() && !uv_intr_pending(pnode)); 127 + } 128 + 129 + /* 130 + * Per-cpu timer tracking routines 131 + */ 132 + 133 + static __init void uv_rtc_deallocate_timers(void) 134 + { 135 + int bid; 136 + 137 + for_each_possible_blade(bid) { 138 + kfree(blade_info[bid]); 139 + } 140 + kfree(blade_info); 141 + } 142 + 143 + /* Allocate per-node list of cpu timer expiration times. 
*/ 144 + static __init int uv_rtc_allocate_timers(void) 145 + { 146 + int cpu; 147 + 148 + blade_info = kmalloc(uv_possible_blades * sizeof(void *), GFP_KERNEL); 149 + if (!blade_info) 150 + return -ENOMEM; 151 + memset(blade_info, 0, uv_possible_blades * sizeof(void *)); 152 + 153 + for_each_present_cpu(cpu) { 154 + int nid = cpu_to_node(cpu); 155 + int bid = uv_cpu_to_blade_id(cpu); 156 + int bcpu = uv_cpu_hub_info(cpu)->blade_processor_id; 157 + struct uv_rtc_timer_head *head = blade_info[bid]; 158 + 159 + if (!head) { 160 + head = kmalloc_node(sizeof(struct uv_rtc_timer_head) + 161 + (uv_blade_nr_possible_cpus(bid) * 162 + 2 * sizeof(u64)), 163 + GFP_KERNEL, nid); 164 + if (!head) { 165 + uv_rtc_deallocate_timers(); 166 + return -ENOMEM; 167 + } 168 + spin_lock_init(&head->lock); 169 + head->ncpus = uv_blade_nr_possible_cpus(bid); 170 + head->next_cpu = -1; 171 + blade_info[bid] = head; 172 + } 173 + 174 + head->cpu[bcpu].lcpu = cpu; 175 + head->cpu[bcpu].expires = ULLONG_MAX; 176 + } 177 + 178 + return 0; 179 + } 180 + 181 + /* Find and set the next expiring timer. */ 182 + static void uv_rtc_find_next_timer(struct uv_rtc_timer_head *head, int pnode) 183 + { 184 + u64 lowest = ULLONG_MAX; 185 + int c, bcpu = -1; 186 + 187 + head->next_cpu = -1; 188 + for (c = 0; c < head->ncpus; c++) { 189 + u64 exp = head->cpu[c].expires; 190 + if (exp < lowest) { 191 + bcpu = c; 192 + lowest = exp; 193 + } 194 + } 195 + if (bcpu >= 0) { 196 + head->next_cpu = bcpu; 197 + c = head->cpu[bcpu].lcpu; 198 + if (uv_setup_intr(c, lowest)) 199 + /* If we didn't set it up in time, trigger */ 200 + uv_rtc_send_IPI(c); 201 + } else { 202 + uv_write_global_mmr64(pnode, UVH_RTC1_INT_CONFIG, 203 + UVH_RTC1_INT_CONFIG_M_MASK); 204 + } 205 + } 206 + 207 + /* 208 + * Set expiration time for current cpu. 209 + * 210 + * Returns 1 if we missed the expiration time. 
211 + */ 212 + static int uv_rtc_set_timer(int cpu, u64 expires) 213 + { 214 + int pnode = uv_cpu_to_pnode(cpu); 215 + int bid = uv_cpu_to_blade_id(cpu); 216 + struct uv_rtc_timer_head *head = blade_info[bid]; 217 + int bcpu = uv_cpu_hub_info(cpu)->blade_processor_id; 218 + u64 *t = &head->cpu[bcpu].expires; 219 + unsigned long flags; 220 + int next_cpu; 221 + 222 + spin_lock_irqsave(&head->lock, flags); 223 + 224 + next_cpu = head->next_cpu; 225 + *t = expires; 226 + /* Will this one be next to go off? */ 227 + if (next_cpu < 0 || bcpu == next_cpu || 228 + expires < head->cpu[next_cpu].expires) { 229 + head->next_cpu = bcpu; 230 + if (uv_setup_intr(cpu, expires)) { 231 + *t = ULLONG_MAX; 232 + uv_rtc_find_next_timer(head, pnode); 233 + spin_unlock_irqrestore(&head->lock, flags); 234 + return 1; 235 + } 236 + } 237 + 238 + spin_unlock_irqrestore(&head->lock, flags); 239 + return 0; 240 + } 241 + 242 + /* 243 + * Unset expiration time for current cpu. 244 + * 245 + * Returns 1 if this timer was pending. 246 + */ 247 + static int uv_rtc_unset_timer(int cpu) 248 + { 249 + int pnode = uv_cpu_to_pnode(cpu); 250 + int bid = uv_cpu_to_blade_id(cpu); 251 + struct uv_rtc_timer_head *head = blade_info[bid]; 252 + int bcpu = uv_cpu_hub_info(cpu)->blade_processor_id; 253 + u64 *t = &head->cpu[bcpu].expires; 254 + unsigned long flags; 255 + int rc = 0; 256 + 257 + spin_lock_irqsave(&head->lock, flags); 258 + 259 + if (head->next_cpu == bcpu && uv_read_rtc() >= *t) 260 + rc = 1; 261 + 262 + *t = ULLONG_MAX; 263 + 264 + /* Was the hardware setup for this timer? */ 265 + if (head->next_cpu == bcpu) 266 + uv_rtc_find_next_timer(head, pnode); 267 + 268 + spin_unlock_irqrestore(&head->lock, flags); 269 + 270 + return rc; 271 + } 272 + 273 + 274 + /* 275 + * Kernel interface routines. 276 + */ 277 + 278 + /* 279 + * Read the RTC. 
280 + */ 281 + static cycle_t uv_read_rtc(void) 282 + { 283 + return (cycle_t)uv_read_local_mmr(UVH_RTC); 284 + } 285 + 286 + /* 287 + * Program the next event, relative to now 288 + */ 289 + static int uv_rtc_next_event(unsigned long delta, 290 + struct clock_event_device *ced) 291 + { 292 + int ced_cpu = cpumask_first(ced->cpumask); 293 + 294 + return uv_rtc_set_timer(ced_cpu, delta + uv_read_rtc()); 295 + } 296 + 297 + /* 298 + * Setup the RTC timer in oneshot mode 299 + */ 300 + static void uv_rtc_timer_setup(enum clock_event_mode mode, 301 + struct clock_event_device *evt) 302 + { 303 + int ced_cpu = cpumask_first(evt->cpumask); 304 + 305 + switch (mode) { 306 + case CLOCK_EVT_MODE_PERIODIC: 307 + case CLOCK_EVT_MODE_ONESHOT: 308 + case CLOCK_EVT_MODE_RESUME: 309 + /* Nothing to do here yet */ 310 + break; 311 + case CLOCK_EVT_MODE_UNUSED: 312 + case CLOCK_EVT_MODE_SHUTDOWN: 313 + uv_rtc_unset_timer(ced_cpu); 314 + break; 315 + } 316 + } 317 + 318 + static void uv_rtc_interrupt(void) 319 + { 320 + struct clock_event_device *ced = &__get_cpu_var(cpu_ced); 321 + int cpu = smp_processor_id(); 322 + 323 + if (!ced || !ced->event_handler) 324 + return; 325 + 326 + if (uv_rtc_unset_timer(cpu) != 1) 327 + return; 328 + 329 + ced->event_handler(ced); 330 + } 331 + 332 + static int __init uv_enable_rtc(char *str) 333 + { 334 + uv_rtc_enable = 1; 335 + 336 + return 1; 337 + } 338 + __setup("uvrtc", uv_enable_rtc); 339 + 340 + static __init void uv_rtc_register_clockevents(struct work_struct *dummy) 341 + { 342 + struct clock_event_device *ced = &__get_cpu_var(cpu_ced); 343 + 344 + *ced = clock_event_device_uv; 345 + ced->cpumask = cpumask_of(smp_processor_id()); 346 + clockevents_register_device(ced); 347 + } 348 + 349 + static __init int uv_rtc_setup_clock(void) 350 + { 351 + int rc; 352 + 353 + if (!uv_rtc_enable || !is_uv_system() || generic_interrupt_extension) 354 + return -ENODEV; 355 + 356 + generic_interrupt_extension = uv_rtc_interrupt; 357 + 358 + 
clocksource_uv.mult = clocksource_hz2mult(sn_rtc_cycles_per_second, 359 + clocksource_uv.shift); 360 + 361 + rc = clocksource_register(&clocksource_uv); 362 + if (rc) { 363 + generic_interrupt_extension = NULL; 364 + return rc; 365 + } 366 + 367 + /* Setup and register clockevents */ 368 + rc = uv_rtc_allocate_timers(); 369 + if (rc) { 370 + clocksource_unregister(&clocksource_uv); 371 + generic_interrupt_extension = NULL; 372 + return rc; 373 + } 374 + 375 + clock_event_device_uv.mult = div_sc(sn_rtc_cycles_per_second, 376 + NSEC_PER_SEC, clock_event_device_uv.shift); 377 + 378 + clock_event_device_uv.min_delta_ns = NSEC_PER_SEC / 379 + sn_rtc_cycles_per_second; 380 + 381 + clock_event_device_uv.max_delta_ns = clocksource_uv.mask * 382 + (NSEC_PER_SEC / sn_rtc_cycles_per_second); 383 + 384 + rc = schedule_on_each_cpu(uv_rtc_register_clockevents); 385 + if (rc) { 386 + clocksource_unregister(&clocksource_uv); 387 + generic_interrupt_extension = NULL; 388 + uv_rtc_deallocate_timers(); 389 + } 390 + 391 + return rc; 392 + } 393 + arch_initcall(uv_rtc_setup_clock);
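At the core of uv_rtc_find_next_timer() above is a scan of the per-blade expiration array for the soonest armed timer, with ULLONG_MAX marking "no timer armed". A simplified model of just that scan (the real function then programs the comparator for the winner or masks the RTC interrupt when nothing is pending):

```c
#include <limits.h>

/* Return the blade-local cpu index with the earliest expiration,
 * or -1 if no cpu on the blade has a timer armed (all ULLONG_MAX). */
static int find_next_timer(const unsigned long long *expires, int ncpus)
{
	unsigned long long lowest = ULLONG_MAX;
	int c, bcpu = -1;

	for (c = 0; c < ncpus; c++) {
		if (expires[c] < lowest) {
			bcpu = c;
			lowest = expires[c];
		}
	}
	return bcpu;
}
```

Because there is only one RTC comparator per node, the driver multiplexes it: every set/unset under the head lock may change which cpu is "next", so the scan is redone whenever the current next_cpu's timer is disturbed.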
+7
arch/x86/kernel/vmlinux_64.lds.S
··· 275 275 ASSERT((per_cpu__irq_stack_union == 0), 276 276 "irq_stack_union is not at start of per-cpu area"); 277 277 #endif 278 + 279 + #ifdef CONFIG_KEXEC 280 + #include <asm/kexec.h> 281 + 282 + ASSERT(kexec_control_code_size <= KEXEC_CONTROL_CODE_MAX_SIZE, 283 + "kexec control code size is too big") 284 + #endif
+14 -7
arch/x86/lguest/boot.c
··· 348 348 * flush_tlb_user() for both user and kernel mappings unless 349 349 * the Page Global Enable (PGE) feature bit is set. */ 350 350 *dx |= 0x00002000; 351 + /* We also lie, and say we're family id 5. 6 or greater 352 + * leads to a rdmsr in early_init_intel which we can't handle. 353 + * Family ID is returned as bits 8-12 in ax. */ 354 + *ax &= 0xFFFFF0FF; 355 + *ax |= 0x00000500; 351 356 break; 352 357 case 0x80000000: 353 358 /* Futureproof this a little: if they ask how much extended ··· 599 594 /* Some systems map "vectors" to interrupts weirdly. Lguest has 600 595 * a straightforward 1 to 1 mapping, so force that here. */ 601 596 __get_cpu_var(vector_irq)[vector] = i; 602 - if (vector != SYSCALL_VECTOR) { 603 - set_intr_gate(vector, 604 - interrupt[vector-FIRST_EXTERNAL_VECTOR]); 605 - set_irq_chip_and_handler_name(i, &lguest_irq_controller, 606 - handle_level_irq, 607 - "level"); 608 - } 597 + if (vector != SYSCALL_VECTOR) 598 + set_intr_gate(vector, interrupt[i]); 609 599 } 610 600 /* This call is required to set up for 4k stacks, where we have 611 601 * separate stacks for hard and soft interrupts. */ 612 602 irq_ctx_init(smp_processor_id()); 603 + } 604 + 605 + void lguest_setup_irq(unsigned int irq) 606 + { 607 + irq_to_desc_alloc_cpu(irq, 0); 608 + set_irq_chip_and_handler_name(irq, &lguest_irq_controller, 609 + handle_level_irq, "level"); 613 610 } 614 611 615 612 /*
+20 -11
arch/x86/math-emu/fpu_aux.c
··· 30 30 } 31 31 32 32 /* Needs to be externally visible */ 33 - void finit(void) 33 + void finit_task(struct task_struct *tsk) 34 34 { 35 - control_word = 0x037f; 36 - partial_status = 0; 37 - top = 0; /* We don't keep top in the status word internally. */ 38 - fpu_tag_word = 0xffff; 35 + struct i387_soft_struct *soft = &tsk->thread.xstate->soft; 36 + struct address *oaddr, *iaddr; 37 + soft->cwd = 0x037f; 38 + soft->swd = 0; 39 + soft->ftop = 0; /* We don't keep top in the status word internally. */ 40 + soft->twd = 0xffff; 39 41 /* The behaviour is different from that detailed in 40 42 Section 15.1.6 of the Intel manual */ 41 - operand_address.offset = 0; 42 - operand_address.selector = 0; 43 - instruction_address.offset = 0; 44 - instruction_address.selector = 0; 45 - instruction_address.opcode = 0; 46 - no_ip_update = 1; 43 + oaddr = (struct address *)&soft->foo; 44 + oaddr->offset = 0; 45 + oaddr->selector = 0; 46 + iaddr = (struct address *)&soft->fip; 47 + iaddr->offset = 0; 48 + iaddr->selector = 0; 49 + iaddr->opcode = 0; 50 + soft->no_update = 1; 51 + } 52 + 53 + void finit(void) 54 + { 55 + finit_task(current); 47 56 } 48 57 49 58 /*
-9
arch/x86/mm/highmem_32.c
··· 158 158 EXPORT_SYMBOL(kmap_atomic); 159 159 EXPORT_SYMBOL(kunmap_atomic); 160 160 161 - #ifdef CONFIG_NUMA 162 161 void __init set_highmem_pages_init(void) 163 162 { 164 163 struct zone *zone; ··· 181 182 } 182 183 totalram_pages += totalhigh_pages; 183 184 } 184 - #else 185 - void __init set_highmem_pages_init(void) 186 - { 187 - add_highpages_with_active_regions(0, highstart_pfn, highend_pfn); 188 - 189 - totalram_pages += totalhigh_pages; 190 - } 191 - #endif /* CONFIG_NUMA */
+344
arch/x86/mm/init.c
··· 1 + #include <linux/ioport.h> 1 2 #include <linux/swap.h> 3 + 2 4 #include <asm/cacheflush.h> 5 + #include <asm/e820.h> 6 + #include <asm/init.h> 3 7 #include <asm/page.h> 8 + #include <asm/page_types.h> 4 9 #include <asm/sections.h> 5 10 #include <asm/system.h> 11 + #include <asm/tlbflush.h> 12 + 13 + unsigned long __initdata e820_table_start; 14 + unsigned long __meminitdata e820_table_end; 15 + unsigned long __meminitdata e820_table_top; 16 + 17 + int after_bootmem; 18 + 19 + int direct_gbpages 20 + #ifdef CONFIG_DIRECT_GBPAGES 21 + = 1 22 + #endif 23 + ; 24 + 25 + static void __init find_early_table_space(unsigned long end, int use_pse, 26 + int use_gbpages) 27 + { 28 + unsigned long puds, pmds, ptes, tables, start; 29 + 30 + puds = (end + PUD_SIZE - 1) >> PUD_SHIFT; 31 + tables = roundup(puds * sizeof(pud_t), PAGE_SIZE); 32 + 33 + if (use_gbpages) { 34 + unsigned long extra; 35 + 36 + extra = end - ((end>>PUD_SHIFT) << PUD_SHIFT); 37 + pmds = (extra + PMD_SIZE - 1) >> PMD_SHIFT; 38 + } else 39 + pmds = (end + PMD_SIZE - 1) >> PMD_SHIFT; 40 + 41 + tables += roundup(pmds * sizeof(pmd_t), PAGE_SIZE); 42 + 43 + if (use_pse) { 44 + unsigned long extra; 45 + 46 + extra = end - ((end>>PMD_SHIFT) << PMD_SHIFT); 47 + #ifdef CONFIG_X86_32 48 + extra += PMD_SIZE; 49 + #endif 50 + ptes = (extra + PAGE_SIZE - 1) >> PAGE_SHIFT; 51 + } else 52 + ptes = (end + PAGE_SIZE - 1) >> PAGE_SHIFT; 53 + 54 + tables += roundup(ptes * sizeof(pte_t), PAGE_SIZE); 55 + 56 + #ifdef CONFIG_X86_32 57 + /* for fixmap */ 58 + tables += roundup(__end_of_fixed_addresses * sizeof(pte_t), PAGE_SIZE); 59 + #endif 60 + 61 + /* 62 + * RED-PEN putting page tables only on node 0 could 63 + * cause a hotspot and fill up ZONE_DMA. The page tables 64 + * need roughly 0.5KB per GB. 
65 + */
66 + #ifdef CONFIG_X86_32
67 + start = 0x7000;
68 + e820_table_start = find_e820_area(start, max_pfn_mapped<<PAGE_SHIFT,
69 + tables, PAGE_SIZE);
70 + #else /* CONFIG_X86_64 */
71 + start = 0x8000;
72 + e820_table_start = find_e820_area(start, end, tables, PAGE_SIZE);
73 + #endif
74 + if (e820_table_start == -1UL)
75 + panic("Cannot find space for the kernel page tables");
76 +
77 + e820_table_start >>= PAGE_SHIFT;
78 + e820_table_end = e820_table_start;
79 + e820_table_top = e820_table_start + (tables >> PAGE_SHIFT);
80 +
81 + printk(KERN_DEBUG "kernel direct mapping tables up to %lx @ %lx-%lx\n",
82 + end, e820_table_start << PAGE_SHIFT, e820_table_top << PAGE_SHIFT);
83 + }
84 +
85 + struct map_range {
86 + unsigned long start;
87 + unsigned long end;
88 + unsigned page_size_mask;
89 + };
90 +
91 + #ifdef CONFIG_X86_32
92 + #define NR_RANGE_MR 3
93 + #else /* CONFIG_X86_64 */
94 + #define NR_RANGE_MR 5
95 + #endif
96 +
97 + static int save_mr(struct map_range *mr, int nr_range,
98 + unsigned long start_pfn, unsigned long end_pfn,
99 + unsigned long page_size_mask)
100 + {
101 + if (start_pfn < end_pfn) {
102 + if (nr_range >= NR_RANGE_MR)
103 + panic("run out of range for init_memory_mapping\n");
104 + mr[nr_range].start = start_pfn<<PAGE_SHIFT;
105 + mr[nr_range].end = end_pfn<<PAGE_SHIFT;
106 + mr[nr_range].page_size_mask = page_size_mask;
107 + nr_range++;
108 + }
109 +
110 + return nr_range;
111 + }
112 +
113 + #ifdef CONFIG_X86_64
114 + static void __init init_gbpages(void)
115 + {
116 + if (direct_gbpages && cpu_has_gbpages)
117 + printk(KERN_INFO "Using GB pages for direct mapping\n");
118 + else
119 + direct_gbpages = 0;
120 + }
121 + #else
122 + static inline void init_gbpages(void)
123 + {
124 + }
125 + #endif
126 +
127 + /*
128 + * Setup the direct mapping of the physical memory at PAGE_OFFSET.
129 + * This runs before bootmem is initialized and gets pages directly from
130 + * the physical memory. To access them they are temporarily mapped.
131 + */
132 + unsigned long __init_refok init_memory_mapping(unsigned long start,
133 + unsigned long end)
134 + {
135 + unsigned long page_size_mask = 0;
136 + unsigned long start_pfn, end_pfn;
137 + unsigned long ret = 0;
138 + unsigned long pos;
139 +
140 + struct map_range mr[NR_RANGE_MR];
141 + int nr_range, i;
142 + int use_pse, use_gbpages;
143 +
144 + printk(KERN_INFO "init_memory_mapping: %016lx-%016lx\n", start, end);
145 +
146 + if (!after_bootmem)
147 + init_gbpages();
148 +
149 + #ifdef CONFIG_DEBUG_PAGEALLOC
150 + /*
151 + * For CONFIG_DEBUG_PAGEALLOC, identity mapping will use small pages.
152 + * This will simplify cpa(), which otherwise needs to support splitting
153 + * large pages into small in interrupt context, etc.
154 + */
155 + use_pse = use_gbpages = 0;
156 + #else
157 + use_pse = cpu_has_pse;
158 + use_gbpages = direct_gbpages;
159 + #endif
160 +
161 + #ifdef CONFIG_X86_32
162 + #ifdef CONFIG_X86_PAE
163 + set_nx();
164 + if (nx_enabled)
165 + printk(KERN_INFO "NX (Execute Disable) protection: active\n");
166 + #endif
167 +
168 + /* Enable PSE if available */
169 + if (cpu_has_pse)
170 + set_in_cr4(X86_CR4_PSE);
171 +
172 + /* Enable PGE if available */
173 + if (cpu_has_pge) {
174 + set_in_cr4(X86_CR4_PGE);
175 + __supported_pte_mask |= _PAGE_GLOBAL;
176 + }
177 + #endif
178 +
179 + if (use_gbpages)
180 + page_size_mask |= 1 << PG_LEVEL_1G;
181 + if (use_pse)
182 + page_size_mask |= 1 << PG_LEVEL_2M;
183 +
184 + memset(mr, 0, sizeof(mr));
185 + nr_range = 0;
186 +
187 + /* head if not big page alignment ? */
188 + start_pfn = start >> PAGE_SHIFT;
189 + pos = start_pfn << PAGE_SHIFT;
190 + #ifdef CONFIG_X86_32
191 + /*
192 + * Don't use a large page for the first 2/4MB of memory
193 + * because there are often fixed size MTRRs in there
194 + * and overlapping MTRRs into large pages can cause
195 + * slowdowns.
196 + */
197 + if (pos == 0)
198 + end_pfn = 1<<(PMD_SHIFT - PAGE_SHIFT);
199 + else
200 + end_pfn = ((pos + (PMD_SIZE - 1))>>PMD_SHIFT)
201 + << (PMD_SHIFT - PAGE_SHIFT);
202 + #else /* CONFIG_X86_64 */
203 + end_pfn = ((pos + (PMD_SIZE - 1)) >> PMD_SHIFT)
204 + << (PMD_SHIFT - PAGE_SHIFT);
205 + #endif
206 + if (end_pfn > (end >> PAGE_SHIFT))
207 + end_pfn = end >> PAGE_SHIFT;
208 + if (start_pfn < end_pfn) {
209 + nr_range = save_mr(mr, nr_range, start_pfn, end_pfn, 0);
210 + pos = end_pfn << PAGE_SHIFT;
211 + }
212 +
213 + /* big page (2M) range */
214 + start_pfn = ((pos + (PMD_SIZE - 1))>>PMD_SHIFT)
215 + << (PMD_SHIFT - PAGE_SHIFT);
216 + #ifdef CONFIG_X86_32
217 + end_pfn = (end>>PMD_SHIFT) << (PMD_SHIFT - PAGE_SHIFT);
218 + #else /* CONFIG_X86_64 */
219 + end_pfn = ((pos + (PUD_SIZE - 1))>>PUD_SHIFT)
220 + << (PUD_SHIFT - PAGE_SHIFT);
221 + if (end_pfn > ((end>>PMD_SHIFT)<<(PMD_SHIFT - PAGE_SHIFT)))
222 + end_pfn = ((end>>PMD_SHIFT)<<(PMD_SHIFT - PAGE_SHIFT));
223 + #endif
224 +
225 + if (start_pfn < end_pfn) {
226 + nr_range = save_mr(mr, nr_range, start_pfn, end_pfn,
227 + page_size_mask & (1<<PG_LEVEL_2M));
228 + pos = end_pfn << PAGE_SHIFT;
229 + }
230 +
231 + #ifdef CONFIG_X86_64
232 + /* big page (1G) range */
233 + start_pfn = ((pos + (PUD_SIZE - 1))>>PUD_SHIFT)
234 + << (PUD_SHIFT - PAGE_SHIFT);
235 + end_pfn = (end >> PUD_SHIFT) << (PUD_SHIFT - PAGE_SHIFT);
236 + if (start_pfn < end_pfn) {
237 + nr_range = save_mr(mr, nr_range, start_pfn, end_pfn,
238 + page_size_mask &
239 + ((1<<PG_LEVEL_2M)|(1<<PG_LEVEL_1G)));
240 + pos = end_pfn << PAGE_SHIFT;
241 + }
242 +
243 + /* tail is not big page (1G) alignment */
244 + start_pfn = ((pos + (PMD_SIZE - 1))>>PMD_SHIFT)
245 + << (PMD_SHIFT - PAGE_SHIFT);
246 + end_pfn = (end >> PMD_SHIFT) << (PMD_SHIFT - PAGE_SHIFT);
247 + if (start_pfn < end_pfn) {
248 + nr_range = save_mr(mr, nr_range, start_pfn, end_pfn,
249 + page_size_mask & (1<<PG_LEVEL_2M));
250 + pos = end_pfn << PAGE_SHIFT;
251 + }
252 + #endif
253 +
254 + /* tail is not big page (2M) alignment */
255 + start_pfn = pos>>PAGE_SHIFT;
256 + end_pfn = end>>PAGE_SHIFT;
257 + nr_range = save_mr(mr, nr_range, start_pfn, end_pfn, 0);
258 +
259 + /* try to merge same page size and continuous */
260 + for (i = 0; nr_range > 1 && i < nr_range - 1; i++) {
261 + unsigned long old_start;
262 + if (mr[i].end != mr[i+1].start ||
263 + mr[i].page_size_mask != mr[i+1].page_size_mask)
264 + continue;
265 + /* move it */
266 + old_start = mr[i].start;
267 + memmove(&mr[i], &mr[i+1],
268 + (nr_range - 1 - i) * sizeof(struct map_range));
269 + mr[i--].start = old_start;
270 + nr_range--;
271 + }
272 +
273 + for (i = 0; i < nr_range; i++)
274 + printk(KERN_DEBUG " %010lx - %010lx page %s\n",
275 + mr[i].start, mr[i].end,
276 + (mr[i].page_size_mask & (1<<PG_LEVEL_1G))?"1G":(
277 + (mr[i].page_size_mask & (1<<PG_LEVEL_2M))?"2M":"4k"));
278 +
279 + /*
280 + * Find space for the kernel direct mapping tables.
281 + *
282 + * Later we should allocate these tables in the local node of the
283 + * memory mapped. Unfortunately this is done currently before the
284 + * nodes are discovered.
285 + */
286 + if (!after_bootmem)
287 + find_early_table_space(end, use_pse, use_gbpages);
288 +
289 + #ifdef CONFIG_X86_32
290 + for (i = 0; i < nr_range; i++)
291 + kernel_physical_mapping_init(mr[i].start, mr[i].end,
292 + mr[i].page_size_mask);
293 + ret = end;
294 + #else /* CONFIG_X86_64 */
295 + for (i = 0; i < nr_range; i++)
296 + ret = kernel_physical_mapping_init(mr[i].start, mr[i].end,
297 + mr[i].page_size_mask);
298 + #endif
299 +
300 + #ifdef CONFIG_X86_32
301 + early_ioremap_page_table_range_init();
302 +
303 + load_cr3(swapper_pg_dir);
304 + #endif
305 +
306 + #ifdef CONFIG_X86_64
307 + if (!after_bootmem)
308 + mmu_cr4_features = read_cr4();
309 + #endif
310 + __flush_tlb_all();
311 +
312 + if (!after_bootmem && e820_table_end > e820_table_start)
313 + reserve_early(e820_table_start << PAGE_SHIFT,
314 + e820_table_end << PAGE_SHIFT, "PGTABLE");
315 +
316 + if (!after_bootmem)
317 + early_memtest(start, end);
318 +
319 + return ret >> PAGE_SHIFT;
320 + }
321 +
322 +
323 + /*
324 + * devmem_is_allowed() checks to see if /dev/mem access to a certain address
325 + * is valid. The argument is a physical page number.
326 + *
327 + *
328 + * On x86, access has to be given to the first megabyte of ram because that area
329 + * contains bios code and data regions used by X and dosemu and similar apps.
330 + * Access has to be given to non-kernel-ram areas as well, these contain the PCI
331 + * mmio resources as well as potential bios/acpi data regions.
332 + */
333 + int devmem_is_allowed(unsigned long pagenr)
334 + {
335 + if (pagenr <= 256)
336 + return 1;
337 + if (iomem_is_exclusive(pagenr << PAGE_SHIFT))
338 + return 0;
339 + if (!page_is_ram(pagenr))
340 + return 1;
341 + return 0;
342 + }
6 343
7 344 void free_init_pages(char *what, unsigned long begin, unsigned long end)
8 345 {
···
384 47 (unsigned long)(&__init_begin),
385 48 (unsigned long)(&__init_end));
386 49 }
50 +
51 + #ifdef CONFIG_BLK_DEV_INITRD
52 + void free_initrd_mem(unsigned long start, unsigned long end)
53 + {
54 + free_init_pages("initrd memory", start, end);
55 + }
56 + #endif
+79 -194
arch/x86/mm/init_32.c
···
49 49 #include <asm/paravirt.h>
50 50 #include <asm/setup.h>
51 51 #include <asm/cacheflush.h>
52 + #include <asm/init.h>
52 53
53 54 unsigned long max_low_pfn_mapped;
54 55 unsigned long max_pfn_mapped;
···
59 58
60 59 static noinline int do_test_wp_bit(void);
61 60
62 -
63 - static unsigned long __initdata table_start;
64 - static unsigned long __meminitdata table_end;
65 - static unsigned long __meminitdata table_top;
66 -
67 - static int __initdata after_init_bootmem;
61 + bool __read_mostly __vmalloc_start_set = false;
68 62
69 63 static __init void *alloc_low_page(void)
70 64 {
71 - unsigned long pfn = table_end++;
65 + unsigned long pfn = e820_table_end++;
72 66 void *adr;
73 67
74 - if (pfn >= table_top)
68 + if (pfn >= e820_table_top)
75 69 panic("alloc_low_page: ran out of memory");
76 70
77 71 adr = __va(pfn * PAGE_SIZE);
···
86 90
87 91 #ifdef CONFIG_X86_PAE
88 92 if (!(pgd_val(*pgd) & _PAGE_PRESENT)) {
89 - if (after_init_bootmem)
93 + if (after_bootmem)
90 94 pmd_table = (pmd_t *)alloc_bootmem_low_pages(PAGE_SIZE);
91 95 else
92 96 pmd_table = (pmd_t *)alloc_low_page();
···
113 117 if (!(pmd_val(*pmd) & _PAGE_PRESENT)) {
114 118 pte_t *page_table = NULL;
115 119
116 - if (after_init_bootmem) {
120 + if (after_bootmem) {
117 121 #ifdef CONFIG_DEBUG_PAGEALLOC
118 122 page_table = (pte_t *) alloc_bootmem_pages(PAGE_SIZE);
119 123 #endif
···
129 133 }
130 134
131 135 return pte_offset_kernel(pmd, 0);
136 + }
137 +
138 + pmd_t * __init populate_extra_pmd(unsigned long vaddr)
139 + {
140 + int pgd_idx = pgd_index(vaddr);
141 + int pmd_idx = pmd_index(vaddr);
142 +
143 + return one_md_table_init(swapper_pg_dir + pgd_idx) + pmd_idx;
144 + }
145 +
146 + pte_t * __init populate_extra_pte(unsigned long vaddr)
147 + {
148 + int pte_idx = pte_index(vaddr);
149 + pmd_t *pmd;
150 +
151 + pmd = populate_extra_pmd(vaddr);
152 + return one_page_table_init(pmd) + pte_idx;
132 153 }
133 154
134 155 static pte_t *__init page_table_kmap_check(pte_t *pte, pmd_t *pmd,
···
164 151 if (pmd_idx_kmap_begin != pmd_idx_kmap_end
165 152 && (vaddr >> PMD_SHIFT) >= pmd_idx_kmap_begin
166 153 && (vaddr >> PMD_SHIFT) <= pmd_idx_kmap_end
167 - && ((__pa(pte) >> PAGE_SHIFT) < table_start
168 - || (__pa(pte) >> PAGE_SHIFT) >= table_end)) {
154 + && ((__pa(pte) >> PAGE_SHIFT) < e820_table_start
155 + || (__pa(pte) >> PAGE_SHIFT) >= e820_table_end)) {
169 156 pte_t *newpte;
170 157 int i;
171 158
172 - BUG_ON(after_init_bootmem);
159 + BUG_ON(after_bootmem);
173 160 newpte = alloc_low_page();
174 161 for (i = 0; i < PTRS_PER_PTE; i++)
175 162 set_pte(newpte + i, pte[i]);
···
238 225 * of max_low_pfn pages, by creating page tables starting from address
239 226 * PAGE_OFFSET:
240 227 */
241 - static void __init kernel_physical_mapping_init(pgd_t *pgd_base,
242 - unsigned long start_pfn,
243 - unsigned long end_pfn,
244 - int use_pse)
228 + unsigned long __init
229 + kernel_physical_mapping_init(unsigned long start,
230 + unsigned long end,
231 + unsigned long page_size_mask)
245 232 {
233 + int use_pse = page_size_mask == (1<<PG_LEVEL_2M);
234 + unsigned long start_pfn, end_pfn;
235 + pgd_t *pgd_base = swapper_pg_dir;
246 236 int pgd_idx, pmd_idx, pte_ofs;
247 237 unsigned long pfn;
248 238 pgd_t *pgd;
···
253 237 pte_t *pte;
254 238 unsigned pages_2m, pages_4k;
255 239 int mapping_iter;
240 +
241 + start_pfn = start >> PAGE_SHIFT;
242 + end_pfn = end >> PAGE_SHIFT;
256 243
257 244 /*
258 245 * First iteration will setup identity mapping using large/small pages
···
371 352 mapping_iter = 2;
372 353 goto repeat;
373 354 }
374 - }
375 -
376 - /*
377 - * devmem_is_allowed() checks to see if /dev/mem access to a certain address
378 - * is valid. The argument is a physical page number.
379 - *
380 - *
381 - * On x86, access has to be given to the first megabyte of ram because that area
382 - * contains bios code and data regions used by X and dosemu and similar apps.
383 - * Access has to be given to non-kernel-ram areas as well, these contain the PCI
384 - * mmio resources as well as potential bios/acpi data regions.
385 - */
386 - int devmem_is_allowed(unsigned long pagenr)
387 - {
388 - if (pagenr <= 256)
389 - return 1;
390 - if (iomem_is_exclusive(pagenr << PAGE_SHIFT))
391 - return 0;
392 - if (!page_is_ram(pagenr))
393 - return 1;
394 355 return 0;
395 356 }
···
527 528 * be partially populated, and so it avoids stomping on any existing
528 529 * mappings.
529 530 */
530 - static void __init early_ioremap_page_table_range_init(pgd_t *pgd_base)
531 + void __init early_ioremap_page_table_range_init(void)
531 532 {
533 + pgd_t *pgd_base = swapper_pg_dir;
532 534 unsigned long vaddr, end;
533 535
534 536 /*
···
624 624 }
625 625 early_param("noexec", noexec_setup);
626 626
627 - static void __init set_nx(void)
627 + void __init set_nx(void)
628 628 {
629 629 unsigned int v[4], l, h;
630 630
···
776 776 #ifdef CONFIG_FLATMEM
777 777 max_mapnr = num_physpages;
778 778 #endif
779 + __vmalloc_start_set = true;
780 +
779 781 printk(KERN_NOTICE "%ldMB LOWMEM available.\n",
780 782 pages_to_mb(max_low_pfn));
781 783
···
799 797 free_area_init_nodes(max_zone_pfns);
800 798 }
801 799
800 + static unsigned long __init setup_node_bootmem(int nodeid,
801 + unsigned long start_pfn,
802 + unsigned long end_pfn,
803 + unsigned long bootmap)
804 + {
805 + unsigned long bootmap_size;
806 +
807 + /* don't touch min_low_pfn */
808 + bootmap_size = init_bootmem_node(NODE_DATA(nodeid),
809 + bootmap >> PAGE_SHIFT,
810 + start_pfn, end_pfn);
811 + printk(KERN_INFO " node %d low ram: %08lx - %08lx\n",
812 + nodeid, start_pfn<<PAGE_SHIFT, end_pfn<<PAGE_SHIFT);
813 + printk(KERN_INFO " node %d bootmap %08lx - %08lx\n",
814 + nodeid, bootmap, bootmap + bootmap_size);
815 + free_bootmem_with_active_regions(nodeid, end_pfn);
816 + early_res_to_bootmem(start_pfn<<PAGE_SHIFT, end_pfn<<PAGE_SHIFT);
817 +
818 + return bootmap + bootmap_size;
819 + }
820 +
802 821 void __init setup_bootmem_allocator(void)
803 822 {
804 - int i;
823 + int nodeid;
805 824 unsigned long bootmap_size, bootmap;
806 825 /*
807 826 * Initialize the boot-time allocator (with low memory only):
808 827 */
809 828 bootmap_size = bootmem_bootmap_pages(max_low_pfn)<<PAGE_SHIFT;
810 - bootmap = find_e820_area(min_low_pfn<<PAGE_SHIFT,
811 - max_pfn_mapped<<PAGE_SHIFT, bootmap_size,
829 + bootmap = find_e820_area(0, max_pfn_mapped<<PAGE_SHIFT, bootmap_size,
812 830 PAGE_SIZE);
813 831 if (bootmap == -1L)
814 832 panic("Cannot find bootmem map of size %ld\n", bootmap_size);
815 833 reserve_early(bootmap, bootmap + bootmap_size, "BOOTMAP");
816 834
817 - /* don't touch min_low_pfn */
818 - bootmap_size = init_bootmem_node(NODE_DATA(0), bootmap >> PAGE_SHIFT,
819 - min_low_pfn, max_low_pfn);
820 835 printk(KERN_INFO " mapped low ram: 0 - %08lx\n",
821 836 max_pfn_mapped<<PAGE_SHIFT);
822 - printk(KERN_INFO " low ram: %08lx - %08lx\n",
823 - min_low_pfn<<PAGE_SHIFT, max_low_pfn<<PAGE_SHIFT);
824 - printk(KERN_INFO " bootmap %08lx - %08lx\n",
825 - bootmap, bootmap + bootmap_size);
826 - for_each_online_node(i)
827 - free_bootmem_with_active_regions(i, max_low_pfn);
828 - early_res_to_bootmem(0, max_low_pfn<<PAGE_SHIFT);
837 + printk(KERN_INFO " low ram: 0 - %08lx\n", max_low_pfn<<PAGE_SHIFT);
829 838
830 - after_init_bootmem = 1;
839 + for_each_online_node(nodeid) {
831 - }
840 + unsigned long start_pfn, end_pfn;
832 841
833 - static void __init find_early_table_space(unsigned long end, int use_pse)
834 - {
835 - unsigned long puds, pmds, ptes, tables, start;
836 -
837 - puds = (end + PUD_SIZE - 1) >> PUD_SHIFT;
838 - tables = roundup(puds * sizeof(pud_t), PAGE_SIZE);
839 -
840 - pmds = (end + PMD_SIZE - 1) >> PMD_SHIFT;
841 - tables += roundup(pmds * sizeof(pmd_t), PAGE_SIZE);
842 -
843 - if (use_pse) {
844 - unsigned long extra;
845 -
846 - extra = end - ((end>>PMD_SHIFT) << PMD_SHIFT);
847 - extra += PMD_SIZE;
848 - ptes = (extra + PAGE_SIZE - 1) >> PAGE_SHIFT;
849 - } else
850 - ptes = (end + PAGE_SIZE - 1) >> PAGE_SHIFT;
851 -
852 - tables += roundup(ptes * sizeof(pte_t), PAGE_SIZE);
853 -
854 - /* for fixmap */
855 - tables += roundup(__end_of_fixed_addresses * sizeof(pte_t), PAGE_SIZE);
856 -
857 - /*
858 - * RED-PEN putting page tables only on node 0 could
859 - * cause a hotspot and fill up ZONE_DMA. The page tables
860 - * need roughly 0.5KB per GB.
861 - */
862 - start = 0x7000;
863 - table_start = find_e820_area(start, max_pfn_mapped<<PAGE_SHIFT,
864 - tables, PAGE_SIZE);
865 - if (table_start == -1UL)
866 - panic("Cannot find space for the kernel page tables");
867 -
868 - table_start >>= PAGE_SHIFT;
869 - table_end = table_start;
870 - table_top = table_start + (tables>>PAGE_SHIFT);
871 -
872 - printk(KERN_DEBUG "kernel direct mapping tables up to %lx @ %lx-%lx\n",
873 - end, table_start << PAGE_SHIFT,
874 - (table_start << PAGE_SHIFT) + tables);
875 - }
876 -
877 - unsigned long __init_refok init_memory_mapping(unsigned long start,
878 - unsigned long end)
879 - {
880 - pgd_t *pgd_base = swapper_pg_dir;
881 - unsigned long start_pfn, end_pfn;
882 - unsigned long big_page_start;
883 - #ifdef CONFIG_DEBUG_PAGEALLOC
884 - /*
885 - * For CONFIG_DEBUG_PAGEALLOC, identity mapping will use small pages.
886 - * This will simplify cpa(), which otherwise needs to support splitting
887 - * large pages into small in interrupt context, etc.
888 - */
889 - int use_pse = 0;
842 + #ifdef CONFIG_NEED_MULTIPLE_NODES
843 + start_pfn = node_start_pfn[nodeid];
844 + end_pfn = node_end_pfn[nodeid];
845 + if (start_pfn > max_low_pfn)
846 + continue;
847 + if (end_pfn > max_low_pfn)
848 + end_pfn = max_low_pfn;
890 849 #else
891 - int use_pse = cpu_has_pse;
850 + start_pfn = 0;
851 + end_pfn = max_low_pfn;
892 852 #endif
893 -
894 -
895 - /*
896 - * Find space for the kernel direct mapping tables.
897 - */
898 - if (!after_init_bootmem)
899 - find_early_table_space(end, use_pse);
900 -
901 - #ifdef CONFIG_X86_PAE
902 - set_nx();
903 - if (nx_enabled)
904 - printk(KERN_INFO "NX (Execute Disable) protection: active\n");
905 - #endif
906 -
907 - /* Enable PSE if available */
908 - if (cpu_has_pse)
909 - set_in_cr4(X86_CR4_PSE);
910 -
911 - /* Enable PGE if available */
912 - if (cpu_has_pge) {
913 - set_in_cr4(X86_CR4_PGE);
914 - __supported_pte_mask |= _PAGE_GLOBAL;
853 + bootmap = setup_node_bootmem(nodeid, start_pfn, end_pfn,
854 + bootmap);
915 855 }
916 856
917 - /*
918 - * Don't use a large page for the first 2/4MB of memory
919 - * because there are often fixed size MTRRs in there
920 - * and overlapping MTRRs into large pages can cause
921 - * slowdowns.
922 - */
923 - big_page_start = PMD_SIZE;
924 -
925 - if (start < big_page_start) {
926 - start_pfn = start >> PAGE_SHIFT;
927 - end_pfn = min(big_page_start>>PAGE_SHIFT, end>>PAGE_SHIFT);
928 - } else {
929 - /* head is not big page alignment ? */
930 - start_pfn = start >> PAGE_SHIFT;
931 - end_pfn = ((start + (PMD_SIZE - 1))>>PMD_SHIFT)
932 - << (PMD_SHIFT - PAGE_SHIFT);
933 - }
934 - if (start_pfn < end_pfn)
935 - kernel_physical_mapping_init(pgd_base, start_pfn, end_pfn, 0);
936 -
937 - /* big page range */
938 - start_pfn = ((start + (PMD_SIZE - 1))>>PMD_SHIFT)
939 - << (PMD_SHIFT - PAGE_SHIFT);
940 - if (start_pfn < (big_page_start >> PAGE_SHIFT))
941 - start_pfn = big_page_start >> PAGE_SHIFT;
942 - end_pfn = (end>>PMD_SHIFT) << (PMD_SHIFT - PAGE_SHIFT);
943 - if (start_pfn < end_pfn)
944 - kernel_physical_mapping_init(pgd_base, start_pfn, end_pfn,
945 - use_pse);
946 -
947 - /* tail is not big page alignment ? */
948 - start_pfn = end_pfn;
949 - if (start_pfn > (big_page_start>>PAGE_SHIFT)) {
950 - end_pfn = end >> PAGE_SHIFT;
951 - if (start_pfn < end_pfn)
952 - kernel_physical_mapping_init(pgd_base, start_pfn,
953 - end_pfn, 0);
954 - }
955 -
956 - early_ioremap_page_table_range_init(pgd_base);
957 -
958 - load_cr3(swapper_pg_dir);
959 -
960 - __flush_tlb_all();
961 -
962 - if (!after_init_bootmem)
963 - reserve_early(table_start << PAGE_SHIFT,
964 - table_end << PAGE_SHIFT, "PGTABLE");
965 -
966 - if (!after_init_bootmem)
967 - early_memtest(start, end);
968 -
969 - return end >> PAGE_SHIFT;
857 + after_bootmem = 1;
970 858 }
971 -
972 859
973 860 /*
974 861 * paging_init() sets up the page tables - note that the first 8MB are
···
1089 1197 printk(KERN_INFO "Testing CPA: write protecting again\n");
1090 1198 set_pages_ro(virt_to_page(start), size >> PAGE_SHIFT);
1091 1199 #endif
1092 - }
1093 - #endif
1094 -
1095 - #ifdef CONFIG_BLK_DEV_INITRD
1096 - void free_initrd_mem(unsigned long start, unsigned long end)
1097 - {
1098 - free_init_pages("initrd memory", start, end);
1099 1200 }
1100 1201 #endif
1101 1202
+68 -292
arch/x86/mm/init_64.c
···
48 48 #include <asm/kdebug.h>
49 49 #include <asm/numa.h>
50 50 #include <asm/cacheflush.h>
51 + #include <asm/init.h>
51 52
52 53 /*
53 54 * end_pfn only includes RAM, while max_pfn_mapped includes all e820 entries.
···
61 60 static unsigned long dma_reserve __initdata;
62 61
63 62 DEFINE_PER_CPU(struct mmu_gather, mmu_gathers);
64 -
65 - int direct_gbpages
66 - #ifdef CONFIG_DIRECT_GBPAGES
67 - = 1
68 - #endif
69 - ;
70 63
71 64 static int __init parse_direct_gbpages_off(char *arg)
72 65 {
···
82 87 * around without checking the pgd every time.
83 88 */
84 89
85 - int after_bootmem;
86 -
87 90 pteval_t __supported_pte_mask __read_mostly = ~_PAGE_IOMAP;
88 91 EXPORT_SYMBOL_GPL(__supported_pte_mask);
89 92
90 - static int do_not_nx __cpuinitdata;
93 + static int disable_nx __cpuinitdata;
91 94
92 95 /*
93 96 * noexec=on|off
···
100 107 return -EINVAL;
101 108 if (!strncmp(str, "on", 2)) {
102 109 __supported_pte_mask |= _PAGE_NX;
103 - do_not_nx = 0;
110 + disable_nx = 0;
104 111 } else if (!strncmp(str, "off", 3)) {
105 - do_not_nx = 1;
112 + disable_nx = 1;
106 113 __supported_pte_mask &= ~_PAGE_NX;
107 114 }
108 115 return 0;
···
114 121 unsigned long efer;
115 122
116 123 rdmsrl(MSR_EFER, efer);
117 - if (!(efer & EFER_NX) || do_not_nx)
124 + if (!(efer & EFER_NX) || disable_nx)
118 125 __supported_pte_mask &= ~_PAGE_NX;
119 126 }
120 127
···
161 168 return ptr;
162 169 }
163 170
164 - void
165 - set_pte_vaddr_pud(pud_t *pud_page, unsigned long vaddr, pte_t new_pte)
171 + static pud_t *fill_pud(pgd_t *pgd, unsigned long vaddr)
172 + {
173 + if (pgd_none(*pgd)) {
174 + pud_t *pud = (pud_t *)spp_getpage();
175 + pgd_populate(&init_mm, pgd, pud);
176 + if (pud != pud_offset(pgd, 0))
177 + printk(KERN_ERR "PAGETABLE BUG #00! %p <-> %p\n",
178 + pud, pud_offset(pgd, 0));
179 + }
180 + return pud_offset(pgd, vaddr);
181 + }
182 +
183 + static pmd_t *fill_pmd(pud_t *pud, unsigned long vaddr)
184 + {
185 + if (pud_none(*pud)) {
186 + pmd_t *pmd = (pmd_t *) spp_getpage();
187 + pud_populate(&init_mm, pud, pmd);
188 + if (pmd != pmd_offset(pud, 0))
189 + printk(KERN_ERR "PAGETABLE BUG #01! %p <-> %p\n",
190 + pmd, pmd_offset(pud, 0));
191 + }
192 + return pmd_offset(pud, vaddr);
193 + }
194 +
195 + static pte_t *fill_pte(pmd_t *pmd, unsigned long vaddr)
196 + {
197 + if (pmd_none(*pmd)) {
198 + pte_t *pte = (pte_t *) spp_getpage();
199 + pmd_populate_kernel(&init_mm, pmd, pte);
200 + if (pte != pte_offset_kernel(pmd, 0))
201 + printk(KERN_ERR "PAGETABLE BUG #02!\n");
202 + }
203 + return pte_offset_kernel(pmd, vaddr);
204 + }
205 +
206 + void set_pte_vaddr_pud(pud_t *pud_page, unsigned long vaddr, pte_t new_pte)
166 207 {
167 208 pud_t *pud;
168 209 pmd_t *pmd;
169 210 pte_t *pte;
170 211
171 212 pud = pud_page + pud_index(vaddr);
172 - if (pud_none(*pud)) {
173 - pmd = (pmd_t *) spp_getpage();
174 - pud_populate(&init_mm, pud, pmd);
175 - if (pmd != pmd_offset(pud, 0)) {
176 - printk(KERN_ERR "PAGETABLE BUG #01! %p <-> %p\n",
177 - pmd, pmd_offset(pud, 0));
178 - return;
179 - }
180 - }
181 - pmd = pmd_offset(pud, vaddr);
182 - if (pmd_none(*pmd)) {
183 - pte = (pte_t *) spp_getpage();
184 - pmd_populate_kernel(&init_mm, pmd, pte);
185 - if (pte != pte_offset_kernel(pmd, 0)) {
186 - printk(KERN_ERR "PAGETABLE BUG #02!\n");
187 - return;
188 - }
189 - }
213 + pmd = fill_pmd(pud, vaddr);
214 + pte = fill_pte(pmd, vaddr);
190 215
191 - pte = pte_offset_kernel(pmd, vaddr);
192 216 set_pte(pte, new_pte);
193 217
194 218 /*
···
215 205 __flush_tlb_one(vaddr);
216 206 }
217 207
218 - void
219 - set_pte_vaddr(unsigned long vaddr, pte_t pteval)
208 + void set_pte_vaddr(unsigned long vaddr, pte_t pteval)
220 209 {
221 210 pgd_t *pgd;
222 211 pud_t *pud_page;
···
230 221 }
231 222 pud_page = (pud_t*)pgd_page_vaddr(*pgd);
232 223 set_pte_vaddr_pud(pud_page, vaddr, pteval);
224 + }
225 +
226 + pmd_t * __init populate_extra_pmd(unsigned long vaddr)
227 + {
228 + pgd_t *pgd;
229 + pud_t *pud;
230 +
231 + pgd = pgd_offset_k(vaddr);
232 + pud = fill_pud(pgd, vaddr);
233 + return fill_pmd(pud, vaddr);
234 + }
235 +
236 + pte_t * __init populate_extra_pte(unsigned long vaddr)
237 + {
238 + pmd_t *pmd;
239 +
240 + pmd = populate_extra_pmd(vaddr);
241 + return fill_pte(pmd, vaddr);
233 242 }
234 243
235 244 /*
···
318 291 }
319 292 }
320 293
321 - static unsigned long __initdata table_start;
322 - static unsigned long __meminitdata table_end;
323 - static unsigned long __meminitdata table_top;
324 -
325 294 static __ref void *alloc_low_page(unsigned long *phys)
326 295 {
327 - unsigned long pfn = table_end++;
296 + unsigned long pfn = e820_table_end++;
328 297 void *adr;
329 298
330 299 if (after_bootmem) {
···
330 307 return adr;
331 308 }
332 309
333 - if (pfn >= table_top)
310 + if (pfn >= e820_table_top)
334 311 panic("alloc_low_page: ran out of memory");
335 312
336 313 adr = early_memremap(pfn * PAGE_SIZE, PAGE_SIZE);
···
570 547 return phys_pud_init(pud, addr, end, page_size_mask);
571 548 }
572 549
573 - static void __init find_early_table_space(unsigned long end, int use_pse,
574 - int use_gbpages)
575 - {
576 - unsigned long puds, pmds, ptes, tables, start;
577 -
578 - puds = (end + PUD_SIZE - 1) >> PUD_SHIFT;
579 - tables = roundup(puds * sizeof(pud_t), PAGE_SIZE);
580 - if (use_gbpages) {
581 - unsigned long extra;
582 - extra = end - ((end>>PUD_SHIFT) << PUD_SHIFT);
583 - pmds = (extra + PMD_SIZE - 1) >> PMD_SHIFT;
584 - } else
585 - pmds = (end + PMD_SIZE - 1) >> PMD_SHIFT;
586 - tables += roundup(pmds * sizeof(pmd_t), PAGE_SIZE);
587 -
588 - if (use_pse) {
589 - unsigned long extra;
590 - extra = end - ((end>>PMD_SHIFT) << PMD_SHIFT);
591 - ptes = (extra + PAGE_SIZE - 1) >> PAGE_SHIFT;
592 - } else
593 - ptes = (end + PAGE_SIZE - 1) >> PAGE_SHIFT;
594 - tables += roundup(ptes * sizeof(pte_t), PAGE_SIZE);
595 -
596 - /*
597 - * RED-PEN putting page tables only on node 0 could
598 - * cause a hotspot and fill up ZONE_DMA. The page tables
599 - * need roughly 0.5KB per GB.
600 - */
601 - start = 0x8000;
602 - table_start = find_e820_area(start, end, tables, PAGE_SIZE);
603 - if (table_start == -1UL)
604 - panic("Cannot find space for the kernel page tables");
605 -
606 - table_start >>= PAGE_SHIFT;
607 - table_end = table_start;
608 - table_top = table_start + (tables >> PAGE_SHIFT);
609 -
610 - printk(KERN_DEBUG "kernel direct mapping tables up to %lx @ %lx-%lx\n",
611 - end, table_start << PAGE_SHIFT, table_top << PAGE_SHIFT);
612 - }
613 -
614 - static void __init init_gbpages(void)
615 - {
616 - if (direct_gbpages && cpu_has_gbpages)
617 - printk(KERN_INFO "Using GB pages for direct mapping\n");
618 - else
619 - direct_gbpages = 0;
620 - }
621 -
622 - static unsigned long __meminit kernel_physical_mapping_init(unsigned long start,
623 - unsigned long end,
624 - unsigned long page_size_mask)
550 + unsigned long __init
551 + kernel_physical_mapping_init(unsigned long start,
552 + unsigned long end,
553 + unsigned long page_size_mask)
625 554 {
626 555
627 556 unsigned long next, last_map_addr = end;
···
608 633 __flush_tlb_all();
609 634
610 635 return last_map_addr;
611 - }
612 -
613 - struct map_range {
614 - unsigned long start;
615 - unsigned long end;
616 - unsigned page_size_mask;
617 - };
618 -
619 - #define NR_RANGE_MR 5
620 -
621 - static int save_mr(struct map_range *mr, int nr_range,
622 - unsigned long start_pfn, unsigned long end_pfn,
623 - unsigned long page_size_mask)
624 - {
625 -
626 - if (start_pfn < end_pfn) {
627 - if (nr_range >= NR_RANGE_MR)
628 - panic("run out of range for init_memory_mapping\n");
629 - mr[nr_range].start = start_pfn<<PAGE_SHIFT;
630 - mr[nr_range].end = end_pfn<<PAGE_SHIFT;
631 - mr[nr_range].page_size_mask = page_size_mask;
632 - nr_range++;
633 - }
634 -
635 - return nr_range;
636 - }
637 -
638 - /*
639 - * Setup the direct mapping of the physical memory at PAGE_OFFSET.
640 - * This runs before bootmem is initialized and gets pages directly from
641 - * the physical memory. To access them they are temporarily mapped.
642 - */
643 - unsigned long __init_refok init_memory_mapping(unsigned long start,
644 - unsigned long end)
645 - {
646 - unsigned long last_map_addr = 0;
647 - unsigned long page_size_mask = 0;
648 - unsigned long start_pfn, end_pfn;
649 - unsigned long pos;
650 -
651 - struct map_range mr[NR_RANGE_MR];
652 - int nr_range, i;
653 - int use_pse, use_gbpages;
654 -
655 - printk(KERN_INFO "init_memory_mapping: %016lx-%016lx\n", start, end);
656 -
657 - /*
658 - * Find space for the kernel direct mapping tables.
659 - *
660 - * Later we should allocate these tables in the local node of the
661 - * memory mapped. Unfortunately this is done currently before the
662 - * nodes are discovered.
663 - */
664 - if (!after_bootmem)
665 - init_gbpages();
666 -
667 - #ifdef CONFIG_DEBUG_PAGEALLOC
668 - /*
669 - * For CONFIG_DEBUG_PAGEALLOC, identity mapping will use small pages.
670 - * This will simplify cpa(), which otherwise needs to support splitting
671 - * large pages into small in interrupt context, etc.
672 - */
673 - use_pse = use_gbpages = 0;
674 - #else
675 - use_pse = cpu_has_pse;
676 - use_gbpages = direct_gbpages;
677 - #endif
678 -
679 - if (use_gbpages)
680 - page_size_mask |= 1 << PG_LEVEL_1G;
681 - if (use_pse)
682 - page_size_mask |= 1 << PG_LEVEL_2M;
683 -
684 - memset(mr, 0, sizeof(mr));
685 - nr_range = 0;
686 -
687 - /* head if not big page alignment ?*/
688 - start_pfn = start >> PAGE_SHIFT;
689 - pos = start_pfn << PAGE_SHIFT;
690 - end_pfn = ((pos + (PMD_SIZE - 1)) >> PMD_SHIFT)
691 - << (PMD_SHIFT - PAGE_SHIFT);
692 - if (end_pfn > (end >> PAGE_SHIFT))
693 - end_pfn = end >> PAGE_SHIFT;
694 - if (start_pfn < end_pfn) {
695 - nr_range = save_mr(mr, nr_range, start_pfn, end_pfn, 0);
696 - pos = end_pfn << PAGE_SHIFT;
697 - }
698 -
699 - /* big page (2M) range*/
700 - start_pfn = ((pos + (PMD_SIZE - 1))>>PMD_SHIFT)
701 - << (PMD_SHIFT - PAGE_SHIFT);
702 - end_pfn = ((pos + (PUD_SIZE - 1))>>PUD_SHIFT)
703 - << (PUD_SHIFT - PAGE_SHIFT);
704 - if (end_pfn > ((end>>PMD_SHIFT)<<(PMD_SHIFT - PAGE_SHIFT)))
705 - end_pfn = ((end>>PMD_SHIFT)<<(PMD_SHIFT - PAGE_SHIFT));
706 - if (start_pfn < end_pfn) {
707 - nr_range = save_mr(mr, nr_range, start_pfn, end_pfn,
708 - page_size_mask & (1<<PG_LEVEL_2M));
709 - pos = end_pfn << PAGE_SHIFT;
710 - }
711 -
712 - /* big page (1G) range */
713 - start_pfn = ((pos + (PUD_SIZE - 1))>>PUD_SHIFT)
714 - << (PUD_SHIFT - PAGE_SHIFT);
715 - end_pfn = (end >> PUD_SHIFT) << (PUD_SHIFT - PAGE_SHIFT);
716 - if (start_pfn < end_pfn) {
717 - nr_range = save_mr(mr, nr_range, start_pfn, end_pfn,
718 - page_size_mask &
719 - ((1<<PG_LEVEL_2M)|(1<<PG_LEVEL_1G)));
720 - pos = end_pfn << PAGE_SHIFT;
721 - }
722 -
723 - /* tail is not big page (1G) alignment */
724 - start_pfn = ((pos + (PMD_SIZE - 1))>>PMD_SHIFT)
725 - << (PMD_SHIFT - PAGE_SHIFT);
726 - end_pfn = (end >> PMD_SHIFT) << (PMD_SHIFT - PAGE_SHIFT);
727 - if (start_pfn < end_pfn) {
728 - nr_range = save_mr(mr, nr_range, start_pfn, end_pfn,
729 - page_size_mask & (1<<PG_LEVEL_2M));
730 - pos = end_pfn << PAGE_SHIFT;
731 - }
732 -
733 - /* tail is not big page (2M) alignment */
734 - start_pfn = pos>>PAGE_SHIFT;
735 - end_pfn = end>>PAGE_SHIFT;
736 - nr_range = save_mr(mr, nr_range, start_pfn, end_pfn, 0);
737 -
738 - /* try to merge same page size and continuous */
739 - for (i = 0; nr_range > 1 && i < nr_range - 1; i++) {
740 - unsigned long old_start;
741 - if (mr[i].end != mr[i+1].start ||
742 - mr[i].page_size_mask != mr[i+1].page_size_mask)
743 - continue;
744 - /* move it */
745 - old_start = mr[i].start;
746 - memmove(&mr[i], &mr[i+1],
747 - (nr_range - 1 - i) * sizeof (struct map_range));
748 - mr[i--].start = old_start;
749 - nr_range--;
750 - }
751 -
752 - for (i = 0; i < nr_range; i++)
753 - printk(KERN_DEBUG " %010lx - %010lx page %s\n",
754 - mr[i].start, mr[i].end,
755 - (mr[i].page_size_mask & (1<<PG_LEVEL_1G))?"1G":(
756 - (mr[i].page_size_mask & (1<<PG_LEVEL_2M))?"2M":"4k"));
757 -
758 - if (!after_bootmem)
759 - find_early_table_space(end, use_pse, use_gbpages);
760 -
761 - for (i = 0; i < nr_range; i++)
762 - last_map_addr = kernel_physical_mapping_init(
763 - mr[i].start, mr[i].end,
764 - mr[i].page_size_mask);
765 -
766 - if (!after_bootmem)
767 - mmu_cr4_features = read_cr4();
768 - __flush_tlb_all();
769 -
770 - if (!after_bootmem && table_end > table_start)
771 - reserve_early(table_start << PAGE_SHIFT,
772 - table_end << PAGE_SHIFT, "PGTABLE");
773 -
774 - printk(KERN_INFO "last_map_addr: %lx end: %lx\n",
775 - last_map_addr, end);
776 -
777 - if (!after_bootmem)
778 - early_memtest(start, end);
779 -
780 - return last_map_addr >> PAGE_SHIFT;
781 636 }
782 637
783 638 #ifndef CONFIG_NUMA
···
680 875 #endif
681 876
682 877 #endif /* CONFIG_MEMORY_HOTPLUG */
683 -
684 - /*
685 - * devmem_is_allowed() checks to see if /dev/mem access to a certain address
686 - * is valid. The argument is a physical page number.
687 - *
688 - *
689 - * On x86, access has to be given to the first megabyte of ram because that area
690 - * contains bios code and data regions used by X and dosemu and similar apps.
691 - * Access has to be given to non-kernel-ram areas as well, these contain the PCI
692 - * mmio resources as well as potential bios/acpi data regions.
693 - */
694 - int devmem_is_allowed(unsigned long pagenr)
695 - {
696 - if (pagenr <= 256)
697 - return 1;
698 - if (iomem_is_exclusive(pagenr << PAGE_SHIFT))
699 - return 0;
700 - if (!page_is_ram(pagenr))
701 - return 1;
702 - return 0;
703 - }
704 -
705 878
706 879 static struct kcore_list kcore_mem, kcore_vmalloc, kcore_kernel,
707 880 kcore_modules, kcore_vsyscall;
···
766 983 #endif
767 984 }
768 985
769 - #endif
770 -
771 - #ifdef CONFIG_BLK_DEV_INITRD
772 - void free_initrd_mem(unsigned long start, unsigned long end)
773 - {
774 - free_init_pages("initrd memory", start, end);
775 - }
776 986 #endif
777 987
778 988 int __init reserve_bootmem_generic(unsigned long phys, unsigned long len,
+18 -17
arch/x86/mm/ioremap.c
···
38 38 } else {
39 39 VIRTUAL_BUG_ON(x < PAGE_OFFSET);
40 40 x -= PAGE_OFFSET;
41 - VIRTUAL_BUG_ON(system_state == SYSTEM_BOOTING ? x > MAXMEM :
42 - !phys_addr_valid(x));
41 + VIRTUAL_BUG_ON(!phys_addr_valid(x));
43 42 }
44 43 return x;
45 44 }
···
55 56 if (x < PAGE_OFFSET)
56 57 return false;
57 58 x -= PAGE_OFFSET;
58 - if (system_state == SYSTEM_BOOTING ?
59 - x > MAXMEM : !phys_addr_valid(x)) {
59 + if (!phys_addr_valid(x))
60 60 return false;
61 - }
62 61 }
63 62
64 63 return pfn_valid(x >> PAGE_SHIFT);
···
73 76 #ifdef CONFIG_DEBUG_VIRTUAL
74 77 unsigned long __phys_addr(unsigned long x)
75 78 {
76 - /* VMALLOC_* aren't constants; not available at the boot time */
79 + /* VMALLOC_* aren't constants */
77 80 VIRTUAL_BUG_ON(x < PAGE_OFFSET);
78 - VIRTUAL_BUG_ON(system_state != SYSTEM_BOOTING &&
79 - is_vmalloc_addr((void *) x));
81 + VIRTUAL_BUG_ON(__vmalloc_start_set && is_vmalloc_addr((void *) x));
80 82 return x - PAGE_OFFSET;
81 83 }
82 84 EXPORT_SYMBOL(__phys_addr);
···
85 89 {
86 90 if (x < PAGE_OFFSET)
87 91 return false;
88 - if (system_state != SYSTEM_BOOTING && is_vmalloc_addr((void *) x))
92 + if (__vmalloc_start_set && is_vmalloc_addr((void *) x))
93 + return false;
94 + if (x >= FIXADDR_START)
89 95 return false;
90 96 return pfn_valid((x - PAGE_OFFSET) >> PAGE_SHIFT);
91 97 }
···
506 508 return &bm_pte[pte_index(addr)];
507 509 }
508 510
511 + static unsigned long slot_virt[FIX_BTMAPS_SLOTS] __initdata;
512 +
509 513 void __init early_ioremap_init(void)
510 514 {
511 515 pmd_t *pmd;
516 + int i;
512 517
513 518 if (early_ioremap_debug)
514 519 printk(KERN_INFO "early_ioremap_init()\n");
520 +
521 + for (i = 0; i < FIX_BTMAPS_SLOTS; i++)
522 + slot_virt[i] = fix_to_virt(FIX_BTMAP_BEGIN - NR_FIX_BTMAPS*i);
515 523
516 524 pmd = early_ioremap_pmd(fix_to_virt(FIX_BTMAP_BEGIN));
517 525 memset(bm_pte, 0, sizeof(bm_pte));
···
585 581
586 582 static void __iomem *prev_map[FIX_BTMAPS_SLOTS] __initdata;
587 583 static unsigned long prev_size[FIX_BTMAPS_SLOTS] __initdata;
584 +
588 585 static int __init check_early_ioremap_leak(void)
589 586 {
590 587 int count = 0;
···
607 602 }
608 603 late_initcall(check_early_ioremap_leak);
609 604
610 - static void __init __iomem *__early_ioremap(unsigned long phys_addr, unsigned long size, pgprot_t prot)
605 + static void __init __iomem *
606 + __early_ioremap(unsigned long phys_addr, unsigned long size, pgprot_t prot)
611 607 {
612 608 unsigned long offset, last_addr;
613 609 unsigned int nrpages;
···
674 668 --nrpages;
675 669 }
676 670 if (early_ioremap_debug)
677 - printk(KERN_CONT "%08lx + %08lx\n", offset, fix_to_virt(idx0));
671 + printk(KERN_CONT "%08lx + %08lx\n", offset, slot_virt[slot]);
678 672
679 - prev_map[slot] = (void __iomem *)(offset + fix_to_virt(idx0));
673 + prev_map[slot] = (void __iomem *)(offset + slot_virt[slot]);
680 674 return prev_map[slot];
681 675 }
···
743 737 --nrpages;
744 738 }
745 739 prev_map[slot] = NULL;
746 - }
747 -
748 - void __this_fixmap_does_not_exist(void)
749 - {
750 - WARN_ON(1);
751 740 }
+104 -60
arch/x86/mm/kmmio.c
··· 32 32 struct list_head list; 33 33 struct kmmio_fault_page *release_next; 34 34 unsigned long page; /* location of the fault page */ 35 + bool old_presence; /* page presence prior to arming */ 36 + bool armed; 35 37 36 38 /* 37 39 * Number of times this page has been registered as a part 38 40 * of a probe. If zero, page is disarmed and this may be freed. 39 - * Used only by writers (RCU). 41 + * Used only by writers (RCU) and post_kmmio_handler(). 42 + * Protected by kmmio_lock, when linked into kmmio_page_table. 40 43 */ 41 44 int count; 42 45 }; ··· 108 105 return NULL; 109 106 } 110 107 111 - static void set_page_present(unsigned long addr, bool present, 112 - unsigned int *pglevel) 108 + static void set_pmd_presence(pmd_t *pmd, bool present, bool *old) 113 109 { 114 - pteval_t pteval; 115 - pmdval_t pmdval; 110 + pmdval_t v = pmd_val(*pmd); 111 + *old = !!(v & _PAGE_PRESENT); 112 + v &= ~_PAGE_PRESENT; 113 + if (present) 114 + v |= _PAGE_PRESENT; 115 + set_pmd(pmd, __pmd(v)); 116 + } 117 + 118 + static void set_pte_presence(pte_t *pte, bool present, bool *old) 119 + { 120 + pteval_t v = pte_val(*pte); 121 + *old = !!(v & _PAGE_PRESENT); 122 + v &= ~_PAGE_PRESENT; 123 + if (present) 124 + v |= _PAGE_PRESENT; 125 + set_pte_atomic(pte, __pte(v)); 126 + } 127 + 128 + static int set_page_presence(unsigned long addr, bool present, bool *old) 129 + { 116 130 unsigned int level; 117 - pmd_t *pmd; 118 131 pte_t *pte = lookup_address(addr, &level); 119 132 120 133 if (!pte) { 121 134 pr_err("kmmio: no pte for page 0x%08lx\n", addr); 122 - return; 135 + return -1; 123 136 } 124 - 125 - if (pglevel) 126 - *pglevel = level; 127 137 128 138 switch (level) { 129 139 case PG_LEVEL_2M: 130 - pmd = (pmd_t *)pte; 131 - pmdval = pmd_val(*pmd) & ~_PAGE_PRESENT; 132 - if (present) 133 - pmdval |= _PAGE_PRESENT; 134 - set_pmd(pmd, __pmd(pmdval)); 140 + set_pmd_presence((pmd_t *)pte, present, old); 135 141 break; 136 - 137 142 case PG_LEVEL_4K: 138 - pteval = pte_val(*pte) & 
~_PAGE_PRESENT; 139 - if (present) 140 - pteval |= _PAGE_PRESENT; 141 - set_pte_atomic(pte, __pte(pteval)); 143 + set_pte_presence(pte, present, old); 142 144 break; 143 - 144 145 default: 145 146 pr_err("kmmio: unexpected page level 0x%x.\n", level); 146 - return; 147 + return -1; 147 148 } 148 149 149 150 __flush_tlb_one(addr); 151 + return 0; 150 152 } 151 153 152 - /** Mark the given page as not present. Access to it will trigger a fault. */ 153 - static void arm_kmmio_fault_page(unsigned long page, unsigned int *pglevel) 154 + /* 155 + * Mark the given page as not present. Access to it will trigger a fault. 156 + * 157 + * Struct kmmio_fault_page is protected by RCU and kmmio_lock, but the 158 + * protection is ignored here. RCU read lock is assumed held, so the struct 159 + * will not disappear unexpectedly. Furthermore, the caller must guarantee, 160 + * that double arming the same virtual address (page) cannot occur. 161 + * 162 + * Double disarming on the other hand is allowed, and may occur when a fault 163 + * and mmiotrace shutdown happen simultaneously. 164 + */ 165 + static int arm_kmmio_fault_page(struct kmmio_fault_page *f) 154 166 { 155 - set_page_present(page & PAGE_MASK, false, pglevel); 167 + int ret; 168 + WARN_ONCE(f->armed, KERN_ERR "kmmio page already armed.\n"); 169 + if (f->armed) { 170 + pr_warning("kmmio double-arm: page 0x%08lx, ref %d, old %d\n", 171 + f->page, f->count, f->old_presence); 172 + } 173 + ret = set_page_presence(f->page, false, &f->old_presence); 174 + WARN_ONCE(ret < 0, KERN_ERR "kmmio arming 0x%08lx failed.\n", f->page); 175 + f->armed = true; 176 + return ret; 156 177 } 157 178 158 - /** Mark the given page as present. */ 159 - static void disarm_kmmio_fault_page(unsigned long page, unsigned int *pglevel) 179 + /** Restore the given page to saved presence state. 
*/ 180 + static void disarm_kmmio_fault_page(struct kmmio_fault_page *f) 160 181 { 161 - set_page_present(page & PAGE_MASK, true, pglevel); 182 + bool tmp; 183 + int ret = set_page_presence(f->page, f->old_presence, &tmp); 184 + WARN_ONCE(ret < 0, 185 + KERN_ERR "kmmio disarming 0x%08lx failed.\n", f->page); 186 + f->armed = false; 162 187 } 163 188 164 189 /* ··· 233 202 234 203 ctx = &get_cpu_var(kmmio_ctx); 235 204 if (ctx->active) { 236 - disarm_kmmio_fault_page(faultpage->page, NULL); 237 205 if (addr == ctx->addr) { 238 206 /* 239 - * On SMP we sometimes get recursive probe hits on the 240 - * same address. Context is already saved, fall out. 207 + * A second fault on the same page means some other 208 + * condition needs handling by do_page_fault(), the 209 + * page really not being present is the most common. 241 210 */ 242 - pr_debug("kmmio: duplicate probe hit on CPU %d, for " 243 - "address 0x%08lx.\n", 244 - smp_processor_id(), addr); 245 - ret = 1; 246 - goto no_kmmio_ctx; 247 - } 248 - /* 249 - * Prevent overwriting already in-flight context. 250 - * This should not happen, let's hope disarming at least 251 - * prevents a panic. 252 - */ 253 - pr_emerg("kmmio: recursive probe hit on CPU %d, " 211 + pr_debug("kmmio: secondary hit for 0x%08lx CPU %d.\n", 212 + addr, smp_processor_id()); 213 + 214 + if (!faultpage->old_presence) 215 + pr_info("kmmio: unexpected secondary hit for " 216 + "address 0x%08lx on CPU %d.\n", addr, 217 + smp_processor_id()); 218 + } else { 219 + /* 220 + * Prevent overwriting already in-flight context. 221 + * This should not happen, let's hope disarming at 222 + * least prevents a panic. 223 + */ 224 + pr_emerg("kmmio: recursive probe hit on CPU %d, " 254 225 "for address 0x%08lx. 
Ignoring.\n", 255 226 smp_processor_id(), addr); 256 - pr_emerg("kmmio: previous hit was at 0x%08lx.\n", 257 - ctx->addr); 227 + pr_emerg("kmmio: previous hit was at 0x%08lx.\n", 228 + ctx->addr); 229 + disarm_kmmio_fault_page(faultpage); 230 + } 258 231 goto no_kmmio_ctx; 259 232 } 260 233 ctx->active++; ··· 279 244 regs->flags &= ~X86_EFLAGS_IF; 280 245 281 246 /* Now we set present bit in PTE and single step. */ 282 - disarm_kmmio_fault_page(ctx->fpage->page, NULL); 247 + disarm_kmmio_fault_page(ctx->fpage); 283 248 284 249 /* 285 250 * If another cpu accesses the same page while we are stepping, ··· 310 275 struct kmmio_context *ctx = &get_cpu_var(kmmio_ctx); 311 276 312 277 if (!ctx->active) { 313 - pr_debug("kmmio: spurious debug trap on CPU %d.\n", 278 + pr_warning("kmmio: spurious debug trap on CPU %d.\n", 314 279 smp_processor_id()); 315 280 goto out; 316 281 } ··· 318 283 if (ctx->probe && ctx->probe->post_handler) 319 284 ctx->probe->post_handler(ctx->probe, condition, regs); 320 285 321 - arm_kmmio_fault_page(ctx->fpage->page, NULL); 286 + /* Prevent racing against release_kmmio_fault_page(). 
*/ 287 + spin_lock(&kmmio_lock); 288 + if (ctx->fpage->count) 289 + arm_kmmio_fault_page(ctx->fpage); 290 + spin_unlock(&kmmio_lock); 322 291 323 292 regs->flags &= ~X86_EFLAGS_TF; 324 293 regs->flags |= ctx->saved_flags; ··· 354 315 f = get_kmmio_fault_page(page); 355 316 if (f) { 356 317 if (!f->count) 357 - arm_kmmio_fault_page(f->page, NULL); 318 + arm_kmmio_fault_page(f); 358 319 f->count++; 359 320 return 0; 360 321 } 361 322 362 - f = kmalloc(sizeof(*f), GFP_ATOMIC); 323 + f = kzalloc(sizeof(*f), GFP_ATOMIC); 363 324 if (!f) 364 325 return -1; 365 326 366 327 f->count = 1; 367 328 f->page = page; 368 - list_add_rcu(&f->list, kmmio_page_list(f->page)); 369 329 370 - arm_kmmio_fault_page(f->page, NULL); 330 + if (arm_kmmio_fault_page(f)) { 331 + kfree(f); 332 + return -1; 333 + } 334 + 335 + list_add_rcu(&f->list, kmmio_page_list(f->page)); 371 336 372 337 return 0; 373 338 } ··· 390 347 f->count--; 391 348 BUG_ON(f->count < 0); 392 349 if (!f->count) { 393 - disarm_kmmio_fault_page(f->page, NULL); 350 + disarm_kmmio_fault_page(f); 394 351 f->release_next = *release_list; 395 352 *release_list = f; 396 353 } ··· 451 408 452 409 static void remove_kmmio_fault_pages(struct rcu_head *head) 453 410 { 454 - struct kmmio_delayed_release *dr = container_of( 455 - head, 456 - struct kmmio_delayed_release, 457 - rcu); 411 + struct kmmio_delayed_release *dr = 412 + container_of(head, struct kmmio_delayed_release, rcu); 458 413 struct kmmio_fault_page *p = dr->release_list; 459 414 struct kmmio_fault_page **prevp = &dr->release_list; 460 415 unsigned long flags; 416 + 461 417 spin_lock_irqsave(&kmmio_lock, flags); 462 418 while (p) { 463 - if (!p->count) 419 + if (!p->count) { 464 420 list_del_rcu(&p->list); 465 - else 421 + prevp = &p->release_next; 422 + } else { 466 423 *prevp = p->release_next; 467 - prevp = &p->release_next; 424 + } 468 425 p = p->release_next; 469 426 } 470 427 spin_unlock_irqrestore(&kmmio_lock, flags); 428 + 471 429 /* This is the real RCU 
destroy call. */ 472 430 call_rcu(&dr->rcu, rcu_free_kmmio_fault_pages); 473 431 }
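The kmmio hunks above replace set_page_present() with helpers that also report the page's prior presence, so disarming restores the saved state instead of unconditionally setting _PAGE_PRESENT. A minimal userspace sketch of that save-and-toggle pattern (PAGE_PRESENT is a stand-in for the kernel's _PAGE_PRESENT bit; this is not the kernel code itself):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Stand-in for the kernel's _PAGE_PRESENT bit. */
#define PAGE_PRESENT 0x1u

/* Mirror of set_pte_presence()/set_pmd_presence(): record the old
 * presence bit, clear it, and optionally set it again. */
static uint64_t set_presence(uint64_t entry, bool present, bool *old)
{
	*old = !!(entry & PAGE_PRESENT);
	entry &= ~(uint64_t)PAGE_PRESENT;
	if (present)
		entry |= PAGE_PRESENT;
	return entry;
}
```

Saving `*old` at arm time is what lets disarm_kmmio_fault_page() restore a page that was already non-present, which the secondary-hit handling in kmmio_handler() relies on.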
+3
arch/x86/mm/memtest.c
··· 100 100 { 101 101 if (arg) 102 102 memtest_pattern = simple_strtoul(arg, NULL, 0); 103 + else 104 + memtest_pattern = ARRAY_SIZE(patterns); 105 + 103 106 return 0; 104 107 } 105 108
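The memtest change above makes a bare `memtest` boot parameter mean "run every pattern" rather than leaving the count unset. A sketch of that parsing behavior, using strtoul() in place of the kernel's simple_strtoul() (NPATTERNS is a hypothetical stand-in for ARRAY_SIZE(patterns)):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-in for ARRAY_SIZE(patterns). */
#define NPATTERNS 17

/* With an argument, parse it in base 0 (decimal, octal, or 0x hex),
 * matching simple_strtoul(arg, NULL, 0); with no argument, default to
 * running every pattern, as the patch does. */
static unsigned long parse_memtest(const char *arg)
{
	if (arg)
		return strtoul(arg, NULL, 0);
	return NPATTERNS;
}
```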
+3 -2
arch/x86/mm/numa_32.c
··· 416 416 for_each_online_node(nid) 417 417 propagate_e820_map_node(nid); 418 418 419 - for_each_online_node(nid) 419 + for_each_online_node(nid) { 420 420 memset(NODE_DATA(nid), 0, sizeof(struct pglist_data)); 421 + NODE_DATA(nid)->bdata = &bootmem_node_data[nid]; 422 + } 421 423 422 - NODE_DATA(0)->bdata = &bootmem_node_data[0]; 423 424 setup_bootmem_allocator(); 424 425 } 425 426
+57 -13
arch/x86/mm/testmmiotrace.c
··· 1 1 /* 2 - * Written by Pekka Paalanen, 2008 <pq@iki.fi> 2 + * Written by Pekka Paalanen, 2008-2009 <pq@iki.fi> 3 3 */ 4 4 #include <linux/module.h> 5 5 #include <linux/io.h> ··· 9 9 10 10 static unsigned long mmio_address; 11 11 module_param(mmio_address, ulong, 0); 12 - MODULE_PARM_DESC(mmio_address, "Start address of the mapping of 16 kB."); 12 + MODULE_PARM_DESC(mmio_address, " Start address of the mapping of 16 kB " 13 + "(or 8 MB if read_far is non-zero)."); 14 + 15 + static unsigned long read_far = 0x400100; 16 + module_param(read_far, ulong, 0); 17 + MODULE_PARM_DESC(read_far, " Offset of a 32-bit read within 8 MB " 18 + "(default: 0x400100)."); 19 + 20 + static unsigned v16(unsigned i) 21 + { 22 + return i * 12 + 7; 23 + } 24 + 25 + static unsigned v32(unsigned i) 26 + { 27 + return i * 212371 + 13; 28 + } 13 29 14 30 static void do_write_test(void __iomem *p) 15 31 { 16 32 unsigned int i; 33 + pr_info(MODULE_NAME ": write test.\n"); 17 34 mmiotrace_printk("Write test.\n"); 35 + 18 36 for (i = 0; i < 256; i++) 19 37 iowrite8(i, p + i); 38 + 20 39 for (i = 1024; i < (5 * 1024); i += 2) 21 - iowrite16(i * 12 + 7, p + i); 40 + iowrite16(v16(i), p + i); 41 + 22 42 for (i = (5 * 1024); i < (16 * 1024); i += 4) 23 - iowrite32(i * 212371 + 13, p + i); 43 + iowrite32(v32(i), p + i); 24 44 } 25 45 26 46 static void do_read_test(void __iomem *p) 27 47 { 28 48 unsigned int i; 49 + unsigned errs[3] = { 0 }; 50 + pr_info(MODULE_NAME ": read test.\n"); 29 51 mmiotrace_printk("Read test.\n"); 52 + 30 53 for (i = 0; i < 256; i++) 31 - ioread8(p + i); 54 + if (ioread8(p + i) != i) 55 + ++errs[0]; 56 + 32 57 for (i = 1024; i < (5 * 1024); i += 2) 33 - ioread16(p + i); 58 + if (ioread16(p + i) != v16(i)) 59 + ++errs[1]; 60 + 34 61 for (i = (5 * 1024); i < (16 * 1024); i += 4) 35 - ioread32(p + i); 62 + if (ioread32(p + i) != v32(i)) 63 + ++errs[2]; 64 + 65 + mmiotrace_printk("Read errors: 8-bit %d, 16-bit %d, 32-bit %d.\n", 66 + errs[0], errs[1], errs[2]); 36 67 } 37 68 
38 - static void do_test(void) 69 + static void do_read_far_test(void __iomem *p) 39 70 { 40 - void __iomem *p = ioremap_nocache(mmio_address, 0x4000); 71 + pr_info(MODULE_NAME ": read far test.\n"); 72 + mmiotrace_printk("Read far test.\n"); 73 + 74 + ioread32(p + read_far); 75 + } 76 + 77 + static void do_test(unsigned long size) 78 + { 79 + void __iomem *p = ioremap_nocache(mmio_address, size); 41 80 if (!p) { 42 81 pr_err(MODULE_NAME ": could not ioremap, aborting.\n"); 43 82 return; ··· 84 45 mmiotrace_printk("ioremap returned %p.\n", p); 85 46 do_write_test(p); 86 47 do_read_test(p); 48 + if (read_far && read_far < size - 4) 49 + do_read_far_test(p); 87 50 iounmap(p); 88 51 } 89 52 90 53 static int __init init(void) 91 54 { 55 + unsigned long size = (read_far) ? (8 << 20) : (16 << 10); 56 + 92 57 if (mmio_address == 0) { 93 58 pr_err(MODULE_NAME ": you have to use the module argument " 94 59 "mmio_address.\n"); ··· 101 58 return -ENXIO; 102 59 } 103 60 104 - pr_warning(MODULE_NAME ": WARNING: mapping 16 kB @ 0x%08lx " 105 - "in PCI address space, and writing " 106 - "rubbish in there.\n", mmio_address); 107 - do_test(); 61 + pr_warning(MODULE_NAME ": WARNING: mapping %lu kB @ 0x%08lx in PCI " 62 + "address space, and writing 16 kB of rubbish in there.\n", 63 + size >> 10, mmio_address); 64 + do_test(size); 65 + pr_info(MODULE_NAME ": All done.\n"); 108 66 return 0; 109 67 } 110 68
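The testmmiotrace rework above turns blind reads into a verify pass: each offset's expected value is derived by v16()/v32(), so the read loop can count mismatches. A sketch of that write-then-verify scheme for the 16-bit range, with an ordinary buffer standing in for the ioremap'd region:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Same value derivation as the module's v16(). */
static unsigned v16(unsigned i)
{
	return i * 12 + 7;
}

/* Write v16(i) at each 16-bit slot, then read it back and count
 * mismatches, mirroring do_write_test()/do_read_test(). */
static unsigned rw_check_16(uint8_t *p)
{
	unsigned i, errs = 0;

	for (i = 1024; i < 5 * 1024; i += 2) {
		uint16_t w = (uint16_t)v16(i);
		memcpy(p + i, &w, sizeof(w));	/* iowrite16() analogue */
	}
	for (i = 1024; i < 5 * 1024; i += 2) {
		uint16_t r;
		memcpy(&r, p + i, sizeof(r));	/* ioread16() analogue */
		if (r != (uint16_t)v16(i))
			++errs;
	}
	return errs;
}
```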
+6 -4
arch/x86/xen/enlighten.c
··· 103 103 104 104 vcpup = &per_cpu(xen_vcpu_info, cpu); 105 105 106 - info.mfn = virt_to_mfn(vcpup); 106 + info.mfn = arbitrary_virt_to_mfn(vcpup); 107 107 info.offset = offset_in_page(vcpup); 108 108 109 109 printk(KERN_DEBUG "trying to map vcpu_info %d at %p, mfn %llx, offset %d\n", ··· 301 301 frames = mcs.args; 302 302 303 303 for (f = 0; va < dtr->address + size; va += PAGE_SIZE, f++) { 304 - frames[f] = virt_to_mfn(va); 304 + frames[f] = arbitrary_virt_to_mfn((void *)va); 305 + 305 306 make_lowmem_page_readonly((void *)va); 307 + make_lowmem_page_readonly(mfn_to_virt(frames[f])); 306 308 } 307 309 308 310 MULTI_set_gdt(mcs.mc, frames, size / sizeof(struct desc_struct)); ··· 316 314 unsigned int cpu, unsigned int i) 317 315 { 318 316 struct desc_struct *gdt = get_cpu_gdt_table(cpu); 319 - xmaddr_t maddr = virt_to_machine(&gdt[GDT_ENTRY_TLS_MIN+i]); 317 + xmaddr_t maddr = arbitrary_virt_to_machine(&gdt[GDT_ENTRY_TLS_MIN+i]); 320 318 struct multicall_space mc = __xen_mc_entry(0); 321 319 322 320 MULTI_update_descriptor(mc.mc, maddr.maddr, t->tls_array[i]); ··· 490 488 break; 491 489 492 490 default: { 493 - xmaddr_t maddr = virt_to_machine(&dt[entry]); 491 + xmaddr_t maddr = arbitrary_virt_to_machine(&dt[entry]); 494 492 495 493 xen_mc_flush(); 496 494 if (HYPERVISOR_update_descriptor(maddr.maddr, *(u64 *)desc))
+7
arch/x86/xen/mmu.c
··· 276 276 p2m_top[topidx][idx] = mfn; 277 277 } 278 278 279 + unsigned long arbitrary_virt_to_mfn(void *vaddr) 280 + { 281 + xmaddr_t maddr = arbitrary_virt_to_machine(vaddr); 282 + 283 + return PFN_DOWN(maddr.maddr); 284 + } 285 + 279 286 xmaddr_t arbitrary_virt_to_machine(void *vaddr) 280 287 { 281 288 unsigned long address = (unsigned long)vaddr;
+6 -2
arch/x86/xen/smp.c
··· 219 219 { 220 220 struct vcpu_guest_context *ctxt; 221 221 struct desc_struct *gdt; 222 + unsigned long gdt_mfn; 222 223 223 224 if (cpumask_test_and_set_cpu(cpu, xen_cpu_initialized_map)) 224 225 return 0; ··· 249 248 ctxt->ldt_ents = 0; 250 249 251 250 BUG_ON((unsigned long)gdt & ~PAGE_MASK); 252 - make_lowmem_page_readonly(gdt); 253 251 254 - ctxt->gdt_frames[0] = virt_to_mfn(gdt); 252 + gdt_mfn = arbitrary_virt_to_mfn(gdt); 253 + make_lowmem_page_readonly(gdt); 254 + make_lowmem_page_readonly(mfn_to_virt(gdt_mfn)); 255 + 256 + ctxt->gdt_frames[0] = gdt_mfn; 255 257 ctxt->gdt_ents = GDT_ENTRIES; 256 258 257 259 ctxt->user_regs.cs = __KERNEL_CS;
-3
arch/xtensa/Kconfig
··· 103 103 help 104 104 Can we use information of configuration file? 105 105 106 - config HIGHMEM 107 - bool "High memory support" 108 - 109 106 endmenu 110 107 111 108 menu "Platform options"
+2
arch/xtensa/kernel/setup.c
··· 44 44 #include <asm/setup.h> 45 45 #include <asm/param.h> 46 46 47 + #include <platform/hardware.h> 48 + 47 49 #if defined(CONFIG_VGA_CONSOLE) || defined(CONFIG_DUMMY_CONSOLE) 48 50 struct screen_info screen_info = { 0, 24, 0, 0, 0, 80, 0, 0, 0, 24, 1, 16}; 49 51 #endif
+1
arch/xtensa/kernel/traps.c
··· 30 30 #include <linux/stringify.h> 31 31 #include <linux/kallsyms.h> 32 32 #include <linux/delay.h> 33 + #include <linux/hardirq.h> 33 34 34 35 #include <asm/ptrace.h> 35 36 #include <asm/timex.h>
+1
arch/xtensa/mm/fault.c
··· 14 14 15 15 #include <linux/mm.h> 16 16 #include <linux/module.h> 17 + #include <linux/hardirq.h> 17 18 #include <asm/mmu_context.h> 18 19 #include <asm/cacheflush.h> 19 20 #include <asm/hardirq.h>
+2 -4
arch/xtensa/platforms/iss/console.c
··· 140 140 } 141 141 142 142 143 - static void rs_put_char(struct tty_struct *tty, unsigned char ch) 143 + static int rs_put_char(struct tty_struct *tty, unsigned char ch) 144 144 { 145 145 char buf[2]; 146 - 147 - if (!tty) 148 - return; 149 146 150 147 buf[0] = ch; 151 148 buf[1] = '\0'; /* Is this NULL necessary? */ 152 149 __simc (SYS_write, 1, (unsigned long) buf, 1, 0, 0); 150 + return 1; 153 151 } 154 152 155 153 static void rs_flush_chars(struct tty_struct *tty)
+9 -16
block/blk-merge.c
··· 39 39 } 40 40 41 41 static unsigned int __blk_recalc_rq_segments(struct request_queue *q, 42 - struct bio *bio, 43 - unsigned int *seg_size_ptr) 42 + struct bio *bio) 44 43 { 45 44 unsigned int phys_size; 46 45 struct bio_vec *bv, *bvprv = NULL; 47 46 int cluster, i, high, highprv = 1; 48 47 unsigned int seg_size, nr_phys_segs; 49 - struct bio *fbio; 48 + struct bio *fbio, *bbio; 50 49 51 50 if (!bio) 52 51 return 0; ··· 86 87 seg_size = bv->bv_len; 87 88 highprv = high; 88 89 } 90 + bbio = bio; 89 91 } 90 92 91 - if (seg_size_ptr) 92 - *seg_size_ptr = seg_size; 93 + if (nr_phys_segs == 1 && seg_size > fbio->bi_seg_front_size) 94 + fbio->bi_seg_front_size = seg_size; 95 + if (seg_size > bbio->bi_seg_back_size) 96 + bbio->bi_seg_back_size = seg_size; 93 97 94 98 return nr_phys_segs; 95 99 } 96 100 97 101 void blk_recalc_rq_segments(struct request *rq) 98 102 { 99 - unsigned int seg_size = 0, phys_segs; 100 - 101 - phys_segs = __blk_recalc_rq_segments(rq->q, rq->bio, &seg_size); 102 - 103 - if (phys_segs == 1 && seg_size > rq->bio->bi_seg_front_size) 104 - rq->bio->bi_seg_front_size = seg_size; 105 - if (seg_size > rq->biotail->bi_seg_back_size) 106 - rq->biotail->bi_seg_back_size = seg_size; 107 - 108 - rq->nr_phys_segments = phys_segs; 103 + rq->nr_phys_segments = __blk_recalc_rq_segments(rq->q, rq->bio); 109 104 } 110 105 111 106 void blk_recount_segments(struct request_queue *q, struct bio *bio) ··· 107 114 struct bio *nxt = bio->bi_next; 108 115 109 116 bio->bi_next = NULL; 110 - bio->bi_phys_segments = __blk_recalc_rq_segments(q, bio, NULL); 117 + bio->bi_phys_segments = __blk_recalc_rq_segments(q, bio); 111 118 bio->bi_next = nxt; 112 119 bio->bi_flags |= (1 << BIO_SEG_VALID); 113 120 }
+1 -1
block/blktrace.c
··· 363 363 if (!bt->sequence) 364 364 goto err; 365 365 366 - bt->msg_data = __alloc_percpu(BLK_TN_MAX_MSG); 366 + bt->msg_data = __alloc_percpu(BLK_TN_MAX_MSG, __alignof__(char)); 367 367 if (!bt->msg_data) 368 368 goto err; 369 369
+13 -2
crypto/api.c
··· 215 215 mask &= ~(CRYPTO_ALG_LARVAL | CRYPTO_ALG_DEAD); 216 216 type &= mask; 217 217 218 - alg = try_then_request_module(crypto_alg_lookup(name, type, mask), 219 - name); 218 + alg = crypto_alg_lookup(name, type, mask); 219 + if (!alg) { 220 + char tmp[CRYPTO_MAX_ALG_NAME]; 221 + 222 + request_module(name); 223 + 224 + if (!((type ^ CRYPTO_ALG_NEED_FALLBACK) & mask) && 225 + snprintf(tmp, sizeof(tmp), "%s-all", name) < sizeof(tmp)) 226 + request_module(tmp); 227 + 228 + alg = crypto_alg_lookup(name, type, mask); 229 + } 230 + 220 231 if (alg) 221 232 return crypto_is_larval(alg) ? crypto_larval_wait(alg) : alg; 222 233
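The crypto/api.c hunk above builds a "<name>-all" alias and requests that module too, but only when snprintf() reports the whole string fit. A sketch of that truncation-checked name construction (the buffer size stands in for CRYPTO_MAX_ALG_NAME):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Build "<name>-all" into tmp; return nonzero only if the whole alias
 * fit, mirroring the `snprintf(...) < sizeof(tmp)` check in the patch. */
static int fallback_alias(const char *name, char *tmp, size_t len)
{
	return snprintf(tmp, len, "%s-all", name) < (int)len;
}
```

This is why padlock-aes/padlock-sha switch their MODULE_ALIAS entries to the "-all" forms: request_module("aes-all") now resolves to them.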
+2 -2
drivers/acpi/processor_perflib.c
··· 516 516 continue; 517 517 } 518 518 519 - if (!performance || !percpu_ptr(performance, i)) { 519 + if (!performance || !per_cpu_ptr(performance, i)) { 520 520 retval = -EINVAL; 521 521 continue; 522 522 } 523 523 524 - pr->performance = percpu_ptr(performance, i); 524 + pr->performance = per_cpu_ptr(performance, i); 525 525 cpumask_set_cpu(i, pr->performance->shared_cpu_map); 526 526 if (acpi_processor_get_psd(pr)) { 527 527 retval = -EINVAL;
+12 -12
drivers/ata/ahci.c
··· 582 582 { PCI_VDEVICE(NVIDIA, 0x0abd), board_ahci }, /* MCP79 */ 583 583 { PCI_VDEVICE(NVIDIA, 0x0abe), board_ahci }, /* MCP79 */ 584 584 { PCI_VDEVICE(NVIDIA, 0x0abf), board_ahci }, /* MCP79 */ 585 - { PCI_VDEVICE(NVIDIA, 0x0bc8), board_ahci }, /* MCP7B */ 586 - { PCI_VDEVICE(NVIDIA, 0x0bc9), board_ahci }, /* MCP7B */ 587 - { PCI_VDEVICE(NVIDIA, 0x0bca), board_ahci }, /* MCP7B */ 588 - { PCI_VDEVICE(NVIDIA, 0x0bcb), board_ahci }, /* MCP7B */ 589 - { PCI_VDEVICE(NVIDIA, 0x0bcc), board_ahci }, /* MCP7B */ 590 - { PCI_VDEVICE(NVIDIA, 0x0bcd), board_ahci }, /* MCP7B */ 591 - { PCI_VDEVICE(NVIDIA, 0x0bce), board_ahci }, /* MCP7B */ 592 - { PCI_VDEVICE(NVIDIA, 0x0bcf), board_ahci }, /* MCP7B */ 593 - { PCI_VDEVICE(NVIDIA, 0x0bc4), board_ahci }, /* MCP7B */ 594 - { PCI_VDEVICE(NVIDIA, 0x0bc5), board_ahci }, /* MCP7B */ 595 - { PCI_VDEVICE(NVIDIA, 0x0bc6), board_ahci }, /* MCP7B */ 596 - { PCI_VDEVICE(NVIDIA, 0x0bc7), board_ahci }, /* MCP7B */ 585 + { PCI_VDEVICE(NVIDIA, 0x0d84), board_ahci }, /* MCP89 */ 586 + { PCI_VDEVICE(NVIDIA, 0x0d85), board_ahci }, /* MCP89 */ 587 + { PCI_VDEVICE(NVIDIA, 0x0d86), board_ahci }, /* MCP89 */ 588 + { PCI_VDEVICE(NVIDIA, 0x0d87), board_ahci }, /* MCP89 */ 589 + { PCI_VDEVICE(NVIDIA, 0x0d88), board_ahci }, /* MCP89 */ 590 + { PCI_VDEVICE(NVIDIA, 0x0d89), board_ahci }, /* MCP89 */ 591 + { PCI_VDEVICE(NVIDIA, 0x0d8a), board_ahci }, /* MCP89 */ 592 + { PCI_VDEVICE(NVIDIA, 0x0d8b), board_ahci }, /* MCP89 */ 593 + { PCI_VDEVICE(NVIDIA, 0x0d8c), board_ahci }, /* MCP89 */ 594 + { PCI_VDEVICE(NVIDIA, 0x0d8d), board_ahci }, /* MCP89 */ 595 + { PCI_VDEVICE(NVIDIA, 0x0d8e), board_ahci }, /* MCP89 */ 596 + { PCI_VDEVICE(NVIDIA, 0x0d8f), board_ahci }, /* MCP89 */ 597 597 598 598 /* SiS */ 599 599 { PCI_VDEVICE(SI, 0x1184), board_ahci }, /* SiS 966 */
+8 -6
drivers/ata/libata-core.c
··· 1322 1322 { 1323 1323 if (ata_id_has_lba(id)) { 1324 1324 if (ata_id_has_lba48(id)) 1325 - return ata_id_u64(id, 100); 1325 + return ata_id_u64(id, ATA_ID_LBA_CAPACITY_2); 1326 1326 else 1327 - return ata_id_u32(id, 60); 1327 + return ata_id_u32(id, ATA_ID_LBA_CAPACITY); 1328 1328 } else { 1329 1329 if (ata_id_current_chs_valid(id)) 1330 - return ata_id_u32(id, 57); 1330 + return id[ATA_ID_CUR_CYLS] * id[ATA_ID_CUR_HEADS] * 1331 + id[ATA_ID_CUR_SECTORS]; 1331 1332 else 1332 - return id[1] * id[3] * id[6]; 1333 + return id[ATA_ID_CYLS] * id[ATA_ID_HEADS] * 1334 + id[ATA_ID_SECTORS]; 1333 1335 } 1334 1336 } 1335 1337 ··· 4614 4612 VPRINTK("unmapping %u sg elements\n", qc->n_elem); 4615 4613 4616 4614 if (qc->n_elem) 4617 - dma_unmap_sg(ap->dev, sg, qc->n_elem, dir); 4615 + dma_unmap_sg(ap->dev, sg, qc->orig_n_elem, dir); 4618 4616 4619 4617 qc->flags &= ~ATA_QCFLAG_DMAMAP; 4620 4618 qc->sg = NULL; ··· 4729 4727 return -1; 4730 4728 4731 4729 DPRINTK("%d sg elements mapped\n", n_elem); 4732 - 4730 + qc->orig_n_elem = qc->n_elem; 4733 4731 qc->n_elem = n_elem; 4734 4732 qc->flags |= ATA_QCFLAG_DMAMAP; 4735 4733
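The libata-core hunk above replaces bare identify-data word indices with the ATA_ID_* names; the CHS fallback is still cylinders * heads * sectors-per-track. A sketch of that calculation with the same word indices:

```c
#include <assert.h>
#include <stdint.h>

/* Identify-data word indices, matching the ATA_ID_* names the patch
 * uses in place of bare 1/3/6. */
enum { ATA_ID_CYLS = 1, ATA_ID_HEADS = 3, ATA_ID_SECTORS = 6 };

/* CHS capacity in sectors, for devices without LBA support. */
static uint64_t chs_capacity(const uint16_t *id)
{
	return (uint64_t)id[ATA_ID_CYLS] * id[ATA_ID_HEADS] *
	       id[ATA_ID_SECTORS];
}
```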
+5 -2
drivers/ata/libata-eh.c
··· 2423 2423 } 2424 2424 2425 2425 /* prereset() might have cleared ATA_EH_RESET. If so, 2426 - * bang classes and return. 2426 + * bang classes, thaw and return. 2427 2427 */ 2428 2428 if (reset && !(ehc->i.action & ATA_EH_RESET)) { 2429 2429 ata_for_each_dev(dev, link, ALL) 2430 2430 classes[dev->devno] = ATA_DEV_NONE; 2431 + if ((ap->pflags & ATA_PFLAG_FROZEN) && 2432 + ata_is_host_link(link)) 2433 + ata_eh_thaw_port(ap); 2431 2434 rc = 0; 2432 2435 goto out; 2433 2436 } ··· 2904 2901 int i; 2905 2902 2906 2903 for (i = 0; i < ATA_EH_UA_TRIES; i++) { 2907 - u8 sense_buffer[SCSI_SENSE_BUFFERSIZE]; 2904 + u8 *sense_buffer = dev->link->ap->sector_buf; 2908 2905 u8 sense_key = 0; 2909 2906 unsigned int err_mask; 2910 2907
+1 -1
drivers/ata/sata_nv.c
··· 2523 2523 module_init(nv_init); 2524 2524 module_exit(nv_exit); 2525 2525 module_param_named(adma, adma_enabled, bool, 0444); 2526 - MODULE_PARM_DESC(adma, "Enable use of ADMA (Default: true)"); 2526 + MODULE_PARM_DESC(adma, "Enable use of ADMA (Default: false)"); 2527 2527 module_param_named(swncq, swncq_enabled, bool, 0444); 2528 2528 MODULE_PARM_DESC(swncq, "Enable use of SWNCQ (Default: true)"); 2529 2529
+1 -1
drivers/base/node.c
··· 303 303 sect_start_pfn = section_nr_to_pfn(mem_blk->phys_index); 304 304 sect_end_pfn = sect_start_pfn + PAGES_PER_SECTION - 1; 305 305 for (pfn = sect_start_pfn; pfn <= sect_end_pfn; pfn++) { 306 - unsigned int nid; 306 + int nid; 307 307 308 308 nid = get_nid_for_pfn(pfn); 309 309 if (nid < 0)
+1 -1
drivers/block/aoe/aoedev.c
··· 173 173 return; 174 174 while (atomic_read(&skb_shinfo(skb)->dataref) != 1 && i-- > 0) 175 175 msleep(Sms); 176 - if (i <= 0) { 176 + if (i < 0) { 177 177 printk(KERN_ERR 178 178 "aoe: %s holds ref: %s\n", 179 179 skb->dev ? skb->dev->name : "netif",
+3 -5
drivers/block/cciss.c
··· 3606 3606 if (cciss_hard_reset_controller(pdev) || cciss_reset_msi(pdev)) 3607 3607 return -ENODEV; 3608 3608 3609 - /* Some devices (notably the HP Smart Array 5i Controller) 3610 - need a little pause here */ 3611 - schedule_timeout_uninterruptible(30*HZ); 3612 - 3613 - /* Now try to get the controller to respond to a no-op */ 3609 + /* Now try to get the controller to respond to a no-op. Some 3610 + devices (notably the HP Smart Array 5i Controller) need 3611 + up to 30 seconds to respond. */ 3614 3612 for (i=0; i<30; i++) { 3615 3613 if (cciss_noop(pdev) == 0) 3616 3614 break;
+1 -2
drivers/block/loop.c
··· 392 392 struct loop_device *lo = p->lo; 393 393 struct page *page = buf->page; 394 394 sector_t IV; 395 - size_t size; 396 - int ret; 395 + int size, ret; 397 396 398 397 ret = buf->ops->confirm(pipe, buf); 399 398 if (unlikely(ret))
+2
drivers/block/xen-blkfront.c
··· 977 977 break; 978 978 979 979 case XenbusStateClosing: 980 + if (info->gd == NULL) 981 + xenbus_dev_fatal(dev, -ENODEV, "gd is NULL"); 980 982 bd = bdget_disk(info->gd, 0); 981 983 if (bd == NULL) 982 984 xenbus_dev_fatal(dev, -ENODEV, "bdget failed");
+9 -4
drivers/char/agp/amd64-agp.c
··· 271 271 nb_order = (nb_order >> 1) & 7; 272 272 pci_read_config_dword(nb, AMD64_GARTAPERTUREBASE, &nb_base); 273 273 nb_aper = nb_base << 25; 274 - if (agp_aperture_valid(nb_aper, (32*1024*1024)<<nb_order)) { 275 - return 0; 276 - } 277 274 278 275 /* Northbridge seems to contain crap. Try the AGP bridge. */ 279 276 280 277 pci_read_config_word(agp, cap+0x14, &apsize); 281 - if (apsize == 0xffff) 278 + if (apsize == 0xffff) { 279 + if (agp_aperture_valid(nb_aper, (32*1024*1024)<<nb_order)) 280 + return 0; 282 281 return -1; 282 + } 283 283 284 284 apsize &= 0xfff; 285 285 /* Some BIOS use weird encodings not in the AGPv3 table. */ ··· 299 299 dev_info(&agp->dev, "aperture size %u MB is not right, using settings from NB\n", 300 300 32 << order); 301 301 order = nb_order; 302 + } 303 + 304 + if (nb_order >= order) { 305 + if (agp_aperture_valid(nb_aper, (32*1024*1024)<<nb_order)) 306 + return 0; 302 307 } 303 308 304 309 dev_info(&agp->dev, "aperture from AGP @ %Lx size %u MB\n",
+5 -3
drivers/char/agp/intel-agp.c
··· 633 633 break; 634 634 } 635 635 } 636 - if (gtt_entries > 0) 636 + if (gtt_entries > 0) { 637 637 dev_info(&agp_bridge->dev->dev, "detected %dK %s memory\n", 638 638 gtt_entries / KB(1), local ? "local" : "stolen"); 639 - else 639 + gtt_entries /= KB(4); 640 + } else { 640 641 dev_info(&agp_bridge->dev->dev, 641 642 "no pre-allocated video memory detected\n"); 642 - gtt_entries /= KB(4); 643 + gtt_entries = 0; 644 + } 643 645 644 646 intel_private.gtt_entries = gtt_entries; 645 647 }
+18 -33
drivers/cpufreq/cpufreq.c
··· 754 754 .release = cpufreq_sysfs_release, 755 755 }; 756 756 757 - static struct kobj_type ktype_empty_cpufreq = { 758 - .sysfs_ops = &sysfs_ops, 759 - .release = cpufreq_sysfs_release, 760 - }; 761 - 762 757 763 758 /** 764 759 * cpufreq_add_dev - add a CPU device ··· 887 892 memcpy(&new_policy, policy, sizeof(struct cpufreq_policy)); 888 893 889 894 /* prepare interface data */ 890 - if (!cpufreq_driver->hide_interface) { 891 - ret = kobject_init_and_add(&policy->kobj, &ktype_cpufreq, 892 - &sys_dev->kobj, "cpufreq"); 895 + ret = kobject_init_and_add(&policy->kobj, &ktype_cpufreq, &sys_dev->kobj, 896 + "cpufreq"); 897 + if (ret) 898 + goto err_out_driver_exit; 899 + 900 + /* set up files for this cpu device */ 901 + drv_attr = cpufreq_driver->attr; 902 + while ((drv_attr) && (*drv_attr)) { 903 + ret = sysfs_create_file(&policy->kobj, &((*drv_attr)->attr)); 893 904 if (ret) 894 905 goto err_out_driver_exit; 895 - 896 - /* set up files for this cpu device */ 897 - drv_attr = cpufreq_driver->attr; 898 - while ((drv_attr) && (*drv_attr)) { 899 - ret = sysfs_create_file(&policy->kobj, 900 - &((*drv_attr)->attr)); 901 - if (ret) 902 - goto err_out_driver_exit; 903 - drv_attr++; 904 - } 905 - if (cpufreq_driver->get) { 906 - ret = sysfs_create_file(&policy->kobj, 907 - &cpuinfo_cur_freq.attr); 908 - if (ret) 909 - goto err_out_driver_exit; 910 - } 911 - if (cpufreq_driver->target) { 912 - ret = sysfs_create_file(&policy->kobj, 913 - &scaling_cur_freq.attr); 914 - if (ret) 915 - goto err_out_driver_exit; 916 - } 917 - } else { 918 - ret = kobject_init_and_add(&policy->kobj, &ktype_empty_cpufreq, 919 - &sys_dev->kobj, "cpufreq"); 906 + drv_attr++; 907 + } 908 + if (cpufreq_driver->get) { 909 + ret = sysfs_create_file(&policy->kobj, &cpuinfo_cur_freq.attr); 910 + if (ret) 911 + goto err_out_driver_exit; 912 + } 913 + if (cpufreq_driver->target) { 914 + ret = sysfs_create_file(&policy->kobj, &scaling_cur_freq.attr); 920 915 if (ret) 921 916 goto err_out_driver_exit; 922 
917 }
+4 -2
drivers/crypto/ixp4xx_crypto.c
··· 457 457 if (!ctx_pool) { 458 458 goto err; 459 459 } 460 - ret = qmgr_request_queue(SEND_QID, NPE_QLEN_TOTAL, 0, 0); 460 + ret = qmgr_request_queue(SEND_QID, NPE_QLEN_TOTAL, 0, 0, 461 + "ixp_crypto:out", NULL); 461 462 if (ret) 462 463 goto err; 463 - ret = qmgr_request_queue(RECV_QID, NPE_QLEN, 0, 0); 464 + ret = qmgr_request_queue(RECV_QID, NPE_QLEN, 0, 0, 465 + "ixp_crypto:in", NULL); 464 466 if (ret) { 465 467 qmgr_release_queue(SEND_QID); 466 468 goto err;
+1 -1
drivers/crypto/padlock-aes.c
··· 489 489 MODULE_LICENSE("GPL"); 490 490 MODULE_AUTHOR("Michal Ludvig"); 491 491 492 - MODULE_ALIAS("aes"); 492 + MODULE_ALIAS("aes-all");
+2 -2
drivers/crypto/padlock-sha.c
··· 304 304 MODULE_LICENSE("GPL"); 305 305 MODULE_AUTHOR("Michal Ludvig"); 306 306 307 - MODULE_ALIAS("sha1"); 308 - MODULE_ALIAS("sha256"); 307 + MODULE_ALIAS("sha1-all"); 308 + MODULE_ALIAS("sha256-all"); 309 309 MODULE_ALIAS("sha1-padlock"); 310 310 MODULE_ALIAS("sha256-padlock");
+1 -1
drivers/dca/dca-core.c
··· 1 1 /* 2 - * Copyright(c) 2007 Intel Corporation. All rights reserved. 2 + * Copyright(c) 2007 - 2009 Intel Corporation. All rights reserved. 3 3 * 4 4 * This program is free software; you can redistribute it and/or modify it 5 5 * under the terms of the GNU General Public License as published by the Free
+4 -2
drivers/dma/dmatest.c
··· 430 430 static void __exit dmatest_exit(void) 431 431 { 432 432 struct dmatest_chan *dtc, *_dtc; 433 + struct dma_chan *chan; 433 434 434 435 list_for_each_entry_safe(dtc, _dtc, &dmatest_channels, node) { 435 436 list_del(&dtc->node); 437 + chan = dtc->chan; 436 438 dmatest_cleanup_channel(dtc); 437 439 pr_debug("dmatest: dropped channel %s\n", 438 - dma_chan_name(dtc->chan)); 439 - dma_release_channel(dtc->chan); 440 + dma_chan_name(chan)); 441 + dma_release_channel(chan); 440 442 } 441 443 } 442 444 module_exit(dmatest_exit);
+6 -2
drivers/dma/fsldma.c
··· 158 158 159 159 static void dma_halt(struct fsl_dma_chan *fsl_chan) 160 160 { 161 - int i = 0; 161 + int i; 162 + 162 163 DMA_OUT(fsl_chan, &fsl_chan->reg_base->mr, 163 164 DMA_IN(fsl_chan, &fsl_chan->reg_base->mr, 32) | FSL_DMA_MR_CA, 164 165 32); ··· 167 166 DMA_IN(fsl_chan, &fsl_chan->reg_base->mr, 32) & ~(FSL_DMA_MR_CS 168 167 | FSL_DMA_MR_EMS_EN | FSL_DMA_MR_CA), 32); 169 168 170 - while (!dma_is_idle(fsl_chan) && (i++ < 100)) 169 + for (i = 0; i < 100; i++) { 170 + if (dma_is_idle(fsl_chan)) 171 + break; 171 172 udelay(10); 173 + } 172 174 if (i >= 100 && !dma_is_idle(fsl_chan)) 173 175 dev_err(fsl_chan->dev, "DMA halt timeout!\n"); 174 176 }
+1 -1
drivers/dma/ioat.c
··· 1 1 /* 2 2 * Intel I/OAT DMA Linux driver 3 - * Copyright(c) 2007 Intel Corporation. 3 + * Copyright(c) 2007 - 2009 Intel Corporation. 4 4 * 5 5 * This program is free software; you can redistribute it and/or modify it 6 6 * under the terms and conditions of the GNU General Public License,
+25 -1
drivers/dma/ioat_dca.c
··· 1 1 /* 2 2 * Intel I/OAT DMA Linux driver 3 - * Copyright(c) 2007 Intel Corporation. 3 + * Copyright(c) 2007 - 2009 Intel Corporation. 4 4 * 5 5 * This program is free software; you can redistribute it and/or modify it 6 6 * under the terms and conditions of the GNU General Public License, ··· 48 48 #define DCA3_TAG_MAP_LITERAL_VAL 0x1 49 49 50 50 #define DCA_TAG_MAP_MASK 0xDF 51 + 52 + /* expected tag map bytes for I/OAT ver.2 */ 53 + #define DCA2_TAG_MAP_BYTE0 0x80 54 + #define DCA2_TAG_MAP_BYTE1 0x0 55 + #define DCA2_TAG_MAP_BYTE2 0x81 56 + #define DCA2_TAG_MAP_BYTE3 0x82 57 + #define DCA2_TAG_MAP_BYTE4 0x82 58 + 59 + /* verify if tag map matches expected values */ 60 + static inline int dca2_tag_map_valid(u8 *tag_map) 61 + { 62 + return ((tag_map[0] == DCA2_TAG_MAP_BYTE0) && 63 + (tag_map[1] == DCA2_TAG_MAP_BYTE1) && 64 + (tag_map[2] == DCA2_TAG_MAP_BYTE2) && 65 + (tag_map[3] == DCA2_TAG_MAP_BYTE3) && 66 + (tag_map[4] == DCA2_TAG_MAP_BYTE4)); 67 + } 51 68 52 69 /* 53 70 * "Legacy" DCA systems do not implement the DCA register set in the ··· 467 450 ioatdca->tag_map[i] = bit | DCA_TAG_MAP_VALID; 468 451 else 469 452 ioatdca->tag_map[i] = 0; 453 + } 454 + 455 + if (!dca2_tag_map_valid(ioatdca->tag_map)) { 456 + dev_err(&pdev->dev, "APICID_TAG_MAP set incorrectly by BIOS, " 457 + "disabling DCA\n"); 458 + free_dca_provider(dca); 459 + return NULL; 470 460 } 471 461 472 462 err = register_dca_provider(dca, &pdev->dev);
+24 -15
drivers/dma/ioat_dma.c
··· 1 1 /* 2 2 * Intel I/OAT DMA Linux driver 3 - * Copyright(c) 2004 - 2007 Intel Corporation. 3 + * Copyright(c) 2004 - 2009 Intel Corporation. 4 4 * 5 5 * This program is free software; you can redistribute it and/or modify it 6 6 * under the terms and conditions of the GNU General Public License, ··· 189 189 ioat_chan->xfercap = xfercap; 190 190 ioat_chan->desccount = 0; 191 191 INIT_DELAYED_WORK(&ioat_chan->work, ioat_dma_chan_reset_part2); 192 - if (ioat_chan->device->version != IOAT_VER_1_2) { 193 - writel(IOAT_DCACTRL_CMPL_WRITE_ENABLE 194 - | IOAT_DMA_DCA_ANY_CPU, 195 - ioat_chan->reg_base + IOAT_DCACTRL_OFFSET); 196 - } 192 + if (ioat_chan->device->version == IOAT_VER_2_0) 193 + writel(IOAT_DCACTRL_CMPL_WRITE_ENABLE | 194 + IOAT_DMA_DCA_ANY_CPU, 195 + ioat_chan->reg_base + IOAT_DCACTRL_OFFSET); 196 + else if (ioat_chan->device->version == IOAT_VER_3_0) 197 + writel(IOAT_DMA_DCA_ANY_CPU, 198 + ioat_chan->reg_base + IOAT_DCACTRL_OFFSET); 197 199 spin_lock_init(&ioat_chan->cleanup_lock); 198 200 spin_lock_init(&ioat_chan->desc_lock); 199 201 INIT_LIST_HEAD(&ioat_chan->free_desc); ··· 1171 1169 * up if the client is done with the descriptor 1172 1170 */ 1173 1171 if (async_tx_test_ack(&desc->async_tx)) { 1174 - list_del(&desc->node); 1175 - list_add_tail(&desc->node, 1176 - &ioat_chan->free_desc); 1172 + list_move_tail(&desc->node, 1173 + &ioat_chan->free_desc); 1177 1174 } else 1178 1175 desc->async_tx.cookie = 0; 1179 1176 } else { ··· 1363 1362 dma_cookie_t cookie; 1364 1363 int err = 0; 1365 1364 struct completion cmp; 1365 + unsigned long tmo; 1366 1366 1367 1367 src = kzalloc(sizeof(u8) * IOAT_TEST_SIZE, GFP_KERNEL); 1368 1368 if (!src) ··· 1415 1413 } 1416 1414 device->common.device_issue_pending(dma_chan); 1417 1415 1418 - wait_for_completion_timeout(&cmp, msecs_to_jiffies(3000)); 1416 + tmo = wait_for_completion_timeout(&cmp, msecs_to_jiffies(3000)); 1419 1417
1420 - if (device->common.device_is_tx_complete(dma_chan, cookie, NULL, NULL) 1418 + if (tmo == 0 || 1419 + device->common.device_is_tx_complete(dma_chan, cookie, NULL, NULL) 1421 1420 != DMA_SUCCESS) { 1422 1421 dev_err(&device->pdev->dev, 1423 1422 "Self-test copy timed out, disabling\n"); ··· 1660 1657 " %d channels, device version 0x%02x, driver version %s\n", 1661 1658 device->common.chancnt, device->version, IOAT_DMA_VERSION); 1662 1659 1660 + if (!device->common.chancnt) { 1661 + dev_err(&device->pdev->dev, 1662 + "Intel(R) I/OAT DMA Engine problem found: " 1663 + "zero channels detected\n"); 1664 + goto err_setup_interrupts; 1665 + } 1666 + 1663 1667 err = ioat_dma_setup_interrupts(device); 1664 1668 if (err) 1665 1669 goto err_setup_interrupts; ··· 1706 1696 struct dma_chan *chan, *_chan; 1707 1697 struct ioat_dma_chan *ioat_chan; 1708 1698 1699 + if (device->version != IOAT_VER_3_0) 1700 + cancel_delayed_work(&device->work); 1701 + 1709 1702 ioat_dma_remove_interrupts(device); 1710 1703 1711 1704 dma_async_device_unregister(&device->common); ··· 1719 1706 iounmap(device->reg_base); 1720 1707 pci_release_regions(device->pdev); 1721 1708 pci_disable_device(device->pdev); 1722 - 1723 - if (device->version != IOAT_VER_3_0) { 1724 - cancel_delayed_work(&device->work); 1725 - } 1726 1709 1727 1710 list_for_each_entry_safe(chan, _chan, 1728 1711 &device->common.channels, device_node) {
+5 -3
drivers/dma/ioatdma.h
··· 1 1 /* 2 - * Copyright(c) 2004 - 2007 Intel Corporation. All rights reserved. 2 + * Copyright(c) 2004 - 2009 Intel Corporation. All rights reserved. 3 3 * 4 4 * This program is free software; you can redistribute it and/or modify it 5 5 * under the terms of the GNU General Public License as published by the Free ··· 29 29 #include <linux/pci_ids.h> 30 30 #include <net/tcp.h> 31 31 32 - #define IOAT_DMA_VERSION "3.30" 32 + #define IOAT_DMA_VERSION "3.64" 33 33 34 34 enum ioat_interrupt { 35 35 none = 0, ··· 135 135 #ifdef CONFIG_NET_DMA 136 136 switch (dev->version) { 137 137 case IOAT_VER_1_2: 138 - case IOAT_VER_3_0: 139 138 sysctl_tcp_dma_copybreak = 4096; 140 139 break; 141 140 case IOAT_VER_2_0: 142 141 sysctl_tcp_dma_copybreak = 2048; 142 + break; 143 + case IOAT_VER_3_0: 144 + sysctl_tcp_dma_copybreak = 262144; 143 145 break; 144 146 } 145 147 #endif
+1 -1
drivers/dma/ioatdma_hw.h
··· 1 1 /* 2 - * Copyright(c) 2004 - 2007 Intel Corporation. All rights reserved. 2 + * Copyright(c) 2004 - 2009 Intel Corporation. All rights reserved. 3 3 * 4 4 * This program is free software; you can redistribute it and/or modify it 5 5 * under the terms of the GNU General Public License as published by the Free
+1 -1
drivers/dma/ioatdma_registers.h
··· 1 1 /* 2 - * Copyright(c) 2004 - 2007 Intel Corporation. All rights reserved. 2 + * Copyright(c) 2004 - 2009 Intel Corporation. All rights reserved. 3 3 * 4 4 * This program is free software; you can redistribute it and/or modify it 5 5 * under the terms of the GNU General Public License as published by the Free
+9 -9
drivers/dma/iop-adma.c
··· 928 928 929 929 for (src_idx = 0; src_idx < IOP_ADMA_NUM_SRC_TEST; src_idx++) { 930 930 xor_srcs[src_idx] = alloc_page(GFP_KERNEL); 931 - if (!xor_srcs[src_idx]) 932 - while (src_idx--) { 931 + if (!xor_srcs[src_idx]) { 932 + while (src_idx--) 933 933 __free_page(xor_srcs[src_idx]); 934 - return -ENOMEM; 935 - } 934 + return -ENOMEM; 935 + } 936 936 } 937 937 938 938 dest = alloc_page(GFP_KERNEL); 939 - if (!dest) 940 - while (src_idx--) { 939 + if (!dest) { 940 + while (src_idx--) 941 941 __free_page(xor_srcs[src_idx]); 942 - return -ENOMEM; 943 - } 942 + return -ENOMEM; 943 + } 944 944 945 945 /* Fill in src buffers */ 946 946 for (src_idx = 0; src_idx < IOP_ADMA_NUM_SRC_TEST; src_idx++) { ··· 1401 1401 1402 1402 static struct platform_driver iop_adma_driver = { 1403 1403 .probe = iop_adma_probe, 1404 - .remove = iop_adma_remove, 1404 + .remove = __devexit_p(iop_adma_remove), 1405 1405 .driver = { 1406 1406 .owner = THIS_MODULE, 1407 1407 .name = "iop-adma",
+1 -1
drivers/dma/ipu/ipu_idmac.c
··· 729 729 730 730 ichan->status = IPU_CHANNEL_READY; 731 731 732 - spin_unlock_irqrestore(ipu->lock, flags); 732 + spin_unlock_irqrestore(&ipu->lock, flags); 733 733 734 734 return 0; 735 735 }
+9 -9
drivers/dma/mv_xor.c
··· 1019 1019 1020 1020 for (src_idx = 0; src_idx < MV_XOR_NUM_SRC_TEST; src_idx++) { 1021 1021 xor_srcs[src_idx] = alloc_page(GFP_KERNEL); 1022 - if (!xor_srcs[src_idx]) 1023 - while (src_idx--) { 1022 + if (!xor_srcs[src_idx]) { 1023 + while (src_idx--) 1024 1024 __free_page(xor_srcs[src_idx]); 1025 - return -ENOMEM; 1026 - } 1025 + return -ENOMEM; 1026 + } 1027 1027 } 1028 1028 1029 1029 dest = alloc_page(GFP_KERNEL); 1030 - if (!dest) 1031 - while (src_idx--) { 1030 + if (!dest) { 1031 + while (src_idx--) 1032 1032 __free_page(xor_srcs[src_idx]); 1033 - return -ENOMEM; 1034 - } 1033 + return -ENOMEM; 1034 + } 1035 1035 1036 1036 /* Fill in src buffers */ 1037 1037 for (src_idx = 0; src_idx < MV_XOR_NUM_SRC_TEST; src_idx++) { ··· 1287 1287 1288 1288 static struct platform_driver mv_xor_driver = { 1289 1289 .probe = mv_xor_probe, 1290 - .remove = mv_xor_remove, 1290 + .remove = __devexit_p(mv_xor_remove), 1291 1291 .driver = { 1292 1292 .owner = THIS_MODULE, 1293 1293 .name = MV_XOR_NAME,
+1 -1
drivers/gpu/drm/drm_stub.c
··· 168 168 file_priv->minor->master != file_priv->master) { 169 169 mutex_lock(&dev->struct_mutex); 170 170 file_priv->minor->master = drm_master_get(file_priv->master); 171 - mutex_lock(&dev->struct_mutex); 171 + mutex_unlock(&dev->struct_mutex); 172 172 } 173 173 174 174 return 0;
+6 -2
drivers/hwmon/lm85.c
··· 72 72 #define LM85_COMPANY_SMSC 0x5c 73 73 #define LM85_VERSTEP_VMASK 0xf0 74 74 #define LM85_VERSTEP_GENERIC 0x60 75 + #define LM85_VERSTEP_GENERIC2 0x70 75 76 #define LM85_VERSTEP_LM85C 0x60 76 77 #define LM85_VERSTEP_LM85B 0x62 77 78 #define LM85_VERSTEP_ADM1027 0x60 ··· 335 334 static const struct i2c_device_id lm85_id[] = { 336 335 { "adm1027", adm1027 }, 337 336 { "adt7463", adt7463 }, 337 + { "adt7468", adt7468 }, 338 338 { "lm85", any_chip }, 339 339 { "lm85b", lm85b }, 340 340 { "lm85c", lm85c }, ··· 410 408 struct lm85_data *data = lm85_update_device(dev); 411 409 int vid; 412 410 413 - if (data->type == adt7463 && (data->vid & 0x80)) { 411 + if ((data->type == adt7463 || data->type == adt7468) && 412 + (data->vid & 0x80)) { 414 413 /* 6-pin VID (VRM 10) */ 415 414 vid = vid_from_reg(data->vid & 0x3f, data->vrm); 416 415 } else { ··· 1156 1153 address, company, verstep); 1157 1154 1158 1155 /* All supported chips have the version in common */ 1159 - if ((verstep & LM85_VERSTEP_VMASK) != LM85_VERSTEP_GENERIC) { 1156 + if ((verstep & LM85_VERSTEP_VMASK) != LM85_VERSTEP_GENERIC && 1157 + (verstep & LM85_VERSTEP_VMASK) != LM85_VERSTEP_GENERIC2) { 1160 1158 dev_dbg(&adapter->dev, "Autodetection failed: " 1161 1159 "unsupported version\n"); 1162 1160 return -ENODEV;
+2 -2
drivers/i2c/busses/i2c-mv64xxx.c
··· 482 482 return 0; 483 483 } 484 484 485 - static void __devexit 485 + static void 486 486 mv64xxx_i2c_unmap_regs(struct mv64xxx_i2c_data *drv_data) 487 487 { 488 488 if (drv_data->reg_base) { ··· 577 577 578 578 static struct platform_driver mv64xxx_i2c_driver = { 579 579 .probe = mv64xxx_i2c_probe, 580 - .remove = mv64xxx_i2c_remove, 580 + .remove = __devexit_p(mv64xxx_i2c_remove), 581 581 .driver = { 582 582 .owner = THIS_MODULE, 583 583 .name = MV64XXX_I2C_CTLR_NAME,
+5
drivers/ide/Kconfig
··· 721 721 depends on SOC_TX4939 722 722 select BLK_DEV_IDEDMA_SFF 723 723 724 + config BLK_DEV_IDE_AT91 725 + tristate "Atmel AT91 (SAM9, CAP9, AT572D940HF) IDE support" 726 + depends on ARM && ARCH_AT91 && !ARCH_AT91RM9200 && !ARCH_AT91X40 727 + select IDE_TIMINGS 728 + 724 729 config IDE_ARM 725 730 tristate "ARM IDE support" 726 731 depends on ARM && (ARCH_RPC || ARCH_SHARK)
+1
drivers/ide/Makefile
··· 116 116 117 117 obj-$(CONFIG_BLK_DEV_IDE_TX4938) += tx4938ide.o 118 118 obj-$(CONFIG_BLK_DEV_IDE_TX4939) += tx4939ide.o 119 + obj-$(CONFIG_BLK_DEV_IDE_AT91) += at91_ide.o
+467
drivers/ide/at91_ide.c
··· 1 + /* 2 + * IDE host driver for AT91 (SAM9, CAP9, AT572D940HF) Static Memory Controller 3 + * with Compact Flash True IDE logic 4 + * 5 + * Copyright (c) 2008, 2009 Kelvatek Ltd. 6 + * 7 + * This program is free software; you can redistribute it and/or modify 8 + * it under the terms of the GNU General Public License as published by 9 + * the Free Software Foundation; either version 2 of the License, or 10 + * (at your option) any later version. 11 + * 12 + * This program is distributed in the hope that it will be useful, 13 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 14 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 15 + * GNU General Public License for more details. 16 + * 17 + * You should have received a copy of the GNU General Public License 18 + * along with this program; if not, write to the Free Software 19 + * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 20 + * 21 + */ 22 + 23 + #include <linux/version.h> 24 + #include <linux/kernel.h> 25 + #include <linux/module.h> 26 + #include <linux/clk.h> 27 + #include <linux/err.h> 28 + #include <linux/ide.h> 29 + #include <linux/platform_device.h> 30 + 31 + #include <mach/board.h> 32 + #include <mach/gpio.h> 33 + #include <mach/at91sam9263.h> 34 + #include <mach/at91sam9_smc.h> 35 + #include <mach/at91sam9263_matrix.h> 36 + 37 + #define DRV_NAME "at91_ide" 38 + 39 + #define perr(fmt, args...) pr_err(DRV_NAME ": " fmt, ##args) 40 + #define pdbg(fmt, args...) pr_debug("%s " fmt, __func__, ##args) 41 + 42 + /* 43 + * Access to IDE device is possible through EBI Static Memory Controller 44 + * with Compact Flash logic. For details see EBI and SMC datasheet sections 45 + * of any microcontroller from AT91SAM9 family. 46 + * 47 + * Within SMC chip select address space, lines A[23:21] distinguish Compact 48 + * Flash modes (I/O, common memory, attribute memory, True IDE). IDE modes are:
49 + * 0x00c0000 - True IDE 50 + * 0x00e0000 - Alternate True IDE (Alt Status Register) 51 + * 52 + * On True IDE mode Task File and Data Register are mapped at the same address. 53 + * To distinguish access between these two different bus data width is used: 54 + * 8Bit for Task File, 16Bit for Data I/O. 55 + * 56 + * After initialization we do 8/16 bit flipping (changes in SMC MODE register) 57 + * only inside IDE callback routines which are serialized by IDE layer, 58 + * so no additional locking needed. 59 + */ 60 + 61 + #define TASK_FILE 0x00c00000 62 + #define ALT_MODE 0x00e00000 63 + #define REGS_SIZE 8 64 + 65 + #define enter_16bit(cs, mode) do { \ 66 + mode = at91_sys_read(AT91_SMC_MODE(cs)); \ 67 + at91_sys_write(AT91_SMC_MODE(cs), mode | AT91_SMC_DBW_16); \ 68 + } while (0) 69 + 70 + #define leave_16bit(cs, mode) at91_sys_write(AT91_SMC_MODE(cs), mode); 71 + 72 + static void set_smc_timings(const u8 chipselect, const u16 cycle, 73 + const u16 setup, const u16 pulse, 74 + const u16 data_float, int use_iordy) 75 + { 76 + unsigned long mode = AT91_SMC_READMODE | AT91_SMC_WRITEMODE | 77 + AT91_SMC_BAT_SELECT; 78 + 79 + /* disable or enable waiting for IORDY signal */ 80 + if (use_iordy) 81 + mode |= AT91_SMC_EXNWMODE_READY; 82 + 83 + /* add data float cycles if needed */ 84 + if (data_float) 85 + mode |= AT91_SMC_TDF_(data_float); 86 + 87 + at91_sys_write(AT91_SMC_MODE(chipselect), mode); 88 + 89 + /* setup timings in SMC */ 90 + at91_sys_write(AT91_SMC_SETUP(chipselect), AT91_SMC_NWESETUP_(setup) | 91 + AT91_SMC_NCS_WRSETUP_(0) | 92 + AT91_SMC_NRDSETUP_(setup) | 93 + AT91_SMC_NCS_RDSETUP_(0)); 94 + at91_sys_write(AT91_SMC_PULSE(chipselect), AT91_SMC_NWEPULSE_(pulse) | 95 + AT91_SMC_NCS_WRPULSE_(cycle) | 96 + AT91_SMC_NRDPULSE_(pulse) | 97 + AT91_SMC_NCS_RDPULSE_(cycle)); 98 + at91_sys_write(AT91_SMC_CYCLE(chipselect), AT91_SMC_NWECYCLE_(cycle) | 99 + AT91_SMC_NRDCYCLE_(cycle)); 100 + } 101 + 102 + static unsigned int calc_mck_cycles(unsigned int ns, unsigned int mck_hz)
103 + { 104 + u64 tmp = ns; 105 + 106 + tmp *= mck_hz; 107 + tmp += 1000*1000*1000 - 1; /* round up */ 108 + do_div(tmp, 1000*1000*1000); 109 + return (unsigned int) tmp; 110 + } 111 + 112 + static void apply_timings(const u8 chipselect, const u8 pio, 113 + const struct ide_timing *timing, int use_iordy) 114 + { 115 + unsigned int t0, t1, t2, t6z; 116 + unsigned int cycle, setup, pulse, data_float; 117 + unsigned int mck_hz; 118 + struct clk *mck; 119 + 120 + /* see table 22 of Compact Flash standard 4.1 for the meaning, 121 + * we do not stretch active (t2) time, so setup (t1) + hold time (th) 122 + * assure at least minimal recovery (t2i) time */ 123 + t0 = timing->cyc8b; 124 + t1 = timing->setup; 125 + t2 = timing->act8b; 126 + t6z = (pio < 5) ? 30 : 20; 127 + 128 + pdbg("t0=%u t1=%u t2=%u t6z=%u\n", t0, t1, t2, t6z); 129 + 130 + mck = clk_get(NULL, "mck"); 131 + BUG_ON(IS_ERR(mck)); 132 + mck_hz = clk_get_rate(mck); 133 + pdbg("mck_hz=%u\n", mck_hz); 134 + 135 + cycle = calc_mck_cycles(t0, mck_hz); 136 + setup = calc_mck_cycles(t1, mck_hz); 137 + pulse = calc_mck_cycles(t2, mck_hz); 138 + data_float = calc_mck_cycles(t6z, mck_hz); 139 + 140 + pdbg("cycle=%u setup=%u pulse=%u data_float=%u\n", 141 + cycle, setup, pulse, data_float); 142 + 143 + set_smc_timings(chipselect, cycle, setup, pulse, data_float, use_iordy); 144 + } 145 + 146 + static void at91_ide_input_data(ide_drive_t *drive, struct request *rq, 147 + void *buf, unsigned int len) 148 + { 149 + ide_hwif_t *hwif = drive->hwif; 150 + struct ide_io_ports *io_ports = &hwif->io_ports; 151 + u8 chipselect = hwif->select_data; 152 + unsigned long mode; 153 + 154 + pdbg("cs %u buf %p len %d\n", chipselect, buf, len); 155 + 156 + len++; 157 + 158 + enter_16bit(chipselect, mode); 159 + __ide_mm_insw((void __iomem *) io_ports->data_addr, buf, len / 2); 160 + leave_16bit(chipselect, mode); 161 + } 162 + 163 + static void at91_ide_output_data(ide_drive_t *drive, struct request *rq, 164 + void *buf, unsigned int len)
165 + { 166 + ide_hwif_t *hwif = drive->hwif; 167 + struct ide_io_ports *io_ports = &hwif->io_ports; 168 + u8 chipselect = hwif->select_data; 169 + unsigned long mode; 170 + 171 + pdbg("cs %u buf %p len %d\n", chipselect, buf, len); 172 + 173 + enter_16bit(chipselect, mode); 174 + __ide_mm_outsw((void __iomem *) io_ports->data_addr, buf, len / 2); 175 + leave_16bit(chipselect, mode); 176 + } 177 + 178 + static u8 ide_mm_inb(unsigned long port) 179 + { 180 + return readb((void __iomem *) port); 181 + } 182 + 183 + static void ide_mm_outb(u8 value, unsigned long port) 184 + { 185 + writeb(value, (void __iomem *) port); 186 + } 187 + 188 + static void at91_ide_tf_load(ide_drive_t *drive, ide_task_t *task) 189 + { 190 + ide_hwif_t *hwif = drive->hwif; 191 + struct ide_io_ports *io_ports = &hwif->io_ports; 192 + struct ide_taskfile *tf = &task->tf; 193 + u8 HIHI = (task->tf_flags & IDE_TFLAG_LBA48) ? 0xE0 : 0xEF; 194 + 195 + if (task->tf_flags & IDE_TFLAG_FLAGGED) 196 + HIHI = 0xFF; 197 + 198 + if (task->tf_flags & IDE_TFLAG_OUT_DATA) { 199 + u16 data = (tf->hob_data << 8) | tf->data; 200 + 201 + at91_ide_output_data(drive, NULL, &data, 2); 202 + } 203 + 204 + if (task->tf_flags & IDE_TFLAG_OUT_HOB_FEATURE) 205 + ide_mm_outb(tf->hob_feature, io_ports->feature_addr); 206 + if (task->tf_flags & IDE_TFLAG_OUT_HOB_NSECT) 207 + ide_mm_outb(tf->hob_nsect, io_ports->nsect_addr); 208 + if (task->tf_flags & IDE_TFLAG_OUT_HOB_LBAL) 209 + ide_mm_outb(tf->hob_lbal, io_ports->lbal_addr); 210 + if (task->tf_flags & IDE_TFLAG_OUT_HOB_LBAM) 211 + ide_mm_outb(tf->hob_lbam, io_ports->lbam_addr); 212 + if (task->tf_flags & IDE_TFLAG_OUT_HOB_LBAH) 213 + ide_mm_outb(tf->hob_lbah, io_ports->lbah_addr); 214 + 215 + if (task->tf_flags & IDE_TFLAG_OUT_FEATURE) 216 + ide_mm_outb(tf->feature, io_ports->feature_addr); 217 + if (task->tf_flags & IDE_TFLAG_OUT_NSECT) 218 + ide_mm_outb(tf->nsect, io_ports->nsect_addr); 219 + if (task->tf_flags & IDE_TFLAG_OUT_LBAL) 220 + ide_mm_outb(tf->lbal, io_ports->lbal_addr);
221 + if (task->tf_flags & IDE_TFLAG_OUT_LBAM) 222 + ide_mm_outb(tf->lbam, io_ports->lbam_addr); 223 + if (task->tf_flags & IDE_TFLAG_OUT_LBAH) 224 + ide_mm_outb(tf->lbah, io_ports->lbah_addr); 225 + 226 + if (task->tf_flags & IDE_TFLAG_OUT_DEVICE) 227 + ide_mm_outb((tf->device & HIHI) | drive->select, io_ports->device_addr); 228 + } 229 + 230 + static void at91_ide_tf_read(ide_drive_t *drive, ide_task_t *task) 231 + { 232 + ide_hwif_t *hwif = drive->hwif; 233 + struct ide_io_ports *io_ports = &hwif->io_ports; 234 + struct ide_taskfile *tf = &task->tf; 235 + 236 + if (task->tf_flags & IDE_TFLAG_IN_DATA) { 237 + u16 data; 238 + 239 + at91_ide_input_data(drive, NULL, &data, 2); 240 + tf->data = data & 0xff; 241 + tf->hob_data = (data >> 8) & 0xff; 242 + } 243 + 244 + /* be sure we're looking at the low order bits */ 245 + ide_mm_outb(ATA_DEVCTL_OBS & ~0x80, io_ports->ctl_addr); 246 + 247 + if (task->tf_flags & IDE_TFLAG_IN_FEATURE) 248 + tf->feature = ide_mm_inb(io_ports->feature_addr); 249 + if (task->tf_flags & IDE_TFLAG_IN_NSECT) 250 + tf->nsect = ide_mm_inb(io_ports->nsect_addr); 251 + if (task->tf_flags & IDE_TFLAG_IN_LBAL) 252 + tf->lbal = ide_mm_inb(io_ports->lbal_addr); 253 + if (task->tf_flags & IDE_TFLAG_IN_LBAM) 254 + tf->lbam = ide_mm_inb(io_ports->lbam_addr); 255 + if (task->tf_flags & IDE_TFLAG_IN_LBAH) 256 + tf->lbah = ide_mm_inb(io_ports->lbah_addr); 257 + if (task->tf_flags & IDE_TFLAG_IN_DEVICE) 258 + tf->device = ide_mm_inb(io_ports->device_addr); 259 + 260 + if (task->tf_flags & IDE_TFLAG_LBA48) { 261 + ide_mm_outb(ATA_DEVCTL_OBS | 0x80, io_ports->ctl_addr); 262 + 263 + if (task->tf_flags & IDE_TFLAG_IN_HOB_FEATURE) 264 + tf->hob_feature = ide_mm_inb(io_ports->feature_addr); 265 + if (task->tf_flags & IDE_TFLAG_IN_HOB_NSECT) 266 + tf->hob_nsect = ide_mm_inb(io_ports->nsect_addr); 267 + if (task->tf_flags & IDE_TFLAG_IN_HOB_LBAL) 268 + tf->hob_lbal = ide_mm_inb(io_ports->lbal_addr); 269 + if (task->tf_flags & IDE_TFLAG_IN_HOB_LBAM) 270 + tf->hob_lbam = ide_mm_inb(io_ports->lbam_addr);
271 + if (task->tf_flags & IDE_TFLAG_IN_HOB_LBAH) 272 + tf->hob_lbah = ide_mm_inb(io_ports->lbah_addr); 273 + } 274 + } 275 + 276 + static void at91_ide_set_pio_mode(ide_drive_t *drive, const u8 pio) 277 + { 278 + struct ide_timing *timing; 279 + u8 chipselect = drive->hwif->select_data; 280 + int use_iordy = 0; 281 + 282 + pdbg("chipselect %u pio %u\n", chipselect, pio); 283 + 284 + timing = ide_timing_find_mode(XFER_PIO_0 + pio); 285 + BUG_ON(!timing); 286 + 287 + if ((pio > 2 || ata_id_has_iordy(drive->id)) && 288 + !(ata_id_is_cfa(drive->id) && pio > 4)) 289 + use_iordy = 1; 290 + 291 + apply_timings(chipselect, pio, timing, use_iordy); 292 + } 293 + 294 + static const struct ide_tp_ops at91_ide_tp_ops = { 295 + .exec_command = ide_exec_command, 296 + .read_status = ide_read_status, 297 + .read_altstatus = ide_read_altstatus, 298 + .set_irq = ide_set_irq, 299 + 300 + .tf_load = at91_ide_tf_load, 301 + .tf_read = at91_ide_tf_read, 302 + 303 + .input_data = at91_ide_input_data, 304 + .output_data = at91_ide_output_data, 305 + }; 306 + 307 + static const struct ide_port_ops at91_ide_port_ops = { 308 + .set_pio_mode = at91_ide_set_pio_mode, 309 + }; 310 + 311 + static const struct ide_port_info at91_ide_port_info __initdata = { 312 + .port_ops = &at91_ide_port_ops, 313 + .tp_ops = &at91_ide_tp_ops, 314 + .host_flags = IDE_HFLAG_MMIO | IDE_HFLAG_NO_DMA | IDE_HFLAG_SINGLE | 315 + IDE_HFLAG_NO_IO_32BIT | IDE_HFLAG_UNMASK_IRQS, 316 + .pio_mask = ATA_PIO5, 317 + }; 318 + 319 + /* 320 + * If interrupt is delivered through GPIO, IRQ are triggered on falling 321 + * and rising edge of signal. Whereas IDE device request interrupt on high 322 + * level (rising edge in our case). This mean we have fake interrupts, so 323 + * we need to check interrupt pin and exit instantly from ISR when line 324 + * is on low level. 325 + */
326 + 327 + irqreturn_t at91_irq_handler(int irq, void *dev_id) 328 + { 329 + int ntries = 8; 330 + int pin_val1, pin_val2; 331 + 332 + /* additional deglitch, line can be noisy in badly designed PCB */ 333 + do { 334 + pin_val1 = at91_get_gpio_value(irq); 335 + pin_val2 = at91_get_gpio_value(irq); 336 + } while (pin_val1 != pin_val2 && --ntries > 0); 337 + 338 + if (pin_val1 == 0 || ntries <= 0) 339 + return IRQ_HANDLED; 340 + 341 + return ide_intr(irq, dev_id); 342 + } 343 + 344 + static int __init at91_ide_probe(struct platform_device *pdev) 345 + { 346 + int ret; 347 + hw_regs_t hw; 348 + hw_regs_t *hws[] = { &hw, NULL, NULL, NULL }; 349 + struct ide_host *host; 350 + struct resource *res; 351 + unsigned long tf_base = 0, ctl_base = 0; 352 + struct at91_cf_data *board = pdev->dev.platform_data; 353 + 354 + if (!board) 355 + return -ENODEV; 356 + 357 + if (board->det_pin && at91_get_gpio_value(board->det_pin) != 0) { 358 + perr("no device detected\n"); 359 + return -ENODEV; 360 + } 361 + 362 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 363 + if (!res) { 364 + perr("can't get memory resource\n"); 365 + return -ENODEV; 366 + } 367 + 368 + if (!devm_request_mem_region(&pdev->dev, res->start + TASK_FILE, 369 + REGS_SIZE, "ide") || 370 + !devm_request_mem_region(&pdev->dev, res->start + ALT_MODE, 371 + REGS_SIZE, "alt")) { 372 + perr("memory resources in use\n"); 373 + return -EBUSY; 374 + } 375 + 376 + pdbg("chipselect %u irq %u res %08lx\n", board->chipselect, 377 + board->irq_pin, (unsigned long) res->start); 378 + 379 + tf_base = (unsigned long) devm_ioremap(&pdev->dev, res->start + TASK_FILE, 380 + REGS_SIZE); 381 + ctl_base = (unsigned long) devm_ioremap(&pdev->dev, res->start + ALT_MODE, 382 + REGS_SIZE); 383 + if (!tf_base || !ctl_base) { 384 + perr("can't map memory regions\n"); 385 + return -EBUSY; 386 + } 387 + 388 + memset(&hw, 0, sizeof(hw)); 389 + 390 + if (board->flags & AT91_IDE_SWAP_A0_A2) { 391 + /* workaround for stupid hardware bug */
392 + hw.io_ports.data_addr = tf_base + 0; 393 + hw.io_ports.error_addr = tf_base + 4; 394 + hw.io_ports.nsect_addr = tf_base + 2; 395 + hw.io_ports.lbal_addr = tf_base + 6; 396 + hw.io_ports.lbam_addr = tf_base + 1; 397 + hw.io_ports.lbah_addr = tf_base + 5; 398 + hw.io_ports.device_addr = tf_base + 3; 399 + hw.io_ports.command_addr = tf_base + 7; 400 + hw.io_ports.ctl_addr = ctl_base + 3; 401 + } else 402 + ide_std_init_ports(&hw, tf_base, ctl_base + 6); 403 + 404 + hw.irq = board->irq_pin; 405 + hw.chipset = ide_generic; 406 + hw.dev = &pdev->dev; 407 + 408 + host = ide_host_alloc(&at91_ide_port_info, hws); 409 + if (!host) { 410 + perr("failed to allocate ide host\n"); 411 + return -ENOMEM; 412 + } 413 + 414 + /* setup Static Memory Controller - PIO 0 as default */ 415 + apply_timings(board->chipselect, 0, ide_timing_find_mode(XFER_PIO_0), 0); 416 + 417 + /* with GPIO interrupt we have to do quirks in handler */ 418 + if (board->irq_pin >= PIN_BASE) 419 + host->irq_handler = at91_irq_handler; 420 + 421 + host->ports[0]->select_data = board->chipselect; 422 + 423 + ret = ide_host_register(host, &at91_ide_port_info, hws); 424 + if (ret) { 425 + perr("failed to register ide host\n"); 426 + goto err_free_host; 427 + } 428 + platform_set_drvdata(pdev, host); 429 + return 0; 430 + 431 + err_free_host: 432 + ide_host_free(host); 433 + return ret; 434 + } 435 + 436 + static int __exit at91_ide_remove(struct platform_device *pdev) 437 + { 438 + struct ide_host *host = platform_get_drvdata(pdev); 439 + 440 + ide_host_remove(host); 441 + return 0; 442 + } 443 + 444 + static struct platform_driver at91_ide_driver = { 445 + .driver = { 446 + .name = DRV_NAME, 447 + .owner = THIS_MODULE, 448 + }, 449 + .remove = __exit_p(at91_ide_remove), 450 + }; 451 + 452 + static int __init at91_ide_init(void) 453 + { 454 + return platform_driver_probe(&at91_ide_driver, at91_ide_probe); 455 + } 456 + 457 + static void __exit at91_ide_exit(void) 458 + { 459 + platform_driver_unregister(&at91_ide_driver);
460 + } 461 + 462 + module_init(at91_ide_init); 463 + module_exit(at91_ide_exit); 464 + 465 + MODULE_LICENSE("GPL"); 466 + MODULE_AUTHOR("Stanislaw Gruszka <stf_xl@wp.pl>"); 467 +
+1 -1
drivers/ide/ide-disk_proc.c
··· 125 125 IDE_PROC_DEVSET(multcount, 0, 16), 126 126 IDE_PROC_DEVSET(nowerr, 0, 1), 127 127 IDE_PROC_DEVSET(wcache, 0, 1), 128 - { 0 }, 128 + { NULL }, 129 129 };
+1 -1
drivers/ide/ide-floppy_proc.c
··· 29 29 IDE_PROC_DEVSET(bios_head, 0, 255), 30 30 IDE_PROC_DEVSET(bios_sect, 0, 63), 31 31 IDE_PROC_DEVSET(ticks, 0, 255), 32 - { 0 }, 32 + { NULL }, 33 33 };
+2 -1
drivers/ide/ide-io.c
··· 908 908 ide_drive_t *uninitialized_var(drive); 909 909 ide_handler_t *handler; 910 910 unsigned long flags; 911 - unsigned long wait = -1; 911 + int wait = -1; 912 912 int plug_device = 0; 913 913 914 914 spin_lock_irqsave(&hwif->lock, flags); ··· 1162 1162 1163 1163 return irq_ret; 1164 1164 } 1165 + EXPORT_SYMBOL_GPL(ide_intr); 1165 1166 1166 1167 /** 1167 1168 * ide_do_drive_cmd - issue IDE special command
+2
drivers/ide/ide-iops.c
··· 315 315 u8 io_32bit = drive->io_32bit; 316 316 u8 mmio = (hwif->host_flags & IDE_HFLAG_MMIO) ? 1 : 0; 317 317 318 + len++; 319 + 318 320 if (io_32bit) { 319 321 unsigned long uninitialized_var(flags); 320 322
+6 -1
drivers/ide/ide-probe.c
··· 950 950 static int init_irq (ide_hwif_t *hwif) 951 951 { 952 952 struct ide_io_ports *io_ports = &hwif->io_ports; 953 + irq_handler_t irq_handler; 953 954 int sa = 0; 954 955 955 956 mutex_lock(&ide_cfg_mtx); ··· 959 958 init_timer(&hwif->timer); 960 959 hwif->timer.function = &ide_timer_expiry; 961 960 hwif->timer.data = (unsigned long)hwif; 961 + 962 + irq_handler = hwif->host->irq_handler; 963 + if (irq_handler == NULL) 964 + irq_handler = ide_intr; 962 965 963 966 #if defined(__mc68000__) 964 967 sa = IRQF_SHARED; ··· 974 969 if (io_ports->ctl_addr) 975 970 hwif->tp_ops->set_irq(hwif, 1); 976 971 977 - if (request_irq(hwif->irq, &ide_intr, sa, hwif->name, hwif)) 972 + if (request_irq(hwif->irq, irq_handler, sa, hwif->name, hwif)) 978 973 goto out_up; 979 974 980 975 if (!hwif->rqsize) {
+1 -1
drivers/ide/ide-proc.c
··· 231 231 IDE_PROC_DEVSET(pio_mode, 0, 255), 232 232 IDE_PROC_DEVSET(unmaskirq, 0, 1), 233 233 IDE_PROC_DEVSET(using_dma, 0, 1), 234 - { 0 }, 234 + { NULL }, 235 235 }; 236 236 237 237 static void proc_ide_settings_warn(void)
+1 -1
drivers/ide/ide-tape.c
··· 2166 2166 __IDE_PROC_DEVSET(speed, 0, 0xffff, NULL, NULL), 2167 2167 __IDE_PROC_DEVSET(tdsc, IDETAPE_DSC_RW_MIN, IDETAPE_DSC_RW_MAX, 2168 2168 mulf_tdsc, divf_tdsc), 2169 - { 0 }, 2169 + { NULL }, 2170 2170 }; 2171 2171 #endif 2172 2172
+6
drivers/lguest/lguest_device.c
··· 212 212 hcall(LHCALL_NOTIFY, lvq->config.pfn << PAGE_SHIFT, 0, 0); 213 213 } 214 214 215 + /* An extern declaration inside a C file is bad form. Don't do it. */ 216 + extern void lguest_setup_irq(unsigned int irq); 217 + 215 218 /* This routine finds the first virtqueue described in the configuration of 216 219 * this device and sets it up. 217 220 * ··· 268 265 err = -ENOMEM; 269 266 goto unmap; 270 267 } 268 + 269 + /* Make sure the interrupt is allocated. */ 270 + lguest_setup_irq(lvq->config.irq); 271 271 272 272 /* Tell the interrupt for this virtqueue to go to the virtio_ring 273 273 * interrupt handler. */
+18 -12
drivers/md/md.c
··· 214 214 return mddev; 215 215 } 216 216 217 - static void mddev_delayed_delete(struct work_struct *ws) 218 - { 219 - mddev_t *mddev = container_of(ws, mddev_t, del_work); 220 - kobject_del(&mddev->kobj); 221 - kobject_put(&mddev->kobj); 222 - } 217 + static void mddev_delayed_delete(struct work_struct *ws); 223 218 224 219 static void mddev_put(mddev_t *mddev) 225 220 { ··· 3537 3542 3538 3543 int mdp_major = 0; 3539 3544 3545 + static void mddev_delayed_delete(struct work_struct *ws) 3546 + { 3547 + mddev_t *mddev = container_of(ws, mddev_t, del_work); 3548 + 3549 + if (mddev->private == &md_redundancy_group) { 3550 + sysfs_remove_group(&mddev->kobj, &md_redundancy_group); 3551 + if (mddev->sysfs_action) 3552 + sysfs_put(mddev->sysfs_action); 3553 + mddev->sysfs_action = NULL; 3554 + mddev->private = NULL; 3555 + } 3556 + kobject_del(&mddev->kobj); 3557 + kobject_put(&mddev->kobj); 3558 + } 3559 + 3540 3560 static int md_alloc(dev_t dev, char *name) 3541 3561 { 3542 3562 static DEFINE_MUTEX(disks_mutex); ··· 4043 4033 mddev->queue->merge_bvec_fn = NULL; 4044 4034 mddev->queue->unplug_fn = NULL; 4045 4035 mddev->queue->backing_dev_info.congested_fn = NULL; 4046 - if (mddev->pers->sync_request) { 4047 - sysfs_remove_group(&mddev->kobj, &md_redundancy_group); 4048 - if (mddev->sysfs_action) 4049 - sysfs_put(mddev->sysfs_action); 4050 - mddev->sysfs_action = NULL; 4051 - } 4052 4036 module_put(mddev->pers->owner); 4037 + if (mddev->pers->sync_request) 4038 + mddev->private = &md_redundancy_group; 4053 4039 mddev->pers = NULL; 4054 4040 /* tell userspace to handle 'inactive' */ 4055 4041 sysfs_notify_dirent(mddev->sysfs_state);
+9 -6
drivers/mmc/core/mmc_ops.c
··· 248 248 249 249 sg_init_one(&sg, data_buf, len); 250 250 251 - /* 252 - * The spec states that CSR and CID accesses have a timeout 253 - * of 64 clock cycles. 254 - */ 255 - data.timeout_ns = 0; 256 - data.timeout_clks = 64; 251 + if (opcode == MMC_SEND_CSD || opcode == MMC_SEND_CID) { 252 + /* 253 + * The spec states that CSR and CID accesses have a timeout 254 + * of 64 clock cycles. 255 + */ 256 + data.timeout_ns = 0; 257 + data.timeout_clks = 64; 258 + } else 259 + mmc_set_data_timeout(&data, card); 257 260 258 261 mmc_wait_for_req(host, &mrq); 259 262
+2 -1
drivers/mtd/devices/mtd_dataflash.c
··· 821 821 if (!(info->flags & IS_POW2PS)) 822 822 return info; 823 823 } 824 - } 824 + } else 825 + return info; 825 826 } 826 827 } 827 828
+11 -8
drivers/mtd/maps/physmap.c
··· 46 46 47 47 physmap_data = dev->dev.platform_data; 48 48 49 + if (info->cmtd) { 49 50 #ifdef CONFIG_MTD_PARTITIONS 50 - if (info->nr_parts) { 51 - del_mtd_partitions(info->cmtd); 52 - kfree(info->parts); 53 - } else if (physmap_data->nr_parts) 54 - del_mtd_partitions(info->cmtd); 55 - else 56 - del_mtd_device(info->cmtd); 51 + if (info->nr_parts || physmap_data->nr_parts) 52 + del_mtd_partitions(info->cmtd); 53 + else 54 + del_mtd_device(info->cmtd); 57 55 #else 58 - del_mtd_device(info->cmtd); 56 + del_mtd_device(info->cmtd); 57 + #endif 58 + } 59 + #ifdef CONFIG_MTD_PARTITIONS 60 + if (info->nr_parts) 61 + kfree(info->parts); 59 62 #endif 60 63 61 64 #ifdef CONFIG_MTD_CONCAT
+1 -1
drivers/mtd/nand/orion_nand.c
··· 149 149 150 150 static struct platform_driver orion_nand_driver = { 151 151 .probe = orion_nand_probe, 152 - .remove = orion_nand_remove, 152 + .remove = __devexit_p(orion_nand_remove), 153 153 .driver = { 154 154 .name = "orion_nand", 155 155 .owner = THIS_MODULE,
+1 -1
drivers/net/arm/Makefile
··· 4 4 # 5 5 6 6 obj-$(CONFIG_ARM_AM79C961A) += am79c961a.o 7 - obj-$(CONFIG_ARM_ETHERH) += etherh.o ../8390.o 7 + obj-$(CONFIG_ARM_ETHERH) += etherh.o 8 8 obj-$(CONFIG_ARM_ETHER3) += ether3.o 9 9 obj-$(CONFIG_ARM_ETHER1) += ether1.o 10 10 obj-$(CONFIG_ARM_AT91_ETHER) += at91_ether.o
+5 -5
drivers/net/arm/etherh.c
··· 641 641 .ndo_open = etherh_open, 642 642 .ndo_stop = etherh_close, 643 643 .ndo_set_config = etherh_set_config, 644 - .ndo_start_xmit = ei_start_xmit, 645 - .ndo_tx_timeout = ei_tx_timeout, 646 - .ndo_get_stats = ei_get_stats, 647 - .ndo_set_multicast_list = ei_set_multicast_list, 644 + .ndo_start_xmit = __ei_start_xmit, 645 + .ndo_tx_timeout = __ei_tx_timeout, 646 + .ndo_get_stats = __ei_get_stats, 647 + .ndo_set_multicast_list = __ei_set_multicast_list, 648 648 .ndo_validate_addr = eth_validate_addr, 649 649 .ndo_set_mac_address = eth_mac_addr, 650 650 .ndo_change_mtu = eth_change_mtu, 651 651 #ifdef CONFIG_NET_POLL_CONTROLLER 652 - .ndo_poll_controller = ei_poll, 652 + .ndo_poll_controller = __ei_poll, 653 653 #endif 654 654 }; 655 655
+1 -1
drivers/net/arm/ks8695net.c
··· 560 560 msleep(1); 561 561 } 562 562 563 - if (reset_timeout == 0) { 563 + if (reset_timeout < 0) { 564 564 dev_crit(ksp->dev, 565 565 "Timeout waiting for DMA engines to reset\n"); 566 566 /* And blithely carry on */
+1 -1
drivers/net/bonding/bond_main.c
··· 4113 4113 const struct net_device_ops *slave_ops 4114 4114 = slave->dev->netdev_ops; 4115 4115 if (slave_ops->ndo_neigh_setup) 4116 - return slave_ops->ndo_neigh_setup(dev, parms); 4116 + return slave_ops->ndo_neigh_setup(slave->dev, parms); 4117 4117 } 4118 4118 return 0; 4119 4119 }
+2 -1
drivers/net/jme.c
··· 957 957 goto out_inc; 958 958 959 959 i = atomic_read(&rxring->next_to_clean); 960 - while (limit-- > 0) { 960 + while (limit > 0) { 961 961 rxdesc = rxring->desc; 962 962 rxdesc += i; 963 963 964 964 if ((rxdesc->descwb.flags & cpu_to_le16(RXWBFLAG_OWN)) || 965 965 !(rxdesc->descwb.desccnt & RXWBDCNT_WBCPL)) 966 966 goto out; 967 + --limit; 967 968 968 969 desccnt = rxdesc->descwb.desccnt & RXWBDCNT_DCNT; 969 970
+2 -1
drivers/net/pcmcia/3c574_cs.c
··· 1035 1035 DEBUG(3, "%s: in rx_packet(), status %4.4x, rx_status %4.4x.\n", 1036 1036 dev->name, inw(ioaddr+EL3_STATUS), inw(ioaddr+RxStatus)); 1037 1037 while (!((rx_status = inw(ioaddr + RxStatus)) & 0x8000) && 1038 - (--worklimit >= 0)) { 1038 + worklimit > 0) { 1039 + worklimit--; 1039 1040 if (rx_status & 0x4000) { /* Error, update stats. */ 1040 1041 short error = rx_status & 0x3800; 1041 1042 dev->stats.rx_errors++;
+2 -1
drivers/net/pcmcia/3c589_cs.c
··· 857 857 DEBUG(3, "%s: in rx_packet(), status %4.4x, rx_status %4.4x.\n", 858 858 dev->name, inw(ioaddr+EL3_STATUS), inw(ioaddr+RX_STATUS)); 859 859 while (!((rx_status = inw(ioaddr + RX_STATUS)) & 0x8000) && 860 - (--worklimit >= 0)) { 860 + worklimit > 0) { 861 + worklimit--; 861 862 if (rx_status & 0x4000) { /* Error, update stats. */ 862 863 short error = rx_status & 0x3800; 863 864 dev->stats.rx_errors++;
+12
drivers/net/smc911x.h
··· 42 42 #define SMC_USE_16BIT 0 43 43 #define SMC_USE_32BIT 1 44 44 #define SMC_IRQ_SENSE IRQF_TRIGGER_LOW 45 + #elif defined(CONFIG_ARCH_OMAP34XX) 46 + #define SMC_USE_16BIT 0 47 + #define SMC_USE_32BIT 1 48 + #define SMC_IRQ_SENSE IRQF_TRIGGER_LOW 49 + #define SMC_MEM_RESERVED 1 50 + #elif defined(CONFIG_ARCH_OMAP24XX) 51 + #define SMC_USE_16BIT 0 52 + #define SMC_USE_32BIT 1 53 + #define SMC_IRQ_SENSE IRQF_TRIGGER_LOW 54 + #define SMC_MEM_RESERVED 1 45 55 #else 46 56 /* 47 57 * Default configuration ··· 685 675 #define CHIP_9116 0x0116 686 676 #define CHIP_9117 0x0117 687 677 #define CHIP_9118 0x0118 678 + #define CHIP_9211 0x9211 688 679 #define CHIP_9215 0x115A 689 680 #define CHIP_9217 0x117A 690 681 #define CHIP_9218 0x118A ··· 700 689 { CHIP_9116, "LAN9116" }, 701 690 { CHIP_9117, "LAN9117" }, 702 691 { CHIP_9118, "LAN9118" }, 692 + { CHIP_9211, "LAN9211" }, 703 693 { CHIP_9215, "LAN9215" }, 704 694 { CHIP_9217, "LAN9217" }, 705 695 { CHIP_9218, "LAN9218" },
+1 -1
drivers/net/sungem.c
··· 1229 1229 break; 1230 1230 } while (val & (GREG_SWRST_TXRST | GREG_SWRST_RXRST)); 1231 1231 1232 - if (limit <= 0) 1232 + if (limit < 0) 1233 1233 printk(KERN_ERR "%s: SW reset is ghetto.\n", gp->dev->name); 1234 1234 1235 1235 if (gp->phy_type == phy_serialink || gp->phy_type == phy_serdes)
+2 -1
drivers/net/tg3.c
··· 1473 1473 { 1474 1474 u32 reg; 1475 1475 1476 - if (!(tp->tg3_flags2 & TG3_FLG2_5705_PLUS)) 1476 + if (!(tp->tg3_flags2 & TG3_FLG2_5705_PLUS) || 1477 + GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5906) 1477 1478 return; 1478 1479 1479 1480 reg = MII_TG3_MISC_SHDW_WREN |
+9 -9
drivers/net/tokenring/tmspci.c
··· 121 121 goto err_out_trdev; 122 122 } 123 123 124 - ret = request_irq(pdev->irq, tms380tr_interrupt, IRQF_SHARED, 125 - dev->name, dev); 126 - if (ret) 127 - goto err_out_region; 128 - 129 124 dev->base_addr = pci_ioaddr; 130 125 dev->irq = pci_irq_line; 131 126 dev->dma = 0; ··· 137 142 ret = tmsdev_init(dev, &pdev->dev); 138 143 if (ret) { 139 144 printk("%s: unable to get memory for dev->priv.\n", dev->name); 140 - goto err_out_irq; 145 + goto err_out_region; 141 146 } 142 147 143 148 tp = netdev_priv(dev); ··· 152 157 153 158 tp->tmspriv = cardinfo; 154 159 160 + ret = request_irq(pdev->irq, tms380tr_interrupt, IRQF_SHARED, 161 + dev->name, dev); 162 + if (ret) 163 + goto err_out_tmsdev; 164 + 155 165 dev->open = tms380tr_open; 156 166 dev->stop = tms380tr_close; 157 167 pci_set_drvdata(pdev, dev); ··· 164 164 165 165 ret = register_netdev(dev); 166 166 if (ret) 167 - goto err_out_tmsdev; 167 + goto err_out_irq; 168 168 169 169 return 0; 170 170 171 + err_out_irq: 172 + free_irq(pdev->irq, dev); 171 173 err_out_tmsdev: 172 174 pci_set_drvdata(pdev, NULL); 173 175 tmsdev_term(dev); 174 - err_out_irq: 175 - free_irq(pdev->irq, dev); 176 176 err_out_region: 177 177 release_region(pci_ioaddr, TMS_PCI_IO_EXTENT); 178 178 err_out_trdev:
+2 -2
drivers/net/ucc_geth_mii.c
··· 107 107 static int uec_mdio_reset(struct mii_bus *bus) 108 108 { 109 109 struct ucc_mii_mng __iomem *regs = (void __iomem *)bus->priv; 110 - unsigned int timeout = PHY_INIT_TIMEOUT; 110 + int timeout = PHY_INIT_TIMEOUT; 111 111 112 112 mutex_lock(&bus->mdio_lock); 113 113 ··· 123 123 124 124 mutex_unlock(&bus->mdio_lock); 125 125 126 - if (timeout <= 0) { 126 + if (timeout < 0) { 127 127 printk(KERN_ERR "%s: The MII Bus is stuck!\n", bus->name); 128 128 return -EBUSY; 129 129 }
+4
drivers/net/usb/dm9601.c
··· 635 635 USB_DEVICE(0x0a47, 0x9601), /* Hirose USB-100 */ 636 636 .driver_info = (unsigned long)&dm9601_info, 637 637 }, 638 + { 639 + USB_DEVICE(0x0fe6, 0x8101), /* DM9601 USB to Fast Ethernet Adapter */ 640 + .driver_info = (unsigned long)&dm9601_info, 641 + }, 638 642 {}, // END 639 643 }; 640 644
+4 -2
drivers/net/wireless/iwlwifi/iwl-agn.c
··· 3868 3868 } 3869 3869 err = iwl_eeprom_check_version(priv); 3870 3870 if (err) 3871 - goto out_iounmap; 3871 + goto out_free_eeprom; 3872 3872 3873 3873 /* extract MAC Address */ 3874 3874 iwl_eeprom_get_mac(priv, priv->mac_addr); ··· 3945 3945 return 0; 3946 3946 3947 3947 out_remove_sysfs: 3948 + destroy_workqueue(priv->workqueue); 3949 + priv->workqueue = NULL; 3948 3950 sysfs_remove_group(&pdev->dev.kobj, &iwl_attribute_group); 3949 3951 out_uninit_drv: 3950 3952 iwl_uninit_drv(priv); ··· 3955 3953 out_iounmap: 3956 3954 pci_iounmap(pdev, priv->hw_base); 3957 3955 out_pci_release_regions: 3958 - pci_release_regions(pdev); 3959 3956 pci_set_drvdata(pdev, NULL); 3957 + pci_release_regions(pdev); 3960 3958 out_pci_disable_device: 3961 3959 pci_disable_device(pdev); 3962 3960 out_ieee80211_free_hw:
+7 -10
drivers/net/wireless/iwlwifi/iwl3945-base.c
··· 7911 7911 CSR_GP_CNTRL_REG_FLAG_MAC_CLOCK_READY, 25000); 7912 7912 if (err < 0) { 7913 7913 IWL_DEBUG_INFO("Failed to init the card\n"); 7914 - goto out_remove_sysfs; 7914 + goto out_iounmap; 7915 7915 } 7916 7916 7917 7917 /*********************** ··· 7921 7921 err = iwl3945_eeprom_init(priv); 7922 7922 if (err) { 7923 7923 IWL_ERROR("Unable to init EEPROM\n"); 7924 - goto out_remove_sysfs; 7924 + goto out_iounmap; 7925 7925 } 7926 7926 /* MAC Address location in EEPROM same for 3945/4965 */ 7927 7927 get_eeprom_mac(priv, priv->mac_addr); ··· 7975 7975 err = iwl3945_init_channel_map(priv); 7976 7976 if (err) { 7977 7977 IWL_ERROR("initializing regulatory failed: %d\n", err); 7978 - goto out_release_irq; 7978 + goto out_unset_hw_setting; 7979 7979 } 7980 7980 7981 7981 err = iwl3945_init_geos(priv); ··· 8045 8045 return 0; 8046 8046 8047 8047 out_remove_sysfs: 8048 + destroy_workqueue(priv->workqueue); 8049 + priv->workqueue = NULL; 8048 8050 sysfs_remove_group(&pdev->dev.kobj, &iwl3945_attribute_group); 8049 8051 out_free_geos: 8050 8052 iwl3945_free_geos(priv); 8051 8053 out_free_channel_map: 8052 8054 iwl3945_free_channel_map(priv); 8053 - 8054 - 8055 - out_release_irq: 8056 - destroy_workqueue(priv->workqueue); 8057 - priv->workqueue = NULL; 8055 + out_unset_hw_setting: 8058 8056 iwl3945_unset_hw_setting(priv); 8059 - 8060 8057 out_iounmap: 8061 8058 pci_iounmap(pdev, priv->hw_base); 8062 8059 out_pci_release_regions: 8063 8060 pci_release_regions(pdev); 8064 8061 out_pci_disable_device: 8065 - pci_disable_device(pdev); 8066 8062 pci_set_drvdata(pdev, NULL); 8063 + pci_disable_device(pdev); 8067 8064 out_ieee80211_free_hw: 8068 8065 ieee80211_free_hw(priv->hw); 8069 8066 out:
+6 -3
drivers/net/wireless/p54/p54common.c
··· 710 710 __le32 req_id) 711 711 { 712 712 struct p54_common *priv = dev->priv; 713 - struct sk_buff *entry = priv->tx_queue.next; 713 + struct sk_buff *entry; 714 714 unsigned long flags; 715 715 716 716 spin_lock_irqsave(&priv->tx_queue.lock, flags); 717 + entry = priv->tx_queue.next; 717 718 while (entry != (struct sk_buff *)&priv->tx_queue) { 718 719 struct p54_hdr *hdr = (struct p54_hdr *) entry->data; 719 720 ··· 733 732 struct p54_common *priv = dev->priv; 734 733 struct p54_hdr *hdr = (struct p54_hdr *) skb->data; 735 734 struct p54_frame_sent *payload = (struct p54_frame_sent *) hdr->data; 736 - struct sk_buff *entry = (struct sk_buff *) priv->tx_queue.next; 735 + struct sk_buff *entry; 737 736 u32 addr = le32_to_cpu(hdr->req_id) - priv->headroom; 738 737 struct memrecord *range = NULL; 739 738 u32 freed = 0; ··· 742 741 int count, idx; 743 742 744 743 spin_lock_irqsave(&priv->tx_queue.lock, flags); 744 + entry = (struct sk_buff *) priv->tx_queue.next; 745 745 while (entry != (struct sk_buff *)&priv->tx_queue) { 746 746 struct ieee80211_tx_info *info = IEEE80211_SKB_CB(entry); 747 747 struct p54_hdr *entry_hdr; ··· 978 976 struct p54_hdr *data, u32 len) 979 977 { 980 978 struct p54_common *priv = dev->priv; 981 - struct sk_buff *entry = priv->tx_queue.next; 979 + struct sk_buff *entry; 982 980 struct sk_buff *target_skb = NULL; 983 981 struct ieee80211_tx_info *info; 984 982 struct memrecord *range; ··· 1016 1014 } 1017 1015 } 1018 1016 1017 + entry = priv->tx_queue.next; 1019 1018 while (left--) { 1020 1019 u32 hole_size; 1021 1020 info = IEEE80211_SKB_CB(entry);
+8
drivers/net/wireless/rt2x00/rt2500usb.c
··· 1952 1952 { USB_DEVICE(0x13b1, 0x000d), USB_DEVICE_DATA(&rt2500usb_ops) }, 1953 1953 { USB_DEVICE(0x13b1, 0x0011), USB_DEVICE_DATA(&rt2500usb_ops) }, 1954 1954 { USB_DEVICE(0x13b1, 0x001a), USB_DEVICE_DATA(&rt2500usb_ops) }, 1955 + /* CNet */ 1956 + { USB_DEVICE(0x1371, 0x9022), USB_DEVICE_DATA(&rt2500usb_ops) }, 1955 1957 /* Conceptronic */ 1956 1958 { USB_DEVICE(0x14b2, 0x3c02), USB_DEVICE_DATA(&rt2500usb_ops) }, 1957 1959 /* D-LINK */ ··· 1978 1976 { USB_DEVICE(0x148f, 0x2570), USB_DEVICE_DATA(&rt2500usb_ops) }, 1979 1977 { USB_DEVICE(0x148f, 0x2573), USB_DEVICE_DATA(&rt2500usb_ops) }, 1980 1978 { USB_DEVICE(0x148f, 0x9020), USB_DEVICE_DATA(&rt2500usb_ops) }, 1979 + /* Sagem */ 1980 + { USB_DEVICE(0x079b, 0x004b), USB_DEVICE_DATA(&rt2500usb_ops) }, 1981 1981 /* Siemens */ 1982 1982 { USB_DEVICE(0x0681, 0x3c06), USB_DEVICE_DATA(&rt2500usb_ops) }, 1983 1983 /* SMC */ 1984 1984 { USB_DEVICE(0x0707, 0xee13), USB_DEVICE_DATA(&rt2500usb_ops) }, 1985 1985 /* Spairon */ 1986 1986 { USB_DEVICE(0x114b, 0x0110), USB_DEVICE_DATA(&rt2500usb_ops) }, 1987 + /* SURECOM */ 1988 + { USB_DEVICE(0x0769, 0x11f3), USB_DEVICE_DATA(&rt2500usb_ops) }, 1987 1989 /* Trust */ 1988 1990 { USB_DEVICE(0x0eb0, 0x9020), USB_DEVICE_DATA(&rt2500usb_ops) }, 1991 + /* VTech */ 1992 + { USB_DEVICE(0x0f88, 0x3012), USB_DEVICE_DATA(&rt2500usb_ops) }, 1989 1993 /* Zinwell */ 1990 1994 { USB_DEVICE(0x5a57, 0x0260), USB_DEVICE_DATA(&rt2500usb_ops) }, 1991 1995 { 0, }
+31 -1
drivers/net/wireless/rt2x00/rt73usb.c
··· 2281 2281 */ 2282 2282 static struct usb_device_id rt73usb_device_table[] = { 2283 2283 /* AboCom */ 2284 + { USB_DEVICE(0x07b8, 0xb21b), USB_DEVICE_DATA(&rt73usb_ops) }, 2285 + { USB_DEVICE(0x07b8, 0xb21c), USB_DEVICE_DATA(&rt73usb_ops) }, 2284 2286 { USB_DEVICE(0x07b8, 0xb21d), USB_DEVICE_DATA(&rt73usb_ops) }, 2287 + { USB_DEVICE(0x07b8, 0xb21e), USB_DEVICE_DATA(&rt73usb_ops) }, 2288 + { USB_DEVICE(0x07b8, 0xb21f), USB_DEVICE_DATA(&rt73usb_ops) }, 2289 + /* AL */ 2290 + { USB_DEVICE(0x14b2, 0x3c10), USB_DEVICE_DATA(&rt73usb_ops) }, 2291 + /* Amigo */ 2292 + { USB_DEVICE(0x148f, 0x9021), USB_DEVICE_DATA(&rt73usb_ops) }, 2293 + { USB_DEVICE(0x0eb0, 0x9021), USB_DEVICE_DATA(&rt73usb_ops) }, 2294 + /* AMIT */ 2295 + { USB_DEVICE(0x18c5, 0x0002), USB_DEVICE_DATA(&rt73usb_ops) }, 2285 2296 /* Askey */ 2286 2297 { USB_DEVICE(0x1690, 0x0722), USB_DEVICE_DATA(&rt73usb_ops) }, 2287 2298 /* ASUS */ ··· 2305 2294 { USB_DEVICE(0x050d, 0x905c), USB_DEVICE_DATA(&rt73usb_ops) }, 2306 2295 /* Billionton */ 2307 2296 { USB_DEVICE(0x1631, 0xc019), USB_DEVICE_DATA(&rt73usb_ops) }, 2297 + { USB_DEVICE(0x08dd, 0x0120), USB_DEVICE_DATA(&rt73usb_ops) }, 2308 2298 /* Buffalo */ 2299 + { USB_DEVICE(0x0411, 0x00d8), USB_DEVICE_DATA(&rt73usb_ops) }, 2309 2300 { USB_DEVICE(0x0411, 0x00f4), USB_DEVICE_DATA(&rt73usb_ops) }, 2310 2301 /* CNet */ 2311 2302 { USB_DEVICE(0x1371, 0x9022), USB_DEVICE_DATA(&rt73usb_ops) }, ··· 2321 2308 { USB_DEVICE(0x07d1, 0x3c04), USB_DEVICE_DATA(&rt73usb_ops) }, 2322 2309 { USB_DEVICE(0x07d1, 0x3c06), USB_DEVICE_DATA(&rt73usb_ops) }, 2323 2310 { USB_DEVICE(0x07d1, 0x3c07), USB_DEVICE_DATA(&rt73usb_ops) }, 2311 + /* Edimax */ 2312 + { USB_DEVICE(0x7392, 0x7318), USB_DEVICE_DATA(&rt73usb_ops) }, 2313 + { USB_DEVICE(0x7392, 0x7618), USB_DEVICE_DATA(&rt73usb_ops) }, 2314 + /* EnGenius */ 2315 + { USB_DEVICE(0x1740, 0x3701), USB_DEVICE_DATA(&rt73usb_ops) }, 2324 2316 /* Gemtek */ 2325 2317 { USB_DEVICE(0x15a9, 0x0004), USB_DEVICE_DATA(&rt73usb_ops) }, 2326 2318 /* Gigabyte */
··· 2346 2328 { USB_DEVICE(0x0db0, 0xa861), USB_DEVICE_DATA(&rt73usb_ops) }, 2347 2329 { USB_DEVICE(0x0db0, 0xa874), USB_DEVICE_DATA(&rt73usb_ops) }, 2348 2330 /* Ralink */ 2331 + { USB_DEVICE(0x04bb, 0x093d), USB_DEVICE_DATA(&rt73usb_ops) }, 2349 2332 { USB_DEVICE(0x148f, 0x2573), USB_DEVICE_DATA(&rt73usb_ops) }, 2350 2333 { USB_DEVICE(0x148f, 0x2671), USB_DEVICE_DATA(&rt73usb_ops) }, 2351 2334 /* Qcom */ 2352 2335 { USB_DEVICE(0x18e8, 0x6196), USB_DEVICE_DATA(&rt73usb_ops) }, 2353 2336 { USB_DEVICE(0x18e8, 0x6229), USB_DEVICE_DATA(&rt73usb_ops) }, 2354 2337 { USB_DEVICE(0x18e8, 0x6238), USB_DEVICE_DATA(&rt73usb_ops) }, 2338 + /* Samsung */ 2339 + { USB_DEVICE(0x04e8, 0x4471), USB_DEVICE_DATA(&rt73usb_ops) }, 2355 2340 /* Senao */ 2356 2341 { USB_DEVICE(0x1740, 0x7100), USB_DEVICE_DATA(&rt73usb_ops) }, 2357 2342 /* Sitecom */ 2358 - { USB_DEVICE(0x0df6, 0x9712), USB_DEVICE_DATA(&rt73usb_ops) }, 2343 + { USB_DEVICE(0x0df6, 0x0024), USB_DEVICE_DATA(&rt73usb_ops) }, 2344 + { USB_DEVICE(0x0df6, 0x0027), USB_DEVICE_DATA(&rt73usb_ops) }, 2345 + { USB_DEVICE(0x0df6, 0x002f), USB_DEVICE_DATA(&rt73usb_ops) }, 2359 2346 { USB_DEVICE(0x0df6, 0x90ac), USB_DEVICE_DATA(&rt73usb_ops) }, 2347 + { USB_DEVICE(0x0df6, 0x9712), USB_DEVICE_DATA(&rt73usb_ops) }, 2360 2348 /* Surecom */ 2361 2349 { USB_DEVICE(0x0769, 0x31f3), USB_DEVICE_DATA(&rt73usb_ops) }, 2350 + /* Philips */ 2351 + { USB_DEVICE(0x0471, 0x200a), USB_DEVICE_DATA(&rt73usb_ops) }, 2362 2352 /* Planex */ 2363 2353 { USB_DEVICE(0x2019, 0xab01), USB_DEVICE_DATA(&rt73usb_ops) }, 2364 2354 { USB_DEVICE(0x2019, 0xab50), USB_DEVICE_DATA(&rt73usb_ops) }, 2355 + /* Zcom */ 2356 + { USB_DEVICE(0x0cde, 0x001c), USB_DEVICE_DATA(&rt73usb_ops) }, 2357 + /* ZyXEL */ 2358 + { USB_DEVICE(0x0586, 0x3415), USB_DEVICE_DATA(&rt73usb_ops) }, 2365 2359 { 0, } 2366 2360 }; 2367 2361
+3 -2
drivers/video/i810/i810_main.c
··· 993 993 struct i810fb_par *par = info->par; 994 994 int line_length, vidmem, mode_valid = 0, retval = 0; 995 995 u32 vyres = var->yres_virtual, vxres = var->xres_virtual; 996 + 996 997 /* 997 998 * Memory limit 998 999 */ ··· 1003 1002 if (vidmem > par->fb.size) { 1004 1003 vyres = par->fb.size/line_length; 1005 1004 if (vyres < var->yres) { 1006 - vyres = yres; 1005 + vyres = info->var.yres; 1007 1006 vxres = par->fb.size/vyres; 1008 1007 vxres /= var->bits_per_pixel >> 3; 1009 1008 line_length = get_line_length(par, vxres, 1010 1009 var->bits_per_pixel); 1011 - vidmem = line_length * yres; 1010 + vidmem = line_length * info->var.yres; 1012 1011 if (vxres < var->xres) { 1013 1012 printk("i810fb: required video memory, " 1014 1013 "%d bytes, for %dx%d-%d (virtual) "
+1 -1
drivers/video/pxafb.c
··· 2230 2230 2231 2231 static struct platform_driver pxafb_driver = { 2232 2232 .probe = pxafb_probe, 2233 - .remove = pxafb_remove, 2233 + .remove = __devexit_p(pxafb_remove), 2234 2234 .suspend = pxafb_suspend, 2235 2235 .resume = pxafb_resume, 2236 2236 .driver = {
+2 -4
drivers/video/sh_mobile_lcdcfb.c
··· 446 446 { 447 447 struct sh_mobile_lcdc_chan *ch; 448 448 struct sh_mobile_lcdc_board_cfg *board_cfg; 449 - unsigned long tmp; 450 449 int k; 451 450 452 451 /* tell the board code to disable the panel */ ··· 455 456 if (board_cfg->display_off) 456 457 board_cfg->display_off(board_cfg->board_data); 457 458 458 - /* cleanup deferred io if SYS bus */ 459 - tmp = ch->cfg.sys_bus_cfg.deferred_io_msec; 460 - if (ch->ldmt1r_value & (1 << 12) && tmp) { 459 + /* cleanup deferred io if enabled */ 460 + if (ch->info.fbdefio) { 461 461 fb_deferred_io_cleanup(&ch->info); 462 462 ch->info.fbdefio = NULL; 463 463 }
+1 -1
drivers/watchdog/gef_wdt.c
··· 269 269 bus_clk = 133; /* in MHz */ 270 270 271 271 freq = fsl_get_sys_freq(); 272 - if (freq > 0) 272 + if (freq != -1) 273 273 bus_clk = freq; 274 274 275 275 /* Map devices registers into memory */
+1
drivers/watchdog/ks8695_wdt.c
··· 21 21 #include <linux/watchdog.h> 22 22 #include <linux/io.h> 23 23 #include <linux/uaccess.h> 24 + #include <mach/timex.h> 24 25 #include <mach/regs-timer.h> 25 26 26 27 #define WDT_DEFAULT_TIME 5 /* seconds */
+1
drivers/watchdog/orion5x_wdt.c
··· 29 29 #define WDT_EN 0x0010 30 30 #define WDT_VAL (TIMER_VIRT_BASE + 0x0024) 31 31 32 + #define ORION5X_TCLK 166666667 32 33 #define WDT_MAX_DURATION (0xffffffff / ORION5X_TCLK) 33 34 #define WDT_IN_USE 0 34 35 #define WDT_OK_TO_CLOSE 1
+70 -100
drivers/watchdog/rc32434_wdt.c
··· 34 34 #include <asm/time.h> 35 35 #include <asm/mach-rc32434/integ.h> 36 36 37 - #define MAX_TIMEOUT 20 38 - #define RC32434_WDT_INTERVAL (15 * HZ) 39 - 40 - #define VERSION "0.2" 37 + #define VERSION "0.4" 41 38 42 39 static struct { 43 - struct completion stop; 44 - int running; 45 - struct timer_list timer; 46 - int queue; 47 - int default_ticks; 48 40 unsigned long inuse; 49 41 } rc32434_wdt_device; 50 42 51 43 static struct integ __iomem *wdt_reg; 52 - static int ticks = 100 * HZ; 53 44 54 45 static int expect_close; 55 - static int timeout; 46 + 47 + /* Board internal clock speed in Hz, 48 + * the watchdog timer ticks at. */ 49 + extern unsigned int idt_cpu_freq; 50 + 51 + /* translate wtcompare value to seconds and vice versa */ 52 + #define WTCOMP2SEC(x) (x / idt_cpu_freq) 53 + #define SEC2WTCOMP(x) (x * idt_cpu_freq) 54 + 55 + /* Use a default timeout of 20s. This should be 56 + * safe for CPU clock speeds up to 400MHz, as 57 + * ((2 ^ 32) - 1) / (400MHz / 2) = 21s. */ 58 + #define WATCHDOG_TIMEOUT 20 59 + 60 + static int timeout = WATCHDOG_TIMEOUT; 56 61 57 62 static int nowayout = WATCHDOG_NOWAYOUT; 58 63 module_param(nowayout, int, 0); 59 64 MODULE_PARM_DESC(nowayout, "Watchdog cannot be stopped once started (default=" 60 65 __MODULE_STRING(WATCHDOG_NOWAYOUT) ")"); 61 66 67 + /* apply or and nand masks to data read from addr and write back */ 68 + #define SET_BITS(addr, or, nand) \ 69 + writel((readl(&addr) | or) & ~nand, &addr) 62 70 63 71 static void rc32434_wdt_start(void) 64 72 { 65 - u32 val; 73 + u32 or, nand; 66 74 67 - if (!rc32434_wdt_device.inuse) { 68 - writel(0, &wdt_reg->wtcount); 75 + /* zero the counter before enabling */ 76 + writel(0, &wdt_reg->wtcount); 69 77 70 - val = RC32434_ERR_WRE; 71 - writel(readl(&wdt_reg->errcs) | val, &wdt_reg->errcs); 78 + /* don't generate a non-maskable interrupt, 79 + * do a warm reset instead */ 80 + nand = 1 << RC32434_ERR_WNE; 81 + or = 1 << RC32434_ERR_WRE; 72 82 73 - val = RC32434_WTC_EN; 74 - writel(readl(&wdt_reg->wtc) | val, &wdt_reg->wtc);
75 - } 76 - rc32434_wdt_device.running++; 83 + /* reset the ERRCS timeout bit in case it's set */ 84 + nand |= 1 << RC32434_ERR_WTO; 85 + 86 + SET_BITS(wdt_reg->errcs, or, nand); 87 + 88 + /* reset WTC timeout bit and enable WDT */ 89 + nand = 1 << RC32434_WTC_TO; 90 + or = 1 << RC32434_WTC_EN; 91 + 92 + SET_BITS(wdt_reg->wtc, or, nand); 77 93 } 78 94 79 95 static void rc32434_wdt_stop(void) 80 96 { 81 - u32 val; 97 + /* Disable WDT */ 98 + SET_BITS(wdt_reg->wtc, 0, 1 << RC32434_WTC_EN); 99 + } 82 100 83 - if (rc32434_wdt_device.running) { 101 + static int rc32434_wdt_set(int new_timeout) 102 + { 103 + int max_to = WTCOMP2SEC((u32)-1); 84 104 85 - val = ~RC32434_WTC_EN; 86 - writel(readl(&wdt_reg->wtc) & val, &wdt_reg->wtc); 87 - 88 - val = ~RC32434_ERR_WRE; 89 - writel(readl(&wdt_reg->errcs) & val, &wdt_reg->errcs); 90 - 91 - rc32434_wdt_device.running = 0; 105 + if (new_timeout < 0 || new_timeout > max_to) { 106 + printk(KERN_ERR KBUILD_MODNAME 107 + ": timeout value must be between 0 and %d", 108 + max_to); 109 + return -EINVAL; 92 110 } 93 111 } 94 - 95 - static void rc32434_wdt_set(int new_timeout) 96 - { 97 - u32 cmp = new_timeout * HZ; 98 - u32 state, val; 99 - 100 112 timeout = new_timeout; 101 - /* 102 - * store and disable WTC 103 - */ 104 - state = (u32)(readl(&wdt_reg->wtc) & RC32434_WTC_EN); 105 - val = ~RC32434_WTC_EN; 106 - writel(readl(&wdt_reg->wtc) & val, &wdt_reg->wtc); 112 + writel(SEC2WTCOMP(timeout), &wdt_reg->wtcompare); 107 113 108 - writel(0, &wdt_reg->wtcount); 109 - writel(cmp, &wdt_reg->wtcompare); 110 - 111 - /* 112 - * restore WTC 113 - */ 114 - 115 - writel(readl(&wdt_reg->wtc) | state, &wdt_reg); 114 + return 0; 116 115 } 117 116 118 - static void rc32434_wdt_reset(void) 117 + static void rc32434_wdt_ping(void) 119 118 { 120 - ticks = rc32434_wdt_device.default_ticks; 121 - } 122 - 123 - static void rc32434_wdt_update(unsigned long unused) 124 - { 125 - if (rc32434_wdt_device.running)
126 - ticks--; 127 - 128 119 writel(0, &wdt_reg->wtcount); 129 - 130 - if (rc32434_wdt_device.queue && ticks) 131 - mod_timer(&rc32434_wdt_device.timer, 132 - jiffies + RC32434_WDT_INTERVAL); 133 - else 134 - complete(&rc32434_wdt_device.stop); 135 120 } 136 121 137 122 static int rc32434_wdt_open(struct inode *inode, struct file *file) ··· 127 142 if (nowayout) 128 143 __module_get(THIS_MODULE); 129 144 145 + rc32434_wdt_start(); 146 + rc32434_wdt_ping(); 147 + 130 148 return nonseekable_open(inode, file); 131 149 } 132 150 133 151 static int rc32434_wdt_release(struct inode *inode, struct file *file) 134 152 { 135 - if (expect_close && nowayout == 0) { 153 + if (expect_close == 42) { 136 154 rc32434_wdt_stop(); 137 155 printk(KERN_INFO KBUILD_MODNAME ": disabling watchdog timer\n"); 138 156 module_put(THIS_MODULE); 139 - } else 157 + } else { 140 158 printk(KERN_CRIT KBUILD_MODNAME 141 159 ": device closed unexpectedly. WDT will not stop !\n"); 142 - 160 + rc32434_wdt_ping(); 161 + } 143 162 clear_bit(0, &rc32434_wdt_device.inuse); 144 163 return 0; 145 164 } ··· 163 174 if (get_user(c, data + i)) 164 175 return -EFAULT; 165 176 if (c == 'V') 166 - expect_close = 1; 177 + expect_close = 42; 167 178 } 168 179 } 169 - rc32434_wdt_update(0); 180 + rc32434_wdt_ping(); 170 181 return len; 171 182 } 172 183 return 0; ··· 186 197 }; 187 198 switch (cmd) { 188 199 case WDIOC_KEEPALIVE: 189 - rc32434_wdt_reset(); 200 + rc32434_wdt_ping(); 190 201 break; 191 202 case WDIOC_GETSTATUS: 192 203 case WDIOC_GETBOOTSTATUS: 193 - value = readl(&wdt_reg->wtcount); 204 + value = 0; 194 205 if (copy_to_user(argp, &value, sizeof(int))) 195 206 return -EFAULT; 196 207 break; ··· 207 218 break; 208 219 case WDIOS_DISABLECARD: 209 220 rc32434_wdt_stop(); 221 + break; 210 222 default: 211 223 return -EINVAL; 212 224 } ··· 215 225 case WDIOC_SETTIMEOUT: 216 226 if (copy_from_user(&new_timeout, argp, sizeof(int))) 217 227 return -EFAULT; 218 - if (new_timeout < 1) 228 + if (rc32434_wdt_set(new_timeout)) 219 229 return -EINVAL; 220 - if (new_timeout > MAX_TIMEOUT) 221 - return -EINVAL; 222 - rc32434_wdt_set(new_timeout); 230 + /* Fall through */ 223 231 case WDIOC_GETTIMEOUT: 224 232 return copy_to_user(argp, &timeout, sizeof(int)); 225 233 default:
··· 242 254 .fops = &rc32434_wdt_fops, 243 255 }; 244 256 245 - static char banner[] = KERN_INFO KBUILD_MODNAME 257 + static char banner[] __devinitdata = KERN_INFO KBUILD_MODNAME 246 258 ": Watchdog Timer version " VERSION ", timer margin: %d sec\n"; 247 259 248 - static int rc32434_wdt_probe(struct platform_device *pdev) 260 + static int __devinit rc32434_wdt_probe(struct platform_device *pdev) 249 261 { 250 262 int ret; 251 263 struct resource *r; 252 264 253 - r = platform_get_resource_byname(pdev, IORESOURCE_MEM, "rb500_wdt_res"); 265 + r = platform_get_resource_byname(pdev, IORESOURCE_MEM, "rb532_wdt_res"); 254 266 if (!r) { 255 267 printk(KERN_ERR KBUILD_MODNAME 256 268 "failed to retrieve resources\n"); ··· 265 277 } 266 278 267 279 ret = misc_register(&rc32434_wdt_miscdev); 268 - 269 280 if (ret < 0) { 270 281 printk(KERN_ERR KBUILD_MODNAME 271 282 "failed to register watchdog device\n"); 272 283 goto unmap; 273 284 } 274 - 275 - init_completion(&rc32434_wdt_device.stop); 276 - rc32434_wdt_device.queue = 0; 277 - 278 - clear_bit(0, &rc32434_wdt_device.inuse); 279 - 280 - setup_timer(&rc32434_wdt_device.timer, rc32434_wdt_update, 0L); 281 - 282 - rc32434_wdt_device.default_ticks = ticks; 283 - 284 - rc32434_wdt_start(); 285 285 286 286 printk(banner, timeout); 287 287 ··· 280 304 return ret; 281 305 } 282 306 283 - static int rc32434_wdt_remove(struct platform_device *pdev) 307 + static int __devexit rc32434_wdt_remove(struct platform_device *pdev) 284 308 { 285 - if (rc32434_wdt_device.queue) { 286 - rc32434_wdt_device.queue = 0; 287 - wait_for_completion(&rc32434_wdt_device.stop); 288 - } 289 309 misc_deregister(&rc32434_wdt_miscdev); 290 - 291 310 iounmap(wdt_reg); 292 - 293 311 return 0; 294 312 } 295 313 296 314 static struct platform_driver rc32434_wdt = { 297 315 .probe = rc32434_wdt_probe, 298 - .remove = rc32434_wdt_remove, 299 - .driver = { 316 + .remove = __devexit_p(rc32434_wdt_remove), 317 + .driver = { 300 318 .name = "rc32434_wdt", 301 319 } 302 320 };
+5 -5
fs/btrfs/ctree.c
··· 277 277 if (*cow_ret == buf) 278 278 unlock_orig = 1; 279 279 280 - WARN_ON(!btrfs_tree_locked(buf)); 280 + btrfs_assert_tree_locked(buf); 281 281 282 282 if (parent) 283 283 parent_start = parent->start; ··· 2365 2365 if (slot >= btrfs_header_nritems(upper) - 1) 2366 2366 return 1; 2367 2367 2368 - WARN_ON(!btrfs_tree_locked(path->nodes[1])); 2368 + btrfs_assert_tree_locked(path->nodes[1]); 2369 2369 2370 2370 right = read_node_slot(root, upper, slot + 1); 2371 2371 btrfs_tree_lock(right); ··· 2562 2562 if (right_nritems == 0) 2563 2563 return 1; 2564 2564 2565 - WARN_ON(!btrfs_tree_locked(path->nodes[1])); 2565 + btrfs_assert_tree_locked(path->nodes[1]); 2566 2566 2567 2567 left = read_node_slot(root, path->nodes[1], slot - 1); 2568 2568 btrfs_tree_lock(left); ··· 4101 4101 4102 4102 next = read_node_slot(root, c, slot); 4103 4103 if (!path->skip_locking) { 4104 - WARN_ON(!btrfs_tree_locked(c)); 4104 + btrfs_assert_tree_locked(c); 4105 4105 btrfs_tree_lock(next); 4106 4106 btrfs_set_lock_blocking(next); 4107 4107 } ··· 4126 4126 reada_for_search(root, path, level, slot, 0); 4127 4127 next = read_node_slot(root, next, 0); 4128 4128 if (!path->skip_locking) { 4129 - WARN_ON(!btrfs_tree_locked(path->nodes[level])); 4129 + btrfs_assert_tree_locked(path->nodes[level]); 4130 4130 btrfs_tree_lock(next); 4131 4131 btrfs_set_lock_blocking(next); 4132 4132 }
+2 -2
fs/btrfs/disk-io.c
··· 857 857 struct inode *btree_inode = root->fs_info->btree_inode; 858 858 if (btrfs_header_generation(buf) == 859 859 root->fs_info->running_transaction->transid) { 860 - WARN_ON(!btrfs_tree_locked(buf)); 860 + btrfs_assert_tree_locked(buf); 861 861 862 862 /* ugh, clear_extent_buffer_dirty can be expensive */ 863 863 btrfs_set_lock_blocking(buf); ··· 2361 2361 2362 2362 btrfs_set_lock_blocking(buf); 2363 2363 2364 - WARN_ON(!btrfs_tree_locked(buf)); 2364 + btrfs_assert_tree_locked(buf); 2365 2365 if (transid != root->fs_info->generation) { 2366 2366 printk(KERN_CRIT "btrfs transid mismatch buffer %llu, " 2367 2367 "found %llu running %llu\n",
+2 -2
fs/btrfs/extent-tree.c
··· 4418 4418 path = btrfs_alloc_path(); 4419 4419 BUG_ON(!path); 4420 4420 4421 - BUG_ON(!btrfs_tree_locked(parent)); 4421 + btrfs_assert_tree_locked(parent); 4422 4422 parent_level = btrfs_header_level(parent); 4423 4423 extent_buffer_get(parent); 4424 4424 path->nodes[parent_level] = parent; 4425 4425 path->slots[parent_level] = btrfs_header_nritems(parent); 4426 4426 4427 - BUG_ON(!btrfs_tree_locked(node)); 4427 + btrfs_assert_tree_locked(node); 4428 4428 level = btrfs_header_level(node); 4429 4429 extent_buffer_get(node); 4430 4430 path->nodes[level] = node;
+3 -3
fs/btrfs/locking.c
··· 220 220 return 0; 221 221 } 222 222 223 - int btrfs_tree_locked(struct extent_buffer *eb) 223 + void btrfs_assert_tree_locked(struct extent_buffer *eb) 224 224 { 225 - return test_bit(EXTENT_BUFFER_BLOCKING, &eb->bflags) || 226 - spin_is_locked(&eb->lock); 225 + if (!test_bit(EXTENT_BUFFER_BLOCKING, &eb->bflags)) 226 + assert_spin_locked(&eb->lock); 227 227 }
+1 -1
fs/btrfs/locking.h
··· 21 21 22 22 int btrfs_tree_lock(struct extent_buffer *eb); 23 23 int btrfs_tree_unlock(struct extent_buffer *eb); 24 - int btrfs_tree_locked(struct extent_buffer *eb); 25 24 26 25 int btrfs_try_tree_lock(struct extent_buffer *eb); 27 26 int btrfs_try_spin_lock(struct extent_buffer *eb); 28 27 29 28 void btrfs_set_lock_blocking(struct extent_buffer *eb); 30 29 void btrfs_clear_lock_blocking(struct extent_buffer *eb); 30 + void btrfs_assert_tree_locked(struct extent_buffer *eb); 31 31 #endif
-5
fs/devpts/inode.c
··· 198 198 199 199 fsi->ptmx_dentry = dentry; 200 200 rc = 0; 201 - 202 - printk(KERN_DEBUG "Created ptmx node in devpts ino %lu\n", 203 - inode->i_ino); 204 201 out: 205 202 mutex_unlock(&root->d_inode->i_mutex); 206 203 return rc; ··· 365 368 int err; 366 369 struct pts_fs_info *fsi; 367 370 struct pts_mount_opts *opts; 368 - 369 - printk(KERN_NOTICE "devpts: newinstance mount\n"); 370 371 371 372 err = get_sb_nodev(fs_type, flags, data, devpts_fill_super, mnt); 372 373 if (err)
+5 -3
fs/ext4/ialloc.c
··· 188 188 struct ext4_group_desc *gdp; 189 189 struct ext4_super_block *es; 190 190 struct ext4_sb_info *sbi; 191 - int fatal = 0, err, count; 191 + int fatal = 0, err, count, cleared; 192 192 ext4_group_t flex_group; 193 193 194 194 if (atomic_read(&inode->i_count) > 1) { ··· 248 248 goto error_return; 249 249 250 250 /* Ok, now we can actually update the inode bitmaps.. */ 251 - if (!ext4_clear_bit_atomic(sb_bgl_lock(sbi, block_group), 252 - bit, bitmap_bh->b_data)) 251 + spin_lock(sb_bgl_lock(sbi, block_group)); 252 + cleared = ext4_clear_bit(bit, bitmap_bh->b_data); 253 + spin_unlock(sb_bgl_lock(sbi, block_group)); 254 + if (!cleared) 253 255 ext4_error(sb, "ext4_free_inode", 254 256 "bit already cleared for inode %lu", ino); 255 257 else {
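The ialloc.c hunk replaces the atomic test-and-clear with a plain clear under sb_bgl_lock(), keeping the old bit value in `cleared` so a double free can still be detected. The essential pattern, reduced to a standalone sketch (the toy names are invented, not ext4 code):

```c
#include <assert.h>

/* Toy inode bitmap word; a set bit means the inode is in use. */
static unsigned long bitmap;

/* Non-atomic test-and-clear, as done under the block-group lock. */
static int test_and_clear_bit_toy(int nr, unsigned long *addr)
{
	unsigned long mask = 1UL << nr;
	int old = (*addr & mask) != 0;

	*addr &= ~mask;
	return old;
}

/*
 * Sketch of the reworked ext4_free_inode() bitmap step: clear the bit
 * (the real code holds sb_bgl_lock() around this) and use the returned
 * old value to detect a double free.  Returns 0 on success, -1 when the
 * bit was already clear (the "bit already cleared" ext4_error case).
 */
static int free_inode_bit(int bit)
{
	/* spin_lock(sb_bgl_lock(sbi, block_group)) would go here */
	int cleared = test_and_clear_bit_toy(bit, &bitmap);
	/* spin_unlock(...) */

	return cleared ? 0 : -1;
}
```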
+11 -2
fs/squashfs/block.c
··· 80 80 * generated a larger block - this does occasionally happen with zlib). 81 81 */ 82 82 int squashfs_read_data(struct super_block *sb, void **buffer, u64 index, 83 - int length, u64 *next_index, int srclength) 83 + int length, u64 *next_index, int srclength, int pages) 84 84 { 85 85 struct squashfs_sb_info *msblk = sb->s_fs_info; 86 86 struct buffer_head **bh; ··· 185 185 } 186 186 187 187 if (msblk->stream.avail_out == 0) { 188 + if (page == pages) { 189 + ERROR("zlib_inflate tried to " 190 + "decompress too much data, " 191 + "expected %d bytes. Zlib " 192 + "data probably corrupt\n", 193 + srclength); 194 + goto release_mutex; 195 + } 188 196 msblk->stream.next_out = buffer[page++]; 189 197 msblk->stream.avail_out = PAGE_CACHE_SIZE; 190 198 } ··· 276 268 put_bh(bh[k]); 277 269 278 270 read_failure: 279 - ERROR("sb_bread failed reading block 0x%llx\n", cur_index); 271 + ERROR("squashfs_read_data failed to read block 0x%llx\n", 272 + (unsigned long long) index); 280 273 kfree(bh); 281 274 return -EIO; 282 275 }
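The added `if (page == pages)` check stops the zlib loop from being handed more output pages than the buffer[] array actually holds when the compressed data is corrupt. A minimal standalone model of that bounds check (fill_pages and PAGE_SZ are invented stand-ins, not squashfs code):

```c
#include <assert.h>

#define PAGE_SZ 4	/* tiny page size for the demo */

/*
 * Sketch of the new bounds check in squashfs_read_data(): output space is a
 * fixed array of 'pages' buffers, and if the (possibly corrupt) input wants
 * yet another output page once all of them are used, we bail out instead of
 * walking off the end of buffer[].
 */
static int fill_pages(char **buffer, int pages, const char *src, int srclen)
{
	int page = 0, avail = 0;
	char *out = 0;

	for (int i = 0; i < srclen; i++) {
		if (avail == 0) {
			if (page == pages)	/* the added check */
				return -1;	/* "data probably corrupt" */
			out = buffer[page++];
			avail = PAGE_SZ;
		}
		*out++ = src[i];
		avail--;
	}
	return page;	/* number of pages actually filled */
}
```

With 2 pages of 4 bytes, 8 bytes of input fits exactly; a 9th byte trips the check instead of overrunning.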
+2 -2
fs/squashfs/cache.c
··· 119 119 120 120 entry->length = squashfs_read_data(sb, entry->data, 121 121 block, length, &entry->next_index, 122 - cache->block_size); 122 + cache->block_size, cache->pages); 123 123 124 124 spin_lock(&cache->lock); 125 125 ··· 406 406 for (i = 0; i < pages; i++, buffer += PAGE_CACHE_SIZE) 407 407 data[i] = buffer; 408 408 res = squashfs_read_data(sb, data, block, length | 409 - SQUASHFS_COMPRESSED_BIT_BLOCK, NULL, length); 409 + SQUASHFS_COMPRESSED_BIT_BLOCK, NULL, length, pages); 410 410 kfree(data); 411 411 return res; 412 412 }
+4 -2
fs/squashfs/inode.c
··· 133 133 type = le16_to_cpu(sqshb_ino->inode_type); 134 134 switch (type) { 135 135 case SQUASHFS_REG_TYPE: { 136 - unsigned int frag_offset, frag_size, frag; 136 + unsigned int frag_offset, frag; 137 + int frag_size; 137 138 u64 frag_blk; 138 139 struct squashfs_reg_inode *sqsh_ino = &squashfs_ino.reg; 139 140 ··· 176 175 break; 177 176 } 178 177 case SQUASHFS_LREG_TYPE: { 179 - unsigned int frag_offset, frag_size, frag; 178 + unsigned int frag_offset, frag; 179 + int frag_size; 180 180 u64 frag_blk; 181 181 struct squashfs_lreg_inode *sqsh_ino = &squashfs_ino.lreg; 182 182
+1 -1
fs/squashfs/squashfs.h
··· 34 34 35 35 /* block.c */ 36 36 extern int squashfs_read_data(struct super_block *, void **, u64, int, u64 *, 37 - int); 37 + int, int); 38 38 39 39 /* cache.c */ 40 40 extern struct squashfs_cache *squashfs_cache_init(char *, int, int);
+1 -1
fs/squashfs/super.c
··· 389 389 return err; 390 390 } 391 391 392 - printk(KERN_INFO "squashfs: version 4.0 (2009/01/03) " 392 + printk(KERN_INFO "squashfs: version 4.0 (2009/01/31) " 393 393 "Phillip Lougher\n"); 394 394 395 395 return 0;
+2
include/linux/ata.h
··· 89 89 ATA_ID_DLF = 128, 90 90 ATA_ID_CSFO = 129, 91 91 ATA_ID_CFA_POWER = 160, 92 + ATA_ID_CFA_KEY_MGMT = 162, 93 + ATA_ID_CFA_MODES = 163, 92 94 ATA_ID_ROT_SPEED = 217, 93 95 ATA_ID_PIO4 = (1 << 1), 94 96
+19 -17
include/linux/bootmem.h
··· 65 65 #define BOOTMEM_DEFAULT 0 66 66 #define BOOTMEM_EXCLUSIVE (1<<0) 67 67 68 + extern int reserve_bootmem(unsigned long addr, 69 + unsigned long size, 70 + int flags); 68 71 extern int reserve_bootmem_node(pg_data_t *pgdat, 69 - unsigned long physaddr, 70 - unsigned long size, 71 - int flags); 72 - #ifndef CONFIG_HAVE_ARCH_BOOTMEM_NODE 73 - extern int reserve_bootmem(unsigned long addr, unsigned long size, int flags); 74 - #endif 72 + unsigned long physaddr, 73 + unsigned long size, 74 + int flags); 75 75 76 - extern void *__alloc_bootmem_nopanic(unsigned long size, 76 + extern void *__alloc_bootmem(unsigned long size, 77 77 unsigned long align, 78 78 unsigned long goal); 79 - extern void *__alloc_bootmem(unsigned long size, 79 + extern void *__alloc_bootmem_nopanic(unsigned long size, 80 80 unsigned long align, 81 81 unsigned long goal); 82 - extern void *__alloc_bootmem_low(unsigned long size, 83 - unsigned long align, 84 - unsigned long goal); 85 82 extern void *__alloc_bootmem_node(pg_data_t *pgdat, 86 83 unsigned long size, 87 84 unsigned long align, ··· 87 90 unsigned long size, 88 91 unsigned long align, 89 92 unsigned long goal); 93 + extern void *__alloc_bootmem_low(unsigned long size, 94 + unsigned long align, 95 + unsigned long goal); 90 96 extern void *__alloc_bootmem_low_node(pg_data_t *pgdat, 91 97 unsigned long size, 92 98 unsigned long align, 93 99 unsigned long goal); 94 - #ifndef CONFIG_HAVE_ARCH_BOOTMEM_NODE 100 + 95 101 #define alloc_bootmem(x) \ 96 102 __alloc_bootmem(x, SMP_CACHE_BYTES, __pa(MAX_DMA_ADDRESS)) 97 103 #define alloc_bootmem_nopanic(x) \ 98 104 __alloc_bootmem_nopanic(x, SMP_CACHE_BYTES, __pa(MAX_DMA_ADDRESS)) 99 - #define alloc_bootmem_low(x) \ 100 - __alloc_bootmem_low(x, SMP_CACHE_BYTES, 0) 101 105 #define alloc_bootmem_pages(x) \ 102 106 __alloc_bootmem(x, PAGE_SIZE, __pa(MAX_DMA_ADDRESS)) 103 107 #define alloc_bootmem_pages_nopanic(x) \ 104 108 __alloc_bootmem_nopanic(x, PAGE_SIZE, __pa(MAX_DMA_ADDRESS)) 105 - #define alloc_bootmem_low_pages(x) \ 106 - __alloc_bootmem_low(x, PAGE_SIZE, 0) 107 109 #define alloc_bootmem_node(pgdat, x) \ 108 110 __alloc_bootmem_node(pgdat, x, SMP_CACHE_BYTES, __pa(MAX_DMA_ADDRESS)) 109 111 #define alloc_bootmem_pages_node(pgdat, x) \ 110 112 __alloc_bootmem_node(pgdat, x, PAGE_SIZE, __pa(MAX_DMA_ADDRESS)) 113 + #define alloc_bootmem_pages_node_nopanic(pgdat, x) \ 114 + __alloc_bootmem_node_nopanic(pgdat, x, PAGE_SIZE, __pa(MAX_DMA_ADDRESS)) 115 + 116 + #define alloc_bootmem_low(x) \ 117 + __alloc_bootmem_low(x, SMP_CACHE_BYTES, 0) 118 + #define alloc_bootmem_low_pages(x) \ 119 + __alloc_bootmem_low(x, PAGE_SIZE, 0) 111 120 #define alloc_bootmem_low_pages_node(pgdat, x) \ 112 121 __alloc_bootmem_low_node(pgdat, x, PAGE_SIZE, 0) 113 - #endif /* !CONFIG_HAVE_ARCH_BOOTMEM_NODE */ 114 122 115 123 extern int reserve_bootmem_generic(unsigned long addr, unsigned long size, 116 124 int flags);
-1
include/linux/cpufreq.h
··· 234 234 int (*suspend) (struct cpufreq_policy *policy, pm_message_t pmsg); 235 235 int (*resume) (struct cpufreq_policy *policy); 236 236 struct freq_attr **attr; 237 - bool hide_interface; 238 237 }; 239 238 240 239 /* flags */
+1 -6
include/linux/dmaengine.h
··· 97 97 98 98 /** 99 99 * struct dma_chan_percpu - the per-CPU part of struct dma_chan 100 - * @refcount: local_t used for open-coded "bigref" counting 101 100 * @memcpy_count: transaction counter 102 101 * @bytes_transferred: byte counter 103 102 */ ··· 113 114 * @cookie: last cookie value returned to client 114 115 * @chan_id: channel ID for sysfs 115 116 * @dev: class device for sysfs 116 - * @refcount: kref, used in "bigref" slow-mode 117 - * @slow_ref: indicates that the DMA channel is free 118 - * @rcu: the DMA channel's RCU head 119 117 * @device_node: used to add this to the device chan list 120 118 * @local: per-cpu pointer to a struct dma_chan_percpu 121 119 * @client-count: how many clients are using this channel ··· 209 213 * @global_node: list_head for global dma_device_list 210 214 * @cap_mask: one or more dma_capability flags 211 215 * @max_xor: maximum number of xor sources, 0 if no capability 212 - * @refcount: reference count 213 - * @done: IO completion struct 214 216 * @dev_id: unique device ID 215 217 * @dev: struct device reference for dma mapping api 216 218 * @device_alloc_chan_resources: allocate resources and return the ··· 221 227 * @device_prep_dma_interrupt: prepares an end of chain interrupt operation 222 228 * @device_prep_slave_sg: prepares a slave dma operation 223 229 * @device_terminate_all: terminate all pending operations 230 + * @device_is_tx_complete: poll for transaction completion 224 231 * @device_issue_pending: push pending transactions to hardware 225 232 */ 226 233 struct dma_device {
-1
include/linux/hdreg.h
··· 511 511 unsigned short words69_70[2]; /* reserved words 69-70 512 512 * future command overlap and queuing 513 513 */ 514 - /* HDIO_GET_IDENTITY currently returns only words 0 through 70 */ 515 514 unsigned short words71_74[4]; /* reserved words 71-74 516 515 * for IDENTIFY PACKET DEVICE command 517 516 */
+1
include/linux/ide.h
··· 866 866 unsigned int n_ports; 867 867 struct device *dev[2]; 868 868 unsigned int (*init_chipset)(struct pci_dev *); 869 + irq_handler_t irq_handler; 869 870 unsigned long host_flags; 870 871 void *host_priv; 871 872 ide_hwif_t *cur_port; /* for hosts requiring serialization */
+4 -2
include/linux/libata.h
··· 275 275 * advised to wait only for the following duration before 276 276 * doing SRST. 277 277 */ 278 - ATA_TMOUT_PMP_SRST_WAIT = 1000, 278 + ATA_TMOUT_PMP_SRST_WAIT = 5000, 279 279 280 280 /* ATA bus states */ 281 281 BUS_UNKNOWN = 0, ··· 530 530 unsigned long flags; /* ATA_QCFLAG_xxx */ 531 531 unsigned int tag; 532 532 unsigned int n_elem; 533 + unsigned int orig_n_elem; 533 534 534 535 int dma_dir; 535 536 ··· 751 750 acpi_handle acpi_handle; 752 751 struct ata_acpi_gtm __acpi_init_gtm; /* use ata_acpi_init_gtm() */ 753 752 #endif 754 - u8 sector_buf[ATA_SECT_SIZE]; /* owned by EH */ 753 + /* owned by EH */ 754 + u8 sector_buf[ATA_SECT_SIZE] ____cacheline_aligned; 755 755 }; 756 756 757 757 /* The following initializer overrides a method to NULL whether one of
+1
include/linux/netdevice.h
··· 1079 1079 extern int register_netdevice_notifier(struct notifier_block *nb); 1080 1080 extern int unregister_netdevice_notifier(struct notifier_block *nb); 1081 1081 extern int init_dummy_netdev(struct net_device *dev); 1082 + extern void netdev_resync_ops(struct net_device *dev); 1082 1083 1083 1084 extern int call_netdevice_notifiers(unsigned long val, struct net_device *dev); 1084 1085 extern struct net_device *dev_get_by_index(struct net *net, int ifindex);
+74 -34
include/linux/percpu.h
··· 5 5 #include <linux/slab.h> /* For kmalloc() */ 6 6 #include <linux/smp.h> 7 7 #include <linux/cpumask.h> 8 + #include <linux/pfn.h> 8 9 9 10 #include <asm/percpu.h> 10 11 ··· 53 52 #define EXPORT_PER_CPU_SYMBOL(var) EXPORT_SYMBOL(per_cpu__##var) 54 53 #define EXPORT_PER_CPU_SYMBOL_GPL(var) EXPORT_SYMBOL_GPL(per_cpu__##var) 55 54 56 - /* Enough to cover all DEFINE_PER_CPUs in kernel, including modules. */ 57 - #ifndef PERCPU_ENOUGH_ROOM 55 + /* enough to cover all DEFINE_PER_CPUs in modules */ 58 56 #ifdef CONFIG_MODULES 59 - #define PERCPU_MODULE_RESERVE 8192 57 + #define PERCPU_MODULE_RESERVE (8 << 10) 60 58 #else 61 - #define PERCPU_MODULE_RESERVE 0 59 + #define PERCPU_MODULE_RESERVE 0 62 60 #endif 63 61 62 + #ifndef PERCPU_ENOUGH_ROOM 64 63 #define PERCPU_ENOUGH_ROOM \ 65 - (__per_cpu_end - __per_cpu_start + PERCPU_MODULE_RESERVE) 66 - #endif /* PERCPU_ENOUGH_ROOM */ 64 + (ALIGN(__per_cpu_end - __per_cpu_start, SMP_CACHE_BYTES) + \ 65 + PERCPU_MODULE_RESERVE) 66 + #endif 67 67 68 68 /* 69 69 * Must be an lvalue. Since @var must be a simple identifier, ··· 78 76 79 77 #ifdef CONFIG_SMP 80 78 79 + #ifdef CONFIG_HAVE_DYNAMIC_PER_CPU_AREA 80 + 81 + /* minimum unit size, also is the maximum supported allocation size */ 82 + #define PCPU_MIN_UNIT_SIZE PFN_ALIGN(64 << 10) 83 + 84 + /* 85 + * PERCPU_DYNAMIC_RESERVE indicates the amount of free area to piggy 86 + * back on the first chunk for dynamic percpu allocation if arch is 87 + * manually allocating and mapping it for faster access (as a part of 88 + * large page mapping for example). 89 + * 90 + * The following values give between one and two pages of free space 91 + * after typical minimal boot (2-way SMP, single disk and NIC) with 92 + * both defconfig and a distro config on x86_64 and 32. More 93 + * intelligent way to determine this would be nice. 
94 + */ 95 + #if BITS_PER_LONG > 32 96 + #define PERCPU_DYNAMIC_RESERVE (20 << 10) 97 + #else 98 + #define PERCPU_DYNAMIC_RESERVE (12 << 10) 99 + #endif 100 + 101 + extern void *pcpu_base_addr; 102 + 103 + typedef struct page * (*pcpu_get_page_fn_t)(unsigned int cpu, int pageno); 104 + typedef void (*pcpu_populate_pte_fn_t)(unsigned long addr); 105 + 106 + extern size_t __init pcpu_setup_first_chunk(pcpu_get_page_fn_t get_page_fn, 107 + size_t static_size, size_t reserved_size, 108 + ssize_t unit_size, ssize_t dyn_size, 109 + void *base_addr, 110 + pcpu_populate_pte_fn_t populate_pte_fn); 111 + 112 + /* 113 + * Use this to get to a cpu's version of the per-cpu object 114 + * dynamically allocated. Non-atomic access to the current CPU's 115 + * version should probably be combined with get_cpu()/put_cpu(). 116 + */ 117 + #define per_cpu_ptr(ptr, cpu) SHIFT_PERCPU_PTR((ptr), per_cpu_offset((cpu))) 118 + 119 + extern void *__alloc_reserved_percpu(size_t size, size_t align); 120 + 121 + #else /* CONFIG_HAVE_DYNAMIC_PER_CPU_AREA */ 122 + 81 123 struct percpu_data { 82 124 void *ptrs[1]; 83 125 }; 84 126 85 127 #define __percpu_disguise(pdata) (struct percpu_data *)~(unsigned long)(pdata) 86 - /* 87 - * Use this to get to a cpu's version of the per-cpu object dynamically 88 - * allocated. Non-atomic access to the current CPU's version should 89 - * probably be combined with get_cpu()/put_cpu(). 
90 - */ 91 - #define percpu_ptr(ptr, cpu) \ 92 - ({ \ 93 - struct percpu_data *__p = __percpu_disguise(ptr); \ 94 - (__typeof__(ptr))__p->ptrs[(cpu)]; \ 128 + 129 + #define per_cpu_ptr(ptr, cpu) \ 130 + ({ \ 131 + struct percpu_data *__p = __percpu_disguise(ptr); \ 132 + (__typeof__(ptr))__p->ptrs[(cpu)]; \ 95 133 }) 96 134 97 - extern void *__percpu_alloc_mask(size_t size, gfp_t gfp, cpumask_t *mask); 98 - extern void percpu_free(void *__pdata); 135 + #endif /* CONFIG_HAVE_DYNAMIC_PER_CPU_AREA */ 136 + 137 + extern void *__alloc_percpu(size_t size, size_t align); 138 + extern void free_percpu(void *__pdata); 99 139 100 140 #else /* CONFIG_SMP */ 101 141 102 - #define percpu_ptr(ptr, cpu) ({ (void)(cpu); (ptr); }) 142 + #define per_cpu_ptr(ptr, cpu) ({ (void)(cpu); (ptr); }) 103 143 104 - static __always_inline void *__percpu_alloc_mask(size_t size, gfp_t gfp, cpumask_t *mask) 144 + static inline void *__alloc_percpu(size_t size, size_t align) 105 145 { 106 - return kzalloc(size, gfp); 146 + /* 147 + * Can't easily make larger alignment work with kmalloc. WARN 148 + * on it. Larger alignment should only be used for module 149 + * percpu sections on SMP for which this path isn't used. 
150 + */ 151 + WARN_ON_ONCE(align > SMP_CACHE_BYTES); 152 + return kzalloc(size, GFP_KERNEL); 107 153 } 108 154 109 - static inline void percpu_free(void *__pdata) 155 + static inline void free_percpu(void *p) 110 156 { 111 - kfree(__pdata); 157 + kfree(p); 112 158 } 113 159 114 160 #endif /* CONFIG_SMP */ 115 161 116 - #define percpu_alloc_mask(size, gfp, mask) \ 117 - __percpu_alloc_mask((size), (gfp), &(mask)) 118 - 119 - #define percpu_alloc(size, gfp) percpu_alloc_mask((size), (gfp), cpu_online_map) 120 - 121 - /* (legacy) interface for use without CPU hotplug handling */ 122 - 123 - #define __alloc_percpu(size) percpu_alloc_mask((size), GFP_KERNEL, \ 124 - cpu_possible_map) 125 - #define alloc_percpu(type) (type *)__alloc_percpu(sizeof(type)) 126 - #define free_percpu(ptr) percpu_free((ptr)) 127 - #define per_cpu_ptr(ptr, cpu) percpu_ptr((ptr), (cpu)) 162 + #define alloc_percpu(type) (type *)__alloc_percpu(sizeof(type), \ 163 + __alignof__(type)) 128 164 129 165 #endif /* __LINUX_PERCPU_H */
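On !SMP kernels the reworked __alloc_percpu() shown above is just kzalloc() plus a warning when the caller asks for more alignment than kmalloc can promise, and alloc_percpu(type) now passes __alignof__(type) through. A userspace approximation of that UP fallback (the toy_* names are invented; calloc stands in for kzalloc):

```c
#include <assert.h>
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

#define TOY_CACHE_BYTES 64	/* stands in for SMP_CACHE_BYTES */

/*
 * Userspace analogue of the new !SMP __alloc_percpu(): a zeroed heap
 * allocation, warning when the requested alignment exceeds what a plain
 * kmalloc-style allocator guarantees (the WARN_ON_ONCE in the real code).
 */
static void *toy_alloc_percpu(size_t size, size_t align)
{
	if (align > TOY_CACHE_BYTES)
		fprintf(stderr, "warning: alignment %zu unsupported on UP\n",
			align);
	return calloc(1, size);		/* kzalloc(size, GFP_KERNEL) analogue */
}

/* alloc_percpu(type) expands to __alloc_percpu(sizeof, __alignof__) */
#define toy_alloc_percpu_t(type) \
	((type *)toy_alloc_percpu(sizeof(type), __alignof__(type)))
```

The returned object is zero-filled, matching the kzalloc semantics callers of alloc_percpu() rely on.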
+6
include/linux/rcuclassic.h
··· 181 181 #define rcu_enter_nohz() do { } while (0) 182 182 #define rcu_exit_nohz() do { } while (0) 183 183 184 + /* A context switch is a grace period for rcuclassic. */ 185 + static inline int rcu_blocking_is_gp(void) 186 + { 187 + return num_online_cpus() == 1; 188 + } 189 + 184 190 #endif /* __LINUX_RCUCLASSIC_H */
+4
include/linux/rcupdate.h
··· 52 52 void (*func)(struct rcu_head *head); 53 53 }; 54 54 55 + /* Internal to kernel, but needed by rcupreempt.h. */ 56 + extern int rcu_scheduler_active; 57 + 55 58 #if defined(CONFIG_CLASSIC_RCU) 56 59 #include <linux/rcuclassic.h> 57 60 #elif defined(CONFIG_TREE_RCU) ··· 268 265 269 266 /* Internal to kernel */ 270 267 extern void rcu_init(void); 268 + extern void rcu_scheduler_starting(void); 271 269 extern int rcu_needs_cpu(int cpu); 272 270 273 271 #endif /* __LINUX_RCUPDATE_H */
+15
include/linux/rcupreempt.h
··· 142 142 #define rcu_exit_nohz() do { } while (0) 143 143 #endif /* CONFIG_NO_HZ */ 144 144 145 + /* 146 + * A context switch is a grace period for rcupreempt synchronize_rcu() 147 + * only during early boot, before the scheduler has been initialized. 148 + * So, how the heck do we get a context switch? Well, if the caller 149 + * invokes synchronize_rcu(), they are willing to accept a context 150 + * switch, so we simply pretend that one happened. 151 + * 152 + * After boot, there might be a blocked or preempted task in an RCU 153 + * read-side critical section, so we cannot then take the fastpath. 154 + */ 155 + static inline int rcu_blocking_is_gp(void) 156 + { 157 + return num_online_cpus() == 1 && !rcu_scheduler_active; 158 + } 159 + 145 160 #endif /* __LINUX_RCUPREEMPT_H */
+6
include/linux/rcutree.h
··· 326 326 } 327 327 #endif /* CONFIG_NO_HZ */ 328 328 329 + /* A context switch is a grace period for rcutree. */ 330 + static inline int rcu_blocking_is_gp(void) 331 + { 332 + return num_online_cpus() == 1; 333 + } 334 + 329 335 #endif /* __LINUX_RCUTREE_H */
+4
include/linux/sched.h
··· 2303 2303 extern int sched_group_set_rt_period(struct task_group *tg, 2304 2304 long rt_period_us); 2305 2305 extern long sched_group_rt_period(struct task_group *tg); 2306 + extern int sched_rt_can_attach(struct task_group *tg, struct task_struct *tsk); 2306 2307 #endif 2307 2308 #endif 2309 + 2310 + extern int task_can_switch_user(struct user_struct *up, 2311 + struct task_struct *tsk); 2308 2312 2309 2313 #ifdef CONFIG_TASK_XACCT 2310 2314 static inline void add_rchar(struct task_struct *tsk, ssize_t amt)
+1 -1
include/linux/serio.h
··· 212 212 #define SERIO_FUJITSU 0x35 213 213 #define SERIO_ZHENHUA 0x36 214 214 #define SERIO_INEXIO 0x37 215 - #define SERIO_TOUCHIT213 0x37 215 + #define SERIO_TOUCHIT213 0x38 216 216 #define SERIO_W8001 0x39 217 217 218 218 #endif
+4
include/linux/vmalloc.h
··· 95 95 96 96 extern int map_vm_area(struct vm_struct *area, pgprot_t prot, 97 97 struct page ***pages); 98 + extern int map_kernel_range_noflush(unsigned long start, unsigned long size, 99 + pgprot_t prot, struct page **pages); 100 + extern void unmap_kernel_range_noflush(unsigned long addr, unsigned long size); 98 101 extern void unmap_kernel_range(unsigned long addr, unsigned long size); 99 102 100 103 /* Allocate/destroy a 'vmalloc' VM area. */ ··· 113 110 */ 114 111 extern rwlock_t vmlist_lock; 115 112 extern struct vm_struct *vmlist; 113 + extern __init void vm_area_register_early(struct vm_struct *vm, size_t align); 116 114 117 115 #endif /* _LINUX_VMALLOC_H */
+17 -10
include/net/net_namespace.h
··· 109 109 #ifdef CONFIG_NET_NS 110 110 extern void __put_net(struct net *net); 111 111 112 - static inline int net_alive(struct net *net) 113 - { 114 - return net && atomic_read(&net->count); 115 - } 116 - 117 112 static inline struct net *get_net(struct net *net) 118 113 { 119 114 atomic_inc(&net->count); ··· 139 144 return net1 == net2; 140 145 } 141 146 #else 142 - 143 - static inline int net_alive(struct net *net) 144 - { 145 - return 1; 146 - } 147 147 148 148 static inline struct net *get_net(struct net *net) 149 149 { ··· 224 234 void (*exit)(struct net *net); 225 235 }; 226 236 237 + /* 238 + * Use these carefully. If you implement a network device and it 239 + * needs per network namespace operations use device pernet operations, 240 + * otherwise use pernet subsys operations. 241 + * 242 + * This is critically important. Most of the network code cleanup 243 + * runs with the assumption that dev_remove_pack has been called so no 244 + * new packets will arrive during and after the cleanup functions have 245 + * been called. dev_remove_pack is not per namespace so instead the 246 + * guarantee of no more packets arriving in a network namespace is 247 + * provided by ensuring that all network devices and all sockets have 248 + * left the network namespace before the cleanup methods are called. 249 + * 250 + * For the longest time the ipv4 icmp code was registered as a pernet 251 + * device which caused kernel oops, and panics during network 252 + * namespace cleanup. So please don't get this wrong. 253 + */ 227 254 extern int register_pernet_subsys(struct pernet_operations *); 228 255 extern void unregister_pernet_subsys(struct pernet_operations *); 229 256 extern int register_pernet_gen_subsys(int *id, struct pernet_operations *);
+15 -15
init/Kconfig
··· 735 735 config SYSCTL 736 736 bool 737 737 738 + config ANON_INODES 739 + bool 740 + 738 741 menuconfig EMBEDDED 739 742 bool "Configure standard kernel features (for small systems)" 740 743 help ··· 843 840 This option allows to disable the internal PC-Speaker 844 841 support, saving some memory. 845 842 846 - config COMPAT_BRK 847 - bool "Disable heap randomization" 848 - default y 849 - help 850 - Randomizing heap placement makes heap exploits harder, but it 851 - also breaks ancient binaries (including anything libc5 based). 852 - This option changes the bootup default to heap randomization 853 - disabled, and can be overriden runtime by setting 854 - /proc/sys/kernel/randomize_va_space to 2. 855 - 856 - On non-ancient distros (post-2000 ones) N is usually a safe choice. 857 - 858 843 config BASE_FULL 859 844 default y 860 845 bool "Enable full-sized data structures for core" if EMBEDDED ··· 859 868 Disabling this option will cause the kernel to be built without 860 869 support for "fast userspace mutexes". The resulting kernel may not 861 870 run glibc-based applications correctly. 862 - 863 - config ANON_INODES 864 - bool 865 871 866 872 config EPOLL 867 873 bool "Enable eventpoll support" if EMBEDDED ··· 944 956 result in significant savings in code size. This also disables 945 957 SLUB sysfs support. /sys/slab will not exist and there will be 946 958 no support for cache validation etc. 959 + 960 + config COMPAT_BRK 961 + bool "Disable heap randomization" 962 + default y 963 + help 964 + Randomizing heap placement makes heap exploits harder, but it 965 + also breaks ancient binaries (including anything libc5 based). 966 + This option changes the bootup default to heap randomization 967 + disabled, and can be overriden runtime by setting 968 + /proc/sys/kernel/randomize_va_space to 2. 969 + 970 + On non-ancient distros (post-2000 ones) N is usually a safe choice. 947 971 948 972 choice 949 973 prompt "Choose SLAB allocator"
+2 -1
init/main.c
··· 98 98 extern void tc_init(void); 99 99 #endif 100 100 101 - enum system_states system_state; 101 + enum system_states system_state __read_mostly; 102 102 EXPORT_SYMBOL(system_state); 103 103 104 104 /* ··· 464 464 * at least once to get things moving: 465 465 */ 466 466 init_idle_bootup_task(current); 467 + rcu_scheduler_starting(); 467 468 preempt_enable_no_resched(); 468 469 schedule(); 469 470 preempt_disable();
+5 -6
kernel/fork.c
··· 1184 1184 #endif 1185 1185 clear_all_latency_tracing(p); 1186 1186 1187 - /* Our parent execution domain becomes current domain 1188 - These must match for thread signalling to apply */ 1189 - p->parent_exec_id = p->self_exec_id; 1190 - 1191 1187 /* ok, now we should be set up.. */ 1192 1188 p->exit_signal = (clone_flags & CLONE_THREAD) ? -1 : (clone_flags & CSIGNAL); 1193 1189 p->pdeath_signal = 0; ··· 1221 1225 set_task_cpu(p, smp_processor_id()); 1222 1226 1223 1227 /* CLONE_PARENT re-uses the old parent */ 1224 - if (clone_flags & (CLONE_PARENT|CLONE_THREAD)) 1228 + if (clone_flags & (CLONE_PARENT|CLONE_THREAD)) { 1225 1229 p->real_parent = current->real_parent; 1226 - else 1230 + p->parent_exec_id = current->parent_exec_id; 1231 + } else { 1227 1232 p->real_parent = current; 1233 + p->parent_exec_id = current->self_exec_id; 1234 + } 1228 1235 1229 1236 spin_lock(&current->sighand->siglock); 1230 1237
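The fork.c change sets parent_exec_id to match whichever task actually becomes the child's real_parent, instead of unconditionally copying the child's own self_exec_id earlier in copy_process(). The corrected branch, modeled with invented toy_ structures (illustration only, not kernel code):

```c
#include <assert.h>

/* Minimal stand-ins for the task_struct fields involved. */
struct toy_task {
	struct toy_task *real_parent;
	unsigned int self_exec_id;
	unsigned int parent_exec_id;
};

/*
 * Sketch of the fixed copy_process() logic: with CLONE_PARENT or
 * CLONE_THREAD the child shares the caller's parent, so it inherits the
 * caller's parent_exec_id; otherwise the caller becomes the parent, so the
 * child records the caller's self_exec_id.
 */
static void toy_set_parent(struct toy_task *p, struct toy_task *cur,
			   int clone_parent_or_thread)
{
	if (clone_parent_or_thread) {
		p->real_parent = cur->real_parent;
		p->parent_exec_id = cur->parent_exec_id;
	} else {
		p->real_parent = cur;
		p->parent_exec_id = cur->self_exec_id;
	}
}
```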
+49 -15
kernel/module.c
··· 51 51 #include <linux/tracepoint.h> 52 52 #include <linux/ftrace.h> 53 53 #include <linux/async.h> 54 + #include <linux/percpu.h> 54 55 55 56 #if 0 56 57 #define DEBUGP printk ··· 367 366 } 368 367 369 368 #ifdef CONFIG_SMP 369 + 370 + #ifdef CONFIG_HAVE_DYNAMIC_PER_CPU_AREA 371 + 372 + static void *percpu_modalloc(unsigned long size, unsigned long align, 373 + const char *name) 374 + { 375 + void *ptr; 376 + 377 + if (align > PAGE_SIZE) { 378 + printk(KERN_WARNING "%s: per-cpu alignment %li > %li\n", 379 + name, align, PAGE_SIZE); 380 + align = PAGE_SIZE; 381 + } 382 + 383 + ptr = __alloc_reserved_percpu(size, align); 384 + if (!ptr) 385 + printk(KERN_WARNING 386 + "Could not allocate %lu bytes percpu data\n", size); 387 + return ptr; 388 + } 389 + 390 + static void percpu_modfree(void *freeme) 391 + { 392 + free_percpu(freeme); 393 + } 394 + 395 + #else /* ... !CONFIG_HAVE_DYNAMIC_PER_CPU_AREA */ 396 + 370 397 /* Number of blocks used and allocated. */ 371 398 static unsigned int pcpu_num_used, pcpu_num_allocated; 372 399 /* Size of each block. -ve means used. */ ··· 509 480 } 510 481 } 511 482 512 - static unsigned int find_pcpusec(Elf_Ehdr *hdr, 513 - Elf_Shdr *sechdrs, 514 - const char *secstrings) 515 - { 516 - return find_sec(hdr, sechdrs, secstrings, ".data.percpu"); 517 - } 518 - 519 - static void percpu_modcopy(void *pcpudest, const void *from, unsigned long size) 520 - { 521 - int cpu; 522 - 523 - for_each_possible_cpu(cpu) 524 - memcpy(pcpudest + per_cpu_offset(cpu), from, size); 525 - } 526 - 527 483 static int percpu_modinit(void) 528 484 { 529 485 pcpu_num_used = 2; ··· 527 513 return 0; 528 514 } 529 515 __initcall(percpu_modinit); 516 + 517 + #endif /* CONFIG_HAVE_DYNAMIC_PER_CPU_AREA */ 518 + 519 + static unsigned int find_pcpusec(Elf_Ehdr *hdr, 520 + Elf_Shdr *sechdrs, 521 + const char *secstrings) 522 + { 523 + return find_sec(hdr, sechdrs, secstrings, ".data.percpu"); 524 + } 525 + 526 + static void percpu_modcopy(void *pcpudest, const void *from, unsigned long size) 527 + { 528 + int cpu; 529 + 530 + for_each_possible_cpu(cpu) 531 + memcpy(pcpudest + per_cpu_offset(cpu), from, size); 532 + } 533 + 530 534 #else /* ... !CONFIG_SMP */ 535 + 531 536 static inline void *percpu_modalloc(unsigned long size, unsigned long align, 532 537 const char *name) 533 538 { ··· 568 535 /* pcpusec should be 0, and size of that section should be 0. */ 569 536 BUG_ON(size != 0); 570 537 } 538 + 571 539 #endif /* CONFIG_SMP */ 572 540 573 541 #define MODINFO_ATTR(field) \
+2 -2
kernel/rcuclassic.c
··· 679 679 void rcu_check_callbacks(int cpu, int user) 680 680 { 681 681 if (user || 682 - (idle_cpu(cpu) && !in_softirq() && 683 - hardirq_count() <= (1 << HARDIRQ_SHIFT))) { 682 + (idle_cpu(cpu) && rcu_scheduler_active && 683 + !in_softirq() && hardirq_count() <= (1 << HARDIRQ_SHIFT))) { 684 684 685 685 /* 686 686 * Get here if this CPU took its interrupt from user
+12
kernel/rcupdate.c
··· 44 44 #include <linux/cpu.h> 45 45 #include <linux/mutex.h> 46 46 #include <linux/module.h> 47 + #include <linux/kernel_stat.h> 47 48 48 49 enum rcu_barrier { 49 50 RCU_BARRIER_STD, ··· 56 55 static atomic_t rcu_barrier_cpu_count; 57 56 static DEFINE_MUTEX(rcu_barrier_mutex); 58 57 static struct completion rcu_barrier_completion; 58 + int rcu_scheduler_active __read_mostly; 59 59 60 60 /* 61 61 * Awaken the corresponding synchronize_rcu() instance now that a ··· 82 80 void synchronize_rcu(void) 83 81 { 84 82 struct rcu_synchronize rcu; 83 + 84 + if (rcu_blocking_is_gp()) 85 + return; 86 + 85 87 init_completion(&rcu.completion); 86 88 /* Will wake me after RCU finished. */ 87 89 call_rcu(&rcu.head, wakeme_after_rcu); ··· 181 175 __rcu_init(); 182 176 } 183 177 178 + void rcu_scheduler_starting(void) 179 + { 180 + WARN_ON(num_online_cpus() != 1); 181 + WARN_ON(nr_context_switches() > 0); 182 + rcu_scheduler_active = 1; 183 + }
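With the rcupdate.c change above, synchronize_rcu() returns immediately when rcu_blocking_is_gp() holds: with a single online CPU (and, for the preempt flavour, before the scheduler starts) there can be no concurrent readers to wait for. A sketch of that fastpath shape (toy_* names invented; not kernel code):

```c
#include <assert.h>
#include <stdbool.h>

static int toy_online_cpus = 1;	/* stand-in for num_online_cpus() */

/*
 * Classic/tree flavour of the new helper: one online CPU means every
 * context switch is a grace period.  (The preempt flavour additionally
 * requires !rcu_scheduler_active, i.e. early boot.)
 */
static bool toy_blocking_is_gp(void)
{
	return toy_online_cpus == 1;
}

/* Sketch of the new synchronize_rcu() fastpath added in kernel/rcupdate.c. */
static int toy_synchronize_rcu(void)
{
	if (toy_blocking_is_gp())
		return 0;	/* fastpath: no pre-existing readers */
	return 1;		/* slow path: queue wakeme_after_rcu and wait */
}
```

The fastpath is what lets synchronize_rcu() be called safely during early boot, before the grace-period machinery is running.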
+3
kernel/rcupreempt.c
··· 1181 1181 { 1182 1182 struct rcu_synchronize rcu; 1183 1183 1184 + if (num_online_cpus() == 1) 1185 + return; /* blocking is gp if only one CPU! */ 1186 + 1184 1187 init_completion(&rcu.completion); 1185 1188 /* Will wake me after RCU finished. */ 1186 1189 call_rcu_sched(&rcu.head, wakeme_after_rcu);
+2 -2
kernel/rcutree.c
··· 948 948 void rcu_check_callbacks(int cpu, int user) 949 949 { 950 950 if (user || 951 - (idle_cpu(cpu) && !in_softirq() && 952 - hardirq_count() <= (1 << HARDIRQ_SHIFT))) { 951 + (idle_cpu(cpu) && rcu_scheduler_active && 952 + !in_softirq() && hardirq_count() <= (1 << HARDIRQ_SHIFT))) { 953 953 954 954 /* 955 955 * Get here if this CPU took its interrupt from user
+15 -6
kernel/sched.c
··· 223 223 { 224 224 ktime_t now; 225 225 226 - if (rt_bandwidth_enabled() && rt_b->rt_runtime == RUNTIME_INF) 226 + if (!rt_bandwidth_enabled() || rt_b->rt_runtime == RUNTIME_INF) 227 227 return; 228 228 229 229 if (hrtimer_active(&rt_b->rt_period_timer)) ··· 9219 9219 9220 9220 return ret; 9221 9221 } 9222 + 9223 + int sched_rt_can_attach(struct task_group *tg, struct task_struct *tsk) 9224 + { 9225 + /* Don't accept realtime tasks when there is no way for them to run */ 9226 + if (rt_task(tsk) && tg->rt_bandwidth.rt_runtime == 0) 9227 + return 0; 9228 + 9229 + return 1; 9230 + } 9231 + 9222 9232 #else /* !CONFIG_RT_GROUP_SCHED */ 9223 9233 static int sched_rt_global_constraints(void) 9224 9234 { ··· 9322 9312 struct task_struct *tsk) 9323 9313 { 9324 9314 #ifdef CONFIG_RT_GROUP_SCHED 9325 - /* Don't accept realtime tasks when there is no way for them to run */ 9326 - if (rt_task(tsk) && cgroup_tg(cgrp)->rt_bandwidth.rt_runtime == 0) 9315 + if (!sched_rt_can_attach(cgroup_tg(cgrp), tsk)) 9327 9316 return -EINVAL; 9328 9317 #else 9329 9318 /* We don't support RT-tasks being in separate groups */ ··· 9485 9476 9486 9477 static u64 cpuacct_cpuusage_read(struct cpuacct *ca, int cpu) 9487 9478 { 9488 - u64 *cpuusage = percpu_ptr(ca->cpuusage, cpu); 9479 + u64 *cpuusage = per_cpu_ptr(ca->cpuusage, cpu); 9489 9480 u64 data; 9490 9481 9491 9482 #ifndef CONFIG_64BIT ··· 9504 9495 9505 9496 static void cpuacct_cpuusage_write(struct cpuacct *ca, int cpu, u64 val) 9506 9497 { 9507 - u64 *cpuusage = percpu_ptr(ca->cpuusage, cpu); 9498 + u64 *cpuusage = per_cpu_ptr(ca->cpuusage, cpu); 9508 9499 9509 9500 #ifndef CONFIG_64BIT 9510 9501 /* ··· 9600 9591 ca = task_ca(tsk); 9601 9592 9602 9593 for (; ca; ca = ca->parent) { 9603 - u64 *cpuusage = percpu_ptr(ca->cpuusage, cpu); 9594 + u64 *cpuusage = per_cpu_ptr(ca->cpuusage, cpu); 9604 9595 *cpuusage += cputime; 9605 9596 } 9606 9597 }
+1
kernel/softirq.c
··· 626 626 preempt_enable_no_resched(); 627 627 cond_resched(); 628 628 preempt_disable(); 629 + rcu_qsctr_inc((long)__bind_cpu); 629 630 } 630 631 preempt_enable(); 631 632 set_current_state(TASK_INTERRUPTIBLE);
+1 -1
kernel/stop_machine.c
··· 170 170 * doesn't hit this CPU until we're ready. */ 171 171 get_cpu(); 172 172 for_each_online_cpu(i) { 173 - sm_work = percpu_ptr(stop_machine_work, i); 173 + sm_work = per_cpu_ptr(stop_machine_work, i); 174 174 INIT_WORK(sm_work, stop_cpu); 175 175 queue_work_on(i, stop_machine_wq, sm_work); 176 176 }
+20 -11
kernel/sys.c
··· 559 559 abort_creds(new); 560 560 return retval; 561 561 } 562 - 562 + 563 563 /* 564 564 * change the user struct in a credentials set to match the new UID 565 565 */ ··· 570 570 new_user = alloc_uid(current_user_ns(), new->uid); 571 571 if (!new_user) 572 572 return -EAGAIN; 573 + 574 + if (!task_can_switch_user(new_user, current)) { 575 + free_uid(new_user); 576 + return -EINVAL; 577 + } 573 578 574 579 if (atomic_read(&new_user->processes) >= 575 580 current->signal->rlim[RLIMIT_NPROC].rlim_cur && ··· 636 631 goto error; 637 632 } 638 633 639 - retval = -EAGAIN; 640 - if (new->uid != old->uid && set_user(new) < 0) 641 - goto error; 642 - 634 + if (new->uid != old->uid) { 635 + retval = set_user(new); 636 + if (retval < 0) 637 + goto error; 638 + } 643 639 if (ruid != (uid_t) -1 || 644 640 (euid != (uid_t) -1 && euid != old->uid)) 645 641 new->suid = new->euid; ··· 686 680 retval = -EPERM; 687 681 if (capable(CAP_SETUID)) { 688 682 new->suid = new->uid = uid; 689 - if (uid != old->uid && set_user(new) < 0) { 690 - retval = -EAGAIN; 691 - goto error; 683 + if (uid != old->uid) { 684 + retval = set_user(new); 685 + if (retval < 0) 686 + goto error; 692 687 } 693 688 } else if (uid != old->uid && uid != new->suid) { 694 689 goto error; ··· 741 734 goto error; 742 735 } 743 736 744 - retval = -EAGAIN; 745 737 if (ruid != (uid_t) -1) { 746 738 new->uid = ruid; 747 - if (ruid != old->uid && set_user(new) < 0) 748 - goto error; 739 + if (ruid != old->uid) { 740 + retval = set_user(new); 741 + if (retval < 0) 742 + goto error; 743 + } 749 744 } 750 745 if (euid != (uid_t) -1) 751 746 new->euid = euid;
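The kernel/sys.c hunks all apply one pattern: set_user() can now fail for more than one reason (-EAGAIN over RLIMIT_NPROC, -EINVAL from the new task_can_switch_user() check), so callers propagate its return value instead of clobbering it with a hard-coded -EAGAIN. A hypothetical model of the before/after flow:

```c
#include <assert.h>
#include <errno.h>

/* Stand-in for set_user(): failure_mode selects which of its two
 * possible failures to simulate (purely illustrative). */
static int set_user_stub(int failure_mode)
{
	if (failure_mode == 1)
		return -EAGAIN;	/* over RLIMIT_NPROC */
	if (failure_mode == 2)
		return -EINVAL;	/* task_can_switch_user() refused */
	return 0;
}

/* The reworked caller: propagate the callee's error code rather than
 * overwriting it, as the old "retval = -EAGAIN; if (set_user() < 0)
 * goto error;" shape did. */
static int setuid_like(int uid_changed, int failure_mode)
{
	if (uid_changed) {
		int retval = set_user_stub(failure_mode);
		if (retval < 0)
			return retval;
	}
	return 0;
}
```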
+5 -1
kernel/tsacct.c
··· 122 122 if (likely(tsk->mm)) { 123 123 cputime_t time, dtime; 124 124 struct timeval value; 125 + unsigned long flags; 125 126 u64 delta; 126 127 128 + local_irq_save(flags); 127 129 time = tsk->stime + tsk->utime; 128 130 dtime = cputime_sub(time, tsk->acct_timexpd); 129 131 jiffies_to_timeval(cputime_to_jiffies(dtime), &value); ··· 133 131 delta = delta * USEC_PER_SEC + value.tv_usec; 134 132 135 133 if (delta == 0) 136 - return; 134 + goto out; 137 135 tsk->acct_timexpd = time; 138 136 tsk->acct_rss_mem1 += delta * get_mm_rss(tsk->mm); 139 137 tsk->acct_vm_mem1 += delta * tsk->mm->total_vm; 138 + out: 139 + local_irq_restore(flags); 140 140 } 141 141 } 142 142
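The tsacct.c hunk wraps the accounting in local_irq_save()/local_irq_restore() and, crucially, converts the early `return` into `goto out` so the restore can never be skipped. The single-exit cleanup pattern, sketched with a depth counter standing in for the irq state:

```c
#include <assert.h>

static int irq_depth;	/* stand-in for saved interrupt state */

static void local_irq_save_stub(void)    { irq_depth++; }
static void local_irq_restore_stub(void) { irq_depth--; }

/* Illustrative only: mirrors the tsacct.c change.  With a plain
 * "return" on the delta == 0 path, the restore would be skipped and
 * irq_depth would leak; "goto out" funnels every path through it. */
static void account_stub(unsigned long delta)
{
	local_irq_save_stub();
	if (delta == 0)
		goto out;
	/* ... charge delta to the task's accounting fields ... */
out:
	local_irq_restore_stub();
}
```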
+25 -7
kernel/user.c
··· 286 286 /* work function to remove sysfs directory for a user and free up 287 287 * corresponding structures. 288 288 */ 289 - static void remove_user_sysfs_dir(struct work_struct *w) 289 + static void cleanup_user_struct(struct work_struct *w) 290 290 { 291 291 struct user_struct *up = container_of(w, struct user_struct, work); 292 292 unsigned long flags; 293 293 int remove_user = 0; 294 294 295 - if (up->user_ns != &init_user_ns) 296 - return; 297 295 /* Make uid_hash_remove() + sysfs_remove_file() + kobject_del() 298 296 * atomic. 299 297 */ ··· 310 312 if (!remove_user) 311 313 goto done; 312 314 313 - kobject_uevent(&up->kobj, KOBJ_REMOVE); 314 - kobject_del(&up->kobj); 315 - kobject_put(&up->kobj); 315 + if (up->user_ns == &init_user_ns) { 316 + kobject_uevent(&up->kobj, KOBJ_REMOVE); 317 + kobject_del(&up->kobj); 318 + kobject_put(&up->kobj); 319 + } 316 320 317 321 sched_destroy_user(up); 318 322 key_put(up->uid_keyring); ··· 335 335 atomic_inc(&up->__count); 336 336 spin_unlock_irqrestore(&uidhash_lock, flags); 337 337 338 - INIT_WORK(&up->work, remove_user_sysfs_dir); 338 + INIT_WORK(&up->work, cleanup_user_struct); 339 339 schedule_work(&up->work); 340 340 } 341 341 ··· 360 360 kmem_cache_free(uid_cachep, up); 361 361 } 362 362 363 + #endif 364 + 365 + #if defined(CONFIG_RT_GROUP_SCHED) && defined(CONFIG_USER_SCHED) 366 + /* 367 + * We need to check if a setuid can take place. This function should be called 368 + * before successfully completing the setuid. 369 + */ 370 + int task_can_switch_user(struct user_struct *up, struct task_struct *tsk) 371 + { 372 + 373 + return sched_rt_can_attach(up->tg, tsk); 374 + 375 + } 376 + #else 377 + int task_can_switch_user(struct user_struct *up, struct task_struct *tsk) 378 + { 379 + return 1; 380 + } 363 381 #endif 364 382 365 383 /*
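The new task_can_switch_user() delegates to sched_rt_can_attach(), which refuses to move a realtime task into a group whose rt_runtime is zero, since the task could never run there. The gate in miniature (struct and field names are simplified stand-ins):

```c
#include <assert.h>

struct group_stub {
	long rt_runtime;	/* 0 means no RT bandwidth at all */
};

/* Sketch of sched_rt_can_attach(): a realtime task may only join a
 * group that has some runtime budget; everything else is allowed. */
static int can_attach_stub(const struct group_stub *tg, int task_is_rt)
{
	if (task_is_rt && tg->rt_runtime == 0)
		return 0;
	return 1;
}
```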
+1 -1
lib/idr.c
··· 449 449 450 450 n = idp->layers * IDR_BITS; 451 451 p = idp->top; 452 + rcu_assign_pointer(idp->top, NULL); 452 453 max = 1 << n; 453 454 454 455 id = 0; ··· 468 467 p = *--paa; 469 468 } 470 469 } 471 - rcu_assign_pointer(idp->top, NULL); 472 470 idp->layers = 0; 473 471 } 474 472 EXPORT_SYMBOL(idr_remove_all);
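The idr.c hunk moves the NULL publication of idp->top ahead of the teardown walk: the root must be detached *before* the layers are freed, so a lookup that starts after the store can never walk into memory being torn down. A single-threaded sketch of the ordering, with plain assignment standing in for rcu_assign_pointer() and immediate free() for the RCU-deferred free:

```c
#include <assert.h>
#include <stdlib.h>

struct node_stub { struct node_stub *next; };
static struct node_stub *top;	/* stand-in for idp->top */

/* Sketch of the fixed idr_remove_all() ordering: snapshot the tree,
 * publish NULL as the root first, then tear down the snapshot.  The
 * old code published NULL only after freeing, leaving a window where
 * readers could reach freed layers. */
static void remove_all_stub(void)
{
	struct node_stub *p = top;

	top = NULL;			/* detach first... */
	while (p) {			/* ...then free the old snapshot */
		struct node_stub *n = p->next;
		free(p);
		p = n;
	}
}
```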
+4
mm/Makefile
··· 30 30 obj-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o 31 31 obj-$(CONFIG_FS_XIP) += filemap_xip.o 32 32 obj-$(CONFIG_MIGRATION) += migrate.o 33 + ifdef CONFIG_HAVE_DYNAMIC_PER_CPU_AREA 34 + obj-$(CONFIG_SMP) += percpu.o 35 + else 33 36 obj-$(CONFIG_SMP) += allocpercpu.o 37 + endif 34 38 obj-$(CONFIG_QUICKLIST) += quicklist.o 35 39 obj-$(CONFIG_CGROUP_MEM_RES_CTLR) += memcontrol.o page_cgroup.o
+19 -13
mm/allocpercpu.c
··· 99 99 __percpu_populate_mask((__pdata), (size), (gfp), &(mask)) 100 100 101 101 /** 102 - * percpu_alloc_mask - initial setup of per-cpu data 102 + * alloc_percpu - initial setup of per-cpu data 103 103 * @size: size of per-cpu object 104 - * @gfp: may sleep or not etc. 105 - * @mask: populate per-data for cpu's selected through mask bits 104 + * @align: alignment 106 105 * 107 - * Populating per-cpu data for all online cpu's would be a typical use case, 108 - * which is simplified by the percpu_alloc() wrapper. 109 - * Per-cpu objects are populated with zeroed buffers. 106 + * Allocate dynamic percpu area. Percpu objects are populated with 107 + * zeroed buffers. 110 108 */ 111 - void *__percpu_alloc_mask(size_t size, gfp_t gfp, cpumask_t *mask) 109 + void *__alloc_percpu(size_t size, size_t align) 112 110 { 113 111 /* 114 112 * We allocate whole cache lines to avoid false sharing 115 113 */ 116 114 size_t sz = roundup(nr_cpu_ids * sizeof(void *), cache_line_size()); 117 - void *pdata = kzalloc(sz, gfp); 115 + void *pdata = kzalloc(sz, GFP_KERNEL); 118 116 void *__pdata = __percpu_disguise(pdata); 117 + 118 + /* 119 + * Can't easily make larger alignment work with kmalloc. WARN 120 + * on it. Larger alignment should only be used for module 121 + * percpu sections on SMP for which this path isn't used. 
122 + */ 123 + WARN_ON_ONCE(align > __alignof__(unsigned long long)); 119 124 120 125 if (unlikely(!pdata)) 121 126 return NULL; 122 - if (likely(!__percpu_populate_mask(__pdata, size, gfp, mask))) 127 + if (likely(!__percpu_populate_mask(__pdata, size, GFP_KERNEL, 128 + &cpu_possible_map))) 123 129 return __pdata; 124 130 kfree(pdata); 125 131 return NULL; 126 132 } 127 - EXPORT_SYMBOL_GPL(__percpu_alloc_mask); 133 + EXPORT_SYMBOL_GPL(__alloc_percpu); 128 134 129 135 /** 130 - * percpu_free - final cleanup of per-cpu data 136 + * free_percpu - final cleanup of per-cpu data 131 137 * @__pdata: object to clean up 132 138 * 133 139 * We simply clean up any per-cpu object left. No need for the client to 134 140 * track and specify through a bis mask which per-cpu objects are to free. 135 141 */ 136 - void percpu_free(void *__pdata) 142 + void free_percpu(void *__pdata) 137 143 { 138 144 if (unlikely(!__pdata)) 139 145 return; 140 146 __percpu_depopulate_mask(__pdata, &cpu_possible_map); 141 147 kfree(__percpu_disguise(__pdata)); 142 148 } 143 - EXPORT_SYMBOL_GPL(percpu_free); 149 + EXPORT_SYMBOL_GPL(free_percpu);
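The reworked __alloc_percpu() keeps the old sizing trick: the per-CPU pointer table is rounded up to whole cache lines to avoid false sharing between CPUs. The arithmetic, sketched with an assumed 64-byte line (the kernel queries cache_line_size() at run time):

```c
#include <assert.h>
#include <stddef.h>

#define CACHE_LINE 64	/* assumption for illustration only */
#define ROUNDUP(x, y) ((((x) + (y) - 1) / (y)) * (y))

/* Size of the disguised pointer table allocated in __alloc_percpu():
 * one void * per possible CPU, padded out to a cache-line multiple
 * so adjacent CPUs' slots never share a line. */
static size_t ptr_table_size(int nr_cpu_ids)
{
	return ROUNDUP(nr_cpu_ids * sizeof(void *), CACHE_LINE);
}
```

The WARN_ON_ONCE added in the hunk reflects the same kmalloc backing: alignments beyond `__alignof__(unsigned long long)` cannot be honored on this path, and only module percpu sections would ever ask for more.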
+29 -6
mm/bootmem.c
··· 382 382 return mark_bootmem_node(pgdat->bdata, start, end, 1, flags); 383 383 } 384 384 385 - #ifndef CONFIG_HAVE_ARCH_BOOTMEM_NODE 386 385 /** 387 386 * reserve_bootmem - mark a page range as usable 388 387 * @addr: starting address of the range ··· 402 403 403 404 return mark_bootmem(start, end, 1, flags); 404 405 } 405 - #endif /* !CONFIG_HAVE_ARCH_BOOTMEM_NODE */ 406 406 407 407 static unsigned long align_idx(struct bootmem_data *bdata, unsigned long idx, 408 408 unsigned long step) ··· 427 429 } 428 430 429 431 static void * __init alloc_bootmem_core(struct bootmem_data *bdata, 430 - unsigned long size, unsigned long align, 431 - unsigned long goal, unsigned long limit) 432 + unsigned long size, unsigned long align, 433 + unsigned long goal, unsigned long limit) 432 434 { 433 435 unsigned long fallback = 0; 434 436 unsigned long min, max, start, sidx, midx, step; ··· 528 530 return NULL; 529 531 } 530 532 533 + static void * __init alloc_arch_preferred_bootmem(bootmem_data_t *bdata, 534 + unsigned long size, unsigned long align, 535 + unsigned long goal, unsigned long limit) 536 + { 537 + #ifdef CONFIG_HAVE_ARCH_BOOTMEM 538 + bootmem_data_t *p_bdata; 539 + 540 + p_bdata = bootmem_arch_preferred_node(bdata, size, align, goal, limit); 541 + if (p_bdata) 542 + return alloc_bootmem_core(p_bdata, size, align, goal, limit); 543 + #endif 544 + return NULL; 545 + } 546 + 531 547 static void * __init ___alloc_bootmem_nopanic(unsigned long size, 532 548 unsigned long align, 533 549 unsigned long goal, 534 550 unsigned long limit) 535 551 { 536 552 bootmem_data_t *bdata; 553 + void *region; 537 554 538 555 restart: 539 - list_for_each_entry(bdata, &bdata_list, list) { 540 - void *region; 556 + region = alloc_arch_preferred_bootmem(NULL, size, align, goal, limit); 557 + if (region) 558 + return region; 541 559 560 + list_for_each_entry(bdata, &bdata_list, list) { 542 561 if (goal && bdata->node_low_pfn <= PFN_DOWN(goal)) 543 562 continue; 544 563 if (limit && 
bdata->node_min_pfn >= PFN_DOWN(limit)) ··· 633 618 { 634 619 void *ptr; 635 620 621 + ptr = alloc_arch_preferred_bootmem(bdata, size, align, goal, limit); 622 + if (ptr) 623 + return ptr; 624 + 636 625 ptr = alloc_bootmem_core(bdata, size, align, goal, limit); 637 626 if (ptr) 638 627 return ptr; ··· 692 673 unsigned long align, unsigned long goal) 693 674 { 694 675 void *ptr; 676 + 677 + ptr = alloc_arch_preferred_bootmem(pgdat->bdata, size, align, goal, 0); 678 + if (ptr) 679 + return ptr; 695 680 696 681 ptr = alloc_bootmem_core(pgdat->bdata, size, align, goal, 0); 697 682 if (ptr)
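The bootmem hunks insert a new first step into every allocation path: alloc_arch_preferred_bootmem() asks the architecture for a preferred node, and the generic node-list scan runs only when that yields nothing. The try-then-fall-back shape, with stubs standing in for the arch hook and the generic allocator:

```c
#include <assert.h>
#include <stddef.h>

static char preferred_buf[16], fallback_buf[16];
static int preferred_ok;	/* toggles whether the arch hook succeeds */

/* Stand-in for alloc_arch_preferred_bootmem(): NULL means "no arch
 * preference / preferred node exhausted". */
static void *arch_preferred_stub(void)
{
	return preferred_ok ? preferred_buf : NULL;
}

static void *generic_stub(void)
{
	return fallback_buf;	/* the bdata_list scan, in miniature */
}

/* Sketch of the new ordering in ___alloc_bootmem_nopanic(). */
static void *alloc_stub(void)
{
	void *region = arch_preferred_stub();

	if (region)
		return region;
	return generic_stub();
}
```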
+1226
mm/percpu.c
··· 1 + /* 2 + * linux/mm/percpu.c - percpu memory allocator 3 + * 4 + * Copyright (C) 2009 SUSE Linux Products GmbH 5 + * Copyright (C) 2009 Tejun Heo <tj@kernel.org> 6 + * 7 + * This file is released under the GPLv2. 8 + * 9 + * This is percpu allocator which can handle both static and dynamic 10 + * areas. Percpu areas are allocated in chunks in vmalloc area. Each 11 + * chunk is consisted of num_possible_cpus() units and the first chunk 12 + * is used for static percpu variables in the kernel image (special 13 + * boot time alloc/init handling necessary as these areas need to be 14 + * brought up before allocation services are running). Unit grows as 15 + * necessary and all units grow or shrink in unison. When a chunk is 16 + * filled up, another chunk is allocated. ie. in vmalloc area 17 + * 18 + * c0 c1 c2 19 + * ------------------- ------------------- ------------ 20 + * | u0 | u1 | u2 | u3 | | u0 | u1 | u2 | u3 | | u0 | u1 | u 21 + * ------------------- ...... ------------------- .... ------------ 22 + * 23 + * Allocation is done in offset-size areas of single unit space. Ie, 24 + * an area of 512 bytes at 6k in c1 occupies 512 bytes at 6k of c1:u0, 25 + * c1:u1, c1:u2 and c1:u3. Percpu access can be done by configuring 26 + * percpu base registers UNIT_SIZE apart. 27 + * 28 + * There are usually many small percpu allocations many of them as 29 + * small as 4 bytes. The allocator organizes chunks into lists 30 + * according to free size and tries to allocate from the fullest one. 31 + * Each chunk keeps the maximum contiguous area size hint which is 32 + * guaranteed to be eqaul to or larger than the maximum contiguous 33 + * area in the chunk. This helps the allocator not to iterate the 34 + * chunk maps unnecessarily. 35 + * 36 + * Allocation state in each chunk is kept using an array of integers 37 + * on chunk->map. A positive value in the map represents a free 38 + * region and negative allocated. 
Allocation inside a chunk is done 39 + * by scanning this map sequentially and serving the first matching 40 + * entry. This is mostly copied from the percpu_modalloc() allocator. 41 + * Chunks are also linked into a rb tree to ease address to chunk 42 + * mapping during free. 43 + * 44 + * To use this allocator, arch code should do the followings. 45 + * 46 + * - define CONFIG_HAVE_DYNAMIC_PER_CPU_AREA 47 + * 48 + * - define __addr_to_pcpu_ptr() and __pcpu_ptr_to_addr() to translate 49 + * regular address to percpu pointer and back 50 + * 51 + * - use pcpu_setup_first_chunk() during percpu area initialization to 52 + * setup the first chunk containing the kernel static percpu area 53 + */ 54 + 55 + #include <linux/bitmap.h> 56 + #include <linux/bootmem.h> 57 + #include <linux/list.h> 58 + #include <linux/mm.h> 59 + #include <linux/module.h> 60 + #include <linux/mutex.h> 61 + #include <linux/percpu.h> 62 + #include <linux/pfn.h> 63 + #include <linux/rbtree.h> 64 + #include <linux/slab.h> 65 + #include <linux/spinlock.h> 66 + #include <linux/vmalloc.h> 67 + #include <linux/workqueue.h> 68 + 69 + #include <asm/cacheflush.h> 70 + #include <asm/tlbflush.h> 71 + 72 + #define PCPU_SLOT_BASE_SHIFT 5 /* 1-31 shares the same slot */ 73 + #define PCPU_DFL_MAP_ALLOC 16 /* start a map with 16 ents */ 74 + 75 + struct pcpu_chunk { 76 + struct list_head list; /* linked to pcpu_slot lists */ 77 + struct rb_node rb_node; /* key is chunk->vm->addr */ 78 + int free_size; /* free bytes in the chunk */ 79 + int contig_hint; /* max contiguous size hint */ 80 + struct vm_struct *vm; /* mapped vmalloc region */ 81 + int map_used; /* # of map entries used */ 82 + int map_alloc; /* # of map entries allocated */ 83 + int *map; /* allocation map */ 84 + bool immutable; /* no [de]population allowed */ 85 + struct page **page; /* points to page array */ 86 + struct page *page_ar[]; /* #cpus * UNIT_PAGES */ 87 + }; 88 + 89 + static int pcpu_unit_pages __read_mostly; 90 + static int 
pcpu_unit_size __read_mostly; 91 + static int pcpu_chunk_size __read_mostly; 92 + static int pcpu_nr_slots __read_mostly; 93 + static size_t pcpu_chunk_struct_size __read_mostly; 94 + 95 + /* the address of the first chunk which starts with the kernel static area */ 96 + void *pcpu_base_addr __read_mostly; 97 + EXPORT_SYMBOL_GPL(pcpu_base_addr); 98 + 99 + /* optional reserved chunk, only accessible for reserved allocations */ 100 + static struct pcpu_chunk *pcpu_reserved_chunk; 101 + /* offset limit of the reserved chunk */ 102 + static int pcpu_reserved_chunk_limit; 103 + 104 + /* 105 + * Synchronization rules. 106 + * 107 + * There are two locks - pcpu_alloc_mutex and pcpu_lock. The former 108 + * protects allocation/reclaim paths, chunks and chunk->page arrays. 109 + * The latter is a spinlock and protects the index data structures - 110 + * chunk slots, rbtree, chunks and area maps in chunks. 111 + * 112 + * During allocation, pcpu_alloc_mutex is kept locked all the time and 113 + * pcpu_lock is grabbed and released as necessary. All actual memory 114 + * allocations are done using GFP_KERNEL with pcpu_lock released. 115 + * 116 + * Free path accesses and alters only the index data structures, so it 117 + * can be safely called from atomic context. When memory needs to be 118 + * returned to the system, free path schedules reclaim_work which 119 + * grabs both pcpu_alloc_mutex and pcpu_lock, unlinks chunks to be 120 + * reclaimed, release both locks and frees the chunks. Note that it's 121 + * necessary to grab both locks to remove a chunk from circulation as 122 + * allocation path might be referencing the chunk with only 123 + * pcpu_alloc_mutex locked. 
124 + */ 125 + static DEFINE_MUTEX(pcpu_alloc_mutex); /* protects whole alloc and reclaim */ 126 + static DEFINE_SPINLOCK(pcpu_lock); /* protects index data structures */ 127 + 128 + static struct list_head *pcpu_slot __read_mostly; /* chunk list slots */ 129 + static struct rb_root pcpu_addr_root = RB_ROOT; /* chunks by address */ 130 + 131 + /* reclaim work to release fully free chunks, scheduled from free path */ 132 + static void pcpu_reclaim(struct work_struct *work); 133 + static DECLARE_WORK(pcpu_reclaim_work, pcpu_reclaim); 134 + 135 + static int __pcpu_size_to_slot(int size) 136 + { 137 + int highbit = fls(size); /* size is in bytes */ 138 + return max(highbit - PCPU_SLOT_BASE_SHIFT + 2, 1); 139 + } 140 + 141 + static int pcpu_size_to_slot(int size) 142 + { 143 + if (size == pcpu_unit_size) 144 + return pcpu_nr_slots - 1; 145 + return __pcpu_size_to_slot(size); 146 + } 147 + 148 + static int pcpu_chunk_slot(const struct pcpu_chunk *chunk) 149 + { 150 + if (chunk->free_size < sizeof(int) || chunk->contig_hint < sizeof(int)) 151 + return 0; 152 + 153 + return pcpu_size_to_slot(chunk->free_size); 154 + } 155 + 156 + static int pcpu_page_idx(unsigned int cpu, int page_idx) 157 + { 158 + return cpu * pcpu_unit_pages + page_idx; 159 + } 160 + 161 + static struct page **pcpu_chunk_pagep(struct pcpu_chunk *chunk, 162 + unsigned int cpu, int page_idx) 163 + { 164 + return &chunk->page[pcpu_page_idx(cpu, page_idx)]; 165 + } 166 + 167 + static unsigned long pcpu_chunk_addr(struct pcpu_chunk *chunk, 168 + unsigned int cpu, int page_idx) 169 + { 170 + return (unsigned long)chunk->vm->addr + 171 + (pcpu_page_idx(cpu, page_idx) << PAGE_SHIFT); 172 + } 173 + 174 + static bool pcpu_chunk_page_occupied(struct pcpu_chunk *chunk, 175 + int page_idx) 176 + { 177 + return *pcpu_chunk_pagep(chunk, 0, page_idx) != NULL; 178 + } 179 + 180 + /** 181 + * pcpu_mem_alloc - allocate memory 182 + * @size: bytes to allocate 183 + * 184 + * Allocate @size bytes. 
If @size is smaller than PAGE_SIZE, 185 + * kzalloc() is used; otherwise, vmalloc() is used. The returned 186 + * memory is always zeroed. 187 + * 188 + * CONTEXT: 189 + * Does GFP_KERNEL allocation. 190 + * 191 + * RETURNS: 192 + * Pointer to the allocated area on success, NULL on failure. 193 + */ 194 + static void *pcpu_mem_alloc(size_t size) 195 + { 196 + if (size <= PAGE_SIZE) 197 + return kzalloc(size, GFP_KERNEL); 198 + else { 199 + void *ptr = vmalloc(size); 200 + if (ptr) 201 + memset(ptr, 0, size); 202 + return ptr; 203 + } 204 + } 205 + 206 + /** 207 + * pcpu_mem_free - free memory 208 + * @ptr: memory to free 209 + * @size: size of the area 210 + * 211 + * Free @ptr. @ptr should have been allocated using pcpu_mem_alloc(). 212 + */ 213 + static void pcpu_mem_free(void *ptr, size_t size) 214 + { 215 + if (size <= PAGE_SIZE) 216 + kfree(ptr); 217 + else 218 + vfree(ptr); 219 + } 220 + 221 + /** 222 + * pcpu_chunk_relocate - put chunk in the appropriate chunk slot 223 + * @chunk: chunk of interest 224 + * @oslot: the previous slot it was on 225 + * 226 + * This function is called after an allocation or free changed @chunk. 227 + * New slot according to the changed state is determined and @chunk is 228 + * moved to the slot. Note that the reserved chunk is never put on 229 + * chunk slots. 230 + * 231 + * CONTEXT: 232 + * pcpu_lock. 
233 + */ 234 + static void pcpu_chunk_relocate(struct pcpu_chunk *chunk, int oslot) 235 + { 236 + int nslot = pcpu_chunk_slot(chunk); 237 + 238 + if (chunk != pcpu_reserved_chunk && oslot != nslot) { 239 + if (oslot < nslot) 240 + list_move(&chunk->list, &pcpu_slot[nslot]); 241 + else 242 + list_move_tail(&chunk->list, &pcpu_slot[nslot]); 243 + } 244 + } 245 + 246 + static struct rb_node **pcpu_chunk_rb_search(void *addr, 247 + struct rb_node **parentp) 248 + { 249 + struct rb_node **p = &pcpu_addr_root.rb_node; 250 + struct rb_node *parent = NULL; 251 + struct pcpu_chunk *chunk; 252 + 253 + while (*p) { 254 + parent = *p; 255 + chunk = rb_entry(parent, struct pcpu_chunk, rb_node); 256 + 257 + if (addr < chunk->vm->addr) 258 + p = &(*p)->rb_left; 259 + else if (addr > chunk->vm->addr) 260 + p = &(*p)->rb_right; 261 + else 262 + break; 263 + } 264 + 265 + if (parentp) 266 + *parentp = parent; 267 + return p; 268 + } 269 + 270 + /** 271 + * pcpu_chunk_addr_search - search for chunk containing specified address 272 + * @addr: address to search for 273 + * 274 + * Look for chunk which might contain @addr. More specifically, it 275 + * searchs for the chunk with the highest start address which isn't 276 + * beyond @addr. 277 + * 278 + * CONTEXT: 279 + * pcpu_lock. 280 + * 281 + * RETURNS: 282 + * The address of the found chunk. 283 + */ 284 + static struct pcpu_chunk *pcpu_chunk_addr_search(void *addr) 285 + { 286 + struct rb_node *n, *parent; 287 + struct pcpu_chunk *chunk; 288 + 289 + /* is it in the reserved chunk? */ 290 + if (pcpu_reserved_chunk) { 291 + void *start = pcpu_reserved_chunk->vm->addr; 292 + 293 + if (addr >= start && addr < start + pcpu_reserved_chunk_limit) 294 + return pcpu_reserved_chunk; 295 + } 296 + 297 + /* nah... 
search the regular ones */ 298 + n = *pcpu_chunk_rb_search(addr, &parent); 299 + if (!n) { 300 + /* no exactly matching chunk, the parent is the closest */ 301 + n = parent; 302 + BUG_ON(!n); 303 + } 304 + chunk = rb_entry(n, struct pcpu_chunk, rb_node); 305 + 306 + if (addr < chunk->vm->addr) { 307 + /* the parent was the next one, look for the previous one */ 308 + n = rb_prev(n); 309 + BUG_ON(!n); 310 + chunk = rb_entry(n, struct pcpu_chunk, rb_node); 311 + } 312 + 313 + return chunk; 314 + } 315 + 316 + /** 317 + * pcpu_chunk_addr_insert - insert chunk into address rb tree 318 + * @new: chunk to insert 319 + * 320 + * Insert @new into address rb tree. 321 + * 322 + * CONTEXT: 323 + * pcpu_lock. 324 + */ 325 + static void pcpu_chunk_addr_insert(struct pcpu_chunk *new) 326 + { 327 + struct rb_node **p, *parent; 328 + 329 + p = pcpu_chunk_rb_search(new->vm->addr, &parent); 330 + BUG_ON(*p); 331 + rb_link_node(&new->rb_node, parent, p); 332 + rb_insert_color(&new->rb_node, &pcpu_addr_root); 333 + } 334 + 335 + /** 336 + * pcpu_extend_area_map - extend area map for allocation 337 + * @chunk: target chunk 338 + * 339 + * Extend area map of @chunk so that it can accomodate an allocation. 340 + * A single allocation can split an area into three areas, so this 341 + * function makes sure that @chunk->map has at least two extra slots. 342 + * 343 + * CONTEXT: 344 + * pcpu_alloc_mutex, pcpu_lock. pcpu_lock is released and reacquired 345 + * if area map is extended. 346 + * 347 + * RETURNS: 348 + * 0 if noop, 1 if successfully extended, -errno on failure. 349 + */ 350 + static int pcpu_extend_area_map(struct pcpu_chunk *chunk) 351 + { 352 + int new_alloc; 353 + int *new; 354 + size_t size; 355 + 356 + /* has enough? 
*/ 357 + if (chunk->map_alloc >= chunk->map_used + 2) 358 + return 0; 359 + 360 + spin_unlock_irq(&pcpu_lock); 361 + 362 + new_alloc = PCPU_DFL_MAP_ALLOC; 363 + while (new_alloc < chunk->map_used + 2) 364 + new_alloc *= 2; 365 + 366 + new = pcpu_mem_alloc(new_alloc * sizeof(new[0])); 367 + if (!new) { 368 + spin_lock_irq(&pcpu_lock); 369 + return -ENOMEM; 370 + } 371 + 372 + /* 373 + * Acquire pcpu_lock and switch to new area map. Only free 374 + * could have happened inbetween, so map_used couldn't have 375 + * grown. 376 + */ 377 + spin_lock_irq(&pcpu_lock); 378 + BUG_ON(new_alloc < chunk->map_used + 2); 379 + 380 + size = chunk->map_alloc * sizeof(chunk->map[0]); 381 + memcpy(new, chunk->map, size); 382 + 383 + /* 384 + * map_alloc < PCPU_DFL_MAP_ALLOC indicates that the chunk is 385 + * one of the first chunks and still using static map. 386 + */ 387 + if (chunk->map_alloc >= PCPU_DFL_MAP_ALLOC) 388 + pcpu_mem_free(chunk->map, size); 389 + 390 + chunk->map_alloc = new_alloc; 391 + chunk->map = new; 392 + return 0; 393 + } 394 + 395 + /** 396 + * pcpu_split_block - split a map block 397 + * @chunk: chunk of interest 398 + * @i: index of map block to split 399 + * @head: head size in bytes (can be 0) 400 + * @tail: tail size in bytes (can be 0) 401 + * 402 + * Split the @i'th map block into two or three blocks. If @head is 403 + * non-zero, @head bytes block is inserted before block @i moving it 404 + * to @i+1 and reducing its size by @head bytes. 405 + * 406 + * If @tail is non-zero, the target block, which can be @i or @i+1 407 + * depending on @head, is reduced by @tail bytes and @tail byte block 408 + * is inserted after the target block. 409 + * 410 + * @chunk->map must have enough free slots to accomodate the split. 411 + * 412 + * CONTEXT: 413 + * pcpu_lock. 
414 + */ 415 + static void pcpu_split_block(struct pcpu_chunk *chunk, int i, 416 + int head, int tail) 417 + { 418 + int nr_extra = !!head + !!tail; 419 + 420 + BUG_ON(chunk->map_alloc < chunk->map_used + nr_extra); 421 + 422 + /* insert new subblocks */ 423 + memmove(&chunk->map[i + nr_extra], &chunk->map[i], 424 + sizeof(chunk->map[0]) * (chunk->map_used - i)); 425 + chunk->map_used += nr_extra; 426 + 427 + if (head) { 428 + chunk->map[i + 1] = chunk->map[i] - head; 429 + chunk->map[i++] = head; 430 + } 431 + if (tail) { 432 + chunk->map[i++] -= tail; 433 + chunk->map[i] = tail; 434 + } 435 + } 436 + 437 + /** 438 + * pcpu_alloc_area - allocate area from a pcpu_chunk 439 + * @chunk: chunk of interest 440 + * @size: wanted size in bytes 441 + * @align: wanted align 442 + * 443 + * Try to allocate @size bytes area aligned at @align from @chunk. 444 + * Note that this function only allocates the offset. It doesn't 445 + * populate or map the area. 446 + * 447 + * @chunk->map must have at least two free slots. 448 + * 449 + * CONTEXT: 450 + * pcpu_lock. 451 + * 452 + * RETURNS: 453 + * Allocated offset in @chunk on success, -1 if no matching area is 454 + * found. 455 + */ 456 + static int pcpu_alloc_area(struct pcpu_chunk *chunk, int size, int align) 457 + { 458 + int oslot = pcpu_chunk_slot(chunk); 459 + int max_contig = 0; 460 + int i, off; 461 + 462 + for (i = 0, off = 0; i < chunk->map_used; off += abs(chunk->map[i++])) { 463 + bool is_last = i + 1 == chunk->map_used; 464 + int head, tail; 465 + 466 + /* extra for alignment requirement */ 467 + head = ALIGN(off, align) - off; 468 + BUG_ON(i == 0 && head != 0); 469 + 470 + if (chunk->map[i] < 0) 471 + continue; 472 + if (chunk->map[i] < head + size) { 473 + max_contig = max(chunk->map[i], max_contig); 474 + continue; 475 + } 476 + 477 + /* 478 + * If head is small or the previous block is free, 479 + * merge'em. 
Note that 'small' is defined as smaller 480 + * than sizeof(int), which is very small but isn't too 481 + * uncommon for percpu allocations. 482 + */ 483 + if (head && (head < sizeof(int) || chunk->map[i - 1] > 0)) { 484 + if (chunk->map[i - 1] > 0) 485 + chunk->map[i - 1] += head; 486 + else { 487 + chunk->map[i - 1] -= head; 488 + chunk->free_size -= head; 489 + } 490 + chunk->map[i] -= head; 491 + off += head; 492 + head = 0; 493 + } 494 + 495 + /* if tail is small, just keep it around */ 496 + tail = chunk->map[i] - head - size; 497 + if (tail < sizeof(int)) 498 + tail = 0; 499 + 500 + /* split if warranted */ 501 + if (head || tail) { 502 + pcpu_split_block(chunk, i, head, tail); 503 + if (head) { 504 + i++; 505 + off += head; 506 + max_contig = max(chunk->map[i - 1], max_contig); 507 + } 508 + if (tail) 509 + max_contig = max(chunk->map[i + 1], max_contig); 510 + } 511 + 512 + /* update hint and mark allocated */ 513 + if (is_last) 514 + chunk->contig_hint = max_contig; /* fully scanned */ 515 + else 516 + chunk->contig_hint = max(chunk->contig_hint, 517 + max_contig); 518 + 519 + chunk->free_size -= chunk->map[i]; 520 + chunk->map[i] = -chunk->map[i]; 521 + 522 + pcpu_chunk_relocate(chunk, oslot); 523 + return off; 524 + } 525 + 526 + chunk->contig_hint = max_contig; /* fully scanned */ 527 + pcpu_chunk_relocate(chunk, oslot); 528 + 529 + /* tell the upper layer that this chunk has no matching area */ 530 + return -1; 531 + } 532 + 533 + /** 534 + * pcpu_free_area - free area to a pcpu_chunk 535 + * @chunk: chunk of interest 536 + * @freeme: offset of area to free 537 + * 538 + * Free area starting from @freeme to @chunk. Note that this function 539 + * only modifies the allocation map. It doesn't depopulate or unmap 540 + * the area. 541 + * 542 + * CONTEXT: 543 + * pcpu_lock. 
544 + */ 545 + static void pcpu_free_area(struct pcpu_chunk *chunk, int freeme) 546 + { 547 + int oslot = pcpu_chunk_slot(chunk); 548 + int i, off; 549 + 550 + for (i = 0, off = 0; i < chunk->map_used; off += abs(chunk->map[i++])) 551 + if (off == freeme) 552 + break; 553 + BUG_ON(off != freeme); 554 + BUG_ON(chunk->map[i] > 0); 555 + 556 + chunk->map[i] = -chunk->map[i]; 557 + chunk->free_size += chunk->map[i]; 558 + 559 + /* merge with previous? */ 560 + if (i > 0 && chunk->map[i - 1] >= 0) { 561 + chunk->map[i - 1] += chunk->map[i]; 562 + chunk->map_used--; 563 + memmove(&chunk->map[i], &chunk->map[i + 1], 564 + (chunk->map_used - i) * sizeof(chunk->map[0])); 565 + i--; 566 + } 567 + /* merge with next? */ 568 + if (i + 1 < chunk->map_used && chunk->map[i + 1] >= 0) { 569 + chunk->map[i] += chunk->map[i + 1]; 570 + chunk->map_used--; 571 + memmove(&chunk->map[i + 1], &chunk->map[i + 2], 572 + (chunk->map_used - (i + 1)) * sizeof(chunk->map[0])); 573 + } 574 + 575 + chunk->contig_hint = max(chunk->map[i], chunk->contig_hint); 576 + pcpu_chunk_relocate(chunk, oslot); 577 + } 578 + 579 + /** 580 + * pcpu_unmap - unmap pages out of a pcpu_chunk 581 + * @chunk: chunk of interest 582 + * @page_start: page index of the first page to unmap 583 + * @page_end: page index of the last page to unmap + 1 584 + * @flush: whether to flush cache and tlb or not 585 + * 586 + * For each cpu, unmap pages [@page_start,@page_end) out of @chunk. 587 + * If @flush is true, vcache is flushed before unmapping and tlb 588 + * after. 589 + */ 590 + static void pcpu_unmap(struct pcpu_chunk *chunk, int page_start, int page_end, 591 + bool flush) 592 + { 593 + unsigned int last = num_possible_cpus() - 1; 594 + unsigned int cpu; 595 + 596 + /* unmap must not be done on immutable chunk */ 597 + WARN_ON(chunk->immutable); 598 + 599 + /* 600 + * Each flushing trial can be very expensive, issue flush on 601 + * the whole region at once rather than doing it for each cpu. 
602 + * This could be an overkill but is more scalable. 603 + */ 604 + if (flush) 605 + flush_cache_vunmap(pcpu_chunk_addr(chunk, 0, page_start), 606 + pcpu_chunk_addr(chunk, last, page_end)); 607 + 608 + for_each_possible_cpu(cpu) 609 + unmap_kernel_range_noflush( 610 + pcpu_chunk_addr(chunk, cpu, page_start), 611 + (page_end - page_start) << PAGE_SHIFT); 612 + 613 + /* ditto as flush_cache_vunmap() */ 614 + if (flush) 615 + flush_tlb_kernel_range(pcpu_chunk_addr(chunk, 0, page_start), 616 + pcpu_chunk_addr(chunk, last, page_end)); 617 + } 618 + 619 + /** 620 + * pcpu_depopulate_chunk - depopulate and unmap an area of a pcpu_chunk 621 + * @chunk: chunk to depopulate 622 + * @off: offset to the area to depopulate 623 + * @size: size of the area to depopulate in bytes 624 + * @flush: whether to flush cache and tlb or not 625 + * 626 + * For each cpu, depopulate and unmap pages [@page_start,@page_end) 627 + * from @chunk. If @flush is true, vcache is flushed before unmapping 628 + * and tlb after. 629 + * 630 + * CONTEXT: 631 + * pcpu_alloc_mutex. 632 + */ 633 + static void pcpu_depopulate_chunk(struct pcpu_chunk *chunk, int off, int size, 634 + bool flush) 635 + { 636 + int page_start = PFN_DOWN(off); 637 + int page_end = PFN_UP(off + size); 638 + int unmap_start = -1; 639 + int uninitialized_var(unmap_end); 640 + unsigned int cpu; 641 + int i; 642 + 643 + for (i = page_start; i < page_end; i++) { 644 + for_each_possible_cpu(cpu) { 645 + struct page **pagep = pcpu_chunk_pagep(chunk, cpu, i); 646 + 647 + if (!*pagep) 648 + continue; 649 + 650 + __free_page(*pagep); 651 + 652 + /* 653 + * If it's partial depopulation, it might get 654 + * populated or depopulated again. Mark the 655 + * page gone. 656 + */ 657 + *pagep = NULL; 658 + 659 + unmap_start = unmap_start < 0 ? 
i : unmap_start; 660 + unmap_end = i + 1; 661 + } 662 + } 663 + 664 + if (unmap_start >= 0) 665 + pcpu_unmap(chunk, unmap_start, unmap_end, flush); 666 + } 667 + 668 + /** 669 + * pcpu_map - map pages into a pcpu_chunk 670 + * @chunk: chunk of interest 671 + * @page_start: page index of the first page to map 672 + * @page_end: page index of the last page to map + 1 673 + * 674 + * For each cpu, map pages [@page_start,@page_end) into @chunk. 675 + * vcache is flushed afterwards. 676 + */ 677 + static int pcpu_map(struct pcpu_chunk *chunk, int page_start, int page_end) 678 + { 679 + unsigned int last = num_possible_cpus() - 1; 680 + unsigned int cpu; 681 + int err; 682 + 683 + /* map must not be done on immutable chunk */ 684 + WARN_ON(chunk->immutable); 685 + 686 + for_each_possible_cpu(cpu) { 687 + err = map_kernel_range_noflush( 688 + pcpu_chunk_addr(chunk, cpu, page_start), 689 + (page_end - page_start) << PAGE_SHIFT, 690 + PAGE_KERNEL, 691 + pcpu_chunk_pagep(chunk, cpu, page_start)); 692 + if (err < 0) 693 + return err; 694 + } 695 + 696 + /* flush at once, please read comments in pcpu_unmap() */ 697 + flush_cache_vmap(pcpu_chunk_addr(chunk, 0, page_start), 698 + pcpu_chunk_addr(chunk, last, page_end)); 699 + return 0; 700 + } 701 + 702 + /** 703 + * pcpu_populate_chunk - populate and map an area of a pcpu_chunk 704 + * @chunk: chunk of interest 705 + * @off: offset to the area to populate 706 + * @size: size of the area to populate in bytes 707 + * 708 + * For each cpu, populate and map pages [@page_start,@page_end) into 709 + * @chunk. The area is cleared on return. 710 + * 711 + * CONTEXT: 712 + * pcpu_alloc_mutex, does GFP_KERNEL allocation. 
713 + */ 714 + static int pcpu_populate_chunk(struct pcpu_chunk *chunk, int off, int size) 715 + { 716 + const gfp_t alloc_mask = GFP_KERNEL | __GFP_HIGHMEM | __GFP_COLD; 717 + int page_start = PFN_DOWN(off); 718 + int page_end = PFN_UP(off + size); 719 + int map_start = -1; 720 + int uninitialized_var(map_end); 721 + unsigned int cpu; 722 + int i; 723 + 724 + for (i = page_start; i < page_end; i++) { 725 + if (pcpu_chunk_page_occupied(chunk, i)) { 726 + if (map_start >= 0) { 727 + if (pcpu_map(chunk, map_start, map_end)) 728 + goto err; 729 + map_start = -1; 730 + } 731 + continue; 732 + } 733 + 734 + map_start = map_start < 0 ? i : map_start; 735 + map_end = i + 1; 736 + 737 + for_each_possible_cpu(cpu) { 738 + struct page **pagep = pcpu_chunk_pagep(chunk, cpu, i); 739 + 740 + *pagep = alloc_pages_node(cpu_to_node(cpu), 741 + alloc_mask, 0); 742 + if (!*pagep) 743 + goto err; 744 + } 745 + } 746 + 747 + if (map_start >= 0 && pcpu_map(chunk, map_start, map_end)) 748 + goto err; 749 + 750 + for_each_possible_cpu(cpu) 751 + memset(chunk->vm->addr + cpu * pcpu_unit_size + off, 0, 752 + size); 753 + 754 + return 0; 755 + err: 756 + /* likely under heavy memory pressure, give memory back */ 757 + pcpu_depopulate_chunk(chunk, off, size, true); 758 + return -ENOMEM; 759 + } 760 + 761 + static void free_pcpu_chunk(struct pcpu_chunk *chunk) 762 + { 763 + if (!chunk) 764 + return; 765 + if (chunk->vm) 766 + free_vm_area(chunk->vm); 767 + pcpu_mem_free(chunk->map, chunk->map_alloc * sizeof(chunk->map[0])); 768 + kfree(chunk); 769 + } 770 + 771 + static struct pcpu_chunk *alloc_pcpu_chunk(void) 772 + { 773 + struct pcpu_chunk *chunk; 774 + 775 + chunk = kzalloc(pcpu_chunk_struct_size, GFP_KERNEL); 776 + if (!chunk) 777 + return NULL; 778 + 779 + chunk->map = pcpu_mem_alloc(PCPU_DFL_MAP_ALLOC * sizeof(chunk->map[0])); 780 + chunk->map_alloc = PCPU_DFL_MAP_ALLOC; 781 + chunk->map[chunk->map_used++] = pcpu_unit_size; 782 + chunk->page = chunk->page_ar; 783 + 784 + chunk->vm = 
get_vm_area(pcpu_chunk_size, GFP_KERNEL); 785 + if (!chunk->vm) { 786 + free_pcpu_chunk(chunk); 787 + return NULL; 788 + } 789 + 790 + INIT_LIST_HEAD(&chunk->list); 791 + chunk->free_size = pcpu_unit_size; 792 + chunk->contig_hint = pcpu_unit_size; 793 + 794 + return chunk; 795 + } 796 + 797 + /** 798 + * pcpu_alloc - the percpu allocator 799 + * @size: size of area to allocate in bytes 800 + * @align: alignment of area (max PAGE_SIZE) 801 + * @reserved: allocate from the reserved chunk if available 802 + * 803 + * Allocate percpu area of @size bytes aligned at @align. 804 + * 805 + * CONTEXT: 806 + * Does GFP_KERNEL allocation. 807 + * 808 + * RETURNS: 809 + * Percpu pointer to the allocated area on success, NULL on failure. 810 + */ 811 + static void *pcpu_alloc(size_t size, size_t align, bool reserved) 812 + { 813 + struct pcpu_chunk *chunk; 814 + int slot, off; 815 + 816 + if (unlikely(!size || size > PCPU_MIN_UNIT_SIZE || align > PAGE_SIZE)) { 817 + WARN(true, "illegal size (%zu) or align (%zu) for " 818 + "percpu allocation\n", size, align); 819 + return NULL; 820 + } 821 + 822 + mutex_lock(&pcpu_alloc_mutex); 823 + spin_lock_irq(&pcpu_lock); 824 + 825 + /* serve reserved allocations from the reserved chunk if available */ 826 + if (reserved && pcpu_reserved_chunk) { 827 + chunk = pcpu_reserved_chunk; 828 + if (size > chunk->contig_hint || 829 + pcpu_extend_area_map(chunk) < 0) 830 + goto fail_unlock; 831 + off = pcpu_alloc_area(chunk, size, align); 832 + if (off >= 0) 833 + goto area_found; 834 + goto fail_unlock; 835 + } 836 + 837 + restart: 838 + /* search through normal chunks */ 839 + for (slot = pcpu_size_to_slot(size); slot < pcpu_nr_slots; slot++) { 840 + list_for_each_entry(chunk, &pcpu_slot[slot], list) { 841 + if (size > chunk->contig_hint) 842 + continue; 843 + 844 + switch (pcpu_extend_area_map(chunk)) { 845 + case 0: 846 + break; 847 + case 1: 848 + goto restart; /* pcpu_lock dropped, restart */ 849 + default: 850 + goto fail_unlock; 851 + } 852 
+ 853 + off = pcpu_alloc_area(chunk, size, align); 854 + if (off >= 0) 855 + goto area_found; 856 + } 857 + } 858 + 859 + /* hmmm... no space left, create a new chunk */ 860 + spin_unlock_irq(&pcpu_lock); 861 + 862 + chunk = alloc_pcpu_chunk(); 863 + if (!chunk) 864 + goto fail_unlock_mutex; 865 + 866 + spin_lock_irq(&pcpu_lock); 867 + pcpu_chunk_relocate(chunk, -1); 868 + pcpu_chunk_addr_insert(chunk); 869 + goto restart; 870 + 871 + area_found: 872 + spin_unlock_irq(&pcpu_lock); 873 + 874 + /* populate, map and clear the area */ 875 + if (pcpu_populate_chunk(chunk, off, size)) { 876 + spin_lock_irq(&pcpu_lock); 877 + pcpu_free_area(chunk, off); 878 + goto fail_unlock; 879 + } 880 + 881 + mutex_unlock(&pcpu_alloc_mutex); 882 + 883 + return __addr_to_pcpu_ptr(chunk->vm->addr + off); 884 + 885 + fail_unlock: 886 + spin_unlock_irq(&pcpu_lock); 887 + fail_unlock_mutex: 888 + mutex_unlock(&pcpu_alloc_mutex); 889 + return NULL; 890 + } 891 + 892 + /** 893 + * __alloc_percpu - allocate dynamic percpu area 894 + * @size: size of area to allocate in bytes 895 + * @align: alignment of area (max PAGE_SIZE) 896 + * 897 + * Allocate percpu area of @size bytes aligned at @align. Might 898 + * sleep. Might trigger writeouts. 899 + * 900 + * CONTEXT: 901 + * Does GFP_KERNEL allocation. 902 + * 903 + * RETURNS: 904 + * Percpu pointer to the allocated area on success, NULL on failure. 905 + */ 906 + void *__alloc_percpu(size_t size, size_t align) 907 + { 908 + return pcpu_alloc(size, align, false); 909 + } 910 + EXPORT_SYMBOL_GPL(__alloc_percpu); 911 + 912 + /** 913 + * __alloc_reserved_percpu - allocate reserved percpu area 914 + * @size: size of area to allocate in bytes 915 + * @align: alignment of area (max PAGE_SIZE) 916 + * 917 + * Allocate percpu area of @size bytes aligned at @align from reserved 918 + * percpu area if arch has set it up; otherwise, allocation is served 919 + * from the same dynamic area. Might sleep. Might trigger writeouts. 
920 + * 921 + * CONTEXT: 922 + * Does GFP_KERNEL allocation. 923 + * 924 + * RETURNS: 925 + * Percpu pointer to the allocated area on success, NULL on failure. 926 + */ 927 + void *__alloc_reserved_percpu(size_t size, size_t align) 928 + { 929 + return pcpu_alloc(size, align, true); 930 + } 931 + 932 + /** 933 + * pcpu_reclaim - reclaim fully free chunks, workqueue function 934 + * @work: unused 935 + * 936 + * Reclaim all fully free chunks except for the first one. 937 + * 938 + * CONTEXT: 939 + * workqueue context. 940 + */ 941 + static void pcpu_reclaim(struct work_struct *work) 942 + { 943 + LIST_HEAD(todo); 944 + struct list_head *head = &pcpu_slot[pcpu_nr_slots - 1]; 945 + struct pcpu_chunk *chunk, *next; 946 + 947 + mutex_lock(&pcpu_alloc_mutex); 948 + spin_lock_irq(&pcpu_lock); 949 + 950 + list_for_each_entry_safe(chunk, next, head, list) { 951 + WARN_ON(chunk->immutable); 952 + 953 + /* spare the first one */ 954 + if (chunk == list_first_entry(head, struct pcpu_chunk, list)) 955 + continue; 956 + 957 + rb_erase(&chunk->rb_node, &pcpu_addr_root); 958 + list_move(&chunk->list, &todo); 959 + } 960 + 961 + spin_unlock_irq(&pcpu_lock); 962 + mutex_unlock(&pcpu_alloc_mutex); 963 + 964 + list_for_each_entry_safe(chunk, next, &todo, list) { 965 + pcpu_depopulate_chunk(chunk, 0, pcpu_unit_size, false); 966 + free_pcpu_chunk(chunk); 967 + } 968 + } 969 + 970 + /** 971 + * free_percpu - free percpu area 972 + * @ptr: pointer to area to free 973 + * 974 + * Free percpu area @ptr. 975 + * 976 + * CONTEXT: 977 + * Can be called from atomic context. 
978 + */ 979 + void free_percpu(void *ptr) 980 + { 981 + void *addr = __pcpu_ptr_to_addr(ptr); 982 + struct pcpu_chunk *chunk; 983 + unsigned long flags; 984 + int off; 985 + 986 + if (!ptr) 987 + return; 988 + 989 + spin_lock_irqsave(&pcpu_lock, flags); 990 + 991 + chunk = pcpu_chunk_addr_search(addr); 992 + off = addr - chunk->vm->addr; 993 + 994 + pcpu_free_area(chunk, off); 995 + 996 + /* if there is more than one fully free chunk, wake up the grim reaper */ 997 + if (chunk->free_size == pcpu_unit_size) { 998 + struct pcpu_chunk *pos; 999 + 1000 + list_for_each_entry(pos, &pcpu_slot[pcpu_nr_slots - 1], list) 1001 + if (pos != chunk) { 1002 + schedule_work(&pcpu_reclaim_work); 1003 + break; 1004 + } 1005 + } 1006 + 1007 + spin_unlock_irqrestore(&pcpu_lock, flags); 1008 + } 1009 + EXPORT_SYMBOL_GPL(free_percpu); 1010 + 1011 + /** 1012 + * pcpu_setup_first_chunk - initialize the first percpu chunk 1013 + * @get_page_fn: callback to fetch page pointer 1014 + * @static_size: the size of static percpu area in bytes 1015 + * @reserved_size: the size of reserved percpu area in bytes 1016 + * @unit_size: unit size in bytes, must be multiple of PAGE_SIZE, -1 for auto 1017 + * @dyn_size: free size for dynamic allocation in bytes, -1 for auto 1018 + * @base_addr: mapped address, NULL for auto 1019 + * @populate_pte_fn: callback to allocate pagetable, NULL if unnecessary 1020 + * 1021 + * Initialize the first percpu chunk which contains the kernel static 1022 + * percpu area. This function is to be called from the arch percpu area 1023 + * setup path. The first two parameters are mandatory. The rest are 1024 + * optional. 1025 + * 1026 + * @get_page_fn() should return the pointer to the percpu page given cpu 1027 + * number and page number. It should at least return enough pages to 1028 + * cover the static area. The returned pages for the static area should 1029 + * have been initialized with valid data. If @unit_size is specified, 1030 + * it can also return pages after the static area.
NULL return 1031 + * indicates end of pages for the cpu. Note that @get_page_fn() must 1032 + * return the same number of pages for all cpus. 1033 + * 1034 + * @reserved_size, if non-zero, specifies the number of bytes to 1035 + * reserve after the static area in the first chunk. This reserves 1036 + * the first chunk such that it's available only through reserved 1037 + * percpu allocation. This is primarily used to serve module percpu 1038 + * static areas on architectures where the addressing model has 1039 + * limited offset range for symbol relocations to guarantee module 1040 + * percpu symbols fall inside the relocatable range. 1041 + * 1042 + * @unit_size, if non-negative, specifies the unit size and must be 1043 + * aligned to PAGE_SIZE and equal to or larger than @static_size + 1044 + * @reserved_size + @dyn_size. 1045 + * 1046 + * @dyn_size, if non-negative, limits the number of bytes available 1047 + * for dynamic allocation in the first chunk. Specifying a non-negative 1048 + * value makes percpu leave the area beyond @static_size + 1049 + * @reserved_size + @dyn_size untouched. 1050 + * 1051 + * Non-null @base_addr means that the caller already allocated the virtual 1052 + * region for the first chunk and mapped it. percpu must not mess 1053 + * with the chunk. Note that @base_addr with 0 @unit_size or non-NULL 1054 + * @populate_pte_fn doesn't make any sense. 1055 + * 1056 + * @populate_pte_fn is used to populate the pagetable. NULL means the 1057 + * caller already populated the pagetable. 1058 + * 1059 + * If the first chunk ends up with both reserved and dynamic areas, it 1060 + * is served by two chunks - one to serve the core static and reserved 1061 + * areas and the other for the dynamic area. They share the same vm 1062 + * and page map but use different area allocation maps to stay away 1063 + * from each other. The latter chunk is circulated in the chunk slots 1064 + * and available for dynamic allocation like any other chunk.
1065 + * 1066 + * RETURNS: 1067 + * The determined pcpu_unit_size which can be used to initialize 1068 + * percpu access. 1069 + */ 1070 + size_t __init pcpu_setup_first_chunk(pcpu_get_page_fn_t get_page_fn, 1071 + size_t static_size, size_t reserved_size, 1072 + ssize_t unit_size, ssize_t dyn_size, 1073 + void *base_addr, 1074 + pcpu_populate_pte_fn_t populate_pte_fn) 1075 + { 1076 + static struct vm_struct first_vm; 1077 + static int smap[2], dmap[2]; 1078 + struct pcpu_chunk *schunk, *dchunk = NULL; 1079 + unsigned int cpu; 1080 + int nr_pages; 1081 + int err, i; 1082 + 1083 + /* sanity checks */ 1084 + BUILD_BUG_ON(ARRAY_SIZE(smap) >= PCPU_DFL_MAP_ALLOC || 1085 + ARRAY_SIZE(dmap) >= PCPU_DFL_MAP_ALLOC); 1086 + BUG_ON(!static_size); 1087 + if (unit_size >= 0) { 1088 + BUG_ON(unit_size < static_size + reserved_size + 1089 + (dyn_size >= 0 ? dyn_size : 0)); 1090 + BUG_ON(unit_size & ~PAGE_MASK); 1091 + } else { 1092 + BUG_ON(dyn_size >= 0); 1093 + BUG_ON(base_addr); 1094 + } 1095 + BUG_ON(base_addr && populate_pte_fn); 1096 + 1097 + if (unit_size >= 0) 1098 + pcpu_unit_pages = unit_size >> PAGE_SHIFT; 1099 + else 1100 + pcpu_unit_pages = max_t(int, PCPU_MIN_UNIT_SIZE >> PAGE_SHIFT, 1101 + PFN_UP(static_size + reserved_size)); 1102 + 1103 + pcpu_unit_size = pcpu_unit_pages << PAGE_SHIFT; 1104 + pcpu_chunk_size = num_possible_cpus() * pcpu_unit_size; 1105 + pcpu_chunk_struct_size = sizeof(struct pcpu_chunk) 1106 + + num_possible_cpus() * pcpu_unit_pages * sizeof(struct page *); 1107 + 1108 + if (dyn_size < 0) 1109 + dyn_size = pcpu_unit_size - static_size - reserved_size; 1110 + 1111 + /* 1112 + * Allocate chunk slots. The additional last slot is for 1113 + * empty chunks. 1114 + */ 1115 + pcpu_nr_slots = __pcpu_size_to_slot(pcpu_unit_size) + 2; 1116 + pcpu_slot = alloc_bootmem(pcpu_nr_slots * sizeof(pcpu_slot[0])); 1117 + for (i = 0; i < pcpu_nr_slots; i++) 1118 + INIT_LIST_HEAD(&pcpu_slot[i]); 1119 + 1120 + /* 1121 + * Initialize static chunk.
If reserved_size is zero, the 1122 + * static chunk covers static area + dynamic allocation area 1123 + * in the first chunk. If reserved_size is not zero, it 1124 + * covers static area + reserved area (mostly used for module 1125 + * static percpu allocation). 1126 + */ 1127 + schunk = alloc_bootmem(pcpu_chunk_struct_size); 1128 + INIT_LIST_HEAD(&schunk->list); 1129 + schunk->vm = &first_vm; 1130 + schunk->map = smap; 1131 + schunk->map_alloc = ARRAY_SIZE(smap); 1132 + schunk->page = schunk->page_ar; 1133 + 1134 + if (reserved_size) { 1135 + schunk->free_size = reserved_size; 1136 + pcpu_reserved_chunk = schunk; /* not for dynamic alloc */ 1137 + } else { 1138 + schunk->free_size = dyn_size; 1139 + dyn_size = 0; /* dynamic area covered */ 1140 + } 1141 + schunk->contig_hint = schunk->free_size; 1142 + 1143 + schunk->map[schunk->map_used++] = -static_size; 1144 + if (schunk->free_size) 1145 + schunk->map[schunk->map_used++] = schunk->free_size; 1146 + 1147 + pcpu_reserved_chunk_limit = static_size + schunk->free_size; 1148 + 1149 + /* init dynamic chunk if necessary */ 1150 + if (dyn_size) { 1151 + dchunk = alloc_bootmem(sizeof(struct pcpu_chunk)); 1152 + INIT_LIST_HEAD(&dchunk->list); 1153 + dchunk->vm = &first_vm; 1154 + dchunk->map = dmap; 1155 + dchunk->map_alloc = ARRAY_SIZE(dmap); 1156 + dchunk->page = schunk->page_ar; /* share page map with schunk */ 1157 + 1158 + dchunk->contig_hint = dchunk->free_size = dyn_size; 1159 + dchunk->map[dchunk->map_used++] = -pcpu_reserved_chunk_limit; 1160 + dchunk->map[dchunk->map_used++] = dchunk->free_size; 1161 + } 1162 + 1163 + /* allocate vm address */ 1164 + first_vm.flags = VM_ALLOC; 1165 + first_vm.size = pcpu_chunk_size; 1166 + 1167 + if (!base_addr) 1168 + vm_area_register_early(&first_vm, PAGE_SIZE); 1169 + else { 1170 + /* 1171 + * Pages already mapped. No need to remap into 1172 + * vmalloc area. In this case the first chunks can't 1173 + * be mapped or unmapped by percpu and are marked 1174 + * immutable. 
1175 + */ 1176 + first_vm.addr = base_addr; 1177 + schunk->immutable = true; 1178 + if (dchunk) 1179 + dchunk->immutable = true; 1180 + } 1181 + 1182 + /* assign pages */ 1183 + nr_pages = -1; 1184 + for_each_possible_cpu(cpu) { 1185 + for (i = 0; i < pcpu_unit_pages; i++) { 1186 + struct page *page = get_page_fn(cpu, i); 1187 + 1188 + if (!page) 1189 + break; 1190 + *pcpu_chunk_pagep(schunk, cpu, i) = page; 1191 + } 1192 + 1193 + BUG_ON(i < PFN_UP(static_size)); 1194 + 1195 + if (nr_pages < 0) 1196 + nr_pages = i; 1197 + else 1198 + BUG_ON(nr_pages != i); 1199 + } 1200 + 1201 + /* map them */ 1202 + if (populate_pte_fn) { 1203 + for_each_possible_cpu(cpu) 1204 + for (i = 0; i < nr_pages; i++) 1205 + populate_pte_fn(pcpu_chunk_addr(schunk, 1206 + cpu, i)); 1207 + 1208 + err = pcpu_map(schunk, 0, nr_pages); 1209 + if (err) 1210 + panic("failed to setup static percpu area, err=%d\n", 1211 + err); 1212 + } 1213 + 1214 + /* link the first chunk in */ 1215 + if (!dchunk) { 1216 + pcpu_chunk_relocate(schunk, -1); 1217 + pcpu_chunk_addr_insert(schunk); 1218 + } else { 1219 + pcpu_chunk_relocate(dchunk, -1); 1220 + pcpu_chunk_addr_insert(dchunk); 1221 + } 1222 + 1223 + /* we're done */ 1224 + pcpu_base_addr = (void *)pcpu_chunk_addr(schunk, 0, 0); 1225 + return pcpu_unit_size; 1226 + }
+91 -3
mm/vmalloc.c
··· 24 24 #include <linux/radix-tree.h> 25 25 #include <linux/rcupdate.h> 26 26 #include <linux/bootmem.h> 27 + #include <linux/pfn.h> 27 28 28 29 #include <asm/atomic.h> 29 30 #include <asm/uaccess.h> ··· 153 152 * 154 153 * Ie. pte at addr+N*PAGE_SIZE shall point to pfn corresponding to pages[N] 155 154 */ 156 - static int vmap_page_range(unsigned long start, unsigned long end, 157 - pgprot_t prot, struct page **pages) 155 + static int vmap_page_range_noflush(unsigned long start, unsigned long end, 156 + pgprot_t prot, struct page **pages) 158 157 { 159 158 pgd_t *pgd; 160 159 unsigned long next; ··· 170 169 if (err) 171 170 break; 172 171 } while (pgd++, addr = next, addr != end); 173 - flush_cache_vmap(start, end); 174 172 175 173 if (unlikely(err)) 176 174 return err; 177 175 return nr; 176 + } 177 + 178 + static int vmap_page_range(unsigned long start, unsigned long end, 179 + pgprot_t prot, struct page **pages) 180 + { 181 + int ret; 182 + 183 + ret = vmap_page_range_noflush(start, end, prot, pages); 184 + flush_cache_vmap(start, end); 185 + return ret; 178 186 } 179 187 180 188 static inline int is_vmalloc_or_module_addr(const void *x) ··· 1000 990 } 1001 991 EXPORT_SYMBOL(vm_map_ram); 1002 992 993 + /** 994 + * vm_area_register_early - register vmap area early during boot 995 + * @vm: vm_struct to register 996 + * @align: requested alignment 997 + * 998 + * This function is used to register kernel vm area before 999 + * vmalloc_init() is called. @vm->size and @vm->flags should contain 1000 + * proper values on entry and other fields should be zero. On return, 1001 + * vm->addr contains the allocated address. 1002 + * 1003 + * DO NOT USE THIS FUNCTION UNLESS YOU KNOW WHAT YOU'RE DOING. 
1004 + */ 1005 + void __init vm_area_register_early(struct vm_struct *vm, size_t align) 1006 + { 1007 + static size_t vm_init_off __initdata; 1008 + unsigned long addr; 1009 + 1010 + addr = ALIGN(VMALLOC_START + vm_init_off, align); 1011 + vm_init_off = PFN_ALIGN(addr + vm->size) - VMALLOC_START; 1012 + 1013 + vm->addr = (void *)addr; 1014 + 1015 + vm->next = vmlist; 1016 + vmlist = vm; 1017 + } 1018 + 1003 1019 void __init vmalloc_init(void) 1004 1020 { 1005 1021 struct vmap_area *va; ··· 1053 1017 vmap_initialized = true; 1054 1018 } 1055 1019 1020 + /** 1021 + * map_kernel_range_noflush - map kernel VM area with the specified pages 1022 + * @addr: start of the VM area to map 1023 + * @size: size of the VM area to map 1024 + * @prot: page protection flags to use 1025 + * @pages: pages to map 1026 + * 1027 + * Map PFN_UP(@size) pages at @addr. The VM area @addr and @size 1028 + * specify should have been allocated using get_vm_area() and its 1029 + * friends. 1030 + * 1031 + * NOTE: 1032 + * This function does NOT do any cache flushing. The caller is 1033 + * responsible for calling flush_cache_vmap() on to-be-mapped areas 1034 + * before calling this function. 1035 + * 1036 + * RETURNS: 1037 + * The number of pages mapped on success, -errno on failure. 1038 + */ 1039 + int map_kernel_range_noflush(unsigned long addr, unsigned long size, 1040 + pgprot_t prot, struct page **pages) 1041 + { 1042 + return vmap_page_range_noflush(addr, addr + size, prot, pages); 1043 + } 1044 + 1045 + /** 1046 + * unmap_kernel_range_noflush - unmap kernel VM area 1047 + * @addr: start of the VM area to unmap 1048 + * @size: size of the VM area to unmap 1049 + * 1050 + * Unmap PFN_UP(@size) pages at @addr. The VM area @addr and @size 1051 + * specify should have been allocated using get_vm_area() and its 1052 + * friends. 1053 + * 1054 + * NOTE: 1055 + * This function does NOT do any cache flushing. 
The caller is 1056 + * responsible for calling flush_cache_vunmap() on to-be-unmapped areas 1057 + * before calling this function and flush_tlb_kernel_range() after. 1058 + */ 1059 + void unmap_kernel_range_noflush(unsigned long addr, unsigned long size) 1060 + { 1061 + vunmap_page_range(addr, addr + size); 1062 + } 1063 + 1064 + /** 1065 + * unmap_kernel_range - unmap kernel VM area and flush cache and TLB 1066 + * @addr: start of the VM area to unmap 1067 + * @size: size of the VM area to unmap 1068 + * 1069 + * Similar to unmap_kernel_range_noflush() but flushes vcache before 1070 + * the unmapping and tlb after. 1071 + */ 1056 1072 void unmap_kernel_range(unsigned long addr, unsigned long size) 1057 1073 { 1058 1074 unsigned long end = addr + size;
+2
net/802/tr.c
··· 668 668 669 669 EXPORT_SYMBOL(tr_type_trans); 670 670 EXPORT_SYMBOL(alloc_trdev); 671 + 672 + MODULE_LICENSE("GPL");
+2 -1
net/8021q/vlan_dev.c
··· 553 553 int err = 0; 554 554 555 555 if (netif_device_present(real_dev) && ops->ndo_neigh_setup) 556 - err = ops->ndo_neigh_setup(dev, pa); 556 + err = ops->ndo_neigh_setup(real_dev, pa); 557 557 558 558 return err; 559 559 } ··· 639 639 dev->hard_header_len = real_dev->hard_header_len + VLAN_HLEN; 640 640 dev->netdev_ops = &vlan_netdev_ops; 641 641 } 642 + netdev_resync_ops(dev); 642 643 643 644 if (is_vlan_dev(real_dev)) 644 645 subclass = 1;
+34 -27
net/core/dev.c
··· 2267 2267 2268 2268 rcu_read_lock(); 2269 2269 2270 - /* Don't receive packets in an exiting network namespace */ 2271 - if (!net_alive(dev_net(skb->dev))) { 2272 - kfree_skb(skb); 2273 - goto out; 2274 - } 2275 - 2276 2270 #ifdef CONFIG_NET_CLS_ACT 2277 2271 if (skb->tc_verd & TC_NCLS) { 2278 2272 skb->tc_verd = CLR_TC_NCLS(skb->tc_verd); ··· 4282 4288 } 4283 4289 EXPORT_SYMBOL(netdev_fix_features); 4284 4290 4291 + /* Some devices need to (re-)set their netdev_ops inside 4292 + * ->init() or similar. If that happens, we have to setup 4293 + * the compat pointers again. 4294 + */ 4295 + void netdev_resync_ops(struct net_device *dev) 4296 + { 4297 + #ifdef CONFIG_COMPAT_NET_DEV_OPS 4298 + const struct net_device_ops *ops = dev->netdev_ops; 4299 + 4300 + dev->init = ops->ndo_init; 4301 + dev->uninit = ops->ndo_uninit; 4302 + dev->open = ops->ndo_open; 4303 + dev->change_rx_flags = ops->ndo_change_rx_flags; 4304 + dev->set_rx_mode = ops->ndo_set_rx_mode; 4305 + dev->set_multicast_list = ops->ndo_set_multicast_list; 4306 + dev->set_mac_address = ops->ndo_set_mac_address; 4307 + dev->validate_addr = ops->ndo_validate_addr; 4308 + dev->do_ioctl = ops->ndo_do_ioctl; 4309 + dev->set_config = ops->ndo_set_config; 4310 + dev->change_mtu = ops->ndo_change_mtu; 4311 + dev->neigh_setup = ops->ndo_neigh_setup; 4312 + dev->tx_timeout = ops->ndo_tx_timeout; 4313 + dev->get_stats = ops->ndo_get_stats; 4314 + dev->vlan_rx_register = ops->ndo_vlan_rx_register; 4315 + dev->vlan_rx_add_vid = ops->ndo_vlan_rx_add_vid; 4316 + dev->vlan_rx_kill_vid = ops->ndo_vlan_rx_kill_vid; 4317 + #ifdef CONFIG_NET_POLL_CONTROLLER 4318 + dev->poll_controller = ops->ndo_poll_controller; 4319 + #endif 4320 + #endif 4321 + } 4322 + EXPORT_SYMBOL(netdev_resync_ops); 4323 + 4285 4324 /** 4286 4325 * register_netdevice - register a network device 4287 4326 * @dev: device to register ··· 4359 4332 * This is temporary until all network devices are converted. 
4360 4333 */ 4361 4334 if (dev->netdev_ops) { 4362 - const struct net_device_ops *ops = dev->netdev_ops; 4363 - 4364 - dev->init = ops->ndo_init; 4365 - dev->uninit = ops->ndo_uninit; 4366 - dev->open = ops->ndo_open; 4367 - dev->change_rx_flags = ops->ndo_change_rx_flags; 4368 - dev->set_rx_mode = ops->ndo_set_rx_mode; 4369 - dev->set_multicast_list = ops->ndo_set_multicast_list; 4370 - dev->set_mac_address = ops->ndo_set_mac_address; 4371 - dev->validate_addr = ops->ndo_validate_addr; 4372 - dev->do_ioctl = ops->ndo_do_ioctl; 4373 - dev->set_config = ops->ndo_set_config; 4374 - dev->change_mtu = ops->ndo_change_mtu; 4375 - dev->tx_timeout = ops->ndo_tx_timeout; 4376 - dev->get_stats = ops->ndo_get_stats; 4377 - dev->vlan_rx_register = ops->ndo_vlan_rx_register; 4378 - dev->vlan_rx_add_vid = ops->ndo_vlan_rx_add_vid; 4379 - dev->vlan_rx_kill_vid = ops->ndo_vlan_rx_kill_vid; 4380 - #ifdef CONFIG_NET_POLL_CONTROLLER 4381 - dev->poll_controller = ops->ndo_poll_controller; 4382 - #endif 4335 + netdev_resync_ops(dev); 4383 4336 } else { 4384 4337 char drivername[64]; 4385 4338 pr_info("%s (%s): not using net_device_ops yet\n",
+3 -1
net/core/net-sysfs.c
··· 77 77 if (endp == buf) 78 78 goto err; 79 79 80 - rtnl_lock(); 80 + if (!rtnl_trylock()) 81 + return -ERESTARTSYS; 82 + 81 83 if (dev_isalive(net)) { 82 84 if ((ret = (*set)(net, new)) == 0) 83 85 ret = len;
-3
net/core/net_namespace.c
··· 157 157 struct pernet_operations *ops; 158 158 struct net *net; 159 159 160 - /* Be very certain incoming network packets will not find us */ 161 - rcu_barrier(); 162 - 163 160 net = container_of(work, struct net, work); 164 161 165 162 mutex_lock(&net_mutex);
+2 -2
net/ipv4/af_inet.c
··· 1375 1375 int snmp_mib_init(void *ptr[2], size_t mibsize) 1376 1376 { 1377 1377 BUG_ON(ptr == NULL); 1378 - ptr[0] = __alloc_percpu(mibsize); 1378 + ptr[0] = __alloc_percpu(mibsize, __alignof__(unsigned long long)); 1379 1379 if (!ptr[0]) 1380 1380 goto err0; 1381 - ptr[1] = __alloc_percpu(mibsize); 1381 + ptr[1] = __alloc_percpu(mibsize, __alignof__(unsigned long long)); 1382 1382 if (!ptr[1]) 1383 1383 goto err1; 1384 1384 return 0;
+1 -1
net/ipv4/icmp.c
··· 1205 1205 1206 1206 int __init icmp_init(void) 1207 1207 { 1208 - return register_pernet_device(&icmp_sk_ops); 1208 + return register_pernet_subsys(&icmp_sk_ops); 1209 1209 } 1210 1210 1211 1211 EXPORT_SYMBOL(icmp_err_convert);
+1 -1
net/ipv4/route.c
··· 3376 3376 int rc = 0; 3377 3377 3378 3378 #ifdef CONFIG_NET_CLS_ROUTE 3379 - ip_rt_acct = __alloc_percpu(256 * sizeof(struct ip_rt_acct)); 3379 + ip_rt_acct = __alloc_percpu(256 * sizeof(struct ip_rt_acct), __alignof__(struct ip_rt_acct)); 3380 3380 if (!ip_rt_acct) 3381 3381 panic("IP: failed to allocate ip_rt_acct\n"); 3382 3382 #endif
+1 -1
net/ipv4/tcp_ipv4.c
··· 2443 2443 void __init tcp_v4_init(void) 2444 2444 { 2445 2445 inet_hashinfo_init(&tcp_hashinfo); 2446 - if (register_pernet_device(&tcp_sk_ops)) 2446 + if (register_pernet_subsys(&tcp_sk_ops)) 2447 2447 panic("Failed to create the TCP control socket.\n"); 2448 2448 } 2449 2449
+17 -36
net/ipv6/addrconf.c
··· 493 493 read_unlock(&dev_base_lock); 494 494 } 495 495 496 - static void addrconf_fixup_forwarding(struct ctl_table *table, int *p, int old) 496 + static int addrconf_fixup_forwarding(struct ctl_table *table, int *p, int old) 497 497 { 498 498 struct net *net; 499 499 500 500 net = (struct net *)table->extra2; 501 501 if (p == &net->ipv6.devconf_dflt->forwarding) 502 - return; 502 + return 0; 503 503 504 - rtnl_lock(); 504 + if (!rtnl_trylock()) 505 + return -ERESTARTSYS; 506 + 505 507 if (p == &net->ipv6.devconf_all->forwarding) { 506 508 __s32 newf = net->ipv6.devconf_all->forwarding; 507 509 net->ipv6.devconf_dflt->forwarding = newf; ··· 514 512 515 513 if (*p) 516 514 rt6_purge_dflt_routers(net); 515 + return 1; 517 516 } 518 517 #endif 519 518 ··· 2611 2608 2612 2609 ASSERT_RTNL(); 2613 2610 2614 - if ((dev->flags & IFF_LOOPBACK) && how == 1) 2615 - how = 0; 2616 - 2617 2611 rt6_ifdown(net, dev); 2618 2612 neigh_ifdown(&nd_tbl, dev); 2619 2613 ··· 3983 3983 ret = proc_dointvec(ctl, write, filp, buffer, lenp, ppos); 3984 3984 3985 3985 if (write) 3986 - addrconf_fixup_forwarding(ctl, valp, val); 3986 + ret = addrconf_fixup_forwarding(ctl, valp, val); 3987 3987 return ret; 3988 3988 } 3989 3989 ··· 4019 4019 } 4020 4020 4021 4021 *valp = new; 4022 - addrconf_fixup_forwarding(table, valp, val); 4023 - return 1; 4022 + return addrconf_fixup_forwarding(table, valp, val); 4024 4023 } 4025 4024 4026 4025 static struct addrconf_sysctl_table ··· 4445 4446 4446 4447 EXPORT_SYMBOL(unregister_inet6addr_notifier); 4447 4448 4448 - static void addrconf_net_exit(struct net *net) 4449 - { 4450 - struct net_device *dev; 4451 - 4452 - rtnl_lock(); 4453 - /* clean dev list */ 4454 - for_each_netdev(net, dev) { 4455 - if (__in6_dev_get(dev) == NULL) 4456 - continue; 4457 - addrconf_ifdown(dev, 1); 4458 - } 4459 - addrconf_ifdown(net->loopback_dev, 2); 4460 - rtnl_unlock(); 4461 - } 4462 - 4463 - static struct pernet_operations addrconf_net_ops = { 4464 - .exit = 
addrconf_net_exit, 4465 - }; 4466 - 4467 4449 /* 4468 4450 * Init / cleanup code 4469 4451 */ ··· 4486 4506 if (err) 4487 4507 goto errlo; 4488 4508 4489 - err = register_pernet_device(&addrconf_net_ops); 4490 - if (err) 4491 - return err; 4492 - 4493 4509 register_netdevice_notifier(&ipv6_dev_notf); 4494 4510 4495 4511 addrconf_verify(0); ··· 4515 4539 void addrconf_cleanup(void) 4516 4540 { 4517 4541 struct inet6_ifaddr *ifa; 4542 + struct net_device *dev; 4518 4543 int i; 4519 4544 4520 4545 unregister_netdevice_notifier(&ipv6_dev_notf); 4521 - unregister_pernet_device(&addrconf_net_ops); 4522 - 4523 4546 unregister_pernet_subsys(&addrconf_ops); 4524 4547 4525 4548 rtnl_lock(); 4549 + 4550 + /* clean dev list */ 4551 + for_each_netdev(&init_net, dev) { 4552 + if (__in6_dev_get(dev) == NULL) 4553 + continue; 4554 + addrconf_ifdown(dev, 1); 4555 + } 4556 + addrconf_ifdown(init_net.loopback_dev, 2); 4526 4557 4527 4558 /* 4528 4559 * Check hash table. ··· 4551 4568 4552 4569 del_timer(&addr_chk_timer); 4553 4570 rtnl_unlock(); 4554 - 4555 - unregister_pernet_subsys(&addrconf_net_ops); 4556 4571 }
+16 -5
net/ipv6/af_inet6.c
··· 72 72 static struct list_head inetsw6[SOCK_MAX]; 73 73 static DEFINE_SPINLOCK(inetsw6_lock); 74 74 75 + static int disable_ipv6 = 0; 76 + module_param_named(disable, disable_ipv6, int, 0); 77 + MODULE_PARM_DESC(disable, "Disable IPv6 such that it is non-functional"); 78 + 75 79 static __inline__ struct ipv6_pinfo *inet6_sk_generic(struct sock *sk) 76 80 { 77 81 const int offset = sk->sk_prot->obj_size - sizeof(struct ipv6_pinfo); ··· 995 991 { 996 992 struct sk_buff *dummy_skb; 997 993 struct list_head *r; 998 - int err; 994 + int err = 0; 999 995 1000 996 BUILD_BUG_ON(sizeof(struct inet6_skb_parm) > sizeof(dummy_skb->cb)); 997 + 998 + /* Register the socket-side information for inet6_create. */ 999 + for(r = &inetsw6[0]; r < &inetsw6[SOCK_MAX]; ++r) 1000 + INIT_LIST_HEAD(r); 1001 + 1002 + if (disable_ipv6) { 1003 + printk(KERN_INFO 1004 + "IPv6: Loaded, but administratively disabled, " 1005 + "reboot required to enable\n"); 1006 + goto out; 1007 + } 1001 1008 1002 1009 err = proto_register(&tcpv6_prot, 1); 1003 1010 if (err) ··· 1026 1011 if (err) 1027 1012 goto out_unregister_udplite_proto; 1028 1013 1029 - 1030 - /* Register the socket-side information for inet6_create. */ 1031 - for(r = &inetsw6[0]; r < &inetsw6[SOCK_MAX]; ++r) 1032 - INIT_LIST_HEAD(r); 1033 1014 1034 1015 /* We MUST register RAW sockets before we create the ICMP6, 1035 1016 * IGMP6, or NDISC control sockets.
+9 -1
net/netlink/af_netlink.c
··· 1084 1084 return 0; 1085 1085 } 1086 1086 1087 + /** 1088 + * netlink_set_err - report error to broadcast listeners 1089 + * @ssk: the kernel netlink socket, as returned by netlink_kernel_create() 1090 + * @pid: the PID of a process that we want to skip (if any) 1091 + * @groups: the broadcast group that will notice the error 1092 + * @code: error code, must be negative (as usual in kernelspace) 1093 + */ 1087 1094 void netlink_set_err(struct sock *ssk, u32 pid, u32 group, int code) 1088 1095 { 1089 1096 struct netlink_set_err_data info; ··· 1100 1093 info.exclude_sk = ssk; 1101 1094 info.pid = pid; 1102 1095 info.group = group; 1103 - info.code = code; 1096 + /* sk->sk_err wants a positive error value */ 1097 + info.code = -code; 1104 1098 1105 1099 read_lock(&nl_table_lock); 1106 1100
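The af_netlink.c change is a sign-convention fix: in-kernel callers pass a negative errno, while sk->sk_err stores the positive value, hence the single negation added above. A stand-alone illustration (sk_err_from_code() is a hypothetical stand-in for the `info.code = -code` assignment):

```c
#include <assert.h>
#include <errno.h>

/* Callers use the usual kernel convention of negative error codes;
 * the value stored in sk->sk_err must be positive. */
static int sk_err_from_code(int code)
{
    /* sk->sk_err wants a positive error value */
    return -code;
}
```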
+6 -7
net/sched/act_police.c
··· 183 183 if (R_tab == NULL) 184 184 goto failure; 185 185 186 - if (!est && (ret == ACT_P_CREATED || 187 - !gen_estimator_active(&police->tcf_bstats, 188 - &police->tcf_rate_est))) { 189 - err = -EINVAL; 190 - goto failure; 191 - } 192 - 193 186 if (parm->peakrate.rate) { 194 187 P_tab = qdisc_get_rtab(&parm->peakrate, 195 188 tb[TCA_POLICE_PEAKRATE]); ··· 198 205 &police->tcf_lock, est); 199 206 if (err) 200 207 goto failure_unlock; 208 + } else if (tb[TCA_POLICE_AVRATE] && 209 + (ret == ACT_P_CREATED || 210 + !gen_estimator_active(&police->tcf_bstats, 211 + &police->tcf_rate_est))) { 212 + err = -EINVAL; 213 + goto failure_unlock; 201 214 } 202 215 203 216 /* No failure allowed after this point */
+9 -7
net/sctp/protocol.c
··· 717 717 static int sctp_ctl_sock_init(void) 718 718 { 719 719 int err; 720 - sa_family_t family; 720 + sa_family_t family = PF_INET; 721 721 722 722 if (sctp_get_pf_specific(PF_INET6)) 723 723 family = PF_INET6; 724 - else 725 - family = PF_INET; 726 724 727 725 err = inet_ctl_sock_create(&sctp_ctl_sock, family, 728 726 SOCK_SEQPACKET, IPPROTO_SCTP, &init_net); 727 + 728 + /* If IPv6 socket could not be created, try the IPv4 socket */ 729 + if (err < 0 && family == PF_INET6) 730 + err = inet_ctl_sock_create(&sctp_ctl_sock, AF_INET, 731 + SOCK_SEQPACKET, IPPROTO_SCTP, 732 + &init_net); 733 + 729 734 if (err < 0) { 730 735 printk(KERN_ERR 731 736 "SCTP: Failed to create the SCTP control socket.\n"); ··· 1327 1322 out: 1328 1323 return status; 1329 1324 err_v6_add_protocol: 1330 - sctp_v6_del_protocol(); 1331 - err_add_protocol: 1332 1325 sctp_v4_del_protocol(); 1326 + err_add_protocol: 1333 1327 inet_ctl_sock_destroy(sctp_ctl_sock); 1334 1328 err_ctl_sock_init: 1335 1329 sctp_v6_protosw_exit(); ··· 1339 1335 sctp_v4_pf_exit(); 1340 1336 sctp_v6_pf_exit(); 1341 1337 sctp_sysctl_unregister(); 1342 - list_del(&sctp_af_inet.list); 1343 1338 free_pages((unsigned long)sctp_port_hashtable, 1344 1339 get_order(sctp_port_hashsize * 1345 1340 sizeof(struct sctp_bind_hashbucket))); ··· 1386 1383 sctp_v4_pf_exit(); 1387 1384 1388 1385 sctp_sysctl_unregister(); 1389 - list_del(&sctp_af_inet.list); 1390 1386 1391 1387 free_pages((unsigned long)sctp_assoc_hashtable, 1392 1388 get_order(sctp_assoc_hashsize *
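The sctp_ctl_sock_init() hunk adds a family fallback: prefer an IPv6 control socket, and if the kernel refuses (for instance when IPv6 was disabled at boot with the new module parameter), retry with IPv4 instead of failing outright. A userspace sketch of the same try-then-fall-back shape — open_ctl_socket() is an illustrative name, not a kernel interface:

```c
#include <assert.h>
#include <sys/socket.h>
#include <unistd.h>

/* Try AF_INET6 first, fall back to AF_INET on failure. */
static int open_ctl_socket(void)
{
    int fd = socket(AF_INET6, SOCK_DGRAM, 0);

    if (fd < 0)                     /* IPv6 unavailable: fall back */
        fd = socket(AF_INET, SOCK_DGRAM, 0);
    return fd;
}

/* returns 1 when one of the two families yielded a socket */
static int ctl_socket_ok(void)
{
    int fd = open_ctl_socket();

    if (fd < 0)
        return 0;
    close(fd);
    return 1;
}
```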
+33 -21
net/sctp/sm_sideeffect.c
··· 787 787 struct sctp_association *asoc, 788 788 struct sctp_chunk *chunk) 789 789 { 790 - struct sctp_operr_chunk *operr_chunk; 791 790 struct sctp_errhdr *err_hdr; 791 + struct sctp_ulpevent *ev; 792 792 793 - operr_chunk = (struct sctp_operr_chunk *)chunk->chunk_hdr; 794 - err_hdr = &operr_chunk->err_hdr; 793 + while (chunk->chunk_end > chunk->skb->data) { 794 + err_hdr = (struct sctp_errhdr *)(chunk->skb->data); 795 795 796 - switch (err_hdr->cause) { 797 - case SCTP_ERROR_UNKNOWN_CHUNK: 798 - { 799 - struct sctp_chunkhdr *unk_chunk_hdr; 796 + ev = sctp_ulpevent_make_remote_error(asoc, chunk, 0, 797 + GFP_ATOMIC); 798 + if (!ev) 799 + return; 800 800 801 - unk_chunk_hdr = (struct sctp_chunkhdr *)err_hdr->variable; 802 - switch (unk_chunk_hdr->type) { 803 - /* ADDIP 4.1 A9) If the peer responds to an ASCONF with an 804 - * ERROR chunk reporting that it did not recognized the ASCONF 805 - * chunk type, the sender of the ASCONF MUST NOT send any 806 - * further ASCONF chunks and MUST stop its T-4 timer. 807 - */ 808 - case SCTP_CID_ASCONF: 809 - asoc->peer.asconf_capable = 0; 810 - sctp_add_cmd_sf(cmds, SCTP_CMD_TIMER_STOP, 801 + sctp_ulpq_tail_event(&asoc->ulpq, ev); 802 + 803 + switch (err_hdr->cause) { 804 + case SCTP_ERROR_UNKNOWN_CHUNK: 805 + { 806 + sctp_chunkhdr_t *unk_chunk_hdr; 807 + 808 + unk_chunk_hdr = (sctp_chunkhdr_t *)err_hdr->variable; 809 + switch (unk_chunk_hdr->type) { 810 + /* ADDIP 4.1 A9) If the peer responds to an ASCONF with 811 + * an ERROR chunk reporting that it did not recognized 812 + * the ASCONF chunk type, the sender of the ASCONF MUST 813 + * NOT send any further ASCONF chunks and MUST stop its 814 + * T-4 timer. 
815 + */ 816 + case SCTP_CID_ASCONF: 817 + if (asoc->peer.asconf_capable == 0) 818 + break; 819 + 820 + asoc->peer.asconf_capable = 0; 821 + sctp_add_cmd_sf(cmds, SCTP_CMD_TIMER_STOP, 811 822 SCTP_TO(SCTP_EVENT_TIMEOUT_T4_RTO)); 823 + break; 824 + default: 825 + break; 826 + } 812 827 break; 828 + } 813 829 default: 814 830 break; 815 831 } 816 - break; 817 - } 818 - default: 819 - break; 820 832 } 821 833 } 822 834
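The sm_sideeffect.c rework turns a single-header lookup into a walk: an OPERR chunk may carry several error causes back to back, and the new loop visits every header between skb->data and chunk_end, raising an event for each. A simplified userspace sketch of that walk, using a native-endian (cause, total length) TLV as a stand-in for struct sctp_errhdr and ignoring the 4-byte padding rules:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

struct errhdr {
    uint16_t cause;
    uint16_t length;    /* total bytes, including this header */
};

static int count_errors(const uint8_t *data, const uint8_t *end)
{
    int n = 0;

    while (end > data) {            /* same bound as the kernel loop */
        struct errhdr eh;

        memcpy(&eh, data, sizeof(eh));
        n++;
        data += eh.length;          /* step to the next error cause */
    }
    return n;
}

/* build a buffer holding two error causes and walk it */
static int demo_count(void)
{
    uint8_t buf[12] = { 0 };
    struct errhdr e1 = { 1, 4 }, e2 = { 2, 8 };

    memcpy(buf, &e1, sizeof(e1));
    memcpy(buf + 4, &e2, sizeof(e2));
    return count_errors(buf, buf + sizeof(buf));
}
```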
+2 -14
net/sctp/sm_statefuns.c
··· 3163 3163 sctp_cmd_seq_t *commands) 3164 3164 { 3165 3165 struct sctp_chunk *chunk = arg; 3166 - struct sctp_ulpevent *ev; 3167 3166 3168 3167 if (!sctp_vtag_verify(chunk, asoc)) 3169 3168 return sctp_sf_pdiscard(ep, asoc, type, arg, commands); ··· 3172 3173 return sctp_sf_violation_chunklen(ep, asoc, type, arg, 3173 3174 commands); 3174 3175 3175 - while (chunk->chunk_end > chunk->skb->data) { 3176 - ev = sctp_ulpevent_make_remote_error(asoc, chunk, 0, 3177 - GFP_ATOMIC); 3178 - if (!ev) 3179 - goto nomem; 3176 + sctp_add_cmd_sf(commands, SCTP_CMD_PROCESS_OPERR, 3177 + SCTP_CHUNK(chunk)); 3180 3178 3181 - sctp_add_cmd_sf(commands, SCTP_CMD_EVENT_ULP, 3182 - SCTP_ULPEVENT(ev)); 3183 - sctp_add_cmd_sf(commands, SCTP_CMD_PROCESS_OPERR, 3184 - SCTP_CHUNK(chunk)); 3185 - } 3186 3179 return SCTP_DISPOSITION_CONSUME; 3187 - 3188 - nomem: 3189 - return SCTP_DISPOSITION_NOMEM; 3190 3180 } 3191 3181 3192 3182 /*
+2 -1
net/wireless/reg.c
··· 380 380 381 381 freq_diff = freq_range->end_freq_khz - freq_range->start_freq_khz; 382 382 383 - if (freq_diff <= 0 || freq_range->max_bandwidth_khz > freq_diff) 383 + if (freq_range->end_freq_khz <= freq_range->start_freq_khz || 384 + freq_range->max_bandwidth_khz > freq_diff) 384 385 return false; 385 386 386 387 return true;
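The reg.c change closes an unsigned-comparison hole: freq_diff is unsigned, so with an inverted range (end < start) the subtraction wraps to a huge positive number and `freq_diff <= 0` can only ever catch the exactly-equal case. Comparing the endpoints directly is the fix. A minimal stand-alone version of the corrected test:

```c
#include <assert.h>
#include <stdint.h>

/* Corrected range check: reject inverted ranges explicitly, since
 * (end - start) on unsigned types can never be negative. */
static int freq_range_valid(uint32_t start_khz, uint32_t end_khz,
                            uint32_t max_bw_khz)
{
    uint32_t freq_diff = end_khz - start_khz;

    if (end_khz <= start_khz || max_bw_khz > freq_diff)
        return 0;
    return 1;
}
```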
+8 -35
security/smack/smack_lsm.c
··· 1498 1498 * looks for host based access restrictions 1499 1499 * 1500 1500 * This version will only be appropriate for really small 1501 - * sets of single label hosts. Because of the masking 1502 - * it cannot shortcut out on the first match. There are 1503 - * numerious ways to address the problem, but none of them 1504 - * have been applied here. 1501 + * sets of single label hosts. 1505 1502 * 1506 1503 * Returns the label of the far end or NULL if it's not special. 1507 1504 */ 1508 1505 static char *smack_host_label(struct sockaddr_in *sip) 1509 1506 { 1510 1507 struct smk_netlbladdr *snp; 1511 - char *bestlabel = NULL; 1512 1508 struct in_addr *siap = &sip->sin_addr; 1513 - struct in_addr *liap; 1514 - struct in_addr *miap; 1515 - struct in_addr bestmask; 1516 1509 1517 1510 if (siap->s_addr == 0) 1518 1511 return NULL; 1519 1512 1520 - bestmask.s_addr = 0; 1521 - 1522 1513 for (snp = smack_netlbladdrs; snp != NULL; snp = snp->smk_next) { 1523 - liap = &snp->smk_host.sin_addr; 1524 - miap = &snp->smk_mask; 1525 1514 /* 1526 - * If the addresses match after applying the list entry mask 1527 - * the entry matches the address. If it doesn't move along to 1528 - * the next entry. 1515 + * we break after finding the first match because 1516 + * the list is sorted from longest to shortest mask 1517 + * so we have found the most specific match 1529 1518 */ 1530 - if ((liap->s_addr & miap->s_addr) != 1531 - (siap->s_addr & miap->s_addr)) 1532 - continue; 1533 - /* 1534 - * If the list entry mask identifies a single address 1535 - * it can't get any more specific. 1536 - */ 1537 - if (miap->s_addr == 0xffffffff) 1519 + if ((&snp->smk_host.sin_addr)->s_addr == 1520 + (siap->s_addr & (&snp->smk_mask)->s_addr)) { 1538 1521 return snp->smk_label; 1539 - /* 1540 - * If the list entry mask is less specific than the best 1541 - * already found this entry is uninteresting. 
1541 - * already found this entry is uninteresting.
1542 - */ 1543 - if ((miap->s_addr | bestmask.s_addr) == bestmask.s_addr) 1544 - continue; 1545 - /* 1546 - * This is better than any entry found so far. 1547 - */ 1548 - bestmask.s_addr = miap->s_addr; 1549 - bestlabel = snp->smk_label; 1522 + } 1550 1523 } 1551 1524 1552 - return bestlabel; 1525 + return NULL; 1553 1526 } 1554 1527 1555 1528 /**
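The smack_host_label() simplification works because the companion smackfs.c change keeps the netlabel address list sorted from most- to least-specific mask: the first matching entry is then automatically the longest-prefix match, so the walk can return immediately instead of tracking a best-so-far mask. A userspace sketch with host-order u32 addresses standing in for struct in_addr (entries stored pre-masked, as the kernel now does):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

struct host_entry {
    uint32_t host;      /* already masked */
    uint32_t mask;
    const char *label;
};

/* sorted longest mask first, as smk_netlbladdr_insert() guarantees */
static const struct host_entry demo_list[] = {
    { 0x0a000001u, 0xffffffffu, "single" },  /* 10.0.0.1/32 */
    { 0x0a000000u, 0xffffff00u, "subnet" },  /* 10.0.0.0/24 */
    { 0x00000000u, 0x00000000u, "default" }, /* 0.0.0.0/0   */
};

static const char *host_label(uint32_t addr)
{
    size_t i;

    for (i = 0; i < sizeof(demo_list) / sizeof(demo_list[0]); i++)
        if (demo_list[i].host == (addr & demo_list[i].mask))
            return demo_list[i].label;  /* first hit is most specific */
    return NULL;
}
```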
+49 -15
security/smack/smackfs.c
··· 650 650 651 651 return skp; 652 652 } 653 - /* 654 - #define BEMASK 0x80000000 655 - */ 656 - #define BEMASK 0x00000001 657 653 #define BEBITS (sizeof(__be32) * 8) 658 654 659 655 /* ··· 659 663 { 660 664 struct smk_netlbladdr *skp = (struct smk_netlbladdr *) v; 661 665 unsigned char *hp = (char *) &skp->smk_host.sin_addr.s_addr; 662 - __be32 bebits; 663 - int maskn = 0; 666 + int maskn; 667 + u32 temp_mask = be32_to_cpu(skp->smk_mask.s_addr); 664 668 665 - for (bebits = BEMASK; bebits != 0; maskn++, bebits <<= 1) 666 - if ((skp->smk_mask.s_addr & bebits) == 0) 667 - break; 669 + for (maskn = 0; temp_mask; temp_mask <<= 1, maskn++); 668 670 669 671 seq_printf(s, "%u.%u.%u.%u/%d %s\n", 670 672 hp[0], hp[1], hp[2], hp[3], maskn, skp->smk_label); ··· 696 702 } 697 703 698 704 /** 705 + * smk_netlbladdr_insert 706 + * @new : netlabel to insert 707 + * 708 + * This helper insert netlabel in the smack_netlbladdrs list 709 + * sorted by netmask length (longest to smallest) 710 + */ 711 + static void smk_netlbladdr_insert(struct smk_netlbladdr *new) 712 + { 713 + struct smk_netlbladdr *m; 714 + 715 + if (smack_netlbladdrs == NULL) { 716 + smack_netlbladdrs = new; 717 + return; 718 + } 719 + 720 + /* the comparison '>' is a bit hacky, but works */ 721 + if (new->smk_mask.s_addr > smack_netlbladdrs->smk_mask.s_addr) { 722 + new->smk_next = smack_netlbladdrs; 723 + smack_netlbladdrs = new; 724 + return; 725 + } 726 + for (m = smack_netlbladdrs; m != NULL; m = m->smk_next) { 727 + if (m->smk_next == NULL) { 728 + m->smk_next = new; 729 + return; 730 + } 731 + if (new->smk_mask.s_addr > m->smk_next->smk_mask.s_addr) { 732 + new->smk_next = m->smk_next; 733 + m->smk_next = new; 734 + return; 735 + } 736 + } 737 + } 738 + 739 + 740 + /** 699 741 * smk_write_netlbladdr - write() for /smack/netlabel 700 742 * @filp: file pointer, not actually used 701 743 * @buf: where to get the data from ··· 754 724 struct netlbl_audit audit_info; 755 725 struct in_addr mask; 756 726 unsigned 
int m; 757 - __be32 bebits = BEMASK; 727 + u32 mask_bits = (1<<31); 758 728 __be32 nsa; 729 + u32 temp_mask; 759 730 760 731 /* 761 732 * Must have privilege. ··· 792 761 if (sp == NULL) 793 762 return -EINVAL; 794 763 795 - for (mask.s_addr = 0; m > 0; m--) { 796 - mask.s_addr |= bebits; 797 - bebits <<= 1; 764 + for (temp_mask = 0; m > 0; m--) { 765 + temp_mask |= mask_bits; 766 + mask_bits >>= 1; 798 767 } 768 + mask.s_addr = cpu_to_be32(temp_mask); 769 + 770 + newname.sin_addr.s_addr &= mask.s_addr; 799 771 /* 800 772 * Only allow one writer at a time. Writes should be 801 773 * quite rare and small in any case. ··· 806 772 mutex_lock(&smk_netlbladdr_lock); 807 773 808 774 nsa = newname.sin_addr.s_addr; 775 + /* try to find if the prefix is already in the list */ 809 776 for (skp = smack_netlbladdrs; skp != NULL; skp = skp->smk_next) 810 777 if (skp->smk_host.sin_addr.s_addr == nsa && 811 778 skp->smk_mask.s_addr == mask.s_addr) ··· 822 787 rc = 0; 823 788 skp->smk_host.sin_addr.s_addr = newname.sin_addr.s_addr; 824 789 skp->smk_mask.s_addr = mask.s_addr; 825 - skp->smk_next = smack_netlbladdrs; 826 790 skp->smk_label = sp; 827 - smack_netlbladdrs = skp; 791 + smk_netlbladdr_insert(skp); 828 792 } 829 793 } else { 830 794 rc = netlbl_cfg_unlbl_static_del(&init_net, NULL,
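The smackfs.c hunks replace the old BEMASK trick, which shifted a big-endian value directly and so only worked for one byte order, with the portable scheme: build the netmask from the prefix length in host byte order, convert once with cpu_to_be32(), and recover the prefix length by converting back and counting leading one bits, as the new netlbladdr_seq_show() does. A userspace sketch of both directions for contiguous masks (htonl/ntohl stand in for cpu_to_be32/be32_to_cpu):

```c
#include <assert.h>
#include <stdint.h>
#include <arpa/inet.h>

/* prefix length (0..32) -> netmask in network byte order */
static uint32_t prefix_to_mask_be(int m)
{
    uint32_t temp_mask = 0;
    uint32_t mask_bits = 1u << 31;

    for (; m > 0; m--) {
        temp_mask |= mask_bits;
        mask_bits >>= 1;
    }
    return htonl(temp_mask);        /* stored in network byte order */
}

/* contiguous netmask in network byte order -> prefix length */
static int mask_be_to_prefix(uint32_t mask_be)
{
    uint32_t temp_mask = ntohl(mask_be);
    int maskn = 0;

    for (; temp_mask; temp_mask <<= 1)
        maskn++;
    return maskn;
}
```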
+4 -2
sound/pci/hda/patch_sigmatel.c
··· 1207 1207 "LFE Playback Volume", 1208 1208 "Side Playback Volume", 1209 1209 "Headphone Playback Volume", 1210 - "Headphone Playback Volume", 1210 + "Headphone2 Playback Volume", 1211 1211 "Speaker Playback Volume", 1212 1212 "External Speaker Playback Volume", 1213 1213 "Speaker2 Playback Volume", ··· 1221 1221 "LFE Playback Switch", 1222 1222 "Side Playback Switch", 1223 1223 "Headphone Playback Switch", 1224 - "Headphone Playback Switch", 1224 + "Headphone2 Playback Switch", 1225 1225 "Speaker Playback Switch", 1226 1226 "External Speaker Playback Switch", 1227 1227 "Speaker2 Playback Switch", ··· 3516 3516 if (! spec->autocfg.line_outs) 3517 3517 return 0; /* can't find valid pin config */ 3518 3518 3519 + #if 0 /* FIXME: temporarily disabled */ 3519 3520 /* If we have no real line-out pin and multiple hp-outs, HPs should 3520 3521 * be set up as multi-channel outputs. 3521 3522 */ ··· 3536 3535 spec->autocfg.line_out_type = AUTO_PIN_HP_OUT; 3537 3536 spec->autocfg.hp_outs = 0; 3538 3537 } 3538 + #endif /* FIXME: temporarily disabled */ 3539 3539 if (spec->autocfg.mono_out_pin) { 3540 3540 int dir = get_wcaps(codec, spec->autocfg.mono_out_pin) & 3541 3541 (AC_WCAP_OUT_AMP | AC_WCAP_IN_AMP);