Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge phase #1 of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip

This merges phase 1 of the x86 tree, which is a collection of branches:

x86/alternatives, x86/cleanups, x86/commandline, x86/crashdump,
x86/debug, x86/defconfig, x86/doc, x86/exports, x86/fpu, x86/gart,
x86/idle, x86/mm, x86/mtrr, x86/nmi-watchdog, x86/oprofile,
x86/paravirt, x86/reboot, x86/sparse-fixes, x86/tsc, x86/urgent and
x86/vmalloc

and as Ingo says: "these are the easiest, purely independent x86 topics
with no conflicts, in one nice Octopus merge".

* 'x86-v28-for-linus-phase1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (147 commits)
x86: mtrr_cleanup: treat WRPROT as UNCACHEABLE
x86: mtrr_cleanup: first 1M may be covered in var mtrrs
x86: mtrr_cleanup: print out correct type v2
x86: trivial printk fix in efi.c
x86, debug: mtrr_cleanup print out var mtrr before change it
x86: mtrr_cleanup try gran_size to less than 1M, v3
x86: mtrr_cleanup try gran_size to less than 1M, cleanup
x86: change MTRR_SANITIZER to def_bool y
x86, debug printouts: IOMMU setup failures should not be KERN_ERR
x86: export set_memory_ro and set_memory_rw
x86: mtrr_cleanup try gran_size to less than 1M
x86: mtrr_cleanup prepare to make gran_size to less 1M
x86: mtrr_cleanup safe to get more spare regs now
x86_64: be less annoying on boot, v2
x86: mtrr_cleanup hole size should be less than half of chunk_size, v2
x86: add mtrr_cleanup_debug command line
x86: mtrr_cleanup optimization, v2
x86: don't need to go to chunksize to 4G
x86_64: be less annoying on boot
x86, olpc: fix endian bug in openfirmware workaround
...

+3925 -2276
-2
Documentation/00-INDEX
@@
   - how to execute Mono-based .NET binaries with the help of BINFMT_MISC.
 moxa-smartio
   - file with info on installing/using Moxa multiport serial driver.
-mtrr.txt
-  - how to use PPro Memory Type Range Registers to increase performance.
 mutex-design.txt
   - info on the generic mutex subsystem.
 namespaces/
+6 -6
Documentation/kernel-parameters.txt
@@
 			Range: 0 - 8192
 			Default: 64
 
-	disable_8254_timer
-	enable_8254_timer
-			[IA32/X86_64] Disable/Enable interrupt 0 timer routing
-			over the 8254 in addition to over the IO-APIC. The
-			kernel tries to set a sensible default.
-
 	hpet=		[X86-32,HPET] option to control HPET usage
 			Format: { enable (default) | disable | force }
 			disable: disable HPET and use PIT instead
@@
 
 	shapers=	[NET]
 			Maximal number of shapers.
+
+	show_msr=	[x86] show boot-time MSR settings
+			Format: { <integer> }
+			Show boot-time (BIOS-initialized) MSR settings.
+			The parameter means the number of CPUs to show,
+			for example 1 means boot CPU only.
 
 	sim710=		[SCSI,HW]
 			See header of drivers/scsi/sim710.c.
+2 -2
Documentation/mtrr.txt → Documentation/x86/mtrr.txt
@@
 The AMD K6-2 (stepping 8 and above) and K6-3 processors have two
 MTRRs. These are supported. The AMD Athlon family provide 8 Intel
 style MTRRs.
-  
+
 The Centaur C6 (WinChip) has 8 MCRs, allowing write-combining. These
 are supported.
@@
 reg01: base=0xfb000000 (4016MB), size=  16MB: write-combining, count=1
 reg02: base=0xfb000000 (4016MB), size=   4kB: uncachable, count=1
 
-Some cards (especially Voodoo Graphics boards) need this 4 kB area 
+Some cards (especially Voodoo Graphics boards) need this 4 kB area
 excluded from the beginning of the region because it is used for
 registers.
+4
Documentation/x86/00-INDEX
@@
+00-INDEX
+- this file
+mtrr.txt
+- how to use x86 Memory Type Range Registers to increase performance
+1 -1
Documentation/x86/i386/boot.txt → Documentation/x86/boot.txt
@@
 
 Field name:	start_sys
 Type:		read
-Offset/size:	0x20c/4
+Offset/size:	0x20c/2
 Protocol:	2.00+
 
   The load low segment (0x1000). Obsolete.
Documentation/x86/i386/usb-legacy-support.txt → Documentation/x86/usb-legacy-support.txt
Documentation/x86/i386/zero-page.txt → Documentation/x86/zero-page.txt
+45 -9
Documentation/x86/pat.txt
@@
 ones that will be supported at this time are Write-back, Uncached,
 Write-combined and Uncached Minus.
 
+
+PAT APIs
+--------
+
 There are many different APIs in the kernel that allows setting of memory
 attributes at the page level. In order to avoid aliasing, these interfaces
 should be used thoughtfully. Below is a table of interfaces available,
@@
 API                    |    RAM   |  ACPI,...  |  Reserved/Holes  |
 -----------------------|----------|------------|------------------|
                        |          |            |                  |
-ioremap                |    --    |    UC      |       UC         |
+ioremap                |    --    |    UC-     |       UC-        |
                        |          |            |                  |
 ioremap_cache          |    --    |    WB      |       WB         |
                        |          |            |                  |
-ioremap_nocache        |    --    |    UC      |       UC         |
+ioremap_nocache        |    --    |    UC-     |       UC-        |
                        |          |            |                  |
 ioremap_wc             |    --    |    --      |       WC         |
                        |          |            |                  |
-set_memory_uc          |    UC    |    --      |       --         |
+set_memory_uc          |    UC-   |    --      |       --         |
  set_memory_wb         |          |            |                  |
                        |          |            |                  |
 set_memory_wc          |    WC    |    --      |       --         |
  set_memory_wb         |          |            |                  |
                        |          |            |                  |
-pci sysfs resource     |    --    |    --      |       UC         |
+pci sysfs resource     |    --    |    --      |       UC-        |
                        |          |            |                  |
 pci sysfs resource_wc  |    --    |    --      |       WC         |
  is IORESOURCE_PREFETCH|          |            |                  |
                        |          |            |                  |
-pci proc               |    --    |    --      |       UC         |
+pci proc               |    --    |    --      |       UC-        |
  !PCIIOC_WRITE_COMBINE |          |            |                  |
                        |          |            |                  |
 pci proc               |    --    |    --      |       WC         |
  PCIIOC_WRITE_COMBINE  |          |            |                  |
                        |          |            |                  |
-/dev/mem               |    --    |    UC      |       UC         |
+/dev/mem               |    --    | WB/WC/UC-  |    WB/WC/UC-     |
  read-write            |          |            |                  |
                        |          |            |                  |
-/dev/mem               |    --    |    UC      |       UC         |
+/dev/mem               |    --    |    UC-     |       UC-        |
  mmap SYNC flag        |          |            |                  |
                        |          |            |                  |
-/dev/mem               |    --    |  WB/WC/UC  |    WB/WC/UC      |
+/dev/mem               |    --    | WB/WC/UC-  |    WB/WC/UC-     |
  mmap !SYNC flag       |          |(from exist-|  (from exist-    |
  and                   |          | ing alias) |   ing alias)     |
  any alias to this area|          |            |                  |
@@
  and                   |          |            |                  |
  MTRR says WB          |          |            |                  |
                        |          |            |                  |
-/dev/mem               |    --    |    --      |    UC_MINUS      |
+/dev/mem               |    --    |    --      |       UC-        |
  mmap !SYNC flag       |          |            |                  |
  no alias to this area |          |            |                  |
  and                   |          |            |                  |
@@
 types.
 
 Drivers should use set_memory_[uc|wc] to set access type for RAM ranges.
+
+
+PAT debugging
+-------------
+
+With CONFIG_DEBUG_FS enabled, PAT memtype list can be examined by
+
+# mount -t debugfs debugfs /sys/kernel/debug
+# cat /sys/kernel/debug/x86/pat_memtype_list
+PAT memtype list:
+uncached-minus @ 0x7fadf000-0x7fae0000
+uncached-minus @ 0x7fb19000-0x7fb1a000
+uncached-minus @ 0x7fb1a000-0x7fb1b000
+uncached-minus @ 0x7fb1b000-0x7fb1c000
+uncached-minus @ 0x7fb1c000-0x7fb1d000
+uncached-minus @ 0x7fb1d000-0x7fb1e000
+uncached-minus @ 0x7fb1e000-0x7fb25000
+uncached-minus @ 0x7fb25000-0x7fb26000
+uncached-minus @ 0x7fb26000-0x7fb27000
+uncached-minus @ 0x7fb27000-0x7fb28000
+uncached-minus @ 0x7fb28000-0x7fb2e000
+uncached-minus @ 0x7fb2e000-0x7fb2f000
+uncached-minus @ 0x7fb2f000-0x7fb30000
+uncached-minus @ 0x7fb31000-0x7fb32000
+uncached-minus @ 0x80000000-0x90000000
+
+This list shows physical address ranges and various PAT settings used to
+access those physical address ranges.
+
+Another, more verbose way of getting PAT related debug messages is with
+"debugpat" boot parameter. With this parameter, various debug messages are
+printed to dmesg log.
-4
Documentation/x86/x86_64/boot-options.txt
@@
 	apicmaintimer. Useful when your PIT timer is totally
 	broken.
 
-   disable_8254_timer / enable_8254_timer
-	Enable interrupt 0 timer routing over the 8254 in addition to over
-	the IO-APIC. The kernel tries to set a sensible default.
-
 Early Console
 
 syntax: earlyprintk=vga
+60 -15
arch/x86/Kconfig
@@
 	select HAVE_FTRACE
 	select HAVE_KVM if ((X86_32 && !X86_VOYAGER && !X86_VISWS && !X86_NUMAQ) || X86_64)
 	select HAVE_ARCH_KGDB if !X86_VOYAGER
+	select HAVE_ARCH_TRACEHOOK
 	select HAVE_GENERIC_DMA_COHERENT if X86_32
 	select HAVE_EFFICIENT_UNALIGNED_ACCESS
@@
 
 config ARCH_FLATMEM_ENABLE
 	def_bool y
-	depends on X86_32 && ARCH_SELECT_MEMORY_MODEL && X86_PC && !NUMA
+	depends on X86_32 && ARCH_SELECT_MEMORY_MODEL && !NUMA
 
 config ARCH_DISCONTIGMEM_ENABLE
 	def_bool y
@@
 
 config ARCH_SPARSEMEM_ENABLE
 	def_bool y
-	depends on X86_64 || NUMA || (EXPERIMENTAL && X86_PC)
+	depends on X86_64 || NUMA || (EXPERIMENTAL && X86_PC) || X86_GENERICARCH
 	select SPARSEMEM_STATIC if X86_32
 	select SPARSEMEM_VMEMMAP_ENABLE if X86_64
@@
 	  You can safely say Y even if your machine doesn't have MTRRs, you'll
 	  just add about 9 KB to your kernel.
 
-	  See <file:Documentation/mtrr.txt> for more information.
+	  See <file:Documentation/x86/mtrr.txt> for more information.
 
 config MTRR_SANITIZER
-	bool
+	def_bool y
 	prompt "MTRR cleanup support"
 	depends on MTRR
 	help
@@
 	  The largest mtrr entry size for a continous block can be set with
 	  mtrr_chunk_size.
 
-	  If unsure, say N.
+	  If unsure, say Y.
 
 config MTRR_SANITIZER_ENABLE_DEFAULT
 	int "MTRR cleanup enable value (0-1)"
@@
 config SECCOMP
 	def_bool y
 	prompt "Enable seccomp to safely compute untrusted bytecode"
-	depends on PROC_FS
 	help
 	  This kernel feature is useful for number crunching applications
 	  that may need to compute untrusted bytecode during their
@@
 	  the process as file descriptors supporting the read/write
 	  syscalls, it's possible to isolate those applications in
 	  their own address space using seccomp. Once seccomp is
-	  enabled via /proc/<pid>/seccomp, it cannot be disabled
+	  enabled via prctl(PR_SET_SECCOMP), it cannot be disabled
 	  and the task is only allowed to execute a few safe syscalls
 	  defined by each seccomp mode.
@@
 	  Don't change this unless you know what you are doing.
 
 config HOTPLUG_CPU
-	bool "Support for suspend on SMP and hot-pluggable CPUs (EXPERIMENTAL)"
-	depends on SMP && HOTPLUG && EXPERIMENTAL && !X86_VOYAGER
+	bool "Support for hot-pluggable CPUs"
+	depends on SMP && HOTPLUG && !X86_VOYAGER
 	---help---
-	  Say Y here to experiment with turning CPUs off and on, and to
-	  enable suspend on SMP systems. CPUs can be controlled through
-	  /sys/devices/system/cpu.
-	  Say N if you want to disable CPU hotplug and don't need to
-	  suspend.
+	  Say Y here to allow turning CPUs off and on. CPUs can be
+	  controlled through /sys/devices/system/cpu.
+	  ( Note: power management support will enable this option
+	    automatically on SMP systems. )
+	  Say N if you want to disable CPU hotplug.
 
 config COMPAT_VDSO
 	def_bool y
@@
 	  VDSO mapping and to exclusively use the randomized VDSO.
 
 	  If unsure, say Y.
+
+config CMDLINE_BOOL
+	bool "Built-in kernel command line"
+	default n
+	help
+	  Allow for specifying boot arguments to the kernel at
+	  build time.  On some systems (e.g. embedded ones), it is
+	  necessary or convenient to provide some or all of the
+	  kernel boot arguments with the kernel itself (that is,
+	  to not rely on the boot loader to provide them.)
+
+	  To compile command line arguments into the kernel,
+	  set this option to 'Y', then fill in the
+	  the boot arguments in CONFIG_CMDLINE.
+
+	  Systems with fully functional boot loaders (i.e. non-embedded)
+	  should leave this option set to 'N'.
+
+config CMDLINE
+	string "Built-in kernel command string"
+	depends on CMDLINE_BOOL
+	default ""
+	help
+	  Enter arguments here that should be compiled into the kernel
+	  image and used at boot time.  If the boot loader provides a
+	  command line at boot time, it is appended to this string to
+	  form the full kernel command line, when the system boots.
+
+	  However, you can use the CONFIG_CMDLINE_OVERRIDE option to
+	  change this behavior.
+
+	  In most cases, the command line (whether built-in or provided
+	  by the boot loader) should specify the device for the root
+	  file system.
+
+config CMDLINE_OVERRIDE
+	bool "Built-in command line overrides boot loader arguments"
+	default n
+	depends on CMDLINE_BOOL
+	help
+	  Set this option to 'Y' to have the kernel ignore the boot loader
+	  command line, and use ONLY the built-in command line.
+
+	  This is used to work around broken boot loaders.  This should
+	  be set to 'N' under normal conditions.
 
 endmenu
@@
 
 config SYSVIPC_COMPAT
 	def_bool y
-	depends on X86_64 && COMPAT && SYSVIPC
+	depends on COMPAT && SYSVIPC
 
 endmenu
+18
arch/x86/Kconfig.cpu
@@
 config X86_DEBUGCTLMSR
 	def_bool y
 	depends on !(MK6 || MWINCHIPC6 || MWINCHIP2 || MWINCHIP3D || MCYRIXIII || M586MMX || M586TSC || M586 || M486 || M386)
+
+config X86_DS
+	bool "Debug Store support"
+	default y
+	help
+	  Add support for Debug Store.
+	  This allows the kernel to provide a memory buffer to the hardware
+	  to store various profiling and tracing events.
+
+config X86_PTRACE_BTS
+	bool "ptrace interface to Branch Trace Store"
+	default y
+	depends on (X86_DS && X86_DEBUGCTLMSR)
+	help
+	  Add a ptrace interface to allow collecting an execution trace
+	  of the traced task.
+	  This collects control flow changes in a (cyclic) buffer and allows
+	  debuggers to fill in the gaps and show an execution trace of the debuggee.
+3 -2
arch/x86/boot/compressed/head_32.S
@@
  */
 	movl output_len(%ebx), %eax
 	pushl %eax
+			# push arguments for decompress_kernel:
 	pushl %ebp	# output address
 	movl input_len(%ebx), %eax
 	pushl %eax	# input_len
 	leal input_data(%ebx), %eax
 	pushl %eax	# input_data
 	leal boot_heap(%ebx), %eax
-	pushl %eax	# heap area as third argument
-	pushl %esi	# real mode pointer as second arg
+	pushl %eax	# heap area
+	pushl %esi	# real mode pointer
 	call decompress_kernel
 	addl $20, %esp
 	popl %ecx
+7 -5
arch/x86/boot/compressed/misc.c
@@
  */
 #undef CONFIG_PARAVIRT
 #ifdef CONFIG_X86_32
-#define _ASM_DESC_H_ 1
+#define ASM_X86__DESC_H 1
 #endif
 
 #ifdef CONFIG_X86_64
@@
 #include <linux/linkage.h>
 #include <linux/screen_info.h>
 #include <linux/elf.h>
-#include <asm/io.h>
+#include <linux/io.h>
 #include <asm/page.h>
 #include <asm/boot.h>
 #include <asm/bootparam.h>
@@
 			y--;
 		}
 	} else {
-		vidmem [(x + cols * y) * 2] = c;
+		vidmem[(x + cols * y) * 2] = c;
 		if (++x >= cols) {
 			x = 0;
 			if (++y >= lines) {
@@
 	int i;
 	char *ss = s;
 
-	for (i = 0; i < n; i++) ss[i] = c;
+	for (i = 0; i < n; i++)
+		ss[i] = c;
 	return s;
 }
@@
 	const char *s = src;
 	char *d = dest;
 
-	for (i = 0; i < n; i++) d[i] = s[i];
+	for (i = 0; i < n; i++)
+		d[i] = s[i];
 	return dest;
 }
-1
arch/x86/boot/header.S
@@
 SYSSIZE = DEF_SYSSIZE		/* system size: # of 16-byte clicks */
 				/* to be loaded */
 ROOT_DEV = 0			/* ROOT_DEV is now written by "build" */
-SWAP_DEV = 0			/* SWAP_DEV is now written by "build" */
 
 #ifndef SVGA_MODE
 #define SVGA_MODE ASK_VGA
+11 -8
arch/x86/configs/i386_defconfig
@@
 #
 # Automatically generated make config: don't edit
-# Linux kernel version: 2.6.27-rc4
-# Mon Aug 25 15:04:00 2008
+# Linux kernel version: 2.6.27-rc5
+# Wed Sep 3 17:23:09 2008
 #
 # CONFIG_64BIT is not set
 CONFIG_X86_32=y
@@
 # CONFIG_M586 is not set
 # CONFIG_M586TSC is not set
 # CONFIG_M586MMX is not set
-# CONFIG_M686 is not set
+CONFIG_M686=y
 # CONFIG_MPENTIUMII is not set
 # CONFIG_MPENTIUMIII is not set
 # CONFIG_MPENTIUMM is not set
@@
 # CONFIG_MVIAC3_2 is not set
 # CONFIG_MVIAC7 is not set
 # CONFIG_MPSC is not set
-CONFIG_MCORE2=y
+# CONFIG_MCORE2 is not set
 # CONFIG_GENERIC_CPU is not set
 CONFIG_X86_GENERIC=y
 CONFIG_X86_CPU=y
 CONFIG_X86_CMPXCHG=y
 CONFIG_X86_L1_CACHE_SHIFT=7
 CONFIG_X86_XADD=y
+# CONFIG_X86_PPRO_FENCE is not set
 CONFIG_X86_WP_WORKS_OK=y
 CONFIG_X86_INVLPG=y
 CONFIG_X86_BSWAP=y
@@
 CONFIG_X86_INTEL_USERCOPY=y
 CONFIG_X86_USE_PPRO_CHECKSUM=y
 CONFIG_X86_TSC=y
+CONFIG_X86_CMOV=y
 CONFIG_X86_MINIMUM_CPU_FAMILY=4
 CONFIG_X86_DEBUGCTLMSR=y
 CONFIG_HPET_TIMER=y
 CONFIG_HPET_EMULATE_RTC=y
 CONFIG_DMI=y
 # CONFIG_IOMMU_HELPER is not set
-CONFIG_NR_CPUS=4
-# CONFIG_SCHED_SMT is not set
+CONFIG_NR_CPUS=64
+CONFIG_SCHED_SMT=y
 CONFIG_SCHED_MC=y
 # CONFIG_PREEMPT_NONE is not set
 CONFIG_PREEMPT_VOLUNTARY=y
@@
 # CONFIG_TOSHIBA is not set
 # CONFIG_I8K is not set
 CONFIG_X86_REBOOTFIXUPS=y
-# CONFIG_MICROCODE is not set
+CONFIG_MICROCODE=y
+CONFIG_MICROCODE_OLD_INTERFACE=y
 CONFIG_X86_MSR=y
 CONFIG_X86_CPUID=y
 # CONFIG_NOHIGHMEM is not set
@@
 CONFIG_DEFAULT_IO_DELAY_TYPE=0
 CONFIG_DEBUG_BOOT_PARAMS=y
 # CONFIG_CPA_DEBUG is not set
-# CONFIG_OPTIMIZE_INLINING is not set
+CONFIG_OPTIMIZE_INLINING=y
 
 #
 # Security options
+13 -16
arch/x86/configs/x86_64_defconfig
@@
 #
 # Automatically generated make config: don't edit
-# Linux kernel version: 2.6.27-rc4
-# Mon Aug 25 14:40:46 2008
+# Linux kernel version: 2.6.27-rc5
+# Wed Sep 3 17:13:39 2008
 #
 CONFIG_64BIT=y
 # CONFIG_X86_32 is not set
@@
 # CONFIG_MVIAC3_2 is not set
 # CONFIG_MVIAC7 is not set
 # CONFIG_MPSC is not set
-CONFIG_MCORE2=y
-# CONFIG_GENERIC_CPU is not set
+# CONFIG_MCORE2 is not set
+CONFIG_GENERIC_CPU=y
 CONFIG_X86_CPU=y
-CONFIG_X86_L1_CACHE_BYTES=64
-CONFIG_X86_INTERNODE_CACHE_BYTES=64
+CONFIG_X86_L1_CACHE_BYTES=128
+CONFIG_X86_INTERNODE_CACHE_BYTES=128
 CONFIG_X86_CMPXCHG=y
-CONFIG_X86_L1_CACHE_SHIFT=6
+CONFIG_X86_L1_CACHE_SHIFT=7
 CONFIG_X86_WP_WORKS_OK=y
-CONFIG_X86_INTEL_USERCOPY=y
-CONFIG_X86_USE_PPRO_CHECKSUM=y
-CONFIG_X86_P6_NOP=y
 CONFIG_X86_TSC=y
 CONFIG_X86_CMPXCHG64=y
 CONFIG_X86_CMOV=y
@@
 CONFIG_AMD_IOMMU=y
 CONFIG_SWIOTLB=y
 CONFIG_IOMMU_HELPER=y
-# CONFIG_MAXSMP is not set
-CONFIG_NR_CPUS=4
-# CONFIG_SCHED_SMT is not set
+CONFIG_NR_CPUS=64
+CONFIG_SCHED_SMT=y
 CONFIG_SCHED_MC=y
 # CONFIG_PREEMPT_NONE is not set
 CONFIG_PREEMPT_VOLUNTARY=y
@@
 CONFIG_X86_IO_APIC=y
 # CONFIG_X86_MCE is not set
 # CONFIG_I8K is not set
-# CONFIG_MICROCODE is not set
+CONFIG_MICROCODE=y
+CONFIG_MICROCODE_OLD_INTERFACE=y
 CONFIG_X86_MSR=y
 CONFIG_X86_CPUID=y
 CONFIG_NUMA=y
@@
 CONFIG_VIRT_TO_BUS=y
 CONFIG_MTRR=y
 # CONFIG_MTRR_SANITIZER is not set
-# CONFIG_X86_PAT is not set
+CONFIG_X86_PAT=y
 CONFIG_EFI=y
 CONFIG_SECCOMP=y
 # CONFIG_HZ_100 is not set
@@
 CONFIG_DEFAULT_IO_DELAY_TYPE=0
 CONFIG_DEBUG_BOOT_PARAMS=y
 # CONFIG_CPA_DEBUG is not set
-# CONFIG_OPTIMIZE_INLINING is not set
+CONFIG_OPTIMIZE_INLINING=y
 
 #
 # Security options
+7 -4
arch/x86/ia32/ia32_aout.c
@@
 	dump->regs.ax = regs->ax;
 	dump->regs.ds = current->thread.ds;
 	dump->regs.es = current->thread.es;
-	asm("movl %%fs,%0" : "=r" (fs)); dump->regs.fs = fs;
-	asm("movl %%gs,%0" : "=r" (gs)); dump->regs.gs = gs;
+	savesegment(fs, fs);
+	dump->regs.fs = fs;
+	savesegment(gs, gs);
+	dump->regs.gs = gs;
 	dump->regs.orig_ax = regs->orig_ax;
 	dump->regs.ip = regs->ip;
 	dump->regs.cs = regs->cs;
@@
 	current->mm->start_stack =
 		(unsigned long)create_aout_tables((char __user *)bprm->p, bprm);
 	/* start thread */
-	asm volatile("movl %0,%%fs" :: "r" (0)); \
-	asm volatile("movl %0,%%es; movl %0,%%ds": :"r" (__USER32_DS));
+	loadsegment(fs, 0);
+	loadsegment(ds, __USER32_DS);
+	loadsegment(es, __USER32_DS);
 	load_gs_index(0);
 	(regs)->ip = ex.a_entry;
 	(regs)->sp = current->mm->start_stack;
+10 -11
arch/x86/ia32/ia32_signal.c
@@
 { unsigned int cur;						\
   unsigned short pre;						\
   err |= __get_user(pre, &sc->seg);				\
-  asm volatile("movl %%" #seg ",%0" : "=r" (cur));		\
+  savesegment(seg, cur);					\
   pre |= mask;							\
   if (pre != cur) loadsegment(seg, pre); }
@@
 	 */
 	err |= __get_user(gs, &sc->gs);
 	gs |= 3;
-	asm("movl %%gs,%0" : "=r" (oldgs));
+	savesegment(gs, oldgs);
 	if (gs != oldgs)
 		load_gs_index(gs);
@@
 {
 	int tmp, err = 0;
 
-	tmp = 0;
-	__asm__("movl %%gs,%0" : "=r"(tmp): "0"(tmp));
+	savesegment(gs, tmp);
 	err |= __put_user(tmp, (unsigned int __user *)&sc->gs);
-	__asm__("movl %%fs,%0" : "=r"(tmp): "0"(tmp));
+	savesegment(fs, tmp);
 	err |= __put_user(tmp, (unsigned int __user *)&sc->fs);
-	__asm__("movl %%ds,%0" : "=r"(tmp): "0"(tmp));
+	savesegment(ds, tmp);
 	err |= __put_user(tmp, (unsigned int __user *)&sc->ds);
-	__asm__("movl %%es,%0" : "=r"(tmp): "0"(tmp));
+	savesegment(es, tmp);
 	err |= __put_user(tmp, (unsigned int __user *)&sc->es);
 
 	err |= __put_user((u32)regs->di, &sc->di);
@@
 	regs->dx = 0;
 	regs->cx = 0;
 
-	asm volatile("movl %0,%%ds" :: "r" (__USER32_DS));
-	asm volatile("movl %0,%%es" :: "r" (__USER32_DS));
+	loadsegment(ds, __USER32_DS);
+	loadsegment(es, __USER32_DS);
 
 	regs->cs = __USER32_CS;
 	regs->ss = __USER32_DS;
@@
 	regs->dx = (unsigned long) &frame->info;
 	regs->cx = (unsigned long) &frame->uc;
 
-	asm volatile("movl %0,%%ds" :: "r" (__USER32_DS));
-	asm volatile("movl %0,%%es" :: "r" (__USER32_DS));
+	loadsegment(ds, __USER32_DS);
+	loadsegment(es, __USER32_DS);
 
 	regs->cs = __USER32_CS;
 	regs->ss = __USER32_DS;
-9
arch/x86/ia32/sys_ia32.c
@@
 	return ret;
 }
 
-/* These are here just in case some old ia32 binary calls it. */
-asmlinkage long sys32_pause(void)
-{
-	current->state = TASK_INTERRUPTIBLE;
-	schedule();
-	return -ERESTARTNOHAND;
-}
-
-
 #ifdef CONFIG_SYSCTL_SYSCALL
 struct sysctl_ia32 {
 	unsigned int	name;
+2 -3
arch/x86/kernel/acpi/boot.c
@@
 #ifdef CONFIG_X86_64
 
 #include <asm/proto.h>
-#include <asm/genapic.h>
 
 #else			/* X86 */
@@
 #ifndef __HAVE_ARCH_CMPXCHG
 #warning ACPI uses CMPXCHG, i486 and later hardware
 #endif
-
-static int acpi_mcfg_64bit_base_addr __initdata = FALSE;
 
 /* --------------------------------------------------------------------------
                               Boot-time Configuration
@@
 /* The physical address of the MMCONFIG aperture.  Set from ACPI tables. */
 struct acpi_mcfg_allocation *pci_mmcfg_config;
 int pci_mmcfg_config_num;
+
+static int acpi_mcfg_64bit_base_addr __initdata = FALSE;
 
 static int __init acpi_mcfg_oem_check(struct acpi_table_mcfg *mcfg)
 {
+4 -4
arch/x86/kernel/alternative.c
@@
 			continue;
 		if (*ptr > text_end)
 			continue;
-		text_poke(*ptr, ((unsigned char []){0xf0}), 1); /* add lock prefix */
+		/* turn DS segment override prefix into lock prefix */
+		text_poke(*ptr, ((unsigned char []){0xf0}), 1);
 	};
 }
 
 static void alternatives_smp_unlock(u8 **start, u8 **end, u8 *text, u8 *text_end)
 {
 	u8 **ptr;
-	char insn[1];
 
 	if (noreplace_smp)
 		return;
 
-	add_nops(insn, 1);
 	for (ptr = start; ptr < end; ptr++) {
 		if (*ptr < text)
 			continue;
 		if (*ptr > text_end)
 			continue;
-		text_poke(*ptr, insn, 1);
+		/* turn lock prefix into DS segment override prefix */
+		text_poke(*ptr, ((unsigned char []){0x3E}), 1);
 	};
 }
+3 -3
arch/x86/kernel/aperture_64.c
@@
 	    force_iommu ||
 	    valid_agp ||
 	    fallback_aper_force) {
-		printk(KERN_ERR
+		printk(KERN_INFO
 			"Your BIOS doesn't leave a aperture memory hole\n");
-		printk(KERN_ERR
+		printk(KERN_INFO
 			"Please enable the IOMMU option in the BIOS setup\n");
-		printk(KERN_ERR
+		printk(KERN_INFO
 			"This costs you %d MB of RAM\n",
 			32 << fallback_aper_order);
-1
arch/x86/kernel/apm_32.c
@@
 #include <linux/suspend.h>
 #include <linux/kthread.h>
 #include <linux/jiffies.h>
-#include <linux/smp_lock.h>
 
 #include <asm/system.h>
 #include <asm/uaccess.h>
+1 -1
arch/x86/kernel/asm-offsets_64.c
@@
 
 #define __NO_STUBS 1
 #undef __SYSCALL
-#undef _ASM_X86_64_UNISTD_H_
+#undef ASM_X86__UNISTD_64_H
 #define __SYSCALL(nr, sym) [nr] = 1,
 static char syscalls[] = {
 #include <asm/unistd.h>
+5 -5
arch/x86/kernel/bios_uv.c
@@
 {
 	const char *str;
 	switch (status) {
-	case 0: str = "Call completed without error"; break;
-	case -1: str = "Not implemented"; break;
-	case -2: str = "Invalid argument"; break;
-	case -3: str = "Call completed with error"; break;
-	default: str = "Unknown BIOS status code"; break;
+	case  0: str = "Call completed without error";	break;
+	case -1: str = "Not implemented";		break;
+	case -2: str = "Invalid argument";		break;
+	case -3: str = "Call completed with error";	break;
+	default: str = "Unknown BIOS status code";	break;
 	}
 	return str;
 }
+51
arch/x86/kernel/cpu/common_64.c
@@
 }
 __setup("noclflush", setup_noclflush);
 
+struct msr_range {
+	unsigned	min;
+	unsigned	max;
+};
+
+static struct msr_range msr_range_array[] __cpuinitdata = {
+	{ 0x00000000, 0x00000418},
+	{ 0xc0000000, 0xc000040b},
+	{ 0xc0010000, 0xc0010142},
+	{ 0xc0011000, 0xc001103b},
+};
+
+static void __cpuinit print_cpu_msr(void)
+{
+	unsigned index;
+	u64 val;
+	int i;
+	unsigned index_min, index_max;
+
+	for (i = 0; i < ARRAY_SIZE(msr_range_array); i++) {
+		index_min = msr_range_array[i].min;
+		index_max = msr_range_array[i].max;
+		for (index = index_min; index < index_max; index++) {
+			if (rdmsrl_amd_safe(index, &val))
+				continue;
+			printk(KERN_INFO " MSR%08x: %016llx\n", index, val);
+		}
+	}
+}
+
+static int show_msr __cpuinitdata;
+static __init int setup_show_msr(char *arg)
+{
+	int num;
+
+	get_option(&arg, &num);
+
+	if (num > 0)
+		show_msr = num;
+	return 1;
+}
+__setup("show_msr=", setup_show_msr);
+
 void __cpuinit print_cpu_info(struct cpuinfo_x86 *c)
 {
 	if (c->x86_model_id[0])
@@
 		printk(KERN_CONT " stepping %02x\n", c->x86_mask);
 	else
 		printk(KERN_CONT "\n");
+
+#ifdef CONFIG_SMP
+	if (c->cpu_index < show_msr)
+		print_cpu_msr();
+#else
+	if (show_msr)
+		print_cpu_msr();
+#endif
 }
 
 static __init int setup_disablecpuid(char *arg)
+2 -1
arch/x86/kernel/cpu/intel.c
@@
 			set_cpu_cap(c, X86_FEATURE_BTS);
 		if (!(l1 & (1<<12)))
 			set_cpu_cap(c, X86_FEATURE_PEBS);
+		ds_init_intel(c);
 	}
 
 	if (cpu_has_bts)
-		ds_init_intel(c);
+		ptrace_bts_init_intel(c);
 
 	/*
 	 * See if we have a good local APIC by checking for buggy Pentia,
+1 -6
arch/x86/kernel/cpu/mtrr/generic.c
@@
 		tmp |= ~((1<<(hi - 1)) - 1);
 
 		if (tmp != mask_lo) {
-			static int once = 1;
-
-			if (once) {
-				printk(KERN_INFO "mtrr: your BIOS has set up an incorrect mask, fixing it up.\n");
-				once = 0;
-			}
+			WARN_ONCE(1, KERN_INFO "mtrr: your BIOS has set up an incorrect mask, fixing it up.\n");
 			mask_lo = tmp;
 		}
 	}
+2 -2
arch/x86/kernel/cpu/mtrr/if.c
@@
 		}
 		/* RED-PEN: base can be > 32bit */
 		len += seq_printf(seq,
-			   "reg%02i: base=0x%05lx000 (%4luMB), size=%4lu%cB: %s, count=%d\n",
+			   "reg%02i: base=0x%06lx000 (%5luMB), size=%5lu%cB, count=%d: %s\n",
 			   i, base, base >> (20 - PAGE_SHIFT), size, factor,
-			   mtrr_attrib_to_str(type), mtrr_usage_table[i]);
+			   mtrr_usage_table[i], mtrr_attrib_to_str(type));
 		}
 	}
 	return 0;
+189 -85
arch/x86/kernel/cpu/mtrr/main.c
@@
 	mtrr_type type;
 };
 
-struct var_mtrr_range_state __initdata range_state[RANGE_NUM];
+static struct var_mtrr_range_state __initdata range_state[RANGE_NUM];
 static int __initdata debug_print;
 
 static int __init
@@
 	/* take out UC ranges */
 	for (i = 0; i < num_var_ranges; i++) {
 		type = range_state[i].type;
-		if (type != MTRR_TYPE_UNCACHABLE)
+		if (type != MTRR_TYPE_UNCACHABLE &&
+		    type != MTRR_TYPE_WRPROT)
 			continue;
 		size = range_state[i].size_pfn;
 		if (!size)
@@
 }
 early_param("enable_mtrr_cleanup", enable_mtrr_cleanup_setup);
 
+static int __init mtrr_cleanup_debug_setup(char *str)
+{
+	debug_print = 1;
+	return 0;
+}
+early_param("mtrr_cleanup_debug", mtrr_cleanup_debug_setup);
+
 struct var_mtrr_state {
 	unsigned long	range_startk;
 	unsigned long	range_sizek;
@@
 	}
 }
 
+static unsigned long to_size_factor(unsigned long sizek, char *factorp)
+{
+	char factor;
+	unsigned long base = sizek;
+
+	if (base & ((1<<10) - 1)) {
+		/* not MB alignment */
+		factor = 'K';
+	} else if (base & ((1<<20) - 1)){
+		factor = 'M';
+		base >>= 10;
+	} else {
+		factor = 'G';
+		base >>= 20;
+	}
+
+	*factorp = factor;
+
+	return base;
+}
+
 static unsigned int __init
 range_to_mtrr(unsigned int reg, unsigned long range_startk,
 	      unsigned long range_sizek, unsigned char type)
@@
 			align = max_align;
 
 		sizek = 1 << align;
-		if (debug_print)
+		if (debug_print) {
+			char start_factor = 'K', size_factor = 'K';
+			unsigned long start_base, size_base;
+
+			start_base = to_size_factor(range_startk, &start_factor),
+			size_base = to_size_factor(sizek, &size_factor),
+
 			printk(KERN_DEBUG "Setting variable MTRR %d, "
-				"base: %ldMB, range: %ldMB, type %s\n",
-				reg, range_startk >> 10, sizek >> 10,
+				"base: %ld%cB, range: %ld%cB, type %s\n",
+				reg, start_base, start_factor,
+				size_base, size_factor,
 				(type == MTRR_TYPE_UNCACHABLE)?"UC":
 				((type == MTRR_TYPE_WRBACK)?"WB":"Other")
 				);
+		}
 		save_var_mtrr(reg++, range_startk, sizek, type);
 		range_startk += sizek;
 		range_sizek -= sizek;
@@
 	/* try to append some small hole */
 	range0_basek = state->range_startk;
 	range0_sizek = ALIGN(state->range_sizek, chunk_sizek);
+
+	/* no increase */
 	if (range0_sizek == state->range_sizek) {
 		if (debug_print)
 			printk(KERN_DEBUG "rangeX: %016lx - %016lx\n",
@@
 		return 0;
 	}
 
-	range0_sizek -= chunk_sizek;
-	if (range0_sizek && sizek) {
-		while (range0_basek + range0_sizek > (basek + sizek)) {
-			range0_sizek -= chunk_sizek;
-			if (!range0_sizek)
-				break;
-		}
+	/* only cut back, when it is not the last */
+	if (sizek) {
+		while (range0_basek + range0_sizek > (basek + sizek)) {
+			if (range0_sizek >= chunk_sizek)
+				range0_sizek -= chunk_sizek;
+			else
+				range0_sizek = 0;
+
+			if (!range0_sizek)
+				break;
+		}
+	}
+
+second_try:
+	range_basek = range0_basek + range0_sizek;
+
+	/* one hole in the middle */
+	if (range_basek > basek && range_basek <= (basek + sizek))
+		second_sizek = range_basek - basek;
+
+	if (range0_sizek > state->range_sizek) {
+
+		/* one hole in middle or at end */
+		hole_sizek = range0_sizek - state->range_sizek - second_sizek;
+
+		/* hole size should be less than half of range0 size */
+		if (hole_sizek >= (range0_sizek >> 1) &&
+		    range0_sizek >= chunk_sizek) {
+			range0_sizek -= chunk_sizek;
+			second_sizek = 0;
+			hole_sizek = 0;
+
+			goto second_try;
+		}
 	}
 
 	if (range0_sizek) {
@@
 			       (range0_basek + range0_sizek)<<10);
 		state->reg = range_to_mtrr(state->reg, range0_basek,
 				range0_sizek, MTRR_TYPE_WRBACK);
-
 	}
 
-	range_basek = range0_basek + range0_sizek;
-	range_sizek = chunk_sizek;
-
-	if (range_basek + range_sizek > basek &&
-	    range_basek + range_sizek <= (basek + sizek)) {
-		/* one hole */
-		second_basek = basek;
-		second_sizek = range_basek + range_sizek - basek;
-	}
-
-	/* if last piece, only could one hole near end */
-	if ((second_basek || !basek) &&
-	    range_sizek - (state->range_sizek - range0_sizek) - second_sizek <
-	    (chunk_sizek >> 1)) {
-		/*
-		 * one hole in middle (second_sizek is 0) or at end
-		 * (second_sizek is 0 )
-		 */
-		hole_sizek = range_sizek - (state->range_sizek - range0_sizek)
-			     - second_sizek;
-		hole_basek = range_basek + range_sizek - hole_sizek
-			     - second_sizek;
-	} else {
-		/* fallback for big hole, or several holes */
+	if (range0_sizek < state->range_sizek) {
+		/* need to handle left over */
 		range_sizek = state->range_sizek - range0_sizek;
-		second_basek = 0;
-		second_sizek = 0;
+
+		if (debug_print)
+			printk(KERN_DEBUG "range: %016lx - %016lx\n",
+				 range_basek<<10,
+				 (range_basek + range_sizek)<<10);
+		state->reg = range_to_mtrr(state->reg, range_basek,
+				 range_sizek, MTRR_TYPE_WRBACK);
 	}
 
-	if (debug_print)
-		printk(KERN_DEBUG "range: %016lx - %016lx\n", range_basek<<10,
-			 (range_basek + range_sizek)<<10);
-	state->reg = range_to_mtrr(state->reg, range_basek, range_sizek,
-				   MTRR_TYPE_WRBACK);
 	if (hole_sizek) {
+		hole_basek = range_basek - hole_sizek - second_sizek;
 		if (debug_print)
 			printk(KERN_DEBUG "hole: %016lx - %016lx\n",
-				 hole_basek<<10, (hole_basek + hole_sizek)<<10);
-		state->reg = range_to_mtrr(state->reg, hole_basek, hole_sizek,
-					   MTRR_TYPE_UNCACHABLE);
-
+				 hole_basek<<10,
+				 (hole_basek + hole_sizek)<<10);
+		state->reg = range_to_mtrr(state->reg, hole_basek,
+				 hole_sizek, MTRR_TYPE_UNCACHABLE);
 	}
 
 	return second_sizek;
@@
 };
 
 /*
- * gran_size: 1M, 2M, ..., 2G
- * chunk size: gran_size, ..., 4G
- * so we need (2+13)*6
+ * gran_size: 64K, 128K, 256K, 512K, 1M, 2M, ..., 2G
+ * chunk size: gran_size, ..., 2G
+ * so we need (1+16)*8
  */
-#define NUM_RESULT	90
+#define NUM_RESULT	136
 #define PSHIFT		(PAGE_SHIFT - 10)
 
 static struct mtrr_cleanup_result __initdata result[NUM_RESULT];
@@
 static int __init mtrr_cleanup(unsigned address_bits)
 {
 	unsigned long extra_remove_base, extra_remove_size;
-	unsigned long i, base, size, def, dummy;
+	unsigned long base, size, def, dummy;
 	mtrr_type type;
 	int nr_range, nr_range_new;
 	u64 chunk_size, gran_size;
 	unsigned long range_sums, range_sums_new;
 	int index_good;
 	int num_reg_good;
+	int i;
 
 	/* extra one for all 0 */
 	int num[MTRR_NUM_TYPES + 1];
@@
 			continue;
 		if (!size)
 			type = MTRR_NUM_TYPES;
+		if (type == MTRR_TYPE_WRPROT)
+			type = MTRR_TYPE_UNCACHABLE;
 		num[type]++;
 	}
@@
 	    num_var_ranges - num[MTRR_NUM_TYPES])
 		return 0;
 
+	/* print original var MTRRs at first, for debugging: */
+	printk(KERN_DEBUG "original variable MTRRs\n");
+	for (i = 0; i < num_var_ranges; i++) {
+		char start_factor = 'K', size_factor = 'K';
+		unsigned long start_base, size_base;
+
+		size_base = range_state[i].size_pfn << (PAGE_SHIFT - 10);
+		if (!size_base)
continue; 1228 + 1229 + size_base = to_size_factor(size_base, &size_factor), 1230 + start_base = range_state[i].base_pfn << (PAGE_SHIFT - 10); 1231 + start_base = to_size_factor(start_base, &start_factor), 1232 + type = range_state[i].type; 1233 + 1234 + printk(KERN_DEBUG "reg %d, base: %ld%cB, range: %ld%cB, type %s\n", 1235 + i, start_base, start_factor, 1236 + size_base, size_factor, 1237 + (type == MTRR_TYPE_UNCACHABLE) ? "UC" : 1238 + ((type == MTRR_TYPE_WRPROT) ? "WP" : 1239 + ((type == MTRR_TYPE_WRBACK) ? "WB" : "Other")) 1240 + ); 1241 + } 1242 + 1266 1243 memset(range, 0, sizeof(range)); 1267 1244 extra_remove_size = 0; 1268 - if (mtrr_tom2) { 1269 - extra_remove_base = 1 << (32 - PAGE_SHIFT); 1245 + extra_remove_base = 1 << (32 - PAGE_SHIFT); 1246 + if (mtrr_tom2) 1270 1247 extra_remove_size = 1271 1248 (mtrr_tom2 >> PAGE_SHIFT) - extra_remove_base; 1272 - } 1273 1249 nr_range = x86_get_mtrr_mem_range(range, 0, extra_remove_base, 1274 1250 extra_remove_size); 1251 + /* 1252 + * [0, 1M) should always be coverred by var mtrr with WB 1253 + * and fixed mtrrs should take effective before var mtrr for it 1254 + */ 1255 + nr_range = add_range_with_merge(range, nr_range, 0, 1256 + (1ULL<<(20 - PAGE_SHIFT)) - 1); 1257 + /* sort the ranges */ 1258 + sort(range, nr_range, sizeof(struct res_range), cmp_range, NULL); 1259 + 1275 1260 range_sums = sum_ranges(range, nr_range); 1276 1261 printk(KERN_INFO "total RAM coverred: %ldM\n", 1277 1262 range_sums >> (20 - PAGE_SHIFT)); 1278 1263 1279 1264 if (mtrr_chunk_size && mtrr_gran_size) { 1280 1265 int num_reg; 1266 + char gran_factor, chunk_factor, lose_factor; 1267 + unsigned long gran_base, chunk_base, lose_base; 1281 1268 1282 - debug_print = 1; 1269 + debug_print++; 1283 1270 /* convert ranges to var ranges state */ 1284 1271 num_reg = x86_setup_var_mtrrs(range, nr_range, mtrr_chunk_size, 1285 1272 mtrr_gran_size); ··· 1337 1256 result[i].lose_cover_sizek = 1338 1257 (range_sums - range_sums_new) << PSHIFT; 1339 1258 
1340 - printk(KERN_INFO "%sgran_size: %ldM \tchunk_size: %ldM \t", 1341 - result[i].bad?"*BAD*":" ", result[i].gran_sizek >> 10, 1342 - result[i].chunk_sizek >> 10); 1343 - printk(KERN_CONT "num_reg: %d \tlose cover RAM: %s%ldM \n", 1259 + gran_base = to_size_factor(result[i].gran_sizek, &gran_factor), 1260 + chunk_base = to_size_factor(result[i].chunk_sizek, &chunk_factor), 1261 + lose_base = to_size_factor(result[i].lose_cover_sizek, &lose_factor), 1262 + printk(KERN_INFO "%sgran_size: %ld%c \tchunk_size: %ld%c \t", 1263 + result[i].bad?"*BAD*":" ", 1264 + gran_base, gran_factor, chunk_base, chunk_factor); 1265 + printk(KERN_CONT "num_reg: %d \tlose cover RAM: %s%ld%c\n", 1344 1266 result[i].num_reg, result[i].bad?"-":"", 1345 - result[i].lose_cover_sizek >> 10); 1267 + lose_base, lose_factor); 1346 1268 if (!result[i].bad) { 1347 1269 set_var_mtrr_all(address_bits); 1348 1270 return 1; 1349 1271 } 1350 1272 printk(KERN_INFO "invalid mtrr_gran_size or mtrr_chunk_size, " 1351 1273 "will find optimal one\n"); 1352 - debug_print = 0; 1274 + debug_print--; 1353 1275 memset(result, 0, sizeof(result[0])); 1354 1276 } 1355 1277 1356 1278 i = 0; 1357 1279 memset(min_loss_pfn, 0xff, sizeof(min_loss_pfn)); 1358 1280 memset(result, 0, sizeof(result)); 1359 - for (gran_size = (1ULL<<20); gran_size < (1ULL<<32); gran_size <<= 1) { 1360 - for (chunk_size = gran_size; chunk_size < (1ULL<<33); 1281 + for (gran_size = (1ULL<<16); gran_size < (1ULL<<32); gran_size <<= 1) { 1282 + char gran_factor; 1283 + unsigned long gran_base; 1284 + 1285 + if (debug_print) 1286 + gran_base = to_size_factor(gran_size >> 10, &gran_factor); 1287 + 1288 + for (chunk_size = gran_size; chunk_size < (1ULL<<32); 1361 1289 chunk_size <<= 1) { 1362 1290 int num_reg; 1363 1291 1364 - if (debug_print) 1365 - printk(KERN_INFO 1366 - "\ngran_size: %lldM chunk_size_size: %lldM\n", 1367 - gran_size >> 20, chunk_size >> 20); 1292 + if (debug_print) { 1293 + char chunk_factor; 1294 + unsigned long chunk_base; 
1295 + 1296 + chunk_base = to_size_factor(chunk_size>>10, &chunk_factor), 1297 + printk(KERN_INFO "\n"); 1298 + printk(KERN_INFO "gran_size: %ld%c chunk_size: %ld%c \n", 1299 + gran_base, gran_factor, chunk_base, chunk_factor); 1300 + } 1368 1301 if (i >= NUM_RESULT) 1369 1302 continue; 1370 1303 ··· 1421 1326 1422 1327 /* print out all */ 1423 1328 for (i = 0; i < NUM_RESULT; i++) { 1424 - printk(KERN_INFO "%sgran_size: %ldM \tchunk_size: %ldM \t", 1425 - result[i].bad?"*BAD* ":" ", result[i].gran_sizek >> 10, 1426 - result[i].chunk_sizek >> 10); 1427 - printk(KERN_CONT "num_reg: %d \tlose RAM: %s%ldM\n", 1428 - result[i].num_reg, result[i].bad?"-":"", 1429 - result[i].lose_cover_sizek >> 10); 1329 + char gran_factor, chunk_factor, lose_factor; 1330 + unsigned long gran_base, chunk_base, lose_base; 1331 + 1332 + gran_base = to_size_factor(result[i].gran_sizek, &gran_factor), 1333 + chunk_base = to_size_factor(result[i].chunk_sizek, &chunk_factor), 1334 + lose_base = to_size_factor(result[i].lose_cover_sizek, &lose_factor), 1335 + printk(KERN_INFO "%sgran_size: %ld%c \tchunk_size: %ld%c \t", 1336 + result[i].bad?"*BAD*":" ", 1337 + gran_base, gran_factor, chunk_base, chunk_factor); 1338 + printk(KERN_CONT "num_reg: %d \tlose cover RAM: %s%ld%c\n", 1339 + result[i].num_reg, result[i].bad?"-":"", 1340 + lose_base, lose_factor); 1430 1341 } 1431 1342 1432 1343 /* try to find the optimal index */ ··· 1440 1339 nr_mtrr_spare_reg = num_var_ranges - 1; 1441 1340 num_reg_good = -1; 1442 1341 for (i = num_var_ranges - nr_mtrr_spare_reg; i > 0; i--) { 1443 - if (!min_loss_pfn[i]) { 1342 + if (!min_loss_pfn[i]) 1444 1343 num_reg_good = i; 1445 - break; 1446 - } 1447 1344 } 1448 1345 1449 1346 index_good = -1; ··· 1457 1358 } 1458 1359 1459 1360 if (index_good != -1) { 1361 + char gran_factor, chunk_factor, lose_factor; 1362 + unsigned long gran_base, chunk_base, lose_base; 1363 + 1460 1364 printk(KERN_INFO "Found optimal setting for mtrr clean up\n"); 1461 1365 i = 
index_good; 1462 - printk(KERN_INFO "gran_size: %ldM \tchunk_size: %ldM \t", 1463 - result[i].gran_sizek >> 10, 1464 - result[i].chunk_sizek >> 10); 1465 - printk(KERN_CONT "num_reg: %d \tlose RAM: %ldM\n", 1466 - result[i].num_reg, 1467 - result[i].lose_cover_sizek >> 10); 1366 + gran_base = to_size_factor(result[i].gran_sizek, &gran_factor), 1367 + chunk_base = to_size_factor(result[i].chunk_sizek, &chunk_factor), 1368 + lose_base = to_size_factor(result[i].lose_cover_sizek, &lose_factor), 1369 + printk(KERN_INFO "gran_size: %ld%c \tchunk_size: %ld%c \t", 1370 + gran_base, gran_factor, chunk_base, chunk_factor); 1371 + printk(KERN_CONT "num_reg: %d \tlose RAM: %ld%c\n", 1372 + result[i].num_reg, lose_base, lose_factor); 1468 1373 /* convert ranges to var ranges state */ 1469 1374 chunk_size = result[i].chunk_sizek; 1470 1375 chunk_size <<= 10; 1471 1376 gran_size = result[i].gran_sizek; 1472 1377 gran_size <<= 10; 1473 - debug_print = 1; 1378 + debug_print++; 1474 1379 x86_setup_var_mtrrs(range, nr_range, chunk_size, gran_size); 1380 + debug_print--; 1475 1381 set_var_mtrr_all(address_bits); 1476 1382 return 1; 1477 1383 }
+74 -12
arch/x86/kernel/cpu/perfctr-watchdog.c
··· 295 295 /* setup the timer */ 296 296 wrmsr(evntsel_msr, evntsel, 0); 297 297 write_watchdog_counter(perfctr_msr, "K7_PERFCTR0",nmi_hz); 298 + 299 + /* initialize the wd struct before enabling */ 300 + wd->perfctr_msr = perfctr_msr; 301 + wd->evntsel_msr = evntsel_msr; 302 + wd->cccr_msr = 0; /* unused */ 303 + 304 + /* ok, everything is initialized, announce that we're set */ 305 + cpu_nmi_set_wd_enabled(); 306 + 298 307 apic_write(APIC_LVTPC, APIC_DM_NMI); 299 308 evntsel |= K7_EVNTSEL_ENABLE; 300 309 wrmsr(evntsel_msr, evntsel, 0); 301 310 302 - wd->perfctr_msr = perfctr_msr; 303 - wd->evntsel_msr = evntsel_msr; 304 - wd->cccr_msr = 0; /* unused */ 305 311 return 1; 306 312 } 307 313 ··· 385 379 wrmsr(evntsel_msr, evntsel, 0); 386 380 nmi_hz = adjust_for_32bit_ctr(nmi_hz); 387 381 write_watchdog_counter32(perfctr_msr, "P6_PERFCTR0",nmi_hz); 382 + 383 + /* initialize the wd struct before enabling */ 384 + wd->perfctr_msr = perfctr_msr; 385 + wd->evntsel_msr = evntsel_msr; 386 + wd->cccr_msr = 0; /* unused */ 387 + 388 + /* ok, everything is initialized, announce that we're set */ 389 + cpu_nmi_set_wd_enabled(); 390 + 388 391 apic_write(APIC_LVTPC, APIC_DM_NMI); 389 392 evntsel |= P6_EVNTSEL0_ENABLE; 390 393 wrmsr(evntsel_msr, evntsel, 0); 391 394 392 - wd->perfctr_msr = perfctr_msr; 393 - wd->evntsel_msr = evntsel_msr; 394 - wd->cccr_msr = 0; /* unused */ 395 395 return 1; 396 396 } 397 397 ··· 444 432 #define P4_CCCR_ENABLE (1 << 12) 445 433 #define P4_CCCR_OVF (1 << 31) 446 434 435 + #define P4_CONTROLS 18 436 + static unsigned int p4_controls[18] = { 437 + MSR_P4_BPU_CCCR0, 438 + MSR_P4_BPU_CCCR1, 439 + MSR_P4_BPU_CCCR2, 440 + MSR_P4_BPU_CCCR3, 441 + MSR_P4_MS_CCCR0, 442 + MSR_P4_MS_CCCR1, 443 + MSR_P4_MS_CCCR2, 444 + MSR_P4_MS_CCCR3, 445 + MSR_P4_FLAME_CCCR0, 446 + MSR_P4_FLAME_CCCR1, 447 + MSR_P4_FLAME_CCCR2, 448 + MSR_P4_FLAME_CCCR3, 449 + MSR_P4_IQ_CCCR0, 450 + MSR_P4_IQ_CCCR1, 451 + MSR_P4_IQ_CCCR2, 452 + MSR_P4_IQ_CCCR3, 453 + MSR_P4_IQ_CCCR4, 454 + 
MSR_P4_IQ_CCCR5, 455 + }; 447 456 /* 448 457 * Set up IQ_COUNTER0 to behave like a clock, by having IQ_CCCR0 filter 449 458 * CRU_ESCR0 (with any non-null event selector) through a complemented ··· 506 473 evntsel_msr = MSR_P4_CRU_ESCR0; 507 474 cccr_msr = MSR_P4_IQ_CCCR0; 508 475 cccr_val = P4_CCCR_OVF_PMI0 | P4_CCCR_ESCR_SELECT(4); 476 + 477 + /* 478 + * If we're on the kdump kernel or other situation, we may 479 + * still have other performance counter registers set to 480 + * interrupt and they'll keep interrupting forever because 481 + * of the P4_CCCR_OVF quirk. So we need to ACK all the 482 + * pending interrupts and disable all the registers here, 483 + * before reenabling the NMI delivery. Refer to p4_rearm() 484 + * about the P4_CCCR_OVF quirk. 485 + */ 486 + if (reset_devices) { 487 + unsigned int low, high; 488 + int i; 489 + 490 + for (i = 0; i < P4_CONTROLS; i++) { 491 + rdmsr(p4_controls[i], low, high); 492 + low &= ~(P4_CCCR_ENABLE | P4_CCCR_OVF); 493 + wrmsr(p4_controls[i], low, high); 494 + } 495 + } 509 496 } else { 510 497 /* logical cpu 1 */ 511 498 perfctr_msr = MSR_P4_IQ_PERFCTR1; ··· 552 499 wrmsr(evntsel_msr, evntsel, 0); 553 500 wrmsr(cccr_msr, cccr_val, 0); 554 501 write_watchdog_counter(perfctr_msr, "P4_IQ_COUNTER0", nmi_hz); 555 - apic_write(APIC_LVTPC, APIC_DM_NMI); 556 - cccr_val |= P4_CCCR_ENABLE; 557 - wrmsr(cccr_msr, cccr_val, 0); 502 + 558 503 wd->perfctr_msr = perfctr_msr; 559 504 wd->evntsel_msr = evntsel_msr; 560 505 wd->cccr_msr = cccr_msr; 506 + 507 + /* ok, everything is initialized, announce that we're set */ 508 + cpu_nmi_set_wd_enabled(); 509 + 510 + apic_write(APIC_LVTPC, APIC_DM_NMI); 511 + cccr_val |= P4_CCCR_ENABLE; 512 + wrmsr(cccr_msr, cccr_val, 0); 561 513 return 1; 562 514 } 563 515 ··· 678 620 wrmsr(evntsel_msr, evntsel, 0); 679 621 nmi_hz = adjust_for_32bit_ctr(nmi_hz); 680 622 write_watchdog_counter32(perfctr_msr, "INTEL_ARCH_PERFCTR0", nmi_hz); 681 - apic_write(APIC_LVTPC, APIC_DM_NMI); 682 - evntsel |= 
ARCH_PERFMON_EVENTSEL0_ENABLE; 683 - wrmsr(evntsel_msr, evntsel, 0); 684 623 685 624 wd->perfctr_msr = perfctr_msr; 686 625 wd->evntsel_msr = evntsel_msr; 687 626 wd->cccr_msr = 0; /* unused */ 627 + 628 + /* ok, everything is initialized, announce that we're set */ 629 + cpu_nmi_set_wd_enabled(); 630 + 631 + apic_write(APIC_LVTPC, APIC_DM_NMI); 632 + evntsel |= ARCH_PERFMON_EVENTSEL0_ENABLE; 633 + wrmsr(evntsel_msr, evntsel, 0); 688 634 intel_arch_wd_ops.checkbit = 1ULL << (eax.split.bit_width - 1); 689 635 return 1; 690 636 }
-1
arch/x86/kernel/cpuid.c
··· 36 36 #include <linux/smp_lock.h> 37 37 #include <linux/major.h> 38 38 #include <linux/fs.h> 39 - #include <linux/smp_lock.h> 40 39 #include <linux/device.h> 41 40 #include <linux/cpu.h> 42 41 #include <linux/notifier.h>
+7 -6
arch/x86/kernel/crash_dump_64.c
··· 7 7 8 8 #include <linux/errno.h> 9 9 #include <linux/crash_dump.h> 10 - 11 - #include <asm/uaccess.h> 12 - #include <asm/io.h> 10 + #include <linux/uaccess.h> 11 + #include <linux/io.h> 13 12 14 13 /** 15 14 * copy_oldmem_page - copy one page from "oldmem" ··· 24 25 * in the current kernel. We stitch up a pte, similar to kmap_atomic. 25 26 */ 26 27 ssize_t copy_oldmem_page(unsigned long pfn, char *buf, 27 - size_t csize, unsigned long offset, int userbuf) 28 + size_t csize, unsigned long offset, int userbuf) 28 29 { 29 30 void *vaddr; 30 31 ··· 32 33 return 0; 33 34 34 35 vaddr = ioremap(pfn << PAGE_SHIFT, PAGE_SIZE); 36 + if (!vaddr) 37 + return -ENOMEM; 35 38 36 39 if (userbuf) { 37 - if (copy_to_user(buf, (vaddr + offset), csize)) { 40 + if (copy_to_user(buf, vaddr + offset, csize)) { 38 41 iounmap(vaddr); 39 42 return -EFAULT; 40 43 } 41 44 } else 42 - memcpy(buf, (vaddr + offset), csize); 45 + memcpy(buf, vaddr + offset, csize); 43 46 44 47 iounmap(vaddr); 45 48 return csize;
+727 -327
arch/x86/kernel/ds.c
··· 2 2 * Debug Store support 3 3 * 4 4 * This provides a low-level interface to the hardware's Debug Store 5 - * feature that is used for last branch recording (LBR) and 5 + * feature that is used for branch trace store (BTS) and 6 6 * precise-event based sampling (PEBS). 7 7 * 8 - * Different architectures use a different DS layout/pointer size. 9 - * The below functions therefore work on a void*. 8 + * It manages: 9 + * - per-thread and per-cpu allocation of BTS and PEBS 10 + * - buffer memory allocation (optional) 11 + * - buffer overflow handling 12 + * - buffer access 13 + * 14 + * It assumes: 15 + * - get_task_struct on all parameter tasks 16 + * - current is allowed to trace parameter tasks 10 17 * 11 18 * 12 - * Since there is no user for PEBS, yet, only LBR (or branch 13 - * trace store, BTS) is supported. 14 - * 15 - * 16 - * Copyright (C) 2007 Intel Corporation. 17 - * Markus Metzger <markus.t.metzger@intel.com>, Dec 2007 19 + * Copyright (C) 2007-2008 Intel Corporation. 20 + * Markus Metzger <markus.t.metzger@intel.com>, 2007-2008 18 21 */ 22 + 23 + 24 + #ifdef CONFIG_X86_DS 19 25 20 26 #include <asm/ds.h> 21 27 22 28 #include <linux/errno.h> 23 29 #include <linux/string.h> 24 30 #include <linux/slab.h> 31 + #include <linux/sched.h> 32 + #include <linux/mm.h> 33 + 34 + 35 + /* 36 + * The configuration for a particular DS hardware implementation. 37 + */ 38 + struct ds_configuration { 39 + /* the size of the DS structure in bytes */ 40 + unsigned char sizeof_ds; 41 + /* the size of one pointer-typed field in the DS structure in bytes; 42 + this covers the first 8 fields related to buffer management. 
*/ 43 + unsigned char sizeof_field; 44 + /* the size of a BTS/PEBS record in bytes */ 45 + unsigned char sizeof_rec[2]; 46 + }; 47 + static struct ds_configuration ds_cfg; 25 48 26 49 27 50 /* ··· 67 44 * (interrupt occurs when write pointer passes interrupt pointer) 68 45 * - value to which counter is reset following counter overflow 69 46 * 70 - * On later architectures, the last branch recording hardware uses 71 - * 64bit pointers even in 32bit mode. 47 + * Later architectures use 64bit pointers throughout, whereas earlier 48 + * architectures use 32bit pointers in 32bit mode. 72 49 * 73 50 * 74 - * Branch Trace Store (BTS) records store information about control 75 - * flow changes. They at least provide the following information: 76 - * - source linear address 77 - * - destination linear address 51 + * We compute the base address for the first 8 fields based on: 52 + * - the field size stored in the DS configuration 53 + * - the relative field position 54 + * - an offset giving the start of the respective region 78 55 * 79 - * Netburst supported a predicated bit that had been dropped in later 80 - * architectures. We do not suppor it. 56 + * This offset is further used to index various arrays holding 57 + * information for BTS and PEBS at the respective index. 81 58 * 82 - * 83 - * In order to abstract from the actual DS and BTS layout, we describe 84 - * the access to the relevant fields. 85 - * Thanks to Andi Kleen for proposing this design. 86 - * 87 - * The implementation, however, is not as general as it might seem. In 88 - * order to stay somewhat simple and efficient, we assume an 89 - * underlying unsigned type (mostly a pointer type) and we expect the 90 - * field to be at least as big as that type. 59 + * On later 32bit processors, we only access the lower 32bit of the 60 + * 64bit pointer fields. The upper halves will be zeroed out. 
91 61 */ 92 62 93 - /* 94 - * A special from_ip address to indicate that the BTS record is an 95 - * info record that needs to be interpreted or skipped. 96 - */ 97 - #define BTS_ESCAPE_ADDRESS (-1) 98 - 99 - /* 100 - * A field access descriptor 101 - */ 102 - struct access_desc { 103 - unsigned char offset; 104 - unsigned char size; 63 + enum ds_field { 64 + ds_buffer_base = 0, 65 + ds_index, 66 + ds_absolute_maximum, 67 + ds_interrupt_threshold, 105 68 }; 106 69 107 - /* 108 - * The configuration for a particular DS/BTS hardware implementation. 109 - */ 110 - struct ds_configuration { 111 - /* the DS configuration */ 112 - unsigned char sizeof_ds; 113 - struct access_desc bts_buffer_base; 114 - struct access_desc bts_index; 115 - struct access_desc bts_absolute_maximum; 116 - struct access_desc bts_interrupt_threshold; 117 - /* the BTS configuration */ 118 - unsigned char sizeof_bts; 119 - struct access_desc from_ip; 120 - struct access_desc to_ip; 121 - /* BTS variants used to store additional information like 122 - timestamps */ 123 - struct access_desc info_type; 124 - struct access_desc info_data; 125 - unsigned long debugctl_mask; 70 + enum ds_qualifier { 71 + ds_bts = 0, 72 + ds_pebs 126 73 }; 127 74 75 + static inline unsigned long ds_get(const unsigned char *base, 76 + enum ds_qualifier qual, enum ds_field field) 77 + { 78 + base += (ds_cfg.sizeof_field * (field + (4 * qual))); 79 + return *(unsigned long *)base; 80 + } 81 + 82 + static inline void ds_set(unsigned char *base, enum ds_qualifier qual, 83 + enum ds_field field, unsigned long value) 84 + { 85 + base += (ds_cfg.sizeof_field * (field + (4 * qual))); 86 + (*(unsigned long *)base) = value; 87 + } 88 + 89 + 128 90 /* 129 - * The global configuration used by the below accessor functions 91 + * Locking is done only for allocating BTS or PEBS resources and for 92 + * guarding context and buffer memory allocation. 
93 + * 94 + * Most functions require the current task to own the ds context part 95 + * they are going to access. All the locking is done when validating 96 + * access to the context. 130 97 */ 131 - static struct ds_configuration ds_cfg; 98 + static spinlock_t ds_lock = __SPIN_LOCK_UNLOCKED(ds_lock); 132 99 133 100 /* 134 - * Accessor functions for some DS and BTS fields using the above 135 - * global ptrace_bts_cfg. 101 + * Validate that the current task is allowed to access the BTS/PEBS 102 + * buffer of the parameter task. 103 + * 104 + * Returns 0, if access is granted; -Eerrno, otherwise. 136 105 */ 137 - static inline unsigned long get_bts_buffer_base(char *base) 106 + static inline int ds_validate_access(struct ds_context *context, 107 + enum ds_qualifier qual) 138 108 { 139 - return *(unsigned long *)(base + ds_cfg.bts_buffer_base.offset); 140 - } 141 - static inline void set_bts_buffer_base(char *base, unsigned long value) 142 - { 143 - (*(unsigned long *)(base + ds_cfg.bts_buffer_base.offset)) = value; 144 - } 145 - static inline unsigned long get_bts_index(char *base) 146 - { 147 - return *(unsigned long *)(base + ds_cfg.bts_index.offset); 148 - } 149 - static inline void set_bts_index(char *base, unsigned long value) 150 - { 151 - (*(unsigned long *)(base + ds_cfg.bts_index.offset)) = value; 152 - } 153 - static inline unsigned long get_bts_absolute_maximum(char *base) 154 - { 155 - return *(unsigned long *)(base + ds_cfg.bts_absolute_maximum.offset); 156 - } 157 - static inline void set_bts_absolute_maximum(char *base, unsigned long value) 158 - { 159 - (*(unsigned long *)(base + ds_cfg.bts_absolute_maximum.offset)) = value; 160 - } 161 - static inline unsigned long get_bts_interrupt_threshold(char *base) 162 - { 163 - return *(unsigned long *)(base + ds_cfg.bts_interrupt_threshold.offset); 164 - } 165 - static inline void set_bts_interrupt_threshold(char *base, unsigned long value) 166 - { 167 - (*(unsigned long *)(base + 
ds_cfg.bts_interrupt_threshold.offset)) = value; 168 - } 169 - static inline unsigned long get_from_ip(char *base) 170 - { 171 - return *(unsigned long *)(base + ds_cfg.from_ip.offset); 172 - } 173 - static inline void set_from_ip(char *base, unsigned long value) 174 - { 175 - (*(unsigned long *)(base + ds_cfg.from_ip.offset)) = value; 176 - } 177 - static inline unsigned long get_to_ip(char *base) 178 - { 179 - return *(unsigned long *)(base + ds_cfg.to_ip.offset); 180 - } 181 - static inline void set_to_ip(char *base, unsigned long value) 182 - { 183 - (*(unsigned long *)(base + ds_cfg.to_ip.offset)) = value; 184 - } 185 - static inline unsigned char get_info_type(char *base) 186 - { 187 - return *(unsigned char *)(base + ds_cfg.info_type.offset); 188 - } 189 - static inline void set_info_type(char *base, unsigned char value) 190 - { 191 - (*(unsigned char *)(base + ds_cfg.info_type.offset)) = value; 192 - } 193 - static inline unsigned long get_info_data(char *base) 194 - { 195 - return *(unsigned long *)(base + ds_cfg.info_data.offset); 196 - } 197 - static inline void set_info_data(char *base, unsigned long value) 198 - { 199 - (*(unsigned long *)(base + ds_cfg.info_data.offset)) = value; 200 - } 109 + if (!context) 110 + return -EPERM; 201 111 202 - 203 - int ds_allocate(void **dsp, size_t bts_size_in_bytes) 204 - { 205 - size_t bts_size_in_records; 206 - unsigned long bts; 207 - void *ds; 208 - 209 - if (!ds_cfg.sizeof_ds || !ds_cfg.sizeof_bts) 210 - return -EOPNOTSUPP; 211 - 212 - if (bts_size_in_bytes < 0) 213 - return -EINVAL; 214 - 215 - bts_size_in_records = 216 - bts_size_in_bytes / ds_cfg.sizeof_bts; 217 - bts_size_in_bytes = 218 - bts_size_in_records * ds_cfg.sizeof_bts; 219 - 220 - if (bts_size_in_bytes <= 0) 221 - return -EINVAL; 222 - 223 - bts = (unsigned long)kzalloc(bts_size_in_bytes, GFP_KERNEL); 224 - 225 - if (!bts) 226 - return -ENOMEM; 227 - 228 - ds = kzalloc(ds_cfg.sizeof_ds, GFP_KERNEL); 229 - 230 - if (!ds) { 231 - kfree((void *)bts); 
232 - return -ENOMEM; 233 - } 234 - 235 - set_bts_buffer_base(ds, bts); 236 - set_bts_index(ds, bts); 237 - set_bts_absolute_maximum(ds, bts + bts_size_in_bytes); 238 - set_bts_interrupt_threshold(ds, bts + bts_size_in_bytes + 1); 239 - 240 - *dsp = ds; 241 - return 0; 242 - } 243 - 244 - int ds_free(void **dsp) 245 - { 246 - if (*dsp) { 247 - kfree((void *)get_bts_buffer_base(*dsp)); 248 - kfree(*dsp); 249 - *dsp = NULL; 250 - } 251 - return 0; 252 - } 253 - 254 - int ds_get_bts_size(void *ds) 255 - { 256 - int size_in_bytes; 257 - 258 - if (!ds_cfg.sizeof_ds || !ds_cfg.sizeof_bts) 259 - return -EOPNOTSUPP; 260 - 261 - if (!ds) 112 + if (context->owner[qual] == current) 262 113 return 0; 263 114 264 - size_in_bytes = 265 - get_bts_absolute_maximum(ds) - 266 - get_bts_buffer_base(ds); 267 - return size_in_bytes; 115 + return -EPERM; 268 116 } 269 117 270 - int ds_get_bts_end(void *ds) 118 + 119 + /* 120 + * We either support (system-wide) per-cpu or per-thread allocation. 121 + * We distinguish the two based on the task_struct pointer, where a 122 + * NULL pointer indicates per-cpu allocation for the current cpu. 123 + * 124 + * Allocations are use-counted. As soon as resources are allocated, 125 + * further allocations must be of the same type (per-cpu or 126 + * per-thread). We model this by counting allocations (i.e. the number 127 + * of tracers of a certain type) for one type negatively: 128 + * =0 no tracers 129 + * >0 number of per-thread tracers 130 + * <0 number of per-cpu tracers 131 + * 132 + * The below functions to get and put tracers and to check the 133 + * allocation type require the ds_lock to be held by the caller. 134 + * 135 + * Tracers essentially gives the number of ds contexts for a certain 136 + * type of allocation. 
137 + */ 138 + static long tracers; 139 + 140 + static inline void get_tracer(struct task_struct *task) 271 141 { 272 - int size_in_bytes = ds_get_bts_size(ds); 273 - 274 - if (size_in_bytes <= 0) 275 - return size_in_bytes; 276 - 277 - return size_in_bytes / ds_cfg.sizeof_bts; 142 + tracers += (task ? 1 : -1); 278 143 } 279 144 280 - int ds_get_bts_index(void *ds) 145 + static inline void put_tracer(struct task_struct *task) 281 146 { 282 - int index_offset_in_bytes; 283 - 284 - if (!ds_cfg.sizeof_ds || !ds_cfg.sizeof_bts) 285 - return -EOPNOTSUPP; 286 - 287 - index_offset_in_bytes = 288 - get_bts_index(ds) - 289 - get_bts_buffer_base(ds); 290 - 291 - return index_offset_in_bytes / ds_cfg.sizeof_bts; 147 + tracers -= (task ? 1 : -1); 292 148 } 293 149 294 - int ds_set_overflow(void *ds, int method) 150 + static inline int check_tracer(struct task_struct *task) 295 151 { 296 - switch (method) { 297 - case DS_O_SIGNAL: 298 - return -EOPNOTSUPP; 299 - case DS_O_WRAP: 300 - return 0; 301 - default: 302 - return -EINVAL; 152 + return (task ? (tracers >= 0) : (tracers <= 0)); 153 + } 154 + 155 + 156 + /* 157 + * The DS context is either attached to a thread or to a cpu: 158 + * - in the former case, the thread_struct contains a pointer to the 159 + * attached context. 160 + * - in the latter case, we use a static array of per-cpu context 161 + * pointers. 162 + * 163 + * Contexts are use-counted. They are allocated on first access and 164 + * deallocated when the last user puts the context. 165 + * 166 + * We distinguish between an allocating and a non-allocating get of a 167 + * context: 168 + * - the allocating get is used for requesting BTS/PEBS resources. It 169 + * requires the caller to hold the global ds_lock. 170 + * - the non-allocating get is used for all other cases. A 171 + * non-existing context indicates an error. It acquires and releases 172 + * the ds_lock itself for obtaining the context. 
173 + * 174 + * A context and its DS configuration are allocated and deallocated 175 + * together. A context always has a DS configuration of the 176 + * appropriate size. 177 + */ 178 + static DEFINE_PER_CPU(struct ds_context *, system_context); 179 + 180 + #define this_system_context per_cpu(system_context, smp_processor_id()) 181 + 182 + /* 183 + * Returns the pointer to the parameter task's context or to the 184 + * system-wide context, if task is NULL. 185 + * 186 + * Increases the use count of the returned context, if not NULL. 187 + */ 188 + static inline struct ds_context *ds_get_context(struct task_struct *task) 189 + { 190 + struct ds_context *context; 191 + 192 + spin_lock(&ds_lock); 193 + 194 + context = (task ? task->thread.ds_ctx : this_system_context); 195 + if (context) 196 + context->count++; 197 + 198 + spin_unlock(&ds_lock); 199 + 200 + return context; 201 + } 202 + 203 + /* 204 + * Same as ds_get_context, but allocates the context and it's DS 205 + * structure, if necessary; returns NULL; if out of memory. 206 + * 207 + * pre: requires ds_lock to be held 208 + */ 209 + static inline struct ds_context *ds_alloc_context(struct task_struct *task) 210 + { 211 + struct ds_context **p_context = 212 + (task ? 
&task->thread.ds_ctx : &this_system_context); 213 + struct ds_context *context = *p_context; 214 + 215 + if (!context) { 216 + context = kzalloc(sizeof(*context), GFP_KERNEL); 217 + 218 + if (!context) 219 + return NULL; 220 + 221 + context->ds = kzalloc(ds_cfg.sizeof_ds, GFP_KERNEL); 222 + if (!context->ds) { 223 + kfree(context); 224 + return NULL; 225 + } 226 + 227 + *p_context = context; 228 + 229 + context->this = p_context; 230 + context->task = task; 231 + 232 + if (task) 233 + set_tsk_thread_flag(task, TIF_DS_AREA_MSR); 234 + 235 + if (!task || (task == current)) 236 + wrmsr(MSR_IA32_DS_AREA, (unsigned long)context->ds, 0); 237 + 238 + get_tracer(task); 303 239 } 240 + 241 + context->count++; 242 + 243 + return context; 304 244 } 305 245 306 - int ds_get_overflow(void *ds) 246 + /* 247 + * Decreases the use count of the parameter context, if not NULL. 248 + * Deallocates the context, if the use count reaches zero. 249 + */ 250 + static inline void ds_put_context(struct ds_context *context) 307 251 { 308 - return DS_O_WRAP; 252 + if (!context) 253 + return; 254 + 255 + spin_lock(&ds_lock); 256 + 257 + if (--context->count) 258 + goto out; 259 + 260 + *(context->this) = NULL; 261 + 262 + if (context->task) 263 + clear_tsk_thread_flag(context->task, TIF_DS_AREA_MSR); 264 + 265 + if (!context->task || (context->task == current)) 266 + wrmsrl(MSR_IA32_DS_AREA, 0); 267 + 268 + put_tracer(context->task); 269 + 270 + /* free any leftover buffers from tracers that did not 271 + * deallocate them properly. 
*/ 272 + kfree(context->buffer[ds_bts]); 273 + kfree(context->buffer[ds_pebs]); 274 + kfree(context->ds); 275 + kfree(context); 276 + out: 277 + spin_unlock(&ds_lock); 309 278 } 310 279 311 - int ds_clear(void *ds) 280 + 281 + /* 282 + * Handle a buffer overflow 283 + * 284 + * task: the task whose buffers are overflowing; 285 + * NULL for a buffer overflow on the current cpu 286 + * context: the ds context 287 + * qual: the buffer type 288 + */ 289 + static void ds_overflow(struct task_struct *task, struct ds_context *context, 290 + enum ds_qualifier qual) 312 291 { 313 - int bts_size = ds_get_bts_size(ds); 314 - unsigned long bts_base; 292 + if (!context) 293 + return; 315 294 316 - if (bts_size <= 0) 317 - return bts_size; 295 + if (context->callback[qual]) 296 + (*context->callback[qual])(task); 318 297 319 - bts_base = get_bts_buffer_base(ds); 320 - memset((void *)bts_base, 0, bts_size); 321 - 322 - set_bts_index(ds, bts_base); 323 - return 0; 298 + /* todo: do some more overflow handling */ 324 299 } 325 300 326 - int ds_read_bts(void *ds, int index, struct bts_struct *out) 327 - { 328 - void *bts; 329 301 330 - if (!ds_cfg.sizeof_ds || !ds_cfg.sizeof_bts) 302 + /* 303 + * Allocate a non-pageable buffer of the parameter size. 304 + * Checks the memory and the locked memory rlimit. 305 + * 306 + * Returns the buffer, if successful; 307 + * NULL, if out of memory or rlimit exceeded. 
308 + * 309 + * size: the requested buffer size in bytes 310 + * pages (out): if not NULL, contains the number of pages reserved 311 + */ 312 + static inline void *ds_allocate_buffer(size_t size, unsigned int *pages) 313 + { 314 + unsigned long rlim, vm, pgsz; 315 + void *buffer; 316 + 317 + pgsz = PAGE_ALIGN(size) >> PAGE_SHIFT; 318 + 319 + rlim = current->signal->rlim[RLIMIT_AS].rlim_cur >> PAGE_SHIFT; 320 + vm = current->mm->total_vm + pgsz; 321 + if (rlim < vm) 322 + return NULL; 323 + 324 + rlim = current->signal->rlim[RLIMIT_MEMLOCK].rlim_cur >> PAGE_SHIFT; 325 + vm = current->mm->locked_vm + pgsz; 326 + if (rlim < vm) 327 + return NULL; 328 + 329 + buffer = kzalloc(size, GFP_KERNEL); 330 + if (!buffer) 331 + return NULL; 332 + 333 + current->mm->total_vm += pgsz; 334 + current->mm->locked_vm += pgsz; 335 + 336 + if (pages) 337 + *pages = pgsz; 338 + 339 + return buffer; 340 + } 341 + 342 + static int ds_request(struct task_struct *task, void *base, size_t size, 343 + ds_ovfl_callback_t ovfl, enum ds_qualifier qual) 344 + { 345 + struct ds_context *context; 346 + unsigned long buffer, adj; 347 + const unsigned long alignment = (1 << 3); 348 + int error = 0; 349 + 350 + if (!ds_cfg.sizeof_ds) 331 351 return -EOPNOTSUPP; 332 352 333 - if (index < 0) 353 + /* we require some space to do alignment adjustments below */ 354 + if (size < (alignment + ds_cfg.sizeof_rec[qual])) 334 355 return -EINVAL; 335 356 336 - if (index >= ds_get_bts_size(ds)) 337 - return -EINVAL; 338 - 339 - bts = (void *)(get_bts_buffer_base(ds) + (index * ds_cfg.sizeof_bts)); 340 - 341 - memset(out, 0, sizeof(*out)); 342 - if (get_from_ip(bts) == BTS_ESCAPE_ADDRESS) { 343 - out->qualifier = get_info_type(bts); 344 - out->variant.jiffies = get_info_data(bts); 345 - } else { 346 - out->qualifier = BTS_BRANCH; 347 - out->variant.lbr.from_ip = get_from_ip(bts); 348 - out->variant.lbr.to_ip = get_to_ip(bts); 349 - } 350 - 351 - return sizeof(*out);; 352 - } 353 - 354 - int ds_write_bts(void *ds, 
const struct bts_struct *in) 355 - { 356 - unsigned long bts; 357 - 358 - if (!ds_cfg.sizeof_ds || !ds_cfg.sizeof_bts) 357 + /* buffer overflow notification is not yet implemented */ 358 + if (ovfl) 359 359 return -EOPNOTSUPP; 360 360 361 - if (ds_get_bts_size(ds) <= 0) 362 - return -ENXIO; 363 361 364 - bts = get_bts_index(ds); 362 + spin_lock(&ds_lock); 365 363 366 - memset((void *)bts, 0, ds_cfg.sizeof_bts); 367 - switch (in->qualifier) { 368 - case BTS_INVALID: 369 - break; 364 + if (!check_tracer(task)) 365 + return -EPERM; 370 366 371 - case BTS_BRANCH: 372 - set_from_ip((void *)bts, in->variant.lbr.from_ip); 373 - set_to_ip((void *)bts, in->variant.lbr.to_ip); 374 - break; 367 + error = -ENOMEM; 368 + context = ds_alloc_context(task); 369 + if (!context) 370 + goto out_unlock; 375 371 376 - case BTS_TASK_ARRIVES: 377 - case BTS_TASK_DEPARTS: 378 - set_from_ip((void *)bts, BTS_ESCAPE_ADDRESS); 379 - set_info_type((void *)bts, in->qualifier); 380 - set_info_data((void *)bts, in->variant.jiffies); 381 - break; 372 + error = -EALREADY; 373 + if (context->owner[qual] == current) 374 + goto out_unlock; 375 + error = -EPERM; 376 + if (context->owner[qual] != NULL) 377 + goto out_unlock; 378 + context->owner[qual] = current; 382 379 383 - default: 380 + spin_unlock(&ds_lock); 381 + 382 + 383 + error = -ENOMEM; 384 + if (!base) { 385 + base = ds_allocate_buffer(size, &context->pages[qual]); 386 + if (!base) 387 + goto out_release; 388 + 389 + context->buffer[qual] = base; 390 + } 391 + error = 0; 392 + 393 + context->callback[qual] = ovfl; 394 + 395 + /* adjust the buffer address and size to meet alignment 396 + * constraints: 397 + * - buffer is double-word aligned 398 + * - size is multiple of record size 399 + * 400 + * We checked the size at the very beginning; we have enough 401 + * space to do the adjustment. 
402 + */ 403 + buffer = (unsigned long)base; 404 + 405 + adj = ALIGN(buffer, alignment) - buffer; 406 + buffer += adj; 407 + size -= adj; 408 + 409 + size /= ds_cfg.sizeof_rec[qual]; 410 + size *= ds_cfg.sizeof_rec[qual]; 411 + 412 + ds_set(context->ds, qual, ds_buffer_base, buffer); 413 + ds_set(context->ds, qual, ds_index, buffer); 414 + ds_set(context->ds, qual, ds_absolute_maximum, buffer + size); 415 + 416 + if (ovfl) { 417 + /* todo: select a suitable interrupt threshold */ 418 + } else 419 + ds_set(context->ds, qual, 420 + ds_interrupt_threshold, buffer + size + 1); 421 + 422 + /* we keep the context until ds_release */ 423 + return error; 424 + 425 + out_release: 426 + context->owner[qual] = NULL; 427 + ds_put_context(context); 428 + return error; 429 + 430 + out_unlock: 431 + spin_unlock(&ds_lock); 432 + ds_put_context(context); 433 + return error; 434 + } 435 + 436 + int ds_request_bts(struct task_struct *task, void *base, size_t size, 437 + ds_ovfl_callback_t ovfl) 438 + { 439 + return ds_request(task, base, size, ovfl, ds_bts); 440 + } 441 + 442 + int ds_request_pebs(struct task_struct *task, void *base, size_t size, 443 + ds_ovfl_callback_t ovfl) 444 + { 445 + return ds_request(task, base, size, ovfl, ds_pebs); 446 + } 447 + 448 + static int ds_release(struct task_struct *task, enum ds_qualifier qual) 449 + { 450 + struct ds_context *context; 451 + int error; 452 + 453 + context = ds_get_context(task); 454 + error = ds_validate_access(context, qual); 455 + if (error < 0) 456 + goto out; 457 + 458 + kfree(context->buffer[qual]); 459 + context->buffer[qual] = NULL; 460 + 461 + current->mm->total_vm -= context->pages[qual]; 462 + current->mm->locked_vm -= context->pages[qual]; 463 + context->pages[qual] = 0; 464 + context->owner[qual] = NULL; 465 + 466 + /* 467 + * we put the context twice: 468 + * once for the ds_get_context 469 + * once for the corresponding ds_request 470 + */ 471 + ds_put_context(context); 472 + out: 473 + ds_put_context(context); 474 
+ return error; 475 + } 476 + 477 + int ds_release_bts(struct task_struct *task) 478 + { 479 + return ds_release(task, ds_bts); 480 + } 481 + 482 + int ds_release_pebs(struct task_struct *task) 483 + { 484 + return ds_release(task, ds_pebs); 485 + } 486 + 487 + static int ds_get_index(struct task_struct *task, size_t *pos, 488 + enum ds_qualifier qual) 489 + { 490 + struct ds_context *context; 491 + unsigned long base, index; 492 + int error; 493 + 494 + context = ds_get_context(task); 495 + error = ds_validate_access(context, qual); 496 + if (error < 0) 497 + goto out; 498 + 499 + base = ds_get(context->ds, qual, ds_buffer_base); 500 + index = ds_get(context->ds, qual, ds_index); 501 + 502 + error = ((index - base) / ds_cfg.sizeof_rec[qual]); 503 + if (pos) 504 + *pos = error; 505 + out: 506 + ds_put_context(context); 507 + return error; 508 + } 509 + 510 + int ds_get_bts_index(struct task_struct *task, size_t *pos) 511 + { 512 + return ds_get_index(task, pos, ds_bts); 513 + } 514 + 515 + int ds_get_pebs_index(struct task_struct *task, size_t *pos) 516 + { 517 + return ds_get_index(task, pos, ds_pebs); 518 + } 519 + 520 + static int ds_get_end(struct task_struct *task, size_t *pos, 521 + enum ds_qualifier qual) 522 + { 523 + struct ds_context *context; 524 + unsigned long base, end; 525 + int error; 526 + 527 + context = ds_get_context(task); 528 + error = ds_validate_access(context, qual); 529 + if (error < 0) 530 + goto out; 531 + 532 + base = ds_get(context->ds, qual, ds_buffer_base); 533 + end = ds_get(context->ds, qual, ds_absolute_maximum); 534 + 535 + error = ((end - base) / ds_cfg.sizeof_rec[qual]); 536 + if (pos) 537 + *pos = error; 538 + out: 539 + ds_put_context(context); 540 + return error; 541 + } 542 + 543 + int ds_get_bts_end(struct task_struct *task, size_t *pos) 544 + { 545 + return ds_get_end(task, pos, ds_bts); 546 + } 547 + 548 + int ds_get_pebs_end(struct task_struct *task, size_t *pos) 549 + { 550 + return ds_get_end(task, pos, ds_pebs); 551 
+ } 552 + 553 + static int ds_access(struct task_struct *task, size_t index, 554 + const void **record, enum ds_qualifier qual) 555 + { 556 + struct ds_context *context; 557 + unsigned long base, idx; 558 + int error; 559 + 560 + if (!record) 384 561 return -EINVAL; 562 + 563 + context = ds_get_context(task); 564 + error = ds_validate_access(context, qual); 565 + if (error < 0) 566 + goto out; 567 + 568 + base = ds_get(context->ds, qual, ds_buffer_base); 569 + idx = base + (index * ds_cfg.sizeof_rec[qual]); 570 + 571 + error = -EINVAL; 572 + if (idx > ds_get(context->ds, qual, ds_absolute_maximum)) 573 + goto out; 574 + 575 + *record = (const void *)idx; 576 + error = ds_cfg.sizeof_rec[qual]; 577 + out: 578 + ds_put_context(context); 579 + return error; 580 + } 581 + 582 + int ds_access_bts(struct task_struct *task, size_t index, const void **record) 583 + { 584 + return ds_access(task, index, record, ds_bts); 585 + } 586 + 587 + int ds_access_pebs(struct task_struct *task, size_t index, const void **record) 588 + { 589 + return ds_access(task, index, record, ds_pebs); 590 + } 591 + 592 + static int ds_write(struct task_struct *task, const void *record, size_t size, 593 + enum ds_qualifier qual, int force) 594 + { 595 + struct ds_context *context; 596 + int error; 597 + 598 + if (!record) 599 + return -EINVAL; 600 + 601 + error = -EPERM; 602 + context = ds_get_context(task); 603 + if (!context) 604 + goto out; 605 + 606 + if (!force) { 607 + error = ds_validate_access(context, qual); 608 + if (error < 0) 609 + goto out; 385 610 } 386 611 387 - bts = bts + ds_cfg.sizeof_bts; 388 - if (bts >= get_bts_absolute_maximum(ds)) 389 - bts = get_bts_buffer_base(ds); 390 - set_bts_index(ds, bts); 612 + error = 0; 613 + while (size) { 614 + unsigned long base, index, end, write_end, int_th; 615 + unsigned long write_size, adj_write_size; 391 616 392 - return ds_cfg.sizeof_bts; 617 + /* 618 + * write as much as possible without producing an 619 + * overflow interrupt. 
620 + * 621 + * interrupt_threshold must either be 622 + * - bigger than absolute_maximum or 623 + * - point to a record between buffer_base and absolute_maximum 624 + * 625 + * index points to a valid record. 626 + */ 627 + base = ds_get(context->ds, qual, ds_buffer_base); 628 + index = ds_get(context->ds, qual, ds_index); 629 + end = ds_get(context->ds, qual, ds_absolute_maximum); 630 + int_th = ds_get(context->ds, qual, ds_interrupt_threshold); 631 + 632 + write_end = min(end, int_th); 633 + 634 + /* if we are already beyond the interrupt threshold, 635 + * we fill the entire buffer */ 636 + if (write_end <= index) 637 + write_end = end; 638 + 639 + if (write_end <= index) 640 + goto out; 641 + 642 + write_size = min((unsigned long) size, write_end - index); 643 + memcpy((void *)index, record, write_size); 644 + 645 + record = (const char *)record + write_size; 646 + size -= write_size; 647 + error += write_size; 648 + 649 + adj_write_size = write_size / ds_cfg.sizeof_rec[qual]; 650 + adj_write_size *= ds_cfg.sizeof_rec[qual]; 651 + 652 + /* zero out trailing bytes */ 653 + memset((char *)index + write_size, 0, 654 + adj_write_size - write_size); 655 + index += adj_write_size; 656 + 657 + if (index >= end) 658 + index = base; 659 + ds_set(context->ds, qual, ds_index, index); 660 + 661 + if (index >= int_th) 662 + ds_overflow(task, context, qual); 663 + } 664 + 665 + out: 666 + ds_put_context(context); 667 + return error; 393 668 } 394 669 395 - unsigned long ds_debugctl_mask(void) 670 + int ds_write_bts(struct task_struct *task, const void *record, size_t size) 396 671 { 397 - return ds_cfg.debugctl_mask; 672 + return ds_write(task, record, size, ds_bts, /* force = */ 0); 398 673 } 399 674 400 - #ifdef __i386__ 401 - static const struct ds_configuration ds_cfg_netburst = { 402 - .sizeof_ds = 9 * 4, 403 - .bts_buffer_base = { 0, 4 }, 404 - .bts_index = { 4, 4 }, 405 - .bts_absolute_maximum = { 8, 4 }, 406 - .bts_interrupt_threshold = { 12, 4 }, 407 - .sizeof_bts 
= 3 * 4, 408 - .from_ip = { 0, 4 }, 409 - .to_ip = { 4, 4 }, 410 - .info_type = { 4, 1 }, 411 - .info_data = { 8, 4 }, 412 - .debugctl_mask = (1<<2)|(1<<3) 675 + int ds_write_pebs(struct task_struct *task, const void *record, size_t size) 676 + { 677 + return ds_write(task, record, size, ds_pebs, /* force = */ 0); 678 + } 679 + 680 + int ds_unchecked_write_bts(struct task_struct *task, 681 + const void *record, size_t size) 682 + { 683 + return ds_write(task, record, size, ds_bts, /* force = */ 1); 684 + } 685 + 686 + int ds_unchecked_write_pebs(struct task_struct *task, 687 + const void *record, size_t size) 688 + { 689 + return ds_write(task, record, size, ds_pebs, /* force = */ 1); 690 + } 691 + 692 + static int ds_reset_or_clear(struct task_struct *task, 693 + enum ds_qualifier qual, int clear) 694 + { 695 + struct ds_context *context; 696 + unsigned long base, end; 697 + int error; 698 + 699 + context = ds_get_context(task); 700 + error = ds_validate_access(context, qual); 701 + if (error < 0) 702 + goto out; 703 + 704 + base = ds_get(context->ds, qual, ds_buffer_base); 705 + end = ds_get(context->ds, qual, ds_absolute_maximum); 706 + 707 + if (clear) 708 + memset((void *)base, 0, end - base); 709 + 710 + ds_set(context->ds, qual, ds_index, base); 711 + 712 + error = 0; 713 + out: 714 + ds_put_context(context); 715 + return error; 716 + } 717 + 718 + int ds_reset_bts(struct task_struct *task) 719 + { 720 + return ds_reset_or_clear(task, ds_bts, /* clear = */ 0); 721 + } 722 + 723 + int ds_reset_pebs(struct task_struct *task) 724 + { 725 + return ds_reset_or_clear(task, ds_pebs, /* clear = */ 0); 726 + } 727 + 728 + int ds_clear_bts(struct task_struct *task) 729 + { 730 + return ds_reset_or_clear(task, ds_bts, /* clear = */ 1); 731 + } 732 + 733 + int ds_clear_pebs(struct task_struct *task) 734 + { 735 + return ds_reset_or_clear(task, ds_pebs, /* clear = */ 1); 736 + } 737 + 738 + int ds_get_pebs_reset(struct task_struct *task, u64 *value) 739 + { 740 + struct 
ds_context *context; 741 + int error; 742 + 743 + if (!value) 744 + return -EINVAL; 745 + 746 + context = ds_get_context(task); 747 + error = ds_validate_access(context, ds_pebs); 748 + if (error < 0) 749 + goto out; 750 + 751 + *value = *(u64 *)(context->ds + (ds_cfg.sizeof_field * 8)); 752 + 753 + error = 0; 754 + out: 755 + ds_put_context(context); 756 + return error; 757 + } 758 + 759 + int ds_set_pebs_reset(struct task_struct *task, u64 value) 760 + { 761 + struct ds_context *context; 762 + int error; 763 + 764 + context = ds_get_context(task); 765 + error = ds_validate_access(context, ds_pebs); 766 + if (error < 0) 767 + goto out; 768 + 769 + *(u64 *)(context->ds + (ds_cfg.sizeof_field * 8)) = value; 770 + 771 + error = 0; 772 + out: 773 + ds_put_context(context); 774 + return error; 775 + } 776 + 777 + static const struct ds_configuration ds_cfg_var = { 778 + .sizeof_ds = sizeof(long) * 12, 779 + .sizeof_field = sizeof(long), 780 + .sizeof_rec[ds_bts] = sizeof(long) * 3, 781 + .sizeof_rec[ds_pebs] = sizeof(long) * 10 413 782 }; 414 - 415 - static const struct ds_configuration ds_cfg_pentium_m = { 416 - .sizeof_ds = 9 * 4, 417 - .bts_buffer_base = { 0, 4 }, 418 - .bts_index = { 4, 4 }, 419 - .bts_absolute_maximum = { 8, 4 }, 420 - .bts_interrupt_threshold = { 12, 4 }, 421 - .sizeof_bts = 3 * 4, 422 - .from_ip = { 0, 4 }, 423 - .to_ip = { 4, 4 }, 424 - .info_type = { 4, 1 }, 425 - .info_data = { 8, 4 }, 426 - .debugctl_mask = (1<<6)|(1<<7) 427 - }; 428 - #endif /* _i386_ */ 429 - 430 - static const struct ds_configuration ds_cfg_core2 = { 431 - .sizeof_ds = 9 * 8, 432 - .bts_buffer_base = { 0, 8 }, 433 - .bts_index = { 8, 8 }, 434 - .bts_absolute_maximum = { 16, 8 }, 435 - .bts_interrupt_threshold = { 24, 8 }, 436 - .sizeof_bts = 3 * 8, 437 - .from_ip = { 0, 8 }, 438 - .to_ip = { 8, 8 }, 439 - .info_type = { 8, 1 }, 440 - .info_data = { 16, 8 }, 441 - .debugctl_mask = (1<<6)|(1<<7)|(1<<9) 783 + static const struct ds_configuration ds_cfg_64 = { 784 + 
.sizeof_ds = 8 * 12, 785 + .sizeof_field = 8, 786 + .sizeof_rec[ds_bts] = 8 * 3, 787 + .sizeof_rec[ds_pebs] = 8 * 10 442 788 }; 443 789 444 790 static inline void ··· 821 429 switch (c->x86) { 822 430 case 0x6: 823 431 switch (c->x86_model) { 824 - #ifdef __i386__ 825 432 case 0xD: 826 433 case 0xE: /* Pentium M */ 827 - ds_configure(&ds_cfg_pentium_m); 434 + ds_configure(&ds_cfg_var); 828 435 break; 829 - #endif /* _i386_ */ 830 436 case 0xF: /* Core2 */ 831 - ds_configure(&ds_cfg_core2); 437 + case 0x1C: /* Atom */ 438 + ds_configure(&ds_cfg_64); 832 439 break; 833 440 default: 834 441 /* sorry, don't know about them */ ··· 836 445 break; 837 446 case 0xF: 838 447 switch (c->x86_model) { 839 - #ifdef __i386__ 840 448 case 0x0: 841 449 case 0x1: 842 450 case 0x2: /* Netburst */ 843 - ds_configure(&ds_cfg_netburst); 451 + ds_configure(&ds_cfg_var); 844 452 break; 845 - #endif /* _i386_ */ 846 453 default: 847 454 /* sorry, don't know about them */ 848 455 break; ··· 851 462 break; 852 463 } 853 464 } 465 + 466 + void ds_free(struct ds_context *context) 467 + { 468 + /* This is called when the task owning the parameter context 469 + * is dying. There should not be any user of that context left 470 + * to disturb us, anymore. */ 471 + unsigned long leftovers = context->count; 472 + while (leftovers--) 473 + ds_put_context(context); 474 + } 475 + #endif /* CONFIG_X86_DS */
+4 -2
arch/x86/kernel/efi.c
··· 414 414 if (memmap.map == NULL) 415 415 printk(KERN_ERR "Could not map the EFI memory map!\n"); 416 416 memmap.map_end = memmap.map + (memmap.nr_map * memmap.desc_size); 417 + 417 418 if (memmap.desc_size != sizeof(efi_memory_desc_t)) 418 - printk(KERN_WARNING "Kernel-defined memdesc" 419 - "doesn't match the one from EFI!\n"); 419 + printk(KERN_WARNING 420 + "Kernel-defined memdesc doesn't match the one from EFI!\n"); 421 + 420 422 if (add_efi_memmap) 421 423 do_add_efi_memmap(); 422 424
+2 -2
arch/x86/kernel/entry_64.S
··· 275 275 ENTRY(ret_from_fork) 276 276 CFI_DEFAULT_STACK 277 277 push kernel_eflags(%rip) 278 - CFI_ADJUST_CFA_OFFSET 4 278 + CFI_ADJUST_CFA_OFFSET 8 279 279 popf # reset kernel eflags 280 - CFI_ADJUST_CFA_OFFSET -4 280 + CFI_ADJUST_CFA_OFFSET -8 281 281 call schedule_tail 282 282 GET_THREAD_INFO(%rcx) 283 283 testl $(_TIF_SYSCALL_TRACE|_TIF_SYSCALL_AUDIT),TI_flags(%rcx)
+2 -3
arch/x86/kernel/head64.c
··· 108 108 } 109 109 load_idt((const struct desc_ptr *)&idt_descr); 110 110 111 - early_printk("Kernel alive\n"); 111 + if (console_loglevel == 10) 112 + early_printk("Kernel alive\n"); 112 113 113 114 x86_64_init_pda(); 114 - 115 - early_printk("Kernel really alive\n"); 116 115 117 116 x86_64_start_reservations(real_mode_data); 118 117 }
+1
arch/x86/kernel/ioport.c
··· 14 14 #include <linux/slab.h> 15 15 #include <linux/thread_info.h> 16 16 #include <linux/syscalls.h> 17 + #include <asm/syscalls.h> 17 18 18 19 /* Set EXTENT bits starting at BASE in BITMAP to value TURN_ON. */ 19 20 static void set_bitmap(unsigned long *bitmap, unsigned int base,
+2 -1
arch/x86/kernel/ipi.c
··· 20 20 21 21 #ifdef CONFIG_X86_32 22 22 #include <mach_apic.h> 23 + #include <mach_ipi.h> 24 + 23 25 /* 24 26 * the following functions deal with sending IPIs between CPUs. 25 27 * ··· 149 147 } 150 148 151 149 /* must come after the send_IPI functions above for inlining */ 152 - #include <mach_ipi.h> 153 150 static int convert_apicid_to_cpu(int apic_id) 154 151 { 155 152 int i;
+1 -1
arch/x86/kernel/irq_32.c
··· 325 325 for_each_online_cpu(j) 326 326 seq_printf(p, "%10u ", 327 327 per_cpu(irq_stat,j).irq_call_count); 328 - seq_printf(p, " function call interrupts\n"); 328 + seq_printf(p, " Function call interrupts\n"); 329 329 seq_printf(p, "TLB: "); 330 330 for_each_online_cpu(j) 331 331 seq_printf(p, "%10u ",
+1 -1
arch/x86/kernel/irq_64.c
··· 129 129 seq_printf(p, "CAL: "); 130 130 for_each_online_cpu(j) 131 131 seq_printf(p, "%10u ", cpu_pda(j)->irq_call_count); 132 - seq_printf(p, " function call interrupts\n"); 132 + seq_printf(p, " Function call interrupts\n"); 133 133 seq_printf(p, "TLB: "); 134 134 for_each_online_cpu(j) 135 135 seq_printf(p, "%10u ", cpu_pda(j)->irq_tlb_count);
+1 -1
arch/x86/kernel/kvm.c
··· 178 178 kvm_deferred_mmu_op(&ftlb, sizeof ftlb); 179 179 } 180 180 181 - static void kvm_release_pt(u32 pfn) 181 + static void kvm_release_pt(unsigned long pfn) 182 182 { 183 183 struct kvm_mmu_op_release_pt rpt = { 184 184 .header.op = KVM_MMU_OP_RELEASE_PT,
+1
arch/x86/kernel/ldt.c
··· 18 18 #include <asm/ldt.h> 19 19 #include <asm/desc.h> 20 20 #include <asm/mmu_context.h> 21 + #include <asm/syscalls.h> 21 22 22 23 #ifdef CONFIG_SMP 23 24 static void flush_ldt(void *current_mm)
+9 -2
arch/x86/kernel/nmi.c
··· 299 299 on_each_cpu(__acpi_nmi_disable, NULL, 1); 300 300 } 301 301 302 + /* 303 + * This function is called as soon the LAPIC NMI watchdog driver has everything 304 + * in place and it's ready to check if the NMIs belong to the NMI watchdog 305 + */ 306 + void cpu_nmi_set_wd_enabled(void) 307 + { 308 + __get_cpu_var(wd_enabled) = 1; 309 + } 310 + 302 311 void setup_apic_nmi_watchdog(void *unused) 303 312 { 304 313 if (__get_cpu_var(wd_enabled)) ··· 320 311 321 312 switch (nmi_watchdog) { 322 313 case NMI_LOCAL_APIC: 323 - /* enable it before to avoid race with handler */ 324 - __get_cpu_var(wd_enabled) = 1; 325 314 if (lapic_watchdog_init(nmi_hz) < 0) { 326 315 __get_cpu_var(wd_enabled) = 0; 327 316 return;
+3 -3
arch/x86/kernel/olpc.c
··· 190 190 static void __init platform_detect(void) 191 191 { 192 192 size_t propsize; 193 - u32 rev; 193 + __be32 rev; 194 194 195 195 if (ofw("getprop", 4, 1, NULL, "board-revision-int", &rev, 4, 196 196 &propsize) || propsize != 4) { 197 197 printk(KERN_ERR "ofw: getprop call failed!\n"); 198 - rev = 0; 198 + rev = cpu_to_be32(0); 199 199 } 200 200 olpc_platform_info.boardrev = be32_to_cpu(rev); 201 201 } ··· 203 203 static void __init platform_detect(void) 204 204 { 205 205 /* stopgap until OFW support is added to the kernel */ 206 - olpc_platform_info.boardrev = be32_to_cpu(0xc2); 206 + olpc_platform_info.boardrev = 0xc2; 207 207 } 208 208 #endif 209 209
+1
arch/x86/kernel/paravirt.c
··· 330 330 #endif 331 331 .wbinvd = native_wbinvd, 332 332 .read_msr = native_read_msr_safe, 333 + .read_msr_amd = native_read_msr_amd_safe, 333 334 .write_msr = native_write_msr_safe, 334 335 .read_tsc = native_read_tsc, 335 336 .read_pmc = native_read_pmc,
+1 -1
arch/x86/kernel/paravirt_patch_32.c
··· 23 23 start = start_##ops##_##x; \ 24 24 end = end_##ops##_##x; \ 25 25 goto patch_site 26 - switch(type) { 26 + switch (type) { 27 27 PATCH_SITE(pv_irq_ops, irq_disable); 28 28 PATCH_SITE(pv_irq_ops, irq_enable); 29 29 PATCH_SITE(pv_irq_ops, restore_fl);
+1 -1
arch/x86/kernel/pci-dma.c
··· 82 82 * using 512M as goal 83 83 */ 84 84 align = 64ULL<<20; 85 - size = round_up(dma32_bootmem_size, align); 85 + size = roundup(dma32_bootmem_size, align); 86 86 dma32_bootmem_ptr = __alloc_bootmem_nopanic(size, align, 87 87 512ULL<<20); 88 88 if (dma32_bootmem_ptr)
+17 -10
arch/x86/kernel/pci-gart_64.c
··· 82 82 static unsigned long next_bit; /* protected by iommu_bitmap_lock */ 83 83 static int need_flush; /* global flush state. set for each gart wrap */ 84 84 85 - static unsigned long alloc_iommu(struct device *dev, int size) 85 + static unsigned long alloc_iommu(struct device *dev, int size, 86 + unsigned long align_mask) 86 87 { 87 88 unsigned long offset, flags; 88 89 unsigned long boundary_size; ··· 91 90 92 91 base_index = ALIGN(iommu_bus_base & dma_get_seg_boundary(dev), 93 92 PAGE_SIZE) >> PAGE_SHIFT; 94 - boundary_size = ALIGN(dma_get_seg_boundary(dev) + 1, 93 + boundary_size = ALIGN((unsigned long long)dma_get_seg_boundary(dev) + 1, 95 94 PAGE_SIZE) >> PAGE_SHIFT; 96 95 97 96 spin_lock_irqsave(&iommu_bitmap_lock, flags); 98 97 offset = iommu_area_alloc(iommu_gart_bitmap, iommu_pages, next_bit, 99 - size, base_index, boundary_size, 0); 98 + size, base_index, boundary_size, align_mask); 100 99 if (offset == -1) { 101 100 need_flush = 1; 102 101 offset = iommu_area_alloc(iommu_gart_bitmap, iommu_pages, 0, 103 - size, base_index, boundary_size, 0); 102 + size, base_index, boundary_size, 103 + align_mask); 104 104 } 105 105 if (offset != -1) { 106 106 next_bit = offset+size; ··· 238 236 * Caller needs to check if the iommu is needed and flush. 
239 237 */ 240 238 static dma_addr_t dma_map_area(struct device *dev, dma_addr_t phys_mem, 241 - size_t size, int dir) 239 + size_t size, int dir, unsigned long align_mask) 242 240 { 243 241 unsigned long npages = iommu_num_pages(phys_mem, size); 244 - unsigned long iommu_page = alloc_iommu(dev, npages); 242 + unsigned long iommu_page = alloc_iommu(dev, npages, align_mask); 245 243 int i; 246 244 247 245 if (iommu_page == -1) { ··· 264 262 static dma_addr_t 265 263 gart_map_simple(struct device *dev, phys_addr_t paddr, size_t size, int dir) 266 264 { 267 - dma_addr_t map = dma_map_area(dev, paddr, size, dir); 265 + dma_addr_t map; 266 + unsigned long align_mask; 267 + 268 + align_mask = (1UL << get_order(size)) - 1; 269 + map = dma_map_area(dev, paddr, size, dir, align_mask); 268 270 269 271 flush_gart(); 270 272 ··· 287 281 if (!need_iommu(dev, paddr, size)) 288 282 return paddr; 289 283 290 - bus = gart_map_simple(dev, paddr, size, dir); 284 + bus = dma_map_area(dev, paddr, size, dir, 0); 285 + flush_gart(); 291 286 292 287 return bus; 293 288 } ··· 347 340 unsigned long addr = sg_phys(s); 348 341 349 342 if (nonforced_iommu(dev, addr, s->length)) { 350 - addr = dma_map_area(dev, addr, s->length, dir); 343 + addr = dma_map_area(dev, addr, s->length, dir, 0); 351 344 if (addr == bad_dma_address) { 352 345 if (i > 0) 353 346 gart_unmap_sg(dev, sg, i, dir); ··· 369 362 int nelems, struct scatterlist *sout, 370 363 unsigned long pages) 371 364 { 372 - unsigned long iommu_start = alloc_iommu(dev, pages); 365 + unsigned long iommu_start = alloc_iommu(dev, pages, 0); 373 366 unsigned long iommu_page = iommu_start; 374 367 struct scatterlist *s; 375 368 int i;
+3 -10
arch/x86/kernel/pcspeaker.c
··· 1 1 #include <linux/platform_device.h> 2 - #include <linux/errno.h> 2 + #include <linux/err.h> 3 3 #include <linux/init.h> 4 4 5 5 static __init int add_pcspkr(void) 6 6 { 7 7 struct platform_device *pd; 8 - int ret; 9 8 10 - pd = platform_device_alloc("pcspkr", -1); 11 - if (!pd) 12 - return -ENOMEM; 9 + pd = platform_device_register_simple("pcspkr", -1, NULL, 0); 13 10 14 - ret = platform_device_add(pd); 15 - if (ret) 16 - platform_device_put(pd); 17 - 18 - return ret; 11 + return IS_ERR(pd) ? PTR_ERR(pd) : 0; 19 12 } 20 13 device_initcall(add_pcspkr);
+2 -1
arch/x86/kernel/process.c
··· 185 185 static void poll_idle(void) 186 186 { 187 187 local_irq_enable(); 188 - cpu_relax(); 188 + while (!need_resched()) 189 + cpu_relax(); 189 190 } 190 191 191 192 /*
+50 -12
arch/x86/kernel/process_32.c
··· 37 37 #include <linux/tick.h> 38 38 #include <linux/percpu.h> 39 39 #include <linux/prctl.h> 40 + #include <linux/dmi.h> 40 41 41 42 #include <asm/uaccess.h> 42 43 #include <asm/pgtable.h> ··· 57 56 #include <asm/cpu.h> 58 57 #include <asm/kdebug.h> 59 58 #include <asm/idle.h> 59 + #include <asm/syscalls.h> 60 + #include <asm/smp.h> 60 61 61 62 asmlinkage void ret_from_fork(void) __asm__("ret_from_fork"); 62 63 ··· 164 161 unsigned long d0, d1, d2, d3, d6, d7; 165 162 unsigned long sp; 166 163 unsigned short ss, gs; 164 + const char *board; 167 165 168 166 if (user_mode_vm(regs)) { 169 167 sp = regs->sp; ··· 177 173 } 178 174 179 175 printk("\n"); 180 - printk("Pid: %d, comm: %s %s (%s %.*s)\n", 176 + 177 + board = dmi_get_system_info(DMI_PRODUCT_NAME); 178 + if (!board) 179 + board = ""; 180 + printk("Pid: %d, comm: %s %s (%s %.*s) %s\n", 181 181 task_pid_nr(current), current->comm, 182 182 print_tainted(), init_utsname()->release, 183 183 (int)strcspn(init_utsname()->version, " "), 184 - init_utsname()->version); 184 + init_utsname()->version, board); 185 185 186 186 printk("EIP: %04x:[<%08lx>] EFLAGS: %08lx CPU: %d\n", 187 187 (u16)regs->cs, regs->ip, regs->flags, ··· 285 277 tss->x86_tss.io_bitmap_base = INVALID_IO_BITMAP_OFFSET; 286 278 put_cpu(); 287 279 } 280 + #ifdef CONFIG_X86_DS 281 + /* Free any DS contexts that have not been properly released. */ 282 + if (unlikely(current->thread.ds_ctx)) { 283 + /* we clear debugctl to make sure DS is not used. 
*/ 284 + update_debugctlmsr(0); 285 + ds_free(current->thread.ds_ctx); 286 + } 287 + #endif /* CONFIG_X86_DS */ 288 288 } 289 289 290 290 void flush_thread(void) ··· 454 438 return 0; 455 439 } 456 440 441 + #ifdef CONFIG_X86_DS 442 + static int update_debugctl(struct thread_struct *prev, 443 + struct thread_struct *next, unsigned long debugctl) 444 + { 445 + unsigned long ds_prev = 0; 446 + unsigned long ds_next = 0; 447 + 448 + if (prev->ds_ctx) 449 + ds_prev = (unsigned long)prev->ds_ctx->ds; 450 + if (next->ds_ctx) 451 + ds_next = (unsigned long)next->ds_ctx->ds; 452 + 453 + if (ds_next != ds_prev) { 454 + /* we clear debugctl to make sure DS 455 + * is not in use when we change it */ 456 + debugctl = 0; 457 + update_debugctlmsr(0); 458 + wrmsr(MSR_IA32_DS_AREA, ds_next, 0); 459 + } 460 + return debugctl; 461 + } 462 + #else 463 + static int update_debugctl(struct thread_struct *prev, 464 + struct thread_struct *next, unsigned long debugctl) 465 + { 466 + return debugctl; 467 + } 468 + #endif /* CONFIG_X86_DS */ 469 + 457 470 static noinline void 458 471 __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p, 459 472 struct tss_struct *tss) ··· 493 448 prev = &prev_p->thread; 494 449 next = &next_p->thread; 495 450 496 - debugctl = prev->debugctlmsr; 497 - if (next->ds_area_msr != prev->ds_area_msr) { 498 - /* we clear debugctl to make sure DS 499 - * is not in use when we change it */ 500 - debugctl = 0; 501 - update_debugctlmsr(0); 502 - wrmsr(MSR_IA32_DS_AREA, next->ds_area_msr, 0); 503 - } 451 + debugctl = update_debugctl(prev, next, prev->debugctlmsr); 504 452 505 453 if (next->debugctlmsr != debugctl) 506 454 update_debugctlmsr(next->debugctlmsr); ··· 517 479 hard_enable_TSC(); 518 480 } 519 481 520 - #ifdef X86_BTS 482 + #ifdef CONFIG_X86_PTRACE_BTS 521 483 if (test_tsk_thread_flag(prev_p, TIF_BTS_TRACE_TS)) 522 484 ptrace_bts_take_timestamp(prev_p, BTS_TASK_DEPARTS); 523 485 524 486 if (test_tsk_thread_flag(next_p, TIF_BTS_TRACE_TS)) 
525 487 ptrace_bts_take_timestamp(next_p, BTS_TASK_ARRIVES); 526 - #endif 488 + #endif /* CONFIG_X86_PTRACE_BTS */ 527 489 528 490 529 491 if (!test_tsk_thread_flag(next_p, TIF_IO_BITMAP)) {
+96 -72
arch/x86/kernel/process_64.c
··· 37 37 #include <linux/kdebug.h> 38 38 #include <linux/tick.h> 39 39 #include <linux/prctl.h> 40 + #include <linux/uaccess.h> 41 + #include <linux/io.h> 40 42 41 - #include <asm/uaccess.h> 42 43 #include <asm/pgtable.h> 43 44 #include <asm/system.h> 44 - #include <asm/io.h> 45 45 #include <asm/processor.h> 46 46 #include <asm/i387.h> 47 47 #include <asm/mmu_context.h> ··· 51 51 #include <asm/proto.h> 52 52 #include <asm/ia32.h> 53 53 #include <asm/idle.h> 54 + #include <asm/syscalls.h> 54 55 55 56 asmlinkage extern void ret_from_fork(void); 56 57 ··· 89 88 #ifdef CONFIG_HOTPLUG_CPU 90 89 DECLARE_PER_CPU(int, cpu_state); 91 90 92 - #include <asm/nmi.h> 91 + #include <linux/nmi.h> 93 92 /* We halt the CPU with physical CPU hotplug */ 94 93 static inline void play_dead(void) 95 94 { ··· 154 153 } 155 154 156 155 /* Prints also some state that isn't saved in the pt_regs */ 157 - void __show_regs(struct pt_regs * regs) 156 + void __show_regs(struct pt_regs *regs) 158 157 { 159 158 unsigned long cr0 = 0L, cr2 = 0L, cr3 = 0L, cr4 = 0L, fs, gs, shadowgs; 160 159 unsigned long d0, d1, d2, d3, d6, d7; ··· 163 162 164 163 printk("\n"); 165 164 print_modules(); 166 - printk("Pid: %d, comm: %.20s %s %s %.*s\n", 165 + printk(KERN_INFO "Pid: %d, comm: %.20s %s %s %.*s\n", 167 166 current->pid, current->comm, print_tainted(), 168 167 init_utsname()->release, 169 168 (int)strcspn(init_utsname()->version, " "), 170 169 init_utsname()->version); 171 - printk("RIP: %04lx:[<%016lx>] ", regs->cs & 0xffff, regs->ip); 170 + printk(KERN_INFO "RIP: %04lx:[<%016lx>] ", regs->cs & 0xffff, regs->ip); 172 171 printk_address(regs->ip, 1); 173 - printk("RSP: %04lx:%016lx EFLAGS: %08lx\n", regs->ss, regs->sp, 174 - regs->flags); 175 - printk("RAX: %016lx RBX: %016lx RCX: %016lx\n", 172 + printk(KERN_INFO "RSP: %04lx:%016lx EFLAGS: %08lx\n", regs->ss, 173 + regs->sp, regs->flags); 174 + printk(KERN_INFO "RAX: %016lx RBX: %016lx RCX: %016lx\n", 176 175 regs->ax, regs->bx, regs->cx); 177 - 
printk("RDX: %016lx RSI: %016lx RDI: %016lx\n", 176 + printk(KERN_INFO "RDX: %016lx RSI: %016lx RDI: %016lx\n", 178 177 regs->dx, regs->si, regs->di); 179 - printk("RBP: %016lx R08: %016lx R09: %016lx\n", 178 + printk(KERN_INFO "RBP: %016lx R08: %016lx R09: %016lx\n", 180 179 regs->bp, regs->r8, regs->r9); 181 - printk("R10: %016lx R11: %016lx R12: %016lx\n", 182 - regs->r10, regs->r11, regs->r12); 183 - printk("R13: %016lx R14: %016lx R15: %016lx\n", 184 - regs->r13, regs->r14, regs->r15); 180 + printk(KERN_INFO "R10: %016lx R11: %016lx R12: %016lx\n", 181 + regs->r10, regs->r11, regs->r12); 182 + printk(KERN_INFO "R13: %016lx R14: %016lx R15: %016lx\n", 183 + regs->r13, regs->r14, regs->r15); 185 184 186 - asm("movl %%ds,%0" : "=r" (ds)); 187 - asm("movl %%cs,%0" : "=r" (cs)); 188 - asm("movl %%es,%0" : "=r" (es)); 185 + asm("movl %%ds,%0" : "=r" (ds)); 186 + asm("movl %%cs,%0" : "=r" (cs)); 187 + asm("movl %%es,%0" : "=r" (es)); 189 188 asm("movl %%fs,%0" : "=r" (fsindex)); 190 189 asm("movl %%gs,%0" : "=r" (gsindex)); 191 190 192 191 rdmsrl(MSR_FS_BASE, fs); 193 - rdmsrl(MSR_GS_BASE, gs); 194 - rdmsrl(MSR_KERNEL_GS_BASE, shadowgs); 192 + rdmsrl(MSR_GS_BASE, gs); 193 + rdmsrl(MSR_KERNEL_GS_BASE, shadowgs); 195 194 196 195 cr0 = read_cr0(); 197 196 cr2 = read_cr2(); 198 197 cr3 = read_cr3(); 199 198 cr4 = read_cr4(); 200 199 201 - printk("FS: %016lx(%04x) GS:%016lx(%04x) knlGS:%016lx\n", 202 - fs,fsindex,gs,gsindex,shadowgs); 203 - printk("CS: %04x DS: %04x ES: %04x CR0: %016lx\n", cs, ds, es, cr0); 204 - printk("CR2: %016lx CR3: %016lx CR4: %016lx\n", cr2, cr3, cr4); 200 + printk(KERN_INFO "FS: %016lx(%04x) GS:%016lx(%04x) knlGS:%016lx\n", 201 + fs, fsindex, gs, gsindex, shadowgs); 202 + printk(KERN_INFO "CS: %04x DS: %04x ES: %04x CR0: %016lx\n", cs, ds, 203 + es, cr0); 204 + printk(KERN_INFO "CR2: %016lx CR3: %016lx CR4: %016lx\n", cr2, cr3, 205 + cr4); 205 206 206 207 get_debugreg(d0, 0); 207 208 get_debugreg(d1, 1); 208 209 get_debugreg(d2, 2); 209 - 
printk("DR0: %016lx DR1: %016lx DR2: %016lx\n", d0, d1, d2); 210 + printk(KERN_INFO "DR0: %016lx DR1: %016lx DR2: %016lx\n", d0, d1, d2); 210 211 get_debugreg(d3, 3); 211 212 get_debugreg(d6, 6); 212 213 get_debugreg(d7, 7); 213 - printk("DR3: %016lx DR6: %016lx DR7: %016lx\n", d3, d6, d7); 214 + printk(KERN_INFO "DR3: %016lx DR6: %016lx DR7: %016lx\n", d3, d6, d7); 214 215 } 215 216 216 217 void show_regs(struct pt_regs *regs) 217 218 { 218 - printk("CPU %d:", smp_processor_id()); 219 + printk(KERN_INFO "CPU %d:", smp_processor_id()); 219 220 __show_regs(regs); 220 221 show_trace(NULL, regs, (void *)(regs + 1), regs->bp); 221 222 } ··· 243 240 t->io_bitmap_max = 0; 244 241 put_cpu(); 245 242 } 243 + #ifdef CONFIG_X86_DS 244 + /* Free any DS contexts that have not been properly released. */ 245 + if (unlikely(t->ds_ctx)) { 246 + /* we clear debugctl to make sure DS is not used. */ 247 + update_debugctlmsr(0); 248 + ds_free(t->ds_ctx); 249 + } 250 + #endif /* CONFIG_X86_DS */ 246 251 } 247 252 248 253 void flush_thread(void) ··· 326 315 327 316 int copy_thread(int nr, unsigned long clone_flags, unsigned long sp, 328 317 unsigned long unused, 329 - struct task_struct * p, struct pt_regs * regs) 318 + struct task_struct *p, struct pt_regs *regs) 330 319 { 331 320 int err; 332 - struct pt_regs * childregs; 321 + struct pt_regs *childregs; 333 322 struct task_struct *me = current; 334 323 335 324 childregs = ((struct pt_regs *) ··· 374 363 if (test_thread_flag(TIF_IA32)) 375 364 err = do_set_thread_area(p, -1, 376 365 (struct user_desc __user *)childregs->si, 0); 377 - else 378 - #endif 379 - err = do_arch_prctl(p, ARCH_SET_FS, childregs->r8); 380 - if (err) 366 + else 367 + #endif 368 + err = do_arch_prctl(p, ARCH_SET_FS, childregs->r8); 369 + if (err) 381 370 goto out; 382 371 } 383 372 err = 0; ··· 484 473 next = &next_p->thread; 485 474 486 475 debugctl = prev->debugctlmsr; 487 - if (next->ds_area_msr != prev->ds_area_msr) { 488 - /* we clear debugctl to make sure 
DS 489 - * is not in use when we change it */ 490 - debugctl = 0; 491 - update_debugctlmsr(0); 492 - wrmsrl(MSR_IA32_DS_AREA, next->ds_area_msr); 476 + 477 + #ifdef CONFIG_X86_DS 478 + { 479 + unsigned long ds_prev = 0, ds_next = 0; 480 + 481 + if (prev->ds_ctx) 482 + ds_prev = (unsigned long)prev->ds_ctx->ds; 483 + if (next->ds_ctx) 484 + ds_next = (unsigned long)next->ds_ctx->ds; 485 + 486 + if (ds_next != ds_prev) { 487 + /* 488 + * We clear debugctl to make sure DS 489 + * is not in use when we change it: 490 + */ 491 + debugctl = 0; 492 + update_debugctlmsr(0); 493 + wrmsrl(MSR_IA32_DS_AREA, ds_next); 494 + } 493 495 } 496 + #endif /* CONFIG_X86_DS */ 494 497 495 498 if (next->debugctlmsr != debugctl) 496 499 update_debugctlmsr(next->debugctlmsr); ··· 542 517 memset(tss->io_bitmap, 0xff, prev->io_bitmap_max); 543 518 } 544 519 545 - #ifdef X86_BTS 520 + #ifdef CONFIG_X86_PTRACE_BTS 546 521 if (test_tsk_thread_flag(prev_p, TIF_BTS_TRACE_TS)) 547 522 ptrace_bts_take_timestamp(prev_p, BTS_TASK_DEPARTS); 548 523 549 524 if (test_tsk_thread_flag(next_p, TIF_BTS_TRACE_TS)) 550 525 ptrace_bts_take_timestamp(next_p, BTS_TASK_ARRIVES); 551 - #endif 526 + #endif /* CONFIG_X86_PTRACE_BTS */ 552 527 } 553 528 554 529 /* ··· 570 545 unsigned fsindex, gsindex; 571 546 572 547 /* we're going to use this soon, after a few expensive things */ 573 - if (next_p->fpu_counter>5) 548 + if (next_p->fpu_counter > 5) 574 549 prefetch(next->xstate); 575 550 576 551 /* ··· 578 553 */ 579 554 load_sp0(tss, next); 580 555 581 - /* 556 + /* 582 557 * Switch DS and ES. 583 558 * This won't pick up thread selector changes, but I guess that is ok. 584 559 */ 585 560 savesegment(es, prev->es); 586 561 if (unlikely(next->es | prev->es)) 587 - loadsegment(es, next->es); 562 + loadsegment(es, next->es); 588 563 589 564 savesegment(ds, prev->ds); 590 565 if (unlikely(next->ds | prev->ds)) ··· 610 585 */ 611 586 arch_leave_lazy_cpu_mode(); 612 587 613 - /* 588 + /* 614 589 * Switch FS and GS. 
615 590 * 616 591 * Segment register != 0 always requires a reload. Also ··· 619 594 */ 620 595 if (unlikely(fsindex | next->fsindex | prev->fs)) { 621 596 loadsegment(fs, next->fsindex); 622 - /* 597 + /* 623 598 * Check if the user used a selector != 0; if yes 624 599 * clear 64bit base, since overloaded base is always 625 600 * mapped to the Null selector 626 601 */ 627 602 if (fsindex) 628 - prev->fs = 0; 603 + prev->fs = 0; 629 604 } 630 605 /* when next process has a 64bit base use it */ 631 606 if (next->fs) ··· 635 610 if (unlikely(gsindex | next->gsindex | prev->gs)) { 636 611 load_gs_index(next->gsindex); 637 612 if (gsindex) 638 - prev->gs = 0; 613 + prev->gs = 0; 639 614 } 640 615 if (next->gs) 641 616 wrmsrl(MSR_KERNEL_GS_BASE, next->gs); ··· 644 619 /* Must be after DS reload */ 645 620 unlazy_fpu(prev_p); 646 621 647 - /* 622 + /* 648 623 * Switch the PDA and FPU contexts. 649 624 */ 650 625 prev->usersp = read_pda(oldrsp); 651 626 write_pda(oldrsp, next->usersp); 652 - write_pda(pcurrent, next_p); 627 + write_pda(pcurrent, next_p); 653 628 654 629 write_pda(kernelstack, 655 630 (unsigned long)task_stack_page(next_p) + ··· 690 665 char __user * __user *envp, struct pt_regs *regs) 691 666 { 692 667 long error; 693 - char * filename; 668 + char *filename; 694 669 695 670 filename = getname(name); 696 671 error = PTR_ERR(filename); ··· 748 723 unsigned long get_wchan(struct task_struct *p) 749 724 { 750 725 unsigned long stack; 751 - u64 fp,ip; 726 + u64 fp, ip; 752 727 int count = 0; 753 728 754 - if (!p || p == current || p->state==TASK_RUNNING) 755 - return 0; 729 + if (!p || p == current || p->state == TASK_RUNNING) 730 + return 0; 756 731 stack = (unsigned long)task_stack_page(p); 757 732 if (p->thread.sp < stack || p->thread.sp > stack+THREAD_SIZE) 758 733 return 0; 759 734 fp = *(u64 *)(p->thread.sp); 760 - do { 735 + do { 761 736 if (fp < (unsigned long)stack || 762 737 fp > (unsigned long)stack+THREAD_SIZE) 763 - return 0; 738 + return 0; 764 
739 ip = *(u64 *)(fp+8); 765 740 if (!in_sched_functions(ip)) 766 741 return ip; 767 - fp = *(u64 *)fp; 768 - } while (count++ < 16); 742 + fp = *(u64 *)fp; 743 + } while (count++ < 16); 769 744 return 0; 770 745 } 771 746 772 747 long do_arch_prctl(struct task_struct *task, int code, unsigned long addr) 773 - { 774 - int ret = 0; 748 + { 749 + int ret = 0; 775 750 int doit = task == current; 776 751 int cpu; 777 752 778 - switch (code) { 753 + switch (code) { 779 754 case ARCH_SET_GS: 780 755 if (addr >= TASK_SIZE_OF(task)) 781 - return -EPERM; 756 + return -EPERM; 782 757 cpu = get_cpu(); 783 - /* handle small bases via the GDT because that's faster to 758 + /* handle small bases via the GDT because that's faster to 784 759 switch. */ 785 - if (addr <= 0xffffffff) { 786 - set_32bit_tls(task, GS_TLS, addr); 787 - if (doit) { 760 + if (addr <= 0xffffffff) { 761 + set_32bit_tls(task, GS_TLS, addr); 762 + if (doit) { 788 763 load_TLS(&task->thread, cpu); 789 - load_gs_index(GS_TLS_SEL); 764 + load_gs_index(GS_TLS_SEL); 790 765 } 791 - task->thread.gsindex = GS_TLS_SEL; 766 + task->thread.gsindex = GS_TLS_SEL; 792 767 task->thread.gs = 0; 793 - } else { 768 + } else { 794 769 task->thread.gsindex = 0; 795 770 task->thread.gs = addr; 796 771 if (doit) { 797 772 load_gs_index(0); 798 773 ret = checking_wrmsrl(MSR_KERNEL_GS_BASE, addr); 799 - } 774 + } 800 775 } 801 776 put_cpu(); 802 777 break; ··· 850 825 rdmsrl(MSR_KERNEL_GS_BASE, base); 851 826 else 852 827 base = task->thread.gs; 853 - } 854 - else 828 + } else 855 829 base = task->thread.gs; 856 830 ret = put_user(base, (unsigned long __user *)addr); 857 831 break;
+281 -209
arch/x86/kernel/ptrace.c
··· 14 14 #include <linux/errno.h> 15 15 #include <linux/ptrace.h> 16 16 #include <linux/regset.h> 17 + #include <linux/tracehook.h> 17 18 #include <linux/user.h> 18 19 #include <linux/elf.h> 19 20 #include <linux/security.h> ··· 70 69 71 70 #define FLAG_MASK FLAG_MASK_32 72 71 73 - static long *pt_regs_access(struct pt_regs *regs, unsigned long regno) 72 + static unsigned long *pt_regs_access(struct pt_regs *regs, unsigned long regno) 74 73 { 75 74 BUILD_BUG_ON(offsetof(struct pt_regs, bx) != 0); 76 75 regno >>= 2; ··· 555 554 return 0; 556 555 } 557 556 558 - #ifdef X86_BTS 557 + #ifdef CONFIG_X86_PTRACE_BTS 558 + /* 559 + * The configuration for a particular BTS hardware implementation. 560 + */ 561 + struct bts_configuration { 562 + /* the size of a BTS record in bytes; at most BTS_MAX_RECORD_SIZE */ 563 + unsigned char sizeof_bts; 564 + /* the size of a field in the BTS record in bytes */ 565 + unsigned char sizeof_field; 566 + /* a bitmask to enable/disable BTS in DEBUGCTL MSR */ 567 + unsigned long debugctl_mask; 568 + }; 569 + static struct bts_configuration bts_cfg; 559 570 560 - static int ptrace_bts_get_size(struct task_struct *child) 571 + #define BTS_MAX_RECORD_SIZE (8 * 3) 572 + 573 + 574 + /* 575 + * Branch Trace Store (BTS) uses the following format. Different 576 + * architectures vary in the size of those fields. 577 + * - source linear address 578 + * - destination linear address 579 + * - flags 580 + * 581 + * Later architectures use 64bit pointers throughout, whereas earlier 582 + * architectures use 32bit pointers in 32bit mode. 583 + * 584 + * We compute the base address for the first 8 fields based on: 585 + * - the field size stored in the DS configuration 586 + * - the relative field position 587 + * 588 + * In order to store additional information in the BTS buffer, we use 589 + * a special source address to indicate that the record requires 590 + * special interpretation. 
591 + * 592 + * Netburst indicated via a bit in the flags field whether the branch 593 + * was predicted; this is ignored. 594 + */ 595 + 596 + enum bts_field { 597 + bts_from = 0, 598 + bts_to, 599 + bts_flags, 600 + 601 + bts_escape = (unsigned long)-1, 602 + bts_qual = bts_to, 603 + bts_jiffies = bts_flags 604 + }; 605 + 606 + static inline unsigned long bts_get(const char *base, enum bts_field field) 561 607 { 562 - if (!child->thread.ds_area_msr) 563 - return -ENXIO; 564 - 565 - return ds_get_bts_index((void *)child->thread.ds_area_msr); 608 + base += (bts_cfg.sizeof_field * field); 609 + return *(unsigned long *)base; 566 610 } 567 611 568 - static int ptrace_bts_read_record(struct task_struct *child, 569 - long index, 612 + static inline void bts_set(char *base, enum bts_field field, unsigned long val) 613 + { 614 + base += (bts_cfg.sizeof_field * field);; 615 + (*(unsigned long *)base) = val; 616 + } 617 + 618 + /* 619 + * Translate a BTS record from the raw format into the bts_struct format 620 + * 621 + * out (out): bts_struct interpretation 622 + * raw: raw BTS record 623 + */ 624 + static void ptrace_bts_translate_record(struct bts_struct *out, const void *raw) 625 + { 626 + memset(out, 0, sizeof(*out)); 627 + if (bts_get(raw, bts_from) == bts_escape) { 628 + out->qualifier = bts_get(raw, bts_qual); 629 + out->variant.jiffies = bts_get(raw, bts_jiffies); 630 + } else { 631 + out->qualifier = BTS_BRANCH; 632 + out->variant.lbr.from_ip = bts_get(raw, bts_from); 633 + out->variant.lbr.to_ip = bts_get(raw, bts_to); 634 + } 635 + } 636 + 637 + static int ptrace_bts_read_record(struct task_struct *child, size_t index, 570 638 struct bts_struct __user *out) 571 639 { 572 640 struct bts_struct ret; 573 - int retval; 574 - int bts_end; 575 - int bts_index; 641 + const void *bts_record; 642 + size_t bts_index, bts_end; 643 + int error; 576 644 577 - if (!child->thread.ds_area_msr) 578 - return -ENXIO; 645 + error = ds_get_bts_end(child, &bts_end); 646 + if (error 
< 0) 647 + return error; 579 648 580 - if (index < 0) 581 - return -EINVAL; 582 - 583 - bts_end = ds_get_bts_end((void *)child->thread.ds_area_msr); 584 649 if (bts_end <= index) 585 650 return -EINVAL; 586 651 587 - /* translate the ptrace bts index into the ds bts index */ 588 - bts_index = ds_get_bts_index((void *)child->thread.ds_area_msr); 589 - bts_index -= (index + 1); 590 - if (bts_index < 0) 591 - bts_index += bts_end; 652 + error = ds_get_bts_index(child, &bts_index); 653 + if (error < 0) 654 + return error; 592 655 593 - retval = ds_read_bts((void *)child->thread.ds_area_msr, 594 - bts_index, &ret); 595 - if (retval < 0) 596 - return retval; 656 + /* translate the ptrace bts index into the ds bts index */ 657 + bts_index += bts_end - (index + 1); 658 + if (bts_end <= bts_index) 659 + bts_index -= bts_end; 660 + 661 + error = ds_access_bts(child, bts_index, &bts_record); 662 + if (error < 0) 663 + return error; 664 + 665 + ptrace_bts_translate_record(&ret, bts_record); 597 666 598 667 if (copy_to_user(out, &ret, sizeof(ret))) 599 668 return -EFAULT; ··· 671 600 return sizeof(ret); 672 601 } 673 602 674 - static int ptrace_bts_clear(struct task_struct *child) 675 - { 676 - if (!child->thread.ds_area_msr) 677 - return -ENXIO; 678 - 679 - return ds_clear((void *)child->thread.ds_area_msr); 680 - } 681 - 682 603 static int ptrace_bts_drain(struct task_struct *child, 683 604 long size, 684 605 struct bts_struct __user *out) 685 606 { 686 - int end, i; 687 - void *ds = (void *)child->thread.ds_area_msr; 607 + struct bts_struct ret; 608 + const unsigned char *raw; 609 + size_t end, i; 610 + int error; 688 611 689 - if (!ds) 690 - return -ENXIO; 691 - 692 - end = ds_get_bts_index(ds); 693 - if (end <= 0) 694 - return end; 612 + error = ds_get_bts_index(child, &end); 613 + if (error < 0) 614 + return error; 695 615 696 616 if (size < (end * sizeof(struct bts_struct))) 697 617 return -EIO; 698 618 699 - for (i = 0; i < end; i++, out++) { 700 - struct bts_struct 
ret; 701 - int retval; 619 + error = ds_access_bts(child, 0, (const void **)&raw); 620 + if (error < 0) 621 + return error; 702 622 703 - retval = ds_read_bts(ds, i, &ret); 704 - if (retval < 0) 705 - return retval; 623 + for (i = 0; i < end; i++, out++, raw += bts_cfg.sizeof_bts) { 624 + ptrace_bts_translate_record(&ret, raw); 706 625 707 626 if (copy_to_user(out, &ret, sizeof(ret))) 708 627 return -EFAULT; 709 628 } 710 629 711 - ds_clear(ds); 630 + error = ds_clear_bts(child); 631 + if (error < 0) 632 + return error; 712 633 713 634 return end; 635 + } 636 + 637 + static void ptrace_bts_ovfl(struct task_struct *child) 638 + { 639 + send_sig(child->thread.bts_ovfl_signal, child, 0); 714 640 } 715 641 716 642 static int ptrace_bts_config(struct task_struct *child, ··· 715 647 const struct ptrace_bts_config __user *ucfg) 716 648 { 717 649 struct ptrace_bts_config cfg; 718 - int bts_size, ret = 0; 719 - void *ds; 650 + int error = 0; 720 651 652 + error = -EOPNOTSUPP; 653 + if (!bts_cfg.sizeof_bts) 654 + goto errout; 655 + 656 + error = -EIO; 721 657 if (cfg_size < sizeof(cfg)) 722 - return -EIO; 658 + goto errout; 723 659 660 + error = -EFAULT; 724 661 if (copy_from_user(&cfg, ucfg, sizeof(cfg))) 725 - return -EFAULT; 662 + goto errout; 726 663 727 - if ((int)cfg.size < 0) 728 - return -EINVAL; 664 + error = -EINVAL; 665 + if ((cfg.flags & PTRACE_BTS_O_SIGNAL) && 666 + !(cfg.flags & PTRACE_BTS_O_ALLOC)) 667 + goto errout; 729 668 730 - bts_size = 0; 731 - ds = (void *)child->thread.ds_area_msr; 732 - if (ds) { 733 - bts_size = ds_get_bts_size(ds); 734 - if (bts_size < 0) 735 - return bts_size; 736 - } 737 - cfg.size = PAGE_ALIGN(cfg.size); 669 + if (cfg.flags & PTRACE_BTS_O_ALLOC) { 670 + ds_ovfl_callback_t ovfl = NULL; 671 + unsigned int sig = 0; 738 672 739 - if (bts_size != cfg.size) { 740 - ret = ptrace_bts_realloc(child, cfg.size, 741 - cfg.flags & PTRACE_BTS_O_CUT_SIZE); 742 - if (ret < 0) 673 + /* we ignore the error in case we were not tracing child */ 674 
+ (void)ds_release_bts(child); 675 + 676 + if (cfg.flags & PTRACE_BTS_O_SIGNAL) { 677 + if (!cfg.signal) 678 + goto errout; 679 + 680 + sig = cfg.signal; 681 + ovfl = ptrace_bts_ovfl; 682 + } 683 + 684 + error = ds_request_bts(child, /* base = */ NULL, cfg.size, ovfl); 685 + if (error < 0) 743 686 goto errout; 744 687 745 - ds = (void *)child->thread.ds_area_msr; 688 + child->thread.bts_ovfl_signal = sig; 746 689 } 747 690 748 - if (cfg.flags & PTRACE_BTS_O_SIGNAL) 749 - ret = ds_set_overflow(ds, DS_O_SIGNAL); 750 - else 751 - ret = ds_set_overflow(ds, DS_O_WRAP); 752 - if (ret < 0) 691 + error = -EINVAL; 692 + if (!child->thread.ds_ctx && cfg.flags) 753 693 goto errout; 754 694 755 695 if (cfg.flags & PTRACE_BTS_O_TRACE) 756 - child->thread.debugctlmsr |= ds_debugctl_mask(); 696 + child->thread.debugctlmsr |= bts_cfg.debugctl_mask; 757 697 else 758 - child->thread.debugctlmsr &= ~ds_debugctl_mask(); 698 + child->thread.debugctlmsr &= ~bts_cfg.debugctl_mask; 759 699 760 700 if (cfg.flags & PTRACE_BTS_O_SCHED) 761 701 set_tsk_thread_flag(child, TIF_BTS_TRACE_TS); 762 702 else 763 703 clear_tsk_thread_flag(child, TIF_BTS_TRACE_TS); 764 704 765 - ret = sizeof(cfg); 705 + error = sizeof(cfg); 766 706 767 707 out: 768 708 if (child->thread.debugctlmsr) ··· 778 702 else 779 703 clear_tsk_thread_flag(child, TIF_DEBUGCTLMSR); 780 704 781 - return ret; 705 + return error; 782 706 783 707 errout: 784 - child->thread.debugctlmsr &= ~ds_debugctl_mask(); 708 + child->thread.debugctlmsr &= ~bts_cfg.debugctl_mask; 785 709 clear_tsk_thread_flag(child, TIF_BTS_TRACE_TS); 786 710 goto out; 787 711 } ··· 790 714 long cfg_size, 791 715 struct ptrace_bts_config __user *ucfg) 792 716 { 793 - void *ds = (void *)child->thread.ds_area_msr; 794 717 struct ptrace_bts_config cfg; 718 + size_t end; 719 + const void *base, *max; 720 + int error; 795 721 796 722 if (cfg_size < sizeof(cfg)) 797 723 return -EIO; 798 724 725 + error = ds_get_bts_end(child, &end); 726 + if (error < 0) 727 + return 
error; 728 + 729 + error = ds_access_bts(child, /* index = */ 0, &base); 730 + if (error < 0) 731 + return error; 732 + 733 + error = ds_access_bts(child, /* index = */ end, &max); 734 + if (error < 0) 735 + return error; 736 + 799 737 memset(&cfg, 0, sizeof(cfg)); 800 - 801 - if (ds) { 802 - cfg.size = ds_get_bts_size(ds); 803 - 804 - if (ds_get_overflow(ds) == DS_O_SIGNAL) 805 - cfg.flags |= PTRACE_BTS_O_SIGNAL; 806 - 807 - if (test_tsk_thread_flag(child, TIF_DEBUGCTLMSR) && 808 - child->thread.debugctlmsr & ds_debugctl_mask()) 809 - cfg.flags |= PTRACE_BTS_O_TRACE; 810 - 811 - if (test_tsk_thread_flag(child, TIF_BTS_TRACE_TS)) 812 - cfg.flags |= PTRACE_BTS_O_SCHED; 813 - } 814 - 738 + cfg.size = (max - base); 739 + cfg.signal = child->thread.bts_ovfl_signal; 815 740 cfg.bts_size = sizeof(struct bts_struct); 741 + 742 + if (cfg.signal) 743 + cfg.flags |= PTRACE_BTS_O_SIGNAL; 744 + 745 + if (test_tsk_thread_flag(child, TIF_DEBUGCTLMSR) && 746 + child->thread.debugctlmsr & bts_cfg.debugctl_mask) 747 + cfg.flags |= PTRACE_BTS_O_TRACE; 748 + 749 + if (test_tsk_thread_flag(child, TIF_BTS_TRACE_TS)) 750 + cfg.flags |= PTRACE_BTS_O_SCHED; 816 751 817 752 if (copy_to_user(ucfg, &cfg, sizeof(cfg))) 818 753 return -EFAULT; ··· 831 744 return sizeof(cfg); 832 745 } 833 746 834 - 835 747 static int ptrace_bts_write_record(struct task_struct *child, 836 748 const struct bts_struct *in) 837 749 { 838 - int retval; 750 + unsigned char bts_record[BTS_MAX_RECORD_SIZE]; 839 751 840 - if (!child->thread.ds_area_msr) 841 - return -ENXIO; 752 + BUG_ON(BTS_MAX_RECORD_SIZE < bts_cfg.sizeof_bts); 842 753 843 - retval = ds_write_bts((void *)child->thread.ds_area_msr, in); 844 - if (retval) 845 - return retval; 754 + memset(bts_record, 0, bts_cfg.sizeof_bts); 755 + switch (in->qualifier) { 756 + case BTS_INVALID: 757 + break; 846 758 847 - return sizeof(*in); 848 - } 759 + case BTS_BRANCH: 760 + bts_set(bts_record, bts_from, in->variant.lbr.from_ip); 761 + bts_set(bts_record, bts_to, 
in->variant.lbr.to_ip); 762 + break; 849 763 850 - static int ptrace_bts_realloc(struct task_struct *child, 851 - int size, int reduce_size) 852 - { 853 - unsigned long rlim, vm; 854 - int ret, old_size; 764 + case BTS_TASK_ARRIVES: 765 + case BTS_TASK_DEPARTS: 766 + bts_set(bts_record, bts_from, bts_escape); 767 + bts_set(bts_record, bts_qual, in->qualifier); 768 + bts_set(bts_record, bts_jiffies, in->variant.jiffies); 769 + break; 855 770 856 - if (size < 0) 771 + default: 857 772 return -EINVAL; 858 - 859 - old_size = ds_get_bts_size((void *)child->thread.ds_area_msr); 860 - if (old_size < 0) 861 - return old_size; 862 - 863 - ret = ds_free((void **)&child->thread.ds_area_msr); 864 - if (ret < 0) 865 - goto out; 866 - 867 - size >>= PAGE_SHIFT; 868 - old_size >>= PAGE_SHIFT; 869 - 870 - current->mm->total_vm -= old_size; 871 - current->mm->locked_vm -= old_size; 872 - 873 - if (size == 0) 874 - goto out; 875 - 876 - rlim = current->signal->rlim[RLIMIT_AS].rlim_cur >> PAGE_SHIFT; 877 - vm = current->mm->total_vm + size; 878 - if (rlim < vm) { 879 - ret = -ENOMEM; 880 - 881 - if (!reduce_size) 882 - goto out; 883 - 884 - size = rlim - current->mm->total_vm; 885 - if (size <= 0) 886 - goto out; 887 773 } 888 774 889 - rlim = current->signal->rlim[RLIMIT_MEMLOCK].rlim_cur >> PAGE_SHIFT; 890 - vm = current->mm->locked_vm + size; 891 - if (rlim < vm) { 892 - ret = -ENOMEM; 893 - 894 - if (!reduce_size) 895 - goto out; 896 - 897 - size = rlim - current->mm->locked_vm; 898 - if (size <= 0) 899 - goto out; 900 - } 901 - 902 - ret = ds_allocate((void **)&child->thread.ds_area_msr, 903 - size << PAGE_SHIFT); 904 - if (ret < 0) 905 - goto out; 906 - 907 - current->mm->total_vm += size; 908 - current->mm->locked_vm += size; 909 - 910 - out: 911 - if (child->thread.ds_area_msr) 912 - set_tsk_thread_flag(child, TIF_DS_AREA_MSR); 913 - else 914 - clear_tsk_thread_flag(child, TIF_DS_AREA_MSR); 915 - 916 - return ret; 775 + /* The writing task will be the switched-to task on a 
context 776 + * switch. It needs to write into the switched-from task's BTS 777 + * buffer. */ 778 + return ds_unchecked_write_bts(child, bts_record, bts_cfg.sizeof_bts); 917 779 } 918 780 919 781 void ptrace_bts_take_timestamp(struct task_struct *tsk, ··· 875 839 876 840 ptrace_bts_write_record(tsk, &rec); 877 841 } 878 - #endif /* X86_BTS */ 842 + 843 + static const struct bts_configuration bts_cfg_netburst = { 844 + .sizeof_bts = sizeof(long) * 3, 845 + .sizeof_field = sizeof(long), 846 + .debugctl_mask = (1<<2)|(1<<3)|(1<<5) 847 + }; 848 + 849 + static const struct bts_configuration bts_cfg_pentium_m = { 850 + .sizeof_bts = sizeof(long) * 3, 851 + .sizeof_field = sizeof(long), 852 + .debugctl_mask = (1<<6)|(1<<7) 853 + }; 854 + 855 + static const struct bts_configuration bts_cfg_core2 = { 856 + .sizeof_bts = 8 * 3, 857 + .sizeof_field = 8, 858 + .debugctl_mask = (1<<6)|(1<<7)|(1<<9) 859 + }; 860 + 861 + static inline void bts_configure(const struct bts_configuration *cfg) 862 + { 863 + bts_cfg = *cfg; 864 + } 865 + 866 + void __cpuinit ptrace_bts_init_intel(struct cpuinfo_x86 *c) 867 + { 868 + switch (c->x86) { 869 + case 0x6: 870 + switch (c->x86_model) { 871 + case 0xD: 872 + case 0xE: /* Pentium M */ 873 + bts_configure(&bts_cfg_pentium_m); 874 + break; 875 + case 0xF: /* Core2 */ 876 + case 0x1C: /* Atom */ 877 + bts_configure(&bts_cfg_core2); 878 + break; 879 + default: 880 + /* sorry, don't know about them */ 881 + break; 882 + } 883 + break; 884 + case 0xF: 885 + switch (c->x86_model) { 886 + case 0x0: 887 + case 0x1: 888 + case 0x2: /* Netburst */ 889 + bts_configure(&bts_cfg_netburst); 890 + break; 891 + default: 892 + /* sorry, don't know about them */ 893 + break; 894 + } 895 + break; 896 + default: 897 + /* sorry, don't know about them */ 898 + break; 899 + } 900 + } 901 + #endif /* CONFIG_X86_PTRACE_BTS */ 879 902 880 903 /* 881 904 * Called by kernel/ptrace.c when detaching.. 
··· 947 852 #ifdef TIF_SYSCALL_EMU 948 853 clear_tsk_thread_flag(child, TIF_SYSCALL_EMU); 949 854 #endif 950 - if (child->thread.ds_area_msr) { 951 - #ifdef X86_BTS 952 - ptrace_bts_realloc(child, 0, 0); 953 - #endif 954 - child->thread.debugctlmsr &= ~ds_debugctl_mask(); 955 - if (!child->thread.debugctlmsr) 956 - clear_tsk_thread_flag(child, TIF_DEBUGCTLMSR); 957 - clear_tsk_thread_flag(child, TIF_BTS_TRACE_TS); 958 - } 855 + #ifdef CONFIG_X86_PTRACE_BTS 856 + (void)ds_release_bts(child); 857 + 858 + child->thread.debugctlmsr &= ~bts_cfg.debugctl_mask; 859 + if (!child->thread.debugctlmsr) 860 + clear_tsk_thread_flag(child, TIF_DEBUGCTLMSR); 861 + 862 + clear_tsk_thread_flag(child, TIF_BTS_TRACE_TS); 863 + #endif /* CONFIG_X86_PTRACE_BTS */ 959 864 } 960 865 961 866 #if defined CONFIG_X86_32 || defined CONFIG_IA32_EMULATION ··· 1075 980 /* 1076 981 * These bits need more cooking - not enabled yet: 1077 982 */ 1078 - #ifdef X86_BTS 983 + #ifdef CONFIG_X86_PTRACE_BTS 1079 984 case PTRACE_BTS_CONFIG: 1080 985 ret = ptrace_bts_config 1081 986 (child, data, (struct ptrace_bts_config __user *)addr); ··· 1087 992 break; 1088 993 1089 994 case PTRACE_BTS_SIZE: 1090 - ret = ptrace_bts_get_size(child); 995 + ret = ds_get_bts_index(child, /* pos = */ NULL); 1091 996 break; 1092 997 1093 998 case PTRACE_BTS_GET: ··· 1096 1001 break; 1097 1002 1098 1003 case PTRACE_BTS_CLEAR: 1099 - ret = ptrace_bts_clear(child); 1004 + ret = ds_clear_bts(child); 1100 1005 break; 1101 1006 1102 1007 case PTRACE_BTS_DRAIN: 1103 1008 ret = ptrace_bts_drain 1104 1009 (child, data, (struct bts_struct __user *) addr); 1105 1010 break; 1106 - #endif 1011 + #endif /* CONFIG_X86_PTRACE_BTS */ 1107 1012 1108 1013 default: 1109 1014 ret = ptrace_request(child, request, addr, data); ··· 1470 1375 force_sig_info(SIGTRAP, &info, tsk); 1471 1376 } 1472 1377 1473 - static void syscall_trace(struct pt_regs *regs) 1474 - { 1475 - if (!(current->ptrace & PT_PTRACED)) 1476 - return; 1477 - 1478 - #if 0 1479 - 
printk("trace %s ip %lx sp %lx ax %d origrax %d caller %lx tiflags %x ptrace %x\n", 1480 - current->comm, 1481 - regs->ip, regs->sp, regs->ax, regs->orig_ax, __builtin_return_address(0), 1482 - current_thread_info()->flags, current->ptrace); 1483 - #endif 1484 - 1485 - ptrace_notify(SIGTRAP | ((current->ptrace & PT_TRACESYSGOOD) 1486 - ? 0x80 : 0)); 1487 - /* 1488 - * this isn't the same as continuing with a signal, but it will do 1489 - * for normal use. strace only continues with a signal if the 1490 - * stopping signal is not SIGTRAP. -brl 1491 - */ 1492 - if (current->exit_code) { 1493 - send_sig(current->exit_code, current, 1); 1494 - current->exit_code = 0; 1495 - } 1496 - } 1497 1378 1498 1379 #ifdef CONFIG_X86_32 1499 1380 # define IS_IA32 1 ··· 1503 1432 if (unlikely(test_thread_flag(TIF_SYSCALL_EMU))) 1504 1433 ret = -1L; 1505 1434 1506 - if (ret || test_thread_flag(TIF_SYSCALL_TRACE)) 1507 - syscall_trace(regs); 1435 + if ((ret || test_thread_flag(TIF_SYSCALL_TRACE)) && 1436 + tracehook_report_syscall_entry(regs)) 1437 + ret = -1L; 1508 1438 1509 1439 if (unlikely(current->audit_context)) { 1510 1440 if (IS_IA32) ··· 1531 1459 audit_syscall_exit(AUDITSC_RESULT(regs->ax), regs->ax); 1532 1460 1533 1461 if (test_thread_flag(TIF_SYSCALL_TRACE)) 1534 - syscall_trace(regs); 1462 + tracehook_report_syscall_exit(regs, 0); 1535 1463 1536 1464 /* 1537 1465 * If TIF_SYSCALL_EMU is set, we only get here because of ··· 1547 1475 * system call instruction. 1548 1476 */ 1549 1477 if (test_thread_flag(TIF_SINGLESTEP) && 1550 - (current->ptrace & PT_PTRACED)) 1478 + tracehook_consider_fatal_signal(current, SIGTRAP, SIG_DFL)) 1551 1479 send_sigtrap(current, regs, 0); 1552 1480 }
+5 -1
arch/x86/kernel/reboot.c
··· 29 29 30 30 static const struct desc_ptr no_idt = {}; 31 31 static int reboot_mode; 32 - enum reboot_type reboot_type = BOOT_KBD; 32 + /* 33 + * Keyboard reset and triple fault may result in INIT, not RESET, which 34 + * doesn't work when we're in vmx root mode. Try ACPI first. 35 + */ 36 + enum reboot_type reboot_type = BOOT_ACPI; 33 37 int reboot_force; 34 38 35 39 #if defined(CONFIG_X86_32) && defined(CONFIG_SMP)
+16
arch/x86/kernel/setup.c
··· 223 223 #define RAMDISK_LOAD_FLAG 0x4000 224 224 225 225 static char __initdata command_line[COMMAND_LINE_SIZE]; 226 + #ifdef CONFIG_CMDLINE_BOOL 227 + static char __initdata builtin_cmdline[COMMAND_LINE_SIZE] = CONFIG_CMDLINE; 228 + #endif 226 229 227 230 #if defined(CONFIG_EDD) || defined(CONFIG_EDD_MODULE) 228 231 struct edd edd; ··· 667 664 data_resource.end = virt_to_phys(_edata)-1; 668 665 bss_resource.start = virt_to_phys(&__bss_start); 669 666 bss_resource.end = virt_to_phys(&__bss_stop)-1; 667 + 668 + #ifdef CONFIG_CMDLINE_BOOL 669 + #ifdef CONFIG_CMDLINE_OVERRIDE 670 + strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE); 671 + #else 672 + if (builtin_cmdline[0]) { 673 + /* append boot loader cmdline to builtin */ 674 + strlcat(builtin_cmdline, " ", COMMAND_LINE_SIZE); 675 + strlcat(builtin_cmdline, boot_command_line, COMMAND_LINE_SIZE); 676 + strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE); 677 + } 678 + #endif 679 + #endif 670 680 671 681 strlcpy(command_line, boot_command_line, COMMAND_LINE_SIZE); 672 682 *cmdline_p = command_line;
+8 -1
arch/x86/kernel/setup_percpu.c
··· 162 162 printk(KERN_INFO 163 163 "cpu %d has no node %d or node-local memory\n", 164 164 cpu, node); 165 + if (ptr) 166 + printk(KERN_DEBUG "per cpu data for cpu%d at %016lx\n", 167 + cpu, __pa(ptr)); 165 168 } 166 - else 169 + else { 167 170 ptr = alloc_bootmem_pages_node(NODE_DATA(node), size); 171 + if (ptr) 172 + printk(KERN_DEBUG "per cpu data for cpu%d on node%d at %016lx\n", 173 + cpu, node, __pa(ptr)); 174 + } 168 175 #endif 169 176 per_cpu_offset(cpu) = ptr - __per_cpu_start; 170 177 memcpy(ptr, __per_cpu_start, __per_cpu_end - __per_cpu_start);
+5
arch/x86/kernel/sigframe.h
··· 24 24 struct ucontext uc; 25 25 struct siginfo info; 26 26 }; 27 + 28 + int ia32_setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info, 29 + sigset_t *set, struct pt_regs *regs); 30 + int ia32_setup_frame(int sig, struct k_sigaction *ka, 31 + sigset_t *set, struct pt_regs *regs); 27 32 #endif
+10 -2
arch/x86/kernel/signal_32.c
··· 17 17 #include <linux/errno.h> 18 18 #include <linux/sched.h> 19 19 #include <linux/wait.h> 20 + #include <linux/tracehook.h> 20 21 #include <linux/elf.h> 21 22 #include <linux/smp.h> 22 23 #include <linux/mm.h> ··· 27 26 #include <asm/uaccess.h> 28 27 #include <asm/i387.h> 29 28 #include <asm/vdso.h> 29 + #include <asm/syscalls.h> 30 30 31 31 #include "sigframe.h" 32 32 ··· 560 558 * handler too. 561 559 */ 562 560 regs->flags &= ~X86_EFLAGS_TF; 563 - if (test_thread_flag(TIF_SINGLESTEP)) 564 - ptrace_notify(SIGTRAP); 565 561 566 562 spin_lock_irq(&current->sighand->siglock); 567 563 sigorsets(&current->blocked, &current->blocked, &ka->sa.sa_mask); ··· 567 567 sigaddset(&current->blocked, sig); 568 568 recalc_sigpending(); 569 569 spin_unlock_irq(&current->sighand->siglock); 570 + 571 + tracehook_signal_handler(sig, info, ka, regs, 572 + test_thread_flag(TIF_SINGLESTEP)); 570 573 571 574 return 0; 572 575 } ··· 663 660 /* deal with pending signal delivery */ 664 661 if (thread_info_flags & _TIF_SIGPENDING) 665 662 do_signal(regs); 663 + 664 + if (thread_info_flags & _TIF_NOTIFY_RESUME) { 665 + clear_thread_flag(TIF_NOTIFY_RESUME); 666 + tracehook_notify_resume(regs); 667 + } 666 668 667 669 clear_thread_flag(TIF_IRET); 668 670 }
+45 -67
arch/x86/kernel/signal_64.c
··· 15 15 #include <linux/errno.h> 16 16 #include <linux/wait.h> 17 17 #include <linux/ptrace.h> 18 + #include <linux/tracehook.h> 18 19 #include <linux/unistd.h> 19 20 #include <linux/stddef.h> 20 21 #include <linux/personality.h> 21 22 #include <linux/compiler.h> 23 + #include <linux/uaccess.h> 24 + 22 25 #include <asm/processor.h> 23 26 #include <asm/ucontext.h> 24 - #include <asm/uaccess.h> 25 27 #include <asm/i387.h> 26 28 #include <asm/proto.h> 27 29 #include <asm/ia32_unistd.h> 28 30 #include <asm/mce.h> 31 + #include <asm/syscall.h> 32 + #include <asm/syscalls.h> 29 33 #include "sigframe.h" 30 34 31 35 #define _BLOCKABLE (~(sigmask(SIGKILL) | sigmask(SIGSTOP))) ··· 44 40 #else 45 41 # define FIX_EFLAGS __FIX_EFLAGS 46 42 #endif 47 - 48 - int ia32_setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info, 49 - sigset_t *set, struct pt_regs * regs); 50 - int ia32_setup_frame(int sig, struct k_sigaction *ka, 51 - sigset_t *set, struct pt_regs * regs); 52 43 53 44 asmlinkage long 54 45 sys_sigaltstack(const stack_t __user *uss, stack_t __user *uoss, ··· 127 128 /* Always make any pending restarted system calls return -EINTR */ 128 129 current_thread_info()->restart_block.fn = do_no_restart_syscall; 129 130 130 - #define COPY(x) err |= __get_user(regs->x, &sc->x) 131 + #define COPY(x) (err |= __get_user(regs->x, &sc->x)) 131 132 132 133 COPY(di); COPY(si); COPY(bp); COPY(sp); COPY(bx); 133 134 COPY(dx); COPY(cx); COPY(ip); ··· 157 158 } 158 159 159 160 { 160 - struct _fpstate __user * buf; 161 + struct _fpstate __user *buf; 161 162 err |= __get_user(buf, &sc->fpstate); 162 163 163 164 if (buf) { ··· 197 198 current->blocked = set; 198 199 recalc_sigpending(); 199 200 spin_unlock_irq(&current->sighand->siglock); 200 - 201 + 201 202 if (restore_sigcontext(regs, &frame->uc.uc_mcontext, &ax)) 202 203 goto badframe; 203 204 ··· 207 208 return ax; 208 209 209 210 badframe: 210 - signal_fault(regs,frame,"sigreturn"); 211 + signal_fault(regs, frame, "sigreturn"); 
211 212 return 0; 212 - } 213 + } 213 214 214 215 /* 215 216 * Set up a signal frame. 216 217 */ 217 218 218 219 static inline int 219 - setup_sigcontext(struct sigcontext __user *sc, struct pt_regs *regs, unsigned long mask, struct task_struct *me) 220 + setup_sigcontext(struct sigcontext __user *sc, struct pt_regs *regs, 221 + unsigned long mask, struct task_struct *me) 220 222 { 221 223 int err = 0; 222 224 ··· 273 273 } 274 274 275 275 static int setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info, 276 - sigset_t *set, struct pt_regs * regs) 276 + sigset_t *set, struct pt_regs *regs) 277 277 { 278 278 struct rt_sigframe __user *frame; 279 - struct _fpstate __user *fp = NULL; 279 + struct _fpstate __user *fp = NULL; 280 280 int err = 0; 281 281 struct task_struct *me = current; 282 282 283 283 if (used_math()) { 284 - fp = get_stack(ka, regs, sizeof(struct _fpstate)); 284 + fp = get_stack(ka, regs, sizeof(struct _fpstate)); 285 285 frame = (void __user *)round_down( 286 286 (unsigned long)fp - sizeof(struct rt_sigframe), 16) - 8; 287 287 288 288 if (!access_ok(VERIFY_WRITE, fp, sizeof(struct _fpstate))) 289 289 goto give_sigsegv; 290 290 291 - if (save_i387(fp) < 0) 292 - err |= -1; 291 + if (save_i387(fp) < 0) 292 + err |= -1; 293 293 } else 294 294 frame = get_stack(ka, regs, sizeof(struct rt_sigframe)) - 8; 295 295 296 296 if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame))) 297 297 goto give_sigsegv; 298 298 299 - if (ka->sa.sa_flags & SA_SIGINFO) { 299 + if (ka->sa.sa_flags & SA_SIGINFO) { 300 300 err |= copy_siginfo_to_user(&frame->info, info); 301 301 if (err) 302 302 goto give_sigsegv; 303 303 } 304 - 304 + 305 305 /* Create the ucontext. 
*/ 306 306 err |= __put_user(0, &frame->uc.uc_flags); 307 307 err |= __put_user(0, &frame->uc.uc_link); ··· 311 311 err |= __put_user(me->sas_ss_size, &frame->uc.uc_stack.ss_size); 312 312 err |= setup_sigcontext(&frame->uc.uc_mcontext, regs, set->sig[0], me); 313 313 err |= __put_user(fp, &frame->uc.uc_mcontext.fpstate); 314 - if (sizeof(*set) == 16) { 314 + if (sizeof(*set) == 16) { 315 315 __put_user(set->sig[0], &frame->uc.uc_sigmask.sig[0]); 316 - __put_user(set->sig[1], &frame->uc.uc_sigmask.sig[1]); 316 + __put_user(set->sig[1], &frame->uc.uc_sigmask.sig[1]); 317 317 } else 318 318 err |= __copy_to_user(&frame->uc.uc_sigmask, set, sizeof(*set)); 319 319 ··· 324 324 err |= __put_user(ka->sa.sa_restorer, &frame->pretcode); 325 325 } else { 326 326 /* could use a vstub here */ 327 - goto give_sigsegv; 327 + goto give_sigsegv; 328 328 } 329 329 330 330 if (err) ··· 332 332 333 333 /* Set up registers for signal handler */ 334 334 regs->di = sig; 335 - /* In case the signal handler was declared without prototypes */ 335 + /* In case the signal handler was declared without prototypes */ 336 336 regs->ax = 0; 337 337 338 338 /* This also works for non SA_SIGINFO handlers because they expect the ··· 355 355 } 356 356 357 357 /* 358 - * Return -1L or the syscall number that @regs is executing. 359 - */ 360 - static long current_syscall(struct pt_regs *regs) 361 - { 362 - /* 363 - * We always sign-extend a -1 value being set here, 364 - * so this is always either -1L or a syscall number. 365 - */ 366 - return regs->orig_ax; 367 - } 368 - 369 - /* 370 - * Return a value that is -EFOO if the system call in @regs->orig_ax 371 - * returned an error. This only works for @regs from @current. 372 - */ 373 - static long current_syscall_ret(struct pt_regs *regs) 374 - { 375 - #ifdef CONFIG_IA32_EMULATION 376 - if (test_thread_flag(TIF_IA32)) 377 - /* 378 - * Sign-extend the value so (int)-EFOO becomes (long)-EFOO 379 - * and will match correctly in comparisons. 
380 - */ 381 - return (int) regs->ax; 382 - #endif 383 - return regs->ax; 384 - } 385 - 386 - /* 387 358 * OK, we're invoking a handler 388 - */ 359 + */ 389 360 390 361 static int 391 362 handle_signal(unsigned long sig, siginfo_t *info, struct k_sigaction *ka, ··· 365 394 int ret; 366 395 367 396 /* Are we from a system call? */ 368 - if (current_syscall(regs) >= 0) { 397 + if (syscall_get_nr(current, regs) >= 0) { 369 398 /* If so, check system call restarting.. */ 370 - switch (current_syscall_ret(regs)) { 399 + switch (syscall_get_error(current, regs)) { 371 400 case -ERESTART_RESTARTBLOCK: 372 401 case -ERESTARTNOHAND: 373 402 regs->ax = -EINTR; ··· 400 429 ret = ia32_setup_rt_frame(sig, ka, info, oldset, regs); 401 430 else 402 431 ret = ia32_setup_frame(sig, ka, oldset, regs); 403 - } else 432 + } else 404 433 #endif 405 434 ret = setup_rt_frame(sig, ka, info, oldset, regs); 406 435 ··· 424 453 * handler too. 425 454 */ 426 455 regs->flags &= ~X86_EFLAGS_TF; 427 - if (test_thread_flag(TIF_SINGLESTEP)) 428 - ptrace_notify(SIGTRAP); 429 456 430 457 spin_lock_irq(&current->sighand->siglock); 431 - sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask); 458 + sigorsets(&current->blocked, &current->blocked, &ka->sa.sa_mask); 432 459 if (!(ka->sa.sa_flags & SA_NODEFER)) 433 - sigaddset(&current->blocked,sig); 460 + sigaddset(&current->blocked, sig); 434 461 recalc_sigpending(); 435 462 spin_unlock_irq(&current->sighand->siglock); 463 + 464 + tracehook_signal_handler(sig, info, ka, regs, 465 + test_thread_flag(TIF_SINGLESTEP)); 436 466 } 437 467 438 468 return ret; ··· 490 518 } 491 519 492 520 /* Did we come from a system call? 
*/ 493 - if (current_syscall(regs) >= 0) { 521 + if (syscall_get_nr(current, regs) >= 0) { 494 522 /* Restart the system call - no handlers present */ 495 - switch (current_syscall_ret(regs)) { 523 + switch (syscall_get_error(current, regs)) { 496 524 case -ERESTARTNOHAND: 497 525 case -ERESTARTSYS: 498 526 case -ERESTARTNOINTR: ··· 530 558 /* deal with pending signal delivery */ 531 559 if (thread_info_flags & _TIF_SIGPENDING) 532 560 do_signal(regs); 561 + 562 + if (thread_info_flags & _TIF_NOTIFY_RESUME) { 563 + clear_thread_flag(TIF_NOTIFY_RESUME); 564 + tracehook_notify_resume(regs); 565 + } 533 566 } 534 567 535 568 void signal_fault(struct pt_regs *regs, void __user *frame, char *where) 536 - { 537 - struct task_struct *me = current; 569 + { 570 + struct task_struct *me = current; 538 571 if (show_unhandled_signals && printk_ratelimit()) { 539 572 printk("%s[%d] bad frame in %s frame:%p ip:%lx sp:%lx orax:%lx", 540 - me->comm,me->pid,where,frame,regs->ip,regs->sp,regs->orig_ax); 573 + me->comm, me->pid, where, frame, regs->ip, 574 + regs->sp, regs->orig_ax); 541 575 print_vma_addr(" in ", regs->ip); 542 576 printk("\n"); 543 577 } 544 578 545 - force_sig(SIGSEGV, me); 546 - } 579 + force_sig(SIGSEGV, me); 580 + }
+3 -6
arch/x86/kernel/smpboot.c
··· 88 88 #define get_idle_for_cpu(x) (per_cpu(idle_thread_array, x)) 89 89 #define set_idle_for_cpu(x, p) (per_cpu(idle_thread_array, x) = (p)) 90 90 #else 91 - struct task_struct *idle_thread_array[NR_CPUS] __cpuinitdata ; 91 + static struct task_struct *idle_thread_array[NR_CPUS] __cpuinitdata ; 92 92 #define get_idle_for_cpu(x) (idle_thread_array[(x)]) 93 93 #define set_idle_for_cpu(x, p) (idle_thread_array[(x)] = (p)) 94 94 #endif ··· 129 129 static cpumask_t cpu_sibling_setup_map; 130 130 131 131 /* Set if we find a B stepping CPU */ 132 - int __cpuinitdata smp_b_stepping; 132 + static int __cpuinitdata smp_b_stepping; 133 133 134 134 #if defined(CONFIG_NUMA) && defined(CONFIG_X86_32) 135 135 ··· 1313 1313 if (!num_processors) 1314 1314 num_processors = 1; 1315 1315 1316 - #ifdef CONFIG_HOTPLUG_CPU 1317 1316 if (additional_cpus == -1) { 1318 1317 if (disabled_cpus > 0) 1319 1318 additional_cpus = disabled_cpus; 1320 1319 else 1321 1320 additional_cpus = 0; 1322 1321 } 1323 - #else 1324 - additional_cpus = 0; 1325 - #endif 1322 + 1326 1323 possible = num_processors + additional_cpus; 1327 1324 if (possible > NR_CPUS) 1328 1325 possible = NR_CPUS;
+2
arch/x86/kernel/sys_i386_32.c
··· 22 22 #include <linux/uaccess.h> 23 23 #include <linux/unistd.h> 24 24 25 + #include <asm/syscalls.h> 26 + 25 27 asmlinkage long sys_mmap2(unsigned long addr, unsigned long len, 26 28 unsigned long prot, unsigned long flags, 27 29 unsigned long fd, unsigned long pgoff)
+23 -21
arch/x86/kernel/sys_x86_64.c
··· 13 13 #include <linux/utsname.h> 14 14 #include <linux/personality.h> 15 15 #include <linux/random.h> 16 + #include <linux/uaccess.h> 16 17 17 - #include <asm/uaccess.h> 18 18 #include <asm/ia32.h> 19 + #include <asm/syscalls.h> 19 20 20 - asmlinkage long sys_mmap(unsigned long addr, unsigned long len, unsigned long prot, unsigned long flags, 21 - unsigned long fd, unsigned long off) 21 + asmlinkage long sys_mmap(unsigned long addr, unsigned long len, 22 + unsigned long prot, unsigned long flags, 23 + unsigned long fd, unsigned long off) 22 24 { 23 25 long error; 24 - struct file * file; 26 + struct file *file; 25 27 26 28 error = -EINVAL; 27 29 if (off & ~PAGE_MASK) ··· 58 56 unmapped base down for this case. This can give 59 57 conflicts with the heap, but we assume that glibc 60 58 malloc knows how to fall back to mmap. Give it 1GB 61 - of playground for now. -AK */ 62 - *begin = 0x40000000; 63 - *end = 0x80000000; 59 + of playground for now. -AK */ 60 + *begin = 0x40000000; 61 + *end = 0x80000000; 64 62 if (current->flags & PF_RANDOMIZE) { 65 63 new_begin = randomize_range(*begin, *begin + 0x02000000, 0); 66 64 if (new_begin) ··· 68 66 } 69 67 } else { 70 68 *begin = TASK_UNMAPPED_BASE; 71 - *end = TASK_SIZE; 69 + *end = TASK_SIZE; 72 70 } 73 - } 71 + } 74 72 75 73 unsigned long 76 74 arch_get_unmapped_area(struct file *filp, unsigned long addr, ··· 80 78 struct vm_area_struct *vma; 81 79 unsigned long start_addr; 82 80 unsigned long begin, end; 83 - 81 + 84 82 if (flags & MAP_FIXED) 85 83 return addr; 86 84 87 - find_start_end(flags, &begin, &end); 85 + find_start_end(flags, &begin, &end); 88 86 89 87 if (len > end) 90 88 return -ENOMEM; ··· 98 96 } 99 97 if (((flags & MAP_32BIT) || test_thread_flag(TIF_IA32)) 100 98 && len <= mm->cached_hole_size) { 101 - mm->cached_hole_size = 0; 99 + mm->cached_hole_size = 0; 102 100 mm->free_area_cache = begin; 103 101 } 104 102 addr = mm->free_area_cache; 105 - if (addr < begin) 106 - addr = begin; 103 + if (addr < 
begin) 104 + addr = begin; 107 105 start_addr = addr; 108 106 109 107 full_search: ··· 129 127 return addr; 130 128 } 131 129 if (addr + mm->cached_hole_size < vma->vm_start) 132 - mm->cached_hole_size = vma->vm_start - addr; 130 + mm->cached_hole_size = vma->vm_start - addr; 133 131 134 132 addr = vma->vm_end; 135 133 } ··· 179 177 vma = find_vma(mm, addr-len); 180 178 if (!vma || addr <= vma->vm_start) 181 179 /* remember the address as a hint for next time */ 182 - return (mm->free_area_cache = addr-len); 180 + return mm->free_area_cache = addr-len; 183 181 } 184 182 185 183 if (mm->mmap_base < len) ··· 196 194 vma = find_vma(mm, addr); 197 195 if (!vma || addr+len <= vma->vm_start) 198 196 /* remember the address as a hint for next time */ 199 - return (mm->free_area_cache = addr); 197 + return mm->free_area_cache = addr; 200 198 201 199 /* remember the largest hole we saw so far */ 202 200 if (addr + mm->cached_hole_size < vma->vm_start) ··· 226 224 } 227 225 228 226 229 - asmlinkage long sys_uname(struct new_utsname __user * name) 227 + asmlinkage long sys_uname(struct new_utsname __user *name) 230 228 { 231 229 int err; 232 230 down_read(&uts_sem); 233 - err = copy_to_user(name, utsname(), sizeof (*name)); 231 + err = copy_to_user(name, utsname(), sizeof(*name)); 234 232 up_read(&uts_sem); 235 - if (personality(current->personality) == PER_LINUX32) 236 - err |= copy_to_user(&name->machine, "i686", 5); 233 + if (personality(current->personality) == PER_LINUX32) 234 + err |= copy_to_user(&name->machine, "i686", 5); 237 235 return err ? -EFAULT : 0; 238 236 }
+2 -2
arch/x86/kernel/syscall_64.c
··· 8 8 #define __NO_STUBS 9 9 10 10 #define __SYSCALL(nr, sym) extern asmlinkage void sym(void) ; 11 - #undef _ASM_X86_64_UNISTD_H_ 11 + #undef ASM_X86__UNISTD_64_H 12 12 #include <asm/unistd_64.h> 13 13 14 14 #undef __SYSCALL 15 15 #define __SYSCALL(nr, sym) [nr] = sym, 16 - #undef _ASM_X86_64_UNISTD_H_ 16 + #undef ASM_X86__UNISTD_64_H 17 17 18 18 typedef void (*sys_call_ptr_t)(void); 19 19
+1
arch/x86/kernel/time_32.c
··· 36 36 #include <asm/arch_hooks.h> 37 37 #include <asm/hpet.h> 38 38 #include <asm/time.h> 39 + #include <asm/timer.h> 39 40 40 41 #include "do_timer.h" 41 42
+1
arch/x86/kernel/tls.c
··· 10 10 #include <asm/ldt.h> 11 11 #include <asm/processor.h> 12 12 #include <asm/proto.h> 13 + #include <asm/syscalls.h> 13 14 14 15 #include "tls.h" 15 16
+36 -30
arch/x86/kernel/traps_64.c
··· 32 32 #include <linux/bug.h> 33 33 #include <linux/nmi.h> 34 34 #include <linux/mm.h> 35 + #include <linux/smp.h> 36 + #include <linux/io.h> 35 37 36 38 #if defined(CONFIG_EDAC) 37 39 #include <linux/edac.h> ··· 47 45 #include <asm/unwind.h> 48 46 #include <asm/desc.h> 49 47 #include <asm/i387.h> 50 - #include <asm/nmi.h> 51 - #include <asm/smp.h> 52 - #include <asm/io.h> 53 48 #include <asm/pgalloc.h> 54 49 #include <asm/proto.h> 55 50 #include <asm/pda.h> ··· 84 85 85 86 void printk_address(unsigned long address, int reliable) 86 87 { 87 - printk(" [<%016lx>] %s%pS\n", address, reliable ? "": "? ", (void *) address); 88 + printk(" [<%016lx>] %s%pS\n", 89 + address, reliable ? "" : "? ", (void *) address); 88 90 } 89 91 90 92 static unsigned long *in_exception_stack(unsigned cpu, unsigned long stack, ··· 98 98 [STACKFAULT_STACK - 1] = "#SS", 99 99 [MCE_STACK - 1] = "#MC", 100 100 #if DEBUG_STKSZ > EXCEPTION_STKSZ 101 - [N_EXCEPTION_STACKS ... N_EXCEPTION_STACKS + DEBUG_STKSZ / EXCEPTION_STKSZ - 2] = "#DB[?]" 101 + [N_EXCEPTION_STACKS ... 
102 + N_EXCEPTION_STACKS + DEBUG_STKSZ / EXCEPTION_STKSZ - 2] = "#DB[?]" 102 103 #endif 103 104 }; 104 105 unsigned k; ··· 164 163 } 165 164 166 165 /* 167 - * x86-64 can have up to three kernel stacks: 166 + * x86-64 can have up to three kernel stacks: 168 167 * process stack 169 168 * interrupt stack 170 169 * severe exception (double fault, nmi, stack fault, debug, mce) hardware stack ··· 220 219 const struct stacktrace_ops *ops, void *data) 221 220 { 222 221 const unsigned cpu = get_cpu(); 223 - unsigned long *irqstack_end = (unsigned long*)cpu_pda(cpu)->irqstackptr; 222 + unsigned long *irqstack_end = (unsigned long *)cpu_pda(cpu)->irqstackptr; 224 223 unsigned used = 0; 225 224 struct thread_info *tinfo; 226 225 ··· 238 237 if (!bp) { 239 238 if (task == current) { 240 239 /* Grab bp right from our regs */ 241 - asm("movq %%rbp, %0" : "=r" (bp) :); 240 + asm("movq %%rbp, %0" : "=r" (bp) : ); 242 241 } else { 243 242 /* bp is the last reg pushed by switch_to */ 244 243 bp = *(unsigned long *) task->thread.sp; ··· 340 339 show_trace_log_lvl(struct task_struct *task, struct pt_regs *regs, 341 340 unsigned long *stack, unsigned long bp, char *log_lvl) 342 341 { 343 - printk("\nCall Trace:\n"); 342 + printk("Call Trace:\n"); 344 343 dump_trace(task, regs, stack, bp, &print_trace_ops, log_lvl); 345 - printk("\n"); 346 344 } 347 345 348 346 void show_trace(struct task_struct *task, struct pt_regs *regs, ··· 357 357 unsigned long *stack; 358 358 int i; 359 359 const int cpu = smp_processor_id(); 360 - unsigned long *irqstack_end = (unsigned long *) (cpu_pda(cpu)->irqstackptr); 361 - unsigned long *irqstack = (unsigned long *) (cpu_pda(cpu)->irqstackptr - IRQSTACKSIZE); 360 + unsigned long *irqstack_end = 361 + (unsigned long *) (cpu_pda(cpu)->irqstackptr); 362 + unsigned long *irqstack = 363 + (unsigned long *) (cpu_pda(cpu)->irqstackptr - IRQSTACKSIZE); 362 364 363 - // debugging aid: "show_stack(NULL, NULL);" prints the 364 - // back trace for this cpu. 
365 + /* 366 + * debugging aid: "show_stack(NULL, NULL);" prints the 367 + * back trace for this cpu. 368 + */ 365 369 366 370 if (sp == NULL) { 367 371 if (task) ··· 390 386 printk(" %016lx", *stack++); 391 387 touch_nmi_watchdog(); 392 388 } 389 + printk("\n"); 393 390 show_trace_log_lvl(task, regs, sp, bp, log_lvl); 394 391 } 395 392 ··· 409 404 410 405 #ifdef CONFIG_FRAME_POINTER 411 406 if (!bp) 412 - asm("movq %%rbp, %0" : "=r" (bp):); 407 + asm("movq %%rbp, %0" : "=r" (bp) : ); 413 408 #endif 414 409 415 410 printk("Pid: %d, comm: %.20s %s %s %.*s\n", ··· 419 414 init_utsname()->version); 420 415 show_trace(NULL, NULL, &stack, bp); 421 416 } 422 - 423 417 EXPORT_SYMBOL(dump_stack); 424 418 425 419 void show_registers(struct pt_regs *regs) ··· 447 443 printk("Stack: "); 448 444 show_stack_log_lvl(NULL, regs, (unsigned long *)sp, 449 445 regs->bp, ""); 450 - printk("\n"); 451 446 452 447 printk(KERN_EMERG "Code: "); 453 448 ··· 496 493 raw_local_irq_save(flags); 497 494 cpu = smp_processor_id(); 498 495 if (!__raw_spin_trylock(&die_lock)) { 499 - if (cpu == die_owner) 496 + if (cpu == die_owner) 500 497 /* nested oops. 
should stop eventually */; 501 498 else 502 499 __raw_spin_lock(&die_lock); ··· 641 638 } 642 639 643 640 #define DO_ERROR(trapnr, signr, str, name) \ 644 - asmlinkage void do_##name(struct pt_regs * regs, long error_code) \ 641 + asmlinkage void do_##name(struct pt_regs *regs, long error_code) \ 645 642 { \ 646 643 if (notify_die(DIE_TRAP, str, regs, error_code, trapnr, signr) \ 647 644 == NOTIFY_STOP) \ ··· 651 648 } 652 649 653 650 #define DO_ERROR_INFO(trapnr, signr, str, name, sicode, siaddr) \ 654 - asmlinkage void do_##name(struct pt_regs * regs, long error_code) \ 651 + asmlinkage void do_##name(struct pt_regs *regs, long error_code) \ 655 652 { \ 656 653 siginfo_t info; \ 657 654 info.si_signo = signr; \ ··· 686 683 preempt_conditional_cli(regs); 687 684 } 688 685 689 - asmlinkage void do_double_fault(struct pt_regs * regs, long error_code) 686 + asmlinkage void do_double_fault(struct pt_regs *regs, long error_code) 690 687 { 691 688 static const char str[] = "double fault"; 692 689 struct task_struct *tsk = current; ··· 781 778 } 782 779 783 780 static notrace __kprobes void 784 - unknown_nmi_error(unsigned char reason, struct pt_regs * regs) 781 + unknown_nmi_error(unsigned char reason, struct pt_regs *regs) 785 782 { 786 - if (notify_die(DIE_NMIUNKNOWN, "nmi", regs, reason, 2, SIGINT) == NOTIFY_STOP) 783 + if (notify_die(DIE_NMIUNKNOWN, "nmi", regs, reason, 2, SIGINT) == 784 + NOTIFY_STOP) 787 785 return; 788 786 printk(KERN_EMERG "Uhhuh. NMI received for unknown reason %02x.\n", 789 787 reason); ··· 886 882 else if (user_mode(eregs)) 887 883 regs = task_pt_regs(current); 888 884 /* Exception from kernel and interrupts are enabled. Move to 889 - kernel process stack. */ 885 + kernel process stack. */ 890 886 else if (eregs->flags & X86_EFLAGS_IF) 891 887 regs = (struct pt_regs *)(eregs->sp -= sizeof(struct pt_regs)); 892 888 if (eregs != regs) ··· 895 891 } 896 892 897 893 /* runs on IST stack. 
*/ 898 - asmlinkage void __kprobes do_debug(struct pt_regs * regs, 894 + asmlinkage void __kprobes do_debug(struct pt_regs *regs, 899 895 unsigned long error_code) 900 896 { 901 897 struct task_struct *tsk = current; ··· 1039 1035 1040 1036 asmlinkage void bad_intr(void) 1041 1037 { 1042 - printk("bad interrupt"); 1038 + printk("bad interrupt"); 1043 1039 } 1044 1040 1045 1041 asmlinkage void do_simd_coprocessor_error(struct pt_regs *regs) ··· 1051 1047 1052 1048 conditional_sti(regs); 1053 1049 if (!user_mode(regs) && 1054 - kernel_math_error(regs, "kernel simd math error", 19)) 1050 + kernel_math_error(regs, "kernel simd math error", 19)) 1055 1051 return; 1056 1052 1057 1053 /* ··· 1096 1092 force_sig_info(SIGFPE, &info, task); 1097 1093 } 1098 1094 1099 - asmlinkage void do_spurious_interrupt_bug(struct pt_regs * regs) 1095 + asmlinkage void do_spurious_interrupt_bug(struct pt_regs *regs) 1100 1096 { 1101 1097 } 1102 1098 ··· 1153 1149 set_intr_gate(0, &divide_error); 1154 1150 set_intr_gate_ist(1, &debug, DEBUG_STACK); 1155 1151 set_intr_gate_ist(2, &nmi, NMI_STACK); 1156 - set_system_gate_ist(3, &int3, DEBUG_STACK); /* int3 can be called from all */ 1157 - set_system_gate(4, &overflow); /* int4 can be called from all */ 1152 + /* int3 can be called from all */ 1153 + set_system_gate_ist(3, &int3, DEBUG_STACK); 1154 + /* int4 can be called from all */ 1155 + set_system_gate(4, &overflow); 1158 1156 set_intr_gate(5, &bounds); 1159 1157 set_intr_gate(6, &invalid_op); 1160 1158 set_intr_gate(7, &device_not_available);
+232 -58
arch/x86/kernel/tsc.c
··· 104 104 /* 105 105 * Read TSC and the reference counters. Take care of SMI disturbance 106 106 */ 107 - static u64 tsc_read_refs(u64 *pm, u64 *hpet) 107 + static u64 tsc_read_refs(u64 *p, int hpet) 108 108 { 109 109 u64 t1, t2; 110 110 int i; ··· 112 112 for (i = 0; i < MAX_RETRIES; i++) { 113 113 t1 = get_cycles(); 114 114 if (hpet) 115 - *hpet = hpet_readl(HPET_COUNTER) & 0xFFFFFFFF; 115 + *p = hpet_readl(HPET_COUNTER) & 0xFFFFFFFF; 116 116 else 117 - *pm = acpi_pm_read_early(); 117 + *p = acpi_pm_read_early(); 118 118 t2 = get_cycles(); 119 119 if ((t2 - t1) < SMI_TRESHOLD) 120 120 return t2; ··· 123 123 } 124 124 125 125 /* 126 + * Calculate the TSC frequency from HPET reference 127 + */ 128 + static unsigned long calc_hpet_ref(u64 deltatsc, u64 hpet1, u64 hpet2) 129 + { 130 + u64 tmp; 131 + 132 + if (hpet2 < hpet1) 133 + hpet2 += 0x100000000ULL; 134 + hpet2 -= hpet1; 135 + tmp = ((u64)hpet2 * hpet_readl(HPET_PERIOD)); 136 + do_div(tmp, 1000000); 137 + do_div(deltatsc, tmp); 138 + 139 + return (unsigned long) deltatsc; 140 + } 141 + 142 + /* 143 + * Calculate the TSC frequency from PMTimer reference 144 + */ 145 + static unsigned long calc_pmtimer_ref(u64 deltatsc, u64 pm1, u64 pm2) 146 + { 147 + u64 tmp; 148 + 149 + if (!pm1 && !pm2) 150 + return ULONG_MAX; 151 + 152 + if (pm2 < pm1) 153 + pm2 += (u64)ACPI_PM_OVRRUN; 154 + pm2 -= pm1; 155 + tmp = pm2 * 1000000000LL; 156 + do_div(tmp, PMTMR_TICKS_PER_SEC); 157 + do_div(deltatsc, tmp); 158 + 159 + return (unsigned long) deltatsc; 160 + } 161 + 162 + #define CAL_MS 10 163 + #define CAL_LATCH (CLOCK_TICK_RATE / (1000 / CAL_MS)) 164 + #define CAL_PIT_LOOPS 1000 165 + 166 + #define CAL2_MS 50 167 + #define CAL2_LATCH (CLOCK_TICK_RATE / (1000 / CAL2_MS)) 168 + #define CAL2_PIT_LOOPS 5000 169 + 170 + 171 + /* 126 172 * Try to calibrate the TSC against the Programmable 127 173 * Interrupt Timer and return the frequency of the TSC 128 174 * in kHz. 129 175 * 130 176 * Return ULONG_MAX on failure to calibrate. 
131 177 */ 132 - static unsigned long pit_calibrate_tsc(void) 178 + static unsigned long pit_calibrate_tsc(u32 latch, unsigned long ms, int loopmin) 133 179 { 134 180 u64 tsc, t1, t2, delta; 135 181 unsigned long tscmin, tscmax; ··· 190 144 * (LSB then MSB) to begin countdown. 191 145 */ 192 146 outb(0xb0, 0x43); 193 - outb((CLOCK_TICK_RATE / (1000 / 50)) & 0xff, 0x42); 194 - outb((CLOCK_TICK_RATE / (1000 / 50)) >> 8, 0x42); 147 + outb(latch & 0xff, 0x42); 148 + outb(latch >> 8, 0x42); 195 149 196 150 tsc = t1 = t2 = get_cycles(); 197 151 ··· 212 166 /* 213 167 * Sanity checks: 214 168 * 215 - * If we were not able to read the PIT more than 5000 169 + * If we were not able to read the PIT more than loopmin 216 170 * times, then we have been hit by a massive SMI 217 171 * 218 172 * If the maximum is 10 times larger than the minimum, 219 173 * then we got hit by an SMI as well. 220 174 */ 221 - if (pitcnt < 5000 || tscmax > 10 * tscmin) 175 + if (pitcnt < loopmin || tscmax > 10 * tscmin) 222 176 return ULONG_MAX; 223 177 224 178 /* Calculate the PIT value */ 225 179 delta = t2 - t1; 226 - do_div(delta, 50); 180 + do_div(delta, ms); 227 181 return delta; 228 182 } 229 183 184 + /* 185 + * This reads the current MSB of the PIT counter, and 186 + * checks if we are running on sufficiently fast and 187 + * non-virtualized hardware. 188 + * 189 + * Our expectations are: 190 + * 191 + * - the PIT is running at roughly 1.19MHz 192 + * 193 + * - each IO is going to take about 1us on real hardware, 194 + * but we allow it to be much faster (by a factor of 10) or 195 + * _slightly_ slower (ie we allow up to a 2us read+counter 196 + * update - anything else implies an unacceptably slow CPU 197 + * or PIT for the fast calibration to work. 198 + * 199 + * - with 256 PIT ticks to read the value, we have 214us to 200 + * see the same MSB (and overhead like doing a single TSC 201 + * read per MSB value etc). 
202 + * 203 + * - We're doing 2 reads per loop (LSB, MSB), and we expect 204 + * them each to take about a microsecond on real hardware. 205 + * So we expect a count value of around 100. But we'll be 206 + * generous, and accept anything over 50. 207 + * 208 + * - if the PIT is stuck, and we see *many* more reads, we 209 + * return early (and the next caller of pit_expect_msb() 210 + * then consider it a failure when they don't see the 211 + * next expected value). 212 + * 213 + * These expectations mean that we know that we have seen the 214 + * transition from one expected value to another with a fairly 215 + * high accuracy, and we didn't miss any events. We can thus 216 + * use the TSC value at the transitions to calculate a pretty 217 + * good value for the TSC frequency. 218 + */ 219 + static inline int pit_expect_msb(unsigned char val) 220 + { 221 + int count = 0; 222 + 223 + for (count = 0; count < 50000; count++) { 224 + /* Ignore LSB */ 225 + inb(0x42); 226 + if (inb(0x42) != val) 227 + break; 228 + } 229 + return count > 50; 230 + } 231 + 232 + /* 233 + * How many MSB values do we want to see? We aim for a 234 + * 15ms calibration, which assuming a 2us counter read 235 + * error should give us roughly 150 ppm precision for 236 + * the calibration. 237 + */ 238 + #define QUICK_PIT_MS 15 239 + #define QUICK_PIT_ITERATIONS (QUICK_PIT_MS * PIT_TICK_RATE / 1000 / 256) 240 + 241 + static unsigned long quick_pit_calibrate(void) 242 + { 243 + /* Set the Gate high, disable speaker */ 244 + outb((inb(0x61) & ~0x02) | 0x01, 0x61); 245 + 246 + /* 247 + * Counter 2, mode 0 (one-shot), binary count 248 + * 249 + * NOTE! Mode 2 decrements by two (and then the 250 + * output is flipped each time, giving the same 251 + * final output frequency as a decrement-by-one), 252 + * so mode 0 is much better when looking at the 253 + * individual counts. 
254 + */ 255 + outb(0xb0, 0x43); 256 + 257 + /* Start at 0xffff */ 258 + outb(0xff, 0x42); 259 + outb(0xff, 0x42); 260 + 261 + if (pit_expect_msb(0xff)) { 262 + int i; 263 + u64 t1, t2, delta; 264 + unsigned char expect = 0xfe; 265 + 266 + t1 = get_cycles(); 267 + for (i = 0; i < QUICK_PIT_ITERATIONS; i++, expect--) { 268 + if (!pit_expect_msb(expect)) 269 + goto failed; 270 + } 271 + t2 = get_cycles(); 272 + 273 + /* 274 + * Make sure we can rely on the second TSC timestamp: 275 + */ 276 + if (!pit_expect_msb(expect)) 277 + goto failed; 278 + 279 + /* 280 + * Ok, if we get here, then we've seen the 281 + * MSB of the PIT decrement QUICK_PIT_ITERATIONS 282 + * times, and each MSB had many hits, so we never 283 + * had any sudden jumps. 284 + * 285 + * As a result, we can depend on there not being 286 + * any odd delays anywhere, and the TSC reads are 287 + * reliable. 288 + * 289 + * kHz = ticks / time-in-seconds / 1000; 290 + * kHz = (t2 - t1) / (QPI * 256 / PIT_TICK_RATE) / 1000 291 + * kHz = ((t2 - t1) * PIT_TICK_RATE) / (QPI * 256 * 1000) 292 + */ 293 + delta = (t2 - t1)*PIT_TICK_RATE; 294 + do_div(delta, QUICK_PIT_ITERATIONS*256*1000); 295 + printk("Fast TSC calibration using PIT\n"); 296 + return delta; 297 + } 298 + failed: 299 + return 0; 300 + } 230 301 231 302 /** 232 303 * native_calibrate_tsc - calibrate the tsc on boot 233 304 */ 234 305 unsigned long native_calibrate_tsc(void) 235 306 { 236 - u64 tsc1, tsc2, delta, pm1, pm2, hpet1, hpet2; 307 + u64 tsc1, tsc2, delta, ref1, ref2; 237 308 unsigned long tsc_pit_min = ULONG_MAX, tsc_ref_min = ULONG_MAX; 238 - unsigned long flags; 239 - int hpet = is_hpet_enabled(), i; 309 + unsigned long flags, latch, ms, fast_calibrate; 310 + int hpet = is_hpet_enabled(), i, loopmin; 311 + 312 + local_irq_save(flags); 313 + fast_calibrate = quick_pit_calibrate(); 314 + local_irq_restore(flags); 315 + if (fast_calibrate) 316 + return fast_calibrate; 240 317 241 318 /* 242 319 * Run 5 calibration loops to get the lowest 
frequency value ··· 385 216 * calibration delay loop as we have to wait for a certain 386 217 * amount of time anyway. 387 218 */ 388 - for (i = 0; i < 5; i++) { 219 + 220 + /* Preset PIT loop values */ 221 + latch = CAL_LATCH; 222 + ms = CAL_MS; 223 + loopmin = CAL_PIT_LOOPS; 224 + 225 + for (i = 0; i < 3; i++) { 389 226 unsigned long tsc_pit_khz; 390 227 391 228 /* ··· 401 226 * read the end value. 402 227 */ 403 228 local_irq_save(flags); 404 - tsc1 = tsc_read_refs(&pm1, hpet ? &hpet1 : NULL); 405 - tsc_pit_khz = pit_calibrate_tsc(); 406 - tsc2 = tsc_read_refs(&pm2, hpet ? &hpet2 : NULL); 229 + tsc1 = tsc_read_refs(&ref1, hpet); 230 + tsc_pit_khz = pit_calibrate_tsc(latch, ms, loopmin); 231 + tsc2 = tsc_read_refs(&ref2, hpet); 407 232 local_irq_restore(flags); 408 233 409 234 /* Pick the lowest PIT TSC calibration so far */ 410 235 tsc_pit_min = min(tsc_pit_min, tsc_pit_khz); 411 236 412 237 /* hpet or pmtimer available ? */ 413 - if (!hpet && !pm1 && !pm2) 238 + if (!hpet && !ref1 && !ref2) 414 239 continue; 415 240 416 241 /* Check, whether the sampling was disturbed by an SMI */ ··· 418 243 continue; 419 244 420 245 tsc2 = (tsc2 - tsc1) * 1000000LL; 246 + if (hpet) 247 + tsc2 = calc_hpet_ref(tsc2, ref1, ref2); 248 + else 249 + tsc2 = calc_pmtimer_ref(tsc2, ref1, ref2); 421 250 422 - if (hpet) { 423 - if (hpet2 < hpet1) 424 - hpet2 += 0x100000000ULL; 425 - hpet2 -= hpet1; 426 - tsc1 = ((u64)hpet2 * hpet_readl(HPET_PERIOD)); 427 - do_div(tsc1, 1000000); 428 - } else { 429 - if (pm2 < pm1) 430 - pm2 += (u64)ACPI_PM_OVRRUN; 431 - pm2 -= pm1; 432 - tsc1 = pm2 * 1000000000LL; 433 - do_div(tsc1, PMTMR_TICKS_PER_SEC); 251 + tsc_ref_min = min(tsc_ref_min, (unsigned long) tsc2); 252 + 253 + /* Check the reference deviation */ 254 + delta = ((u64) tsc_pit_min) * 100; 255 + do_div(delta, tsc_ref_min); 256 + 257 + /* 258 + * If both calibration results are inside a 10% window 259 + * then we can be sure, that the calibration 260 + * succeeded. 
We break out of the loop right away. We 261 + * use the reference value, as it is more precise. 262 + */ 263 + if (delta >= 90 && delta <= 110) { 264 + printk(KERN_INFO 265 + "TSC: PIT calibration matches %s. %d loops\n", 266 + hpet ? "HPET" : "PMTIMER", i + 1); 267 + return tsc_ref_min; 434 268 } 435 269 436 - do_div(tsc2, tsc1); 437 - tsc_ref_min = min(tsc_ref_min, (unsigned long) tsc2); 270 + /* 271 + * Check whether PIT failed more than once. This 272 + * happens in virtualized environments. We need to 273 + * give the virtual PC a slightly longer timeframe for 274 + * the HPET/PMTIMER to make the result precise. 275 + */ 276 + if (i == 1 && tsc_pit_min == ULONG_MAX) { 277 + latch = CAL2_LATCH; 278 + ms = CAL2_MS; 279 + loopmin = CAL2_PIT_LOOPS; 280 + } 438 281 } 439 282 440 283 /* ··· 463 270 printk(KERN_WARNING "TSC: Unable to calibrate against PIT\n"); 464 271 465 272 /* We don't have an alternative source, disable TSC */ 466 - if (!hpet && !pm1 && !pm2) { 273 + if (!hpet && !ref1 && !ref2) { 467 274 printk("TSC: No reference (HPET/PMTIMER) available\n"); 468 275 return 0; 469 276 } ··· 471 278 /* The alternative source failed as well, disable TSC */ 472 279 if (tsc_ref_min == ULONG_MAX) { 473 280 printk(KERN_WARNING "TSC: HPET/PMTIMER calibration " 474 - "failed due to SMI disturbance.\n"); 281 + "failed.\n"); 475 282 return 0; 476 283 } 477 284 ··· 483 290 } 484 291 485 292 /* We don't have an alternative source, use the PIT calibration value */ 486 - if (!hpet && !pm1 && !pm2) { 293 + if (!hpet && !ref1 && !ref2) { 487 294 printk(KERN_INFO "TSC: Using PIT calibration value\n"); 488 295 return tsc_pit_min; 489 296 } 490 297 491 298 /* The alternative source failed, use the PIT calibration value */ 492 299 if (tsc_ref_min == ULONG_MAX) { 493 - printk(KERN_WARNING "TSC: HPET/PMTIMER calibration failed due " 494 - "to SMI disturbance. Using PIT calibration\n"); 300 + printk(KERN_WARNING "TSC: HPET/PMTIMER calibration failed. 
" 301 + "Using PIT calibration\n"); 495 302 return tsc_pit_min; 496 303 } 497 - 498 - /* Check the reference deviation */ 499 - delta = ((u64) tsc_pit_min) * 100; 500 - do_div(delta, tsc_ref_min); 501 - 502 - /* 503 - * If both calibration results are inside a 5% window, the we 504 - * use the lower frequency of those as it is probably the 505 - * closest estimate. 506 - */ 507 - if (delta >= 95 && delta <= 105) { 508 - printk(KERN_INFO "TSC: PIT calibration confirmed by %s.\n", 509 - hpet ? "HPET" : "PMTIMER"); 510 - printk(KERN_INFO "TSC: using %s calibration value\n", 511 - tsc_pit_min <= tsc_ref_min ? "PIT" : 512 - hpet ? "HPET" : "PMTIMER"); 513 - return tsc_pit_min <= tsc_ref_min ? tsc_pit_min : tsc_ref_min; 514 - } 515 - 516 - printk(KERN_WARNING "TSC: PIT calibration deviates from %s: %lu %lu.\n", 517 - hpet ? "HPET" : "PMTIMER", tsc_pit_min, tsc_ref_min); 518 304 519 305 /* 520 306 * The calibration values differ too much. In doubt, we use 521 307 * the PIT value as we know that there are PMTIMERs around 522 - * running at double speed. 308 + * running at double speed. At least we let the user know: 523 309 */ 310 + printk(KERN_WARNING "TSC: PIT calibration deviates from %s: %lu %lu.\n", 311 + hpet ? "HPET" : "PMTIMER", tsc_pit_min, tsc_ref_min); 524 312 printk(KERN_INFO "TSC: Using PIT calibration value\n"); 525 313 return tsc_pit_min; 526 314 }
+1 -15
arch/x86/kernel/visws_quirks.c
··· 25 25 #include <asm/visws/cobalt.h> 26 26 #include <asm/visws/piix4.h> 27 27 #include <asm/arch_hooks.h> 28 + #include <asm/io_apic.h> 28 29 #include <asm/fixmap.h> 29 30 #include <asm/reboot.h> 30 31 #include <asm/setup.h> 31 32 #include <asm/e820.h> 32 - #include <asm/smp.h> 33 33 #include <asm/io.h> 34 34 35 35 #include <mach_ipi.h> 36 36 37 37 #include "mach_apic.h" 38 38 39 - #include <linux/init.h> 40 - #include <linux/smp.h> 41 - 42 39 #include <linux/kernel_stat.h> 43 - #include <linux/interrupt.h> 44 - #include <linux/init.h> 45 40 46 - #include <asm/io.h> 47 - #include <asm/apic.h> 48 41 #include <asm/i8259.h> 49 42 #include <asm/irq_vectors.h> 50 - #include <asm/visws/cobalt.h> 51 43 #include <asm/visws/lithium.h> 52 - #include <asm/visws/piix4.h> 53 44 54 45 #include <linux/sched.h> 55 46 #include <linux/kernel.h> 56 - #include <linux/init.h> 57 47 #include <linux/pci.h> 58 48 #include <linux/pci_ids.h> 59 49 60 50 extern int no_broadcast; 61 51 62 - #include <asm/io.h> 63 52 #include <asm/apic.h> 64 - #include <asm/arch_hooks.h> 65 - #include <asm/visws/cobalt.h> 66 - #include <asm/visws/lithium.h> 67 53 68 54 char visws_board_type = -1; 69 55 char visws_board_rev = -1;
+1
arch/x86/kernel/vm86_32.c
··· 46 46 #include <asm/io.h> 47 47 #include <asm/tlbflush.h> 48 48 #include <asm/irq.h> 49 + #include <asm/syscalls.h> 49 50 50 51 /* 51 52 * Known problems:
+5 -5
arch/x86/kernel/vmi_32.c
··· 393 393 } 394 394 #endif 395 395 396 - static void vmi_allocate_pte(struct mm_struct *mm, u32 pfn) 396 + static void vmi_allocate_pte(struct mm_struct *mm, unsigned long pfn) 397 397 { 398 398 vmi_set_page_type(pfn, VMI_PAGE_L1); 399 399 vmi_ops.allocate_page(pfn, VMI_PAGE_L1, 0, 0, 0); 400 400 } 401 401 402 - static void vmi_allocate_pmd(struct mm_struct *mm, u32 pfn) 402 + static void vmi_allocate_pmd(struct mm_struct *mm, unsigned long pfn) 403 403 { 404 404 /* 405 405 * This call comes in very early, before mem_map is setup. ··· 410 410 vmi_ops.allocate_page(pfn, VMI_PAGE_L2, 0, 0, 0); 411 411 } 412 412 413 - static void vmi_allocate_pmd_clone(u32 pfn, u32 clonepfn, u32 start, u32 count) 413 + static void vmi_allocate_pmd_clone(unsigned long pfn, unsigned long clonepfn, unsigned long start, unsigned long count) 414 414 { 415 415 vmi_set_page_type(pfn, VMI_PAGE_L2 | VMI_PAGE_CLONE); 416 416 vmi_check_page_type(clonepfn, VMI_PAGE_L2); 417 417 vmi_ops.allocate_page(pfn, VMI_PAGE_L2 | VMI_PAGE_CLONE, clonepfn, start, count); 418 418 } 419 419 420 - static void vmi_release_pte(u32 pfn) 420 + static void vmi_release_pte(unsigned long pfn) 421 421 { 422 422 vmi_ops.release_page(pfn, VMI_PAGE_L1); 423 423 vmi_set_page_type(pfn, VMI_PAGE_NORMAL); 424 424 } 425 425 426 - static void vmi_release_pmd(u32 pfn) 426 + static void vmi_release_pmd(unsigned long pfn) 427 427 { 428 428 vmi_ops.release_page(pfn, VMI_PAGE_L2); 429 429 vmi_set_page_type(pfn, VMI_PAGE_NORMAL);
+39 -45
arch/x86/lib/msr-on-cpu.c
··· 16 16 rdmsr(rv->msr_no, rv->l, rv->h); 17 17 } 18 18 19 - static void __rdmsr_safe_on_cpu(void *info) 19 + static void __wrmsr_on_cpu(void *info) 20 20 { 21 21 struct msr_info *rv = info; 22 22 23 - rv->err = rdmsr_safe(rv->msr_no, &rv->l, &rv->h); 23 + wrmsr(rv->msr_no, rv->l, rv->h); 24 24 } 25 25 26 - static int _rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h, int safe) 26 + int rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h) 27 27 { 28 - int err = 0; 28 + int err; 29 29 struct msr_info rv; 30 30 31 31 rv.msr_no = msr_no; 32 - if (safe) { 33 - err = smp_call_function_single(cpu, __rdmsr_safe_on_cpu, 34 - &rv, 1); 35 - err = err ? err : rv.err; 36 - } else { 37 - err = smp_call_function_single(cpu, __rdmsr_on_cpu, &rv, 1); 38 - } 32 + err = smp_call_function_single(cpu, __rdmsr_on_cpu, &rv, 1); 39 33 *l = rv.l; 40 34 *h = rv.h; 41 35 42 36 return err; 43 37 } 44 38 45 - static void __wrmsr_on_cpu(void *info) 39 + int wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h) 40 + { 41 + int err; 42 + struct msr_info rv; 43 + 44 + rv.msr_no = msr_no; 45 + rv.l = l; 46 + rv.h = h; 47 + err = smp_call_function_single(cpu, __wrmsr_on_cpu, &rv, 1); 48 + 49 + return err; 50 + } 51 + 52 + /* These "safe" variants are slower and should be used when the target MSR 53 + may not actually exist. 
*/ 54 + static void __rdmsr_safe_on_cpu(void *info) 46 55 { 47 56 struct msr_info *rv = info; 48 57 49 - wrmsr(rv->msr_no, rv->l, rv->h); 58 + rv->err = rdmsr_safe(rv->msr_no, &rv->l, &rv->h); 50 59 } 51 60 52 61 static void __wrmsr_safe_on_cpu(void *info) ··· 65 56 rv->err = wrmsr_safe(rv->msr_no, rv->l, rv->h); 66 57 } 67 58 68 - static int _wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h, int safe) 59 + int rdmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h) 69 60 { 70 - int err = 0; 61 + int err; 62 + struct msr_info rv; 63 + 64 + rv.msr_no = msr_no; 65 + err = smp_call_function_single(cpu, __rdmsr_safe_on_cpu, &rv, 1); 66 + *l = rv.l; 67 + *h = rv.h; 68 + 69 + return err ? err : rv.err; 70 + } 71 + 72 + int wrmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h) 73 + { 74 + int err; 71 75 struct msr_info rv; 72 76 73 77 rv.msr_no = msr_no; 74 78 rv.l = l; 75 79 rv.h = h; 76 - if (safe) { 77 - err = smp_call_function_single(cpu, __wrmsr_safe_on_cpu, 78 - &rv, 1); 79 - err = err ? err : rv.err; 80 - } else { 81 - err = smp_call_function_single(cpu, __wrmsr_on_cpu, &rv, 1); 82 - } 80 + err = smp_call_function_single(cpu, __wrmsr_safe_on_cpu, &rv, 1); 83 81 84 - return err; 85 - } 86 - 87 - int wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h) 88 - { 89 - return _wrmsr_on_cpu(cpu, msr_no, l, h, 0); 90 - } 91 - 92 - int rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h) 93 - { 94 - return _rdmsr_on_cpu(cpu, msr_no, l, h, 0); 95 - } 96 - 97 - /* These "safe" variants are slower and should be used when the target MSR 98 - may not actually exist. */ 99 - int wrmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h) 100 - { 101 - return _wrmsr_on_cpu(cpu, msr_no, l, h, 1); 102 - } 103 - 104 - int rdmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h) 105 - { 106 - return _rdmsr_on_cpu(cpu, msr_no, l, h, 1); 82 + return err ? err : rv.err; 107 83 } 108 84 109 85 EXPORT_SYMBOL(rdmsr_on_cpu);
+21 -21
arch/x86/lib/string_32.c
··· 22 22 "testb %%al,%%al\n\t" 23 23 "jne 1b" 24 24 : "=&S" (d0), "=&D" (d1), "=&a" (d2) 25 - :"0" (src), "1" (dest) : "memory"); 25 + : "0" (src), "1" (dest) : "memory"); 26 26 return dest; 27 27 } 28 28 EXPORT_SYMBOL(strcpy); ··· 42 42 "stosb\n" 43 43 "2:" 44 44 : "=&S" (d0), "=&D" (d1), "=&c" (d2), "=&a" (d3) 45 - :"0" (src), "1" (dest), "2" (count) : "memory"); 45 + : "0" (src), "1" (dest), "2" (count) : "memory"); 46 46 return dest; 47 47 } 48 48 EXPORT_SYMBOL(strncpy); ··· 60 60 "testb %%al,%%al\n\t" 61 61 "jne 1b" 62 62 : "=&S" (d0), "=&D" (d1), "=&a" (d2), "=&c" (d3) 63 - : "0" (src), "1" (dest), "2" (0), "3" (0xffffffffu): "memory"); 63 + : "0" (src), "1" (dest), "2" (0), "3" (0xffffffffu) : "memory"); 64 64 return dest; 65 65 } 66 66 EXPORT_SYMBOL(strcat); ··· 105 105 "2:\tsbbl %%eax,%%eax\n\t" 106 106 "orb $1,%%al\n" 107 107 "3:" 108 - :"=a" (res), "=&S" (d0), "=&D" (d1) 109 - :"1" (cs), "2" (ct) 110 - :"memory"); 108 + : "=a" (res), "=&S" (d0), "=&D" (d1) 109 + : "1" (cs), "2" (ct) 110 + : "memory"); 111 111 return res; 112 112 } 113 113 EXPORT_SYMBOL(strcmp); ··· 130 130 "3:\tsbbl %%eax,%%eax\n\t" 131 131 "orb $1,%%al\n" 132 132 "4:" 133 - :"=a" (res), "=&S" (d0), "=&D" (d1), "=&c" (d2) 134 - :"1" (cs), "2" (ct), "3" (count) 135 - :"memory"); 133 + : "=a" (res), "=&S" (d0), "=&D" (d1), "=&c" (d2) 134 + : "1" (cs), "2" (ct), "3" (count) 135 + : "memory"); 136 136 return res; 137 137 } 138 138 EXPORT_SYMBOL(strncmp); ··· 152 152 "movl $1,%1\n" 153 153 "2:\tmovl %1,%0\n\t" 154 154 "decl %0" 155 - :"=a" (res), "=&S" (d0) 156 - :"1" (s), "0" (c) 157 - :"memory"); 155 + : "=a" (res), "=&S" (d0) 156 + : "1" (s), "0" (c) 157 + : "memory"); 158 158 return res; 159 159 } 160 160 EXPORT_SYMBOL(strchr); ··· 169 169 "scasb\n\t" 170 170 "notl %0\n\t" 171 171 "decl %0" 172 - :"=c" (res), "=&D" (d0) 173 - :"1" (s), "a" (0), "0" (0xffffffffu) 174 - :"memory"); 172 + : "=c" (res), "=&D" (d0) 173 + : "1" (s), "a" (0), "0" (0xffffffffu) 174 + : "memory"); 175 175 return 
res; 176 176 } 177 177 EXPORT_SYMBOL(strlen); ··· 189 189 "je 1f\n\t" 190 190 "movl $1,%0\n" 191 191 "1:\tdecl %0" 192 - :"=D" (res), "=&c" (d0) 193 - :"a" (c), "0" (cs), "1" (count) 194 - :"memory"); 192 + : "=D" (res), "=&c" (d0) 193 + : "a" (c), "0" (cs), "1" (count) 194 + : "memory"); 195 195 return res; 196 196 } 197 197 EXPORT_SYMBOL(memchr); ··· 228 228 "cmpl $-1,%1\n\t" 229 229 "jne 1b\n" 230 230 "3:\tsubl %2,%0" 231 - :"=a" (res), "=&d" (d0) 232 - :"c" (s), "1" (count) 233 - :"memory"); 231 + : "=a" (res), "=&d" (d0) 232 + : "c" (s), "1" (count) 233 + : "memory"); 234 234 return res; 235 235 } 236 236 EXPORT_SYMBOL(strnlen);
+3 -3
arch/x86/lib/strstr_32.c
··· 23 23 "jne 1b\n\t" 24 24 "xorl %%eax,%%eax\n\t" 25 25 "2:" 26 - :"=a" (__res), "=&c" (d0), "=&S" (d1) 27 - :"0" (0), "1" (0xffffffff), "2" (cs), "g" (ct) 28 - :"dx", "di"); 26 + : "=a" (__res), "=&c" (d0), "=&S" (d1) 27 + : "0" (0), "1" (0xffffffff), "2" (cs), "g" (ct) 28 + : "dx", "di"); 29 29 return __res; 30 30 } 31 31
+3 -1
arch/x86/mach-default/setup.c
··· 10 10 #include <asm/e820.h> 11 11 #include <asm/setup.h> 12 12 13 + #include <mach_ipi.h> 14 + 13 15 #ifdef CONFIG_HOTPLUG_CPU 14 16 #define DEFAULT_SEND_IPI (1) 15 17 #else 16 18 #define DEFAULT_SEND_IPI (0) 17 19 #endif 18 20 19 - int no_broadcast=DEFAULT_SEND_IPI; 21 + int no_broadcast = DEFAULT_SEND_IPI; 20 22 21 23 /** 22 24 * pre_intr_init_hook - initialisation prior to setting up interrupt vectors
+1 -1
arch/x86/mm/discontig_32.c
··· 328 328 329 329 get_memcfg_numa(); 330 330 331 - kva_pages = round_up(calculate_numa_remap_pages(), PTRS_PER_PTE); 331 + kva_pages = roundup(calculate_numa_remap_pages(), PTRS_PER_PTE); 332 332 333 333 kva_target_pfn = round_down(max_low_pfn - kva_pages, PTRS_PER_PTE); 334 334 do {
+2 -2
arch/x86/mm/dump_pagetables.c
··· 148 148 * we have now. "break" is either changing perms, levels or 149 149 * address space marker. 150 150 */ 151 - prot = pgprot_val(new_prot) & ~(PTE_PFN_MASK); 152 - cur = pgprot_val(st->current_prot) & ~(PTE_PFN_MASK); 151 + prot = pgprot_val(new_prot) & PTE_FLAGS_MASK; 152 + cur = pgprot_val(st->current_prot) & PTE_FLAGS_MASK; 153 153 154 154 if (!st->level) { 155 155 /* First entry */
+1 -2
arch/x86/mm/fault.c
··· 35 35 #include <asm/tlbflush.h> 36 36 #include <asm/proto.h> 37 37 #include <asm-generic/sections.h> 38 + #include <asm/traps.h> 38 39 39 40 /* 40 41 * Page fault error code bits ··· 357 356 #endif 358 357 return 0; 359 358 } 360 - 361 - void do_invalid_op(struct pt_regs *, unsigned long); 362 359 363 360 static int is_f00f_bug(struct pt_regs *regs, unsigned long address) 364 361 {
+1
arch/x86/mm/init_32.c
··· 47 47 #include <asm/paravirt.h> 48 48 #include <asm/setup.h> 49 49 #include <asm/cacheflush.h> 50 + #include <asm/smp.h> 50 51 51 52 unsigned int __VMALLOC_RESERVE = 128 << 20; 52 53
+4 -4
arch/x86/mm/init_64.c
··· 225 225 void __init cleanup_highmap(void) 226 226 { 227 227 unsigned long vaddr = __START_KERNEL_map; 228 - unsigned long end = round_up((unsigned long)_end, PMD_SIZE) - 1; 228 + unsigned long end = roundup((unsigned long)_end, PMD_SIZE) - 1; 229 229 pmd_t *pmd = level2_kernel_pgt; 230 230 pmd_t *last_pmd = pmd + PTRS_PER_PMD; 231 231 ··· 451 451 unsigned long puds, pmds, ptes, tables, start; 452 452 453 453 puds = (end + PUD_SIZE - 1) >> PUD_SHIFT; 454 - tables = round_up(puds * sizeof(pud_t), PAGE_SIZE); 454 + tables = roundup(puds * sizeof(pud_t), PAGE_SIZE); 455 455 if (direct_gbpages) { 456 456 unsigned long extra; 457 457 extra = end - ((end>>PUD_SHIFT) << PUD_SHIFT); 458 458 pmds = (extra + PMD_SIZE - 1) >> PMD_SHIFT; 459 459 } else 460 460 pmds = (end + PMD_SIZE - 1) >> PMD_SHIFT; 461 - tables += round_up(pmds * sizeof(pmd_t), PAGE_SIZE); 461 + tables += roundup(pmds * sizeof(pmd_t), PAGE_SIZE); 462 462 463 463 if (cpu_has_pse) { 464 464 unsigned long extra; ··· 466 466 ptes = (extra + PAGE_SIZE - 1) >> PAGE_SHIFT; 467 467 } else 468 468 ptes = (end + PAGE_SIZE - 1) >> PAGE_SHIFT; 469 - tables += round_up(ptes * sizeof(pte_t), PAGE_SIZE); 469 + tables += roundup(ptes * sizeof(pte_t), PAGE_SIZE); 470 470 471 471 /* 472 472 * RED-PEN putting page tables only on node 0 could
+2 -2
arch/x86/mm/ioremap.c
··· 421 421 return; 422 422 } 423 423 424 - int __initdata early_ioremap_debug; 424 + static int __initdata early_ioremap_debug; 425 425 426 426 static int __init early_ioremap_debug_setup(char *str) 427 427 { ··· 547 547 } 548 548 549 549 550 - int __initdata early_ioremap_nested; 550 + static int __initdata early_ioremap_nested; 551 551 552 552 static int __init check_early_ioremap_leak(void) 553 553 {
+5 -5
arch/x86/mm/numa_64.c
··· 79 79 return 0; 80 80 81 81 addr = 0x8000; 82 - nodemap_size = round_up(sizeof(s16) * memnodemapsize, L1_CACHE_BYTES); 82 + nodemap_size = roundup(sizeof(s16) * memnodemapsize, L1_CACHE_BYTES); 83 83 nodemap_addr = find_e820_area(addr, max_pfn<<PAGE_SHIFT, 84 84 nodemap_size, L1_CACHE_BYTES); 85 85 if (nodemap_addr == -1UL) { ··· 176 176 unsigned long start_pfn, last_pfn, bootmap_pages, bootmap_size; 177 177 unsigned long bootmap_start, nodedata_phys; 178 178 void *bootmap; 179 - const int pgdat_size = round_up(sizeof(pg_data_t), PAGE_SIZE); 179 + const int pgdat_size = roundup(sizeof(pg_data_t), PAGE_SIZE); 180 180 int nid; 181 181 182 - start = round_up(start, ZONE_ALIGN); 182 + start = roundup(start, ZONE_ALIGN); 183 183 184 184 printk(KERN_INFO "Bootmem setup node %d %016lx-%016lx\n", nodeid, 185 185 start, end); ··· 210 210 bootmap_pages = bootmem_bootmap_pages(last_pfn - start_pfn); 211 211 nid = phys_to_nid(nodedata_phys); 212 212 if (nid == nodeid) 213 - bootmap_start = round_up(nodedata_phys + pgdat_size, PAGE_SIZE); 213 + bootmap_start = roundup(nodedata_phys + pgdat_size, PAGE_SIZE); 214 214 else 215 - bootmap_start = round_up(start, PAGE_SIZE); 215 + bootmap_start = roundup(start, PAGE_SIZE); 216 216 /* 217 217 * SMP_CACHE_BYTES could be enough, but init_bootmem_node like 218 218 * to use that to align to PAGE_SIZE
+3 -1
arch/x86/mm/pageattr.c
··· 84 84 85 85 static inline unsigned long highmap_end_pfn(void) 86 86 { 87 - return __pa(round_up((unsigned long)_end, PMD_SIZE)) >> PAGE_SHIFT; 87 + return __pa(roundup((unsigned long)_end, PMD_SIZE)) >> PAGE_SHIFT; 88 88 } 89 89 90 90 #endif ··· 906 906 { 907 907 return change_page_attr_clear(addr, numpages, __pgprot(_PAGE_RW)); 908 908 } 909 + EXPORT_SYMBOL_GPL(set_memory_ro); 909 910 910 911 int set_memory_rw(unsigned long addr, int numpages) 911 912 { 912 913 return change_page_attr_set(addr, numpages, __pgprot(_PAGE_RW)); 913 914 } 915 + EXPORT_SYMBOL_GPL(set_memory_rw); 914 916 915 917 int set_memory_np(unsigned long addr, int numpages) 916 918 {
+2 -4
arch/x86/mm/pgtable.c
··· 63 63 #define UNSHARED_PTRS_PER_PGD \ 64 64 (SHARED_KERNEL_PMD ? KERNEL_PGD_BOUNDARY : PTRS_PER_PGD) 65 65 66 - static void pgd_ctor(void *p) 66 + static void pgd_ctor(pgd_t *pgd) 67 67 { 68 - pgd_t *pgd = p; 69 - 70 68 /* If the pgd points to a shared pagetable level (either the 71 69 ptes in non-PAE, or shared PMD in PAE), then just copy the 72 70 references from swapper_pg_dir. */ ··· 85 87 pgd_list_add(pgd); 86 88 } 87 89 88 - static void pgd_dtor(void *pgd) 90 + static void pgd_dtor(pgd_t *pgd) 89 91 { 90 92 unsigned long flags; /* can be called from interrupt context */ 91 93
+2 -1
arch/x86/mm/pgtable_32.c
··· 123 123 if (!arg) 124 124 return -EINVAL; 125 125 126 - __VMALLOC_RESERVE = memparse(arg, &arg); 126 + /* Add VMALLOC_OFFSET to the parsed value due to vm area guard hole*/ 127 + __VMALLOC_RESERVE = memparse(arg, &arg) + VMALLOC_OFFSET; 127 128 return 0; 128 129 } 129 130 early_param("vmalloc", parse_vmalloc);
+87 -88
arch/x86/oprofile/op_model_p4.c
··· 10 10 11 11 #include <linux/oprofile.h> 12 12 #include <linux/smp.h> 13 + #include <linux/ptrace.h> 14 + #include <linux/nmi.h> 13 15 #include <asm/msr.h> 14 - #include <asm/ptrace.h> 15 16 #include <asm/fixmap.h> 16 17 #include <asm/apic.h> 17 - #include <asm/nmi.h> 18 + 18 19 19 20 #include "op_x86_model.h" 20 21 #include "op_counter.h" ··· 41 40 static inline void setup_num_counters(void) 42 41 { 43 42 #ifdef CONFIG_SMP 44 - if (smp_num_siblings == 2){ 43 + if (smp_num_siblings == 2) { 45 44 num_counters = NUM_COUNTERS_HT2; 46 45 num_controls = NUM_CONTROLS_HT2; 47 46 } ··· 87 86 #define CTR_FLAME_2 (1 << 6) 88 87 #define CTR_IQ_5 (1 << 7) 89 88 90 - static struct p4_counter_binding p4_counters [NUM_COUNTERS_NON_HT] = { 89 + static struct p4_counter_binding p4_counters[NUM_COUNTERS_NON_HT] = { 91 90 { CTR_BPU_0, MSR_P4_BPU_PERFCTR0, MSR_P4_BPU_CCCR0 }, 92 91 { CTR_MS_0, MSR_P4_MS_PERFCTR0, MSR_P4_MS_CCCR0 }, 93 92 { CTR_FLAME_0, MSR_P4_FLAME_PERFCTR0, MSR_P4_FLAME_CCCR0 }, ··· 98 97 { CTR_IQ_5, MSR_P4_IQ_PERFCTR5, MSR_P4_IQ_CCCR5 } 99 98 }; 100 99 101 - #define NUM_UNUSED_CCCRS NUM_CCCRS_NON_HT - NUM_COUNTERS_NON_HT 100 + #define NUM_UNUSED_CCCRS (NUM_CCCRS_NON_HT - NUM_COUNTERS_NON_HT) 102 101 103 102 /* p4 event codes in libop/op_event.h are indices into this table. 
*/ 104 103 105 104 static struct p4_event_binding p4_events[NUM_EVENTS] = { 106 - 105 + 107 106 { /* BRANCH_RETIRED */ 108 - 0x05, 0x06, 107 + 0x05, 0x06, 109 108 { {CTR_IQ_4, MSR_P4_CRU_ESCR2}, 110 109 {CTR_IQ_5, MSR_P4_CRU_ESCR3} } 111 110 }, 112 - 111 + 113 112 { /* MISPRED_BRANCH_RETIRED */ 114 - 0x04, 0x03, 113 + 0x04, 0x03, 115 114 { { CTR_IQ_4, MSR_P4_CRU_ESCR0}, 116 115 { CTR_IQ_5, MSR_P4_CRU_ESCR1} } 117 116 }, 118 - 117 + 119 118 { /* TC_DELIVER_MODE */ 120 119 0x01, 0x01, 121 - { { CTR_MS_0, MSR_P4_TC_ESCR0}, 120 + { { CTR_MS_0, MSR_P4_TC_ESCR0}, 122 121 { CTR_MS_2, MSR_P4_TC_ESCR1} } 123 122 }, 124 - 123 + 125 124 { /* BPU_FETCH_REQUEST */ 126 - 0x00, 0x03, 125 + 0x00, 0x03, 127 126 { { CTR_BPU_0, MSR_P4_BPU_ESCR0}, 128 127 { CTR_BPU_2, MSR_P4_BPU_ESCR1} } 129 128 }, ··· 147 146 }, 148 147 149 148 { /* LOAD_PORT_REPLAY */ 150 - 0x02, 0x04, 149 + 0x02, 0x04, 151 150 { { CTR_FLAME_0, MSR_P4_SAAT_ESCR0}, 152 151 { CTR_FLAME_2, MSR_P4_SAAT_ESCR1} } 153 152 }, ··· 171 170 }, 172 171 173 172 { /* BSQ_CACHE_REFERENCE */ 174 - 0x07, 0x0c, 173 + 0x07, 0x0c, 175 174 { { CTR_BPU_0, MSR_P4_BSU_ESCR0}, 176 175 { CTR_BPU_2, MSR_P4_BSU_ESCR1} } 177 176 }, 178 177 179 178 { /* IOQ_ALLOCATION */ 180 - 0x06, 0x03, 179 + 0x06, 0x03, 181 180 { { CTR_BPU_0, MSR_P4_FSB_ESCR0}, 182 181 { 0, 0 } } 183 182 }, 184 183 185 184 { /* IOQ_ACTIVE_ENTRIES */ 186 - 0x06, 0x1a, 185 + 0x06, 0x1a, 187 186 { { CTR_BPU_2, MSR_P4_FSB_ESCR1}, 188 187 { 0, 0 } } 189 188 }, 190 189 191 190 { /* FSB_DATA_ACTIVITY */ 192 - 0x06, 0x17, 191 + 0x06, 0x17, 193 192 { { CTR_BPU_0, MSR_P4_FSB_ESCR0}, 194 193 { CTR_BPU_2, MSR_P4_FSB_ESCR1} } 195 194 }, 196 195 197 196 { /* BSQ_ALLOCATION */ 198 - 0x07, 0x05, 197 + 0x07, 0x05, 199 198 { { CTR_BPU_0, MSR_P4_BSU_ESCR0}, 200 199 { 0, 0 } } 201 200 }, 202 201 203 202 { /* BSQ_ACTIVE_ENTRIES */ 204 203 0x07, 0x06, 205 - { { CTR_BPU_2, MSR_P4_BSU_ESCR1 /* guess */}, 204 + { { CTR_BPU_2, MSR_P4_BSU_ESCR1 /* guess */}, 206 205 { 0, 0 } } 207 206 }, 208 207 209 
208 { /* X87_ASSIST */ 210 - 0x05, 0x03, 209 + 0x05, 0x03, 211 210 { { CTR_IQ_4, MSR_P4_CRU_ESCR2}, 212 211 { CTR_IQ_5, MSR_P4_CRU_ESCR3} } 213 212 }, ··· 217 216 { { CTR_FLAME_0, MSR_P4_FIRM_ESCR0}, 218 217 { CTR_FLAME_2, MSR_P4_FIRM_ESCR1} } 219 218 }, 220 - 219 + 221 220 { /* PACKED_SP_UOP */ 222 - 0x01, 0x08, 221 + 0x01, 0x08, 223 222 { { CTR_FLAME_0, MSR_P4_FIRM_ESCR0}, 224 223 { CTR_FLAME_2, MSR_P4_FIRM_ESCR1} } 225 224 }, 226 - 225 + 227 226 { /* PACKED_DP_UOP */ 228 - 0x01, 0x0c, 227 + 0x01, 0x0c, 229 228 { { CTR_FLAME_0, MSR_P4_FIRM_ESCR0}, 230 229 { CTR_FLAME_2, MSR_P4_FIRM_ESCR1} } 231 230 }, 232 231 233 232 { /* SCALAR_SP_UOP */ 234 - 0x01, 0x0a, 233 + 0x01, 0x0a, 235 234 { { CTR_FLAME_0, MSR_P4_FIRM_ESCR0}, 236 235 { CTR_FLAME_2, MSR_P4_FIRM_ESCR1} } 237 236 }, ··· 243 242 }, 244 243 245 244 { /* 64BIT_MMX_UOP */ 246 - 0x01, 0x02, 245 + 0x01, 0x02, 247 246 { { CTR_FLAME_0, MSR_P4_FIRM_ESCR0}, 248 247 { CTR_FLAME_2, MSR_P4_FIRM_ESCR1} } 249 248 }, 250 - 249 + 251 250 { /* 128BIT_MMX_UOP */ 252 - 0x01, 0x1a, 251 + 0x01, 0x1a, 253 252 { { CTR_FLAME_0, MSR_P4_FIRM_ESCR0}, 254 253 { CTR_FLAME_2, MSR_P4_FIRM_ESCR1} } 255 254 }, 256 255 257 256 { /* X87_FP_UOP */ 258 - 0x01, 0x04, 257 + 0x01, 0x04, 259 258 { { CTR_FLAME_0, MSR_P4_FIRM_ESCR0}, 260 259 { CTR_FLAME_2, MSR_P4_FIRM_ESCR1} } 261 260 }, 262 - 261 + 263 262 { /* X87_SIMD_MOVES_UOP */ 264 - 0x01, 0x2e, 263 + 0x01, 0x2e, 265 264 { { CTR_FLAME_0, MSR_P4_FIRM_ESCR0}, 266 265 { CTR_FLAME_2, MSR_P4_FIRM_ESCR1} } 267 266 }, 268 - 267 + 269 268 { /* MACHINE_CLEAR */ 270 - 0x05, 0x02, 269 + 0x05, 0x02, 271 270 { { CTR_IQ_4, MSR_P4_CRU_ESCR2}, 272 271 { CTR_IQ_5, MSR_P4_CRU_ESCR3} } 273 272 }, ··· 277 276 { { CTR_BPU_0, MSR_P4_FSB_ESCR0}, 278 277 { CTR_BPU_2, MSR_P4_FSB_ESCR1} } 279 278 }, 280 - 279 + 281 280 { /* TC_MS_XFER */ 282 - 0x00, 0x05, 281 + 0x00, 0x05, 283 282 { { CTR_MS_0, MSR_P4_MS_ESCR0}, 284 283 { CTR_MS_2, MSR_P4_MS_ESCR1} } 285 284 }, ··· 309 308 }, 310 309 311 310 { /* INSTR_RETIRED */ 312 - 
0x04, 0x02, 311 + 0x04, 0x02, 313 312 { { CTR_IQ_4, MSR_P4_CRU_ESCR0}, 314 313 { CTR_IQ_5, MSR_P4_CRU_ESCR1} } 315 314 }, ··· 320 319 { CTR_IQ_5, MSR_P4_CRU_ESCR1} } 321 320 }, 322 321 323 - { /* UOP_TYPE */ 324 - 0x02, 0x02, 322 + { /* UOP_TYPE */ 323 + 0x02, 0x02, 325 324 { { CTR_IQ_4, MSR_P4_RAT_ESCR0}, 326 325 { CTR_IQ_5, MSR_P4_RAT_ESCR1} } 327 326 }, 328 327 329 328 { /* RETIRED_MISPRED_BRANCH_TYPE */ 330 - 0x02, 0x05, 329 + 0x02, 0x05, 331 330 { { CTR_MS_0, MSR_P4_TBPU_ESCR0}, 332 331 { CTR_MS_2, MSR_P4_TBPU_ESCR1} } 333 332 }, ··· 350 349 #define ESCR_SET_OS_1(escr, os) ((escr) |= (((os) & 1) << 1)) 351 350 #define ESCR_SET_EVENT_SELECT(escr, sel) ((escr) |= (((sel) & 0x3f) << 25)) 352 351 #define ESCR_SET_EVENT_MASK(escr, mask) ((escr) |= (((mask) & 0xffff) << 9)) 353 - #define ESCR_READ(escr,high,ev,i) do {rdmsr(ev->bindings[(i)].escr_address, (escr), (high));} while (0) 354 - #define ESCR_WRITE(escr,high,ev,i) do {wrmsr(ev->bindings[(i)].escr_address, (escr), (high));} while (0) 352 + #define ESCR_READ(escr, high, ev, i) do {rdmsr(ev->bindings[(i)].escr_address, (escr), (high)); } while (0) 353 + #define ESCR_WRITE(escr, high, ev, i) do {wrmsr(ev->bindings[(i)].escr_address, (escr), (high)); } while (0) 355 354 356 355 #define CCCR_RESERVED_BITS 0x38030FFF 357 356 #define CCCR_CLEAR(cccr) ((cccr) &= CCCR_RESERVED_BITS) ··· 361 360 #define CCCR_SET_PMI_OVF_1(cccr) ((cccr) |= (1<<27)) 362 361 #define CCCR_SET_ENABLE(cccr) ((cccr) |= (1<<12)) 363 362 #define CCCR_SET_DISABLE(cccr) ((cccr) &= ~(1<<12)) 364 - #define CCCR_READ(low, high, i) do {rdmsr(p4_counters[(i)].cccr_address, (low), (high));} while (0) 365 - #define CCCR_WRITE(low, high, i) do {wrmsr(p4_counters[(i)].cccr_address, (low), (high));} while (0) 363 + #define CCCR_READ(low, high, i) do {rdmsr(p4_counters[(i)].cccr_address, (low), (high)); } while (0) 364 + #define CCCR_WRITE(low, high, i) do {wrmsr(p4_counters[(i)].cccr_address, (low), (high)); } while (0) 366 365 #define CCCR_OVF_P(cccr) 
((cccr) & (1U<<31)) 367 366 #define CCCR_CLEAR_OVF(cccr) ((cccr) &= (~(1U<<31))) 368 367 369 - #define CTRL_IS_RESERVED(msrs,c) (msrs->controls[(c)].addr ? 1 : 0) 370 - #define CTR_IS_RESERVED(msrs,c) (msrs->counters[(c)].addr ? 1 : 0) 371 - #define CTR_READ(l,h,i) do {rdmsr(p4_counters[(i)].counter_address, (l), (h));} while (0) 372 - #define CTR_WRITE(l,i) do {wrmsr(p4_counters[(i)].counter_address, -(u32)(l), -1);} while (0) 368 + #define CTRL_IS_RESERVED(msrs, c) (msrs->controls[(c)].addr ? 1 : 0) 369 + #define CTR_IS_RESERVED(msrs, c) (msrs->counters[(c)].addr ? 1 : 0) 370 + #define CTR_READ(l, h, i) do {rdmsr(p4_counters[(i)].counter_address, (l), (h)); } while (0) 371 + #define CTR_WRITE(l, i) do {wrmsr(p4_counters[(i)].counter_address, -(u32)(l), -1); } while (0) 373 372 #define CTR_OVERFLOW_P(ctr) (!((ctr) & 0x80000000)) 374 373 375 374 ··· 381 380 #ifdef CONFIG_SMP 382 381 int cpu = smp_processor_id(); 383 382 return (cpu != first_cpu(per_cpu(cpu_sibling_map, cpu))); 384 - #endif 383 + #endif 385 384 return 0; 386 385 } 387 386 ··· 396 395 397 396 static void p4_fill_in_addresses(struct op_msrs * const msrs) 398 397 { 399 - unsigned int i; 398 + unsigned int i; 400 399 unsigned int addr, cccraddr, stag; 401 400 402 401 setup_num_counters(); 403 402 stag = get_stagger(); 404 403 405 404 /* initialize some registers */ 406 - for (i = 0; i < num_counters; ++i) { 405 + for (i = 0; i < num_counters; ++i) 407 406 msrs->counters[i].addr = 0; 408 - } 409 - for (i = 0; i < num_controls; ++i) { 407 + for (i = 0; i < num_controls; ++i) 410 408 msrs->controls[i].addr = 0; 411 - } 412 - 409 + 413 410 /* the counter & cccr registers we pay attention to */ 414 411 for (i = 0; i < num_counters; ++i) { 415 412 addr = p4_counters[VIRT_CTR(stag, i)].counter_address; 416 413 cccraddr = p4_counters[VIRT_CTR(stag, i)].cccr_address; 417 - if (reserve_perfctr_nmi(addr)){ 414 + if (reserve_perfctr_nmi(addr)) { 418 415 msrs->counters[i].addr = addr; 419 416 msrs->controls[i].addr 
= cccraddr; 420 417 } ··· 446 447 if (reserve_evntsel_nmi(addr)) 447 448 msrs->controls[i].addr = addr; 448 449 } 449 - 450 + 450 451 for (addr = MSR_P4_MS_ESCR0 + stag; 451 - addr <= MSR_P4_TC_ESCR1; ++i, addr += addr_increment()) { 452 + addr <= MSR_P4_TC_ESCR1; ++i, addr += addr_increment()) { 452 453 if (reserve_evntsel_nmi(addr)) 453 454 msrs->controls[i].addr = addr; 454 455 } 455 - 456 + 456 457 for (addr = MSR_P4_IX_ESCR0 + stag; 457 - addr <= MSR_P4_CRU_ESCR3; ++i, addr += addr_increment()) { 458 + addr <= MSR_P4_CRU_ESCR3; ++i, addr += addr_increment()) { 458 459 if (reserve_evntsel_nmi(addr)) 459 460 msrs->controls[i].addr = addr; 460 461 } 461 462 462 463 /* there are 2 remaining non-contiguously located ESCRs */ 463 464 464 - if (num_counters == NUM_COUNTERS_NON_HT) { 465 + if (num_counters == NUM_COUNTERS_NON_HT) { 465 466 /* standard non-HT CPUs handle both remaining ESCRs*/ 466 467 if (reserve_evntsel_nmi(MSR_P4_CRU_ESCR5)) 467 468 msrs->controls[i++].addr = MSR_P4_CRU_ESCR5; ··· 497 498 unsigned int stag; 498 499 499 500 stag = get_stagger(); 500 - 501 + 501 502 /* convert from counter *number* to counter *bit* */ 502 503 counter_bit = 1 << VIRT_CTR(stag, ctr); 503 - 504 + 504 505 /* find our event binding structure. 
*/ 505 506 if (counter_config[ctr].event <= 0 || counter_config[ctr].event > NUM_EVENTS) { 506 - printk(KERN_ERR 507 - "oprofile: P4 event code 0x%lx out of range\n", 507 + printk(KERN_ERR 508 + "oprofile: P4 event code 0x%lx out of range\n", 508 509 counter_config[ctr].event); 509 510 return; 510 511 } 511 - 512 + 512 513 ev = &(p4_events[counter_config[ctr].event - 1]); 513 - 514 + 514 515 for (i = 0; i < maxbind; i++) { 515 516 if (ev->bindings[i].virt_counter & counter_bit) { 516 517 ··· 525 526 ESCR_SET_OS_1(escr, counter_config[ctr].kernel); 526 527 } 527 528 ESCR_SET_EVENT_SELECT(escr, ev->event_select); 528 - ESCR_SET_EVENT_MASK(escr, counter_config[ctr].unit_mask); 529 + ESCR_SET_EVENT_MASK(escr, counter_config[ctr].unit_mask); 529 530 ESCR_WRITE(escr, high, ev, i); 530 - 531 + 531 532 /* modify CCCR */ 532 533 CCCR_READ(cccr, high, VIRT_CTR(stag, ctr)); 533 534 CCCR_CLEAR(cccr); 534 535 CCCR_SET_REQUIRED_BITS(cccr); 535 536 CCCR_SET_ESCR_SELECT(cccr, ev->escr_select); 536 - if (stag == 0) { 537 + if (stag == 0) 537 538 CCCR_SET_PMI_OVF_0(cccr); 538 - } else { 539 + else 539 540 CCCR_SET_PMI_OVF_1(cccr); 540 - } 541 541 CCCR_WRITE(cccr, high, VIRT_CTR(stag, ctr)); 542 542 return; 543 543 } 544 544 } 545 545 546 - printk(KERN_ERR 546 + printk(KERN_ERR 547 547 "oprofile: P4 event code 0x%lx no binding, stag %d ctr %d\n", 548 548 counter_config[ctr].event, stag, ctr); 549 549 } ··· 557 559 stag = get_stagger(); 558 560 559 561 rdmsr(MSR_IA32_MISC_ENABLE, low, high); 560 - if (! 
MISC_PMC_ENABLED_P(low)) { 562 + if (!MISC_PMC_ENABLED_P(low)) { 561 563 printk(KERN_ERR "oprofile: P4 PMC not available\n"); 562 564 return; 563 565 } 564 566 565 567 /* clear the cccrs we will use */ 566 568 for (i = 0 ; i < num_counters ; i++) { 567 - if (unlikely(!CTRL_IS_RESERVED(msrs,i))) 569 + if (unlikely(!CTRL_IS_RESERVED(msrs, i))) 568 570 continue; 569 571 rdmsr(p4_counters[VIRT_CTR(stag, i)].cccr_address, low, high); 570 572 CCCR_CLEAR(low); ··· 574 576 575 577 /* clear all escrs (including those outside our concern) */ 576 578 for (i = num_counters; i < num_controls; i++) { 577 - if (unlikely(!CTRL_IS_RESERVED(msrs,i))) 579 + if (unlikely(!CTRL_IS_RESERVED(msrs, i))) 578 580 continue; 579 581 wrmsr(msrs->controls[i].addr, 0, 0); 580 582 } 581 583 582 584 /* setup all counters */ 583 585 for (i = 0 ; i < num_counters ; ++i) { 584 - if ((counter_config[i].enabled) && (CTRL_IS_RESERVED(msrs,i))) { 586 + if ((counter_config[i].enabled) && (CTRL_IS_RESERVED(msrs, i))) { 585 587 reset_value[i] = counter_config[i].count; 586 588 pmc_setup_one_p4_counter(i); 587 589 CTR_WRITE(counter_config[i].count, VIRT_CTR(stag, i)); ··· 601 603 stag = get_stagger(); 602 604 603 605 for (i = 0; i < num_counters; ++i) { 604 - 605 - if (!reset_value[i]) 606 + 607 + if (!reset_value[i]) 606 608 continue; 607 609 608 - /* 610 + /* 609 611 * there is some eccentricity in the hardware which 610 612 * requires that we perform 2 extra corrections: 611 613 * ··· 614 616 * 615 617 * - write the counter back twice to ensure it gets 616 618 * updated properly. 617 - * 619 + * 618 620 * the former seems to be related to extra NMIs happening 619 621 * during the current NMI; the latter is reported as errata 620 622 * N15 in intel doc 249199-029, pentium 4 specification 621 623 * update, though their suggested work-around does not 622 624 * appear to solve the problem. 
623 625 */ 624 - 626 + 625 627 real = VIRT_CTR(stag, i); 626 628 627 629 CCCR_READ(low, high, real); 628 - CTR_READ(ctr, high, real); 630 + CTR_READ(ctr, high, real); 629 631 if (CCCR_OVF_P(low) || CTR_OVERFLOW_P(ctr)) { 630 632 oprofile_add_sample(regs, i); 631 - CTR_WRITE(reset_value[i], real); 633 + CTR_WRITE(reset_value[i], real); 632 634 CCCR_CLEAR_OVF(low); 633 635 CCCR_WRITE(low, high, real); 634 - CTR_WRITE(reset_value[i], real); 636 + CTR_WRITE(reset_value[i], real); 635 637 } 636 638 } 637 639 ··· 681 683 int i; 682 684 683 685 for (i = 0 ; i < num_counters ; ++i) { 684 - if (CTR_IS_RESERVED(msrs,i)) 686 + if (CTR_IS_RESERVED(msrs, i)) 685 687 release_perfctr_nmi(msrs->counters[i].addr); 686 688 } 687 - /* some of the control registers are specially reserved in 689 + /* 690 + * some of the control registers are specially reserved in 688 691 * conjunction with the counter registers (hence the starting offset). 689 692 * This saves a few bits. 690 693 */ 691 694 for (i = num_counters ; i < num_controls ; ++i) { 692 - if (CTRL_IS_RESERVED(msrs,i)) 695 + if (CTRL_IS_RESERVED(msrs, i)) 693 696 release_evntsel_nmi(msrs->controls[i].addr); 694 697 } 695 698 }
+1 -1
arch/x86/pci/amd_bus.c
··· 580 580 unsigned long action, void *hcpu) 581 581 { 582 582 int cpu = (long)hcpu; 583 - switch(action) { 583 + switch (action) { 584 584 case CPU_ONLINE: 585 585 case CPU_ONLINE_FROZEN: 586 586 smp_call_function_single(cpu, enable_pci_io_ecs, NULL, 0);
+37 -28
arch/x86/pci/irq.c
··· 1043 1043 if (io_apic_assign_pci_irqs) { 1044 1044 int irq; 1045 1045 1046 - if (pin) { 1047 - /* 1048 - * interrupt pins are numbered starting 1049 - * from 1 1050 - */ 1051 - pin--; 1052 - irq = IO_APIC_get_PCI_irq_vector(dev->bus->number, 1053 - PCI_SLOT(dev->devfn), pin); 1054 - /* 1055 - * Busses behind bridges are typically not listed in the MP-table. 1056 - * In this case we have to look up the IRQ based on the parent bus, 1057 - * parent slot, and pin number. The SMP code detects such bridged 1058 - * busses itself so we should get into this branch reliably. 1059 - */ 1060 - if (irq < 0 && dev->bus->parent) { /* go back to the bridge */ 1061 - struct pci_dev *bridge = dev->bus->self; 1046 + if (!pin) 1047 + continue; 1062 1048 1063 - pin = (pin + PCI_SLOT(dev->devfn)) % 4; 1064 - irq = IO_APIC_get_PCI_irq_vector(bridge->bus->number, 1065 - PCI_SLOT(bridge->devfn), pin); 1066 - if (irq >= 0) 1067 - dev_warn(&dev->dev, "using bridge %s INT %c to get IRQ %d\n", 1068 - pci_name(bridge), 1069 - 'A' + pin, irq); 1070 - } 1071 - if (irq >= 0) { 1072 - dev_info(&dev->dev, "PCI->APIC IRQ transform: INT %c -> IRQ %d\n", 'A' + pin, irq); 1073 - dev->irq = irq; 1074 - } 1049 + /* 1050 + * interrupt pins are numbered starting from 1 1051 + */ 1052 + pin--; 1053 + irq = IO_APIC_get_PCI_irq_vector(dev->bus->number, 1054 + PCI_SLOT(dev->devfn), pin); 1055 + /* 1056 + * Busses behind bridges are typically not listed in the 1057 + * MP-table. In this case we have to look up the IRQ 1058 + * based on the parent bus, parent slot, and pin number. 1059 + * The SMP code detects such bridged busses itself so we 1060 + * should get into this branch reliably. 
1061 + */ 1062 + if (irq < 0 && dev->bus->parent) { 1063 + /* go back to the bridge */ 1064 + struct pci_dev *bridge = dev->bus->self; 1065 + int bus; 1066 + 1067 + pin = (pin + PCI_SLOT(dev->devfn)) % 4; 1068 + bus = bridge->bus->number; 1069 + irq = IO_APIC_get_PCI_irq_vector(bus, 1070 + PCI_SLOT(bridge->devfn), pin); 1071 + if (irq >= 0) 1072 + dev_warn(&dev->dev, 1073 + "using bridge %s INT %c to " 1074 + "get IRQ %d\n", 1075 + pci_name(bridge), 1076 + 'A' + pin, irq); 1077 + } 1078 + if (irq >= 0) { 1079 + dev_info(&dev->dev, 1080 + "PCI->APIC IRQ transform: INT %c " 1081 + "-> IRQ %d\n", 1082 + 'A' + pin, irq); 1083 + dev->irq = irq; 1075 1084 } 1076 1085 } 1077 1086 #endif
+7 -7
arch/x86/power/hibernate_asm_32.S
··· 1 - .text 2 - 3 1 /* 4 2 * This may not use any stack, nor any variable that is not "NoSave": 5 3 * ··· 10 12 #include <asm/segment.h> 11 13 #include <asm/page.h> 12 14 #include <asm/asm-offsets.h> 15 + #include <asm/processor-flags.h> 13 16 14 - .text 17 + .text 15 18 16 19 ENTRY(swsusp_arch_suspend) 17 - 18 20 movl %esp, saved_context_esp 19 21 movl %ebx, saved_context_ebx 20 22 movl %ebp, saved_context_ebp 21 23 movl %esi, saved_context_esi 22 24 movl %edi, saved_context_edi 23 - pushfl ; popl saved_context_eflags 25 + pushfl 26 + popl saved_context_eflags 24 27 25 28 call swsusp_save 26 29 ret ··· 58 59 movl mmu_cr4_features, %ecx 59 60 jecxz 1f # cr4 Pentium and higher, skip if zero 60 61 movl %ecx, %edx 61 - andl $~(1<<7), %edx; # PGE 62 + andl $~(X86_CR4_PGE), %edx 62 63 movl %edx, %cr4; # turn off PGE 63 64 1: 64 65 movl %cr3, %eax; # flush TLB ··· 73 74 movl saved_context_esi, %esi 74 75 movl saved_context_edi, %edi 75 76 76 - pushl saved_context_eflags ; popfl 77 + pushl saved_context_eflags 78 + popfl 77 79 78 80 xorl %eax, %eax 79 81
+10 -10
arch/x86/xen/enlighten.c
··· 812 812 813 813 /* Early in boot, while setting up the initial pagetable, assume 814 814 everything is pinned. */ 815 - static __init void xen_alloc_pte_init(struct mm_struct *mm, u32 pfn) 815 + static __init void xen_alloc_pte_init(struct mm_struct *mm, unsigned long pfn) 816 816 { 817 817 #ifdef CONFIG_FLATMEM 818 818 BUG_ON(mem_map); /* should only be used early */ ··· 822 822 823 823 /* Early release_pte assumes that all pts are pinned, since there's 824 824 only init_mm and anything attached to that is pinned. */ 825 - static void xen_release_pte_init(u32 pfn) 825 + static void xen_release_pte_init(unsigned long pfn) 826 826 { 827 827 make_lowmem_page_readwrite(__va(PFN_PHYS(pfn))); 828 828 } ··· 838 838 839 839 /* This needs to make sure the new pte page is pinned iff its being 840 840 attached to a pinned pagetable. */ 841 - static void xen_alloc_ptpage(struct mm_struct *mm, u32 pfn, unsigned level) 841 + static void xen_alloc_ptpage(struct mm_struct *mm, unsigned long pfn, unsigned level) 842 842 { 843 843 struct page *page = pfn_to_page(pfn); 844 844 ··· 856 856 } 857 857 } 858 858 859 - static void xen_alloc_pte(struct mm_struct *mm, u32 pfn) 859 + static void xen_alloc_pte(struct mm_struct *mm, unsigned long pfn) 860 860 { 861 861 xen_alloc_ptpage(mm, pfn, PT_PTE); 862 862 } 863 863 864 - static void xen_alloc_pmd(struct mm_struct *mm, u32 pfn) 864 + static void xen_alloc_pmd(struct mm_struct *mm, unsigned long pfn) 865 865 { 866 866 xen_alloc_ptpage(mm, pfn, PT_PMD); 867 867 } ··· 909 909 } 910 910 911 911 /* This should never happen until we're OK to use struct page */ 912 - static void xen_release_ptpage(u32 pfn, unsigned level) 912 + static void xen_release_ptpage(unsigned long pfn, unsigned level) 913 913 { 914 914 struct page *page = pfn_to_page(pfn); 915 915 ··· 923 923 } 924 924 } 925 925 926 - static void xen_release_pte(u32 pfn) 926 + static void xen_release_pte(unsigned long pfn) 927 927 { 928 928 xen_release_ptpage(pfn, PT_PTE); 929 929 } 
930 930 931 - static void xen_release_pmd(u32 pfn) 931 + static void xen_release_pmd(unsigned long pfn) 932 932 { 933 933 xen_release_ptpage(pfn, PT_PMD); 934 934 } 935 935 936 936 #if PAGETABLE_LEVELS == 4 937 - static void xen_alloc_pud(struct mm_struct *mm, u32 pfn) 937 + static void xen_alloc_pud(struct mm_struct *mm, unsigned long pfn) 938 938 { 939 939 xen_alloc_ptpage(mm, pfn, PT_PUD); 940 940 } 941 941 942 - static void xen_release_pud(u32 pfn) 942 + static void xen_release_pud(unsigned long pfn) 943 943 { 944 944 xen_release_ptpage(pfn, PT_PUD); 945 945 }
+3 -3
include/asm-x86/a.out-core.h
··· 9 9 * 2 of the Licence, or (at your option) any later version. 10 10 */ 11 11 12 - #ifndef _ASM_A_OUT_CORE_H 13 - #define _ASM_A_OUT_CORE_H 12 + #ifndef ASM_X86__A_OUT_CORE_H 13 + #define ASM_X86__A_OUT_CORE_H 14 14 15 15 #ifdef __KERNEL__ 16 16 #ifdef CONFIG_X86_32 ··· 70 70 71 71 #endif /* CONFIG_X86_32 */ 72 72 #endif /* __KERNEL__ */ 73 - #endif /* _ASM_A_OUT_CORE_H */ 73 + #endif /* ASM_X86__A_OUT_CORE_H */
+3 -3
include/asm-x86/a.out.h
··· 1 - #ifndef _ASM_X86_A_OUT_H 2 - #define _ASM_X86_A_OUT_H 1 + #ifndef ASM_X86__A_OUT_H 2 + #define ASM_X86__A_OUT_H 3 3 4 4 struct exec 5 5 { ··· 17 17 #define N_DRSIZE(a) ((a).a_drsize) 18 18 #define N_SYMSIZE(a) ((a).a_syms) 19 19 20 - #endif /* _ASM_X86_A_OUT_H */ 20 + #endif /* ASM_X86__A_OUT_H */
+3 -3
include/asm-x86/acpi.h
··· 1 - #ifndef _ASM_X86_ACPI_H 2 - #define _ASM_X86_ACPI_H 1 + #ifndef ASM_X86__ACPI_H 2 + #define ASM_X86__ACPI_H 3 3 4 4 /* 5 5 * Copyright (C) 2001 Paul Diefenbaugh <paul.s.diefenbaugh@intel.com> ··· 175 175 176 176 #define acpi_unlazy_tlb(x) leave_mm(x) 177 177 178 - #endif /*__X86_ASM_ACPI_H*/ 178 + #endif /* ASM_X86__ACPI_H */
+3 -3
include/asm-x86/agp.h
··· 1 - #ifndef _ASM_X86_AGP_H 2 - #define _ASM_X86_AGP_H 1 + #ifndef ASM_X86__AGP_H 2 + #define ASM_X86__AGP_H 3 3 4 4 #include <asm/pgtable.h> 5 5 #include <asm/cacheflush.h> ··· 32 32 #define free_gatt_pages(table, order) \ 33 33 free_pages((unsigned long)(table), (order)) 34 34 35 - #endif 35 + #endif /* ASM_X86__AGP_H */
+3 -3
include/asm-x86/alternative.h
··· 1 - #ifndef _ASM_X86_ALTERNATIVE_H 2 - #define _ASM_X86_ALTERNATIVE_H 1 + #ifndef ASM_X86__ALTERNATIVE_H 2 + #define ASM_X86__ALTERNATIVE_H 3 3 4 4 #include <linux/types.h> 5 5 #include <linux/stddef.h> ··· 180 180 extern void *text_poke(void *addr, const void *opcode, size_t len); 181 181 extern void *text_poke_early(void *addr, const void *opcode, size_t len); 182 182 183 - #endif /* _ASM_X86_ALTERNATIVE_H */ 183 + #endif /* ASM_X86__ALTERNATIVE_H */
+3 -3
include/asm-x86/amd_iommu.h
··· 17 17 * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 18 18 */ 19 19 20 - #ifndef _ASM_X86_AMD_IOMMU_H 21 - #define _ASM_X86_AMD_IOMMU_H 20 + #ifndef ASM_X86__AMD_IOMMU_H 21 + #define ASM_X86__AMD_IOMMU_H 22 22 23 23 #ifdef CONFIG_AMD_IOMMU 24 24 extern int amd_iommu_init(void); ··· 29 29 static inline void amd_iommu_detect(void) { } 30 30 #endif 31 31 32 - #endif 32 + #endif /* ASM_X86__AMD_IOMMU_H */
+3 -3
include/asm-x86/amd_iommu_types.h
··· 17 17 * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 18 18 */ 19 19 20 - #ifndef __AMD_IOMMU_TYPES_H__ 21 - #define __AMD_IOMMU_TYPES_H__ 20 + #ifndef ASM_X86__AMD_IOMMU_TYPES_H 21 + #define ASM_X86__AMD_IOMMU_TYPES_H 22 22 23 23 #include <linux/types.h> 24 24 #include <linux/list.h> ··· 341 341 return (((u16)bus) << 8) | devfn; 342 342 } 343 343 344 - #endif 344 + #endif /* ASM_X86__AMD_IOMMU_TYPES_H */
+9 -6
include/asm-x86/apic.h
··· 1 - #ifndef _ASM_X86_APIC_H 2 - #define _ASM_X86_APIC_H 1 + #ifndef ASM_X86__APIC_H 2 + #define ASM_X86__APIC_H 3 3 4 4 #include <linux/pm.h> 5 5 #include <linux/delay.h> ··· 54 54 #endif 55 55 56 56 extern int is_vsmp_box(void); 57 + extern void xapic_wait_icr_idle(void); 58 + extern u32 safe_xapic_wait_icr_idle(void); 59 + extern u64 xapic_icr_read(void); 60 + extern void xapic_icr_write(u32, u32); 61 + extern int setup_profiling_timer(unsigned int); 57 62 58 63 static inline void native_apic_write(unsigned long reg, u32 v) 59 64 { ··· 81 76 static inline void ack_APIC_irq(void) 82 77 { 83 78 /* 84 - * ack_APIC_irq() actually gets compiled as a single instruction: 85 - * - a single rmw on Pentium/82489DX 86 - * - a single write on P6+ cores (CONFIG_X86_GOOD_APIC) 79 + * ack_APIC_irq() actually gets compiled as a single instruction 87 80 * ... yummie. 88 81 */ 89 82 ··· 131 128 132 129 #endif /* !CONFIG_X86_LOCAL_APIC */ 133 130 134 - #endif /* __ASM_APIC_H */ 131 + #endif /* ASM_X86__APIC_H */
+3 -3
include/asm-x86/apicdef.h
··· 1 - #ifndef _ASM_X86_APICDEF_H 2 - #define _ASM_X86_APICDEF_H 1 + #ifndef ASM_X86__APICDEF_H 2 + #define ASM_X86__APICDEF_H 3 3 4 4 /* 5 5 * Constants for various Intel APICs. (local APIC, IOAPIC, etc.) ··· 411 411 #else 412 412 #define BAD_APICID 0xFFFFu 413 413 #endif 414 - #endif 414 + #endif /* ASM_X86__APICDEF_H */
+3 -3
include/asm-x86/arch_hooks.h
··· 1 - #ifndef _ASM_ARCH_HOOKS_H 2 - #define _ASM_ARCH_HOOKS_H 1 + #ifndef ASM_X86__ARCH_HOOKS_H 2 + #define ASM_X86__ARCH_HOOKS_H 3 3 4 4 #include <linux/interrupt.h> 5 5 ··· 25 25 extern void time_init_hook(void); 26 26 extern void mca_nmi_hook(void); 27 27 28 - #endif 28 + #endif /* ASM_X86__ARCH_HOOKS_H */
+9 -4
include/asm-x86/asm.h
··· 1 - #ifndef _ASM_X86_ASM_H 2 - #define _ASM_X86_ASM_H 1 + #ifndef ASM_X86__ASM_H 2 + #define ASM_X86__ASM_H 3 3 4 4 #ifdef __ASSEMBLY__ 5 5 # define __ASM_FORM(x) x ··· 20 20 21 21 #define _ASM_PTR __ASM_SEL(.long, .quad) 22 22 #define _ASM_ALIGN __ASM_SEL(.balign 4, .balign 8) 23 - #define _ASM_MOV_UL __ASM_SIZE(mov) 24 23 24 + #define _ASM_MOV __ASM_SIZE(mov) 25 25 #define _ASM_INC __ASM_SIZE(inc) 26 26 #define _ASM_DEC __ASM_SIZE(dec) 27 27 #define _ASM_ADD __ASM_SIZE(add) 28 28 #define _ASM_SUB __ASM_SIZE(sub) 29 29 #define _ASM_XADD __ASM_SIZE(xadd) 30 + 30 31 #define _ASM_AX __ASM_REG(ax) 31 32 #define _ASM_BX __ASM_REG(bx) 32 33 #define _ASM_CX __ASM_REG(cx) 33 34 #define _ASM_DX __ASM_REG(dx) 35 + #define _ASM_SP __ASM_REG(sp) 36 + #define _ASM_BP __ASM_REG(bp) 37 + #define _ASM_SI __ASM_REG(si) 38 + #define _ASM_DI __ASM_REG(di) 34 39 35 40 /* Exception table entry */ 36 41 # define _ASM_EXTABLE(from,to) \ ··· 44 39 _ASM_PTR #from "," #to "\n" \ 45 40 " .previous\n" 46 41 47 - #endif /* _ASM_X86_ASM_H */ 42 + #endif /* ASM_X86__ASM_H */
+3 -3
include/asm-x86/atomic_32.h
··· 1 - #ifndef __ARCH_I386_ATOMIC__ 2 - #define __ARCH_I386_ATOMIC__ 1 + #ifndef ASM_X86__ATOMIC_32_H 2 + #define ASM_X86__ATOMIC_32_H 3 3 4 4 #include <linux/compiler.h> 5 5 #include <asm/processor.h> ··· 256 256 #define smp_mb__after_atomic_inc() barrier() 257 257 258 258 #include <asm-generic/atomic.h> 259 - #endif 259 + #endif /* ASM_X86__ATOMIC_32_H */
+3 -3
include/asm-x86/atomic_64.h
··· 1 - #ifndef __ARCH_X86_64_ATOMIC__ 2 - #define __ARCH_X86_64_ATOMIC__ 1 + #ifndef ASM_X86__ATOMIC_64_H 2 + #define ASM_X86__ATOMIC_64_H 3 3 4 4 #include <asm/alternative.h> 5 5 #include <asm/cmpxchg.h> ··· 470 470 #define smp_mb__after_atomic_inc() barrier() 471 471 472 472 #include <asm-generic/atomic.h> 473 - #endif 473 + #endif /* ASM_X86__ATOMIC_64_H */
+3 -3
include/asm-x86/auxvec.h
··· 1 - #ifndef _ASM_X86_AUXVEC_H 2 - #define _ASM_X86_AUXVEC_H 1 + #ifndef ASM_X86__AUXVEC_H 2 + #define ASM_X86__AUXVEC_H 3 3 /* 4 4 * Architecture-neutral AT_ values in 0-17, leave some room 5 5 * for more of them, start the x86-specific ones at 32. ··· 9 9 #endif 10 10 #define AT_SYSINFO_EHDR 33 11 11 12 - #endif 12 + #endif /* ASM_X86__AUXVEC_H */
+3 -3
include/asm-x86/bios_ebda.h
··· 1 - #ifndef _MACH_BIOS_EBDA_H 2 - #define _MACH_BIOS_EBDA_H 1 + #ifndef ASM_X86__BIOS_EBDA_H 2 + #define ASM_X86__BIOS_EBDA_H 3 3 4 4 #include <asm/io.h> 5 5 ··· 16 16 17 17 void reserve_ebda_region(void); 18 18 19 - #endif /* _MACH_BIOS_EBDA_H */ 19 + #endif /* ASM_X86__BIOS_EBDA_H */
+3 -3
include/asm-x86/bitops.h
··· 1 - #ifndef _ASM_X86_BITOPS_H 2 - #define _ASM_X86_BITOPS_H 1 + #ifndef ASM_X86__BITOPS_H 2 + #define ASM_X86__BITOPS_H 3 3 4 4 /* 5 5 * Copyright 1992, Linus Torvalds. ··· 458 458 #include <asm-generic/bitops/minix.h> 459 459 460 460 #endif /* __KERNEL__ */ 461 - #endif /* _ASM_X86_BITOPS_H */ 461 + #endif /* ASM_X86__BITOPS_H */
+3 -3
include/asm-x86/boot.h
··· 1 - #ifndef _ASM_BOOT_H 2 - #define _ASM_BOOT_H 1 + #ifndef ASM_X86__BOOT_H 2 + #define ASM_X86__BOOT_H 3 3 4 4 /* Don't touch these, unless you really know what you're doing. */ 5 5 #define DEF_INITSEG 0x9000 ··· 25 25 #define BOOT_STACK_SIZE 0x1000 26 26 #endif 27 27 28 - #endif /* _ASM_BOOT_H */ 28 + #endif /* ASM_X86__BOOT_H */
+3 -3
include/asm-x86/bootparam.h
··· 1 - #ifndef _ASM_BOOTPARAM_H 2 - #define _ASM_BOOTPARAM_H 1 + #ifndef ASM_X86__BOOTPARAM_H 2 + #define ASM_X86__BOOTPARAM_H 3 3 4 4 #include <linux/types.h> 5 5 #include <linux/screen_info.h> ··· 108 108 __u8 _pad9[276]; /* 0xeec */ 109 109 } __attribute__((packed)); 110 110 111 - #endif /* _ASM_BOOTPARAM_H */ 111 + #endif /* ASM_X86__BOOTPARAM_H */
+3 -3
include/asm-x86/bug.h
··· 1 - #ifndef _ASM_X86_BUG_H 2 - #define _ASM_X86_BUG_H 1 + #ifndef ASM_X86__BUG_H 2 + #define ASM_X86__BUG_H 3 3 4 4 #ifdef CONFIG_BUG 5 5 #define HAVE_ARCH_BUG ··· 36 36 #endif /* !CONFIG_BUG */ 37 37 38 38 #include <asm-generic/bug.h> 39 - #endif 39 + #endif /* ASM_X86__BUG_H */
+3 -3
include/asm-x86/bugs.h
··· 1 - #ifndef _ASM_X86_BUGS_H 2 - #define _ASM_X86_BUGS_H 1 + #ifndef ASM_X86__BUGS_H 2 + #define ASM_X86__BUGS_H 3 3 4 4 extern void check_bugs(void); 5 5 int ppro_with_ram_bug(void); 6 6 7 - #endif /* _ASM_X86_BUGS_H */ 7 + #endif /* ASM_X86__BUGS_H */
+3 -3
include/asm-x86/byteorder.h
··· 1 - #ifndef _ASM_X86_BYTEORDER_H 2 - #define _ASM_X86_BYTEORDER_H 1 + #ifndef ASM_X86__BYTEORDER_H 2 + #define ASM_X86__BYTEORDER_H 3 3 4 4 #include <asm/types.h> 5 5 #include <linux/compiler.h> ··· 78 78 79 79 #include <linux/byteorder/little_endian.h> 80 80 81 - #endif /* _ASM_X86_BYTEORDER_H */ 81 + #endif /* ASM_X86__BYTEORDER_H */
+3 -3
include/asm-x86/cache.h
··· 1 - #ifndef _ARCH_X86_CACHE_H 2 - #define _ARCH_X86_CACHE_H 1 + #ifndef ASM_X86__CACHE_H 2 + #define ASM_X86__CACHE_H 3 3 4 4 /* L1 cache line size */ 5 5 #define L1_CACHE_SHIFT (CONFIG_X86_L1_CACHE_SHIFT) ··· 17 17 #endif 18 18 #endif 19 19 20 - #endif 20 + #endif /* ASM_X86__CACHE_H */
+3 -3
include/asm-x86/cacheflush.h
··· 1 - #ifndef _ASM_X86_CACHEFLUSH_H 2 - #define _ASM_X86_CACHEFLUSH_H 1 + #ifndef ASM_X86__CACHEFLUSH_H 2 + #define ASM_X86__CACHEFLUSH_H 3 3 4 4 /* Keep includes the same across arches. */ 5 5 #include <linux/mm.h> ··· 112 112 } 113 113 #endif 114 114 115 - #endif 115 + #endif /* ASM_X86__CACHEFLUSH_H */
+3 -3
include/asm-x86/calgary.h
··· 21 21 * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 22 22 */ 23 23 24 - #ifndef _ASM_X86_64_CALGARY_H 25 - #define _ASM_X86_64_CALGARY_H 24 + #ifndef ASM_X86__CALGARY_H 25 + #define ASM_X86__CALGARY_H 26 26 27 27 #include <linux/spinlock.h> 28 28 #include <linux/device.h> ··· 69 69 static inline void detect_calgary(void) { return; } 70 70 #endif 71 71 72 - #endif /* _ASM_X86_64_CALGARY_H */ 72 + #endif /* ASM_X86__CALGARY_H */
+3 -3
include/asm-x86/checksum_32.h
··· 1 - #ifndef _I386_CHECKSUM_H 2 - #define _I386_CHECKSUM_H 1 + #ifndef ASM_X86__CHECKSUM_32_H 2 + #define ASM_X86__CHECKSUM_32_H 3 3 4 4 #include <linux/in6.h> 5 5 ··· 186 186 return (__force __wsum)-1; /* invalid checksum */ 187 187 } 188 188 189 - #endif 189 + #endif /* ASM_X86__CHECKSUM_32_H */
+3 -3
include/asm-x86/checksum_64.h
··· 1 - #ifndef _X86_64_CHECKSUM_H 2 - #define _X86_64_CHECKSUM_H 1 + #ifndef ASM_X86__CHECKSUM_64_H 2 + #define ASM_X86__CHECKSUM_64_H 3 3 4 4 /* 5 5 * Checksums for x86-64 ··· 188 188 return a; 189 189 } 190 190 191 - #endif 191 + #endif /* ASM_X86__CHECKSUM_64_H */
+3 -3
include/asm-x86/cmpxchg_32.h
··· 1 - #ifndef __ASM_CMPXCHG_H 2 - #define __ASM_CMPXCHG_H 1 + #ifndef ASM_X86__CMPXCHG_32_H 2 + #define ASM_X86__CMPXCHG_32_H 3 3 4 4 #include <linux/bitops.h> /* for LOCK_PREFIX */ 5 5 ··· 341 341 342 342 #endif 343 343 344 - #endif 344 + #endif /* ASM_X86__CMPXCHG_32_H */
+3 -3
include/asm-x86/cmpxchg_64.h
··· 1 - #ifndef __ASM_CMPXCHG_H 2 - #define __ASM_CMPXCHG_H 1 + #ifndef ASM_X86__CMPXCHG_64_H 2 + #define ASM_X86__CMPXCHG_64_H 3 3 4 4 #include <asm/alternative.h> /* Provides LOCK_PREFIX */ 5 5 ··· 182 182 cmpxchg_local((ptr), (o), (n)); \ 183 183 }) 184 184 185 - #endif 185 + #endif /* ASM_X86__CMPXCHG_64_H */
+3 -3
include/asm-x86/compat.h
··· 1 - #ifndef _ASM_X86_64_COMPAT_H 2 - #define _ASM_X86_64_COMPAT_H 1 + #ifndef ASM_X86__COMPAT_H 2 + #define ASM_X86__COMPAT_H 3 3 4 4 /* 5 5 * Architecture specific compatibility types ··· 215 215 return current_thread_info()->status & TS_COMPAT; 216 216 } 217 217 218 - #endif /* _ASM_X86_64_COMPAT_H */ 218 + #endif /* ASM_X86__COMPAT_H */
+3 -3
include/asm-x86/cpu.h
··· 1 - #ifndef _ASM_I386_CPU_H_ 2 - #define _ASM_I386_CPU_H_ 1 + #ifndef ASM_X86__CPU_H 2 + #define ASM_X86__CPU_H 3 3 4 4 #include <linux/device.h> 5 5 #include <linux/cpu.h> ··· 17 17 #endif 18 18 19 19 DECLARE_PER_CPU(int, cpu_state); 20 - #endif /* _ASM_I386_CPU_H_ */ 20 + #endif /* ASM_X86__CPU_H */
+3 -3
include/asm-x86/cpufeature.h
··· 1 1 /* 2 2 * Defines x86 CPU feature bits 3 3 */ 4 - #ifndef _ASM_X86_CPUFEATURE_H 5 - #define _ASM_X86_CPUFEATURE_H 4 + #ifndef ASM_X86__CPUFEATURE_H 5 + #define ASM_X86__CPUFEATURE_H 6 6 7 7 #include <asm/required-features.h> 8 8 ··· 224 224 225 225 #endif /* defined(__KERNEL__) && !defined(__ASSEMBLY__) */ 226 226 227 - #endif /* _ASM_X86_CPUFEATURE_H */ 227 + #endif /* ASM_X86__CPUFEATURE_H */
+3 -3
include/asm-x86/current.h
··· 1 - #ifndef _X86_CURRENT_H 2 - #define _X86_CURRENT_H 1 + #ifndef ASM_X86__CURRENT_H 2 + #define ASM_X86__CURRENT_H 3 3 4 4 #ifdef CONFIG_X86_32 5 5 #include <linux/compiler.h> ··· 36 36 37 37 #define current get_current() 38 38 39 - #endif /* X86_CURRENT_H */ 39 + #endif /* ASM_X86__CURRENT_H */
+3 -3
include/asm-x86/debugreg.h
··· 1 - #ifndef _ASM_X86_DEBUGREG_H 2 - #define _ASM_X86_DEBUGREG_H 1 + #ifndef ASM_X86__DEBUGREG_H 2 + #define ASM_X86__DEBUGREG_H 3 3 4 4 5 5 /* Indicate the register numbers for a number of the specific ··· 67 67 #define DR_LOCAL_SLOWDOWN (0x100) /* Local slow the pipeline */ 68 68 #define DR_GLOBAL_SLOWDOWN (0x200) /* Global slow the pipeline */ 69 69 70 - #endif 70 + #endif /* ASM_X86__DEBUGREG_H */
+3 -3
include/asm-x86/delay.h
··· 1 - #ifndef _ASM_X86_DELAY_H 2 - #define _ASM_X86_DELAY_H 1 + #ifndef ASM_X86__DELAY_H 2 + #define ASM_X86__DELAY_H 3 3 4 4 /* 5 5 * Copyright (C) 1993 Linus Torvalds ··· 28 28 29 29 void use_tsc_delay(void); 30 30 31 - #endif /* _ASM_X86_DELAY_H */ 31 + #endif /* ASM_X86__DELAY_H */
+3 -3
include/asm-x86/desc.h
··· 1 - #ifndef _ASM_DESC_H_ 2 - #define _ASM_DESC_H_ 1 + #ifndef ASM_X86__DESC_H 2 + #define ASM_X86__DESC_H 3 3 4 4 #ifndef __ASSEMBLY__ 5 5 #include <asm/desc_defs.h> ··· 397 397 398 398 #endif /* __ASSEMBLY__ */ 399 399 400 - #endif 400 + #endif /* ASM_X86__DESC_H */
+3 -3
include/asm-x86/desc_defs.h
··· 1 1 /* Written 2000 by Andi Kleen */ 2 - #ifndef __ARCH_DESC_DEFS_H 3 - #define __ARCH_DESC_DEFS_H 2 + #ifndef ASM_X86__DESC_DEFS_H 3 + #define ASM_X86__DESC_DEFS_H 4 4 5 5 /* 6 6 * Segment descriptor structure definitions, usable from both x86_64 and i386 ··· 92 92 93 93 #endif /* !__ASSEMBLY__ */ 94 94 95 - #endif 95 + #endif /* ASM_X86__DESC_DEFS_H */
+3 -3
include/asm-x86/device.h
··· 1 - #ifndef _ASM_X86_DEVICE_H 2 - #define _ASM_X86_DEVICE_H 1 + #ifndef ASM_X86__DEVICE_H 2 + #define ASM_X86__DEVICE_H 3 3 4 4 struct dev_archdata { 5 5 #ifdef CONFIG_ACPI ··· 13 13 #endif 14 14 }; 15 15 16 - #endif /* _ASM_X86_DEVICE_H */ 16 + #endif /* ASM_X86__DEVICE_H */
+3 -3
include/asm-x86/div64.h
··· 1 - #ifndef _ASM_X86_DIV64_H 2 - #define _ASM_X86_DIV64_H 1 + #ifndef ASM_X86__DIV64_H 2 + #define ASM_X86__DIV64_H 3 3 4 4 #ifdef CONFIG_X86_32 5 5 ··· 57 57 # include <asm-generic/div64.h> 58 58 #endif /* CONFIG_X86_32 */ 59 59 60 - #endif /* _ASM_X86_DIV64_H */ 60 + #endif /* ASM_X86__DIV64_H */
+3 -3
include/asm-x86/dma-mapping.h
··· 1 - #ifndef _ASM_DMA_MAPPING_H_ 2 - #define _ASM_DMA_MAPPING_H_ 1 + #ifndef ASM_X86__DMA_MAPPING_H 2 + #define ASM_X86__DMA_MAPPING_H 3 3 4 4 /* 5 5 * IOMMU interface. See Documentation/DMA-mapping.txt and DMA-API.txt for ··· 250 250 #define dma_is_consistent(d, h) (1) 251 251 252 252 #include <asm-generic/dma-coherent.h> 253 - #endif 253 + #endif /* ASM_X86__DMA_MAPPING_H */
+3 -3
include/asm-x86/dma.h
··· 5 5 * and John Boyd, Nov. 1992. 6 6 */ 7 7 8 - #ifndef _ASM_X86_DMA_H 9 - #define _ASM_X86_DMA_H 8 + #ifndef ASM_X86__DMA_H 9 + #define ASM_X86__DMA_H 10 10 11 11 #include <linux/spinlock.h> /* And spinlocks */ 12 12 #include <asm/io.h> /* need byte IO */ ··· 315 315 #define isa_dma_bridge_buggy (0) 316 316 #endif 317 317 318 - #endif /* _ASM_X86_DMA_H */ 318 + #endif /* ASM_X86__DMA_H */
+3 -3
include/asm-x86/dmi.h
··· 1 - #ifndef _ASM_X86_DMI_H 2 - #define _ASM_X86_DMI_H 1 + #ifndef ASM_X86__DMI_H 2 + #define ASM_X86__DMI_H 3 3 4 4 #include <asm/io.h> 5 5 ··· 23 23 #define dmi_ioremap early_ioremap 24 24 #define dmi_iounmap early_iounmap 25 25 26 - #endif 26 + #endif /* ASM_X86__DMI_H */
+215 -49
include/asm-x86/ds.h
··· 2 2 * Debug Store (DS) support 3 3 * 4 4 * This provides a low-level interface to the hardware's Debug Store 5 - * feature that is used for last branch recording (LBR) and 5 + * feature that is used for branch trace store (BTS) and 6 6 * precise-event based sampling (PEBS). 7 7 * 8 - * Different architectures use a different DS layout/pointer size. 9 - * The below functions therefore work on a void*. 8 + * It manages: 9 + * - per-thread and per-cpu allocation of BTS and PEBS 10 + * - buffer memory allocation (optional) 11 + * - buffer overflow handling 12 + * - buffer access 13 + * 14 + * It assumes: 15 + * - get_task_struct on all parameter tasks 16 + * - current is allowed to trace parameter tasks 10 17 * 11 18 * 12 - * Since there is no user for PEBS, yet, only LBR (or branch 13 - * trace store, BTS) is supported. 14 - * 15 - * 16 - * Copyright (C) 2007 Intel Corporation. 17 - * Markus Metzger <markus.t.metzger@intel.com>, Dec 2007 19 + * Copyright (C) 2007-2008 Intel Corporation. 20 + * Markus Metzger <markus.t.metzger@intel.com>, 2007-2008 18 21 */ 19 22 20 - #ifndef _ASM_X86_DS_H 21 - #define _ASM_X86_DS_H 23 + #ifndef ASM_X86__DS_H 24 + #define ASM_X86__DS_H 25 + 26 + #ifdef CONFIG_X86_DS 22 27 23 28 #include <linux/types.h> 24 29 #include <linux/init.h> 25 30 26 - struct cpuinfo_x86; 27 31 32 + struct task_struct; 28 33 29 - /* a branch trace record entry 34 + /* 35 + * Request BTS or PEBS 30 36 * 31 - * In order to unify the interface between various processor versions, 32 - * we use the below data structure for all processors. 37 + * Due to alignement constraints, the actual buffer may be slightly 38 + * smaller than the requested or provided buffer. 
39 + * 40 + * Returns 0 on success; -Eerrno otherwise 41 + * 42 + * task: the task to request recording for; 43 + * NULL for per-cpu recording on the current cpu 44 + * base: the base pointer for the (non-pageable) buffer; 45 + * NULL if buffer allocation requested 46 + * size: the size of the requested or provided buffer 47 + * ovfl: pointer to a function to be called on buffer overflow; 48 + * NULL if cyclic buffer requested 33 49 */ 34 - enum bts_qualifier { 35 - BTS_INVALID = 0, 36 - BTS_BRANCH, 37 - BTS_TASK_ARRIVES, 38 - BTS_TASK_DEPARTS 50 + typedef void (*ds_ovfl_callback_t)(struct task_struct *); 51 + extern int ds_request_bts(struct task_struct *task, void *base, size_t size, 52 + ds_ovfl_callback_t ovfl); 53 + extern int ds_request_pebs(struct task_struct *task, void *base, size_t size, 54 + ds_ovfl_callback_t ovfl); 55 + 56 + /* 57 + * Release BTS or PEBS resources 58 + * 59 + * Frees buffers allocated on ds_request. 60 + * 61 + * Returns 0 on success; -Eerrno otherwise 62 + * 63 + * task: the task to release resources for; 64 + * NULL to release resources for the current cpu 65 + */ 66 + extern int ds_release_bts(struct task_struct *task); 67 + extern int ds_release_pebs(struct task_struct *task); 68 + 69 + /* 70 + * Return the (array) index of the write pointer. 71 + * (assuming an array of BTS/PEBS records) 72 + * 73 + * Returns -Eerrno on error 74 + * 75 + * task: the task to access; 76 + * NULL to access the current cpu 77 + * pos (out): if not NULL, will hold the result 78 + */ 79 + extern int ds_get_bts_index(struct task_struct *task, size_t *pos); 80 + extern int ds_get_pebs_index(struct task_struct *task, size_t *pos); 81 + 82 + /* 83 + * Return the (array) index one record beyond the end of the array. 
84 + * (assuming an array of BTS/PEBS records) 85 + * 86 + * Returns -Eerrno on error 87 + * 88 + * task: the task to access; 89 + * NULL to access the current cpu 90 + * pos (out): if not NULL, will hold the result 91 + */ 92 + extern int ds_get_bts_end(struct task_struct *task, size_t *pos); 93 + extern int ds_get_pebs_end(struct task_struct *task, size_t *pos); 94 + 95 + /* 96 + * Provide a pointer to the BTS/PEBS record at parameter index. 97 + * (assuming an array of BTS/PEBS records) 98 + * 99 + * The pointer points directly into the buffer. The user is 100 + * responsible for copying the record. 101 + * 102 + * Returns the size of a single record on success; -Eerrno on error 103 + * 104 + * task: the task to access; 105 + * NULL to access the current cpu 106 + * index: the index of the requested record 107 + * record (out): pointer to the requested record 108 + */ 109 + extern int ds_access_bts(struct task_struct *task, 110 + size_t index, const void **record); 111 + extern int ds_access_pebs(struct task_struct *task, 112 + size_t index, const void **record); 113 + 114 + /* 115 + * Write one or more BTS/PEBS records at the write pointer index and 116 + * advance the write pointer. 117 + * 118 + * If size is not a multiple of the record size, trailing bytes are 119 + * zeroed out. 120 + * 121 + * May result in one or more overflow notifications. 122 + * 123 + * If called during overflow handling, that is, with index >= 124 + * interrupt threshold, the write will wrap around. 125 + * 126 + * An overflow notification is given if and when the interrupt 127 + * threshold is reached during or after the write. 128 + * 129 + * Returns the number of bytes written or -Eerrno. 
130 + * 131 + * task: the task to access; 132 + * NULL to access the current cpu 133 + * buffer: the buffer to write 134 + * size: the size of the buffer 135 + */ 136 + extern int ds_write_bts(struct task_struct *task, 137 + const void *buffer, size_t size); 138 + extern int ds_write_pebs(struct task_struct *task, 139 + const void *buffer, size_t size); 140 + 141 + /* 142 + * Same as ds_write_bts/pebs, but omit ownership checks. 143 + * 144 + * This is needed to have some other task than the owner of the 145 + * BTS/PEBS buffer or the parameter task itself write into the 146 + * respective buffer. 147 + */ 148 + extern int ds_unchecked_write_bts(struct task_struct *task, 149 + const void *buffer, size_t size); 150 + extern int ds_unchecked_write_pebs(struct task_struct *task, 151 + const void *buffer, size_t size); 152 + 153 + /* 154 + * Reset the write pointer of the BTS/PEBS buffer. 155 + * 156 + * Returns 0 on success; -Eerrno on error 157 + * 158 + * task: the task to access; 159 + * NULL to access the current cpu 160 + */ 161 + extern int ds_reset_bts(struct task_struct *task); 162 + extern int ds_reset_pebs(struct task_struct *task); 163 + 164 + /* 165 + * Clear the BTS/PEBS buffer and reset the write pointer. 166 + * The entire buffer will be zeroed out. 167 + * 168 + * Returns 0 on success; -Eerrno on error 169 + * 170 + * task: the task to access; 171 + * NULL to access the current cpu 172 + */ 173 + extern int ds_clear_bts(struct task_struct *task); 174 + extern int ds_clear_pebs(struct task_struct *task); 175 + 176 + /* 177 + * Provide the PEBS counter reset value. 178 + * 179 + * Returns 0 on success; -Eerrno on error 180 + * 181 + * task: the task to access; 182 + * NULL to access the current cpu 183 + * value (out): the counter reset value 184 + */ 185 + extern int ds_get_pebs_reset(struct task_struct *task, u64 *value); 186 + 187 + /* 188 + * Set the PEBS counter reset value. 
189 + * 190 + * Returns 0 on success; -Eerrno on error 191 + * 192 + * task: the task to access; 193 + * NULL to access the current cpu 194 + * value: the new counter reset value 195 + */ 196 + extern int ds_set_pebs_reset(struct task_struct *task, u64 value); 197 + 198 + /* 199 + * Initialization 200 + */ 201 + struct cpuinfo_x86; 202 + extern void __cpuinit ds_init_intel(struct cpuinfo_x86 *); 203 + 204 + 205 + 206 + /* 207 + * The DS context - part of struct thread_struct. 208 + */ 209 + struct ds_context { 210 + /* pointer to the DS configuration; goes into MSR_IA32_DS_AREA */ 211 + unsigned char *ds; 212 + /* the owner of the BTS and PEBS configuration, respectively */ 213 + struct task_struct *owner[2]; 214 + /* buffer overflow notification function for BTS and PEBS */ 215 + ds_ovfl_callback_t callback[2]; 216 + /* the original buffer address */ 217 + void *buffer[2]; 218 + /* the number of allocated pages for on-request allocated buffers */ 219 + unsigned int pages[2]; 220 + /* use count */ 221 + unsigned long count; 222 + /* a pointer to the context location inside the thread_struct 223 + * or the per_cpu context array */ 224 + struct ds_context **this; 225 + /* a pointer to the task owning this context, or NULL, if the 226 + * context is owned by a cpu */ 227 + struct task_struct *task; 39 228 }; 40 229 41 - struct bts_struct { 42 - u64 qualifier; 43 - union { 44 - /* BTS_BRANCH */ 45 - struct { 46 - u64 from_ip; 47 - u64 to_ip; 48 - } lbr; 49 - /* BTS_TASK_ARRIVES or 50 - BTS_TASK_DEPARTS */ 51 - u64 jiffies; 52 - } variant; 53 - }; 230 + /* called by exit_thread() to free leftover contexts */ 231 + extern void ds_free(struct ds_context *context); 54 232 55 - /* Overflow handling mechanisms */ 56 - #define DS_O_SIGNAL 1 /* send overflow signal */ 57 - #define DS_O_WRAP 2 /* wrap around */ 233 + #else /* CONFIG_X86_DS */ 58 234 59 - extern int ds_allocate(void **, size_t); 60 - extern int ds_free(void **); 61 - extern int ds_get_bts_size(void *); 62 - 
extern int ds_get_bts_end(void *); 63 - extern int ds_get_bts_index(void *); 64 - extern int ds_set_overflow(void *, int); 65 - extern int ds_get_overflow(void *); 66 - extern int ds_clear(void *); 67 - extern int ds_read_bts(void *, int, struct bts_struct *); 68 - extern int ds_write_bts(void *, const struct bts_struct *); 69 - extern unsigned long ds_debugctl_mask(void); 70 - extern void __cpuinit ds_init_intel(struct cpuinfo_x86 *c); 235 + #define ds_init_intel(config) do {} while (0) 71 236 72 - #endif /* _ASM_X86_DS_H */ 237 + #endif /* CONFIG_X86_DS */ 238 + #endif /* ASM_X86__DS_H */
+3 -3
include/asm-x86/dwarf2.h
··· 1 - #ifndef _DWARF2_H 2 - #define _DWARF2_H 1 + #ifndef ASM_X86__DWARF2_H 2 + #define ASM_X86__DWARF2_H 3 3 4 4 #ifndef __ASSEMBLY__ 5 5 #warning "asm/dwarf2.h should be only included in pure assembly files" ··· 58 58 59 59 #endif 60 60 61 - #endif 61 + #endif /* ASM_X86__DWARF2_H */
+4 -3
include/asm-x86/e820.h
··· 1 - #ifndef __ASM_E820_H 2 - #define __ASM_E820_H 1 + #ifndef ASM_X86__E820_H 2 + #define ASM_X86__E820_H 3 3 #define E820MAP 0x2d0 /* our map */ 4 4 #define E820MAX 128 /* number of entries in E820MAP */ 5 5 ··· 64 64 extern struct e820map e820; 65 65 extern struct e820map e820_saved; 66 66 67 + extern unsigned long pci_mem_start; 67 68 extern int e820_any_mapped(u64 start, u64 end, unsigned type); 68 69 extern int e820_all_mapped(u64 start, u64 end, unsigned type); 69 70 extern void e820_add_region(u64 start, u64 size, int type); ··· 141 140 #define HIGH_MEMORY (1024*1024) 142 141 #endif /* __KERNEL__ */ 143 142 144 - #endif /* __ASM_E820_H */ 143 + #endif /* ASM_X86__E820_H */
+3 -3
include/asm-x86/edac.h
··· 1 - #ifndef _ASM_X86_EDAC_H 2 - #define _ASM_X86_EDAC_H 1 + #ifndef ASM_X86__EDAC_H 2 + #define ASM_X86__EDAC_H 3 3 4 4 /* ECC atomic, DMA, SMP and interrupt safe scrub function */ 5 5 ··· 15 15 asm volatile("lock; addl $0, %0"::"m" (*virt_addr)); 16 16 } 17 17 18 - #endif 18 + #endif /* ASM_X86__EDAC_H */
+3 -3
include/asm-x86/efi.h
··· 1 - #ifndef _ASM_X86_EFI_H 2 - #define _ASM_X86_EFI_H 1 + #ifndef ASM_X86__EFI_H 2 + #define ASM_X86__EFI_H 3 3 4 4 #ifdef CONFIG_X86_32 5 5 ··· 94 94 extern void efi_call_phys_prelog(void); 95 95 extern void efi_call_phys_epilog(void); 96 96 97 - #endif 97 + #endif /* ASM_X86__EFI_H */
+6 -5
include/asm-x86/elf.h
··· 1 - #ifndef _ASM_X86_ELF_H 2 - #define _ASM_X86_ELF_H 1 + #ifndef ASM_X86__ELF_H 2 + #define ASM_X86__ELF_H 3 3 4 4 /* 5 5 * ELF register definitions.. ··· 148 148 149 149 static inline void start_ia32_thread(struct pt_regs *regs, u32 ip, u32 sp) 150 150 { 151 - asm volatile("movl %0,%%fs" :: "r" (0)); 152 - asm volatile("movl %0,%%es; movl %0,%%ds" : : "r" (__USER32_DS)); 151 + loadsegment(fs, 0); 152 + loadsegment(ds, __USER32_DS); 153 + loadsegment(es, __USER32_DS); 153 154 load_gs_index(0); 154 155 regs->ip = ip; 155 156 regs->sp = sp; ··· 333 332 extern unsigned long arch_randomize_brk(struct mm_struct *mm); 334 333 #define arch_randomize_brk arch_randomize_brk 335 334 336 - #endif 335 + #endif /* ASM_X86__ELF_H */
+3 -3
include/asm-x86/emergency-restart.h
··· 1 - #ifndef _ASM_EMERGENCY_RESTART_H 2 - #define _ASM_EMERGENCY_RESTART_H 1 + #ifndef ASM_X86__EMERGENCY_RESTART_H 2 + #define ASM_X86__EMERGENCY_RESTART_H 3 3 4 4 enum reboot_type { 5 5 BOOT_TRIPLE = 't', ··· 15 15 16 16 extern void machine_emergency_restart(void); 17 17 18 - #endif /* _ASM_EMERGENCY_RESTART_H */ 18 + #endif /* ASM_X86__EMERGENCY_RESTART_H */
+3 -3
include/asm-x86/fb.h
··· 1 - #ifndef _ASM_X86_FB_H 2 - #define _ASM_X86_FB_H 1 + #ifndef ASM_X86__FB_H 2 + #define ASM_X86__FB_H 3 3 4 4 #include <linux/fb.h> 5 5 #include <linux/fs.h> ··· 18 18 static inline int fb_is_primary_device(struct fb_info *info) { return 0; } 19 19 #endif 20 20 21 - #endif /* _ASM_X86_FB_H */ 21 + #endif /* ASM_X86__FB_H */
+3 -3
include/asm-x86/fixmap.h
··· 1 - #ifndef _ASM_FIXMAP_H 2 - #define _ASM_FIXMAP_H 1 + #ifndef ASM_X86__FIXMAP_H 2 + #define ASM_X86__FIXMAP_H 3 3 4 4 #ifdef CONFIG_X86_32 5 5 # include "fixmap_32.h" ··· 65 65 BUG_ON(vaddr >= FIXADDR_TOP || vaddr < FIXADDR_START); 66 66 return __virt_to_fix(vaddr); 67 67 } 68 - #endif 68 + #endif /* ASM_X86__FIXMAP_H */
+3 -3
include/asm-x86/fixmap_32.h
··· 10 10 * Support of BIGMEM added by Gerhard Wichert, Siemens AG, July 1999 11 11 */ 12 12 13 - #ifndef _ASM_FIXMAP_32_H 14 - #define _ASM_FIXMAP_32_H 13 + #ifndef ASM_X86__FIXMAP_32_H 14 + #define ASM_X86__FIXMAP_32_H 15 15 16 16 17 17 /* used by vmalloc.c, vsyscall.lds.S. ··· 120 120 #define FIXADDR_BOOT_START (FIXADDR_TOP - __FIXADDR_BOOT_SIZE) 121 121 122 122 #endif /* !__ASSEMBLY__ */ 123 - #endif 123 + #endif /* ASM_X86__FIXMAP_32_H */
+3 -3
include/asm-x86/fixmap_64.h
··· 8 8 * Copyright (C) 1998 Ingo Molnar 9 9 */ 10 10 11 - #ifndef _ASM_FIXMAP_64_H 12 - #define _ASM_FIXMAP_64_H 11 + #ifndef ASM_X86__FIXMAP_64_H 12 + #define ASM_X86__FIXMAP_64_H 13 13 14 14 #include <linux/kernel.h> 15 15 #include <asm/acpi.h> ··· 80 80 #define FIXADDR_USER_START ((unsigned long)VSYSCALL32_VSYSCALL) 81 81 #define FIXADDR_USER_END (FIXADDR_USER_START + PAGE_SIZE) 82 82 83 - #endif 83 + #endif /* ASM_X86__FIXMAP_64_H */
+3 -3
include/asm-x86/floppy.h
··· 7 7 * 8 8 * Copyright (C) 1995 9 9 */ 10 - #ifndef _ASM_X86_FLOPPY_H 11 - #define _ASM_X86_FLOPPY_H 10 + #ifndef ASM_X86__FLOPPY_H 11 + #define ASM_X86__FLOPPY_H 12 12 13 13 #include <linux/vmalloc.h> 14 14 ··· 278 278 279 279 #define EXTRA_FLOPPY_PARAMS 280 280 281 - #endif /* _ASM_X86_FLOPPY_H */ 281 + #endif /* ASM_X86__FLOPPY_H */
+3 -3
include/asm-x86/ftrace.h
··· 1 - #ifndef _ASM_X86_FTRACE 2 - #define _ASM_X86_FTRACE 1 + #ifndef ASM_X86__FTRACE_H 2 + #define ASM_X86__FTRACE_H 3 3 4 4 #ifdef CONFIG_FTRACE 5 5 #define MCOUNT_ADDR ((long)(mcount)) ··· 11 11 12 12 #endif /* CONFIG_FTRACE */ 13 13 14 - #endif /* _ASM_X86_FTRACE */ 14 + #endif /* ASM_X86__FTRACE_H */
+6 -6
include/asm-x86/futex.h
··· 1 - #ifndef _ASM_X86_FUTEX_H 2 - #define _ASM_X86_FUTEX_H 1 + #ifndef ASM_X86__FUTEX_H 2 + #define ASM_X86__FUTEX_H 3 3 4 4 #ifdef __KERNEL__ 5 5 ··· 25 25 asm volatile("1:\tmovl %2, %0\n" \ 26 26 "\tmovl\t%0, %3\n" \ 27 27 "\t" insn "\n" \ 28 - "2:\tlock; cmpxchgl %3, %2\n" \ 28 + "2:\t" LOCK_PREFIX "cmpxchgl %3, %2\n" \ 29 29 "\tjnz\t1b\n" \ 30 30 "3:\t.section .fixup,\"ax\"\n" \ 31 31 "4:\tmov\t%5, %1\n" \ ··· 64 64 __futex_atomic_op1("xchgl %0, %2", ret, oldval, uaddr, oparg); 65 65 break; 66 66 case FUTEX_OP_ADD: 67 - __futex_atomic_op1("lock; xaddl %0, %2", ret, oldval, 67 + __futex_atomic_op1(LOCK_PREFIX "xaddl %0, %2", ret, oldval, 68 68 uaddr, oparg); 69 69 break; 70 70 case FUTEX_OP_OR: ··· 122 122 if (!access_ok(VERIFY_WRITE, uaddr, sizeof(int))) 123 123 return -EFAULT; 124 124 125 - asm volatile("1:\tlock; cmpxchgl %3, %1\n" 125 + asm volatile("1:\t" LOCK_PREFIX "cmpxchgl %3, %1\n" 126 126 "2:\t.section .fixup, \"ax\"\n" 127 127 "3:\tmov %2, %0\n" 128 128 "\tjmp 2b\n" ··· 137 137 } 138 138 139 139 #endif 140 - #endif 140 + #endif /* ASM_X86__FUTEX_H */
+6 -6
include/asm-x86/gart.h
··· 1 - #ifndef _ASM_X8664_GART_H 2 - #define _ASM_X8664_GART_H 1 + #ifndef ASM_X86__GART_H 2 + #define ASM_X86__GART_H 3 3 4 4 #include <asm/e820.h> 5 5 ··· 52 52 return 0; 53 53 54 54 if (aper_base + aper_size > 0x100000000ULL) { 55 - printk(KERN_ERR "Aperture beyond 4GB. Ignoring.\n"); 55 + printk(KERN_INFO "Aperture beyond 4GB. Ignoring.\n"); 56 56 return 0; 57 57 } 58 58 if (e820_any_mapped(aper_base, aper_base + aper_size, E820_RAM)) { 59 - printk(KERN_ERR "Aperture pointing to e820 RAM. Ignoring.\n"); 59 + printk(KERN_INFO "Aperture pointing to e820 RAM. Ignoring.\n"); 60 60 return 0; 61 61 } 62 62 if (aper_size < min_size) { 63 - printk(KERN_ERR "Aperture too small (%d MB) than (%d MB)\n", 63 + printk(KERN_INFO "Aperture too small (%d MB) than (%d MB)\n", 64 64 aper_size>>20, min_size>>20); 65 65 return 0; 66 66 } ··· 68 68 return 1; 69 69 } 70 70 71 - #endif 71 + #endif /* ASM_X86__GART_H */
+3 -3
include/asm-x86/genapic_32.h
··· 1 - #ifndef _ASM_GENAPIC_H 2 - #define _ASM_GENAPIC_H 1 + #ifndef ASM_X86__GENAPIC_32_H 2 + #define ASM_X86__GENAPIC_32_H 3 3 4 4 #include <asm/mpspec.h> 5 5 ··· 121 121 #define uv_system_init() do {} while (0) 122 122 123 123 124 - #endif 124 + #endif /* ASM_X86__GENAPIC_32_H */
+3 -3
include/asm-x86/genapic_64.h
··· 1 - #ifndef _ASM_GENAPIC_H 2 - #define _ASM_GENAPIC_H 1 + #ifndef ASM_X86__GENAPIC_64_H 2 + #define ASM_X86__GENAPIC_64_H 3 3 4 4 /* 5 5 * Copyright 2004 James Cleverdon, IBM. ··· 47 47 48 48 extern void setup_apic_routing(void); 49 49 50 - #endif 50 + #endif /* ASM_X86__GENAPIC_64_H */
+3 -3
include/asm-x86/geode.h
··· 7 7 * as published by the Free Software Foundation. 8 8 */ 9 9 10 - #ifndef _ASM_GEODE_H_ 11 - #define _ASM_GEODE_H_ 10 + #ifndef ASM_X86__GEODE_H 11 + #define ASM_X86__GEODE_H 12 12 13 13 #include <asm/processor.h> 14 14 #include <linux/io.h> ··· 250 250 static inline int mfgpt_timer_setup(void) { return 0; } 251 251 #endif 252 252 253 - #endif 253 + #endif /* ASM_X86__GEODE_H */
+1 -1
include/asm-x86/gpio.h
··· 53 53 54 54 #endif /* CONFIG_GPIOLIB */ 55 55 56 - #endif /* _ASM_I386_GPIO_H */ 56 + #endif /* ASM_X86__GPIO_H */
+3 -3
include/asm-x86/hardirq_32.h
··· 1 - #ifndef __ASM_HARDIRQ_H 2 - #define __ASM_HARDIRQ_H 1 + #ifndef ASM_X86__HARDIRQ_32_H 2 + #define ASM_X86__HARDIRQ_32_H 3 3 4 4 #include <linux/threads.h> 5 5 #include <linux/irq.h> ··· 25 25 void ack_bad_irq(unsigned int irq); 26 26 #include <linux/irq_cpustat.h> 27 27 28 - #endif /* __ASM_HARDIRQ_H */ 28 + #endif /* ASM_X86__HARDIRQ_32_H */
+3 -3
include/asm-x86/hardirq_64.h
··· 1 - #ifndef __ASM_HARDIRQ_H 2 - #define __ASM_HARDIRQ_H 1 + #ifndef ASM_X86__HARDIRQ_64_H 2 + #define ASM_X86__HARDIRQ_64_H 3 3 4 4 #include <linux/threads.h> 5 5 #include <linux/irq.h> ··· 20 20 21 21 extern void ack_bad_irq(unsigned int irq); 22 22 23 - #endif /* __ASM_HARDIRQ_H */ 23 + #endif /* ASM_X86__HARDIRQ_64_H */
+3 -3
include/asm-x86/highmem.h
··· 15 15 * Copyright (C) 1999 Ingo Molnar <mingo@redhat.com> 16 16 */ 17 17 18 - #ifndef _ASM_HIGHMEM_H 19 - #define _ASM_HIGHMEM_H 18 + #ifndef ASM_X86__HIGHMEM_H 19 + #define ASM_X86__HIGHMEM_H 20 20 21 21 #ifdef __KERNEL__ 22 22 ··· 79 79 80 80 #endif /* __KERNEL__ */ 81 81 82 - #endif /* _ASM_HIGHMEM_H */ 82 + #endif /* ASM_X86__HIGHMEM_H */
+3 -3
include/asm-x86/hpet.h
··· 1 - #ifndef ASM_X86_HPET_H 2 - #define ASM_X86_HPET_H 1 + #ifndef ASM_X86__HPET_H 2 + #define ASM_X86__HPET_H 3 3 4 4 #ifdef CONFIG_HPET_TIMER 5 5 ··· 90 90 #define hpet_readl(a) 0 91 91 92 92 #endif 93 - #endif /* ASM_X86_HPET_H */ 93 + #endif /* ASM_X86__HPET_H */
+3 -3
include/asm-x86/hugetlb.h
··· 1 - #ifndef _ASM_X86_HUGETLB_H 2 - #define _ASM_X86_HUGETLB_H 1 + #ifndef ASM_X86__HUGETLB_H 2 + #define ASM_X86__HUGETLB_H 3 3 4 4 #include <asm/page.h> 5 5 ··· 90 90 { 91 91 } 92 92 93 - #endif /* _ASM_X86_HUGETLB_H */ 93 + #endif /* ASM_X86__HUGETLB_H */
+23 -3
include/asm-x86/hw_irq.h
··· 1 - #ifndef _ASM_HW_IRQ_H 2 - #define _ASM_HW_IRQ_H 1 + #ifndef ASM_X86__HW_IRQ_H 2 + #define ASM_X86__HW_IRQ_H 3 3 4 4 /* 5 5 * (C) 1992, 1993 Linus Torvalds, (C) 1997 Ingo Molnar ··· 93 93 extern asmlinkage void qic_enable_irq_interrupt(void); 94 94 extern asmlinkage void qic_call_function_interrupt(void); 95 95 96 + /* SMP */ 97 + extern void smp_apic_timer_interrupt(struct pt_regs *); 98 + #ifdef CONFIG_X86_32 99 + extern void smp_spurious_interrupt(struct pt_regs *); 100 + extern void smp_error_interrupt(struct pt_regs *); 101 + #else 102 + extern asmlinkage void smp_spurious_interrupt(void); 103 + extern asmlinkage void smp_error_interrupt(void); 104 + #endif 105 + #ifdef CONFIG_X86_SMP 106 + extern void smp_reschedule_interrupt(struct pt_regs *); 107 + extern void smp_call_function_interrupt(struct pt_regs *); 108 + extern void smp_call_function_single_interrupt(struct pt_regs *); 109 + #ifdef CONFIG_X86_32 110 + extern void smp_invalidate_interrupt(struct pt_regs *); 111 + #else 112 + extern asmlinkage void smp_invalidate_interrupt(struct pt_regs *); 113 + #endif 114 + #endif 115 + 96 116 #ifdef CONFIG_X86_32 97 117 extern void (*const interrupt[NR_IRQS])(void); 98 118 #else ··· 132 112 133 113 #endif /* !ASSEMBLY_ */ 134 114 135 - #endif 115 + #endif /* ASM_X86__HW_IRQ_H */
+3 -3
include/asm-x86/hypertransport.h
··· 1 - #ifndef ASM_HYPERTRANSPORT_H 2 - #define ASM_HYPERTRANSPORT_H 1 + #ifndef ASM_X86__HYPERTRANSPORT_H 2 + #define ASM_X86__HYPERTRANSPORT_H 3 3 4 4 /* 5 5 * Constants for x86 Hypertransport Interrupts. ··· 42 42 #define HT_IRQ_HIGH_DEST_ID(v) \ 43 43 ((((v) >> 8) << HT_IRQ_HIGH_DEST_ID_SHIFT) & HT_IRQ_HIGH_DEST_ID_MASK) 44 44 45 - #endif /* ASM_HYPERTRANSPORT_H */ 45 + #endif /* ASM_X86__HYPERTRANSPORT_H */
+4 -3
include/asm-x86/i387.h
··· 7 7 * x86-64 work by Andi Kleen 2002 8 8 */ 9 9 10 - #ifndef _ASM_X86_I387_H 11 - #define _ASM_X86_I387_H 10 + #ifndef ASM_X86__I387_H 11 + #define ASM_X86__I387_H 12 12 13 13 #include <linux/sched.h> 14 14 #include <linux/kernel_stat.h> ··· 25 25 extern int init_fpu(struct task_struct *child); 26 26 extern asmlinkage void math_state_restore(void); 27 27 extern void init_thread_xstate(void); 28 + extern int dump_fpu(struct pt_regs *, struct user_i387_struct *); 28 29 29 30 extern user_regset_active_fn fpregs_active, xfpregs_active; 30 31 extern user_regset_get_fn fpregs_get, xfpregs_get, fpregs_soft_get; ··· 337 336 } 338 337 } 339 338 340 - #endif /* _ASM_X86_I387_H */ 339 + #endif /* ASM_X86__I387_H */
+3 -3
include/asm-x86/i8253.h
··· 1 - #ifndef __ASM_I8253_H__ 2 - #define __ASM_I8253_H__ 1 + #ifndef ASM_X86__I8253_H 2 + #define ASM_X86__I8253_H 3 3 4 4 /* i8253A PIT registers */ 5 5 #define PIT_MODE 0x43 ··· 15 15 #define inb_pit inb_p 16 16 #define outb_pit outb_p 17 17 18 - #endif /* __ASM_I8253_H__ */ 18 + #endif /* ASM_X86__I8253_H */
+3 -3
include/asm-x86/i8259.h
··· 1 - #ifndef __ASM_I8259_H__ 2 - #define __ASM_I8259_H__ 1 + #ifndef ASM_X86__I8259_H 2 + #define ASM_X86__I8259_H 3 3 4 4 #include <linux/delay.h> 5 5 ··· 57 57 58 58 extern struct irq_chip i8259A_chip; 59 59 60 - #endif /* __ASM_I8259_H__ */ 60 + #endif /* ASM_X86__I8259_H */
+3 -3
include/asm-x86/ia32.h
··· 1 - #ifndef _ASM_X86_64_IA32_H 2 - #define _ASM_X86_64_IA32_H 1 + #ifndef ASM_X86__IA32_H 2 + #define ASM_X86__IA32_H 3 3 4 4 5 5 #ifdef CONFIG_IA32_EMULATION ··· 167 167 168 168 #endif /* !CONFIG_IA32_SUPPORT */ 169 169 170 - #endif 170 + #endif /* ASM_X86__IA32_H */
+3 -3
include/asm-x86/ia32_unistd.h
··· 1 - #ifndef _ASM_X86_64_IA32_UNISTD_H_ 2 - #define _ASM_X86_64_IA32_UNISTD_H_ 1 + #ifndef ASM_X86__IA32_UNISTD_H 2 + #define ASM_X86__IA32_UNISTD_H 3 3 4 4 /* 5 5 * This file contains the system call numbers of the ia32 port, ··· 15 15 #define __NR_ia32_sigreturn 119 16 16 #define __NR_ia32_rt_sigreturn 173 17 17 18 - #endif /* _ASM_X86_64_IA32_UNISTD_H_ */ 18 + #endif /* ASM_X86__IA32_UNISTD_H */
+3 -3
include/asm-x86/idle.h
··· 1 - #ifndef _ASM_X86_64_IDLE_H 2 - #define _ASM_X86_64_IDLE_H 1 1 + #ifndef ASM_X86__IDLE_H 2 + #define ASM_X86__IDLE_H 3 3 4 4 #define IDLE_START 1 5 5 #define IDLE_END 2 ··· 12 12 13 13 void c1e_remove_cpu(int cpu); 14 14 15 - #endif 15 + #endif /* ASM_X86__IDLE_H */
+3 -3
include/asm-x86/intel_arch_perfmon.h
··· 1 - #ifndef _ASM_X86_INTEL_ARCH_PERFMON_H 2 - #define _ASM_X86_INTEL_ARCH_PERFMON_H 1 + #ifndef ASM_X86__INTEL_ARCH_PERFMON_H 2 + #define ASM_X86__INTEL_ARCH_PERFMON_H 3 3 4 4 #define MSR_ARCH_PERFMON_PERFCTR0 0xc1 5 5 #define MSR_ARCH_PERFMON_PERFCTR1 0xc2 ··· 28 28 unsigned int full; 29 29 }; 30 30 31 - #endif /* _ASM_X86_INTEL_ARCH_PERFMON_H */ 31 + #endif /* ASM_X86__INTEL_ARCH_PERFMON_H */
+5 -3
include/asm-x86/io.h
··· 1 - #ifndef _ASM_X86_IO_H 2 - #define _ASM_X86_IO_H 1 + #ifndef ASM_X86__IO_H 2 + #define ASM_X86__IO_H 3 3 4 4 #define ARCH_HAS_IOREMAP_WC 5 5 ··· 73 73 #define writeq writeq 74 74 #endif 75 75 76 + extern int iommu_bio_merge; 77 + 76 78 #ifdef CONFIG_X86_32 77 79 # include "io_32.h" 78 80 #else ··· 101 99 extern void __iomem *fix_ioremap(unsigned idx, unsigned long phys); 102 100 103 101 104 - #endif /* _ASM_X86_IO_H */ 102 + #endif /* ASM_X86__IO_H */
+3 -3
include/asm-x86/io_32.h
··· 1 - #ifndef _ASM_IO_H 2 - #define _ASM_IO_H 1 + #ifndef ASM_X86__IO_32_H 2 + #define ASM_X86__IO_32_H 3 3 4 4 #include <linux/string.h> 5 5 #include <linux/compiler.h> ··· 281 281 BUILDIO(w, w, short) 282 282 BUILDIO(l, , int) 283 283 284 - #endif 284 + #endif /* ASM_X86__IO_32_H */
+3 -4
include/asm-x86/io_64.h
··· 1 - #ifndef _ASM_IO_H 2 - #define _ASM_IO_H 1 + #ifndef ASM_X86__IO_64_H 2 + #define ASM_X86__IO_64_H 3 3 4 4 5 5 /* ··· 235 235 236 236 #define flush_write_buffers() 237 237 238 - extern int iommu_bio_merge; 239 238 #define BIO_VMERGE_BOUNDARY iommu_bio_merge 240 239 241 240 /* ··· 244 245 245 246 #endif /* __KERNEL__ */ 246 247 247 - #endif 248 + #endif /* ASM_X86__IO_64_H */
+3 -3
include/asm-x86/io_apic.h
··· 1 - #ifndef __ASM_IO_APIC_H 2 - #define __ASM_IO_APIC_H 1 + #ifndef ASM_X86__IO_APIC_H 2 + #define ASM_X86__IO_APIC_H 3 3 4 4 #include <linux/types.h> 5 5 #include <asm/mpspec.h> ··· 189 189 static inline void ioapic_init_mappings(void) { } 190 190 #endif 191 191 192 - #endif 192 + #endif /* ASM_X86__IO_APIC_H */
+3 -3
include/asm-x86/ioctls.h
··· 1 - #ifndef _ASM_X86_IOCTLS_H 2 - #define _ASM_X86_IOCTLS_H 1 + #ifndef ASM_X86__IOCTLS_H 2 + #define ASM_X86__IOCTLS_H 3 3 4 4 #include <asm/ioctl.h> 5 5 ··· 85 85 86 86 #define TIOCSER_TEMT 0x01 /* Transmitter physically empty */ 87 87 88 - #endif 88 + #endif /* ASM_X86__IOCTLS_H */
+3 -3
include/asm-x86/iommu.h
··· 1 - #ifndef _ASM_X8664_IOMMU_H 2 - #define _ASM_X8664_IOMMU_H 1 + #ifndef ASM_X86__IOMMU_H 2 + #define ASM_X86__IOMMU_H 3 3 4 4 extern void pci_iommu_shutdown(void); 5 5 extern void no_iommu_init(void); ··· 42 42 } 43 43 #endif 44 44 45 - #endif 45 + #endif /* ASM_X86__IOMMU_H */
+3 -3
include/asm-x86/ipcbuf.h
··· 1 - #ifndef _ASM_X86_IPCBUF_H 2 - #define _ASM_X86_IPCBUF_H 1 + #ifndef ASM_X86__IPCBUF_H 2 + #define ASM_X86__IPCBUF_H 3 3 4 4 /* 5 5 * The ipc64_perm structure for x86 architecture. ··· 25 25 unsigned long __unused2; 26 26 }; 27 27 28 - #endif /* _ASM_X86_IPCBUF_H */ 28 + #endif /* ASM_X86__IPCBUF_H */
+3 -3
include/asm-x86/ipi.h
··· 1 - #ifndef __ASM_IPI_H 2 - #define __ASM_IPI_H 1 + #ifndef ASM_X86__IPI_H 2 + #define ASM_X86__IPI_H 3 3 4 4 /* 5 5 * Copyright 2004 James Cleverdon, IBM. ··· 129 129 local_irq_restore(flags); 130 130 } 131 131 132 - #endif /* __ASM_IPI_H */ 132 + #endif /* ASM_X86__IPI_H */
+3 -3
include/asm-x86/irq.h
··· 1 - #ifndef _ASM_IRQ_H 2 - #define _ASM_IRQ_H 1 + #ifndef ASM_X86__IRQ_H 2 + #define ASM_X86__IRQ_H 3 3 /* 4 4 * (C) 1992, 1993 Linus Torvalds, (C) 1997 Ingo Molnar 5 5 * ··· 47 47 /* Interrupt vector management */ 48 48 extern DECLARE_BITMAP(used_vectors, NR_VECTORS); 49 49 50 - #endif /* _ASM_IRQ_H */ 50 + #endif /* ASM_X86__IRQ_H */
+3 -3
include/asm-x86/irq_regs_32.h
··· 4 4 * 5 5 * Jeremy Fitzhardinge <jeremy@goop.org> 6 6 */ 7 - #ifndef _ASM_I386_IRQ_REGS_H 8 - #define _ASM_I386_IRQ_REGS_H 7 + #ifndef ASM_X86__IRQ_REGS_32_H 8 + #define ASM_X86__IRQ_REGS_32_H 9 9 10 10 #include <asm/percpu.h> 11 11 ··· 26 26 return old_regs; 27 27 } 28 28 29 - #endif /* _ASM_I386_IRQ_REGS_H */ 29 + #endif /* ASM_X86__IRQ_REGS_32_H */
+3 -3
include/asm-x86/irq_vectors.h
··· 1 - #ifndef _ASM_IRQ_VECTORS_H 2 - #define _ASM_IRQ_VECTORS_H 1 + #ifndef ASM_X86__IRQ_VECTORS_H 2 + #define ASM_X86__IRQ_VECTORS_H 3 3 4 4 #include <linux/threads.h> 5 5 ··· 179 179 #define VIC_CPU_BOOT_ERRATA_CPI (VIC_CPI_LEVEL0 + 8) 180 180 181 181 182 - #endif /* _ASM_IRQ_VECTORS_H */ 182 + #endif /* ASM_X86__IRQ_VECTORS_H */
+3 -3
include/asm-x86/ist.h
··· 1 - #ifndef _ASM_IST_H 2 - #define _ASM_IST_H 1 + #ifndef ASM_X86__IST_H 2 + #define ASM_X86__IST_H 3 3 4 4 /* 5 5 * Include file for the interface to IST BIOS ··· 31 31 extern struct ist_info ist_info; 32 32 33 33 #endif /* __KERNEL__ */ 34 - #endif /* _ASM_IST_H */ 34 + #endif /* ASM_X86__IST_H */
+3 -3
include/asm-x86/k8.h
··· 1 - #ifndef _ASM_K8_H 2 - #define _ASM_K8_H 1 + #ifndef ASM_X86__K8_H 2 + #define ASM_X86__K8_H 3 3 4 4 #include <linux/pci.h> 5 5 ··· 12 12 extern void k8_flush_garts(void); 13 13 extern int k8_scan_nodes(unsigned long start, unsigned long end); 14 14 15 - #endif 15 + #endif /* ASM_X86__K8_H */
+3 -3
include/asm-x86/kdebug.h
··· 1 - #ifndef _ASM_X86_KDEBUG_H 2 - #define _ASM_X86_KDEBUG_H 1 + #ifndef ASM_X86__KDEBUG_H 2 + #define ASM_X86__KDEBUG_H 3 3 4 4 #include <linux/notifier.h> 5 5 ··· 35 35 extern unsigned long oops_begin(void); 36 36 extern void oops_end(unsigned long, struct pt_regs *, int signr); 37 37 38 - #endif 38 + #endif /* ASM_X86__KDEBUG_H */
+3 -3
include/asm-x86/kexec.h
··· 1 - #ifndef _KEXEC_H 2 - #define _KEXEC_H 1 + #ifndef ASM_X86__KEXEC_H 2 + #define ASM_X86__KEXEC_H 3 3 4 4 #ifdef CONFIG_X86_32 5 5 # define PA_CONTROL_PAGE 0 ··· 172 172 173 173 #endif /* __ASSEMBLY__ */ 174 174 175 - #endif /* _KEXEC_H */ 175 + #endif /* ASM_X86__KEXEC_H */
+3 -3
include/asm-x86/kgdb.h
··· 1 - #ifndef _ASM_KGDB_H_ 2 - #define _ASM_KGDB_H_ 1 + #ifndef ASM_X86__KGDB_H 2 + #define ASM_X86__KGDB_H 3 3 4 4 /* 5 5 * Copyright (C) 2001-2004 Amit S. Kale ··· 76 76 #define BREAK_INSTR_SIZE 1 77 77 #define CACHE_FLUSH_IS_SAFE 1 78 78 79 - #endif /* _ASM_KGDB_H_ */ 79 + #endif /* ASM_X86__KGDB_H */
+3 -3
include/asm-x86/kmap_types.h
··· 1 - #ifndef _ASM_X86_KMAP_TYPES_H 2 - #define _ASM_X86_KMAP_TYPES_H 1 + #ifndef ASM_X86__KMAP_TYPES_H 2 + #define ASM_X86__KMAP_TYPES_H 3 3 4 4 #if defined(CONFIG_X86_32) && defined(CONFIG_DEBUG_HIGHMEM) 5 5 # define D(n) __KM_FENCE_##n , ··· 26 26 27 27 #undef D 28 28 29 - #endif 29 + #endif /* ASM_X86__KMAP_TYPES_H */
+3 -3
include/asm-x86/kprobes.h
··· 1 - #ifndef _ASM_KPROBES_H 2 - #define _ASM_KPROBES_H 1 + #ifndef ASM_X86__KPROBES_H 2 + #define ASM_X86__KPROBES_H 3 3 /* 4 4 * Kernel Probes (KProbes) 5 5 * ··· 94 94 extern int kprobe_fault_handler(struct pt_regs *regs, int trapnr); 95 95 extern int kprobe_exceptions_notify(struct notifier_block *self, 96 96 unsigned long val, void *data); 97 - #endif /* _ASM_KPROBES_H */ 97 + #endif /* ASM_X86__KPROBES_H */
+3 -3
include/asm-x86/kvm.h
··· 1 - #ifndef __LINUX_KVM_X86_H 2 - #define __LINUX_KVM_X86_H 1 + #ifndef ASM_X86__KVM_H 2 + #define ASM_X86__KVM_H 3 3 4 4 /* 5 5 * KVM x86 specific structures and definitions ··· 230 230 #define KVM_TRC_APIC_ACCESS (KVM_TRC_HANDLER + 0x14) 231 231 #define KVM_TRC_TDP_FAULT (KVM_TRC_HANDLER + 0x15) 232 232 233 - #endif 233 + #endif /* ASM_X86__KVM_H */
+4 -4
include/asm-x86/kvm_host.h
··· 1 - #/* 1 + /* 2 2 * Kernel-based Virtual Machine driver for Linux 3 3 * 4 4 * This header defines architecture specific interfaces, x86 version ··· 8 8 * 9 9 */ 10 10 11 - #ifndef ASM_KVM_HOST_H 12 - #define ASM_KVM_HOST_H 11 + #ifndef ASM_X86__KVM_HOST_H 12 + #define ASM_X86__KVM_HOST_H 13 13 14 14 #include <linux/types.h> 15 15 #include <linux/mm.h> ··· 735 735 int kvm_unmap_hva(struct kvm *kvm, unsigned long hva); 736 736 int kvm_age_hva(struct kvm *kvm, unsigned long hva); 737 737 738 - #endif 738 + #endif /* ASM_X86__KVM_HOST_H */
+3 -3
include/asm-x86/kvm_para.h
··· 1 - #ifndef __X86_KVM_PARA_H 2 - #define __X86_KVM_PARA_H 1 + #ifndef ASM_X86__KVM_PARA_H 2 + #define ASM_X86__KVM_PARA_H 3 3 4 4 /* This CPUID returns the signature 'KVMKVMKVM' in ebx, ecx, and edx. It 5 5 * should be used to determine that a VM is running under KVM. ··· 144 144 145 145 #endif 146 146 147 - #endif 147 + #endif /* ASM_X86__KVM_PARA_H */
+3 -3
include/asm-x86/kvm_x86_emulate.h
··· 8 8 * From: xen-unstable 10676:af9809f51f81a3c43f276f00c81a52ef558afda4 9 9 */ 10 10 11 - #ifndef __X86_EMULATE_H__ 12 - #define __X86_EMULATE_H__ 11 + #ifndef ASM_X86__KVM_X86_EMULATE_H 12 + #define ASM_X86__KVM_X86_EMULATE_H 13 13 14 14 struct x86_emulate_ctxt; 15 15 ··· 181 181 int x86_emulate_insn(struct x86_emulate_ctxt *ctxt, 182 182 struct x86_emulate_ops *ops); 183 183 184 - #endif /* __X86_EMULATE_H__ */ 184 + #endif /* ASM_X86__KVM_X86_EMULATE_H */
+3 -3
include/asm-x86/ldt.h
··· 3 3 * 4 4 * Definitions of structures used with the modify_ldt system call. 5 5 */ 6 - #ifndef _ASM_X86_LDT_H 7 - #define _ASM_X86_LDT_H 6 + #ifndef ASM_X86__LDT_H 7 + #define ASM_X86__LDT_H 8 8 9 9 /* Maximum number of LDT entries supported. */ 10 10 #define LDT_ENTRIES 8192 ··· 37 37 #define MODIFY_LDT_CONTENTS_CODE 2 38 38 39 39 #endif /* !__ASSEMBLY__ */ 40 - #endif 40 + #endif /* ASM_X86__LDT_H */
+3 -3
include/asm-x86/lguest.h
··· 1 - #ifndef _X86_LGUEST_H 2 - #define _X86_LGUEST_H 1 + #ifndef ASM_X86__LGUEST_H 2 + #define ASM_X86__LGUEST_H 3 3 4 4 #define GDT_ENTRY_LGUEST_CS 10 5 5 #define GDT_ENTRY_LGUEST_DS 11 ··· 91 91 92 92 #endif /* __ASSEMBLY__ */ 93 93 94 - #endif 94 + #endif /* ASM_X86__LGUEST_H */
+3 -3
include/asm-x86/lguest_hcall.h
··· 1 1 /* Architecture specific portion of the lguest hypercalls */ 2 - #ifndef _X86_LGUEST_HCALL_H 3 - #define _X86_LGUEST_HCALL_H 2 + #ifndef ASM_X86__LGUEST_HCALL_H 3 + #define ASM_X86__LGUEST_HCALL_H 4 4 5 5 #define LHCALL_FLUSH_ASYNC 0 6 6 #define LHCALL_LGUEST_INIT 1 ··· 68 68 }; 69 69 70 70 #endif /* !__ASSEMBLY__ */ 71 - #endif /* _I386_LGUEST_HCALL_H */ 71 + #endif /* ASM_X86__LGUEST_HCALL_H */
+3 -3
include/asm-x86/linkage.h
··· 1 - #ifndef __ASM_LINKAGE_H 2 - #define __ASM_LINKAGE_H 1 + #ifndef ASM_X86__LINKAGE_H 2 + #define ASM_X86__LINKAGE_H 3 3 4 4 #undef notrace 5 5 #define notrace __attribute__((no_instrument_function)) ··· 57 57 #define __ALIGN_STR ".align 16,0x90" 58 58 #endif 59 59 60 - #endif 60 + #endif /* ASM_X86__LINKAGE_H */ 61 61
+3 -3
include/asm-x86/local.h
··· 1 - #ifndef _ARCH_LOCAL_H 2 - #define _ARCH_LOCAL_H 1 + #ifndef ASM_X86__LOCAL_H 2 + #define ASM_X86__LOCAL_H 3 3 4 4 #include <linux/percpu.h> 5 5 ··· 232 232 #define __cpu_local_add(i, l) cpu_local_add((i), (l)) 233 233 #define __cpu_local_sub(i, l) cpu_local_sub((i), (l)) 234 234 235 - #endif /* _ARCH_LOCAL_H */ 235 + #endif /* ASM_X86__LOCAL_H */
+3 -3
include/asm-x86/mach-bigsmp/mach_apic.h
··· 1 - #ifndef __ASM_MACH_APIC_H 2 - #define __ASM_MACH_APIC_H 1 + #ifndef ASM_X86__MACH_BIGSMP__MACH_APIC_H 2 + #define ASM_X86__MACH_BIGSMP__MACH_APIC_H 3 3 4 4 #define xapic_phys_to_log_apicid(cpu) (per_cpu(x86_bios_cpu_apicid, cpu)) 5 5 #define esr_disable (1) ··· 141 141 return cpuid_apic >> index_msb; 142 142 } 143 143 144 - #endif /* __ASM_MACH_APIC_H */ 144 + #endif /* ASM_X86__MACH_BIGSMP__MACH_APIC_H */
+3 -3
include/asm-x86/mach-bigsmp/mach_apicdef.h
··· 1 - #ifndef __ASM_MACH_APICDEF_H 2 - #define __ASM_MACH_APICDEF_H 1 + #ifndef ASM_X86__MACH_BIGSMP__MACH_APICDEF_H 2 + #define ASM_X86__MACH_BIGSMP__MACH_APICDEF_H 3 3 4 4 #define APIC_ID_MASK (0xFF<<24) 5 5 ··· 10 10 11 11 #define GET_APIC_ID(x) get_apic_id(x) 12 12 13 - #endif 13 + #endif /* ASM_X86__MACH_BIGSMP__MACH_APICDEF_H */
+3 -3
include/asm-x86/mach-bigsmp/mach_ipi.h
··· 1 - #ifndef __ASM_MACH_IPI_H 2 - #define __ASM_MACH_IPI_H 1 + #ifndef ASM_X86__MACH_BIGSMP__MACH_IPI_H 2 + #define ASM_X86__MACH_BIGSMP__MACH_IPI_H 3 3 4 4 void send_IPI_mask_sequence(cpumask_t mask, int vector); 5 5 ··· 22 22 send_IPI_mask(cpu_online_map, vector); 23 23 } 24 24 25 - #endif /* __ASM_MACH_IPI_H */ 25 + #endif /* ASM_X86__MACH_BIGSMP__MACH_IPI_H */
+3 -3
include/asm-x86/mach-default/apm.h
··· 3 3 * Split out from apm.c by Osamu Tomita <tomita@cinet.co.jp> 4 4 */ 5 5 6 - #ifndef _ASM_APM_H 7 - #define _ASM_APM_H 6 + #ifndef ASM_X86__MACH_DEFAULT__APM_H 7 + #define ASM_X86__MACH_DEFAULT__APM_H 8 8 9 9 #ifdef APM_ZERO_SEGS 10 10 # define APM_DO_ZERO_SEGS \ ··· 70 70 return error; 71 71 } 72 72 73 - #endif /* _ASM_APM_H */ 73 + #endif /* ASM_X86__MACH_DEFAULT__APM_H */
+3 -3
include/asm-x86/mach-default/mach_apic.h
··· 1 - #ifndef __ASM_MACH_APIC_H 2 - #define __ASM_MACH_APIC_H 1 + #ifndef ASM_X86__MACH_DEFAULT__MACH_APIC_H 2 + #define ASM_X86__MACH_DEFAULT__MACH_APIC_H 3 3 4 4 #ifdef CONFIG_X86_LOCAL_APIC 5 5 ··· 138 138 } 139 139 140 140 #endif /* CONFIG_X86_LOCAL_APIC */ 141 - #endif /* __ASM_MACH_APIC_H */ 141 + #endif /* ASM_X86__MACH_DEFAULT__MACH_APIC_H */
+3 -3
include/asm-x86/mach-default/mach_apicdef.h
··· 1 - #ifndef __ASM_MACH_APICDEF_H 2 - #define __ASM_MACH_APICDEF_H 1 + #ifndef ASM_X86__MACH_DEFAULT__MACH_APICDEF_H 2 + #define ASM_X86__MACH_DEFAULT__MACH_APICDEF_H 3 3 4 4 #include <asm/apic.h> 5 5 ··· 21 21 #define GET_APIC_ID(x) get_apic_id(x) 22 22 #endif 23 23 24 - #endif 24 + #endif /* ASM_X86__MACH_DEFAULT__MACH_APICDEF_H */
+3 -3
include/asm-x86/mach-default/mach_ipi.h
··· 1 - #ifndef __ASM_MACH_IPI_H 2 - #define __ASM_MACH_IPI_H 1 + #ifndef ASM_X86__MACH_DEFAULT__MACH_IPI_H 2 + #define ASM_X86__MACH_DEFAULT__MACH_IPI_H 3 3 4 4 /* Avoid include hell */ 5 5 #define NMI_VECTOR 0x02 ··· 61 61 } 62 62 #endif 63 63 64 - #endif /* __ASM_MACH_IPI_H */ 64 + #endif /* ASM_X86__MACH_DEFAULT__MACH_IPI_H */
+3 -3
include/asm-x86/mach-default/mach_mpparse.h
··· 1 - #ifndef __ASM_MACH_MPPARSE_H 2 - #define __ASM_MACH_MPPARSE_H 1 + #ifndef ASM_X86__MACH_DEFAULT__MACH_MPPARSE_H 2 + #define ASM_X86__MACH_DEFAULT__MACH_MPPARSE_H 3 3 4 4 static inline int mps_oem_check(struct mp_config_table *mpc, char *oem, 5 5 char *productid) ··· 14 14 } 15 15 16 16 17 - #endif /* __ASM_MACH_MPPARSE_H */ 17 + #endif /* ASM_X86__MACH_DEFAULT__MACH_MPPARSE_H */
+3 -3
include/asm-x86/mach-default/mach_mpspec.h
··· 1 - #ifndef __ASM_MACH_MPSPEC_H 2 - #define __ASM_MACH_MPSPEC_H 1 + #ifndef ASM_X86__MACH_DEFAULT__MACH_MPSPEC_H 2 + #define ASM_X86__MACH_DEFAULT__MACH_MPSPEC_H 3 3 4 4 #define MAX_IRQ_SOURCES 256 5 5 ··· 9 9 #define MAX_MP_BUSSES 32 10 10 #endif 11 11 12 - #endif /* __ASM_MACH_MPSPEC_H */ 12 + #endif /* ASM_X86__MACH_DEFAULT__MACH_MPSPEC_H */
+3 -3
include/asm-x86/mach-default/mach_timer.h
··· 10 10 * directly because of the awkward 8-bit access mechanism of the 82C54 11 11 * device. 12 12 */ 13 - #ifndef _MACH_TIMER_H 14 - #define _MACH_TIMER_H 13 + #ifndef ASM_X86__MACH_DEFAULT__MACH_TIMER_H 14 + #define ASM_X86__MACH_DEFAULT__MACH_TIMER_H 15 15 16 16 #define CALIBRATE_TIME_MSEC 30 /* 30 msecs */ 17 17 #define CALIBRATE_LATCH \ ··· 45 45 *count_p = count; 46 46 } 47 47 48 - #endif /* !_MACH_TIMER_H */ 48 + #endif /* ASM_X86__MACH_DEFAULT__MACH_TIMER_H */
+3 -3
include/asm-x86/mach-default/mach_traps.h
··· 2 2 * Machine specific NMI handling for generic. 3 3 * Split out from traps.c by Osamu Tomita <tomita@cinet.co.jp> 4 4 */ 5 - #ifndef _MACH_TRAPS_H 6 - #define _MACH_TRAPS_H 5 + #ifndef ASM_X86__MACH_DEFAULT__MACH_TRAPS_H 6 + #define ASM_X86__MACH_DEFAULT__MACH_TRAPS_H 7 7 8 8 #include <asm/mc146818rtc.h> 9 9 ··· 36 36 unlock_cmos(); 37 37 } 38 38 39 - #endif /* !_MACH_TRAPS_H */ 39 + #endif /* ASM_X86__MACH_DEFAULT__MACH_TRAPS_H */
+3 -3
include/asm-x86/mach-default/mach_wakecpu.h
··· 1 - #ifndef __ASM_MACH_WAKECPU_H 2 - #define __ASM_MACH_WAKECPU_H 1 + #ifndef ASM_X86__MACH_DEFAULT__MACH_WAKECPU_H 2 + #define ASM_X86__MACH_DEFAULT__MACH_WAKECPU_H 3 3 4 4 /* 5 5 * This file copes with machines that wakeup secondary CPUs by the ··· 39 39 #define inquire_remote_apic(apicid) {} 40 40 #endif 41 41 42 - #endif /* __ASM_MACH_WAKECPU_H */ 42 + #endif /* ASM_X86__MACH_DEFAULT__MACH_WAKECPU_H */
+3 -3
include/asm-x86/mach-es7000/mach_apic.h
··· 1 - #ifndef __ASM_MACH_APIC_H 2 - #define __ASM_MACH_APIC_H 1 + #ifndef ASM_X86__MACH_ES7000__MACH_APIC_H 2 + #define ASM_X86__MACH_ES7000__MACH_APIC_H 3 3 4 4 #define xapic_phys_to_log_apicid(cpu) per_cpu(x86_bios_cpu_apicid, cpu) 5 5 #define esr_disable (1) ··· 191 191 return cpuid_apic >> index_msb; 192 192 } 193 193 194 - #endif /* __ASM_MACH_APIC_H */ 194 + #endif /* ASM_X86__MACH_ES7000__MACH_APIC_H */
+3 -3
include/asm-x86/mach-es7000/mach_apicdef.h
··· 1 - #ifndef __ASM_MACH_APICDEF_H 2 - #define __ASM_MACH_APICDEF_H 1 + #ifndef ASM_X86__MACH_ES7000__MACH_APICDEF_H 2 + #define ASM_X86__MACH_ES7000__MACH_APICDEF_H 3 3 4 4 #define APIC_ID_MASK (0xFF<<24) 5 5 ··· 10 10 11 11 #define GET_APIC_ID(x) get_apic_id(x) 12 12 13 - #endif 13 + #endif /* ASM_X86__MACH_ES7000__MACH_APICDEF_H */
+3 -3
include/asm-x86/mach-es7000/mach_ipi.h
··· 1 - #ifndef __ASM_MACH_IPI_H 2 - #define __ASM_MACH_IPI_H 1 + #ifndef ASM_X86__MACH_ES7000__MACH_IPI_H 2 + #define ASM_X86__MACH_ES7000__MACH_IPI_H 3 3 4 4 void send_IPI_mask_sequence(cpumask_t mask, int vector); 5 5 ··· 21 21 send_IPI_mask(cpu_online_map, vector); 22 22 } 23 23 24 - #endif /* __ASM_MACH_IPI_H */ 24 + #endif /* ASM_X86__MACH_ES7000__MACH_IPI_H */
+3 -3
include/asm-x86/mach-es7000/mach_mpparse.h
··· 1 - #ifndef __ASM_MACH_MPPARSE_H 2 - #define __ASM_MACH_MPPARSE_H 1 + #ifndef ASM_X86__MACH_ES7000__MACH_MPPARSE_H 2 + #define ASM_X86__MACH_ES7000__MACH_MPPARSE_H 3 3 4 4 #include <linux/acpi.h> 5 5 ··· 26 26 } 27 27 #endif 28 28 29 - #endif /* __ASM_MACH_MPPARSE_H */ 29 + #endif /* ASM_X86__MACH_ES7000__MACH_MPPARSE_H */
+3 -3
include/asm-x86/mach-es7000/mach_wakecpu.h
··· 1 - #ifndef __ASM_MACH_WAKECPU_H 2 - #define __ASM_MACH_WAKECPU_H 1 + #ifndef ASM_X86__MACH_ES7000__MACH_WAKECPU_H 2 + #define ASM_X86__MACH_ES7000__MACH_WAKECPU_H 3 3 4 4 /* 5 5 * This file copes with machines that wakeup secondary CPUs by the ··· 56 56 #define inquire_remote_apic(apicid) {} 57 57 #endif 58 58 59 - #endif /* __ASM_MACH_WAKECPU_H */ 59 + #endif /* ASM_X86__MACH_ES7000__MACH_WAKECPU_H */
+3 -3
include/asm-x86/mach-generic/gpio.h
··· 1 - #ifndef __ASM_MACH_GENERIC_GPIO_H 2 - #define __ASM_MACH_GENERIC_GPIO_H 1 + #ifndef ASM_X86__MACH_GENERIC__GPIO_H 2 + #define ASM_X86__MACH_GENERIC__GPIO_H 3 3 4 4 int gpio_request(unsigned gpio, const char *label); 5 5 void gpio_free(unsigned gpio); ··· 12 12 13 13 #include <asm-generic/gpio.h> /* cansleep wrappers */ 14 14 15 - #endif /* __ASM_MACH_GENERIC_GPIO_H */ 15 + #endif /* ASM_X86__MACH_GENERIC__GPIO_H */
+3 -3
include/asm-x86/mach-generic/irq_vectors_limits.h
··· 1 - #ifndef _ASM_IRQ_VECTORS_LIMITS_H 2 - #define _ASM_IRQ_VECTORS_LIMITS_H 1 + #ifndef ASM_X86__MACH_GENERIC__IRQ_VECTORS_LIMITS_H 2 + #define ASM_X86__MACH_GENERIC__IRQ_VECTORS_LIMITS_H 3 3 4 4 /* 5 5 * For Summit or generic (i.e. installer) kernels, we have lots of I/O APICs, ··· 11 11 #define NR_IRQS 224 12 12 #define NR_IRQ_VECTORS 1024 13 13 14 - #endif /* _ASM_IRQ_VECTORS_LIMITS_H */ 14 + #endif /* ASM_X86__MACH_GENERIC__IRQ_VECTORS_LIMITS_H */
+3 -3
include/asm-x86/mach-generic/mach_apic.h
··· 1 - #ifndef __ASM_MACH_APIC_H 2 - #define __ASM_MACH_APIC_H 1 + #ifndef ASM_X86__MACH_GENERIC__MACH_APIC_H 2 + #define ASM_X86__MACH_GENERIC__MACH_APIC_H 3 3 4 4 #include <asm/genapic.h> 5 5 ··· 29 29 30 30 extern void generic_bigsmp_probe(void); 31 31 32 - #endif /* __ASM_MACH_APIC_H */ 32 + #endif /* ASM_X86__MACH_GENERIC__MACH_APIC_H */
+3 -3
include/asm-x86/mach-generic/mach_apicdef.h
··· 1 - #ifndef _GENAPIC_MACH_APICDEF_H 2 - #define _GENAPIC_MACH_APICDEF_H 1 1 + #ifndef ASM_X86__MACH_GENERIC__MACH_APICDEF_H 2 + #define ASM_X86__MACH_GENERIC__MACH_APICDEF_H 3 3 4 4 #ifndef APIC_DEFINITION 5 5 #include <asm/genapic.h> ··· 8 8 #define APIC_ID_MASK (genapic->apic_id_mask) 9 9 #endif 10 10 11 - #endif 11 + #endif /* ASM_X86__MACH_GENERIC__MACH_APICDEF_H */
+3 -3
include/asm-x86/mach-generic/mach_ipi.h
··· 1 - #ifndef _MACH_IPI_H 2 - #define _MACH_IPI_H 1 1 + #ifndef ASM_X86__MACH_GENERIC__MACH_IPI_H 2 + #define ASM_X86__MACH_GENERIC__MACH_IPI_H 3 3 4 4 #include <asm/genapic.h> 5 5 ··· 7 7 #define send_IPI_allbutself (genapic->send_IPI_allbutself) 8 8 #define send_IPI_all (genapic->send_IPI_all) 9 9 10 - #endif 10 + #endif /* ASM_X86__MACH_GENERIC__MACH_IPI_H */
+3 -3
include/asm-x86/mach-generic/mach_mpparse.h
··· 1 - #ifndef _MACH_MPPARSE_H 2 - #define _MACH_MPPARSE_H 1 1 + #ifndef ASM_X86__MACH_GENERIC__MACH_MPPARSE_H 2 + #define ASM_X86__MACH_GENERIC__MACH_MPPARSE_H 3 3 4 4 5 5 extern int mps_oem_check(struct mp_config_table *mpc, char *oem, ··· 7 7 8 8 extern int acpi_madt_oem_check(char *oem_id, char *oem_table_id); 9 9 10 - #endif 10 + #endif /* ASM_X86__MACH_GENERIC__MACH_MPPARSE_H */
+3 -3
include/asm-x86/mach-generic/mach_mpspec.h
··· 1 - #ifndef __ASM_MACH_MPSPEC_H 2 - #define __ASM_MACH_MPSPEC_H 1 + #ifndef ASM_X86__MACH_GENERIC__MACH_MPSPEC_H 2 + #define ASM_X86__MACH_GENERIC__MACH_MPSPEC_H 3 3 4 4 #define MAX_IRQ_SOURCES 256 5 5 ··· 9 9 10 10 extern void numaq_mps_oem_check(struct mp_config_table *mpc, char *oem, 11 11 char *productid); 12 - #endif /* __ASM_MACH_MPSPEC_H */ 12 + #endif /* ASM_X86__MACH_GENERIC__MACH_MPSPEC_H */
+3 -3
include/asm-x86/mach-numaq/mach_apic.h
··· 1 - #ifndef __ASM_MACH_APIC_H 2 - #define __ASM_MACH_APIC_H 1 + #ifndef ASM_X86__MACH_NUMAQ__MACH_APIC_H 2 + #define ASM_X86__MACH_NUMAQ__MACH_APIC_H 3 3 4 4 #include <asm/io.h> 5 5 #include <linux/mmzone.h> ··· 135 135 return cpuid_apic >> index_msb; 136 136 } 137 137 138 - #endif /* __ASM_MACH_APIC_H */ 138 + #endif /* ASM_X86__MACH_NUMAQ__MACH_APIC_H */
+3 -3
include/asm-x86/mach-numaq/mach_apicdef.h
··· 1 - #ifndef __ASM_MACH_APICDEF_H 2 - #define __ASM_MACH_APICDEF_H 1 + #ifndef ASM_X86__MACH_NUMAQ__MACH_APICDEF_H 2 + #define ASM_X86__MACH_NUMAQ__MACH_APICDEF_H 3 3 4 4 5 5 #define APIC_ID_MASK (0xF<<24) ··· 11 11 12 12 #define GET_APIC_ID(x) get_apic_id(x) 13 13 14 - #endif 14 + #endif /* ASM_X86__MACH_NUMAQ__MACH_APICDEF_H */
+3 -3
include/asm-x86/mach-numaq/mach_ipi.h
··· 1 - #ifndef __ASM_MACH_IPI_H 2 - #define __ASM_MACH_IPI_H 1 + #ifndef ASM_X86__MACH_NUMAQ__MACH_IPI_H 2 + #define ASM_X86__MACH_NUMAQ__MACH_IPI_H 3 3 4 4 void send_IPI_mask_sequence(cpumask_t, int vector); 5 5 ··· 22 22 send_IPI_mask(cpu_online_map, vector); 23 23 } 24 24 25 - #endif /* __ASM_MACH_IPI_H */ 25 + #endif /* ASM_X86__MACH_NUMAQ__MACH_IPI_H */
+3 -3
include/asm-x86/mach-numaq/mach_mpparse.h
··· 1 - #ifndef __ASM_MACH_MPPARSE_H 2 - #define __ASM_MACH_MPPARSE_H 1 + #ifndef ASM_X86__MACH_NUMAQ__MACH_MPPARSE_H 2 + #define ASM_X86__MACH_NUMAQ__MACH_MPPARSE_H 3 3 4 4 extern void numaq_mps_oem_check(struct mp_config_table *mpc, char *oem, 5 5 char *productid); 6 6 7 - #endif /* __ASM_MACH_MPPARSE_H */ 7 + #endif /* ASM_X86__MACH_NUMAQ__MACH_MPPARSE_H */
+3 -3
include/asm-x86/mach-numaq/mach_wakecpu.h
··· 1 - #ifndef __ASM_MACH_WAKECPU_H 2 - #define __ASM_MACH_WAKECPU_H 1 + #ifndef ASM_X86__MACH_NUMAQ__MACH_WAKECPU_H 2 + #define ASM_X86__MACH_NUMAQ__MACH_WAKECPU_H 3 3 4 4 /* This file copes with machines that wakeup secondary CPUs by NMIs */ 5 5 ··· 40 40 41 41 #define inquire_remote_apic(apicid) {} 42 42 43 - #endif /* __ASM_MACH_WAKECPU_H */ 43 + #endif /* ASM_X86__MACH_NUMAQ__MACH_WAKECPU_H */
+6 -3
include/asm-x86/mach-rdc321x/gpio.h
··· 1 - #ifndef _RDC321X_GPIO_H 2 - #define _RDC321X_GPIO_H 1 + #ifndef ASM_X86__MACH_RDC321X__GPIO_H 2 + #define ASM_X86__MACH_RDC321X__GPIO_H 3 + 4 + #include <linux/kernel.h> 3 5 4 6 extern int rdc_gpio_get_value(unsigned gpio); 5 7 extern void rdc_gpio_set_value(unsigned gpio, int value); ··· 20 18 21 19 static inline void gpio_free(unsigned gpio) 22 20 { 21 + might_sleep(); 23 22 rdc_gpio_free(gpio); 24 23 } 25 24 ··· 57 54 /* For cansleep */ 58 55 #include <asm-generic/gpio.h> 59 56 60 - #endif /* _RDC321X_GPIO_H_ */ 57 + #endif /* ASM_X86__MACH_RDC321X__GPIO_H */
+3 -3
include/asm-x86/mach-summit/irq_vectors_limits.h
··· 1 - #ifndef _ASM_IRQ_VECTORS_LIMITS_H 2 - #define _ASM_IRQ_VECTORS_LIMITS_H 1 + #ifndef ASM_X86__MACH_SUMMIT__IRQ_VECTORS_LIMITS_H 2 + #define ASM_X86__MACH_SUMMIT__IRQ_VECTORS_LIMITS_H 3 3 4 4 /* 5 5 * For Summit or generic (i.e. installer) kernels, we have lots of I/O APICs, ··· 11 11 #define NR_IRQS 224 12 12 #define NR_IRQ_VECTORS 1024 13 13 14 - #endif /* _ASM_IRQ_VECTORS_LIMITS_H */ 14 + #endif /* ASM_X86__MACH_SUMMIT__IRQ_VECTORS_LIMITS_H */
+3 -3
include/asm-x86/mach-summit/mach_apic.h
··· 1 - #ifndef __ASM_MACH_APIC_H 2 - #define __ASM_MACH_APIC_H 1 + #ifndef ASM_X86__MACH_SUMMIT__MACH_APIC_H 2 + #define ASM_X86__MACH_SUMMIT__MACH_APIC_H 3 3 4 4 #include <asm/smp.h> 5 5 ··· 182 182 return hard_smp_processor_id() >> index_msb; 183 183 } 184 184 185 - #endif /* __ASM_MACH_APIC_H */ 185 + #endif /* ASM_X86__MACH_SUMMIT__MACH_APIC_H */
+3 -3
include/asm-x86/mach-summit/mach_apicdef.h
··· 1 - #ifndef __ASM_MACH_APICDEF_H 2 - #define __ASM_MACH_APICDEF_H 1 + #ifndef ASM_X86__MACH_SUMMIT__MACH_APICDEF_H 2 + #define ASM_X86__MACH_SUMMIT__MACH_APICDEF_H 3 3 4 4 #define APIC_ID_MASK (0xFF<<24) 5 5 ··· 10 10 11 11 #define GET_APIC_ID(x) get_apic_id(x) 12 12 13 - #endif 13 + #endif /* ASM_X86__MACH_SUMMIT__MACH_APICDEF_H */
+3 -3
include/asm-x86/mach-summit/mach_ipi.h
··· 1 - #ifndef __ASM_MACH_IPI_H 2 - #define __ASM_MACH_IPI_H 1 + #ifndef ASM_X86__MACH_SUMMIT__MACH_IPI_H 2 + #define ASM_X86__MACH_SUMMIT__MACH_IPI_H 3 3 4 4 void send_IPI_mask_sequence(cpumask_t mask, int vector); 5 5 ··· 22 22 send_IPI_mask(cpu_online_map, vector); 23 23 } 24 24 25 - #endif /* __ASM_MACH_IPI_H */ 25 + #endif /* ASM_X86__MACH_SUMMIT__MACH_IPI_H */
+3 -3
include/asm-x86/mach-summit/mach_mpparse.h
··· 1 - #ifndef __ASM_MACH_MPPARSE_H 2 - #define __ASM_MACH_MPPARSE_H 1 + #ifndef ASM_X86__MACH_SUMMIT__MACH_MPPARSE_H 2 + #define ASM_X86__MACH_SUMMIT__MACH_MPPARSE_H 3 3 4 4 #include <mach_apic.h> 5 5 #include <asm/tsc.h> ··· 107 107 rio->type == LookOutAWPEG || rio->type == LookOutBWPEG); 108 108 } 109 109 110 - #endif /* __ASM_MACH_MPPARSE_H */ 110 + #endif /* ASM_X86__MACH_SUMMIT__MACH_MPPARSE_H */
+3 -3
include/asm-x86/math_emu.h
··· 1 - #ifndef _I386_MATH_EMU_H 2 - #define _I386_MATH_EMU_H 1 + #ifndef ASM_X86__MATH_EMU_H 2 + #define ASM_X86__MATH_EMU_H 3 3 4 4 /* This structure matches the layout of the data saved to the stack 5 5 following a device-not-present interrupt, part of it saved ··· 28 28 long ___vm86_fs; 29 29 long ___vm86_gs; 30 30 }; 31 - #endif 31 + #endif /* ASM_X86__MATH_EMU_H */
+3 -3
include/asm-x86/mc146818rtc.h
··· 1 1 /* 2 2 * Machine dependent access functions for RTC registers. 3 3 */ 4 - #ifndef _ASM_MC146818RTC_H 5 - #define _ASM_MC146818RTC_H 4 + #ifndef ASM_X86__MC146818RTC_H 5 + #define ASM_X86__MC146818RTC_H 6 6 7 7 #include <asm/io.h> 8 8 #include <asm/system.h> ··· 101 101 102 102 #define RTC_IRQ 8 103 103 104 - #endif /* _ASM_MC146818RTC_H */ 104 + #endif /* ASM_X86__MC146818RTC_H */
+3 -3
include/asm-x86/mca.h
··· 1 1 /* -*- mode: c; c-basic-offset: 8 -*- */ 2 2 3 3 /* Platform specific MCA defines */ 4 - #ifndef _ASM_MCA_H 5 - #define _ASM_MCA_H 4 + #ifndef ASM_X86__MCA_H 5 + #define ASM_X86__MCA_H 6 6 7 7 /* Maximal number of MCA slots - actually, some machines have less, but 8 8 * they all have sufficient number of POS registers to cover 8. ··· 40 40 */ 41 41 #define MCA_NUMADAPTERS (MCA_MAX_SLOT_NR+3) 42 42 43 - #endif 43 + #endif /* ASM_X86__MCA_H */
+3 -3
include/asm-x86/mca_dma.h
··· 1 - #ifndef MCA_DMA_H 2 - #define MCA_DMA_H 1 + #ifndef ASM_X86__MCA_DMA_H 2 + #define ASM_X86__MCA_DMA_H 3 3 4 4 #include <asm/io.h> 5 5 #include <linux/ioport.h> ··· 198 198 outb(mode, MCA_DMA_REG_EXE); 199 199 } 200 200 201 - #endif /* MCA_DMA_H */ 201 + #endif /* ASM_X86__MCA_DMA_H */
+3 -3
include/asm-x86/mce.h
··· 1 - #ifndef _ASM_X86_MCE_H 2 - #define _ASM_X86_MCE_H 1 + #ifndef ASM_X86__MCE_H 2 + #define ASM_X86__MCE_H 3 3 4 4 #ifdef __x86_64__ 5 5 ··· 127 127 128 128 #endif /* __KERNEL__ */ 129 129 130 - #endif 130 + #endif /* ASM_X86__MCE_H */
+3 -3
include/asm-x86/mman.h
··· 1 - #ifndef _ASM_X86_MMAN_H 2 - #define _ASM_X86_MMAN_H 1 + #ifndef ASM_X86__MMAN_H 2 + #define ASM_X86__MMAN_H 3 3 4 4 #include <asm-generic/mman.h> 5 5 ··· 17 17 #define MCL_CURRENT 1 /* lock all current mappings */ 18 18 #define MCL_FUTURE 2 /* lock all future mappings */ 19 19 20 - #endif /* _ASM_X86_MMAN_H */ 20 + #endif /* ASM_X86__MMAN_H */
+3 -3
include/asm-x86/mmconfig.h
··· 1 - #ifndef _ASM_MMCONFIG_H 2 - #define _ASM_MMCONFIG_H 1 + #ifndef ASM_X86__MMCONFIG_H 2 + #define ASM_X86__MMCONFIG_H 3 3 4 4 #ifdef CONFIG_PCI_MMCONFIG 5 5 extern void __cpuinit fam10h_check_enable_mmcfg(void); ··· 9 9 static inline void check_enable_amd_mmconf_dmi(void) { } 10 10 #endif 11 11 12 - #endif 12 + #endif /* ASM_X86__MMCONFIG_H */
+3 -8
include/asm-x86/mmu.h
··· 1 - #ifndef _ASM_X86_MMU_H 2 - #define _ASM_X86_MMU_H 1 + #ifndef ASM_X86__MMU_H 2 + #define ASM_X86__MMU_H 3 3 4 4 #include <linux/spinlock.h> 5 5 #include <linux/mutex.h> ··· 7 7 /* 8 8 * The x86 doesn't have a mmu context, but 9 9 * we put the segment information here. 10 - * 11 - * cpu_vm_mask is used to optimize ldt flushing. 12 10 */ 13 11 typedef struct { 14 12 void *ldt; 15 - #ifdef CONFIG_X86_64 16 - rwlock_t ldtlock; 17 - #endif 18 13 int size; 19 14 struct mutex lock; 20 15 void *vdso; ··· 23 28 } 24 29 #endif 25 30 26 - #endif /* _ASM_X86_MMU_H */ 31 + #endif /* ASM_X86__MMU_H */
+3 -3
include/asm-x86/mmu_context.h
··· 1 - #ifndef __ASM_X86_MMU_CONTEXT_H 2 - #define __ASM_X86_MMU_CONTEXT_H 1 + #ifndef ASM_X86__MMU_CONTEXT_H 2 + #define ASM_X86__MMU_CONTEXT_H 3 3 4 4 #include <asm/desc.h> 5 5 #include <asm/atomic.h> ··· 34 34 } while (0); 35 35 36 36 37 - #endif /* __ASM_X86_MMU_CONTEXT_H */ 37 + #endif /* ASM_X86__MMU_CONTEXT_H */
+3 -3
include/asm-x86/mmu_context_32.h
··· 1 - #ifndef __I386_SCHED_H 2 - #define __I386_SCHED_H 1 + #ifndef ASM_X86__MMU_CONTEXT_32_H 2 + #define ASM_X86__MMU_CONTEXT_32_H 3 3 4 4 static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk) 5 5 { ··· 53 53 #define deactivate_mm(tsk, mm) \ 54 54 asm("movl %0,%%gs": :"r" (0)); 55 55 56 - #endif 56 + #endif /* ASM_X86__MMU_CONTEXT_32_H */
+3 -3
include/asm-x86/mmu_context_64.h
··· 1 - #ifndef __X86_64_MMU_CONTEXT_H 2 - #define __X86_64_MMU_CONTEXT_H 1 + #ifndef ASM_X86__MMU_CONTEXT_64_H 2 + #define ASM_X86__MMU_CONTEXT_64_H 3 3 4 4 #include <asm/pda.h> 5 5 ··· 51 51 asm volatile("movl %0,%%fs"::"r"(0)); \ 52 52 } while (0) 53 53 54 - #endif 54 + #endif /* ASM_X86__MMU_CONTEXT_64_H */
+3 -3
include/asm-x86/mmx.h
··· 1 - #ifndef _ASM_MMX_H 2 - #define _ASM_MMX_H 1 + #ifndef ASM_X86__MMX_H 2 + #define ASM_X86__MMX_H 3 3 4 4 /* 5 5 * MMX 3Dnow! helper operations ··· 11 11 extern void mmx_clear_page(void *page); 12 12 extern void mmx_copy_page(void *to, void *from); 13 13 14 - #endif 14 + #endif /* ASM_X86__MMX_H */
+3 -3
include/asm-x86/mmzone_32.h
··· 3 3 * 4 4 */ 5 5 6 - #ifndef _ASM_MMZONE_H_ 7 - #define _ASM_MMZONE_H_ 6 + #ifndef ASM_X86__MMZONE_32_H 7 + #define ASM_X86__MMZONE_32_H 8 8 9 9 #include <asm/smp.h> 10 10 ··· 131 131 }) 132 132 #endif /* CONFIG_NEED_MULTIPLE_NODES */ 133 133 134 - #endif /* _ASM_MMZONE_H_ */ 134 + #endif /* ASM_X86__MMZONE_32_H */
+3 -3
include/asm-x86/mmzone_64.h
··· 1 1 /* K8 NUMA support */ 2 2 /* Copyright 2002,2003 by Andi Kleen, SuSE Labs */ 3 3 /* 2.5 Version loosely based on the NUMAQ Code by Pat Gaughen. */ 4 - #ifndef _ASM_X86_64_MMZONE_H 5 - #define _ASM_X86_64_MMZONE_H 1 4 + #ifndef ASM_X86__MMZONE_64_H 5 + #define ASM_X86__MMZONE_64_H 6 6 7 7 8 8 #ifdef CONFIG_NUMA ··· 49 49 #endif 50 50 51 51 #endif 52 - #endif 52 + #endif /* ASM_X86__MMZONE_64_H */
+3 -3
include/asm-x86/module.h
··· 1 - #ifndef _ASM_MODULE_H 2 - #define _ASM_MODULE_H 1 + #ifndef ASM_X86__MODULE_H 2 + #define ASM_X86__MODULE_H 3 3 4 4 /* x86_32/64 are simple */ 5 5 struct mod_arch_specific {}; ··· 79 79 # define MODULE_ARCH_VERMAGIC MODULE_PROC_FAMILY MODULE_STACKSIZE 80 80 #endif 81 81 82 - #endif /* _ASM_MODULE_H */ 82 + #endif /* ASM_X86__MODULE_H */
+3 -3
include/asm-x86/mpspec.h
··· 1 - #ifndef _AM_X86_MPSPEC_H 2 - #define _AM_X86_MPSPEC_H 1 + #ifndef ASM_X86__MPSPEC_H 2 + #define ASM_X86__MPSPEC_H 3 3 4 4 #include <linux/init.h> 5 5 ··· 141 141 142 142 extern physid_mask_t phys_cpu_present_map; 143 143 144 - #endif 144 + #endif /* ASM_X86__MPSPEC_H */
+3 -3
include/asm-x86/mpspec_def.h
··· 1 - #ifndef __ASM_MPSPEC_DEF_H 2 - #define __ASM_MPSPEC_DEF_H 1 + #ifndef ASM_X86__MPSPEC_DEF_H 2 + #define ASM_X86__MPSPEC_DEF_H 3 3 4 4 /* 5 5 * Structure definitions for SMP machines following the ··· 177 177 MP_BUS_PCI, 178 178 MP_BUS_MCA, 179 179 }; 180 - #endif 180 + #endif /* ASM_X86__MPSPEC_DEF_H */
+3 -3
include/asm-x86/msgbuf.h
··· 1 - #ifndef _ASM_X86_MSGBUF_H 2 - #define _ASM_X86_MSGBUF_H 1 + #ifndef ASM_X86__MSGBUF_H 2 + #define ASM_X86__MSGBUF_H 3 3 4 4 /* 5 5 * The msqid64_ds structure for i386 architecture. ··· 36 36 unsigned long __unused5; 37 37 }; 38 38 39 - #endif /* _ASM_X86_MSGBUF_H */ 39 + #endif /* ASM_X86__MSGBUF_H */
+3 -3
include/asm-x86/msidef.h
··· 1 - #ifndef ASM_MSIDEF_H 2 - #define ASM_MSIDEF_H 1 + #ifndef ASM_X86__MSIDEF_H 2 + #define ASM_X86__MSIDEF_H 3 3 4 4 /* 5 5 * Constants for Intel APIC based MSI messages. ··· 48 48 #define MSI_ADDR_DEST_ID(dest) (((dest) << MSI_ADDR_DEST_ID_SHIFT) & \ 49 49 MSI_ADDR_DEST_ID_MASK) 50 50 51 - #endif /* ASM_MSIDEF_H */ 51 + #endif /* ASM_X86__MSIDEF_H */
+3 -3
include/asm-x86/msr-index.h
··· 1 - #ifndef __ASM_MSR_INDEX_H 2 - #define __ASM_MSR_INDEX_H 1 + #ifndef ASM_X86__MSR_INDEX_H 2 + #define ASM_X86__MSR_INDEX_H 3 3 4 4 /* CPU model specific register (MSR) numbers */ 5 5 ··· 310 310 /* Geode defined MSRs */ 311 311 #define MSR_GEODE_BUSCONT_CONF0 0x00001900 312 312 313 - #endif /* __ASM_MSR_INDEX_H */ 313 + #endif /* ASM_X86__MSR_INDEX_H */
+26 -3
include/asm-x86/msr.h
··· 1 - #ifndef __ASM_X86_MSR_H_ 2 - #define __ASM_X86_MSR_H_ 1 + #ifndef ASM_X86__MSR_H 2 + #define ASM_X86__MSR_H 3 3 4 4 #include <asm/msr-index.h> 5 5 ··· 60 60 _ASM_EXTABLE(2b, 3b) 61 61 : [err] "=r" (*err), EAX_EDX_RET(val, low, high) 62 62 : "c" (msr), [fault] "i" (-EFAULT)); 63 + return EAX_EDX_VAL(val, low, high); 64 + } 65 + 66 + static inline unsigned long long native_read_msr_amd_safe(unsigned int msr, 67 + int *err) 68 + { 69 + DECLARE_ARGS(val, low, high); 70 + 71 + asm volatile("2: rdmsr ; xor %0,%0\n" 72 + "1:\n\t" 73 + ".section .fixup,\"ax\"\n\t" 74 + "3: mov %3,%0 ; jmp 1b\n\t" 75 + ".previous\n\t" 76 + _ASM_EXTABLE(2b, 3b) 77 + : "=r" (*err), EAX_EDX_RET(val, low, high) 78 + : "c" (msr), "D" (0x9c5a203a), "i" (-EFAULT)); 63 79 return EAX_EDX_VAL(val, low, high); 64 80 } 65 81 ··· 174 158 *p = native_read_msr_safe(msr, &err); 175 159 return err; 176 160 } 161 + static inline int rdmsrl_amd_safe(unsigned msr, unsigned long long *p) 162 + { 163 + int err; 164 + 165 + *p = native_read_msr_amd_safe(msr, &err); 166 + return err; 167 + } 177 168 178 169 #define rdtscl(low) \ 179 170 ((low) = (u32)native_read_tsc()) ··· 244 221 #endif /* __KERNEL__ */ 245 222 246 223 247 - #endif 224 + #endif /* ASM_X86__MSR_H */
+3 -3
include/asm-x86/mtrr.h
··· 20 20 The postal address is: 21 21 Richard Gooch, c/o ATNF, P. O. Box 76, Epping, N.S.W., 2121, Australia. 22 22 */ 23 - #ifndef _ASM_X86_MTRR_H 24 - #define _ASM_X86_MTRR_H 23 + #ifndef ASM_X86__MTRR_H 24 + #define ASM_X86__MTRR_H 25 25 26 26 #include <linux/ioctl.h> 27 27 #include <linux/errno.h> ··· 170 170 171 171 #endif /* __KERNEL__ */ 172 172 173 - #endif /* _ASM_X86_MTRR_H */ 173 + #endif /* ASM_X86__MTRR_H */
+3 -3
include/asm-x86/mutex_32.h
··· 6 6 * 7 7 * Copyright (C) 2004, 2005, 2006 Red Hat, Inc., Ingo Molnar <mingo@redhat.com> 8 8 */ 9 - #ifndef _ASM_MUTEX_H 10 - #define _ASM_MUTEX_H 9 + #ifndef ASM_X86__MUTEX_32_H 10 + #define ASM_X86__MUTEX_32_H 11 11 12 12 #include <asm/alternative.h> 13 13 ··· 122 122 #endif 123 123 } 124 124 125 - #endif 125 + #endif /* ASM_X86__MUTEX_32_H */
+3 -3
include/asm-x86/mutex_64.h
··· 6 6 * 7 7 * Copyright (C) 2004, 2005, 2006 Red Hat, Inc., Ingo Molnar <mingo@redhat.com> 8 8 */ 9 - #ifndef _ASM_MUTEX_H 10 - #define _ASM_MUTEX_H 9 + #ifndef ASM_X86__MUTEX_64_H 10 + #define ASM_X86__MUTEX_64_H 11 11 12 12 /** 13 13 * __mutex_fastpath_lock - decrement and call function if negative ··· 97 97 return 0; 98 98 } 99 99 100 - #endif 100 + #endif /* ASM_X86__MUTEX_64_H */
+4 -3
include/asm-x86/nmi.h
··· 1 - #ifndef _ASM_X86_NMI_H_ 2 - #define _ASM_X86_NMI_H_ 1 + #ifndef ASM_X86__NMI_H 2 + #define ASM_X86__NMI_H 3 3 4 4 #include <linux/pm.h> 5 5 #include <asm/irq.h> ··· 34 34 extern void disable_timer_nmi_watchdog(void); 35 35 extern void enable_timer_nmi_watchdog(void); 36 36 extern int nmi_watchdog_tick(struct pt_regs *regs, unsigned reason); 37 + extern void cpu_nmi_set_wd_enabled(void); 37 38 38 39 extern atomic_t nmi_active; 39 40 extern unsigned int nmi_watchdog; ··· 82 81 void stop_nmi(void); 83 82 void restart_nmi(void); 84 83 85 - #endif 84 + #endif /* ASM_X86__NMI_H */
+3 -3
include/asm-x86/nops.h
··· 1 - #ifndef _ASM_NOPS_H 2 - #define _ASM_NOPS_H 1 1 + #ifndef ASM_X86__NOPS_H 2 + #define ASM_X86__NOPS_H 3 3 4 4 /* Define nops for use with alternative() */ 5 5 ··· 115 115 116 116 #define ASM_NOP_MAX 8 117 117 118 - #endif 118 + #endif /* ASM_X86__NOPS_H */
+3 -3
include/asm-x86/numa_32.h
··· 1 - #ifndef _ASM_X86_32_NUMA_H 2 - #define _ASM_X86_32_NUMA_H 1 1 + #ifndef ASM_X86__NUMA_32_H 2 + #define ASM_X86__NUMA_32_H 3 3 4 4 extern int pxm_to_nid(int pxm); 5 5 extern void numa_remove_cpu(int cpu); ··· 8 8 extern void set_highmem_pages_init(void); 9 9 #endif 10 10 11 - #endif /* _ASM_X86_32_NUMA_H */ 11 + #endif /* ASM_X86__NUMA_32_H */
+3 -3
include/asm-x86/numa_64.h
··· 1 - #ifndef _ASM_X8664_NUMA_H 2 - #define _ASM_X8664_NUMA_H 1 1 + #ifndef ASM_X86__NUMA_64_H 2 + #define ASM_X86__NUMA_64_H 3 3 4 4 #include <linux/nodemask.h> 5 5 #include <asm/apicdef.h> ··· 40 40 static inline void numa_remove_cpu(int cpu) { } 41 41 #endif 42 42 43 - #endif 43 + #endif /* ASM_X86__NUMA_64_H */
+3 -3
include/asm-x86/numaq.h
··· 23 23 * Send feedback to <gone@us.ibm.com> 24 24 */ 25 25 26 - #ifndef NUMAQ_H 27 - #define NUMAQ_H 26 + #ifndef ASM_X86__NUMAQ_H 27 + #define ASM_X86__NUMAQ_H 28 28 29 29 #ifdef CONFIG_X86_NUMAQ 30 30 ··· 165 165 return 0; 166 166 } 167 167 #endif /* CONFIG_X86_NUMAQ */ 168 - #endif /* NUMAQ_H */ 168 + #endif /* ASM_X86__NUMAQ_H */ 169 169
+3 -3
include/asm-x86/olpc.h
··· 1 1 /* OLPC machine specific definitions */ 2 2 3 - #ifndef ASM_OLPC_H_ 4 - #define ASM_OLPC_H_ 3 + #ifndef ASM_X86__OLPC_H 4 + #define ASM_X86__OLPC_H 5 5 6 6 #include <asm/geode.h> 7 7 ··· 129 129 #define OLPC_GPIO_LID geode_gpio(26) 130 130 #define OLPC_GPIO_ECSCI geode_gpio(27) 131 131 132 - #endif 132 + #endif /* ASM_X86__OLPC_H */
+3 -3
include/asm-x86/page.h
··· 1 - #ifndef _ASM_X86_PAGE_H 2 - #define _ASM_X86_PAGE_H 1 + #ifndef ASM_X86__PAGE_H 2 + #define ASM_X86__PAGE_H 3 3 4 4 #include <linux/const.h> 5 5 ··· 199 199 #define __HAVE_ARCH_GATE_AREA 1 200 200 201 201 #endif /* __KERNEL__ */ 202 - #endif /* _ASM_X86_PAGE_H */ 202 + #endif /* ASM_X86__PAGE_H */
+4 -6
include/asm-x86/page_32.h
··· 1 - #ifndef _ASM_X86_PAGE_32_H 2 - #define _ASM_X86_PAGE_32_H 1 + #ifndef ASM_X86__PAGE_32_H 2 + #define ASM_X86__PAGE_32_H 3 3 4 4 /* 5 5 * This handles the memory map. ··· 89 89 extern unsigned int __VMALLOC_RESERVE; 90 90 extern int sysctl_legacy_va_layout; 91 91 92 - #define VMALLOC_RESERVE ((unsigned long)__VMALLOC_RESERVE) 93 - #define MAXMEM (-__PAGE_OFFSET - __VMALLOC_RESERVE) 94 - 95 92 extern void find_low_pfn_range(void); 96 93 extern unsigned long init_memory_mapping(unsigned long start, 97 94 unsigned long end); 98 95 extern void initmem_init(unsigned long, unsigned long); 96 + extern void free_initmem(void); 99 97 extern void setup_bootmem_allocator(void); 100 98 101 99 ··· 124 126 #endif /* CONFIG_X86_3DNOW */ 125 127 #endif /* !__ASSEMBLY__ */ 126 128 127 - #endif /* _ASM_X86_PAGE_32_H */ 129 + #endif /* ASM_X86__PAGE_32_H */
+4 -3
include/asm-x86/page_64.h
··· 1 - #ifndef _X86_64_PAGE_H 2 - #define _X86_64_PAGE_H 1 + #ifndef ASM_X86__PAGE_64_H 2 + #define ASM_X86__PAGE_64_H 3 3 4 4 #define PAGETABLE_LEVELS 4 5 5 ··· 91 91 unsigned long end); 92 92 93 93 extern void initmem_init(unsigned long start_pfn, unsigned long end_pfn); 94 + extern void free_initmem(void); 94 95 95 96 extern void init_extra_mapping_uc(unsigned long phys, unsigned long size); 96 97 extern void init_extra_mapping_wb(unsigned long phys, unsigned long size); ··· 103 102 #endif 104 103 105 104 106 - #endif /* _X86_64_PAGE_H */ 105 + #endif /* ASM_X86__PAGE_64_H */
+3 -3
include/asm-x86/param.h
··· 1 - #ifndef _ASM_X86_PARAM_H 2 - #define _ASM_X86_PARAM_H 1 + #ifndef ASM_X86__PARAM_H 2 + #define ASM_X86__PARAM_H 3 3 4 4 #ifdef __KERNEL__ 5 5 # define HZ CONFIG_HZ /* Internal kernel timer frequency */ ··· 19 19 20 20 #define MAXHOSTNAMELEN 64 /* max length of hostname */ 21 21 22 - #endif /* _ASM_X86_PARAM_H */ 22 + #endif /* ASM_X86__PARAM_H */
+30 -18
include/asm-x86/paravirt.h
··· 1 - #ifndef __ASM_PARAVIRT_H 2 - #define __ASM_PARAVIRT_H 1 + #ifndef ASM_X86__PARAVIRT_H 2 + #define ASM_X86__PARAVIRT_H 3 3 /* Various instructions on x86 need to be replaced for 4 4 * para-virtualization: those hooks are defined here. */ 5 5 ··· 137 137 138 138 /* MSR, PMC and TSR operations. 139 139 err = 0/-EFAULT. wrmsr returns 0/-EFAULT. */ 140 + u64 (*read_msr_amd)(unsigned int msr, int *err); 140 141 u64 (*read_msr)(unsigned int msr, int *err); 141 142 int (*write_msr)(unsigned int msr, unsigned low, unsigned high); 142 143 ··· 258 257 * Hooks for allocating/releasing pagetable pages when they're 259 258 * attached to a pagetable 260 259 */ 261 - void (*alloc_pte)(struct mm_struct *mm, u32 pfn); 262 - void (*alloc_pmd)(struct mm_struct *mm, u32 pfn); 263 - void (*alloc_pmd_clone)(u32 pfn, u32 clonepfn, u32 start, u32 count); 264 - void (*alloc_pud)(struct mm_struct *mm, u32 pfn); 265 - void (*release_pte)(u32 pfn); 266 - void (*release_pmd)(u32 pfn); 267 - void (*release_pud)(u32 pfn); 260 + void (*alloc_pte)(struct mm_struct *mm, unsigned long pfn); 261 + void (*alloc_pmd)(struct mm_struct *mm, unsigned long pfn); 262 + void (*alloc_pmd_clone)(unsigned long pfn, unsigned long clonepfn, unsigned long start, unsigned long count); 263 + void (*alloc_pud)(struct mm_struct *mm, unsigned long pfn); 264 + void (*release_pte)(unsigned long pfn); 265 + void (*release_pmd)(unsigned long pfn); 266 + void (*release_pud)(unsigned long pfn); 268 267 269 268 /* Pagetable manipulation functions */ 270 269 void (*set_pte)(pte_t *ptep, pte_t pteval); ··· 727 726 { 728 727 return PVOP_CALL2(u64, pv_cpu_ops.read_msr, msr, err); 729 728 } 729 + static inline u64 paravirt_read_msr_amd(unsigned msr, int *err) 730 + { 731 + return PVOP_CALL2(u64, pv_cpu_ops.read_msr_amd, msr, err); 732 + } 730 733 static inline int paravirt_write_msr(unsigned msr, unsigned low, unsigned high) 731 734 { 732 735 return PVOP_CALL3(int, pv_cpu_ops.write_msr, msr, low, high); ··· 774 769 int err; 
775 770 776 771 *p = paravirt_read_msr(msr, &err); 772 + return err; 773 + } 774 + static inline int rdmsrl_amd_safe(unsigned msr, unsigned long long *p) 775 + { 776 + int err; 777 + 778 + *p = paravirt_read_msr_amd(msr, &err); 777 779 return err; 778 780 } 779 781 ··· 1005 993 PVOP_VCALL2(pv_mmu_ops.pgd_free, mm, pgd); 1006 994 } 1007 995 1008 - static inline void paravirt_alloc_pte(struct mm_struct *mm, unsigned pfn) 996 + static inline void paravirt_alloc_pte(struct mm_struct *mm, unsigned long pfn) 1009 997 { 1010 998 PVOP_VCALL2(pv_mmu_ops.alloc_pte, mm, pfn); 1011 999 } 1012 - static inline void paravirt_release_pte(unsigned pfn) 1000 + static inline void paravirt_release_pte(unsigned long pfn) 1013 1001 { 1014 1002 PVOP_VCALL1(pv_mmu_ops.release_pte, pfn); 1015 1003 } 1016 1004 1017 - static inline void paravirt_alloc_pmd(struct mm_struct *mm, unsigned pfn) 1005 + static inline void paravirt_alloc_pmd(struct mm_struct *mm, unsigned long pfn) 1018 1006 { 1019 1007 PVOP_VCALL2(pv_mmu_ops.alloc_pmd, mm, pfn); 1020 1008 } 1021 1009 1022 - static inline void paravirt_alloc_pmd_clone(unsigned pfn, unsigned clonepfn, 1023 - unsigned start, unsigned count) 1010 + static inline void paravirt_alloc_pmd_clone(unsigned long pfn, unsigned long clonepfn, 1011 + unsigned long start, unsigned long count) 1024 1012 { 1025 1013 PVOP_VCALL4(pv_mmu_ops.alloc_pmd_clone, pfn, clonepfn, start, count); 1026 1014 } 1027 - static inline void paravirt_release_pmd(unsigned pfn) 1015 + static inline void paravirt_release_pmd(unsigned long pfn) 1028 1016 { 1029 1017 PVOP_VCALL1(pv_mmu_ops.release_pmd, pfn); 1030 1018 } 1031 1019 1032 - static inline void paravirt_alloc_pud(struct mm_struct *mm, unsigned pfn) 1020 + static inline void paravirt_alloc_pud(struct mm_struct *mm, unsigned long pfn) 1033 1021 { 1034 1022 PVOP_VCALL2(pv_mmu_ops.alloc_pud, mm, pfn); 1035 1023 } 1036 - static inline void paravirt_release_pud(unsigned pfn) 1024 + static inline void paravirt_release_pud(unsigned long pfn) 1037 1025 { 1038 1026 PVOP_VCALL1(pv_mmu_ops.release_pud, pfn); 1039 1027 } ··· 1646 1634 1647 1635 #endif /* __ASSEMBLY__ */ 1648 1636 #endif /* CONFIG_PARAVIRT */ 1649 - #endif /* __ASM_PARAVIRT_H */ 1637 + #endif /* ASM_X86__PARAVIRT_H */
+3 -3
include/asm-x86/parport.h
··· 1 - #ifndef _ASM_X86_PARPORT_H 2 - #define _ASM_X86_PARPORT_H 1 + #ifndef ASM_X86__PARPORT_H 2 + #define ASM_X86__PARPORT_H 3 3 4 4 static int __devinit parport_pc_find_isa_ports(int autoirq, int autodma); 5 5 static int __devinit parport_pc_find_nonpci_ports(int autoirq, int autodma) ··· 7 7 return parport_pc_find_isa_ports(autoirq, autodma); 8 8 } 9 9 10 - #endif /* _ASM_X86_PARPORT_H */ 10 + #endif /* ASM_X86__PARPORT_H */
+3 -3
include/asm-x86/pat.h
··· 1 - #ifndef _ASM_PAT_H 2 - #define _ASM_PAT_H 1 + #ifndef ASM_X86__PAT_H 2 + #define ASM_X86__PAT_H 3 3 4 4 #include <linux/types.h> 5 5 ··· 19 19 20 20 extern void pat_disable(char *reason); 21 21 22 - #endif 22 + #endif /* ASM_X86__PAT_H */
+3 -3
include/asm-x86/pci-direct.h
··· 1 - #ifndef ASM_PCI_DIRECT_H 2 - #define ASM_PCI_DIRECT_H 1 1 + #ifndef ASM_X86__PCI_DIRECT_H 2 + #define ASM_X86__PCI_DIRECT_H 3 3 4 4 #include <linux/types.h> 5 5 ··· 18 18 extern unsigned int pci_early_dump_regs; 19 19 extern void early_dump_pci_device(u8 bus, u8 slot, u8 func); 20 20 extern void early_dump_pci_devices(void); 21 - #endif 21 + #endif /* ASM_X86__PCI_DIRECT_H */
+3 -3
include/asm-x86/pci.h
··· 1 - #ifndef __x86_PCI_H 2 - #define __x86_PCI_H 1 + #ifndef ASM_X86__PCI_H 2 + #define ASM_X86__PCI_H 3 3 4 4 #include <linux/mm.h> /* for struct page */ 5 5 #include <linux/types.h> ··· 111 111 } 112 112 #endif 113 113 114 - #endif 114 + #endif /* ASM_X86__PCI_H */
+3 -3
include/asm-x86/pci_32.h
··· 1 - #ifndef __i386_PCI_H 2 - #define __i386_PCI_H 1 + #ifndef ASM_X86__PCI_32_H 2 + #define ASM_X86__PCI_32_H 3 3 4 4 5 5 #ifdef __KERNEL__ ··· 31 31 #endif /* __KERNEL__ */ 32 32 33 33 34 - #endif /* __i386_PCI_H */ 34 + #endif /* ASM_X86__PCI_32_H */
+3 -3
include/asm-x86/pci_64.h
··· 1 - #ifndef __x8664_PCI_H 2 - #define __x8664_PCI_H 1 + #ifndef ASM_X86__PCI_64_H 2 + #define ASM_X86__PCI_64_H 3 3 4 4 #ifdef __KERNEL__ 5 5 ··· 63 63 64 64 #endif /* __KERNEL__ */ 65 65 66 - #endif /* __x8664_PCI_H */ 66 + #endif /* ASM_X86__PCI_64_H */
+3 -3
include/asm-x86/pda.h
··· 1 - #ifndef X86_64_PDA_H 2 - #define X86_64_PDA_H 1 + #ifndef ASM_X86__PDA_H 2 + #define ASM_X86__PDA_H 3 3 4 4 #ifndef __ASSEMBLY__ 5 5 #include <linux/stddef.h> ··· 134 134 135 135 #define PDA_STACKOFFSET (5*8) 136 136 137 - #endif 137 + #endif /* ASM_X86__PDA_H */
+3 -3
include/asm-x86/percpu.h
··· 1 - #ifndef _ASM_X86_PERCPU_H_ 2 - #define _ASM_X86_PERCPU_H_ 1 + #ifndef ASM_X86__PERCPU_H 2 + #define ASM_X86__PERCPU_H 3 3 4 4 #ifdef CONFIG_X86_64 5 5 #include <linux/compiler.h> ··· 215 215 216 216 #endif /* !CONFIG_SMP */ 217 217 218 - #endif /* _ASM_X86_PERCPU_H_ */ 218 + #endif /* ASM_X86__PERCPU_H */
+3 -3
include/asm-x86/pgalloc.h
··· 1 - #ifndef _ASM_X86_PGALLOC_H 2 - #define _ASM_X86_PGALLOC_H 1 + #ifndef ASM_X86__PGALLOC_H 2 + #define ASM_X86__PGALLOC_H 3 3 4 4 #include <linux/threads.h> 5 5 #include <linux/mm.h> /* for struct page */ ··· 111 111 #endif /* PAGETABLE_LEVELS > 3 */ 112 112 #endif /* PAGETABLE_LEVELS > 2 */ 113 113 114 - #endif /* _ASM_X86_PGALLOC_H */ 114 + #endif /* ASM_X86__PGALLOC_H */
+3 -3
include/asm-x86/pgtable-2level-defs.h
··· 1 - #ifndef _I386_PGTABLE_2LEVEL_DEFS_H 2 - #define _I386_PGTABLE_2LEVEL_DEFS_H 1 + #ifndef ASM_X86__PGTABLE_2LEVEL_DEFS_H 2 + #define ASM_X86__PGTABLE_2LEVEL_DEFS_H 3 3 4 4 #define SHARED_KERNEL_PMD 0 5 5 ··· 17 17 18 18 #define PTRS_PER_PTE 1024 19 19 20 - #endif /* _I386_PGTABLE_2LEVEL_DEFS_H */ 20 + #endif /* ASM_X86__PGTABLE_2LEVEL_DEFS_H */
+3 -5
include/asm-x86/pgtable-2level.h
··· 1 - #ifndef _I386_PGTABLE_2LEVEL_H 2 - #define _I386_PGTABLE_2LEVEL_H 1 + #ifndef ASM_X86__PGTABLE_2LEVEL_H 2 + #define ASM_X86__PGTABLE_2LEVEL_H 3 3 4 4 #define pte_ERROR(e) \ 5 5 printk("%s:%d: bad pte %08lx.\n", __FILE__, __LINE__, (e).pte_low) ··· 53 53 #define native_ptep_get_and_clear(xp) native_local_ptep_get_and_clear(xp) 54 54 #endif 55 55 56 - #define pte_page(x) pfn_to_page(pte_pfn(x)) 57 56 #define pte_none(x) (!(x).pte_low) 58 - #define pte_pfn(x) (pte_val(x) >> PAGE_SHIFT) 59 57 60 58 /* 61 59 * Bits 0, 6 and 7 are taken, split up the 29 bits of offset ··· 76 78 #define __pte_to_swp_entry(pte) ((swp_entry_t) { (pte).pte_low }) 77 79 #define __swp_entry_to_pte(x) ((pte_t) { .pte = (x).val }) 78 80 79 - #endif /* _I386_PGTABLE_2LEVEL_H */ 81 + #endif /* ASM_X86__PGTABLE_2LEVEL_H */
+3 -3
include/asm-x86/pgtable-3level-defs.h
··· 1 - #ifndef _I386_PGTABLE_3LEVEL_DEFS_H 2 - #define _I386_PGTABLE_3LEVEL_DEFS_H 1 + #ifndef ASM_X86__PGTABLE_3LEVEL_DEFS_H 2 + #define ASM_X86__PGTABLE_3LEVEL_DEFS_H 3 3 4 4 #ifdef CONFIG_PARAVIRT 5 5 #define SHARED_KERNEL_PMD (pv_info.shared_kernel_pmd) ··· 25 25 */ 26 26 #define PTRS_PER_PTE 512 27 27 28 - #endif /* _I386_PGTABLE_3LEVEL_DEFS_H */ 28 + #endif /* ASM_X86__PGTABLE_3LEVEL_DEFS_H */
+3 -10
include/asm-x86/pgtable-3level.h
··· 1 - #ifndef _I386_PGTABLE_3LEVEL_H 2 - #define _I386_PGTABLE_3LEVEL_H 1 + #ifndef ASM_X86__PGTABLE_3LEVEL_H 2 + #define ASM_X86__PGTABLE_3LEVEL_H 3 3 4 4 /* 5 5 * Intel Physical Address Extension (PAE) Mode - three-level page ··· 151 151 return a.pte_low == b.pte_low && a.pte_high == b.pte_high; 152 152 } 153 153 154 - #define pte_page(x) pfn_to_page(pte_pfn(x)) 155 - 156 154 static inline int pte_none(pte_t pte) 157 155 { 158 156 return !pte.pte_low && !pte.pte_high; 159 - } 160 - 161 - static inline unsigned long pte_pfn(pte_t pte) 162 - { 163 - return (pte_val(pte) & PTE_PFN_MASK) >> PAGE_SHIFT; 164 157 } 165 158 166 159 /* ··· 172 179 #define __pte_to_swp_entry(pte) ((swp_entry_t){ (pte).pte_high }) 173 180 #define __swp_entry_to_pte(x) ((pte_t){ { .pte_high = (x).val } }) 174 181 175 - #endif /* _I386_PGTABLE_3LEVEL_H */ 182 + #endif /* ASM_X86__PGTABLE_3LEVEL_H */
+12 -3
include/asm-x86/pgtable.h
··· 1 - #ifndef _ASM_X86_PGTABLE_H 2 - #define _ASM_X86_PGTABLE_H 1 + #ifndef ASM_X86__PGTABLE_H 2 + #define ASM_X86__PGTABLE_H 3 3 4 4 #define FIRST_USER_ADDRESS 0 5 5 ··· 186 186 return pte_val(pte) & _PAGE_SPECIAL; 187 187 } 188 188 189 + static inline unsigned long pte_pfn(pte_t pte) 190 + { 191 + return (pte_val(pte) & PTE_PFN_MASK) >> PAGE_SHIFT; 192 + } 193 + 194 + #define pte_page(pte) pfn_to_page(pte_pfn(pte)) 195 + 189 196 static inline int pmd_large(pmd_t pte) 190 197 { 191 198 return (pmd_val(pte) & (_PAGE_PSE | _PAGE_PRESENT)) == ··· 319 312 static inline void native_pagetable_setup_start(pgd_t *base) {} 320 313 static inline void native_pagetable_setup_done(pgd_t *base) {} 321 314 #endif 315 + 316 + extern int arch_report_meminfo(char *page); 322 317 323 318 #ifdef CONFIG_PARAVIRT 324 319 #include <asm/paravirt.h> ··· 530 521 #include <asm-generic/pgtable.h> 531 522 #endif /* __ASSEMBLY__ */ 532 523 533 - #endif /* _ASM_X86_PGTABLE_H */ 524 + #endif /* ASM_X86__PGTABLE_H */
+7 -5
include/asm-x86/pgtable_32.h
··· 1 - #ifndef _I386_PGTABLE_H 2 - #define _I386_PGTABLE_H 1 + #ifndef ASM_X86__PGTABLE_32_H 2 + #define ASM_X86__PGTABLE_32_H 3 3 4 4 5 5 /* ··· 31 31 static inline void check_pgt_cache(void) { } 32 32 void paging_init(void); 33 33 34 + extern void set_pmd_pfn(unsigned long, unsigned long, pgprot_t); 34 35 35 36 /* 36 37 * The Linux x86 paging architecture is 'compile-time dual-mode', it ··· 57 56 * area for the same reason. ;) 58 57 */ 59 58 #define VMALLOC_OFFSET (8 * 1024 * 1024) 60 - #define VMALLOC_START (((unsigned long)high_memory + 2 * VMALLOC_OFFSET - 1) \ 61 - & ~(VMALLOC_OFFSET - 1)) 59 + #define VMALLOC_START ((unsigned long)high_memory + VMALLOC_OFFSET) 62 60 #ifdef CONFIG_X86_PAE 63 61 #define LAST_PKMAP 512 64 62 #else ··· 72 72 #else 73 73 # define VMALLOC_END (FIXADDR_START - 2 * PAGE_SIZE) 74 74 #endif 75 + 76 + #define MAXMEM (VMALLOC_END - PAGE_OFFSET - __VMALLOC_RESERVE) 75 77 76 78 /* 77 79 * Define this if things work differently on an i386 and an i486: ··· 188 186 #define io_remap_pfn_range(vma, vaddr, pfn, size, prot) \ 189 187 remap_pfn_range(vma, vaddr, pfn, size, prot) 190 188 191 - #endif /* _I386_PGTABLE_H */ 189 + #endif /* ASM_X86__PGTABLE_32_H */
+3 -5
include/asm-x86/pgtable_64.h
··· 1 - #ifndef _X86_64_PGTABLE_H 2 - #define _X86_64_PGTABLE_H 1 + #ifndef ASM_X86__PGTABLE_64_H 2 + #define ASM_X86__PGTABLE_64_H 3 3 4 4 #include <linux/const.h> 5 5 #ifndef __ASSEMBLY__ ··· 175 175 #define pte_present(x) (pte_val((x)) & (_PAGE_PRESENT | _PAGE_PROTNONE)) 176 176 177 177 #define pages_to_mb(x) ((x) >> (20 - PAGE_SHIFT)) /* FIXME: is this right? */ 178 - #define pte_page(x) pfn_to_page(pte_pfn((x))) 179 - #define pte_pfn(x) ((pte_val((x)) & __PHYSICAL_MASK) >> PAGE_SHIFT) 180 178 181 179 /* 182 180 * Macro to mark a page protection value as "uncacheable". ··· 282 284 #define __HAVE_ARCH_PTE_SAME 283 285 #endif /* !__ASSEMBLY__ */ 284 286 285 - #endif /* _X86_64_PGTABLE_H */ 287 + #endif /* ASM_X86__PGTABLE_64_H */
+3 -3
include/asm-x86/posix_types_32.h
··· 1 - #ifndef __ARCH_I386_POSIX_TYPES_H 2 - #define __ARCH_I386_POSIX_TYPES_H 1 + #ifndef ASM_X86__POSIX_TYPES_32_H 2 + #define ASM_X86__POSIX_TYPES_32_H 3 3 4 4 /* 5 5 * This file is generally used by user-level software, so you need to ··· 82 82 83 83 #endif /* defined(__KERNEL__) */ 84 84 85 - #endif 85 + #endif /* ASM_X86__POSIX_TYPES_32_H */
+3 -3
include/asm-x86/posix_types_64.h
··· 1 - #ifndef _ASM_X86_64_POSIX_TYPES_H 2 - #define _ASM_X86_64_POSIX_TYPES_H 1 + #ifndef ASM_X86__POSIX_TYPES_64_H 2 + #define ASM_X86__POSIX_TYPES_64_H 3 3 4 4 /* 5 5 * This file is generally used by user-level software, so you need to ··· 116 116 117 117 #endif /* defined(__KERNEL__) */ 118 118 119 - #endif 119 + #endif /* ASM_X86__POSIX_TYPES_64_H */
+3 -3
include/asm-x86/prctl.h
··· 1 - #ifndef X86_64_PRCTL_H 2 - #define X86_64_PRCTL_H 1 1 + #ifndef ASM_X86__PRCTL_H 2 + #define ASM_X86__PRCTL_H 3 3 4 4 #define ARCH_SET_GS 0x1001 5 5 #define ARCH_SET_FS 0x1002 ··· 7 7 #define ARCH_GET_GS 0x1004 8 8 9 9 10 - #endif 10 + #endif /* ASM_X86__PRCTL_H */
+3 -3
include/asm-x86/processor-flags.h
··· 1 - #ifndef __ASM_I386_PROCESSOR_FLAGS_H 2 - #define __ASM_I386_PROCESSOR_FLAGS_H 1 + #ifndef ASM_X86__PROCESSOR_FLAGS_H 2 + #define ASM_X86__PROCESSOR_FLAGS_H 3 3 /* Various flags defined: can be included from assembler. */ 4 4 5 5 /* ··· 96 96 #endif 97 97 #endif 98 98 99 - #endif /* __ASM_I386_PROCESSOR_FLAGS_H */ 99 + #endif /* ASM_X86__PROCESSOR_FLAGS_H */
+16 -6
include/asm-x86/processor.h
··· 1 - #ifndef __ASM_X86_PROCESSOR_H 2 - #define __ASM_X86_PROCESSOR_H 1 + #ifndef ASM_X86__PROCESSOR_H 2 + #define ASM_X86__PROCESSOR_H 3 3 4 4 #include <asm/processor-flags.h> 5 5 ··· 20 20 #include <asm/msr.h> 21 21 #include <asm/desc_defs.h> 22 22 #include <asm/nops.h> 23 + #include <asm/ds.h> 23 24 24 25 #include <linux/personality.h> 25 26 #include <linux/cpumask.h> ··· 141 140 #define current_cpu_data boot_cpu_data 142 141 #endif 143 142 143 + extern const struct seq_operations cpuinfo_op; 144 + 144 145 static inline int hlt_works(int cpu) 145 146 { 146 147 #ifdef CONFIG_X86_32 ··· 155 152 #define cache_line_size() (boot_cpu_data.x86_cache_alignment) 156 153 157 154 extern void cpu_detect(struct cpuinfo_x86 *c); 155 + 156 + extern struct pt_regs *idle_regs(struct pt_regs *); 158 157 159 158 extern void early_cpu_init(void); 160 159 extern void identify_boot_cpu(void); ··· 416 411 unsigned io_bitmap_max; 417 412 /* MSR_IA32_DEBUGCTLMSR value to switch in if TIF_DEBUGCTLMSR is set. */ 418 413 unsigned long debugctlmsr; 419 - /* Debug Store - if not 0 points to a DS Save Area configuration; 420 - * goes into MSR_IA32_DS_AREA */ 421 - unsigned long ds_area_msr; 414 + #ifdef CONFIG_X86_DS 415 + /* Debug Store context; see include/asm-x86/ds.h; goes into MSR_IA32_DS_AREA */ 416 + struct ds_context *ds_ctx; 417 + #endif /* CONFIG_X86_DS */ 418 + #ifdef CONFIG_X86_PTRACE_BTS 419 + /* the signal to send on a bts buffer overflow */ 420 + unsigned int bts_ovfl_signal; 421 + #endif /* CONFIG_X86_PTRACE_BTS */ 422 422 }; 423 423 424 424 static inline unsigned long native_get_debugreg(int regno) ··· 953 943 extern int get_tsc_mode(unsigned long adr); 954 944 extern int set_tsc_mode(unsigned int val); 955 945 956 - #endif 946 + #endif /* ASM_X86__PROCESSOR_H */
+3 -3
include/asm-x86/proto.h
··· 1 - #ifndef _ASM_X8664_PROTO_H 2 - #define _ASM_X8664_PROTO_H 1 1 + #ifndef ASM_X86__PROTO_H 2 + #define ASM_X86__PROTO_H 3 3 4 4 #include <asm/ldt.h> 5 5 ··· 29 29 #define round_up(x, y) (((x) + (y) - 1) & ~((y) - 1)) 30 30 #define round_down(x, y) ((x) & ~((y) - 1)) 31 31 32 - #endif 32 + #endif /* ASM_X86__PROTO_H */
+11 -9
include/asm-x86/ptrace-abi.h
··· 1 - #ifndef _ASM_X86_PTRACE_ABI_H 2 - #define _ASM_X86_PTRACE_ABI_H 1 + #ifndef ASM_X86__PTRACE_ABI_H 2 + #define ASM_X86__PTRACE_ABI_H 3 3 4 4 #ifdef __i386__ 5 5 ··· 80 80 81 81 #define PTRACE_SINGLEBLOCK 33 /* resume execution until next branch */ 82 82 83 - #ifndef __ASSEMBLY__ 83 + #ifdef CONFIG_X86_PTRACE_BTS 84 84 85 + #ifndef __ASSEMBLY__ 85 86 #include <asm/types.h> 86 87 87 88 /* configuration/status structure used in PTRACE_BTS_CONFIG and ··· 98 97 /* actual size of bts_struct in bytes */ 99 98 __u32 bts_size; 100 99 }; 101 - #endif 100 + #endif /* __ASSEMBLY__ */ 102 101 103 102 #define PTRACE_BTS_O_TRACE 0x1 /* branch trace */ 104 103 #define PTRACE_BTS_O_SCHED 0x2 /* scheduling events w/ jiffies */ 105 104 #define PTRACE_BTS_O_SIGNAL 0x4 /* send SIG<signal> on buffer overflow 106 105 instead of wrapping around */ 107 - #define PTRACE_BTS_O_CUT_SIZE 0x8 /* cut requested size to max available 108 - instead of failing */ 106 + #define PTRACE_BTS_O_ALLOC 0x8 /* (re)allocate buffer */ 109 107 110 108 #define PTRACE_BTS_CONFIG 40 111 109 /* Configure branch trace recording. 112 110 ADDR points to a struct ptrace_bts_config. 113 111 DATA gives the size of that buffer. 114 - A new buffer is allocated, iff the size changes. 112 + A new buffer is allocated, if requested in the flags. 113 + An overflow signal may only be requested for new buffers. 115 114 Returns the number of bytes read. 116 115 */ 117 116 #define PTRACE_BTS_STATUS 41 ··· 120 119 Returns the number of bytes written. 121 120 */ 122 121 #define PTRACE_BTS_SIZE 42 123 - /* Return the number of available BTS records. 122 + /* Return the number of available BTS records for draining. 124 123 DATA and ADDR are ignored. 125 124 */ 126 125 #define PTRACE_BTS_GET 43 ··· 140 139 BTS records are read from oldest to newest. 141 140 Returns number of BTS records drained. 142 141 */ 142 + #endif /* CONFIG_X86_PTRACE_BTS */ 143 143 144 - #endif 144 + #endif /* ASM_X86__PTRACE_ABI_H */
+47 -5
include/asm-x86/ptrace.h
··· 1 - #ifndef _ASM_X86_PTRACE_H 2 - #define _ASM_X86_PTRACE_H 1 + #ifndef ASM_X86__PTRACE_H 2 + #define ASM_X86__PTRACE_H 3 3 4 4 #include <linux/compiler.h> /* For __user */ 5 5 #include <asm/ptrace-abi.h> ··· 127 127 #endif /* __KERNEL__ */ 128 128 #endif /* !__i386__ */ 129 129 130 + 131 + #ifdef CONFIG_X86_PTRACE_BTS 132 + /* a branch trace record entry 133 + * 134 + * In order to unify the interface between various processor versions, 135 + * we use the below data structure for all processors. 136 + */ 137 + enum bts_qualifier { 138 + BTS_INVALID = 0, 139 + BTS_BRANCH, 140 + BTS_TASK_ARRIVES, 141 + BTS_TASK_DEPARTS 142 + }; 143 + 144 + struct bts_struct { 145 + __u64 qualifier; 146 + union { 147 + /* BTS_BRANCH */ 148 + struct { 149 + __u64 from_ip; 150 + __u64 to_ip; 151 + } lbr; 152 + /* BTS_TASK_ARRIVES or 153 + BTS_TASK_DEPARTS */ 154 + __u64 jiffies; 155 + } variant; 156 + }; 157 + #endif /* CONFIG_X86_PTRACE_BTS */ 158 + 130 159 #ifdef __KERNEL__ 131 160 132 - /* the DS BTS struct is used for ptrace as well */ 133 - #include <asm/ds.h> 161 + #include <linux/init.h> 134 162 163 + struct cpuinfo_x86; 135 164 struct task_struct; 136 165 166 + #ifdef CONFIG_X86_PTRACE_BTS 167 + extern void __cpuinit ptrace_bts_init_intel(struct cpuinfo_x86 *); 137 168 extern void ptrace_bts_take_timestamp(struct task_struct *, enum bts_qualifier); 169 + #else 170 + #define ptrace_bts_init_intel(config) do {} while (0) 171 + #endif /* CONFIG_X86_PTRACE_BTS */ 138 172 139 173 extern unsigned long profile_pc(struct pt_regs *regs); 140 174 ··· 181 147 #else 182 148 void signal_fault(struct pt_regs *regs, void __user *frame, char *where); 183 149 #endif 150 + 151 + extern long syscall_trace_enter(struct pt_regs *); 152 + extern void syscall_trace_leave(struct pt_regs *); 184 153 185 154 static inline unsigned long regs_return_value(struct pt_regs *regs) 186 155 { ··· 250 213 return regs->bp; 251 214 } 252 215 216 + static inline unsigned long user_stack_pointer(struct pt_regs 
*regs) 217 + { 218 + return regs->sp; 219 + } 220 + 253 221 /* 254 222 * These are defined as per linux/ptrace.h, which see. 255 223 */ ··· 281 239 282 240 #endif /* !__ASSEMBLY__ */ 283 241 284 - #endif 242 + #endif /* ASM_X86__PTRACE_H */
+3 -3
include/asm-x86/pvclock-abi.h
··· 1 - #ifndef _ASM_X86_PVCLOCK_ABI_H_ 2 - #define _ASM_X86_PVCLOCK_ABI_H_ 1 + #ifndef ASM_X86__PVCLOCK_ABI_H 2 + #define ASM_X86__PVCLOCK_ABI_H 3 3 #ifndef __ASSEMBLY__ 4 4 5 5 /* ··· 39 39 } __attribute__((__packed__)); 40 40 41 41 #endif /* __ASSEMBLY__ */ 42 - #endif /* _ASM_X86_PVCLOCK_ABI_H_ */ 42 + #endif /* ASM_X86__PVCLOCK_ABI_H */
+3 -3
include/asm-x86/pvclock.h
··· 1 - #ifndef _ASM_X86_PVCLOCK_H_ 2 - #define _ASM_X86_PVCLOCK_H_ 1 + #ifndef ASM_X86__PVCLOCK_H 2 + #define ASM_X86__PVCLOCK_H 3 3 4 4 #include <linux/clocksource.h> 5 5 #include <asm/pvclock-abi.h> ··· 10 10 struct pvclock_vcpu_time_info *vcpu, 11 11 struct timespec *ts); 12 12 13 - #endif /* _ASM_X86_PVCLOCK_H_ */ 13 + #endif /* ASM_X86__PVCLOCK_H */
+3 -3
include/asm-x86/reboot.h
··· 1 - #ifndef _ASM_REBOOT_H 2 - #define _ASM_REBOOT_H 1 + #ifndef ASM_X86__REBOOT_H 2 + #define ASM_X86__REBOOT_H 3 3 4 4 struct pt_regs; 5 5 ··· 18 18 void native_machine_shutdown(void); 19 19 void machine_real_restart(const unsigned char *code, int length); 20 20 21 - #endif /* _ASM_REBOOT_H */ 21 + #endif /* ASM_X86__REBOOT_H */
+3 -3
include/asm-x86/reboot_fixups.h
··· 1 - #ifndef _LINUX_REBOOT_FIXUPS_H 2 - #define _LINUX_REBOOT_FIXUPS_H 1 + #ifndef ASM_X86__REBOOT_FIXUPS_H 2 + #define ASM_X86__REBOOT_FIXUPS_H 3 3 4 4 extern void mach_reboot_fixups(void); 5 5 6 - #endif /* _LINUX_REBOOT_FIXUPS_H */ 6 + #endif /* ASM_X86__REBOOT_FIXUPS_H */
+3 -3
include/asm-x86/required-features.h
··· 1 - #ifndef _ASM_REQUIRED_FEATURES_H 2 - #define _ASM_REQUIRED_FEATURES_H 1 1 + #ifndef ASM_X86__REQUIRED_FEATURES_H 2 + #define ASM_X86__REQUIRED_FEATURES_H 3 3 4 4 /* Define minimum CPUID feature set for kernel These bits are checked 5 5 really early to actually display a visible error message before the ··· 79 79 #define REQUIRED_MASK6 0 80 80 #define REQUIRED_MASK7 0 81 81 82 - #endif 82 + #endif /* ASM_X86__REQUIRED_FEATURES_H */
+4 -4
include/asm-x86/resume-trace.h
··· 1 - #ifndef _ASM_X86_RESUME_TRACE_H 2 - #define _ASM_X86_RESUME_TRACE_H 1 + #ifndef ASM_X86__RESUME_TRACE_H 2 + #define ASM_X86__RESUME_TRACE_H 3 3 4 4 #include <asm/asm.h> 5 5 ··· 7 7 do { \ 8 8 if (pm_trace_enabled) { \ 9 9 const void *tracedata; \ 10 - asm volatile(_ASM_MOV_UL " $1f,%0\n" \ 10 + asm volatile(_ASM_MOV " $1f,%0\n" \ 11 11 ".section .tracedata,\"a\"\n" \ 12 12 "1:\t.word %c1\n\t" \ 13 13 _ASM_PTR " %c2\n" \ ··· 18 18 } \ 19 19 } while (0) 20 20 21 - #endif 21 + #endif /* ASM_X86__RESUME_TRACE_H */
+3 -3
include/asm-x86/rio.h
··· 5 5 * Author: Laurent Vivier <Laurent.Vivier@bull.net> 6 6 */ 7 7 8 - #ifndef __ASM_RIO_H 9 - #define __ASM_RIO_H 8 + #ifndef ASM_X86__RIO_H 9 + #define ASM_X86__RIO_H 10 10 11 11 #define RIO_TABLE_VERSION 3 12 12 ··· 60 60 ALT_CALGARY = 5, /* Second Planar Calgary */ 61 61 }; 62 62 63 - #endif /* __ASM_RIO_H */ 63 + #endif /* ASM_X86__RIO_H */
+3 -3
include/asm-x86/rwlock.h
··· 1 - #ifndef _ASM_X86_RWLOCK_H 2 - #define _ASM_X86_RWLOCK_H 1 + #ifndef ASM_X86__RWLOCK_H 2 + #define ASM_X86__RWLOCK_H 3 3 4 4 #define RW_LOCK_BIAS 0x01000000 5 5 6 6 /* Actual code is in asm/spinlock.h or in arch/x86/lib/rwlock.S */ 7 7 8 - #endif /* _ASM_X86_RWLOCK_H */ 8 + #endif /* ASM_X86__RWLOCK_H */
+3 -3
include/asm-x86/rwsem.h
··· 29 29 * front, then they'll all be woken up, but no other readers will be. 30 30 */ 31 31 32 - #ifndef _I386_RWSEM_H 33 - #define _I386_RWSEM_H 32 + #ifndef ASM_X86__RWSEM_H 33 + #define ASM_X86__RWSEM_H 34 34 35 35 #ifndef _LINUX_RWSEM_H 36 36 #error "please don't include asm/rwsem.h directly, use linux/rwsem.h instead" ··· 262 262 } 263 263 264 264 #endif /* __KERNEL__ */ 265 - #endif /* _I386_RWSEM_H */ 265 + #endif /* ASM_X86__RWSEM_H */
+3 -3
include/asm-x86/scatterlist.h
··· 1 - #ifndef _ASM_X86_SCATTERLIST_H 2 - #define _ASM_X86_SCATTERLIST_H 1 + #ifndef ASM_X86__SCATTERLIST_H 2 + #define ASM_X86__SCATTERLIST_H 3 3 4 4 #include <asm/types.h> 5 5 ··· 30 30 # define sg_dma_len(sg) ((sg)->dma_length) 31 31 #endif 32 32 33 - #endif 33 + #endif /* ASM_X86__SCATTERLIST_H */
+3 -3
include/asm-x86/seccomp_32.h
··· 1 - #ifndef _ASM_SECCOMP_H 2 - #define _ASM_SECCOMP_H 1 + #ifndef ASM_X86__SECCOMP_32_H 2 + #define ASM_X86__SECCOMP_32_H 3 3 4 4 #include <linux/thread_info.h> 5 5 ··· 14 14 #define __NR_seccomp_exit __NR_exit 15 15 #define __NR_seccomp_sigreturn __NR_sigreturn 16 16 17 - #endif /* _ASM_SECCOMP_H */ 17 + #endif /* ASM_X86__SECCOMP_32_H */
+3 -3
include/asm-x86/seccomp_64.h
··· 1 - #ifndef _ASM_SECCOMP_H 2 - #define _ASM_SECCOMP_H 1 + #ifndef ASM_X86__SECCOMP_64_H 2 + #define ASM_X86__SECCOMP_64_H 3 3 4 4 #include <linux/thread_info.h> 5 5 ··· 22 22 #define __NR_seccomp_exit_32 __NR_ia32_exit 23 23 #define __NR_seccomp_sigreturn_32 __NR_ia32_sigreturn 24 24 25 - #endif /* _ASM_SECCOMP_H */ 25 + #endif /* ASM_X86__SECCOMP_64_H */
+3 -3
include/asm-x86/segment.h
··· 1 - #ifndef _ASM_X86_SEGMENT_H_ 2 - #define _ASM_X86_SEGMENT_H_ 1 + #ifndef ASM_X86__SEGMENT_H 2 + #define ASM_X86__SEGMENT_H 3 3 4 4 /* Constructor for a conventional segment GDT (or LDT) entry */ 5 5 /* This is a macro so it can be used in initializers */ ··· 212 212 #endif 213 213 #endif 214 214 215 - #endif 215 + #endif /* ASM_X86__SEGMENT_H */
+3 -3
include/asm-x86/sembuf.h
··· 1 - #ifndef _ASM_X86_SEMBUF_H 2 - #define _ASM_X86_SEMBUF_H 1 + #ifndef ASM_X86__SEMBUF_H 2 + #define ASM_X86__SEMBUF_H 3 3 4 4 /* 5 5 * The semid64_ds structure for x86 architecture. ··· 21 21 unsigned long __unused4; 22 22 }; 23 23 24 - #endif /* _ASM_X86_SEMBUF_H */ 24 + #endif /* ASM_X86__SEMBUF_H */
+3 -3
include/asm-x86/serial.h
··· 1 - #ifndef _ASM_X86_SERIAL_H 2 - #define _ASM_X86_SERIAL_H 1 + #ifndef ASM_X86__SERIAL_H 2 + #define ASM_X86__SERIAL_H 3 3 4 4 /* 5 5 * This assumes you have a 1.8432 MHz clock for your UART. ··· 26 26 { 0, BASE_BAUD, 0x3E8, 4, STD_COM_FLAGS }, /* ttyS2 */ \ 27 27 { 0, BASE_BAUD, 0x2E8, 3, STD_COM4_FLAGS }, /* ttyS3 */ 28 28 29 - #endif /* _ASM_X86_SERIAL_H */ 29 + #endif /* ASM_X86__SERIAL_H */
+4 -3
include/asm-x86/setup.h
··· 1 - #ifndef _ASM_X86_SETUP_H 2 - #define _ASM_X86_SETUP_H 1 + #ifndef ASM_X86__SETUP_H 2 + #define ASM_X86__SETUP_H 3 3 4 4 #define COMMAND_LINE_SIZE 2048 5 5 ··· 41 41 }; 42 42 43 43 extern struct x86_quirks *x86_quirks; 44 + extern unsigned long saved_video_mode; 44 45 45 46 #ifndef CONFIG_PARAVIRT 46 47 #define paravirt_post_allocator_init() do {} while (0) ··· 101 100 #endif /* __ASSEMBLY__ */ 102 101 #endif /* __KERNEL__ */ 103 102 104 - #endif /* _ASM_X86_SETUP_H */ 103 + #endif /* ASM_X86__SETUP_H */
+3 -3
include/asm-x86/shmbuf.h
··· 1 - #ifndef _ASM_X86_SHMBUF_H 2 - #define _ASM_X86_SHMBUF_H 1 + #ifndef ASM_X86__SHMBUF_H 2 + #define ASM_X86__SHMBUF_H 3 3 4 4 /* 5 5 * The shmid64_ds structure for x86 architecture. ··· 48 48 unsigned long __unused4; 49 49 }; 50 50 51 - #endif /* _ASM_X86_SHMBUF_H */ 51 + #endif /* ASM_X86__SHMBUF_H */
+3 -3
include/asm-x86/shmparam.h
··· 1 - #ifndef _ASM_X86_SHMPARAM_H 2 - #define _ASM_X86_SHMPARAM_H 1 + #ifndef ASM_X86__SHMPARAM_H 2 + #define ASM_X86__SHMPARAM_H 3 3 4 4 #define SHMLBA PAGE_SIZE /* attach addr a multiple of this */ 5 5 6 - #endif /* _ASM_X86_SHMPARAM_H */ 6 + #endif /* ASM_X86__SHMPARAM_H */
+3 -3
include/asm-x86/sigcontext.h
··· 1 - #ifndef _ASM_X86_SIGCONTEXT_H 2 - #define _ASM_X86_SIGCONTEXT_H 1 + #ifndef ASM_X86__SIGCONTEXT_H 2 + #define ASM_X86__SIGCONTEXT_H 3 3 4 4 #include <linux/compiler.h> 5 5 #include <asm/types.h> ··· 202 202 203 203 #endif /* !__i386__ */ 204 204 205 - #endif 205 + #endif /* ASM_X86__SIGCONTEXT_H */
+3 -3
include/asm-x86/sigcontext32.h
··· 1 - #ifndef _SIGCONTEXT32_H 2 - #define _SIGCONTEXT32_H 1 1 + #ifndef ASM_X86__SIGCONTEXT32_H 2 + #define ASM_X86__SIGCONTEXT32_H 3 3 4 4 /* signal context for 32bit programs. */ 5 5 ··· 68 68 unsigned int cr2; 69 69 }; 70 70 71 - #endif 71 + #endif /* ASM_X86__SIGCONTEXT32_H */
+3 -3
include/asm-x86/siginfo.h
··· 1 - #ifndef _ASM_X86_SIGINFO_H 2 - #define _ASM_X86_SIGINFO_H 1 + #ifndef ASM_X86__SIGINFO_H 2 + #define ASM_X86__SIGINFO_H 3 3 4 4 #ifdef __x86_64__ 5 5 # define __ARCH_SI_PREAMBLE_SIZE (4 * sizeof(int)) ··· 7 7 8 8 #include <asm-generic/siginfo.h> 9 9 10 - #endif 10 + #endif /* ASM_X86__SIGINFO_H */
+6 -3
include/asm-x86/signal.h
··· 1 - #ifndef _ASM_X86_SIGNAL_H 2 - #define _ASM_X86_SIGNAL_H 1 + #ifndef ASM_X86__SIGNAL_H 2 + #define ASM_X86__SIGNAL_H 3 3 4 4 #ifndef __ASSEMBLY__ 5 5 #include <linux/types.h> ··· 140 140 struct k_sigaction { 141 141 struct sigaction sa; 142 142 }; 143 + 144 + extern void do_notify_resume(struct pt_regs *, void *, __u32); 145 + 143 146 # else /* __KERNEL__ */ 144 147 /* Here we must cater to libcs that poke about in kernel headers. */ 145 148 ··· 259 256 #endif /* __KERNEL__ */ 260 257 #endif /* __ASSEMBLY__ */ 261 258 262 - #endif 259 + #endif /* ASM_X86__SIGNAL_H */
+6 -4
include/asm-x86/smp.h
··· 1 - #ifndef _ASM_X86_SMP_H_ 2 - #define _ASM_X86_SMP_H_ 1 + #ifndef ASM_X86__SMP_H 2 + #define ASM_X86__SMP_H 3 3 #ifndef __ASSEMBLY__ 4 4 #include <linux/cpumask.h> 5 5 #include <linux/init.h> ··· 34 34 DECLARE_PER_CPU(cpumask_t, cpu_sibling_map); 35 35 DECLARE_PER_CPU(cpumask_t, cpu_core_map); 36 36 DECLARE_PER_CPU(u16, cpu_llc_id); 37 + #ifdef CONFIG_X86_32 38 + DECLARE_PER_CPU(int, cpu_number); 39 + #endif 37 40 38 41 DECLARE_EARLY_PER_CPU(u16, x86_cpu_to_apicid); 39 42 DECLARE_EARLY_PER_CPU(u16, x86_bios_cpu_apicid); ··· 145 142 * from the initial startup. We map APIC_BASE very early in page_setup(), 146 143 * so this is correct in the x86 case. 147 144 */ 148 - DECLARE_PER_CPU(int, cpu_number); 149 145 #define raw_smp_processor_id() (x86_read_percpu(cpu_number)) 150 146 extern int safe_smp_processor_id(void); 151 147 ··· 207 205 #endif 208 206 209 207 #endif /* __ASSEMBLY__ */ 210 - #endif 208 + #endif /* ASM_X86__SMP_H */
+3 -3
include/asm-x86/socket.h
··· 1 - #ifndef _ASM_SOCKET_H 2 - #define _ASM_SOCKET_H 1 + #ifndef ASM_X86__SOCKET_H 2 + #define ASM_X86__SOCKET_H 3 3 4 4 #include <asm/sockios.h> 5 5 ··· 54 54 55 55 #define SO_MARK 36 56 56 57 - #endif /* _ASM_SOCKET_H */ 57 + #endif /* ASM_X86__SOCKET_H */
+3 -3
include/asm-x86/sockios.h
··· 1 - #ifndef _ASM_X86_SOCKIOS_H 2 - #define _ASM_X86_SOCKIOS_H 1 + #ifndef ASM_X86__SOCKIOS_H 2 + #define ASM_X86__SOCKIOS_H 3 3 4 4 /* Socket-level I/O control calls. */ 5 5 #define FIOSETOWN 0x8901 ··· 10 10 #define SIOCGSTAMP 0x8906 /* Get stamp (timeval) */ 11 11 #define SIOCGSTAMPNS 0x8907 /* Get stamp (timespec) */ 12 12 13 - #endif /* _ASM_X86_SOCKIOS_H */ 13 + #endif /* ASM_X86__SOCKIOS_H */
+3 -3
include/asm-x86/sparsemem.h
··· 1 - #ifndef _ASM_X86_SPARSEMEM_H 2 - #define _ASM_X86_SPARSEMEM_H 1 + #ifndef ASM_X86__SPARSEMEM_H 2 + #define ASM_X86__SPARSEMEM_H 3 3 4 4 #ifdef CONFIG_SPARSEMEM 5 5 /* ··· 31 31 #endif 32 32 33 33 #endif /* CONFIG_SPARSEMEM */ 34 - #endif 34 + #endif /* ASM_X86__SPARSEMEM_H */
+6 -6
include/asm-x86/spinlock.h
··· 1 - #ifndef _X86_SPINLOCK_H_ 2 - #define _X86_SPINLOCK_H_ 1 + #ifndef ASM_X86__SPINLOCK_H 2 + #define ASM_X86__SPINLOCK_H 3 3 4 4 #include <asm/atomic.h> 5 5 #include <asm/rwlock.h> ··· 97 97 "jne 1f\n\t" 98 98 "movw %w0,%w1\n\t" 99 99 "incb %h1\n\t" 100 - "lock ; cmpxchgw %w1,%2\n\t" 100 + LOCK_PREFIX "cmpxchgw %w1,%2\n\t" 101 101 "1:" 102 102 "sete %b1\n\t" 103 103 "movzbl %b1,%0\n\t" ··· 135 135 int inc = 0x00010000; 136 136 int tmp; 137 137 138 - asm volatile("lock ; xaddl %0, %1\n" 138 + asm volatile(LOCK_PREFIX "xaddl %0, %1\n" 139 139 "movzwl %w0, %2\n\t" 140 140 "shrl $16, %0\n\t" 141 141 "1:\t" ··· 162 162 "cmpl %0,%1\n\t" 163 163 "jne 1f\n\t" 164 164 "addl $0x00010000, %1\n\t" 165 - "lock ; cmpxchgl %1,%2\n\t" 165 + LOCK_PREFIX "cmpxchgl %1,%2\n\t" 166 166 "1:" 167 167 "sete %b1\n\t" 168 168 "movzbl %b1,%0\n\t" ··· 366 366 #define _raw_read_relax(lock) cpu_relax() 367 367 #define _raw_write_relax(lock) cpu_relax() 368 368 369 - #endif 369 + #endif /* ASM_X86__SPINLOCK_H */
+3 -3
include/asm-x86/spinlock_types.h
··· 1 - #ifndef __ASM_SPINLOCK_TYPES_H 2 - #define __ASM_SPINLOCK_TYPES_H 1 + #ifndef ASM_X86__SPINLOCK_TYPES_H 2 + #define ASM_X86__SPINLOCK_TYPES_H 3 3 4 4 #ifndef __LINUX_SPINLOCK_TYPES_H 5 5 # error "please don't include this file directly" ··· 17 17 18 18 #define __RAW_RW_LOCK_UNLOCKED { RW_LOCK_BIAS } 19 19 20 - #endif 20 + #endif /* ASM_X86__SPINLOCK_TYPES_H */
+3 -3
include/asm-x86/srat.h
··· 24 24 * Send feedback to Pat Gaughen <gone@us.ibm.com> 25 25 */ 26 26 27 - #ifndef _ASM_SRAT_H_ 28 - #define _ASM_SRAT_H_ 27 + #ifndef ASM_X86__SRAT_H 28 + #define ASM_X86__SRAT_H 29 29 30 30 #ifdef CONFIG_ACPI_NUMA 31 31 extern int get_memcfg_from_srat(void); ··· 36 36 } 37 37 #endif 38 38 39 - #endif /* _ASM_SRAT_H_ */ 39 + #endif /* ASM_X86__SRAT_H */
+3 -3
include/asm-x86/stacktrace.h
··· 1 - #ifndef _ASM_STACKTRACE_H 2 - #define _ASM_STACKTRACE_H 1 1 + #ifndef ASM_X86__STACKTRACE_H 2 + #define ASM_X86__STACKTRACE_H 3 3 4 4 extern int kstack_depth_to_print; 5 5 ··· 18 18 unsigned long *stack, unsigned long bp, 19 19 const struct stacktrace_ops *ops, void *data); 20 20 21 - #endif 21 + #endif /* ASM_X86__STACKTRACE_H */
+3 -3
include/asm-x86/stat.h
··· 1 - #ifndef _ASM_X86_STAT_H 2 - #define _ASM_X86_STAT_H 1 + #ifndef ASM_X86__STAT_H 2 + #define ASM_X86__STAT_H 3 3 4 4 #define STAT_HAVE_NSEC 1 5 5 ··· 111 111 #endif 112 112 }; 113 113 114 - #endif 114 + #endif /* ASM_X86__STAT_H */
+3 -3
include/asm-x86/statfs.h
··· 1 - #ifndef _ASM_X86_STATFS_H 2 - #define _ASM_X86_STATFS_H 1 + #ifndef ASM_X86__STATFS_H 2 + #define ASM_X86__STATFS_H 3 3 4 4 #ifdef __i386__ 5 5 #include <asm-generic/statfs.h> ··· 60 60 } __attribute__((packed)); 61 61 62 62 #endif /* !__i386__ */ 63 - #endif 63 + #endif /* ASM_X86__STATFS_H */
+3 -3
include/asm-x86/string_32.h
··· 1 - #ifndef _I386_STRING_H_ 2 - #define _I386_STRING_H_ 1 + #ifndef ASM_X86__STRING_32_H 2 + #define ASM_X86__STRING_32_H 3 3 4 4 #ifdef __KERNEL__ 5 5 ··· 323 323 324 324 #endif /* __KERNEL__ */ 325 325 326 - #endif 326 + #endif /* ASM_X86__STRING_32_H */
+3 -3
include/asm-x86/string_64.h
··· 1 - #ifndef _X86_64_STRING_H_ 2 - #define _X86_64_STRING_H_ 1 + #ifndef ASM_X86__STRING_64_H 2 + #define ASM_X86__STRING_64_H 3 3 4 4 #ifdef __KERNEL__ 5 5 ··· 57 57 58 58 #endif /* __KERNEL__ */ 59 59 60 - #endif 60 + #endif /* ASM_X86__STRING_64_H */
+3 -3
include/asm-x86/suspend_32.h
··· 3 3 * Based on code 4 4 * Copyright 2001 Patrick Mochel <mochel@osdl.org> 5 5 */ 6 - #ifndef __ASM_X86_32_SUSPEND_H 7 - #define __ASM_X86_32_SUSPEND_H 6 + #ifndef ASM_X86__SUSPEND_32_H 7 + #define ASM_X86__SUSPEND_32_H 8 8 9 9 #include <asm/desc.h> 10 10 #include <asm/i387.h> ··· 48 48 extern int acpi_save_state_mem(void); 49 49 #endif 50 50 51 - #endif /* __ASM_X86_32_SUSPEND_H */ 51 + #endif /* ASM_X86__SUSPEND_32_H */
+3 -3
include/asm-x86/suspend_64.h
··· 3 3 * Based on code 4 4 * Copyright 2001 Patrick Mochel <mochel@osdl.org> 5 5 */ 6 - #ifndef __ASM_X86_64_SUSPEND_H 7 - #define __ASM_X86_64_SUSPEND_H 6 + #ifndef ASM_X86__SUSPEND_64_H 7 + #define ASM_X86__SUSPEND_64_H 8 8 9 9 #include <asm/desc.h> 10 10 #include <asm/i387.h> ··· 49 49 extern char core_restore_code; 50 50 extern char restore_registers; 51 51 52 - #endif /* __ASM_X86_64_SUSPEND_H */ 52 + #endif /* ASM_X86__SUSPEND_64_H */
+3 -3
include/asm-x86/swiotlb.h
··· 1 - #ifndef _ASM_SWIOTLB_H 2 - #define _ASM_SWIOTLB_H 1 1 + #ifndef ASM_X86__SWIOTLB_H 2 + #define ASM_X86__SWIOTLB_H 3 3 4 4 #include <asm/dma-mapping.h> 5 5 ··· 55 55 56 56 static inline void dma_mark_clean(void *addr, size_t size) {} 57 57 58 - #endif /* _ASM_SWIOTLB_H */ 58 + #endif /* ASM_X86__SWIOTLB_H */
+3 -3
include/asm-x86/sync_bitops.h
··· 1 - #ifndef _I386_SYNC_BITOPS_H 2 - #define _I386_SYNC_BITOPS_H 1 + #ifndef ASM_X86__SYNC_BITOPS_H 2 + #define ASM_X86__SYNC_BITOPS_H 3 3 4 4 /* 5 5 * Copyright 1992, Linus Torvalds. ··· 127 127 128 128 #undef ADDR 129 129 130 - #endif /* _I386_SYNC_BITOPS_H */ 130 + #endif /* ASM_X86__SYNC_BITOPS_H */
+211
include/asm-x86/syscall.h
··· 1 + /* 2 + * Access to user system call parameters and results 3 + * 4 + * Copyright (C) 2008 Red Hat, Inc. All rights reserved. 5 + * 6 + * This copyrighted material is made available to anyone wishing to use, 7 + * modify, copy, or redistribute it subject to the terms and conditions 8 + * of the GNU General Public License v.2. 9 + * 10 + * See asm-generic/syscall.h for descriptions of what we must do here. 11 + */ 12 + 13 + #ifndef _ASM_SYSCALL_H 14 + #define _ASM_SYSCALL_H 1 15 + 16 + #include <linux/sched.h> 17 + #include <linux/err.h> 18 + 19 + static inline long syscall_get_nr(struct task_struct *task, 20 + struct pt_regs *regs) 21 + { 22 + /* 23 + * We always sign-extend a -1 value being set here, 24 + * so this is always either -1L or a syscall number. 25 + */ 26 + return regs->orig_ax; 27 + } 28 + 29 + static inline void syscall_rollback(struct task_struct *task, 30 + struct pt_regs *regs) 31 + { 32 + regs->ax = regs->orig_ax; 33 + } 34 + 35 + static inline long syscall_get_error(struct task_struct *task, 36 + struct pt_regs *regs) 37 + { 38 + unsigned long error = regs->ax; 39 + #ifdef CONFIG_IA32_EMULATION 40 + /* 41 + * TS_COMPAT is set for 32-bit syscall entries and then 42 + * remains set until we return to user mode. 43 + */ 44 + if (task_thread_info(task)->status & TS_COMPAT) 45 + /* 46 + * Sign-extend the value so (int)-EFOO becomes (long)-EFOO 47 + * and will match correctly in comparisons. 48 + */ 49 + error = (long) (int) error; 50 + #endif 51 + return IS_ERR_VALUE(error) ? 
error : 0; 52 + } 53 + 54 + static inline long syscall_get_return_value(struct task_struct *task, 55 + struct pt_regs *regs) 56 + { 57 + return regs->ax; 58 + } 59 + 60 + static inline void syscall_set_return_value(struct task_struct *task, 61 + struct pt_regs *regs, 62 + int error, long val) 63 + { 64 + regs->ax = (long) error ?: val; 65 + } 66 + 67 + #ifdef CONFIG_X86_32 68 + 69 + static inline void syscall_get_arguments(struct task_struct *task, 70 + struct pt_regs *regs, 71 + unsigned int i, unsigned int n, 72 + unsigned long *args) 73 + { 74 + BUG_ON(i + n > 6); 75 + memcpy(args, &regs->bx + i, n * sizeof(args[0])); 76 + } 77 + 78 + static inline void syscall_set_arguments(struct task_struct *task, 79 + struct pt_regs *regs, 80 + unsigned int i, unsigned int n, 81 + const unsigned long *args) 82 + { 83 + BUG_ON(i + n > 6); 84 + memcpy(&regs->bx + i, args, n * sizeof(args[0])); 85 + } 86 + 87 + #else /* CONFIG_X86_64 */ 88 + 89 + static inline void syscall_get_arguments(struct task_struct *task, 90 + struct pt_regs *regs, 91 + unsigned int i, unsigned int n, 92 + unsigned long *args) 93 + { 94 + # ifdef CONFIG_IA32_EMULATION 95 + if (task_thread_info(task)->status & TS_COMPAT) 96 + switch (i + n) { 97 + case 6: 98 + if (!n--) break; 99 + *args++ = regs->bp; 100 + case 5: 101 + if (!n--) break; 102 + *args++ = regs->di; 103 + case 4: 104 + if (!n--) break; 105 + *args++ = regs->si; 106 + case 3: 107 + if (!n--) break; 108 + *args++ = regs->dx; 109 + case 2: 110 + if (!n--) break; 111 + *args++ = regs->cx; 112 + case 1: 113 + if (!n--) break; 114 + *args++ = regs->bx; 115 + case 0: 116 + if (!n--) break; 117 + default: 118 + BUG(); 119 + break; 120 + } 121 + else 122 + # endif 123 + switch (i + n) { 124 + case 6: 125 + if (!n--) break; 126 + *args++ = regs->r9; 127 + case 5: 128 + if (!n--) break; 129 + *args++ = regs->r8; 130 + case 4: 131 + if (!n--) break; 132 + *args++ = regs->r10; 133 + case 3: 134 + if (!n--) break; 135 + *args++ = regs->dx; 136 + case 2: 
137 + if (!n--) break; 138 + *args++ = regs->si; 139 + case 1: 140 + if (!n--) break; 141 + *args++ = regs->di; 142 + case 0: 143 + if (!n--) break; 144 + default: 145 + BUG(); 146 + break; 147 + } 148 + } 149 + 150 + static inline void syscall_set_arguments(struct task_struct *task, 151 + struct pt_regs *regs, 152 + unsigned int i, unsigned int n, 153 + const unsigned long *args) 154 + { 155 + # ifdef CONFIG_IA32_EMULATION 156 + if (task_thread_info(task)->status & TS_COMPAT) 157 + switch (i + n) { 158 + case 6: 159 + if (!n--) break; 160 + regs->bp = *args++; 161 + case 5: 162 + if (!n--) break; 163 + regs->di = *args++; 164 + case 4: 165 + if (!n--) break; 166 + regs->si = *args++; 167 + case 3: 168 + if (!n--) break; 169 + regs->dx = *args++; 170 + case 2: 171 + if (!n--) break; 172 + regs->cx = *args++; 173 + case 1: 174 + if (!n--) break; 175 + regs->bx = *args++; 176 + case 0: 177 + if (!n--) break; 178 + default: 179 + BUG(); 180 + } 181 + else 182 + # endif 183 + switch (i + n) { 184 + case 6: 185 + if (!n--) break; 186 + regs->r9 = *args++; 187 + case 5: 188 + if (!n--) break; 189 + regs->r8 = *args++; 190 + case 4: 191 + if (!n--) break; 192 + regs->r10 = *args++; 193 + case 3: 194 + if (!n--) break; 195 + regs->dx = *args++; 196 + case 2: 197 + if (!n--) break; 198 + regs->si = *args++; 199 + case 1: 200 + if (!n--) break; 201 + regs->di = *args++; 202 + case 0: 203 + if (!n--) break; 204 + default: 205 + BUG(); 206 + } 207 + } 208 + 209 + #endif /* CONFIG_X86_32 */ 210 + 211 + #endif /* _ASM_SYSCALL_H */
+93
include/asm-x86/syscalls.h
···
/*
 * syscalls.h - Linux syscall interfaces (arch-specific)
 *
 * Copyright (c) 2008 Jaswinder Singh
 *
 * This file is released under the GPLv2.
 * See the file COPYING for more details.
 */

#ifndef _ASM_X86_SYSCALLS_H
#define _ASM_X86_SYSCALLS_H

#include <linux/compiler.h>
#include <linux/linkage.h>
#include <linux/types.h>
#include <linux/signal.h>

/* Common in X86_32 and X86_64 */
/* kernel/ioport.c */
asmlinkage long sys_ioperm(unsigned long, unsigned long, int);

/* X86_32 only */
#ifdef CONFIG_X86_32
/* kernel/process_32.c */
asmlinkage int sys_fork(struct pt_regs);
asmlinkage int sys_clone(struct pt_regs);
asmlinkage int sys_vfork(struct pt_regs);
asmlinkage int sys_execve(struct pt_regs);

/* kernel/signal_32.c */
asmlinkage int sys_sigsuspend(int, int, old_sigset_t);
asmlinkage int sys_sigaction(int, const struct old_sigaction __user *,
			     struct old_sigaction __user *);
asmlinkage int sys_sigaltstack(unsigned long);
asmlinkage unsigned long sys_sigreturn(unsigned long);
asmlinkage int sys_rt_sigreturn(unsigned long);

/* kernel/ioport.c */
asmlinkage long sys_iopl(unsigned long);

/* kernel/ldt.c */
asmlinkage int sys_modify_ldt(int, void __user *, unsigned long);

/* kernel/sys_i386_32.c */
asmlinkage long sys_mmap2(unsigned long, unsigned long, unsigned long,
			  unsigned long, unsigned long, unsigned long);
struct mmap_arg_struct;
asmlinkage int old_mmap(struct mmap_arg_struct __user *);
struct sel_arg_struct;
asmlinkage int old_select(struct sel_arg_struct __user *);
asmlinkage int sys_ipc(uint, int, int, int, void __user *, long);
struct old_utsname;
asmlinkage int sys_uname(struct old_utsname __user *);
struct oldold_utsname;
asmlinkage int sys_olduname(struct oldold_utsname __user *);

/* kernel/tls.c */
asmlinkage int sys_set_thread_area(struct user_desc __user *);
asmlinkage int sys_get_thread_area(struct user_desc __user *);

/* kernel/vm86_32.c */
asmlinkage int sys_vm86old(struct pt_regs);
asmlinkage int sys_vm86(struct pt_regs);

#else	/* CONFIG_X86_32 */

/* X86_64 only */
/* kernel/process_64.c */
asmlinkage long sys_fork(struct pt_regs *);
asmlinkage long sys_clone(unsigned long, unsigned long,
			  void __user *, void __user *,
			  struct pt_regs *);
asmlinkage long sys_vfork(struct pt_regs *);
asmlinkage long sys_execve(char __user *, char __user * __user *,
			   char __user * __user *,
			   struct pt_regs *);

/* kernel/ioport.c */
asmlinkage long sys_iopl(unsigned int, struct pt_regs *);

/* kernel/signal_64.c */
asmlinkage long sys_sigaltstack(const stack_t __user *, stack_t __user *,
				struct pt_regs *);
asmlinkage long sys_rt_sigreturn(struct pt_regs *);

/* kernel/sys_x86_64.c */
asmlinkage long sys_mmap(unsigned long, unsigned long, unsigned long,
			 unsigned long, unsigned long, unsigned long);
struct new_utsname;
asmlinkage long sys_uname(struct new_utsname __user *);

#endif	/* CONFIG_X86_32 */
#endif	/* _ASM_X86_SYSCALLS_H */
+3 -3
include/asm-x86/system.h
···
-#ifndef _ASM_X86_SYSTEM_H_
-#define _ASM_X86_SYSTEM_H_
+#ifndef ASM_X86__SYSTEM_H
+#define ASM_X86__SYSTEM_H
 
 #include <asm/asm.h>
 #include <asm/segment.h>
···
 	alternative(ASM_NOP3, "lfence", X86_FEATURE_LFENCE_RDTSC);
 }
 
-#endif
+#endif /* ASM_X86__SYSTEM_H */
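The bulk of this merge renames header guards to a uniform scheme derived mechanically from the header's path under include/asm-x86/: uppercase everything, map `/` to `__`, and map `.` or `-` to `_`. A small sketch of that mapping (the helper name and function are ours, not from the kernel tree):

```c
#include <assert.h>
#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Derive an ASM_X86__-style guard from a path relative to
 * include/asm-x86/, e.g. "uv/bios.h" -> "ASM_X86__UV__BIOS_H". */
static void guard_name(const char *rel, char *out, size_t len)
{
	size_t n = snprintf(out, len, "ASM_X86__");

	for (const char *p = rel; *p && n + 2 < len; p++) {
		if (*p == '/') {                /* path separator doubles */
			out[n++] = '_';
			out[n++] = '_';
		} else if (*p == '.' || *p == '-') {
			out[n++] = '_';         /* punctuation flattens */
		} else {
			out[n++] = toupper((unsigned char)*p);
		}
	}
	out[n] = '\0';
}
```

This is why, for example, `include/asm-x86/xen/page.h` below ends up guarded by `ASM_X86__XEN__PAGE_H` and `system_64.h` by `ASM_X86__SYSTEM_64_H`.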
+3 -3
include/asm-x86/system_64.h
···
-#ifndef __ASM_SYSTEM_H
-#define __ASM_SYSTEM_H
+#ifndef ASM_X86__SYSTEM_64_H
+#define ASM_X86__SYSTEM_64_H
 
 #include <asm/segment.h>
 #include <asm/cmpxchg.h>
···
 
 #include <linux/irqflags.h>
 
-#endif
+#endif /* ASM_X86__SYSTEM_64_H */
+3 -3
include/asm-x86/tce.h
···
  * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
  */
 
-#ifndef _ASM_X86_64_TCE_H
-#define _ASM_X86_64_TCE_H
+#ifndef ASM_X86__TCE_H
+#define ASM_X86__TCE_H
 
 extern unsigned int specified_table_size;
 struct iommu_table;
···
 extern void __init free_tce_table(void *tbl);
 extern int __init build_tce_table(struct pci_dev *dev, void __iomem *bbar);
 
-#endif /* _ASM_X86_64_TCE_H */
+#endif /* ASM_X86__TCE_H */
+3 -3
include/asm-x86/termbits.h
···
-#ifndef _ASM_X86_TERMBITS_H
-#define _ASM_X86_TERMBITS_H
+#ifndef ASM_X86__TERMBITS_H
+#define ASM_X86__TERMBITS_H
 
 #include <linux/posix_types.h>
 
···
 #define TCSADRAIN	1
 #define TCSAFLUSH	2
 
-#endif /* _ASM_X86_TERMBITS_H */
+#endif /* ASM_X86__TERMBITS_H */
+3 -3
include/asm-x86/termios.h
···
-#ifndef _ASM_X86_TERMIOS_H
-#define _ASM_X86_TERMIOS_H
+#ifndef ASM_X86__TERMIOS_H
+#define ASM_X86__TERMIOS_H
 
 #include <asm/termbits.h>
 #include <asm/ioctls.h>
···
 
 #endif	/* __KERNEL__ */
 
-#endif /* _ASM_X86_TERMIOS_H */
+#endif /* ASM_X86__TERMIOS_H */
+3 -3
include/asm-x86/therm_throt.h
···
-#ifndef __ASM_I386_THERM_THROT_H__
-#define __ASM_I386_THERM_THROT_H__
+#ifndef ASM_X86__THERM_THROT_H
+#define ASM_X86__THERM_THROT_H
 
 #include <asm/atomic.h>
 
 extern atomic_t therm_throt_en;
 int therm_throt_process(int curr);
 
-#endif /* __ASM_I386_THERM_THROT_H__ */
+#endif /* ASM_X86__THERM_THROT_H */
+6 -4
include/asm-x86/thread_info.h
···
  * - Incorporating suggestions made by Linus Torvalds and Dave Miller
  */
 
-#ifndef _ASM_X86_THREAD_INFO_H
-#define _ASM_X86_THREAD_INFO_H
+#ifndef ASM_X86__THREAD_INFO_H
+#define ASM_X86__THREAD_INFO_H
 
 #include <linux/compiler.h>
 #include <asm/page.h>
···
  * Warning: layout of LSW is hardcoded in entry.S
  */
 #define TIF_SYSCALL_TRACE	0	/* syscall trace active */
+#define TIF_NOTIFY_RESUME	1	/* callback before returning to user */
 #define TIF_SIGPENDING		2	/* signal pending */
 #define TIF_NEED_RESCHED	3	/* rescheduling necessary */
 #define TIF_SINGLESTEP		4	/* reenable singlestep on user return*/
···
 #define TIF_BTS_TRACE_TS	27	/* record scheduling event timestamps */
 
 #define _TIF_SYSCALL_TRACE	(1 << TIF_SYSCALL_TRACE)
+#define _TIF_NOTIFY_RESUME	(1 << TIF_NOTIFY_RESUME)
 #define _TIF_SIGPENDING		(1 << TIF_SIGPENDING)
 #define _TIF_SINGLESTEP		(1 << TIF_SINGLESTEP)
 #define _TIF_NEED_RESCHED	(1 << TIF_NEED_RESCHED)
···
 
 /* Only used for 64 bit */
 #define _TIF_DO_NOTIFY_MASK \
-	(_TIF_SIGPENDING|_TIF_MCE_NOTIFY)
+	(_TIF_SIGPENDING|_TIF_MCE_NOTIFY|_TIF_NOTIFY_RESUME)
 
 /* flags to check in __switch_to() */
 #define _TIF_WORK_CTXSW \
···
 extern int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src);
 #define arch_task_cache_init arch_task_cache_init
 #endif
-#endif /* _ASM_X86_THREAD_INFO_H */
+#endif /* ASM_X86__THREAD_INFO_H */
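TIF_NOTIFY_RESUME joins the 64-bit do-notify mask, so the return-to-user path can test one combined bitmask instead of checking flags individually. A hedged sketch of that test with stand-in values (TIF_SYSCALL_TRACE, TIF_NOTIFY_RESUME, and TIF_SIGPENDING use the bit numbers from the hunk above; the TIF_MCE_NOTIFY bit number and the helper name are our assumptions):

```c
#include <assert.h>

/* Bit numbers: 0-2 as defined in thread_info.h above; the
 * TIF_MCE_NOTIFY value is illustrative only. */
#define TIF_SYSCALL_TRACE	0
#define TIF_NOTIFY_RESUME	1
#define TIF_SIGPENDING		2
#define TIF_MCE_NOTIFY		10	/* assumption, not from the hunk */

#define _TIF_SYSCALL_TRACE	(1UL << TIF_SYSCALL_TRACE)
#define _TIF_NOTIFY_RESUME	(1UL << TIF_NOTIFY_RESUME)
#define _TIF_SIGPENDING		(1UL << TIF_SIGPENDING)
#define _TIF_MCE_NOTIFY		(1UL << TIF_MCE_NOTIFY)

/* Same shape as the patched _TIF_DO_NOTIFY_MASK. */
#define _TIF_DO_NOTIFY_MASK \
	(_TIF_SIGPENDING | _TIF_MCE_NOTIFY | _TIF_NOTIFY_RESUME)

/* One AND covers every "work pending before return to user" flag. */
static int needs_notify(unsigned long thread_flags)
{
	return (thread_flags & _TIF_DO_NOTIFY_MASK) != 0;
}
```

With the mask extended, setting TIF_NOTIFY_RESUME is enough to route a task through the notify path without touching the assembly that tests the mask.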
+5 -3
include/asm-x86/time.h
···
-#ifndef _ASMX86_TIME_H
-#define _ASMX86_TIME_H
+#ifndef ASM_X86__TIME_H
+#define ASM_X86__TIME_H
 
 extern void hpet_time_init(void);
 
···
 
 #endif
 
+extern void time_init(void);
+
 #ifdef CONFIG_PARAVIRT
 #include <asm/paravirt.h>
 #else /* !CONFIG_PARAVIRT */
···
 
 extern unsigned long __init calibrate_cpu(void);
 
-#endif
+#endif /* ASM_X86__TIME_H */
+7 -4
include/asm-x86/timer.h
···
-#ifndef _ASMi386_TIMER_H
-#define _ASMi386_TIMER_H
+#ifndef ASM_X86__TIMER_H
+#define ASM_X86__TIMER_H
 #include <linux/init.h>
 #include <linux/pm.h>
 #include <linux/percpu.h>
···
 unsigned long long native_sched_clock(void);
 unsigned long native_calibrate_tsc(void);
 
+#ifdef CONFIG_X86_32
 extern int timer_ack;
-extern int no_timer_check;
 extern int recalibrate_cpu_khz(void);
+#endif /* CONFIG_X86_32 */
+
+extern int no_timer_check;
 
 #ifndef CONFIG_PARAVIRT
 #define calibrate_tsc() native_calibrate_tsc()
···
 	return ns;
 }
 
-#endif
+#endif /* ASM_X86__TIMER_H */
+3 -3
include/asm-x86/timex.h
···
 /* x86 architecture timex specifications */
-#ifndef _ASM_X86_TIMEX_H
-#define _ASM_X86_TIMEX_H
+#ifndef ASM_X86__TIMEX_H
+#define ASM_X86__TIMEX_H
 
 #include <asm/processor.h>
 #include <asm/tsc.h>
···
 
 #define ARCH_HAS_READ_CURRENT_TIMER
 
-#endif
+#endif /* ASM_X86__TIMEX_H */
+3 -3
include/asm-x86/tlb.h
···
-#ifndef _ASM_X86_TLB_H
-#define _ASM_X86_TLB_H
+#ifndef ASM_X86__TLB_H
+#define ASM_X86__TLB_H
 
 #define tlb_start_vma(tlb, vma) do { } while (0)
 #define tlb_end_vma(tlb, vma) do { } while (0)
···
 
 #include <asm-generic/tlb.h>
 
-#endif
+#endif /* ASM_X86__TLB_H */
+3 -3
include/asm-x86/tlbflush.h
···
-#ifndef _ASM_X86_TLBFLUSH_H
-#define _ASM_X86_TLBFLUSH_H
+#ifndef ASM_X86__TLBFLUSH_H
+#define ASM_X86__TLBFLUSH_H
 
 #include <linux/mm.h>
 #include <linux/sched.h>
···
 	flush_tlb_all();
 }
 
-#endif /* _ASM_X86_TLBFLUSH_H */
+#endif /* ASM_X86__TLBFLUSH_H */
+3 -3
include/asm-x86/topology.h
···
  *
  * Send feedback to <colpatch@us.ibm.com>
  */
-#ifndef _ASM_X86_TOPOLOGY_H
-#define _ASM_X86_TOPOLOGY_H
+#ifndef ASM_X86__TOPOLOGY_H
+#define ASM_X86__TOPOLOGY_H
 
 #ifdef CONFIG_X86_32
 # ifdef CONFIG_X86_HT
···
 }
 #endif
 
-#endif /* _ASM_X86_TOPOLOGY_H */
+#endif /* ASM_X86__TOPOLOGY_H */
+3 -3
include/asm-x86/trampoline.h
···
-#ifndef __TRAMPOLINE_HEADER
-#define __TRAMPOLINE_HEADER
+#ifndef ASM_X86__TRAMPOLINE_H
+#define ASM_X86__TRAMPOLINE_H
 
 #ifndef __ASSEMBLY__
 
···
 
 #endif /* __ASSEMBLY__ */
 
-#endif /* __TRAMPOLINE_HEADER */
+#endif /* ASM_X86__TRAMPOLINE_H */
+7 -3
include/asm-x86/traps.h
···
-#ifndef _ASM_X86_TRAPS_H
-#define _ASM_X86_TRAPS_H
+#ifndef ASM_X86__TRAPS_H
+#define ASM_X86__TRAPS_H
 
 /* Common in X86_32 and X86_64 */
 asmlinkage void divide_error(void);
···
 unsigned long patch_espfix_desc(unsigned long, unsigned long);
 asmlinkage void math_emulate(long);
 
+void do_page_fault(struct pt_regs *regs, unsigned long error_code);
+
 #else /* CONFIG_X86_32 */
 
 asmlinkage void double_fault(void);
···
 asmlinkage void do_simd_coprocessor_error(struct pt_regs *);
 asmlinkage void do_spurious_interrupt_bug(struct pt_regs *);
 
+asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long error_code);
+
 #endif /* CONFIG_X86_32 */
-#endif /* _ASM_X86_TRAPS_H */
+#endif /* ASM_X86__TRAPS_H */
+3 -3
include/asm-x86/tsc.h
···
 /*
  * x86 TSC related functions
  */
-#ifndef _ASM_X86_TSC_H
-#define _ASM_X86_TSC_H
+#ifndef ASM_X86__TSC_H
+#define ASM_X86__TSC_H
 
 #include <asm/processor.h>
 
···
 
 extern int notsc_setup(char *);
 
-#endif
+#endif /* ASM_X86__TSC_H */
+3 -3
include/asm-x86/types.h
···
-#ifndef _ASM_X86_TYPES_H
-#define _ASM_X86_TYPES_H
+#ifndef ASM_X86__TYPES_H
+#define ASM_X86__TYPES_H
 
 #include <asm-generic/int-ll64.h>
 
···
 #endif /* __ASSEMBLY__ */
 #endif /* __KERNEL__ */
 
-#endif
+#endif /* ASM_X86__TYPES_H */
+3 -3
include/asm-x86/uaccess.h
···
-#ifndef _ASM_UACCES_H_
-#define _ASM_UACCES_H_
+#ifndef ASM_X86__UACCESS_H
+#define ASM_X86__UACCESS_H
 /*
  * User space memory access functions
  */
···
 # include "uaccess_64.h"
 #endif
 
-#endif
+#endif /* ASM_X86__UACCESS_H */
 
+3 -3
include/asm-x86/uaccess_32.h
···
-#ifndef __i386_UACCESS_H
-#define __i386_UACCESS_H
+#ifndef ASM_X86__UACCESS_32_H
+#define ASM_X86__UACCESS_32_H
 
 /*
  * User space memory access functions
···
 unsigned long __must_check clear_user(void __user *mem, unsigned long len);
 unsigned long __must_check __clear_user(void __user *mem, unsigned long len);
 
-#endif /* __i386_UACCESS_H */
+#endif /* ASM_X86__UACCESS_32_H */
+3 -3
include/asm-x86/uaccess_64.h
···
-#ifndef __X86_64_UACCESS_H
-#define __X86_64_UACCESS_H
+#ifndef ASM_X86__UACCESS_64_H
+#define ASM_X86__UACCESS_64_H
 
 /*
  * User space memory access functions
···
 unsigned long
 copy_user_handle_tail(char *to, char *from, unsigned len, unsigned zerorest);
 
-#endif /* __X86_64_UACCESS_H */
+#endif /* ASM_X86__UACCESS_64_H */
+3 -3
include/asm-x86/ucontext.h
···
-#ifndef _ASM_X86_UCONTEXT_H
-#define _ASM_X86_UCONTEXT_H
+#ifndef ASM_X86__UCONTEXT_H
+#define ASM_X86__UCONTEXT_H
 
 struct ucontext {
 	unsigned long	  uc_flags;
···
 	sigset_t	  uc_sigmask;	/* mask last for extensibility */
 };
 
-#endif /* _ASM_X86_UCONTEXT_H */
+#endif /* ASM_X86__UCONTEXT_H */
+3 -3
include/asm-x86/unaligned.h
···
-#ifndef _ASM_X86_UNALIGNED_H
-#define _ASM_X86_UNALIGNED_H
+#ifndef ASM_X86__UNALIGNED_H
+#define ASM_X86__UNALIGNED_H
 
 /*
  * The x86 can do unaligned accesses itself.
···
 #define get_unaligned __get_unaligned_le
 #define put_unaligned __put_unaligned_le
 
-#endif /* _ASM_X86_UNALIGNED_H */
+#endif /* ASM_X86__UNALIGNED_H */
+3 -3
include/asm-x86/unistd_32.h
···
-#ifndef _ASM_I386_UNISTD_H_
-#define _ASM_I386_UNISTD_H_
+#ifndef ASM_X86__UNISTD_32_H
+#define ASM_X86__UNISTD_32_H
 
 /*
  * This file contains the system call numbers.
···
 #endif
 
 #endif /* __KERNEL__ */
-#endif /* _ASM_I386_UNISTD_H_ */
+#endif /* ASM_X86__UNISTD_32_H */
+3 -3
include/asm-x86/unistd_64.h
···
-#ifndef _ASM_X86_64_UNISTD_H_
-#define _ASM_X86_64_UNISTD_H_
+#ifndef ASM_X86__UNISTD_64_H
+#define ASM_X86__UNISTD_64_H
 
 #ifndef __SYSCALL
 #define __SYSCALL(a, b)
···
 #define cond_syscall(x) asm(".weak\t" #x "\n\t.set\t" #x ",sys_ni_syscall")
 #endif	/* __KERNEL__ */
 
-#endif /* _ASM_X86_64_UNISTD_H_ */
+#endif /* ASM_X86__UNISTD_64_H */
+3 -3
include/asm-x86/unwind.h
···
-#ifndef _ASM_X86_UNWIND_H
-#define _ASM_X86_UNWIND_H
+#ifndef ASM_X86__UNWIND_H
+#define ASM_X86__UNWIND_H
 
 #define UNW_PC(frame) ((void)(frame), 0UL)
 #define UNW_SP(frame) ((void)(frame), 0UL)
···
 	return 0;
 }
 
-#endif /* _ASM_X86_UNWIND_H */
+#endif /* ASM_X86__UNWIND_H */
+3 -3
include/asm-x86/user32.h
···
-#ifndef USER32_H
-#define USER32_H
+#ifndef ASM_X86__USER32_H
+#define ASM_X86__USER32_H
 
 /* IA32 compatible user structures for ptrace.
  * These should be used for 32bit coredumps too. */
···
 };
 
 
-#endif
+#endif /* ASM_X86__USER32_H */
+3 -3
include/asm-x86/user_32.h
···
-#ifndef _I386_USER_H
-#define _I386_USER_H
+#ifndef ASM_X86__USER_32_H
+#define ASM_X86__USER_32_H
 
 #include <asm/page.h>
 /* Core file format: The core file is written in such a way that gdb
···
 #define HOST_TEXT_START_ADDR (u.start_code)
 #define HOST_STACK_END_ADDR (u.start_stack + u.u_ssize * NBPG)
 
-#endif /* _I386_USER_H */
+#endif /* ASM_X86__USER_32_H */
+3 -3
include/asm-x86/user_64.h
···
-#ifndef _X86_64_USER_H
-#define _X86_64_USER_H
+#ifndef ASM_X86__USER_64_H
+#define ASM_X86__USER_64_H
 
 #include <asm/types.h>
 #include <asm/page.h>
···
 #define HOST_TEXT_START_ADDR (u.start_code)
 #define HOST_STACK_END_ADDR (u.start_stack + u.u_ssize * NBPG)
 
-#endif /* _X86_64_USER_H */
+#endif /* ASM_X86__USER_64_H */
+3 -3
include/asm-x86/uv/bios.h
···
-#ifndef _ASM_X86_BIOS_H
-#define _ASM_X86_BIOS_H
+#ifndef ASM_X86__UV__BIOS_H
+#define ASM_X86__UV__BIOS_H
 
 /*
  * BIOS layer definitions.
···
 			unsigned long *drift_info);
 extern const char *x86_bios_strerror(long status);
 
-#endif /* _ASM_X86_BIOS_H */
+#endif /* ASM_X86__UV__BIOS_H */
+3 -3
include/asm-x86/uv/uv_bau.h
···
  * Copyright (C) 2008 Silicon Graphics, Inc. All rights reserved.
  */
 
-#ifndef __ASM_X86_UV_BAU__
-#define __ASM_X86_UV_BAU__
+#ifndef ASM_X86__UV__UV_BAU_H
+#define ASM_X86__UV__UV_BAU_H
 
 #include <linux/bitmap.h>
 #define BITSPERBYTE 8
···
 extern void uv_bau_message_intr1(void);
 extern void uv_bau_timeout_intr1(void);
 
-#endif /* __ASM_X86_UV_BAU__ */
+#endif /* ASM_X86__UV__UV_BAU_H */
+3 -3
include/asm-x86/uv/uv_hub.h
···
  * Copyright (C) 2007-2008 Silicon Graphics, Inc. All rights reserved.
  */
 
-#ifndef __ASM_X86_UV_HUB_H__
-#define __ASM_X86_UV_HUB_H__
+#ifndef ASM_X86__UV__UV_HUB_H
+#define ASM_X86__UV__UV_HUB_H
 
 #include <linux/numa.h>
 #include <linux/percpu.h>
···
 	return uv_possible_blades;
 }
 
-#endif /* __ASM_X86_UV_HUB__ */
+#endif /* ASM_X86__UV__UV_HUB_H */
 
+3 -3
include/asm-x86/uv/uv_mmrs.h
···
  * Copyright (C) 2007-2008 Silicon Graphics, Inc. All rights reserved.
  */
 
-#ifndef __ASM_X86_UV_MMRS__
-#define __ASM_X86_UV_MMRS__
+#ifndef ASM_X86__UV__UV_MMRS_H
+#define ASM_X86__UV__UV_MMRS_H
 
 #define UV_MMR_ENABLE		(1UL << 63)
 
···
 };
 
 
-#endif /* __ASM_X86_UV_MMRS__ */
+#endif /* ASM_X86__UV__UV_MMRS_H */
+3 -3
include/asm-x86/vdso.h
···
-#ifndef _ASM_X86_VDSO_H
-#define _ASM_X86_VDSO_H
+#ifndef ASM_X86__VDSO_H
+#define ASM_X86__VDSO_H
 
 #ifdef CONFIG_X86_64
 extern const char VDSO64_PRELINK[];
···
 extern const char vdso32_syscall_start, vdso32_syscall_end;
 extern const char vdso32_sysenter_start, vdso32_sysenter_end;
 
-#endif	/* asm-x86/vdso.h */
+#endif /* ASM_X86__VDSO_H */
+3 -3
include/asm-x86/vga.h
···
  * (c) 1998 Martin Mares <mj@ucw.cz>
  */
 
-#ifndef _LINUX_ASM_VGA_H_
-#define _LINUX_ASM_VGA_H_
+#ifndef ASM_X86__VGA_H
+#define ASM_X86__VGA_H
 
 /*
  * On the PC, we can just recalculate addresses and then
···
 #define vga_readb(x) (*(x))
 #define vga_writeb(x, y) (*(y) = (x))
 
-#endif
+#endif /* ASM_X86__VGA_H */
+3 -3
include/asm-x86/vgtod.h
···
-#ifndef _ASM_VGTOD_H
-#define _ASM_VGTOD_H
+#ifndef ASM_X86__VGTOD_H
+#define ASM_X86__VGTOD_H
 
 #include <asm/vsyscall.h>
 #include <linux/clocksource.h>
···
 	__section_vsyscall_gtod_data;
 extern struct vsyscall_gtod_data vsyscall_gtod_data;
 
-#endif
+#endif /* ASM_X86__VGTOD_H */
+3 -3
include/asm-x86/visws/cobalt.h
···
-#ifndef __I386_SGI_COBALT_H
-#define __I386_SGI_COBALT_H
+#ifndef ASM_X86__VISWS__COBALT_H
+#define ASM_X86__VISWS__COBALT_H
 
 #include <asm/fixmap.h>
 
···
 
 extern char visws_board_rev;
 
-#endif /* __I386_SGI_COBALT_H */
+#endif /* ASM_X86__VISWS__COBALT_H */
+3 -3
include/asm-x86/visws/lithium.h
···
-#ifndef __I386_SGI_LITHIUM_H
-#define __I386_SGI_LITHIUM_H
+#ifndef ASM_X86__VISWS__LITHIUM_H
+#define ASM_X86__VISWS__LITHIUM_H
 
 #include <asm/fixmap.h>
 
···
 	return *((volatile unsigned short *)(LI_PCIB_VADDR+reg));
 }
 
-#endif
+#endif /* ASM_X86__VISWS__LITHIUM_H */
 
+3 -3
include/asm-x86/visws/piix4.h
···
-#ifndef __I386_SGI_PIIX_H
-#define __I386_SGI_PIIX_H
+#ifndef ASM_X86__VISWS__PIIX4_H
+#define ASM_X86__VISWS__PIIX4_H
 
 /*
  * PIIX4 as used on SGI Visual Workstations
···
  */
 #define	PIIX_GPI_STPCLK		0x4	// STPCLK signal routed back in
 
-#endif
+#endif /* ASM_X86__VISWS__PIIX4_H */
+3 -3
include/asm-x86/vm86.h
···
-#ifndef _LINUX_VM86_H
-#define _LINUX_VM86_H
+#ifndef ASM_X86__VM86_H
+#define ASM_X86__VM86_H
 
 /*
  * I'm guessing at the VIF/VIP flag usage, but hope that this is how
···
 
 #endif /* __KERNEL__ */
 
-#endif
+#endif /* ASM_X86__VM86_H */
+3 -3
include/asm-x86/vmi_time.h
···
  *
  */
 
-#ifndef __VMI_TIME_H
-#define __VMI_TIME_H
+#ifndef ASM_X86__VMI_TIME_H
+#define ASM_X86__VMI_TIME_H
 
 /*
  * Raw VMI call indices for timer functions
···
 
 #define CONFIG_VMI_ALARM_HZ	100
 
-#endif
+#endif /* ASM_X86__VMI_TIME_H */
+3 -3
include/asm-x86/vsyscall.h
···
-#ifndef _ASM_X86_64_VSYSCALL_H_
-#define _ASM_X86_64_VSYSCALL_H_
+#ifndef ASM_X86__VSYSCALL_H
+#define ASM_X86__VSYSCALL_H
 
 enum vsyscall_num {
 	__NR_vgettimeofday,
···
 
 #endif /* __KERNEL__ */
 
-#endif /* _ASM_X86_64_VSYSCALL_H_ */
+#endif /* ASM_X86__VSYSCALL_H */
+3 -3
include/asm-x86/xen/events.h
···
-#ifndef __XEN_EVENTS_H
-#define __XEN_EVENTS_H
+#ifndef ASM_X86__XEN__EVENTS_H
+#define ASM_X86__XEN__EVENTS_H
 
 enum ipi_vector {
 	XEN_RESCHEDULE_VECTOR,
···
 	do_IRQ(regs);
 }
 
-#endif /* __XEN_EVENTS_H */
+#endif /* ASM_X86__XEN__EVENTS_H */
+3 -3
include/asm-x86/xen/grant_table.h
···
-#ifndef __XEN_GRANT_TABLE_H
-#define __XEN_GRANT_TABLE_H
+#ifndef ASM_X86__XEN__GRANT_TABLE_H
+#define ASM_X86__XEN__GRANT_TABLE_H
 
 #define xen_alloc_vm_area(size)	alloc_vm_area(size)
 #define xen_free_vm_area(area)	free_vm_area(area)
 
-#endif /* __XEN_GRANT_TABLE_H */
+#endif /* ASM_X86__XEN__GRANT_TABLE_H */
+3 -3
include/asm-x86/xen/hypercall.h
···
  * IN THE SOFTWARE.
  */
 
-#ifndef __HYPERCALL_H__
-#define __HYPERCALL_H__
+#ifndef ASM_X86__XEN__HYPERCALL_H
+#define ASM_X86__XEN__HYPERCALL_H
 
 #include <linux/errno.h>
 #include <linux/string.h>
···
 	mcl->args[1] = esp;
 }
 
-#endif /* __HYPERCALL_H__ */
+#endif /* ASM_X86__XEN__HYPERCALL_H */
+3 -3
include/asm-x86/xen/hypervisor.h
···
  * IN THE SOFTWARE.
  */
 
-#ifndef __HYPERVISOR_H__
-#define __HYPERVISOR_H__
+#ifndef ASM_X86__XEN__HYPERVISOR_H
+#define ASM_X86__XEN__HYPERVISOR_H
 
 #include <linux/types.h>
 #include <linux/kernel.h>
···
 
 #define is_running_on_xen()	(xen_start_info ? 1 : 0)
 
-#endif /* __HYPERVISOR_H__ */
+#endif /* ASM_X86__XEN__HYPERVISOR_H */
+3 -3
include/asm-x86/xen/interface.h
···
  * Copyright (c) 2004, K A Fraser
  */
 
-#ifndef __ASM_X86_XEN_INTERFACE_H
-#define __ASM_X86_XEN_INTERFACE_H
+#ifndef ASM_X86__XEN__INTERFACE_H
+#define ASM_X86__XEN__INTERFACE_H
 
 #ifdef __XEN__
 #define __DEFINE_GUEST_HANDLE(name, type) \
···
 #define XEN_CPUID XEN_EMULATE_PREFIX "cpuid"
 #endif
 
-#endif /* __ASM_X86_XEN_INTERFACE_H */
+#endif /* ASM_X86__XEN__INTERFACE_H */
+3 -3
include/asm-x86/xen/interface_32.h
···
  * Copyright (c) 2004, K A Fraser
  */
 
-#ifndef __ASM_X86_XEN_INTERFACE_32_H
-#define __ASM_X86_XEN_INTERFACE_32_H
+#ifndef ASM_X86__XEN__INTERFACE_32_H
+#define ASM_X86__XEN__INTERFACE_32_H
 
 
 /*
···
 #define xen_pfn_to_cr3(pfn) (((unsigned)(pfn) << 12) | ((unsigned)(pfn) >> 20))
 #define xen_cr3_to_pfn(cr3) (((unsigned)(cr3) >> 12) | ((unsigned)(cr3) << 20))
 
-#endif /* __ASM_X86_XEN_INTERFACE_32_H */
+#endif /* ASM_X86__XEN__INTERFACE_32_H */
+3 -3
include/asm-x86/xen/interface_64.h
···
-#ifndef __ASM_X86_XEN_INTERFACE_64_H
-#define __ASM_X86_XEN_INTERFACE_64_H
+#ifndef ASM_X86__XEN__INTERFACE_64_H
+#define ASM_X86__XEN__INTERFACE_64_H
 
 /*
  * 64-bit segment selectors
···
 #endif /* !__ASSEMBLY__ */
 
 
-#endif /* __ASM_X86_XEN_INTERFACE_64_H */
+#endif /* ASM_X86__XEN__INTERFACE_64_H */
+3 -3
include/asm-x86/xen/page.h
···
-#ifndef __XEN_PAGE_H
-#define __XEN_PAGE_H
+#ifndef ASM_X86__XEN__PAGE_H
+#define ASM_X86__XEN__PAGE_H
 
 #include <linux/pfn.h>
 
···
 void make_lowmem_page_readonly(void *vaddr);
 void make_lowmem_page_readwrite(void *vaddr);
 
-#endif /* __XEN_PAGE_H */
+#endif /* ASM_X86__XEN__PAGE_H */