Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/trivial

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/trivial: (47 commits)
doc: CONFIG_UNEVICTABLE_LRU doesn't exist anymore
Update cpuset info & webiste for cgroups
dcdbas: force SMI to happen when expected
arch/arm/Kconfig: remove one to many l's in the word.
asm-generic/user.h: Fix spelling in comment
drm: fix printk typo 'sracth'
Remove one to many n's in a word
Documentation/filesystems/romfs.txt: fixing link to genromfs
drivers:scsi Change printk typo initate -> initiate
serial, pch uart: Remove duplicate inclusion of linux/pci.h header
fs/eventpoll.c: fix spelling
mm: Fix out-of-date comments which refers non-existent functions
drm: Fix printk typo 'failled'
coh901318.c: Change initate to initiate.
mbox-db5500.c Change initate to initiate.
edac: correct i82975x error-info reported
edac: correct i82975x mci initialisation
edac: correct commented info
fs: update comments to point correct document
target: remove duplicate include of target/target_core_device.h from drivers/target/target_core_hba.c
...

Trivial conflict in fs/eventpoll.c (spelling vs addition)

+228 -214
+9 -8
Documentation/cgroups/cpusets.txt

··· 693
    - via the C library libcgroup.
      (http://sourceforge.net/projects/libcg/)
    - via the python application cset.
-     (http://developer.novell.com/wiki/index.php/Cpuset)
+     (http://code.google.com/p/cpuset/)

  The sched_setaffinity calls can also be done at the shell prompt using
  SGI's runon or Robert Love's taskset. The mbind and set_mempolicy

··· 726
  In this directory you can find several files:
  # ls
- cpuset.cpu_exclusive      cpuset.memory_spread_slab
- cpuset.cpus               cpuset.mems
- cpuset.mem_exclusive      cpuset.sched_load_balance
- cpuset.mem_hardwall       cpuset.sched_relax_domain_level
- cpuset.memory_migrate     notify_on_release
- cpuset.memory_pressure    tasks
- cpuset.memory_spread_page
+ cgroup.clone_children     cpuset.memory_pressure
+ cgroup.event_control      cpuset.memory_spread_page
+ cgroup.procs              cpuset.memory_spread_slab
+ cpuset.cpu_exclusive      cpuset.mems
+ cpuset.cpus               cpuset.sched_load_balance
+ cpuset.mem_exclusive      cpuset.sched_relax_domain_level
+ cpuset.mem_hardwall       notify_on_release
+ cpuset.memory_migrate     tasks

  Reading them will give you information about the state of this cpuset:
  the CPUs and Memory Nodes it can use, the processes that are using
+3 -2
Documentation/cgroups/memory.txt

··· 485
  # echo 0 > memory.use_hierarchy

- NOTE1: Enabling/disabling will fail if the cgroup already has other
- cgroups created below it.
+ NOTE1: Enabling/disabling will fail if either the cgroup already has other
+ cgroups created below it, or if the parent cgroup has use_hierarchy
+ enabled.

  NOTE2: When panic_on_oom is set to "2", the whole system will panic in
  case of an OOM event in any cgroup.
+1 -1
Documentation/device-mapper/dm-crypt.txt

··· 41
  ===============
  LUKS (Linux Unified Key Setup) is now the preferred way to set up disk
  encryption with dm-crypt using the 'cryptsetup' utility, see
- http://clemens.endorphin.org/cryptography
+ http://code.google.com/p/cryptsetup/

  [[
  #!/bin/sh
+1 -2
Documentation/filesystems/romfs.txt

··· 17
  with romfs, it needed 3079 blocks.

  To create such a file system, you'll need a user program named
- genromfs. It is available via anonymous ftp on sunsite.unc.edu and
- its mirrors, in the /pub/Linux/system/recovery/ directory.
+ genromfs. It is available on http://romfs.sourceforge.net/

  As the name suggests, romfs could be also used (space-efficiently) on
  various read-only media, like (E)EPROM disks if someone will have the
+1 -1
Documentation/kbuild/kbuild.txt

··· 146
  INSTALL_MOD_STRIP, if defined, will cause modules to be
  stripped after they are installed. If INSTALL_MOD_STRIP is '1', then
  the default option --strip-debug will be used. Otherwise,
- INSTALL_MOD_STRIP will used as the options to the strip command.
+ INSTALL_MOD_STRIP value will be used as the options to the strip command.

  INSTALL_FW_PATH
  --------------------------------------------------
+2 -1
Documentation/kbuild/makefiles.txt

··· 1325
  If this variable is specified, will cause modules to be stripped
  after they are installed. If INSTALL_MOD_STRIP is '1', then the
  default option --strip-debug will be used. Otherwise,
- INSTALL_MOD_STRIP will used as the option(s) to the strip command.
+ INSTALL_MOD_STRIP value will be used as the option(s) to the strip
+ command.


  === 9 Makefile language
+48 -48
Documentation/kvm/api.txt

Section headings renumbered:

  4.6  -> 4.7   KVM_CREATE_VCPU
  4.7  -> 4.8   KVM_GET_DIRTY_LOG (vm ioctl)
  4.8  -> 4.9   KVM_SET_MEMORY_ALIAS
  4.9  -> 4.10  KVM_RUN
  4.10 -> 4.11  KVM_GET_REGS
  4.11 -> 4.12  KVM_SET_REGS
  4.12 -> 4.13  KVM_GET_SREGS
  4.13 -> 4.14  KVM_SET_SREGS
  4.14 -> 4.15  KVM_TRANSLATE
  4.15 -> 4.16  KVM_INTERRUPT
  4.16 -> 4.17  KVM_DEBUG_GUEST
  4.17 -> 4.18  KVM_GET_MSRS
  4.18 -> 4.19  KVM_SET_MSRS
  4.19 -> 4.20  KVM_SET_CPUID
  4.20 -> 4.21  KVM_SET_SIGNAL_MASK
  4.21 -> 4.22  KVM_GET_FPU
  4.22 -> 4.23  KVM_SET_FPU
  4.23 -> 4.24  KVM_CREATE_IRQCHIP
  4.24 -> 4.25  KVM_IRQ_LINE
  4.25 -> 4.26  KVM_GET_IRQCHIP
  4.26 -> 4.27  KVM_SET_IRQCHIP
  4.27 -> 4.28  KVM_XEN_HVM_CONFIG
  4.27 -> 4.29  KVM_GET_CLOCK      (was a duplicate 4.27)
  4.28 -> 4.30  KVM_SET_CLOCK
  4.29 -> 4.31  KVM_GET_VCPU_EVENTS
  4.30 -> 4.32  KVM_SET_VCPU_EVENTS
  4.32 -> 4.33  KVM_GET_DEBUGREGS
  4.33 -> 4.34  KVM_SET_DEBUGREGS
  4.34 -> 4.35  KVM_SET_USER_MEMORY_REGION
  4.35 -> 4.36  KVM_SET_TSS_ADDR
  4.36 -> 4.37  KVM_ENABLE_CAP
  4.37 -> 4.38  KVM_GET_MP_STATE
  4.38 -> 4.39  KVM_SET_MP_STATE
  4.39 -> 4.40  KVM_SET_IDENTITY_MAP_ADDR
  4.40 -> 4.41  KVM_SET_BOOT_CPU_ID
  4.41 -> 4.42  KVM_GET_XSAVE
  4.42 -> 4.43  KVM_SET_XSAVE
  4.43 -> 4.44  KVM_GET_XCRS
  4.44 -> 4.45  KVM_SET_XCRS
  4.45 -> 4.46  KVM_GET_SUPPORTED_CPUID
  4.46 -> 4.47  KVM_PPC_GET_PVINFO
  4.47 -> 4.48  KVM_ASSIGN_PCI_DEVICE
  4.48 -> 4.49  KVM_DEASSIGN_PCI_DEVICE
  4.49 -> 4.50  KVM_ASSIGN_DEV_IRQ
  4.50 -> 4.51  KVM_DEASSIGN_DEV_IRQ
  4.51 -> 4.52  KVM_SET_GSI_ROUTING
  4.52 -> 4.53  KVM_ASSIGN_SET_MSIX_NR
  4.53 -> 4.54  KVM_ASSIGN_SET_MSIX_ENTRY
+1 -1
Documentation/sysctl/kernel.txt

··· 367
  - console_loglevel: messages with a higher priority than
    this will be printed to the console
- - default_message_level: messages without an explicit priority
+ - default_message_loglevel: messages without an explicit priority
    will be printed with this priority
  - minimum_console_loglevel: minimum (highest) value to which
    console_loglevel can be set
+1 -2
Documentation/vm/unevictable-lru.txt

··· 84
  The PG_unevictable flag is analogous to, and mutually exclusive with, the
  PG_active flag in that it indicates on which LRU list a page resides when
- PG_lru is set. The unevictable list is compile-time configurable based on the
- UNEVICTABLE_LRU Kconfig option.
+ PG_lru is set.

  The Unevictable LRU infrastructure maintains unevictable pages on an additional
  LRU list for a few reasons:
+8 -7
MAINTAINERS

··· 2376
  EDAC-I82975X
  M: Ranganathan Desikan <ravi@jetztechnologies.com>
- M: "Arvind R." <arvind@jetztechnologies.com>
+ M: "Arvind R." <arvino55@gmail.com>
  L: bluesmoke-devel@lists.sourceforge.net (moderated for non-subscribers)
  W: bluesmoke.sourceforge.net
  S: Maintained

··· 3472
  IRDA SUBSYSTEM
  M: Samuel Ortiz <samuel@sortiz.org>
  L: irda-users@lists.sourceforge.net (subscribers-only)
+ L: netdev@vger.kernel.org
  W: http://irda.sourceforge.net/
  S: Maintained
  T: git git://git.kernel.org/pub/scm/linux/kernel/git/sameo/irda-2.6.git

··· 3909
  T: git git://git.kernel.org/pub/scm/linux/kernel/git/chrisw/lsm-2.6.git
  S: Supported

+ LIS3LV02D ACCELEROMETER DRIVER
+ M: Eric Piel <eric.piel@tremplin-utc.net>
+ S: Maintained
+ F: Documentation/hwmon/lis3lv02d
+ F: drivers/hwmon/lis3lv02d.*
+
  LLC (802.2)
  M: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
  S: Maintained
  F: include/linux/llc.h
  F: include/net/llc*
  F: net/llc/
-
- LIS3LV02D ACCELEROMETER DRIVER
- M: Eric Piel <eric.piel@tremplin-utc.net>
- S: Maintained
- F: Documentation/hwmon/lis3lv02d
- F: drivers/hwmon/lis3lv02d.*

  LM73 HARDWARE MONITOR DRIVER
  M: Guillaume Ligneul <guillaume.ligneul@gmail.com>
+1 -1
Makefile

··· 666
  # INSTALL_MOD_STRIP, if defined, will cause modules to be
  # stripped after they are installed. If INSTALL_MOD_STRIP is '1', then
  # the default option --strip-debug will be used. Otherwise,
- # INSTALL_MOD_STRIP will used as the options to the strip command.
+ # INSTALL_MOD_STRIP value will be used as the options to the strip command.

  ifdef INSTALL_MOD_STRIP
  ifeq ($(INSTALL_MOD_STRIP),1)
+1 -1
arch/alpha/include/asm/cacheflush.h

··· 63
  		struct page *page, unsigned long addr, int len);
  #endif

- /* This is used only in do_no_page and do_swap_page. */
+ /* This is used only in __do_fault and do_swap_page. */
  #define flush_icache_page(vma, page) \
  	flush_icache_user_range((vma), (page), 0, 0)
+1 -1
arch/arm/mach-ux500/mbox-db5500.c

··· 498
  #endif

  	dev_info(&(mbox->pdev->dev),
- 		 "Mailbox driver with index %d initated!\n", mbox_id);
+ 		 "Mailbox driver with index %d initiated!\n", mbox_id);

  exit:
  	return mbox;
+1 -1
arch/arm/plat-omap/Kconfig

··· 54
  	  user must write 1 to
  	  /debug/voltage/vdd_<X>/smartreflex/autocomp,
  	  where X is mpu or core for OMAP3.
- 	  Optionallly autocompensation can be enabled in the kernel
+ 	  Optionally autocompensation can be enabled in the kernel
  	  by default during system init via the enable_on_init flag
  	  which an be passed as platform data to the smartreflex driver.
+1 -1
arch/avr32/mm/cache.c

··· 113
  }

  /*
-  * This one is called from do_no_page(), do_swap_page() and install_page().
+  * This one is called from __do_fault() and do_swap_page().
   */
  void flush_icache_page(struct vm_area_struct *vma, struct page *page)
  {
+1 -1
arch/cris/arch-v10/mm/init.c

··· 241
  }

  /* Due to a bug in Etrax100(LX) all versions, receiving DMA buffers
-  * will occationally corrupt certain CPU writes if the DMA buffers
+  * will occasionally corrupt certain CPU writes if the DMA buffers
   * happen to be hot in the cache.
   *
   * As a workaround, we have to flush the relevant parts of the cache
+1 -1
arch/ia64/include/asm/perfmon.h

··· 7
  #define _ASM_IA64_PERFMON_H

  /*
-  * perfmon comamnds supported on all CPU models
+  * perfmon commands supported on all CPU models
   */
  #define PFM_WRITE_PMCS		0x01
  #define PFM_WRITE_PMDS		0x02
+1 -1
arch/x86/kernel/apic/io_apic.c

··· 3983
  static __init int bad_ioapic(unsigned long address)
  {
  	if (nr_ioapics >= MAX_IO_APICS) {
- 		printk(KERN_WARNING "WARING: Max # of I/O APICs (%d) exceeded "
+ 		printk(KERN_WARNING "WARNING: Max # of I/O APICs (%d) exceeded "
  			"(found %d), skipping\n", MAX_IO_APICS, nr_ioapics);
  		return 1;
  	}
+1 -1
arch/x86/oprofile/op_model_p4.c

··· 50
  #endif
  }

- static int inline addr_increment(void)
+ static inline int addr_increment(void)
  {
  #ifdef CONFIG_SMP
  	return smp_num_siblings == 2 ? 2 : 1;
+1 -1
arch/xtensa/configs/s6105_defconfig

··· 598
  # CONFIG_CONTEXT_SWITCH_TRACER is not set
  # CONFIG_BOOT_TRACER is not set
  # CONFIG_TRACE_BRANCH_PROFILING is not set
- # CONFIG_DYNAMIC_PRINTK_DEBUG is not set
+ # CONFIG_DYNAMIC_DEBUG is not set
  # CONFIG_SAMPLES is not set
+1 -1
drivers/atm/firestream.c

··· 1031
  /* We now use the "submit_command" function to submit commands to
     the firestream. There is a define up near the definition of
     that routine that switches this routine between immediate write
-    to the immediate comamnd registers and queuing the commands in
+    to the immediate command registers and queuing the commands in
     the HPTXQ for execution. This last technique might be more
     efficient if we know we're going to submit a whole lot of
     commands in one go, but this driver is not setup to be able to
+1 -1
drivers/block/smart1,2.h

··· 95
  /*
   * This hardware returns interrupt pending at a different place and
   * it does not tell us if the fifo is empty, we will have check
-  * that by getting a 0 back from the comamnd_completed call.
+  * that by getting a 0 back from the command_completed call.
   */
  static unsigned long smart4_intr_pending(ctlr_info_t *h)
  {
+2 -2
drivers/bluetooth/btusb.c

··· 433
  	}
  }

- static void inline __fill_isoc_descriptor(struct urb *urb, int len, int mtu)
+ static inline void __fill_isoc_descriptor(struct urb *urb, int len, int mtu)
  {
  	int i, offset = 0;

··· 780
  	}
  }

- static int inline __set_isoc_interface(struct hci_dev *hdev, int altsetting)
+ static inline int __set_isoc_interface(struct hci_dev *hdev, int altsetting)
  {
  	struct btusb_data *data = hdev->driver_data;
  	struct usb_interface *intf = data->isoc;
+1 -1
drivers/cpuidle/sysfs.c

··· 300
  	.release = cpuidle_state_sysfs_release,
  };

- static void inline cpuidle_free_state_kobj(struct cpuidle_device *device, int i)
+ static inline void cpuidle_free_state_kobj(struct cpuidle_device *device, int i)
  {
  	kobject_put(&device->kobjs[i]->kobj);
  	wait_for_completion(&device->kobjs[i]->kobj_unregister);
+2 -2
drivers/dma/coh901318.c

··· 849
  	/* Must clear TC interrupt before calling
  	 * dma_tc_handle
- 	 * in case tc_handle initate a new dma job
+ 	 * in case tc_handle initiate a new dma job
  	 */
  	__set_bit(i, virtbase + COH901318_TC_INT_CLEAR1);

··· 894
  	}
  	/* Must clear TC interrupt before calling
  	 * dma_tc_handle
- 	 * in case tc_handle initate a new dma job
+ 	 * in case tc_handle initiate a new dma job
  	 */
  	__set_bit(i, virtbase + COH901318_TC_INT_CLEAR2);
+1 -1
drivers/dma/shdma.c

··· 750
  		return;
  	}

- 	/* Find the first not transferred desciptor */
+ 	/* Find the first not transferred descriptor */
  	list_for_each_entry(desc, &sh_chan->ld_queue, node)
  		if (desc->mark == DESC_SUBMITTED) {
  			dev_dbg(sh_chan->dev, "Queue #%d to %d: %u@%x -> %x\n",
+1 -1
drivers/dma/timb_dma.c

··· 629
  			desc_node)
  		list_move(&td_desc->desc_node, &td_chan->free_list);

- 	/* now tear down the runnning */
+ 	/* now tear down the running */
  	__td_finish(td_chan);
  	spin_unlock_bh(&td_chan->lock);
+1 -1
drivers/edac/i7300_edac.c

··· 162
  #define AMBPRESENT_0	0x64
  #define AMBPRESENT_1	0x66

- const static u16 mtr_regs[MAX_SLOTS] = {
+ static const u16 mtr_regs[MAX_SLOTS] = {
  	0x80, 0x84, 0x88, 0x8c,
  	0x82, 0x86, 0x8a, 0x8e
  };
+44 -25
drivers/edac/i82975x_edac.c

··· 160
   * 3:2 Rank 1 architecture
   * 1:0 Rank 0 architecture
   *
-  * 00 => x16 devices; i.e 4 banks
-  * 01 => x8 devices; i.e 8 banks
+  * 00 => 4 banks
+  * 01 => 8 banks
   */
  #define I82975X_C0BNKARC	0x10e
  #define I82975X_C1BNKARC	0x18e

··· 278
  		struct i82975x_error_info *info, int handle_errors)
  {
  	int row, multi_chan, chan;
+ 	unsigned long offst, page;

  	multi_chan = mci->csrows[0].nr_channels - 1;

··· 292
  		info->errsts = info->errsts2;
  	}

- 	chan = info->eap & 1;
- 	info->eap >>= 1;
- 	if (info->xeap )
- 		info->eap |= 0x80000000;
- 	info->eap >>= PAGE_SHIFT;
- 	row = edac_mc_find_csrow_by_page(mci, info->eap);
+ 	page = (unsigned long) info->eap;
+ 	if (info->xeap & 1)
+ 		page |= 0x100000000ul;
+ 	chan = page & 1;
+ 	page >>= 1;
+ 	offst = page & ((1 << PAGE_SHIFT) - 1);
+ 	page >>= PAGE_SHIFT;
+ 	row = edac_mc_find_csrow_by_page(mci, page);

  	if (info->errsts & 0x0002)
- 		edac_mc_handle_ue(mci, info->eap, 0, row, "i82975x UE");
+ 		edac_mc_handle_ue(mci, page, offst, row, "i82975x UE");
  	else
- 		edac_mc_handle_ce(mci, info->eap, 0, info->derrsyn, row,
+ 		edac_mc_handle_ce(mci, page, offst, info->derrsyn, row,
  				multi_chan ? chan : 0,
  				"i82975x CE");

··· 344
  static enum dev_type i82975x_dram_type(void __iomem *mch_window, int rank)
  {
  	/*
- 	 * ASUS P5W DH either does not program this register or programs
- 	 * it wrong!
- 	 * ECC is possible on i92975x ONLY with DEV_X8 which should mean 'val'
- 	 * for each rank should be 01b - the LSB of the word should be 0x55;
- 	 * but it reads 0!
+ 	 * ECC is possible on i92975x ONLY with DEV_X8
  	 */
  	return DEV_X8;
  }

··· 356
  static void i82975x_init_csrows(struct mem_ctl_info *mci,
  		struct pci_dev *pdev, void __iomem *mch_window)
  {
+ 	static const char *labels[4] = {
+ 		"DIMM A1", "DIMM A2",
+ 		"DIMM B1", "DIMM B2"
+ 	};
  	struct csrow_info *csrow;
  	unsigned long last_cumul_size;
  	u8 value;
  	u32 cumul_size;
- 	int index;
+ 	int index, chan;

  	last_cumul_size = 0;

··· 369
  	 * The dram row boundary (DRB) reg values are boundary address
  	 * for each DRAM row with a granularity of 32 or 64MB (single/dual
  	 * channel operation). DRB regs are cumulative; therefore DRB7 will
- 	 * contain the total memory contained in all eight rows.
- 	 *
- 	 * FIXME:
- 	 * EDAC currently works for Dual-channel Interleaved configuration.
- 	 * Other configurations, which the chip supports, need fixing/testing.
+ 	 * contain the total memory contained in all rows.
  	 *
  	 */

··· 384
  			((index >= 4) ? 0x80 : 0));
  		cumul_size = value;
  		cumul_size <<= (I82975X_DRB_SHIFT - PAGE_SHIFT);
+ 		/*
+ 		 * Adjust cumul_size w.r.t number of channels
+ 		 *
+ 		 */
+ 		if (csrow->nr_channels > 1)
+ 			cumul_size <<= 1;
  		debugf3("%s(): (%d) cumul_size 0x%x\n", __func__, index,
  			cumul_size);
+
+ 		/*
+ 		 * Initialise dram labels
+ 		 * index values:
+ 		 *   [0-7] for single-channel; i.e. csrow->nr_channels = 1
+ 		 *   [0-3] for dual-channel; i.e. csrow->nr_channels = 2
+ 		 */
+ 		for (chan = 0; chan < csrow->nr_channels; chan++)
+ 			strncpy(csrow->channels[chan].label,
+ 				labels[(index >> 1) + (chan * 2)],
+ 				EDAC_MC_LABEL_LEN);
+
  		if (cumul_size == last_cumul_size)
  			continue;	/* not populated */

··· 393
  		csrow->last_page = cumul_size - 1;
  		csrow->nr_pages = cumul_size - last_cumul_size;
  		last_cumul_size = cumul_size;
- 		csrow->grain = 1 << 7;	/* I82975X_EAP has 128B resolution */
- 		csrow->mtype = MEM_DDR;	/* i82975x supports only DDR2 */
+ 		csrow->grain = 1 << 6;	/* I82975X_EAP has 64B resolution */
+ 		csrow->mtype = MEM_DDR2;	/* I82975x supports only DDR2 */
  		csrow->dtype = i82975x_dram_type(mch_window, index);
  		csrow->edac_mode = EDAC_SECDED;	/* only supported */
  	}

··· 515
  	debugf3("%s(): init mci\n", __func__);
  	mci->dev = &pdev->dev;
- 	mci->mtype_cap = MEM_FLAG_DDR;
+ 	mci->mtype_cap = MEM_FLAG_DDR2;
  	mci->edac_ctl_cap = EDAC_FLAG_NONE | EDAC_FLAG_SECDED;
  	mci->edac_cap = EDAC_FLAG_NONE | EDAC_FLAG_SECDED;
  	mci->mod_name = EDAC_MOD_STR;
  	mci->mod_ver = I82975X_REVISION;
  	mci->ctl_name = i82975x_devs[dev_idx].ctl_name;
+ 	mci->dev_name = pci_name(pdev);
  	mci->edac_check = i82975x_check;
  	mci->ctl_page_to_phys = NULL;
  	debugf3("%s(): init pvt\n", __func__);
  	pvt = (struct i82975x_pvt *) mci->pvt_info;
  	pvt->mch_window = mch_window;
  	i82975x_init_csrows(mci, pdev, mch_window);
+ 	mci->scrub_mode = SCRUB_HW_SRC;
  	i82975x_get_error_info(mci, &discard);	/* clear counters */

··· 664
  module_exit(i82975x_exit);

  MODULE_LICENSE("GPL");
- MODULE_AUTHOR("Arvind R. <arvind@acarlab.com>");
+ MODULE_AUTHOR("Arvind R. <arvino55@gmail.com>");
  MODULE_DESCRIPTION("MC support for Intel 82975 memory hub controllers");

  module_param(edac_op_state, int, 0444);
+3 -1
drivers/firmware/dcdbas.c

··· 268
  	}

  	/* generate SMI */
+ 	/* inb to force posted write through and make SMI happen now */
  	asm volatile (
- 		"outb %b0,%w1"
+ 		"outb %b0,%w1\n"
+ 		"inb %w1"
  		: /* no output args */
  		: "a" (smi_cmd->command_code),
  		  "d" (smi_cmd->command_address),
+1 -3
drivers/gpu/drm/drm_sman.c

··· 59
  {
  	int ret = 0;

- 	sman->mm = (struct drm_sman_mm *) kcalloc(num_managers,
- 						  sizeof(*sman->mm),
- 						  GFP_KERNEL);
+ 	sman->mm = kcalloc(num_managers, sizeof(*sman->mm), GFP_KERNEL);
  	if (!sman->mm) {
  		ret = -ENOMEM;
  		goto out;
+1 -1
drivers/gpu/drm/radeon/evergreen.c

··· 2987
  	r = r600_ib_test(rdev);
  	if (r) {
- 		DRM_ERROR("radeon: failled testing IB (%d).\n", r);
+ 		DRM_ERROR("radeon: failed testing IB (%d).\n", r);
  		return r;
  	}
+5 -5
drivers/gpu/drm/radeon/r100.c

··· 3617
  	if (i < rdev->usec_timeout) {
  		DRM_INFO("ib test succeeded in %u usecs\n", i);
  	} else {
- 		DRM_ERROR("radeon: ib test failed (sracth(0x%04X)=0x%08X)\n",
+ 		DRM_ERROR("radeon: ib test failed (scratch(0x%04X)=0x%08X)\n",
  			  scratch, tmp);
  		r = -EINVAL;
  	}

··· 3637
  	r = radeon_ib_pool_init(rdev);
  	if (r) {
- 		dev_err(rdev->dev, "failled initializing IB pool (%d).\n", r);
+ 		dev_err(rdev->dev, "failed initializing IB pool (%d).\n", r);
  		r100_ib_fini(rdev);
  		return r;
  	}
  	r = r100_ib_test(rdev);
  	if (r) {
- 		dev_err(rdev->dev, "failled testing IB (%d).\n", r);
+ 		dev_err(rdev->dev, "failed testing IB (%d).\n", r);
  		r100_ib_fini(rdev);
  		return r;
  	}

··· 3799
  	/* 1M ring buffer */
  	r = r100_cp_init(rdev, 1024 * 1024);
  	if (r) {
- 		dev_err(rdev->dev, "failled initializing CP (%d).\n", r);
+ 		dev_err(rdev->dev, "failed initializing CP (%d).\n", r);
  		return r;
  	}
  	r = r100_ib_init(rdev);
  	if (r) {
- 		dev_err(rdev->dev, "failled initializing IB (%d).\n", r);
+ 		dev_err(rdev->dev, "failed initializing IB (%d).\n", r);
  		return r;
  	}
  	return 0;
+2 -2
drivers/gpu/drm/radeon/r300.c
··· 1401 1401 /* 1M ring buffer */ 1402 1402 r = r100_cp_init(rdev, 1024 * 1024); 1403 1403 if (r) { 1404 - dev_err(rdev->dev, "failled initializing CP (%d).\n", r); 1404 + dev_err(rdev->dev, "failed initializing CP (%d).\n", r); 1405 1405 return r; 1406 1406 } 1407 1407 r = r100_ib_init(rdev); 1408 1408 if (r) { 1409 - dev_err(rdev->dev, "failled initializing IB (%d).\n", r); 1409 + dev_err(rdev->dev, "failed initializing IB (%d).\n", r); 1410 1410 return r; 1411 1411 } 1412 1412 return 0;
+2 -2
drivers/gpu/drm/radeon/r420.c
··· 260 260 /* 1M ring buffer */ 261 261 r = r100_cp_init(rdev, 1024 * 1024); 262 262 if (r) { 263 - dev_err(rdev->dev, "failled initializing CP (%d).\n", r); 263 + dev_err(rdev->dev, "failed initializing CP (%d).\n", r); 264 264 return r; 265 265 } 266 266 r420_cp_errata_init(rdev); 267 267 r = r100_ib_init(rdev); 268 268 if (r) { 269 - dev_err(rdev->dev, "failled initializing IB (%d).\n", r); 269 + dev_err(rdev->dev, "failed initializing IB (%d).\n", r); 270 270 return r; 271 271 } 272 272 return 0;
+2 -2
drivers/gpu/drm/radeon/r520.c
··· 193 193 /* 1M ring buffer */ 194 194 r = r100_cp_init(rdev, 1024 * 1024); 195 195 if (r) { 196 - dev_err(rdev->dev, "failled initializing CP (%d).\n", r); 196 + dev_err(rdev->dev, "failed initializing CP (%d).\n", r); 197 197 return r; 198 198 } 199 199 r = r100_ib_init(rdev); 200 200 if (r) { 201 - dev_err(rdev->dev, "failled initializing IB (%d).\n", r); 201 + dev_err(rdev->dev, "failed initializing IB (%d).\n", r); 202 202 return r; 203 203 } 204 204 return 0;
+1 -1
drivers/gpu/drm/radeon/r600.c
··· 2464 2464 2465 2465 r = r600_ib_test(rdev); 2466 2466 if (r) { 2467 - DRM_ERROR("radeon: failled testing IB (%d).\n", r); 2467 + DRM_ERROR("radeon: failed testing IB (%d).\n", r); 2468 2468 return r; 2469 2469 } 2470 2470
+1 -1
drivers/gpu/drm/radeon/radeon_ring.c
··· 151 151 /* 64 dwords should be enough for fence too */ 152 152 r = radeon_ring_lock(rdev, 64); 153 153 if (r) { 154 - DRM_ERROR("radeon: scheduling IB failled (%d).\n", r); 154 + DRM_ERROR("radeon: scheduling IB failed (%d).\n", r); 155 155 return r; 156 156 } 157 157 radeon_ring_ib_execute(rdev, ib);
+2 -2
drivers/gpu/drm/radeon/rs400.c
··· 412 412 /* 1M ring buffer */ 413 413 r = r100_cp_init(rdev, 1024 * 1024); 414 414 if (r) { 415 - dev_err(rdev->dev, "failled initializing CP (%d).\n", r); 415 + dev_err(rdev->dev, "failed initializing CP (%d).\n", r); 416 416 return r; 417 417 } 418 418 r = r100_ib_init(rdev); 419 419 if (r) { 420 - dev_err(rdev->dev, "failled initializing IB (%d).\n", r); 420 + dev_err(rdev->dev, "failed initializing IB (%d).\n", r); 421 421 return r; 422 422 } 423 423 return 0;
+2 -2
drivers/gpu/drm/radeon/rs600.c
··· 865 865 /* 1M ring buffer */ 866 866 r = r100_cp_init(rdev, 1024 * 1024); 867 867 if (r) { 868 - dev_err(rdev->dev, "failled initializing CP (%d).\n", r); 868 + dev_err(rdev->dev, "failed initializing CP (%d).\n", r); 869 869 return r; 870 870 } 871 871 r = r100_ib_init(rdev); 872 872 if (r) { 873 - dev_err(rdev->dev, "failled initializing IB (%d).\n", r); 873 + dev_err(rdev->dev, "failed initializing IB (%d).\n", r); 874 874 return r; 875 875 } 876 876
+2 -2
drivers/gpu/drm/radeon/rs690.c
··· 627 627 /* 1M ring buffer */ 628 628 r = r100_cp_init(rdev, 1024 * 1024); 629 629 if (r) { 630 - dev_err(rdev->dev, "failled initializing CP (%d).\n", r); 630 + dev_err(rdev->dev, "failed initializing CP (%d).\n", r); 631 631 return r; 632 632 } 633 633 r = r100_ib_init(rdev); 634 634 if (r) { 635 - dev_err(rdev->dev, "failled initializing IB (%d).\n", r); 635 + dev_err(rdev->dev, "failed initializing IB (%d).\n", r); 636 636 return r; 637 637 } 638 638
+2 -2
drivers/gpu/drm/radeon/rv515.c
··· 398 398 /* 1M ring buffer */ 399 399 r = r100_cp_init(rdev, 1024 * 1024); 400 400 if (r) { 401 - dev_err(rdev->dev, "failled initializing CP (%d).\n", r); 401 + dev_err(rdev->dev, "failed initializing CP (%d).\n", r); 402 402 return r; 403 403 } 404 404 r = r100_ib_init(rdev); 405 405 if (r) { 406 - dev_err(rdev->dev, "failled initializing IB (%d).\n", r); 406 + dev_err(rdev->dev, "failed initializing IB (%d).\n", r); 407 407 return r; 408 408 } 409 409 return 0;
+1 -1
drivers/gpu/drm/radeon/rv770.c
··· 1209 1209 1210 1210 r = r600_ib_test(rdev); 1211 1211 if (r) { 1212 - DRM_ERROR("radeon: failled testing IB (%d).\n", r); 1212 + DRM_ERROR("radeon: failed testing IB (%d).\n", r); 1213 1213 return r; 1214 1214 } 1215 1215
+3 -3
drivers/isdn/mISDN/hwchannel.c
··· 206 206 hh->id = id; 207 207 if (bch->rcount >= 64) { 208 208 printk(KERN_WARNING "B-channel %p receive queue overflow, " 209 - "fushing!\n", bch); 209 + "flushing!\n", bch); 210 210 skb_queue_purge(&bch->rqueue); 211 211 bch->rcount = 0; 212 212 return; ··· 231 231 { 232 232 if (bch->rcount >= 64) { 233 233 printk(KERN_WARNING "B-channel %p receive queue overflow, " 234 - "fushing!\n", bch); 234 + "flushing!\n", bch); 235 235 skb_queue_purge(&bch->rqueue); 236 236 bch->rcount = 0; 237 237 } ··· 279 279 280 280 if (bch->rcount >= 64) { 281 281 printk(KERN_WARNING "B-channel %p receive queue overflow, " 282 - "fushing!\n", bch); 282 + "flushing!\n", bch); 283 283 skb_queue_purge(&bch->rqueue); 284 284 bch->rcount = 0; 285 285 }
+1 -2
drivers/message/i2o/i2o_config.c
··· 1044 1044 1045 1045 static int cfg_open(struct inode *inode, struct file *file) 1046 1046 { 1047 - struct i2o_cfg_info *tmp = 1048 - (struct i2o_cfg_info *)kmalloc(sizeof(struct i2o_cfg_info), 1047 + struct i2o_cfg_info *tmp = kmalloc(sizeof(struct i2o_cfg_info), 1049 1048 GFP_KERNEL); 1050 1049 unsigned long flags; 1051 1050
+2 -3
drivers/mtd/nand/mxc_nand.c
··· 722 722 /* 723 723 * MXC NANDFC can only perform full page+spare or 724 724 * spare-only read/write. When the upper layers 725 - * layers perform a read/write buf operation, 726 - * we will used the saved column address to index into 727 - * the full page. 725 + * perform a read/write buf operation, the saved column 726 + * address is used to index into the full page. 728 727 */ 729 728 host->send_addr(host, 0, page_addr == -1); 730 729 if (mtd->writesize > 512)
+2 -2
drivers/net/atl1c/atl1c.h
··· 265 265 __le32 word3; 266 266 }; 267 267 268 - /* RFD desciptor */ 268 + /* RFD descriptor */ 269 269 struct atl1c_rx_free_desc { 270 270 __le64 buffer_addr; 271 271 }; ··· 531 531 struct atl1c_buffer *buffer_info; 532 532 }; 533 533 534 - /* receive return desciptor (rrd) ring */ 534 + /* receive return descriptor (rrd) ring */ 535 535 struct atl1c_rrd_ring { 536 536 void *desc; /* descriptor ring virtual address */ 537 537 dma_addr_t dma; /* descriptor ring physical address */
+1 -1
drivers/net/qla3xxx.c
··· 2460 2460 * The 3032 supports sglists by using the 3 addr/len pairs (ALP) 2461 2461 * in the IOCB plus a chain of outbound address lists (OAL) that 2462 2462 * each contain 5 ALPs. The last ALP of the IOCB (3rd) or OAL (5th) 2463 - * will used to point to an OAL when more ALP entries are required. 2463 + * will be used to point to an OAL when more ALP entries are required. 2464 2464 * The IOCB is always the top of the chain followed by one or more 2465 2465 * OALs (when necessary). 2466 2466 */
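The corrected comment implies a simple capacity rule: the IOCB holds 3 ALPs and each OAL holds 5, with the last slot becoming a chain pointer whenever another list must follow. A hypothetical helper (not from the qla3xxx driver) counting how many OALs a request with `nsegs` scatter entries needs:

```c
/* 3 ALPs in the IOCB, 5 per OAL; the last slot of each list doubles as
 * a pointer to the next OAL when more entries are required.
 * Hypothetical illustration of the scheme the comment describes. */
static int oals_needed(int nsegs)
{
	if (nsegs <= 3)
		return 0;		/* all ALPs fit in the IOCB itself */
	/* The IOCB keeps 2 data ALPs + 1 chain; each OAL then adds 4
	 * data ALPs + 1 chain, except the final OAL, which holds 5.
	 * This works out to ceil((nsegs - 3) / 4). */
	return (nsegs - 4) / 4 + 1;
}
```

So 7 segments still fit with one OAL (2 in the IOCB, 5 in the OAL), while an 8th forces a second OAL.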
+1 -1
drivers/net/sungem.h
··· 843 843 844 844 /* GEM requires that RX descriptors are provided four at a time, 845 845 * aligned. Also, the RX ring may not wrap around. This means that 846 - * there will be at least 4 unused desciptor entries in the middle 846 + * there will be at least 4 unused descriptor entries in the middle 847 847 * of the RX ring at all times. 848 848 * 849 849 * Similar to HME, GEM assumes that it can write garbage bytes before
-1
drivers/platform/x86/acer-wmi.c
··· 39 39 #include <linux/slab.h> 40 40 #include <linux/input.h> 41 41 #include <linux/input/sparse-keymap.h> 42 - #include <linux/dmi.h> 43 42 44 43 #include <acpi/acpi_drivers.h> 45 44
+1 -1
drivers/scsi/aic7xxx/aic79xx.h
··· 672 672 /************************ Target Mode Definitions *****************************/ 673 673 674 674 /* 675 - * Connection desciptor for select-in requests in target mode. 675 + * Connection descriptor for select-in requests in target mode. 676 676 */ 677 677 struct target_cmd { 678 678 uint8_t scsiid; /* Our ID and the initiator's ID */
+1 -1
drivers/scsi/aic7xxx/aic7xxx.h
··· 618 618 /************************ Target Mode Definitions *****************************/ 619 619 620 620 /* 621 - * Connection desciptor for select-in requests in target mode. 621 + * Connection descriptor for select-in requests in target mode. 622 622 */ 623 623 struct target_cmd { 624 624 uint8_t scsiid; /* Our ID and the initiator's ID */
+1 -1
drivers/scsi/aic7xxx/aic7xxx_core.c
··· 4780 4780 SLIST_INIT(&scb_data->sg_maps); 4781 4781 4782 4782 /* Allocate SCB resources */ 4783 - scb_data->scbarray = (struct scb *)kmalloc(sizeof(struct scb) * AHC_SCB_MAX_ALLOC, GFP_ATOMIC); 4783 + scb_data->scbarray = kmalloc(sizeof(struct scb) * AHC_SCB_MAX_ALLOC, GFP_ATOMIC); 4784 4784 if (scb_data->scbarray == NULL) 4785 4785 return (ENOMEM); 4786 4786 memset(scb_data->scbarray, 0, sizeof(struct scb) * AHC_SCB_MAX_ALLOC);
+2 -2
drivers/scsi/megaraid.c
··· 1412 1412 * @nstatus - number of completed commands 1413 1413 * @status - status of the last command completed 1414 1414 * 1415 - * Complete the comamnds and call the scsi mid-layer callback hooks. 1415 + * Complete the commands and call the scsi mid-layer callback hooks. 1416 1416 */ 1417 1417 static void 1418 1418 mega_cmd_done(adapter_t *adapter, u8 completed[], int nstatus, int status) ··· 4296 4296 * @adapter - pointer to our soft state 4297 4297 * @dma_handle - DMA address of the buffer 4298 4298 * 4299 - * Issue internal comamnds while interrupts are available. 4299 + * Issue internal commands while interrupts are available. 4300 4300 * We only issue direct mailbox commands from within the driver. ioctl() 4301 4301 * interface using these routines can issue passthru commands. 4302 4302 */
+1 -1
drivers/scsi/megaraid/megaraid_sas_base.c
··· 890 890 * @instance: Adapter soft state 891 891 * @cmd_to_abort: Previously issued cmd to be aborted 892 892 * 893 - * MFI firmware can abort previously issued AEN comamnd (automatic event 893 + * MFI firmware can abort previously issued AEN command (automatic event 894 894 * notification). The megasas_issue_blocked_abort_cmd() issues such abort 895 895 * cmd and waits for return status. 896 896 * Max wait time is MEGASAS_INTERNAL_CMD_WAIT_TIME secs
+4 -6
drivers/scsi/osst.c
··· 1484 1484 int dbg = debugging; 1485 1485 #endif 1486 1486 1487 - if ((buffer = (unsigned char *)vmalloc((nframes + 1) * OS_DATA_SIZE)) == NULL) 1487 + if ((buffer = vmalloc((nframes + 1) * OS_DATA_SIZE)) == NULL) 1488 1488 return (-EIO); 1489 1489 1490 1490 printk(KERN_INFO "%s:I: Reading back %d frames from drive buffer%s\n", ··· 2296 2296 if (STp->raw) return 0; 2297 2297 2298 2298 if (STp->header_cache == NULL) { 2299 - if ((STp->header_cache = (os_header_t *)vmalloc(sizeof(os_header_t))) == NULL) { 2299 + if ((STp->header_cache = vmalloc(sizeof(os_header_t))) == NULL) { 2300 2300 printk(KERN_ERR "%s:E: Failed to allocate header cache\n", name); 2301 2301 return (-ENOMEM); 2302 2302 } ··· 2484 2484 name, ppos, update_frame_cntr); 2485 2485 #endif 2486 2486 if (STp->header_cache == NULL) { 2487 - if ((STp->header_cache = (os_header_t *)vmalloc(sizeof(os_header_t))) == NULL) { 2487 + if ((STp->header_cache = vmalloc(sizeof(os_header_t))) == NULL) { 2488 2488 printk(KERN_ERR "%s:E: Failed to allocate header cache\n", name); 2489 2489 return 0; 2490 2490 } ··· 5851 5851 /* if this is the first attach, build the infrastructure */ 5852 5852 write_lock(&os_scsi_tapes_lock); 5853 5853 if (os_scsi_tapes == NULL) { 5854 - os_scsi_tapes = 5855 - (struct osst_tape **)kmalloc(osst_max_dev * sizeof(struct osst_tape *), 5856 - GFP_ATOMIC); 5854 + os_scsi_tapes = kmalloc(osst_max_dev * sizeof(struct osst_tape *), GFP_ATOMIC); 5857 5855 if (os_scsi_tapes == NULL) { 5858 5856 write_unlock(&os_scsi_tapes_lock); 5859 5857 printk(KERN_ERR "osst :E: Unable to allocate array for OnStream SCSI tapes.\n");
+1 -1
drivers/scsi/qla4xxx/ql4_isr.c
··· 1027 1027 ((ddb_entry->default_time2wait + 1028 1028 4) * HZ); 1029 1029 1030 - DEBUG2(printk("scsi%ld: ddb [%d] initate" 1030 + DEBUG2(printk("scsi%ld: ddb [%d] initiate" 1031 1031 " RELOGIN after %d seconds\n", 1032 1032 ha->host_no, 1033 1033 ddb_entry->fw_ddb_index,
+1 -1
drivers/scsi/qla4xxx/ql4_os.c
··· 812 812 ); 813 813 start_dpc++; 814 814 DEBUG(printk("scsi%ld:%d:%d: ddb [%d] " 815 - "initate relogin after" 815 + "initiate relogin after" 816 816 " %d seconds\n", 817 817 ha->host_no, ddb_entry->bus, 818 818 ddb_entry->target,
-1
drivers/target/target_core_hba.c
··· 37 37 38 38 #include <target/target_core_base.h> 39 39 #include <target/target_core_device.h> 40 - #include <target/target_core_device.h> 41 40 #include <target/target_core_tpg.h> 42 41 #include <target/target_core_transport.h> 43 42
+1 -1
drivers/tty/hvc/hvcs.c
··· 292 292 /* 293 293 * Any variable below the kref is valid before a tty is connected and 294 294 * stays valid after the tty is disconnected. These shouldn't be 295 - * whacked until the koject refcount reaches zero though some entries 295 + * whacked until the kobject refcount reaches zero though some entries 296 296 * may be changed via sysfs initiatives. 297 297 */ 298 298 struct kref kref; /* ref count & hvcs_struct lifetime */
-1
drivers/tty/serial/pch_uart.c
··· 15 15 *Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA. 16 16 */ 17 17 #include <linux/serial_reg.h> 18 - #include <linux/pci.h> 19 18 #include <linux/module.h> 20 19 #include <linux/pci.h> 21 20 #include <linux/serial_core.h>
+1 -1
drivers/watchdog/sbc_epx_c3.c
··· 220 220 MODULE_AUTHOR("Calin A. Culianu <calin@ajvar.org>"); 221 221 MODULE_DESCRIPTION("Hardware Watchdog Device for Winsystems EPX-C3 SBC. " 222 222 "Note that there is no way to probe for this device -- " 223 - "so only use it if you are *sure* you are runnning on this specific " 223 + "so only use it if you are *sure* you are running on this specific " 224 224 "SBC system from Winsystems! It writes to IO ports 0x1ee and 0x1ef!"); 225 225 MODULE_LICENSE("GPL"); 226 226 MODULE_ALIAS_MISCDEV(WATCHDOG_MINOR);
+1 -1
fs/btrfs/disk-io.c
··· 2493 2493 * ERROR state on disk. 2494 2494 * 2495 2495 * 2. when btrfs flips readonly just in btrfs_commit_super, 2496 - * and in such case, btrfs cannnot write sb via btrfs_commit_super, 2496 + * and in such case, btrfs cannot write sb via btrfs_commit_super, 2497 2497 * and since fs_state has been set BTRFS_SUPER_FLAG_ERROR flag, 2498 2498 * btrfs will cleanup all FS resources first and write sb then. 2499 2499 */
+2 -2
fs/dcache.c
··· 1808 1808 * false-negative result. d_lookup() protects against concurrent 1809 1809 * renames using rename_lock seqlock. 1810 1810 * 1811 - * See Documentation/vfs/dcache-locking.txt for more details. 1811 + * See Documentation/filesystems/path-lookup.txt for more details. 1812 1812 */ 1813 1813 hlist_bl_for_each_entry_rcu(dentry, node, &b->head, d_hash) { 1814 1814 struct inode *i; ··· 1928 1928 * false-negative result. d_lookup() protects against concurrent 1929 1929 * renames using rename_lock seqlock. 1930 1930 * 1931 - * See Documentation/vfs/dcache-locking.txt for more details. 1931 + * See Documentation/filesystems/path-lookup.txt for more details. 1932 1932 */ 1933 1933 rcu_read_lock(); 1934 1934
+3 -3
fs/direct-io.c
··· 645 645 /* 646 646 * See whether this new request is contiguous with the old. 647 647 * 648 - * Btrfs cannot handl having logically non-contiguous requests 649 - * submitted. For exmple if you have 648 + * Btrfs cannot handle having logically non-contiguous requests 649 + * submitted. For example if you have 650 650 * 651 651 * Logical: [0-4095][HOLE][8192-12287] 652 - * Phyiscal: [0-4095] [4096-8181] 652 + * Physical: [0-4095] [4096-8191] 653 653 * 654 654 * We cannot submit those pages together as one BIO. So if our 655 655 * current logical offset in the file does not equal what would
+6 -6
fs/eventpoll.c
··· 62 62 * This mutex is acquired by ep_free() during the epoll file 63 63 * cleanup path and it is also acquired by eventpoll_release_file() 64 64 * if a file has been pushed inside an epoll set and it is then 65 - * close()d without a previous call toepoll_ctl(EPOLL_CTL_DEL). 65 + * close()d without a previous call to epoll_ctl(EPOLL_CTL_DEL). 66 66 * It is also acquired when inserting an epoll fd onto another epoll 67 67 * fd. We do this so that we walk the epoll tree and ensure that this 68 68 * insertion does not create a cycle of epoll file descriptors, which ··· 152 152 153 153 /* 154 154 * This structure is stored inside the "private_data" member of the file 155 - * structure and rapresent the main data sructure for the eventpoll 155 + * structure and represents the main data structure for the eventpoll 156 156 * interface. 157 157 */ 158 158 struct eventpoll { 159 - /* Protect the this structure access */ 159 + /* Protect the access to this structure */ 160 160 spinlock_t lock; 161 161 162 162 /* ··· 793 793 794 794 /* 795 795 * This is the callback that is passed to the wait queue wakeup 796 - * machanism. It is called by the stored file descriptors when they 796 + * mechanism. It is called by the stored file descriptors when they 797 797 * have events to report. 798 798 */ 799 799 static int ep_poll_callback(wait_queue_t *wait, unsigned mode, int sync, void *key) ··· 824 824 goto out_unlock; 825 825 826 826 /* 827 - * If we are trasfering events to userspace, we can hold no locks 827 + * If we are transferring events to userspace, we can hold no locks 828 828 * (because we're accessing user memory, and because of linux f_op->poll() 829 - * semantics). All the events that happens during that period of time are 829 + * semantics). All the events that happen during that period of time are 830 830 * chained in ep->ovflist and requeued later on. 831 831 */ 832 832 if (unlikely(ep->ovflist != EP_UNACTIVE_PTR)) {
+2 -2
fs/ext4/extents.c
··· 131 131 * fragmenting the file system's free space. Maybe we 132 132 * should have some hueristics or some way to allow 133 133 * userspace to pass a hint to file system, 134 - * especiially if the latter case turns out to be 134 + * especially if the latter case turns out to be 135 135 * common. 136 136 */ 137 137 ex = path[depth].p_ext; ··· 2844 2844 * ext4_get_blocks_dio_write() when DIO to write 2845 2845 * to an uninitialized extent. 2846 2846 * 2847 - * Writing to an uninitized extent may result in splitting the uninitialized 2847 + * Writing to an uninitialized extent may result in splitting the uninitialized 2848 2848 * extent into multiple /initialized uninitialized extents (up to three) 2849 2849 * There are three possibilities: 2850 2850 * a> There is no split required: Entire extent should be uninitialized
+1 -1
fs/fuse/cuse.c
··· 458 458 * @file: file struct being opened 459 459 * 460 460 * Userland CUSE server can create a CUSE device by opening /dev/cuse 461 - * and replying to the initilaization request kernel sends. This 461 + * and replying to the initialization request kernel sends. This 462 462 * function is responsible for handling CUSE device initialization. 463 463 * Because the fd opened by this function is used during 464 464 * initialization, this function only creates cuse_conn and sends
+1 -1
fs/notify/fanotify/fanotify_user.c
··· 876 876 #endif 877 877 878 878 /* 879 - * fanotify_user_setup - Our initialization function. Note that we cannnot return 879 + * fanotify_user_setup - Our initialization function. Note that we cannot return 880 880 * error because we have compiled-in VFS hooks. So an (unlikely) failure here 881 881 * must result in panic(). 882 882 */
+1 -1
fs/notify/inotify/inotify_user.c
··· 841 841 } 842 842 843 843 /* 844 - * inotify_user_setup - Our initialization function. Note that we cannnot return 844 + * inotify_user_setup - Our initialization function. Note that we cannot return 845 845 * error because we have compiled-in VFS hooks. So an (unlikely) failure here 846 846 * must result in panic(). 847 847 */
+1 -1
fs/ocfs2/dir.c
··· 354 354 /* 355 355 * Returns 0 if not found, -1 on failure, and 1 on success 356 356 */ 357 - static int inline ocfs2_search_dirblock(struct buffer_head *bh, 357 + static inline int ocfs2_search_dirblock(struct buffer_head *bh, 358 358 struct inode *dir, 359 359 const char *name, int namelen, 360 360 unsigned long offset,
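This hunk (and the matching ones in ring_buffer.c, inet6_hashtables.c, mac80211, and au88x0 below) standardizes the specifier order: storage-class and function specifiers first, then the type. GCC flags the old order with `-Wold-style-declaration` ("'inline' is not at beginning of declaration"). A minimal sketch in the preferred form — `dirblock_hash` is a made-up helper, not the ocfs2 function:

```c
/* Preferred declaration order: static (storage class), inline
 * (function specifier), then the return type. Writing
 * "static int inline" means the same thing but draws a GCC warning. */
static inline int dirblock_hash(const char *name, int namelen)
{
	int h = 0;

	while (namelen-- > 0)
		h = h * 31 + (unsigned char)*name++;
	return h;
}
```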
+2 -2
include/asm-generic/user.h
··· 1 1 #ifndef __ASM_GENERIC_USER_H 2 2 #define __ASM_GENERIC_USER_H 3 3 /* 4 - * This file may define a 'struct user' structure. However, it it only 5 - * used for a.out file, which are not supported on new architectures. 4 + * This file may define a 'struct user' structure. However, it is only 5 + * used for a.out files, which are not supported on new architectures. 6 6 */ 7 7 8 8 #endif /* __ASM_GENERIC_USER_H */
+1 -1
include/linux/mmzone.h
··· 472 472 #ifdef CONFIG_NUMA 473 473 474 474 /* 475 - * The NUMA zonelists are doubled becausse we need zonelists that restrict the 475 + * The NUMA zonelists are doubled because we need zonelists that restrict the 476 476 * allocations to a single node for GFP_THISNODE. 477 477 * 478 478 * [0] : Zonelist with fallback
+3 -3
init/Kconfig
··· 745 745 746 746 This option only enables generic Block IO controller infrastructure. 747 747 One needs to also enable actual IO controlling logic/policy. For 748 - enabling proportional weight division of disk bandwidth in CFQ seti 749 - CONFIG_CFQ_GROUP_IOSCHED=y and for enabling throttling policy set 750 - CONFIG_BLK_THROTTLE=y. 748 + enabling proportional weight division of disk bandwidth in CFQ, set 749 + CONFIG_CFQ_GROUP_IOSCHED=y; for enabling throttling policy, set 750 + CONFIG_BLK_DEV_THROTTLING=y. 751 751 752 752 See Documentation/cgroups/blkio-controller.txt for more information. 753 753
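With the symbol names corrected, a fragment of a `.config` from this era that enables both policies the help text describes would read (assuming `CONFIG_BLK_CGROUP` itself is enabled as the prerequisite):

```
CONFIG_BLK_CGROUP=y
CONFIG_CFQ_GROUP_IOSCHED=y
CONFIG_BLK_DEV_THROTTLING=y
```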
+1 -1
kernel/trace/ring_buffer.c
··· 668 668 * the reader page). But if the next page is a header page, 669 669 * its flags will be non zero. 670 670 */ 671 - static int inline 671 + static inline int 672 672 rb_is_head_page(struct ring_buffer_per_cpu *cpu_buffer, 673 673 struct buffer_page *page, struct list_head *list) 674 674 {
+3 -3
mm/memory.c
··· 2172 2172 * handle_pte_fault chooses page fault handler according to an entry 2173 2173 * which was read non-atomically. Before making any commitment, on 2174 2174 * those architectures or configurations (e.g. i386 with PAE) which 2175 - * might give a mix of unmatched parts, do_swap_page and do_file_page 2175 + * might give a mix of unmatched parts, do_swap_page and do_nonlinear_fault 2176 2176 * must check under lock before unmapping the pte and proceeding 2177 2177 * (but do_wp_page is only called after already making such a check; 2178 - * and do_anonymous_page and do_no_page can safely check later on). 2178 + * and do_anonymous_page can safely check later on). 2179 2179 */ 2180 2180 static inline int pte_unmap_same(struct mm_struct *mm, pmd_t *pmd, 2181 2181 pte_t *page_table, pte_t orig_pte) ··· 2371 2371 * bit after it clear all dirty ptes, but before a racing 2372 2372 * do_wp_page installs a dirty pte. 2373 2373 * 2374 - * do_no_page is protected similarly. 2374 + * __do_fault is protected similarly. 2375 2375 */ 2376 2376 if (!page_mkwrite) { 2377 2377 wait_on_page_locked(dirty_page);
+1 -1
mm/mempolicy.c
··· 993 993 * most recent <s, d> pair that moved (s != d). If we find a pair 994 994 * that not only moved, but what's better, moved to an empty slot 995 995 * (d is not set in tmp), then we break out then, with that pair. 996 - * Otherwise when we finish scannng from_tmp, we at least have the 996 + * Otherwise when we finish scanning from_tmp, we at least have the 997 997 * most recent <s, d> pair that moved. If we get all the way through 998 998 * the scan of tmp without finding any node that moved, much less 999 999 * moved to an empty node, then there is nothing left worth migrating.
+1 -1
mm/shmem.c
··· 779 779 * If truncating down to a partial page, then 780 780 * if that page is already allocated, hold it 781 781 * in memory until the truncation is over, so 782 - * truncate_partial_page cannnot miss it were 782 + * truncate_partial_page cannot miss it were 783 783 * it assigned to swap. 784 784 */ 785 785 if (newsize & (PAGE_CACHE_SIZE-1)) {
+2 -2
net/core/dev_addr_lists.c
··· 357 357 /** 358 358 * dev_addr_del_multiple - Delete device addresses by another device 359 359 * @to_dev: device where the addresses will be deleted 360 - * @from_dev: device by which addresses the addresses will be deleted 361 - * @addr_type: address type - 0 means type will used from from_dev 360 + * @from_dev: device supplying the addresses to be deleted 361 + * @addr_type: address type - 0 means type will be used from from_dev 362 362 * 363 363 * Deletes addresses in to device by the list of addresses in from device. 364 364 *
+1 -1
net/ipv6/inet6_hashtables.c
··· 124 124 } 125 125 EXPORT_SYMBOL(__inet6_lookup_established); 126 126 127 - static int inline compute_score(struct sock *sk, struct net *net, 127 + static inline int compute_score(struct sock *sk, struct net *net, 128 128 const unsigned short hnum, 129 129 const struct in6_addr *daddr, 130 130 const int dif)
+1 -1
net/mac80211/tx.c
··· 169 169 return cpu_to_le16(dur); 170 170 } 171 171 172 - static int inline is_ieee80211_device(struct ieee80211_local *local, 172 + static inline int is_ieee80211_device(struct ieee80211_local *local, 173 173 struct net_device *dev) 174 174 { 175 175 return local == wdev_priv(dev->ieee80211_ptr);
+2 -2
sound/pci/au88x0/au88x0.h
··· 211 211 //static void vortex_adbdma_stopfifo(vortex_t *vortex, int adbdma); 212 212 static void vortex_adbdma_pausefifo(vortex_t * vortex, int adbdma); 213 213 static void vortex_adbdma_resumefifo(vortex_t * vortex, int adbdma); 214 - static int inline vortex_adbdma_getlinearpos(vortex_t * vortex, int adbdma); 214 + static inline int vortex_adbdma_getlinearpos(vortex_t * vortex, int adbdma); 215 215 static void vortex_adbdma_resetup(vortex_t *vortex, int adbdma); 216 216 217 217 #ifndef CHIP_AU8810 ··· 219 219 static void vortex_wtdma_stopfifo(vortex_t * vortex, int wtdma); 220 220 static void vortex_wtdma_pausefifo(vortex_t * vortex, int wtdma); 221 221 static void vortex_wtdma_resumefifo(vortex_t * vortex, int wtdma); 222 - static int inline vortex_wtdma_getlinearpos(vortex_t * vortex, int wtdma); 222 + static inline int vortex_wtdma_getlinearpos(vortex_t * vortex, int wtdma); 223 223 #endif 224 224 225 225 /* global stuff. */
+2 -2
sound/pci/au88x0/au88x0_core.c
··· 1249 1249 } 1250 1250 } 1251 1251 1252 - static int inline vortex_adbdma_getlinearpos(vortex_t * vortex, int adbdma) 1252 + static inline int vortex_adbdma_getlinearpos(vortex_t * vortex, int adbdma) 1253 1253 { 1254 1254 stream_t *dma = &vortex->dma_adb[adbdma]; 1255 1255 int temp, page, delta; ··· 1506 1506 POS_SHIFT) & POS_MASK); 1507 1507 } 1508 1508 #endif 1509 - static int inline vortex_wtdma_getlinearpos(vortex_t * vortex, int wtdma) 1509 + static inline int vortex_wtdma_getlinearpos(vortex_t * vortex, int wtdma) 1510 1510 { 1511 1511 stream_t *dma = &vortex->dma_wt[wtdma]; 1512 1512 int temp;