Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/bunk/trivial

* git://git.kernel.org/pub/scm/linux/kernel/git/bunk/trivial:
typo fixes
Clean up 'inline is not at beginning' warnings for usb storage
Storage class should be first
i386: Trivial typo fixes
ixj: make ixj_set_tone_off() static
spelling fixes
fix paniced->panicked typos
Spelling fixes for Documentation/atomic_ops.txt
move acknowledgment for Mark Adler to CREDITS
remove the bouncing email address of David Campbell

+147 -158
+5
CREDITS
@@ -24 +24 @@
 S: Iasi 6600
 S: Romania
 
+N: Mark Adler
+E: madler@alumni.caltech.edu
+W: http://alumnus.caltech.edu/~madler/
+D: zlib decompression
+
 N: Monalisa Agrawal
 E: magrawal@nortelnetworks.com
 D: Basic Interphase 5575 driver with UBR and ABR support.
+1 -1
Documentation/DocBook/kernel-locking.tmpl
@@ -1590 +1590 @@
     <para>
 Our final dilemma is this: when can we actually destroy the
 removed element?  Remember, a reader might be stepping through
-this element in the list right now: it we free this element and
+this element in the list right now: if we free this element and
 the <symbol>next</symbol> pointer changes, the reader will jump
 off into garbage and crash.  We need to wait until we know that
 all the readers who were traversing the list when we deleted the
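The dilemma in the hunk above, when is it safe to free a removed list element, can be sketched in plain C. This is a hypothetical single-threaded model, not the kernel's RCU implementation: `pending_free` and the `readers` count merely stand in for a grace period, and all names are invented for illustration.

```c
/* Hypothetical model of "defer the free": a removed node must outlive
 * any reader still stepping through it.  Not kernel code. */
#include <assert.h>
#include <stdlib.h>

struct node { int val; struct node *next; };

static struct node *pending_free;  /* unlinked, but maybe still read */
static int readers;                /* stand-in for "readers in flight" */

/* Unlink n (which follows prev) but do not free it yet: a reader that
 * already loaded n's address may still dereference it. */
static void list_del_deferred(struct node *prev, struct node *n)
{
    prev->next = n->next;
    pending_free = n;
}

/* Called once we know every reader that could have seen the node is
 * gone - the "wait until..." step in the paragraph above. */
static void grace_period_end(void)
{
    assert(readers == 0);
    free(pending_free);
    pending_free = NULL;
}
```

The point of the sketch is only the ordering: the unlink happens immediately, the free is postponed until the simulated grace period ends.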
+14 -14
Documentation/atomic_ops.txt
@@ -157 +157 @@
 	smp_mb__before_atomic_dec();
 	atomic_dec(&obj->ref_count);
 
-It makes sure that all memory operations preceeding the atomic_dec()
+It makes sure that all memory operations preceding the atomic_dec()
 call are strongly ordered with respect to the atomic counter
-operation.  In the above example, it guarentees that the assignment of
+operation.  In the above example, it guarantees that the assignment of
 "1" to obj->dead will be globally visible to other cpus before the
 atomic counter decrement.
 
-Without the explicitl smp_mb__before_atomic_dec() call, the
+Without the explicit smp_mb__before_atomic_dec() call, the
 implementation could legally allow the atomic counter update visible
 to other cpus before the "obj->dead = 1;" assignment.
@@ -173 +173 @@
 (smp_mb__{before,after}_atomic_inc()).
 
 A missing memory barrier in the cases where they are required by the
-atomic_t implementation above can have disasterous results.  Here is
-an example, which follows a pattern occuring frequently in the Linux
+atomic_t implementation above can have disastrous results.  Here is
+an example, which follows a pattern occurring frequently in the Linux
 kernel.  It is the use of atomic counters to implement reference
 counting, and it works such that once the counter falls to zero it can
-be guarenteed that no other entity can be accessing the object:
+be guaranteed that no other entity can be accessing the object:
 
 static void obj_list_add(struct obj *obj)
 {
@@ -291 +291 @@
 size.  The endianness of the bits within each "unsigned long" are the
 native endianness of the cpu.
 
-	void set_bit(unsigned long nr, volatils unsigned long *addr);
-	void clear_bit(unsigned long nr, volatils unsigned long *addr);
-	void change_bit(unsigned long nr, volatils unsigned long *addr);
+	void set_bit(unsigned long nr, volatile unsigned long *addr);
+	void clear_bit(unsigned long nr, volatile unsigned long *addr);
+	void change_bit(unsigned long nr, volatile unsigned long *addr);
 
 These routines set, clear, and change, respectively, the bit number
 indicated by "nr" on the bit mask pointed to by "ADDR".
@@ -301 +301 @@
 They must execute atomically, yet there are no implicit memory barrier
 semantics required of these interfaces.
 
-	int test_and_set_bit(unsigned long nr, volatils unsigned long *addr);
-	int test_and_clear_bit(unsigned long nr, volatils unsigned long *addr);
-	int test_and_change_bit(unsigned long nr, volatils unsigned long *addr);
+	int test_and_set_bit(unsigned long nr, volatile unsigned long *addr);
+	int test_and_clear_bit(unsigned long nr, volatile unsigned long *addr);
+	int test_and_change_bit(unsigned long nr, volatile unsigned long *addr);
 
 Like the above, except that these routines return a boolean which
 indicates whether the changed bit was set _BEFORE_ the atomic bit
@@ -335 +335 @@
 		/* ... */;
 	obj->killed = 1;
 
-The implementation of test_and_set_bit() must guarentee that
+The implementation of test_and_set_bit() must guarantee that
 "obj->dead = 1;" is visible to cpus before the atomic memory operation
 done by test_and_set_bit() becomes visible.  Likewise, the atomic
 memory operation done by test_and_set_bit() must become visible before
@@ -474 +474 @@
 	strictly orders all subsequent memory operations (including
 	the cas()) with respect to itself, things will be fine.
 
-Said another way, _atomic_dec_and_lock() must guarentee that
+Said another way, _atomic_dec_and_lock() must guarantee that
 a counter dropping to zero is never made visible before the
 spinlock being acquired.
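The barrier rule this document describes, make `obj->dead = 1` globally visible before the counter decrement, can be mimicked in userspace. The sketch below assumes GCC's `__atomic` builtins as stand-ins for `atomic_t`, `atomic_dec_and_test()` and `smp_mb__before_atomic_dec()`; `struct obj` and `obj_put()` are illustrative names, not kernel API.

```c
/* Userspace analogue of the refcount pattern from atomic_ops.txt,
 * built on GCC __atomic builtins.  Illustrative only. */
#include <assert.h>
#include <stdlib.h>

struct obj {
    int dead;
    int ref_count;              /* the kernel would use atomic_t */
};

struct obj *obj_new(void)
{
    struct obj *o = calloc(1, sizeof *o);
    if (o)
        o->ref_count = 1;
    return o;
}

/* Drop a reference; returns 1 when the object was freed. */
int obj_put(struct obj *obj)
{
    obj->dead = 1;
    /* Full barrier before the decrement, so the store to obj->dead is
     * visible to other cpus before the counter drops - the job done by
     * smp_mb__before_atomic_dec() in the text above. */
    __atomic_thread_fence(__ATOMIC_SEQ_CST);
    if (__atomic_sub_fetch(&obj->ref_count, 1, __ATOMIC_SEQ_CST) == 0) {
        free(obj);              /* counter hit zero: no other accessors */
        return 1;
    }
    return 0;
}
```

With `__ATOMIC_SEQ_CST` the explicit fence is arguably redundant, but it is kept to mirror where the kernel's barrier sits relative to the decrement.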
+1 -1
Documentation/driver-model/overview.txt
··· 18 18 (sometimes just a list) for the devices they control. There wasn't any 19 19 uniformity across the different bus types. 20 20 21 - The current driver model provides a comon, uniform data model for describing 21 + The current driver model provides a common, uniform data model for describing 22 22 a bus and the devices that can appear under the bus. The unified bus 23 23 model includes a set of common attributes which all busses carry, and a set 24 24 of common callbacks, such as device discovery during bus probing, bus
+1 -1
Documentation/kdump/gdbmacros.txt
··· 175 175 document trapinfo 176 176 Run info threads and lookup pid of thread #1 177 177 'trapinfo <pid>' will tell you by which trap & possibly 178 - addresthe kernel paniced. 178 + address the kernel panicked. 179 179 end 180 180 181 181
-2
Documentation/scsi/ppa.txt
··· 12 12 Email list for Linux Parport 13 13 linux-parport@torque.net 14 14 15 - Email for problems with ZIP or ZIP Plus drivers 16 - campbell@torque.net
+1 -1
arch/i386/Kconfig
··· 734 734 help 735 735 kexec is a system call that implements the ability to shutdown your 736 736 current kernel, and to start another kernel. It is like a reboot 737 - but it is indepedent of the system firmware. And like a reboot 737 + but it is independent of the system firmware. And like a reboot 738 738 you can start any kernel with it, not just Linux. 739 739 740 740 The name comes from the similiarity to the exec system call.
+1 -1
arch/i386/Kconfig.cpu
··· 41 41 - "GeodeGX1" for Geode GX1 (Cyrix MediaGX). 42 42 - "Geode GX/LX" For AMD Geode GX and LX processors. 43 43 - "CyrixIII/VIA C3" for VIA Cyrix III or VIA C3. 44 - - "VIA C3-2 for VIA C3-2 "Nehemiah" (model 9 and above). 44 + - "VIA C3-2" for VIA C3-2 "Nehemiah" (model 9 and above). 45 45 46 46 If you don't know what to do, choose "386". 47 47
+1 -1
arch/i386/kernel/cpu/cyrix.c
··· 354 354 * This function only handles the GX processor, and kicks every 355 355 * thing else to the Cyrix init function above - that should 356 356 * cover any processors that might have been branded differently 357 - * after NSC aquired Cyrix. 357 + * after NSC acquired Cyrix. 358 358 * 359 359 * If this breaks your GX1 horribly, please e-mail 360 360 * info-linux@ldcmail.amd.com to tell us.
+1 -1
arch/i386/kernel/crash.c
··· 158 158 void machine_crash_shutdown(struct pt_regs *regs) 159 159 { 160 160 /* This function is only called after the system 161 - * has paniced or is otherwise in a critical state. 161 + * has panicked or is otherwise in a critical state. 162 162 * The minimum amount of code to allow a kexec'd kernel 163 163 * to run successfully needs to happen here. 164 164 *
+1 -1
arch/i386/kernel/i8259.c
··· 175 175 * Lightweight spurious IRQ detection. We do not want 176 176 * to overdo spurious IRQ handling - it's usually a sign 177 177 * of hardware problems, so we only do the checks we can 178 - * do without slowing down good hardware unnecesserily. 178 + * do without slowing down good hardware unnecessarily. 179 179 * 180 180 * Note that IRQ7 and IRQ15 (the two spurious IRQs 181 181 * usually resulting from the 8259A-1|2 PICs) occur
+2 -2
arch/i386/kernel/machine_kexec.c
··· 133 133 unsigned long start_address, 134 134 unsigned int has_pae) ATTRIB_NORET; 135 135 136 - const extern unsigned char relocate_new_kernel[]; 136 + extern const unsigned char relocate_new_kernel[]; 137 137 extern void relocate_new_kernel_end(void); 138 - const extern unsigned int relocate_new_kernel_size; 138 + extern const unsigned int relocate_new_kernel_size; 139 139 140 140 /* 141 141 * A architecture hook called to validate the
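The `const extern` to `extern const` changes here (and in the ppc, s390, sh and x86_64 files below) exist because C treats a storage-class specifier placed anywhere but first as obsolescent (C99 6.11.5), and gcc warns about it; hence the "Storage class should be first" commit. A hypothetical pair of declarations and matching definitions, echoing but not copying the hunk:

```c
/* Storage-class specifier first: "extern const" compiles cleanly,
 * while "const extern" is accepted but deprecated and draws a gcc
 * warning.  relocate_code/relocate_size are invented names. */
#include <assert.h>

extern const unsigned char relocate_code[];
extern const unsigned int  relocate_size;

/* Matching definitions, so the sketch links on its own. */
const unsigned char relocate_code[] = { 0x90, 0xc3 };  /* nop; ret */
const unsigned int  relocate_size  = sizeof(relocate_code);
```

The meaning is identical either way; only the specifier order changes, which is why every hunk in this series is a pure one-for-one swap.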
+2 -2
arch/mips/momentum/ocelot_g/gt-irq.c
@@ -59 +59 @@
  *	bit_num - Indicates which bit number in the cause register
  *
  * Outputs :
- *	1 if succesful, 0 if failure
+ *	1 if successful, 0 if failure
  */
 int enable_galileo_irq(int int_cause, int bit_num)
 {
@@ -83 +83 @@
  *	bit_num - Indicates which bit number in the cause register
  *
  * Outputs :
- *	1 if succesful, 0 if failure
+ *	1 if successful, 0 if failure
  */
 int disable_galileo_irq(int int_cause, int bit_num)
 {
+1 -1
arch/mips/sgi-ip22/ip22-reset.c
··· 34 34 #define POWERDOWN_TIMEOUT 120 35 35 36 36 /* 37 - * Blink frequency during reboot grace period and when paniced. 37 + * Blink frequency during reboot grace period and when panicked. 38 38 */ 39 39 #define POWERDOWN_FREQ (HZ / 4) 40 40 #define PANIC_FREQ (HZ / 8)
+6 -6
arch/mips/sgi-ip32/ip32-reset.c
@@ -28 +28 @@
 
 #define POWERDOWN_TIMEOUT 120
 /*
- * Blink frequency during reboot grace period and when paniced.
+ * Blink frequency during reboot grace period and when panicked.
  */
 #define POWERDOWN_FREQ (HZ / 4)
 #define PANIC_FREQ (HZ / 8)
 
 static struct timer_list power_timer, blink_timer, debounce_timer;
-static int has_paniced, shuting_down;
+static int has_panicked, shuting_down;
 
 static void ip32_machine_restart(char *command) __attribute__((noreturn));
 static void ip32_machine_halt(void) __attribute__((noreturn));
@@ -109 +109 @@
 	}
 	CMOS_WRITE(reg_a & ~DS_REGA_DV0, RTC_REG_A);
 
-	if (has_paniced)
+	if (has_panicked)
 		ip32_machine_restart(NULL);
 
 	enable_irq(MACEISA_RTC_IRQ);
@@ -117 +117 @@
 
 static inline void ip32_power_button(void)
 {
-	if (has_paniced)
+	if (has_panicked)
 		return;
 
 	if (shuting_down || kill_proc(1, SIGINT, 1)) {
@@ -161 +161 @@
 {
 	unsigned long led;
 
-	if (has_paniced)
+	if (has_panicked)
 		return NOTIFY_DONE;
-	has_paniced = 1;
+	has_panicked = 1;
 
 	/* turn off the green LED */
 	led = mace->perif.ctrl.misc | MACEISA_LED_GREEN;
+1 -1
arch/powerpc/kernel/crash.c
··· 179 179 180 180 /* 181 181 * This function is only called after the system 182 - * has paniced or is otherwise in a critical state. 182 + * has panicked or is otherwise in a critical state. 183 183 * The minimum amount of code to allow a kexec'd kernel 184 184 * to run successfully needs to happen here. 185 185 *
+2 -2
arch/powerpc/kernel/machine_kexec_32.c
··· 30 30 */ 31 31 void default_machine_kexec(struct kimage *image) 32 32 { 33 - const extern unsigned char relocate_new_kernel[]; 34 - const extern unsigned int relocate_new_kernel_size; 33 + extern const unsigned char relocate_new_kernel[]; 34 + extern const unsigned int relocate_new_kernel_size; 35 35 unsigned long page_list; 36 36 unsigned long reboot_code_buffer, reboot_code_buffer_phys; 37 37 relocate_new_kernel_t rnk;
+1 -1
arch/powerpc/platforms/cell/spufs/switch.c
··· 2100 2100 * @spu: pointer to SPU iomem structure. 2101 2101 * 2102 2102 * Perform harvest + restore, as we may not be coming 2103 - * from a previous succesful save operation, and the 2103 + * from a previous successful save operation, and the 2104 2104 * hardware state is unknown. 2105 2105 */ 2106 2106 int spu_restore(struct spu_state *new, struct spu *spu)
+1 -1
arch/powerpc/platforms/pseries/eeh_cache.c
··· 287 287 * find the pci device that corresponds to a given address. 288 288 * This routine scans all pci busses to build the cache. 289 289 * Must be run late in boot process, after the pci controllers 290 - * have been scaned for devices (after all device resources are known). 290 + * have been scanned for devices (after all device resources are known). 291 291 */ 292 292 void __init pci_addr_cache_build(void) 293 293 {
+2 -2
arch/ppc/kernel/machine_kexec.c
··· 25 25 unsigned long reboot_code_buffer, 26 26 unsigned long start_address) ATTRIB_NORET; 27 27 28 - const extern unsigned char relocate_new_kernel[]; 29 - const extern unsigned int relocate_new_kernel_size; 28 + extern const unsigned char relocate_new_kernel[]; 29 + extern const unsigned int relocate_new_kernel_size; 30 30 31 31 void machine_shutdown(void) 32 32 {
+2 -2
arch/s390/kernel/machine_kexec.c
··· 27 27 28 28 typedef void (*relocate_kernel_t) (kimage_entry_t *, unsigned long); 29 29 30 - const extern unsigned char relocate_kernel[]; 31 - const extern unsigned long long relocate_kernel_len; 30 + extern const unsigned char relocate_kernel[]; 31 + extern const unsigned long long relocate_kernel_len; 32 32 33 33 int 34 34 machine_kexec_prepare(struct kimage *image)
+1 -1
arch/s390/kernel/vtime.c
··· 356 356 357 357 set_vtimer(event->expires); 358 358 spin_unlock_irqrestore(&vt_list->lock, flags); 359 - /* release CPU aquired in prepare_vtimer or mod_virt_timer() */ 359 + /* release CPU acquired in prepare_vtimer or mod_virt_timer() */ 360 360 put_cpu(); 361 361 } 362 362
+2 -2
arch/sh/kernel/machine_kexec.c
··· 25 25 unsigned long start_address, 26 26 unsigned long vbr_reg) ATTRIB_NORET; 27 27 28 - const extern unsigned char relocate_new_kernel[]; 29 - const extern unsigned int relocate_new_kernel_size; 28 + extern const unsigned char relocate_new_kernel[]; 29 + extern const unsigned int relocate_new_kernel_size; 30 30 extern void *gdb_vbr_vector; 31 31 32 32 /*
+1 -1
arch/um/drivers/ubd_kern.c
··· 1222 1222 } 1223 1223 } 1224 1224 1225 - /* Succesful return case! */ 1225 + /* Successful return case! */ 1226 1226 if(backing_file_out == NULL) 1227 1227 return(fd); 1228 1228
+1 -1
arch/x86_64/kernel/crash.c
··· 161 161 { 162 162 /* 163 163 * This function is only called after the system 164 - * has paniced or is otherwise in a critical state. 164 + * has panicked or is otherwise in a critical state. 165 165 * The minimum amount of code to allow a kexec'd kernel 166 166 * to run successfully needs to happen here. 167 167 *
+1 -1
arch/x86_64/kernel/i8259.c
··· 278 278 * Lightweight spurious IRQ detection. We do not want 279 279 * to overdo spurious IRQ handling - it's usually a sign 280 280 * of hardware problems, so we only do the checks we can 281 - * do without slowing down good hardware unnecesserily. 281 + * do without slowing down good hardware unnecessarily. 282 282 * 283 283 * Note that IRQ7 and IRQ15 (the two spurious IRQs 284 284 * usually resulting from the 8259A-1|2 PICs) occur
+2 -2
arch/x86_64/kernel/machine_kexec.c
··· 149 149 unsigned long start_address, 150 150 unsigned long pgtable) ATTRIB_NORET; 151 151 152 - const extern unsigned char relocate_new_kernel[]; 153 - const extern unsigned long relocate_new_kernel_size; 152 + extern const unsigned char relocate_new_kernel[]; 153 + extern const unsigned long relocate_new_kernel_size; 154 154 155 155 int machine_kexec_prepare(struct kimage *image) 156 156 {
+1 -1
arch/x86_64/kernel/smp.c
··· 474 474 return; 475 475 /* Don't deadlock on the call lock in panic */ 476 476 if (!spin_trylock(&call_lock)) { 477 - /* ignore locking because we have paniced anyways */ 477 + /* ignore locking because we have panicked anyways */ 478 478 nolock = 1; 479 479 } 480 480 __smp_call_function(smp_really_stop_cpu, NULL, 0, 0);
+1 -1
block/as-iosched.c
··· 892 892 } 893 893 894 894 /* 895 - * as_can_anticipate indicates weather we should either run arq 895 + * as_can_anticipate indicates whether we should either run arq 896 896 * or keep anticipating a better request. 897 897 */ 898 898 static int as_can_anticipate(struct as_data *ad, struct as_rq *arq)
+2 -2
block/ll_rw_blk.c
@@ -2745 +2745 @@
 		return 0;
 
 	/*
-	 * not contigious
+	 * not contiguous
 	 */
 	if (req->sector + req->nr_sectors != next->sector)
 		return 0;
@@ -3415 +3415 @@
  *
  * Description:
  *     Ends all I/O on a request. It does not handle partial completions,
- *     unless the driver actually implements this in its completionc callback
+ *     unless the driver actually implements this in its completion callback
  *     through requeueing. Theh actual completion happens out-of-order,
  *     through a softirq handler. The user must have registered a completion
  *     callback through blk_queue_softirq_done().
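The first hunk's contiguity test is worth a closer look: two requests can only be merged when the first ends exactly where the second begins. A stripped-down sketch, with the `sector`/`nr_sectors` fields borrowed from the hunk and everything else hypothetical:

```c
/* Minimal model of the back-to-back check from ll_rw_blk.c: a request
 * covering [sector, sector + nr_sectors) is contiguous with `next`
 * only if it runs right up to next->sector.  Not the kernel struct. */
#include <assert.h>
#include <stdbool.h>

struct request {
    unsigned long sector;      /* start sector */
    unsigned long nr_sectors;  /* length in sectors */
};

static bool reqs_contiguous(const struct request *req,
                            const struct request *next)
{
    /* mirrors: if (req->sector + req->nr_sectors != next->sector)
     *              return 0;   -- i.e. "not contiguous"            */
    return req->sector + req->nr_sectors == next->sector;
}
```

Any gap or overlap between the two ranges fails the test, which is exactly what the early `return 0` in the hunk enforces before a merge is attempted.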
+1 -1
drivers/atm/firestream.c
··· 951 951 it most likely that the chip will notice it. It also prevents us 952 952 from having to wait for completion. On the other hand, we may 953 953 need to wait for completion anyway, to see if it completed 954 - succesfully. */ 954 + successfully. */ 955 955 956 956 switch (atm_vcc->qos.aal) { 957 957 case ATM_AAL2:
+1 -1
drivers/char/agp/amd-k7-agp.c
··· 118 118 return retval; 119 119 } 120 120 121 - /* Since we don't need contigious memory we just try 121 + /* Since we don't need contiguous memory we just try 122 122 * to get the gatt table once 123 123 */ 124 124
+1 -1
drivers/char/agp/ati-agp.c
··· 261 261 #endif 262 262 263 263 /* 264 - *Since we don't need contigious memory we just try 264 + *Since we don't need contiguous memory we just try 265 265 * to get the gatt table once 266 266 */ 267 267
+1 -1
drivers/char/agp/efficeon-agp.c
··· 177 177 178 178 179 179 /* 180 - * Since we don't need contigious memory we just try 180 + * Since we don't need contiguous memory we just try 181 181 * to get the gatt table once 182 182 */ 183 183
+3 -3
drivers/char/ipmi/ipmi_msghandler.c
@@ -3674 +3674 @@
 }
 #endif /* CONFIG_IPMI_PANIC_EVENT */
 
-static int has_paniced = 0;
+static int has_panicked = 0;
 
 static int panic_event(struct notifier_block *this,
 		       unsigned long event,
@@ -3683 +3683 @@
 	int i;
 	ipmi_smi_t intf;
 
-	if (has_paniced)
+	if (has_panicked)
 		return NOTIFY_DONE;
-	has_paniced = 1;
+	has_panicked = 1;
 
 	/* For every registered interface, set it to run to completion. */
 	for (i = 0; i < MAX_IPMI_INTERFACES; i++) {
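This hunk and the ip32-reset.c one above share a pattern beyond the spelling fix: a panic notifier guarded by a `has_panicked` flag so its work runs at most once even if panic handling re-enters. A minimal userspace analogue; the `NOTIFY_*` values and all names here are stand-ins, not the kernel notifier API:

```c
/* Sketch of an idempotent panic notifier: the flag makes a second
 * invocation (nested panic, multiple notifier calls) a no-op. */
#include <assert.h>

#define NOTIFY_DONE 0
#define NOTIFY_OK   1

static int has_panicked;
static int completion_runs;        /* counts real work, for illustration */

static int panic_event(unsigned long event)
{
    (void)event;
    if (has_panicked)              /* re-entry or second panic: bail out */
        return NOTIFY_DONE;
    has_panicked = 1;

    completion_runs++;             /* run-to-completion work goes here */
    return NOTIFY_OK;
}
```

Setting the flag before doing the work is the important ordering: if the work itself panics again, the recursive call still hits the guard.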
+1 -1
drivers/char/rio/riointr.c
··· 546 546 ** run out of space it will be set to the offset of the 547 547 ** next byte to copy from the packet data area. The packet 548 548 ** length field is decremented by the number of bytes that 549 - ** we succesfully removed from the packet. When this reaches 549 + ** we successfully removed from the packet. When this reaches 550 550 ** zero, we reset the offset pointer to be zero, and free 551 551 ** the packet from the front of the queue. 552 552 */
+1 -1
drivers/hwmon/asb100.c
··· 341 341 342 342 /* Note: we save and restore the fan minimum here, because its value is 343 343 determined in part by the fan divisor. This follows the principle of 344 - least suprise; the user doesn't expect the fan minimum to change just 344 + least surprise; the user doesn't expect the fan minimum to change just 345 345 because the divisor changed. */ 346 346 static ssize_t set_fan_div(struct device *dev, const char *buf, 347 347 size_t count, int nr)
+1 -1
drivers/hwmon/lm78.c
··· 358 358 359 359 /* Note: we save and restore the fan minimum here, because its value is 360 360 determined in part by the fan divisor. This follows the principle of 361 - least suprise; the user doesn't expect the fan minimum to change just 361 + least surprise; the user doesn't expect the fan minimum to change just 362 362 because the divisor changed. */ 363 363 static ssize_t set_fan_div(struct device *dev, const char *buf, 364 364 size_t count, int nr)
+1 -1
drivers/hwmon/lm80.c
··· 253 253 254 254 /* Note: we save and restore the fan minimum here, because its value is 255 255 determined in part by the fan divisor. This follows the principle of 256 - least suprise; the user doesn't expect the fan minimum to change just 256 + least surprise; the user doesn't expect the fan minimum to change just 257 257 because the divisor changed. */ 258 258 static ssize_t set_fan_div(struct device *dev, const char *buf, 259 259 size_t count, int nr)
+1 -1
drivers/hwmon/lm87.c
··· 421 421 422 422 /* Note: we save and restore the fan minimum here, because its value is 423 423 determined in part by the fan clock divider. This follows the principle 424 - of least suprise; the user doesn't expect the fan minimum to change just 424 + of least surprise; the user doesn't expect the fan minimum to change just 425 425 because the divider changed. */ 426 426 static ssize_t set_fan_div(struct device *dev, const char *buf, 427 427 size_t count, int nr)
+1 -1
drivers/hwmon/sis5595.c
··· 380 380 381 381 /* Note: we save and restore the fan minimum here, because its value is 382 382 determined in part by the fan divisor. This follows the principle of 383 - least suprise; the user doesn't expect the fan minimum to change just 383 + least surprise; the user doesn't expect the fan minimum to change just 384 384 because the divisor changed. */ 385 385 static ssize_t set_fan_div(struct device *dev, const char *buf, 386 386 size_t count, int nr)
+1 -1
drivers/hwmon/smsc47m1.c
··· 207 207 208 208 /* Note: we save and restore the fan minimum here, because its value is 209 209 determined in part by the fan clock divider. This follows the principle 210 - of least suprise; the user doesn't expect the fan minimum to change just 210 + of least surprise; the user doesn't expect the fan minimum to change just 211 211 because the divider changed. */ 212 212 static ssize_t set_fan_div(struct device *dev, const char *buf, 213 213 size_t count, int nr)
+1 -1
drivers/hwmon/w83627hf.c
··· 781 781 782 782 /* Note: we save and restore the fan minimum here, because its value is 783 783 determined in part by the fan divisor. This follows the principle of 784 - least suprise; the user doesn't expect the fan minimum to change just 784 + least surprise; the user doesn't expect the fan minimum to change just 785 785 because the divisor changed. */ 786 786 static ssize_t 787 787 store_fan_div_reg(struct device *dev, const char *buf, size_t count, int nr)
+1 -1
drivers/hwmon/w83781d.c
··· 630 630 631 631 /* Note: we save and restore the fan minimum here, because its value is 632 632 determined in part by the fan divisor. This follows the principle of 633 - least suprise; the user doesn't expect the fan minimum to change just 633 + least surprise; the user doesn't expect the fan minimum to change just 634 634 because the divisor changed. */ 635 635 static ssize_t 636 636 store_fan_div_reg(struct device *dev, const char *buf, size_t count, int nr)
+1 -1
drivers/hwmon/w83792d.c
··· 463 463 464 464 /* Note: we save and restore the fan minimum here, because its value is 465 465 determined in part by the fan divisor. This follows the principle of 466 - least suprise; the user doesn't expect the fan minimum to change just 466 + least surprise; the user doesn't expect the fan minimum to change just 467 467 because the divisor changed. */ 468 468 static ssize_t 469 469 store_fan_div(struct device *dev, struct device_attribute *attr,
+1 -1
drivers/ide/ide-disk.c
··· 37 37 * Version 1.15 convert all calls to ide_raw_taskfile 38 38 * since args will return register content. 39 39 * Version 1.16 added suspend-resume-checkpower 40 - * Version 1.17 do flush on standy, do flush on ATA < ATA6 40 + * Version 1.17 do flush on standby, do flush on ATA < ATA6 41 41 * fix wcache setup. 42 42 */ 43 43
+1 -1
drivers/ide/ide-io.c
··· 1665 1665 * Initialize a request before we fill it in and send it down to 1666 1666 * ide_do_drive_cmd. Commands must be set up by this function. Right 1667 1667 * now it doesn't do a lot, but if that changes abusers will have a 1668 - * nasty suprise. 1668 + * nasty surprise. 1669 1669 */ 1670 1670 1671 1671 void ide_init_drive_cmd (struct request *rq)
+1 -1
drivers/ieee1394/hosts.c
··· 103 103 * driver specific parts, enable the controller and make it available 104 104 * to the general subsystem using hpsb_add_host(). 105 105 * 106 - * Return Value: a pointer to the &hpsb_host if succesful, %NULL if 106 + * Return Value: a pointer to the &hpsb_host if successful, %NULL if 107 107 * no memory was available. 108 108 */ 109 109 static DEFINE_MUTEX(host_num_alloc);
+1 -1
drivers/ieee1394/ieee1394_core.h
··· 139 139 140 140 /* 141 141 * Hand over received selfid packet to the core. Complement check (second 142 - * quadlet is complement of first) is expected to be done and succesful. 142 + * quadlet is complement of first) is expected to be done and successful. 143 143 */ 144 144 void hpsb_selfid_received(struct hpsb_host *host, quadlet_t sid); 145 145
+1 -1
drivers/isdn/divert/isdn_divert.c
··· 592 592 } /* put_address */ 593 593 594 594 /*************************************/ 595 - /* report a succesfull interrogation */ 595 + /* report a successful interrogation */ 596 596 /*************************************/ 597 597 static int interrogate_success(isdn_ctrl *ic, struct call_struc *cs) 598 598 { char *src = ic->parm.dss1_io.data;
+1 -1
drivers/media/video/bt8xx/bttv-cards.c
··· 4848 4848 * 4849 4849 * The IVC120G security card has 4 i2c controlled TDA8540 matrix 4850 4850 * swichers to provide 16 channels to MUX0. The TDA8540's have 4851 - * 4 indepedant outputs and as such the IVC120G also has the 4851 + * 4 independent outputs and as such the IVC120G also has the 4852 4852 * optional "Monitor Out" bus. This allows the card to be looking 4853 4853 * at one input while the monitor is looking at another. 4854 4854 *
+2 -2
drivers/net/3c501.c
··· 508 508 * speak of. We simply pull the packet out of its PIO buffer (which is slow) 509 509 * and queue it for the kernel. Then we reset the card for the next packet. 510 510 * 511 - * We sometimes get suprise interrupts late both because the SMP IRQ delivery 511 + * We sometimes get surprise interrupts late both because the SMP IRQ delivery 512 512 * is message passing and because the card sometimes seems to deliver late. I 513 513 * think if it is part way through a receive and the mode is changed it carries 514 514 * on receiving and sends us an interrupt. We have to band aid all these cases 515 - * to get a sensible 150kbytes/second performance. Even then you want a small 515 + * to get a sensible 150kBytes/second performance. Even then you want a small 516 516 * TCP window. 517 517 */ 518 518
+1 -1
drivers/net/irda/irport.c
··· 386 386 /* Locking notes : this function may be called from irq context with 387 387 * spinlock, via irport_write_wakeup(), or from non-interrupt without 388 388 * spinlock (from the task timer). Yuck ! 389 - * This is ugly, and unsafe is the spinlock is not already aquired. 389 + * This is ugly, and unsafe is the spinlock is not already acquired. 390 390 * This will be fixed when irda-task get rewritten. 391 391 * Jean II */ 392 392 if (!spin_is_locked(&self->lock)) {
+1 -1
drivers/net/pcmcia/smc91c92_cs.c
··· 1883 1883 /* Set the Window 1 control, configuration and station addr registers. 1884 1884 No point in writing the I/O base register ;-> */ 1885 1885 SMC_SELECT_BANK(1); 1886 - /* Automatically release succesfully transmitted packets, 1886 + /* Automatically release successfully transmitted packets, 1887 1887 Accept link errors, counter and Tx error interrupts. */ 1888 1888 outw(CTL_AUTO_RELEASE | CTL_TE_ENABLE | CTL_CR_ENABLE, 1889 1889 ioaddr + CONTROL);
+1 -1
drivers/net/wireless/ipw2100.c
··· 1485 1485 * 1486 1486 * Sending the PREPARE_FOR_POWER_DOWN will restrict the 1487 1487 * hardware from going into standby mode and will transition 1488 - * out of D0-standy if it is already in that state. 1488 + * out of D0-standby if it is already in that state. 1489 1489 * 1490 1490 * STATUS_PREPARE_POWER_DOWN_COMPLETE will be sent by the 1491 1491 * driver upon completion. Once received, the driver can
+1 -1
drivers/parport/parport_gsc.c
··· 15 15 * Phil Blundell <philb@gnu.org> 16 16 * Tim Waugh <tim@cyberelk.demon.co.uk> 17 17 * Jose Renau <renau@acm.org> 18 - * David Campbell <campbell@torque.net> 18 + * David Campbell 19 19 * Andrea Arcangeli 20 20 */ 21 21
+1 -1
drivers/parport/parport_gsc.h
··· 24 24 * Phil Blundell <Philip.Blundell@pobox.com> 25 25 * Tim Waugh <tim@cyberelk.demon.co.uk> 26 26 * Jose Renau <renau@acm.org> 27 - * David Campbell <campbell@torque.net> 27 + * David Campbell 28 28 * Andrea Arcangeli 29 29 */ 30 30
+1 -1
drivers/parport/parport_pc.c
··· 3 3 * Authors: Phil Blundell <philb@gnu.org> 4 4 * Tim Waugh <tim@cyberelk.demon.co.uk> 5 5 * Jose Renau <renau@acm.org> 6 - * David Campbell <campbell@torque.net> 6 + * David Campbell 7 7 * Andrea Arcangeli 8 8 * 9 9 * based on work by Grant Guenther <grant@torque.net> and Phil Blundell.
+1 -1
drivers/parport/procfs.c
··· 1 1 /* Sysctl interface for parport devices. 2 2 * 3 - * Authors: David Campbell <campbell@torque.net> 3 + * Authors: David Campbell 4 4 * Tim Waugh <tim@cyberelk.demon.co.uk> 5 5 * Philip Blundell <philb@gnu.org> 6 6 * Andrea Arcangeli
+1 -1
drivers/pci/pci.c
··· 369 369 370 370 /* 371 371 * Give firmware a chance to be called, such as ACPI _PRx, _PSx 372 - * Firmware method after natice method ? 372 + * Firmware method after native method ? 373 373 */ 374 374 if (platform_pci_set_power_state) 375 375 platform_pci_set_power_state(dev, state);
+7 -7
drivers/s390/scsi/zfcp_erp.c
@@ -167 +167 @@
  *	initiates adapter recovery which is done
  *	asynchronously
  *
- * returns:	0	- initiated action succesfully
+ * returns:	0	- initiated action successfully
  *		<0	- failed to initiate action
  */
 int
@@ -203 +203 @@
  * purpose:	Wrappper for zfcp_erp_adapter_reopen_internal
  *		used to ensure the correct locking
  *
- * returns:	0	- initiated action succesfully
+ * returns:	0	- initiated action successfully
  *		<0	- failed to initiate action
  */
 int
@@ -469 +469 @@
  *	initiates Forced Reopen recovery which is done
  *	asynchronously
  *
- * returns:	0	- initiated action succesfully
+ * returns:	0	- initiated action successfully
  *		<0	- failed to initiate action
  */
 static int
@@ -509 +509 @@
  * purpose:	Wrappper for zfcp_erp_port_forced_reopen_internal
  *		used to ensure the correct locking
  *
- * returns:	0	- initiated action succesfully
+ * returns:	0	- initiated action successfully
  *		<0	- failed to initiate action
  */
 int
@@ -536 +536 @@
  *	initiates Reopen recovery which is done
  *	asynchronously
  *
- * returns:	0	- initiated action succesfully
+ * returns:	0	- initiated action successfully
  *		<0	- failed to initiate action
  */
 static int
@@ -605 +605 @@
  *	initiates Reopen recovery which is done
  *	asynchronously
  *
- * returns:	0	- initiated action succesfully
+ * returns:	0	- initiated action successfully
  *		<0	- failed to initiate action
  */
 static int
@@ -1805 +1805 @@
  * purpose:	Wrappper for zfcp_erp_port_reopen_all_internal
  *		used to ensure the correct locking
  *
- * returns:	0	- initiated action succesfully
+ * returns:	0	- initiated action successfully
  *		<0	- failed to initiate action
  */
 int
+1 -1
drivers/scsi/NCR5380.c
··· 500 500 /* 501 501 * Function : int should_disconnect (unsigned char cmd) 502 502 * 503 - * Purpose : decide weather a command would normally disconnect or 503 + * Purpose : decide whether a command would normally disconnect or 504 504 * not, since if it won't disconnect we should go to sleep. 505 505 * 506 506 * Input : cmd - opcode of SCSI command
+1 -1
drivers/scsi/advansys.c
··· 12374 12374 ASC_PRINT1( 12375 12375 "AscInitFromEEP: Failed to re-write EEPROM with %d errors.\n", i); 12376 12376 } else { 12377 - ASC_PRINT("AscInitFromEEP: Succesfully re-wrote EEPROM."); 12377 + ASC_PRINT("AscInitFromEEP: Successfully re-wrote EEPROM.\n"); 12378 12378 } 12379 12379 } 12380 12380 return (warn_code);
+1 -1
drivers/scsi/dc395x.c
··· 3771 3771 * @target: The target for the new device. 3772 3772 * @lun: The lun for the new device. 3773 3773 * 3774 - * Return the new device if succesfull or NULL on failure. 3774 + * Return the new device if successful or NULL on failure. 3775 3775 **/ 3776 3776 static struct DeviceCtlBlk *device_alloc(struct AdapterCtlBlk *acb, 3777 3777 u8 target, u8 lun)
+4 -4
drivers/scsi/ibmmca.c
··· 760 760 while (!got_interrupt(host_index)) 761 761 barrier(); 762 762 763 - /*if command succesful, break */ 763 + /*if command successful, break */ 764 764 if ((stat_result(host_index) == IM_SCB_CMD_COMPLETED) || (stat_result(host_index) == IM_SCB_CMD_COMPLETED_WITH_RETRIES)) 765 765 return 1; 766 766 } ··· 885 885 while (!got_interrupt(host_index)) 886 886 barrier(); 887 887 888 - /*if command succesful, break */ 888 + /*if command successful, break */ 889 889 if (stat_result(host_index) == IM_IMMEDIATE_CMD_COMPLETED) 890 890 return 1; 891 891 } ··· 921 921 return 2; 922 922 } else 923 923 global_command_error_excuse = 0; 924 - /*if command succesful, break */ 924 + /*if command successful, break */ 925 925 if (stat_result(host_index) == IM_IMMEDIATE_CMD_COMPLETED) 926 926 return 1; 927 927 } ··· 959 959 /* did not work, finish */ 960 960 return 1; 961 961 } 962 - /*if command succesful, break */ 962 + /*if command successful, break */ 963 963 if (stat_result(host_index) == IM_IMMEDIATE_CMD_COMPLETED) 964 964 return 1; 965 965 }
-3
drivers/scsi/imm.c
··· 3 3 * 4 4 * (The IMM is the embedded controller in the ZIP Plus drive.) 5 5 * 6 - * Current Maintainer: David Campbell (Perth, Western Australia) 7 - * campbell@torque.net 8 - * 9 6 * My unoffical company acronym list is 21 pages long: 10 7 * FLA: Four letter acronym with built in facility for 11 8 * future expansion to five letters.
+1 -1
drivers/scsi/imm.h
··· 2 2 /* Driver for the Iomega MatchMaker parallel port SCSI HBA embedded in 3 3 * the Iomega ZIP Plus drive 4 4 * 5 - * (c) 1998 David Campbell campbell@torque.net 5 + * (c) 1998 David Campbell 6 6 * 7 7 * Please note that I live in Perth, Western Australia. GMT+0800 8 8 */
+2 -2
drivers/scsi/ips.c
··· 6438 6438 /* VPP failure */ 6439 6439 return (1); 6440 6440 6441 - /* check for succesful flash */ 6441 + /* check for successful flash */ 6442 6442 if (status & 0x30) 6443 6443 /* sequence error */ 6444 6444 return (1); ··· 6550 6550 /* VPP failure */ 6551 6551 return (1); 6552 6552 6553 - /* check for succesful flash */ 6553 + /* check for successful flash */ 6554 6554 if (status & 0x30) 6555 6555 /* sequence error */ 6556 6556 return (1);
-2
drivers/scsi/ppa.c
··· 6 6 * (c) 1995,1996 Grant R. Guenther, grant@torque.net, 7 7 * under the terms of the GNU General Public License. 8 8 * 9 - * Current Maintainer: David Campbell (Perth, Western Australia, GMT+0800) 10 - * campbell@torque.net 11 9 */ 12 10 13 11 #include <linux/config.h>
+1 -1
drivers/scsi/ppa.h
··· 2 2 * the Iomega ZIP drive 3 3 * 4 4 * (c) 1996 Grant R. Guenther grant@torque.net 5 - * David Campbell campbell@torque.net 5 + * David Campbell 6 6 * 7 7 * All comments to David. 8 8 */
+1 -1
drivers/scsi/st.c
··· 2818 2818 (cmdstatp->sense_hdr.sense_key == NO_SENSE || 2819 2819 cmdstatp->sense_hdr.sense_key == RECOVERED_ERROR) && 2820 2820 undone == 0) { 2821 - ioctl_result = 0; /* EOF written succesfully at EOM */ 2821 + ioctl_result = 0; /* EOF written successfully at EOM */ 2822 2822 if (fileno >= 0) 2823 2823 fileno++; 2824 2824 STps->drv_file = fileno;
+1 -1
drivers/telephony/ixj.c
··· 5712 5712 return 1; 5713 5713 } 5714 5714 5715 - int ixj_set_tone_off(unsigned short arg, IXJ *j) 5715 + static int ixj_set_tone_off(unsigned short arg, IXJ *j) 5716 5716 { 5717 5717 j->tone_off_time = arg; 5718 5718 if (ixj_WriteDSPCommand(0x6E05, j)) /* Set Tone Off Period */
+2 -2
drivers/usb/storage/usb.h
··· 160 160 }; 161 161 162 162 /* Convert between us_data and the corresponding Scsi_Host */ 163 - static struct Scsi_Host inline *us_to_host(struct us_data *us) { 163 + static inline struct Scsi_Host *us_to_host(struct us_data *us) { 164 164 return container_of((void *) us, struct Scsi_Host, hostdata); 165 165 } 166 - static struct us_data inline *host_to_us(struct Scsi_Host *host) { 166 + static inline struct us_data *host_to_us(struct Scsi_Host *host) { 167 167 return (struct us_data *) host->hostdata; 168 168 } 169 169
+1 -1
fs/9p/mux.c
··· 712 712 * v9fs_send_request - send 9P request 713 713 * The function can sleep until the request is scheduled for sending. 714 714 * The function can be interrupted. Return from the function is not 715 - * a guarantee that the request is sent succesfully. Can return errors 715 + * a guarantee that the request is sent successfully. Can return errors 716 716 * that can be retrieved by PTR_ERR macros. 717 717 * 718 718 * @m: mux data
+1 -1
fs/aio.c
··· 641 641 * invoked both for initial i/o submission and 642 642 * subsequent retries via the aio_kick_handler. 643 643 * Expects to be invoked with iocb->ki_ctx->lock 644 - * already held. The lock is released and reaquired 644 + * already held. The lock is released and reacquired 645 645 * as needed during processing. 646 646 * 647 647 * Calls the iocb retry method (already setup for the
+1 -1
fs/jffs2/summary.c
··· 43 43 return -ENOMEM; 44 44 } 45 45 46 - dbg_summary("returned succesfully\n"); 46 + dbg_summary("returned successfully\n"); 47 47 48 48 return 0; 49 49 }
+4 -4
fs/jfs/jfs_extent.c
··· 126 126 127 127 /* allocate the disk blocks for the extent. initially, extBalloc() 128 128 * will try to allocate disk blocks for the requested size (xlen). 129 - * if this fails (xlen contigious free blocks not avaliable), it'll 129 + * if this fails (xlen contiguous free blocks not avaliable), it'll 130 130 * try to allocate a smaller number of blocks (producing a smaller 131 131 * extent), with this smaller number of blocks consisting of the 132 132 * requested number of blocks rounded down to the next smaller ··· 493 493 * 494 494 * initially, we will try to allocate disk blocks for the 495 495 * requested size (nblocks). if this fails (nblocks 496 - * contigious free blocks not avaliable), we'll try to allocate 496 + * contiguous free blocks not avaliable), we'll try to allocate 497 497 * a smaller number of blocks (producing a smaller extent), with 498 498 * this smaller number of blocks consisting of the requested 499 499 * number of blocks rounded down to the next smaller power of 2 ··· 529 529 530 530 /* get the number of blocks to initially attempt to allocate. 531 531 * we'll first try the number of blocks requested unless this 532 - * number is greater than the maximum number of contigious free 532 + * number is greater than the maximum number of contiguous free 533 533 * blocks in the map. in that case, we'll start off with the 534 534 * maximum free. 535 535 */ ··· 586 586 * in place. if this fails, we'll try to move the extent 587 587 * to a new set of blocks. if moving the extent, we initially 588 588 * will try to allocate disk blocks for the requested size 589 - * (nnew). if this fails (nnew contigious free blocks not 589 + * (nnew). if this fails (nnew contiguous free blocks not 590 590 * avaliable), we'll try to allocate a smaller number of 591 591 * blocks (producing a smaller extent), with this smaller 592 592 * number of blocks consisting of the requested number of
+1 -1
include/asm-i386/system.h
··· 427 427 * does not enforce ordering, since there is no data dependency between 428 428 * the read of "a" and the read of "b". Therefore, on some CPUs, such 429 429 * as Alpha, "y" could be set to 3 and "x" to 0. Use rmb() 430 - * in cases like thiswhere there are no data dependencies. 430 + * in cases like this where there are no data dependencies. 431 431 **/ 432 432 433 433 #define read_barrier_depends() do { } while(0)
+1 -1
include/asm-m32r/system.h
··· 318 318 * does not enforce ordering, since there is no data dependency between 319 319 * the read of "a" and the read of "b". Therefore, on some CPUs, such 320 320 * as Alpha, "y" could be set to 3 and "x" to 0. Use rmb() 321 - * in cases like thiswhere there are no data dependencies. 321 + * in cases like this where there are no data dependencies. 322 322 **/ 323 323 324 324 #define read_barrier_depends() do { } while (0)
+1 -1
include/linux/sunrpc/gss_api.h
··· 126 126 /* Just increments the mechanism's reference count and returns its input: */ 127 127 struct gss_api_mech * gss_mech_get(struct gss_api_mech *); 128 128 129 - /* For every succesful gss_mech_get or gss_mech_get_by_* call there must be a 129 + /* For every successful gss_mech_get or gss_mech_get_by_* call there must be a 130 130 * corresponding call to gss_mech_put. */ 131 131 void gss_mech_put(struct gss_api_mech *); 132 132
+2 -2
lib/kernel_lock.c
··· 14 14 * The 'big kernel semaphore' 15 15 * 16 16 * This mutex is taken and released recursively by lock_kernel() 17 - * and unlock_kernel(). It is transparently dropped and reaquired 17 + * and unlock_kernel(). It is transparently dropped and reacquired 18 18 * over schedule(). It is used to protect legacy code that hasn't 19 19 * been migrated to a proper locking design yet. 20 20 * ··· 92 92 * The 'big kernel lock' 93 93 * 94 94 * This spinlock is taken and released recursively by lock_kernel() 95 - * and unlock_kernel(). It is transparently dropped and reaquired 95 + * and unlock_kernel(). It is transparently dropped and reacquired 96 96 * over schedule(). It is used to protect legacy code that hasn't 97 97 * been migrated to a proper locking design yet. 98 98 *
-9
lib/zlib_inflate/inftrees.c
··· 8 8 9 9 #define MAXBITS 15 10 10 11 - const char inflate_copyright[] = 12 - " inflate 1.2.3 Copyright 1995-2005 Mark Adler "; 13 - /* 14 - If you use the zlib library in a product, an acknowledgment is welcome 15 - in the documentation of your product. If for some reason you cannot 16 - include such an acknowledgment, I would appreciate that you keep this 17 - copyright string in the executable of your product. 18 - */ 19 - 20 11 /* 21 12 Build a set of tables to decode the provided canonical Huffman code. 22 13 The code lengths are lens[0..codes-1]. The result starts at *table,
+1 -1
mm/page_alloc.c
··· 266 266 * satisfies the following equation: 267 267 * P = B & ~(1 << O) 268 268 * 269 - * Assumption: *_mem_map is contigious at least up to MAX_ORDER 269 + * Assumption: *_mem_map is contiguous at least up to MAX_ORDER 270 270 */ 271 271 static inline struct page * 272 272 __page_find_buddy(struct page *page, unsigned long page_idx, unsigned int order)
+2 -2
mm/readahead.c
··· 390 390 * Read 'nr_to_read' pages starting at page 'offset'. If the flag 'block' 391 391 * is set wait till the read completes. Otherwise attempt to read without 392 392 * blocking. 393 - * Returns 1 meaning 'success' if read is succesfull without switching off 394 - * readhaead mode. Otherwise return failure. 393 + * Returns 1 meaning 'success' if read is successful without switching off 394 + * readahead mode. Otherwise return failure. 395 395 */ 396 396 static int 397 397 blockable_page_cache_readahead(struct address_space *mapping, struct file *filp,
+1 -1
net/sunrpc/auth_gss/gss_krb5_mech.c
··· 169 169 } 170 170 171 171 ctx_id->internal_ctx_id = ctx; 172 - dprintk("RPC: Succesfully imported new context.\n"); 172 + dprintk("RPC: Successfully imported new context.\n"); 173 173 return 0; 174 174 175 175 out_err_free_key2:
+1 -1
net/sunrpc/auth_gss/gss_spkm3_mech.c
··· 201 201 202 202 ctx_id->internal_ctx_id = ctx; 203 203 204 - dprintk("Succesfully imported new spkm context.\n"); 204 + dprintk("Successfully imported new spkm context.\n"); 205 205 return 0; 206 206 207 207 out_err_free_key2:
+1 -1
sound/core/seq/seq_memory.h
··· 31 31 struct snd_seq_event_cell *next; /* next cell */ 32 32 }; 33 33 34 - /* design note: the pool is a contigious block of memory, if we dynamicly 34 + /* design note: the pool is a contiguous block of memory, if we dynamicly 35 35 want to add additional cells to the pool be better store this in another 36 36 pool as we need to know the base address of the pool when releasing 37 37 memory. */
+14 -14
sound/oss/sb_ess.c
··· 97 97 * 98 98 * The documentation is an adventure: it's close but not fully accurate. I 99 99 * found out that after a reset some registers are *NOT* reset, though the 100 - * docs say the would be. Interresting ones are 0x7f, 0x7d and 0x7a. They are 101 - * related to the Audio 2 channel. I also was suprised about the consequenses 100 + * docs say the would be. Interesting ones are 0x7f, 0x7d and 0x7a. They are 101 + * related to the Audio 2 channel. I also was surprised about the consequences 102 102 * of writing 0x00 to 0x7f (which should be done by reset): The ES1887 moves 103 103 * into ES1888 mode. This means that it claims IRQ 11, which happens to be my 104 104 * ISDN adapter. Needless to say it no longer worked. I now understand why 105 105 * after rebooting 0x7f already was 0x05, the value of my choice: the BIOS 106 106 * did it. 107 107 * 108 - * Oh, and this is another trap: in ES1887 docs mixer register 0x70 is decribed 109 - * as if it's exactly the same as register 0xa1. This is *NOT* true. The 110 - * description of 0x70 in ES1869 docs is accurate however. 108 + * Oh, and this is another trap: in ES1887 docs mixer register 0x70 is 109 + * described as if it's exactly the same as register 0xa1. This is *NOT* true. 110 + * The description of 0x70 in ES1869 docs is accurate however. 111 111 * Well, the assumption about ES1869 was wrong: register 0x70 is very much 112 - * like register 0xa1, except that bit 7 is allways 1, whatever you want 112 + * like register 0xa1, except that bit 7 is always 1, whatever you want 113 113 * it to be. 114 114 * 115 115 * When using audio 2 mixer register 0x72 seems te be meaningless. Only 0xa2 ··· 117 117 * 118 118 * Software reset not being able to reset all registers is great! Especially
119 119 * the fact that register 0x78 isn't reset is great when you wanna change back 120 - * to single dma operation (simplex): audio 2 is still operation, and uses the 121 - * same dma as audio 1: your ess changes into a funny echo machine. 120 + * to single dma operation (simplex): audio 2 is still operational, and uses 121 + * the same dma as audio 1: your ess changes into a funny echo machine. 122 122 * 123 - * Received the new that ES1688 is detected as a ES1788. Did some thinking: 123 + * Received the news that ES1688 is detected as a ES1788. Did some thinking: 124 124 * the ES1887 detection scheme suggests in step 2 to try if bit 3 of register 125 125 * 0x64 can be changed. This is inaccurate, first I inverted the * check: "If 126 126 * can be modified, it's a 1688", which lead to a correct detection ··· 135 135 * About recognition of ESS chips 136 136 * 137 137 * The distinction of ES688, ES1688, ES1788, ES1887 and ES1888 is described in 138 - * a (preliminary ??) datasheet on ES1887. It's aim is to identify ES1887, but 138 + * a (preliminary ??) datasheet on ES1887. Its aim is to identify ES1887, but 139 139 * during detection the text claims that "this chip may be ..." when a step 140 140 * fails. This scheme is used to distinct between the above chips. 141 141 * It appears however that some PnP chips like ES1868 are recognized as ES1788 ··· 156 156 * 157 157 * The existing ES1688 support didn't take care of the ES1688+ recording 158 158 * levels very well. Whenever a device was selected (recmask) for recording 159 - * it's recording level was loud, and it couldn't be changed. The fact that 159 + * its recording level was loud, and it couldn't be changed. The fact that 160 160 * internal register 0xb4 could take care of RECLEV, didn't work meaning until 161 - * it's value was restored every time the chip was reset; this reset the 161 + * its value was restored every time the chip was reset; this reset the 162 162 * value of 0xb4 too. I guess that's what 4front also had (have?) trouble with.
163 163 * 164 164 * About ES1887 support: ··· 169 169 * the latter case the recording volumes are 0. 170 170 * Now recording levels of inputs can be controlled, by changing the playback 171 171 * levels. Futhermore several devices can be recorded together (which is not 172 - * possible with the ES1688. 172 + * possible with the ES1688). 173 173 * Besides the separate recording level control for each input, the common 174 - * recordig level can also be controlled by RECLEV as described above. 174 + * recording level can also be controlled by RECLEV as described above. 175 175 * 176 176 * Not only ES1887 have this recording mixer. I know the following from the 177 177 * documentation:
+1 -1
sound/pci/cs5535audio/cs5535audio_pcm.c
··· 143 143 if (dma->periods == periods && dma->period_bytes == period_bytes) 144 144 return 0; 145 145 146 - /* the u32 cast is okay because in snd*create we succesfully told 146 + /* the u32 cast is okay because in snd*create we successfully told 147 147 pci alloc that we're only 32 bit capable so the uppper will be 0 */ 148 148 addr = (u32) substream->runtime->dma_addr; 149 149 desc_addr = (u32) dma->desc_buf.addr;