Automatic merge of /spare/repo/linux-2.6/.git branch HEAD

Committed by Jeff Garzik
d7aaf481 decc6d0b

+4608 -1713
+128
Documentation/cpu-freq/cpufreq-stats.txt
···
CPU frequency and voltage scaling statistics in the Linux(TM) kernel


		Linux cpufreq-stats driver

		- information for users -


	Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>

Contents
1. Introduction
2. Statistics Provided (with example)
3. Configuring cpufreq-stats


1. Introduction

cpufreq-stats is a driver that provides CPU frequency statistics for each CPU.
These statistics are provided in /sysfs as a bunch of read-only interfaces. This
interface (when configured) will appear in a separate directory under cpufreq
in /sysfs (<sysfs root>/devices/system/cpu/cpuX/cpufreq/stats/) for each CPU.
Various statistics will form read-only files under this directory.

This driver is designed to be independent of any particular cpufreq_driver
that may be running on your CPU. So, it will work with any cpufreq_driver.


2. Statistics Provided (with example)

cpufreq stats provides the following statistics (explained in detail below).
-  time_in_state
-  total_trans
-  trans_table

All the statistics will be from the time the stats driver has been inserted
to the time when a read of a particular statistic is done. Obviously, the stats
driver will not have any information about frequency transitions before
the stats driver insertion.

--------------------------------------------------------------------------------
<mysystem>:/sys/devices/system/cpu/cpu0/cpufreq/stats # ls -l
total 0
drwxr-xr-x  2 root root    0 May 14 16:06 .
drwxr-xr-x  3 root root    0 May 14 15:58 ..
-r--r--r--  1 root root 4096 May 14 16:06 time_in_state
-r--r--r--  1 root root 4096 May 14 16:06 total_trans
-r--r--r--  1 root root 4096 May 14 16:06 trans_table
--------------------------------------------------------------------------------

-  time_in_state
This gives the amount of time spent in each of the frequencies supported by
this CPU. The cat output will have one "<frequency> <time>" pair per line,
meaning this CPU spent <time> usertime units of time at <frequency>. The output
will have one line for each of the supported frequencies. The usertime unit
here is 10ms (similar to other times exported in /proc).

--------------------------------------------------------------------------------
<mysystem>:/sys/devices/system/cpu/cpu0/cpufreq/stats # cat time_in_state
3600000 2089
3400000 136
3200000 34
3000000 67
2800000 172488
--------------------------------------------------------------------------------


-  total_trans
This gives the total number of frequency transitions on this CPU. The cat
output will have a single count, which is the total number of frequency
transitions.

--------------------------------------------------------------------------------
<mysystem>:/sys/devices/system/cpu/cpu0/cpufreq/stats # cat total_trans
20
--------------------------------------------------------------------------------

-  trans_table
This gives fine-grained information about all the CPU frequency transitions.
The cat output here is a two-dimensional matrix, where an entry <i,j>
(row i, column j) represents the number of transitions from Freq_i to Freq_j.
Freq_i is in descending order with increasing rows and Freq_j is in descending
order with increasing columns. The output here also contains the actual
frequency values for each row and column for better readability.

--------------------------------------------------------------------------------
<mysystem>:/sys/devices/system/cpu/cpu0/cpufreq/stats # cat trans_table
   From  :    To
         :   3600000   3400000   3200000   3000000   2800000
  3600000:         0         5         0         0         0
  3400000:         4         0         2         0         0
  3200000:         0         1         0         2         0
  3000000:         0         0         1         0         3
  2800000:         0         0         0         2         0
--------------------------------------------------------------------------------


3. Configuring cpufreq-stats

To configure cpufreq-stats in your kernel:

	Config Main Menu
		Power management options (ACPI, APM)  --->
			CPU Frequency scaling  --->
				[*] CPU Frequency scaling
				<*>   CPU frequency translation statistics
				[*]     CPU frequency translation statistics details


"CPU Frequency scaling" (CONFIG_CPU_FREQ) should be enabled to configure
cpufreq-stats.

"CPU frequency translation statistics" (CONFIG_CPU_FREQ_STAT) provides the
basic statistics, which include time_in_state and total_trans.

"CPU frequency translation statistics details" (CONFIG_CPU_FREQ_STAT_DETAILS)
provides fine-grained cpufreq stats via trans_table. The reason for having a
separate config option for trans_table is:
-  trans_table goes against the traditional /sysfs rule of one value per
   interface. It provides a whole bunch of values in a two-dimensional
   matrix form.

Once these two options are enabled and your CPU supports cpufreq, you
will be able to see the CPU frequency statistics in /sysfs.
+8 -2
MAINTAINERS
···
 W: http://www.linux-usb.org/SpeedTouch/
 S: Maintained
 
+ALI1563 I2C DRIVER
+P: Rudolf Marek
+M: r.marek@sh.cvut.cz
+L: sensors@stimpy.netroedge.com
+S: Maintained
+
 ALPHA PORT
 P: Richard Henderson
 M: rth@twiddle.net
···
 S: Maintained
 
 SN-IA64 (Itanium) SUB-PLATFORM
-P: Jesse Barnes
-M: jbarnes@sgi.com
+P: Greg Edwards
+M: edwardsg@sgi.com
 L: linux-altix@sgi.com
 L: linux-ia64@vger.kernel.org
 W: http://www.sgi.com/altix
+1 -1
arch/h8300/kernel/process.c
···
 void default_idle(void)
 {
 	while(1) {
-		if (need_resched()) {
+		if (!need_resched()) {
 			local_irq_enable();
 			__asm__("sleep");
 			local_irq_disable();
+13 -1
arch/i386/kernel/cpu/cpufreq/Kconfig
···
 	  If in doubt, say N.
 
 config ELAN_CPUFREQ
-	tristate "AMD Elan"
+	tristate "AMD Elan SC400 and SC410"
 	select CPU_FREQ_TABLE
 	depends on X86_ELAN
 	---help---
···
 	  For details, take a look at <file:Documentation/cpu-freq/>.
 
 	  If in doubt, say N.
 
+config SC520_CPUFREQ
+	tristate "AMD Elan SC520"
+	select CPU_FREQ_TABLE
+	depends on X86_ELAN
+	---help---
+	  This adds the CPUFreq driver for AMD Elan SC520 processor.
+
+	  For details, take a look at <file:Documentation/cpu-freq/>.
+
+	  If in doubt, say N.
+
 config X86_POWERNOW_K6
 	tristate "AMD Mobile K6-2/K6-3 PowerNow!"
+1
arch/i386/kernel/cpu/cpufreq/Makefile
···
 obj-$(CONFIG_X86_POWERNOW_K8)	+= powernow-k8.o
 obj-$(CONFIG_X86_LONGHAUL)	+= longhaul.o
 obj-$(CONFIG_ELAN_CPUFREQ)	+= elanfreq.o
+obj-$(CONFIG_SC520_CPUFREQ)	+= sc520_freq.o
 obj-$(CONFIG_X86_LONGRUN)	+= longrun.o
 obj-$(CONFIG_X86_GX_SUSPMOD)	+= gx-suspmod.o
 obj-$(CONFIG_X86_SPEEDSTEP_ICH)	+= speedstep-ich.o
+53 -5
arch/i386/kernel/cpu/cpufreq/longhaul.c
···
 #include <linux/cpufreq.h>
 #include <linux/slab.h>
 #include <linux/string.h>
+#include <linux/pci.h>
 
 #include <asm/msr.h>
 #include <asm/timex.h>
···
 static void do_powersaver(union msr_longhaul *longhaul,
 			unsigned int clock_ratio_index)
 {
+	struct pci_dev *dev;
+	unsigned long flags;
+	unsigned int tmp_mask;
 	int version;
+	int i;
+	u16 pci_cmd;
+	u16 cmd_state[64];
 
 	switch (cpu_model) {
 	case CPU_EZRA_T:
···
 	longhaul->bits.SoftBusRatio4 = (clock_ratio_index & 0x10) >> 4;
 	longhaul->bits.EnableSoftBusRatio = 1;
 	longhaul->bits.RevisionKey = 0;
-	local_irq_disable();
-	wrmsrl(MSR_VIA_LONGHAUL, longhaul->val);
+
+	preempt_disable();
+	local_irq_save(flags);
+
+	/*
+	 * get current pci bus master state for all devices
+	 * and clear bus master bit
+	 */
+	dev = NULL;
+	i = 0;
+	do {
+		dev = pci_get_device(PCI_ANY_ID, PCI_ANY_ID, dev);
+		if (dev != NULL) {
+			pci_read_config_word(dev, PCI_COMMAND, &pci_cmd);
+			cmd_state[i++] = pci_cmd;
+			pci_cmd &= ~PCI_COMMAND_MASTER;
+			pci_write_config_word(dev, PCI_COMMAND, pci_cmd);
+		}
+	} while (dev != NULL);
+
+	tmp_mask = inb(0x21);	/* works on C3. save mask. */
+	outb(0xFE, 0x21);	/* TMR0 only */
+	outb(0xFF, 0x80);	/* delay */
+
 	local_irq_enable();
+
+	__hlt();
+	wrmsrl(MSR_VIA_LONGHAUL, longhaul->val);
 	__hlt();
 
+	local_irq_disable();
+
+	outb(tmp_mask, 0x21);	/* restore mask */
+
+	/* restore pci bus master state for all devices */
+	dev = NULL;
+	i = 0;
+	do {
+		dev = pci_get_device(PCI_ANY_ID, PCI_ANY_ID, dev);
+		if (dev != NULL) {
+			pci_cmd = cmd_state[i++];
+			pci_write_config_byte(dev, PCI_COMMAND, pci_cmd);
+		}
+	} while (dev != NULL);
+	local_irq_restore(flags);
+	preempt_enable();
+
+	/* disable bus ratio bit */
 	rdmsrl(MSR_VIA_LONGHAUL, longhaul->val);
 	longhaul->bits.EnableSoftBusRatio = 0;
 	longhaul->bits.RevisionKey = version;
-	local_irq_disable();
 	wrmsrl(MSR_VIA_LONGHAUL, longhaul->val);
-	local_irq_enable();
 }
 
 /**
···
 	longhaul_setup_voltagescaling();
 
 	policy->governor = CPUFREQ_DEFAULT_GOVERNOR;
-	policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
+	policy->cpuinfo.transition_latency = 200000;	/* nsec */
 	policy->cur = calc_speed(longhaul_get_cpu_mult());
 
 	ret = cpufreq_frequency_table_cpuinfo(policy, longhaul_table);
+8 -3
arch/i386/kernel/cpu/cpufreq/powernow-k7.c
···
 #include <linux/dmi.h>
 
 #include <asm/msr.h>
+#include <asm/timer.h>
 #include <asm/timex.h>
 #include <asm/io.h>
 #include <asm/system.h>
···
 	rdmsrl (MSR_K7_FID_VID_STATUS, fidvidstatus.val);
 
-	/* A K7 with powernow technology is set to max frequency by BIOS */
-	fsb = (10 * cpu_khz) / fid_codes[fidvidstatus.bits.MFID];
+	/* recalibrate cpu_khz */
+	result = recalibrate_cpu_khz();
+	if (result)
+		return result;
+
+	fsb = (10 * cpu_khz) / fid_codes[fidvidstatus.bits.CFID];
 	if (!fsb) {
 		printk(KERN_WARNING PFX "can not determine bus frequency\n");
 		return -EINVAL;
 	}
-	dprintk("FSB: %3d.%03d MHz\n", fsb/1000, fsb%1000);
+	dprintk("FSB: %3dMHz\n", fsb/1000);
 
 	if (dmi_check_system(powernow_dmi_table) || acpi_force) {
 		printk (KERN_INFO PFX "PSB/PST known to be broken. Trying ACPI instead\n");
+76 -37
arch/i386/kernel/cpu/cpufreq/powernow-k8.c
···
  *  GNU general public license version 2. See "COPYING" or
  *  http://www.gnu.org/licenses/gpl.html
  *
- *  Support : paul.devriendt@amd.com
+ *  Support : mark.langsdorf@amd.com
  *
  *  Based on the powernow-k7.c module written by Dave Jones.
  *  (C) 2003 Dave Jones <davej@codemonkey.org.uk> on behalf of SuSE Labs
···
  *  Valuable input gratefully received from Dave Jones, Pavel Machek,
  *  Dominik Brodowski, and others.
+ *  Originally developed by Paul Devriendt.
  *  Processor information obtained from Chapter 9 (Power and Thermal Management)
  *  of the "BIOS and Kernel Developer's Guide for the AMD Athlon 64 and AMD
  *  Opteron Processors" available for download from www.amd.com
  *
  *  Tables for specific CPUs can be inferred from
  *  http://www.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/30430.pdf
  */
···
 #include <linux/cpufreq.h>
 #include <linux/slab.h>
 #include <linux/string.h>
+#include <linux/cpumask.h>
 
 #include <asm/msr.h>
 #include <asm/io.h>
···
 #define PFX "powernow-k8: "
 #define BFX PFX "BIOS error: "
-#define VERSION "version 1.00.09e"
+#define VERSION "version 1.40.2"
 #include "powernow-k8.h"
 
 /* serialize freq changes  */
 static DECLARE_MUTEX(fidvid_sem);
 
 static struct powernow_k8_data *powernow_data[NR_CPUS];
+
+#ifndef CONFIG_SMP
+static cpumask_t cpu_core_map[1];
+#endif
 
 /* Return a frequency in MHz, given an input fid */
 static u32 find_freq_from_fid(u32 fid)
···
 {
 	u32 rvosteps = data->rvo;
 	u32 savefid = data->currfid;
+	u32 maxvid, lo;
 
 	dprintk("ph1 (cpu%d): start, currfid 0x%x, currvid 0x%x, reqvid 0x%x, rvo 0x%x\n",
 		smp_processor_id(),
 		data->currfid, data->currvid, reqvid, data->rvo);
+
+	rdmsr(MSR_FIDVID_STATUS, lo, maxvid);
+	maxvid = 0x1f & (maxvid >> 16);
+	dprintk("ph1 maxvid=0x%x\n", maxvid);
+	if (reqvid < maxvid) /* lower numbers are higher voltages */
+		reqvid = maxvid;
 
 	while (data->currvid > reqvid) {
 		dprintk("ph1: curr 0x%x, req vid 0x%x\n",
···
 		return 1;
 	}
 
 	while ((rvosteps > 0) && ((data->rvo + data->currvid) > reqvid)) {
-		if (data->currvid == 0) {
+		if (data->currvid == maxvid) {
 			rvosteps = 0;
 		} else {
 			dprintk("ph1: changing vid for rvo, req 0x%x\n",
···
 	 * BIOS and Kernel Developer's Guide, which is available on
 	 * www.amd.com
 	 */
-	printk(KERN_ERR PFX "BIOS error - no PSB\n");
+	printk(KERN_INFO PFX "BIOS error - no PSB or ACPI _PSS objects\n");
 	return -ENODEV;
 }
···
 	struct cpufreq_frequency_table *powernow_table;
 
 	if (acpi_processor_register_performance(&data->acpi_data, data->cpu)) {
-		dprintk("register performance failed\n");
+		dprintk("register performance failed: bad ACPI data\n");
 		return -EIO;
 	}
···
 		continue;
 	}
 
+	/* verify only 1 entry from the lo frequency table */
 	if (fid < HI_FID_TABLE_BOTTOM) {
 		if (cntlofreq) {
 			/* if both entries are the same, ignore this
 			 * one...
 			 */
 			if ((powernow_table[i].frequency != powernow_table[cntlofreq].frequency) ||
 			    (powernow_table[i].index != powernow_table[cntlofreq].index)) {
 				printk(KERN_ERR PFX "Too many lo freq table entries\n");
 				goto err_out_mem;
 			}
 
 			dprintk("double low frequency table entry, ignoring it.\n");
 			powernow_table[i].frequency = CPUFREQ_ENTRY_INVALID;
 			continue;
 		} else
 			cntlofreq = i;
 	}
 
 	if (powernow_table[i].frequency != (data->acpi_data.states[i].core_frequency * 1000)) {
···
 {
 	u32 fid;
 	u32 vid;
-	int res;
+	int res, i;
 	struct cpufreq_freqs freqs;
 
 	dprintk("cpu %d transition to index %u\n", smp_processor_id(), index);
···
 	}
 
 	if ((fid < HI_FID_TABLE_BOTTOM) && (data->currfid < HI_FID_TABLE_BOTTOM)) {
-		printk("ignoring illegal change in lo freq table-%x to 0x%x\n",
+		printk(KERN_ERR PFX
+		       "ignoring illegal change in lo freq table-%x to 0x%x\n",
 		       data->currfid, fid);
 		return 1;
 	}
···
 		smp_processor_id(), fid, vid);
 
 	freqs.cpu = data->cpu;
-
 	freqs.old = find_khz_freq_from_fid(data->currfid);
 	freqs.new = find_khz_freq_from_fid(fid);
-	cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE);
+	for_each_cpu_mask(i, cpu_core_map[data->cpu]) {
+		freqs.cpu = i;
+		cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE);
+	}
 
-	down(&fidvid_sem);
 	res = transition_fid_vid(data, fid, vid);
-	up(&fidvid_sem);
 
 	freqs.new = find_khz_freq_from_fid(data->currfid);
-	cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE);
-
+	for_each_cpu_mask(i, cpu_core_map[data->cpu]) {
+		freqs.cpu = i;
+		cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE);
+	}
 	return res;
 }
···
 	u32 checkvid = data->currvid;
 	unsigned int newstate;
 	int ret = -EIO;
+	int i;
 
 	/* only run on specific CPU from here on */
 	oldmask = current->cpus_allowed;
···
 		data->currfid, data->currvid);
 
 	if ((checkvid != data->currvid) || (checkfid != data->currfid)) {
-		printk(KERN_ERR PFX
+		printk(KERN_INFO PFX
 		       "error - out of sync, fid 0x%x 0x%x, vid 0x%x 0x%x\n",
 		       checkfid, data->currfid, checkvid, data->currvid);
 	}
 
 	if (cpufreq_frequency_table_target(pol, data->powernow_table, targfreq, relation, &newstate))
 		goto err_out;
+
+	down(&fidvid_sem);
+
+	for_each_cpu_mask(i, cpu_core_map[pol->cpu]) {
+		/* make sure the sibling is initialized */
+		if (!powernow_data[i]) {
+			ret = 0;
+			up(&fidvid_sem);
+			goto err_out;
+		}
+	}
 
 	powernow_k8_acpi_pst_values(data, newstate);
 
 	if (transition_frequency(data, newstate)) {
 		printk(KERN_ERR PFX "transition frequency failed\n");
 		ret = 1;
+		up(&fidvid_sem);
 		goto err_out;
 	}
+
+	/* Update all the fid/vids of our siblings */
+	for_each_cpu_mask(i, cpu_core_map[pol->cpu]) {
+		powernow_data[i]->currvid = data->currvid;
+		powernow_data[i]->currfid = data->currfid;
+	}
+	up(&fidvid_sem);
 
 	pol->cur = find_khz_freq_from_fid(data->currfid);
 	ret = 0;
···
 	 */
 
 	if ((num_online_cpus() != 1) || (num_possible_cpus() != 1)) {
-		printk(KERN_INFO PFX "MP systems not supported by PSB BIOS structure\n");
+		printk(KERN_ERR PFX "MP systems not supported by PSB BIOS structure\n");
 		kfree(data);
 		return -ENODEV;
 	}
···
 	schedule();
 
 	pol->governor = CPUFREQ_DEFAULT_GOVERNOR;
+	pol->cpus = cpu_core_map[pol->cpu];
 
 	/* Take a crude guess here.
 	 * That guess was in microseconds, so multiply with 1000 */
···
 		return 0;
 	}
 	preempt_disable();
 
 	if (query_current_values_with_pending_wait(data))
 		goto out;
···
 	cpufreq_unregister_driver(&cpufreq_amd64_driver);
 }
 
-MODULE_AUTHOR("Paul Devriendt <paul.devriendt@amd.com>");
+MODULE_AUTHOR("Paul Devriendt <paul.devriendt@amd.com> and Mark Langsdorf <mark.langsdorf@amd.com>");
 MODULE_DESCRIPTION("AMD Athlon 64 and Opteron processor frequency driver.");
 MODULE_LICENSE("GPL");
 
 late_initcall(powernowk8_init);
 module_exit(powernowk8_exit);
+15
arch/i386/kernel/cpu/cpufreq/powernow-k8.h
···
 static int core_frequency_transition(struct powernow_k8_data *data, u32 reqfid);
 
 static void powernow_k8_acpi_pst_values(struct powernow_k8_data *data, unsigned int index);
+
+#ifndef for_each_cpu_mask
+#define for_each_cpu_mask(i,mask) for (i = 0; i < 1; i++)
+#endif
+
+#ifdef CONFIG_SMP
+static inline void define_siblings(int cpu, cpumask_t cpu_sharedcore_mask[])
+{
+}
+#else
+static inline void define_siblings(int cpu, cpumask_t cpu_sharedcore_mask[])
+{
+	cpu_set(0, cpu_sharedcore_mask[0]);
+}
+#endif
+186
arch/i386/kernel/cpu/cpufreq/sc520_freq.c
··· (new file)
/*
 *	sc520_freq.c: cpufreq driver for the AMD Elan sc520
 *
 *	Copyright (C) 2005 Sean Young <sean@mess.org>
 *
 *	This program is free software; you can redistribute it and/or
 *	modify it under the terms of the GNU General Public License
 *	as published by the Free Software Foundation; either version
 *	2 of the License, or (at your option) any later version.
 *
 *	Based on elanfreq.c
 *
 *	2005-03-30: - initial revision
 */

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>

#include <linux/delay.h>
#include <linux/cpufreq.h>

#include <asm/msr.h>
#include <asm/timex.h>
#include <asm/io.h>

#define MMCR_BASE	0xfffef000	/* The default base address */
#define OFFS_CPUCTL	0x2		/* CPU Control Register */

static __u8 __iomem *cpuctl;

#define dprintk(msg...) cpufreq_debug_printk(CPUFREQ_DEBUG_DRIVER, "sc520_freq", msg)

static struct cpufreq_frequency_table sc520_freq_table[] = {
	{0x01,	100000},
	{0x02,	133000},
	{0,	CPUFREQ_TABLE_END},
};

static unsigned int sc520_freq_get_cpu_frequency(unsigned int cpu)
{
	u8 clockspeed_reg = *cpuctl;

	switch (clockspeed_reg & 0x03) {
	default:
		printk(KERN_ERR "sc520_freq: error: cpuctl register has unexpected value %02x\n", clockspeed_reg);
		/* fall through */
	case 0x01:
		return 100000;
	case 0x02:
		return 133000;
	}
}

static void sc520_freq_set_cpu_state(unsigned int state)
{
	struct cpufreq_freqs freqs;
	u8 clockspeed_reg;

	freqs.old = sc520_freq_get_cpu_frequency(0);
	freqs.new = sc520_freq_table[state].frequency;
	freqs.cpu = 0; /* AMD Elan is UP */

	cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE);

	dprintk("attempting to set frequency to %i kHz\n",
			sc520_freq_table[state].frequency);

	local_irq_disable();

	clockspeed_reg = *cpuctl & ~0x03;
	*cpuctl = clockspeed_reg | sc520_freq_table[state].index;

	local_irq_enable();

	cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE);
}

static int sc520_freq_verify(struct cpufreq_policy *policy)
{
	return cpufreq_frequency_table_verify(policy, &sc520_freq_table[0]);
}

static int sc520_freq_target(struct cpufreq_policy *policy,
			     unsigned int target_freq,
			     unsigned int relation)
{
	unsigned int newstate = 0;

	if (cpufreq_frequency_table_target(policy, sc520_freq_table, target_freq, relation, &newstate))
		return -EINVAL;

	sc520_freq_set_cpu_state(newstate);

	return 0;
}

/*
 *	Module init and exit code
 */

static int sc520_freq_cpu_init(struct cpufreq_policy *policy)
{
	struct cpuinfo_x86 *c = cpu_data;
	int result;

	/* capability check */
	if (c->x86_vendor != X86_VENDOR_AMD ||
	    c->x86 != 4 || c->x86_model != 9)
		return -ENODEV;

	/* cpuinfo and default policy values */
	policy->governor = CPUFREQ_DEFAULT_GOVERNOR;
	policy->cpuinfo.transition_latency = 1000000; /* 1ms */
	policy->cur = sc520_freq_get_cpu_frequency(0);

	result = cpufreq_frequency_table_cpuinfo(policy, sc520_freq_table);
	if (result)
		return result;

	cpufreq_frequency_table_get_attr(sc520_freq_table, policy->cpu);

	return 0;
}

static int sc520_freq_cpu_exit(struct cpufreq_policy *policy)
{
	cpufreq_frequency_table_put_attr(policy->cpu);
	return 0;
}

static struct freq_attr *sc520_freq_attr[] = {
	&cpufreq_freq_attr_scaling_available_freqs,
	NULL,
};

static struct cpufreq_driver sc520_freq_driver = {
	.get	= sc520_freq_get_cpu_frequency,
	.verify	= sc520_freq_verify,
	.target	= sc520_freq_target,
	.init	= sc520_freq_cpu_init,
	.exit	= sc520_freq_cpu_exit,
	.name	= "sc520_freq",
	.owner	= THIS_MODULE,
	.attr	= sc520_freq_attr,
};

static int __init sc520_freq_init(void)
{
	struct cpuinfo_x86 *c = cpu_data;

	/* Test if we have the right hardware */
	if (c->x86_vendor != X86_VENDOR_AMD ||
	    c->x86 != 4 || c->x86_model != 9) {
		dprintk("no Elan SC520 processor found!\n");
		return -ENODEV;
	}
	cpuctl = ioremap((unsigned long)(MMCR_BASE + OFFS_CPUCTL), 1);
	if (!cpuctl) {
		printk(KERN_ERR "sc520_freq: error: failed to remap memory\n");
		return -ENOMEM;
	}

	return cpufreq_register_driver(&sc520_freq_driver);
}

static void __exit sc520_freq_exit(void)
{
	cpufreq_unregister_driver(&sc520_freq_driver);
	iounmap(cpuctl);
}

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Sean Young <sean@mess.org>");
MODULE_DESCRIPTION("cpufreq driver for AMD's Elan sc520 CPU");

module_init(sc520_freq_init);
module_exit(sc520_freq_exit);
+6
arch/i386/kernel/cpu/cpufreq/speedstep-centrino.c
···
 	CPU_DOTHAN_A1,
 	CPU_DOTHAN_A2,
 	CPU_DOTHAN_B0,
+	CPU_MP4HT_D0,
+	CPU_MP4HT_E0,
 };
 
 static const struct cpu_id cpu_ids[] = {
···
 	[CPU_DOTHAN_A1]	= { 6, 13, 1 },
 	[CPU_DOTHAN_A2]	= { 6, 13, 2 },
 	[CPU_DOTHAN_B0]	= { 6, 13, 6 },
+	[CPU_MP4HT_D0]	= {15,  3, 4 },
+	[CPU_MP4HT_E0]	= {15,  4, 1 },
 };
 #define N_IDS	(sizeof(cpu_ids)/sizeof(cpu_ids[0]))
···
 	{ &cpu_ids[CPU_DOTHAN_A1], NULL, 0, NULL },
 	{ &cpu_ids[CPU_DOTHAN_A2], NULL, 0, NULL },
 	{ &cpu_ids[CPU_DOTHAN_B0], NULL, 0, NULL },
+	{ &cpu_ids[CPU_MP4HT_D0], NULL, 0, NULL },
+	{ &cpu_ids[CPU_MP4HT_E0], NULL, 0, NULL },
 
 	{ NULL, }
 };
+3 -3
arch/i386/kernel/cpu/cpufreq/speedstep-lib.c
··· 336 if (!prev_speed) 337 return -EIO; 338 339 - dprintk("previous seped is %u\n", prev_speed); 340 341 local_irq_save(flags); 342 ··· 348 goto out; 349 } 350 351 - dprintk("low seped is %u\n", *low_speed); 352 353 /* switch to high state */ 354 set_state(SPEEDSTEP_HIGH); ··· 358 goto out; 359 } 360 361 - dprintk("high seped is %u\n", *high_speed); 362 363 if (*low_speed == *high_speed) { 364 ret = -ENODEV;
··· 336 if (!prev_speed) 337 return -EIO; 338 339 + dprintk("previous speed is %u\n", prev_speed); 340 341 local_irq_save(flags); 342 ··· 348 goto out; 349 } 350 351 + dprintk("low speed is %u\n", *low_speed); 352 353 /* switch to high state */ 354 set_state(SPEEDSTEP_HIGH); ··· 358 goto out; 359 } 360 361 + dprintk("high speed is %u\n", *high_speed); 362 363 if (*low_speed == *high_speed) { 364 ret = -ENODEV;
+3
arch/i386/kernel/cpu/cpufreq/speedstep-smi.c
··· 357 case SPEEDSTEP_PROCESSOR_PIII_C: 358 case SPEEDSTEP_PROCESSOR_PIII_C_EARLY: 359 break; 360 default: 361 speedstep_processor = 0; 362 }
··· 357 case SPEEDSTEP_PROCESSOR_PIII_C: 358 case SPEEDSTEP_PROCESSOR_PIII_C_EARLY: 359 break; 360 + case SPEEDSTEP_PROCESSOR_P4M: 361 + printk(KERN_INFO "speedstep-smi: you're trying to use this cpufreq driver on a Pentium 4-based CPU. Most likely it will not work.\n"); 362 + break; 363 default: 364 speedstep_processor = 0; 365 }
+1 -1
arch/i386/kernel/cpu/intel_cacheinfo.c
··· 118 }; 119 120 #define MAX_CACHE_LEAVES 4 121 - static unsigned short __devinitdata num_cache_leaves; 122 123 static int __devinit cpuid4_cache_lookup(int index, struct _cpuid4_info *this_leaf) 124 {
··· 118 }; 119 120 #define MAX_CACHE_LEAVES 4 121 + static unsigned short num_cache_leaves; 122 123 static int __devinit cpuid4_cache_lookup(int index, struct _cpuid4_info *this_leaf) 124 {
+2
arch/i386/kernel/setup.c
··· 1502 if (efi_enabled) 1503 efi_map_memmap(); 1504 1505 /* 1506 * Parse the ACPI tables for possible boot-time SMP configuration. 1507 */ 1508 acpi_boot_table_init(); 1509 acpi_boot_init(); 1510 1511 #ifdef CONFIG_X86_LOCAL_APIC 1512 if (smp_found_config)
··· 1502 if (efi_enabled) 1503 efi_map_memmap(); 1504 1505 + #ifdef CONFIG_ACPI_BOOT 1506 /* 1507 * Parse the ACPI tables for possible boot-time SMP configuration. 1508 */ 1509 acpi_boot_table_init(); 1510 acpi_boot_init(); 1511 + #endif 1512 1513 #ifdef CONFIG_X86_LOCAL_APIC 1514 if (smp_found_config)
+3 -1
arch/i386/kernel/smpboot.c
··· 1074 cpu_set(cpu, cpu_sibling_map[cpu]); 1075 } 1076 1077 - if (siblings != smp_num_siblings) 1078 printk(KERN_WARNING "WARNING: %d siblings found for CPU%d, should be %d\n", siblings, cpu, smp_num_siblings); 1079 1080 if (c->x86_num_cores > 1) { 1081 for (i = 0; i < NR_CPUS; i++) {
··· 1074 cpu_set(cpu, cpu_sibling_map[cpu]); 1075 } 1076 1077 + if (siblings != smp_num_siblings) { 1078 printk(KERN_WARNING "WARNING: %d siblings found for CPU%d, should be %d\n", siblings, cpu, smp_num_siblings); 1079 + smp_num_siblings = siblings; 1080 + } 1081 1082 if (c->x86_num_cores > 1) { 1083 for (i = 0; i < NR_CPUS; i++) {
+4 -2
arch/i386/kernel/timers/common.c
··· 6 #include <linux/timex.h> 7 #include <linux/errno.h> 8 #include <linux/jiffies.h> 9 10 #include <asm/io.h> 11 #include <asm/timer.h> ··· 25 26 #define CALIBRATE_TIME (5 * 1000020/HZ) 27 28 - unsigned long __init calibrate_tsc(void) 29 { 30 mach_prepare_counter(); 31 ··· 140 #endif 141 142 /* calculate cpu_khz */ 143 - void __init init_cpu_khz(void) 144 { 145 if (cpu_has_tsc) { 146 unsigned long tsc_quotient = calibrate_tsc(); ··· 159 } 160 } 161 }
··· 6 #include <linux/timex.h> 7 #include <linux/errno.h> 8 #include <linux/jiffies.h> 9 + #include <linux/module.h> 10 11 #include <asm/io.h> 12 #include <asm/timer.h> ··· 24 25 #define CALIBRATE_TIME (5 * 1000020/HZ) 26 27 + unsigned long calibrate_tsc(void) 28 { 29 mach_prepare_counter(); 30 ··· 139 #endif 140 141 /* calculate cpu_khz */ 142 + void init_cpu_khz(void) 143 { 144 if (cpu_has_tsc) { 145 unsigned long tsc_quotient = calibrate_tsc(); ··· 158 } 159 } 160 } 161 +
+20
arch/i386/kernel/timers/timer_tsc.c
··· 320 static inline void cpufreq_delayed_get(void) { return; } 321 #endif 322 323 static void mark_offset_tsc(void) 324 { 325 unsigned long lost,delay;
··· 320 static inline void cpufreq_delayed_get(void) { return; } 321 #endif 322 323 + int recalibrate_cpu_khz(void) 324 + { 325 + #ifndef CONFIG_SMP 326 + unsigned long cpu_khz_old = cpu_khz; 327 + 328 + if (cpu_has_tsc) { 329 + init_cpu_khz(); 330 + cpu_data[0].loops_per_jiffy = 331 + cpufreq_scale(cpu_data[0].loops_per_jiffy, 332 + cpu_khz_old, 333 + cpu_khz); 334 + return 0; 335 + } else 336 + return -ENODEV; 337 + #else 338 + return -ENODEV; 339 + #endif 340 + } 341 + EXPORT_SYMBOL(recalibrate_cpu_khz); 342 + 343 static void mark_offset_tsc(void) 344 { 345 unsigned long lost,delay;
+1 -1
arch/ia64/ia32/sys_ia32.c
··· 2427 { 2428 struct epoll_event *events64 = NULL; 2429 mm_segment_t old_fs = get_fs(); 2430 - int error, numevents, size; 2431 int evt_idx; 2432 int do_free_pages = 0; 2433
··· 2427 { 2428 struct epoll_event *events64 = NULL; 2429 mm_segment_t old_fs = get_fs(); 2430 + int numevents, size; 2431 int evt_idx; 2432 int do_free_pages = 0; 2433
+2 -2
arch/ia64/kernel/entry.S
··· 1182 ;; 1183 (pNonSys) mov out2=0 // out2==0 => not a syscall 1184 .fframe 16 1185 - .spillpsp ar.unat, 16 // (note that offset is relative to psp+0x10!) 1186 st8 [sp]=r9,-16 // allocate space for ar.unat and save it 1187 st8 [out1]=loc1,-8 // save ar.pfs, out1=&sigscratch 1188 .body ··· 1208 adds out2=8,sp // out2=&sigscratch->ar_pfs 1209 ;; 1210 .fframe 16 1211 - .spillpsp ar.unat, 16 // (note that offset is relative to psp+0x10!) 1212 st8 [sp]=r9,-16 // allocate space for ar.unat and save it 1213 st8 [out2]=loc1,-8 // save ar.pfs, out2=&sigscratch 1214 .body
··· 1182 ;; 1183 (pNonSys) mov out2=0 // out2==0 => not a syscall 1184 .fframe 16 1185 + .spillsp ar.unat, 16 1186 st8 [sp]=r9,-16 // allocate space for ar.unat and save it 1187 st8 [out1]=loc1,-8 // save ar.pfs, out1=&sigscratch 1188 .body ··· 1208 adds out2=8,sp // out2=&sigscratch->ar_pfs 1209 ;; 1210 .fframe 16 1211 + .spillsp ar.unat, 16 1212 st8 [sp]=r9,-16 // allocate space for ar.unat and save it 1213 st8 [out2]=loc1,-8 // save ar.pfs, out2=&sigscratch 1214 .body
+4 -4
arch/ia64/kernel/mca.c
··· 1103 return IRQ_HANDLED; 1104 } 1105 1106 - #endif /* CONFIG_ACPI */ 1107 - 1108 /* 1109 * ia64_mca_cpe_poll 1110 * ··· 1119 /* Trigger a CPE interrupt cascade */ 1120 platform_send_ipi(first_cpu(cpu_online_map), IA64_CPEP_VECTOR, IA64_IPI_DM_INT, 0); 1121 } 1122 1123 /* 1124 * C portion of the OS INIT handler ··· 1390 register_percpu_irq(IA64_MCA_WAKEUP_VECTOR, &mca_wkup_irqaction); 1391 1392 #ifdef CONFIG_ACPI 1393 - /* Setup the CPEI/P vector and handler */ 1394 - cpe_vector = acpi_request_vector(ACPI_INTERRUPT_CPEI); 1395 register_percpu_irq(IA64_CPEP_VECTOR, &mca_cpep_irqaction); 1396 #endif 1397 ··· 1435 1436 #ifdef CONFIG_ACPI 1437 /* Setup the CPEI/P vector and handler */ 1438 init_timer(&cpe_poll_timer); 1439 cpe_poll_timer.function = ia64_mca_cpe_poll; 1440
··· 1103 return IRQ_HANDLED; 1104 } 1105 1106 /* 1107 * ia64_mca_cpe_poll 1108 * ··· 1121 /* Trigger a CPE interrupt cascade */ 1122 platform_send_ipi(first_cpu(cpu_online_map), IA64_CPEP_VECTOR, IA64_IPI_DM_INT, 0); 1123 } 1124 + 1125 + #endif /* CONFIG_ACPI */ 1126 1127 /* 1128 * C portion of the OS INIT handler ··· 1390 register_percpu_irq(IA64_MCA_WAKEUP_VECTOR, &mca_wkup_irqaction); 1391 1392 #ifdef CONFIG_ACPI 1393 + /* Setup the CPEI/P handler */ 1394 register_percpu_irq(IA64_CPEP_VECTOR, &mca_cpep_irqaction); 1395 #endif 1396 ··· 1436 1437 #ifdef CONFIG_ACPI 1438 /* Setup the CPEI/P vector and handler */ 1439 + cpe_vector = acpi_request_vector(ACPI_INTERRUPT_CPEI); 1440 init_timer(&cpe_poll_timer); 1441 cpe_poll_timer.function = ia64_mca_cpe_poll; 1442
+1 -2
arch/ia64/kernel/minstate.h
··· 41 (pKStk) addl r3=THIS_CPU(ia64_mca_data),r3;; \ 42 (pKStk) ld8 r3 = [r3];; \ 43 (pKStk) addl r3=IA64_MCA_CPU_INIT_STACK_OFFSET,r3;; \ 44 - (pKStk) addl sp=IA64_STK_OFFSET-IA64_PT_REGS_SIZE,r3; \ 45 (pUStk) mov ar.rsc=0; /* set enforced lazy mode, pl 0, little-endian, loadrs=0 */ \ 46 (pUStk) addl r22=IA64_RBS_OFFSET,r1; /* compute base of register backing store */ \ 47 ;; \ ··· 50 (pUStk) mov r23=ar.bspstore; /* save ar.bspstore */ \ 51 (pUStk) dep r22=-1,r22,61,3; /* compute kernel virtual addr of RBS */ \ 52 ;; \ 53 - (pKStk) addl r1=-IA64_PT_REGS_SIZE,r1; /* if in kernel mode, use sp (r12) */ \ 54 (pUStk) mov ar.bspstore=r22; /* switch to kernel RBS */ \ 55 ;; \ 56 (pUStk) mov r18=ar.bsp; \
··· 41 (pKStk) addl r3=THIS_CPU(ia64_mca_data),r3;; \ 42 (pKStk) ld8 r3 = [r3];; \ 43 (pKStk) addl r3=IA64_MCA_CPU_INIT_STACK_OFFSET,r3;; \ 44 + (pKStk) addl r1=IA64_STK_OFFSET-IA64_PT_REGS_SIZE,r3; \ 45 (pUStk) mov ar.rsc=0; /* set enforced lazy mode, pl 0, little-endian, loadrs=0 */ \ 46 (pUStk) addl r22=IA64_RBS_OFFSET,r1; /* compute base of register backing store */ \ 47 ;; \ ··· 50 (pUStk) mov r23=ar.bspstore; /* save ar.bspstore */ \ 51 (pUStk) dep r22=-1,r22,61,3; /* compute kernel virtual addr of RBS */ \ 52 ;; \ 53 (pUStk) mov ar.bspstore=r22; /* switch to kernel RBS */ \ 54 ;; \ 55 (pUStk) mov r18=ar.bsp; \
+160 -15
arch/ia64/kernel/perfmon.c
··· 11 * Version Perfmon-2.x is a rewrite of perfmon-1.x 12 * by Stephane Eranian, Hewlett Packard Co. 13 * 14 - * Copyright (C) 1999-2003, 2005 Hewlett Packard Co 15 * Stephane Eranian <eranian@hpl.hp.com> 16 * David Mosberger-Tang <davidm@hpl.hp.com> 17 * ··· 497 static pfm_stats_t pfm_stats[NR_CPUS]; 498 static pfm_session_t pfm_sessions; /* global sessions information */ 499 500 static struct proc_dir_entry *perfmon_dir; 501 static pfm_uuid_t pfm_null_uuid = {0,}; 502 ··· 609 DEFINE_PER_CPU(struct task_struct *, pmu_owner); 610 DEFINE_PER_CPU(pfm_context_t *, pmu_ctx); 611 DEFINE_PER_CPU(unsigned long, pmu_activation_number); 612 613 614 /* forward declaration */ ··· 1329 error_conflict: 1330 DPRINT(("system wide not possible, conflicting session [%d] on CPU%d\n", 1331 pfm_sessions.pfs_sys_session[cpu]->pid, 1332 - smp_processor_id())); 1333 abort: 1334 UNLOCK_PFS(flags); 1335 ··· 5559 int ret; 5560 5561 this_cpu = get_cpu(); 5562 - min = pfm_stats[this_cpu].pfm_ovfl_intr_cycles_min; 5563 - max = pfm_stats[this_cpu].pfm_ovfl_intr_cycles_max; 5564 5565 - start_cycles = ia64_get_itc(); 5566 5567 - ret = pfm_do_interrupt_handler(irq, arg, regs); 5568 5569 - total_cycles = ia64_get_itc(); 5570 5571 - /* 5572 - * don't measure spurious interrupts 5573 - */ 5574 - if (likely(ret == 0)) { 5575 - total_cycles -= start_cycles; 5576 5577 - if (total_cycles < min) pfm_stats[this_cpu].pfm_ovfl_intr_cycles_min = total_cycles; 5578 - if (total_cycles > max) pfm_stats[this_cpu].pfm_ovfl_intr_cycles_max = total_cycles; 5579 5580 - pfm_stats[this_cpu].pfm_ovfl_intr_cycles += total_cycles; 5581 } 5582 put_cpu_no_resched(); 5583 return IRQ_HANDLED; 5584 } ··· 6434 .flags = SA_INTERRUPT, 6435 .name = "perfmon" 6436 }; 6437 6438 /* 6439 * perfmon initialization routine, called from the initcall() table
··· 11 * Version Perfmon-2.x is a rewrite of perfmon-1.x 12 * by Stephane Eranian, Hewlett Packard Co. 13 * 14 + * Copyright (C) 1999-2005 Hewlett Packard Co 15 * Stephane Eranian <eranian@hpl.hp.com> 16 * David Mosberger-Tang <davidm@hpl.hp.com> 17 * ··· 497 static pfm_stats_t pfm_stats[NR_CPUS]; 498 static pfm_session_t pfm_sessions; /* global sessions information */ 499 500 + static spinlock_t pfm_alt_install_check = SPIN_LOCK_UNLOCKED; 501 + static pfm_intr_handler_desc_t *pfm_alt_intr_handler; 502 + 503 static struct proc_dir_entry *perfmon_dir; 504 static pfm_uuid_t pfm_null_uuid = {0,}; 505 ··· 606 DEFINE_PER_CPU(struct task_struct *, pmu_owner); 607 DEFINE_PER_CPU(pfm_context_t *, pmu_ctx); 608 DEFINE_PER_CPU(unsigned long, pmu_activation_number); 609 + EXPORT_PER_CPU_SYMBOL_GPL(pfm_syst_info); 610 611 612 /* forward declaration */ ··· 1325 error_conflict: 1326 DPRINT(("system wide not possible, conflicting session [%d] on CPU%d\n", 1327 pfm_sessions.pfs_sys_session[cpu]->pid, 1328 + cpu)); 1329 abort: 1330 UNLOCK_PFS(flags); 1331 ··· 5555 int ret; 5556 5557 this_cpu = get_cpu(); 5558 + if (likely(!pfm_alt_intr_handler)) { 5559 + min = pfm_stats[this_cpu].pfm_ovfl_intr_cycles_min; 5560 + max = pfm_stats[this_cpu].pfm_ovfl_intr_cycles_max; 5561 5562 + start_cycles = ia64_get_itc(); 5563 5564 + ret = pfm_do_interrupt_handler(irq, arg, regs); 5565 5566 + total_cycles = ia64_get_itc(); 5567 5568 + /* 5569 + * don't measure spurious interrupts 5570 + */ 5571 + if (likely(ret == 0)) { 5572 + total_cycles -= start_cycles; 5573 5574 + if (total_cycles < min) pfm_stats[this_cpu].pfm_ovfl_intr_cycles_min = total_cycles; 5575 + if (total_cycles > max) pfm_stats[this_cpu].pfm_ovfl_intr_cycles_max = total_cycles; 5576 5577 + pfm_stats[this_cpu].pfm_ovfl_intr_cycles += total_cycles; 5578 + } 5579 } 5580 + else { 5581 + (*pfm_alt_intr_handler->handler)(irq, arg, regs); 5582 + } 5583 + 5584 put_cpu_no_resched(); 5585 return IRQ_HANDLED; 5586 } ··· 6424
.flags = SA_INTERRUPT, 6425 .name = "perfmon" 6426 }; 6427 + 6428 + static void 6429 + pfm_alt_save_pmu_state(void *data) 6430 + { 6431 + struct pt_regs *regs; 6432 + 6433 + regs = ia64_task_regs(current); 6434 + 6435 + DPRINT(("called\n")); 6436 + 6437 + /* 6438 + * should not be necessary but 6439 + * let's take not risk 6440 + */ 6441 + pfm_clear_psr_up(); 6442 + pfm_clear_psr_pp(); 6443 + ia64_psr(regs)->pp = 0; 6444 + 6445 + /* 6446 + * This call is required 6447 + * May cause a spurious interrupt on some processors 6448 + */ 6449 + pfm_freeze_pmu(); 6450 + 6451 + ia64_srlz_d(); 6452 + } 6453 + 6454 + void 6455 + pfm_alt_restore_pmu_state(void *data) 6456 + { 6457 + struct pt_regs *regs; 6458 + 6459 + regs = ia64_task_regs(current); 6460 + 6461 + DPRINT(("called\n")); 6462 + 6463 + /* 6464 + * put PMU back in state expected 6465 + * by perfmon 6466 + */ 6467 + pfm_clear_psr_up(); 6468 + pfm_clear_psr_pp(); 6469 + ia64_psr(regs)->pp = 0; 6470 + 6471 + /* 6472 + * perfmon runs with PMU unfrozen at all times 6473 + */ 6474 + pfm_unfreeze_pmu(); 6475 + 6476 + ia64_srlz_d(); 6477 + } 6478 + 6479 + int 6480 + pfm_install_alt_pmu_interrupt(pfm_intr_handler_desc_t *hdl) 6481 + { 6482 + int ret, i; 6483 + int reserve_cpu; 6484 + 6485 + /* some sanity checks */ 6486 + if (hdl == NULL || hdl->handler == NULL) return -EINVAL; 6487 + 6488 + /* do the easy test first */ 6489 + if (pfm_alt_intr_handler) return -EBUSY; 6490 + 6491 + /* one at a time in the install or remove, just fail the others */ 6492 + if (!spin_trylock(&pfm_alt_install_check)) { 6493 + return -EBUSY; 6494 + } 6495 + 6496 + /* reserve our session */ 6497 + for_each_online_cpu(reserve_cpu) { 6498 + ret = pfm_reserve_session(NULL, 1, reserve_cpu); 6499 + if (ret) goto cleanup_reserve; 6500 + } 6501 + 6502 + /* save the current system wide pmu states */ 6503 + ret = on_each_cpu(pfm_alt_save_pmu_state, NULL, 0, 1); 6504 + if (ret) { 6505 + DPRINT(("on_each_cpu() failed: %d\n", ret)); 6506 + goto cleanup_reserve; 6507 +
} 6508 + 6509 + /* officially change to the alternate interrupt handler */ 6510 + pfm_alt_intr_handler = hdl; 6511 + 6512 + spin_unlock(&pfm_alt_install_check); 6513 + 6514 + return 0; 6515 + 6516 + cleanup_reserve: 6517 + for_each_online_cpu(i) { 6518 + /* don't unreserve more than we reserved */ 6519 + if (i >= reserve_cpu) break; 6520 + 6521 + pfm_unreserve_session(NULL, 1, i); 6522 + } 6523 + 6524 + spin_unlock(&pfm_alt_install_check); 6525 + 6526 + return ret; 6527 + } 6528 + EXPORT_SYMBOL_GPL(pfm_install_alt_pmu_interrupt); 6529 + 6530 + int 6531 + pfm_remove_alt_pmu_interrupt(pfm_intr_handler_desc_t *hdl) 6532 + { 6533 + int i; 6534 + int ret; 6535 + 6536 + if (hdl == NULL) return -EINVAL; 6537 + 6538 + /* cannot remove someone else's handler! */ 6539 + if (pfm_alt_intr_handler != hdl) return -EINVAL; 6540 + 6541 + /* one at a time in the install or remove, just fail the others */ 6542 + if (!spin_trylock(&pfm_alt_install_check)) { 6543 + return -EBUSY; 6544 + } 6545 + 6546 + pfm_alt_intr_handler = NULL; 6547 + 6548 + ret = on_each_cpu(pfm_alt_restore_pmu_state, NULL, 0, 1); 6549 + if (ret) { 6550 + DPRINT(("on_each_cpu() failed: %d\n", ret)); 6551 + } 6552 + 6553 + for_each_online_cpu(i) { 6554 + pfm_unreserve_session(NULL, 1, i); 6555 + } 6556 + 6557 + spin_unlock(&pfm_alt_install_check); 6558 + 6559 + return 0; 6560 + } 6561 + EXPORT_SYMBOL_GPL(pfm_remove_alt_pmu_interrupt); 6562 6563 /* 6564 * perfmon initialization routine, called from the initcall() table
+17 -3
arch/ia64/kernel/ptrace.c
··· 692 unsigned long cfm) 693 { 694 struct unw_frame_info info, prev_info; 695 - unsigned long ip, pr; 696 697 unw_init_from_blocked_task(&info, child); 698 while (1) { 699 prev_info = info; 700 if (unw_unwind(&info) < 0) 701 return; 702 - if (unw_get_rp(&info, &ip) < 0) 703 return; 704 - if (ip < FIXADDR_USER_END) 705 break; 706 } 707
··· 692 unsigned long cfm) 693 { 694 struct unw_frame_info info, prev_info; 695 + unsigned long ip, sp, pr; 696 697 unw_init_from_blocked_task(&info, child); 698 while (1) { 699 prev_info = info; 700 if (unw_unwind(&info) < 0) 701 return; 702 + 703 + unw_get_sp(&info, &sp); 704 + if ((long)((unsigned long)child + IA64_STK_OFFSET - sp) 705 + < IA64_PT_REGS_SIZE) { 706 + dprintk("ptrace.%s: ran off the top of the kernel " 707 + "stack\n", __FUNCTION__); 708 return; 709 + } 710 + if (unw_get_pr (&prev_info, &pr) < 0) { 711 + unw_get_rp(&prev_info, &ip); 712 + dprintk("ptrace.%s: failed to read " 713 + "predicate register (ip=0x%lx)\n", 714 + __FUNCTION__, ip); 715 + return; 716 + } 717 + if (unw_is_intr_frame(&info) 718 + && (pr & (1UL << PRED_USER_STACK))) 719 break; 720 } 721
+1 -1
arch/ia64/kernel/smpboot.c
··· 624 __u16 thread_id; 625 __u16 proc_fixed_addr; 626 __u8 valid; 627 - }mt_info[NR_CPUS] __devinit; 628 629 #ifdef CONFIG_HOTPLUG_CPU 630 static inline void
··· 624 __u16 thread_id; 625 __u16 proc_fixed_addr; 626 __u8 valid; 627 + } mt_info[NR_CPUS] __devinitdata; 628 629 #ifdef CONFIG_HOTPLUG_CPU 630 static inline void
-7
arch/ia64/kernel/sys_ia64.c
··· 182 } 183 } 184 185 - /* 186 - * A zero mmap always succeeds in Linux, independent of whether or not the 187 - * remaining arguments are valid. 188 - */ 189 - if (len == 0) 190 - goto out; 191 - 192 /* Careful about overflows.. */ 193 len = PAGE_ALIGN(len); 194 if (!len || len > TASK_SIZE) {
··· 182 } 183 } 184 185 /* Careful about overflows.. */ 186 len = PAGE_ALIGN(len); 187 if (!len || len > TASK_SIZE) {
+2
arch/ia64/sn/kernel/setup.c
··· 271 int major = sn_sal_rev_major(), minor = sn_sal_rev_minor(); 272 extern void sn_cpu_init(void); 273 274 /* 275 * If the generic code has enabled vga console support - lets 276 * get rid of it again. This is a kludge for the fact that ACPI
··· 271 int major = sn_sal_rev_major(), minor = sn_sal_rev_minor(); 272 extern void sn_cpu_init(void); 273 274 + ia64_sn_plat_set_error_handling_features(); 275 + 276 /* 277 * If the generic code has enabled vga console support - lets 278 * get rid of it again. This is a kludge for the fact that ACPI
+2 -2
arch/ppc/Kconfig
··· 1143 1144 config PCI_8260 1145 bool 1146 - depends on PCI && 8260 && !8272 1147 default y 1148 1149 config 8260_PCI9 1150 bool " Enable workaround for MPC826x erratum PCI 9" 1151 - depends on PCI_8260 1152 default y 1153 1154 choice
··· 1143 1144 config PCI_8260 1145 bool 1146 + depends on PCI && 8260 1147 default y 1148 1149 config 8260_PCI9 1150 bool " Enable workaround for MPC826x erratum PCI 9" 1151 + depends on PCI_8260 && !ADS8272 1152 default y 1153 1154 choice
+2 -1
arch/ppc/boot/images/Makefile
··· 22 $(obj)/uImage: $(obj)/vmlinux.gz 23 $(Q)rm -f $@ 24 $(call if_changed,uimage) 25 - @echo ' Image: $@' $(if $(wildcard $@),'is ready','not made') 26 27 # Files generated that shall be removed upon make clean 28 clean-files := sImage vmapus vmlinux* miboot* zImage* uImage
··· 22 $(obj)/uImage: $(obj)/vmlinux.gz 23 $(Q)rm -f $@ 24 $(call if_changed,uimage) 25 + @echo -n ' Image: $@ ' 26 + @if [ -f $@ ]; then echo 'is ready' ; else echo 'not made'; fi 27 28 # Files generated that shall be removed upon make clean 29 clean-files := sImage vmapus vmlinux* miboot* zImage* uImage
+92 -25
arch/ppc/configs/mpc8555_cds_defconfig
··· 1 # 2 # Automatically generated make config: don't edit 3 - # Linux kernel version: 2.6.11-rc1 4 - # Thu Jan 20 01:25:35 2005 5 # 6 CONFIG_MMU=y 7 CONFIG_GENERIC_HARDIRQS=y ··· 11 CONFIG_PPC=y 12 CONFIG_PPC32=y 13 CONFIG_GENERIC_NVRAM=y 14 15 # 16 # Code maturity level options ··· 19 CONFIG_EXPERIMENTAL=y 20 CONFIG_CLEAN_COMPILE=y 21 CONFIG_BROKEN_ON_SMP=y 22 23 # 24 # General setup ··· 31 # CONFIG_BSD_PROCESS_ACCT is not set 32 CONFIG_SYSCTL=y 33 # CONFIG_AUDIT is not set 34 - CONFIG_LOG_BUF_SHIFT=14 35 # CONFIG_HOTPLUG is not set 36 CONFIG_KOBJECT_UEVENT=y 37 # CONFIG_IKCONFIG is not set 38 CONFIG_EMBEDDED=y 39 # CONFIG_KALLSYMS is not set 40 CONFIG_FUTEX=y 41 # CONFIG_EPOLL is not set 42 # CONFIG_CC_OPTIMIZE_FOR_SIZE is not set ··· 48 CONFIG_CC_ALIGN_LOOPS=0 49 CONFIG_CC_ALIGN_JUMPS=0 50 # CONFIG_TINY_SHMEM is not set 51 52 # 53 # Loadable module support ··· 67 CONFIG_E500=y 68 CONFIG_BOOKE=y 69 CONFIG_FSL_BOOKE=y 70 CONFIG_SPE=y 71 CONFIG_MATH_EMULATION=y 72 # CONFIG_CPU_FREQ is not set 73 CONFIG_PPC_GEN550=y 74 CONFIG_85xx=y 75 CONFIG_PPC_INDIRECT_PCI_BE=y 76 ··· 83 CONFIG_MPC8555_CDS=y 84 # CONFIG_MPC8560_ADS is not set 85 # CONFIG_SBC8560 is not set 86 CONFIG_MPC8555=y 87 CONFIG_85xx_PCI2=y 88 ··· 98 CONFIG_BINFMT_ELF=y 99 # CONFIG_BINFMT_MISC is not set 100 # CONFIG_CMDLINE_BOOL is not set 101 102 # 103 # Bus options ··· 112 # PCCARD (PCMCIA/CardBus) support 113 # 114 # CONFIG_PCCARD is not set 115 - 116 - # 117 - # PC-card bridges 118 - # 119 120 # 121 # Advanced setup ··· 185 # 186 # ATA/ATAPI/MFM/RLL support 187 # 188 - # CONFIG_IDE is not set 189 190 # 191 # SCSI device support ··· 277 # 278 CONFIG_PACKET=y 279 # CONFIG_PACKET_MMAP is not set 280 - # CONFIG_NETLINK_DEV is not set 281 CONFIG_UNIX=y 282 # CONFIG_NET_KEY is not set 283 CONFIG_INET=y ··· 426 # CONFIG_INPUT_EVBUG is not set 427 428 # 429 - # Input I/O drivers 430 - # 431 - # CONFIG_GAMEPORT is not set 432 - CONFIG_SOUND_GAMEPORT=y 433 - # CONFIG_SERIO is not set
434 - # CONFIG_SERIO_I8042 is not set 435 - 436 - # 437 # Input Device Drivers 438 # 439 # CONFIG_INPUT_KEYBOARD is not set ··· 433 # CONFIG_INPUT_JOYSTICK is not set 434 # CONFIG_INPUT_TOUCHSCREEN is not set 435 # CONFIG_INPUT_MISC is not set 436 437 # 438 # Character devices ··· 461 CONFIG_SERIAL_CORE=y 462 CONFIG_SERIAL_CORE_CONSOLE=y 463 # CONFIG_SERIAL_CPM is not set 464 CONFIG_UNIX98_PTYS=y 465 CONFIG_LEGACY_PTYS=y 466 CONFIG_LEGACY_PTY_COUNT=256 ··· 490 # CONFIG_RAW_DRIVER is not set 491 492 # 493 # I2C support 494 # 495 CONFIG_I2C=y ··· 517 # CONFIG_I2C_AMD8111 is not set 518 # CONFIG_I2C_I801 is not set 519 # CONFIG_I2C_I810 is not set 520 # CONFIG_I2C_ISA is not set 521 CONFIG_I2C_MPC=y 522 # CONFIG_I2C_NFORCE2 is not set 523 # CONFIG_I2C_PARPORT_LIGHT is not set 524 - # CONFIG_I2C_PIIX4 is not set 525 # CONFIG_I2C_PROSAVAGE is not set 526 # CONFIG_I2C_SAVAGE4 is not set 527 # CONFIG_SCx200_ACB is not set ··· 544 # CONFIG_SENSORS_ASB100 is not set 545 # CONFIG_SENSORS_DS1621 is not set 546 # CONFIG_SENSORS_FSCHER is not set 547 # CONFIG_SENSORS_GL518SM is not set 548 # CONFIG_SENSORS_IT87 is not set 549 # CONFIG_SENSORS_LM63 is not set 550 # CONFIG_SENSORS_LM75 is not set ··· 557 # CONFIG_SENSORS_LM85 is not set 558 # CONFIG_SENSORS_LM87 is not set 559 # CONFIG_SENSORS_LM90 is not set 560 # CONFIG_SENSORS_MAX1619 is not set 561 # CONFIG_SENSORS_PC87360 is not set 562 # CONFIG_SENSORS_SMSC47B397 is not set 563 # CONFIG_SENSORS_SMSC47M1 is not set 564 # CONFIG_SENSORS_VIA686A is not set 565 # CONFIG_SENSORS_W83781D is not set ··· 571 # 572 # Other I2C Chip support 573 # 574 # CONFIG_SENSORS_EEPROM is not set 575 # CONFIG_SENSORS_PCF8574 is not set 576 # CONFIG_SENSORS_PCF8591 is not set 577 # CONFIG_SENSORS_RTC8564 is not set 578 # CONFIG_I2C_DEBUG_CORE is not set 579 # CONFIG_I2C_DEBUG_ALGO is not set 580 # CONFIG_I2C_DEBUG_BUS is not set ··· 605 # Graphics support 606 # 607 # CONFIG_FB is not set 608 - # CONFIG_BACKLIGHT_LCD_SUPPORT is not set 609 610 #
611 # Sound ··· 614 # 615 # USB support 616 # 617 - # CONFIG_USB is not set 618 CONFIG_USB_ARCH_HAS_HCD=y 619 CONFIG_USB_ARCH_HAS_OHCI=y 620 - 621 - # 622 - # NOTE: USB_STORAGE enables SCSI, and 'SCSI disk support' may also be needed; see USB_STORAGE Help for more information 623 - # 624 625 # 626 # USB Gadget Support ··· 647 CONFIG_FS_MBCACHE=y 648 # CONFIG_REISERFS_FS is not set 649 # CONFIG_JFS_FS is not set 650 # CONFIG_XFS_FS is not set 651 # CONFIG_MINIX_FS is not set 652 # CONFIG_ROMFS_FS is not set ··· 712 # CONFIG_NFSD is not set 713 CONFIG_ROOT_NFS=y 714 CONFIG_LOCKD=y 715 - # CONFIG_EXPORTFS is not set 716 CONFIG_SUNRPC=y 717 # CONFIG_RPCSEC_GSS_KRB5 is not set 718 # CONFIG_RPCSEC_GSS_SPKM3 is not set ··· 763 # 764 # Kernel hacking 765 # 766 # CONFIG_DEBUG_KERNEL is not set 767 # CONFIG_KGDB_CONSOLE is not set 768 # CONFIG_SERIAL_TEXT_DEBUG is not set 769
··· 1 # 2 # Automatically generated make config: don't edit 3 + # Linux kernel version: 2.6.12-rc4 4 + # Tue May 17 11:56:01 2005 5 # 6 CONFIG_MMU=y 7 CONFIG_GENERIC_HARDIRQS=y ··· 11 CONFIG_PPC=y 12 CONFIG_PPC32=y 13 CONFIG_GENERIC_NVRAM=y 14 + CONFIG_SCHED_NO_NO_OMIT_FRAME_POINTER=y 15 16 # 17 # Code maturity level options ··· 18 CONFIG_EXPERIMENTAL=y 19 CONFIG_CLEAN_COMPILE=y 20 CONFIG_BROKEN_ON_SMP=y 21 + CONFIG_INIT_ENV_ARG_LIMIT=32 22 23 # 24 # General setup ··· 29 # CONFIG_BSD_PROCESS_ACCT is not set 30 CONFIG_SYSCTL=y 31 # CONFIG_AUDIT is not set 32 # CONFIG_HOTPLUG is not set 33 CONFIG_KOBJECT_UEVENT=y 34 # CONFIG_IKCONFIG is not set 35 CONFIG_EMBEDDED=y 36 # CONFIG_KALLSYMS is not set 37 + CONFIG_PRINTK=y 38 + CONFIG_BUG=y 39 + CONFIG_BASE_FULL=y 40 CONFIG_FUTEX=y 41 # CONFIG_EPOLL is not set 42 # CONFIG_CC_OPTIMIZE_FOR_SIZE is not set ··· 44 CONFIG_CC_ALIGN_LOOPS=0 45 CONFIG_CC_ALIGN_JUMPS=0 46 # CONFIG_TINY_SHMEM is not set 47 + CONFIG_BASE_SMALL=0 48 49 # 50 # Loadable module support ··· 62 CONFIG_E500=y 63 CONFIG_BOOKE=y 64 CONFIG_FSL_BOOKE=y 65 + # CONFIG_PHYS_64BIT is not set 66 CONFIG_SPE=y 67 CONFIG_MATH_EMULATION=y 68 # CONFIG_CPU_FREQ is not set 69 CONFIG_PPC_GEN550=y 70 + # CONFIG_PM is not set 71 CONFIG_85xx=y 72 CONFIG_PPC_INDIRECT_PCI_BE=y 73 ··· 76 CONFIG_MPC8555_CDS=y 77 # CONFIG_MPC8560_ADS is not set 78 # CONFIG_SBC8560 is not set 79 + # CONFIG_STX_GP3 is not set 80 CONFIG_MPC8555=y 81 CONFIG_85xx_PCI2=y 82 ··· 90 CONFIG_BINFMT_ELF=y 91 # CONFIG_BINFMT_MISC is not set 92 # CONFIG_CMDLINE_BOOL is not set 93 + CONFIG_ISA_DMA_API=y 94 95 # 96 # Bus options ··· 103 # PCCARD (PCMCIA/CardBus) support 104 # 105 # CONFIG_PCCARD is not set 106 107 # 108 # Advanced setup ··· 180 # 181 # ATA/ATAPI/MFM/RLL support 182 # 183 + CONFIG_IDE=y 184 + CONFIG_BLK_DEV_IDE=y 185 + 186 + # 187 + # Please see Documentation/ide.txt for help/info on IDE drives 188 + # 189 + # CONFIG_BLK_DEV_IDE_SATA is not set 190 + CONFIG_BLK_DEV_IDEDISK=y
191 + # CONFIG_IDEDISK_MULTI_MODE is not set 192 + # CONFIG_BLK_DEV_IDECD is not set 193 + # CONFIG_BLK_DEV_IDETAPE is not set 194 + # CONFIG_BLK_DEV_IDEFLOPPY is not set 195 + # CONFIG_IDE_TASK_IOCTL is not set 196 + 197 + # 198 + # IDE chipset support/bugfixes 199 + # 200 + CONFIG_IDE_GENERIC=y 201 + CONFIG_BLK_DEV_IDEPCI=y 202 + CONFIG_IDEPCI_SHARE_IRQ=y 203 + # CONFIG_BLK_DEV_OFFBOARD is not set 204 + CONFIG_BLK_DEV_GENERIC=y 205 + # CONFIG_BLK_DEV_OPTI621 is not set 206 + # CONFIG_BLK_DEV_SL82C105 is not set 207 + CONFIG_BLK_DEV_IDEDMA_PCI=y 208 + # CONFIG_BLK_DEV_IDEDMA_FORCED is not set 209 + CONFIG_IDEDMA_PCI_AUTO=y 210 + # CONFIG_IDEDMA_ONLYDISK is not set 211 + # CONFIG_BLK_DEV_AEC62XX is not set 212 + # CONFIG_BLK_DEV_ALI15X3 is not set 213 + # CONFIG_BLK_DEV_AMD74XX is not set 214 + # CONFIG_BLK_DEV_CMD64X is not set 215 + # CONFIG_BLK_DEV_TRIFLEX is not set 216 + # CONFIG_BLK_DEV_CY82C693 is not set 217 + # CONFIG_BLK_DEV_CS5520 is not set 218 + # CONFIG_BLK_DEV_CS5530 is not set 219 + # CONFIG_BLK_DEV_HPT34X is not set 220 + # CONFIG_BLK_DEV_HPT366 is not set 221 + # CONFIG_BLK_DEV_SC1200 is not set 222 + # CONFIG_BLK_DEV_PIIX is not set 223 + # CONFIG_BLK_DEV_NS87415 is not set 224 + # CONFIG_BLK_DEV_PDC202XX_OLD is not set 225 + # CONFIG_BLK_DEV_PDC202XX_NEW is not set 226 + # CONFIG_BLK_DEV_SVWKS is not set 227 + # CONFIG_BLK_DEV_SIIMAGE is not set 228 + # CONFIG_BLK_DEV_SLC90E66 is not set 229 + # CONFIG_BLK_DEV_TRM290 is not set 230 + CONFIG_BLK_DEV_VIA82CXXX=y 231 + # CONFIG_IDE_ARM is not set 232 + CONFIG_BLK_DEV_IDEDMA=y 233 + # CONFIG_IDEDMA_IVB is not set 234 + CONFIG_IDEDMA_AUTO=y 235 + # CONFIG_BLK_DEV_HD is not set 236 237 # 238 # SCSI device support ··· 220 # 221 CONFIG_PACKET=y 222 # CONFIG_PACKET_MMAP is not set 223 CONFIG_UNIX=y 224 # CONFIG_NET_KEY is not set 225 CONFIG_INET=y ··· 370 # CONFIG_INPUT_EVBUG is not set 371 372 # 373 # Input Device Drivers 374 # 375 # CONFIG_INPUT_KEYBOARD is not set ··· 385 # CONFIG_INPUT_JOYSTICK is not set
386 # CONFIG_INPUT_TOUCHSCREEN is not set 387 # CONFIG_INPUT_MISC is not set 388 + 389 + # 390 + # Hardware I/O ports 391 + # 392 + # CONFIG_SERIO is not set 393 + # CONFIG_GAMEPORT is not set 394 + CONFIG_SOUND_GAMEPORT=y 395 396 # 397 # Character devices ··· 406 CONFIG_SERIAL_CORE=y 407 CONFIG_SERIAL_CORE_CONSOLE=y 408 # CONFIG_SERIAL_CPM is not set 409 + # CONFIG_SERIAL_JSM is not set 410 CONFIG_UNIX98_PTYS=y 411 CONFIG_LEGACY_PTYS=y 412 CONFIG_LEGACY_PTY_COUNT=256 ··· 434 # CONFIG_RAW_DRIVER is not set 435 436 # 437 + # TPM devices 438 + # 439 + # CONFIG_TCG_TPM is not set 440 + 441 + # 442 # I2C support 443 # 444 CONFIG_I2C=y ··· 456 # CONFIG_I2C_AMD8111 is not set 457 # CONFIG_I2C_I801 is not set 458 # CONFIG_I2C_I810 is not set 459 + # CONFIG_I2C_PIIX4 is not set 460 # CONFIG_I2C_ISA is not set 461 CONFIG_I2C_MPC=y 462 # CONFIG_I2C_NFORCE2 is not set 463 # CONFIG_I2C_PARPORT_LIGHT is not set 464 # CONFIG_I2C_PROSAVAGE is not set 465 # CONFIG_I2C_SAVAGE4 is not set 466 # CONFIG_SCx200_ACB is not set ··· 483 # CONFIG_SENSORS_ASB100 is not set 484 # CONFIG_SENSORS_DS1621 is not set 485 # CONFIG_SENSORS_FSCHER is not set 486 + # CONFIG_SENSORS_FSCPOS is not set 487 # CONFIG_SENSORS_GL518SM is not set 488 + # CONFIG_SENSORS_GL520SM is not set 489 # CONFIG_SENSORS_IT87 is not set 490 # CONFIG_SENSORS_LM63 is not set 491 # CONFIG_SENSORS_LM75 is not set ··· 494 # CONFIG_SENSORS_LM85 is not set 495 # CONFIG_SENSORS_LM87 is not set 496 # CONFIG_SENSORS_LM90 is not set 497 + # CONFIG_SENSORS_LM92 is not set 498 # CONFIG_SENSORS_MAX1619 is not set 499 # CONFIG_SENSORS_PC87360 is not set 500 # CONFIG_SENSORS_SMSC47B397 is not set 501 + # CONFIG_SENSORS_SIS5595 is not set 502 # CONFIG_SENSORS_SMSC47M1 is not set 503 # CONFIG_SENSORS_VIA686A is not set 504 # CONFIG_SENSORS_W83781D is not set ··· 506 # 507 # Other I2C Chip support 508 # 509 + # CONFIG_SENSORS_DS1337 is not set 510 # CONFIG_SENSORS_EEPROM is not set 511 # CONFIG_SENSORS_PCF8574 is not set
512 # CONFIG_SENSORS_PCF8591 is not set 513 # CONFIG_SENSORS_RTC8564 is not set 514 + # CONFIG_SENSORS_M41T00 is not set 515 # CONFIG_I2C_DEBUG_CORE is not set 516 # CONFIG_I2C_DEBUG_ALGO is not set 517 # CONFIG_I2C_DEBUG_BUS is not set ··· 538 # Graphics support 539 # 540 # CONFIG_FB is not set 541 542 # 543 # Sound ··· 548 # 549 # USB support 550 # 551 CONFIG_USB_ARCH_HAS_HCD=y 552 CONFIG_USB_ARCH_HAS_OHCI=y 553 + # CONFIG_USB is not set 554 555 # 556 # USB Gadget Support ··· 585 CONFIG_FS_MBCACHE=y 586 # CONFIG_REISERFS_FS is not set 587 # CONFIG_JFS_FS is not set 588 + 589 + # 590 + # XFS support 591 + # 592 # CONFIG_XFS_FS is not set 593 # CONFIG_MINIX_FS is not set 594 # CONFIG_ROMFS_FS is not set ··· 646 # CONFIG_NFSD is not set 647 CONFIG_ROOT_NFS=y 648 CONFIG_LOCKD=y 649 CONFIG_SUNRPC=y 650 # CONFIG_RPCSEC_GSS_KRB5 is not set 651 # CONFIG_RPCSEC_GSS_SPKM3 is not set ··· 698 # 699 # Kernel hacking 700 # 701 + # CONFIG_PRINTK_TIME is not set 702 # CONFIG_DEBUG_KERNEL is not set 703 + CONFIG_LOG_BUF_SHIFT=14 704 # CONFIG_KGDB_CONSOLE is not set 705 # CONFIG_SERIAL_TEXT_DEBUG is not set 706
+14 -1
arch/ppc/kernel/head_fsl_booke.S
··· 232 tlbwe 233 234 /* 7. Jump to KERNELBASE mapping */ 235 - li r7,0 236 bl 1f /* Find our address */ 237 1: mflr r9 238 rlwimi r6,r9,0,20,31 ··· 292 mfspr r2,SPRN_HID0 293 oris r2,r2,HID0_DOZE@h 294 mtspr SPRN_HID0, r2 295 #endif 296 297 /*
··· 232 tlbwe 233 234 /* 7. Jump to KERNELBASE mapping */ 235 + lis r7,MSR_KERNEL@h 236 + ori r7,r7,MSR_KERNEL@l 237 bl 1f /* Find our address */ 238 1: mflr r9 239 rlwimi r6,r9,0,20,31 ··· 291 mfspr r2,SPRN_HID0 292 oris r2,r2,HID0_DOZE@h 293 mtspr SPRN_HID0, r2 294 + #endif 295 + 296 + #if !defined(CONFIG_BDI_SWITCH) 297 + /* 298 + * The Abatron BDI JTAG debugger does not tolerate others 299 + * mucking with the debug registers. 300 + */ 301 + lis r2,DBCR0_IDM@h 302 + mtspr SPRN_DBCR0,r2 303 + /* clear any residual debug events */ 304 + li r2,-1 305 + mtspr SPRN_DBSR,r2 306 #endif 307 308 /*
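The new `lis r7,MSR_KERNEL@h` / `ori r7,r7,MSR_KERNEL@l` pair above materializes a full 32-bit constant in two 16-bit halves, since PowerPC immediates are 16 bits wide. A minimal C model of that idiom (function name hypothetical, for illustration only; `@h` feeds `lis` and `@l` feeds `ori`):

```c
/* Hypothetical model of the lis/ori constant-building idiom:
 * lis places the upper 16 bits (@h) into the high half of the
 * register, and ori ORs in the lower 16 bits (@l). Because ori
 * is a logical OR with no sign extension, the two halves
 * reassemble the original 32-bit value exactly. */
static unsigned int lis_ori(unsigned int value)
{
    unsigned int hi = (value >> 16) & 0xffffu; /* the @h half, as lis sees it */
    unsigned int lo = value & 0xffffu;         /* the @l half, as ori sees it */
    return (hi << 16) | lo;                    /* reassembled constant */
}
```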
+1 -6
arch/ppc/kernel/traps.c
··· 408 409 /* Early out if we are an invalid form of lswx */ 410 if ((instword & INST_STRING_MASK) == INST_LSWX) 411 - if ((rA >= rT) || (NB_RB >= rT) || (rT == rA) || (rT == NB_RB)) 412 - return -EINVAL; 413 - 414 - /* Early out if we are an invalid form of lswi */ 415 - if ((instword & INST_STRING_MASK) == INST_LSWI) 416 - if ((rA >= rT) || (rT == rA)) 417 return -EINVAL; 418 419 EA = (rA == 0) ? 0 : regs->gpr[rA];
··· 408 409 /* Early out if we are an invalid form of lswx */ 410 if ((instword & INST_STRING_MASK) == INST_LSWX) 411 + if ((rT == rA) || (rT == NB_RB)) 412 return -EINVAL; 413 414 EA = (rA == 0) ? 0 : regs->gpr[rA];
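The tightened `lswx` validity check above keeps only the register-overlap tests: the string load is rejected when the destination register coincides with either source register. A standalone sketch of that predicate (function name hypothetical, not the kernel's):

```c
/* Hypothetical model of the lswx early-out check from the diff:
 * the instruction form is invalid when the target register rT
 * overlaps the address register rA or the register holding the
 * byte count (NB_RB). Returns nonzero for an invalid form. */
static int lswx_form_invalid(int rT, int rA, int nb_rb)
{
    return (rT == rA) || (rT == nb_rb);
}
```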
-1
arch/ppc/platforms/83xx/mpc834x_sys.c
··· 127 { 128 /* we steal the lowest ioremap addr for virt space */ 129 io_block_mapping(VIRT_IMMRBAR, immrbar, 1024*1024, _PAGE_IO); 130 - io_block_mapping(BCSR_VIRT_ADDR, BCSR_PHYS_ADDR, BCSR_SIZE, _PAGE_IO); 131 } 132 133 int
··· 127 { 128 /* we steal the lowest ioremap addr for virt space */ 129 io_block_mapping(VIRT_IMMRBAR, immrbar, 1024*1024, _PAGE_IO); 130 } 131 132 int
+6 -1
arch/ppc/platforms/83xx/mpc834x_sys.h
··· 26 #define VIRT_IMMRBAR ((uint)0xfe000000) 27 28 #define BCSR_PHYS_ADDR ((uint)0xf8000000) 29 - #define BCSR_VIRT_ADDR ((uint)0xfe100000) 30 #define BCSR_SIZE ((uint)(32 * 1024)) 31 32 #ifdef CONFIG_PCI 33 /* PCI interrupt controller */
··· 26 #define VIRT_IMMRBAR ((uint)0xfe000000) 27 28 #define BCSR_PHYS_ADDR ((uint)0xf8000000) 29 #define BCSR_SIZE ((uint)(32 * 1024)) 30 + 31 + #define BCSR_MISC_REG2_OFF 0x07 32 + #define BCSR_MISC_REG2_PORESET 0x01 33 + 34 + #define BCSR_MISC_REG3_OFF 0x08 35 + #define BCSR_MISC_REG3_CNFLOCK 0x80 36 37 #ifdef CONFIG_PCI 38 /* PCI interrupt controller */
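The new `BCSR_MISC_REG2_*`/`BCSR_MISC_REG3_*` definitions above name byte offsets and bit masks within the board control/status register block. A sketch of how such a definition is typically used, with a plain array standing in for the memory-mapped window (the register semantics here are assumptions for illustration):

```c
#include <stdint.h>

/* Hypothetical use of the BCSR offsets/masks from the header:
 * the BCSR registers are byte-wide, so a power-on reset request
 * is a read-modify-write that ORs the PORESET bit into the byte
 * at BCSR_MISC_REG2_OFF. The array below stands in for the real
 * memory-mapped BCSR_SIZE window. */
#define BCSR_MISC_REG2_OFF     0x07
#define BCSR_MISC_REG2_PORESET 0x01

static uint8_t bcsr[32 * 1024]; /* stand-in for the mapped BCSR block */

static void bcsr_request_poreset(void)
{
    bcsr[BCSR_MISC_REG2_OFF] |= BCSR_MISC_REG2_PORESET;
}
```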
+3
arch/ppc/platforms/85xx/mpc8540_ads.c
··· 210 #if defined(CONFIG_SERIAL_8250) && defined(CONFIG_SERIAL_TEXT_DEBUG) 211 ppc_md.progress = gen550_progress; 212 #endif /* CONFIG_SERIAL_8250 && CONFIG_SERIAL_TEXT_DEBUG */ 213 214 if (ppc_md.progress) 215 ppc_md.progress("mpc8540ads_init(): exit", 0);
··· 210 #if defined(CONFIG_SERIAL_8250) && defined(CONFIG_SERIAL_TEXT_DEBUG) 211 ppc_md.progress = gen550_progress; 212 #endif /* CONFIG_SERIAL_8250 && CONFIG_SERIAL_TEXT_DEBUG */ 213 + #if defined(CONFIG_SERIAL_8250) && defined(CONFIG_KGDB) 214 + ppc_md.early_serial_map = mpc85xx_early_serial_map; 215 + #endif /* CONFIG_SERIAL_8250 && CONFIG_KGDB */ 216 217 if (ppc_md.progress) 218 ppc_md.progress("mpc8540ads_init(): exit", 0);
+141 -2
arch/ppc/platforms/85xx/mpc85xx_cds_common.c
··· 44 #include <asm/machdep.h> 45 #include <asm/prom.h> 46 #include <asm/open_pic.h> 47 #include <asm/bootinfo.h> 48 #include <asm/pci-bridge.h> 49 #include <asm/mpc85xx.h> ··· 182 mpc85xx_cds_init_IRQ(void) 183 { 184 bd_t *binfo = (bd_t *) __res; 185 186 /* Determine the Physical Address of the OpenPIC regs */ 187 phys_addr_t OpenPIC_PAddr = binfo->bi_immr_base + MPC85xx_OPENPIC_OFFSET; ··· 199 * leave space for cascading interrupts underneath. 200 */ 201 openpic_init(MPC85xx_OPENPIC_IRQ_OFFSET); 202 203 #ifdef CONFIG_CPM2 204 /* Setup CPM2 PIC */ ··· 242 * interrupt on slot */ 243 { 244 { 0, 1, 2, 3 }, /* 16 - PMC */ 245 - { 3, 0, 0, 0 }, /* 17 P2P (Tsi320) */ 246 { 0, 1, 2, 3 }, /* 18 - Slot 1 */ 247 { 1, 2, 3, 0 }, /* 19 - Slot 2 */ 248 { 2, 3, 0, 1 }, /* 20 - Slot 3 */ ··· 291 return PCIBIOS_DEVICE_NOT_FOUND; 292 #endif 293 /* We explicitly do not go past the Tundra 320 Bridge */ 294 - if (bus == 1) 295 return PCIBIOS_DEVICE_NOT_FOUND; 296 if ((bus == 0) && (PCI_SLOT(devfn) == ARCADIA_2ND_BRIDGE_IDSEL)) 297 return PCIBIOS_DEVICE_NOT_FOUND; 298 else 299 return PCIBIOS_SUCCESSFUL; 300 } 301 #endif /* CONFIG_PCI */ 302 ··· 461 loops_per_jiffy = freq / HZ; 462 463 #ifdef CONFIG_PCI 464 /* setup PCI host bridges */ 465 mpc85xx_setup_hose(); 466 #endif ··· 595 #if defined(CONFIG_SERIAL_8250) && defined(CONFIG_SERIAL_TEXT_DEBUG) 596 ppc_md.progress = gen550_progress; 597 #endif /* CONFIG_SERIAL_8250 && CONFIG_SERIAL_TEXT_DEBUG */ 598 599 if (ppc_md.progress) 600 ppc_md.progress("mpc85xx_cds_init(): exit", 0);
··· 44 #include <asm/machdep.h> 45 #include <asm/prom.h> 46 #include <asm/open_pic.h> 47 + #include <asm/i8259.h> 48 #include <asm/bootinfo.h> 49 #include <asm/pci-bridge.h> 50 #include <asm/mpc85xx.h> ··· 181 mpc85xx_cds_init_IRQ(void) 182 { 183 bd_t *binfo = (bd_t *) __res; 184 + int i; 185 186 /* Determine the Physical Address of the OpenPIC regs */ 187 phys_addr_t OpenPIC_PAddr = binfo->bi_immr_base + MPC85xx_OPENPIC_OFFSET; ··· 197 * leave space for cascading interrupts underneath. 198 */ 199 openpic_init(MPC85xx_OPENPIC_IRQ_OFFSET); 200 + 201 + #ifdef CONFIG_PCI 202 + openpic_hookup_cascade(PIRQ0A, "82c59 cascade", i8259_irq); 203 + 204 + for (i = 0; i < NUM_8259_INTERRUPTS; i++) 205 + irq_desc[i].handler = &i8259_pic; 206 + 207 + i8259_init(0); 208 + #endif 209 210 #ifdef CONFIG_CPM2 211 /* Setup CPM2 PIC */ ··· 231 * interrupt on slot */ 232 { 233 { 0, 1, 2, 3 }, /* 16 - PMC */ 234 + { 0, 1, 2, 3 }, /* 17 P2P (Tsi320) */ 235 { 0, 1, 2, 3 }, /* 18 - Slot 1 */ 236 { 1, 2, 3, 0 }, /* 19 - Slot 2 */ 237 { 2, 3, 0, 1 }, /* 20 - Slot 3 */ ··· 280 return PCIBIOS_DEVICE_NOT_FOUND; 281 #endif 282 /* We explicitly do not go past the Tundra 320 Bridge */ 283 + if ((bus == 1) && (PCI_SLOT(devfn) == ARCADIA_2ND_BRIDGE_IDSEL)) 284 return PCIBIOS_DEVICE_NOT_FOUND; 285 if ((bus == 0) && (PCI_SLOT(devfn) == ARCADIA_2ND_BRIDGE_IDSEL)) 286 return PCIBIOS_DEVICE_NOT_FOUND; 287 else 288 return PCIBIOS_SUCCESSFUL; 289 + } 290 + 291 + void __init 292 + mpc85xx_cds_enable_via(struct pci_controller *hose) 293 + { 294 + u32 pci_class; 295 + u16 vid, did; 296 + 297 + early_read_config_dword(hose, 0, 0x88, PCI_CLASS_REVISION, &pci_class); 298 + if ((pci_class >> 16) != PCI_CLASS_BRIDGE_PCI) 299 + return; 300 + 301 + /* Configure P2P so that we can reach bus 1 */ 302 + early_write_config_byte(hose, 0, 0x88, PCI_PRIMARY_BUS, 0); 303 + early_write_config_byte(hose, 0, 0x88, PCI_SECONDARY_BUS, 1); 304 + early_write_config_byte(hose, 0, 0x88, PCI_SUBORDINATE_BUS, 0xff); 305 + 306 + 
early_read_config_word(hose, 1, 0x10, PCI_VENDOR_ID, &vid); 307 + early_read_config_word(hose, 1, 0x10, PCI_DEVICE_ID, &did); 308 + 309 + if ((vid != PCI_VENDOR_ID_VIA) || 310 + (did != PCI_DEVICE_ID_VIA_82C686)) 311 + return; 312 + 313 + /* Enable USB and IDE functions */ 314 + early_write_config_byte(hose, 1, 0x10, 0x48, 0x08); 315 + } 316 + 317 + void __init 318 + mpc85xx_cds_fixup_via(struct pci_controller *hose) 319 + { 320 + u32 pci_class; 321 + u16 vid, did; 322 + 323 + early_read_config_dword(hose, 0, 0x88, PCI_CLASS_REVISION, &pci_class); 324 + if ((pci_class >> 16) != PCI_CLASS_BRIDGE_PCI) 325 + return; 326 + 327 + /* 328 + * Force the backplane P2P bridge to have a window 329 + * open from 0x00000000-0x00001fff in PCI I/O space. 330 + * This allows legacy I/O (i8259, etc) on the VIA 331 + * southbridge to be accessed. 332 + */ 333 + early_write_config_byte(hose, 0, 0x88, PCI_IO_BASE, 0x00); 334 + early_write_config_word(hose, 0, 0x88, PCI_IO_BASE_UPPER16, 0x0000); 335 + early_write_config_byte(hose, 0, 0x88, PCI_IO_LIMIT, 0x10); 336 + early_write_config_word(hose, 0, 0x88, PCI_IO_LIMIT_UPPER16, 0x0000); 337 + 338 + early_read_config_word(hose, 1, 0x10, PCI_VENDOR_ID, &vid); 339 + early_read_config_word(hose, 1, 0x10, PCI_DEVICE_ID, &did); 340 + if ((vid != PCI_VENDOR_ID_VIA) || 341 + (did != PCI_DEVICE_ID_VIA_82C686)) 342 + return; 343 + 344 + /* 345 + * Since the P2P window was forced to cover the fixed 346 + * legacy I/O addresses, it is necessary to manually 347 + * place the base addresses for the IDE and USB functions 348 + * within this window. 
349 + */ 350 + /* Function 1, IDE */ 351 + early_write_config_dword(hose, 1, 0x11, PCI_BASE_ADDRESS_0, 0x1ff8); 352 + early_write_config_dword(hose, 1, 0x11, PCI_BASE_ADDRESS_1, 0x1ff4); 353 + early_write_config_dword(hose, 1, 0x11, PCI_BASE_ADDRESS_2, 0x1fe8); 354 + early_write_config_dword(hose, 1, 0x11, PCI_BASE_ADDRESS_3, 0x1fe4); 355 + early_write_config_dword(hose, 1, 0x11, PCI_BASE_ADDRESS_4, 0x1fd0); 356 + 357 + /* Function 2, USB ports 0-1 */ 358 + early_write_config_dword(hose, 1, 0x12, PCI_BASE_ADDRESS_4, 0x1fa0); 359 + 360 + /* Function 3, USB ports 2-3 */ 361 + early_write_config_dword(hose, 1, 0x13, PCI_BASE_ADDRESS_4, 0x1f80); 362 + 363 + /* Function 5, Power Management */ 364 + early_write_config_dword(hose, 1, 0x15, PCI_BASE_ADDRESS_0, 0x1e00); 365 + early_write_config_dword(hose, 1, 0x15, PCI_BASE_ADDRESS_1, 0x1dfc); 366 + early_write_config_dword(hose, 1, 0x15, PCI_BASE_ADDRESS_2, 0x1df8); 367 + 368 + /* Function 6, AC97 Interface */ 369 + early_write_config_dword(hose, 1, 0x16, PCI_BASE_ADDRESS_0, 0x1c00); 370 + } 371 + 372 + void __init 373 + mpc85xx_cds_pcibios_fixup(void) 374 + { 375 + struct pci_dev *dev = NULL; 376 + u_char c; 377 + 378 + if ((dev = pci_find_device(PCI_VENDOR_ID_VIA, 379 + PCI_DEVICE_ID_VIA_82C586_1, NULL))) { 380 + /* 381 + * U-Boot does not set the enable bits 382 + * for the IDE device. Force them on here. 
383 + */ 384 + pci_read_config_byte(dev, 0x40, &c); 385 + c |= 0x03; /* IDE: Chip Enable Bits */ 386 + pci_write_config_byte(dev, 0x40, c); 387 + 388 + /* 389 + * Since only primary interface works, force the 390 + * IDE function to standard primary IDE interrupt 391 + * w/ 8259 offset 392 + */ 393 + dev->irq = 14; 394 + pci_write_config_byte(dev, PCI_INTERRUPT_LINE, dev->irq); 395 + } 396 + 397 + /* 398 + * Force legacy USB interrupt routing 399 + */ 400 + if ((dev = pci_find_device(PCI_VENDOR_ID_VIA, 401 + PCI_DEVICE_ID_VIA_82C586_2, NULL))) { 402 + dev->irq = 10; 403 + pci_write_config_byte(dev, PCI_INTERRUPT_LINE, 10); 404 + } 405 + 406 + if ((dev = pci_find_device(PCI_VENDOR_ID_VIA, 407 + PCI_DEVICE_ID_VIA_82C586_2, dev))) { 408 + dev->irq = 11; 409 + pci_write_config_byte(dev, PCI_INTERRUPT_LINE, 11); 410 + } 411 } 412 #endif /* CONFIG_PCI */ 413 ··· 328 loops_per_jiffy = freq / HZ; 329 330 #ifdef CONFIG_PCI 331 + /* VIA IDE configuration */ 332 + ppc_md.pcibios_fixup = mpc85xx_cds_pcibios_fixup; 333 + 334 /* setup PCI host bridges */ 335 mpc85xx_setup_hose(); 336 #endif ··· 459 #if defined(CONFIG_SERIAL_8250) && defined(CONFIG_SERIAL_TEXT_DEBUG) 460 ppc_md.progress = gen550_progress; 461 #endif /* CONFIG_SERIAL_8250 && CONFIG_SERIAL_TEXT_DEBUG */ 462 + #if defined(CONFIG_SERIAL_8250) && defined(CONFIG_KGDB) 463 + ppc_md.early_serial_map = mpc85xx_early_serial_map; 464 + #endif /* CONFIG_SERIAL_8250 && CONFIG_KGDB */ 465 466 if (ppc_md.progress) 467 ppc_md.progress("mpc85xx_cds_init(): exit", 0);
+3
arch/ppc/platforms/85xx/mpc85xx_cds_common.h
··· 77 78 #define MPC85XX_PCI2_IO_SIZE 0x01000000 79 80 #endif /* __MACH_MPC85XX_CDS_H__ */
··· 77 78 #define MPC85XX_PCI2_IO_SIZE 0x01000000 79 80 + #define NR_8259_INTS 16 81 + #define CPM_IRQ_OFFSET NR_8259_INTS 82 + 83 #endif /* __MACH_MPC85XX_CDS_H__ */
+3
arch/ppc/platforms/85xx/sbc8560.c
··· 221 #if defined(CONFIG_SERIAL_8250) && defined(CONFIG_SERIAL_TEXT_DEBUG) 222 ppc_md.progress = gen550_progress; 223 #endif /* CONFIG_SERIAL_8250 && CONFIG_SERIAL_TEXT_DEBUG */ 224 225 if (ppc_md.progress) 226 ppc_md.progress("sbc8560_init(): exit", 0);
··· 221 #if defined(CONFIG_SERIAL_8250) && defined(CONFIG_SERIAL_TEXT_DEBUG) 222 ppc_md.progress = gen550_progress; 223 #endif /* CONFIG_SERIAL_8250 && CONFIG_SERIAL_TEXT_DEBUG */ 224 + #if defined(CONFIG_SERIAL_8250) && defined(CONFIG_KGDB) 225 + ppc_md.early_serial_map = sbc8560_early_serial_map; 226 + #endif /* CONFIG_SERIAL_8250 && CONFIG_KGDB */ 227 228 if (ppc_md.progress) 229 ppc_md.progress("sbc8560_init(): exit", 0);
+22 -14
arch/ppc/platforms/pmac_cpufreq.c
··· 85 static int has_cpu_l2lve; 86 87 88 - #define PMAC_CPU_LOW_SPEED 1 89 - #define PMAC_CPU_HIGH_SPEED 0 90 - 91 /* There are only two frequency states for each processor. Values 92 * are in kHz for the time being. 93 */ 94 - #define CPUFREQ_HIGH PMAC_CPU_HIGH_SPEED 95 - #define CPUFREQ_LOW PMAC_CPU_LOW_SPEED 96 97 static struct cpufreq_frequency_table pmac_cpu_freqs[] = { 98 {CPUFREQ_HIGH, 0}, 99 {CPUFREQ_LOW, 0}, 100 {0, CPUFREQ_TABLE_END}, 101 }; 102 103 static inline void local_delay(unsigned long ms) ··· 271 #ifdef DEBUG_FREQ 272 printk(KERN_DEBUG "HID1, before: %x\n", mfspr(SPRN_HID1)); 273 #endif 274 /* Disable all interrupt sources on openpic */ 275 pic_prio = openpic_get_priority(); 276 openpic_set_priority(0xf); ··· 347 debug_calc_bogomips(); 348 #endif 349 350 preempt_enable(); 351 352 return 0; ··· 361 static unsigned long prev_l3cr; 362 363 freqs.old = cur_freq; 364 - freqs.new = (speed_mode == PMAC_CPU_HIGH_SPEED) ? hi_freq : low_freq; 365 freqs.cpu = smp_processor_id(); 366 367 if (freqs.old == freqs.new) ··· 369 370 if (notify) 371 cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); 372 - if (speed_mode == PMAC_CPU_LOW_SPEED && 373 cpu_has_feature(CPU_FTR_L3CR)) { 374 l3cr = _get_L3CR(); 375 if (l3cr & L3CR_L3E) { ··· 377 _set_L3CR(0); 378 } 379 } 380 - set_speed_proc(speed_mode == PMAC_CPU_LOW_SPEED); 381 - if (speed_mode == PMAC_CPU_HIGH_SPEED && 382 cpu_has_feature(CPU_FTR_L3CR)) { 383 l3cr = _get_L3CR(); 384 if ((prev_l3cr & L3CR_L3E) && l3cr != prev_l3cr) ··· 386 } 387 if (notify) 388 cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); 389 - cur_freq = (speed_mode == PMAC_CPU_HIGH_SPEED) ? 
hi_freq : low_freq; 390 391 return 0; 392 } ··· 429 policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL; 430 policy->cur = cur_freq; 431 432 - return cpufreq_frequency_table_cpuinfo(policy, &pmac_cpu_freqs[0]); 433 } 434 435 static u32 __pmac read_gpio(struct device_node *np) ··· 464 no_schedule = 1; 465 sleep_freq = cur_freq; 466 if (cur_freq == low_freq) 467 - do_set_cpu_speed(PMAC_CPU_HIGH_SPEED, 0); 468 return 0; 469 } 470 ··· 480 * is that we force a switch to whatever it was, which is 481 * probably high speed due to our suspend() routine 482 */ 483 - do_set_cpu_speed(sleep_freq == low_freq ? PMAC_CPU_LOW_SPEED 484 - : PMAC_CPU_HIGH_SPEED, 0); 485 486 no_schedule = 0; 487 return 0; ··· 495 .suspend = pmac_cpufreq_suspend, 496 .resume = pmac_cpufreq_resume, 497 .flags = CPUFREQ_PM_NO_WARN, 498 .name = "powermac", 499 .owner = THIS_MODULE, 500 };
··· 85 static int has_cpu_l2lve; 86 87 88 /* There are only two frequency states for each processor. Values 89 * are in kHz for the time being. 90 */ 91 + #define CPUFREQ_HIGH 0 92 + #define CPUFREQ_LOW 1 93 94 static struct cpufreq_frequency_table pmac_cpu_freqs[] = { 95 {CPUFREQ_HIGH, 0}, 96 {CPUFREQ_LOW, 0}, 97 {0, CPUFREQ_TABLE_END}, 98 + }; 99 + 100 + static struct freq_attr* pmac_cpu_freqs_attr[] = { 101 + &cpufreq_freq_attr_scaling_available_freqs, 102 + NULL, 103 }; 104 105 static inline void local_delay(unsigned long ms) ··· 269 #ifdef DEBUG_FREQ 270 printk(KERN_DEBUG "HID1, before: %x\n", mfspr(SPRN_HID1)); 271 #endif 272 + pmu_suspend(); 273 + 274 /* Disable all interrupt sources on openpic */ 275 pic_prio = openpic_get_priority(); 276 openpic_set_priority(0xf); ··· 343 debug_calc_bogomips(); 344 #endif 345 346 + pmu_resume(); 347 + 348 preempt_enable(); 349 350 return 0; ··· 355 static unsigned long prev_l3cr; 356 357 freqs.old = cur_freq; 358 + freqs.new = (speed_mode == CPUFREQ_HIGH) ? hi_freq : low_freq; 359 freqs.cpu = smp_processor_id(); 360 361 if (freqs.old == freqs.new) ··· 363 364 if (notify) 365 cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); 366 + if (speed_mode == CPUFREQ_LOW && 367 cpu_has_feature(CPU_FTR_L3CR)) { 368 l3cr = _get_L3CR(); 369 if (l3cr & L3CR_L3E) { ··· 371 _set_L3CR(0); 372 } 373 } 374 + set_speed_proc(speed_mode == CPUFREQ_LOW); 375 + if (speed_mode == CPUFREQ_HIGH && 376 cpu_has_feature(CPU_FTR_L3CR)) { 377 l3cr = _get_L3CR(); 378 if ((prev_l3cr & L3CR_L3E) && l3cr != prev_l3cr) ··· 380 } 381 if (notify) 382 cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); 383 + cur_freq = (speed_mode == CPUFREQ_HIGH) ? 
hi_freq : low_freq; 384 385 return 0; 386 } ··· 423 policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL; 424 policy->cur = cur_freq; 425 426 + cpufreq_frequency_table_get_attr(pmac_cpu_freqs, policy->cpu); 427 + return cpufreq_frequency_table_cpuinfo(policy, pmac_cpu_freqs); 428 } 429 430 static u32 __pmac read_gpio(struct device_node *np) ··· 457 no_schedule = 1; 458 sleep_freq = cur_freq; 459 if (cur_freq == low_freq) 460 + do_set_cpu_speed(CPUFREQ_HIGH, 0); 461 return 0; 462 } 463 ··· 473 * is that we force a switch to whatever it was, which is 474 * probably high speed due to our suspend() routine 475 */ 476 + do_set_cpu_speed(sleep_freq == low_freq ? 477 + CPUFREQ_LOW : CPUFREQ_HIGH, 0); 478 479 no_schedule = 0; 480 return 0; ··· 488 .suspend = pmac_cpufreq_suspend, 489 .resume = pmac_cpufreq_resume, 490 .flags = CPUFREQ_PM_NO_WARN, 491 + .attr = pmac_cpu_freqs_attr, 492 .name = "powermac", 493 .owner = THIS_MODULE, 494 };
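The `pmac_cpu_freqs[]` table above pairs a driver index (`CPUFREQ_HIGH` = 0, `CPUFREQ_LOW` = 1) with a frequency in kHz and ends with a `CPUFREQ_TABLE_END` sentinel in the frequency field. A self-contained model of that two-state table and a lookup over it (the struct name, helper, and kHz values are illustrative, not the kernel's):

```c
#include <stddef.h>

/* Hypothetical model of the two-state cpufreq table: index 0 is
 * the high speed, index 1 the low speed, and the table ends with
 * a sentinel frequency. The kHz values are made up. */
#define CPUFREQ_HIGH      0
#define CPUFREQ_LOW       1
#define CPUFREQ_TABLE_END (~0u)

struct freq_entry {
    unsigned int index;
    unsigned int frequency; /* kHz */
};

static struct freq_entry pmac_cpu_freqs[] = {
    { CPUFREQ_HIGH, 1000000 },
    { CPUFREQ_LOW,   500000 },
    { 0, CPUFREQ_TABLE_END },
};

/* Walk the table until the sentinel, returning the frequency for
 * the requested speed mode, or 0 when the mode is not present. */
static unsigned int freq_for_mode(unsigned int mode)
{
    for (size_t i = 0; pmac_cpu_freqs[i].frequency != CPUFREQ_TABLE_END; i++)
        if (pmac_cpu_freqs[i].index == mode)
            return pmac_cpu_freqs[i].frequency;
    return 0;
}
```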
+18 -23
arch/ppc/platforms/pq2ads.h
··· 49 /* PCI interrupt controller */ 50 #define PCI_INT_STAT_REG 0xF8200000 51 #define PCI_INT_MASK_REG 0xF8200004 52 - #define PIRQA (NR_SIU_INTS + 0) 53 - #define PIRQB (NR_SIU_INTS + 1) 54 - #define PIRQC (NR_SIU_INTS + 2) 55 - #define PIRQD (NR_SIU_INTS + 3) 56 57 /* 58 * PCI memory map definitions for MPC8266ADS-PCI. ··· 68 * 0x00000000-0x1FFFFFFF 0x00000000-0x1FFFFFFF MPC8266 local memory 69 */ 70 71 - /* window for a PCI master to access MPC8266 memory */ 72 - #define PCI_SLV_MEM_LOCAL 0x00000000 /* Local base */ 73 - #define PCI_SLV_MEM_BUS 0x00000000 /* PCI base */ 74 75 - /* window for the processor to access PCI memory with prefetching */ 76 - #define PCI_MSTR_MEM_LOCAL 0x80000000 /* Local base */ 77 - #define PCI_MSTR_MEM_BUS 0x80000000 /* PCI base */ 78 - #define PCI_MSTR_MEM_SIZE 0x20000000 /* 512MB */ 79 80 - /* window for the processor to access PCI memory without prefetching */ 81 - #define PCI_MSTR_MEMIO_LOCAL 0xA0000000 /* Local base */ 82 - #define PCI_MSTR_MEMIO_BUS 0xA0000000 /* PCI base */ 83 - #define PCI_MSTR_MEMIO_SIZE 0x20000000 /* 512MB */ 84 85 - /* window for the processor to access PCI I/O */ 86 - #define PCI_MSTR_IO_LOCAL 0xF4000000 /* Local base */ 87 - #define PCI_MSTR_IO_BUS 0x00000000 /* PCI base */ 88 - #define PCI_MSTR_IO_SIZE 0x04000000 /* 64MB */ 89 - 90 - #define _IO_BASE PCI_MSTR_IO_LOCAL 91 - #define _ISA_MEM_BASE PCI_MSTR_MEMIO_LOCAL 92 - #define PCI_DRAM_OFFSET PCI_SLV_MEM_BUS 93 #endif /* CONFIG_PCI */ 94 95 #endif /* __MACH_ADS8260_DEFS */
··· 49 /* PCI interrupt controller */ 50 #define PCI_INT_STAT_REG 0xF8200000 51 #define PCI_INT_MASK_REG 0xF8200004 52 + #define PIRQA (NR_CPM_INTS + 0) 53 + #define PIRQB (NR_CPM_INTS + 1) 54 + #define PIRQC (NR_CPM_INTS + 2) 55 + #define PIRQD (NR_CPM_INTS + 3) 56 57 /* 58 * PCI memory map definitions for MPC8266ADS-PCI. ··· 68 * 0x00000000-0x1FFFFFFF 0x00000000-0x1FFFFFFF MPC8266 local memory 69 */ 70 71 + /* All the other PCI memory map definitions reside at syslib/m82xx_pci.h 72 + Here we should redefine what is unique for this board */ 73 + #define M82xx_PCI_SLAVE_MEM_LOCAL 0x00000000 /* Local base */ 74 + #define M82xx_PCI_SLAVE_MEM_BUS 0x00000000 /* PCI base */ 75 + #define M82xx_PCI_SLAVE_MEM_SIZE 0x10000000 /* 256 Mb */ 76 77 + #define M82xx_PCI_SLAVE_SEC_WND_SIZE ~(0x40000000 - 1U) /* 2 x 512Mb */ 78 + #define M82xx_PCI_SLAVE_SEC_WND_BASE 0x80000000 /* PCI Memory base */ 79 80 + #if defined(CONFIG_ADS8272) 81 + #define PCI_INT_TO_SIU SIU_INT_IRQ2 82 + #elif defined(CONFIG_PQ2FADS) 83 + #define PCI_INT_TO_SIU SIU_INT_IRQ6 84 + #else 85 + #warning PCI Bridge will be without interrupts support 86 + #endif 87 88 #endif /* CONFIG_PCI */ 89 90 #endif /* __MACH_ADS8260_DEFS */
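The new `PCI_INT_TO_SIU` selection above resolves at compile time from the board config symbol, falling back to a `#warning` when no supported board is chosen. A minimal compilable sketch of that pattern (the `SIU_INT_IRQ*` values are placeholders; the real ones come from the 82xx interrupt headers):

```c
/* Hypothetical model of the board-conditional interrupt routing:
 * which SIU line carries the PCI cascade depends on the selected
 * board. CONFIG_ADS8272 is force-defined here purely to make the
 * sketch self-contained. */
#define CONFIG_ADS8272  /* pretend this board is selected */

#define SIU_INT_IRQ2 2  /* placeholder value */
#define SIU_INT_IRQ6 6  /* placeholder value */

#if defined(CONFIG_ADS8272)
#define PCI_INT_TO_SIU SIU_INT_IRQ2
#elif defined(CONFIG_PQ2FADS)
#define PCI_INT_TO_SIU SIU_INT_IRQ6
#else
#warning PCI Bridge will be without interrupts support
#endif
```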
+2 -2
arch/ppc/syslib/Makefile
··· 81 obj-$(CONFIG_SPRUCE) += cpc700_pic.o indirect_pci.o pci_auto.o \ 82 todc_time.o 83 obj-$(CONFIG_8260) += m8260_setup.o 84 - obj-$(CONFIG_PCI_8260) += m8260_pci.o indirect_pci.o 85 obj-$(CONFIG_8260_PCI9) += m8260_pci_erratum9.o 86 obj-$(CONFIG_CPM2) += cpm2_common.o cpm2_pic.o 87 ifeq ($(CONFIG_PPC_GEN550),y) ··· 97 obj-$(CONFIG_40x) += dcr.o 98 obj-$(CONFIG_BOOKE) += dcr.o 99 obj-$(CONFIG_85xx) += open_pic.o ppc85xx_common.o ppc85xx_setup.o \ 100 - ppc_sys.o mpc85xx_sys.o \ 101 mpc85xx_devices.o 102 ifeq ($(CONFIG_85xx),y) 103 obj-$(CONFIG_PCI) += indirect_pci.o pci_auto.o
··· 81 obj-$(CONFIG_SPRUCE) += cpc700_pic.o indirect_pci.o pci_auto.o \ 82 todc_time.o 83 obj-$(CONFIG_8260) += m8260_setup.o 84 + obj-$(CONFIG_PCI_8260) += m82xx_pci.o indirect_pci.o pci_auto.o 85 obj-$(CONFIG_8260_PCI9) += m8260_pci_erratum9.o 86 obj-$(CONFIG_CPM2) += cpm2_common.o cpm2_pic.o 87 ifeq ($(CONFIG_PPC_GEN550),y) ··· 97 obj-$(CONFIG_40x) += dcr.o 98 obj-$(CONFIG_BOOKE) += dcr.o 99 obj-$(CONFIG_85xx) += open_pic.o ppc85xx_common.o ppc85xx_setup.o \ 100 + ppc_sys.o i8259.o mpc85xx_sys.o \ 101 mpc85xx_devices.o 102 ifeq ($(CONFIG_85xx),y) 103 obj-$(CONFIG_PCI) += indirect_pci.o pci_auto.o
-193
arch/ppc/syslib/m8260_pci.c
··· 1 - /* 2 - * (C) Copyright 2003 3 - * Wolfgang Denk, DENX Software Engineering, wd@denx.de. 4 - * 5 - * (C) Copyright 2004 Red Hat, Inc. 6 - * 7 - * See file CREDITS for list of people who contributed to this 8 - * project. 9 - * 10 - * This program is free software; you can redistribute it and/or 11 - * modify it under the terms of the GNU General Public License as 12 - * published by the Free Software Foundation; either version 2 of 13 - * the License, or (at your option) any later version. 14 - * 15 - * This program is distributed in the hope that it will be useful, 16 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 17 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 18 - * GNU General Public License for more details. 19 - * 20 - * You should have received a copy of the GNU General Public License 21 - * along with this program; if not, write to the Free Software 22 - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, 23 - * MA 02111-1307 USA 24 - */ 25 - 26 - #include <linux/kernel.h> 27 - #include <linux/init.h> 28 - #include <linux/pci.h> 29 - #include <linux/slab.h> 30 - #include <linux/delay.h> 31 - 32 - #include <asm/byteorder.h> 33 - #include <asm/io.h> 34 - #include <asm/irq.h> 35 - #include <asm/uaccess.h> 36 - #include <asm/machdep.h> 37 - #include <asm/pci-bridge.h> 38 - #include <asm/immap_cpm2.h> 39 - #include <asm/mpc8260.h> 40 - 41 - #include "m8260_pci.h" 42 - 43 - 44 - /* PCI bus configuration registers. 45 - */ 46 - 47 - static void __init m8260_setup_pci(struct pci_controller *hose) 48 - { 49 - volatile cpm2_map_t *immap = cpm2_immr; 50 - unsigned long pocmr; 51 - u16 tempShort; 52 - 53 - #ifndef CONFIG_ATC /* already done in U-Boot */ 54 - /* 55 - * Setting required to enable IRQ1-IRQ7 (SIUMCR [DPPC]), 56 - * and local bus for PCI (SIUMCR [LBPC]). 
57 - */ 58 - immap->im_siu_conf.siu_82xx.sc_siumcr = 0x00640000; 59 - #endif 60 - 61 - /* Make PCI lowest priority */ 62 - /* Each 4 bits is a device bus request and the MS 4bits 63 - is highest priority */ 64 - /* Bus 4bit value 65 - --- ---------- 66 - CPM high 0b0000 67 - CPM middle 0b0001 68 - CPM low 0b0010 69 - PCI reguest 0b0011 70 - Reserved 0b0100 71 - Reserved 0b0101 72 - Internal Core 0b0110 73 - External Master 1 0b0111 74 - External Master 2 0b1000 75 - External Master 3 0b1001 76 - The rest are reserved */ 77 - immap->im_siu_conf.siu_82xx.sc_ppc_alrh = 0x61207893; 78 - 79 - /* Park bus on core while modifying PCI Bus accesses */ 80 - immap->im_siu_conf.siu_82xx.sc_ppc_acr = 0x6; 81 - 82 - /* 83 - * Set up master window that allows the CPU to access PCI space. This 84 - * window is set up using the first SIU PCIBR registers. 85 - */ 86 - immap->im_memctl.memc_pcimsk0 = MPC826x_PCI_MASK; 87 - immap->im_memctl.memc_pcibr0 = MPC826x_PCI_BASE | PCIBR_ENABLE; 88 - 89 - /* Disable machine check on no response or target abort */ 90 - immap->im_pci.pci_emr = cpu_to_le32(0x1fe7); 91 - /* Release PCI RST (by default the PCI RST signal is held low) */ 92 - immap->im_pci.pci_gcr = cpu_to_le32(PCIGCR_PCI_BUS_EN); 93 - 94 - /* give it some time */ 95 - mdelay(1); 96 - 97 - /* 98 - * Set up master window that allows the CPU to access PCI Memory (prefetch) 99 - * space. This window is set up using the first set of Outbound ATU registers. 100 - */ 101 - immap->im_pci.pci_potar0 = cpu_to_le32(MPC826x_PCI_LOWER_MEM >> 12); 102 - immap->im_pci.pci_pobar0 = cpu_to_le32((MPC826x_PCI_LOWER_MEM - MPC826x_PCI_MEM_OFFSET) >> 12); 103 - pocmr = ((MPC826x_PCI_UPPER_MEM - MPC826x_PCI_LOWER_MEM) >> 12) ^ 0xfffff; 104 - immap->im_pci.pci_pocmr0 = cpu_to_le32(pocmr | POCMR_ENABLE | POCMR_PREFETCH_EN); 105 - 106 - /* 107 - * Set up master window that allows the CPU to access PCI Memory (non-prefetch) 108 - * space. This window is set up using the second set of Outbound ATU registers. 
109 - */ 110 - immap->im_pci.pci_potar1 = cpu_to_le32(MPC826x_PCI_LOWER_MMIO >> 12); 111 - immap->im_pci.pci_pobar1 = cpu_to_le32((MPC826x_PCI_LOWER_MMIO - MPC826x_PCI_MMIO_OFFSET) >> 12); 112 - pocmr = ((MPC826x_PCI_UPPER_MMIO - MPC826x_PCI_LOWER_MMIO) >> 12) ^ 0xfffff; 113 - immap->im_pci.pci_pocmr1 = cpu_to_le32(pocmr | POCMR_ENABLE); 114 - 115 - /* 116 - * Set up master window that allows the CPU to access PCI IO space. This window 117 - * is set up using the third set of Outbound ATU registers. 118 - */ 119 - immap->im_pci.pci_potar2 = cpu_to_le32(MPC826x_PCI_IO_BASE >> 12); 120 - immap->im_pci.pci_pobar2 = cpu_to_le32(MPC826x_PCI_LOWER_IO >> 12); 121 - pocmr = ((MPC826x_PCI_UPPER_IO - MPC826x_PCI_LOWER_IO) >> 12) ^ 0xfffff; 122 - immap->im_pci.pci_pocmr2 = cpu_to_le32(pocmr | POCMR_ENABLE | POCMR_PCI_IO); 123 - 124 - /* 125 - * Set up slave window that allows PCI masters to access MPC826x local memory. 126 - * This window is set up using the first set of Inbound ATU registers 127 - */ 128 - 129 - immap->im_pci.pci_pitar0 = cpu_to_le32(MPC826x_PCI_SLAVE_MEM_LOCAL >> 12); 130 - immap->im_pci.pci_pibar0 = cpu_to_le32(MPC826x_PCI_SLAVE_MEM_BUS >> 12); 131 - pocmr = ((MPC826x_PCI_SLAVE_MEM_SIZE-1) >> 12) ^ 0xfffff; 132 - immap->im_pci.pci_picmr0 = cpu_to_le32(pocmr | PICMR_ENABLE | PICMR_PREFETCH_EN); 133 - 134 - /* See above for description - puts PCI request as highest priority */ 135 - immap->im_siu_conf.siu_82xx.sc_ppc_alrh = 0x03124567; 136 - 137 - /* Park the bus on the PCI */ 138 - immap->im_siu_conf.siu_82xx.sc_ppc_acr = PPC_ACR_BUS_PARK_PCI; 139 - 140 - /* Host mode - specify the bridge as a host-PCI bridge */ 141 - early_write_config_word(hose, 0, 0, PCI_CLASS_DEVICE, PCI_CLASS_BRIDGE_HOST); 142 - 143 - /* Enable the host bridge to be a master on the PCI bus, and to act as a PCI memory target */ 144 - early_read_config_word(hose, 0, 0, PCI_COMMAND, &tempShort); 145 - early_write_config_word(hose, 0, 0, PCI_COMMAND, 146 - tempShort | PCI_COMMAND_MASTER | 
PCI_COMMAND_MEMORY); 147 - } 148 - 149 - void __init m8260_find_bridges(void) 150 - { 151 - extern int pci_assign_all_busses; 152 - struct pci_controller * hose; 153 - 154 - pci_assign_all_busses = 1; 155 - 156 - hose = pcibios_alloc_controller(); 157 - 158 - if (!hose) 159 - return; 160 - 161 - ppc_md.pci_swizzle = common_swizzle; 162 - 163 - hose->first_busno = 0; 164 - hose->bus_offset = 0; 165 - hose->last_busno = 0xff; 166 - 167 - setup_m8260_indirect_pci(hose, 168 - (unsigned long)&cpm2_immr->im_pci.pci_cfg_addr, 169 - (unsigned long)&cpm2_immr->im_pci.pci_cfg_data); 170 - 171 - m8260_setup_pci(hose); 172 - hose->pci_mem_offset = MPC826x_PCI_MEM_OFFSET; 173 - 174 - hose->io_base_virt = ioremap(MPC826x_PCI_IO_BASE, 175 - MPC826x_PCI_IO_SIZE); 176 - isa_io_base = (unsigned long) hose->io_base_virt; 177 - 178 - /* setup resources */ 179 - pci_init_resource(&hose->mem_resources[0], 180 - MPC826x_PCI_LOWER_MEM, 181 - MPC826x_PCI_UPPER_MEM, 182 - IORESOURCE_MEM|IORESOURCE_PREFETCH, "PCI prefetchable memory"); 183 - 184 - pci_init_resource(&hose->mem_resources[1], 185 - MPC826x_PCI_LOWER_MMIO, 186 - MPC826x_PCI_UPPER_MMIO, 187 - IORESOURCE_MEM, "PCI memory"); 188 - 189 - pci_init_resource(&hose->io_resource, 190 - MPC826x_PCI_LOWER_IO, 191 - MPC826x_PCI_UPPER_IO, 192 - IORESOURCE_IO, "PCI I/O"); 193 - }
···
-76
arch/ppc/syslib/m8260_pci.h
··· 1 - 2 - #ifndef _PPC_KERNEL_M8260_PCI_H 3 - #define _PPC_KERNEL_M8260_PCI_H 4 - 5 - #include <asm/m8260_pci.h> 6 - 7 - /* 8 - * Local->PCI map (from CPU) controlled by 9 - * MPC826x master window 10 - * 11 - * 0x80000000 - 0xBFFFFFFF Total CPU2PCI space PCIBR0 12 - * 13 - * 0x80000000 - 0x9FFFFFFF PCI Mem with prefetch (Outbound ATU #1) 14 - * 0xA0000000 - 0xAFFFFFFF PCI Mem w/o prefetch (Outbound ATU #2) 15 - * 0xB0000000 - 0xB0FFFFFF 32-bit PCI IO (Outbound ATU #3) 16 - * 17 - * PCI->Local map (from PCI) 18 - * MPC826x slave window controlled by 19 - * 20 - * 0x00000000 - 0x07FFFFFF MPC826x local memory (Inbound ATU #1) 21 - */ 22 - 23 - /* 24 - * Slave window that allows PCI masters to access MPC826x local memory. 25 - * This window is set up using the first set of Inbound ATU registers 26 - */ 27 - 28 - #ifndef MPC826x_PCI_SLAVE_MEM_LOCAL 29 - #define MPC826x_PCI_SLAVE_MEM_LOCAL (((struct bd_info *)__res)->bi_memstart) 30 - #define MPC826x_PCI_SLAVE_MEM_BUS (((struct bd_info *)__res)->bi_memstart) 31 - #define MPC826x_PCI_SLAVE_MEM_SIZE (((struct bd_info *)__res)->bi_memsize) 32 - #endif 33 - 34 - /* 35 - * This is the window that allows the CPU to access PCI address space. 36 - * It will be setup with the SIU PCIBR0 register. All three PCI master 37 - * windows, which allow the CPU to access PCI prefetch, non prefetch, 38 - * and IO space (see below), must all fit within this window. 
39 - */ 40 - #ifndef MPC826x_PCI_BASE 41 - #define MPC826x_PCI_BASE 0x80000000 42 - #define MPC826x_PCI_MASK 0xc0000000 43 - #endif 44 - 45 - #ifndef MPC826x_PCI_LOWER_MEM 46 - #define MPC826x_PCI_LOWER_MEM 0x80000000 47 - #define MPC826x_PCI_UPPER_MEM 0x9fffffff 48 - #define MPC826x_PCI_MEM_OFFSET 0x00000000 49 - #endif 50 - 51 - #ifndef MPC826x_PCI_LOWER_MMIO 52 - #define MPC826x_PCI_LOWER_MMIO 0xa0000000 53 - #define MPC826x_PCI_UPPER_MMIO 0xafffffff 54 - #define MPC826x_PCI_MMIO_OFFSET 0x00000000 55 - #endif 56 - 57 - #ifndef MPC826x_PCI_LOWER_IO 58 - #define MPC826x_PCI_LOWER_IO 0x00000000 59 - #define MPC826x_PCI_UPPER_IO 0x00ffffff 60 - #define MPC826x_PCI_IO_BASE 0xb0000000 61 - #define MPC826x_PCI_IO_SIZE 0x01000000 62 - #endif 63 - 64 - #ifndef _IO_BASE 65 - #define _IO_BASE isa_io_base 66 - #endif 67 - 68 - #ifdef CONFIG_8260_PCI9 69 - struct pci_controller; 70 - extern void setup_m8260_indirect_pci(struct pci_controller* hose, 71 - u32 cfg_addr, u32 cfg_data); 72 - #else 73 - #define setup_m8260_indirect_pci setup_indirect_pci 74 - #endif 75 - 76 - #endif /* _PPC_KERNEL_M8260_PCI_H */
···
+5 -5
arch/ppc/syslib/m8260_pci_erratum9.c
··· 31 #include <asm/immap_cpm2.h> 32 #include <asm/cpm2.h> 33 34 - #include "m8260_pci.h" 35 36 #ifdef CONFIG_8260_PCI9 37 /*#include <asm/mpc8260_pci9.h>*/ /* included in asm/io.h */ ··· 248 249 static inline int is_pci_mem(unsigned long addr) 250 { 251 - if (addr >= MPC826x_PCI_LOWER_MMIO && 252 - addr <= MPC826x_PCI_UPPER_MMIO) 253 return 1; 254 - if (addr >= MPC826x_PCI_LOWER_MEM && 255 - addr <= MPC826x_PCI_UPPER_MEM) 256 return 1; 257 return 0; 258 }
··· 31 #include <asm/immap_cpm2.h> 32 #include <asm/cpm2.h> 33 34 + #include "m82xx_pci.h" 35 36 #ifdef CONFIG_8260_PCI9 37 /*#include <asm/mpc8260_pci9.h>*/ /* included in asm/io.h */ ··· 248 249 static inline int is_pci_mem(unsigned long addr) 250 { 251 + if (addr >= M82xx_PCI_LOWER_MMIO && 252 + addr <= M82xx_PCI_UPPER_MMIO) 253 return 1; 254 + if (addr >= M82xx_PCI_LOWER_MEM && 255 + addr <= M82xx_PCI_UPPER_MEM) 256 return 1; 257 return 0; 258 }
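The renamed `M82xx_PCI_*` bounds above feed the same `is_pci_mem()` window test as before: an address counts as PCI memory when it falls in either the non-prefetch MMIO window or the prefetchable memory window. A standalone model of that check (the window bounds below are illustrative stand-ins, not the board's real map):

```c
/* Hypothetical model of is_pci_mem(): inclusive range tests
 * against two PCI memory windows. Bounds are example values. */
#define PCI_LOWER_MEM  0x80000000UL
#define PCI_UPPER_MEM  0x9fffffffUL
#define PCI_LOWER_MMIO 0xa0000000UL
#define PCI_UPPER_MMIO 0xafffffffUL

static int is_pci_mem(unsigned long addr)
{
    if (addr >= PCI_LOWER_MMIO && addr <= PCI_UPPER_MMIO)
        return 1;
    if (addr >= PCI_LOWER_MEM && addr <= PCI_UPPER_MEM)
        return 1;
    return 0;
}
```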
+9 -2
arch/ppc/syslib/m8260_setup.c
··· 34 unsigned char __res[sizeof(bd_t)]; 35 36 extern void cpm2_reset(void); 37 - extern void m8260_find_bridges(void); 38 extern void idma_pci9_init(void); 39 40 /* Place-holder for board-specific init */ ··· 57 idma_pci9_init(); 58 #endif 59 #ifdef CONFIG_PCI_8260 60 - m8260_find_bridges(); 61 #endif 62 #ifdef CONFIG_BLK_DEV_INITRD 63 if (initrd_start) ··· 174 * in case the boot rom changed something on us. 175 */ 176 cpm2_immr->im_intctl.ic_siprr = 0x05309770; 177 } 178 179 /*
··· 34 unsigned char __res[sizeof(bd_t)]; 35 36 extern void cpm2_reset(void); 37 + extern void pq2_find_bridges(void); 38 + extern void pq2pci_init_irq(void); 39 extern void idma_pci9_init(void); 40 41 /* Place-holder for board-specific init */ ··· 56 idma_pci9_init(); 57 #endif 58 #ifdef CONFIG_PCI_8260 59 + pq2_find_bridges(); 60 #endif 61 #ifdef CONFIG_BLK_DEV_INITRD 62 if (initrd_start) ··· 173 * in case the boot rom changed something on us. 174 */ 175 cpm2_immr->im_intctl.ic_siprr = 0x05309770; 176 + 177 + #if defined(CONFIG_PCI) && (defined(CONFIG_ADS8272) || defined(CONFIG_PQ2FADS)) 178 + /* Initialize stuff for the 82xx CPLD IC and install demux */ 179 + pq2pci_init_irq(); 180 + #endif 181 + 182 } 183 184 /*
+383
arch/ppc/syslib/m82xx_pci.c
···
··· 1 + /* 2 + * 3 + * (C) Copyright 2003 4 + * Wolfgang Denk, DENX Software Engineering, wd@denx.de. 5 + * 6 + * (C) Copyright 2004 Red Hat, Inc. 7 + * 8 + * 2005 (c) MontaVista Software, Inc. 9 + * Vitaly Bordug <vbordug@ru.mvista.com> 10 + * 11 + * See file CREDITS for list of people who contributed to this 12 + * project. 13 + * 14 + * This program is free software; you can redistribute it and/or 15 + * modify it under the terms of the GNU General Public License as 16 + * published by the Free Software Foundation; either version 2 of 17 + * the License, or (at your option) any later version. 18 + * 19 + * This program is distributed in the hope that it will be useful, 20 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 21 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 22 + * GNU General Public License for more details. 23 + * 24 + * You should have received a copy of the GNU General Public License 25 + * along with this program; if not, write to the Free Software 26 + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, 27 + * MA 02111-1307 USA 28 + */ 29 + 30 + #include <linux/kernel.h> 31 + #include <linux/init.h> 32 + #include <linux/pci.h> 33 + #include <linux/slab.h> 34 + #include <linux/delay.h> 35 + #include <linux/irq.h> 36 + #include <linux/interrupt.h> 37 + 38 + #include <asm/byteorder.h> 39 + #include <asm/io.h> 40 + #include <asm/irq.h> 41 + #include <asm/uaccess.h> 42 + #include <asm/machdep.h> 43 + #include <asm/pci-bridge.h> 44 + #include <asm/immap_cpm2.h> 45 + #include <asm/mpc8260.h> 46 + #include <asm/cpm2.h> 47 + 48 + #include "m82xx_pci.h" 49 + 50 + /* 51 + * Interrupt routing 52 + */ 53 + 54 + static inline int 55 + pq2pci_map_irq(struct pci_dev *dev, unsigned char idsel, unsigned char pin) 56 + { 57 + static char pci_irq_table[][4] = 58 + /* 59 + * PCI IDSEL/INTPIN->INTLINE 60 + * A B C D 61 + */ 62 + { 63 + { PIRQA, PIRQB, PIRQC, PIRQD }, /* IDSEL 22 - PCI slot 0 */ 64 + { PIRQD, PIRQA, PIRQB, PIRQC 
}, /* IDSEL 23 - PCI slot 1 */ 65 + { PIRQC, PIRQD, PIRQA, PIRQB }, /* IDSEL 24 - PCI slot 2 */ 66 + }; 67 + 68 + const long min_idsel = 22, max_idsel = 24, irqs_per_slot = 4; 69 + return PCI_IRQ_TABLE_LOOKUP; 70 + } 71 + 72 + static void 73 + pq2pci_mask_irq(unsigned int irq) 74 + { 75 + int bit = irq - NR_CPM_INTS; 76 + 77 + *(volatile unsigned long *) PCI_INT_MASK_REG |= (1 << (31 - bit)); 78 + return; 79 + } 80 + 81 + static void 82 + pq2pci_unmask_irq(unsigned int irq) 83 + { 84 + int bit = irq - NR_CPM_INTS; 85 + 86 + *(volatile unsigned long *) PCI_INT_MASK_REG &= ~(1 << (31 - bit)); 87 + return; 88 + } 89 + 90 + static void 91 + pq2pci_mask_and_ack(unsigned int irq) 92 + { 93 + int bit = irq - NR_CPM_INTS; 94 + 95 + *(volatile unsigned long *) PCI_INT_MASK_REG |= (1 << (31 - bit)); 96 + return; 97 + } 98 + 99 + static void 100 + pq2pci_end_irq(unsigned int irq) 101 + { 102 + int bit = irq - NR_CPM_INTS; 103 + 104 + *(volatile unsigned long *) PCI_INT_MASK_REG &= ~(1 << (31 - bit)); 105 + return; 106 + } 107 + 108 + struct hw_interrupt_type pq2pci_ic = { 109 + "PQ2 PCI", 110 + NULL, 111 + NULL, 112 + pq2pci_unmask_irq, 113 + pq2pci_mask_irq, 114 + pq2pci_mask_and_ack, 115 + pq2pci_end_irq, 116 + 0 117 + }; 118 + 119 + static irqreturn_t 120 + pq2pci_irq_demux(int irq, void *dev_id, struct pt_regs *regs) 121 + { 122 + unsigned long stat, mask, pend; 123 + int bit; 124 + 125 + for(;;) { 126 + stat = *(volatile unsigned long *) PCI_INT_STAT_REG; 127 + mask = *(volatile unsigned long *) PCI_INT_MASK_REG; 128 + pend = stat & ~mask & 0xf0000000; 129 + if (!pend) 130 + break; 131 + for (bit = 0; pend != 0; ++bit, pend <<= 1) { 132 + if (pend & 0x80000000) 133 + __do_IRQ(NR_CPM_INTS + bit, regs); 134 + } 135 + } 136 + 137 + return IRQ_HANDLED; 138 + } 139 + 140 + static struct irqaction pq2pci_irqaction = { 141 + .handler = pq2pci_irq_demux, 142 + .flags = SA_INTERRUPT, 143 + .mask = CPU_MASK_NONE, 144 + .name = "PQ2 PCI cascade", 145 + }; 146 + 147 + 148 + void 149 
+ pq2pci_init_irq(void) 150 + { 151 + int irq; 152 + volatile cpm2_map_t *immap = cpm2_immr; 153 + #if defined CONFIG_ADS8272 154 + /* configure chip select for PCI interrupt controller */ 155 + immap->im_memctl.memc_br3 = PCI_INT_STAT_REG | 0x00001801; 156 + immap->im_memctl.memc_or3 = 0xffff8010; 157 + #elif defined CONFIG_PQ2FADS 158 + immap->im_memctl.memc_br8 = PCI_INT_STAT_REG | 0x00001801; 159 + immap->im_memctl.memc_or8 = 0xffff8010; 160 + #endif 161 + for (irq = NR_CPM_INTS; irq < NR_CPM_INTS + 4; irq++) 162 + irq_desc[irq].handler = &pq2pci_ic; 163 + 164 + /* make PCI IRQ level sensitive */ 165 + immap->im_intctl.ic_siexr &= 166 + ~(1 << (14 - (PCI_INT_TO_SIU - SIU_INT_IRQ1))); 167 + 168 + /* mask all PCI interrupts */ 169 + *(volatile unsigned long *) PCI_INT_MASK_REG |= 0xfff00000; 170 + 171 + /* install the demultiplexer for the PCI cascade interrupt */ 172 + setup_irq(PCI_INT_TO_SIU, &pq2pci_irqaction); 173 + return; 174 + } 175 + 176 + static int 177 + pq2pci_exclude_device(u_char bus, u_char devfn) 178 + { 179 + return PCIBIOS_SUCCESSFUL; 180 + } 181 + 182 + /* PCI bus configuration registers. 
183 + */ 184 + static void 185 + pq2ads_setup_pci(struct pci_controller *hose) 186 + { 187 + __u32 val; 188 + volatile cpm2_map_t *immap = cpm2_immr; 189 + bd_t* binfo = (bd_t*) __res; 190 + u32 sccr = immap->im_clkrst.car_sccr; 191 + uint pci_div,freq,time; 192 + /* PCI int lowest prio */ 193 + /* Each 4 bits is a device bus request and the MS 4bits 194 + is highest priority */ 195 + /* Bus 4bit value 196 + --- ---------- 197 + CPM high 0b0000 198 + CPM middle 0b0001 199 + CPM low 0b0010 200 + PCI reguest 0b0011 201 + Reserved 0b0100 202 + Reserved 0b0101 203 + Internal Core 0b0110 204 + External Master 1 0b0111 205 + External Master 2 0b1000 206 + External Master 3 0b1001 207 + The rest are reserved 208 + */ 209 + immap->im_siu_conf.siu_82xx.sc_ppc_alrh = 0x61207893; 210 + /* park bus on core */ 211 + immap->im_siu_conf.siu_82xx.sc_ppc_acr = PPC_ACR_BUS_PARK_CORE; 212 + /* 213 + * Set up master windows that allow the CPU to access PCI space. These 214 + * windows are set up using the two SIU PCIBR registers. 215 + */ 216 + 217 + immap->im_memctl.memc_pcimsk0 = M82xx_PCI_PRIM_WND_SIZE; 218 + immap->im_memctl.memc_pcibr0 = M82xx_PCI_PRIM_WND_BASE | PCIBR_ENABLE; 219 + 220 + #ifdef M82xx_PCI_SEC_WND_SIZE 221 + immap->im_memctl.memc_pcimsk1 = M82xx_PCI_SEC_WND_SIZE; 222 + immap->im_memctl.memc_pcibr1 = M82xx_PCI_SEC_WND_BASE | PCIBR_ENABLE; 223 + #endif 224 + 225 + #if defined CONFIG_ADS8272 226 + immap->im_siu_conf.siu_82xx.sc_siumcr = 227 + (immap->im_siu_conf.siu_82xx.sc_siumcr & 228 + ~(SIUMCR_BBD | SIUMCR_ESE | SIUMCR_PBSE | 229 + SIUMCR_CDIS | SIUMCR_DPPC11 | SIUMCR_L2CPC11 | 230 + SIUMCR_LBPC11 | SIUMCR_APPC11 | 231 + SIUMCR_CS10PC11 | SIUMCR_BCTLC11 | SIUMCR_MMR11)) | 232 + SIUMCR_DPPC11 | SIUMCR_L2CPC01 | SIUMCR_LBPC00 | 233 + SIUMCR_APPC10 | SIUMCR_CS10PC00 | 234 + SIUMCR_BCTLC00 | SIUMCR_MMR11 ; 235 + 236 + #elif defined CONFIG_PQ2FADS 237 + /* 238 + * Setting required to enable IRQ1-IRQ7 (SIUMCR [DPPC]), 239 + * and local bus for PCI (SIUMCR [LBPC]). 
240 + */ 241 + immap->im_siu_conf.siu_82xx.sc_siumcr = (immap->im_siu_conf.sc_siumcr & 242 + ~(SIUMCR_L2PC11 | SIUMCR_LBPC11 | SIUMCR_CS10PC11 | SIUMCR_APPC11) | 243 + SIUMCR_BBD | SIUMCR_LBPC01 | SIUMCR_DPPC11 | SIUMCR_APPC10; 244 + #endif 245 + /* Enable PCI */ 246 + immap->im_pci.pci_gcr = cpu_to_le32(PCIGCR_PCI_BUS_EN); 247 + 248 + pci_div = ( (sccr & SCCR_PCI_MODCK) ? 2 : 1) * 249 + ( ( (sccr & SCCR_PCIDF_MSK) >> SCCR_PCIDF_SHIFT) + 1); 250 + freq = (uint)((2*binfo->bi_cpmfreq)/(pci_div)); 251 + time = (int)666666/freq; 252 + /* due to PCI Local Bus spec, some devices needs to wait such a long 253 + time after RST deassertion. More specifically, 0.508s for 66MHz & twice more for 33 */ 254 + printk("%s: The PCI bus is %d Mhz.\nWaiting %s after deasserting RST...\n",__FILE__,freq, 255 + (time==1) ? "0.5 seconds":"1 second" ); 256 + 257 + { 258 + int i; 259 + for(i=0;i<(500*time);i++) 260 + udelay(1000); 261 + } 262 + 263 + /* setup ATU registers */ 264 + immap->im_pci.pci_pocmr0 = cpu_to_le32(POCMR_ENABLE | POCMR_PCI_IO | 265 + ((~(M82xx_PCI_IO_SIZE - 1U)) >> POTA_ADDR_SHIFT)); 266 + immap->im_pci.pci_potar0 = cpu_to_le32(M82xx_PCI_LOWER_IO >> POTA_ADDR_SHIFT); 267 + immap->im_pci.pci_pobar0 = cpu_to_le32(M82xx_PCI_IO_BASE >> POTA_ADDR_SHIFT); 268 + 269 + /* Set-up non-prefetchable window */ 270 + immap->im_pci.pci_pocmr1 = cpu_to_le32(POCMR_ENABLE | ((~(M82xx_PCI_MMIO_SIZE-1U)) >> POTA_ADDR_SHIFT)); 271 + immap->im_pci.pci_potar1 = cpu_to_le32(M82xx_PCI_LOWER_MMIO >> POTA_ADDR_SHIFT); 272 + immap->im_pci.pci_pobar1 = cpu_to_le32((M82xx_PCI_LOWER_MMIO - M82xx_PCI_MMIO_OFFSET) >> POTA_ADDR_SHIFT); 273 + 274 + /* Set-up prefetchable window */ 275 + immap->im_pci.pci_pocmr2 = cpu_to_le32(POCMR_ENABLE |POCMR_PREFETCH_EN | 276 + (~(M82xx_PCI_MEM_SIZE-1U) >> POTA_ADDR_SHIFT)); 277 + immap->im_pci.pci_potar2 = cpu_to_le32(M82xx_PCI_LOWER_MEM >> POTA_ADDR_SHIFT); 278 + immap->im_pci.pci_pobar2 = cpu_to_le32((M82xx_PCI_LOWER_MEM - M82xx_PCI_MEM_OFFSET) >> 
POTA_ADDR_SHIFT); 279 + 280 + /* Inbound transactions from PCI memory space */ 281 + immap->im_pci.pci_picmr0 = cpu_to_le32(PICMR_ENABLE | PICMR_PREFETCH_EN | 282 + ((~(M82xx_PCI_SLAVE_MEM_SIZE-1U)) >> PITA_ADDR_SHIFT)); 283 + immap->im_pci.pci_pibar0 = cpu_to_le32(M82xx_PCI_SLAVE_MEM_BUS >> PITA_ADDR_SHIFT); 284 + immap->im_pci.pci_pitar0 = cpu_to_le32(M82xx_PCI_SLAVE_MEM_LOCAL>> PITA_ADDR_SHIFT); 285 + 286 + #if defined CONFIG_ADS8272 287 + /* PCI int highest prio */ 288 + immap->im_siu_conf.siu_82xx.sc_ppc_alrh = 0x01236745; 289 + #elif defined CONFIG_PQ2FADS 290 + immap->im_siu_conf.siu_82xx.sc_ppc_alrh = 0x03124567; 291 + #endif 292 + /* park bus on PCI */ 293 + immap->im_siu_conf.siu_82xx.sc_ppc_acr = PPC_ACR_BUS_PARK_PCI; 294 + 295 + /* Enable bus mastering and inbound memory transactions */ 296 + early_read_config_dword(hose, hose->first_busno, 0, PCI_COMMAND, &val); 297 + val &= 0xffff0000; 298 + val |= PCI_COMMAND_MEMORY|PCI_COMMAND_MASTER; 299 + early_write_config_dword(hose, hose->first_busno, 0, PCI_COMMAND, val); 300 + 301 + } 302 + 303 + void __init pq2_find_bridges(void) 304 + { 305 + extern int pci_assign_all_busses; 306 + struct pci_controller * hose; 307 + int host_bridge; 308 + 309 + pci_assign_all_busses = 1; 310 + 311 + hose = pcibios_alloc_controller(); 312 + 313 + if (!hose) 314 + return; 315 + 316 + ppc_md.pci_swizzle = common_swizzle; 317 + 318 + hose->first_busno = 0; 319 + hose->bus_offset = 0; 320 + hose->last_busno = 0xff; 321 + 322 + #ifdef CONFIG_ADS8272 323 + hose->set_cfg_type = 1; 324 + #endif 325 + 326 + setup_m8260_indirect_pci(hose, 327 + (unsigned long)&cpm2_immr->im_pci.pci_cfg_addr, 328 + (unsigned long)&cpm2_immr->im_pci.pci_cfg_data); 329 + 330 + /* Make sure it is a supported bridge */ 331 + early_read_config_dword(hose, 332 + 0, 333 + PCI_DEVFN(0,0), 334 + PCI_VENDOR_ID, 335 + &host_bridge); 336 + switch (host_bridge) { 337 + case PCI_DEVICE_ID_MPC8265: 338 + break; 339 + case PCI_DEVICE_ID_MPC8272: 340 + break; 341 + 
default: 342 + printk("Attempting to use unrecognized host bridge ID" 343 + " 0x%08x.\n", host_bridge); 344 + break; 345 + } 346 + 347 + pq2ads_setup_pci(hose); 348 + 349 + hose->io_space.start = M82xx_PCI_LOWER_IO; 350 + hose->io_space.end = M82xx_PCI_UPPER_IO; 351 + hose->mem_space.start = M82xx_PCI_LOWER_MEM; 352 + hose->mem_space.end = M82xx_PCI_UPPER_MMIO; 353 + hose->pci_mem_offset = M82xx_PCI_MEM_OFFSET; 354 + 355 + isa_io_base = 356 + (unsigned long) ioremap(M82xx_PCI_IO_BASE, 357 + M82xx_PCI_IO_SIZE); 358 + hose->io_base_virt = (void *) isa_io_base; 359 + 360 + /* setup resources */ 361 + pci_init_resource(&hose->mem_resources[0], 362 + M82xx_PCI_LOWER_MEM, 363 + M82xx_PCI_UPPER_MEM, 364 + IORESOURCE_MEM|IORESOURCE_PREFETCH, "PCI prefetchable memory"); 365 + 366 + pci_init_resource(&hose->mem_resources[1], 367 + M82xx_PCI_LOWER_MMIO, 368 + M82xx_PCI_UPPER_MMIO, 369 + IORESOURCE_MEM, "PCI memory"); 370 + 371 + pci_init_resource(&hose->io_resource, 372 + M82xx_PCI_LOWER_IO, 373 + M82xx_PCI_UPPER_IO, 374 + IORESOURCE_IO | 1, "PCI I/O"); 375 + 376 + ppc_md.pci_exclude_device = pq2pci_exclude_device; 377 + hose->last_busno = pciauto_bus_scan(hose, hose->first_busno); 378 + 379 + ppc_md.pci_map_irq = pq2pci_map_irq; 380 + ppc_md.pcibios_fixup = NULL; 381 + ppc_md.pcibios_fixup_bus = NULL; 382 + 383 + }
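pq2pci_map_irq() in the new m82xx_pci.c resolves slot interrupts through a table whose rows rotate the four PIRQ lines per IDSEL, then defers to the kernel's PCI_IRQ_TABLE_LOOKUP macro. A sketch with the macro's bounds check expanded by hand (the PIRQx values here are placeholders; the real ones come from the board header):

```c
#include <assert.h>

/* Placeholder interrupt line numbers, for illustration only */
enum { PIRQA = 0, PIRQB, PIRQC, PIRQD };

/* Rotated IDSEL/INTPIN -> INTLINE table, as in pq2pci_map_irq() */
static const char pci_irq_table[][4] = {
	/*   A      B      C      D   */
	{ PIRQA, PIRQB, PIRQC, PIRQD },	/* IDSEL 22 - PCI slot 0 */
	{ PIRQD, PIRQA, PIRQB, PIRQC },	/* IDSEL 23 - PCI slot 1 */
	{ PIRQC, PIRQD, PIRQA, PIRQB },	/* IDSEL 24 - PCI slot 2 */
};

/* pin is 1-based (INTA == 1), matching the map_irq hook convention;
 * this hand-expands what PCI_IRQ_TABLE_LOOKUP does with min/max_idsel. */
static int map_irq(int idsel, int pin)
{
	const int min_idsel = 22, max_idsel = 24;

	if (idsel < min_idsel || idsel > max_idsel || pin < 1 || pin > 4)
		return -1;	/* outside the routed slots */
	return pci_irq_table[idsel - min_idsel][pin - 1];
}
```

The rotation means that INTA of slot 1 lands on the line that serves INTD of slot 0, spreading interrupt load across the four PIRQ inputs.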
+92
arch/ppc/syslib/m82xx_pci.h
···
··· 1 + 2 + #ifndef _PPC_KERNEL_M82XX_PCI_H 3 + #define _PPC_KERNEL_M82XX_PCI_H 4 + 5 + #include <asm/m8260_pci.h> 6 + /* 7 + * Local->PCI map (from CPU) controlled by 8 + * MPC826x master window 9 + * 10 + * 0xF6000000 - 0xF7FFFFFF IO space 11 + * 0x80000000 - 0xBFFFFFFF CPU2PCI memory space PCIBR0 12 + * 13 + * 0x80000000 - 0x9FFFFFFF PCI Mem with prefetch (Outbound ATU #1) 14 + * 0xA0000000 - 0xBFFFFFFF PCI Mem w/o prefetch (Outbound ATU #2) 15 + * 0xF6000000 - 0xF7FFFFFF 32-bit PCI IO (Outbound ATU #3) 16 + * 17 + * PCI->Local map (from PCI) 18 + * MPC826x slave window controlled by 19 + * 20 + * 0x00000000 - 0x07FFFFFF MPC826x local memory (Inbound ATU #1) 21 + */ 22 + 23 + /* 24 + * Slave window that allows PCI masters to access MPC826x local memory. 25 + * This window is set up using the first set of Inbound ATU registers 26 + */ 27 + 28 + #ifndef M82xx_PCI_SLAVE_MEM_LOCAL 29 + #define M82xx_PCI_SLAVE_MEM_LOCAL (((struct bd_info *)__res)->bi_memstart) 30 + #define M82xx_PCI_SLAVE_MEM_BUS (((struct bd_info *)__res)->bi_memstart) 31 + #define M82xx_PCI_SLAVE_MEM_SIZE (((struct bd_info *)__res)->bi_memsize) 32 + #endif 33 + 34 + /* 35 + * This is the window that allows the CPU to access PCI address space. 36 + * It will be setup with the SIU PCIBR0 register. All three PCI master 37 + * windows, which allow the CPU to access PCI prefetch, non prefetch, 38 + * and IO space (see below), must all fit within this window. 
39 + */ 40 + 41 + #ifndef M82xx_PCI_LOWER_MEM 42 + #define M82xx_PCI_LOWER_MEM 0x80000000 43 + #define M82xx_PCI_UPPER_MEM 0x9fffffff 44 + #define M82xx_PCI_MEM_OFFSET 0x00000000 45 + #define M82xx_PCI_MEM_SIZE 0x20000000 46 + #endif 47 + 48 + #ifndef M82xx_PCI_LOWER_MMIO 49 + #define M82xx_PCI_LOWER_MMIO 0xa0000000 50 + #define M82xx_PCI_UPPER_MMIO 0xafffffff 51 + #define M82xx_PCI_MMIO_OFFSET 0x00000000 52 + #define M82xx_PCI_MMIO_SIZE 0x20000000 53 + #endif 54 + 55 + #ifndef M82xx_PCI_LOWER_IO 56 + #define M82xx_PCI_LOWER_IO 0x00000000 57 + #define M82xx_PCI_UPPER_IO 0x01ffffff 58 + #define M82xx_PCI_IO_BASE 0xf6000000 59 + #define M82xx_PCI_IO_SIZE 0x02000000 60 + #endif 61 + 62 + #ifndef M82xx_PCI_PRIM_WND_SIZE 63 + #define M82xx_PCI_PRIM_WND_SIZE ~(M82xx_PCI_IO_SIZE - 1U) 64 + #define M82xx_PCI_PRIM_WND_BASE (M82xx_PCI_IO_BASE) 65 + #endif 66 + 67 + #ifndef M82xx_PCI_SEC_WND_SIZE 68 + #define M82xx_PCI_SEC_WND_SIZE ~(M82xx_PCI_MEM_SIZE + M82xx_PCI_MMIO_SIZE - 1U) 69 + #define M82xx_PCI_SEC_WND_BASE (M82xx_PCI_LOWER_MEM) 70 + #endif 71 + 72 + #ifndef POTA_ADDR_SHIFT 73 + #define POTA_ADDR_SHIFT 12 74 + #endif 75 + 76 + #ifndef PITA_ADDR_SHIFT 77 + #define PITA_ADDR_SHIFT 12 78 + #endif 79 + 80 + #ifndef _IO_BASE 81 + #define _IO_BASE isa_io_base 82 + #endif 83 + 84 + #ifdef CONFIG_8260_PCI9 85 + struct pci_controller; 86 + extern void setup_m8260_indirect_pci(struct pci_controller* hose, 87 + u32 cfg_addr, u32 cfg_data); 88 + #else 89 + #define setup_m8260_indirect_pci setup_indirect_pci 90 + #endif 91 + 92 + #endif /* _PPC_KERNEL_M8260_PCI_H */
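Every window mask in this header follows the same pattern: the value programmed into PCIMSKx is the one's complement of (window size - 1), so the secondary window spanning both 0x20000000 memory windows yields 0xc0000000, the same mask the old MPC826x_PCI_MASK supplied. A minimal check of that arithmetic:

```c
#include <assert.h>

/* Window sizes from m82xx_pci.h */
#define M82xx_PCI_IO_SIZE   0x02000000U
#define M82xx_PCI_MEM_SIZE  0x20000000U
#define M82xx_PCI_MMIO_SIZE 0x20000000U

/* PCIMSKx-style mask: one's complement of (size - 1) */
#define WND_MASK(size) (~((size) - 1U))
```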
+1 -3
arch/ppc/syslib/open_pic.c
··· 275 } 276 #endif 277 278 - #if defined(CONFIG_EPIC_SERIAL_MODE) || defined(CONFIG_PM) 279 static void openpic_reset(void) 280 { 281 openpic_setfield(&OpenPIC->Global.Global_Configuration0, ··· 992 spin_unlock_irqrestore(&openpic_setup_lock, flags); 993 return 0; 994 } 995 - 996 - openpic_reset(); 997 998 /* OpenPIC sometimes seem to need some time to be fully back up... */ 999 do {
··· 275 } 276 #endif 277 278 + #if defined(CONFIG_EPIC_SERIAL_MODE) 279 static void openpic_reset(void) 280 { 281 openpic_setfield(&OpenPIC->Global.Global_Configuration0, ··· 992 spin_unlock_irqrestore(&openpic_setup_lock, flags); 993 return 0; 994 } 995 996 /* OpenPIC sometimes seem to need some time to be fully back up... */ 997 do {
+28
arch/ppc/syslib/ppc83xx_setup.c
··· 29 #include <asm/mmu.h> 30 #include <asm/ppc_sys.h> 31 #include <asm/kgdb.h> 32 33 #include <syslib/ppc83xx_setup.h> 34 ··· 118 void 119 mpc83xx_restart(char *cmd) 120 { 121 local_irq_disable(); 122 for(;;); 123 } 124
··· 29 #include <asm/mmu.h> 30 #include <asm/ppc_sys.h> 31 #include <asm/kgdb.h> 32 + #include <asm/delay.h> 33 34 #include <syslib/ppc83xx_setup.h> 35 ··· 117 void 118 mpc83xx_restart(char *cmd) 119 { 120 + volatile unsigned char __iomem *reg; 121 + unsigned char tmp; 122 + 123 + reg = ioremap(BCSR_PHYS_ADDR, BCSR_SIZE); 124 + 125 local_irq_disable(); 126 + 127 + /* 128 + * Unlock the BCSR bits so a PRST will update the contents. 129 + * Otherwise the reset asserts but doesn't clear. 130 + */ 131 + tmp = in_8(reg + BCSR_MISC_REG3_OFF); 132 + tmp |= BCSR_MISC_REG3_CNFLOCK; /* low true, high false */ 133 + out_8(reg + BCSR_MISC_REG3_OFF, tmp); 134 + 135 + /* 136 + * Trigger a reset via a low->high transition of the 137 + * PORESET bit. 138 + */ 139 + tmp = in_8(reg + BCSR_MISC_REG2_OFF); 140 + tmp &= ~BCSR_MISC_REG2_PORESET; 141 + out_8(reg + BCSR_MISC_REG2_OFF, tmp); 142 + 143 + udelay(1); 144 + 145 + tmp |= BCSR_MISC_REG2_PORESET; 146 + out_8(reg + BCSR_MISC_REG2_OFF, tmp); 147 + 148 for(;;); 149 } 150
+16
arch/ppc/syslib/ppc85xx_setup.c
··· 132 } 133 134 #ifdef CONFIG_PCI 135 static void __init 136 mpc85xx_setup_pci1(struct pci_controller *hose) 137 { ··· 308 309 ppc_md.pci_exclude_device = mpc85xx_exclude_device; 310 311 hose_a->last_busno = pciauto_bus_scan(hose_a, hose_a->first_busno); 312 313 #ifdef CONFIG_85xx_PCI2 314 hose_b = pcibios_alloc_controller();
··· 132 } 133 134 #ifdef CONFIG_PCI 135 + 136 + #if defined(CONFIG_MPC8555_CDS) 137 + extern void mpc85xx_cds_enable_via(struct pci_controller *hose); 138 + extern void mpc85xx_cds_fixup_via(struct pci_controller *hose); 139 + #endif 140 + 141 static void __init 142 mpc85xx_setup_pci1(struct pci_controller *hose) 143 { ··· 302 303 ppc_md.pci_exclude_device = mpc85xx_exclude_device; 304 305 + #if defined(CONFIG_MPC8555_CDS) 306 + /* Pre pciauto_bus_scan VIA init */ 307 + mpc85xx_cds_enable_via(hose_a); 308 + #endif 309 + 310 hose_a->last_busno = pciauto_bus_scan(hose_a, hose_a->first_busno); 311 + 312 + #if defined(CONFIG_MPC8555_CDS) 313 + /* Post pciauto_bus_scan VIA fixup */ 314 + mpc85xx_cds_fixup_via(hose_a); 315 + #endif 316 317 #ifdef CONFIG_85xx_PCI2 318 hose_b = pcibios_alloc_controller();
+10
arch/ppc/syslib/prom_init.c
··· 626 l = call_prom("package-to-path", 3, 1, node, 627 mem_start, mem_end - mem_start); 628 if (l >= 0) { 629 np->full_name = PTRUNRELOC((char *) mem_start); 630 *(char *)(mem_start + l) = 0; 631 mem_start = ALIGNUL(mem_start + l + 1); 632 } 633
··· 626 l = call_prom("package-to-path", 3, 1, node, 627 mem_start, mem_end - mem_start); 628 if (l >= 0) { 629 + char *p, *ep; 630 + 631 np->full_name = PTRUNRELOC((char *) mem_start); 632 *(char *)(mem_start + l) = 0; 633 + /* Fixup an Apple bug where they have bogus \0 chars in the 634 + * middle of the path in some properties 635 + */ 636 + for (p = (char *)mem_start, ep = p + l; p < ep; p++) 637 + if ((*p) == '\0') { 638 + memmove(p, p+1, ep - p); 639 + ep--; 640 + } 641 mem_start = ALIGNUL(mem_start + l + 1); 642 } 643
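The Apple path fixup added here deletes stray '\0' bytes in the middle of a freshly fetched package path, shifting the tail (terminator included) down one byte per hit. The same loop, lifted into a standalone helper for illustration (strip_embedded_nuls is a name invented for this sketch):

```c
#include <assert.h>
#include <string.h>

/* Strip embedded '\0' bytes from a buffer of l characters that is
 * NUL-terminated at buf[l], mirroring the prom_init fixup loop.
 * Returns the new length. */
static size_t strip_embedded_nuls(char *buf, size_t l)
{
	char *p, *ep;

	for (p = buf, ep = buf + l; p < ep; p++)
		if (*p == '\0') {
			/* shift the rest down, terminator included */
			memmove(p, p + 1, ep - p);
			ep--;
		}
	return ep - buf;
}
```

Note that ep points at the terminator, so the memmove length `ep - p` carries the trailing '\0' along with the shifted tail, keeping the string terminated throughout.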
-8
arch/ppc64/kernel/pSeries_reconfig.c
··· 47 remove_proc_entry(pp->name, np->pde); 48 pp = pp->next; 49 } 50 - 51 - /* Assuming that symlinks have the same parent directory as 52 - * np->pde. 53 - */ 54 - if (np->name_link) 55 - remove_proc_entry(np->name_link->name, parent->pde); 56 - if (np->addr_link) 57 - remove_proc_entry(np->addr_link->name, parent->pde); 58 if (np->pde) 59 remove_proc_entry(np->pde->name, parent->pde); 60 }
··· 47 remove_proc_entry(pp->name, np->pde); 48 pp = pp->next; 49 } 50 if (np->pde) 51 remove_proc_entry(np->pde->name, parent->pde); 52 }
+76 -42
arch/ppc64/kernel/prom_init.c
··· 211 */ 212 #define ADDR(x) (u32) ((unsigned long)(x) - offset) 213 214 /* This is the one and *ONLY* place where we actually call open 215 * firmware from, since we need to make sure we're running in 32b 216 * mode when we do. We switch back to 64b mode upon return. 217 */ 218 - 219 - #define PROM_ERROR (-1) 220 221 static int __init call_prom(const char *service, int nargs, int nret, ...) 222 { ··· 597 { 598 unsigned long offset = reloc_offset(); 599 ihandle elfloader; 600 - int ret; 601 602 elfloader = call_prom("open", 1, 1, ADDR("/packages/elf-loader")); 603 if (elfloader == 0) { 604 prom_printf("couldn't open /packages/elf-loader\n"); 605 return; 606 } 607 - ret = call_prom("call-method", 3, 1, ADDR("process-elf-header"), 608 elfloader, ADDR(&fake_elf)); 609 call_prom("close", 1, 0, elfloader); 610 } ··· 655 base = _ALIGN_UP(base + 0x100000, align)) { 656 prom_debug(" trying: 0x%x\n\r", base); 657 addr = (unsigned long)prom_claim(base, size, 0); 658 - if ((int)addr != PROM_ERROR) 659 break; 660 addr = 0; 661 if (align == 0) ··· 717 for(; base > RELOC(alloc_bottom); base = _ALIGN_DOWN(base - 0x100000, align)) { 718 prom_debug(" trying: 0x%x\n\r", base); 719 addr = (unsigned long)prom_claim(base, size, 0); 720 - if ((int)addr != PROM_ERROR) 721 break; 722 addr = 0; 723 } ··· 911 { 912 unsigned long offset = reloc_offset(); 913 struct prom_t *_prom = PTRRELOC(&prom); 914 - phandle prom_rtas, rtas_node; 915 u32 base, entry = 0; 916 u32 size = 0; 917 918 prom_debug("prom_instantiate_rtas: start...\n"); 919 920 - prom_rtas = call_prom("finddevice", 1, 1, ADDR("/rtas")); 921 - prom_debug("prom_rtas: %x\n", prom_rtas); 922 - if (prom_rtas == (phandle) -1) 923 return; 924 925 - prom_getprop(prom_rtas, "rtas-size", &size, sizeof(size)); 926 if (size == 0) 927 return; 928 ··· 932 prom_printf("RTAS allocation failed !\n"); 933 return; 934 } 935 - prom_printf("instantiating rtas at 0x%x", base); 936 937 - rtas_node = call_prom("open", 1, 1, ADDR("/rtas")); 938 - 
prom_printf("..."); 939 940 if (call_prom("call-method", 3, 2, 941 ADDR("instantiate-rtas"), 942 - rtas_node, base) != PROM_ERROR) { 943 entry = (long)_prom->args.rets[1]; 944 } 945 if (entry == 0) { ··· 954 955 reserve_mem(base, size); 956 957 - prom_setprop(prom_rtas, "linux,rtas-base", &base, sizeof(base)); 958 - prom_setprop(prom_rtas, "linux,rtas-entry", &entry, sizeof(entry)); 959 960 prom_debug("rtas base = 0x%x\n", base); 961 prom_debug("rtas entry = 0x%x\n", entry); ··· 1076 1077 prom_printf("opening PHB %s", path); 1078 phb_node = call_prom("open", 1, 1, path); 1079 - if ( (long)phb_node <= 0) 1080 prom_printf("... failed\n"); 1081 else 1082 prom_printf("... done\n"); ··· 1293 1294 /* get a handle for the stdout device */ 1295 _prom->chosen = call_prom("finddevice", 1, 1, ADDR("/chosen")); 1296 - if ((long)_prom->chosen <= 0) 1297 prom_panic("cannot find chosen"); /* msg won't be printed :( */ 1298 1299 /* get device tree root */ 1300 _prom->root = call_prom("finddevice", 1, 1, ADDR("/")); 1301 - if ((long)_prom->root <= 0) 1302 prom_panic("cannot find device tree root"); /* msg won't be printed :( */ 1303 } 1304 ··· 1370 } 1371 /* Default to pSeries. We need to know if we are running LPAR */ 1372 rtas = call_prom("finddevice", 1, 1, ADDR("/rtas")); 1373 - if (rtas != (phandle) -1) { 1374 - unsigned long x; 1375 - x = prom_getproplen(rtas, "ibm,hypertas-functions"); 1376 if (x != PROM_ERROR) { 1377 prom_printf("Hypertas detected, assuming LPAR !\n"); 1378 return PLATFORM_PSERIES_LPAR; ··· 1439 * leave some room at the end of the path for appending extra 1440 * arguments 1441 */ 1442 - if (call_prom("package-to-path", 3, 1, node, path, PROM_SCRATCH_SIZE-10) < 0) 1443 continue; 1444 prom_printf("found display : %s, opening ... 
", path); 1445 1446 ih = call_prom("open", 1, 1, path); 1447 - if (ih == (ihandle)0 || ih == (ihandle)-1) { 1448 prom_printf("failed\n"); 1449 continue; 1450 } ··· 1528 return 0; 1529 } 1530 1531 static void __init scan_dt_build_strings(phandle node, unsigned long *mem_start, 1532 unsigned long *mem_end) 1533 { ··· 1547 /* get and store all property names */ 1548 prev_name = RELOC(""); 1549 for (;;) { 1550 - 1551 - /* 32 is max len of name including nul. */ 1552 - namep = make_room(mem_start, mem_end, 32, 1); 1553 - if (call_prom("nextprop", 3, 1, node, prev_name, namep) <= 0) { 1554 /* No more nodes: unwind alloc */ 1555 *mem_start = (unsigned long)namep; 1556 break; ··· 1577 } 1578 } 1579 1580 - /* 1581 - * The Open Firmware 1275 specification states properties must be 31 bytes or 1582 - * less, however not all firmwares obey this. Make it 64 bytes to be safe. 1583 - */ 1584 - #define MAX_PROPERTY_NAME 64 1585 - 1586 static void __init scan_dt_build_struct(phandle node, unsigned long *mem_start, 1587 unsigned long *mem_end) 1588 { 1589 int l, align; 1590 phandle child; 1591 - char *namep, *prev_name, *sstart; 1592 unsigned long soff; 1593 unsigned char *valp; 1594 unsigned long offset = reloc_offset(); ··· 1604 call_prom("package-to-path", 3, 1, node, namep, l); 1605 } 1606 namep[l] = '\0'; 1607 *mem_start = _ALIGN(((unsigned long) namep) + strlen(namep) + 1, 4); 1608 } 1609 ··· 1623 prev_name = RELOC(""); 1624 sstart = (char *)RELOC(dt_string_start); 1625 for (;;) { 1626 - if (call_prom("nextprop", 3, 1, node, prev_name, pname) <= 0) 1627 break; 1628 1629 /* find string offset */ ··· 1642 l = call_prom("getproplen", 2, 1, node, pname); 1643 1644 /* sanity checks */ 1645 - if (l < 0) 1646 continue; 1647 if (l > MAX_PROPERTY_LENGTH) { 1648 prom_printf("WARNING: ignoring large property "); ··· 1790 1791 /* Some G5s have a missing interrupt definition, fix it up here */ 1792 u3 = call_prom("finddevice", 1, 1, ADDR("/u3@0,f8000000")); 1793 - if ((long)u3 <= 0) 1794 
return; 1795 i2c = call_prom("finddevice", 1, 1, ADDR("/u3@0,f8000000/i2c@f8001000")); 1796 - if ((long)i2c <= 0) 1797 return; 1798 mpic = call_prom("finddevice", 1, 1, ADDR("/u3@0,f8000000/mpic@f8040000")); 1799 - if ((long)mpic <= 0) 1800 return; 1801 1802 /* check if proper rev of u3 */ 1803 - if (prom_getprop(u3, "device-rev", &u3_rev, sizeof(u3_rev)) <= 0) 1804 return; 1805 if (u3_rev != 0x35) 1806 return; ··· 1907 getprop_rval = RELOC(of_platform); 1908 prom_setprop(_prom->chosen, "linux,platform", 1909 &getprop_rval, sizeof(getprop_rval)); 1910 1911 /* 1912 * On pSeries, copy the CPU hold code
··· 211 */ 212 #define ADDR(x) (u32) ((unsigned long)(x) - offset) 213 214 + /* 215 + * Error results ... some OF calls will return "-1" on error, some 216 + * will return 0, some will return either. To simplify, here are 217 + * macros to use with any ihandle or phandle return value to check if 218 + * it is valid 219 + */ 220 + 221 + #define PROM_ERROR (-1u) 222 + #define PHANDLE_VALID(p) ((p) != 0 && (p) != PROM_ERROR) 223 + #define IHANDLE_VALID(i) ((i) != 0 && (i) != PROM_ERROR) 224 + 225 + 226 /* This is the one and *ONLY* place where we actually call open 227 * firmware from, since we need to make sure we're running in 32b 228 * mode when we do. We switch back to 64b mode upon return. 229 */ 230 231 static int __init call_prom(const char *service, int nargs, int nret, ...) 232 { ··· 587 { 588 unsigned long offset = reloc_offset(); 589 ihandle elfloader; 590 591 elfloader = call_prom("open", 1, 1, ADDR("/packages/elf-loader")); 592 if (elfloader == 0) { 593 prom_printf("couldn't open /packages/elf-loader\n"); 594 return; 595 } 596 + call_prom("call-method", 3, 1, ADDR("process-elf-header"), 597 elfloader, ADDR(&fake_elf)); 598 call_prom("close", 1, 0, elfloader); 599 } ··· 646 base = _ALIGN_UP(base + 0x100000, align)) { 647 prom_debug(" trying: 0x%x\n\r", base); 648 addr = (unsigned long)prom_claim(base, size, 0); 649 + if (addr != PROM_ERROR) 650 break; 651 addr = 0; 652 if (align == 0) ··· 708 for(; base > RELOC(alloc_bottom); base = _ALIGN_DOWN(base - 0x100000, align)) { 709 prom_debug(" trying: 0x%x\n\r", base); 710 addr = (unsigned long)prom_claim(base, size, 0); 711 + if (addr != PROM_ERROR) 712 break; 713 addr = 0; 714 } ··· 902 { 903 unsigned long offset = reloc_offset(); 904 struct prom_t *_prom = PTRRELOC(&prom); 905 + phandle rtas_node; 906 + ihandle rtas_inst; 907 u32 base, entry = 0; 908 u32 size = 0; 909 910 prom_debug("prom_instantiate_rtas: start...\n"); 911 912 + rtas_node = call_prom("finddevice", 1, 1, ADDR("/rtas")); 913 + 
prom_debug("rtas_node: %x\n", rtas_node); 914 + if (!PHANDLE_VALID(rtas_node)) 915 return; 916 917 + prom_getprop(rtas_node, "rtas-size", &size, sizeof(size)); 918 if (size == 0) 919 return; 920 ··· 922 prom_printf("RTAS allocation failed !\n"); 923 return; 924 } 925 926 + rtas_inst = call_prom("open", 1, 1, ADDR("/rtas")); 927 + if (!IHANDLE_VALID(rtas_inst)) { 928 + prom_printf("opening rtas package failed"); 929 + return; 930 + } 931 + 932 + prom_printf("instantiating rtas at 0x%x ...", base); 933 934 if (call_prom("call-method", 3, 2, 935 ADDR("instantiate-rtas"), 936 + rtas_inst, base) != PROM_ERROR) { 937 entry = (long)_prom->args.rets[1]; 938 } 939 if (entry == 0) { ··· 940 941 reserve_mem(base, size); 942 943 + prom_setprop(rtas_node, "linux,rtas-base", &base, sizeof(base)); 944 + prom_setprop(rtas_node, "linux,rtas-entry", &entry, sizeof(entry)); 945 946 prom_debug("rtas base = 0x%x\n", base); 947 prom_debug("rtas entry = 0x%x\n", entry); ··· 1062 1063 prom_printf("opening PHB %s", path); 1064 phb_node = call_prom("open", 1, 1, path); 1065 + if (phb_node == 0) 1066 prom_printf("... failed\n"); 1067 else 1068 prom_printf("... done\n"); ··· 1279 1280 /* get a handle for the stdout device */ 1281 _prom->chosen = call_prom("finddevice", 1, 1, ADDR("/chosen")); 1282 + if (!PHANDLE_VALID(_prom->chosen)) 1283 prom_panic("cannot find chosen"); /* msg won't be printed :( */ 1284 1285 /* get device tree root */ 1286 _prom->root = call_prom("finddevice", 1, 1, ADDR("/")); 1287 + if (!PHANDLE_VALID(_prom->root)) 1288 prom_panic("cannot find device tree root"); /* msg won't be printed :( */ 1289 } 1290 ··· 1356 } 1357 /* Default to pSeries. 
We need to know if we are running LPAR */ 1358 rtas = call_prom("finddevice", 1, 1, ADDR("/rtas")); 1359 + if (PHANDLE_VALID(rtas)) { 1360 + int x = prom_getproplen(rtas, "ibm,hypertas-functions"); 1361 if (x != PROM_ERROR) { 1362 prom_printf("Hypertas detected, assuming LPAR !\n"); 1363 return PLATFORM_PSERIES_LPAR; ··· 1426 * leave some room at the end of the path for appending extra 1427 * arguments 1428 */ 1429 + if (call_prom("package-to-path", 3, 1, node, path, 1430 + PROM_SCRATCH_SIZE-10) == PROM_ERROR) 1431 continue; 1432 prom_printf("found display : %s, opening ... ", path); 1433 1434 ih = call_prom("open", 1, 1, path); 1435 + if (ih == 0) { 1436 prom_printf("failed\n"); 1437 continue; 1438 } ··· 1514 return 0; 1515 } 1516 1517 + /* 1518 + * The Open Firmware 1275 specification states properties must be 31 bytes or 1519 + * less, however not all firmwares obey this. Make it 64 bytes to be safe. 1520 + */ 1521 + #define MAX_PROPERTY_NAME 64 1522 + 1523 static void __init scan_dt_build_strings(phandle node, unsigned long *mem_start, 1524 unsigned long *mem_end) 1525 { ··· 1527 /* get and store all property names */ 1528 prev_name = RELOC(""); 1529 for (;;) { 1530 + int rc; 1531 + 1532 + /* 64 is max len of name including nul. 
*/ 1533 + namep = make_room(mem_start, mem_end, MAX_PROPERTY_NAME, 1); 1534 + rc = call_prom("nextprop", 3, 1, node, prev_name, namep); 1535 + if (rc != 1) { 1536 /* No more nodes: unwind alloc */ 1537 *mem_start = (unsigned long)namep; 1538 break; ··· 1555 } 1556 } 1557 1558 static void __init scan_dt_build_struct(phandle node, unsigned long *mem_start, 1559 unsigned long *mem_end) 1560 { 1561 int l, align; 1562 phandle child; 1563 + char *namep, *prev_name, *sstart, *p, *ep; 1564 unsigned long soff; 1565 unsigned char *valp; 1566 unsigned long offset = reloc_offset(); ··· 1588 call_prom("package-to-path", 3, 1, node, namep, l); 1589 } 1590 namep[l] = '\0'; 1591 + /* Fixup an Apple bug where they have bogus \0 chars in the 1592 + * middle of the path in some properties 1593 + */ 1594 + for (p = namep, ep = namep + l; p < ep; p++) 1595 + if (*p == '\0') { 1596 + memmove(p, p+1, ep - p); 1597 + ep--; l--; 1598 + } 1599 *mem_start = _ALIGN(((unsigned long) namep) + strlen(namep) + 1, 4); 1600 } 1601 ··· 1599 prev_name = RELOC(""); 1600 sstart = (char *)RELOC(dt_string_start); 1601 for (;;) { 1602 + int rc; 1603 + 1604 + rc = call_prom("nextprop", 3, 1, node, prev_name, pname); 1605 + if (rc != 1) 1606 break; 1607 1608 /* find string offset */ ··· 1615 l = call_prom("getproplen", 2, 1, node, pname); 1616 1617 /* sanity checks */ 1618 + if (l == PROM_ERROR) 1619 continue; 1620 if (l > MAX_PROPERTY_LENGTH) { 1621 prom_printf("WARNING: ignoring large property "); ··· 1763 1764 /* Some G5s have a missing interrupt definition, fix it up here */ 1765 u3 = call_prom("finddevice", 1, 1, ADDR("/u3@0,f8000000")); 1766 + if (!PHANDLE_VALID(u3)) 1767 return; 1768 i2c = call_prom("finddevice", 1, 1, ADDR("/u3@0,f8000000/i2c@f8001000")); 1769 + if (!PHANDLE_VALID(i2c)) 1770 return; 1771 mpic = call_prom("finddevice", 1, 1, ADDR("/u3@0,f8000000/mpic@f8040000")); 1772 + if (!PHANDLE_VALID(mpic)) 1773 return; 1774 1775 /* check if proper rev of u3 */ 1776 + if (prom_getprop(u3, 
"device-rev", &u3_rev, sizeof(u3_rev)) 1777 + == PROM_ERROR) 1778 return; 1779 if (u3_rev != 0x35) 1780 return; ··· 1879 getprop_rval = RELOC(of_platform); 1880 prom_setprop(_prom->chosen, "linux,platform", 1881 &getprop_rval, sizeof(getprop_rval)); 1882 + 1883 + /* 1884 + * On pSeries, inform the firmware about our capabilities 1885 + */ 1886 + if (RELOC(of_platform) & PLATFORM_PSERIES) 1887 + prom_send_capabilities(); 1888 1889 /* 1890 * On pSeries, copy the CPU hold code
-2
arch/ppc64/kernel/time.c
··· 325 326 irq_enter(); 327 328 - #ifndef CONFIG_PPC_ISERIES 329 profile_tick(CPU_PROFILING, regs); 330 - #endif 331 332 lpaca->lppaca.int_dword.fields.decr_int = 0; 333
··· 325 326 irq_enter(); 327 328 profile_tick(CPU_PROFILING, regs); 329 330 lpaca->lppaca.int_dword.fields.decr_int = 0; 331
+68 -20
arch/sparc64/kernel/pci_iommu.c
··· 196 return NULL; 197 } 198 199 /* Allocate and map kernel buffer of size SIZE using consistent mode 200 * DMA for PCI device PDEV. Return non-NULL cpu-side address if 201 * successful and set *DMA_ADDRP to the PCI side dma address. ··· 264 npages = size >> IO_PAGE_SHIFT; 265 ctx = 0; 266 if (iommu->iommu_ctxflush) 267 - ctx = iommu->iommu_cur_ctx++; 268 first_page = __pa(first_page); 269 while (npages--) { 270 iopte_val(*iopte) = (IOPTE_CONSISTENT(ctx) | ··· 345 } 346 } 347 348 spin_unlock_irqrestore(&iommu->lock, flags); 349 350 order = get_order(size); ··· 390 base_paddr = __pa(oaddr & IO_PAGE_MASK); 391 ctx = 0; 392 if (iommu->iommu_ctxflush) 393 - ctx = iommu->iommu_cur_ctx++; 394 if (strbuf->strbuf_enabled) 395 iopte_protection = IOPTE_STREAMING(ctx); 396 else ··· 410 return PCI_DMA_ERROR_CODE; 411 } 412 413 - static void pci_strbuf_flush(struct pci_strbuf *strbuf, struct pci_iommu *iommu, u32 vaddr, unsigned long ctx, unsigned long npages) 414 { 415 int limit; 416 417 - PCI_STC_FLUSHFLAG_INIT(strbuf); 418 if (strbuf->strbuf_ctxflush && 419 iommu->iommu_ctxflush) { 420 unsigned long matchreg, flushreg; 421 422 flushreg = strbuf->strbuf_ctxflush; 423 matchreg = PCI_STC_CTXMATCH_ADDR(strbuf, ctx); 424 425 - limit = 100000; 426 pci_iommu_write(flushreg, ctx); 427 - for(;;) { 428 - if (((long)pci_iommu_read(matchreg)) >= 0L) 429 - break; 430 - limit--; 431 - if (!limit) 432 - break; 433 - udelay(1); 434 } 435 - if (!limit) 436 printk(KERN_WARNING "pci_strbuf_flush: ctx flush " 437 - "timeout vaddr[%08x] ctx[%lx]\n", 438 - vaddr, ctx); 439 } else { 440 unsigned long i; 441 442 for (i = 0; i < npages; i++, vaddr += IO_PAGE_SIZE) 443 pci_iommu_write(strbuf->strbuf_pflush, vaddr); 444 } 445 446 pci_iommu_write(strbuf->strbuf_fsync, strbuf->strbuf_flushflag_pa); 447 (void) pci_iommu_read(iommu->write_complete_reg); 448 ··· 510 511 /* Step 1: Kick data out of streaming buffers if necessary. 
*/ 512 if (strbuf->strbuf_enabled) 513 - pci_strbuf_flush(strbuf, iommu, bus_addr, ctx, npages); 514 515 /* Step 2: Clear out first TSB entry. */ 516 iopte_make_dummy(iommu, base); 517 518 free_streaming_cluster(iommu, bus_addr - iommu->page_table_map_base, 519 npages, ctx); 520 521 spin_unlock_irqrestore(&iommu->lock, flags); 522 } ··· 659 /* Step 4: Choose a context if necessary. */ 660 ctx = 0; 661 if (iommu->iommu_ctxflush) 662 - ctx = iommu->iommu_cur_ctx++; 663 664 /* Step 5: Create the mappings. */ 665 if (strbuf->strbuf_enabled) ··· 724 725 /* Step 1: Kick data out of streaming buffers if necessary. */ 726 if (strbuf->strbuf_enabled) 727 - pci_strbuf_flush(strbuf, iommu, bus_addr, ctx, npages); 728 729 /* Step 2: Clear out first TSB entry. */ 730 iopte_make_dummy(iommu, base); 731 732 free_streaming_cluster(iommu, bus_addr - iommu->page_table_map_base, 733 npages, ctx); 734 735 spin_unlock_irqrestore(&iommu->lock, flags); 736 } ··· 772 } 773 774 /* Step 2: Kick data out of streaming buffers. */ 775 - pci_strbuf_flush(strbuf, iommu, bus_addr, ctx, npages); 776 777 spin_unlock_irqrestore(&iommu->lock, flags); 778 } ··· 816 i--; 817 npages = (IO_PAGE_ALIGN(sglist[i].dma_address + sglist[i].dma_length) 818 - bus_addr) >> IO_PAGE_SHIFT; 819 - pci_strbuf_flush(strbuf, iommu, bus_addr, ctx, npages); 820 821 spin_unlock_irqrestore(&iommu->lock, flags); 822 }
··· 196 return NULL; 197 } 198 199 + static int iommu_alloc_ctx(struct pci_iommu *iommu) 200 + { 201 + int lowest = iommu->ctx_lowest_free; 202 + int sz = IOMMU_NUM_CTXS - lowest; 203 + int n = find_next_zero_bit(iommu->ctx_bitmap, sz, lowest); 204 + 205 + if (unlikely(n == sz)) { 206 + n = find_next_zero_bit(iommu->ctx_bitmap, lowest, 1); 207 + if (unlikely(n == lowest)) { 208 + printk(KERN_WARNING "IOMMU: Ran out of contexts.\n"); 209 + n = 0; 210 + } 211 + } 212 + if (n) 213 + __set_bit(n, iommu->ctx_bitmap); 214 + 215 + return n; 216 + } 217 + 218 + static inline void iommu_free_ctx(struct pci_iommu *iommu, int ctx) 219 + { 220 + if (likely(ctx)) { 221 + __clear_bit(ctx, iommu->ctx_bitmap); 222 + if (ctx < iommu->ctx_lowest_free) 223 + iommu->ctx_lowest_free = ctx; 224 + } 225 + } 226 + 227 /* Allocate and map kernel buffer of size SIZE using consistent mode 228 * DMA for PCI device PDEV. Return non-NULL cpu-side address if 229 * successful and set *DMA_ADDRP to the PCI side dma address. 
··· 236 npages = size >> IO_PAGE_SHIFT; 237 ctx = 0; 238 if (iommu->iommu_ctxflush) 239 + ctx = iommu_alloc_ctx(iommu); 240 first_page = __pa(first_page); 241 while (npages--) { 242 iopte_val(*iopte) = (IOPTE_CONSISTENT(ctx) | ··· 317 } 318 } 319 320 + iommu_free_ctx(iommu, ctx); 321 + 322 spin_unlock_irqrestore(&iommu->lock, flags); 323 324 order = get_order(size); ··· 360 base_paddr = __pa(oaddr & IO_PAGE_MASK); 361 ctx = 0; 362 if (iommu->iommu_ctxflush) 363 + ctx = iommu_alloc_ctx(iommu); 364 if (strbuf->strbuf_enabled) 365 iopte_protection = IOPTE_STREAMING(ctx); 366 else ··· 380 return PCI_DMA_ERROR_CODE; 381 } 382 383 + static void pci_strbuf_flush(struct pci_strbuf *strbuf, struct pci_iommu *iommu, u32 vaddr, unsigned long ctx, unsigned long npages, int direction) 384 { 385 int limit; 386 387 if (strbuf->strbuf_ctxflush && 388 iommu->iommu_ctxflush) { 389 unsigned long matchreg, flushreg; 390 + u64 val; 391 392 flushreg = strbuf->strbuf_ctxflush; 393 matchreg = PCI_STC_CTXMATCH_ADDR(strbuf, ctx); 394 395 pci_iommu_write(flushreg, ctx); 396 + val = pci_iommu_read(matchreg); 397 + val &= 0xffff; 398 + if (!val) 399 + goto do_flush_sync; 400 + 401 + while (val) { 402 + if (val & 0x1) 403 + pci_iommu_write(flushreg, ctx); 404 + val >>= 1; 405 } 406 + val = pci_iommu_read(matchreg); 407 + if (unlikely(val)) { 408 printk(KERN_WARNING "pci_strbuf_flush: ctx flush " 409 + "timeout matchreg[%lx] ctx[%lx]\n", 410 + val, ctx); 411 + goto do_page_flush; 412 + } 413 } else { 414 unsigned long i; 415 416 + do_page_flush: 417 for (i = 0; i < npages; i++, vaddr += IO_PAGE_SIZE) 418 pci_iommu_write(strbuf->strbuf_pflush, vaddr); 419 } 420 421 + do_flush_sync: 422 + /* If the device could not have possibly put dirty data into 423 + * the streaming cache, no flush-flag synchronization needs 424 + * to be performed. 
425 + */ 426 + if (direction == PCI_DMA_TODEVICE) 427 + return; 428 + 429 + PCI_STC_FLUSHFLAG_INIT(strbuf); 430 pci_iommu_write(strbuf->strbuf_fsync, strbuf->strbuf_flushflag_pa); 431 (void) pci_iommu_read(iommu->write_complete_reg); 432 ··· 466 467 /* Step 1: Kick data out of streaming buffers if necessary. */ 468 if (strbuf->strbuf_enabled) 469 + pci_strbuf_flush(strbuf, iommu, bus_addr, ctx, npages, direction); 470 471 /* Step 2: Clear out first TSB entry. */ 472 iopte_make_dummy(iommu, base); 473 474 free_streaming_cluster(iommu, bus_addr - iommu->page_table_map_base, 475 npages, ctx); 476 + 477 + iommu_free_ctx(iommu, ctx); 478 479 spin_unlock_irqrestore(&iommu->lock, flags); 480 } ··· 613 /* Step 4: Choose a context if necessary. */ 614 ctx = 0; 615 if (iommu->iommu_ctxflush) 616 + ctx = iommu_alloc_ctx(iommu); 617 618 /* Step 5: Create the mappings. */ 619 if (strbuf->strbuf_enabled) ··· 678 679 /* Step 1: Kick data out of streaming buffers if necessary. */ 680 if (strbuf->strbuf_enabled) 681 + pci_strbuf_flush(strbuf, iommu, bus_addr, ctx, npages, direction); 682 683 /* Step 2: Clear out first TSB entry. */ 684 iopte_make_dummy(iommu, base); 685 686 free_streaming_cluster(iommu, bus_addr - iommu->page_table_map_base, 687 npages, ctx); 688 + 689 + iommu_free_ctx(iommu, ctx); 690 691 spin_unlock_irqrestore(&iommu->lock, flags); 692 } ··· 724 } 725 726 /* Step 2: Kick data out of streaming buffers. */ 727 + pci_strbuf_flush(strbuf, iommu, bus_addr, ctx, npages, direction); 728 729 spin_unlock_irqrestore(&iommu->lock, flags); 730 } ··· 768 i--; 769 npages = (IO_PAGE_ALIGN(sglist[i].dma_address + sglist[i].dma_length) 770 - bus_addr) >> IO_PAGE_SHIFT; 771 + pci_strbuf_flush(strbuf, iommu, bus_addr, ctx, npages, direction); 772 773 spin_unlock_irqrestore(&iommu->lock, flags); 774 }
+1 -1
arch/sparc64/kernel/pci_psycho.c
··· 1212 1213 /* Setup initial software IOMMU state. */ 1214 spin_lock_init(&iommu->lock); 1215 - iommu->iommu_cur_ctx = 0; 1216 1217 /* Register addresses. */ 1218 iommu->iommu_control = p->pbm_A.controller_regs + PSYCHO_IOMMU_CONTROL;
··· 1212 1213 /* Setup initial software IOMMU state. */ 1214 spin_lock_init(&iommu->lock); 1215 + iommu->ctx_lowest_free = 1; 1216 1217 /* Register addresses. */ 1218 iommu->iommu_control = p->pbm_A.controller_regs + PSYCHO_IOMMU_CONTROL;
+1 -1
arch/sparc64/kernel/pci_sabre.c
··· 1265 1266 /* Setup initial software IOMMU state. */ 1267 spin_lock_init(&iommu->lock); 1268 - iommu->iommu_cur_ctx = 0; 1269 1270 /* Register addresses. */ 1271 iommu->iommu_control = p->pbm_A.controller_regs + SABRE_IOMMU_CONTROL;
··· 1265 1266 /* Setup initial software IOMMU state. */ 1267 spin_lock_init(&iommu->lock); 1268 + iommu->ctx_lowest_free = 1; 1269 1270 /* Register addresses. */ 1271 iommu->iommu_control = p->pbm_A.controller_regs + SABRE_IOMMU_CONTROL;
+1 -1
arch/sparc64/kernel/pci_schizo.c
··· 1753 1754 /* Setup initial software IOMMU state. */ 1755 spin_lock_init(&iommu->lock); 1756 - iommu->iommu_cur_ctx = 0; 1757 1758 /* Register addresses, SCHIZO has iommu ctx flushing. */ 1759 iommu->iommu_control = pbm->pbm_regs + SCHIZO_IOMMU_CONTROL;
··· 1753 1754 /* Setup initial software IOMMU state. */ 1755 spin_lock_init(&iommu->lock); 1756 + iommu->ctx_lowest_free = 1; 1757 1758 /* Register addresses, SCHIZO has iommu ctx flushing. */ 1759 iommu->iommu_control = pbm->pbm_regs + SCHIZO_IOMMU_CONTROL;
+14 -6
arch/sparc64/kernel/sbus.c
··· 117 118 #define STRBUF_TAG_VALID 0x02UL 119 120 - static void sbus_strbuf_flush(struct sbus_iommu *iommu, u32 base, unsigned long npages) 121 { 122 unsigned long n; 123 int limit; 124 125 - iommu->strbuf_flushflag = 0UL; 126 n = npages; 127 while (n--) 128 upa_writeq(base + (n << IO_PAGE_SHIFT), 129 iommu->strbuf_regs + STRBUF_PFLUSH); 130 131 /* Whoopee cushion! */ 132 upa_writeq(__pa(&iommu->strbuf_flushflag), ··· 429 430 spin_lock_irqsave(&iommu->lock, flags); 431 free_streaming_cluster(iommu, dma_base, size >> IO_PAGE_SHIFT); 432 - sbus_strbuf_flush(iommu, dma_base, size >> IO_PAGE_SHIFT); 433 spin_unlock_irqrestore(&iommu->lock, flags); 434 } 435 ··· 592 iommu = sdev->bus->iommu; 593 spin_lock_irqsave(&iommu->lock, flags); 594 free_streaming_cluster(iommu, dvma_base, size >> IO_PAGE_SHIFT); 595 - sbus_strbuf_flush(iommu, dvma_base, size >> IO_PAGE_SHIFT); 596 spin_unlock_irqrestore(&iommu->lock, flags); 597 } 598 ··· 604 size = (IO_PAGE_ALIGN(base + size) - (base & IO_PAGE_MASK)); 605 606 spin_lock_irqsave(&iommu->lock, flags); 607 - sbus_strbuf_flush(iommu, base & IO_PAGE_MASK, size >> IO_PAGE_SHIFT); 608 spin_unlock_irqrestore(&iommu->lock, flags); 609 } 610 ··· 628 size = IO_PAGE_ALIGN(sg[i].dma_address + sg[i].dma_length) - base; 629 630 spin_lock_irqsave(&iommu->lock, flags); 631 - sbus_strbuf_flush(iommu, base, size >> IO_PAGE_SHIFT); 632 spin_unlock_irqrestore(&iommu->lock, flags); 633 } 634
··· 117 118 #define STRBUF_TAG_VALID 0x02UL 119 120 + static void sbus_strbuf_flush(struct sbus_iommu *iommu, u32 base, unsigned long npages, int direction) 121 { 122 unsigned long n; 123 int limit; 124 125 n = npages; 126 while (n--) 127 upa_writeq(base + (n << IO_PAGE_SHIFT), 128 iommu->strbuf_regs + STRBUF_PFLUSH); 129 + 130 + /* If the device could not have possibly put dirty data into 131 + * the streaming cache, no flush-flag synchronization needs 132 + * to be performed. 133 + */ 134 + if (direction == SBUS_DMA_TODEVICE) 135 + return; 136 + 137 + iommu->strbuf_flushflag = 0UL; 138 139 /* Whoopee cushion! */ 140 upa_writeq(__pa(&iommu->strbuf_flushflag), ··· 421 422 spin_lock_irqsave(&iommu->lock, flags); 423 free_streaming_cluster(iommu, dma_base, size >> IO_PAGE_SHIFT); 424 + sbus_strbuf_flush(iommu, dma_base, size >> IO_PAGE_SHIFT, direction); 425 spin_unlock_irqrestore(&iommu->lock, flags); 426 } 427 ··· 584 iommu = sdev->bus->iommu; 585 spin_lock_irqsave(&iommu->lock, flags); 586 free_streaming_cluster(iommu, dvma_base, size >> IO_PAGE_SHIFT); 587 + sbus_strbuf_flush(iommu, dvma_base, size >> IO_PAGE_SHIFT, direction); 588 spin_unlock_irqrestore(&iommu->lock, flags); 589 } 590 ··· 596 size = (IO_PAGE_ALIGN(base + size) - (base & IO_PAGE_MASK)); 597 598 spin_lock_irqsave(&iommu->lock, flags); 599 + sbus_strbuf_flush(iommu, base & IO_PAGE_MASK, size >> IO_PAGE_SHIFT, direction); 600 spin_unlock_irqrestore(&iommu->lock, flags); 601 } 602 ··· 620 size = IO_PAGE_ALIGN(sg[i].dma_address + sg[i].dma_length) - base; 621 622 spin_lock_irqsave(&iommu->lock, flags); 623 + sbus_strbuf_flush(iommu, base, size >> IO_PAGE_SHIFT, direction); 624 spin_unlock_irqrestore(&iommu->lock, flags); 625 } 626
-4
arch/um/Kconfig.debug
··· 2 3 source "lib/Kconfig.debug" 4 5 - config FRAME_POINTER 6 - bool 7 - default y if DEBUG_INFO 8 - 9 config PT_PROXY 10 bool "Enable ptrace proxy" 11 depends on XTERM_CHAN && DEBUG_INFO && MODE_TT
··· 2 3 source "lib/Kconfig.debug" 4 5 config PT_PROXY 6 bool "Enable ptrace proxy" 7 depends on XTERM_CHAN && DEBUG_INFO && MODE_TT
+11 -5
arch/um/drivers/random.c
··· 1 - /* Much of this ripped from hw_random.c */ 2 - 3 #include <linux/module.h> 4 #include <linux/fs.h> 5 #include <linux/miscdevice.h> ··· 17 */ 18 #define RNG_VERSION "1.0.0" 19 #define RNG_MODULE_NAME "random" 20 - #define RNG_DRIVER_NAME RNG_MODULE_NAME " virtual driver " RNG_VERSION 21 - #define PFX RNG_MODULE_NAME ": " 22 23 #define RNG_MISCDEV_MINOR 183 /* official */ 24 ··· 101 102 err = misc_register (&rng_miscdev); 103 if (err) { 104 - printk (KERN_ERR PFX "misc device register failed\n"); 105 goto err_out_cleanup_hw; 106 } 107 ··· 123 124 module_init (rng_init); 125 module_exit (rng_cleanup);
··· 1 + /* Copyright (C) 2005 Jeff Dike <jdike@addtoit.com> */ 2 + /* Much of this ripped from drivers/char/hw_random.c, see there for other 3 + * copyright. 4 + * 5 + * This software may be used and distributed according to the terms 6 + * of the GNU General Public License, incorporated herein by reference. 7 + */ 8 #include <linux/module.h> 9 #include <linux/fs.h> 10 #include <linux/miscdevice.h> ··· 12 */ 13 #define RNG_VERSION "1.0.0" 14 #define RNG_MODULE_NAME "random" 15 16 #define RNG_MISCDEV_MINOR 183 /* official */ 17 ··· 98 99 err = misc_register (&rng_miscdev); 100 if (err) { 101 + printk (KERN_ERR RNG_MODULE_NAME ": misc device register failed\n"); 102 goto err_out_cleanup_hw; 103 } 104 ··· 120 121 module_init (rng_init); 122 module_exit (rng_cleanup); 123 + 124 + MODULE_DESCRIPTION("UML Host Random Number Generator (RNG) driver"); 125 + MODULE_LICENSE("GPL");
-1
arch/um/drivers/ssl.c
··· 22 #include "init.h" 23 #include "irq_user.h" 24 #include "mconsole_kern.h" 25 - #include "2_5compat.h" 26 27 static int ssl_version = 1; 28
··· 22 #include "init.h" 23 #include "irq_user.h" 24 #include "mconsole_kern.h" 25 26 static int ssl_version = 1; 27
-1
arch/um/drivers/stdio_console.c
··· 28 #include "irq_user.h" 29 #include "mconsole_kern.h" 30 #include "init.h" 31 - #include "2_5compat.h" 32 33 #define MAX_TTYS (16) 34
··· 28 #include "irq_user.h" 29 #include "mconsole_kern.h" 30 #include "init.h" 31 32 #define MAX_TTYS (16) 33
+3 -4
arch/um/drivers/ubd_kern.c
··· 49 #include "irq_user.h" 50 #include "irq_kern.h" 51 #include "ubd_user.h" 52 - #include "2_5compat.h" 53 #include "os.h" 54 #include "mem.h" 55 #include "mem_kern.h" ··· 439 __setup("udb", udb_setup); 440 __uml_help(udb_setup, 441 "udb\n" 442 - " This option is here solely to catch ubd -> udb typos, which can be\n\n" 443 - " to impossible to catch visually unless you specifically look for\n\n" 444 - " them. The only result of any option starting with 'udb' is an error\n\n" 445 " in the boot output.\n\n" 446 ); 447
··· 49 #include "irq_user.h" 50 #include "irq_kern.h" 51 #include "ubd_user.h" 52 #include "os.h" 53 #include "mem.h" 54 #include "mem_kern.h" ··· 440 __setup("udb", udb_setup); 441 __uml_help(udb_setup, 442 "udb\n" 443 + " This option is here solely to catch ubd -> udb typos, which can be\n" 444 + " to impossible to catch visually unless you specifically look for\n" 445 + " them. The only result of any option starting with 'udb' is an error\n" 446 " in the boot output.\n\n" 447 ); 448
-24
arch/um/include/2_5compat.h
··· 1 - /* 2 - * Copyright (C) 2001 Jeff Dike (jdike@karaya.com) 3 - * Licensed under the GPL 4 - */ 5 - 6 - #ifndef __2_5_COMPAT_H__ 7 - #define __2_5_COMPAT_H__ 8 - 9 - #define INIT_HARDSECT(arr, maj, sizes) 10 - 11 - #define SET_PRI(task) do ; while(0) 12 - 13 - #endif 14 - 15 - /* 16 - * Overrides for Emacs so that we follow Linus's tabbing style. 17 - * Emacs will notice this stuff at the end of the file and automatically 18 - * adjust the settings for this buffer only. This must remain at the end 19 - * of the file. 20 - * --------------------------------------------------------------------------- 21 - * Local variables: 22 - * c-file-style: "linux" 23 - * End: 24 - */
···
+2 -1
arch/um/include/sysrq.h
··· 1 #ifndef __UM_SYSRQ_H 2 #define __UM_SYSRQ_H 3 4 - extern void show_trace(unsigned long *stack); 5 6 #endif
··· 1 #ifndef __UM_SYSRQ_H 2 #define __UM_SYSRQ_H 3 4 + struct task_struct; 5 + extern void show_trace(struct task_struct* task, unsigned long *stack); 6 7 #endif
-1
arch/um/kernel/exec_kern.c
··· 16 #include "kern.h" 17 #include "irq_user.h" 18 #include "tlb.h" 19 - #include "2_5compat.h" 20 #include "os.h" 21 #include "time_user.h" 22 #include "choose-mode.h"
··· 16 #include "kern.h" 17 #include "irq_user.h" 18 #include "tlb.h" 19 #include "os.h" 20 #include "time_user.h" 21 #include "choose-mode.h"
-59
arch/um/kernel/initrd_kern.c
··· 1 - /* 2 - * Copyright (C) 2000, 2001, 2002 Jeff Dike (jdike@karaya.com) 3 - * Licensed under the GPL 4 - */ 5 - 6 - #include "linux/init.h" 7 - #include "linux/bootmem.h" 8 - #include "linux/initrd.h" 9 - #include "asm/types.h" 10 - #include "user_util.h" 11 - #include "kern_util.h" 12 - #include "initrd.h" 13 - #include "init.h" 14 - #include "os.h" 15 - 16 - /* Changed by uml_initrd_setup, which is a setup */ 17 - static char *initrd __initdata = NULL; 18 - 19 - static int __init read_initrd(void) 20 - { 21 - void *area; 22 - long long size; 23 - int err; 24 - 25 - if(initrd == NULL) return 0; 26 - err = os_file_size(initrd, &size); 27 - if(err) return 0; 28 - area = alloc_bootmem(size); 29 - if(area == NULL) return 0; 30 - if(load_initrd(initrd, area, size) == -1) return 0; 31 - initrd_start = (unsigned long) area; 32 - initrd_end = initrd_start + size; 33 - return 0; 34 - } 35 - 36 - __uml_postsetup(read_initrd); 37 - 38 - static int __init uml_initrd_setup(char *line, int *add) 39 - { 40 - initrd = line; 41 - return 0; 42 - } 43 - 44 - __uml_setup("initrd=", uml_initrd_setup, 45 - "initrd=<initrd image>\n" 46 - " This is used to boot UML from an initrd image. The argument is the\n" 47 - " name of the file containing the image.\n\n" 48 - ); 49 - 50 - /* 51 - * Overrides for Emacs so that we follow Linus's tabbing style. 52 - * Emacs will notice this stuff at the end of the file and automatically 53 - * adjust the settings for this buffer only. This must remain at the end 54 - * of the file. 55 - * --------------------------------------------------------------------------- 56 - * Local variables: 57 - * c-file-style: "linux" 58 - * End: 59 - */
···
-46
arch/um/kernel/initrd_user.c
··· 1 - /* 2 - * Copyright (C) 2000, 2001 Jeff Dike (jdike@karaya.com) 3 - * Licensed under the GPL 4 - */ 5 - 6 - #include <unistd.h> 7 - #include <sys/types.h> 8 - #include <sys/stat.h> 9 - #include <errno.h> 10 - 11 - #include "user_util.h" 12 - #include "kern_util.h" 13 - #include "user.h" 14 - #include "initrd.h" 15 - #include "os.h" 16 - 17 - int load_initrd(char *filename, void *buf, int size) 18 - { 19 - int fd, n; 20 - 21 - fd = os_open_file(filename, of_read(OPENFLAGS()), 0); 22 - if(fd < 0){ 23 - printk("Opening '%s' failed - err = %d\n", filename, -fd); 24 - return(-1); 25 - } 26 - n = os_read_file(fd, buf, size); 27 - if(n != size){ 28 - printk("Read of %d bytes from '%s' failed, err = %d\n", size, 29 - filename, -n); 30 - return(-1); 31 - } 32 - 33 - os_close_file(fd); 34 - return(0); 35 - } 36 - 37 - /* 38 - * Overrides for Emacs so that we follow Linus's tabbing style. 39 - * Emacs will notice this stuff at the end of the file and automatically 40 - * adjust the settings for this buffer only. This must remain at the end 41 - * of the file. 42 - * --------------------------------------------------------------------------- 43 - * Local variables: 44 - * c-file-style: "linux" 45 - * End: 46 - */
···
+22 -20
arch/um/kernel/main.c
··· 71 72 static void last_ditch_exit(int sig) 73 { 74 - CHOOSE_MODE(kmalloc_ok = 0, (void) 0); 75 signal(SIGINT, SIG_DFL); 76 signal(SIGTERM, SIG_DFL); 77 signal(SIGHUP, SIG_DFL); ··· 87 { 88 char **new_argv; 89 sigset_t mask; 90 - int ret, i; 91 92 /* Enable all signals except SIGIO - in some environments, we can 93 * enter with some signals blocked ··· 160 */ 161 change_sig(SIGPROF, 0); 162 163 /* Reboot */ 164 if(ret){ 165 - int err; 166 - 167 printf("\n"); 168 - 169 - /* stop timers and set SIG*ALRM to be ignored */ 170 - disable_timer(); 171 - 172 - /* disable SIGIO for the fds and set SIGIO to be ignored */ 173 - err = deactivate_all_fds(); 174 - if(err) 175 - printf("deactivate_all_fds failed, errno = %d\n", 176 - -err); 177 - 178 - /* Let any pending signals fire now. This ensures 179 - * that they won't be delivered after the exec, when 180 - * they are definitely not expected. 181 - */ 182 - unblock_signals(); 183 - 184 execvp(new_argv[0], new_argv); 185 perror("Failed to exec kernel"); 186 ret = 1;
··· 71 72 static void last_ditch_exit(int sig) 73 { 74 + kmalloc_ok = 0; 75 signal(SIGINT, SIG_DFL); 76 signal(SIGTERM, SIG_DFL); 77 signal(SIGHUP, SIG_DFL); ··· 87 { 88 char **new_argv; 89 sigset_t mask; 90 + int ret, i, err; 91 92 /* Enable all signals except SIGIO - in some environments, we can 93 * enter with some signals blocked ··· 160 */ 161 change_sig(SIGPROF, 0); 162 163 + /* This signal stuff used to be in the reboot case. However, 164 + * sometimes a SIGVTALRM can come in when we're halting (reproducably 165 + * when writing out gcov information, presumably because that takes 166 + * some time) and cause a segfault. 167 + */ 168 + 169 + /* stop timers and set SIG*ALRM to be ignored */ 170 + disable_timer(); 171 + 172 + /* disable SIGIO for the fds and set SIGIO to be ignored */ 173 + err = deactivate_all_fds(); 174 + if(err) 175 + printf("deactivate_all_fds failed, errno = %d\n", -err); 176 + 177 + /* Let any pending signals fire now. This ensures 178 + * that they won't be delivered after the exec, when 179 + * they are definitely not expected. 180 + */ 181 + unblock_signals(); 182 + 183 /* Reboot */ 184 if(ret){ 185 printf("\n"); 186 execvp(new_argv[0], new_argv); 187 perror("Failed to exec kernel"); 188 ret = 1;
-36
arch/um/kernel/process_kern.c
··· 43 #include "tlb.h" 44 #include "frame_kern.h" 45 #include "sigcontext.h" 46 - #include "2_5compat.h" 47 #include "os.h" 48 #include "mode.h" 49 #include "mode_kern.h" ··· 53 * entry. 54 */ 55 struct cpu_task cpu_tasks[NR_CPUS] = { [0 ... NR_CPUS - 1] = { -1, NULL } }; 56 - 57 - struct task_struct *get_task(int pid, int require) 58 - { 59 - struct task_struct *ret; 60 - 61 - read_lock(&tasklist_lock); 62 - ret = find_task_by_pid(pid); 63 - read_unlock(&tasklist_lock); 64 - 65 - if(require && (ret == NULL)) panic("get_task couldn't find a task\n"); 66 - return(ret); 67 - } 68 69 int external_pid(void *t) 70 { ··· 176 177 while(1){ 178 /* endless idle loop with no priority at all */ 179 - SET_PRI(current); 180 181 /* 182 * although we are an idle CPU, we do not want to ··· 196 int page_size(void) 197 { 198 return(PAGE_SIZE); 199 - } 200 - 201 - unsigned long page_mask(void) 202 - { 203 - return(PAGE_MASK); 204 } 205 206 void *um_virt_to_phys(struct task_struct *task, unsigned long addr, ··· 330 return(new); 331 } 332 333 - void *get_init_task(void) 334 - { 335 - return(&init_thread_union.thread_info.task); 336 - } 337 - 338 int copy_to_user_proc(void __user *to, void *from, int size) 339 { 340 return(copy_to_user(to, from, size)); ··· 456 return sp & ~0xf; 457 } 458 #endif 459 - 460 - 461 - /* 462 - * Overrides for Emacs so that we follow Linus's tabbing style. 463 - * Emacs will notice this stuff at the end of the file and automatically 464 - * adjust the settings for this buffer only. This must remain at the end 465 - * of the file. 466 - * --------------------------------------------------------------------------- 467 - * Local variables: 468 - * c-file-style: "linux" 469 - * End: 470 - */
··· 43 #include "tlb.h" 44 #include "frame_kern.h" 45 #include "sigcontext.h" 46 #include "os.h" 47 #include "mode.h" 48 #include "mode_kern.h" ··· 54 * entry. 55 */ 56 struct cpu_task cpu_tasks[NR_CPUS] = { [0 ... NR_CPUS - 1] = { -1, NULL } }; 57 58 int external_pid(void *t) 59 { ··· 189 190 while(1){ 191 /* endless idle loop with no priority at all */ 192 193 /* 194 * although we are an idle CPU, we do not want to ··· 210 int page_size(void) 211 { 212 return(PAGE_SIZE); 213 } 214 215 void *um_virt_to_phys(struct task_struct *task, unsigned long addr, ··· 349 return(new); 350 } 351 352 int copy_to_user_proc(void __user *to, void *from, int size) 353 { 354 return(copy_to_user(to, from, size)); ··· 480 return sp & ~0xf; 481 } 482 #endif
+3 -16
arch/um/kernel/ptrace.c
··· 322 UPT_SYSCALL_ARG2(regs), 323 UPT_SYSCALL_ARG3(regs), 324 UPT_SYSCALL_ARG4(regs)); 325 - else { 326 - int res = UPT_SYSCALL_RET(regs); 327 - audit_syscall_exit(current, AUDITSC_RESULT(res), 328 - res); 329 - } 330 } 331 332 /* Fake a debug trap */ ··· 354 current->exit_code = 0; 355 } 356 } 357 - 358 - /* 359 - * Overrides for Emacs so that we follow Linus's tabbing style. 360 - * Emacs will notice this stuff at the end of the file and automatically 361 - * adjust the settings for this buffer only. This must remain at the end 362 - * of the file. 363 - * --------------------------------------------------------------------------- 364 - * Local variables: 365 - * c-file-style: "linux" 366 - * End: 367 - */
··· 322 UPT_SYSCALL_ARG2(regs), 323 UPT_SYSCALL_ARG3(regs), 324 UPT_SYSCALL_ARG4(regs)); 325 + else audit_syscall_exit(current, 326 + AUDITSC_RESULT(UPT_SYSCALL_RET(regs)), 327 + UPT_SYSCALL_RET(regs)); 328 } 329 330 /* Fake a debug trap */ ··· 356 current->exit_code = 0; 357 } 358 }
+14 -7
arch/um/kernel/sysrq.c
··· 3 * Licensed under the GPL 4 */ 5 6 #include "linux/sched.h" 7 #include "linux/kernel.h" 8 #include "linux/module.h" ··· 13 #include "sysrq.h" 14 #include "user_util.h" 15 16 - void show_trace(unsigned long * stack) 17 { 18 - /* XXX: Copy the CONFIG_FRAME_POINTER stack-walking backtrace from 19 - * arch/i386/kernel/traps.c, and then move this to sys-i386/sysrq.c.*/ 20 unsigned long addr; 21 22 if (!stack) { 23 - stack = (unsigned long*) &stack; 24 WARN_ON(1); 25 } 26 ··· 36 } 37 printk("\n"); 38 } 39 40 /* 41 * stack dumps generator - this is used by arch-independent code. ··· 46 { 47 unsigned long stack; 48 49 - show_trace(&stack); 50 } 51 EXPORT_SYMBOL(dump_stack); 52 ··· 61 int i; 62 63 if (esp == NULL) { 64 - if (task != current) { 65 esp = (unsigned long *) KSTK_ESP(task); 66 /* Which one? No actual difference - just coding style.*/ 67 //esp = (unsigned long *) PT_REGS_IP(&task->thread.regs); ··· 83 printk("%08lx ", *stack++); 84 } 85 86 - show_trace(esp); 87 }
··· 3 * Licensed under the GPL 4 */ 5 6 + #include "linux/config.h" 7 #include "linux/sched.h" 8 #include "linux/kernel.h" 9 #include "linux/module.h" ··· 12 #include "sysrq.h" 13 #include "user_util.h" 14 15 + /* Catch non-i386 SUBARCH's. */ 16 + #if !defined(CONFIG_UML_X86) || defined(CONFIG_64BIT) 17 + void show_trace(struct task_struct *task, unsigned long * stack) 18 { 19 unsigned long addr; 20 21 if (!stack) { 22 + stack = (unsigned long*) &stack; 23 WARN_ON(1); 24 } 25 ··· 35 } 36 printk("\n"); 37 } 38 + #endif 39 40 /* 41 * stack dumps generator - this is used by arch-independent code. ··· 44 { 45 unsigned long stack; 46 47 + show_trace(current, &stack); 48 } 49 EXPORT_SYMBOL(dump_stack); 50 ··· 59 int i; 60 61 if (esp == NULL) { 62 + if (task != current && task != NULL) { 63 + /* XXX: Isn't this bogus? I.e. isn't this the 64 + * *userspace* stack of this task? If not so, use this 65 + * even when task == current (as in i386). 66 + */ 67 esp = (unsigned long *) KSTK_ESP(task); 68 /* Which one? No actual difference - just coding style.*/ 69 //esp = (unsigned long *) PT_REGS_IP(&task->thread.regs); ··· 77 printk("%08lx ", *stack++); 78 } 79 80 + printk("Call Trace: \n"); 81 + show_trace(current, esp); 82 }
-1
arch/um/kernel/trap_kern.c
··· 23 #include "kern.h" 24 #include "chan_kern.h" 25 #include "mconsole_kern.h" 26 - #include "2_5compat.h" 27 #include "mem.h" 28 #include "mem_kern.h" 29
··· 23 #include "kern.h" 24 #include "chan_kern.h" 25 #include "mconsole_kern.h" 26 #include "mem.h" 27 #include "mem_kern.h" 28
+1 -7
arch/um/kernel/tt/process_kern.c
··· 32 unsigned long flags; 33 int err, vtalrm, alrm, prof, cpu; 34 char c; 35 - /* jailing and SMP are incompatible, so this doesn't need to be 36 - * made per-cpu 37 - */ 38 - static int reading; 39 40 from = prev; 41 to = next; ··· 55 c = 0; 56 set_current(to); 57 58 - reading = 0; 59 err = os_write_file(to->thread.mode.tt.switch_pipe[1], &c, sizeof(c)); 60 if(err != sizeof(c)) 61 panic("write of switch_pipe failed, err = %d", -err); 62 63 - reading = 1; 64 - if(from->thread.mode.tt.switch_pipe[0] == -1) 65 os_kill_process(os_getpid(), 0); 66 67 err = os_read_file(from->thread.mode.tt.switch_pipe[0], &c, sizeof(c));
··· 32 unsigned long flags; 33 int err, vtalrm, alrm, prof, cpu; 34 char c; 35 36 from = prev; 37 to = next; ··· 59 c = 0; 60 set_current(to); 61 62 err = os_write_file(to->thread.mode.tt.switch_pipe[1], &c, sizeof(c)); 63 if(err != sizeof(c)) 64 panic("write of switch_pipe failed, err = %d", -err); 65 66 + if(from->thread.mode.tt.switch_pipe[0] == -1) 67 os_kill_process(os_getpid(), 0); 68 69 err = os_read_file(from->thread.mode.tt.switch_pipe[0], &c, sizeof(c));
-6
arch/um/kernel/um_arch.c
··· 111 .show = show_cpuinfo, 112 }; 113 114 - pte_t * __bad_pagetable(void) 115 - { 116 - panic("Someone should implement __bad_pagetable"); 117 - return(NULL); 118 - } 119 - 120 /* Set in linux_main */ 121 unsigned long host_task_size; 122 unsigned long task_size;
··· 111 .show = show_cpuinfo, 112 }; 113 114 /* Set in linux_main */ 115 unsigned long host_task_size; 116 unsigned long task_size;
+79 -1
arch/um/sys-i386/sysrq.c
··· 3 * Licensed under the GPL 4 */ 5 6 #include "linux/kernel.h" 7 #include "linux/smp.h" 8 #include "linux/sched.h" 9 #include "asm/ptrace.h" 10 #include "sysrq.h" 11 12 void show_regs(struct pt_regs *regs) 13 { 14 printk("\n"); ··· 34 0xffff & PT_REGS_DS(regs), 35 0xffff & PT_REGS_ES(regs)); 36 37 - show_trace((unsigned long *) &regs); 38 }
··· 3 * Licensed under the GPL 4 */ 5 6 + #include "linux/config.h" 7 #include "linux/kernel.h" 8 #include "linux/smp.h" 9 #include "linux/sched.h" 10 + #include "linux/kallsyms.h" 11 #include "asm/ptrace.h" 12 #include "sysrq.h" 13 14 + /* This is declared by <linux/sched.h> */ 15 void show_regs(struct pt_regs *regs) 16 { 17 printk("\n"); ··· 31 0xffff & PT_REGS_DS(regs), 32 0xffff & PT_REGS_ES(regs)); 33 34 + show_trace(NULL, (unsigned long *) &regs); 35 } 36 + 37 + /* Copied from i386. */ 38 + static inline int valid_stack_ptr(struct thread_info *tinfo, void *p) 39 + { 40 + return p > (void *)tinfo && 41 + p < (void *)tinfo + THREAD_SIZE - 3; 42 + } 43 + 44 + /* Adapted from i386 (we also print the address we read from). */ 45 + static inline unsigned long print_context_stack(struct thread_info *tinfo, 46 + unsigned long *stack, unsigned long ebp) 47 + { 48 + unsigned long addr; 49 + 50 + #ifdef CONFIG_FRAME_POINTER 51 + while (valid_stack_ptr(tinfo, (void *)ebp)) { 52 + addr = *(unsigned long *)(ebp + 4); 53 + printk("%08lx: [<%08lx>]", ebp + 4, addr); 54 + print_symbol(" %s", addr); 55 + printk("\n"); 56 + ebp = *(unsigned long *)ebp; 57 + } 58 + #else 59 + while (valid_stack_ptr(tinfo, stack)) { 60 + addr = *stack; 61 + if (__kernel_text_address(addr)) { 62 + printk("%08lx: [<%08lx>]", (unsigned long) stack, addr); 63 + print_symbol(" %s", addr); 64 + printk("\n"); 65 + } 66 + stack++; 67 + } 68 + #endif 69 + return ebp; 70 + } 71 + 72 + void show_trace(struct task_struct* task, unsigned long * stack) 73 + { 74 + unsigned long ebp; 75 + struct thread_info *context; 76 + 77 + /* Turn this into BUG_ON if possible. */ 78 + if (!stack) { 79 + stack = (unsigned long*) &stack; 80 + printk("show_trace: got NULL stack, implicit assumption task == current"); 81 + WARN_ON(1); 82 + } 83 + 84 + if (!task) 85 + task = current; 86 + 87 + if (task != current) { 88 + //ebp = (unsigned long) KSTK_EBP(task);
89 + /* Which one? No actual difference - just coding style.*/ 90 + ebp = (unsigned long) PT_REGS_EBP(&task->thread.regs); 91 + } else { 92 + asm ("movl %%ebp, %0" : "=r" (ebp) : ); 93 + } 94 + 95 + context = (struct thread_info *) 96 + ((unsigned long)stack & (~(THREAD_SIZE - 1))); 97 + print_context_stack(context, stack, ebp); 98 + 99 + /*while (((long) stack & (THREAD_SIZE-1)) != 0) { 100 + addr = *stack; 101 + if (__kernel_text_address(addr)) { 102 + printk("%08lx: [<%08lx>]", (unsigned long) stack, addr); 103 + print_symbol(" %s", addr); 104 + printk("\n"); 105 + } 106 + stack++; 107 + }*/ 108 + printk("\n"); 109 + } 110 +
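The frame-pointer walk added above follows saved-EBP links: on i386, each frame stores the caller's frame pointer at `*ebp` and the return address at `*(ebp + 4)`. A user-space Python sketch of that traversal (not kernel code; all addresses and values below are fabricated for illustration):

```python
# Sketch of the CONFIG_FRAME_POINTER branch of print_context_stack():
# follow saved frame pointers, collecting the return address stored one
# word above each saved EBP.

WORD = 4  # i386 word size in bytes

def walk_frames(stack, ebp):
    """stack: dict of address -> word; ebp: starting frame pointer.
    Returns the chain of return addresses, innermost frame first."""
    trace = []
    # membership check stands in for valid_stack_ptr()
    while ebp in stack and (ebp + WORD) in stack:
        trace.append(stack[ebp + WORD])  # return address sits above saved EBP
        ebp = stack[ebp]                 # step to the caller's frame
    return trace

# A fake three-frame stack: frames at 0x1000 -> 0x1020 -> 0x1040.
fake_stack = {
    0x1000: 0x1020, 0x1004: 0xC0100AAA,
    0x1020: 0x1040, 0x1024: 0xC0100BBB,
    0x1040: 0x0000, 0x1044: 0xC0100CCC,  # saved EBP of 0 ends the walk
}
print([hex(a) for a in walk_frames(fake_stack, 0x1000)])
```

The terminating condition mirrors the kernel's: the walk stops as soon as the next frame pointer leaves the valid stack region.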
+1 -13
arch/um/sys-ppc/sysrq.c
··· 27 0xffff & regs->xds, 0xffff & regs->xes); 28 #endif 29 30 - show_trace(&regs->gpr[1]); 31 } 32 - 33 - 34 - /* 35 - * Overrides for Emacs so that we follow Linus's tabbing style. 36 - * Emacs will notice this stuff at the end of the file and automatically 37 - * adjust the settings for this buffer only. This must remain at the end 38 - * of the file. 39 - * --------------------------------------------------------------------------- 40 - * Local variables: 41 - * c-file-style: "linux" 42 - * End: 43 - */
··· 27 0xffff & regs->xds, 0xffff & regs->xes); 28 #endif 29 30 + show_trace(current, &regs->gpr[1]); 31 }
+10 -6
arch/um/sys-x86_64/syscalls.c
··· 133 134 #ifdef CONFIG_MODE_SKAS 135 136 static long arch_prctl_skas(int code, unsigned long addr) 137 { 138 long ret = 0; 139 140 switch(code){ 141 - case ARCH_SET_GS: 142 - current->thread.regs.regs.skas.regs[GS_BASE / sizeof(unsigned long)] = addr; 143 - break; 144 case ARCH_SET_FS: 145 current->thread.regs.regs.skas.regs[FS_BASE / sizeof(unsigned long)] = addr; 146 break; 147 case ARCH_GET_FS: 148 - ret = put_user(current->thread.regs.regs.skas.regs[GS / sizeof(unsigned long)], &addr); 149 break; 150 case ARCH_GET_GS: 151 - ret = put_user(current->thread.regs.regs.skas.regs[FS / sizeof(unsigned \ 152 - long)], &addr); 153 break; 154 default: 155 ret = -EINVAL;
··· 133 134 #ifdef CONFIG_MODE_SKAS 135 136 + /* XXX: Must also call arch_prctl in the host, beside saving the segment bases! */ 137 static long arch_prctl_skas(int code, unsigned long addr) 138 { 139 long ret = 0; 140 141 switch(code){ 142 case ARCH_SET_FS: 143 current->thread.regs.regs.skas.regs[FS_BASE / sizeof(unsigned long)] = addr; 144 break; 145 + case ARCH_SET_GS: 146 + current->thread.regs.regs.skas.regs[GS_BASE / sizeof(unsigned long)] = addr; 147 + break; 148 case ARCH_GET_FS: 149 + ret = put_user(current->thread.regs.regs.skas. 150 + regs[FS_BASE / sizeof(unsigned long)], 151 + (unsigned long __user *)addr); 152 break; 153 case ARCH_GET_GS: 154 + ret = put_user(current->thread.regs.regs.skas. 155 + regs[GS_BASE / sizeof(unsigned long)], 156 + (unsigned long __user *)addr); 157 break; 158 default: 159 ret = -EINVAL;
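Besides reordering the cases, the hunk above fixes the `ARCH_GET_FS`/`ARCH_GET_GS` paths, which previously indexed the saved-register array with the wrong offsets (`FS`/`GS` instead of `FS_BASE`/`GS_BASE`). A Python sketch of the offset-to-index scheme the fixed code relies on (the offsets below are hypothetical, not the real ptrace register layout):

```python
# The saved registers form a flat array of machine words; a byte offset
# such as FS_BASE is turned into an array index by dividing by the word
# size, mirroring regs[FS_BASE / sizeof(unsigned long)] in the patch.
# FS_BASE/GS_BASE values here are made up for illustration.

WORD = 8             # x86_64 word size
FS_BASE = 21 * WORD  # hypothetical byte offsets into the register block
GS_BASE = 22 * WORD

regs = [0] * 27      # stand-in for thread.regs.regs.skas.regs

def arch_set(offset, addr):
    regs[offset // WORD] = addr  # ARCH_SET_FS / ARCH_SET_GS

def arch_get(offset):
    return regs[offset // WORD]  # the buggy code read from the wrong slot

arch_set(FS_BASE, 0x7F0000001000)
arch_set(GS_BASE, 0x7F0000002000)
```

Using distinct named offsets for set and get makes the earlier mismatch (setting via `FS_BASE` but reading via `FS`) impossible to reproduce.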
+1 -10
arch/um/sys-x86_64/sysrq.c
··· 36 void show_regs(struct pt_regs *regs) 37 { 38 __show_regs(regs); 39 - show_trace((unsigned long *) &regs); 40 } 41 - 42 - /* Emacs will notice this stuff at the end of the file and automatically 43 - * adjust the settings for this buffer only. This must remain at the end 44 - * of the file. 45 - * --------------------------------------------------------------------------- 46 - * Local variables: 47 - * c-file-style: "linux" 48 - * End: 49 - */
··· 36 void show_regs(struct pt_regs *regs) 37 { 38 __show_regs(regs); 39 + show_trace(current, (unsigned long *) &regs); 40 }
+1
arch/x86_64/Kconfig
··· 305 306 config X86_PM_TIMER 307 bool "PM timer" 308 default y 309 help 310 Support the ACPI PM timer for time keeping. This is slow,
··· 305 306 config X86_PM_TIMER 307 bool "PM timer" 308 + depends on ACPI 309 default y 310 help 311 Support the ACPI PM timer for time keeping. This is slow,
+1
arch/x86_64/kernel/io_apic.c
··· 37 #include <asm/desc.h> 38 #include <asm/proto.h> 39 #include <asm/mach_apic.h> 40 41 #define __apicdebuginit __init 42
··· 37 #include <asm/desc.h> 38 #include <asm/proto.h> 39 #include <asm/mach_apic.h> 40 + #include <asm/acpi.h> 41 42 #define __apicdebuginit __init 43
+1
arch/x86_64/kernel/mpparse.c
··· 30 #include <asm/pgalloc.h> 31 #include <asm/io_apic.h> 32 #include <asm/proto.h> 33 34 /* Have we found an MP table */ 35 int smp_found_config;
··· 30 #include <asm/pgalloc.h> 31 #include <asm/io_apic.h> 32 #include <asm/proto.h> 33 + #include <asm/acpi.h> 34 35 /* Have we found an MP table */ 36 int smp_found_config;
+1
arch/x86_64/kernel/signal.c
··· 28 #include <asm/uaccess.h> 29 #include <asm/i387.h> 30 #include <asm/proto.h> 31 32 /* #define DEBUG_SIG 1 */ 33
··· 28 #include <asm/uaccess.h> 29 #include <asm/i387.h> 30 #include <asm/proto.h> 31 + #include <asm/ia32_unistd.h> 32 33 /* #define DEBUG_SIG 1 */ 34
+2
arch/x86_64/kernel/time.c
··· 27 #include <linux/bcd.h> 28 #include <linux/kallsyms.h> 29 #include <linux/acpi.h> 30 #include <acpi/achware.h> /* for PM timer frequency */ 31 #include <asm/8253pit.h> 32 #include <asm/pgtable.h> 33 #include <asm/vsyscall.h>
··· 27 #include <linux/bcd.h> 28 #include <linux/kallsyms.h> 29 #include <linux/acpi.h> 30 + #ifdef CONFIG_ACPI 31 #include <acpi/achware.h> /* for PM timer frequency */ 32 + #endif 33 #include <asm/8253pit.h> 34 #include <asm/pgtable.h> 35 #include <asm/vsyscall.h>
+14
drivers/cdrom/viocd.c
··· 488 & (CDC_DVD_RAM | CDC_RAM)) != 0; 489 } 490 break; 491 default: 492 if (cgc->sense) { 493 /* indicate Unknown code */
··· 488 & (CDC_DVD_RAM | CDC_RAM)) != 0; 489 } 490 break; 491 + case GPCMD_GET_CONFIGURATION: 492 + if (cgc->cmd[3] == CDF_RWRT) { 493 + struct rwrt_feature_desc *rfd = (struct rwrt_feature_desc *)(cgc->buffer + sizeof(struct feature_header)); 494 + 495 + if ((buflen >= 496 + (sizeof(struct feature_header) + sizeof(*rfd))) && 497 + (cdi->ops->capability & ~cdi->mask 498 + & (CDC_DVD_RAM | CDC_RAM))) { 499 + rfd->feature_code = cpu_to_be16(CDF_RWRT); 500 + rfd->curr = 1; 501 + ret = 0; 502 + } 503 + } 504 + break; 505 default: 506 if (cgc->sense) { 507 /* indicate Unknown code */
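The new `GPCMD_GET_CONFIGURATION` branch answers a `CDF_RWRT` query by placing a `rwrt_feature_desc` directly after a `feature_header` in the reply buffer, setting the feature code and the `curr` bit. A Python sketch of such a reply (the field layout is simplified for illustration; the real `feature_header` and `rwrt_feature_desc` structs live in the kernel's cdrom headers):

```python
import struct

# Big-endian wire format, as MMC replies are. Header fields here are a
# data-length word, two reserved bytes and a current-profile field; the
# descriptor carries the feature code, a flags byte holding the curr bit,
# and an additional-length byte. Simplified relative to the real structs.

CDF_RWRT = 0x0020  # random-writable feature code

def rwrt_reply(curr=1):
    header = struct.pack(">IBBH", 8, 0, 0, 0)          # feature_header
    desc = struct.pack(">HBB", CDF_RWRT, curr & 1, 0)  # rwrt_feature_desc
    return header + desc

reply = rwrt_reply()
```

The driver's buffer-length check (`buflen >= sizeof(header) + sizeof(*rfd)`) guards exactly this layout: the descriptor begins at the first byte past the header.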
+24
drivers/cpufreq/Kconfig
··· 46 This will show detail CPU frequency translation table in sysfs file 47 system 48 49 choice 50 prompt "Default CPUFreq governor" 51 default CPU_FREQ_DEFAULT_GOV_USERSPACE if CPU_FREQ_SA1100 || CPU_FREQ_SA1110 ··· 114 The support for this governor depends on CPU capability to 115 do fast frequency switching (i.e, very low latency frequency 116 transitions). 117 118 For details, take a look at linux/Documentation/cpu-freq. 119
··· 46 This will show detail CPU frequency translation table in sysfs file 47 system 48 49 + # Note that it is not currently possible to set the other governors (such as ondemand) 50 + # as the default, since if they fail to initialise, cpufreq will be 51 + # left in an undefined state. 52 + 53 choice 54 prompt "Default CPUFreq governor" 55 default CPU_FREQ_DEFAULT_GOV_USERSPACE if CPU_FREQ_SA1100 || CPU_FREQ_SA1110 ··· 110 The support for this governor depends on CPU capability to 111 do fast frequency switching (i.e, very low latency frequency 112 transitions). 113 + 114 + For details, take a look at linux/Documentation/cpu-freq. 115 + 116 + If in doubt, say N. 117 + 118 + config CPU_FREQ_GOV_CONSERVATIVE 119 + tristate "'conservative' cpufreq governor" 120 + depends on CPU_FREQ 121 + help 122 + 'conservative' - this driver is rather similar to the 'ondemand' 123 + governor both in its source code and its purpose, the difference is 124 + its optimisation for better suitability in a battery powered 125 + environment. The frequency is gracefully increased and decreased 126 + rather than jumping to 100% when speed is required. 127 + 128 + If you have a desktop machine then you should really be considering 129 + the 'ondemand' governor instead, however if you are using a laptop, 130 + PDA or even an AMD64 based computer (due to the unacceptable 131 + step-by-step latency issues between the minimum and maximum frequency 132 + transitions in the CPU) you will probably want to use this governor. 133 134 For details, take a look at linux/Documentation/cpu-freq. 135
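The help text above contrasts the two governors: ondemand jumps straight to the maximum frequency when load rises, while conservative climbs in `freq_step`-percent increments. A user-space Python sketch of that difference (using the driver's default step of 5% of the maximum frequency):

```python
# Frequency-increase behaviour of the two governors, frequencies in kHz.

def ondemand_up(cur, fmax):
    return fmax                     # any increase goes straight to 100%

def conservative_up(cur, fmax, freq_step=5):
    step = fmax * freq_step // 100  # 5% of policy max by default
    return min(cur + step, fmax)    # gracefully increased, clamped at max

print(ondemand_up(1_000_000, 2_000_000))
print(conservative_up(1_000_000, 2_000_000))
```

The gradual ramp is what makes conservative preferable on battery-powered or high-switch-latency hardware, at the cost of reacting more slowly to load spikes.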
+1
drivers/cpufreq/Makefile
··· 8 obj-$(CONFIG_CPU_FREQ_GOV_POWERSAVE) += cpufreq_powersave.o 9 obj-$(CONFIG_CPU_FREQ_GOV_USERSPACE) += cpufreq_userspace.o 10 obj-$(CONFIG_CPU_FREQ_GOV_ONDEMAND) += cpufreq_ondemand.o 11 12 # CPUfreq cross-arch helpers 13 obj-$(CONFIG_CPU_FREQ_TABLE) += freq_table.o
··· 8 obj-$(CONFIG_CPU_FREQ_GOV_POWERSAVE) += cpufreq_powersave.o 9 obj-$(CONFIG_CPU_FREQ_GOV_USERSPACE) += cpufreq_userspace.o 10 obj-$(CONFIG_CPU_FREQ_GOV_ONDEMAND) += cpufreq_ondemand.o 11 + obj-$(CONFIG_CPU_FREQ_GOV_CONSERVATIVE) += cpufreq_conservative.o 12 13 # CPUfreq cross-arch helpers 14 obj-$(CONFIG_CPU_FREQ_TABLE) += freq_table.o
+4 -4
drivers/cpufreq/cpufreq.c
··· 258 (likely(cpufreq_cpu_data[freqs->cpu]->cur)) && 259 (unlikely(freqs->old != cpufreq_cpu_data[freqs->cpu]->cur))) 260 { 261 - printk(KERN_WARNING "Warning: CPU frequency is %u, " 262 "cpufreq assumed %u kHz.\n", freqs->old, cpufreq_cpu_data[freqs->cpu]->cur); 263 freqs->old = cpufreq_cpu_data[freqs->cpu]->cur; 264 } ··· 814 { 815 struct cpufreq_freqs freqs; 816 817 - printk(KERN_WARNING "Warning: CPU frequency out of sync: cpufreq and timing " 818 "core thinks of %u, is %u kHz.\n", old_freq, new_freq); 819 820 freqs.cpu = cpu; ··· 923 struct cpufreq_freqs freqs; 924 925 if (!(cpufreq_driver->flags & CPUFREQ_PM_NO_WARN)) 926 - printk(KERN_DEBUG "Warning: CPU frequency is %u, " 927 "cpufreq assumed %u kHz.\n", 928 cur_freq, cpu_policy->cur); 929 ··· 1004 struct cpufreq_freqs freqs; 1005 1006 if (!(cpufreq_driver->flags & CPUFREQ_PM_NO_WARN)) 1007 - printk(KERN_WARNING "Warning: CPU frequency" 1008 "is %u, cpufreq assumed %u kHz.\n", 1009 cur_freq, cpu_policy->cur); 1010
··· 258 (likely(cpufreq_cpu_data[freqs->cpu]->cur)) && 259 (unlikely(freqs->old != cpufreq_cpu_data[freqs->cpu]->cur))) 260 { 261 + dprintk(KERN_WARNING "Warning: CPU frequency is %u, " 262 "cpufreq assumed %u kHz.\n", freqs->old, cpufreq_cpu_data[freqs->cpu]->cur); 263 freqs->old = cpufreq_cpu_data[freqs->cpu]->cur; 264 } ··· 814 { 815 struct cpufreq_freqs freqs; 816 817 + dprintk(KERN_WARNING "Warning: CPU frequency out of sync: cpufreq and timing " 818 "core thinks of %u, is %u kHz.\n", old_freq, new_freq); 819 820 freqs.cpu = cpu; ··· 923 struct cpufreq_freqs freqs; 924 925 if (!(cpufreq_driver->flags & CPUFREQ_PM_NO_WARN)) 926 + dprintk(KERN_DEBUG "Warning: CPU frequency is %u, " 927 "cpufreq assumed %u kHz.\n", 928 cur_freq, cpu_policy->cur); 929 ··· 1004 struct cpufreq_freqs freqs; 1005 1006 if (!(cpufreq_driver->flags & CPUFREQ_PM_NO_WARN)) 1007 + dprintk(KERN_WARNING "Warning: CPU frequency" 1008 "is %u, cpufreq assumed %u kHz.\n", 1009 cur_freq, cpu_policy->cur); 1010
+586
drivers/cpufreq/cpufreq_conservative.c
···
··· 1 + /* 2 + * drivers/cpufreq/cpufreq_conservative.c 3 + * 4 + * Copyright (C) 2001 Russell King 5 + * (C) 2003 Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>. 6 + * Jun Nakajima <jun.nakajima@intel.com> 7 + * (C) 2004 Alexander Clouter <alex-kernel@digriz.org.uk> 8 + * 9 + * This program is free software; you can redistribute it and/or modify 10 + * it under the terms of the GNU General Public License version 2 as 11 + * published by the Free Software Foundation. 12 + */ 13 + 14 + #include <linux/kernel.h> 15 + #include <linux/module.h> 16 + #include <linux/smp.h> 17 + #include <linux/init.h> 18 + #include <linux/interrupt.h> 19 + #include <linux/ctype.h> 20 + #include <linux/cpufreq.h> 21 + #include <linux/sysctl.h> 22 + #include <linux/types.h> 23 + #include <linux/fs.h> 24 + #include <linux/sysfs.h> 25 + #include <linux/sched.h> 26 + #include <linux/kmod.h> 27 + #include <linux/workqueue.h> 28 + #include <linux/jiffies.h> 29 + #include <linux/kernel_stat.h> 30 + #include <linux/percpu.h> 31 + 32 + /* 33 + * dbs is used in this file as a shortform for demandbased switching 34 + * It helps to keep variable names smaller, simpler 35 + */ 36 + 37 + #define DEF_FREQUENCY_UP_THRESHOLD (80) 38 + #define MIN_FREQUENCY_UP_THRESHOLD (0) 39 + #define MAX_FREQUENCY_UP_THRESHOLD (100) 40 + 41 + #define DEF_FREQUENCY_DOWN_THRESHOLD (20) 42 + #define MIN_FREQUENCY_DOWN_THRESHOLD (0) 43 + #define MAX_FREQUENCY_DOWN_THRESHOLD (100) 44 + 45 + /* 46 + * The polling frequency of this governor depends on the capability of 47 + * the processor. Default polling frequency is 1000 times the transition 48 + * latency of the processor. The governor will work on any processor with 49 + * transition latency <= 10mS, using appropriate sampling 50 + * rate. 51 + * For CPUs with transition latency > 10mS (mostly drivers with CPUFREQ_ETERNAL) 52 + * this governor will not work. 53 + * All times here are in uS. 
54 + */ 55 + static unsigned int def_sampling_rate; 56 + #define MIN_SAMPLING_RATE (def_sampling_rate / 2) 57 + #define MAX_SAMPLING_RATE (500 * def_sampling_rate) 58 + #define DEF_SAMPLING_RATE_LATENCY_MULTIPLIER (100000) 59 + #define DEF_SAMPLING_DOWN_FACTOR (5) 60 + #define TRANSITION_LATENCY_LIMIT (10 * 1000) 61 + 62 + static void do_dbs_timer(void *data); 63 + 64 + struct cpu_dbs_info_s { 65 + struct cpufreq_policy *cur_policy; 66 + unsigned int prev_cpu_idle_up; 67 + unsigned int prev_cpu_idle_down; 68 + unsigned int enable; 69 + }; 70 + static DEFINE_PER_CPU(struct cpu_dbs_info_s, cpu_dbs_info); 71 + 72 + static unsigned int dbs_enable; /* number of CPUs using this policy */ 73 + 74 + static DECLARE_MUTEX (dbs_sem); 75 + static DECLARE_WORK (dbs_work, do_dbs_timer, NULL); 76 + 77 + struct dbs_tuners { 78 + unsigned int sampling_rate; 79 + unsigned int sampling_down_factor; 80 + unsigned int up_threshold; 81 + unsigned int down_threshold; 82 + unsigned int ignore_nice; 83 + unsigned int freq_step; 84 + }; 85 + 86 + static struct dbs_tuners dbs_tuners_ins = { 87 + .up_threshold = DEF_FREQUENCY_UP_THRESHOLD, 88 + .down_threshold = DEF_FREQUENCY_DOWN_THRESHOLD, 89 + .sampling_down_factor = DEF_SAMPLING_DOWN_FACTOR, 90 + }; 91 + 92 + static inline unsigned int get_cpu_idle_time(unsigned int cpu) 93 + { 94 + return kstat_cpu(cpu).cpustat.idle + 95 + kstat_cpu(cpu).cpustat.iowait + 96 + ( !dbs_tuners_ins.ignore_nice ? 
97 + kstat_cpu(cpu).cpustat.nice : 98 + 0); 99 + } 100 + 101 + /************************** sysfs interface ************************/ 102 + static ssize_t show_sampling_rate_max(struct cpufreq_policy *policy, char *buf) 103 + { 104 + return sprintf (buf, "%u\n", MAX_SAMPLING_RATE); 105 + } 106 + 107 + static ssize_t show_sampling_rate_min(struct cpufreq_policy *policy, char *buf) 108 + { 109 + return sprintf (buf, "%u\n", MIN_SAMPLING_RATE); 110 + } 111 + 112 + #define define_one_ro(_name) \ 113 + static struct freq_attr _name = \ 114 + __ATTR(_name, 0444, show_##_name, NULL) 115 + 116 + define_one_ro(sampling_rate_max); 117 + define_one_ro(sampling_rate_min); 118 + 119 + /* cpufreq_conservative Governor Tunables */ 120 + #define show_one(file_name, object) \ 121 + static ssize_t show_##file_name \ 122 + (struct cpufreq_policy *unused, char *buf) \ 123 + { \ 124 + return sprintf(buf, "%u\n", dbs_tuners_ins.object); \ 125 + } 126 + show_one(sampling_rate, sampling_rate); 127 + show_one(sampling_down_factor, sampling_down_factor); 128 + show_one(up_threshold, up_threshold); 129 + show_one(down_threshold, down_threshold); 130 + show_one(ignore_nice, ignore_nice); 131 + show_one(freq_step, freq_step); 132 + 133 + static ssize_t store_sampling_down_factor(struct cpufreq_policy *unused, 134 + const char *buf, size_t count) 135 + { 136 + unsigned int input; 137 + int ret; 138 + ret = sscanf (buf, "%u", &input); 139 + if (ret != 1 ) 140 + return -EINVAL; 141 + 142 + down(&dbs_sem); 143 + dbs_tuners_ins.sampling_down_factor = input; 144 + up(&dbs_sem); 145 + 146 + return count; 147 + } 148 + 149 + static ssize_t store_sampling_rate(struct cpufreq_policy *unused, 150 + const char *buf, size_t count) 151 + { 152 + unsigned int input; 153 + int ret; 154 + ret = sscanf (buf, "%u", &input); 155 + 156 + down(&dbs_sem); 157 + if (ret != 1 || input > MAX_SAMPLING_RATE || input < MIN_SAMPLING_RATE) { 158 + up(&dbs_sem); 159 + return -EINVAL; 160 + } 161 + 162 + 
dbs_tuners_ins.sampling_rate = input; 163 + up(&dbs_sem); 164 + 165 + return count; 166 + } 167 + 168 + static ssize_t store_up_threshold(struct cpufreq_policy *unused, 169 + const char *buf, size_t count) 170 + { 171 + unsigned int input; 172 + int ret; 173 + ret = sscanf (buf, "%u", &input); 174 + 175 + down(&dbs_sem); 176 + if (ret != 1 || input > MAX_FREQUENCY_UP_THRESHOLD || 177 + input < MIN_FREQUENCY_UP_THRESHOLD || 178 + input <= dbs_tuners_ins.down_threshold) { 179 + up(&dbs_sem); 180 + return -EINVAL; 181 + } 182 + 183 + dbs_tuners_ins.up_threshold = input; 184 + up(&dbs_sem); 185 + 186 + return count; 187 + } 188 + 189 + static ssize_t store_down_threshold(struct cpufreq_policy *unused, 190 + const char *buf, size_t count) 191 + { 192 + unsigned int input; 193 + int ret; 194 + ret = sscanf (buf, "%u", &input); 195 + 196 + down(&dbs_sem); 197 + if (ret != 1 || input > MAX_FREQUENCY_DOWN_THRESHOLD || 198 + input < MIN_FREQUENCY_DOWN_THRESHOLD || 199 + input >= dbs_tuners_ins.up_threshold) { 200 + up(&dbs_sem); 201 + return -EINVAL; 202 + } 203 + 204 + dbs_tuners_ins.down_threshold = input; 205 + up(&dbs_sem); 206 + 207 + return count; 208 + } 209 + 210 + static ssize_t store_ignore_nice(struct cpufreq_policy *policy, 211 + const char *buf, size_t count) 212 + { 213 + unsigned int input; 214 + int ret; 215 + 216 + unsigned int j; 217 + 218 + ret = sscanf (buf, "%u", &input); 219 + if ( ret != 1 ) 220 + return -EINVAL; 221 + 222 + if ( input > 1 ) 223 + input = 1; 224 + 225 + down(&dbs_sem); 226 + if ( input == dbs_tuners_ins.ignore_nice ) { /* nothing to do */ 227 + up(&dbs_sem); 228 + return count; 229 + } 230 + dbs_tuners_ins.ignore_nice = input; 231 + 232 + /* we need to re-evaluate prev_cpu_idle_up and prev_cpu_idle_down */ 233 + for_each_online_cpu(j) { 234 + struct cpu_dbs_info_s *j_dbs_info; 235 + j_dbs_info = &per_cpu(cpu_dbs_info, j); 236 + j_dbs_info->prev_cpu_idle_up = get_cpu_idle_time(j); 237 + j_dbs_info->prev_cpu_idle_down = 
j_dbs_info->prev_cpu_idle_up; 238 + } 239 + up(&dbs_sem); 240 + 241 + return count; 242 + } 243 + 244 + static ssize_t store_freq_step(struct cpufreq_policy *policy, 245 + const char *buf, size_t count) 246 + { 247 + unsigned int input; 248 + int ret; 249 + 250 + ret = sscanf (buf, "%u", &input); 251 + 252 + if ( ret != 1 ) 253 + return -EINVAL; 254 + 255 + if ( input > 100 ) 256 + input = 100; 257 + 258 + /* no need to test here if freq_step is zero as the user might actually 259 + * want this, they would be crazy though :) */ 260 + down(&dbs_sem); 261 + dbs_tuners_ins.freq_step = input; 262 + up(&dbs_sem); 263 + 264 + return count; 265 + } 266 + 267 + #define define_one_rw(_name) \ 268 + static struct freq_attr _name = \ 269 + __ATTR(_name, 0644, show_##_name, store_##_name) 270 + 271 + define_one_rw(sampling_rate); 272 + define_one_rw(sampling_down_factor); 273 + define_one_rw(up_threshold); 274 + define_one_rw(down_threshold); 275 + define_one_rw(ignore_nice); 276 + define_one_rw(freq_step); 277 + 278 + static struct attribute * dbs_attributes[] = { 279 + &sampling_rate_max.attr, 280 + &sampling_rate_min.attr, 281 + &sampling_rate.attr, 282 + &sampling_down_factor.attr, 283 + &up_threshold.attr, 284 + &down_threshold.attr, 285 + &ignore_nice.attr, 286 + &freq_step.attr, 287 + NULL 288 + }; 289 + 290 + static struct attribute_group dbs_attr_group = { 291 + .attrs = dbs_attributes, 292 + .name = "conservative", 293 + }; 294 + 295 + /************************** sysfs end ************************/ 296 + 297 + static void dbs_check_cpu(int cpu) 298 + { 299 + unsigned int idle_ticks, up_idle_ticks, down_idle_ticks; 300 + unsigned int freq_step; 301 + unsigned int freq_down_sampling_rate; 302 + static int down_skip[NR_CPUS]; 303 + static int requested_freq[NR_CPUS]; 304 + static unsigned short init_flag = 0; 305 + struct cpu_dbs_info_s *this_dbs_info; 306 + struct cpu_dbs_info_s *dbs_info; 307 + 308 + struct cpufreq_policy *policy; 309 + unsigned int j; 310 + 311 + 
this_dbs_info = &per_cpu(cpu_dbs_info, cpu); 312 + if (!this_dbs_info->enable) 313 + return; 314 + 315 + policy = this_dbs_info->cur_policy; 316 + 317 + if ( init_flag == 0 ) { 318 + for ( /* NULL */; init_flag < NR_CPUS; init_flag++ ) { 319 + dbs_info = &per_cpu(cpu_dbs_info, init_flag); 320 + requested_freq[cpu] = dbs_info->cur_policy->cur; 321 + } 322 + init_flag = 1; 323 + } 324 + 325 + /* 326 + * The default safe range is 20% to 80% 327 + * Every sampling_rate, we check 328 + * - If current idle time is less than 20%, then we try to 329 + * increase frequency 330 + * Every sampling_rate*sampling_down_factor, we check 331 + * - If current idle time is more than 80%, then we try to 332 + * decrease frequency 333 + * 334 + * Any frequency increase takes it to the maximum frequency. 335 + * Frequency reduction happens at minimum steps of 336 + * 5% (default) of max_frequency 337 + */ 338 + 339 + /* Check for frequency increase */ 340 + 341 + idle_ticks = UINT_MAX; 342 + for_each_cpu_mask(j, policy->cpus) { 343 + unsigned int tmp_idle_ticks, total_idle_ticks; 344 + struct cpu_dbs_info_s *j_dbs_info; 345 + 346 + j_dbs_info = &per_cpu(cpu_dbs_info, j); 347 + /* Check for frequency increase */ 348 + total_idle_ticks = get_cpu_idle_time(j); 349 + tmp_idle_ticks = total_idle_ticks - 350 + j_dbs_info->prev_cpu_idle_up; 351 + j_dbs_info->prev_cpu_idle_up = total_idle_ticks; 352 + 353 + if (tmp_idle_ticks < idle_ticks) 354 + idle_ticks = tmp_idle_ticks; 355 + } 356 + 357 + /* Scale idle ticks by 100 and compare with up and down ticks */ 358 + idle_ticks *= 100; 359 + up_idle_ticks = (100 - dbs_tuners_ins.up_threshold) * 360 + usecs_to_jiffies(dbs_tuners_ins.sampling_rate); 361 + 362 + if (idle_ticks < up_idle_ticks) { 363 + down_skip[cpu] = 0; 364 + for_each_cpu_mask(j, policy->cpus) { 365 + struct cpu_dbs_info_s *j_dbs_info; 366 + 367 + j_dbs_info = &per_cpu(cpu_dbs_info, j); 368 + j_dbs_info->prev_cpu_idle_down = 369 + j_dbs_info->prev_cpu_idle_up; 370 + } 371 + /* if we 
are already at full speed then break out early */ 372 + if (requested_freq[cpu] == policy->max) 373 + return; 374 + 375 + freq_step = (dbs_tuners_ins.freq_step * policy->max) / 100; 376 + 377 + /* max freq cannot be less than 100. But who knows.... */ 378 + if (unlikely(freq_step == 0)) 379 + freq_step = 5; 380 + 381 + requested_freq[cpu] += freq_step; 382 + if (requested_freq[cpu] > policy->max) 383 + requested_freq[cpu] = policy->max; 384 + 385 + __cpufreq_driver_target(policy, requested_freq[cpu], 386 + CPUFREQ_RELATION_H); 387 + return; 388 + } 389 + 390 + /* Check for frequency decrease */ 391 + down_skip[cpu]++; 392 + if (down_skip[cpu] < dbs_tuners_ins.sampling_down_factor) 393 + return; 394 + 395 + idle_ticks = UINT_MAX; 396 + for_each_cpu_mask(j, policy->cpus) { 397 + unsigned int tmp_idle_ticks, total_idle_ticks; 398 + struct cpu_dbs_info_s *j_dbs_info; 399 + 400 + j_dbs_info = &per_cpu(cpu_dbs_info, j); 401 + total_idle_ticks = j_dbs_info->prev_cpu_idle_up; 402 + tmp_idle_ticks = total_idle_ticks - 403 + j_dbs_info->prev_cpu_idle_down; 404 + j_dbs_info->prev_cpu_idle_down = total_idle_ticks; 405 + 406 + if (tmp_idle_ticks < idle_ticks) 407 + idle_ticks = tmp_idle_ticks; 408 + } 409 + 410 + /* Scale idle ticks by 100 and compare with up and down ticks */ 411 + idle_ticks *= 100; 412 + down_skip[cpu] = 0; 413 + 414 + freq_down_sampling_rate = dbs_tuners_ins.sampling_rate * 415 + dbs_tuners_ins.sampling_down_factor; 416 + down_idle_ticks = (100 - dbs_tuners_ins.down_threshold) * 417 + usecs_to_jiffies(freq_down_sampling_rate); 418 + 419 + if (idle_ticks > down_idle_ticks) { 420 + /* if we are already at the lowest speed then break out early 421 + * or if we 'cannot' reduce the speed as the user might want 422 + * freq_step to be zero */ 423 + if (requested_freq[cpu] == policy->min 424 + || dbs_tuners_ins.freq_step == 0) 425 + return; 426 + 427 + freq_step = (dbs_tuners_ins.freq_step * policy->max) / 100; 428 + 429 + /* max freq cannot be less than 100. 
But who knows.... */ 430 + if (unlikely(freq_step == 0)) 431 + freq_step = 5; 432 + 433 + requested_freq[cpu] -= freq_step; 434 + if (requested_freq[cpu] < policy->min) 435 + requested_freq[cpu] = policy->min; 436 + 437 + __cpufreq_driver_target(policy, 438 + requested_freq[cpu], 439 + CPUFREQ_RELATION_H); 440 + return; 441 + } 442 + } 443 + 444 + static void do_dbs_timer(void *data) 445 + { 446 + int i; 447 + down(&dbs_sem); 448 + for_each_online_cpu(i) 449 + dbs_check_cpu(i); 450 + schedule_delayed_work(&dbs_work, 451 + usecs_to_jiffies(dbs_tuners_ins.sampling_rate)); 452 + up(&dbs_sem); 453 + } 454 + 455 + static inline void dbs_timer_init(void) 456 + { 457 + INIT_WORK(&dbs_work, do_dbs_timer, NULL); 458 + schedule_delayed_work(&dbs_work, 459 + usecs_to_jiffies(dbs_tuners_ins.sampling_rate)); 460 + return; 461 + } 462 + 463 + static inline void dbs_timer_exit(void) 464 + { 465 + cancel_delayed_work(&dbs_work); 466 + return; 467 + } 468 + 469 + static int cpufreq_governor_dbs(struct cpufreq_policy *policy, 470 + unsigned int event) 471 + { 472 + unsigned int cpu = policy->cpu; 473 + struct cpu_dbs_info_s *this_dbs_info; 474 + unsigned int j; 475 + 476 + this_dbs_info = &per_cpu(cpu_dbs_info, cpu); 477 + 478 + switch (event) { 479 + case CPUFREQ_GOV_START: 480 + if ((!cpu_online(cpu)) || 481 + (!policy->cur)) 482 + return -EINVAL; 483 + 484 + if (policy->cpuinfo.transition_latency > 485 + (TRANSITION_LATENCY_LIMIT * 1000)) 486 + return -EINVAL; 487 + if (this_dbs_info->enable) /* Already enabled */ 488 + break; 489 + 490 + down(&dbs_sem); 491 + for_each_cpu_mask(j, policy->cpus) { 492 + struct cpu_dbs_info_s *j_dbs_info; 493 + j_dbs_info = &per_cpu(cpu_dbs_info, j); 494 + j_dbs_info->cur_policy = policy; 495 + 496 + j_dbs_info->prev_cpu_idle_up = get_cpu_idle_time(j); 497 + j_dbs_info->prev_cpu_idle_down 498 + = j_dbs_info->prev_cpu_idle_up; 499 + } 500 + this_dbs_info->enable = 1; 501 + sysfs_create_group(&policy->kobj, &dbs_attr_group); 502 + dbs_enable++; 503 + 
/* 504 + * Start the timerschedule work, when this governor 505 + * is used for first time 506 + */ 507 + if (dbs_enable == 1) { 508 + unsigned int latency; 509 + /* policy latency is in nS. Convert it to uS first */ 510 + 511 + latency = policy->cpuinfo.transition_latency; 512 + if (latency < 1000) 513 + latency = 1000; 514 + 515 + def_sampling_rate = (latency / 1000) * 516 + DEF_SAMPLING_RATE_LATENCY_MULTIPLIER; 517 + dbs_tuners_ins.sampling_rate = def_sampling_rate; 518 + dbs_tuners_ins.ignore_nice = 0; 519 + dbs_tuners_ins.freq_step = 5; 520 + 521 + dbs_timer_init(); 522 + } 523 + 524 + up(&dbs_sem); 525 + break; 526 + 527 + case CPUFREQ_GOV_STOP: 528 + down(&dbs_sem); 529 + this_dbs_info->enable = 0; 530 + sysfs_remove_group(&policy->kobj, &dbs_attr_group); 531 + dbs_enable--; 532 + /* 533 + * Stop the timerschedule work, when this governor 534 + * is used for first time 535 + */ 536 + if (dbs_enable == 0) 537 + dbs_timer_exit(); 538 + 539 + up(&dbs_sem); 540 + 541 + break; 542 + 543 + case CPUFREQ_GOV_LIMITS: 544 + down(&dbs_sem); 545 + if (policy->max < this_dbs_info->cur_policy->cur) 546 + __cpufreq_driver_target( 547 + this_dbs_info->cur_policy, 548 + policy->max, CPUFREQ_RELATION_H); 549 + else if (policy->min > this_dbs_info->cur_policy->cur) 550 + __cpufreq_driver_target( 551 + this_dbs_info->cur_policy, 552 + policy->min, CPUFREQ_RELATION_L); 553 + up(&dbs_sem); 554 + break; 555 + } 556 + return 0; 557 + } 558 + 559 + static struct cpufreq_governor cpufreq_gov_dbs = { 560 + .name = "conservative", 561 + .governor = cpufreq_governor_dbs, 562 + .owner = THIS_MODULE, 563 + }; 564 + 565 + static int __init cpufreq_gov_dbs_init(void) 566 + { 567 + return cpufreq_register_governor(&cpufreq_gov_dbs); 568 + } 569 + 570 + static void __exit cpufreq_gov_dbs_exit(void) 571 + { 572 + /* Make sure that the scheduled work is indeed not running */ 573 + flush_scheduled_work(); 574 + 575 + cpufreq_unregister_governor(&cpufreq_gov_dbs); 576 + } 577 + 578 +
579 + MODULE_AUTHOR ("Alexander Clouter <alex-kernel@digriz.org.uk>"); 580 + MODULE_DESCRIPTION ("'cpufreq_conservative' - A dynamic cpufreq governor for " 581 + "Low Latency Frequency Transition capable processors " 582 + "optimised for use in a battery environment"); 583 + MODULE_LICENSE ("GPL"); 584 + 585 + module_init(cpufreq_gov_dbs_init); 586 + module_exit(cpufreq_gov_dbs_exit);
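The sampling loop in `dbs_check_cpu()` above raises the frequency when the idle time seen in one window drops below `(100 - up_threshold)%`, and lowers it (every `sampling_down_factor` windows) when idle time exceeds `(100 - down_threshold)%`. A user-space Python sketch of that decision rule, using the driver's 80/20 defaults and a single window for simplicity (the real code keeps a separate, longer window for the down check):

```python
# Scale idle ticks by 100 and compare against threshold * window, exactly
# as the kernel code does with up_idle_ticks / down_idle_ticks.

UP_THRESHOLD = 80    # raise frequency when idle time falls below 20%
DOWN_THRESHOLD = 20  # lower frequency when idle time exceeds 80%

def decide(idle_ticks, window_ticks):
    """Return 'up', 'down' or 'hold' for one sampling window."""
    idle_scaled = idle_ticks * 100
    if idle_scaled < (100 - UP_THRESHOLD) * window_ticks:
        return 'up'
    if idle_scaled > (100 - DOWN_THRESHOLD) * window_ticks:
        return 'down'
    return 'hold'

print(decide(10, 100), decide(90, 100), decide(50, 100))
```

Between the two thresholds the governor leaves the frequency alone, which is what keeps it from oscillating on moderate load.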
+94 -92
drivers/cpufreq/cpufreq_ondemand.c
··· 34 */ 35 36 #define DEF_FREQUENCY_UP_THRESHOLD (80) 37 - #define MIN_FREQUENCY_UP_THRESHOLD (0) 38 #define MAX_FREQUENCY_UP_THRESHOLD (100) 39 - 40 - #define DEF_FREQUENCY_DOWN_THRESHOLD (20) 41 - #define MIN_FREQUENCY_DOWN_THRESHOLD (0) 42 - #define MAX_FREQUENCY_DOWN_THRESHOLD (100) 43 44 /* 45 * The polling frequency of this governor depends on the capability of ··· 51 #define MIN_SAMPLING_RATE (def_sampling_rate / 2) 52 #define MAX_SAMPLING_RATE (500 * def_sampling_rate) 53 #define DEF_SAMPLING_RATE_LATENCY_MULTIPLIER (1000) 54 - #define DEF_SAMPLING_DOWN_FACTOR (10) 55 #define TRANSITION_LATENCY_LIMIT (10 * 1000) 56 - #define sampling_rate_in_HZ(x) (((x * HZ) < (1000 * 1000))?1:((x * HZ) / (1000 * 1000))) 57 58 static void do_dbs_timer(void *data); 59 ··· 74 unsigned int sampling_rate; 75 unsigned int sampling_down_factor; 76 unsigned int up_threshold; 77 - unsigned int down_threshold; 78 }; 79 80 static struct dbs_tuners dbs_tuners_ins = { 81 .up_threshold = DEF_FREQUENCY_UP_THRESHOLD, 82 - .down_threshold = DEF_FREQUENCY_DOWN_THRESHOLD, 83 .sampling_down_factor = DEF_SAMPLING_DOWN_FACTOR, 84 }; 85 86 /************************** sysfs interface ************************/ 87 static ssize_t show_sampling_rate_max(struct cpufreq_policy *policy, char *buf) ··· 119 show_one(sampling_rate, sampling_rate); 120 show_one(sampling_down_factor, sampling_down_factor); 121 show_one(up_threshold, up_threshold); 122 - show_one(down_threshold, down_threshold); 123 124 static ssize_t store_sampling_down_factor(struct cpufreq_policy *unused, 125 const char *buf, size_t count) ··· 128 int ret; 129 ret = sscanf (buf, "%u", &input); 130 if (ret != 1 ) 131 return -EINVAL; 132 133 down(&dbs_sem); ··· 168 169 down(&dbs_sem); 170 if (ret != 1 || input > MAX_FREQUENCY_UP_THRESHOLD || 171 - input < MIN_FREQUENCY_UP_THRESHOLD || 172 - input <= dbs_tuners_ins.down_threshold) { 173 up(&dbs_sem); 174 return -EINVAL; 175 } ··· 179 return count; 180 } 181
182 - static ssize_t store_down_threshold(struct cpufreq_policy *unused, 183 const char *buf, size_t count) 184 { 185 unsigned int input; 186 int ret; 187 ret = sscanf (buf, "%u", &input); 188 - 189 - down(&dbs_sem); 190 - if (ret != 1 || input > MAX_FREQUENCY_DOWN_THRESHOLD || 191 - input < MIN_FREQUENCY_DOWN_THRESHOLD || 192 - input >= dbs_tuners_ins.up_threshold) { 193 - up(&dbs_sem); 194 return -EINVAL; 195 - } 196 197 - dbs_tuners_ins.down_threshold = input; 198 up(&dbs_sem); 199 200 return count; ··· 220 define_one_rw(sampling_rate); 221 define_one_rw(sampling_down_factor); 222 define_one_rw(up_threshold); 223 - define_one_rw(down_threshold); 224 225 static struct attribute * dbs_attributes[] = { 226 &sampling_rate_max.attr, ··· 228 &sampling_rate.attr, 229 &sampling_down_factor.attr, 230 &up_threshold.attr, 231 - &down_threshold.attr, 232 NULL 233 }; 234 ··· 241 242 static void dbs_check_cpu(int cpu) 243 { 244 - unsigned int idle_ticks, up_idle_ticks, down_idle_ticks; 245 - unsigned int total_idle_ticks; 246 - unsigned int freq_down_step; 247 unsigned int freq_down_sampling_rate; 248 static int down_skip[NR_CPUS]; 249 struct cpu_dbs_info_s *this_dbs_info; ··· 256 257 policy = this_dbs_info->cur_policy; 258 /* 259 - * The default safe range is 20% to 80% 260 - * Every sampling_rate, we check 261 - * - If current idle time is less than 20%, then we try to 262 - * increase frequency 263 - * Every sampling_rate*sampling_down_factor, we check 264 - * - If current idle time is more than 80%, then we try to 265 - * decrease frequency 266 * 267 * Any frequency increase takes it to the maximum frequency.
268 * Frequency reduction happens at minimum steps of 269 - * 5% of max_frequency 270 */ 271 272 /* Check for frequency increase */ 273 - total_idle_ticks = kstat_cpu(cpu).cpustat.idle + 274 - kstat_cpu(cpu).cpustat.iowait; 275 - idle_ticks = total_idle_ticks - 276 - this_dbs_info->prev_cpu_idle_up; 277 - this_dbs_info->prev_cpu_idle_up = total_idle_ticks; 278 - 279 - 280 for_each_cpu_mask(j, policy->cpus) { 281 - unsigned int tmp_idle_ticks; 282 struct cpu_dbs_info_s *j_dbs_info; 283 284 - if (j == cpu) 285 - continue; 286 - 287 j_dbs_info = &per_cpu(cpu_dbs_info, j); 288 - /* Check for frequency increase */ 289 - total_idle_ticks = kstat_cpu(j).cpustat.idle + 290 - kstat_cpu(j).cpustat.iowait; 291 tmp_idle_ticks = total_idle_ticks - 292 j_dbs_info->prev_cpu_idle_up; 293 j_dbs_info->prev_cpu_idle_up = total_idle_ticks; ··· 286 /* Scale idle ticks by 100 and compare with up and down ticks */ 287 idle_ticks *= 100; 288 up_idle_ticks = (100 - dbs_tuners_ins.up_threshold) * 289 - sampling_rate_in_HZ(dbs_tuners_ins.sampling_rate); 290 291 if (idle_ticks < up_idle_ticks) { 292 __cpufreq_driver_target(policy, policy->max, 293 CPUFREQ_RELATION_H); 294 - down_skip[cpu] = 0; 295 - this_dbs_info->prev_cpu_idle_down = total_idle_ticks; 296 return; 297 } 298 ··· 311 if (down_skip[cpu] < dbs_tuners_ins.sampling_down_factor) 312 return; 313 314 - total_idle_ticks = kstat_cpu(cpu).cpustat.idle + 315 - kstat_cpu(cpu).cpustat.iowait; 316 - idle_ticks = total_idle_ticks - 317 - this_dbs_info->prev_cpu_idle_down; 318 - this_dbs_info->prev_cpu_idle_down = total_idle_ticks; 319 - 320 for_each_cpu_mask(j, policy->cpus) { 321 - unsigned int tmp_idle_ticks; 322 struct cpu_dbs_info_s *j_dbs_info; 323 324 - if (j == cpu) 325 - continue; 326 - 327 j_dbs_info = &per_cpu(cpu_dbs_info, j); 328 - /* Check for frequency increase */ 329 - total_idle_ticks = kstat_cpu(j).cpustat.idle + 330 - kstat_cpu(j).cpustat.iowait; 331 tmp_idle_ticks = total_idle_ticks - 332 j_dbs_info->prev_cpu_idle_down; 333 
j_dbs_info->prev_cpu_idle_down = total_idle_ticks; ··· 327 idle_ticks = tmp_idle_ticks; 328 } 329 330 - /* Scale idle ticks by 100 and compare with up and down ticks */ 331 - idle_ticks *= 100; 332 down_skip[cpu] = 0; 333 334 freq_down_sampling_rate = dbs_tuners_ins.sampling_rate * 335 dbs_tuners_ins.sampling_down_factor; 336 - down_idle_ticks = (100 - dbs_tuners_ins.down_threshold) * 337 - sampling_rate_in_HZ(freq_down_sampling_rate); 338 339 - if (idle_ticks > down_idle_ticks ) { 340 - freq_down_step = (5 * policy->max) / 100; 341 342 - /* max freq cannot be less than 100. But who knows.... */ 343 - if (unlikely(freq_down_step == 0)) 344 - freq_down_step = 5; 345 - 346 - __cpufreq_driver_target(policy, 347 - policy->cur - freq_down_step, 348 - CPUFREQ_RELATION_H); 349 - return; 350 - } 351 } 352 353 static void do_dbs_timer(void *data) 354 { 355 int i; 356 down(&dbs_sem); 357 - for (i = 0; i < NR_CPUS; i++) 358 - if (cpu_online(i)) 359 - dbs_check_cpu(i); 360 schedule_delayed_work(&dbs_work, 361 - sampling_rate_in_HZ(dbs_tuners_ins.sampling_rate)); 362 up(&dbs_sem); 363 } 364 ··· 365 { 366 INIT_WORK(&dbs_work, do_dbs_timer, NULL); 367 schedule_delayed_work(&dbs_work, 368 - sampling_rate_in_HZ(dbs_tuners_ins.sampling_rate)); 369 return; 370 } 371 ··· 402 j_dbs_info = &per_cpu(cpu_dbs_info, j); 403 j_dbs_info->cur_policy = policy; 404 405 - j_dbs_info->prev_cpu_idle_up = 406 - kstat_cpu(j).cpustat.idle + 407 - kstat_cpu(j).cpustat.iowait; 408 - j_dbs_info->prev_cpu_idle_down = 409 - kstat_cpu(j).cpustat.idle + 410 - kstat_cpu(j).cpustat.iowait; 411 } 412 this_dbs_info->enable = 1; 413 sysfs_create_group(&policy->kobj, &dbs_attr_group); ··· 424 def_sampling_rate = (latency / 1000) * 425 DEF_SAMPLING_RATE_LATENCY_MULTIPLIER; 426 dbs_tuners_ins.sampling_rate = def_sampling_rate; 427 428 dbs_timer_init(); 429 } ··· 464 return 0; 465 } 466 467 - struct cpufreq_governor cpufreq_gov_dbs = { 468 .name = "ondemand", 469 .governor = cpufreq_governor_dbs, 470 .owner = 
THIS_MODULE, 471 }; 472 - EXPORT_SYMBOL(cpufreq_gov_dbs); 473 474 static int __init cpufreq_gov_dbs_init(void) 475 {
··· 34 */ 35 36 #define DEF_FREQUENCY_UP_THRESHOLD (80) 37 + #define MIN_FREQUENCY_UP_THRESHOLD (11) 38 #define MAX_FREQUENCY_UP_THRESHOLD (100) 39 40 /* 41 * The polling frequency of this governor depends on the capability of ··· 55 #define MIN_SAMPLING_RATE (def_sampling_rate / 2) 56 #define MAX_SAMPLING_RATE (500 * def_sampling_rate) 57 #define DEF_SAMPLING_RATE_LATENCY_MULTIPLIER (1000) 58 + #define DEF_SAMPLING_DOWN_FACTOR (1) 59 + #define MAX_SAMPLING_DOWN_FACTOR (10) 60 #define TRANSITION_LATENCY_LIMIT (10 * 1000) 61 62 static void do_dbs_timer(void *data); 63 ··· 78 unsigned int sampling_rate; 79 unsigned int sampling_down_factor; 80 unsigned int up_threshold; 81 + unsigned int ignore_nice; 82 }; 83 84 static struct dbs_tuners dbs_tuners_ins = { 85 .up_threshold = DEF_FREQUENCY_UP_THRESHOLD, 86 .sampling_down_factor = DEF_SAMPLING_DOWN_FACTOR, 87 }; 88 + 89 + static inline unsigned int get_cpu_idle_time(unsigned int cpu) 90 + { 91 + return kstat_cpu(cpu).cpustat.idle + 92 + kstat_cpu(cpu).cpustat.iowait + 93 + ( !dbs_tuners_ins.ignore_nice ? 
94 + kstat_cpu(cpu).cpustat.nice : 95 + 0); 96 + } 97 98 /************************** sysfs interface ************************/ 99 static ssize_t show_sampling_rate_max(struct cpufreq_policy *policy, char *buf) ··· 115 show_one(sampling_rate, sampling_rate); 116 show_one(sampling_down_factor, sampling_down_factor); 117 show_one(up_threshold, up_threshold); 118 + show_one(ignore_nice, ignore_nice); 119 120 static ssize_t store_sampling_down_factor(struct cpufreq_policy *unused, 121 const char *buf, size_t count) ··· 124 int ret; 125 ret = sscanf (buf, "%u", &input); 126 if (ret != 1 ) 127 + return -EINVAL; 128 + 129 + if (input > MAX_SAMPLING_DOWN_FACTOR || input < 1) 130 return -EINVAL; 131 132 down(&dbs_sem); ··· 161 162 down(&dbs_sem); 163 if (ret != 1 || input > MAX_FREQUENCY_UP_THRESHOLD || 164 + input < MIN_FREQUENCY_UP_THRESHOLD) { 165 up(&dbs_sem); 166 return -EINVAL; 167 } ··· 173 return count; 174 } 175 176 + static ssize_t store_ignore_nice(struct cpufreq_policy *policy, 177 const char *buf, size_t count) 178 { 179 unsigned int input; 180 int ret; 181 + 182 + unsigned int j; 183 + 184 ret = sscanf (buf, "%u", &input); 185 + if ( ret != 1 ) 186 return -EINVAL; 187 188 + if ( input > 1 ) 189 + input = 1; 190 + 191 + down(&dbs_sem); 192 + if ( input == dbs_tuners_ins.ignore_nice ) { /* nothing to do */ 193 + up(&dbs_sem); 194 + return count; 195 + } 196 + dbs_tuners_ins.ignore_nice = input; 197 + 198 + /* we need to re-evaluate prev_cpu_idle_up and prev_cpu_idle_down */ 199 + for_each_online_cpu(j) { 200 + struct cpu_dbs_info_s *j_dbs_info; 201 + j_dbs_info = &per_cpu(cpu_dbs_info, j); 202 + j_dbs_info->prev_cpu_idle_up = get_cpu_idle_time(j); 203 + j_dbs_info->prev_cpu_idle_down = j_dbs_info->prev_cpu_idle_up; 204 + } 205 up(&dbs_sem); 206 207 return count; ··· 201 define_one_rw(sampling_rate); 202 define_one_rw(sampling_down_factor); 203 define_one_rw(up_threshold); 204 + define_one_rw(ignore_nice); 205 206 static struct attribute * dbs_attributes[] = { 207 
&sampling_rate_max.attr, ··· 209 &sampling_rate.attr, 210 &sampling_down_factor.attr, 211 &up_threshold.attr, 212 + &ignore_nice.attr, 213 NULL 214 }; 215 ··· 222 223 static void dbs_check_cpu(int cpu) 224 { 225 + unsigned int idle_ticks, up_idle_ticks, total_ticks; 226 + unsigned int freq_next; 227 unsigned int freq_down_sampling_rate; 228 static int down_skip[NR_CPUS]; 229 struct cpu_dbs_info_s *this_dbs_info; ··· 238 239 policy = this_dbs_info->cur_policy; 240 /* 241 + * Every sampling_rate, we check, if current idle time is less 242 + * than 20% (default), then we try to increase frequency 243 + * Every sampling_rate*sampling_down_factor, we look for a the lowest 244 + * frequency which can sustain the load while keeping idle time over 245 + * 30%. If such a frequency exist, we try to decrease to this frequency. 246 * 247 * Any frequency increase takes it to the maximum frequency. 248 * Frequency reduction happens at minimum steps of 249 + * 5% (default) of current frequency 250 */ 251 252 /* Check for frequency increase */ 253 + idle_ticks = UINT_MAX; 254 for_each_cpu_mask(j, policy->cpus) { 255 + unsigned int tmp_idle_ticks, total_idle_ticks; 256 struct cpu_dbs_info_s *j_dbs_info; 257 258 j_dbs_info = &per_cpu(cpu_dbs_info, j); 259 + total_idle_ticks = get_cpu_idle_time(j); 260 tmp_idle_ticks = total_idle_ticks - 261 j_dbs_info->prev_cpu_idle_up; 262 j_dbs_info->prev_cpu_idle_up = total_idle_ticks; ··· 281 /* Scale idle ticks by 100 and compare with up and down ticks */ 282 idle_ticks *= 100; 283 up_idle_ticks = (100 - dbs_tuners_ins.up_threshold) * 284 + usecs_to_jiffies(dbs_tuners_ins.sampling_rate); 285 286 if (idle_ticks < up_idle_ticks) { 287 + down_skip[cpu] = 0; 288 + for_each_cpu_mask(j, policy->cpus) { 289 + struct cpu_dbs_info_s *j_dbs_info; 290 + 291 + j_dbs_info = &per_cpu(cpu_dbs_info, j); 292 + j_dbs_info->prev_cpu_idle_down = 293 + j_dbs_info->prev_cpu_idle_up; 294 + } 295 + /* if we are already at full speed then break out early */ 296 + if 
(policy->cur == policy->max) 297 + return; 298 + 299 __cpufreq_driver_target(policy, policy->max, 300 CPUFREQ_RELATION_H); 301 return; 302 } 303 ··· 296 if (down_skip[cpu] < dbs_tuners_ins.sampling_down_factor) 297 return; 298 299 + idle_ticks = UINT_MAX; 300 for_each_cpu_mask(j, policy->cpus) { 301 + unsigned int tmp_idle_ticks, total_idle_ticks; 302 struct cpu_dbs_info_s *j_dbs_info; 303 304 j_dbs_info = &per_cpu(cpu_dbs_info, j); 305 + /* Check for frequency decrease */ 306 + total_idle_ticks = j_dbs_info->prev_cpu_idle_up; 307 tmp_idle_ticks = total_idle_ticks - 308 j_dbs_info->prev_cpu_idle_down; 309 j_dbs_info->prev_cpu_idle_down = total_idle_ticks; ··· 321 idle_ticks = tmp_idle_ticks; 322 } 323 324 down_skip[cpu] = 0; 325 + /* if we cannot reduce the frequency anymore, break out early */ 326 + if (policy->cur == policy->min) 327 + return; 328 329 + /* Compute how many ticks there are between two measurements */ 330 freq_down_sampling_rate = dbs_tuners_ins.sampling_rate * 331 dbs_tuners_ins.sampling_down_factor; 332 + total_ticks = usecs_to_jiffies(freq_down_sampling_rate); 333 334 + /* 335 + * The optimal frequency is the frequency that is the lowest that 336 + * can support the current CPU usage without triggering the up 337 + * policy. To be safe, we focus 10 points under the threshold. 
338 + */ 339 + freq_next = ((total_ticks - idle_ticks) * 100) / total_ticks; 340 + freq_next = (freq_next * policy->cur) / 341 + (dbs_tuners_ins.up_threshold - 10); 342 343 + if (freq_next <= ((policy->cur * 95) / 100)) 344 + __cpufreq_driver_target(policy, freq_next, CPUFREQ_RELATION_L); 345 } 346 347 static void do_dbs_timer(void *data) 348 { 349 int i; 350 down(&dbs_sem); 351 + for_each_online_cpu(i) 352 + dbs_check_cpu(i); 353 schedule_delayed_work(&dbs_work, 354 + usecs_to_jiffies(dbs_tuners_ins.sampling_rate)); 355 up(&dbs_sem); 356 } 357 ··· 360 { 361 INIT_WORK(&dbs_work, do_dbs_timer, NULL); 362 schedule_delayed_work(&dbs_work, 363 + usecs_to_jiffies(dbs_tuners_ins.sampling_rate)); 364 return; 365 } 366 ··· 397 j_dbs_info = &per_cpu(cpu_dbs_info, j); 398 j_dbs_info->cur_policy = policy; 399 400 + j_dbs_info->prev_cpu_idle_up = get_cpu_idle_time(j); 401 + j_dbs_info->prev_cpu_idle_down 402 + = j_dbs_info->prev_cpu_idle_up; 403 } 404 this_dbs_info->enable = 1; 405 sysfs_create_group(&policy->kobj, &dbs_attr_group); ··· 422 def_sampling_rate = (latency / 1000) * 423 DEF_SAMPLING_RATE_LATENCY_MULTIPLIER; 424 dbs_tuners_ins.sampling_rate = def_sampling_rate; 425 + dbs_tuners_ins.ignore_nice = 0; 426 427 dbs_timer_init(); 428 } ··· 461 return 0; 462 } 463 464 + static struct cpufreq_governor cpufreq_gov_dbs = { 465 .name = "ondemand", 466 .governor = cpufreq_governor_dbs, 467 .owner = THIS_MODULE, 468 }; 469 470 static int __init cpufreq_gov_dbs_init(void) 471 {
+30 -17
drivers/cpufreq/cpufreq_stats.c
··· 19 #include <linux/percpu.h> 20 #include <linux/kobject.h> 21 #include <linux/spinlock.h> 22 23 static spinlock_t cpufreq_stats_lock; 24 ··· 30 .show = _show,\ 31 }; 32 33 - static unsigned long 34 - delta_time(unsigned long old, unsigned long new) 35 - { 36 - return (old > new) ? (old - new): (new + ~old + 1); 37 - } 38 - 39 struct cpufreq_stats { 40 unsigned int cpu; 41 unsigned int total_trans; 42 - unsigned long long last_time; 43 unsigned int max_state; 44 unsigned int state_num; 45 unsigned int last_index; 46 - unsigned long long *time_in_state; 47 unsigned int *freq_table; 48 #ifdef CONFIG_CPU_FREQ_STAT_DETAILS 49 unsigned int *trans_table; ··· 55 cpufreq_stats_update (unsigned int cpu) 56 { 57 struct cpufreq_stats *stat; 58 spin_lock(&cpufreq_stats_lock); 59 stat = cpufreq_stats_table[cpu]; 60 if (stat->time_in_state) 61 - stat->time_in_state[stat->last_index] += 62 - delta_time(stat->last_time, jiffies); 63 - stat->last_time = jiffies; 64 spin_unlock(&cpufreq_stats_lock); 65 return 0; 66 } ··· 89 return 0; 90 cpufreq_stats_update(stat->cpu); 91 for (i = 0; i < stat->state_num; i++) { 92 - len += sprintf(buf + len, "%u %llu\n", 93 - stat->freq_table[i], stat->time_in_state[i]); 94 } 95 return len; 96 } ··· 106 if(!stat) 107 return 0; 108 cpufreq_stats_update(stat->cpu); 109 for (i = 0; i < stat->state_num; i++) { 110 if (len >= PAGE_SIZE) 111 break; 112 - len += snprintf(buf + len, PAGE_SIZE - len, "%9u:\t", 113 stat->freq_table[i]); 114 115 for (j = 0; j < stat->state_num; j++) { 116 if (len >= PAGE_SIZE) 117 break; 118 - len += snprintf(buf + len, PAGE_SIZE - len, "%u\t", 119 stat->trans_table[i*stat->max_state+j]); 120 } 121 len += snprintf(buf + len, PAGE_SIZE - len, "\n"); ··· 210 count++; 211 } 212 213 - alloc_size = count * sizeof(int) + count * sizeof(long long); 214 215 #ifdef CONFIG_CPU_FREQ_STAT_DETAILS 216 alloc_size += count * count * sizeof(int); ··· 237 } 238 stat->state_num = j; 239 spin_lock(&cpufreq_stats_lock); 240 - stat->last_time = 
jiffies; 241 stat->last_index = freq_table_get_index(stat, policy->cur); 242 spin_unlock(&cpufreq_stats_lock); 243 cpufreq_cpu_put(data);
··· 19 #include <linux/percpu.h> 20 #include <linux/kobject.h> 21 #include <linux/spinlock.h> 22 + #include <asm/cputime.h> 23 24 static spinlock_t cpufreq_stats_lock; 25 ··· 29 .show = _show,\ 30 }; 31 32 struct cpufreq_stats { 33 unsigned int cpu; 34 unsigned int total_trans; 35 + unsigned long long last_time; 36 unsigned int max_state; 37 unsigned int state_num; 38 unsigned int last_index; 39 + cputime64_t *time_in_state; 40 unsigned int *freq_table; 41 #ifdef CONFIG_CPU_FREQ_STAT_DETAILS 42 unsigned int *trans_table; ··· 60 cpufreq_stats_update (unsigned int cpu) 61 { 62 struct cpufreq_stats *stat; 63 + unsigned long long cur_time; 64 + 65 + cur_time = get_jiffies_64(); 66 spin_lock(&cpufreq_stats_lock); 67 stat = cpufreq_stats_table[cpu]; 68 if (stat->time_in_state) 69 + stat->time_in_state[stat->last_index] = 70 + cputime64_add(stat->time_in_state[stat->last_index], 71 + cputime_sub(cur_time, stat->last_time)); 72 + stat->last_time = cur_time; 73 spin_unlock(&cpufreq_stats_lock); 74 return 0; 75 } ··· 90 return 0; 91 cpufreq_stats_update(stat->cpu); 92 for (i = 0; i < stat->state_num; i++) { 93 + len += sprintf(buf + len, "%u %llu\n", stat->freq_table[i], 94 + (unsigned long long)cputime64_to_clock_t(stat->time_in_state[i])); 95 } 96 return len; 97 } ··· 107 if(!stat) 108 return 0; 109 cpufreq_stats_update(stat->cpu); 110 + len += snprintf(buf + len, PAGE_SIZE - len, " From : To\n"); 111 + len += snprintf(buf + len, PAGE_SIZE - len, " : "); 112 for (i = 0; i < stat->state_num; i++) { 113 if (len >= PAGE_SIZE) 114 break; 115 + len += snprintf(buf + len, PAGE_SIZE - len, "%9u ", 116 + stat->freq_table[i]); 117 + } 118 + if (len >= PAGE_SIZE) 119 + return len; 120 + 121 + len += snprintf(buf + len, PAGE_SIZE - len, "\n"); 122 + 123 + for (i = 0; i < stat->state_num; i++) { 124 + if (len >= PAGE_SIZE) 125 + break; 126 + 127 + len += snprintf(buf + len, PAGE_SIZE - len, "%9u: ", 128 stat->freq_table[i]); 129 130 for (j = 0; j < stat->state_num; j++) { 131 if (len 
>= PAGE_SIZE) 132 break; 133 + len += snprintf(buf + len, PAGE_SIZE - len, "%9u ", 134 stat->trans_table[i*stat->max_state+j]); 135 } 136 len += snprintf(buf + len, PAGE_SIZE - len, "\n"); ··· 197 count++; 198 } 199 200 + alloc_size = count * sizeof(int) + count * sizeof(cputime64_t); 201 202 #ifdef CONFIG_CPU_FREQ_STAT_DETAILS 203 alloc_size += count * count * sizeof(int); ··· 224 } 225 stat->state_num = j; 226 spin_lock(&cpufreq_stats_lock); 227 + stat->last_time = get_jiffies_64(); 228 stat->last_index = freq_table_get_index(stat, policy->cur); 229 spin_unlock(&cpufreq_stats_lock); 230 cpufreq_cpu_put(data);
+1
drivers/firmware/pcdp.c
··· 11 * published by the Free Software Foundation. 12 */ 13 14 #include <linux/acpi.h> 15 #include <linux/console.h> 16 #include <linux/efi.h>
··· 11 * published by the Free Software Foundation. 12 */ 13 14 + #include <linux/config.h> 15 #include <linux/acpi.h> 16 #include <linux/console.h> 17 #include <linux/efi.h>
+31 -15
drivers/i2c/busses/i2c-ali1563.c
··· 2 * i2c-ali1563.c - i2c driver for the ALi 1563 Southbridge 3 * 4 * Copyright (C) 2004 Patrick Mochel 5 * 6 * The 1563 southbridge is deceptively similar to the 1533, with a 7 * few notable exceptions. One of those happens to be the fact they ··· 58 #define HST_CNTL2_BLOCK 0x05 59 60 61 62 static unsigned short ali1563_smba; 63 64 - static int ali1563_transaction(struct i2c_adapter * a) 65 { 66 u32 data; 67 int timeout; ··· 75 76 data = inb_p(SMB_HST_STS); 77 if (data & HST_STS_BAD) { 78 - dev_warn(&a->dev,"ali1563: Trying to reset busy device\n"); 79 outb_p(data | HST_STS_BAD,SMB_HST_STS); 80 data = inb_p(SMB_HST_STS); 81 if (data & HST_STS_BAD) ··· 96 97 if (timeout && !(data & HST_STS_BAD)) 98 return 0; 99 - dev_warn(&a->dev, "SMBus Error: %s%s%s%s%s\n", 100 - timeout ? "Timeout " : "", 101 - data & HST_STS_FAIL ? "Transaction Failed " : "", 102 - data & HST_STS_BUSERR ? "No response or Bus Collision " : "", 103 - data & HST_STS_DEVERR ? "Device Error " : "", 104 - !(data & HST_STS_DONE) ? "Transaction Never Finished " : ""); 105 106 - if (!(data & HST_STS_DONE)) 107 /* Issue 'kill' to host controller */ 108 outb_p(HST_CNTL2_KILL,SMB_HST_CNTL2); 109 - else 110 - /* Issue timeout to reset all devices on bus */ 111 outb_p(HST_CNTL1_TIMEOUT,SMB_HST_CNTL1); 112 return -1; 113 } 114 ··· 163 164 if (timeout && !(data & HST_STS_BAD)) 165 return 0; 166 - dev_warn(&a->dev, "SMBus Error: %s%s%s%s%s\n", 167 timeout ? "Timeout " : "", 168 data & HST_STS_FAIL ? "Transaction Failed " : "", 169 data & HST_STS_BUSERR ? 
"No response or Bus Collision " : "", ··· 256 } 257 258 outb_p(((addr & 0x7f) << 1) | (rw & 0x01), SMB_HST_ADD); 259 - outb_p(inb_p(SMB_HST_CNTL2) | (size << 3), SMB_HST_CNTL2); 260 261 /* Write the command register */ 262 switch(size) { 263 case HST_CNTL2_BYTE: 264 if (rw== I2C_SMBUS_WRITE) 265 - outb_p(cmd, SMB_HST_CMD); 266 break; 267 case HST_CNTL2_BYTE_DATA: 268 outb_p(cmd, SMB_HST_CMD); ··· 284 goto Done; 285 } 286 287 - if ((error = ali1563_transaction(a))) 288 goto Done; 289 290 if ((rw == I2C_SMBUS_WRITE) || (size == HST_CNTL2_QUICK))
··· 2 * i2c-ali1563.c - i2c driver for the ALi 1563 Southbridge 3 * 4 * Copyright (C) 2004 Patrick Mochel 5 + * 2005 Rudolf Marek <r.marek@sh.cvut.cz> 6 * 7 * The 1563 southbridge is deceptively similar to the 1533, with a 8 * few notable exceptions. One of those happens to be the fact they ··· 57 #define HST_CNTL2_BLOCK 0x05 58 59 60 + #define HST_CNTL2_SIZEMASK 0x38 61 62 static unsigned short ali1563_smba; 63 64 + static int ali1563_transaction(struct i2c_adapter * a, int size) 65 { 66 u32 data; 67 int timeout; ··· 73 74 data = inb_p(SMB_HST_STS); 75 if (data & HST_STS_BAD) { 76 + dev_err(&a->dev, "ali1563: Trying to reset busy device\n"); 77 outb_p(data | HST_STS_BAD,SMB_HST_STS); 78 data = inb_p(SMB_HST_STS); 79 if (data & HST_STS_BAD) ··· 94 95 if (timeout && !(data & HST_STS_BAD)) 96 return 0; 97 98 + if (!timeout) { 99 + dev_err(&a->dev, "Timeout - Trying to KILL transaction!\n"); 100 /* Issue 'kill' to host controller */ 101 outb_p(HST_CNTL2_KILL,SMB_HST_CNTL2); 102 + data = inb_p(SMB_HST_STS); 103 + } 104 + 105 + /* device error - no response, ignore the autodetection case */ 106 + if ((data & HST_STS_DEVERR) && (size != HST_CNTL2_QUICK)) { 107 + dev_err(&a->dev, "Device error!\n"); 108 + } 109 + 110 + /* bus collision */ 111 + if (data & HST_STS_BUSERR) { 112 + dev_err(&a->dev, "Bus collision!\n"); 113 + /* Issue timeout, hoping it helps */ 114 outb_p(HST_CNTL1_TIMEOUT,SMB_HST_CNTL1); 115 + } 116 + 117 + if (data & HST_STS_FAIL) { 118 + dev_err(&a->dev, "Cleaning fail after KILL!\n"); 119 + outb_p(0x0,SMB_HST_CNTL2); 120 + } 121 + 122 return -1; 123 } 124 ··· 149 150 if (timeout && !(data & HST_STS_BAD)) 151 return 0; 152 + dev_err(&a->dev, "SMBus Error: %s%s%s%s%s\n", 153 timeout ? "Timeout " : "", 154 data & HST_STS_FAIL ? "Transaction Failed " : "", 155 data & HST_STS_BUSERR ? 
"No response or Bus Collision " : "", ··· 242 } 243 244 outb_p(((addr & 0x7f) << 1) | (rw & 0x01), SMB_HST_ADD); 245 + outb_p((inb_p(SMB_HST_CNTL2) & ~HST_CNTL2_SIZEMASK) | (size << 3), SMB_HST_CNTL2); 246 247 /* Write the command register */ 248 + 249 switch(size) { 250 case HST_CNTL2_BYTE: 251 if (rw== I2C_SMBUS_WRITE) 252 + /* Beware it uses DAT0 register and not CMD! */ 253 + outb_p(cmd, SMB_HST_DAT0); 254 break; 255 case HST_CNTL2_BYTE_DATA: 256 outb_p(cmd, SMB_HST_CMD); ··· 268 goto Done; 269 } 270 271 + if ((error = ali1563_transaction(a, size))) 272 goto Done; 273 274 if ((rw == I2C_SMBUS_WRITE) || (size == HST_CNTL2_QUICK))
+1 -1
drivers/ide/ide-cd.c
··· 1936 * NOTE! The "len" and "addr" checks should possibly have 1937 * separate masks. 1938 */ 1939 - if ((rq->data_len & mask) || (addr & mask)) 1940 info->dma = 0; 1941 } 1942
··· 1936 * NOTE! The "len" and "addr" checks should possibly have 1937 * separate masks. 1938 */ 1939 + if ((rq->data_len & 15) || (addr & mask)) 1940 info->dma = 0; 1941 } 1942
+3
drivers/ide/pci/amd74xx.c
··· 72 { PCI_DEVICE_ID_NVIDIA_NFORCE3S_SATA2, 0x50, AMD_UDMA_133 }, 73 { PCI_DEVICE_ID_NVIDIA_NFORCE_CK804_IDE, 0x50, AMD_UDMA_133 }, 74 { PCI_DEVICE_ID_NVIDIA_NFORCE_MCP04_IDE, 0x50, AMD_UDMA_133 }, 75 { 0 } 76 }; 77 ··· 488 /* 12 */ DECLARE_NV_DEV("NFORCE3-250-SATA2"), 489 /* 13 */ DECLARE_NV_DEV("NFORCE-CK804"), 490 /* 14 */ DECLARE_NV_DEV("NFORCE-MCP04"), 491 }; 492 493 static int __devinit amd74xx_probe(struct pci_dev *dev, const struct pci_device_id *id) ··· 523 #endif 524 { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_CK804_IDE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 13 }, 525 { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP04_IDE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 14 }, 526 { 0, }, 527 }; 528 MODULE_DEVICE_TABLE(pci, amd74xx_pci_tbl);
··· 72 { PCI_DEVICE_ID_NVIDIA_NFORCE3S_SATA2, 0x50, AMD_UDMA_133 }, 73 { PCI_DEVICE_ID_NVIDIA_NFORCE_CK804_IDE, 0x50, AMD_UDMA_133 }, 74 { PCI_DEVICE_ID_NVIDIA_NFORCE_MCP04_IDE, 0x50, AMD_UDMA_133 }, 75 + { PCI_DEVICE_ID_NVIDIA_NFORCE_MCP51_IDE, 0x50, AMD_UDMA_133 }, 76 { 0 } 77 }; 78 ··· 487 /* 12 */ DECLARE_NV_DEV("NFORCE3-250-SATA2"), 488 /* 13 */ DECLARE_NV_DEV("NFORCE-CK804"), 489 /* 14 */ DECLARE_NV_DEV("NFORCE-MCP04"), 490 + /* 15 */ DECLARE_NV_DEV("NFORCE-MCP51"), 491 }; 492 493 static int __devinit amd74xx_probe(struct pci_dev *dev, const struct pci_device_id *id) ··· 521 #endif 522 { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_CK804_IDE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 13 }, 523 { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP04_IDE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 14 }, 524 + { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP51_IDE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 15 }, 525 { 0, }, 526 }; 527 MODULE_DEVICE_TABLE(pci, amd74xx_pci_tbl);
-20
drivers/input/gameport/Kconfig
··· 68 depends on PCI 69 70 endif 71 - 72 - # Yes, SOUND_GAMEPORT looks a bit odd. Yes, it ends up being turned on 73 - # in every .config. Please don't touch it. It is here to handle an 74 - # unusual dependency between GAMEPORT and sound drivers. 75 - # 76 - # Some sound drivers call gameport functions. If GAMEPORT is 77 - # not selected, empty stubs are provided for the functions and all is 78 - # well. 79 - # If GAMEPORT is built in, everything is fine. 80 - # If GAMEPORT is a module, however, it would need to be loaded for the 81 - # sound driver to be able to link properly. Therefore, the sound 82 - # driver must be a module as well in that case. Since there's no way 83 - # to express that directly in Kconfig, we use SOUND_GAMEPORT to 84 - # express it. SOUND_GAMEPORT boils down to "if GAMEPORT is 'm', 85 - # anything that depends on SOUND_GAMEPORT must be 'm' as well. if 86 - # GAMEPORT is 'y' or 'n', it can be anything". 87 - config SOUND_GAMEPORT 88 - tristate 89 - default m if GAMEPORT=m 90 - default y
··· 68 depends on PCI 69 70 endif
+1 -1
drivers/input/joydev.c
··· 422 joydev->nkey++; 423 } 424 425 - for (i = 0; i < BTN_JOYSTICK - BTN_MISC + 1; i++) 426 if (test_bit(i + BTN_MISC, dev->keybit)) { 427 joydev->keymap[i] = joydev->nkey; 428 joydev->keypam[joydev->nkey] = i + BTN_MISC;
··· 422 joydev->nkey++; 423 } 424 425 + for (i = 0; i < BTN_JOYSTICK - BTN_MISC; i++) 426 if (test_bit(i + BTN_MISC, dev->keybit)) { 427 joydev->keymap[i] = joydev->nkey; 428 joydev->keypam[joydev->nkey] = i + BTN_MISC;
+3 -3
drivers/input/keyboard/atkbd.c
··· 171 unsigned char set2; 172 } atkbd_scroll_keys[] = { 173 { ATKBD_SCR_1, 0xc5 }, 174 - { ATKBD_SCR_2, 0xa9 }, 175 - { ATKBD_SCR_4, 0xb6 }, 176 - { ATKBD_SCR_8, 0xa7 }, 177 { ATKBD_SCR_CLICK, 0xe0 }, 178 { ATKBD_SCR_LEFT, 0xcb }, 179 { ATKBD_SCR_RIGHT, 0xd2 },
··· 171 unsigned char set2; 172 } atkbd_scroll_keys[] = { 173 { ATKBD_SCR_1, 0xc5 }, 174 + { ATKBD_SCR_2, 0x9d }, 175 + { ATKBD_SCR_4, 0xa4 }, 176 + { ATKBD_SCR_8, 0x9b }, 177 { ATKBD_SCR_CLICK, 0xe0 }, 178 { ATKBD_SCR_LEFT, 0xcb }, 179 { ATKBD_SCR_RIGHT, 0xd2 },
+5 -2
drivers/input/mouse/psmouse-base.c
··· 518 /* 519 * First, we check if it's a mouse. It should send 0x00 or 0x03 520 * in case of an IntelliMouse in 4-byte mode or 0x04 for IM Explorer. 521 */ 522 523 param[0] = 0xa5; 524 if (ps2_command(ps2dev, param, PSMOUSE_CMD_GETID)) 525 return -1; 526 527 - if (param[0] != 0x00 && param[0] != 0x03 && param[0] != 0x04) 528 return -1; 529 530 /* ··· 975 return -EINVAL; 976 977 if (!strncmp(val, "any", 3)) { 978 - *((unsigned int *)kp->arg) = -1UL; 979 return 0; 980 } 981
··· 518 /* 519 * First, we check if it's a mouse. It should send 0x00 or 0x03 520 * in case of an IntelliMouse in 4-byte mode or 0x04 for IM Explorer. 521 + * Sunrex K8561 IR Keyboard/Mouse reports 0xff on second and subsequent 522 + * ID queries, probably due to a firmware bug. 523 */ 524 525 param[0] = 0xa5; 526 if (ps2_command(ps2dev, param, PSMOUSE_CMD_GETID)) 527 return -1; 528 529 + if (param[0] != 0x00 && param[0] != 0x03 && 530 + param[0] != 0x04 && param[0] != 0xff) 531 return -1; 532 533 /* ··· 972 return -EINVAL; 973 974 if (!strncmp(val, "any", 3)) { 975 + *((unsigned int *)kp->arg) = -1U; 976 return 0; 977 } 978
+5 -34
drivers/input/mouse/synaptics.c
··· 143 return -1; 144 } 145 146 - static void print_ident(struct synaptics_data *priv) 147 - { 148 - printk(KERN_INFO "Synaptics Touchpad, model: %ld\n", SYN_ID_MODEL(priv->identity)); 149 - printk(KERN_INFO " Firmware: %ld.%ld\n", SYN_ID_MAJOR(priv->identity), 150 - SYN_ID_MINOR(priv->identity)); 151 - if (SYN_MODEL_ROT180(priv->model_id)) 152 - printk(KERN_INFO " 180 degree mounted touchpad\n"); 153 - if (SYN_MODEL_PORTRAIT(priv->model_id)) 154 - printk(KERN_INFO " portrait touchpad\n"); 155 - printk(KERN_INFO " Sensor: %ld\n", SYN_MODEL_SENSOR(priv->model_id)); 156 - if (SYN_MODEL_NEWABS(priv->model_id)) 157 - printk(KERN_INFO " new absolute packet format\n"); 158 - if (SYN_MODEL_PEN(priv->model_id)) 159 - printk(KERN_INFO " pen detection\n"); 160 - 161 - if (SYN_CAP_EXTENDED(priv->capabilities)) { 162 - printk(KERN_INFO " Touchpad has extended capability bits\n"); 163 - if (SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap)) 164 - printk(KERN_INFO " -> %d multi-buttons, i.e. besides standard buttons\n", 165 - (int)(SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap))); 166 - if (SYN_CAP_MIDDLE_BUTTON(priv->capabilities)) 167 - printk(KERN_INFO " -> middle button\n"); 168 - if (SYN_CAP_FOUR_BUTTON(priv->capabilities)) 169 - printk(KERN_INFO " -> four buttons\n"); 170 - if (SYN_CAP_MULTIFINGER(priv->capabilities)) 171 - printk(KERN_INFO " -> multifinger detection\n"); 172 - if (SYN_CAP_PALMDETECT(priv->capabilities)) 173 - printk(KERN_INFO " -> palm detection\n"); 174 - if (SYN_CAP_PASS_THROUGH(priv->capabilities)) 175 - printk(KERN_INFO " -> pass-through port\n"); 176 - } 177 - } 178 - 179 static int synaptics_query_hardware(struct psmouse *psmouse) 180 { 181 int retries = 0; ··· 633 634 priv->pkt_type = SYN_MODEL_NEWABS(priv->model_id) ? SYN_NEWABS : SYN_OLDABS; 635 636 - print_ident(priv); 637 set_input_params(&psmouse->dev, priv); 638 639 psmouse->protocol_handler = synaptics_process_byte;
··· 143 return -1; 144 } 145 146 static int synaptics_query_hardware(struct psmouse *psmouse) 147 { 148 int retries = 0; ··· 666 667 priv->pkt_type = SYN_MODEL_NEWABS(priv->model_id) ? SYN_NEWABS : SYN_OLDABS; 668 669 + printk(KERN_INFO "Synaptics Touchpad, model: %ld, fw: %ld.%ld, id: %#lx, caps: %#lx/%#lx\n", 670 + SYN_ID_MODEL(priv->identity), 671 + SYN_ID_MAJOR(priv->identity), SYN_ID_MINOR(priv->identity), 672 + priv->model_id, priv->capabilities, priv->ext_cap); 673 + 674 set_input_params(&psmouse->dev, priv); 675 676 psmouse->protocol_handler = synaptics_process_byte;
+10 -5
drivers/input/mousedev.c
··· 101 unsigned char ready, buffer, bufsiz; 102 unsigned char imexseq, impsseq; 103 enum mousedev_emul mode; 104 }; 105 106 #define MOUSEDEV_SEQ_LEN 6 ··· 225 spin_lock_irqsave(&list->packet_lock, flags); 226 227 p = &list->packets[list->head]; 228 - if (list->ready && p->buttons != packet->buttons) { 229 unsigned int new_head = (list->head + 1) % PACKET_QUEUE_LEN; 230 if (new_head != list->tail) { 231 p = &list->packets[list->head = new_head]; ··· 250 p->dz += packet->dz; 251 p->buttons = mousedev->packet.buttons; 252 253 - list->ready = 1; 254 255 spin_unlock_irqrestore(&list->packet_lock, flags); 256 - kill_fasync(&list->fasync, SIGIO, POLL_IN); 257 } 258 259 wake_up_interruptible(&mousedev->wait); ··· 481 } 482 483 if (!p->dx && !p->dy && !p->dz) { 484 - if (list->tail == list->head) 485 list->ready = 0; 486 - else 487 list->tail = (list->tail + 1) % PACKET_QUEUE_LEN; 488 } 489
··· 101 unsigned char ready, buffer, bufsiz; 102 unsigned char imexseq, impsseq; 103 enum mousedev_emul mode; 104 + unsigned long last_buttons; 105 }; 106 107 #define MOUSEDEV_SEQ_LEN 6 ··· 224 spin_lock_irqsave(&list->packet_lock, flags); 225 226 p = &list->packets[list->head]; 227 + if (list->ready && p->buttons != mousedev->packet.buttons) { 228 unsigned int new_head = (list->head + 1) % PACKET_QUEUE_LEN; 229 if (new_head != list->tail) { 230 p = &list->packets[list->head = new_head]; ··· 249 p->dz += packet->dz; 250 p->buttons = mousedev->packet.buttons; 251 252 + if (p->dx || p->dy || p->dz || p->buttons != list->last_buttons) 253 + list->ready = 1; 254 255 spin_unlock_irqrestore(&list->packet_lock, flags); 256 + 257 + if (list->ready) 258 + kill_fasync(&list->fasync, SIGIO, POLL_IN); 259 } 260 261 wake_up_interruptible(&mousedev->wait); ··· 477 } 478 479 if (!p->dx && !p->dy && !p->dz) { 480 + if (list->tail == list->head) { 481 list->ready = 0; 482 + list->last_buttons = p->buttons; 483 + } else 484 list->tail = (list->tail + 1) % PACKET_QUEUE_LEN; 485 } 486
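The mousedev.c hunks change two things: a new packet slot is opened only when the button state changes, and the queue is marked ready (and SIGIO sent) only when the packet actually carries news relative to `last_buttons`. A minimal sketch of that queueing rule (simplified types, not the driver's code; locking omitted):

```c
#include <assert.h>
#include <string.h>

#define PACKET_QUEUE_LEN 8

/* Sketch of the patched rule: accumulate motion into the head packet,
 * open a new slot when buttons change, drop on ring overflow, and only
 * flag the queue ready when the packet differs from what the reader
 * last consumed (last_buttons). */
struct pkt { int dx, dy, dz; unsigned long buttons; };

struct queue {
    struct pkt packets[PACKET_QUEUE_LEN];
    unsigned int head, tail;
    int ready;
    unsigned long last_buttons;
};

static void queue_event(struct queue *q, int dx, int dy, int dz,
                        unsigned long buttons)
{
    struct pkt *p = &q->packets[q->head];

    if (q->ready && p->buttons != buttons) {
        unsigned int new_head = (q->head + 1) % PACKET_QUEUE_LEN;
        if (new_head != q->tail) {          /* drop if the ring is full */
            q->head = new_head;
            p = &q->packets[q->head];
            memset(p, 0, sizeof(*p));
        }
    }

    p->dx += dx;
    p->dy += dy;
    p->dz += dz;
    p->buttons = buttons;

    /* No motion and unchanged buttons carries no news: readers are
     * not woken, which is the spurious-wakeup fix in this patch. */
    if (p->dx || p->dy || p->dz || p->buttons != q->last_buttons)
        q->ready = 1;
}
```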
+26 -6
drivers/input/serio/i8042-x86ia64io.h
··· 88 }; 89 90 /* 91 - * Some Fujitsu notebooks are ahving trouble with touhcpads if 92 * active multiplexing mode is activated. Luckily they don't have 93 * external PS/2 ports so we can safely disable it. 94 */ 95 static struct dmi_system_id __initdata i8042_dmi_nomux_table[] = { 96 { ··· 117 }, 118 }, 119 { 120 .ident = "Fujitsu T70H", 121 .matches = { 122 DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"), 123 DMI_MATCH(DMI_PRODUCT_NAME, "FMVLT70H"), 124 }, 125 }, 126 { } ··· 231 232 static void i8042_pnp_exit(void) 233 { 234 - if (i8042_pnp_kbd_registered) 235 pnp_unregister_driver(&i8042_pnp_kbd_driver); 236 237 - if (i8042_pnp_aux_registered) 238 pnp_unregister_driver(&i8042_pnp_aux_driver); 239 } 240 241 static int i8042_pnp_init(void) ··· 247 int result_kbd, result_aux; 248 249 if (i8042_nopnp) { 250 - printk("i8042: PNP detection disabled\n"); 251 return 0; 252 } 253 ··· 261 #if defined(__ia64__) 262 return -ENODEV; 263 #else 264 - printk(KERN_WARNING "PNP: No PS/2 controller found. Probing ports directly.\n"); 265 return 0; 266 #endif 267 } ··· 285 i8042_pnp_kbd_irq = i8042_kbd_irq; 286 } 287 288 - if (result_aux > 0 && !i8042_pnp_aux_irq) { 289 printk(KERN_WARNING "PNP: PS/2 controller doesn't have AUX irq; using default %#x\n", i8042_aux_irq); 290 i8042_pnp_aux_irq = i8042_aux_irq; 291 }
··· 88 }; 89 90 /* 91 + * Some Fujitsu notebooks are having trouble with touchpads if 92 * active multiplexing mode is activated. Luckily they don't have 93 * external PS/2 ports so we can safely disable it. 94 + * ... apparently some Toshibas don't like MUX mode either and 95 + * die horrible death on reboot. 96 */ 97 static struct dmi_system_id __initdata i8042_dmi_nomux_table[] = { 98 { ··· 115 }, 116 }, 117 { 118 + .ident = "Fujitsu Lifebook S6230", 119 + .matches = { 120 + DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"), 121 + DMI_MATCH(DMI_PRODUCT_NAME, "LifeBook S6230"), 122 + }, 123 + }, 124 + { 125 .ident = "Fujitsu T70H", 126 .matches = { 127 DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"), 128 DMI_MATCH(DMI_PRODUCT_NAME, "FMVLT70H"), 129 + }, 130 + }, 131 + { 132 + .ident = "Toshiba P10", 133 + .matches = { 134 + DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"), 135 + DMI_MATCH(DMI_PRODUCT_NAME, "Satellite P10"), 136 }, 137 }, 138 { } ··· 215 216 static void i8042_pnp_exit(void) 217 { 218 + if (i8042_pnp_kbd_registered) { 219 + i8042_pnp_kbd_registered = 0; 220 pnp_unregister_driver(&i8042_pnp_kbd_driver); 221 + } 222 223 + if (i8042_pnp_aux_registered) { 224 + i8042_pnp_aux_registered = 0; 225 pnp_unregister_driver(&i8042_pnp_aux_driver); 226 + } 227 } 228 229 static int i8042_pnp_init(void) ··· 227 int result_kbd, result_aux; 228 229 if (i8042_nopnp) { 230 + printk(KERN_INFO "i8042: PNP detection disabled\n"); 231 return 0; 232 } 233 ··· 241 #if defined(__ia64__) 242 return -ENODEV; 243 #else 244 + printk(KERN_INFO "PNP: No PS/2 controller found. Probing ports directly.\n"); 245 return 0; 246 #endif 247 } ··· 265 i8042_pnp_kbd_irq = i8042_kbd_irq; 266 } 267 268 + if (!i8042_pnp_aux_irq) { 269 printk(KERN_WARNING "PNP: PS/2 controller doesn't have AUX irq; using default %#x\n", i8042_aux_irq); 270 i8042_pnp_aux_irq = i8042_aux_irq; 271 }
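The hunk above grows the sentinel-terminated `i8042_dmi_nomux_table` with two more machines (a Fujitsu Lifebook and a Toshiba Satellite) whose controllers misbehave in MUX mode. A simplified sketch of how such a quirk table is scanned — hypothetical types standing in for `dmi_system_id`/`dmi_check_system`, matching vendor and product as substrings:

```c
#include <assert.h>
#include <string.h>

/* Simplified stand-in for a DMI quirk table: an entry matches when each
 * of its strings is found in the corresponding system identification
 * string; a NULL vendor terminates the table. */
struct quirk {
    const char *ident;
    const char *vendor;
    const char *product;
};

static const struct quirk nomux_table[] = {
    { "Fujitsu Lifebook S6230", "FUJITSU", "LifeBook S6230" },
    { "Toshiba P10",            "TOSHIBA", "Satellite P10" },
    { NULL, NULL, NULL }        /* sentinel, like the { } in the patch */
};

static const char *match_quirk(const char *vendor, const char *product)
{
    const struct quirk *q;

    for (q = nomux_table; q->vendor; q++)
        if (strstr(vendor, q->vendor) && strstr(product, q->product))
            return q->ident;
    return NULL;
}
```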
+28 -22
drivers/input/serio/i8042.c
··· 698 i8042_interrupt(0, NULL, NULL); 699 } 700 701 702 /* 703 * i8042_controller init initializes the i8042 controller, and, ··· 739 return -1; 740 } 741 742 - if (i8042_reset) { 743 - 744 - unsigned char param; 745 - 746 - if (i8042_command(&param, I8042_CMD_CTL_TEST)) { 747 - printk(KERN_ERR "i8042.c: i8042 controller self test timeout.\n"); 748 - return -1; 749 - } 750 - 751 - if (param != I8042_RET_CTL_TEST) { 752 - printk(KERN_ERR "i8042.c: i8042 controller selftest failed. (%#x != %#x)\n", 753 - param, I8042_RET_CTL_TEST); 754 - return -1; 755 - } 756 - } 757 758 /* 759 * Save the CTR for restoral on unload / reboot. ··· 809 */ 810 static void i8042_controller_reset(void) 811 { 812 - unsigned char param; 813 - 814 /* 815 * Reset the controller if requested. 816 */ 817 818 - if (i8042_reset) 819 - if (i8042_command(&param, I8042_CMD_CTL_TEST)) 820 - printk(KERN_ERR "i8042.c: i8042 controller reset timeout.\n"); 821 822 /* 823 * Disable MUX mode if present. ··· 925 if (level != RESUME_ENABLE) 926 return 0; 927 928 - if (i8042_controller_init()) { 929 - printk(KERN_ERR "i8042: resume failed\n"); 930 return -1; 931 } 932
··· 698 i8042_interrupt(0, NULL, NULL); 699 } 700 701 + static int i8042_ctl_test(void) 702 + { 703 + unsigned char param; 704 + 705 + if (!i8042_reset) 706 + return 0; 707 + 708 + if (i8042_command(&param, I8042_CMD_CTL_TEST)) { 709 + printk(KERN_ERR "i8042.c: i8042 controller self test timeout.\n"); 710 + return -1; 711 + } 712 + 713 + if (param != I8042_RET_CTL_TEST) { 714 + printk(KERN_ERR "i8042.c: i8042 controller selftest failed. (%#x != %#x)\n", 715 + param, I8042_RET_CTL_TEST); 716 + return -1; 717 + } 718 + 719 + return 0; 720 + } 721 722 /* 723 * i8042_controller init initializes the i8042 controller, and, ··· 719 return -1; 720 } 721 722 + if (i8042_ctl_test()) 723 + return -1; 724 725 /* 726 * Save the CTR for restoral on unload / reboot. ··· 802 */ 803 static void i8042_controller_reset(void) 804 { 805 /* 806 * Reset the controller if requested. 807 */ 808 809 + i8042_ctl_test(); 810 811 /* 812 * Disable MUX mode if present. ··· 922 if (level != RESUME_ENABLE) 923 return 0; 924 925 + if (i8042_ctl_test()) 926 + return -1; 927 + 928 + if (i8042_command(&i8042_ctr, I8042_CMD_CTL_WCTR)) { 929 + printk(KERN_ERR "i8042: Can't write CTR\n"); 930 return -1; 931 } 932
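The i8042.c refactor above hoists the duplicated controller self-test (timeout check, then comparing the answer against `I8042_RET_CTL_TEST`, 0x55) into one `i8042_ctl_test()` used at init, reset and resume. A compact sketch of the same shape, with a function pointer standing in for `i8042_command()` so it can run outside the kernel:

```c
#include <assert.h>

#define RET_CTL_TEST 0x55   /* answer a healthy controller gives */

/* Sketch of the factored helper: skip when no reset was requested,
 * fail on command timeout or on a wrong self-test answer. The command
 * callback is a stand-in for i8042_command(..., I8042_CMD_CTL_TEST). */
static int ctl_test(int do_reset, int (*command)(unsigned char *param))
{
    unsigned char param;

    if (!do_reset)
        return 0;            /* self test not requested */

    if (command(&param))
        return -1;           /* controller timeout */

    if (param != RET_CTL_TEST)
        return -1;           /* self test failed */

    return 0;
}

static int good_controller(unsigned char *param)
{
    *param = RET_CTL_TEST;
    return 0;
}

static int dead_controller(unsigned char *param)
{
    (void)param;
    return 1;                /* simulate a timeout */
}
```

Making resume call the same helper (and then rewrite CTR) is what lets the patch shrink three near-identical blocks to one.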
+1 -2
drivers/input/touchscreen/gunze.c
··· 68 69 if (gunze->idx != GUNZE_MAX_LENGTH || gunze->data[5] != ',' || 70 (gunze->data[0] != 'T' && gunze->data[0] != 'R')) { 71 - gunze->data[10] = 0; 72 - printk(KERN_WARNING "gunze.c: bad packet: >%s<\n", gunze->data); 73 return; 74 } 75
··· 68 69 if (gunze->idx != GUNZE_MAX_LENGTH || gunze->data[5] != ',' || 70 (gunze->data[0] != 'T' && gunze->data[0] != 'R')) { 71 + printk(KERN_WARNING "gunze.c: bad packet: >%.*s<\n", GUNZE_MAX_LENGTH, gunze->data); 72 return; 73 } 74
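The gunze.c fix drops the `gunze->data[10] = 0;` terminator write and instead bounds the printk with a `%.*s` precision argument, so an unterminated packet buffer can be logged without being modified. A small user-space sketch of that printf idiom (`log_packet` is hypothetical):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define GUNZE_MAX_LENGTH 10

/* "%.*s" takes the maximum byte count from an int argument, so printf
 * never reads past GUNZE_MAX_LENGTH even if the buffer holds no NUL --
 * no need to stomp a terminator into the packet first. */
static int log_packet(char *out, size_t outlen, const char *data)
{
    return snprintf(out, outlen, "bad packet: >%.*s<",
                    GUNZE_MAX_LENGTH, data);
}
```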
+4 -4
drivers/macintosh/via-pmu.c
··· 2421 2422 /* Re-enable local CPU interrupts */ 2423 local_irq_enable(); 2424 - mdelay(100); 2425 preempt_enable(); 2426 2427 /* Re-enable clock spreading on some machines */ ··· 2549 return ret; 2550 } 2551 2552 - printk(KERN_DEBUG "HID1, before: %x\n", mfspr(SPRN_HID1)); 2553 2554 /* Tell PMU what events will wake us up */ 2555 pmu_request(&req, NULL, 4, PMU_POWER_EVENTS, PMU_PWR_CLR_WAKEUP_EVENTS, ··· 2612 pmu_wait_complete(&req); 2613 pmu_request(&req, NULL, 2, PMU_SET_INTR_MASK, pmu_intr_mask); 2614 pmu_wait_complete(&req); 2615 - 2616 - printk(KERN_DEBUG "HID1, after: %x\n", mfspr(SPRN_HID1)); 2617 2618 pmac_wakeup_devices(); 2619
··· 2421 2422 /* Re-enable local CPU interrupts */ 2423 local_irq_enable(); 2424 + mdelay(10); 2425 preempt_enable(); 2426 2427 /* Re-enable clock spreading on some machines */ ··· 2549 return ret; 2550 } 2551 2552 + /* Stop environment and ADB interrupts */ 2553 + pmu_request(&req, NULL, 2, PMU_SET_INTR_MASK, 0); 2554 + pmu_wait_complete(&req); 2555 2556 /* Tell PMU what events will wake us up */ 2557 pmu_request(&req, NULL, 4, PMU_POWER_EVENTS, PMU_PWR_CLR_WAKEUP_EVENTS, ··· 2610 pmu_wait_complete(&req); 2611 pmu_request(&req, NULL, 2, PMU_SET_INTR_MASK, pmu_intr_mask); 2612 pmu_wait_complete(&req); 2613 2614 pmac_wakeup_devices(); 2615
+64 -62
drivers/media/dvb/bt8xx/dst.c
··· 906 if (state->dst_type == DST_TYPE_IS_TERR) 907 return 0; 908 909 - if (state->voltage == SEC_VOLTAGE_OFF) 910 - paket[4] = 0; 911 - else 912 - paket[4] = 1; 913 - 914 - if (state->tone == SEC_TONE_ON) 915 - paket[2] = 0x02; 916 - else 917 - paket[2] = 0; 918 - if (state->minicmd == SEC_MINI_A) 919 - paket[3] = 0x02; 920 - else 921 - paket[3] = 0; 922 - 923 paket[7] = dst_check_sum (paket, 7); 924 dst_command(state, paket, 8); 925 return 0; 926 } 927 ··· 970 971 static int dst_write_tuna(struct dvb_frontend* fe) 972 { 973 - struct dst_state* state = (struct dst_state*) fe->demodulator_priv; 974 int retval; 975 u8 reply; 976 ··· 1038 1039 static int dst_set_diseqc(struct dvb_frontend* fe, struct dvb_diseqc_master_cmd* cmd) 1040 { 1041 - struct dst_state* state = (struct dst_state*) fe->demodulator_priv; 1042 u8 paket[8] = { 0x00, 0x08, 0x04, 0xe0, 0x10, 0x38, 0xf0, 0xec }; 1043 1044 - if (state->dst_type == DST_TYPE_IS_TERR) 1045 return 0; 1046 1047 if (cmd->msg_len == 0 || cmd->msg_len > 4) ··· 1054 1055 static int dst_set_voltage(struct dvb_frontend* fe, fe_sec_voltage_t voltage) 1056 { 1057 - u8 *val; 1058 int need_cmd; 1059 - struct dst_state* state = (struct dst_state*) fe->demodulator_priv; 1060 1061 state->voltage = voltage; 1062 1063 - if (state->dst_type == DST_TYPE_IS_TERR) 1064 return 0; 1065 1066 need_cmd = 0; 1067 - val = &state->tx_tuna[0]; 1068 - val[8] &= ~0x40; 1069 switch (voltage) { 1070 - case SEC_VOLTAGE_13: 1071 - if ((state->diseq_flags & HAS_POWER) == 0) 1072 need_cmd = 1; 1073 - state->diseq_flags |= HAS_POWER; 1074 - break; 1075 1076 - case SEC_VOLTAGE_18: 1077 - if ((state->diseq_flags & HAS_POWER) == 0) 1078 - need_cmd = 1; 1079 - state->diseq_flags |= HAS_POWER; 1080 - val[8] |= 0x40; 1081 - break; 1082 - 1083 - case SEC_VOLTAGE_OFF: 1084 - need_cmd = 1; 1085 - state->diseq_flags &= ~(HAS_POWER | HAS_LOCK | ATTEMPT_TUNE); 1086 - break; 1087 - 1088 - default: 1089 - return -EINVAL; 1090 } 1091 if (need_cmd) 1092 
dst_tone_power_cmd(state); ··· 1089 1090 static int dst_set_tone(struct dvb_frontend* fe, fe_sec_tone_mode_t tone) 1091 { 1092 - u8 *val; 1093 - struct dst_state* state = (struct dst_state*) fe->demodulator_priv; 1094 1095 state->tone = tone; 1096 1097 - if (state->dst_type == DST_TYPE_IS_TERR) 1098 return 0; 1099 1100 - val = &state->tx_tuna[0]; 1101 - 1102 - val[8] &= ~0x1; 1103 - 1104 switch (tone) { 1105 - case SEC_TONE_OFF: 1106 - break; 1107 1108 - case SEC_TONE_ON: 1109 - val[8] |= 1; 1110 - break; 1111 1112 - default: 1113 - return -EINVAL; 1114 } 1115 dst_tone_power_cmd(state); 1116 1117 return 0; 1118 } 1119 1120 static int dst_init(struct dvb_frontend* fe) 1121 { 1122 - struct dst_state* state = (struct dst_state*) fe->demodulator_priv; 1123 static u8 ini_satci_tuna[] = { 9, 0, 3, 0xb6, 1, 0, 0x73, 0x21, 0, 0 }; 1124 static u8 ini_satfta_tuna[] = { 0, 0, 3, 0xb6, 1, 0x55, 0xbd, 0x50, 0, 0 }; 1125 static u8 ini_tvfta_tuna[] = { 0, 0, 3, 0xb6, 1, 7, 0x0, 0x0, 0, 0 }; ··· 1170 1171 static int dst_read_status(struct dvb_frontend* fe, fe_status_t* status) 1172 { 1173 - struct dst_state* state = (struct dst_state*) fe->demodulator_priv; 1174 1175 *status = 0; 1176 if (state->diseq_flags & HAS_LOCK) { ··· 1184 1185 static int dst_read_signal_strength(struct dvb_frontend* fe, u16* strength) 1186 { 1187 - struct dst_state* state = (struct dst_state*) fe->demodulator_priv; 1188 1189 dst_get_signal(state); 1190 *strength = state->decode_strength; ··· 1194 1195 static int dst_read_snr(struct dvb_frontend* fe, u16* snr) 1196 { 1197 - struct dst_state* state = (struct dst_state*) fe->demodulator_priv; 1198 1199 dst_get_signal(state); 1200 *snr = state->decode_snr; ··· 1204 1205 static int dst_set_frontend(struct dvb_frontend* fe, struct dvb_frontend_parameters *p) 1206 { 1207 - struct dst_state* state = (struct dst_state*) fe->demodulator_priv; 1208 1209 dst_set_freq(state, p->frequency); 1210 if (verbose > 4) ··· 1230 1231 static int dst_get_frontend(struct 
dvb_frontend* fe, struct dvb_frontend_parameters *p) 1232 { 1233 - struct dst_state* state = (struct dst_state*) fe->demodulator_priv; 1234 1235 p->frequency = state->decode_freq; 1236 p->inversion = state->inversion; ··· 1250 1251 static void dst_release(struct dvb_frontend* fe) 1252 { 1253 - struct dst_state* state = (struct dst_state*) fe->demodulator_priv; 1254 kfree(state); 1255 } 1256 ··· 1348 .read_signal_strength = dst_read_signal_strength, 1349 .read_snr = dst_read_snr, 1350 1351 - .diseqc_send_burst = dst_set_tone, 1352 .diseqc_send_master_cmd = dst_set_diseqc, 1353 .set_voltage = dst_set_voltage, 1354 .set_tone = dst_set_tone,
··· 906 if (state->dst_type == DST_TYPE_IS_TERR) 907 return 0; 908 909 + paket[4] = state->tx_tuna[4]; 910 + paket[2] = state->tx_tuna[2]; 911 + paket[3] = state->tx_tuna[3]; 912 paket[7] = dst_check_sum (paket, 7); 913 dst_command(state, paket, 8); 914 + 915 return 0; 916 } 917 ··· 980 981 static int dst_write_tuna(struct dvb_frontend* fe) 982 { 983 + struct dst_state* state = fe->demodulator_priv; 984 int retval; 985 u8 reply; 986 ··· 1048 1049 static int dst_set_diseqc(struct dvb_frontend* fe, struct dvb_diseqc_master_cmd* cmd) 1050 { 1051 + struct dst_state* state = fe->demodulator_priv; 1052 u8 paket[8] = { 0x00, 0x08, 0x04, 0xe0, 0x10, 0x38, 0xf0, 0xec }; 1053 1054 + if (state->dst_type != DST_TYPE_IS_SAT) 1055 return 0; 1056 1057 if (cmd->msg_len == 0 || cmd->msg_len > 4) ··· 1064 1065 static int dst_set_voltage(struct dvb_frontend* fe, fe_sec_voltage_t voltage) 1066 { 1067 int need_cmd; 1068 + struct dst_state* state = fe->demodulator_priv; 1069 1070 state->voltage = voltage; 1071 1072 + if (state->dst_type != DST_TYPE_IS_SAT) 1073 return 0; 1074 1075 need_cmd = 0; 1076 switch (voltage) { 1077 + case SEC_VOLTAGE_13: 1078 + case SEC_VOLTAGE_18: 1079 + if ((state->diseq_flags & HAS_POWER) == 0) 1080 + need_cmd = 1; 1081 + state->diseq_flags |= HAS_POWER; 1082 + state->tx_tuna[4] = 0x01; 1083 + break; 1084 + 1085 + case SEC_VOLTAGE_OFF: 1086 need_cmd = 1; 1087 + state->diseq_flags &= ~(HAS_POWER | HAS_LOCK | ATTEMPT_TUNE); 1088 + state->tx_tuna[4] = 0x00; 1089 + break; 1090 1091 + default: 1092 + return -EINVAL; 1093 } 1094 if (need_cmd) 1095 dst_tone_power_cmd(state); ··· 1106 1107 static int dst_set_tone(struct dvb_frontend* fe, fe_sec_tone_mode_t tone) 1108 { 1109 + struct dst_state* state = fe->demodulator_priv; 1110 1111 state->tone = tone; 1112 1113 + if (state->dst_type != DST_TYPE_IS_SAT) 1114 return 0; 1115 1116 switch (tone) { 1117 + case SEC_TONE_OFF: 1118 + state->tx_tuna[2] = 0xff; 1119 + break; 1120 1121 + case SEC_TONE_ON: 1122 + 
state->tx_tuna[2] = 0x02; 1123 + break; 1124 1125 + default: 1126 + return -EINVAL; 1127 } 1128 dst_tone_power_cmd(state); 1129 1130 return 0; 1131 } 1132 1133 + static int dst_send_burst(struct dvb_frontend *fe, fe_sec_mini_cmd_t minicmd) 1134 + { 1135 + struct dst_state *state = fe->demodulator_priv; 1136 + 1137 + if (state->dst_type != DST_TYPE_IS_SAT) 1138 + return 0; 1139 + 1140 + state->minicmd = minicmd; 1141 + 1142 + switch (minicmd) { 1143 + case SEC_MINI_A: 1144 + state->tx_tuna[3] = 0x02; 1145 + break; 1146 + case SEC_MINI_B: 1147 + state->tx_tuna[3] = 0xff; 1148 + break; 1149 + } 1150 + dst_tone_power_cmd(state); 1151 + 1152 + return 0; 1153 + } 1154 + 1155 + 1156 static int dst_init(struct dvb_frontend* fe) 1157 { 1158 + struct dst_state* state = fe->demodulator_priv; 1159 static u8 ini_satci_tuna[] = { 9, 0, 3, 0xb6, 1, 0, 0x73, 0x21, 0, 0 }; 1160 static u8 ini_satfta_tuna[] = { 0, 0, 3, 0xb6, 1, 0x55, 0xbd, 0x50, 0, 0 }; 1161 static u8 ini_tvfta_tuna[] = { 0, 0, 3, 0xb6, 1, 7, 0x0, 0x0, 0, 0 }; ··· 1168 1169 static int dst_read_status(struct dvb_frontend* fe, fe_status_t* status) 1170 { 1171 + struct dst_state* state = fe->demodulator_priv; 1172 1173 *status = 0; 1174 if (state->diseq_flags & HAS_LOCK) { ··· 1182 1183 static int dst_read_signal_strength(struct dvb_frontend* fe, u16* strength) 1184 { 1185 + struct dst_state* state = fe->demodulator_priv; 1186 1187 dst_get_signal(state); 1188 *strength = state->decode_strength; ··· 1192 1193 static int dst_read_snr(struct dvb_frontend* fe, u16* snr) 1194 { 1195 + struct dst_state* state = fe->demodulator_priv; 1196 1197 dst_get_signal(state); 1198 *snr = state->decode_snr; ··· 1202 1203 static int dst_set_frontend(struct dvb_frontend* fe, struct dvb_frontend_parameters *p) 1204 { 1205 + struct dst_state* state = fe->demodulator_priv; 1206 1207 dst_set_freq(state, p->frequency); 1208 if (verbose > 4) ··· 1228 1229 static int dst_get_frontend(struct dvb_frontend* fe, struct dvb_frontend_parameters *p) 
1230 { 1231 + struct dst_state* state = fe->demodulator_priv; 1232 1233 p->frequency = state->decode_freq; 1234 p->inversion = state->inversion; ··· 1248 1249 static void dst_release(struct dvb_frontend* fe) 1250 { 1251 + struct dst_state* state = fe->demodulator_priv; 1252 kfree(state); 1253 } 1254 ··· 1346 .read_signal_strength = dst_read_signal_strength, 1347 .read_snr = dst_read_snr, 1348 1349 + .diseqc_send_burst = dst_send_burst, 1350 .diseqc_send_master_cmd = dst_set_diseqc, 1351 .set_voltage = dst_set_voltage, 1352 .set_tone = dst_set_tone,
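In the dst.c rework above, `dst_set_tone()`, `dst_set_voltage()` and the new `dst_send_burst()` no longer build command bytes themselves; they cache state in `tx_tuna[]` and let `dst_tone_power_cmd()` copy those bytes into the packet and append a checksum. A sketch of that pattern — the base packet bytes here are zeroed placeholders, and the checksum is assumed to be the two's complement of the byte sum (so all 8 bytes sum to zero mod 256), which matches the `paket[7] = dst_check_sum(paket, 7)` usage but is not taken verbatim from the driver:

```c
#include <assert.h>
#include <string.h>

/* Assumed checksum shape: two's complement of the byte sum, so the
 * finished packet sums to 0 mod 256. */
static unsigned char check_sum(const unsigned char *buf, int len)
{
    unsigned char val = 0;
    int i;

    for (i = 0; i < len; i++)
        val += buf[i];
    return (unsigned char)(~val + 1);
}

/* Sketch of the command builder: the set_tone/set_voltage/send_burst
 * handlers only update tx_tuna[]; this replays the cached bytes. */
static void build_tone_power_cmd(unsigned char paket[8],
                                 const unsigned char tx_tuna[10])
{
    memset(paket, 0, 8);         /* header bytes elided as placeholders */
    paket[2] = tx_tuna[2];       /* cached tone state */
    paket[3] = tx_tuna[3];       /* cached mini-DiSEqC burst */
    paket[4] = tx_tuna[4];       /* cached LNB power state */
    paket[7] = check_sum(paket, 7);
}
```

Keeping the state in one place is also what makes the `!= DST_TYPE_IS_SAT` early returns safe: non-satellite cards simply never get these bytes replayed.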
+556 -15
drivers/net/tg3.c
··· 133 /* number of ETHTOOL_GSTATS u64's */ 134 #define TG3_NUM_STATS (sizeof(struct tg3_ethtool_stats)/sizeof(u64)) 135 136 static char version[] __devinitdata = 137 DRV_MODULE_NAME ".c:v" DRV_MODULE_VERSION " (" DRV_MODULE_RELDATE ")\n"; 138 ··· 316 { "nic_irqs" }, 317 { "nic_avoided_irqs" }, 318 { "nic_tx_threshold_hit" } 319 }; 320 321 static void tg3_write_indirect_reg32(struct tg3 *tp, u32 off, u32 val) ··· 3083 } 3084 3085 static int tg3_init_hw(struct tg3 *); 3086 - static int tg3_halt(struct tg3 *, int); 3087 3088 #ifdef CONFIG_NET_POLL_CONTROLLER 3089 static void tg3_poll_controller(struct net_device *dev) ··· 3107 restart_timer = tp->tg3_flags2 & TG3_FLG2_RESTART_TIMER; 3108 tp->tg3_flags2 &= ~TG3_FLG2_RESTART_TIMER; 3109 3110 - tg3_halt(tp, 0); 3111 tg3_init_hw(tp); 3112 3113 tg3_netif_start(tp); ··· 3453 spin_lock_irq(&tp->lock); 3454 spin_lock(&tp->tx_lock); 3455 3456 - tg3_halt(tp, 1); 3457 3458 tg3_set_mtu(dev, tp, new_mtu); 3459 ··· 4144 } 4145 4146 /* tp->lock is held. 
*/ 4147 - static int tg3_halt(struct tg3 *tp, int silent) 4148 { 4149 int err; 4150 4151 tg3_stop_fw(tp); 4152 4153 - tg3_write_sig_pre_reset(tp, RESET_KIND_SHUTDOWN); 4154 4155 tg3_abort_hw(tp, silent); 4156 err = tg3_chip_reset(tp); 4157 4158 - tg3_write_sig_legacy(tp, RESET_KIND_SHUTDOWN); 4159 - tg3_write_sig_post_reset(tp, RESET_KIND_SHUTDOWN); 4160 4161 if (err) 4162 return err; ··· 4370 */ 4371 tp->tg3_flags |= TG3_FLAG_PCIX_TARGET_HWBUG; 4372 4373 err = tg3_halt_cpu(tp, cpu_base); 4374 if (err) 4375 goto out; 4376 ··· 5899 int err, i; 5900 u32 int_mbox = 0; 5901 5902 tg3_disable_ints(tp); 5903 5904 free_irq(tp->pdev->irq, dev); ··· 6005 spin_lock_irq(&tp->lock); 6006 spin_lock(&tp->tx_lock); 6007 6008 - tg3_halt(tp, 1); 6009 err = tg3_init_hw(tp); 6010 6011 spin_unlock(&tp->tx_lock); ··· 6081 6082 err = tg3_init_hw(tp); 6083 if (err) { 6084 - tg3_halt(tp, 1); 6085 tg3_free_rings(tp); 6086 } else { 6087 if (tp->tg3_flags & TG3_FLAG_TAGGED_STATUS) ··· 6125 pci_disable_msi(tp->pdev); 6126 tp->tg3_flags2 &= ~TG3_FLG2_USING_MSI; 6127 } 6128 - tg3_halt(tp, 1); 6129 tg3_free_rings(tp); 6130 tg3_free_consistent(tp); 6131 ··· 6398 6399 tg3_disable_ints(tp); 6400 6401 - tg3_halt(tp, 1); 6402 tg3_free_rings(tp); 6403 tp->tg3_flags &= 6404 ~(TG3_FLAG_INIT_COMPLETE | ··· 7118 tp->tx_pending = ering->tx_pending; 7119 7120 if (netif_running(dev)) { 7121 - tg3_halt(tp, 1); 7122 tg3_init_hw(tp); 7123 tg3_netif_start(tp); 7124 } ··· 7161 tp->tg3_flags &= ~TG3_FLAG_TX_PAUSE; 7162 7163 if (netif_running(dev)) { 7164 - tg3_halt(tp, 1); 7165 tg3_init_hw(tp); 7166 tg3_netif_start(tp); 7167 } ··· 7220 return TG3_NUM_STATS; 7221 } 7222 7223 static void tg3_get_strings (struct net_device *dev, u32 stringset, u8 *buf) 7224 { 7225 switch (stringset) { 7226 case ETH_SS_STATS: 7227 memcpy(buf, &ethtool_stats_keys, sizeof(ethtool_stats_keys)); 7228 break; 7229 default: 7230 WARN_ON(1); /* we need a WARN() */ ··· 7245 { 7246 struct tg3 *tp = netdev_priv(dev); 7247 memcpy(tmp_stats, 
tg3_get_estats(tp), sizeof(tp->estats)); 7248 } 7249 7250 static int tg3_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd) ··· 7870 .get_tso = ethtool_op_get_tso, 7871 .set_tso = tg3_set_tso, 7872 #endif 7873 .get_strings = tg3_get_strings, 7874 .get_stats_count = tg3_get_stats_count, 7875 .get_ethtool_stats = tg3_get_ethtool_stats, ··· 10019 (tr32(WDMAC_MODE) & WDMAC_MODE_ENABLE)) { 10020 pci_save_state(tp->pdev); 10021 tw32(MEMARB_MODE, MEMARB_MODE_ENABLE); 10022 - tg3_halt(tp, 1); 10023 } 10024 10025 err = tg3_test_dma(tp); ··· 10146 10147 spin_lock_irq(&tp->lock); 10148 spin_lock(&tp->tx_lock); 10149 - tg3_halt(tp, 1); 10150 spin_unlock(&tp->tx_lock); 10151 spin_unlock_irq(&tp->lock); 10152
··· 133 /* number of ETHTOOL_GSTATS u64's */ 134 #define TG3_NUM_STATS (sizeof(struct tg3_ethtool_stats)/sizeof(u64)) 135 136 + #define TG3_NUM_TEST 6 137 + 138 static char version[] __devinitdata = 139 DRV_MODULE_NAME ".c:v" DRV_MODULE_VERSION " (" DRV_MODULE_RELDATE ")\n"; 140 ··· 314 { "nic_irqs" }, 315 { "nic_avoided_irqs" }, 316 { "nic_tx_threshold_hit" } 317 + }; 318 + 319 + static struct { 320 + const char string[ETH_GSTRING_LEN]; 321 + } ethtool_test_keys[TG3_NUM_TEST] = { 322 + { "nvram test (online) " }, 323 + { "link test (online) " }, 324 + { "register test (offline)" }, 325 + { "memory test (offline)" }, 326 + { "loopback test (offline)" }, 327 + { "interrupt test (offline)" }, 328 }; 329 330 static void tg3_write_indirect_reg32(struct tg3 *tp, u32 off, u32 val) ··· 3070 } 3071 3072 static int tg3_init_hw(struct tg3 *); 3073 + static int tg3_halt(struct tg3 *, int, int); 3074 3075 #ifdef CONFIG_NET_POLL_CONTROLLER 3076 static void tg3_poll_controller(struct net_device *dev) ··· 3094 restart_timer = tp->tg3_flags2 & TG3_FLG2_RESTART_TIMER; 3095 tp->tg3_flags2 &= ~TG3_FLG2_RESTART_TIMER; 3096 3097 + tg3_halt(tp, RESET_KIND_SHUTDOWN, 0); 3098 tg3_init_hw(tp); 3099 3100 tg3_netif_start(tp); ··· 3440 spin_lock_irq(&tp->lock); 3441 spin_lock(&tp->tx_lock); 3442 3443 + tg3_halt(tp, RESET_KIND_SHUTDOWN, 1); 3444 3445 tg3_set_mtu(dev, tp, new_mtu); 3446 ··· 4131 } 4132 4133 /* tp->lock is held. */ 4134 + static int tg3_halt(struct tg3 *tp, int kind, int silent) 4135 { 4136 int err; 4137 4138 tg3_stop_fw(tp); 4139 4140 + tg3_write_sig_pre_reset(tp, kind); 4141 4142 tg3_abort_hw(tp, silent); 4143 err = tg3_chip_reset(tp); 4144 4145 + tg3_write_sig_legacy(tp, kind); 4146 + tg3_write_sig_post_reset(tp, kind); 4147 4148 if (err) 4149 return err; ··· 4357 */ 4358 tp->tg3_flags |= TG3_FLAG_PCIX_TARGET_HWBUG; 4359 4360 + /* It is possible that bootcode is still loading at this point. 4361 + * Get the nvram lock first before halting the cpu. 
4362 + */ 4363 + tg3_nvram_lock(tp); 4364 err = tg3_halt_cpu(tp, cpu_base); 4365 + tg3_nvram_unlock(tp); 4366 if (err) 4367 goto out; 4368 ··· 5881 int err, i; 5882 u32 int_mbox = 0; 5883 5884 + if (!netif_running(dev)) 5885 + return -ENODEV; 5886 + 5887 tg3_disable_ints(tp); 5888 5889 free_irq(tp->pdev->irq, dev); ··· 5984 spin_lock_irq(&tp->lock); 5985 spin_lock(&tp->tx_lock); 5986 5987 + tg3_halt(tp, RESET_KIND_SHUTDOWN, 1); 5988 err = tg3_init_hw(tp); 5989 5990 spin_unlock(&tp->tx_lock); ··· 6060 6061 err = tg3_init_hw(tp); 6062 if (err) { 6063 + tg3_halt(tp, RESET_KIND_SHUTDOWN, 1); 6064 tg3_free_rings(tp); 6065 } else { 6066 if (tp->tg3_flags & TG3_FLAG_TAGGED_STATUS) ··· 6104 pci_disable_msi(tp->pdev); 6105 tp->tg3_flags2 &= ~TG3_FLG2_USING_MSI; 6106 } 6107 + tg3_halt(tp, RESET_KIND_SHUTDOWN, 1); 6108 tg3_free_rings(tp); 6109 tg3_free_consistent(tp); 6110 ··· 6377 6378 tg3_disable_ints(tp); 6379 6380 + tg3_halt(tp, RESET_KIND_SHUTDOWN, 1); 6381 tg3_free_rings(tp); 6382 tp->tg3_flags &= 6383 ~(TG3_FLAG_INIT_COMPLETE | ··· 7097 tp->tx_pending = ering->tx_pending; 7098 7099 if (netif_running(dev)) { 7100 + tg3_halt(tp, RESET_KIND_SHUTDOWN, 1); 7101 tg3_init_hw(tp); 7102 tg3_netif_start(tp); 7103 } ··· 7140 tp->tg3_flags &= ~TG3_FLAG_TX_PAUSE; 7141 7142 if (netif_running(dev)) { 7143 + tg3_halt(tp, RESET_KIND_SHUTDOWN, 1); 7144 tg3_init_hw(tp); 7145 tg3_netif_start(tp); 7146 } ··· 7199 return TG3_NUM_STATS; 7200 } 7201 7202 + static int tg3_get_test_count (struct net_device *dev) 7203 + { 7204 + return TG3_NUM_TEST; 7205 + } 7206 + 7207 static void tg3_get_strings (struct net_device *dev, u32 stringset, u8 *buf) 7208 { 7209 switch (stringset) { 7210 case ETH_SS_STATS: 7211 memcpy(buf, &ethtool_stats_keys, sizeof(ethtool_stats_keys)); 7212 + break; 7213 + case ETH_SS_TEST: 7214 + memcpy(buf, &ethtool_test_keys, sizeof(ethtool_test_keys)); 7215 break; 7216 default: 7217 WARN_ON(1); /* we need a WARN() */ ··· 7216 { 7217 struct tg3 *tp = netdev_priv(dev); 7218 
memcpy(tmp_stats, tg3_get_estats(tp), sizeof(tp->estats)); 7219 + } 7220 + 7221 + #define NVRAM_TEST_SIZE 0x100 7222 + 7223 + static int tg3_test_nvram(struct tg3 *tp) 7224 + { 7225 + u32 *buf, csum; 7226 + int i, j, err = 0; 7227 + 7228 + buf = kmalloc(NVRAM_TEST_SIZE, GFP_KERNEL); 7229 + if (buf == NULL) 7230 + return -ENOMEM; 7231 + 7232 + for (i = 0, j = 0; i < NVRAM_TEST_SIZE; i += 4, j++) { 7233 + u32 val; 7234 + 7235 + if ((err = tg3_nvram_read(tp, i, &val)) != 0) 7236 + break; 7237 + buf[j] = cpu_to_le32(val); 7238 + } 7239 + if (i < NVRAM_TEST_SIZE) 7240 + goto out; 7241 + 7242 + err = -EIO; 7243 + if (cpu_to_be32(buf[0]) != TG3_EEPROM_MAGIC) 7244 + goto out; 7245 + 7246 + /* Bootstrap checksum at offset 0x10 */ 7247 + csum = calc_crc((unsigned char *) buf, 0x10); 7248 + if(csum != cpu_to_le32(buf[0x10/4])) 7249 + goto out; 7250 + 7251 + /* Manufacturing block starts at offset 0x74, checksum at 0xfc */ 7252 + csum = calc_crc((unsigned char *) &buf[0x74/4], 0x88); 7253 + if (csum != cpu_to_le32(buf[0xfc/4])) 7254 + goto out; 7255 + 7256 + err = 0; 7257 + 7258 + out: 7259 + kfree(buf); 7260 + return err; 7261 + } 7262 + 7263 + #define TG3_SERDES_TIMEOUT_SEC 2 7264 + #define TG3_COPPER_TIMEOUT_SEC 6 7265 + 7266 + static int tg3_test_link(struct tg3 *tp) 7267 + { 7268 + int i, max; 7269 + 7270 + if (!netif_running(tp->dev)) 7271 + return -ENODEV; 7272 + 7273 + if (tp->tg3_flags2 & TG3_FLG2_PHY_SERDES) 7274 + max = TG3_SERDES_TIMEOUT_SEC; 7275 + else 7276 + max = TG3_COPPER_TIMEOUT_SEC; 7277 + 7278 + for (i = 0; i < max; i++) { 7279 + if (netif_carrier_ok(tp->dev)) 7280 + return 0; 7281 + 7282 + if (msleep_interruptible(1000)) 7283 + break; 7284 + } 7285 + 7286 + return -EIO; 7287 + } 7288 + 7289 + /* Only test the commonly used registers */ 7290 + static int tg3_test_registers(struct tg3 *tp) 7291 + { 7292 + int i, is_5705; 7293 + u32 offset, read_mask, write_mask, val, save_val, read_val; 7294 + static struct { 7295 + u16 offset; 7296 + u16 flags; 7297 + 
#define TG3_FL_5705 0x1 7298 + #define TG3_FL_NOT_5705 0x2 7299 + #define TG3_FL_NOT_5788 0x4 7300 + u32 read_mask; 7301 + u32 write_mask; 7302 + } reg_tbl[] = { 7303 + /* MAC Control Registers */ 7304 + { MAC_MODE, TG3_FL_NOT_5705, 7305 + 0x00000000, 0x00ef6f8c }, 7306 + { MAC_MODE, TG3_FL_5705, 7307 + 0x00000000, 0x01ef6b8c }, 7308 + { MAC_STATUS, TG3_FL_NOT_5705, 7309 + 0x03800107, 0x00000000 }, 7310 + { MAC_STATUS, TG3_FL_5705, 7311 + 0x03800100, 0x00000000 }, 7312 + { MAC_ADDR_0_HIGH, 0x0000, 7313 + 0x00000000, 0x0000ffff }, 7314 + { MAC_ADDR_0_LOW, 0x0000, 7315 + 0x00000000, 0xffffffff }, 7316 + { MAC_RX_MTU_SIZE, 0x0000, 7317 + 0x00000000, 0x0000ffff }, 7318 + { MAC_TX_MODE, 0x0000, 7319 + 0x00000000, 0x00000070 }, 7320 + { MAC_TX_LENGTHS, 0x0000, 7321 + 0x00000000, 0x00003fff }, 7322 + { MAC_RX_MODE, TG3_FL_NOT_5705, 7323 + 0x00000000, 0x000007fc }, 7324 + { MAC_RX_MODE, TG3_FL_5705, 7325 + 0x00000000, 0x000007dc }, 7326 + { MAC_HASH_REG_0, 0x0000, 7327 + 0x00000000, 0xffffffff }, 7328 + { MAC_HASH_REG_1, 0x0000, 7329 + 0x00000000, 0xffffffff }, 7330 + { MAC_HASH_REG_2, 0x0000, 7331 + 0x00000000, 0xffffffff }, 7332 + { MAC_HASH_REG_3, 0x0000, 7333 + 0x00000000, 0xffffffff }, 7334 + 7335 + /* Receive Data and Receive BD Initiator Control Registers. */ 7336 + { RCVDBDI_JUMBO_BD+0, TG3_FL_NOT_5705, 7337 + 0x00000000, 0xffffffff }, 7338 + { RCVDBDI_JUMBO_BD+4, TG3_FL_NOT_5705, 7339 + 0x00000000, 0xffffffff }, 7340 + { RCVDBDI_JUMBO_BD+8, TG3_FL_NOT_5705, 7341 + 0x00000000, 0x00000003 }, 7342 + { RCVDBDI_JUMBO_BD+0xc, TG3_FL_NOT_5705, 7343 + 0x00000000, 0xffffffff }, 7344 + { RCVDBDI_STD_BD+0, 0x0000, 7345 + 0x00000000, 0xffffffff }, 7346 + { RCVDBDI_STD_BD+4, 0x0000, 7347 + 0x00000000, 0xffffffff }, 7348 + { RCVDBDI_STD_BD+8, 0x0000, 7349 + 0x00000000, 0xffff0002 }, 7350 + { RCVDBDI_STD_BD+0xc, 0x0000, 7351 + 0x00000000, 0xffffffff }, 7352 + 7353 + /* Receive BD Initiator Control Registers. 
*/ 7354 + { RCVBDI_STD_THRESH, TG3_FL_NOT_5705, 7355 + 0x00000000, 0xffffffff }, 7356 + { RCVBDI_STD_THRESH, TG3_FL_5705, 7357 + 0x00000000, 0x000003ff }, 7358 + { RCVBDI_JUMBO_THRESH, TG3_FL_NOT_5705, 7359 + 0x00000000, 0xffffffff }, 7360 + 7361 + /* Host Coalescing Control Registers. */ 7362 + { HOSTCC_MODE, TG3_FL_NOT_5705, 7363 + 0x00000000, 0x00000004 }, 7364 + { HOSTCC_MODE, TG3_FL_5705, 7365 + 0x00000000, 0x000000f6 }, 7366 + { HOSTCC_RXCOL_TICKS, TG3_FL_NOT_5705, 7367 + 0x00000000, 0xffffffff }, 7368 + { HOSTCC_RXCOL_TICKS, TG3_FL_5705, 7369 + 0x00000000, 0x000003ff }, 7370 + { HOSTCC_TXCOL_TICKS, TG3_FL_NOT_5705, 7371 + 0x00000000, 0xffffffff }, 7372 + { HOSTCC_TXCOL_TICKS, TG3_FL_5705, 7373 + 0x00000000, 0x000003ff }, 7374 + { HOSTCC_RXMAX_FRAMES, TG3_FL_NOT_5705, 7375 + 0x00000000, 0xffffffff }, 7376 + { HOSTCC_RXMAX_FRAMES, TG3_FL_5705 | TG3_FL_NOT_5788, 7377 + 0x00000000, 0x000000ff }, 7378 + { HOSTCC_TXMAX_FRAMES, TG3_FL_NOT_5705, 7379 + 0x00000000, 0xffffffff }, 7380 + { HOSTCC_TXMAX_FRAMES, TG3_FL_5705 | TG3_FL_NOT_5788, 7381 + 0x00000000, 0x000000ff }, 7382 + { HOSTCC_RXCOAL_TICK_INT, TG3_FL_NOT_5705, 7383 + 0x00000000, 0xffffffff }, 7384 + { HOSTCC_TXCOAL_TICK_INT, TG3_FL_NOT_5705, 7385 + 0x00000000, 0xffffffff }, 7386 + { HOSTCC_RXCOAL_MAXF_INT, TG3_FL_NOT_5705, 7387 + 0x00000000, 0xffffffff }, 7388 + { HOSTCC_RXCOAL_MAXF_INT, TG3_FL_5705 | TG3_FL_NOT_5788, 7389 + 0x00000000, 0x000000ff }, 7390 + { HOSTCC_TXCOAL_MAXF_INT, TG3_FL_NOT_5705, 7391 + 0x00000000, 0xffffffff }, 7392 + { HOSTCC_TXCOAL_MAXF_INT, TG3_FL_5705 | TG3_FL_NOT_5788, 7393 + 0x00000000, 0x000000ff }, 7394 + { HOSTCC_STAT_COAL_TICKS, TG3_FL_NOT_5705, 7395 + 0x00000000, 0xffffffff }, 7396 + { HOSTCC_STATS_BLK_HOST_ADDR, TG3_FL_NOT_5705, 7397 + 0x00000000, 0xffffffff }, 7398 + { HOSTCC_STATS_BLK_HOST_ADDR+4, TG3_FL_NOT_5705, 7399 + 0x00000000, 0xffffffff }, 7400 + { HOSTCC_STATUS_BLK_HOST_ADDR, 0x0000, 7401 + 0x00000000, 0xffffffff }, 7402 + { HOSTCC_STATUS_BLK_HOST_ADDR+4, 0x0000, 
7403 + 0x00000000, 0xffffffff }, 7404 + { HOSTCC_STATS_BLK_NIC_ADDR, 0x0000, 7405 + 0xffffffff, 0x00000000 }, 7406 + { HOSTCC_STATUS_BLK_NIC_ADDR, 0x0000, 7407 + 0xffffffff, 0x00000000 }, 7408 + 7409 + /* Buffer Manager Control Registers. */ 7410 + { BUFMGR_MB_POOL_ADDR, 0x0000, 7411 + 0x00000000, 0x007fff80 }, 7412 + { BUFMGR_MB_POOL_SIZE, 0x0000, 7413 + 0x00000000, 0x007fffff }, 7414 + { BUFMGR_MB_RDMA_LOW_WATER, 0x0000, 7415 + 0x00000000, 0x0000003f }, 7416 + { BUFMGR_MB_MACRX_LOW_WATER, 0x0000, 7417 + 0x00000000, 0x000001ff }, 7418 + { BUFMGR_MB_HIGH_WATER, 0x0000, 7419 + 0x00000000, 0x000001ff }, 7420 + { BUFMGR_DMA_DESC_POOL_ADDR, TG3_FL_NOT_5705, 7421 + 0xffffffff, 0x00000000 }, 7422 + { BUFMGR_DMA_DESC_POOL_SIZE, TG3_FL_NOT_5705, 7423 + 0xffffffff, 0x00000000 }, 7424 + 7425 + /* Mailbox Registers */ 7426 + { GRCMBOX_RCVSTD_PROD_IDX+4, 0x0000, 7427 + 0x00000000, 0x000001ff }, 7428 + { GRCMBOX_RCVJUMBO_PROD_IDX+4, TG3_FL_NOT_5705, 7429 + 0x00000000, 0x000001ff }, 7430 + { GRCMBOX_RCVRET_CON_IDX_0+4, 0x0000, 7431 + 0x00000000, 0x000007ff }, 7432 + { GRCMBOX_SNDHOST_PROD_IDX_0+4, 0x0000, 7433 + 0x00000000, 0x000001ff }, 7434 + 7435 + { 0xffff, 0x0000, 0x00000000, 0x00000000 }, 7436 + }; 7437 + 7438 + if (tp->tg3_flags2 & TG3_FLG2_5705_PLUS) 7439 + is_5705 = 1; 7440 + else 7441 + is_5705 = 0; 7442 + 7443 + for (i = 0; reg_tbl[i].offset != 0xffff; i++) { 7444 + if (is_5705 && (reg_tbl[i].flags & TG3_FL_NOT_5705)) 7445 + continue; 7446 + 7447 + if (!is_5705 && (reg_tbl[i].flags & TG3_FL_5705)) 7448 + continue; 7449 + 7450 + if ((tp->tg3_flags2 & TG3_FLG2_IS_5788) && 7451 + (reg_tbl[i].flags & TG3_FL_NOT_5788)) 7452 + continue; 7453 + 7454 + offset = (u32) reg_tbl[i].offset; 7455 + read_mask = reg_tbl[i].read_mask; 7456 + write_mask = reg_tbl[i].write_mask; 7457 + 7458 + /* Save the original register content */ 7459 + save_val = tr32(offset); 7460 + 7461 + /* Determine the read-only value. 
*/ 7462 + read_val = save_val & read_mask; 7463 + 7464 + /* Write zero to the register, then make sure the read-only bits 7465 + * are not changed and the read/write bits are all zeros. 7466 + */ 7467 + tw32(offset, 0); 7468 + 7469 + val = tr32(offset); 7470 + 7471 + /* Test the read-only and read/write bits. */ 7472 + if (((val & read_mask) != read_val) || (val & write_mask)) 7473 + goto out; 7474 + 7475 + /* Write ones to all the bits defined by RdMask and WrMask, then 7476 + * make sure the read-only bits are not changed and the 7477 + * read/write bits are all ones. 7478 + */ 7479 + tw32(offset, read_mask | write_mask); 7480 + 7481 + val = tr32(offset); 7482 + 7483 + /* Test the read-only bits. */ 7484 + if ((val & read_mask) != read_val) 7485 + goto out; 7486 + 7487 + /* Test the read/write bits. */ 7488 + if ((val & write_mask) != write_mask) 7489 + goto out; 7490 + 7491 + tw32(offset, save_val); 7492 + } 7493 + 7494 + return 0; 7495 + 7496 + out: 7497 + printk(KERN_ERR PFX "Register test failed at offset %x\n", offset); 7498 + tw32(offset, save_val); 7499 + return -EIO; 7500 + } 7501 + 7502 + static int tg3_do_mem_test(struct tg3 *tp, u32 offset, u32 len) 7503 + { 7504 + static u32 test_pattern[] = { 0x00000000, 0xffffffff, 0xaa55a55a }; 7505 + int i; 7506 + u32 j; 7507 + 7508 + for (i = 0; i < sizeof(test_pattern)/sizeof(u32); i++) { 7509 + for (j = 0; j < len; j += 4) { 7510 + u32 val; 7511 + 7512 + tg3_write_mem(tp, offset + j, test_pattern[i]); 7513 + tg3_read_mem(tp, offset + j, &val); 7514 + if (val != test_pattern[i]) 7515 + return -EIO; 7516 + } 7517 + } 7518 + return 0; 7519 + } 7520 + 7521 + static int tg3_test_memory(struct tg3 *tp) 7522 + { 7523 + static struct mem_entry { 7524 + u32 offset; 7525 + u32 len; 7526 + } mem_tbl_570x[] = { 7527 + { 0x00000000, 0x01000}, 7528 + { 0x00002000, 0x1c000}, 7529 + { 0xffffffff, 0x00000} 7530 + }, mem_tbl_5705[] = { 7531 + { 0x00000100, 0x0000c}, 7532 + { 0x00000200, 0x00008}, 7533 + { 0x00000b50, 0x00400}, 
7534 + { 0x00004000, 0x00800}, 7535 + { 0x00006000, 0x01000}, 7536 + { 0x00008000, 0x02000}, 7537 + { 0x00010000, 0x0e000}, 7538 + { 0xffffffff, 0x00000} 7539 + }; 7540 + struct mem_entry *mem_tbl; 7541 + int err = 0; 7542 + int i; 7543 + 7544 + if (tp->tg3_flags2 & TG3_FLG2_5705_PLUS) 7545 + mem_tbl = mem_tbl_5705; 7546 + else 7547 + mem_tbl = mem_tbl_570x; 7548 + 7549 + for (i = 0; mem_tbl[i].offset != 0xffffffff; i++) { 7550 + if ((err = tg3_do_mem_test(tp, mem_tbl[i].offset, 7551 + mem_tbl[i].len)) != 0) 7552 + break; 7553 + } 7554 + 7555 + return err; 7556 + } 7557 + 7558 + static int tg3_test_loopback(struct tg3 *tp) 7559 + { 7560 + u32 mac_mode, send_idx, rx_start_idx, rx_idx, tx_idx, opaque_key; 7561 + u32 desc_idx; 7562 + struct sk_buff *skb, *rx_skb; 7563 + u8 *tx_data; 7564 + dma_addr_t map; 7565 + int num_pkts, tx_len, rx_len, i, err; 7566 + struct tg3_rx_buffer_desc *desc; 7567 + 7568 + if (!netif_running(tp->dev)) 7569 + return -ENODEV; 7570 + 7571 + err = -EIO; 7572 + 7573 + tg3_abort_hw(tp, 1); 7574 + 7575 + /* Clearing this flag to keep interrupts disabled */ 7576 + tp->tg3_flags &= ~TG3_FLAG_INIT_COMPLETE; 7577 + tg3_reset_hw(tp); 7578 + 7579 + mac_mode = (tp->mac_mode & ~MAC_MODE_PORT_MODE_MASK) | 7580 + MAC_MODE_PORT_INT_LPBACK | MAC_MODE_LINK_POLARITY | 7581 + MAC_MODE_PORT_MODE_GMII; 7582 + tw32(MAC_MODE, mac_mode); 7583 + 7584 + tx_len = 1514; 7585 + skb = dev_alloc_skb(tx_len); 7586 + tx_data = skb_put(skb, tx_len); 7587 + memcpy(tx_data, tp->dev->dev_addr, 6); 7588 + memset(tx_data + 6, 0x0, 8); 7589 + 7590 + tw32(MAC_RX_MTU_SIZE, tx_len + 4); 7591 + 7592 + for (i = 14; i < tx_len; i++) 7593 + tx_data[i] = (u8) (i & 0xff); 7594 + 7595 + map = pci_map_single(tp->pdev, skb->data, tx_len, PCI_DMA_TODEVICE); 7596 + 7597 + tw32_f(HOSTCC_MODE, tp->coalesce_mode | HOSTCC_MODE_ENABLE | 7598 + HOSTCC_MODE_NOW); 7599 + 7600 + udelay(10); 7601 + 7602 + rx_start_idx = tp->hw_status->idx[0].rx_producer; 7603 + 7604 + send_idx = 0; 7605 + num_pkts = 0; 
7606 + 7607 + tg3_set_txd(tp, send_idx, map, tx_len, 0, 1); 7608 + 7609 + send_idx++; 7610 + num_pkts++; 7611 + 7612 + tw32_tx_mbox(MAILBOX_SNDHOST_PROD_IDX_0 + TG3_64BIT_REG_LOW, send_idx); 7613 + tr32(MAILBOX_SNDHOST_PROD_IDX_0 + TG3_64BIT_REG_LOW); 7614 + 7615 + udelay(10); 7616 + 7617 + for (i = 0; i < 10; i++) { 7618 + tw32_f(HOSTCC_MODE, tp->coalesce_mode | HOSTCC_MODE_ENABLE | 7619 + HOSTCC_MODE_NOW); 7620 + 7621 + udelay(10); 7622 + 7623 + tx_idx = tp->hw_status->idx[0].tx_consumer; 7624 + rx_idx = tp->hw_status->idx[0].rx_producer; 7625 + if ((tx_idx == send_idx) && 7626 + (rx_idx == (rx_start_idx + num_pkts))) 7627 + break; 7628 + } 7629 + 7630 + pci_unmap_single(tp->pdev, map, tx_len, PCI_DMA_TODEVICE); 7631 + dev_kfree_skb(skb); 7632 + 7633 + if (tx_idx != send_idx) 7634 + goto out; 7635 + 7636 + if (rx_idx != rx_start_idx + num_pkts) 7637 + goto out; 7638 + 7639 + desc = &tp->rx_rcb[rx_start_idx]; 7640 + desc_idx = desc->opaque & RXD_OPAQUE_INDEX_MASK; 7641 + opaque_key = desc->opaque & RXD_OPAQUE_RING_MASK; 7642 + if (opaque_key != RXD_OPAQUE_RING_STD) 7643 + goto out; 7644 + 7645 + if ((desc->err_vlan & RXD_ERR_MASK) != 0 && 7646 + (desc->err_vlan != RXD_ERR_ODD_NIBBLE_RCVD_MII)) 7647 + goto out; 7648 + 7649 + rx_len = ((desc->idx_len & RXD_LEN_MASK) >> RXD_LEN_SHIFT) - 4; 7650 + if (rx_len != tx_len) 7651 + goto out; 7652 + 7653 + rx_skb = tp->rx_std_buffers[desc_idx].skb; 7654 + 7655 + map = pci_unmap_addr(&tp->rx_std_buffers[desc_idx], mapping); 7656 + pci_dma_sync_single_for_cpu(tp->pdev, map, rx_len, PCI_DMA_FROMDEVICE); 7657 + 7658 + for (i = 14; i < tx_len; i++) { 7659 + if (*(rx_skb->data + i) != (u8) (i & 0xff)) 7660 + goto out; 7661 + } 7662 + err = 0; 7663 + 7664 + /* tg3_free_rings will unmap and free the rx_skb */ 7665 + out: 7666 + return err; 7667 + } 7668 + 7669 + static void tg3_self_test(struct net_device *dev, struct ethtool_test *etest, 7670 + u64 *data) 7671 + { 7672 + struct tg3 *tp = netdev_priv(dev); 7673 + 7674 + memset(data, 
0, sizeof(u64) * TG3_NUM_TEST); 7675 + 7676 + if (tg3_test_nvram(tp) != 0) { 7677 + etest->flags |= ETH_TEST_FL_FAILED; 7678 + data[0] = 1; 7679 + } 7680 + if (tg3_test_link(tp) != 0) { 7681 + etest->flags |= ETH_TEST_FL_FAILED; 7682 + data[1] = 1; 7683 + } 7684 + if (etest->flags & ETH_TEST_FL_OFFLINE) { 7685 + if (netif_running(dev)) 7686 + tg3_netif_stop(tp); 7687 + 7688 + spin_lock_irq(&tp->lock); 7689 + spin_lock(&tp->tx_lock); 7690 + 7691 + tg3_halt(tp, RESET_KIND_SUSPEND, 1); 7692 + tg3_nvram_lock(tp); 7693 + tg3_halt_cpu(tp, RX_CPU_BASE); 7694 + if (!(tp->tg3_flags2 & TG3_FLG2_5705_PLUS)) 7695 + tg3_halt_cpu(tp, TX_CPU_BASE); 7696 + tg3_nvram_unlock(tp); 7697 + 7698 + if (tg3_test_registers(tp) != 0) { 7699 + etest->flags |= ETH_TEST_FL_FAILED; 7700 + data[2] = 1; 7701 + } 7702 + if (tg3_test_memory(tp) != 0) { 7703 + etest->flags |= ETH_TEST_FL_FAILED; 7704 + data[3] = 1; 7705 + } 7706 + if (tg3_test_loopback(tp) != 0) { 7707 + etest->flags |= ETH_TEST_FL_FAILED; 7708 + data[4] = 1; 7709 + } 7710 + 7711 + spin_unlock(&tp->tx_lock); 7712 + spin_unlock_irq(&tp->lock); 7713 + if (tg3_test_interrupt(tp) != 0) { 7714 + etest->flags |= ETH_TEST_FL_FAILED; 7715 + data[5] = 1; 7716 + } 7717 + spin_lock_irq(&tp->lock); 7718 + spin_lock(&tp->tx_lock); 7719 + 7720 + tg3_halt(tp, RESET_KIND_SHUTDOWN, 1); 7721 + if (netif_running(dev)) { 7722 + tp->tg3_flags |= TG3_FLAG_INIT_COMPLETE; 7723 + tg3_init_hw(tp); 7724 + tg3_netif_start(tp); 7725 + } 7726 + spin_unlock(&tp->tx_lock); 7727 + spin_unlock_irq(&tp->lock); 7728 + } 7729 } 7730 7731 static int tg3_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd) ··· 7331 .get_tso = ethtool_op_get_tso, 7332 .set_tso = tg3_set_tso, 7333 #endif 7334 + .self_test_count = tg3_get_test_count, 7335 + .self_test = tg3_self_test, 7336 .get_strings = tg3_get_strings, 7337 .get_stats_count = tg3_get_stats_count, 7338 .get_ethtool_stats = tg3_get_ethtool_stats, ··· 9478 (tr32(WDMAC_MODE) & WDMAC_MODE_ENABLE)) { 9479 
pci_save_state(tp->pdev); 9480 tw32(MEMARB_MODE, MEMARB_MODE_ENABLE); 9481 + tg3_halt(tp, RESET_KIND_SHUTDOWN, 1); 9482 } 9483 9484 err = tg3_test_dma(tp); ··· 9605 9606 spin_lock_irq(&tp->lock); 9607 spin_lock(&tp->tx_lock); 9608 + tg3_halt(tp, RESET_KIND_SHUTDOWN, 1); 9609 spin_unlock(&tp->tx_lock); 9610 spin_unlock_irq(&tp->lock); 9611
+141 -167
drivers/pci/hotplug/cpci_hotplug_core.c
··· 1 /* 2 * CompactPCI Hot Plug Driver 3 * 4 - * Copyright (C) 2002 SOMA Networks, Inc. 5 * Copyright (C) 2001 Greg Kroah-Hartman (greg@kroah.com) 6 * Copyright (C) 2001 IBM Corp. 7 * ··· 45 46 #define dbg(format, arg...) \ 47 do { \ 48 - if(cpci_debug) \ 49 printk (KERN_DEBUG "%s: " format "\n", \ 50 MY_NAME , ## arg); \ 51 - } while(0) 52 #define err(format, arg...) printk(KERN_ERR "%s: " format "\n", MY_NAME , ## arg) 53 #define info(format, arg...) printk(KERN_INFO "%s: " format "\n", MY_NAME , ## arg) 54 #define warn(format, arg...) printk(KERN_WARNING "%s: " format "\n", MY_NAME , ## arg) ··· 111 112 dbg("%s - physical_slot = %s", __FUNCTION__, hotplug_slot->name); 113 114 - if(controller->ops->set_power) { 115 retval = controller->ops->set_power(slot, 1); 116 - } 117 - 118 return retval; 119 } 120 ··· 124 125 dbg("%s - physical_slot = %s", __FUNCTION__, hotplug_slot->name); 126 127 /* Unconfigure device */ 128 dbg("%s - unconfiguring slot %s", 129 __FUNCTION__, slot->hotplug_slot->name); 130 - if((retval = cpci_unconfigure_slot(slot))) { 131 err("%s - could not unconfigure slot %s", 132 __FUNCTION__, slot->hotplug_slot->name); 133 - return retval; 134 } 135 dbg("%s - finished unconfiguring slot %s", 136 __FUNCTION__, slot->hotplug_slot->name); 137 138 /* Clear EXT (by setting it) */ 139 - if(cpci_clear_ext(slot)) { 140 err("%s - could not clear EXT for slot %s", 141 __FUNCTION__, slot->hotplug_slot->name); 142 retval = -ENODEV; 143 } 144 cpci_led_on(slot); 145 146 - if(controller->ops->set_power) { 147 - retval = controller->ops->set_power(slot, 0); 148 - } 149 150 - if(update_adapter_status(slot->hotplug_slot, 0)) { 151 warn("failure to update adapter file"); 152 - } 153 154 - if(slot->extracting) { 155 slot->extracting = 0; 156 atomic_dec(&extracting); 157 } 158 return retval; 159 } 160 ··· 167 { 168 u8 power = 1; 169 170 - if(controller->ops->get_power) { 171 power = controller->ops->get_power(slot); 172 - } 173 return power; 174 } 175 ··· 238 int status 
= -ENOMEM; 239 int i; 240 241 - if(!(controller && bus)) { 242 return -ENODEV; 243 - } 244 245 /* 246 * Create a structure for each slot, and register that slot ··· 316 cpci_hp_unregister_bus(struct pci_bus *bus) 317 { 318 struct slot *slot; 319 - struct list_head *tmp; 320 - struct list_head *next; 321 - int status; 322 323 down_write(&list_rwsem); 324 - if(!slots) { 325 up_write(&list_rwsem); 326 return -1; 327 } 328 - list_for_each_safe(tmp, next, &slot_list) { 329 - slot = list_entry(tmp, struct slot, slot_list); 330 - if(slot->bus == bus) { 331 - dbg("deregistering slot %s", slot->hotplug_slot->name); 332 - status = pci_hp_deregister(slot->hotplug_slot); 333 - if(status) { 334 - err("pci_hp_deregister failed with error %d", 335 - status); 336 - return status; 337 - } 338 - 339 list_del(&slot->slot_list); 340 slots--; 341 } 342 } 343 up_write(&list_rwsem); 344 - return 0; 345 } 346 347 /* This is the interrupt mode interrupt handler */ ··· 349 dbg("entered cpci_hp_intr"); 350 351 /* Check to see if it was our interrupt */ 352 - if((controller->irq_flags & SA_SHIRQ) && 353 !controller->ops->check_irq(controller->dev_id)) { 354 dbg("exited cpci_hp_intr, not our interrupt"); 355 return IRQ_NONE; ··· 371 * INS bits of the cold-inserted devices. 
372 */ 373 static int 374 - init_slots(void) 375 { 376 struct slot *slot; 377 - struct list_head *tmp; 378 struct pci_dev* dev; 379 380 dbg("%s - enter", __FUNCTION__); 381 down_read(&list_rwsem); 382 - if(!slots) { 383 up_read(&list_rwsem); 384 return -1; 385 } 386 - list_for_each(tmp, &slot_list) { 387 - slot = list_entry(tmp, struct slot, slot_list); 388 dbg("%s - looking at slot %s", 389 __FUNCTION__, slot->hotplug_slot->name); 390 - if(cpci_check_and_clear_ins(slot)) { 391 dbg("%s - cleared INS for slot %s", 392 __FUNCTION__, slot->hotplug_slot->name); 393 - dev = pci_find_slot(slot->bus->number, PCI_DEVFN(slot->number, 0)); 394 - if(dev) { 395 - if(update_adapter_status(slot->hotplug_slot, 1)) { 396 - warn("failure to update adapter file"); 397 - } 398 - if(update_latch_status(slot->hotplug_slot, 1)) { 399 - warn("failure to update latch file"); 400 - } 401 - slot->dev = dev; 402 - } else { 403 - err("%s - no driver attached to device in slot %s", 404 - __FUNCTION__, slot->hotplug_slot->name); 405 - } 406 } 407 } 408 up_read(&list_rwsem); ··· 406 check_slots(void) 407 { 408 struct slot *slot; 409 - struct list_head *tmp; 410 int extracted; 411 int inserted; 412 u16 hs_csr; 413 414 down_read(&list_rwsem); 415 - if(!slots) { 416 up_read(&list_rwsem); 417 err("no slots registered, shutting down"); 418 return -1; 419 } 420 extracted = inserted = 0; 421 - list_for_each(tmp, &slot_list) { 422 - slot = list_entry(tmp, struct slot, slot_list); 423 dbg("%s - looking at slot %s", 424 __FUNCTION__, slot->hotplug_slot->name); 425 - if(cpci_check_and_clear_ins(slot)) { 426 - /* Some broken hardware (e.g. PLX 9054AB) asserts ENUM# twice... 
*/ 427 - if(slot->dev) { 428 - warn("slot %s already inserted", slot->hotplug_slot->name); 429 inserted++; 430 continue; 431 } ··· 444 /* Configure device */ 445 dbg("%s - configuring slot %s", 446 __FUNCTION__, slot->hotplug_slot->name); 447 - if(cpci_configure_slot(slot)) { 448 err("%s - could not configure slot %s", 449 __FUNCTION__, slot->hotplug_slot->name); 450 continue; ··· 457 dbg("%s - slot %s HS_CSR (2) = %04x", 458 __FUNCTION__, slot->hotplug_slot->name, hs_csr); 459 460 - if(update_latch_status(slot->hotplug_slot, 1)) { 461 warn("failure to update latch file"); 462 - } 463 464 - if(update_adapter_status(slot->hotplug_slot, 1)) { 465 warn("failure to update adapter file"); 466 - } 467 468 cpci_led_off(slot); 469 ··· 471 __FUNCTION__, slot->hotplug_slot->name, hs_csr); 472 473 inserted++; 474 - } else if(cpci_check_ext(slot)) { 475 /* Process extraction request */ 476 dbg("%s - slot %s extracted", 477 __FUNCTION__, slot->hotplug_slot->name); ··· 481 dbg("%s - slot %s HS_CSR = %04x", 482 __FUNCTION__, slot->hotplug_slot->name, hs_csr); 483 484 - if(!slot->extracting) { 485 - if(update_latch_status(slot->hotplug_slot, 0)) { 486 warn("failure to update latch file"); 487 - 488 } 489 - atomic_inc(&extracting); 490 slot->extracting = 1; 491 } 492 extracted++; 493 - } else if(slot->extracting) { 494 hs_csr = cpci_get_hs_csr(slot); 495 - if(hs_csr == 0xffff) { 496 /* 497 * Hmmm, we're likely hosed at this point, should we 498 * bother trying to tell the driver or not? 
499 */ 500 err("card in slot %s was improperly removed", 501 slot->hotplug_slot->name); 502 - if(update_adapter_status(slot->hotplug_slot, 0)) { 503 warn("failure to update adapter file"); 504 - } 505 slot->extracting = 0; 506 atomic_dec(&extracting); 507 } ··· 508 up_read(&list_rwsem); 509 dbg("inserted=%d, extracted=%d, extracting=%d", 510 inserted, extracted, atomic_read(&extracting)); 511 - if(inserted || extracted) { 512 return extracted; 513 - } 514 - else if(!atomic_read(&extracting)) { 515 err("cannot find ENUM# source, shutting down"); 516 return -1; 517 } ··· 528 unlock_kernel(); 529 530 dbg("%s - event thread started", __FUNCTION__); 531 - while(1) { 532 dbg("event thread sleeping"); 533 down_interruptible(&event_semaphore); 534 dbg("event thread woken, thread_finished = %d", 535 thread_finished); 536 - if(thread_finished || signal_pending(current)) 537 break; 538 do { 539 rc = check_slots(); ··· 545 thread_finished = 1; 546 break; 547 } 548 - } while(atomic_read(&extracting) != 0); 549 550 /* Re-enable ENUM# interrupt */ 551 dbg("%s - re-enabling irq", __FUNCTION__); ··· 568 daemonize("cpci_hp_polld"); 569 unlock_kernel(); 570 571 - while(1) { 572 - if(thread_finished || signal_pending(current)) 573 break; 574 - if(controller->ops->query_enum()) { 575 do { 576 rc = check_slots(); 577 - if(rc > 0) { 578 /* Give userspace a chance to handle extraction */ 579 msleep(500); 580 - } else if(rc < 0) { 581 dbg("%s - error checking slots", __FUNCTION__); 582 thread_finished = 1; 583 break; 584 } 585 - } while(atomic_read(&extracting) != 0); 586 } 587 msleep(100); 588 } ··· 601 init_MUTEX_LOCKED(&thread_exit); 602 thread_finished = 0; 603 604 - if(controller->irq) { 605 pid = kernel_thread(event_thread, NULL, 0); 606 - } else { 607 pid = kernel_thread(poll_thread, NULL, 0); 608 - } 609 - if(pid < 0) { 610 err("Can't start up our thread"); 611 return -1; 612 } ··· 618 { 619 thread_finished = 1; 620 dbg("thread finish command given"); 621 - if(controller->irq) { 
622 up(&event_semaphore); 623 - } 624 dbg("wait for thread to exit"); 625 down(&thread_exit); 626 } ··· 629 { 630 int status = 0; 631 632 - if(!controller) { 633 - controller = new_controller; 634 - if(controller->irq) { 635 - if(request_irq(controller->irq, 636 - cpci_hp_intr, 637 - controller->irq_flags, 638 - MY_NAME, controller->dev_id)) { 639 - err("Can't get irq %d for the hotplug cPCI controller", controller->irq); 640 - status = -ENODEV; 641 - } 642 - dbg("%s - acquired controller irq %d", __FUNCTION__, 643 - controller->irq); 644 } 645 - } else { 646 - err("cPCI hotplug controller already registered"); 647 - status = -1; 648 } 649 return status; 650 } 651 652 int ··· 681 { 682 int status = 0; 683 684 - if(controller) { 685 - if(atomic_read(&extracting) != 0) { 686 - return -EBUSY; 687 - } 688 - if(!thread_finished) { 689 cpci_stop_thread(); 690 - } 691 - if(controller->irq) { 692 free_irq(controller->irq, controller->dev_id); 693 - } 694 controller = NULL; 695 - } else { 696 status = -ENODEV; 697 - } 698 return status; 699 } 700 ··· 700 int status; 701 702 dbg("%s - enter", __FUNCTION__); 703 - if(!controller) { 704 return -ENODEV; 705 - } 706 707 down_read(&list_rwsem); 708 - if(list_empty(&slot_list)) { 709 up_read(&list_rwsem); 710 return -ENODEV; 711 } 712 up_read(&list_rwsem); 713 714 - if(first) { 715 - status = init_slots(); 716 - if(status) { 717 - return status; 718 - } 719 first = 0; 720 - } 721 722 status = cpci_start_thread(); 723 - if(status) { 724 return status; 725 - } 726 dbg("%s - thread started", __FUNCTION__); 727 728 - if(controller->irq) { 729 /* Start enum interrupt processing */ 730 dbg("%s - enabling irq", __FUNCTION__); 731 controller->ops->enable_irq(); ··· 733 int 734 cpci_hp_stop(void) 735 { 736 - if(!controller) { 737 return -ENODEV; 738 - } 739 - if(atomic_read(&extracting) != 0) { 740 - return -EBUSY; 741 - } 742 - if(controller->irq) { 743 /* Stop enum interrupt processing */ 744 dbg("%s - disabling irq", __FUNCTION__); 745 
controller->ops->disable_irq(); 746 } 747 cpci_stop_thread(); 748 return 0; 749 - } 750 - 751 - static void __exit 752 - cleanup_slots(void) 753 - { 754 - struct list_head *tmp; 755 - struct slot *slot; 756 - 757 - /* 758 - * Unregister all of our slots with the pci_hotplug subsystem, 759 - * and free up all memory that we had allocated. 760 - */ 761 - down_write(&list_rwsem); 762 - if(!slots) { 763 - goto null_cleanup; 764 - } 765 - list_for_each(tmp, &slot_list) { 766 - slot = list_entry(tmp, struct slot, slot_list); 767 - list_del(&slot->slot_list); 768 - pci_hp_deregister(slot->hotplug_slot); 769 - kfree(slot->hotplug_slot->info); 770 - kfree(slot->hotplug_slot->name); 771 - kfree(slot->hotplug_slot); 772 - kfree(slot); 773 - } 774 - null_cleanup: 775 - up_write(&list_rwsem); 776 - return; 777 } 778 779 int __init ··· 757 /* 758 * Clean everything up. 759 */ 760 - cleanup_slots(); 761 } 762 763 EXPORT_SYMBOL_GPL(cpci_hp_register_controller);
··· 1 /* 2 * CompactPCI Hot Plug Driver 3 * 4 + * Copyright (C) 2002,2005 SOMA Networks, Inc. 5 * Copyright (C) 2001 Greg Kroah-Hartman (greg@kroah.com) 6 * Copyright (C) 2001 IBM Corp. 7 * ··· 45 46 #define dbg(format, arg...) \ 47 do { \ 48 + if (cpci_debug) \ 49 printk (KERN_DEBUG "%s: " format "\n", \ 50 MY_NAME , ## arg); \ 51 + } while (0) 52 #define err(format, arg...) printk(KERN_ERR "%s: " format "\n", MY_NAME , ## arg) 53 #define info(format, arg...) printk(KERN_INFO "%s: " format "\n", MY_NAME , ## arg) 54 #define warn(format, arg...) printk(KERN_WARNING "%s: " format "\n", MY_NAME , ## arg) ··· 111 112 dbg("%s - physical_slot = %s", __FUNCTION__, hotplug_slot->name); 113 114 + if (controller->ops->set_power) 115 retval = controller->ops->set_power(slot, 1); 116 return retval; 117 } 118 ··· 126 127 dbg("%s - physical_slot = %s", __FUNCTION__, hotplug_slot->name); 128 129 + down_write(&list_rwsem); 130 + 131 /* Unconfigure device */ 132 dbg("%s - unconfiguring slot %s", 133 __FUNCTION__, slot->hotplug_slot->name); 134 + if ((retval = cpci_unconfigure_slot(slot))) { 135 err("%s - could not unconfigure slot %s", 136 __FUNCTION__, slot->hotplug_slot->name); 137 + goto disable_error; 138 } 139 dbg("%s - finished unconfiguring slot %s", 140 __FUNCTION__, slot->hotplug_slot->name); 141 142 /* Clear EXT (by setting it) */ 143 + if (cpci_clear_ext(slot)) { 144 err("%s - could not clear EXT for slot %s", 145 __FUNCTION__, slot->hotplug_slot->name); 146 retval = -ENODEV; 147 + goto disable_error; 148 } 149 cpci_led_on(slot); 150 151 + if (controller->ops->set_power) 152 + if ((retval = controller->ops->set_power(slot, 0))) 153 + goto disable_error; 154 155 + if (update_adapter_status(slot->hotplug_slot, 0)) 156 warn("failure to update adapter file"); 157 158 + if (slot->extracting) { 159 slot->extracting = 0; 160 atomic_dec(&extracting); 161 } 162 + disable_error: 163 + up_write(&list_rwsem); 164 return retval; 165 } 166 ··· 165 { 166 u8 power = 1; 167 168 + if 
(controller->ops->get_power) 169 power = controller->ops->get_power(slot); 170 return power; 171 } 172 ··· 237 int status = -ENOMEM; 238 int i; 239 240 + if (!(controller && bus)) 241 return -ENODEV; 242 243 /* 244 * Create a structure for each slot, and register that slot ··· 316 cpci_hp_unregister_bus(struct pci_bus *bus) 317 { 318 struct slot *slot; 319 + struct slot *tmp; 320 + int status = 0; 321 322 down_write(&list_rwsem); 323 + if (!slots) { 324 up_write(&list_rwsem); 325 return -1; 326 } 327 + list_for_each_entry_safe(slot, tmp, &slot_list, slot_list) { 328 + if (slot->bus == bus) { 329 list_del(&slot->slot_list); 330 slots--; 331 + 332 + dbg("deregistering slot %s", slot->hotplug_slot->name); 333 + status = pci_hp_deregister(slot->hotplug_slot); 334 + if (status) { 335 + err("pci_hp_deregister failed with error %d", 336 + status); 337 + break; 338 + } 339 } 340 } 341 up_write(&list_rwsem); 342 + return status; 343 } 344 345 /* This is the interrupt mode interrupt handler */ ··· 351 dbg("entered cpci_hp_intr"); 352 353 /* Check to see if it was our interrupt */ 354 + if ((controller->irq_flags & SA_SHIRQ) && 355 !controller->ops->check_irq(controller->dev_id)) { 356 dbg("exited cpci_hp_intr, not our interrupt"); 357 return IRQ_NONE; ··· 373 * INS bits of the cold-inserted devices. 
374 */ 375 static int 376 + init_slots(int clear_ins) 377 { 378 struct slot *slot; 379 struct pci_dev* dev; 380 381 dbg("%s - enter", __FUNCTION__); 382 down_read(&list_rwsem); 383 + if (!slots) { 384 up_read(&list_rwsem); 385 return -1; 386 } 387 + list_for_each_entry(slot, &slot_list, slot_list) { 388 dbg("%s - looking at slot %s", 389 __FUNCTION__, slot->hotplug_slot->name); 390 + if (clear_ins && cpci_check_and_clear_ins(slot)) 391 dbg("%s - cleared INS for slot %s", 392 __FUNCTION__, slot->hotplug_slot->name); 393 + dev = pci_get_slot(slot->bus, PCI_DEVFN(slot->number, 0)); 394 + if (dev) { 395 + if (update_adapter_status(slot->hotplug_slot, 1)) 396 + warn("failure to update adapter file"); 397 + if (update_latch_status(slot->hotplug_slot, 1)) 398 + warn("failure to update latch file"); 399 + slot->dev = dev; 400 } 401 } 402 up_read(&list_rwsem); ··· 416 check_slots(void) 417 { 418 struct slot *slot; 419 int extracted; 420 int inserted; 421 u16 hs_csr; 422 423 down_read(&list_rwsem); 424 + if (!slots) { 425 up_read(&list_rwsem); 426 err("no slots registered, shutting down"); 427 return -1; 428 } 429 extracted = inserted = 0; 430 + list_for_each_entry(slot, &slot_list, slot_list) { 431 dbg("%s - looking at slot %s", 432 __FUNCTION__, slot->hotplug_slot->name); 433 + if (cpci_check_and_clear_ins(slot)) { 434 + /* 435 + * Some broken hardware (e.g. PLX 9054AB) asserts 436 + * ENUM# twice... 
437 + */ 438 + if (slot->dev) { 439 + warn("slot %s already inserted", 440 + slot->hotplug_slot->name); 441 inserted++; 442 continue; 443 } ··· 452 /* Configure device */ 453 dbg("%s - configuring slot %s", 454 __FUNCTION__, slot->hotplug_slot->name); 455 + if (cpci_configure_slot(slot)) { 456 err("%s - could not configure slot %s", 457 __FUNCTION__, slot->hotplug_slot->name); 458 continue; ··· 465 dbg("%s - slot %s HS_CSR (2) = %04x", 466 __FUNCTION__, slot->hotplug_slot->name, hs_csr); 467 468 + if (update_latch_status(slot->hotplug_slot, 1)) 469 warn("failure to update latch file"); 470 471 + if (update_adapter_status(slot->hotplug_slot, 1)) 472 warn("failure to update adapter file"); 473 474 cpci_led_off(slot); 475 ··· 481 __FUNCTION__, slot->hotplug_slot->name, hs_csr); 482 483 inserted++; 484 + } else if (cpci_check_ext(slot)) { 485 /* Process extraction request */ 486 dbg("%s - slot %s extracted", 487 __FUNCTION__, slot->hotplug_slot->name); ··· 491 dbg("%s - slot %s HS_CSR = %04x", 492 __FUNCTION__, slot->hotplug_slot->name, hs_csr); 493 494 + if (!slot->extracting) { 495 + if (update_latch_status(slot->hotplug_slot, 0)) { 496 warn("failure to update latch file"); 497 } 498 slot->extracting = 1; 499 + atomic_inc(&extracting); 500 } 501 extracted++; 502 + } else if (slot->extracting) { 503 hs_csr = cpci_get_hs_csr(slot); 504 + if (hs_csr == 0xffff) { 505 /* 506 * Hmmm, we're likely hosed at this point, should we 507 * bother trying to tell the driver or not? 
508 */ 509 err("card in slot %s was improperly removed", 510 slot->hotplug_slot->name); 511 + if (update_adapter_status(slot->hotplug_slot, 0)) 512 warn("failure to update adapter file"); 513 slot->extracting = 0; 514 atomic_dec(&extracting); 515 } ··· 520 up_read(&list_rwsem); 521 dbg("inserted=%d, extracted=%d, extracting=%d", 522 inserted, extracted, atomic_read(&extracting)); 523 + if (inserted || extracted) 524 return extracted; 525 + else if (!atomic_read(&extracting)) { 526 err("cannot find ENUM# source, shutting down"); 527 return -1; 528 } ··· 541 unlock_kernel(); 542 543 dbg("%s - event thread started", __FUNCTION__); 544 + while (1) { 545 dbg("event thread sleeping"); 546 down_interruptible(&event_semaphore); 547 dbg("event thread woken, thread_finished = %d", 548 thread_finished); 549 + if (thread_finished || signal_pending(current)) 550 break; 551 do { 552 rc = check_slots(); ··· 558 thread_finished = 1; 559 break; 560 } 561 + } while (atomic_read(&extracting) && !thread_finished); 562 + if (thread_finished) 563 + break; 564 565 /* Re-enable ENUM# interrupt */ 566 dbg("%s - re-enabling irq", __FUNCTION__); ··· 579 daemonize("cpci_hp_polld"); 580 unlock_kernel(); 581 582 + while (1) { 583 + if (thread_finished || signal_pending(current)) 584 break; 585 + if (controller->ops->query_enum()) { 586 do { 587 rc = check_slots(); 588 + if (rc > 0) { 589 /* Give userspace a chance to handle extraction */ 590 msleep(500); 591 + } else if (rc < 0) { 592 dbg("%s - error checking slots", __FUNCTION__); 593 thread_finished = 1; 594 break; 595 } 596 + } while (atomic_read(&extracting) && !thread_finished); 597 } 598 msleep(100); 599 } ··· 612 init_MUTEX_LOCKED(&thread_exit); 613 thread_finished = 0; 614 615 + if (controller->irq) 616 pid = kernel_thread(event_thread, NULL, 0); 617 + else 618 pid = kernel_thread(poll_thread, NULL, 0); 619 + if (pid < 0) { 620 err("Can't start up our thread"); 621 return -1; 622 } ··· 630 { 631 thread_finished = 1; 632 dbg("thread 
finish command given"); 633 + if (controller->irq) 634 up(&event_semaphore); 635 dbg("wait for thread to exit"); 636 down(&thread_exit); 637 } ··· 642 { 643 int status = 0; 644 645 + if (controller) 646 + return -1; 647 + if (!(new_controller && new_controller->ops)) 648 + return -EINVAL; 649 + if (new_controller->irq) { 650 + if (!(new_controller->ops->enable_irq && 651 + new_controller->ops->disable_irq)) 652 + status = -EINVAL; 653 + if (request_irq(new_controller->irq, 654 + cpci_hp_intr, 655 + new_controller->irq_flags, 656 + MY_NAME, 657 + new_controller->dev_id)) { 658 + err("Can't get irq %d for the hotplug cPCI controller", 659 + new_controller->irq); 660 + status = -ENODEV; 661 } 662 + dbg("%s - acquired controller irq %d", 663 + __FUNCTION__, new_controller->irq); 664 } 665 + if (!status) 666 + controller = new_controller; 667 return status; 668 + } 669 + 670 + static void 671 + cleanup_slots(void) 672 + { 673 + struct slot *slot; 674 + struct slot *tmp; 675 + 676 + /* 677 + * Unregister all of our slots with the pci_hotplug subsystem, 678 + * and free up all memory that we had allocated. 
679 + */ 680 + down_write(&list_rwsem); 681 + if (!slots) 682 + goto cleanup_null; 683 + list_for_each_entry_safe(slot, tmp, &slot_list, slot_list) { 684 + list_del(&slot->slot_list); 685 + pci_hp_deregister(slot->hotplug_slot); 686 + } 687 + cleanup_null: 688 + up_write(&list_rwsem); 689 + return; 690 } 691 692 int ··· 667 { 668 int status = 0; 669 670 + if (controller) { 671 + if (!thread_finished) 672 cpci_stop_thread(); 673 + if (controller->irq) 674 free_irq(controller->irq, controller->dev_id); 675 controller = NULL; 676 + cleanup_slots(); 677 + } else 678 status = -ENODEV; 679 return status; 680 } 681 ··· 691 int status; 692 693 dbg("%s - enter", __FUNCTION__); 694 + if (!controller) 695 return -ENODEV; 696 697 down_read(&list_rwsem); 698 + if (list_empty(&slot_list)) { 699 up_read(&list_rwsem); 700 return -ENODEV; 701 } 702 up_read(&list_rwsem); 703 704 + status = init_slots(first); 705 + if (first) 706 first = 0; 707 + if (status) 708 + return status; 709 710 status = cpci_start_thread(); 711 + if (status) 712 return status; 713 dbg("%s - thread started", __FUNCTION__); 714 715 + if (controller->irq) { 716 /* Start enum interrupt processing */ 717 dbg("%s - enabling irq", __FUNCTION__); 718 controller->ops->enable_irq(); ··· 728 int 729 cpci_hp_stop(void) 730 { 731 + if (!controller) 732 return -ENODEV; 733 + if (controller->irq) { 734 /* Stop enum interrupt processing */ 735 dbg("%s - disabling irq", __FUNCTION__); 736 controller->ops->disable_irq(); 737 } 738 cpci_stop_thread(); 739 return 0; 740 } 741 742 int __init ··· 784 /* 785 * Clean everything up. 786 */ 787 + cpci_hp_stop(); 788 + cpci_hp_unregister_controller(controller); 789 } 790 791 EXPORT_SYMBOL_GPL(cpci_hp_register_controller);
+54 -90
drivers/pci/hotplug/cpci_hotplug_pci.c
··· 1 /* 2 * CompactPCI Hot Plug Driver PCI functions 3 * 4 - * Copyright (C) 2002 by SOMA Networks, Inc. 5 * 6 * All rights reserved. 7 * ··· 38 39 #define dbg(format, arg...) \ 40 do { \ 41 - if(cpci_debug) \ 42 printk (KERN_DEBUG "%s: " format "\n", \ 43 MY_NAME , ## arg); \ 44 - } while(0) 45 #define err(format, arg...) printk(KERN_ERR "%s: " format "\n", MY_NAME , ## arg) 46 #define info(format, arg...) printk(KERN_INFO "%s: " format "\n", MY_NAME , ## arg) 47 #define warn(format, arg...) printk(KERN_WARNING "%s: " format "\n", MY_NAME , ## arg) ··· 57 hs_cap = pci_bus_find_capability(slot->bus, 58 slot->devfn, 59 PCI_CAP_ID_CHSWP); 60 - if(!hs_cap) { 61 return 0; 62 - } 63 64 - if(pci_bus_read_config_word(slot->bus, 65 slot->devfn, 66 hs_cap + 2, 67 - &hs_csr)) { 68 return 0; 69 - } 70 return hs_csr & 0x0008 ? 1 : 0; 71 } 72 ··· 77 hs_cap = pci_bus_find_capability(slot->bus, 78 slot->devfn, 79 PCI_CAP_ID_CHSWP); 80 - if(!hs_cap) { 81 return 0; 82 - } 83 - 84 - if(pci_bus_read_config_word(slot->bus, 85 slot->devfn, 86 hs_cap + 2, 87 - &hs_csr)) { 88 return 0; 89 - } 90 - if(status) { 91 hs_csr |= HS_CSR_LOO; 92 - } else { 93 hs_csr &= ~HS_CSR_LOO; 94 - } 95 - if(pci_bus_write_config_word(slot->bus, 96 slot->devfn, 97 hs_cap + 2, 98 - hs_csr)) { 99 return 0; 100 - } 101 return 1; 102 } 103 ··· 104 hs_cap = pci_bus_find_capability(slot->bus, 105 slot->devfn, 106 PCI_CAP_ID_CHSWP); 107 - if(!hs_cap) { 108 return 0xFFFF; 109 - } 110 - 111 - if(pci_bus_read_config_word(slot->bus, 112 slot->devfn, 113 hs_cap + 2, 114 - &hs_csr)) { 115 return 0xFFFF; 116 - } 117 return hs_csr; 118 } 119 ··· 123 hs_cap = pci_bus_find_capability(slot->bus, 124 slot->devfn, 125 PCI_CAP_ID_CHSWP); 126 - if(!hs_cap) { 127 return 0; 128 - } 129 - if(pci_bus_read_config_word(slot->bus, 130 slot->devfn, 131 hs_cap + 2, 132 - &hs_csr)) { 133 return 0; 134 - } 135 - if(hs_csr & HS_CSR_INS) { 136 /* Clear INS (by setting it) */ 137 - if(pci_bus_write_config_word(slot->bus, 138 slot->devfn, 139 
hs_cap + 2, 140 - hs_csr)) { 141 ins = 0; 142 - } 143 - ins = 1; 144 } 145 return ins; 146 } ··· 152 hs_cap = pci_bus_find_capability(slot->bus, 153 slot->devfn, 154 PCI_CAP_ID_CHSWP); 155 - if(!hs_cap) { 156 return 0; 157 - } 158 - if(pci_bus_read_config_word(slot->bus, 159 slot->devfn, 160 hs_cap + 2, 161 - &hs_csr)) { 162 return 0; 163 - } 164 - if(hs_csr & HS_CSR_EXT) { 165 ext = 1; 166 - } 167 return ext; 168 } 169 ··· 172 hs_cap = pci_bus_find_capability(slot->bus, 173 slot->devfn, 174 PCI_CAP_ID_CHSWP); 175 - if(!hs_cap) { 176 return -ENODEV; 177 - } 178 - if(pci_bus_read_config_word(slot->bus, 179 slot->devfn, 180 hs_cap + 2, 181 - &hs_csr)) { 182 return -ENODEV; 183 - } 184 - if(hs_csr & HS_CSR_EXT) { 185 /* Clear EXT (by setting it) */ 186 - if(pci_bus_write_config_word(slot->bus, 187 slot->devfn, 188 hs_cap + 2, 189 - hs_csr)) { 190 return -ENODEV; 191 - } 192 } 193 return 0; 194 } ··· 198 hs_cap = pci_bus_find_capability(slot->bus, 199 slot->devfn, 200 PCI_CAP_ID_CHSWP); 201 - if(!hs_cap) { 202 return -ENODEV; 203 - } 204 - if(pci_bus_read_config_word(slot->bus, 205 slot->devfn, 206 hs_cap + 2, 207 - &hs_csr)) { 208 return -ENODEV; 209 - } 210 - if((hs_csr & HS_CSR_LOO) != HS_CSR_LOO) { 211 hs_csr |= HS_CSR_LOO; 212 - if(pci_bus_write_config_word(slot->bus, 213 slot->devfn, 214 hs_cap + 2, 215 hs_csr)) { ··· 227 hs_cap = pci_bus_find_capability(slot->bus, 228 slot->devfn, 229 PCI_CAP_ID_CHSWP); 230 - if(!hs_cap) { 231 return -ENODEV; 232 - } 233 - if(pci_bus_read_config_word(slot->bus, 234 slot->devfn, 235 hs_cap + 2, 236 - &hs_csr)) { 237 return -ENODEV; 238 - } 239 - if(hs_csr & HS_CSR_LOO) { 240 hs_csr &= ~HS_CSR_LOO; 241 - if(pci_bus_write_config_word(slot->bus, 242 slot->devfn, 243 hs_cap + 2, 244 hs_csr)) { ··· 253 * Device configuration functions 254 */ 255 256 - static void cpci_enable_device(struct pci_dev *dev) 257 - { 258 - struct pci_bus *bus; 259 - 260 - pci_enable_device(dev); 261 - if(dev->hdr_type == PCI_HEADER_TYPE_BRIDGE) { 262 - bus = 
dev->subordinate; 263 - list_for_each_entry(dev, &bus->devices, bus_list) { 264 - cpci_enable_device(dev); 265 - } 266 - } 267 - } 268 - 269 int cpci_configure_slot(struct slot* slot) 270 { 271 unsigned char busnr; ··· 260 261 dbg("%s - enter", __FUNCTION__); 262 263 - if(slot->dev == NULL) { 264 dbg("pci_dev null, finding %02x:%02x:%x", 265 slot->bus->number, PCI_SLOT(slot->devfn), PCI_FUNC(slot->devfn)); 266 - slot->dev = pci_find_slot(slot->bus->number, slot->devfn); 267 } 268 269 /* Still NULL? Well then scan for it! */ 270 - if(slot->dev == NULL) { 271 int n; 272 dbg("pci_dev still null"); 273 ··· 277 */ 278 n = pci_scan_slot(slot->bus, slot->devfn); 279 dbg("%s: pci_scan_slot returned %d", __FUNCTION__, n); 280 - if(n > 0) 281 pci_bus_add_devices(slot->bus); 282 - slot->dev = pci_find_slot(slot->bus->number, slot->devfn); 283 - if(slot->dev == NULL) { 284 err("Could not find PCI device for slot %02x", slot->number); 285 return 1; 286 } ··· 295 296 pci_bus_assign_resources(slot->dev->bus); 297 298 - cpci_enable_device(slot->dev); 299 - 300 dbg("%s - exit", __FUNCTION__); 301 return 0; 302 } ··· 305 struct pci_dev *dev; 306 307 dbg("%s - enter", __FUNCTION__); 308 - if(!slot->dev) { 309 err("No device for slot %02x\n", slot->number); 310 return -ENODEV; 311 } 312 313 for (i = 0; i < 8; i++) { 314 - dev = pci_find_slot(slot->bus->number, 315 PCI_DEVFN(PCI_SLOT(slot->devfn), i)); 316 - if(dev) { 317 pci_remove_bus_device(dev); 318 slot->dev = NULL; 319 }
··· 1 /* 2 * CompactPCI Hot Plug Driver PCI functions 3 * 4 + * Copyright (C) 2002,2005 by SOMA Networks, Inc. 5 * 6 * All rights reserved. 7 * ··· 38 39 #define dbg(format, arg...) \ 40 do { \ 41 + if (cpci_debug) \ 42 printk (KERN_DEBUG "%s: " format "\n", \ 43 MY_NAME , ## arg); \ 44 + } while (0) 45 #define err(format, arg...) printk(KERN_ERR "%s: " format "\n", MY_NAME , ## arg) 46 #define info(format, arg...) printk(KERN_INFO "%s: " format "\n", MY_NAME , ## arg) 47 #define warn(format, arg...) printk(KERN_WARNING "%s: " format "\n", MY_NAME , ## arg) ··· 57 hs_cap = pci_bus_find_capability(slot->bus, 58 slot->devfn, 59 PCI_CAP_ID_CHSWP); 60 + if (!hs_cap) 61 return 0; 62 63 + if (pci_bus_read_config_word(slot->bus, 64 slot->devfn, 65 hs_cap + 2, 66 + &hs_csr)) 67 return 0; 68 + 69 return hs_csr & 0x0008 ? 1 : 0; 70 } 71 ··· 78 hs_cap = pci_bus_find_capability(slot->bus, 79 slot->devfn, 80 PCI_CAP_ID_CHSWP); 81 + if (!hs_cap) 82 return 0; 83 + if (pci_bus_read_config_word(slot->bus, 84 slot->devfn, 85 hs_cap + 2, 86 + &hs_csr)) 87 return 0; 88 + if (status) 89 hs_csr |= HS_CSR_LOO; 90 + else 91 hs_csr &= ~HS_CSR_LOO; 92 + if (pci_bus_write_config_word(slot->bus, 93 slot->devfn, 94 hs_cap + 2, 95 + hs_csr)) 96 return 0; 97 return 1; 98 } 99 ··· 110 hs_cap = pci_bus_find_capability(slot->bus, 111 slot->devfn, 112 PCI_CAP_ID_CHSWP); 113 + if (!hs_cap) 114 return 0xFFFF; 115 + if (pci_bus_read_config_word(slot->bus, 116 slot->devfn, 117 hs_cap + 2, 118 + &hs_csr)) 119 return 0xFFFF; 120 return hs_csr; 121 } 122 ··· 132 hs_cap = pci_bus_find_capability(slot->bus, 133 slot->devfn, 134 PCI_CAP_ID_CHSWP); 135 + if (!hs_cap) 136 return 0; 137 + if (pci_bus_read_config_word(slot->bus, 138 slot->devfn, 139 hs_cap + 2, 140 + &hs_csr)) 141 return 0; 142 + if (hs_csr & HS_CSR_INS) { 143 /* Clear INS (by setting it) */ 144 + if (pci_bus_write_config_word(slot->bus, 145 slot->devfn, 146 hs_cap + 2, 147 + hs_csr)) 148 ins = 0; 149 + else 150 + ins = 1; 151 } 152 return ins; 
153 } ··· 163 hs_cap = pci_bus_find_capability(slot->bus, 164 slot->devfn, 165 PCI_CAP_ID_CHSWP); 166 + if (!hs_cap) 167 return 0; 168 + if (pci_bus_read_config_word(slot->bus, 169 slot->devfn, 170 hs_cap + 2, 171 + &hs_csr)) 172 return 0; 173 + if (hs_csr & HS_CSR_EXT) 174 ext = 1; 175 return ext; 176 } 177 ··· 186 hs_cap = pci_bus_find_capability(slot->bus, 187 slot->devfn, 188 PCI_CAP_ID_CHSWP); 189 + if (!hs_cap) 190 return -ENODEV; 191 + if (pci_bus_read_config_word(slot->bus, 192 slot->devfn, 193 hs_cap + 2, 194 + &hs_csr)) 195 return -ENODEV; 196 + if (hs_csr & HS_CSR_EXT) { 197 /* Clear EXT (by setting it) */ 198 + if (pci_bus_write_config_word(slot->bus, 199 slot->devfn, 200 hs_cap + 2, 201 + hs_csr)) 202 return -ENODEV; 203 } 204 return 0; 205 } ··· 215 hs_cap = pci_bus_find_capability(slot->bus, 216 slot->devfn, 217 PCI_CAP_ID_CHSWP); 218 + if (!hs_cap) 219 return -ENODEV; 220 + if (pci_bus_read_config_word(slot->bus, 221 slot->devfn, 222 hs_cap + 2, 223 + &hs_csr)) 224 return -ENODEV; 225 + if ((hs_csr & HS_CSR_LOO) != HS_CSR_LOO) { 226 hs_csr |= HS_CSR_LOO; 227 + if (pci_bus_write_config_word(slot->bus, 228 slot->devfn, 229 hs_cap + 2, 230 hs_csr)) { ··· 246 hs_cap = pci_bus_find_capability(slot->bus, 247 slot->devfn, 248 PCI_CAP_ID_CHSWP); 249 + if (!hs_cap) 250 return -ENODEV; 251 + if (pci_bus_read_config_word(slot->bus, 252 slot->devfn, 253 hs_cap + 2, 254 + &hs_csr)) 255 return -ENODEV; 256 + if (hs_csr & HS_CSR_LOO) { 257 hs_csr &= ~HS_CSR_LOO; 258 + if (pci_bus_write_config_word(slot->bus, 259 slot->devfn, 260 hs_cap + 2, 261 hs_csr)) { ··· 274 * Device configuration functions 275 */ 276 277 int cpci_configure_slot(struct slot* slot) 278 { 279 unsigned char busnr; ··· 294 295 dbg("%s - enter", __FUNCTION__); 296 297 + if (slot->dev == NULL) { 298 dbg("pci_dev null, finding %02x:%02x:%x", 299 slot->bus->number, PCI_SLOT(slot->devfn), PCI_FUNC(slot->devfn)); 300 + slot->dev = pci_get_slot(slot->bus, slot->devfn); 301 } 302 303 /* Still NULL? 
Well then scan for it! */ 304 + if (slot->dev == NULL) { 305 int n; 306 dbg("pci_dev still null"); 307 ··· 311 */ 312 n = pci_scan_slot(slot->bus, slot->devfn); 313 dbg("%s: pci_scan_slot returned %d", __FUNCTION__, n); 314 + if (n > 0) 315 pci_bus_add_devices(slot->bus); 316 + slot->dev = pci_get_slot(slot->bus, slot->devfn); 317 + if (slot->dev == NULL) { 318 err("Could not find PCI device for slot %02x", slot->number); 319 return 1; 320 } ··· 329 330 pci_bus_assign_resources(slot->dev->bus); 331 332 dbg("%s - exit", __FUNCTION__); 333 return 0; 334 } ··· 341 struct pci_dev *dev; 342 343 dbg("%s - enter", __FUNCTION__); 344 + if (!slot->dev) { 345 err("No device for slot %02x\n", slot->number); 346 return -ENODEV; 347 } 348 349 for (i = 0; i < 8; i++) { 350 + dev = pci_get_slot(slot->bus, 351 PCI_DEVFN(PCI_SLOT(slot->devfn), i)); 352 + if (dev) { 353 pci_remove_bus_device(dev); 354 slot->dev = NULL; 355 }
+2 -2
drivers/pci/hotplug/shpchprm_acpi.c
··· 1626 pci_bus->number = func->bus; 1627 devfn = PCI_DEVFN(func->device, func->function); 1628 1629 - ab = find_acpi_bridge_by_bus(acpi_bridges_head, ctrl->seg, ctrl->bus); 1630 1631 if (ab) { 1632 if (ab->_hpp) { ··· 1681 | PCI_COMMAND_IO | PCI_COMMAND_MEMORY; 1682 bcmd = bcommand = bcommand | PCI_BRIDGE_CTL_NO_ISA; 1683 1684 - ab = find_acpi_bridge_by_bus(acpi_bridges_head, ctrl->seg, ctrl->bus); 1685 if (ab) { 1686 if (ab->_hpp) { 1687 if (ab->_hpp->enable_perr) {
··· 1626 pci_bus->number = func->bus; 1627 devfn = PCI_DEVFN(func->device, func->function); 1628 1629 + ab = find_acpi_bridge_by_bus(acpi_bridges_head, ctrl->seg, ctrl->slot_bus); 1630 1631 if (ab) { 1632 if (ab->_hpp) { ··· 1681 | PCI_COMMAND_IO | PCI_COMMAND_MEMORY; 1682 bcmd = bcommand = bcommand | PCI_BRIDGE_CTL_NO_ISA; 1683 1684 + ab = find_acpi_bridge_by_bus(acpi_bridges_head, ctrl->seg, ctrl->slot_bus); 1685 if (ab) { 1686 if (ab->_hpp) { 1687 if (ab->_hpp->enable_perr) {
+3
drivers/scsi/ahci.c
··· 32 #include <linux/delay.h> 33 #include <linux/interrupt.h> 34 #include <linux/sched.h> 35 #include "scsi.h" 36 #include <scsi/scsi_host.h> 37 #include <linux/libata.h> ··· 290 { 291 struct ahci_host_priv *hpriv = host_set->private_data; 292 kfree(hpriv); 293 } 294 295 static int ahci_port_start(struct ata_port *ap)
··· 32 #include <linux/delay.h> 33 #include <linux/interrupt.h> 34 #include <linux/sched.h> 35 + #include <linux/dma-mapping.h> 36 #include "scsi.h" 37 #include <scsi/scsi_host.h> 38 #include <linux/libata.h> ··· 289 { 290 struct ahci_host_priv *hpriv = host_set->private_data; 291 kfree(hpriv); 292 + 293 + ata_host_stop(host_set); 294 } 295 296 static int ahci_port_start(struct ata_port *ap)
+1 -1
drivers/scsi/aic7xxx/aic79xx_osm.c
··· 2488 sprintf(current->comm, "ahd_dv_%d", ahd->unit); 2489 #else 2490 daemonize("ahd_dv_%d", ahd->unit); 2491 - current->flags |= PF_FREEZE; 2492 #endif 2493 unlock_kernel(); 2494
··· 2488 sprintf(current->comm, "ahd_dv_%d", ahd->unit); 2489 #else 2490 daemonize("ahd_dv_%d", ahd->unit); 2491 + current->flags |= PF_NOFREEZE; 2492 #endif 2493 unlock_kernel(); 2494
+2
drivers/scsi/ata_piix.c
··· 153 154 .port_start = ata_port_start, 155 .port_stop = ata_port_stop, 156 }; 157 158 static struct ata_port_operations piix_sata_ops = { ··· 181 182 .port_start = ata_port_start, 183 .port_stop = ata_port_stop, 184 }; 185 186 static struct ata_port_info piix_port_info[] = {
··· 153 154 .port_start = ata_port_start, 155 .port_stop = ata_port_stop, 156 + .host_stop = ata_host_stop, 157 }; 158 159 static struct ata_port_operations piix_sata_ops = { ··· 180 181 .port_start = ata_port_start, 182 .port_stop = ata_port_stop, 183 + .host_stop = ata_host_stop, 184 }; 185 186 static struct ata_port_info piix_port_info[] = {
+11 -4
drivers/scsi/libata-core.c
··· 3666 dma_free_coherent(dev, ATA_PRD_TBL_SZ, ap->prd, ap->prd_dma); 3667 } 3668 3669 /** 3670 * ata_host_remove - Unregister SCSI host structure with upper layers 3671 * @ap: Port to unregister ··· 4278 } 4279 4280 free_irq(host_set->irq, host_set); 4281 - if (host_set->ops->host_stop) 4282 - host_set->ops->host_stop(host_set); 4283 - if (host_set->mmio_base) 4284 - iounmap(host_set->mmio_base); 4285 4286 for (i = 0; i < host_set->n_ports; i++) { 4287 ap = host_set->ports[i]; ··· 4295 4296 scsi_host_put(ap->host); 4297 } 4298 4299 kfree(host_set); 4300 ··· 4387 EXPORT_SYMBOL_GPL(ata_exec_command); 4388 EXPORT_SYMBOL_GPL(ata_port_start); 4389 EXPORT_SYMBOL_GPL(ata_port_stop); 4390 EXPORT_SYMBOL_GPL(ata_interrupt); 4391 EXPORT_SYMBOL_GPL(ata_qc_prep); 4392 EXPORT_SYMBOL_GPL(ata_bmdma_setup);
··· 3666 dma_free_coherent(dev, ATA_PRD_TBL_SZ, ap->prd, ap->prd_dma); 3667 } 3668 3669 + void ata_host_stop (struct ata_host_set *host_set) 3670 + { 3671 + if (host_set->mmio_base) 3672 + iounmap(host_set->mmio_base); 3673 + } 3674 + 3675 + 3676 /** 3677 * ata_host_remove - Unregister SCSI host structure with upper layers 3678 * @ap: Port to unregister ··· 4271 } 4272 4273 free_irq(host_set->irq, host_set); 4274 4275 for (i = 0; i < host_set->n_ports; i++) { 4276 ap = host_set->ports[i]; ··· 4292 4293 scsi_host_put(ap->host); 4294 } 4295 + 4296 + if (host_set->ops->host_stop) 4297 + host_set->ops->host_stop(host_set); 4298 4299 kfree(host_set); 4300 ··· 4381 EXPORT_SYMBOL_GPL(ata_exec_command); 4382 EXPORT_SYMBOL_GPL(ata_port_start); 4383 EXPORT_SYMBOL_GPL(ata_port_stop); 4384 + EXPORT_SYMBOL_GPL(ata_host_stop); 4385 EXPORT_SYMBOL_GPL(ata_interrupt); 4386 EXPORT_SYMBOL_GPL(ata_qc_prep); 4387 EXPORT_SYMBOL_GPL(ata_bmdma_setup);
+1 -1
drivers/scsi/libata.h
··· 26 #define __LIBATA_H__ 27 28 #define DRV_NAME "libata" 29 - #define DRV_VERSION "1.10" /* must be exactly four chars */ 30 31 struct ata_scsi_args { 32 u16 *id;
··· 26 #define __LIBATA_H__ 27 28 #define DRV_NAME "libata" 29 + #define DRV_VERSION "1.11" /* must be exactly four chars */ 30 31 struct ata_scsi_args { 32 u16 *id;
+2
drivers/scsi/sata_nv.c
··· 329 host->host_desc->disable_hotplug(host_set); 330 331 kfree(host); 332 } 333 334 static int nv_init_one (struct pci_dev *pdev, const struct pci_device_id *ent)
··· 329 host->host_desc->disable_hotplug(host_set); 330 331 kfree(host); 332 + 333 + ata_host_stop(host_set); 334 } 335 336 static int nv_init_one (struct pci_dev *pdev, const struct pci_device_id *ent)
+1
drivers/scsi/sata_promise.c
··· 122 .scr_write = pdc_sata_scr_write, 123 .port_start = pdc_port_start, 124 .port_stop = pdc_port_stop, 125 }; 126 127 static struct ata_port_info pdc_port_info[] = {
··· 122 .scr_write = pdc_sata_scr_write, 123 .port_start = pdc_port_start, 124 .port_stop = pdc_port_stop, 125 + .host_stop = ata_host_stop, 126 }; 127 128 static struct ata_port_info pdc_port_info[] = {
+2
drivers/scsi/sata_qstor.c
··· 536 537 writeb(0, mmio_base + QS_HCT_CTRL); /* disable host interrupts */ 538 writeb(QS_CNFG3_GSRST, mmio_base + QS_HCF_CNFG3); /* global reset */ 539 } 540 541 static void qs_host_init(unsigned int chip_id, struct ata_probe_ent *pe)
··· 536 537 writeb(0, mmio_base + QS_HCT_CTRL); /* disable host interrupts */ 538 writeb(QS_CNFG3_GSRST, mmio_base + QS_HCF_CNFG3); /* global reset */ 539 + 540 + ata_host_stop(host_set); 541 } 542 543 static void qs_host_init(unsigned int chip_id, struct ata_probe_ent *pe)
+1
drivers/scsi/sata_sil.c
··· 161 .scr_write = sil_scr_write, 162 .port_start = ata_port_start, 163 .port_stop = ata_port_stop, 164 }; 165 166 static struct ata_port_info sil_port_info[] = {
··· 161 .scr_write = sil_scr_write, 162 .port_start = ata_port_start, 163 .port_stop = ata_port_stop, 164 + .host_stop = ata_host_stop, 165 }; 166 167 static struct ata_port_info sil_port_info[] = {
+1
drivers/scsi/sata_sis.c
··· 114 .scr_write = sis_scr_write, 115 .port_start = ata_port_start, 116 .port_stop = ata_port_stop, 117 }; 118 119 static struct ata_port_info sis_port_info = {
··· 114 .scr_write = sis_scr_write, 115 .port_start = ata_port_start, 116 .port_stop = ata_port_stop, 117 + .host_stop = ata_host_stop, 118 }; 119 120 static struct ata_port_info sis_port_info = {
+1
drivers/scsi/sata_svw.c
··· 313 .scr_write = k2_sata_scr_write, 314 .port_start = ata_port_start, 315 .port_stop = ata_port_stop, 316 }; 317 318 static void k2_sata_setup_port(struct ata_ioports *port, unsigned long base)
··· 313 .scr_write = k2_sata_scr_write, 314 .port_start = ata_port_start, 315 .port_stop = ata_port_stop, 316 + .host_stop = ata_host_stop, 317 }; 318 319 static void k2_sata_setup_port(struct ata_ioports *port, unsigned long base)
+2
drivers/scsi/sata_sx4.c
··· 245 246 iounmap(dimm_mmio); 247 kfree(hpriv); 248 } 249 250 static int pdc_port_start(struct ata_port *ap)
··· 245 246 iounmap(dimm_mmio); 247 kfree(hpriv); 248 + 249 + ata_host_stop(host_set); 250 } 251 252 static int pdc_port_start(struct ata_port *ap)
+1
drivers/scsi/sata_uli.c
··· 113 114 .port_start = ata_port_start, 115 .port_stop = ata_port_stop, 116 }; 117 118 static struct ata_port_info uli_port_info = {
··· 113 114 .port_start = ata_port_start, 115 .port_stop = ata_port_stop, 116 + .host_stop = ata_host_stop, 117 }; 118 119 static struct ata_port_info uli_port_info = {
+1
drivers/scsi/sata_via.c
··· 134 135 .port_start = ata_port_start, 136 .port_stop = ata_port_stop, 137 }; 138 139 static struct ata_port_info svia_port_info = {
··· 134 135 .port_start = ata_port_start, 136 .port_stop = ata_port_stop, 137 + .host_stop = ata_host_stop, 138 }; 139 140 static struct ata_port_info svia_port_info = {
+2
drivers/scsi/sata_vsc.c
··· 21 #include <linux/blkdev.h> 22 #include <linux/delay.h> 23 #include <linux/interrupt.h> 24 #include "scsi.h" 25 #include <scsi/scsi_host.h> 26 #include <linux/libata.h> ··· 231 .scr_write = vsc_sata_scr_write, 232 .port_start = ata_port_start, 233 .port_stop = ata_port_stop, 234 }; 235 236 static void __devinit vsc_sata_setup_port(struct ata_ioports *port, unsigned long base)
··· 21 #include <linux/blkdev.h> 22 #include <linux/delay.h> 23 #include <linux/interrupt.h> 24 + #include <linux/dma-mapping.h> 25 #include "scsi.h" 26 #include <scsi/scsi_host.h> 27 #include <linux/libata.h> ··· 230 .scr_write = vsc_sata_scr_write, 231 .port_start = ata_port_start, 232 .port_stop = ata_port_stop, 233 + .host_stop = ata_host_stop, 234 }; 235 236 static void __devinit vsc_sata_setup_port(struct ata_ioports *port, unsigned long base)
+11
drivers/usb/host/Kconfig
··· 124 To compile this driver as a module, choose M here: the 125 module will be called sl811-hcd. 126
··· 124 To compile this driver as a module, choose M here: the 125 module will be called sl811-hcd. 126 127 + config USB_SL811_CS 128 + tristate "CF/PCMCIA support for SL811HS HCD" 129 + depends on USB_SL811_HCD && PCMCIA 130 + default N 131 + help 132 + Wraps a PCMCIA driver around the SL811HS HCD, supporting the RATOC 133 + REX-CFU1U CF card (often used with PDAs). If unsure, say N. 134 + 135 + To compile this driver as a module, choose M here: the 136 + module will be called "sl811_cs". 137 +
+1
drivers/usb/host/Makefile
··· 7 obj-$(CONFIG_USB_OHCI_HCD) += ohci-hcd.o 8 obj-$(CONFIG_USB_UHCI_HCD) += uhci-hcd.o 9 obj-$(CONFIG_USB_SL811_HCD) += sl811-hcd.o 10 obj-$(CONFIG_ETRAX_ARCH_V10) += hc_crisv10.o
··· 7 obj-$(CONFIG_USB_OHCI_HCD) += ohci-hcd.o 8 obj-$(CONFIG_USB_UHCI_HCD) += uhci-hcd.o 9 obj-$(CONFIG_USB_SL811_HCD) += sl811-hcd.o 10 + obj-$(CONFIG_USB_SL811_CS) += sl811_cs.o 11 obj-$(CONFIG_ETRAX_ARCH_V10) += hc_crisv10.o
+82 -66
drivers/usb/host/sl811-hcd.c
··· 2 * SL811HS HCD (Host Controller Driver) for USB. 3 * 4 * Copyright (C) 2004 Psion Teklogix (for NetBook PRO) 5 - * Copyright (C) 2004 David Brownell 6 - * 7 * Periodic scheduling is based on Roman's OHCI code 8 * Copyright (C) 1999 Roman Weissgaerber 9 * ··· 15 * For documentation, see the SL811HS spec and the "SL811HS Embedded Host" 16 * document (providing significant pieces missing from that spec); plus 17 * the SL811S spec if you want peripheral side info. 18 - */ 19 20 /* 21 * Status: Passed basic stress testing, works with hubs, mice, keyboards, ··· 67 MODULE_DESCRIPTION("SL811HS USB Host Controller Driver"); 68 MODULE_LICENSE("GPL"); 69 70 - #define DRIVER_VERSION "15 Dec 2004" 71 72 73 #ifndef DEBUG ··· 121 /* reset as thoroughly as we can */ 122 if (sl811->board && sl811->board->reset) 123 sl811->board->reset(hcd->self.controller); 124 125 sl811_write(sl811, SL11H_IRQ_ENABLE, 0); 126 sl811_write(sl811, SL11H_CTLREG1, sl811->ctrl1); ··· 447 spin_lock(&urb->lock); 448 if (urb->status == -EINPROGRESS) 449 urb->status = status; 450 spin_unlock(&urb->lock); 451 452 spin_unlock(&sl811->lock); ··· 477 if (*prev) 478 *prev = ep->next; 479 sl811->load[i] -= ep->load; 480 - } 481 ep->branch = PERIODIC_SIZE; 482 sl811->periodic_count--; 483 sl811_to_hcd(sl811)->self.bandwidth_allocated ··· 666 667 #ifdef QUIRK2 668 /* this may no longer be necessary ... 
*/ 669 - if (irqstat == 0 && ret == IRQ_NONE) { 670 irqstat = checkdone(sl811); 671 - if (irqstat /* && irq != ~0 */ ) 672 sl811->stat_lost++; 673 } 674 #endif ··· 727 if (sl811->active_a) { 728 sl811_write(sl811, SL811_EP_A(SL11H_HOSTCTLREG), 0); 729 finish_request(sl811, sl811->active_a, 730 - container_of(sl811->active_a->hep->urb_list.next, 731 struct urb, urb_list), 732 NULL, -ESHUTDOWN); 733 sl811->active_a = NULL; ··· 737 if (sl811->active_b) { 738 sl811_write(sl811, SL811_EP_B(SL11H_HOSTCTLREG), 0); 739 finish_request(sl811, sl811->active_b, 740 - container_of(sl811->active_b->hep->urb_list.next, 741 struct urb, urb_list), 742 NULL, -ESHUTDOWN); 743 sl811->active_b = NULL; ··· 768 goto retry; 769 } 770 771 - if (sl811->periodic_count == 0 && list_empty(&sl811->async)) 772 sofirq_off(sl811); 773 sl811_write(sl811, SL11H_IRQ_ENABLE, sl811->irq_enable); 774 ··· 803 } 804 if (j < PERIODIC_SIZE) 805 continue; 806 - branch = i; 807 } 808 } 809 return branch; ··· 897 break; 898 } 899 900 hep->hcpriv = ep; 901 } 902 ··· 969 static int sl811h_urb_dequeue(struct usb_hcd *hcd, struct urb *urb) 970 { 971 struct sl811 *sl811 = hcd_to_sl811(hcd); 972 - struct usb_host_endpoint *hep = urb->hcpriv; 973 unsigned long flags; 974 struct sl811h_ep *ep; 975 int retval = 0; 976 977 - if (!hep) 978 - return -EINVAL; 979 - 980 spin_lock_irqsave(&sl811->lock, flags); 981 ep = hep->hcpriv; 982 if (ep) { 983 /* finish right away if this urb can't be active ... ··· 1026 VDBG("dequeue, urb %p active %s; wait4irq\n", urb, 1027 (sl811->active_a == ep) ? 
"A" : "B"); 1028 } else 1029 retval = -EINVAL; 1030 spin_unlock_irqrestore(&sl811->lock, flags); 1031 return retval; ··· 1586 if (sl811->board && sl811->board->power) 1587 hub_set_power_budget(udev, sl811->board->power * 2); 1588 1589 return 0; 1590 } 1591 ··· 1631 1632 /*-------------------------------------------------------------------------*/ 1633 1634 - static int __init_or_module 1635 sl811h_remove(struct device *dev) 1636 { 1637 struct usb_hcd *hcd = dev_get_drvdata(dev); ··· 1644 remove_debug_file(sl811); 1645 usb_remove_hcd(hcd); 1646 1647 - iounmap(sl811->data_reg); 1648 res = platform_get_resource(pdev, IORESOURCE_MEM, 1); 1649 - release_mem_region(res->start, 1); 1650 1651 - iounmap(sl811->addr_reg); 1652 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 1653 - release_mem_region(res->start, 1); 1654 1655 usb_put_hcd(hcd); 1656 return 0; 1657 } 1658 1659 - #define resource_len(r) (((r)->end - (r)->start) + 1) 1660 - 1661 - static int __init 1662 sl811h_probe(struct device *dev) 1663 { 1664 struct usb_hcd *hcd; ··· 1668 void __iomem *addr_reg; 1669 void __iomem *data_reg; 1670 int retval; 1671 - u8 tmp; 1672 1673 /* basic sanity checks first. board-specific init logic should 1674 * have initialized these three resources and probably board ··· 1676 * minimal sanity checking. 
1677 */ 1678 pdev = container_of(dev, struct platform_device, dev); 1679 - if (pdev->num_resources < 3) 1680 - return -ENODEV; 1681 - 1682 - addr = platform_get_resource(pdev, IORESOURCE_MEM, 0); 1683 - data = platform_get_resource(pdev, IORESOURCE_MEM, 1); 1684 irq = platform_get_irq(pdev, 0); 1685 - if (!addr || !data || irq < 0) 1686 return -ENODEV; 1687 1688 /* refuse to confuse usbcore */ ··· 1686 return -EINVAL; 1687 } 1688 1689 - if (!request_mem_region(addr->start, 1, hcd_name)) { 1690 - retval = -EBUSY; 1691 - goto err1; 1692 - } 1693 - addr_reg = ioremap(addr->start, resource_len(addr)); 1694 - if (addr_reg == NULL) { 1695 - retval = -ENOMEM; 1696 - goto err2; 1697 - } 1698 1699 - if (!request_mem_region(data->start, 1, hcd_name)) { 1700 - retval = -EBUSY; 1701 - goto err3; 1702 - } 1703 - data_reg = ioremap(data->start, resource_len(addr)); 1704 - if (data_reg == NULL) { 1705 - retval = -ENOMEM; 1706 - goto err4; 1707 } 1708 1709 /* allocate and initialize hcd */ ··· 1751 goto err6; 1752 } 1753 1754 - /* sl811s would need a different handler for this irq */ 1755 - #ifdef CONFIG_ARM 1756 - /* Cypress docs say the IRQ is IRQT_HIGH ... */ 1757 - set_irq_type(irq, IRQT_RISING); 1758 - #endif 1759 - retval = usb_add_hcd(hcd, irq, SA_INTERRUPT); 1760 if (retval != 0) 1761 goto err6; 1762 ··· 1768 err6: 1769 usb_put_hcd(hcd); 1770 err5: 1771 - iounmap(data_reg); 1772 err4: 1773 - release_mem_region(data->start, 1); 1774 - err3: 1775 - iounmap(addr_reg); 1776 err2: 1777 - release_mem_region(addr->start, 1); 1778 - err1: 1779 DBG("init error, %d\n", retval); 1780 return retval; 1781 } ··· 1781 #ifdef CONFIG_PM 1782 1783 /* for this device there's no useful distinction between the controller 1784 - * and its root hub, except that the root hub only gets direct PM calls 1785 * when CONFIG_USB_SUSPEND is enabled. 
1786 */ 1787 ··· 1835 #endif 1836 1837 1838 - static struct device_driver sl811h_driver = { 1839 .name = (char *) hcd_name, 1840 .bus = &platform_bus_type, 1841 1842 .probe = sl811h_probe, 1843 - .remove = sl811h_remove, 1844 1845 .suspend = sl811h_suspend, 1846 .resume = sl811h_resume, 1847 }; 1848 1849 /*-------------------------------------------------------------------------*/ 1850 - 1851 - static int __init sl811h_init(void) 1852 { 1853 if (usb_disabled()) 1854 return -ENODEV; ··· 1860 } 1861 module_init(sl811h_init); 1862 1863 - static void __exit sl811h_cleanup(void) 1864 - { 1865 driver_unregister(&sl811h_driver); 1866 } 1867 module_exit(sl811h_cleanup);
··· 2 * SL811HS HCD (Host Controller Driver) for USB. 3 * 4 * Copyright (C) 2004 Psion Teklogix (for NetBook PRO) 5 + * Copyright (C) 2004-2005 David Brownell 6 + * 7 * Periodic scheduling is based on Roman's OHCI code 8 * Copyright (C) 1999 Roman Weissgaerber 9 * ··· 15 * For documentation, see the SL811HS spec and the "SL811HS Embedded Host" 16 * document (providing significant pieces missing from that spec); plus 17 * the SL811S spec if you want peripheral side info. 18 + */ 19 20 /* 21 * Status: Passed basic stress testing, works with hubs, mice, keyboards, ··· 67 MODULE_DESCRIPTION("SL811HS USB Host Controller Driver"); 68 MODULE_LICENSE("GPL"); 69 70 + #define DRIVER_VERSION "19 May 2005" 71 72 73 #ifndef DEBUG ··· 121 /* reset as thoroughly as we can */ 122 if (sl811->board && sl811->board->reset) 123 sl811->board->reset(hcd->self.controller); 124 + else { 125 + sl811_write(sl811, SL11H_CTLREG1, SL11H_CTL1MASK_SE0); 126 + mdelay(20); 127 + } 128 129 sl811_write(sl811, SL11H_IRQ_ENABLE, 0); 130 sl811_write(sl811, SL11H_CTLREG1, sl811->ctrl1); ··· 443 spin_lock(&urb->lock); 444 if (urb->status == -EINPROGRESS) 445 urb->status = status; 446 + urb->hcpriv = NULL; 447 spin_unlock(&urb->lock); 448 449 spin_unlock(&sl811->lock); ··· 472 if (*prev) 473 *prev = ep->next; 474 sl811->load[i] -= ep->load; 475 + } 476 ep->branch = PERIODIC_SIZE; 477 sl811->periodic_count--; 478 sl811_to_hcd(sl811)->self.bandwidth_allocated ··· 661 662 #ifdef QUIRK2 663 /* this may no longer be necessary ... 
*/ 664 + if (irqstat == 0) { 665 irqstat = checkdone(sl811); 666 + if (irqstat) 667 sl811->stat_lost++; 668 } 669 #endif ··· 722 if (sl811->active_a) { 723 sl811_write(sl811, SL811_EP_A(SL11H_HOSTCTLREG), 0); 724 finish_request(sl811, sl811->active_a, 725 + container_of(sl811->active_a 726 + ->hep->urb_list.next, 727 struct urb, urb_list), 728 NULL, -ESHUTDOWN); 729 sl811->active_a = NULL; ··· 731 if (sl811->active_b) { 732 sl811_write(sl811, SL811_EP_B(SL11H_HOSTCTLREG), 0); 733 finish_request(sl811, sl811->active_b, 734 + container_of(sl811->active_b 735 + ->hep->urb_list.next, 736 struct urb, urb_list), 737 NULL, -ESHUTDOWN); 738 sl811->active_b = NULL; ··· 761 goto retry; 762 } 763 764 + if (sl811->periodic_count == 0 && list_empty(&sl811->async)) 765 sofirq_off(sl811); 766 sl811_write(sl811, SL11H_IRQ_ENABLE, sl811->irq_enable); 767 ··· 796 } 797 if (j < PERIODIC_SIZE) 798 continue; 799 + branch = i; 800 } 801 } 802 return branch; ··· 890 break; 891 } 892 893 + ep->hep = hep; 894 hep->hcpriv = ep; 895 } 896 ··· 961 static int sl811h_urb_dequeue(struct usb_hcd *hcd, struct urb *urb) 962 { 963 struct sl811 *sl811 = hcd_to_sl811(hcd); 964 + struct usb_host_endpoint *hep; 965 unsigned long flags; 966 struct sl811h_ep *ep; 967 int retval = 0; 968 969 spin_lock_irqsave(&sl811->lock, flags); 970 + hep = urb->hcpriv; 971 + if (!hep) 972 + goto fail; 973 + 974 ep = hep->hcpriv; 975 if (ep) { 976 /* finish right away if this urb can't be active ... ··· 1017 VDBG("dequeue, urb %p active %s; wait4irq\n", urb, 1018 (sl811->active_a == ep) ? 
"A" : "B"); 1019 } else 1020 + fail: 1021 retval = -EINVAL; 1022 spin_unlock_irqrestore(&sl811->lock, flags); 1023 return retval; ··· 1576 if (sl811->board && sl811->board->power) 1577 hub_set_power_budget(udev, sl811->board->power * 2); 1578 1579 + /* enable power and interupts */ 1580 + port_power(sl811, 1); 1581 + 1582 return 0; 1583 } 1584 ··· 1618 1619 /*-------------------------------------------------------------------------*/ 1620 1621 + static int __devexit 1622 sl811h_remove(struct device *dev) 1623 { 1624 struct usb_hcd *hcd = dev_get_drvdata(dev); ··· 1631 remove_debug_file(sl811); 1632 usb_remove_hcd(hcd); 1633 1634 + /* some platforms may use IORESOURCE_IO */ 1635 res = platform_get_resource(pdev, IORESOURCE_MEM, 1); 1636 + if (res) 1637 + iounmap(sl811->data_reg); 1638 1639 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 1640 + if (res) 1641 + iounmap(sl811->addr_reg); 1642 1643 usb_put_hcd(hcd); 1644 return 0; 1645 } 1646 1647 + static int __devinit 1648 sl811h_probe(struct device *dev) 1649 { 1650 struct usb_hcd *hcd; ··· 1656 void __iomem *addr_reg; 1657 void __iomem *data_reg; 1658 int retval; 1659 + u8 tmp, ioaddr = 0; 1660 1661 /* basic sanity checks first. board-specific init logic should 1662 * have initialized these three resources and probably board ··· 1664 * minimal sanity checking. 
1665 */ 1666 pdev = container_of(dev, struct platform_device, dev); 1667 irq = platform_get_irq(pdev, 0); 1668 + if (pdev->num_resources < 3 || irq < 0) 1669 return -ENODEV; 1670 1671 /* refuse to confuse usbcore */ ··· 1679 return -EINVAL; 1680 } 1681 1682 + /* the chip may be wired for either kind of addressing */ 1683 + addr = platform_get_resource(pdev, IORESOURCE_MEM, 0); 1684 + data = platform_get_resource(pdev, IORESOURCE_MEM, 1); 1685 + retval = -EBUSY; 1686 + if (!addr || !data) { 1687 + addr = platform_get_resource(pdev, IORESOURCE_IO, 0); 1688 + data = platform_get_resource(pdev, IORESOURCE_IO, 1); 1689 + if (!addr || !data) 1690 + return -ENODEV; 1691 + ioaddr = 1; 1692 1693 + addr_reg = (void __iomem *) addr->start; 1694 + data_reg = (void __iomem *) data->start; 1695 + } else { 1696 + addr_reg = ioremap(addr->start, 1); 1697 + if (addr_reg == NULL) { 1698 + retval = -ENOMEM; 1699 + goto err2; 1700 + } 1701 + 1702 + data_reg = ioremap(data->start, 1); 1703 + if (data_reg == NULL) { 1704 + retval = -ENOMEM; 1705 + goto err4; 1706 + } 1707 } 1708 1709 /* allocate and initialize hcd */ ··· 1737 goto err6; 1738 } 1739 1740 + /* The chip's IRQ is level triggered, active high. A requirement 1741 + * for platform device setup is to cope with things like signal 1742 + * inverters (e.g. CF is active low) or working only with edge 1743 + * triggers (e.g. most ARM CPUs). Initial driver stress testing 1744 + * was on a system with single edge triggering, so most sorts of 1745 + * triggering arrangement should work. 
1746 + */ 1747 + retval = usb_add_hcd(hcd, irq, SA_INTERRUPT | SA_SHIRQ); 1748 if (retval != 0) 1749 goto err6; 1750 ··· 1752 err6: 1753 usb_put_hcd(hcd); 1754 err5: 1755 + if (!ioaddr) 1756 + iounmap(data_reg); 1757 err4: 1758 + if (!ioaddr) 1759 + iounmap(addr_reg); 1760 err2: 1761 DBG("init error, %d\n", retval); 1762 return retval; 1763 } ··· 1767 #ifdef CONFIG_PM 1768 1769 /* for this device there's no useful distinction between the controller 1770 + * and its root hub, except that the root hub only gets direct PM calls 1771 * when CONFIG_USB_SUSPEND is enabled. 1772 */ 1773 ··· 1821 #endif 1822 1823 1824 + /* this driver is exported so sl811_cs can depend on it */ 1825 + struct device_driver sl811h_driver = { 1826 .name = (char *) hcd_name, 1827 .bus = &platform_bus_type, 1828 1829 .probe = sl811h_probe, 1830 + .remove = __devexit_p(sl811h_remove), 1831 1832 .suspend = sl811h_suspend, 1833 .resume = sl811h_resume, 1834 }; 1835 + EXPORT_SYMBOL(sl811h_driver); 1836 1837 /*-------------------------------------------------------------------------*/ 1838 + 1839 + static int __init sl811h_init(void) 1840 { 1841 if (usb_disabled()) 1842 return -ENODEV; ··· 1844 } 1845 module_init(sl811h_init); 1846 1847 + static void __exit sl811h_cleanup(void) 1848 + { 1849 driver_unregister(&sl811h_driver); 1850 } 1851 module_exit(sl811h_cleanup);
+442
drivers/usb/host/sl811_cs.c
···
···
1 + /*
2 + * PCMCIA driver for SL811HS (as found in REX-CFU1U)
3 + * Filename: sl811_cs.c
4 + * Author: Yukio Yamamoto
5 + *
6 + * Port to sl811-hcd and 2.6.x by
7 + * Botond Botyanszki <boti@rocketmail.com>
8 + * Simon Pickering
9 + *
10 + * Last update: 2005-05-12
11 + */
12 +
13 + #include <linux/kernel.h>
14 + #include <linux/module.h>
15 + #include <linux/init.h>
16 + #include <linux/sched.h>
17 + #include <linux/ptrace.h>
18 + #include <linux/slab.h>
19 + #include <linux/string.h>
20 + #include <linux/timer.h>
21 + #include <linux/ioport.h>
22 +
23 + #include <pcmcia/version.h>
24 + #include <pcmcia/cs_types.h>
25 + #include <pcmcia/cs.h>
26 + #include <pcmcia/cistpl.h>
27 + #include <pcmcia/cisreg.h>
28 + #include <pcmcia/ds.h>
29 +
30 + #include <linux/usb_sl811.h>
31 +
32 + MODULE_AUTHOR("Botond Botyanszki");
33 + MODULE_DESCRIPTION("REX-CFU1U PCMCIA driver for 2.6");
34 + MODULE_LICENSE("GPL");
35 +
36 +
37 + /*====================================================================*/
38 + /* MACROS */
39 + /*====================================================================*/
40 +
41 + #if defined(DEBUG) || defined(CONFIG_USB_DEBUG) || defined(PCMCIA_DEBUG)
42 +
43 + static int pc_debug = 0;
44 + module_param(pc_debug, int, 0644);
45 +
46 + #define DBG(n, args...) if (pc_debug>(n)) printk(KERN_DEBUG "sl811_cs: " args)
47 +
48 + #else
49 + #define DBG(n, args...) do{}while(0)
50 + #endif /* no debugging */
51 +
52 + #define INFO(args...) printk(KERN_INFO "sl811_cs: " args)
53 +
54 + #define INT_MODULE_PARM(n, v) static int n = v; module_param(n, int, 0444)
55 +
56 + #define CS_CHECK(fn, ret) \
57 + do { \
58 + last_fn = (fn); \
59 + if ((last_ret = (ret)) != 0) \
60 + goto cs_failed; \
61 + } while (0)
62 +
63 + /*====================================================================*/
64 + /* VARIABLES */
65 + /*====================================================================*/
66 +
67 + static const char driver_name[DEV_NAME_LEN] = "sl811_cs";
68 +
69 + static dev_link_t *dev_list = NULL;
70 +
71 + static int irq_list[4] = { -1 };
72 + static int irq_list_count;
73 +
74 + module_param_array(irq_list, int, &irq_list_count, 0444);
75 +
76 + INT_MODULE_PARM(irq_mask, 0xdeb8);
77 +
78 + typedef struct local_info_t {
79 + dev_link_t link;
80 + dev_node_t node;
81 + } local_info_t;
82 +
83 + /*====================================================================*/
84 +
85 + static void release_platform_dev(struct device * dev)
86 + {
87 + DBG(0, "sl811_cs platform_dev release\n");
88 + dev->parent = NULL;
89 + }
90 +
91 + static struct sl811_platform_data platform_data = {
92 + .potpg = 100,
93 + .power = 50, /* == 100mA */
94 + // .reset = ... FIXME: invoke CF reset on the card
95 + };
96 +
97 + static struct resource resources[] = {
98 + [0] = {
99 + .flags = IORESOURCE_IRQ,
100 + },
101 + [1] = {
102 + // .name = "address",
103 + .flags = IORESOURCE_IO,
104 + },
105 + [2] = {
106 + // .name = "data",
107 + .flags = IORESOURCE_IO,
108 + },
109 + };
110 +
111 + extern struct device_driver sl811h_driver;
112 +
113 + static struct platform_device platform_dev = {
114 + .id = -1,
115 + .dev = {
116 + .platform_data = &platform_data,
117 + .release = release_platform_dev,
118 + },
119 + .resource = resources,
120 + .num_resources = ARRAY_SIZE(resources),
121 + };
122 +
123 + static int sl811_hc_init(struct device *parent, ioaddr_t base_addr, int irq)
124 + {
125 + if (platform_dev.dev.parent)
126 + return -EBUSY;
127 + platform_dev.dev.parent = parent;
128 +
129 + /* finish seting up the platform device */
130 + resources[0].start = irq;
131 +
132 + resources[1].start = base_addr;
133 + resources[1].end = base_addr;
134 +
135 + resources[2].start = base_addr + 1;
136 + resources[2].end = base_addr + 1;
137 +
138 + /* The driver core will probe for us. We know sl811-hcd has been
139 + * initialized already because of the link order dependency.
140 + */
141 + platform_dev.name = sl811h_driver.name;
142 + return platform_device_register(&platform_dev);
143 + }
144 +
145 + /*====================================================================*/
146 +
147 + static void sl811_cs_detach(dev_link_t *link)
148 + {
149 + dev_link_t **linkp;
150 +
151 + DBG(0, "sl811_cs_detach(0x%p)\n", link);
152 +
153 + /* Locate device structure */
154 + for (linkp = &dev_list; *linkp; linkp = &(*linkp)->next) {
155 + if (*linkp == link)
156 + break;
157 + }
158 + if (*linkp == NULL)
159 + return;
160 +
161 + /* Break the link with Card Services */
162 + if (link->handle)
163 + pcmcia_deregister_client(link->handle);
164 +
165 + /* Unlink device structure, and free it */
166 + *linkp = link->next;
167 + /* This points to the parent local_info_t struct */
168 + kfree(link->priv);
169 + }
170 +
171 + static void sl811_cs_release(dev_link_t * link)
172 + {
173 +
174 + DBG(0, "sl811_cs_release(0x%p)\n", link);
175 +
176 + if (link->open) {
177 + DBG(1, "sl811_cs: release postponed, '%s' still open\n",
178 + link->dev->dev_name);
179 + link->state |= DEV_STALE_CONFIG;
180 + return;
181 + }
182 +
183 + /* Unlink the device chain */
184 + link->dev = NULL;
185 +
186 + platform_device_unregister(&platform_dev);
187 + pcmcia_release_configuration(link->handle);
188 + if (link->io.NumPorts1)
189 + pcmcia_release_io(link->handle, &link->io);
190 + if (link->irq.AssignedIRQ)
191 + pcmcia_release_irq(link->handle, &link->irq);
192 + link->state &= ~DEV_CONFIG;
193 +
194 + if (link->state & DEV_STALE_LINK)
195 + sl811_cs_detach(link);
196 + }
197 +
198 + static void sl811_cs_config(dev_link_t *link)
199 + {
200 + client_handle_t handle = link->handle;
201 + struct device *parent = &handle_to_dev(handle);
202 + local_info_t *dev = link->priv;
203 + tuple_t tuple;
204 + cisparse_t parse;
205 + int last_fn, last_ret;
206 + u_char buf[64];
207 + config_info_t conf;
208 + cistpl_cftable_entry_t dflt = { 0 };
209 +
210 + DBG(0, "sl811_cs_config(0x%p)\n", link);
211 +
212 + tuple.DesiredTuple = CISTPL_CONFIG;
213 + tuple.Attributes = 0;
214 + tuple.TupleData = buf;
215 + tuple.TupleDataMax = sizeof(buf);
216 + tuple.TupleOffset = 0;
217 + CS_CHECK(GetFirstTuple, pcmcia_get_first_tuple(handle, &tuple));
218 + CS_CHECK(GetTupleData, pcmcia_get_tuple_data(handle, &tuple));
219 + CS_CHECK(ParseTuple, pcmcia_parse_tuple(handle, &tuple, &parse));
220 + link->conf.ConfigBase = parse.config.base;
221 + link->conf.Present = parse.config.rmask[0];
222 +
223 + /* Configure card */
224 + link->state |= DEV_CONFIG;
225 +
226 + /* Look up the current Vcc */
227 + CS_CHECK(GetConfigurationInfo,
228 + pcmcia_get_configuration_info(handle, &conf));
229 + link->conf.Vcc = conf.Vcc;
230 +
231 + tuple.DesiredTuple = CISTPL_CFTABLE_ENTRY;
232 + CS_CHECK(GetFirstTuple, pcmcia_get_first_tuple(handle, &tuple));
233 + while (1) {
234 + cistpl_cftable_entry_t *cfg = &(parse.cftable_entry);
235 +
236 + if (pcmcia_get_tuple_data(handle, &tuple) != 0
237 + || pcmcia_parse_tuple(handle, &tuple, &parse)
238 + != 0)
239 + goto next_entry;
240 +
241 + if (cfg->flags & CISTPL_CFTABLE_DEFAULT) {
242 + dflt = *cfg;
243 + }
244 +
245 + if (cfg->index == 0)
246 + goto next_entry;
247 +
248 + link->conf.ConfigIndex = cfg->index;
249 +
250 + /* Use power settings for Vcc and Vpp if present */
251 + /* Note that the CIS values need to be rescaled */
252 + if (cfg->vcc.present & (1<<CISTPL_POWER_VNOM)) {
253 + if (cfg->vcc.param[CISTPL_POWER_VNOM]/10000
254 + != conf.Vcc)
255 + goto next_entry;
256 + } else if (dflt.vcc.present & (1<<CISTPL_POWER_VNOM)) {
257 + if (dflt.vcc.param[CISTPL_POWER_VNOM]/10000
258 + != conf.Vcc)
259 + goto next_entry;
260 + }
261 +
262 + if (cfg->vpp1.present & (1<<CISTPL_POWER_VNOM))
263 + link->conf.Vpp1 = link->conf.Vpp2 =
264 + cfg->vpp1.param[CISTPL_POWER_VNOM]/10000;
265 + else if (dflt.vpp1.present & (1<<CISTPL_POWER_VNOM))
266 + link->conf.Vpp1 = link->conf.Vpp2 =
267 + dflt.vpp1.param[CISTPL_POWER_VNOM]/10000;
268 +
269 + /* we need an interrupt */
270 + if (cfg->irq.IRQInfo1 || dflt.irq.IRQInfo1)
271 + link->conf.Attributes |= CONF_ENABLE_IRQ;
272 +
273 + /* IO window settings */
274 + link->io.NumPorts1 = link->io.NumPorts2 = 0;
275 + if ((cfg->io.nwin > 0) || (dflt.io.nwin > 0)) {
276 + cistpl_io_t *io = (cfg->io.nwin) ? &cfg->io : &dflt.io;
277 +
278 + link->io.Attributes1 = IO_DATA_PATH_WIDTH_8;
279 + link->io.IOAddrLines = io->flags & CISTPL_IO_LINES_MASK;
280 + link->io.BasePort1 = io->win[0].base;
281 + link->io.NumPorts1 = io->win[0].len;
282 +
283 + if (pcmcia_request_io(link->handle, &link->io) != 0)
284 + goto next_entry;
285 + }
286 + break;
287 +
288 + next_entry:
289 + if (link->io.NumPorts1)
290 + pcmcia_release_io(link->handle, &link->io);
291 + last_ret = pcmcia_get_next_tuple(handle, &tuple);
292 + }
293 +
294 + /* require an IRQ and two registers */
295 + if (!link->io.NumPorts1 || link->io.NumPorts1 < 2)
296 + goto cs_failed;
297 + if (link->conf.Attributes & CONF_ENABLE_IRQ)
298 + CS_CHECK(RequestIRQ,
299 + pcmcia_request_irq(link->handle, &link->irq));
300 + else
301 + goto cs_failed;
302 +
303 + CS_CHECK(RequestConfiguration,
304 + pcmcia_request_configuration(link->handle, &link->conf));
305 +
306 + sprintf(dev->node.dev_name, driver_name);
307 + dev->node.major = dev->node.minor = 0;
308 + link->dev = &dev->node;
309 +
310 + printk(KERN_INFO "%s: index 0x%02x: Vcc %d.%d",
311 + dev->node.dev_name, link->conf.ConfigIndex,
312 + link->conf.Vcc/10, link->conf.Vcc%10);
313 + if (link->conf.Vpp1)
314 + printk(", Vpp %d.%d", link->conf.Vpp1/10, link->conf.Vpp1%10);
315 + printk(", irq %d", link->irq.AssignedIRQ);
316 + printk(", io 0x%04x-0x%04x", link->io.BasePort1,
317 + link->io.BasePort1+link->io.NumPorts1-1);
318 + printk("\n");
319 +
320 + link->state &= ~DEV_CONFIG_PENDING;
321 +
322 + if (sl811_hc_init(parent, link->io.BasePort1, link->irq.AssignedIRQ)
323 + < 0) {
324 + cs_failed:
325 + printk("sl811_cs_config failed\n");
326 + cs_error(link->handle, last_fn, last_ret);
327 + sl811_cs_release(link);
328 + link->state &= ~DEV_CONFIG_PENDING;
329 + }
330 + }
331 +
332 + static int
333 + sl811_cs_event(event_t event, int priority, event_callback_args_t *args)
334 + {
335 + dev_link_t *link = args->client_data;
336 +
337 + DBG(1, "sl811_cs_event(0x%06x)\n", event);
338 +
339 + switch (event) {
340 + case CS_EVENT_CARD_REMOVAL:
341 + link->state &= ~DEV_PRESENT;
342 + if (link->state & DEV_CONFIG)
343 + sl811_cs_release(link);
344 + break;
345 +
346 + case CS_EVENT_CARD_INSERTION:
347 + link->state |= DEV_PRESENT | DEV_CONFIG_PENDING;
348 + sl811_cs_config(link);
349 + break;
350 +
351 + case CS_EVENT_PM_SUSPEND:
352 + link->state |= DEV_SUSPEND;
353 + /* Fall through... */
354 + case CS_EVENT_RESET_PHYSICAL:
355 + if (link->state & DEV_CONFIG)
356 + pcmcia_release_configuration(link->handle);
357 + break;
358 +
359 + case CS_EVENT_PM_RESUME:
360 + link->state &= ~DEV_SUSPEND;
361 + /* Fall through... */
362 + case CS_EVENT_CARD_RESET:
363 + if (link->state & DEV_CONFIG)
364 + pcmcia_request_configuration(link->handle, &link->conf);
365 + DBG(0, "reset sl811-hcd here?\n");
366 + break;
367 + }
368 + return 0;
369 + }
370 +
371 + static dev_link_t *sl811_cs_attach(void)
372 + {
373 + local_info_t *local;
374 + dev_link_t *link;
375 + client_reg_t client_reg;
376 + int ret, i;
377 +
378 + local = kmalloc(sizeof(local_info_t), GFP_KERNEL);
379 + if (!local)
380 + return NULL;
381 + memset(local, 0, sizeof(local_info_t));
382 + link = &local->link;
383 + link->priv = local;
384 +
385 + /* Initialize */
386 + link->irq.Attributes = IRQ_TYPE_EXCLUSIVE;
387 + link->irq.IRQInfo1 = IRQ_INFO2_VALID|IRQ_LEVEL_ID;
388 + if (irq_list[0] == -1)
389 + link->irq.IRQInfo2 = irq_mask;
390 + else
391 + for (i = 0; i < irq_list_count; i++)
392 + link->irq.IRQInfo2 |= 1 << irq_list[i];
393 + link->irq.Handler = NULL;
394 +
395 + link->conf.Attributes = 0;
396 + link->conf.Vcc = 33;
397 + link->conf.IntType = INT_MEMORY_AND_IO;
398 +
399 + /* Register with Card Services */
400 + link->next = dev_list;
401 + dev_list = link;
402 + client_reg.dev_info = (dev_info_t *) &driver_name;
403 + client_reg.Attributes = INFO_IO_CLIENT | INFO_CARD_SHARE;
404 + client_reg.EventMask =
405 + CS_EVENT_CARD_INSERTION | CS_EVENT_CARD_REMOVAL |
406 + CS_EVENT_RESET_PHYSICAL | CS_EVENT_CARD_RESET |
407 + CS_EVENT_PM_SUSPEND | CS_EVENT_PM_RESUME;
408 + client_reg.event_handler = &sl811_cs_event;
409 + client_reg.Version = 0x0210;
410 + client_reg.event_callback_args.client_data = link;
411 + ret = pcmcia_register_client(&link->handle, &client_reg);
412 + if (ret != CS_SUCCESS) {
413 + cs_error(link->handle, RegisterClient, ret);
414 + sl811_cs_detach(link);
415 + return NULL;
416 + }
417 +
418 + return link;
419 + }
420 +
421 + static struct pcmcia_driver sl811_cs_driver = {
422 + .owner = THIS_MODULE,
423 + .drv = {
424 + .name = (char *)driver_name,
425 + },
426 + .attach = sl811_cs_attach,
427 + .detach = sl811_cs_detach,
428 + };
429 +
430 + /*====================================================================*/
431 +
432 + static int __init init_sl811_cs(void)
433 + {
434 + return pcmcia_register_driver(&sl811_cs_driver);
435 + }
436 + module_init(init_sl811_cs);
437 +
438 + static void __exit exit_sl811_cs(void)
439 + {
440 + pcmcia_unregister_driver(&sl811_cs_driver);
441 + }
442 + module_exit(exit_sl811_cs);
-2
drivers/usb/media/pwc/pwc-ctrl.c
···
48 #include "pwc-uncompress.h"
49 #include "pwc-kiara.h"
50 #include "pwc-timon.h"
51 - #include "pwc-dec1.h"
52 - #include "pwc-dec23.h"
53
54 /* Request types: video */
55 #define SET_LUM_CTL 0x01
···
48 #include "pwc-uncompress.h"
49 #include "pwc-kiara.h"
50 #include "pwc-timon.h"
51
52 /* Request types: video */
53 #define SET_LUM_CTL 0x01
-2
drivers/usb/media/pwc/pwc-uncompress.c
···
29
30 #include "pwc.h"
31 #include "pwc-uncompress.h"
32 - #include "pwc-dec1.h"
33 - #include "pwc-dec23.h"
34
35 int pwc_decompress(struct pwc_device *pdev)
36 {
···
29
30 #include "pwc.h"
31 #include "pwc-uncompress.h"
32
33 int pwc_decompress(struct pwc_device *pdev)
34 {
+3
drivers/usb/serial/ftdi_sio.c
···
364 { USB_DEVICE_VER(FTDI_VID, PROTEGO_SPECIAL_3, 0, 0x3ff) },
365 { USB_DEVICE_VER(FTDI_VID, PROTEGO_SPECIAL_4, 0, 0x3ff) },
366 { USB_DEVICE_VER(FTDI_VID, FTDI_ELV_UO100_PID, 0, 0x3ff) },
367 { USB_DEVICE_VER(FTDI_VID, INSIDE_ACCESSO, 0, 0x3ff) },
368 { USB_DEVICE_VER(INTREPID_VID, INTREPID_VALUECAN_PID, 0, 0x3ff) },
369 { USB_DEVICE_VER(INTREPID_VID, INTREPID_NEOVI_PID, 0, 0x3ff) },
···
476 { USB_DEVICE_VER(FTDI_VID, FTDI_GUDEADS_E88E_PID, 0x400, 0xffff) },
477 { USB_DEVICE_VER(FTDI_VID, FTDI_GUDEADS_E88F_PID, 0x400, 0xffff) },
478 { USB_DEVICE_VER(FTDI_VID, FTDI_ELV_UO100_PID, 0x400, 0xffff) },
479 { USB_DEVICE_VER(FTDI_VID, LINX_SDMUSBQSS_PID, 0x400, 0xffff) },
480 { USB_DEVICE_VER(FTDI_VID, LINX_MASTERDEVEL2_PID, 0x400, 0xffff) },
481 { USB_DEVICE_VER(FTDI_VID, LINX_FUTURE_0_PID, 0x400, 0xffff) },
···
620 { USB_DEVICE_VER(FTDI_VID, FTDI_GUDEADS_E88E_PID, 0x400, 0xffff) },
621 { USB_DEVICE_VER(FTDI_VID, FTDI_GUDEADS_E88F_PID, 0x400, 0xffff) },
622 { USB_DEVICE(FTDI_VID, FTDI_ELV_UO100_PID) },
623 { USB_DEVICE_VER(FTDI_VID, LINX_SDMUSBQSS_PID, 0x400, 0xffff) },
624 { USB_DEVICE_VER(FTDI_VID, LINX_MASTERDEVEL2_PID, 0x400, 0xffff) },
625 { USB_DEVICE_VER(FTDI_VID, LINX_FUTURE_0_PID, 0x400, 0xffff) },
···
364 { USB_DEVICE_VER(FTDI_VID, PROTEGO_SPECIAL_3, 0, 0x3ff) },
365 { USB_DEVICE_VER(FTDI_VID, PROTEGO_SPECIAL_4, 0, 0x3ff) },
366 { USB_DEVICE_VER(FTDI_VID, FTDI_ELV_UO100_PID, 0, 0x3ff) },
367 + { USB_DEVICE_VER(FTDI_VID, FTDI_ELV_UM100_PID, 0, 0x3ff) },
368 { USB_DEVICE_VER(FTDI_VID, INSIDE_ACCESSO, 0, 0x3ff) },
369 { USB_DEVICE_VER(INTREPID_VID, INTREPID_VALUECAN_PID, 0, 0x3ff) },
370 { USB_DEVICE_VER(INTREPID_VID, INTREPID_NEOVI_PID, 0, 0x3ff) },
···
475 { USB_DEVICE_VER(FTDI_VID, FTDI_GUDEADS_E88E_PID, 0x400, 0xffff) },
476 { USB_DEVICE_VER(FTDI_VID, FTDI_GUDEADS_E88F_PID, 0x400, 0xffff) },
477 { USB_DEVICE_VER(FTDI_VID, FTDI_ELV_UO100_PID, 0x400, 0xffff) },
478 + { USB_DEVICE_VER(FTDI_VID, FTDI_ELV_UM100_PID, 0x400, 0xffff) },
479 { USB_DEVICE_VER(FTDI_VID, LINX_SDMUSBQSS_PID, 0x400, 0xffff) },
480 { USB_DEVICE_VER(FTDI_VID, LINX_MASTERDEVEL2_PID, 0x400, 0xffff) },
481 { USB_DEVICE_VER(FTDI_VID, LINX_FUTURE_0_PID, 0x400, 0xffff) },
···
618 { USB_DEVICE_VER(FTDI_VID, FTDI_GUDEADS_E88E_PID, 0x400, 0xffff) },
619 { USB_DEVICE_VER(FTDI_VID, FTDI_GUDEADS_E88F_PID, 0x400, 0xffff) },
620 { USB_DEVICE(FTDI_VID, FTDI_ELV_UO100_PID) },
621 + { USB_DEVICE(FTDI_VID, FTDI_ELV_UM100_PID) },
622 { USB_DEVICE_VER(FTDI_VID, LINX_SDMUSBQSS_PID, 0x400, 0xffff) },
623 { USB_DEVICE_VER(FTDI_VID, LINX_MASTERDEVEL2_PID, 0x400, 0xffff) },
624 { USB_DEVICE_VER(FTDI_VID, LINX_FUTURE_0_PID, 0x400, 0xffff) },
+2
drivers/usb/serial/ftdi_sio.h
···
144
145 /* ELV USB Module UO100 (PID sent by Stefan Frings) */
146 #define FTDI_ELV_UO100_PID 0xFB58 /* Product Id */
147
148 /*
149 * Definitions for ID TECH (www.idt-net.com) devices
···
144
145 /* ELV USB Module UO100 (PID sent by Stefan Frings) */
146 #define FTDI_ELV_UO100_PID 0xFB58 /* Product Id */
147 + /* ELV USB Module UM100 (PID sent by Arnim Laeuger) */
148 + #define FTDI_ELV_UM100_PID 0xFB5A /* Product Id */
149
150 /*
151 * Definitions for ID TECH (www.idt-net.com) devices
+10 -10
drivers/usb/serial/usb-serial.c
···
1297 goto exit_bus;
1298 }
1299
1300 - /* register the generic driver, if we should */
1301 - result = usb_serial_generic_register(debug);
1302 - if (result < 0) {
1303 - err("%s - registering generic driver failed", __FUNCTION__);
1304 - goto exit_generic;
1305 - }
1306 -
1307 usb_serial_tty_driver->owner = THIS_MODULE;
1308 usb_serial_tty_driver->driver_name = "usbserial";
1309 usb_serial_tty_driver->devfs_name = "usb/tts/";
···
1322 goto exit_tty;
1323 }
1324
1325 info(DRIVER_DESC " " DRIVER_VERSION);
1326
1327 return result;
1328
1329 exit_tty:
1330 tty_unregister_driver(usb_serial_tty_driver);
1331
1332 exit_reg_driver:
1333 - usb_serial_generic_deregister();
1334 -
1335 - exit_generic:
1336 bus_unregister(&usb_serial_bus_type);
1337
1338 exit_bus:
···
1297 goto exit_bus;
1298 }
1299
1300 usb_serial_tty_driver->owner = THIS_MODULE;
1301 usb_serial_tty_driver->driver_name = "usbserial";
1302 usb_serial_tty_driver->devfs_name = "usb/tts/";
···
1329 goto exit_tty;
1330 }
1331
1332 + /* register the generic driver, if we should */
1333 + result = usb_serial_generic_register(debug);
1334 + if (result < 0) {
1335 + err("%s - registering generic driver failed", __FUNCTION__);
1336 + goto exit_generic;
1337 + }
1338 +
1339 info(DRIVER_DESC " " DRIVER_VERSION);
1340
1341 return result;
1342 +
1343 + exit_generic:
1344 + usb_deregister(&usb_serial_driver);
1345
1346 exit_tty:
1347 tty_unregister_driver(usb_serial_tty_driver);
1348
1349 exit_reg_driver:
1350 bus_unregister(&usb_serial_bus_type);
1351
1352 exit_bus:
+11 -11
drivers/video/intelfb/intelfbdrv.c
···
228 MODULE_LICENSE("Dual BSD/GPL");
229 MODULE_DEVICE_TABLE(pci, intelfb_pci_table);
230
231 - static int accel __initdata = 1;
232 - static int vram __initdata = 4;
233 - static int hwcursor __initdata = 1;
234 - static int mtrr __initdata = 1;
235 - static int fixed __initdata = 0;
236 - static int noinit __initdata = 0;
237 - static int noregister __initdata = 0;
238 - static int probeonly __initdata = 0;
239 - static int idonly __initdata = 0;
240 - static int bailearly __initdata = 0;
241 - static char *mode __initdata = NULL;
242
243 module_param(accel, bool, S_IRUGO);
244 MODULE_PARM_DESC(accel, "Enable console acceleration");
···
228 MODULE_LICENSE("Dual BSD/GPL");
229 MODULE_DEVICE_TABLE(pci, intelfb_pci_table);
230
231 + static int accel = 1;
232 + static int vram = 4;
233 + static int hwcursor = 1;
234 + static int mtrr = 1;
235 + static int fixed = 0;
236 + static int noinit = 0;
237 + static int noregister = 0;
238 + static int probeonly = 0;
239 + static int idonly = 0;
240 + static int bailearly = 0;
241 + static char *mode = NULL;
242
243 module_param(accel, bool, S_IRUGO);
244 MODULE_PARM_DESC(accel, "Enable console acceleration");
+2 -2
fs/cifs/README
···
371 on newly created files, directories, and devices (create,
372 mkdir, mknod) which will result in the server setting the
373 uid and gid to the default (usually the server uid of the
374 - usern who mounted the share). Letting the server (rather than
375 the client) set the uid and gid is the default. This
376 parameter has no effect if the CIFS Unix Extensions are not
377 negotiated.
···
384 client (e.g. when the application is doing large sequential
385 reads bigger than page size without rereading the same data)
386 this can provide better performance than the default
387 - behavior which caches reads (reaadahead) and writes
388 (writebehind) through the local Linux client pagecache
389 if oplock (caching token) is granted and held. Note that
390 direct allows write operations larger than page size
···
371 on newly created files, directories, and devices (create,
372 mkdir, mknod) which will result in the server setting the
373 uid and gid to the default (usually the server uid of the
374 + user who mounted the share). Letting the server (rather than
375 the client) set the uid and gid is the default. This
376 parameter has no effect if the CIFS Unix Extensions are not
377 negotiated.
···
384 client (e.g. when the application is doing large sequential
385 reads bigger than page size without rereading the same data)
386 this can provide better performance than the default
387 + behavior which caches reads (readahead) and writes
388 (writebehind) through the local Linux client pagecache
389 if oplock (caching token) is granted and held. Note that
390 direct allows write operations larger than page size
+1 -1
fs/cifs/cifsproto.h
···
228 const struct nls_table *nls_codepage,
229 int remap_special_chars);
230 #endif /* CONFIG_CIFS_EXPERIMENTAL */
231 - extern int cifs_convertUCSpath(char *target, const __u16 *source, int maxlen,
232 const struct nls_table * codepage);
233 extern int cifsConvertToUCS(__le16 * target, const char *source, int maxlen,
234 const struct nls_table * cp, int mapChars);
···
228 const struct nls_table *nls_codepage,
229 int remap_special_chars);
230 #endif /* CONFIG_CIFS_EXPERIMENTAL */
231 + extern int cifs_convertUCSpath(char *target, const __le16 *source, int maxlen,
232 const struct nls_table * codepage);
233 extern int cifsConvertToUCS(__le16 * target, const char *source, int maxlen,
234 const struct nls_table * cp, int mapChars);
+28 -28
fs/cifs/cifssmb.c
··· 567 568 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 569 name_len = 570 - cifsConvertToUCS((__u16 *) pSMB->fileName, fileName, 571 PATH_MAX, nls_codepage, remap); 572 name_len++; /* trailing null */ 573 name_len *= 2; ··· 665 return rc; 666 667 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 668 - name_len = cifsConvertToUCS((__u16 *) pSMB->DirName, name, 669 PATH_MAX, nls_codepage, remap); 670 name_len++; /* trailing null */ 671 name_len *= 2; ··· 719 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 720 count = 1; /* account for one byte pad to word boundary */ 721 name_len = 722 - cifsConvertToUCS((__u16 *) (pSMB->fileName + 1), 723 fileName, PATH_MAX, nls_codepage, remap); 724 name_len++; /* trailing null */ 725 name_len *= 2; ··· 1141 1142 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 1143 name_len = 1144 - cifsConvertToUCS((__u16 *) pSMB->OldFileName, fromName, 1145 PATH_MAX, nls_codepage, remap); 1146 name_len++; /* trailing null */ 1147 name_len *= 2; ··· 1149 /* protocol requires ASCII signature byte on Unicode string */ 1150 pSMB->OldFileName[name_len + 1] = 0x00; 1151 name_len2 = 1152 - cifsConvertToUCS((__u16 *) &pSMB->OldFileName[name_len + 2], 1153 toName, PATH_MAX, nls_codepage, remap); 1154 name_len2 += 1 /* trailing null */ + 1 /* Signature word */ ; 1155 name_len2 *= 2; /* convert to bytes */ ··· 1236 /* unicode only call */ 1237 if(target_name == NULL) { 1238 sprintf(dummy_string,"cifs%x",pSMB->hdr.Mid); 1239 - len_of_str = cifsConvertToUCS((__u16 *)rename_info->target_name, 1240 dummy_string, 24, nls_codepage, remap); 1241 } else { 1242 - len_of_str = cifsConvertToUCS((__u16 *)rename_info->target_name, 1243 target_name, PATH_MAX, nls_codepage, remap); 1244 } 1245 rename_info->target_name_len = cpu_to_le32(2 * len_of_str); ··· 1296 pSMB->Flags = cpu_to_le16(flags & COPY_TREE); 1297 1298 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 1299 - name_len = cifsConvertToUCS((__u16 *) pSMB->OldFileName, 1300 fromName, PATH_MAX, nls_codepage, 1301 remap); 1302 name_len++; /* 
trailing null */ ··· 1304 pSMB->OldFileName[name_len] = 0x04; /* pad */ 1305 /* protocol requires ASCII signature byte on Unicode string */ 1306 pSMB->OldFileName[name_len + 1] = 0x00; 1307 - name_len2 = cifsConvertToUCS((__u16 *)&pSMB->OldFileName[name_len + 2], 1308 toName, PATH_MAX, nls_codepage, remap); 1309 name_len2 += 1 /* trailing null */ + 1 /* Signature word */ ; 1310 name_len2 *= 2; /* convert to bytes */ ··· 1453 return rc; 1454 1455 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 1456 - name_len = cifsConvertToUCS((__u16 *) pSMB->FileName, toName, 1457 PATH_MAX, nls_codepage, remap); 1458 name_len++; /* trailing null */ 1459 name_len *= 2; ··· 1476 data_offset = (char *) (&pSMB->hdr.Protocol) + offset; 1477 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 1478 name_len_target = 1479 - cifsConvertToUCS((__u16 *) data_offset, fromName, PATH_MAX, 1480 nls_codepage, remap); 1481 name_len_target++; /* trailing null */ 1482 name_len_target *= 2; ··· 1546 1547 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 1548 name_len = 1549 - cifsConvertToUCS((__u16 *) pSMB->OldFileName, fromName, 1550 PATH_MAX, nls_codepage, remap); 1551 name_len++; /* trailing null */ 1552 name_len *= 2; 1553 pSMB->OldFileName[name_len] = 0; /* pad */ 1554 pSMB->OldFileName[name_len + 1] = 0x04; 1555 name_len2 = 1556 - cifsConvertToUCS((__u16 *)&pSMB->OldFileName[name_len + 2], 1557 toName, PATH_MAX, nls_codepage, remap); 1558 name_len2 += 1 /* trailing null */ + 1 /* Signature word */ ; 1559 name_len2 *= 2; /* convert to bytes */ ··· 1939 1940 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 1941 name_len = 1942 - cifsConvertToUCS((__u16 *) pSMB->FileName, searchName, 1943 PATH_MAX, nls_codepage, remap); 1944 name_len++; /* trailing null */ 1945 name_len *= 2; ··· 2024 return rc; 2025 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 2026 name_len = 2027 - cifsConvertToUCS((__u16 *) pSMB->FileName, fileName, 2028 PATH_MAX, nls_codepage, remap); 2029 name_len++; /* trailing null */ 2030 name_len *= 2; ··· 2188 2189 if 
(pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 2190 name_len = 2191 - cifsConvertToUCS((__u16 *) pSMB->FileName, searchName, 2192 PATH_MAX, nls_codepage, remap); 2193 name_len++; /* trailing null */ 2194 name_len *= 2; ··· 2269 2270 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 2271 name_len = 2272 - cifsConvertToUCS((__u16 *) pSMB->FileName, searchName, 2273 PATH_MAX, nls_codepage, remap); 2274 name_len++; /* trailing null */ 2275 name_len *= 2; ··· 2350 2351 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 2352 name_len = 2353 - cifsConvertToUCS((wchar_t *) pSMB->FileName, searchName, PATH_MAX 2354 /* find define for this maxpathcomponent */ 2355 , nls_codepage); 2356 name_len++; /* trailing null */ ··· 2435 2436 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 2437 name_len = 2438 - cifsConvertToUCS((__u16 *) pSMB->FileName,searchName, 2439 PATH_MAX, nls_codepage, remap); 2440 /* We can not add the asterik earlier in case 2441 it got remapped to 0xF03A as if it were part of the ··· 2726 2727 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 2728 name_len = 2729 - cifsConvertToUCS((__u16 *) pSMB->FileName, searchName, 2730 PATH_MAX,nls_codepage, remap); 2731 name_len++; /* trailing null */ 2732 name_len *= 2; ··· 2837 if (ses->capabilities & CAP_UNICODE) { 2838 pSMB->hdr.Flags2 |= SMBFLG2_UNICODE; 2839 name_len = 2840 - cifsConvertToUCS((__u16 *) pSMB->RequestFileName, 2841 searchName, PATH_MAX, nls_codepage, remap); 2842 name_len++; /* trailing null */ 2843 name_len *= 2; ··· 3369 3370 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 3371 name_len = 3372 - cifsConvertToUCS((__u16 *) pSMB->FileName, fileName, 3373 PATH_MAX, nls_codepage, remap); 3374 name_len++; /* trailing null */ 3375 name_len *= 2; ··· 3627 3628 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 3629 name_len = 3630 - cifsConvertToUCS((__u16 *) pSMB->FileName, fileName, 3631 PATH_MAX, nls_codepage, remap); 3632 name_len++; /* trailing null */ 3633 name_len *= 2; ··· 3708 3709 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 3710 name_len = 3711 - 
ConvertToUCS((wchar_t *) pSMB->fileName, fileName, 3712 PATH_MAX, nls_codepage); 3713 name_len++; /* trailing null */ 3714 name_len *= 2; ··· 3759 3760 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 3761 name_len = 3762 - cifsConvertToUCS((__u16 *) pSMB->FileName, fileName, 3763 PATH_MAX, nls_codepage, remap); 3764 name_len++; /* trailing null */ 3765 name_len *= 2; ··· 3904 3905 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 3906 name_len = 3907 - cifsConvertToUCS((wchar_t *) pSMB->FileName, searchName, 3908 PATH_MAX, nls_codepage, remap); 3909 name_len++; /* trailing null */ 3910 name_len *= 2; ··· 4047 4048 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 4049 name_len = 4050 - cifsConvertToUCS((__u16 *) pSMB->FileName, searchName, 4051 PATH_MAX, nls_codepage, remap); 4052 name_len++; /* trailing null */ 4053 name_len *= 2; ··· 4194 4195 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 4196 name_len = 4197 - cifsConvertToUCS((__u16 *) pSMB->FileName, fileName, 4198 PATH_MAX, nls_codepage, remap); 4199 name_len++; /* trailing null */ 4200 name_len *= 2;
··· 567 568 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 569 name_len = 570 + cifsConvertToUCS((__le16 *) pSMB->fileName, fileName, 571 PATH_MAX, nls_codepage, remap); 572 name_len++; /* trailing null */ 573 name_len *= 2; ··· 665 return rc; 666 667 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 668 + name_len = cifsConvertToUCS((__le16 *) pSMB->DirName, name, 669 PATH_MAX, nls_codepage, remap); 670 name_len++; /* trailing null */ 671 name_len *= 2; ··· 719 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 720 count = 1; /* account for one byte pad to word boundary */ 721 name_len = 722 + cifsConvertToUCS((__le16 *) (pSMB->fileName + 1), 723 fileName, PATH_MAX, nls_codepage, remap); 724 name_len++; /* trailing null */ 725 name_len *= 2; ··· 1141 1142 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 1143 name_len = 1144 + cifsConvertToUCS((__le16 *) pSMB->OldFileName, fromName, 1145 PATH_MAX, nls_codepage, remap); 1146 name_len++; /* trailing null */ 1147 name_len *= 2; ··· 1149 /* protocol requires ASCII signature byte on Unicode string */ 1150 pSMB->OldFileName[name_len + 1] = 0x00; 1151 name_len2 = 1152 + cifsConvertToUCS((__le16 *) &pSMB->OldFileName[name_len + 2], 1153 toName, PATH_MAX, nls_codepage, remap); 1154 name_len2 += 1 /* trailing null */ + 1 /* Signature word */ ; 1155 name_len2 *= 2; /* convert to bytes */ ··· 1236 /* unicode only call */ 1237 if(target_name == NULL) { 1238 sprintf(dummy_string,"cifs%x",pSMB->hdr.Mid); 1239 + len_of_str = cifsConvertToUCS((__le16 *)rename_info->target_name, 1240 dummy_string, 24, nls_codepage, remap); 1241 } else { 1242 + len_of_str = cifsConvertToUCS((__le16 *)rename_info->target_name, 1243 target_name, PATH_MAX, nls_codepage, remap); 1244 } 1245 rename_info->target_name_len = cpu_to_le32(2 * len_of_str); ··· 1296 pSMB->Flags = cpu_to_le16(flags & COPY_TREE); 1297 1298 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 1299 + name_len = cifsConvertToUCS((__le16 *) pSMB->OldFileName, 1300 fromName, PATH_MAX, nls_codepage, 1301 remap); 1302 
name_len++; /* trailing null */ ··· 1304 pSMB->OldFileName[name_len] = 0x04; /* pad */ 1305 /* protocol requires ASCII signature byte on Unicode string */ 1306 pSMB->OldFileName[name_len + 1] = 0x00; 1307 + name_len2 = cifsConvertToUCS((__le16 *)&pSMB->OldFileName[name_len + 2], 1308 toName, PATH_MAX, nls_codepage, remap); 1309 name_len2 += 1 /* trailing null */ + 1 /* Signature word */ ; 1310 name_len2 *= 2; /* convert to bytes */ ··· 1453 return rc; 1454 1455 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 1456 + name_len = cifsConvertToUCS((__le16 *) pSMB->FileName, toName, 1457 PATH_MAX, nls_codepage, remap); 1458 name_len++; /* trailing null */ 1459 name_len *= 2; ··· 1476 data_offset = (char *) (&pSMB->hdr.Protocol) + offset; 1477 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 1478 name_len_target = 1479 + cifsConvertToUCS((__le16 *) data_offset, fromName, PATH_MAX, 1480 nls_codepage, remap); 1481 name_len_target++; /* trailing null */ 1482 name_len_target *= 2; ··· 1546 1547 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 1548 name_len = 1549 + cifsConvertToUCS((__le16 *) pSMB->OldFileName, fromName, 1550 PATH_MAX, nls_codepage, remap); 1551 name_len++; /* trailing null */ 1552 name_len *= 2; 1553 pSMB->OldFileName[name_len] = 0; /* pad */ 1554 pSMB->OldFileName[name_len + 1] = 0x04; 1555 name_len2 = 1556 + cifsConvertToUCS((__le16 *)&pSMB->OldFileName[name_len + 2], 1557 toName, PATH_MAX, nls_codepage, remap); 1558 name_len2 += 1 /* trailing null */ + 1 /* Signature word */ ; 1559 name_len2 *= 2; /* convert to bytes */ ··· 1939 1940 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 1941 name_len = 1942 + cifsConvertToUCS((__le16 *) pSMB->FileName, searchName, 1943 PATH_MAX, nls_codepage, remap); 1944 name_len++; /* trailing null */ 1945 name_len *= 2; ··· 2024 return rc; 2025 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 2026 name_len = 2027 + cifsConvertToUCS((__le16 *) pSMB->FileName, fileName, 2028 PATH_MAX, nls_codepage, remap); 2029 name_len++; /* trailing null */ 2030 name_len *= 
2; ··· 2188 2189 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 2190 name_len = 2191 + cifsConvertToUCS((__le16 *) pSMB->FileName, searchName, 2192 PATH_MAX, nls_codepage, remap); 2193 name_len++; /* trailing null */ 2194 name_len *= 2; ··· 2269 2270 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 2271 name_len = 2272 + cifsConvertToUCS((__le16 *) pSMB->FileName, searchName, 2273 PATH_MAX, nls_codepage, remap); 2274 name_len++; /* trailing null */ 2275 name_len *= 2; ··· 2350 2351 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 2352 name_len = 2353 + cifsConvertToUCS((__le16 *) pSMB->FileName, searchName, PATH_MAX 2354 /* find define for this maxpathcomponent */ 2355 , nls_codepage); 2356 name_len++; /* trailing null */ ··· 2435 2436 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 2437 name_len = 2438 + cifsConvertToUCS((__le16 *) pSMB->FileName,searchName, 2439 PATH_MAX, nls_codepage, remap); 2440 /* We can not add the asterik earlier in case 2441 it got remapped to 0xF03A as if it were part of the ··· 2726 2727 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 2728 name_len = 2729 + cifsConvertToUCS((__le16 *) pSMB->FileName, searchName, 2730 PATH_MAX,nls_codepage, remap); 2731 name_len++; /* trailing null */ 2732 name_len *= 2; ··· 2837 if (ses->capabilities & CAP_UNICODE) { 2838 pSMB->hdr.Flags2 |= SMBFLG2_UNICODE; 2839 name_len = 2840 + cifsConvertToUCS((__le16 *) pSMB->RequestFileName, 2841 searchName, PATH_MAX, nls_codepage, remap); 2842 name_len++; /* trailing null */ 2843 name_len *= 2; ··· 3369 3370 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 3371 name_len = 3372 + cifsConvertToUCS((__le16 *) pSMB->FileName, fileName, 3373 PATH_MAX, nls_codepage, remap); 3374 name_len++; /* trailing null */ 3375 name_len *= 2; ··· 3627 3628 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 3629 name_len = 3630 + cifsConvertToUCS((__le16 *) pSMB->FileName, fileName, 3631 PATH_MAX, nls_codepage, remap); 3632 name_len++; /* trailing null */ 3633 name_len *= 2; ··· 3708 3709 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 
3710 name_len = 3711 + ConvertToUCS((__le16 *) pSMB->fileName, fileName, 3712 PATH_MAX, nls_codepage); 3713 name_len++; /* trailing null */ 3714 name_len *= 2; ··· 3759 3760 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 3761 name_len = 3762 + cifsConvertToUCS((__le16 *) pSMB->FileName, fileName, 3763 PATH_MAX, nls_codepage, remap); 3764 name_len++; /* trailing null */ 3765 name_len *= 2; ··· 3904 3905 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 3906 name_len = 3907 + cifsConvertToUCS((__le16 *) pSMB->FileName, searchName, 3908 PATH_MAX, nls_codepage, remap); 3909 name_len++; /* trailing null */ 3910 name_len *= 2; ··· 4047 4048 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 4049 name_len = 4050 + cifsConvertToUCS((__le16 *) pSMB->FileName, searchName, 4051 PATH_MAX, nls_codepage, remap); 4052 name_len++; /* trailing null */ 4053 name_len *= 2; ··· 4194 4195 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 4196 name_len = 4197 + cifsConvertToUCS((__le16 *) pSMB->FileName, fileName, 4198 PATH_MAX, nls_codepage, remap); 4199 name_len++; /* trailing null */ 4200 name_len *= 2;
+2 -1
fs/cifs/dir.c
··· 392 rc = 0; 393 d_add(direntry, NULL); 394 } else { 395 - cERROR(1,("Error 0x%x or on cifs_get_inode_info in lookup",rc)); 396 /* BB special case check for Access Denied - watch security 397 exposure of returning dir info implicitly via different rc 398 if file exists or not but no access BB */
··· 392 rc = 0; 393 d_add(direntry, NULL); 394 } else { 395 + cERROR(1,("Error 0x%x on cifs_get_inode_info in lookup of %s", 396 + rc,full_path)); 397 /* BB special case check for Access Denied - watch security 398 exposure of returning dir info implicitly via different rc 399 if file exists or not but no access BB */
+15 -9
fs/cifs/inode.c
··· 422 cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MAP_SPECIAL_CHR); 423 424 if (!rc) { 425 - direntry->d_inode->i_nlink--; 426 } else if (rc == -ENOENT) { 427 d_drop(direntry); 428 } else if (rc == -ETXTBSY) { ··· 441 cifs_sb->mnt_cifs_flags & 442 CIFS_MOUNT_MAP_SPECIAL_CHR); 443 CIFSSMBClose(xid, pTcon, netfid); 444 - direntry->d_inode->i_nlink--; 445 } 446 } else if (rc == -EACCES) { 447 /* try only if r/o attribute set in local lookup data? */ ··· 496 cifs_sb->mnt_cifs_flags & 497 CIFS_MOUNT_MAP_SPECIAL_CHR); 498 if (!rc) { 499 - direntry->d_inode->i_nlink--; 500 } else if (rc == -ETXTBSY) { 501 int oplock = FALSE; 502 __u16 netfid; ··· 517 cifs_sb->mnt_cifs_flags & 518 CIFS_MOUNT_MAP_SPECIAL_CHR); 519 CIFSSMBClose(xid, pTcon, netfid); 520 - direntry->d_inode->i_nlink--; 521 } 522 /* BB if rc = -ETXTBUSY goto the rename logic BB */ 523 } 524 } 525 } 526 - cifsInode = CIFS_I(direntry->d_inode); 527 - cifsInode->time = 0; /* will force revalidate to get info when 528 - needed */ 529 - direntry->d_inode->i_ctime = inode->i_ctime = inode->i_mtime = 530 - current_fs_time(inode->i_sb); 531 cifsInode = CIFS_I(inode); 532 cifsInode->time = 0; /* force revalidate of dir as well */ 533
··· 422 cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MAP_SPECIAL_CHR); 423 424 if (!rc) { 425 + if(direntry->d_inode) 426 + direntry->d_inode->i_nlink--; 427 } else if (rc == -ENOENT) { 428 d_drop(direntry); 429 } else if (rc == -ETXTBSY) { ··· 440 cifs_sb->mnt_cifs_flags & 441 CIFS_MOUNT_MAP_SPECIAL_CHR); 442 CIFSSMBClose(xid, pTcon, netfid); 443 + if(direntry->d_inode) 444 + direntry->d_inode->i_nlink--; 445 } 446 } else if (rc == -EACCES) { 447 /* try only if r/o attribute set in local lookup data? */ ··· 494 cifs_sb->mnt_cifs_flags & 495 CIFS_MOUNT_MAP_SPECIAL_CHR); 496 if (!rc) { 497 + if(direntry->d_inode) 498 + direntry->d_inode->i_nlink--; 499 } else if (rc == -ETXTBSY) { 500 int oplock = FALSE; 501 __u16 netfid; ··· 514 cifs_sb->mnt_cifs_flags & 515 CIFS_MOUNT_MAP_SPECIAL_CHR); 516 CIFSSMBClose(xid, pTcon, netfid); 517 + if(direntry->d_inode) 518 + direntry->d_inode->i_nlink--; 519 } 520 /* BB if rc = -ETXTBUSY goto the rename logic BB */ 521 } 522 } 523 } 524 + if(direntry->d_inode) { 525 + cifsInode = CIFS_I(direntry->d_inode); 526 + cifsInode->time = 0; /* will force revalidate to get info 527 + when needed */ 528 + direntry->d_inode->i_ctime = current_fs_time(inode->i_sb); 529 + } 530 + inode->i_ctime = inode->i_mtime = current_fs_time(inode->i_sb); 531 cifsInode = CIFS_I(inode); 532 cifsInode->time = 0; /* force revalidate of dir as well */ 533
+1
fs/cifs/misc.c
··· 571 break; 572 case UNI_LESSTHAN: 573 target[j] = '<'; 574 default: 575 len = cp->uni2char(src_char, &target[j], 576 NLS_MAX_CHARSET_SIZE);
··· 571 break; 572 case UNI_LESSTHAN: 573 target[j] = '<'; 574 + break; 575 default: 576 len = cp->uni2char(src_char, &target[j], 577 NLS_MAX_CHARSET_SIZE);
-1
fs/hostfs/hostfs_kern.c
··· 23 #include "kern_util.h" 24 #include "kern.h" 25 #include "user_util.h" 26 - #include "2_5compat.h" 27 #include "init.h" 28 29 struct hostfs_inode_info {
··· 23 #include "kern_util.h" 24 #include "kern.h" 25 #include "user_util.h" 26 #include "init.h" 27 28 struct hostfs_inode_info {
+43 -72
fs/proc/proc_devtree.c
··· 12 #include <asm/uaccess.h> 13 14 #ifndef HAVE_ARCH_DEVTREE_FIXUPS 15 - static inline void set_node_proc_entry(struct device_node *np, struct proc_dir_entry *de) 16 - { 17 - } 18 - 19 - static void inline set_node_name_link(struct device_node *np, struct proc_dir_entry *de) 20 - { 21 - } 22 - 23 - static void inline set_node_addr_link(struct device_node *np, struct proc_dir_entry *de) 24 { 25 } 26 #endif ··· 51 /* 52 * Process a node, adding entries for its children and its properties. 53 */ 54 - void proc_device_tree_add_node(struct device_node *np, struct proc_dir_entry *de) 55 { 56 struct property *pp; 57 struct proc_dir_entry *ent; 58 - struct device_node *child, *sib; 59 - const char *p, *at; 60 - int l; 61 - struct proc_dir_entry *list, **lastp, *al; 62 63 set_node_proc_entry(np, de); 64 lastp = &list; 65 for (pp = np->properties; pp != 0; pp = pp->next) { 66 /* 67 * Unfortunately proc_register puts each new entry 68 * at the beginning of the list. So we rearrange them. 69 */ 70 - ent = create_proc_read_entry(pp->name, strncmp(pp->name, "security-", 9) ? 71 - S_IRUGO : S_IRUSR, de, property_read_proc, pp); 72 if (ent == 0) 73 break; 74 if (!strncmp(pp->name, "security-", 9)) 75 ent->size = 0; /* don't leak number of password chars */ 76 else 77 ent->size = pp->length; 78 *lastp = ent; 79 lastp = &ent->next; 80 } 81 - child = NULL; 82 - while ((child = of_get_next_child(np, child))) { 83 - p = strrchr(child->full_name, '/'); 84 - if (!p) 85 - p = child->full_name; 86 - else 87 - ++p; 88 - /* chop off '@0' if the name ends with that */ 89 - l = strlen(p); 90 - if (l > 2 && p[l-2] == '@' && p[l-1] == '0') 91 - l -= 2; 92 - ent = proc_mkdir(p, de); 93 - if (ent == 0) 94 - break; 95 - *lastp = ent; 96 - lastp = &ent->next; 97 - proc_device_tree_add_node(child, ent); 98 - 99 - /* 100 - * If we left the address part on the name, consider 101 - * adding symlinks from the name and address parts. 
102 - */ 103 - if (p[l] != 0 || (at = strchr(p, '@')) == 0) 104 - continue; 105 - 106 - /* 107 - * If this is the first node with a given name property, 108 - * add a symlink with the name property as its name. 109 - */ 110 - sib = NULL; 111 - while ((sib = of_get_next_child(np, sib)) && sib != child) 112 - if (sib->name && strcmp(sib->name, child->name) == 0) 113 - break; 114 - if (sib == child && strncmp(p, child->name, l) != 0) { 115 - al = proc_symlink(child->name, de, ent->name); 116 - if (al == 0) { 117 - of_node_put(sib); 118 - break; 119 - } 120 - set_node_name_link(child, al); 121 - *lastp = al; 122 - lastp = &al->next; 123 - } 124 - of_node_put(sib); 125 - /* 126 - * Add another directory with the @address part as its name. 127 - */ 128 - al = proc_symlink(at, de, ent->name); 129 - if (al == 0) 130 - break; 131 - set_node_addr_link(child, al); 132 - *lastp = al; 133 - lastp = &al->next; 134 - } 135 - of_node_put(child); 136 - *lastp = NULL; 137 de->subdir = list; 138 } 139
··· 12 #include <asm/uaccess.h> 13 14 #ifndef HAVE_ARCH_DEVTREE_FIXUPS 15 + static inline void set_node_proc_entry(struct device_node *np, 16 + struct proc_dir_entry *de) 17 { 18 } 19 #endif ··· 58 /* 59 * Process a node, adding entries for its children and its properties. 60 */ 61 + void proc_device_tree_add_node(struct device_node *np, 62 + struct proc_dir_entry *de) 63 { 64 struct property *pp; 65 struct proc_dir_entry *ent; 66 + struct device_node *child; 67 + struct proc_dir_entry *list = NULL, **lastp; 68 + const char *p; 69 70 set_node_proc_entry(np, de); 71 lastp = &list; 72 + for (child = NULL; (child = of_get_next_child(np, child));) { 73 + p = strrchr(child->full_name, '/'); 74 + if (!p) 75 + p = child->full_name; 76 + else 77 + ++p; 78 + ent = proc_mkdir(p, de); 79 + if (ent == 0) 80 + break; 81 + *lastp = ent; 82 + ent->next = NULL; 83 + lastp = &ent->next; 84 + proc_device_tree_add_node(child, ent); 85 + } 86 + of_node_put(child); 87 for (pp = np->properties; pp != 0; pp = pp->next) { 88 + /* 89 + * Yet another Apple device-tree bogosity: on some machines, 90 + * they have properties & nodes with the same name. Those 91 + * properties are quite unimportant for us though, thus we 92 + * simply "skip" them here, but we do have to check. 93 + */ 94 + for (ent = list; ent != NULL; ent = ent->next) 95 + if (!strcmp(ent->name, pp->name)) 96 + break; 97 + if (ent != NULL) { 98 + printk(KERN_WARNING "device-tree: property \"%s\" name" 99 + " conflicts with node in %s\n", pp->name, 100 + np->full_name); 101 + continue; 102 + } 103 + 104 /* 105 * Unfortunately proc_register puts each new entry 106 * at the beginning of the list. So we rearrange them. 107 */ 108 + ent = create_proc_read_entry(pp->name, 109 + strncmp(pp->name, "security-", 9) 110 + ? 
S_IRUGO : S_IRUSR, de, 111 + property_read_proc, pp); 112 if (ent == 0) 113 break; 114 if (!strncmp(pp->name, "security-", 9)) 115 ent->size = 0; /* don't leak number of password chars */ 116 else 117 ent->size = pp->length; 118 + ent->next = NULL; 119 *lastp = ent; 120 lastp = &ent->next; 121 } 122 de->subdir = list; 123 } 124
+1 -1
fs/udf/udftime.c
··· 46 #endif 47 48 /* How many days come before each month (0-12). */ 49 - const unsigned short int __mon_yday[2][13] = 50 { 51 /* Normal years. */ 52 { 0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334, 365 },
··· 46 #endif 47 48 /* How many days come before each month (0-12). */ 49 + static const unsigned short int __mon_yday[2][13] = 50 { 51 /* Normal years. */ 52 { 0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334, 365 },
+1
include/asm-i386/timer.h
··· 53 54 extern unsigned long calibrate_tsc(void); 55 extern void init_cpu_khz(void); 56 #ifdef CONFIG_HPET_TIMER 57 extern struct init_timer_opts timer_hpet_init; 58 extern unsigned long calibrate_tsc_hpet(unsigned long *tsc_hpet_quotient_ptr);
··· 53 54 extern unsigned long calibrate_tsc(void); 55 extern void init_cpu_khz(void); 56 + extern int recalibrate_cpu_khz(void); 57 #ifdef CONFIG_HPET_TIMER 58 extern struct init_timer_opts timer_hpet_init; 59 extern unsigned long calibrate_tsc_hpet(unsigned long *tsc_hpet_quotient_ptr);
+8
include/asm-ia64/perfmon.h
··· 177 178 extern long perfmonctl(int fd, int cmd, void *arg, int narg); 179 180 extern void pfm_save_regs (struct task_struct *); 181 extern void pfm_load_regs (struct task_struct *); 182 ··· 191 extern void pfm_inherit(struct task_struct *task, struct pt_regs *regs); 192 extern void pfm_init_percpu(void); 193 extern void pfm_handle_work(void); 194 195 /* 196 * Reset PMD register flags
··· 177 178 extern long perfmonctl(int fd, int cmd, void *arg, int narg); 179 180 + typedef struct { 181 + void (*handler)(int irq, void *arg, struct pt_regs *regs); 182 + } pfm_intr_handler_desc_t; 183 + 184 extern void pfm_save_regs (struct task_struct *); 185 extern void pfm_load_regs (struct task_struct *); 186 ··· 187 extern void pfm_inherit(struct task_struct *task, struct pt_regs *regs); 188 extern void pfm_init_percpu(void); 189 extern void pfm_handle_work(void); 190 + extern int pfm_install_alt_pmu_interrupt(pfm_intr_handler_desc_t *h); 191 + extern int pfm_remove_alt_pmu_interrupt(pfm_intr_handler_desc_t *h); 192 + 193 + 194 195 /* 196 * Reset PMD register flags
+27 -1
include/asm-ia64/sn/sn_sal.h
··· 115 #define SAL_IROUTER_INTR_XMIT SAL_CONSOLE_INTR_XMIT 116 #define SAL_IROUTER_INTR_RECV SAL_CONSOLE_INTR_RECV 117 118 119 /* 120 * SAL Error Codes ··· 349 } 350 351 /* 352 * Checks for console input. 353 */ 354 static inline u64 ··· 498 ia64_sn_pod_mode(void) 499 { 500 struct ia64_sal_retval isrv; 501 - SAL_CALL(isrv, SN_SAL_POD_MODE, 0, 0, 0, 0, 0, 0, 0); 502 if (isrv.status) 503 return 0; 504 return isrv.v0;
··· 115 #define SAL_IROUTER_INTR_XMIT SAL_CONSOLE_INTR_XMIT 116 #define SAL_IROUTER_INTR_RECV SAL_CONSOLE_INTR_RECV 117 118 + /* 119 + * Error Handling Features 120 + */ 121 + #define SAL_ERR_FEAT_MCA_SLV_TO_OS_INIT_SLV 0x1 122 + #define SAL_ERR_FEAT_LOG_SBES 0x2 123 + #define SAL_ERR_FEAT_MFR_OVERRIDE 0x4 124 + #define SAL_ERR_FEAT_SBE_THRESHOLD 0xffff0000 125 126 /* 127 * SAL Error Codes ··· 342 } 343 344 /* 345 + * Set Error Handling Features 346 + */ 347 + static inline u64 348 + ia64_sn_plat_set_error_handling_features(void) 349 + { 350 + struct ia64_sal_retval ret_stuff; 351 + 352 + ret_stuff.status = 0; 353 + ret_stuff.v0 = 0; 354 + ret_stuff.v1 = 0; 355 + ret_stuff.v2 = 0; 356 + SAL_CALL_REENTRANT(ret_stuff, SN_SAL_SET_ERROR_HANDLING_FEATURES, 357 + (SAL_ERR_FEAT_MCA_SLV_TO_OS_INIT_SLV | SAL_ERR_FEAT_LOG_SBES), 358 + 0, 0, 0, 0, 0, 0); 359 + 360 + return ret_stuff.status; 361 + } 362 + 363 + /* 364 * Checks for console input. 365 */ 366 static inline u64 ··· 472 ia64_sn_pod_mode(void) 473 { 474 struct ia64_sal_retval isrv; 475 + SAL_CALL_REENTRANT(isrv, SN_SAL_POD_MODE, 0, 0, 0, 0, 0, 0, 0); 476 if (isrv.status) 477 return 0; 478 return isrv.v0;
+46
include/asm-ppc/cpm2.h
··· 1039 #define CMXSCR_TS4CS_CLK7 0x00000006 /* SCC4 Tx Clock Source is CLK7 */ 1040 #define CMXSCR_TS4CS_CLK8 0x00000007 /* SCC4 Tx Clock Source is CLK8 */ 1041 1042 #endif /* __CPM2__ */ 1043 #endif /* __KERNEL__ */ 1044
··· 1039 #define CMXSCR_TS4CS_CLK7 0x00000006 /* SCC4 Tx Clock Source is CLK7 */ 1040 #define CMXSCR_TS4CS_CLK8 0x00000007 /* SCC4 Tx Clock Source is CLK8 */ 1041 1042 + /*----------------------------------------------------------------------- 1043 + * SIUMCR - SIU Module Configuration Register 4-31 1044 + */ 1045 + #define SIUMCR_BBD 0x80000000 /* Bus Busy Disable */ 1046 + #define SIUMCR_ESE 0x40000000 /* External Snoop Enable */ 1047 + #define SIUMCR_PBSE 0x20000000 /* Parity Byte Select Enable */ 1048 + #define SIUMCR_CDIS 0x10000000 /* Core Disable */ 1049 + #define SIUMCR_DPPC00 0x00000000 /* Data Parity Pins Configuration*/ 1050 + #define SIUMCR_DPPC01 0x04000000 /* - " - */ 1051 + #define SIUMCR_DPPC10 0x08000000 /* - " - */ 1052 + #define SIUMCR_DPPC11 0x0c000000 /* - " - */ 1053 + #define SIUMCR_L2CPC00 0x00000000 /* L2 Cache Pins Configuration */ 1054 + #define SIUMCR_L2CPC01 0x01000000 /* - " - */ 1055 + #define SIUMCR_L2CPC10 0x02000000 /* - " - */ 1056 + #define SIUMCR_L2CPC11 0x03000000 /* - " - */ 1057 + #define SIUMCR_LBPC00 0x00000000 /* Local Bus Pins Configuration */ 1058 + #define SIUMCR_LBPC01 0x00400000 /* - " - */ 1059 + #define SIUMCR_LBPC10 0x00800000 /* - " - */ 1060 + #define SIUMCR_LBPC11 0x00c00000 /* - " - */ 1061 + #define SIUMCR_APPC00 0x00000000 /* Address Parity Pins Configuration*/ 1062 + #define SIUMCR_APPC01 0x00100000 /* - " - */ 1063 + #define SIUMCR_APPC10 0x00200000 /* - " - */ 1064 + #define SIUMCR_APPC11 0x00300000 /* - " - */ 1065 + #define SIUMCR_CS10PC00 0x00000000 /* CS10 Pin Configuration */ 1066 + #define SIUMCR_CS10PC01 0x00040000 /* - " - */ 1067 + #define SIUMCR_CS10PC10 0x00080000 /* - " - */ 1068 + #define SIUMCR_CS10PC11 0x000c0000 /* - " - */ 1069 + #define SIUMCR_BCTLC00 0x00000000 /* Buffer Control Configuration */ 1070 + #define SIUMCR_BCTLC01 0x00010000 /* - " - */ 1071 + #define SIUMCR_BCTLC10 0x00020000 /* - " - */ 1072 + #define SIUMCR_BCTLC11 0x00030000 /* - " - */ 1073 + #define SIUMCR_MMR00 
0x00000000 /* Mask Masters Requests */ 1074 + #define SIUMCR_MMR01 0x00004000 /* - " - */ 1075 + #define SIUMCR_MMR10 0x00008000 /* - " - */ 1076 + #define SIUMCR_MMR11 0x0000c000 /* - " - */ 1077 + #define SIUMCR_LPBSE 0x00002000 /* LocalBus Parity Byte Select Enable*/ 1078 + 1079 + /*----------------------------------------------------------------------- 1080 + * SCCR - System Clock Control Register 9-8 1081 + */ 1082 + #define SCCR_PCI_MODE 0x00000100 /* PCI Mode */ 1083 + #define SCCR_PCI_MODCK 0x00000080 /* Value of PCI_MODCK pin */ 1084 + #define SCCR_PCIDF_MSK 0x00000078 /* PCI division factor */ 1085 + #define SCCR_PCIDF_SHIFT 3 1086 + 1087 + 1088 #endif /* __CPM2__ */ 1089 #endif /* __KERNEL__ */ 1090
+1
include/asm-ppc/m8260_pci.h
··· 19 * Define the vendor/device ID for the MPC8265. 20 */ 21 #define PCI_DEVICE_ID_MPC8265 ((0x18C0 << 16) | PCI_VENDOR_ID_MOTOROLA) 22 23 #define M8265_PCIBR0 0x101ac 24 #define M8265_PCIBR1 0x101b0
··· 19 * Define the vendor/device ID for the MPC8265. 20 */ 21 #define PCI_DEVICE_ID_MPC8265 ((0x18C0 << 16) | PCI_VENDOR_ID_MOTOROLA) 22 + #define PCI_DEVICE_ID_MPC8272 ((0x18C1 << 16) | PCI_VENDOR_ID_MOTOROLA) 23 24 #define M8265_PCIBR0 0x101ac 25 #define M8265_PCIBR1 0x101b0
+1 -1
include/asm-ppc/mpc8260.h
··· 41 #endif 42 43 #ifdef CONFIG_PCI_8260 44 - #include <syslib/m8260_pci.h> 45 #endif 46 47 /* Make sure the memory translation stuff is there if PCI not used.
··· 41 #endif 42 43 #ifdef CONFIG_PCI_8260 44 + #include <syslib/m82xx_pci.h> 45 #endif 46 47 /* Make sure the memory translation stuff is there if PCI not used.
+1 -12
include/asm-ppc64/prom.h
··· 147 struct device_node *sibling; 148 struct device_node *next; /* next device of same type */ 149 struct device_node *allnext; /* next in list of all nodes */ 150 - struct proc_dir_entry *pde; /* this node's proc directory */ 151 - struct proc_dir_entry *name_link; /* name symlink */ 152 - struct proc_dir_entry *addr_link; /* addr symlink */ 153 struct kref kref; 154 unsigned long _flags; 155 }; ··· 172 dn->pde = de; 173 } 174 175 - static void inline set_node_name_link(struct device_node *dn, struct proc_dir_entry *de) 176 - { 177 - dn->name_link = de; 178 - } 179 - 180 - static void inline set_node_addr_link(struct device_node *dn, struct proc_dir_entry *de) 181 - { 182 - dn->addr_link = de; 183 - } 184 185 /* OBSOLETE: Old stlye node lookup */ 186 extern struct device_node *find_devices(const char *name);
··· 147 struct device_node *sibling; 148 struct device_node *next; /* next device of same type */ 149 struct device_node *allnext; /* next in list of all nodes */ 150 + struct proc_dir_entry *pde; /* this node's proc directory */ 151 struct kref kref; 152 unsigned long _flags; 153 }; ··· 174 dn->pde = de; 175 } 176 177 178 /* OBSOLETE: Old stlye node lookup */ 179 extern struct device_node *find_devices(const char *name);
+1 -1
include/asm-sh/thread_info.h
··· 27 28 #endif 29 30 - #define PREEMPT_ACTIVE 0x4000000 31 32 /* 33 * macros/functions for gaining access to the thread information structure
··· 27 28 #endif 29 30 + #define PREEMPT_ACTIVE 0x10000000 31 32 /* 33 * macros/functions for gaining access to the thread information structure
+1 -1
include/asm-sh64/thread_info.h
··· 73 74 #define THREAD_SIZE 8192 75 76 - #define PREEMPT_ACTIVE 0x4000000 77 78 /* thread information flags */ 79 #define TIF_SYSCALL_TRACE 0 /* syscall trace active */
··· 73 74 #define THREAD_SIZE 8192 75 76 + #define PREEMPT_ACTIVE 0x10000000 77 78 /* thread information flags */ 79 #define TIF_SYSCALL_TRACE 0 /* syscall trace active */
+2
include/asm-sparc64/iommu.h
··· 16 #define IOPTE_CACHE 0x0000000000000010UL /* Cached (in UPA E-cache) */ 17 #define IOPTE_WRITE 0x0000000000000002UL /* Writeable */ 18 19 #endif /* !(_SPARC_IOMMU_H) */
··· 16 #define IOPTE_CACHE 0x0000000000000010UL /* Cached (in UPA E-cache) */ 17 #define IOPTE_WRITE 0x0000000000000002UL /* Writeable */ 18 19 + #define IOMMU_NUM_CTXS 4096 20 + 21 #endif /* !(_SPARC_IOMMU_H) */
+5 -3
include/asm-sparc64/pbm.h
··· 15 #include <asm/io.h> 16 #include <asm/page.h> 17 #include <asm/oplib.h> 18 19 /* The abstraction used here is that there are PCI controllers, 20 * each with one (Sabre) or two (PSYCHO/SCHIZO) PCI bus modules ··· 40 * streaming buffers underneath. 41 */ 42 spinlock_t lock; 43 - 44 - /* Context allocator. */ 45 - unsigned int iommu_cur_ctx; 46 47 /* IOMMU page table, a linear array of ioptes. */ 48 iopte_t *page_table; /* The page table itself. */ ··· 84 u16 next; 85 u16 flush; 86 } alloc_info[PBM_NCLUSTERS]; 87 88 /* Here a PCI controller driver describes the areas of 89 * PCI memory space where DMA to/from physical memory
··· 15 #include <asm/io.h> 16 #include <asm/page.h> 17 #include <asm/oplib.h> 18 + #include <asm/iommu.h> 19 20 /* The abstraction used here is that there are PCI controllers, 21 * each with one (Sabre) or two (PSYCHO/SCHIZO) PCI bus modules ··· 39 * streaming buffers underneath. 40 */ 41 spinlock_t lock; 42 43 /* IOMMU page table, a linear array of ioptes. */ 44 iopte_t *page_table; /* The page table itself. */ ··· 86 u16 next; 87 u16 flush; 88 } alloc_info[PBM_NCLUSTERS]; 89 + 90 + /* CTX allocation. */ 91 + unsigned long ctx_lowest_free; 92 + unsigned long ctx_bitmap[IOMMU_NUM_CTXS / (sizeof(unsigned long) * 8)]; 93 94 /* Here a PCI controller driver describes the areas of 95 * PCI memory space where DMA to/from physical memory
+7 -1
include/asm-um/page.h
··· 98 99 extern unsigned long to_phys(void *virt); 100 extern void *to_virt(unsigned long phys); 101 - #define __pa(virt) to_phys((void *) virt) 102 #define __va(phys) to_virt((unsigned long) phys) 103 104 #define page_to_pfn(page) ((page) - mem_map)
··· 98 99 extern unsigned long to_phys(void *virt); 100 extern void *to_virt(unsigned long phys); 101 + 102 + /* Cast to unsigned long before casting to void * to avoid a warning from 103 + * mmap_kmem about cutting a long long down to a void *. Not sure that 104 + * casting is the right thing, but 32-bit UML can't have 64-bit virtual 105 + * addresses 106 + */ 107 + #define __pa(virt) to_phys((void *) (unsigned long) virt) 108 #define __va(phys) to_virt((unsigned long) phys) 109 110 #define page_to_pfn(page) ((page) - mem_map)
-8
include/asm-um/pgtable.h
··· 114 extern unsigned long pg0[1024]; 115 116 /* 117 - * BAD_PAGETABLE is used when we need a bogus page-table, while 118 - * BAD_PAGE is used for a bogus page. 119 - * 120 * ZERO_PAGE is a global shared page that is always zero: used 121 * for zero-mapped memory areas etc.. 122 */ 123 - extern pte_t __bad_page(void); 124 - extern pte_t * __bad_pagetable(void); 125 - 126 - #define BAD_PAGETABLE __bad_pagetable() 127 - #define BAD_PAGE __bad_page() 128 129 #define ZERO_PAGE(vaddr) virt_to_page(empty_zero_page) 130
··· 114 extern unsigned long pg0[1024]; 115 116 /* 117 * ZERO_PAGE is a global shared page that is always zero: used 118 * for zero-mapped memory areas etc.. 119 */ 120 121 #define ZERO_PAGE(vaddr) virt_to_page(empty_zero_page) 122
+4 -5
include/asm-um/thread_info.h
··· 41 #define init_thread_info (init_thread_union.thread_info) 42 #define init_stack (init_thread_union.stack) 43 44 /* how to get the thread information struct from C */ 45 static inline struct thread_info *current_thread_info(void) 46 { 47 struct thread_info *ti; 48 - unsigned long mask = PAGE_SIZE * 49 - (1 << CONFIG_KERNEL_STACK_ORDER) - 1; 50 - ti = (struct thread_info *) (((unsigned long) &ti) & ~mask); 51 return ti; 52 } 53 54 /* thread information allocation */ 55 - #define THREAD_SIZE ((1 << CONFIG_KERNEL_STACK_ORDER) * PAGE_SIZE) 56 #define alloc_thread_info(tsk) \ 57 ((struct thread_info *) kmalloc(THREAD_SIZE, GFP_KERNEL)) 58 #define free_thread_info(ti) kfree(ti) ··· 61 62 #endif 63 64 - #define PREEMPT_ACTIVE 0x4000000 65 66 #define TIF_SYSCALL_TRACE 0 /* syscall trace active */ 67 #define TIF_SIGPENDING 1 /* signal pending */
··· 41 #define init_thread_info (init_thread_union.thread_info) 42 #define init_stack (init_thread_union.stack) 43 44 + #define THREAD_SIZE ((1 << CONFIG_KERNEL_STACK_ORDER) * PAGE_SIZE) 45 /* how to get the thread information struct from C */ 46 static inline struct thread_info *current_thread_info(void) 47 { 48 struct thread_info *ti; 49 + unsigned long mask = THREAD_SIZE - 1; 50 + ti = (struct thread_info *) (((unsigned long) &ti) & ~mask); 51 return ti; 52 } 53 54 /* thread information allocation */ 55 #define alloc_thread_info(tsk) \ 56 ((struct thread_info *) kmalloc(THREAD_SIZE, GFP_KERNEL)) 57 #define free_thread_info(ti) kfree(ti) ··· 62 63 #endif 64 65 + #define PREEMPT_ACTIVE 0x10000000 66 67 #define TIF_SYSCALL_TRACE 0 /* syscall trace active */ 68 #define TIF_SIGPENDING 1 /* signal pending */
+1 -1
include/linux/cpufreq.h
··· 49 /* Frequency values here are CPU kHz so that hardware which doesn't run 50 * with some frequencies can complain without having to guess what per 51 * cent / per mille means. 52 - * Maximum transition latency is in microseconds - if it's unknown, 53 * CPUFREQ_ETERNAL shall be used. 54 */ 55
··· 49 /* Frequency values here are CPU kHz so that hardware which doesn't run 50 * with some frequencies can complain without having to guess what per 51 * cent / per mille means. 52 + * Maximum transition latency is in nanoseconds - if it's unknown, 53 * CPUFREQ_ETERNAL shall be used. 54 */ 55
+18 -4
include/linux/etherdevice.h
··· 56 } 57 58 /** 59 * is_valid_ether_addr - Determine if the given Ethernet address is valid 60 * @addr: Pointer to a six-byte array containing the Ethernet address 61 * 62 * Check that the Ethernet address (MAC) is not 00:00:00:00:00:00, is not 63 - * a multicast address, and is not FF:FF:FF:FF:FF:FF. The multicast 64 - * and FF:FF:... tests are combined into the single test "!(addr[0]&1)". 65 * 66 * Return true if the address is valid. 67 */ 68 static inline int is_valid_ether_addr(const u8 *addr) 69 { 70 - return !(addr[0]&1) && !is_zero_ether_addr(addr); 71 } 72 73 /** ··· 97 addr [0] &= 0xfe; /* clear multicast bit */ 98 addr [0] |= 0x02; /* set local assignment bit (IEEE802) */ 99 } 100 - #endif 101 102 #endif /* _LINUX_ETHERDEVICE_H */
··· 56 } 57 58 /** 59 + * is_multicast_ether_addr - Determine if the given Ethernet address is a 60 + * multicast address. 61 + * 62 + * @addr: Pointer to a six-byte array containing the Ethernet address 63 + * 64 + * Return true if the address is a multicast address. 65 + */ 66 + static inline int is_multicast_ether_addr(const u8 *addr) 67 + { 68 + return addr[0] & 0x01; 69 + } 70 + 71 + /** 72 * is_valid_ether_addr - Determine if the given Ethernet address is valid 73 * @addr: Pointer to a six-byte array containing the Ethernet address 74 * 75 * Check that the Ethernet address (MAC) is not 00:00:00:00:00:00, is not 76 + * a multicast address, and is not FF:FF:FF:FF:FF:FF. 77 * 78 * Return true if the address is valid. 79 */ 80 static inline int is_valid_ether_addr(const u8 *addr) 81 { 82 + /* FF:FF:FF:FF:FF:FF is a multicast address so we don't need to 83 + * explicitly check for it here. */ 84 + return !is_multicast_ether_addr(addr) && !is_zero_ether_addr(addr); 85 } 86 87 /** ··· 83 addr [0] &= 0xfe; /* clear multicast bit */ 84 addr [0] |= 0x02; /* set local assignment bit (IEEE802) */ 85 } 86 + #endif /* __KERNEL__ */ 87 88 #endif /* _LINUX_ETHERDEVICE_H */
+1
include/linux/ethtool.h
··· 256 u32 ethtool_op_get_link(struct net_device *dev); 257 u32 ethtool_op_get_tx_csum(struct net_device *dev); 258 int ethtool_op_set_tx_csum(struct net_device *dev, u32 data); 259 u32 ethtool_op_get_sg(struct net_device *dev); 260 int ethtool_op_set_sg(struct net_device *dev, u32 data); 261 u32 ethtool_op_get_tso(struct net_device *dev);
··· 256 u32 ethtool_op_get_link(struct net_device *dev); 257 u32 ethtool_op_get_tx_csum(struct net_device *dev); 258 int ethtool_op_set_tx_csum(struct net_device *dev, u32 data); 259 + int ethtool_op_set_tx_hw_csum(struct net_device *dev, u32 data); 260 u32 ethtool_op_get_sg(struct net_device *dev); 261 int ethtool_op_set_sg(struct net_device *dev, u32 data); 262 u32 ethtool_op_get_tso(struct net_device *dev);
+25 -3
include/linux/gameport.h
··· 67 void gameport_close(struct gameport *gameport); 68 void gameport_rescan(struct gameport *gameport); 69 70 void __gameport_register_port(struct gameport *gameport, struct module *owner); 71 static inline void gameport_register_port(struct gameport *gameport) 72 { ··· 76 } 77 78 void gameport_unregister_port(struct gameport *gameport); 79 80 static inline struct gameport *gameport_allocate_port(void) 81 { ··· 116 { 117 strlcpy(gameport->name, name, sizeof(gameport->name)); 118 } 119 - 120 - void gameport_set_phys(struct gameport *gameport, const char *fmt, ...) 121 - __attribute__ ((format (printf, 2, 3))); 122 123 /* 124 * Use the following fucntions to manipulate gameport's per-port
··· 67 void gameport_close(struct gameport *gameport); 68 void gameport_rescan(struct gameport *gameport); 69 70 + #if defined(CONFIG_GAMEPORT) || (defined(MODULE) && defined(CONFIG_GAMEPORT_MODULE)) 71 + 72 void __gameport_register_port(struct gameport *gameport, struct module *owner); 73 static inline void gameport_register_port(struct gameport *gameport) 74 { ··· 74 } 75 76 void gameport_unregister_port(struct gameport *gameport); 77 + 78 + void gameport_set_phys(struct gameport *gameport, const char *fmt, ...) 79 + __attribute__ ((format (printf, 2, 3))); 80 + 81 + #else 82 + 83 + static inline void gameport_register_port(struct gameport *gameport) 84 + { 85 + return; 86 + } 87 + 88 + static inline void gameport_unregister_port(struct gameport *gameport) 89 + { 90 + return; 91 + } 92 + 93 + static inline void gameport_set_phys(struct gameport *gameport, 94 + const char *fmt, ...) 95 + { 96 + return; 97 + } 98 + 99 + #endif 100 101 static inline struct gameport *gameport_allocate_port(void) 102 { ··· 91 { 92 strlcpy(gameport->name, name, sizeof(gameport->name)); 93 } 94 95 /* 96 * Use the following fucntions to manipulate gameport's per-port
+5 -1
include/linux/hardirq.h
··· 43 #define __IRQ_MASK(x) ((1UL << (x))-1) 44 45 #define PREEMPT_MASK (__IRQ_MASK(PREEMPT_BITS) << PREEMPT_SHIFT) 46 - #define HARDIRQ_MASK (__IRQ_MASK(HARDIRQ_BITS) << HARDIRQ_SHIFT) 47 #define SOFTIRQ_MASK (__IRQ_MASK(SOFTIRQ_BITS) << SOFTIRQ_SHIFT) 48 49 #define PREEMPT_OFFSET (1UL << PREEMPT_SHIFT) 50 #define SOFTIRQ_OFFSET (1UL << SOFTIRQ_SHIFT) 51 #define HARDIRQ_OFFSET (1UL << HARDIRQ_SHIFT) 52 53 #define hardirq_count() (preempt_count() & HARDIRQ_MASK) 54 #define softirq_count() (preempt_count() & SOFTIRQ_MASK)
··· 43 #define __IRQ_MASK(x) ((1UL << (x))-1) 44 45 #define PREEMPT_MASK (__IRQ_MASK(PREEMPT_BITS) << PREEMPT_SHIFT) 46 #define SOFTIRQ_MASK (__IRQ_MASK(SOFTIRQ_BITS) << SOFTIRQ_SHIFT) 47 + #define HARDIRQ_MASK (__IRQ_MASK(HARDIRQ_BITS) << HARDIRQ_SHIFT) 48 49 #define PREEMPT_OFFSET (1UL << PREEMPT_SHIFT) 50 #define SOFTIRQ_OFFSET (1UL << SOFTIRQ_SHIFT) 51 #define HARDIRQ_OFFSET (1UL << HARDIRQ_SHIFT) 52 + 53 + #if PREEMPT_ACTIVE < (1 << (HARDIRQ_SHIFT + HARDIRQ_BITS)) 54 + #error PREEMPT_ACTIVE is too low! 55 + #endif 56 57 #define hardirq_count() (preempt_count() & HARDIRQ_MASK) 58 #define softirq_count() (preempt_count() & SOFTIRQ_MASK)
+2
include/linux/inetdevice.h
··· 29 int no_xfrm; 30 int no_policy; 31 int force_igmp_version; 32 void *sysctl; 33 }; 34 ··· 72 #define IN_DEV_SEC_REDIRECTS(in_dev) (ipv4_devconf.secure_redirects || (in_dev)->cnf.secure_redirects) 73 #define IN_DEV_IDTAG(in_dev) ((in_dev)->cnf.tag) 74 #define IN_DEV_MEDIUM_ID(in_dev) ((in_dev)->cnf.medium_id) 75 76 #define IN_DEV_RX_REDIRECTS(in_dev) \ 77 ((IN_DEV_FORWARD(in_dev) && \
··· 29 int no_xfrm; 30 int no_policy; 31 int force_igmp_version; 32 + int promote_secondaries; 33 void *sysctl; 34 }; 35 ··· 71 #define IN_DEV_SEC_REDIRECTS(in_dev) (ipv4_devconf.secure_redirects || (in_dev)->cnf.secure_redirects) 72 #define IN_DEV_IDTAG(in_dev) ((in_dev)->cnf.tag) 73 #define IN_DEV_MEDIUM_ID(in_dev) ((in_dev)->cnf.medium_id) 74 + #define IN_DEV_PROMOTE_SECONDARIES(in_dev) (ipv4_devconf.promote_secondaries || (in_dev)->cnf.promote_secondaries) 75 76 #define IN_DEV_RX_REDIRECTS(in_dev) \ 77 ((IN_DEV_FORWARD(in_dev) && \
+1
include/linux/libata.h
··· 410 extern void ata_exec_command(struct ata_port *ap, struct ata_taskfile *tf); 411 extern int ata_port_start (struct ata_port *ap); 412 extern void ata_port_stop (struct ata_port *ap); 413 extern irqreturn_t ata_interrupt (int irq, void *dev_instance, struct pt_regs *regs); 414 extern void ata_qc_prep(struct ata_queued_cmd *qc); 415 extern int ata_qc_issue_prot(struct ata_queued_cmd *qc);
··· 410 extern void ata_exec_command(struct ata_port *ap, struct ata_taskfile *tf); 411 extern int ata_port_start (struct ata_port *ap); 412 extern void ata_port_stop (struct ata_port *ap); 413 + extern void ata_host_stop (struct ata_host_set *host_set); 414 extern irqreturn_t ata_interrupt (int irq, void *dev_instance, struct pt_regs *regs); 415 extern void ata_qc_prep(struct ata_queued_cmd *qc); 416 extern int ata_qc_issue_prot(struct ata_queued_cmd *qc);
+2 -1
include/linux/netdevice.h
··· 401 } reg_state; 402 403 /* Net device features */ 404 - int features; 405 #define NETIF_F_SG 1 /* Scatter/gather IO. */ 406 #define NETIF_F_IP_CSUM 2 /* Can checksum only TCP/UDP over IPv4. */ 407 #define NETIF_F_NO_CSUM 4 /* Does not require checksum. F.e. loopack. */ ··· 913 extern void dev_set_promiscuity(struct net_device *dev, int inc); 914 extern void dev_set_allmulti(struct net_device *dev, int inc); 915 extern void netdev_state_change(struct net_device *dev); 916 /* Load a device via the kmod */ 917 extern void dev_load(const char *name); 918 extern void dev_mcast_init(void);
··· 401 } reg_state; 402 403 /* Net device features */ 404 + unsigned long features; 405 #define NETIF_F_SG 1 /* Scatter/gather IO. */ 406 #define NETIF_F_IP_CSUM 2 /* Can checksum only TCP/UDP over IPv4. */ 407 #define NETIF_F_NO_CSUM 4 /* Does not require checksum. F.e. loopack. */ ··· 913 extern void dev_set_promiscuity(struct net_device *dev, int inc); 914 extern void dev_set_allmulti(struct net_device *dev, int inc); 915 extern void netdev_state_change(struct net_device *dev); 916 + extern void netdev_features_change(struct net_device *dev); 917 /* Load a device via the kmod */ 918 extern void dev_load(const char *name); 919 extern void dev_mcast_init(void);
+1
include/linux/notifier.h
··· 56 #define NETDEV_CHANGEADDR 0x0008 57 #define NETDEV_GOING_DOWN 0x0009 58 #define NETDEV_CHANGENAME 0x000A 59 60 #define SYS_DOWN 0x0001 /* Notify of system down */ 61 #define SYS_RESTART SYS_DOWN
··· 56 #define NETDEV_CHANGEADDR 0x0008 57 #define NETDEV_GOING_DOWN 0x0009 58 #define NETDEV_CHANGENAME 0x000A 59 + #define NETDEV_FEAT_CHANGE 0x000B 60 61 #define SYS_DOWN 0x0001 /* Notify of system down */ 62 #define SYS_RESTART SYS_DOWN
+6
include/linux/pci_ids.h
··· 1230 #define PCI_DEVICE_ID_NVIDIA_QUADRO4_900XGL 0x0258 1231 #define PCI_DEVICE_ID_NVIDIA_QUADRO4_750XGL 0x0259 1232 #define PCI_DEVICE_ID_NVIDIA_QUADRO4_700XGL 0x025B 1233 #define PCI_DEVICE_ID_NVIDIA_GEFORCE4_TI_4800 0x0280 1234 #define PCI_DEVICE_ID_NVIDIA_GEFORCE4_TI_4800_8X 0x0281 1235 #define PCI_DEVICE_ID_NVIDIA_GEFORCE4_TI_4800SE 0x0282
··· 1230 #define PCI_DEVICE_ID_NVIDIA_QUADRO4_900XGL 0x0258 1231 #define PCI_DEVICE_ID_NVIDIA_QUADRO4_750XGL 0x0259 1232 #define PCI_DEVICE_ID_NVIDIA_QUADRO4_700XGL 0x025B 1233 + #define PCI_DEVICE_ID_NVIDIA_NFORCE_MCP51_IDE 0x0265 1234 + #define PCI_DEVICE_ID_NVIDIA_NFORCE_MCP51_SATA 0x0266 1235 + #define PCI_DEVICE_ID_NVIDIA_NFORCE_MCP51_SATA2 0x0267 1236 + #define PCI_DEVICE_ID_NVIDIA_NVENET_12 0x0268 1237 + #define PCI_DEVICE_ID_NVIDIA_NVENET_13 0x0269 1238 + #define PCI_DEVICE_ID_NVIDIA_MCP51_AUDIO 0x026B 1239 #define PCI_DEVICE_ID_NVIDIA_GEFORCE4_TI_4800 0x0280 1240 #define PCI_DEVICE_ID_NVIDIA_GEFORCE4_TI_4800_8X 0x0281 1241 #define PCI_DEVICE_ID_NVIDIA_GEFORCE4_TI_4800SE 0x0282
+1
include/linux/sysctl.h
··· 399 NET_IPV4_CONF_FORCE_IGMP_VERSION=17, 400 NET_IPV4_CONF_ARP_ANNOUNCE=18, 401 NET_IPV4_CONF_ARP_IGNORE=19, 402 __NET_IPV4_CONF_MAX 403 }; 404
··· 399 NET_IPV4_CONF_FORCE_IGMP_VERSION=17, 400 NET_IPV4_CONF_ARP_ANNOUNCE=18, 401 NET_IPV4_CONF_ARP_IGNORE=19, 402 + NET_IPV4_CONF_PROMOTE_SECONDARIES=20, 403 __NET_IPV4_CONF_MAX 404 }; 405
-3
include/net/route.h
··· 181 memcpy(&fl, &(*rp)->fl, sizeof(fl)); 182 fl.fl_ip_sport = sport; 183 fl.fl_ip_dport = dport; 184 - #if defined(CONFIG_IP_ROUTE_MULTIPATH_CACHED) 185 - fl.flags |= FLOWI_FLAG_MULTIPATHOLDROUTE; 186 - #endif 187 ip_rt_put(*rp); 188 *rp = NULL; 189 return ip_route_output_flow(rp, &fl, sk, 0);
··· 181 memcpy(&fl, &(*rp)->fl, sizeof(fl)); 182 fl.fl_ip_sport = sport; 183 fl.fl_ip_dport = dport; 184 ip_rt_put(*rp); 185 *rp = NULL; 186 return ip_route_output_flow(rp, &fl, sk, 0);
+1 -1
init/Kconfig
··· 442 443 config MODVERSIONS 444 bool "Module versioning support (EXPERIMENTAL)" 445 - depends on MODULES && EXPERIMENTAL && !UML 446 help 447 Usually, you have to use modules compiled with your kernel. 448 Saying Y here makes it sometimes possible to use modules
··· 442 443 config MODVERSIONS 444 bool "Module versioning support (EXPERIMENTAL)" 445 + depends on MODULES && EXPERIMENTAL 446 help 447 Usually, you have to use modules compiled with your kernel. 448 Saying Y here makes it sometimes possible to use modules
-2
kernel/irq/handle.c
··· 119 */ 120 desc->handler->ack(irq); 121 action_ret = handle_IRQ_event(irq, regs, desc->action); 122 - if (!noirqdebug) 123 - note_interrupt(irq, desc, action_ret); 124 desc->handler->end(irq); 125 return 1; 126 }
··· 119 */ 120 desc->handler->ack(irq); 121 action_ret = handle_IRQ_event(irq, regs, desc->action); 122 desc->handler->end(irq); 123 return 1; 124 }
+6
kernel/module.c
··· 1758 const char __user *uargs) 1759 { 1760 struct module *mod; 1761 int ret = 0; 1762 1763 /* Must have permission */ ··· 1776 return PTR_ERR(mod); 1777 } 1778 1779 /* Flush the instruction cache, since we've played with text */ 1780 if (mod->module_init) 1781 flush_icache_range((unsigned long)mod->module_init, ··· 1786 + mod->init_size); 1787 flush_icache_range((unsigned long)mod->module_core, 1788 (unsigned long)mod->module_core + mod->core_size); 1789 1790 /* Now sew it into the lists. They won't access us, since 1791 strong_try_module_get() will fail. */
··· 1758 const char __user *uargs) 1759 { 1760 struct module *mod; 1761 + mm_segment_t old_fs = get_fs(); 1762 int ret = 0; 1763 1764 /* Must have permission */ ··· 1775 return PTR_ERR(mod); 1776 } 1777 1778 + /* flush the icache in correct context */ 1779 + set_fs(KERNEL_DS); 1780 + 1781 /* Flush the instruction cache, since we've played with text */ 1782 if (mod->module_init) 1783 flush_icache_range((unsigned long)mod->module_init, ··· 1782 + mod->init_size); 1783 flush_icache_range((unsigned long)mod->module_core, 1784 (unsigned long)mod->module_core + mod->core_size); 1785 + 1786 + set_fs(old_fs); 1787 1788 /* Now sew it into the lists. They won't access us, since 1789 strong_try_module_get() will fail. */
+2 -1
lib/Kconfig.debug
··· 151 152 config FRAME_POINTER 153 bool "Compile the kernel with frame pointers" 154 - depends on DEBUG_KERNEL && ((X86 && !X86_64) || CRIS || M68K || M68KNOMMU || FRV) 155 help 156 If you say Y here the resulting kernel image will be slightly larger 157 and slower, but it will give very useful debugging information.
··· 151 152 config FRAME_POINTER 153 bool "Compile the kernel with frame pointers" 154 + depends on DEBUG_KERNEL && ((X86 && !X86_64) || CRIS || M68K || M68KNOMMU || FRV || UML) 155 + default y if DEBUG_INFO && UML 156 help 157 If you say Y here the resulting kernel image will be slightly larger 158 and slower, but it will give very useful debugging information.
+7 -8
net/bridge/br_device.c
··· 21 22 static struct net_device_stats *br_dev_get_stats(struct net_device *dev) 23 { 24 - struct net_bridge *br; 25 - 26 - br = dev->priv; 27 - 28 return &br->statistics; 29 } 30 ··· 51 52 static int br_dev_open(struct net_device *dev) 53 { 54 - netif_start_queue(dev); 55 56 - br_stp_enable_bridge(dev->priv); 57 58 return 0; 59 } ··· 66 67 static int br_dev_stop(struct net_device *dev) 68 { 69 - br_stp_disable_bridge(dev->priv); 70 71 netif_stop_queue(dev); 72 ··· 75 76 static int br_change_mtu(struct net_device *dev, int new_mtu) 77 { 78 - if ((new_mtu < 68) || new_mtu > br_min_mtu(dev->priv)) 79 return -EINVAL; 80 81 dev->mtu = new_mtu;
··· 21 22 static struct net_device_stats *br_dev_get_stats(struct net_device *dev) 23 { 24 + struct net_bridge *br = netdev_priv(dev); 25 return &br->statistics; 26 } 27 ··· 54 55 static int br_dev_open(struct net_device *dev) 56 { 57 + struct net_bridge *br = netdev_priv(dev); 58 59 + br_features_recompute(br); 60 + netif_start_queue(dev); 61 + br_stp_enable_bridge(br); 62 63 return 0; 64 } ··· 67 68 static int br_dev_stop(struct net_device *dev) 69 { 70 + br_stp_disable_bridge(netdev_priv(dev)); 71 72 netif_stop_queue(dev); 73 ··· 76 77 static int br_change_mtu(struct net_device *dev, int new_mtu) 78 { 79 + if (new_mtu < 68 || new_mtu > br_min_mtu(netdev_priv(dev))) 80 return -EINVAL; 81 82 dev->mtu = new_mtu;
+23
net/bridge/br_if.c
··· 314 return mtu; 315 } 316 317 /* called with RTNL */ 318 int br_add_if(struct net_bridge *br, struct net_device *dev) 319 { ··· 390 391 spin_lock_bh(&br->lock); 392 br_stp_recalculate_bridge_id(br); 393 spin_unlock_bh(&br->lock); 394 395 return 0;
··· 314 return mtu; 315 } 316 317 + /* 318 + * Recomputes features using the slaves' features 319 + */ 320 + void br_features_recompute(struct net_bridge *br) 321 + { 322 + struct net_bridge_port *p; 323 + unsigned long features, checksum; 324 + 325 + features = NETIF_F_SG | NETIF_F_FRAGLIST 326 + | NETIF_F_HIGHDMA | NETIF_F_TSO; 327 + checksum = NETIF_F_IP_CSUM; /* least common subset */ 328 + 329 + list_for_each_entry(p, &br->port_list, list) { 330 + if (!(p->dev->features 331 + & (NETIF_F_IP_CSUM|NETIF_F_NO_CSUM|NETIF_F_HW_CSUM))) 332 + checksum = 0; 333 + features &= p->dev->features; 334 + } 335 + 336 + br->dev->features = features | checksum | NETIF_F_LLTX; 337 + } 338 + 339 /* called with RTNL */ 340 int br_add_if(struct net_bridge *br, struct net_device *dev) 341 { ··· 368 369 spin_lock_bh(&br->lock); 370 br_stp_recalculate_bridge_id(br); 371 + br_features_recompute(br); 372 spin_unlock_bh(&br->lock); 373 374 return 0;
+5 -3
net/bridge/br_input.c
··· 26 #ifdef CONFIG_NETFILTER_DEBUG 27 skb->nf_debug = 0; 28 #endif 29 - netif_rx(skb); 30 31 return 0; 32 } ··· 53 struct net_bridge *br = p->br; 54 struct net_bridge_fdb_entry *dst; 55 int passedup = 0; 56 57 if (br->dev->flags & IFF_PROMISC) { 58 struct sk_buff *skb2; ··· 111 if (!is_valid_ether_addr(eth_hdr(skb)->h_source)) 112 goto err; 113 114 - if (p->state == BR_STATE_LEARNING || 115 - p->state == BR_STATE_FORWARDING) 116 br_fdb_update(p->br, p, eth_hdr(skb)->h_source); 117 118 if (p->br->stp_enabled &&
··· 26 #ifdef CONFIG_NETFILTER_DEBUG 27 skb->nf_debug = 0; 28 #endif 29 + netif_receive_skb(skb); 30 31 return 0; 32 } ··· 53 struct net_bridge *br = p->br; 54 struct net_bridge_fdb_entry *dst; 55 int passedup = 0; 56 + 57 + /* insert into forwarding database after filtering to avoid spoofing */ 58 + br_fdb_update(p->br, p, eth_hdr(skb)->h_source); 59 60 if (br->dev->flags & IFF_PROMISC) { 61 struct sk_buff *skb2; ··· 108 if (!is_valid_ether_addr(eth_hdr(skb)->h_source)) 109 goto err; 110 111 + if (p->state == BR_STATE_LEARNING) 112 br_fdb_update(p->br, p, eth_hdr(skb)->h_source); 113 114 if (p->br->stp_enabled &&
+9
net/bridge/br_notify.c
··· 65 } 66 break; 67 68 case NETDEV_DOWN: 69 if (br->dev->flags & IFF_UP) 70 br_stp_disable_port(p);
··· 65 } 66 break; 67 68 + case NETDEV_FEAT_CHANGE: 69 + if (br->dev->flags & IFF_UP) 70 + br_features_recompute(br); 71 + 72 + /* could do recursive feature change notification 73 + * but who would care?? 74 + */ 75 + break; 76 + 77 case NETDEV_DOWN: 78 if (br->dev->flags & IFF_UP) 79 br_stp_disable_port(p);
+1
net/bridge/br_private.h
··· 174 extern int br_del_if(struct net_bridge *br, 175 struct net_device *dev); 176 extern int br_min_mtu(const struct net_bridge *br); 177 178 /* br_input.c */ 179 extern int br_handle_frame_finish(struct sk_buff *skb);
··· 174 extern int br_del_if(struct net_bridge *br, 175 struct net_device *dev); 176 extern int br_min_mtu(const struct net_bridge *br); 177 + extern void br_features_recompute(struct net_bridge *br); 178 179 /* br_input.c */ 180 extern int br_handle_frame_finish(struct sk_buff *skb);
+3
net/bridge/br_stp_bpdu.c
··· 140 struct net_bridge *br = p->br; 141 unsigned char *buf; 142 143 /* need at least the 802 and STP headers */ 144 if (!pskb_may_pull(skb, sizeof(header)+1) || 145 memcmp(skb->data, header, sizeof(header)))
··· 140 struct net_bridge *br = p->br; 141 unsigned char *buf; 142 143 + /* insert into forwarding database after filtering to avoid spoofing */ 144 + br_fdb_update(p->br, p, eth_hdr(skb)->h_source); 145 + 146 /* need at least the 802 and STP headers */ 147 if (!pskb_may_pull(skb, sizeof(header)+1) || 148 memcmp(skb->data, header, sizeof(header)))
+12
net/core/dev.c
··· 761 } 762 763 /** 764 * netdev_state_change - device changes state 765 * @dev: device to cause notification 766 *
··· 761 } 762 763 /** 764 + * netdev_features_change - device changes features 765 + * @dev: device to cause notification 766 + * 767 + * Called to indicate a device has changed features. 768 + */ 769 + void netdev_features_change(struct net_device *dev) 770 + { 771 + notifier_call_chain(&netdev_chain, NETDEV_FEAT_CHANGE, dev); 772 + } 773 + EXPORT_SYMBOL(netdev_features_change); 774 + 775 + /** 776 * netdev_state_change - device changes state 777 * @dev: device to cause notification 778 *
+18 -2
net/core/ethtool.c
··· 29 30 u32 ethtool_op_get_tx_csum(struct net_device *dev) 31 { 32 - return (dev->features & NETIF_F_IP_CSUM) != 0; 33 } 34 35 int ethtool_op_set_tx_csum(struct net_device *dev, u32 data) ··· 42 return 0; 43 } 44 45 u32 ethtool_op_get_sg(struct net_device *dev) 46 { 47 return (dev->features & NETIF_F_SG) != 0; ··· 691 void __user *useraddr = ifr->ifr_data; 692 u32 ethcmd; 693 int rc; 694 695 /* 696 * XXX: This can be pushed down into the ethtool_* handlers that ··· 713 if ((rc = dev->ethtool_ops->begin(dev)) < 0) 714 return rc; 715 716 switch (ethcmd) { 717 case ETHTOOL_GSET: 718 rc = ethtool_get_settings(dev, useraddr); ··· 724 break; 725 case ETHTOOL_GDRVINFO: 726 rc = ethtool_get_drvinfo(dev, useraddr); 727 - 728 break; 729 case ETHTOOL_GREGS: 730 rc = ethtool_get_regs(dev, useraddr); ··· 812 813 if(dev->ethtool_ops->complete) 814 dev->ethtool_ops->complete(dev); 815 return rc; 816 817 ioctl: ··· 832 EXPORT_SYMBOL(ethtool_op_set_sg); 833 EXPORT_SYMBOL(ethtool_op_set_tso); 834 EXPORT_SYMBOL(ethtool_op_set_tx_csum);
··· 29 30 u32 ethtool_op_get_tx_csum(struct net_device *dev) 31 { 32 + return (dev->features & (NETIF_F_IP_CSUM | NETIF_F_HW_CSUM)) != 0; 33 } 34 35 int ethtool_op_set_tx_csum(struct net_device *dev, u32 data) ··· 42 return 0; 43 } 44 45 + int ethtool_op_set_tx_hw_csum(struct net_device *dev, u32 data) 46 + { 47 + if (data) 48 + dev->features |= NETIF_F_HW_CSUM; 49 + else 50 + dev->features &= ~NETIF_F_HW_CSUM; 51 + 52 + return 0; 53 + } 54 u32 ethtool_op_get_sg(struct net_device *dev) 55 { 56 return (dev->features & NETIF_F_SG) != 0; ··· 682 void __user *useraddr = ifr->ifr_data; 683 u32 ethcmd; 684 int rc; 685 + unsigned long old_features; 686 687 /* 688 * XXX: This can be pushed down into the ethtool_* handlers that ··· 703 if ((rc = dev->ethtool_ops->begin(dev)) < 0) 704 return rc; 705 706 + old_features = dev->features; 707 + 708 switch (ethcmd) { 709 case ETHTOOL_GSET: 710 rc = ethtool_get_settings(dev, useraddr); ··· 712 break; 713 case ETHTOOL_GDRVINFO: 714 rc = ethtool_get_drvinfo(dev, useraddr); 715 break; 716 case ETHTOOL_GREGS: 717 rc = ethtool_get_regs(dev, useraddr); ··· 801 802 if(dev->ethtool_ops->complete) 803 dev->ethtool_ops->complete(dev); 804 + 805 + if (old_features != dev->features) 806 + netdev_features_change(dev); 807 + 808 return rc; 809 810 ioctl: ··· 817 EXPORT_SYMBOL(ethtool_op_set_sg); 818 EXPORT_SYMBOL(ethtool_op_set_tso); 819 EXPORT_SYMBOL(ethtool_op_set_tx_csum); 820 + EXPORT_SYMBOL(ethtool_op_set_tx_hw_csum);
+2 -1
net/core/net-sysfs.c
··· 21 #define to_net_dev(class) container_of(class, struct net_device, class_dev) 22 23 static const char fmt_hex[] = "%#x\n"; 24 static const char fmt_dec[] = "%d\n"; 25 static const char fmt_ulong[] = "%lu\n"; 26 ··· 92 NETDEVICE_ATTR(addr_len, fmt_dec); 93 NETDEVICE_ATTR(iflink, fmt_dec); 94 NETDEVICE_ATTR(ifindex, fmt_dec); 95 - NETDEVICE_ATTR(features, fmt_hex); 96 NETDEVICE_ATTR(type, fmt_dec); 97 98 /* use same locking rules as GIFHWADDR ioctl's */
··· 21 #define to_net_dev(class) container_of(class, struct net_device, class_dev) 22 23 static const char fmt_hex[] = "%#x\n"; 24 + static const char fmt_long_hex[] = "%#lx\n"; 25 static const char fmt_dec[] = "%d\n"; 26 static const char fmt_ulong[] = "%lu\n"; 27 ··· 91 NETDEVICE_ATTR(addr_len, fmt_dec); 92 NETDEVICE_ATTR(iflink, fmt_dec); 93 NETDEVICE_ATTR(ifindex, fmt_dec); 94 + NETDEVICE_ATTR(features, fmt_long_hex); 95 NETDEVICE_ATTR(type, fmt_dec); 96 97 /* use same locking rules as GIFHWADDR ioctl's */
+29 -5
net/ipv4/devinet.c
··· 233 static void inet_del_ifa(struct in_device *in_dev, struct in_ifaddr **ifap, 234 int destroy) 235 { 236 struct in_ifaddr *ifa1 = *ifap; 237 238 ASSERT_RTNL(); 239 240 - /* 1. Deleting primary ifaddr forces deletion all secondaries */ 241 242 if (!(ifa1->ifa_flags & IFA_F_SECONDARY)) { 243 struct in_ifaddr *ifa; ··· 254 continue; 255 } 256 257 - *ifap1 = ifa->ifa_next; 258 259 - rtmsg_ifa(RTM_DELADDR, ifa); 260 - notifier_call_chain(&inetaddr_chain, NETDEV_DOWN, ifa); 261 - inet_free_ifa(ifa); 262 } 263 } 264 ··· 288 289 if (!in_dev->ifa_list) 290 inetdev_destroy(in_dev); 291 } 292 } 293 ··· 1394 .ctl_name = NET_IPV4_CONF_FORCE_IGMP_VERSION, 1395 .procname = "force_igmp_version", 1396 .data = &ipv4_devconf.force_igmp_version, 1397 .maxlen = sizeof(int), 1398 .mode = 0644, 1399 .proc_handler = &ipv4_doint_and_flush,
··· 233 static void inet_del_ifa(struct in_device *in_dev, struct in_ifaddr **ifap, 234 int destroy) 235 { 236 + struct in_ifaddr *promote = NULL; 237 struct in_ifaddr *ifa1 = *ifap; 238 239 ASSERT_RTNL(); 240 241 + /* 1. Deleting primary ifaddr forces deletion of all secondaries 242 + * unless alias promotion is set 243 + */ 244 245 if (!(ifa1->ifa_flags & IFA_F_SECONDARY)) { 246 struct in_ifaddr *ifa; ··· 251 continue; 252 } 253 254 + if (!IN_DEV_PROMOTE_SECONDARIES(in_dev)) { 255 + *ifap1 = ifa->ifa_next; 256 257 + rtmsg_ifa(RTM_DELADDR, ifa); 258 + notifier_call_chain(&inetaddr_chain, NETDEV_DOWN, ifa); 259 + inet_free_ifa(ifa); 260 + } else { 261 + promote = ifa; 262 + break; 263 + } 264 } 265 } 266 ··· 280 281 if (!in_dev->ifa_list) 282 inetdev_destroy(in_dev); 283 + } 284 + 285 + if (promote && IN_DEV_PROMOTE_SECONDARIES(in_dev)) { 286 + /* not sure if we should send a delete notify first? */ 287 + promote->ifa_flags &= ~IFA_F_SECONDARY; 288 + rtmsg_ifa(RTM_NEWADDR, promote); 289 + notifier_call_chain(&inetaddr_chain, NETDEV_UP, promote); 290 } 291 } 292 ··· 1379 .ctl_name = NET_IPV4_CONF_FORCE_IGMP_VERSION, 1380 .procname = "force_igmp_version", 1381 .data = &ipv4_devconf.force_igmp_version, 1382 + .maxlen = sizeof(int), 1383 + .mode = 0644, 1384 + .proc_handler = &ipv4_doint_and_flush, 1385 + .strategy = &ipv4_doint_and_flush_strategy, 1386 + }, 1387 + { 1388 + .ctl_name = NET_IPV4_CONF_PROMOTE_SECONDARIES, 1389 + .procname = "promote_secondaries", 1390 + .data = &ipv4_devconf.promote_secondaries, 1391 .maxlen = sizeof(int), 1392 .mode = 0644, 1393 .proc_handler = &ipv4_doint_and_flush,
+1 -1
net/ipv4/esp4.c
··· 478 { 479 struct xfrm_decap_state decap; 480 481 - if (sizeof(struct esp_decap_data) < 482 sizeof(decap.decap_data)) { 483 extern void decap_data_too_small(void); 484
··· 478 { 479 struct xfrm_decap_state decap; 480 481 + if (sizeof(struct esp_decap_data) > 482 sizeof(decap.decap_data)) { 483 extern void decap_data_too_small(void); 484
+1 -17
net/ipv4/multipath_drr.c
··· 57 58 static struct multipath_device state[MULTIPATH_MAX_DEVICECANDIDATES]; 59 static DEFINE_SPINLOCK(state_lock); 60 - static struct rtable *last_selection = NULL; 61 62 static int inline __multipath_findslot(void) 63 { ··· 110 .notifier_call = drr_dev_event, 111 }; 112 113 - static void drr_remove(struct rtable *rt) 114 - { 115 - if (last_selection == rt) 116 - last_selection = NULL; 117 - } 118 119 static void drr_safe_inc(atomic_t *usecount) 120 { ··· 137 int min_usecount = -1; 138 int devidx = -1; 139 int cur_min_devidx = -1; 140 - 141 - /* if necessary and possible utilize the old alternative */ 142 - if ((flp->flags & FLOWI_FLAG_MULTIPATHOLDROUTE) != 0 && 143 - last_selection != NULL) { 144 - result = last_selection; 145 - *rp = result; 146 - return; 147 - } 148 149 /* 1. make sure all alt. nexthops have the same GC related data */ 150 /* 2. determine the new candidate to be returned */ ··· 215 } 216 217 *rp = result; 218 - last_selection = result; 219 } 220 221 static struct ip_mp_alg_ops drr_ops = { 222 .mp_alg_select_route = drr_select_route, 223 - .mp_alg_remove = drr_remove, 224 }; 225 226 static int __init drr_init(void) ··· 228 if (err) 229 return err; 230 231 - err = multipath_alg_register(&drr_ops, IP_MP_ALG_RR); 232 if (err) 233 goto fail; 234
··· 57 58 static struct multipath_device state[MULTIPATH_MAX_DEVICECANDIDATES]; 59 static DEFINE_SPINLOCK(state_lock); 60 61 static int inline __multipath_findslot(void) 62 { ··· 111 .notifier_call = drr_dev_event, 112 }; 113 114 115 static void drr_safe_inc(atomic_t *usecount) 116 { ··· 143 int min_usecount = -1; 144 int devidx = -1; 145 int cur_min_devidx = -1; 146 147 /* 1. make sure all alt. nexthops have the same GC related data */ 148 /* 2. determine the new candidate to be returned */ ··· 229 } 230 231 *rp = result; 232 } 233 234 static struct ip_mp_alg_ops drr_ops = { 235 .mp_alg_select_route = drr_select_route, 236 }; 237 238 static int __init drr_init(void) ··· 244 if (err) 245 return err; 246 247 + err = multipath_alg_register(&drr_ops, IP_MP_ALG_DRR); 248 if (err) 249 goto fail; 250
-20
net/ipv4/multipath_rr.c
··· 47 #include <net/checksum.h> 48 #include <net/ip_mp_alg.h> 49 50 - #define MULTIPATH_MAX_CANDIDATES 40 51 - 52 - static struct rtable* last_used = NULL; 53 - 54 - static void rr_remove(struct rtable *rt) 55 - { 56 - if (last_used == rt) 57 - last_used = NULL; 58 - } 59 - 60 static void rr_select_route(const struct flowi *flp, 61 struct rtable *first, struct rtable **rp) 62 { 63 struct rtable *nh, *result, *min_use_cand = NULL; 64 int min_use = -1; 65 - 66 - /* if necessary and possible utilize the old alternative */ 67 - if ((flp->flags & FLOWI_FLAG_MULTIPATHOLDROUTE) != 0 && 68 - last_used != NULL) { 69 - result = last_used; 70 - goto out; 71 - } 72 73 /* 1. make sure all alt. nexthops have the same GC related data 74 * 2. determine the new candidate to be returned ··· 73 if (!result) 74 result = first; 75 76 - out: 77 - last_used = result; 78 result->u.dst.__use++; 79 *rp = result; 80 } 81 82 static struct ip_mp_alg_ops rr_ops = { 83 .mp_alg_select_route = rr_select_route, 84 - .mp_alg_remove = rr_remove, 85 }; 86 87 static int __init rr_init(void)
··· 47 #include <net/checksum.h> 48 #include <net/ip_mp_alg.h> 49 50 static void rr_select_route(const struct flowi *flp, 51 struct rtable *first, struct rtable **rp) 52 { 53 struct rtable *nh, *result, *min_use_cand = NULL; 54 int min_use = -1; 55 56 /* 1. make sure all alt. nexthops have the same GC related data 57 * 2. determine the new candidate to be returned ··· 90 if (!result) 91 result = first; 92 93 result->u.dst.__use++; 94 *rp = result; 95 } 96 97 static struct ip_mp_alg_ops rr_ops = { 98 .mp_alg_select_route = rr_select_route, 99 }; 100 101 static int __init rr_init(void)
+10
net/ipv4/netfilter/ip_queue.c
··· 3 * communicating with userspace via netlink. 4 * 5 * (C) 2000-2002 James Morris <jmorris@intercode.com.au> 6 * 7 * This program is free software; you can redistribute it and/or modify 8 * it under the terms of the GNU General Public License version 2 as ··· 18 * 2005-01-10: Added /proc counter for dropped packets; fixed so 19 * packets aren't delivered to user space if they're going 20 * to be dropped. 21 * 22 */ 23 #include <linux/module.h> ··· 73 static void 74 ipq_issue_verdict(struct ipq_queue_entry *entry, int verdict) 75 { 76 nf_reinject(entry->skb, entry->info, verdict); 77 kfree(entry); 78 } 79
··· 3 * communicating with userspace via netlink. 4 * 5 * (C) 2000-2002 James Morris <jmorris@intercode.com.au> 6 + * (C) 2003-2005 Netfilter Core Team <coreteam@netfilter.org> 7 * 8 * This program is free software; you can redistribute it and/or modify 9 * it under the terms of the GNU General Public License version 2 as ··· 17 * 2005-01-10: Added /proc counter for dropped packets; fixed so 18 * packets aren't delivered to user space if they're going 19 * to be dropped. 20 + * 2005-05-26: local_bh_{disable,enable} around nf_reinject (Harald Welte) 21 * 22 */ 23 #include <linux/module.h> ··· 71 static void 72 ipq_issue_verdict(struct ipq_queue_entry *entry, int verdict) 73 { 74 + /* The TCP input path (and probably other bits) assumes it is called 75 + * from softirq context, not from syscall context the way 76 + * ipq_issue_verdict is; it can deadlock on locks also taken from a 77 + * timer softirq, for example. We therefore emulate softirq context 78 + * with local_bh_disable() */ 79 + 80 + local_bh_disable(); 81 nf_reinject(entry->skb, entry->info, verdict); 82 + local_bh_enable(); 83 + 84 kfree(entry); 85 } 86
+6 -6
net/ipv4/udp.c
··· 738 unsigned long amount; 739 740 amount = 0; 741 - spin_lock_irq(&sk->sk_receive_queue.lock); 742 skb = skb_peek(&sk->sk_receive_queue); 743 if (skb != NULL) { 744 /* ··· 748 */ 749 amount = skb->len - sizeof(struct udphdr); 750 } 751 - spin_unlock_irq(&sk->sk_receive_queue.lock); 752 return put_user(amount, (int __user *)arg); 753 } 754 ··· 848 /* Clear queue. */ 849 if (flags&MSG_PEEK) { 850 int clear = 0; 851 - spin_lock_irq(&sk->sk_receive_queue.lock); 852 if (skb == skb_peek(&sk->sk_receive_queue)) { 853 __skb_unlink(skb, &sk->sk_receive_queue); 854 clear = 1; 855 } 856 - spin_unlock_irq(&sk->sk_receive_queue.lock); 857 if (clear) 858 kfree_skb(skb); 859 } ··· 1334 struct sk_buff_head *rcvq = &sk->sk_receive_queue; 1335 struct sk_buff *skb; 1336 1337 - spin_lock_irq(&rcvq->lock); 1338 while ((skb = skb_peek(rcvq)) != NULL) { 1339 if (udp_checksum_complete(skb)) { 1340 UDP_INC_STATS_BH(UDP_MIB_INERRORS); ··· 1345 break; 1346 } 1347 } 1348 - spin_unlock_irq(&rcvq->lock); 1349 1350 /* nothing to see, move along */ 1351 if (skb == NULL)
··· 738 unsigned long amount; 739 740 amount = 0; 741 + spin_lock_bh(&sk->sk_receive_queue.lock); 742 skb = skb_peek(&sk->sk_receive_queue); 743 if (skb != NULL) { 744 /* ··· 748 */ 749 amount = skb->len - sizeof(struct udphdr); 750 } 751 + spin_unlock_bh(&sk->sk_receive_queue.lock); 752 return put_user(amount, (int __user *)arg); 753 } 754 ··· 848 /* Clear queue. */ 849 if (flags&MSG_PEEK) { 850 int clear = 0; 851 + spin_lock_bh(&sk->sk_receive_queue.lock); 852 if (skb == skb_peek(&sk->sk_receive_queue)) { 853 __skb_unlink(skb, &sk->sk_receive_queue); 854 clear = 1; 855 } 856 + spin_unlock_bh(&sk->sk_receive_queue.lock); 857 if (clear) 858 kfree_skb(skb); 859 } ··· 1334 struct sk_buff_head *rcvq = &sk->sk_receive_queue; 1335 struct sk_buff *skb; 1336 1337 + spin_lock_bh(&rcvq->lock); 1338 while ((skb = skb_peek(rcvq)) != NULL) { 1339 if (udp_checksum_complete(skb)) { 1340 UDP_INC_STATS_BH(UDP_MIB_INERRORS); ··· 1345 break; 1346 } 1347 } 1348 + spin_unlock_bh(&rcvq->lock); 1349 1350 /* nothing to see, move along */ 1351 if (skb == NULL)
+6 -4
net/ipv6/ip6_flowlabel.c
··· 535 if (err) 536 goto done; 537 538 - /* Do not check for fault */ 539 - if (!freq.flr_label) 540 - copy_to_user(&((struct in6_flowlabel_req __user *) optval)->flr_label, 541 - &fl->label, sizeof(fl->label)); 542 543 sfl1->fl = fl; 544 sfl1->next = np->ipv6_fl_list;
··· 535 if (err) 536 goto done; 537 538 + if (!freq.flr_label) { 539 + if (copy_to_user(&((struct in6_flowlabel_req __user *) optval)->flr_label, 540 + &fl->label, sizeof(fl->label))) { 541 + /* Intentionally ignore fault. */ 542 + } 543 + } 544 545 sfl1->fl = fl; 546 sfl1->next = np->ipv6_fl_list;
+11 -5
net/sched/sch_dsmark.c
··· 18 #include <asm/byteorder.h> 19 20 21 - #if 1 /* control */ 22 #define DPRINTK(format,args...) printk(KERN_DEBUG format,##args) 23 #else 24 #define DPRINTK(format,args...) ··· 73 74 DPRINTK("dsmark_graft(sch %p,[qdisc %p],new %p,old %p)\n",sch,p,new, 75 old); 76 - if (!new) 77 - new = &noop_qdisc; 78 sch_tree_lock(sch); 79 *old = xchg(&p->q,new); 80 if (*old) ··· 168 return; 169 for (i = 0; i < p->indices; i++) { 170 if (p->mask[i] == 0xff && !p->value[i]) 171 - continue; 172 if (walker->count >= walker->skip) { 173 if (walker->fn(sch, i+1, walker) < 0) { 174 walker->stop = 1; 175 break; 176 } 177 } 178 - walker->count++; 179 } 180 } 181
··· 18 #include <asm/byteorder.h> 19 20 21 + #if 0 /* control */ 22 #define DPRINTK(format,args...) printk(KERN_DEBUG format,##args) 23 #else 24 #define DPRINTK(format,args...) ··· 73 74 DPRINTK("dsmark_graft(sch %p,[qdisc %p],new %p,old %p)\n",sch,p,new, 75 old); 76 + 77 + if (new == NULL) { 78 + new = qdisc_create_dflt(sch->dev, &pfifo_qdisc_ops); 79 + if (new == NULL) 80 + new = &noop_qdisc; 81 + } 82 + 83 sch_tree_lock(sch); 84 *old = xchg(&p->q,new); 85 if (*old) ··· 163 return; 164 for (i = 0; i < p->indices; i++) { 165 if (p->mask[i] == 0xff && !p->value[i]) 166 + goto ignore; 167 if (walker->count >= walker->skip) { 168 if (walker->fn(sch, i+1, walker) < 0) { 169 walker->stop = 1; 170 break; 171 } 172 } 173 + ignore: 174 + walker->count++; 175 } 176 } 177
+6 -6
sound/oss/Kconfig
··· 112 113 config SOUND_ES1370 114 tristate "Ensoniq AudioPCI (ES1370)" 115 - depends on SOUND_PRIME!=n && SOUND && PCI && SOUND_GAMEPORT 116 help 117 Say Y or M if you have a PCI sound card utilizing the Ensoniq 118 ES1370 chipset, such as Ensoniq's AudioPCI (non-97). To find ··· 125 126 config SOUND_ES1371 127 tristate "Creative Ensoniq AudioPCI 97 (ES1371)" 128 - depends on SOUND_PRIME!=n && SOUND && PCI && SOUND_GAMEPORT 129 help 130 Say Y or M if you have a PCI sound card utilizing the Ensoniq 131 ES1371 chipset, such as Ensoniq's AudioPCI97. To find out if ··· 138 139 config SOUND_ESSSOLO1 140 tristate "ESS Technology Solo1" 141 - depends on SOUND_PRIME!=n && SOUND && SOUND_GAMEPORT && PCI 142 help 143 Say Y or M if you have a PCI sound card utilizing the ESS Technology 144 Solo1 chip. To find out if your sound card uses a ··· 179 180 config SOUND_SONICVIBES 181 tristate "S3 SonicVibes" 182 - depends on SOUND_PRIME!=n && SOUND && SOUND_GAMEPORT 183 help 184 Say Y or M if you have a PCI sound card utilizing the S3 185 SonicVibes chipset. To find out if your sound card uses a ··· 226 227 config SOUND_TRIDENT 228 tristate "Trident 4DWave DX/NX, SiS 7018 or ALi 5451 PCI Audio Core" 229 - depends on SOUND_PRIME!=n && SOUND && SOUND_GAMEPORT 230 ---help--- 231 Say Y or M if you have a PCI sound card utilizing the Trident 232 4DWave-DX/NX chipset or your mother board chipset has SiS 7018 ··· 739 740 config SOUND_MAD16 741 tristate "OPTi MAD16 and/or Mozart based cards" 742 - depends on SOUND_OSS && SOUND_GAMEPORT 743 ---help--- 744 Answer Y if your card has a Mozart (OAK OTI-601) or MAD16 (OPTi 745 82C928 or 82C929 or 82C931) audio interface chip. These chips are
···
 112
 113	config SOUND_ES1370
 114		tristate "Ensoniq AudioPCI (ES1370)"
 115 +		depends on SOUND_PRIME!=n && SOUND && PCI
 116		help
 117		  Say Y or M if you have a PCI sound card utilizing the Ensoniq
 118		  ES1370 chipset, such as Ensoniq's AudioPCI (non-97). To find
···
 125
 126	config SOUND_ES1371
 127		tristate "Creative Ensoniq AudioPCI 97 (ES1371)"
 128 +		depends on SOUND_PRIME!=n && SOUND && PCI
 129		help
 130		  Say Y or M if you have a PCI sound card utilizing the Ensoniq
 131		  ES1371 chipset, such as Ensoniq's AudioPCI97. To find out if
···
 138
 139	config SOUND_ESSSOLO1
 140		tristate "ESS Technology Solo1"
 141 +		depends on SOUND_PRIME!=n && SOUND && PCI
 142		help
 143		  Say Y or M if you have a PCI sound card utilizing the ESS Technology
 144		  Solo1 chip. To find out if your sound card uses a
···
 179
 180	config SOUND_SONICVIBES
 181		tristate "S3 SonicVibes"
 182 +		depends on SOUND_PRIME!=n && SOUND
 183		help
 184		  Say Y or M if you have a PCI sound card utilizing the S3
 185		  SonicVibes chipset. To find out if your sound card uses a
···
 226
 227	config SOUND_TRIDENT
 228		tristate "Trident 4DWave DX/NX, SiS 7018 or ALi 5451 PCI Audio Core"
 229 +		depends on SOUND_PRIME!=n && SOUND
 230		---help---
 231		  Say Y or M if you have a PCI sound card utilizing the Trident
 232		  4DWave-DX/NX chipset or your mother board chipset has SiS 7018
···
 739
 740	config SOUND_MAD16
 741		tristate "OPTi MAD16 and/or Mozart based cards"
 742 +		depends on SOUND_OSS
 743		---help---
 744		  Answer Y if your card has a Mozart (OAK OTI-601) or MAD16 (OPTi
 745		  82C928 or 82C929 or 82C931) audio interface chip. These chips are
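All six Kconfig hunks above make the same change: `SOUND_GAMEPORT` is dropped from the `depends on` expression while the bus and subsystem requirements stay intact. A minimal sketch of the `depends on` semantics these entries rely on (`SOUND_EXAMPLE` is an illustrative symbol, not one from the tree):

```
config SOUND_EXAMPLE
	tristate "Example PCI OSS driver"
	depends on SOUND_PRIME!=n && SOUND && PCI
	help
	  A tristate symbol can be y (built-in), m (module) or n (off).
	  "depends on" limits the visible choices to what the listed
	  expression allows, so e.g. SOUND=m caps this driver at m, and
	  the symbol disappears entirely when the expression is n.
```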
+19 -11
sound/ppc/pmac.c
···
 876	 */
 877	static int __init snd_pmac_detect(pmac_t *chip)
 878	{
 879 -		struct device_node *sound;
 880		unsigned int *prop, l;
 881		struct macio_chip* macio;
 882
···
 906			chip->is_pbook_G3 = 1;
 907		chip->node = find_devices("awacs");
 908		if (chip->node)
 909 -			return 0; /* ok */
 910
 911		/*
 912		 * powermac G3 models have a node called "davbus"
 913		 * with a child called "sound".
 914		 */
 915 -		chip->node = find_devices("davbus");
 916		/*
 917		 * if we didn't find a davbus device, try 'i2s-a' since
 918		 * this seems to be what iBooks have
 919		 */
 920		if (! chip->node) {
 921			chip->node = find_devices("i2s-a");
 922 -			if (chip->node && chip->node->parent && chip->node->parent->parent) {
 923				if (device_is_compatible(chip->node->parent->parent,
 924							 "K2-Keylargo"))
 925					chip->is_k2 = 1;
···
 930		if (! chip->node)
 931			return -ENODEV;
 932
 933 -		sound = find_devices("sound");
 934 -		while (sound && sound->parent != chip->node)
 935 -			sound = sound->next;
 936		if (! sound)
 937			return -ENODEV;
 938		prop = (unsigned int *) get_property(sound, "sub-frame", NULL);
···
 1023			}
 1024		}
 1025		if (chip->pdev == NULL)
 1026 -			printk(KERN_WARNING "snd-powermac: can't locate macio PCI device !\n");
 1027
 1028		detect_byte_swap(chip);
 1029
···
 1032		   are available */
 1033		prop = (unsigned int *) get_property(sound, "sample-rates", &l);
 1034		if (! prop)
 1035 -			prop = (unsigned int *) get_property(sound, "output-frame-rates", &l);
 1036		if (prop) {
 1037			int i;
 1038			chip->freqs_ok = 0;
···
 1060	/*
 1061	 * exported - boolean info callbacks for ease of programming
 1062	 */
 1063 -	int snd_pmac_boolean_stereo_info(snd_kcontrol_t *kcontrol, snd_ctl_elem_info_t *uinfo)
 1064	{
 1065		uinfo->type = SNDRV_CTL_ELEM_TYPE_BOOLEAN;
 1066		uinfo->count = 2;
···
 1070		return 0;
 1071	}
 1072
 1073 -	int snd_pmac_boolean_mono_info(snd_kcontrol_t *kcontrol, snd_ctl_elem_info_t *uinfo)
 1074	{
 1075		uinfo->type = SNDRV_CTL_ELEM_TYPE_BOOLEAN;
 1076		uinfo->count = 1;
···
 876	 */
 877	static int __init snd_pmac_detect(pmac_t *chip)
 878	{
 879 +		struct device_node *sound = NULL;
 880		unsigned int *prop, l;
 881		struct macio_chip* macio;
 882
···
 906			chip->is_pbook_G3 = 1;
 907		chip->node = find_devices("awacs");
 908		if (chip->node)
 909 +			sound = chip->node;
 910
 911		/*
 912		 * powermac G3 models have a node called "davbus"
 913		 * with a child called "sound".
 914		 */
 915 +		if (!chip->node)
 916 +			chip->node = find_devices("davbus");
 917		/*
 918		 * if we didn't find a davbus device, try 'i2s-a' since
 919		 * this seems to be what iBooks have
 920		 */
 921		if (! chip->node) {
 922			chip->node = find_devices("i2s-a");
 923 +			if (chip->node && chip->node->parent &&
 924 +			    chip->node->parent->parent) {
 925				if (device_is_compatible(chip->node->parent->parent,
 926							 "K2-Keylargo"))
 927					chip->is_k2 = 1;
···
 928		if (! chip->node)
 929			return -ENODEV;
 930
 931 +		if (!sound) {
 932 +			sound = find_devices("sound");
 933 +			while (sound && sound->parent != chip->node)
 934 +				sound = sound->next;
 935 +		}
 936		if (! sound)
 937			return -ENODEV;
 938		prop = (unsigned int *) get_property(sound, "sub-frame", NULL);
···
 1019			}
 1020		}
 1021		if (chip->pdev == NULL)
 1022 +			printk(KERN_WARNING "snd-powermac: can't locate macio PCI"
 1023 +			       " device !\n");
 1024
 1025		detect_byte_swap(chip);
 1026
···
 1027		   are available */
 1028		prop = (unsigned int *) get_property(sound, "sample-rates", &l);
 1029		if (! prop)
 1030 +			prop = (unsigned int *) get_property(sound,
 1031 +							     "output-frame-rates", &l);
 1032		if (prop) {
 1033			int i;
 1034			chip->freqs_ok = 0;
···
 1054	/*
 1055	 * exported - boolean info callbacks for ease of programming
 1056	 */
 1057 +	int snd_pmac_boolean_stereo_info(snd_kcontrol_t *kcontrol,
 1058 +					 snd_ctl_elem_info_t *uinfo)
 1059	{
 1060		uinfo->type = SNDRV_CTL_ELEM_TYPE_BOOLEAN;
 1061		uinfo->count = 2;
···
 1063		return 0;
 1064	}
 1065
 1066 +	int snd_pmac_boolean_mono_info(snd_kcontrol_t *kcontrol,
 1067 +				       snd_ctl_elem_info_t *uinfo)
 1068	{
 1069		uinfo->type = SNDRV_CTL_ELEM_TYPE_BOOLEAN;
 1070		uinfo->count = 1;