Automatic merge of /spare/repo/linux-2.6/.git branch HEAD

Committed by Jeff Garzik
d7aaf481 decc6d0b

+4608 -1713
+128
Documentation/cpu-freq/cpufreq-stats.txt
CPU frequency and voltage scaling statistics in the Linux(TM) kernel

             L i n u x    c p u f r e q - s t a t s   d r i v e r

                       - information for users -

             Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>

Contents
1. Introduction
2. Statistics Provided (with example)
3. Configuring cpufreq-stats


1. Introduction

cpufreq-stats is a driver that provides CPU frequency statistics for each CPU.
These statistics are provided in /sysfs as a bunch of read-only interfaces. This
interface (when configured) will appear in a separate directory under cpufreq
in /sysfs (<sysfs root>/devices/system/cpu/cpuX/cpufreq/stats/) for each CPU.
Various statistics will form read-only files under this directory.

This driver is designed to be independent of any particular cpufreq_driver
that may be running on your CPU. So, it will work with any cpufreq_driver.


2. Statistics Provided (with example)

cpufreq-stats provides the following statistics (explained in detail below):
-  time_in_state
-  total_trans
-  trans_table

All the statistics cover the period from the time the stats driver was
inserted to the time when a particular statistic is read. Obviously, the
stats driver has no information about frequency transitions that happened
before it was inserted.

--------------------------------------------------------------------------------
<mysystem>:/sys/devices/system/cpu/cpu0/cpufreq/stats # ls -l
total 0
drwxr-xr-x  2 root root    0 May 14 16:06 .
drwxr-xr-x  3 root root    0 May 14 15:58 ..
-r--r--r--  1 root root 4096 May 14 16:06 time_in_state
-r--r--r--  1 root root 4096 May 14 16:06 total_trans
-r--r--r--  1 root root 4096 May 14 16:06 trans_table
--------------------------------------------------------------------------------

-  time_in_state
This gives the amount of time spent in each of the frequencies supported by
this CPU. The cat output has a "<frequency> <time>" pair on each line, meaning
this CPU spent <time> usertime units of time at <frequency>. The output has
one line for each supported frequency. A usertime unit here is 10mS (similar
to other times exported in /proc).

--------------------------------------------------------------------------------
<mysystem>:/sys/devices/system/cpu/cpu0/cpufreq/stats # cat time_in_state
3600000 2089
3400000 136
3200000 34
3000000 67
2800000 172488
--------------------------------------------------------------------------------


-  total_trans
This gives the total number of frequency transitions on this CPU. The cat
output is a single count.

--------------------------------------------------------------------------------
<mysystem>:/sys/devices/system/cpu/cpu0/cpufreq/stats # cat total_trans
20
--------------------------------------------------------------------------------

-  trans_table
This gives fine-grained information about all the CPU frequency transitions.
The cat output is a two-dimensional matrix, where an entry <i,j> (row i,
column j) is the count of transitions from Freq_i to Freq_j. Freq_i is in
descending order with increasing rows and Freq_j is in descending order with
increasing columns. The output also contains the actual frequency values for
each row and column for better readability.

--------------------------------------------------------------------------------
<mysystem>:/sys/devices/system/cpu/cpu0/cpufreq/stats # cat trans_table
   From  :    To
         :   3600000   3400000   3200000   3000000   2800000
  3600000:         0         5         0         0         0
  3400000:         4         0         2         0         0
  3200000:         0         1         0         2         0
  3000000:         0         0         1         0         3
  2800000:         0         0         0         2         0
--------------------------------------------------------------------------------


3. Configuring cpufreq-stats

To configure cpufreq-stats in your kernel:

Config Main Menu
	Power management options (ACPI, APM)  --->
		CPU Frequency scaling  --->
			[*] CPU Frequency scaling
			<*>   CPU frequency translation statistics
			[*]     CPU frequency translation statistics details

"CPU Frequency scaling" (CONFIG_CPU_FREQ) should be enabled to configure
cpufreq-stats.

"CPU frequency translation statistics" (CONFIG_CPU_FREQ_STAT) provides the
basic statistics, which include time_in_state and total_trans.

"CPU frequency translation statistics details" (CONFIG_CPU_FREQ_STAT_DETAILS)
provides fine-grained cpufreq stats via trans_table. The reason for having a
separate config option for trans_table is:
-  trans_table goes against the traditional /sysfs rule of one value per
   interface. It provides a whole bunch of values in a two-dimensional
   matrix form.

Once these two options are enabled and your CPU supports cpufreq, you
will be able to see the CPU frequency statistics in /sysfs.
+8 -2
MAINTAINERS
···
  W:	http://www.linux-usb.org/SpeedTouch/
  S:	Maintained

+ ALI1563 I2C DRIVER
+ P:	Rudolf Marek
+ M:	r.marek@sh.cvut.cz
+ L:	sensors@stimpy.netroedge.com
+ S:	Maintained
+
  ALPHA PORT
  P:	Richard Henderson
  M:	rth@twiddle.net
···
  S:	Maintained

  SN-IA64 (Itanium) SUB-PLATFORM
- P:	Jesse Barnes
- M:	jbarnes@sgi.com
+ P:	Greg Edwards
+ M:	edwardsg@sgi.com
  L:	linux-altix@sgi.com
  L:	linux-ia64@vger.kernel.org
  W:	http://www.sgi.com/altix
+1 -1
arch/h8300/kernel/process.c
···
  void default_idle(void)
  {
	while(1) {
-		if (need_resched()) {
+		if (!need_resched()) {
			local_irq_enable();
			__asm__("sleep");
			local_irq_disable();
+13 -1
arch/i386/kernel/cpu/cpufreq/Kconfig
···
	  If in doubt, say N.

  config ELAN_CPUFREQ
-	tristate "AMD Elan"
+	tristate "AMD Elan SC400 and SC410"
	select CPU_FREQ_TABLE
	depends on X86_ELAN
	---help---
···
	  For details, take a look at <file:Documentation/cpu-freq/>.

	  If in doubt, say N.
+
+ config SC520_CPUFREQ
+	tristate "AMD Elan SC520"
+	select CPU_FREQ_TABLE
+	depends on X86_ELAN
+	---help---
+	  This adds the CPUFreq driver for AMD Elan SC520 processor.
+
+	  For details, take a look at <file:Documentation/cpu-freq/>.
+
+	  If in doubt, say N.
+

  config X86_POWERNOW_K6
	tristate "AMD Mobile K6-2/K6-3 PowerNow!"
+1
arch/i386/kernel/cpu/cpufreq/Makefile
···
  obj-$(CONFIG_X86_POWERNOW_K8)	+= powernow-k8.o
  obj-$(CONFIG_X86_LONGHAUL)	+= longhaul.o
  obj-$(CONFIG_ELAN_CPUFREQ)	+= elanfreq.o
+ obj-$(CONFIG_SC520_CPUFREQ)	+= sc520_freq.o
  obj-$(CONFIG_X86_LONGRUN)	+= longrun.o
  obj-$(CONFIG_X86_GX_SUSPMOD)	+= gx-suspmod.o
  obj-$(CONFIG_X86_SPEEDSTEP_ICH)	+= speedstep-ich.o
+53 -5
arch/i386/kernel/cpu/cpufreq/longhaul.c
···
  #include <linux/cpufreq.h>
  #include <linux/slab.h>
  #include <linux/string.h>
+ #include <linux/pci.h>

  #include <asm/msr.h>
  #include <asm/timex.h>
···
  static void do_powersaver(union msr_longhaul *longhaul,
		unsigned int clock_ratio_index)
  {
+	struct pci_dev *dev;
+	unsigned long flags;
+	unsigned int tmp_mask;
	int version;
+	int i;
+	u16 pci_cmd;
+	u16 cmd_state[64];

	switch (cpu_model) {
	case CPU_EZRA_T:
···
	longhaul->bits.SoftBusRatio4 = (clock_ratio_index & 0x10) >> 4;
	longhaul->bits.EnableSoftBusRatio = 1;
	longhaul->bits.RevisionKey = 0;
-	local_irq_disable();
-	wrmsrl(MSR_VIA_LONGHAUL, longhaul->val);
+
+	preempt_disable();
+	local_irq_save(flags);
+
+	/*
+	 * get current pci bus master state for all devices
+	 * and clear bus master bit
+	 */
+	dev = NULL;
+	i = 0;
+	do {
+		dev = pci_get_device(PCI_ANY_ID, PCI_ANY_ID, dev);
+		if (dev != NULL) {
+			pci_read_config_word(dev, PCI_COMMAND, &pci_cmd);
+			cmd_state[i++] = pci_cmd;
+			pci_cmd &= ~PCI_COMMAND_MASTER;
+			pci_write_config_word(dev, PCI_COMMAND, pci_cmd);
+		}
+	} while (dev != NULL);
+
+	tmp_mask=inb(0x21);	/* works on C3. save mask. */
+	outb(0xFE,0x21);	/* TMR0 only */
+	outb(0xFF,0x80);	/* delay */
+
	local_irq_enable();
+
+	__hlt();
+	wrmsrl(MSR_VIA_LONGHAUL, longhaul->val);
	__hlt();

+	local_irq_disable();
+
+	outb(tmp_mask,0x21);	/* restore mask */
+
+	/* restore pci bus master state for all devices */
+	dev = NULL;
+	i = 0;
+	do {
+		dev = pci_get_device(PCI_ANY_ID, PCI_ANY_ID, dev);
+		if (dev != NULL) {
+			pci_cmd = cmd_state[i++];
+			pci_write_config_byte(dev, PCI_COMMAND, pci_cmd);
+		}
+	} while (dev != NULL);
+	local_irq_restore(flags);
+	preempt_enable();
+
+	/* disable bus ratio bit */
	rdmsrl(MSR_VIA_LONGHAUL, longhaul->val);
	longhaul->bits.EnableSoftBusRatio = 0;
	longhaul->bits.RevisionKey = version;
-	local_irq_disable();
	wrmsrl(MSR_VIA_LONGHAUL, longhaul->val);
-	local_irq_enable();
  }
···
	longhaul_setup_voltagescaling();

	policy->governor = CPUFREQ_DEFAULT_GOVERNOR;
-	policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
+	policy->cpuinfo.transition_latency = 200000;	/* nsec */
	policy->cur = calc_speed(longhaul_get_cpu_mult());

	ret = cpufreq_frequency_table_cpuinfo(policy, longhaul_table);
+8 -3
arch/i386/kernel/cpu/cpufreq/powernow-k7.c
···
  #include <linux/dmi.h>

  #include <asm/msr.h>
+ #include <asm/timer.h>
  #include <asm/timex.h>
  #include <asm/io.h>
  #include <asm/system.h>
···
	rdmsrl (MSR_K7_FID_VID_STATUS, fidvidstatus.val);

-	/* A K7 with powernow technology is set to max frequency by BIOS */
-	fsb = (10 * cpu_khz) / fid_codes[fidvidstatus.bits.MFID];
+	/* recalibrate cpu_khz */
+	result = recalibrate_cpu_khz();
+	if (result)
+		return result;
+
+	fsb = (10 * cpu_khz) / fid_codes[fidvidstatus.bits.CFID];
	if (!fsb) {
		printk(KERN_WARNING PFX "can not determine bus frequency\n");
		return -EINVAL;
	}
-	dprintk("FSB: %3d.%03d MHz\n", fsb/1000, fsb%1000);
+	dprintk("FSB: %3dMHz\n", fsb/1000);

	if (dmi_check_system(powernow_dmi_table) || acpi_force) {
		printk (KERN_INFO PFX "PSB/PST known to be broken. Trying ACPI instead\n");
+76 -37
arch/i386/kernel/cpu/cpufreq/powernow-k8.c
···
   * GNU general public license version 2. See "COPYING" or
   * http://www.gnu.org/licenses/gpl.html
   *
-  * Support : paul.devriendt@amd.com
+  * Support : mark.langsdorf@amd.com
   *
   * Based on the powernow-k7.c module written by Dave Jones.
   * (C) 2003 Dave Jones <davej@codemonkey.org.uk> on behalf of SuSE Labs
···
   *
   * Valuable input gratefully received from Dave Jones, Pavel Machek,
   * Dominik Brodowski, and others.
+  * Originally developed by Paul Devriendt.
   * Processor information obtained from Chapter 9 (Power and Thermal Management)
   * of the "BIOS and Kernel Developer's Guide for the AMD Athlon 64 and AMD
   * Opteron Processors" available for download from www.amd.com
   *
   * Tables for specific CPUs can be infrerred from
-  * http://www.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/30430.pdf
+  * http://www.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/30430.pdf
   */

  #include <linux/kernel.h>
···
  #include <linux/cpufreq.h>
  #include <linux/slab.h>
  #include <linux/string.h>
+ #include <linux/cpumask.h>

  #include <asm/msr.h>
  #include <asm/io.h>
···
  #define PFX "powernow-k8: "
  #define BFX PFX "BIOS error: "
- #define VERSION "version 1.00.09e"
+ #define VERSION "version 1.40.2"
  #include "powernow-k8.h"

  /* serialize freq changes */
  static DECLARE_MUTEX(fidvid_sem);

  static struct powernow_k8_data *powernow_data[NR_CPUS];
+
+ #ifndef CONFIG_SMP
+ static cpumask_t cpu_core_map[1];
+ #endif

  /* Return a frequency in MHz, given an input fid */
  static u32 find_freq_from_fid(u32 fid)
···
  {
	u32 rvosteps = data->rvo;
	u32 savefid = data->currfid;
+	u32 maxvid, lo;

	dprintk("ph1 (cpu%d): start, currfid 0x%x, currvid 0x%x, reqvid 0x%x, rvo 0x%x\n",
		smp_processor_id(),
		data->currfid, data->currvid, reqvid, data->rvo);
+
+	rdmsr(MSR_FIDVID_STATUS, lo, maxvid);
+	maxvid = 0x1f & (maxvid >> 16);
+	dprintk("ph1 maxvid=0x%x\n", maxvid);
+	if (reqvid < maxvid) /* lower numbers are higher voltages */
+		reqvid = maxvid;

	while (data->currvid > reqvid) {
		dprintk("ph1: curr 0x%x, req vid 0x%x\n",
···
		return 1;
	}

	while ((rvosteps > 0) && ((data->rvo + data->currvid) > reqvid)) {
-		if (data->currvid == 0) {
+		if (data->currvid == maxvid) {
			rvosteps = 0;
		} else {
			dprintk("ph1: changing vid for rvo, req 0x%x\n",
···
	 * BIOS and Kernel Developer's Guide, which is available on
	 * www.amd.com
	 */
-	printk(KERN_ERR PFX "BIOS error - no PSB\n");
+	printk(KERN_INFO PFX "BIOS error - no PSB or ACPI _PSS objects\n");
	return -ENODEV;
  }
···
	struct cpufreq_frequency_table *powernow_table;

	if (acpi_processor_register_performance(&data->acpi_data, data->cpu)) {
-		dprintk("register performance failed\n");
+		dprintk("register performance failed: bad ACPI data\n");
		return -EIO;
	}
···
			continue;
		}

+		/* verify only 1 entry from the lo frequency table */
		if (fid < HI_FID_TABLE_BOTTOM) {
			if (cntlofreq) {
				/* if both entries are the same, ignore this
				 * one...
				 */
				if ((powernow_table[i].frequency != powernow_table[cntlofreq].frequency) ||
				    (powernow_table[i].index != powernow_table[cntlofreq].index)) {
					printk(KERN_ERR PFX "Too many lo freq table entries\n");
					goto err_out_mem;
				}

				dprintk("double low frequency table entry, ignoring it.\n");
				powernow_table[i].frequency = CPUFREQ_ENTRY_INVALID;
				continue;
			} else
				cntlofreq = i;
		}

		if (powernow_table[i].frequency != (data->acpi_data.states[i].core_frequency * 1000)) {
···
  {
	u32 fid;
	u32 vid;
-	int res;
+	int res, i;
	struct cpufreq_freqs freqs;

	dprintk("cpu %d transition to index %u\n", smp_processor_id(), index);
···
	}

	if ((fid < HI_FID_TABLE_BOTTOM) && (data->currfid < HI_FID_TABLE_BOTTOM)) {
-		printk("ignoring illegal change in lo freq table-%x to 0x%x\n",
+		printk(KERN_ERR PFX
+		       "ignoring illegal change in lo freq table-%x to 0x%x\n",
			data->currfid, fid);
		return 1;
	}
···
		smp_processor_id(), fid, vid);

	freqs.cpu = data->cpu;
-
	freqs.old = find_khz_freq_from_fid(data->currfid);
	freqs.new = find_khz_freq_from_fid(fid);
-	cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE);
+	for_each_cpu_mask(i, cpu_core_map[data->cpu]) {
+		freqs.cpu = i;
+		cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE);
+	}

-	down(&fidvid_sem);
	res = transition_fid_vid(data, fid, vid);
-	up(&fidvid_sem);

	freqs.new = find_khz_freq_from_fid(data->currfid);
-	cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE);
-
+	for_each_cpu_mask(i, cpu_core_map[data->cpu]) {
+		freqs.cpu = i;
+		cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE);
+	}
	return res;
  }
···
	u32 checkvid = data->currvid;
	unsigned int newstate;
	int ret = -EIO;
+	int i;

	/* only run on specific CPU from here on */
	oldmask = current->cpus_allowed;
···
		data->currfid, data->currvid);

	if ((checkvid != data->currvid) || (checkfid != data->currfid)) {
-		printk(KERN_ERR PFX
-		       "error - out of sync, fid 0x%x 0x%x, vid 0x%x 0x%x\n",
-		       checkfid, data->currfid, checkvid, data->currvid);
+		printk(KERN_INFO PFX
+		       "error - out of sync, fix 0x%x 0x%x, vid 0x%x 0x%x\n",
+		       checkfid, data->currfid, checkvid, data->currvid);
	}

	if (cpufreq_frequency_table_target(pol, data->powernow_table, targfreq, relation, &newstate))
		goto err_out;
+
+	down(&fidvid_sem);
+
+	for_each_cpu_mask(i, cpu_core_map[pol->cpu]) {
+		/* make sure the sibling is initialized */
+		if (!powernow_data[i]) {
+			ret = 0;
+			up(&fidvid_sem);
+			goto err_out;
+		}
+	}

	powernow_k8_acpi_pst_values(data, newstate);

	if (transition_frequency(data, newstate)) {
		printk(KERN_ERR PFX "transition frequency failed\n");
		ret = 1;
+		up(&fidvid_sem);
		goto err_out;
	}
+
+	/* Update all the fid/vids of our siblings */
+	for_each_cpu_mask(i, cpu_core_map[pol->cpu]) {
+		powernow_data[i]->currvid = data->currvid;
+		powernow_data[i]->currfid = data->currfid;
+	}
+	up(&fidvid_sem);

	pol->cur = find_khz_freq_from_fid(data->currfid);
	ret = 0;
···
	 */

	if ((num_online_cpus() != 1) || (num_possible_cpus() != 1)) {
-		printk(KERN_INFO PFX "MP systems not supported by PSB BIOS structure\n");
+		printk(KERN_ERR PFX "MP systems not supported by PSB BIOS structure\n");
		kfree(data);
		return -ENODEV;
	}
···
	schedule();

	pol->governor = CPUFREQ_DEFAULT_GOVERNOR;
+	pol->cpus = cpu_core_map[pol->cpu];

	/* Take a crude guess here.
	 * That guess was in microseconds, so multiply with 1000 */
···
		return 0;
	}
	preempt_disable();

	if (query_current_values_with_pending_wait(data))
		goto out;
···
	cpufreq_unregister_driver(&cpufreq_amd64_driver);
  }

- MODULE_AUTHOR("Paul Devriendt <paul.devriendt@amd.com>");
+ MODULE_AUTHOR("Paul Devriendt <paul.devriendt@amd.com> and Mark Langsdorf <mark.langsdorf@amd.com.");
  MODULE_DESCRIPTION("AMD Athlon 64 and Opteron processor frequency driver.");
  MODULE_LICENSE("GPL");

  late_initcall(powernowk8_init);
  module_exit(powernowk8_exit);
+
+15
arch/i386/kernel/cpu/cpufreq/powernow-k8.h
···
  static int core_frequency_transition(struct powernow_k8_data *data, u32 reqfid);

  static void powernow_k8_acpi_pst_values(struct powernow_k8_data *data, unsigned int index);
+
+ #ifndef for_each_cpu_mask
+ #define for_each_cpu_mask(i,mask) for (i=0;i<1;i++)
+ #endif
+
+ #ifdef CONFIG_SMP
+ static inline void define_siblings(int cpu, cpumask_t cpu_sharedcore_mask[])
+ {
+ }
+ #else
+ static inline void define_siblings(int cpu, cpumask_t cpu_sharedcore_mask[])
+ {
+	cpu_set(0, cpu_sharedcore_mask[0]);
+ }
+ #endif
+186
arch/i386/kernel/cpu/cpufreq/sc520_freq.c
/*
 *	sc520_freq.c: cpufreq driver for the AMD Elan sc520
 *
 *	Copyright (C) 2005 Sean Young <sean@mess.org>
 *
 *	This program is free software; you can redistribute it and/or
 *	modify it under the terms of the GNU General Public License
 *	as published by the Free Software Foundation; either version
 *	2 of the License, or (at your option) any later version.
 *
 *	Based on elanfreq.c
 *
 *	2005-03-30: - initial revision
 */

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>

#include <linux/delay.h>
#include <linux/cpufreq.h>

#include <asm/msr.h>
#include <asm/timex.h>
#include <asm/io.h>

#define MMCR_BASE	0xfffef000	/* The default base address */
#define OFFS_CPUCTL	0x2		/* CPU Control Register */

static __u8 __iomem *cpuctl;

#define dprintk(msg...) cpufreq_debug_printk(CPUFREQ_DEBUG_DRIVER, "sc520_freq", msg)

static struct cpufreq_frequency_table sc520_freq_table[] = {
	{0x01,	100000},
	{0x02,	133000},
	{0,	CPUFREQ_TABLE_END},
};

static unsigned int sc520_freq_get_cpu_frequency(unsigned int cpu)
{
	u8 clockspeed_reg = *cpuctl;

	switch (clockspeed_reg & 0x03) {
	default:
		printk(KERN_ERR "sc520_freq: error: cpuctl register has unexpected value %02x\n", clockspeed_reg);
	case 0x01:
		return 100000;
	case 0x02:
		return 133000;
	}
}

static void sc520_freq_set_cpu_state (unsigned int state)
{

	struct cpufreq_freqs	freqs;
	u8 clockspeed_reg;

	freqs.old = sc520_freq_get_cpu_frequency(0);
	freqs.new = sc520_freq_table[state].frequency;
	freqs.cpu = 0; /* AMD Elan is UP */

	cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE);

	dprintk("attempting to set frequency to %i kHz\n",
			sc520_freq_table[state].frequency);

	local_irq_disable();

	clockspeed_reg = *cpuctl & ~0x03;
	*cpuctl = clockspeed_reg | sc520_freq_table[state].index;

	local_irq_enable();

	cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE);
};

static int sc520_freq_verify (struct cpufreq_policy *policy)
{
	return cpufreq_frequency_table_verify(policy, &sc520_freq_table[0]);
}

static int sc520_freq_target (struct cpufreq_policy *policy,
			    unsigned int target_freq,
			    unsigned int relation)
{
	unsigned int newstate = 0;

	if (cpufreq_frequency_table_target(policy, sc520_freq_table, target_freq, relation, &newstate))
		return -EINVAL;

	sc520_freq_set_cpu_state(newstate);

	return 0;
}


/*
 *	Module init and exit code
 */

static int sc520_freq_cpu_init(struct cpufreq_policy *policy)
{
	struct cpuinfo_x86 *c = cpu_data;
	int result;

	/* capability check */
	if (c->x86_vendor != X86_VENDOR_AMD ||
	    c->x86 != 4 || c->x86_model != 9)
		return -ENODEV;

	/* cpuinfo and default policy values */
	policy->governor = CPUFREQ_DEFAULT_GOVERNOR;
	policy->cpuinfo.transition_latency = 1000000; /* 1ms */
	policy->cur = sc520_freq_get_cpu_frequency(0);

	result = cpufreq_frequency_table_cpuinfo(policy, sc520_freq_table);
	if (result)
		return (result);

	cpufreq_frequency_table_get_attr(sc520_freq_table, policy->cpu);

	return 0;
}


static int sc520_freq_cpu_exit(struct cpufreq_policy *policy)
{
	cpufreq_frequency_table_put_attr(policy->cpu);
	return 0;
}


static struct freq_attr* sc520_freq_attr[] = {
	&cpufreq_freq_attr_scaling_available_freqs,
	NULL,
};


static struct cpufreq_driver sc520_freq_driver = {
	.get	= sc520_freq_get_cpu_frequency,
	.verify	= sc520_freq_verify,
	.target	= sc520_freq_target,
	.init	= sc520_freq_cpu_init,
	.exit	= sc520_freq_cpu_exit,
	.name	= "sc520_freq",
	.owner	= THIS_MODULE,
	.attr	= sc520_freq_attr,
};


static int __init sc520_freq_init(void)
{
	struct cpuinfo_x86 *c = cpu_data;

	/* Test if we have the right hardware */
	if(c->x86_vendor != X86_VENDOR_AMD ||
	   c->x86 != 4 || c->x86_model != 9) {
		dprintk("no Elan SC520 processor found!\n");
		return -ENODEV;
	}
	cpuctl = ioremap((unsigned long)(MMCR_BASE + OFFS_CPUCTL), 1);
	if(!cpuctl) {
		printk(KERN_ERR "sc520_freq: error: failed to remap memory\n");
		return -ENOMEM;
	}

	return cpufreq_register_driver(&sc520_freq_driver);
}


static void __exit sc520_freq_exit(void)
{
	cpufreq_unregister_driver(&sc520_freq_driver);
	iounmap(cpuctl);
}


MODULE_LICENSE("GPL");
MODULE_AUTHOR("Sean Young <sean@mess.org>");
MODULE_DESCRIPTION("cpufreq driver for AMD's Elan sc520 CPU");

module_init(sc520_freq_init);
module_exit(sc520_freq_exit);
+6
arch/i386/kernel/cpu/cpufreq/speedstep-centrino.c
···
	CPU_DOTHAN_A1,
	CPU_DOTHAN_A2,
	CPU_DOTHAN_B0,
+	CPU_MP4HT_D0,
+	CPU_MP4HT_E0,
  };

  static const struct cpu_id cpu_ids[] = {
···
	[CPU_DOTHAN_A1]	= { 6, 13, 1 },
	[CPU_DOTHAN_A2]	= { 6, 13, 2 },
	[CPU_DOTHAN_B0]	= { 6, 13, 6 },
+	[CPU_MP4HT_D0]	= {15,  3, 4 },
+	[CPU_MP4HT_E0]	= {15,  4, 1 },
  };
  #define N_IDS (sizeof(cpu_ids)/sizeof(cpu_ids[0]))
···
	{ &cpu_ids[CPU_DOTHAN_A1], NULL, 0, NULL },
	{ &cpu_ids[CPU_DOTHAN_A2], NULL, 0, NULL },
	{ &cpu_ids[CPU_DOTHAN_B0], NULL, 0, NULL },
+	{ &cpu_ids[CPU_MP4HT_D0], NULL, 0, NULL },
+	{ &cpu_ids[CPU_MP4HT_E0], NULL, 0, NULL },

	{ NULL, }
  };
+3 -3
arch/i386/kernel/cpu/cpufreq/speedstep-lib.c
···
	if (!prev_speed)
		return -EIO;

-	dprintk("previous seped is %u\n", prev_speed);
+	dprintk("previous speed is %u\n", prev_speed);

	local_irq_save(flags);
···
		goto out;
	}

-	dprintk("low seped is %u\n", *low_speed);
+	dprintk("low speed is %u\n", *low_speed);

	/* switch to high state */
	set_state(SPEEDSTEP_HIGH);
···
		goto out;
	}

-	dprintk("high seped is %u\n", *high_speed);
+	dprintk("high speed is %u\n", *high_speed);

	if (*low_speed == *high_speed) {
		ret = -ENODEV;
+3
arch/i386/kernel/cpu/cpufreq/speedstep-smi.c
···
	case SPEEDSTEP_PROCESSOR_PIII_C:
	case SPEEDSTEP_PROCESSOR_PIII_C_EARLY:
		break;
+	case SPEEDSTEP_PROCESSOR_P4M:
+		printk(KERN_INFO "speedstep-smi: you're trying to use this cpufreq driver on a Pentium 4-based CPU. Most likely it will not work.\n");
+		break;
	default:
		speedstep_processor = 0;
	}
+1 -1
arch/i386/kernel/cpu/intel_cacheinfo.c
···
  };

  #define MAX_CACHE_LEAVES		4
- static unsigned short __devinitdata	num_cache_leaves;
+ static unsigned short			num_cache_leaves;

  static int __devinit cpuid4_cache_lookup(int index, struct _cpuid4_info *this_leaf)
  {
+2
arch/i386/kernel/setup.c
···
	if (efi_enabled)
		efi_map_memmap();

+ #ifdef CONFIG_ACPI_BOOT
	/*
	 * Parse the ACPI tables for possible boot-time SMP configuration.
	 */
	acpi_boot_table_init();
	acpi_boot_init();
+ #endif

  #ifdef CONFIG_X86_LOCAL_APIC
	if (smp_found_config)
+3 -1
arch/i386/kernel/smpboot.c
···
			cpu_set(cpu, cpu_sibling_map[cpu]);
	}

-	if (siblings != smp_num_siblings)
+	if (siblings != smp_num_siblings) {
		printk(KERN_WARNING "WARNING: %d siblings found for CPU%d, should be %d\n", siblings, cpu, smp_num_siblings);
+		smp_num_siblings = siblings;
+	}

	if (c->x86_num_cores > 1) {
		for (i = 0; i < NR_CPUS; i++) {
+4 -2
arch/i386/kernel/timers/common.c
···
  #include <linux/timex.h>
  #include <linux/errno.h>
  #include <linux/jiffies.h>
+ #include <linux/module.h>

  #include <asm/io.h>
  #include <asm/timer.h>
···
  #define CALIBRATE_TIME	(5 * 1000020/HZ)

- unsigned long __init calibrate_tsc(void)
+ unsigned long calibrate_tsc(void)
  {
	mach_prepare_counter();
···
  #endif

  /* calculate cpu_khz */
- void __init init_cpu_khz(void)
+ void init_cpu_khz(void)
  {
	if (cpu_has_tsc) {
		unsigned long tsc_quotient = calibrate_tsc();
···
		}
	}
  }
+
+20
arch/i386/kernel/timers/timer_tsc.c
···
  static inline void cpufreq_delayed_get(void) { return; }
  #endif

+ int recalibrate_cpu_khz(void)
+ {
+ #ifndef CONFIG_SMP
+	unsigned long cpu_khz_old = cpu_khz;
+
+	if (cpu_has_tsc) {
+		init_cpu_khz();
+		cpu_data[0].loops_per_jiffy =
+			cpufreq_scale(cpu_data[0].loops_per_jiffy,
+				      cpu_khz_old,
+				      cpu_khz);
+		return 0;
+	} else
+		return -ENODEV;
+ #else
+	return -ENODEV;
+ #endif
+ }
+ EXPORT_SYMBOL(recalibrate_cpu_khz);
+
  static void mark_offset_tsc(void)
  {
	unsigned long lost,delay;
+1 -1
arch/ia64/ia32/sys_ia32.c
···
  {
	struct epoll_event *events64 = NULL;
	mm_segment_t old_fs = get_fs();
-	int error, numevents, size;
+	int numevents, size;
	int evt_idx;
	int do_free_pages = 0;
+2 -2
arch/ia64/kernel/entry.S
···
	;;
  (pNonSys) mov out2=0				// out2==0 => not a syscall
	.fframe 16
-	.spillpsp ar.unat, 16			// (note that offset is relative to psp+0x10!)
+	.spillsp ar.unat, 16
	st8 [sp]=r9,-16				// allocate space for ar.unat and save it
	st8 [out1]=loc1,-8			// save ar.pfs, out1=&sigscratch
	.body
···
	adds out2=8,sp				// out2=&sigscratch->ar_pfs
	;;
	.fframe 16
-	.spillpsp ar.unat, 16			// (note that offset is relative to psp+0x10!)
+	.spillsp ar.unat, 16
	st8 [sp]=r9,-16				// allocate space for ar.unat and save it
	st8 [out2]=loc1,-8			// save ar.pfs, out2=&sigscratch
	.body
+4 -4
arch/ia64/kernel/mca.c
···
	return IRQ_HANDLED;
  }

- #endif /* CONFIG_ACPI */
-
  /*
   * ia64_mca_cpe_poll
···
	/* Trigger a CPE interrupt cascade */
	platform_send_ipi(first_cpu(cpu_online_map), IA64_CPEP_VECTOR, IA64_IPI_DM_INT, 0);
  }
+
+ #endif /* CONFIG_ACPI */

  /*
   * C portion of the OS INIT handler
···
	register_percpu_irq(IA64_MCA_WAKEUP_VECTOR, &mca_wkup_irqaction);

  #ifdef CONFIG_ACPI
-	/* Setup the CPEI/P vector and handler */
-	cpe_vector = acpi_request_vector(ACPI_INTERRUPT_CPEI);
+	/* Setup the CPEI/P handler */
	register_percpu_irq(IA64_CPEP_VECTOR, &mca_cpep_irqaction);
  #endif
···
  #ifdef CONFIG_ACPI
	/* Setup the CPEI/P vector and handler */
+	cpe_vector = acpi_request_vector(ACPI_INTERRUPT_CPEI);
	init_timer(&cpe_poll_timer);
	cpe_poll_timer.function = ia64_mca_cpe_poll;
+1 -2
arch/ia64/kernel/minstate.h
··· 41 41 (pKStk) addl r3=THIS_CPU(ia64_mca_data),r3;; \ 42 42 (pKStk) ld8 r3 = [r3];; \ 43 43 (pKStk) addl r3=IA64_MCA_CPU_INIT_STACK_OFFSET,r3;; \ 44 - (pKStk) addl sp=IA64_STK_OFFSET-IA64_PT_REGS_SIZE,r3; \ 44 + (pKStk) addl r1=IA64_STK_OFFSET-IA64_PT_REGS_SIZE,r3; \ 45 45 (pUStk) mov ar.rsc=0; /* set enforced lazy mode, pl 0, little-endian, loadrs=0 */ \ 46 46 (pUStk) addl r22=IA64_RBS_OFFSET,r1; /* compute base of register backing store */ \ 47 47 ;; \ ··· 50 50 (pUStk) mov r23=ar.bspstore; /* save ar.bspstore */ \ 51 51 (pUStk) dep r22=-1,r22,61,3; /* compute kernel virtual addr of RBS */ \ 52 52 ;; \ 53 - (pKStk) addl r1=-IA64_PT_REGS_SIZE,r1; /* if in kernel mode, use sp (r12) */ \ 54 53 (pUStk) mov ar.bspstore=r22; /* switch to kernel RBS */ \ 55 54 ;; \ 56 55 (pUStk) mov r18=ar.bsp; \
+160 -15
arch/ia64/kernel/perfmon.c
··· 11 11 * Version Perfmon-2.x is a rewrite of perfmon-1.x 12 12 * by Stephane Eranian, Hewlett Packard Co. 13 13 * 14 - * Copyright (C) 1999-2003, 2005 Hewlett Packard Co 14 + * Copyright (C) 1999-2005 Hewlett Packard Co 15 15 * Stephane Eranian <eranian@hpl.hp.com> 16 16 * David Mosberger-Tang <davidm@hpl.hp.com> 17 17 * ··· 497 497 static pfm_stats_t pfm_stats[NR_CPUS]; 498 498 static pfm_session_t pfm_sessions; /* global sessions information */ 499 499 500 + static spinlock_t pfm_alt_install_check = SPIN_LOCK_UNLOCKED; 501 + static pfm_intr_handler_desc_t *pfm_alt_intr_handler; 502 + 500 503 static struct proc_dir_entry *perfmon_dir; 501 504 static pfm_uuid_t pfm_null_uuid = {0,}; 502 505 ··· 609 606 DEFINE_PER_CPU(struct task_struct *, pmu_owner); 610 607 DEFINE_PER_CPU(pfm_context_t *, pmu_ctx); 611 608 DEFINE_PER_CPU(unsigned long, pmu_activation_number); 609 + EXPORT_PER_CPU_SYMBOL_GPL(pfm_syst_info); 612 610 613 611 614 612 /* forward declaration */ ··· 1329 1325 error_conflict: 1330 1326 DPRINT(("system wide not possible, conflicting session [%d] on CPU%d\n", 1331 1327 pfm_sessions.pfs_sys_session[cpu]->pid, 1332 - smp_processor_id())); 1328 + cpu)); 1333 1329 abort: 1334 1330 UNLOCK_PFS(flags); 1335 1331 ··· 5559 5555 int ret; 5560 5556 5561 5557 this_cpu = get_cpu(); 5562 - min = pfm_stats[this_cpu].pfm_ovfl_intr_cycles_min; 5563 - max = pfm_stats[this_cpu].pfm_ovfl_intr_cycles_max; 5558 + if (likely(!pfm_alt_intr_handler)) { 5559 + min = pfm_stats[this_cpu].pfm_ovfl_intr_cycles_min; 5560 + max = pfm_stats[this_cpu].pfm_ovfl_intr_cycles_max; 5564 5561 5565 - start_cycles = ia64_get_itc(); 5562 + start_cycles = ia64_get_itc(); 5566 5563 5567 - ret = pfm_do_interrupt_handler(irq, arg, regs); 5564 + ret = pfm_do_interrupt_handler(irq, arg, regs); 5568 5565 5569 - total_cycles = ia64_get_itc(); 5566 + total_cycles = ia64_get_itc(); 5570 5567 5571 - /* 5572 - * don't measure spurious interrupts 5573 - */ 5574 - if (likely(ret == 0)) { 5575 - total_cycles -= 
start_cycles; 5568 + /* 5569 + * don't measure spurious interrupts 5570 + */ 5571 + if (likely(ret == 0)) { 5572 + total_cycles -= start_cycles; 5576 5573 5577 - if (total_cycles < min) pfm_stats[this_cpu].pfm_ovfl_intr_cycles_min = total_cycles; 5578 - if (total_cycles > max) pfm_stats[this_cpu].pfm_ovfl_intr_cycles_max = total_cycles; 5574 + if (total_cycles < min) pfm_stats[this_cpu].pfm_ovfl_intr_cycles_min = total_cycles; 5575 + if (total_cycles > max) pfm_stats[this_cpu].pfm_ovfl_intr_cycles_max = total_cycles; 5579 5576 5580 - pfm_stats[this_cpu].pfm_ovfl_intr_cycles += total_cycles; 5577 + pfm_stats[this_cpu].pfm_ovfl_intr_cycles += total_cycles; 5578 + } 5581 5579 } 5580 + else { 5581 + (*pfm_alt_intr_handler->handler)(irq, arg, regs); 5582 + } 5583 + 5582 5584 put_cpu_no_resched(); 5583 5585 return IRQ_HANDLED; 5584 5586 } ··· 6434 6424 .flags = SA_INTERRUPT, 6435 6425 .name = "perfmon" 6436 6426 }; 6427 + 6428 + static void 6429 + pfm_alt_save_pmu_state(void *data) 6430 + { 6431 + struct pt_regs *regs; 6432 + 6433 + regs = ia64_task_regs(current); 6434 + 6435 + DPRINT(("called\n")); 6436 + 6437 + /* 6438 + * should not be necessary but 6439 + * let's take not risk 6440 + */ 6441 + pfm_clear_psr_up(); 6442 + pfm_clear_psr_pp(); 6443 + ia64_psr(regs)->pp = 0; 6444 + 6445 + /* 6446 + * This call is required 6447 + * May cause a spurious interrupt on some processors 6448 + */ 6449 + pfm_freeze_pmu(); 6450 + 6451 + ia64_srlz_d(); 6452 + } 6453 + 6454 + void 6455 + pfm_alt_restore_pmu_state(void *data) 6456 + { 6457 + struct pt_regs *regs; 6458 + 6459 + regs = ia64_task_regs(current); 6460 + 6461 + DPRINT(("called\n")); 6462 + 6463 + /* 6464 + * put PMU back in state expected 6465 + * by perfmon 6466 + */ 6467 + pfm_clear_psr_up(); 6468 + pfm_clear_psr_pp(); 6469 + ia64_psr(regs)->pp = 0; 6470 + 6471 + /* 6472 + * perfmon runs with PMU unfrozen at all times 6473 + */ 6474 + pfm_unfreeze_pmu(); 6475 + 6476 + ia64_srlz_d(); 6477 + } 6478 + 6479 + int 6480 + 
pfm_install_alt_pmu_interrupt(pfm_intr_handler_desc_t *hdl) 6481 + { 6482 + int ret, i; 6483 + int reserve_cpu; 6484 + 6485 + /* some sanity checks */ 6486 + if (hdl == NULL || hdl->handler == NULL) return -EINVAL; 6487 + 6488 + /* do the easy test first */ 6489 + if (pfm_alt_intr_handler) return -EBUSY; 6490 + 6491 + /* one at a time in the install or remove, just fail the others */ 6492 + if (!spin_trylock(&pfm_alt_install_check)) { 6493 + return -EBUSY; 6494 + } 6495 + 6496 + /* reserve our session */ 6497 + for_each_online_cpu(reserve_cpu) { 6498 + ret = pfm_reserve_session(NULL, 1, reserve_cpu); 6499 + if (ret) goto cleanup_reserve; 6500 + } 6501 + 6502 + /* save the current system wide pmu states */ 6503 + ret = on_each_cpu(pfm_alt_save_pmu_state, NULL, 0, 1); 6504 + if (ret) { 6505 + DPRINT(("on_each_cpu() failed: %d\n", ret)); 6506 + goto cleanup_reserve; 6507 + } 6508 + 6509 + /* officially change to the alternate interrupt handler */ 6510 + pfm_alt_intr_handler = hdl; 6511 + 6512 + spin_unlock(&pfm_alt_install_check); 6513 + 6514 + return 0; 6515 + 6516 + cleanup_reserve: 6517 + for_each_online_cpu(i) { 6518 + /* don't unreserve more than we reserved */ 6519 + if (i >= reserve_cpu) break; 6520 + 6521 + pfm_unreserve_session(NULL, 1, i); 6522 + } 6523 + 6524 + spin_unlock(&pfm_alt_install_check); 6525 + 6526 + return ret; 6527 + } 6528 + EXPORT_SYMBOL_GPL(pfm_install_alt_pmu_interrupt); 6529 + 6530 + int 6531 + pfm_remove_alt_pmu_interrupt(pfm_intr_handler_desc_t *hdl) 6532 + { 6533 + int i; 6534 + int ret; 6535 + 6536 + if (hdl == NULL) return -EINVAL; 6537 + 6538 + /* cannot remove someone else's handler! */
6539 + if (pfm_alt_intr_handler != hdl) return -EINVAL; 6540 + 6541 + /* one at a time in the install or remove, just fail the others */ 6542 + if (!spin_trylock(&pfm_alt_install_check)) { 6543 + return -EBUSY; 6544 + } 6545 + 6546 + pfm_alt_intr_handler = NULL; 6547 + 6548 + ret = on_each_cpu(pfm_alt_restore_pmu_state, NULL, 0, 1); 6549 + if (ret) { 6550 + DPRINT(("on_each_cpu() failed: %d\n", ret)); 6551 + } 6552 + 6553 + for_each_online_cpu(i) { 6554 + pfm_unreserve_session(NULL, 1, i); 6555 + } 6556 + 6557 + spin_unlock(&pfm_alt_install_check); 6558 + 6559 + return 0; 6560 + } 6561 + EXPORT_SYMBOL_GPL(pfm_remove_alt_pmu_interrupt); 6437 6562 6438 6563 /* 6439 6564 * perfmon initialization routine, called from the initcall() table
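Reviewer note: the new pfm_install_alt_pmu_interrupt() above follows a reserve-all-or-roll-back pattern — take the trylock, reserve a session on every online CPU, and on the first failure unreserve only the CPUs reserved so far. A minimal userspace sketch of that pattern, with stand-in reserve/unreserve stubs (the stub names and NCPUS are illustrative, not kernel API):

```c
#include <assert.h>
#include <string.h>

#define NCPUS 4

/* stand-in per-CPU session bookkeeping; the stub can be made to fail
 * on a chosen CPU to exercise the rollback path */
static int reserved[NCPUS];
static int fail_cpu = -1;

static int pfm_reserve_session_stub(int cpu)
{
	if (cpu == fail_cpu)
		return -1;
	reserved[cpu] = 1;
	return 0;
}

static void pfm_unreserve_session_stub(int cpu)
{
	reserved[cpu] = 0;
}

/* sketch of the reserve-all-or-roll-back loop: stop at the first
 * failing CPU and release only the sessions reserved before it */
static int install_all(void)
{
	int reserve_cpu, i, ret = 0;

	for (reserve_cpu = 0; reserve_cpu < NCPUS; reserve_cpu++) {
		ret = pfm_reserve_session_stub(reserve_cpu);
		if (ret)
			goto cleanup_reserve;
	}
	return 0;

cleanup_reserve:
	for (i = 0; i < NCPUS; i++) {
		/* don't unreserve more than we reserved */
		if (i >= reserve_cpu)
			break;
		pfm_unreserve_session_stub(i);
	}
	return ret;
}
```

Either everything is reserved or nothing is, which is what lets the later spin_unlock() paths stay simple.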
+17 -3
arch/ia64/kernel/ptrace.c
··· 692 692 unsigned long cfm) 693 693 { 694 694 struct unw_frame_info info, prev_info; 695 - unsigned long ip, pr; 695 + unsigned long ip, sp, pr; 696 696 697 697 unw_init_from_blocked_task(&info, child); 698 698 while (1) { 699 699 prev_info = info; 700 700 if (unw_unwind(&info) < 0) 701 701 return; 702 - if (unw_get_rp(&info, &ip) < 0) 702 + 703 + unw_get_sp(&info, &sp); 704 + if ((long)((unsigned long)child + IA64_STK_OFFSET - sp) 705 + < IA64_PT_REGS_SIZE) { 706 + dprintk("ptrace.%s: ran off the top of the kernel " 707 + "stack\n", __FUNCTION__); 703 708 return; 704 - if (ip < FIXADDR_USER_END) 709 + } 710 + if (unw_get_pr (&prev_info, &pr) < 0) { 711 + unw_get_rp(&prev_info, &ip); 712 + dprintk("ptrace.%s: failed to read " 713 + "predicate register (ip=0x%lx)\n", 714 + __FUNCTION__, ip); 715 + return; 716 + } 717 + if (unw_is_intr_frame(&info) 718 + && (pr & (1UL << PRED_USER_STACK))) 705 719 break; 706 720 } 707 721
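Reviewer note: the ptrace.c change above replaces the old "stop below FIXADDR_USER_END" test with two checks — bail out if the unwinder's sp runs past the top of the task's kernel stack, and otherwise stop at the first interrupt frame whose user-stack predicate is set. A sketch of just the new stack-bound test, with placeholder constants standing in for the real IA64_STK_OFFSET / IA64_PT_REGS_SIZE values:

```c
#include <assert.h>

#define IA64_STK_OFFSET 0x8000UL  /* stand-in kernel stack size */
#define IA64_PT_REGS_SIZE 0x400L  /* stand-in sizeof(struct pt_regs) */

/* sketch of the new termination test in the unwind loop: give up as
 * soon as the current sp leaves less than a pt_regs worth of room
 * below the top of the task's kernel stack */
static int ran_off_stack(unsigned long task_base, unsigned long sp)
{
	return (long)(task_base + IA64_STK_OFFSET - sp) < IA64_PT_REGS_SIZE;
}
```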
+1 -1
arch/ia64/kernel/smpboot.c
··· 624 624 __u16 thread_id; 625 625 __u16 proc_fixed_addr; 626 626 __u8 valid; 627 - }mt_info[NR_CPUS] __devinit; 627 + } mt_info[NR_CPUS] __devinitdata; 628 628 629 629 #ifdef CONFIG_HOTPLUG_CPU 630 630 static inline void
-7
arch/ia64/kernel/sys_ia64.c
··· 182 182 } 183 183 } 184 184 185 - /* 186 - * A zero mmap always succeeds in Linux, independent of whether or not the 187 - * remaining arguments are valid. 188 - */ 189 - if (len == 0) 190 - goto out; 191 - 192 185 /* Careful about overflows.. */ 193 186 len = PAGE_ALIGN(len); 194 187 if (!len || len > TASK_SIZE) {
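Reviewer note: removing the "a zero mmap always succeeds" special case above means a zero-length request now falls through to the generic validation — PAGE_ALIGN(0) stays 0, so the existing `!len` test rejects it, and the same check also catches lengths that overflow during alignment. A sketch of that post-patch length check (PAGE_SIZE and TASK_SIZE below are stand-in values for illustration):

```c
#include <assert.h>

#define PAGE_SIZE 4096UL
#define PAGE_ALIGN(len) (((len) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))
#define TASK_SIZE (1UL << 40)  /* stand-in address-space limit */

/* sketch of the generic length validation that now also handles
 * len == 0: alignment leaves it 0, and !len rejects the request;
 * a near-ULONG_MAX len wraps to 0 during alignment and is rejected
 * by the same test ("Careful about overflows") */
static int mmap_len_valid(unsigned long len)
{
	len = PAGE_ALIGN(len);
	return len != 0 && len <= TASK_SIZE;
}
```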
+2
arch/ia64/sn/kernel/setup.c
··· 271 271 int major = sn_sal_rev_major(), minor = sn_sal_rev_minor(); 272 272 extern void sn_cpu_init(void); 273 273 274 + ia64_sn_plat_set_error_handling_features(); 275 + 274 276 /* 275 277 * If the generic code has enabled vga console support - lets 276 278 * get rid of it again. This is a kludge for the fact that ACPI
+2 -2
arch/ppc/Kconfig
··· 1143 1143 1144 1144 config PCI_8260 1145 1145 bool 1146 - depends on PCI && 8260 && !8272 1146 + depends on PCI && 8260 1147 1147 default y 1148 1148 1149 1149 config 8260_PCI9 1150 1150 bool " Enable workaround for MPC826x erratum PCI 9" 1151 - depends on PCI_8260 1151 + depends on PCI_8260 && !ADS8272 1152 1152 default y 1153 1153 1154 1154 choice
+2 -1
arch/ppc/boot/images/Makefile
··· 22 22 $(obj)/uImage: $(obj)/vmlinux.gz 23 23 $(Q)rm -f $@ 24 24 $(call if_changed,uimage) 25 - @echo ' Image: $@' $(if $(wildcard $@),'is ready','not made') 25 + @echo -n ' Image: $@ ' 26 + @if [ -f $@ ]; then echo 'is ready' ; else echo 'not made'; fi 26 27 27 28 # Files generated that shall be removed upon make clean 28 29 clean-files := sImage vmapus vmlinux* miboot* zImage* uImage
+92 -25
arch/ppc/configs/mpc8555_cds_defconfig
··· 1 1 # 2 2 # Automatically generated make config: don't edit 3 - # Linux kernel version: 2.6.11-rc1 4 - # Thu Jan 20 01:25:35 2005 3 + # Linux kernel version: 2.6.12-rc4 4 + # Tue May 17 11:56:01 2005 5 5 # 6 6 CONFIG_MMU=y 7 7 CONFIG_GENERIC_HARDIRQS=y ··· 11 11 CONFIG_PPC=y 12 12 CONFIG_PPC32=y 13 13 CONFIG_GENERIC_NVRAM=y 14 + CONFIG_SCHED_NO_NO_OMIT_FRAME_POINTER=y 14 15 15 16 # 16 17 # Code maturity level options ··· 19 18 CONFIG_EXPERIMENTAL=y 20 19 CONFIG_CLEAN_COMPILE=y 21 20 CONFIG_BROKEN_ON_SMP=y 21 + CONFIG_INIT_ENV_ARG_LIMIT=32 22 22 23 23 # 24 24 # General setup ··· 31 29 # CONFIG_BSD_PROCESS_ACCT is not set 32 30 CONFIG_SYSCTL=y 33 31 # CONFIG_AUDIT is not set 34 - CONFIG_LOG_BUF_SHIFT=14 35 32 # CONFIG_HOTPLUG is not set 36 33 CONFIG_KOBJECT_UEVENT=y 37 34 # CONFIG_IKCONFIG is not set 38 35 CONFIG_EMBEDDED=y 39 36 # CONFIG_KALLSYMS is not set 37 + CONFIG_PRINTK=y 38 + CONFIG_BUG=y 39 + CONFIG_BASE_FULL=y 40 40 CONFIG_FUTEX=y 41 41 # CONFIG_EPOLL is not set 42 42 # CONFIG_CC_OPTIMIZE_FOR_SIZE is not set ··· 48 44 CONFIG_CC_ALIGN_LOOPS=0 49 45 CONFIG_CC_ALIGN_JUMPS=0 50 46 # CONFIG_TINY_SHMEM is not set 47 + CONFIG_BASE_SMALL=0 51 48 52 49 # 53 50 # Loadable module support ··· 67 62 CONFIG_E500=y 68 63 CONFIG_BOOKE=y 69 64 CONFIG_FSL_BOOKE=y 65 + # CONFIG_PHYS_64BIT is not set 70 66 CONFIG_SPE=y 71 67 CONFIG_MATH_EMULATION=y 72 68 # CONFIG_CPU_FREQ is not set 73 69 CONFIG_PPC_GEN550=y 70 + # CONFIG_PM is not set 74 71 CONFIG_85xx=y 75 72 CONFIG_PPC_INDIRECT_PCI_BE=y 76 73 ··· 83 76 CONFIG_MPC8555_CDS=y 84 77 # CONFIG_MPC8560_ADS is not set 85 78 # CONFIG_SBC8560 is not set 79 + # CONFIG_STX_GP3 is not set 86 80 CONFIG_MPC8555=y 87 81 CONFIG_85xx_PCI2=y 88 82 ··· 98 90 CONFIG_BINFMT_ELF=y 99 91 # CONFIG_BINFMT_MISC is not set 100 92 # CONFIG_CMDLINE_BOOL is not set 93 + CONFIG_ISA_DMA_API=y 101 94 102 95 # 103 96 # Bus options ··· 112 103 # PCCARD (PCMCIA/CardBus) support 113 104 # 114 105 # CONFIG_PCCARD is not set 115 - 116 - # 117 - # PC-card 
bridges 118 - # 119 106 120 107 # 121 108 # Advanced setup ··· 185 180 # 186 181 # ATA/ATAPI/MFM/RLL support 187 182 # 188 - # CONFIG_IDE is not set 183 + CONFIG_IDE=y 184 + CONFIG_BLK_DEV_IDE=y 185 + 186 + # 187 + # Please see Documentation/ide.txt for help/info on IDE drives 188 + # 189 + # CONFIG_BLK_DEV_IDE_SATA is not set 190 + CONFIG_BLK_DEV_IDEDISK=y 191 + # CONFIG_IDEDISK_MULTI_MODE is not set 192 + # CONFIG_BLK_DEV_IDECD is not set 193 + # CONFIG_BLK_DEV_IDETAPE is not set 194 + # CONFIG_BLK_DEV_IDEFLOPPY is not set 195 + # CONFIG_IDE_TASK_IOCTL is not set 196 + 197 + # 198 + # IDE chipset support/bugfixes 199 + # 200 + CONFIG_IDE_GENERIC=y 201 + CONFIG_BLK_DEV_IDEPCI=y 202 + CONFIG_IDEPCI_SHARE_IRQ=y 203 + # CONFIG_BLK_DEV_OFFBOARD is not set 204 + CONFIG_BLK_DEV_GENERIC=y 205 + # CONFIG_BLK_DEV_OPTI621 is not set 206 + # CONFIG_BLK_DEV_SL82C105 is not set 207 + CONFIG_BLK_DEV_IDEDMA_PCI=y 208 + # CONFIG_BLK_DEV_IDEDMA_FORCED is not set 209 + CONFIG_IDEDMA_PCI_AUTO=y 210 + # CONFIG_IDEDMA_ONLYDISK is not set 211 + # CONFIG_BLK_DEV_AEC62XX is not set 212 + # CONFIG_BLK_DEV_ALI15X3 is not set 213 + # CONFIG_BLK_DEV_AMD74XX is not set 214 + # CONFIG_BLK_DEV_CMD64X is not set 215 + # CONFIG_BLK_DEV_TRIFLEX is not set 216 + # CONFIG_BLK_DEV_CY82C693 is not set 217 + # CONFIG_BLK_DEV_CS5520 is not set 218 + # CONFIG_BLK_DEV_CS5530 is not set 219 + # CONFIG_BLK_DEV_HPT34X is not set 220 + # CONFIG_BLK_DEV_HPT366 is not set 221 + # CONFIG_BLK_DEV_SC1200 is not set 222 + # CONFIG_BLK_DEV_PIIX is not set 223 + # CONFIG_BLK_DEV_NS87415 is not set 224 + # CONFIG_BLK_DEV_PDC202XX_OLD is not set 225 + # CONFIG_BLK_DEV_PDC202XX_NEW is not set 226 + # CONFIG_BLK_DEV_SVWKS is not set 227 + # CONFIG_BLK_DEV_SIIMAGE is not set 228 + # CONFIG_BLK_DEV_SLC90E66 is not set 229 + # CONFIG_BLK_DEV_TRM290 is not set 230 + CONFIG_BLK_DEV_VIA82CXXX=y 231 + # CONFIG_IDE_ARM is not set 232 + CONFIG_BLK_DEV_IDEDMA=y 233 + # CONFIG_IDEDMA_IVB is not set 234 + CONFIG_IDEDMA_AUTO=y 235 + 
# CONFIG_BLK_DEV_HD is not set 189 236 190 237 # 191 238 # SCSI device support ··· 277 220 # 278 221 CONFIG_PACKET=y 279 222 # CONFIG_PACKET_MMAP is not set 280 - # CONFIG_NETLINK_DEV is not set 281 223 CONFIG_UNIX=y 282 224 # CONFIG_NET_KEY is not set 283 225 CONFIG_INET=y ··· 426 370 # CONFIG_INPUT_EVBUG is not set 427 371 428 372 # 429 - # Input I/O drivers 430 - # 431 - # CONFIG_GAMEPORT is not set 432 - CONFIG_SOUND_GAMEPORT=y 433 - # CONFIG_SERIO is not set 434 - # CONFIG_SERIO_I8042 is not set 435 - 436 - # 437 373 # Input Device Drivers 438 374 # 439 375 # CONFIG_INPUT_KEYBOARD is not set ··· 433 385 # CONFIG_INPUT_JOYSTICK is not set 434 386 # CONFIG_INPUT_TOUCHSCREEN is not set 435 387 # CONFIG_INPUT_MISC is not set 388 + 389 + # 390 + # Hardware I/O ports 391 + # 392 + # CONFIG_SERIO is not set 393 + # CONFIG_GAMEPORT is not set 394 + CONFIG_SOUND_GAMEPORT=y 436 395 437 396 # 438 397 # Character devices ··· 461 406 CONFIG_SERIAL_CORE=y 462 407 CONFIG_SERIAL_CORE_CONSOLE=y 463 408 # CONFIG_SERIAL_CPM is not set 409 + # CONFIG_SERIAL_JSM is not set 464 410 CONFIG_UNIX98_PTYS=y 465 411 CONFIG_LEGACY_PTYS=y 466 412 CONFIG_LEGACY_PTY_COUNT=256 ··· 490 434 # CONFIG_RAW_DRIVER is not set 491 435 492 436 # 437 + # TPM devices 438 + # 439 + # CONFIG_TCG_TPM is not set 440 + 441 + # 493 442 # I2C support 494 443 # 495 444 CONFIG_I2C=y ··· 517 456 # CONFIG_I2C_AMD8111 is not set 518 457 # CONFIG_I2C_I801 is not set 519 458 # CONFIG_I2C_I810 is not set 459 + # CONFIG_I2C_PIIX4 is not set 520 460 # CONFIG_I2C_ISA is not set 521 461 CONFIG_I2C_MPC=y 522 462 # CONFIG_I2C_NFORCE2 is not set 523 463 # CONFIG_I2C_PARPORT_LIGHT is not set 524 - # CONFIG_I2C_PIIX4 is not set 525 464 # CONFIG_I2C_PROSAVAGE is not set 526 465 # CONFIG_I2C_SAVAGE4 is not set 527 466 # CONFIG_SCx200_ACB is not set ··· 544 483 # CONFIG_SENSORS_ASB100 is not set 545 484 # CONFIG_SENSORS_DS1621 is not set 546 485 # CONFIG_SENSORS_FSCHER is not set 486 + # CONFIG_SENSORS_FSCPOS is not set 547 487 # 
CONFIG_SENSORS_GL518SM is not set 488 + # CONFIG_SENSORS_GL520SM is not set 548 489 # CONFIG_SENSORS_IT87 is not set 549 490 # CONFIG_SENSORS_LM63 is not set 550 491 # CONFIG_SENSORS_LM75 is not set ··· 557 494 # CONFIG_SENSORS_LM85 is not set 558 495 # CONFIG_SENSORS_LM87 is not set 559 496 # CONFIG_SENSORS_LM90 is not set 497 + # CONFIG_SENSORS_LM92 is not set 560 498 # CONFIG_SENSORS_MAX1619 is not set 561 499 # CONFIG_SENSORS_PC87360 is not set 562 500 # CONFIG_SENSORS_SMSC47B397 is not set 501 + # CONFIG_SENSORS_SIS5595 is not set 563 502 # CONFIG_SENSORS_SMSC47M1 is not set 564 503 # CONFIG_SENSORS_VIA686A is not set 565 504 # CONFIG_SENSORS_W83781D is not set ··· 571 506 # 572 507 # Other I2C Chip support 573 508 # 509 + # CONFIG_SENSORS_DS1337 is not set 574 510 # CONFIG_SENSORS_EEPROM is not set 575 511 # CONFIG_SENSORS_PCF8574 is not set 576 512 # CONFIG_SENSORS_PCF8591 is not set 577 513 # CONFIG_SENSORS_RTC8564 is not set 514 + # CONFIG_SENSORS_M41T00 is not set 578 515 # CONFIG_I2C_DEBUG_CORE is not set 579 516 # CONFIG_I2C_DEBUG_ALGO is not set 580 517 # CONFIG_I2C_DEBUG_BUS is not set ··· 605 538 # Graphics support 606 539 # 607 540 # CONFIG_FB is not set 608 - # CONFIG_BACKLIGHT_LCD_SUPPORT is not set 609 541 610 542 # 611 543 # Sound ··· 614 548 # 615 549 # USB support 616 550 # 617 - # CONFIG_USB is not set 618 551 CONFIG_USB_ARCH_HAS_HCD=y 619 552 CONFIG_USB_ARCH_HAS_OHCI=y 620 - 621 - # 622 - # NOTE: USB_STORAGE enables SCSI, and 'SCSI disk support' may also be needed; see USB_STORAGE Help for more information 623 - # 553 + # CONFIG_USB is not set 624 554 625 555 # 626 556 # USB Gadget Support ··· 647 585 CONFIG_FS_MBCACHE=y 648 586 # CONFIG_REISERFS_FS is not set 649 587 # CONFIG_JFS_FS is not set 588 + 589 + # 590 + # XFS support 591 + # 650 592 # CONFIG_XFS_FS is not set 651 593 # CONFIG_MINIX_FS is not set 652 594 # CONFIG_ROMFS_FS is not set ··· 712 646 # CONFIG_NFSD is not set 713 647 CONFIG_ROOT_NFS=y 714 648 CONFIG_LOCKD=y 715 - # 
CONFIG_EXPORTFS is not set 716 649 CONFIG_SUNRPC=y 717 650 # CONFIG_RPCSEC_GSS_KRB5 is not set 718 651 # CONFIG_RPCSEC_GSS_SPKM3 is not set ··· 763 698 # 764 699 # Kernel hacking 765 700 # 701 + # CONFIG_PRINTK_TIME is not set 766 702 # CONFIG_DEBUG_KERNEL is not set 703 + CONFIG_LOG_BUF_SHIFT=14 767 704 # CONFIG_KGDB_CONSOLE is not set 768 705 # CONFIG_SERIAL_TEXT_DEBUG is not set 769 706
+14 -1
arch/ppc/kernel/head_fsl_booke.S
··· 232 232 tlbwe 233 233 234 234 /* 7. Jump to KERNELBASE mapping */ 235 - li r7,0 235 + lis r7,MSR_KERNEL@h 236 + ori r7,r7,MSR_KERNEL@l 236 237 bl 1f /* Find our address */ 237 238 1: mflr r9 238 239 rlwimi r6,r9,0,20,31 ··· 292 291 mfspr r2,SPRN_HID0 293 292 oris r2,r2,HID0_DOZE@h 294 293 mtspr SPRN_HID0, r2 294 + #endif 295 + 296 + #if !defined(CONFIG_BDI_SWITCH) 297 + /* 298 + * The Abatron BDI JTAG debugger does not tolerate others 299 + * mucking with the debug registers. 300 + */ 301 + lis r2,DBCR0_IDM@h 302 + mtspr SPRN_DBCR0,r2 303 + /* clear any residual debug events */ 304 + li r2,-1 305 + mtspr SPRN_DBSR,r2 295 306 #endif 296 307 297 308 /*
+1 -6
arch/ppc/kernel/traps.c
··· 408 408 409 409 /* Early out if we are an invalid form of lswx */ 410 410 if ((instword & INST_STRING_MASK) == INST_LSWX) 411 - if ((rA >= rT) || (NB_RB >= rT) || (rT == rA) || (rT == NB_RB)) 412 - return -EINVAL; 413 - 414 - /* Early out if we are an invalid form of lswi */ 415 - if ((instword & INST_STRING_MASK) == INST_LSWI) 416 - if ((rA >= rT) || (rT == rA)) 411 + if ((rT == rA) || (rT == NB_RB)) 417 412 return -EINVAL; 418 413 419 414 EA = (rA == 0) ? 0 : regs->gpr[rA];
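Reviewer note: the traps.c hunk above narrows the invalid-form test for lswx to the two genuinely illegal cases — the destination register colliding with rA or with rB (NB_RB in the driver) — and drops the bogus rA >= rT rejection along with the lswi check entirely. A sketch of the post-patch predicate:

```c
#include <assert.h>

/* post-patch validity test for the lswx string instruction: only a
 * destination register rT that collides with rA or rB is rejected;
 * rA >= rT is no longer treated as invalid */
static int lswx_form_invalid(unsigned int rT, unsigned int rA,
			     unsigned int rB)
{
	return (rT == rA) || (rT == rB);
}
```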
-1
arch/ppc/platforms/83xx/mpc834x_sys.c
··· 127 127 { 128 128 /* we steal the lowest ioremap addr for virt space */ 129 129 io_block_mapping(VIRT_IMMRBAR, immrbar, 1024*1024, _PAGE_IO); 130 - io_block_mapping(BCSR_VIRT_ADDR, BCSR_PHYS_ADDR, BCSR_SIZE, _PAGE_IO); 131 130 } 132 131 133 132 int
+6 -1
arch/ppc/platforms/83xx/mpc834x_sys.h
··· 26 26 #define VIRT_IMMRBAR ((uint)0xfe000000) 27 27 28 28 #define BCSR_PHYS_ADDR ((uint)0xf8000000) 29 - #define BCSR_VIRT_ADDR ((uint)0xfe100000) 30 29 #define BCSR_SIZE ((uint)(32 * 1024)) 30 + 31 + #define BCSR_MISC_REG2_OFF 0x07 32 + #define BCSR_MISC_REG2_PORESET 0x01 33 + 34 + #define BCSR_MISC_REG3_OFF 0x08 35 + #define BCSR_MISC_REG3_CNFLOCK 0x80 31 36 32 37 #ifdef CONFIG_PCI 33 38 /* PCI interrupt controller */
+3
arch/ppc/platforms/85xx/mpc8540_ads.c
··· 210 210 #if defined(CONFIG_SERIAL_8250) && defined(CONFIG_SERIAL_TEXT_DEBUG) 211 211 ppc_md.progress = gen550_progress; 212 212 #endif /* CONFIG_SERIAL_8250 && CONFIG_SERIAL_TEXT_DEBUG */ 213 + #if defined(CONFIG_SERIAL_8250) && defined(CONFIG_KGDB) 214 + ppc_md.early_serial_map = mpc85xx_early_serial_map; 215 + #endif /* CONFIG_SERIAL_8250 && CONFIG_KGDB */ 213 216 214 217 if (ppc_md.progress) 215 218 ppc_md.progress("mpc8540ads_init(): exit", 0);
+141 -2
arch/ppc/platforms/85xx/mpc85xx_cds_common.c
··· 44 44 #include <asm/machdep.h> 45 45 #include <asm/prom.h> 46 46 #include <asm/open_pic.h> 47 + #include <asm/i8259.h> 47 48 #include <asm/bootinfo.h> 48 49 #include <asm/pci-bridge.h> 49 50 #include <asm/mpc85xx.h> ··· 182 181 mpc85xx_cds_init_IRQ(void) 183 182 { 184 183 bd_t *binfo = (bd_t *) __res; 184 + int i; 185 185 186 186 /* Determine the Physical Address of the OpenPIC regs */ 187 187 phys_addr_t OpenPIC_PAddr = binfo->bi_immr_base + MPC85xx_OPENPIC_OFFSET; ··· 199 197 * leave space for cascading interrupts underneath. 200 198 */ 201 199 openpic_init(MPC85xx_OPENPIC_IRQ_OFFSET); 200 + 201 + #ifdef CONFIG_PCI 202 + openpic_hookup_cascade(PIRQ0A, "82c59 cascade", i8259_irq); 203 + 204 + for (i = 0; i < NUM_8259_INTERRUPTS; i++) 205 + irq_desc[i].handler = &i8259_pic; 206 + 207 + i8259_init(0); 208 + #endif 202 209 203 210 #ifdef CONFIG_CPM2 204 211 /* Setup CPM2 PIC */ ··· 242 231 * interrupt on slot */ 243 232 { 244 233 { 0, 1, 2, 3 }, /* 16 - PMC */ 245 - { 3, 0, 0, 0 }, /* 17 P2P (Tsi320) */ 234 + { 0, 1, 2, 3 }, /* 17 P2P (Tsi320) */ 246 235 { 0, 1, 2, 3 }, /* 18 - Slot 1 */ 247 236 { 1, 2, 3, 0 }, /* 19 - Slot 2 */ 248 237 { 2, 3, 0, 1 }, /* 20 - Slot 3 */ ··· 291 280 return PCIBIOS_DEVICE_NOT_FOUND; 292 281 #endif 293 282 /* We explicitly do not go past the Tundra 320 Bridge */ 294 - if (bus == 1) 283 + if ((bus == 1) && (PCI_SLOT(devfn) == ARCADIA_2ND_BRIDGE_IDSEL)) 295 284 return PCIBIOS_DEVICE_NOT_FOUND; 296 285 if ((bus == 0) && (PCI_SLOT(devfn) == ARCADIA_2ND_BRIDGE_IDSEL)) 297 286 return PCIBIOS_DEVICE_NOT_FOUND; 298 287 else 299 288 return PCIBIOS_SUCCESSFUL; 289 + } 290 + 291 + void __init 292 + mpc85xx_cds_enable_via(struct pci_controller *hose) 293 + { 294 + u32 pci_class; 295 + u16 vid, did; 296 + 297 + early_read_config_dword(hose, 0, 0x88, PCI_CLASS_REVISION, &pci_class); 298 + if ((pci_class >> 16) != PCI_CLASS_BRIDGE_PCI) 299 + return; 300 + 301 + /* Configure P2P so that we can reach bus 1 */ 302 + early_write_config_byte(hose, 0, 
0x88, PCI_PRIMARY_BUS, 0); 303 + early_write_config_byte(hose, 0, 0x88, PCI_SECONDARY_BUS, 1); 304 + early_write_config_byte(hose, 0, 0x88, PCI_SUBORDINATE_BUS, 0xff); 305 + 306 + early_read_config_word(hose, 1, 0x10, PCI_VENDOR_ID, &vid); 307 + early_read_config_word(hose, 1, 0x10, PCI_DEVICE_ID, &did); 308 + 309 + if ((vid != PCI_VENDOR_ID_VIA) || 310 + (did != PCI_DEVICE_ID_VIA_82C686)) 311 + return; 312 + 313 + /* Enable USB and IDE functions */ 314 + early_write_config_byte(hose, 1, 0x10, 0x48, 0x08); 315 + } 316 + 317 + void __init 318 + mpc85xx_cds_fixup_via(struct pci_controller *hose) 319 + { 320 + u32 pci_class; 321 + u16 vid, did; 322 + 323 + early_read_config_dword(hose, 0, 0x88, PCI_CLASS_REVISION, &pci_class); 324 + if ((pci_class >> 16) != PCI_CLASS_BRIDGE_PCI) 325 + return; 326 + 327 + /* 328 + * Force the backplane P2P bridge to have a window 329 + * open from 0x00000000-0x00001fff in PCI I/O space. 330 + * This allows legacy I/O (i8259, etc) on the VIA 331 + * southbridge to be accessed. 332 + */ 333 + early_write_config_byte(hose, 0, 0x88, PCI_IO_BASE, 0x00); 334 + early_write_config_word(hose, 0, 0x88, PCI_IO_BASE_UPPER16, 0x0000); 335 + early_write_config_byte(hose, 0, 0x88, PCI_IO_LIMIT, 0x10); 336 + early_write_config_word(hose, 0, 0x88, PCI_IO_LIMIT_UPPER16, 0x0000); 337 + 338 + early_read_config_word(hose, 1, 0x10, PCI_VENDOR_ID, &vid); 339 + early_read_config_word(hose, 1, 0x10, PCI_DEVICE_ID, &did); 340 + if ((vid != PCI_VENDOR_ID_VIA) || 341 + (did != PCI_DEVICE_ID_VIA_82C686)) 342 + return; 343 + 344 + /* 345 + * Since the P2P window was forced to cover the fixed 346 + * legacy I/O addresses, it is necessary to manually 347 + * place the base addresses for the IDE and USB functions 348 + * within this window. 
349 + */ 350 + /* Function 1, IDE */ 351 + early_write_config_dword(hose, 1, 0x11, PCI_BASE_ADDRESS_0, 0x1ff8); 352 + early_write_config_dword(hose, 1, 0x11, PCI_BASE_ADDRESS_1, 0x1ff4); 353 + early_write_config_dword(hose, 1, 0x11, PCI_BASE_ADDRESS_2, 0x1fe8); 354 + early_write_config_dword(hose, 1, 0x11, PCI_BASE_ADDRESS_3, 0x1fe4); 355 + early_write_config_dword(hose, 1, 0x11, PCI_BASE_ADDRESS_4, 0x1fd0); 356 + 357 + /* Function 2, USB ports 0-1 */ 358 + early_write_config_dword(hose, 1, 0x12, PCI_BASE_ADDRESS_4, 0x1fa0); 359 + 360 + /* Function 3, USB ports 2-3 */ 361 + early_write_config_dword(hose, 1, 0x13, PCI_BASE_ADDRESS_4, 0x1f80); 362 + 363 + /* Function 5, Power Management */ 364 + early_write_config_dword(hose, 1, 0x15, PCI_BASE_ADDRESS_0, 0x1e00); 365 + early_write_config_dword(hose, 1, 0x15, PCI_BASE_ADDRESS_1, 0x1dfc); 366 + early_write_config_dword(hose, 1, 0x15, PCI_BASE_ADDRESS_2, 0x1df8); 367 + 368 + /* Function 6, AC97 Interface */ 369 + early_write_config_dword(hose, 1, 0x16, PCI_BASE_ADDRESS_0, 0x1c00); 370 + } 371 + 372 + void __init 373 + mpc85xx_cds_pcibios_fixup(void) 374 + { 375 + struct pci_dev *dev = NULL; 376 + u_char c; 377 + 378 + if ((dev = pci_find_device(PCI_VENDOR_ID_VIA, 379 + PCI_DEVICE_ID_VIA_82C586_1, NULL))) { 380 + /* 381 + * U-Boot does not set the enable bits 382 + * for the IDE device. Force them on here. 
383 + */ 384 + pci_read_config_byte(dev, 0x40, &c); 385 + c |= 0x03; /* IDE: Chip Enable Bits */ 386 + pci_write_config_byte(dev, 0x40, c); 387 + 388 + /* 389 + * Since only primary interface works, force the 390 + * IDE function to standard primary IDE interrupt 391 + * w/ 8259 offset 392 + */ 393 + dev->irq = 14; 394 + pci_write_config_byte(dev, PCI_INTERRUPT_LINE, dev->irq); 395 + } 396 + 397 + /* 398 + * Force legacy USB interrupt routing 399 + */ 400 + if ((dev = pci_find_device(PCI_VENDOR_ID_VIA, 401 + PCI_DEVICE_ID_VIA_82C586_2, NULL))) { 402 + dev->irq = 10; 403 + pci_write_config_byte(dev, PCI_INTERRUPT_LINE, 10); 404 + } 405 + 406 + if ((dev = pci_find_device(PCI_VENDOR_ID_VIA, 407 + PCI_DEVICE_ID_VIA_82C586_2, dev))) { 408 + dev->irq = 11; 409 + pci_write_config_byte(dev, PCI_INTERRUPT_LINE, 11); 410 + } 300 411 } 301 412 #endif /* CONFIG_PCI */ 302 413 ··· 461 328 loops_per_jiffy = freq / HZ; 462 329 463 330 #ifdef CONFIG_PCI 331 + /* VIA IDE configuration */ 332 + ppc_md.pcibios_fixup = mpc85xx_cds_pcibios_fixup; 333 + 464 334 /* setup PCI host bridges */ 465 335 mpc85xx_setup_hose(); 466 336 #endif ··· 595 459 #if defined(CONFIG_SERIAL_8250) && defined(CONFIG_SERIAL_TEXT_DEBUG) 596 460 ppc_md.progress = gen550_progress; 597 461 #endif /* CONFIG_SERIAL_8250 && CONFIG_SERIAL_TEXT_DEBUG */ 462 + #if defined(CONFIG_SERIAL_8250) && defined(CONFIG_KGDB) 463 + ppc_md.early_serial_map = mpc85xx_early_serial_map; 464 + #endif /* CONFIG_SERIAL_8250 && CONFIG_KGDB */ 598 465 599 466 if (ppc_md.progress) 600 467 ppc_md.progress("mpc85xx_cds_init(): exit", 0);
+3
arch/ppc/platforms/85xx/mpc85xx_cds_common.h
··· 77 77 78 78 #define MPC85XX_PCI2_IO_SIZE 0x01000000 79 79 80 + #define NR_8259_INTS 16 81 + #define CPM_IRQ_OFFSET NR_8259_INTS 82 + 80 83 #endif /* __MACH_MPC85XX_CDS_H__ */
+3
arch/ppc/platforms/85xx/sbc8560.c
··· 221 221 #if defined(CONFIG_SERIAL_8250) && defined(CONFIG_SERIAL_TEXT_DEBUG) 222 222 ppc_md.progress = gen550_progress; 223 223 #endif /* CONFIG_SERIAL_8250 && CONFIG_SERIAL_TEXT_DEBUG */ 224 + #if defined(CONFIG_SERIAL_8250) && defined(CONFIG_KGDB) 225 + ppc_md.early_serial_map = sbc8560_early_serial_map; 226 + #endif /* CONFIG_SERIAL_8250 && CONFIG_KGDB */ 224 227 225 228 if (ppc_md.progress) 226 229 ppc_md.progress("sbc8560_init(): exit", 0);
+22 -14
arch/ppc/platforms/pmac_cpufreq.c
··· 85 85 static int has_cpu_l2lve; 86 86 87 87 88 - #define PMAC_CPU_LOW_SPEED 1 89 - #define PMAC_CPU_HIGH_SPEED 0 90 - 91 88 /* There are only two frequency states for each processor. Values 92 89 * are in kHz for the time being. 93 90 */ 94 - #define CPUFREQ_HIGH PMAC_CPU_HIGH_SPEED 95 - #define CPUFREQ_LOW PMAC_CPU_LOW_SPEED 91 + #define CPUFREQ_HIGH 0 92 + #define CPUFREQ_LOW 1 96 93 97 94 static struct cpufreq_frequency_table pmac_cpu_freqs[] = { 98 95 {CPUFREQ_HIGH, 0}, 99 96 {CPUFREQ_LOW, 0}, 100 97 {0, CPUFREQ_TABLE_END}, 98 + }; 99 + 100 + static struct freq_attr* pmac_cpu_freqs_attr[] = { 101 + &cpufreq_freq_attr_scaling_available_freqs, 102 + NULL, 101 103 }; 102 104 103 105 static inline void local_delay(unsigned long ms) ··· 271 269 #ifdef DEBUG_FREQ 272 270 printk(KERN_DEBUG "HID1, before: %x\n", mfspr(SPRN_HID1)); 273 271 #endif 272 + pmu_suspend(); 273 + 274 274 /* Disable all interrupt sources on openpic */ 275 275 pic_prio = openpic_get_priority(); 276 276 openpic_set_priority(0xf); ··· 347 343 debug_calc_bogomips(); 348 344 #endif 349 345 346 + pmu_resume(); 347 + 350 348 preempt_enable(); 351 349 352 350 return 0; ··· 361 355 static unsigned long prev_l3cr; 362 356 363 357 freqs.old = cur_freq; 364 - freqs.new = (speed_mode == PMAC_CPU_HIGH_SPEED) ? hi_freq : low_freq; 358 + freqs.new = (speed_mode == CPUFREQ_HIGH) ? 
hi_freq : low_freq; 365 359 freqs.cpu = smp_processor_id(); 366 360 367 361 if (freqs.old == freqs.new) ··· 369 363 370 364 if (notify) 371 365 cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); 372 - if (speed_mode == PMAC_CPU_LOW_SPEED && 366 + if (speed_mode == CPUFREQ_LOW && 373 367 cpu_has_feature(CPU_FTR_L3CR)) { 374 368 l3cr = _get_L3CR(); 375 369 if (l3cr & L3CR_L3E) { ··· 377 371 _set_L3CR(0); 378 372 } 379 373 } 380 - set_speed_proc(speed_mode == PMAC_CPU_LOW_SPEED); 381 - if (speed_mode == PMAC_CPU_HIGH_SPEED && 374 + set_speed_proc(speed_mode == CPUFREQ_LOW); 375 + if (speed_mode == CPUFREQ_HIGH && 382 376 cpu_has_feature(CPU_FTR_L3CR)) { 383 377 l3cr = _get_L3CR(); 384 378 if ((prev_l3cr & L3CR_L3E) && l3cr != prev_l3cr) ··· 386 380 } 387 381 if (notify) 388 382 cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); 389 - cur_freq = (speed_mode == PMAC_CPU_HIGH_SPEED) ? hi_freq : low_freq; 383 + cur_freq = (speed_mode == CPUFREQ_HIGH) ? hi_freq : low_freq; 390 384 391 385 return 0; 392 386 } ··· 429 423 policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL; 430 424 policy->cur = cur_freq; 431 425 432 - return cpufreq_frequency_table_cpuinfo(policy, &pmac_cpu_freqs[0]); 426 + cpufreq_frequency_table_get_attr(pmac_cpu_freqs, policy->cpu); 427 + return cpufreq_frequency_table_cpuinfo(policy, pmac_cpu_freqs); 433 428 } 434 429 435 430 static u32 __pmac read_gpio(struct device_node *np) ··· 464 457 no_schedule = 1; 465 458 sleep_freq = cur_freq; 466 459 if (cur_freq == low_freq) 467 - do_set_cpu_speed(PMAC_CPU_HIGH_SPEED, 0); 460 + do_set_cpu_speed(CPUFREQ_HIGH, 0); 468 461 return 0; 469 462 } 470 463 ··· 480 473 * is that we force a switch to whatever it was, which is 481 474 * probably high speed due to our suspend() routine 482 475 */ 483 - do_set_cpu_speed(sleep_freq == low_freq ? PMAC_CPU_LOW_SPEED 484 - : PMAC_CPU_HIGH_SPEED, 0); 476 + do_set_cpu_speed(sleep_freq == low_freq ? 
477 + CPUFREQ_LOW : CPUFREQ_HIGH, 0); 485 478 486 479 no_schedule = 0; 487 480 return 0; ··· 495 488 .suspend = pmac_cpufreq_suspend, 496 489 .resume = pmac_cpufreq_resume, 497 490 .flags = CPUFREQ_PM_NO_WARN, 491 + .attr = pmac_cpu_freqs_attr, 498 492 .name = "powermac", 499 493 .owner = THIS_MODULE, 500 494 };
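Reviewer note: the pmac_cpufreq.c hunk above drops the private PMAC_CPU_*_SPEED constants and uses the cpufreq table indices directly, so CPUFREQ_HIGH (0) and CPUFREQ_LOW (1) index pmac_cpu_freqs[] and double as the speed-mode argument. A sketch of the resulting mode-to-frequency mapping (the kHz numbers are placeholders; on real hardware they are filled in at probe time):

```c
#include <assert.h>

/* the table indices now double as the speed-mode values */
#define CPUFREQ_HIGH 0
#define CPUFREQ_LOW  1

static unsigned int hi_freq = 867000;   /* placeholder kHz */
static unsigned int low_freq = 750000;  /* placeholder kHz */

/* sketch of the mapping used when filling cpufreq_freqs.new and
 * cur_freq after a transition */
static unsigned int freq_for_mode(int speed_mode)
{
	return (speed_mode == CPUFREQ_HIGH) ? hi_freq : low_freq;
}
```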
+18 -23
arch/ppc/platforms/pq2ads.h
···
/* PCI interrupt controller */
#define PCI_INT_STAT_REG	0xF8200000
#define PCI_INT_MASK_REG	0xF8200004
- #define PIRQA	(NR_SIU_INTS + 0)
- #define PIRQB	(NR_SIU_INTS + 1)
- #define PIRQC	(NR_SIU_INTS + 2)
- #define PIRQD	(NR_SIU_INTS + 3)
+ #define PIRQA	(NR_CPM_INTS + 0)
+ #define PIRQB	(NR_CPM_INTS + 1)
+ #define PIRQC	(NR_CPM_INTS + 2)
+ #define PIRQD	(NR_CPM_INTS + 3)

/*
 * PCI memory map definitions for MPC8266ADS-PCI.
···
 * 0x00000000-0x1FFFFFFF 0x00000000-0x1FFFFFFF MPC8266 local memory
 */

- /* window for a PCI master to access MPC8266 memory */
- #define PCI_SLV_MEM_LOCAL	0x00000000	/* Local base */
- #define PCI_SLV_MEM_BUS		0x00000000	/* PCI base */
+ /* All the other PCI memory map definitions reside at syslib/m82xx_pci.h
+    Here we should redefine what is unique for this board */
+ #define M82xx_PCI_SLAVE_MEM_LOCAL	0x00000000	/* Local base */
+ #define M82xx_PCI_SLAVE_MEM_BUS		0x00000000	/* PCI base */
+ #define M82xx_PCI_SLAVE_MEM_SIZE	0x10000000	/* 256 Mb */

- /* window for the processor to access PCI memory with prefetching */
- #define PCI_MSTR_MEM_LOCAL	0x80000000	/* Local base */
- #define PCI_MSTR_MEM_BUS	0x80000000	/* PCI base */
- #define PCI_MSTR_MEM_SIZE	0x20000000	/* 512MB */
+ #define M82xx_PCI_SLAVE_SEC_WND_SIZE	~(0x40000000 - 1U)	/* 2 x 512Mb */
+ #define M82xx_PCI_SLAVE_SEC_WND_BASE	0x80000000		/* PCI Memory base */

- /* window for the processor to access PCI memory without prefetching */
- #define PCI_MSTR_MEMIO_LOCAL	0xA0000000	/* Local base */
- #define PCI_MSTR_MEMIO_BUS	0xA0000000	/* PCI base */
- #define PCI_MSTR_MEMIO_SIZE	0x20000000	/* 512MB */
+ #if defined(CONFIG_ADS8272)
+ #define PCI_INT_TO_SIU	SIU_INT_IRQ2
+ #elif defined(CONFIG_PQ2FADS)
+ #define PCI_INT_TO_SIU	SIU_INT_IRQ6
+ #else
+ #warning PCI Bridge will be without interrupts support
+ #endif

- /* window for the processor to access PCI I/O */
- #define PCI_MSTR_IO_LOCAL	0xF4000000	/* Local base */
- #define PCI_MSTR_IO_BUS		0x00000000	/* PCI base */
- #define PCI_MSTR_IO_SIZE	0x04000000	/* 64MB */
-
- #define _IO_BASE	PCI_MSTR_IO_LOCAL
- #define _ISA_MEM_BASE	PCI_MSTR_MEMIO_LOCAL
- #define PCI_DRAM_OFFSET	PCI_SLV_MEM_BUS

#endif /* CONFIG_PCI */

#endif /* __MACH_ADS8260_DEFS */
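The new pq2ads.h defines express the secondary slave window mask as `~(0x40000000 - 1U)`, i.e. the PCIMSK-style "all ones above the window" mask for a power-of-two, size-aligned window. A small sketch of that arithmetic (window sizes taken from the hunk above; the helper name is ours):

```c
#include <assert.h>
#include <stdint.h>

/* PCIMSK-style mask for a size-aligned, power-of-two window:
 * every address bit above the window is significant for decode. */
static uint32_t wnd_mask(uint32_t size)
{
	return ~(size - 1U);
}
```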
+2 -2
arch/ppc/syslib/Makefile
···
obj-$(CONFIG_SPRUCE)		+= cpc700_pic.o indirect_pci.o pci_auto.o \
				   todc_time.o
obj-$(CONFIG_8260)		+= m8260_setup.o
- obj-$(CONFIG_PCI_8260)		+= m8260_pci.o indirect_pci.o
+ obj-$(CONFIG_PCI_8260)		+= m82xx_pci.o indirect_pci.o pci_auto.o
obj-$(CONFIG_8260_PCI9)		+= m8260_pci_erratum9.o
obj-$(CONFIG_CPM2)		+= cpm2_common.o cpm2_pic.o
ifeq ($(CONFIG_PPC_GEN550),y)
···
obj-$(CONFIG_40x)		+= dcr.o
obj-$(CONFIG_BOOKE)		+= dcr.o
obj-$(CONFIG_85xx)		+= open_pic.o ppc85xx_common.o ppc85xx_setup.o \
-					ppc_sys.o mpc85xx_sys.o \
+					ppc_sys.o i8259.o mpc85xx_sys.o \
					mpc85xx_devices.o
ifeq ($(CONFIG_85xx),y)
obj-$(CONFIG_PCI)		+= indirect_pci.o pci_auto.o
-193
arch/ppc/syslib/m8260_pci.c
··· 1 - /* 2 - * (C) Copyright 2003 3 - * Wolfgang Denk, DENX Software Engineering, wd@denx.de. 4 - * 5 - * (C) Copyright 2004 Red Hat, Inc. 6 - * 7 - * See file CREDITS for list of people who contributed to this 8 - * project. 9 - * 10 - * This program is free software; you can redistribute it and/or 11 - * modify it under the terms of the GNU General Public License as 12 - * published by the Free Software Foundation; either version 2 of 13 - * the License, or (at your option) any later version. 14 - * 15 - * This program is distributed in the hope that it will be useful, 16 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 17 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 18 - * GNU General Public License for more details. 19 - * 20 - * You should have received a copy of the GNU General Public License 21 - * along with this program; if not, write to the Free Software 22 - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, 23 - * MA 02111-1307 USA 24 - */ 25 - 26 - #include <linux/kernel.h> 27 - #include <linux/init.h> 28 - #include <linux/pci.h> 29 - #include <linux/slab.h> 30 - #include <linux/delay.h> 31 - 32 - #include <asm/byteorder.h> 33 - #include <asm/io.h> 34 - #include <asm/irq.h> 35 - #include <asm/uaccess.h> 36 - #include <asm/machdep.h> 37 - #include <asm/pci-bridge.h> 38 - #include <asm/immap_cpm2.h> 39 - #include <asm/mpc8260.h> 40 - 41 - #include "m8260_pci.h" 42 - 43 - 44 - /* PCI bus configuration registers. 45 - */ 46 - 47 - static void __init m8260_setup_pci(struct pci_controller *hose) 48 - { 49 - volatile cpm2_map_t *immap = cpm2_immr; 50 - unsigned long pocmr; 51 - u16 tempShort; 52 - 53 - #ifndef CONFIG_ATC /* already done in U-Boot */ 54 - /* 55 - * Setting required to enable IRQ1-IRQ7 (SIUMCR [DPPC]), 56 - * and local bus for PCI (SIUMCR [LBPC]). 
57 - */ 58 - immap->im_siu_conf.siu_82xx.sc_siumcr = 0x00640000; 59 - #endif 60 - 61 - /* Make PCI lowest priority */ 62 - /* Each 4 bits is a device bus request and the MS 4bits 63 - is highest priority */ 64 - /* Bus 4bit value 65 - --- ---------- 66 - CPM high 0b0000 67 - CPM middle 0b0001 68 - CPM low 0b0010 69 - PCI reguest 0b0011 70 - Reserved 0b0100 71 - Reserved 0b0101 72 - Internal Core 0b0110 73 - External Master 1 0b0111 74 - External Master 2 0b1000 75 - External Master 3 0b1001 76 - The rest are reserved */ 77 - immap->im_siu_conf.siu_82xx.sc_ppc_alrh = 0x61207893; 78 - 79 - /* Park bus on core while modifying PCI Bus accesses */ 80 - immap->im_siu_conf.siu_82xx.sc_ppc_acr = 0x6; 81 - 82 - /* 83 - * Set up master window that allows the CPU to access PCI space. This 84 - * window is set up using the first SIU PCIBR registers. 85 - */ 86 - immap->im_memctl.memc_pcimsk0 = MPC826x_PCI_MASK; 87 - immap->im_memctl.memc_pcibr0 = MPC826x_PCI_BASE | PCIBR_ENABLE; 88 - 89 - /* Disable machine check on no response or target abort */ 90 - immap->im_pci.pci_emr = cpu_to_le32(0x1fe7); 91 - /* Release PCI RST (by default the PCI RST signal is held low) */ 92 - immap->im_pci.pci_gcr = cpu_to_le32(PCIGCR_PCI_BUS_EN); 93 - 94 - /* give it some time */ 95 - mdelay(1); 96 - 97 - /* 98 - * Set up master window that allows the CPU to access PCI Memory (prefetch) 99 - * space. This window is set up using the first set of Outbound ATU registers. 100 - */ 101 - immap->im_pci.pci_potar0 = cpu_to_le32(MPC826x_PCI_LOWER_MEM >> 12); 102 - immap->im_pci.pci_pobar0 = cpu_to_le32((MPC826x_PCI_LOWER_MEM - MPC826x_PCI_MEM_OFFSET) >> 12); 103 - pocmr = ((MPC826x_PCI_UPPER_MEM - MPC826x_PCI_LOWER_MEM) >> 12) ^ 0xfffff; 104 - immap->im_pci.pci_pocmr0 = cpu_to_le32(pocmr | POCMR_ENABLE | POCMR_PREFETCH_EN); 105 - 106 - /* 107 - * Set up master window that allows the CPU to access PCI Memory (non-prefetch) 108 - * space. This window is set up using the second set of Outbound ATU registers. 
109 - */ 110 - immap->im_pci.pci_potar1 = cpu_to_le32(MPC826x_PCI_LOWER_MMIO >> 12); 111 - immap->im_pci.pci_pobar1 = cpu_to_le32((MPC826x_PCI_LOWER_MMIO - MPC826x_PCI_MMIO_OFFSET) >> 12); 112 - pocmr = ((MPC826x_PCI_UPPER_MMIO - MPC826x_PCI_LOWER_MMIO) >> 12) ^ 0xfffff; 113 - immap->im_pci.pci_pocmr1 = cpu_to_le32(pocmr | POCMR_ENABLE); 114 - 115 - /* 116 - * Set up master window that allows the CPU to access PCI IO space. This window 117 - * is set up using the third set of Outbound ATU registers. 118 - */ 119 - immap->im_pci.pci_potar2 = cpu_to_le32(MPC826x_PCI_IO_BASE >> 12); 120 - immap->im_pci.pci_pobar2 = cpu_to_le32(MPC826x_PCI_LOWER_IO >> 12); 121 - pocmr = ((MPC826x_PCI_UPPER_IO - MPC826x_PCI_LOWER_IO) >> 12) ^ 0xfffff; 122 - immap->im_pci.pci_pocmr2 = cpu_to_le32(pocmr | POCMR_ENABLE | POCMR_PCI_IO); 123 - 124 - /* 125 - * Set up slave window that allows PCI masters to access MPC826x local memory. 126 - * This window is set up using the first set of Inbound ATU registers 127 - */ 128 - 129 - immap->im_pci.pci_pitar0 = cpu_to_le32(MPC826x_PCI_SLAVE_MEM_LOCAL >> 12); 130 - immap->im_pci.pci_pibar0 = cpu_to_le32(MPC826x_PCI_SLAVE_MEM_BUS >> 12); 131 - pocmr = ((MPC826x_PCI_SLAVE_MEM_SIZE-1) >> 12) ^ 0xfffff; 132 - immap->im_pci.pci_picmr0 = cpu_to_le32(pocmr | PICMR_ENABLE | PICMR_PREFETCH_EN); 133 - 134 - /* See above for description - puts PCI request as highest priority */ 135 - immap->im_siu_conf.siu_82xx.sc_ppc_alrh = 0x03124567; 136 - 137 - /* Park the bus on the PCI */ 138 - immap->im_siu_conf.siu_82xx.sc_ppc_acr = PPC_ACR_BUS_PARK_PCI; 139 - 140 - /* Host mode - specify the bridge as a host-PCI bridge */ 141 - early_write_config_word(hose, 0, 0, PCI_CLASS_DEVICE, PCI_CLASS_BRIDGE_HOST); 142 - 143 - /* Enable the host bridge to be a master on the PCI bus, and to act as a PCI memory target */ 144 - early_read_config_word(hose, 0, 0, PCI_COMMAND, &tempShort); 145 - early_write_config_word(hose, 0, 0, PCI_COMMAND, 146 - tempShort | PCI_COMMAND_MASTER | 
PCI_COMMAND_MEMORY); 147 - } 148 - 149 - void __init m8260_find_bridges(void) 150 - { 151 - extern int pci_assign_all_busses; 152 - struct pci_controller * hose; 153 - 154 - pci_assign_all_busses = 1; 155 - 156 - hose = pcibios_alloc_controller(); 157 - 158 - if (!hose) 159 - return; 160 - 161 - ppc_md.pci_swizzle = common_swizzle; 162 - 163 - hose->first_busno = 0; 164 - hose->bus_offset = 0; 165 - hose->last_busno = 0xff; 166 - 167 - setup_m8260_indirect_pci(hose, 168 - (unsigned long)&cpm2_immr->im_pci.pci_cfg_addr, 169 - (unsigned long)&cpm2_immr->im_pci.pci_cfg_data); 170 - 171 - m8260_setup_pci(hose); 172 - hose->pci_mem_offset = MPC826x_PCI_MEM_OFFSET; 173 - 174 - hose->io_base_virt = ioremap(MPC826x_PCI_IO_BASE, 175 - MPC826x_PCI_IO_SIZE); 176 - isa_io_base = (unsigned long) hose->io_base_virt; 177 - 178 - /* setup resources */ 179 - pci_init_resource(&hose->mem_resources[0], 180 - MPC826x_PCI_LOWER_MEM, 181 - MPC826x_PCI_UPPER_MEM, 182 - IORESOURCE_MEM|IORESOURCE_PREFETCH, "PCI prefetchable memory"); 183 - 184 - pci_init_resource(&hose->mem_resources[1], 185 - MPC826x_PCI_LOWER_MMIO, 186 - MPC826x_PCI_UPPER_MMIO, 187 - IORESOURCE_MEM, "PCI memory"); 188 - 189 - pci_init_resource(&hose->io_resource, 190 - MPC826x_PCI_LOWER_IO, 191 - MPC826x_PCI_UPPER_IO, 192 - IORESOURCE_IO, "PCI I/O"); 193 - }
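The deleted m8260_pci.c computed outbound ATU masks as `((upper - lower) >> 12) ^ 0xfffff`, while the replacement m82xx code writes `~(size - 1U) >> 12` into the same field. For a size-aligned, power-of-two window the two forms agree; a sketch showing the equivalence (helper names are ours):

```c
#include <assert.h>
#include <stdint.h>

/* Old m8260_pci.c form: XOR trick over the 20-bit CM field. */
static uint32_t pocmr_old(uint32_t lower, uint32_t upper)
{
	return ((upper - lower) >> 12) ^ 0xfffffu;
}

/* New m82xx_pci.c form: complement of (size - 1), kept to the same field. */
static uint32_t pocmr_new(uint32_t size)
{
	return (~(size - 1U) >> 12) & 0xfffffu;
}
```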
-76
arch/ppc/syslib/m8260_pci.h
···
- 
- #ifndef _PPC_KERNEL_M8260_PCI_H
- #define _PPC_KERNEL_M8260_PCI_H
- 
- #include <asm/m8260_pci.h>
- 
- /*
-  * Local->PCI map (from CPU) controlled by
-  * MPC826x master window
-  *
-  * 0x80000000 - 0xBFFFFFFF	Total CPU2PCI space	PCIBR0
-  *
-  * 0x80000000 - 0x9FFFFFFF	PCI Mem with prefetch	(Outbound ATU #1)
-  * 0xA0000000 - 0xAFFFFFFF	PCI Mem w/o prefetch	(Outbound ATU #2)
-  * 0xB0000000 - 0xB0FFFFFF	32-bit PCI IO		(Outbound ATU #3)
-  *
-  * PCI->Local map (from PCI)
-  * MPC826x slave window controlled by
-  *
-  * 0x00000000 - 0x07FFFFFF	MPC826x local memory	(Inbound ATU #1)
-  */
- 
- /*
-  * Slave window that allows PCI masters to access MPC826x local memory.
-  * This window is set up using the first set of Inbound ATU registers
-  */
- 
- #ifndef MPC826x_PCI_SLAVE_MEM_LOCAL
- #define MPC826x_PCI_SLAVE_MEM_LOCAL	(((struct bd_info *)__res)->bi_memstart)
- #define MPC826x_PCI_SLAVE_MEM_BUS	(((struct bd_info *)__res)->bi_memstart)
- #define MPC826x_PCI_SLAVE_MEM_SIZE	(((struct bd_info *)__res)->bi_memsize)
- #endif
- 
- /*
-  * This is the window that allows the CPU to access PCI address space.
-  * It will be setup with the SIU PCIBR0 register. All three PCI master
-  * windows, which allow the CPU to access PCI prefetch, non prefetch,
-  * and IO space (see below), must all fit within this window.
-  */
- #ifndef MPC826x_PCI_BASE
- #define MPC826x_PCI_BASE	0x80000000
- #define MPC826x_PCI_MASK	0xc0000000
- #endif
- 
- #ifndef MPC826x_PCI_LOWER_MEM
- #define MPC826x_PCI_LOWER_MEM	0x80000000
- #define MPC826x_PCI_UPPER_MEM	0x9fffffff
- #define MPC826x_PCI_MEM_OFFSET	0x00000000
- #endif
- 
- #ifndef MPC826x_PCI_LOWER_MMIO
- #define MPC826x_PCI_LOWER_MMIO	0xa0000000
- #define MPC826x_PCI_UPPER_MMIO	0xafffffff
- #define MPC826x_PCI_MMIO_OFFSET	0x00000000
- #endif
- 
- #ifndef MPC826x_PCI_LOWER_IO
- #define MPC826x_PCI_LOWER_IO	0x00000000
- #define MPC826x_PCI_UPPER_IO	0x00ffffff
- #define MPC826x_PCI_IO_BASE	0xb0000000
- #define MPC826x_PCI_IO_SIZE	0x01000000
- #endif
- 
- #ifndef _IO_BASE
- #define _IO_BASE isa_io_base
- #endif
- 
- #ifdef CONFIG_8260_PCI9
- struct pci_controller;
- extern void setup_m8260_indirect_pci(struct pci_controller* hose,
- 				     u32 cfg_addr, u32 cfg_data);
- #else
- #define setup_m8260_indirect_pci setup_indirect_pci
- #endif
- 
- #endif /* _PPC_KERNEL_M8260_PCI_H */
+5 -5
arch/ppc/syslib/m8260_pci_erratum9.c
···
#include <asm/immap_cpm2.h>
#include <asm/cpm2.h>

- #include "m8260_pci.h"
+ #include "m82xx_pci.h"

#ifdef CONFIG_8260_PCI9
/*#include <asm/mpc8260_pci9.h>*/ /* included in asm/io.h */
···

static inline int is_pci_mem(unsigned long addr)
{
-	if (addr >= MPC826x_PCI_LOWER_MMIO &&
-		addr <= MPC826x_PCI_UPPER_MMIO)
+	if (addr >= M82xx_PCI_LOWER_MMIO &&
+		addr <= M82xx_PCI_UPPER_MMIO)
		return 1;
-	if (addr >= MPC826x_PCI_LOWER_MEM &&
-		addr <= MPC826x_PCI_UPPER_MEM)
+	if (addr >= M82xx_PCI_LOWER_MEM &&
+		addr <= M82xx_PCI_UPPER_MEM)
		return 1;
	return 0;
}
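The erratum9 hunk only renames the window macros; `is_pci_mem()` itself is two inclusive range tests against the CPU-visible PCI memory windows. A self-contained version with the M82xx bounds hard-coded (the real code takes them from m82xx_pci.h):

```c
#include <assert.h>

/* Range checks as in is_pci_mem(); window bounds written out here
 * instead of being pulled in via the header. */
static int is_pci_mem(unsigned long addr)
{
	if (addr >= 0xa0000000UL && addr <= 0xafffffffUL)	/* MMIO window */
		return 1;
	if (addr >= 0x80000000UL && addr <= 0x9fffffffUL)	/* prefetch window */
		return 1;
	return 0;
}
```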
+9 -2
arch/ppc/syslib/m8260_setup.c
···
unsigned char __res[sizeof(bd_t)];

extern void cpm2_reset(void);
- extern void m8260_find_bridges(void);
+ extern void pq2_find_bridges(void);
+ extern void pq2pci_init_irq(void);
extern void idma_pci9_init(void);

/* Place-holder for board-specific init */
···
	idma_pci9_init();
#endif
#ifdef CONFIG_PCI_8260
-	m8260_find_bridges();
+	pq2_find_bridges();
#endif
#ifdef CONFIG_BLK_DEV_INITRD
	if (initrd_start)
···
	 * in case the boot rom changed something on us.
	 */
	cpm2_immr->im_intctl.ic_siprr = 0x05309770;
+
+ #if defined(CONFIG_PCI) && (defined(CONFIG_ADS8272) || defined(CONFIG_PQ2FADS))
+	/* Initialize stuff for the 82xx CPLD IC and install demux  */
+	pq2pci_init_irq();
+ #endif
+
}

/*
+383
arch/ppc/syslib/m82xx_pci.c
··· 1 + /* 2 + * 3 + * (C) Copyright 2003 4 + * Wolfgang Denk, DENX Software Engineering, wd@denx.de. 5 + * 6 + * (C) Copyright 2004 Red Hat, Inc. 7 + * 8 + * 2005 (c) MontaVista Software, Inc. 9 + * Vitaly Bordug <vbordug@ru.mvista.com> 10 + * 11 + * See file CREDITS for list of people who contributed to this 12 + * project. 13 + * 14 + * This program is free software; you can redistribute it and/or 15 + * modify it under the terms of the GNU General Public License as 16 + * published by the Free Software Foundation; either version 2 of 17 + * the License, or (at your option) any later version. 18 + * 19 + * This program is distributed in the hope that it will be useful, 20 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 21 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 22 + * GNU General Public License for more details. 23 + * 24 + * You should have received a copy of the GNU General Public License 25 + * along with this program; if not, write to the Free Software 26 + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, 27 + * MA 02111-1307 USA 28 + */ 29 + 30 + #include <linux/kernel.h> 31 + #include <linux/init.h> 32 + #include <linux/pci.h> 33 + #include <linux/slab.h> 34 + #include <linux/delay.h> 35 + #include <linux/irq.h> 36 + #include <linux/interrupt.h> 37 + 38 + #include <asm/byteorder.h> 39 + #include <asm/io.h> 40 + #include <asm/irq.h> 41 + #include <asm/uaccess.h> 42 + #include <asm/machdep.h> 43 + #include <asm/pci-bridge.h> 44 + #include <asm/immap_cpm2.h> 45 + #include <asm/mpc8260.h> 46 + #include <asm/cpm2.h> 47 + 48 + #include "m82xx_pci.h" 49 + 50 + /* 51 + * Interrupt routing 52 + */ 53 + 54 + static inline int 55 + pq2pci_map_irq(struct pci_dev *dev, unsigned char idsel, unsigned char pin) 56 + { 57 + static char pci_irq_table[][4] = 58 + /* 59 + * PCI IDSEL/INTPIN->INTLINE 60 + * A B C D 61 + */ 62 + { 63 + { PIRQA, PIRQB, PIRQC, PIRQD }, /* IDSEL 22 - PCI slot 0 */ 64 + { PIRQD, PIRQA, PIRQB, PIRQC 
}, /* IDSEL 23 - PCI slot 1 */ 65 + { PIRQC, PIRQD, PIRQA, PIRQB }, /* IDSEL 24 - PCI slot 2 */ 66 + }; 67 + 68 + const long min_idsel = 22, max_idsel = 24, irqs_per_slot = 4; 69 + return PCI_IRQ_TABLE_LOOKUP; 70 + } 71 + 72 + static void 73 + pq2pci_mask_irq(unsigned int irq) 74 + { 75 + int bit = irq - NR_CPM_INTS; 76 + 77 + *(volatile unsigned long *) PCI_INT_MASK_REG |= (1 << (31 - bit)); 78 + return; 79 + } 80 + 81 + static void 82 + pq2pci_unmask_irq(unsigned int irq) 83 + { 84 + int bit = irq - NR_CPM_INTS; 85 + 86 + *(volatile unsigned long *) PCI_INT_MASK_REG &= ~(1 << (31 - bit)); 87 + return; 88 + } 89 + 90 + static void 91 + pq2pci_mask_and_ack(unsigned int irq) 92 + { 93 + int bit = irq - NR_CPM_INTS; 94 + 95 + *(volatile unsigned long *) PCI_INT_MASK_REG |= (1 << (31 - bit)); 96 + return; 97 + } 98 + 99 + static void 100 + pq2pci_end_irq(unsigned int irq) 101 + { 102 + int bit = irq - NR_CPM_INTS; 103 + 104 + *(volatile unsigned long *) PCI_INT_MASK_REG &= ~(1 << (31 - bit)); 105 + return; 106 + } 107 + 108 + struct hw_interrupt_type pq2pci_ic = { 109 + "PQ2 PCI", 110 + NULL, 111 + NULL, 112 + pq2pci_unmask_irq, 113 + pq2pci_mask_irq, 114 + pq2pci_mask_and_ack, 115 + pq2pci_end_irq, 116 + 0 117 + }; 118 + 119 + static irqreturn_t 120 + pq2pci_irq_demux(int irq, void *dev_id, struct pt_regs *regs) 121 + { 122 + unsigned long stat, mask, pend; 123 + int bit; 124 + 125 + for(;;) { 126 + stat = *(volatile unsigned long *) PCI_INT_STAT_REG; 127 + mask = *(volatile unsigned long *) PCI_INT_MASK_REG; 128 + pend = stat & ~mask & 0xf0000000; 129 + if (!pend) 130 + break; 131 + for (bit = 0; pend != 0; ++bit, pend <<= 1) { 132 + if (pend & 0x80000000) 133 + __do_IRQ(NR_CPM_INTS + bit, regs); 134 + } 135 + } 136 + 137 + return IRQ_HANDLED; 138 + } 139 + 140 + static struct irqaction pq2pci_irqaction = { 141 + .handler = pq2pci_irq_demux, 142 + .flags = SA_INTERRUPT, 143 + .mask = CPU_MASK_NONE, 144 + .name = "PQ2 PCI cascade", 145 + }; 146 + 147 + 148 + void 149 
+ pq2pci_init_irq(void) 150 + { 151 + int irq; 152 + volatile cpm2_map_t *immap = cpm2_immr; 153 + #if defined CONFIG_ADS8272 154 + /* configure chip select for PCI interrupt controller */ 155 + immap->im_memctl.memc_br3 = PCI_INT_STAT_REG | 0x00001801; 156 + immap->im_memctl.memc_or3 = 0xffff8010; 157 + #elif defined CONFIG_PQ2FADS 158 + immap->im_memctl.memc_br8 = PCI_INT_STAT_REG | 0x00001801; 159 + immap->im_memctl.memc_or8 = 0xffff8010; 160 + #endif 161 + for (irq = NR_CPM_INTS; irq < NR_CPM_INTS + 4; irq++) 162 + irq_desc[irq].handler = &pq2pci_ic; 163 + 164 + /* make PCI IRQ level sensitive */ 165 + immap->im_intctl.ic_siexr &= 166 + ~(1 << (14 - (PCI_INT_TO_SIU - SIU_INT_IRQ1))); 167 + 168 + /* mask all PCI interrupts */ 169 + *(volatile unsigned long *) PCI_INT_MASK_REG |= 0xfff00000; 170 + 171 + /* install the demultiplexer for the PCI cascade interrupt */ 172 + setup_irq(PCI_INT_TO_SIU, &pq2pci_irqaction); 173 + return; 174 + } 175 + 176 + static int 177 + pq2pci_exclude_device(u_char bus, u_char devfn) 178 + { 179 + return PCIBIOS_SUCCESSFUL; 180 + } 181 + 182 + /* PCI bus configuration registers. 
183 + */ 184 + static void 185 + pq2ads_setup_pci(struct pci_controller *hose) 186 + { 187 + __u32 val; 188 + volatile cpm2_map_t *immap = cpm2_immr; 189 + bd_t* binfo = (bd_t*) __res; 190 + u32 sccr = immap->im_clkrst.car_sccr; 191 + uint pci_div,freq,time; 192 + /* PCI int lowest prio */ 193 + /* Each 4 bits is a device bus request and the MS 4bits 194 + is highest priority */ 195 + /* Bus 4bit value 196 + --- ---------- 197 + CPM high 0b0000 198 + CPM middle 0b0001 199 + CPM low 0b0010 200 + PCI reguest 0b0011 201 + Reserved 0b0100 202 + Reserved 0b0101 203 + Internal Core 0b0110 204 + External Master 1 0b0111 205 + External Master 2 0b1000 206 + External Master 3 0b1001 207 + The rest are reserved 208 + */ 209 + immap->im_siu_conf.siu_82xx.sc_ppc_alrh = 0x61207893; 210 + /* park bus on core */ 211 + immap->im_siu_conf.siu_82xx.sc_ppc_acr = PPC_ACR_BUS_PARK_CORE; 212 + /* 213 + * Set up master windows that allow the CPU to access PCI space. These 214 + * windows are set up using the two SIU PCIBR registers. 215 + */ 216 + 217 + immap->im_memctl.memc_pcimsk0 = M82xx_PCI_PRIM_WND_SIZE; 218 + immap->im_memctl.memc_pcibr0 = M82xx_PCI_PRIM_WND_BASE | PCIBR_ENABLE; 219 + 220 + #ifdef M82xx_PCI_SEC_WND_SIZE 221 + immap->im_memctl.memc_pcimsk1 = M82xx_PCI_SEC_WND_SIZE; 222 + immap->im_memctl.memc_pcibr1 = M82xx_PCI_SEC_WND_BASE | PCIBR_ENABLE; 223 + #endif 224 + 225 + #if defined CONFIG_ADS8272 226 + immap->im_siu_conf.siu_82xx.sc_siumcr = 227 + (immap->im_siu_conf.siu_82xx.sc_siumcr & 228 + ~(SIUMCR_BBD | SIUMCR_ESE | SIUMCR_PBSE | 229 + SIUMCR_CDIS | SIUMCR_DPPC11 | SIUMCR_L2CPC11 | 230 + SIUMCR_LBPC11 | SIUMCR_APPC11 | 231 + SIUMCR_CS10PC11 | SIUMCR_BCTLC11 | SIUMCR_MMR11)) | 232 + SIUMCR_DPPC11 | SIUMCR_L2CPC01 | SIUMCR_LBPC00 | 233 + SIUMCR_APPC10 | SIUMCR_CS10PC00 | 234 + SIUMCR_BCTLC00 | SIUMCR_MMR11 ; 235 + 236 + #elif defined CONFIG_PQ2FADS 237 + /* 238 + * Setting required to enable IRQ1-IRQ7 (SIUMCR [DPPC]), 239 + * and local bus for PCI (SIUMCR [LBPC]). 
240 + */ 241 + immap->im_siu_conf.siu_82xx.sc_siumcr = (immap->im_siu_conf.sc_siumcr & 242 + ~(SIUMCR_L2PC11 | SIUMCR_LBPC11 | SIUMCR_CS10PC11 | SIUMCR_APPC11) | 243 + SIUMCR_BBD | SIUMCR_LBPC01 | SIUMCR_DPPC11 | SIUMCR_APPC10; 244 + #endif 245 + /* Enable PCI */ 246 + immap->im_pci.pci_gcr = cpu_to_le32(PCIGCR_PCI_BUS_EN); 247 + 248 + pci_div = ( (sccr & SCCR_PCI_MODCK) ? 2 : 1) * 249 + ( ( (sccr & SCCR_PCIDF_MSK) >> SCCR_PCIDF_SHIFT) + 1); 250 + freq = (uint)((2*binfo->bi_cpmfreq)/(pci_div)); 251 + time = (int)666666/freq; 252 + /* due to PCI Local Bus spec, some devices needs to wait such a long 253 + time after RST deassertion. More specifically, 0.508s for 66MHz & twice more for 33 */ 254 + printk("%s: The PCI bus is %d Mhz.\nWaiting %s after deasserting RST...\n",__FILE__,freq, 255 + (time==1) ? "0.5 seconds":"1 second" ); 256 + 257 + { 258 + int i; 259 + for(i=0;i<(500*time);i++) 260 + udelay(1000); 261 + } 262 + 263 + /* setup ATU registers */ 264 + immap->im_pci.pci_pocmr0 = cpu_to_le32(POCMR_ENABLE | POCMR_PCI_IO | 265 + ((~(M82xx_PCI_IO_SIZE - 1U)) >> POTA_ADDR_SHIFT)); 266 + immap->im_pci.pci_potar0 = cpu_to_le32(M82xx_PCI_LOWER_IO >> POTA_ADDR_SHIFT); 267 + immap->im_pci.pci_pobar0 = cpu_to_le32(M82xx_PCI_IO_BASE >> POTA_ADDR_SHIFT); 268 + 269 + /* Set-up non-prefetchable window */ 270 + immap->im_pci.pci_pocmr1 = cpu_to_le32(POCMR_ENABLE | ((~(M82xx_PCI_MMIO_SIZE-1U)) >> POTA_ADDR_SHIFT)); 271 + immap->im_pci.pci_potar1 = cpu_to_le32(M82xx_PCI_LOWER_MMIO >> POTA_ADDR_SHIFT); 272 + immap->im_pci.pci_pobar1 = cpu_to_le32((M82xx_PCI_LOWER_MMIO - M82xx_PCI_MMIO_OFFSET) >> POTA_ADDR_SHIFT); 273 + 274 + /* Set-up prefetchable window */ 275 + immap->im_pci.pci_pocmr2 = cpu_to_le32(POCMR_ENABLE |POCMR_PREFETCH_EN | 276 + (~(M82xx_PCI_MEM_SIZE-1U) >> POTA_ADDR_SHIFT)); 277 + immap->im_pci.pci_potar2 = cpu_to_le32(M82xx_PCI_LOWER_MEM >> POTA_ADDR_SHIFT); 278 + immap->im_pci.pci_pobar2 = cpu_to_le32((M82xx_PCI_LOWER_MEM - M82xx_PCI_MEM_OFFSET) >> 
POTA_ADDR_SHIFT); 279 + 280 + /* Inbound transactions from PCI memory space */ 281 + immap->im_pci.pci_picmr0 = cpu_to_le32(PICMR_ENABLE | PICMR_PREFETCH_EN | 282 + ((~(M82xx_PCI_SLAVE_MEM_SIZE-1U)) >> PITA_ADDR_SHIFT)); 283 + immap->im_pci.pci_pibar0 = cpu_to_le32(M82xx_PCI_SLAVE_MEM_BUS >> PITA_ADDR_SHIFT); 284 + immap->im_pci.pci_pitar0 = cpu_to_le32(M82xx_PCI_SLAVE_MEM_LOCAL>> PITA_ADDR_SHIFT); 285 + 286 + #if defined CONFIG_ADS8272 287 + /* PCI int highest prio */ 288 + immap->im_siu_conf.siu_82xx.sc_ppc_alrh = 0x01236745; 289 + #elif defined CONFIG_PQ2FADS 290 + immap->im_siu_conf.siu_82xx.sc_ppc_alrh = 0x03124567; 291 + #endif 292 + /* park bus on PCI */ 293 + immap->im_siu_conf.siu_82xx.sc_ppc_acr = PPC_ACR_BUS_PARK_PCI; 294 + 295 + /* Enable bus mastering and inbound memory transactions */ 296 + early_read_config_dword(hose, hose->first_busno, 0, PCI_COMMAND, &val); 297 + val &= 0xffff0000; 298 + val |= PCI_COMMAND_MEMORY|PCI_COMMAND_MASTER; 299 + early_write_config_dword(hose, hose->first_busno, 0, PCI_COMMAND, val); 300 + 301 + } 302 + 303 + void __init pq2_find_bridges(void) 304 + { 305 + extern int pci_assign_all_busses; 306 + struct pci_controller * hose; 307 + int host_bridge; 308 + 309 + pci_assign_all_busses = 1; 310 + 311 + hose = pcibios_alloc_controller(); 312 + 313 + if (!hose) 314 + return; 315 + 316 + ppc_md.pci_swizzle = common_swizzle; 317 + 318 + hose->first_busno = 0; 319 + hose->bus_offset = 0; 320 + hose->last_busno = 0xff; 321 + 322 + #ifdef CONFIG_ADS8272 323 + hose->set_cfg_type = 1; 324 + #endif 325 + 326 + setup_m8260_indirect_pci(hose, 327 + (unsigned long)&cpm2_immr->im_pci.pci_cfg_addr, 328 + (unsigned long)&cpm2_immr->im_pci.pci_cfg_data); 329 + 330 + /* Make sure it is a supported bridge */ 331 + early_read_config_dword(hose, 332 + 0, 333 + PCI_DEVFN(0,0), 334 + PCI_VENDOR_ID, 335 + &host_bridge); 336 + switch (host_bridge) { 337 + case PCI_DEVICE_ID_MPC8265: 338 + break; 339 + case PCI_DEVICE_ID_MPC8272: 340 + break; 341 + 
default: 342 + printk("Attempting to use unrecognized host bridge ID" 343 + " 0x%08x.\n", host_bridge); 344 + break; 345 + } 346 + 347 + pq2ads_setup_pci(hose); 348 + 349 + hose->io_space.start = M82xx_PCI_LOWER_IO; 350 + hose->io_space.end = M82xx_PCI_UPPER_IO; 351 + hose->mem_space.start = M82xx_PCI_LOWER_MEM; 352 + hose->mem_space.end = M82xx_PCI_UPPER_MMIO; 353 + hose->pci_mem_offset = M82xx_PCI_MEM_OFFSET; 354 + 355 + isa_io_base = 356 + (unsigned long) ioremap(M82xx_PCI_IO_BASE, 357 + M82xx_PCI_IO_SIZE); 358 + hose->io_base_virt = (void *) isa_io_base; 359 + 360 + /* setup resources */ 361 + pci_init_resource(&hose->mem_resources[0], 362 + M82xx_PCI_LOWER_MEM, 363 + M82xx_PCI_UPPER_MEM, 364 + IORESOURCE_MEM|IORESOURCE_PREFETCH, "PCI prefetchable memory"); 365 + 366 + pci_init_resource(&hose->mem_resources[1], 367 + M82xx_PCI_LOWER_MMIO, 368 + M82xx_PCI_UPPER_MMIO, 369 + IORESOURCE_MEM, "PCI memory"); 370 + 371 + pci_init_resource(&hose->io_resource, 372 + M82xx_PCI_LOWER_IO, 373 + M82xx_PCI_UPPER_IO, 374 + IORESOURCE_IO | 1, "PCI I/O"); 375 + 376 + ppc_md.pci_exclude_device = pq2pci_exclude_device; 377 + hose->last_busno = pciauto_bus_scan(hose, hose->first_busno); 378 + 379 + ppc_md.pci_map_irq = pq2pci_map_irq; 380 + ppc_md.pcibios_fixup = NULL; 381 + ppc_md.pcibios_fixup_bus = NULL; 382 + 383 + }
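`pq2pci_irq_demux()` in the new m82xx_pci.c scans the four high bits of `(status & ~mask)` MSB-first and dispatches `NR_CPM_INTS + bit` for each pending, unmasked source. A one-pass sketch of that inner scan (the `NR_CPM_INTS` value here is an assumption; the real handler also re-reads the registers in a loop until nothing is pending):

```c
#include <assert.h>
#include <stdint.h>

#define NR_CPM_INTS 64	/* assumption: the real value comes from the board headers */

/* One pass of the demux scan: walk the four high status bits MSB-first,
 * recording the demuxed irq number for each pending, unmasked source.
 * Returns how many were recorded. */
static int demux_scan(uint32_t stat, uint32_t mask, int *irqs)
{
	uint32_t pend = stat & ~mask & 0xf0000000u;
	int bit, n = 0;

	for (bit = 0; pend != 0; ++bit, pend <<= 1)
		if (pend & 0x80000000u)
			irqs[n++] = NR_CPM_INTS + bit;
	return n;
}
```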
+92
arch/ppc/syslib/m82xx_pci.h
··· 1 + 2 + #ifndef _PPC_KERNEL_M82XX_PCI_H 3 + #define _PPC_KERNEL_M82XX_PCI_H 4 + 5 + #include <asm/m8260_pci.h> 6 + /* 7 + * Local->PCI map (from CPU) controlled by 8 + * MPC826x master window 9 + * 10 + * 0xF6000000 - 0xF7FFFFFF IO space 11 + * 0x80000000 - 0xBFFFFFFF CPU2PCI memory space PCIBR0 12 + * 13 + * 0x80000000 - 0x9FFFFFFF PCI Mem with prefetch (Outbound ATU #1) 14 + * 0xA0000000 - 0xBFFFFFFF PCI Mem w/o prefetch (Outbound ATU #2) 15 + * 0xF6000000 - 0xF7FFFFFF 32-bit PCI IO (Outbound ATU #3) 16 + * 17 + * PCI->Local map (from PCI) 18 + * MPC826x slave window controlled by 19 + * 20 + * 0x00000000 - 0x07FFFFFF MPC826x local memory (Inbound ATU #1) 21 + */ 22 + 23 + /* 24 + * Slave window that allows PCI masters to access MPC826x local memory. 25 + * This window is set up using the first set of Inbound ATU registers 26 + */ 27 + 28 + #ifndef M82xx_PCI_SLAVE_MEM_LOCAL 29 + #define M82xx_PCI_SLAVE_MEM_LOCAL (((struct bd_info *)__res)->bi_memstart) 30 + #define M82xx_PCI_SLAVE_MEM_BUS (((struct bd_info *)__res)->bi_memstart) 31 + #define M82xx_PCI_SLAVE_MEM_SIZE (((struct bd_info *)__res)->bi_memsize) 32 + #endif 33 + 34 + /* 35 + * This is the window that allows the CPU to access PCI address space. 36 + * It will be setup with the SIU PCIBR0 register. All three PCI master 37 + * windows, which allow the CPU to access PCI prefetch, non prefetch, 38 + * and IO space (see below), must all fit within this window. 
39 + */ 40 + 41 + #ifndef M82xx_PCI_LOWER_MEM 42 + #define M82xx_PCI_LOWER_MEM 0x80000000 43 + #define M82xx_PCI_UPPER_MEM 0x9fffffff 44 + #define M82xx_PCI_MEM_OFFSET 0x00000000 45 + #define M82xx_PCI_MEM_SIZE 0x20000000 46 + #endif 47 + 48 + #ifndef M82xx_PCI_LOWER_MMIO 49 + #define M82xx_PCI_LOWER_MMIO 0xa0000000 50 + #define M82xx_PCI_UPPER_MMIO 0xafffffff 51 + #define M82xx_PCI_MMIO_OFFSET 0x00000000 52 + #define M82xx_PCI_MMIO_SIZE 0x20000000 53 + #endif 54 + 55 + #ifndef M82xx_PCI_LOWER_IO 56 + #define M82xx_PCI_LOWER_IO 0x00000000 57 + #define M82xx_PCI_UPPER_IO 0x01ffffff 58 + #define M82xx_PCI_IO_BASE 0xf6000000 59 + #define M82xx_PCI_IO_SIZE 0x02000000 60 + #endif 61 + 62 + #ifndef M82xx_PCI_PRIM_WND_SIZE 63 + #define M82xx_PCI_PRIM_WND_SIZE ~(M82xx_PCI_IO_SIZE - 1U) 64 + #define M82xx_PCI_PRIM_WND_BASE (M82xx_PCI_IO_BASE) 65 + #endif 66 + 67 + #ifndef M82xx_PCI_SEC_WND_SIZE 68 + #define M82xx_PCI_SEC_WND_SIZE ~(M82xx_PCI_MEM_SIZE + M82xx_PCI_MMIO_SIZE - 1U) 69 + #define M82xx_PCI_SEC_WND_BASE (M82xx_PCI_LOWER_MEM) 70 + #endif 71 + 72 + #ifndef POTA_ADDR_SHIFT 73 + #define POTA_ADDR_SHIFT 12 74 + #endif 75 + 76 + #ifndef PITA_ADDR_SHIFT 77 + #define PITA_ADDR_SHIFT 12 78 + #endif 79 + 80 + #ifndef _IO_BASE 81 + #define _IO_BASE isa_io_base 82 + #endif 83 + 84 + #ifdef CONFIG_8260_PCI9 85 + struct pci_controller; 86 + extern void setup_m8260_indirect_pci(struct pci_controller* hose, 87 + u32 cfg_addr, u32 cfg_data); 88 + #else 89 + #define setup_m8260_indirect_pci setup_indirect_pci 90 + #endif 91 + 92 + #endif /* _PPC_KERNEL_M8260_PCI_H */
+1 -3
arch/ppc/syslib/open_pic.c
···
}
#endif

- #if defined(CONFIG_EPIC_SERIAL_MODE) || defined(CONFIG_PM)
+ #if defined(CONFIG_EPIC_SERIAL_MODE)
static void openpic_reset(void)
{
	openpic_setfield(&OpenPIC->Global.Global_Configuration0,
···
		spin_unlock_irqrestore(&openpic_setup_lock, flags);
		return 0;
	}
-
-	openpic_reset();

	/* OpenPIC sometimes seem to need some time to be fully back up... */
	do {
+28
arch/ppc/syslib/ppc83xx_setup.c
···
#include <asm/mmu.h>
#include <asm/ppc_sys.h>
#include <asm/kgdb.h>
+ #include <asm/delay.h>

#include <syslib/ppc83xx_setup.h>

···
void
mpc83xx_restart(char *cmd)
{
+	volatile unsigned char __iomem *reg;
+	unsigned char tmp;
+
+	reg = ioremap(BCSR_PHYS_ADDR, BCSR_SIZE);
+
	local_irq_disable();
+
+	/*
+	 * Unlock the BCSR bits so a PRST will update the contents.
+	 * Otherwise the reset asserts but doesn't clear.
+	 */
+	tmp = in_8(reg + BCSR_MISC_REG3_OFF);
+	tmp |= BCSR_MISC_REG3_CNFLOCK; /* low true, high false */
+	out_8(reg + BCSR_MISC_REG3_OFF, tmp);
+
+	/*
+	 * Trigger a reset via a low->high transition of the
+	 * PORESET bit.
+	 */
+	tmp = in_8(reg + BCSR_MISC_REG2_OFF);
+	tmp &= ~BCSR_MISC_REG2_PORESET;
+	out_8(reg + BCSR_MISC_REG2_OFF, tmp);
+
+	udelay(1);
+
+	tmp |= BCSR_MISC_REG2_PORESET;
+	out_8(reg + BCSR_MISC_REG2_OFF, tmp);
+
	for(;;);
}

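The new `mpc83xx_restart()` pulses PORESET with a clear-then-set read-modify-write of the BCSR byte. Modelled here on a plain byte rather than the memory-mapped register (the bit value is illustrative, not the board's):

```c
#include <assert.h>
#include <stdint.h>

#define BCSR_MISC_REG2_PORESET 0x01	/* illustrative bit value, not the board's */

/* The restart path: clear the bit, delay, then set it again; each
 * step is a read-modify-write of the register byte. */
static uint8_t reg_clear(uint8_t v, uint8_t bit) { return v & (uint8_t)~bit; }
static uint8_t reg_set(uint8_t v, uint8_t bit)   { return v | bit; }
```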
+16
arch/ppc/syslib/ppc85xx_setup.c
···
}

#ifdef CONFIG_PCI
+
+ #if defined(CONFIG_MPC8555_CDS)
+ extern void mpc85xx_cds_enable_via(struct pci_controller *hose);
+ extern void mpc85xx_cds_fixup_via(struct pci_controller *hose);
+ #endif
+
static void __init
mpc85xx_setup_pci1(struct pci_controller *hose)
{
···

	ppc_md.pci_exclude_device = mpc85xx_exclude_device;

+ #if defined(CONFIG_MPC8555_CDS)
+	/* Pre pciauto_bus_scan VIA init */
+	mpc85xx_cds_enable_via(hose_a);
+ #endif
+
	hose_a->last_busno = pciauto_bus_scan(hose_a, hose_a->first_busno);
+
+ #if defined(CONFIG_MPC8555_CDS)
+	/* Post pciauto_bus_scan VIA fixup */
+	mpc85xx_cds_fixup_via(hose_a);
+ #endif

#ifdef CONFIG_85xx_PCI2
	hose_b = pcibios_alloc_controller();
+10
arch/ppc/syslib/prom_init.c
··· 626 626 l = call_prom("package-to-path", 3, 1, node, 627 627 mem_start, mem_end - mem_start); 628 628 if (l >= 0) { 629 + char *p, *ep; 630 + 629 631 np->full_name = PTRUNRELOC((char *) mem_start); 630 632 *(char *)(mem_start + l) = 0; 633 + /* Fixup an Apple bug where they have bogus \0 chars in the 634 + * middle of the path in some properties 635 + */ 636 + for (p = (char *)mem_start, ep = p + l; p < ep; p++) 637 + if ((*p) == '\0') { 638 + memmove(p, p+1, ep - p); 639 + ep--; 640 + } 631 641 mem_start = ALIGNUL(mem_start + l + 1); 632 642 } 633 643
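The prom_init.c hunk above works around Apple firmware leaving bogus `\0` bytes in the middle of device-tree path strings, squeezing each one out with `memmove()`. The same loop, lifted into a standalone helper (the function name is ours, for illustration):

```c
#include <assert.h>
#include <string.h>

/* Same fixup the patch applies: remove embedded '\0' bytes from a
 * buffer of length l whose terminating NUL sits at buf[l].
 * Returns the new length. */
static int strip_embedded_nuls(char *buf, int l)
{
    char *p, *ep;

    for (p = buf, ep = buf + l; p < ep; p++)
        if (*p == '\0') {
            /* Shift the tail (including the terminator) down one. */
            memmove(p, p + 1, ep - p);
            ep--;
            l--;
        }
    return l;
}
```

Note the loop mirrors the patch exactly, including its quirk that `p++` steps past the byte shifted into position `p`, so back-to-back NULs would need a second pass.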
-8
arch/ppc64/kernel/pSeries_reconfig.c
··· 47 47 remove_proc_entry(pp->name, np->pde); 48 48 pp = pp->next; 49 49 } 50 - 51 - /* Assuming that symlinks have the same parent directory as 52 - * np->pde. 53 - */ 54 - if (np->name_link) 55 - remove_proc_entry(np->name_link->name, parent->pde); 56 - if (np->addr_link) 57 - remove_proc_entry(np->addr_link->name, parent->pde); 58 50 if (np->pde) 59 51 remove_proc_entry(np->pde->name, parent->pde); 60 52 }
+76 -42
arch/ppc64/kernel/prom_init.c
··· 211 211 */ 212 212 #define ADDR(x) (u32) ((unsigned long)(x) - offset) 213 213 214 + /* 215 + * Error results ... some OF calls will return "-1" on error, some 216 + * will return 0, some will return either. To simplify, here are 217 + * macros to use with any ihandle or phandle return value to check if 218 + * it is valid 219 + */ 220 + 221 + #define PROM_ERROR (-1u) 222 + #define PHANDLE_VALID(p) ((p) != 0 && (p) != PROM_ERROR) 223 + #define IHANDLE_VALID(i) ((i) != 0 && (i) != PROM_ERROR) 224 + 225 + 214 226 /* This is the one and *ONLY* place where we actually call open 215 227 * firmware from, since we need to make sure we're running in 32b 216 228 * mode when we do. We switch back to 64b mode upon return. 217 229 */ 218 - 219 - #define PROM_ERROR (-1) 220 230 221 231 static int __init call_prom(const char *service, int nargs, int nret, ...) 222 232 { ··· 597 587 { 598 588 unsigned long offset = reloc_offset(); 599 589 ihandle elfloader; 600 - int ret; 601 590 602 591 elfloader = call_prom("open", 1, 1, ADDR("/packages/elf-loader")); 603 592 if (elfloader == 0) { 604 593 prom_printf("couldn't open /packages/elf-loader\n"); 605 594 return; 606 595 } 607 - ret = call_prom("call-method", 3, 1, ADDR("process-elf-header"), 596 + call_prom("call-method", 3, 1, ADDR("process-elf-header"), 608 597 elfloader, ADDR(&fake_elf)); 609 598 call_prom("close", 1, 0, elfloader); 610 599 } ··· 655 646 base = _ALIGN_UP(base + 0x100000, align)) { 656 647 prom_debug(" trying: 0x%x\n\r", base); 657 648 addr = (unsigned long)prom_claim(base, size, 0); 658 - if ((int)addr != PROM_ERROR) 649 + if (addr != PROM_ERROR) 659 650 break; 660 651 addr = 0; 661 652 if (align == 0) ··· 717 708 for(; base > RELOC(alloc_bottom); base = _ALIGN_DOWN(base - 0x100000, align)) { 718 709 prom_debug(" trying: 0x%x\n\r", base); 719 710 addr = (unsigned long)prom_claim(base, size, 0); 720 - if ((int)addr != PROM_ERROR) 711 + if (addr != PROM_ERROR) 721 712 break; 722 713 addr = 0; 723 714 } ··· 911 
902 { 912 903 unsigned long offset = reloc_offset(); 913 904 struct prom_t *_prom = PTRRELOC(&prom); 914 - phandle prom_rtas, rtas_node; 905 + phandle rtas_node; 906 + ihandle rtas_inst; 915 907 u32 base, entry = 0; 916 908 u32 size = 0; 917 909 918 910 prom_debug("prom_instantiate_rtas: start...\n"); 919 911 920 - prom_rtas = call_prom("finddevice", 1, 1, ADDR("/rtas")); 921 - prom_debug("prom_rtas: %x\n", prom_rtas); 922 - if (prom_rtas == (phandle) -1) 912 + rtas_node = call_prom("finddevice", 1, 1, ADDR("/rtas")); 913 + prom_debug("rtas_node: %x\n", rtas_node); 914 + if (!PHANDLE_VALID(rtas_node)) 923 915 return; 924 916 925 - prom_getprop(prom_rtas, "rtas-size", &size, sizeof(size)); 917 + prom_getprop(rtas_node, "rtas-size", &size, sizeof(size)); 926 918 if (size == 0) 927 919 return; 928 920 ··· 932 922 prom_printf("RTAS allocation failed !\n"); 933 923 return; 934 924 } 935 - prom_printf("instantiating rtas at 0x%x", base); 936 925 937 - rtas_node = call_prom("open", 1, 1, ADDR("/rtas")); 938 - prom_printf("..."); 926 + rtas_inst = call_prom("open", 1, 1, ADDR("/rtas")); 927 + if (!IHANDLE_VALID(rtas_inst)) { 928 + prom_printf("opening rtas package failed"); 929 + return; 930 + } 931 + 932 + prom_printf("instantiating rtas at 0x%x ...", base); 939 933 940 934 if (call_prom("call-method", 3, 2, 941 935 ADDR("instantiate-rtas"), 942 - rtas_node, base) != PROM_ERROR) { 936 + rtas_inst, base) != PROM_ERROR) { 943 937 entry = (long)_prom->args.rets[1]; 944 938 } 945 939 if (entry == 0) { ··· 954 940 955 941 reserve_mem(base, size); 956 942 957 - prom_setprop(prom_rtas, "linux,rtas-base", &base, sizeof(base)); 958 - prom_setprop(prom_rtas, "linux,rtas-entry", &entry, sizeof(entry)); 943 + prom_setprop(rtas_node, "linux,rtas-base", &base, sizeof(base)); 944 + prom_setprop(rtas_node, "linux,rtas-entry", &entry, sizeof(entry)); 959 945 960 946 prom_debug("rtas base = 0x%x\n", base); 961 947 prom_debug("rtas entry = 0x%x\n", entry); ··· 1076 1062 1077 1063 
prom_printf("opening PHB %s", path); 1078 1064 phb_node = call_prom("open", 1, 1, path); 1079 - if ( (long)phb_node <= 0) 1065 + if (phb_node == 0) 1080 1066 prom_printf("... failed\n"); 1081 1067 else 1082 1068 prom_printf("... done\n"); ··· 1293 1279 1294 1280 /* get a handle for the stdout device */ 1295 1281 _prom->chosen = call_prom("finddevice", 1, 1, ADDR("/chosen")); 1296 - if ((long)_prom->chosen <= 0) 1282 + if (!PHANDLE_VALID(_prom->chosen)) 1297 1283 prom_panic("cannot find chosen"); /* msg won't be printed :( */ 1298 1284 1299 1285 /* get device tree root */ 1300 1286 _prom->root = call_prom("finddevice", 1, 1, ADDR("/")); 1301 - if ((long)_prom->root <= 0) 1287 + if (!PHANDLE_VALID(_prom->root)) 1302 1288 prom_panic("cannot find device tree root"); /* msg won't be printed :( */ 1303 1289 } 1304 1290 ··· 1370 1356 } 1371 1357 /* Default to pSeries. We need to know if we are running LPAR */ 1372 1358 rtas = call_prom("finddevice", 1, 1, ADDR("/rtas")); 1373 - if (rtas != (phandle) -1) { 1374 - unsigned long x; 1375 - x = prom_getproplen(rtas, "ibm,hypertas-functions"); 1359 + if (!PHANDLE_VALID(rtas)) { 1360 + int x = prom_getproplen(rtas, "ibm,hypertas-functions"); 1376 1361 if (x != PROM_ERROR) { 1377 1362 prom_printf("Hypertas detected, assuming LPAR !\n"); 1378 1363 return PLATFORM_PSERIES_LPAR; ··· 1439 1426 * leave some room at the end of the path for appending extra 1440 1427 * arguments 1441 1428 */ 1442 - if (call_prom("package-to-path", 3, 1, node, path, PROM_SCRATCH_SIZE-10) < 0) 1429 + if (call_prom("package-to-path", 3, 1, node, path, 1430 + PROM_SCRATCH_SIZE-10) == PROM_ERROR) 1443 1431 continue; 1444 1432 prom_printf("found display : %s, opening ... 
", path); 1445 1433 1446 1434 ih = call_prom("open", 1, 1, path); 1447 - if (ih == (ihandle)0 || ih == (ihandle)-1) { 1435 + if (ih == 0) { 1448 1436 prom_printf("failed\n"); 1449 1437 continue; 1450 1438 } ··· 1528 1514 return 0; 1529 1515 } 1530 1516 1517 + /* 1518 + * The Open Firmware 1275 specification states properties must be 31 bytes or 1519 + * less, however not all firmwares obey this. Make it 64 bytes to be safe. 1520 + */ 1521 + #define MAX_PROPERTY_NAME 64 1522 + 1531 1523 static void __init scan_dt_build_strings(phandle node, unsigned long *mem_start, 1532 1524 unsigned long *mem_end) 1533 1525 { ··· 1547 1527 /* get and store all property names */ 1548 1528 prev_name = RELOC(""); 1549 1529 for (;;) { 1550 - 1551 - /* 32 is max len of name including nul. */ 1552 - namep = make_room(mem_start, mem_end, 32, 1); 1553 - if (call_prom("nextprop", 3, 1, node, prev_name, namep) <= 0) { 1530 + int rc; 1531 + 1532 + /* 64 is max len of name including nul. */ 1533 + namep = make_room(mem_start, mem_end, MAX_PROPERTY_NAME, 1); 1534 + rc = call_prom("nextprop", 3, 1, node, prev_name, namep); 1535 + if (rc != 1) { 1554 1536 /* No more nodes: unwind alloc */ 1555 1537 *mem_start = (unsigned long)namep; 1556 1538 break; ··· 1577 1555 } 1578 1556 } 1579 1557 1580 - /* 1581 - * The Open Firmware 1275 specification states properties must be 31 bytes or 1582 - * less, however not all firmwares obey this. Make it 64 bytes to be safe. 
1583 - */ 1584 - #define MAX_PROPERTY_NAME 64 1585 - 1586 1558 static void __init scan_dt_build_struct(phandle node, unsigned long *mem_start, 1587 1559 unsigned long *mem_end) 1588 1560 { 1589 1561 int l, align; 1590 1562 phandle child; 1591 - char *namep, *prev_name, *sstart; 1563 + char *namep, *prev_name, *sstart, *p, *ep; 1592 1564 unsigned long soff; 1593 1565 unsigned char *valp; 1594 1566 unsigned long offset = reloc_offset(); ··· 1604 1588 call_prom("package-to-path", 3, 1, node, namep, l); 1605 1589 } 1606 1590 namep[l] = '\0'; 1591 + /* Fixup an Apple bug where they have bogus \0 chars in the 1592 + * middle of the path in some properties 1593 + */ 1594 + for (p = namep, ep = namep + l; p < ep; p++) 1595 + if (*p == '\0') { 1596 + memmove(p, p+1, ep - p); 1597 + ep--; l--; 1598 + } 1607 1599 *mem_start = _ALIGN(((unsigned long) namep) + strlen(namep) + 1, 4); 1608 1600 } 1609 1601 ··· 1623 1599 prev_name = RELOC(""); 1624 1600 sstart = (char *)RELOC(dt_string_start); 1625 1601 for (;;) { 1626 - if (call_prom("nextprop", 3, 1, node, prev_name, pname) <= 0) 1602 + int rc; 1603 + 1604 + rc = call_prom("nextprop", 3, 1, node, prev_name, pname); 1605 + if (rc != 1) 1627 1606 break; 1628 1607 1629 1608 /* find string offset */ ··· 1642 1615 l = call_prom("getproplen", 2, 1, node, pname); 1643 1616 1644 1617 /* sanity checks */ 1645 - if (l < 0) 1618 + if (l == PROM_ERROR) 1646 1619 continue; 1647 1620 if (l > MAX_PROPERTY_LENGTH) { 1648 1621 prom_printf("WARNING: ignoring large property "); ··· 1790 1763 1791 1764 /* Some G5s have a missing interrupt definition, fix it up here */ 1792 1765 u3 = call_prom("finddevice", 1, 1, ADDR("/u3@0,f8000000")); 1793 - if ((long)u3 <= 0) 1766 + if (!PHANDLE_VALID(u3)) 1794 1767 return; 1795 1768 i2c = call_prom("finddevice", 1, 1, ADDR("/u3@0,f8000000/i2c@f8001000")); 1796 - if ((long)i2c <= 0) 1769 + if (!PHANDLE_VALID(i2c)) 1797 1770 return; 1798 1771 mpic = call_prom("finddevice", 1, 1, 
ADDR("/u3@0,f8000000/mpic@f8040000")); 1799 - if ((long)mpic <= 0) 1772 + if (!PHANDLE_VALID(mpic)) 1800 1773 return; 1801 1774 1802 1775 /* check if proper rev of u3 */ 1803 - if (prom_getprop(u3, "device-rev", &u3_rev, sizeof(u3_rev)) <= 0) 1776 + if (prom_getprop(u3, "device-rev", &u3_rev, sizeof(u3_rev)) 1777 + == PROM_ERROR) 1804 1778 return; 1805 1779 if (u3_rev != 0x35) 1806 1780 return; ··· 1907 1879 getprop_rval = RELOC(of_platform); 1908 1880 prom_setprop(_prom->chosen, "linux,platform", 1909 1881 &getprop_rval, sizeof(getprop_rval)); 1882 + 1883 + /* 1884 + * On pSeries, inform the firmware about our capabilities 1885 + */ 1886 + if (RELOC(of_platform) & PLATFORM_PSERIES) 1887 + prom_send_capabilities(); 1910 1888 1911 1889 /* 1912 1890 * On pSeries, copy the CPU hold code
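The ppc64 prom_init.c hunks above replace ad-hoc checks like `(phandle) -1` and `(long)node <= 0` with the `PHANDLE_VALID`/`IHANDLE_VALID` macros, which compare in the handle's own 32-bit width. A small sketch of why the old signed-long test was unreliable on a 64-bit build (the handle value below is made up):

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t phandle;

/* As defined by the patch. */
#define PROM_ERROR (-1u)
#define PHANDLE_VALID(p) ((p) != 0 && (p) != PROM_ERROR)

/* On ppc64, a u32 handle zero-extends when cast to long, so the
 * error value 0xffffffff becomes a large positive number and the
 * old "(long)p <= 0" test never catches it. */
static int old_style_check_rejects(phandle p)
{
    return (int64_t)(uint64_t)p <= 0;   /* models "(long)p <= 0" on 64-bit */
}
```

The macro pair also unifies the cases where Open Firmware signals failure with 0 versus -1, which the hunks note varies by call.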
-2
arch/ppc64/kernel/time.c
··· 325 325 326 326 irq_enter(); 327 327 328 - #ifndef CONFIG_PPC_ISERIES 329 328 profile_tick(CPU_PROFILING, regs); 330 - #endif 331 329 332 330 lpaca->lppaca.int_dword.fields.decr_int = 0; 333 331
+68 -20
arch/sparc64/kernel/pci_iommu.c
··· 196 196 return NULL; 197 197 } 198 198 199 + static int iommu_alloc_ctx(struct pci_iommu *iommu) 200 + { 201 + int lowest = iommu->ctx_lowest_free; 202 + int sz = IOMMU_NUM_CTXS - lowest; 203 + int n = find_next_zero_bit(iommu->ctx_bitmap, sz, lowest); 204 + 205 + if (unlikely(n == sz)) { 206 + n = find_next_zero_bit(iommu->ctx_bitmap, lowest, 1); 207 + if (unlikely(n == lowest)) { 208 + printk(KERN_WARNING "IOMMU: Ran out of contexts.\n"); 209 + n = 0; 210 + } 211 + } 212 + if (n) 213 + __set_bit(n, iommu->ctx_bitmap); 214 + 215 + return n; 216 + } 217 + 218 + static inline void iommu_free_ctx(struct pci_iommu *iommu, int ctx) 219 + { 220 + if (likely(ctx)) { 221 + __clear_bit(ctx, iommu->ctx_bitmap); 222 + if (ctx < iommu->ctx_lowest_free) 223 + iommu->ctx_lowest_free = ctx; 224 + } 225 + } 226 + 199 227 /* Allocate and map kernel buffer of size SIZE using consistent mode 200 228 * DMA for PCI device PDEV. Return non-NULL cpu-side address if 201 229 * successful and set *DMA_ADDRP to the PCI side dma address. 
··· 264 236 npages = size >> IO_PAGE_SHIFT; 265 237 ctx = 0; 266 238 if (iommu->iommu_ctxflush) 267 - ctx = iommu->iommu_cur_ctx++; 239 + ctx = iommu_alloc_ctx(iommu); 268 240 first_page = __pa(first_page); 269 241 while (npages--) { 270 242 iopte_val(*iopte) = (IOPTE_CONSISTENT(ctx) | ··· 345 317 } 346 318 } 347 319 320 + iommu_free_ctx(iommu, ctx); 321 + 348 322 spin_unlock_irqrestore(&iommu->lock, flags); 349 323 350 324 order = get_order(size); ··· 390 360 base_paddr = __pa(oaddr & IO_PAGE_MASK); 391 361 ctx = 0; 392 362 if (iommu->iommu_ctxflush) 393 - ctx = iommu->iommu_cur_ctx++; 363 + ctx = iommu_alloc_ctx(iommu); 394 364 if (strbuf->strbuf_enabled) 395 365 iopte_protection = IOPTE_STREAMING(ctx); 396 366 else ··· 410 380 return PCI_DMA_ERROR_CODE; 411 381 } 412 382 413 - static void pci_strbuf_flush(struct pci_strbuf *strbuf, struct pci_iommu *iommu, u32 vaddr, unsigned long ctx, unsigned long npages) 383 + static void pci_strbuf_flush(struct pci_strbuf *strbuf, struct pci_iommu *iommu, u32 vaddr, unsigned long ctx, unsigned long npages, int direction) 414 384 { 415 385 int limit; 416 386 417 - PCI_STC_FLUSHFLAG_INIT(strbuf); 418 387 if (strbuf->strbuf_ctxflush && 419 388 iommu->iommu_ctxflush) { 420 389 unsigned long matchreg, flushreg; 390 + u64 val; 421 391 422 392 flushreg = strbuf->strbuf_ctxflush; 423 393 matchreg = PCI_STC_CTXMATCH_ADDR(strbuf, ctx); 424 394 425 - limit = 100000; 426 395 pci_iommu_write(flushreg, ctx); 427 - for(;;) { 428 - if (((long)pci_iommu_read(matchreg)) >= 0L) 429 - break; 430 - limit--; 431 - if (!limit) 432 - break; 433 - udelay(1); 396 + val = pci_iommu_read(matchreg); 397 + val &= 0xffff; 398 + if (!val) 399 + goto do_flush_sync; 400 + 401 + while (val) { 402 + if (val & 0x1) 403 + pci_iommu_write(flushreg, ctx); 404 + val >>= 1; 434 405 } 435 - if (!limit) 406 + val = pci_iommu_read(matchreg); 407 + if (unlikely(val)) { 436 408 printk(KERN_WARNING "pci_strbuf_flush: ctx flush " 437 - "timeout vaddr[%08x] ctx[%lx]\n", 438 
- vaddr, ctx); 409 + "timeout matchreg[%lx] ctx[%lx]\n", 410 + val, ctx); 411 + goto do_page_flush; 412 + } 439 413 } else { 440 414 unsigned long i; 441 415 416 + do_page_flush: 442 417 for (i = 0; i < npages; i++, vaddr += IO_PAGE_SIZE) 443 418 pci_iommu_write(strbuf->strbuf_pflush, vaddr); 444 419 } 445 420 421 + do_flush_sync: 422 + /* If the device could not have possibly put dirty data into 423 + * the streaming cache, no flush-flag synchronization needs 424 + * to be performed. 425 + */ 426 + if (direction == PCI_DMA_TODEVICE) 427 + return; 428 + 429 + PCI_STC_FLUSHFLAG_INIT(strbuf); 446 430 pci_iommu_write(strbuf->strbuf_fsync, strbuf->strbuf_flushflag_pa); 447 431 (void) pci_iommu_read(iommu->write_complete_reg); 448 432 ··· 510 466 511 467 /* Step 1: Kick data out of streaming buffers if necessary. */ 512 468 if (strbuf->strbuf_enabled) 513 - pci_strbuf_flush(strbuf, iommu, bus_addr, ctx, npages); 469 + pci_strbuf_flush(strbuf, iommu, bus_addr, ctx, npages, direction); 514 470 515 471 /* Step 2: Clear out first TSB entry. */ 516 472 iopte_make_dummy(iommu, base); 517 473 518 474 free_streaming_cluster(iommu, bus_addr - iommu->page_table_map_base, 519 475 npages, ctx); 476 + 477 + iommu_free_ctx(iommu, ctx); 520 478 521 479 spin_unlock_irqrestore(&iommu->lock, flags); 522 480 } ··· 659 613 /* Step 4: Choose a context if necessary. */ 660 614 ctx = 0; 661 615 if (iommu->iommu_ctxflush) 662 - ctx = iommu->iommu_cur_ctx++; 616 + ctx = iommu_alloc_ctx(iommu); 663 617 664 618 /* Step 5: Create the mappings. */ 665 619 if (strbuf->strbuf_enabled) ··· 724 678 725 679 /* Step 1: Kick data out of streaming buffers if necessary. */ 726 680 if (strbuf->strbuf_enabled) 727 - pci_strbuf_flush(strbuf, iommu, bus_addr, ctx, npages); 681 + pci_strbuf_flush(strbuf, iommu, bus_addr, ctx, npages, direction); 728 682 729 683 /* Step 2: Clear out first TSB entry. 
*/ 730 684 iopte_make_dummy(iommu, base); 731 685 732 686 free_streaming_cluster(iommu, bus_addr - iommu->page_table_map_base, 733 687 npages, ctx); 688 + 689 + iommu_free_ctx(iommu, ctx); 734 690 735 691 spin_unlock_irqrestore(&iommu->lock, flags); 736 692 } ··· 772 724 } 773 725 774 726 /* Step 2: Kick data out of streaming buffers. */ 775 - pci_strbuf_flush(strbuf, iommu, bus_addr, ctx, npages); 727 + pci_strbuf_flush(strbuf, iommu, bus_addr, ctx, npages, direction); 776 728 777 729 spin_unlock_irqrestore(&iommu->lock, flags); 778 730 } ··· 816 768 i--; 817 769 npages = (IO_PAGE_ALIGN(sglist[i].dma_address + sglist[i].dma_length) 818 770 - bus_addr) >> IO_PAGE_SHIFT; 819 - pci_strbuf_flush(strbuf, iommu, bus_addr, ctx, npages); 771 + pci_strbuf_flush(strbuf, iommu, bus_addr, ctx, npages, direction); 820 772 821 773 spin_unlock_irqrestore(&iommu->lock, flags); 822 774 }
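The pci_iommu.c hunks above replace the ever-incrementing `iommu_cur_ctx` counter with a bitmap allocator (`iommu_alloc_ctx`/`iommu_free_ctx`) that reuses freed DMA contexts and reserves context 0 to mean "none". A user-space sketch of the same policy, using a byte-per-bit bitmap instead of the kernel's `find_next_zero_bit` (structure and constant below are illustrative):

```c
#include <assert.h>

#define NUM_CTXS 16   /* illustrative; the patch uses IOMMU_NUM_CTXS */

struct iommu_sketch {
    unsigned char ctx_bitmap[NUM_CTXS];  /* one byte per context, for clarity */
    int ctx_lowest_free;
};

/* Search from the lowest recently-freed slot, wrap once through the
 * lower slots, and never hand out ctx 0. */
static int ctx_alloc(struct iommu_sketch *iommu)
{
    int n;

    for (n = iommu->ctx_lowest_free; n < NUM_CTXS; n++)
        if (!iommu->ctx_bitmap[n])
            goto found;
    for (n = 1; n < iommu->ctx_lowest_free; n++)
        if (!iommu->ctx_bitmap[n])
            goto found;
    return 0;   /* out of contexts */
found:
    iommu->ctx_bitmap[n] = 1;
    return n;
}

static void ctx_free(struct iommu_sketch *iommu, int ctx)
{
    if (ctx) {
        iommu->ctx_bitmap[ctx] = 0;
        if (ctx < iommu->ctx_lowest_free)
            iommu->ctx_lowest_free = ctx;
    }
}
```

This is why the psycho/sabre/schizo hunks below initialize `ctx_lowest_free = 1` rather than the old `iommu_cur_ctx = 0`.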
+1 -1
arch/sparc64/kernel/pci_psycho.c
··· 1212 1212 1213 1213 /* Setup initial software IOMMU state. */ 1214 1214 spin_lock_init(&iommu->lock); 1215 - iommu->iommu_cur_ctx = 0; 1215 + iommu->ctx_lowest_free = 1; 1216 1216 1217 1217 /* Register addresses. */ 1218 1218 iommu->iommu_control = p->pbm_A.controller_regs + PSYCHO_IOMMU_CONTROL;
+1 -1
arch/sparc64/kernel/pci_sabre.c
··· 1265 1265 1266 1266 /* Setup initial software IOMMU state. */ 1267 1267 spin_lock_init(&iommu->lock); 1268 - iommu->iommu_cur_ctx = 0; 1268 + iommu->ctx_lowest_free = 1; 1269 1269 1270 1270 /* Register addresses. */ 1271 1271 iommu->iommu_control = p->pbm_A.controller_regs + SABRE_IOMMU_CONTROL;
+1 -1
arch/sparc64/kernel/pci_schizo.c
··· 1753 1753 1754 1754 /* Setup initial software IOMMU state. */ 1755 1755 spin_lock_init(&iommu->lock); 1756 - iommu->iommu_cur_ctx = 0; 1756 + iommu->ctx_lowest_free = 1; 1757 1757 1758 1758 /* Register addresses, SCHIZO has iommu ctx flushing. */ 1759 1759 iommu->iommu_control = pbm->pbm_regs + SCHIZO_IOMMU_CONTROL;
+14 -6
arch/sparc64/kernel/sbus.c
··· 117 117 118 118 #define STRBUF_TAG_VALID 0x02UL 119 119 120 - static void sbus_strbuf_flush(struct sbus_iommu *iommu, u32 base, unsigned long npages) 120 + static void sbus_strbuf_flush(struct sbus_iommu *iommu, u32 base, unsigned long npages, int direction) 121 121 { 122 122 unsigned long n; 123 123 int limit; 124 124 125 - iommu->strbuf_flushflag = 0UL; 126 125 n = npages; 127 126 while (n--) 128 127 upa_writeq(base + (n << IO_PAGE_SHIFT), 129 128 iommu->strbuf_regs + STRBUF_PFLUSH); 129 + 130 + /* If the device could not have possibly put dirty data into 131 + * the streaming cache, no flush-flag synchronization needs 132 + * to be performed. 133 + */ 134 + if (direction == SBUS_DMA_TODEVICE) 135 + return; 136 + 137 + iommu->strbuf_flushflag = 0UL; 130 138 131 139 /* Whoopee cushion! */ 132 140 upa_writeq(__pa(&iommu->strbuf_flushflag), ··· 429 421 430 422 spin_lock_irqsave(&iommu->lock, flags); 431 423 free_streaming_cluster(iommu, dma_base, size >> IO_PAGE_SHIFT); 432 - sbus_strbuf_flush(iommu, dma_base, size >> IO_PAGE_SHIFT); 424 + sbus_strbuf_flush(iommu, dma_base, size >> IO_PAGE_SHIFT, direction); 433 425 spin_unlock_irqrestore(&iommu->lock, flags); 434 426 } 435 427 ··· 592 584 iommu = sdev->bus->iommu; 593 585 spin_lock_irqsave(&iommu->lock, flags); 594 586 free_streaming_cluster(iommu, dvma_base, size >> IO_PAGE_SHIFT); 595 - sbus_strbuf_flush(iommu, dvma_base, size >> IO_PAGE_SHIFT); 587 + sbus_strbuf_flush(iommu, dvma_base, size >> IO_PAGE_SHIFT, direction); 596 588 spin_unlock_irqrestore(&iommu->lock, flags); 597 589 } 598 590 ··· 604 596 size = (IO_PAGE_ALIGN(base + size) - (base & IO_PAGE_MASK)); 605 597 606 598 spin_lock_irqsave(&iommu->lock, flags); 607 - sbus_strbuf_flush(iommu, base & IO_PAGE_MASK, size >> IO_PAGE_SHIFT); 599 + sbus_strbuf_flush(iommu, base & IO_PAGE_MASK, size >> IO_PAGE_SHIFT, direction); 608 600 spin_unlock_irqrestore(&iommu->lock, flags); 609 601 } 610 602 ··· 628 620 size = IO_PAGE_ALIGN(sg[i].dma_address + 
sg[i].dma_length) - base; 629 621 630 622 spin_lock_irqsave(&iommu->lock, flags); 631 - sbus_strbuf_flush(iommu, base, size >> IO_PAGE_SHIFT); 623 + sbus_strbuf_flush(iommu, base, size >> IO_PAGE_SHIFT, direction); 632 624 spin_unlock_irqrestore(&iommu->lock, flags); 633 625 } 634 626
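Both the pci_iommu.c and sbus.c hunks add a `direction` parameter to the streaming-buffer flush so the expensive flush-flag synchronization is skipped when the device could only have read from memory. A toy model of that early-return, with a counter standing in for the flush-flag poll loop (names and values are illustrative):

```c
#include <assert.h>

/* Illustrative direction values, mirroring the *_DMA_TODEVICE idea. */
enum { DMA_BIDIRECTIONAL, DMA_TODEVICE, DMA_FROMDEVICE };

static int flushflag_syncs;   /* counts the expensive synchronizations */

static void strbuf_flush_sketch(int direction)
{
    /* ... per-page STRBUF_PFLUSH register writes would go here ... */

    /* If the device could not have put dirty data into the streaming
     * cache, no flush-flag synchronization needs to be performed. */
    if (direction == DMA_TODEVICE)
        return;

    flushflag_syncs++;   /* stand-in for the flush-flag write + poll */
}
```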
-4
arch/um/Kconfig.debug
··· 2 2 3 3 source "lib/Kconfig.debug" 4 4 5 - config FRAME_POINTER 6 - bool 7 - default y if DEBUG_INFO 8 - 9 5 config PT_PROXY 10 6 bool "Enable ptrace proxy" 11 7 depends on XTERM_CHAN && DEBUG_INFO && MODE_TT
+11 -5
arch/um/drivers/random.c
··· 1 - /* Much of this ripped from hw_random.c */ 2 - 1 + /* Copyright (C) 2005 Jeff Dike <jdike@addtoit.com> */ 2 + /* Much of this ripped from drivers/char/hw_random.c, see there for other 3 + * copyright. 4 + * 5 + * This software may be used and distributed according to the terms 6 + * of the GNU General Public License, incorporated herein by reference. 7 + */ 3 8 #include <linux/module.h> 4 9 #include <linux/fs.h> 5 10 #include <linux/miscdevice.h> ··· 17 12 */ 18 13 #define RNG_VERSION "1.0.0" 19 14 #define RNG_MODULE_NAME "random" 20 - #define RNG_DRIVER_NAME RNG_MODULE_NAME " virtual driver " RNG_VERSION 21 - #define PFX RNG_MODULE_NAME ": " 22 15 23 16 #define RNG_MISCDEV_MINOR 183 /* official */ 24 17 ··· 101 98 102 99 err = misc_register (&rng_miscdev); 103 100 if (err) { 104 - printk (KERN_ERR PFX "misc device register failed\n"); 101 + printk (KERN_ERR RNG_MODULE_NAME ": misc device register failed\n"); 105 102 goto err_out_cleanup_hw; 106 103 } 107 104 ··· 123 120 124 121 module_init (rng_init); 125 122 module_exit (rng_cleanup); 123 + 124 + MODULE_DESCRIPTION("UML Host Random Number Generator (RNG) driver"); 125 + MODULE_LICENSE("GPL");
-1
arch/um/drivers/ssl.c
··· 22 22 #include "init.h" 23 23 #include "irq_user.h" 24 24 #include "mconsole_kern.h" 25 - #include "2_5compat.h" 26 25 27 26 static int ssl_version = 1; 28 27
-1
arch/um/drivers/stdio_console.c
··· 28 28 #include "irq_user.h" 29 29 #include "mconsole_kern.h" 30 30 #include "init.h" 31 - #include "2_5compat.h" 32 31 33 32 #define MAX_TTYS (16) 34 33
+3 -4
arch/um/drivers/ubd_kern.c
··· 49 49 #include "irq_user.h" 50 50 #include "irq_kern.h" 51 51 #include "ubd_user.h" 52 - #include "2_5compat.h" 53 52 #include "os.h" 54 53 #include "mem.h" 55 54 #include "mem_kern.h" ··· 439 440 __setup("udb", udb_setup); 440 441 __uml_help(udb_setup, 441 442 "udb\n" 442 - " This option is here solely to catch ubd -> udb typos, which can be\n\n" 443 - " to impossible to catch visually unless you specifically look for\n\n" 444 - " them. The only result of any option starting with 'udb' is an error\n\n" 443 + " This option is here solely to catch ubd -> udb typos, which can be\n" 444 + " to impossible to catch visually unless you specifically look for\n" 445 + " them. The only result of any option starting with 'udb' is an error\n" 445 446 " in the boot output.\n\n" 446 447 ); 447 448
-24
arch/um/include/2_5compat.h
··· 1 - /* 2 - * Copyright (C) 2001 Jeff Dike (jdike@karaya.com) 3 - * Licensed under the GPL 4 - */ 5 - 6 - #ifndef __2_5_COMPAT_H__ 7 - #define __2_5_COMPAT_H__ 8 - 9 - #define INIT_HARDSECT(arr, maj, sizes) 10 - 11 - #define SET_PRI(task) do ; while(0) 12 - 13 - #endif 14 - 15 - /* 16 - * Overrides for Emacs so that we follow Linus's tabbing style. 17 - * Emacs will notice this stuff at the end of the file and automatically 18 - * adjust the settings for this buffer only. This must remain at the end 19 - * of the file. 20 - * --------------------------------------------------------------------------- 21 - * Local variables: 22 - * c-file-style: "linux" 23 - * End: 24 - */
+2 -1
arch/um/include/sysrq.h
··· 1 1 #ifndef __UM_SYSRQ_H 2 2 #define __UM_SYSRQ_H 3 3 4 - extern void show_trace(unsigned long *stack); 4 + struct task_struct; 5 + extern void show_trace(struct task_struct* task, unsigned long *stack); 5 6 6 7 #endif
-1
arch/um/kernel/exec_kern.c
··· 16 16 #include "kern.h" 17 17 #include "irq_user.h" 18 18 #include "tlb.h" 19 - #include "2_5compat.h" 20 19 #include "os.h" 21 20 #include "time_user.h" 22 21 #include "choose-mode.h"
-59
arch/um/kernel/initrd_kern.c
··· 1 - /* 2 - * Copyright (C) 2000, 2001, 2002 Jeff Dike (jdike@karaya.com) 3 - * Licensed under the GPL 4 - */ 5 - 6 - #include "linux/init.h" 7 - #include "linux/bootmem.h" 8 - #include "linux/initrd.h" 9 - #include "asm/types.h" 10 - #include "user_util.h" 11 - #include "kern_util.h" 12 - #include "initrd.h" 13 - #include "init.h" 14 - #include "os.h" 15 - 16 - /* Changed by uml_initrd_setup, which is a setup */ 17 - static char *initrd __initdata = NULL; 18 - 19 - static int __init read_initrd(void) 20 - { 21 - void *area; 22 - long long size; 23 - int err; 24 - 25 - if(initrd == NULL) return 0; 26 - err = os_file_size(initrd, &size); 27 - if(err) return 0; 28 - area = alloc_bootmem(size); 29 - if(area == NULL) return 0; 30 - if(load_initrd(initrd, area, size) == -1) return 0; 31 - initrd_start = (unsigned long) area; 32 - initrd_end = initrd_start + size; 33 - return 0; 34 - } 35 - 36 - __uml_postsetup(read_initrd); 37 - 38 - static int __init uml_initrd_setup(char *line, int *add) 39 - { 40 - initrd = line; 41 - return 0; 42 - } 43 - 44 - __uml_setup("initrd=", uml_initrd_setup, 45 - "initrd=<initrd image>\n" 46 - " This is used to boot UML from an initrd image. The argument is the\n" 47 - " name of the file containing the image.\n\n" 48 - ); 49 - 50 - /* 51 - * Overrides for Emacs so that we follow Linus's tabbing style. 52 - * Emacs will notice this stuff at the end of the file and automatically 53 - * adjust the settings for this buffer only. This must remain at the end 54 - * of the file. 55 - * --------------------------------------------------------------------------- 56 - * Local variables: 57 - * c-file-style: "linux" 58 - * End: 59 - */
-46
arch/um/kernel/initrd_user.c
··· 1 - /* 2 - * Copyright (C) 2000, 2001 Jeff Dike (jdike@karaya.com) 3 - * Licensed under the GPL 4 - */ 5 - 6 - #include <unistd.h> 7 - #include <sys/types.h> 8 - #include <sys/stat.h> 9 - #include <errno.h> 10 - 11 - #include "user_util.h" 12 - #include "kern_util.h" 13 - #include "user.h" 14 - #include "initrd.h" 15 - #include "os.h" 16 - 17 - int load_initrd(char *filename, void *buf, int size) 18 - { 19 - int fd, n; 20 - 21 - fd = os_open_file(filename, of_read(OPENFLAGS()), 0); 22 - if(fd < 0){ 23 - printk("Opening '%s' failed - err = %d\n", filename, -fd); 24 - return(-1); 25 - } 26 - n = os_read_file(fd, buf, size); 27 - if(n != size){ 28 - printk("Read of %d bytes from '%s' failed, err = %d\n", size, 29 - filename, -n); 30 - return(-1); 31 - } 32 - 33 - os_close_file(fd); 34 - return(0); 35 - } 36 - 37 - /* 38 - * Overrides for Emacs so that we follow Linus's tabbing style. 39 - * Emacs will notice this stuff at the end of the file and automatically 40 - * adjust the settings for this buffer only. This must remain at the end 41 - * of the file. 42 - * --------------------------------------------------------------------------- 43 - * Local variables: 44 - * c-file-style: "linux" 45 - * End: 46 - */
+22 -20
arch/um/kernel/main.c
··· 71 71 72 72 static void last_ditch_exit(int sig) 73 73 { 74 - CHOOSE_MODE(kmalloc_ok = 0, (void) 0); 74 + kmalloc_ok = 0; 75 75 signal(SIGINT, SIG_DFL); 76 76 signal(SIGTERM, SIG_DFL); 77 77 signal(SIGHUP, SIG_DFL); ··· 87 87 { 88 88 char **new_argv; 89 89 sigset_t mask; 90 - int ret, i; 90 + int ret, i, err; 91 91 92 92 /* Enable all signals except SIGIO - in some environments, we can 93 93 * enter with some signals blocked ··· 160 160 */ 161 161 change_sig(SIGPROF, 0); 162 162 163 + /* This signal stuff used to be in the reboot case. However, 164 + * sometimes a SIGVTALRM can come in when we're halting (reproducably 165 + * when writing out gcov information, presumably because that takes 166 + * some time) and cause a segfault. 167 + */ 168 + 169 + /* stop timers and set SIG*ALRM to be ignored */ 170 + disable_timer(); 171 + 172 + /* disable SIGIO for the fds and set SIGIO to be ignored */ 173 + err = deactivate_all_fds(); 174 + if(err) 175 + printf("deactivate_all_fds failed, errno = %d\n", -err); 176 + 177 + /* Let any pending signals fire now. This ensures 178 + * that they won't be delivered after the exec, when 179 + * they are definitely not expected. 180 + */ 181 + unblock_signals(); 182 + 163 183 /* Reboot */ 164 184 if(ret){ 165 - int err; 166 - 167 185 printf("\n"); 168 - 169 - /* stop timers and set SIG*ALRM to be ignored */ 170 - disable_timer(); 171 - 172 - /* disable SIGIO for the fds and set SIGIO to be ignored */ 173 - err = deactivate_all_fds(); 174 - if(err) 175 - printf("deactivate_all_fds failed, errno = %d\n", 176 - -err); 177 - 178 - /* Let any pending signals fire now. This ensures 179 - * that they won't be delivered after the exec, when 180 - * they are definitely not expected. 181 - */ 182 - unblock_signals(); 183 - 184 186 execvp(new_argv[0], new_argv); 185 187 perror("Failed to exec kernel"); 186 188 ret = 1;
-36
arch/um/kernel/process_kern.c
··· 43 43 #include "tlb.h" 44 44 #include "frame_kern.h" 45 45 #include "sigcontext.h" 46 - #include "2_5compat.h" 47 46 #include "os.h" 48 47 #include "mode.h" 49 48 #include "mode_kern.h" ··· 53 54 * entry. 54 55 */ 55 56 struct cpu_task cpu_tasks[NR_CPUS] = { [0 ... NR_CPUS - 1] = { -1, NULL } }; 56 - 57 - struct task_struct *get_task(int pid, int require) 58 - { 59 - struct task_struct *ret; 60 - 61 - read_lock(&tasklist_lock); 62 - ret = find_task_by_pid(pid); 63 - read_unlock(&tasklist_lock); 64 - 65 - if(require && (ret == NULL)) panic("get_task couldn't find a task\n"); 66 - return(ret); 67 - } 68 57 69 58 int external_pid(void *t) 70 59 { ··· 176 189 177 190 while(1){ 178 191 /* endless idle loop with no priority at all */ 179 - SET_PRI(current); 180 192 181 193 /* 182 194 * although we are an idle CPU, we do not want to ··· 196 210 int page_size(void) 197 211 { 198 212 return(PAGE_SIZE); 199 - } 200 - 201 - unsigned long page_mask(void) 202 - { 203 - return(PAGE_MASK); 204 213 } 205 214 206 215 void *um_virt_to_phys(struct task_struct *task, unsigned long addr, ··· 330 349 return(new); 331 350 } 332 351 333 - void *get_init_task(void) 334 - { 335 - return(&init_thread_union.thread_info.task); 336 - } 337 - 338 352 int copy_to_user_proc(void __user *to, void *from, int size) 339 353 { 340 354 return(copy_to_user(to, from, size)); ··· 456 480 return sp & ~0xf; 457 481 } 458 482 #endif 459 - 460 - 461 - /* 462 - * Overrides for Emacs so that we follow Linus's tabbing style. 463 - * Emacs will notice this stuff at the end of the file and automatically 464 - * adjust the settings for this buffer only. This must remain at the end 465 - * of the file. 466 - * --------------------------------------------------------------------------- 467 - * Local variables: 468 - * c-file-style: "linux" 469 - * End: 470 - */
+3 -16
arch/um/kernel/ptrace.c
··· 322 322 UPT_SYSCALL_ARG2(regs), 323 323 UPT_SYSCALL_ARG3(regs), 324 324 UPT_SYSCALL_ARG4(regs)); 325 - else { 326 - int res = UPT_SYSCALL_RET(regs); 327 - audit_syscall_exit(current, AUDITSC_RESULT(res), 328 - res); 329 - } 325 + else audit_syscall_exit(current, 326 + AUDITSC_RESULT(UPT_SYSCALL_RET(regs)), 327 + UPT_SYSCALL_RET(regs)); 330 328 } 331 329 332 330 /* Fake a debug trap */ ··· 354 356 current->exit_code = 0; 355 357 } 356 358 } 357 - 358 - /* 359 - * Overrides for Emacs so that we follow Linus's tabbing style. 360 - * Emacs will notice this stuff at the end of the file and automatically 361 - * adjust the settings for this buffer only. This must remain at the end 362 - * of the file. 363 - * --------------------------------------------------------------------------- 364 - * Local variables: 365 - * c-file-style: "linux" 366 - * End: 367 - */
+14 -7
arch/um/kernel/sysrq.c
··· 3 3 * Licensed under the GPL 4 4 */ 5 5 6 + #include "linux/config.h" 6 7 #include "linux/sched.h" 7 8 #include "linux/kernel.h" 8 9 #include "linux/module.h" ··· 13 12 #include "sysrq.h" 14 13 #include "user_util.h" 15 14 16 - void show_trace(unsigned long * stack) 15 + /* Catch non-i386 SUBARCH's. */ 16 + #if !defined(CONFIG_UML_X86) || defined(CONFIG_64BIT) 17 + void show_trace(struct task_struct *task, unsigned long * stack) 17 18 { 18 - /* XXX: Copy the CONFIG_FRAME_POINTER stack-walking backtrace from 19 - * arch/i386/kernel/traps.c, and then move this to sys-i386/sysrq.c.*/ 20 19 unsigned long addr; 21 20 22 21 if (!stack) { 23 - stack = (unsigned long*) &stack; 22 + stack = (unsigned long*) &stack; 24 23 WARN_ON(1); 25 24 } 26 25 ··· 36 35 } 37 36 printk("\n"); 38 37 } 38 + #endif 39 39 40 40 /* 41 41 * stack dumps generator - this is used by arch-independent code. ··· 46 44 { 47 45 unsigned long stack; 48 46 49 - show_trace(&stack); 47 + show_trace(current, &stack); 50 48 } 51 49 EXPORT_SYMBOL(dump_stack); 52 50 ··· 61 59 int i; 62 60 63 61 if (esp == NULL) { 64 - if (task != current) { 62 + if (task != current && task != NULL) { 63 + /* XXX: Isn't this bogus? I.e. isn't this the 64 + * *userspace* stack of this task? If not so, use this 65 + * even when task == current (as in i386). 66 + */ 65 67 esp = (unsigned long *) KSTK_ESP(task); 66 68 /* Which one? No actual difference - just coding style.*/ 67 69 //esp = (unsigned long *) PT_REGS_IP(&task->thread.regs); ··· 83 77 printk("%08lx ", *stack++); 84 78 } 85 79 86 - show_trace(esp); 80 + printk("Call Trace: \n"); 81 + show_trace(current, esp); 87 82 }
-1
arch/um/kernel/trap_kern.c
··· 23 23 #include "kern.h" 24 24 #include "chan_kern.h" 25 25 #include "mconsole_kern.h" 26 - #include "2_5compat.h" 27 26 #include "mem.h" 28 27 #include "mem_kern.h" 29 28
+1 -7
arch/um/kernel/tt/process_kern.c
··· 32 32 unsigned long flags; 33 33 int err, vtalrm, alrm, prof, cpu; 34 34 char c; 35 - /* jailing and SMP are incompatible, so this doesn't need to be 36 - * made per-cpu 37 - */ 38 - static int reading; 39 35 40 36 from = prev; 41 37 to = next; ··· 55 59 c = 0; 56 60 set_current(to); 57 61 58 - reading = 0; 59 62 err = os_write_file(to->thread.mode.tt.switch_pipe[1], &c, sizeof(c)); 60 63 if(err != sizeof(c)) 61 64 panic("write of switch_pipe failed, err = %d", -err); 62 65 63 - reading = 1; 64 - if(from->thread.mode.tt.switch_pipe[0] == -1) 66 + if(from->thread.mode.tt.switch_pipe[0] == -1) 65 67 os_kill_process(os_getpid(), 0); 66 68 67 69 err = os_read_file(from->thread.mode.tt.switch_pipe[0], &c, sizeof(c));
-6
arch/um/kernel/um_arch.c
··· 111 111 .show = show_cpuinfo, 112 112 }; 113 113 114 - pte_t * __bad_pagetable(void) 115 - { 116 - panic("Someone should implement __bad_pagetable"); 117 - return(NULL); 118 - } 119 - 120 114 /* Set in linux_main */ 121 115 unsigned long host_task_size; 122 116 unsigned long task_size;
+79 -1
arch/um/sys-i386/sysrq.c
··· 3 3 * Licensed under the GPL 4 4 */ 5 5 6 + #include "linux/config.h" 6 7 #include "linux/kernel.h" 7 8 #include "linux/smp.h" 8 9 #include "linux/sched.h" 10 + #include "linux/kallsyms.h" 9 11 #include "asm/ptrace.h" 10 12 #include "sysrq.h" 11 13 14 + /* This is declared by <linux/sched.h> */ 12 15 void show_regs(struct pt_regs *regs) 13 16 { 14 17 printk("\n"); ··· 34 31 0xffff & PT_REGS_DS(regs), 35 32 0xffff & PT_REGS_ES(regs)); 36 33 37 - show_trace((unsigned long *) &regs); 34 + show_trace(NULL, (unsigned long *) &regs); 38 35 } 36 + 37 + /* Copied from i386. */ 38 + static inline int valid_stack_ptr(struct thread_info *tinfo, void *p) 39 + { 40 + return p > (void *)tinfo && 41 + p < (void *)tinfo + THREAD_SIZE - 3; 42 + } 43 + 44 + /* Adapted from i386 (we also print the address we read from). */ 45 + static inline unsigned long print_context_stack(struct thread_info *tinfo, 46 + unsigned long *stack, unsigned long ebp) 47 + { 48 + unsigned long addr; 49 + 50 + #ifdef CONFIG_FRAME_POINTER 51 + while (valid_stack_ptr(tinfo, (void *)ebp)) { 52 + addr = *(unsigned long *)(ebp + 4); 53 + printk("%08lx: [<%08lx>]", ebp + 4, addr); 54 + print_symbol(" %s", addr); 55 + printk("\n"); 56 + ebp = *(unsigned long *)ebp; 57 + } 58 + #else 59 + while (valid_stack_ptr(tinfo, stack)) { 60 + addr = *stack; 61 + if (__kernel_text_address(addr)) { 62 + printk("%08lx: [<%08lx>]", (unsigned long) stack, addr); 63 + print_symbol(" %s", addr); 64 + printk("\n"); 65 + } 66 + stack++; 67 + } 68 + #endif 69 + return ebp; 70 + } 71 + 72 + void show_trace(struct task_struct* task, unsigned long * stack) 73 + { 74 + unsigned long ebp; 75 + struct thread_info *context; 76 + 77 + /* Turn this into BUG_ON if possible. 
*/ 78 + if (!stack) { 79 + stack = (unsigned long*) &stack; 80 + printk("show_trace: got NULL stack, implicit assumption task == current"); 81 + WARN_ON(1); 82 + } 83 + 84 + if (!task) 85 + task = current; 86 + 87 + if (task != current) { 88 + //ebp = (unsigned long) KSTK_EBP(task); 89 + /* Which one? No actual difference - just coding style.*/ 90 + ebp = (unsigned long) PT_REGS_EBP(&task->thread.regs); 91 + } else { 92 + asm ("movl %%ebp, %0" : "=r" (ebp) : ); 93 + } 94 + 95 + context = (struct thread_info *) 96 + ((unsigned long)stack & (~(THREAD_SIZE - 1))); 97 + print_context_stack(context, stack, ebp); 98 + 99 + /*while (((long) stack & (THREAD_SIZE-1)) != 0) { 100 + addr = *stack; 101 + if (__kernel_text_address(addr)) { 102 + printk("%08lx: [<%08lx>]", (unsigned long) stack, addr); 103 + print_symbol(" %s", addr); 104 + printk("\n"); 105 + } 106 + stack++; 107 + }*/ 108 + printk("\n"); 109 + } 110 +
+1 -13
arch/um/sys-ppc/sysrq.c
··· 27 27 0xffff & regs->xds, 0xffff & regs->xes); 28 28 #endif 29 29 30 - show_trace(&regs->gpr[1]); 30 + show_trace(current, &regs->gpr[1]); 31 31 } 32 - 33 - 34 - /* 35 - * Overrides for Emacs so that we follow Linus's tabbing style. 36 - * Emacs will notice this stuff at the end of the file and automatically 37 - * adjust the settings for this buffer only. This must remain at the end 38 - * of the file. 39 - * --------------------------------------------------------------------------- 40 - * Local variables: 41 - * c-file-style: "linux" 42 - * End: 43 - */
+10 -6
arch/um/sys-x86_64/syscalls.c
··· 133 133 134 134 #ifdef CONFIG_MODE_SKAS 135 135 136 + /* XXX: Must also call arch_prctl in the host, beside saving the segment bases! */ 136 137 static long arch_prctl_skas(int code, unsigned long addr) 137 138 { 138 139 long ret = 0; 139 140 140 141 switch(code){ 141 - case ARCH_SET_GS: 142 - current->thread.regs.regs.skas.regs[GS_BASE / sizeof(unsigned long)] = addr; 143 - break; 144 142 case ARCH_SET_FS: 145 143 current->thread.regs.regs.skas.regs[FS_BASE / sizeof(unsigned long)] = addr; 146 144 break; 145 + case ARCH_SET_GS: 146 + current->thread.regs.regs.skas.regs[GS_BASE / sizeof(unsigned long)] = addr; 147 + break; 147 148 case ARCH_GET_FS: 148 - ret = put_user(current->thread.regs.regs.skas.regs[GS / sizeof(unsigned long)], &addr); 149 + ret = put_user(current->thread.regs.regs.skas. 150 + regs[FS_BASE / sizeof(unsigned long)], 151 + (unsigned long __user *)addr); 149 152 break; 150 153 case ARCH_GET_GS: 151 - ret = put_user(current->thread.regs.regs.skas.regs[FS / sizeof(unsigned \ 152 - long)], &addr); 154 + ret = put_user(current->thread.regs.regs.skas. 155 + regs[GS_BASE / sizeof(unsigned long)], 156 + (unsigned long __user *)addr); 153 157 break; 154 158 default: 155 159 ret = -EINVAL;
+1 -10
arch/um/sys-x86_64/sysrq.c
··· 36 36 void show_regs(struct pt_regs *regs) 37 37 { 38 38 __show_regs(regs); 39 - show_trace((unsigned long *) &regs); 39 + show_trace(current, (unsigned long *) &regs); 40 40 } 41 - 42 - /* Emacs will notice this stuff at the end of the file and automatically 43 - * adjust the settings for this buffer only. This must remain at the end 44 - * of the file. 45 - * --------------------------------------------------------------------------- 46 - * Local variables: 47 - * c-file-style: "linux" 48 - * End: 49 - */
+1
arch/x86_64/Kconfig
··· 305 305 306 306 config X86_PM_TIMER 307 307 bool "PM timer" 308 + depends on ACPI 308 309 default y 309 310 help 310 311 Support the ACPI PM timer for time keeping. This is slow,
+1
arch/x86_64/kernel/io_apic.c
··· 37 37 #include <asm/desc.h> 38 38 #include <asm/proto.h> 39 39 #include <asm/mach_apic.h> 40 + #include <asm/acpi.h> 40 41 41 42 #define __apicdebuginit __init 42 43
+1
arch/x86_64/kernel/mpparse.c
··· 30 30 #include <asm/pgalloc.h> 31 31 #include <asm/io_apic.h> 32 32 #include <asm/proto.h> 33 + #include <asm/acpi.h> 33 34 34 35 /* Have we found an MP table */ 35 36 int smp_found_config;
+1
arch/x86_64/kernel/signal.c
··· 28 28 #include <asm/uaccess.h> 29 29 #include <asm/i387.h> 30 30 #include <asm/proto.h> 31 + #include <asm/ia32_unistd.h> 31 32 32 33 /* #define DEBUG_SIG 1 */ 33 34
+2
arch/x86_64/kernel/time.c
··· 27 27 #include <linux/bcd.h> 28 28 #include <linux/kallsyms.h> 29 29 #include <linux/acpi.h> 30 + #ifdef CONFIG_ACPI 30 31 #include <acpi/achware.h> /* for PM timer frequency */ 32 + #endif 31 33 #include <asm/8253pit.h> 32 34 #include <asm/pgtable.h> 33 35 #include <asm/vsyscall.h>
+14
drivers/cdrom/viocd.c
··· 488 488 & (CDC_DVD_RAM | CDC_RAM)) != 0; 489 489 } 490 490 break; 491 + case GPCMD_GET_CONFIGURATION: 492 + if (cgc->cmd[3] == CDF_RWRT) { 493 + struct rwrt_feature_desc *rfd = (struct rwrt_feature_desc *)(cgc->buffer + sizeof(struct feature_header)); 494 + 495 + if ((buflen >= 496 + (sizeof(struct feature_header) + sizeof(*rfd))) && 497 + (cdi->ops->capability & ~cdi->mask 498 + & (CDC_DVD_RAM | CDC_RAM))) { 499 + rfd->feature_code = cpu_to_be16(CDF_RWRT); 500 + rfd->curr = 1; 501 + ret = 0; 502 + } 503 + } 504 + break; 491 505 default: 492 506 if (cgc->sense) { 493 507 /* indicate Unknown code */
+24
drivers/cpufreq/Kconfig
··· 46 46 This will show detail CPU frequency translation table in sysfs file 47 47 system 48 48 49 + # Note that it is not currently possible to set the other governors (such as ondemand) 50 + # as the default, since if they fail to initialise, cpufreq will be 51 + # left in an undefined state. 52 + 49 53 choice 50 54 prompt "Default CPUFreq governor" 51 55 default CPU_FREQ_DEFAULT_GOV_USERSPACE if CPU_FREQ_SA1100 || CPU_FREQ_SA1110 ··· 114 110 The support for this governor depends on CPU capability to 115 111 do fast frequency switching (i.e, very low latency frequency 116 112 transitions). 113 + 114 + For details, take a look at linux/Documentation/cpu-freq. 115 + 116 + If in doubt, say N. 117 + 118 + config CPU_FREQ_GOV_CONSERVATIVE 119 + tristate "'conservative' cpufreq governor" 120 + depends on CPU_FREQ 121 + help 122 + 'conservative' - this driver is rather similar to the 'ondemand' 123 + governor both in its source code and its purpose, the difference is 124 + its optimisation for better suitability in a battery powered 125 + environment. The frequency is gracefully increased and decreased 126 + rather than jumping to 100% when speed is required. 127 + 128 + If you have a desktop machine then you should really be considering 129 + the 'ondemand' governor instead, however if you are using a laptop, 130 + PDA or even an AMD64 based computer (due to the unacceptable 131 + step-by-step latency issues between the minimum and maximum frequency 132 + transitions in the CPU) you will probably want to use this governor. 117 133 118 134 For details, take a look at linux/Documentation/cpu-freq. 119 135
+1
drivers/cpufreq/Makefile
··· 8 8 obj-$(CONFIG_CPU_FREQ_GOV_POWERSAVE) += cpufreq_powersave.o 9 9 obj-$(CONFIG_CPU_FREQ_GOV_USERSPACE) += cpufreq_userspace.o 10 10 obj-$(CONFIG_CPU_FREQ_GOV_ONDEMAND) += cpufreq_ondemand.o 11 + obj-$(CONFIG_CPU_FREQ_GOV_CONSERVATIVE) += cpufreq_conservative.o 11 12 12 13 # CPUfreq cross-arch helpers 13 14 obj-$(CONFIG_CPU_FREQ_TABLE) += freq_table.o
+4 -4
drivers/cpufreq/cpufreq.c
··· 258 258 (likely(cpufreq_cpu_data[freqs->cpu]->cur)) && 259 259 (unlikely(freqs->old != cpufreq_cpu_data[freqs->cpu]->cur))) 260 260 { 261 - printk(KERN_WARNING "Warning: CPU frequency is %u, " 261 + dprintk(KERN_WARNING "Warning: CPU frequency is %u, " 262 262 "cpufreq assumed %u kHz.\n", freqs->old, cpufreq_cpu_data[freqs->cpu]->cur); 263 263 freqs->old = cpufreq_cpu_data[freqs->cpu]->cur; 264 264 } ··· 814 814 { 815 815 struct cpufreq_freqs freqs; 816 816 817 - printk(KERN_WARNING "Warning: CPU frequency out of sync: cpufreq and timing " 817 + dprintk(KERN_WARNING "Warning: CPU frequency out of sync: cpufreq and timing " 818 818 "core thinks of %u, is %u kHz.\n", old_freq, new_freq); 819 819 820 820 freqs.cpu = cpu; ··· 923 923 struct cpufreq_freqs freqs; 924 924 925 925 if (!(cpufreq_driver->flags & CPUFREQ_PM_NO_WARN)) 926 - printk(KERN_DEBUG "Warning: CPU frequency is %u, " 926 + dprintk(KERN_DEBUG "Warning: CPU frequency is %u, " 927 927 "cpufreq assumed %u kHz.\n", 928 928 cur_freq, cpu_policy->cur); 929 929 ··· 1004 1004 struct cpufreq_freqs freqs; 1005 1005 1006 1006 if (!(cpufreq_driver->flags & CPUFREQ_PM_NO_WARN)) 1007 - printk(KERN_WARNING "Warning: CPU frequency" 1007 + dprintk(KERN_WARNING "Warning: CPU frequency" 1008 1008 "is %u, cpufreq assumed %u kHz.\n", 1009 1009 cur_freq, cpu_policy->cur); 1010 1010
+586
drivers/cpufreq/cpufreq_conservative.c
··· 1 + /* 2 + * drivers/cpufreq/cpufreq_conservative.c 3 + * 4 + * Copyright (C) 2001 Russell King 5 + * (C) 2003 Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>. 6 + * Jun Nakajima <jun.nakajima@intel.com> 7 + * (C) 2004 Alexander Clouter <alex-kernel@digriz.org.uk> 8 + * 9 + * This program is free software; you can redistribute it and/or modify 10 + * it under the terms of the GNU General Public License version 2 as 11 + * published by the Free Software Foundation. 12 + */ 13 + 14 + #include <linux/kernel.h> 15 + #include <linux/module.h> 16 + #include <linux/smp.h> 17 + #include <linux/init.h> 18 + #include <linux/interrupt.h> 19 + #include <linux/ctype.h> 20 + #include <linux/cpufreq.h> 21 + #include <linux/sysctl.h> 22 + #include <linux/types.h> 23 + #include <linux/fs.h> 24 + #include <linux/sysfs.h> 25 + #include <linux/sched.h> 26 + #include <linux/kmod.h> 27 + #include <linux/workqueue.h> 28 + #include <linux/jiffies.h> 29 + #include <linux/kernel_stat.h> 30 + #include <linux/percpu.h> 31 + 32 + /* 33 + * dbs is used in this file as a shortform for demandbased switching 34 + * It helps to keep variable names smaller, simpler 35 + */ 36 + 37 + #define DEF_FREQUENCY_UP_THRESHOLD (80) 38 + #define MIN_FREQUENCY_UP_THRESHOLD (0) 39 + #define MAX_FREQUENCY_UP_THRESHOLD (100) 40 + 41 + #define DEF_FREQUENCY_DOWN_THRESHOLD (20) 42 + #define MIN_FREQUENCY_DOWN_THRESHOLD (0) 43 + #define MAX_FREQUENCY_DOWN_THRESHOLD (100) 44 + 45 + /* 46 + * The polling frequency of this governor depends on the capability of 47 + * the processor. Default polling frequency is 1000 times the transition 48 + * latency of the processor. The governor will work on any processor with 49 + * transition latency <= 10mS, using appropriate sampling 50 + * rate. 51 + * For CPUs with transition latency > 10mS (mostly drivers with CPUFREQ_ETERNAL) 52 + * this governor will not work. 53 + * All times here are in uS. 
54 + */ 55 + static unsigned int def_sampling_rate; 56 + #define MIN_SAMPLING_RATE (def_sampling_rate / 2) 57 + #define MAX_SAMPLING_RATE (500 * def_sampling_rate) 58 + #define DEF_SAMPLING_RATE_LATENCY_MULTIPLIER (100000) 59 + #define DEF_SAMPLING_DOWN_FACTOR (5) 60 + #define TRANSITION_LATENCY_LIMIT (10 * 1000) 61 + 62 + static void do_dbs_timer(void *data); 63 + 64 + struct cpu_dbs_info_s { 65 + struct cpufreq_policy *cur_policy; 66 + unsigned int prev_cpu_idle_up; 67 + unsigned int prev_cpu_idle_down; 68 + unsigned int enable; 69 + }; 70 + static DEFINE_PER_CPU(struct cpu_dbs_info_s, cpu_dbs_info); 71 + 72 + static unsigned int dbs_enable; /* number of CPUs using this policy */ 73 + 74 + static DECLARE_MUTEX (dbs_sem); 75 + static DECLARE_WORK (dbs_work, do_dbs_timer, NULL); 76 + 77 + struct dbs_tuners { 78 + unsigned int sampling_rate; 79 + unsigned int sampling_down_factor; 80 + unsigned int up_threshold; 81 + unsigned int down_threshold; 82 + unsigned int ignore_nice; 83 + unsigned int freq_step; 84 + }; 85 + 86 + static struct dbs_tuners dbs_tuners_ins = { 87 + .up_threshold = DEF_FREQUENCY_UP_THRESHOLD, 88 + .down_threshold = DEF_FREQUENCY_DOWN_THRESHOLD, 89 + .sampling_down_factor = DEF_SAMPLING_DOWN_FACTOR, 90 + }; 91 + 92 + static inline unsigned int get_cpu_idle_time(unsigned int cpu) 93 + { 94 + return kstat_cpu(cpu).cpustat.idle + 95 + kstat_cpu(cpu).cpustat.iowait + 96 + ( !dbs_tuners_ins.ignore_nice ? 
97 + kstat_cpu(cpu).cpustat.nice : 98 + 0); 99 + } 100 + 101 + /************************** sysfs interface ************************/ 102 + static ssize_t show_sampling_rate_max(struct cpufreq_policy *policy, char *buf) 103 + { 104 + return sprintf (buf, "%u\n", MAX_SAMPLING_RATE); 105 + } 106 + 107 + static ssize_t show_sampling_rate_min(struct cpufreq_policy *policy, char *buf) 108 + { 109 + return sprintf (buf, "%u\n", MIN_SAMPLING_RATE); 110 + } 111 + 112 + #define define_one_ro(_name) \ 113 + static struct freq_attr _name = \ 114 + __ATTR(_name, 0444, show_##_name, NULL) 115 + 116 + define_one_ro(sampling_rate_max); 117 + define_one_ro(sampling_rate_min); 118 + 119 + /* cpufreq_conservative Governor Tunables */ 120 + #define show_one(file_name, object) \ 121 + static ssize_t show_##file_name \ 122 + (struct cpufreq_policy *unused, char *buf) \ 123 + { \ 124 + return sprintf(buf, "%u\n", dbs_tuners_ins.object); \ 125 + } 126 + show_one(sampling_rate, sampling_rate); 127 + show_one(sampling_down_factor, sampling_down_factor); 128 + show_one(up_threshold, up_threshold); 129 + show_one(down_threshold, down_threshold); 130 + show_one(ignore_nice, ignore_nice); 131 + show_one(freq_step, freq_step); 132 + 133 + static ssize_t store_sampling_down_factor(struct cpufreq_policy *unused, 134 + const char *buf, size_t count) 135 + { 136 + unsigned int input; 137 + int ret; 138 + ret = sscanf (buf, "%u", &input); 139 + if (ret != 1 ) 140 + return -EINVAL; 141 + 142 + down(&dbs_sem); 143 + dbs_tuners_ins.sampling_down_factor = input; 144 + up(&dbs_sem); 145 + 146 + return count; 147 + } 148 + 149 + static ssize_t store_sampling_rate(struct cpufreq_policy *unused, 150 + const char *buf, size_t count) 151 + { 152 + unsigned int input; 153 + int ret; 154 + ret = sscanf (buf, "%u", &input); 155 + 156 + down(&dbs_sem); 157 + if (ret != 1 || input > MAX_SAMPLING_RATE || input < MIN_SAMPLING_RATE) { 158 + up(&dbs_sem); 159 + return -EINVAL; 160 + } 161 + 162 + 
dbs_tuners_ins.sampling_rate = input; 163 + up(&dbs_sem); 164 + 165 + return count; 166 + } 167 + 168 + static ssize_t store_up_threshold(struct cpufreq_policy *unused, 169 + const char *buf, size_t count) 170 + { 171 + unsigned int input; 172 + int ret; 173 + ret = sscanf (buf, "%u", &input); 174 + 175 + down(&dbs_sem); 176 + if (ret != 1 || input > MAX_FREQUENCY_UP_THRESHOLD || 177 + input < MIN_FREQUENCY_UP_THRESHOLD || 178 + input <= dbs_tuners_ins.down_threshold) { 179 + up(&dbs_sem); 180 + return -EINVAL; 181 + } 182 + 183 + dbs_tuners_ins.up_threshold = input; 184 + up(&dbs_sem); 185 + 186 + return count; 187 + } 188 + 189 + static ssize_t store_down_threshold(struct cpufreq_policy *unused, 190 + const char *buf, size_t count) 191 + { 192 + unsigned int input; 193 + int ret; 194 + ret = sscanf (buf, "%u", &input); 195 + 196 + down(&dbs_sem); 197 + if (ret != 1 || input > MAX_FREQUENCY_DOWN_THRESHOLD || 198 + input < MIN_FREQUENCY_DOWN_THRESHOLD || 199 + input >= dbs_tuners_ins.up_threshold) { 200 + up(&dbs_sem); 201 + return -EINVAL; 202 + } 203 + 204 + dbs_tuners_ins.down_threshold = input; 205 + up(&dbs_sem); 206 + 207 + return count; 208 + } 209 + 210 + static ssize_t store_ignore_nice(struct cpufreq_policy *policy, 211 + const char *buf, size_t count) 212 + { 213 + unsigned int input; 214 + int ret; 215 + 216 + unsigned int j; 217 + 218 + ret = sscanf (buf, "%u", &input); 219 + if ( ret != 1 ) 220 + return -EINVAL; 221 + 222 + if ( input > 1 ) 223 + input = 1; 224 + 225 + down(&dbs_sem); 226 + if ( input == dbs_tuners_ins.ignore_nice ) { /* nothing to do */ 227 + up(&dbs_sem); 228 + return count; 229 + } 230 + dbs_tuners_ins.ignore_nice = input; 231 + 232 + /* we need to re-evaluate prev_cpu_idle_up and prev_cpu_idle_down */ 233 + for_each_online_cpu(j) { 234 + struct cpu_dbs_info_s *j_dbs_info; 235 + j_dbs_info = &per_cpu(cpu_dbs_info, j); 236 + j_dbs_info->prev_cpu_idle_up = get_cpu_idle_time(j); 237 + j_dbs_info->prev_cpu_idle_down = 
j_dbs_info->prev_cpu_idle_up; 238 + } 239 + up(&dbs_sem); 240 + 241 + return count; 242 + } 243 + 244 + static ssize_t store_freq_step(struct cpufreq_policy *policy, 245 + const char *buf, size_t count) 246 + { 247 + unsigned int input; 248 + int ret; 249 + 250 + ret = sscanf (buf, "%u", &input); 251 + 252 + if ( ret != 1 ) 253 + return -EINVAL; 254 + 255 + if ( input > 100 ) 256 + input = 100; 257 + 258 + /* no need to test here if freq_step is zero as the user might actually 259 + * want this, they would be crazy though :) */ 260 + down(&dbs_sem); 261 + dbs_tuners_ins.freq_step = input; 262 + up(&dbs_sem); 263 + 264 + return count; 265 + } 266 + 267 + #define define_one_rw(_name) \ 268 + static struct freq_attr _name = \ 269 + __ATTR(_name, 0644, show_##_name, store_##_name) 270 + 271 + define_one_rw(sampling_rate); 272 + define_one_rw(sampling_down_factor); 273 + define_one_rw(up_threshold); 274 + define_one_rw(down_threshold); 275 + define_one_rw(ignore_nice); 276 + define_one_rw(freq_step); 277 + 278 + static struct attribute * dbs_attributes[] = { 279 + &sampling_rate_max.attr, 280 + &sampling_rate_min.attr, 281 + &sampling_rate.attr, 282 + &sampling_down_factor.attr, 283 + &up_threshold.attr, 284 + &down_threshold.attr, 285 + &ignore_nice.attr, 286 + &freq_step.attr, 287 + NULL 288 + }; 289 + 290 + static struct attribute_group dbs_attr_group = { 291 + .attrs = dbs_attributes, 292 + .name = "conservative", 293 + }; 294 + 295 + /************************** sysfs end ************************/ 296 + 297 + static void dbs_check_cpu(int cpu) 298 + { 299 + unsigned int idle_ticks, up_idle_ticks, down_idle_ticks; 300 + unsigned int freq_step; 301 + unsigned int freq_down_sampling_rate; 302 + static int down_skip[NR_CPUS]; 303 + static int requested_freq[NR_CPUS]; 304 + static unsigned short init_flag = 0; 305 + struct cpu_dbs_info_s *this_dbs_info; 306 + struct cpu_dbs_info_s *dbs_info; 307 + 308 + struct cpufreq_policy *policy; 309 + unsigned int j; 310 + 311 + 
this_dbs_info = &per_cpu(cpu_dbs_info, cpu); 312 + if (!this_dbs_info->enable) 313 + return; 314 + 315 + policy = this_dbs_info->cur_policy; 316 + 317 + if ( init_flag == 0 ) { 318 + for ( /* NULL */; init_flag < NR_CPUS; init_flag++ ) { 319 + dbs_info = &per_cpu(cpu_dbs_info, init_flag); 320 + requested_freq[cpu] = dbs_info->cur_policy->cur; 321 + } 322 + init_flag = 1; 323 + } 324 + 325 + /* 326 + * The default safe range is 20% to 80% 327 + * Every sampling_rate, we check 328 + * - If current idle time is less than 20%, then we try to 329 + * increase frequency 330 + * Every sampling_rate*sampling_down_factor, we check 331 + * - If current idle time is more than 80%, then we try to 332 + * decrease frequency 333 + * 334 + * Any frequency increase takes it to the maximum frequency. 335 + * Frequency reduction happens at minimum steps of 336 + * 5% (default) of max_frequency 337 + */ 338 + 339 + /* Check for frequency increase */ 340 + 341 + idle_ticks = UINT_MAX; 342 + for_each_cpu_mask(j, policy->cpus) { 343 + unsigned int tmp_idle_ticks, total_idle_ticks; 344 + struct cpu_dbs_info_s *j_dbs_info; 345 + 346 + j_dbs_info = &per_cpu(cpu_dbs_info, j); 347 + /* Check for frequency increase */ 348 + total_idle_ticks = get_cpu_idle_time(j); 349 + tmp_idle_ticks = total_idle_ticks - 350 + j_dbs_info->prev_cpu_idle_up; 351 + j_dbs_info->prev_cpu_idle_up = total_idle_ticks; 352 + 353 + if (tmp_idle_ticks < idle_ticks) 354 + idle_ticks = tmp_idle_ticks; 355 + } 356 + 357 + /* Scale idle ticks by 100 and compare with up and down ticks */ 358 + idle_ticks *= 100; 359 + up_idle_ticks = (100 - dbs_tuners_ins.up_threshold) * 360 + usecs_to_jiffies(dbs_tuners_ins.sampling_rate); 361 + 362 + if (idle_ticks < up_idle_ticks) { 363 + down_skip[cpu] = 0; 364 + for_each_cpu_mask(j, policy->cpus) { 365 + struct cpu_dbs_info_s *j_dbs_info; 366 + 367 + j_dbs_info = &per_cpu(cpu_dbs_info, j); 368 + j_dbs_info->prev_cpu_idle_down = 369 + j_dbs_info->prev_cpu_idle_up; 370 + } 371 + /* if we 
are already at full speed then break out early */ 372 + if (requested_freq[cpu] == policy->max) 373 + return; 374 + 375 + freq_step = (dbs_tuners_ins.freq_step * policy->max) / 100; 376 + 377 + /* max freq cannot be less than 100. But who knows.... */ 378 + if (unlikely(freq_step == 0)) 379 + freq_step = 5; 380 + 381 + requested_freq[cpu] += freq_step; 382 + if (requested_freq[cpu] > policy->max) 383 + requested_freq[cpu] = policy->max; 384 + 385 + __cpufreq_driver_target(policy, requested_freq[cpu], 386 + CPUFREQ_RELATION_H); 387 + return; 388 + } 389 + 390 + /* Check for frequency decrease */ 391 + down_skip[cpu]++; 392 + if (down_skip[cpu] < dbs_tuners_ins.sampling_down_factor) 393 + return; 394 + 395 + idle_ticks = UINT_MAX; 396 + for_each_cpu_mask(j, policy->cpus) { 397 + unsigned int tmp_idle_ticks, total_idle_ticks; 398 + struct cpu_dbs_info_s *j_dbs_info; 399 + 400 + j_dbs_info = &per_cpu(cpu_dbs_info, j); 401 + total_idle_ticks = j_dbs_info->prev_cpu_idle_up; 402 + tmp_idle_ticks = total_idle_ticks - 403 + j_dbs_info->prev_cpu_idle_down; 404 + j_dbs_info->prev_cpu_idle_down = total_idle_ticks; 405 + 406 + if (tmp_idle_ticks < idle_ticks) 407 + idle_ticks = tmp_idle_ticks; 408 + } 409 + 410 + /* Scale idle ticks by 100 and compare with up and down ticks */ 411 + idle_ticks *= 100; 412 + down_skip[cpu] = 0; 413 + 414 + freq_down_sampling_rate = dbs_tuners_ins.sampling_rate * 415 + dbs_tuners_ins.sampling_down_factor; 416 + down_idle_ticks = (100 - dbs_tuners_ins.down_threshold) * 417 + usecs_to_jiffies(freq_down_sampling_rate); 418 + 419 + if (idle_ticks > down_idle_ticks) { 420 + /* if we are already at the lowest speed then break out early 421 + * or if we 'cannot' reduce the speed as the user might want 422 + * freq_step to be zero */ 423 + if (requested_freq[cpu] == policy->min 424 + || dbs_tuners_ins.freq_step == 0) 425 + return; 426 + 427 + freq_step = (dbs_tuners_ins.freq_step * policy->max) / 100; 428 + 429 + /* max freq cannot be less than 100. 
But who knows.... */ 430 + if (unlikely(freq_step == 0)) 431 + freq_step = 5; 432 + 433 + requested_freq[cpu] -= freq_step; 434 + if (requested_freq[cpu] < policy->min) 435 + requested_freq[cpu] = policy->min; 436 + 437 + __cpufreq_driver_target(policy, 438 + requested_freq[cpu], 439 + CPUFREQ_RELATION_H); 440 + return; 441 + } 442 + } 443 + 444 + static void do_dbs_timer(void *data) 445 + { 446 + int i; 447 + down(&dbs_sem); 448 + for_each_online_cpu(i) 449 + dbs_check_cpu(i); 450 + schedule_delayed_work(&dbs_work, 451 + usecs_to_jiffies(dbs_tuners_ins.sampling_rate)); 452 + up(&dbs_sem); 453 + } 454 + 455 + static inline void dbs_timer_init(void) 456 + { 457 + INIT_WORK(&dbs_work, do_dbs_timer, NULL); 458 + schedule_delayed_work(&dbs_work, 459 + usecs_to_jiffies(dbs_tuners_ins.sampling_rate)); 460 + return; 461 + } 462 + 463 + static inline void dbs_timer_exit(void) 464 + { 465 + cancel_delayed_work(&dbs_work); 466 + return; 467 + } 468 + 469 + static int cpufreq_governor_dbs(struct cpufreq_policy *policy, 470 + unsigned int event) 471 + { 472 + unsigned int cpu = policy->cpu; 473 + struct cpu_dbs_info_s *this_dbs_info; 474 + unsigned int j; 475 + 476 + this_dbs_info = &per_cpu(cpu_dbs_info, cpu); 477 + 478 + switch (event) { 479 + case CPUFREQ_GOV_START: 480 + if ((!cpu_online(cpu)) || 481 + (!policy->cur)) 482 + return -EINVAL; 483 + 484 + if (policy->cpuinfo.transition_latency > 485 + (TRANSITION_LATENCY_LIMIT * 1000)) 486 + return -EINVAL; 487 + if (this_dbs_info->enable) /* Already enabled */ 488 + break; 489 + 490 + down(&dbs_sem); 491 + for_each_cpu_mask(j, policy->cpus) { 492 + struct cpu_dbs_info_s *j_dbs_info; 493 + j_dbs_info = &per_cpu(cpu_dbs_info, j); 494 + j_dbs_info->cur_policy = policy; 495 + 496 + j_dbs_info->prev_cpu_idle_up = get_cpu_idle_time(j); 497 + j_dbs_info->prev_cpu_idle_down 498 + = j_dbs_info->prev_cpu_idle_up; 499 + } 500 + this_dbs_info->enable = 1; 501 + sysfs_create_group(&policy->kobj, &dbs_attr_group); 502 + dbs_enable++; 503 + 
/* 504 + * Start the timerschedule work, when this governor 505 + * is used for first time 506 + */ 507 + if (dbs_enable == 1) { 508 + unsigned int latency; 509 + /* policy latency is in nS. Convert it to uS first */ 510 + 511 + latency = policy->cpuinfo.transition_latency; 512 + if (latency < 1000) 513 + latency = 1000; 514 + 515 + def_sampling_rate = (latency / 1000) * 516 + DEF_SAMPLING_RATE_LATENCY_MULTIPLIER; 517 + dbs_tuners_ins.sampling_rate = def_sampling_rate; 518 + dbs_tuners_ins.ignore_nice = 0; 519 + dbs_tuners_ins.freq_step = 5; 520 + 521 + dbs_timer_init(); 522 + } 523 + 524 + up(&dbs_sem); 525 + break; 526 + 527 + case CPUFREQ_GOV_STOP: 528 + down(&dbs_sem); 529 + this_dbs_info->enable = 0; 530 + sysfs_remove_group(&policy->kobj, &dbs_attr_group); 531 + dbs_enable--; 532 + /* 533 + * Stop the timerschedule work, when this governor 534 + * is used for first time 535 + */ 536 + if (dbs_enable == 0) 537 + dbs_timer_exit(); 538 + 539 + up(&dbs_sem); 540 + 541 + break; 542 + 543 + case CPUFREQ_GOV_LIMITS: 544 + down(&dbs_sem); 545 + if (policy->max < this_dbs_info->cur_policy->cur) 546 + __cpufreq_driver_target( 547 + this_dbs_info->cur_policy, 548 + policy->max, CPUFREQ_RELATION_H); 549 + else if (policy->min > this_dbs_info->cur_policy->cur) 550 + __cpufreq_driver_target( 551 + this_dbs_info->cur_policy, 552 + policy->min, CPUFREQ_RELATION_L); 553 + up(&dbs_sem); 554 + break; 555 + } 556 + return 0; 557 + } 558 + 559 + static struct cpufreq_governor cpufreq_gov_dbs = { 560 + .name = "conservative", 561 + .governor = cpufreq_governor_dbs, 562 + .owner = THIS_MODULE, 563 + }; 564 + 565 + static int __init cpufreq_gov_dbs_init(void) 566 + { 567 + return cpufreq_register_governor(&cpufreq_gov_dbs); 568 + } 569 + 570 + static void __exit cpufreq_gov_dbs_exit(void) 571 + { 572 + /* Make sure that the scheduled work is indeed not running */ 573 + flush_scheduled_work(); 574 + 575 + cpufreq_unregister_governor(&cpufreq_gov_dbs); 576 + } 577 + 578 + 579 + 
MODULE_AUTHOR ("Alexander Clouter <alex-kernel@digriz.org.uk>"); 580 + MODULE_DESCRIPTION ("'cpufreq_conservative' - A dynamic cpufreq governor for " 581 + "Low Latency Frequency Transition capable processors " 582 + "optimised for use in a battery environment"); 583 + MODULE_LICENSE ("GPL"); 584 + 585 + module_init(cpufreq_gov_dbs_init); 586 + module_exit(cpufreq_gov_dbs_exit);
+94 -92
drivers/cpufreq/cpufreq_ondemand.c
··· 34 34 */ 35 35 36 36 #define DEF_FREQUENCY_UP_THRESHOLD (80) 37 - #define MIN_FREQUENCY_UP_THRESHOLD (0) 37 + #define MIN_FREQUENCY_UP_THRESHOLD (11) 38 38 #define MAX_FREQUENCY_UP_THRESHOLD (100) 39 - 40 - #define DEF_FREQUENCY_DOWN_THRESHOLD (20) 41 - #define MIN_FREQUENCY_DOWN_THRESHOLD (0) 42 - #define MAX_FREQUENCY_DOWN_THRESHOLD (100) 43 39 44 40 /* 45 41 * The polling frequency of this governor depends on the capability of ··· 51 55 #define MIN_SAMPLING_RATE (def_sampling_rate / 2) 52 56 #define MAX_SAMPLING_RATE (500 * def_sampling_rate) 53 57 #define DEF_SAMPLING_RATE_LATENCY_MULTIPLIER (1000) 54 - #define DEF_SAMPLING_DOWN_FACTOR (10) 58 + #define DEF_SAMPLING_DOWN_FACTOR (1) 59 + #define MAX_SAMPLING_DOWN_FACTOR (10) 55 60 #define TRANSITION_LATENCY_LIMIT (10 * 1000) 56 - #define sampling_rate_in_HZ(x) (((x * HZ) < (1000 * 1000))?1:((x * HZ) / (1000 * 1000))) 57 61 58 62 static void do_dbs_timer(void *data); 59 63 ··· 74 78 unsigned int sampling_rate; 75 79 unsigned int sampling_down_factor; 76 80 unsigned int up_threshold; 77 - unsigned int down_threshold; 81 + unsigned int ignore_nice; 78 82 }; 79 83 80 84 static struct dbs_tuners dbs_tuners_ins = { 81 85 .up_threshold = DEF_FREQUENCY_UP_THRESHOLD, 82 - .down_threshold = DEF_FREQUENCY_DOWN_THRESHOLD, 83 86 .sampling_down_factor = DEF_SAMPLING_DOWN_FACTOR, 84 87 }; 88 + 89 + static inline unsigned int get_cpu_idle_time(unsigned int cpu) 90 + { 91 + return kstat_cpu(cpu).cpustat.idle + 92 + kstat_cpu(cpu).cpustat.iowait + 93 + ( !dbs_tuners_ins.ignore_nice ? 
94 + kstat_cpu(cpu).cpustat.nice : 95 + 0); 96 + } 85 97 86 98 /************************** sysfs interface ************************/ 87 99 static ssize_t show_sampling_rate_max(struct cpufreq_policy *policy, char *buf) ··· 119 115 show_one(sampling_rate, sampling_rate); 120 116 show_one(sampling_down_factor, sampling_down_factor); 121 117 show_one(up_threshold, up_threshold); 122 - show_one(down_threshold, down_threshold); 118 + show_one(ignore_nice, ignore_nice); 123 119 124 120 static ssize_t store_sampling_down_factor(struct cpufreq_policy *unused, 125 121 const char *buf, size_t count) ··· 128 124 int ret; 129 125 ret = sscanf (buf, "%u", &input); 130 126 if (ret != 1 ) 127 + return -EINVAL; 128 + 129 + if (input > MAX_SAMPLING_DOWN_FACTOR || input < 1) 131 130 return -EINVAL; 132 131 133 132 down(&dbs_sem); ··· 168 161 169 162 down(&dbs_sem); 170 163 if (ret != 1 || input > MAX_FREQUENCY_UP_THRESHOLD || 171 - input < MIN_FREQUENCY_UP_THRESHOLD || 172 - input <= dbs_tuners_ins.down_threshold) { 164 + input < MIN_FREQUENCY_UP_THRESHOLD) { 173 165 up(&dbs_sem); 174 166 return -EINVAL; 175 167 } ··· 179 173 return count; 180 174 } 181 175 182 - static ssize_t store_down_threshold(struct cpufreq_policy *unused, 176 + static ssize_t store_ignore_nice(struct cpufreq_policy *policy, 183 177 const char *buf, size_t count) 184 178 { 185 179 unsigned int input; 186 180 int ret; 181 + 182 + unsigned int j; 183 + 187 184 ret = sscanf (buf, "%u", &input); 188 - 189 - down(&dbs_sem); 190 - if (ret != 1 || input > MAX_FREQUENCY_DOWN_THRESHOLD || 191 - input < MIN_FREQUENCY_DOWN_THRESHOLD || 192 - input >= dbs_tuners_ins.up_threshold) { 193 - up(&dbs_sem); 185 + if ( ret != 1 ) 194 186 return -EINVAL; 195 - } 196 187 197 - dbs_tuners_ins.down_threshold = input; 188 + if ( input > 1 ) 189 + input = 1; 190 + 191 + down(&dbs_sem); 192 + if ( input == dbs_tuners_ins.ignore_nice ) { /* nothing to do */ 193 + up(&dbs_sem); 194 + return count; 195 + } 196 + dbs_tuners_ins.ignore_nice 
= input; 197 + 198 + /* we need to re-evaluate prev_cpu_idle_up and prev_cpu_idle_down */ 199 + for_each_online_cpu(j) { 200 + struct cpu_dbs_info_s *j_dbs_info; 201 + j_dbs_info = &per_cpu(cpu_dbs_info, j); 202 + j_dbs_info->prev_cpu_idle_up = get_cpu_idle_time(j); 203 + j_dbs_info->prev_cpu_idle_down = j_dbs_info->prev_cpu_idle_up; 204 + } 198 205 up(&dbs_sem); 199 206 200 207 return count; ··· 220 201 define_one_rw(sampling_rate); 221 202 define_one_rw(sampling_down_factor); 222 203 define_one_rw(up_threshold); 223 - define_one_rw(down_threshold); 204 + define_one_rw(ignore_nice); 224 205 225 206 static struct attribute * dbs_attributes[] = { 226 207 &sampling_rate_max.attr, ··· 228 209 &sampling_rate.attr, 229 210 &sampling_down_factor.attr, 230 211 &up_threshold.attr, 231 - &down_threshold.attr, 212 + &ignore_nice.attr, 232 213 NULL 233 214 }; 234 215 ··· 241 222 242 223 static void dbs_check_cpu(int cpu) 243 224 { 244 - unsigned int idle_ticks, up_idle_ticks, down_idle_ticks; 245 - unsigned int total_idle_ticks; 246 - unsigned int freq_down_step; 225 + unsigned int idle_ticks, up_idle_ticks, total_ticks; 226 + unsigned int freq_next; 247 227 unsigned int freq_down_sampling_rate; 248 228 static int down_skip[NR_CPUS]; 249 229 struct cpu_dbs_info_s *this_dbs_info; ··· 256 238 257 239 policy = this_dbs_info->cur_policy; 258 240 /* 259 - * The default safe range is 20% to 80% 260 - * Every sampling_rate, we check 261 - * - If current idle time is less than 20%, then we try to 262 - * increase frequency 263 - * Every sampling_rate*sampling_down_factor, we check 264 - * - If current idle time is more than 80%, then we try to 265 - * decrease frequency 241 + * Every sampling_rate, we check: if current idle time is less 242 + * than 20% (default), then we try to increase frequency 243 + * Every sampling_rate*sampling_down_factor, we look for the lowest 244 + * frequency which can sustain the load while keeping idle time over 245 + * 30%. 
If such a frequency exists, we try to decrease to this frequency. 266 246 * 267 247 * Any frequency increase takes it to the maximum frequency. 268 248 * Frequency reduction happens at minimum steps of 269 - * 5% of max_frequency 249 + * 5% (default) of current frequency 270 250 */ 271 251 272 252 /* Check for frequency increase */ 273 - total_idle_ticks = kstat_cpu(cpu).cpustat.idle + 274 - kstat_cpu(cpu).cpustat.iowait; 275 - idle_ticks = total_idle_ticks - 276 - this_dbs_info->prev_cpu_idle_up; 277 - this_dbs_info->prev_cpu_idle_up = total_idle_ticks; 278 - 279 - 253 + idle_ticks = UINT_MAX; 280 254 for_each_cpu_mask(j, policy->cpus) { 281 - unsigned int tmp_idle_ticks; 255 + unsigned int tmp_idle_ticks, total_idle_ticks; 282 256 struct cpu_dbs_info_s *j_dbs_info; 283 257 284 - if (j == cpu) 285 - continue; 286 - 287 258 j_dbs_info = &per_cpu(cpu_dbs_info, j); 288 - /* Check for frequency increase */ 289 - total_idle_ticks = kstat_cpu(j).cpustat.idle + 290 - kstat_cpu(j).cpustat.iowait; 259 + total_idle_ticks = get_cpu_idle_time(j); 291 260 tmp_idle_ticks = total_idle_ticks - 292 261 j_dbs_info->prev_cpu_idle_up; 293 262 j_dbs_info->prev_cpu_idle_up = total_idle_ticks; ··· 286 281 /* Scale idle ticks by 100 and compare with up and down ticks */ 287 282 idle_ticks *= 100; 288 283 up_idle_ticks = (100 - dbs_tuners_ins.up_threshold) * 289 - sampling_rate_in_HZ(dbs_tuners_ins.sampling_rate); 284 + usecs_to_jiffies(dbs_tuners_ins.sampling_rate); 290 285 291 286 if (idle_ticks < up_idle_ticks) { 287 + down_skip[cpu] = 0; 288 + for_each_cpu_mask(j, policy->cpus) { 289 + struct cpu_dbs_info_s *j_dbs_info; 290 + 291 + j_dbs_info = &per_cpu(cpu_dbs_info, j); 292 + j_dbs_info->prev_cpu_idle_down = 293 + j_dbs_info->prev_cpu_idle_up; 294 + } 295 + /* if we are already at full speed then break out early */ 296 + if (policy->cur == policy->max) 297 + return; 298 + 292 299 __cpufreq_driver_target(policy, policy->max, 293 300 CPUFREQ_RELATION_H); 294 - down_skip[cpu] = 0; 295 - 
this_dbs_info->prev_cpu_idle_down = total_idle_ticks; 296 301 return; 297 302 } 298 303 ··· 311 296 if (down_skip[cpu] < dbs_tuners_ins.sampling_down_factor) 312 297 return; 313 298 314 - total_idle_ticks = kstat_cpu(cpu).cpustat.idle + 315 - kstat_cpu(cpu).cpustat.iowait; 316 - idle_ticks = total_idle_ticks - 317 - this_dbs_info->prev_cpu_idle_down; 318 - this_dbs_info->prev_cpu_idle_down = total_idle_ticks; 319 - 299 + idle_ticks = UINT_MAX; 320 300 for_each_cpu_mask(j, policy->cpus) { 321 - unsigned int tmp_idle_ticks; 301 + unsigned int tmp_idle_ticks, total_idle_ticks; 322 302 struct cpu_dbs_info_s *j_dbs_info; 323 303 324 - if (j == cpu) 325 - continue; 326 - 327 304 j_dbs_info = &per_cpu(cpu_dbs_info, j); 328 - /* Check for frequency increase */ 329 - total_idle_ticks = kstat_cpu(j).cpustat.idle + 330 - kstat_cpu(j).cpustat.iowait; 305 + /* Check for frequency decrease */ 306 + total_idle_ticks = j_dbs_info->prev_cpu_idle_up; 331 307 tmp_idle_ticks = total_idle_ticks - 332 308 j_dbs_info->prev_cpu_idle_down; 333 309 j_dbs_info->prev_cpu_idle_down = total_idle_ticks; ··· 327 321 idle_ticks = tmp_idle_ticks; 328 322 } 329 323 330 - /* Scale idle ticks by 100 and compare with up and down ticks */ 331 - idle_ticks *= 100; 332 324 down_skip[cpu] = 0; 325 + /* if we cannot reduce the frequency anymore, break out early */ 326 + if (policy->cur == policy->min) 327 + return; 333 328 329 + /* Compute how many ticks there are between two measurements */ 334 330 freq_down_sampling_rate = dbs_tuners_ins.sampling_rate * 335 331 dbs_tuners_ins.sampling_down_factor; 336 - down_idle_ticks = (100 - dbs_tuners_ins.down_threshold) * 337 - sampling_rate_in_HZ(freq_down_sampling_rate); 332 + total_ticks = usecs_to_jiffies(freq_down_sampling_rate); 338 333 339 - if (idle_ticks > down_idle_ticks ) { 340 - freq_down_step = (5 * policy->max) / 100; 334 + /* 335 + * The optimal frequency is the frequency that is the lowest that 336 + * can support the current CPU usage without 
triggering the up 337 + * policy. To be safe, we focus 10 points under the threshold. 338 + */ 339 + freq_next = ((total_ticks - idle_ticks) * 100) / total_ticks; 340 + freq_next = (freq_next * policy->cur) / 341 + (dbs_tuners_ins.up_threshold - 10); 341 342 342 - /* max freq cannot be less than 100. But who knows.... */ 343 - if (unlikely(freq_down_step == 0)) 344 - freq_down_step = 5; 345 - 346 - __cpufreq_driver_target(policy, 347 - policy->cur - freq_down_step, 348 - CPUFREQ_RELATION_H); 349 - return; 350 - } 343 + if (freq_next <= ((policy->cur * 95) / 100)) 344 + __cpufreq_driver_target(policy, freq_next, CPUFREQ_RELATION_L); 351 345 } 352 346 353 347 static void do_dbs_timer(void *data) 354 348 { 355 349 int i; 356 350 down(&dbs_sem); 357 - for (i = 0; i < NR_CPUS; i++) 358 - if (cpu_online(i)) 359 - dbs_check_cpu(i); 351 + for_each_online_cpu(i) 352 + dbs_check_cpu(i); 360 353 schedule_delayed_work(&dbs_work, 361 - sampling_rate_in_HZ(dbs_tuners_ins.sampling_rate)); 354 + usecs_to_jiffies(dbs_tuners_ins.sampling_rate)); 362 355 up(&dbs_sem); 363 356 } 364 357 ··· 365 360 { 366 361 INIT_WORK(&dbs_work, do_dbs_timer, NULL); 367 362 schedule_delayed_work(&dbs_work, 368 - sampling_rate_in_HZ(dbs_tuners_ins.sampling_rate)); 363 + usecs_to_jiffies(dbs_tuners_ins.sampling_rate)); 369 364 return; 370 365 } 371 366 ··· 402 397 j_dbs_info = &per_cpu(cpu_dbs_info, j); 403 398 j_dbs_info->cur_policy = policy; 404 399 405 - j_dbs_info->prev_cpu_idle_up = 406 - kstat_cpu(j).cpustat.idle + 407 - kstat_cpu(j).cpustat.iowait; 408 - j_dbs_info->prev_cpu_idle_down = 409 - kstat_cpu(j).cpustat.idle + 410 - kstat_cpu(j).cpustat.iowait; 400 + j_dbs_info->prev_cpu_idle_up = get_cpu_idle_time(j); 401 + j_dbs_info->prev_cpu_idle_down 402 + = j_dbs_info->prev_cpu_idle_up; 411 403 } 412 404 this_dbs_info->enable = 1; 413 405 sysfs_create_group(&policy->kobj, &dbs_attr_group); ··· 424 422 def_sampling_rate = (latency / 1000) * 425 423 DEF_SAMPLING_RATE_LATENCY_MULTIPLIER; 426 424 
dbs_tuners_ins.sampling_rate = def_sampling_rate; 425 + dbs_tuners_ins.ignore_nice = 0; 427 426 428 427 dbs_timer_init(); 429 428 } ··· 464 461 return 0; 465 462 } 466 463 467 - struct cpufreq_governor cpufreq_gov_dbs = { 464 + static struct cpufreq_governor cpufreq_gov_dbs = { 468 465 .name = "ondemand", 469 466 .governor = cpufreq_governor_dbs, 470 467 .owner = THIS_MODULE, 471 468 }; 472 - EXPORT_SYMBOL(cpufreq_gov_dbs); 473 469 474 470 static int __init cpufreq_gov_dbs_init(void) 475 471 {
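The ondemand changes above replace fixed 5% down-steps with a target frequency computed from measured load. A minimal standalone sketch of that arithmetic, not the kernel code itself; the helper names are hypothetical, while the (up_threshold - 10) safety margin and the 95% acceptance check mirror the hunk:

```c
#include <assert.h>

/* Compute the lowest frequency that sustains the measured load,
 * aiming 10 points under up_threshold as in the patch. total_ticks
 * and idle_ticks are jiffy counts over the down-sampling window. */
static unsigned int dbs_freq_next(unsigned int total_ticks,
                                  unsigned int idle_ticks,
                                  unsigned int cur_freq,
                                  unsigned int up_threshold)
{
        /* busy percentage over the window */
        unsigned int load = ((total_ticks - idle_ticks) * 100) / total_ticks;

        return (load * cur_freq) / (up_threshold - 10);
}

/* The caller only scales down when the target is at least 5%
 * below the current frequency, matching the final check above. */
static int dbs_should_scale_down(unsigned int freq_next,
                                 unsigned int cur_freq)
{
        return freq_next <= (cur_freq * 95) / 100;
}
```

With a 10% load at 2 GHz and the default 80% up_threshold, the target drops far below the 95% bar, so the governor would step down.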
+30 -17
drivers/cpufreq/cpufreq_stats.c
··· 19 19 #include <linux/percpu.h> 20 20 #include <linux/kobject.h> 21 21 #include <linux/spinlock.h> 22 + #include <asm/cputime.h> 22 23 23 24 static spinlock_t cpufreq_stats_lock; 24 25 ··· 30 29 .show = _show,\ 31 30 }; 32 31 33 - static unsigned long 34 - delta_time(unsigned long old, unsigned long new) 35 - { 36 - return (old > new) ? (old - new): (new + ~old + 1); 37 - } 38 - 39 32 struct cpufreq_stats { 40 33 unsigned int cpu; 41 34 unsigned int total_trans; 42 - unsigned long long last_time; 35 + unsigned long long last_time; 43 36 unsigned int max_state; 44 37 unsigned int state_num; 45 38 unsigned int last_index; 46 - unsigned long long *time_in_state; 39 + cputime64_t *time_in_state; 47 40 unsigned int *freq_table; 48 41 #ifdef CONFIG_CPU_FREQ_STAT_DETAILS 49 42 unsigned int *trans_table; ··· 55 60 cpufreq_stats_update (unsigned int cpu) 56 61 { 57 62 struct cpufreq_stats *stat; 63 + unsigned long long cur_time; 64 + 65 + cur_time = get_jiffies_64(); 58 66 spin_lock(&cpufreq_stats_lock); 59 67 stat = cpufreq_stats_table[cpu]; 60 68 if (stat->time_in_state) 61 - stat->time_in_state[stat->last_index] += 62 - delta_time(stat->last_time, jiffies); 63 - stat->last_time = jiffies; 69 + stat->time_in_state[stat->last_index] = 70 + cputime64_add(stat->time_in_state[stat->last_index], 71 + cputime_sub(cur_time, stat->last_time)); 72 + stat->last_time = cur_time; 64 73 spin_unlock(&cpufreq_stats_lock); 65 74 return 0; 66 75 } ··· 89 90 return 0; 90 91 cpufreq_stats_update(stat->cpu); 91 92 for (i = 0; i < stat->state_num; i++) { 92 - len += sprintf(buf + len, "%u %llu\n", 93 - stat->freq_table[i], stat->time_in_state[i]); 93 + len += sprintf(buf + len, "%u %llu\n", stat->freq_table[i], 94 + (unsigned long long)cputime64_to_clock_t(stat->time_in_state[i])); 94 95 } 95 96 return len; 96 97 } ··· 106 107 if(!stat) 107 108 return 0; 108 109 cpufreq_stats_update(stat->cpu); 110 + len += snprintf(buf + len, PAGE_SIZE - len, " From : To\n"); 111 + len += snprintf(buf + 
len, PAGE_SIZE - len, " : "); 109 112 for (i = 0; i < stat->state_num; i++) { 110 113 if (len >= PAGE_SIZE) 111 114 break; 112 - len += snprintf(buf + len, PAGE_SIZE - len, "%9u:\t", 115 + len += snprintf(buf + len, PAGE_SIZE - len, "%9u ", 116 + stat->freq_table[i]); 117 + } 118 + if (len >= PAGE_SIZE) 119 + return len; 120 + 121 + len += snprintf(buf + len, PAGE_SIZE - len, "\n"); 122 + 123 + for (i = 0; i < stat->state_num; i++) { 124 + if (len >= PAGE_SIZE) 125 + break; 126 + 127 + len += snprintf(buf + len, PAGE_SIZE - len, "%9u: ", 113 128 stat->freq_table[i]); 114 129 115 130 for (j = 0; j < stat->state_num; j++) { 116 131 if (len >= PAGE_SIZE) 117 132 break; 118 - len += snprintf(buf + len, PAGE_SIZE - len, "%u\t", 133 + len += snprintf(buf + len, PAGE_SIZE - len, "%9u ", 119 134 stat->trans_table[i*stat->max_state+j]); 120 135 } 121 136 len += snprintf(buf + len, PAGE_SIZE - len, "\n"); ··· 210 197 count++; 211 198 } 212 199 213 - alloc_size = count * sizeof(int) + count * sizeof(long long); 200 + alloc_size = count * sizeof(int) + count * sizeof(cputime64_t); 214 201 215 202 #ifdef CONFIG_CPU_FREQ_STAT_DETAILS 216 203 alloc_size += count * count * sizeof(int); ··· 237 224 } 238 225 stat->state_num = j; 239 226 spin_lock(&cpufreq_stats_lock); 240 - stat->last_time = jiffies; 227 + stat->last_time = get_jiffies_64(); 241 228 stat->last_index = freq_table_get_index(stat, policy->cur); 242 229 spin_unlock(&cpufreq_stats_lock); 243 230 cpufreq_cpu_put(data);
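The cpufreq_stats hunks above move time_in_state accounting to 64-bit jiffies: each update credits the elapsed time since last_time to the frequency state that was active, then restamps. A simplified sketch of that bookkeeping, with a hypothetical plain struct standing in for the kernel's cputime64_t helpers:

```c
#include <assert.h>

/* Hypothetical stand-in for struct cpufreq_stats. */
struct stats_sketch {
        unsigned long long last_time;           /* last update stamp */
        unsigned int last_index;                /* active freq state */
        unsigned long long time_in_state[8];    /* per-state totals  */
};

/* Credit elapsed time to the currently active state. */
static void stats_update(struct stats_sketch *s, unsigned long long now)
{
        s->time_in_state[s->last_index] += now - s->last_time;
        s->last_time = now;
}

/* On a frequency transition: close out the old state, then switch. */
static void stats_freq_change(struct stats_sketch *s,
                              unsigned long long now,
                              unsigned int new_index)
{
        stats_update(s, now);
        s->last_index = new_index;
}

/* Example scenario: 50 ticks in state 0, then 100 ticks in state 1. */
static unsigned long long stats_demo(void)
{
        struct stats_sketch s = { .last_time = 100, .last_index = 0,
                                  .time_in_state = {0} };

        stats_freq_change(&s, 150, 1);
        stats_update(&s, 250);
        return s.time_in_state[0] * 1000 + s.time_in_state[1];
}
```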
+1
drivers/firmware/pcdp.c
··· 11 11 * published by the Free Software Foundation. 12 12 */ 13 13 14 + #include <linux/config.h> 14 15 #include <linux/acpi.h> 15 16 #include <linux/console.h> 16 17 #include <linux/efi.h>
+31 -15
drivers/i2c/busses/i2c-ali1563.c
··· 2 2 * i2c-ali1563.c - i2c driver for the ALi 1563 Southbridge 3 3 * 4 4 * Copyright (C) 2004 Patrick Mochel 5 + * 2005 Rudolf Marek <r.marek@sh.cvut.cz> 5 6 * 6 7 * The 1563 southbridge is deceptively similar to the 1533, with a 7 8 * few notable exceptions. One of those happens to be the fact they ··· 58 57 #define HST_CNTL2_BLOCK 0x05 59 58 60 59 60 + #define HST_CNTL2_SIZEMASK 0x38 61 61 62 62 static unsigned short ali1563_smba; 63 63 64 - static int ali1563_transaction(struct i2c_adapter * a) 64 + static int ali1563_transaction(struct i2c_adapter * a, int size) 65 65 { 66 66 u32 data; 67 67 int timeout; ··· 75 73 76 74 data = inb_p(SMB_HST_STS); 77 75 if (data & HST_STS_BAD) { 78 - dev_warn(&a->dev,"ali1563: Trying to reset busy device\n"); 76 + dev_err(&a->dev, "ali1563: Trying to reset busy device\n"); 79 77 outb_p(data | HST_STS_BAD,SMB_HST_STS); 80 78 data = inb_p(SMB_HST_STS); 81 79 if (data & HST_STS_BAD) ··· 96 94 97 95 if (timeout && !(data & HST_STS_BAD)) 98 96 return 0; 99 - dev_warn(&a->dev, "SMBus Error: %s%s%s%s%s\n", 100 - timeout ? "Timeout " : "", 101 - data & HST_STS_FAIL ? "Transaction Failed " : "", 102 - data & HST_STS_BUSERR ? "No response or Bus Collision " : "", 103 - data & HST_STS_DEVERR ? "Device Error " : "", 104 - !(data & HST_STS_DONE) ? 
"Transaction Never Finished " : ""); 105 97 106 - if (!(data & HST_STS_DONE)) 98 + if (!timeout) { 99 + dev_err(&a->dev, "Timeout - Trying to KILL transaction!\n"); 107 100 /* Issue 'kill' to host controller */ 108 101 outb_p(HST_CNTL2_KILL,SMB_HST_CNTL2); 109 - else 110 - /* Issue timeout to reset all devices on bus */ 102 + data = inb_p(SMB_HST_STS); 103 + } 104 + 105 + /* device error - no response, ignore the autodetection case */ 106 + if ((data & HST_STS_DEVERR) && (size != HST_CNTL2_QUICK)) { 107 + dev_err(&a->dev, "Device error!\n"); 108 + } 109 + 110 + /* bus collision */ 111 + if (data & HST_STS_BUSERR) { 112 + dev_err(&a->dev, "Bus collision!\n"); 113 + /* Issue timeout, hoping it helps */ 111 114 outb_p(HST_CNTL1_TIMEOUT,SMB_HST_CNTL1); 115 + } 116 + 117 + if (data & HST_STS_FAIL) { 118 + dev_err(&a->dev, "Cleaning fail after KILL!\n"); 119 + outb_p(0x0,SMB_HST_CNTL2); 120 + } 121 + 112 122 return -1; 113 123 } 114 124 ··· 163 149 164 150 if (timeout && !(data & HST_STS_BAD)) 165 151 return 0; 166 - dev_warn(&a->dev, "SMBus Error: %s%s%s%s%s\n", 152 + dev_err(&a->dev, "SMBus Error: %s%s%s%s%s\n", 167 153 timeout ? "Timeout " : "", 168 154 data & HST_STS_FAIL ? "Transaction Failed " : "", 169 155 data & HST_STS_BUSERR ? "No response or Bus Collision " : "", ··· 256 242 } 257 243 258 244 outb_p(((addr & 0x7f) << 1) | (rw & 0x01), SMB_HST_ADD); 259 - outb_p(inb_p(SMB_HST_CNTL2) | (size << 3), SMB_HST_CNTL2); 245 + outb_p((inb_p(SMB_HST_CNTL2) & ~HST_CNTL2_SIZEMASK) | (size << 3), SMB_HST_CNTL2); 260 246 261 247 /* Write the command register */ 248 + 262 249 switch(size) { 263 250 case HST_CNTL2_BYTE: 264 251 if (rw== I2C_SMBUS_WRITE) 265 - outb_p(cmd, SMB_HST_CMD); 252 + /* Beware it uses DAT0 register and not CMD! 
*/ 253 + outb_p(cmd, SMB_HST_DAT0); 266 254 break; 267 255 case HST_CNTL2_BYTE_DATA: 268 256 outb_p(cmd, SMB_HST_CMD); ··· 284 268 goto Done; 285 269 } 286 270 287 - if ((error = ali1563_transaction(a))) 271 + if ((error = ali1563_transaction(a, size))) 288 272 goto Done; 289 273 290 274 if ((rw == I2C_SMBUS_WRITE) || (size == HST_CNTL2_QUICK))
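The HST_CNTL2 change above is a classic read-modify-write fix: the old code OR-ed the new transfer size into the register, so size bits from a previous, larger transaction could survive. Clearing the size field first makes the write correct regardless of prior state. A tiny sketch (register layout as in the hunk, values otherwise hypothetical):

```c
#include <assert.h>

#define SKETCH_SIZEMASK 0x38    /* mirrors HST_CNTL2_SIZEMASK */

/* Correct form: clear the size field, then set the new size. */
static unsigned char cntl2_set_size(unsigned char reg, unsigned char size)
{
        return (reg & ~SKETCH_SIZEMASK) | (size << 3);
}
```

Note that with the old `reg | (size << 3)` form, a register left at 0x3f would keep all size bits set no matter what size is requested.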
+1 -1
drivers/ide/ide-cd.c
··· 1936 1936 * NOTE! The "len" and "addr" checks should possibly have 1937 1937 * separate masks. 1938 1938 */ 1939 - if ((rq->data_len & mask) || (addr & mask)) 1939 + if ((rq->data_len & 15) || (addr & mask)) 1940 1940 info->dma = 0; 1941 1941 } 1942 1942
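The ide-cd one-liner above separates the two checks the comment warns about: the transfer length is now tested against a fixed 16-byte multiple (`& 15`) while the address keeps the hardware alignment mask. A hypothetical helper sketching the corrected gate (mask value is an example, not from the driver):

```c
#include <assert.h>

/* DMA is allowed only when the length is a multiple of 16 bytes
 * and the buffer address satisfies the hardware mask. */
static int cdrom_dma_ok(unsigned int data_len, unsigned long addr,
                        unsigned long addr_mask)
{
        return !((data_len & 15) || (addr & addr_mask));
}
```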
+3
drivers/ide/pci/amd74xx.c
··· 72 72 { PCI_DEVICE_ID_NVIDIA_NFORCE3S_SATA2, 0x50, AMD_UDMA_133 }, 73 73 { PCI_DEVICE_ID_NVIDIA_NFORCE_CK804_IDE, 0x50, AMD_UDMA_133 }, 74 74 { PCI_DEVICE_ID_NVIDIA_NFORCE_MCP04_IDE, 0x50, AMD_UDMA_133 }, 75 + { PCI_DEVICE_ID_NVIDIA_NFORCE_MCP51_IDE, 0x50, AMD_UDMA_133 }, 75 76 { 0 } 76 77 }; 77 78 ··· 488 487 /* 12 */ DECLARE_NV_DEV("NFORCE3-250-SATA2"), 489 488 /* 13 */ DECLARE_NV_DEV("NFORCE-CK804"), 490 489 /* 14 */ DECLARE_NV_DEV("NFORCE-MCP04"), 490 + /* 15 */ DECLARE_NV_DEV("NFORCE-MCP51"), 491 491 }; 492 492 493 493 static int __devinit amd74xx_probe(struct pci_dev *dev, const struct pci_device_id *id) ··· 523 521 #endif 524 522 { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_CK804_IDE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 13 }, 525 523 { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP04_IDE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 14 }, 524 + { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP51_IDE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 15 }, 526 525 { 0, }, 527 526 }; 528 527 MODULE_DEVICE_TABLE(pci, amd74xx_pci_tbl);
-20
drivers/input/gameport/Kconfig
··· 68 68 depends on PCI 69 69 70 70 endif 71 - 72 - # Yes, SOUND_GAMEPORT looks a bit odd. Yes, it ends up being turned on 73 - # in every .config. Please don't touch it. It is here to handle an 74 - # unusual dependency between GAMEPORT and sound drivers. 75 - # 76 - # Some sound drivers call gameport functions. If GAMEPORT is 77 - # not selected, empty stubs are provided for the functions and all is 78 - # well. 79 - # If GAMEPORT is built in, everything is fine. 80 - # If GAMEPORT is a module, however, it would need to be loaded for the 81 - # sound driver to be able to link properly. Therefore, the sound 82 - # driver must be a module as well in that case. Since there's no way 83 - # to express that directly in Kconfig, we use SOUND_GAMEPORT to 84 - # express it. SOUND_GAMEPORT boils down to "if GAMEPORT is 'm', 85 - # anything that depends on SOUND_GAMEPORT must be 'm' as well. if 86 - # GAMEPORT is 'y' or 'n', it can be anything". 87 - config SOUND_GAMEPORT 88 - tristate 89 - default m if GAMEPORT=m 90 - default y
+1 -1
drivers/input/joydev.c
··· 422 422 joydev->nkey++; 423 423 } 424 424 425 - for (i = 0; i < BTN_JOYSTICK - BTN_MISC + 1; i++) 425 + for (i = 0; i < BTN_JOYSTICK - BTN_MISC; i++) 426 426 if (test_bit(i + BTN_MISC, dev->keybit)) { 427 427 joydev->keymap[i] = joydev->nkey; 428 428 joydev->keypam[joydev->nkey] = i + BTN_MISC;
+3 -3
drivers/input/keyboard/atkbd.c
··· 171 171 unsigned char set2; 172 172 } atkbd_scroll_keys[] = { 173 173 { ATKBD_SCR_1, 0xc5 }, 174 - { ATKBD_SCR_2, 0xa9 }, 175 - { ATKBD_SCR_4, 0xb6 }, 176 - { ATKBD_SCR_8, 0xa7 }, 174 + { ATKBD_SCR_2, 0x9d }, 175 + { ATKBD_SCR_4, 0xa4 }, 176 + { ATKBD_SCR_8, 0x9b }, 177 177 { ATKBD_SCR_CLICK, 0xe0 }, 178 178 { ATKBD_SCR_LEFT, 0xcb }, 179 179 { ATKBD_SCR_RIGHT, 0xd2 },
+5 -2
drivers/input/mouse/psmouse-base.c
··· 518 518 /* 519 519 * First, we check if it's a mouse. It should send 0x00 or 0x03 520 520 * in case of an IntelliMouse in 4-byte mode or 0x04 for IM Explorer. 521 + * Sunrex K8561 IR Keyboard/Mouse reports 0xff on second and subsequent 522 + * ID queries, probably due to a firmware bug. 521 523 */ 522 524 523 525 param[0] = 0xa5; 524 526 if (ps2_command(ps2dev, param, PSMOUSE_CMD_GETID)) 525 527 return -1; 526 528 527 - if (param[0] != 0x00 && param[0] != 0x03 && param[0] != 0x04) 529 + if (param[0] != 0x00 && param[0] != 0x03 && 530 + param[0] != 0x04 && param[0] != 0xff) 528 531 return -1; 529 532 530 533 /* ··· 975 972 return -EINVAL; 976 973 977 974 if (!strncmp(val, "any", 3)) { 978 - *((unsigned int *)kp->arg) = -1UL; 975 + *((unsigned int *)kp->arg) = -1U; 979 976 return 0; 980 977 } 981 978
+5 -34
drivers/input/mouse/synaptics.c
··· 143 143 return -1; 144 144 } 145 145 146 - static void print_ident(struct synaptics_data *priv) 147 - { 148 - printk(KERN_INFO "Synaptics Touchpad, model: %ld\n", SYN_ID_MODEL(priv->identity)); 149 - printk(KERN_INFO " Firmware: %ld.%ld\n", SYN_ID_MAJOR(priv->identity), 150 - SYN_ID_MINOR(priv->identity)); 151 - if (SYN_MODEL_ROT180(priv->model_id)) 152 - printk(KERN_INFO " 180 degree mounted touchpad\n"); 153 - if (SYN_MODEL_PORTRAIT(priv->model_id)) 154 - printk(KERN_INFO " portrait touchpad\n"); 155 - printk(KERN_INFO " Sensor: %ld\n", SYN_MODEL_SENSOR(priv->model_id)); 156 - if (SYN_MODEL_NEWABS(priv->model_id)) 157 - printk(KERN_INFO " new absolute packet format\n"); 158 - if (SYN_MODEL_PEN(priv->model_id)) 159 - printk(KERN_INFO " pen detection\n"); 160 - 161 - if (SYN_CAP_EXTENDED(priv->capabilities)) { 162 - printk(KERN_INFO " Touchpad has extended capability bits\n"); 163 - if (SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap)) 164 - printk(KERN_INFO " -> %d multi-buttons, i.e. besides standard buttons\n", 165 - (int)(SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap))); 166 - if (SYN_CAP_MIDDLE_BUTTON(priv->capabilities)) 167 - printk(KERN_INFO " -> middle button\n"); 168 - if (SYN_CAP_FOUR_BUTTON(priv->capabilities)) 169 - printk(KERN_INFO " -> four buttons\n"); 170 - if (SYN_CAP_MULTIFINGER(priv->capabilities)) 171 - printk(KERN_INFO " -> multifinger detection\n"); 172 - if (SYN_CAP_PALMDETECT(priv->capabilities)) 173 - printk(KERN_INFO " -> palm detection\n"); 174 - if (SYN_CAP_PASS_THROUGH(priv->capabilities)) 175 - printk(KERN_INFO " -> pass-through port\n"); 176 - } 177 - } 178 - 179 146 static int synaptics_query_hardware(struct psmouse *psmouse) 180 147 { 181 148 int retries = 0; ··· 633 666 634 667 priv->pkt_type = SYN_MODEL_NEWABS(priv->model_id) ? 
SYN_NEWABS : SYN_OLDABS; 635 668 636 - print_ident(priv); 669 + printk(KERN_INFO "Synaptics Touchpad, model: %ld, fw: %ld.%ld, id: %#lx, caps: %#lx/%#lx\n", 670 + SYN_ID_MODEL(priv->identity), 671 + SYN_ID_MAJOR(priv->identity), SYN_ID_MINOR(priv->identity), 672 + priv->model_id, priv->capabilities, priv->ext_cap); 673 + 637 674 set_input_params(&psmouse->dev, priv); 638 675 639 676 psmouse->protocol_handler = synaptics_process_byte;
+10 -5
drivers/input/mousedev.c
··· 101 101 unsigned char ready, buffer, bufsiz; 102 102 unsigned char imexseq, impsseq; 103 103 enum mousedev_emul mode; 104 + unsigned long last_buttons; 104 105 }; 105 106 106 107 #define MOUSEDEV_SEQ_LEN 6 ··· 225 224 spin_lock_irqsave(&list->packet_lock, flags); 226 225 227 226 p = &list->packets[list->head]; 228 - if (list->ready && p->buttons != packet->buttons) { 227 + if (list->ready && p->buttons != mousedev->packet.buttons) { 229 228 unsigned int new_head = (list->head + 1) % PACKET_QUEUE_LEN; 230 229 if (new_head != list->tail) { 231 230 p = &list->packets[list->head = new_head]; ··· 250 249 p->dz += packet->dz; 251 250 p->buttons = mousedev->packet.buttons; 252 251 253 - list->ready = 1; 252 + if (p->dx || p->dy || p->dz || p->buttons != list->last_buttons) 253 + list->ready = 1; 254 254 255 255 spin_unlock_irqrestore(&list->packet_lock, flags); 256 - kill_fasync(&list->fasync, SIGIO, POLL_IN); 256 + 257 + if (list->ready) 258 + kill_fasync(&list->fasync, SIGIO, POLL_IN); 257 259 } 258 260 259 261 wake_up_interruptible(&mousedev->wait); ··· 481 477 } 482 478 483 479 if (!p->dx && !p->dy && !p->dz) { 484 - if (list->tail == list->head) 480 + if (list->tail == list->head) { 485 481 list->ready = 0; 486 - else 482 + list->last_buttons = p->buttons; 483 + } else 487 484 list->tail = (list->tail + 1) % PACKET_QUEUE_LEN; 488 485 } 489 486
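The mousedev change above adds `last_buttons` so that packets carrying no motion and no button change never mark the queue ready (and thus never SIGIO readers). The readiness predicate can be sketched in isolation (hypothetical free function; the real logic lives inline in mousedev_notify_readers):

```c
#include <assert.h>

/* A packet wakes readers only if it has motion or the button state
 * differs from the last state a reader consumed. */
static int packet_is_interesting(int dx, int dy, int dz,
                                 unsigned long buttons,
                                 unsigned long last_buttons)
{
        return dx || dy || dz || buttons != last_buttons;
}
```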
+26 -6
drivers/input/serio/i8042-x86ia64io.h
··· 88 88 }; 89 89 90 90 /* 91 - * Some Fujitsu notebooks are ahving trouble with touhcpads if 91 + * Some Fujitsu notebooks are having trouble with touchpads if 92 92 * active multiplexing mode is activated. Luckily they don't have 93 93 * external PS/2 ports so we can safely disable it. 94 + * ... apparently some Toshibas don't like MUX mode either and 95 + * die horrible death on reboot. 94 96 */ 95 97 static struct dmi_system_id __initdata i8042_dmi_nomux_table[] = { 96 98 { ··· 117 115 }, 118 116 }, 119 117 { 118 + .ident = "Fujitsu Lifebook S6230", 119 + .matches = { 120 + DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"), 121 + DMI_MATCH(DMI_PRODUCT_NAME, "LifeBook S6230"), 122 + }, 123 + }, 124 + { 120 125 .ident = "Fujitsu T70H", 121 126 .matches = { 122 127 DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"), 123 128 DMI_MATCH(DMI_PRODUCT_NAME, "FMVLT70H"), 129 + }, 130 + }, 131 + { 132 + .ident = "Toshiba P10", 133 + .matches = { 134 + DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"), 135 + DMI_MATCH(DMI_PRODUCT_NAME, "Satellite P10"), 124 136 }, 125 137 }, 126 138 { } ··· 231 215 232 216 static void i8042_pnp_exit(void) 233 217 { 234 - if (i8042_pnp_kbd_registered) 218 + if (i8042_pnp_kbd_registered) { 219 + i8042_pnp_kbd_registered = 0; 235 220 pnp_unregister_driver(&i8042_pnp_kbd_driver); 221 + } 236 222 237 - if (i8042_pnp_aux_registered) 223 + if (i8042_pnp_aux_registered) { 224 + i8042_pnp_aux_registered = 0; 238 225 pnp_unregister_driver(&i8042_pnp_aux_driver); 226 + } 239 227 } 240 228 241 229 static int i8042_pnp_init(void) ··· 247 227 int result_kbd, result_aux; 248 228 249 229 if (i8042_nopnp) { 250 - printk("i8042: PNP detection disabled\n"); 230 + printk(KERN_INFO "i8042: PNP detection disabled\n"); 251 231 return 0; 252 232 } 253 233 ··· 261 241 #if defined(__ia64__) 262 242 return -ENODEV; 263 243 #else 264 - printk(KERN_WARNING "PNP: No PS/2 controller found. Probing ports directly.\n"); 244 + printk(KERN_INFO "PNP: No PS/2 controller found. 
Probing ports directly.\n"); 265 245 return 0; 266 246 #endif 267 247 } ··· 285 265 i8042_pnp_kbd_irq = i8042_kbd_irq; 286 266 } 287 267 288 - if (result_aux > 0 && !i8042_pnp_aux_irq) { 268 + if (!i8042_pnp_aux_irq) { 289 269 printk(KERN_WARNING "PNP: PS/2 controller doesn't have AUX irq; using default %#x\n", i8042_aux_irq); 290 270 i8042_pnp_aux_irq = i8042_aux_irq; 291 271 }
+28 -22
drivers/input/serio/i8042.c
··· 698 698 i8042_interrupt(0, NULL, NULL); 699 699 } 700 700 701 + static int i8042_ctl_test(void) 702 + { 703 + unsigned char param; 704 + 705 + if (!i8042_reset) 706 + return 0; 707 + 708 + if (i8042_command(&param, I8042_CMD_CTL_TEST)) { 709 + printk(KERN_ERR "i8042.c: i8042 controller self test timeout.\n"); 710 + return -1; 711 + } 712 + 713 + if (param != I8042_RET_CTL_TEST) { 714 + printk(KERN_ERR "i8042.c: i8042 controller selftest failed. (%#x != %#x)\n", 715 + param, I8042_RET_CTL_TEST); 716 + return -1; 717 + } 718 + 719 + return 0; 720 + } 701 721 702 722 /* 703 723 * i8042_controller init initializes the i8042 controller, and, ··· 739 719 return -1; 740 720 } 741 721 742 - if (i8042_reset) { 743 - 744 - unsigned char param; 745 - 746 - if (i8042_command(&param, I8042_CMD_CTL_TEST)) { 747 - printk(KERN_ERR "i8042.c: i8042 controller self test timeout.\n"); 748 - return -1; 749 - } 750 - 751 - if (param != I8042_RET_CTL_TEST) { 752 - printk(KERN_ERR "i8042.c: i8042 controller selftest failed. (%#x != %#x)\n", 753 - param, I8042_RET_CTL_TEST); 754 - return -1; 755 - } 756 - } 722 + if (i8042_ctl_test()) 723 + return -1; 757 724 758 725 /* 759 726 * Save the CTR for restoral on unload / reboot. ··· 809 802 */ 810 803 static void i8042_controller_reset(void) 811 804 { 812 - unsigned char param; 813 - 814 805 /* 815 806 * Reset the controller if requested. 816 807 */ 817 808 818 - if (i8042_reset) 819 - if (i8042_command(&param, I8042_CMD_CTL_TEST)) 820 - printk(KERN_ERR "i8042.c: i8042 controller reset timeout.\n"); 809 + i8042_ctl_test(); 821 810 822 811 /* 823 812 * Disable MUX mode if present. ··· 925 922 if (level != RESUME_ENABLE) 926 923 return 0; 927 924 928 - if (i8042_controller_init()) { 929 - printk(KERN_ERR "i8042: resume failed\n"); 925 + if (i8042_ctl_test()) 926 + return -1; 927 + 928 + if (i8042_command(&i8042_ctr, I8042_CMD_CTL_WCTR)) { 929 + printk(KERN_ERR "i8042: Can't write CTR\n"); 930 930 return -1; 931 931 } 932 932
+1 -2
drivers/input/touchscreen/gunze.c
··· 68 68 69 69 if (gunze->idx != GUNZE_MAX_LENGTH || gunze->data[5] != ',' || 70 70 (gunze->data[0] != 'T' && gunze->data[0] != 'R')) { 71 - gunze->data[10] = 0; 72 - printk(KERN_WARNING "gunze.c: bad packet: >%s<\n", gunze->data); 71 + printk(KERN_WARNING "gunze.c: bad packet: >%.*s<\n", GUNZE_MAX_LENGTH, gunze->data); 73 72 return; 74 73 } 75 74
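The gunze.c fix above replaces writing a NUL terminator past the packet buffer with a `%.*s` precision, which bounds how many bytes printf-family functions read from a possibly unterminated array. A standalone sketch (the 11-byte length is an assumption standing in for GUNZE_MAX_LENGTH):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define PKT_LEN 11      /* hypothetical stand-in for GUNZE_MAX_LENGTH */

/* "%.*s" stops after PKT_LEN bytes even without a NUL terminator,
 * so no terminator has to be stored into the packet buffer. */
static int format_bad_packet(char *out, size_t outsz,
                             const unsigned char *pkt)
{
        return snprintf(out, outsz, "bad packet: >%.*s<",
                        PKT_LEN, (const char *)pkt);
}

static int gunze_demo(void)
{
        /* deliberately not NUL-terminated */
        unsigned char pkt[PKT_LEN] =
                { 'X', '1', '2', '3', '4', ',', '5', '6', '7', '8', '9' };
        char buf[64];

        format_bad_packet(buf, sizeof buf, pkt);
        return strcmp(buf, "bad packet: >X1234,56789<") == 0;
}
```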
+4 -4
drivers/macintosh/via-pmu.c
··· 2421 2421 2422 2422 /* Re-enable local CPU interrupts */ 2423 2423 local_irq_enable(); 2424 - mdelay(100); 2424 + mdelay(10); 2425 2425 preempt_enable(); 2426 2426 2427 2427 /* Re-enable clock spreading on some machines */ ··· 2549 2549 return ret; 2550 2550 } 2551 2551 2552 - printk(KERN_DEBUG "HID1, before: %x\n", mfspr(SPRN_HID1)); 2552 + /* Stop environment and ADB interrupts */ 2553 + pmu_request(&req, NULL, 2, PMU_SET_INTR_MASK, 0); 2554 + pmu_wait_complete(&req); 2553 2555 2554 2556 /* Tell PMU what events will wake us up */ 2555 2557 pmu_request(&req, NULL, 4, PMU_POWER_EVENTS, PMU_PWR_CLR_WAKEUP_EVENTS, ··· 2612 2610 pmu_wait_complete(&req); 2613 2611 pmu_request(&req, NULL, 2, PMU_SET_INTR_MASK, pmu_intr_mask); 2614 2612 pmu_wait_complete(&req); 2615 - 2616 - printk(KERN_DEBUG "HID1, after: %x\n", mfspr(SPRN_HID1)); 2617 2613 2618 2614 pmac_wakeup_devices(); 2619 2615
+64 -62
drivers/media/dvb/bt8xx/dst.c
···
906 906 	if (state->dst_type == DST_TYPE_IS_TERR)
907 907 		return 0;
908 908 
909     -	if (state->voltage == SEC_VOLTAGE_OFF)
910     -		paket[4] = 0;
911     -	else
912     -		paket[4] = 1;
913     -
914     -	if (state->tone == SEC_TONE_ON)
915     -		paket[2] = 0x02;
916     -	else
917     -		paket[2] = 0;
918     -	if (state->minicmd == SEC_MINI_A)
919     -		paket[3] = 0x02;
920     -	else
921     -		paket[3] = 0;
922     -
    909 +	paket[4] = state->tx_tuna[4];
    910 +	paket[2] = state->tx_tuna[2];
    911 +	paket[3] = state->tx_tuna[3];
923 912 	paket[7] = dst_check_sum (paket, 7);
924 913 	dst_command(state, paket, 8);
    914 +
925 915 	return 0;
926 916 }
927 917 
···
970 980 
971 981 static int dst_write_tuna(struct dvb_frontend* fe)
972 982 {
973     -	struct dst_state* state = (struct dst_state*) fe->demodulator_priv;
    983 +	struct dst_state* state = fe->demodulator_priv;
974 984 	int retval;
975 985 	u8 reply;
976 986 
···
1038 1048 
1039 1049 static int dst_set_diseqc(struct dvb_frontend* fe, struct dvb_diseqc_master_cmd* cmd)
1040 1050 {
1041      -	struct dst_state* state = (struct dst_state*) fe->demodulator_priv;
     1051 +	struct dst_state* state = fe->demodulator_priv;
1042 1052 	u8 paket[8] = { 0x00, 0x08, 0x04, 0xe0, 0x10, 0x38, 0xf0, 0xec };
1043 1053 
1044      -	if (state->dst_type == DST_TYPE_IS_TERR)
     1054 +	if (state->dst_type != DST_TYPE_IS_SAT)
1045 1055 		return 0;
1046 1056 
1047 1057 	if (cmd->msg_len == 0 || cmd->msg_len > 4)
···
1054 1064 
1055 1065 static int dst_set_voltage(struct dvb_frontend* fe, fe_sec_voltage_t voltage)
1056 1066 {
1057      -	u8 *val;
1058 1067 	int need_cmd;
1059      -	struct dst_state* state = (struct dst_state*) fe->demodulator_priv;
     1068 +	struct dst_state* state = fe->demodulator_priv;
1060 1069 
1061 1070 	state->voltage = voltage;
1062 1071 
1063      -	if (state->dst_type == DST_TYPE_IS_TERR)
     1072 +	if (state->dst_type != DST_TYPE_IS_SAT)
1064 1073 		return 0;
1065 1074 
1066 1075 	need_cmd = 0;
1067      -	val = &state->tx_tuna[0];
1068      -	val[8] &= ~0x40;
1069 1076 	switch (voltage) {
1070      -	case SEC_VOLTAGE_13:
1071      -		if ((state->diseq_flags & HAS_POWER) == 0)
     1077 +	case SEC_VOLTAGE_13:
     1078 +	case SEC_VOLTAGE_18:
     1079 +		if ((state->diseq_flags & HAS_POWER) == 0)
     1080 +			need_cmd = 1;
     1081 +		state->diseq_flags |= HAS_POWER;
     1082 +		state->tx_tuna[4] = 0x01;
     1083 +		break;
     1084 +
     1085 +	case SEC_VOLTAGE_OFF:
1072 1086 			need_cmd = 1;
1073      -		state->diseq_flags |= HAS_POWER;
1074      -		break;
     1087 +		state->diseq_flags &= ~(HAS_POWER | HAS_LOCK | ATTEMPT_TUNE);
     1088 +		state->tx_tuna[4] = 0x00;
     1089 +		break;
1075 1090 
1076      -	case SEC_VOLTAGE_18:
1077      -		if ((state->diseq_flags & HAS_POWER) == 0)
1078      -			need_cmd = 1;
1079      -		state->diseq_flags |= HAS_POWER;
1080      -		val[8] |= 0x40;
1081      -		break;
1082      -
1083      -	case SEC_VOLTAGE_OFF:
1084      -		need_cmd = 1;
1085      -		state->diseq_flags &= ~(HAS_POWER | HAS_LOCK | ATTEMPT_TUNE);
1086      -		break;
1087      -
1088      -	default:
1089      -		return -EINVAL;
     1091 +	default:
     1092 +		return -EINVAL;
1090 1093 	}
1091 1094 	if (need_cmd)
1092 1095 		dst_tone_power_cmd(state);
···
1089 1106 
1090 1107 static int dst_set_tone(struct dvb_frontend* fe, fe_sec_tone_mode_t tone)
1091 1108 {
1092      -	u8 *val;
1093      -	struct dst_state* state = (struct dst_state*) fe->demodulator_priv;
     1109 +	struct dst_state* state = fe->demodulator_priv;
1094 1110 
1095 1111 	state->tone = tone;
1096 1112 
1097      -	if (state->dst_type == DST_TYPE_IS_TERR)
     1113 +	if (state->dst_type != DST_TYPE_IS_SAT)
1098 1114 		return 0;
1099 1115 
1100      -	val = &state->tx_tuna[0];
1101      -
1102      -	val[8] &= ~0x1;
1103      -
1104 1116 	switch (tone) {
1105      -	case SEC_TONE_OFF:
1106      -		break;
     1117 +	case SEC_TONE_OFF:
     1118 +		state->tx_tuna[2] = 0xff;
     1119 +		break;
1107 1120 
1108      -	case SEC_TONE_ON:
1109      -		val[8] |= 1;
1110      -		break;
     1121 +	case SEC_TONE_ON:
     1122 +		state->tx_tuna[2] = 0x02;
     1123 +		break;
1111 1124 
1112      -	default:
1113      -		return -EINVAL;
     1125 +	default:
     1126 +		return -EINVAL;
1114 1127 	}
1115 1128 	dst_tone_power_cmd(state);
1116 1129 
1117 1130 	return 0;
1118 1131 }
1119 1132 
     1133 +static int dst_send_burst(struct dvb_frontend *fe, fe_sec_mini_cmd_t minicmd)
     1134 +{
     1135 +	struct dst_state *state = fe->demodulator_priv;
     1136 +
     1137 +	if (state->dst_type != DST_TYPE_IS_SAT)
     1138 +		return 0;
     1139 +
     1140 +	state->minicmd = minicmd;
     1141 +
     1142 +	switch (minicmd) {
     1143 +	case SEC_MINI_A:
     1144 +		state->tx_tuna[3] = 0x02;
     1145 +		break;
     1146 +	case SEC_MINI_B:
     1147 +		state->tx_tuna[3] = 0xff;
     1148 +		break;
     1149 +	}
     1150 +	dst_tone_power_cmd(state);
     1151 +
     1152 +	return 0;
     1153 +}
     1154 +
     1155 +
1120 1156 static int dst_init(struct dvb_frontend* fe)
1121 1157 {
1122      -	struct dst_state* state = (struct dst_state*) fe->demodulator_priv;
     1158 +	struct dst_state* state = fe->demodulator_priv;
1123 1159 	static u8 ini_satci_tuna[] = { 9, 0, 3, 0xb6, 1, 0, 0x73, 0x21, 0, 0 };
1124 1160 	static u8 ini_satfta_tuna[] = { 0, 0, 3, 0xb6, 1, 0x55, 0xbd, 0x50, 0, 0 };
1125 1161 	static u8 ini_tvfta_tuna[] = { 0, 0, 3, 0xb6, 1, 7, 0x0, 0x0, 0, 0 };
···
1170 1168 
1171 1169 static int dst_read_status(struct dvb_frontend* fe, fe_status_t* status)
1172 1170 {
1173      -	struct dst_state* state = (struct dst_state*) fe->demodulator_priv;
     1171 +	struct dst_state* state = fe->demodulator_priv;
1174 1172 
1175 1173 	*status = 0;
1176 1174 	if (state->diseq_flags & HAS_LOCK) {
···
1184 1182 
1185 1183 static int dst_read_signal_strength(struct dvb_frontend* fe, u16* strength)
1186 1184 {
1187      -	struct dst_state* state = (struct dst_state*) fe->demodulator_priv;
     1185 +	struct dst_state* state = fe->demodulator_priv;
1188 1186 
1189 1187 	dst_get_signal(state);
1190 1188 	*strength = state->decode_strength;
···
1194 1192 
1195 1193 static int dst_read_snr(struct dvb_frontend* fe, u16* snr)
1196 1194 {
1197      -	struct dst_state* state = (struct dst_state*) fe->demodulator_priv;
     1195 +	struct dst_state* state = fe->demodulator_priv;
1198 1196 
1199 1197 	dst_get_signal(state);
1200 1198 	*snr = state->decode_snr;
···
1204 1202 
1205 1203 static int dst_set_frontend(struct dvb_frontend* fe, struct dvb_frontend_parameters *p)
1206 1204 {
1207      -	struct dst_state* state = (struct dst_state*) fe->demodulator_priv;
     1205 +	struct dst_state* state = fe->demodulator_priv;
1208 1206 
1209 1207 	dst_set_freq(state, p->frequency);
1210 1208 	if (verbose > 4)
···
1230 1228 
1231 1229 static int dst_get_frontend(struct dvb_frontend* fe, struct dvb_frontend_parameters *p)
1232 1230 {
1233      -	struct dst_state* state = (struct dst_state*) fe->demodulator_priv;
     1231 +	struct dst_state* state = fe->demodulator_priv;
1234 1232 
1235 1233 	p->frequency = state->decode_freq;
1236 1234 	p->inversion = state->inversion;
···
1250 1248 
1251 1249 static void dst_release(struct dvb_frontend* fe)
1252 1250 {
1253      -	struct dst_state* state = (struct dst_state*) fe->demodulator_priv;
     1251 +	struct dst_state* state = fe->demodulator_priv;
1254 1252 	kfree(state);
1255 1253 }
1256 1254 
···
1348 1346 	.read_signal_strength = dst_read_signal_strength,
1349 1347 	.read_snr = dst_read_snr,
1350 1348 
1351      -	.diseqc_send_burst = dst_set_tone,
     1349 +	.diseqc_send_burst = dst_send_burst,
1352 1350 	.diseqc_send_master_cmd = dst_set_diseqc,
1353 1351 	.set_voltage = dst_set_voltage,
1354 1352 	.set_tone = dst_set_tone,
+556 -15
drivers/net/tg3.c
···
133 133 /* number of ETHTOOL_GSTATS u64's */
134 134 #define TG3_NUM_STATS (sizeof(struct tg3_ethtool_stats)/sizeof(u64))
135 135 
    136 +#define TG3_NUM_TEST 6
    137 +
136 138 static char version[] __devinitdata =
137 139 	DRV_MODULE_NAME ".c:v" DRV_MODULE_VERSION " (" DRV_MODULE_RELDATE ")\n";
138 140 
···
316 314 	{ "nic_irqs" },
317 315 	{ "nic_avoided_irqs" },
318 316 	{ "nic_tx_threshold_hit" }
    317 +};
    318 +
    319 +static struct {
    320 +	const char string[ETH_GSTRING_LEN];
    321 +} ethtool_test_keys[TG3_NUM_TEST] = {
    322 +	{ "nvram test (online) " },
    323 +	{ "link test (online) " },
    324 +	{ "register test (offline)" },
    325 +	{ "memory test (offline)" },
    326 +	{ "loopback test (offline)" },
    327 +	{ "interrupt test (offline)" },
319 328 };
320 329 
321 330 static void tg3_write_indirect_reg32(struct tg3 *tp, u32 off, u32 val)
···
3083 3070 }
3084 3071 
3085 3072 static int tg3_init_hw(struct tg3 *);
3086      -static int tg3_halt(struct tg3 *, int);
     3073 +static int tg3_halt(struct tg3 *, int, int);
3087 3074 
3088 3075 #ifdef CONFIG_NET_POLL_CONTROLLER
3089 3076 static void tg3_poll_controller(struct net_device *dev)
···
3107 3094 	restart_timer = tp->tg3_flags2 & TG3_FLG2_RESTART_TIMER;
3108 3095 	tp->tg3_flags2 &= ~TG3_FLG2_RESTART_TIMER;
3109 3096 
3110      -	tg3_halt(tp, 0);
     3097 +	tg3_halt(tp, RESET_KIND_SHUTDOWN, 0);
3111 3098 	tg3_init_hw(tp);
3112 3099 
3113 3100 	tg3_netif_start(tp);
···
3453 3440 	spin_lock_irq(&tp->lock);
3454 3441 	spin_lock(&tp->tx_lock);
3455 3442 
3456      -	tg3_halt(tp, 1);
     3443 +	tg3_halt(tp, RESET_KIND_SHUTDOWN, 1);
3457 3444 
3458 3445 	tg3_set_mtu(dev, tp, new_mtu);
3459 3446 
···
4144 4131 }
4145 4132 
4146 4133 /* tp->lock is held. */
4147      -static int tg3_halt(struct tg3 *tp, int silent)
     4134 +static int tg3_halt(struct tg3 *tp, int kind, int silent)
4148 4135 {
4149 4136 	int err;
4150 4137 
4151 4138 	tg3_stop_fw(tp);
4152 4139 
4153      -	tg3_write_sig_pre_reset(tp, RESET_KIND_SHUTDOWN);
     4140 +	tg3_write_sig_pre_reset(tp, kind);
4154 4141 
4155 4142 	tg3_abort_hw(tp, silent);
4156 4143 	err = tg3_chip_reset(tp);
4157 4144 
4158      -	tg3_write_sig_legacy(tp, RESET_KIND_SHUTDOWN);
4159      -	tg3_write_sig_post_reset(tp, RESET_KIND_SHUTDOWN);
     4145 +	tg3_write_sig_legacy(tp, kind);
     4146 +	tg3_write_sig_post_reset(tp, kind);
4160 4147 
4161 4148 	if (err)
4162 4149 		return err;
···
4370 4357 	 */
4371 4358 	tp->tg3_flags |= TG3_FLAG_PCIX_TARGET_HWBUG;
4372 4359 
     4360 +	/* It is possible that bootcode is still loading at this point.
     4361 +	 * Get the nvram lock first before halting the cpu.
     4362 +	 */
     4363 +	tg3_nvram_lock(tp);
4373 4364 	err = tg3_halt_cpu(tp, cpu_base);
     4365 +	tg3_nvram_unlock(tp);
4374 4366 	if (err)
4375 4367 		goto out;
···
5899 5881 	int err, i;
5900 5882 	u32 int_mbox = 0;
5901 5883 
     5884 +	if (!netif_running(dev))
     5885 +		return -ENODEV;
     5886 +
5902 5887 	tg3_disable_ints(tp);
5903 5888 
5904 5889 	free_irq(tp->pdev->irq, dev);
···
6005 5984 	spin_lock_irq(&tp->lock);
6006 5985 	spin_lock(&tp->tx_lock);
6007 5986 
6008      -	tg3_halt(tp, 1);
     5987 +	tg3_halt(tp, RESET_KIND_SHUTDOWN, 1);
6009 5988 	err = tg3_init_hw(tp);
6010 5989 
6011 5990 	spin_unlock(&tp->tx_lock);
···
6081 6060 
6082 6061 	err = tg3_init_hw(tp);
6083 6062 	if (err) {
6084      -		tg3_halt(tp, 1);
     6063 +		tg3_halt(tp, RESET_KIND_SHUTDOWN, 1);
6085 6064 		tg3_free_rings(tp);
6086 6065 	} else {
6087 6066 		if (tp->tg3_flags & TG3_FLAG_TAGGED_STATUS)
···
6125 6104 		pci_disable_msi(tp->pdev);
6126 6105 		tp->tg3_flags2 &= ~TG3_FLG2_USING_MSI;
6127 6106 	}
6128      -	tg3_halt(tp, 1);
     6107 +	tg3_halt(tp, RESET_KIND_SHUTDOWN, 1);
6129 6108 	tg3_free_rings(tp);
6130 6109 	tg3_free_consistent(tp);
6131 6110 
···
6398 6377 
6399 6378 	tg3_disable_ints(tp);
6400 6379 
6401      -	tg3_halt(tp, 1);
     6380 +	tg3_halt(tp, RESET_KIND_SHUTDOWN, 1);
6402 6381 	tg3_free_rings(tp);
6403 6382 	tp->tg3_flags &=
6404 6383 		~(TG3_FLAG_INIT_COMPLETE |
···
7118 7097 	tp->tx_pending = ering->tx_pending;
7119 7098 
7120 7099 	if (netif_running(dev)) {
7121      -		tg3_halt(tp, 1);
     7100 +		tg3_halt(tp, RESET_KIND_SHUTDOWN, 1);
7122 7101 		tg3_init_hw(tp);
7123 7102 		tg3_netif_start(tp);
7124 7103 	}
···
7161 7140 		tp->tg3_flags &= ~TG3_FLAG_TX_PAUSE;
7162 7141 
7163 7142 	if (netif_running(dev)) {
7164      -		tg3_halt(tp, 1);
     7143 +		tg3_halt(tp, RESET_KIND_SHUTDOWN, 1);
7165 7144 		tg3_init_hw(tp);
7166 7145 		tg3_netif_start(tp);
7167 7146 	}
···
7220 7199 	return TG3_NUM_STATS;
7221 7200 }
7222 7201 
     7202 +static int tg3_get_test_count (struct net_device *dev)
     7203 +{
     7204 +	return TG3_NUM_TEST;
     7205 +}
     7206 +
7223 7207 static void tg3_get_strings (struct net_device *dev, u32 stringset, u8 *buf)
7224 7208 {
7225 7209 	switch (stringset) {
7226 7210 	case ETH_SS_STATS:
7227 7211 		memcpy(buf, &ethtool_stats_keys, sizeof(ethtool_stats_keys));
     7212 +		break;
     7213 +	case ETH_SS_TEST:
     7214 +		memcpy(buf, &ethtool_test_keys, sizeof(ethtool_test_keys));
7228 7215 		break;
7229 7216 	default:
7230 7217 		WARN_ON(1);	/* we need a WARN() */
···
7245 7216 {
7246 7217 	struct tg3 *tp = netdev_priv(dev);
7247 7218 	memcpy(tmp_stats, tg3_get_estats(tp), sizeof(tp->estats));
     7219 +}
     7220 +
     7221 +#define NVRAM_TEST_SIZE 0x100
     7222 +
     7223 +static int tg3_test_nvram(struct tg3 *tp)
     7224 +{
     7225 +	u32 *buf, csum;
     7226 +	int i, j, err = 0;
     7227 +
     7228 +	buf = kmalloc(NVRAM_TEST_SIZE, GFP_KERNEL);
     7229 +	if (buf == NULL)
     7230 +		return -ENOMEM;
     7231 +
     7232 +	for (i = 0, j = 0; i < NVRAM_TEST_SIZE; i += 4, j++) {
     7233 +		u32 val;
     7234 +
     7235 +		if ((err = tg3_nvram_read(tp, i, &val)) != 0)
     7236 +			break;
     7237 +		buf[j] = cpu_to_le32(val);
     7238 +	}
     7239 +	if (i < NVRAM_TEST_SIZE)
     7240 +		goto out;
     7241 +
     7242 +	err = -EIO;
     7243 +	if (cpu_to_be32(buf[0]) != TG3_EEPROM_MAGIC)
     7244 +		goto out;
     7245 +
     7246 +	/* Bootstrap checksum at offset 0x10 */
     7247 +	csum = calc_crc((unsigned char *) buf, 0x10);
     7248 +	if(csum != cpu_to_le32(buf[0x10/4]))
     7249 +		goto out;
     7250 +
     7251 +	/* Manufacturing block starts at offset 0x74, checksum at 0xfc */
     7252 +	csum = calc_crc((unsigned char *) &buf[0x74/4], 0x88);
     7253 +	if (csum != cpu_to_le32(buf[0xfc/4]))
     7254 +		goto out;
     7255 +
     7256 +	err = 0;
     7257 +
     7258 +out:
     7259 +	kfree(buf);
     7260 +	return err;
     7261 +}
     7262 +
     7263 +#define TG3_SERDES_TIMEOUT_SEC	2
     7264 +#define TG3_COPPER_TIMEOUT_SEC	6
     7265 +
     7266 +static int tg3_test_link(struct tg3 *tp)
     7267 +{
     7268 +	int i, max;
     7269 +
     7270 +	if (!netif_running(tp->dev))
     7271 +		return -ENODEV;
     7272 +
     7273 +	if (tp->tg3_flags2 & TG3_FLG2_PHY_SERDES)
     7274 +		max = TG3_SERDES_TIMEOUT_SEC;
     7275 +	else
     7276 +		max = TG3_COPPER_TIMEOUT_SEC;
     7277 +
     7278 +	for (i = 0; i < max; i++) {
     7279 +		if (netif_carrier_ok(tp->dev))
     7280 +			return 0;
     7281 +
     7282 +		if (msleep_interruptible(1000))
     7283 +			break;
     7284 +	}
     7285 +
     7286 +	return -EIO;
     7287 +}
     7288 +
     7289 +/* Only test the commonly used registers */
     7290 +static int tg3_test_registers(struct tg3 *tp)
     7291 +{
     7292 +	int i, is_5705;
     7293 +	u32 offset, read_mask, write_mask, val, save_val, read_val;
     7294 +	static struct {
     7295 +		u16 offset;
     7296 +		u16 flags;
     7297 +#define TG3_FL_5705	0x1
     7298 +#define TG3_FL_NOT_5705	0x2
     7299 +#define TG3_FL_NOT_5788	0x4
     7300 +		u32 read_mask;
     7301 +		u32 write_mask;
     7302 +	} reg_tbl[] = {
     7303 +		/* MAC Control Registers */
     7304 +		{ MAC_MODE, TG3_FL_NOT_5705,
     7305 +			0x00000000, 0x00ef6f8c },
     7306 +		{ MAC_MODE, TG3_FL_5705,
     7307 +			0x00000000, 0x01ef6b8c },
     7308 +		{ MAC_STATUS, TG3_FL_NOT_5705,
     7309 +			0x03800107, 0x00000000 },
     7310 +		{ MAC_STATUS, TG3_FL_5705,
     7311 +			0x03800100, 0x00000000 },
     7312 +		{ MAC_ADDR_0_HIGH, 0x0000,
     7313 +			0x00000000, 0x0000ffff },
     7314 +		{ MAC_ADDR_0_LOW, 0x0000,
     7315 +			0x00000000, 0xffffffff },
     7316 +		{ MAC_RX_MTU_SIZE, 0x0000,
     7317 +			0x00000000, 0x0000ffff },
     7318 +		{ MAC_TX_MODE, 0x0000,
     7319 +			0x00000000, 0x00000070 },
     7320 +		{ MAC_TX_LENGTHS, 0x0000,
     7321 +			0x00000000, 0x00003fff },
     7322 +		{ MAC_RX_MODE, TG3_FL_NOT_5705,
     7323 +			0x00000000, 0x000007fc },
     7324 +		{ MAC_RX_MODE, TG3_FL_5705,
     7325 +			0x00000000, 0x000007dc },
     7326 +		{ MAC_HASH_REG_0, 0x0000,
     7327 +			0x00000000, 0xffffffff },
     7328 +		{ MAC_HASH_REG_1, 0x0000,
     7329 +			0x00000000, 0xffffffff },
     7330 +		{ MAC_HASH_REG_2, 0x0000,
     7331 +			0x00000000, 0xffffffff },
     7332 +		{ MAC_HASH_REG_3, 0x0000,
     7333 +			0x00000000, 0xffffffff },
     7334 +
     7335 +		/* Receive Data and Receive BD Initiator Control Registers. */
     7336 +		{ RCVDBDI_JUMBO_BD+0, TG3_FL_NOT_5705,
     7337 +			0x00000000, 0xffffffff },
     7338 +		{ RCVDBDI_JUMBO_BD+4, TG3_FL_NOT_5705,
     7339 +			0x00000000, 0xffffffff },
     7340 +		{ RCVDBDI_JUMBO_BD+8, TG3_FL_NOT_5705,
     7341 +			0x00000000, 0x00000003 },
     7342 +		{ RCVDBDI_JUMBO_BD+0xc, TG3_FL_NOT_5705,
     7343 +			0x00000000, 0xffffffff },
     7344 +		{ RCVDBDI_STD_BD+0, 0x0000,
     7345 +			0x00000000, 0xffffffff },
     7346 +		{ RCVDBDI_STD_BD+4, 0x0000,
     7347 +			0x00000000, 0xffffffff },
     7348 +		{ RCVDBDI_STD_BD+8, 0x0000,
     7349 +			0x00000000, 0xffff0002 },
     7350 +		{ RCVDBDI_STD_BD+0xc, 0x0000,
     7351 +			0x00000000, 0xffffffff },
     7352 +
     7353 +		/* Receive BD Initiator Control Registers. */
     7354 +		{ RCVBDI_STD_THRESH, TG3_FL_NOT_5705,
     7355 +			0x00000000, 0xffffffff },
     7356 +		{ RCVBDI_STD_THRESH, TG3_FL_5705,
     7357 +			0x00000000, 0x000003ff },
     7358 +		{ RCVBDI_JUMBO_THRESH, TG3_FL_NOT_5705,
     7359 +			0x00000000, 0xffffffff },
     7360 +
     7361 +		/* Host Coalescing Control Registers. */
     7362 +		{ HOSTCC_MODE, TG3_FL_NOT_5705,
     7363 +			0x00000000, 0x00000004 },
     7364 +		{ HOSTCC_MODE, TG3_FL_5705,
     7365 +			0x00000000, 0x000000f6 },
     7366 +		{ HOSTCC_RXCOL_TICKS, TG3_FL_NOT_5705,
     7367 +			0x00000000, 0xffffffff },
     7368 +		{ HOSTCC_RXCOL_TICKS, TG3_FL_5705,
     7369 +			0x00000000, 0x000003ff },
     7370 +		{ HOSTCC_TXCOL_TICKS, TG3_FL_NOT_5705,
     7371 +			0x00000000, 0xffffffff },
     7372 +		{ HOSTCC_TXCOL_TICKS, TG3_FL_5705,
     7373 +			0x00000000, 0x000003ff },
     7374 +		{ HOSTCC_RXMAX_FRAMES, TG3_FL_NOT_5705,
     7375 +			0x00000000, 0xffffffff },
     7376 +		{ HOSTCC_RXMAX_FRAMES, TG3_FL_5705 | TG3_FL_NOT_5788,
     7377 +			0x00000000, 0x000000ff },
     7378 +		{ HOSTCC_TXMAX_FRAMES, TG3_FL_NOT_5705,
     7379 +			0x00000000, 0xffffffff },
     7380 +		{ HOSTCC_TXMAX_FRAMES, TG3_FL_5705 | TG3_FL_NOT_5788,
     7381 +			0x00000000, 0x000000ff },
     7382 +		{ HOSTCC_RXCOAL_TICK_INT, TG3_FL_NOT_5705,
     7383 +			0x00000000, 0xffffffff },
     7384 +		{ HOSTCC_TXCOAL_TICK_INT, TG3_FL_NOT_5705,
     7385 +			0x00000000, 0xffffffff },
     7386 +		{ HOSTCC_RXCOAL_MAXF_INT, TG3_FL_NOT_5705,
     7387 +			0x00000000, 0xffffffff },
     7388 +		{ HOSTCC_RXCOAL_MAXF_INT, TG3_FL_5705 | TG3_FL_NOT_5788,
     7389 +			0x00000000, 0x000000ff },
     7390 +		{ HOSTCC_TXCOAL_MAXF_INT, TG3_FL_NOT_5705,
     7391 +			0x00000000, 0xffffffff },
     7392 +		{ HOSTCC_TXCOAL_MAXF_INT, TG3_FL_5705 | TG3_FL_NOT_5788,
     7393 +			0x00000000, 0x000000ff },
     7394 +		{ HOSTCC_STAT_COAL_TICKS, TG3_FL_NOT_5705,
     7395 +			0x00000000, 0xffffffff },
     7396 +		{ HOSTCC_STATS_BLK_HOST_ADDR, TG3_FL_NOT_5705,
     7397 +			0x00000000, 0xffffffff },
     7398 +		{ HOSTCC_STATS_BLK_HOST_ADDR+4, TG3_FL_NOT_5705,
     7399 +			0x00000000, 0xffffffff },
     7400 +		{ HOSTCC_STATUS_BLK_HOST_ADDR, 0x0000,
     7401 +			0x00000000, 0xffffffff },
     7402 +		{ HOSTCC_STATUS_BLK_HOST_ADDR+4, 0x0000,
     7403 +			0x00000000, 0xffffffff },
     7404 +		{ HOSTCC_STATS_BLK_NIC_ADDR, 0x0000,
     7405 +			0xffffffff, 0x00000000 },
     7406 +		{ HOSTCC_STATUS_BLK_NIC_ADDR, 0x0000,
     7407 +			0xffffffff, 0x00000000 },
     7408 +
     7409 +		/* Buffer Manager Control Registers. */
     7410 +		{ BUFMGR_MB_POOL_ADDR, 0x0000,
     7411 +			0x00000000, 0x007fff80 },
     7412 +		{ BUFMGR_MB_POOL_SIZE, 0x0000,
     7413 +			0x00000000, 0x007fffff },
     7414 +		{ BUFMGR_MB_RDMA_LOW_WATER, 0x0000,
     7415 +			0x00000000, 0x0000003f },
     7416 +		{ BUFMGR_MB_MACRX_LOW_WATER, 0x0000,
     7417 +			0x00000000, 0x000001ff },
     7418 +		{ BUFMGR_MB_HIGH_WATER, 0x0000,
     7419 +			0x00000000, 0x000001ff },
     7420 +		{ BUFMGR_DMA_DESC_POOL_ADDR, TG3_FL_NOT_5705,
     7421 +			0xffffffff, 0x00000000 },
     7422 +		{ BUFMGR_DMA_DESC_POOL_SIZE, TG3_FL_NOT_5705,
     7423 +			0xffffffff, 0x00000000 },
     7424 +
     7425 +		/* Mailbox Registers */
     7426 +		{ GRCMBOX_RCVSTD_PROD_IDX+4, 0x0000,
     7427 +			0x00000000, 0x000001ff },
     7428 +		{ GRCMBOX_RCVJUMBO_PROD_IDX+4, TG3_FL_NOT_5705,
     7429 +			0x00000000, 0x000001ff },
     7430 +		{ GRCMBOX_RCVRET_CON_IDX_0+4, 0x0000,
     7431 +			0x00000000, 0x000007ff },
     7432 +		{ GRCMBOX_SNDHOST_PROD_IDX_0+4, 0x0000,
     7433 +			0x00000000, 0x000001ff },
     7434 +
     7435 +		{ 0xffff, 0x0000, 0x00000000, 0x00000000 },
     7436 +	};
     7437 +
     7438 +	if (tp->tg3_flags2 & TG3_FLG2_5705_PLUS)
     7439 +		is_5705 = 1;
     7440 +	else
     7441 +		is_5705 = 0;
     7442 +
     7443 +	for (i = 0; reg_tbl[i].offset != 0xffff; i++) {
     7444 +		if (is_5705 && (reg_tbl[i].flags & TG3_FL_NOT_5705))
     7445 +			continue;
     7446 +
     7447 +		if (!is_5705 && (reg_tbl[i].flags & TG3_FL_5705))
     7448 +			continue;
     7449 +
     7450 +		if ((tp->tg3_flags2 & TG3_FLG2_IS_5788) &&
     7451 +		    (reg_tbl[i].flags & TG3_FL_NOT_5788))
     7452 +			continue;
     7453 +
     7454 +		offset = (u32) reg_tbl[i].offset;
     7455 +		read_mask = reg_tbl[i].read_mask;
     7456 +		write_mask = reg_tbl[i].write_mask;
     7457 +
     7458 +		/* Save the original register content */
     7459 +		save_val = tr32(offset);
     7460 +
     7461 +		/* Determine the read-only value. */
     7462 +		read_val = save_val & read_mask;
     7463 +
     7464 +		/* Write zero to the register, then make sure the read-only bits
     7465 +		 * are not changed and the read/write bits are all zeros.
     7466 +		 */
     7467 +		tw32(offset, 0);
     7468 +
     7469 +		val = tr32(offset);
     7470 +
     7471 +		/* Test the read-only and read/write bits. */
     7472 +		if (((val & read_mask) != read_val) || (val & write_mask))
     7473 +			goto out;
     7474 +
     7475 +		/* Write ones to all the bits defined by RdMask and WrMask, then
     7476 +		 * make sure the read-only bits are not changed and the
     7477 +		 * read/write bits are all ones.
     7478 +		 */
     7479 +		tw32(offset, read_mask | write_mask);
     7480 +
     7481 +		val = tr32(offset);
     7482 +
     7483 +		/* Test the read-only bits. */
     7484 +		if ((val & read_mask) != read_val)
     7485 +			goto out;
     7486 +
     7487 +		/* Test the read/write bits. */
     7488 +		if ((val & write_mask) != write_mask)
     7489 +			goto out;
     7490 +
     7491 +		tw32(offset, save_val);
     7492 +	}
     7493 +
     7494 +	return 0;
     7495 +
     7496 +out:
     7497 +	printk(KERN_ERR PFX "Register test failed at offset %x\n", offset);
     7498 +	tw32(offset, save_val);
     7499 +	return -EIO;
     7500 +}
     7501 +
     7502 +static int tg3_do_mem_test(struct tg3 *tp, u32 offset, u32 len)
     7503 +{
     7504 +	static u32 test_pattern[] = { 0x00000000, 0xffffffff, 0xaa55a55a };
     7505 +	int i;
     7506 +	u32 j;
     7507 +
     7508 +	for (i = 0; i < sizeof(test_pattern)/sizeof(u32); i++) {
     7509 +		for (j = 0; j < len; j += 4) {
     7510 +			u32 val;
     7511 +
     7512 +			tg3_write_mem(tp, offset + j, test_pattern[i]);
     7513 +			tg3_read_mem(tp, offset + j, &val);
     7514 +			if (val != test_pattern[i])
     7515 +				return -EIO;
     7516 +		}
     7517 +	}
     7518 +	return 0;
     7519 +}
     7520 +
     7521 +static int tg3_test_memory(struct tg3 *tp)
     7522 +{
     7523 +	static struct mem_entry {
     7524 +		u32 offset;
     7525 +		u32 len;
     7526 +	} mem_tbl_570x[] = {
     7527 +		{ 0x00000000, 0x01000},
     7528 +		{ 0x00002000, 0x1c000},
     7529 +		{ 0xffffffff, 0x00000}
     7530 +	}, mem_tbl_5705[] = {
     7531 +		{ 0x00000100, 0x0000c},
     7532 +		{ 0x00000200, 0x00008},
     7533 +		{ 0x00000b50, 0x00400},
     7534 +		{ 0x00004000, 0x00800},
     7535 +		{ 0x00006000, 0x01000},
     7536 +		{ 0x00008000, 0x02000},
     7537 +		{ 0x00010000, 0x0e000},
     7538 +		{ 0xffffffff, 0x00000}
     7539 +	};
     7540 +	struct mem_entry *mem_tbl;
     7541 +	int err = 0;
     7542 +	int i;
     7543 +
     7544 +	if (tp->tg3_flags2 & TG3_FLG2_5705_PLUS)
     7545 +		mem_tbl = mem_tbl_5705;
     7546 +	else
     7547 +		mem_tbl = mem_tbl_570x;
     7548 +
     7549 +	for (i = 0; mem_tbl[i].offset != 0xffffffff; i++) {
     7550 +		if ((err = tg3_do_mem_test(tp, mem_tbl[i].offset,
     7551 +		    mem_tbl[i].len)) != 0)
     7552 +			break;
     7553 +	}
     7554 +
     7555 +	return err;
     7556 +}
     7557 +
     7558 +static int tg3_test_loopback(struct tg3 *tp)
     7559 +{
     7560 +	u32 mac_mode, send_idx, rx_start_idx, rx_idx, tx_idx, opaque_key;
     7561 +	u32 desc_idx;
     7562 +	struct sk_buff *skb, *rx_skb;
     7563 +	u8 *tx_data;
     7564 +	dma_addr_t map;
     7565 +	int num_pkts, tx_len, rx_len, i, err;
     7566 +	struct tg3_rx_buffer_desc *desc;
     7567 +
     7568 +	if (!netif_running(tp->dev))
     7569 +		return -ENODEV;
     7570 +
     7571 +	err = -EIO;
     7572 +
     7573 +	tg3_abort_hw(tp, 1);
     7574 +
     7575 +	/* Clearing this flag to keep interrupts disabled */
     7576 +	tp->tg3_flags &= ~TG3_FLAG_INIT_COMPLETE;
     7577 +	tg3_reset_hw(tp);
     7578 +
     7579 +	mac_mode = (tp->mac_mode & ~MAC_MODE_PORT_MODE_MASK) |
     7580 +		   MAC_MODE_PORT_INT_LPBACK | MAC_MODE_LINK_POLARITY |
     7581 +		   MAC_MODE_PORT_MODE_GMII;
     7582 +	tw32(MAC_MODE, mac_mode);
     7583 +
     7584 +	tx_len = 1514;
     7585 +	skb = dev_alloc_skb(tx_len);
     7586 +	tx_data = skb_put(skb, tx_len);
     7587 +	memcpy(tx_data, tp->dev->dev_addr, 6);
     7588 +	memset(tx_data + 6, 0x0, 8);
     7589 +
     7590 +	tw32(MAC_RX_MTU_SIZE, tx_len + 4);
     7591 +
     7592 +	for (i = 14; i < tx_len; i++)
     7593 +		tx_data[i] = (u8) (i & 0xff);
     7594 +
     7595 +	map = pci_map_single(tp->pdev, skb->data, tx_len, PCI_DMA_TODEVICE);
     7596 +
     7597 +	tw32_f(HOSTCC_MODE, tp->coalesce_mode | HOSTCC_MODE_ENABLE |
     7598 +	       HOSTCC_MODE_NOW);
     7599 +
     7600 +	udelay(10);
     7601 +
     7602 +	rx_start_idx = tp->hw_status->idx[0].rx_producer;
     7603 +
     7604 +	send_idx = 0;
     7605 +	num_pkts = 0;
     7606 +
     7607 +	tg3_set_txd(tp, send_idx, map, tx_len, 0, 1);
     7608 +
     7609 +	send_idx++;
     7610 +	num_pkts++;
     7611 +
     7612 +	tw32_tx_mbox(MAILBOX_SNDHOST_PROD_IDX_0 + TG3_64BIT_REG_LOW, send_idx);
     7613 +	tr32(MAILBOX_SNDHOST_PROD_IDX_0 + TG3_64BIT_REG_LOW);
     7614 +
     7615 +	udelay(10);
     7616 +
     7617 +	for (i = 0; i < 10; i++) {
     7618 +		tw32_f(HOSTCC_MODE, tp->coalesce_mode | HOSTCC_MODE_ENABLE |
     7619 +		       HOSTCC_MODE_NOW);
     7620 +
     7621 +		udelay(10);
     7622 +
     7623 +		tx_idx = tp->hw_status->idx[0].tx_consumer;
     7624 +		rx_idx = tp->hw_status->idx[0].rx_producer;
     7625 +		if ((tx_idx == send_idx) &&
     7626 +		    (rx_idx == (rx_start_idx + num_pkts)))
     7627 +			break;
     7628 +	}
     7629 +
     7630 +	pci_unmap_single(tp->pdev, map, tx_len, PCI_DMA_TODEVICE);
     7631 +	dev_kfree_skb(skb);
     7632 +
     7633 +	if (tx_idx != send_idx)
     7634 +		goto out;
     7635 +
     7636 +	if (rx_idx != rx_start_idx + num_pkts)
     7637 +		goto out;
     7638 +
     7639 +	desc = &tp->rx_rcb[rx_start_idx];
     7640 +	desc_idx = desc->opaque & RXD_OPAQUE_INDEX_MASK;
     7641 +	opaque_key = desc->opaque & RXD_OPAQUE_RING_MASK;
     7642 +	if (opaque_key != RXD_OPAQUE_RING_STD)
     7643 +		goto out;
     7644 +
     7645 +	if ((desc->err_vlan & RXD_ERR_MASK) != 0 &&
     7646 +	    (desc->err_vlan != RXD_ERR_ODD_NIBBLE_RCVD_MII))
     7647 +		goto out;
     7648 +
     7649 +	rx_len = ((desc->idx_len & RXD_LEN_MASK) >> RXD_LEN_SHIFT) - 4;
     7650 +	if (rx_len != tx_len)
     7651 +		goto out;
     7652 +
     7653 +	rx_skb = tp->rx_std_buffers[desc_idx].skb;
     7654 +
     7655 +	map = pci_unmap_addr(&tp->rx_std_buffers[desc_idx], mapping);
     7656 +	pci_dma_sync_single_for_cpu(tp->pdev, map, rx_len, PCI_DMA_FROMDEVICE);
     7657 +
     7658 +	for (i = 14; i < tx_len; i++) {
     7659 +		if (*(rx_skb->data + i) != (u8) (i & 0xff))
     7660 +			goto out;
     7661 +	}
     7662 +	err = 0;
     7663 +
     7664 +	/* tg3_free_rings will unmap and free the rx_skb */
     7665 +out:
     7666 +	return err;
     7667 +}
     7668 +
     7669 +static void tg3_self_test(struct net_device *dev, struct ethtool_test *etest,
     7670 +			  u64 *data)
     7671 +{
     7672 +	struct tg3 *tp = netdev_priv(dev);
     7673 +
     7674 +	memset(data, 0, sizeof(u64) * TG3_NUM_TEST);
     7675 +
     7676 +	if (tg3_test_nvram(tp) != 0) {
     7677 +		etest->flags |= ETH_TEST_FL_FAILED;
     7678 +		data[0] = 1;
     7679 +	}
     7680 +	if (tg3_test_link(tp) != 0) {
     7681 +		etest->flags |= ETH_TEST_FL_FAILED;
     7682 +		data[1] = 1;
     7683 +	}
     7684 +	if (etest->flags & ETH_TEST_FL_OFFLINE) {
     7685 +		if (netif_running(dev))
     7686 +			tg3_netif_stop(tp);
     7687 +
     7688 +		spin_lock_irq(&tp->lock);
     7689 +		spin_lock(&tp->tx_lock);
     7690 +
     7691 +		tg3_halt(tp, RESET_KIND_SUSPEND, 1);
     7692 +		tg3_nvram_lock(tp);
     7693 +		tg3_halt_cpu(tp, RX_CPU_BASE);
     7694 +		if (!(tp->tg3_flags2 & TG3_FLG2_5705_PLUS))
     7695 +			tg3_halt_cpu(tp, TX_CPU_BASE);
     7696 +		tg3_nvram_unlock(tp);
     7697 +
     7698 +		if (tg3_test_registers(tp) != 0) {
     7699 +			etest->flags |= ETH_TEST_FL_FAILED;
     7700 +			data[2] = 1;
     7701 +		}
     7702 +		if (tg3_test_memory(tp) != 0) {
     7703 +			etest->flags |= ETH_TEST_FL_FAILED;
     7704 +			data[3] = 1;
     7705 +		}
     7706 +		if (tg3_test_loopback(tp) != 0) {
     7707 +			etest->flags |= ETH_TEST_FL_FAILED;
     7708 +			data[4] = 1;
     7709 +		}
     7710 +
     7711 +		spin_unlock(&tp->tx_lock);
     7712 +		spin_unlock_irq(&tp->lock);
     7713 +		if (tg3_test_interrupt(tp) != 0) {
     7714 +			etest->flags |= ETH_TEST_FL_FAILED;
     7715 +			data[5] = 1;
     7716 +		}
     7717 +		spin_lock_irq(&tp->lock);
     7718 +		spin_lock(&tp->tx_lock);
     7719 +
     7720 +		tg3_halt(tp, RESET_KIND_SHUTDOWN, 1);
     7721 +		if (netif_running(dev)) {
     7722 +			tp->tg3_flags |= TG3_FLAG_INIT_COMPLETE;
     7723 +			tg3_init_hw(tp);
     7724 +			tg3_netif_start(tp);
     7725 +		}
     7726 +		spin_unlock(&tp->tx_lock);
     7727 +		spin_unlock_irq(&tp->lock);
     7728 +	}
7248 7729 }
7249 7730 
7250 7731 static int tg3_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
···
7870 7331 	.get_tso = ethtool_op_get_tso,
7871 7332 	.set_tso = tg3_set_tso,
7872 7333 #endif
     7334 +	.self_test_count = tg3_get_test_count,
     7335 +	.self_test = tg3_self_test,
7873 7336 	.get_strings = tg3_get_strings,
7874 7337 	.get_stats_count = tg3_get_stats_count,
7875 7338 	.get_ethtool_stats = tg3_get_ethtool_stats,
···
10019 9478 	    (tr32(WDMAC_MODE) & WDMAC_MODE_ENABLE)) {
10020 9479 		pci_save_state(tp->pdev);
10021 9480 		tw32(MEMARB_MODE, MEMARB_MODE_ENABLE);
10022       -		tg3_halt(tp, 1);
      9481 +		tg3_halt(tp, RESET_KIND_SHUTDOWN, 1);
10023 9482 	}
10024 9483 
10025 9484 	err = tg3_test_dma(tp);
···
10146 9605 
10147 9606 	spin_lock_irq(&tp->lock);
10148 9607 	spin_lock(&tp->tx_lock);
10149       -	tg3_halt(tp, 1);
      9608 +	tg3_halt(tp, RESET_KIND_SHUTDOWN, 1);
10150 9609 	spin_unlock(&tp->tx_lock);
10151 9610 	spin_unlock_irq(&tp->lock);
10152 9611 
+141 -167
drivers/pci/hotplug/cpci_hotplug_core.c
···
1 1 /*
2 2  * CompactPCI Hot Plug Driver
3 3  *
4     - * Copyright (C) 2002 SOMA Networks, Inc.
    4 + * Copyright (C) 2002,2005 SOMA Networks, Inc.
5 5  * Copyright (C) 2001 Greg Kroah-Hartman (greg@kroah.com)
6 6  * Copyright (C) 2001 IBM Corp.
7 7  *
···
45 45 
46 46 #define dbg(format, arg...)					\
47 47 	do {							\
48     -		if(cpci_debug)					\
    48 +		if (cpci_debug)					\
49 49 			printk (KERN_DEBUG "%s: " format "\n",	\
50 50 				MY_NAME , ## arg);		\
51     -	} while(0)
    51 +	} while (0)
52 52 #define err(format, arg...) printk(KERN_ERR "%s: " format "\n", MY_NAME , ## arg)
53 53 #define info(format, arg...) printk(KERN_INFO "%s: " format "\n", MY_NAME , ## arg)
54 54 #define warn(format, arg...) printk(KERN_WARNING "%s: " format "\n", MY_NAME , ## arg)
···
111 111 
112 112 	dbg("%s - physical_slot = %s", __FUNCTION__, hotplug_slot->name);
113 113 
114     -	if(controller->ops->set_power) {
    114 +	if (controller->ops->set_power)
115 115 		retval = controller->ops->set_power(slot, 1);
116     -	}
117     -
118 116 	return retval;
119 117 }
120 118 
···
124 126 
125 127 	dbg("%s - physical_slot = %s", __FUNCTION__, hotplug_slot->name);
126 128 
    129 +	down_write(&list_rwsem);
    130 +
127 131 	/* Unconfigure device */
128 132 	dbg("%s - unconfiguring slot %s",
129 133 	    __FUNCTION__, slot->hotplug_slot->name);
130     -	if((retval = cpci_unconfigure_slot(slot))) {
    134 +	if ((retval = cpci_unconfigure_slot(slot))) {
131 135 		err("%s - could not unconfigure slot %s",
132 136 		    __FUNCTION__, slot->hotplug_slot->name);
133     -		return retval;
    137 +		goto disable_error;
134 138 	}
135 139 	dbg("%s - finished unconfiguring slot %s",
136 140 	    __FUNCTION__, slot->hotplug_slot->name);
137 141 
138 142 	/* Clear EXT (by setting it) */
139     -	if(cpci_clear_ext(slot)) {
    143 +	if (cpci_clear_ext(slot)) {
140 144 		err("%s - could not clear EXT for slot %s",
141 145 		    __FUNCTION__, slot->hotplug_slot->name);
142 146 		retval = -ENODEV;
    147 +		goto disable_error;
143 148 	}
144 149 	cpci_led_on(slot);
145 150 
146     -	if(controller->ops->set_power) {
147     -		retval = controller->ops->set_power(slot, 0);
148     -	}
    151 +	if (controller->ops->set_power)
    152 +		if ((retval = controller->ops->set_power(slot, 0)))
    153 +			goto disable_error;
149 154 
150     -	if(update_adapter_status(slot->hotplug_slot, 0)) {
    155 +	if (update_adapter_status(slot->hotplug_slot, 0))
151 156 		warn("failure to update adapter file");
152     -	}
153 157 
154     -	if(slot->extracting) {
    158 +	if (slot->extracting) {
155 159 		slot->extracting = 0;
156 160 		atomic_dec(&extracting);
157 161 	}
    162 + disable_error:
    163 +	up_write(&list_rwsem);
158 164 	return retval;
159 165 }
160 166 
···
167 165 {
168 166 	u8 power = 1;
169 167 
170     -	if(controller->ops->get_power) {
    168 +	if (controller->ops->get_power)
171 169 		power = controller->ops->get_power(slot);
172     -	}
173 170 	return power;
174 171 }
175 172 
···
238 237 	int status = -ENOMEM;
239 238 	int i;
240 239 
241     -	if(!(controller && bus)) {
    240 +	if (!(controller && bus))
242 241 		return -ENODEV;
243     -	}
244 242 
245 243 	/*
246 244 	 * Create a structure for each slot, and register that slot
···
316 316 cpci_hp_unregister_bus(struct pci_bus *bus)
317 317 {
318 318 	struct slot *slot;
319     -	struct list_head *tmp;
320     -	struct list_head *next;
321     -	int status;
    319 +	struct slot *tmp;
    320 +	int status = 0;
322 321 
323 322 	down_write(&list_rwsem);
324     -	if(!slots) {
    323 +	if (!slots) {
325 324 		up_write(&list_rwsem);
326 325 		return -1;
327 326 	}
328     -	list_for_each_safe(tmp, next, &slot_list) {
329     -		slot = list_entry(tmp, struct slot, slot_list);
330     -		if(slot->bus == bus) {
331     -			dbg("deregistering slot %s", slot->hotplug_slot->name);
332     -			status = pci_hp_deregister(slot->hotplug_slot);
333     -			if(status) {
334     -				err("pci_hp_deregister failed with error %d",
335     -				    status);
336     -				return status;
337     -			}
338     -
    327 +	list_for_each_entry_safe(slot, tmp, &slot_list, slot_list) {
    328 +		if (slot->bus == bus) {
339 329 			list_del(&slot->slot_list);
340 330 			slots--;
    331 +
    332 +			dbg("deregistering slot %s", slot->hotplug_slot->name);
    333 +			status = pci_hp_deregister(slot->hotplug_slot);
    334 +			if (status) {
    335 +				err("pci_hp_deregister failed with error %d",
    336 +				    status);
    337 +				break;
    338 +			}
341 339 		}
342 340 	}
343 341 	up_write(&list_rwsem);
344     -	return 0;
    342 +	return status;
345 343 }
346 344 
347 345 /* This is the interrupt mode interrupt handler */
···
349 351 	dbg("entered cpci_hp_intr");
350 352 
351 353 	/* Check to see if it was our interrupt */
352     -	if((controller->irq_flags & SA_SHIRQ) &&
    354 +	if ((controller->irq_flags & SA_SHIRQ) &&
353 355 	    !controller->ops->check_irq(controller->dev_id)) {
354 356 		dbg("exited cpci_hp_intr, not our interrupt");
355 357 		return IRQ_NONE;
···
371 373  * INS bits of the cold-inserted devices.
372 374  */
373 375 static int
374     -init_slots(void)
    376 +init_slots(int clear_ins)
375 377 {
376 378 	struct slot *slot;
377     -	struct list_head *tmp;
378 379 	struct pci_dev* dev;
379 380 
380 381 	dbg("%s - enter", __FUNCTION__);
381 382 	down_read(&list_rwsem);
382     -	if(!slots) {
    383 +	if (!slots) {
383 384 		up_read(&list_rwsem);
384 385 		return -1;
385 386 	}
386     -	list_for_each(tmp, &slot_list) {
387     -		slot = list_entry(tmp, struct slot, slot_list);
    387 +	list_for_each_entry(slot, &slot_list, slot_list) {
388 388 		dbg("%s - looking at slot %s",
389 389 		    __FUNCTION__, slot->hotplug_slot->name);
390     -		if(cpci_check_and_clear_ins(slot)) {
    390 +		if (clear_ins && cpci_check_and_clear_ins(slot))
391 391 			dbg("%s - cleared INS for slot %s",
392 392 			    __FUNCTION__, slot->hotplug_slot->name);
393     -			dev = pci_find_slot(slot->bus->number, PCI_DEVFN(slot->number, 0));
394     -			if(dev) {
395     -				if(update_adapter_status(slot->hotplug_slot, 1)) {
396     -					warn("failure to update adapter file");
397     -				}
398     -				if(update_latch_status(slot->hotplug_slot, 1)) {
399     -					warn("failure to update latch file");
400     -				}
401     -				slot->dev = dev;
402     -			} else {
403     -				err("%s - no driver attached to device in slot %s",
404     -				    __FUNCTION__, slot->hotplug_slot->name);
405     -			}
    393 +		dev = pci_get_slot(slot->bus, PCI_DEVFN(slot->number, 0));
    394 +		if (dev) {
    395 +			if (update_adapter_status(slot->hotplug_slot, 1))
    396 +				warn("failure to update adapter file");
    397 +			if (update_latch_status(slot->hotplug_slot, 1))
    398 +				warn("failure to update latch file");
    399 +			slot->dev = dev;
406 400 		}
407 401 	}
408 402 	up_read(&list_rwsem);
···
406 416 check_slots(void)
407 417 {
408 418 	struct slot *slot;
409     -	struct list_head *tmp;
410 419 	int extracted;
411 420 	int inserted;
412 421 	u16 hs_csr;
413 422 
414 423 	down_read(&list_rwsem);
415     -	if(!slots) {
    424 +	if (!slots) {
416 425 		up_read(&list_rwsem);
417 426 		err("no slots registered, shutting down");
418 427 		return -1;
419 428 	}
420 429 	extracted = inserted = 0;
421     -	list_for_each(tmp, &slot_list) {
422     -		slot = list_entry(tmp, struct slot, slot_list);
    430 +	list_for_each_entry(slot, &slot_list, slot_list) {
423 431 		dbg("%s - looking at slot %s",
424 432 		    __FUNCTION__, slot->hotplug_slot->name);
425     -		if(cpci_check_and_clear_ins(slot)) {
426     -			/* Some broken hardware (e.g. PLX 9054AB) asserts ENUM# twice... */
427     -			if(slot->dev) {
428     -				warn("slot %s already inserted", slot->hotplug_slot->name);
    433 +		if (cpci_check_and_clear_ins(slot)) {
    434 +			/*
    435 +			 * Some broken hardware (e.g. PLX 9054AB) asserts
    436 +			 * ENUM# twice...
    437 +			 */
    438 +			if (slot->dev) {
    439 +				warn("slot %s already inserted",
    440 +				     slot->hotplug_slot->name);
429 441 				inserted++;
430 442 				continue;
431 443 			}
···
444 452 			/* Configure device */
445 453 			dbg("%s - configuring slot %s",
446 454 			    __FUNCTION__, slot->hotplug_slot->name);
447     -			if(cpci_configure_slot(slot)) {
    455 +			if (cpci_configure_slot(slot)) {
448 456 				err("%s - could not configure slot %s",
449 457 				    __FUNCTION__, slot->hotplug_slot->name);
450 458 				continue;
···
457 465 			dbg("%s - slot %s HS_CSR (2) = %04x",
458 466 			    __FUNCTION__, slot->hotplug_slot->name, hs_csr);
459 467 
460     -			if(update_latch_status(slot->hotplug_slot, 1)) {
    468 +			if (update_latch_status(slot->hotplug_slot, 1))
461 469 				warn("failure to update latch file");
462     -			}
463 470 
464     -			if(update_adapter_status(slot->hotplug_slot, 1)) {
    471 +			if (update_adapter_status(slot->hotplug_slot, 1))
465 472 				warn("failure to update adapter file");
466     -			}
467 473 
468 474 			cpci_led_off(slot);
469 475 
···
471 481 			    __FUNCTION__, slot->hotplug_slot->name, hs_csr);
472 482 
473 483 			inserted++;
474     -		} else if(cpci_check_ext(slot)) {
    484 +		} else if (cpci_check_ext(slot)) {
475 485 			/* Process extraction request */
476 486 			dbg("%s - slot %s extracted",
477 487 			    __FUNCTION__, slot->hotplug_slot->name);
···
481 491 			dbg("%s - slot %s HS_CSR = %04x",
482 492 			    __FUNCTION__, slot->hotplug_slot->name, hs_csr);
483 493 
484     -			if(!slot->extracting) {
485     -				if(update_latch_status(slot->hotplug_slot, 0)) {
    494 +			if (!slot->extracting) {
    495 +				if (update_latch_status(slot->hotplug_slot, 0)) {
486 496 					warn("failure to update latch file");
487     -
488 497 				}
489     -				atomic_inc(&extracting);
490 498 				slot->extracting = 1;
    499 +				atomic_inc(&extracting);
491 500 			}
492 501 			extracted++;
493     -		} else if(slot->extracting) {
    502 +		} else if (slot->extracting) {
494 503 			hs_csr = cpci_get_hs_csr(slot);
495     -			if(hs_csr == 0xffff) {
    504 +			if (hs_csr == 0xffff) {
496 505 				/*
497 506 				 * Hmmm, we're likely hosed at this point, should we
498 507 				 * bother trying to tell the driver or not?
499 508 				 */
500 509 				err("card in slot %s was improperly removed",
501 510 				    slot->hotplug_slot->name);
502     -				if(update_adapter_status(slot->hotplug_slot, 0)) {
    511 +				if (update_adapter_status(slot->hotplug_slot, 0))
503 512 					warn("failure to update adapter file");
504     -				}
505 513 				slot->extracting = 0;
506 514 				atomic_dec(&extracting);
507 515 			}
···
508 520 	up_read(&list_rwsem);
509 521 	dbg("inserted=%d, extracted=%d, extracting=%d",
510 522 	    inserted, extracted, atomic_read(&extracting));
511     -	if(inserted || extracted) {
    523 +	if (inserted || extracted)
512 524 		return extracted;
513     -	}
514     -	else if(!atomic_read(&extracting)) {
    525 +	else if (!atomic_read(&extracting)) {
515 526 		err("cannot find ENUM# source, shutting down");
516 527 		return -1;
517 528 	}
···
528 541 	unlock_kernel();
529 542 
530 543 	dbg("%s - event thread started", __FUNCTION__);
531     -	while(1) {
    544 +	while (1) {
532 545 		dbg("event thread sleeping");
533 546 		down_interruptible(&event_semaphore);
534 547 		dbg("event thread woken, thread_finished = %d",
535 548 		    thread_finished);
536     -		if(thread_finished || signal_pending(current))
    549 +		if (thread_finished || signal_pending(current))
537 550 			break;
538 551 		do {
539 552 			rc = check_slots();
···
545 558 				thread_finished = 1;
546 559 				break;
547 560 			}
548     -		} while(atomic_read(&extracting) != 0);
    561 +		} while (atomic_read(&extracting) && !thread_finished);
    562 +		if (thread_finished)
    563 +			break;
549 564 
550 565 		/* Re-enable ENUM# interrupt */
551 566 		dbg("%s - re-enabling irq", __FUNCTION__);
···
568 579 	daemonize("cpci_hp_polld");
569 580 	unlock_kernel();
570 581 
571     -	while(1) {
572     -		if(thread_finished || signal_pending(current))
    582 +	while (1) {
    583 +		if (thread_finished || signal_pending(current))
573 584 			break;
574     -		if(controller->ops->query_enum()) {
    585 +		if (controller->ops->query_enum()) {
575 586 			do {
576 587 				rc = check_slots();
577     -				if(rc > 0) {
    588 +				if (rc > 0) {
578 589 					/* Give userspace a chance to handle extraction */
579 590 					msleep(500);
580     -				} else if(rc < 0) {
    591 +				} else if (rc < 0) {
581 592 					dbg("%s - error checking slots", __FUNCTION__);
582 593 					thread_finished = 1;
583 594 					break;
584 595 				}
585     -			} while(atomic_read(&extracting) != 0);
    596 +			} while (atomic_read(&extracting) && !thread_finished);
586 597 		}
587 598 		msleep(100);
588 599 	}
···
601 612 	init_MUTEX_LOCKED(&thread_exit);
602 613 	thread_finished = 0;
603 614 
604     -	if(controller->irq) {
    615 +	if (controller->irq)
605 616 		pid = kernel_thread(event_thread, NULL, 0);
606     -	} else {
    617 +	else
607 618 		pid = kernel_thread(poll_thread, NULL, 0);
608     -	}
609     -	if(pid < 0) {
    619 +	if (pid < 0) {
610 620 		err("Can't start up our thread");
611 621 		return -1;
612 622 	}
···
618 630 {
619 631 	thread_finished = 1;
620 632 	dbg("thread finish command given");
621     -	if(controller->irq) {
    633 +	if (controller->irq)
622 634 		up(&event_semaphore);
623     -	}
624 635 	dbg("wait for thread to exit");
625 636 	down(&thread_exit);
626 637 }
···
629 642 {
630 643 	int status = 0;
631 644 
632     -	if(!controller) {
633     -		controller = new_controller;
634     -		if(controller->irq) {
635     -			if(request_irq(controller->irq,
636     -				       cpci_hp_intr,
637     -				       controller->irq_flags,
638     -				       MY_NAME, controller->dev_id)) {
639     -				err("Can't get irq %d for the hotplug cPCI controller", controller->irq);
640     -				status = -ENODEV;
641     -			}
642     -			dbg("%s - acquired controller irq %d", __FUNCTION__,
643     -			    controller->irq);
    645 +	if (controller)
    646 +		return -1;
    647 +	if (!(new_controller && new_controller->ops))
    648 +		return -EINVAL;
    649 +	if (new_controller->irq) {
    650 +		if (!(new_controller->ops->enable_irq &&
    651 +				new_controller->ops->disable_irq))
    652 +			status = -EINVAL;
    653 +		if (request_irq(new_controller->irq,
    654 +				cpci_hp_intr,
    655 +				new_controller->irq_flags,
    656 +				MY_NAME,
    657 +				new_controller->dev_id)) {
    658 +			err("Can't get irq %d for the hotplug cPCI controller",
    659 +			    new_controller->irq);
    660 +			status = -ENODEV;
644 661 		}
645     -	} else {
646     -
err("cPCI hotplug controller already registered"); 647 - status = -1; 662 + dbg("%s - acquired controller irq %d", 663 + __FUNCTION__, new_controller->irq); 648 664 } 665 + if (!status) 666 + controller = new_controller; 649 667 return status; 668 + } 669 + 670 + static void 671 + cleanup_slots(void) 672 + { 673 + struct slot *slot; 674 + struct slot *tmp; 675 + 676 + /* 677 + * Unregister all of our slots with the pci_hotplug subsystem, 678 + * and free up all memory that we had allocated. 679 + */ 680 + down_write(&list_rwsem); 681 + if (!slots) 682 + goto cleanup_null; 683 + list_for_each_entry_safe(slot, tmp, &slot_list, slot_list) { 684 + list_del(&slot->slot_list); 685 + pci_hp_deregister(slot->hotplug_slot); 686 + } 687 + cleanup_null: 688 + up_write(&list_rwsem); 689 + return; 650 690 } 651 691 652 692 int ··· 681 667 { 682 668 int status = 0; 683 669 684 - if(controller) { 685 - if(atomic_read(&extracting) != 0) { 686 - return -EBUSY; 687 - } 688 - if(!thread_finished) { 670 + if (controller) { 671 + if (!thread_finished) 689 672 cpci_stop_thread(); 690 - } 691 - if(controller->irq) { 673 + if (controller->irq) 692 674 free_irq(controller->irq, controller->dev_id); 693 - } 694 675 controller = NULL; 695 - } else { 676 + cleanup_slots(); 677 + } else 696 678 status = -ENODEV; 697 - } 698 679 return status; 699 680 } 700 681 ··· 700 691 int status; 701 692 702 693 dbg("%s - enter", __FUNCTION__); 703 - if(!controller) { 694 + if (!controller) 704 695 return -ENODEV; 705 - } 706 696 707 697 down_read(&list_rwsem); 708 - if(list_empty(&slot_list)) { 698 + if (list_empty(&slot_list)) { 709 699 up_read(&list_rwsem); 710 700 return -ENODEV; 711 701 } 712 702 up_read(&list_rwsem); 713 703 714 - if(first) { 715 - status = init_slots(); 716 - if(status) { 717 - return status; 718 - } 704 + status = init_slots(first); 705 + if (first) 719 706 first = 0; 720 - } 707 + if (status) 708 + return status; 721 709 722 710 status = cpci_start_thread(); 723 - if(status) { 711 
+ if (status) 724 712 return status; 725 - } 726 713 dbg("%s - thread started", __FUNCTION__); 727 714 728 - if(controller->irq) { 715 + if (controller->irq) { 729 716 /* Start enum interrupt processing */ 730 717 dbg("%s - enabling irq", __FUNCTION__); 731 718 controller->ops->enable_irq(); ··· 733 728 int 734 729 cpci_hp_stop(void) 735 730 { 736 - if(!controller) { 731 + if (!controller) 737 732 return -ENODEV; 738 - } 739 - if(atomic_read(&extracting) != 0) { 740 - return -EBUSY; 741 - } 742 - if(controller->irq) { 733 + if (controller->irq) { 743 734 /* Stop enum interrupt processing */ 744 735 dbg("%s - disabling irq", __FUNCTION__); 745 736 controller->ops->disable_irq(); 746 737 } 747 738 cpci_stop_thread(); 748 739 return 0; 749 - } 750 - 751 - static void __exit 752 - cleanup_slots(void) 753 - { 754 - struct list_head *tmp; 755 - struct slot *slot; 756 - 757 - /* 758 - * Unregister all of our slots with the pci_hotplug subsystem, 759 - * and free up all memory that we had allocated. 760 - */ 761 - down_write(&list_rwsem); 762 - if(!slots) { 763 - goto null_cleanup; 764 - } 765 - list_for_each(tmp, &slot_list) { 766 - slot = list_entry(tmp, struct slot, slot_list); 767 - list_del(&slot->slot_list); 768 - pci_hp_deregister(slot->hotplug_slot); 769 - kfree(slot->hotplug_slot->info); 770 - kfree(slot->hotplug_slot->name); 771 - kfree(slot->hotplug_slot); 772 - kfree(slot); 773 - } 774 - null_cleanup: 775 - up_write(&list_rwsem); 776 - return; 777 740 } 778 741 779 742 int __init ··· 757 784 /* 758 785 * Clean everything up. 759 786 */ 760 - cleanup_slots(); 787 + cpci_hp_stop(); 788 + cpci_hp_unregister_controller(controller); 761 789 } 762 790 763 791 EXPORT_SYMBOL_GPL(cpci_hp_register_controller);
+54 -90
drivers/pci/hotplug/cpci_hotplug_pci.c
··· 1 1 /* 2 2 * CompactPCI Hot Plug Driver PCI functions 3 3 * 4 - * Copyright (C) 2002 by SOMA Networks, Inc. 4 + * Copyright (C) 2002,2005 by SOMA Networks, Inc. 5 5 * 6 6 * All rights reserved. 7 7 * ··· 38 38 39 39 #define dbg(format, arg...) \ 40 40 do { \ 41 - if(cpci_debug) \ 41 + if (cpci_debug) \ 42 42 printk (KERN_DEBUG "%s: " format "\n", \ 43 43 MY_NAME , ## arg); \ 44 - } while(0) 44 + } while (0) 45 45 #define err(format, arg...) printk(KERN_ERR "%s: " format "\n", MY_NAME , ## arg) 46 46 #define info(format, arg...) printk(KERN_INFO "%s: " format "\n", MY_NAME , ## arg) 47 47 #define warn(format, arg...) printk(KERN_WARNING "%s: " format "\n", MY_NAME , ## arg) ··· 57 57 hs_cap = pci_bus_find_capability(slot->bus, 58 58 slot->devfn, 59 59 PCI_CAP_ID_CHSWP); 60 - if(!hs_cap) { 60 + if (!hs_cap) 61 61 return 0; 62 - } 63 62 64 - if(pci_bus_read_config_word(slot->bus, 63 + if (pci_bus_read_config_word(slot->bus, 65 64 slot->devfn, 66 65 hs_cap + 2, 67 - &hs_csr)) { 66 + &hs_csr)) 68 67 return 0; 69 - } 68 + 70 69 return hs_csr & 0x0008 ? 
1 : 0; 71 70 } 72 71 ··· 77 78 hs_cap = pci_bus_find_capability(slot->bus, 78 79 slot->devfn, 79 80 PCI_CAP_ID_CHSWP); 80 - if(!hs_cap) { 81 + if (!hs_cap) 81 82 return 0; 82 - } 83 - 84 - if(pci_bus_read_config_word(slot->bus, 83 + if (pci_bus_read_config_word(slot->bus, 85 84 slot->devfn, 86 85 hs_cap + 2, 87 - &hs_csr)) { 86 + &hs_csr)) 88 87 return 0; 89 - } 90 - if(status) { 88 + if (status) 91 89 hs_csr |= HS_CSR_LOO; 92 - } else { 90 + else 93 91 hs_csr &= ~HS_CSR_LOO; 94 - } 95 - if(pci_bus_write_config_word(slot->bus, 92 + if (pci_bus_write_config_word(slot->bus, 96 93 slot->devfn, 97 94 hs_cap + 2, 98 - hs_csr)) { 95 + hs_csr)) 99 96 return 0; 100 - } 101 97 return 1; 102 98 } 103 99 ··· 104 110 hs_cap = pci_bus_find_capability(slot->bus, 105 111 slot->devfn, 106 112 PCI_CAP_ID_CHSWP); 107 - if(!hs_cap) { 113 + if (!hs_cap) 108 114 return 0xFFFF; 109 - } 110 - 111 - if(pci_bus_read_config_word(slot->bus, 115 + if (pci_bus_read_config_word(slot->bus, 112 116 slot->devfn, 113 117 hs_cap + 2, 114 - &hs_csr)) { 118 + &hs_csr)) 115 119 return 0xFFFF; 116 - } 117 120 return hs_csr; 118 121 } 119 122 ··· 123 132 hs_cap = pci_bus_find_capability(slot->bus, 124 133 slot->devfn, 125 134 PCI_CAP_ID_CHSWP); 126 - if(!hs_cap) { 135 + if (!hs_cap) 127 136 return 0; 128 - } 129 - if(pci_bus_read_config_word(slot->bus, 137 + if (pci_bus_read_config_word(slot->bus, 130 138 slot->devfn, 131 139 hs_cap + 2, 132 - &hs_csr)) { 140 + &hs_csr)) 133 141 return 0; 134 - } 135 - if(hs_csr & HS_CSR_INS) { 142 + if (hs_csr & HS_CSR_INS) { 136 143 /* Clear INS (by setting it) */ 137 - if(pci_bus_write_config_word(slot->bus, 144 + if (pci_bus_write_config_word(slot->bus, 138 145 slot->devfn, 139 146 hs_cap + 2, 140 - hs_csr)) { 147 + hs_csr)) 141 148 ins = 0; 142 - } 143 - ins = 1; 149 + else 150 + ins = 1; 144 151 } 145 152 return ins; 146 153 } ··· 152 163 hs_cap = pci_bus_find_capability(slot->bus, 153 164 slot->devfn, 154 165 PCI_CAP_ID_CHSWP); 155 - if(!hs_cap) { 166 + if 
(!hs_cap) 156 167 return 0; 157 - } 158 - if(pci_bus_read_config_word(slot->bus, 168 + if (pci_bus_read_config_word(slot->bus, 159 169 slot->devfn, 160 170 hs_cap + 2, 161 - &hs_csr)) { 171 + &hs_csr)) 162 172 return 0; 163 - } 164 - if(hs_csr & HS_CSR_EXT) { 173 + if (hs_csr & HS_CSR_EXT) 165 174 ext = 1; 166 - } 167 175 return ext; 168 176 } 169 177 ··· 172 186 hs_cap = pci_bus_find_capability(slot->bus, 173 187 slot->devfn, 174 188 PCI_CAP_ID_CHSWP); 175 - if(!hs_cap) { 189 + if (!hs_cap) 176 190 return -ENODEV; 177 - } 178 - if(pci_bus_read_config_word(slot->bus, 191 + if (pci_bus_read_config_word(slot->bus, 179 192 slot->devfn, 180 193 hs_cap + 2, 181 - &hs_csr)) { 194 + &hs_csr)) 182 195 return -ENODEV; 183 - } 184 - if(hs_csr & HS_CSR_EXT) { 196 + if (hs_csr & HS_CSR_EXT) { 185 197 /* Clear EXT (by setting it) */ 186 - if(pci_bus_write_config_word(slot->bus, 198 + if (pci_bus_write_config_word(slot->bus, 187 199 slot->devfn, 188 200 hs_cap + 2, 189 - hs_csr)) { 201 + hs_csr)) 190 202 return -ENODEV; 191 - } 192 203 } 193 204 return 0; 194 205 } ··· 198 215 hs_cap = pci_bus_find_capability(slot->bus, 199 216 slot->devfn, 200 217 PCI_CAP_ID_CHSWP); 201 - if(!hs_cap) { 218 + if (!hs_cap) 202 219 return -ENODEV; 203 - } 204 - if(pci_bus_read_config_word(slot->bus, 220 + if (pci_bus_read_config_word(slot->bus, 205 221 slot->devfn, 206 222 hs_cap + 2, 207 - &hs_csr)) { 223 + &hs_csr)) 208 224 return -ENODEV; 209 - } 210 - if((hs_csr & HS_CSR_LOO) != HS_CSR_LOO) { 225 + if ((hs_csr & HS_CSR_LOO) != HS_CSR_LOO) { 211 226 hs_csr |= HS_CSR_LOO; 212 - if(pci_bus_write_config_word(slot->bus, 227 + if (pci_bus_write_config_word(slot->bus, 213 228 slot->devfn, 214 229 hs_cap + 2, 215 230 hs_csr)) { ··· 227 246 hs_cap = pci_bus_find_capability(slot->bus, 228 247 slot->devfn, 229 248 PCI_CAP_ID_CHSWP); 230 - if(!hs_cap) { 249 + if (!hs_cap) 231 250 return -ENODEV; 232 - } 233 - if(pci_bus_read_config_word(slot->bus, 251 + if (pci_bus_read_config_word(slot->bus, 234 252 
slot->devfn, 235 253 hs_cap + 2, 236 - &hs_csr)) { 254 + &hs_csr)) 237 255 return -ENODEV; 238 - } 239 - if(hs_csr & HS_CSR_LOO) { 256 + if (hs_csr & HS_CSR_LOO) { 240 257 hs_csr &= ~HS_CSR_LOO; 241 - if(pci_bus_write_config_word(slot->bus, 258 + if (pci_bus_write_config_word(slot->bus, 242 259 slot->devfn, 243 260 hs_cap + 2, 244 261 hs_csr)) { ··· 253 274 * Device configuration functions 254 275 */ 255 276 256 - static void cpci_enable_device(struct pci_dev *dev) 257 - { 258 - struct pci_bus *bus; 259 - 260 - pci_enable_device(dev); 261 - if(dev->hdr_type == PCI_HEADER_TYPE_BRIDGE) { 262 - bus = dev->subordinate; 263 - list_for_each_entry(dev, &bus->devices, bus_list) { 264 - cpci_enable_device(dev); 265 - } 266 - } 267 - } 268 - 269 277 int cpci_configure_slot(struct slot* slot) 270 278 { 271 279 unsigned char busnr; ··· 260 294 261 295 dbg("%s - enter", __FUNCTION__); 262 296 263 - if(slot->dev == NULL) { 297 + if (slot->dev == NULL) { 264 298 dbg("pci_dev null, finding %02x:%02x:%x", 265 299 slot->bus->number, PCI_SLOT(slot->devfn), PCI_FUNC(slot->devfn)); 266 - slot->dev = pci_find_slot(slot->bus->number, slot->devfn); 300 + slot->dev = pci_get_slot(slot->bus, slot->devfn); 267 301 } 268 302 269 303 /* Still NULL? Well then scan for it! 
*/ 270 - if(slot->dev == NULL) { 304 + if (slot->dev == NULL) { 271 305 int n; 272 306 dbg("pci_dev still null"); 273 307 ··· 277 311 */ 278 312 n = pci_scan_slot(slot->bus, slot->devfn); 279 313 dbg("%s: pci_scan_slot returned %d", __FUNCTION__, n); 280 - if(n > 0) 314 + if (n > 0) 281 315 pci_bus_add_devices(slot->bus); 282 - slot->dev = pci_find_slot(slot->bus->number, slot->devfn); 283 - if(slot->dev == NULL) { 316 + slot->dev = pci_get_slot(slot->bus, slot->devfn); 317 + if (slot->dev == NULL) { 284 318 err("Could not find PCI device for slot %02x", slot->number); 285 319 return 1; 286 320 } ··· 295 329 296 330 pci_bus_assign_resources(slot->dev->bus); 297 331 298 - cpci_enable_device(slot->dev); 299 - 300 332 dbg("%s - exit", __FUNCTION__); 301 333 return 0; 302 334 } ··· 305 341 struct pci_dev *dev; 306 342 307 343 dbg("%s - enter", __FUNCTION__); 308 - if(!slot->dev) { 344 + if (!slot->dev) { 309 345 err("No device for slot %02x\n", slot->number); 310 346 return -ENODEV; 311 347 } 312 348 313 349 for (i = 0; i < 8; i++) { 314 - dev = pci_find_slot(slot->bus->number, 350 + dev = pci_get_slot(slot->bus, 315 351 PCI_DEVFN(PCI_SLOT(slot->devfn), i)); 316 - if(dev) { 352 + if (dev) { 317 353 pci_remove_bus_device(dev); 318 354 slot->dev = NULL; 319 355 }
+2 -2
drivers/pci/hotplug/shpchprm_acpi.c
··· 1626 1626 pci_bus->number = func->bus; 1627 1627 devfn = PCI_DEVFN(func->device, func->function); 1628 1628 1629 - ab = find_acpi_bridge_by_bus(acpi_bridges_head, ctrl->seg, ctrl->bus); 1629 + ab = find_acpi_bridge_by_bus(acpi_bridges_head, ctrl->seg, ctrl->slot_bus); 1630 1630 1631 1631 if (ab) { 1632 1632 if (ab->_hpp) { ··· 1681 1681 | PCI_COMMAND_IO | PCI_COMMAND_MEMORY; 1682 1682 bcmd = bcommand = bcommand | PCI_BRIDGE_CTL_NO_ISA; 1683 1683 1684 - ab = find_acpi_bridge_by_bus(acpi_bridges_head, ctrl->seg, ctrl->bus); 1684 + ab = find_acpi_bridge_by_bus(acpi_bridges_head, ctrl->seg, ctrl->slot_bus); 1685 1685 if (ab) { 1686 1686 if (ab->_hpp) { 1687 1687 if (ab->_hpp->enable_perr) {
+3
drivers/scsi/ahci.c
··· 32 32 #include <linux/delay.h> 33 33 #include <linux/interrupt.h> 34 34 #include <linux/sched.h> 35 + #include <linux/dma-mapping.h> 35 36 #include "scsi.h" 36 37 #include <scsi/scsi_host.h> 37 38 #include <linux/libata.h> ··· 290 289 { 291 290 struct ahci_host_priv *hpriv = host_set->private_data; 292 291 kfree(hpriv); 292 + 293 + ata_host_stop(host_set); 293 294 } 294 295 295 296 static int ahci_port_start(struct ata_port *ap)
+1 -1
drivers/scsi/aic7xxx/aic79xx_osm.c
··· 2488 2488 sprintf(current->comm, "ahd_dv_%d", ahd->unit); 2489 2489 #else 2490 2490 daemonize("ahd_dv_%d", ahd->unit); 2491 - current->flags |= PF_FREEZE; 2491 + current->flags |= PF_NOFREEZE; 2492 2492 #endif 2493 2493 unlock_kernel(); 2494 2494
+2
drivers/scsi/ata_piix.c
··· 153 153 154 154 .port_start = ata_port_start, 155 155 .port_stop = ata_port_stop, 156 + .host_stop = ata_host_stop, 156 157 }; 157 158 158 159 static struct ata_port_operations piix_sata_ops = { ··· 181 180 182 181 .port_start = ata_port_start, 183 182 .port_stop = ata_port_stop, 183 + .host_stop = ata_host_stop, 184 184 }; 185 185 186 186 static struct ata_port_info piix_port_info[] = {
+11 -4
drivers/scsi/libata-core.c
··· 3666 3666 dma_free_coherent(dev, ATA_PRD_TBL_SZ, ap->prd, ap->prd_dma); 3667 3667 } 3668 3668 3669 + void ata_host_stop (struct ata_host_set *host_set) 3670 + { 3671 + if (host_set->mmio_base) 3672 + iounmap(host_set->mmio_base); 3673 + } 3674 + 3675 + 3669 3676 /** 3670 3677 * ata_host_remove - Unregister SCSI host structure with upper layers 3671 3678 * @ap: Port to unregister ··· 4278 4271 } 4279 4272 4280 4273 free_irq(host_set->irq, host_set); 4281 - if (host_set->ops->host_stop) 4282 - host_set->ops->host_stop(host_set); 4283 - if (host_set->mmio_base) 4284 - iounmap(host_set->mmio_base); 4285 4274 4286 4275 for (i = 0; i < host_set->n_ports; i++) { 4287 4276 ap = host_set->ports[i]; ··· 4295 4292 4296 4293 scsi_host_put(ap->host); 4297 4294 } 4295 + 4296 + if (host_set->ops->host_stop) 4297 + host_set->ops->host_stop(host_set); 4298 4298 4299 4299 kfree(host_set); 4300 4300 ··· 4387 4381 EXPORT_SYMBOL_GPL(ata_exec_command); 4388 4382 EXPORT_SYMBOL_GPL(ata_port_start); 4389 4383 EXPORT_SYMBOL_GPL(ata_port_stop); 4384 + EXPORT_SYMBOL_GPL(ata_host_stop); 4390 4385 EXPORT_SYMBOL_GPL(ata_interrupt); 4391 4386 EXPORT_SYMBOL_GPL(ata_qc_prep); 4392 4387 EXPORT_SYMBOL_GPL(ata_bmdma_setup);
+1 -1
drivers/scsi/libata.h
··· 26 26 #define __LIBATA_H__ 27 27 28 28 #define DRV_NAME "libata" 29 - #define DRV_VERSION "1.10" /* must be exactly four chars */ 29 + #define DRV_VERSION "1.11" /* must be exactly four chars */ 30 30 31 31 struct ata_scsi_args { 32 32 u16 *id;
+2
drivers/scsi/sata_nv.c
··· 329 329 host->host_desc->disable_hotplug(host_set); 330 330 331 331 kfree(host); 332 + 333 + ata_host_stop(host_set); 332 334 } 333 335 334 336 static int nv_init_one (struct pci_dev *pdev, const struct pci_device_id *ent)
+1
drivers/scsi/sata_promise.c
··· 122 122 .scr_write = pdc_sata_scr_write, 123 123 .port_start = pdc_port_start, 124 124 .port_stop = pdc_port_stop, 125 + .host_stop = ata_host_stop, 125 126 }; 126 127 127 128 static struct ata_port_info pdc_port_info[] = {
+2
drivers/scsi/sata_qstor.c
··· 536 536 537 537 writeb(0, mmio_base + QS_HCT_CTRL); /* disable host interrupts */ 538 538 writeb(QS_CNFG3_GSRST, mmio_base + QS_HCF_CNFG3); /* global reset */ 539 + 540 + ata_host_stop(host_set); 539 541 } 540 542 541 543 static void qs_host_init(unsigned int chip_id, struct ata_probe_ent *pe)
+1
drivers/scsi/sata_sil.c
··· 161 161 .scr_write = sil_scr_write, 162 162 .port_start = ata_port_start, 163 163 .port_stop = ata_port_stop, 164 + .host_stop = ata_host_stop, 164 165 }; 165 166 166 167 static struct ata_port_info sil_port_info[] = {
+1
drivers/scsi/sata_sis.c
··· 114 114 .scr_write = sis_scr_write, 115 115 .port_start = ata_port_start, 116 116 .port_stop = ata_port_stop, 117 + .host_stop = ata_host_stop, 117 118 }; 118 119 119 120 static struct ata_port_info sis_port_info = {
+1
drivers/scsi/sata_svw.c
··· 313 313 .scr_write = k2_sata_scr_write, 314 314 .port_start = ata_port_start, 315 315 .port_stop = ata_port_stop, 316 + .host_stop = ata_host_stop, 316 317 }; 317 318 318 319 static void k2_sata_setup_port(struct ata_ioports *port, unsigned long base)
+2
drivers/scsi/sata_sx4.c
··· 245 245 246 246 iounmap(dimm_mmio); 247 247 kfree(hpriv); 248 + 249 + ata_host_stop(host_set); 248 250 } 249 251 250 252 static int pdc_port_start(struct ata_port *ap)
+1
drivers/scsi/sata_uli.c
··· 113 113 114 114 .port_start = ata_port_start, 115 115 .port_stop = ata_port_stop, 116 + .host_stop = ata_host_stop, 116 117 }; 117 118 118 119 static struct ata_port_info uli_port_info = {
+1
drivers/scsi/sata_via.c
··· 134 134 135 135 .port_start = ata_port_start, 136 136 .port_stop = ata_port_stop, 137 + .host_stop = ata_host_stop, 137 138 }; 138 139 139 140 static struct ata_port_info svia_port_info = {
+2
drivers/scsi/sata_vsc.c
··· 21 21 #include <linux/blkdev.h> 22 22 #include <linux/delay.h> 23 23 #include <linux/interrupt.h> 24 + #include <linux/dma-mapping.h> 24 25 #include "scsi.h" 25 26 #include <scsi/scsi_host.h> 26 27 #include <linux/libata.h> ··· 231 230 .scr_write = vsc_sata_scr_write, 232 231 .port_start = ata_port_start, 233 232 .port_stop = ata_port_stop, 233 + .host_stop = ata_host_stop, 234 234 }; 235 235 236 236 static void __devinit vsc_sata_setup_port(struct ata_ioports *port, unsigned long base)
+11
drivers/usb/host/Kconfig
··· 124 124 To compile this driver as a module, choose M here: the 125 125 module will be called sl811-hcd. 126 126 127 + config USB_SL811_CS 128 + tristate "CF/PCMCIA support for SL811HS HCD" 129 + depends on USB_SL811_HCD && PCMCIA 130 + default N 131 + help 132 + Wraps a PCMCIA driver around the SL811HS HCD, supporting the RATOC 133 + REX-CFU1U CF card (often used with PDAs). If unsure, say N. 134 + 135 + To compile this driver as a module, choose M here: the 136 + module will be called "sl811_cs". 137 +
+1
drivers/usb/host/Makefile
··· 7 7 obj-$(CONFIG_USB_OHCI_HCD) += ohci-hcd.o 8 8 obj-$(CONFIG_USB_UHCI_HCD) += uhci-hcd.o 9 9 obj-$(CONFIG_USB_SL811_HCD) += sl811-hcd.o 10 + obj-$(CONFIG_USB_SL811_CS) += sl811_cs.o 10 11 obj-$(CONFIG_ETRAX_ARCH_V10) += hc_crisv10.o
+82 -66
drivers/usb/host/sl811-hcd.c
··· 2 2 * SL811HS HCD (Host Controller Driver) for USB. 3 3 * 4 4 * Copyright (C) 2004 Psion Teklogix (for NetBook PRO) 5 - * Copyright (C) 2004 David Brownell 6 - * 5 + * Copyright (C) 2004-2005 David Brownell 6 + * 7 7 * Periodic scheduling is based on Roman's OHCI code 8 8 * Copyright (C) 1999 Roman Weissgaerber 9 9 * ··· 15 15 * For documentation, see the SL811HS spec and the "SL811HS Embedded Host" 16 16 * document (providing significant pieces missing from that spec); plus 17 17 * the SL811S spec if you want peripheral side info. 18 - */ 18 + */ 19 19 20 20 /* 21 21 * Status: Passed basic stress testing, works with hubs, mice, keyboards, ··· 67 67 MODULE_DESCRIPTION("SL811HS USB Host Controller Driver"); 68 68 MODULE_LICENSE("GPL"); 69 69 70 - #define DRIVER_VERSION "15 Dec 2004" 70 + #define DRIVER_VERSION "19 May 2005" 71 71 72 72 73 73 #ifndef DEBUG ··· 121 121 /* reset as thoroughly as we can */ 122 122 if (sl811->board && sl811->board->reset) 123 123 sl811->board->reset(hcd->self.controller); 124 + else { 125 + sl811_write(sl811, SL11H_CTLREG1, SL11H_CTL1MASK_SE0); 126 + mdelay(20); 127 + } 124 128 125 129 sl811_write(sl811, SL11H_IRQ_ENABLE, 0); 126 130 sl811_write(sl811, SL11H_CTLREG1, sl811->ctrl1); ··· 447 443 spin_lock(&urb->lock); 448 444 if (urb->status == -EINPROGRESS) 449 445 urb->status = status; 446 + urb->hcpriv = NULL; 450 447 spin_unlock(&urb->lock); 451 448 452 449 spin_unlock(&sl811->lock); ··· 477 472 if (*prev) 478 473 *prev = ep->next; 479 474 sl811->load[i] -= ep->load; 480 - } 475 + } 481 476 ep->branch = PERIODIC_SIZE; 482 477 sl811->periodic_count--; 483 478 sl811_to_hcd(sl811)->self.bandwidth_allocated ··· 666 661 667 662 #ifdef QUIRK2 668 663 /* this may no longer be necessary ... 
*/ 669 - if (irqstat == 0 && ret == IRQ_NONE) { 664 + if (irqstat == 0) { 670 665 irqstat = checkdone(sl811); 671 - if (irqstat /* && irq != ~0 */ ) 666 + if (irqstat) 672 667 sl811->stat_lost++; 673 668 } 674 669 #endif ··· 727 722 if (sl811->active_a) { 728 723 sl811_write(sl811, SL811_EP_A(SL11H_HOSTCTLREG), 0); 729 724 finish_request(sl811, sl811->active_a, 730 - container_of(sl811->active_a->hep->urb_list.next, 725 + container_of(sl811->active_a 726 + ->hep->urb_list.next, 731 727 struct urb, urb_list), 732 728 NULL, -ESHUTDOWN); 733 729 sl811->active_a = NULL; ··· 737 731 if (sl811->active_b) { 738 732 sl811_write(sl811, SL811_EP_B(SL11H_HOSTCTLREG), 0); 739 733 finish_request(sl811, sl811->active_b, 740 - container_of(sl811->active_b->hep->urb_list.next, 734 + container_of(sl811->active_b 735 + ->hep->urb_list.next, 741 736 struct urb, urb_list), 742 737 NULL, -ESHUTDOWN); 743 738 sl811->active_b = NULL; ··· 768 761 goto retry; 769 762 } 770 763 771 - if (sl811->periodic_count == 0 && list_empty(&sl811->async)) 764 + if (sl811->periodic_count == 0 && list_empty(&sl811->async)) 772 765 sofirq_off(sl811); 773 766 sl811_write(sl811, SL11H_IRQ_ENABLE, sl811->irq_enable); 774 767 ··· 803 796 } 804 797 if (j < PERIODIC_SIZE) 805 798 continue; 806 - branch = i; 799 + branch = i; 807 800 } 808 801 } 809 802 return branch; ··· 897 890 break; 898 891 } 899 892 893 + ep->hep = hep; 900 894 hep->hcpriv = ep; 901 895 } 902 896 ··· 969 961 static int sl811h_urb_dequeue(struct usb_hcd *hcd, struct urb *urb) 970 962 { 971 963 struct sl811 *sl811 = hcd_to_sl811(hcd); 972 - struct usb_host_endpoint *hep = urb->hcpriv; 964 + struct usb_host_endpoint *hep; 973 965 unsigned long flags; 974 966 struct sl811h_ep *ep; 975 967 int retval = 0; 976 968 977 - if (!hep) 978 - return -EINVAL; 979 - 980 969 spin_lock_irqsave(&sl811->lock, flags); 970 + hep = urb->hcpriv; 971 + if (!hep) 972 + goto fail; 973 + 981 974 ep = hep->hcpriv; 982 975 if (ep) { 983 976 /* finish right away if this 
urb can't be active ... ··· 1026 1017 VDBG("dequeue, urb %p active %s; wait4irq\n", urb, 1027 1018 (sl811->active_a == ep) ? "A" : "B"); 1028 1019 } else 1020 + fail: 1029 1021 retval = -EINVAL; 1030 1022 spin_unlock_irqrestore(&sl811->lock, flags); 1031 1023 return retval; ··· 1586 1576 if (sl811->board && sl811->board->power) 1587 1577 hub_set_power_budget(udev, sl811->board->power * 2); 1588 1578 1579 + /* enable power and interupts */ 1580 + port_power(sl811, 1); 1581 + 1589 1582 return 0; 1590 1583 } 1591 1584 ··· 1631 1618 1632 1619 /*-------------------------------------------------------------------------*/ 1633 1620 1634 - static int __init_or_module 1621 + static int __devexit 1635 1622 sl811h_remove(struct device *dev) 1636 1623 { 1637 1624 struct usb_hcd *hcd = dev_get_drvdata(dev); ··· 1644 1631 remove_debug_file(sl811); 1645 1632 usb_remove_hcd(hcd); 1646 1633 1647 - iounmap(sl811->data_reg); 1634 + /* some platforms may use IORESOURCE_IO */ 1648 1635 res = platform_get_resource(pdev, IORESOURCE_MEM, 1); 1649 - release_mem_region(res->start, 1); 1636 + if (res) 1637 + iounmap(sl811->data_reg); 1650 1638 1651 - iounmap(sl811->addr_reg); 1652 1639 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 1653 - release_mem_region(res->start, 1); 1640 + if (res) 1641 + iounmap(sl811->addr_reg); 1654 1642 1655 1643 usb_put_hcd(hcd); 1656 1644 return 0; 1657 1645 } 1658 1646 1659 - #define resource_len(r) (((r)->end - (r)->start) + 1) 1660 - 1661 - static int __init 1647 + static int __devinit 1662 1648 sl811h_probe(struct device *dev) 1663 1649 { 1664 1650 struct usb_hcd *hcd; ··· 1668 1656 void __iomem *addr_reg; 1669 1657 void __iomem *data_reg; 1670 1658 int retval; 1671 - u8 tmp; 1659 + u8 tmp, ioaddr = 0; 1672 1660 1673 1661 /* basic sanity checks first. board-specific init logic should 1674 1662 * have initialized these three resources and probably board ··· 1676 1664 * minimal sanity checking. 
1677 1665 */ 1678 1666 pdev = container_of(dev, struct platform_device, dev); 1679 - if (pdev->num_resources < 3) 1680 - return -ENODEV; 1681 - 1682 - addr = platform_get_resource(pdev, IORESOURCE_MEM, 0); 1683 - data = platform_get_resource(pdev, IORESOURCE_MEM, 1); 1684 1667 irq = platform_get_irq(pdev, 0); 1685 - if (!addr || !data || irq < 0) 1668 + if (pdev->num_resources < 3 || irq < 0) 1686 1669 return -ENODEV; 1687 1670 1688 1671 /* refuse to confuse usbcore */ ··· 1686 1679 return -EINVAL; 1687 1680 } 1688 1681 1689 - if (!request_mem_region(addr->start, 1, hcd_name)) { 1690 - retval = -EBUSY; 1691 - goto err1; 1692 - } 1693 - addr_reg = ioremap(addr->start, resource_len(addr)); 1694 - if (addr_reg == NULL) { 1695 - retval = -ENOMEM; 1696 - goto err2; 1697 - } 1682 + /* the chip may be wired for either kind of addressing */ 1683 + addr = platform_get_resource(pdev, IORESOURCE_MEM, 0); 1684 + data = platform_get_resource(pdev, IORESOURCE_MEM, 1); 1685 + retval = -EBUSY; 1686 + if (!addr || !data) { 1687 + addr = platform_get_resource(pdev, IORESOURCE_IO, 0); 1688 + data = platform_get_resource(pdev, IORESOURCE_IO, 1); 1689 + if (!addr || !data) 1690 + return -ENODEV; 1691 + ioaddr = 1; 1698 1692 1699 - if (!request_mem_region(data->start, 1, hcd_name)) { 1700 - retval = -EBUSY; 1701 - goto err3; 1702 - } 1703 - data_reg = ioremap(data->start, resource_len(addr)); 1704 - if (data_reg == NULL) { 1705 - retval = -ENOMEM; 1706 - goto err4; 1693 + addr_reg = (void __iomem *) addr->start; 1694 + data_reg = (void __iomem *) data->start; 1695 + } else { 1696 + addr_reg = ioremap(addr->start, 1); 1697 + if (addr_reg == NULL) { 1698 + retval = -ENOMEM; 1699 + goto err2; 1700 + } 1701 + 1702 + data_reg = ioremap(data->start, 1); 1703 + if (data_reg == NULL) { 1704 + retval = -ENOMEM; 1705 + goto err4; 1706 + } 1707 1707 } 1708 1708 1709 1709 /* allocate and initialize hcd */ ··· 1751 1737 goto err6; 1752 1738 } 1753 1739 1754 - /* sl811s would need a different handler 
for this irq */ 1755 - #ifdef CONFIG_ARM 1756 - /* Cypress docs say the IRQ is IRQT_HIGH ... */ 1757 - set_irq_type(irq, IRQT_RISING); 1758 - #endif 1759 - retval = usb_add_hcd(hcd, irq, SA_INTERRUPT); 1740 + /* The chip's IRQ is level triggered, active high. A requirement 1741 + * for platform device setup is to cope with things like signal 1742 + * inverters (e.g. CF is active low) or working only with edge 1743 + * triggers (e.g. most ARM CPUs). Initial driver stress testing 1744 + * was on a system with single edge triggering, so most sorts of 1745 + * triggering arrangement should work. 1746 + */ 1747 + retval = usb_add_hcd(hcd, irq, SA_INTERRUPT | SA_SHIRQ); 1760 1748 if (retval != 0) 1761 1749 goto err6; 1762 1750 ··· 1768 1752 err6: 1769 1753 usb_put_hcd(hcd); 1770 1754 err5: 1771 - iounmap(data_reg); 1755 + if (!ioaddr) 1756 + iounmap(data_reg); 1772 1757 err4: 1773 - release_mem_region(data->start, 1); 1774 - err3: 1775 - iounmap(addr_reg); 1758 + if (!ioaddr) 1759 + iounmap(addr_reg); 1776 1760 err2: 1777 - release_mem_region(addr->start, 1); 1778 - err1: 1779 1761 DBG("init error, %d\n", retval); 1780 1762 return retval; 1781 1763 } ··· 1781 1767 #ifdef CONFIG_PM 1782 1768 1783 1769 /* for this device there's no useful distinction between the controller 1784 - * and its root hub, except that the root hub only gets direct PM calls 1770 + * and its root hub, except that the root hub only gets direct PM calls 1785 1771 * when CONFIG_USB_SUSPEND is enabled. 
1786 1772 */ 1787 1773 ··· 1835 1821 #endif 1836 1822 1837 1823 1838 - static struct device_driver sl811h_driver = { 1824 + /* this driver is exported so sl811_cs can depend on it */ 1825 + struct device_driver sl811h_driver = { 1839 1826 .name = (char *) hcd_name, 1840 1827 .bus = &platform_bus_type, 1841 1828 1842 1829 .probe = sl811h_probe, 1843 - .remove = sl811h_remove, 1830 + .remove = __devexit_p(sl811h_remove), 1844 1831 1845 1832 .suspend = sl811h_suspend, 1846 1833 .resume = sl811h_resume, 1847 1834 }; 1835 + EXPORT_SYMBOL(sl811h_driver); 1848 1836 1849 1837 /*-------------------------------------------------------------------------*/ 1850 - 1851 - static int __init sl811h_init(void) 1838 + 1839 + static int __init sl811h_init(void) 1852 1840 { 1853 1841 if (usb_disabled()) 1854 1842 return -ENODEV; ··· 1860 1844 } 1861 1845 module_init(sl811h_init); 1862 1846 1863 - static void __exit sl811h_cleanup(void) 1864 - { 1847 + static void __exit sl811h_cleanup(void) 1848 + { 1865 1849 driver_unregister(&sl811h_driver); 1866 1850 } 1867 1851 module_exit(sl811h_cleanup);
+442
drivers/usb/host/sl811_cs.c
··· 1 + /*
2 + * PCMCIA driver for SL811HS (as found in REX-CFU1U)
3 + * Filename: sl811_cs.c
4 + * Author: Yukio Yamamoto
5 + *
6 + * Port to sl811-hcd and 2.6.x by
7 + * Botond Botyanszki <boti@rocketmail.com>
8 + * Simon Pickering
9 + *
10 + * Last update: 2005-05-12
11 + */
12 +
13 + #include <linux/kernel.h>
14 + #include <linux/module.h>
15 + #include <linux/init.h>
16 + #include <linux/sched.h>
17 + #include <linux/ptrace.h>
18 + #include <linux/slab.h>
19 + #include <linux/string.h>
20 + #include <linux/timer.h>
21 + #include <linux/ioport.h>
22 +
23 + #include <pcmcia/version.h>
24 + #include <pcmcia/cs_types.h>
25 + #include <pcmcia/cs.h>
26 + #include <pcmcia/cistpl.h>
27 + #include <pcmcia/cisreg.h>
28 + #include <pcmcia/ds.h>
29 +
30 + #include <linux/usb_sl811.h>
31 +
32 + MODULE_AUTHOR("Botond Botyanszki");
33 + MODULE_DESCRIPTION("REX-CFU1U PCMCIA driver for 2.6");
34 + MODULE_LICENSE("GPL");
35 +
36 +
37 + /*====================================================================*/
38 + /* MACROS */
39 + /*====================================================================*/
40 +
41 + #if defined(DEBUG) || defined(CONFIG_USB_DEBUG) || defined(PCMCIA_DEBUG)
42 +
43 + static int pc_debug = 0;
44 + module_param(pc_debug, int, 0644);
45 +
46 + #define DBG(n, args...) if (pc_debug>(n)) printk(KERN_DEBUG "sl811_cs: " args)
47 +
48 + #else
49 + #define DBG(n, args...) do{}while(0)
50 + #endif /* no debugging */
51 +
52 + #define INFO(args...) \
printk(KERN_INFO "sl811_cs: " args) 53 + 54 + #define INT_MODULE_PARM(n, v) static int n = v; module_param(n, int, 0444) 55 + 56 + #define CS_CHECK(fn, ret) \ 57 + do { \ 58 + last_fn = (fn); \ 59 + if ((last_ret = (ret)) != 0) \ 60 + goto cs_failed; \ 61 + } while (0) 62 + 63 + /*====================================================================*/ 64 + /* VARIABLES */ 65 + /*====================================================================*/ 66 + 67 + static const char driver_name[DEV_NAME_LEN] = "sl811_cs"; 68 + 69 + static dev_link_t *dev_list = NULL; 70 + 71 + static int irq_list[4] = { -1 }; 72 + static int irq_list_count; 73 + 74 + module_param_array(irq_list, int, &irq_list_count, 0444); 75 + 76 + INT_MODULE_PARM(irq_mask, 0xdeb8); 77 + 78 + typedef struct local_info_t { 79 + dev_link_t link; 80 + dev_node_t node; 81 + } local_info_t; 82 + 83 + /*====================================================================*/ 84 + 85 + static void release_platform_dev(struct device * dev) 86 + { 87 + DBG(0, "sl811_cs platform_dev release\n"); 88 + dev->parent = NULL; 89 + } 90 + 91 + static struct sl811_platform_data platform_data = { 92 + .potpg = 100, 93 + .power = 50, /* == 100mA */ 94 + // .reset = ... 
FIXME: invoke CF reset on the card
95 + };
96 +
97 + static struct resource resources[] = {
98 + [0] = {
99 + .flags = IORESOURCE_IRQ,
100 + },
101 + [1] = {
102 + // .name = "address",
103 + .flags = IORESOURCE_IO,
104 + },
105 + [2] = {
106 + // .name = "data",
107 + .flags = IORESOURCE_IO,
108 + },
109 + };
110 +
111 + extern struct device_driver sl811h_driver;
112 +
113 + static struct platform_device platform_dev = {
114 + .id = -1,
115 + .dev = {
116 + .platform_data = &platform_data,
117 + .release = release_platform_dev,
118 + },
119 + .resource = resources,
120 + .num_resources = ARRAY_SIZE(resources),
121 + };
122 +
123 + static int sl811_hc_init(struct device *parent, ioaddr_t base_addr, int irq)
124 + {
125 + if (platform_dev.dev.parent)
126 + return -EBUSY;
127 + platform_dev.dev.parent = parent;
128 +
129 + /* finish setting up the platform device */
130 + resources[0].start = irq;
131 +
132 + resources[1].start = base_addr;
133 + resources[1].end = base_addr;
134 +
135 + resources[2].start = base_addr + 1;
136 + resources[2].end = base_addr + 1;
137 +
138 + /* The driver core will probe for us. We know sl811-hcd has been
139 + * initialized already because of the link order dependency. 
140 + */ 141 + platform_dev.name = sl811h_driver.name; 142 + return platform_device_register(&platform_dev); 143 + } 144 + 145 + /*====================================================================*/ 146 + 147 + static void sl811_cs_detach(dev_link_t *link) 148 + { 149 + dev_link_t **linkp; 150 + 151 + DBG(0, "sl811_cs_detach(0x%p)\n", link); 152 + 153 + /* Locate device structure */ 154 + for (linkp = &dev_list; *linkp; linkp = &(*linkp)->next) { 155 + if (*linkp == link) 156 + break; 157 + } 158 + if (*linkp == NULL) 159 + return; 160 + 161 + /* Break the link with Card Services */ 162 + if (link->handle) 163 + pcmcia_deregister_client(link->handle); 164 + 165 + /* Unlink device structure, and free it */ 166 + *linkp = link->next; 167 + /* This points to the parent local_info_t struct */ 168 + kfree(link->priv); 169 + } 170 + 171 + static void sl811_cs_release(dev_link_t * link) 172 + { 173 + 174 + DBG(0, "sl811_cs_release(0x%p)\n", link); 175 + 176 + if (link->open) { 177 + DBG(1, "sl811_cs: release postponed, '%s' still open\n", 178 + link->dev->dev_name); 179 + link->state |= DEV_STALE_CONFIG; 180 + return; 181 + } 182 + 183 + /* Unlink the device chain */ 184 + link->dev = NULL; 185 + 186 + platform_device_unregister(&platform_dev); 187 + pcmcia_release_configuration(link->handle); 188 + if (link->io.NumPorts1) 189 + pcmcia_release_io(link->handle, &link->io); 190 + if (link->irq.AssignedIRQ) 191 + pcmcia_release_irq(link->handle, &link->irq); 192 + link->state &= ~DEV_CONFIG; 193 + 194 + if (link->state & DEV_STALE_LINK) 195 + sl811_cs_detach(link); 196 + } 197 + 198 + static void sl811_cs_config(dev_link_t *link) 199 + { 200 + client_handle_t handle = link->handle; 201 + struct device *parent = &handle_to_dev(handle); 202 + local_info_t *dev = link->priv; 203 + tuple_t tuple; 204 + cisparse_t parse; 205 + int last_fn, last_ret; 206 + u_char buf[64]; 207 + config_info_t conf; 208 + cistpl_cftable_entry_t dflt = { 0 }; 209 + 210 + DBG(0, 
"sl811_cs_config(0x%p)\n", link); 211 + 212 + tuple.DesiredTuple = CISTPL_CONFIG; 213 + tuple.Attributes = 0; 214 + tuple.TupleData = buf; 215 + tuple.TupleDataMax = sizeof(buf); 216 + tuple.TupleOffset = 0; 217 + CS_CHECK(GetFirstTuple, pcmcia_get_first_tuple(handle, &tuple)); 218 + CS_CHECK(GetTupleData, pcmcia_get_tuple_data(handle, &tuple)); 219 + CS_CHECK(ParseTuple, pcmcia_parse_tuple(handle, &tuple, &parse)); 220 + link->conf.ConfigBase = parse.config.base; 221 + link->conf.Present = parse.config.rmask[0]; 222 + 223 + /* Configure card */ 224 + link->state |= DEV_CONFIG; 225 + 226 + /* Look up the current Vcc */ 227 + CS_CHECK(GetConfigurationInfo, 228 + pcmcia_get_configuration_info(handle, &conf)); 229 + link->conf.Vcc = conf.Vcc; 230 + 231 + tuple.DesiredTuple = CISTPL_CFTABLE_ENTRY; 232 + CS_CHECK(GetFirstTuple, pcmcia_get_first_tuple(handle, &tuple)); 233 + while (1) { 234 + cistpl_cftable_entry_t *cfg = &(parse.cftable_entry); 235 + 236 + if (pcmcia_get_tuple_data(handle, &tuple) != 0 237 + || pcmcia_parse_tuple(handle, &tuple, &parse) 238 + != 0) 239 + goto next_entry; 240 + 241 + if (cfg->flags & CISTPL_CFTABLE_DEFAULT) { 242 + dflt = *cfg; 243 + } 244 + 245 + if (cfg->index == 0) 246 + goto next_entry; 247 + 248 + link->conf.ConfigIndex = cfg->index; 249 + 250 + /* Use power settings for Vcc and Vpp if present */ 251 + /* Note that the CIS values need to be rescaled */ 252 + if (cfg->vcc.present & (1<<CISTPL_POWER_VNOM)) { 253 + if (cfg->vcc.param[CISTPL_POWER_VNOM]/10000 254 + != conf.Vcc) 255 + goto next_entry; 256 + } else if (dflt.vcc.present & (1<<CISTPL_POWER_VNOM)) { 257 + if (dflt.vcc.param[CISTPL_POWER_VNOM]/10000 258 + != conf.Vcc) 259 + goto next_entry; 260 + } 261 + 262 + if (cfg->vpp1.present & (1<<CISTPL_POWER_VNOM)) 263 + link->conf.Vpp1 = link->conf.Vpp2 = 264 + cfg->vpp1.param[CISTPL_POWER_VNOM]/10000; 265 + else if (dflt.vpp1.present & (1<<CISTPL_POWER_VNOM)) 266 + link->conf.Vpp1 = link->conf.Vpp2 = 267 + 
dflt.vpp1.param[CISTPL_POWER_VNOM]/10000; 268 + 269 + /* we need an interrupt */ 270 + if (cfg->irq.IRQInfo1 || dflt.irq.IRQInfo1) 271 + link->conf.Attributes |= CONF_ENABLE_IRQ; 272 + 273 + /* IO window settings */ 274 + link->io.NumPorts1 = link->io.NumPorts2 = 0; 275 + if ((cfg->io.nwin > 0) || (dflt.io.nwin > 0)) { 276 + cistpl_io_t *io = (cfg->io.nwin) ? &cfg->io : &dflt.io; 277 + 278 + link->io.Attributes1 = IO_DATA_PATH_WIDTH_8; 279 + link->io.IOAddrLines = io->flags & CISTPL_IO_LINES_MASK; 280 + link->io.BasePort1 = io->win[0].base; 281 + link->io.NumPorts1 = io->win[0].len; 282 + 283 + if (pcmcia_request_io(link->handle, &link->io) != 0) 284 + goto next_entry; 285 + } 286 + break; 287 + 288 + next_entry: 289 + if (link->io.NumPorts1) 290 + pcmcia_release_io(link->handle, &link->io); 291 + last_ret = pcmcia_get_next_tuple(handle, &tuple); 292 + } 293 + 294 + /* require an IRQ and two registers */ 295 + if (!link->io.NumPorts1 || link->io.NumPorts1 < 2) 296 + goto cs_failed; 297 + if (link->conf.Attributes & CONF_ENABLE_IRQ) 298 + CS_CHECK(RequestIRQ, 299 + pcmcia_request_irq(link->handle, &link->irq)); 300 + else 301 + goto cs_failed; 302 + 303 + CS_CHECK(RequestConfiguration, 304 + pcmcia_request_configuration(link->handle, &link->conf)); 305 + 306 + sprintf(dev->node.dev_name, driver_name); 307 + dev->node.major = dev->node.minor = 0; 308 + link->dev = &dev->node; 309 + 310 + printk(KERN_INFO "%s: index 0x%02x: Vcc %d.%d", 311 + dev->node.dev_name, link->conf.ConfigIndex, 312 + link->conf.Vcc/10, link->conf.Vcc%10); 313 + if (link->conf.Vpp1) 314 + printk(", Vpp %d.%d", link->conf.Vpp1/10, link->conf.Vpp1%10); 315 + printk(", irq %d", link->irq.AssignedIRQ); 316 + printk(", io 0x%04x-0x%04x", link->io.BasePort1, 317 + link->io.BasePort1+link->io.NumPorts1-1); 318 + printk("\n"); 319 + 320 + link->state &= ~DEV_CONFIG_PENDING; 321 + 322 + if (sl811_hc_init(parent, link->io.BasePort1, link->irq.AssignedIRQ) 323 + < 0) { 324 + cs_failed: 325 + 
printk("sl811_cs_config failed\n"); 326 + cs_error(link->handle, last_fn, last_ret); 327 + sl811_cs_release(link); 328 + link->state &= ~DEV_CONFIG_PENDING; 329 + } 330 + } 331 + 332 + static int 333 + sl811_cs_event(event_t event, int priority, event_callback_args_t *args) 334 + { 335 + dev_link_t *link = args->client_data; 336 + 337 + DBG(1, "sl811_cs_event(0x%06x)\n", event); 338 + 339 + switch (event) { 340 + case CS_EVENT_CARD_REMOVAL: 341 + link->state &= ~DEV_PRESENT; 342 + if (link->state & DEV_CONFIG) 343 + sl811_cs_release(link); 344 + break; 345 + 346 + case CS_EVENT_CARD_INSERTION: 347 + link->state |= DEV_PRESENT | DEV_CONFIG_PENDING; 348 + sl811_cs_config(link); 349 + break; 350 + 351 + case CS_EVENT_PM_SUSPEND: 352 + link->state |= DEV_SUSPEND; 353 + /* Fall through... */ 354 + case CS_EVENT_RESET_PHYSICAL: 355 + if (link->state & DEV_CONFIG) 356 + pcmcia_release_configuration(link->handle); 357 + break; 358 + 359 + case CS_EVENT_PM_RESUME: 360 + link->state &= ~DEV_SUSPEND; 361 + /* Fall through... 
*/ 362 + case CS_EVENT_CARD_RESET: 363 + if (link->state & DEV_CONFIG) 364 + pcmcia_request_configuration(link->handle, &link->conf); 365 + DBG(0, "reset sl811-hcd here?\n"); 366 + break; 367 + } 368 + return 0; 369 + } 370 + 371 + static dev_link_t *sl811_cs_attach(void) 372 + { 373 + local_info_t *local; 374 + dev_link_t *link; 375 + client_reg_t client_reg; 376 + int ret, i; 377 + 378 + local = kmalloc(sizeof(local_info_t), GFP_KERNEL); 379 + if (!local) 380 + return NULL; 381 + memset(local, 0, sizeof(local_info_t)); 382 + link = &local->link; 383 + link->priv = local; 384 + 385 + /* Initialize */ 386 + link->irq.Attributes = IRQ_TYPE_EXCLUSIVE; 387 + link->irq.IRQInfo1 = IRQ_INFO2_VALID|IRQ_LEVEL_ID; 388 + if (irq_list[0] == -1) 389 + link->irq.IRQInfo2 = irq_mask; 390 + else 391 + for (i = 0; i < irq_list_count; i++) 392 + link->irq.IRQInfo2 |= 1 << irq_list[i]; 393 + link->irq.Handler = NULL; 394 + 395 + link->conf.Attributes = 0; 396 + link->conf.Vcc = 33; 397 + link->conf.IntType = INT_MEMORY_AND_IO; 398 + 399 + /* Register with Card Services */ 400 + link->next = dev_list; 401 + dev_list = link; 402 + client_reg.dev_info = (dev_info_t *) &driver_name; 403 + client_reg.Attributes = INFO_IO_CLIENT | INFO_CARD_SHARE; 404 + client_reg.EventMask = 405 + CS_EVENT_CARD_INSERTION | CS_EVENT_CARD_REMOVAL | 406 + CS_EVENT_RESET_PHYSICAL | CS_EVENT_CARD_RESET | 407 + CS_EVENT_PM_SUSPEND | CS_EVENT_PM_RESUME; 408 + client_reg.event_handler = &sl811_cs_event; 409 + client_reg.Version = 0x0210; 410 + client_reg.event_callback_args.client_data = link; 411 + ret = pcmcia_register_client(&link->handle, &client_reg); 412 + if (ret != CS_SUCCESS) { 413 + cs_error(link->handle, RegisterClient, ret); 414 + sl811_cs_detach(link); 415 + return NULL; 416 + } 417 + 418 + return link; 419 + } 420 + 421 + static struct pcmcia_driver sl811_cs_driver = { 422 + .owner = THIS_MODULE, 423 + .drv = { 424 + .name = (char *)driver_name, 425 + }, 426 + .attach = sl811_cs_attach, 427 + 
.detach = sl811_cs_detach, 428 + }; 429 + 430 + /*====================================================================*/ 431 + 432 + static int __init init_sl811_cs(void) 433 + { 434 + return pcmcia_register_driver(&sl811_cs_driver); 435 + } 436 + module_init(init_sl811_cs); 437 + 438 + static void __exit exit_sl811_cs(void) 439 + { 440 + pcmcia_unregister_driver(&sl811_cs_driver); 441 + } 442 + module_exit(exit_sl811_cs);
-2
drivers/usb/media/pwc/pwc-ctrl.c
··· 48 48 #include "pwc-uncompress.h" 49 49 #include "pwc-kiara.h" 50 50 #include "pwc-timon.h" 51 - #include "pwc-dec1.h" 52 - #include "pwc-dec23.h" 53 51 54 52 /* Request types: video */ 55 53 #define SET_LUM_CTL 0x01
-2
drivers/usb/media/pwc/pwc-uncompress.c
··· 29 29 30 30 #include "pwc.h" 31 31 #include "pwc-uncompress.h" 32 - #include "pwc-dec1.h" 33 - #include "pwc-dec23.h" 34 32 35 33 int pwc_decompress(struct pwc_device *pdev) 36 34 {
+3
drivers/usb/serial/ftdi_sio.c
··· 364 364 { USB_DEVICE_VER(FTDI_VID, PROTEGO_SPECIAL_3, 0, 0x3ff) }, 365 365 { USB_DEVICE_VER(FTDI_VID, PROTEGO_SPECIAL_4, 0, 0x3ff) }, 366 366 { USB_DEVICE_VER(FTDI_VID, FTDI_ELV_UO100_PID, 0, 0x3ff) }, 367 + { USB_DEVICE_VER(FTDI_VID, FTDI_ELV_UM100_PID, 0, 0x3ff) }, 367 368 { USB_DEVICE_VER(FTDI_VID, INSIDE_ACCESSO, 0, 0x3ff) }, 368 369 { USB_DEVICE_VER(INTREPID_VID, INTREPID_VALUECAN_PID, 0, 0x3ff) }, 369 370 { USB_DEVICE_VER(INTREPID_VID, INTREPID_NEOVI_PID, 0, 0x3ff) }, ··· 476 475 { USB_DEVICE_VER(FTDI_VID, FTDI_GUDEADS_E88E_PID, 0x400, 0xffff) }, 477 476 { USB_DEVICE_VER(FTDI_VID, FTDI_GUDEADS_E88F_PID, 0x400, 0xffff) }, 478 477 { USB_DEVICE_VER(FTDI_VID, FTDI_ELV_UO100_PID, 0x400, 0xffff) }, 478 + { USB_DEVICE_VER(FTDI_VID, FTDI_ELV_UM100_PID, 0x400, 0xffff) }, 479 479 { USB_DEVICE_VER(FTDI_VID, LINX_SDMUSBQSS_PID, 0x400, 0xffff) }, 480 480 { USB_DEVICE_VER(FTDI_VID, LINX_MASTERDEVEL2_PID, 0x400, 0xffff) }, 481 481 { USB_DEVICE_VER(FTDI_VID, LINX_FUTURE_0_PID, 0x400, 0xffff) }, ··· 620 618 { USB_DEVICE_VER(FTDI_VID, FTDI_GUDEADS_E88E_PID, 0x400, 0xffff) }, 621 619 { USB_DEVICE_VER(FTDI_VID, FTDI_GUDEADS_E88F_PID, 0x400, 0xffff) }, 622 620 { USB_DEVICE(FTDI_VID, FTDI_ELV_UO100_PID) }, 621 + { USB_DEVICE(FTDI_VID, FTDI_ELV_UM100_PID) }, 623 622 { USB_DEVICE_VER(FTDI_VID, LINX_SDMUSBQSS_PID, 0x400, 0xffff) }, 624 623 { USB_DEVICE_VER(FTDI_VID, LINX_MASTERDEVEL2_PID, 0x400, 0xffff) }, 625 624 { USB_DEVICE_VER(FTDI_VID, LINX_FUTURE_0_PID, 0x400, 0xffff) },
+2
drivers/usb/serial/ftdi_sio.h
··· 144 144 145 145 /* ELV USB Module UO100 (PID sent by Stefan Frings) */ 146 146 #define FTDI_ELV_UO100_PID 0xFB58 /* Product Id */ 147 + /* ELV USB Module UM100 (PID sent by Arnim Laeuger) */ 148 + #define FTDI_ELV_UM100_PID 0xFB5A /* Product Id */ 147 149 148 150 /* 149 151 * Definitions for ID TECH (www.idt-net.com) devices
+10 -10
drivers/usb/serial/usb-serial.c
··· 1297 1297 goto exit_bus; 1298 1298 } 1299 1299 1300 - /* register the generic driver, if we should */ 1301 - result = usb_serial_generic_register(debug); 1302 - if (result < 0) { 1303 - err("%s - registering generic driver failed", __FUNCTION__); 1304 - goto exit_generic; 1305 - } 1306 - 1307 1300 usb_serial_tty_driver->owner = THIS_MODULE; 1308 1301 usb_serial_tty_driver->driver_name = "usbserial"; 1309 1302 usb_serial_tty_driver->devfs_name = "usb/tts/"; ··· 1322 1329 goto exit_tty; 1323 1330 } 1324 1331 1332 + /* register the generic driver, if we should */ 1333 + result = usb_serial_generic_register(debug); 1334 + if (result < 0) { 1335 + err("%s - registering generic driver failed", __FUNCTION__); 1336 + goto exit_generic; 1337 + } 1338 + 1325 1339 info(DRIVER_DESC " " DRIVER_VERSION); 1326 1340 1327 1341 return result; 1342 + 1343 + exit_generic: 1344 + usb_deregister(&usb_serial_driver); 1328 1345 1329 1346 exit_tty: 1330 1347 tty_unregister_driver(usb_serial_tty_driver); 1331 1348 1332 1349 exit_reg_driver: 1333 - usb_serial_generic_deregister(); 1334 - 1335 - exit_generic: 1336 1350 bus_unregister(&usb_serial_bus_type); 1337 1351 1338 1352 exit_bus:
+11 -11
drivers/video/intelfb/intelfbdrv.c
··· 228 228 MODULE_LICENSE("Dual BSD/GPL"); 229 229 MODULE_DEVICE_TABLE(pci, intelfb_pci_table); 230 230 231 - static int accel __initdata = 1; 232 - static int vram __initdata = 4; 233 - static int hwcursor __initdata = 1; 234 - static int mtrr __initdata = 1; 235 - static int fixed __initdata = 0; 236 - static int noinit __initdata = 0; 237 - static int noregister __initdata = 0; 238 - static int probeonly __initdata = 0; 239 - static int idonly __initdata = 0; 240 - static int bailearly __initdata = 0; 241 - static char *mode __initdata = NULL; 231 + static int accel = 1; 232 + static int vram = 4; 233 + static int hwcursor = 1; 234 + static int mtrr = 1; 235 + static int fixed = 0; 236 + static int noinit = 0; 237 + static int noregister = 0; 238 + static int probeonly = 0; 239 + static int idonly = 0; 240 + static int bailearly = 0; 241 + static char *mode = NULL; 242 242 243 243 module_param(accel, bool, S_IRUGO); 244 244 MODULE_PARM_DESC(accel, "Enable console acceleration");
+2 -2
fs/cifs/README
··· 371 371 on newly created files, directories, and devices (create, 372 372 mkdir, mknod) which will result in the server setting the 373 373 uid and gid to the default (usually the server uid of the 374 - usern who mounted the share). Letting the server (rather than 374 + user who mounted the share). Letting the server (rather than 375 375 the client) set the uid and gid is the default. This 376 376 parameter has no effect if the CIFS Unix Extensions are not 377 377 negotiated. ··· 384 384 client (e.g. when the application is doing large sequential 385 385 reads bigger than page size without rereading the same data) 386 386 this can provide better performance than the default 387 - behavior which caches reads (reaadahead) and writes 387 + behavior which caches reads (readahead) and writes 388 388 (writebehind) through the local Linux client pagecache 389 389 if oplock (caching token) is granted and held. Note that 390 390 direct allows write operations larger than page size
+1 -1
fs/cifs/cifsproto.h
··· 228 228 const struct nls_table *nls_codepage, 229 229 int remap_special_chars); 230 230 #endif /* CONFIG_CIFS_EXPERIMENTAL */ 231 - extern int cifs_convertUCSpath(char *target, const __u16 *source, int maxlen, 231 + extern int cifs_convertUCSpath(char *target, const __le16 *source, int maxlen, 232 232 const struct nls_table * codepage); 233 233 extern int cifsConvertToUCS(__le16 * target, const char *source, int maxlen, 234 234 const struct nls_table * cp, int mapChars);
+28 -28
fs/cifs/cifssmb.c
··· 567 567 568 568 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 569 569 name_len = 570 - cifsConvertToUCS((__u16 *) pSMB->fileName, fileName, 570 + cifsConvertToUCS((__le16 *) pSMB->fileName, fileName, 571 571 PATH_MAX, nls_codepage, remap); 572 572 name_len++; /* trailing null */ 573 573 name_len *= 2; ··· 665 665 return rc; 666 666 667 667 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 668 - name_len = cifsConvertToUCS((__u16 *) pSMB->DirName, name, 668 + name_len = cifsConvertToUCS((__le16 *) pSMB->DirName, name, 669 669 PATH_MAX, nls_codepage, remap); 670 670 name_len++; /* trailing null */ 671 671 name_len *= 2; ··· 719 719 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 720 720 count = 1; /* account for one byte pad to word boundary */ 721 721 name_len = 722 - cifsConvertToUCS((__u16 *) (pSMB->fileName + 1), 722 + cifsConvertToUCS((__le16 *) (pSMB->fileName + 1), 723 723 fileName, PATH_MAX, nls_codepage, remap); 724 724 name_len++; /* trailing null */ 725 725 name_len *= 2; ··· 1141 1141 1142 1142 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 1143 1143 name_len = 1144 - cifsConvertToUCS((__u16 *) pSMB->OldFileName, fromName, 1144 + cifsConvertToUCS((__le16 *) pSMB->OldFileName, fromName, 1145 1145 PATH_MAX, nls_codepage, remap); 1146 1146 name_len++; /* trailing null */ 1147 1147 name_len *= 2; ··· 1149 1149 /* protocol requires ASCII signature byte on Unicode string */ 1150 1150 pSMB->OldFileName[name_len + 1] = 0x00; 1151 1151 name_len2 = 1152 - cifsConvertToUCS((__u16 *) &pSMB->OldFileName[name_len + 2], 1152 + cifsConvertToUCS((__le16 *) &pSMB->OldFileName[name_len + 2], 1153 1153 toName, PATH_MAX, nls_codepage, remap); 1154 1154 name_len2 += 1 /* trailing null */ + 1 /* Signature word */ ; 1155 1155 name_len2 *= 2; /* convert to bytes */ ··· 1236 1236 /* unicode only call */ 1237 1237 if(target_name == NULL) { 1238 1238 sprintf(dummy_string,"cifs%x",pSMB->hdr.Mid); 1239 - len_of_str = cifsConvertToUCS((__u16 *)rename_info->target_name, 1239 + len_of_str = 
cifsConvertToUCS((__le16 *)rename_info->target_name, 1240 1240 dummy_string, 24, nls_codepage, remap); 1241 1241 } else { 1242 - len_of_str = cifsConvertToUCS((__u16 *)rename_info->target_name, 1242 + len_of_str = cifsConvertToUCS((__le16 *)rename_info->target_name, 1243 1243 target_name, PATH_MAX, nls_codepage, remap); 1244 1244 } 1245 1245 rename_info->target_name_len = cpu_to_le32(2 * len_of_str); ··· 1296 1296 pSMB->Flags = cpu_to_le16(flags & COPY_TREE); 1297 1297 1298 1298 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 1299 - name_len = cifsConvertToUCS((__u16 *) pSMB->OldFileName, 1299 + name_len = cifsConvertToUCS((__le16 *) pSMB->OldFileName, 1300 1300 fromName, PATH_MAX, nls_codepage, 1301 1301 remap); 1302 1302 name_len++; /* trailing null */ ··· 1304 1304 pSMB->OldFileName[name_len] = 0x04; /* pad */ 1305 1305 /* protocol requires ASCII signature byte on Unicode string */ 1306 1306 pSMB->OldFileName[name_len + 1] = 0x00; 1307 - name_len2 = cifsConvertToUCS((__u16 *)&pSMB->OldFileName[name_len + 2], 1307 + name_len2 = cifsConvertToUCS((__le16 *)&pSMB->OldFileName[name_len + 2], 1308 1308 toName, PATH_MAX, nls_codepage, remap); 1309 1309 name_len2 += 1 /* trailing null */ + 1 /* Signature word */ ; 1310 1310 name_len2 *= 2; /* convert to bytes */ ··· 1453 1453 return rc; 1454 1454 1455 1455 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 1456 - name_len = cifsConvertToUCS((__u16 *) pSMB->FileName, toName, 1456 + name_len = cifsConvertToUCS((__le16 *) pSMB->FileName, toName, 1457 1457 PATH_MAX, nls_codepage, remap); 1458 1458 name_len++; /* trailing null */ 1459 1459 name_len *= 2; ··· 1476 1476 data_offset = (char *) (&pSMB->hdr.Protocol) + offset; 1477 1477 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 1478 1478 name_len_target = 1479 - cifsConvertToUCS((__u16 *) data_offset, fromName, PATH_MAX, 1479 + cifsConvertToUCS((__le16 *) data_offset, fromName, PATH_MAX, 1480 1480 nls_codepage, remap); 1481 1481 name_len_target++; /* trailing null */ 1482 1482 name_len_target 
*= 2; ··· 1546 1546 1547 1547 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 1548 1548 name_len = 1549 - cifsConvertToUCS((__u16 *) pSMB->OldFileName, fromName, 1549 + cifsConvertToUCS((__le16 *) pSMB->OldFileName, fromName, 1550 1550 PATH_MAX, nls_codepage, remap); 1551 1551 name_len++; /* trailing null */ 1552 1552 name_len *= 2; 1553 1553 pSMB->OldFileName[name_len] = 0; /* pad */ 1554 1554 pSMB->OldFileName[name_len + 1] = 0x04; 1555 1555 name_len2 = 1556 - cifsConvertToUCS((__u16 *)&pSMB->OldFileName[name_len + 2], 1556 + cifsConvertToUCS((__le16 *)&pSMB->OldFileName[name_len + 2], 1557 1557 toName, PATH_MAX, nls_codepage, remap); 1558 1558 name_len2 += 1 /* trailing null */ + 1 /* Signature word */ ; 1559 1559 name_len2 *= 2; /* convert to bytes */ ··· 1939 1939 1940 1940 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 1941 1941 name_len = 1942 - cifsConvertToUCS((__u16 *) pSMB->FileName, searchName, 1942 + cifsConvertToUCS((__le16 *) pSMB->FileName, searchName, 1943 1943 PATH_MAX, nls_codepage, remap); 1944 1944 name_len++; /* trailing null */ 1945 1945 name_len *= 2; ··· 2024 2024 return rc; 2025 2025 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 2026 2026 name_len = 2027 - cifsConvertToUCS((__u16 *) pSMB->FileName, fileName, 2027 + cifsConvertToUCS((__le16 *) pSMB->FileName, fileName, 2028 2028 PATH_MAX, nls_codepage, remap); 2029 2029 name_len++; /* trailing null */ 2030 2030 name_len *= 2; ··· 2188 2188 2189 2189 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 2190 2190 name_len = 2191 - cifsConvertToUCS((__u16 *) pSMB->FileName, searchName, 2191 + cifsConvertToUCS((__le16 *) pSMB->FileName, searchName, 2192 2192 PATH_MAX, nls_codepage, remap); 2193 2193 name_len++; /* trailing null */ 2194 2194 name_len *= 2; ··· 2269 2269 2270 2270 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 2271 2271 name_len = 2272 - cifsConvertToUCS((__u16 *) pSMB->FileName, searchName, 2272 + cifsConvertToUCS((__le16 *) pSMB->FileName, searchName, 2273 2273 PATH_MAX, nls_codepage, remap); 2274 2274 
name_len++; /* trailing null */ 2275 2275 name_len *= 2; ··· 2350 2350 2351 2351 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 2352 2352 name_len = 2353 - cifsConvertToUCS((wchar_t *) pSMB->FileName, searchName, PATH_MAX 2353 + cifsConvertToUCS((__le16 *) pSMB->FileName, searchName, PATH_MAX 2354 2354 /* find define for this maxpathcomponent */ 2355 2355 , nls_codepage); 2356 2356 name_len++; /* trailing null */ ··· 2435 2435 2436 2436 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 2437 2437 name_len = 2438 - cifsConvertToUCS((__u16 *) pSMB->FileName,searchName, 2438 + cifsConvertToUCS((__le16 *) pSMB->FileName,searchName, 2439 2439 PATH_MAX, nls_codepage, remap); 2440 2440 /* We can not add the asterik earlier in case 2441 2441 it got remapped to 0xF03A as if it were part of the ··· 2726 2726 2727 2727 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 2728 2728 name_len = 2729 - cifsConvertToUCS((__u16 *) pSMB->FileName, searchName, 2729 + cifsConvertToUCS((__le16 *) pSMB->FileName, searchName, 2730 2730 PATH_MAX,nls_codepage, remap); 2731 2731 name_len++; /* trailing null */ 2732 2732 name_len *= 2; ··· 2837 2837 if (ses->capabilities & CAP_UNICODE) { 2838 2838 pSMB->hdr.Flags2 |= SMBFLG2_UNICODE; 2839 2839 name_len = 2840 - cifsConvertToUCS((__u16 *) pSMB->RequestFileName, 2840 + cifsConvertToUCS((__le16 *) pSMB->RequestFileName, 2841 2841 searchName, PATH_MAX, nls_codepage, remap); 2842 2842 name_len++; /* trailing null */ 2843 2843 name_len *= 2; ··· 3369 3369 3370 3370 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 3371 3371 name_len = 3372 - cifsConvertToUCS((__u16 *) pSMB->FileName, fileName, 3372 + cifsConvertToUCS((__le16 *) pSMB->FileName, fileName, 3373 3373 PATH_MAX, nls_codepage, remap); 3374 3374 name_len++; /* trailing null */ 3375 3375 name_len *= 2; ··· 3627 3627 3628 3628 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 3629 3629 name_len = 3630 - cifsConvertToUCS((__u16 *) pSMB->FileName, fileName, 3630 + cifsConvertToUCS((__le16 *) pSMB->FileName, fileName, 3631 3631 
PATH_MAX, nls_codepage, remap); 3632 3632 name_len++; /* trailing null */ 3633 3633 name_len *= 2; ··· 3708 3708 3709 3709 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 3710 3710 name_len = 3711 - ConvertToUCS((wchar_t *) pSMB->fileName, fileName, 3711 + ConvertToUCS((__le16 *) pSMB->fileName, fileName, 3712 3712 PATH_MAX, nls_codepage); 3713 3713 name_len++; /* trailing null */ 3714 3714 name_len *= 2; ··· 3759 3759 3760 3760 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 3761 3761 name_len = 3762 - cifsConvertToUCS((__u16 *) pSMB->FileName, fileName, 3762 + cifsConvertToUCS((__le16 *) pSMB->FileName, fileName, 3763 3763 PATH_MAX, nls_codepage, remap); 3764 3764 name_len++; /* trailing null */ 3765 3765 name_len *= 2; ··· 3904 3904 3905 3905 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 3906 3906 name_len = 3907 - cifsConvertToUCS((wchar_t *) pSMB->FileName, searchName, 3907 + cifsConvertToUCS((__le16 *) pSMB->FileName, searchName, 3908 3908 PATH_MAX, nls_codepage, remap); 3909 3909 name_len++; /* trailing null */ 3910 3910 name_len *= 2; ··· 4047 4047 4048 4048 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 4049 4049 name_len = 4050 - cifsConvertToUCS((__u16 *) pSMB->FileName, searchName, 4050 + cifsConvertToUCS((__le16 *) pSMB->FileName, searchName, 4051 4051 PATH_MAX, nls_codepage, remap); 4052 4052 name_len++; /* trailing null */ 4053 4053 name_len *= 2; ··· 4194 4194 4195 4195 if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) { 4196 4196 name_len = 4197 - cifsConvertToUCS((__u16 *) pSMB->FileName, fileName, 4197 + cifsConvertToUCS((__le16 *) pSMB->FileName, fileName, 4198 4198 PATH_MAX, nls_codepage, remap); 4199 4199 name_len++; /* trailing null */ 4200 4200 name_len *= 2;
+2 -1
fs/cifs/dir.c
··· 392 392 rc = 0; 393 393 d_add(direntry, NULL); 394 394 } else { 395 - cERROR(1,("Error 0x%x or on cifs_get_inode_info in lookup",rc)); 395 + cERROR(1,("Error 0x%x on cifs_get_inode_info in lookup of %s", 396 + rc,full_path)); 396 397 /* BB special case check for Access Denied - watch security 397 398 exposure of returning dir info implicitly via different rc 398 399 if file exists or not but no access BB */
+15 -9
fs/cifs/inode.c
··· 422 422 cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MAP_SPECIAL_CHR); 423 423 424 424 if (!rc) { 425 - direntry->d_inode->i_nlink--; 425 + if(direntry->d_inode) 426 + direntry->d_inode->i_nlink--; 426 427 } else if (rc == -ENOENT) { 427 428 d_drop(direntry); 428 429 } else if (rc == -ETXTBSY) { ··· 441 440 cifs_sb->mnt_cifs_flags & 442 441 CIFS_MOUNT_MAP_SPECIAL_CHR); 443 442 CIFSSMBClose(xid, pTcon, netfid); 444 - direntry->d_inode->i_nlink--; 443 + if(direntry->d_inode) 444 + direntry->d_inode->i_nlink--; 445 445 } 446 446 } else if (rc == -EACCES) { 447 447 /* try only if r/o attribute set in local lookup data? */ ··· 496 494 cifs_sb->mnt_cifs_flags & 497 495 CIFS_MOUNT_MAP_SPECIAL_CHR); 498 496 if (!rc) { 499 - direntry->d_inode->i_nlink--; 497 + if(direntry->d_inode) 498 + direntry->d_inode->i_nlink--; 500 499 } else if (rc == -ETXTBSY) { 501 500 int oplock = FALSE; 502 501 __u16 netfid; ··· 517 514 cifs_sb->mnt_cifs_flags & 518 515 CIFS_MOUNT_MAP_SPECIAL_CHR); 519 516 CIFSSMBClose(xid, pTcon, netfid); 520 - direntry->d_inode->i_nlink--; 517 + if(direntry->d_inode) 518 + direntry->d_inode->i_nlink--; 521 519 } 522 520 /* BB if rc = -ETXTBUSY goto the rename logic BB */ 523 521 } 524 522 } 525 523 } 526 - cifsInode = CIFS_I(direntry->d_inode); 527 - cifsInode->time = 0; /* will force revalidate to get info when 528 - needed */ 529 - direntry->d_inode->i_ctime = inode->i_ctime = inode->i_mtime = 530 - current_fs_time(inode->i_sb); 524 + if(direntry->d_inode) { 525 + cifsInode = CIFS_I(direntry->d_inode); 526 + cifsInode->time = 0; /* will force revalidate to get info 527 + when needed */ 528 + direntry->d_inode->i_ctime = current_fs_time(inode->i_sb); 529 + } 530 + inode->i_ctime = inode->i_mtime = current_fs_time(inode->i_sb); 531 531 cifsInode = CIFS_I(inode); 532 532 cifsInode->time = 0; /* force revalidate of dir as well */ 533 533
+1
fs/cifs/misc.c
··· 571 571 break; 572 572 case UNI_LESSTHAN: 573 573 target[j] = '<'; 574 + break; 574 575 default: 575 576 len = cp->uni2char(src_char, &target[j], 576 577 NLS_MAX_CHARSET_SIZE);
-1
fs/hostfs/hostfs_kern.c
··· 23 23 #include "kern_util.h" 24 24 #include "kern.h" 25 25 #include "user_util.h" 26 - #include "2_5compat.h" 27 26 #include "init.h" 28 27 29 28 struct hostfs_inode_info {
+43 -72
fs/proc/proc_devtree.c
··· 12 12 #include <asm/uaccess.h> 13 13 14 14 #ifndef HAVE_ARCH_DEVTREE_FIXUPS 15 - static inline void set_node_proc_entry(struct device_node *np, struct proc_dir_entry *de) 16 - { 17 - } 18 - 19 - static void inline set_node_name_link(struct device_node *np, struct proc_dir_entry *de) 20 - { 21 - } 22 - 23 - static void inline set_node_addr_link(struct device_node *np, struct proc_dir_entry *de) 15 + static inline void set_node_proc_entry(struct device_node *np, 16 + struct proc_dir_entry *de) 24 17 { 25 18 } 26 19 #endif ··· 51 58 /* 52 59 * Process a node, adding entries for its children and its properties. 53 60 */ 54 - void proc_device_tree_add_node(struct device_node *np, struct proc_dir_entry *de) 61 + void proc_device_tree_add_node(struct device_node *np, 62 + struct proc_dir_entry *de) 55 63 { 56 64 struct property *pp; 57 65 struct proc_dir_entry *ent; 58 - struct device_node *child, *sib; 59 - const char *p, *at; 60 - int l; 61 - struct proc_dir_entry *list, **lastp, *al; 66 + struct device_node *child; 67 + struct proc_dir_entry *list = NULL, **lastp; 68 + const char *p; 62 69 63 70 set_node_proc_entry(np, de); 64 71 lastp = &list; 72 + for (child = NULL; (child = of_get_next_child(np, child));) { 73 + p = strrchr(child->full_name, '/'); 74 + if (!p) 75 + p = child->full_name; 76 + else 77 + ++p; 78 + ent = proc_mkdir(p, de); 79 + if (ent == 0) 80 + break; 81 + *lastp = ent; 82 + ent->next = NULL; 83 + lastp = &ent->next; 84 + proc_device_tree_add_node(child, ent); 85 + } 86 + of_node_put(child); 65 87 for (pp = np->properties; pp != 0; pp = pp->next) { 88 + /* 89 + * Yet another Apple device-tree bogosity: on some machines, 90 + * they have properties & nodes with the same name. Those 91 + * properties are quite unimportant for us though, thus we 92 + * simply "skip" them here, but we do have to check. 
93 + */ 94 + for (ent = list; ent != NULL; ent = ent->next) 95 + if (!strcmp(ent->name, pp->name)) 96 + break; 97 + if (ent != NULL) { 98 + printk(KERN_WARNING "device-tree: property \"%s\" name" 99 + " conflicts with node in %s\n", pp->name, 100 + np->full_name); 101 + continue; 102 + } 103 + 66 104 /* 67 105 * Unfortunately proc_register puts each new entry 68 106 * at the beginning of the list. So we rearrange them. 69 107 */ 70 - ent = create_proc_read_entry(pp->name, strncmp(pp->name, "security-", 9) ? 71 - S_IRUGO : S_IRUSR, de, property_read_proc, pp); 108 + ent = create_proc_read_entry(pp->name, 109 + strncmp(pp->name, "security-", 9) 110 + ? S_IRUGO : S_IRUSR, de, 111 + property_read_proc, pp); 72 112 if (ent == 0) 73 113 break; 74 114 if (!strncmp(pp->name, "security-", 9)) 75 115 ent->size = 0; /* don't leak number of password chars */ 76 116 else 77 117 ent->size = pp->length; 118 + ent->next = NULL; 78 119 *lastp = ent; 79 120 lastp = &ent->next; 80 121 } 81 - child = NULL; 82 - while ((child = of_get_next_child(np, child))) { 83 - p = strrchr(child->full_name, '/'); 84 - if (!p) 85 - p = child->full_name; 86 - else 87 - ++p; 88 - /* chop off '@0' if the name ends with that */ 89 - l = strlen(p); 90 - if (l > 2 && p[l-2] == '@' && p[l-1] == '0') 91 - l -= 2; 92 - ent = proc_mkdir(p, de); 93 - if (ent == 0) 94 - break; 95 - *lastp = ent; 96 - lastp = &ent->next; 97 - proc_device_tree_add_node(child, ent); 98 - 99 - /* 100 - * If we left the address part on the name, consider 101 - * adding symlinks from the name and address parts. 102 - */ 103 - if (p[l] != 0 || (at = strchr(p, '@')) == 0) 104 - continue; 105 - 106 - /* 107 - * If this is the first node with a given name property, 108 - * add a symlink with the name property as its name. 
109 - */ 110 - sib = NULL; 111 - while ((sib = of_get_next_child(np, sib)) && sib != child) 112 - if (sib->name && strcmp(sib->name, child->name) == 0) 113 - break; 114 - if (sib == child && strncmp(p, child->name, l) != 0) { 115 - al = proc_symlink(child->name, de, ent->name); 116 - if (al == 0) { 117 - of_node_put(sib); 118 - break; 119 - } 120 - set_node_name_link(child, al); 121 - *lastp = al; 122 - lastp = &al->next; 123 - } 124 - of_node_put(sib); 125 - /* 126 - * Add another directory with the @address part as its name. 127 - */ 128 - al = proc_symlink(at, de, ent->name); 129 - if (al == 0) 130 - break; 131 - set_node_addr_link(child, al); 132 - *lastp = al; 133 - lastp = &al->next; 134 - } 135 - of_node_put(child); 136 - *lastp = NULL; 137 122 de->subdir = list; 138 123 } 139 124
+1 -1
fs/udf/udftime.c
··· 46 46 #endif 47 47 48 48 /* How many days come before each month (0-12). */ 49 - const unsigned short int __mon_yday[2][13] = 49 + static const unsigned short int __mon_yday[2][13] = 50 50 { 51 51 /* Normal years. */ 52 52 { 0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334, 365 },
+1
include/asm-i386/timer.h
··· 53 53 54 54 extern unsigned long calibrate_tsc(void); 55 55 extern void init_cpu_khz(void); 56 + extern int recalibrate_cpu_khz(void); 56 57 #ifdef CONFIG_HPET_TIMER 57 58 extern struct init_timer_opts timer_hpet_init; 58 59 extern unsigned long calibrate_tsc_hpet(unsigned long *tsc_hpet_quotient_ptr);
+8
include/asm-ia64/perfmon.h
··· 177 177 178 178 extern long perfmonctl(int fd, int cmd, void *arg, int narg); 179 179 180 + typedef struct { 181 + void (*handler)(int irq, void *arg, struct pt_regs *regs); 182 + } pfm_intr_handler_desc_t; 183 + 180 184 extern void pfm_save_regs (struct task_struct *); 181 185 extern void pfm_load_regs (struct task_struct *); 182 186 ··· 191 187 extern void pfm_inherit(struct task_struct *task, struct pt_regs *regs); 192 188 extern void pfm_init_percpu(void); 193 189 extern void pfm_handle_work(void); 190 + extern int pfm_install_alt_pmu_interrupt(pfm_intr_handler_desc_t *h); 191 + extern int pfm_remove_alt_pmu_interrupt(pfm_intr_handler_desc_t *h); 192 + 193 + 194 194 195 195 /* 196 196 * Reset PMD register flags
+27 -1
include/asm-ia64/sn/sn_sal.h
··· 115 115 #define SAL_IROUTER_INTR_XMIT SAL_CONSOLE_INTR_XMIT 116 116 #define SAL_IROUTER_INTR_RECV SAL_CONSOLE_INTR_RECV 117 117 118 + /* 119 + * Error Handling Features 120 + */ 121 + #define SAL_ERR_FEAT_MCA_SLV_TO_OS_INIT_SLV 0x1 122 + #define SAL_ERR_FEAT_LOG_SBES 0x2 123 + #define SAL_ERR_FEAT_MFR_OVERRIDE 0x4 124 + #define SAL_ERR_FEAT_SBE_THRESHOLD 0xffff0000 118 125 119 126 /* 120 127 * SAL Error Codes ··· 349 342 } 350 343 351 344 /* 345 + * Set Error Handling Features 346 + */ 347 + static inline u64 348 + ia64_sn_plat_set_error_handling_features(void) 349 + { 350 + struct ia64_sal_retval ret_stuff; 351 + 352 + ret_stuff.status = 0; 353 + ret_stuff.v0 = 0; 354 + ret_stuff.v1 = 0; 355 + ret_stuff.v2 = 0; 356 + SAL_CALL_REENTRANT(ret_stuff, SN_SAL_SET_ERROR_HANDLING_FEATURES, 357 + (SAL_ERR_FEAT_MCA_SLV_TO_OS_INIT_SLV | SAL_ERR_FEAT_LOG_SBES), 358 + 0, 0, 0, 0, 0, 0); 359 + 360 + return ret_stuff.status; 361 + } 362 + 363 + /* 352 364 * Checks for console input. 353 365 */ 354 366 static inline u64 ··· 498 472 ia64_sn_pod_mode(void) 499 473 { 500 474 struct ia64_sal_retval isrv; 501 - SAL_CALL(isrv, SN_SAL_POD_MODE, 0, 0, 0, 0, 0, 0, 0); 475 + SAL_CALL_REENTRANT(isrv, SN_SAL_POD_MODE, 0, 0, 0, 0, 0, 0, 0); 502 476 if (isrv.status) 503 477 return 0; 504 478 return isrv.v0;
+46
include/asm-ppc/cpm2.h
··· 1039 1039 #define CMXSCR_TS4CS_CLK7 0x00000006 /* SCC4 Tx Clock Source is CLK7 */ 1040 1040 #define CMXSCR_TS4CS_CLK8 0x00000007 /* SCC4 Tx Clock Source is CLK8 */ 1041 1041 1042 + /*----------------------------------------------------------------------- 1043 + * SIUMCR - SIU Module Configuration Register 4-31 1044 + */ 1045 + #define SIUMCR_BBD 0x80000000 /* Bus Busy Disable */ 1046 + #define SIUMCR_ESE 0x40000000 /* External Snoop Enable */ 1047 + #define SIUMCR_PBSE 0x20000000 /* Parity Byte Select Enable */ 1048 + #define SIUMCR_CDIS 0x10000000 /* Core Disable */ 1049 + #define SIUMCR_DPPC00 0x00000000 /* Data Parity Pins Configuration*/ 1050 + #define SIUMCR_DPPC01 0x04000000 /* - " - */ 1051 + #define SIUMCR_DPPC10 0x08000000 /* - " - */ 1052 + #define SIUMCR_DPPC11 0x0c000000 /* - " - */ 1053 + #define SIUMCR_L2CPC00 0x00000000 /* L2 Cache Pins Configuration */ 1054 + #define SIUMCR_L2CPC01 0x01000000 /* - " - */ 1055 + #define SIUMCR_L2CPC10 0x02000000 /* - " - */ 1056 + #define SIUMCR_L2CPC11 0x03000000 /* - " - */ 1057 + #define SIUMCR_LBPC00 0x00000000 /* Local Bus Pins Configuration */ 1058 + #define SIUMCR_LBPC01 0x00400000 /* - " - */ 1059 + #define SIUMCR_LBPC10 0x00800000 /* - " - */ 1060 + #define SIUMCR_LBPC11 0x00c00000 /* - " - */ 1061 + #define SIUMCR_APPC00 0x00000000 /* Address Parity Pins Configuration*/ 1062 + #define SIUMCR_APPC01 0x00100000 /* - " - */ 1063 + #define SIUMCR_APPC10 0x00200000 /* - " - */ 1064 + #define SIUMCR_APPC11 0x00300000 /* - " - */ 1065 + #define SIUMCR_CS10PC00 0x00000000 /* CS10 Pin Configuration */ 1066 + #define SIUMCR_CS10PC01 0x00040000 /* - " - */ 1067 + #define SIUMCR_CS10PC10 0x00080000 /* - " - */ 1068 + #define SIUMCR_CS10PC11 0x000c0000 /* - " - */ 1069 + #define SIUMCR_BCTLC00 0x00000000 /* Buffer Control Configuration */ 1070 + #define SIUMCR_BCTLC01 0x00010000 /* - " - */ 1071 + #define SIUMCR_BCTLC10 0x00020000 /* - " - */ 1072 + #define SIUMCR_BCTLC11 0x00030000 /* - " - */ 1073 + #define SIUMCR_MMR00 0x00000000 /* Mask Masters Requests */ 1074 + #define SIUMCR_MMR01 0x00004000 /* - " - */ 1075 + #define SIUMCR_MMR10 0x00008000 /* - " - */ 1076 + #define SIUMCR_MMR11 0x0000c000 /* - " - */ 1077 + #define SIUMCR_LPBSE 0x00002000 /* LocalBus Parity Byte Select Enable*/ 1078 + 1079 + /*----------------------------------------------------------------------- 1080 + * SCCR - System Clock Control Register 9-8 1081 + */ 1082 + #define SCCR_PCI_MODE 0x00000100 /* PCI Mode */ 1083 + #define SCCR_PCI_MODCK 0x00000080 /* Value of PCI_MODCK pin */ 1084 + #define SCCR_PCIDF_MSK 0x00000078 /* PCI division factor */ 1085 + #define SCCR_PCIDF_SHIFT 3 1086 + 1087 + 1042 1088 #endif /* __CPM2__ */ 1043 1089 #endif /* __KERNEL__ */ 1044 1090
+1
include/asm-ppc/m8260_pci.h
··· 19 19 * Define the vendor/device ID for the MPC8265. 20 20 */ 21 21 #define PCI_DEVICE_ID_MPC8265 ((0x18C0 << 16) | PCI_VENDOR_ID_MOTOROLA) 22 + #define PCI_DEVICE_ID_MPC8272 ((0x18C1 << 16) | PCI_VENDOR_ID_MOTOROLA) 22 23 23 24 #define M8265_PCIBR0 0x101ac 24 25 #define M8265_PCIBR1 0x101b0
+1 -1
include/asm-ppc/mpc8260.h
··· 41 41 #endif 42 42 43 43 #ifdef CONFIG_PCI_8260 44 - #include <syslib/m8260_pci.h> 44 + #include <syslib/m82xx_pci.h> 45 45 #endif 46 46 47 47 /* Make sure the memory translation stuff is there if PCI not used.
+1 -12
include/asm-ppc64/prom.h
··· 147 147 struct device_node *sibling; 148 148 struct device_node *next; /* next device of same type */ 149 149 struct device_node *allnext; /* next in list of all nodes */ 150 - struct proc_dir_entry *pde; /* this node's proc directory */ 151 - struct proc_dir_entry *name_link; /* name symlink */ 152 - struct proc_dir_entry *addr_link; /* addr symlink */ 150 + struct proc_dir_entry *pde; /* this node's proc directory */ 153 151 struct kref kref; 154 152 unsigned long _flags; 155 153 }; ··· 172 174 dn->pde = de; 173 175 } 174 176 175 - static void inline set_node_name_link(struct device_node *dn, struct proc_dir_entry *de) 176 - { 177 - dn->name_link = de; 178 - } 179 - 180 - static void inline set_node_addr_link(struct device_node *dn, struct proc_dir_entry *de) 181 - { 182 - dn->addr_link = de; 183 - } 184 177 185 178 /* OBSOLETE: Old stlye node lookup */ 186 179 extern struct device_node *find_devices(const char *name);
+1 -1
include/asm-sh/thread_info.h
··· 27 27 28 28 #endif 29 29 30 - #define PREEMPT_ACTIVE 0x4000000 30 + #define PREEMPT_ACTIVE 0x10000000 31 31 32 32 /* 33 33 * macros/functions for gaining access to the thread information structure
+1 -1
include/asm-sh64/thread_info.h
··· 73 73 74 74 #define THREAD_SIZE 8192 75 75 76 - #define PREEMPT_ACTIVE 0x4000000 76 + #define PREEMPT_ACTIVE 0x10000000 77 77 78 78 /* thread information flags */ 79 79 #define TIF_SYSCALL_TRACE 0 /* syscall trace active */
+2
include/asm-sparc64/iommu.h
··· 16 16 #define IOPTE_CACHE 0x0000000000000010UL /* Cached (in UPA E-cache) */ 17 17 #define IOPTE_WRITE 0x0000000000000002UL /* Writeable */ 18 18 19 + #define IOMMU_NUM_CTXS 4096 20 + 19 21 #endif /* !(_SPARC_IOMMU_H) */
+5 -3
include/asm-sparc64/pbm.h
··· 15 15 #include <asm/io.h> 16 16 #include <asm/page.h> 17 17 #include <asm/oplib.h> 18 + #include <asm/iommu.h> 18 19 19 20 /* The abstraction used here is that there are PCI controllers, 20 21 * each with one (Sabre) or two (PSYCHO/SCHIZO) PCI bus modules ··· 40 39 * streaming buffers underneath. 41 40 */ 42 41 spinlock_t lock; 43 - 44 - /* Context allocator. */ 45 - unsigned int iommu_cur_ctx; 46 42 47 43 /* IOMMU page table, a linear array of ioptes. */ 48 44 iopte_t *page_table; /* The page table itself. */ ··· 84 86 u16 next; 85 87 u16 flush; 86 88 } alloc_info[PBM_NCLUSTERS]; 89 + 90 + /* CTX allocation. */ 91 + unsigned long ctx_lowest_free; 92 + unsigned long ctx_bitmap[IOMMU_NUM_CTXS / (sizeof(unsigned long) * 8)]; 87 93 88 94 /* Here a PCI controller driver describes the areas of 89 95 * PCI memory space where DMA to/from physical memory
+7 -1
include/asm-um/page.h
··· 98 98 99 99 extern unsigned long to_phys(void *virt); 100 100 extern void *to_virt(unsigned long phys); 101 - #define __pa(virt) to_phys((void *) virt) 101 + 102 + /* Cast to unsigned long before casting to void * to avoid a warning from 103 + * mmap_kmem about cutting a long long down to a void *. Not sure that 104 + * casting is the right thing, but 32-bit UML can't have 64-bit virtual 105 + * addresses 106 + */ 107 + #define __pa(virt) to_phys((void *) (unsigned long) virt) 102 108 #define __va(phys) to_virt((unsigned long) phys) 103 109 104 110 #define page_to_pfn(page) ((page) - mem_map)
-8
include/asm-um/pgtable.h
··· 114 114 extern unsigned long pg0[1024]; 115 115 116 116 /* 117 - * BAD_PAGETABLE is used when we need a bogus page-table, while 118 - * BAD_PAGE is used for a bogus page. 119 - * 120 117 * ZERO_PAGE is a global shared page that is always zero: used 121 118 * for zero-mapped memory areas etc.. 122 119 */ 123 - extern pte_t __bad_page(void); 124 - extern pte_t * __bad_pagetable(void); 125 - 126 - #define BAD_PAGETABLE __bad_pagetable() 127 - #define BAD_PAGE __bad_page() 128 120 129 121 #define ZERO_PAGE(vaddr) virt_to_page(empty_zero_page) 130 122
+4 -5
include/asm-um/thread_info.h
··· 41 41 #define init_thread_info (init_thread_union.thread_info) 42 42 #define init_stack (init_thread_union.stack) 43 43 44 + #define THREAD_SIZE ((1 << CONFIG_KERNEL_STACK_ORDER) * PAGE_SIZE) 44 45 /* how to get the thread information struct from C */ 45 46 static inline struct thread_info *current_thread_info(void) 46 47 { 47 48 struct thread_info *ti; 48 - unsigned long mask = PAGE_SIZE * 49 - (1 << CONFIG_KERNEL_STACK_ORDER) - 1; 50 - ti = (struct thread_info *) (((unsigned long) &ti) & ~mask); 49 + unsigned long mask = THREAD_SIZE - 1; 50 + ti = (struct thread_info *) (((unsigned long) &ti) & ~mask); 51 51 return ti; 52 52 } 53 53 54 54 /* thread information allocation */ 55 - #define THREAD_SIZE ((1 << CONFIG_KERNEL_STACK_ORDER) * PAGE_SIZE) 56 55 #define alloc_thread_info(tsk) \ 57 56 ((struct thread_info *) kmalloc(THREAD_SIZE, GFP_KERNEL)) 58 57 #define free_thread_info(ti) kfree(ti) ··· 61 62 62 63 #endif 63 64 64 - #define PREEMPT_ACTIVE 0x4000000 65 + #define PREEMPT_ACTIVE 0x10000000 65 66 66 67 #define TIF_SYSCALL_TRACE 0 /* syscall trace active */ 67 68 #define TIF_SIGPENDING 1 /* signal pending */
+1 -1
include/linux/cpufreq.h
··· 49 49 /* Frequency values here are CPU kHz so that hardware which doesn't run 50 50 * with some frequencies can complain without having to guess what per 51 51 * cent / per mille means. 52 - * Maximum transition latency is in microseconds - if it's unknown, 52 + * Maximum transition latency is in nanoseconds - if it's unknown, 53 53 * CPUFREQ_ETERNAL shall be used. 54 54 */ 55 55
+18 -4
include/linux/etherdevice.h
··· 56 56 } 57 57 58 58 /** 59 + * is_multicast_ether_addr - Determine if the given Ethernet address is a 60 + * multicast address. 61 + * 62 + * @addr: Pointer to a six-byte array containing the Ethernet address 63 + * 64 + * Return true if the address is a multicast address. 65 + */ 66 + static inline int is_multicast_ether_addr(const u8 *addr) 67 + { 68 + return addr[0] & 0x01; 69 + } 70 + 71 + /** 59 72 * is_valid_ether_addr - Determine if the given Ethernet address is valid 60 73 * @addr: Pointer to a six-byte array containing the Ethernet address 61 74 * 62 75 * Check that the Ethernet address (MAC) is not 00:00:00:00:00:00, is not 63 - * a multicast address, and is not FF:FF:FF:FF:FF:FF. The multicast 64 - * and FF:FF:... tests are combined into the single test "!(addr[0]&1)". 76 + * a multicast address, and is not FF:FF:FF:FF:FF:FF. 65 77 * 66 78 * Return true if the address is valid. 67 79 */ 68 80 static inline int is_valid_ether_addr(const u8 *addr) 69 81 { 70 - return !(addr[0]&1) && !is_zero_ether_addr(addr); 82 + /* FF:FF:FF:FF:FF:FF is a multicast address so we don't need to 83 + * explicitly check for it here. */ 84 + return !is_multicast_ether_addr(addr) && !is_zero_ether_addr(addr); 71 85 } 72 86 73 87 /** ··· 97 83 addr [0] &= 0xfe; /* clear multicast bit */ 98 84 addr [0] |= 0x02; /* set local assignment bit (IEEE802) */ 99 85 } 100 - #endif 86 + #endif /* __KERNEL__ */ 101 87 102 88 #endif /* _LINUX_ETHERDEVICE_H */
+1
include/linux/ethtool.h
··· 256 256 u32 ethtool_op_get_link(struct net_device *dev); 257 257 u32 ethtool_op_get_tx_csum(struct net_device *dev); 258 258 int ethtool_op_set_tx_csum(struct net_device *dev, u32 data); 259 + int ethtool_op_set_tx_hw_csum(struct net_device *dev, u32 data); 259 260 u32 ethtool_op_get_sg(struct net_device *dev); 260 261 int ethtool_op_set_sg(struct net_device *dev, u32 data); 261 262 u32 ethtool_op_get_tso(struct net_device *dev);
+25 -3
include/linux/gameport.h
··· 67 67 void gameport_close(struct gameport *gameport); 68 68 void gameport_rescan(struct gameport *gameport); 69 69 70 + #if defined(CONFIG_GAMEPORT) || (defined(MODULE) && defined(CONFIG_GAMEPORT_MODULE)) 71 + 70 72 void __gameport_register_port(struct gameport *gameport, struct module *owner); 71 73 static inline void gameport_register_port(struct gameport *gameport) 72 74 { ··· 76 74 } 77 75 78 76 void gameport_unregister_port(struct gameport *gameport); 77 + 78 + void gameport_set_phys(struct gameport *gameport, const char *fmt, ...) 79 + __attribute__ ((format (printf, 2, 3))); 80 + 81 + #else 82 + 83 + static inline void gameport_register_port(struct gameport *gameport) 84 + { 85 + return; 86 + } 87 + 88 + static inline void gameport_unregister_port(struct gameport *gameport) 89 + { 90 + return; 91 + } 92 + 93 + static inline void gameport_set_phys(struct gameport *gameport, 94 + const char *fmt, ...) 95 + { 96 + return; 97 + } 98 + 99 + #endif 79 100 80 101 static inline struct gameport *gameport_allocate_port(void) 81 102 { ··· 116 91 { 117 92 strlcpy(gameport->name, name, sizeof(gameport->name)); 118 93 } 119 - 120 - void gameport_set_phys(struct gameport *gameport, const char *fmt, ...) 121 - __attribute__ ((format (printf, 2, 3))); 122 94 123 95 /* 124 96 * Use the following fucntions to manipulate gameport's per-port
+5 -1
include/linux/hardirq.h
··· 43 43 #define __IRQ_MASK(x) ((1UL << (x))-1) 44 44 45 45 #define PREEMPT_MASK (__IRQ_MASK(PREEMPT_BITS) << PREEMPT_SHIFT) 46 - #define HARDIRQ_MASK (__IRQ_MASK(HARDIRQ_BITS) << HARDIRQ_SHIFT) 47 46 #define SOFTIRQ_MASK (__IRQ_MASK(SOFTIRQ_BITS) << SOFTIRQ_SHIFT) 47 + #define HARDIRQ_MASK (__IRQ_MASK(HARDIRQ_BITS) << HARDIRQ_SHIFT) 48 48 49 49 #define PREEMPT_OFFSET (1UL << PREEMPT_SHIFT) 50 50 #define SOFTIRQ_OFFSET (1UL << SOFTIRQ_SHIFT) 51 51 #define HARDIRQ_OFFSET (1UL << HARDIRQ_SHIFT) 52 + 53 + #if PREEMPT_ACTIVE < (1 << (HARDIRQ_SHIFT + HARDIRQ_BITS)) 54 + #error PREEMPT_ACTIVE is too low! 55 + #endif 52 56 53 57 #define hardirq_count() (preempt_count() & HARDIRQ_MASK) 54 58 #define softirq_count() (preempt_count() & SOFTIRQ_MASK)
+2
include/linux/inetdevice.h
··· 29 29 int no_xfrm; 30 30 int no_policy; 31 31 int force_igmp_version; 32 + int promote_secondaries; 32 33 void *sysctl; 33 34 }; 34 35 ··· 72 71 #define IN_DEV_SEC_REDIRECTS(in_dev) (ipv4_devconf.secure_redirects || (in_dev)->cnf.secure_redirects) 73 72 #define IN_DEV_IDTAG(in_dev) ((in_dev)->cnf.tag) 74 73 #define IN_DEV_MEDIUM_ID(in_dev) ((in_dev)->cnf.medium_id) 74 + #define IN_DEV_PROMOTE_SECONDARIES(in_dev) (ipv4_devconf.promote_secondaries || (in_dev)->cnf.promote_secondaries) 75 75 76 76 #define IN_DEV_RX_REDIRECTS(in_dev) \ 77 77 ((IN_DEV_FORWARD(in_dev) && \
+1
include/linux/libata.h
··· 410 410 extern void ata_exec_command(struct ata_port *ap, struct ata_taskfile *tf); 411 411 extern int ata_port_start (struct ata_port *ap); 412 412 extern void ata_port_stop (struct ata_port *ap); 413 + extern void ata_host_stop (struct ata_host_set *host_set); 413 414 extern irqreturn_t ata_interrupt (int irq, void *dev_instance, struct pt_regs *regs); 414 415 extern void ata_qc_prep(struct ata_queued_cmd *qc); 415 416 extern int ata_qc_issue_prot(struct ata_queued_cmd *qc);
+2 -1
include/linux/netdevice.h
··· 401 401 } reg_state; 402 402 403 403 /* Net device features */ 404 - int features; 404 + unsigned long features; 405 405 #define NETIF_F_SG 1 /* Scatter/gather IO. */ 406 406 #define NETIF_F_IP_CSUM 2 /* Can checksum only TCP/UDP over IPv4. */ 407 407 #define NETIF_F_NO_CSUM 4 /* Does not require checksum. F.e. loopack. */ ··· 913 913 extern void dev_set_promiscuity(struct net_device *dev, int inc); 914 914 extern void dev_set_allmulti(struct net_device *dev, int inc); 915 915 extern void netdev_state_change(struct net_device *dev); 916 + extern void netdev_features_change(struct net_device *dev); 916 917 /* Load a device via the kmod */ 917 918 extern void dev_load(const char *name); 918 919 extern void dev_mcast_init(void);
+1
include/linux/notifier.h
··· 56 56 #define NETDEV_CHANGEADDR 0x0008 57 57 #define NETDEV_GOING_DOWN 0x0009 58 58 #define NETDEV_CHANGENAME 0x000A 59 + #define NETDEV_FEAT_CHANGE 0x000B 59 60 60 61 #define SYS_DOWN 0x0001 /* Notify of system down */ 61 62 #define SYS_RESTART SYS_DOWN
+6
include/linux/pci_ids.h
··· 1230 1230 #define PCI_DEVICE_ID_NVIDIA_QUADRO4_900XGL 0x0258 1231 1231 #define PCI_DEVICE_ID_NVIDIA_QUADRO4_750XGL 0x0259 1232 1232 #define PCI_DEVICE_ID_NVIDIA_QUADRO4_700XGL 0x025B 1233 + #define PCI_DEVICE_ID_NVIDIA_NFORCE_MCP51_IDE 0x0265 1234 + #define PCI_DEVICE_ID_NVIDIA_NFORCE_MCP51_SATA 0x0266 1235 + #define PCI_DEVICE_ID_NVIDIA_NFORCE_MCP51_SATA2 0x0267 1236 + #define PCI_DEVICE_ID_NVIDIA_NVENET_12 0x0268 1237 + #define PCI_DEVICE_ID_NVIDIA_NVENET_13 0x0269 1238 + #define PCI_DEVICE_ID_NVIDIA_MCP51_AUDIO 0x026B 1233 1239 #define PCI_DEVICE_ID_NVIDIA_GEFORCE4_TI_4800 0x0280 1234 1240 #define PCI_DEVICE_ID_NVIDIA_GEFORCE4_TI_4800_8X 0x0281 1235 1241 #define PCI_DEVICE_ID_NVIDIA_GEFORCE4_TI_4800SE 0x0282
+1
include/linux/sysctl.h
··· 399 399 NET_IPV4_CONF_FORCE_IGMP_VERSION=17, 400 400 NET_IPV4_CONF_ARP_ANNOUNCE=18, 401 401 NET_IPV4_CONF_ARP_IGNORE=19, 402 + NET_IPV4_CONF_PROMOTE_SECONDARIES=20, 402 403 __NET_IPV4_CONF_MAX 403 404 }; 404 405
-3
include/net/route.h
··· 181 181 memcpy(&fl, &(*rp)->fl, sizeof(fl)); 182 182 fl.fl_ip_sport = sport; 183 183 fl.fl_ip_dport = dport; 184 - #if defined(CONFIG_IP_ROUTE_MULTIPATH_CACHED) 185 - fl.flags |= FLOWI_FLAG_MULTIPATHOLDROUTE; 186 - #endif 187 184 ip_rt_put(*rp); 188 185 *rp = NULL; 189 186 return ip_route_output_flow(rp, &fl, sk, 0);
+1 -1
init/Kconfig
··· 442 442 443 443 config MODVERSIONS 444 444 bool "Module versioning support (EXPERIMENTAL)" 445 - depends on MODULES && EXPERIMENTAL && !UML 445 + depends on MODULES && EXPERIMENTAL 446 446 help 447 447 Usually, you have to use modules compiled with your kernel. 448 448 Saying Y here makes it sometimes possible to use modules
-2
kernel/irq/handle.c
··· 119 119 */ 120 120 desc->handler->ack(irq); 121 121 action_ret = handle_IRQ_event(irq, regs, desc->action); 122 - if (!noirqdebug) 123 - note_interrupt(irq, desc, action_ret); 124 122 desc->handler->end(irq); 125 123 return 1; 126 124 }
+6
kernel/module.c
··· 1758 1758 const char __user *uargs) 1759 1759 { 1760 1760 struct module *mod; 1761 + mm_segment_t old_fs = get_fs(); 1761 1762 int ret = 0; 1762 1763 1763 1764 /* Must have permission */ ··· 1776 1775 return PTR_ERR(mod); 1777 1776 } 1778 1777 1778 + /* flush the icache in correct context */ 1779 + set_fs(KERNEL_DS); 1780 + 1779 1781 /* Flush the instruction cache, since we've played with text */ 1780 1782 if (mod->module_init) 1781 1783 flush_icache_range((unsigned long)mod->module_init, ··· 1786 1782 + mod->init_size); 1787 1783 flush_icache_range((unsigned long)mod->module_core, 1788 1784 (unsigned long)mod->module_core + mod->core_size); 1785 + 1786 + set_fs(old_fs); 1789 1787 1790 1788 /* Now sew it into the lists. They won't access us, since 1791 1789 strong_try_module_get() will fail. */
+2 -1
lib/Kconfig.debug
··· 151 151 152 152 config FRAME_POINTER 153 153 bool "Compile the kernel with frame pointers" 154 - depends on DEBUG_KERNEL && ((X86 && !X86_64) || CRIS || M68K || M68KNOMMU || FRV) 154 + depends on DEBUG_KERNEL && ((X86 && !X86_64) || CRIS || M68K || M68KNOMMU || FRV || UML) 155 + default y if DEBUG_INFO && UML 155 156 help 156 157 If you say Y here the resulting kernel image will be slightly larger 157 158 and slower, but it will give very useful debugging information.
+7 -8
net/bridge/br_device.c
··· 21 21 22 22 static struct net_device_stats *br_dev_get_stats(struct net_device *dev) 23 23 { 24 - struct net_bridge *br; 25 - 26 - br = dev->priv; 27 - 24 + struct net_bridge *br = netdev_priv(dev); 28 25 return &br->statistics; 29 26 } 30 27 ··· 51 54 52 55 static int br_dev_open(struct net_device *dev) 53 56 { 54 - netif_start_queue(dev); 57 + struct net_bridge *br = netdev_priv(dev); 55 58 56 - br_stp_enable_bridge(dev->priv); 59 + br_features_recompute(br); 60 + netif_start_queue(dev); 61 + br_stp_enable_bridge(br); 57 62 58 63 return 0; 59 64 } ··· 66 67 67 68 static int br_dev_stop(struct net_device *dev) 68 69 { 69 - br_stp_disable_bridge(dev->priv); 70 + br_stp_disable_bridge(netdev_priv(dev)); 70 71 71 72 netif_stop_queue(dev); 72 73 ··· 75 76 76 77 static int br_change_mtu(struct net_device *dev, int new_mtu) 77 78 { 78 - if ((new_mtu < 68) || new_mtu > br_min_mtu(dev->priv)) 79 + if (new_mtu < 68 || new_mtu > br_min_mtu(netdev_priv(dev))) 79 80 return -EINVAL; 80 81 81 82 dev->mtu = new_mtu;
+23
net/bridge/br_if.c
··· 314 314 return mtu; 315 315 } 316 316 317 + /* 318 + * Recomputes features using slave's features 319 + */ 320 + void br_features_recompute(struct net_bridge *br) 321 + { 322 + struct net_bridge_port *p; 323 + unsigned long features, checksum; 324 + 325 + features = NETIF_F_SG | NETIF_F_FRAGLIST 326 + | NETIF_F_HIGHDMA | NETIF_F_TSO; 327 + checksum = NETIF_F_IP_CSUM; /* least common subset */ 328 + 329 + list_for_each_entry(p, &br->port_list, list) { 330 + if (!(p->dev->features 331 + & (NETIF_F_IP_CSUM|NETIF_F_NO_CSUM|NETIF_F_HW_CSUM))) 332 + checksum = 0; 333 + features &= p->dev->features; 334 + } 335 + 336 + br->dev->features = features | checksum | NETIF_F_LLTX; 337 + } 338 + 317 339 /* called with RTNL */ 318 340 int br_add_if(struct net_bridge *br, struct net_device *dev) 319 341 { ··· 390 368 391 369 spin_lock_bh(&br->lock); 392 370 br_stp_recalculate_bridge_id(br); 371 + br_features_recompute(br); 393 372 spin_unlock_bh(&br->lock); 394 373 395 374 return 0;
+5 -3
net/bridge/br_input.c
··· 26 26 #ifdef CONFIG_NETFILTER_DEBUG 27 27 skb->nf_debug = 0; 28 28 #endif 29 - netif_rx(skb); 29 + netif_receive_skb(skb); 30 30 31 31 return 0; 32 32 } ··· 53 53 struct net_bridge *br = p->br; 54 54 struct net_bridge_fdb_entry *dst; 55 55 int passedup = 0; 56 + 57 + /* insert into forwarding database after filtering to avoid spoofing */ 58 + br_fdb_update(p->br, p, eth_hdr(skb)->h_source); 56 59 57 60 if (br->dev->flags & IFF_PROMISC) { 58 61 struct sk_buff *skb2; ··· 111 108 if (!is_valid_ether_addr(eth_hdr(skb)->h_source)) 112 109 goto err; 113 110 114 - if (p->state == BR_STATE_LEARNING || 115 - p->state == BR_STATE_FORWARDING) 111 + if (p->state == BR_STATE_LEARNING) 116 112 br_fdb_update(p->br, p, eth_hdr(skb)->h_source); 117 113 118 114 if (p->br->stp_enabled &&
+9
net/bridge/br_notify.c
··· 65 65 } 66 66 break; 67 67 68 + case NETDEV_FEAT_CHANGE: 69 + if (br->dev->flags & IFF_UP) 70 + br_features_recompute(br); 71 + 72 + /* could do recursive feature change notification 73 + * but who would care?? 74 + */ 75 + break; 76 + 68 77 case NETDEV_DOWN: 69 78 if (br->dev->flags & IFF_UP) 70 79 br_stp_disable_port(p);
+1
net/bridge/br_private.h
··· 174 174 extern int br_del_if(struct net_bridge *br, 175 175 struct net_device *dev); 176 176 extern int br_min_mtu(const struct net_bridge *br); 177 + extern void br_features_recompute(struct net_bridge *br); 177 178 178 179 /* br_input.c */ 179 180 extern int br_handle_frame_finish(struct sk_buff *skb);
+3
net/bridge/br_stp_bpdu.c
··· 140 140 struct net_bridge *br = p->br; 141 141 unsigned char *buf; 142 142 143 + /* insert into forwarding database after filtering to avoid spoofing */ 144 + br_fdb_update(p->br, p, eth_hdr(skb)->h_source); 145 + 143 146 /* need at least the 802 and STP headers */ 144 147 if (!pskb_may_pull(skb, sizeof(header)+1) || 145 148 memcmp(skb->data, header, sizeof(header)))
+12
net/core/dev.c
··· 761 761 } 762 762 763 763 /** 764 + * netdev_features_change - device changes features 765 + * @dev: device to cause notification 766 + * 767 + * Called to indicate a device has changed features. 768 + */ 769 + void netdev_features_change(struct net_device *dev) 770 + { 771 + notifier_call_chain(&netdev_chain, NETDEV_FEAT_CHANGE, dev); 772 + } 773 + EXPORT_SYMBOL(netdev_features_change); 774 + 775 + /** 764 776 * netdev_state_change - device changes state 765 777 * @dev: device to cause notification 766 778 *
+18 -2
net/core/ethtool.c
··· 29 29 30 30 u32 ethtool_op_get_tx_csum(struct net_device *dev) 31 31 { 32 - return (dev->features & NETIF_F_IP_CSUM) != 0; 32 + return (dev->features & (NETIF_F_IP_CSUM | NETIF_F_HW_CSUM)) != 0; 33 33 } 34 34 35 35 int ethtool_op_set_tx_csum(struct net_device *dev, u32 data) ··· 42 42 return 0; 43 43 } 44 44 45 + int ethtool_op_set_tx_hw_csum(struct net_device *dev, u32 data) 46 + { 47 + if (data) 48 + dev->features |= NETIF_F_HW_CSUM; 49 + else 50 + dev->features &= ~NETIF_F_HW_CSUM; 51 + 52 + return 0; 53 + } 45 54 u32 ethtool_op_get_sg(struct net_device *dev) 46 55 { 47 56 return (dev->features & NETIF_F_SG) != 0; ··· 691 682 void __user *useraddr = ifr->ifr_data; 692 683 u32 ethcmd; 693 684 int rc; 685 + unsigned long old_features; 694 686 695 687 /* 696 688 * XXX: This can be pushed down into the ethtool_* handlers that ··· 713 703 if ((rc = dev->ethtool_ops->begin(dev)) < 0) 714 704 return rc; 715 705 706 + old_features = dev->features; 707 + 716 708 switch (ethcmd) { 717 709 case ETHTOOL_GSET: 718 710 rc = ethtool_get_settings(dev, useraddr); ··· 724 712 break; 725 713 case ETHTOOL_GDRVINFO: 726 714 rc = ethtool_get_drvinfo(dev, useraddr); 727 - 728 715 break; 729 716 case ETHTOOL_GREGS: 730 717 rc = ethtool_get_regs(dev, useraddr); ··· 812 801 813 802 if(dev->ethtool_ops->complete) 814 803 dev->ethtool_ops->complete(dev); 804 + 805 + if (old_features != dev->features) 806 + netdev_features_change(dev); 807 + 815 808 return rc; 816 809 817 810 ioctl: ··· 832 817 EXPORT_SYMBOL(ethtool_op_set_sg); 833 818 EXPORT_SYMBOL(ethtool_op_set_tso); 834 819 EXPORT_SYMBOL(ethtool_op_set_tx_csum); 820 + EXPORT_SYMBOL(ethtool_op_set_tx_hw_csum);
+2 -1
net/core/net-sysfs.c
··· 21 21 #define to_net_dev(class) container_of(class, struct net_device, class_dev) 22 22 23 23 static const char fmt_hex[] = "%#x\n"; 24 + static const char fmt_long_hex[] = "%#lx\n"; 24 25 static const char fmt_dec[] = "%d\n"; 25 26 static const char fmt_ulong[] = "%lu\n"; 26 27 ··· 92 91 NETDEVICE_ATTR(addr_len, fmt_dec); 93 92 NETDEVICE_ATTR(iflink, fmt_dec); 94 93 NETDEVICE_ATTR(ifindex, fmt_dec); 95 - NETDEVICE_ATTR(features, fmt_hex); 94 + NETDEVICE_ATTR(features, fmt_long_hex); 96 95 NETDEVICE_ATTR(type, fmt_dec); 97 96 98 97 /* use same locking rules as GIFHWADDR ioctl's */
+29 -5
net/ipv4/devinet.c
··· 233 233 static void inet_del_ifa(struct in_device *in_dev, struct in_ifaddr **ifap, 234 234 int destroy) 235 235 { 236 + struct in_ifaddr *promote = NULL; 236 237 struct in_ifaddr *ifa1 = *ifap; 237 238 238 239 ASSERT_RTNL(); 239 240 240 - /* 1. Deleting primary ifaddr forces deletion all secondaries */ 241 + /* 1. Deleting primary ifaddr forces deletion all secondaries 242 + * unless alias promotion is set 243 + **/ 241 244 242 245 if (!(ifa1->ifa_flags & IFA_F_SECONDARY)) { 243 246 struct in_ifaddr *ifa; ··· 254 251 continue; 255 252 } 256 253 257 - *ifap1 = ifa->ifa_next; 254 + if (!IN_DEV_PROMOTE_SECONDARIES(in_dev)) { 255 + *ifap1 = ifa->ifa_next; 258 256 259 - rtmsg_ifa(RTM_DELADDR, ifa); 260 - notifier_call_chain(&inetaddr_chain, NETDEV_DOWN, ifa); 261 - inet_free_ifa(ifa); 257 + rtmsg_ifa(RTM_DELADDR, ifa); 258 + notifier_call_chain(&inetaddr_chain, NETDEV_DOWN, ifa); 259 + inet_free_ifa(ifa); 260 + } else { 261 + promote = ifa; 262 + break; 263 + } 262 264 } 263 265 } 264 266 ··· 288 280 289 281 if (!in_dev->ifa_list) 290 282 inetdev_destroy(in_dev); 283 + } 284 + 285 + if (promote && IN_DEV_PROMOTE_SECONDARIES(in_dev)) { 286 + /* not sure if we should send a delete notify first? */ 287 + promote->ifa_flags &= ~IFA_F_SECONDARY; 288 + rtmsg_ifa(RTM_NEWADDR, promote); 289 + notifier_call_chain(&inetaddr_chain, NETDEV_UP, promote); 291 290 } 292 291 } 293 292 ··· 1394 1379 .ctl_name = NET_IPV4_CONF_FORCE_IGMP_VERSION, 1395 1380 .procname = "force_igmp_version", 1396 1381 .data = &ipv4_devconf.force_igmp_version, 1382 + .maxlen = sizeof(int), 1383 + .mode = 0644, 1384 + .proc_handler = &ipv4_doint_and_flush, 1385 + .strategy = &ipv4_doint_and_flush_strategy, 1386 + }, 1387 + { 1388 + .ctl_name = NET_IPV4_CONF_PROMOTE_SECONDARIES, 1389 + .procname = "promote_secondaries", 1390 + .data = &ipv4_devconf.promote_secondaries, 1397 1391 .maxlen = sizeof(int), 1398 1392 .mode = 0644, 1399 1393 .proc_handler = &ipv4_doint_and_flush,
+1 -1
net/ipv4/esp4.c
··· 478 478 { 479 479 struct xfrm_decap_state decap; 480 480 481 - if (sizeof(struct esp_decap_data) < 481 + if (sizeof(struct esp_decap_data) > 482 482 sizeof(decap.decap_data)) { 483 483 extern void decap_data_too_small(void); 484 484
+1 -17
net/ipv4/multipath_drr.c
··· 57 57 58 58 static struct multipath_device state[MULTIPATH_MAX_DEVICECANDIDATES]; 59 59 static DEFINE_SPINLOCK(state_lock); 60 - static struct rtable *last_selection = NULL; 61 60 62 61 static int inline __multipath_findslot(void) 63 62 { ··· 110 111 .notifier_call = drr_dev_event, 111 112 }; 112 113 113 - static void drr_remove(struct rtable *rt) 114 - { 115 - if (last_selection == rt) 116 - last_selection = NULL; 117 - } 118 114 119 115 static void drr_safe_inc(atomic_t *usecount) 120 116 { ··· 137 143 int min_usecount = -1; 138 144 int devidx = -1; 139 145 int cur_min_devidx = -1; 140 - 141 - /* if necessary and possible utilize the old alternative */ 142 - if ((flp->flags & FLOWI_FLAG_MULTIPATHOLDROUTE) != 0 && 143 - last_selection != NULL) { 144 - result = last_selection; 145 - *rp = result; 146 - return; 147 - } 148 146 149 147 /* 1. make sure all alt. nexthops have the same GC related data */ 150 148 /* 2. determine the new candidate to be returned */ ··· 215 229 } 216 230 217 231 *rp = result; 218 - last_selection = result; 219 232 } 220 233 221 234 static struct ip_mp_alg_ops drr_ops = { 222 235 .mp_alg_select_route = drr_select_route, 223 - .mp_alg_remove = drr_remove, 224 236 }; 225 237 226 238 static int __init drr_init(void) ··· 228 244 if (err) 229 245 return err; 230 246 231 - err = multipath_alg_register(&drr_ops, IP_MP_ALG_RR); 247 + err = multipath_alg_register(&drr_ops, IP_MP_ALG_DRR); 232 248 if (err) 233 249 goto fail; 234 250
-20
net/ipv4/multipath_rr.c
··· 47 47 #include <net/checksum.h> 48 48 #include <net/ip_mp_alg.h> 49 49 50 - #define MULTIPATH_MAX_CANDIDATES 40 51 - 52 - static struct rtable* last_used = NULL; 53 - 54 - static void rr_remove(struct rtable *rt) 55 - { 56 - if (last_used == rt) 57 - last_used = NULL; 58 - } 59 - 60 50 static void rr_select_route(const struct flowi *flp, 61 51 struct rtable *first, struct rtable **rp) 62 52 { 63 53 struct rtable *nh, *result, *min_use_cand = NULL; 64 54 int min_use = -1; 65 - 66 - /* if necessary and possible utilize the old alternative */ 67 - if ((flp->flags & FLOWI_FLAG_MULTIPATHOLDROUTE) != 0 && 68 - last_used != NULL) { 69 - result = last_used; 70 - goto out; 71 - } 72 55 73 56 /* 1. make sure all alt. nexthops have the same GC related data 74 57 * 2. determine the new candidate to be returned ··· 73 90 if (!result) 74 91 result = first; 75 92 76 - out: 77 - last_used = result; 78 93 result->u.dst.__use++; 79 94 *rp = result; 80 95 } 81 96 82 97 static struct ip_mp_alg_ops rr_ops = { 83 98 .mp_alg_select_route = rr_select_route, 84 - .mp_alg_remove = rr_remove, 85 99 }; 86 100 87 101 static int __init rr_init(void)
+10
net/ipv4/netfilter/ip_queue.c
··· 3 3 * communicating with userspace via netlink. 4 4 * 5 5 * (C) 2000-2002 James Morris <jmorris@intercode.com.au> 6 + * (C) 2003-2005 Netfilter Core Team <coreteam@netfilter.org> 6 7 * 7 8 * This program is free software; you can redistribute it and/or modify 8 9 * it under the terms of the GNU General Public License version 2 as ··· 18 17 * 2005-01-10: Added /proc counter for dropped packets; fixed so 19 18 * packets aren't delivered to user space if they're going 20 19 * to be dropped. 20 + * 2005-05-26: local_bh_{disable,enable} around nf_reinject (Harald Welte) 21 21 * 22 22 */ 23 23 #include <linux/module.h> ··· 73 71 static void 74 72 ipq_issue_verdict(struct ipq_queue_entry *entry, int verdict) 75 73 { 74 + /* TCP input path (and probably other bits) assume to be called 75 + * from softirq context, not from syscall, like ipq_issue_verdict is 76 + * called. TCP input path deadlocks with locks taken from timer 77 + * softirq, e.g. We therefore emulate this by local_bh_disable() */ 78 + 79 + local_bh_disable(); 76 80 nf_reinject(entry->skb, entry->info, verdict); 81 + local_bh_enable(); 82 + 77 83 kfree(entry); 78 84 } 79 85
+6 -6
net/ipv4/udp.c
··· 738 738 unsigned long amount; 739 739 740 740 amount = 0; 741 - spin_lock_irq(&sk->sk_receive_queue.lock); 741 + spin_lock_bh(&sk->sk_receive_queue.lock); 742 742 skb = skb_peek(&sk->sk_receive_queue); 743 743 if (skb != NULL) { 744 744 /* ··· 748 748 */ 749 749 amount = skb->len - sizeof(struct udphdr); 750 750 } 751 - spin_unlock_irq(&sk->sk_receive_queue.lock); 751 + spin_unlock_bh(&sk->sk_receive_queue.lock); 752 752 return put_user(amount, (int __user *)arg); 753 753 } 754 754 ··· 848 848 /* Clear queue. */ 849 849 if (flags&MSG_PEEK) { 850 850 int clear = 0; 851 - spin_lock_irq(&sk->sk_receive_queue.lock); 851 + spin_lock_bh(&sk->sk_receive_queue.lock); 852 852 if (skb == skb_peek(&sk->sk_receive_queue)) { 853 853 __skb_unlink(skb, &sk->sk_receive_queue); 854 854 clear = 1; 855 855 } 856 - spin_unlock_irq(&sk->sk_receive_queue.lock); 856 + spin_unlock_bh(&sk->sk_receive_queue.lock); 857 857 if (clear) 858 858 kfree_skb(skb); 859 859 } ··· 1334 1334 struct sk_buff_head *rcvq = &sk->sk_receive_queue; 1335 1335 struct sk_buff *skb; 1336 1336 1337 - spin_lock_irq(&rcvq->lock); 1337 + spin_lock_bh(&rcvq->lock); 1338 1338 while ((skb = skb_peek(rcvq)) != NULL) { 1339 1339 if (udp_checksum_complete(skb)) { 1340 1340 UDP_INC_STATS_BH(UDP_MIB_INERRORS); ··· 1345 1345 break; 1346 1346 } 1347 1347 } 1348 - spin_unlock_irq(&rcvq->lock); 1348 + spin_unlock_bh(&rcvq->lock); 1349 1349 1350 1350 /* nothing to see, move along */ 1351 1351 if (skb == NULL)
+6 -4
net/ipv6/ip6_flowlabel.c
··· 535 535 if (err) 536 536 goto done; 537 537 538 - /* Do not check for fault */ 539 - if (!freq.flr_label) 540 - copy_to_user(&((struct in6_flowlabel_req __user *) optval)->flr_label, 541 - &fl->label, sizeof(fl->label)); 538 + if (!freq.flr_label) { 539 + if (copy_to_user(&((struct in6_flowlabel_req __user *) optval)->flr_label, 540 + &fl->label, sizeof(fl->label))) { 541 + /* Intentionally ignore fault. */ 542 + } 543 + } 542 544 543 545 sfl1->fl = fl; 544 546 sfl1->next = np->ipv6_fl_list;
+11 -5
net/sched/sch_dsmark.c
··· 18 18 #include <asm/byteorder.h> 19 19 20 20 21 - #if 1 /* control */ 21 + #if 0 /* control */ 22 22 #define DPRINTK(format,args...) printk(KERN_DEBUG format,##args) 23 23 #else 24 24 #define DPRINTK(format,args...) ··· 73 73 74 74 DPRINTK("dsmark_graft(sch %p,[qdisc %p],new %p,old %p)\n",sch,p,new, 75 75 old); 76 - if (!new) 77 - new = &noop_qdisc; 76 + 77 + if (new == NULL) { 78 + new = qdisc_create_dflt(sch->dev, &pfifo_qdisc_ops); 79 + if (new == NULL) 80 + new = &noop_qdisc; 81 + } 82 + 78 83 sch_tree_lock(sch); 79 84 *old = xchg(&p->q,new); 80 85 if (*old) ··· 168 163 return; 169 164 for (i = 0; i < p->indices; i++) { 170 165 if (p->mask[i] == 0xff && !p->value[i]) 171 - continue; 166 + goto ignore; 172 167 if (walker->count >= walker->skip) { 173 168 if (walker->fn(sch, i+1, walker) < 0) { 174 169 walker->stop = 1; 175 170 break; 176 171 } 177 172 } 178 - walker->count++; 173 + ignore: 174 + walker->count++; 179 175 } 180 176 } 181 177
+6 -6
sound/oss/Kconfig
··· 112 112 113 113 config SOUND_ES1370 114 114 tristate "Ensoniq AudioPCI (ES1370)" 115 - depends on SOUND_PRIME!=n && SOUND && PCI && SOUND_GAMEPORT 115 + depends on SOUND_PRIME!=n && SOUND && PCI 116 116 help 117 117 Say Y or M if you have a PCI sound card utilizing the Ensoniq 118 118 ES1370 chipset, such as Ensoniq's AudioPCI (non-97). To find ··· 125 125 126 126 config SOUND_ES1371 127 127 tristate "Creative Ensoniq AudioPCI 97 (ES1371)" 128 - depends on SOUND_PRIME!=n && SOUND && PCI && SOUND_GAMEPORT 128 + depends on SOUND_PRIME!=n && SOUND && PCI 129 129 help 130 130 Say Y or M if you have a PCI sound card utilizing the Ensoniq 131 131 ES1371 chipset, such as Ensoniq's AudioPCI97. To find out if ··· 138 138 139 139 config SOUND_ESSSOLO1 140 140 tristate "ESS Technology Solo1" 141 - depends on SOUND_PRIME!=n && SOUND && SOUND_GAMEPORT && PCI 141 + depends on SOUND_PRIME!=n && SOUND && PCI 142 142 help 143 143 Say Y or M if you have a PCI sound card utilizing the ESS Technology 144 144 Solo1 chip. To find out if your sound card uses a ··· 179 179 180 180 config SOUND_SONICVIBES 181 181 tristate "S3 SonicVibes" 182 - depends on SOUND_PRIME!=n && SOUND && SOUND_GAMEPORT 182 + depends on SOUND_PRIME!=n && SOUND 183 183 help 184 184 Say Y or M if you have a PCI sound card utilizing the S3 185 185 SonicVibes chipset. To find out if your sound card uses a ··· 226 226 227 227 config SOUND_TRIDENT 228 228 tristate "Trident 4DWave DX/NX, SiS 7018 or ALi 5451 PCI Audio Core" 229 - depends on SOUND_PRIME!=n && SOUND && SOUND_GAMEPORT 229 + depends on SOUND_PRIME!=n && SOUND 230 230 ---help--- 231 231 Say Y or M if you have a PCI sound card utilizing the Trident 232 232 4DWave-DX/NX chipset or your mother board chipset has SiS 7018 ··· 739 739 740 740 config SOUND_MAD16 741 741 tristate "OPTi MAD16 and/or Mozart based cards" 742 - depends on SOUND_OSS && SOUND_GAMEPORT 742 + depends on SOUND_OSS ---help--- 744 744 Answer Y if your card has a Mozart (OAK OTI-601) or MAD16 (OPTi 745 745 82C928 or 82C929 or 82C931) audio interface chip. These chips are
+19 -11
sound/ppc/pmac.c
··· 876 876 */ 877 877 static int __init snd_pmac_detect(pmac_t *chip) 878 878 { 879 - struct device_node *sound; 879 + struct device_node *sound = NULL; 880 880 unsigned int *prop, l; 881 881 struct macio_chip* macio; 882 882 ··· 906 906 chip->is_pbook_G3 = 1; 907 907 chip->node = find_devices("awacs"); 908 908 if (chip->node) 909 - return 0; /* ok */ 909 + sound = chip->node; 910 910 911 911 /* 912 912 * powermac G3 models have a node called "davbus" 913 913 * with a child called "sound". 914 914 */ 915 - chip->node = find_devices("davbus"); 915 + if (!chip->node) 916 + chip->node = find_devices("davbus"); 916 917 /* 917 918 * if we didn't find a davbus device, try 'i2s-a' since 918 919 * this seems to be what iBooks have 919 920 */ 920 921 if (! chip->node) { 921 922 chip->node = find_devices("i2s-a"); 922 - if (chip->node && chip->node->parent && chip->node->parent->parent) { 923 + if (chip->node && chip->node->parent && 924 + chip->node->parent->parent) { 923 925 if (device_is_compatible(chip->node->parent->parent, 924 926 "K2-Keylargo")) 925 927 chip->is_k2 = 1; ··· 930 928 if (! chip->node) 931 929 return -ENODEV; 932 930 933 - sound = find_devices("sound"); 934 - while (sound && sound->parent != chip->node) 935 - sound = sound->next; 931 + if (!sound) { 932 + sound = find_devices("sound"); 933 + while (sound && sound->parent != chip->node) 934 + sound = sound->next; 935 + } 936 936 if (! sound) 937 937 return -ENODEV; 938 938 prop = (unsigned int *) get_property(sound, "sub-frame", NULL); ··· 1023 1019 } 1024 1020 } 1025 1021 if (chip->pdev == NULL) 1026 - printk(KERN_WARNING "snd-powermac: can't locate macio PCI device !\n"); 1022 + printk(KERN_WARNING "snd-powermac: can't locate macio PCI" 1023 + " device !\n"); 1027 1024 1028 1025 detect_byte_swap(chip); 1029 1026 ··· 1032 1027 are available */ 1033 1028 prop = (unsigned int *) get_property(sound, "sample-rates", &l); 1034 1029 if (! prop) 1035 - prop = (unsigned int *) get_property(sound, "output-frame-rates", &l); 1030 + prop = (unsigned int *) get_property(sound, 1031 + "output-frame-rates", &l); 1036 1032 if (prop) { 1037 1033 int i; 1038 1034 chip->freqs_ok = 0; ··· 1060 1054 /* 1061 1055 * exported - boolean info callbacks for ease of programming 1062 1056 */ 1063 - int snd_pmac_boolean_stereo_info(snd_kcontrol_t *kcontrol, snd_ctl_elem_info_t *uinfo) 1057 + int snd_pmac_boolean_stereo_info(snd_kcontrol_t *kcontrol, 1058 + snd_ctl_elem_info_t *uinfo) 1064 1059 { 1065 1060 uinfo->type = SNDRV_CTL_ELEM_TYPE_BOOLEAN; 1066 1061 uinfo->count = 2; ··· 1070 1063 return 0; 1071 1064 } 1072 1065 1073 - int snd_pmac_boolean_mono_info(snd_kcontrol_t *kcontrol, snd_ctl_elem_info_t *uinfo) 1066 + int snd_pmac_boolean_mono_info(snd_kcontrol_t *kcontrol, 1067 + snd_ctl_elem_info_t *uinfo) 1074 1068 { 1075 1069 uinfo->type = SNDRV_CTL_ELEM_TYPE_BOOLEAN; 1076 1070 uinfo->count = 1;