Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branches 'x86/apic', 'x86/cpu', 'x86/fixmap', 'x86/mm', 'x86/sched', 'x86/setup-lzma', 'x86/signal' and 'x86/urgent' into x86/core

+5493 -2777
+43
Documentation/ABI/testing/sysfs-bus-pci
···
+ What:		/sys/bus/pci/drivers/.../bind
+ Date:		December 2003
+ Contact:	linux-pci@vger.kernel.org
+ Description:
+	Writing a device location to this file will cause
+	the driver to attempt to bind to the device found at
+	this location. This is useful for overriding default
+	bindings. The format for the location is: DDDD:BB:DD.F.
+	That is Domain:Bus:Device.Function and is the same as
+	found in /sys/bus/pci/devices/. For example:
+	# echo 0000:00:19.0 > /sys/bus/pci/drivers/foo/bind
+	(Note: kernels before 2.6.28 may require echo -n).
+
+ What:		/sys/bus/pci/drivers/.../unbind
+ Date:		December 2003
+ Contact:	linux-pci@vger.kernel.org
+ Description:
+	Writing a device location to this file will cause the
+	driver to attempt to unbind from the device found at
+	this location. This may be useful when overriding default
+	bindings. The format for the location is: DDDD:BB:DD.F.
+	That is Domain:Bus:Device.Function and is the same as
+	found in /sys/bus/pci/devices/. For example:
+	# echo 0000:00:19.0 > /sys/bus/pci/drivers/foo/unbind
+	(Note: kernels before 2.6.28 may require echo -n).
+
+ What:		/sys/bus/pci/drivers/.../new_id
+ Date:		December 2003
+ Contact:	linux-pci@vger.kernel.org
+ Description:
+	Writing a device ID to this file will attempt to
+	dynamically add a new device ID to a PCI device driver.
+	This may allow the driver to support more hardware than
+	was included in the driver's static device ID support
+	table at compile time. The format for the device ID is:
+	VVVV DDDD SVVV SDDD CCCC MMMM PPPP. That is Vendor ID,
+	Device ID, Subsystem Vendor ID, Subsystem Device ID,
+	Class, Class Mask, and Private Driver Data. The Vendor ID
+	and Device ID fields are required, the rest are optional.
+	Upon successfully adding an ID, the driver will probe
+	for the device and attempt to bind to it. For example:
+	# echo "8086 10f5" > /sys/bus/pci/drivers/foo/new_id
+
  What:		/sys/bus/pci/devices/.../vpd
  Date:		February 2008
  Contact:	Ben Hutchings <bhutchings@solarflare.com>
-205
Documentation/dvb/README.flexcop
··· 1 - This README escorted the skystar2-driver rewriting procedure. It describes the 2 - state of the new flexcop-driver set and some internals are written down here 3 - too. 4 - 5 - This document hopefully describes things about the flexcop and its 6 - device-offsprings. Goal was to write an easy-to-write and easy-to-read set of 7 - drivers based on the skystar2.c and other information. 8 - 9 - Remark: flexcop-pci.c was a copy of skystar2.c, but every line has been 10 - touched and rewritten. 11 - 12 - History & News 13 - ============== 14 - 2005-04-01 - correct USB ISOC transfers (thanks to Vadim Catana) 15 - 16 - 17 - 18 - 19 - General coding processing 20 - ========================= 21 - 22 - We should proceed as follows (as long as no one complains): 23 - 24 - 0) Think before start writing code! 25 - 26 - 1) rewriting the skystar2.c with the help of the flexcop register descriptions 27 - and splitting up the files to a pci-bus-part and a flexcop-part. 28 - The new driver will be called b2c2-flexcop-pci.ko/b2c2-flexcop-usb.ko for the 29 - device-specific part and b2c2-flexcop.ko for the common flexcop-functions. 30 - 31 - 2) Search for errors in the leftover of flexcop-pci.c (compare with pluto2.c 32 - and other pci drivers) 33 - 34 - 3) make some beautification (see 'Improvements when rewriting (refactoring) is 35 - done') 36 - 37 - 4) Testing the new driver and maybe substitute the skystar2.c with it, to reach 38 - a wider tester audience. 39 - 40 - 5) creating an usb-bus-part using the already written flexcop code for the pci 41 - card. 42 - 43 - Idea: create a kernel-object for the flexcop and export all important 44 - functions. This option saves kernel-memory, but maybe a lot of functions have 45 - to be exported to kernel namespace. 
46 - 47 - 48 - Current situation 49 - ================= 50 - 51 - 0) Done :) 52 - 1) Done (some minor issues left) 53 - 2) Done 54 - 3) Not ready yet, more information is necessary 55 - 4) next to be done (see the table below) 56 - 5) USB driver is working (yes, there are some minor issues) 57 - 58 - What seems to be ready? 59 - ----------------------- 60 - 61 - 1) Rewriting 62 - 1a) i2c is cut off from the flexcop-pci.c and seems to work 63 - 1b) moved tuner and demod stuff from flexcop-pci.c to flexcop-tuner-fe.c 64 - 1c) moved lnb and diseqc stuff from flexcop-pci.c to flexcop-tuner-fe.c 65 - 1e) eeprom (reading MAC address) 66 - 1d) sram (no dynamic sll size detection (commented out) (using default as JJ told me)) 67 - 1f) misc. register accesses for reading parameters (e.g. resetting, revision) 68 - 1g) pid/mac filter (flexcop-hw-filter.c) 69 - 1i) dvb-stuff initialization in flexcop.c (done) 70 - 1h) dma stuff (now just using the size-irq, instead of all-together, to be done) 71 - 1j) remove flexcop initialization from flexcop-pci.c completely (done) 72 - 1l) use a well working dma IRQ method (done, see 'Known bugs and problems and TODO') 73 - 1k) cleanup flexcop-files (remove unused EXPORT_SYMBOLs, make static from 74 - non-static where possible, moved code to proper places) 75 - 76 - 2) Search for errors in the leftover of flexcop-pci.c (partially done) 77 - 5a) add MAC address reading 78 - 5c) feeding of ISOC data to the software demux (format of the isochronous data 79 - and speed optimization, no real error) (thanks to Vadim Catana) 80 - 81 - What to do in the near future? 
82 - -------------------------------------- 83 - (no special order here) 84 - 85 - 5) USB driver 86 - 5b) optimize isoc-transfer (submitting/killing isoc URBs when transfer is starting) 87 - 88 - Testing changes 89 - --------------- 90 - 91 - O = item is working 92 - P = item is partially working 93 - X = item is not working 94 - N = item does not apply here 95 - <empty field> = item need to be examined 96 - 97 - | PCI | USB 98 - item | mt352 | nxt2002 | stv0299 | mt312 | mt352 | nxt2002 | stv0299 | mt312 99 - -------+-------+---------+---------+-------+-------+---------+---------+------- 100 - 1a) | O | | | | N | N | N | N 101 - 1b) | O | | | | | | O | 102 - 1c) | N | N | | | N | N | O | 103 - 1d) | O | O 104 - 1e) | O | O 105 - 1f) | P 106 - 1g) | O 107 - 1h) | P | 108 - 1i) | O | N 109 - 1j) | O | N 110 - 1l) | O | N 111 - 2) | O | N 112 - 5a) | N | O 113 - 5b)* | N | 114 - 5c) | N | O 115 - 116 - * - not done yet 117 - 118 - Known bugs and problems and TODO 119 - -------------------------------- 120 - 121 - 1g/h/l) when pid filtering is enabled on the pci card 122 - 123 - DMA usage currently: 124 - The DMA is splitted in 2 equal-sized subbuffers. The Flexcop writes to first 125 - address and triggers an IRQ when it's full and starts writing to the second 126 - address. When the second address is full, the IRQ is triggered again, and 127 - the flexcop writes to first address again, and so on. 128 - The buffersize of each address is currently 640*188 bytes. 129 - 130 - Problem is, when using hw-pid-filtering and doing some low-bandwidth 131 - operation (like scanning) the buffers won't be filled enough to trigger 132 - the IRQ. That's why: 133 - 134 - When PID filtering is activated, the timer IRQ is used. Every 1.97 ms the IRQ 135 - is triggered. Is the current write address of DMA1 different to the one 136 - during the last IRQ, then the data is passed to the demuxer. 137 - 138 - There is an additional DMA-IRQ-method: packet count IRQ. 
This isn't 139 - implemented correctly yet. 140 - 141 - The solution is to disable HW PID filtering, but I don't know how the DVB 142 - API software demux behaves on slow systems with 45MBit/s TS. 143 - 144 - Solved bugs :) 145 - -------------- 146 - 1g) pid-filtering (somehow pid index 4 and 5 (EMM_PID and ECM_PID) aren't 147 - working) 148 - SOLUTION: also index 0 was affected, because net_translation is done for 149 - these indexes by default 150 - 151 - 5b) isochronous transfer does only work in the first attempt (for the Sky2PC 152 - USB, Air2PC is working) SOLUTION: the flexcop was going asleep and never really 153 - woke up again (don't know if this need fixes, see 154 - flexcop-fe-tuner.c:flexcop_sleep) 155 - 156 - NEWS: when the driver is loaded and unloaded and loaded again (w/o doing 157 - anything in the while the driver is loaded the first time), no transfers take 158 - place anymore. 159 - 160 - Improvements when rewriting (refactoring) is done 161 - ================================================= 162 - 163 - - split sleeping of the flexcop (misc_204.ACPI3_sig = 1;) from lnb_control 164 - (enable sleeping for other demods than dvb-s) 165 - - add support for CableStar (stv0297 Microtune 203x/ALPS) (almost done, incompatibilities with the Nexus-CA) 166 - 167 - Debugging 168 - --------- 169 - - add verbose debugging to skystar2.c (dump the reg_dw_data) and compare it 170 - with this flexcop, this is important, because i2c is now using the 171 - flexcop_ibi_value union from flexcop-reg.h (do you have a better idea for 172 - that, please tell us so). 173 - 174 - Everything which is identical in the following table, can be put into a common 175 - flexcop-module. 
176 - 177 - PCI USB 178 - ------------------------------------------------------------------------------- 179 - Different: 180 - Register access: accessing IO memory USB control message 181 - I2C bus: I2C bus of the FC USB control message 182 - Data transfer: DMA isochronous transfer 183 - EEPROM transfer: through i2c bus not clear yet 184 - 185 - Identical: 186 - Streaming: accessing registers 187 - PID Filtering: accessing registers 188 - Sram destinations: accessing registers 189 - Tuner/Demod: I2C bus 190 - DVB-stuff: can be written for common use 191 - 192 - Acknowledgements (just for the rewriting part) 193 - ================ 194 - 195 - Bjarne Steinsbo thought a lot in the first place of the pci part for this code 196 - sharing idea. 197 - 198 - Andreas Oberritter for providing a recent PCI initialization template 199 - (pluto2.c). 200 - 201 - Boleslaw Ciesielski for pointing out a problem with firmware loader. 202 - 203 - Vadim Catana for correcting the USB transfer. 204 - 205 - comments, critics and ideas to linux-dvb@linuxtv.org.
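The double-buffered DMA scheme the removed README describes (two equal subbuffers; the FlexCop fills one, raises an IRQ, then switches to the other) can be sketched in plain C. This is an illustrative simulation under the README's stated 640*188-byte subbuffer size, not actual driver code; `demux_feed`, `flexcop_dma_sim`, and `dma_irq` are invented names:

```c
#include <assert.h>
#include <stddef.h>

#define SUBBUF_SIZE (640 * 188)	/* per-subbuffer size quoted in the README */

/* Hypothetical demux hook: just counts how many full subbuffers arrived. */
static int delivered;
static void demux_feed(const unsigned char *buf, size_t len)
{
	(void)buf;
	(void)len;
	delivered++;
}

struct flexcop_dma_sim {
	unsigned char subbuf[2][SUBBUF_SIZE];
	int active;	/* which subbuffer the chip writes next */
};

/* Called once per "buffer full" IRQ: hand the just-filled subbuffer to the
 * demuxer, then flip to the other half, as the README describes. */
static void dma_irq(struct flexcop_dma_sim *dma)
{
	demux_feed(dma->subbuf[dma->active], SUBBUF_SIZE);
	dma->active ^= 1;	/* ping-pong between the two halves */
}
```

The README's low-bandwidth problem follows directly from this shape: if a subbuffer never fills, `dma_irq` never fires, which is why the timer IRQ is used when hardware PID filtering is active.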
+20 -14
Documentation/dvb/technisat.txt
··· 1 - How to set up the Technisat devices 2 - =================================== 1 + How to set up the Technisat/B2C2 Flexcop devices 2 + ================================================ 3 3 4 4 1) Find out what device you have 5 5 ================================ ··· 16 16 17 17 If the Technisat is the only TV device in your box get rid of unnecessary modules and check this one: 18 18 "Multimedia devices" => "Customise analog and hybrid tuner modules to build" 19 - In this directory uncheck every driver which is activated there. 19 + In this directory uncheck every driver which is activated there (except "Simple tuner support" for case 9 only). 20 20 21 21 Then please activate: 22 22 2a) Main module part: 23 23 24 24 a.)"Multimedia devices" => "DVB/ATSC adapters" => "Technisat/B2C2 FlexcopII(b) and FlexCopIII adapters" 25 - b.)"Multimedia devices" => "DVB/ATSC adapters" => "Technisat/B2C2 FlexcopII(b) and FlexCopIII adapters" => "Technisat/B2C2 Air/Sky/Cable2PC PCI" in case of a PCI card OR 25 + b.)"Multimedia devices" => "DVB/ATSC adapters" => "Technisat/B2C2 FlexcopII(b) and FlexCopIII adapters" => "Technisat/B2C2 Air/Sky/Cable2PC PCI" in case of a PCI card 26 + OR 26 27 c.)"Multimedia devices" => "DVB/ATSC adapters" => "Technisat/B2C2 FlexcopII(b) and FlexCopIII adapters" => "Technisat/B2C2 Air/Sky/Cable2PC USB" in case of an USB 1.1 adapter 27 28 d.)"Multimedia devices" => "DVB/ATSC adapters" => "Technisat/B2C2 FlexcopII(b) and FlexCopIII adapters" => "Enable debug for the B2C2 FlexCop drivers" 28 29 Notice: d.) is helpful for troubleshooting 29 30 30 31 2b) Frontend module part: 31 32 32 - 1.) Revision 2.3: 33 + 1.) SkyStar DVB-S Revision 2.3: 33 34 a.)"Multimedia devices" => "Customise DVB frontends" => "Customise the frontend modules to build" 34 35 b.)"Multimedia devices" => "Customise DVB frontends" => "Zarlink VP310/MT312/ZL10313 based" 35 36 36 - 2.) Revision 2.6: 37 + 2.) 
SkyStar DVB-S Revision 2.6: 37 38 a.)"Multimedia devices" => "Customise DVB frontends" => "Customise the frontend modules to build" 38 39 b.)"Multimedia devices" => "Customise DVB frontends" => "ST STV0299 based" 39 40 40 - 3.) Revision 2.7: 41 + 3.) SkyStar DVB-S Revision 2.7: 41 42 a.)"Multimedia devices" => "Customise DVB frontends" => "Customise the frontend modules to build" 42 43 b.)"Multimedia devices" => "Customise DVB frontends" => "Samsung S5H1420 based" 43 44 c.)"Multimedia devices" => "Customise DVB frontends" => "Integrant ITD1000 Zero IF tuner for DVB-S/DSS" 44 45 d.)"Multimedia devices" => "Customise DVB frontends" => "ISL6421 SEC controller" 45 46 46 - 4.) Revision 2.8: 47 + 4.) SkyStar DVB-S Revision 2.8: 47 48 a.)"Multimedia devices" => "Customise DVB frontends" => "Customise the frontend modules to build" 48 49 b.)"Multimedia devices" => "Customise DVB frontends" => "Conexant CX24113/CX24128 tuner for DVB-S/DSS" 49 50 c.)"Multimedia devices" => "Customise DVB frontends" => "Conexant CX24123 based" 50 51 d.)"Multimedia devices" => "Customise DVB frontends" => "ISL6421 SEC controller" 51 52 52 - 5.) DVB-T card: 53 + 5.) AirStar DVB-T card: 53 54 a.)"Multimedia devices" => "Customise DVB frontends" => "Customise the frontend modules to build" 54 55 b.)"Multimedia devices" => "Customise DVB frontends" => "Zarlink MT352 based" 55 56 56 - 6.) DVB-C card: 57 + 6.) CableStar DVB-C card: 57 58 a.)"Multimedia devices" => "Customise DVB frontends" => "Customise the frontend modules to build" 58 59 b.)"Multimedia devices" => "Customise DVB frontends" => "ST STV0297 based" 59 60 60 - 7.) ATSC card 1st generation: 61 + 7.) AirStar ATSC card 1st generation: 61 62 a.)"Multimedia devices" => "Customise DVB frontends" => "Customise the frontend modules to build" 62 63 b.)"Multimedia devices" => "Customise DVB frontends" => "Broadcom BCM3510" 63 64 64 - 8.) ATSC card 2nd generation: 65 + 8.) 
AirStar ATSC card 2nd generation: 65 66 a.)"Multimedia devices" => "Customise DVB frontends" => "Customise the frontend modules to build" 66 67 b.)"Multimedia devices" => "Customise DVB frontends" => "NxtWave Communications NXT2002/NXT2004 based" 67 - c.)"Multimedia devices" => "Customise DVB frontends" => "LG Electronics LGDT3302/LGDT3303 based" 68 + c.)"Multimedia devices" => "Customise DVB frontends" => "Generic I2C PLL based tuners" 68 69 69 - Author: Uwe Bugla <uwe.bugla@gmx.de> December 2008 70 + 9.) AirStar ATSC card 3rd generation: 71 + a.)"Multimedia devices" => "Customise DVB frontends" => "Customise the frontend modules to build" 72 + b.)"Multimedia devices" => "Customise DVB frontends" => "LG Electronics LGDT3302/LGDT3303 based" 73 + c.)"Multimedia devices" => "Customise analog and hybrid tuner modules to build" => "Simple tuner support" 74 + 75 + Author: Uwe Bugla <uwe.bugla@gmx.de> February 2009
+4 -2
Documentation/kernel-parameters.txt
···
 	icn=		[HW,ISDN]
 			Format: <io>[,<membase>[,<icn_id>[,<icn_id2>]]]

-	ide=		[HW] (E)IDE subsystem
-			Format: ide=nodma or ide=doubler
+	ide-core.nodma=	[HW] (E)IDE subsystem
+			Format: =0.0 to prevent dma on hda, =0.1 hdb =1.0 hdc
+			.vlb_clock .pci_clock .noflush .noprobe .nowerr .cdrom
+			.chs .ignore_cable are additional options
 			See Documentation/ide/ide.txt.

 	idebus=		[HW] (E)IDE subsystem - VLB/PCI bus speed
+5 -6
Documentation/scsi/cxgb3i.txt
···
 ============

 The Chelsio T3 ASIC based Adapters (S310, S320, S302, S304, Mezz cards, etc.
-series of products) supports iSCSI acceleration and iSCSI Direct Data Placement
+series of products) support iSCSI acceleration and iSCSI Direct Data Placement
 (DDP) where the hardware handles the expensive byte touching operations, such
 as CRC computation and verification, and direct DMA to the final host memory
 destination:
···
 	the TCP segments onto the wire. It handles TCP retransmission if
 	needed.

-	On receving, S3 h/w recovers the iSCSI PDU by reassembling TCP
+	On receiving, S3 h/w recovers the iSCSI PDU by reassembling TCP
 	segments, separating the header and data, calculating and verifying
-	the digests, then forwards the header to the host. The payload data,
+	the digests, then forwarding the header to the host. The payload data,
 	if possible, will be directly placed into the pre-posted host DDP
 	buffer. Otherwise, the payload data will be sent to the host too.
···
 	sure the ip address is unique in the network.

 3. edit /etc/iscsi/iscsid.conf
-   The default setting for MaxRecvDataSegmentLength (131072) is too big,
-   replace "node.conn[0].iscsi.MaxRecvDataSegmentLength" to be a value no
-   bigger than 15360 (for example 8192):
+   The default setting for MaxRecvDataSegmentLength (131072) is too big;
+   replace with a value no bigger than 15360 (for example 8192):

 	node.conn[0].iscsi.MaxRecvDataSegmentLength = 8192
+4 -1
Documentation/x86/boot.txt
···
 The payload may be compressed. The format of both the compressed and
 uncompressed data should be determined using the standard magic
-numbers. Currently only gzip compressed ELF is used.
+numbers. The currently supported compression formats are gzip
+(magic numbers 1F 8B or 1F 9E), bzip2 (magic number 42 5A) and LZMA
+(magic number 5D 00). The uncompressed payload is currently always ELF
+(magic number 7F 45 4C 46).

 Field name:	payload_length
 Type:		read
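The magic numbers the new boot.txt text lists can be checked mechanically. A minimal sketch, using only the byte values quoted in the hunk; the function name `payload_format` is invented for illustration:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Classify a payload by the magic numbers listed in boot.txt. */
static const char *payload_format(const unsigned char *p, size_t len)
{
	if (len >= 4 && memcmp(p, "\x7f" "ELF", 4) == 0)
		return "elf";		/* 7F 45 4C 46: uncompressed payload */
	if (len >= 2 && p[0] == 0x1f && (p[1] == 0x8b || p[1] == 0x9e))
		return "gzip";		/* 1F 8B or 1F 9E */
	if (len >= 2 && p[0] == 0x42 && p[1] == 0x5a)
		return "bzip2";		/* 42 5A ("BZ") */
	if (len >= 2 && p[0] == 0x5d && p[1] == 0x00)
		return "lzma";		/* 5D 00 */
	return "unknown";
}
```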
+1 -1
MAINTAINERS
···

 ISDN SUBSYSTEM
 P:	Karsten Keil
-M:	kkeil@suse.de
+M:	isdn@linux-pingi.de
 L:	isdn4linux@listserv.isdn4linux.de (subscribers-only)
 W:	http://www.isdn4linux.de
 T:	git kernel.org:/pub/scm/linux/kernel/kkeil/isdn-2.6.git
+3 -3
arch/arm/mach-davinci/board-evm.c
···
 	gpio_request(gpio + 7, "nCF_SEL");
 	gpio_direction_output(gpio + 7, 1);

+	/* irlml6401 sustains over 3A, switches 5V in under 8 msec */
+	setup_usb(500, 8);
+
 	return 0;
 }
···
 	platform_add_devices(davinci_evm_devices,
 			     ARRAY_SIZE(davinci_evm_devices));
 	evm_init_i2c();
-
-	/* irlml6401 sustains over 3A, switches 5V in under 8 msec */
-	setup_usb(500, 8);
 }

 static __init void davinci_evm_irq_init(void)
+5
arch/arm/mach-davinci/clock.c
···
 		.lpsc = DAVINCI_LPSC_GPIO,
 	},
 	{
+		.name = "usb",
+		.rate = &commonrate,
+		.lpsc = DAVINCI_LPSC_USB,
+	},
+	{
 		.name = "AEMIFCLK",
 		.rate = &commonrate,
 		.lpsc = DAVINCI_LPSC_AEMIF,
+1
arch/arm/mach-davinci/usb.c
···
 #elif defined(CONFIG_USB_MUSB_HOST)
 	.mode		= MUSB_HOST,
 #endif
+	.clock		= "usb",
 	.config		= &musb_config,
 };
+11
arch/ia64/Kconfig
···
 	  and include PCI device scope covered by these DMA
 	  remapping devices.

+config DMAR_DEFAULT_ON
+	def_bool y
+	prompt "Enable DMA Remapping Devices by default"
+	depends on DMAR
+	help
+	  Selecting this option will enable a DMAR device at boot time if
+	  one is found. If this option is not selected, DMAR support can
+	  be enabled by passing intel_iommu=on to the kernel. It is
+	  recommended you say N here while the DMAR code remains
+	  experimental.
+
 endmenu

 endif
+1 -1
arch/ia64/kernel/iosapic.c
···
 	if (trigger == IOSAPIC_EDGE)
 		return -EINVAL;

-	for (i = 0; i <= NR_IRQS; i++) {
+	for (i = 0; i < NR_IRQS; i++) {
 		info = &iosapic_intr_info[i];
 		if (info->trigger == trigger && info->polarity == pol &&
 		    (info->dmode == IOSAPIC_FIXED ||
+1 -1
arch/ia64/kernel/unwind.c
···

 	/* next, remove hash table entries for this table */

-	for (index = 0; index <= UNW_HASH_SIZE; ++index) {
+	for (index = 0; index < UNW_HASH_SIZE; ++index) {
 		tmp = unw.cache + unw.hash[index];
 		if (unw.hash[index] >= UNW_CACHE_SIZE
 		    || tmp->ip < table->start || tmp->ip >= table->end)
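This and the iosapic.c change are the same class of fix: `i <= N` walks one element past an N-element array (indices 0..N-1), reading out of bounds on the final iteration. A tiny self-contained illustration with an invented table name:

```c
#include <assert.h>

#define NENTRIES 4

/* Count entries matching a value. The correct bound is i < NENTRIES;
 * writing "i <= NENTRIES" would read table[NENTRIES], one element past
 * the end of the array, which is undefined behavior in C. */
static int count_matching(const int table[NENTRIES], int val)
{
	int n = 0;
	for (int i = 0; i < NENTRIES; i++)
		if (table[i] == val)
			n++;
	return n;
}
```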
+7 -2
arch/mips/Kconfig
···
 	select SYS_SUPPORTS_64BIT_KERNEL
 	select SYS_SUPPORTS_BIG_ENDIAN
 	select SYS_SUPPORTS_HIGHMEM
-	select CPU_CAVIUM_OCTEON
+	select SYS_HAS_CPU_CAVIUM_OCTEON
 	help
 	  The Octeon simulator is software performance model of the Cavium
 	  Octeon Processor. It supports simulating Octeon processors on x86
···
 	select SYS_SUPPORTS_BIG_ENDIAN
 	select SYS_SUPPORTS_HIGHMEM
 	select SYS_HAS_EARLY_PRINTK
-	select CPU_CAVIUM_OCTEON
+	select SYS_HAS_CPU_CAVIUM_OCTEON
 	select SWAP_IO_SPACE
 	help
 	  This option supports all of the Octeon reference boards from Cavium
···

 config CPU_CAVIUM_OCTEON
 	bool "Cavium Octeon processor"
+	depends on SYS_HAS_CPU_CAVIUM_OCTEON
 	select IRQ_CPU
 	select IRQ_CPU_OCTEON
 	select CPU_HAS_PREFETCH
···
 config SYS_HAS_CPU_SB1
 	bool

+config SYS_HAS_CPU_CAVIUM_OCTEON
+	bool
+
 #
 # CPU may reorder R->R, R->W, W->R, W->W
 # Reordering beyond LL and SC is handled in WEAK_REORDERING_BEYOND_LLSC
···
 config 64BIT
 	bool "64-bit kernel"
 	depends on CPU_SUPPORTS_64BIT_KERNEL && SYS_SUPPORTS_64BIT_KERNEL
+	select HAVE_SYSCALL_WRAPPERS
 	help
 	  Select this option if you want to build a 64-bit kernel.
+3 -3
arch/mips/alchemy/common/time.c
···
 	 * setup counter 1 (RTC) to tick at full speed
 	 */
 	t = 0xffffff;
-	while ((au_readl(SYS_COUNTER_CNTRL) & SYS_CNTRL_T1S) && t--)
+	while ((au_readl(SYS_COUNTER_CNTRL) & SYS_CNTRL_T1S) && --t)
 		asm volatile ("nop");
 	if (!t)
 		goto cntr_err;
···
 	au_sync();

 	t = 0xffffff;
-	while ((au_readl(SYS_COUNTER_CNTRL) & SYS_CNTRL_C1S) && t--)
+	while ((au_readl(SYS_COUNTER_CNTRL) & SYS_CNTRL_C1S) && --t)
 		asm volatile ("nop");
 	if (!t)
 		goto cntr_err;
···
 	au_sync();

 	t = 0xffffff;
-	while ((au_readl(SYS_COUNTER_CNTRL) & SYS_CNTRL_C1S) && t--)
+	while ((au_readl(SYS_COUNTER_CNTRL) & SYS_CNTRL_C1S) && --t)
 		asm volatile ("nop");
 	if (!t)
 		goto cntr_err;
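The post- to pre-decrement change above matters because of how the timeout is detected afterwards. With `t--`, when the wait times out the loop exits as `t--` evaluates to 0 but then wraps `t` to 0xffffffff, so the following `if (!t)` never fires; with `--t` the counter is exactly 0 on timeout. A user-space sketch of both variants (`timed_out_*` and `always_busy` are invented names for illustration):

```c
#include <assert.h>

/* Broken variant: on timeout, t underflows past zero, so !t is false
 * and the timeout goes undetected. */
static int timed_out_post(int (*busy)(void))
{
	unsigned int t = 1000;
	while (busy() && t--)
		;
	return !t;
}

/* Fixed variant, matching the diff: on timeout the loop exits with
 * t == 0, so !t correctly reports the timeout. */
static int timed_out_pre(int (*busy)(void))
{
	unsigned int t = 1000;
	while (busy() && --t)
		;
	return !t;
}

/* Simulated hardware status bit that never clears. */
static int always_busy(void)
{
	return 1;
}
```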
-1
arch/mips/include/asm/seccomp.h
···
 #ifndef __ASM_SECCOMP_H

-#include <linux/thread_info.h>
 #include <linux/unistd.h>

 #define __NR_seccomp_read	__NR_read
-1
arch/mips/kernel/irq.c
···
 		seq_printf(p, "%10u ", kstat_cpu(j).irqs[i]);
 #endif
 		seq_printf(p, " %14s", irq_desc[i].chip->name);
-		seq_printf(p, "-%-8s", irq_desc[i].name);
 		seq_printf(p, "  %s", action->name);

 		for (action=action->next; action; action = action->next)
+39 -30
arch/mips/kernel/linux32.c
··· 32 32 #include <linux/module.h> 33 33 #include <linux/binfmts.h> 34 34 #include <linux/security.h> 35 + #include <linux/syscalls.h> 35 36 #include <linux/compat.h> 36 37 #include <linux/vfs.h> 37 38 #include <linux/ipc.h> ··· 64 63 #define merge_64(r1, r2) ((((r2) & 0xffffffffUL) << 32) + ((r1) & 0xffffffffUL)) 65 64 #endif 66 65 67 - asmlinkage unsigned long 68 - sys32_mmap2(unsigned long addr, unsigned long len, unsigned long prot, 69 - unsigned long flags, unsigned long fd, unsigned long pgoff) 66 + SYSCALL_DEFINE6(32_mmap2, unsigned long, addr, unsigned long, len, 67 + unsigned long, prot, unsigned long, flags, unsigned long, fd, 68 + unsigned long, pgoff) 70 69 { 71 70 struct file * file = NULL; 72 71 unsigned long error; ··· 122 121 int rlim_max; 123 122 }; 124 123 125 - asmlinkage long sys32_truncate64(const char __user * path, 126 - unsigned long __dummy, int a2, int a3) 124 + SYSCALL_DEFINE4(32_truncate64, const char __user *, path, 125 + unsigned long, __dummy, unsigned long, a2, unsigned long, a3) 127 126 { 128 127 return sys_truncate(path, merge_64(a2, a3)); 129 128 } 130 129 131 - asmlinkage long sys32_ftruncate64(unsigned int fd, unsigned long __dummy, 132 - int a2, int a3) 130 + SYSCALL_DEFINE4(32_ftruncate64, unsigned long, fd, unsigned long, __dummy, 131 + unsigned long, a2, unsigned long, a3) 133 132 { 134 133 return sys_ftruncate(fd, merge_64(a2, a3)); 135 134 } 136 135 137 - asmlinkage int sys32_llseek(unsigned int fd, unsigned int offset_high, 138 - unsigned int offset_low, loff_t __user * result, 139 - unsigned int origin) 136 + SYSCALL_DEFINE5(32_llseek, unsigned long, fd, unsigned long, offset_high, 137 + unsigned long, offset_low, loff_t __user *, result, 138 + unsigned long, origin) 140 139 { 141 140 return sys_llseek(fd, offset_high, offset_low, result, origin); 142 141 } ··· 145 144 lseek back to original location. They fail just like lseek does on 146 145 non-seekable files. 
*/ 147 146 148 - asmlinkage ssize_t sys32_pread(unsigned int fd, char __user * buf, 149 - size_t count, u32 unused, u64 a4, u64 a5) 147 + SYSCALL_DEFINE6(32_pread, unsigned long, fd, char __user *, buf, size_t, count, 148 + unsigned long, unused, unsigned long, a4, unsigned long, a5) 150 149 { 151 150 return sys_pread64(fd, buf, count, merge_64(a4, a5)); 152 151 } 153 152 154 - asmlinkage ssize_t sys32_pwrite(unsigned int fd, const char __user * buf, 155 - size_t count, u32 unused, u64 a4, u64 a5) 153 + SYSCALL_DEFINE6(32_pwrite, unsigned int, fd, const char __user *, buf, 154 + size_t, count, u32, unused, u64, a4, u64, a5) 156 155 { 157 156 return sys_pwrite64(fd, buf, count, merge_64(a4, a5)); 158 157 } 159 158 160 - asmlinkage int sys32_sched_rr_get_interval(compat_pid_t pid, 161 - struct compat_timespec __user *interval) 159 + SYSCALL_DEFINE2(32_sched_rr_get_interval, compat_pid_t, pid, 160 + struct compat_timespec __user *, interval) 162 161 { 163 162 struct timespec t; 164 163 int ret; ··· 175 174 176 175 #ifdef CONFIG_SYSVIPC 177 176 178 - asmlinkage long 179 - sys32_ipc(u32 call, int first, int second, int third, u32 ptr, u32 fifth) 177 + SYSCALL_DEFINE6(32_ipc, u32, call, long, first, long, second, long, third, 178 + unsigned long, ptr, unsigned long, fifth) 180 179 { 181 180 int version, err; 182 181 ··· 234 233 235 234 #else 236 235 237 - asmlinkage long 238 - sys32_ipc(u32 call, int first, int second, int third, u32 ptr, u32 fifth) 236 + SYSCALL_DEFINE6(32_ipc, u32, call, int, first, int, second, int, third, 237 + u32, ptr, u32 fifth) 239 238 { 240 239 return -ENOSYS; 241 240 } ··· 243 242 #endif /* CONFIG_SYSVIPC */ 244 243 245 244 #ifdef CONFIG_MIPS32_N32 246 - asmlinkage long sysn32_semctl(int semid, int semnum, int cmd, u32 arg) 245 + SYSCALL_DEFINE4(n32_semctl, int, semid, int, semnum, int, cmd, u32, arg) 247 246 { 248 247 /* compat_sys_semctl expects a pointer to union semun */ 249 248 u32 __user *uptr = compat_alloc_user_space(sizeof(u32)); ··· 
252 251 return compat_sys_semctl(semid, semnum, cmd, uptr); 253 252 } 254 253 255 - asmlinkage long sysn32_msgsnd(int msqid, u32 msgp, unsigned msgsz, int msgflg) 254 + SYSCALL_DEFINE4(n32_msgsnd, int, msqid, u32, msgp, unsigned int, msgsz, 255 + int, msgflg) 256 256 { 257 257 return compat_sys_msgsnd(msqid, msgsz, msgflg, compat_ptr(msgp)); 258 258 } 259 259 260 - asmlinkage long sysn32_msgrcv(int msqid, u32 msgp, size_t msgsz, int msgtyp, 261 - int msgflg) 260 + SYSCALL_DEFINE5(n32_msgrcv, int, msqid, u32, msgp, size_t, msgsz, 261 + int, msgtyp, int, msgflg) 262 262 { 263 263 return compat_sys_msgrcv(msqid, msgsz, msgtyp, msgflg, IPC_64, 264 264 compat_ptr(msgp)); ··· 279 277 280 278 #ifdef CONFIG_SYSCTL_SYSCALL 281 279 282 - asmlinkage long sys32_sysctl(struct sysctl_args32 __user *args) 280 + SYSCALL_DEFINE1(32_sysctl, struct sysctl_args32 __user *, args) 283 281 { 284 282 struct sysctl_args32 tmp; 285 283 int error; ··· 318 316 return error; 319 317 } 320 318 319 + #else 320 + 321 + SYSCALL_DEFINE1(32_sysctl, struct sysctl_args32 __user *, args) 322 + { 323 + return -ENOSYS; 324 + } 325 + 321 326 #endif /* CONFIG_SYSCTL_SYSCALL */ 322 327 323 - asmlinkage long sys32_newuname(struct new_utsname __user * name) 328 + SYSCALL_DEFINE1(32_newuname, struct new_utsname __user *, name) 324 329 { 325 330 int ret = 0; 326 331 ··· 343 334 return ret; 344 335 } 345 336 346 - asmlinkage int sys32_personality(unsigned long personality) 337 + SYSCALL_DEFINE1(32_personality, unsigned long, personality) 347 338 { 348 339 int ret; 349 340 personality &= 0xffffffff; ··· 366 357 367 358 extern asmlinkage long sys_ustat(dev_t dev, struct ustat __user * ubuf); 368 359 369 - asmlinkage int sys32_ustat(dev_t dev, struct ustat32 __user * ubuf32) 360 + SYSCALL_DEFINE2(32_ustat, dev_t, dev, struct ustat32 __user *, ubuf32) 370 361 { 371 362 int err; 372 363 struct ustat tmp; ··· 390 381 return err; 391 382 } 392 383 393 - asmlinkage int sys32_sendfile(int out_fd, int in_fd, compat_off_t 
__user *offset, 394 - s32 count) 384 + SYSCALL_DEFINE4(32_sendfile, long, out_fd, long, in_fd, 385 + compat_off_t __user *, offset, s32, count) 395 386 { 396 387 mm_segment_t old_fs = get_fs(); 397 388 int ret;
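The linux32.c hunk replaces open-coded `asmlinkage` prototypes with the kernel's `SYSCALL_DEFINEn` macros, which generate a `sys_<name>` entry point taking register-width arguments and casting to the declared types. A much-simplified user-space model of that shape (the real kernel macro also emits metadata and sign-extension wrappers; `TOY_SYSCALL_DEFINE2` and the example syscall are invented):

```c
#include <assert.h>

/* Toy two-argument variant: declare sys_##name taking longs (the register
 * width), forward to an inner helper with the declared parameter types --
 * roughly the shape SYSCALL_DEFINE2 generates. */
#define TOY_SYSCALL_DEFINE2(name, t1, a1, t2, a2)		\
	static long do_sys_##name(t1 a1, t2 a2);		\
	long sys_##name(long a1, long a2)			\
	{ return do_sys_##name((t1)a1, (t2)a2); }		\
	static long do_sys_##name(t1 a1, t2 a2)

/* Example use, mirroring the SYSCALL_DEFINE2(...) calls in the hunk. */
TOY_SYSCALL_DEFINE2(add_pair, int, x, unsigned int, y)
{
	return x + (long)y;
}
```

One practical effect visible in the hunk is the naming: the macro pastes `sys_` onto the given name, which is why `sys32_truncate64` becomes `SYSCALL_DEFINE4(32_truncate64, ...)` yielding `sys_32_truncate64`.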
+2 -2
arch/mips/kernel/scall32-o32.S
···
 	sys	sys_swapon		2
 	sys	sys_reboot		3
 	sys	sys_old_readdir		3
-	sys	old_mmap		6	/* 4090 */
+	sys	sys_mips_mmap		6	/* 4090 */
 	sys	sys_munmap		2
 	sys	sys_truncate		2
 	sys	sys_ftruncate		2
···
 	sys	sys_sendfile		4
 	sys	sys_ni_syscall		0
 	sys	sys_ni_syscall		0
-	sys	sys_mmap2		6	/* 4210 */
+	sys	sys_mips_mmap2		6	/* 4210 */
 	sys	sys_truncate64		4
 	sys	sys_ftruncate64		4
 	sys	sys_stat64		2
+1 -1
arch/mips/kernel/scall64-64.S
···
 	PTR	sys_newlstat
 	PTR	sys_poll
 	PTR	sys_lseek
-	PTR	old_mmap
+	PTR	sys_mips_mmap
 	PTR	sys_mprotect			/* 5010 */
 	PTR	sys_munmap
 	PTR	sys_brk
+14 -14
arch/mips/kernel/scall64-n32.S
··· 129 129 PTR sys_newlstat 130 130 PTR sys_poll 131 131 PTR sys_lseek 132 - PTR old_mmap 132 + PTR sys_mips_mmap 133 133 PTR sys_mprotect /* 6010 */ 134 134 PTR sys_munmap 135 135 PTR sys_brk 136 - PTR sys32_rt_sigaction 137 - PTR sys32_rt_sigprocmask 136 + PTR sys_32_rt_sigaction 137 + PTR sys_32_rt_sigprocmask 138 138 PTR compat_sys_ioctl /* 6015 */ 139 139 PTR sys_pread64 140 140 PTR sys_pwrite64 ··· 159 159 PTR compat_sys_setitimer 160 160 PTR sys_alarm 161 161 PTR sys_getpid 162 - PTR sys32_sendfile 162 + PTR sys_32_sendfile 163 163 PTR sys_socket /* 6040 */ 164 164 PTR sys_connect 165 165 PTR sys_accept ··· 181 181 PTR sys_exit 182 182 PTR compat_sys_wait4 183 183 PTR sys_kill /* 6060 */ 184 - PTR sys32_newuname 184 + PTR sys_32_newuname 185 185 PTR sys_semget 186 186 PTR sys_semop 187 - PTR sysn32_semctl 187 + PTR sys_n32_semctl 188 188 PTR sys_shmdt /* 6065 */ 189 189 PTR sys_msgget 190 - PTR sysn32_msgsnd 191 - PTR sysn32_msgrcv 190 + PTR sys_n32_msgsnd 191 + PTR sys_n32_msgrcv 192 192 PTR compat_sys_msgctl 193 193 PTR compat_sys_fcntl /* 6070 */ 194 194 PTR sys_flock ··· 245 245 PTR sys_getsid 246 246 PTR sys_capget 247 247 PTR sys_capset 248 - PTR sys32_rt_sigpending /* 6125 */ 248 + PTR sys_32_rt_sigpending /* 6125 */ 249 249 PTR compat_sys_rt_sigtimedwait 250 - PTR sys32_rt_sigqueueinfo 250 + PTR sys_32_rt_sigqueueinfo 251 251 PTR sysn32_rt_sigsuspend 252 252 PTR sys32_sigaltstack 253 253 PTR compat_sys_utime /* 6130 */ 254 254 PTR sys_mknod 255 - PTR sys32_personality 256 - PTR sys32_ustat 255 + PTR sys_32_personality 256 + PTR sys_32_ustat 257 257 PTR compat_sys_statfs 258 258 PTR compat_sys_fstatfs /* 6135 */ 259 259 PTR sys_sysfs ··· 265 265 PTR sys_sched_getscheduler 266 266 PTR sys_sched_get_priority_max 267 267 PTR sys_sched_get_priority_min 268 - PTR sys32_sched_rr_get_interval /* 6145 */ 268 + PTR sys_32_sched_rr_get_interval /* 6145 */ 269 269 PTR sys_mlock 270 270 PTR sys_munlock 271 271 PTR sys_mlockall 272 272 PTR sys_munlockall 273 273 
PTR sys_vhangup /* 6150 */ 274 274 PTR sys_pivot_root 275 - PTR sys32_sysctl 275 + PTR sys_32_sysctl 276 276 PTR sys_prctl 277 277 PTR compat_sys_adjtimex 278 278 PTR compat_sys_setrlimit /* 6155 */
+20 -20
arch/mips/kernel/scall64-o32.S
··· 265 265 PTR sys_olduname 266 266 PTR sys_umask /* 4060 */ 267 267 PTR sys_chroot 268 - PTR sys32_ustat 268 + PTR sys_32_ustat 269 269 PTR sys_dup2 270 270 PTR sys_getppid 271 271 PTR sys_getpgrp /* 4065 */ 272 272 PTR sys_setsid 273 - PTR sys32_sigaction 273 + PTR sys_32_sigaction 274 274 PTR sys_sgetmask 275 275 PTR sys_ssetmask 276 276 PTR sys_setreuid /* 4070 */ ··· 293 293 PTR sys_swapon 294 294 PTR sys_reboot 295 295 PTR compat_sys_old_readdir 296 - PTR old_mmap /* 4090 */ 296 + PTR sys_mips_mmap /* 4090 */ 297 297 PTR sys_munmap 298 298 PTR sys_truncate 299 299 PTR sys_ftruncate ··· 320 320 PTR compat_sys_wait4 321 321 PTR sys_swapoff /* 4115 */ 322 322 PTR compat_sys_sysinfo 323 - PTR sys32_ipc 323 + PTR sys_32_ipc 324 324 PTR sys_fsync 325 325 PTR sys32_sigreturn 326 326 PTR sys32_clone /* 4120 */ 327 327 PTR sys_setdomainname 328 - PTR sys32_newuname 328 + PTR sys_32_newuname 329 329 PTR sys_ni_syscall /* sys_modify_ldt */ 330 330 PTR compat_sys_adjtimex 331 331 PTR sys_mprotect /* 4125 */ ··· 339 339 PTR sys_fchdir 340 340 PTR sys_bdflush 341 341 PTR sys_sysfs /* 4135 */ 342 - PTR sys32_personality 342 + PTR sys_32_personality 343 343 PTR sys_ni_syscall /* for afs_syscall */ 344 344 PTR sys_setfsuid 345 345 PTR sys_setfsgid 346 - PTR sys32_llseek /* 4140 */ 346 + PTR sys_32_llseek /* 4140 */ 347 347 PTR compat_sys_getdents 348 348 PTR compat_sys_select 349 349 PTR sys_flock ··· 356 356 PTR sys_ni_syscall /* 4150 */ 357 357 PTR sys_getsid 358 358 PTR sys_fdatasync 359 - PTR sys32_sysctl 359 + PTR sys_32_sysctl 360 360 PTR sys_mlock 361 361 PTR sys_munlock /* 4155 */ 362 362 PTR sys_mlockall ··· 368 368 PTR sys_sched_yield 369 369 PTR sys_sched_get_priority_max 370 370 PTR sys_sched_get_priority_min 371 - PTR sys32_sched_rr_get_interval /* 4165 */ 371 + PTR sys_32_sched_rr_get_interval /* 4165 */ 372 372 PTR compat_sys_nanosleep 373 373 PTR sys_mremap 374 374 PTR sys_accept ··· 397 397 PTR sys_getresgid 398 398 PTR sys_prctl 399 399 PTR 
sys32_rt_sigreturn 400 - PTR sys32_rt_sigaction 401 - PTR sys32_rt_sigprocmask /* 4195 */ 402 - PTR sys32_rt_sigpending 400 + PTR sys_32_rt_sigaction 401 + PTR sys_32_rt_sigprocmask /* 4195 */ 402 + PTR sys_32_rt_sigpending 403 403 PTR compat_sys_rt_sigtimedwait 404 - PTR sys32_rt_sigqueueinfo 404 + PTR sys_32_rt_sigqueueinfo 405 405 PTR sys32_rt_sigsuspend 406 - PTR sys32_pread /* 4200 */ 407 - PTR sys32_pwrite 406 + PTR sys_32_pread /* 4200 */ 407 + PTR sys_32_pwrite 408 408 PTR sys_chown 409 409 PTR sys_getcwd 410 410 PTR sys_capget 411 411 PTR sys_capset /* 4205 */ 412 412 PTR sys32_sigaltstack 413 - PTR sys32_sendfile 413 + PTR sys_32_sendfile 414 414 PTR sys_ni_syscall 415 415 PTR sys_ni_syscall 416 - PTR sys32_mmap2 /* 4210 */ 417 - PTR sys32_truncate64 418 - PTR sys32_ftruncate64 416 + PTR sys_mips_mmap2 /* 4210 */ 417 + PTR sys_32_truncate64 418 + PTR sys_32_ftruncate64 419 419 PTR sys_newstat 420 420 PTR sys_newlstat 421 421 PTR sys_newfstat /* 4215 */ ··· 481 481 PTR compat_sys_mq_notify /* 4275 */ 482 482 PTR compat_sys_mq_getsetattr 483 483 PTR sys_ni_syscall /* sys_vserver */ 484 - PTR sys32_waitid 484 + PTR sys_32_waitid 485 485 PTR sys_ni_syscall /* available, was setaltroot */ 486 486 PTR sys_add_key /* 4280 */ 487 487 PTR sys_request_key
+3 -2
arch/mips/kernel/signal.c
··· 19 19 #include <linux/ptrace.h> 20 20 #include <linux/unistd.h> 21 21 #include <linux/compiler.h> 22 + #include <linux/syscalls.h> 22 23 #include <linux/uaccess.h> 23 24 24 25 #include <asm/abi.h> ··· 339 338 } 340 339 341 340 #ifdef CONFIG_TRAD_SIGNALS 342 - asmlinkage int sys_sigaction(int sig, const struct sigaction __user *act, 343 - struct sigaction __user *oact) 341 + SYSCALL_DEFINE3(sigaction, int, sig, const struct sigaction __user *, act, 342 + struct sigaction __user *, oact) 344 343 { 345 344 struct k_sigaction new_ka, old_ka; 346 345 int ret;
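The conversions above (and the `sys32_*` to `sys_32_*` renames in the syscall tables) all follow from moving handlers to the `SYSCALL_DEFINEn` macros. A minimal stand-in, not the kernel's actual macro, sketches the idea: the wrapper receives every argument as `long` (as values arrive from userspace registers) and casts down to the declared types, so narrow arguments get consistent truncation/sign handling on 64-bit kernels and all syscalls share one definition point for tracing hooks.

```c
#include <assert.h>

/* Illustrative stand-in for SYSCALL_DEFINEn (the MY_ prefix marks it
 * as hypothetical): widen every argument to long in the wrapper, then
 * cast to the declared types before calling the real body. */
#define MY_SYSCALL_DEFINE2(name, t1, a1, t2, a2)                \
        static long do_##name(t1 a1, t2 a2);                    \
        long my_sys_##name(long a1, long a2)                    \
        {                                                       \
                return do_##name((t1)a1, (t2)a2);               \
        }                                                       \
        static long do_##name(t1 a1, t2 a2)

/* Usage reads like the patched kernel code: the body is a normal
 * function with typed parameters. */
MY_SYSCALL_DEFINE2(add, int, x, int, y)
{
        return (long)x + y;
}
```

This also explains the renames in the tables: `SYSCALL_DEFINE3(32_sigaction, ...)` emits a symbol named `sys_32_sigaction`, so every table entry pointing at the old hand-written `sys32_*` stubs has to change.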
+14 -14
arch/mips/kernel/signal32.c
··· 349 349 return -ERESTARTNOHAND; 350 350 } 351 351 352 - asmlinkage int sys32_sigaction(int sig, const struct sigaction32 __user *act, 353 - struct sigaction32 __user *oact) 352 + SYSCALL_DEFINE3(32_sigaction, long, sig, const struct sigaction32 __user *, act, 353 + struct sigaction32 __user *, oact) 354 354 { 355 355 struct k_sigaction new_ka, old_ka; 356 356 int ret; ··· 704 704 .restart = __NR_O32_restart_syscall 705 705 }; 706 706 707 - asmlinkage int sys32_rt_sigaction(int sig, const struct sigaction32 __user *act, 708 - struct sigaction32 __user *oact, 709 - unsigned int sigsetsize) 707 + SYSCALL_DEFINE4(32_rt_sigaction, int, sig, 708 + const struct sigaction32 __user *, act, 709 + struct sigaction32 __user *, oact, unsigned int, sigsetsize) 710 710 { 711 711 struct k_sigaction new_sa, old_sa; 712 712 int ret = -EINVAL; ··· 748 748 return ret; 749 749 } 750 750 751 - asmlinkage int sys32_rt_sigprocmask(int how, compat_sigset_t __user *set, 752 - compat_sigset_t __user *oset, unsigned int sigsetsize) 751 + SYSCALL_DEFINE4(32_rt_sigprocmask, int, how, compat_sigset_t __user *, set, 752 + compat_sigset_t __user *, oset, unsigned int, sigsetsize) 753 753 { 754 754 sigset_t old_set, new_set; 755 755 int ret; ··· 770 770 return ret; 771 771 } 772 772 773 - asmlinkage int sys32_rt_sigpending(compat_sigset_t __user *uset, 774 - unsigned int sigsetsize) 773 + SYSCALL_DEFINE2(32_rt_sigpending, compat_sigset_t __user *, uset, 774 + unsigned int, sigsetsize) 775 775 { 776 776 int ret; 777 777 sigset_t set; ··· 787 787 return ret; 788 788 } 789 789 790 - asmlinkage int sys32_rt_sigqueueinfo(int pid, int sig, compat_siginfo_t __user *uinfo) 790 + SYSCALL_DEFINE3(32_rt_sigqueueinfo, int, pid, int, sig, 791 + compat_siginfo_t __user *, uinfo) 791 792 { 792 793 siginfo_t info; 793 794 int ret; ··· 803 802 return ret; 804 803 } 805 804 806 - asmlinkage long 807 - sys32_waitid(int which, compat_pid_t pid, 808 - compat_siginfo_t __user *uinfo, int options, 809 - struct 
compat_rusage __user *uru) 805 + SYSCALL_DEFINE5(32_waitid, int, which, compat_pid_t, pid, 806 + compat_siginfo_t __user *, uinfo, int, options, 807 + struct compat_rusage __user *, uru) 810 808 { 811 809 siginfo_t info; 812 810 struct rusage ru;
+13 -13
arch/mips/kernel/syscall.c
··· 152 152 return error; 153 153 } 154 154 155 - asmlinkage unsigned long 156 - old_mmap(unsigned long addr, unsigned long len, int prot, 157 - int flags, int fd, off_t offset) 155 + SYSCALL_DEFINE6(mips_mmap, unsigned long, addr, unsigned long, len, 156 + unsigned long, prot, unsigned long, flags, unsigned long, 157 + fd, off_t, offset) 158 158 { 159 159 unsigned long result; 160 160 ··· 168 168 return result; 169 169 } 170 170 171 - asmlinkage unsigned long 172 - sys_mmap2(unsigned long addr, unsigned long len, unsigned long prot, 173 - unsigned long flags, unsigned long fd, unsigned long pgoff) 171 + SYSCALL_DEFINE6(mips_mmap2, unsigned long, addr, unsigned long, len, 172 + unsigned long, prot, unsigned long, flags, unsigned long, fd, 173 + unsigned long, pgoff) 174 174 { 175 175 if (pgoff & (~PAGE_MASK >> 12)) 176 176 return -EINVAL; ··· 240 240 /* 241 241 * Compacrapability ... 242 242 */ 243 - asmlinkage int sys_uname(struct old_utsname __user * name) 243 + SYSCALL_DEFINE1(uname, struct old_utsname __user *, name) 244 244 { 245 245 if (name && !copy_to_user(name, utsname(), sizeof (*name))) 246 246 return 0; ··· 250 250 /* 251 251 * Compacrapability ... 252 252 */ 253 - asmlinkage int sys_olduname(struct oldold_utsname __user * name) 253 + SYSCALL_DEFINE1(olduname, struct oldold_utsname __user *, name) 254 254 { 255 255 int error; 256 256 ··· 279 279 return error; 280 280 } 281 281 282 - asmlinkage int sys_set_thread_area(unsigned long addr) 282 + SYSCALL_DEFINE1(set_thread_area, unsigned long, addr) 283 283 { 284 284 struct thread_info *ti = task_thread_info(current); 285 285 ··· 290 290 return 0; 291 291 } 292 292 293 - asmlinkage int _sys_sysmips(int cmd, long arg1, int arg2, int arg3) 293 + asmlinkage int _sys_sysmips(long cmd, long arg1, long arg2, long arg3) 294 294 { 295 295 switch (cmd) { 296 296 case MIPS_ATOMIC_SET: ··· 325 325 * 326 326 * This is really horribly ugly. 
327 327 */ 328 - asmlinkage int sys_ipc(unsigned int call, int first, int second, 329 - unsigned long third, void __user *ptr, long fifth) 328 + SYSCALL_DEFINE6(ipc, unsigned int, call, int, first, int, second, 329 + unsigned long, third, void __user *, ptr, long, fifth) 330 330 { 331 331 int version, ret; 332 332 ··· 411 411 /* 412 412 * No implemented yet ... 413 413 */ 414 - asmlinkage int sys_cachectl(char *addr, int nbytes, int op) 414 + SYSCALL_DEFINE3(cachectl, char *, addr, int, nbytes, int, op) 415 415 { 416 416 return -ENOSYS; 417 417 }
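The `pgoff` test kept in `sys_mips_mmap2` is worth unpacking: mmap2 passes its offset in fixed 4096-byte units, so on kernels with a larger page size the low bits of `pgoff` must be zero for the offset to land on a real page boundary. A small model, with an assumed `PAGE_SHIFT` rather than a real MIPS configuration:

```c
#include <assert.h>
#include <stdbool.h>

/* Assumed example configuration: 16K kernel pages. */
#define PAGE_SHIFT 14
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define PAGE_MASK  (~(PAGE_SIZE - 1))

/* pgoff is in 4096-byte units, as userspace passes it to mmap2.
 * With 16K pages, ~PAGE_MASK >> 12 == 0x3, so pgoff must be a
 * multiple of 4 (i.e. a 16K-aligned byte offset). */
static bool mmap2_pgoff_ok(unsigned long pgoff)
{
        return (pgoff & (~PAGE_MASK >> 12)) == 0;
}
```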
+3 -2
arch/mips/mm/cache.c
··· 13 13 #include <linux/linkage.h> 14 14 #include <linux/module.h> 15 15 #include <linux/sched.h> 16 + #include <linux/syscalls.h> 16 17 #include <linux/mm.h> 17 18 18 19 #include <asm/cacheflush.h> ··· 59 58 * We could optimize the case where the cache argument is not BCACHE but 60 59 * that seems very atypical use ... 61 60 */ 62 - asmlinkage int sys_cacheflush(unsigned long addr, 63 - unsigned long bytes, unsigned int cache) 61 + SYSCALL_DEFINE3(cacheflush, unsigned long, addr, unsigned long, bytes, 62 + unsigned int, cache) 64 63 { 65 64 if (bytes == 0) 66 65 return 0;
+5
arch/powerpc/include/asm/compat.h
··· 210 210 compat_ulong_t __unused6; 211 211 }; 212 212 213 + static inline int is_compat_task(void) 214 + { 215 + return test_thread_flag(TIF_32BIT); 216 + } 217 + 213 218 #endif /* __KERNEL__ */ 214 219 #endif /* _ASM_POWERPC_COMPAT_H */
-4
arch/powerpc/include/asm/seccomp.h
··· 1 1 #ifndef _ASM_POWERPC_SECCOMP_H 2 2 #define _ASM_POWERPC_SECCOMP_H 3 3 4 - #ifdef __KERNEL__ 5 - #include <linux/thread_info.h> 6 - #endif 7 - 8 4 #include <linux/unistd.h> 9 5 10 6 #define __NR_seccomp_read __NR_read
+13 -16
arch/powerpc/kernel/align.c
··· 367 367 static int emulate_fp_pair(unsigned char __user *addr, unsigned int reg, 368 368 unsigned int flags) 369 369 { 370 - char *ptr = (char *) &current->thread.TS_FPR(reg); 371 - int i, ret; 370 + char *ptr0 = (char *) &current->thread.TS_FPR(reg); 371 + char *ptr1 = (char *) &current->thread.TS_FPR(reg+1); 372 + int i, ret, sw = 0; 372 373 373 374 if (!(flags & F)) 374 375 return 0; 375 376 if (reg & 1) 376 377 return 0; /* invalid form: FRS/FRT must be even */ 377 - if (!(flags & SW)) { 378 - /* not byte-swapped - easy */ 379 - if (!(flags & ST)) 380 - ret = __copy_from_user(ptr, addr, 16); 381 - else 382 - ret = __copy_to_user(addr, ptr, 16); 383 - } else { 384 - /* each FPR value is byte-swapped separately */ 385 - ret = 0; 386 - for (i = 0; i < 16; ++i) { 387 - if (!(flags & ST)) 388 - ret |= __get_user(ptr[i^7], addr + i); 389 - else 390 - ret |= __put_user(ptr[i^7], addr + i); 378 + if (flags & SW) 379 + sw = 7; 380 + ret = 0; 381 + for (i = 0; i < 8; ++i) { 382 + if (!(flags & ST)) { 383 + ret |= __get_user(ptr0[i^sw], addr + i); 384 + ret |= __get_user(ptr1[i^sw], addr + i + 8); 385 + } else { 386 + ret |= __put_user(ptr0[i^sw], addr + i); 387 + ret |= __put_user(ptr1[i^sw], addr + i + 8); 391 388 } 392 389 } 393 390 if (ret)
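The reworked `emulate_fp_pair()` collapses the swapped and non-swapped paths into one loop by XOR-ing the byte index with either 7 or 0. A sketch of just that indexing trick:

```c
#include <assert.h>

/* XOR-ing a byte index with 7 mirrors it within an 8-byte doubleword
 * (0<->7, 1<->6, ...), so the same loop serves both the byte-swapped
 * case (sw == 7) and the straight copy (sw == 0). */
static void copy_dword(unsigned char *dst, const unsigned char *src, int sw)
{
        int i;

        for (i = 0; i < 8; ++i)
                dst[i ^ sw] = src[i];
}
```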
+31 -7
arch/powerpc/lib/copyuser_64.S
··· 62 62 72: std r8,8(r3) 63 63 beq+ 3f 64 64 addi r3,r3,16 65 - 23: ld r9,8(r4) 66 65 .Ldo_tail: 67 66 bf cr7*4+1,1f 68 - rotldi r9,r9,32 67 + 23: lwz r9,8(r4) 68 + addi r4,r4,4 69 69 73: stw r9,0(r3) 70 70 addi r3,r3,4 71 71 1: bf cr7*4+2,2f 72 - rotldi r9,r9,16 72 + 44: lhz r9,8(r4) 73 + addi r4,r4,2 73 74 74: sth r9,0(r3) 74 75 addi r3,r3,2 75 76 2: bf cr7*4+3,3f 76 - rotldi r9,r9,8 77 + 45: lbz r9,8(r4) 77 78 75: stb r9,0(r3) 78 79 3: li r3,0 79 80 blr ··· 142 141 6: cmpwi cr1,r5,8 143 142 addi r3,r3,32 144 143 sld r9,r9,r10 145 - ble cr1,.Ldo_tail 144 + ble cr1,7f 146 145 34: ld r0,8(r4) 147 146 srd r7,r0,r11 148 147 or r9,r7,r9 149 - b .Ldo_tail 148 + 7: 149 + bf cr7*4+1,1f 150 + rotldi r9,r9,32 151 + 94: stw r9,0(r3) 152 + addi r3,r3,4 153 + 1: bf cr7*4+2,2f 154 + rotldi r9,r9,16 155 + 95: sth r9,0(r3) 156 + addi r3,r3,2 157 + 2: bf cr7*4+3,3f 158 + rotldi r9,r9,8 159 + 96: stb r9,0(r3) 160 + 3: li r3,0 161 + blr 150 162 151 163 .Ldst_unaligned: 152 164 PPC_MTOCRF 0x01,r6 /* put #bytes to 8B bdry into cr7 */ ··· 232 218 121: 233 219 132: 234 220 addi r3,r3,8 235 - 123: 236 221 134: 237 222 135: 238 223 138: ··· 239 226 140: 240 227 141: 241 228 142: 229 + 123: 230 + 144: 231 + 145: 242 232 243 233 /* 244 234 * here we have had a fault on a load and r3 points to the first ··· 325 309 187: 326 310 188: 327 311 189: 312 + 194: 313 + 195: 314 + 196: 328 315 1: 329 316 ld r6,-24(r1) 330 317 ld r5,-8(r1) ··· 348 329 .llong 72b,172b 349 330 .llong 23b,123b 350 331 .llong 73b,173b 332 + .llong 44b,144b 351 333 .llong 74b,174b 334 + .llong 45b,145b 352 335 .llong 75b,175b 353 336 .llong 24b,124b 354 337 .llong 25b,125b ··· 368 347 .llong 79b,179b 369 348 .llong 80b,180b 370 349 .llong 34b,134b 350 + .llong 94b,194b 351 + .llong 95b,195b 352 + .llong 96b,196b 371 353 .llong 35b,135b 372 354 .llong 81b,181b 373 355 .llong 36b,136b
+20 -6
arch/powerpc/lib/memcpy_64.S
··· 53 53 3: std r8,8(r3) 54 54 beq 3f 55 55 addi r3,r3,16 56 - ld r9,8(r4) 57 56 .Ldo_tail: 58 57 bf cr7*4+1,1f 59 - rotldi r9,r9,32 58 + lwz r9,8(r4) 59 + addi r4,r4,4 60 60 stw r9,0(r3) 61 61 addi r3,r3,4 62 62 1: bf cr7*4+2,2f 63 - rotldi r9,r9,16 63 + lhz r9,8(r4) 64 + addi r4,r4,2 64 65 sth r9,0(r3) 65 66 addi r3,r3,2 66 67 2: bf cr7*4+3,3f 67 - rotldi r9,r9,8 68 + lbz r9,8(r4) 68 69 stb r9,0(r3) 69 70 3: ld r3,48(r1) /* return dest pointer */ 70 71 blr ··· 134 133 cmpwi cr1,r5,8 135 134 addi r3,r3,32 136 135 sld r9,r9,r10 137 - ble cr1,.Ldo_tail 136 + ble cr1,6f 138 137 ld r0,8(r4) 139 138 srd r7,r0,r11 140 139 or r9,r7,r9 141 - b .Ldo_tail 140 + 6: 141 + bf cr7*4+1,1f 142 + rotldi r9,r9,32 143 + stw r9,0(r3) 144 + addi r3,r3,4 145 + 1: bf cr7*4+2,2f 146 + rotldi r9,r9,16 147 + sth r9,0(r3) 148 + addi r3,r3,2 149 + 2: bf cr7*4+3,3f 150 + rotldi r9,r9,8 151 + stb r9,0(r3) 152 + 3: ld r3,48(r1) /* return dest pointer */ 153 + blr 142 154 143 155 .Ldst_unaligned: 144 156 PPC_MTOCRF 0x01,r6 # put #bytes to 8B bdry into cr7
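Both copyuser_64.S and memcpy_64.S previously finished the tail with a single 8-byte `ld` followed by rotates, which can read up to seven bytes past the end of the source buffer; the patches replace that with exact-width `lwz`/`lhz`/`lbz` loads. A C rendering of the fixed tail logic:

```c
#include <stddef.h>

/* Copy the last n (n < 8) bytes with a 4-, then 2-, then 1-byte
 * access, guided by the low bits of n, so no load ever touches memory
 * past src + n. */
static void copy_tail(unsigned char *dst, const unsigned char *src, size_t n)
{
        if (n & 4) {
                dst[0] = src[0]; dst[1] = src[1];
                dst[2] = src[2]; dst[3] = src[3];
                dst += 4; src += 4;
        }
        if (n & 2) {
                dst[0] = src[0]; dst[1] = src[1];
                dst += 2; src += 2;
        }
        if (n & 1)
                dst[0] = src[0];
}
```

The over-read matters most for copy_to_user/copy_from_user, where the stray bytes can sit on an unmapped page and fault even though the requested range is entirely valid.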
+17
arch/powerpc/sysdev/ppc4xx_pci.c
··· 204 204 { 205 205 u32 ma, pcila, pciha; 206 206 207 + /* Hack warning ! The "old" PCI 2.x cell only let us configure the low 208 + * 32-bit of incoming PLB addresses. The top 4 bits of the 36-bit 209 + * address are actually hard wired to a value that appears to depend 210 + * on the specific SoC. For example, it's 0 on 440EP and 1 on 440EPx. 211 + * 212 + * The trick here is we just crop those top bits and ignore them when 213 + * programming the chip. That means the device-tree has to be right 214 + * for the specific part used (we don't print a warning if it's wrong 215 + * but on the other hand, you'll crash quickly enough), but at least 216 + * this code should work whatever the hard coded value is 217 + */ 218 + plb_addr &= 0xffffffffull; 219 + 220 + /* Note: Due to the above hack, the test below doesn't actually test 221 + * if you address is above 4G, but it tests that address and 222 + * (address + size) are both contained in the same 4G 223 + */ 207 224 if ((plb_addr + size) > 0xffffffffull || !is_power_of_2(size) || 208 225 size < 0x1000 || (plb_addr & (size - 1)) != 0) { 209 226 printk(KERN_WARNING "%s: Resource out of range\n",
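After the crop, the range check in ppc4xx_pci.c verifies that the window is a power-of-two size of at least 4K, aligned to its own size, and contained within a single 4G region. A self-contained model of the accepted/rejected cases:

```c
#include <stdbool.h>

static bool is_power_of_2(unsigned long long n)
{
        return n != 0 && (n & (n - 1)) == 0;
}

/* Mirrors the kernel test: crop the hard-wired top bits, then require
 * the window to fit below 4G, be a power-of-two size >= 4K, and be
 * size-aligned. */
static bool plb_window_ok(unsigned long long plb_addr,
                          unsigned long long size)
{
        plb_addr &= 0xffffffffull;      /* crop hard-wired top bits */
        return (plb_addr + size) <= 0xffffffffull &&
               is_power_of_2(size) &&
               size >= 0x1000 &&
               (plb_addr & (size - 1)) == 0;
}
```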
+3 -50
arch/sh/boards/board-ap325rxa.c
··· 22 22 #include <linux/gpio.h> 23 23 #include <linux/spi/spi.h> 24 24 #include <linux/spi/spi_gpio.h> 25 - #include <media/ov772x.h> 26 25 #include <media/soc_camera_platform.h> 27 26 #include <media/sh_mobile_ceu.h> 28 27 #include <video/sh_mobile_lcdc.h> ··· 223 224 } 224 225 225 226 #ifdef CONFIG_I2C 226 - /* support for the old ncm03j camera */ 227 227 static unsigned char camera_ncm03j_magic[] = 228 228 { 229 229 0x87, 0x00, 0x88, 0x08, 0x89, 0x01, 0x8A, 0xE8, ··· 242 244 0x5F, 0x68, 0x60, 0x87, 0x61, 0xA3, 0x62, 0xBC, 243 245 0x63, 0xD4, 0x64, 0xEA, 0xD6, 0x0F, 244 246 }; 245 - 246 - static int camera_probe(void) 247 - { 248 - struct i2c_adapter *a = i2c_get_adapter(0); 249 - struct i2c_msg msg; 250 - int ret; 251 - 252 - camera_power(1); 253 - msg.addr = 0x6e; 254 - msg.buf = camera_ncm03j_magic; 255 - msg.len = 2; 256 - msg.flags = 0; 257 - ret = i2c_transfer(a, &msg, 1); 258 - camera_power(0); 259 - 260 - return ret; 261 - } 262 247 263 248 static int camera_set_capture(struct soc_camera_platform_info *info, 264 249 int enable) ··· 294 313 .platform_data = &camera_info, 295 314 }, 296 315 }; 297 - 298 - static int __init camera_setup(void) 299 - { 300 - if (camera_probe() > 0) 301 - platform_device_register(&camera_device); 302 - 303 - return 0; 304 - } 305 - late_initcall(camera_setup); 306 - 307 316 #endif /* CONFIG_I2C */ 308 - 309 - static int ov7725_power(struct device *dev, int mode) 310 - { 311 - camera_power(0); 312 - if (mode) 313 - camera_power(1); 314 - 315 - return 0; 316 - } 317 - 318 - static struct ov772x_camera_info ov7725_info = { 319 - .buswidth = SOCAM_DATAWIDTH_8, 320 - .flags = OV772X_FLAG_VFLIP | OV772X_FLAG_HFLIP, 321 - .link = { 322 - .power = ov7725_power, 323 - }, 324 - }; 325 317 326 318 static struct sh_mobile_ceu_info sh_mobile_ceu_info = { 327 319 .flags = SOCAM_PCLK_SAMPLE_RISING | SOCAM_HSYNC_ACTIVE_HIGH | ··· 346 392 &ap325rxa_nor_flash_device, 347 393 &lcdc_device, 348 394 &ceu_device, 395 + #ifdef CONFIG_I2C 396 + 
&camera_device, 397 + #endif 349 398 &nand_flash_device, 350 399 &sdcard_cn3_device, 351 400 }; ··· 356 399 static struct i2c_board_info __initdata ap325rxa_i2c_devices[] = { 357 400 { 358 401 I2C_BOARD_INFO("pcf8563", 0x51), 359 - }, 360 - { 361 - I2C_BOARD_INFO("ov772x", 0x21), 362 - .platform_data = &ov7725_info, 363 402 }, 364 403 }; 365 404
+2 -2
arch/sh/kernel/cpu/sh2a/clock-sh7201.c
··· 18 18 #include <asm/freq.h> 19 19 #include <asm/io.h> 20 20 21 - const static int pll1rate[]={1,2,3,4,6,8}; 22 - const static int pfc_divisors[]={1,2,3,4,6,8,12}; 21 + static const int pll1rate[]={1,2,3,4,6,8}; 22 + static const int pfc_divisors[]={1,2,3,4,6,8,12}; 23 23 #define ifc_divisors pfc_divisors 24 24 25 25 #if (CONFIG_SH_CLK_MD == 0)
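The sh7201 clock change is purely about specifier order: `const static` and `static const` declare identical objects, but C99 marks a storage-class specifier anywhere but first in a declaration as obsolescent, and newer GCCs warn about the old order (via `-Wold-style-declaration`, enabled by `-Wextra`).

```c
/* Storage-class specifier first: same object as `const static int
 * pfc_divisors[]`, but in the order C99 recommends and without the
 * GCC -Wold-style-declaration warning. */
static const int pfc_divisors[] = {1, 2, 3, 4, 6, 8, 12};
```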
+5
arch/sparc/include/asm/compat.h
··· 240 240 unsigned int __unused2; 241 241 }; 242 242 243 + static inline int is_compat_task(void) 244 + { 245 + return test_thread_flag(TIF_32BIT); 246 + } 247 + 243 248 #endif /* _ASM_SPARC64_COMPAT_H */
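The `is_compat_task()` helpers added for powerpc and sparc are cheap inline predicates over a per-thread flag word, letting generic code (here, the seccomp headers that drop their `TIF_32BIT` plumbing) ask "is this a 32-bit task?" without arch-specific ifdefs. A minimal userspace model, with illustrative names and an assumed bit position:

```c
#include <stdbool.h>

#define TIF_32BIT 5                     /* assumed bit position */

static unsigned long thread_flags;      /* stand-in for thread_info */

static inline bool test_thread_flag(int flag)
{
        return (thread_flags >> flag) & 1;
}

static inline bool is_compat_task(void)
{
        return test_thread_flag(TIF_32BIT);
}
```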
-6
arch/sparc/include/asm/seccomp.h
··· 1 1 #ifndef _ASM_SECCOMP_H 2 2 3 - #include <linux/thread_info.h> /* already defines TIF_32BIT */ 4 - 5 - #ifndef TIF_32BIT 6 - #error "unexpected TIF_32BIT on sparc64" 7 - #endif 8 - 9 3 #include <linux/unistd.h> 10 4 11 5 #define __NR_seccomp_read __NR_read
+1
arch/sparc/kernel/chmc.c
··· 306 306 buf[1] = '?'; 307 307 buf[2] = '?'; 308 308 buf[3] = '\0'; 309 + return 0; 309 310 } 310 311 p = dp->controller; 311 312 prop = &p->layout;
+4 -1
arch/x86/Kconfig
··· 40 40 select HAVE_GENERIC_DMA_COHERENT if X86_32 41 41 select HAVE_EFFICIENT_UNALIGNED_ACCESS 42 42 select USER_STACKTRACE_SUPPORT 43 + select HAVE_KERNEL_GZIP 44 + select HAVE_KERNEL_BZIP2 45 + select HAVE_KERNEL_LZMA 43 46 44 47 config ARCH_DEFCONFIG 45 48 string ··· 1825 1822 remapping devices. 1826 1823 1827 1824 config DMAR_DEFAULT_ON 1828 - def_bool n 1825 + def_bool y 1829 1826 prompt "Enable DMA Remapping Devices by default" 1830 1827 depends on DMAR 1831 1828 help
+19 -2
arch/x86/boot/compressed/Makefile
··· 4 4 # create a compressed vmlinux image from the original vmlinux 5 5 # 6 6 7 - targets := vmlinux vmlinux.bin vmlinux.bin.gz head_$(BITS).o misc.o piggy.o 7 + targets := vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 vmlinux.bin.lzma head_$(BITS).o misc.o piggy.o 8 8 9 9 KBUILD_CFLAGS := -m$(BITS) -D__KERNEL__ $(LINUX_INCLUDE) -O2 10 10 KBUILD_CFLAGS += -fno-strict-aliasing -fPIC ··· 47 47 ifdef CONFIG_RELOCATABLE 48 48 $(obj)/vmlinux.bin.gz: $(obj)/vmlinux.bin.all FORCE 49 49 $(call if_changed,gzip) 50 + $(obj)/vmlinux.bin.bz2: $(obj)/vmlinux.bin.all FORCE 51 + $(call if_changed,bzip2) 52 + $(obj)/vmlinux.bin.lzma: $(obj)/vmlinux.bin.all FORCE 53 + $(call if_changed,lzma) 50 54 else 51 55 $(obj)/vmlinux.bin.gz: $(obj)/vmlinux.bin FORCE 52 56 $(call if_changed,gzip) 57 + $(obj)/vmlinux.bin.bz2: $(obj)/vmlinux.bin FORCE 58 + $(call if_changed,bzip2) 59 + $(obj)/vmlinux.bin.lzma: $(obj)/vmlinux.bin FORCE 60 + $(call if_changed,lzma) 53 61 endif 54 62 LDFLAGS_piggy.o := -r --format binary --oformat elf32-i386 -T 55 63 56 64 else 65 + 57 66 $(obj)/vmlinux.bin.gz: $(obj)/vmlinux.bin FORCE 58 67 $(call if_changed,gzip) 68 + $(obj)/vmlinux.bin.bz2: $(obj)/vmlinux.bin FORCE 69 + $(call if_changed,bzip2) 70 + $(obj)/vmlinux.bin.lzma: $(obj)/vmlinux.bin FORCE 71 + $(call if_changed,lzma) 59 72 60 73 LDFLAGS_piggy.o := -r --format binary --oformat elf64-x86-64 -T 61 74 endif 62 75 63 - $(obj)/piggy.o: $(obj)/vmlinux.scr $(obj)/vmlinux.bin.gz FORCE 76 + suffix_$(CONFIG_KERNEL_GZIP) = gz 77 + suffix_$(CONFIG_KERNEL_BZIP2) = bz2 78 + suffix_$(CONFIG_KERNEL_LZMA) = lzma 79 + 80 + $(obj)/piggy.o: $(obj)/vmlinux.scr $(obj)/vmlinux.bin.$(suffix_y) FORCE 64 81 $(call if_changed,ld)
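The `piggy.o` rule picks its compressed input through a common Kconfig/Makefile idiom, sketched below with assumed values: each `CONFIG_KERNEL_*` symbol expands to either `y` or nothing, so exactly one of the three assignments defines `suffix_y`, and `$(suffix_y)` resolves to the chosen extension.

```make
# Sketch, assuming CONFIG_KERNEL_LZMA=y and the other two unset: the
# first two lines then assign to the harmless variable "suffix_",
# and only the third assigns to suffix_y.
suffix_$(CONFIG_KERNEL_GZIP)  = gz
suffix_$(CONFIG_KERNEL_BZIP2) = bz2
suffix_$(CONFIG_KERNEL_LZMA)  = lzma

# $(suffix_y) now expands to "lzma", so the piggy.o prerequisite
# becomes $(obj)/vmlinux.bin.lzma.
```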
+14 -104
arch/x86/boot/compressed/misc.c
··· 116 116 /* 117 117 * gzip declarations 118 118 */ 119 - 120 - #define OF(args) args 121 119 #define STATIC static 122 120 123 121 #undef memset 124 122 #undef memcpy 125 123 #define memzero(s, n) memset((s), 0, (n)) 126 124 127 - typedef unsigned char uch; 128 - typedef unsigned short ush; 129 - typedef unsigned long ulg; 130 125 131 - /* 132 - * Window size must be at least 32k, and a power of two. 133 - * We don't actually have a window just a huge output buffer, 134 - * so we report a 2G window size, as that should always be 135 - * larger than our output buffer: 136 - */ 137 - #define WSIZE 0x80000000 138 - 139 - /* Input buffer: */ 140 - static unsigned char *inbuf; 141 - 142 - /* Sliding window buffer (and final output buffer): */ 143 - static unsigned char *window; 144 - 145 - /* Valid bytes in inbuf: */ 146 - static unsigned insize; 147 - 148 - /* Index of next byte to be processed in inbuf: */ 149 - static unsigned inptr; 150 - 151 - /* Bytes in output buffer: */ 152 - static unsigned outcnt; 153 - 154 - /* gzip flag byte */ 155 - #define ASCII_FLAG 0x01 /* bit 0 set: file probably ASCII text */ 156 - #define CONTINUATION 0x02 /* bit 1 set: continuation of multi-part gz file */ 157 - #define EXTRA_FIELD 0x04 /* bit 2 set: extra field present */ 158 - #define ORIG_NAM 0x08 /* bit 3 set: original file name present */ 159 - #define COMMENT 0x10 /* bit 4 set: file comment present */ 160 - #define ENCRYPTED 0x20 /* bit 5 set: file is encrypted */ 161 - #define RESERVED 0xC0 /* bit 6, 7: reserved */ 162 - 163 - #define get_byte() (inptr < insize ? 
inbuf[inptr++] : fill_inbuf()) 164 - 165 - /* Diagnostic functions */ 166 - #ifdef DEBUG 167 - # define Assert(cond, msg) do { if (!(cond)) error(msg); } while (0) 168 - # define Trace(x) do { fprintf x; } while (0) 169 - # define Tracev(x) do { if (verbose) fprintf x ; } while (0) 170 - # define Tracevv(x) do { if (verbose > 1) fprintf x ; } while (0) 171 - # define Tracec(c, x) do { if (verbose && (c)) fprintf x ; } while (0) 172 - # define Tracecv(c, x) do { if (verbose > 1 && (c)) fprintf x ; } while (0) 173 - #else 174 - # define Assert(cond, msg) 175 - # define Trace(x) 176 - # define Tracev(x) 177 - # define Tracevv(x) 178 - # define Tracec(c, x) 179 - # define Tracecv(c, x) 180 - #endif 181 - 182 - static int fill_inbuf(void); 183 - static void flush_window(void); 184 126 static void error(char *m); 185 127 186 128 /* ··· 131 189 static struct boot_params *real_mode; /* Pointer to real-mode data */ 132 190 static int quiet; 133 191 134 - extern unsigned char input_data[]; 135 - extern int input_len; 136 - 137 - static long bytes_out; 138 - 139 192 static void *memset(void *s, int c, unsigned n); 140 - static void *memcpy(void *dest, const void *src, unsigned n); 193 + void *memcpy(void *dest, const void *src, unsigned n); 141 194 142 195 static void __putstr(int, const char *); 143 196 #define putstr(__x) __putstr(0, __x) ··· 150 213 static int vidport; 151 214 static int lines, cols; 152 215 153 - #include "../../../../lib/inflate.c" 216 + #ifdef CONFIG_KERNEL_GZIP 217 + #include "../../../../lib/decompress_inflate.c" 218 + #endif 219 + 220 + #ifdef CONFIG_KERNEL_BZIP2 221 + #include "../../../../lib/decompress_bunzip2.c" 222 + #endif 223 + 224 + #ifdef CONFIG_KERNEL_LZMA 225 + #include "../../../../lib/decompress_unlzma.c" 226 + #endif 154 227 155 228 static void scroll(void) 156 229 { ··· 229 282 return s; 230 283 } 231 284 232 - static void *memcpy(void *dest, const void *src, unsigned n) 285 + void *memcpy(void *dest, const void *src, unsigned n) 233 
286 { 234 287 int i; 235 288 const char *s = src; ··· 240 293 return dest; 241 294 } 242 295 243 - /* =========================================================================== 244 - * Fill the input buffer. This is called only when the buffer is empty 245 - * and at least one byte is really needed. 246 - */ 247 - static int fill_inbuf(void) 248 - { 249 - error("ran out of input data"); 250 - return 0; 251 - } 252 - 253 - /* =========================================================================== 254 - * Write the output window window[0..outcnt-1] and update crc and bytes_out. 255 - * (Used for the decompressed data only.) 256 - */ 257 - static void flush_window(void) 258 - { 259 - /* With my window equal to my output buffer 260 - * I only need to compute the crc here. 261 - */ 262 - unsigned long c = crc; /* temporary variable */ 263 - unsigned n; 264 - unsigned char *in, ch; 265 - 266 - in = window; 267 - for (n = 0; n < outcnt; n++) { 268 - ch = *in++; 269 - c = crc_32_tab[((int)c ^ ch) & 0xff] ^ (c >> 8); 270 - } 271 - crc = c; 272 - bytes_out += (unsigned long)outcnt; 273 - outcnt = 0; 274 - } 275 296 276 297 static void error(char *x) 277 298 { ··· 322 407 lines = real_mode->screen_info.orig_video_lines; 323 408 cols = real_mode->screen_info.orig_video_cols; 324 409 325 - window = output; /* Output buffer (Normally at 1M) */ 326 410 free_mem_ptr = heap; /* Heap */ 327 411 free_mem_end_ptr = heap + BOOT_HEAP_SIZE; 328 - inbuf = input_data; /* Input buffer */ 329 - insize = input_len; 330 - inptr = 0; 331 412 332 413 #ifdef CONFIG_X86_64 333 414 if ((unsigned long)output & (__KERNEL_ALIGN - 1)) ··· 341 430 #endif 342 431 #endif 343 432 344 - makecrc(); 345 433 if (!quiet) 346 434 putstr("\nDecompressing Linux... "); 347 - gunzip(); 435 + decompress(input_data, input_len, NULL, NULL, output, NULL, error); 348 436 parse_elf(output); 349 437 if (!quiet) 350 438 putstr("done.\nBooting the kernel.\n");
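With the hand-rolled gzip plumbing gone, the boot code funnels every format through one entry point shaped like the `lib/decompress_*.c` decompressors: input buffer, length, optional fill/flush callbacks (NULL for the all-in-memory boot case), output buffer, optional position pointer, and an error callback. The identity "decompressor" below only demonstrates that calling convention; the signature is a paraphrase for illustration, not a verbatim copy of the kernel prototype.

```c
#include <string.h>

/* Identity codec with the common decompressor shape: copies input to
 * output and reports how much input it consumed via *posp. */
static int identity_decompress(unsigned char *inbuf, int len,
                               int (*fill)(void *, unsigned int),
                               int (*flush)(void *, unsigned int),
                               unsigned char *output, int *posp,
                               void (*error)(char *x))
{
        if (!inbuf || !output) {
                if (error)
                        error("missing buffer");
                return -1;
        }
        memcpy(output, inbuf, len);
        if (posp)
                *posp = len;
        return 0;
}
```

Because the boot heap sizes differ per algorithm (bzip2 in particular needs megabytes of working memory), this change pairs with the `BOOT_HEAP_SIZE` rework in boot.h below.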
+15 -1
arch/x86/include/asm/boot.h
··· 10 10 #define EXTENDED_VGA 0xfffe /* 80x50 mode */ 11 11 #define ASK_VGA 0xfffd /* ask for it at bootup */ 12 12 13 + #ifdef __KERNEL__ 14 + 13 15 /* Physical address where kernel should be loaded. */ 14 16 #define LOAD_PHYSICAL_ADDR ((CONFIG_PHYSICAL_START \ 15 17 + (CONFIG_PHYSICAL_ALIGN - 1)) \ 16 18 & ~(CONFIG_PHYSICAL_ALIGN - 1)) 17 19 20 + #ifdef CONFIG_KERNEL_BZIP2 21 + #define BOOT_HEAP_SIZE 0x400000 22 + #else /* !CONFIG_KERNEL_BZIP2 */ 23 + 18 24 #ifdef CONFIG_X86_64 19 25 #define BOOT_HEAP_SIZE 0x7000 20 - #define BOOT_STACK_SIZE 0x4000 21 26 #else 22 27 #define BOOT_HEAP_SIZE 0x4000 28 + #endif 29 + 30 + #endif /* !CONFIG_KERNEL_BZIP2 */ 31 + 32 + #ifdef CONFIG_X86_64 33 + #define BOOT_STACK_SIZE 0x4000 34 + #else 23 35 #define BOOT_STACK_SIZE 0x1000 24 36 #endif 37 + 38 + #endif /* __KERNEL__ */ 25 39 26 40 #endif /* _ASM_X86_BOOT_H */
+147 -2
arch/x86/include/asm/fixmap.h
··· 1 + /* 2 + * fixmap.h: compile-time virtual memory allocation 3 + * 4 + * This file is subject to the terms and conditions of the GNU General Public 5 + * License. See the file "COPYING" in the main directory of this archive 6 + * for more details. 7 + * 8 + * Copyright (C) 1998 Ingo Molnar 9 + * 10 + * Support of BIGMEM added by Gerhard Wichert, Siemens AG, July 1999 11 + * x86_32 and x86_64 integration by Gustavo F. Padovan, February 2009 12 + */ 13 + 1 14 #ifndef _ASM_X86_FIXMAP_H 2 15 #define _ASM_X86_FIXMAP_H 3 16 17 + #ifndef __ASSEMBLY__ 18 + #include <linux/kernel.h> 19 + #include <asm/acpi.h> 20 + #include <asm/apicdef.h> 21 + #include <asm/page.h> 4 22 #ifdef CONFIG_X86_32 5 - # include "fixmap_32.h" 23 + #include <linux/threads.h> 24 + #include <asm/kmap_types.h> 6 25 #else 7 - # include "fixmap_64.h" 26 + #include <asm/vsyscall.h> 27 + #ifdef CONFIG_EFI 28 + #include <asm/efi.h> 8 29 #endif 30 + #endif 31 + 32 + /* 33 + * We can't declare FIXADDR_TOP as variable for x86_64 because vsyscall 34 + * uses fixmaps that relies on FIXADDR_TOP for proper address calculation. 35 + * Because of this, FIXADDR_TOP x86 integration was left as later work. 36 + */ 37 + #ifdef CONFIG_X86_32 38 + /* used by vmalloc.c, vsyscall.lds.S. 39 + * 40 + * Leave one empty page between vmalloc'ed areas and 41 + * the start of the fixmap. 42 + */ 43 + extern unsigned long __FIXADDR_TOP; 44 + #define FIXADDR_TOP ((unsigned long)__FIXADDR_TOP) 45 + 46 + #define FIXADDR_USER_START __fix_to_virt(FIX_VDSO) 47 + #define FIXADDR_USER_END __fix_to_virt(FIX_VDSO - 1) 48 + #else 49 + #define FIXADDR_TOP (VSYSCALL_END-PAGE_SIZE) 50 + 51 + /* Only covers 32bit vsyscalls currently. Need another set for 64bit. */ 52 + #define FIXADDR_USER_START ((unsigned long)VSYSCALL32_VSYSCALL) 53 + #define FIXADDR_USER_END (FIXADDR_USER_START + PAGE_SIZE) 54 + #endif 55 + 56 + 57 + /* 58 + * Here we define all the compile-time 'special' virtual 59 + * addresses. 
The point is to have a constant address at 60 + * compile time, but to set the physical address only 61 + * in the boot process. 62 + * for x86_32: We allocate these special addresses 63 + * from the end of virtual memory (0xfffff000) backwards. 64 + * Also this lets us do fail-safe vmalloc(), we 65 + * can guarantee that these special addresses and 66 + * vmalloc()-ed addresses never overlap. 67 + * 68 + * These 'compile-time allocated' memory buffers are 69 + * fixed-size 4k pages (or larger if used with an increment 70 + * higher than 1). Use set_fixmap(idx,phys) to associate 71 + * physical memory with fixmap indices. 72 + * 73 + * TLB entries of such buffers will not be flushed across 74 + * task switches. 75 + */ 76 + enum fixed_addresses { 77 + #ifdef CONFIG_X86_32 78 + FIX_HOLE, 79 + FIX_VDSO, 80 + #else 81 + VSYSCALL_LAST_PAGE, 82 + VSYSCALL_FIRST_PAGE = VSYSCALL_LAST_PAGE 83 + + ((VSYSCALL_END-VSYSCALL_START) >> PAGE_SHIFT) - 1, 84 + VSYSCALL_HPET, 85 + #endif 86 + FIX_DBGP_BASE, 87 + FIX_EARLYCON_MEM_BASE, 88 + #ifdef CONFIG_X86_LOCAL_APIC 89 + FIX_APIC_BASE, /* local (CPU) APIC) -- required for SMP or not */ 90 + #endif 91 + #ifdef CONFIG_X86_IO_APIC 92 + FIX_IO_APIC_BASE_0, 93 + FIX_IO_APIC_BASE_END = FIX_IO_APIC_BASE_0 + MAX_IO_APICS - 1, 94 + #endif 95 + #ifdef CONFIG_X86_64 96 + #ifdef CONFIG_EFI 97 + FIX_EFI_IO_MAP_LAST_PAGE, 98 + FIX_EFI_IO_MAP_FIRST_PAGE = FIX_EFI_IO_MAP_LAST_PAGE 99 + + MAX_EFI_IO_PAGES - 1, 100 + #endif 101 + #endif 102 + #ifdef CONFIG_X86_VISWS_APIC 103 + FIX_CO_CPU, /* Cobalt timer */ 104 + FIX_CO_APIC, /* Cobalt APIC Redirection Table */ 105 + FIX_LI_PCIA, /* Lithium PCI Bridge A */ 106 + FIX_LI_PCIB, /* Lithium PCI Bridge B */ 107 + #endif 108 + #ifdef CONFIG_X86_F00F_BUG 109 + FIX_F00F_IDT, /* Virtual mapping for IDT */ 110 + #endif 111 + #ifdef CONFIG_X86_CYCLONE_TIMER 112 + FIX_CYCLONE_TIMER, /*cyclone timer register*/ 113 + #endif 114 + #ifdef CONFIG_X86_32 115 + FIX_KMAP_BEGIN, /* reserved pte's for temporary kernel 
mappings */ 116 + FIX_KMAP_END = FIX_KMAP_BEGIN+(KM_TYPE_NR*NR_CPUS)-1, 117 + #ifdef CONFIG_PCI_MMCONFIG 118 + FIX_PCIE_MCFG, 119 + #endif 120 + #endif 121 + #ifdef CONFIG_PARAVIRT 122 + FIX_PARAVIRT_BOOTMAP, 123 + #endif 124 + __end_of_permanent_fixed_addresses, 125 + #ifdef CONFIG_PROVIDE_OHCI1394_DMA_INIT 126 + FIX_OHCI1394_BASE, 127 + #endif 128 + /* 129 + * 256 temporary boot-time mappings, used by early_ioremap(), 130 + * before ioremap() is functional. 131 + * 132 + * We round it up to the next 256 pages boundary so that we 133 + * can have a single pgd entry and a single pte table: 134 + */ 135 + #define NR_FIX_BTMAPS 64 136 + #define FIX_BTMAPS_SLOTS 4 137 + FIX_BTMAP_END = __end_of_permanent_fixed_addresses + 256 - 138 + (__end_of_permanent_fixed_addresses & 255), 139 + FIX_BTMAP_BEGIN = FIX_BTMAP_END + NR_FIX_BTMAPS*FIX_BTMAPS_SLOTS - 1, 140 + #ifdef CONFIG_X86_32 141 + FIX_WP_TEST, 142 + #endif 143 + __end_of_fixed_addresses 144 + }; 145 + 146 + 147 + extern void reserve_top_address(unsigned long reserve); 148 + 149 + #define FIXADDR_SIZE (__end_of_permanent_fixed_addresses << PAGE_SHIFT) 150 + #define FIXADDR_BOOT_SIZE (__end_of_fixed_addresses << PAGE_SHIFT) 151 + #define FIXADDR_START (FIXADDR_TOP - FIXADDR_SIZE) 152 + #define FIXADDR_BOOT_START (FIXADDR_TOP - FIXADDR_BOOT_SIZE) 9 153 10 154 extern int fixmaps_set; 11 155 ··· 213 69 BUG_ON(vaddr >= FIXADDR_TOP || vaddr < FIXADDR_START); 214 70 return __virt_to_fix(vaddr); 215 71 } 72 + #endif /* !__ASSEMBLY__ */ 216 73 #endif /* _ASM_X86_FIXMAP_H */
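The unified header keeps the same index-to-address math the split 32/64-bit headers used: fixed virtual pages are handed out downward from `FIXADDR_TOP`, one page-sized slot per enum index. A compact model with illustrative constants (not a real x86 memory map):

```c
#define PAGE_SHIFT      12
#define PAGE_SIZE       (1UL << PAGE_SHIFT)
#define PAGE_MASK       (~(PAGE_SIZE - 1))
#define FIXADDR_TOP     0xfffff000UL    /* assumed top-of-fixmap */

/* Index -> virtual address: count page slots down from the top. */
#define __fix_to_virt(x)  (FIXADDR_TOP - ((unsigned long)(x) << PAGE_SHIFT))

/* Virtual address -> index: round down to a page, then invert. */
#define __virt_to_fix(x)  ((FIXADDR_TOP - ((x) & PAGE_MASK)) >> PAGE_SHIFT)
```

Allocating downward from the top is what lets the fixmap coexist with vmalloc: the special addresses and vmalloc'ed addresses can never overlap, as the header comment notes.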
-115
arch/x86/include/asm/fixmap_32.h
··· 1 - /* 2 - * fixmap.h: compile-time virtual memory allocation 3 - * 4 - * This file is subject to the terms and conditions of the GNU General Public 5 - * License. See the file "COPYING" in the main directory of this archive 6 - * for more details. 7 - * 8 - * Copyright (C) 1998 Ingo Molnar 9 - * 10 - * Support of BIGMEM added by Gerhard Wichert, Siemens AG, July 1999 11 - */ 12 - 13 - #ifndef _ASM_X86_FIXMAP_32_H 14 - #define _ASM_X86_FIXMAP_32_H 15 - 16 - 17 - /* used by vmalloc.c, vsyscall.lds.S. 18 - * 19 - * Leave one empty page between vmalloc'ed areas and 20 - * the start of the fixmap. 21 - */ 22 - extern unsigned long __FIXADDR_TOP; 23 - #define FIXADDR_USER_START __fix_to_virt(FIX_VDSO) 24 - #define FIXADDR_USER_END __fix_to_virt(FIX_VDSO - 1) 25 - 26 - #ifndef __ASSEMBLY__ 27 - #include <linux/kernel.h> 28 - #include <asm/acpi.h> 29 - #include <asm/apicdef.h> 30 - #include <asm/page.h> 31 - #include <linux/threads.h> 32 - #include <asm/kmap_types.h> 33 - 34 - /* 35 - * Here we define all the compile-time 'special' virtual 36 - * addresses. The point is to have a constant address at 37 - * compile time, but to set the physical address only 38 - * in the boot process. We allocate these special addresses 39 - * from the end of virtual memory (0xfffff000) backwards. 40 - * Also this lets us do fail-safe vmalloc(), we 41 - * can guarantee that these special addresses and 42 - * vmalloc()-ed addresses never overlap. 43 - * 44 - * these 'compile-time allocated' memory buffers are 45 - * fixed-size 4k pages. (or larger if used with an increment 46 - * highger than 1) use fixmap_set(idx,phys) to associate 47 - * physical memory with fixmap indices. 48 - * 49 - * TLB entries of such buffers will not be flushed across 50 - * task switches. 
51 - */ 52 - enum fixed_addresses { 53 - FIX_HOLE, 54 - FIX_VDSO, 55 - FIX_DBGP_BASE, 56 - FIX_EARLYCON_MEM_BASE, 57 - #ifdef CONFIG_X86_LOCAL_APIC 58 - FIX_APIC_BASE, /* local (CPU) APIC) -- required for SMP or not */ 59 - #endif 60 - #ifdef CONFIG_X86_IO_APIC 61 - FIX_IO_APIC_BASE_0, 62 - FIX_IO_APIC_BASE_END = FIX_IO_APIC_BASE_0 + MAX_IO_APICS-1, 63 - #endif 64 - #ifdef CONFIG_X86_VISWS_APIC 65 - FIX_CO_CPU, /* Cobalt timer */ 66 - FIX_CO_APIC, /* Cobalt APIC Redirection Table */ 67 - FIX_LI_PCIA, /* Lithium PCI Bridge A */ 68 - FIX_LI_PCIB, /* Lithium PCI Bridge B */ 69 - #endif 70 - #ifdef CONFIG_X86_F00F_BUG 71 - FIX_F00F_IDT, /* Virtual mapping for IDT */ 72 - #endif 73 - #ifdef CONFIG_X86_CYCLONE_TIMER 74 - FIX_CYCLONE_TIMER, /*cyclone timer register*/ 75 - #endif 76 - FIX_KMAP_BEGIN, /* reserved pte's for temporary kernel mappings */ 77 - FIX_KMAP_END = FIX_KMAP_BEGIN+(KM_TYPE_NR*NR_CPUS)-1, 78 - #ifdef CONFIG_PCI_MMCONFIG 79 - FIX_PCIE_MCFG, 80 - #endif 81 - #ifdef CONFIG_PARAVIRT 82 - FIX_PARAVIRT_BOOTMAP, 83 - #endif 84 - __end_of_permanent_fixed_addresses, 85 - /* 86 - * 256 temporary boot-time mappings, used by early_ioremap(), 87 - * before ioremap() is functional. 
88 - * 89 - * We round it up to the next 256 pages boundary so that we 90 - * can have a single pgd entry and a single pte table: 91 - */ 92 - #define NR_FIX_BTMAPS 64 93 - #define FIX_BTMAPS_SLOTS 4 94 - FIX_BTMAP_END = __end_of_permanent_fixed_addresses + 256 - 95 - (__end_of_permanent_fixed_addresses & 255), 96 - FIX_BTMAP_BEGIN = FIX_BTMAP_END + NR_FIX_BTMAPS*FIX_BTMAPS_SLOTS - 1, 97 - FIX_WP_TEST, 98 - #ifdef CONFIG_PROVIDE_OHCI1394_DMA_INIT 99 - FIX_OHCI1394_BASE, 100 - #endif 101 - __end_of_fixed_addresses 102 - }; 103 - 104 - extern void reserve_top_address(unsigned long reserve); 105 - 106 - 107 - #define FIXADDR_TOP ((unsigned long)__FIXADDR_TOP) 108 - 109 - #define __FIXADDR_SIZE (__end_of_permanent_fixed_addresses << PAGE_SHIFT) 110 - #define __FIXADDR_BOOT_SIZE (__end_of_fixed_addresses << PAGE_SHIFT) 111 - #define FIXADDR_START (FIXADDR_TOP - __FIXADDR_SIZE) 112 - #define FIXADDR_BOOT_START (FIXADDR_TOP - __FIXADDR_BOOT_SIZE) 113 - 114 - #endif /* !__ASSEMBLY__ */ 115 - #endif /* _ASM_X86_FIXMAP_32_H */
-79
arch/x86/include/asm/fixmap_64.h
··· 1 - /* 2 - * fixmap.h: compile-time virtual memory allocation 3 - * 4 - * This file is subject to the terms and conditions of the GNU General Public 5 - * License. See the file "COPYING" in the main directory of this archive 6 - * for more details. 7 - * 8 - * Copyright (C) 1998 Ingo Molnar 9 - */ 10 - 11 - #ifndef _ASM_X86_FIXMAP_64_H 12 - #define _ASM_X86_FIXMAP_64_H 13 - 14 - #include <linux/kernel.h> 15 - #include <asm/acpi.h> 16 - #include <asm/apicdef.h> 17 - #include <asm/page.h> 18 - #include <asm/vsyscall.h> 19 - #include <asm/efi.h> 20 - 21 - /* 22 - * Here we define all the compile-time 'special' virtual 23 - * addresses. The point is to have a constant address at 24 - * compile time, but to set the physical address only 25 - * in the boot process. 26 - * 27 - * These 'compile-time allocated' memory buffers are 28 - * fixed-size 4k pages (or larger if used with an increment 29 - * higher than 1). Use set_fixmap(idx,phys) to associate 30 - * physical memory with fixmap indices. 31 - * 32 - * TLB entries of such buffers will not be flushed across 33 - * task switches. 34 - */ 35 - 36 - enum fixed_addresses { 37 - VSYSCALL_LAST_PAGE, 38 - VSYSCALL_FIRST_PAGE = VSYSCALL_LAST_PAGE 39 - + ((VSYSCALL_END-VSYSCALL_START) >> PAGE_SHIFT) - 1, 40 - VSYSCALL_HPET, 41 - FIX_DBGP_BASE, 42 - FIX_EARLYCON_MEM_BASE, 43 - FIX_APIC_BASE, /* local (CPU) APIC) -- required for SMP or not */ 44 - FIX_IO_APIC_BASE_0, 45 - FIX_IO_APIC_BASE_END = FIX_IO_APIC_BASE_0 + MAX_IO_APICS - 1, 46 - FIX_EFI_IO_MAP_LAST_PAGE, 47 - FIX_EFI_IO_MAP_FIRST_PAGE = FIX_EFI_IO_MAP_LAST_PAGE 48 - + MAX_EFI_IO_PAGES - 1, 49 - #ifdef CONFIG_PARAVIRT 50 - FIX_PARAVIRT_BOOTMAP, 51 - #endif 52 - __end_of_permanent_fixed_addresses, 53 - #ifdef CONFIG_PROVIDE_OHCI1394_DMA_INIT 54 - FIX_OHCI1394_BASE, 55 - #endif 56 - /* 57 - * 256 temporary boot-time mappings, used by early_ioremap(), 58 - * before ioremap() is functional. 
59 - * 60 - * We round it up to the next 256 pages boundary so that we 61 - * can have a single pgd entry and a single pte table: 62 - */ 63 - #define NR_FIX_BTMAPS 64 64 - #define FIX_BTMAPS_SLOTS 4 65 - FIX_BTMAP_END = __end_of_permanent_fixed_addresses + 256 - 66 - (__end_of_permanent_fixed_addresses & 255), 67 - FIX_BTMAP_BEGIN = FIX_BTMAP_END + NR_FIX_BTMAPS*FIX_BTMAPS_SLOTS - 1, 68 - __end_of_fixed_addresses 69 - }; 70 - 71 - #define FIXADDR_TOP (VSYSCALL_END-PAGE_SIZE) 72 - #define FIXADDR_SIZE (__end_of_fixed_addresses << PAGE_SHIFT) 73 - #define FIXADDR_START (FIXADDR_TOP - FIXADDR_SIZE) 74 - 75 - /* Only covers 32bit vsyscalls currently. Need another set for 64bit. */ 76 - #define FIXADDR_USER_START ((unsigned long)VSYSCALL32_VSYSCALL) 77 - #define FIXADDR_USER_END (FIXADDR_USER_START + PAGE_SIZE) 78 - 79 - #endif /* _ASM_X86_FIXMAP_64_H */
+1 -4
arch/x86/include/asm/iomap.h
··· 24 24 #include <asm/tlbflush.h> 25 25 26 26 int 27 - reserve_io_memtype_wc(u64 base, unsigned long size, pgprot_t *prot); 28 - 29 - void 30 - free_io_memtype(u64 base, unsigned long size); 27 + is_io_mapping_possible(resource_size_t base, unsigned long size); 31 28 32 29 void * 33 30 iomap_atomic_prot_pfn(unsigned long pfn, enum km_type type, pgprot_t prot);
+5 -1
arch/x86/include/asm/numa_32.h
··· 4 4 extern int pxm_to_nid(int pxm); 5 5 extern void numa_remove_cpu(int cpu); 6 6 7 - #ifdef CONFIG_NUMA 7 + #ifdef CONFIG_HIGHMEM 8 8 extern void set_highmem_pages_init(void); 9 + #else 10 + static inline void set_highmem_pages_init(void) 11 + { 12 + } 9 13 #endif 10 14 11 15 #endif /* _ASM_X86_NUMA_32_H */
-6
arch/x86/include/asm/processor.h
··· 248 248 #define IO_BITMAP_LONGS (IO_BITMAP_BYTES/sizeof(long)) 249 249 #define IO_BITMAP_OFFSET offsetof(struct tss_struct, io_bitmap) 250 250 #define INVALID_IO_BITMAP_OFFSET 0x8000 251 - #define INVALID_IO_BITMAP_OFFSET_LAZY 0x9000 252 251 253 252 struct tss_struct { 254 253 /* ··· 262 263 * be within the limit. 263 264 */ 264 265 unsigned long io_bitmap[IO_BITMAP_LONGS + 1]; 265 - /* 266 - * Cache the current maximum and the last task that used the bitmap: 267 - */ 268 - unsigned long io_bitmap_max; 269 - struct thread_struct *io_bitmap_owner; 270 266 271 267 /* 272 268 * .. and then another 0x100 bytes for the emergency kernel stack:
-6
arch/x86/include/asm/seccomp_32.h
··· 1 1 #ifndef _ASM_X86_SECCOMP_32_H 2 2 #define _ASM_X86_SECCOMP_32_H 3 3 4 - #include <linux/thread_info.h> 5 - 6 - #ifdef TIF_32BIT 7 - #error "unexpected TIF_32BIT on i386" 8 - #endif 9 - 10 4 #include <linux/unistd.h> 11 5 12 6 #define __NR_seccomp_read __NR_read
-8
arch/x86/include/asm/seccomp_64.h
··· 1 1 #ifndef _ASM_X86_SECCOMP_64_H 2 2 #define _ASM_X86_SECCOMP_64_H 3 3 4 - #include <linux/thread_info.h> 5 - 6 - #ifdef TIF_32BIT 7 - #error "unexpected TIF_32BIT on x86_64" 8 - #else 9 - #define TIF_32BIT TIF_IA32 10 - #endif 11 - 12 4 #include <linux/unistd.h> 13 5 #include <asm/ia32_unistd.h> 14 6
+3
arch/x86/include/asm/system.h
··· 20 20 struct task_struct; /* one of the stranger aspects of C forward declarations */ 21 21 struct task_struct *__switch_to(struct task_struct *prev, 22 22 struct task_struct *next); 23 + struct tss_struct; 24 + void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p, 25 + struct tss_struct *tss); 23 26 24 27 #ifdef CONFIG_X86_32 25 28
+2 -2
arch/x86/include/asm/uaccess_32.h
··· 157 157 } 158 158 159 159 static __always_inline unsigned long __copy_from_user_nocache(void *to, 160 - const void __user *from, unsigned long n, unsigned long total) 160 + const void __user *from, unsigned long n) 161 161 { 162 162 might_fault(); 163 163 if (__builtin_constant_p(n)) { ··· 180 180 181 181 static __always_inline unsigned long 182 182 __copy_from_user_inatomic_nocache(void *to, const void __user *from, 183 - unsigned long n, unsigned long total) 183 + unsigned long n) 184 184 { 185 185 return __copy_from_user_ll_nocache_nozero(to, from, n); 186 186 }
+7 -18
arch/x86/include/asm/uaccess_64.h
··· 188 188 extern long __copy_user_nocache(void *dst, const void __user *src, 189 189 unsigned size, int zerorest); 190 190 191 - static inline int __copy_from_user_nocache(void *dst, const void __user *src, 192 - unsigned size, unsigned long total) 191 + static inline int 192 + __copy_from_user_nocache(void *dst, const void __user *src, unsigned size) 193 193 { 194 194 might_sleep(); 195 - /* 196 - * In practice this limit means that large file write()s 197 - * which get chunked to 4K copies get handled via 198 - * non-temporal stores here. Smaller writes get handled 199 - * via regular __copy_from_user(): 200 - */ 201 - if (likely(total >= PAGE_SIZE)) 202 - return __copy_user_nocache(dst, src, size, 1); 203 - else 204 - return __copy_from_user(dst, src, size); 195 + return __copy_user_nocache(dst, src, size, 1); 205 196 } 206 197 207 - static inline int __copy_from_user_inatomic_nocache(void *dst, 208 - const void __user *src, unsigned size, unsigned total) 198 + static inline int 199 + __copy_from_user_inatomic_nocache(void *dst, const void __user *src, 200 + unsigned size) 209 201 { 210 - if (likely(total >= PAGE_SIZE)) 211 - return __copy_user_nocache(dst, src, size, 0); 212 - else 213 - return __copy_from_user_inatomic(dst, src, size); 202 + return __copy_user_nocache(dst, src, size, 0); 214 203 } 215 204 216 205 unsigned long
+2 -18
arch/x86/kernel/cpu/proc.c
··· 7 7 /* 8 8 * Get CPU information for use by the procfs. 9 9 */ 10 - #ifdef CONFIG_X86_32 11 10 static void show_cpuinfo_core(struct seq_file *m, struct cpuinfo_x86 *c, 12 11 unsigned int cpu) 13 12 { 14 - #ifdef CONFIG_X86_HT 13 + #ifdef CONFIG_SMP 15 14 if (c->x86_max_cores * smp_num_siblings > 1) { 16 15 seq_printf(m, "physical id\t: %d\n", c->phys_proc_id); 17 16 seq_printf(m, "siblings\t: %d\n", ··· 23 24 #endif 24 25 } 25 26 27 + #ifdef CONFIG_X86_32 26 28 static void show_cpuinfo_misc(struct seq_file *m, struct cpuinfo_x86 *c) 27 29 { 28 30 /* ··· 50 50 c->wp_works_ok ? "yes" : "no"); 51 51 } 52 52 #else 53 - static void show_cpuinfo_core(struct seq_file *m, struct cpuinfo_x86 *c, 54 - unsigned int cpu) 55 - { 56 - #ifdef CONFIG_SMP 57 - if (c->x86_max_cores * smp_num_siblings > 1) { 58 - seq_printf(m, "physical id\t: %d\n", c->phys_proc_id); 59 - seq_printf(m, "siblings\t: %d\n", 60 - cpus_weight(per_cpu(cpu_core_map, cpu))); 61 - seq_printf(m, "core id\t\t: %d\n", c->cpu_core_id); 62 - seq_printf(m, "cpu cores\t: %d\n", c->booted_cores); 63 - seq_printf(m, "apicid\t\t: %d\n", c->apicid); 64 - seq_printf(m, "initial apicid\t: %d\n", c->initial_apicid); 65 - } 66 - #endif 67 - } 68 - 69 53 static void show_cpuinfo_misc(struct seq_file *m, struct cpuinfo_x86 *c) 70 54 { 71 55 seq_printf(m,
-11
arch/x86/kernel/ioport.c
··· 85 85 86 86 t->io_bitmap_max = bytes; 87 87 88 - #ifdef CONFIG_X86_32 89 - /* 90 - * Sets the lazy trigger so that the next I/O operation will 91 - * reload the correct bitmap. 92 - * Reset the owner so that a process switch will not set 93 - * tss->io_bitmap_base to IO_BITMAP_OFFSET. 94 - */ 95 - tss->x86_tss.io_bitmap_base = INVALID_IO_BITMAP_OFFSET_LAZY; 96 - tss->io_bitmap_owner = NULL; 97 - #else 98 88 /* Update the TSS: */ 99 89 memcpy(tss->io_bitmap, t->io_bitmap_ptr, bytes_updated); 100 - #endif 101 90 102 91 put_cpu(); 103 92
+190 -1
arch/x86/kernel/process.c
··· 1 1 #include <linux/errno.h> 2 2 #include <linux/kernel.h> 3 3 #include <linux/mm.h> 4 - #include <asm/idle.h> 5 4 #include <linux/smp.h> 5 + #include <linux/prctl.h> 6 6 #include <linux/slab.h> 7 7 #include <linux/sched.h> 8 8 #include <linux/module.h> ··· 11 11 #include <linux/ftrace.h> 12 12 #include <asm/system.h> 13 13 #include <asm/apic.h> 14 + #include <asm/idle.h> 15 + #include <asm/uaccess.h> 16 + #include <asm/i387.h> 14 17 15 18 unsigned long idle_halt; 16 19 EXPORT_SYMBOL(idle_halt); ··· 57 54 __alignof__(union thread_xstate), 58 55 SLAB_PANIC, NULL); 59 56 } 57 + 58 + /* 59 + * Free current thread data structures etc.. 60 + */ 61 + void exit_thread(void) 62 + { 63 + struct task_struct *me = current; 64 + struct thread_struct *t = &me->thread; 65 + 66 + if (me->thread.io_bitmap_ptr) { 67 + struct tss_struct *tss = &per_cpu(init_tss, get_cpu()); 68 + 69 + kfree(t->io_bitmap_ptr); 70 + t->io_bitmap_ptr = NULL; 71 + clear_thread_flag(TIF_IO_BITMAP); 72 + /* 73 + * Careful, clear this in the TSS too: 74 + */ 75 + memset(tss->io_bitmap, 0xff, t->io_bitmap_max); 76 + t->io_bitmap_max = 0; 77 + put_cpu(); 78 + } 79 + 80 + ds_exit_thread(current); 81 + } 82 + 83 + void flush_thread(void) 84 + { 85 + struct task_struct *tsk = current; 86 + 87 + #ifdef CONFIG_X86_64 88 + if (test_tsk_thread_flag(tsk, TIF_ABI_PENDING)) { 89 + clear_tsk_thread_flag(tsk, TIF_ABI_PENDING); 90 + if (test_tsk_thread_flag(tsk, TIF_IA32)) { 91 + clear_tsk_thread_flag(tsk, TIF_IA32); 92 + } else { 93 + set_tsk_thread_flag(tsk, TIF_IA32); 94 + current_thread_info()->status |= TS_COMPAT; 95 + } 96 + } 97 + #endif 98 + 99 + clear_tsk_thread_flag(tsk, TIF_DEBUG); 100 + 101 + tsk->thread.debugreg0 = 0; 102 + tsk->thread.debugreg1 = 0; 103 + tsk->thread.debugreg2 = 0; 104 + tsk->thread.debugreg3 = 0; 105 + tsk->thread.debugreg6 = 0; 106 + tsk->thread.debugreg7 = 0; 107 + memset(tsk->thread.tls_array, 0, sizeof(tsk->thread.tls_array)); 108 + /* 109 + * Forget coprocessor state.. 
110 + */ 111 + tsk->fpu_counter = 0; 112 + clear_fpu(tsk); 113 + clear_used_math(); 114 + } 115 + 116 + static void hard_disable_TSC(void) 117 + { 118 + write_cr4(read_cr4() | X86_CR4_TSD); 119 + } 120 + 121 + void disable_TSC(void) 122 + { 123 + preempt_disable(); 124 + if (!test_and_set_thread_flag(TIF_NOTSC)) 125 + /* 126 + * Must flip the CPU state synchronously with 127 + * TIF_NOTSC in the current running context. 128 + */ 129 + hard_disable_TSC(); 130 + preempt_enable(); 131 + } 132 + 133 + static void hard_enable_TSC(void) 134 + { 135 + write_cr4(read_cr4() & ~X86_CR4_TSD); 136 + } 137 + 138 + static void enable_TSC(void) 139 + { 140 + preempt_disable(); 141 + if (test_and_clear_thread_flag(TIF_NOTSC)) 142 + /* 143 + * Must flip the CPU state synchronously with 144 + * TIF_NOTSC in the current running context. 145 + */ 146 + hard_enable_TSC(); 147 + preempt_enable(); 148 + } 149 + 150 + int get_tsc_mode(unsigned long adr) 151 + { 152 + unsigned int val; 153 + 154 + if (test_thread_flag(TIF_NOTSC)) 155 + val = PR_TSC_SIGSEGV; 156 + else 157 + val = PR_TSC_ENABLE; 158 + 159 + return put_user(val, (unsigned int __user *)adr); 160 + } 161 + 162 + int set_tsc_mode(unsigned int val) 163 + { 164 + if (val == PR_TSC_SIGSEGV) 165 + disable_TSC(); 166 + else if (val == PR_TSC_ENABLE) 167 + enable_TSC(); 168 + else 169 + return -EINVAL; 170 + 171 + return 0; 172 + } 173 + 174 + void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p, 175 + struct tss_struct *tss) 176 + { 177 + struct thread_struct *prev, *next; 178 + 179 + prev = &prev_p->thread; 180 + next = &next_p->thread; 181 + 182 + if (test_tsk_thread_flag(next_p, TIF_DS_AREA_MSR) || 183 + test_tsk_thread_flag(prev_p, TIF_DS_AREA_MSR)) 184 + ds_switch_to(prev_p, next_p); 185 + else if (next->debugctlmsr != prev->debugctlmsr) 186 + update_debugctlmsr(next->debugctlmsr); 187 + 188 + if (test_tsk_thread_flag(next_p, TIF_DEBUG)) { 189 + set_debugreg(next->debugreg0, 0); 190 + 
set_debugreg(next->debugreg1, 1); 191 + set_debugreg(next->debugreg2, 2); 192 + set_debugreg(next->debugreg3, 3); 193 + /* no 4 and 5 */ 194 + set_debugreg(next->debugreg6, 6); 195 + set_debugreg(next->debugreg7, 7); 196 + } 197 + 198 + if (test_tsk_thread_flag(prev_p, TIF_NOTSC) ^ 199 + test_tsk_thread_flag(next_p, TIF_NOTSC)) { 200 + /* prev and next are different */ 201 + if (test_tsk_thread_flag(next_p, TIF_NOTSC)) 202 + hard_disable_TSC(); 203 + else 204 + hard_enable_TSC(); 205 + } 206 + 207 + if (test_tsk_thread_flag(next_p, TIF_IO_BITMAP)) { 208 + /* 209 + * Copy the relevant range of the IO bitmap. 210 + * Normally this is 128 bytes or less: 211 + */ 212 + memcpy(tss->io_bitmap, next->io_bitmap_ptr, 213 + max(prev->io_bitmap_max, next->io_bitmap_max)); 214 + } else if (test_tsk_thread_flag(prev_p, TIF_IO_BITMAP)) { 215 + /* 216 + * Clear any possible leftover bits: 217 + */ 218 + memset(tss->io_bitmap, 0xff, prev->io_bitmap_max); 219 + } 220 + } 221 + 222 + int sys_fork(struct pt_regs *regs) 223 + { 224 + return do_fork(SIGCHLD, regs->sp, regs, 0, NULL, NULL); 225 + } 226 + 227 + /* 228 + * This is trivial, and on the face of it looks like it 229 + * could equally well be done in user mode. 230 + * 231 + * Not so, for quite unobvious reasons - register pressure. 232 + * In user mode vfork() cannot have a stack frame, and if 233 + * done by calling the "clone()" system call directly, you 234 + * do not have enough call-clobbered registers to hold all 235 + * the information you need. 236 + */ 237 + int sys_vfork(struct pt_regs *regs) 238 + { 239 + return do_fork(CLONE_VFORK | CLONE_VM | SIGCHLD, regs->sp, regs, 0, 240 + NULL, NULL); 241 + } 242 + 60 243 61 244 /* 62 245 * Idle related variables and functions
-190
arch/x86/kernel/process_32.c
··· 230 230 } 231 231 EXPORT_SYMBOL(kernel_thread); 232 232 233 - /* 234 - * Free current thread data structures etc.. 235 - */ 236 - void exit_thread(void) 237 - { 238 - /* The process may have allocated an io port bitmap... nuke it. */ 239 - if (unlikely(test_thread_flag(TIF_IO_BITMAP))) { 240 - struct task_struct *tsk = current; 241 - struct thread_struct *t = &tsk->thread; 242 - int cpu = get_cpu(); 243 - struct tss_struct *tss = &per_cpu(init_tss, cpu); 244 - 245 - kfree(t->io_bitmap_ptr); 246 - t->io_bitmap_ptr = NULL; 247 - clear_thread_flag(TIF_IO_BITMAP); 248 - /* 249 - * Careful, clear this in the TSS too: 250 - */ 251 - memset(tss->io_bitmap, 0xff, tss->io_bitmap_max); 252 - t->io_bitmap_max = 0; 253 - tss->io_bitmap_owner = NULL; 254 - tss->io_bitmap_max = 0; 255 - tss->x86_tss.io_bitmap_base = INVALID_IO_BITMAP_OFFSET; 256 - put_cpu(); 257 - } 258 - 259 - ds_exit_thread(current); 260 - } 261 - 262 - void flush_thread(void) 263 - { 264 - struct task_struct *tsk = current; 265 - 266 - tsk->thread.debugreg0 = 0; 267 - tsk->thread.debugreg1 = 0; 268 - tsk->thread.debugreg2 = 0; 269 - tsk->thread.debugreg3 = 0; 270 - tsk->thread.debugreg6 = 0; 271 - tsk->thread.debugreg7 = 0; 272 - memset(tsk->thread.tls_array, 0, sizeof(tsk->thread.tls_array)); 273 - clear_tsk_thread_flag(tsk, TIF_DEBUG); 274 - /* 275 - * Forget coprocessor state.. 276 - */ 277 - tsk->fpu_counter = 0; 278 - clear_fpu(tsk); 279 - clear_used_math(); 280 - } 281 - 282 233 void release_thread(struct task_struct *dead_task) 283 234 { 284 235 BUG_ON(dead_task->mm); ··· 317 366 } 318 367 EXPORT_SYMBOL_GPL(start_thread); 319 368 320 - static void hard_disable_TSC(void) 321 - { 322 - write_cr4(read_cr4() | X86_CR4_TSD); 323 - } 324 - 325 - void disable_TSC(void) 326 - { 327 - preempt_disable(); 328 - if (!test_and_set_thread_flag(TIF_NOTSC)) 329 - /* 330 - * Must flip the CPU state synchronously with 331 - * TIF_NOTSC in the current running context. 
332 - */ 333 - hard_disable_TSC(); 334 - preempt_enable(); 335 - } 336 - 337 - static void hard_enable_TSC(void) 338 - { 339 - write_cr4(read_cr4() & ~X86_CR4_TSD); 340 - } 341 - 342 - static void enable_TSC(void) 343 - { 344 - preempt_disable(); 345 - if (test_and_clear_thread_flag(TIF_NOTSC)) 346 - /* 347 - * Must flip the CPU state synchronously with 348 - * TIF_NOTSC in the current running context. 349 - */ 350 - hard_enable_TSC(); 351 - preempt_enable(); 352 - } 353 - 354 - int get_tsc_mode(unsigned long adr) 355 - { 356 - unsigned int val; 357 - 358 - if (test_thread_flag(TIF_NOTSC)) 359 - val = PR_TSC_SIGSEGV; 360 - else 361 - val = PR_TSC_ENABLE; 362 - 363 - return put_user(val, (unsigned int __user *)adr); 364 - } 365 - 366 - int set_tsc_mode(unsigned int val) 367 - { 368 - if (val == PR_TSC_SIGSEGV) 369 - disable_TSC(); 370 - else if (val == PR_TSC_ENABLE) 371 - enable_TSC(); 372 - else 373 - return -EINVAL; 374 - 375 - return 0; 376 - } 377 - 378 - static noinline void 379 - __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p, 380 - struct tss_struct *tss) 381 - { 382 - struct thread_struct *prev, *next; 383 - 384 - prev = &prev_p->thread; 385 - next = &next_p->thread; 386 - 387 - if (test_tsk_thread_flag(next_p, TIF_DS_AREA_MSR) || 388 - test_tsk_thread_flag(prev_p, TIF_DS_AREA_MSR)) 389 - ds_switch_to(prev_p, next_p); 390 - else if (next->debugctlmsr != prev->debugctlmsr) 391 - update_debugctlmsr(next->debugctlmsr); 392 - 393 - if (test_tsk_thread_flag(next_p, TIF_DEBUG)) { 394 - set_debugreg(next->debugreg0, 0); 395 - set_debugreg(next->debugreg1, 1); 396 - set_debugreg(next->debugreg2, 2); 397 - set_debugreg(next->debugreg3, 3); 398 - /* no 4 and 5 */ 399 - set_debugreg(next->debugreg6, 6); 400 - set_debugreg(next->debugreg7, 7); 401 - } 402 - 403 - if (test_tsk_thread_flag(prev_p, TIF_NOTSC) ^ 404 - test_tsk_thread_flag(next_p, TIF_NOTSC)) { 405 - /* prev and next are different */ 406 - if (test_tsk_thread_flag(next_p, 
TIF_NOTSC)) 407 - hard_disable_TSC(); 408 - else 409 - hard_enable_TSC(); 410 - } 411 - 412 - if (!test_tsk_thread_flag(next_p, TIF_IO_BITMAP)) { 413 - /* 414 - * Disable the bitmap via an invalid offset. We still cache 415 - * the previous bitmap owner and the IO bitmap contents: 416 - */ 417 - tss->x86_tss.io_bitmap_base = INVALID_IO_BITMAP_OFFSET; 418 - return; 419 - } 420 - 421 - if (likely(next == tss->io_bitmap_owner)) { 422 - /* 423 - * Previous owner of the bitmap (hence the bitmap content) 424 - * matches the next task, we dont have to do anything but 425 - * to set a valid offset in the TSS: 426 - */ 427 - tss->x86_tss.io_bitmap_base = IO_BITMAP_OFFSET; 428 - return; 429 - } 430 - /* 431 - * Lazy TSS's I/O bitmap copy. We set an invalid offset here 432 - * and we let the task to get a GPF in case an I/O instruction 433 - * is performed. The handler of the GPF will verify that the 434 - * faulting task has a valid I/O bitmap and, it true, does the 435 - * real copy and restart the instruction. This will save us 436 - * redundant copies when the currently switched task does not 437 - * perform any I/O during its timeslice. 438 - */ 439 - tss->x86_tss.io_bitmap_base = INVALID_IO_BITMAP_OFFSET_LAZY; 440 - } 441 369 442 370 /* 443 371 * switch_to(x,yn) should switch tasks from x to y. ··· 430 600 return prev_p; 431 601 } 432 602 433 - int sys_fork(struct pt_regs *regs) 434 - { 435 - return do_fork(SIGCHLD, regs->sp, regs, 0, NULL, NULL); 436 - } 437 - 438 603 int sys_clone(struct pt_regs *regs) 439 604 { 440 605 unsigned long clone_flags; ··· 443 618 if (!newsp) 444 619 newsp = regs->sp; 445 620 return do_fork(clone_flags, newsp, regs, 0, parent_tidptr, child_tidptr); 446 - } 447 - 448 - /* 449 - * This is trivial, and on the face of it looks like it 450 - * could equally well be done in user mode. 451 - * 452 - * Not so, for quite unobvious reasons - register pressure. 
453 - * In user mode vfork() cannot have a stack frame, and if 454 - * done by calling the "clone()" system call directly, you 455 - * do not have enough call-clobbered registers to hold all 456 - * the information you need. 457 - */ 458 - int sys_vfork(struct pt_regs *regs) 459 - { 460 - return do_fork(CLONE_VFORK | CLONE_VM | SIGCHLD, regs->sp, regs, 0, NULL, NULL); 461 621 } 462 622 463 623 /*
-188
arch/x86/kernel/process_64.c
··· 237 237 show_trace(NULL, regs, (void *)(regs + 1), regs->bp); 238 238 } 239 239 240 - /* 241 - * Free current thread data structures etc.. 242 - */ 243 - void exit_thread(void) 244 - { 245 - struct task_struct *me = current; 246 - struct thread_struct *t = &me->thread; 247 - 248 - if (me->thread.io_bitmap_ptr) { 249 - struct tss_struct *tss = &per_cpu(init_tss, get_cpu()); 250 - 251 - kfree(t->io_bitmap_ptr); 252 - t->io_bitmap_ptr = NULL; 253 - clear_thread_flag(TIF_IO_BITMAP); 254 - /* 255 - * Careful, clear this in the TSS too: 256 - */ 257 - memset(tss->io_bitmap, 0xff, t->io_bitmap_max); 258 - t->io_bitmap_max = 0; 259 - put_cpu(); 260 - } 261 - 262 - ds_exit_thread(current); 263 - } 264 - 265 - void flush_thread(void) 266 - { 267 - struct task_struct *tsk = current; 268 - 269 - if (test_tsk_thread_flag(tsk, TIF_ABI_PENDING)) { 270 - clear_tsk_thread_flag(tsk, TIF_ABI_PENDING); 271 - if (test_tsk_thread_flag(tsk, TIF_IA32)) { 272 - clear_tsk_thread_flag(tsk, TIF_IA32); 273 - } else { 274 - set_tsk_thread_flag(tsk, TIF_IA32); 275 - current_thread_info()->status |= TS_COMPAT; 276 - } 277 - } 278 - clear_tsk_thread_flag(tsk, TIF_DEBUG); 279 - 280 - tsk->thread.debugreg0 = 0; 281 - tsk->thread.debugreg1 = 0; 282 - tsk->thread.debugreg2 = 0; 283 - tsk->thread.debugreg3 = 0; 284 - tsk->thread.debugreg6 = 0; 285 - tsk->thread.debugreg7 = 0; 286 - memset(tsk->thread.tls_array, 0, sizeof(tsk->thread.tls_array)); 287 - /* 288 - * Forget coprocessor state.. 
289 - */ 290 - tsk->fpu_counter = 0; 291 - clear_fpu(tsk); 292 - clear_used_math(); 293 - } 294 - 295 240 void release_thread(struct task_struct *dead_task) 296 241 { 297 242 if (dead_task->mm) { ··· 369 424 free_thread_xstate(current); 370 425 } 371 426 EXPORT_SYMBOL_GPL(start_thread); 372 - 373 - static void hard_disable_TSC(void) 374 - { 375 - write_cr4(read_cr4() | X86_CR4_TSD); 376 - } 377 - 378 - void disable_TSC(void) 379 - { 380 - preempt_disable(); 381 - if (!test_and_set_thread_flag(TIF_NOTSC)) 382 - /* 383 - * Must flip the CPU state synchronously with 384 - * TIF_NOTSC in the current running context. 385 - */ 386 - hard_disable_TSC(); 387 - preempt_enable(); 388 - } 389 - 390 - static void hard_enable_TSC(void) 391 - { 392 - write_cr4(read_cr4() & ~X86_CR4_TSD); 393 - } 394 - 395 - static void enable_TSC(void) 396 - { 397 - preempt_disable(); 398 - if (test_and_clear_thread_flag(TIF_NOTSC)) 399 - /* 400 - * Must flip the CPU state synchronously with 401 - * TIF_NOTSC in the current running context. 
402 - */ 403 - hard_enable_TSC(); 404 - preempt_enable(); 405 - } 406 - 407 - int get_tsc_mode(unsigned long adr) 408 - { 409 - unsigned int val; 410 - 411 - if (test_thread_flag(TIF_NOTSC)) 412 - val = PR_TSC_SIGSEGV; 413 - else 414 - val = PR_TSC_ENABLE; 415 - 416 - return put_user(val, (unsigned int __user *)adr); 417 - } 418 - 419 - int set_tsc_mode(unsigned int val) 420 - { 421 - if (val == PR_TSC_SIGSEGV) 422 - disable_TSC(); 423 - else if (val == PR_TSC_ENABLE) 424 - enable_TSC(); 425 - else 426 - return -EINVAL; 427 - 428 - return 0; 429 - } 430 - 431 - /* 432 - * This special macro can be used to load a debugging register 433 - */ 434 - #define loaddebug(thread, r) set_debugreg(thread->debugreg ## r, r) 435 - 436 - static inline void __switch_to_xtra(struct task_struct *prev_p, 437 - struct task_struct *next_p, 438 - struct tss_struct *tss) 439 - { 440 - struct thread_struct *prev, *next; 441 - 442 - prev = &prev_p->thread, 443 - next = &next_p->thread; 444 - 445 - if (test_tsk_thread_flag(next_p, TIF_DS_AREA_MSR) || 446 - test_tsk_thread_flag(prev_p, TIF_DS_AREA_MSR)) 447 - ds_switch_to(prev_p, next_p); 448 - else if (next->debugctlmsr != prev->debugctlmsr) 449 - update_debugctlmsr(next->debugctlmsr); 450 - 451 - if (test_tsk_thread_flag(next_p, TIF_DEBUG)) { 452 - loaddebug(next, 0); 453 - loaddebug(next, 1); 454 - loaddebug(next, 2); 455 - loaddebug(next, 3); 456 - /* no 4 and 5 */ 457 - loaddebug(next, 6); 458 - loaddebug(next, 7); 459 - } 460 - 461 - if (test_tsk_thread_flag(prev_p, TIF_NOTSC) ^ 462 - test_tsk_thread_flag(next_p, TIF_NOTSC)) { 463 - /* prev and next are different */ 464 - if (test_tsk_thread_flag(next_p, TIF_NOTSC)) 465 - hard_disable_TSC(); 466 - else 467 - hard_enable_TSC(); 468 - } 469 - 470 - if (test_tsk_thread_flag(next_p, TIF_IO_BITMAP)) { 471 - /* 472 - * Copy the relevant range of the IO bitmap. 
473 - * Normally this is 128 bytes or less: 474 - */ 475 - memcpy(tss->io_bitmap, next->io_bitmap_ptr, 476 - max(prev->io_bitmap_max, next->io_bitmap_max)); 477 - } else if (test_tsk_thread_flag(prev_p, TIF_IO_BITMAP)) { 478 - /* 479 - * Clear any possible leftover bits: 480 - */ 481 - memset(tss->io_bitmap, 0xff, prev->io_bitmap_max); 482 - } 483 - } 484 427 485 428 /* 486 429 * switch_to(x,y) should switch tasks from x to y. ··· 527 694 current->personality &= ~READ_IMPLIES_EXEC; 528 695 } 529 696 530 - asmlinkage long sys_fork(struct pt_regs *regs) 531 - { 532 - return do_fork(SIGCHLD, regs->sp, regs, 0, NULL, NULL); 533 - } 534 - 535 697 asmlinkage long 536 698 sys_clone(unsigned long clone_flags, unsigned long newsp, 537 699 void __user *parent_tid, void __user *child_tid, struct pt_regs *regs) ··· 534 706 if (!newsp) 535 707 newsp = regs->sp; 536 708 return do_fork(clone_flags, newsp, regs, 0, parent_tid, child_tid); 537 - } 538 - 539 - /* 540 - * This is trivial, and on the face of it looks like it 541 - * could equally well be done in user mode. 542 - * 543 - * Not so, for quite unobvious reasons - register pressure. 544 - * In user mode vfork() cannot have a stack frame, and if 545 - * done by calling the "clone()" system call directly, you 546 - * do not have enough call-clobbered registers to hold all 547 - * the information you need. 548 - */ 549 - asmlinkage long sys_vfork(struct pt_regs *regs) 550 - { 551 - return do_fork(CLONE_VFORK | CLONE_VM | SIGCHLD, regs->sp, regs, 0, 552 - NULL, NULL); 553 709 } 554 710 555 711 unsigned long get_wchan(struct task_struct *p)
+1 -1
arch/x86/kernel/ptrace.c
··· 1383 1383 #ifdef CONFIG_X86_32 1384 1384 # define IS_IA32 1 1385 1385 #elif defined CONFIG_IA32_EMULATION 1386 - # define IS_IA32 test_thread_flag(TIF_IA32) 1386 + # define IS_IA32 is_compat_task() 1387 1387 #else 1388 1388 # define IS_IA32 0 1389 1389 #endif
+66 -75
arch/x86/kernel/signal.c
··· 187 187 /* 188 188 * Set up a signal frame. 189 189 */ 190 + 191 + /* 192 + * Determine which stack to use.. 193 + */ 194 + static unsigned long align_sigframe(unsigned long sp) 195 + { 196 + #ifdef CONFIG_X86_32 197 + /* 198 + * Align the stack pointer according to the i386 ABI, 199 + * i.e. so that on function entry ((sp + 4) & 15) == 0. 200 + */ 201 + sp = ((sp + 4) & -16ul) - 4; 202 + #else /* !CONFIG_X86_32 */ 203 + sp = round_down(sp, 16) - 8; 204 + #endif 205 + return sp; 206 + } 207 + 208 + static inline void __user * 209 + get_sigframe(struct k_sigaction *ka, struct pt_regs *regs, size_t frame_size, 210 + void __user **fpstate) 211 + { 212 + /* Default to using normal stack */ 213 + unsigned long sp = regs->sp; 214 + 215 + #ifdef CONFIG_X86_64 216 + /* redzone */ 217 + sp -= 128; 218 + #endif /* CONFIG_X86_64 */ 219 + 220 + /* 221 + * If we are on the alternate signal stack and would overflow it, don't. 222 + * Return an always-bogus address instead so we will die with SIGSEGV. 223 + */ 224 + if (on_sig_stack(sp) && !likely(on_sig_stack(sp - frame_size))) 225 + return (void __user *) -1L; 226 + 227 + /* This is the X/Open sanctioned signal stack switching. */ 228 + if (ka->sa.sa_flags & SA_ONSTACK) { 229 + if (sas_ss_flags(sp) == 0) 230 + sp = current->sas_ss_sp + current->sas_ss_size; 231 + } else { 232 + #ifdef CONFIG_X86_32 233 + /* This is the legacy signal stack switching. */
234 + if ((regs->ss & 0xffff) != __USER_DS && 235 + !(ka->sa.sa_flags & SA_RESTORER) && 236 + ka->sa.sa_restorer) 237 + sp = (unsigned long) ka->sa.sa_restorer; 238 + #endif /* CONFIG_X86_32 */ 239 + } 240 + 241 + if (used_math()) { 242 + sp -= sig_xstate_size; 243 + #ifdef CONFIG_X86_64 244 + sp = round_down(sp, 64); 245 + #endif /* CONFIG_X86_64 */ 246 + *fpstate = (void __user *)sp; 247 + 248 + if (save_i387_xstate(*fpstate) < 0) 249 + return (void __user *)-1L; 250 + } 251 + 252 + return (void __user *)align_sigframe(sp - frame_size); 253 + } 254 + 190 255 #ifdef CONFIG_X86_32 191 256 static const struct { 192 257 u16 poplmovl; ··· 274 209 0x80cd, /* int $0x80 */ 275 210 0 276 211 }; 277 - 278 - /* 279 - * Determine which stack to use.. 280 - */ 281 - static inline void __user * 282 - get_sigframe(struct k_sigaction *ka, struct pt_regs *regs, size_t frame_size, 283 - void **fpstate) 284 - { 285 - unsigned long sp; 286 - 287 - /* Default to using normal stack */ 288 - sp = regs->sp; 289 - 290 - /* 291 - * If we are on the alternate signal stack and would overflow it, don't. 292 - * Return an always-bogus address instead so we will die with SIGSEGV. 293 - */ 294 - if (on_sig_stack(sp) && !likely(on_sig_stack(sp - frame_size))) 295 - return (void __user *) -1L; 296 - 297 - /* This is the X/Open sanctioned signal stack switching. */ 298 - if (ka->sa.sa_flags & SA_ONSTACK) { 299 - if (sas_ss_flags(sp) == 0) 300 - sp = current->sas_ss_sp + current->sas_ss_size; 301 - } else { 302 - /* This is the legacy signal stack switching. */
303 - if ((regs->ss & 0xffff) != __USER_DS && 304 - !(ka->sa.sa_flags & SA_RESTORER) && 305 - ka->sa.sa_restorer) 306 - sp = (unsigned long) ka->sa.sa_restorer; 307 - } 308 - 309 - if (used_math()) { 310 - sp = sp - sig_xstate_size; 311 - *fpstate = (struct _fpstate *) sp; 312 - if (save_i387_xstate(*fpstate) < 0) 313 - return (void __user *)-1L; 314 - } 315 - 316 - sp -= frame_size; 317 - /* 318 - * Align the stack pointer according to the i386 ABI, 319 - * i.e. so that on function entry ((sp + 4) & 15) == 0. 320 - */ 321 - sp = ((sp + 4) & -16ul) - 4; 322 - 323 - return (void __user *) sp; 324 - } 325 212 326 213 static int 327 214 __setup_frame(int sig, struct k_sigaction *ka, sigset_t *set, ··· 405 388 return 0; 406 389 } 407 390 #else /* !CONFIG_X86_32 */ 408 - /* 409 - * Determine which stack to use.. 410 - */ 411 - static void __user * 412 - get_stack(struct k_sigaction *ka, unsigned long sp, unsigned long size) 413 - { 414 - /* Default to using normal stack - redzone*/ 415 - sp -= 128; 416 - 417 - /* This is the X/Open sanctioned signal stack switching. */
418 - if (ka->sa.sa_flags & SA_ONSTACK) { 419 - if (sas_ss_flags(sp) == 0) 420 - sp = current->sas_ss_sp + current->sas_ss_size; 421 - } 422 - 423 - return (void __user *)round_down(sp - size, 64); 424 - } 425 - 426 391 static int __setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info, 427 392 sigset_t *set, struct pt_regs *regs) 428 393 { ··· 413 414 int err = 0; 414 415 struct task_struct *me = current; 415 416 416 - if (used_math()) { 417 - fp = get_stack(ka, regs->sp, sig_xstate_size); 418 - frame = (void __user *)round_down( 419 - (unsigned long)fp - sizeof(struct rt_sigframe), 16) - 8; 420 - 421 - if (save_i387_xstate(fp) < 0) 422 - return -EFAULT; 423 - } else 424 - frame = get_stack(ka, regs->sp, sizeof(struct rt_sigframe)) - 8; 417 + frame = get_sigframe(ka, regs, sizeof(struct rt_sigframe), &fp); 425 418 426 419 if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame))) 427 420 return -EFAULT;
-46
arch/x86/kernel/traps.c
··· 118 118 if (!user_mode_vm(regs)) 119 119 die(str, regs, err); 120 120 } 121 - 122 - /* 123 - * Perform the lazy TSS's I/O bitmap copy. If the TSS has an 124 - * invalid offset set (the LAZY one) and the faulting thread has 125 - * a valid I/O bitmap pointer, we copy the I/O bitmap in the TSS, 126 - * we set the offset field correctly and return 1. 127 - */ 128 - static int lazy_iobitmap_copy(void) 129 - { 130 - struct thread_struct *thread; 131 - struct tss_struct *tss; 132 - int cpu; 133 - 134 - cpu = get_cpu(); 135 - tss = &per_cpu(init_tss, cpu); 136 - thread = &current->thread; 137 - 138 - if (tss->x86_tss.io_bitmap_base == INVALID_IO_BITMAP_OFFSET_LAZY && 139 - thread->io_bitmap_ptr) { 140 - memcpy(tss->io_bitmap, thread->io_bitmap_ptr, 141 - thread->io_bitmap_max); 142 - /* 143 - * If the previously set map was extending to higher ports 144 - * than the current one, pad extra space with 0xff (no access). 145 - */ 146 - if (thread->io_bitmap_max < tss->io_bitmap_max) { 147 - memset((char *) tss->io_bitmap + 148 - thread->io_bitmap_max, 0xff, 149 - tss->io_bitmap_max - thread->io_bitmap_max); 150 - } 151 - tss->io_bitmap_max = thread->io_bitmap_max; 152 - tss->x86_tss.io_bitmap_base = IO_BITMAP_OFFSET; 153 - tss->io_bitmap_owner = thread; 154 - put_cpu(); 155 - 156 - return 1; 157 - } 158 - put_cpu(); 159 - 160 - return 0; 161 - } 162 121 #endif 163 122 164 123 static void __kprobes ··· 268 309 conditional_sti(regs); 269 310 270 311 #ifdef CONFIG_X86_32 271 - if (lazy_iobitmap_copy()) { 272 - /* restart the faulting instruction */ 273 - return; 274 - } 275 - 276 312 if (regs->flags & X86_VM_MASK) 277 313 goto gp_in_vm86; 278 314 #endif
+1 -1
arch/x86/mm/Makefile
··· 1 - obj-y := init_$(BITS).o fault.o ioremap.o extable.o pageattr.o mmap.o \ 1 + obj-y := init.o init_$(BITS).o fault.o ioremap.o extable.o pageattr.o mmap.o \ 2 2 pat.o pgtable.o gup.o 3 3 4 4 obj-$(CONFIG_SMP) += tlb.o
+34
arch/x86/mm/highmem_32.c
··· 1 1 #include <linux/highmem.h> 2 2 #include <linux/module.h> 3 + #include <linux/swap.h> /* for totalram_pages */ 3 4 4 5 void *kmap(struct page *page) 5 6 { ··· 157 156 EXPORT_SYMBOL(kunmap); 158 157 EXPORT_SYMBOL(kmap_atomic); 159 158 EXPORT_SYMBOL(kunmap_atomic); 159 + 160 + #ifdef CONFIG_NUMA 161 + void __init set_highmem_pages_init(void) 162 + { 163 + struct zone *zone; 164 + int nid; 165 + 166 + for_each_zone(zone) { 167 + unsigned long zone_start_pfn, zone_end_pfn; 168 + 169 + if (!is_highmem(zone)) 170 + continue; 171 + 172 + zone_start_pfn = zone->zone_start_pfn; 173 + zone_end_pfn = zone_start_pfn + zone->spanned_pages; 174 + 175 + nid = zone_to_nid(zone); 176 + printk(KERN_INFO "Initializing %s for node %d (%08lx:%08lx)\n", 177 + zone->name, nid, zone_start_pfn, zone_end_pfn); 178 + 179 + add_highpages_with_active_regions(nid, zone_start_pfn, 180 + zone_end_pfn); 181 + } 182 + totalram_pages += totalhigh_pages; 183 + } 184 + #else 185 + void __init set_highmem_pages_init(void) 186 + { 187 + add_highpages_with_active_regions(0, highstart_pfn, highend_pfn); 188 + 189 + totalram_pages += totalhigh_pages; 190 + } 191 + #endif /* CONFIG_NUMA */
+49
arch/x86/mm/init.c
··· 1 + #include <linux/swap.h> 2 + #include <asm/cacheflush.h> 3 + #include <asm/page.h> 4 + #include <asm/sections.h> 5 + #include <asm/system.h> 6 + 7 + void free_init_pages(char *what, unsigned long begin, unsigned long end) 8 + { 9 + unsigned long addr = begin; 10 + 11 + if (addr >= end) 12 + return; 13 + 14 + /* 15 + * If debugging page accesses then do not free this memory but 16 + * mark them not present - any buggy init-section access will 17 + * create a kernel page fault: 18 + */ 19 + #ifdef CONFIG_DEBUG_PAGEALLOC 20 + printk(KERN_INFO "debug: unmapping init memory %08lx..%08lx\n", 21 + begin, PAGE_ALIGN(end)); 22 + set_memory_np(begin, (end - begin) >> PAGE_SHIFT); 23 + #else 24 + /* 25 + * We just marked the kernel text read only above, now that 26 + * we are going to free part of that, we need to make that 27 + * writeable first. 28 + */ 29 + set_memory_rw(begin, (end - begin) >> PAGE_SHIFT); 30 + 31 + printk(KERN_INFO "Freeing %s: %luk freed\n", what, (end - begin) >> 10); 32 + 33 + for (; addr < end; addr += PAGE_SIZE) { 34 + ClearPageReserved(virt_to_page(addr)); 35 + init_page_count(virt_to_page(addr)); 36 + memset((void *)(addr & ~(PAGE_SIZE-1)), 37 + POISON_FREE_INITMEM, PAGE_SIZE); 38 + free_page(addr); 39 + totalram_pages++; 40 + } 41 + #endif 42 + } 43 + 44 + void free_initmem(void) 45 + { 46 + free_init_pages("unused kernel memory", 47 + (unsigned long)(&__init_begin), 48 + (unsigned long)(&__init_end)); 49 + }
+4 -57
arch/x86/mm/init_32.c
··· 50 50 #include <asm/setup.h> 51 51 #include <asm/cacheflush.h> 52 52 53 - unsigned int __VMALLOC_RESERVE = 128 << 20; 54 - 55 53 unsigned long max_low_pfn_mapped; 56 54 unsigned long max_pfn_mapped; 57 55 ··· 467 469 work_with_active_regions(nid, add_highpages_work_fn, &data); 468 470 } 469 471 470 - #ifndef CONFIG_NUMA 471 - static void __init set_highmem_pages_init(void) 472 - { 473 - add_highpages_with_active_regions(0, highstart_pfn, highend_pfn); 474 - 475 - totalram_pages += totalhigh_pages; 476 - } 477 - #endif /* !CONFIG_NUMA */ 478 - 479 472 #else 480 473 static inline void permanent_kmaps_init(pgd_t *pgd_base) 481 - { 482 - } 483 - static inline void set_highmem_pages_init(void) 484 474 { 485 475 } 486 476 #endif /* CONFIG_HIGHMEM */ ··· 833 847 unsigned long puds, pmds, ptes, tables, start; 834 848 835 849 puds = (end + PUD_SIZE - 1) >> PUD_SHIFT; 836 - tables = PAGE_ALIGN(puds * sizeof(pud_t)); 850 + tables = roundup(puds * sizeof(pud_t), PAGE_SIZE); 837 851 838 852 pmds = (end + PMD_SIZE - 1) >> PMD_SHIFT; 839 - tables += PAGE_ALIGN(pmds * sizeof(pmd_t)); 853 + tables += roundup(pmds * sizeof(pmd_t), PAGE_SIZE); 840 854 841 855 if (use_pse) { 842 856 unsigned long extra; ··· 847 861 } else 848 862 ptes = (end + PAGE_SIZE - 1) >> PAGE_SHIFT; 849 863 850 - tables += PAGE_ALIGN(ptes * sizeof(pte_t)); 864 + tables += roundup(ptes * sizeof(pte_t), PAGE_SIZE); 851 865 852 866 /* for fixmap */ 853 - tables += PAGE_ALIGN(__end_of_fixed_addresses * sizeof(pte_t)); 867 + tables += roundup(__end_of_fixed_addresses * sizeof(pte_t), PAGE_SIZE); 854 868 855 869 /* 856 870 * RED-PEN putting page tables only on node 0 could ··· 1199 1213 #endif 1200 1214 } 1201 1215 #endif 1202 - 1203 - void free_init_pages(char *what, unsigned long begin, unsigned long end) 1204 - { 1205 - #ifdef CONFIG_DEBUG_PAGEALLOC 1206 - /* 1207 - * If debugging page accesses then do not free this memory but 1208 - * mark them not present - any buggy init-section access will 1209 - * create a kernel page fault:
1210 - */ 1211 - printk(KERN_INFO "debug: unmapping init memory %08lx..%08lx\n", 1212 - begin, PAGE_ALIGN(end)); 1213 - set_memory_np(begin, (end - begin) >> PAGE_SHIFT); 1214 - #else 1215 - unsigned long addr; 1216 - 1217 - /* 1218 - * We just marked the kernel text read only above, now that 1219 - * we are going to free part of that, we need to make that 1220 - * writeable first. 1221 - */ 1222 - set_memory_rw(begin, (end - begin) >> PAGE_SHIFT); 1223 - 1224 - for (addr = begin; addr < end; addr += PAGE_SIZE) { 1225 - ClearPageReserved(virt_to_page(addr)); 1226 - init_page_count(virt_to_page(addr)); 1227 - memset((void *)addr, POISON_FREE_INITMEM, PAGE_SIZE); 1228 - free_page(addr); 1229 - totalram_pages++; 1230 - } 1231 - printk(KERN_INFO "Freeing %s: %luk freed\n", what, (end - begin) >> 10); 1232 - #endif 1233 - } 1234 - 1235 - void free_initmem(void) 1236 - { 1237 - free_init_pages("unused kernel memory", 1238 - (unsigned long)(&__init_begin), 1239 - (unsigned long)(&__init_end)); 1240 - } 1241 1216 1242 1217 #ifdef CONFIG_BLK_DEV_INITRD 1243 1218 void free_initrd_mem(unsigned long start, unsigned long end)
+2 -37
arch/x86/mm/init_64.c
··· 714 714 pos = start_pfn << PAGE_SHIFT; 715 715 end_pfn = ((pos + (PMD_SIZE - 1)) >> PMD_SHIFT) 716 716 << (PMD_SHIFT - PAGE_SHIFT); 717 + if (end_pfn > (end >> PAGE_SHIFT)) 718 + end_pfn = end >> PAGE_SHIFT; 717 719 if (start_pfn < end_pfn) { 718 720 nr_range = save_mr(mr, nr_range, start_pfn, end_pfn, 0); 719 721 pos = end_pfn << PAGE_SHIFT; ··· 945 943 reservedpages << (PAGE_SHIFT-10), 946 944 datasize >> 10, 947 945 initsize >> 10); 948 - } 949 - 950 - void free_init_pages(char *what, unsigned long begin, unsigned long end) 951 - { 952 - unsigned long addr = begin; 953 - 954 - if (addr >= end) 955 - return; 956 - 957 - /* 958 - * If debugging page accesses then do not free this memory but 959 - * mark them not present - any buggy init-section access will 960 - * create a kernel page fault: 961 - */ 962 - #ifdef CONFIG_DEBUG_PAGEALLOC 963 - printk(KERN_INFO "debug: unmapping init memory %08lx..%08lx\n", 964 - begin, PAGE_ALIGN(end)); 965 - set_memory_np(begin, (end - begin) >> PAGE_SHIFT); 966 - #else 967 - printk(KERN_INFO "Freeing %s: %luk freed\n", what, (end - begin) >> 10); 968 - 969 - for (; addr < end; addr += PAGE_SIZE) { 970 - ClearPageReserved(virt_to_page(addr)); 971 - init_page_count(virt_to_page(addr)); 972 - memset((void *)(addr & ~(PAGE_SIZE-1)), 973 - POISON_FREE_INITMEM, PAGE_SIZE); 974 - free_page(addr); 975 - totalram_pages++; 976 - } 977 - #endif 978 - } 979 - 980 - void free_initmem(void) 981 - { 982 - free_init_pages("unused kernel memory", 983 - (unsigned long)(&__init_begin), 984 - (unsigned long)(&__init_end)); 985 946 } 986 947 987 948 #ifdef CONFIG_DEBUG_RODATA
+4 -51
arch/x86/mm/iomap_32.c
··· 20 20 #include <asm/pat.h> 21 21 #include <linux/module.h> 22 22 23 - #ifdef CONFIG_X86_PAE 24 - static int 25 - is_io_mapping_possible(resource_size_t base, unsigned long size) 23 + int is_io_mapping_possible(resource_size_t base, unsigned long size) 26 24 { 27 - return 1; 28 - } 29 - #else 30 - static int 31 - is_io_mapping_possible(resource_size_t base, unsigned long size) 32 - { 25 + #ifndef CONFIG_X86_PAE 33 26 /* There is no way to map greater than 1 << 32 address without PAE */ 34 27 if (base + size > 0x100000000ULL) 35 28 return 0; 36 - 29 + #endif 37 30 return 1; 38 31 } 39 - #endif 40 - 41 - int 42 - reserve_io_memtype_wc(u64 base, unsigned long size, pgprot_t *prot) 43 - { 44 - unsigned long ret_flag; 45 - 46 - if (!is_io_mapping_possible(base, size)) 47 - goto out_err; 48 - 49 - if (!pat_enabled) { 50 - *prot = pgprot_noncached(PAGE_KERNEL); 51 - return 0; 52 - } 53 - 54 - if (reserve_memtype(base, base + size, _PAGE_CACHE_WC, &ret_flag)) 55 - goto out_err; 56 - 57 - if (ret_flag == _PAGE_CACHE_WB) 58 - goto out_free; 59 - 60 - if (kernel_map_sync_memtype(base, size, ret_flag)) 61 - goto out_free; 62 - 63 - *prot = __pgprot(__PAGE_KERNEL | ret_flag); 64 - return 0; 65 - 66 - out_free: 67 - free_memtype(base, base + size); 68 - out_err: 69 - return -EINVAL; 70 - } 71 - EXPORT_SYMBOL_GPL(reserve_io_memtype_wc); 72 - 73 - void 74 - free_io_memtype(u64 base, unsigned long size) 75 - { 76 - if (pat_enabled) 77 - free_memtype(base, base + size); 78 - } 79 - EXPORT_SYMBOL_GPL(free_io_memtype); 32 + EXPORT_SYMBOL_GPL(is_io_mapping_possible); 80 33 81 34 /* Map 'pfn' using fixed map 'type' and protections 'prot' 82 35 */
-26
arch/x86/mm/numa_32.c
··· 423 423 setup_bootmem_allocator(); 424 424 } 425 425 426 - void __init set_highmem_pages_init(void) 427 - { 428 - #ifdef CONFIG_HIGHMEM 429 - struct zone *zone; 430 - int nid; 431 - 432 - for_each_zone(zone) { 433 - unsigned long zone_start_pfn, zone_end_pfn; 434 - 435 - if (!is_highmem(zone)) 436 - continue; 437 - 438 - zone_start_pfn = zone->zone_start_pfn; 439 - zone_end_pfn = zone_start_pfn + zone->spanned_pages; 440 - 441 - nid = zone_to_nid(zone); 442 - printk(KERN_INFO "Initializing %s for node %d (%08lx:%08lx)\n", 443 - zone->name, nid, zone_start_pfn, zone_end_pfn); 444 - 445 - add_highpages_with_active_regions(nid, zone_start_pfn, 446 - zone_end_pfn); 447 - } 448 - totalram_pages += totalhigh_pages; 449 - #endif 450 - } 451 - 452 426 #ifdef CONFIG_MEMORY_HOTPLUG 453 427 static int paddr_to_nid(u64 addr) 454 428 {
+2
arch/x86/mm/pat.c
··· 11 11 #include <linux/bootmem.h> 12 12 #include <linux/debugfs.h> 13 13 #include <linux/kernel.h> 14 + #include <linux/module.h> 14 15 #include <linux/gfp.h> 15 16 #include <linux/mm.h> 16 17 #include <linux/fs.h> ··· 890 889 else 891 890 return pgprot_noncached(prot); 892 891 } 892 + EXPORT_SYMBOL_GPL(pgprot_writecombine); 893 893 894 894 #if defined(CONFIG_DEBUG_FS) && defined(CONFIG_X86_PAT) 895 895
+18
arch/x86/mm/pgtable.c
··· 313 313 return young; 314 314 } 315 315 316 + /** 317 + * reserve_top_address - reserves a hole in the top of kernel address space 318 + * @reserve - size of hole to reserve 319 + * 320 + * Can be used to relocate the fixmap area and poke a hole in the top 321 + * of kernel address space to make room for a hypervisor. 322 + */ 323 + void __init reserve_top_address(unsigned long reserve) 324 + { 325 + #ifdef CONFIG_X86_32 326 + BUG_ON(fixmaps_set > 0); 327 + printk(KERN_INFO "Reserving virtual address space above 0x%08x\n", 328 + (int)-reserve); 329 + __FIXADDR_TOP = -reserve - PAGE_SIZE; 330 + __VMALLOC_RESERVE += reserve; 331 + #endif 332 + } 333 + 316 334 int fixmaps_set; 317 335 318 336 void __native_set_fixmap(enum fixed_addresses idx, pte_t pte)
+2 -16
arch/x86/mm/pgtable_32.c
··· 20 20 #include <asm/tlb.h> 21 21 #include <asm/tlbflush.h> 22 22 23 + unsigned int __VMALLOC_RESERVE = 128 << 20; 24 + 23 25 /* 24 26 * Associate a virtual page frame with a given physical page frame 25 27 * and protection flags for that frame. ··· 98 96 99 97 unsigned long __FIXADDR_TOP = 0xfffff000; 100 98 EXPORT_SYMBOL(__FIXADDR_TOP); 101 - 102 - /** 103 - * reserve_top_address - reserves a hole in the top of kernel address space 104 - * @reserve - size of hole to reserve 105 - * 106 - * Can be used to relocate the fixmap area and poke a hole in the top 107 - * of kernel address space to make room for a hypervisor. 108 - */ 109 - void __init reserve_top_address(unsigned long reserve) 110 - { 111 - BUG_ON(fixmaps_set > 0); 112 - printk(KERN_INFO "Reserving virtual address space above 0x%08x\n", 113 - (int)-reserve); 114 - __FIXADDR_TOP = -reserve - PAGE_SIZE; 115 - __VMALLOC_RESERVE += reserve; 116 - } 117 99 118 100 /* 119 101 * vmalloc=size forces the vmalloc area to be exactly 'size'
+12 -2
arch/x86/oprofile/op_model_ppro.c
··· 78 78 if (cpu_has_arch_perfmon) { 79 79 union cpuid10_eax eax; 80 80 eax.full = cpuid_eax(0xa); 81 - if (counter_width < eax.split.bit_width) 82 - counter_width = eax.split.bit_width; 81 + 82 + /* 83 + * For Core2 (family 6, model 15), don't reset the 84 + * counter width: 85 + */ 86 + if (!(eax.split.version_id == 0 && 87 + current_cpu_data.x86 == 6 && 88 + current_cpu_data.x86_model == 15)) { 89 + 90 + if (counter_width < eax.split.bit_width) 91 + counter_width = eax.split.bit_width; 92 + } 83 93 } 84 94 85 95 /* clear all counters */
+53 -41
block/blk-merge.c
··· 38 38 } 39 39 } 40 40 41 - void blk_recalc_rq_segments(struct request *rq) 41 + static unsigned int __blk_recalc_rq_segments(struct request_queue *q, 42 + struct bio *bio, 43 + unsigned int *seg_size_ptr) 42 44 { 43 - int nr_phys_segs; 44 45 unsigned int phys_size; 45 46 struct bio_vec *bv, *bvprv = NULL; 46 - int seg_size; 47 - int cluster; 48 - struct req_iterator iter; 49 - int high, highprv = 1; 50 - struct request_queue *q = rq->q; 47 + int cluster, i, high, highprv = 1; 48 + unsigned int seg_size, nr_phys_segs; 49 + struct bio *fbio; 51 50 52 - if (!rq->bio) 53 - return; 51 + if (!bio) 52 + return 0; 54 53 54 + fbio = bio; 55 55 cluster = test_bit(QUEUE_FLAG_CLUSTER, &q->queue_flags); 56 56 seg_size = 0; 57 57 phys_size = nr_phys_segs = 0; 58 - rq_for_each_segment(bv, rq, iter) { 59 - /* 60 - * the trick here is making sure that a high page is never 61 - * considered part of another segment, since that might 62 - * change with the bounce page. 63 - */ 64 - high = page_to_pfn(bv->bv_page) > q->bounce_pfn; 65 - if (high || highprv) 66 - goto new_segment; 67 - if (cluster) { 68 - if (seg_size + bv->bv_len > q->max_segment_size) 58 + for_each_bio(bio) { 59 + bio_for_each_segment(bv, bio, i) { 60 + /* 61 + * the trick here is making sure that a high page is 62 + * never considered part of another segment, since that 63 + * might change with the bounce page. 
64 + */ 65 + high = page_to_pfn(bv->bv_page) > q->bounce_pfn; 66 + if (high || highprv) 69 67 goto new_segment; 70 - if (!BIOVEC_PHYS_MERGEABLE(bvprv, bv)) 71 - goto new_segment; 72 - if (!BIOVEC_SEG_BOUNDARY(q, bvprv, bv)) 73 - goto new_segment; 68 + if (cluster) { 69 + if (seg_size + bv->bv_len > q->max_segment_size) 70 + goto new_segment; 71 + if (!BIOVEC_PHYS_MERGEABLE(bvprv, bv)) 72 + goto new_segment; 73 + if (!BIOVEC_SEG_BOUNDARY(q, bvprv, bv)) 74 + goto new_segment; 74 75 75 + seg_size += bv->bv_len; 76 + bvprv = bv; 77 + continue; 78 + } 76 79 seg_size += bv->bv_len; 77 - bvprv = bv; 78 - continue; 79 - } 79 80 new_segment: 80 - if (nr_phys_segs == 1 && seg_size > rq->bio->bi_seg_front_size) 81 - rq->bio->bi_seg_front_size = seg_size; 81 + if (nr_phys_segs == 1 && seg_size > 82 + fbio->bi_seg_front_size) 83 + fbio->bi_seg_front_size = seg_size; 82 84 83 - nr_phys_segs++; 84 - bvprv = bv; 85 - seg_size = bv->bv_len; 86 - highprv = high; 85 + nr_phys_segs++; 86 + bvprv = bv; 87 + seg_size = bv->bv_len; 88 + highprv = high; 89 + } 87 90 } 88 91 89 - if (nr_phys_segs == 1 && seg_size > rq->bio->bi_seg_front_size) 92 + if (seg_size_ptr) 93 + *seg_size_ptr = seg_size; 94 + 95 + return nr_phys_segs; 96 + } 97 + 98 + void blk_recalc_rq_segments(struct request *rq) 99 + { 100 + unsigned int seg_size = 0, phys_segs; 101 + 102 + phys_segs = __blk_recalc_rq_segments(rq->q, rq->bio, &seg_size); 103 + 104 + if (phys_segs == 1 && seg_size > rq->bio->bi_seg_front_size) 90 105 rq->bio->bi_seg_front_size = seg_size; 91 106 if (seg_size > rq->biotail->bi_seg_back_size) 92 107 rq->biotail->bi_seg_back_size = seg_size; 93 108 94 - rq->nr_phys_segments = nr_phys_segs; 109 + rq->nr_phys_segments = phys_segs; 95 110 } 96 111 97 112 void blk_recount_segments(struct request_queue *q, struct bio *bio) 98 113 { 99 - struct request rq; 100 114 struct bio *nxt = bio->bi_next; 101 - rq.q = q; 102 - rq.bio = rq.biotail = bio; 115 + 103 116 bio->bi_next = NULL; 104 - blk_recalc_rq_segments(&rq);
117 + bio->bi_phys_segments = __blk_recalc_rq_segments(q, bio, NULL); 105 118 bio->bi_next = nxt; 106 - bio->bi_phys_segments = rq.nr_phys_segments; 107 119 bio->bi_flags |= (1 << BIO_SEG_VALID); 108 120 } 109 121 EXPORT_SYMBOL(blk_recount_segments);
+16
block/genhd.c
··· 256 256 } 257 257 #endif /* CONFIG_PROC_FS */ 258 258 259 + /** 260 + * register_blkdev - register a new block device 261 + * 262 + * @major: the requested major device number [1..255]. If @major=0, try to 263 + * allocate any unused major number. 264 + * @name: the name of the new block device as a zero terminated string 265 + * 266 + * The @name must be unique within the system. 267 + * 268 + * The return value depends on the @major input parameter. 269 + * - if a major device number was requested in range [1..255] then the 270 + * function returns zero on success, or a negative error code 271 + * - if any unused major number was requested with @major=0 parameter 272 + * then the return value is the allocated major number in range 273 + * [1..255] or a negative error code otherwise 274 + */ 259 275 int register_blkdev(unsigned int major, const char *name) 260 276 { 261 277 struct blk_major_name **n, *p;
+59 -17
drivers/ata/pata_amd.c
··· 24 24 #include <linux/libata.h> 25 25 26 26 #define DRV_NAME "pata_amd" 27 - #define DRV_VERSION "0.3.11" 27 + #define DRV_VERSION "0.4.1" 28 28 29 29 /** 30 30 * timing_setup - shared timing computation and load ··· 145 145 return ata_sff_prereset(link, deadline); 146 146 } 147 147 148 + /** 149 + * amd_cable_detect - report cable type 150 + * @ap: port 151 + * 152 + * AMD controller/BIOS setups record the cable type in word 0x42 153 + */ 154 + 148 155 static int amd_cable_detect(struct ata_port *ap) 149 156 { 150 157 static const u32 bitmask[2] = {0x03, 0x0C}; ··· 165 158 } 166 159 167 160 /** 161 + * amd_fifo_setup - set the PIO FIFO for ATA/ATAPI 162 + * @ap: ATA interface 163 + * @adev: ATA device 164 + * 165 + * Set the PCI fifo for this device according to the devices present 166 + * on the bus at this point in time. We need to turn the post write buffer 167 + * off for ATAPI devices as we may need to issue a word sized write to the 168 + * device as the final I/O 169 + */ 170 + 171 + static void amd_fifo_setup(struct ata_port *ap) 172 + { 173 + struct ata_device *adev; 174 + struct pci_dev *pdev = to_pci_dev(ap->host->dev); 175 + static const u8 fifobit[2] = { 0xC0, 0x30}; 176 + u8 fifo = fifobit[ap->port_no]; 177 + u8 r; 178 + 179 + 180 + ata_for_each_dev(adev, &ap->link, ENABLED) { 181 + if (adev->class == ATA_DEV_ATAPI) 182 + fifo = 0; 183 + } 184 + if (pdev->device == PCI_DEVICE_ID_AMD_VIPER_7411) /* FIFO is broken */ 185 + fifo = 0; 186 + 187 + /* On the later chips the read prefetch bits become no-op bits */ 188 + pci_read_config_byte(pdev, 0x41, &r); 189 + r &= ~fifobit[ap->port_no]; 190 + r |= fifo; 191 + pci_write_config_byte(pdev, 0x41, r); 192 + } 193 + 194 + /** 168 195 * amd33_set_piomode - set initial PIO mode data 169 196 * @ap: ATA interface 170 197 * @adev: ATA device ··· 208 167 209 168 static void amd33_set_piomode(struct ata_port *ap, struct ata_device *adev) 210 169 { 170 + amd_fifo_setup(ap); 211 171 timing_setup(ap, adev, 0x40, adev->pio_mode, 1);
212 172 } 213 173 214 174 static void amd66_set_piomode(struct ata_port *ap, struct ata_device *adev) 215 175 { 176 + amd_fifo_setup(ap); 216 177 timing_setup(ap, adev, 0x40, adev->pio_mode, 2); 217 178 } 218 179 219 180 static void amd100_set_piomode(struct ata_port *ap, struct ata_device *adev) 220 181 { 182 + amd_fifo_setup(ap); 221 183 timing_setup(ap, adev, 0x40, adev->pio_mode, 3); 222 184 } 223 185 224 186 static void amd133_set_piomode(struct ata_port *ap, struct ata_device *adev) 225 187 { 188 + amd_fifo_setup(ap); 226 189 timing_setup(ap, adev, 0x40, adev->pio_mode, 4); 227 190 } 228 191 ··· 442 397 .set_dmamode = nv133_set_dmamode, 443 398 }; 444 399 400 + static void amd_clear_fifo(struct pci_dev *pdev) 401 + { 402 + u8 fifo; 403 + /* Disable the FIFO, the FIFO logic will re-enable it as 404 + appropriate */ 405 + pci_read_config_byte(pdev, 0x41, &fifo); 406 + fifo &= 0x0F; 407 + pci_write_config_byte(pdev, 0x41, fifo); 408 + } 409 + 445 410 static int amd_init_one(struct pci_dev *pdev, const struct pci_device_id *id) 446 411 { 447 412 static const struct ata_port_info info[10] = { ··· 558 503 559 504 if (type < 3) 560 505 ata_pci_bmdma_clear_simplex(pdev); 561 - 562 - /* Check for AMD7411 */ 563 - if (type == 3) 564 - /* FIFO is broken */ 565 - pci_write_config_byte(pdev, 0x41, fifo & 0x0F); 566 - else 567 - pci_write_config_byte(pdev, 0x41, fifo | 0xF0); 568 - 506 + if (pdev->vendor == PCI_VENDOR_ID_AMD) 507 + amd_clear_fifo(pdev); 569 508 /* Cable detection on Nvidia chips doesn't work too well, 570 509 * cache BIOS programmed UDMA mode.
571 510 */ ··· 585 536 return rc; 586 537 587 538 if (pdev->vendor == PCI_VENDOR_ID_AMD) { 588 - u8 fifo; 589 - pci_read_config_byte(pdev, 0x41, &fifo); 590 - if (pdev->device == PCI_DEVICE_ID_AMD_VIPER_7411) 591 - /* FIFO is broken */ 592 - pci_write_config_byte(pdev, 0x41, fifo & 0x0F); 593 - else 594 - pci_write_config_byte(pdev, 0x41, fifo | 0xF0); 539 + amd_clear_fifo(pdev); 595 540 if (pdev->device == PCI_DEVICE_ID_AMD_VIPER_7409 || 596 541 pdev->device == PCI_DEVICE_ID_AMD_COBRA_7401) 597 542 ata_pci_bmdma_clear_simplex(pdev); 598 543 } 599 - 600 544 ata_host_resume(host); 601 545 return 0; 602 546 }
+3
drivers/ata/pata_it821x.c
··· 557 557 id[83] |= 0x4400; /* Word 83 is valid and LBA48 */ 558 558 id[86] |= 0x0400; /* LBA48 on */ 559 559 id[ATA_ID_MAJOR_VER] |= 0x1F; 560 + /* Clear the serial number because it's different each boot 561 + which breaks validation on resume */ 562 + memset(&id[ATA_ID_SERNO], 0x20, ATA_ID_SERNO_LEN); 560 563 } 561 564 return err_mask; 562 565 }
+4 -3
drivers/ata/pata_legacy.c
··· 283 283 static unsigned int pdc_data_xfer_vlb(struct ata_device *dev, 284 284 unsigned char *buf, unsigned int buflen, int rw) 285 285 { 286 - if (ata_id_has_dword_io(dev->id)) { 286 + int slop = buflen & 3; 287 + /* 32bit I/O capable *and* we need to write a whole number of dwords */ 288 + if (ata_id_has_dword_io(dev->id) && (slop == 0 || slop == 3)) { 287 289 struct ata_port *ap = dev->link->ap; 288 - int slop = buflen & 3; 289 290 unsigned long flags; 290 291 291 292 local_irq_save(flags); ··· 736 735 struct ata_port *ap = adev->link->ap; 737 736 int slop = buflen & 3; 738 737 739 - if (ata_id_has_dword_io(adev->id)) { 738 + if (ata_id_has_dword_io(adev->id) && (slop == 0 || slop == 3)) { 740 739 if (rw == WRITE) 741 740 iowrite32_rep(ap->ioaddr.data_addr, buf, buflen >> 2); 742 741 else
+9 -11
drivers/ata/sata_mv.c
··· 3114 3114 writelfl(0, hc_mmio + HC_IRQ_CAUSE_OFS); 3115 3115 } 3116 3116 3117 - if (!IS_SOC(hpriv)) { 3118 - /* Clear any currently outstanding host interrupt conditions */ 3119 - writelfl(0, mmio + hpriv->irq_cause_ofs); 3117 + /* Clear any currently outstanding host interrupt conditions */ 3118 + writelfl(0, mmio + hpriv->irq_cause_ofs); 3120 3119 3121 - /* and unmask interrupt generation for host regs */ 3122 - writelfl(hpriv->unmask_all_irqs, mmio + hpriv->irq_mask_ofs); 3120 + /* and unmask interrupt generation for host regs */ 3121 + writelfl(hpriv->unmask_all_irqs, mmio + hpriv->irq_mask_ofs); 3123 3122 3124 - /* 3125 - * enable only global host interrupts for now. 3126 - * The per-port interrupts get done later as ports are set up. 3127 - */ 3128 - mv_set_main_irq_mask(host, 0, PCI_ERR); 3129 - } 3123 + /* 3124 + * enable only global host interrupts for now. 3125 + * The per-port interrupts get done later as ports are set up. 3126 + */ 3127 + mv_set_main_irq_mask(host, 0, PCI_ERR); 3130 3128 done: 3131 3129 return rc; 3132 3130 }
+7 -3
drivers/block/cciss.c
··· 3611 3611 schedule_timeout_uninterruptible(30*HZ); 3612 3612 3613 3613 /* Now try to get the controller to respond to a no-op */ 3614 - for (i=0; i<12; i++) { 3614 + for (i=0; i<30; i++) { 3615 3615 if (cciss_noop(pdev) == 0) 3616 3616 break; 3617 - else 3618 - printk("cciss: no-op failed%s\n", (i < 11 ? "; re-trying" : "")); 3617 + 3618 + schedule_timeout_uninterruptible(HZ); 3619 + } 3620 + if (i == 30) { 3621 + printk(KERN_ERR "cciss: controller seems dead\n"); 3622 + return -EBUSY; 3619 3623 } 3620 3624 } 3621 3625
+15 -15
drivers/block/xen-blkfront.c
··· 40 40 #include <linux/hdreg.h> 41 41 #include <linux/cdrom.h> 42 42 #include <linux/module.h> 43 + #include <linux/scatterlist.h> 43 44 44 45 #include <xen/xenbus.h> 45 46 #include <xen/grant_table.h> ··· 83 82 enum blkif_state connected; 84 83 int ring_ref; 85 84 struct blkif_front_ring ring; 85 + struct scatterlist sg[BLKIF_MAX_SEGMENTS_PER_REQUEST]; 86 86 unsigned int evtchn, irq; 87 87 struct request_queue *rq; 88 88 struct work_struct work; ··· 206 204 struct blkfront_info *info = req->rq_disk->private_data; 207 205 unsigned long buffer_mfn; 208 206 struct blkif_request *ring_req; 209 - struct req_iterator iter; 210 - struct bio_vec *bvec; 211 207 unsigned long id; 212 208 unsigned int fsect, lsect; 213 - int ref; 209 + int i, ref; 214 210 grant_ref_t gref_head; 211 + struct scatterlist *sg; 215 212 216 213 if (unlikely(info->connected != BLKIF_STATE_CONNECTED)) 217 214 return 1; ··· 239 238 if (blk_barrier_rq(req)) 240 239 ring_req->operation = BLKIF_OP_WRITE_BARRIER; 241 240 242 - ring_req->nr_segments = 0; 243 - rq_for_each_segment(bvec, req, iter) { 244 - BUG_ON(ring_req->nr_segments == BLKIF_MAX_SEGMENTS_PER_REQUEST); 245 - buffer_mfn = pfn_to_mfn(page_to_pfn(bvec->bv_page)); 246 - fsect = bvec->bv_offset >> 9; 247 - lsect = fsect + (bvec->bv_len >> 9) - 1; 241 + ring_req->nr_segments = blk_rq_map_sg(req->q, req, info->sg); 242 + BUG_ON(ring_req->nr_segments > BLKIF_MAX_SEGMENTS_PER_REQUEST); 243 + 244 + for_each_sg(info->sg, sg, ring_req->nr_segments, i) { 245 + buffer_mfn = pfn_to_mfn(page_to_pfn(sg_page(sg))); 246 + fsect = sg->offset >> 9; 247 + lsect = fsect + (sg->length >> 9) - 1; 248 248 /* install a grant reference. */
249 249 ref = gnttab_claim_grant_reference(&gref_head); 250 250 BUG_ON(ref == -ENOSPC); ··· 256 254 buffer_mfn, 257 255 rq_data_dir(req) ); 258 256 259 - info->shadow[id].frame[ring_req->nr_segments] = 260 - mfn_to_pfn(buffer_mfn); 261 - 262 - ring_req->seg[ring_req->nr_segments] = 257 + info->shadow[id].frame[i] = mfn_to_pfn(buffer_mfn); 258 + ring_req->seg[i] = 263 259 (struct blkif_request_segment) { 264 260 .gref = ref, 265 261 .first_sect = fsect, 266 262 .last_sect = lsect }; 267 - 268 - ring_req->nr_segments++; 269 263 } 270 264 271 265 info->ring.req_prod_pvt++; ··· 619 621 } 620 622 SHARED_RING_INIT(sring); 621 623 FRONT_RING_INIT(&info->ring, sring, PAGE_SIZE); 624 + 625 + sg_init_table(info->sg, BLKIF_MAX_SEGMENTS_PER_REQUEST); 622 626 623 627 err = xenbus_grant_ring(dev, virt_to_mfn(info->ring.sring)); 624 628 if (err < 0) {
+1 -1
drivers/gpu/drm/drm_bufs.c
··· 420 420 dev->sigdata.lock = NULL; 421 421 master->lock.hw_lock = NULL; /* SHM removed */ 422 422 master->lock.file_priv = NULL; 423 - wake_up_interruptible(&master->lock.lock_queue); 423 + wake_up_interruptible_all(&master->lock.lock_queue); 424 424 } 425 425 break; 426 426 case _DRM_AGP:
+72 -4
drivers/gpu/drm/drm_crtc_helper.c
··· 452 452 kfree(modes); 453 453 kfree(enabled); 454 454 } 455 + 456 + /** 457 + * drm_encoder_crtc_ok - can a given crtc drive a given encoder? 458 + * @encoder: encoder to test 459 + * @crtc: crtc to test 460 + * 461 + * Return false if @encoder can't be driven by @crtc, true otherwise. 462 + */ 463 + static bool drm_encoder_crtc_ok(struct drm_encoder *encoder, 464 + struct drm_crtc *crtc) 465 + { 466 + struct drm_device *dev; 467 + struct drm_crtc *tmp; 468 + int crtc_mask = 1; 469 + 470 + WARN(!crtc, "checking null crtc?"); 471 + 472 + dev = crtc->dev; 473 + 474 + list_for_each_entry(tmp, &dev->mode_config.crtc_list, head) { 475 + if (tmp == crtc) 476 + break; 477 + crtc_mask <<= 1; 478 + } 479 + 480 + if (encoder->possible_crtcs & crtc_mask) 481 + return true; 482 + return false; 483 + } 484 + 485 + /* 486 + * Check the CRTC we're going to map each output to vs. its current 487 + * CRTC. If they don't match, we have to disable the output and the CRTC 488 + * since the driver will have to re-route things. 
489 + */ 490 + static void 491 + drm_crtc_prepare_encoders(struct drm_device *dev) 492 + { 493 + struct drm_encoder_helper_funcs *encoder_funcs; 494 + struct drm_encoder *encoder; 495 + 496 + list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) { 497 + encoder_funcs = encoder->helper_private; 498 + /* Disable unused encoders */ 499 + if (encoder->crtc == NULL) 500 + (*encoder_funcs->dpms)(encoder, DRM_MODE_DPMS_OFF); 501 + /* Disable encoders whose CRTC is about to change */ 502 + if (encoder_funcs->get_crtc && 503 + encoder->crtc != (*encoder_funcs->get_crtc)(encoder)) 504 + (*encoder_funcs->dpms)(encoder, DRM_MODE_DPMS_OFF); 505 + } 506 + } 507 + 455 508 /** 456 509 * drm_crtc_set_mode - set a mode 457 510 * @crtc: CRTC to program ··· 600 547 encoder_funcs->prepare(encoder); 601 548 } 602 549 550 + drm_crtc_prepare_encoders(dev); 551 + 603 552 crtc_funcs->prepare(crtc); 604 553 605 554 /* Set up the DPLL and any encoders state that needs to adjust or depend ··· 672 617 struct drm_device *dev; 673 618 struct drm_crtc **save_crtcs, *new_crtc; 674 619 struct drm_encoder **save_encoders, *new_encoder; 675 - struct drm_framebuffer *old_fb; 620 + struct drm_framebuffer *old_fb = NULL; 676 621 bool save_enabled; 677 622 bool mode_changed = false; 678 623 bool fb_changed = false; ··· 723 668 * and then just flip_or_move it */ 724 669 if (set->crtc->fb != set->fb) { 725 670 /* If we have no fb then treat it as a full mode set */ 726 - if (set->crtc->fb == NULL) 671 + if (set->crtc->fb == NULL) { 672 + DRM_DEBUG("crtc has no fb, full mode set\n"); 727 673 mode_changed = true; 728 - else if ((set->fb->bits_per_pixel != 674 + } else if ((set->fb->bits_per_pixel != 729 675 set->crtc->fb->bits_per_pixel) || 730 676 set->fb->depth != set->crtc->fb->depth) 731 677 fb_changed = true; ··· 738 682 fb_changed = true; 739 683 740 684 if (set->mode && !drm_mode_equal(set->mode, &set->crtc->mode)) { 741 - DRM_DEBUG("modes are different\n"); 685 + DRM_DEBUG("modes are 
different, full mode set\n"); 742 686 drm_mode_debug_printmodeline(&set->crtc->mode); 743 687 drm_mode_debug_printmodeline(set->mode); 744 688 mode_changed = true; ··· 764 708 } 765 709 766 710 if (new_encoder != connector->encoder) { 711 + DRM_DEBUG("encoder changed, full mode switch\n"); 767 712 mode_changed = true; 768 713 connector->encoder = new_encoder; 769 714 } ··· 791 734 if (set->connectors[ro] == connector) 792 735 new_crtc = set->crtc; 793 736 } 737 + 738 + /* Make sure the new CRTC will work with the encoder */ 739 + if (new_crtc && 740 + !drm_encoder_crtc_ok(connector->encoder, new_crtc)) { 741 + ret = -EINVAL; 742 + goto fail_set_mode; 743 + } 794 744 if (new_crtc != connector->encoder->crtc) { 745 + DRM_DEBUG("crtc changed, full mode switch\n"); 795 746 mode_changed = true; 796 747 connector->encoder->crtc = new_crtc; 797 748 } 749 + DRM_DEBUG("setting connector %d crtc to %p\n", 750 + connector->base.id, new_crtc); 798 751 } 799 752 800 753 /* mode_set_base is not a required function */ ··· 848 781 849 782 fail_set_mode: 850 783 set->crtc->enabled = save_enabled; 784 + set->crtc->fb = old_fb; 851 785 count = 0; 852 786 list_for_each_entry(connector, &dev->mode_config.connector_list, head) { 853 787 if (!connector->encoder)
+3 -3
drivers/gpu/drm/drm_edid.c
··· 125 125 DRM_ERROR("EDID has major version %d, instead of 1\n", edid->version); 126 126 goto bad; 127 127 } 128 - if (edid->revision <= 0 || edid->revision > 3) { 128 + if (edid->revision > 3) { 129 129 DRM_ERROR("EDID has minor version %d, which is not between 0-3\n", edid->revision); 130 130 goto bad; 131 131 } ··· 320 320 mode->htotal = mode->hdisplay + ((pt->hblank_hi << 8) | pt->hblank_lo); 321 321 322 322 mode->vdisplay = (pt->vactive_hi << 8) | pt->vactive_lo; 323 - mode->vsync_start = mode->vdisplay + ((pt->vsync_offset_hi << 8) | 323 + mode->vsync_start = mode->vdisplay + ((pt->vsync_offset_hi << 4) | 324 324 pt->vsync_offset_lo); 325 325 mode->vsync_end = mode->vsync_start + 326 - ((pt->vsync_pulse_width_hi << 8) | 326 + ((pt->vsync_pulse_width_hi << 4) | 327 327 pt->vsync_pulse_width_lo); 328 328 mode->vtotal = mode->vdisplay + ((pt->vblank_hi << 8) | pt->vblank_lo); 329 329
+14
drivers/gpu/drm/drm_fops.c
··· 484 484 mutex_lock(&dev->struct_mutex); 485 485 486 486 if (file_priv->is_master) { 487 + struct drm_master *master = file_priv->master; 487 488 struct drm_file *temp; 488 489 list_for_each_entry(temp, &dev->filelist, lhead) { 489 490 if ((temp->master == file_priv->master) && 490 491 (temp != file_priv)) 491 492 temp->authenticated = 0; 493 + } 494 + 495 + /** 496 + * Since the master is disappearing, so is the 497 + * possibility to lock. 498 + */ 499 + 500 + if (master->lock.hw_lock) { 501 + if (dev->sigdata.lock == master->lock.hw_lock) 502 + dev->sigdata.lock = NULL; 503 + master->lock.hw_lock = NULL; 504 + master->lock.file_priv = NULL; 505 + wake_up_interruptible_all(&master->lock.lock_queue); 492 506 } 493 507 494 508 if (file_priv->minor->master == file_priv->master) {
+10 -4
drivers/gpu/drm/drm_irq.c
··· 435 435 */ 436 436 void drm_vblank_put(struct drm_device *dev, int crtc) 437 437 { 438 + BUG_ON (atomic_read (&dev->vblank_refcount[crtc]) == 0); 439 + 438 440 /* Last user schedules interrupt disable */ 439 441 if (atomic_dec_and_test(&dev->vblank_refcount[crtc])) 440 442 mod_timer(&dev->vblank_disable_timer, jiffies + 5*DRM_HZ); ··· 462 460 * so that interrupts remain enabled in the interim. 463 461 */ 464 462 if (!dev->vblank_inmodeset[crtc]) { 465 - dev->vblank_inmodeset[crtc] = 1; 466 - drm_vblank_get(dev, crtc); 463 + dev->vblank_inmodeset[crtc] = 0x1; 464 + if (drm_vblank_get(dev, crtc) == 0) 465 + dev->vblank_inmodeset[crtc] |= 0x2; 467 466 } 468 467 } 469 468 EXPORT_SYMBOL(drm_vblank_pre_modeset); ··· 476 473 if (dev->vblank_inmodeset[crtc]) { 477 474 spin_lock_irqsave(&dev->vbl_lock, irqflags); 478 475 dev->vblank_disable_allowed = 1; 479 - dev->vblank_inmodeset[crtc] = 0; 480 476 spin_unlock_irqrestore(&dev->vbl_lock, irqflags); 481 - drm_vblank_put(dev, crtc); 477 + 478 + if (dev->vblank_inmodeset[crtc] & 0x2) 479 + drm_vblank_put(dev, crtc); 480 + 481 + dev->vblank_inmodeset[crtc] = 0; 482 482 } 483 483 } 484 484 EXPORT_SYMBOL(drm_vblank_post_modeset);
+2 -1
drivers/gpu/drm/drm_lock.c
··· 80 80 __set_current_state(TASK_INTERRUPTIBLE); 81 81 if (!master->lock.hw_lock) { 82 82 /* Device has been unregistered */ 83 + send_sig(SIGTERM, current, 0); 83 84 ret = -EINTR; 84 85 break; 85 86 } ··· 94 93 /* Contention */ 95 94 schedule(); 96 95 if (signal_pending(current)) { 97 - ret = -ERESTARTSYS; 96 + ret = -EINTR; 98 97 break; 99 98 } 100 99 }
-8
drivers/gpu/drm/drm_stub.c
··· 146 146 147 147 drm_ht_remove(&master->magiclist); 148 148 149 - if (master->lock.hw_lock) { 150 - if (dev->sigdata.lock == master->lock.hw_lock) 151 - dev->sigdata.lock = NULL; 152 - master->lock.hw_lock = NULL; 153 - master->lock.file_priv = NULL; 154 - wake_up_interruptible(&master->lock.lock_queue); 155 - } 156 - 157 149 drm_free(master, sizeof(*master), DRM_MEM_DRIVER); 158 150 } 159 151
+1 -1
drivers/gpu/drm/i915/i915_dma.c
··· 811 811 dev_priv->hws_map.flags = 0; 812 812 dev_priv->hws_map.mtrr = 0; 813 813 814 - drm_core_ioremap(&dev_priv->hws_map, dev); 814 + drm_core_ioremap_wc(&dev_priv->hws_map, dev); 815 815 if (dev_priv->hws_map.handle == NULL) { 816 816 i915_dma_cleanup(dev); 817 817 dev_priv->status_gfx_addr = 0;
+2 -2
drivers/gpu/drm/i915/i915_gem.c
··· 211 211 212 212 vaddr_atomic = io_mapping_map_atomic_wc(mapping, page_base); 213 213 unwritten = __copy_from_user_inatomic_nocache(vaddr_atomic + page_offset, 214 - user_data, length, length); 214 + user_data, length); 215 215 io_mapping_unmap_atomic(vaddr_atomic); 216 216 if (unwritten) 217 217 return -EFAULT; ··· 3548 3548 user_data = (char __user *) (uintptr_t) args->data_ptr; 3549 3549 obj_addr = obj_priv->phys_obj->handle->vaddr + args->offset; 3550 3550 3551 - DRM_ERROR("obj_addr %p, %lld\n", obj_addr, args->size); 3551 + DRM_DEBUG("obj_addr %p, %lld\n", obj_addr, args->size); 3552 3552 ret = copy_from_user(obj_addr, user_data, args->size); 3553 3553 if (ret) 3554 3554 return -EFAULT;
+3 -2
drivers/gpu/drm/i915/i915_irq.c
··· 383 383 drm_i915_irq_emit_t *emit = data; 384 384 int result; 385 385 386 - RING_LOCK_TEST_WITH_RETURN(dev, file_priv); 387 - 388 386 if (!dev_priv) { 389 387 DRM_ERROR("called with no initialization\n"); 390 388 return -EINVAL; 391 389 } 390 + 391 + RING_LOCK_TEST_WITH_RETURN(dev, file_priv); 392 + 392 393 mutex_lock(&dev->struct_mutex); 393 394 result = i915_emit_irq(dev); 394 395 mutex_unlock(&dev->struct_mutex);
+6
drivers/gpu/drm/i915/intel_bios.c
··· 111 111 panel_fixed_mode->clock = dvo_timing->clock * 10; 112 112 panel_fixed_mode->type = DRM_MODE_TYPE_PREFERRED; 113 113 114 + /* Some VBTs have bogus h/vtotal values */ 115 + if (panel_fixed_mode->hsync_end > panel_fixed_mode->htotal) 116 + panel_fixed_mode->htotal = panel_fixed_mode->hsync_end + 1; 117 + if (panel_fixed_mode->vsync_end > panel_fixed_mode->vtotal) 118 + panel_fixed_mode->vtotal = panel_fixed_mode->vsync_end + 1; 119 + 114 120 drm_mode_set_name(panel_fixed_mode); 115 121 116 122 dev_priv->vbt_mode = panel_fixed_mode;
+1 -1
drivers/gpu/drm/i915/intel_display.c
··· 217 217 return false; 218 218 } 219 219 220 - #define INTELPllInvalid(s) do { DRM_DEBUG(s); return false; } while (0) 220 + #define INTELPllInvalid(s) do { /* DRM_DEBUG(s); */ return false; } while (0) 221 221 /** 222 222 * Returns whether the given set of divisors are valid for a given refclk with 223 223 * the given connectors.
+1 -1
drivers/ide/Kconfig
··· 46 46 SMART parameters from disk drives. 47 47 48 48 To compile this driver as a module, choose M here: the 49 - module will be called ide. 49 + module will be called ide-core.ko. 50 50 51 51 For further information, please read <file:Documentation/ide/ide.txt>. 52 52
+1 -1
drivers/ide/amd74xx.c
··· 166 166 * Check for broken FIFO support. 167 167 */ 168 168 if (dev->vendor == PCI_VENDOR_ID_AMD && 169 - dev->vendor == PCI_DEVICE_ID_AMD_VIPER_7411) 169 + dev->device == PCI_DEVICE_ID_AMD_VIPER_7411) 170 170 t &= 0x0f; 171 171 else 172 172 t |= 0xf0;
+2 -2
drivers/ide/atiixp.c
··· 52 52 { 53 53 struct pci_dev *dev = to_pci_dev(drive->hwif->dev); 54 54 unsigned long flags; 55 - int timing_shift = (drive->dn & 2) ? 16 : 0 + (drive->dn & 1) ? 0 : 8; 55 + int timing_shift = (drive->dn ^ 1) * 8; 56 56 u32 pio_timing_data; 57 57 u16 pio_mode_data; 58 58 ··· 85 85 { 86 86 struct pci_dev *dev = to_pci_dev(drive->hwif->dev); 87 87 unsigned long flags; 88 - int timing_shift = (drive->dn & 2) ? 16 : 0 + (drive->dn & 1) ? 0 : 8; 88 + int timing_shift = (drive->dn ^ 1) * 8; 89 89 u32 tmp32; 90 90 u16 tmp16; 91 91 u16 udma_ctl = 0;
+26 -9
drivers/ide/ide-cd.c
··· 55 55 56 56 static DEFINE_MUTEX(idecd_ref_mutex); 57 57 58 - static void ide_cd_release(struct kref *); 58 + static void ide_cd_release(struct device *); 59 59 60 60 static struct cdrom_info *ide_cd_get(struct gendisk *disk) 61 61 { ··· 67 67 if (ide_device_get(cd->drive)) 68 68 cd = NULL; 69 69 else 70 - kref_get(&cd->kref); 70 + get_device(&cd->dev); 71 71 72 72 } 73 73 mutex_unlock(&idecd_ref_mutex); ··· 79 79 ide_drive_t *drive = cd->drive; 80 80 81 81 mutex_lock(&idecd_ref_mutex); 82 - kref_put(&cd->kref, ide_cd_release); 82 + put_device(&cd->dev); 83 83 ide_device_put(drive); 84 84 mutex_unlock(&idecd_ref_mutex); 85 85 } ··· 194 194 bio_sectors = max(bio_sectors(failed_command->bio), 4U); 195 195 sector &= ~(bio_sectors - 1); 196 196 197 + /* 198 + * The SCSI specification allows for the value 199 + * returned by READ CAPACITY to be up to 75 2K 200 + * sectors past the last readable block. 201 + * Therefore, if we hit a medium error within the 202 + * last 75 2K sectors, we decrease the saved size 203 + * value. 
204 + */ 197 205 if (sector < get_capacity(info->disk) && 198 206 drive->probed_capacity - sector < 4 * 75) 199 207 set_capacity(info->disk, sector); ··· 1798 1790 ide_debug_log(IDE_DBG_FUNC, "Call %s\n", __func__); 1799 1791 1800 1792 ide_proc_unregister_driver(drive, info->driver); 1801 - 1793 + device_del(&info->dev); 1802 1794 del_gendisk(info->disk); 1803 1795 1804 - ide_cd_put(info); 1796 + mutex_lock(&idecd_ref_mutex); 1797 + put_device(&info->dev); 1798 + mutex_unlock(&idecd_ref_mutex); 1805 1799 } 1806 1800 1807 - static void ide_cd_release(struct kref *kref) 1801 + static void ide_cd_release(struct device *dev) 1808 1802 { 1809 - struct cdrom_info *info = to_ide_drv(kref, cdrom_info); 1803 + struct cdrom_info *info = to_ide_drv(dev, cdrom_info); 1810 1804 struct cdrom_device_info *devinfo = &info->devinfo; 1811 1805 ide_drive_t *drive = info->drive; 1812 1806 struct gendisk *g = info->disk; ··· 2007 1997 2008 1998 ide_init_disk(g, drive); 2009 1999 2010 - kref_init(&info->kref); 2000 + info->dev.parent = &drive->gendev; 2001 + info->dev.release = ide_cd_release; 2002 + dev_set_name(&info->dev, dev_name(&drive->gendev)); 2003 + 2004 + if (device_register(&info->dev)) 2005 + goto out_free_disk; 2011 2006 2012 2007 info->drive = drive; 2013 2008 info->driver = &ide_cdrom_driver; ··· 2026 2011 g->driverfs_dev = &drive->gendev; 2027 2012 g->flags = GENHD_FL_CD | GENHD_FL_REMOVABLE; 2028 2013 if (ide_cdrom_setup(drive)) { 2029 - ide_cd_release(&info->kref); 2014 + put_device(&info->dev); 2030 2015 goto failed; 2031 2016 } 2032 2017 ··· 2036 2021 add_disk(g); 2037 2022 return 0; 2038 2023 2024 + out_free_disk: 2025 + put_disk(g); 2039 2026 out_free_cd: 2040 2027 kfree(info); 2041 2028 failed:
+1 -1
drivers/ide/ide-cd.h
··· 80 80 ide_drive_t *drive; 81 81 struct ide_driver *driver; 82 82 struct gendisk *disk; 83 - struct kref kref; 83 + struct device dev; 84 84 85 85 /* Buffer for table of contents. NULL if we haven't allocated 86 86 a TOC buffer for this device yet. */
+17 -9
drivers/ide/ide-gd.c
··· 25 25 26 26 static DEFINE_MUTEX(ide_disk_ref_mutex); 27 27 28 - static void ide_disk_release(struct kref *); 28 + static void ide_disk_release(struct device *); 29 29 30 30 static struct ide_disk_obj *ide_disk_get(struct gendisk *disk) 31 31 { ··· 37 37 if (ide_device_get(idkp->drive)) 38 38 idkp = NULL; 39 39 else 40 - kref_get(&idkp->kref); 40 + get_device(&idkp->dev); 41 41 } 42 42 mutex_unlock(&ide_disk_ref_mutex); 43 43 return idkp; ··· 48 48 ide_drive_t *drive = idkp->drive; 49 49 50 50 mutex_lock(&ide_disk_ref_mutex); 51 - kref_put(&idkp->kref, ide_disk_release); 51 + put_device(&idkp->dev); 52 52 ide_device_put(drive); 53 53 mutex_unlock(&ide_disk_ref_mutex); 54 54 } ··· 66 66 struct gendisk *g = idkp->disk; 67 67 68 68 ide_proc_unregister_driver(drive, idkp->driver); 69 - 69 + device_del(&idkp->dev); 70 70 del_gendisk(g); 71 - 72 71 drive->disk_ops->flush(drive); 73 72 74 - ide_disk_put(idkp); 73 + mutex_lock(&ide_disk_ref_mutex); 74 + put_device(&idkp->dev); 75 + mutex_unlock(&ide_disk_ref_mutex); 75 76 } 76 77 77 - static void ide_disk_release(struct kref *kref) 78 + static void ide_disk_release(struct device *dev) 78 79 { 79 - struct ide_disk_obj *idkp = to_ide_drv(kref, ide_disk_obj); 80 + struct ide_disk_obj *idkp = to_ide_drv(dev, ide_disk_obj); 80 81 ide_drive_t *drive = idkp->drive; 81 82 struct gendisk *g = idkp->disk; 82 83 ··· 349 348 350 349 ide_init_disk(g, drive); 351 350 352 - kref_init(&idkp->kref); 351 + idkp->dev.parent = &drive->gendev; 352 + idkp->dev.release = ide_disk_release; 353 + dev_set_name(&idkp->dev, dev_name(&drive->gendev)); 354 + 355 + if (device_register(&idkp->dev)) 356 + goto out_free_disk; 353 357 354 358 idkp->drive = drive; 355 359 idkp->driver = &ide_gd_driver; ··· 379 373 add_disk(g); 380 374 return 0; 381 375 376 + out_free_disk: 377 + put_disk(g); 382 378 out_free_idkp: 383 379 kfree(idkp); 384 380 failed:
+1 -1
drivers/ide/ide-gd.h
··· 17 17 ide_drive_t *drive; 18 18 struct ide_driver *driver; 19 19 struct gendisk *disk; 20 - struct kref kref; 20 + struct device dev; 21 21 unsigned int openers; /* protected by BKL for now */ 22 22 23 23 /* Last failed packet command */
+19 -10
drivers/ide/ide-tape.c
··· 169 169 ide_drive_t *drive; 170 170 struct ide_driver *driver; 171 171 struct gendisk *disk; 172 - struct kref kref; 172 + struct device dev; 173 173 174 174 /* 175 175 * failed_pc points to the last failed packet command, or contains ··· 267 267 268 268 static struct class *idetape_sysfs_class; 269 269 270 - static void ide_tape_release(struct kref *); 270 + static void ide_tape_release(struct device *); 271 271 272 272 static struct ide_tape_obj *ide_tape_get(struct gendisk *disk) 273 273 { ··· 279 279 if (ide_device_get(tape->drive)) 280 280 tape = NULL; 281 281 else 282 - kref_get(&tape->kref); 282 + get_device(&tape->dev); 283 283 } 284 284 mutex_unlock(&idetape_ref_mutex); 285 285 return tape; ··· 290 290 ide_drive_t *drive = tape->drive; 291 291 292 292 mutex_lock(&idetape_ref_mutex); 293 - kref_put(&tape->kref, ide_tape_release); 293 + put_device(&tape->dev); 294 294 ide_device_put(drive); 295 295 mutex_unlock(&idetape_ref_mutex); 296 296 } ··· 308 308 mutex_lock(&idetape_ref_mutex); 309 309 tape = idetape_devs[i]; 310 310 if (tape) 311 - kref_get(&tape->kref); 311 + get_device(&tape->dev); 312 312 mutex_unlock(&idetape_ref_mutex); 313 313 return tape; 314 314 } ··· 2256 2256 idetape_tape_t *tape = drive->driver_data; 2257 2257 2258 2258 ide_proc_unregister_driver(drive, tape->driver); 2259 - 2259 + device_del(&tape->dev); 2260 2260 ide_unregister_region(tape->disk); 2261 2261 2262 - ide_tape_put(tape); 2262 + mutex_lock(&idetape_ref_mutex); 2263 + put_device(&tape->dev); 2264 + mutex_unlock(&idetape_ref_mutex); 2263 2265 } 2264 2266 2265 - static void ide_tape_release(struct kref *kref) 2267 + static void ide_tape_release(struct device *dev) 2266 2268 { 2267 - struct ide_tape_obj *tape = to_ide_drv(kref, ide_tape_obj); 2269 + struct ide_tape_obj *tape = to_ide_drv(dev, ide_tape_obj); 2268 2270 ide_drive_t *drive = tape->drive; 2269 2271 struct gendisk *g = tape->disk; 2270 2272 ··· 2409 2407 2410 2408 ide_init_disk(g, drive); 2411 2409
2412 - kref_init(&tape->kref); 2410 + tape->dev.parent = &drive->gendev; 2411 + tape->dev.release = ide_tape_release; 2412 + dev_set_name(&tape->dev, dev_name(&drive->gendev)); 2413 + 2414 + if (device_register(&tape->dev)) 2415 + goto out_free_disk; 2413 2416 2414 2417 tape->drive = drive; 2415 2418 tape->driver = &idetape_driver; ··· 2443 2436 2444 2437 return 0; 2445 2438 2439 + out_free_disk: 2440 + put_disk(g); 2446 2441 out_free_tape: 2447 2442 kfree(tape); 2448 2443 failed:
+8 -3
drivers/ide/ide.c
··· 337 337 int a, b, i, j = 1; 338 338 unsigned int *dev_param_mask = (unsigned int *)kp->arg; 339 339 340 + /* controller . device (0 or 1) [ : 1 (set) | 0 (clear) ] */ 340 341 if (sscanf(s, "%d.%d:%d", &a, &b, &j) != 3 && 341 342 sscanf(s, "%d.%d", &a, &b) != 2) 342 343 return -EINVAL; ··· 350 349 if (j) 351 350 *dev_param_mask |= (1 << i); 352 351 else 353 - *dev_param_mask &= (1 << i); 352 + *dev_param_mask &= ~(1 << i); 354 353 355 354 return 0; 356 355 } ··· 393 392 { 394 393 int a, b, c = 0, h = 0, s = 0, i, j = 1; 395 394 395 + /* controller . device (0 or 1) : Cylinders , Heads , Sectors */ 396 + /* controller . device (0 or 1) : 1 (use CHS) | 0 (ignore CHS) */ 396 397 if (sscanf(str, "%d.%d:%d,%d,%d", &a, &b, &c, &h, &s) != 5 && 397 398 sscanf(str, "%d.%d:%d", &a, &b, &j) != 3) 398 399 return -EINVAL; ··· 410 407 if (j) 411 408 ide_disks |= (1 << i); 412 409 else 413 - ide_disks &= (1 << i); 410 + ide_disks &= ~(1 << i); 414 411 415 412 ide_disks_chs[i].cyl = c; 416 413 ide_disks_chs[i].head = h; ··· 472 469 { 473 470 int i, j = 1; 474 471 472 + /* controller (ignore) */ 473 + /* controller : 1 (ignore) | 0 (use) */ 475 474 if (sscanf(s, "%d:%d", &i, &j) != 2 && sscanf(s, "%d", &i) != 1) 476 475 return -EINVAL; 477 476 ··· 483 478 if (j) 484 479 ide_ignore_cable |= (1 << i); 485 480 else 486 - ide_ignore_cable &= (1 << i); 481 + ide_ignore_cable &= ~(1 << i); 487 482 488 483 return 0; 489 484 }
+2 -3
drivers/ide/it821x.c
··· 5 5 * May be copied or modified under the terms of the GNU General Public License 6 6 * Based in part on the ITE vendor provided SCSI driver. 7 7 * 8 - * Documentation available from 9 - * http://www.ite.com.tw/pc/IT8212F_V04.pdf 10 - * Some other documents are NDA. 8 + * Documentation: 9 + * Datasheet is freely available, some other documents under NDA. 11 10 * 12 11 * The ITE8212 isn't exactly a standard IDE controller. It has two 13 12 * modes. In pass through mode then it is an IDE controller. In its smart
+1 -1
drivers/ieee1394/ieee1394_core.c
··· 1275 1275 unregister_chrdev_region(IEEE1394_CORE_DEV, 256); 1276 1276 } 1277 1277 1278 - module_init(ieee1394_init); 1278 + fs_initcall(ieee1394_init); 1279 1279 module_exit(ieee1394_cleanup); 1280 1280 1281 1281 /* Exported symbols */
+2 -2
drivers/input/keyboard/atkbd.c
··· 839 839 */ 840 840 static void atkbd_dell_laptop_keymap_fixup(struct atkbd *atkbd) 841 841 { 842 - const unsigned int forced_release_keys[] = { 842 + static const unsigned int forced_release_keys[] = { 843 843 0x85, 0x86, 0x87, 0x88, 0x89, 0x8a, 0x8b, 0x8f, 0x93, 844 844 }; 845 845 int i; ··· 856 856 */ 857 857 static void atkbd_hp_keymap_fixup(struct atkbd *atkbd) 858 858 { 859 - const unsigned int forced_release_keys[] = { 859 + static const unsigned int forced_release_keys[] = { 860 860 0x94, 861 861 }; 862 862 int i;
+2 -2
drivers/input/keyboard/bf54x-keys.c
··· 209 209 goto out; 210 210 } 211 211 212 - if (!pdata->debounce_time || !pdata->debounce_time > MAX_MULT || 213 - !pdata->coldrive_time || !pdata->coldrive_time > MAX_MULT) { 212 + if (!pdata->debounce_time || pdata->debounce_time > MAX_MULT || 213 + !pdata->coldrive_time || pdata->coldrive_time > MAX_MULT) { 214 214 printk(KERN_ERR DRV_NAME 215 215 ": Invalid Debounce/Columdrive Time from pdata\n"); 216 216 bfin_write_KPAD_MSEL(0xFF0); /* Default MSEL */
+4 -4
drivers/input/keyboard/corgikbd.c
··· 288 288 #define corgikbd_resume NULL 289 289 #endif 290 290 291 - static int __init corgikbd_probe(struct platform_device *pdev) 291 + static int __devinit corgikbd_probe(struct platform_device *pdev) 292 292 { 293 293 struct corgikbd *corgikbd; 294 294 struct input_dev *input_dev; ··· 368 368 return err; 369 369 } 370 370 371 - static int corgikbd_remove(struct platform_device *pdev) 371 + static int __devexit corgikbd_remove(struct platform_device *pdev) 372 372 { 373 373 int i; 374 374 struct corgikbd *corgikbd = platform_get_drvdata(pdev); ··· 388 388 389 389 static struct platform_driver corgikbd_driver = { 390 390 .probe = corgikbd_probe, 391 - .remove = corgikbd_remove, 391 + .remove = __devexit_p(corgikbd_remove), 392 392 .suspend = corgikbd_suspend, 393 393 .resume = corgikbd_resume, 394 394 .driver = { ··· 397 397 }, 398 398 }; 399 399 400 - static int __devinit corgikbd_init(void) 400 + static int __init corgikbd_init(void) 401 401 { 402 402 return platform_driver_register(&corgikbd_driver); 403 403 }
+4 -4
drivers/input/keyboard/omap-keypad.c
··· 279 279 #define omap_kp_resume NULL 280 280 #endif 281 281 282 - static int __init omap_kp_probe(struct platform_device *pdev) 282 + static int __devinit omap_kp_probe(struct platform_device *pdev) 283 283 { 284 284 struct omap_kp *omap_kp; 285 285 struct input_dev *input_dev; ··· 422 422 return -EINVAL; 423 423 } 424 424 425 - static int omap_kp_remove(struct platform_device *pdev) 425 + static int __devexit omap_kp_remove(struct platform_device *pdev) 426 426 { 427 427 struct omap_kp *omap_kp = platform_get_drvdata(pdev); 428 428 ··· 454 454 455 455 static struct platform_driver omap_kp_driver = { 456 456 .probe = omap_kp_probe, 457 - .remove = omap_kp_remove, 457 + .remove = __devexit_p(omap_kp_remove), 458 458 .suspend = omap_kp_suspend, 459 459 .resume = omap_kp_resume, 460 460 .driver = { ··· 463 463 }, 464 464 }; 465 465 466 - static int __devinit omap_kp_init(void) 466 + static int __init omap_kp_init(void) 467 467 { 468 468 printk(KERN_INFO "OMAP Keypad Driver\n"); 469 469 return platform_driver_register(&omap_kp_driver);
+4 -4
drivers/input/keyboard/spitzkbd.c
··· 343 343 #define spitzkbd_resume NULL 344 344 #endif 345 345 346 - static int __init spitzkbd_probe(struct platform_device *dev) 346 + static int __devinit spitzkbd_probe(struct platform_device *dev) 347 347 { 348 348 struct spitzkbd *spitzkbd; 349 349 struct input_dev *input_dev; ··· 444 444 return err; 445 445 } 446 446 447 - static int spitzkbd_remove(struct platform_device *dev) 447 + static int __devexit spitzkbd_remove(struct platform_device *dev) 448 448 { 449 449 int i; 450 450 struct spitzkbd *spitzkbd = platform_get_drvdata(dev); ··· 470 470 471 471 static struct platform_driver spitzkbd_driver = { 472 472 .probe = spitzkbd_probe, 473 - .remove = spitzkbd_remove, 473 + .remove = __devexit_p(spitzkbd_remove), 474 474 .suspend = spitzkbd_suspend, 475 475 .resume = spitzkbd_resume, 476 476 .driver = { ··· 479 479 }, 480 480 }; 481 481 482 - static int __devinit spitzkbd_init(void) 482 + static int __init spitzkbd_init(void) 483 483 { 484 484 return platform_driver_register(&spitzkbd_driver); 485 485 }
+1 -1
drivers/input/mouse/Kconfig
··· 70 70 config MOUSE_PS2_LIFEBOOK 71 71 bool "Fujitsu Lifebook PS/2 mouse protocol extension" if EMBEDDED 72 72 default y 73 - depends on MOUSE_PS2 73 + depends on MOUSE_PS2 && X86 74 74 help 75 75 Say Y here if you have a Fujitsu B-series Lifebook PS/2 76 76 TouchScreen connected to your system.
+24 -8
drivers/input/mouse/elantech.c
··· 542 542 ps2_command(ps2dev, NULL, PSMOUSE_CMD_SETSCALE11) || 543 543 ps2_command(ps2dev, NULL, PSMOUSE_CMD_SETSCALE11) || 544 544 ps2_command(ps2dev, param, PSMOUSE_CMD_GETINFO)) { 545 - pr_err("elantech.c: sending Elantech magic knock failed.\n"); 545 + pr_debug("elantech.c: sending Elantech magic knock failed.\n"); 546 546 return -1; 547 547 } 548 548 ··· 551 551 * set of magic numbers 552 552 */ 553 553 if (param[0] != 0x3c || param[1] != 0x03 || param[2] != 0xc8) { 554 - pr_info("elantech.c: unexpected magic knock result 0x%02x, 0x%02x, 0x%02x.\n", 555 - param[0], param[1], param[2]); 554 + pr_debug("elantech.c: " 555 + "unexpected magic knock result 0x%02x, 0x%02x, 0x%02x.\n", 556 + param[0], param[1], param[2]); 557 + return -1; 558 + } 559 + 560 + /* 561 + * Query touchpad's firmware version and see if it reports known 562 + * value to avoid mis-detection. Logitech mice are known to respond 563 + * to Elantech magic knock and there might be more. 564 + */ 565 + if (synaptics_send_cmd(psmouse, ETP_FW_VERSION_QUERY, param)) { 566 + pr_debug("elantech.c: failed to query firmware version.\n"); 567 + return -1; 568 + } 569 + 570 + pr_debug("elantech.c: Elantech version query result 0x%02x, 0x%02x, 0x%02x.\n", 571 + param[0], param[1], param[2]); 572 + 573 + if (param[0] == 0 || param[1] != 0) {
574 + pr_debug("elantech.c: Probably not a real Elantech touchpad. Aborting.\n"); 556 575 return -1; 557 576 } 558 577 ··· 619 600 int i, error; 620 601 unsigned char param[3]; 621 602 622 - etd = kzalloc(sizeof(struct elantech_data), GFP_KERNEL); 623 - psmouse->private = etd; 603 + psmouse->private = etd = kzalloc(sizeof(struct elantech_data), GFP_KERNEL); 624 604 if (!etd) 625 605 return -1; 626 606 ··· 628 610 etd->parity[i] = etd->parity[i & (i - 1)] ^ 1; 629 611 630 612 /* 631 - * Find out what version hardware this is 613 + * Do the version query again so we can store the result 632 614 */ 633 615 if (synaptics_send_cmd(psmouse, ETP_FW_VERSION_QUERY, param)) { 634 616 pr_err("elantech.c: failed to query firmware version.\n"); 635 617 goto init_fail; 636 618 } 637 - pr_info("elantech.c: Elantech version query result 0x%02x, 0x%02x, 0x%02x.\n", 638 - param[0], param[1], param[2]); 639 619 etd->fw_version_maj = param[0]; 640 620 etd->fw_version_min = param[2]; 641 621
+1 -1
drivers/input/mouse/pxa930_trkball.c
··· 83 83 84 84 __raw_writel(v, trkball->mmio_base + TBCR); 85 85 86 - while (i--) { 86 + while (--i) { 87 87 if (__raw_readl(trkball->mmio_base + TBCR) == v) 88 88 break; 89 89 msleep(1);
+4 -5
drivers/input/mouse/synaptics.c
··· 182 182 183 183 static int synaptics_query_hardware(struct psmouse *psmouse) 184 184 { 185 - int retries = 0; 186 - 187 - while ((retries++ < 3) && psmouse_reset(psmouse)) 188 - /* empty */; 189 - 190 185 if (synaptics_identify(psmouse)) 191 186 return -1; 192 187 if (synaptics_model_id(psmouse)) ··· 577 582 struct synaptics_data *priv = psmouse->private; 578 583 struct synaptics_data old_priv = *priv; 579 584 585 + psmouse_reset(psmouse); 586 + 580 587 if (synaptics_detect(psmouse, 0)) 581 588 return -1; 582 589 ··· 636 639 psmouse->private = priv = kzalloc(sizeof(struct synaptics_data), GFP_KERNEL); 637 640 if (!priv) 638 641 return -1; 642 + 643 + psmouse_reset(psmouse); 639 644 640 645 if (synaptics_query_hardware(psmouse)) { 641 646 printk(KERN_ERR "Unable to query Synaptics hardware.\n");
+3 -3
drivers/input/serio/ambakmi.c
··· 57 57 struct amba_kmi_port *kmi = io->port_data; 58 58 unsigned int timeleft = 10000; /* timeout in 100ms */ 59 59 60 - while ((readb(KMISTAT) & KMISTAT_TXEMPTY) == 0 && timeleft--) 60 + while ((readb(KMISTAT) & KMISTAT_TXEMPTY) == 0 && --timeleft) 61 61 udelay(10); 62 62 63 63 if (timeleft) ··· 129 129 io->write = amba_kmi_write; 130 130 io->open = amba_kmi_open; 131 131 io->close = amba_kmi_close; 132 - strlcpy(io->name, dev->dev.bus_id, sizeof(io->name)); 133 - strlcpy(io->phys, dev->dev.bus_id, sizeof(io->phys)); 132 + strlcpy(io->name, dev_name(&dev->dev), sizeof(io->name)); 133 + strlcpy(io->phys, dev_name(&dev->dev), sizeof(io->phys)); 134 134 io->port_data = kmi; 135 135 io->dev.parent = &dev->dev; 136 136
+1 -1
drivers/input/serio/gscps2.c
··· 359 359 360 360 snprintf(serio->name, sizeof(serio->name), "GSC PS/2 %s", 361 361 (ps2port->id == GSC_ID_KEYBOARD) ? "keyboard" : "mouse"); 362 - strlcpy(serio->phys, dev->dev.bus_id, sizeof(serio->phys)); 362 + strlcpy(serio->phys, dev_name(&dev->dev), sizeof(serio->phys)); 363 363 serio->id.type = SERIO_8042; 364 364 serio->write = gscps2_write; 365 365 serio->open = gscps2_open;
+2 -2
drivers/input/serio/sa1111ps2.c
··· 246 246 serio->write = ps2_write; 247 247 serio->open = ps2_open; 248 248 serio->close = ps2_close; 249 - strlcpy(serio->name, dev->dev.bus_id, sizeof(serio->name)); 250 - strlcpy(serio->phys, dev->dev.bus_id, sizeof(serio->phys)); 249 + strlcpy(serio->name, dev_name(&dev->dev), sizeof(serio->name)); 250 + strlcpy(serio->phys, dev_name(&dev->dev), sizeof(serio->phys)); 251 251 serio->port_data = ps2if; 252 252 serio->dev.parent = &dev->dev; 253 253 ps2if->io = serio;
+1 -1
drivers/input/touchscreen/atmel_tsadcc.c
··· 236 236 ts_dev->bufferedmeasure = 0; 237 237 238 238 snprintf(ts_dev->phys, sizeof(ts_dev->phys), 239 - "%s/input0", pdev->dev.bus_id); 239 + "%s/input0", dev_name(&pdev->dev)); 240 240 241 241 input_dev->name = "atmel touch screen controller"; 242 242 input_dev->phys = ts_dev->phys;
+5 -4
drivers/input/touchscreen/corgi_ts.c
··· 268 268 #define corgits_resume NULL 269 269 #endif 270 270 271 - static int __init corgits_probe(struct platform_device *pdev) 271 + static int __devinit corgits_probe(struct platform_device *pdev) 272 272 { 273 273 struct corgi_ts *corgi_ts; 274 274 struct input_dev *input_dev; ··· 343 343 return err; 344 344 } 345 345 346 - static int corgits_remove(struct platform_device *pdev) 346 + static int __devexit corgits_remove(struct platform_device *pdev) 347 347 { 348 348 struct corgi_ts *corgi_ts = platform_get_drvdata(pdev); 349 349 ··· 352 352 corgi_ts->machinfo->put_hsync(); 353 353 input_unregister_device(corgi_ts->input); 354 354 kfree(corgi_ts); 355 + 355 356 return 0; 356 357 } 357 358 358 359 static struct platform_driver corgits_driver = { 359 360 .probe = corgits_probe, 360 - .remove = corgits_remove, 361 + .remove = __devexit_p(corgits_remove), 361 362 .suspend = corgits_suspend, 362 363 .resume = corgits_resume, 363 364 .driver = { ··· 367 366 }, 368 367 }; 369 368 370 - static int __devinit corgits_init(void) 369 + static int __init corgits_init(void) 371 370 { 372 371 return platform_driver_register(&corgits_driver); 373 372 }
+2 -1
drivers/input/touchscreen/tsc2007.c
··· 289 289 290 290 pdata->init_platform_hw(); 291 291 292 - snprintf(ts->phys, sizeof(ts->phys), "%s/input0", client->dev.bus_id); 292 + snprintf(ts->phys, sizeof(ts->phys), 293 + "%s/input0", dev_name(&client->dev)); 293 294 294 295 input_dev->name = "TSC2007 Touchscreen"; 295 296 input_dev->phys = ts->phys;
+18 -2
drivers/input/touchscreen/usbtouchscreen.c
··· 60 60 module_param(swap_xy, bool, 0644); 61 61 MODULE_PARM_DESC(swap_xy, "If set X and Y axes are swapped."); 62 62 63 + static int hwcalib_xy; 64 + module_param(hwcalib_xy, bool, 0644); 65 + MODULE_PARM_DESC(hwcalib_xy, "If set hw-calibrated X/Y are used if available"); 66 + 63 67 /* device specifc data/functions */ 64 68 struct usbtouch_usb; 65 69 struct usbtouch_device_info { ··· 122 118 123 119 #define USB_DEVICE_HID_CLASS(vend, prod) \ 124 120 .match_flags = USB_DEVICE_ID_MATCH_INT_CLASS \ 121 + | USB_DEVICE_ID_MATCH_INT_PROTOCOL \ 125 122 | USB_DEVICE_ID_MATCH_DEVICE, \ 126 123 .idVendor = (vend), \ 127 124 .idProduct = (prod), \ ··· 265 260 266 261 static int mtouch_read_data(struct usbtouch_usb *dev, unsigned char *pkt) 267 262 { 268 - dev->x = (pkt[8] << 8) | pkt[7]; 269 - dev->y = (pkt[10] << 8) | pkt[9]; 263 + if (hwcalib_xy) { 264 + dev->x = (pkt[4] << 8) | pkt[3]; 265 + dev->y = 0xffff - ((pkt[6] << 8) | pkt[5]); 266 + } else { 267 + dev->x = (pkt[8] << 8) | pkt[7]; 268 + dev->y = (pkt[10] << 8) | pkt[9]; 269 + } 270 270 dev->touch = (pkt[2] & 0x40) ? 1 : 0; 271 271 272 272 return 1; ··· 302 292 break; 303 293 if (ret != -EPIPE) 304 294 return ret; 295 + } 296 + 297 + /* Default min/max xy are the raw values, override if using hw-calib */ 298 + if (hwcalib_xy) { 299 + input_set_abs_params(usbtouch->input, ABS_X, 0, 0xffff, 0, 0); 300 + input_set_abs_params(usbtouch->input, ABS_Y, 0, 0xffff, 0, 0); 305 301 } 306 302 307 303 return 0;
+2 -1
drivers/md/raid1.c
··· 1237 1237 update_head_pos(mirror, r1_bio); 1238 1238 1239 1239 if (atomic_dec_and_test(&r1_bio->remaining)) { 1240 - md_done_sync(mddev, r1_bio->sectors, uptodate); 1240 + sector_t s = r1_bio->sectors; 1241 1241 put_buf(r1_bio); 1242 + md_done_sync(mddev, s, uptodate); 1242 1243 } 1243 1244 } 1244 1245
+10 -9
drivers/md/raid10.c
··· 1236 1236 /* for reconstruct, we always reschedule after a read. 1237 1237 * for resync, only after all reads 1238 1238 */ 1239 + rdev_dec_pending(conf->mirrors[d].rdev, conf->mddev); 1239 1240 if (test_bit(R10BIO_IsRecover, &r10_bio->state) || 1240 1241 atomic_dec_and_test(&r10_bio->remaining)) { 1241 1242 /* we have read all the blocks, ··· 1244 1243 */ 1245 1244 reschedule_retry(r10_bio); 1246 1245 } 1247 - rdev_dec_pending(conf->mirrors[d].rdev, conf->mddev); 1248 1246 } 1249 1247 1250 1248 static void end_sync_write(struct bio *bio, int error) ··· 1264 1264 1265 1265 update_head_pos(i, r10_bio); 1266 1266 1267 + rdev_dec_pending(conf->mirrors[d].rdev, mddev); 1267 1268 while (atomic_dec_and_test(&r10_bio->remaining)) { 1268 1269 if (r10_bio->master_bio == NULL) { 1269 1270 /* the primary of several recovery bios */ 1270 - md_done_sync(mddev, r10_bio->sectors, 1); 1271 + sector_t s = r10_bio->sectors; 1271 1272 put_buf(r10_bio); 1273 + md_done_sync(mddev, s, 1); 1272 1274 break; 1273 1275 } else { 1274 1276 r10bio_t *r10_bio2 = (r10bio_t *)r10_bio->master_bio; ··· 1278 1276 r10_bio = r10_bio2; 1279 1277 } 1280 1278 } 1281 - rdev_dec_pending(conf->mirrors[d].rdev, mddev); 1282 1279 } 1283 1280 1284 1281 /* ··· 1750 1749 if (!go_faster && conf->nr_waiting) 1751 1750 msleep_interruptible(1000); 1752 1751 1753 - bitmap_cond_end_sync(mddev->bitmap, sector_nr); 1754 - 1755 1752 /* Again, very different code for resync and recovery. 1756 1753 * Both must result in an r10bio with a list of bios that 1757 1754 * have bi_end_io, bi_sector, bi_bdev set, ··· 1885 1886 /* resync. Schedule a read for every block at this virt offset */ 1886 1887 int count = 0; 1887 1888 1889 + bitmap_cond_end_sync(mddev->bitmap, sector_nr); 1890 + 1888 1891 if (!bitmap_start_sync(mddev->bitmap, sector_nr, 1889 1892 &sync_blocks, mddev->degraded) && 1890 1893 !conf->fullsync && !test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery)) { ··· 2011 2010 /* There is nowhere to write, so all non-sync 2012 2011 * drives must be failed, so try the next chunk... 2013 2012 */ 2014 - { 2015 - sector_t sec = max_sector - sector_nr; 2016 - sectors_skipped += sec; 2013 + if (sector_nr + max_sync < max_sector) 2014 + max_sector = sector_nr + max_sync; 2015 + 2016 + sectors_skipped += (max_sector - sector_nr); 2017 2017 chunks_skipped ++; 2018 2018 sector_nr = max_sector; 2019 2019 goto skipped; 2020 - } 2021 2020 } 2022 2021 2023 2022 static int run(mddev_t *mddev)
+1
drivers/media/dvb/b2c2/flexcop-hw-filter.c
··· 192 192 193 193 return 0; 194 194 } 195 + EXPORT_SYMBOL(flexcop_pid_feed_control); 195 196 196 197 void flexcop_hw_filter_init(struct flexcop_device *fc) 197 198 {
+40 -21
drivers/media/dvb/b2c2/flexcop-pci.c
··· 13 13 module_param(enable_pid_filtering, int, 0444); 14 14 MODULE_PARM_DESC(enable_pid_filtering, "enable hardware pid filtering: supported values: 0 (fullts), 1"); 15 15 16 - static int irq_chk_intv; 16 + static int irq_chk_intv = 100; 17 17 module_param(irq_chk_intv, int, 0644); 18 - MODULE_PARM_DESC(irq_chk_intv, "set the interval for IRQ watchdog (currently just debugging)."); 18 + MODULE_PARM_DESC(irq_chk_intv, "set the interval for IRQ streaming watchdog."); 19 19 20 20 #ifdef CONFIG_DVB_B2C2_FLEXCOP_DEBUG 21 21 #define dprintk(level,args...) \ ··· 34 34 35 35 static int debug; 36 36 module_param(debug, int, 0644); 37 - MODULE_PARM_DESC(debug, "set debug level (1=info,2=regs,4=TS,8=irqdma (|-able))." DEBSTATUS); 37 + MODULE_PARM_DESC(debug, 38 + "set debug level (1=info,2=regs,4=TS,8=irqdma,16=check (|-able))." 39 + DEBSTATUS); 38 40 39 41 #define DRIVER_VERSION "0.1" 40 42 #define DRIVER_NAME "Technisat/B2C2 FlexCop II/IIb/III Digital TV PCI Driver" ··· 60 58 int active_dma1_addr; /* 0 = addr0 of dma1; 1 = addr1 of dma1 */ 61 59 u32 last_dma1_cur_pos; /* position of the pointer last time the timer/packet irq occured */ 62 60 int count; 61 + int count_prev; 62 + int stream_problem; 63 63 64 64 spinlock_t irq_lock; 65 65 ··· 107 103 container_of(work, struct flexcop_pci, irq_check_work.work); 108 104 struct flexcop_device *fc = fc_pci->fc_dev; 109 105 110 - flexcop_ibi_value v = fc->read_ibi_reg(fc,sram_dest_reg_714); 106 + if (fc->feedcount) { 111 107 112 - flexcop_dump_reg(fc_pci->fc_dev,dma1_000,4); 108 + if (fc_pci->count == fc_pci->count_prev) { 109 + deb_chk("no IRQ since the last check\n"); 110 + if (fc_pci->stream_problem++ == 3) { 111 + struct dvb_demux_feed *feed; 113 112 114 - if (v.sram_dest_reg_714.net_ovflow_error) 115 - deb_chk("sram net_ovflow_error\n"); 116 - if (v.sram_dest_reg_714.media_ovflow_error) 117 - deb_chk("sram media_ovflow_error\n"); 118 - if (v.sram_dest_reg_714.cai_ovflow_error) 119 - deb_chk("sram cai_ovflow_error\n"); 120 - if (v.sram_dest_reg_714.cai_ovflow_error) 121 - deb_chk("sram cai_ovflow_error\n"); 113 + spin_lock_irq(&fc->demux.lock); 114 + list_for_each_entry(feed, &fc->demux.feed_list, 115 + list_head) { 116 + flexcop_pid_feed_control(fc, feed, 0); 117 + } 118 + 119 + list_for_each_entry(feed, &fc->demux.feed_list, 120 + list_head) { 121 + flexcop_pid_feed_control(fc, feed, 1); 122 + } 123 + spin_unlock_irq(&fc->demux.lock); 124 + 125 + fc_pci->stream_problem = 0; 126 + } 127 + } else { 128 + fc_pci->stream_problem = 0; 129 + fc_pci->count_prev = fc_pci->count; 130 + } 131 + } 122 132 123 133 schedule_delayed_work(&fc_pci->irq_check_work, 124 134 msecs_to_jiffies(irq_chk_intv < 100 ? 100 : irq_chk_intv)); ··· 234 216 flexcop_dma_control_timer_irq(fc,FC_DMA_1,1); 235 217 deb_irq("IRQ enabled\n"); 236 218 219 + fc_pci->count_prev = fc_pci->count; 220 + 237 221 // fc_pci->active_dma1_addr = 0; 238 222 // flexcop_dma_control_size_irq(fc,FC_DMA_1,1); 239 223 240 - if (irq_chk_intv > 0) 241 - schedule_delayed_work(&fc_pci->irq_check_work, 242 - msecs_to_jiffies(irq_chk_intv < 100 ? 100 : irq_chk_intv)); 243 224 } else { 244 - if (irq_chk_intv > 0) 245 - cancel_delayed_work(&fc_pci->irq_check_work); 246 - 247 225 flexcop_dma_control_timer_irq(fc,FC_DMA_1,0); 248 226 deb_irq("IRQ disabled\n"); ··· 312 298 if ((ret = request_irq(fc_pci->pdev->irq, flexcop_pci_isr, 313 299 IRQF_SHARED, DRIVER_NAME, fc_pci)) != 0) 314 300 goto err_pci_iounmap; 315 - 316 - 317 301 318 302 fc_pci->init_state |= FC_PCI_INIT; 319 303 return ret; ··· 387 375 388 376 INIT_DELAYED_WORK(&fc_pci->irq_check_work, flexcop_pci_irq_check_work); 389 377 378 + if (irq_chk_intv > 0) 379 + schedule_delayed_work(&fc_pci->irq_check_work, 380 + msecs_to_jiffies(irq_chk_intv < 100 ? 100 : irq_chk_intv)); 381 + 390 382 return ret; 391 383 392 384 err_fc_exit: ··· 408 392 static void flexcop_pci_remove(struct pci_dev *pdev) 409 393 { 410 394 struct flexcop_pci *fc_pci = pci_get_drvdata(pdev); 395 + 396 + if (irq_chk_intv > 0) 397 + cancel_delayed_work(&fc_pci->irq_check_work); 411 398 412 399 flexcop_pci_dma_exit(fc_pci); 413 400 flexcop_device_exit(fc_pci->fc_dev);
+1 -2
drivers/media/dvb/b2c2/flexcop.c
··· 212 212 v210.sw_reset_210.Block_reset_enable = 0xb2; 213 213 214 214 fc->write_ibi_reg(fc,sw_reset_210,v210); 215 - msleep(1); 216 - 215 + udelay(1000); 217 216 fc->write_ibi_reg(fc,ctrl_208,v208_save); 218 217 } 219 218
+2
drivers/media/video/em28xx/em28xx-audio.c
··· 463 463 pcm->info_flags = 0; 464 464 pcm->private_data = dev; 465 465 strcpy(pcm->name, "Empia 28xx Capture"); 466 + 467 + snd_card_set_dev(card, &dev->udev->dev); 466 468 strcpy(card->driver, "Empia Em28xx Audio"); 467 469 strcpy(card->shortname, "Em28xx Audio"); 468 470 strcpy(card->longname, "Empia Em28xx Audio");
+13 -13
drivers/media/video/pxa_camera.c
··· 1155 1155 { 1156 1156 struct soc_camera_host *ici = to_soc_camera_host(icd->dev.parent); 1157 1157 struct pxa_camera_dev *pcdev = ici->priv; 1158 - const struct soc_camera_data_format *host_fmt, *cam_fmt = NULL; 1159 - const struct soc_camera_format_xlate *xlate; 1158 + const struct soc_camera_data_format *cam_fmt = NULL; 1159 + const struct soc_camera_format_xlate *xlate = NULL; 1160 1160 struct soc_camera_sense sense = { 1161 1161 .master_clock = pcdev->mclk, 1162 1162 .pixel_clock_max = pcdev->ciclk / 4, 1163 1163 }; 1164 - int ret, buswidth; 1164 + int ret; 1165 1165 1166 - xlate = soc_camera_xlate_by_fourcc(icd, pixfmt); 1167 - if (!xlate) { 1168 - dev_warn(&ici->dev, "Format %x not found\n", pixfmt); 1169 - return -EINVAL; 1166 + if (pixfmt) { 1167 + xlate = soc_camera_xlate_by_fourcc(icd, pixfmt); 1168 + if (!xlate) { 1169 + dev_warn(&ici->dev, "Format %x not found\n", pixfmt); 1170 + return -EINVAL; 1171 + } 1172 + 1173 + cam_fmt = xlate->cam_fmt; 1170 1174 } 1171 - 1172 - buswidth = xlate->buswidth; 1173 - host_fmt = xlate->host_fmt; 1174 - cam_fmt = xlate->cam_fmt; 1175 1175 1176 1176 /* If PCLK is used to latch data from the sensor, check sense */ 1177 1177 if (pcdev->platform_flags & PXA_CAMERA_PCLK_EN) ··· 1201 1201 } 1202 1202 1203 1203 if (pixfmt && !ret) { 1204 - icd->buswidth = buswidth; 1205 - icd->current_fmt = host_fmt; 1204 + icd->buswidth = xlate->buswidth; 1205 + icd->current_fmt = xlate->host_fmt; 1206 1206 } 1207 1207 1208 1208 return ret;
+5 -8
drivers/media/video/sh_mobile_ceu_camera.c
··· 603 603 const struct soc_camera_format_xlate *xlate; 604 604 int ret; 605 605 606 + if (!pixfmt) 607 + return icd->ops->set_fmt(icd, pixfmt, rect); 608 + 606 609 xlate = soc_camera_xlate_by_fourcc(icd, pixfmt); 607 610 if (!xlate) { 608 611 dev_warn(&ici->dev, "Format %x not found\n", pixfmt); 609 612 return -EINVAL; 610 613 } 611 614 612 - switch (pixfmt) { 613 - case 0: /* Only geometry change */ 614 - ret = icd->ops->set_fmt(icd, pixfmt, rect); 615 - break; 616 - default: 617 - ret = icd->ops->set_fmt(icd, xlate->cam_fmt->fourcc, rect); 618 - } 615 + ret = icd->ops->set_fmt(icd, xlate->cam_fmt->fourcc, rect); 619 616 620 - if (pixfmt && !ret) { 617 + if (!ret) { 621 618 icd->buswidth = xlate->buswidth; 622 619 icd->current_fmt = xlate->host_fmt; 623 620 pcdev->camera_fmt = xlate->cam_fmt;
+6 -4
drivers/media/video/uvc/uvc_status.c
··· 46 46 usb_to_input_id(udev, &input->id); 47 47 input->dev.parent = &dev->intf->dev; 48 48 49 - set_bit(EV_KEY, input->evbit); 50 - set_bit(BTN_0, input->keybit); 49 + __set_bit(EV_KEY, input->evbit); 50 + __set_bit(KEY_CAMERA, input->keybit); 51 51 52 52 if ((ret = input_register_device(input)) < 0) 53 53 goto error; ··· 70 70 static void uvc_input_report_key(struct uvc_device *dev, unsigned int code, 71 71 int value) 72 72 { 73 - if (dev->input) 73 + if (dev->input) { 74 74 input_report_key(dev->input, code, value); 75 + input_sync(dev->input); 76 + } 75 77 } 76 78 77 79 #else ··· 98 96 return; 99 97 uvc_trace(UVC_TRACE_STATUS, "Button (intf %u) %s len %d\n", 100 98 data[1], data[3] ? "pressed" : "released", len); 101 - uvc_input_report_key(dev, BTN_0, data[3]); 99 + uvc_input_report_key(dev, KEY_CAMERA, data[3]); 102 100 } else { 103 101 uvc_trace(UVC_TRACE_STATUS, "Stream %u error event %02x %02x " 104 102 "len %d.\n", data[1], data[2], data[3], len);
+2 -2
drivers/message/fusion/mptbase.c
··· 91 91 controllers (default=0)"); 92 92 93 93 static int mpt_msi_enable_sas; 94 - module_param(mpt_msi_enable_sas, int, 1); 94 + module_param(mpt_msi_enable_sas, int, 0); 95 95 MODULE_PARM_DESC(mpt_msi_enable_sas, " Enable MSI Support for SAS \ 96 - controllers (default=1)"); 96 + controllers (default=0)"); 97 97 98 98 99 99 static int mpt_channel_mapping;
+2 -1
drivers/misc/hpilo.c
··· 710 710 711 711 static struct pci_device_id ilo_devices[] = { 712 712 { PCI_DEVICE(PCI_VENDOR_ID_COMPAQ, 0xB204) }, 713 + { PCI_DEVICE(PCI_VENDOR_ID_HP, 0x3307) }, 713 714 { } 714 715 }; 715 716 MODULE_DEVICE_TABLE(pci, ilo_devices); ··· 759 758 class_destroy(ilo_class); 760 759 } 761 760 762 - MODULE_VERSION("0.06"); 761 + MODULE_VERSION("1.0"); 763 762 MODULE_ALIAS(ILO_NAME); 764 763 MODULE_DESCRIPTION(ILO_NAME); 765 764 MODULE_AUTHOR("David Altobelli <david.altobelli@hp.com>");
+1
drivers/mmc/host/sdhci-pci.c
··· 107 107 108 108 static const struct sdhci_pci_fixes sdhci_cafe = { 109 109 .quirks = SDHCI_QUIRK_NO_SIMULT_VDD_AND_POWER | 110 + SDHCI_QUIRK_NO_BUSY_IRQ | 110 111 SDHCI_QUIRK_BROKEN_TIMEOUT_VAL, 111 112 }; 112 113
+4 -1
drivers/mmc/host/sdhci.c
··· 1291 1291 if (host->cmd->data) 1292 1292 DBG("Cannot wait for busy signal when also " 1293 1293 "doing a data transfer"); 1294 - else 1294 + else if (!(host->quirks & SDHCI_QUIRK_NO_BUSY_IRQ)) 1295 1295 return; 1296 + 1297 + /* The controller does not support the end-of-busy IRQ, 1298 + * fall through and take the SDHCI_INT_RESPONSE */ 1296 1299 } 1297 1300 1298 1301 if (intmask & SDHCI_INT_RESPONSE)
+2
drivers/mmc/host/sdhci.h
··· 208 208 #define SDHCI_QUIRK_BROKEN_TIMEOUT_VAL (1<<12) 209 209 /* Controller has an issue with buffer bits for small transfers */ 210 210 #define SDHCI_QUIRK_BROKEN_SMALL_PIO (1<<13) 211 + /* Controller does not provide transfer-complete interrupt when not busy */ 212 + #define SDHCI_QUIRK_NO_BUSY_IRQ (1<<14) 211 213 212 214 int irq; /* Device IRQ */ 213 215 void __iomem * ioaddr; /* Mapped address */
+8
drivers/mtd/chips/map_rom.c
··· 19 19 static int maprom_write (struct mtd_info *, loff_t, size_t, size_t *, const u_char *); 20 20 static void maprom_nop (struct mtd_info *); 21 21 static struct mtd_info *map_rom_probe(struct map_info *map); 22 + static int maprom_erase (struct mtd_info *mtd, struct erase_info *info); 22 23 23 24 static struct mtd_chip_driver maprom_chipdrv = { 24 25 .probe = map_rom_probe, ··· 43 42 mtd->read = maprom_read; 44 43 mtd->write = maprom_write; 45 44 mtd->sync = maprom_nop; 45 + mtd->erase = maprom_erase; 46 46 mtd->flags = MTD_CAP_ROM; 47 47 mtd->erasesize = map->size; 48 48 mtd->writesize = 1; ··· 71 69 { 72 70 printk(KERN_NOTICE "maprom_write called\n"); 73 71 return -EIO; 72 + } 73 + 74 + static int maprom_erase (struct mtd_info *mtd, struct erase_info *info) 75 + { 76 + /* We do our best 8) */ 77 + return -EROFS; 74 78 } 75 79 76 80 static int __init map_rom_init(void)
+10 -4
drivers/mtd/devices/slram.c
··· 267 267 if (*(szlength) != '+') { 268 268 devlength = simple_strtoul(szlength, &buffer, 0); 269 269 devlength = handle_unit(devlength, buffer) - devstart; 270 + if (devlength < devstart) 271 + goto err_out; 272 + 273 + devlength -= devstart; 270 274 } else { 271 275 devlength = simple_strtoul(szlength + 1, &buffer, 0); 272 276 devlength = handle_unit(devlength, buffer); 273 277 } 274 278 T("slram: devname=%s, devstart=0x%lx, devlength=0x%lx\n", 275 279 devname, devstart, devlength); 276 - if ((devstart < 0) || (devlength < 0) || (devlength % SLRAM_BLK_SZ != 0)) { 277 - E("slram: Illegal start / length parameter.\n"); 278 - return(-EINVAL); 279 - } 280 + if (devlength % SLRAM_BLK_SZ != 0) 281 + goto err_out; 280 282 281 283 if ((devstart = register_device(devname, devstart, devlength))){ 282 284 unregister_devices(); 283 285 return((int)devstart); 284 286 } 285 287 return(0); 288 + 289 + err_out: 290 + E("slram: Illegal length parameter.\n"); 291 + return(-EINVAL); 286 292 } 287 293 288 294 #ifndef MODULE
+1
drivers/mtd/lpddr/Kconfig
··· 12 12 DDR memories, intended for battery-operated systems. 13 13 14 14 config MTD_QINFO_PROBE 15 + depends on MTD_LPDDR 15 16 tristate "Detect flash chips by QINFO probe" 16 17 help 17 18 Device Information for LPDDR chips is offered through the Overlay
+1 -1
drivers/mtd/maps/Kconfig
··· 491 491 492 492 config MTD_BFIN_ASYNC 493 493 tristate "Blackfin BF533-STAMP Flash Chip Support" 494 - depends on BFIN533_STAMP && MTD_CFI 494 + depends on BFIN533_STAMP && MTD_CFI && MTD_COMPLEX_MAPPINGS 495 495 select MTD_PARTITIONS 496 496 default y 497 497 help
+5 -1
drivers/mtd/maps/bfin-async-flash.c
··· 152 152 153 153 if (gpio_request(state->enet_flash_pin, DRIVER_NAME)) { 154 154 pr_devinit(KERN_ERR DRIVER_NAME ": Failed to request gpio %d\n", state->enet_flash_pin); 155 + kfree(state); 155 156 return -EBUSY; 156 157 } 157 158 gpio_direction_output(state->enet_flash_pin, 1); 158 159 159 160 pr_devinit(KERN_NOTICE DRIVER_NAME ": probing %d-bit flash bus\n", state->map.bankwidth * 8); 160 161 state->mtd = do_map_probe(memory->name, &state->map); 161 - if (!state->mtd) 162 + if (!state->mtd) { 163 + gpio_free(state->enet_flash_pin); 164 + kfree(state); 162 165 return -ENXIO; 166 + } 163 167 164 168 #ifdef CONFIG_MTD_PARTITIONS 165 169 ret = parse_mtd_partitions(state->mtd, part_probe_types, &pdata->parts, 0);
+1 -1
drivers/mtd/maps/ck804xrom.c
··· 342 342 { 0, } 343 343 }; 344 344 345 + #if 0 345 346 MODULE_DEVICE_TABLE(pci, ck804xrom_pci_tbl); 346 347 347 - #if 0 348 348 static struct pci_driver ck804xrom_driver = { 349 349 .name = MOD_NAME, 350 350 .id_table = ck804xrom_pci_tbl,
+19 -19
drivers/mtd/maps/physmap.c
··· 29 29 struct map_info map[MAX_RESOURCES]; 30 30 #ifdef CONFIG_MTD_PARTITIONS 31 31 int nr_parts; 32 + struct mtd_partition *parts; 32 33 #endif 33 34 }; 34 35 ··· 46 45 47 46 physmap_data = dev->dev.platform_data; 48 47 49 - #ifdef CONFIG_MTD_CONCAT 50 - if (info->cmtd != info->mtd[0]) { 48 + #ifdef CONFIG_MTD_PARTITIONS 49 + if (info->nr_parts) { 50 + del_mtd_partitions(info->cmtd); 51 + kfree(info->parts); 52 + } else if (physmap_data->nr_parts) 53 + del_mtd_partitions(info->cmtd); 54 + else 51 55 del_mtd_device(info->cmtd); 56 + #else 57 + del_mtd_device(info->cmtd); 58 + #endif 59 + 60 + #ifdef CONFIG_MTD_CONCAT 61 + if (info->cmtd != info->mtd[0]) 52 62 mtd_concat_destroy(info->cmtd); 53 - } 54 63 #endif 55 64 56 65 for (i = 0; i < MAX_RESOURCES; i++) { 57 - if (info->mtd[i] != NULL) { 58 - #ifdef CONFIG_MTD_PARTITIONS 59 - if (info->nr_parts || physmap_data->nr_parts) 60 - del_mtd_partitions(info->mtd[i]); 61 - else 62 - del_mtd_device(info->mtd[i]); 63 - #else 64 - del_mtd_device(info->mtd[i]); 65 - #endif 66 + if (info->mtd[i] != NULL) 66 67 map_destroy(info->mtd[i]); 67 - } 68 68 } 69 69 return 0; 70 70 } ··· 88 86 int err = 0; 89 87 int i; 90 88 int devices_found = 0; 91 - #ifdef CONFIG_MTD_PARTITIONS 92 - struct mtd_partition *parts; 93 - #endif 94 89 95 90 physmap_data = dev->dev.platform_data; 96 91 if (physmap_data == NULL) ··· 166 167 goto err_out; 167 168 168 169 #ifdef CONFIG_MTD_PARTITIONS 169 - err = parse_mtd_partitions(info->cmtd, part_probe_types, &parts, 0); 170 + err = parse_mtd_partitions(info->cmtd, part_probe_types, 171 + &info->parts, 0); 170 172 if (err > 0) { 171 - add_mtd_partitions(info->cmtd, parts, err); 172 - kfree(parts); 173 + add_mtd_partitions(info->cmtd, info->parts, err); 174 + info->nr_parts = err; 173 175 return 0; 174 176 } 175 177
+9 -4
drivers/net/b44.c
··· 1264 1264 static void b44_chip_reset(struct b44 *bp, int reset_kind) 1265 1265 { 1266 1266 struct ssb_device *sdev = bp->sdev; 1267 + bool was_enabled; 1267 1268 1268 - if (ssb_device_is_enabled(bp->sdev)) { 1269 + was_enabled = ssb_device_is_enabled(bp->sdev); 1270 + 1271 + ssb_device_enable(bp->sdev, 0); 1272 + ssb_pcicore_dev_irqvecs_enable(&sdev->bus->pcicore, sdev); 1273 + 1274 + if (was_enabled) { 1269 1275 bw32(bp, B44_RCV_LAZY, 0); 1270 1276 bw32(bp, B44_ENET_CTRL, ENET_CTRL_DISABLE); 1271 1277 b44_wait_bit(bp, B44_ENET_CTRL, ENET_CTRL_DISABLE, 200, 1); ··· 1283 1277 } 1284 1278 bw32(bp, B44_DMARX_CTRL, 0); 1285 1279 bp->rx_prod = bp->rx_cons = 0; 1286 - } else 1287 - ssb_pcicore_dev_irqvecs_enable(&sdev->bus->pcicore, sdev); 1280 + } 1288 1281 1289 - ssb_device_enable(bp->sdev, 0); 1290 1282 b44_clear_stats(bp); 1291 1283 1292 1284 /* ··· 2240 2236 struct net_device *dev = ssb_get_drvdata(sdev); 2241 2237 2242 2238 unregister_netdev(dev); 2239 + ssb_device_disable(sdev, 0); 2243 2240 ssb_bus_may_powerdown(sdev->bus); 2244 2241 free_netdev(dev); 2245 2242 ssb_pcihost_set_power_state(sdev, PCI_D3hot);
+1 -1
drivers/net/gianfar.c
··· 1284 1284 spin_lock_irqsave(&priv->txlock, flags); 1285 1285 1286 1286 /* check if there is space to queue this packet */ 1287 - if (nr_frags > priv->num_txbdfree) { 1287 + if ((nr_frags+1) > priv->num_txbdfree) { 1288 1288 /* no space, stop the queue */ 1289 1289 netif_stop_queue(dev); 1290 1290 dev->stats.tx_fifo_errors++;
+1 -1
drivers/net/hp-plus.c
··· 467 467 if (this_dev != 0) break; /* only autoprobe 1st one */ 468 468 printk(KERN_NOTICE "hp-plus.c: Presently autoprobing (not recommended) for a single card.\n"); 469 469 } 470 - dev = alloc_ei_netdev(); 470 + dev = alloc_eip_netdev(); 471 471 if (!dev) 472 472 break; 473 473 dev->irq = irq[this_dev];
+12 -4
drivers/net/netxen/netxen_nic_main.c
··· 588 588 adapter->pci_mem_read = netxen_nic_pci_mem_read_2M; 589 589 adapter->pci_mem_write = netxen_nic_pci_mem_write_2M; 590 590 591 - mem_ptr0 = ioremap(mem_base, mem_len); 591 + mem_ptr0 = pci_ioremap_bar(pdev, 0); 592 + if (mem_ptr0 == NULL) { 593 + dev_err(&pdev->dev, "failed to map PCI bar 0\n"); 594 + return -EIO; 595 + } 596 + 592 597 pci_len0 = mem_len; 593 598 first_page_group_start = 0; 594 599 first_page_group_end = 0; ··· 800 795 * See if the firmware gave us a virtual-physical port mapping. 801 796 */ 802 797 adapter->physical_port = adapter->portnum; 803 - i = adapter->pci_read_normalize(adapter, CRB_V2P(adapter->portnum)); 804 - if (i != 0x55555555) 805 - adapter->physical_port = i; 798 + if (adapter->fw_major < 4) { 799 + i = adapter->pci_read_normalize(adapter, 800 + CRB_V2P(adapter->portnum)); 801 + if (i != 0x55555555) 802 + adapter->physical_port = i; 803 + } 806 804 807 805 adapter->flags &= ~(NETXEN_NIC_MSI_ENABLED | NETXEN_NIC_MSIX_ENABLED); 808 806
+112 -2
drivers/net/r8169.c
··· 81 81 #define RTL8169_TX_TIMEOUT (6*HZ) 82 82 #define RTL8169_PHY_TIMEOUT (10*HZ) 83 83 84 - #define RTL_EEPROM_SIG cpu_to_le32(0x8129) 85 - #define RTL_EEPROM_SIG_MASK cpu_to_le32(0xffff) 84 + #define RTL_EEPROM_SIG 0x8129 86 85 #define RTL_EEPROM_SIG_ADDR 0x0000 86 + #define RTL_EEPROM_MAC_ADDR 0x0007 87 87 88 88 /* write/read MMIO register */ 89 89 #define RTL_W8(reg, val8) writeb ((val8), ioaddr + (reg)) ··· 293 293 /* Cfg9346Bits */ 294 294 Cfg9346_Lock = 0x00, 295 295 Cfg9346_Unlock = 0xc0, 296 + Cfg9346_Program = 0x80, /* Programming mode */ 297 + Cfg9346_EECS = 0x08, /* Chip select */ 298 + Cfg9346_EESK = 0x04, /* Serial data clock */ 299 + Cfg9346_EEDI = 0x02, /* Data input */ 300 + Cfg9346_EEDO = 0x01, /* Data output */ 296 301 297 302 /* rx_mode_bits */ 298 303 AcceptErr = 0x20, ··· 310 305 /* RxConfigBits */ 311 306 RxCfgFIFOShift = 13, 312 307 RxCfgDMAShift = 8, 308 + RxCfg9356SEL = 6, /* EEPROM type: 0 = 9346, 1 = 9356 */ 313 309 314 310 /* TxConfigBits */ 315 311 TxInterFrameGapShift = 24, ··· 1969 1963 1970 1964 }; 1971 1965 1966 + /* Delay between EEPROM clock transitions. Force out buffered PCI writes. */ 1967 + #define RTL_EEPROM_DELAY() RTL_R8(Cfg9346) 1968 + #define RTL_EEPROM_READ_CMD 6 1969 + 1970 + /* read 16bit word stored in EEPROM. EEPROM is addressed by words. */ 1971 + static u16 rtl_eeprom_read(void __iomem *ioaddr, int addr) 1972 + { 1973 + u16 result = 0; 1974 + int cmd, cmd_len, i; 1975 + 1976 + /* check for EEPROM address size (in bits) */ 1977 + if (RTL_R32(RxConfig) & (1 << RxCfg9356SEL)) { 1978 + /* EEPROM is 93C56 */ 1979 + cmd_len = 3 + 8; /* 3 bits for command id and 8 for address */ 1980 + cmd = (RTL_EEPROM_READ_CMD << 8) | (addr & 0xff); 1981 + } else { 1982 + /* EEPROM is 93C46 */ 1983 + cmd_len = 3 + 6; /* 3 bits for command id and 6 for address */ 1984 + cmd = (RTL_EEPROM_READ_CMD << 6) | (addr & 0x3f); 1985 + } 1986 + 1987 + /* enter programming mode */ 1988 + RTL_W8(Cfg9346, Cfg9346_Program | Cfg9346_EECS); 1989 + RTL_EEPROM_DELAY(); 1990 + 1991 + /* write command and requested address */ 1992 + while (cmd_len--) { 1993 + u8 x = Cfg9346_Program | Cfg9346_EECS; 1994 + 1995 + x |= (cmd & (1 << cmd_len)) ? Cfg9346_EEDI : 0; 1996 + 1997 + /* write a bit */ 1998 + RTL_W8(Cfg9346, x); 1999 + RTL_EEPROM_DELAY(); 2000 + 2001 + /* raise clock */ 2002 + RTL_W8(Cfg9346, x | Cfg9346_EESK); 2003 + RTL_EEPROM_DELAY(); 2004 + } 2005 + 2006 + /* lower clock */ 2007 + RTL_W8(Cfg9346, Cfg9346_Program | Cfg9346_EECS); 2008 + RTL_EEPROM_DELAY(); 2009 + 2010 + /* read back 16bit value */ 2011 + for (i = 16; i > 0; i--) { 2012 + /* raise clock */ 2013 + RTL_W8(Cfg9346, Cfg9346_Program | Cfg9346_EECS | Cfg9346_EESK); 2014 + RTL_EEPROM_DELAY(); 2015 + 2016 + result <<= 1; 2017 + result |= (RTL_R8(Cfg9346) & Cfg9346_EEDO) ? 1 : 0; 2018 + 2019 + /* lower clock */ 2020 + RTL_W8(Cfg9346, Cfg9346_Program | Cfg9346_EECS); 2021 + RTL_EEPROM_DELAY(); 2022 + } 2023 + 2024 + RTL_W8(Cfg9346, Cfg9346_Program); 2025 + /* leave programming mode */ 2026 + RTL_W8(Cfg9346, Cfg9346_Lock); 2027 + 2028 + return result; 2029 + } 2030 + 2031 + static void rtl_init_mac_address(struct rtl8169_private *tp, 2032 + void __iomem *ioaddr) 2033 + { 2034 + struct pci_dev *pdev = tp->pci_dev; 2035 + u16 x; 2036 + u8 mac[8]; 2037 + 2038 + /* read EEPROM signature */ 2039 + x = rtl_eeprom_read(ioaddr, RTL_EEPROM_SIG_ADDR); 2040 + 2041 + if (x != RTL_EEPROM_SIG) { 2042 + dev_info(&pdev->dev, "Missing EEPROM signature: %04x\n", x); 2043 + return; 2044 + } 2045 + 2046 + /* read MAC address */ 2047 + x = rtl_eeprom_read(ioaddr, RTL_EEPROM_MAC_ADDR); 2048 + mac[0] = x & 0xff; 2049 + mac[1] = x >> 8; 2050 + x = rtl_eeprom_read(ioaddr, RTL_EEPROM_MAC_ADDR + 1); 2051 + mac[2] = x & 0xff; 2052 + mac[3] = x >> 8; 2053 + x = rtl_eeprom_read(ioaddr, RTL_EEPROM_MAC_ADDR + 2); 2054 + mac[4] = x & 0xff; 2055 + mac[5] = x >> 8; 2056 + 2057 + if (netif_msg_probe(tp)) { 2058 + DECLARE_MAC_BUF(buf); 2059 + 2060 + dev_info(&pdev->dev, "MAC address found in EEPROM: %s\n", 2061 + print_mac(buf, mac)); 2062 + } 2063 + 2064 + if (is_valid_ether_addr(mac)) 2065 + rtl_rar_set(tp, mac); 2066 + } 2067 + 1972 2068 static int __devinit 1973 2069 rtl8169_init_one(struct pci_dev *pdev, const struct pci_device_id *ent) 1974 2070 { ··· 2248 2140 spin_lock_init(&tp->lock); 2249 2141 2250 2142 tp->mmio_addr = ioaddr; 2143 + 2144 + rtl_init_mac_address(tp, ioaddr); 2251 2145 2252 2146 /* Get MAC address */ 2253 2147 for (i = 0; i < MAC_ADDR_LEN; i++)
+8
drivers/net/usb/asix.c
··· 1451 1451 // Cables-to-Go USB Ethernet Adapter 1452 1452 USB_DEVICE(0x0b95, 0x772a), 1453 1453 .driver_info = (unsigned long) &ax88772_info, 1454 + }, { 1455 + // ABOCOM for pci 1456 + USB_DEVICE(0x14ea, 0xab11), 1457 + .driver_info = (unsigned long) &ax88178_info, 1458 + }, { 1459 + // ASIX 88772a 1460 + USB_DEVICE(0x0db0, 0xa877), 1461 + .driver_info = (unsigned long) &ax88772_info, 1454 1462 }, 1455 1463 { }, // END 1456 1464 };
+5
drivers/net/usb/cdc_ether.c
··· 559 559 USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_ETHERNET, 560 560 USB_CDC_PROTO_NONE), 561 561 .driver_info = (unsigned long) &cdc_info, 562 + }, { 563 + /* Ericsson F3507g */ 564 + USB_DEVICE_AND_INTERFACE_INFO(0x0bdb, 0x1900, USB_CLASS_COMM, 565 + USB_CDC_SUBCLASS_MDLM, USB_CDC_PROTO_NONE), 566 + .driver_info = (unsigned long) &cdc_info, 562 567 }, 563 568 { }, // END 564 569 };
+2 -2
drivers/net/usb/usbnet.c
··· 723 723 if (dev->mii.mdio_read) 724 724 return mii_link_ok(&dev->mii); 725 725 726 - /* Otherwise, say we're up (to avoid breaking scripts) */ 727 - return 1; 726 + /* Otherwise, dtrt for drivers calling netif_carrier_{on,off} */ 727 + return ethtool_op_get_link(net); 728 728 } 729 729 EXPORT_SYMBOL_GPL(usbnet_get_link); 730 730
+5
drivers/net/usb/zaurus.c
··· 341 341 USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_MDLM, 342 342 USB_CDC_PROTO_NONE), 343 343 .driver_info = (unsigned long) &bogus_mdlm_info, 344 + }, { 345 + /* Motorola MOTOMAGX phones */ 346 + USB_DEVICE_AND_INTERFACE_INFO(0x22b8, 0x6425, USB_CLASS_COMM, 347 + USB_CDC_SUBCLASS_MDLM, USB_CDC_PROTO_NONE), 348 + .driver_info = (unsigned long) &bogus_mdlm_info, 344 349 }, 345 350 346 351 /* Olympus has some models with a Zaurus-compatible option.
+11 -40
drivers/net/veth.c
··· 239 239 return 0; 240 240 } 241 241 242 + static int veth_close(struct net_device *dev) 243 + { 244 + struct veth_priv *priv = netdev_priv(dev); 245 + 246 + netif_carrier_off(dev); 247 + netif_carrier_off(priv->peer); 248 + 249 + return 0; 250 + } 251 + 242 252 static int veth_dev_init(struct net_device *dev) 243 253 { 244 254 struct veth_net_stats *stats; ··· 275 265 static const struct net_device_ops veth_netdev_ops = { 276 266 .ndo_init = veth_dev_init, 277 267 .ndo_open = veth_open, 268 + .ndo_stop = veth_close, 278 269 .ndo_start_xmit = veth_xmit, 279 270 .ndo_get_stats = veth_get_stats, 280 271 .ndo_set_mac_address = eth_mac_addr, ··· 290 279 dev->features |= NETIF_F_LLTX; 291 280 dev->destructor = veth_dev_free; 292 281 } 293 - 294 - static void veth_change_state(struct net_device *dev) 295 - { 296 - struct net_device *peer; 297 - struct veth_priv *priv; 298 - 299 - priv = netdev_priv(dev); 300 - peer = priv->peer; 301 - 302 - if (netif_carrier_ok(peer)) { 303 - if (!netif_carrier_ok(dev)) 304 - netif_carrier_on(dev); 305 - } else { 306 - if (netif_carrier_ok(dev)) 307 - netif_carrier_off(dev); 308 - } 309 - } 310 - 311 - static int veth_device_event(struct notifier_block *unused, 312 - unsigned long event, void *ptr) 313 - { 314 - struct net_device *dev = ptr; 315 - 316 - if (dev->netdev_ops->ndo_open != veth_open) 317 - goto out; 318 - 319 - switch (event) { 320 - case NETDEV_CHANGE: 321 - veth_change_state(dev); 322 - break; 323 - } 324 - out: 325 - return NOTIFY_DONE; 326 - } 327 - 328 - static struct notifier_block veth_notifier_block __read_mostly = { 329 - .notifier_call = veth_device_event, 330 - }; 331 282 332 283 /* 333 284 * netlink interface ··· 441 468 442 469 static __init int veth_init(void) 443 470 { 444 - register_netdevice_notifier(&veth_notifier_block); 445 471 return rtnl_link_register(&veth_link_ops); 446 472 } 447 473 448 474 static __exit void veth_exit(void) 449 475 { 450 476 rtnl_link_unregister(&veth_link_ops); 451 - unregister_netdevice_notifier(&veth_notifier_block); 452 477 } 453 478 454 479 module_init(veth_init);
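The veth hunk above drops the NETDEV_CHANGE notifier machinery in favour of an ndo_stop handler (veth_close) that takes the carrier down on both ends of the pair in one step. A minimal userspace sketch of that pairing; the names (toy_dev, toy_close) are illustrative stand-ins, not kernel API:

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy stand-in for a veth endpoint; `peer` points at the other end. */
struct toy_dev {
    bool carrier;
    struct toy_dev *peer;
};

/* Mirrors the shape of veth_close(): stopping one end drops the
 * carrier on both devices, so no notifier round trip is needed. */
static int toy_close(struct toy_dev *dev)
{
    dev->carrier = false;
    if (dev->peer != NULL)
        dev->peer->carrier = false;
    return 0;
}
```

Because each endpoint holds a direct pointer to its peer, the peer sees link-down synchronously, which is why the notifier-based veth_change_state() could be removed.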
+17 -7
drivers/net/wireless/ath9k/main.c
··· 1538 1538 bad: 1539 1539 if (ah) 1540 1540 ath9k_hw_detach(ah); 1541 + ath9k_exit_debug(sc); 1541 1542 1542 1543 return error; 1543 1544 } ··· 1546 1545 static int ath_attach(u16 devid, struct ath_softc *sc) 1547 1546 { 1548 1547 struct ieee80211_hw *hw = sc->hw; 1549 - int error = 0; 1548 + int error = 0, i; 1550 1549 1551 1550 DPRINTF(sc, ATH_DBG_CONFIG, "Attach ATH hw\n"); 1552 1551 ··· 1590 1589 /* initialize tx/rx engine */ 1591 1590 error = ath_tx_init(sc, ATH_TXBUF); 1592 1591 if (error != 0) 1593 - goto detach; 1592 + goto error_attach; 1594 1593 1595 1594 error = ath_rx_init(sc, ATH_RXBUF); 1596 1595 if (error != 0) 1597 - goto detach; 1596 + goto error_attach; 1598 1597 1599 1598 #if defined(CONFIG_RFKILL) || defined(CONFIG_RFKILL_MODULE) 1600 1599 /* Initialze h/w Rfkill */ ··· 1602 1601 INIT_DELAYED_WORK(&sc->rf_kill.rfkill_poll, ath_rfkill_poll); 1603 1602 1604 1603 /* Initialize s/w rfkill */ 1605 - if (ath_init_sw_rfkill(sc)) 1606 - goto detach; 1604 + error = ath_init_sw_rfkill(sc); 1605 + if (error) 1606 + goto error_attach; 1607 1607 #endif 1608 1608 1609 1609 error = ieee80211_register_hw(hw); ··· 1613 1611 ath_init_leds(sc); 1614 1612 1615 1613 return 0; 1616 - detach: 1617 - ath_detach(sc); 1614 + 1615 + error_attach: 1616 + /* cleanup tx queues */ 1617 + for (i = 0; i < ATH9K_NUM_TX_QUEUES; i++) 1618 + if (ATH_TXQ_SETUP(sc, i)) 1619 + ath_tx_cleanupq(sc, &sc->tx.txq[i]); 1620 + 1621 + ath9k_hw_detach(sc->sc_ah); 1622 + ath9k_exit_debug(sc); 1623 + 1618 1624 return error; 1619 1625 } 1620 1626
+4 -4
drivers/net/wireless/iwlwifi/iwl-tx.c
··· 148 148 pci_unmap_single(dev, 149 149 pci_unmap_addr(&txq->cmd[index]->meta, mapping), 150 150 pci_unmap_len(&txq->cmd[index]->meta, len), 151 - PCI_DMA_TODEVICE); 151 + PCI_DMA_BIDIRECTIONAL); 152 152 153 153 /* Unmap chunks, if any. */ 154 154 for (i = 1; i < num_tbs; i++) { ··· 964 964 * within command buffer array. */ 965 965 txcmd_phys = pci_map_single(priv->pci_dev, 966 966 out_cmd, sizeof(struct iwl_cmd), 967 - PCI_DMA_TODEVICE); 967 + PCI_DMA_BIDIRECTIONAL); 968 968 pci_unmap_addr_set(&out_cmd->meta, mapping, txcmd_phys); 969 969 pci_unmap_len_set(&out_cmd->meta, len, sizeof(struct iwl_cmd)); 970 970 /* Add buffer containing Tx command and MAC(!) header to TFD's ··· 1115 1115 IWL_MAX_SCAN_SIZE : sizeof(struct iwl_cmd); 1116 1116 1117 1117 phys_addr = pci_map_single(priv->pci_dev, out_cmd, 1118 - len, PCI_DMA_TODEVICE); 1118 + len, PCI_DMA_BIDIRECTIONAL); 1119 1119 pci_unmap_addr_set(&out_cmd->meta, mapping, phys_addr); 1120 1120 pci_unmap_len_set(&out_cmd->meta, len, len); 1121 1121 phys_addr += offsetof(struct iwl_cmd, hdr); ··· 1212 1212 pci_unmap_single(priv->pci_dev, 1213 1213 pci_unmap_addr(&txq->cmd[cmd_idx]->meta, mapping), 1214 1214 pci_unmap_len(&txq->cmd[cmd_idx]->meta, len), 1215 - PCI_DMA_TODEVICE); 1215 + PCI_DMA_BIDIRECTIONAL); 1216 1216 1217 1217 for (idx = iwl_queue_inc_wrap(idx, q->n_bd); q->read_ptr != idx; 1218 1218 q->read_ptr = iwl_queue_inc_wrap(q->read_ptr, q->n_bd)) {
+6 -6
drivers/net/wireless/libertas/ethtool.c
··· 23 23 static void lbs_ethtool_get_drvinfo(struct net_device *dev, 24 24 struct ethtool_drvinfo *info) 25 25 { 26 - struct lbs_private *priv = netdev_priv(dev); 26 + struct lbs_private *priv = dev->ml_priv; 27 27 28 28 snprintf(info->fw_version, 32, "%u.%u.%u.p%u", 29 29 priv->fwrelease >> 24 & 0xff, ··· 47 47 static int lbs_ethtool_get_eeprom(struct net_device *dev, 48 48 struct ethtool_eeprom *eeprom, u8 * bytes) 49 49 { 50 - struct lbs_private *priv = netdev_priv(dev); 50 + struct lbs_private *priv = dev->ml_priv; 51 51 struct cmd_ds_802_11_eeprom_access cmd; 52 52 int ret; 53 53 ··· 76 76 static void lbs_ethtool_get_stats(struct net_device *dev, 77 77 struct ethtool_stats *stats, uint64_t *data) 78 78 { 79 - struct lbs_private *priv = netdev_priv(dev); 79 + struct lbs_private *priv = dev->ml_priv; 80 80 struct cmd_ds_mesh_access mesh_access; 81 81 int ret; 82 82 ··· 113 113 114 114 static int lbs_ethtool_get_sset_count(struct net_device *dev, int sset) 115 115 { 116 - struct lbs_private *priv = netdev_priv(dev); 116 + struct lbs_private *priv = dev->ml_priv; 117 117 118 118 if (sset == ETH_SS_STATS && dev == priv->mesh_dev) 119 119 return MESH_STATS_NUM; ··· 143 143 static void lbs_ethtool_get_wol(struct net_device *dev, 144 144 struct ethtool_wolinfo *wol) 145 145 { 146 - struct lbs_private *priv = netdev_priv(dev); 146 + struct lbs_private *priv = dev->ml_priv; 147 147 148 148 if (priv->wol_criteria == 0xffffffff) { 149 149 /* Interface driver didn't configure wake */ ··· 166 166 static int lbs_ethtool_set_wol(struct net_device *dev, 167 167 struct ethtool_wolinfo *wol) 168 168 { 169 - struct lbs_private *priv = netdev_priv(dev); 169 + struct lbs_private *priv = dev->ml_priv; 170 170 uint32_t criteria = 0; 171 171 172 172 if (priv->wol_criteria == 0xffffffff && wol->wolopts)
+2 -2
drivers/net/wireless/libertas/if_usb.c
··· 59 59 static ssize_t if_usb_firmware_set(struct device *dev, 60 60 struct device_attribute *attr, const char *buf, size_t count) 61 61 { 62 - struct lbs_private *priv = netdev_priv(to_net_dev(dev)); 62 + struct lbs_private *priv = to_net_dev(dev)->ml_priv; 63 63 struct if_usb_card *cardp = priv->card; 64 64 char fwname[FIRMWARE_NAME_MAX]; 65 65 int ret; ··· 86 86 static ssize_t if_usb_boot2_set(struct device *dev, 87 87 struct device_attribute *attr, const char *buf, size_t count) 88 88 { 89 - struct lbs_private *priv = netdev_priv(to_net_dev(dev)); 89 + struct lbs_private *priv = to_net_dev(dev)->ml_priv; 90 90 struct if_usb_card *cardp = priv->card; 91 91 char fwname[FIRMWARE_NAME_MAX]; 92 92 int ret;
+16 -15
drivers/net/wireless/libertas/main.c
··· 222 222 static ssize_t lbs_anycast_get(struct device *dev, 223 223 struct device_attribute *attr, char * buf) 224 224 { 225 - struct lbs_private *priv = netdev_priv(to_net_dev(dev)); 225 + struct lbs_private *priv = to_net_dev(dev)->ml_priv; 226 226 struct cmd_ds_mesh_access mesh_access; 227 227 int ret; 228 228 ··· 241 241 static ssize_t lbs_anycast_set(struct device *dev, 242 242 struct device_attribute *attr, const char * buf, size_t count) 243 243 { 244 - struct lbs_private *priv = netdev_priv(to_net_dev(dev)); 244 + struct lbs_private *priv = to_net_dev(dev)->ml_priv; 245 245 struct cmd_ds_mesh_access mesh_access; 246 246 uint32_t datum; 247 247 int ret; ··· 263 263 static ssize_t lbs_prb_rsp_limit_get(struct device *dev, 264 264 struct device_attribute *attr, char *buf) 265 265 { 266 - struct lbs_private *priv = netdev_priv(to_net_dev(dev)); 266 + struct lbs_private *priv = to_net_dev(dev)->ml_priv; 267 267 struct cmd_ds_mesh_access mesh_access; 268 268 int ret; 269 269 u32 retry_limit; ··· 286 286 static ssize_t lbs_prb_rsp_limit_set(struct device *dev, 287 287 struct device_attribute *attr, const char *buf, size_t count) 288 288 { 289 - struct lbs_private *priv = netdev_priv(to_net_dev(dev)); 289 + struct lbs_private *priv = to_net_dev(dev)->ml_priv; 290 290 struct cmd_ds_mesh_access mesh_access; 291 291 int ret; 292 292 unsigned long retry_limit; ··· 321 321 static ssize_t lbs_rtap_get(struct device *dev, 322 322 struct device_attribute *attr, char * buf) 323 323 { 324 - struct lbs_private *priv = netdev_priv(to_net_dev(dev)); 324 + struct lbs_private *priv = to_net_dev(dev)->ml_priv; 325 325 return snprintf(buf, 5, "0x%X\n", priv->monitormode); 326 326 } 327 327 ··· 332 332 struct device_attribute *attr, const char * buf, size_t count) 333 333 { 334 334 int monitor_mode; 335 - struct lbs_private *priv = netdev_priv(to_net_dev(dev)); 335 + struct lbs_private *priv = to_net_dev(dev)->ml_priv; 336 336 337 337 sscanf(buf, "%x", &monitor_mode); 338 338 if (monitor_mode) { ··· 383 383 static ssize_t lbs_mesh_get(struct device *dev, 384 384 struct device_attribute *attr, char * buf) 385 385 { 386 - struct lbs_private *priv = netdev_priv(to_net_dev(dev)); 386 + struct lbs_private *priv = to_net_dev(dev)->ml_priv; 387 387 return snprintf(buf, 5, "0x%X\n", !!priv->mesh_dev); 388 388 } 389 389 ··· 393 393 static ssize_t lbs_mesh_set(struct device *dev, 394 394 struct device_attribute *attr, const char * buf, size_t count) 395 395 { 396 - struct lbs_private *priv = netdev_priv(to_net_dev(dev)); 396 + struct lbs_private *priv = to_net_dev(dev)->ml_priv; 397 397 int enable; 398 398 int ret, action = CMD_ACT_MESH_CONFIG_STOP; 399 399 ··· 452 452 */ 453 453 static int lbs_dev_open(struct net_device *dev) 454 454 { 455 - struct lbs_private *priv = netdev_priv(dev) ; 455 + struct lbs_private *priv = dev->ml_priv; 456 456 int ret = 0; 457 457 458 458 lbs_deb_enter(LBS_DEB_NET); ··· 521 521 */ 522 522 static int lbs_eth_stop(struct net_device *dev) 523 523 { 524 - struct lbs_private *priv = netdev_priv(dev); 524 + struct lbs_private *priv = dev->ml_priv; 525 525 526 526 lbs_deb_enter(LBS_DEB_NET); 527 527 ··· 538 538 539 539 static void lbs_tx_timeout(struct net_device *dev) 540 540 { 541 - struct lbs_private *priv = netdev_priv(dev); 541 + struct lbs_private *priv = dev->ml_priv; 542 542 543 543 lbs_deb_enter(LBS_DEB_TX); 544 544 ··· 590 590 */ 591 591 static struct net_device_stats *lbs_get_stats(struct net_device *dev) 592 592 { 593 - struct lbs_private *priv = netdev_priv(dev); 593 + struct lbs_private *priv = dev->ml_priv; 594 594 595 595 lbs_deb_enter(LBS_DEB_NET); 596 596 return &priv->stats; ··· 599 599 static int lbs_set_mac_address(struct net_device *dev, void *addr) 600 600 { 601 601 int ret = 0; 602 - struct lbs_private *priv = netdev_priv(dev); 602 + struct lbs_private *priv = dev->ml_priv; 603 603 struct sockaddr *phwaddr = addr; 604 604 struct cmd_ds_802_11_mac_address cmd; 605 605 ··· 732 732 733 733 static void lbs_set_multicast_list(struct net_device *dev) 734 734 { 735 - struct lbs_private *priv = netdev_priv(dev); 735 + struct lbs_private *priv = dev->ml_priv; 736 736 737 737 schedule_work(&priv->mcast_work); 738 738 } ··· 748 748 static int lbs_thread(void *data) 749 749 { 750 750 struct net_device *dev = data; 751 - struct lbs_private *priv = netdev_priv(dev); 751 + struct lbs_private *priv = dev->ml_priv; 752 752 wait_queue_t wait; 753 753 754 754 lbs_deb_enter(LBS_DEB_THREAD); ··· 1184 1184 goto done; 1185 1185 } 1186 1186 priv = netdev_priv(dev); 1187 + dev->ml_priv = priv; 1187 1188 1188 1189 if (lbs_init_adapter(priv)) { 1189 1190 lbs_pr_err("failed to initialize adapter structure.\n");
+8 -8
drivers/net/wireless/libertas/persistcfg.c
··· 18 18 static int mesh_get_default_parameters(struct device *dev, 19 19 struct mrvl_mesh_defaults *defs) 20 20 { 21 - struct lbs_private *priv = netdev_priv(to_net_dev(dev)); 21 + struct lbs_private *priv = to_net_dev(dev)->ml_priv; 22 22 struct cmd_ds_mesh_config cmd; 23 23 int ret; 24 24 ··· 57 57 static ssize_t bootflag_set(struct device *dev, struct device_attribute *attr, 58 58 const char *buf, size_t count) 59 59 { 60 - struct lbs_private *priv = netdev_priv(to_net_dev(dev)); 60 + struct lbs_private *priv = to_net_dev(dev)->ml_priv; 61 61 struct cmd_ds_mesh_config cmd; 62 62 uint32_t datum; 63 63 int ret; ··· 100 100 static ssize_t boottime_set(struct device *dev, 101 101 struct device_attribute *attr, const char *buf, size_t count) 102 102 { 103 - struct lbs_private *priv = netdev_priv(to_net_dev(dev)); 103 + struct lbs_private *priv = to_net_dev(dev)->ml_priv; 104 104 struct cmd_ds_mesh_config cmd; 105 105 uint32_t datum; 106 106 int ret; ··· 152 152 static ssize_t channel_set(struct device *dev, struct device_attribute *attr, 153 153 const char *buf, size_t count) 154 154 { 155 - struct lbs_private *priv = netdev_priv(to_net_dev(dev)); 155 + struct lbs_private *priv = to_net_dev(dev)->ml_priv; 156 156 struct cmd_ds_mesh_config cmd; 157 157 uint32_t datum; 158 158 int ret; ··· 210 210 struct cmd_ds_mesh_config cmd; 211 211 struct mrvl_mesh_defaults defs; 212 212 struct mrvl_meshie *ie; 213 - struct lbs_private *priv = netdev_priv(to_net_dev(dev)); 213 + struct lbs_private *priv = to_net_dev(dev)->ml_priv; 214 214 int len; 215 215 int ret; 216 216 ··· 269 269 struct cmd_ds_mesh_config cmd; 270 270 struct mrvl_mesh_defaults defs; 271 271 struct mrvl_meshie *ie; 272 - struct lbs_private *priv = netdev_priv(to_net_dev(dev)); 272 + struct lbs_private *priv = to_net_dev(dev)->ml_priv; 273 273 uint32_t datum; 274 274 int ret; 275 275 ··· 323 323 struct cmd_ds_mesh_config cmd; 324 324 struct mrvl_mesh_defaults defs; 325 325 struct mrvl_meshie *ie; 326 - struct lbs_private *priv = netdev_priv(to_net_dev(dev)); 326 + struct lbs_private *priv = to_net_dev(dev)->ml_priv; 327 327 uint32_t datum; 328 328 int ret; 329 329 ··· 377 377 struct cmd_ds_mesh_config cmd; 378 378 struct mrvl_mesh_defaults defs; 379 379 struct mrvl_meshie *ie; 380 - struct lbs_private *priv = netdev_priv(to_net_dev(dev)); 380 + struct lbs_private *priv = to_net_dev(dev)->ml_priv; 381 381 uint32_t datum; 382 382 int ret; 383 383
+2 -2
drivers/net/wireless/libertas/scan.c
··· 945 945 union iwreq_data *wrqu, char *extra) 946 946 { 947 947 DECLARE_SSID_BUF(ssid); 948 - struct lbs_private *priv = netdev_priv(dev); 948 + struct lbs_private *priv = dev->ml_priv; 949 949 int ret = 0; 950 950 951 951 lbs_deb_enter(LBS_DEB_WEXT); ··· 1008 1008 struct iw_point *dwrq, char *extra) 1009 1009 { 1010 1010 #define SCAN_ITEM_SIZE 128 1011 - struct lbs_private *priv = netdev_priv(dev); 1011 + struct lbs_private *priv = dev->ml_priv; 1012 1012 int err = 0; 1013 1013 char *ev = extra; 1014 1014 char *stop = ev + dwrq->length;
+1 -1
drivers/net/wireless/libertas/tx.c
··· 60 60 int lbs_hard_start_xmit(struct sk_buff *skb, struct net_device *dev) 61 61 { 62 62 unsigned long flags; 63 - struct lbs_private *priv = netdev_priv(dev); 63 + struct lbs_private *priv = dev->ml_priv; 64 64 struct txpd *txpd; 65 65 char *p802x_hdr; 66 66 uint16_t pkt_len;
+36 -36
drivers/net/wireless/libertas/wext.c
··· 163 163 static int lbs_get_freq(struct net_device *dev, struct iw_request_info *info, 164 164 struct iw_freq *fwrq, char *extra) 165 165 { 166 - struct lbs_private *priv = netdev_priv(dev); 166 + struct lbs_private *priv = dev->ml_priv; 167 167 struct chan_freq_power *cfp; 168 168 169 169 lbs_deb_enter(LBS_DEB_WEXT); ··· 189 189 static int lbs_get_wap(struct net_device *dev, struct iw_request_info *info, 190 190 struct sockaddr *awrq, char *extra) 191 191 { 192 - struct lbs_private *priv = netdev_priv(dev); 192 + struct lbs_private *priv = dev->ml_priv; 193 193 194 194 lbs_deb_enter(LBS_DEB_WEXT); 195 195 ··· 207 207 static int lbs_set_nick(struct net_device *dev, struct iw_request_info *info, 208 208 struct iw_point *dwrq, char *extra) 209 209 { 210 - struct lbs_private *priv = netdev_priv(dev); 210 + struct lbs_private *priv = dev->ml_priv; 211 211 212 212 lbs_deb_enter(LBS_DEB_WEXT); 213 213 ··· 231 231 static int lbs_get_nick(struct net_device *dev, struct iw_request_info *info, 232 232 struct iw_point *dwrq, char *extra) 233 233 { 234 - struct lbs_private *priv = netdev_priv(dev); 234 + struct lbs_private *priv = dev->ml_priv; 235 235 236 236 lbs_deb_enter(LBS_DEB_WEXT); 237 237 ··· 248 248 static int mesh_get_nick(struct net_device *dev, struct iw_request_info *info, 249 249 struct iw_point *dwrq, char *extra) 250 250 { 251 - struct lbs_private *priv = netdev_priv(dev); 251 + struct lbs_private *priv = dev->ml_priv; 252 252 253 253 lbs_deb_enter(LBS_DEB_WEXT); 254 254 ··· 273 273 struct iw_param *vwrq, char *extra) 274 274 { 275 275 int ret = 0; 276 - struct lbs_private *priv = netdev_priv(dev); 276 + struct lbs_private *priv = dev->ml_priv; 277 277 u32 val = vwrq->value; 278 278 279 279 lbs_deb_enter(LBS_DEB_WEXT); ··· 293 293 static int lbs_get_rts(struct net_device *dev, struct iw_request_info *info, 294 294 struct iw_param *vwrq, char *extra) 295 295 { 296 - struct lbs_private *priv = netdev_priv(dev); 296 + struct lbs_private *priv = dev->ml_priv; 297 297 int ret = 0; 298 298 u16 val = 0; 299 299 ··· 315 315 static int lbs_set_frag(struct net_device *dev, struct iw_request_info *info, 316 316 struct iw_param *vwrq, char *extra) 317 317 { 318 - struct lbs_private *priv = netdev_priv(dev); 318 + struct lbs_private *priv = dev->ml_priv; 319 319 int ret = 0; 320 320 u32 val = vwrq->value; 321 321 ··· 336 336 static int lbs_get_frag(struct net_device *dev, struct iw_request_info *info, 337 337 struct iw_param *vwrq, char *extra) 338 338 { 339 - struct lbs_private *priv = netdev_priv(dev); 339 + struct lbs_private *priv = dev->ml_priv; 340 340 int ret = 0; 341 341 u16 val = 0; 342 342 ··· 359 359 static int lbs_get_mode(struct net_device *dev, 360 360 struct iw_request_info *info, u32 * uwrq, char *extra) 361 361 { 362 - struct lbs_private *priv = netdev_priv(dev); 362 + struct lbs_private *priv = dev->ml_priv; 363 363 364 364 lbs_deb_enter(LBS_DEB_WEXT); 365 365 ··· 385 385 struct iw_request_info *info, 386 386 struct iw_param *vwrq, char *extra) 387 387 { 388 - struct lbs_private *priv = netdev_priv(dev); 388 + struct lbs_private *priv = dev->ml_priv; 389 389 s16 curlevel = 0; 390 390 int ret = 0; 391 391 ··· 418 418 static int lbs_set_retry(struct net_device *dev, struct iw_request_info *info, 419 419 struct iw_param *vwrq, char *extra) 420 420 { 421 - struct lbs_private *priv = netdev_priv(dev); 421 + struct lbs_private *priv = dev->ml_priv; 422 422 int ret = 0; 423 423 u16 slimit = 0, llimit = 0; 424 424 ··· 466 466 static int lbs_get_retry(struct net_device *dev, struct iw_request_info *info, 467 467 struct iw_param *vwrq, char *extra) 468 468 { 469 - struct lbs_private *priv = netdev_priv(dev); 469 + struct lbs_private *priv = dev->ml_priv; 470 470 int ret = 0; 471 471 u16 val = 0; 472 472 ··· 542 542 struct iw_point *dwrq, char *extra) 543 543 { 544 544 int i, j; 545 - struct lbs_private *priv = netdev_priv(dev); 545 + struct lbs_private *priv = dev->ml_priv; 546 546 struct iw_range *range = (struct iw_range *)extra; 547 547 struct chan_freq_power *cfp; 548 548 u8 rates[MAX_RATES + 1]; ··· 708 708 static int lbs_set_power(struct net_device *dev, struct iw_request_info *info, 709 709 struct iw_param *vwrq, char *extra) 710 710 { 711 - struct lbs_private *priv = netdev_priv(dev); 711 + struct lbs_private *priv = dev->ml_priv; 712 712 713 713 lbs_deb_enter(LBS_DEB_WEXT); 714 714 ··· 758 758 static int lbs_get_power(struct net_device *dev, struct iw_request_info *info, 759 759 struct iw_param *vwrq, char *extra) 760 760 { 761 - struct lbs_private *priv = netdev_priv(dev); 761 + struct lbs_private *priv = dev->ml_priv; 762 762 763 763 lbs_deb_enter(LBS_DEB_WEXT); 764 764 ··· 781 781 EXCELLENT = 95, 782 782 PERFECT = 100 783 783 }; 784 - struct lbs_private *priv = netdev_priv(dev); 784 + struct lbs_private *priv = dev->ml_priv; 785 785 u32 rssi_qual; 786 786 u32 tx_qual; 787 787 u32 quality = 0; ··· 886 886 struct iw_freq *fwrq, char *extra) 887 887 { 888 888 int ret = -EINVAL; 889 - struct lbs_private *priv = netdev_priv(dev); 889 + struct lbs_private *priv = dev->ml_priv; 890 890 struct chan_freq_power *cfp; 891 891 struct assoc_request * assoc_req; 892 892 ··· 943 943 struct iw_request_info *info, 944 944 struct iw_freq *fwrq, char *extra) 945 945 { 946 - struct lbs_private *priv = netdev_priv(dev); 946 + struct lbs_private *priv = dev->ml_priv; 947 947 struct chan_freq_power *cfp; 948 948 int ret = -EINVAL; 949 949 ··· 994 994 static int lbs_set_rate(struct net_device *dev, struct iw_request_info *info, 995 995 struct iw_param *vwrq, char *extra) 996 996 { 997 - struct lbs_private *priv = netdev_priv(dev); 997 + struct lbs_private *priv = dev->ml_priv; 998 998 u8 new_rate = 0; 999 999 int ret = -EINVAL; 1000 1000 u8 rates[MAX_RATES + 1]; ··· 1054 1054 static int lbs_get_rate(struct net_device *dev, struct iw_request_info *info, 1055 1055 struct iw_param *vwrq, char *extra) 1056 1056 { 1057 - struct lbs_private *priv = netdev_priv(dev); 1057 + struct lbs_private *priv = dev->ml_priv; 1058 1058 1059 1059 lbs_deb_enter(LBS_DEB_WEXT); 1060 1060 ··· 1079 1079 struct iw_request_info *info, u32 * uwrq, char *extra) 1080 1080 { 1081 1081 int ret = 0; 1082 - struct lbs_private *priv = netdev_priv(dev); 1082 + struct lbs_private *priv = dev->ml_priv; 1083 1083 struct assoc_request * assoc_req; 1084 1084 1085 1085 lbs_deb_enter(LBS_DEB_WEXT); ··· 1124 1124 struct iw_request_info *info, 1125 1125 struct iw_point *dwrq, u8 * extra) 1126 1126 { 1127 - struct lbs_private *priv = netdev_priv(dev); 1127 + struct lbs_private *priv = dev->ml_priv; 1128 1128 int index = (dwrq->flags & IW_ENCODE_INDEX) - 1; 1129 1129 1130 1130 lbs_deb_enter(LBS_DEB_WEXT); ··· 1319 1319 struct iw_point *dwrq, char *extra) 1320 1320 { 1321 1321 int ret = 0; 1322 - struct lbs_private *priv = netdev_priv(dev); 1322 + struct lbs_private *priv = dev->ml_priv; 1323 1323 struct assoc_request * assoc_req; 1324 1324 u16 is_default = 0, index = 0, set_tx_key = 0; 1325 1325 ··· 1395 1395 char *extra) 1396 1396 { 1397 1397 int ret = -EINVAL; 1398 - struct lbs_private *priv = netdev_priv(dev); 1398 + struct lbs_private *priv = dev->ml_priv; 1399 1399 struct iw_encode_ext *ext = (struct iw_encode_ext *)extra; 1400 1400 int index, max_key_len; 1401 1401 ··· 1501 1501 char *extra) 1502 1502 { 1503 1503 int ret = 0; 1504 - struct lbs_private *priv = netdev_priv(dev); 1504 + struct lbs_private *priv = dev->ml_priv; 1505 1505 struct iw_encode_ext *ext = (struct iw_encode_ext *)extra; 1506 1506 int alg = ext->alg; 1507 1507 struct assoc_request * assoc_req; ··· 1639 1639 struct iw_point *dwrq, 1640 1640 char *extra) 1641 1641 { 1642 - struct lbs_private *priv = netdev_priv(dev); 1642 + struct lbs_private *priv = dev->ml_priv; 1643 1643 int ret = 0; 1644 1644 struct assoc_request * assoc_req; 1645 1645 ··· 1685 1685 char *extra) 1686 1686 { 1687 1687 int ret = 0; 1688 - struct lbs_private *priv = netdev_priv(dev); 1688 + struct lbs_private *priv = dev->ml_priv; 1689 1689 1690 1690 lbs_deb_enter(LBS_DEB_WEXT); 1691 1691 ··· 1713 1713 struct iw_param *dwrq, 1714 1714 char *extra) 1715 1715 { 1716 - struct lbs_private *priv = netdev_priv(dev); 1716 + struct lbs_private *priv = dev->ml_priv; 1717 1717 struct assoc_request * assoc_req; 1718 1718 int ret = 0; 1719 1719 int updated = 0; ··· 1816 1816 char *extra) 1817 1817 { 1818 1818 int ret = 0; 1819 - struct lbs_private *priv = netdev_priv(dev); 1819 + struct lbs_private *priv = dev->ml_priv; 1820 1820 1821 1821 lbs_deb_enter(LBS_DEB_WEXT); 1822 1822 ··· 1857 1857 struct iw_param *vwrq, char *extra) 1858 1858 { 1859 1859 int ret = 0; 1860 - struct lbs_private *priv = netdev_priv(dev); 1860 + struct lbs_private *priv = dev->ml_priv; 1861 1861 s16 dbm = (s16) vwrq->value; 1862 1862 1863 1863 lbs_deb_enter(LBS_DEB_WEXT); ··· 1936 1936 static int lbs_get_essid(struct net_device *dev, struct iw_request_info *info, 1937 1937 struct iw_point *dwrq, char *extra) 1938 1938 { 1939 - struct lbs_private *priv = netdev_priv(dev); 1939 + struct lbs_private *priv = dev->ml_priv; 1940 1940 1941 1941 lbs_deb_enter(LBS_DEB_WEXT); 1942 1942 ··· 1971 1971 static int lbs_set_essid(struct net_device *dev, struct iw_request_info *info, 1972 1972 struct iw_point *dwrq, char *extra) 1973 1973 { 1974 - struct lbs_private *priv = netdev_priv(dev); 1974 + struct lbs_private *priv = dev->ml_priv; 1975 1975 int ret = 0; 1976 1976 u8 ssid[IW_ESSID_MAX_SIZE]; 1977 1977 u8 ssid_len = 0; ··· 2040 2040 struct iw_request_info *info, 2041 2041 struct iw_point *dwrq, char *extra) 2042 2042 { 2043 - struct lbs_private *priv = netdev_priv(dev); 2043 + struct lbs_private *priv = dev->ml_priv; 2044 2044 2045 2045 lbs_deb_enter(LBS_DEB_WEXT); 2046 2046 ··· 2058 2058 struct iw_request_info *info, 2059 2059 struct iw_point *dwrq, char *extra) 2060 2060 { 2061 - struct lbs_private *priv = netdev_priv(dev); 2061 + struct lbs_private *priv = dev->ml_priv; 2062 2062 int ret = 0; 2063 2063 2064 2064 lbs_deb_enter(LBS_DEB_WEXT); ··· 2102 2102 static int lbs_set_wap(struct net_device *dev, struct iw_request_info *info, 2103 2103 struct sockaddr *awrq, char *extra) 2104 2104 { 2105 - struct lbs_private *priv = netdev_priv(dev); 2105 + struct lbs_private *priv = dev->ml_priv; 2106 2106 struct assoc_request * assoc_req; 2107 2107 int ret = 0; 2108 2108
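The libertas hunks above consistently replace netdev_priv() with a dev->ml_priv pointer, and main.c now sets dev->ml_priv = priv after allocation. The apparent motivation (an inference from the diff, not stated in it) is that the driver exposes several net_devices (wlan, mesh, radiotap) that must all reach the same private state, which cannot live in each device's own netdev_priv() tail allocation. A userspace sketch of the shared-pointer pattern; toy_priv, toy_netdev, and toy_bind are hypothetical names:

```c
#include <stddef.h>

/* Shared driver state (stand-in for struct lbs_private). */
struct toy_priv {
    int monitormode;
};

/* Stand-in for struct net_device: ml_priv points at shared state
 * rather than the state being embedded behind the device struct. */
struct toy_netdev {
    void *ml_priv;
};

/* Every virtual interface of one card is bound to the same priv. */
static void toy_bind(struct toy_netdev *dev, struct toy_priv *priv)
{
    dev->ml_priv = priv;
}
```

With this layout, a sysfs handler can recover the driver state from any of the card's interfaces with a single pointer load, which is exactly the to_net_dev(dev)->ml_priv idiom the diff introduces.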
+15 -4
drivers/net/wireless/orinoco/orinoco.c
··· 3157 3157 3158 3158 return NOTIFY_DONE; 3159 3159 } 3160 + 3161 + static void orinoco_register_pm_notifier(struct orinoco_private *priv) 3162 + { 3163 + priv->pm_notifier.notifier_call = orinoco_pm_notifier; 3164 + register_pm_notifier(&priv->pm_notifier); 3165 + } 3166 + 3167 + static void orinoco_unregister_pm_notifier(struct orinoco_private *priv) 3168 + { 3169 + unregister_pm_notifier(&priv->pm_notifier); 3170 + } 3160 3171 #else /* !PM_SLEEP || HERMES_CACHE_FW_ON_INIT */ 3161 - #define orinoco_pm_notifier NULL 3172 + #define orinoco_register_pm_notifier(priv) do { } while(0) 3173 + #define orinoco_unregister_pm_notifier(priv) do { } while(0) 3162 3174 #endif 3163 3175 3164 3176 /********************************************************************/ ··· 3660 3648 priv->cached_fw = NULL; 3661 3649 3662 3650 /* Register PM notifiers */ 3663 - priv->pm_notifier.notifier_call = orinoco_pm_notifier; 3664 - register_pm_notifier(&priv->pm_notifier); 3651 + orinoco_register_pm_notifier(priv); 3665 3652 3666 3653 return dev; 3667 3654 } ··· 3684 3673 kfree(rx_data); 3685 3674 } 3686 3675 3687 - unregister_pm_notifier(&priv->pm_notifier); 3676 + orinoco_unregister_pm_notifier(priv); 3688 3677 orinoco_uncache_fw(priv); 3689 3678 3690 3679 priv->wpa_ie_len = 0;
+12
drivers/net/wireless/rtl818x/rtl8187_dev.c
··· 48 48 {USB_DEVICE(0x0bda, 0x8189), .driver_info = DEVICE_RTL8187B}, 49 49 {USB_DEVICE(0x0bda, 0x8197), .driver_info = DEVICE_RTL8187B}, 50 50 {USB_DEVICE(0x0bda, 0x8198), .driver_info = DEVICE_RTL8187B}, 51 + /* Surecom */ 52 + {USB_DEVICE(0x0769, 0x11F2), .driver_info = DEVICE_RTL8187}, 53 + /* Logitech */ 54 + {USB_DEVICE(0x0789, 0x010C), .driver_info = DEVICE_RTL8187}, 51 55 /* Netgear */ 52 56 {USB_DEVICE(0x0846, 0x6100), .driver_info = DEVICE_RTL8187}, 53 57 {USB_DEVICE(0x0846, 0x6a00), .driver_info = DEVICE_RTL8187}, ··· 61 57 /* Sitecom */ 62 58 {USB_DEVICE(0x0df6, 0x000d), .driver_info = DEVICE_RTL8187}, 63 59 {USB_DEVICE(0x0df6, 0x0028), .driver_info = DEVICE_RTL8187B}, 60 + /* Sphairon Access Systems GmbH */ 61 + {USB_DEVICE(0x114B, 0x0150), .driver_info = DEVICE_RTL8187}, 62 + /* Dick Smith Electronics */ 63 + {USB_DEVICE(0x1371, 0x9401), .driver_info = DEVICE_RTL8187}, 64 64 /* Abocom */ 65 65 {USB_DEVICE(0x13d1, 0xabe6), .driver_info = DEVICE_RTL8187}, 66 + /* Qcom */ 67 + {USB_DEVICE(0x18E8, 0x6232), .driver_info = DEVICE_RTL8187}, 68 + /* AirLive */ 69 + {USB_DEVICE(0x1b75, 0x8187), .driver_info = DEVICE_RTL8187}, 66 70 {} 67 71 }; 68 72
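The rtl8187 hunk above only appends vendor:product pairs to the driver's USB device ID table; the table is terminated by a zero-filled sentinel entry, and probing walks it until a match or the sentinel. A simplified userspace sketch of that lookup (toy_usb_id, toy_table, and toy_match are illustrative, not the kernel's usb_device_id machinery):

```c
#include <stdint.h>

/* Simplified stand-in for struct usb_device_id. */
struct toy_usb_id {
    uint16_t vendor;
    uint16_t product;
    unsigned long driver_info;
};

enum { TOY_RTL8187 = 1, TOY_RTL8187B = 2 };

/* Table in the style of the rtl8187 list: zero-filled sentinel last. */
static const struct toy_usb_id toy_table[] = {
    { 0x0769, 0x11F2, TOY_RTL8187 },  /* Surecom */
    { 0x0789, 0x010C, TOY_RTL8187 },  /* Logitech */
    { 0x1b75, 0x8187, TOY_RTL8187 },  /* AirLive */
    { 0, 0, 0 }                       /* terminator */
};

/* Walk until the sentinel; return driver_info on a match, else 0. */
static unsigned long toy_match(uint16_t vendor, uint16_t product)
{
    const struct toy_usb_id *id;

    for (id = toy_table; id->vendor; id++)
        if (id->vendor == vendor && id->product == product)
            return id->driver_info;
    return 0;
}
```

This is why adding support for a rebadged adapter is a pure table edit: no code path changes, the new IDs simply become matchable.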
+56 -17
drivers/pci/dmar.c
··· 332 332 entry_header = (struct acpi_dmar_header *)(dmar + 1); 333 333 while (((unsigned long)entry_header) < 334 334 (((unsigned long)dmar) + dmar_tbl->length)) { 335 + /* Avoid looping forever on bad ACPI tables */ 336 + if (entry_header->length == 0) { 337 + printk(KERN_WARNING PREFIX 338 + "Invalid 0-length structure\n"); 339 + ret = -EINVAL; 340 + break; 341 + } 342 + 335 343 dmar_table_print_dmar_entry(entry_header); 336 344 337 345 switch (entry_header->type) { ··· 502 494 int map_size; 503 495 u32 ver; 504 496 static int iommu_allocated = 0; 505 - int agaw; 497 + int agaw = 0; 506 498 507 499 iommu = kzalloc(sizeof(*iommu), GFP_KERNEL); 508 500 if (!iommu) ··· 518 510 iommu->cap = dmar_readq(iommu->reg + DMAR_CAP_REG); 519 511 iommu->ecap = dmar_readq(iommu->reg + DMAR_ECAP_REG); 520 512 513 + #ifdef CONFIG_DMAR 521 514 agaw = iommu_calculate_agaw(iommu); 522 515 if (agaw < 0) { 523 516 printk(KERN_ERR ··· 526 517 iommu->seq_id); 527 518 goto error; 528 519 } 520 + #endif 529 521 iommu->agaw = agaw; 530 522 531 523 /* the registers might be more than one page */ ··· 584 574 } 585 575 } 586 576 577 + static int qi_check_fault(struct intel_iommu *iommu, int index) 578 + { 579 + u32 fault; 580 + int head; 581 + struct q_inval *qi = iommu->qi; 582 + int wait_index = (index + 1) % QI_LENGTH; 583 + 584 + fault = readl(iommu->reg + DMAR_FSTS_REG); 585 + 586 + /* 587 + * If IQE happens, the head points to the descriptor associated 588 + * with the error. No new descriptors are fetched until the IQE 589 + * is cleared. 590 + */ 591 + if (fault & DMA_FSTS_IQE) { 592 + head = readl(iommu->reg + DMAR_IQH_REG); 593 + if ((head >> 4) == index) { 594 + memcpy(&qi->desc[index], &qi->desc[wait_index], 595 + sizeof(struct qi_desc)); 596 + __iommu_flush_cache(iommu, &qi->desc[index], 597 + sizeof(struct qi_desc)); 598 + writel(DMA_FSTS_IQE, iommu->reg + DMAR_FSTS_REG); 599 + return -EINVAL; 600 + } 601 + } 602 + 603 + return 0; 604 + } 605 + 587 606 /* 588 607 * Submit the queued invalidation descriptor to the remapping 589 608 * hardware unit and wait for its completion. 590 609 */ 591 - void qi_submit_sync(struct qi_desc *desc, struct intel_iommu *iommu) 610 + int qi_submit_sync(struct qi_desc *desc, struct intel_iommu *iommu) 592 611 { 612 + int rc = 0; 593 613 struct q_inval *qi = iommu->qi; 594 614 struct qi_desc *hw, wait_desc; 595 615 int wait_index, index; 596 616 unsigned long flags; 597 617 598 618 if (!qi) 599 - return; 619 + return 0; 600 620 601 621 hw = qi->desc; 602 622 ··· 644 604 645 605 hw[index] = *desc; 646 606 647 - wait_desc.low = QI_IWD_STATUS_DATA(2) | QI_IWD_STATUS_WRITE | QI_IWD_TYPE; 607 + wait_desc.low = QI_IWD_STATUS_DATA(QI_DONE) | 608 + QI_IWD_STATUS_WRITE | QI_IWD_TYPE; 648 609 wait_desc.high = virt_to_phys(&qi->desc_status[wait_index]); 649 610 650 611 hw[wait_index] = wait_desc; ··· 656 615 qi->free_head = (qi->free_head + 2) % QI_LENGTH; 657 616 qi->free_cnt -= 2; 658 617 659 - spin_lock(&iommu->register_lock); 660 618 /* 661 619 * update the HW tail register indicating the presence of 662 620 * new descriptors. 663 621 */ 664 622 writel(qi->free_head << 4, iommu->reg + DMAR_IQT_REG); 665 - spin_unlock(&iommu->register_lock); 666 623 667 624 while (qi->desc_status[wait_index] != QI_DONE) { 668 625 /* ··· 670 631 * a deadlock where the interrupt context can wait indefinitely 671 632 * for free slots in the queue. 672 633 */ 634 + rc = qi_check_fault(iommu, index); 635 + if (rc) 636 + goto out; 637 + 673 638 spin_unlock(&qi->q_lock); 674 639 cpu_relax(); 675 640 spin_lock(&qi->q_lock); 676 641 } 677 - 678 - qi->desc_status[index] = QI_DONE; 642 + out: 643 + qi->desc_status[index] = qi->desc_status[wait_index] = QI_DONE; 679 644 680 645 reclaim_free_desc(qi); 681 646 spin_unlock_irqrestore(&qi->q_lock, flags); 647 + 648 + return rc; 682 649 } 683 650 684 651 /* ··· 697 652 desc.low = QI_IEC_TYPE; 698 653 desc.high = 0; 699 654 655 + /* should never fail */ 700 656 qi_submit_sync(&desc, iommu); 701 657 } 702 658 703 659 int qi_flush_context(struct intel_iommu *iommu, u16 did, u16 sid, u8 fm, 704 660 u64 type, int non_present_entry_flush) 705 661 { 706 - 707 662 struct qi_desc desc; 708 663 709 664 if (non_present_entry_flush) { ··· 717 672 | QI_CC_GRAN(type) | QI_CC_TYPE; 718 673 desc.high = 0; 719 674 720 - qi_submit_sync(&desc, iommu); 721 - 722 - return 0; 723 - 675 + return qi_submit_sync(&desc, iommu); 724 676 } 725 677 726 678 int qi_flush_iotlb(struct intel_iommu *iommu, u16 did, u64 addr, ··· 747 705 desc.high = QI_IOTLB_ADDR(addr) | QI_IOTLB_IH(ih) 748 706 | QI_IOTLB_AM(size_order); 749 707 750 - qi_submit_sync(&desc, iommu); 751 - 752 - return 0; 753 - 708 + return qi_submit_sync(&desc, iommu); 754 709 } 755 710 756 711 /*
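In the dmar.c hunk above, qi_submit_sync() reserves two consecutive slots in the invalidation queue, one for the request descriptor and the next for its wait descriptor, advancing free_head modulo QI_LENGTH; qi_check_fault() derives the wait slot the same way, as (index + 1) % QI_LENGTH. A userspace sketch of just that ring arithmetic, with hypothetical names (toy_qi, toy_reserve) and the availability check reduced to the two slots needed:

```c
#define TOY_QI_LENGTH 256  /* stand-in for QI_LENGTH */

struct toy_qi {
    int free_head;  /* next free slot */
    int free_cnt;   /* free slots remaining */
};

/* Reserve a descriptor slot plus its wait slot, as qi_submit_sync()
 * does; returns the request index and writes the wait index, or -1
 * if the ring cannot hold both. */
static int toy_reserve(struct toy_qi *qi, int *wait_index)
{
    int index;

    if (qi->free_cnt < 2)
        return -1;
    index = qi->free_head;
    *wait_index = (index + 1) % TOY_QI_LENGTH;
    qi->free_head = (qi->free_head + 2) % TOY_QI_LENGTH;
    qi->free_cnt -= 2;
    return index;
}
```

The modulo wrap is the detail that matters for the new error path: when the request lands in the last slot, its wait descriptor sits in slot 0, so recovery code must compute the pair the same way rather than assume index + 1.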
+2
drivers/pci/hotplug/pciehp.h
··· 111 111 int cmd_busy; 112 112 unsigned int no_cmd_complete:1; 113 113 unsigned int link_active_reporting:1; 114 + unsigned int notification_enabled:1; 114 115 }; 115 116 116 117 #define INT_BUTTON_IGNORE 0 ··· 171 170 extern int pciehp_unconfigure_device(struct slot *p_slot); 172 171 extern void pciehp_queue_pushbutton_work(struct work_struct *work); 173 172 struct controller *pcie_init(struct pcie_device *dev); 173 + int pcie_init_notification(struct controller *ctrl); 174 174 int pciehp_enable_slot(struct slot *p_slot); 175 175 int pciehp_disable_slot(struct slot *p_slot); 176 176 int pcie_enable_notification(struct controller *ctrl);
+7
drivers/pci/hotplug/pciehp_core.c
··· 434 434 goto err_out_release_ctlr; 435 435 } 436 436 437 + /* Enable events after we have setup the data structures */ 438 + rc = pcie_init_notification(ctrl); 439 + if (rc) { 440 + ctrl_err(ctrl, "Notification initialization failed\n"); 441 + goto err_out_release_ctlr; 442 + } 443 + 437 444 /* Check if slot is occupied */ 438 445 t_slot = pciehp_find_slot(ctrl, ctrl->slot_device_offset); 439 446 t_slot->hpc_ops->get_adapter_status(t_slot, &value);
+7 -8
drivers/pci/hotplug/pciehp_hpc.c
··· 934 934 ctrl_warn(ctrl, "Cannot disable software notification\n"); 935 935 } 936 936 937 - static int pcie_init_notification(struct controller *ctrl) 937 + int pcie_init_notification(struct controller *ctrl) 938 938 { 939 939 if (pciehp_request_irq(ctrl)) 940 940 return -1; ··· 942 942 pciehp_free_irq(ctrl); 943 943 return -1; 944 944 } 945 + ctrl->notification_enabled = 1; 945 946 return 0; 946 947 } 947 948 948 949 static void pcie_shutdown_notification(struct controller *ctrl) 949 950 { 950 - pcie_disable_notification(ctrl); 951 - pciehp_free_irq(ctrl); 951 + if (ctrl->notification_enabled) { 952 + pcie_disable_notification(ctrl); 953 + pciehp_free_irq(ctrl); 954 + ctrl->notification_enabled = 0; 955 + } 952 956 } 953 957 954 958 static int pcie_init_slot(struct controller *ctrl) ··· 1114 1110 if (pcie_init_slot(ctrl)) 1115 1111 goto abort_ctrl; 1116 1112 1117 - if (pcie_init_notification(ctrl)) 1118 - goto abort_slot; 1119 - 1120 1113 return ctrl; 1121 1114 1122 - abort_slot: 1123 - pcie_cleanup_slot(ctrl); 1124 1115 abort_ctrl: 1125 1116 kfree(ctrl); 1126 1117 abort:
+12 -9
drivers/pci/intr_remapping.c
··· 208 208 return index; 209 209 } 210 210 211 - static void qi_flush_iec(struct intel_iommu *iommu, int index, int mask) 211 + static int qi_flush_iec(struct intel_iommu *iommu, int index, int mask) 212 212 { 213 213 struct qi_desc desc; 214 214 ··· 216 216 | QI_IEC_SELECTIVE; 217 217 desc.high = 0; 218 218 219 - qi_submit_sync(&desc, iommu); 219 + return qi_submit_sync(&desc, iommu); 220 220 } 221 221 222 222 int map_irq_to_irte_handle(int irq, u16 *sub_handle) ··· 284 284 285 285 int modify_irte(int irq, struct irte *irte_modified) 286 286 { 287 + int rc; 287 288 int index; 288 289 struct irte *irte; 289 290 struct intel_iommu *iommu; ··· 305 304 set_64bit((unsigned long *)irte, irte_modified->low | (1 << 1)); 306 305 __iommu_flush_cache(iommu, irte, sizeof(*irte)); 307 306 308 - qi_flush_iec(iommu, index, 0); 309 - 307 + rc = qi_flush_iec(iommu, index, 0); 310 308 spin_unlock(&irq_2_ir_lock); 311 - return 0; 309 + 310 + return rc; 312 311 } 313 312 314 313 int flush_irte(int irq) 315 314 { 315 + int rc; 316 316 int index; 317 317 struct intel_iommu *iommu; 318 318 struct irq_2_iommu *irq_iommu; ··· 329 327 330 328 index = irq_iommu->irte_index + irq_iommu->sub_handle; 331 329 332 - qi_flush_iec(iommu, index, irq_iommu->irte_mask); 330 + rc = qi_flush_iec(iommu, index, irq_iommu->irte_mask); 333 331 spin_unlock(&irq_2_ir_lock); 334 332 335 - return 0; 333 + return rc; 336 334 } 337 335 338 336 struct intel_iommu *map_ioapic_to_ir(int apic) ··· 358 356 359 357 int free_irte(int irq) 360 358 { 359 + int rc = 0; 361 360 int index, i; 362 361 struct irte *irte; 363 362 struct intel_iommu *iommu; ··· 379 376 if (!irq_iommu->sub_handle) { 380 377 for (i = 0; i < (1 << irq_iommu->irte_mask); i++) 381 378 set_64bit((unsigned long *)irte, 0); 382 - qi_flush_iec(iommu, index, irq_iommu->irte_mask); 383 380 } 384 381 385 382 irq_iommu->iommu = NULL; ··· 389 386 390 387 spin_unlock(&irq_2_ir_lock); 391 388 392 - return 0; 389 + return rc; 393 390 } 394 391 395 392 static void iommu_set_intr_remapping(struct intel_iommu *iommu, int mode)
+39 -9
drivers/pci/pcie/aer/aerdrv_core.c
··· 108 108 } 109 109 #endif /* 0 */ 110 110 111 + 112 + static void set_device_error_reporting(struct pci_dev *dev, void *data) 113 + { 114 + bool enable = *((bool *)data); 115 + 116 + if (dev->pcie_type != PCIE_RC_PORT && 117 + dev->pcie_type != PCIE_SW_UPSTREAM_PORT && 118 + dev->pcie_type != PCIE_SW_DOWNSTREAM_PORT) 119 + return; 120 + 121 + if (enable) 122 + pci_enable_pcie_error_reporting(dev); 123 + else 124 + pci_disable_pcie_error_reporting(dev); 125 + } 126 + 127 + /** 128 + * set_downstream_devices_error_reporting - enable/disable the error reporting bits on the root port and its downstream ports. 129 + * @dev: pointer to root port's pci_dev data structure 130 + * @enable: true = enable error reporting, false = disable error reporting. 131 + */ 132 + static void set_downstream_devices_error_reporting(struct pci_dev *dev, 133 + bool enable) 134 + { 135 + set_device_error_reporting(dev, &enable); 136 + pci_walk_bus(dev->subordinate, set_device_error_reporting, &enable); 137 + } 138 + 111 139 static int find_device_iter(struct device *device, void *data) 112 140 { 113 141 struct pci_dev *dev; ··· 553 525 pci_read_config_dword(pdev, aer_pos + PCI_ERR_UNCOR_STATUS, &reg32); 554 526 pci_write_config_dword(pdev, aer_pos + PCI_ERR_UNCOR_STATUS, reg32); 555 527 556 - /* Enable Root Port device reporting error itself */ 557 - pci_read_config_word(pdev, pos+PCI_EXP_DEVCTL, &reg16); 558 - reg16 = reg16 | 559 - PCI_EXP_DEVCTL_CERE | 560 - PCI_EXP_DEVCTL_NFERE | 561 - PCI_EXP_DEVCTL_FERE | 562 - PCI_EXP_DEVCTL_URRE; 563 - pci_write_config_word(pdev, pos+PCI_EXP_DEVCTL, 564 - reg16); 528 + /* 529 + * Enable error reporting for the root port device and downstream port 530 + * devices. 531 + */ 532 + set_downstream_devices_error_reporting(pdev, true); 565 533 566 534 /* Enable Root Port's interrupt in response to error messages */ 567 535 pci_write_config_dword(pdev, ··· 576 552 struct pci_dev *pdev = rpc->rpd->port; 577 553 u32 reg32; 578 554 int pos; 555 + 556 + /* 557 + * Disable error reporting for the root port device and downstream port 558 + * devices. 559 + */ 560 + set_downstream_devices_error_reporting(pdev, false); 579 561 580 562 pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_ERR); 581 563 /* Disable Root's interrupt in response to error messages */
-2
drivers/pci/pcie/portdrv_pci.c
··· 97 97 98 98 pcie_portdrv_save_config(dev); 99 99 100 - pci_enable_pcie_error_reporting(dev); 101 - 102 100 return 0; 103 101 } 104 102
+106 -16
drivers/pci/quirks.c
··· 1584 1584 */ 1585 1585 #define AMD_813X_MISC 0x40 1586 1586 #define AMD_813X_NOIOAMODE (1<<0) 1587 + #define AMD_813X_REV_B2 0x13 1587 1588 1588 1589 static void quirk_disable_amd_813x_boot_interrupt(struct pci_dev *dev) 1589 1590 { 1590 1591 u32 pci_config_dword; 1591 1592 1592 1593 if (noioapicquirk) 1594 + return; 1595 + if (dev->revision == AMD_813X_REV_B2) 1593 1596 return; 1594 1597 1595 1598 pci_read_config_dword(dev, AMD_813X_MISC, &pci_config_dword); ··· 1984 1981 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_SERVERWORKS, PCI_DEVICE_ID_SERVERWORKS_HT2000_PCIE, 1985 1982 quirk_msi_ht_cap); 1986 1983 1987 - 1988 1984 /* The nVidia CK804 chipset may have 2 HT MSI mappings. 1989 1985 * MSI are supported if the MSI capability set in any of these mappings. 1990 1986 */ ··· 2034 2032 PCI_DEVICE_ID_SERVERWORKS_HT1000_PXB, 2035 2033 ht_enable_msi_mapping); 2036 2034 2035 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_8132_BRIDGE, 2036 + ht_enable_msi_mapping); 2037 + 2037 2038 /* The P5N32-SLI Premium motherboard from Asus has a problem with msi 2038 2039 * for the MCP55 NIC. It is not yet determined whether the msi problem 2039 2040 * also affects other devices. As for now, turn off msi for this device. ··· 2053 2048 PCI_DEVICE_ID_NVIDIA_NVENET_15, 2054 2049 nvenet_msi_disable); 2055 2050 2051 + static void __devinit nv_ht_enable_msi_mapping(struct pci_dev *dev) 2052 + { 2053 + struct pci_dev *host_bridge; 2054 + int pos; 2055 + int i, dev_no; 2056 + int found = 0; 2057 + 2058 + dev_no = dev->devfn >> 3; 2059 + for (i = dev_no; i >= 0; i--) { 2060 + host_bridge = pci_get_slot(dev->bus, PCI_DEVFN(i, 0)); 2061 + if (!host_bridge) 2062 + continue; 2063 + 2064 + pos = pci_find_ht_capability(host_bridge, HT_CAPTYPE_SLAVE); 2065 + if (pos != 0) { 2066 + found = 1; 2067 + break; 2068 + } 2069 + pci_dev_put(host_bridge); 2070 + } 2071 + 2072 + if (!found) 2073 + return; 2074 + 2075 + /* the host bridge already enabled the mapping */ 2076 + if (msi_ht_cap_enabled(host_bridge)) 2077 + goto out; 2078 + 2079 + ht_enable_msi_mapping(dev); 2080 + 2081 + out: 2082 + pci_dev_put(host_bridge); 2083 + } 2084 + 2085 + static void __devinit ht_disable_msi_mapping(struct pci_dev *dev) 2086 + { 2087 + int pos, ttl = 48; 2088 + 2089 + pos = pci_find_ht_capability(dev, HT_CAPTYPE_MSI_MAPPING); 2090 + while (pos && ttl--) { 2091 + u8 flags; 2092 + 2093 + if (pci_read_config_byte(dev, pos + HT_MSI_FLAGS, 2094 + &flags) == 0) { 2095 + dev_info(&dev->dev, "Disabling HT MSI Mapping\n"); 2096 + 2097 + pci_write_config_byte(dev, pos + HT_MSI_FLAGS, 2098 + flags & ~HT_MSI_FLAGS_ENABLE); 2099 + } 2100 + pos = pci_find_next_ht_capability(dev, pos, 2101 + HT_CAPTYPE_MSI_MAPPING); 2102 + } 2103 + } 2104 + 2105 + static int __devinit ht_check_msi_mapping(struct pci_dev *dev) 2106 + { 2107 + int pos, ttl = 48; 2108 + int found = 0; 2109 + 2110 + /* check if there is HT MSI cap or enabled on this device */ 2111 + pos = pci_find_ht_capability(dev, HT_CAPTYPE_MSI_MAPPING); 2112 + while (pos && ttl--) { 2113 + u8 flags; 2114 + 2115 + if (found < 1) 2116 + found = 1; 2117 + if (pci_read_config_byte(dev, pos + HT_MSI_FLAGS, 2118 + &flags) == 0) { 2119 + if (flags & HT_MSI_FLAGS_ENABLE) { 2120 + if (found < 2) { 2121 + found = 2; 2122 + break; 2123 + } 2124 + } 2125 + } 2126 + pos = pci_find_next_ht_capability(dev, pos, 2127 + HT_CAPTYPE_MSI_MAPPING); 2128 + } 2129 + 2130 + return found; 2131 + } 2132 + 2056 2133 static void __devinit nv_msi_ht_cap_quirk(struct pci_dev *dev) 2057 2134 { 2058 2135 struct pci_dev *host_bridge; 2059 - int pos, ttl = 48; 2136 + int pos; 2137 + int found; 2138 + 2139 + /* check if there is HT MSI cap or enabled on this device */ 2140 + found = ht_check_msi_mapping(dev); 2141 + 2142 + /* no HT MSI CAP */ 2143 + if (found == 0) 2144 + return; 2060 2145 2061 2146 /* 2062 2147 * HT MSI mapping should be disabled on devices that are below ··· 2162 2067 pos = pci_find_ht_capability(host_bridge, HT_CAPTYPE_SLAVE); 2163 2068 if (pos != 0) { 2164 2069 /* Host bridge is to HT */ 2165 - ht_enable_msi_mapping(dev); 2070 + if (found == 1) { 2071 + /* it is not enabled, try to enable it */ 2072 + nv_ht_enable_msi_mapping(dev); 2073 + } 2166 2074 return; 2167 2075 } 2168 2076 2169 - /* Host bridge is not to HT, disable HT MSI mapping on this device */ 2170 - pos = pci_find_ht_capability(dev, HT_CAPTYPE_MSI_MAPPING); 2171 - while (pos && ttl--) { 2172 - u8 flags; 2077 + /* HT MSI is not enabled */ 2078 + if (found == 1) 2079 + return; 2173 2080 2174 - if (pci_read_config_byte(dev, pos + HT_MSI_FLAGS, 2175 - &flags) == 0) { 2176 - dev_info(&dev->dev, "Disabling HT MSI mapping"); 2177 - pci_write_config_byte(dev, pos + HT_MSI_FLAGS, 2178 - flags & ~HT_MSI_FLAGS_ENABLE); 2179 - } 2180 - pos = pci_find_next_ht_capability(dev, pos, 2181 - HT_CAPTYPE_MSI_MAPPING); 2182 - } 2081 + /* Host bridge is not to HT, disable HT MSI mapping on this device */ 2082 + ht_disable_msi_mapping(dev); 2183 2083 } 2184 2084 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID, nv_msi_ht_cap_quirk); 2185 2085 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AL, PCI_ANY_ID, nv_msi_ht_cap_quirk);
+21
drivers/scsi/cxgb3i/cxgb3i.h
··· 20 20 #include <linux/list.h> 21 21 #include <linux/netdevice.h> 22 22 #include <linux/scatterlist.h> 23 + #include <linux/skbuff.h> 23 24 #include <scsi/libiscsi_tcp.h> 24 25 25 26 /* from cxgb3 LLD */ ··· 112 111 struct s3_conn *c3cn; 113 112 struct cxgb3i_hba *hba; 114 113 struct cxgb3i_conn *cconn; 114 + }; 115 + 116 + /** 117 + * struct cxgb3i_task_data - private iscsi task data 118 + * 119 + * @nr_frags: # of coalesced page frags (from scsi sgl) 120 + * @frags: coalesced page frags (from scsi sgl) 121 + * @skb: tx pdu skb 122 + * @offset: data offset for the next pdu 123 + * @count: max. possible pdu payload 124 + * @sgoffset: offset to the first sg entry for a given offset 125 + */ 126 + #define MAX_PDU_FRAGS ((ULP2_MAX_PDU_PAYLOAD + 512 - 1) / 512) 127 + struct cxgb3i_task_data { 128 + unsigned short nr_frags; 129 + skb_frag_t frags[MAX_PDU_FRAGS]; 130 + struct sk_buff *skb; 131 + unsigned int offset; 132 + unsigned int count; 133 + unsigned int sgoffset; 115 134 }; 116 135 117 136 int cxgb3i_iscsi_init(void);
+11 -8
drivers/scsi/cxgb3i/cxgb3i_ddp.c
··· 639 639 write_unlock(&cxgb3i_ddp_rwlock); 640 640 641 641 ddp_log_info("nppods %u (0x%x ~ 0x%x), bits %u, mask 0x%x,0x%x " 642 - "pkt %u,%u.\n", 642 + "pkt %u/%u, %u/%u.\n", 643 643 ppmax, ddp->llimit, ddp->ulimit, ddp->idx_bits, 644 644 ddp->idx_mask, ddp->rsvd_tag_mask, 645 - ddp->max_txsz, ddp->max_rxsz); 645 + ddp->max_txsz, uinfo.max_txsz, 646 + ddp->max_rxsz, uinfo.max_rxsz); 646 647 return 0; 647 648 648 649 free_ddp_map: ··· 655 654 * cxgb3i_adapter_ddp_init - initialize the adapter's ddp resource 656 655 * @tdev: t3cdev adapter 657 656 * @tformat: tag format 658 - * @txsz: max tx pkt size, filled in by this func. 659 - * @rxsz: max rx pkt size, filled in by this func. 657 + * @txsz: max tx pdu payload size, filled in by this func. 658 + * @rxsz: max rx pdu payload size, filled in by this func. 660 659 * initialize the ddp pagepod manager for a given adapter if needed and 661 660 * setup the tag format for a given iscsi entity 662 661 */ ··· 686 685 tformat->sw_bits, tformat->rsvd_bits, 687 686 tformat->rsvd_shift, tformat->rsvd_mask); 688 687 689 - *txsz = ddp->max_txsz; 690 - *rxsz = ddp->max_rxsz; 691 - ddp_log_info("ddp max pkt size: %u, %u.\n", 692 - ddp->max_txsz, ddp->max_rxsz); 688 + *txsz = min_t(unsigned int, ULP2_MAX_PDU_PAYLOAD, 689 + ddp->max_txsz - ISCSI_PDU_NONPAYLOAD_LEN); 690 + *rxsz = min_t(unsigned int, ULP2_MAX_PDU_PAYLOAD, 691 + ddp->max_rxsz - ISCSI_PDU_NONPAYLOAD_LEN); 692 + ddp_log_info("max payload size: %u/%u, %u/%u.\n", 693 + *txsz, ddp->max_txsz, *rxsz, ddp->max_rxsz); 693 694 return 0; 694 695 } 695 696 EXPORT_SYMBOL_GPL(cxgb3i_adapter_ddp_init);
+4 -1
drivers/scsi/cxgb3i/cxgb3i_ddp.h
··· 13 13 #ifndef __CXGB3I_ULP2_DDP_H__ 14 14 #define __CXGB3I_ULP2_DDP_H__ 15 15 16 + #include <linux/vmalloc.h> 17 + 16 18 /** 17 19 * struct cxgb3i_tag_format - cxgb3i ulp tag format for an iscsi entity 18 20 * ··· 87 85 struct sk_buff **gl_skb; 88 86 }; 89 87 88 + #define ISCSI_PDU_NONPAYLOAD_LEN 312 /* bhs(48) + ahs(256) + digest(8) */ 90 89 #define ULP2_MAX_PKT_SIZE 16224 91 - #define ULP2_MAX_PDU_PAYLOAD (ULP2_MAX_PKT_SIZE - ISCSI_PDU_NONPAYLOAD_MAX) 90 + #define ULP2_MAX_PDU_PAYLOAD (ULP2_MAX_PKT_SIZE - ISCSI_PDU_NONPAYLOAD_LEN) 92 91 #define PPOD_PAGES_MAX 4 93 92 #define PPOD_PAGES_SHIFT 2 /* 4 pages per pod */ 94 93
+2 -2
drivers/scsi/cxgb3i/cxgb3i_init.c
··· 12 12 #include "cxgb3i.h" 13 13 14 14 #define DRV_MODULE_NAME "cxgb3i" 15 - #define DRV_MODULE_VERSION "1.0.0" 16 - #define DRV_MODULE_RELDATE "Jun. 1, 2008" 15 + #define DRV_MODULE_VERSION "1.0.1" 16 + #define DRV_MODULE_RELDATE "Jan. 2009" 17 17 18 18 static char version[] = 19 19 "Chelsio S3xx iSCSI Driver " DRV_MODULE_NAME
+9 -13
drivers/scsi/cxgb3i/cxgb3i_iscsi.c
··· 364 364 365 365 cls_session = iscsi_session_setup(&cxgb3i_iscsi_transport, shost, 366 366 cmds_max, 367 - sizeof(struct iscsi_tcp_task), 367 + sizeof(struct iscsi_tcp_task) + 368 + sizeof(struct cxgb3i_task_data), 368 369 initial_cmdsn, ISCSI_MAX_TARGET); 369 370 if (!cls_session) 370 371 return NULL; ··· 403 402 { 404 403 struct iscsi_tcp_conn *tcp_conn = conn->dd_data; 405 404 struct cxgb3i_conn *cconn = tcp_conn->dd_data; 406 - unsigned int max = min_t(unsigned int, ULP2_MAX_PDU_PAYLOAD, 407 - cconn->hba->snic->tx_max_size - 408 - ISCSI_PDU_NONPAYLOAD_MAX); 405 + unsigned int max = max(512 * MAX_SKB_FRAGS, SKB_TX_HEADROOM); 409 406 407 + max = min(cconn->hba->snic->tx_max_size, max); 410 408 if (conn->max_xmit_dlength) 411 - conn->max_xmit_dlength = min_t(unsigned int, 412 - conn->max_xmit_dlength, max); 409 + conn->max_xmit_dlength = min(conn->max_xmit_dlength, max); 413 410 else 414 411 conn->max_xmit_dlength = max; 415 412 align_pdu_size(conn->max_xmit_dlength); 416 - cxgb3i_log_info("conn 0x%p, max xmit %u.\n", 413 + cxgb3i_api_debug("conn 0x%p, max xmit %u.\n", 417 414 conn, conn->max_xmit_dlength); 418 415 return 0; 419 416 } ··· 426 427 { 427 428 struct iscsi_tcp_conn *tcp_conn = conn->dd_data; 428 429 struct cxgb3i_conn *cconn = tcp_conn->dd_data; 429 - unsigned int max = min_t(unsigned int, ULP2_MAX_PDU_PAYLOAD, 430 - cconn->hba->snic->rx_max_size - 431 - ISCSI_PDU_NONPAYLOAD_MAX); 430 + unsigned int max = cconn->hba->snic->rx_max_size; 432 431 433 432 align_pdu_size(max); 434 433 if (conn->max_recv_dlength) { ··· 436 439 conn->max_recv_dlength, max); 437 440 return -EINVAL; 438 441 } 439 - conn->max_recv_dlength = min_t(unsigned int, 440 - conn->max_recv_dlength, max); 442 + conn->max_recv_dlength = min(conn->max_recv_dlength, max); 441 443 align_pdu_size(conn->max_recv_dlength); 442 444 } else 443 445 conn->max_recv_dlength = max; ··· 840 844 .proc_name = "cxgb3i", 841 845 .queuecommand = iscsi_queuecommand, 842 846 .change_queue_depth = iscsi_change_queue_depth, 843 - .can_queue = 128 * (ISCSI_DEF_XMIT_CMDS_MAX - 1), 847 + .can_queue = CXGB3I_SCSI_QDEPTH_DFLT - 1, 844 848 .sg_tablesize = SG_ALL, 845 849 .max_sectors = 0xFFFF, 846 850 .cmd_per_lun = ISCSI_DEF_CMD_PER_LUN,
+103 -43
drivers/scsi/cxgb3i/cxgb3i_offload.c
··· 23 23 #include "cxgb3i_ddp.h" 24 24 25 25 #ifdef __DEBUG_C3CN_CONN__ 26 - #define c3cn_conn_debug cxgb3i_log_info 26 + #define c3cn_conn_debug cxgb3i_log_debug 27 27 #else 28 28 #define c3cn_conn_debug(fmt...) 29 29 #endif 30 30 31 31 #ifdef __DEBUG_C3CN_TX__ 32 - #define c3cn_tx_debug cxgb3i_log_debug 32 + #define c3cn_tx_debug cxgb3i_log_debug 33 33 #else 34 34 #define c3cn_tx_debug(fmt...) 35 35 #endif 36 36 37 37 #ifdef __DEBUG_C3CN_RX__ 38 - #define c3cn_rx_debug cxgb3i_log_debug 38 + #define c3cn_rx_debug cxgb3i_log_debug 39 39 #else 40 40 #define c3cn_rx_debug(fmt...) 41 41 #endif ··· 47 47 module_param(cxgb3_rcv_win, int, 0644); 48 48 MODULE_PARM_DESC(cxgb3_rcv_win, "TCP receive window in bytes (default=256KB)"); 49 49 50 - static int cxgb3_snd_win = 64 * 1024; 50 + static int cxgb3_snd_win = 128 * 1024; 51 51 module_param(cxgb3_snd_win, int, 0644); 52 - MODULE_PARM_DESC(cxgb3_snd_win, "TCP send window in bytes (default=64KB)"); 52 + MODULE_PARM_DESC(cxgb3_snd_win, "TCP send window in bytes (default=128KB)"); 53 53 54 54 static int cxgb3_rx_credit_thres = 10 * 1024; 55 55 module_param(cxgb3_rx_credit_thres, int, 0644); ··· 301 301 static void skb_entail(struct s3_conn *c3cn, struct sk_buff *skb, 302 302 int flags) 303 303 { 304 - CXGB3_SKB_CB(skb)->seq = c3cn->write_seq; 305 - CXGB3_SKB_CB(skb)->flags = flags; 304 + skb_tcp_seq(skb) = c3cn->write_seq; 305 + skb_flags(skb) = flags; 306 306 __skb_queue_tail(&c3cn->write_queue, skb); 307 307 } 308 308 ··· 457 457 * The number of WRs needed for an skb depends on the number of fragments 458 458 * in the skb and whether it has any payload in its main body. This maps the 459 459 * length of the gather list represented by an skb into the # of necessary WRs. 460 - * 461 - * The max. length of an skb is controlled by the max pdu size which is ~16K. 462 - * Also, assume the min. fragment length is the sector size (512), then add 463 - * extra fragment counts for iscsi bhs and payload padding. 
460 + The extra two fragments are for iscsi bhs and payload padding. 464 461 */ 465 - #define SKB_WR_LIST_SIZE (16384/512 + 3) 462 + #define SKB_WR_LIST_SIZE (MAX_SKB_FRAGS + 2) 466 463 static unsigned int skb_wrs[SKB_WR_LIST_SIZE] __read_mostly; 467 464 468 465 static void s3_init_wr_tab(unsigned int wr_len) ··· 482 485 483 486 static inline void reset_wr_list(struct s3_conn *c3cn) 484 487 { 485 - c3cn->wr_pending_head = NULL; 488 + c3cn->wr_pending_head = c3cn->wr_pending_tail = NULL; 486 489 } 487 490 488 491 /* ··· 493 496 static inline void enqueue_wr(struct s3_conn *c3cn, 494 497 struct sk_buff *skb) 495 498 { 496 - skb_wr_data(skb) = NULL; 499 + skb_tx_wr_next(skb) = NULL; 497 500 498 501 /* 499 502 * We want to take an extra reference since both us and the driver ··· 506 509 if (!c3cn->wr_pending_head) 507 510 c3cn->wr_pending_head = skb; 508 511 else 509 - skb_wr_data(skb) = skb; 512 + skb_tx_wr_next(c3cn->wr_pending_tail) = skb; 510 513 c3cn->wr_pending_tail = skb; 514 + } 515 + 516 + static int count_pending_wrs(struct s3_conn *c3cn) 517 + { 518 + int n = 0; 519 + const struct sk_buff *skb = c3cn->wr_pending_head; 520 + 521 + while (skb) { 522 + n += skb->csum; 523 + skb = skb_tx_wr_next(skb); 524 + } 525 + return n; 511 526 } 512 527 513 528 static inline struct sk_buff *peek_wr(const struct s3_conn *c3cn) ··· 538 529 539 530 if (likely(skb)) { 540 531 /* Don't bother clearing the tail */ 541 - c3cn->wr_pending_head = skb_wr_data(skb); 542 - skb_wr_data(skb) = NULL; 532 + c3cn->wr_pending_head = skb_tx_wr_next(skb); 533 + skb_tx_wr_next(skb) = NULL; 543 534 } 544 535 return skb; 545 536 } ··· 552 543 } 553 544 554 545 static inline void make_tx_data_wr(struct s3_conn *c3cn, struct sk_buff *skb, 555 - int len) 546 + int len, int req_completion) 556 547 { 557 548 struct tx_data_wr *req; 558 549 559 550 skb_reset_transport_header(skb); 560 551 req = (struct tx_data_wr *)__skb_push(skb, sizeof(*req)); 561 - req->wr_hi = htonl(V_WR_OP(FW_WROPCODE_OFLD_TX_DATA)); 552 + req->wr_hi = htonl(V_WR_OP(FW_WROPCODE_OFLD_TX_DATA) | 553 + (req_completion ? F_WR_COMPL : 0)); 562 554 req->wr_lo = htonl(V_WR_TID(c3cn->tid)); 563 555 req->sndseq = htonl(c3cn->snd_nxt); 564 556 /* len includes the length of any HW ULP additions */ ··· 602 592 603 593 if (unlikely(c3cn->state == C3CN_STATE_CONNECTING || 604 594 c3cn->state == C3CN_STATE_CLOSE_WAIT_1 || 605 - c3cn->state == C3CN_STATE_ABORTING)) { 595 + c3cn->state >= C3CN_STATE_ABORTING)) { 606 596 c3cn_tx_debug("c3cn 0x%p, in closing state %u.\n", 607 597 c3cn, c3cn->state); 608 598 return 0; ··· 625 615 if (c3cn->wr_avail < wrs_needed) { 626 616 c3cn_tx_debug("c3cn 0x%p, skb len %u/%u, frag %u, " 627 617 "wr %d < %u.\n", 628 - c3cn, skb->len, skb->datalen, frags, 618 + c3cn, skb->len, skb->data_len, frags, 629 619 wrs_needed, c3cn->wr_avail); 630 620 break; 631 621 } ··· 637 627 c3cn->wr_unacked += wrs_needed; 638 628 enqueue_wr(c3cn, skb); 639 629 640 - if (likely(CXGB3_SKB_CB(skb)->flags & C3CB_FLAG_NEED_HDR)) { 641 - len += ulp_extra_len(skb); 642 - make_tx_data_wr(c3cn, skb, len); 643 - c3cn->snd_nxt += len; 644 - if ((req_completion 645 - && c3cn->wr_unacked == wrs_needed) 646 - || (CXGB3_SKB_CB(skb)->flags & C3CB_FLAG_COMPL) 647 - || c3cn->wr_unacked >= c3cn->wr_max / 2) { 648 - struct work_request_hdr *wr = cplhdr(skb); 630 + c3cn_tx_debug("c3cn 0x%p, enqueue, skb len %u/%u, frag %u, " 631 + "wr %d, left %u, unack %u.\n", 632 + c3cn, skb->len, skb->data_len, frags, 633 + wrs_needed, c3cn->wr_avail, c3cn->wr_unacked); 649 634 650 - wr->wr_hi |= htonl(F_WR_COMPL); 635 + 636 + if (likely(skb_flags(skb) & C3CB_FLAG_NEED_HDR)) { 637 + if ((req_completion && 638 + c3cn->wr_unacked == wrs_needed) || 639 + (skb_flags(skb) & C3CB_FLAG_COMPL) || 640 + c3cn->wr_unacked >= c3cn->wr_max / 2) { 641 + req_completion = 1; 651 642 c3cn->wr_unacked = 0; 652 643 } 653 - CXGB3_SKB_CB(skb)->flags &= ~C3CB_FLAG_NEED_HDR; 644 + len += ulp_extra_len(skb); 645 + make_tx_data_wr(c3cn, skb, len, req_completion); 646 + c3cn->snd_nxt += len; 647 + skb_flags(skb) &= ~C3CB_FLAG_NEED_HDR; 654 648 } 655 649 656 650 total_size += skb->truesize; ··· 749 735 if (unlikely(c3cn_flag(c3cn, C3CN_ACTIVE_CLOSE_NEEDED))) 750 736 /* upper layer has requested closing */ 751 737 send_abort_req(c3cn); 752 - else if (c3cn_push_tx_frames(c3cn, 1)) 738 + else { 739 + if (skb_queue_len(&c3cn->write_queue)) 740 + c3cn_push_tx_frames(c3cn, 1); 753 741 cxgb3i_conn_tx_open(c3cn); 742 + } 754 743 } 755 744 756 745 static int do_act_establish(struct t3cdev *cdev, struct sk_buff *skb, ··· 1099 1082 return; 1100 1083 } 1101 1084 1102 - CXGB3_SKB_CB(skb)->seq = ntohl(hdr_cpl->seq); 1103 - CXGB3_SKB_CB(skb)->flags = 0; 1085 + skb_tcp_seq(skb) = ntohl(hdr_cpl->seq); 1086 + skb_flags(skb) = 0; 1104 1087 1105 1088 skb_reset_transport_header(skb); 1106 1089 __skb_pull(skb, sizeof(struct cpl_iscsi_hdr)); ··· 1120 1103 goto abort_conn; 1121 1104 1122 1105 skb_ulp_mode(skb) = ULP2_FLAG_DATA_READY; 1123 - skb_ulp_pdulen(skb) = ntohs(ddp_cpl.len); 1124 - skb_ulp_ddigest(skb) = ntohl(ddp_cpl.ulp_crc); 1106 + skb_rx_pdulen(skb) = ntohs(ddp_cpl.len); 1107 + skb_rx_ddigest(skb) = ntohl(ddp_cpl.ulp_crc); 1125 1108 status = ntohl(ddp_cpl.ddp_status); 1126 1109 1127 1110 c3cn_rx_debug("rx skb 0x%p, len %u, pdulen %u, ddp status 0x%x.\n", 1128 1111 skb, skb->len, skb_rx_pdulen(skb), status); 1129 1112 1130 1113 if (status & (1 << RX_DDP_STATUS_HCRC_SHIFT)) 1131 1114 skb_ulp_mode(skb) |= ULP2_FLAG_HCRC_ERROR; ··· 1143 1126 } else if (status & (1 << RX_DDP_STATUS_DDP_SHIFT)) 1144 1127 skb_ulp_mode(skb) |= ULP2_FLAG_DATA_DDPED; 1145 1128 1146 - c3cn->rcv_nxt = ntohl(ddp_cpl.seq) + skb_ulp_pdulen(skb); 1129 + c3cn->rcv_nxt = ntohl(ddp_cpl.seq) + skb_rx_pdulen(skb); 1147 1130 __pskb_trim(skb, len); 1148 1131 __skb_queue_tail(&c3cn->receive_queue, skb); 1149 1132 cxgb3i_conn_pdu_ready(c3cn); ··· 1168 1151 * Process an acknowledgment of WR completion. Advance snd_una and send the 1169 1152 * next batch of work requests from the write queue. 1170 1153 */ 1154 + static void check_wr_invariants(struct s3_conn *c3cn) 1155 + { 1156 + int pending = count_pending_wrs(c3cn); 1157 + 1158 + if (unlikely(c3cn->wr_avail + pending != c3cn->wr_max)) 1159 + cxgb3i_log_error("TID %u: credit imbalance: avail %u, " 1160 + "pending %u, total should be %u\n", 1161 + c3cn->tid, c3cn->wr_avail, pending, 1162 + c3cn->wr_max); 1163 + } 1164 + 1171 1165 static void process_wr_ack(struct s3_conn *c3cn, struct sk_buff *skb) 1172 1166 { 1173 1167 struct cpl_wr_ack *hdr = cplhdr(skb); 1174 1168 unsigned int credits = ntohs(hdr->credits); 1175 1169 u32 snd_una = ntohl(hdr->snd_una); 1170 + 1171 + c3cn_tx_debug("%u WR credits, avail %u, unack %u, TID %u, state %u.\n", 1172 + credits, c3cn->wr_avail, c3cn->wr_unacked, 1173 + c3cn->tid, c3cn->state); 1176 1174 1177 1175 c3cn->wr_avail += credits; 1178 1176 if (c3cn->wr_unacked > c3cn->wr_max - c3cn->wr_avail) ··· 1203 1171 break; 1204 1172 } 1205 1173 if (unlikely(credits < p->csum)) { 1174 + struct tx_data_wr *w = cplhdr(p); 1175 + cxgb3i_log_error("TID %u got %u WR credits need %u, " 1176 + "len %u, main body %u, frags %u, " 1177 + "seq # %u, ACK una %u, ACK nxt %u, " 1178 + "WR_AVAIL %u, WRs pending %u\n", 1179 + c3cn->tid, credits, p->csum, p->len, 1180 + p->len - p->data_len, 1181 + skb_shinfo(p)->nr_frags, 1182 + ntohl(w->sndseq), snd_una, 1183 + ntohl(hdr->snd_nxt), c3cn->wr_avail, 1184 + count_pending_wrs(c3cn) - credits); 1206 1185 p->csum -= credits; 1207 1186 break; 1208 1187 } else { ··· 1223 1180 } 1224 1181 } 1225 1182 1226 - if (unlikely(before(snd_una, c3cn->snd_una))) 1183 + check_wr_invariants(c3cn); 1184 + 1185 + if (unlikely(before(snd_una, c3cn->snd_una))) { 1186 + cxgb3i_log_error("TID %u, unexpected sequence # %u in WR_ACK " 1187 + "snd_una %u\n", 1188 + c3cn->tid, snd_una, c3cn->snd_una); 1227 1189 goto out_free; 1190 + } 1228 1191 1229 1192 if (c3cn->snd_una != snd_una) { 1230 1193 c3cn->snd_una = snd_una; 1231 1194 dst_confirm(c3cn->dst_cache); 1232 1195 } 1233 1196 1234 - if (skb_queue_len(&c3cn->write_queue) && c3cn_push_tx_frames(c3cn, 0)) 1197 + if (skb_queue_len(&c3cn->write_queue)) { 1198 + if (c3cn_push_tx_frames(c3cn, 0)) 1199 + cxgb3i_conn_tx_open(c3cn); 1200 + } else 1235 1201 cxgb3i_conn_tx_open(c3cn); 1236 1202 out_free: 1237 1203 __kfree_skb(skb); ··· 1504 1452 struct dst_entry *dst) 1505 1453 { 1506 1454 BUG_ON(c3cn->cdev != cdev); 1507 - c3cn->wr_max = c3cn->wr_avail = T3C_DATA(cdev)->max_wrs; 1455 + c3cn->wr_max = c3cn->wr_avail = T3C_DATA(cdev)->max_wrs - 1; 1508 1456 c3cn->wr_unacked = 0; 1509 1457 c3cn->mss_idx = select_mss(c3cn, dst_mtu(dst)); 1510 1458 ··· 1723 1671 goto out_err; 1724 1672 } 1725 1673 1726 - err = -EPIPE; 1727 1674 if (c3cn->err) { 1728 1675 c3cn_tx_debug("c3cn 0x%p, err %d.\n", c3cn, c3cn->err); 1676 + err = -EPIPE; 1677 + goto out_err; 1678 + } 1679 + 1680 + if (c3cn->write_seq - c3cn->snd_una >= cxgb3_snd_win) { 1681 + c3cn_tx_debug("c3cn 0x%p, snd %u - %u > %u.\n", 1682 + c3cn, c3cn->write_seq, c3cn->snd_una, 1683 + cxgb3_snd_win); 1684 + err = -EAGAIN; 1729 1685 goto out_err; 1730 1686 } 1731 1687
+19 -10
drivers/scsi/cxgb3i/cxgb3i_offload.h
··· 178 178 * @flag: see C3CB_FLAG_* below 179 179 * @ulp_mode: ULP mode/submode of sk_buff 180 180 * @seq: tcp sequence number 181 - * @ddigest: pdu data digest 182 - * @pdulen: recovered pdu length 183 - * @wr_data: scratch area for tx wr 184 181 */ 182 + struct cxgb3_skb_rx_cb { 183 + __u32 ddigest; /* data digest */ 184 + __u32 pdulen; /* recovered pdu length */ 185 + }; 186 + 187 + struct cxgb3_skb_tx_cb { 188 + struct sk_buff *wr_next; /* next wr */ 189 + }; 190 + 185 191 struct cxgb3_skb_cb { 186 192 __u8 flags; 187 193 __u8 ulp_mode; 188 194 __u32 seq; 189 - __u32 ddigest; 190 - __u32 pdulen; 191 - struct sk_buff *wr_data; 195 + union { 196 + struct cxgb3_skb_rx_cb rx; 197 + struct cxgb3_skb_tx_cb tx; 198 + }; 192 199 }; 193 200 194 201 #define CXGB3_SKB_CB(skb) ((struct cxgb3_skb_cb *)&((skb)->cb[0])) 195 - 202 + #define skb_flags(skb) (CXGB3_SKB_CB(skb)->flags) 196 203 #define skb_ulp_mode(skb) (CXGB3_SKB_CB(skb)->ulp_mode) 197 - #define skb_ulp_ddigest(skb) (CXGB3_SKB_CB(skb)->ddigest) 198 - #define skb_ulp_pdulen(skb) (CXGB3_SKB_CB(skb)->pdulen) 199 - #define skb_wr_data(skb) (CXGB3_SKB_CB(skb)->wr_data) 204 + #define skb_tcp_seq(skb) (CXGB3_SKB_CB(skb)->seq) 205 + #define skb_rx_ddigest(skb) (CXGB3_SKB_CB(skb)->rx.ddigest) 206 + #define skb_rx_pdulen(skb) (CXGB3_SKB_CB(skb)->rx.pdulen) 207 + #define skb_tx_wr_next(skb) (CXGB3_SKB_CB(skb)->tx.wr_next) 200 208 201 209 enum c3cb_flags { 202 210 C3CB_FLAG_NEED_HDR = 1 << 0, /* packet needs a TX_DATA_WR header */ ··· 225 217 /* for TX: a skb must have a headroom of at least TX_HEADER_LEN bytes */ 226 218 #define TX_HEADER_LEN \ 227 219 (sizeof(struct tx_data_wr) + sizeof(struct sge_opaque_hdr)) 220 + #define SKB_TX_HEADROOM SKB_MAX_HEAD(TX_HEADER_LEN) 228 221 229 222 /* 230 223 * get and set private ip for iscsi traffic
+181 -88
drivers/scsi/cxgb3i/cxgb3i_pdu.c
··· 32 32 #define cxgb3i_tx_debug(fmt...) 33 33 #endif 34 34 35 + /* always allocate rooms for AHS */ 36 + #define SKB_TX_PDU_HEADER_LEN \ 37 + (sizeof(struct iscsi_hdr) + ISCSI_MAX_AHS_SIZE) 38 + static unsigned int skb_extra_headroom; 35 39 static struct page *pad_page; 36 40 37 41 /* ··· 150 146 151 147 void cxgb3i_conn_cleanup_task(struct iscsi_task *task) 152 148 { 153 - struct iscsi_tcp_task *tcp_task = task->dd_data; 149 + struct cxgb3i_task_data *tdata = task->dd_data + 150 + sizeof(struct iscsi_tcp_task); 154 151 155 152 /* never reached the xmit task callout */ 156 - if (tcp_task->dd_data) 157 - kfree_skb(tcp_task->dd_data); 158 - tcp_task->dd_data = NULL; 153 + if (tdata->skb) 154 + __kfree_skb(tdata->skb); 155 + memset(tdata, 0, sizeof(struct cxgb3i_task_data)); 159 156 160 157 /* MNC - Do we need a check in case this is called but 161 158 * cxgb3i_conn_alloc_pdu has never been called on the task */ ··· 164 159 iscsi_tcp_cleanup_task(task); 165 160 } 166 161 167 - /* 168 - * We do not support ahs yet 169 - */ 162 + static int sgl_seek_offset(struct scatterlist *sgl, unsigned int sgcnt, 163 + unsigned int offset, unsigned int *off, 164 + struct scatterlist **sgp) 165 + { 166 + int i; 167 + struct scatterlist *sg; 168 + 169 + for_each_sg(sgl, sg, sgcnt, i) { 170 + if (offset < sg->length) { 171 + *off = offset; 172 + *sgp = sg; 173 + return 0; 174 + } 175 + offset -= sg->length; 176 + } 177 + return -EFAULT; 178 + } 179 + 180 + static int sgl_read_to_frags(struct scatterlist *sg, unsigned int sgoffset, 181 + unsigned int dlen, skb_frag_t *frags, 182 + int frag_max) 183 + { 184 + unsigned int datalen = dlen; 185 + unsigned int sglen = sg->length - sgoffset; 186 + struct page *page = sg_page(sg); 187 + int i; 188 + 189 + i = 0; 190 + do { 191 + unsigned int copy; 192 + 193 + if (!sglen) { 194 + sg = sg_next(sg); 195 + if (!sg) { 196 + cxgb3i_log_error("%s, sg NULL, len %u/%u.\n", 197 + __func__, datalen, dlen); 198 + return -EINVAL; 199 + } 200 + sgoffset = 0; 201 + sglen = sg->length; 202 + page = sg_page(sg); 203 + 204 + } 205 + copy = min(datalen, sglen); 206 + if (i && page == frags[i - 1].page && 207 + sgoffset + sg->offset == 208 + frags[i - 1].page_offset + frags[i - 1].size) { 209 + frags[i - 1].size += copy; 210 + } else { 211 + if (i >= frag_max) { 212 + cxgb3i_log_error("%s, too many pages %u, " 213 + "dlen %u.\n", __func__, 214 + frag_max, dlen); 215 + return -EINVAL; 216 + } 217 + 218 + frags[i].page = page; 219 + frags[i].page_offset = sg->offset + sgoffset; 220 + frags[i].size = copy; 221 + i++; 222 + } 223 + datalen -= copy; 224 + sgoffset += copy; 225 + sglen -= copy; 226 + } while (datalen); 227 + 228 + return i; 229 + } 230 + 170 231 int cxgb3i_conn_alloc_pdu(struct iscsi_task *task, u8 opcode) 171 232 { 233 + struct iscsi_conn *conn = task->conn; 172 234 struct iscsi_tcp_task *tcp_task = task->dd_data; 173 - struct sk_buff *skb; 235 + struct cxgb3i_task_data *tdata = task->dd_data + sizeof(*tcp_task); 236 + struct scsi_cmnd *sc = task->sc; 237 + int headroom = SKB_TX_PDU_HEADER_LEN; 174 238 239 + tcp_task->dd_data = tdata; 175 240 task->hdr = NULL; 176 - /* always allocate rooms for AHS */ 177 - skb = alloc_skb(sizeof(struct iscsi_hdr) + ISCSI_MAX_AHS_SIZE + 178 - TX_HEADER_LEN, GFP_ATOMIC); 179 - if (!skb) 241 + 242 + /* write command, need to send data pdus */ 243 + if (skb_extra_headroom && (opcode == ISCSI_OP_SCSI_DATA_OUT || 244 + (opcode == ISCSI_OP_SCSI_CMD && 245 + (scsi_bidi_cmnd(sc) || sc->sc_data_direction == DMA_TO_DEVICE)))) 246 + headroom += min(skb_extra_headroom, conn->max_xmit_dlength); 247 + 248 + tdata->skb = alloc_skb(TX_HEADER_LEN + headroom, GFP_ATOMIC); 249 + if (!tdata->skb) 180 250 return -ENOMEM; 251 + skb_reserve(tdata->skb, TX_HEADER_LEN); 181 252 182 253 cxgb3i_tx_debug("task 0x%p, opcode 0x%x, skb 0x%p.\n", 183 254 task, opcode, tdata->skb); 184 255 185 - tcp_task->dd_data = skb; 186 - skb_reserve(skb, TX_HEADER_LEN); 187 - task->hdr = (struct iscsi_hdr *)skb->data; 188 - task->hdr_max = sizeof(struct iscsi_hdr); 256 + task->hdr = (struct iscsi_hdr *)tdata->skb->data; 257 + task->hdr_max = SKB_TX_PDU_HEADER_LEN; 189 258 190 259 /* data_out uses scsi_cmd's itt */ 191 260 if (opcode != ISCSI_OP_SCSI_DATA_OUT) ··· 271 192 int cxgb3i_conn_init_pdu(struct iscsi_task *task, unsigned int offset, 272 193 unsigned int count) 273 194 { 274 - struct iscsi_tcp_task *tcp_task = task->dd_data; 275 - struct sk_buff *skb = tcp_task->dd_data; 276 195 struct iscsi_conn *conn = task->conn; 277 - struct page *pg; 196 + struct iscsi_tcp_task *tcp_task = task->dd_data; 197 + struct cxgb3i_task_data *tdata = tcp_task->dd_data; 198 + struct sk_buff *skb = tdata->skb; 278 199 unsigned int datalen = count; 279 200 int i, padlen = iscsi_padding(count); 280 - skb_frag_t *frag; 201 + struct page *pg; 281 202 282 203 cxgb3i_tx_debug("task 0x%p,0x%p, offset %u, count %u, skb 0x%p.\n", 283 204 task, task->sc, offset, count, skb); ··· 288 209 return 0; 289 210 290 211 if (task->sc) { 291 - struct scatterlist *sg; 292 - struct scsi_data_buffer *sdb; 293 - unsigned int sgoffset = offset; 294 - struct page *sgpg; 295 - unsigned int sglen; 212 + struct scsi_data_buffer *sdb = scsi_out(task->sc); 213 + struct scatterlist *sg = NULL; 214 + int err; 296 215 297 - sdb = scsi_out(task->sc); 298 - sg = sdb->table.sgl; 299 - 300 - for_each_sg(sdb->table.sgl, sg, sdb->table.nents, i) { 301 - cxgb3i_tx_debug("sg %d, page 0x%p, len %u offset %u\n", 302 - i, sg_page(sg), sg->length, sg->offset); 303 - 304 - if (sgoffset < sg->length) 305 - break; 306 - sgoffset -= sg->length; 216 + tdata->offset = offset; 217 + tdata->count = count; 218 + err = sgl_seek_offset(sdb->table.sgl, sdb->table.nents, 219 + tdata->offset, &tdata->sgoffset, &sg); 220 + if (err < 0) { 221 + cxgb3i_log_warn("tpdu, sgl %u, bad offset %u/%u.\n", 222 + sdb->table.nents, tdata->offset, 223 + sdb->length); 224 + return err; 307 225 } 308 - sgpg = sg_page(sg); 309 - sglen = sg->length
- sgoffset; 226 + err = sgl_read_to_frags(sg, tdata->sgoffset, tdata->count, 227 + tdata->frags, MAX_PDU_FRAGS); 228 + if (err < 0) { 229 + cxgb3i_log_warn("tpdu, sgl %u, bad offset %u + %u.\n", 230 + sdb->table.nents, tdata->offset, 231 + tdata->count); 232 + return err; 233 + } 234 + tdata->nr_frags = err; 310 235 311 - do { 312 - int j = skb_shinfo(skb)->nr_frags; 313 - unsigned int copy; 236 + if (tdata->nr_frags > MAX_SKB_FRAGS || 237 + (padlen && tdata->nr_frags == MAX_SKB_FRAGS)) { 238 + char *dst = skb->data + task->hdr_len; 239 + skb_frag_t *frag = tdata->frags; 314 240 315 - if (!sglen) { 316 - sg = sg_next(sg); 317 - sgpg = sg_page(sg); 318 - sgoffset = 0; 319 - sglen = sg->length; 320 - ++i; 241 + /* data fits in the skb's headroom */ 242 + for (i = 0; i < tdata->nr_frags; i++, frag++) { 243 + char *src = kmap_atomic(frag->page, 244 + KM_SOFTIRQ0); 245 + 246 + memcpy(dst, src+frag->page_offset, frag->size); 247 + dst += frag->size; 248 + kunmap_atomic(src, KM_SOFTIRQ0); 321 249 } 322 - copy = min(sglen, datalen); 323 - if (j && skb_can_coalesce(skb, j, sgpg, 324 - sg->offset + sgoffset)) { 325 - skb_shinfo(skb)->frags[j - 1].size += copy; 326 - } else { 327 - get_page(sgpg); 328 - skb_fill_page_desc(skb, j, sgpg, 329 - sg->offset + sgoffset, copy); 250 + if (padlen) { 251 + memset(dst, 0, padlen); 252 + padlen = 0; 330 253 } 331 - sgoffset += copy; 332 - sglen -= copy; 333 - datalen -= copy; 334 - } while (datalen); 254 + skb_put(skb, count + padlen); 255 + } else { 256 + /* data fit into frag_list */ 257 + for (i = 0; i < tdata->nr_frags; i++) 258 + get_page(tdata->frags[i].page); 259 + 260 + memcpy(skb_shinfo(skb)->frags, tdata->frags, 261 + sizeof(skb_frag_t) * tdata->nr_frags); 262 + skb_shinfo(skb)->nr_frags = tdata->nr_frags; 263 + skb->len += count; 264 + skb->data_len += count; 265 + skb->truesize += count; 266 + } 267 + 335 268 } else { 336 269 pg = virt_to_page(task->data); 337 270 338 - while (datalen) { 339 - i = skb_shinfo(skb)->nr_frags; 
340 - frag = &skb_shinfo(skb)->frags[i]; 341 - 342 - get_page(pg); 343 - frag->page = pg; 344 - frag->page_offset = 0; 345 - frag->size = min((unsigned int)PAGE_SIZE, datalen); 346 - 347 - skb_shinfo(skb)->nr_frags++; 348 - datalen -= frag->size; 349 - pg++; 350 - } 271 + get_page(pg); 272 + skb_fill_page_desc(skb, 0, pg, offset_in_page(task->data), 273 + count); 274 + skb->len += count; 275 + skb->data_len += count; 276 + skb->truesize += count; 351 277 } 352 278 353 279 if (padlen) { 354 280 i = skb_shinfo(skb)->nr_frags; 355 - frag = &skb_shinfo(skb)->frags[i]; 356 - frag->page = pad_page; 357 - frag->page_offset = 0; 358 - frag->size = padlen; 359 - skb_shinfo(skb)->nr_frags++; 281 + get_page(pad_page); 282 + skb_fill_page_desc(skb, skb_shinfo(skb)->nr_frags, pad_page, 0, 283 + padlen); 284 + 285 + skb->data_len += padlen; 286 + skb->truesize += padlen; 287 + skb->len += padlen; 360 288 } 361 289 362 - datalen = count + padlen; 363 - skb->data_len += datalen; 364 - skb->truesize += datalen; 365 - skb->len += datalen; 366 290 return 0; 367 291 } 368 292 369 293 int cxgb3i_conn_xmit_pdu(struct iscsi_task *task) 370 294 { 371 - struct iscsi_tcp_task *tcp_task = task->dd_data; 372 - struct sk_buff *skb = tcp_task->dd_data; 373 295 struct iscsi_tcp_conn *tcp_conn = task->conn->dd_data; 374 296 struct cxgb3i_conn *cconn = tcp_conn->dd_data; 297 + struct iscsi_tcp_task *tcp_task = task->dd_data; 298 + struct cxgb3i_task_data *tdata = tcp_task->dd_data; 299 + struct sk_buff *skb = tdata->skb; 375 300 unsigned int datalen; 376 301 int err; 377 302 ··· 383 300 return 0; 384 301 385 302 datalen = skb->data_len; 386 - tcp_task->dd_data = NULL; 303 + tdata->skb = NULL; 387 304 err = cxgb3i_c3cn_send_pdus(cconn->cep->c3cn, skb); 388 - cxgb3i_tx_debug("task 0x%p, skb 0x%p, len %u/%u, rv %d.\n", 389 - task, skb, skb->len, skb->data_len, err); 390 305 if (err > 0) { 391 306 int pdulen = err; 307 + 308 + cxgb3i_tx_debug("task 0x%p, skb 0x%p, len %u/%u, rv %d.\n", 309 + task, 
skb, skb->len, skb->data_len, err); 392 310 393 311 if (task->conn->hdrdgst_en) 394 312 pdulen += ISCSI_DIGEST_SIZE; ··· 409 325 return err; 410 326 } 411 327 /* reset skb to send when we are called again */ 412 - tcp_task->dd_data = skb; 328 + tdata->skb = skb; 413 329 return -EAGAIN; 414 330 } 415 331 416 332 int cxgb3i_pdu_init(void) 417 333 { 334 + if (SKB_TX_HEADROOM > (512 * MAX_SKB_FRAGS)) 335 + skb_extra_headroom = SKB_TX_HEADROOM; 418 336 pad_page = alloc_page(GFP_KERNEL); 419 337 if (!pad_page) 420 338 return -ENOMEM; ··· 452 366 skb = skb_peek(&c3cn->receive_queue); 453 367 while (!err && skb) { 454 368 __skb_unlink(skb, &c3cn->receive_queue); 455 - read += skb_ulp_pdulen(skb); 369 + read += skb_rx_pdulen(skb); 370 + cxgb3i_rx_debug("conn 0x%p, cn 0x%p, rx skb 0x%p, pdulen %u.\n", 371 + conn, c3cn, skb, skb_rx_pdulen(skb)); 456 372 err = cxgb3i_conn_read_pdu_skb(conn, skb); 457 373 __kfree_skb(skb); 458 374 skb = skb_peek(&c3cn->receive_queue); ··· 465 377 cxgb3i_c3cn_rx_credits(c3cn, read); 466 378 } 467 379 conn->rxdata_octets += read; 380 + 381 + if (err) { 382 + cxgb3i_log_info("conn 0x%p rx failed err %d.\n", conn, err); 383 + iscsi_conn_failure(conn, ISCSI_ERR_CONN_FAILED); 384 + } 468 385 } 469 386 470 387 void cxgb3i_conn_tx_open(struct s3_conn *c3cn)
+1 -1
drivers/scsi/cxgb3i/cxgb3i_pdu.h
··· 53 53 #define ULP2_FLAG_DCRC_ERROR 0x20 54 54 #define ULP2_FLAG_PAD_ERROR 0x40 55 55 56 - void cxgb3i_conn_closing(struct s3_conn *); 56 + void cxgb3i_conn_closing(struct s3_conn *c3cn); 57 57 void cxgb3i_conn_pdu_ready(struct s3_conn *c3cn); 58 58 void cxgb3i_conn_tx_open(struct s3_conn *c3cn); 59 59 #endif
+1
drivers/scsi/hptiop.c
··· 1251 1251 { PCI_VDEVICE(TTI, 0x3530), (kernel_ulong_t)&hptiop_itl_ops }, 1252 1252 { PCI_VDEVICE(TTI, 0x3560), (kernel_ulong_t)&hptiop_itl_ops }, 1253 1253 { PCI_VDEVICE(TTI, 0x4322), (kernel_ulong_t)&hptiop_itl_ops }, 1254 + { PCI_VDEVICE(TTI, 0x4321), (kernel_ulong_t)&hptiop_itl_ops }, 1254 1255 { PCI_VDEVICE(TTI, 0x4210), (kernel_ulong_t)&hptiop_itl_ops }, 1255 1256 { PCI_VDEVICE(TTI, 0x4211), (kernel_ulong_t)&hptiop_itl_ops }, 1256 1257 { PCI_VDEVICE(TTI, 0x4310), (kernel_ulong_t)&hptiop_itl_ops },
+2 -3
drivers/scsi/scsi_lib.c
··· 1040 1040 action = ACTION_FAIL; 1041 1041 break; 1042 1042 case ABORTED_COMMAND: 1043 + action = ACTION_FAIL; 1043 1044 if (sshdr.asc == 0x10) { /* DIF */ 1044 1045 description = "Target Data Integrity Failure"; 1045 - action = ACTION_FAIL; 1046 1046 error = -EILSEQ; 1047 - } else 1048 - action = ACTION_RETRY; 1047 + } 1049 1048 break; 1050 1049 case NOT_READY: 1051 1050 /* If the device is in the process of becoming
+7
drivers/scsi/sd.c
··· 107 107 static void sd_print_sense_hdr(struct scsi_disk *, struct scsi_sense_hdr *); 108 108 static void sd_print_result(struct scsi_disk *, int); 109 109 110 + static DEFINE_SPINLOCK(sd_index_lock); 110 111 static DEFINE_IDA(sd_index_ida); 111 112 112 113 /* This semaphore is used to mediate the 0->1 reference get in the ··· 1915 1914 if (!ida_pre_get(&sd_index_ida, GFP_KERNEL)) 1916 1915 goto out_put; 1917 1916 1917 + spin_lock(&sd_index_lock); 1918 1918 error = ida_get_new(&sd_index_ida, &index); 1919 + spin_unlock(&sd_index_lock); 1919 1920 } while (error == -EAGAIN); 1920 1921 1921 1922 if (error) ··· 1939 1936 return 0; 1940 1937 1941 1938 out_free_index: 1939 + spin_lock(&sd_index_lock); 1942 1940 ida_remove(&sd_index_ida, index); 1941 + spin_unlock(&sd_index_lock); 1943 1942 out_put: 1944 1943 put_disk(gd); 1945 1944 out_free: ··· 1991 1986 struct scsi_disk *sdkp = to_scsi_disk(dev); 1992 1987 struct gendisk *disk = sdkp->disk; 1993 1988 1989 + spin_lock(&sd_index_lock); 1994 1990 ida_remove(&sd_index_ida, sdkp->index); 1991 + spin_unlock(&sd_index_lock); 1995 1992 1996 1993 disk->private_data = NULL; 1997 1994 put_disk(disk);
+1 -1
drivers/serial/sh-sci.h
··· 133 133 # define SCSPTR3 0xffed0024 /* 16 bit SCIF */ 134 134 # define SCSPTR4 0xffee0024 /* 16 bit SCIF */ 135 135 # define SCSPTR5 0xffef0024 /* 16 bit SCIF */ 136 - # define SCIF_OPER 0x0001 /* Overrun error bit */ 136 + # define SCIF_ORER 0x0001 /* Overrun error bit */ 137 137 # define SCSCR_INIT(port) 0x3a /* TIE=0,RIE=0,TE=1,RE=1,REIE=1 */ 138 138 #elif defined(CONFIG_CPU_SUBTYPE_SH7201) || \ 139 139 defined(CONFIG_CPU_SUBTYPE_SH7203) || \
+12 -11
drivers/staging/panel/panel.c
··· 2164 2164 if (scan_timer.function != NULL) 2165 2165 del_timer(&scan_timer); 2166 2166 2167 - if (keypad_enabled) 2168 - misc_deregister(&keypad_dev); 2167 + if (pprt != NULL) { 2168 + if (keypad_enabled) 2169 + misc_deregister(&keypad_dev); 2169 2170 2170 - if (lcd_enabled) { 2171 - panel_lcd_print("\x0cLCD driver " PANEL_VERSION 2172 - "\nunloaded.\x1b[Lc\x1b[Lb\x1b[L-"); 2173 - misc_deregister(&lcd_dev); 2171 + if (lcd_enabled) { 2172 + panel_lcd_print("\x0cLCD driver " PANEL_VERSION 2173 + "\nunloaded.\x1b[Lc\x1b[Lb\x1b[L-"); 2174 + misc_deregister(&lcd_dev); 2175 + } 2176 + 2177 + /* TODO: free all input signals */ 2178 + parport_release(pprt); 2179 + parport_unregister_device(pprt); 2174 2180 } 2175 - 2176 - /* TODO: free all input signals */ 2177 - 2178 - parport_release(pprt); 2179 - parport_unregister_device(pprt); 2180 2181 parport_unregister_driver(&panel_driver); 2181 2182 } 2182 2183
+1
drivers/staging/rtl8187se/Kconfig
··· 1 1 config RTL8187SE 2 2 tristate "RealTek RTL8187SE Wireless LAN NIC driver" 3 3 depends on PCI 4 + depends on WIRELESS_EXT && COMPAT_NET_DEV_OPS 4 5 default N 5 6 ---help---
+10 -9
drivers/staging/rtl8187se/ieee80211/ieee80211_crypt.c
··· 234 234 void ieee80211_crypto_deinit(void) 235 235 { 236 236 struct list_head *ptr, *n; 237 + struct ieee80211_crypto_alg *alg = NULL; 237 238 238 239 if (hcrypt == NULL) 239 240 return; 240 241 241 - for (ptr = hcrypt->algs.next, n = ptr->next; ptr != &hcrypt->algs; 242 - ptr = n, n = ptr->next) { 243 - struct ieee80211_crypto_alg *alg = 244 - (struct ieee80211_crypto_alg *) ptr; 245 - list_del(ptr); 246 - printk(KERN_DEBUG "ieee80211_crypt: unregistered algorithm " 247 - "'%s' (deinit)\n", alg->ops->name); 248 - kfree(alg); 242 + list_for_each_safe(ptr, n, &hcrypt->algs) { 243 + alg = list_entry(ptr, struct ieee80211_crypto_alg, list); 244 + if (alg) { 245 + list_del(ptr); 246 + printk(KERN_DEBUG 247 + "ieee80211_crypt: unregistered algorithm '%s' (deinit)\n", 248 + alg->ops->name); 249 + kfree(alg); 250 + } 249 251 } 250 - 251 252 kfree(hcrypt); 252 253 } 253 254
+1 -1
drivers/staging/rtl8187se/r8180_core.c
··· 6161 6161 { 6162 6162 pci_unregister_driver (&rtl8180_pci_driver); 6163 6163 rtl8180_proc_module_remove(); 6164 - ieee80211_crypto_deinit(); 6165 6164 ieee80211_crypto_tkip_exit(); 6166 6165 ieee80211_crypto_ccmp_exit(); 6167 6166 ieee80211_crypto_wep_exit(); 6167 + ieee80211_crypto_deinit(); 6168 6168 DMESG("Exiting"); 6169 6169 } 6170 6170
+13 -7
drivers/staging/winbond/wbusb.c
··· 319 319 struct usb_device *udev = interface_to_usbdev(intf); 320 320 struct wbsoft_priv *priv; 321 321 struct ieee80211_hw *dev; 322 - int err; 322 + int nr, err; 323 323 324 324 usb_get_dev(udev); 325 325 326 326 // 20060630.2 Check the device if it already be opened 327 - err = usb_control_msg(udev, usb_rcvctrlpipe( udev, 0 ), 328 - 0x01, USB_TYPE_VENDOR|USB_RECIP_DEVICE|USB_DIR_IN, 329 - 0x0, 0x400, &ltmp, 4, HZ*100 ); 330 - if (err) 327 + nr = usb_control_msg(udev, usb_rcvctrlpipe( udev, 0 ), 328 + 0x01, USB_TYPE_VENDOR|USB_RECIP_DEVICE|USB_DIR_IN, 329 + 0x0, 0x400, &ltmp, 4, HZ*100 ); 330 + if (nr < 0) { 331 + err = nr; 331 332 goto error; 333 + } 332 334 333 335 ltmp = cpu_to_le32(ltmp); 334 336 if (ltmp) { // Is already initialized? ··· 339 337 } 340 338 341 339 dev = ieee80211_alloc_hw(sizeof(*priv), &wbsoft_ops); 342 - if (!dev) 340 + if (!dev) { 341 + err = -ENOMEM; 343 342 goto error; 343 + } 344 344 345 345 priv = dev->priv; 346 346 ··· 373 369 } 374 370 375 371 dev->extra_tx_headroom = 12; /* FIXME */ 376 - dev->flags = 0; 372 + dev->flags = IEEE80211_HW_SIGNAL_UNSPEC; 373 + dev->wiphy->interface_modes = BIT(NL80211_IFTYPE_STATION); 377 374 378 375 dev->channel_change_time = 1000; 376 + dev->max_signal = 100; 379 377 dev->queues = 1; 380 378 381 379 dev->wiphy->bands[IEEE80211_BAND_2GHZ] = &wbsoft_band_2GHz;
+9
drivers/usb/class/cdc-acm.c
··· 1376 1376 { USB_DEVICE(0x0572, 0x1324), /* Conexant USB MODEM RD02-D400 */ 1377 1377 .driver_info = NO_UNION_NORMAL, /* has no union descriptor */ 1378 1378 }, 1379 + { USB_DEVICE(0x22b8, 0x6425), /* Motorola MOTOMAGX phones */ 1380 + }, 1381 + { USB_DEVICE(0x0572, 0x1329), /* Hummingbird huc56s (Conexant) */ 1382 + .driver_info = NO_UNION_NORMAL, /* union descriptor misplaced on 1383 + data interface instead of 1384 + communications interface. 1385 + Maybe we should define a new 1386 + quirk for this. */ 1387 + }, 1379 1388 1380 1389 /* control interfaces with various AT-command sets */ 1381 1390 { USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_ACM,
+8 -3
drivers/usb/core/message.c
··· 653 653 if (result <= 0 && result != -ETIMEDOUT) 654 654 continue; 655 655 if (result > 1 && ((u8 *)buf)[1] != type) { 656 - result = -EPROTO; 656 + result = -ENODATA; 657 657 continue; 658 658 } 659 659 break; ··· 696 696 USB_REQ_GET_DESCRIPTOR, USB_DIR_IN, 697 697 (USB_DT_STRING << 8) + index, langid, buf, size, 698 698 USB_CTRL_GET_TIMEOUT); 699 - if (!(result == 0 || result == -EPIPE)) 700 - break; 699 + if (result == 0 || result == -EPIPE) 700 + continue; 701 + if (result > 1 && ((u8 *) buf)[1] != USB_DT_STRING) { 702 + result = -ENODATA; 703 + continue; 704 + } 705 + break; 701 706 } 702 707 return result; 703 708 }
+1
drivers/usb/gadget/Kconfig
··· 191 191 boolean "OMAP USB Device Controller" 192 192 depends on ARCH_OMAP 193 193 select ISP1301_OMAP if MACH_OMAP_H2 || MACH_OMAP_H3 || MACH_OMAP_H4_OTG 194 + select USB_OTG_UTILS if ARCH_OMAP 194 195 help 195 196 Many Texas Instruments OMAP processors have flexible full 196 197 speed USB device controllers, with support for up to 30
+2 -2
drivers/usb/gadget/f_obex.c
··· 366 366 f->hs_descriptors = usb_copy_descriptors(hs_function); 367 367 368 368 obex->hs.obex_in = usb_find_endpoint(hs_function, 369 - f->descriptors, &obex_hs_ep_in_desc); 369 + f->hs_descriptors, &obex_hs_ep_in_desc); 370 370 obex->hs.obex_out = usb_find_endpoint(hs_function, 371 - f->descriptors, &obex_hs_ep_out_desc); 371 + f->hs_descriptors, &obex_hs_ep_out_desc); 372 372 } 373 373 374 374 /* Avoid letting this gadget enumerate until the userspace
+5 -1
drivers/usb/gadget/file_storage.c
··· 3879 3879 mod_data.protocol_type = USB_SC_SCSI; 3880 3880 mod_data.protocol_name = "Transparent SCSI"; 3881 3881 3882 - if (gadget_is_sh(fsg->gadget)) 3882 + /* Some peripheral controllers are known not to be able to 3883 + * halt bulk endpoints correctly. If one of them is present, 3884 + * disable stalls. 3885 + */ 3886 + if (gadget_is_sh(fsg->gadget) || gadget_is_at91(fsg->gadget)) 3883 3887 mod_data.can_stall = 0; 3884 3888 3885 3889 if (mod_data.release == 0xffff) { // Parameter wasn't set
+3
drivers/usb/gadget/fsl_usb2_udc.c
··· 404 404 } 405 405 if (zlt) 406 406 tmp |= EP_QUEUE_HEAD_ZLT_SEL; 407 + 407 408 p_QH->max_pkt_length = cpu_to_le32(tmp); 409 + p_QH->next_dtd_ptr = 1; 410 + p_QH->size_ioc_int_sts = 0; 408 411 409 412 return; 410 413 }
+2
drivers/usb/host/ehci-hcd.c
··· 485 485 * periodic_size can shrink by USBCMD update if hcc_params allows. 486 486 */ 487 487 ehci->periodic_size = DEFAULT_I_TDPS; 488 + INIT_LIST_HEAD(&ehci->cached_itd_list); 488 489 if ((retval = ehci_mem_init(ehci, GFP_KERNEL)) < 0) 489 490 return retval; 490 491 ··· 498 497 499 498 ehci->reclaim = NULL; 500 499 ehci->next_uframe = -1; 500 + ehci->clock_frame = -1; 501 501 502 502 /* 503 503 * dedicate a qh for the async ring head, since we couldn't unlink
+1
drivers/usb/host/ehci-mem.c
··· 128 128 129 129 static void ehci_mem_cleanup (struct ehci_hcd *ehci) 130 130 { 131 + free_cached_itd_list(ehci); 131 132 if (ehci->async) 132 133 qh_put (ehci->async); 133 134 ehci->async = NULL;
+48 -8
drivers/usb/host/ehci-sched.c
··· 1004 1004 1005 1005 is_in = (stream->bEndpointAddress & USB_DIR_IN) ? 0x10 : 0; 1006 1006 stream->bEndpointAddress &= 0x0f; 1007 - stream->ep->hcpriv = NULL; 1007 + if (stream->ep) 1008 + stream->ep->hcpriv = NULL; 1008 1009 1009 1010 if (stream->rescheduled) { 1010 1011 ehci_info (ehci, "ep%d%s-iso rescheduled " ··· 1654 1653 (stream->bEndpointAddress & USB_DIR_IN) ? "in" : "out"); 1655 1654 } 1656 1655 iso_stream_put (ehci, stream); 1657 - /* OK to recycle this ITD now that its completion callback ran. */ 1656 + 1658 1657 done: 1659 1658 usb_put_urb(urb); 1660 1659 itd->urb = NULL; 1661 - itd->stream = NULL; 1662 - list_move(&itd->itd_list, &stream->free_list); 1663 - iso_stream_put(ehci, stream); 1664 - 1660 + if (ehci->clock_frame != itd->frame || itd->index[7] != -1) { 1661 + /* OK to recycle this ITD now. */ 1662 + itd->stream = NULL; 1663 + list_move(&itd->itd_list, &stream->free_list); 1664 + iso_stream_put(ehci, stream); 1665 + } else { 1666 + /* HW might remember this ITD, so we can't recycle it yet. 1667 + * Move it to a safe place until a new frame starts. 1668 + */ 1669 + list_move(&itd->itd_list, &ehci->cached_itd_list); 1670 + if (stream->refcount == 2) { 1671 + /* If iso_stream_put() were called here, stream 1672 + * would be freed. Instead, just prevent reuse. 
1673 + */ 1674 + stream->ep->hcpriv = NULL; 1675 + stream->ep = NULL; 1676 + } 1677 + } 1665 1678 return retval; 1666 1679 } 1667 1680 ··· 2116 2101 2117 2102 /*-------------------------------------------------------------------------*/ 2118 2103 2104 + static void free_cached_itd_list(struct ehci_hcd *ehci) 2105 + { 2106 + struct ehci_itd *itd, *n; 2107 + 2108 + list_for_each_entry_safe(itd, n, &ehci->cached_itd_list, itd_list) { 2109 + struct ehci_iso_stream *stream = itd->stream; 2110 + itd->stream = NULL; 2111 + list_move(&itd->itd_list, &stream->free_list); 2112 + iso_stream_put(ehci, stream); 2113 + } 2114 + } 2115 + 2116 + /*-------------------------------------------------------------------------*/ 2117 + 2119 2118 static void 2120 2119 scan_periodic (struct ehci_hcd *ehci) 2121 2120 { ··· 2144 2115 * Touches as few pages as possible: cache-friendly. 2145 2116 */ 2146 2117 now_uframe = ehci->next_uframe; 2147 - if (HC_IS_RUNNING (ehci_to_hcd(ehci)->state)) 2118 + if (HC_IS_RUNNING(ehci_to_hcd(ehci)->state)) { 2148 2119 clock = ehci_readl(ehci, &ehci->regs->frame_index); 2149 - else 2120 + clock_frame = (clock >> 3) % ehci->periodic_size; 2121 + } else { 2150 2122 clock = now_uframe + mod - 1; 2123 + clock_frame = -1; 2124 + } 2125 + if (ehci->clock_frame != clock_frame) { 2126 + free_cached_itd_list(ehci); 2127 + ehci->clock_frame = clock_frame; 2128 + } 2151 2129 clock %= mod; 2152 2130 clock_frame = clock >> 3; 2153 2131 ··· 2313 2277 /* rescan the rest of this frame, then ... */ 2314 2278 clock = now; 2315 2279 clock_frame = clock >> 3; 2280 + if (ehci->clock_frame != clock_frame) { 2281 + free_cached_itd_list(ehci); 2282 + ehci->clock_frame = clock_frame; 2283 + } 2316 2284 } else { 2317 2285 now_uframe++; 2318 2286 now_uframe %= mod;
+6
drivers/usb/host/ehci.h
··· 87 87 int next_uframe; /* scan periodic, start here */ 88 88 unsigned periodic_sched; /* periodic activity count */ 89 89 90 + /* list of itds completed while clock_frame was still active */ 91 + struct list_head cached_itd_list; 92 + unsigned clock_frame; 93 + 90 94 /* per root hub port */ 91 95 unsigned long reset_done [EHCI_MAX_ROOT_PORTS]; 92 96 ··· 223 219 mod_timer(&ehci->watchdog, t + jiffies); 224 220 } 225 221 } 222 + 223 + static void free_cached_itd_list(struct ehci_hcd *ehci); 226 224 227 225 /*-------------------------------------------------------------------------*/ 228 226
+4 -11
drivers/usb/musb/davinci.c
··· 377 377 u32 revision; 378 378 379 379 musb->mregs += DAVINCI_BASE_OFFSET; 380 - #if 0 381 - /* REVISIT there's something odd about clocking, this 382 - * didn't appear do the job ... 383 - */ 384 - musb->clock = clk_get(pDevice, "usb"); 385 - if (IS_ERR(musb->clock)) 386 - return PTR_ERR(musb->clock); 387 380 388 - status = clk_enable(musb->clock); 389 - if (status < 0) 390 - return -ENODEV; 391 - #endif 381 + clk_enable(musb->clock); 392 382 393 383 /* returns zero if e.g. not clocked */ 394 384 revision = musb_readl(tibase, DAVINCI_USB_VERSION_REG); ··· 443 453 } 444 454 445 455 phy_off(); 456 + 457 + clk_disable(musb->clock); 458 + 446 459 return 0; 447 460 }
+7 -6
drivers/usb/musb/musb_core.c
··· 115 115 116 116 117 117 unsigned musb_debug; 118 - module_param(musb_debug, uint, S_IRUGO | S_IWUSR); 118 + module_param_named(debug, musb_debug, uint, S_IRUGO | S_IWUSR); 119 119 MODULE_PARM_DESC(debug, "Debug message level. Default = 0"); 120 120 121 121 #define DRIVER_AUTHOR "Mentor Graphics, Texas Instruments, Nokia" ··· 767 767 #ifdef CONFIG_USB_MUSB_HDRC_HCD 768 768 case OTG_STATE_A_HOST: 769 769 case OTG_STATE_A_SUSPEND: 770 + usb_hcd_resume_root_hub(musb_to_hcd(musb)); 770 771 musb_root_disconnect(musb); 771 772 if (musb->a_wait_bcon != 0) 772 773 musb_platform_try_idle(musb, jiffies ··· 1816 1815 #ifdef CONFIG_SYSFS 1817 1816 device_remove_file(musb->controller, &dev_attr_mode); 1818 1817 device_remove_file(musb->controller, &dev_attr_vbus); 1819 - #ifdef CONFIG_USB_MUSB_OTG 1818 + #ifdef CONFIG_USB_GADGET_MUSB_HDRC 1820 1819 device_remove_file(musb->controller, &dev_attr_srp); 1821 1820 #endif 1822 1821 #endif ··· 2064 2063 #ifdef CONFIG_SYSFS 2065 2064 device_remove_file(musb->controller, &dev_attr_mode); 2066 2065 device_remove_file(musb->controller, &dev_attr_vbus); 2067 - #ifdef CONFIG_USB_MUSB_OTG 2066 + #ifdef CONFIG_USB_GADGET_MUSB_HDRC 2068 2067 device_remove_file(musb->controller, &dev_attr_srp); 2069 2068 #endif 2070 2069 #endif ··· 2244 2243 return platform_driver_probe(&musb_driver, musb_probe); 2245 2244 } 2246 2245 2247 - /* make us init after usbcore and before usb 2248 - * gadget and host-side drivers start to register 2246 + /* make us init after usbcore and i2c (transceivers, regulators, etc) 2247 + * and before usb gadget and host-side drivers start to register 2249 2248 */ 2250 - subsys_initcall(musb_init); 2249 + fs_initcall(musb_init); 2251 2250 2252 2251 static void __exit musb_cleanup(void) 2253 2252 {
+2 -2
drivers/usb/musb/musb_gadget.c
··· 575 575 struct usb_request *request = &req->request; 576 576 struct musb_ep *musb_ep = &musb->endpoints[epnum].ep_out; 577 577 void __iomem *epio = musb->endpoints[epnum].regs; 578 - u16 fifo_count = 0; 578 + unsigned fifo_count = 0; 579 579 u16 len = musb_ep->packet_sz; 580 580 581 581 csr = musb_readw(epio, MUSB_RXCSR); ··· 687 687 len, fifo_count, 688 688 musb_ep->packet_sz); 689 689 690 - fifo_count = min(len, fifo_count); 690 + fifo_count = min_t(unsigned, len, fifo_count); 691 691 692 692 #ifdef CONFIG_USB_TUSB_OMAP_DMA 693 693 if (tusb_dma_omap() && musb_ep->dma) {
+58 -35
drivers/usb/musb/musb_host.c
··· 335 335 static struct musb_qh * 336 336 musb_giveback(struct musb_qh *qh, struct urb *urb, int status) 337 337 { 338 - int is_in; 339 338 struct musb_hw_ep *ep = qh->hw_ep; 340 339 struct musb *musb = ep->musb; 340 + int is_in = usb_pipein(urb->pipe); 341 341 int ready = qh->is_ready; 342 - 343 - if (ep->is_shared_fifo) 344 - is_in = 1; 345 - else 346 - is_in = usb_pipein(urb->pipe); 347 342 348 343 /* save toggle eagerly, for paranoia */ 349 344 switch (qh->type) { ··· 427 432 else 428 433 qh = musb_giveback(qh, urb, urb->status); 429 434 430 - if (qh && qh->is_ready && !list_empty(&qh->hep->urb_list)) { 435 + if (qh != NULL && qh->is_ready) { 431 436 DBG(4, "... next ep%d %cX urb %p\n", 432 437 hw_ep->epnum, is_in ? 'R' : 'T', 433 438 next_urb(qh)); ··· 937 942 switch (musb->ep0_stage) { 938 943 case MUSB_EP0_IN: 939 944 fifo_dest = urb->transfer_buffer + urb->actual_length; 940 - fifo_count = min(len, ((u16) (urb->transfer_buffer_length 941 - - urb->actual_length))); 945 + fifo_count = min_t(size_t, len, urb->transfer_buffer_length - 946 + urb->actual_length); 942 947 if (fifo_count < len) 943 948 urb->status = -EOVERFLOW; 944 949 ··· 971 976 } 972 977 /* FALLTHROUGH */ 973 978 case MUSB_EP0_OUT: 974 - fifo_count = min(qh->maxpacket, ((u16) 975 - (urb->transfer_buffer_length 976 - - urb->actual_length))); 977 - 979 + fifo_count = min_t(size_t, qh->maxpacket, 980 + urb->transfer_buffer_length - 981 + urb->actual_length); 978 982 if (fifo_count) { 979 983 fifo_dest = (u8 *) (urb->transfer_buffer 980 984 + urb->actual_length); ··· 1155 1161 struct urb *urb; 1156 1162 struct musb_hw_ep *hw_ep = musb->endpoints + epnum; 1157 1163 void __iomem *epio = hw_ep->regs; 1158 - struct musb_qh *qh = hw_ep->out_qh; 1164 + struct musb_qh *qh = hw_ep->is_shared_fifo ? hw_ep->in_qh 1165 + : hw_ep->out_qh; 1159 1166 u32 status = 0; 1160 1167 void __iomem *mbase = musb->mregs; 1161 1168 struct dma_channel *dma; ··· 1303 1308 * packets before updating TXCSR ... 
other docs disagree ... 1304 1309 */ 1305 1310 /* PIO: start next packet in this URB */ 1306 - wLength = min(qh->maxpacket, (u16) wLength); 1311 + if (wLength > qh->maxpacket) 1312 + wLength = qh->maxpacket; 1307 1313 musb_write_fifo(hw_ep, wLength, buf); 1308 1314 qh->segsize = wLength; 1309 1315 ··· 1863 1867 } 1864 1868 qh->type_reg = type_reg; 1865 1869 1866 - /* precompute rxinterval/txinterval register */ 1867 - interval = min((u8)16, epd->bInterval); /* log encoding */ 1870 + /* Precompute RXINTERVAL/TXINTERVAL register */ 1868 1871 switch (qh->type) { 1869 1872 case USB_ENDPOINT_XFER_INT: 1870 - /* fullspeed uses linear encoding */ 1871 - if (USB_SPEED_FULL == urb->dev->speed) { 1872 - interval = epd->bInterval; 1873 - if (!interval) 1874 - interval = 1; 1873 + /* 1874 + * Full/low speeds use the linear encoding, 1875 + * high speed uses the logarithmic encoding. 1876 + */ 1877 + if (urb->dev->speed <= USB_SPEED_FULL) { 1878 + interval = max_t(u8, epd->bInterval, 1); 1879 + break; 1875 1880 } 1876 1881 /* FALLTHROUGH */ 1877 1882 case USB_ENDPOINT_XFER_ISOC: 1878 - /* iso always uses log encoding */ 1883 + /* ISO always uses logarithmic encoding */ 1884 + interval = min_t(u8, epd->bInterval, 16); 1879 1885 break; 1880 1886 default: 1881 1887 /* REVISIT we actually want to use NAK limits, hinting to the ··· 2035 2037 goto done; 2036 2038 2037 2039 /* Any URB not actively programmed into endpoint hardware can be 2038 - * immediately given back. Such an URB must be at the head of its 2040 + * immediately given back; that's any URB not at the head of an 2039 2041 * endpoint queue, unless someday we get real DMA queues. And even 2040 - * then, it might not be known to the hardware... 2042 + * if it's at the head, it might not be known to the hardware... 2041 2043 * 2042 2044 * Otherwise abort current transfer, pending dma, etc.; urb->status 2043 2045 * has already been updated. 
This is a synchronous abort; it'd be ··· 2076 2078 qh->is_ready = 0; 2077 2079 __musb_giveback(musb, urb, 0); 2078 2080 qh->is_ready = ready; 2081 + 2082 + /* If nothing else (usually musb_giveback) is using it 2083 + * and its URB list has emptied, recycle this qh. 2084 + */ 2085 + if (ready && list_empty(&qh->hep->urb_list)) { 2086 + qh->hep->hcpriv = NULL; 2087 + list_del(&qh->ring); 2088 + kfree(qh); 2089 + } 2079 2090 } else 2080 2091 ret = musb_cleanup_urb(urb, qh, urb->pipe & USB_DIR_IN); 2081 2092 done: ··· 2100 2093 unsigned long flags; 2101 2094 struct musb *musb = hcd_to_musb(hcd); 2102 2095 u8 is_in = epnum & USB_DIR_IN; 2103 - struct musb_qh *qh = hep->hcpriv; 2104 - struct urb *urb, *tmp; 2096 + struct musb_qh *qh; 2097 + struct urb *urb; 2105 2098 struct list_head *sched; 2106 2099 2107 - if (!qh) 2108 - return; 2109 - 2110 2100 spin_lock_irqsave(&musb->lock, flags); 2101 + 2102 + qh = hep->hcpriv; 2103 + if (qh == NULL) 2104 + goto exit; 2111 2105 2112 2106 switch (qh->type) { 2113 2107 case USB_ENDPOINT_XFER_CONTROL: ··· 2143 2135 2144 2136 /* cleanup */ 2145 2137 musb_cleanup_urb(urb, qh, urb->pipe & USB_DIR_IN); 2146 - } else 2147 - urb = NULL; 2148 2138 2149 - /* then just nuke all the others */ 2150 - list_for_each_entry_safe_from(urb, tmp, &hep->urb_list, urb_list) 2151 - musb_giveback(qh, urb, -ESHUTDOWN); 2139 + /* Then nuke all the others ... and advance the 2140 + * queue on hw_ep (e.g. bulk ring) when we're done. 2141 + */ 2142 + while (!list_empty(&hep->urb_list)) { 2143 + urb = next_urb(qh); 2144 + urb->status = -ESHUTDOWN; 2145 + musb_advance_schedule(musb, urb, qh->hw_ep, is_in); 2146 + } 2147 + } else { 2148 + /* Just empty the queue; the hardware is busy with 2149 + * other transfers, and since !qh->is_ready nothing 2150 + * will activate any of these as it advances. 
2151 + */ 2152 + while (!list_empty(&hep->urb_list)) 2153 + __musb_giveback(musb, next_urb(qh), -ESHUTDOWN); 2152 2154 2155 + hep->hcpriv = NULL; 2156 + list_del(&qh->ring); 2157 + kfree(qh); 2158 + } 2159 + exit: 2153 2160 spin_unlock_irqrestore(&musb->lock, flags); 2154 2161 } 2155 2162
+9 -2
drivers/usb/serial/option.c
···
294 294 
295 295 /* Ericsson products */
296 296 #define ERICSSON_VENDOR_ID 0x0bdb
297 - #define ERICSSON_PRODUCT_F3507G 0x1900
297 + #define ERICSSON_PRODUCT_F3507G_1 0x1900
298 + #define ERICSSON_PRODUCT_F3507G_2 0x1902
299 + 
300 + #define BENQ_VENDOR_ID 0x04a5
301 + #define BENQ_PRODUCT_H10 0x4068
298 302 
299 303 static struct usb_device_id option_ids[] = {
300 304 	{ USB_DEVICE(OPTION_VENDOR_ID, OPTION_PRODUCT_COLT) },
···
513 509 	{ USB_DEVICE(ZTE_VENDOR_ID, ZTE_PRODUCT_MF626) },
514 510 	{ USB_DEVICE(ZTE_VENDOR_ID, ZTE_PRODUCT_MF628) },
515 511 	{ USB_DEVICE(ZTE_VENDOR_ID, ZTE_PRODUCT_CDMA_TECH) },
516 - 	{ USB_DEVICE(ERICSSON_VENDOR_ID, ERICSSON_PRODUCT_F3507G) },
512 + 	{ USB_DEVICE(ERICSSON_VENDOR_ID, ERICSSON_PRODUCT_F3507G_1) },
513 + 	{ USB_DEVICE(ERICSSON_VENDOR_ID, ERICSSON_PRODUCT_F3507G_2) },
514 + 	{ USB_DEVICE(BENQ_VENDOR_ID, BENQ_PRODUCT_H10) },
515 + 	{ USB_DEVICE(0x1da5, 0x4515) }, /* BenQ H20 */
517 516 	{ } /* Terminating entry */
518 517 };
519 518 MODULE_DEVICE_TABLE(usb, option_ids);
+2 -2
drivers/usb/storage/unusual_devs.h
···
907 907 	"Genesys Logic",
908 908 	"USB to IDE Optical",
909 909 	US_SC_DEVICE, US_PR_DEVICE, NULL,
910 - 	US_FL_GO_SLOW | US_FL_MAX_SECTORS_64 ),
910 + 	US_FL_GO_SLOW | US_FL_MAX_SECTORS_64 | US_FL_IGNORE_RESIDUE ),
911 911 
912 912 UNUSUAL_DEV( 0x05e3, 0x0702, 0x0000, 0xffff,
913 913 	"Genesys Logic",
914 914 	"USB to IDE Disk",
915 915 	US_SC_DEVICE, US_PR_DEVICE, NULL,
916 - 	US_FL_GO_SLOW | US_FL_MAX_SECTORS_64 ),
916 + 	US_FL_GO_SLOW | US_FL_MAX_SECTORS_64 | US_FL_IGNORE_RESIDUE ),
917 917 
918 918 /* Reported by Ben Efros <ben@pc-doctor.com> */
919 919 UNUSUAL_DEV( 0x05e3, 0x0723, 0x9451, 0x9451,
+6
drivers/w1/slaves/Kconfig
···
16 16 	  Say Y here if you want to connect 1-wire
17 17 	  simple 64bit memory rom(ds2401/ds2411/ds1990*) to your wire.
18 18 
19 + config W1_SLAVE_DS2431
20 + 	tristate "1kb EEPROM family support (DS2431)"
21 + 	help
22 + 	  Say Y here if you want to use a 1-wire
23 + 	  1kb EEPROM family device (DS2431)
24 + 
19 25 config W1_SLAVE_DS2433
20 26 	tristate "4kb EEPROM family support (DS2433)"
21 27 	help
+1
drivers/w1/slaves/Makefile
···
4 4 
5 5 obj-$(CONFIG_W1_SLAVE_THERM) += w1_therm.o
6 6 obj-$(CONFIG_W1_SLAVE_SMEM) += w1_smem.o
7 + obj-$(CONFIG_W1_SLAVE_DS2431) += w1_ds2431.o
7 8 obj-$(CONFIG_W1_SLAVE_DS2433) += w1_ds2433.o
8 9 obj-$(CONFIG_W1_SLAVE_DS2760) += w1_ds2760.o
9 10 obj-$(CONFIG_W1_SLAVE_BQ27000) += w1_bq27000.o
+6 -1
drivers/w1/slaves/w1_ds2433.c
···
156 156  */
157 157 static int w1_f23_write(struct w1_slave *sl, int addr, int len, const u8 *data)
158 158 {
159 + #ifdef CONFIG_W1_SLAVE_DS2433_CRC
160 + 	struct w1_f23_data *f23 = sl->family_data;
161 + #endif
159 162 	u8 wrbuf[4];
160 163 	u8 rdbuf[W1_PAGE_SIZE + 3];
161 164 	u8 es = (addr + len - 1) & 0x1f;
···
199 196 
200 197 	/* Reset the bus to wake up the EEPROM (this may not be needed) */
201 198 	w1_reset_bus(sl->master);
202 - 
199 + #ifdef CONFIG_W1_SLAVE_DS2433_CRC
200 + 	f23->validcrc &= ~(1 << (addr >> W1_PAGE_BITS));
201 + #endif
203 202 	return 0;
204 203 }
205 204 
+4 -2
fs/Makefile
···
69 69 # Do not add any filesystems before this line
70 70 obj-$(CONFIG_REISERFS_FS) += reiserfs/
71 71 obj-$(CONFIG_EXT3_FS) += ext3/ # Before ext2 so root fs can be ext3
72 - obj-$(CONFIG_EXT4_FS) += ext4/ # Before ext2 so root fs can be ext4
72 + obj-$(CONFIG_EXT2_FS) += ext2/
73 + # We place ext4 after ext2 so plain ext2 root fs's are mounted using ext2
74 + # unless explicitly requested by rootfstype
75 + obj-$(CONFIG_EXT4_FS) += ext4/
73 76 obj-$(CONFIG_JBD) += jbd/
74 77 obj-$(CONFIG_JBD2) += jbd2/
75 - obj-$(CONFIG_EXT2_FS) += ext2/
76 78 obj-$(CONFIG_CRAMFS) += cramfs/
77 79 obj-$(CONFIG_SQUASHFS) += squashfs/
78 80 obj-y += ramfs/
+1 -1
fs/bio.c
···
302 302 struct bio *bio_alloc_bioset(gfp_t gfp_mask, int nr_iovecs, struct bio_set *bs)
303 303 {
304 304 	struct bio *bio = NULL;
305 - 	void *p;
305 + 	void *uninitialized_var(p);
306 306 
307 307 	if (bs) {
308 308 		p = mempool_alloc(bs->bio_pool, gfp_mask);
+8
fs/btrfs/btrfs_inode.h
···
66 66 	 */
67 67 	struct list_head delalloc_inodes;
68 68 
69 + 	/* the space_info for where this inode's data allocations are done */
70 + 	struct btrfs_space_info *space_info;
71 + 
69 72 	/* full 64 bit generation number, struct vfs_inode doesn't have a big
70 73 	 * enough field for this.
71 74 	 */
···
96 93 	 * real block usage of the file
97 94 	 */
98 95 	u64 delalloc_bytes;
96 + 
97 + 	/* total number of bytes that may be used for this inode for
98 + 	 * delalloc
99 + 	 */
100 + 	u64 reserved_bytes;
99 101 
100 102 	/*
101 103 	 * the size of the file stored in the metadata on disk. data=ordered
+31 -9
fs/btrfs/ctree.h
···
596 596 
597 597 struct btrfs_space_info {
598 598 	u64 flags;
599 - 	u64 total_bytes;
600 - 	u64 bytes_used;
601 - 	u64 bytes_pinned;
602 - 	u64 bytes_reserved;
603 - 	u64 bytes_readonly;
604 - 	int full;
605 - 	int force_alloc;
599 + 
600 + 	u64 total_bytes;	/* total bytes in the space */
601 + 	u64 bytes_used;		/* total bytes used on disk */
602 + 	u64 bytes_pinned;	/* total bytes pinned, will be freed when the
603 + 				   transaction finishes */
604 + 	u64 bytes_reserved;	/* total bytes the allocator has reserved for
605 + 				   current allocations */
606 + 	u64 bytes_readonly;	/* total bytes that are read only */
607 + 
608 + 	/* delalloc accounting */
609 + 	u64 bytes_delalloc;	/* number of bytes reserved for allocation,
610 + 				   this space is not necessarily reserved yet
611 + 				   by the allocator */
612 + 	u64 bytes_may_use;	/* number of bytes that may be used for
613 + 				   delalloc */
614 + 
615 + 	int full;		/* indicates that we cannot allocate any more
616 + 				   chunks for this space */
617 + 	int force_alloc;	/* set if we need to force a chunk alloc for
618 + 				   this space */
619 + 
606 620 	struct list_head list;
607 621 
608 622 	/* for block groups in our same type */
···
1796 1782 int btrfs_cleanup_reloc_trees(struct btrfs_root *root);
1797 1783 int btrfs_reloc_clone_csums(struct inode *inode, u64 file_pos, u64 len);
1798 1784 u64 btrfs_reduce_alloc_profile(struct btrfs_root *root, u64 flags);
1785 + void btrfs_set_inode_space_info(struct btrfs_root *root, struct inode *inode);
1786 + int btrfs_check_metadata_free_space(struct btrfs_root *root);
1787 + int btrfs_check_data_free_space(struct btrfs_root *root, struct inode *inode,
1788 + 				u64 bytes);
1789 + void btrfs_free_reserved_data_space(struct btrfs_root *root,
1790 + 				struct inode *inode, u64 bytes);
1791 + void btrfs_delalloc_reserve_space(struct btrfs_root *root, struct inode *inode,
1792 + 				u64 bytes);
1793 + void btrfs_delalloc_free_space(struct btrfs_root *root, struct inode *inode,
1794 + 				u64 bytes);
1799 1795 /* ctree.c */
1800 1796 int btrfs_previous_item(struct btrfs_root *root,
1801 1797 			struct btrfs_path *path, u64 min_objectid,
···
2051 2027 unsigned long btrfs_force_ra(struct address_space *mapping,
2052 2028 			struct file_ra_state *ra, struct file *file,
2053 2029 			pgoff_t offset, pgoff_t last_index);
2054 - int btrfs_check_free_space(struct btrfs_root *root, u64 num_required,
2055 - 			int for_del);
2056 2030 int btrfs_page_mkwrite(struct vm_area_struct *vma, struct page *page);
2057 2031 int btrfs_readpage(struct file *file, struct page *page);
2058 2032 void btrfs_delete_inode(struct inode *inode);
+237 -15
fs/btrfs/extent-tree.c
··· 60 60 u64 bytenr, u64 num_bytes, int alloc, 61 61 int mark_free); 62 62 63 + static int do_chunk_alloc(struct btrfs_trans_handle *trans, 64 + struct btrfs_root *extent_root, u64 alloc_bytes, 65 + u64 flags, int force); 66 + 63 67 static int block_group_bits(struct btrfs_block_group_cache *cache, u64 bits) 64 68 { 65 69 return (cache->flags & bits) == bits; ··· 1913 1909 found->bytes_pinned = 0; 1914 1910 found->bytes_reserved = 0; 1915 1911 found->bytes_readonly = 0; 1912 + found->bytes_delalloc = 0; 1916 1913 found->full = 0; 1917 1914 found->force_alloc = 0; 1918 1915 *space_info = found; ··· 1975 1970 (flags & BTRFS_BLOCK_GROUP_DUP))) 1976 1971 flags &= ~BTRFS_BLOCK_GROUP_RAID0; 1977 1972 return flags; 1973 + } 1974 + 1975 + static u64 btrfs_get_alloc_profile(struct btrfs_root *root, u64 data) 1976 + { 1977 + struct btrfs_fs_info *info = root->fs_info; 1978 + u64 alloc_profile; 1979 + 1980 + if (data) { 1981 + alloc_profile = info->avail_data_alloc_bits & 1982 + info->data_alloc_profile; 1983 + data = BTRFS_BLOCK_GROUP_DATA | alloc_profile; 1984 + } else if (root == root->fs_info->chunk_root) { 1985 + alloc_profile = info->avail_system_alloc_bits & 1986 + info->system_alloc_profile; 1987 + data = BTRFS_BLOCK_GROUP_SYSTEM | alloc_profile; 1988 + } else { 1989 + alloc_profile = info->avail_metadata_alloc_bits & 1990 + info->metadata_alloc_profile; 1991 + data = BTRFS_BLOCK_GROUP_METADATA | alloc_profile; 1992 + } 1993 + 1994 + return btrfs_reduce_alloc_profile(root, data); 1995 + } 1996 + 1997 + void btrfs_set_inode_space_info(struct btrfs_root *root, struct inode *inode) 1998 + { 1999 + u64 alloc_target; 2000 + 2001 + alloc_target = btrfs_get_alloc_profile(root, 1); 2002 + BTRFS_I(inode)->space_info = __find_space_info(root->fs_info, 2003 + alloc_target); 2004 + } 2005 + 2006 + /* 2007 + * for now this just makes sure we have at least 5% of our metadata space free 2008 + * for use. 
2009 + */ 2010 + int btrfs_check_metadata_free_space(struct btrfs_root *root) 2011 + { 2012 + struct btrfs_fs_info *info = root->fs_info; 2013 + struct btrfs_space_info *meta_sinfo; 2014 + u64 alloc_target, thresh; 2015 + int committed = 0, ret; 2016 + 2017 + /* get the space info for where the metadata will live */ 2018 + alloc_target = btrfs_get_alloc_profile(root, 0); 2019 + meta_sinfo = __find_space_info(info, alloc_target); 2020 + 2021 + again: 2022 + spin_lock(&meta_sinfo->lock); 2023 + if (!meta_sinfo->full) 2024 + thresh = meta_sinfo->total_bytes * 80; 2025 + else 2026 + thresh = meta_sinfo->total_bytes * 95; 2027 + 2028 + do_div(thresh, 100); 2029 + 2030 + if (meta_sinfo->bytes_used + meta_sinfo->bytes_reserved + 2031 + meta_sinfo->bytes_pinned + meta_sinfo->bytes_readonly > thresh) { 2032 + struct btrfs_trans_handle *trans; 2033 + if (!meta_sinfo->full) { 2034 + meta_sinfo->force_alloc = 1; 2035 + spin_unlock(&meta_sinfo->lock); 2036 + 2037 + trans = btrfs_start_transaction(root, 1); 2038 + if (!trans) 2039 + return -ENOMEM; 2040 + 2041 + ret = do_chunk_alloc(trans, root->fs_info->extent_root, 2042 + 2 * 1024 * 1024, alloc_target, 0); 2043 + btrfs_end_transaction(trans, root); 2044 + goto again; 2045 + } 2046 + spin_unlock(&meta_sinfo->lock); 2047 + 2048 + if (!committed) { 2049 + committed = 1; 2050 + trans = btrfs_join_transaction(root, 1); 2051 + if (!trans) 2052 + return -ENOMEM; 2053 + ret = btrfs_commit_transaction(trans, root); 2054 + if (ret) 2055 + return ret; 2056 + goto again; 2057 + } 2058 + return -ENOSPC; 2059 + } 2060 + spin_unlock(&meta_sinfo->lock); 2061 + 2062 + return 0; 2063 + } 2064 + 2065 + /* 2066 + * This will check the space that the inode allocates from to make sure we have 2067 + * enough space for bytes. 
2068 + */ 2069 + int btrfs_check_data_free_space(struct btrfs_root *root, struct inode *inode, 2070 + u64 bytes) 2071 + { 2072 + struct btrfs_space_info *data_sinfo; 2073 + int ret = 0, committed = 0; 2074 + 2075 + /* make sure bytes are sectorsize aligned */ 2076 + bytes = (bytes + root->sectorsize - 1) & ~((u64)root->sectorsize - 1); 2077 + 2078 + data_sinfo = BTRFS_I(inode)->space_info; 2079 + again: 2080 + /* make sure we have enough space to handle the data first */ 2081 + spin_lock(&data_sinfo->lock); 2082 + if (data_sinfo->total_bytes - data_sinfo->bytes_used - 2083 + data_sinfo->bytes_delalloc - data_sinfo->bytes_reserved - 2084 + data_sinfo->bytes_pinned - data_sinfo->bytes_readonly - 2085 + data_sinfo->bytes_may_use < bytes) { 2086 + struct btrfs_trans_handle *trans; 2087 + 2088 + /* 2089 + * if we don't have enough free bytes in this space then we need 2090 + * to alloc a new chunk. 2091 + */ 2092 + if (!data_sinfo->full) { 2093 + u64 alloc_target; 2094 + 2095 + data_sinfo->force_alloc = 1; 2096 + spin_unlock(&data_sinfo->lock); 2097 + 2098 + alloc_target = btrfs_get_alloc_profile(root, 1); 2099 + trans = btrfs_start_transaction(root, 1); 2100 + if (!trans) 2101 + return -ENOMEM; 2102 + 2103 + ret = do_chunk_alloc(trans, root->fs_info->extent_root, 2104 + bytes + 2 * 1024 * 1024, 2105 + alloc_target, 0); 2106 + btrfs_end_transaction(trans, root); 2107 + if (ret) 2108 + return ret; 2109 + goto again; 2110 + } 2111 + spin_unlock(&data_sinfo->lock); 2112 + 2113 + /* commit the current transaction and try again */ 2114 + if (!committed) { 2115 + committed = 1; 2116 + trans = btrfs_join_transaction(root, 1); 2117 + if (!trans) 2118 + return -ENOMEM; 2119 + ret = btrfs_commit_transaction(trans, root); 2120 + if (ret) 2121 + return ret; 2122 + goto again; 2123 + } 2124 + 2125 + printk(KERN_ERR "no space left, need %llu, %llu delalloc bytes" 2126 + ", %llu bytes_used, %llu bytes_reserved, " 2127 + "%llu bytes_pinned, %llu bytes_readonly, %llu may use" 2128 + 
"%llu total\n", bytes, data_sinfo->bytes_delalloc, 2129 + data_sinfo->bytes_used, data_sinfo->bytes_reserved, 2130 + data_sinfo->bytes_pinned, data_sinfo->bytes_readonly, 2131 + data_sinfo->bytes_may_use, data_sinfo->total_bytes); 2132 + return -ENOSPC; 2133 + } 2134 + data_sinfo->bytes_may_use += bytes; 2135 + BTRFS_I(inode)->reserved_bytes += bytes; 2136 + spin_unlock(&data_sinfo->lock); 2137 + 2138 + return btrfs_check_metadata_free_space(root); 2139 + } 2140 + 2141 + /* 2142 + * if there was an error for whatever reason after calling 2143 + * btrfs_check_data_free_space, call this so we can cleanup the counters. 2144 + */ 2145 + void btrfs_free_reserved_data_space(struct btrfs_root *root, 2146 + struct inode *inode, u64 bytes) 2147 + { 2148 + struct btrfs_space_info *data_sinfo; 2149 + 2150 + /* make sure bytes are sectorsize aligned */ 2151 + bytes = (bytes + root->sectorsize - 1) & ~((u64)root->sectorsize - 1); 2152 + 2153 + data_sinfo = BTRFS_I(inode)->space_info; 2154 + spin_lock(&data_sinfo->lock); 2155 + data_sinfo->bytes_may_use -= bytes; 2156 + BTRFS_I(inode)->reserved_bytes -= bytes; 2157 + spin_unlock(&data_sinfo->lock); 2158 + } 2159 + 2160 + /* called when we are adding a delalloc extent to the inode's io_tree */ 2161 + void btrfs_delalloc_reserve_space(struct btrfs_root *root, struct inode *inode, 2162 + u64 bytes) 2163 + { 2164 + struct btrfs_space_info *data_sinfo; 2165 + 2166 + /* get the space info for where this inode will be storing its data */ 2167 + data_sinfo = BTRFS_I(inode)->space_info; 2168 + 2169 + /* make sure we have enough space to handle the data first */ 2170 + spin_lock(&data_sinfo->lock); 2171 + data_sinfo->bytes_delalloc += bytes; 2172 + 2173 + /* 2174 + * we are adding a delalloc extent without calling 2175 + * btrfs_check_data_free_space first. 
This happens on a weird 2176 + * writepage condition, but shouldn't hurt our accounting 2177 + */ 2178 + if (unlikely(bytes > BTRFS_I(inode)->reserved_bytes)) { 2179 + data_sinfo->bytes_may_use -= BTRFS_I(inode)->reserved_bytes; 2180 + BTRFS_I(inode)->reserved_bytes = 0; 2181 + } else { 2182 + data_sinfo->bytes_may_use -= bytes; 2183 + BTRFS_I(inode)->reserved_bytes -= bytes; 2184 + } 2185 + 2186 + spin_unlock(&data_sinfo->lock); 2187 + } 2188 + 2189 + /* called when we are clearing an delalloc extent from the inode's io_tree */ 2190 + void btrfs_delalloc_free_space(struct btrfs_root *root, struct inode *inode, 2191 + u64 bytes) 2192 + { 2193 + struct btrfs_space_info *info; 2194 + 2195 + info = BTRFS_I(inode)->space_info; 2196 + 2197 + spin_lock(&info->lock); 2198 + info->bytes_delalloc -= bytes; 2199 + spin_unlock(&info->lock); 1978 2200 } 1979 2201 1980 2202 static int do_chunk_alloc(struct btrfs_trans_handle *trans, ··· 3337 3105 (unsigned long long)(info->total_bytes - info->bytes_used - 3338 3106 info->bytes_pinned - info->bytes_reserved), 3339 3107 (info->full) ? 
"" : "not "); 3108 + printk(KERN_INFO "space_info total=%llu, pinned=%llu, delalloc=%llu," 3109 + " may_use=%llu, used=%llu\n", info->total_bytes, 3110 + info->bytes_pinned, info->bytes_delalloc, info->bytes_may_use, 3111 + info->bytes_used); 3340 3112 3341 3113 down_read(&info->groups_sem); 3342 3114 list_for_each_entry(cache, &info->block_groups, list) { ··· 3367 3131 { 3368 3132 int ret; 3369 3133 u64 search_start = 0; 3370 - u64 alloc_profile; 3371 3134 struct btrfs_fs_info *info = root->fs_info; 3372 3135 3373 - if (data) { 3374 - alloc_profile = info->avail_data_alloc_bits & 3375 - info->data_alloc_profile; 3376 - data = BTRFS_BLOCK_GROUP_DATA | alloc_profile; 3377 - } else if (root == root->fs_info->chunk_root) { 3378 - alloc_profile = info->avail_system_alloc_bits & 3379 - info->system_alloc_profile; 3380 - data = BTRFS_BLOCK_GROUP_SYSTEM | alloc_profile; 3381 - } else { 3382 - alloc_profile = info->avail_metadata_alloc_bits & 3383 - info->metadata_alloc_profile; 3384 - data = BTRFS_BLOCK_GROUP_METADATA | alloc_profile; 3385 - } 3136 + data = btrfs_get_alloc_profile(root, data); 3386 3137 again: 3387 - data = btrfs_reduce_alloc_profile(root, data); 3388 3138 /* 3389 3139 * the only place that sets empty_size is btrfs_realloc_node, which 3390 3140 * is not called recursively on allocations
+13 -3
fs/btrfs/file.c
···
1091 1091 		WARN_ON(num_pages > nrptrs);
1092 1092 		memset(pages, 0, sizeof(struct page *) * nrptrs);
1093 1093 
1094 - 		ret = btrfs_check_free_space(root, write_bytes, 0);
1094 + 		ret = btrfs_check_data_free_space(root, inode, write_bytes);
1095 1095 		if (ret)
1096 1096 			goto out;
1097 1097 
1098 1098 		ret = prepare_pages(root, file, pages, num_pages,
1099 1099 				pos, first_index, last_index,
1100 1100 				write_bytes);
1101 - 		if (ret)
1101 + 		if (ret) {
1102 + 			btrfs_free_reserved_data_space(root, inode,
1103 + 						       write_bytes);
1102 1104 			goto out;
1105 + 		}
1103 1106 
1104 1107 		ret = btrfs_copy_from_user(pos, num_pages,
1105 1108 					   write_bytes, pages, buf);
1106 1109 		if (ret) {
1110 + 			btrfs_free_reserved_data_space(root, inode,
1111 + 						       write_bytes);
1107 1112 			btrfs_drop_pages(pages, num_pages);
1108 1113 			goto out;
1109 1114 		}
···
1116 1111 		ret = dirty_and_release_pages(NULL, root, file, pages,
1117 1112 					      num_pages, pos, write_bytes);
1118 1113 		btrfs_drop_pages(pages, num_pages);
1119 - 		if (ret)
1114 + 		if (ret) {
1115 + 			btrfs_free_reserved_data_space(root, inode,
1116 + 						       write_bytes);
1120 1117 			goto out;
1118 + 		}
1121 1119 
1122 1120 		if (will_write) {
1123 1121 			btrfs_fdatawrite_range(inode->i_mapping, pos,
···
1144 1136 	}
1145 1137 out:
1146 1138 	mutex_unlock(&inode->i_mutex);
1139 + 	if (ret)
1140 + 		err = ret;
1147 1141 
1148 1142 out_nolock:
1149 1143 	kfree(pages);
+16 -46
fs/btrfs/inode.c
··· 102 102 } 103 103 104 104 /* 105 - * a very lame attempt at stopping writes when the FS is 85% full. There 106 - * are countless ways this is incorrect, but it is better than nothing. 107 - */ 108 - int btrfs_check_free_space(struct btrfs_root *root, u64 num_required, 109 - int for_del) 110 - { 111 - u64 total; 112 - u64 used; 113 - u64 thresh; 114 - int ret = 0; 115 - 116 - spin_lock(&root->fs_info->delalloc_lock); 117 - total = btrfs_super_total_bytes(&root->fs_info->super_copy); 118 - used = btrfs_super_bytes_used(&root->fs_info->super_copy); 119 - if (for_del) 120 - thresh = total * 90; 121 - else 122 - thresh = total * 85; 123 - 124 - do_div(thresh, 100); 125 - 126 - if (used + root->fs_info->delalloc_bytes + num_required > thresh) 127 - ret = -ENOSPC; 128 - spin_unlock(&root->fs_info->delalloc_lock); 129 - return ret; 130 - } 131 - 132 - /* 133 105 * this does all the hard work for inserting an inline extent into 134 106 * the btree. The caller should have done a btrfs_drop_extents so that 135 107 * no overlapping inline items exist in the btree ··· 1162 1190 */ 1163 1191 if (!(old & EXTENT_DELALLOC) && (bits & EXTENT_DELALLOC)) { 1164 1192 struct btrfs_root *root = BTRFS_I(inode)->root; 1193 + btrfs_delalloc_reserve_space(root, inode, end - start + 1); 1165 1194 spin_lock(&root->fs_info->delalloc_lock); 1166 1195 BTRFS_I(inode)->delalloc_bytes += end - start + 1; 1167 1196 root->fs_info->delalloc_bytes += end - start + 1; ··· 1196 1223 (unsigned long long)end - start + 1, 1197 1224 (unsigned long long) 1198 1225 root->fs_info->delalloc_bytes); 1226 + btrfs_delalloc_free_space(root, inode, (u64)-1); 1199 1227 root->fs_info->delalloc_bytes = 0; 1200 1228 BTRFS_I(inode)->delalloc_bytes = 0; 1201 1229 } else { 1230 + btrfs_delalloc_free_space(root, inode, 1231 + end - start + 1); 1202 1232 root->fs_info->delalloc_bytes -= end - start + 1; 1203 1233 BTRFS_I(inode)->delalloc_bytes -= end - start + 1; 1204 1234 } ··· 2221 2245 2222 2246 root = 
BTRFS_I(dir)->root; 2223 2247 2224 - ret = btrfs_check_free_space(root, 1, 1); 2225 - if (ret) 2226 - goto fail; 2227 - 2228 2248 trans = btrfs_start_transaction(root, 1); 2229 2249 2230 2250 btrfs_set_trans_block_group(trans, dir); ··· 2233 2261 nr = trans->blocks_used; 2234 2262 2235 2263 btrfs_end_transaction_throttle(trans, root); 2236 - fail: 2237 2264 btrfs_btree_balance_dirty(root, nr); 2238 2265 return ret; 2239 2266 } ··· 2255 2284 return -ENOTEMPTY; 2256 2285 } 2257 2286 2258 - ret = btrfs_check_free_space(root, 1, 1); 2259 - if (ret) 2260 - goto fail; 2261 - 2262 2287 trans = btrfs_start_transaction(root, 1); 2263 2288 btrfs_set_trans_block_group(trans, dir); 2264 2289 ··· 2271 2304 fail_trans: 2272 2305 nr = trans->blocks_used; 2273 2306 ret = btrfs_end_transaction_throttle(trans, root); 2274 - fail: 2275 2307 btrfs_btree_balance_dirty(root, nr); 2276 2308 2277 2309 if (ret && !err) ··· 2784 2818 if (size <= hole_start) 2785 2819 return 0; 2786 2820 2787 - err = btrfs_check_free_space(root, 1, 0); 2821 + err = btrfs_check_metadata_free_space(root); 2788 2822 if (err) 2789 2823 return err; 2790 2824 ··· 2980 3014 bi->last_trans = 0; 2981 3015 bi->logged_trans = 0; 2982 3016 bi->delalloc_bytes = 0; 3017 + bi->reserved_bytes = 0; 2983 3018 bi->disk_i_size = 0; 2984 3019 bi->flags = 0; 2985 3020 bi->index_cnt = (u64)-1; ··· 3002 3035 inode->i_ino = args->ino; 3003 3036 init_btrfs_i(inode); 3004 3037 BTRFS_I(inode)->root = args->root; 3038 + btrfs_set_inode_space_info(args->root, inode); 3005 3039 return 0; 3006 3040 } 3007 3041 ··· 3423 3455 BTRFS_I(inode)->index_cnt = 2; 3424 3456 BTRFS_I(inode)->root = root; 3425 3457 BTRFS_I(inode)->generation = trans->transid; 3458 + btrfs_set_inode_space_info(root, inode); 3426 3459 3427 3460 if (mode & S_IFDIR) 3428 3461 owner = 0; ··· 3571 3602 if (!new_valid_dev(rdev)) 3572 3603 return -EINVAL; 3573 3604 3574 - err = btrfs_check_free_space(root, 1, 0); 3605 + err = btrfs_check_metadata_free_space(root); 3575 3606 if 
(err) 3576 3607 goto fail; 3577 3608 ··· 3634 3665 u64 objectid; 3635 3666 u64 index = 0; 3636 3667 3637 - err = btrfs_check_free_space(root, 1, 0); 3668 + err = btrfs_check_metadata_free_space(root); 3638 3669 if (err) 3639 3670 goto fail; 3640 3671 trans = btrfs_start_transaction(root, 1); ··· 3702 3733 return -ENOENT; 3703 3734 3704 3735 btrfs_inc_nlink(inode); 3705 - err = btrfs_check_free_space(root, 1, 0); 3736 + err = btrfs_check_metadata_free_space(root); 3706 3737 if (err) 3707 3738 goto fail; 3708 3739 err = btrfs_set_inode_index(dir, &index); ··· 3748 3779 u64 index = 0; 3749 3780 unsigned long nr = 1; 3750 3781 3751 - err = btrfs_check_free_space(root, 1, 0); 3782 + err = btrfs_check_metadata_free_space(root); 3752 3783 if (err) 3753 3784 goto out_unlock; 3754 3785 ··· 4305 4336 u64 page_start; 4306 4337 u64 page_end; 4307 4338 4308 - ret = btrfs_check_free_space(root, PAGE_CACHE_SIZE, 0); 4339 + ret = btrfs_check_data_free_space(root, inode, PAGE_CACHE_SIZE); 4309 4340 if (ret) 4310 4341 goto out; 4311 4342 ··· 4318 4349 4319 4350 if ((page->mapping != inode->i_mapping) || 4320 4351 (page_start >= size)) { 4352 + btrfs_free_reserved_data_space(root, inode, PAGE_CACHE_SIZE); 4321 4353 /* page got truncated out from underneath us */ 4322 4354 goto out_unlock; 4323 4355 } ··· 4601 4631 if (old_inode->i_ino == BTRFS_FIRST_FREE_OBJECTID) 4602 4632 return -EXDEV; 4603 4633 4604 - ret = btrfs_check_free_space(root, 1, 0); 4634 + ret = btrfs_check_metadata_free_space(root); 4605 4635 if (ret) 4606 4636 goto out_unlock; 4607 4637 ··· 4719 4749 if (name_len > BTRFS_MAX_INLINE_DATA_SIZE(root)) 4720 4750 return -ENAMETOOLONG; 4721 4751 4722 - err = btrfs_check_free_space(root, 1, 0); 4752 + err = btrfs_check_metadata_free_space(root); 4723 4753 if (err) 4724 4754 goto out_fail; 4725 4755
+3 -3
fs/btrfs/ioctl.c
··· 70 70 u64 index = 0; 71 71 unsigned long nr = 1; 72 72 73 - ret = btrfs_check_free_space(root, 1, 0); 73 + ret = btrfs_check_metadata_free_space(root); 74 74 if (ret) 75 75 goto fail_commit; 76 76 ··· 203 203 if (!root->ref_cows) 204 204 return -EINVAL; 205 205 206 - ret = btrfs_check_free_space(root, 1, 0); 206 + ret = btrfs_check_metadata_free_space(root); 207 207 if (ret) 208 208 goto fail_unlock; 209 209 ··· 374 374 unsigned long i; 375 375 int ret; 376 376 377 - ret = btrfs_check_free_space(root, inode->i_size, 0); 377 + ret = btrfs_check_data_free_space(root, inode, inode->i_size); 378 378 if (ret) 379 379 return -ENOSPC; 380 380
+3
fs/compat_ioctl.c
···
1913 1913 /* 0x00 */
1914 1914 COMPATIBLE_IOCTL(FIBMAP)
1915 1915 COMPATIBLE_IOCTL(FIGETBSZ)
1916 + /* 'X' - originally XFS but some now in the VFS */
1917 + COMPATIBLE_IOCTL(FIFREEZE)
1918 + COMPATIBLE_IOCTL(FITHAW)
1916 1919 /* RAID */
1917 1920 COMPATIBLE_IOCTL(RAID_VERSION)
1918 1921 COMPATIBLE_IOCTL(GET_ARRAY_INFO)
+1 -1
fs/dcache.c
···
1180 1180 	iput(inode);
1181 1181 	return res;
1182 1182 }
1183 - EXPORT_SYMBOL_GPL(d_obtain_alias);
1183 + EXPORT_SYMBOL(d_obtain_alias);
1184 1184 
1185 1185 /**
1186 1186  * d_splice_alias - splice a disconnected dentry into the tree if one exists
+3 -1
fs/ext4/balloc.c
···
609 609  */
610 610 int ext4_should_retry_alloc(struct super_block *sb, int *retries)
611 611 {
612 - 	if (!ext4_has_free_blocks(EXT4_SB(sb), 1) || (*retries)++ > 3)
612 + 	if (!ext4_has_free_blocks(EXT4_SB(sb), 1) ||
613 + 	    (*retries)++ > 3 ||
614 + 	    !EXT4_SB(sb)->s_journal)
613 615 		return 0;
614 616 
615 617 	jbd_debug(1, "%s: retrying operation after ENOSPC\n", sb->s_id);
+1 -1
fs/ext4/inode.c
···
2544 2544 
2545 2545 	ext4_journal_stop(handle);
2546 2546 
2547 - 	if (mpd.retval == -ENOSPC) {
2547 + 	if ((mpd.retval == -ENOSPC) && sbi->s_journal) {
2548 2548 		/* commit the transaction which would
2549 2549 		 * free blocks released in the transaction
2550 2550 		 * and try again
-1
fs/ext4/super.c
···
3091 3091 
3092 3092 	/* Journal blocked and flushed, clear needs_recovery flag. */
3093 3093 	EXT4_CLEAR_INCOMPAT_FEATURE(sb, EXT4_FEATURE_INCOMPAT_RECOVER);
3094 - 	ext4_commit_super(sb, EXT4_SB(sb)->s_es, 1);
3095 3094 	error = ext4_commit_super(sb, EXT4_SB(sb)->s_es, 1);
3096 3095 	if (error)
3097 3096 		goto out;
+11 -7
fs/jffs2/background.c
···
95 95 		spin_unlock(&c->erase_completion_lock);
96 96 
97 97 
98 - 		/* This thread is purely an optimisation. But if it runs when
99 - 		   other things could be running, it actually makes things a
100 - 		   lot worse. Use yield() and put it at the back of the runqueue
101 - 		   every time. Especially during boot, pulling an inode in
102 - 		   with read_inode() is much preferable to having the GC thread
103 - 		   get there first. */
98 + 		/* Problem - immediately after bootup, the GCD spends a lot
99 + 		 * of time in places like jffs2_kill_fragtree(); so much so
100 + 		 * that userspace processes (like gdm and X) are starved
101 + 		 * despite plenty of cond_resched()s and renicing. Yield()
102 + 		 * doesn't help, either (presumably because userspace and GCD
103 + 		 * are generally competing for a higher latency resource -
104 + 		 * disk).
105 + 		 * This forces the GCD to slow the hell down. Pulling an
106 + 		 * inode in with read_inode() is much preferable to having
107 + 		 * the GC thread get there first. */
104 - 		yield();
108 + 		schedule_timeout_interruptible(msecs_to_jiffies(50));
105 109 
106 110 		/* Put_super will send a SIGKILL and then wait on the sem.
107 111 		 */
+33 -9
fs/jffs2/readinode.c
···
220 220 		struct jffs2_tmp_dnode_info *tn)
221 221 {
222 222 	uint32_t fn_end = tn->fn->ofs + tn->fn->size;
223 - 	struct jffs2_tmp_dnode_info *this;
223 + 	struct jffs2_tmp_dnode_info *this, *ptn;
224 224 
225 225 	dbg_readinode("insert fragment %#04x-%#04x, ver %u at %08x\n", tn->fn->ofs, fn_end, tn->version, ref_offset(tn->fn->raw));
226 226 
···
251 251 	if (this) {
252 252 		/* If the node is coincident with another at a lower address,
253 253 		   back up until the other node is found. It may be relevant */
254 - 		while (this->overlapped)
255 - 			this = tn_prev(this);
256 - 
257 - 		/* First node should never be marked overlapped */
258 - 		BUG_ON(!this);
254 + 		while (this->overlapped) {
255 + 			ptn = tn_prev(this);
256 + 			if (!ptn) {
257 + 				/*
258 + 				 * We killed a node which set the overlapped
259 + 				 * flags during the scan. Fix it up.
260 + 				 */
261 + 				this->overlapped = 0;
262 + 				break;
263 + 			}
264 + 			this = ptn;
265 + 		}
259 266 		dbg_readinode("'this' found %#04x-%#04x (%s)\n", this->fn->ofs, this->fn->ofs + this->fn->size, this->fn ? "data" : "hole");
260 267 	}
261 268 
···
367 360 		}
368 361 		if (!this->overlapped)
369 362 			break;
370 - 		this = tn_prev(this);
363 + 
364 + 		ptn = tn_prev(this);
365 + 		if (!ptn) {
366 + 			/*
367 + 			 * We killed a node which set the overlapped
368 + 			 * flags during the scan. Fix it up.
369 + 			 */
370 + 			this->overlapped = 0;
371 + 			break;
372 + 		}
373 + 		this = ptn;
371 374 	}
372 375 }
373 376 
···
473 456 		eat_last(&rii->tn_root, &last->rb);
474 457 		ver_insert(&ver_root, last);
475 458 
476 - 		if (unlikely(last->overlapped))
477 - 			continue;
459 + 		if (unlikely(last->overlapped)) {
460 + 			if (pen)
461 + 				continue;
462 + 			/*
463 + 			 * We killed a node which set the overlapped
464 + 			 * flags during the scan. Fix it up.
465 + 			 */
466 + 			last->overlapped = 0;
467 + 		}
478 468 
479 469 		/* Now we have a bunch of nodes in reverse version
480 470 		   order, in the tree at ver_root. Most of the time,
+26 -1
fs/ocfs2/alloc.c
···
4796 4796 	return ret;
4797 4797 }
4798 4798 
4799 + static int ocfs2_replace_extent_rec(struct inode *inode,
4800 + 				    handle_t *handle,
4801 + 				    struct ocfs2_path *path,
4802 + 				    struct ocfs2_extent_list *el,
4803 + 				    int split_index,
4804 + 				    struct ocfs2_extent_rec *split_rec)
4805 + {
4806 + 	int ret;
4807 + 
4808 + 	ret = ocfs2_path_bh_journal_access(handle, inode, path,
4809 + 					   path_num_items(path) - 1);
4810 + 	if (ret) {
4811 + 		mlog_errno(ret);
4812 + 		goto out;
4813 + 	}
4814 + 
4815 + 	el->l_recs[split_index] = *split_rec;
4816 + 
4817 + 	ocfs2_journal_dirty(handle, path_leaf_bh(path));
4818 + out:
4819 + 	return ret;
4820 + }
4821 + 
4799 4822 /*
4800 4823  * Mark part or all of the extent record at split_index in the leaf
4801 4824  * pointed to by path as written. This removes the unwritten
···
4908 4885 
4909 4886 	if (ctxt.c_contig_type == CONTIG_NONE) {
4910 4887 		if (ctxt.c_split_covers_rec)
4911 - 			el->l_recs[split_index] = *split_rec;
4888 + 			ret = ocfs2_replace_extent_rec(inode, handle,
4889 + 						       path, el,
4890 + 						       split_index, split_rec);
4912 4891 		else
4913 4892 			ret = ocfs2_split_and_insert(inode, handle, path, et,
4914 4893 						     &last_eb_bh, split_index,
+6 -6
fs/ocfs2/dlm/dlmmaster.c
···
1849 1849 	if (!mle) {
1850 1850 		if (res->owner != DLM_LOCK_RES_OWNER_UNKNOWN &&
1851 1851 		    res->owner != assert->node_idx) {
1852 - 			mlog(ML_ERROR, "assert_master from "
1853 - 				  "%u, but current owner is "
1854 - 				  "%u! (%.*s)\n",
1855 - 			       assert->node_idx, res->owner,
1856 - 			       namelen, name);
1857 - 			goto kill;
1852 + 			mlog(ML_ERROR, "DIE! Mastery assert from %u, "
1853 + 			     "but current owner is %u! (%.*s)\n",
1854 + 			     assert->node_idx, res->owner, namelen,
1855 + 			     name);
1856 + 			__dlm_print_one_lock_resource(res);
1857 + 			BUG();
1858 1858 		}
1859 1859 	} else if (mle->type != DLM_MLE_MIGRATION) {
1860 1860 		if (res->owner != DLM_LOCK_RES_OWNER_UNKNOWN) {
+1 -2
fs/ocfs2/dlm/dlmthread.c
···
181 181 
182 182 	spin_lock(&res->spinlock);
183 183 	/* This ensures that clear refmap is sent after the set */
184 - 	__dlm_wait_on_lockres_flags(res, (DLM_LOCK_RES_SETREF_INPROG |
185 - 					  DLM_LOCK_RES_MIGRATING));
184 + 	__dlm_wait_on_lockres_flags(res, DLM_LOCK_RES_SETREF_INPROG);
186 185 	spin_unlock(&res->spinlock);
187 186 
188 187 	/* clear our bit from the master's refmap, ignore errors */
+2 -2
fs/ocfs2/dlm/dlmunlock.c
··· 117 117 else 118 118 BUG_ON(res->owner == dlm->node_num); 119 119 120 - spin_lock(&dlm->spinlock); 120 + spin_lock(&dlm->ast_lock); 121 121 /* We want to be sure that we're not freeing a lock 122 122 * that still has AST's pending... */ 123 123 in_use = !list_empty(&lock->ast_list); 124 - spin_unlock(&dlm->spinlock); 124 + spin_unlock(&dlm->ast_lock); 125 125 if (in_use) { 126 126 mlog(ML_ERROR, "lockres %.*s: Someone is calling dlmunlock " 127 127 "while waiting for an ast!", res->lockname.len,
+8 -3
fs/ocfs2/dlmglue.c
··· 320 320 struct ocfs2_lock_res *lockres); 321 321 static inline void ocfs2_recover_from_dlm_error(struct ocfs2_lock_res *lockres, 322 322 int convert); 323 - #define ocfs2_log_dlm_error(_func, _err, _lockres) do { \ 324 - mlog(ML_ERROR, "DLM error %d while calling %s on resource %s\n", \ 325 - _err, _func, _lockres->l_name); \ 323 + #define ocfs2_log_dlm_error(_func, _err, _lockres) do { \ 324 + if ((_lockres)->l_type != OCFS2_LOCK_TYPE_DENTRY) \ 325 + mlog(ML_ERROR, "DLM error %d while calling %s on resource %s\n", \ 326 + _err, _func, _lockres->l_name); \ 327 + else \ 328 + mlog(ML_ERROR, "DLM error %d while calling %s on resource %.*s%08x\n", \ 329 + _err, _func, OCFS2_DENTRY_LOCK_INO_START - 1, (_lockres)->l_name, \ 330 + (unsigned int)ocfs2_get_dentry_lock_ino(_lockres)); \ 326 331 } while (0) 327 332 static int ocfs2_downconvert_thread(void *arg); 328 333 static void ocfs2_downconvert_on_unlock(struct ocfs2_super *osb,
+3
fs/ocfs2/ocfs2.h
··· 341 341 struct ocfs2_node_map osb_recovering_orphan_dirs; 342 342 unsigned int *osb_orphan_wipes; 343 343 wait_queue_head_t osb_wipe_event; 344 + 345 + /* Used to protect the metaecc calculation and check of xattrs. */ 346 + spinlock_t osb_xattr_lock; 344 347 }; 345 348 346 349 #define OCFS2_SB(sb) ((struct ocfs2_super *)(sb)->s_fs_info)
+8
fs/ocfs2/super.c
··· 1537 1537 unlock_buffer(*bh); 1538 1538 ll_rw_block(READ, 1, bh); 1539 1539 wait_on_buffer(*bh); 1540 + if (!buffer_uptodate(*bh)) { 1541 + mlog_errno(-EIO); 1542 + brelse(*bh); 1543 + *bh = NULL; 1544 + return -EIO; 1545 + } 1546 + 1540 1547 return 0; 1541 1548 } 1542 1549 ··· 1754 1747 INIT_LIST_HEAD(&osb->blocked_lock_list); 1755 1748 osb->blocked_lock_count = 0; 1756 1749 spin_lock_init(&osb->osb_lock); 1750 + spin_lock_init(&osb->osb_xattr_lock); 1757 1751 ocfs2_init_inode_steal_slot(osb); 1758 1752 1759 1753 atomic_set(&osb->alloc_stats.moves, 0);
+17 -10
fs/ocfs2/xattr.c
··· 82 82 83 83 #define OCFS2_XATTR_ROOT_SIZE (sizeof(struct ocfs2_xattr_def_value_root)) 84 84 #define OCFS2_XATTR_INLINE_SIZE 80 85 + #define OCFS2_XATTR_HEADER_GAP 4 85 86 #define OCFS2_XATTR_FREE_IN_IBODY (OCFS2_MIN_XATTR_INLINE_SIZE \ 86 87 - sizeof(struct ocfs2_xattr_header) \ 87 - - sizeof(__u32)) 88 + - OCFS2_XATTR_HEADER_GAP) 88 89 #define OCFS2_XATTR_FREE_IN_BLOCK(ptr) ((ptr)->i_sb->s_blocksize \ 89 90 - sizeof(struct ocfs2_xattr_block) \ 90 91 - sizeof(struct ocfs2_xattr_header) \ 91 - - sizeof(__u32)) 92 + - OCFS2_XATTR_HEADER_GAP) 92 93 93 94 static struct ocfs2_xattr_def_value_root def_xv = { 94 95 .xv.xr_list.l_count = cpu_to_le16(1), ··· 275 274 bucket->bu_blocks, bucket->bu_bhs, 0, 276 275 NULL); 277 276 if (!rc) { 277 + spin_lock(&OCFS2_SB(bucket->bu_inode->i_sb)->osb_xattr_lock); 278 278 rc = ocfs2_validate_meta_ecc_bhs(bucket->bu_inode->i_sb, 279 279 bucket->bu_bhs, 280 280 bucket->bu_blocks, 281 281 &bucket_xh(bucket)->xh_check); 282 + spin_unlock(&OCFS2_SB(bucket->bu_inode->i_sb)->osb_xattr_lock); 282 283 if (rc) 283 284 mlog_errno(rc); 284 285 } ··· 313 310 { 314 311 int i; 315 312 313 + spin_lock(&OCFS2_SB(bucket->bu_inode->i_sb)->osb_xattr_lock); 316 314 ocfs2_compute_meta_ecc_bhs(bucket->bu_inode->i_sb, 317 315 bucket->bu_bhs, bucket->bu_blocks, 318 316 &bucket_xh(bucket)->xh_check); 317 + spin_unlock(&OCFS2_SB(bucket->bu_inode->i_sb)->osb_xattr_lock); 319 318 320 319 for (i = 0; i < bucket->bu_blocks; i++) 321 320 ocfs2_journal_dirty(handle, bucket->bu_bhs[i]); ··· 1512 1507 last += 1; 1513 1508 } 1514 1509 1515 - free = min_offs - ((void *)last - xs->base) - sizeof(__u32); 1510 + free = min_offs - ((void *)last - xs->base) - OCFS2_XATTR_HEADER_GAP; 1516 1511 if (free < 0) 1517 1512 return -EIO; 1518 1513 ··· 2195 2190 last += 1; 2196 2191 } 2197 2192 2198 - free = min_offs - ((void *)last - xs->base) - sizeof(__u32); 2193 + free = min_offs - ((void *)last - xs->base) - OCFS2_XATTR_HEADER_GAP; 2199 2194 if (free < 0) 2200 2195 return 0; 
2201 2196 ··· 2597 2592 2598 2593 if (!ret) { 2599 2594 /* Update inode ctime. */ 2600 - ret = ocfs2_journal_access(ctxt->handle, inode, xis->inode_bh, 2601 - OCFS2_JOURNAL_ACCESS_WRITE); 2595 + ret = ocfs2_journal_access_di(ctxt->handle, inode, 2596 + xis->inode_bh, 2597 + OCFS2_JOURNAL_ACCESS_WRITE); 2602 2598 if (ret) { 2603 2599 mlog_errno(ret); 2604 2600 goto out; ··· 5066 5060 xh_free_start = le16_to_cpu(xh->xh_free_start); 5067 5061 header_size = sizeof(struct ocfs2_xattr_header) + 5068 5062 count * sizeof(struct ocfs2_xattr_entry); 5069 - max_free = OCFS2_XATTR_BUCKET_SIZE - 5070 - le16_to_cpu(xh->xh_name_value_len) - header_size; 5063 + max_free = OCFS2_XATTR_BUCKET_SIZE - header_size - 5064 + le16_to_cpu(xh->xh_name_value_len) - OCFS2_XATTR_HEADER_GAP; 5071 5065 5072 5066 mlog_bug_on_msg(header_size > blocksize, "bucket %llu has header size " 5073 5067 "of %u which exceed block size\n", ··· 5100 5094 need = 0; 5101 5095 } 5102 5096 5103 - free = xh_free_start - header_size; 5097 + free = xh_free_start - header_size - OCFS2_XATTR_HEADER_GAP; 5104 5098 /* 5105 5099 * We need to make sure the new name/value pair 5106 5100 * can exist in the same block. ··· 5133 5127 } 5134 5128 5135 5129 xh_free_start = le16_to_cpu(xh->xh_free_start); 5136 - free = xh_free_start - header_size; 5130 + free = xh_free_start - header_size 5131 + - OCFS2_XATTR_HEADER_GAP; 5137 5132 if (xh_free_start % blocksize < need) 5138 5133 free -= xh_free_start % blocksize; 5139 5134
+1
include/drm/drm_crtc_helper.h
··· 76 76 void (*mode_set)(struct drm_encoder *encoder, 77 77 struct drm_display_mode *mode, 78 78 struct drm_display_mode *adjusted_mode); 79 + struct drm_crtc *(*get_crtc)(struct drm_encoder *encoder); 79 80 /* detect for DAC style encoders */ 80 81 enum drm_connector_status (*detect)(struct drm_encoder *encoder, 81 82 struct drm_connector *connector);
+2 -2
include/drm/drm_edid.h
··· 58 58 u8 hsync_pulse_width_lo; 59 59 u8 vsync_pulse_width_lo:4; 60 60 u8 vsync_offset_lo:4; 61 - u8 hsync_pulse_width_hi:2; 62 - u8 hsync_offset_hi:2; 63 61 u8 vsync_pulse_width_hi:2; 64 62 u8 vsync_offset_hi:2; 63 + u8 hsync_pulse_width_hi:2; 64 + u8 hsync_offset_hi:2; 65 65 u8 width_mm_lo; 66 66 u8 height_mm_lo; 67 67 u8 height_mm_hi:4;
+1
include/linux/Kbuild
··· 52 52 header-y += cgroupstats.h 53 53 header-y += cramfs_fs.h 54 54 header-y += cycx_cfm.h 55 + header-y += dcbnl.h 55 56 header-y += dlmconstants.h 56 57 header-y += dlm_device.h 57 58 header-y += dlm_netlink.h
+2
include/linux/blkdev.h
··· 708 708 }; 709 709 710 710 /* This should not be used directly - use rq_for_each_segment */ 711 + #define for_each_bio(_bio) \ 712 + for (; _bio; _bio = _bio->bi_next) 711 713 #define __rq_for_each_bio(_bio, rq) \ 712 714 if ((rq->bio)) \ 713 715 for (_bio = (rq)->bio; _bio; _bio = _bio->bi_next)
+3 -1
include/linux/dcbnl.h
··· 20 20 #ifndef __LINUX_DCBNL_H__ 21 21 #define __LINUX_DCBNL_H__ 22 22 23 + #include <linux/types.h> 24 + 23 25 #define DCB_PROTO_VERSION 1 24 26 25 27 struct dcbmsg { 26 - unsigned char dcb_family; 28 + __u8 dcb_family; 27 29 __u8 cmd; 28 30 __u16 dcb_pad; 29 31 };
+10
include/linux/decompress/bunzip2.h
··· 1 + #ifndef DECOMPRESS_BUNZIP2_H 2 + #define DECOMPRESS_BUNZIP2_H 3 + 4 + int bunzip2(unsigned char *inbuf, int len, 5 + int(*fill)(void*, unsigned int), 6 + int(*flush)(void*, unsigned int), 7 + unsigned char *output, 8 + int *pos, 9 + void(*error)(char *x)); 10 + #endif
+33
include/linux/decompress/generic.h
··· 1 + #ifndef DECOMPRESS_GENERIC_H 2 + #define DECOMPRESS_GENERIC_H 3 + 4 + /* Minimal chunk size to be read. 5 + * Bzip2 prefers at least 4096, 6 + * lzma prefers 0x10000. */ 7 + #define COMPR_IOBUF_SIZE 4096 8 + 9 + typedef int (*decompress_fn) (unsigned char *inbuf, int len, 10 + int(*fill)(void*, unsigned int), 11 + int(*writebb)(void*, unsigned int), 12 + unsigned char *output, 13 + int *posp, 14 + void(*error)(char *x)); 15 + 16 + /* inbuf - input buffer 17 + * len - length of pre-read data in inbuf 18 + * fill - function to fill inbuf when it is empty 19 + * writebb - function to write out the output buffer 20 + * posp - if non-null, input position (number of bytes read) will be 21 + * returned here 22 + * 23 + * If len != 0, inbuf holds that much pre-read data and fill 24 + * should not be called. 25 + * If len = 0, inbuf is allocated but empty; its size is COMPR_IOBUF_SIZE. 26 + * fill should be called (repeatedly...) to read data, at most COMPR_IOBUF_SIZE bytes at a time. 27 + */ 28 + 29 + /* Utility routine to detect the decompression method */ 30 + decompress_fn decompress_method(const unsigned char *inbuf, int len, 31 + const char **name); 32 + 33 + #endif
+13
include/linux/decompress/inflate.h
··· 1 + #ifndef INFLATE_H 2 + #define INFLATE_H 3 + 4 + /* Other housekeeping constants */ 5 + #define INBUFSIZ 4096 6 + 7 + int gunzip(unsigned char *inbuf, int len, 8 + int(*fill)(void*, unsigned int), 9 + int(*flush)(void*, unsigned int), 10 + unsigned char *output, 11 + int *pos, 12 + void(*error_fn)(char *x)); 13 + #endif
+87
include/linux/decompress/mm.h
··· 1 + /* 2 + * linux/compr_mm.h 3 + * 4 + * Memory management for pre-boot and ramdisk uncompressors 5 + * 6 + * Authors: Alain Knaff <alain@knaff.lu> 7 + * 8 + */ 9 + 10 + #ifndef DECOMPR_MM_H 11 + #define DECOMPR_MM_H 12 + 13 + #ifdef STATIC 14 + 15 + /* Code active when included from pre-boot environment: */ 16 + 17 + /* A trivial malloc implementation, adapted from 18 + * malloc by Hannu Savolainen 1993 and Matthias Urlichs 1994 19 + */ 20 + static unsigned long malloc_ptr; 21 + static int malloc_count; 22 + 23 + static void *malloc(int size) 24 + { 25 + void *p; 26 + 27 + if (size < 0) 28 + error("Malloc error"); 29 + if (!malloc_ptr) 30 + malloc_ptr = free_mem_ptr; 31 + 32 + malloc_ptr = (malloc_ptr + 3) & ~3; /* Align */ 33 + 34 + p = (void *)malloc_ptr; 35 + malloc_ptr += size; 36 + 37 + if (free_mem_end_ptr && malloc_ptr >= free_mem_end_ptr) 38 + error("Out of memory"); 39 + 40 + malloc_count++; 41 + return p; 42 + } 43 + 44 + static void free(void *where) 45 + { 46 + malloc_count--; 47 + if (!malloc_count) 48 + malloc_ptr = free_mem_ptr; 49 + } 50 + 51 + #define large_malloc(a) malloc(a) 52 + #define large_free(a) free(a) 53 + 54 + #define set_error_fn(x) 55 + 56 + #define INIT 57 + 58 + #else /* STATIC */ 59 + 60 + /* Code active when compiled standalone for use when loading ramdisk: */ 61 + 62 + #include <linux/kernel.h> 63 + #include <linux/fs.h> 64 + #include <linux/string.h> 65 + #include <linux/vmalloc.h> 66 + 67 + /* Use defines rather than static inline in order to avoid spurious 68 + * warnings when not needed (indeed large_malloc / large_free are not 69 + * needed by inflate */ 70 + 71 + #define malloc(a) kmalloc(a, GFP_KERNEL) 72 + #define free(a) kfree(a) 73 + 74 + #define large_malloc(a) vmalloc(a) 75 + #define large_free(a) vfree(a) 76 + 77 + static void(*error)(char *m); 78 + #define set_error_fn(x) error = x; 79 + 80 + #define INIT __init 81 + #define STATIC 82 + 83 + #include <linux/init.h> 84 + 85 + #endif /* STATIC */ 86 + 87 + #endif 
/* DECOMPR_MM_H */
+12
include/linux/decompress/unlzma.h
··· 1 + #ifndef DECOMPRESS_UNLZMA_H 2 + #define DECOMPRESS_UNLZMA_H 3 + 4 + int unlzma(unsigned char *, int, 5 + int(*fill)(void*, unsigned int), 6 + int(*flush)(void*, unsigned int), 7 + unsigned char *output, 8 + int *posp, 9 + void(*error)(char *x) 10 + ); 11 + 12 + #endif
+1 -1
include/linux/ide.h
··· 663 663 #define to_ide_device(dev) container_of(dev, ide_drive_t, gendev) 664 664 665 665 #define to_ide_drv(obj, cont_type) \ 666 - container_of(obj, struct cont_type, kref) 666 + container_of(obj, struct cont_type, dev) 667 667 668 668 #define ide_drv_g(disk, cont_type) \ 669 669 container_of((disk)->private_data, struct cont_type, driver)
+2 -1
include/linux/intel-iommu.h
··· 194 194 /* FSTS_REG */ 195 195 #define DMA_FSTS_PPF ((u32)2) 196 196 #define DMA_FSTS_PFO ((u32)1) 197 + #define DMA_FSTS_IQE (1 << 4) 197 198 #define dma_fsts_fault_record_index(s) (((s) >> 8) & 0xff) 198 199 199 200 /* FRCD_REG, 32 bits access */ ··· 329 328 unsigned int size_order, u64 type, 330 329 int non_present_entry_flush); 331 330 332 - extern void qi_submit_sync(struct qi_desc *desc, struct intel_iommu *iommu); 331 + extern int qi_submit_sync(struct qi_desc *desc, struct intel_iommu *iommu); 333 332 334 333 extern void *intel_alloc_coherent(struct device *, size_t, dma_addr_t *, gfp_t); 335 334 extern void intel_free_coherent(struct device *, size_t, void *, dma_addr_t);
+6 -5
include/linux/io-mapping.h
··· 49 49 io_mapping_create_wc(resource_size_t base, unsigned long size) 50 50 { 51 51 struct io_mapping *iomap; 52 - pgprot_t prot; 53 52 54 - if (!reserve_io_memtype_wc(base, size, &prot)) 53 + if (!is_io_mapping_possible(base, size)) 55 54 return NULL; 56 55 57 56 iomap = kmalloc(sizeof(*iomap), GFP_KERNEL); ··· 59 60 60 61 iomap->base = base; 61 62 iomap->size = size; 62 - iomap->prot = prot; 63 + iomap->prot = pgprot_writecombine(__pgprot(__PAGE_KERNEL)); 63 64 return iomap; 64 65 } 65 66 66 67 static inline void 67 68 io_mapping_free(struct io_mapping *mapping) 68 69 { 69 - free_io_memtype(mapping->base, mapping->size); 70 70 kfree(mapping); 71 71 } 72 72 ··· 91 93 static inline void * 92 94 io_mapping_map_wc(struct io_mapping *mapping, unsigned long offset) 93 95 { 96 + resource_size_t phys_addr; 97 + 94 98 BUG_ON(offset >= mapping->size); 95 - resource_size_t phys_addr = mapping->base + offset; 99 + phys_addr = mapping->base + offset; 100 + 96 101 return ioremap_wc(phys_addr, PAGE_SIZE); 97 102 } 98 103
+1 -1
include/linux/netfilter/xt_NFLOG.h
··· 2 2 #define _XT_NFLOG_TARGET 3 3 4 4 #define XT_NFLOG_DEFAULT_GROUP 0x1 5 - #define XT_NFLOG_DEFAULT_THRESHOLD 1 5 + #define XT_NFLOG_DEFAULT_THRESHOLD 0 6 6 7 7 #define XT_NFLOG_MASK 0x0 8 8
+2 -2
include/linux/uaccess.h
··· 41 41 #ifndef ARCH_HAS_NOCACHE_UACCESS 42 42 43 43 static inline unsigned long __copy_from_user_inatomic_nocache(void *to, 44 - const void __user *from, unsigned long n, unsigned long total) 44 + const void __user *from, unsigned long n) 45 45 { 46 46 return __copy_from_user_inatomic(to, from, n); 47 47 } 48 48 49 49 static inline unsigned long __copy_from_user_nocache(void *to, 50 - const void __user *from, unsigned long n, unsigned long total) 50 + const void __user *from, unsigned long n) 51 51 { 52 52 return __copy_from_user(to, from, n); 53 53 }
+1
include/linux/user_namespace.h
··· 13 13 struct kref kref; 14 14 struct hlist_head uidhash_table[UIDHASH_SZ]; 15 15 struct user_struct *creator; 16 + struct work_struct destroyer; 16 17 }; 17 18 18 19 extern struct user_namespace init_user_ns;
+1 -1
include/net/netfilter/nf_conntrack_core.h
··· 59 59 struct nf_conn *ct = (struct nf_conn *)skb->nfct; 60 60 int ret = NF_ACCEPT; 61 61 62 - if (ct) { 62 + if (ct && ct != &nf_conntrack_untracked) { 63 63 if (!nf_ct_is_confirmed(ct) && !nf_ct_is_dying(ct)) 64 64 ret = __nf_conntrack_confirm(skb); 65 65 nf_ct_deliver_cached_events(ct);
+60
init/Kconfig
··· 101 101 102 102 which is done within the script "scripts/setlocalversion".) 103 103 104 + config HAVE_KERNEL_GZIP 105 + bool 106 + 107 + config HAVE_KERNEL_BZIP2 108 + bool 109 + 110 + config HAVE_KERNEL_LZMA 111 + bool 112 + 113 + choice 114 + prompt "Kernel compression mode" 115 + default KERNEL_GZIP 116 + depends on HAVE_KERNEL_GZIP || HAVE_KERNEL_BZIP2 || HAVE_KERNEL_LZMA 117 + help 118 + The Linux kernel is a kind of self-extracting executable. 119 + Several compression algorithms are available, which differ 120 + in efficiency and in compression and decompression speed. 121 + Compression speed is only relevant when building a kernel. 122 + Decompression speed is relevant at each boot. 123 + 124 + If you have any problems with bzip2 or lzma compressed 125 + kernels, mail me (Alain Knaff) <alain@knaff.lu>. (An older 126 + version of this functionality (bzip2 only), for 2.4, was 127 + supplied by Christian Ludwig.) 128 + 129 + High compression options are mostly useful for users who 130 + are low on disk space (embedded systems) but for whom RAM 131 + size matters less. 132 + 133 + If in doubt, select 'gzip'. 134 + 135 + config KERNEL_GZIP 136 + bool "Gzip" 137 + depends on HAVE_KERNEL_GZIP 138 + help 139 + The old and tried gzip compression. Its compression ratio is 140 + the poorest among the three choices; however, its speed (both 141 + compression and decompression) is the fastest. 142 + 143 + config KERNEL_BZIP2 144 + bool "Bzip2" 145 + depends on HAVE_KERNEL_BZIP2 146 + help 147 + Its compression ratio and speed are intermediate. 148 + Decompression speed is the slowest among the three. The kernel 149 + size is about 10% smaller with bzip2, in comparison to gzip. 150 + Bzip2 uses a large amount of memory. For modern kernels you 151 + will need at least 8MB of RAM for booting. 152 + 153 + config KERNEL_LZMA 154 + bool "LZMA" 155 + depends on HAVE_KERNEL_LZMA 156 + help 157 + The most recent compression algorithm.
158 + Its ratio is best, decompression speed is between the other 159 + two. Compression is slowest. The kernel size is about 33% 160 + smaller with LZMA in comparison to gzip. 161 + 162 + endchoice 163 + 104 164 config SWAP 105 165 bool "Support for paging of anonymous memory (swap)" 106 166 depends on MMU && BLOCK
+47 -131
init/do_mounts_rd.c
··· 11 11 #include "do_mounts.h" 12 12 #include "../fs/squashfs/squashfs_fs.h" 13 13 14 + #include <linux/decompress/generic.h> 15 + 16 + 14 17 int __initdata rd_prompt = 1;/* 1 = prompt for RAM disk, 0 = don't prompt */ 15 18 16 19 static int __init prompt_ramdisk(char *str) ··· 32 29 } 33 30 __setup("ramdisk_start=", ramdisk_start_setup); 34 31 35 - static int __init crd_load(int in_fd, int out_fd); 32 + static int __init crd_load(int in_fd, int out_fd, decompress_fn deco); 36 33 37 34 /* 38 35 * This routine tries to find a RAM disk image to load, and returns the ··· 41 38 * numbers could not be found. 42 39 * 43 40 * We currently check for the following magic numbers: 44 - * minix 45 - * ext2 41 + * minix 42 + * ext2 46 43 * romfs 47 44 * cramfs 48 45 * squashfs 49 - * gzip 46 + * gzip 50 47 */ 51 - static int __init 52 - identify_ramdisk_image(int fd, int start_block) 48 + static int __init 49 + identify_ramdisk_image(int fd, int start_block, decompress_fn *decompressor) 53 50 { 54 51 const int size = 512; 55 52 struct minix_super_block *minixsb; ··· 59 56 struct squashfs_super_block *squashfsb; 60 57 int nblocks = -1; 61 58 unsigned char *buf; 59 + const char *compress_name; 62 60 63 61 buf = kmalloc(size, GFP_KERNEL); 64 62 if (!buf) ··· 73 69 memset(buf, 0xe5, size); 74 70 75 71 /* 76 - * Read block 0 to test for gzipped kernel 72 + * Read block 0 to test for compressed kernel 77 73 */ 78 74 sys_lseek(fd, start_block * BLOCK_SIZE, 0); 79 75 sys_read(fd, buf, size); 80 76 81 - /* 82 - * If it matches the gzip magic numbers, return 0 83 - */ 84 - if (buf[0] == 037 && ((buf[1] == 0213) || (buf[1] == 0236))) { 85 - printk(KERN_NOTICE 86 - "RAMDISK: Compressed image found at block %d\n", 87 - start_block); 77 + *decompressor = decompress_method(buf, size, &compress_name); 78 + if (compress_name) { 79 + printk(KERN_NOTICE "RAMDISK: %s image found at block %d\n", 80 + compress_name, start_block); 81 + if (!*decompressor) 82 + printk(KERN_EMERG 83 + "RAMDISK: %s 
decompressor not configured!\n", 84 + compress_name); 88 85 nblocks = 0; 89 86 goto done; 90 87 } ··· 147 142 printk(KERN_NOTICE 148 143 "RAMDISK: Couldn't find valid RAM disk image starting at %d.\n", 149 144 start_block); 150 - 145 + 151 146 done: 152 147 sys_lseek(fd, start_block * BLOCK_SIZE, 0); 153 148 kfree(buf); ··· 162 157 int nblocks, i, disk; 163 158 char *buf = NULL; 164 159 unsigned short rotate = 0; 160 + decompress_fn decompressor = NULL; 165 161 #if !defined(CONFIG_S390) && !defined(CONFIG_PPC_ISERIES) 166 162 char rotator[4] = { '|' , '/' , '-' , '\\' }; 167 163 #endif ··· 175 169 if (in_fd < 0) 176 170 goto noclose_input; 177 171 178 - nblocks = identify_ramdisk_image(in_fd, rd_image_start); 172 + nblocks = identify_ramdisk_image(in_fd, rd_image_start, &decompressor); 179 173 if (nblocks < 0) 180 174 goto done; 181 175 182 176 if (nblocks == 0) { 183 - if (crd_load(in_fd, out_fd) == 0) 177 + if (crd_load(in_fd, out_fd, decompressor) == 0) 184 178 goto successful_load; 185 179 goto done; 186 180 } ··· 206 200 nblocks, rd_blocks); 207 201 goto done; 208 202 } 209 - 203 + 210 204 /* 211 205 * OK, time to copy in the data 212 206 */ ··· 279 273 return rd_load_image("/dev/root"); 280 274 } 281 275 282 - /* 283 - * gzip declarations 284 - */ 285 - 286 - #define OF(args) args 287 - 288 - #ifndef memzero 289 - #define memzero(s, n) memset ((s), 0, (n)) 290 - #endif 291 - 292 - typedef unsigned char uch; 293 - typedef unsigned short ush; 294 - typedef unsigned long ulg; 295 - 296 - #define INBUFSIZ 4096 297 - #define WSIZE 0x8000 /* window size--must be a power of two, and */ 298 - /* at least 32K for zip's deflate method */ 299 - 300 - static uch *inbuf; 301 - static uch *window; 302 - 303 - static unsigned insize; /* valid bytes in inbuf */ 304 - static unsigned inptr; /* index of next byte to be processed in inbuf */ 305 - static unsigned outcnt; /* bytes in output buffer */ 306 276 static int exit_code; 307 - static int unzip_error; 308 - static long 
bytes_out; 277 + static int decompress_error; 309 278 static int crd_infd, crd_outfd; 310 279 311 - #define get_byte() (inptr < insize ? inbuf[inptr++] : fill_inbuf()) 312 - 313 - /* Diagnostic functions (stubbed out) */ 314 - #define Assert(cond,msg) 315 - #define Trace(x) 316 - #define Tracev(x) 317 - #define Tracevv(x) 318 - #define Tracec(c,x) 319 - #define Tracecv(c,x) 320 - 321 - #define STATIC static 322 - #define INIT __init 323 - 324 - static int __init fill_inbuf(void); 325 - static void __init flush_window(void); 326 - static void __init error(char *m); 327 - 328 - #define NO_INFLATE_MALLOC 329 - 330 - #include "../lib/inflate.c" 331 - 332 - /* =========================================================================== 333 - * Fill the input buffer. This is called only when the buffer is empty 334 - * and at least one byte is really needed. 335 - * Returning -1 does not guarantee that gunzip() will ever return. 336 - */ 337 - static int __init fill_inbuf(void) 280 + static int __init compr_fill(void *buf, unsigned int len) 338 281 { 339 - if (exit_code) return -1; 340 - 341 - insize = sys_read(crd_infd, inbuf, INBUFSIZ); 342 - if (insize == 0) { 343 - error("RAMDISK: ran out of compressed data"); 344 - return -1; 345 - } 346 - 347 - inptr = 1; 348 - 349 - return inbuf[0]; 282 + int r = sys_read(crd_infd, buf, len); 283 + if (r < 0) 284 + printk(KERN_ERR "RAMDISK: error while reading compressed data"); 285 + else if (r == 0) 286 + printk(KERN_ERR "RAMDISK: EOF while reading compressed data"); 287 + return r; 350 288 } 351 289 352 - /* =========================================================================== 353 - * Write the output window window[0..outcnt-1] and update crc and bytes_out. 354 - * (Used for the decompressed data only.) 
355 - */ 356 - static void __init flush_window(void) 290 + static int __init compr_flush(void *window, unsigned int outcnt) 357 291 { 358 - ulg c = crc; /* temporary variable */ 359 - unsigned n, written; 360 - uch *in, ch; 361 - 362 - written = sys_write(crd_outfd, window, outcnt); 363 - if (written != outcnt && unzip_error == 0) { 364 - printk(KERN_ERR "RAMDISK: incomplete write (%d != %d) %ld\n", 365 - written, outcnt, bytes_out); 366 - unzip_error = 1; 367 - } 368 - in = window; 369 - for (n = 0; n < outcnt; n++) { 370 - ch = *in++; 371 - c = crc_32_tab[((int)c ^ ch) & 0xff] ^ (c >> 8); 372 - } 373 - crc = c; 374 - bytes_out += (ulg)outcnt; 375 - outcnt = 0; 292 + int written = sys_write(crd_outfd, window, outcnt); 293 + if (written != outcnt) { 294 + if (decompress_error == 0) 295 + printk(KERN_ERR 296 + "RAMDISK: incomplete write (%d != %d)\n", 297 + written, outcnt); 298 + decompress_error = 1; 299 + return -1; 300 + } 301 + return outcnt; 376 302 } 377 303 378 304 static void __init error(char *x) 379 305 { 380 306 printk(KERN_ERR "%s\n", x); 381 307 exit_code = 1; 382 - unzip_error = 1; 308 + decompress_error = 1; 383 309 } 384 310 385 - static int __init crd_load(int in_fd, int out_fd) 311 + static int __init crd_load(int in_fd, int out_fd, decompress_fn deco) 386 312 { 387 313 int result; 388 - 389 - insize = 0; /* valid bytes in inbuf */ 390 - inptr = 0; /* index of next byte to be processed in inbuf */ 391 - outcnt = 0; /* bytes in output buffer */ 392 - exit_code = 0; 393 - bytes_out = 0; 394 - crc = (ulg)0xffffffffL; /* shift register contents */ 395 - 396 314 crd_infd = in_fd; 397 315 crd_outfd = out_fd; 398 - inbuf = kmalloc(INBUFSIZ, GFP_KERNEL); 399 - if (!inbuf) { 400 - printk(KERN_ERR "RAMDISK: Couldn't allocate gzip buffer\n"); 401 - return -1; 402 - } 403 - window = kmalloc(WSIZE, GFP_KERNEL); 404 - if (!window) { 405 - printk(KERN_ERR "RAMDISK: Couldn't allocate gzip window\n"); 406 - kfree(inbuf); 407 - return -1; 408 - } 409 - makecrc(); 
410 - result = gunzip(); 411 - if (unzip_error) 316 + result = deco(NULL, 0, compr_fill, compr_flush, NULL, NULL, error); 317 + if (decompress_error) 412 318 result = 1; 413 - kfree(inbuf); 414 - kfree(window); 415 319 return result; 416 320 }
+37 -85
init/initramfs.c
··· 390 390 return len - count; 391 391 } 392 392 393 - static void __init flush_buffer(char *buf, unsigned len) 393 + static int __init flush_buffer(void *bufv, unsigned len) 394 394 { 395 + char *buf = (char *) bufv; 395 396 int written; 397 + int origLen = len; 396 398 if (message) 397 - return; 399 + return -1; 398 400 while ((written = write_buffer(buf, len)) < len && !message) { 399 401 char c = buf[written]; 400 402 if (c == '0') { ··· 410 408 } else 411 409 error("junk in compressed archive"); 412 410 } 411 + return origLen; 413 412 } 414 413 415 - /* 416 - * gzip declarations 417 - */ 414 + static unsigned my_inptr; /* index of next byte to be processed in inbuf */ 418 415 419 - #define OF(args) args 420 - 421 - #ifndef memzero 422 - #define memzero(s, n) memset ((s), 0, (n)) 423 - #endif 424 - 425 - typedef unsigned char uch; 426 - typedef unsigned short ush; 427 - typedef unsigned long ulg; 428 - 429 - #define WSIZE 0x8000 /* window size--must be a power of two, and */ 430 - /* at least 32K for zip's deflate method */ 431 - 432 - static uch *inbuf; 433 - static uch *window; 434 - 435 - static unsigned insize; /* valid bytes in inbuf */ 436 - static unsigned inptr; /* index of next byte to be processed in inbuf */ 437 - static unsigned outcnt; /* bytes in output buffer */ 438 - static long bytes_out; 439 - 440 - #define get_byte() (inptr < insize ? 
inbuf[inptr++] : -1) 441 - 442 - /* Diagnostic functions (stubbed out) */ 443 - #define Assert(cond,msg) 444 - #define Trace(x) 445 - #define Tracev(x) 446 - #define Tracevv(x) 447 - #define Tracec(c,x) 448 - #define Tracecv(c,x) 449 - 450 - #define STATIC static 451 - #define INIT __init 452 - 453 - static void __init flush_window(void); 454 - static void __init error(char *m); 455 - 456 - #define NO_INFLATE_MALLOC 457 - 458 - #include "../lib/inflate.c" 459 - 460 - /* =========================================================================== 461 - * Write the output window window[0..outcnt-1] and update crc and bytes_out. 462 - * (Used for the decompressed data only.) 463 - */ 464 - static void __init flush_window(void) 465 - { 466 - ulg c = crc; /* temporary variable */ 467 - unsigned n; 468 - uch *in, ch; 469 - 470 - flush_buffer(window, outcnt); 471 - in = window; 472 - for (n = 0; n < outcnt; n++) { 473 - ch = *in++; 474 - c = crc_32_tab[((int)c ^ ch) & 0xff] ^ (c >> 8); 475 - } 476 - crc = c; 477 - bytes_out += (ulg)outcnt; 478 - outcnt = 0; 479 - } 416 + #include <linux/decompress/generic.h> 480 417 481 418 static char * __init unpack_to_rootfs(char *buf, unsigned len, int check_only) 482 419 { 483 420 int written; 421 + decompress_fn decompress; 422 + const char *compress_name; 423 + static __initdata char msg_buf[64]; 424 + 484 425 dry_run = check_only; 485 426 header_buf = kmalloc(110, GFP_KERNEL); 486 427 symlink_buf = kmalloc(PATH_MAX + N_ALIGN(PATH_MAX) + 1, GFP_KERNEL); 487 428 name_buf = kmalloc(N_ALIGN(PATH_MAX), GFP_KERNEL); 488 - window = kmalloc(WSIZE, GFP_KERNEL); 489 - if (!window || !header_buf || !symlink_buf || !name_buf) 429 + 430 + if (!header_buf || !symlink_buf || !name_buf) 490 431 panic("can't allocate buffers"); 432 + 491 433 state = Start; 492 434 this_header = 0; 493 435 message = NULL; ··· 451 505 continue; 452 506 } 453 507 this_header = 0; 454 - insize = len; 455 - inbuf = buf; 456 - inptr = 0; 457 - outcnt = 0; /* bytes in 
output buffer */ 458 - bytes_out = 0; 459 - crc = (ulg)0xffffffffL; /* shift register contents */ 460 - makecrc(); 461 - gunzip(); 508 + decompress = decompress_method(buf, len, &compress_name); 509 + if (decompress) 510 + decompress(buf, len, NULL, flush_buffer, NULL, 511 + &my_inptr, error); 512 + else if (compress_name) { 513 + if (!message) { 514 + snprintf(msg_buf, sizeof msg_buf, 515 + "compression method %s not configured", 516 + compress_name); 517 + message = msg_buf; 518 + } 519 + } 462 520 if (state != Reset) 463 - error("junk in gzipped archive"); 464 - this_header = saved_offset + inptr; 465 - buf += inptr; 466 - len -= inptr; 521 + error("junk in compressed archive"); 522 + this_header = saved_offset + my_inptr; 523 + buf += my_inptr; 524 + len -= my_inptr; 467 525 } 468 526 dir_utime(); 469 - kfree(window); 470 527 kfree(name_buf); 471 528 kfree(symlink_buf); 472 529 kfree(header_buf); ··· 528 579 char *err = unpack_to_rootfs(__initramfs_start, 529 580 __initramfs_end - __initramfs_start, 0); 530 581 if (err) 531 - panic(err); 582 + panic(err); /* Failed to decompress INTERNAL initramfs */ 532 583 if (initrd_start) { 533 584 #ifdef CONFIG_BLK_DEV_RAM 534 585 int fd; ··· 554 605 printk(KERN_INFO "Unpacking initramfs..."); 555 606 err = unpack_to_rootfs((char *)initrd_start, 556 607 initrd_end - initrd_start, 0); 557 - if (err) 558 - panic(err); 559 - printk(" done\n"); 608 + if (err) { 609 + printk(" failed!\n"); 610 + printk(KERN_EMERG "%s\n", err); 611 + } else { 612 + printk(" done\n"); 613 + } 560 614 free_initrd(); 561 615 #endif 562 616 }
+4 -3
kernel/seccomp.c
··· 8 8 9 9 #include <linux/seccomp.h> 10 10 #include <linux/sched.h> 11 + #include <linux/compat.h> 11 12 12 13 /* #define SECCOMP_DEBUG 1 */ 13 14 #define NR_SECCOMP_MODES 1 ··· 23 22 0, /* null terminated */ 24 23 }; 25 24 26 - #ifdef TIF_32BIT 25 + #ifdef CONFIG_COMPAT 27 26 static int mode1_syscalls_32[] = { 28 27 __NR_seccomp_read_32, __NR_seccomp_write_32, __NR_seccomp_exit_32, __NR_seccomp_sigreturn_32, 29 28 0, /* null terminated */ ··· 38 37 switch (mode) { 39 38 case 1: 40 39 syscall = mode1_syscalls; 41 - #ifdef TIF_32BIT 42 - if (test_thread_flag(TIF_32BIT)) 40 + #ifdef CONFIG_COMPAT 41 + if (is_compat_task()) 43 42 syscall = mode1_syscalls_32; 44 43 #endif 45 44 do {
+17 -4
kernel/user_namespace.c
··· 60 60 return 0; 61 61 } 62 62 63 - void free_user_ns(struct kref *kref) 63 + /* 64 + * Deferred destructor for a user namespace. This is required because 65 + * free_user_ns() may be called with uidhash_lock held, but we need to call 66 + * back to free_uid() which will want to take the lock again. 67 + */ 68 + static void free_user_ns_work(struct work_struct *work) 64 69 { 65 - struct user_namespace *ns; 66 - 67 - ns = container_of(kref, struct user_namespace, kref); 70 + struct user_namespace *ns = 71 + container_of(work, struct user_namespace, destroyer); 68 72 free_uid(ns->creator); 69 73 kfree(ns); 74 + } 75 + 76 + void free_user_ns(struct kref *kref) 77 + { 78 + struct user_namespace *ns = 79 + container_of(kref, struct user_namespace, kref); 80 + 81 + INIT_WORK(&ns->destroyer, free_user_ns_work); 82 + schedule_work(&ns->destroyer); 70 83 } 71 84 EXPORT_SYMBOL(free_user_ns);
+14
lib/Kconfig
··· 98 98 tristate 99 99 100 100 # 101 + # These all provide a common interface (hence the apparent duplication with 102 + # ZLIB_INFLATE; DECOMPRESS_GZIP is just a wrapper). 103 + # 104 + config DECOMPRESS_GZIP 105 + select ZLIB_INFLATE 106 + tristate 107 + 108 + config DECOMPRESS_BZIP2 109 + tristate 110 + 111 + config DECOMPRESS_LZMA 112 + tristate 113 + 114 + # 101 115 # Generic allocator support is selected if needed 102 116 # 103 117 config GENERIC_ALLOCATOR
+6 -1
lib/Makefile
··· 11 11 rbtree.o radix-tree.o dump_stack.o \ 12 12 idr.o int_sqrt.o extable.o prio_tree.o \ 13 13 sha1.o irq_regs.o reciprocal_div.o argv_split.o \ 14 - proportions.o prio_heap.o ratelimit.o show_mem.o is_single_threaded.o 14 + proportions.o prio_heap.o ratelimit.o show_mem.o \ 15 + is_single_threaded.o decompress.o 15 16 16 17 lib-$(CONFIG_MMU) += ioremap.o 17 18 lib-$(CONFIG_SMP) += cpumask.o ··· 65 64 obj-$(CONFIG_REED_SOLOMON) += reed_solomon/ 66 65 obj-$(CONFIG_LZO_COMPRESS) += lzo/ 67 66 obj-$(CONFIG_LZO_DECOMPRESS) += lzo/ 67 + 68 + lib-$(CONFIG_DECOMPRESS_GZIP) += decompress_inflate.o 69 + lib-$(CONFIG_DECOMPRESS_BZIP2) += decompress_bunzip2.o 70 + lib-$(CONFIG_DECOMPRESS_LZMA) += decompress_unlzma.o 68 71 69 72 obj-$(CONFIG_TEXTSEARCH) += textsearch.o 70 73 obj-$(CONFIG_TEXTSEARCH_KMP) += ts_kmp.o
+54
lib/decompress.c
··· 1 + /* 2 + * decompress.c 3 + * 4 + * Detect the decompression method based on magic number 5 + */ 6 + 7 + #include <linux/decompress/generic.h> 8 + 9 + #include <linux/decompress/bunzip2.h> 10 + #include <linux/decompress/unlzma.h> 11 + #include <linux/decompress/inflate.h> 12 + 13 + #include <linux/types.h> 14 + #include <linux/string.h> 15 + 16 + #ifndef CONFIG_DECOMPRESS_GZIP 17 + # define gunzip NULL 18 + #endif 19 + #ifndef CONFIG_DECOMPRESS_BZIP2 20 + # define bunzip2 NULL 21 + #endif 22 + #ifndef CONFIG_DECOMPRESS_LZMA 23 + # define unlzma NULL 24 + #endif 25 + 26 + static const struct compress_format { 27 + unsigned char magic[2]; 28 + const char *name; 29 + decompress_fn decompressor; 30 + } compressed_formats[] = { 31 + { {037, 0213}, "gzip", gunzip }, 32 + { {037, 0236}, "gzip", gunzip }, 33 + { {0x42, 0x5a}, "bzip2", bunzip2 }, 34 + { {0x5d, 0x00}, "lzma", unlzma }, 35 + { {0, 0}, NULL, NULL } 36 + }; 37 + 38 + decompress_fn decompress_method(const unsigned char *inbuf, int len, 39 + const char **name) 40 + { 41 + const struct compress_format *cf; 42 + 43 + if (len < 2) 44 + return NULL; /* Need at least this much... */ 45 + 46 + for (cf = compressed_formats; cf->name; cf++) { 47 + if (!memcmp(inbuf, cf->magic, 2)) 48 + break; 49 + 50 + } 51 + if (name) 52 + *name = cf->name; 53 + return cf->decompressor; 54 + }
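decompress_method() above is a first-match scan over two-byte magics. The same table logic, stripped down to a name lookup so it can run anywhere:

```c
#include <assert.h>
#include <string.h>

struct compress_format {
	unsigned char magic[2];
	const char *name;
};

static const struct compress_format formats[] = {
	{ {037, 0213}, "gzip"  },	/* 0x1f 0x8b, gzip/deflate */
	{ {037, 0236}, "gzip"  },	/* 0x1f 0x9e, old gzip format */
	{ {0x42, 0x5a}, "bzip2" },	/* "BZ" */
	{ {0x5d, 0x00}, "lzma"  },	/* default lzma properties byte */
	{ {0, 0}, NULL }
};

static const char *compress_name(const unsigned char *buf, int len)
{
	const struct compress_format *cf;

	if (len < 2)
		return NULL;		/* need at least the magic bytes */
	for (cf = formats; cf->name; cf++)
		if (!memcmp(buf, cf->magic, 2))
			return cf->name;
	return NULL;
}
```

Two bytes is enough here only because the formats involved happen not to collide; a sturdier detector would also check, e.g., the third gzip byte (compression method 8).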
+735
lib/decompress_bunzip2.c
··· 1 + /* vi: set sw = 4 ts = 4: */ 2 + /* Small bzip2 deflate implementation, by Rob Landley (rob@landley.net). 3 + 4 + Based on bzip2 decompression code by Julian R Seward (jseward@acm.org), 5 + which also acknowledges contributions by Mike Burrows, David Wheeler, 6 + Peter Fenwick, Alistair Moffat, Radford Neal, Ian H. Witten, 7 + Robert Sedgewick, and Jon L. Bentley. 8 + 9 + This code is licensed under the LGPLv2: 10 + LGPL (http://www.gnu.org/copyleft/lgpl.html) 11 + */ 12 + 13 + /* 14 + Size and speed optimizations by Manuel Novoa III (mjn3@codepoet.org). 15 + 16 + More efficient reading of Huffman codes, a streamlined read_bunzip() 17 + function, and various other tweaks. In (limited) tests, approximately 18 + 20% faster than bzcat on x86 and about 10% faster on arm. 19 + 20 + Note that about 2/3 of the time is spent in read_bunzip() reversing 21 + the Burrows-Wheeler transformation. Much of that time is delay 22 + resulting from cache misses. 23 + 24 + I would ask that anyone benefiting from this work, especially those 25 + using it in commercial products, consider making a donation to my local 26 + non-profit hospice organization in the name of the woman I loved, who 27 + passed away Feb. 12, 2003. 28 + 29 + In memory of Toni W. Hagan 30 + 31 + Hospice of Acadiana, Inc. 
32 + 2600 Johnston St., Suite 200 33 + Lafayette, LA 70503-3240 34 + 35 + Phone (337) 232-1234 or 1-800-738-2226 36 + Fax (337) 232-1297 37 + 38 + http://www.hospiceacadiana.com/ 39 + 40 + Manuel 41 + */ 42 + 43 + /* 44 + Made it fit for running in the Linux kernel by Alain Knaff (alain@knaff.lu) 45 + */ 46 + 47 + 48 + #ifndef STATIC 49 + #include <linux/decompress/bunzip2.h> 50 + #endif /* !STATIC */ 51 + 52 + #include <linux/decompress/mm.h> 53 + 54 + #ifndef INT_MAX 55 + #define INT_MAX 0x7fffffff 56 + #endif 57 + 58 + /* Constants for Huffman coding */ 59 + #define MAX_GROUPS 6 60 + #define GROUP_SIZE 50 /* 64 would have been more efficient */ 61 + #define MAX_HUFCODE_BITS 20 /* Longest Huffman code allowed */ 62 + #define MAX_SYMBOLS 258 /* 256 literals + RUNA + RUNB */ 63 + #define SYMBOL_RUNA 0 64 + #define SYMBOL_RUNB 1 65 + 66 + /* Status return values */ 67 + #define RETVAL_OK 0 68 + #define RETVAL_LAST_BLOCK (-1) 69 + #define RETVAL_NOT_BZIP_DATA (-2) 70 + #define RETVAL_UNEXPECTED_INPUT_EOF (-3) 71 + #define RETVAL_UNEXPECTED_OUTPUT_EOF (-4) 72 + #define RETVAL_DATA_ERROR (-5) 73 + #define RETVAL_OUT_OF_MEMORY (-6) 74 + #define RETVAL_OBSOLETE_INPUT (-7) 75 + 76 + /* Other housekeeping constants */ 77 + #define BZIP2_IOBUF_SIZE 4096 78 + 79 + /* This is what we know about each Huffman coding group */ 80 + struct group_data { 81 + /* We have an extra slot at the end of limit[] for a sentinel value. */ 82 + int limit[MAX_HUFCODE_BITS+1]; 83 + int base[MAX_HUFCODE_BITS]; 84 + int permute[MAX_SYMBOLS]; 85 + int minLen, maxLen; 86 + }; 87 + 88 + /* Structure holding all the housekeeping data, including IO buffers and 89 + memory that persists between calls to bunzip */ 90 + struct bunzip_data { 91 + /* State for interrupting output loop */ 92 + int writeCopies, writePos, writeRunCountdown, writeCount, writeCurrent; 93 + /* I/O tracking data (file handles, buffers, positions, etc.) 
*/ 94 + int (*fill)(void*, unsigned int); 95 + int inbufCount, inbufPos /*, outbufPos*/; 96 + unsigned char *inbuf /*,*outbuf*/; 97 + unsigned int inbufBitCount, inbufBits; 98 + /* The CRC values stored in the block header and calculated from the 99 + data */ 100 + unsigned int crc32Table[256], headerCRC, totalCRC, writeCRC; 101 + /* Intermediate buffer and its size (in bytes) */ 102 + unsigned int *dbuf, dbufSize; 103 + /* These things are a bit too big to go on the stack */ 104 + unsigned char selectors[32768]; /* nSelectors = 15 bits */ 105 + struct group_data groups[MAX_GROUPS]; /* Huffman coding tables */ 106 + int io_error; /* non-zero if we have IO error */ 107 + }; 108 + 109 + 110 + /* Return the next nnn bits of input. All reads from the compressed input 111 + are done through this function. All reads are big endian */ 112 + static unsigned int INIT get_bits(struct bunzip_data *bd, char bits_wanted) 113 + { 114 + unsigned int bits = 0; 115 + 116 + /* If we need to get more data from the byte buffer, do so. 117 + (Loop getting one byte at a time to enforce endianness and avoid 118 + unaligned access.) */ 119 + while (bd->inbufBitCount < bits_wanted) { 120 + /* If we need to read more data from file into byte buffer, do 121 + so */ 122 + if (bd->inbufPos == bd->inbufCount) { 123 + if (bd->io_error) 124 + return 0; 125 + bd->inbufCount = bd->fill(bd->inbuf, BZIP2_IOBUF_SIZE); 126 + if (bd->inbufCount <= 0) { 127 + bd->io_error = RETVAL_UNEXPECTED_INPUT_EOF; 128 + return 0; 129 + } 130 + bd->inbufPos = 0; 131 + } 132 + /* Avoid 32-bit overflow (dump bit buffer to top of output) */ 133 + if (bd->inbufBitCount >= 24) { 134 + bits = bd->inbufBits&((1 << bd->inbufBitCount)-1); 135 + bits_wanted -= bd->inbufBitCount; 136 + bits <<= bits_wanted; 137 + bd->inbufBitCount = 0; 138 + } 139 + /* Grab next 8 bits of input from buffer. 
*/ 140 + bd->inbufBits = (bd->inbufBits << 8)|bd->inbuf[bd->inbufPos++]; 141 + bd->inbufBitCount += 8; 142 + } 143 + /* Calculate result */ 144 + bd->inbufBitCount -= bits_wanted; 145 + bits |= (bd->inbufBits >> bd->inbufBitCount)&((1 << bits_wanted)-1); 146 + 147 + return bits; 148 + } 149 + 150 + /* Unpacks the next block and sets up for the inverse burrows-wheeler step. */ 151 + 152 + static int INIT get_next_block(struct bunzip_data *bd) 153 + { 154 + struct group_data *hufGroup = NULL; 155 + int *base = NULL; 156 + int *limit = NULL; 157 + int dbufCount, nextSym, dbufSize, groupCount, selector, 158 + i, j, k, t, runPos, symCount, symTotal, nSelectors, 159 + byteCount[256]; 160 + unsigned char uc, symToByte[256], mtfSymbol[256], *selectors; 161 + unsigned int *dbuf, origPtr; 162 + 163 + dbuf = bd->dbuf; 164 + dbufSize = bd->dbufSize; 165 + selectors = bd->selectors; 166 + 167 + /* Read in header signature and CRC, then validate signature. 168 + (last block signature means CRC is for whole file, return now) */ 169 + i = get_bits(bd, 24); 170 + j = get_bits(bd, 24); 171 + bd->headerCRC = get_bits(bd, 32); 172 + if ((i == 0x177245) && (j == 0x385090)) 173 + return RETVAL_LAST_BLOCK; 174 + if ((i != 0x314159) || (j != 0x265359)) 175 + return RETVAL_NOT_BZIP_DATA; 176 + /* We can add support for blockRandomised if anybody complains. 177 + There was some code for this in busybox 1.0.0-pre3, but nobody ever 178 + noticed that it didn't actually work. */ 179 + if (get_bits(bd, 1)) 180 + return RETVAL_OBSOLETE_INPUT; 181 + origPtr = get_bits(bd, 24); 182 + if (origPtr > dbufSize) 183 + return RETVAL_DATA_ERROR; 184 + /* mapping table: if some byte values are never used (encoding things 185 + like ascii text), the compression code removes the gaps to have fewer 186 + symbols to deal with, and writes a sparse bitfield indicating which 187 + values were present. We make a translation table to convert the 188 + symbols back to the corresponding bytes. 
*/ 189 + t = get_bits(bd, 16); 190 + symTotal = 0; 191 + for (i = 0; i < 16; i++) { 192 + if (t&(1 << (15-i))) { 193 + k = get_bits(bd, 16); 194 + for (j = 0; j < 16; j++) 195 + if (k&(1 << (15-j))) 196 + symToByte[symTotal++] = (16*i)+j; 197 + } 198 + } 199 + /* How many different Huffman coding groups does this block use? */ 200 + groupCount = get_bits(bd, 3); 201 + if (groupCount < 2 || groupCount > MAX_GROUPS) 202 + return RETVAL_DATA_ERROR; 203 + /* nSelectors: Every GROUP_SIZE many symbols we select a new 204 + Huffman coding group. Read in the group selector list, 205 + which is stored as MTF encoded bit runs. (MTF = Move To 206 + Front, as each value is used it's moved to the start of the 207 + list.) */ 208 + nSelectors = get_bits(bd, 15); 209 + if (!nSelectors) 210 + return RETVAL_DATA_ERROR; 211 + for (i = 0; i < groupCount; i++) 212 + mtfSymbol[i] = i; 213 + for (i = 0; i < nSelectors; i++) { 214 + /* Get next value */ 215 + for (j = 0; get_bits(bd, 1); j++) 216 + if (j >= groupCount) 217 + return RETVAL_DATA_ERROR; 218 + /* Decode MTF to get the next selector */ 219 + uc = mtfSymbol[j]; 220 + for (; j; j--) 221 + mtfSymbol[j] = mtfSymbol[j-1]; 222 + mtfSymbol[0] = selectors[i] = uc; 223 + } 224 + /* Read the Huffman coding tables for each group, which code 225 + for symTotal literal symbols, plus two run symbols (RUNA, 226 + RUNB) */ 227 + symCount = symTotal+2; 228 + for (j = 0; j < groupCount; j++) { 229 + unsigned char length[MAX_SYMBOLS], temp[MAX_HUFCODE_BITS+1]; 230 + int minLen, maxLen, pp; 231 + /* Read Huffman code lengths for each symbol. They're 232 + stored in a way similar to mtf; record a starting 233 + value for the first symbol, and an offset from the 234 + previous value for every symbol after that. 235 + (Subtracting 1 before the loop and then adding it 236 + back at the end is an optimization that makes the 237 + test inside the loop simpler: symbol length 0 238 + becomes negative, so an unsigned inequality catches 239 + it.) 
*/ 240 + t = get_bits(bd, 5)-1; 241 + for (i = 0; i < symCount; i++) { 242 + for (;;) { 243 + if (((unsigned)t) > (MAX_HUFCODE_BITS-1)) 244 + return RETVAL_DATA_ERROR; 245 + 246 + /* If first bit is 0, stop. Else 247 + second bit indicates whether to 248 + increment or decrement the value. 249 + Optimization: grab 2 bits and unget 250 + the second if the first was 0. */ 251 + 252 + k = get_bits(bd, 2); 253 + if (k < 2) { 254 + bd->inbufBitCount++; 255 + break; 256 + } 257 + /* Add one if second bit 1, else 258 + * subtract 1. Avoids if/else */ 259 + t += (((k+1)&2)-1); 260 + } 261 + /* Correct for the initial -1, to get the 262 + * final symbol length */ 263 + length[i] = t+1; 264 + } 265 + /* Find largest and smallest lengths in this group */ 266 + minLen = maxLen = length[0]; 267 + 268 + for (i = 1; i < symCount; i++) { 269 + if (length[i] > maxLen) 270 + maxLen = length[i]; 271 + else if (length[i] < minLen) 272 + minLen = length[i]; 273 + } 274 + 275 + /* Calculate permute[], base[], and limit[] tables from 276 + * length[]. 277 + * 278 + * permute[] is the lookup table for converting 279 + * Huffman coded symbols into decoded symbols. base[] 280 + * is the amount to subtract from the value of a 281 + * Huffman symbol of a given length when using 282 + * permute[]. 283 + * 284 + * limit[] indicates the largest numerical value a 285 + * symbol with a given number of bits can have. This 286 + * is how the Huffman codes can vary in length: each 287 + * code with a value > limit[length] needs another 288 + * bit. 289 + */ 290 + hufGroup = bd->groups+j; 291 + hufGroup->minLen = minLen; 292 + hufGroup->maxLen = maxLen; 293 + /* Note that minLen can't be smaller than 1, so we 294 + adjust the base and limit array pointers so we're 295 + not always wasting the first entry. We do this 296 + again when using them (during symbol decoding).*/ 297 + base = hufGroup->base-1; 298 + limit = hufGroup->limit-1; 299 + /* Calculate permute[]. 
Concurrently, initialize 300 + * temp[] and limit[]. */ 301 + pp = 0; 302 + for (i = minLen; i <= maxLen; i++) { 303 + temp[i] = limit[i] = 0; 304 + for (t = 0; t < symCount; t++) 305 + if (length[t] == i) 306 + hufGroup->permute[pp++] = t; 307 + } 308 + /* Count symbols coded for at each bit length */ 309 + for (i = 0; i < symCount; i++) 310 + temp[length[i]]++; 311 + /* Calculate limit[] (the largest symbol-coding value 312 + *at each bit length, which is (previous limit << 313 + *1)+symbols at this level), and base[] (number of 314 + *symbols to ignore at each bit length, which is limit 315 + *minus the cumulative count of symbols coded for 316 + *already). */ 317 + pp = t = 0; 318 + for (i = minLen; i < maxLen; i++) { 319 + pp += temp[i]; 320 + /* We read the largest possible symbol size 321 + and then unget bits after determining how 322 + many we need, and those extra bits could be 323 + set to anything. (They're noise from 324 + future symbols.) At each level we're 325 + really only interested in the first few 326 + bits, so here we set all the trailing 327 + to-be-ignored bits to 1 so they don't 328 + affect the value > limit[length] 329 + comparison. */ 330 + limit[i] = (pp << (maxLen - i)) - 1; 331 + pp <<= 1; 332 + base[i+1] = pp-(t += temp[i]); 333 + } 334 + limit[maxLen+1] = INT_MAX; /* Sentinel value for 335 + * reading next sym. */ 336 + limit[maxLen] = pp+temp[maxLen]-1; 337 + base[minLen] = 0; 338 + } 339 + /* We've finished reading and digesting the block header. Now 340 + read this block's Huffman coded symbols from the file and 341 + undo the Huffman coding and run length encoding, saving the 342 + result into dbuf[dbufCount++] = uc */ 343 + 344 + /* Initialize symbol occurrence counters and symbol Move To 345 + * Front table */ 346 + for (i = 0; i < 256; i++) { 347 + byteCount[i] = 0; 348 + mtfSymbol[i] = (unsigned char)i; 349 + } 350 + /* Loop through compressed symbols. 
*/ 351 + runPos = dbufCount = symCount = selector = 0; 352 + for (;;) { 353 + /* Determine which Huffman coding group to use. */ 354 + if (!(symCount--)) { 355 + symCount = GROUP_SIZE-1; 356 + if (selector >= nSelectors) 357 + return RETVAL_DATA_ERROR; 358 + hufGroup = bd->groups+selectors[selector++]; 359 + base = hufGroup->base-1; 360 + limit = hufGroup->limit-1; 361 + } 362 + /* Read next Huffman-coded symbol. */ 363 + /* Note: It is far cheaper to read maxLen bits and 364 + back up than it is to read minLen bits and then an 365 + additional bit at a time, testing as we go. 366 + Because there is a trailing last block (with file 367 + CRC), there is no danger of the overread causing an 368 + unexpected EOF for a valid compressed file. As a 369 + further optimization, we do the read inline 370 + (falling back to a call to get_bits if the buffer 371 + runs dry). The following (up to got_huff_bits:) is 372 + equivalent to j = get_bits(bd, hufGroup->maxLen); 373 + */ 374 + while (bd->inbufBitCount < hufGroup->maxLen) { 375 + if (bd->inbufPos == bd->inbufCount) { 376 + j = get_bits(bd, hufGroup->maxLen); 377 + goto got_huff_bits; 378 + } 379 + bd->inbufBits = 380 + (bd->inbufBits << 8)|bd->inbuf[bd->inbufPos++]; 381 + bd->inbufBitCount += 8; 382 + }; 383 + bd->inbufBitCount -= hufGroup->maxLen; 384 + j = (bd->inbufBits >> bd->inbufBitCount)& 385 + ((1 << hufGroup->maxLen)-1); 386 + got_huff_bits: 387 + /* Figure out how many bits are in next symbol and 388 + * unget extras */ 389 + i = hufGroup->minLen; 390 + while (j > limit[i]) 391 + ++i; 392 + bd->inbufBitCount += (hufGroup->maxLen - i); 393 + /* Huffman decode value to get nextSym (with bounds checking) */ 394 + if ((i > hufGroup->maxLen) 395 + || (((unsigned)(j = (j>>(hufGroup->maxLen-i))-base[i])) 396 + >= MAX_SYMBOLS)) 397 + return RETVAL_DATA_ERROR; 398 + nextSym = hufGroup->permute[j]; 399 + /* We have now decoded the symbol, which indicates 400 + either a new literal byte, or a repeated run of the 401 + 
most recent literal byte. First, check if nextSym 402 + indicates a repeated run, and if so loop collecting 403 + how many times to repeat the last literal. */ 404 + if (((unsigned)nextSym) <= SYMBOL_RUNB) { /* RUNA or RUNB */ 405 + /* If this is the start of a new run, zero out 406 + * counter */ 407 + if (!runPos) { 408 + runPos = 1; 409 + t = 0; 410 + } 411 + /* Neat trick that saves 1 symbol: instead of 412 + or-ing 0 or 1 at each bit position, add 1 413 + or 2 instead. For example, 1011 is 1 << 0 414 + + 1 << 1 + 2 << 2. 1010 is 2 << 0 + 2 << 1 415 + + 1 << 2. You can make any bit pattern 416 + that way using 1 less symbol than the basic 417 + or 0/1 method (except all bits 0, which 418 + would use no symbols, but a run of length 0 419 + doesn't mean anything in this context). 420 + Thus space is saved. */ 421 + t += (runPos << nextSym); 422 + /* +runPos if RUNA; +2*runPos if RUNB */ 423 + 424 + runPos <<= 1; 425 + continue; 426 + } 427 + /* When we hit the first non-run symbol after a run, 428 + we now know how many times to repeat the last 429 + literal, so append that many copies to our buffer 430 + of decoded symbols (dbuf) now. (The last literal 431 + used is the one at the head of the mtfSymbol 432 + array.) */ 433 + if (runPos) { 434 + runPos = 0; 435 + if (dbufCount+t >= dbufSize) 436 + return RETVAL_DATA_ERROR; 437 + 438 + uc = symToByte[mtfSymbol[0]]; 439 + byteCount[uc] += t; 440 + while (t--) 441 + dbuf[dbufCount++] = uc; 442 + } 443 + /* Is this the terminating symbol? */ 444 + if (nextSym > symTotal) 445 + break; 446 + /* At this point, nextSym indicates a new literal 447 + character. Subtract one to get the position in the 448 + MTF array at which this literal is currently to be 449 + found. (Note that the result can't be -1 or 0, 450 + because 0 and 1 are RUNA and RUNB. But another 451 + instance of the first symbol in the mtf array, 452 + position 0, would have been handled as part of a 453 + run above. 
Therefore 1 unused mtf position minus 2 454 + non-literal nextSym values equals -1.) */ 455 + if (dbufCount >= dbufSize) 456 + return RETVAL_DATA_ERROR; 457 + i = nextSym - 1; 458 + uc = mtfSymbol[i]; 459 + /* Adjust the MTF array. Since we typically expect to 460 + *move only a small number of symbols, and are bound 461 + *by 256 in any case, using memmove here would 462 + *typically be bigger and slower due to function call 463 + *overhead and other assorted setup costs. */ 464 + do { 465 + mtfSymbol[i] = mtfSymbol[i-1]; 466 + } while (--i); 467 + mtfSymbol[0] = uc; 468 + uc = symToByte[uc]; 469 + /* We have our literal byte. Save it into dbuf. */ 470 + byteCount[uc]++; 471 + dbuf[dbufCount++] = (unsigned int)uc; 472 + } 473 + /* At this point, we've read all the Huffman-coded symbols 474 + (and repeated runs) for this block from the input stream, 475 + and decoded them into the intermediate buffer. There are 476 + dbufCount many decoded bytes in dbuf[]. Now undo the 477 + Burrows-Wheeler transform on dbuf. See 478 + http://dogma.net/markn/articles/bwt/bwt.htm 479 + */ 480 + /* Turn byteCount into cumulative occurrence counts of 0 to n-1. */ 481 + j = 0; 482 + for (i = 0; i < 256; i++) { 483 + k = j+byteCount[i]; 484 + byteCount[i] = j; 485 + j = k; 486 + } 487 + /* Figure out what order dbuf would be in if we sorted it. */ 488 + for (i = 0; i < dbufCount; i++) { 489 + uc = (unsigned char)(dbuf[i] & 0xff); 490 + dbuf[byteCount[uc]] |= (i << 8); 491 + byteCount[uc]++; 492 + } 493 + /* Decode first byte by hand to initialize "previous" byte. 494 + Note that it doesn't get output, and if the first three 495 + characters are identical it doesn't qualify as a run (hence 496 + writeRunCountdown = 5). 
*/ 497 + if (dbufCount) { 498 + if (origPtr >= dbufCount) 499 + return RETVAL_DATA_ERROR; 500 + bd->writePos = dbuf[origPtr]; 501 + bd->writeCurrent = (unsigned char)(bd->writePos&0xff); 502 + bd->writePos >>= 8; 503 + bd->writeRunCountdown = 5; 504 + } 505 + bd->writeCount = dbufCount; 506 + 507 + return RETVAL_OK; 508 + } 509 + 510 + /* Undo burrows-wheeler transform on intermediate buffer to produce output. 511 + If start_bunzip was initialized with out_fd =-1, then up to len bytes of 512 + data are written to outbuf. Return value is number of bytes written or 513 + error (all errors are negative numbers). If out_fd!=-1, outbuf and len 514 + are ignored, data is written to out_fd and return is RETVAL_OK or error. 515 + */ 516 + 517 + static int INIT read_bunzip(struct bunzip_data *bd, char *outbuf, int len) 518 + { 519 + const unsigned int *dbuf; 520 + int pos, xcurrent, previous, gotcount; 521 + 522 + /* If last read was short due to end of file, return last block now */ 523 + if (bd->writeCount < 0) 524 + return bd->writeCount; 525 + 526 + gotcount = 0; 527 + dbuf = bd->dbuf; 528 + pos = bd->writePos; 529 + xcurrent = bd->writeCurrent; 530 + 531 + /* We will always have pending decoded data to write into the output 532 + buffer unless this is the very first call (in which case we haven't 533 + Huffman-decoded a block into the intermediate buffer yet). 
*/ 534 + 535 + if (bd->writeCopies) { 536 + /* Inside the loop, writeCopies means extra copies (beyond 1) */ 537 + --bd->writeCopies; 538 + /* Loop outputting bytes */ 539 + for (;;) { 540 + /* If the output buffer is full, snapshot 541 + * state and return */ 542 + if (gotcount >= len) { 543 + bd->writePos = pos; 544 + bd->writeCurrent = xcurrent; 545 + bd->writeCopies++; 546 + return len; 547 + } 548 + /* Write next byte into output buffer, updating CRC */ 549 + outbuf[gotcount++] = xcurrent; 550 + bd->writeCRC = (((bd->writeCRC) << 8) 551 + ^bd->crc32Table[((bd->writeCRC) >> 24) 552 + ^xcurrent]); 553 + /* Loop now if we're outputting multiple 554 + * copies of this byte */ 555 + if (bd->writeCopies) { 556 + --bd->writeCopies; 557 + continue; 558 + } 559 + decode_next_byte: 560 + if (!bd->writeCount--) 561 + break; 562 + /* Follow sequence vector to undo 563 + * Burrows-Wheeler transform */ 564 + previous = xcurrent; 565 + pos = dbuf[pos]; 566 + xcurrent = pos&0xff; 567 + pos >>= 8; 568 + /* After 3 consecutive copies of the same 569 + byte, the 4th is a repeat count. We count 570 + down from 4 instead *of counting up because 571 + testing for non-zero is faster */ 572 + if (--bd->writeRunCountdown) { 573 + if (xcurrent != previous) 574 + bd->writeRunCountdown = 4; 575 + } else { 576 + /* We have a repeated run, this byte 577 + * indicates the count */ 578 + bd->writeCopies = xcurrent; 579 + xcurrent = previous; 580 + bd->writeRunCountdown = 5; 581 + /* Sometimes there are just 3 bytes 582 + * (run length 0) */ 583 + if (!bd->writeCopies) 584 + goto decode_next_byte; 585 + /* Subtract the 1 copy we'd output 586 + * anyway to get extras */ 587 + --bd->writeCopies; 588 + } 589 + } 590 + /* Decompression of this block completed successfully */ 591 + bd->writeCRC = ~bd->writeCRC; 592 + bd->totalCRC = ((bd->totalCRC << 1) | 593 + (bd->totalCRC >> 31)) ^ bd->writeCRC; 594 + /* If this block had a CRC error, force file level CRC error. 
*/ 595 + if (bd->writeCRC != bd->headerCRC) { 596 + bd->totalCRC = bd->headerCRC+1; 597 + return RETVAL_LAST_BLOCK; 598 + } 599 + } 600 + 601 + /* Refill the intermediate buffer by Huffman-decoding next 602 + * block of input */ 603 + /* (previous is just a convenient unused temp variable here) */ 604 + previous = get_next_block(bd); 605 + if (previous) { 606 + bd->writeCount = previous; 607 + return (previous != RETVAL_LAST_BLOCK) ? previous : gotcount; 608 + } 609 + bd->writeCRC = 0xffffffffUL; 610 + pos = bd->writePos; 611 + xcurrent = bd->writeCurrent; 612 + goto decode_next_byte; 613 + } 614 + 615 + static int INIT nofill(void *buf, unsigned int len) 616 + { 617 + return -1; 618 + } 619 + 620 + /* Allocate the structure, read file header. If in_fd ==-1, inbuf must contain 621 + a complete bunzip file (len bytes long). If in_fd!=-1, inbuf and len are 622 + ignored, and data is read from file handle into temporary buffer. */ 623 + static int INIT start_bunzip(struct bunzip_data **bdp, void *inbuf, int len, 624 + int (*fill)(void*, unsigned int)) 625 + { 626 + struct bunzip_data *bd; 627 + unsigned int i, j, c; 628 + const unsigned int BZh0 = 629 + (((unsigned int)'B') << 24)+(((unsigned int)'Z') << 16) 630 + +(((unsigned int)'h') << 8)+(unsigned int)'0'; 631 + 632 + /* Figure out how much data to allocate */ 633 + i = sizeof(struct bunzip_data); 634 + 635 + /* Allocate bunzip_data. Most fields initialize to zero. */ 636 + bd = *bdp = malloc(i); 637 + memset(bd, 0, sizeof(struct bunzip_data)); 638 + /* Setup input buffer */ 639 + bd->inbuf = inbuf; 640 + bd->inbufCount = len; 641 + if (fill != NULL) 642 + bd->fill = fill; 643 + else 644 + bd->fill = nofill; 645 + 646 + /* Init the CRC32 table (big endian) */ 647 + for (i = 0; i < 256; i++) { 648 + c = i << 24; 649 + for (j = 8; j; j--) 650 + c = c&0x80000000 ? (c << 1)^0x04c11db7 : (c << 1); 651 + bd->crc32Table[i] = c; 652 + } 653 + 654 + /* Ensure that file starts with "BZh['1'-'9']." 
*/ 655 + i = get_bits(bd, 32); 656 + if (((unsigned int)(i-BZh0-1)) >= 9) 657 + return RETVAL_NOT_BZIP_DATA; 658 + 659 + /* Fourth byte (ascii '1'-'9') indicates block size in units of 100k of 660 + uncompressed data. Allocate intermediate buffer for block. */ 661 + bd->dbufSize = 100000*(i-BZh0); 662 + 663 + bd->dbuf = large_malloc(bd->dbufSize * sizeof(int)); 664 + return RETVAL_OK; 665 + } 666 + 667 + /* Example usage: decompress src_fd to dst_fd. (Stops at end of bzip2 data, 668 + not end of file.) */ 669 + STATIC int INIT bunzip2(unsigned char *buf, int len, 670 + int(*fill)(void*, unsigned int), 671 + int(*flush)(void*, unsigned int), 672 + unsigned char *outbuf, 673 + int *pos, 674 + void(*error_fn)(char *x)) 675 + { 676 + struct bunzip_data *bd; 677 + int i = -1; 678 + unsigned char *inbuf; 679 + 680 + set_error_fn(error_fn); 681 + if (flush) 682 + outbuf = malloc(BZIP2_IOBUF_SIZE); 683 + else 684 + len -= 4; /* Uncompressed size hack active in pre-boot 685 + environment */ 686 + if (!outbuf) { 687 + error("Could not allocate output buffer"); 688 + return -1; 689 + } 690 + if (buf) 691 + inbuf = buf; 692 + else 693 + inbuf = malloc(BZIP2_IOBUF_SIZE); 694 + if (!inbuf) { 695 + error("Could not allocate input buffer"); 696 + goto exit_0; 697 + } 698 + i = start_bunzip(&bd, inbuf, len, fill); 699 + if (!i) { 700 + for (;;) { 701 + i = read_bunzip(bd, outbuf, BZIP2_IOBUF_SIZE); 702 + if (i <= 0) 703 + break; 704 + if (!flush) 705 + outbuf += i; 706 + else 707 + if (i != flush(outbuf, i)) { 708 + i = RETVAL_UNEXPECTED_OUTPUT_EOF; 709 + break; 710 + } 711 + } 712 + } 713 + /* Check CRC and release memory */ 714 + if (i == RETVAL_LAST_BLOCK) { 715 + if (bd->headerCRC != bd->totalCRC) 716 + error("Data integrity error when decompressing."); 717 + else 718 + i = RETVAL_OK; 719 + } else if (i == RETVAL_UNEXPECTED_OUTPUT_EOF) { 720 + error("Compressed file ends unexpectedly"); 721 + } 722 + if (bd->dbuf) 723 + large_free(bd->dbuf); 724 + if (pos) 725 + *pos = 
bd->inbufPos; 726 + free(bd); 727 + if (!buf) 728 + free(inbuf); 729 + exit_0: 730 + if (flush) 731 + free(outbuf); 732 + return i; 733 + } 734 + 735 + #define decompress bunzip2
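The CRC computed in start_bunzip()/read_bunzip() is the unreflected big-endian CRC-32 (polynomial 0x04c11db7, all-ones initial value, final complement), i.e. the variant commonly catalogued as CRC-32/BZIP2. Extracted as a stand-alone routine so it can be checked against published test vectors:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

static uint32_t crc_table[256];

/* Build the table MSB-first, exactly as start_bunzip() does. */
static void make_crc_table(void)
{
	uint32_t c;
	int i, j;

	for (i = 0; i < 256; i++) {
		c = (uint32_t)i << 24;
		for (j = 8; j; j--)
			c = (c & 0x80000000u) ? (c << 1) ^ 0x04c11db7 : (c << 1);
		crc_table[i] = c;
	}
}

/* Byte update matches read_bunzip():
 *   crc = (crc << 8) ^ table[(crc >> 24) ^ byte]
 * with inverted init and a final complement. */
static uint32_t crc32_bzip2(const unsigned char *buf, size_t len)
{
	uint32_t crc = 0xffffffffu;
	size_t i;

	make_crc_table();
	for (i = 0; i < len; i++)
		crc = (crc << 8) ^ crc_table[(crc >> 24) ^ buf[i]];
	return ~crc;
}
```

For the standard nine-byte test message "123456789" this variant's published check value is 0xfc891918, which distinguishes it from the reflected CRC-32 used by gzip and zlib (check value 0xcbf43926).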
+167
lib/decompress_inflate.c
··· 1 + #ifdef STATIC 2 + /* Pre-boot environment: included */ 3 + 4 + /* prevent inclusion of _LINUX_KERNEL_H in pre-boot environment: lots 5 + * of errors about console_printk etc... on ARM */ 6 + #define _LINUX_KERNEL_H 7 + 8 + #include "zlib_inflate/inftrees.c" 9 + #include "zlib_inflate/inffast.c" 10 + #include "zlib_inflate/inflate.c" 11 + 12 + #else /* STATIC */ 13 + /* initramfs et al: linked */ 14 + 15 + #include <linux/zutil.h> 16 + 17 + #include "zlib_inflate/inftrees.h" 18 + #include "zlib_inflate/inffast.h" 19 + #include "zlib_inflate/inflate.h" 20 + 21 + #include "zlib_inflate/infutil.h" 22 + 23 + #endif /* STATIC */ 24 + 25 + #include <linux/decompress/mm.h> 26 + 27 + #define INBUF_LEN (16*1024) 28 + 29 + /* Included from initramfs et al code */ 30 + STATIC int INIT gunzip(unsigned char *buf, int len, 31 + int(*fill)(void*, unsigned int), 32 + int(*flush)(void*, unsigned int), 33 + unsigned char *out_buf, 34 + int *pos, 35 + void(*error_fn)(char *x)) { 36 + u8 *zbuf; 37 + struct z_stream_s *strm; 38 + int rc; 39 + size_t out_len; 40 + 41 + set_error_fn(error_fn); 42 + rc = -1; 43 + if (flush) { 44 + out_len = 0x8000; /* 32 K */ 45 + out_buf = malloc(out_len); 46 + } else { 47 + out_len = 0x7fffffff; /* no limit */ 48 + } 49 + if (!out_buf) { 50 + error("Out of memory while allocating output buffer"); 51 + goto gunzip_nomem1; 52 + } 53 + 54 + if (buf) 55 + zbuf = buf; 56 + else { 57 + zbuf = malloc(INBUF_LEN); 58 + len = 0; 59 + } 60 + if (!zbuf) { 61 + error("Out of memory while allocating input buffer"); 62 + goto gunzip_nomem2; 63 + } 64 + 65 + strm = malloc(sizeof(*strm)); 66 + if (strm == NULL) { 67 + error("Out of memory while allocating z_stream"); 68 + goto gunzip_nomem3; 69 + } 70 + 71 + strm->workspace = malloc(flush ? 
zlib_inflate_workspacesize() : 72 + sizeof(struct inflate_state)); 73 + if (strm->workspace == NULL) { 74 + error("Out of memory while allocating workspace"); 75 + goto gunzip_nomem4; 76 + } 77 + 78 + if (len == 0) 79 + len = fill(zbuf, INBUF_LEN); 80 + 81 + /* verify the gzip header */ 82 + if (len < 10 || 83 + zbuf[0] != 0x1f || zbuf[1] != 0x8b || zbuf[2] != 0x08) { 84 + if (pos) 85 + *pos = 0; 86 + error("Not a gzip file"); 87 + goto gunzip_5; 88 + } 89 + 90 + /* skip over gzip header (1f,8b,08... 10 bytes total + 91 + * possible asciz filename) 92 + */ 93 + strm->next_in = zbuf + 10; 94 + /* skip over asciz filename */ 95 + if (zbuf[3] & 0x8) { 96 + while (strm->next_in[0]) 97 + strm->next_in++; 98 + strm->next_in++; 99 + } 100 + strm->avail_in = len - (strm->next_in - zbuf); 101 + 102 + strm->next_out = out_buf; 103 + strm->avail_out = out_len; 104 + 105 + rc = zlib_inflateInit2(strm, -MAX_WBITS); 106 + 107 + if (!flush) { 108 + WS(strm)->inflate_state.wsize = 0; 109 + WS(strm)->inflate_state.window = NULL; 110 + } 111 + 112 + while (rc == Z_OK) { 113 + if (strm->avail_in == 0) { 114 + /* TODO: handle case where both pos and fill are set */ 115 + len = fill(zbuf, INBUF_LEN); 116 + if (len < 0) { 117 + rc = -1; 118 + error("read error"); 119 + break; 120 + } 121 + strm->next_in = zbuf; 122 + strm->avail_in = len; 123 + } 124 + rc = zlib_inflate(strm, 0); 125 + 126 + /* Write any data generated */ 127 + if (flush && strm->next_out > out_buf) { 128 + int l = strm->next_out - out_buf; 129 + if (l != flush(out_buf, l)) { 130 + rc = -1; 131 + error("write error"); 132 + break; 133 + } 134 + strm->next_out = out_buf; 135 + strm->avail_out = out_len; 136 + } 137 + 138 + /* after Z_FINISH, only Z_STREAM_END is "we unpacked it all" */ 139 + if (rc == Z_STREAM_END) { 140 + rc = 0; 141 + break; 142 + } else if (rc != Z_OK) { 143 + error("uncompression error"); 144 + rc = -1; 145 + } 146 + } 147 + 148 + zlib_inflateEnd(strm); 149 + if (pos) 150 + /* add + 8 to skip over 
trailer */ 151 + *pos = strm->next_in - zbuf+8; 152 + 153 + gunzip_5: 154 + free(strm->workspace); 155 + gunzip_nomem4: 156 + free(strm); 157 + gunzip_nomem3: 158 + if (!buf) 159 + free(zbuf); 160 + gunzip_nomem2: 161 + if (flush) 162 + free(out_buf); 163 + gunzip_nomem1: 164 + return rc; /* returns Z_OK (0) if successful */ 165 + } 166 + 167 + #define decompress gunzip
+647
lib/decompress_unlzma.c
··· 1 + /* Lzma decompressor for Linux kernel. Shamelessly snarfed 2 + *from busybox 1.1.1 3 + * 4 + *Linux kernel adaptation 5 + *Copyright (C) 2006 Alain < alain@knaff.lu > 6 + * 7 + *Based on small lzma deflate implementation/Small range coder 8 + *implementation for lzma. 9 + *Copyright (C) 2006 Aurelien Jacobs < aurel@gnuage.org > 10 + * 11 + *Based on LzmaDecode.c from the LZMA SDK 4.22 (http://www.7-zip.org/) 12 + *Copyright (C) 1999-2005 Igor Pavlov 13 + * 14 + *Copyrights of the parts, see headers below. 15 + * 16 + * 17 + *This program is free software; you can redistribute it and/or 18 + *modify it under the terms of the GNU Lesser General Public 19 + *License as published by the Free Software Foundation; either 20 + *version 2.1 of the License, or (at your option) any later version. 21 + * 22 + *This program is distributed in the hope that it will be useful, 23 + *but WITHOUT ANY WARRANTY; without even the implied warranty of 24 + *MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 25 + *Lesser General Public License for more details. 26 + * 27 + *You should have received a copy of the GNU Lesser General Public 28 + *License along with this library; if not, write to the Free Software 29 + *Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA 30 + */ 31 + 32 + #ifndef STATIC 33 + #include <linux/decompress/unlzma.h> 34 + #endif /* STATIC */ 35 + 36 + #include <linux/decompress/mm.h> 37 + 38 + #define MIN(a, b) (((a) < (b)) ? (a) : (b)) 39 + 40 + static long long INIT read_int(unsigned char *ptr, int size) 41 + { 42 + int i; 43 + long long ret = 0; 44 + 45 + for (i = 0; i < size; i++) 46 + ret = (ret << 8) | ptr[size-i-1]; 47 + return ret; 48 + } 49 + 50 + #define ENDIAN_CONVERT(x) \ 51 + x = (typeof(x))read_int((unsigned char *)&x, sizeof(x)) 52 + 53 + 54 + /* Small range coder implementation for lzma. 
55 + *Copyright (C) 2006 Aurelien Jacobs < aurel@gnuage.org > 56 + * 57 + *Based on LzmaDecode.c from the LZMA SDK 4.22 (http://www.7-zip.org/) 58 + *Copyright (c) 1999-2005 Igor Pavlov 59 + */ 60 + 61 + #include <linux/compiler.h> 62 + 63 + #define LZMA_IOBUF_SIZE 0x10000 64 + 65 + struct rc { 66 + int (*fill)(void*, unsigned int); 67 + uint8_t *ptr; 68 + uint8_t *buffer; 69 + uint8_t *buffer_end; 70 + int buffer_size; 71 + uint32_t code; 72 + uint32_t range; 73 + uint32_t bound; 74 + }; 75 + 76 + 77 + #define RC_TOP_BITS 24 78 + #define RC_MOVE_BITS 5 79 + #define RC_MODEL_TOTAL_BITS 11 80 + 81 + 82 + /* Called twice: once at startup and once in rc_normalize() */ 83 + static void INIT rc_read(struct rc *rc) 84 + { 85 + rc->buffer_size = rc->fill((char *)rc->buffer, LZMA_IOBUF_SIZE); 86 + if (rc->buffer_size <= 0) 87 + error("unexpected EOF"); 88 + rc->ptr = rc->buffer; 89 + rc->buffer_end = rc->buffer + rc->buffer_size; 90 + } 91 + 92 + /* Called once */ 93 + static inline void INIT rc_init(struct rc *rc, 94 + int (*fill)(void*, unsigned int), 95 + char *buffer, int buffer_size) 96 + { 97 + rc->fill = fill; 98 + rc->buffer = (uint8_t *)buffer; 99 + rc->buffer_size = buffer_size; 100 + rc->buffer_end = rc->buffer + rc->buffer_size; 101 + rc->ptr = rc->buffer; 102 + 103 + rc->code = 0; 104 + rc->range = 0xFFFFFFFF; 105 + } 106 + 107 + static inline void INIT rc_init_code(struct rc *rc) 108 + { 109 + int i; 110 + 111 + for (i = 0; i < 5; i++) { 112 + if (rc->ptr >= rc->buffer_end) 113 + rc_read(rc); 114 + rc->code = (rc->code << 8) | *rc->ptr++; 115 + } 116 + } 117 + 118 + 119 + /* Called once. 
TODO: bb_maybe_free() */ 120 + static inline void INIT rc_free(struct rc *rc) 121 + { 122 + free(rc->buffer); 123 + } 124 + 125 + /* Called twice, but one callsite is in inline'd rc_is_bit_0_helper() */ 126 + static void INIT rc_do_normalize(struct rc *rc) 127 + { 128 + if (rc->ptr >= rc->buffer_end) 129 + rc_read(rc); 130 + rc->range <<= 8; 131 + rc->code = (rc->code << 8) | *rc->ptr++; 132 + } 133 + static inline void INIT rc_normalize(struct rc *rc) 134 + { 135 + if (rc->range < (1 << RC_TOP_BITS)) 136 + rc_do_normalize(rc); 137 + } 138 + 139 + /* Called 9 times */ 140 + /* Why rc_is_bit_0_helper exists? 141 + *Because we want to always expose (rc->code < rc->bound) to optimizer 142 + */ 143 + static inline uint32_t INIT rc_is_bit_0_helper(struct rc *rc, uint16_t *p) 144 + { 145 + rc_normalize(rc); 146 + rc->bound = *p * (rc->range >> RC_MODEL_TOTAL_BITS); 147 + return rc->bound; 148 + } 149 + static inline int INIT rc_is_bit_0(struct rc *rc, uint16_t *p) 150 + { 151 + uint32_t t = rc_is_bit_0_helper(rc, p); 152 + return rc->code < t; 153 + } 154 + 155 + /* Called ~10 times, but very small, thus inlined */ 156 + static inline void INIT rc_update_bit_0(struct rc *rc, uint16_t *p) 157 + { 158 + rc->range = rc->bound; 159 + *p += ((1 << RC_MODEL_TOTAL_BITS) - *p) >> RC_MOVE_BITS; 160 + } 161 + static inline void rc_update_bit_1(struct rc *rc, uint16_t *p) 162 + { 163 + rc->range -= rc->bound; 164 + rc->code -= rc->bound; 165 + *p -= *p >> RC_MOVE_BITS; 166 + } 167 + 168 + /* Called 4 times in unlzma loop */ 169 + static int INIT rc_get_bit(struct rc *rc, uint16_t *p, int *symbol) 170 + { 171 + if (rc_is_bit_0(rc, p)) { 172 + rc_update_bit_0(rc, p); 173 + *symbol *= 2; 174 + return 0; 175 + } else { 176 + rc_update_bit_1(rc, p); 177 + *symbol = *symbol * 2 + 1; 178 + return 1; 179 + } 180 + } 181 + 182 + /* Called once */ 183 + static inline int INIT rc_direct_bit(struct rc *rc) 184 + { 185 + rc_normalize(rc); 186 + rc->range >>= 1; 187 + if (rc->code >= rc->range) 
{ 188 + rc->code -= rc->range; 189 + return 1; 190 + } 191 + return 0; 192 + } 193 + 194 + /* Called twice */ 195 + static inline void INIT 196 + rc_bit_tree_decode(struct rc *rc, uint16_t *p, int num_levels, int *symbol) 197 + { 198 + int i = num_levels; 199 + 200 + *symbol = 1; 201 + while (i--) 202 + rc_get_bit(rc, p + *symbol, symbol); 203 + *symbol -= 1 << num_levels; 204 + } 205 + 206 + 207 + /* 208 + * Small lzma deflate implementation. 209 + * Copyright (C) 2006 Aurelien Jacobs < aurel@gnuage.org > 210 + * 211 + * Based on LzmaDecode.c from the LZMA SDK 4.22 (http://www.7-zip.org/) 212 + * Copyright (C) 1999-2005 Igor Pavlov 213 + */ 214 + 215 + 216 + struct lzma_header { 217 + uint8_t pos; 218 + uint32_t dict_size; 219 + uint64_t dst_size; 220 + } __attribute__ ((packed)) ; 221 + 222 + 223 + #define LZMA_BASE_SIZE 1846 224 + #define LZMA_LIT_SIZE 768 225 + 226 + #define LZMA_NUM_POS_BITS_MAX 4 227 + 228 + #define LZMA_LEN_NUM_LOW_BITS 3 229 + #define LZMA_LEN_NUM_MID_BITS 3 230 + #define LZMA_LEN_NUM_HIGH_BITS 8 231 + 232 + #define LZMA_LEN_CHOICE 0 233 + #define LZMA_LEN_CHOICE_2 (LZMA_LEN_CHOICE + 1) 234 + #define LZMA_LEN_LOW (LZMA_LEN_CHOICE_2 + 1) 235 + #define LZMA_LEN_MID (LZMA_LEN_LOW \ 236 + + (1 << (LZMA_NUM_POS_BITS_MAX + LZMA_LEN_NUM_LOW_BITS))) 237 + #define LZMA_LEN_HIGH (LZMA_LEN_MID \ 238 + +(1 << (LZMA_NUM_POS_BITS_MAX + LZMA_LEN_NUM_MID_BITS))) 239 + #define LZMA_NUM_LEN_PROBS (LZMA_LEN_HIGH + (1 << LZMA_LEN_NUM_HIGH_BITS)) 240 + 241 + #define LZMA_NUM_STATES 12 242 + #define LZMA_NUM_LIT_STATES 7 243 + 244 + #define LZMA_START_POS_MODEL_INDEX 4 245 + #define LZMA_END_POS_MODEL_INDEX 14 246 + #define LZMA_NUM_FULL_DISTANCES (1 << (LZMA_END_POS_MODEL_INDEX >> 1)) 247 + 248 + #define LZMA_NUM_POS_SLOT_BITS 6 249 + #define LZMA_NUM_LEN_TO_POS_STATES 4 250 + 251 + #define LZMA_NUM_ALIGN_BITS 4 252 + 253 + #define LZMA_MATCH_MIN_LEN 2 254 + 255 + #define LZMA_IS_MATCH 0 256 + #define LZMA_IS_REP (LZMA_IS_MATCH + (LZMA_NUM_STATES << 
LZMA_NUM_POS_BITS_MAX)) 257 + #define LZMA_IS_REP_G0 (LZMA_IS_REP + LZMA_NUM_STATES) 258 + #define LZMA_IS_REP_G1 (LZMA_IS_REP_G0 + LZMA_NUM_STATES) 259 + #define LZMA_IS_REP_G2 (LZMA_IS_REP_G1 + LZMA_NUM_STATES) 260 + #define LZMA_IS_REP_0_LONG (LZMA_IS_REP_G2 + LZMA_NUM_STATES) 261 + #define LZMA_POS_SLOT (LZMA_IS_REP_0_LONG \ 262 + + (LZMA_NUM_STATES << LZMA_NUM_POS_BITS_MAX)) 263 + #define LZMA_SPEC_POS (LZMA_POS_SLOT \ 264 + +(LZMA_NUM_LEN_TO_POS_STATES << LZMA_NUM_POS_SLOT_BITS)) 265 + #define LZMA_ALIGN (LZMA_SPEC_POS \ 266 + + LZMA_NUM_FULL_DISTANCES - LZMA_END_POS_MODEL_INDEX) 267 + #define LZMA_LEN_CODER (LZMA_ALIGN + (1 << LZMA_NUM_ALIGN_BITS)) 268 + #define LZMA_REP_LEN_CODER (LZMA_LEN_CODER + LZMA_NUM_LEN_PROBS) 269 + #define LZMA_LITERAL (LZMA_REP_LEN_CODER + LZMA_NUM_LEN_PROBS) 270 + 271 + 272 + struct writer { 273 + uint8_t *buffer; 274 + uint8_t previous_byte; 275 + size_t buffer_pos; 276 + int bufsize; 277 + size_t global_pos; 278 + int(*flush)(void*, unsigned int); 279 + struct lzma_header *header; 280 + }; 281 + 282 + struct cstate { 283 + int state; 284 + uint32_t rep0, rep1, rep2, rep3; 285 + }; 286 + 287 + static inline size_t INIT get_pos(struct writer *wr) 288 + { 289 + return 290 + wr->global_pos + wr->buffer_pos; 291 + } 292 + 293 + static inline uint8_t INIT peek_old_byte(struct writer *wr, 294 + uint32_t offs) 295 + { 296 + if (!wr->flush) { 297 + int32_t pos; 298 + while (offs > wr->header->dict_size) 299 + offs -= wr->header->dict_size; 300 + pos = wr->buffer_pos - offs; 301 + return wr->buffer[pos]; 302 + } else { 303 + uint32_t pos = wr->buffer_pos - offs; 304 + while (pos >= wr->header->dict_size) 305 + pos += wr->header->dict_size; 306 + return wr->buffer[pos]; 307 + } 308 + 309 + } 310 + 311 + static inline void INIT write_byte(struct writer *wr, uint8_t byte) 312 + { 313 + wr->buffer[wr->buffer_pos++] = wr->previous_byte = byte; 314 + if (wr->flush && wr->buffer_pos == wr->header->dict_size) { 315 + wr->buffer_pos = 0; 316 + 
wr->global_pos += wr->header->dict_size; 317 + wr->flush((char *)wr->buffer, wr->header->dict_size); 318 + } 319 + } 320 + 321 + 322 + static inline void INIT copy_byte(struct writer *wr, uint32_t offs) 323 + { 324 + write_byte(wr, peek_old_byte(wr, offs)); 325 + } 326 + 327 + static inline void INIT copy_bytes(struct writer *wr, 328 + uint32_t rep0, int len) 329 + { 330 + do { 331 + copy_byte(wr, rep0); 332 + len--; 333 + } while (len != 0 && wr->buffer_pos < wr->header->dst_size); 334 + } 335 + 336 + static inline void INIT process_bit0(struct writer *wr, struct rc *rc, 337 + struct cstate *cst, uint16_t *p, 338 + int pos_state, uint16_t *prob, 339 + int lc, uint32_t literal_pos_mask) { 340 + int mi = 1; 341 + rc_update_bit_0(rc, prob); 342 + prob = (p + LZMA_LITERAL + 343 + (LZMA_LIT_SIZE 344 + * (((get_pos(wr) & literal_pos_mask) << lc) 345 + + (wr->previous_byte >> (8 - lc)))) 346 + ); 347 + 348 + if (cst->state >= LZMA_NUM_LIT_STATES) { 349 + int match_byte = peek_old_byte(wr, cst->rep0); 350 + do { 351 + int bit; 352 + uint16_t *prob_lit; 353 + 354 + match_byte <<= 1; 355 + bit = match_byte & 0x100; 356 + prob_lit = prob + 0x100 + bit + mi; 357 + if (rc_get_bit(rc, prob_lit, &mi)) { 358 + if (!bit) 359 + break; 360 + } else { 361 + if (bit) 362 + break; 363 + } 364 + } while (mi < 0x100); 365 + } 366 + while (mi < 0x100) { 367 + uint16_t *prob_lit = prob + mi; 368 + rc_get_bit(rc, prob_lit, &mi); 369 + } 370 + write_byte(wr, mi); 371 + if (cst->state < 4) 372 + cst->state = 0; 373 + else if (cst->state < 10) 374 + cst->state -= 3; 375 + else 376 + cst->state -= 6; 377 + } 378 + 379 + static inline void INIT process_bit1(struct writer *wr, struct rc *rc, 380 + struct cstate *cst, uint16_t *p, 381 + int pos_state, uint16_t *prob) { 382 + int offset; 383 + uint16_t *prob_len; 384 + int num_bits; 385 + int len; 386 + 387 + rc_update_bit_1(rc, prob); 388 + prob = p + LZMA_IS_REP + cst->state; 389 + if (rc_is_bit_0(rc, prob)) { 390 + rc_update_bit_0(rc, prob); 391 
+ cst->rep3 = cst->rep2; 392 + cst->rep2 = cst->rep1; 393 + cst->rep1 = cst->rep0; 394 + cst->state = cst->state < LZMA_NUM_LIT_STATES ? 0 : 3; 395 + prob = p + LZMA_LEN_CODER; 396 + } else { 397 + rc_update_bit_1(rc, prob); 398 + prob = p + LZMA_IS_REP_G0 + cst->state; 399 + if (rc_is_bit_0(rc, prob)) { 400 + rc_update_bit_0(rc, prob); 401 + prob = (p + LZMA_IS_REP_0_LONG 402 + + (cst->state << 403 + LZMA_NUM_POS_BITS_MAX) + 404 + pos_state); 405 + if (rc_is_bit_0(rc, prob)) { 406 + rc_update_bit_0(rc, prob); 407 + 408 + cst->state = cst->state < LZMA_NUM_LIT_STATES ? 409 + 9 : 11; 410 + copy_byte(wr, cst->rep0); 411 + return; 412 + } else { 413 + rc_update_bit_1(rc, prob); 414 + } 415 + } else { 416 + uint32_t distance; 417 + 418 + rc_update_bit_1(rc, prob); 419 + prob = p + LZMA_IS_REP_G1 + cst->state; 420 + if (rc_is_bit_0(rc, prob)) { 421 + rc_update_bit_0(rc, prob); 422 + distance = cst->rep1; 423 + } else { 424 + rc_update_bit_1(rc, prob); 425 + prob = p + LZMA_IS_REP_G2 + cst->state; 426 + if (rc_is_bit_0(rc, prob)) { 427 + rc_update_bit_0(rc, prob); 428 + distance = cst->rep2; 429 + } else { 430 + rc_update_bit_1(rc, prob); 431 + distance = cst->rep3; 432 + cst->rep3 = cst->rep2; 433 + } 434 + cst->rep2 = cst->rep1; 435 + } 436 + cst->rep1 = cst->rep0; 437 + cst->rep0 = distance; 438 + } 439 + cst->state = cst->state < LZMA_NUM_LIT_STATES ? 
8 : 11; 440 + prob = p + LZMA_REP_LEN_CODER; 441 + } 442 + 443 + prob_len = prob + LZMA_LEN_CHOICE; 444 + if (rc_is_bit_0(rc, prob_len)) { 445 + rc_update_bit_0(rc, prob_len); 446 + prob_len = (prob + LZMA_LEN_LOW 447 + + (pos_state << 448 + LZMA_LEN_NUM_LOW_BITS)); 449 + offset = 0; 450 + num_bits = LZMA_LEN_NUM_LOW_BITS; 451 + } else { 452 + rc_update_bit_1(rc, prob_len); 453 + prob_len = prob + LZMA_LEN_CHOICE_2; 454 + if (rc_is_bit_0(rc, prob_len)) { 455 + rc_update_bit_0(rc, prob_len); 456 + prob_len = (prob + LZMA_LEN_MID 457 + + (pos_state << 458 + LZMA_LEN_NUM_MID_BITS)); 459 + offset = 1 << LZMA_LEN_NUM_LOW_BITS; 460 + num_bits = LZMA_LEN_NUM_MID_BITS; 461 + } else { 462 + rc_update_bit_1(rc, prob_len); 463 + prob_len = prob + LZMA_LEN_HIGH; 464 + offset = ((1 << LZMA_LEN_NUM_LOW_BITS) 465 + + (1 << LZMA_LEN_NUM_MID_BITS)); 466 + num_bits = LZMA_LEN_NUM_HIGH_BITS; 467 + } 468 + } 469 + 470 + rc_bit_tree_decode(rc, prob_len, num_bits, &len); 471 + len += offset; 472 + 473 + if (cst->state < 4) { 474 + int pos_slot; 475 + 476 + cst->state += LZMA_NUM_LIT_STATES; 477 + prob = 478 + p + LZMA_POS_SLOT + 479 + ((len < 480 + LZMA_NUM_LEN_TO_POS_STATES ? 
len : 481 + LZMA_NUM_LEN_TO_POS_STATES - 1) 482 + << LZMA_NUM_POS_SLOT_BITS); 483 + rc_bit_tree_decode(rc, prob, 484 + LZMA_NUM_POS_SLOT_BITS, 485 + &pos_slot); 486 + if (pos_slot >= LZMA_START_POS_MODEL_INDEX) { 487 + int i, mi; 488 + num_bits = (pos_slot >> 1) - 1; 489 + cst->rep0 = 2 | (pos_slot & 1); 490 + if (pos_slot < LZMA_END_POS_MODEL_INDEX) { 491 + cst->rep0 <<= num_bits; 492 + prob = p + LZMA_SPEC_POS + 493 + cst->rep0 - pos_slot - 1; 494 + } else { 495 + num_bits -= LZMA_NUM_ALIGN_BITS; 496 + while (num_bits--) 497 + cst->rep0 = (cst->rep0 << 1) | 498 + rc_direct_bit(rc); 499 + prob = p + LZMA_ALIGN; 500 + cst->rep0 <<= LZMA_NUM_ALIGN_BITS; 501 + num_bits = LZMA_NUM_ALIGN_BITS; 502 + } 503 + i = 1; 504 + mi = 1; 505 + while (num_bits--) { 506 + if (rc_get_bit(rc, prob + mi, &mi)) 507 + cst->rep0 |= i; 508 + i <<= 1; 509 + } 510 + } else 511 + cst->rep0 = pos_slot; 512 + if (++(cst->rep0) == 0) 513 + return; 514 + } 515 + 516 + len += LZMA_MATCH_MIN_LEN; 517 + 518 + copy_bytes(wr, cst->rep0, len); 519 + } 520 + 521 + 522 + 523 + STATIC inline int INIT unlzma(unsigned char *buf, int in_len, 524 + int(*fill)(void*, unsigned int), 525 + int(*flush)(void*, unsigned int), 526 + unsigned char *output, 527 + int *posp, 528 + void(*error_fn)(char *x) 529 + ) 530 + { 531 + struct lzma_header header; 532 + int lc, pb, lp; 533 + uint32_t pos_state_mask; 534 + uint32_t literal_pos_mask; 535 + uint16_t *p; 536 + int num_probs; 537 + struct rc rc; 538 + int i, mi; 539 + struct writer wr; 540 + struct cstate cst; 541 + unsigned char *inbuf; 542 + int ret = -1; 543 + 544 + set_error_fn(error_fn); 545 + if (!flush) 546 + in_len -= 4; /* Uncompressed size hack active in pre-boot 547 + environment */ 548 + if (buf) 549 + inbuf = buf; 550 + else 551 + inbuf = malloc(LZMA_IOBUF_SIZE); 552 + if (!inbuf) { 553 + error("Could not allocate input bufer"); 554 + goto exit_0; 555 + } 556 + 557 + cst.state = 0; 558 + cst.rep0 = cst.rep1 = cst.rep2 = cst.rep3 = 1; 559 + 560 + 
wr.header = &header; 561 + wr.flush = flush; 562 + wr.global_pos = 0; 563 + wr.previous_byte = 0; 564 + wr.buffer_pos = 0; 565 + 566 + rc_init(&rc, fill, inbuf, in_len); 567 + 568 + for (i = 0; i < sizeof(header); i++) { 569 + if (rc.ptr >= rc.buffer_end) 570 + rc_read(&rc); 571 + ((unsigned char *)&header)[i] = *rc.ptr++; 572 + } 573 + 574 + if (header.pos >= (9 * 5 * 5)) 575 + error("bad header"); 576 + 577 + mi = 0; 578 + lc = header.pos; 579 + while (lc >= 9) { 580 + mi++; 581 + lc -= 9; 582 + } 583 + pb = 0; 584 + lp = mi; 585 + while (lp >= 5) { 586 + pb++; 587 + lp -= 5; 588 + } 589 + pos_state_mask = (1 << pb) - 1; 590 + literal_pos_mask = (1 << lp) - 1; 591 + 592 + ENDIAN_CONVERT(header.dict_size); 593 + ENDIAN_CONVERT(header.dst_size); 594 + 595 + if (header.dict_size == 0) 596 + header.dict_size = 1; 597 + 598 + if (output) 599 + wr.buffer = output; 600 + else { 601 + wr.bufsize = MIN(header.dst_size, header.dict_size); 602 + wr.buffer = large_malloc(wr.bufsize); 603 + } 604 + if (wr.buffer == NULL) 605 + goto exit_1; 606 + 607 + num_probs = LZMA_BASE_SIZE + (LZMA_LIT_SIZE << (lc + lp)); 608 + p = (uint16_t *) large_malloc(num_probs * sizeof(*p)); 609 + if (p == 0) 610 + goto exit_2; 611 + num_probs = LZMA_LITERAL + (LZMA_LIT_SIZE << (lc + lp)); 612 + for (i = 0; i < num_probs; i++) 613 + p[i] = (1 << RC_MODEL_TOTAL_BITS) >> 1; 614 + 615 + rc_init_code(&rc); 616 + 617 + while (get_pos(&wr) < header.dst_size) { 618 + int pos_state = get_pos(&wr) & pos_state_mask; 619 + uint16_t *prob = p + LZMA_IS_MATCH + 620 + (cst.state << LZMA_NUM_POS_BITS_MAX) + pos_state; 621 + if (rc_is_bit_0(&rc, prob)) 622 + process_bit0(&wr, &rc, &cst, p, pos_state, prob, 623 + lc, literal_pos_mask); 624 + else { 625 + process_bit1(&wr, &rc, &cst, p, pos_state, prob); 626 + if (cst.rep0 == 0) 627 + break; 628 + } 629 + } 630 + 631 + if (posp) 632 + *posp = rc.ptr-rc.buffer; 633 + if (wr.flush) 634 + wr.flush(wr.buffer, wr.buffer_pos); 635 + ret = 0; 636 + large_free(p); 637 + 
exit_2: 638 + if (!output) 639 + large_free(wr.buffer); 640 + exit_1: 641 + if (!buf) 642 + free(inbuf); 643 + exit_0: 644 + return ret; 645 + } 646 + 647 + #define decompress unlzma
+4
lib/zlib_inflate/inflate.h
··· 1 + #ifndef INFLATE_H 2 + #define INFLATE_H 3 + 1 4 /* inflate.h -- internal inflate state definition 2 5 * Copyright (C) 1995-2004 Mark Adler 3 6 * For conditions of distribution and use, see copyright notice in zlib.h ··· 108 105 unsigned short work[288]; /* work area for code table building */ 109 106 code codes[ENOUGH]; /* space for code tables */ 110 107 }; 108 + #endif
+4
lib/zlib_inflate/inftrees.h
··· 1 + #ifndef INFTREES_H 2 + #define INFTREES_H 3 + 1 4 /* inftrees.h -- header to use inftrees.c 2 5 * Copyright (C) 1995-2005 Mark Adler 3 6 * For conditions of distribution and use, see copyright notice in zlib.h ··· 56 53 extern int zlib_inflate_table (codetype type, unsigned short *lens, 57 54 unsigned codes, code **table, 58 55 unsigned *bits, unsigned short *work); 56 + #endif
+4 -7
mm/filemap.c
··· 1816 1816 static size_t __iovec_copy_from_user_inatomic(char *vaddr, 1817 1817 const struct iovec *iov, size_t base, size_t bytes) 1818 1818 { 1819 - size_t copied = 0, left = 0, total = bytes; 1819 + size_t copied = 0, left = 0; 1820 1820 1821 1821 while (bytes) { 1822 1822 char __user *buf = iov->iov_base + base; 1823 1823 int copy = min(bytes, iov->iov_len - base); 1824 1824 1825 1825 base = 0; 1826 - left = __copy_from_user_inatomic_nocache(vaddr, buf, copy, total); 1826 + left = __copy_from_user_inatomic(vaddr, buf, copy); 1827 1827 copied += copy; 1828 1828 bytes -= copy; 1829 1829 vaddr += copy; ··· 1851 1851 if (likely(i->nr_segs == 1)) { 1852 1852 int left; 1853 1853 char __user *buf = i->iov->iov_base + i->iov_offset; 1854 - 1855 - left = __copy_from_user_inatomic_nocache(kaddr + offset, 1856 - buf, bytes, bytes); 1854 + left = __copy_from_user_inatomic(kaddr + offset, buf, bytes); 1857 1855 copied = bytes - left; 1858 1856 } else { 1859 1857 copied = __iovec_copy_from_user_inatomic(kaddr + offset, ··· 1879 1881 if (likely(i->nr_segs == 1)) { 1880 1882 int left; 1881 1883 char __user *buf = i->iov->iov_base + i->iov_offset; 1882 - 1883 - left = __copy_from_user_nocache(kaddr + offset, buf, bytes, bytes); 1884 + left = __copy_from_user(kaddr + offset, buf, bytes); 1884 1885 copied = bytes - left; 1885 1886 } else { 1886 1887 copied = __iovec_copy_from_user_inatomic(kaddr + offset,
+1 -1
mm/filemap_xip.c
··· 354 354 break; 355 355 356 356 copied = bytes - 357 - __copy_from_user_nocache(xip_mem + offset, buf, bytes, bytes); 357 + __copy_from_user_nocache(xip_mem + offset, buf, bytes); 358 358 359 359 if (likely(copied > 0)) { 360 360 status = copied;
+21 -22
mm/shmem.c
··· 169 169 */ 170 170 static inline int shmem_acct_size(unsigned long flags, loff_t size) 171 171 { 172 - return (flags & VM_ACCOUNT) ? 173 - security_vm_enough_memory_kern(VM_ACCT(size)) : 0; 172 + return (flags & VM_NORESERVE) ? 173 + 0 : security_vm_enough_memory_kern(VM_ACCT(size)); 174 174 } 175 175 176 176 static inline void shmem_unacct_size(unsigned long flags, loff_t size) 177 177 { 178 - if (flags & VM_ACCOUNT) 178 + if (!(flags & VM_NORESERVE)) 179 179 vm_unacct_memory(VM_ACCT(size)); 180 180 } 181 181 ··· 187 187 */ 188 188 static inline int shmem_acct_block(unsigned long flags) 189 189 { 190 - return (flags & VM_ACCOUNT) ? 191 - 0 : security_vm_enough_memory_kern(VM_ACCT(PAGE_CACHE_SIZE)); 190 + return (flags & VM_NORESERVE) ? 191 + security_vm_enough_memory_kern(VM_ACCT(PAGE_CACHE_SIZE)) : 0; 192 192 } 193 193 194 194 static inline void shmem_unacct_blocks(unsigned long flags, long pages) 195 195 { 196 - if (!(flags & VM_ACCOUNT)) 196 + if (flags & VM_NORESERVE) 197 197 vm_unacct_memory(pages * VM_ACCT(PAGE_CACHE_SIZE)); 198 198 } 199 199 ··· 1515 1515 return 0; 1516 1516 } 1517 1517 1518 - static struct inode * 1519 - shmem_get_inode(struct super_block *sb, int mode, dev_t dev) 1518 + static struct inode *shmem_get_inode(struct super_block *sb, int mode, 1519 + dev_t dev, unsigned long flags) 1520 1520 { 1521 1521 struct inode *inode; 1522 1522 struct shmem_inode_info *info; ··· 1537 1537 info = SHMEM_I(inode); 1538 1538 memset(info, 0, (char *)inode - (char *)info); 1539 1539 spin_lock_init(&info->lock); 1540 + info->flags = flags & VM_NORESERVE; 1540 1541 INIT_LIST_HEAD(&info->swaplist); 1541 1542 1542 1543 switch (mode & S_IFMT) { ··· 1780 1779 static int 1781 1780 shmem_mknod(struct inode *dir, struct dentry *dentry, int mode, dev_t dev) 1782 1781 { 1783 - struct inode *inode = shmem_get_inode(dir->i_sb, mode, dev); 1782 + struct inode *inode; 1784 1783 int error = -ENOSPC; 1785 1784 1785 + inode = shmem_get_inode(dir->i_sb, mode, dev, 
VM_NORESERVE); 1786 1786 if (inode) { 1787 1787 error = security_inode_init_security(inode, dir, NULL, NULL, 1788 1788 NULL); ··· 1922 1920 if (len > PAGE_CACHE_SIZE) 1923 1921 return -ENAMETOOLONG; 1924 1922 1925 - inode = shmem_get_inode(dir->i_sb, S_IFLNK|S_IRWXUGO, 0); 1923 + inode = shmem_get_inode(dir->i_sb, S_IFLNK|S_IRWXUGO, 0, VM_NORESERVE); 1926 1924 if (!inode) 1927 1925 return -ENOSPC; 1928 1926 ··· 2334 2332 sb->s_flags |= MS_POSIXACL; 2335 2333 #endif 2336 2334 2337 - inode = shmem_get_inode(sb, S_IFDIR | sbinfo->mode, 0); 2335 + inode = shmem_get_inode(sb, S_IFDIR | sbinfo->mode, 0, VM_NORESERVE); 2338 2336 if (!inode) 2339 2337 goto failed; 2340 2338 inode->i_uid = sbinfo->uid; ··· 2576 2574 return 0; 2577 2575 } 2578 2576 2579 - #define shmem_file_operations ramfs_file_operations 2580 - #define shmem_vm_ops generic_file_vm_ops 2581 - #define shmem_get_inode ramfs_get_inode 2582 - #define shmem_acct_size(a, b) 0 2583 - #define shmem_unacct_size(a, b) do {} while (0) 2584 - #define SHMEM_MAX_BYTES LLONG_MAX 2577 + #define shmem_vm_ops generic_file_vm_ops 2578 + #define shmem_file_operations ramfs_file_operations 2579 + #define shmem_get_inode(sb, mode, dev, flags) ramfs_get_inode(sb, mode, dev) 2580 + #define shmem_acct_size(flags, size) 0 2581 + #define shmem_unacct_size(flags, size) do {} while (0) 2582 + #define SHMEM_MAX_BYTES LLONG_MAX 2585 2583 2586 2584 #endif /* CONFIG_SHMEM */ 2587 2585 ··· 2591 2589 * shmem_file_setup - get an unlinked file living in tmpfs 2592 2590 * @name: name for dentry (to be seen in /proc/<pid>/maps 2593 2591 * @size: size to be set for the file 2594 - * @flags: vm_flags 2592 + * @flags: VM_NORESERVE suppresses pre-accounting of the entire object size 2595 2593 */ 2596 2594 struct file *shmem_file_setup(char *name, loff_t size, unsigned long flags) 2597 2595 { ··· 2625 2623 goto put_dentry; 2626 2624 2627 2625 error = -ENOSPC; 2628 - inode = shmem_get_inode(root->d_sb, S_IFREG | S_IRWXUGO, 0); 2626 + inode = 
shmem_get_inode(root->d_sb, S_IFREG | S_IRWXUGO, 0, flags); 2629 2627 if (!inode) 2630 2628 goto close_file; 2631 2629 2632 - #ifdef CONFIG_SHMEM 2633 - SHMEM_I(inode)->flags = (flags & VM_NORESERVE) ? 0 : VM_ACCOUNT; 2634 - #endif 2635 2630 d_instantiate(dentry, inode); 2636 2631 inode->i_size = size; 2637 2632 inode->i_nlink = 0; /* It is unlinked */
+9 -1
mm/vmalloc.c
··· 323 323 unsigned long addr; 324 324 int purged = 0; 325 325 326 + BUG_ON(!size); 326 327 BUG_ON(size & ~PAGE_MASK); 327 328 328 329 va = kmalloc_node(sizeof(struct vmap_area), ··· 335 334 addr = ALIGN(vstart, align); 336 335 337 336 spin_lock(&vmap_area_lock); 337 + if (addr + size - 1 < addr) 338 + goto overflow; 339 + 338 340 /* XXX: could have a last_hole cache */ 339 341 n = vmap_area_root.rb_node; 340 342 if (n) { ··· 369 365 370 366 while (addr + size > first->va_start && addr + size <= vend) { 371 367 addr = ALIGN(first->va_end + PAGE_SIZE, align); 368 + if (addr + size - 1 < addr) 369 + goto overflow; 372 370 373 371 n = rb_next(&first->rb_node); 374 372 if (n) ··· 381 375 } 382 376 found: 383 377 if (addr + size > vend) { 378 + overflow: 384 379 spin_unlock(&vmap_area_lock); 385 380 if (!purged) { 386 381 purge_vmap_area_lazy(); ··· 505 498 static DEFINE_SPINLOCK(purge_lock); 506 499 LIST_HEAD(valist); 507 500 struct vmap_area *va; 501 + struct vmap_area *n_va; 508 502 int nr = 0; 509 503 510 504 /* ··· 545 537 546 538 if (nr) { 547 539 spin_lock(&vmap_area_lock); 548 - list_for_each_entry(va, &valist, purge_list) 540 + list_for_each_entry_safe(va, n_va, &valist, purge_list) 549 541 __free_vmap_area(va); 550 542 spin_unlock(&vmap_area_lock); 551 543 }
+10
net/8021q/vlan_core.c
··· 1 1 #include <linux/skbuff.h> 2 2 #include <linux/netdevice.h> 3 3 #include <linux/if_vlan.h> 4 + #include <linux/netpoll.h> 4 5 #include "vlan.h" 5 6 6 7 /* VLAN rx hw acceleration helper. This acts like netif_{rx,receive_skb}(). */ 7 8 int __vlan_hwaccel_rx(struct sk_buff *skb, struct vlan_group *grp, 8 9 u16 vlan_tci, int polling) 9 10 { 11 + if (netpoll_rx(skb)) 12 + return NET_RX_DROP; 13 + 10 14 if (skb_bond_should_drop(skb)) 11 15 goto drop; 12 16 ··· 104 100 { 105 101 int err = NET_RX_SUCCESS; 106 102 103 + if (netpoll_receive_skb(skb)) 104 + return NET_RX_DROP; 105 + 107 106 switch (vlan_gro_common(napi, grp, vlan_tci, skb)) { 108 107 case -1: 109 108 return netif_receive_skb(skb); ··· 131 124 int err = NET_RX_DROP; 132 125 133 126 if (!skb) 127 + goto out; 128 + 129 + if (netpoll_receive_skb(skb)) 134 130 goto out; 135 131 136 132 err = NET_RX_SUCCESS;
+6
net/core/dev.c
··· 2488 2488 2489 2489 int napi_gro_receive(struct napi_struct *napi, struct sk_buff *skb) 2490 2490 { 2491 + if (netpoll_receive_skb(skb)) 2492 + return NET_RX_DROP; 2493 + 2491 2494 switch (__napi_gro_receive(napi, skb)) { 2492 2495 case -1: 2493 2496 return netif_receive_skb(skb); ··· 2559 2556 int err = NET_RX_DROP; 2560 2557 2561 2558 if (!skb) 2559 + goto out; 2560 + 2561 + if (netpoll_receive_skb(skb)) 2562 2562 goto out; 2563 2563 2564 2564 err = NET_RX_SUCCESS;
+5 -4
net/ipv4/tcp_input.c
··· 1374 1374 1375 1375 static int tcp_shifted_skb(struct sock *sk, struct sk_buff *skb, 1376 1376 struct tcp_sacktag_state *state, 1377 - unsigned int pcount, int shifted, int mss) 1377 + unsigned int pcount, int shifted, int mss, 1378 + int dup_sack) 1378 1379 { 1379 1380 struct tcp_sock *tp = tcp_sk(sk); 1380 1381 struct sk_buff *prev = tcp_write_queue_prev(sk, skb); ··· 1411 1410 } 1412 1411 1413 1412 /* We discard results */ 1414 - tcp_sacktag_one(skb, sk, state, 0, pcount); 1413 + tcp_sacktag_one(skb, sk, state, dup_sack, pcount); 1415 1414 1416 1415 /* Difference in this won't matter, both ACKed by the same cumul. ACK */ 1417 1416 TCP_SKB_CB(prev)->sacked |= (TCP_SKB_CB(skb)->sacked & TCPCB_EVER_RETRANS); ··· 1562 1561 1563 1562 if (!skb_shift(prev, skb, len)) 1564 1563 goto fallback; 1565 - if (!tcp_shifted_skb(sk, skb, state, pcount, len, mss)) 1564 + if (!tcp_shifted_skb(sk, skb, state, pcount, len, mss, dup_sack)) 1566 1565 goto out; 1567 1566 1568 1567 /* Hole filled allows collapsing with the next as well, this is very ··· 1581 1580 len = skb->len; 1582 1581 if (skb_shift(prev, skb, len)) { 1583 1582 pcount += tcp_skb_pcount(skb); 1584 - tcp_shifted_skb(sk, skb, state, tcp_skb_pcount(skb), len, mss); 1583 + tcp_shifted_skb(sk, skb, state, tcp_skb_pcount(skb), len, mss, 0); 1585 1584 } 1586 1585 1587 1586 out:
+1 -1
net/ipv4/tcp_scalable.c
··· 1 1 /* Tom Kelly's Scalable TCP 2 2 * 3 - * See htt://www-lce.eng.cam.ac.uk/~ctk21/scalable/ 3 + * See http://www.deneholme.net/tom/scalable/ 4 4 * 5 5 * John Heffner <jheffner@sc.edu> 6 6 */
+2 -2
net/ipv6/inet6_hashtables.c
··· 258 258 259 259 if (twp != NULL) { 260 260 *twp = tw; 261 - NET_INC_STATS_BH(twsk_net(tw), LINUX_MIB_TIMEWAITRECYCLED); 261 + NET_INC_STATS_BH(net, LINUX_MIB_TIMEWAITRECYCLED); 262 262 } else if (tw != NULL) { 263 263 /* Silly. Should hash-dance instead... */ 264 264 inet_twsk_deschedule(tw, death_row); 265 - NET_INC_STATS_BH(twsk_net(tw), LINUX_MIB_TIMEWAITRECYCLED); 265 + NET_INC_STATS_BH(net, LINUX_MIB_TIMEWAITRECYCLED); 266 266 267 267 inet_twsk_put(tw); 268 268 }
+3 -2
net/ipv6/netfilter/nf_conntrack_proto_icmpv6.c
··· 201 201 202 202 if (net->ct.sysctl_checksum && hooknum == NF_INET_PRE_ROUTING && 203 203 nf_ip6_checksum(skb, hooknum, dataoff, IPPROTO_ICMPV6)) { 204 - nf_log_packet(PF_INET6, 0, skb, NULL, NULL, NULL, 205 - "nf_ct_icmpv6: ICMPv6 checksum failed\n"); 204 + if (LOG_INVALID(net, IPPROTO_ICMPV6)) 205 + nf_log_packet(PF_INET6, 0, skb, NULL, NULL, NULL, 206 + "nf_ct_icmpv6: ICMPv6 checksum failed "); 206 207 return -NF_ACCEPT; 207 208 } 208 209
+145 -60
net/netfilter/x_tables.c
··· 827 827 .release = seq_release_net, 828 828 }; 829 829 830 + /* 831 + * Traverse state for ip{,6}_{tables,matches} for helping crossing 832 + * the multi-AF mutexes. 833 + */ 834 + struct nf_mttg_trav { 835 + struct list_head *head, *curr; 836 + uint8_t class, nfproto; 837 + }; 838 + 839 + enum { 840 + MTTG_TRAV_INIT, 841 + MTTG_TRAV_NFP_UNSPEC, 842 + MTTG_TRAV_NFP_SPEC, 843 + MTTG_TRAV_DONE, 844 + }; 845 + 846 + static void *xt_mttg_seq_next(struct seq_file *seq, void *v, loff_t *ppos, 847 + bool is_target) 848 + { 849 + static const uint8_t next_class[] = { 850 + [MTTG_TRAV_NFP_UNSPEC] = MTTG_TRAV_NFP_SPEC, 851 + [MTTG_TRAV_NFP_SPEC] = MTTG_TRAV_DONE, 852 + }; 853 + struct nf_mttg_trav *trav = seq->private; 854 + 855 + switch (trav->class) { 856 + case MTTG_TRAV_INIT: 857 + trav->class = MTTG_TRAV_NFP_UNSPEC; 858 + mutex_lock(&xt[NFPROTO_UNSPEC].mutex); 859 + trav->head = trav->curr = is_target ? 860 + &xt[NFPROTO_UNSPEC].target : &xt[NFPROTO_UNSPEC].match; 861 + break; 862 + case MTTG_TRAV_NFP_UNSPEC: 863 + trav->curr = trav->curr->next; 864 + if (trav->curr != trav->head) 865 + break; 866 + mutex_unlock(&xt[NFPROTO_UNSPEC].mutex); 867 + mutex_lock(&xt[trav->nfproto].mutex); 868 + trav->head = trav->curr = is_target ? 
869 + &xt[trav->nfproto].target : &xt[trav->nfproto].match; 870 + trav->class = next_class[trav->class]; 871 + break; 872 + case MTTG_TRAV_NFP_SPEC: 873 + trav->curr = trav->curr->next; 874 + if (trav->curr != trav->head) 875 + break; 876 + /* fallthru, _stop will unlock */ 877 + default: 878 + return NULL; 879 + } 880 + 881 + if (ppos != NULL) 882 + ++*ppos; 883 + return trav; 884 + } 885 + 886 + static void *xt_mttg_seq_start(struct seq_file *seq, loff_t *pos, 887 + bool is_target) 888 + { 889 + struct nf_mttg_trav *trav = seq->private; 890 + unsigned int j; 891 + 892 + trav->class = MTTG_TRAV_INIT; 893 + for (j = 0; j < *pos; ++j) 894 + if (xt_mttg_seq_next(seq, NULL, NULL, is_target) == NULL) 895 + return NULL; 896 + return trav; 897 + } 898 + 899 + static void xt_mttg_seq_stop(struct seq_file *seq, void *v) 900 + { 901 + struct nf_mttg_trav *trav = seq->private; 902 + 903 + switch (trav->class) { 904 + case MTTG_TRAV_NFP_UNSPEC: 905 + mutex_unlock(&xt[NFPROTO_UNSPEC].mutex); 906 + break; 907 + case MTTG_TRAV_NFP_SPEC: 908 + mutex_unlock(&xt[trav->nfproto].mutex); 909 + break; 910 + } 911 + } 912 + 830 913 static void *xt_match_seq_start(struct seq_file *seq, loff_t *pos) 831 914 { 832 - struct proc_dir_entry *pde = (struct proc_dir_entry *)seq->private; 833 - u_int16_t af = (unsigned long)pde->data; 834 - 835 - mutex_lock(&xt[af].mutex); 836 - return seq_list_start(&xt[af].match, *pos); 915 + return xt_mttg_seq_start(seq, pos, false); 837 916 } 838 917 839 - static void *xt_match_seq_next(struct seq_file *seq, void *v, loff_t *pos) 918 + static void *xt_match_seq_next(struct seq_file *seq, void *v, loff_t *ppos) 840 919 { 841 - struct proc_dir_entry *pde = (struct proc_dir_entry *)seq->private; 842 - u_int16_t af = (unsigned long)pde->data; 843 - 844 - return seq_list_next(v, &xt[af].match, pos); 845 - } 846 - 847 - static void xt_match_seq_stop(struct seq_file *seq, void *v) 848 - { 849 - struct proc_dir_entry *pde = seq->private; 850 - u_int16_t af = 
(unsigned long)pde->data; 851 - 852 - mutex_unlock(&xt[af].mutex); 920 + return xt_mttg_seq_next(seq, v, ppos, false); 853 921 } 854 922 855 923 static int xt_match_seq_show(struct seq_file *seq, void *v) 856 924 { 857 - struct xt_match *match = list_entry(v, struct xt_match, list); 925 + const struct nf_mttg_trav *trav = seq->private; 926 + const struct xt_match *match; 858 927 859 - if (strlen(match->name)) 860 - return seq_printf(seq, "%s\n", match->name); 861 - else 862 - return 0; 928 + switch (trav->class) { 929 + case MTTG_TRAV_NFP_UNSPEC: 930 + case MTTG_TRAV_NFP_SPEC: 931 + if (trav->curr == trav->head) 932 + return 0; 933 + match = list_entry(trav->curr, struct xt_match, list); 934 + return (*match->name == '\0') ? 0 : 935 + seq_printf(seq, "%s\n", match->name); 936 + } 937 + return 0; 863 938 } 864 939 865 940 static const struct seq_operations xt_match_seq_ops = { 866 941 .start = xt_match_seq_start, 867 942 .next = xt_match_seq_next, 868 - .stop = xt_match_seq_stop, 943 + .stop = xt_mttg_seq_stop, 869 944 .show = xt_match_seq_show, 870 945 }; 871 946 872 947 static int xt_match_open(struct inode *inode, struct file *file) 873 948 { 949 + struct seq_file *seq; 950 + struct nf_mttg_trav *trav; 874 951 int ret; 875 952 876 - ret = seq_open(file, &xt_match_seq_ops); 877 - if (!ret) { 878 - struct seq_file *seq = file->private_data; 953 + trav = kmalloc(sizeof(*trav), GFP_KERNEL); 954 + if (trav == NULL) 955 + return -ENOMEM; 879 956 880 - seq->private = PDE(inode); 957 + ret = seq_open(file, &xt_match_seq_ops); 958 + if (ret < 0) { 959 + kfree(trav); 960 + return ret; 881 961 } 882 - return ret; 962 + 963 + seq = file->private_data; 964 + seq->private = trav; 965 + trav->nfproto = (unsigned long)PDE(inode)->data; 966 + return 0; 883 967 } 884 968 885 969 static const struct file_operations xt_match_ops = { ··· 971 887 .open = xt_match_open, 972 888 .read = seq_read, 973 889 .llseek = seq_lseek, 974 - .release = seq_release, 890 + .release = 
seq_release_private, 975 891 }; 976 892 977 893 static void *xt_target_seq_start(struct seq_file *seq, loff_t *pos) 978 894 { 979 - struct proc_dir_entry *pde = (struct proc_dir_entry *)seq->private; 980 - u_int16_t af = (unsigned long)pde->data; 981 - 982 - mutex_lock(&xt[af].mutex); 983 - return seq_list_start(&xt[af].target, *pos); 895 + return xt_mttg_seq_start(seq, pos, true); 984 896 } 985 897 986 - static void *xt_target_seq_next(struct seq_file *seq, void *v, loff_t *pos) 898 + static void *xt_target_seq_next(struct seq_file *seq, void *v, loff_t *ppos) 987 899 { 988 - struct proc_dir_entry *pde = (struct proc_dir_entry *)seq->private; 989 - u_int16_t af = (unsigned long)pde->data; 990 - 991 - return seq_list_next(v, &xt[af].target, pos); 992 - } 993 - 994 - static void xt_target_seq_stop(struct seq_file *seq, void *v) 995 - { 996 - struct proc_dir_entry *pde = seq->private; 997 - u_int16_t af = (unsigned long)pde->data; 998 - 999 - mutex_unlock(&xt[af].mutex); 900 + return xt_mttg_seq_next(seq, v, ppos, true); 1000 901 } 1001 902 1002 903 static int xt_target_seq_show(struct seq_file *seq, void *v) 1003 904 { 1004 - struct xt_target *target = list_entry(v, struct xt_target, list); 905 + const struct nf_mttg_trav *trav = seq->private; 906 + const struct xt_target *target; 1005 907 1006 - if (strlen(target->name)) 1007 - return seq_printf(seq, "%s\n", target->name); 1008 - else 1009 - return 0; 908 + switch (trav->class) { 909 + case MTTG_TRAV_NFP_UNSPEC: 910 + case MTTG_TRAV_NFP_SPEC: 911 + if (trav->curr == trav->head) 912 + return 0; 913 + target = list_entry(trav->curr, struct xt_target, list); 914 + return (*target->name == '\0') ? 
0 : 915 + seq_printf(seq, "%s\n", target->name); 916 + } 917 + return 0; 1010 918 } 1011 919 1012 920 static const struct seq_operations xt_target_seq_ops = { 1013 921 .start = xt_target_seq_start, 1014 922 .next = xt_target_seq_next, 1015 - .stop = xt_target_seq_stop, 923 + .stop = xt_mttg_seq_stop, 1016 924 .show = xt_target_seq_show, 1017 925 }; 1018 926 1019 927 static int xt_target_open(struct inode *inode, struct file *file) 1020 928 { 929 + struct seq_file *seq; 930 + struct nf_mttg_trav *trav; 1021 931 int ret; 1022 932 1023 - ret = seq_open(file, &xt_target_seq_ops); 1024 - if (!ret) { 1025 - struct seq_file *seq = file->private_data; 933 + trav = kmalloc(sizeof(*trav), GFP_KERNEL); 934 + if (trav == NULL) 935 + return -ENOMEM; 1026 936 1027 - seq->private = PDE(inode); 937 + ret = seq_open(file, &xt_target_seq_ops); 938 + if (ret < 0) { 939 + kfree(trav); 940 + return ret; 1028 941 } 1029 - return ret; 942 + 943 + seq = file->private_data; 944 + seq->private = trav; 945 + trav->nfproto = (unsigned long)PDE(inode)->data; 946 + return 0; 1030 947 } 1031 948 1032 949 static const struct file_operations xt_target_ops = { ··· 1035 950 .open = xt_target_open, 1036 951 .read = seq_read, 1037 952 .llseek = seq_lseek, 1038 - .release = seq_release, 953 + .release = seq_release_private, 1039 954 }; 1040 955 1041 956 #define FORMAT_TABLES "_tables_names"
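The new xt_mttg traversal above walks two lists in sequence (NFPROTO_UNSPEC first, then the per-protocol one), driven by a small state machine so that only one per-AF mutex is held at a time. A standalone model of the same start/next/done progression (names and array-backed lists are ours; the locking is omitted):

```c
#include <stddef.h>

/* Traversal states, mirroring MTTG_TRAV_INIT/NFP_UNSPEC/NFP_SPEC/DONE. */
enum trav_state { TRAV_INIT, TRAV_UNSPEC, TRAV_SPEC, TRAV_DONE };

struct trav {
    enum trav_state state;
    const int *head;   /* list currently being walked */
    size_t len, idx;
};

/* Return the next element across both lists, or NULL when exhausted.
 * Like the kernel code, reaching the end of the UNSPEC list switches
 * the cursor to the per-protocol list before giving up. */
const int *trav_next(struct trav *t,
                     const int *unspec, size_t n_unspec,
                     const int *spec, size_t n_spec)
{
    for (;;) {
        switch (t->state) {
        case TRAV_INIT:                 /* enter the UNSPEC list */
            t->state = TRAV_UNSPEC;
            t->head = unspec; t->len = n_unspec; t->idx = 0;
            break;
        case TRAV_UNSPEC:               /* walk it; then enter SPEC */
            if (t->idx < t->len)
                return &t->head[t->idx++];
            t->state = TRAV_SPEC;
            t->head = spec; t->len = n_spec; t->idx = 0;
            break;
        case TRAV_SPEC:                 /* walk it; then we are done */
            if (t->idx < t->len)
                return &t->head[t->idx++];
            t->state = TRAV_DONE;
            break;
        default:
            return NULL;
        }
    }
}
```

In the kernel version the state transitions additionally drop the UNSPEC mutex and take the per-protocol one, which is why `_stop` must unlock whichever mutex the current state owns.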
+1 -1
net/netfilter/xt_recent.c
··· 542 542 struct recent_entry *e; 543 543 char buf[sizeof("+b335:1d35:1e55:dead:c0de:1715:5afe:c0de")]; 544 544 const char *c = buf; 545 - union nf_inet_addr addr; 545 + union nf_inet_addr addr = {}; 546 546 u_int16_t family; 547 547 bool add, succ; 548 548
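The `= {}` one-liner above matters because an IPv4 parse writes only 4 of the union's 16 bytes, yet entries are later compared full-width. A hypothetical stand-in (not the kernel type) showing why clearing the union first makes that comparison deterministic:

```c
#include <string.h>

/* Hypothetical stand-in for union nf_inet_addr: an IPv4 address
 * overlays only the first 4 of 16 bytes. */
union addr {
    unsigned char all[16];
    unsigned int  ip;
};

/* What "addr = {}" buys: clear every byte before storing the 4-byte
 * IPv4 value, so the 12 untouched bytes are 0 instead of stack junk. */
void set_ipv4(union addr *a, unsigned int ip)
{
    memset(a, 0, sizeof(*a));
    a->ip = ip;
}

/* Full-width comparison, as a recent-list lookup would do. */
int same_addr(const union addr *x, const union addr *y)
{
    return memcmp(x->all, y->all, sizeof(x->all)) == 0;
}
```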
+5 -1
net/sched/sch_drr.c
··· 66 66 { 67 67 struct drr_sched *q = qdisc_priv(sch); 68 68 struct drr_class *cl = (struct drr_class *)*arg; 69 + struct nlattr *opt = tca[TCA_OPTIONS]; 69 70 struct nlattr *tb[TCA_DRR_MAX + 1]; 70 71 u32 quantum; 71 72 int err; 72 73 73 - err = nla_parse_nested(tb, TCA_DRR_MAX, tca[TCA_OPTIONS], drr_policy); 74 + if (!opt) 75 + return -EINVAL; 76 + 77 + err = nla_parse_nested(tb, TCA_DRR_MAX, opt, drr_policy); 74 78 if (err < 0) 75 79 return err; 76 80
+14
scripts/Makefile.lib
··· 186 186 cmd_gzip = gzip -f -9 < $< > $@ 187 187 188 188 189 + # Bzip2 190 + # --------------------------------------------------------------------------- 191 + 192 + # Bzip2 does not include size in file... so we have to fake that 193 + size_append=$(CONFIG_SHELL) $(srctree)/scripts/bin_size 194 + 195 + quiet_cmd_bzip2 = BZIP2 $@ 196 + cmd_bzip2 = (bzip2 -9 < $< && $(size_append) $<) > $@ || (rm -f $@ ; false) 197 + 198 + # Lzma 199 + # --------------------------------------------------------------------------- 200 + 201 + quiet_cmd_lzma = LZMA $@ 202 + cmd_lzma = (lzma -9 -c $< && $(size_append) $<) >$@ || (rm -f $@ ; false)
+10
scripts/bin_size
··· 1 + #!/bin/sh 2 + 3 + if [ $# = 0 ] ; then 4 + echo Usage: $0 file 5 + fi 6 + 7 + size_dec=`stat -c "%s" $1` 8 + size_hex_echo_string=`printf "%08x" $size_dec | 9 + sed 's/\(..\)\(..\)\(..\)\(..\)/\\\\x\4\\\\x\3\\\\x\2\\\\x\1/g'` 10 + /bin/echo -ne $size_hex_echo_string
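The printf/sed pipeline in bin_size turns the decimal file size into four little-endian bytes for `echo -ne` to append. The same footer format, spelled out in C (a sketch; the function names are ours, not kernel API):

```c
#include <stdint.h>

/* Encode the size the way bin_size's sed swap does: least
 * significant byte first. */
void size_to_le(uint32_t n, unsigned char out[4])
{
    out[0] = n & 0xff;
    out[1] = (n >> 8) & 0xff;
    out[2] = (n >> 16) & 0xff;
    out[3] = (n >> 24) & 0xff;
}

/* Read the 4-byte footer back into a host integer, as a
 * decompressor consuming the appended size would. */
uint32_t size_from_le(const unsigned char in[4])
{
    return (uint32_t)in[0] | ((uint32_t)in[1] << 8) |
           ((uint32_t)in[2] << 16) | ((uint32_t)in[3] << 24);
}
```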
+14 -12
scripts/checkpatch.pl
··· 10 10 my $P = $0; 11 11 $P =~ s@.*/@@g; 12 12 13 - my $V = '0.27'; 13 + my $V = '0.28'; 14 14 15 15 use Getopt::Long qw(:config no_auto_abbrev); 16 16 ··· 110 110 __iomem| 111 111 __must_check| 112 112 __init_refok| 113 - __kprobes 113 + __kprobes| 114 + __ref 114 115 }x; 115 116 our $Attribute = qr{ 116 117 const| ··· 1241 1240 $realfile =~ s@^([^/]*)/@@; 1242 1241 1243 1242 $p1_prefix = $1; 1244 - if ($tree && $p1_prefix ne '' && -e "$root/$p1_prefix") { 1243 + if (!$file && $tree && $p1_prefix ne '' && 1244 + -e "$root/$p1_prefix") { 1245 1245 WARN("patch prefix '$p1_prefix' exists, appears to be a -p0 patch\n"); 1246 1246 } 1247 1247 ··· 1585 1583 } 1586 1584 # TEST: allow direct testing of the attribute matcher. 1587 1585 if ($dbg_attr) { 1588 - if ($line =~ /^.\s*$Attribute\s*$/) { 1586 + if ($line =~ /^.\s*$Modifier\s*$/) { 1589 1587 ERROR("TEST: is attr\n" . $herecurr); 1590 - } elsif ($dbg_attr > 1 && $line =~ /^.+($Attribute)/) { 1588 + } elsif ($dbg_attr > 1 && $line =~ /^.+($Modifier)/) { 1591 1589 ERROR("TEST: is not attr ($1 is)\n". $herecurr); 1592 1590 } 1593 1591 next; ··· 1659 1657 1660 1658 # * goes on variable not on type 1661 1659 # (char*[ const]) 1662 - if ($line =~ m{\($NonptrType(\s*\*[\s\*]*(?:$Modifier\s*)*)\)}) { 1660 + if ($line =~ m{\($NonptrType(\s*(?:$Modifier\b\s*|\*\s*)+)\)}) { 1663 1661 my ($from, $to) = ($1, $1); 1664 1662 1665 1663 # Should start with a space. ··· 1674 1672 if ($from ne $to) { 1675 1673 ERROR("\"(foo$from)\" should be \"(foo$to)\"\n" . $herecurr); 1676 1674 } 1677 - } elsif ($line =~ m{\b$NonptrType(\s*\*[\s\*]*(?:$Modifier\s*)?)($Ident)}) { 1675 + } elsif ($line =~ m{\b$NonptrType(\s*(?:$Modifier\b\s*|\*\s*)+)($Ident)}) { 1678 1676 my ($from, $to, $ident) = ($1, $1, $2); 1679 1677 1680 1678 # Should start with a space. ··· 1687 1685 # Modifiers should have spaces. 
1688 1686 $to =~ s/(\b$Modifier$)/$1 /; 1689 1687 1690 - #print "from<$from> to<$to>\n"; 1691 - if ($from ne $to) { 1688 + #print "from<$from> to<$to> ident<$ident>\n"; 1689 + if ($from ne $to && $ident !~ /^$Modifier$/) { 1692 1690 ERROR("\"foo${from}bar\" should be \"foo${to}bar\"\n" . $herecurr); 1693 1691 } 1694 1692 } ··· 1887 1885 if ($ctx !~ /[WEBC]x./ && $ca !~ /(?:\)|!|~|\*|-|\&|\||\+\+|\-\-|\{)$/) { 1888 1886 ERROR("space required before that '$op' $at\n" . $hereptr); 1889 1887 } 1890 - if ($op eq '*' && $cc =~/\s*const\b/) { 1888 + if ($op eq '*' && $cc =~/\s*$Modifier\b/) { 1891 1889 # A unary '*' may be const 1892 1890 1893 1891 } elsif ($ctx =~ /.xW/) { 1894 - ERROR("space prohibited after that '$op' $at\n" . $hereptr); 1892 + ERROR("space prohibited after that '$op' $at\n" . $hereptr); 1895 1893 } 1896 1894 1897 1895 # unary ++ and unary -- are allowed no space on one side. ··· 2562 2560 if ($line =~ /\bin_atomic\s*\(/) { 2563 2561 if ($realfile =~ m@^drivers/@) { 2564 2562 ERROR("do not use in_atomic in drivers\n" . $herecurr); 2565 - } else { 2563 + } elsif ($realfile !~ m@^kernel/@) { 2566 2564 WARN("use of in_atomic() is incorrect outside core kernel code\n" . $herecurr); 2567 2565 } 2568 2566 }
+12 -6
scripts/gen_initramfs_list.sh
··· 5 5 # Released under the terms of the GNU GPL 6 6 # 7 7 # Generate a cpio packed initramfs. It uses gen_init_cpio to generate 8 - # the cpio archive, and gzip to pack it. 8 + # the cpio archive, and then compresses it. 9 9 # The script may also be used to generate the inputfile used for gen_init_cpio 10 10 # This script assumes that gen_init_cpio is located in usr/ directory 11 11 ··· 16 16 cat << EOF 17 17 Usage: 18 18 $0 [-o <file>] [-u <uid>] [-g <gid>] {-d | <cpio_source>} ... 19 - -o <file> Create gzipped initramfs file named <file> using 20 - gen_init_cpio and gzip 19 + -o <file> Create compressed initramfs file named <file> using 20 + gen_init_cpio and compressor depending on the extension 21 21 -u <uid> User ID to map to user ID 0 (root). 22 22 <uid> is only meaningful if <cpio_source> is a 23 23 directory. "squash" forces all files to uid 0. ··· 225 225 output="/dev/stdout" 226 226 output_file="" 227 227 is_cpio_compressed= 228 + compr="gzip -9 -f" 228 229 229 230 arg="$1" 230 231 case "$arg" in ··· 234 233 echo "deps_initramfs := \\" 235 234 shift 236 235 ;; 237 - "-o") # generate gzipped cpio image named $1 236 + "-o") # generate compressed cpio image named $1 238 237 shift 239 238 output_file="$1" 240 239 cpio_list="$(mktemp ${TMPDIR:-/tmp}/cpiolist.XXXXXX)" 241 240 output=${cpio_list} 241 + echo "$output_file" | grep -q "\.gz$" && compr="gzip -9 -f" 242 + echo "$output_file" | grep -q "\.bz2$" && compr="bzip2 -9 -f" 243 + echo "$output_file" | grep -q "\.lzma$" && compr="lzma -9 -f" 244 + echo "$output_file" | grep -q "\.cpio$" && compr="cat" 242 245 shift 243 246 ;; 244 247 esac ··· 279 274 esac 280 275 done 281 276 282 - # If output_file is set we will generate cpio archive and gzip it 277 + # If output_file is set we will generate cpio archive and compress it 283 278 # we are carefull to delete tmp files 284 279 if [ ! 
-z ${output_file} ]; then 285 280 if [ -z ${cpio_file} ]; then ··· 292 287 if [ "${is_cpio_compressed}" = "compressed" ]; then 293 288 cat ${cpio_tfile} > ${output_file} 294 289 else 295 - cat ${cpio_tfile} | gzip -f -9 - > ${output_file} 290 + (cat ${cpio_tfile} | ${compr} - > ${output_file}) \ 291 + || (rm -f ${output_file} ; false) 296 292 fi 297 293 [ -z ${cpio_file} ] && rm ${cpio_tfile} 298 294 fi
+3 -2
security/selinux/netlabel.c
··· 386 386 if (!S_ISSOCK(inode->i_mode) || 387 387 ((mask & (MAY_WRITE | MAY_APPEND)) == 0)) 388 388 return 0; 389 - 390 389 sock = SOCKET_I(inode); 391 390 sk = sock->sk; 391 + if (sk == NULL) 392 + return 0; 392 393 sksec = sk->sk_security; 393 - if (sksec->nlbl_state != NLBL_REQUIRE) 394 + if (sksec == NULL || sksec->nlbl_state != NLBL_REQUIRE) 394 395 return 0; 395 396 396 397 local_bh_disable();
+1 -1
sound/core/oss/rate.c
··· 157 157 while (dst_frames1 > 0) { 158 158 S1 = S2; 159 159 if (src_frames1-- > 0) { 160 - S1 = *src; 160 + S2 = *src; 161 161 src += src_step; 162 162 } 163 163 if (pos & ~R_MASK) {
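For context, the fixed line sits in a linear-interpolation resampler where S1 carries the previous input sample and S2 the current one; storing the fresh sample into S1 meant S2 was never advanced. A standalone 2x upsampler (our own sketch, not the ALSA code) with the carry done correctly:

```c
/* out must hold 2*n samples: even slots copy the input, odd slots
 * hold the midpoint between the current and next input sample. */
void upsample2x(const short *in, int n, short *out)
{
    short S1, S2 = in[0];
    int k;

    for (k = 0; k < n; ++k) {
        S1 = S2;                           /* previous sample */
        S2 = (k + 1 < n) ? in[k + 1] : S2; /* load next: the fixed line */
        out[2 * k]     = S1;
        out[2 * k + 1] = (short)((S1 + S2) / 2);
    }
}
```

With the original bug (assigning the fresh sample to S1), S2 would stay at its initial value and the interpolated output would flatten onto the first sample.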
+1 -1
sound/pci/aw2/aw2-alsa.c
··· 165 165 MODULE_PARM_DESC(enable, "Enable Audiowerk2 soundcard."); 166 166 167 167 static struct pci_device_id snd_aw2_ids[] = { 168 - {PCI_VENDOR_ID_SAA7146, PCI_DEVICE_ID_SAA7146, PCI_ANY_ID, PCI_ANY_ID, 168 + {PCI_VENDOR_ID_SAA7146, PCI_DEVICE_ID_SAA7146, 0, 0, 169 169 0, 0, 0}, 170 170 {0} 171 171 };
+1
sound/pci/emu10k1/emu10k1_main.c
··· 1528 1528 .ca0151_chip = 1, 1529 1529 .spk71 = 1, 1530 1530 .spdif_bug = 1, 1531 + .invert_shared_spdif = 1, /* digital/analog switch swapped */ 1531 1532 .ac97_chip = 1} , 1532 1533 {.vendor = 0x1102, .device = 0x0004, .subsystem = 0x10021102, 1533 1534 .driver = "Audigy2", .name = "SB Audigy 2 Platinum [SB0240P]",
+8 -7
sound/pci/hda/hda_hwdep.c
··· 277 277 { 278 278 struct snd_hwdep *hwdep = dev_get_drvdata(dev); 279 279 struct hda_codec *codec = hwdep->private_data; 280 - char *p; 281 - struct hda_verb verb, *v; 280 + struct hda_verb *v; 281 + int nid, verb, param; 282 282 283 - verb.nid = simple_strtoul(buf, &p, 0); 284 - verb.verb = simple_strtoul(p, &p, 0); 285 - verb.param = simple_strtoul(p, &p, 0); 286 - if (!verb.nid || !verb.verb || !verb.param) 283 + if (sscanf(buf, "%i %i %i", &nid, &verb, &param) != 3) 284 + return -EINVAL; 285 + if (!nid || !verb) 287 286 return -EINVAL; 288 287 v = snd_array_new(&codec->init_verbs); 289 288 if (!v) 290 289 return -ENOMEM; 291 - *v = verb; 290 + v->nid = nid; 291 + v->verb = verb; 292 + v->param = param; 292 293 return count; 293 294 } 294 295
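The switch from chained simple_strtoul() calls to sscanf() above is about error detection: the old chain could not tell a parsed 0 from a failed parse, so a legitimate param of 0 was rejected along with malformed input. The same pattern in userspace C (a sketch; `parse_verb` is our name, not the driver's):

```c
#include <stdio.h>

/* Accept "nid verb param" where param may be 0, but nid and verb
 * must be nonzero and all three fields must be present. sscanf's
 * return value counts successful conversions, which the strtoul
 * chain had no way to report. */
int parse_verb(const char *buf, int *nid, int *verb, int *param)
{
    if (sscanf(buf, "%i %i %i", nid, verb, param) != 3)
        return -1;
    if (!*nid || !*verb)
        return -1;
    return 0;
}
```

`%i` keeps the strtoul-style base detection, so hex input such as `0x14 0x707 0` still parses.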
+2
sound/pci/hda/hda_intel.c
··· 2095 2095 SND_PCI_QUIRK(0x1028, 0x20ac, "Dell Studio Desktop", 0x01), 2096 2096 /* including bogus ALC268 in slot#2 that conflicts with ALC888 */ 2097 2097 SND_PCI_QUIRK(0x17c0, 0x4085, "Medion MD96630", 0x01), 2098 + /* conflict of ALC268 in slot#3 (digital I/O); a temporary fix */ 2099 + SND_PCI_QUIRK(0x1179, 0xff00, "Toshiba laptop", 0x03), 2098 2100 {} 2099 2101 }; 2100 2102
+4
sound/pci/hda/patch_realtek.c
··· 7017 7017 case 0x106b3e00: /* iMac 24 Aluminium */ 7018 7018 board_config = ALC885_IMAC24; 7019 7019 break; 7020 + case 0x106b00a0: /* MacBookPro3,1 - Another revision */ 7020 7021 case 0x106b00a1: /* Macbook (might be wrong - PCI SSID?) */ 7021 7022 case 0x106b00a4: /* MacbookPro4,1 */ 7022 7023 case 0x106b2c00: /* Macbook Pro rev3 */ ··· 8469 8468 SND_PCI_QUIRK(0x1025, 0x013f, "Acer Aspire 5930G", 8470 8469 ALC888_ACER_ASPIRE_4930G), 8471 8470 SND_PCI_QUIRK(0x1025, 0x015e, "Acer Aspire 6930G", 8471 + ALC888_ACER_ASPIRE_4930G), 8472 + SND_PCI_QUIRK(0x1025, 0x0166, "Acer Aspire 6530G", 8472 8473 ALC888_ACER_ASPIRE_4930G), 8473 8474 SND_PCI_QUIRK(0x1025, 0, "Acer laptop", ALC883_ACER), /* default Acer */ 8474 8475 SND_PCI_QUIRK(0x1028, 0x020d, "Dell Inspiron 530", ALC888_6ST_DELL), ··· 10557 10554 SND_PCI_QUIRK(0x103c, 0x1309, "HP xw4*00", ALC262_HP_BPC), 10558 10555 SND_PCI_QUIRK(0x103c, 0x130a, "HP xw6*00", ALC262_HP_BPC), 10559 10556 SND_PCI_QUIRK(0x103c, 0x130b, "HP xw8*00", ALC262_HP_BPC), 10557 + SND_PCI_QUIRK(0x103c, 0x170b, "HP xw*", ALC262_HP_BPC), 10560 10558 SND_PCI_QUIRK(0x103c, 0x2800, "HP D7000", ALC262_HP_BPC_D7000_WL), 10561 10559 SND_PCI_QUIRK(0x103c, 0x2801, "HP D7000", ALC262_HP_BPC_D7000_WF), 10562 10560 SND_PCI_QUIRK(0x103c, 0x2802, "HP D7000", ALC262_HP_BPC_D7000_WL),
+1 -1
sound/pci/hda/patch_sigmatel.c
··· 4989 4989 case STAC_DELL_M4_3: 4990 4990 spec->num_dmics = 1; 4991 4991 spec->num_smuxes = 0; 4992 - spec->num_dmuxes = 0; 4992 + spec->num_dmuxes = 1; 4993 4993 break; 4994 4994 default: 4995 4995 spec->num_dmics = STAC92HD71BXX_NUM_DMICS;
+6 -6
sound/pci/pcxhr/pcxhr.h
··· 97 97 int capture_chips; 98 98 int fw_file_set; 99 99 int firmware_num; 100 - int is_hr_stereo:1; 101 - int board_has_aes1:1; /* if 1 board has AES1 plug and SRC */ 102 - int board_has_analog:1; /* if 0 the board is digital only */ 103 - int board_has_mic:1; /* if 1 the board has microphone input */ 104 - int board_aes_in_192k:1;/* if 1 the aes input plugs do support 192kHz */ 105 - int mono_capture:1; /* if 1 the board does mono capture */ 100 + unsigned int is_hr_stereo:1; 101 + unsigned int board_has_aes1:1; /* if 1 board has AES1 plug and SRC */ 102 + unsigned int board_has_analog:1; /* if 0 the board is digital only */ 103 + unsigned int board_has_mic:1; /* if 1 the board has microphone input */ 104 + unsigned int board_aes_in_192k:1;/* if 1 the aes input plugs do support 192kHz */ 105 + unsigned int mono_capture:1; /* if 1 the board does mono capture */ 106 106 107 107 struct snd_dma_buffer hostport; 108 108
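The pcxhr.h change is the classic signed-bitfield pitfall: a plain `int` one-bit field has room only for a sign bit, so storing 1 reads back as -1 on compilers that treat plain `int` bitfields as signed (gcc does; the standard leaves the signedness implementation-defined). A minimal illustration, not the driver struct:

```c
struct flags_signed   { int          b:1; };
struct flags_unsigned { unsigned int b:1; };

/* With gcc/clang, the signed one-bit field yields -1 after storing 1;
 * the unsigned field keeps the value 1, which is what flag tests like
 * "if (chip->board_has_mic == 1)" expect. */
int read_signed(void)
{
    struct flags_signed f;
    f.b = 1;
    return f.b;
}

int read_unsigned(void)
{
    struct flags_unsigned f;
    f.b = 1;
    return f.b;
}
```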
+89
usr/Kconfig
··· 44 44 owned by group root in the initial ramdisk image. 45 45 46 46 If you are not sure, leave it set to "0". 47 + 48 + config RD_GZIP 49 + bool "Initial ramdisk compressed using gzip" 50 + default y 51 + depends on BLK_DEV_INITRD=y 52 + select DECOMPRESS_GZIP 53 + help 54 + Support loading of a gzip encoded initial ramdisk or cpio buffer. 55 + If unsure, say Y. 56 + 57 + config RD_BZIP2 58 + bool "Initial ramdisk compressed using bzip2" 59 + default n 60 + depends on BLK_DEV_INITRD=y 61 + select DECOMPRESS_BZIP2 62 + help 63 + Support loading of a bzip2 encoded initial ramdisk or cpio buffer 64 + If unsure, say N. 65 + 66 + config RD_LZMA 67 + bool "Initial ramdisk compressed using lzma" 68 + default n 69 + depends on BLK_DEV_INITRD=y 70 + select DECOMPRESS_LZMA 71 + help 72 + Support loading of a lzma encoded initial ramdisk or cpio buffer 73 + If unsure, say N. 74 + 75 + choice 76 + prompt "Built-in initramfs compression mode" 77 + help 78 + This setting is only meaningful if the INITRAMFS_SOURCE is 79 + set. It decides by which algorithm the INITRAMFS_SOURCE will 80 + be compressed. 81 + Several compression algorithms are available, which differ 82 + in efficiency, compression and decompression speed. 83 + Compression speed is only relevant when building a kernel. 84 + Decompression speed is relevant at each boot. 85 + 86 + If you have any problems with bzip2 or lzma compressed 87 + initramfs, mail me (Alain Knaff) <alain@knaff.lu>. 88 + 89 + High compression options are mostly useful for users who 90 + are low on disk space (embedded systems), but for whom ram 91 + size matters less. 92 + 93 + If in doubt, select 'gzip' 94 + 95 + config INITRAMFS_COMPRESSION_NONE 96 + bool "None" 97 + help 98 + Do not compress the built-in initramfs at all. 
This may 99 + sound wasteful in space, but, you should be aware that the 100 + built-in initramfs will be compressed at a later stage 101 + anyways along with the rest of the kernel, on those 102 + architectures that support this. 103 + However, not compressing the initramfs may lead to slightly 104 + higher memory consumption during a short time at boot, while 105 + both the cpio image and the unpacked filesystem image will 106 + be present in memory simultaneously 107 + 108 + config INITRAMFS_COMPRESSION_GZIP 109 + bool "Gzip" 110 + depends on RD_GZIP 111 + help 112 + The old and tried gzip compression. Its compression ratio is 113 + the poorest among the 3 choices; however its speed (both 114 + compression and decompression) is the fastest. 115 + 116 + config INITRAMFS_COMPRESSION_BZIP2 117 + bool "Bzip2" 118 + depends on RD_BZIP2 119 + help 120 + Its compression ratio and speed is intermediate. 121 + Decompression speed is slowest among the three. The initramfs 122 + size is about 10% smaller with bzip2, in comparison to gzip. 123 + Bzip2 uses a large amount of memory. For modern kernels you 124 + will need at least 8MB RAM or more for booting. 125 + 126 + config INITRAMFS_COMPRESSION_LZMA 127 + bool "LZMA" 128 + depends on RD_LZMA 129 + help 130 + The most recent compression algorithm. 131 + Its ratio is best, decompression speed is between the other 132 + two. Compression is slowest. The initramfs size is about 33% 133 + smaller with LZMA in comparison to gzip. 134 + 135 + endchoice
+25 -13
usr/Makefile
··· 6 6 PHONY += klibcdirs 7 7 8 8 9 - # Generate builtin.o based on initramfs_data.o 10 - obj-$(CONFIG_BLK_DEV_INITRD) := initramfs_data.o 9 + # No compression 10 + suffix_$(CONFIG_INITRAMFS_COMPRESSION_NONE) = 11 11 12 - # initramfs_data.o contains the initramfs_data.cpio.gz image. 12 + # Gzip, but no bzip2 13 + suffix_$(CONFIG_INITRAMFS_COMPRESSION_GZIP) = .gz 14 + 15 + # Bzip2 16 + suffix_$(CONFIG_INITRAMFS_COMPRESSION_BZIP2) = .bz2 17 + 18 + # Lzma 19 + suffix_$(CONFIG_INITRAMFS_COMPRESSION_LZMA) = .lzma 20 + 21 + # Generate builtin.o based on initramfs_data.o 22 + obj-$(CONFIG_BLK_DEV_INITRD) := initramfs_data$(suffix_y).o 23 + 24 + # initramfs_data.o contains the compressed initramfs_data.cpio image. 13 25 # The image is included using .incbin, a dependency which is not 14 26 # tracked automatically. 15 - $(obj)/initramfs_data.o: $(obj)/initramfs_data.cpio.gz FORCE 27 + $(obj)/initramfs_data$(suffix_y).o: $(obj)/initramfs_data.cpio$(suffix_y) FORCE 16 28 17 29 ##### 18 30 # Generate the initramfs cpio archive ··· 37 25 $(if $(CONFIG_INITRAMFS_ROOT_UID), -u $(CONFIG_INITRAMFS_ROOT_UID)) \ 38 26 $(if $(CONFIG_INITRAMFS_ROOT_GID), -g $(CONFIG_INITRAMFS_ROOT_GID)) 39 27 40 - # .initramfs_data.cpio.gz.d is used to identify all files included 28 + # .initramfs_data.cpio.d is used to identify all files included 41 29 # in initramfs and to detect if any files are added/removed. 
42 30 # Removed files are identified by directory timestamp being updated 43 31 # The dependency list is generated by gen_initramfs.sh -l 44 - ifneq ($(wildcard $(obj)/.initramfs_data.cpio.gz.d),) 45 - include $(obj)/.initramfs_data.cpio.gz.d 32 + ifneq ($(wildcard $(obj)/.initramfs_data.cpio.d),) 33 + include $(obj)/.initramfs_data.cpio.d 46 34 endif 47 35 48 36 quiet_cmd_initfs = GEN $@ 49 37 cmd_initfs = $(initramfs) -o $@ $(ramfs-args) $(ramfs-input) 50 38 51 - targets := initramfs_data.cpio.gz 39 + targets := initramfs_data.cpio.gz initramfs_data.cpio.bz2 initramfs_data.cpio.lzma initramfs_data.cpio 52 40 # do not try to update files included in initramfs 53 41 $(deps_initramfs): ; 54 42 55 43 $(deps_initramfs): klibcdirs 56 - # We rebuild initramfs_data.cpio.gz if: 57 - # 1) Any included file is newer then initramfs_data.cpio.gz 44 + # We rebuild initramfs_data.cpio if: 45 + # 1) Any included file is newer then initramfs_data.cpio 58 46 # 2) There are changes in which files are included (added or deleted) 59 - # 3) If gen_init_cpio are newer than initramfs_data.cpio.gz 47 + # 3) If gen_init_cpio are newer than initramfs_data.cpio 60 48 # 4) arguments to gen_initramfs.sh changes 61 - $(obj)/initramfs_data.cpio.gz: $(obj)/gen_init_cpio $(deps_initramfs) klibcdirs 62 - $(Q)$(initramfs) -l $(ramfs-input) > $(obj)/.initramfs_data.cpio.gz.d 49 + $(obj)/initramfs_data.cpio$(suffix_y): $(obj)/gen_init_cpio $(deps_initramfs) klibcdirs 50 + $(Q)$(initramfs) -l $(ramfs-input) > $(obj)/.initramfs_data.cpio.d 63 51 $(call if_changed,initfs) 64 52
+1 -1
usr/initramfs_data.S
··· 26 26 */ 27 27 28 28 .section .init.ramfs,"a" 29 - .incbin "usr/initramfs_data.cpio.gz" 29 + .incbin "usr/initramfs_data.cpio" 30 30
+29
usr/initramfs_data.bz2.S
··· 1 + /* 2 + initramfs_data includes the compressed binary that is the 3 + filesystem used for early user space. 4 + Note: Older versions of "as" (prior to binutils 2.11.90.0.23 5 + released on 2001-07-14) dit not support .incbin. 6 + If you are forced to use older binutils than that then the 7 + following trick can be applied to create the resulting binary: 8 + 9 + 10 + ld -m elf_i386 --format binary --oformat elf32-i386 -r \ 11 + -T initramfs_data.scr initramfs_data.cpio.gz -o initramfs_data.o 12 + ld -m elf_i386 -r -o built-in.o initramfs_data.o 13 + 14 + initramfs_data.scr looks like this: 15 + SECTIONS 16 + { 17 + .init.ramfs : { *(.data) } 18 + } 19 + 20 + The above example is for i386 - the parameters vary from architectures. 21 + Eventually look up LDFLAGS_BLOB in an older version of the 22 + arch/$(ARCH)/Makefile to see the flags used before .incbin was introduced. 23 + 24 + Using .incbin has the advantage over ld that the correct flags are set 25 + in the ELF header, as required by certain architectures. 26 + */ 27 + 28 + .section .init.ramfs,"a" 29 + .incbin "usr/initramfs_data.cpio.bz2"
+29
usr/initramfs_data.gz.S
··· 1 + /* 2 + initramfs_data includes the compressed binary that is the 3 + filesystem used for early user space. 4 + Note: Older versions of "as" (prior to binutils 2.11.90.0.23 5 + released on 2001-07-14) dit not support .incbin. 6 + If you are forced to use older binutils than that then the 7 + following trick can be applied to create the resulting binary: 8 + 9 + 10 + ld -m elf_i386 --format binary --oformat elf32-i386 -r \ 11 + -T initramfs_data.scr initramfs_data.cpio.gz -o initramfs_data.o 12 + ld -m elf_i386 -r -o built-in.o initramfs_data.o 13 + 14 + initramfs_data.scr looks like this: 15 + SECTIONS 16 + { 17 + .init.ramfs : { *(.data) } 18 + } 19 + 20 + The above example is for i386 - the parameters vary from architectures. 21 + Eventually look up LDFLAGS_BLOB in an older version of the 22 + arch/$(ARCH)/Makefile to see the flags used before .incbin was introduced. 23 + 24 + Using .incbin has the advantage over ld that the correct flags are set 25 + in the ELF header, as required by certain architectures. 26 + */ 27 + 28 + .section .init.ramfs,"a" 29 + .incbin "usr/initramfs_data.cpio.gz"
+29
usr/initramfs_data.lzma.S
··· 1 + /* 2 + initramfs_data includes the compressed binary that is the 3 + filesystem used for early user space. 4 + Note: Older versions of "as" (prior to binutils 2.11.90.0.23 5 + released on 2001-07-14) dit not support .incbin. 6 + If you are forced to use older binutils than that then the 7 + following trick can be applied to create the resulting binary: 8 + 9 + 10 + ld -m elf_i386 --format binary --oformat elf32-i386 -r \ 11 + -T initramfs_data.scr initramfs_data.cpio.gz -o initramfs_data.o 12 + ld -m elf_i386 -r -o built-in.o initramfs_data.o 13 + 14 + initramfs_data.scr looks like this: 15 + SECTIONS 16 + { 17 + .init.ramfs : { *(.data) } 18 + } 19 + 20 + The above example is for i386 - the parameters vary from architectures. 21 + Eventually look up LDFLAGS_BLOB in an older version of the 22 + arch/$(ARCH)/Makefile to see the flags used before .incbin was introduced. 23 + 24 + Using .incbin has the advantage over ld that the correct flags are set 25 + in the ELF header, as required by certain architectures. 26 + */ 27 + 28 + .section .init.ramfs,"a" 29 + .incbin "usr/initramfs_data.cpio.lzma"