Merge branch 'master' into upstream-fixes

+5484 -1759
+1 -1
CREDITS
··· 3279 3279 S: Spain 3280 3280 3281 3281 N: Linus Torvalds 3282 - E: torvalds@osdl.org 3282 + E: torvalds@linux-foundation.org 3283 3283 D: Original kernel hacker 3284 3284 S: 12725 SW Millikan Way, Suite 400 3285 3285 S: Beaverton, Oregon 97005
+4
Documentation/SubmitChecklist
··· 72 72 73 73 If the new code is substantial, addition of subsystem-specific fault 74 74 injection might be appropriate. 75 + 76 + 22: Newly-added code has been compiled with `gcc -W'. This will generate 77 + lots of noise, but is good for finding bugs like "warning: comparison 78 + between signed and unsigned".
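As an illustration of the kind of mismatch `gcc -W' flags, here is a made-up fragment (not code from the tree): a signed loop counter compared against an unsigned size_t.

    #include <string.h>

    static int count_dots(const char *buf)
    {
        size_t len = strlen(buf);    /* size_t is unsigned */
        int i, dots = 0;             /* signed loop counter */

        for (i = 0; i < len; i++)    /* gcc -W: "warning: comparison between
                                        signed and unsigned" */
            if (buf[i] == '.')
                dots++;
        return dots;
    }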
+3 -3
Documentation/SubmittingPatches
··· 134 134 135 135 136 136 Linus Torvalds is the final arbiter of all changes accepted into the 137 - Linux kernel. His e-mail address is <torvalds@osdl.org>. He gets 138 - a lot of e-mail, so typically you should do your best to -avoid- sending 139 - him e-mail. 137 + Linux kernel. His e-mail address is <torvalds@linux-foundation.org>. 138 + He gets a lot of e-mail, so typically you should do your best to -avoid- 139 + sending him e-mail. 140 140 141 141 Patches which are bug fixes, are "obvious" changes, or similarly 142 142 require little discussion should be sent or CC'd to Linus. Patches
+7
Documentation/feature-removal-schedule.txt
··· 318 318 Who: Len Brown <len.brown@intel.com> 319 319 320 320 --------------------------- 321 + 322 + What: JFFS (version 1) 323 + When: 2.6.21 324 + Why: Unmaintained for years, superseded by JFFS2 for years. 325 + Who: Jeff Garzik <jeff@garzik.org> 326 + 327 + ---------------------------
+38 -11
Documentation/kdump/kdump.txt
··· 17 17 memory image to a dump file on the local disk, or across the network to 18 18 a remote system. 19 19 20 - Kdump and kexec are currently supported on the x86, x86_64, ppc64 and IA64 20 + Kdump and kexec are currently supported on the x86, x86_64, ppc64 and ia64 21 21 architectures. 22 22 23 23 When the system kernel boots, it reserves a small section of memory for ··· 61 61 62 62 2) Download the kexec-tools user-space package from the following URL: 63 63 64 - http://www.kernel.org/pub/linux/kernel/people/horms/kexec-tools/kexec-tools-testing-20061214.tar.gz 64 + http://www.kernel.org/pub/linux/kernel/people/horms/kexec-tools/kexec-tools-testing.tar.gz 65 + 66 + This is a symlink to the latest version, which at the time of writing is 67 + 20061214, the only release of kexec-tools-testing so far. As other versions 68 + are released, the older ones will remain available at 69 + http://www.kernel.org/pub/linux/kernel/people/horms/kexec-tools/ 65 70 66 71 Note: Latest kexec-tools-testing git tree is available at 67 72 ··· 76 71 77 72 3) Unpack the tarball with the tar command, as follows: 78 73 79 - tar xvpzf kexec-tools-testing-20061214.tar.gz 74 + tar xvpzf kexec-tools-testing.tar.gz 80 75 81 - 4) Change to the kexec-tools-1.101 directory, as follows: 76 + 4) Change to the kexec-tools directory, as follows: 82 77 83 - cd kexec-tools-testing-20061214 78 + cd kexec-tools-testing-VERSION 84 79 85 80 5) Configure the package, as follows: 86 81 ··· 229 224 230 225 Dump-capture kernel config options (Arch Dependent, ia64) 231 226 ---------------------------------------------------------- 232 - (To be filled) 227 + 228 + - No specific options are required to create a dump-capture kernel 229 + for ia64, other than those specified in the arch independent section 230 + above. This means that it is possible to use the system kernel 231 + as a dump-capture kernel if desired. 232 + 233 + The crashkernel region can be automatically placed by the system 234 + kernel at run time. This is done by specifying the base address as 0, 235 + or omitting it altogether. 236 + 237 + crashkernel=256M@0 238 + or 239 + crashkernel=256M 240 + 241 + If the start address is specified, note that the start address of the 242 + kernel will be aligned to 64MB, so if the start address is not, then 243 + any space below the alignment point will be wasted. 233 244 234 245 235 246 Boot into System Kernel ··· 263 242 On x86 and x86_64, use "crashkernel=64M@16M". 264 243 265 244 On ppc64, use "crashkernel=128M@32M". 245 + 246 + On ia64, 256M@256M is a generous value that typically works. 247 + The region may be automatically placed on ia64, see the 248 + dump-capture kernel config option notes above. 266 249 267 250 Load the Dump-capture Kernel 268 251 ============================ ··· 286 261 For ppc64: 287 262 - Use vmlinux 288 263 For ia64: 289 - (To be filled) 264 + - Use vmlinux or vmlinuz.gz 265 + 290 266 291 267 If you are using a uncompressed vmlinux image then use following command 292 268 to load dump-capture kernel. ··· 303 277 --initrd=<initrd-for-dump-capture-kernel> \ 304 278 --append="root=<root-dev> <arch-specific-options>" 305 279 280 + Please note that --args-linux does not need to be specified for ia64. 281 + It is planned to make this a no-op on that architecture, but for now 282 + it should be omitted. 283 + 306 284 Following are the arch specific command line options to be used while 307 285 loading dump-capture kernel. 
308 286 309 - For i386 and x86_64: 287 + For i386, x86_64 and ia64: 310 288 "init 1 irqpoll maxcpus=1" 311 289 312 290 For ppc64: 313 291 "init 1 maxcpus=1 noirqdistrib" 314 - 315 - For IA64 316 - (To be filled) 317 292 318 293 319 294 Notes on loading the dump-capture kernel:
+549 -153
Documentation/pci.txt
··· 1 - How To Write Linux PCI Drivers 2 1 3 - by Martin Mares <mj@ucw.cz> on 07-Feb-2000 2 + How To Write Linux PCI Drivers 3 + 4 + by Martin Mares <mj@ucw.cz> on 07-Feb-2000 5 + updated by Grant Grundler <grundler@parisc-linux.org> on 23-Dec-2006 4 6 5 7 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 6 - The world of PCI is vast and it's full of (mostly unpleasant) surprises. 7 - Different PCI devices have different requirements and different bugs -- 8 - because of this, the PCI support layer in Linux kernel is not as trivial 9 - as one would wish. This short pamphlet tries to help all potential driver 10 - authors find their way through the deep forests of PCI handling. 8 + The world of PCI is vast and full of (mostly unpleasant) surprises. 9 + Since each CPU architecture implements different chip-sets and PCI devices 10 + have different requirements (erm, "features"), the result is the PCI support 11 + in the Linux kernel is not as trivial as one would wish. This short paper 12 + tries to introduce all potential driver authors to Linux APIs for 13 + PCI device drivers. 14 + 15 + A more complete resource is the third edition of "Linux Device Drivers" 16 + by Jonathan Corbet, Alessandro Rubini, and Greg Kroah-Hartman. 17 + LDD3 is available for free (under Creative Commons License) from: 18 + 19 + http://lwn.net/Kernel/LDD3/ 20 + 21 + However, keep in mind that all documents are subject to "bit rot". 22 + Refer to the source code if things are not working as described here. 23 + 24 + Please send questions/comments/patches about Linux PCI API to the 25 + "Linux PCI" <linux-pci@atrey.karlin.mff.cuni.cz> mailing list. 26 + 11 27 12 28 13 29 0. Structure of PCI drivers 14 30 ~~~~~~~~~~~~~~~~~~~~~~~~~~~ 15 - There exist two kinds of PCI drivers: new-style ones (which leave most of 16 - probing for devices to the PCI layer and support online insertion and removal 17 - of devices [thus supporting PCI, hot-pluggable PCI and CardBus in a single 18 - driver]) and old-style ones which just do all the probing themselves. Unless 19 - you have a very good reason to do so, please don't use the old way of probing 20 - in any new code. After the driver finds the devices it wishes to operate 21 - on (either the old or the new way), it needs to perform the following steps: 31 + PCI drivers "discover" PCI devices in a system via pci_register_driver(). 32 + Actually, it's the other way around. When the PCI generic code discovers 33 + a new device, the driver with a matching "description" will be notified. 34 + Details on this below. 35 + 36 + pci_register_driver() leaves most of the probing for devices to 37 + the PCI layer and supports online insertion/removal of devices [thus 38 + supporting hot-pluggable PCI, CardBus, and Express-Card in a single driver]. 39 + pci_register_driver() call requires passing in a table of function 40 + pointers and thus dictates the high level structure of a driver. 
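To make that high level structure concrete, here is a minimal registration sketch. The "mydev" names and the 0x1234/0x5678 IDs are invented for illustration; the individual fields and section markers are explained in the hunks that follow.

    #include <linux/init.h>
    #include <linux/module.h>
    #include <linux/pci.h>

    static struct pci_device_id mydev_id_table[] __devinitdata = {
        { PCI_DEVICE(0x1234, 0x5678) },   /* hypothetical vendor/device */
        { 0, }
    };
    MODULE_DEVICE_TABLE(pci, mydev_id_table);

    static int __devinit mydev_probe(struct pci_dev *pdev,
                                     const struct pci_device_id *id)
    {
        /* enable the device, request resources, etc. (see section 3) */
        return 0;
    }

    static void __devexit mydev_remove(struct pci_dev *pdev)
    {
        /* undo everything done in probe, in reverse order (see section 4) */
    }

    static struct pci_driver mydev_driver = {
        .name     = "mydev",
        .id_table = mydev_id_table,
        .probe    = mydev_probe,
        .remove   = __devexit_p(mydev_remove),
    };

    static int __init mydev_init(void)
    {
        return pci_register_driver(&mydev_driver);
    }

    static void __exit mydev_exit(void)
    {
        pci_unregister_driver(&mydev_driver);
    }

    module_init(mydev_init);
    module_exit(mydev_exit);
    MODULE_LICENSE("GPL");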
41 + 42 + Once the driver knows about a PCI device and takes ownership, the 43 + driver generally needs to perform the following initialization: 22 44 23 45 Enable the device 24 - Access device configuration space 25 - Discover resources (addresses and IRQ numbers) provided by the device 26 - Allocate these resources 27 - Communicate with the device 46 + Request MMIO/IOP resources 47 + Set the DMA mask size (for both coherent and streaming DMA) 48 + Allocate and initialize shared control data (pci_alloc_consistent()) 49 + Access device configuration space (if needed) 50 + Register IRQ handler (request_irq()) 51 + Initialize non-PCI (i.e. LAN/SCSI/etc parts of the chip) 52 + Enable DMA/processing engines 53 + 54 + When done using the device, and perhaps the module needs to be unloaded, 55 + the driver needs to take the following steps: 56 + Disable the device from generating IRQs 57 + Release the IRQ (free_irq()) 58 + Stop all DMA activity 59 + Release DMA buffers (both streaming and coherent) 60 + Unregister from other subsystems (e.g. scsi or netdev) 61 + Release MMIO/IOP resources 28 62 Disable the device 29 63 30 - Most of these topics are covered by the following sections, for the rest 31 - look at <linux/pci.h>, it's hopefully well commented. 64 + Most of these topics are covered in the following sections. 65 + For the rest look at LDD3 or <linux/pci.h> . 32 66 33 67 If the PCI subsystem is not configured (CONFIG_PCI is not set), most of 34 - the functions described below are defined as inline functions either completely 35 - empty or just returning an appropriate error codes to avoid lots of ifdefs 36 - in the drivers. 68 + the PCI functions described below are defined as inline functions either 69 + completely empty or just returning appropriate error codes to avoid 70 + lots of ifdefs in the drivers. 37 71 38 72 39 - 1. New-style drivers 40 - ~~~~~~~~~~~~~~~~~~~~ 41 - The new-style drivers just call pci_register_driver during their initialization 42 - with a pointer to a structure describing the driver (struct pci_driver) which 43 - contains: 44 73 45 - name Name of the driver 74 + 1. pci_register_driver() call 75 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 76 + 77 + PCI device drivers call pci_register_driver() during their 78 + initialization with a pointer to a structure describing the driver 79 + (struct pci_driver): 80 + 81 + field name Description 82 + ---------- ------------------------------------------------------ 46 83 id_table Pointer to table of device ID's the driver is 47 84 interested in. Most drivers should export this 48 85 table using MODULE_DEVICE_TABLE(pci,...). 49 - probe Pointer to a probing function which gets called (during 50 - execution of pci_register_driver for already existing 51 - devices or later if a new device gets inserted) for all 52 - PCI devices which match the ID table and are not handled 53 - by the other drivers yet. This function gets passed a 54 - pointer to the pci_dev structure representing the device 55 - and also which entry in the ID table did the device 56 - match. It returns zero when the driver has accepted the 57 - device or an error code (negative number) otherwise. 58 - This function always gets called from process context, 59 - so it can sleep. 60 - remove Pointer to a function which gets called whenever a 61 - device being handled by this driver is removed (either 62 - during deregistration of the driver or when it's 63 - manually pulled out of a hot-pluggable slot).
This 64 - function always gets called from process context, so it 65 - can sleep. 66 - save_state Save a device's state before it's suspend. 86 + 87 + probe This probing function gets called (during execution 88 + of pci_register_driver() for already existing 89 + devices or later if a new device gets inserted) for 90 + all PCI devices which match the ID table and are not 91 + "owned" by the other drivers yet. This function gets 92 + passed a "struct pci_dev *" for each device whose 93 + entry in the ID table matches the device. The probe 94 + function returns zero when the driver chooses to 95 + take "ownership" of the device or an error code 96 + (negative number) otherwise. 97 + The probe function always gets called from process 98 + context, so it can sleep. 99 + 100 + remove The remove() function gets called whenever a device 101 + being handled by this driver is removed (either during 102 + deregistration of the driver or when it's manually 103 + pulled out of a hot-pluggable slot). 104 + The remove function always gets called from process 105 + context, so it can sleep. 106 + 67 107 suspend Put device into low power state. 108 + suspend_late Put device into low power state. 109 + 110 + resume_early Wake device from low power state. 68 111 resume Wake device from low power state. 112 + 113 + (Please see Documentation/power/pci.txt for descriptions 114 + of PCI Power Management and the related functions.) 115 + 69 116 enable_wake Enable device to generate wake events from a low power 70 117 state. 71 118 72 - (Please see Documentation/power/pci.txt for descriptions 73 - of PCI Power Management and the related functions) 119 + shutdown Hook into reboot_notifier_list (kernel/sys.c). 120 + Intended to stop any idling DMA operations. 121 + Useful for enabling wake-on-lan (NIC) or changing 122 + the power state of a device before reboot. 123 + e.g. drivers/net/e100.c. 74 124 75 - The ID table is an array of struct pci_device_id ending with a all-zero entry. 76 - Each entry consists of: 125 + err_handler See Documentation/pci-error-recovery.txt 77 126 78 - vendor, device Vendor and device ID to match (or PCI_ANY_ID) 127 + multithread_probe Enable multi-threaded probe/scan. Driver must 128 + provide its own locking/synchronization for init 129 + operations if this is enabled. 130 + 131 + 132 + The ID table is an array of struct pci_device_id entries ending with an 133 + all-zero entry. Each entry consists of: 134 + 135 + vendor,device Vendor and device ID to match (or PCI_ANY_ID) 136 + 79 137 subvendor, Subsystem vendor and device ID to match (or PCI_ANY_ID) 80 - subdevice 81 - class, Device class to match. The class_mask tells which bits 82 - class_mask of the class are honored during the comparison. 138 + subdevice, 139 + 140 + class Device class, subclass, and "interface" to match. 141 + See Appendix D of the PCI Local Bus Spec or 142 + include/linux/pci_ids.h for a full list of classes. 143 + Most drivers do not need to specify class/class_mask 144 + as vendor/device is normally sufficient. 145 + 146 + class_mask limits which sub-fields of the class field are compared. 147 + See drivers/scsi/sym53c8xx_2/ for an example of usage. 148 + 83 149 driver_data Data private to the driver. 150 + Most drivers don't need to use the driver_data field. 151 + Best practice is to use driver_data as an index 152 + into a static list of equivalent device types, 153 + instead of using it as a pointer. 84 154 85 - Most drivers don't need to use the driver_data field.
Best practice 86 - for use of driver_data is to use it as an index into a static list of 87 - equivalent device types, not to use it as a pointer. 88 155 89 - Have a table entry {PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID} 90 - to have probe() called for every PCI device known to the system. 156 + Most drivers only need PCI_DEVICE() or PCI_DEVICE_CLASS() to set up 157 + a pci_device_id table. 91 158 92 - New PCI IDs may be added to a device driver at runtime by writing 93 - to the file /sys/bus/pci/drivers/{driver}/new_id. When added, the 94 - driver will probe for all devices it can support. 159 + New PCI IDs may be added to a device driver pci_ids table at runtime 160 + as shown below: 95 161 96 162 echo "vendor device subvendor subdevice class class_mask driver_data" > \ 97 - /sys/bus/pci/drivers/{driver}/new_id 98 - where all fields are passed in as hexadecimal values (no leading 0x). 99 - Users need pass only as many fields as necessary; vendor, device, 100 - subvendor, and subdevice fields default to PCI_ANY_ID (FFFFFFFF), 101 - class and classmask fields default to 0, and driver_data defaults to 102 - 0UL. Device drivers must initialize use_driver_data in the dynids struct 103 - in their pci_driver struct prior to calling pci_register_driver in order 104 - for the driver_data field to get passed to the driver. Otherwise, only a 105 - 0 is passed in that field. 163 + /sys/bus/pci/drivers/{driver}/new_id 164 + 165 + All fields are passed in as hexadecimal values (no leading 0x). 166 + Users need pass only as many fields as necessary: 167 + o vendor, device, subvendor, and subdevice fields default 168 + to PCI_ANY_ID (FFFFFFFF), 169 + o class and classmask fields default to 0 170 + o driver_data defaults to 0UL. 171 + 172 + Once added, the driver probe routine will be invoked for any unclaimed 173 + PCI devices listed in its (newly updated) pci_ids list. 106 174 107 175 When the driver exits, it just calls pci_unregister_driver() and the PCI layer 108 176 automatically calls the remove hook for all devices handled by the driver. 177 + 178 + 179 + 1.1 "Attributes" for driver functions/data 109 180 110 181 Please mark the initialization and cleanup functions where appropriate 111 182 (the corresponding macros are defined in <linux/init.h>): ··· 184 113 __init Initialization code. Thrown away after the driver 185 114 initializes. 186 115 __exit Exit code. Ignored for non-modular drivers. 187 - __devinit Device initialization code. Identical to __init if 188 - the kernel is not compiled with CONFIG_HOTPLUG, normal 189 - function otherwise. 116 + 117 + 118 + __devinit Device initialization code. 119 + Identical to __init if the kernel is not compiled 120 + with CONFIG_HOTPLUG, normal function otherwise. 190 121 __devexit The same for __exit. 191 122 192 - Tips: 193 - The module_init()/module_exit() functions (and all initialization 194 - functions called only from these) should be marked __init/exit. 195 - The struct pci_driver shouldn't be marked with any of these tags. 196 - The ID table array should be marked __devinitdata. 197 - The probe() and remove() functions (and all initialization 198 - functions called only from these) should be marked __devinit/exit. 199 - If you are sure the driver is not a hotplug driver then use only 200 - __init/exit __initdata/exitdata. 123 + Tips on when/where to use the above attributes: 124 + o The module_init()/module_exit() functions (and all 125 + initialization functions called _only_ from these) 126 + should be marked __init/__exit. 
201 127 202 - Pointers to functions marked as __devexit must be created using 203 - __devexit_p(function_name). That will generate the function 204 - name or NULL if the __devexit function will be discarded. 128 + 129 + o Do not mark the struct pci_driver. 130 + 131 + o The ID table array should be marked __devinitdata. 132 + 133 + o The probe() and remove() functions should be marked __devinit 134 + and __devexit respectively. All initialization functions 135 + exclusively called by the probe() routine can be marked __devinit. 136 + Ditto for remove() and __devexit. 137 + 138 + o If mydriver_remove() is marked with __devexit, then all address 139 + references to mydriver_remove must use __devexit_p(mydriver_remove) 140 + (in the struct pci_driver declaration for example). 141 + __devexit_p() will generate the function name _or_ NULL if the 142 + function will be discarded. For an example, see drivers/net/tg3.c. 143 + 144 + o Do NOT mark a function if you are not sure which mark to use. 145 + Better to not mark the function than mark the function wrong. 205 145 206 146 207 - 2. How to find PCI devices manually (the old style) 208 - ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 209 - PCI drivers not using the pci_register_driver() interface search 210 - for PCI devices manually using the following constructs: 147 + 148 + 2. How to find PCI devices manually 149 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 150 + 151 + PCI drivers should have a really good reason for not using the 152 + pci_register_driver() interface to search for PCI devices. 153 + The main reason PCI devices are controlled by multiple drivers 154 + is that one PCI device implements several different HW services. 155 + E.g. combined serial/parallel port/floppy controller. 156 + 157 + A manual search may be performed using the following constructs: 211 158 212 159 Searching by vendor and device ID: 213 160 ··· 239 150 240 151 Searching by both vendor/device and subsystem vendor/device ID: 241 152 242 - pci_get_subsys(VENDOR_ID, DEVICE_ID, SUBSYS_VENDOR_ID, SUBSYS_DEVICE_ID, dev). 153 + pci_get_subsys(VENDOR_ID,DEVICE_ID, SUBSYS_VENDOR_ID, SUBSYS_DEVICE_ID, dev). 243 154 244 - You can use the constant PCI_ANY_ID as a wildcard replacement for 155 + You can use the constant PCI_ANY_ID as a wildcard replacement for 245 156 VENDOR_ID or DEVICE_ID. This allows searching for any device from a 246 157 specific vendor, for example. 247 158 248 - These functions are hotplug-safe. They increment the reference count on 159 + These functions are hotplug-safe. They increment the reference count on 249 160 the pci_dev that they return. You must eventually (possibly at module unload) 250 161 decrement the reference count on these devices by calling pci_dev_put(). 251 162 252 163 253 - 3. Enabling and disabling devices 254 - ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 255 - Before you do anything with the device you've found, you need to enable 256 - it by calling pci_enable_device() which enables I/O and memory regions of 257 - the device, allocates an IRQ if necessary, assigns missing resources if 258 - needed and wakes up the device if it was in suspended state. Please note 259 - that this function can fail. 260 164 261 - If you want to use the device in bus mastering mode, call pci_set_master() 262 - which enables the bus master bit in PCI_COMMAND register and also fixes 263 - the latency timer value if it's set to something bogus by the BIOS. 165 + 3. 
Device Initialization Steps 166 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 264 167 265 - If you want to use the PCI Memory-Write-Invalidate transaction, 168 + As noted in the introduction, most PCI drivers need the following steps 169 + for device initialization: 170 + 171 + Enable the device 172 + Request MMIO/IOP resources 173 + Set the DMA mask size (for both coherent and streaming DMA) 174 + Allocate and initialize shared control data (pci_alloc_consistent()) 175 + Access device configuration space (if needed) 176 + Register IRQ handler (request_irq()) 177 + Initialize non-PCI (i.e. LAN/SCSI/etc parts of the chip) 178 + Enable DMA/processing engines. 179 + 180 + The driver can access PCI config space registers at any time. 181 + (Well, almost. When running BIST, config space can go away...but 182 + that will just result in a PCI Bus Master Abort and config reads 183 + will return garbage). 184 + 185 + 186 + 3.1 Enable the PCI device 187 + ~~~~~~~~~~~~~~~~~~~~~~~~~ 188 + Before touching any device registers, the driver needs to enable 189 + the PCI device by calling pci_enable_device(). This will: 190 + o wake up the device if it was in suspended state, 191 + o allocate I/O and memory regions of the device (if BIOS did not), 192 + o allocate an IRQ (if BIOS did not). 193 + 194 + NOTE: pci_enable_device() can fail! Check the return value. 195 + NOTE2: Also see pci_enable_device_bars() below. Drivers can 196 + attempt to enable only a subset of BARs they need. 197 + 198 + [ OS BUG: we don't check resource allocations before enabling those 199 + resources. The sequence would make more sense if we called 200 + pci_request_resources() before calling pci_enable_device(). 201 + Currently, the device drivers can't detect the bug when two 202 + devices have been allocated the same range. This is not a common 203 + problem and unlikely to get fixed soon. 204 + 205 + This has been discussed before but not changed as of 2.6.19: 206 + http://lkml.org/lkml/2006/3/2/194 207 + ] 208 + 209 + pci_set_master() will enable DMA by setting the bus master bit 210 + in the PCI_COMMAND register. It also fixes the latency timer value if 211 + it's set to something bogus by the BIOS. 212 + 213 + If the PCI device can use the PCI Memory-Write-Invalidate transaction, 266 214 call pci_set_mwi(). This enables the PCI_COMMAND bit for Mem-Wr-Inval 267 215 and also ensures that the cache line size register is set correctly. 268 - Make sure to check the return value of pci_set_mwi(), not all architectures 269 - may support Memory-Write-Invalidate. 216 + Check the return value of pci_set_mwi() as not all architectures 217 + or chip-sets may support Memory-Write-Invalidate. 270 218 271 - If your driver decides to stop using the device (e.g., there was an 272 - error while setting it up or the driver module is being unloaded), it 273 - should call pci_disable_device() to deallocate any IRQ resources, disable 274 - PCI bus-mastering, etc. You should not do anything with the device after 219 + 220 + 3.2 Request MMIO/IOP resources 221 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 222 + Memory (MMIO) and I/O port addresses should NOT be read directly 223 + from the PCI device config space. Use the values in the pci_dev structure 224 + as the PCI "bus address" might have been remapped to a "host physical" 225 + address by the arch/chip-set specific kernel support. 226 + 227 + See Documentation/IO-mapping.txt for how to access device registers 228 + or device memory. 
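Putting 3.1 and the start of 3.2 together, the beginning of a probe() routine for a device with one MMIO BAR might look roughly like this (a sketch only; "mydev" is an invented name, and pci_request_region() is described just below):

    static int __devinit mydev_probe(struct pci_dev *pdev,
                                     const struct pci_device_id *id)
    {
        void __iomem *regs;
        int err;

        err = pci_enable_device(pdev);    /* can fail -- check it */
        if (err)
            return err;

        err = pci_request_region(pdev, 0, "mydev");    /* claim BAR 0 */
        if (err) {
            pci_disable_device(pdev);
            return err;
        }

        /* addresses come from the pci_dev, not from config space */
        regs = ioremap(pci_resource_start(pdev, 0),
                       pci_resource_len(pdev, 0));
        if (!regs) {
            pci_disable_device(pdev);
            pci_release_region(pdev, 0);
            return -ENOMEM;
        }

        pci_set_master(pdev);

        /* regs would normally be saved in the driver's private data;
         * continue with DMA mask setup, IRQ registration, etc. (3.3-3.6) */
        return 0;
    }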
229 + 230 + The device driver needs to call pci_request_region() to verify 231 + no other device is already using the same address resource. 232 + Conversely, drivers should call pci_release_region() AFTER 275 233 calling pci_disable_device(). 234 + The idea is to prevent two devices colliding on the same address range. 276 235 277 - 4. How to access PCI config space 236 + [ See OS BUG comment above. Currently (2.6.19), the driver can only 237 + determine MMIO and IO Port resource availability _after_ calling 238 + pci_enable_device(). ] 239 + 240 + Generic flavors of pci_request_region() are request_mem_region() 241 + (for MMIO ranges) and request_region() (for IO Port ranges). 242 + Use these for address resources that are not described by "normal" PCI 243 + BARs. 244 + 245 + Also see pci_request_selected_regions() below. 246 + 247 + 248 + 3.3 Set the DMA mask size 249 + ~~~~~~~~~~~~~~~~~~~~~~~~~ 250 + [ If anything below doesn't make sense, please refer to 251 + Documentation/DMA-API.txt. This section is just a reminder that 252 + drivers need to indicate DMA capabilities of the device and is not 253 + an authoritative source for DMA interfaces. ] 254 + 255 + While all drivers should explicitly indicate the DMA capability 256 + (e.g. 32 or 64 bit) of the PCI bus master, devices with more than 257 + 32-bit bus master capability for streaming data need the driver 258 + to "register" this capability by calling pci_set_dma_mask() with 259 + appropriate parameters. In general this allows more efficient DMA 260 + on systems where System RAM exists above 4G _physical_ address. 261 + 262 + Drivers for all PCI-X and PCIe compliant devices must call 263 + pci_set_dma_mask() as they are 64-bit DMA devices. 264 + 265 + Similarly, drivers must also "register" this capability if the device 266 + can directly address "consistent memory" in System RAM above 4G physical 267 + address by calling pci_set_consistent_dma_mask(). 268 + Again, this includes drivers for all PCI-X and PCIe compliant devices. 269 + Many 64-bit "PCI" devices (before PCI-X) and some PCI-X devices are 270 + 64-bit DMA capable for payload ("streaming") data but not control 271 + ("consistent") data. 272 + 273 + 274 + 3.4 Setup shared control data 275 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 276 + Once the DMA masks are set, the driver can allocate "consistent" (a.k.a. shared) 277 + memory. See Documentation/DMA-API.txt for a full description of 278 + the DMA APIs. This section is just a reminder that it needs to be done 279 + before enabling DMA on the device. 280 + 281 + 282 + 3.5 Initialize device registers 283 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 284 + Some drivers will need specific "capability" fields programmed 285 + or other "vendor specific" registers initialized or reset. 286 + E.g. clearing pending interrupts. 287 + 288 + 289 + 3.6 Register IRQ handler 290 + ~~~~~~~~~~~~~~~~~~~~~~~~ 291 + While calling request_irq() is the last step described here, 292 + this is often just another intermediate step to initialize a device. 293 + This step can often be deferred until the device is opened for use. 294 + 295 + All interrupt handlers for IRQ lines should be registered with IRQF_SHARED 296 + and use the devid to map IRQs to devices (remember that all PCI IRQ lines 297 + can be shared). 298 + 299 + request_irq() will associate an interrupt handler and device handle 300 + with an interrupt number. Historically interrupt numbers represent 301 + IRQ lines which run from the PCI device to the Interrupt controller. 
302 + With MSI and MSI-X (more below) the interrupt number is a CPU "vector". 303 + 304 + request_irq() also enables the interrupt. Make sure the device is 305 + quiesced and does not have any interrupts pending before registering 306 + the interrupt handler. 307 + 308 + MSI and MSI-X are PCI capabilities. Both are "Message Signaled Interrupts" 309 + which deliver interrupts to the CPU via a DMA write to a Local APIC. 310 + The fundamental difference between MSI and MSI-X is how multiple 311 + "vectors" get allocated. MSI requires contiguous blocks of vectors 312 + while MSI-X can allocate several individual ones. 313 + 314 + MSI capability can be enabled by calling pci_enable_msi() or 315 + pci_enable_msix() before calling request_irq(). This causes 316 + the PCI support to program CPU vector data into the PCI device 317 + capability registers. 318 + 319 + If your PCI device supports both, try to enable MSI-X first. 320 + Only one can be enabled at a time. Many architectures, chip-sets, 321 + or BIOSes do NOT support MSI or MSI-X and the call to pci_enable_msi/msix 322 + will fail. This is important to note since many drivers have 323 + two (or more) interrupt handlers: one for MSI/MSI-X and another for IRQs. 324 + They choose which handler to register with request_irq() based on the 325 + return value from pci_enable_msi/msix(). 326 + 327 + There are (at least) two really good reasons for using MSI: 328 + 1) MSI is an exclusive interrupt vector by definition. 329 + This means the interrupt handler doesn't have to verify 330 + its device caused the interrupt. 331 + 332 + 2) MSI avoids DMA/IRQ race conditions. DMA to host memory is guaranteed 333 + to be visible to the host CPU(s) when the MSI is delivered. This 334 + is important for both data coherency and avoiding stale control data. 335 + This guarantee allows the driver to omit MMIO reads to flush 336 + the DMA stream. 337 + 338 + See drivers/infiniband/hw/mthca/ or drivers/net/tg3.c for examples 339 + of MSI/MSI-X usage. 340 + 341 + 342 + 343 + 4. PCI device shutdown 344 + ~~~~~~~~~~~~~~~~~~~~~~~ 345 + 346 + When a PCI device driver is being unloaded, most of the following 347 + steps need to be performed: 348 + 349 + Disable the device from generating IRQs 350 + Release the IRQ (free_irq()) 351 + Stop all DMA activity 352 + Release DMA buffers (both streaming and consistent) 353 + Unregister from other subsystems (e.g. scsi or netdev) 354 + Disable device from responding to MMIO/IO Port addresses 355 + Release MMIO/IO Port resource(s) 356 + 357 + 358 + 4.1 Stop IRQs on the device 359 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~ 360 + How to do this is chip/device specific. If it's not done, it opens 361 + the possibility of a "screaming interrupt" if (and only if) 362 + the IRQ is shared with another device. 363 + 364 + When the shared IRQ handler is "unhooked", the remaining devices 365 + using the same IRQ line will still need the IRQ enabled. Thus if the 366 + "unhooked" device asserts the IRQ line, the system will respond assuming 367 + it was one of the remaining devices that asserted the IRQ line. Since none 368 + of the other devices will handle the IRQ, the system will "hang" until 369 + it decides the IRQ isn't going to get handled and masks the IRQ (100,000 370 + iterations later). Once the shared IRQ is masked, the remaining devices 371 + will stop functioning properly. Not a nice situation. 372 + 373 + This is another reason to use MSI or MSI-X if it's available. 
374 + MSI and MSI-X are defined to be exclusive interrupts and thus 375 + are not susceptible to the "screaming interrupt" problem. 376 + 377 + 378 + 4.2 Release the IRQ 379 + ~~~~~~~~~~~~~~~~~~~ 380 + Once the device is quiesced (no more IRQs), one can call free_irq(). 381 + This function will return control once any pending IRQs are handled, 382 + "unhook" the driver's IRQ handler from that IRQ, and finally release 383 + the IRQ if no one else is using it. 384 + 385 + 386 + 4.3 Stop all DMA activity 387 + ~~~~~~~~~~~~~~~~~~~~~~~~~ 388 + It's extremely important to stop all DMA operations BEFORE attempting 389 + to deallocate DMA control data. Failure to do so can result in memory 390 + corruption, hangs, and on some chip-sets a hard crash. 391 + 392 + Stopping DMA after stopping the IRQs can avoid races where the 393 + IRQ handler might restart DMA engines. 394 + 395 + While this step sounds obvious and trivial, several "mature" drivers 396 + didn't get this step right in the past. 397 + 398 + 399 + 4.4 Release DMA buffers 400 + ~~~~~~~~~~~~~~~~~~~~~~~ 401 + Once DMA is stopped, clean up streaming DMA first. 402 + I.e. unmap data buffers and return buffers to "upstream" 403 + owners if there is one. 404 + 405 + Then clean up "consistent" buffers which contain the control data. 406 + 407 + See Documentation/DMA-API.txt for details on unmapping interfaces. 408 + 409 + 410 + 4.5 Unregister from other subsystems 411 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 412 + Most low level PCI device drivers support some other subsystem 413 + like USB, ALSA, SCSI, NetDev, Infiniband, etc. Make sure your 414 + driver isn't losing resources from that other subsystem. 415 + If this happens, typically the symptom is an Oops (panic) when 416 + the subsystem attempts to call into a driver that has been unloaded. 417 + 418 + 419 + 4.6 Disable Device from responding to MMIO/IO Port addresses 420 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 421 + iounmap() MMIO or IO Port resources and then call pci_disable_device(). 422 + This is the symmetric opposite of pci_enable_device(). 423 + Do not access device registers after calling pci_disable_device(). 424 + 425 + 426 + 4.7 Release MMIO/IO Port Resource(s) 427 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 428 + Call pci_release_region() to mark the MMIO or IO Port range as available. 429 + Failure to do so usually results in the inability to reload the driver. 430 + 431 + 432 + 433 + 5. How to access PCI config space 278 434 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 279 435 - You can use pci_(read|write)_config_(byte|word|dword) to access the config 435 + 436 + You can use pci_(read|write)_config_(byte|word|dword) to access the config 280 437 space of a device represented by struct pci_dev *. All these functions return 0 281 438 when successful or an error code (PCIBIOS_...) which can be translated to a text 282 439 string by pcibios_strerror. Most drivers expect that accesses to valid PCI 283 440 devices don't fail. 284 441 285 - If you don't have a struct pci_dev available, you can call 442 + If you don't have a struct pci_dev available, you can call 286 443 pci_bus_(read|write)_config_(byte|word|dword) to access a given device 287 444 and function on that bus. 288 445 289 - If you access fields in the standard portion of the config header, please 446 + If you access fields in the standard portion of the config header, please 290 447 use symbolic names of locations and bits declared in <linux/pci.h>. 
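For instance, a short fragment using the symbolic names, assuming "pdev" is the driver's struct pci_dev pointer:

    u8 rev;
    u16 cmd;

    /* read the revision ID from the standard header */
    pci_read_config_byte(pdev, PCI_REVISION_ID, &rev);

    /* read-modify-write a bit in the command register */
    pci_read_config_word(pdev, PCI_COMMAND, &cmd);
    cmd |= PCI_COMMAND_MEMORY;          /* enable MMIO decoding */
    pci_write_config_word(pdev, PCI_COMMAND, cmd);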
291 448 292 - If you need to access Extended PCI Capability registers, just call 449 + If you need to access Extended PCI Capability registers, just call 293 450 pci_find_capability() for the particular capability and it will find the 294 451 corresponding register block for you. 295 452 296 453 297 - 5. Addresses and interrupts 298 - ~~~~~~~~~~~~~~~~~~~~~~~~~~~ 299 - Memory and port addresses and interrupt numbers should NOT be read from the 300 - config space. You should use the values in the pci_dev structure as they might 301 - have been remapped by the kernel. 302 - 303 - See Documentation/IO-mapping.txt for how to access device memory. 304 - 305 - The device driver needs to call pci_request_region() to make sure 306 - no other device is already using the same resource. The driver is expected 307 - to determine MMIO and IO Port resource availability _before_ calling 308 - pci_enable_device(). Conversely, drivers should call pci_release_region() 309 - _after_ calling pci_disable_device(). The idea is to prevent two devices 310 - colliding on the same address range. 311 - 312 - Generic flavors of pci_request_region() are request_mem_region() 313 - (for MMIO ranges) and request_region() (for IO Port ranges). 314 - Use these for address resources that are not described by "normal" PCI 315 - interfaces (e.g. BAR). 316 - 317 - All interrupt handlers should be registered with IRQF_SHARED and use the devid 318 - to map IRQs to devices (remember that all PCI interrupts are shared). 319 - 320 454 321 455 6. Other interesting functions 322 456 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 457 + 323 458 pci_find_slot() Find pci_dev corresponding to given bus and 324 459 slot numbers. 325 460 pci_set_power_state() Set PCI Power Management state (0=D0 ... 3=D3) ··· 560 247 pci_clear_mwi() Disable Memory-Write-Invalidate transactions. 561 248 562 249 250 + 563 251 7. Miscellaneous hints 564 252 ~~~~~~~~~~~~~~~~~~~~~~ 565 - When displaying PCI slot names to the user (for example when a driver wants 566 - to tell the user what card has it found), please use pci_name(pci_dev) 567 - for this purpose. 253 + 254 + When displaying PCI device names to the user (for example when a driver wants 255 + to tell the user what card it has found), please use pci_name(pci_dev). 568 256 569 257 Always refer to the PCI devices by a pointer to the pci_dev structure. 570 258 All PCI layer functions use this identification and it's the only ··· 573 259 special purposes -- on systems with multiple primary buses their semantics 574 260 can be pretty complex. 575 261 576 - If you're going to use PCI bus mastering DMA, take a look at 577 - Documentation/DMA-mapping.txt. 578 - 579 262 Don't try to turn on Fast Back to Back writes in your driver. All devices 580 263 on the bus need to be capable of doing it, so this is something which needs 581 264 to be handled by platform and generic code, not individual drivers. 582 265 583 266 267 + 584 268 8. Vendor and device identifications 585 269 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 586 - For the future, let's avoid adding device ids to include/linux/pci_ids.h. 587 270 588 - PCI_VENDOR_ID_xxx for vendors, and a hex constant for device ids. 589 273 590 - Rationale: PCI_VENDOR_ID_xxx constants are re-used, but device ids are not. 
591 - Further, device ids are arbitrary hex numbers, normally used only in a 592 - single location, the pci_device_id table. 274 + PCI_VENDOR_ID_xxx constants are re-used. The device ids are arbitrary 275 + hex numbers (vendor controlled) and normally used only in a single 276 + location, the pci_device_id table. 277 + 278 + Please DO submit new vendor/device ids to the pciids.sourceforge.net project. 279 + 280 + 593 281 594 282 9. Obsolete functions 595 283 ~~~~~~~~~~~~~~~~~~~~~ 284 + 596 285 There are several functions which you might come across when trying to 597 286 port an old driver to the new PCI interface. They are no longer present 598 287 in the kernel as they aren't compatible with hotplug or PCI domains or 599 288 having sane locking. 600 289 601 - pci_find_device() Superseded by pci_get_device() 602 - pci_find_subsys() Superseded by pci_get_subsys() 603 - pci_find_slot() Superseded by pci_get_slot() 290 + pci_find_device() Superseded by pci_get_device() 291 + pci_find_subsys() Superseded by pci_get_subsys() 292 + pci_find_slot() Superseded by pci_get_slot() 293 + 294 + 295 + The alternative is the traditional PCI device driver that walks PCI 296 + device lists. This is still possible but discouraged. 297 + 298 + 299 + 300 + 10. pci_enable_device_bars() and Legacy I/O Port space 301 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 302 + 303 + Large servers may not be able to provide I/O port resources to all PCI 304 + devices. I/O Port space is only 64KB on Intel Architecture[1] and is 305 + likely also fragmented since the I/O base register of PCI-to-PCI 306 + bridge will usually be aligned to a 4KB boundary[2]. On such systems, 307 + pci_enable_device() and pci_request_region() will fail when 308 + attempting to enable I/O Port regions that don't have I/O Port 309 + resources assigned. 310 + 311 + Fortunately, many PCI devices which request I/O Port resources also 312 + provide access to the same registers via MMIO BARs. These devices can 313 + be handled without using I/O port space and the drivers typically 314 + offer a CONFIG_ option to only use MMIO regions 315 + (e.g. CONFIG_TULIP_MMIO). PCI devices typically provide I/O port 316 + interface for legacy OSes and will work when I/O port resources are not 317 + assigned. The "PCI Local Bus Specification Revision 3.0" discusses 318 + this on p.44, "IMPLEMENTATION NOTE". 319 + 320 + If your PCI device driver doesn't need I/O port resources assigned to 321 + I/O Port BARs, you should use pci_enable_device_bars() instead of 322 + pci_enable_device() in order not to enable I/O port regions for the 323 + corresponding devices. In addition, you should use 324 + pci_request_selected_regions() and pci_release_selected_regions() 325 + instead of pci_request_regions()/pci_release_regions() in order not to 326 + request/release I/O port regions for the corresponding devices. 327 + 328 + [1] Some systems support 64KB I/O port space per PCI segment. 329 + [2] Some PCI-to-PCI bridges support optional 1KB aligned I/O base. 330 + 331 + 332 + 333 + 11. MMIO Space and "Write Posting" 334 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 335 + 336 + Converting a driver from using I/O Port space to using MMIO space 337 + often requires some additional changes. Specifically, "write posting" 338 + needs to be handled. Many drivers (e.g. tg3, acenic, sym53c8xx_2) 339 + already do this. I/O Port space guarantees write transactions reach the PCI 340 + device before the CPU can continue. 
Writes to MMIO space allow the CPU 341 + to continue before the transaction reaches the PCI device. HW weenies 342 + call this "Write Posting" because the write completion is "posted" to 343 + the CPU before the transaction has reached its destination. 344 + 345 + Thus, timing sensitive code should add readl() where the CPU is 346 + expected to wait before doing other work. The classic "bit banging" 347 + sequence works fine for I/O Port space: 348 + 349 + for (i = 8; --i; val >>= 1) { 350 + outb(val & 1, ioport_reg); /* write bit */ 351 + udelay(10); 352 + } 353 + 354 + The same sequence for MMIO space should be: 355 + 356 + for (i = 8; --i; val >>= 1) { 357 + writeb(val & 1, mmio_reg); /* write bit */ 358 + readb(safe_mmio_reg); /* flush posted write */ 359 + udelay(10); 360 + } 361 + 362 + It is important that "safe_mmio_reg" not have any side effects that 363 + interfere with the correct operation of the device. 364 + 365 + Another case to watch out for is when resetting a PCI device. Use PCI 366 + Configuration space reads to flush the writel(). This will gracefully 367 + handle the PCI master abort on all platforms if the PCI device is 368 + expected to not respond to a readl(). Most x86 platforms will allow 369 + MMIO reads to master abort (a.k.a. "Soft Fail") and return garbage 370 + (e.g. ~0). But many RISC platforms will crash (a.k.a. "Hard Fail"). 371 +
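As a small sketch of that last point -- flushing a reset with a config space read instead of an MMIO read -- where MYDEV_CTRL, MYDEV_RESET and "priv" are invented names:

    u16 vendor;

    writel(MYDEV_RESET, priv->regs + MYDEV_CTRL);       /* chip may stop responding */
    pci_read_config_word(pdev, PCI_VENDOR_ID, &vendor); /* pushes the posted write out;
                                                           a config read won't hard-fail */
    msleep(10);                                         /* give the device time to recover */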
+1 -1
Documentation/usb/CREDITS
··· 21 21 Bill Ryder <bryder@sgi.com> 22 22 Thomas Sailer <sailer@ife.ee.ethz.ch> 23 23 Gregory P. Smith <greg@electricrain.com> 24 - Linus Torvalds <torvalds@osdl.org> 24 + Linus Torvalds <torvalds@linux-foundation.org> 25 25 Roman Weissgaerber <weissg@vienna.at> 26 26 <Kazuki.Yasumatsu@fujixerox.co.jp> 27 27
+3 -3
MAINTAINERS
··· 1254 1254 1255 1255 ETHERNET BRIDGE 1256 1256 P: Stephen Hemminger 1257 - M: shemminger@osdl.org 1257 + M: shemminger@linux-foundation.org 1258 1258 L: bridge@osdl.org 1259 1259 W: http://bridge.sourceforge.net/ 1260 1260 S: Maintained ··· 2277 2277 2278 2278 NETEM NETWORK EMULATOR 2279 2279 P: Stephen Hemminger 2280 - M: shemminger@osdl.org 2280 + M: shemminger@linux-foundation.org 2281 2281 L: netem@osdl.org 2282 2282 S: Maintained 2283 2283 ··· 3081 3081 3082 3082 SKGE, SKY2 10/100/1000 GIGABIT ETHERNET DRIVERS 3083 3083 P: Stephen Hemminger 3084 - M: shemminger@osdl.org 3084 + M: shemminger@linux-foundation.org 3085 3085 L: netdev@vger.kernel.org 3086 3086 S: Maintained 3087 3087
+2 -2
README
··· 278 278 the file MAINTAINERS to see if there is a particular person associated 279 279 with the part of the kernel that you are having trouble with. If there 280 280 isn't anyone listed there, then the second best thing is to mail 281 - them to me (torvalds@osdl.org), and possibly to any other relevant 282 - mailing-list or to the newsgroup. 281 + them to me (torvalds@linux-foundation.org), and possibly to any other 282 + relevant mailing-list or to the newsgroup. 283 283 284 284 - In all bug-reports, *please* tell what kernel you are talking about, 285 285 how to duplicate the problem, and what your setup is (use your common
+9 -4
arch/i386/kernel/cpu/common.c
··· 710 710 return 1; 711 711 } 712 712 713 - /* Common CPU init for both boot and secondary CPUs */ 714 - static void __cpuinit _cpu_init(int cpu, struct task_struct *curr) 713 + void __cpuinit cpu_set_gdt(int cpu) 715 714 { 716 - struct tss_struct * t = &per_cpu(init_tss, cpu); 717 - struct thread_struct *thread = &curr->thread; 718 715 struct Xgt_desc_struct *cpu_gdt_descr = &per_cpu(cpu_gdt_descr, cpu); 719 716 720 717 /* Reinit these anyway, even if they've already been done (on ··· 719 722 the real ones). */ 720 723 load_gdt(cpu_gdt_descr); 721 724 set_kernel_gs(); 725 + } 726 + 727 + /* Common CPU init for both boot and secondary CPUs */ 728 + static void __cpuinit _cpu_init(int cpu, struct task_struct *curr) 729 + { 730 + struct tss_struct * t = &per_cpu(init_tss, cpu); 731 + struct thread_struct *thread = &curr->thread; 722 732 723 733 if (cpu_test_and_set(cpu, cpu_initialized)) { 724 734 printk(KERN_WARNING "CPU#%d already initialized!\n", cpu); ··· 811 807 local_irq_enable(); 812 808 } 813 809 810 + cpu_set_gdt(cpu); 814 811 _cpu_init(cpu, curr); 815 812 } 816 813
+1 -7
arch/i386/kernel/nmi.c
··· 310 310 311 311 if ((nmi >= NMI_INVALID) || (nmi < NMI_NONE)) 312 312 return 0; 313 - /* 314 - * If any other x86 CPU has a local APIC, then 315 - * please test the NMI stuff there and send me the 316 - * missing bits. Right now Intel P6/P4 and AMD K7 only. 317 - */ 318 - if ((nmi == NMI_LOCAL_APIC) && (nmi_known_cpu() == 0)) 319 - return 0; /* no lapic support */ 313 + 320 314 nmi_watchdog = nmi; 321 315 return 1; 322 316 }
+8 -1
arch/i386/kernel/paravirt.c
··· 566 566 .irq_enable_sysexit = native_irq_enable_sysexit, 567 567 .iret = native_iret, 568 568 }; 569 - EXPORT_SYMBOL(paravirt_ops); 569 + 570 + /* 571 + * NOTE: CONFIG_PARAVIRT is experimental and the paravirt_ops 572 + * semantics are subject to change. Hence we only do this 573 + * internal-only export of this, until it gets sorted out and 574 + * all lowlevel CPU ops used by modules are separately exported. 575 + */ 576 + EXPORT_SYMBOL_GPL(paravirt_ops);
+6 -3
arch/i386/kernel/smpboot.c
··· 596 596 void __devinit initialize_secondary(void) 597 597 { 598 598 /* 599 + * switch to the per CPU GDT we already set up 600 + * in do_boot_cpu() 601 + */ 602 + cpu_set_gdt(current_thread_info()->cpu); 603 + 604 + /* 599 605 * We don't actually need to load the full TSS, 600 606 * basically just the stack pointer and the eip. 601 607 */ ··· 977 971 printk("Booting processor %d/%d eip %lx\n", cpu, apicid, start_eip); 978 972 /* Stack for startup_32 can be just as for start_secondary onwards */ 979 973 stack_start.esp = (void *) idle->thread.esp; 980 - 981 - start_pda = cpu_pda(cpu); 982 - cpu_gdt_descr = per_cpu(cpu_gdt_descr, cpu); 983 974 984 975 irq_ctx_init(cpu); 985 976
+6
arch/i386/mach-voyager/voyager_smp.c
··· 773 773 #endif 774 774 775 775 /* 776 + * switch to the per CPU GDT we already set up 777 + * in do_boot_cpu() 778 + */ 779 + cpu_set_gdt(current_thread_info()->cpu); 780 + 781 + /* 776 782 * We don't actually need to load the full TSS, 777 783 * basically just the stack pointer and the eip. 778 784 */
+14
arch/mips/Kconfig
··· 1568 1568 depends on MIPS_MT 1569 1569 default y 1570 1570 1571 + config MIPS_MT_SMTC_INSTANT_REPLAY 1572 + bool "Low-latency Dispatch of Deferred SMTC IPIs" 1573 + depends on MIPS_MT_SMTC 1574 + default y 1575 + help 1576 + SMTC pseudo-interrupts between TCs are deferred and queued 1577 + if the target TC is interrupt-inhibited (IXMT). In the first 1578 + SMTC prototypes, these queued IPIs were serviced on return 1579 + to user mode, or on entry into the kernel idle loop. The 1580 + INSTANT_REPLAY option dispatches them as part of local_irq_restore() 1581 + processing, which adds runtime overhead (hence the option to turn 1582 + it off), but ensures that IPIs are handled promptly even under 1583 + heavy I/O interrupt load. 1584 + 1571 1585 config MIPS_VPE_LOADER_TOM 1572 1586 bool "Load VPE program into memory hidden from linux" 1573 1587 depends on MIPS_VPE_LOADER
+34 -22
arch/mips/kernel/smtc.c
··· 1017 1017 * SMTC-specific hacks invoked from elsewhere in the kernel. 1018 1018 */ 1019 1019 1020 + void smtc_ipi_replay(void) 1021 + { 1022 + /* 1023 + * To the extent that we've ever turned interrupts off, 1024 + * we may have accumulated deferred IPIs. This is subtle. 1025 + * If we use the smtc_ipi_qdepth() macro, we'll get an 1026 + * exact number - but we'll also disable interrupts 1027 + * and create a window of failure where a new IPI gets 1028 + * queued after we test the depth but before we re-enable 1029 + * interrupts. So long as IXMT never gets set, however, 1030 + * we should be OK: If we pick up something and dispatch 1031 + * it here, that's great. If we see nothing, but concurrent 1032 + * with this operation, another TC sends us an IPI, IXMT 1033 + * is clear, and we'll handle it as a real pseudo-interrupt 1034 + * and not a pseudo-pseudo interrupt. 1035 + */ 1036 + if (IPIQ[smp_processor_id()].depth > 0) { 1037 + struct smtc_ipi *pipi; 1038 + extern void self_ipi(struct smtc_ipi *); 1039 + 1040 + while ((pipi = smtc_ipi_dq(&IPIQ[smp_processor_id()]))) { 1041 + self_ipi(pipi); 1042 + smtc_cpu_stats[smp_processor_id()].selfipis++; 1043 + } 1044 + } 1045 + } 1046 + 1020 1047 void smtc_idle_loop_hook(void) 1021 1048 { 1022 1049 #ifdef SMTC_IDLE_HOOK_DEBUG ··· 1140 1113 if (pdb_msg != &id_ho_db_msg[0]) 1141 1114 printk("CPU%d: %s", smp_processor_id(), id_ho_db_msg); 1142 1115 #endif /* SMTC_IDLE_HOOK_DEBUG */ 1143 - /* 1144 - * To the extent that we've ever turned interrupts off, 1145 - * we may have accumulated deferred IPIs. This is subtle. 1146 - * If we use the smtc_ipi_qdepth() macro, we'll get an 1147 - * exact number - but we'll also disable interrupts 1148 - * and create a window of failure where a new IPI gets 1149 - * queued after we test the depth but before we re-enable 1150 - * interrupts. So long as IXMT never gets set, however, 1151 - * we should be OK: If we pick up something and dispatch 1152 - * it here, that's great. If we see nothing, but concurrent 1153 - * with this operation, another TC sends us an IPI, IXMT 1154 - * is clear, and we'll handle it as a real pseudo-interrupt 1155 - * and not a pseudo-pseudo interrupt. 1156 - */ 1157 - if (IPIQ[smp_processor_id()].depth > 0) { 1158 - struct smtc_ipi *pipi; 1159 - extern void self_ipi(struct smtc_ipi *); 1160 1116 1161 - if ((pipi = smtc_ipi_dq(&IPIQ[smp_processor_id()])) != NULL) { 1162 - self_ipi(pipi); 1163 - smtc_cpu_stats[smp_processor_id()].selfipis++; 1164 - } 1165 - } 1117 + /* 1118 + * Replay any accumulated deferred IPIs. If "Instant Replay" 1119 + * is in use, there should never be any. 1120 + */ 1121 + #ifndef CONFIG_MIPS_MT_SMTC_INSTANT_REPLAY 1122 + smtc_ipi_replay(); 1123 + #endif /* CONFIG_MIPS_MT_SMTC_INSTANT_REPLAY */ 1166 1124 } 1167 1125 1168 1126 void smtc_soft_dump(void)
+9 -3
arch/mips/vr41xx/common/irq.c
··· 1 1 /* 2 2 * Interrupt handing routines for NEC VR4100 series. 3 3 * 4 - * Copyright (C) 2005 Yoichi Yuasa <yoichi_yuasa@tripeaks.co.jp> 4 + * Copyright (C) 2005-2007 Yoichi Yuasa <yoichi_yuasa@tripeaks.co.jp> 5 5 * 6 6 * This program is free software; you can redistribute it and/or modify 7 7 * it under the terms of the GNU General Public License as published by ··· 73 73 if (cascade->get_irq != NULL) { 74 74 unsigned int source_irq = irq; 75 75 desc = irq_desc + source_irq; 76 - desc->chip->ack(source_irq); 76 + if (desc->chip->mask_ack) 77 + desc->chip->mask_ack(source_irq); 78 + else { 79 + desc->chip->mask(source_irq); 80 + desc->chip->ack(source_irq); 81 + } 77 82 irq = cascade->get_irq(irq); 78 83 if (irq < 0) 79 84 atomic_inc(&irq_err_count); 80 85 else 81 86 irq_dispatch(irq); 82 - desc->chip->end(source_irq); 87 + if (!(desc->status & IRQ_DISABLED) && desc->chip->unmask) 88 + desc->chip->unmask(source_irq); 83 89 } else 84 90 do_IRQ(irq); 85 91 }
+1 -2
arch/ppc/platforms/ev64360.c
··· 358 358 359 359 ptbl_entries = 3; 360 360 361 - if ((ptbl = kmalloc(ptbl_entries * sizeof(struct mtd_partition), 361 + if ((ptbl = kzalloc(ptbl_entries * sizeof(struct mtd_partition), 362 362 GFP_KERNEL)) == NULL) { 363 363 364 364 printk(KERN_WARNING "Can't alloc MTD partition table\n"); 365 365 return -ENOMEM; 366 366 } 367 - memset(ptbl, 0, ptbl_entries * sizeof(struct mtd_partition)); 368 367 369 368 ptbl[0].name = "reserved"; 370 369 ptbl[0].offset = 0;
-2
arch/x86_64/kernel/nmi.c
··· 302 302 if ((nmi >= NMI_INVALID) || (nmi < NMI_NONE)) 303 303 return 0; 304 304 305 - if ((nmi == NMI_LOCAL_APIC) && (nmi_known_cpu() == 0)) 306 - return 0; /* no lapic support */ 307 305 nmi_watchdog = nmi; 308 306 return 1; 309 307 }
+6 -5
block/elevator.c
··· 590 590 */ 591 591 rq->cmd_flags |= REQ_SOFTBARRIER; 592 592 593 + /* 594 + * Most requeues happen because of a busy condition, 595 + * don't force unplug of the queue for that case. 596 + */ 597 + unplug_it = 0; 598 + 593 599 if (q->ordseq == 0) { 594 600 list_add(&rq->queuelist, &q->queue_head); 595 601 break; ··· 610 604 } 611 605 612 606 list_add_tail(&rq->queuelist, pos); 613 - /* 614 - * most requeues happen because of a busy condition, don't 615 - * force unplug of the queue for that case. 616 - */ 617 - unplug_it = 0; 618 607 break; 619 608 620 609 default:
-2
drivers/acpi/video.c
··· 1677 1677 struct acpi_video_device *video_device = data; 1678 1678 struct acpi_device *device = NULL; 1679 1679 1680 - 1681 - printk("video device notify\n"); 1682 1680 if (!video_device) 1683 1681 return; 1684 1682
+1 -1
drivers/atm/horizon.c
··· 1845 1845 1846 1846 /********** initialise a card **********/ 1847 1847 1848 - static int __init hrz_init (hrz_dev * dev) { 1848 + static int __devinit hrz_init (hrz_dev * dev) { 1849 1849 int onefivefive; 1850 1850 1851 1851 u16 chan;
+28 -15
drivers/char/tlclk.c
··· 186 186 static void switchover_timeout(unsigned long data); 187 187 static struct timer_list switchover_timer = 188 188 TIMER_INITIALIZER(switchover_timeout , 0, 0); 189 + static unsigned long tlclk_timer_data; 189 190 190 191 static struct tlclk_alarms *alarm_events; 191 192 ··· 198 197 199 198 static DECLARE_WAIT_QUEUE_HEAD(wq); 200 199 200 + static unsigned long useflags; 201 + static DEFINE_MUTEX(tlclk_mutex); 202 + 201 203 static int tlclk_open(struct inode *inode, struct file *filp) 202 204 { 203 205 int result; 206 + 207 + if (test_and_set_bit(0, &useflags)) 208 + return -EBUSY; 209 + /* this legacy device is always one per system and it doesn't 210 + * know how to handle multiple concurrent clients. 211 + */ 204 212 205 213 /* Make sure there is no interrupt pending while 206 214 * initialising interrupt handler */ ··· 231 221 static int tlclk_release(struct inode *inode, struct file *filp) 232 222 { 233 223 free_irq(telclk_interrupt, tlclk_interrupt); 224 + clear_bit(0, &useflags); 234 225 235 226 return 0; 236 227 } ··· 241 230 { 242 231 if (count < sizeof(struct tlclk_alarms)) 243 232 return -EIO; 233 + if (mutex_lock_interruptible(&tlclk_mutex)) 234 + return -EINTR; 235 + 244 236 245 237 wait_event_interruptible(wq, got_event); 246 - if (copy_to_user(buf, alarm_events, sizeof(struct tlclk_alarms))) 238 + if (copy_to_user(buf, alarm_events, sizeof(struct tlclk_alarms))) { 239 + mutex_unlock(&tlclk_mutex); 247 240 return -EFAULT; 241 + } 248 242 249 243 memset(alarm_events, 0, sizeof(struct tlclk_alarms)); 250 244 got_event = 0; 251 245 246 + mutex_unlock(&tlclk_mutex); 252 247 return sizeof(struct tlclk_alarms); 253 - } 254 - 255 - static ssize_t tlclk_write(struct file *filp, const char __user *buf, size_t count, 256 - loff_t *f_pos) 257 - { 258 - return 0; 259 248 } 260 249 261 250 static const struct file_operations tlclk_fops = { 262 251 .read = tlclk_read, 263 - .write = tlclk_write, 264 252 .open = tlclk_open, 265 253 .release = tlclk_release, 266 254 ··· 550 540 SET_PORT_BITS(TLCLK_REG3, 0xf8, 0x7); 551 541 switch (val) { 552 542 case CLK_8_592MHz: 553 - SET_PORT_BITS(TLCLK_REG0, 0xfc, 1); 543 + SET_PORT_BITS(TLCLK_REG0, 0xfc, 2); 554 544 break; 555 545 case CLK_11_184MHz: 556 546 SET_PORT_BITS(TLCLK_REG0, 0xfc, 0); ··· 559 549 SET_PORT_BITS(TLCLK_REG0, 0xfc, 3); 560 550 break; 561 551 case CLK_44_736MHz: 562 - SET_PORT_BITS(TLCLK_REG0, 0xfc, 2); 552 + SET_PORT_BITS(TLCLK_REG0, 0xfc, 1); 563 553 break; 564 554 } 565 555 } else ··· 849 839 850 840 static void switchover_timeout(unsigned long data) 851 841 { 852 - if ((data & 1)) { 853 - if ((inb(TLCLK_REG1) & 0x08) != (data & 0x08)) 842 + unsigned long flags = *(unsigned long *) data; 843 + 844 + if ((flags & 1)) { 845 + if ((inb(TLCLK_REG1) & 0x08) != (flags & 0x08)) 854 846 alarm_events->switchover_primary++; 855 847 } else { 856 - if ((inb(TLCLK_REG1) & 0x08) != (data & 0x08)) 848 + if ((inb(TLCLK_REG1) & 0x08) != (flags & 0x08)) 857 849 alarm_events->switchover_secondary++; 858 850 } 859 851 ··· 913 901 914 902 /* TIMEOUT in ~10ms */ 915 903 switchover_timer.expires = jiffies + msecs_to_jiffies(10); 916 - switchover_timer.data = inb(TLCLK_REG1); 917 - add_timer(&switchover_timer); 904 + tlclk_timer_data = inb(TLCLK_REG1); 905 + switchover_timer.data = (unsigned long) &tlclk_timer_data; 906 + mod_timer(&switchover_timer, switchover_timer.expires); 918 907 } else { 919 908 got_event = 1; 920 909 wake_up(&wq);
+52 -62
drivers/char/vr41xx_giu.c
··· 3 3 * 4 4 * Copyright (C) 2002 MontaVista Software Inc. 5 5 * Author: Yoichi Yuasa <yyuasa@mvista.com or source@mvista.com> 6 - * Copyright (C) 2003-2005 Yoichi Yuasa <yoichi_yuasa@tripeaks.co.jp> 6 + * Copyright (C) 2003-2007 Yoichi Yuasa <yoichi_yuasa@tripeaks.co.jp> 7 7 * 8 8 * This program is free software; you can redistribute it and/or modify 9 9 * it under the terms of the GNU General Public License as published by ··· 125 125 return data; 126 126 } 127 127 128 - static unsigned int startup_giuint_low_irq(unsigned int irq) 128 + static void ack_giuint_low(unsigned int irq) 129 129 { 130 - unsigned int pin; 131 - 132 - pin = GPIO_PIN_OF_IRQ(irq); 133 - giu_write(GIUINTSTATL, 1 << pin); 134 - giu_set(GIUINTENL, 1 << pin); 135 - 136 - return 0; 130 + giu_write(GIUINTSTATL, 1 << GPIO_PIN_OF_IRQ(irq)); 137 131 } 138 132 139 - static void shutdown_giuint_low_irq(unsigned int irq) 133 + static void mask_giuint_low(unsigned int irq) 140 134 { 141 135 giu_clear(GIUINTENL, 1 << GPIO_PIN_OF_IRQ(irq)); 142 136 } 143 137 144 - static void enable_giuint_low_irq(unsigned int irq) 145 - { 146 - giu_set(GIUINTENL, 1 << GPIO_PIN_OF_IRQ(irq)); 147 - } 148 - 149 - #define disable_giuint_low_irq shutdown_giuint_low_irq 150 - 151 - static void ack_giuint_low_irq(unsigned int irq) 138 + static void mask_ack_giuint_low(unsigned int irq) 152 139 { 153 140 unsigned int pin; 154 141 ··· 144 157 giu_write(GIUINTSTATL, 1 << pin); 145 158 } 146 159 147 - static void end_giuint_low_irq(unsigned int irq) 160 + static void unmask_giuint_low(unsigned int irq) 148 161 { 149 - if (!(irq_desc[irq].status & (IRQ_DISABLED | IRQ_INPROGRESS))) 150 - giu_set(GIUINTENL, 1 << GPIO_PIN_OF_IRQ(irq)); 162 + giu_set(GIUINTENL, 1 << GPIO_PIN_OF_IRQ(irq)); 151 163 } 152 164 153 - static struct hw_interrupt_type giuint_low_irq_type = { 154 - .typename = "GIUINTL", 155 - .startup = startup_giuint_low_irq, 156 - .shutdown = shutdown_giuint_low_irq, 157 - .enable = enable_giuint_low_irq, 158 - .disable = disable_giuint_low_irq, 159 - .ack = ack_giuint_low_irq, 160 - .end = end_giuint_low_irq, 165 + static struct irq_chip giuint_low_irq_chip = { 166 + .name = "GIUINTL", 167 + .ack = ack_giuint_low, 168 + .mask = mask_giuint_low, 169 + .mask_ack = mask_ack_giuint_low, 170 + .unmask = unmask_giuint_low, 161 171 }; 162 172 163 - static unsigned int startup_giuint_high_irq(unsigned int irq) 173 + static void ack_giuint_high(unsigned int irq) 164 174 { 165 - unsigned int pin; 166 - 167 - pin = GPIO_PIN_OF_IRQ(irq) - GIUINT_HIGH_OFFSET; 168 - giu_write(GIUINTSTATH, 1 << pin); 169 - giu_set(GIUINTENH, 1 << pin); 170 - 171 - return 0; 175 + giu_write(GIUINTSTATH, 1 << (GPIO_PIN_OF_IRQ(irq) - GIUINT_HIGH_OFFSET)); 172 176 } 173 177 174 - static void shutdown_giuint_high_irq(unsigned int irq) 178 + static void mask_giuint_high(unsigned int irq) 175 179 { 176 180 giu_clear(GIUINTENH, 1 << (GPIO_PIN_OF_IRQ(irq) - GIUINT_HIGH_OFFSET)); 177 181 } 178 182 179 - static void enable_giuint_high_irq(unsigned int irq) 180 - { 181 - giu_set(GIUINTENH, 1 << (GPIO_PIN_OF_IRQ(irq) - GIUINT_HIGH_OFFSET)); 182 - } 183 - 184 - #define disable_giuint_high_irq shutdown_giuint_high_irq 185 - 186 - static void ack_giuint_high_irq(unsigned int irq) 183 + static void mask_ack_giuint_high(unsigned int irq) 187 184 { 188 185 unsigned int pin; 189 186 ··· 176 205 giu_write(GIUINTSTATH, 1 << pin); 177 206 } 178 207 179 - static void end_giuint_high_irq(unsigned int irq) 208 + static void unmask_giuint_high(unsigned int irq) 180 209 { 181 - if (!(irq_desc[irq].status & 
(IRQ_DISABLED | IRQ_INPROGRESS))) 182 - giu_set(GIUINTENH, 1 << (GPIO_PIN_OF_IRQ(irq) - GIUINT_HIGH_OFFSET)); 210 + giu_set(GIUINTENH, 1 << (GPIO_PIN_OF_IRQ(irq) - GIUINT_HIGH_OFFSET)); 183 211 } 184 212 185 - static struct hw_interrupt_type giuint_high_irq_type = { 186 - .typename = "GIUINTH", 187 - .startup = startup_giuint_high_irq, 188 - .shutdown = shutdown_giuint_high_irq, 189 - .enable = enable_giuint_high_irq, 190 - .disable = disable_giuint_high_irq, 191 - .ack = ack_giuint_high_irq, 192 - .end = end_giuint_high_irq, 213 + static struct irq_chip giuint_high_irq_chip = { 214 + .name = "GIUINTH", 215 + .ack = ack_giuint_high, 216 + .mask = mask_giuint_high, 217 + .mask_ack = mask_ack_giuint_high, 218 + .unmask = unmask_giuint_high, 193 219 }; 194 220 195 221 static int giu_get_irq(unsigned int irq) ··· 250 282 break; 251 283 } 252 284 } 285 + set_irq_chip_and_handler(GIU_IRQ(pin), 286 + &giuint_low_irq_chip, 287 + handle_edge_irq); 253 288 } else { 254 289 giu_clear(GIUINTTYPL, mask); 255 290 giu_clear(GIUINTHTSELL, mask); 291 + set_irq_chip_and_handler(GIU_IRQ(pin), 292 + &giuint_low_irq_chip, 293 + handle_level_irq); 256 294 } 257 295 giu_write(GIUINTSTATL, mask); 258 296 } else if (pin < GIUINT_HIGH_MAX) { ··· 285 311 break; 286 312 } 287 313 } 314 + set_irq_chip_and_handler(GIU_IRQ(pin), 315 + &giuint_high_irq_chip, 316 + handle_edge_irq); 288 317 } else { 289 318 giu_clear(GIUINTTYPH, mask); 290 319 giu_clear(GIUINTHTSELH, mask); 320 + set_irq_chip_and_handler(GIU_IRQ(pin), 321 + &giuint_high_irq_chip, 322 + handle_level_irq); 291 323 } 292 324 giu_write(GIUINTSTATH, mask); 293 325 } ··· 597 617 static int __devinit giu_probe(struct platform_device *dev) 598 618 { 599 619 unsigned long start, size, flags = 0; 600 - unsigned int nr_pins = 0; 620 + unsigned int nr_pins = 0, trigger, i, pin; 601 621 struct resource *res1, *res2 = NULL; 602 622 void *base; 603 - int retval, i; 623 + struct irq_chip *chip; 624 + int retval; 604 625 605 626 switch (current_cpu_data.cputype) { 606 627 case CPU_VR4111: ··· 669 688 giu_write(GIUINTENL, 0); 670 689 giu_write(GIUINTENH, 0); 671 690 691 + trigger = giu_read(GIUINTTYPH) << 16; 692 + trigger |= giu_read(GIUINTTYPL); 672 693 for (i = GIU_IRQ_BASE; i <= GIU_IRQ_LAST; i++) { 673 - if (i < GIU_IRQ(GIUINT_HIGH_OFFSET)) 674 - irq_desc[i].chip = &giuint_low_irq_type; 694 + pin = GPIO_PIN_OF_IRQ(i); 695 + if (pin < GIUINT_HIGH_OFFSET) 696 + chip = &giuint_low_irq_chip; 675 697 else 676 - irq_desc[i].chip = &giuint_high_irq_type; 698 + chip = &giuint_high_irq_chip; 699 + 700 + if (trigger & (1 << pin)) 701 + set_irq_chip_and_handler(i, chip, handle_edge_irq); 702 + else 703 + set_irq_chip_and_handler(i, chip, handle_level_irq); 704 + 677 705 } 678 706 679 707 return cascade_irq(GIUINT_IRQ, giu_get_irq);
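The vr41xx_giu.c conversion above drops the old hw_interrupt_type (startup/shutdown/enable/disable/ack/end) in favour of a struct irq_chip exposing ack/mask/mask_ack/unmask, and uses set_irq_chip_and_handler() to attach handle_edge_irq or handle_level_irq per pin. A minimal sketch of the same shape for a hypothetical GPIO interrupt bank with one enable register and one write-one-to-clear status register (all names and offsets here are invented):

#include <linux/irq.h>
#include <linux/types.h>
#include <asm/io.h>

#define EX_IRQ_BASE     64              /* hypothetical IRQ numbering */
#define EX_INTEN        0x00            /* enable register            */
#define EX_INTSTAT      0x04            /* write-1-to-clear status    */

static void __iomem *ex_base;

static void ex_ack(unsigned int irq)
{
        writel(1 << (irq - EX_IRQ_BASE), ex_base + EX_INTSTAT);
}

static void ex_mask(unsigned int irq)
{
        u32 en = readl(ex_base + EX_INTEN);

        writel(en & ~(1 << (irq - EX_IRQ_BASE)), ex_base + EX_INTEN);
}

static void ex_unmask(unsigned int irq)
{
        u32 en = readl(ex_base + EX_INTEN);

        writel(en | (1 << (irq - EX_IRQ_BASE)), ex_base + EX_INTEN);
}

static struct irq_chip ex_chip = {
        .name   = "EXGPIO",
        .ack    = ex_ack,
        .mask   = ex_mask,
        .unmask = ex_unmask,
};

/* The flow handler takes over what the per-chip end()/startup() hooks
 * used to do: edge interrupts are acked up front, level interrupts
 * stay masked while the handler runs. */
static void ex_map_irq(unsigned int irq, int edge)
{
        set_irq_chip_and_handler(irq, &ex_chip,
                                 edge ? handle_edge_irq : handle_level_irq);
}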
+4 -1
drivers/infiniband/hw/ehca/ehca_cq.c
··· 344 344 unsigned long flags; 345 345 346 346 spin_lock_irqsave(&ehca_cq_idr_lock, flags); 347 - while (my_cq->nr_callbacks) 347 + while (my_cq->nr_callbacks) { 348 + spin_unlock_irqrestore(&ehca_cq_idr_lock, flags); 348 349 yield(); 350 + spin_lock_irqsave(&ehca_cq_idr_lock, flags); 351 + } 349 352 350 353 idr_remove(&ehca_cq_idr, my_cq->token); 351 354 spin_unlock_irqrestore(&ehca_cq_idr_lock, flags);
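The ehca_cq hunk above fixes a sleep-while-atomic bug: yield() may schedule, so it must not be called with ehca_cq_idr_lock held and interrupts disabled; the lock is now dropped around the wait and re-taken before the condition is re-checked. A generic sketch of that drop/re-acquire loop, assuming a hypothetical lock and busy flag:

#include <linux/spinlock.h>
#include <linux/sched.h>

static DEFINE_SPINLOCK(ex_lock);
static int ex_busy;

/* Anything that may sleep (yield, msleep, mutex_lock, copy_to_user,
 * ...) is off-limits under a spinlock.  Release the lock around the
 * wait and re-acquire it before testing the condition again. */
static void ex_wait_until_idle(void)
{
        unsigned long flags;

        spin_lock_irqsave(&ex_lock, flags);
        while (ex_busy) {
                spin_unlock_irqrestore(&ex_lock, flags);
                yield();
                spin_lock_irqsave(&ex_lock, flags);
        }
        /* ... work that needs the lock with ex_busy == 0 ... */
        spin_unlock_irqrestore(&ex_lock, flags);
}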
+2 -1
drivers/infiniband/hw/ehca/ehca_irq.c
··· 440 440 cq = idr_find(&ehca_cq_idr, token); 441 441 442 442 if (cq == NULL) { 443 - spin_unlock(&ehca_cq_idr_lock); 443 + spin_unlock_irqrestore(&ehca_cq_idr_lock, 444 + flags); 444 445 break; 445 446 } 446 447
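The ehca_irq fix above pairs the existing spin_lock_irqsave() with spin_unlock_irqrestore() on the early-exit path; a bare spin_unlock() would release the lock but leave local interrupts disabled. A small sketch of the balanced pattern around an idr lookup, with hypothetical names:

#include <linux/spinlock.h>
#include <linux/idr.h>

static DEFINE_SPINLOCK(ex_idr_lock);

/* Every exit from an irqsave section, including error paths, must go
 * through spin_unlock_irqrestore() with the same flags value. */
static void *ex_lookup(struct idr *idr, int token)
{
        unsigned long flags;
        void *obj;

        spin_lock_irqsave(&ex_idr_lock, flags);
        obj = idr_find(idr, token);
        spin_unlock_irqrestore(&ex_idr_lock, flags);

        return obj;
}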
+20
drivers/infiniband/ulp/srp/ib_srp.c
··· 1621 1621 switch (token) { 1622 1622 case SRP_OPT_ID_EXT: 1623 1623 p = match_strdup(args); 1624 + if (!p) { 1625 + ret = -ENOMEM; 1626 + goto out; 1627 + } 1624 1628 target->id_ext = cpu_to_be64(simple_strtoull(p, NULL, 16)); 1625 1629 kfree(p); 1626 1630 break; 1627 1631 1628 1632 case SRP_OPT_IOC_GUID: 1629 1633 p = match_strdup(args); 1634 + if (!p) { 1635 + ret = -ENOMEM; 1636 + goto out; 1637 + } 1630 1638 target->ioc_guid = cpu_to_be64(simple_strtoull(p, NULL, 16)); 1631 1639 kfree(p); 1632 1640 break; 1633 1641 1634 1642 case SRP_OPT_DGID: 1635 1643 p = match_strdup(args); 1644 + if (!p) { 1645 + ret = -ENOMEM; 1646 + goto out; 1647 + } 1636 1648 if (strlen(p) != 32) { 1637 1649 printk(KERN_WARNING PFX "bad dest GID parameter '%s'\n", p); 1638 1650 kfree(p); ··· 1668 1656 1669 1657 case SRP_OPT_SERVICE_ID: 1670 1658 p = match_strdup(args); 1659 + if (!p) { 1660 + ret = -ENOMEM; 1661 + goto out; 1662 + } 1671 1663 target->service_id = cpu_to_be64(simple_strtoull(p, NULL, 16)); 1672 1664 kfree(p); 1673 1665 break; ··· 1709 1693 1710 1694 case SRP_OPT_INITIATOR_EXT: 1711 1695 p = match_strdup(args); 1696 + if (!p) { 1697 + ret = -ENOMEM; 1698 + goto out; 1699 + } 1712 1700 target->initiator_ext = cpu_to_be64(simple_strtoull(p, NULL, 16)); 1713 1701 kfree(p); 1714 1702 break;
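Each ib_srp hunk above adds the same missing check: match_strdup() allocates memory and can return NULL, so the result must be tested before it is handed to simple_strtoull(). A sketch of one such option handler factored into a helper; the names are placeholders, not part of the driver:

#include <linux/kernel.h>
#include <linux/parser.h>
#include <linux/slab.h>
#include <linux/errno.h>
#include <linux/types.h>

/* Parse one hexadecimal 64-bit option value, failing cleanly when the
 * temporary copy of the argument cannot be allocated. */
static int ex_parse_u64(substring_t *args, u64 *val)
{
        char *p = match_strdup(args);

        if (!p)
                return -ENOMEM;
        *val = simple_strtoull(p, NULL, 16);
        kfree(p);
        return 0;
}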
+2
drivers/kvm/kvm_main.c
··· 272 272 273 273 static void kvm_free_vcpu(struct kvm_vcpu *vcpu) 274 274 { 275 + vcpu_load(vcpu->kvm, vcpu_slot(vcpu)); 275 276 kvm_mmu_destroy(vcpu); 277 + vcpu_put(vcpu); 276 278 kvm_arch_ops->vcpu_free(vcpu); 277 279 } 278 280
+1 -1
drivers/kvm/paging_tmpl.h
··· 274 274 struct kvm_mmu_page *page; 275 275 276 276 if (is_writeble_pte(*shadow_ent)) 277 - return 0; 277 + return !user || (*shadow_ent & PT_USER_MASK); 278 278 279 279 writable_shadow = *shadow_ent & PT_SHADOW_WRITABLE_MASK; 280 280 if (user) {
+2 -1
drivers/kvm/svm.c
··· 1407 1407 int r; 1408 1408 1409 1409 again: 1410 - do_interrupt_requests(vcpu, kvm_run); 1410 + if (!vcpu->mmio_read_completed) 1411 + do_interrupt_requests(vcpu, kvm_run); 1411 1412 1412 1413 clgi(); 1413 1414
+3 -2
drivers/kvm/vmx.c
··· 1717 1717 vmcs_writel(HOST_GS_BASE, segment_base(gs_sel)); 1718 1718 #endif 1719 1719 1720 - do_interrupt_requests(vcpu, kvm_run); 1720 + if (!vcpu->mmio_read_completed) 1721 + do_interrupt_requests(vcpu, kvm_run); 1721 1722 1722 1723 if (vcpu->guest_debug.enabled) 1723 1724 kvm_guest_debug_pre(vcpu); ··· 1825 1824 #endif 1826 1825 "setbe %0 \n\t" 1827 1826 "popf \n\t" 1828 - : "=g" (fail) 1827 + : "=q" (fail) 1829 1828 : "r"(vcpu->launched), "d"((unsigned long)HOST_RSP), 1830 1829 "c"(vcpu), 1831 1830 [rax]"i"(offsetof(struct kvm_vcpu, regs[VCPU_REGS_RAX])),
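Besides gating interrupt injection on mmio_read_completed, the vmx.c hunk changes the output constraint of the setbe result from "=g" to "=q". setbe stores a single byte, and on i386 only %eax-%edx have byte subregisters; "=g" lets the compiler pick %esi or %edi, which the assembler then rejects, while "=q" restricts the choice to byte-addressable registers. A standalone illustration (not the KVM code itself):

/* Returns non-zero when a <= b (unsigned).  The "=q" constraint keeps
 * the setbe destination in a register that has an 8-bit form. */
static inline int ex_below_or_equal(unsigned long a, unsigned long b)
{
        unsigned char res;

        asm("cmp %2, %1; setbe %0"
            : "=q" (res)
            : "r" (a), "r" (b)
            : "cc");
        return res;
}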
+52 -46
drivers/kvm/x86_emulate.c
··· 61 61 #define ModRM (1<<6) 62 62 /* Destination is only written; never read. */ 63 63 #define Mov (1<<7) 64 + #define BitOp (1<<8) 64 65 65 66 static u8 opcode_table[256] = { 66 67 /* 0x00 - 0x07 */ ··· 149 148 0, 0, ByteOp | DstMem | SrcNone | ModRM, DstMem | SrcNone | ModRM 150 149 }; 151 150 152 - static u8 twobyte_table[256] = { 151 + static u16 twobyte_table[256] = { 153 152 /* 0x00 - 0x0F */ 154 153 0, SrcMem | ModRM | DstReg, 0, 0, 0, 0, ImplicitOps, 0, 155 154 0, 0, 0, 0, 0, ImplicitOps | ModRM, 0, 0, ··· 181 180 /* 0x90 - 0x9F */ 182 181 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 183 182 /* 0xA0 - 0xA7 */ 184 - 0, 0, 0, DstMem | SrcReg | ModRM, 0, 0, 0, 0, 183 + 0, 0, 0, DstMem | SrcReg | ModRM | BitOp, 0, 0, 0, 0, 185 184 /* 0xA8 - 0xAF */ 186 - 0, 0, 0, DstMem | SrcReg | ModRM, 0, 0, 0, 0, 185 + 0, 0, 0, DstMem | SrcReg | ModRM | BitOp, 0, 0, 0, 0, 187 186 /* 0xB0 - 0xB7 */ 188 187 ByteOp | DstMem | SrcReg | ModRM, DstMem | SrcReg | ModRM, 0, 189 - DstMem | SrcReg | ModRM, 188 + DstMem | SrcReg | ModRM | BitOp, 190 189 0, 0, ByteOp | DstReg | SrcMem | ModRM | Mov, 191 190 DstReg | SrcMem16 | ModRM | Mov, 192 191 /* 0xB8 - 0xBF */ 193 - 0, 0, DstMem | SrcImmByte | ModRM, DstMem | SrcReg | ModRM, 192 + 0, 0, DstMem | SrcImmByte | ModRM, DstMem | SrcReg | ModRM | BitOp, 194 193 0, 0, ByteOp | DstReg | SrcMem | ModRM | Mov, 195 194 DstReg | SrcMem16 | ModRM | Mov, 196 195 /* 0xC0 - 0xCF */ ··· 470 469 int 471 470 x86_emulate_memop(struct x86_emulate_ctxt *ctxt, struct x86_emulate_ops *ops) 472 471 { 473 - u8 b, d, sib, twobyte = 0, rex_prefix = 0; 472 + unsigned d; 473 + u8 b, sib, twobyte = 0, rex_prefix = 0; 474 474 u8 modrm, modrm_mod = 0, modrm_reg = 0, modrm_rm = 0; 475 475 unsigned long *override_base = NULL; 476 476 unsigned int op_bytes, ad_bytes, lock_prefix = 0, rep_prefix = 0, i; ··· 728 726 ; 729 727 } 730 728 731 - /* Decode and fetch the destination operand: register or memory. */ 732 - switch (d & DstMask) { 733 - case ImplicitOps: 734 - /* Special instructions do their own operand decoding. */ 735 - goto special_insn; 736 - case DstReg: 737 - dst.type = OP_REG; 738 - if ((d & ByteOp) 739 - && !(twobyte_table && (b == 0xb6 || b == 0xb7))) { 740 - dst.ptr = decode_register(modrm_reg, _regs, 741 - (rex_prefix == 0)); 742 - dst.val = *(u8 *) dst.ptr; 743 - dst.bytes = 1; 744 - } else { 745 - dst.ptr = decode_register(modrm_reg, _regs, 0); 746 - switch ((dst.bytes = op_bytes)) { 747 - case 2: 748 - dst.val = *(u16 *)dst.ptr; 749 - break; 750 - case 4: 751 - dst.val = *(u32 *)dst.ptr; 752 - break; 753 - case 8: 754 - dst.val = *(u64 *)dst.ptr; 755 - break; 756 - } 757 - } 758 - break; 759 - case DstMem: 760 - dst.type = OP_MEM; 761 - dst.ptr = (unsigned long *)cr2; 762 - dst.bytes = (d & ByteOp) ? 1 : op_bytes; 763 - if (!(d & Mov) && /* optimisation - avoid slow emulated read */ 764 - ((rc = ops->read_emulated((unsigned long)dst.ptr, 765 - &dst.val, dst.bytes, ctxt)) != 0)) 766 - goto done; 767 - break; 768 - } 769 - dst.orig_val = dst.val; 770 - 771 729 /* 772 730 * Decode and fetch the source operand: register, memory 773 731 * or immediate. ··· 799 837 src.val = insn_fetch(s8, 1, _eip); 800 838 break; 801 839 } 840 + 841 + /* Decode and fetch the destination operand: register or memory. */ 842 + switch (d & DstMask) { 843 + case ImplicitOps: 844 + /* Special instructions do their own operand decoding. 
*/ 845 + goto special_insn; 846 + case DstReg: 847 + dst.type = OP_REG; 848 + if ((d & ByteOp) 849 + && !(twobyte_table && (b == 0xb6 || b == 0xb7))) { 850 + dst.ptr = decode_register(modrm_reg, _regs, 851 + (rex_prefix == 0)); 852 + dst.val = *(u8 *) dst.ptr; 853 + dst.bytes = 1; 854 + } else { 855 + dst.ptr = decode_register(modrm_reg, _regs, 0); 856 + switch ((dst.bytes = op_bytes)) { 857 + case 2: 858 + dst.val = *(u16 *)dst.ptr; 859 + break; 860 + case 4: 861 + dst.val = *(u32 *)dst.ptr; 862 + break; 863 + case 8: 864 + dst.val = *(u64 *)dst.ptr; 865 + break; 866 + } 867 + } 868 + break; 869 + case DstMem: 870 + dst.type = OP_MEM; 871 + dst.ptr = (unsigned long *)cr2; 872 + dst.bytes = (d & ByteOp) ? 1 : op_bytes; 873 + if (d & BitOp) { 874 + dst.ptr += src.val / BITS_PER_LONG; 875 + dst.bytes = sizeof(long); 876 + } 877 + if (!(d & Mov) && /* optimisation - avoid slow emulated read */ 878 + ((rc = ops->read_emulated((unsigned long)dst.ptr, 879 + &dst.val, dst.bytes, ctxt)) != 0)) 880 + goto done; 881 + break; 882 + } 883 + dst.orig_val = dst.val; 802 884 803 885 if (twobyte) 804 886 goto twobyte_insn;
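The x86_emulate.c reordering above is what allows the new BitOp flag to work: for bt/bts/btr/btc with a memory operand, the destination address depends on the already-decoded source, because the register bit offset selects a long relative to the decoded effective address (dst.ptr += src.val / BITS_PER_LONG). A small sketch of that address math, assuming a non-negative bit offset:

#include <linux/types.h>
#include <linux/bitops.h>

/* The long actually touched lives at base + bitoff / BITS_PER_LONG,
 * and the bit within that long is bitoff % BITS_PER_LONG. */
static unsigned long *ex_bitop_target(unsigned long *base,
                                      unsigned long bitoff,
                                      unsigned int *bit)
{
        *bit = bitoff % BITS_PER_LONG;
        return base + bitoff / BITS_PER_LONG;
}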
+1
drivers/media/video/video-buf.c
··· 700 700 goto done; 701 701 } 702 702 if (buf->state == STATE_QUEUED || 703 + buf->state == STATE_PREPARED || 703 704 buf->state == STATE_ACTIVE) { 704 705 dprintk(1,"qbuf: buffer is already queued or active.\n"); 705 706 goto done;
+13 -2
drivers/mtd/Kconfig
··· 164 164 memory chips, and also use ioctl() to obtain information about 165 165 the device, or to erase parts of it. 166 166 167 + config MTD_BLKDEVS 168 + tristate "Common interface to block layer for MTD 'translation layers'" 169 + depends on MTD && BLOCK 170 + default n 171 + 167 172 config MTD_BLOCK 168 173 tristate "Caching block device access to MTD devices" 169 174 depends on MTD && BLOCK 175 + select MTD_BLKDEVS 170 176 ---help--- 171 177 Although most flash chips have an erase size too large to be useful 172 178 as block devices, it is possible to use MTD devices which are based ··· 195 189 config MTD_BLOCK_RO 196 190 tristate "Readonly block device access to MTD devices" 197 191 depends on MTD_BLOCK!=y && MTD && BLOCK 192 + select MTD_BLKDEVS 198 193 help 199 194 This allows you to mount read-only file systems (such as cramfs) 200 195 from an MTD device, without the overhead (and danger) of the caching ··· 207 200 config FTL 208 201 tristate "FTL (Flash Translation Layer) support" 209 202 depends on MTD && BLOCK 203 + select MTD_BLKDEVS 210 204 ---help--- 211 205 This provides support for the original Flash Translation Layer which 212 206 is part of the PCMCIA specification. It uses a kind of pseudo- ··· 224 216 config NFTL 225 217 tristate "NFTL (NAND Flash Translation Layer) support" 226 218 depends on MTD && BLOCK 219 + select MTD_BLKDEVS 227 220 ---help--- 228 221 This provides support for the NAND Flash Translation Layer which is 229 222 used on M-Systems' DiskOnChip devices. It uses a kind of pseudo- ··· 248 239 config INFTL 249 240 tristate "INFTL (Inverse NAND Flash Translation Layer) support" 250 241 depends on MTD && BLOCK 242 + select MTD_BLKDEVS 251 243 ---help--- 252 244 This provides support for the Inverse NAND Flash Translation 253 245 Layer which is used on M-Systems' newer DiskOnChip devices. It ··· 266 256 config RFD_FTL 267 257 tristate "Resident Flash Disk (Flash Translation Layer) support" 268 258 depends on MTD && BLOCK 259 + select MTD_BLKDEVS 269 260 ---help--- 270 261 This provides support for the flash translation layer known 271 262 as the Resident Flash Disk (RFD), as used by the Embedded BIOS ··· 276 265 277 266 config SSFDC 278 267 tristate "NAND SSFDC (SmartMedia) read only translation layer" 279 - depends on MTD 280 - default n 268 + depends on MTD && BLOCK 269 + select MTD_BLKDEVS 281 270 help 282 271 This enables read only access to SmartMedia formatted NAND 283 272 flash. You can mount it with FAT file system.
+8 -7
drivers/mtd/Makefile
··· 15 15 16 16 # 'Users' - code which presents functionality to userspace. 17 17 obj-$(CONFIG_MTD_CHAR) += mtdchar.o 18 - obj-$(CONFIG_MTD_BLOCK) += mtdblock.o mtd_blkdevs.o 19 - obj-$(CONFIG_MTD_BLOCK_RO) += mtdblock_ro.o mtd_blkdevs.o 20 - obj-$(CONFIG_FTL) += ftl.o mtd_blkdevs.o 21 - obj-$(CONFIG_NFTL) += nftl.o mtd_blkdevs.o 22 - obj-$(CONFIG_INFTL) += inftl.o mtd_blkdevs.o 23 - obj-$(CONFIG_RFD_FTL) += rfd_ftl.o mtd_blkdevs.o 24 - obj-$(CONFIG_SSFDC) += ssfdc.o mtd_blkdevs.o 18 + obj-$(CONFIG_MTD_BLKDEVS) += mtd_blkdevs.o 19 + obj-$(CONFIG_MTD_BLOCK) += mtdblock.o 20 + obj-$(CONFIG_MTD_BLOCK_RO) += mtdblock_ro.o 21 + obj-$(CONFIG_FTL) += ftl.o 22 + obj-$(CONFIG_NFTL) += nftl.o 23 + obj-$(CONFIG_INFTL) += inftl.o 24 + obj-$(CONFIG_RFD_FTL) += rfd_ftl.o 25 + obj-$(CONFIG_SSFDC) += ssfdc.o 25 26 26 27 nftl-objs := nftlcore.o nftlmount.o 27 28 inftl-objs := inftlcore.o inftlmount.o
+1 -2
drivers/mtd/afs.c
··· 207 207 if (!sz) 208 208 return ret; 209 209 210 - parts = kmalloc(sz, GFP_KERNEL); 210 + parts = kzalloc(sz, GFP_KERNEL); 211 211 if (!parts) 212 212 return -ENOMEM; 213 213 214 - memset(parts, 0, sz); 215 214 str = (char *)(parts + idx); 216 215 217 216 /*
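The afs.c change above is the first of many identical conversions in the MTD hunks that follow: a kmalloc() immediately followed by memset(..., 0, size) collapses into a single kzalloc() with the same size and GFP flags. A minimal before/after sketch with an invented structure:

#include <linux/slab.h>

struct ex_thing {
        int a;
        void *b;
};

static struct ex_thing *ex_alloc(void)
{
        /* old form:
         *      p = kmalloc(sizeof(*p), GFP_KERNEL);
         *      if (!p)
         *              return NULL;
         *      memset(p, 0, sizeof(*p));
         */
        return kzalloc(sizeof(struct ex_thing), GFP_KERNEL);
}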
+1 -2
drivers/mtd/chips/amd_flash.c
··· 643 643 int reg_idx; 644 644 int offset; 645 645 646 - mtd = (struct mtd_info*)kmalloc(sizeof(*mtd), GFP_KERNEL); 646 + mtd = kzalloc(sizeof(*mtd), GFP_KERNEL); 647 647 if (!mtd) { 648 648 printk(KERN_WARNING 649 649 "%s: kmalloc failed for info structure\n", map->name); 650 650 return NULL; 651 651 } 652 - memset(mtd, 0, sizeof(*mtd)); 653 652 mtd->priv = map; 654 653 655 654 memset(&temp, 0, sizeof(temp));
+3 -2
drivers/mtd/chips/cfi_cmdset_0001.c
··· 337 337 struct mtd_info *mtd; 338 338 int i; 339 339 340 - mtd = kmalloc(sizeof(*mtd), GFP_KERNEL); 340 + mtd = kzalloc(sizeof(*mtd), GFP_KERNEL); 341 341 if (!mtd) { 342 342 printk(KERN_ERR "Failed to allocate memory for MTD device\n"); 343 343 return NULL; 344 344 } 345 - memset(mtd, 0, sizeof(*mtd)); 346 345 mtd->priv = map; 347 346 mtd->type = MTD_NORFLASH; 348 347 ··· 2223 2224 case FL_CFI_QUERY: 2224 2225 case FL_JEDEC_QUERY: 2225 2226 if (chip->oldstate == FL_READY) { 2227 + /* place the chip in a known state before suspend */ 2228 + map_write(map, CMD(0xFF), cfi->chips[i].start); 2226 2229 chip->oldstate = chip->state; 2227 2230 chip->state = FL_PM_SUSPENDED; 2228 2231 /* No need to wake_up() on this state change -
+7 -4
drivers/mtd/chips/cfi_cmdset_0002.c
··· 48 48 #define MANUFACTURER_ATMEL 0x001F 49 49 #define MANUFACTURER_SST 0x00BF 50 50 #define SST49LF004B 0x0060 51 + #define SST49LF040B 0x0050 51 52 #define SST49LF008A 0x005a 52 53 #define AT49BV6416 0x00d6 53 54 ··· 234 233 }; 235 234 static struct cfi_fixup jedec_fixup_table[] = { 236 235 { MANUFACTURER_SST, SST49LF004B, fixup_use_fwh_lock, NULL, }, 236 + { MANUFACTURER_SST, SST49LF040B, fixup_use_fwh_lock, NULL, }, 237 237 { MANUFACTURER_SST, SST49LF008A, fixup_use_fwh_lock, NULL, }, 238 238 { 0, 0, NULL, NULL } 239 239 }; ··· 257 255 struct mtd_info *mtd; 258 256 int i; 259 257 260 - mtd = kmalloc(sizeof(*mtd), GFP_KERNEL); 258 + mtd = kzalloc(sizeof(*mtd), GFP_KERNEL); 261 259 if (!mtd) { 262 260 printk(KERN_WARNING "Failed to allocate memory for MTD device\n"); 263 261 return NULL; 264 262 } 265 - memset(mtd, 0, sizeof(*mtd)); 266 263 mtd->priv = map; 267 264 mtd->type = MTD_NORFLASH; 268 265 ··· 520 519 if (mode == FL_WRITING) /* FIXME: Erase-suspend-program appears broken. */ 521 520 goto sleep; 522 521 523 - if (!(mode == FL_READY || mode == FL_POINT 522 + if (!( mode == FL_READY 523 + || mode == FL_POINT 524 524 || !cfip 525 525 || (mode == FL_WRITING && (cfip->EraseSuspend & 0x2)) 526 - || (mode == FL_WRITING && (cfip->EraseSuspend & 0x1)))) 526 + || (mode == FL_WRITING && (cfip->EraseSuspend & 0x1) 527 + ))) 527 528 goto sleep; 528 529 529 530 /* We could check to see if we're trying to access the sector
+1 -2
drivers/mtd/chips/cfi_cmdset_0020.c
··· 172 172 int i,j; 173 173 unsigned long devsize = (1<<cfi->cfiq->DevSize) * cfi->interleave; 174 174 175 - mtd = kmalloc(sizeof(*mtd), GFP_KERNEL); 175 + mtd = kzalloc(sizeof(*mtd), GFP_KERNEL); 176 176 //printk(KERN_DEBUG "number of CFI chips: %d\n", cfi->numchips); 177 177 178 178 if (!mtd) { ··· 181 181 return NULL; 182 182 } 183 183 184 - memset(mtd, 0, sizeof(*mtd)); 185 184 mtd->priv = map; 186 185 mtd->type = MTD_NORFLASH; 187 186 mtd->size = devsize * cfi->numchips;
+2 -3
drivers/mtd/chips/gen_probe.c
··· 40 40 if (mtd) { 41 41 if (mtd->size > map->size) { 42 42 printk(KERN_WARNING "Reducing visibility of %ldKiB chip to %ldKiB\n", 43 - (unsigned long)mtd->size >> 10, 43 + (unsigned long)mtd->size >> 10, 44 44 (unsigned long)map->size >> 10); 45 45 mtd->size = map->size; 46 46 } ··· 113 113 } 114 114 115 115 mapsize = (max_chips + BITS_PER_LONG-1) / BITS_PER_LONG; 116 - chip_map = kmalloc(mapsize, GFP_KERNEL); 116 + chip_map = kzalloc(mapsize, GFP_KERNEL); 117 117 if (!chip_map) { 118 118 printk(KERN_WARNING "%s: kmalloc failed for CFI chip map\n", map->name); 119 119 kfree(cfi.cfiq); 120 120 return NULL; 121 121 } 122 - memset (chip_map, 0, mapsize); 123 122 124 123 set_bit(0, chip_map); /* Mark first chip valid */ 125 124
+1 -2
drivers/mtd/chips/jedec.c
··· 116 116 char Part[200]; 117 117 memset(&priv,0,sizeof(priv)); 118 118 119 - MTD = kmalloc(sizeof(struct mtd_info) + sizeof(struct jedec_private), GFP_KERNEL); 119 + MTD = kzalloc(sizeof(struct mtd_info) + sizeof(struct jedec_private), GFP_KERNEL); 120 120 if (!MTD) 121 121 return NULL; 122 122 123 - memset(MTD, 0, sizeof(struct mtd_info) + sizeof(struct jedec_private)); 124 123 priv = (struct jedec_private *)&MTD[1]; 125 124 126 125 my_bank_size = map->size;
+16 -1
drivers/mtd/chips/jedec_probe.c
··· 154 154 #define SST39SF010A 0x00B5 155 155 #define SST39SF020A 0x00B6 156 156 #define SST49LF004B 0x0060 157 + #define SST49LF040B 0x0050 157 158 #define SST49LF008A 0x005a 158 159 #define SST49LF030A 0x001C 159 160 #define SST49LF040A 0x0051 ··· 1402 1401 } 1403 1402 }, { 1404 1403 .mfr_id = MANUFACTURER_SST, 1404 + .dev_id = SST49LF040B, 1405 + .name = "SST 49LF040B", 1406 + .uaddr = { 1407 + [0] = MTD_UADDR_0x5555_0x2AAA /* x8 */ 1408 + }, 1409 + .DevSize = SIZE_512KiB, 1410 + .CmdSet = P_ID_AMD_STD, 1411 + .NumEraseRegions= 1, 1412 + .regions = { 1413 + ERASEINFO(0x01000,128), 1414 + } 1415 + }, { 1416 + 1417 + .mfr_id = MANUFACTURER_SST, 1405 1418 .dev_id = SST49LF004B, 1406 1419 .name = "SST 49LF004B", 1407 1420 .uaddr = { ··· 1889 1874 1890 1875 1891 1876 /* 1892 - * There is a BIG problem properly ID'ing the JEDEC devic and guaranteeing 1877 + * There is a BIG problem properly ID'ing the JEDEC device and guaranteeing 1893 1878 * the mapped address, unlock addresses, and proper chip ID. This function 1894 1879 * attempts to minimize errors. It is doubtfull that this probe will ever 1895 1880 * be perfect - consequently there should be some module parameters that
+1 -3
drivers/mtd/chips/map_absent.c
··· 47 47 { 48 48 struct mtd_info *mtd; 49 49 50 - mtd = kmalloc(sizeof(*mtd), GFP_KERNEL); 50 + mtd = kzalloc(sizeof(*mtd), GFP_KERNEL); 51 51 if (!mtd) { 52 52 return NULL; 53 53 } 54 - 55 - memset(mtd, 0, sizeof(*mtd)); 56 54 57 55 map->fldrv = &map_absent_chipdrv; 58 56 mtd->priv = map;
+1 -3
drivers/mtd/chips/map_ram.c
··· 55 55 #endif 56 56 /* OK. It seems to be RAM. */ 57 57 58 - mtd = kmalloc(sizeof(*mtd), GFP_KERNEL); 58 + mtd = kzalloc(sizeof(*mtd), GFP_KERNEL); 59 59 if (!mtd) 60 60 return NULL; 61 - 62 - memset(mtd, 0, sizeof(*mtd)); 63 61 64 62 map->fldrv = &mapram_chipdrv; 65 63 mtd->priv = map;
+1 -3
drivers/mtd/chips/map_rom.c
··· 31 31 { 32 32 struct mtd_info *mtd; 33 33 34 - mtd = kmalloc(sizeof(*mtd), GFP_KERNEL); 34 + mtd = kzalloc(sizeof(*mtd), GFP_KERNEL); 35 35 if (!mtd) 36 36 return NULL; 37 - 38 - memset(mtd, 0, sizeof(*mtd)); 39 37 40 38 map->fldrv = &maprom_chipdrv; 41 39 mtd->priv = map;
+2 -5
drivers/mtd/chips/sharp.c
··· 112 112 struct sharp_info *sharp = NULL; 113 113 int width; 114 114 115 - mtd = kmalloc(sizeof(*mtd), GFP_KERNEL); 115 + mtd = kzalloc(sizeof(*mtd), GFP_KERNEL); 116 116 if(!mtd) 117 117 return NULL; 118 118 119 - sharp = kmalloc(sizeof(*sharp), GFP_KERNEL); 119 + sharp = kzalloc(sizeof(*sharp), GFP_KERNEL); 120 120 if(!sharp) { 121 121 kfree(mtd); 122 122 return NULL; 123 123 } 124 - 125 - memset(mtd, 0, sizeof(*mtd)); 126 124 127 125 width = sharp_probe_map(map,mtd); 128 126 if(!width){ ··· 141 143 mtd->writesize = 1; 142 144 mtd->name = map->name; 143 145 144 - memset(sharp, 0, sizeof(*sharp)); 145 146 sharp->chipshift = 23; 146 147 sharp->numchips = 1; 147 148 sharp->chips[0].start = 0;
+2 -3
drivers/mtd/cmdlinepart.c
··· 163 163 *num_parts = this_part + 1; 164 164 alloc_size = *num_parts * sizeof(struct mtd_partition) + 165 165 extra_mem_size; 166 - parts = kmalloc(alloc_size, GFP_KERNEL); 166 + parts = kzalloc(alloc_size, GFP_KERNEL); 167 167 if (!parts) 168 168 { 169 169 printk(KERN_ERR ERRP "out of memory\n"); 170 170 return NULL; 171 171 } 172 - memset(parts, 0, alloc_size); 173 172 extra_mem = (unsigned char *)(parts + *num_parts); 174 173 } 175 174 /* enter this partition (offset will be calculated later if it is zero at this point) */ ··· 345 346 * 346 347 * This function needs to be visible for bootloaders. 347 348 */ 348 - int mtdpart_setup(char *s) 349 + static int mtdpart_setup(char *s) 349 350 { 350 351 cmdline = s; 351 352 return 1;
+1 -2
drivers/mtd/devices/block2mtd.c
··· 295 295 if (!devname) 296 296 return NULL; 297 297 298 - dev = kmalloc(sizeof(struct block2mtd_dev), GFP_KERNEL); 298 + dev = kzalloc(sizeof(struct block2mtd_dev), GFP_KERNEL); 299 299 if (!dev) 300 300 return NULL; 301 - memset(dev, 0, sizeof(*dev)); 302 301 303 302 /* Get a handle on the device */ 304 303 bdev = open_bdev_excl(devname, O_RDWR, NULL);
+6 -12
drivers/mtd/devices/ms02-nv.c
··· 131 131 int ret = -ENODEV; 132 132 133 133 /* The module decodes 8MiB of address space. */ 134 - mod_res = kmalloc(sizeof(*mod_res), GFP_KERNEL); 134 + mod_res = kzalloc(sizeof(*mod_res), GFP_KERNEL); 135 135 if (!mod_res) 136 136 return -ENOMEM; 137 137 138 - memset(mod_res, 0, sizeof(*mod_res)); 139 138 mod_res->name = ms02nv_name; 140 139 mod_res->start = addr; 141 140 mod_res->end = addr + MS02NV_SLOT_SIZE - 1; ··· 152 153 } 153 154 154 155 ret = -ENOMEM; 155 - mtd = kmalloc(sizeof(*mtd), GFP_KERNEL); 156 + mtd = kzalloc(sizeof(*mtd), GFP_KERNEL); 156 157 if (!mtd) 157 158 goto err_out_mod_res_rel; 158 - memset(mtd, 0, sizeof(*mtd)); 159 - mp = kmalloc(sizeof(*mp), GFP_KERNEL); 159 + mp = kzalloc(sizeof(*mp), GFP_KERNEL); 160 160 if (!mp) 161 161 goto err_out_mtd; 162 - memset(mp, 0, sizeof(*mp)); 163 162 164 163 mtd->priv = mp; 165 164 mp->resource.module = mod_res; 166 165 167 166 /* Firmware's diagnostic NVRAM area. */ 168 - diag_res = kmalloc(sizeof(*diag_res), GFP_KERNEL); 167 + diag_res = kzalloc(sizeof(*diag_res), GFP_KERNEL); 169 168 if (!diag_res) 170 169 goto err_out_mp; 171 170 172 - memset(diag_res, 0, sizeof(*diag_res)); 173 171 diag_res->name = ms02nv_res_diag_ram; 174 172 diag_res->start = addr; 175 173 diag_res->end = addr + MS02NV_RAM - 1; ··· 176 180 mp->resource.diag_ram = diag_res; 177 181 178 182 /* User-available general-purpose NVRAM area. */ 179 - user_res = kmalloc(sizeof(*user_res), GFP_KERNEL); 183 + user_res = kzalloc(sizeof(*user_res), GFP_KERNEL); 180 184 if (!user_res) 181 185 goto err_out_diag_res; 182 186 183 - memset(user_res, 0, sizeof(*user_res)); 184 187 user_res->name = ms02nv_res_user_ram; 185 188 user_res->start = addr + MS02NV_RAM; 186 189 user_res->end = addr + size - 1; ··· 189 194 mp->resource.user_ram = user_res; 190 195 191 196 /* Control and status register. */ 192 - csr_res = kmalloc(sizeof(*csr_res), GFP_KERNEL); 197 + csr_res = kzalloc(sizeof(*csr_res), GFP_KERNEL); 193 198 if (!csr_res) 194 199 goto err_out_user_res; 195 200 196 - memset(csr_res, 0, sizeof(*csr_res)); 197 201 csr_res->name = ms02nv_res_csr; 198 202 csr_res->start = addr + MS02NV_CSR; 199 203 csr_res->end = addr + MS02NV_CSR + 3;
+1 -1
drivers/mtd/devices/mtd_dataflash.c
··· 480 480 device->writesize = pagesize; 481 481 device->owner = THIS_MODULE; 482 482 device->type = MTD_DATAFLASH; 483 - device->flags = MTD_CAP_NORFLASH; 483 + device->flags = MTD_WRITEABLE; 484 484 device->erase = dataflash_erase; 485 485 device->read = dataflash_read; 486 486 device->write = dataflash_write;
+1 -3
drivers/mtd/devices/phram.c
··· 126 126 struct phram_mtd_list *new; 127 127 int ret = -ENOMEM; 128 128 129 - new = kmalloc(sizeof(*new), GFP_KERNEL); 129 + new = kzalloc(sizeof(*new), GFP_KERNEL); 130 130 if (!new) 131 131 goto out0; 132 - 133 - memset(new, 0, sizeof(*new)); 134 132 135 133 ret = -EIO; 136 134 new->mtd.priv = ioremap(start, len);
+2 -5
drivers/mtd/devices/slram.c
··· 168 168 E("slram: Cannot allocate new MTD device.\n"); 169 169 return(-ENOMEM); 170 170 } 171 - (*curmtd)->mtdinfo = kmalloc(sizeof(struct mtd_info), GFP_KERNEL); 171 + (*curmtd)->mtdinfo = kzalloc(sizeof(struct mtd_info), GFP_KERNEL); 172 172 (*curmtd)->next = NULL; 173 173 174 174 if ((*curmtd)->mtdinfo) { 175 - memset((char *)(*curmtd)->mtdinfo, 0, sizeof(struct mtd_info)); 176 175 (*curmtd)->mtdinfo->priv = 177 - kmalloc(sizeof(slram_priv_t), GFP_KERNEL); 176 + kzalloc(sizeof(slram_priv_t), GFP_KERNEL); 178 177 179 178 if (!(*curmtd)->mtdinfo->priv) { 180 179 kfree((*curmtd)->mtdinfo); 181 180 (*curmtd)->mtdinfo = NULL; 182 - } else { 183 - memset((*curmtd)->mtdinfo->priv,0,sizeof(slram_priv_t)); 184 181 } 185 182 } 186 183
+3 -4
drivers/mtd/ftl.c
··· 1033 1033 { 1034 1034 partition_t *partition; 1035 1035 1036 - partition = kmalloc(sizeof(partition_t), GFP_KERNEL); 1036 + partition = kzalloc(sizeof(partition_t), GFP_KERNEL); 1037 1037 1038 1038 if (!partition) { 1039 1039 printk(KERN_WARNING "No memory to scan for FTL on %s\n", 1040 1040 mtd->name); 1041 1041 return; 1042 1042 } 1043 - 1044 - memset(partition, 0, sizeof(partition_t)); 1045 1043 1046 1044 partition->mbd.mtd = mtd; 1047 1045 ··· 1052 1054 le32_to_cpu(partition->header.FormattedSize) >> 10); 1053 1055 #endif 1054 1056 partition->mbd.size = le32_to_cpu(partition->header.FormattedSize) >> 9; 1055 - partition->mbd.blksize = SECTOR_SIZE; 1057 + 1056 1058 partition->mbd.tr = tr; 1057 1059 partition->mbd.devnum = -1; 1058 1060 if (!add_mtd_blktrans_dev((void *)partition)) ··· 1074 1076 .name = "ftl", 1075 1077 .major = FTL_MAJOR, 1076 1078 .part_bits = PART_BITS, 1079 + .blksize = SECTOR_SIZE, 1077 1080 .readsect = ftl_readsect, 1078 1081 .writesect = ftl_writesect, 1079 1082 .getgeo = ftl_getgeo,
+5 -7
drivers/mtd/inftlcore.c
··· 67 67 68 68 DEBUG(MTD_DEBUG_LEVEL3, "INFTL: add_mtd for %s\n", mtd->name); 69 69 70 - inftl = kmalloc(sizeof(*inftl), GFP_KERNEL); 70 + inftl = kzalloc(sizeof(*inftl), GFP_KERNEL); 71 71 72 72 if (!inftl) { 73 73 printk(KERN_WARNING "INFTL: Out of memory for data structures\n"); 74 74 return; 75 75 } 76 - memset(inftl, 0, sizeof(*inftl)); 77 76 78 77 inftl->mbd.mtd = mtd; 79 78 inftl->mbd.devnum = -1; 80 - inftl->mbd.blksize = 512; 79 + 81 80 inftl->mbd.tr = tr; 82 81 83 82 if (INFTL_mount(inftl) < 0) { ··· 162 163 ops.ooblen = len; 163 164 ops.oobbuf = buf; 164 165 ops.datbuf = NULL; 165 - ops.len = len; 166 166 167 167 res = mtd->read_oob(mtd, offs & ~(mtd->writesize - 1), &ops); 168 - *retlen = ops.retlen; 168 + *retlen = ops.oobretlen; 169 169 return res; 170 170 } 171 171 ··· 182 184 ops.ooblen = len; 183 185 ops.oobbuf = buf; 184 186 ops.datbuf = NULL; 185 - ops.len = len; 186 187 187 188 res = mtd->write_oob(mtd, offs & ~(mtd->writesize - 1), &ops); 188 - *retlen = ops.retlen; 189 + *retlen = ops.oobretlen; 189 190 return res; 190 191 } 191 192 ··· 942 945 .name = "inftl", 943 946 .major = INFTL_MAJOR, 944 947 .part_bits = INFTL_PARTN_BITS, 948 + .blksize = 512, 945 949 .getgeo = inftl_getgeo, 946 950 .readsect = inftl_readblock, 947 951 .writesect = inftl_writeblock,
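The inftlcore.c hunks above switch the OOB helpers to report ops.oobretlen, which counts OOB bytes transferred, rather than ops.retlen, which tracks the (absent) data buffer, and drop the now-meaningless ops.len assignment. A sketch of an OOB-only read mirroring that usage; treat the exact field set as an assumption about the mtd_oob_ops API of this kernel generation:

#include <linux/mtd/mtd.h>

/* With datbuf == NULL only the OOB area is transferred; the byte
 * count comes back in oobretlen. */
static int ex_read_oob(struct mtd_info *mtd, loff_t offs,
                       size_t len, size_t *retlen, uint8_t *buf)
{
        struct mtd_oob_ops ops;
        int res;

        ops.mode = MTD_OOB_PLACE;
        ops.ooboffs = offs & (mtd->writesize - 1);
        ops.ooblen = len;
        ops.oobbuf = buf;
        ops.datbuf = NULL;

        res = mtd->read_oob(mtd, offs & ~(loff_t)(mtd->writesize - 1), &ops);
        *retlen = ops.oobretlen;

        return res;
}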
+27 -44
drivers/mtd/maps/Kconfig
··· 60 60 Ignore this option if you use run-time physmap configuration 61 61 (i.e., run-time calling physmap_configure()). 62 62 63 + config MTD_PHYSMAP_OF 64 + tristate "Flash device in physical memory map based on OF descirption" 65 + depends on PPC_OF && (MTD_CFI || MTD_JEDECPROBE || MTD_ROM) 66 + help 67 + This provides a 'mapping' driver which allows the NOR Flash and 68 + ROM driver code to communicate with chips which are mapped 69 + physically into the CPU's memory. The mapping description here is 70 + taken from OF device tree. 71 + 63 72 config MTD_SUN_UFLASH 64 73 tristate "Sun Microsystems userflash support" 65 74 depends on SPARC && MTD_CFI ··· 189 180 depends on X86 && MTD_JEDECPROBE 190 181 help 191 182 Support for treating the BIOS flash chip on ICHX motherboards 183 + as an MTD device - with this you can reprogram your BIOS. 184 + 185 + BE VERY CAREFUL. 186 + 187 + config MTD_ESB2ROM 188 + tristate "BIOS flash chip on Intel ESB Controller Hub 2" 189 + depends on X86 && MTD_JEDECPROBE && PCI 190 + help 191 + Support for treating the BIOS flash chip on ESB2 motherboards 192 + as an MTD device - with this you can reprogram your BIOS. 193 + 194 + BE VERY CAREFUL. 195 + 196 + config MTD_CK804XROM 197 + tristate "BIOS flash chip on Nvidia CK804" 198 + depends on X86 && MTD_JEDECPROBE 199 + help 200 + Support for treating the BIOS flash chip on nvidia motherboards 192 201 as an MTD device - with this you can reprogram your BIOS. 193 202 194 203 BE VERY CAREFUL. ··· 381 354 This enables access routines for the flash chips on the 382 355 TQ Components TQM834x boards. If you have one of these boards 383 356 and would like to use the flash chips on it, say 'Y'. 384 - 385 - config MTD_CSTM_MIPS_IXX 386 - tristate "Flash chip mapping on ITE QED-4N-S01B, Globespan IVR or custom board" 387 - depends on MIPS && MTD_CFI && MTD_JEDECPROBE && MTD_PARTITIONS 388 - help 389 - This provides a mapping driver for the Integrated Technology 390 - Express, Inc (ITE) QED-4N-S01B eval board and the Globespan IVR 391 - Reference Board. It provides the necessary addressing, length, 392 - buswidth, vpp code and addition setup of the flash device for 393 - these boards. In addition, this mapping driver can be used for 394 - other boards via setting of the CONFIG_MTD_CSTM_MIPS_IXX_START/ 395 - LEN/BUSWIDTH parameters. This mapping will provide one mtd device 396 - using one partition. The start address can be offset from the 397 - beginning of flash and the len can be less than the total flash 398 - device size to allow a window into the flash. Both CFI and JEDEC 399 - probes are called. 400 - 401 - config MTD_CSTM_MIPS_IXX_START 402 - hex "Physical start address of flash mapping" 403 - depends on MTD_CSTM_MIPS_IXX 404 - default "0x8000000" 405 - help 406 - This is the physical memory location that the MTD driver will 407 - use for the flash chips on your particular target board. 408 - Refer to the memory map which should hopefully be in the 409 - documentation for your board. 410 - 411 - config MTD_CSTM_MIPS_IXX_LEN 412 - hex "Physical length of flash mapping" 413 - depends on MTD_CSTM_MIPS_IXX 414 - default "0x4000000" 415 - help 416 - This is the total length that the MTD driver will use for the 417 - flash chips on your particular board. Refer to the memory 418 - map which should hopefully be in the documentation for your 419 - board. 
420 - 421 - config MTD_CSTM_MIPS_IXX_BUSWIDTH 422 - int "Bus width in octets" 423 - depends on MTD_CSTM_MIPS_IXX 424 - default "2" 425 - help 426 - This is the total bus width of the mapping of the flash chips 427 - on your particular board. 428 357 429 358 config MTD_OCELOT 430 359 tristate "Momenco Ocelot boot flash device"
+3 -1
drivers/mtd/maps/Makefile
··· 12 12 obj-$(CONFIG_MTD_ARM_INTEGRATOR)+= integrator-flash.o 13 13 obj-$(CONFIG_MTD_BAST) += bast-flash.o 14 14 obj-$(CONFIG_MTD_CFI_FLAGADM) += cfi_flagadm.o 15 - obj-$(CONFIG_MTD_CSTM_MIPS_IXX) += cstm_mips_ixx.o 16 15 obj-$(CONFIG_MTD_DC21285) += dc21285.o 17 16 obj-$(CONFIG_MTD_DILNETPC) += dilnetpc.o 18 17 obj-$(CONFIG_MTD_L440GX) += l440gx.o 19 18 obj-$(CONFIG_MTD_AMD76XROM) += amd76xrom.o 19 + obj-$(CONFIG_MTD_ESB2ROM) += esb2rom.o 20 20 obj-$(CONFIG_MTD_ICHXROM) += ichxrom.o 21 + obj-$(CONFIG_MTD_CK804XROM) += ck804xrom.o 21 22 obj-$(CONFIG_MTD_TSUNAMI) += tsunami_flash.o 22 23 obj-$(CONFIG_MTD_LUBBOCK) += lubbock-flash.o 23 24 obj-$(CONFIG_MTD_MAINSTONE) += mainstone-flash.o ··· 26 25 obj-$(CONFIG_MTD_CEIVA) += ceiva.o 27 26 obj-$(CONFIG_MTD_OCTAGON) += octagon-5066.o 28 27 obj-$(CONFIG_MTD_PHYSMAP) += physmap.o 28 + obj-$(CONFIG_MTD_PHYSMAP_OF) += physmap_of.o 29 29 obj-$(CONFIG_MTD_PNC2000) += pnc2000.o 30 30 obj-$(CONFIG_MTD_PCMCIA) += pcmciamtd.o 31 31 obj-$(CONFIG_MTD_RPXLITE) += rpxlite.o
+28 -6
drivers/mtd/maps/amd76xrom.c
··· 7 7 8 8 #include <linux/module.h> 9 9 #include <linux/types.h> 10 + #include <linux/version.h> 10 11 #include <linux/kernel.h> 11 12 #include <linux/init.h> 12 13 #include <asm/io.h> ··· 44 43 struct resource rsrc; 45 44 char map_name[sizeof(MOD_NAME) + 2 + ADDRESS_NAME_LEN]; 46 45 }; 46 + 47 + /* The 2 bits controlling the window size are often set to allow reading 48 + * the BIOS, but too small to allow writing, since the lock registers are 49 + * 4MiB lower in the address space than the data. 50 + * 51 + * This is intended to prevent flashing the bios, perhaps accidentally. 52 + * 53 + * This parameter allows the normal driver to over-ride the BIOS settings. 54 + * 55 + * The bits are 6 and 7. If both bits are set, it is a 5MiB window. 56 + * If only the 7 Bit is set, it is a 4MiB window. Otherwise, a 57 + * 64KiB window. 58 + * 59 + */ 60 + static uint win_size_bits; 61 + module_param(win_size_bits, uint, 0); 62 + MODULE_PARM_DESC(win_size_bits, "ROM window size bits override for 0x43 byte, normally set by BIOS."); 47 63 48 64 static struct amd76xrom_window amd76xrom_window = { 49 65 .maps = LIST_HEAD_INIT(amd76xrom_window.maps), ··· 113 95 /* Remember the pci dev I find the window in - already have a ref */ 114 96 window->pdev = pdev; 115 97 98 + /* Enable the selected rom window. This is often incorrectly 99 + * set up by the BIOS, and the 4MiB offset for the lock registers 100 + * requires the full 5MiB of window space. 101 + * 102 + * This 'write, then read' approach leaves the bits for 103 + * other uses of the hardware info. 104 + */ 105 + pci_read_config_byte(pdev, 0x43, &byte); 106 + pci_write_config_byte(pdev, 0x43, byte | win_size_bits ); 107 + 116 108 /* Assume the rom window is properly setup, and find it's size */ 117 109 pci_read_config_byte(pdev, 0x43, &byte); 118 110 if ((byte & ((1<<7)|(1<<6))) == ((1<<7)|(1<<6))) { ··· 157 129 (unsigned long long)window->rsrc.end); 158 130 } 159 131 160 - #if 0 161 - 162 - /* Enable the selected rom window */ 163 - pci_read_config_byte(pdev, 0x43, &byte); 164 - pci_write_config_byte(pdev, 0x43, byte | rwindow->segen_bits); 165 - #endif 166 132 167 133 /* Enable writes through the rom window */ 168 134 pci_read_config_byte(pdev, 0x40, &byte);
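The comment block added to amd76xrom.c explains the new win_size_bits parameter: bits 6 and 7 of config byte 0x43 size the ROM window (both set gives 5MiB, bit 7 alone 4MiB, otherwise 64KiB), and the driver ORs the user-supplied bits in and then re-reads the byte so the rest of the BIOS settings are preserved. A compressed sketch of that read-modify-write-then-size sequence, with invented names and the register passed in as a parameter:

#include <linux/module.h>
#include <linux/pci.h>

static uint ex_win_bits;
module_param(ex_win_bits, uint, 0);

/* OR the requested window bits into the config byte without touching
 * the other bits, then size the window from what the device reports. */
static unsigned long ex_window_size(struct pci_dev *pdev, int reg)
{
        u8 byte;

        pci_read_config_byte(pdev, reg, &byte);
        pci_write_config_byte(pdev, reg, byte | ex_win_bits);
        pci_read_config_byte(pdev, reg, &byte);

        if ((byte & 0xc0) == 0xc0)
                return 5 * 1024 * 1024;         /* bits 7 and 6: 5MiB */
        if (byte & 0x80)
                return 4 * 1024 * 1024;         /* bit 7 only:   4MiB */
        return 64 * 1024;                       /* default:      64KiB */
}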
+1 -1
drivers/mtd/maps/bast-flash.c
··· 131 131 132 132 info->map.phys = res->start; 133 133 info->map.size = res->end - res->start + 1; 134 - info->map.name = pdev->dev.bus_id; 134 + info->map.name = pdev->dev.bus_id; 135 135 info->map.bankwidth = 2; 136 136 137 137 if (info->map.size > AREA_MAXSIZE)
+1 -2
drivers/mtd/maps/ceiva.c
··· 122 122 /* 123 123 * Allocate the map_info structs in one go. 124 124 */ 125 - maps = kmalloc(sizeof(struct map_info) * nr, GFP_KERNEL); 125 + maps = kzalloc(sizeof(struct map_info) * nr, GFP_KERNEL); 126 126 if (!maps) 127 127 return -ENOMEM; 128 - memset(maps, 0, sizeof(struct map_info) * nr); 129 128 /* 130 129 * Claim and then map the memory regions. 131 130 */
+356
drivers/mtd/maps/ck804xrom.c
··· 1 + /* 2 + * ck804xrom.c 3 + * 4 + * Normal mappings of chips in physical memory 5 + * 6 + * Dave Olsen <dolsen@lnxi.com> 7 + * Ryan Jackson <rjackson@lnxi.com> 8 + */ 9 + 10 + #include <linux/module.h> 11 + #include <linux/types.h> 12 + #include <linux/version.h> 13 + #include <linux/kernel.h> 14 + #include <linux/init.h> 15 + #include <asm/io.h> 16 + #include <linux/mtd/mtd.h> 17 + #include <linux/mtd/map.h> 18 + #include <linux/mtd/cfi.h> 19 + #include <linux/mtd/flashchip.h> 20 + #include <linux/pci.h> 21 + #include <linux/pci_ids.h> 22 + #include <linux/list.h> 23 + 24 + 25 + #define MOD_NAME KBUILD_BASENAME 26 + 27 + #define ADDRESS_NAME_LEN 18 28 + 29 + #define ROM_PROBE_STEP_SIZE (64*1024) 30 + 31 + struct ck804xrom_window { 32 + void __iomem *virt; 33 + unsigned long phys; 34 + unsigned long size; 35 + struct list_head maps; 36 + struct resource rsrc; 37 + struct pci_dev *pdev; 38 + }; 39 + 40 + struct ck804xrom_map_info { 41 + struct list_head list; 42 + struct map_info map; 43 + struct mtd_info *mtd; 44 + struct resource rsrc; 45 + char map_name[sizeof(MOD_NAME) + 2 + ADDRESS_NAME_LEN]; 46 + }; 47 + 48 + 49 + /* The 2 bits controlling the window size are often set to allow reading 50 + * the BIOS, but too small to allow writing, since the lock registers are 51 + * 4MiB lower in the address space than the data. 52 + * 53 + * This is intended to prevent flashing the bios, perhaps accidentally. 54 + * 55 + * This parameter allows the normal driver to override the BIOS settings. 56 + * 57 + * The bits are 6 and 7. If both bits are set, it is a 5MiB window. 58 + * If only the 7 Bit is set, it is a 4MiB window. Otherwise, a 59 + * 64KiB window. 60 + * 61 + */ 62 + static uint win_size_bits = 0; 63 + module_param(win_size_bits, uint, 0); 64 + MODULE_PARM_DESC(win_size_bits, "ROM window size bits override for 0x88 byte, normally set by BIOS."); 65 + 66 + static struct ck804xrom_window ck804xrom_window = { 67 + .maps = LIST_HEAD_INIT(ck804xrom_window.maps), 68 + }; 69 + 70 + static void ck804xrom_cleanup(struct ck804xrom_window *window) 71 + { 72 + struct ck804xrom_map_info *map, *scratch; 73 + u8 byte; 74 + 75 + if (window->pdev) { 76 + /* Disable writes through the rom window */ 77 + pci_read_config_byte(window->pdev, 0x6d, &byte); 78 + pci_write_config_byte(window->pdev, 0x6d, byte & ~1); 79 + } 80 + 81 + /* Free all of the mtd devices */ 82 + list_for_each_entry_safe(map, scratch, &window->maps, list) { 83 + if (map->rsrc.parent) 84 + release_resource(&map->rsrc); 85 + 86 + del_mtd_device(map->mtd); 87 + map_destroy(map->mtd); 88 + list_del(&map->list); 89 + kfree(map); 90 + } 91 + if (window->rsrc.parent) 92 + release_resource(&window->rsrc); 93 + 94 + if (window->virt) { 95 + iounmap(window->virt); 96 + window->virt = NULL; 97 + window->phys = 0; 98 + window->size = 0; 99 + } 100 + pci_dev_put(window->pdev); 101 + } 102 + 103 + 104 + static int __devinit ck804xrom_init_one (struct pci_dev *pdev, 105 + const struct pci_device_id *ent) 106 + { 107 + static char *rom_probe_types[] = { "cfi_probe", "jedec_probe", NULL }; 108 + u8 byte; 109 + struct ck804xrom_window *window = &ck804xrom_window; 110 + struct ck804xrom_map_info *map = NULL; 111 + unsigned long map_top; 112 + 113 + /* Remember the pci dev I find the window in */ 114 + window->pdev = pci_dev_get(pdev); 115 + 116 + /* Enable the selected rom window. This is often incorrectly 117 + * set up by the BIOS, and the 4MiB offset for the lock registers 118 + * requires the full 5MiB of window space. 
119 + * 120 + * This 'write, then read' approach leaves the bits for 121 + * other uses of the hardware info. 122 + */ 123 + pci_read_config_byte(pdev, 0x88, &byte); 124 + pci_write_config_byte(pdev, 0x88, byte | win_size_bits ); 125 + 126 + 127 + /* Assume the rom window is properly setup, and find it's size */ 128 + pci_read_config_byte(pdev, 0x88, &byte); 129 + 130 + if ((byte & ((1<<7)|(1<<6))) == ((1<<7)|(1<<6))) 131 + window->phys = 0xffb00000; /* 5MiB */ 132 + else if ((byte & (1<<7)) == (1<<7)) 133 + window->phys = 0xffc00000; /* 4MiB */ 134 + else 135 + window->phys = 0xffff0000; /* 64KiB */ 136 + 137 + window->size = 0xffffffffUL - window->phys + 1UL; 138 + 139 + /* 140 + * Try to reserve the window mem region. If this fails then 141 + * it is likely due to a fragment of the window being 142 + * "reserved" by the BIOS. In the case that the 143 + * request_mem_region() fails then once the rom size is 144 + * discovered we will try to reserve the unreserved fragment. 145 + */ 146 + window->rsrc.name = MOD_NAME; 147 + window->rsrc.start = window->phys; 148 + window->rsrc.end = window->phys + window->size - 1; 149 + window->rsrc.flags = IORESOURCE_MEM | IORESOURCE_BUSY; 150 + if (request_resource(&iomem_resource, &window->rsrc)) { 151 + window->rsrc.parent = NULL; 152 + printk(KERN_ERR MOD_NAME 153 + " %s(): Unable to register resource" 154 + " 0x%.016llx-0x%.016llx - kernel bug?\n", 155 + __func__, 156 + (unsigned long long)window->rsrc.start, 157 + (unsigned long long)window->rsrc.end); 158 + } 159 + 160 + 161 + /* Enable writes through the rom window */ 162 + pci_read_config_byte(pdev, 0x6d, &byte); 163 + pci_write_config_byte(pdev, 0x6d, byte | 1); 164 + 165 + /* FIXME handle registers 0x80 - 0x8C the bios region locks */ 166 + 167 + /* For write accesses caches are useless */ 168 + window->virt = ioremap_nocache(window->phys, window->size); 169 + if (!window->virt) { 170 + printk(KERN_ERR MOD_NAME ": ioremap(%08lx, %08lx) failed\n", 171 + window->phys, window->size); 172 + goto out; 173 + } 174 + 175 + /* Get the first address to look for a rom chip at */ 176 + map_top = window->phys; 177 + #if 1 178 + /* The probe sequence run over the firmware hub lock 179 + * registers sets them to 0x7 (no access). 180 + * Probe at most the last 4MiB of the address space. 181 + */ 182 + if (map_top < 0xffc00000) 183 + map_top = 0xffc00000; 184 + #endif 185 + /* Loop through and look for rom chips. Since we don't know the 186 + * starting address for each chip, probe every ROM_PROBE_STEP_SIZE 187 + * bytes from the starting address of the window. 
188 + */ 189 + while((map_top - 1) < 0xffffffffUL) { 190 + struct cfi_private *cfi; 191 + unsigned long offset; 192 + int i; 193 + 194 + if (!map) 195 + map = kmalloc(sizeof(*map), GFP_KERNEL); 196 + 197 + if (!map) { 198 + printk(KERN_ERR MOD_NAME ": kmalloc failed"); 199 + goto out; 200 + } 201 + memset(map, 0, sizeof(*map)); 202 + INIT_LIST_HEAD(&map->list); 203 + map->map.name = map->map_name; 204 + map->map.phys = map_top; 205 + offset = map_top - window->phys; 206 + map->map.virt = (void __iomem *) 207 + (((unsigned long)(window->virt)) + offset); 208 + map->map.size = 0xffffffffUL - map_top + 1UL; 209 + /* Set the name of the map to the address I am trying */ 210 + sprintf(map->map_name, "%s @%08lx", 211 + MOD_NAME, map->map.phys); 212 + 213 + /* There is no generic VPP support */ 214 + for(map->map.bankwidth = 32; map->map.bankwidth; 215 + map->map.bankwidth >>= 1) 216 + { 217 + char **probe_type; 218 + /* Skip bankwidths that are not supported */ 219 + if (!map_bankwidth_supported(map->map.bankwidth)) 220 + continue; 221 + 222 + /* Setup the map methods */ 223 + simple_map_init(&map->map); 224 + 225 + /* Try all of the probe methods */ 226 + probe_type = rom_probe_types; 227 + for(; *probe_type; probe_type++) { 228 + map->mtd = do_map_probe(*probe_type, &map->map); 229 + if (map->mtd) 230 + goto found; 231 + } 232 + } 233 + map_top += ROM_PROBE_STEP_SIZE; 234 + continue; 235 + found: 236 + /* Trim the size if we are larger than the map */ 237 + if (map->mtd->size > map->map.size) { 238 + printk(KERN_WARNING MOD_NAME 239 + " rom(%u) larger than window(%lu). fixing...\n", 240 + map->mtd->size, map->map.size); 241 + map->mtd->size = map->map.size; 242 + } 243 + if (window->rsrc.parent) { 244 + /* 245 + * Registering the MTD device in iomem may not be possible 246 + * if there is a BIOS "reserved" and BUSY range. If this 247 + * fails then continue anyway. 
248 + */ 249 + map->rsrc.name = map->map_name; 250 + map->rsrc.start = map->map.phys; 251 + map->rsrc.end = map->map.phys + map->mtd->size - 1; 252 + map->rsrc.flags = IORESOURCE_MEM | IORESOURCE_BUSY; 253 + if (request_resource(&window->rsrc, &map->rsrc)) { 254 + printk(KERN_ERR MOD_NAME 255 + ": cannot reserve MTD resource\n"); 256 + map->rsrc.parent = NULL; 257 + } 258 + } 259 + 260 + /* Make the whole region visible in the map */ 261 + map->map.virt = window->virt; 262 + map->map.phys = window->phys; 263 + cfi = map->map.fldrv_priv; 264 + for(i = 0; i < cfi->numchips; i++) 265 + cfi->chips[i].start += offset; 266 + 267 + /* Now that the mtd devices is complete claim and export it */ 268 + map->mtd->owner = THIS_MODULE; 269 + if (add_mtd_device(map->mtd)) { 270 + map_destroy(map->mtd); 271 + map->mtd = NULL; 272 + goto out; 273 + } 274 + 275 + 276 + /* Calculate the new value of map_top */ 277 + map_top += map->mtd->size; 278 + 279 + /* File away the map structure */ 280 + list_add(&map->list, &window->maps); 281 + map = NULL; 282 + } 283 + 284 + out: 285 + /* Free any left over map structures */ 286 + if (map) 287 + kfree(map); 288 + 289 + /* See if I have any map structures */ 290 + if (list_empty(&window->maps)) { 291 + ck804xrom_cleanup(window); 292 + return -ENODEV; 293 + } 294 + return 0; 295 + } 296 + 297 + 298 + static void __devexit ck804xrom_remove_one (struct pci_dev *pdev) 299 + { 300 + struct ck804xrom_window *window = &ck804xrom_window; 301 + 302 + ck804xrom_cleanup(window); 303 + } 304 + 305 + static struct pci_device_id ck804xrom_pci_tbl[] = { 306 + { PCI_VENDOR_ID_NVIDIA, 0x0051, 307 + PCI_ANY_ID, PCI_ANY_ID, }, /* nvidia ck804 */ 308 + { 0, } 309 + }; 310 + 311 + MODULE_DEVICE_TABLE(pci, ck804xrom_pci_tbl); 312 + 313 + #if 0 314 + static struct pci_driver ck804xrom_driver = { 315 + .name = MOD_NAME, 316 + .id_table = ck804xrom_pci_tbl, 317 + .probe = ck804xrom_init_one, 318 + .remove = ck804xrom_remove_one, 319 + }; 320 + #endif 321 + 322 + static int __init init_ck804xrom(void) 323 + { 324 + struct pci_dev *pdev; 325 + struct pci_device_id *id; 326 + int retVal; 327 + pdev = NULL; 328 + 329 + for(id = ck804xrom_pci_tbl; id->vendor; id++) { 330 + pdev = pci_find_device(id->vendor, id->device, NULL); 331 + if (pdev) 332 + break; 333 + } 334 + if (pdev) { 335 + retVal = ck804xrom_init_one(pdev, &ck804xrom_pci_tbl[0]); 336 + pci_dev_put(pdev); 337 + return retVal; 338 + } 339 + return -ENXIO; 340 + #if 0 341 + return pci_module_init(&ck804xrom_driver); 342 + #endif 343 + } 344 + 345 + static void __exit cleanup_ck804xrom(void) 346 + { 347 + ck804xrom_remove_one(ck804xrom_window.pdev); 348 + } 349 + 350 + module_init(init_ck804xrom); 351 + module_exit(cleanup_ck804xrom); 352 + 353 + MODULE_LICENSE("GPL"); 354 + MODULE_AUTHOR("Eric Biederman <ebiederman@lnxi.com>, Dave Olsen <dolsen@lnxi.com>"); 355 + MODULE_DESCRIPTION("MTD map driver for BIOS chips on the Nvidia ck804 southbridge"); 356 +
-283
drivers/mtd/maps/cstm_mips_ixx.c
··· 1 - /* 2 - * $Id: cstm_mips_ixx.c,v 1.14 2005/11/07 11:14:26 gleixner Exp $ 3 - * 4 - * Mapping of a custom board with both AMD CFI and JEDEC flash in partitions. 5 - * Config with both CFI and JEDEC device support. 6 - * 7 - * Basically physmap.c with the addition of partitions and 8 - * an array of mapping info to accomodate more than one flash type per board. 9 - * 10 - * Copyright 2000 MontaVista Software Inc. 11 - * 12 - * This program is free software; you can redistribute it and/or modify it 13 - * under the terms of the GNU General Public License as published by the 14 - * Free Software Foundation; either version 2 of the License, or (at your 15 - * option) any later version. 16 - * 17 - * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED 18 - * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF 19 - * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN 20 - * NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, 21 - * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT 22 - * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF 23 - * USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON 24 - * ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 25 - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF 26 - * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 27 - * 28 - * You should have received a copy of the GNU General Public License along 29 - * with this program; if not, write to the Free Software Foundation, Inc., 30 - * 675 Mass Ave, Cambridge, MA 02139, USA. 31 - */ 32 - 33 - #include <linux/module.h> 34 - #include <linux/types.h> 35 - #include <linux/kernel.h> 36 - #include <linux/init.h> 37 - #include <asm/io.h> 38 - #include <linux/mtd/mtd.h> 39 - #include <linux/mtd/map.h> 40 - #include <linux/mtd/partitions.h> 41 - #include <linux/delay.h> 42 - 43 - #if defined(CONFIG_MIPS_ITE8172) || defined(CONFIG_MIPS_IVR) 44 - #define CC_GCR 0xB4013818 45 - #define CC_GPBCR 0xB401380A 46 - #define CC_GPBDR 0xB4013808 47 - #define CC_M68K_DEVICE 1 48 - #define CC_M68K_FUNCTION 6 49 - #define CC_CONFADDR 0xB8004000 50 - #define CC_CONFDATA 0xB8004004 51 - #define CC_FC_FCR 0xB8002004 52 - #define CC_FC_DCR 0xB8002008 53 - #define CC_GPACR 0xB4013802 54 - #define CC_GPAICR 0xB4013804 55 - #endif /* defined(CONFIG_MIPS_ITE8172) || defined(CONFIG_MIPS_IVR) */ 56 - 57 - #if defined(CONFIG_MIPS_ITE8172) || defined(CONFIG_MIPS_IVR) 58 - void cstm_mips_ixx_set_vpp(struct map_info *map,int vpp) 59 - { 60 - static DEFINE_SPINLOCK(vpp_lock); 61 - static int vpp_count = 0; 62 - unsigned long flags; 63 - 64 - spin_lock_irqsave(&vpp_lock, flags); 65 - 66 - if (vpp) { 67 - if (!vpp_count++) { 68 - __u16 data; 69 - __u8 data1; 70 - static u8 first = 1; 71 - 72 - // Set GPIO port B pin3 to high 73 - data = *(__u16 *)(CC_GPBCR); 74 - data = (data & 0xff0f) | 0x0040; 75 - *(__u16 *)CC_GPBCR = data; 76 - *(__u8 *)CC_GPBDR = (*(__u8*)CC_GPBDR) | 0x08; 77 - if (first) { 78 - first = 0; 79 - /* need to have this delay for first 80 - enabling vpp after powerup */ 81 - udelay(40); 82 - } 83 - } 84 - } else { 85 - if (!--vpp_count) { 86 - __u16 data; 87 - 88 - // Set GPIO port B pin3 to high 89 - data = *(__u16 *)(CC_GPBCR); 90 - data = (data & 0xff3f) | 0x0040; 91 - *(__u16 *)CC_GPBCR = data; 92 - *(__u8 *)CC_GPBDR = (*(__u8*)CC_GPBDR) & 0xf7; 93 - } 94 - } 95 - spin_unlock_irqrestore(&vpp_lock, flags); 96 - } 97 - #endif 98 - 99 - 
/* board and partition description */ 100 - 101 - #define MAX_PHYSMAP_PARTITIONS 8 102 - struct cstm_mips_ixx_info { 103 - char *name; 104 - unsigned long window_addr; 105 - unsigned long window_size; 106 - int bankwidth; 107 - int num_partitions; 108 - }; 109 - 110 - #if defined(CONFIG_MIPS_ITE8172) || defined(CONFIG_MIPS_IVR) 111 - #define PHYSMAP_NUMBER 1 // number of board desc structs needed, one per contiguous flash type 112 - const struct cstm_mips_ixx_info cstm_mips_ixx_board_desc[PHYSMAP_NUMBER] = 113 - { 114 - { // 28F128J3A in 2x16 configuration 115 - "big flash", // name 116 - 0x08000000, // window_addr 117 - 0x02000000, // window_size 118 - 4, // bankwidth 119 - 1, // num_partitions 120 - } 121 - 122 - }; 123 - static struct mtd_partition cstm_mips_ixx_partitions[PHYSMAP_NUMBER][MAX_PHYSMAP_PARTITIONS] = { 124 - { // 28F128J3A in 2x16 configuration 125 - { 126 - .name = "main partition ", 127 - .size = 0x02000000, // 128 x 2 x 128k byte sectors 128 - .offset = 0, 129 - }, 130 - }, 131 - }; 132 - #else /* defined(CONFIG_MIPS_ITE8172) || defined(CONFIG_MIPS_IVR) */ 133 - #define PHYSMAP_NUMBER 1 // number of board desc structs needed, one per contiguous flash type 134 - const struct cstm_mips_ixx_info cstm_mips_ixx_board_desc[PHYSMAP_NUMBER] = 135 - { 136 - { 137 - "MTD flash", // name 138 - CONFIG_MTD_CSTM_MIPS_IXX_START, // window_addr 139 - CONFIG_MTD_CSTM_MIPS_IXX_LEN, // window_size 140 - CONFIG_MTD_CSTM_MIPS_IXX_BUSWIDTH, // bankwidth 141 - 1, // num_partitions 142 - }, 143 - 144 - }; 145 - static struct mtd_partition cstm_mips_ixx_partitions[PHYSMAP_NUMBER][MAX_PHYSMAP_PARTITIONS] = { 146 - { 147 - { 148 - .name = "main partition", 149 - .size = CONFIG_MTD_CSTM_MIPS_IXX_LEN, 150 - .offset = 0, 151 - }, 152 - }, 153 - }; 154 - #endif /* defined(CONFIG_MIPS_ITE8172) || defined(CONFIG_MIPS_IVR) */ 155 - 156 - struct map_info cstm_mips_ixx_map[PHYSMAP_NUMBER]; 157 - 158 - int __init init_cstm_mips_ixx(void) 159 - { 160 - int i; 161 - int jedec; 162 - struct mtd_info *mymtd; 163 - struct mtd_partition *parts; 164 - 165 - /* Initialize mapping */ 166 - for (i=0;i<PHYSMAP_NUMBER;i++) { 167 - printk(KERN_NOTICE "cstm_mips_ixx flash device: 0x%lx at 0x%lx\n", 168 - cstm_mips_ixx_board_desc[i].window_size, cstm_mips_ixx_board_desc[i].window_addr); 169 - 170 - 171 - cstm_mips_ixx_map[i].phys = cstm_mips_ixx_board_desc[i].window_addr; 172 - cstm_mips_ixx_map[i].virt = ioremap(cstm_mips_ixx_board_desc[i].window_addr, cstm_mips_ixx_board_desc[i].window_size); 173 - if (!cstm_mips_ixx_map[i].virt) { 174 - int j = 0; 175 - printk(KERN_WARNING "Failed to ioremap\n"); 176 - for (j = 0; j < i; j++) { 177 - if (cstm_mips_ixx_map[j].virt) { 178 - iounmap(cstm_mips_ixx_map[j].virt); 179 - cstm_mips_ixx_map[j].virt = NULL; 180 - } 181 - } 182 - return -EIO; 183 - } 184 - cstm_mips_ixx_map[i].name = cstm_mips_ixx_board_desc[i].name; 185 - cstm_mips_ixx_map[i].size = cstm_mips_ixx_board_desc[i].window_size; 186 - cstm_mips_ixx_map[i].bankwidth = cstm_mips_ixx_board_desc[i].bankwidth; 187 - #if defined(CONFIG_MIPS_ITE8172) || defined(CONFIG_MIPS_IVR) 188 - cstm_mips_ixx_map[i].set_vpp = cstm_mips_ixx_set_vpp; 189 - #endif 190 - simple_map_init(&cstm_mips_ixx_map[i]); 191 - //printk(KERN_NOTICE "cstm_mips_ixx: ioremap is %x\n",(unsigned int)(cstm_mips_ixx_map[i].virt)); 192 - } 193 - 194 - #if defined(CONFIG_MIPS_ITE8172) || defined(CONFIG_MIPS_IVR) 195 - setup_ITE_IVR_flash(); 196 - #endif /* defined(CONFIG_MIPS_ITE8172) || defined(CONFIG_MIPS_IVR) */ 197 - 198 - for (i=0;i<PHYSMAP_NUMBER;i++) { 
199 - parts = &cstm_mips_ixx_partitions[i][0]; 200 - jedec = 0; 201 - mymtd = (struct mtd_info *)do_map_probe("cfi_probe", &cstm_mips_ixx_map[i]); 202 - //printk(KERN_NOTICE "phymap %d cfi_probe: mymtd is %x\n",i,(unsigned int)mymtd); 203 - if (!mymtd) { 204 - jedec = 1; 205 - mymtd = (struct mtd_info *)do_map_probe("jedec", &cstm_mips_ixx_map[i]); 206 - printk(KERN_NOTICE "cstm_mips_ixx %d jedec: mymtd is %x\n",i,(unsigned int)mymtd); 207 - } 208 - if (mymtd) { 209 - mymtd->owner = THIS_MODULE; 210 - 211 - cstm_mips_ixx_map[i].map_priv_2 = (unsigned long)mymtd; 212 - add_mtd_partitions(mymtd, parts, cstm_mips_ixx_board_desc[i].num_partitions); 213 - } 214 - else { 215 - for (i = 0; i < PHYSMAP_NUMBER; i++) { 216 - if (cstm_mips_ixx_map[i].virt) { 217 - iounmap(cstm_mips_ixx_map[i].virt); 218 - cstm_mips_ixx_map[i].virt = NULL; 219 - } 220 - } 221 - return -ENXIO; 222 - } 223 - } 224 - return 0; 225 - } 226 - 227 - static void __exit cleanup_cstm_mips_ixx(void) 228 - { 229 - int i; 230 - struct mtd_info *mymtd; 231 - 232 - for (i=0;i<PHYSMAP_NUMBER;i++) { 233 - mymtd = (struct mtd_info *)cstm_mips_ixx_map[i].map_priv_2; 234 - if (mymtd) { 235 - del_mtd_partitions(mymtd); 236 - map_destroy(mymtd); 237 - } 238 - if (cstm_mips_ixx_map[i].virt) { 239 - iounmap((void *)cstm_mips_ixx_map[i].virt); 240 - cstm_mips_ixx_map[i].virt = 0; 241 - } 242 - } 243 - } 244 - #if defined(CONFIG_MIPS_ITE8172) || defined(CONFIG_MIPS_IVR) 245 - void PCISetULongByOffset(__u32 DevNumber, __u32 FuncNumber, __u32 Offset, __u32 data) 246 - { 247 - __u32 offset; 248 - 249 - offset = ( unsigned long )( 0x80000000 | ( DevNumber << 11 ) + ( FuncNumber << 8 ) + Offset) ; 250 - 251 - *(__u32 *)CC_CONFADDR = offset; 252 - *(__u32 *)CC_CONFDATA = data; 253 - } 254 - void setup_ITE_IVR_flash() 255 - { 256 - __u32 size, base; 257 - 258 - size = 0x0e000000; // 32MiB 259 - base = (0x08000000) >> 8 >>1; // Bug: we must shift one more bit 260 - 261 - /* need to set ITE flash to 32 bits instead of default 8 */ 262 - #ifdef CONFIG_MIPS_IVR 263 - *(__u32 *)CC_FC_FCR = 0x55; 264 - *(__u32 *)CC_GPACR = 0xfffc; 265 - #else 266 - *(__u32 *)CC_FC_FCR = 0x77; 267 - #endif 268 - /* turn bursting off */ 269 - *(__u32 *)CC_FC_DCR = 0x0; 270 - 271 - /* setup for one chip 4 byte PCI access */ 272 - PCISetULongByOffset(CC_M68K_DEVICE, CC_M68K_FUNCTION, 0x60, size | base); 273 - PCISetULongByOffset(CC_M68K_DEVICE, CC_M68K_FUNCTION, 0x64, 0x02); 274 - } 275 - #endif /* defined(CONFIG_MIPS_ITE8172) || defined(CONFIG_MIPS_IVR) */ 276 - 277 - module_init(init_cstm_mips_ixx); 278 - module_exit(cleanup_cstm_mips_ixx); 279 - 280 - 281 - MODULE_LICENSE("GPL"); 282 - MODULE_AUTHOR("Alice Hennessy <ahennessy@mvista.com>"); 283 - MODULE_DESCRIPTION("MTD map driver for ITE 8172G and Globespan IVR boards");
+450
drivers/mtd/maps/esb2rom.c
··· 1 + /* 2 + * esb2rom.c 3 + * 4 + * Normal mappings of flash chips in physical memory 5 + * through the Intel ESB2 Southbridge. 6 + * 7 + * This was derived from ichxrom.c in May 2006 by 8 + * Lew Glendenning <lglendenning@lnxi.com> 9 + * 10 + * Eric Biederman, of course, was a major help in this effort. 11 + */ 12 + 13 + #include <linux/module.h> 14 + #include <linux/types.h> 15 + #include <linux/version.h> 16 + #include <linux/kernel.h> 17 + #include <linux/init.h> 18 + #include <asm/io.h> 19 + #include <linux/mtd/mtd.h> 20 + #include <linux/mtd/map.h> 21 + #include <linux/mtd/cfi.h> 22 + #include <linux/mtd/flashchip.h> 23 + #include <linux/pci.h> 24 + #include <linux/pci_ids.h> 25 + #include <linux/list.h> 26 + 27 + #define MOD_NAME KBUILD_BASENAME 28 + 29 + #define ADDRESS_NAME_LEN 18 30 + 31 + #define ROM_PROBE_STEP_SIZE (64*1024) /* 64KiB */ 32 + 33 + #define BIOS_CNTL 0xDC 34 + #define BIOS_LOCK_ENABLE 0x02 35 + #define BIOS_WRITE_ENABLE 0x01 36 + 37 + /* This became a 16-bit register, and EN2 has disappeared */ 38 + #define FWH_DEC_EN1 0xD8 39 + #define FWH_F8_EN 0x8000 40 + #define FWH_F0_EN 0x4000 41 + #define FWH_E8_EN 0x2000 42 + #define FWH_E0_EN 0x1000 43 + #define FWH_D8_EN 0x0800 44 + #define FWH_D0_EN 0x0400 45 + #define FWH_C8_EN 0x0200 46 + #define FWH_C0_EN 0x0100 47 + #define FWH_LEGACY_F_EN 0x0080 48 + #define FWH_LEGACY_E_EN 0x0040 49 + /* reserved 0x0020 and 0x0010 */ 50 + #define FWH_70_EN 0x0008 51 + #define FWH_60_EN 0x0004 52 + #define FWH_50_EN 0x0002 53 + #define FWH_40_EN 0x0001 54 + 55 + /* these are 32-bit values */ 56 + #define FWH_SEL1 0xD0 57 + #define FWH_SEL2 0xD4 58 + 59 + #define FWH_8MiB (FWH_F8_EN | FWH_F0_EN | FWH_E8_EN | FWH_E0_EN | \ 60 + FWH_D8_EN | FWH_D0_EN | FWH_C8_EN | FWH_C0_EN | \ 61 + FWH_70_EN | FWH_60_EN | FWH_50_EN | FWH_40_EN) 62 + 63 + #define FWH_7MiB (FWH_F8_EN | FWH_F0_EN | FWH_E8_EN | FWH_E0_EN | \ 64 + FWH_D8_EN | FWH_D0_EN | FWH_C8_EN | FWH_C0_EN | \ 65 + FWH_70_EN | FWH_60_EN | FWH_50_EN) 66 + 67 + #define FWH_6MiB (FWH_F8_EN | FWH_F0_EN | FWH_E8_EN | FWH_E0_EN | \ 68 + FWH_D8_EN | FWH_D0_EN | FWH_C8_EN | FWH_C0_EN | \ 69 + FWH_70_EN | FWH_60_EN) 70 + 71 + #define FWH_5MiB (FWH_F8_EN | FWH_F0_EN | FWH_E8_EN | FWH_E0_EN | \ 72 + FWH_D8_EN | FWH_D0_EN | FWH_C8_EN | FWH_C0_EN | \ 73 + FWH_70_EN) 74 + 75 + #define FWH_4MiB (FWH_F8_EN | FWH_F0_EN | FWH_E8_EN | FWH_E0_EN | \ 76 + FWH_D8_EN | FWH_D0_EN | FWH_C8_EN | FWH_C0_EN) 77 + 78 + #define FWH_3_5MiB (FWH_F8_EN | FWH_F0_EN | FWH_E8_EN | FWH_E0_EN | \ 79 + FWH_D8_EN | FWH_D0_EN | FWH_C8_EN) 80 + 81 + #define FWH_3MiB (FWH_F8_EN | FWH_F0_EN | FWH_E8_EN | FWH_E0_EN | \ 82 + FWH_D8_EN | FWH_D0_EN) 83 + 84 + #define FWH_2_5MiB (FWH_F8_EN | FWH_F0_EN | FWH_E8_EN | FWH_E0_EN | \ 85 + FWH_D8_EN) 86 + 87 + #define FWH_2MiB (FWH_F8_EN | FWH_F0_EN | FWH_E8_EN | FWH_E0_EN) 88 + 89 + #define FWH_1_5MiB (FWH_F8_EN | FWH_F0_EN | FWH_E8_EN) 90 + 91 + #define FWH_1MiB (FWH_F8_EN | FWH_F0_EN) 92 + 93 + #define FWH_0_5MiB (FWH_F8_EN) 94 + 95 + 96 + struct esb2rom_window { 97 + void __iomem* virt; 98 + unsigned long phys; 99 + unsigned long size; 100 + struct list_head maps; 101 + struct resource rsrc; 102 + struct pci_dev *pdev; 103 + }; 104 + 105 + struct esb2rom_map_info { 106 + struct list_head list; 107 + struct map_info map; 108 + struct mtd_info *mtd; 109 + struct resource rsrc; 110 + char map_name[sizeof(MOD_NAME) + 2 + ADDRESS_NAME_LEN]; 111 + }; 112 + 113 + static struct esb2rom_window esb2rom_window = { 114 + .maps = LIST_HEAD_INIT(esb2rom_window.maps), 115 + }; 116 + 117 + static 
void esb2rom_cleanup(struct esb2rom_window *window)
118 + {
119 + 	struct esb2rom_map_info *map, *scratch;
120 + 	u8 byte;
121 +
122 + 	/* Disable writes through the rom window */
123 + 	pci_read_config_byte(window->pdev, BIOS_CNTL, &byte);
124 + 	pci_write_config_byte(window->pdev, BIOS_CNTL,
125 + 		byte & ~BIOS_WRITE_ENABLE);
126 +
127 + 	/* Free all of the mtd devices */
128 + 	list_for_each_entry_safe(map, scratch, &window->maps, list) {
129 + 		if (map->rsrc.parent)
130 + 			release_resource(&map->rsrc);
131 + 		del_mtd_device(map->mtd);
132 + 		map_destroy(map->mtd);
133 + 		list_del(&map->list);
134 + 		kfree(map);
135 + 	}
136 + 	if (window->rsrc.parent)
137 + 		release_resource(&window->rsrc);
138 + 	if (window->virt) {
139 + 		iounmap(window->virt);
140 + 		window->virt = NULL;
141 + 		window->phys = 0;
142 + 		window->size = 0;
143 + 	}
144 + 	pci_dev_put(window->pdev);
145 + }
146 +
147 + static int __devinit esb2rom_init_one(struct pci_dev *pdev,
148 + 	const struct pci_device_id *ent)
149 + {
150 + 	static char *rom_probe_types[] = { "cfi_probe", "jedec_probe", NULL };
151 + 	struct esb2rom_window *window = &esb2rom_window;
152 + 	struct esb2rom_map_info *map = NULL;
153 + 	unsigned long map_top;
154 + 	u8 byte;
155 + 	u16 word;
156 +
157 + 	/* For now I just handle the ESB2 and I assume there
158 + 	 * are not a lot of resources up at the top of the address
159 + 	 * space. It is possible to handle other devices in the
160 + 	 * top 16MiB but it is very painful. Also since
161 + 	 * you can only really attach a FWH to an ICHX there are
162 + 	 * a number of simplifications you can make.
163 + 	 *
164 + 	 * Also you can page firmware hubs if an 8MiB window isn't enough
165 + 	 * but we don't currently handle that case either.
166 + 	 */
167 + 	window->pdev = pci_dev_get(pdev);
168 +
169 + 	/* RLG: experiment 2.
Force the window registers to the widest values */ 170 + 171 + /* 172 + pci_read_config_word(pdev, FWH_DEC_EN1, &word); 173 + printk(KERN_DEBUG "Original FWH_DEC_EN1 : %x\n", word); 174 + pci_write_config_byte(pdev, FWH_DEC_EN1, 0xff); 175 + pci_read_config_byte(pdev, FWH_DEC_EN1, &byte); 176 + printk(KERN_DEBUG "New FWH_DEC_EN1 : %x\n", byte); 177 + 178 + pci_read_config_byte(pdev, FWH_DEC_EN2, &byte); 179 + printk(KERN_DEBUG "Original FWH_DEC_EN2 : %x\n", byte); 180 + pci_write_config_byte(pdev, FWH_DEC_EN2, 0x0f); 181 + pci_read_config_byte(pdev, FWH_DEC_EN2, &byte); 182 + printk(KERN_DEBUG "New FWH_DEC_EN2 : %x\n", byte); 183 + */ 184 + 185 + /* Find a region continuous to the end of the ROM window */ 186 + window->phys = 0; 187 + pci_read_config_word(pdev, FWH_DEC_EN1, &word); 188 + printk(KERN_DEBUG "pci_read_config_byte : %x\n", word); 189 + 190 + if ((word & FWH_8MiB) == FWH_8MiB) 191 + window->phys = 0xff400000; 192 + else if ((word & FWH_7MiB) == FWH_7MiB) 193 + window->phys = 0xff500000; 194 + else if ((word & FWH_6MiB) == FWH_6MiB) 195 + window->phys = 0xff600000; 196 + else if ((word & FWH_5MiB) == FWH_5MiB) 197 + window->phys = 0xFF700000; 198 + else if ((word & FWH_4MiB) == FWH_4MiB) 199 + window->phys = 0xffc00000; 200 + else if ((word & FWH_3_5MiB) == FWH_3_5MiB) 201 + window->phys = 0xffc80000; 202 + else if ((word & FWH_3MiB) == FWH_3MiB) 203 + window->phys = 0xffd00000; 204 + else if ((word & FWH_2_5MiB) == FWH_2_5MiB) 205 + window->phys = 0xffd80000; 206 + else if ((word & FWH_2MiB) == FWH_2MiB) 207 + window->phys = 0xffe00000; 208 + else if ((word & FWH_1_5MiB) == FWH_1_5MiB) 209 + window->phys = 0xffe80000; 210 + else if ((word & FWH_1MiB) == FWH_1MiB) 211 + window->phys = 0xfff00000; 212 + else if ((word & FWH_0_5MiB) == FWH_0_5MiB) 213 + window->phys = 0xfff80000; 214 + 215 + /* reserved 0x0020 and 0x0010 */ 216 + window->phys -= 0x400000UL; 217 + window->size = (0xffffffffUL - window->phys) + 1UL; 218 + 219 + /* Enable writes through the rom window */ 220 + pci_read_config_byte(pdev, BIOS_CNTL, &byte); 221 + if (!(byte & BIOS_WRITE_ENABLE) && (byte & (BIOS_LOCK_ENABLE))) { 222 + /* The BIOS will generate an error if I enable 223 + * this device, so don't even try. 224 + */ 225 + printk(KERN_ERR MOD_NAME ": firmware access control, I can't enable writes\n"); 226 + goto out; 227 + } 228 + pci_write_config_byte(pdev, BIOS_CNTL, byte | BIOS_WRITE_ENABLE); 229 + 230 + /* 231 + * Try to reserve the window mem region. If this fails then 232 + * it is likely due to the window being "reseved" by the BIOS. 233 + */ 234 + window->rsrc.name = MOD_NAME; 235 + window->rsrc.start = window->phys; 236 + window->rsrc.end = window->phys + window->size - 1; 237 + window->rsrc.flags = IORESOURCE_MEM | IORESOURCE_BUSY; 238 + if (request_resource(&iomem_resource, &window->rsrc)) { 239 + window->rsrc.parent = NULL; 240 + printk(KERN_DEBUG MOD_NAME 241 + ": %s(): Unable to register resource" 242 + " 0x%.08llx-0x%.08llx - kernel bug?\n", 243 + __func__, 244 + (unsigned long long)window->rsrc.start, 245 + (unsigned long long)window->rsrc.end); 246 + } 247 + 248 + /* Map the firmware hub into my address space. 
*/ 249 + window->virt = ioremap_nocache(window->phys, window->size); 250 + if (!window->virt) { 251 + printk(KERN_ERR MOD_NAME ": ioremap(%08lx, %08lx) failed\n", 252 + window->phys, window->size); 253 + goto out; 254 + } 255 + 256 + /* Get the first address to look for an rom chip at */ 257 + map_top = window->phys; 258 + if ((window->phys & 0x3fffff) != 0) { 259 + /* if not aligned on 4MiB, look 4MiB lower in address space */ 260 + map_top = window->phys + 0x400000; 261 + } 262 + #if 1 263 + /* The probe sequence run over the firmware hub lock 264 + * registers sets them to 0x7 (no access). 265 + * (Insane hardware design, but most copied Intel's.) 266 + * ==> Probe at most the last 4M of the address space. 267 + */ 268 + if (map_top < 0xffc00000) 269 + map_top = 0xffc00000; 270 + #endif 271 + /* Loop through and look for rom chips */ 272 + while ((map_top - 1) < 0xffffffffUL) { 273 + struct cfi_private *cfi; 274 + unsigned long offset; 275 + int i; 276 + 277 + if (!map) 278 + map = kmalloc(sizeof(*map), GFP_KERNEL); 279 + if (!map) { 280 + printk(KERN_ERR MOD_NAME ": kmalloc failed"); 281 + goto out; 282 + } 283 + memset(map, 0, sizeof(*map)); 284 + INIT_LIST_HEAD(&map->list); 285 + map->map.name = map->map_name; 286 + map->map.phys = map_top; 287 + offset = map_top - window->phys; 288 + map->map.virt = (void __iomem *) 289 + (((unsigned long)(window->virt)) + offset); 290 + map->map.size = 0xffffffffUL - map_top + 1UL; 291 + /* Set the name of the map to the address I am trying */ 292 + sprintf(map->map_name, "%s @%08lx", 293 + MOD_NAME, map->map.phys); 294 + 295 + /* Firmware hubs only use vpp when being programmed 296 + * in a factory setting. So in-place programming 297 + * needs to use a different method. 298 + */ 299 + for(map->map.bankwidth = 32; map->map.bankwidth; 300 + map->map.bankwidth >>= 1) { 301 + char **probe_type; 302 + /* Skip bankwidths that are not supported */ 303 + if (!map_bankwidth_supported(map->map.bankwidth)) 304 + continue; 305 + 306 + /* Setup the map methods */ 307 + simple_map_init(&map->map); 308 + 309 + /* Try all of the probe methods */ 310 + probe_type = rom_probe_types; 311 + for(; *probe_type; probe_type++) { 312 + map->mtd = do_map_probe(*probe_type, &map->map); 313 + if (map->mtd) 314 + goto found; 315 + } 316 + } 317 + map_top += ROM_PROBE_STEP_SIZE; 318 + continue; 319 + found: 320 + /* Trim the size if we are larger than the map */ 321 + if (map->mtd->size > map->map.size) { 322 + printk(KERN_WARNING MOD_NAME 323 + " rom(%u) larger than window(%lu). fixing...\n", 324 + map->mtd->size, map->map.size); 325 + map->mtd->size = map->map.size; 326 + } 327 + if (window->rsrc.parent) { 328 + /* 329 + * Registering the MTD device in iomem may not be possible 330 + * if there is a BIOS "reserved" and BUSY range. If this 331 + * fails then continue anyway. 
332 + */ 333 + map->rsrc.name = map->map_name; 334 + map->rsrc.start = map->map.phys; 335 + map->rsrc.end = map->map.phys + map->mtd->size - 1; 336 + map->rsrc.flags = IORESOURCE_MEM | IORESOURCE_BUSY; 337 + if (request_resource(&window->rsrc, &map->rsrc)) { 338 + printk(KERN_ERR MOD_NAME 339 + ": cannot reserve MTD resource\n"); 340 + map->rsrc.parent = NULL; 341 + } 342 + } 343 + 344 + /* Make the whole region visible in the map */ 345 + map->map.virt = window->virt; 346 + map->map.phys = window->phys; 347 + cfi = map->map.fldrv_priv; 348 + for(i = 0; i < cfi->numchips; i++) 349 + cfi->chips[i].start += offset; 350 + 351 + /* Now that the mtd devices is complete claim and export it */ 352 + map->mtd->owner = THIS_MODULE; 353 + if (add_mtd_device(map->mtd)) { 354 + map_destroy(map->mtd); 355 + map->mtd = NULL; 356 + goto out; 357 + } 358 + 359 + /* Calculate the new value of map_top */ 360 + map_top += map->mtd->size; 361 + 362 + /* File away the map structure */ 363 + list_add(&map->list, &window->maps); 364 + map = NULL; 365 + } 366 + 367 + out: 368 + /* Free any left over map structures */ 369 + kfree(map); 370 + 371 + /* See if I have any map structures */ 372 + if (list_empty(&window->maps)) { 373 + esb2rom_cleanup(window); 374 + return -ENODEV; 375 + } 376 + return 0; 377 + } 378 + 379 + static void __devexit esb2rom_remove_one (struct pci_dev *pdev) 380 + { 381 + struct esb2rom_window *window = &esb2rom_window; 382 + esb2rom_cleanup(window); 383 + } 384 + 385 + static struct pci_device_id esb2rom_pci_tbl[] __devinitdata = { 386 + { PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801BA_0, 387 + PCI_ANY_ID, PCI_ANY_ID, }, 388 + { PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801CA_0, 389 + PCI_ANY_ID, PCI_ANY_ID, }, 390 + { PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801DB_0, 391 + PCI_ANY_ID, PCI_ANY_ID, }, 392 + { PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801EB_0, 393 + PCI_ANY_ID, PCI_ANY_ID, }, 394 + { PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ESB_1, 395 + PCI_ANY_ID, PCI_ANY_ID, }, 396 + { PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ESB2_0, 397 + PCI_ANY_ID, PCI_ANY_ID, }, 398 + { 0, }, 399 + }; 400 + 401 + #if 0 402 + MODULE_DEVICE_TABLE(pci, esb2rom_pci_tbl); 403 + 404 + static struct pci_driver esb2rom_driver = { 405 + .name = MOD_NAME, 406 + .id_table = esb2rom_pci_tbl, 407 + .probe = esb2rom_init_one, 408 + .remove = esb2rom_remove_one, 409 + }; 410 + #endif 411 + 412 + static int __init init_esb2rom(void) 413 + { 414 + struct pci_dev *pdev; 415 + struct pci_device_id *id; 416 + int retVal; 417 + 418 + pdev = NULL; 419 + for (id = esb2rom_pci_tbl; id->vendor; id++) { 420 + printk(KERN_DEBUG "device id = %x\n", id->device); 421 + pdev = pci_get_device(id->vendor, id->device, NULL); 422 + if (pdev) { 423 + printk(KERN_DEBUG "matched device = %x\n", id->device); 424 + break; 425 + } 426 + } 427 + if (pdev) { 428 + printk(KERN_DEBUG "matched device id %x\n", id->device); 429 + retVal = esb2rom_init_one(pdev, &esb2rom_pci_tbl[0]); 430 + pci_dev_put(pdev); 431 + printk(KERN_DEBUG "retVal = %d\n", retVal); 432 + return retVal; 433 + } 434 + return -ENXIO; 435 + #if 0 436 + return pci_register_driver(&esb2rom_driver); 437 + #endif 438 + } 439 + 440 + static void __exit cleanup_esb2rom(void) 441 + { 442 + esb2rom_remove_one(esb2rom_window.pdev); 443 + } 444 + 445 + module_init(init_esb2rom); 446 + module_exit(cleanup_esb2rom); 447 + 448 + MODULE_LICENSE("GPL"); 449 + MODULE_AUTHOR("Lew Glendenning <lglendenning@lnxi.com>"); 450 + MODULE_DESCRIPTION("MTD map driver for BIOS chips on the ESB2 
southbridge");
+1 -3
drivers/mtd/maps/integrator-flash.c
··· 75 75 int err; 76 76 void __iomem *base; 77 77 78 - info = kmalloc(sizeof(struct armflash_info), GFP_KERNEL); 78 + info = kzalloc(sizeof(struct armflash_info), GFP_KERNEL); 79 79 if (!info) { 80 80 err = -ENOMEM; 81 81 goto out; 82 82 } 83 - 84 - memset(info, 0, sizeof(struct armflash_info)); 85 83 86 84 info->plat = plat; 87 85 if (plat && plat->init) {
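
This hunk, and many of the later maps/ and MTD core hunks (omap_nor, pcmciamtd, physmap, plat-ram, sa1100-flash, tqm834x, tqm8xxl, mtd_blkdevs, mtdblock, mtdchar, mtdconcat, mtdpart), makes the same conversion: an open-coded kmalloc() plus memset() pair becomes a single kzalloc() call, which returns already-zeroed memory or NULL. A minimal sketch of the two equivalent forms (structure and function names invented):

	#include <linux/slab.h>

	struct foo_info {			/* stand-in structure, name invented */
		int nr_parts;
		void *parts;
	};

	static struct foo_info *foo_alloc(void)
	{
		struct foo_info *info;

		/*
		 * Old pattern, as removed above:
		 *	info = kmalloc(sizeof(*info), GFP_KERNEL);
		 *	if (info)
		 *		memset(info, 0, sizeof(*info));
		 */

		/* New pattern: kzalloc() hands back zeroed memory, or NULL */
		info = kzalloc(sizeof(*info), GFP_KERNEL);
		return info;
	}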
+3 -2
drivers/mtd/maps/nettel.c
··· 20 20 #include <linux/mtd/partitions.h> 21 21 #include <linux/mtd/cfi.h> 22 22 #include <linux/reboot.h> 23 + #include <linux/err.h> 23 24 #include <linux/kdev_t.h> 24 25 #include <linux/root_dev.h> 25 26 #include <asm/io.h> ··· 179 178 180 179 init_waitqueue_head(&wait_q); 181 180 mtd = get_mtd_device(NULL, 2); 182 - if (mtd) { 181 + if (!IS_ERR(mtd)) { 183 182 nettel_erase.mtd = mtd; 184 183 nettel_erase.callback = nettel_erasecallback; 185 184 nettel_erase.callback = NULL; ··· 472 471 iounmap(nettel_amd_map.virt); 473 472 474 473 return(rc); 475 - 474 + 476 475 } 477 476 478 477 /****************************************************************************/
+1 -3
drivers/mtd/maps/omap_nor.c
··· 78 78 struct resource *res = pdev->resource; 79 79 unsigned long size = res->end - res->start + 1; 80 80 81 - info = kmalloc(sizeof(struct omapflash_info), GFP_KERNEL); 81 + info = kzalloc(sizeof(struct omapflash_info), GFP_KERNEL); 82 82 if (!info) 83 83 return -ENOMEM; 84 - 85 - memset(info, 0, sizeof(struct omapflash_info)); 86 84 87 85 if (!request_mem_region(res->start, size, "flash")) { 88 86 err = -EBUSY;
+1 -2
drivers/mtd/maps/pcmciamtd.c
··· 735 735 struct pcmciamtd_dev *dev; 736 736 737 737 /* Create new memory card device */ 738 - dev = kmalloc(sizeof(*dev), GFP_KERNEL); 738 + dev = kzalloc(sizeof(*dev), GFP_KERNEL); 739 739 if (!dev) return -ENOMEM; 740 740 DEBUG(1, "dev=0x%p", dev); 741 741 742 - memset(dev, 0, sizeof(*dev)); 743 742 dev->p_dev = link; 744 743 link->priv = dev; 745 744
+2 -3
drivers/mtd/maps/physmap.c
··· 89 89 return -ENODEV; 90 90 91 91 printk(KERN_NOTICE "physmap platform flash device: %.8llx at %.8llx\n", 92 - (unsigned long long)dev->resource->end - dev->resource->start + 1, 92 + (unsigned long long)(dev->resource->end - dev->resource->start + 1), 93 93 (unsigned long long)dev->resource->start); 94 94 95 - info = kmalloc(sizeof(struct physmap_flash_info), GFP_KERNEL); 95 + info = kzalloc(sizeof(struct physmap_flash_info), GFP_KERNEL); 96 96 if (info == NULL) { 97 97 err = -ENOMEM; 98 98 goto err_out; 99 99 } 100 - memset(info, 0, sizeof(*info)); 101 100 102 101 platform_set_drvdata(dev, info); 103 102
+255
drivers/mtd/maps/physmap_of.c
··· 1 + /* 2 + * Normal mappings of chips in physical memory for OF devices 3 + * 4 + * Copyright (C) 2006 MontaVista Software Inc. 5 + * Author: Vitaly Wool <vwool@ru.mvista.com> 6 + * 7 + * This program is free software; you can redistribute it and/or modify it 8 + * under the terms of the GNU General Public License as published by the 9 + * Free Software Foundation; either version 2 of the License, or (at your 10 + * option) any later version. 11 + */ 12 + 13 + #include <linux/module.h> 14 + #include <linux/types.h> 15 + #include <linux/kernel.h> 16 + #include <linux/init.h> 17 + #include <linux/slab.h> 18 + #include <linux/device.h> 19 + #include <linux/mtd/mtd.h> 20 + #include <linux/mtd/map.h> 21 + #include <linux/mtd/partitions.h> 22 + #include <linux/mtd/physmap.h> 23 + #include <asm/io.h> 24 + #include <asm/prom.h> 25 + #include <asm/of_device.h> 26 + #include <asm/of_platform.h> 27 + 28 + struct physmap_flash_info { 29 + struct mtd_info *mtd; 30 + struct map_info map; 31 + struct resource *res; 32 + #ifdef CONFIG_MTD_PARTITIONS 33 + int nr_parts; 34 + struct mtd_partition *parts; 35 + #endif 36 + }; 37 + 38 + static const char *rom_probe_types[] = { "cfi_probe", "jedec_probe", "map_rom", NULL }; 39 + #ifdef CONFIG_MTD_PARTITIONS 40 + static const char *part_probe_types[] = { "cmdlinepart", "RedBoot", NULL }; 41 + #endif 42 + 43 + #ifdef CONFIG_MTD_PARTITIONS 44 + static int parse_flash_partitions(struct device_node *node, 45 + struct mtd_partition **parts) 46 + { 47 + int i, plen, retval = -ENOMEM; 48 + const u32 *part; 49 + const char *name; 50 + 51 + part = get_property(node, "partitions", &plen); 52 + if (part == NULL) 53 + goto err; 54 + 55 + retval = plen / (2 * sizeof(u32)); 56 + *parts = kzalloc(retval * sizeof(struct mtd_partition), GFP_KERNEL); 57 + if (*parts == NULL) { 58 + printk(KERN_ERR "Can't allocate the flash partition data!\n"); 59 + goto err; 60 + } 61 + 62 + name = get_property(node, "partition-names", &plen); 63 + 64 + for (i = 0; i < retval; i++) { 65 + (*parts)[i].offset = *part++; 66 + (*parts)[i].size = *part & ~1; 67 + if (*part++ & 1) /* bit 0 set signifies read only partition */ 68 + (*parts)[i].mask_flags = MTD_WRITEABLE; 69 + 70 + if (name != NULL && plen > 0) { 71 + int len = strlen(name) + 1; 72 + 73 + (*parts)[i].name = (char *)name; 74 + plen -= len; 75 + name += len; 76 + } else 77 + (*parts)[i].name = "unnamed"; 78 + } 79 + err: 80 + return retval; 81 + } 82 + #endif 83 + 84 + static int of_physmap_remove(struct of_device *dev) 85 + { 86 + struct physmap_flash_info *info; 87 + 88 + info = dev_get_drvdata(&dev->dev); 89 + if (info == NULL) 90 + return 0; 91 + dev_set_drvdata(&dev->dev, NULL); 92 + 93 + if (info->mtd != NULL) { 94 + #ifdef CONFIG_MTD_PARTITIONS 95 + if (info->nr_parts) { 96 + del_mtd_partitions(info->mtd); 97 + kfree(info->parts); 98 + } else { 99 + del_mtd_device(info->mtd); 100 + } 101 + #else 102 + del_mtd_device(info->mtd); 103 + #endif 104 + map_destroy(info->mtd); 105 + } 106 + 107 + if (info->map.virt != NULL) 108 + iounmap(info->map.virt); 109 + 110 + if (info->res != NULL) { 111 + release_resource(info->res); 112 + kfree(info->res); 113 + } 114 + 115 + return 0; 116 + } 117 + 118 + static int __devinit of_physmap_probe(struct of_device *dev, const struct of_device_id *match) 119 + { 120 + struct device_node *dp = dev->node; 121 + struct resource res; 122 + struct physmap_flash_info *info; 123 + const char **probe_type; 124 + const char *of_probe; 125 + const u32 *width; 126 + int err; 127 + 128 + 129 + if 
(of_address_to_resource(dp, 0, &res)) { 130 + dev_err(&dev->dev, "Can't get the flash mapping!\n"); 131 + err = -EINVAL; 132 + goto err_out; 133 + } 134 + 135 + dev_dbg(&dev->dev, "physmap flash device: %.8llx at %.8llx\n", 136 + (unsigned long long)res.end - res.start + 1, 137 + (unsigned long long)res.start); 138 + 139 + info = kzalloc(sizeof(struct physmap_flash_info), GFP_KERNEL); 140 + if (info == NULL) { 141 + err = -ENOMEM; 142 + goto err_out; 143 + } 144 + memset(info, 0, sizeof(*info)); 145 + 146 + dev_set_drvdata(&dev->dev, info); 147 + 148 + info->res = request_mem_region(res.start, res.end - res.start + 1, 149 + dev->dev.bus_id); 150 + if (info->res == NULL) { 151 + dev_err(&dev->dev, "Could not reserve memory region\n"); 152 + err = -ENOMEM; 153 + goto err_out; 154 + } 155 + 156 + width = get_property(dp, "bank-width", NULL); 157 + if (width == NULL) { 158 + dev_err(&dev->dev, "Can't get the flash bank width!\n"); 159 + err = -EINVAL; 160 + goto err_out; 161 + } 162 + 163 + info->map.name = dev->dev.bus_id; 164 + info->map.phys = res.start; 165 + info->map.size = res.end - res.start + 1; 166 + info->map.bankwidth = *width; 167 + 168 + info->map.virt = ioremap(info->map.phys, info->map.size); 169 + if (info->map.virt == NULL) { 170 + dev_err(&dev->dev, "Failed to ioremap flash region\n"); 171 + err = EIO; 172 + goto err_out; 173 + } 174 + 175 + simple_map_init(&info->map); 176 + 177 + of_probe = get_property(dp, "probe-type", NULL); 178 + if (of_probe == NULL) { 179 + probe_type = rom_probe_types; 180 + for (; info->mtd == NULL && *probe_type != NULL; probe_type++) 181 + info->mtd = do_map_probe(*probe_type, &info->map); 182 + } else if (!strcmp(of_probe, "CFI")) 183 + info->mtd = do_map_probe("cfi_probe", &info->map); 184 + else if (!strcmp(of_probe, "JEDEC")) 185 + info->mtd = do_map_probe("jedec_probe", &info->map); 186 + else { 187 + if (strcmp(of_probe, "ROM")) 188 + dev_dbg(&dev->dev, "map_probe: don't know probe type " 189 + "'%s', mapping as rom\n"); 190 + info->mtd = do_map_probe("mtd_rom", &info->map); 191 + } 192 + if (info->mtd == NULL) { 193 + dev_err(&dev->dev, "map_probe failed\n"); 194 + err = -ENXIO; 195 + goto err_out; 196 + } 197 + info->mtd->owner = THIS_MODULE; 198 + 199 + #ifdef CONFIG_MTD_PARTITIONS 200 + err = parse_mtd_partitions(info->mtd, part_probe_types, &info->parts, 0); 201 + if (err > 0) { 202 + add_mtd_partitions(info->mtd, info->parts, err); 203 + } else if ((err = parse_flash_partitions(dp, &info->parts)) > 0) { 204 + dev_info(&dev->dev, "Using OF partition information\n"); 205 + add_mtd_partitions(info->mtd, info->parts, err); 206 + info->nr_parts = err; 207 + } else 208 + #endif 209 + 210 + add_mtd_device(info->mtd); 211 + return 0; 212 + 213 + err_out: 214 + of_physmap_remove(dev); 215 + return err; 216 + 217 + return 0; 218 + 219 + 220 + } 221 + 222 + static struct of_device_id of_physmap_match[] = { 223 + { 224 + .type = "rom", 225 + .compatible = "direct-mapped" 226 + }, 227 + { }, 228 + }; 229 + 230 + MODULE_DEVICE_TABLE(of, of_physmap_match); 231 + 232 + 233 + static struct of_platform_driver of_physmap_flash_driver = { 234 + .name = "physmap-flash", 235 + .match_table = of_physmap_match, 236 + .probe = of_physmap_probe, 237 + .remove = of_physmap_remove, 238 + }; 239 + 240 + static int __init of_physmap_init(void) 241 + { 242 + return of_register_platform_driver(&of_physmap_flash_driver); 243 + } 244 + 245 + static void __exit of_physmap_exit(void) 246 + { 247 + of_unregister_platform_driver(&of_physmap_flash_driver); 248 + } 249 + 
250 + module_init(of_physmap_init); 251 + module_exit(of_physmap_exit); 252 + 253 + MODULE_LICENSE("GPL"); 254 + MODULE_AUTHOR("Vitaly Wool <vwool@ru.mvista.com>"); 255 + MODULE_DESCRIPTION("Configurable MTD map driver for OF");
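
parse_flash_partitions() above treats the flat "partitions" property as (offset, size) cell pairs, with bit 0 of each size cell marking the partition read-only, and takes names from the NUL-separated "partition-names" string list. A userspace sketch of the same decoding over an invented property (file name, partition names and sizes are examples only):

	/*
	 * of_parts.c -- userspace sketch of the "partitions" property decode
	 * done by parse_flash_partitions() above.  Property contents are
	 * invented: two partitions, the second read-only via bit 0 of its size.
	 * Build with:  cc -o of_parts of_parts.c
	 */
	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		/* flat (offset, size) cell pairs, as stored in the property */
		const unsigned prop[] = {
			0x00000000, 0x00040000,		/* 256KiB at 0             */
			0x00040000, 0x001c0001,		/* rest, read-only (bit 0) */
		};
		/* NUL-separated "partition-names" string list */
		const char names[] = "boot\0filesystem";
		const char *name = names;
		/* same count calculation as the driver: plen / (2 * sizeof(u32)) */
		unsigned int nr = sizeof(prop) / (2 * sizeof(prop[0]));
		unsigned int i;

		for (i = 0; i < nr; i++) {
			unsigned long offset = prop[2 * i];
			unsigned long size = prop[2 * i + 1] & ~1UL;
			int ro = prop[2 * i + 1] & 1;

			printf("%-12s offset 0x%08lx size 0x%08lx%s\n",
			       name, offset, size, ro ? " (read-only)" : "");
			name += strlen(name) + 1;	/* skip past the NUL */
		}
		return 0;
	}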
+1 -2
drivers/mtd/maps/plat-ram.c
··· 147 147 148 148 pdata = pdev->dev.platform_data; 149 149 150 - info = kmalloc(sizeof(*info), GFP_KERNEL); 150 + info = kzalloc(sizeof(*info), GFP_KERNEL); 151 151 if (info == NULL) { 152 152 dev_err(&pdev->dev, "no memory for flash info\n"); 153 153 err = -ENOMEM; 154 154 goto exit_error; 155 155 } 156 156 157 - memset(info, 0, sizeof(*info)); 158 157 platform_set_drvdata(pdev, info); 159 158 160 159 info->dev = &pdev->dev;
+1 -3
drivers/mtd/maps/sa1100-flash.c
··· 273 273 /* 274 274 * Allocate the map_info structs in one go. 275 275 */ 276 - info = kmalloc(size, GFP_KERNEL); 276 + info = kzalloc(size, GFP_KERNEL); 277 277 if (!info) { 278 278 ret = -ENOMEM; 279 279 goto out; 280 280 } 281 - 282 - memset(info, 0, size); 283 281 284 282 if (plat->init) { 285 283 ret = plat->init();
+2 -6
drivers/mtd/maps/tqm834x.c
··· 132 132 133 133 pr_debug("%s: chip probing count %d\n", __FUNCTION__, idx); 134 134 135 - map_banks[idx] = 136 - (struct map_info *)kmalloc(sizeof(struct map_info), 137 - GFP_KERNEL); 135 + map_banks[idx] = kzalloc(sizeof(struct map_info), GFP_KERNEL); 138 136 if (map_banks[idx] == NULL) { 139 137 ret = -ENOMEM; 140 138 goto error_mem; 141 139 } 142 - memset((void *)map_banks[idx], 0, sizeof(struct map_info)); 143 - map_banks[idx]->name = (char *)kmalloc(16, GFP_KERNEL); 140 + map_banks[idx]->name = kzalloc(16, GFP_KERNEL); 144 141 if (map_banks[idx]->name == NULL) { 145 142 ret = -ENOMEM; 146 143 goto error_mem; 147 144 } 148 - memset((void *)map_banks[idx]->name, 0, 16); 149 145 150 146 sprintf(map_banks[idx]->name, "TQM834x-%d", idx); 151 147 map_banks[idx]->size = flash_size;
+1 -2
drivers/mtd/maps/tqm8xxl.c
··· 134 134 135 135 printk(KERN_INFO "%s: chip probing count %d\n", __FUNCTION__, idx); 136 136 137 - map_banks[idx] = (struct map_info *)kmalloc(sizeof(struct map_info), GFP_KERNEL); 137 + map_banks[idx] = kzalloc(sizeof(struct map_info), GFP_KERNEL); 138 138 if(map_banks[idx] == NULL) { 139 139 ret = -ENOMEM; 140 140 /* FIXME: What if some MTD devices were probed already? */ 141 141 goto error_mem; 142 142 } 143 143 144 - memset((void *)map_banks[idx], 0, sizeof(struct map_info)); 145 144 map_banks[idx]->name = (char *)kmalloc(16, GFP_KERNEL); 146 145 147 146 if (!map_banks[idx]->name) {
+10 -9
drivers/mtd/mtd_blkdevs.c
··· 42 42 unsigned long block, nsect; 43 43 char *buf; 44 44 45 - block = req->sector; 46 - nsect = req->current_nr_sectors; 45 + block = req->sector << 9 >> tr->blkshift; 46 + nsect = req->current_nr_sectors << 9 >> tr->blkshift; 47 + 47 48 buf = req->buffer; 48 49 49 50 if (!blk_fs_request(req)) 50 51 return 0; 51 52 52 - if (block + nsect > get_capacity(req->rq_disk)) 53 + if (req->sector + req->current_nr_sectors > get_capacity(req->rq_disk)) 53 54 return 0; 54 55 55 56 switch(rq_data_dir(req)) { 56 57 case READ: 57 - for (; nsect > 0; nsect--, block++, buf += 512) 58 + for (; nsect > 0; nsect--, block++, buf += tr->blksize) 58 59 if (tr->readsect(dev, block, buf)) 59 60 return 0; 60 61 return 1; ··· 64 63 if (!tr->writesect) 65 64 return 0; 66 65 67 - for (; nsect > 0; nsect--, block++, buf += 512) 66 + for (; nsect > 0; nsect--, block++, buf += tr->blksize) 68 67 if (tr->writesect(dev, block, buf)) 69 68 return 0; 70 69 return 1; ··· 298 297 299 298 /* 2.5 has capacity in units of 512 bytes while still 300 299 having BLOCK_SIZE_BITS set to 10. Just to keep us amused. */ 301 - set_capacity(gd, (new->size * new->blksize) >> 9); 300 + set_capacity(gd, (new->size * tr->blksize) >> 9); 302 301 303 302 gd->private_data = new; 304 303 new->blkcore_priv = gd; ··· 373 372 if (!blktrans_notifier.list.next) 374 373 register_mtd_user(&blktrans_notifier); 375 374 376 - tr->blkcore_priv = kmalloc(sizeof(*tr->blkcore_priv), GFP_KERNEL); 375 + tr->blkcore_priv = kzalloc(sizeof(*tr->blkcore_priv), GFP_KERNEL); 377 376 if (!tr->blkcore_priv) 378 377 return -ENOMEM; 379 - 380 - memset(tr->blkcore_priv, 0, sizeof(*tr->blkcore_priv)); 381 378 382 379 mutex_lock(&mtd_table_mutex); 383 380 ··· 400 401 } 401 402 402 403 tr->blkcore_priv->rq->queuedata = tr; 404 + blk_queue_hardsect_size(tr->blkcore_priv->rq, tr->blksize); 405 + tr->blkshift = ffs(tr->blksize) - 1; 403 406 404 407 ret = kernel_thread(mtd_blktrans_thread, tr, CLONE_KERNEL); 405 408 if (ret < 0) {
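
With these changes the block translation layer no longer assumes 512-byte sectors: the request's 512-byte sector counts are converted into transport blocks through tr->blkshift (set to ffs(tr->blksize) - 1 at register time), and buffers advance by tr->blksize. A userspace sketch of the conversion with illustrative numbers (2048-byte blocks, i.e. blkshift 11):

	/*
	 * blkshift.c -- userspace sketch of the sector-to-block conversion now
	 * done in mtd_blkdevs.c.  The block layer always counts in 512-byte
	 * sectors; the translation layer may use a larger block size.
	 * Build with:  cc -o blkshift blkshift.c
	 */
	#include <stdio.h>
	#include <strings.h>		/* ffs() */

	int main(void)
	{
		unsigned long blksize = 2048;		/* e.g. one NAND page */
		int blkshift = ffs(blksize) - 1;	/* 2048 -> 11, 512 -> 9 */
		unsigned long req_sector = 12;		/* in 512-byte units */
		unsigned long req_nr_sectors = 8;	/* a 4KiB request */

		/* the same expressions as in the request-handling hunk above */
		unsigned long block = req_sector << 9 >> blkshift;
		unsigned long nsect = req_nr_sectors << 9 >> blkshift;

		printf("blkshift=%d: sector %lu (%lu sectors) -> block %lu (%lu blocks)\n",
		       blkshift, req_sector, req_nr_sectors, block, nsect);
		return 0;
	}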
+4 -6
drivers/mtd/mtdblock.c
··· 278 278 } 279 279 280 280 /* OK, it's not open. Create cache info for it */ 281 - mtdblk = kmalloc(sizeof(struct mtdblk_dev), GFP_KERNEL); 281 + mtdblk = kzalloc(sizeof(struct mtdblk_dev), GFP_KERNEL); 282 282 if (!mtdblk) 283 283 return -ENOMEM; 284 284 285 - memset(mtdblk, 0, sizeof(*mtdblk)); 286 285 mtdblk->count = 1; 287 286 mtdblk->mtd = mtd; 288 287 ··· 338 339 339 340 static void mtdblock_add_mtd(struct mtd_blktrans_ops *tr, struct mtd_info *mtd) 340 341 { 341 - struct mtd_blktrans_dev *dev = kmalloc(sizeof(*dev), GFP_KERNEL); 342 + struct mtd_blktrans_dev *dev = kzalloc(sizeof(*dev), GFP_KERNEL); 342 343 343 344 if (!dev) 344 345 return; 345 346 346 - memset(dev, 0, sizeof(*dev)); 347 - 348 347 dev->mtd = mtd; 349 348 dev->devnum = mtd->index; 350 - dev->blksize = 512; 349 + 351 350 dev->size = mtd->size >> 9; 352 351 dev->tr = tr; 353 352 ··· 365 368 .name = "mtdblock", 366 369 .major = 31, 367 370 .part_bits = 0, 371 + .blksize = 512, 368 372 .open = mtdblock_open, 369 373 .flush = mtdblock_flush, 370 374 .release = mtdblock_release,
+3 -4
drivers/mtd/mtdblock_ro.c
··· 33 33 34 34 static void mtdblock_add_mtd(struct mtd_blktrans_ops *tr, struct mtd_info *mtd) 35 35 { 36 - struct mtd_blktrans_dev *dev = kmalloc(sizeof(*dev), GFP_KERNEL); 36 + struct mtd_blktrans_dev *dev = kzalloc(sizeof(*dev), GFP_KERNEL); 37 37 38 38 if (!dev) 39 39 return; 40 40 41 - memset(dev, 0, sizeof(*dev)); 42 - 43 41 dev->mtd = mtd; 44 42 dev->devnum = mtd->index; 45 - dev->blksize = 512; 43 + 46 44 dev->size = mtd->size >> 9; 47 45 dev->tr = tr; 48 46 dev->readonly = 1; ··· 58 60 .name = "mtdblock", 59 61 .major = 31, 60 62 .part_bits = 0, 63 + .blksize = 512, 61 64 .readsect = mtdblock_readsect, 62 65 .writesect = mtdblock_writesect, 63 66 .add_mtd = mtdblock_add_mtd,
+11 -12
drivers/mtd/mtdchar.c
··· 7 7 8 8 #include <linux/device.h> 9 9 #include <linux/fs.h> 10 + #include <linux/err.h> 10 11 #include <linux/init.h> 11 12 #include <linux/kernel.h> 12 13 #include <linux/module.h> ··· 101 100 102 101 mtd = get_mtd_device(NULL, devnum); 103 102 104 - if (!mtd) 105 - return -ENODEV; 103 + if (IS_ERR(mtd)) 104 + return PTR_ERR(mtd); 106 105 107 106 if (MTD_ABSENT == mtd->type) { 108 107 put_mtd_device(mtd); ··· 432 431 if(!(file->f_mode & 2)) 433 432 return -EPERM; 434 433 435 - erase=kmalloc(sizeof(struct erase_info),GFP_KERNEL); 434 + erase=kzalloc(sizeof(struct erase_info),GFP_KERNEL); 436 435 if (!erase) 437 436 ret = -ENOMEM; 438 437 else { ··· 441 440 442 441 init_waitqueue_head(&waitq); 443 442 444 - memset (erase,0,sizeof(struct erase_info)); 445 443 if (copy_from_user(&erase->addr, argp, 446 444 sizeof(struct erase_info_user))) { 447 445 kfree(erase); ··· 499 499 if (ret) 500 500 return ret; 501 501 502 - ops.len = buf.length; 503 502 ops.ooblen = buf.length; 504 503 ops.ooboffs = buf.start & (mtd->oobsize - 1); 505 504 ops.datbuf = NULL; 506 505 ops.mode = MTD_OOB_PLACE; 507 506 508 - if (ops.ooboffs && ops.len > (mtd->oobsize - ops.ooboffs)) 507 + if (ops.ooboffs && ops.ooblen > (mtd->oobsize - ops.ooboffs)) 509 508 return -EINVAL; 510 509 511 510 ops.oobbuf = kmalloc(buf.length, GFP_KERNEL); ··· 519 520 buf.start &= ~(mtd->oobsize - 1); 520 521 ret = mtd->write_oob(mtd, buf.start, &ops); 521 522 522 - if (copy_to_user(argp + sizeof(uint32_t), &ops.retlen, 523 + if (copy_to_user(argp + sizeof(uint32_t), &ops.oobretlen, 523 524 sizeof(uint32_t))) 524 525 ret = -EFAULT; 525 526 ··· 547 548 if (ret) 548 549 return ret; 549 550 550 - ops.len = buf.length; 551 551 ops.ooblen = buf.length; 552 552 ops.ooboffs = buf.start & (mtd->oobsize - 1); 553 553 ops.datbuf = NULL; ··· 562 564 buf.start &= ~(mtd->oobsize - 1); 563 565 ret = mtd->read_oob(mtd, buf.start, &ops); 564 566 565 - if (put_user(ops.retlen, (uint32_t __user *)argp)) 567 + if (put_user(ops.oobretlen, (uint32_t __user *)argp)) 566 568 ret = -EFAULT; 567 - else if (ops.retlen && copy_to_user(buf.ptr, ops.oobbuf, 568 - ops.retlen)) 569 + else if (ops.oobretlen && copy_to_user(buf.ptr, ops.oobbuf, 570 + ops.oobretlen)) 569 571 ret = -EFAULT; 570 572 571 573 kfree(ops.oobbuf); ··· 614 616 memcpy(&oi.eccpos, mtd->ecclayout->eccpos, sizeof(oi.eccpos)); 615 617 memcpy(&oi.oobfree, mtd->ecclayout->oobfree, 616 618 sizeof(oi.oobfree)); 619 + oi.eccbytes = mtd->ecclayout->eccbytes; 617 620 618 621 if (copy_to_user(argp, &oi, sizeof(struct nand_oobinfo))) 619 622 return -EFAULT; ··· 714 715 if (!mtd->ecclayout) 715 716 return -EOPNOTSUPP; 716 717 717 - if (copy_to_user(argp, &mtd->ecclayout, 718 + if (copy_to_user(argp, mtd->ecclayout, 718 719 sizeof(struct nand_ecclayout))) 719 720 return -EFAULT; 720 721 break;
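
The OOB ioctl paths above now describe the transfer purely through the OOB fields of struct mtd_oob_ops (ooblen, ooboffs, oobbuf) and report the result in oobretlen rather than retlen. A minimal sketch of an OOB-only read built on those fields (helper name invented, error handling trimmed):

	/*
	 * Sketch only: read the whole spare area of one page through
	 * struct mtd_oob_ops, mirroring the MEMREADOOB path above.
	 * 'page_ofs' and 'oobbuf' are supplied by the caller.
	 */
	#include <linux/mtd/mtd.h>

	static int read_one_oob(struct mtd_info *mtd, loff_t page_ofs, u_char *oobbuf)
	{
		struct mtd_oob_ops ops = {
			.mode	 = MTD_OOB_PLACE,	/* raw byte placement */
			.ooblen	 = mtd->oobsize,	/* whole spare area of one page */
			.ooboffs = 0,
			.datbuf	 = NULL,		/* no main-area transfer */
			.oobbuf	 = oobbuf,
		};
		int ret;

		ret = mtd->read_oob(mtd, page_ofs, &ops);
		/* ops.oobretlen now holds the number of OOB bytes transferred */
		return ret;
	}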
+26 -17
drivers/mtd/mtdconcat.c
··· 247 247 struct mtd_oob_ops devops = *ops; 248 248 int i, err, ret = 0; 249 249 250 - ops->retlen = 0; 250 + ops->retlen = ops->oobretlen = 0; 251 251 252 252 for (i = 0; i < concat->num_subdev; i++) { 253 253 struct mtd_info *subdev = concat->subdev[i]; ··· 263 263 264 264 err = subdev->read_oob(subdev, from, &devops); 265 265 ops->retlen += devops.retlen; 266 + ops->oobretlen += devops.oobretlen; 266 267 267 268 /* Save information about bitflips! */ 268 269 if (unlikely(err)) { ··· 279 278 return err; 280 279 } 281 280 282 - devops.len = ops->len - ops->retlen; 283 - if (!devops.len) 284 - return ret; 285 - 286 - if (devops.datbuf) 281 + if (devops.datbuf) { 282 + devops.len = ops->len - ops->retlen; 283 + if (!devops.len) 284 + return ret; 287 285 devops.datbuf += devops.retlen; 288 - if (devops.oobbuf) 289 - devops.oobbuf += devops.ooblen; 286 + } 287 + if (devops.oobbuf) { 288 + devops.ooblen = ops->ooblen - ops->oobretlen; 289 + if (!devops.ooblen) 290 + return ret; 291 + devops.oobbuf += ops->oobretlen; 292 + } 290 293 291 294 from = 0; 292 295 } ··· 326 321 if (err) 327 322 return err; 328 323 329 - devops.len = ops->len - ops->retlen; 330 - if (!devops.len) 331 - return 0; 332 - 333 - if (devops.datbuf) 324 + if (devops.datbuf) { 325 + devops.len = ops->len - ops->retlen; 326 + if (!devops.len) 327 + return 0; 334 328 devops.datbuf += devops.retlen; 335 - if (devops.oobbuf) 336 - devops.oobbuf += devops.ooblen; 329 + } 330 + if (devops.oobbuf) { 331 + devops.ooblen = ops->ooblen - ops->oobretlen; 332 + if (!devops.ooblen) 333 + return 0; 334 + devops.oobbuf += devops.oobretlen; 335 + } 337 336 to = 0; 338 337 } 339 338 return -EINVAL; ··· 708 699 709 700 /* allocate the device structure */ 710 701 size = SIZEOF_STRUCT_MTD_CONCAT(num_devs); 711 - concat = kmalloc(size, GFP_KERNEL); 702 + concat = kzalloc(size, GFP_KERNEL); 712 703 if (!concat) { 713 704 printk 714 705 ("memory allocation error while creating concatenated device \"%s\"\n", 715 706 name); 716 707 return NULL; 717 708 } 718 - memset(concat, 0, size); 719 709 concat->subdev = (struct mtd_info **) (concat + 1); 720 710 721 711 /* ··· 772 764 concat->mtd.ecc_stats.badblocks += 773 765 subdev[i]->ecc_stats.badblocks; 774 766 if (concat->mtd.writesize != subdev[i]->writesize || 767 + concat->mtd.subpage_sft != subdev[i]->subpage_sft || 775 768 concat->mtd.oobsize != subdev[i]->oobsize || 776 769 concat->mtd.ecctype != subdev[i]->ecctype || 777 770 concat->mtd.eccsize != subdev[i]->eccsize ||
+78 -15
drivers/mtd/mtdcore.c
··· 15 15 #include <linux/timer.h> 16 16 #include <linux/major.h> 17 17 #include <linux/fs.h> 18 + #include <linux/err.h> 18 19 #include <linux/ioctl.h> 19 20 #include <linux/init.h> 20 21 #include <linux/mtd/compatmac.h> ··· 193 192 * Given a number and NULL address, return the num'th entry in the device 194 193 * table, if any. Given an address and num == -1, search the device table 195 194 * for a device with that address and return if it's still present. Given 196 - * both, return the num'th driver only if its address matches. Return NULL 197 - * if not. 195 + * both, return the num'th driver only if its address matches. Return 196 + * error code if not. 198 197 */ 199 198 200 199 struct mtd_info *get_mtd_device(struct mtd_info *mtd, int num) 201 200 { 202 201 struct mtd_info *ret = NULL; 203 - int i; 202 + int i, err = -ENODEV; 204 203 205 204 mutex_lock(&mtd_table_mutex); 206 205 ··· 214 213 ret = NULL; 215 214 } 216 215 217 - if (ret && !try_module_get(ret->owner)) 218 - ret = NULL; 216 + if (!ret) 217 + goto out_unlock; 219 218 220 - if (ret) 221 - ret->usecount++; 219 + if (!try_module_get(ret->owner)) 220 + goto out_unlock; 222 221 222 + if (ret->get_device) { 223 + err = ret->get_device(ret); 224 + if (err) 225 + goto out_put; 226 + } 227 + 228 + ret->usecount++; 223 229 mutex_unlock(&mtd_table_mutex); 224 230 return ret; 231 + 232 + out_put: 233 + module_put(ret->owner); 234 + out_unlock: 235 + mutex_unlock(&mtd_table_mutex); 236 + return ERR_PTR(err); 237 + } 238 + 239 + /** 240 + * get_mtd_device_nm - obtain a validated handle for an MTD device by 241 + * device name 242 + * @name: MTD device name to open 243 + * 244 + * This function returns MTD device description structure in case of 245 + * success and an error code in case of failure. 
246 + */ 247 + 248 + struct mtd_info *get_mtd_device_nm(const char *name) 249 + { 250 + int i, err = -ENODEV; 251 + struct mtd_info *mtd = NULL; 252 + 253 + mutex_lock(&mtd_table_mutex); 254 + 255 + for (i = 0; i < MAX_MTD_DEVICES; i++) { 256 + if (mtd_table[i] && !strcmp(name, mtd_table[i]->name)) { 257 + mtd = mtd_table[i]; 258 + break; 259 + } 260 + } 261 + 262 + if (!mtd) 263 + goto out_unlock; 264 + 265 + if (!try_module_get(mtd->owner)) 266 + goto out_unlock; 267 + 268 + if (mtd->get_device) { 269 + err = mtd->get_device(mtd); 270 + if (err) 271 + goto out_put; 272 + } 273 + 274 + mtd->usecount++; 275 + mutex_unlock(&mtd_table_mutex); 276 + return mtd; 277 + 278 + out_put: 279 + module_put(mtd->owner); 280 + out_unlock: 281 + mutex_unlock(&mtd_table_mutex); 282 + return ERR_PTR(err); 225 283 } 226 284 227 285 void put_mtd_device(struct mtd_info *mtd) ··· 289 229 290 230 mutex_lock(&mtd_table_mutex); 291 231 c = --mtd->usecount; 232 + if (mtd->put_device) 233 + mtd->put_device(mtd); 292 234 mutex_unlock(&mtd_table_mutex); 293 235 BUG_ON(c < 0); 294 236 ··· 298 236 } 299 237 300 238 /* default_mtd_writev - default mtd writev method for MTD devices that 301 - * dont implement their own 239 + * don't implement their own 302 240 */ 303 241 304 242 int default_mtd_writev(struct mtd_info *mtd, const struct kvec *vecs, ··· 326 264 return ret; 327 265 } 328 266 329 - EXPORT_SYMBOL(add_mtd_device); 330 - EXPORT_SYMBOL(del_mtd_device); 331 - EXPORT_SYMBOL(get_mtd_device); 332 - EXPORT_SYMBOL(put_mtd_device); 333 - EXPORT_SYMBOL(register_mtd_user); 334 - EXPORT_SYMBOL(unregister_mtd_user); 335 - EXPORT_SYMBOL(default_mtd_writev); 267 + EXPORT_SYMBOL_GPL(add_mtd_device); 268 + EXPORT_SYMBOL_GPL(del_mtd_device); 269 + EXPORT_SYMBOL_GPL(get_mtd_device); 270 + EXPORT_SYMBOL_GPL(get_mtd_device_nm); 271 + EXPORT_SYMBOL_GPL(put_mtd_device); 272 + EXPORT_SYMBOL_GPL(register_mtd_user); 273 + EXPORT_SYMBOL_GPL(unregister_mtd_user); 274 + EXPORT_SYMBOL_GPL(default_mtd_writev); 336 275 337 276 #ifdef CONFIG_PROC_FS 338 277
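
get_mtd_device() now hands back an ERR_PTR() value instead of NULL on failure and honours an optional per-device get_device()/put_device() hook, and get_mtd_device_nm() adds lookup by name. A short sketch of the resulting calling convention, matching the nettel.c and mtdchar.c conversions above (device name is an example only):

	/*
	 * Sketch of the new calling convention: look an MTD device up by
	 * name, use it, then drop the reference again.
	 */
	#include <linux/err.h>
	#include <linux/kernel.h>
	#include <linux/mtd/mtd.h>

	static int use_named_mtd(void)
	{
		struct mtd_info *mtd;

		mtd = get_mtd_device_nm("boot flash");	/* example name */
		if (IS_ERR(mtd))
			return PTR_ERR(mtd);		/* e.g. -ENODEV */

		printk(KERN_INFO "found %s, %u bytes\n", mtd->name, mtd->size);

		put_mtd_device(mtd);	/* drops usecount, calls put_device hook if set */
		return 0;
	}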
+4 -4
drivers/mtd/mtdpart.c
··· 94 94 95 95 if (from >= mtd->size) 96 96 return -EINVAL; 97 - if (from + ops->len > mtd->size) 97 + if (ops->datbuf && from + ops->len > mtd->size) 98 98 return -EINVAL; 99 99 res = part->master->read_oob(part->master, from + part->offset, ops); 100 100 ··· 161 161 162 162 if (to >= mtd->size) 163 163 return -EINVAL; 164 - if (to + ops->len > mtd->size) 164 + if (ops->datbuf && to + ops->len > mtd->size) 165 165 return -EINVAL; 166 166 return part->master->write_oob(part->master, to + part->offset, ops); 167 167 } ··· 323 323 for (i = 0; i < nbparts; i++) { 324 324 325 325 /* allocate the partition structure */ 326 - slave = kmalloc (sizeof(*slave), GFP_KERNEL); 326 + slave = kzalloc (sizeof(*slave), GFP_KERNEL); 327 327 if (!slave) { 328 328 printk ("memory allocation error while creating partitions for \"%s\"\n", 329 329 master->name); 330 330 del_mtd_partitions(master); 331 331 return -ENOMEM; 332 332 } 333 - memset(slave, 0, sizeof(*slave)); 334 333 list_add(&slave->list, &mtd_partitions); 335 334 336 335 /* set up the MTD object for this partition */ ··· 340 341 slave->mtd.oobsize = master->oobsize; 341 342 slave->mtd.ecctype = master->ecctype; 342 343 slave->mtd.eccsize = master->eccsize; 344 + slave->mtd.subpage_sft = master->subpage_sft; 343 345 344 346 slave->mtd.name = parts[i].name; 345 347 slave->mtd.bank_size = master->bank_size;
+16
drivers/mtd/nand/Kconfig
··· 90 90 depends on MTD_NAND && SH_SOLUTION_ENGINE 91 91 select REED_SOLOMON 92 92 select REED_SOLOMON_DEC8 93 + select BITREVERSE 93 94 help 94 95 This enables the driver for the Renesas Technology AG-AND 95 96 flash interface board (FROM_BOARD4) ··· 133 132 config MTD_NAND_NDFC 134 133 tristate "NDFC NanD Flash Controller" 135 134 depends on MTD_NAND && 44x 135 + select MTD_NAND_ECC_SMC 136 136 help 137 137 NDFC Nand Flash Controllers are integrated in EP44x SoCs 138 138 ··· 221 219 tristate "Support for NAND Flash on Sharp SL Series (C7xx + others)" 222 220 depends on MTD_NAND && ARCH_PXA 223 221 222 + config MTD_NAND_CAFE 223 + tristate "NAND support for OLPC CAFÉ chip" 224 + depends on PCI 225 + help 226 + Use NAND flash attached to the CAFÉ chip designed for the $100 227 + laptop. 228 + 224 229 config MTD_NAND_CS553X 225 230 tristate "NAND support for CS5535/CS5536 (AMD Geode companion chip)" 226 231 depends on MTD_NAND && X86_32 && (X86_PC || X86_GENERICARCH) ··· 240 231 the controller be in MMIO mode. 241 232 242 233 If you say "m", the module will be called "cs553x_nand.ko". 234 + 235 + config MTD_NAND_AT91 236 + bool "Support for NAND Flash / SmartMedia on AT91" 237 + depends on MTD_NAND && ARCH_AT91 238 + help 239 + Enables support for NAND Flash / Smart Media Card interface 240 + on Atmel AT91 processors. 243 241 244 242 config MTD_NAND_NANDSIM 245 243 tristate "Support for NAND Flash Simulator"
+4 -1
drivers/mtd/nand/Makefile
··· 6 6 obj-$(CONFIG_MTD_NAND) += nand.o nand_ecc.o 7 7 obj-$(CONFIG_MTD_NAND_IDS) += nand_ids.o 8 8 9 + obj-$(CONFIG_MTD_NAND_CAFE) += cafe_nand.o 9 10 obj-$(CONFIG_MTD_NAND_SPIA) += spia.o 10 11 obj-$(CONFIG_MTD_NAND_AMS_DELTA) += ams-delta.o 11 12 obj-$(CONFIG_MTD_NAND_TOTO) += toto.o ··· 23 22 obj-$(CONFIG_MTD_NAND_NANDSIM) += nandsim.o 24 23 obj-$(CONFIG_MTD_NAND_CS553X) += cs553x_nand.o 25 24 obj-$(CONFIG_MTD_NAND_NDFC) += ndfc.o 25 + obj-$(CONFIG_MTD_NAND_AT91) += at91_nand.o 26 26 27 - nand-objs = nand_base.o nand_bbt.o 27 + nand-objs := nand_base.o nand_bbt.o 28 + cafe_nand-objs := cafe.o cafe_ecc.o
+223
drivers/mtd/nand/at91_nand.c
··· 1 + /* 2 + * drivers/mtd/nand/at91_nand.c 3 + * 4 + * Copyright (C) 2003 Rick Bronson 5 + * 6 + * Derived from drivers/mtd/nand/autcpu12.c 7 + * Copyright (c) 2001 Thomas Gleixner (gleixner@autronix.de) 8 + * 9 + * Derived from drivers/mtd/spia.c 10 + * Copyright (C) 2000 Steven J. Hill (sjhill@cotw.com) 11 + * 12 + * This program is free software; you can redistribute it and/or modify 13 + * it under the terms of the GNU General Public License version 2 as 14 + * published by the Free Software Foundation. 15 + * 16 + */ 17 + 18 + #include <linux/slab.h> 19 + #include <linux/module.h> 20 + #include <linux/platform_device.h> 21 + #include <linux/mtd/mtd.h> 22 + #include <linux/mtd/nand.h> 23 + #include <linux/mtd/partitions.h> 24 + 25 + #include <asm/io.h> 26 + #include <asm/sizes.h> 27 + 28 + #include <asm/hardware.h> 29 + #include <asm/arch/board.h> 30 + #include <asm/arch/gpio.h> 31 + 32 + struct at91_nand_host { 33 + struct nand_chip nand_chip; 34 + struct mtd_info mtd; 35 + void __iomem *io_base; 36 + struct at91_nand_data *board; 37 + }; 38 + 39 + /* 40 + * Hardware specific access to control-lines 41 + */ 42 + static void at91_nand_cmd_ctrl(struct mtd_info *mtd, int cmd, unsigned int ctrl) 43 + { 44 + struct nand_chip *nand_chip = mtd->priv; 45 + struct at91_nand_host *host = nand_chip->priv; 46 + 47 + if (cmd == NAND_CMD_NONE) 48 + return; 49 + 50 + if (ctrl & NAND_CLE) 51 + writeb(cmd, host->io_base + (1 << host->board->cle)); 52 + else 53 + writeb(cmd, host->io_base + (1 << host->board->ale)); 54 + } 55 + 56 + /* 57 + * Read the Device Ready pin. 58 + */ 59 + static int at91_nand_device_ready(struct mtd_info *mtd) 60 + { 61 + struct nand_chip *nand_chip = mtd->priv; 62 + struct at91_nand_host *host = nand_chip->priv; 63 + 64 + return at91_get_gpio_value(host->board->rdy_pin); 65 + } 66 + 67 + /* 68 + * Enable NAND. 69 + */ 70 + static void at91_nand_enable(struct at91_nand_host *host) 71 + { 72 + if (host->board->enable_pin) 73 + at91_set_gpio_value(host->board->enable_pin, 0); 74 + } 75 + 76 + /* 77 + * Disable NAND. 78 + */ 79 + static void at91_nand_disable(struct at91_nand_host *host) 80 + { 81 + if (host->board->enable_pin) 82 + at91_set_gpio_value(host->board->enable_pin, 1); 83 + } 84 + 85 + /* 86 + * Probe for the NAND device. 
87 + */ 88 + static int __init at91_nand_probe(struct platform_device *pdev) 89 + { 90 + struct at91_nand_host *host; 91 + struct mtd_info *mtd; 92 + struct nand_chip *nand_chip; 93 + int res; 94 + 95 + #ifdef CONFIG_MTD_PARTITIONS 96 + struct mtd_partition *partitions = NULL; 97 + int num_partitions = 0; 98 + #endif 99 + 100 + /* Allocate memory for the device structure (and zero it) */ 101 + host = kzalloc(sizeof(struct at91_nand_host), GFP_KERNEL); 102 + if (!host) { 103 + printk(KERN_ERR "at91_nand: failed to allocate device structure.\n"); 104 + return -ENOMEM; 105 + } 106 + 107 + host->io_base = ioremap(pdev->resource[0].start, 108 + pdev->resource[0].end - pdev->resource[0].start + 1); 109 + if (host->io_base == NULL) { 110 + printk(KERN_ERR "at91_nand: ioremap failed\n"); 111 + kfree(host); 112 + return -EIO; 113 + } 114 + 115 + mtd = &host->mtd; 116 + nand_chip = &host->nand_chip; 117 + host->board = pdev->dev.platform_data; 118 + 119 + nand_chip->priv = host; /* link the private data structures */ 120 + mtd->priv = nand_chip; 121 + mtd->owner = THIS_MODULE; 122 + 123 + /* Set address of NAND IO lines */ 124 + nand_chip->IO_ADDR_R = host->io_base; 125 + nand_chip->IO_ADDR_W = host->io_base; 126 + nand_chip->cmd_ctrl = at91_nand_cmd_ctrl; 127 + nand_chip->dev_ready = at91_nand_device_ready; 128 + nand_chip->ecc.mode = NAND_ECC_SOFT; /* enable ECC */ 129 + nand_chip->chip_delay = 20; /* 20us command delay time */ 130 + 131 + if (host->board->bus_width_16) /* 16-bit bus width */ 132 + nand_chip->options |= NAND_BUSWIDTH_16; 133 + 134 + platform_set_drvdata(pdev, host); 135 + at91_nand_enable(host); 136 + 137 + if (host->board->det_pin) { 138 + if (at91_get_gpio_value(host->board->det_pin)) { 139 + printk ("No SmartMedia card inserted.\n"); 140 + res = ENXIO; 141 + goto out; 142 + } 143 + } 144 + 145 + /* Scan to find existance of the device */ 146 + if (nand_scan(mtd, 1)) { 147 + res = -ENXIO; 148 + goto out; 149 + } 150 + 151 + #ifdef CONFIG_MTD_PARTITIONS 152 + if (host->board->partition_info) 153 + partitions = host->board->partition_info(mtd->size, &num_partitions); 154 + 155 + if ((!partitions) || (num_partitions == 0)) { 156 + printk(KERN_ERR "at91_nand: No parititions defined, or unsupported device.\n"); 157 + res = ENXIO; 158 + goto release; 159 + } 160 + 161 + res = add_mtd_partitions(mtd, partitions, num_partitions); 162 + #else 163 + res = add_mtd_device(mtd); 164 + #endif 165 + 166 + if (!res) 167 + return res; 168 + 169 + release: 170 + nand_release(mtd); 171 + out: 172 + at91_nand_disable(host); 173 + platform_set_drvdata(pdev, NULL); 174 + iounmap(host->io_base); 175 + kfree(host); 176 + return res; 177 + } 178 + 179 + /* 180 + * Remove a NAND device. 
181 + */ 182 + static int __devexit at91_nand_remove(struct platform_device *pdev) 183 + { 184 + struct at91_nand_host *host = platform_get_drvdata(pdev); 185 + struct mtd_info *mtd = &host->mtd; 186 + 187 + nand_release(mtd); 188 + 189 + at91_nand_disable(host); 190 + 191 + iounmap(host->io_base); 192 + kfree(host); 193 + 194 + return 0; 195 + } 196 + 197 + static struct platform_driver at91_nand_driver = { 198 + .probe = at91_nand_probe, 199 + .remove = at91_nand_remove, 200 + .driver = { 201 + .name = "at91_nand", 202 + .owner = THIS_MODULE, 203 + }, 204 + }; 205 + 206 + static int __init at91_nand_init(void) 207 + { 208 + return platform_driver_register(&at91_nand_driver); 209 + } 210 + 211 + 212 + static void __exit at91_nand_exit(void) 213 + { 214 + platform_driver_unregister(&at91_nand_driver); 215 + } 216 + 217 + 218 + module_init(at91_nand_init); 219 + module_exit(at91_nand_exit); 220 + 221 + MODULE_LICENSE("GPL"); 222 + MODULE_AUTHOR("Rick Bronson"); 223 + MODULE_DESCRIPTION("NAND/SmartMedia driver for AT91RM9200");
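
at91_nand_cmd_ctrl() above issues a command byte by writing it to io_base + (1 << cle) and an address byte to io_base + (1 << ale); the board is expected to wire the NAND CLE and ALE signals onto those address lines, with the bit numbers supplied through struct at91_nand_data. A tiny userspace sketch of the resulting addresses, using hypothetical wiring values (22 and 21) that are not taken from the driver:

	/*
	 * Userspace sketch of the CLE/ALE address decoding used by
	 * at91_nand_cmd_ctrl() above.  The base address and the bit numbers
	 * are placeholder board-wiring values, not taken from the driver.
	 */
	#include <stdio.h>

	int main(void)
	{
		unsigned long io_base = 0x40000000;	/* example chip-select base */
		unsigned int cle = 22, ale = 21;	/* hypothetical wiring */

		printf("command byte -> 0x%08lx\n", io_base + (1UL << cle));
		printf("address byte -> 0x%08lx\n", io_base + (1UL << ale));
		printf("data byte    -> 0x%08lx\n", io_base);
		return 0;
	}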
+770
drivers/mtd/nand/cafe.c
··· 1 + /* 2 + * Driver for One Laptop Per Child ‘CAFÉ’ controller, aka Marvell 88ALP01 3 + * 4 + * Copyright © 2006 Red Hat, Inc. 5 + * Copyright © 2006 David Woodhouse <dwmw2@infradead.org> 6 + */ 7 + 8 + #define DEBUG 9 + 10 + #include <linux/device.h> 11 + #undef DEBUG 12 + #include <linux/mtd/mtd.h> 13 + #include <linux/mtd/nand.h> 14 + #include <linux/pci.h> 15 + #include <linux/delay.h> 16 + #include <linux/interrupt.h> 17 + #include <asm/io.h> 18 + 19 + #define CAFE_NAND_CTRL1 0x00 20 + #define CAFE_NAND_CTRL2 0x04 21 + #define CAFE_NAND_CTRL3 0x08 22 + #define CAFE_NAND_STATUS 0x0c 23 + #define CAFE_NAND_IRQ 0x10 24 + #define CAFE_NAND_IRQ_MASK 0x14 25 + #define CAFE_NAND_DATA_LEN 0x18 26 + #define CAFE_NAND_ADDR1 0x1c 27 + #define CAFE_NAND_ADDR2 0x20 28 + #define CAFE_NAND_TIMING1 0x24 29 + #define CAFE_NAND_TIMING2 0x28 30 + #define CAFE_NAND_TIMING3 0x2c 31 + #define CAFE_NAND_NONMEM 0x30 32 + #define CAFE_NAND_ECC_RESULT 0x3C 33 + #define CAFE_NAND_DMA_CTRL 0x40 34 + #define CAFE_NAND_DMA_ADDR0 0x44 35 + #define CAFE_NAND_DMA_ADDR1 0x48 36 + #define CAFE_NAND_ECC_SYN01 0x50 37 + #define CAFE_NAND_ECC_SYN23 0x54 38 + #define CAFE_NAND_ECC_SYN45 0x58 39 + #define CAFE_NAND_ECC_SYN67 0x5c 40 + #define CAFE_NAND_READ_DATA 0x1000 41 + #define CAFE_NAND_WRITE_DATA 0x2000 42 + 43 + #define CAFE_GLOBAL_CTRL 0x3004 44 + #define CAFE_GLOBAL_IRQ 0x3008 45 + #define CAFE_GLOBAL_IRQ_MASK 0x300c 46 + #define CAFE_NAND_RESET 0x3034 47 + 48 + int cafe_correct_ecc(unsigned char *buf, 49 + unsigned short *chk_syndrome_list); 50 + 51 + struct cafe_priv { 52 + struct nand_chip nand; 53 + struct pci_dev *pdev; 54 + void __iomem *mmio; 55 + uint32_t ctl1; 56 + uint32_t ctl2; 57 + int datalen; 58 + int nr_data; 59 + int data_pos; 60 + int page_addr; 61 + dma_addr_t dmaaddr; 62 + unsigned char *dmabuf; 63 + }; 64 + 65 + static int usedma = 1; 66 + module_param(usedma, int, 0644); 67 + 68 + static int skipbbt = 0; 69 + module_param(skipbbt, int, 0644); 70 + 71 + static int debug = 0; 72 + module_param(debug, int, 0644); 73 + 74 + static int regdebug = 0; 75 + module_param(regdebug, int, 0644); 76 + 77 + static int checkecc = 1; 78 + module_param(checkecc, int, 0644); 79 + 80 + static int slowtiming = 0; 81 + module_param(slowtiming, int, 0644); 82 + 83 + /* Hrm. Why isn't this already conditional on something in the struct device? */ 84 + #define cafe_dev_dbg(dev, args...) 
do { if (debug) dev_dbg(dev, ##args); } while(0) 85 + 86 + /* Make it easier to switch to PIO if we need to */ 87 + #define cafe_readl(cafe, addr) readl((cafe)->mmio + CAFE_##addr) 88 + #define cafe_writel(cafe, datum, addr) writel(datum, (cafe)->mmio + CAFE_##addr) 89 + 90 + static int cafe_device_ready(struct mtd_info *mtd) 91 + { 92 + struct cafe_priv *cafe = mtd->priv; 93 + int result = !!(cafe_readl(cafe, NAND_STATUS) | 0x40000000); 94 + uint32_t irqs = cafe_readl(cafe, NAND_IRQ); 95 + 96 + cafe_writel(cafe, irqs, NAND_IRQ); 97 + 98 + cafe_dev_dbg(&cafe->pdev->dev, "NAND device is%s ready, IRQ %x (%x) (%x,%x)\n", 99 + result?"":" not", irqs, cafe_readl(cafe, NAND_IRQ), 100 + cafe_readl(cafe, GLOBAL_IRQ), cafe_readl(cafe, GLOBAL_IRQ_MASK)); 101 + 102 + return result; 103 + } 104 + 105 + 106 + static void cafe_write_buf(struct mtd_info *mtd, const uint8_t *buf, int len) 107 + { 108 + struct cafe_priv *cafe = mtd->priv; 109 + 110 + if (usedma) 111 + memcpy(cafe->dmabuf + cafe->datalen, buf, len); 112 + else 113 + memcpy_toio(cafe->mmio + CAFE_NAND_WRITE_DATA + cafe->datalen, buf, len); 114 + 115 + cafe->datalen += len; 116 + 117 + cafe_dev_dbg(&cafe->pdev->dev, "Copy 0x%x bytes to write buffer. datalen 0x%x\n", 118 + len, cafe->datalen); 119 + } 120 + 121 + static void cafe_read_buf(struct mtd_info *mtd, uint8_t *buf, int len) 122 + { 123 + struct cafe_priv *cafe = mtd->priv; 124 + 125 + if (usedma) 126 + memcpy(buf, cafe->dmabuf + cafe->datalen, len); 127 + else 128 + memcpy_fromio(buf, cafe->mmio + CAFE_NAND_READ_DATA + cafe->datalen, len); 129 + 130 + cafe_dev_dbg(&cafe->pdev->dev, "Copy 0x%x bytes from position 0x%x in read buffer.\n", 131 + len, cafe->datalen); 132 + cafe->datalen += len; 133 + } 134 + 135 + static uint8_t cafe_read_byte(struct mtd_info *mtd) 136 + { 137 + struct cafe_priv *cafe = mtd->priv; 138 + uint8_t d; 139 + 140 + cafe_read_buf(mtd, &d, 1); 141 + cafe_dev_dbg(&cafe->pdev->dev, "Read %02x\n", d); 142 + 143 + return d; 144 + } 145 + 146 + static void cafe_nand_cmdfunc(struct mtd_info *mtd, unsigned command, 147 + int column, int page_addr) 148 + { 149 + struct cafe_priv *cafe = mtd->priv; 150 + int adrbytes = 0; 151 + uint32_t ctl1; 152 + uint32_t doneint = 0x80000000; 153 + 154 + cafe_dev_dbg(&cafe->pdev->dev, "cmdfunc %02x, 0x%x, 0x%x\n", 155 + command, column, page_addr); 156 + 157 + if (command == NAND_CMD_ERASE2 || command == NAND_CMD_PAGEPROG) { 158 + /* Second half of a command we already calculated */ 159 + cafe_writel(cafe, cafe->ctl2 | 0x100 | command, NAND_CTRL2); 160 + ctl1 = cafe->ctl1; 161 + cafe->ctl2 &= ~(1<<30); 162 + cafe_dev_dbg(&cafe->pdev->dev, "Continue command, ctl1 %08x, #data %d\n", 163 + cafe->ctl1, cafe->nr_data); 164 + goto do_command; 165 + } 166 + /* Reset ECC engine */ 167 + cafe_writel(cafe, 0, NAND_CTRL2); 168 + 169 + /* Emulate NAND_CMD_READOOB on large-page chips */ 170 + if (mtd->writesize > 512 && 171 + command == NAND_CMD_READOOB) { 172 + column += mtd->writesize; 173 + command = NAND_CMD_READ0; 174 + } 175 + 176 + /* FIXME: Do we need to send read command before sending data 177 + for small-page chips, to position the buffer correctly? 
*/ 178 + 179 + if (column != -1) { 180 + cafe_writel(cafe, column, NAND_ADDR1); 181 + adrbytes = 2; 182 + if (page_addr != -1) 183 + goto write_adr2; 184 + } else if (page_addr != -1) { 185 + cafe_writel(cafe, page_addr & 0xffff, NAND_ADDR1); 186 + page_addr >>= 16; 187 + write_adr2: 188 + cafe_writel(cafe, page_addr, NAND_ADDR2); 189 + adrbytes += 2; 190 + if (mtd->size > mtd->writesize << 16) 191 + adrbytes++; 192 + } 193 + 194 + cafe->data_pos = cafe->datalen = 0; 195 + 196 + /* Set command valid bit */ 197 + ctl1 = 0x80000000 | command; 198 + 199 + /* Set RD or WR bits as appropriate */ 200 + if (command == NAND_CMD_READID || command == NAND_CMD_STATUS) { 201 + ctl1 |= (1<<26); /* rd */ 202 + /* Always 5 bytes, for now */ 203 + cafe->datalen = 4; 204 + /* And one address cycle -- even for STATUS, since the controller doesn't work without */ 205 + adrbytes = 1; 206 + } else if (command == NAND_CMD_READ0 || command == NAND_CMD_READ1 || 207 + command == NAND_CMD_READOOB || command == NAND_CMD_RNDOUT) { 208 + ctl1 |= 1<<26; /* rd */ 209 + /* For now, assume just read to end of page */ 210 + cafe->datalen = mtd->writesize + mtd->oobsize - column; 211 + } else if (command == NAND_CMD_SEQIN) 212 + ctl1 |= 1<<25; /* wr */ 213 + 214 + /* Set number of address bytes */ 215 + if (adrbytes) 216 + ctl1 |= ((adrbytes-1)|8) << 27; 217 + 218 + if (command == NAND_CMD_SEQIN || command == NAND_CMD_ERASE1) { 219 + /* Ignore the first command of a pair; the hardware 220 + deals with them both at once, later */ 221 + cafe->ctl1 = ctl1; 222 + cafe_dev_dbg(&cafe->pdev->dev, "Setup for delayed command, ctl1 %08x, dlen %x\n", 223 + cafe->ctl1, cafe->datalen); 224 + return; 225 + } 226 + /* RNDOUT and READ0 commands need a following byte */ 227 + if (command == NAND_CMD_RNDOUT) 228 + cafe_writel(cafe, cafe->ctl2 | 0x100 | NAND_CMD_RNDOUTSTART, NAND_CTRL2); 229 + else if (command == NAND_CMD_READ0 && mtd->writesize > 512) 230 + cafe_writel(cafe, cafe->ctl2 | 0x100 | NAND_CMD_READSTART, NAND_CTRL2); 231 + 232 + do_command: 233 + cafe_dev_dbg(&cafe->pdev->dev, "dlen %x, ctl1 %x, ctl2 %x\n", 234 + cafe->datalen, ctl1, cafe_readl(cafe, NAND_CTRL2)); 235 + 236 + /* NB: The datasheet lies -- we really should be subtracting 1 here */ 237 + cafe_writel(cafe, cafe->datalen, NAND_DATA_LEN); 238 + cafe_writel(cafe, 0x90000000, NAND_IRQ); 239 + if (usedma && (ctl1 & (3<<25))) { 240 + uint32_t dmactl = 0xc0000000 + cafe->datalen; 241 + /* If WR or RD bits set, set up DMA */ 242 + if (ctl1 & (1<<26)) { 243 + /* It's a read */ 244 + dmactl |= (1<<29); 245 + /* ... so it's done when the DMA is done, not just 246 + the command. */ 247 + doneint = 0x10000000; 248 + } 249 + cafe_writel(cafe, dmactl, NAND_DMA_CTRL); 250 + } 251 + cafe->datalen = 0; 252 + 253 + if (unlikely(regdebug)) { 254 + int i; 255 + printk("About to write command %08x to register 0\n", ctl1); 256 + for (i=4; i< 0x5c; i+=4) 257 + printk("Register %x: %08x\n", i, readl(cafe->mmio + i)); 258 + } 259 + 260 + cafe_writel(cafe, ctl1, NAND_CTRL1); 261 + /* Apply this short delay always to ensure that we do wait tWB in 262 + * any case on any machine. 
*/ 263 + ndelay(100); 264 + 265 + if (1) { 266 + int c = 500000; 267 + uint32_t irqs; 268 + 269 + while (c--) { 270 + irqs = cafe_readl(cafe, NAND_IRQ); 271 + if (irqs & doneint) 272 + break; 273 + udelay(1); 274 + if (!(c % 100000)) 275 + cafe_dev_dbg(&cafe->pdev->dev, "Wait for ready, IRQ %x\n", irqs); 276 + cpu_relax(); 277 + } 278 + cafe_writel(cafe, doneint, NAND_IRQ); 279 + cafe_dev_dbg(&cafe->pdev->dev, "Command %x completed after %d usec, irqs %x (%x)\n", 280 + command, 500000-c, irqs, cafe_readl(cafe, NAND_IRQ)); 281 + } 282 + 283 + WARN_ON(cafe->ctl2 & (1<<30)); 284 + 285 + switch (command) { 286 + 287 + case NAND_CMD_CACHEDPROG: 288 + case NAND_CMD_PAGEPROG: 289 + case NAND_CMD_ERASE1: 290 + case NAND_CMD_ERASE2: 291 + case NAND_CMD_SEQIN: 292 + case NAND_CMD_RNDIN: 293 + case NAND_CMD_STATUS: 294 + case NAND_CMD_DEPLETE1: 295 + case NAND_CMD_RNDOUT: 296 + case NAND_CMD_STATUS_ERROR: 297 + case NAND_CMD_STATUS_ERROR0: 298 + case NAND_CMD_STATUS_ERROR1: 299 + case NAND_CMD_STATUS_ERROR2: 300 + case NAND_CMD_STATUS_ERROR3: 301 + cafe_writel(cafe, cafe->ctl2, NAND_CTRL2); 302 + return; 303 + } 304 + nand_wait_ready(mtd); 305 + cafe_writel(cafe, cafe->ctl2, NAND_CTRL2); 306 + } 307 + 308 + static void cafe_select_chip(struct mtd_info *mtd, int chipnr) 309 + { 310 + //struct cafe_priv *cafe = mtd->priv; 311 + // cafe_dev_dbg(&cafe->pdev->dev, "select_chip %d\n", chipnr); 312 + } 313 + 314 + static int cafe_nand_interrupt(int irq, void *id) 315 + { 316 + struct mtd_info *mtd = id; 317 + struct cafe_priv *cafe = mtd->priv; 318 + uint32_t irqs = cafe_readl(cafe, NAND_IRQ); 319 + cafe_writel(cafe, irqs & ~0x90000000, NAND_IRQ); 320 + if (!irqs) 321 + return IRQ_NONE; 322 + 323 + cafe_dev_dbg(&cafe->pdev->dev, "irq, bits %x (%x)\n", irqs, cafe_readl(cafe, NAND_IRQ)); 324 + return IRQ_HANDLED; 325 + } 326 + 327 + static void cafe_nand_bug(struct mtd_info *mtd) 328 + { 329 + BUG(); 330 + } 331 + 332 + static int cafe_nand_write_oob(struct mtd_info *mtd, 333 + struct nand_chip *chip, int page) 334 + { 335 + int status = 0; 336 + 337 + chip->cmdfunc(mtd, NAND_CMD_SEQIN, mtd->writesize, page); 338 + chip->write_buf(mtd, chip->oob_poi, mtd->oobsize); 339 + chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1); 340 + status = chip->waitfunc(mtd, chip); 341 + 342 + return status & NAND_STATUS_FAIL ? -EIO : 0; 343 + } 344 + 345 + /* Don't use -- use nand_read_oob_std for now */ 346 + static int cafe_nand_read_oob(struct mtd_info *mtd, struct nand_chip *chip, 347 + int page, int sndcmd) 348 + { 349 + chip->cmdfunc(mtd, NAND_CMD_READOOB, 0, page); 350 + chip->read_buf(mtd, chip->oob_poi, mtd->oobsize); 351 + return 1; 352 + } 353 + /** 354 + * cafe_nand_read_page_syndrome - {REPLACABLE] hardware ecc syndrom based page read 355 + * @mtd: mtd info structure 356 + * @chip: nand chip info structure 357 + * @buf: buffer to store read data 358 + * 359 + * The hw generator calculates the error syndrome automatically. Therefor 360 + * we need a special oob layout and handling. 
361 + */ 362 + static int cafe_nand_read_page(struct mtd_info *mtd, struct nand_chip *chip, 363 + uint8_t *buf) 364 + { 365 + struct cafe_priv *cafe = mtd->priv; 366 + 367 + cafe_dev_dbg(&cafe->pdev->dev, "ECC result %08x SYN1,2 %08x\n", 368 + cafe_readl(cafe, NAND_ECC_RESULT), 369 + cafe_readl(cafe, NAND_ECC_SYN01)); 370 + 371 + chip->read_buf(mtd, buf, mtd->writesize); 372 + chip->read_buf(mtd, chip->oob_poi, mtd->oobsize); 373 + 374 + if (checkecc && cafe_readl(cafe, NAND_ECC_RESULT) & (1<<18)) { 375 + unsigned short syn[8]; 376 + int i; 377 + 378 + for (i=0; i<8; i+=2) { 379 + uint32_t tmp = cafe_readl(cafe, NAND_ECC_SYN01 + (i*2)); 380 + syn[i] = tmp & 0xfff; 381 + syn[i+1] = (tmp >> 16) & 0xfff; 382 + } 383 + 384 + if ((i = cafe_correct_ecc(buf, syn)) < 0) { 385 + dev_dbg(&cafe->pdev->dev, "Failed to correct ECC at %08x\n", 386 + cafe_readl(cafe, NAND_ADDR2) * 2048); 387 + for (i=0; i< 0x5c; i+=4) 388 + printk("Register %x: %08x\n", i, readl(cafe->mmio + i)); 389 + mtd->ecc_stats.failed++; 390 + } else { 391 + dev_dbg(&cafe->pdev->dev, "Corrected %d symbol errors\n", i); 392 + mtd->ecc_stats.corrected += i; 393 + } 394 + } 395 + 396 + 397 + return 0; 398 + } 399 + 400 + static struct nand_ecclayout cafe_oobinfo_2048 = { 401 + .eccbytes = 14, 402 + .eccpos = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13}, 403 + .oobfree = {{14, 50}} 404 + }; 405 + 406 + /* Ick. The BBT code really ought to be able to work this bit out 407 + for itself from the above, at least for the 2KiB case */ 408 + static uint8_t cafe_bbt_pattern_2048[] = { 'B', 'b', 't', '0' }; 409 + static uint8_t cafe_mirror_pattern_2048[] = { '1', 't', 'b', 'B' }; 410 + 411 + static uint8_t cafe_bbt_pattern_512[] = { 0xBB }; 412 + static uint8_t cafe_mirror_pattern_512[] = { 0xBC }; 413 + 414 + 415 + static struct nand_bbt_descr cafe_bbt_main_descr_2048 = { 416 + .options = NAND_BBT_LASTBLOCK | NAND_BBT_CREATE | NAND_BBT_WRITE 417 + | NAND_BBT_2BIT | NAND_BBT_VERSION | NAND_BBT_PERCHIP, 418 + .offs = 14, 419 + .len = 4, 420 + .veroffs = 18, 421 + .maxblocks = 4, 422 + .pattern = cafe_bbt_pattern_2048 423 + }; 424 + 425 + static struct nand_bbt_descr cafe_bbt_mirror_descr_2048 = { 426 + .options = NAND_BBT_LASTBLOCK | NAND_BBT_CREATE | NAND_BBT_WRITE 427 + | NAND_BBT_2BIT | NAND_BBT_VERSION | NAND_BBT_PERCHIP, 428 + .offs = 14, 429 + .len = 4, 430 + .veroffs = 18, 431 + .maxblocks = 4, 432 + .pattern = cafe_mirror_pattern_2048 433 + }; 434 + 435 + static struct nand_ecclayout cafe_oobinfo_512 = { 436 + .eccbytes = 14, 437 + .eccpos = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13}, 438 + .oobfree = {{14, 2}} 439 + }; 440 + 441 + static struct nand_bbt_descr cafe_bbt_main_descr_512 = { 442 + .options = NAND_BBT_LASTBLOCK | NAND_BBT_CREATE | NAND_BBT_WRITE 443 + | NAND_BBT_2BIT | NAND_BBT_VERSION | NAND_BBT_PERCHIP, 444 + .offs = 14, 445 + .len = 1, 446 + .veroffs = 15, 447 + .maxblocks = 4, 448 + .pattern = cafe_bbt_pattern_512 449 + }; 450 + 451 + static struct nand_bbt_descr cafe_bbt_mirror_descr_512 = { 452 + .options = NAND_BBT_LASTBLOCK | NAND_BBT_CREATE | NAND_BBT_WRITE 453 + | NAND_BBT_2BIT | NAND_BBT_VERSION | NAND_BBT_PERCHIP, 454 + .offs = 14, 455 + .len = 1, 456 + .veroffs = 15, 457 + .maxblocks = 4, 458 + .pattern = cafe_mirror_pattern_512 459 + }; 460 + 461 + 462 + static void cafe_nand_write_page_lowlevel(struct mtd_info *mtd, 463 + struct nand_chip *chip, const uint8_t *buf) 464 + { 465 + struct cafe_priv *cafe = mtd->priv; 466 + 467 + chip->write_buf(mtd, buf, mtd->writesize); 468 + chip->write_buf(mtd, 
chip->oob_poi, mtd->oobsize); 469 + 470 + /* Set up ECC autogeneration */ 471 + cafe->ctl2 |= (1<<30); 472 + } 473 + 474 + static int cafe_nand_write_page(struct mtd_info *mtd, struct nand_chip *chip, 475 + const uint8_t *buf, int page, int cached, int raw) 476 + { 477 + int status; 478 + 479 + chip->cmdfunc(mtd, NAND_CMD_SEQIN, 0x00, page); 480 + 481 + if (unlikely(raw)) 482 + chip->ecc.write_page_raw(mtd, chip, buf); 483 + else 484 + chip->ecc.write_page(mtd, chip, buf); 485 + 486 + /* 487 + * Cached progamming disabled for now, Not sure if its worth the 488 + * trouble. The speed gain is not very impressive. (2.3->2.6Mib/s) 489 + */ 490 + cached = 0; 491 + 492 + if (!cached || !(chip->options & NAND_CACHEPRG)) { 493 + 494 + chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1); 495 + status = chip->waitfunc(mtd, chip); 496 + /* 497 + * See if operation failed and additional status checks are 498 + * available 499 + */ 500 + if ((status & NAND_STATUS_FAIL) && (chip->errstat)) 501 + status = chip->errstat(mtd, chip, FL_WRITING, status, 502 + page); 503 + 504 + if (status & NAND_STATUS_FAIL) 505 + return -EIO; 506 + } else { 507 + chip->cmdfunc(mtd, NAND_CMD_CACHEDPROG, -1, -1); 508 + status = chip->waitfunc(mtd, chip); 509 + } 510 + 511 + #ifdef CONFIG_MTD_NAND_VERIFY_WRITE 512 + /* Send command to read back the data */ 513 + chip->cmdfunc(mtd, NAND_CMD_READ0, 0, page); 514 + 515 + if (chip->verify_buf(mtd, buf, mtd->writesize)) 516 + return -EIO; 517 + #endif 518 + return 0; 519 + } 520 + 521 + static int cafe_nand_block_bad(struct mtd_info *mtd, loff_t ofs, int getchip) 522 + { 523 + return 0; 524 + } 525 + 526 + static int __devinit cafe_nand_probe(struct pci_dev *pdev, 527 + const struct pci_device_id *ent) 528 + { 529 + struct mtd_info *mtd; 530 + struct cafe_priv *cafe; 531 + uint32_t ctrl; 532 + int err = 0; 533 + 534 + err = pci_enable_device(pdev); 535 + if (err) 536 + return err; 537 + 538 + pci_set_master(pdev); 539 + 540 + mtd = kzalloc(sizeof(*mtd) + sizeof(struct cafe_priv), GFP_KERNEL); 541 + if (!mtd) { 542 + dev_warn(&pdev->dev, "failed to alloc mtd_info\n"); 543 + return -ENOMEM; 544 + } 545 + cafe = (void *)(&mtd[1]); 546 + 547 + mtd->priv = cafe; 548 + mtd->owner = THIS_MODULE; 549 + 550 + cafe->pdev = pdev; 551 + cafe->mmio = pci_iomap(pdev, 0, 0); 552 + if (!cafe->mmio) { 553 + dev_warn(&pdev->dev, "failed to iomap\n"); 554 + err = -ENOMEM; 555 + goto out_free_mtd; 556 + } 557 + cafe->dmabuf = dma_alloc_coherent(&cafe->pdev->dev, 2112 + sizeof(struct nand_buffers), 558 + &cafe->dmaaddr, GFP_KERNEL); 559 + if (!cafe->dmabuf) { 560 + err = -ENOMEM; 561 + goto out_ior; 562 + } 563 + cafe->nand.buffers = (void *)cafe->dmabuf + 2112; 564 + 565 + cafe->nand.cmdfunc = cafe_nand_cmdfunc; 566 + cafe->nand.dev_ready = cafe_device_ready; 567 + cafe->nand.read_byte = cafe_read_byte; 568 + cafe->nand.read_buf = cafe_read_buf; 569 + cafe->nand.write_buf = cafe_write_buf; 570 + cafe->nand.select_chip = cafe_select_chip; 571 + 572 + cafe->nand.chip_delay = 0; 573 + 574 + /* Enable the following for a flash based bad block table */ 575 + cafe->nand.options = NAND_USE_FLASH_BBT | NAND_NO_AUTOINCR | NAND_OWN_BUFFERS; 576 + 577 + if (skipbbt) { 578 + cafe->nand.options |= NAND_SKIP_BBTSCAN; 579 + cafe->nand.block_bad = cafe_nand_block_bad; 580 + } 581 + 582 + /* Start off by resetting the NAND controller completely */ 583 + cafe_writel(cafe, 1, NAND_RESET); 584 + cafe_writel(cafe, 0, NAND_RESET); 585 + 586 + cafe_writel(cafe, 0xffffffff, NAND_IRQ_MASK); 587 + 588 + /* Timings from Marvell's 
test code (not verified or calculated by us) */ 589 + if (!slowtiming) { 590 + cafe_writel(cafe, 0x01010a0a, NAND_TIMING1); 591 + cafe_writel(cafe, 0x24121212, NAND_TIMING2); 592 + cafe_writel(cafe, 0x11000000, NAND_TIMING3); 593 + } else { 594 + cafe_writel(cafe, 0xffffffff, NAND_TIMING1); 595 + cafe_writel(cafe, 0xffffffff, NAND_TIMING2); 596 + cafe_writel(cafe, 0xffffffff, NAND_TIMING3); 597 + } 598 + cafe_writel(cafe, 0xffffffff, NAND_IRQ_MASK); 599 + err = request_irq(pdev->irq, &cafe_nand_interrupt, SA_SHIRQ, "CAFE NAND", mtd); 600 + if (err) { 601 + dev_warn(&pdev->dev, "Could not register IRQ %d\n", pdev->irq); 602 + 603 + goto out_free_dma; 604 + } 605 + #if 1 606 + /* Disable master reset, enable NAND clock */ 607 + ctrl = cafe_readl(cafe, GLOBAL_CTRL); 608 + ctrl &= 0xffffeff0; 609 + ctrl |= 0x00007000; 610 + cafe_writel(cafe, ctrl | 0x05, GLOBAL_CTRL); 611 + cafe_writel(cafe, ctrl | 0x0a, GLOBAL_CTRL); 612 + cafe_writel(cafe, 0, NAND_DMA_CTRL); 613 + 614 + cafe_writel(cafe, 0x7006, GLOBAL_CTRL); 615 + cafe_writel(cafe, 0x700a, GLOBAL_CTRL); 616 + 617 + /* Set up DMA address */ 618 + cafe_writel(cafe, cafe->dmaaddr & 0xffffffff, NAND_DMA_ADDR0); 619 + if (sizeof(cafe->dmaaddr) > 4) 620 + /* Shift in two parts to shut the compiler up */ 621 + cafe_writel(cafe, (cafe->dmaaddr >> 16) >> 16, NAND_DMA_ADDR1); 622 + else 623 + cafe_writel(cafe, 0, NAND_DMA_ADDR1); 624 + 625 + cafe_dev_dbg(&cafe->pdev->dev, "Set DMA address to %x (virt %p)\n", 626 + cafe_readl(cafe, NAND_DMA_ADDR0), cafe->dmabuf); 627 + 628 + /* Enable NAND IRQ in global IRQ mask register */ 629 + cafe_writel(cafe, 0x80000007, GLOBAL_IRQ_MASK); 630 + cafe_dev_dbg(&cafe->pdev->dev, "Control %x, IRQ mask %x\n", 631 + cafe_readl(cafe, GLOBAL_CTRL), cafe_readl(cafe, GLOBAL_IRQ_MASK)); 632 + #endif 633 + #if 1 634 + mtd->writesize=2048; 635 + mtd->oobsize = 0x40; 636 + memset(cafe->dmabuf, 0x5a, 2112); 637 + cafe->nand.cmdfunc(mtd, NAND_CMD_READID, 0, -1); 638 + cafe->nand.read_byte(mtd); 639 + cafe->nand.read_byte(mtd); 640 + cafe->nand.read_byte(mtd); 641 + cafe->nand.read_byte(mtd); 642 + cafe->nand.read_byte(mtd); 643 + #endif 644 + #if 0 645 + cafe->nand.cmdfunc(mtd, NAND_CMD_READ0, 0, 0); 646 + // nand_wait_ready(mtd); 647 + cafe->nand.read_byte(mtd); 648 + cafe->nand.read_byte(mtd); 649 + cafe->nand.read_byte(mtd); 650 + cafe->nand.read_byte(mtd); 651 + #endif 652 + #if 0 653 + writel(0x84600070, cafe->mmio); 654 + udelay(10); 655 + cafe_dev_dbg(&cafe->pdev->dev, "Status %x\n", cafe_readl(cafe, NAND_NONMEM)); 656 + #endif 657 + /* Scan to find existance of the device */ 658 + if (nand_scan_ident(mtd, 1)) { 659 + err = -ENXIO; 660 + goto out_irq; 661 + } 662 + 663 + cafe->ctl2 = 1<<27; /* Reed-Solomon ECC */ 664 + if (mtd->writesize == 2048) 665 + cafe->ctl2 |= 1<<29; /* 2KiB page size */ 666 + 667 + /* Set up ECC according to the type of chip we found */ 668 + if (mtd->writesize == 2048) { 669 + cafe->nand.ecc.layout = &cafe_oobinfo_2048; 670 + cafe->nand.bbt_td = &cafe_bbt_main_descr_2048; 671 + cafe->nand.bbt_md = &cafe_bbt_mirror_descr_2048; 672 + } else if (mtd->writesize == 512) { 673 + cafe->nand.ecc.layout = &cafe_oobinfo_512; 674 + cafe->nand.bbt_td = &cafe_bbt_main_descr_512; 675 + cafe->nand.bbt_md = &cafe_bbt_mirror_descr_512; 676 + } else { 677 + printk(KERN_WARNING "Unexpected NAND flash writesize %d. 
Aborting\n", 678 + mtd->writesize); 679 + goto out_irq; 680 + } 681 + cafe->nand.ecc.mode = NAND_ECC_HW_SYNDROME; 682 + cafe->nand.ecc.size = mtd->writesize; 683 + cafe->nand.ecc.bytes = 14; 684 + cafe->nand.ecc.hwctl = (void *)cafe_nand_bug; 685 + cafe->nand.ecc.calculate = (void *)cafe_nand_bug; 686 + cafe->nand.ecc.correct = (void *)cafe_nand_bug; 687 + cafe->nand.write_page = cafe_nand_write_page; 688 + cafe->nand.ecc.write_page = cafe_nand_write_page_lowlevel; 689 + cafe->nand.ecc.write_oob = cafe_nand_write_oob; 690 + cafe->nand.ecc.read_page = cafe_nand_read_page; 691 + cafe->nand.ecc.read_oob = cafe_nand_read_oob; 692 + 693 + err = nand_scan_tail(mtd); 694 + if (err) 695 + goto out_irq; 696 + 697 + pci_set_drvdata(pdev, mtd); 698 + add_mtd_device(mtd); 699 + goto out; 700 + 701 + out_irq: 702 + /* Disable NAND IRQ in global IRQ mask register */ 703 + cafe_writel(cafe, ~1 & cafe_readl(cafe, GLOBAL_IRQ_MASK), GLOBAL_IRQ_MASK); 704 + free_irq(pdev->irq, mtd); 705 + out_free_dma: 706 + dma_free_coherent(&cafe->pdev->dev, 2112, cafe->dmabuf, cafe->dmaaddr); 707 + out_ior: 708 + pci_iounmap(pdev, cafe->mmio); 709 + out_free_mtd: 710 + kfree(mtd); 711 + out: 712 + return err; 713 + } 714 + 715 + static void __devexit cafe_nand_remove(struct pci_dev *pdev) 716 + { 717 + struct mtd_info *mtd = pci_get_drvdata(pdev); 718 + struct cafe_priv *cafe = mtd->priv; 719 + 720 + del_mtd_device(mtd); 721 + /* Disable NAND IRQ in global IRQ mask register */ 722 + cafe_writel(cafe, ~1 & cafe_readl(cafe, GLOBAL_IRQ_MASK), GLOBAL_IRQ_MASK); 723 + free_irq(pdev->irq, mtd); 724 + nand_release(mtd); 725 + pci_iounmap(pdev, cafe->mmio); 726 + dma_free_coherent(&cafe->pdev->dev, 2112, cafe->dmabuf, cafe->dmaaddr); 727 + kfree(mtd); 728 + } 729 + 730 + static struct pci_device_id cafe_nand_tbl[] = { 731 + { 0x11ab, 0x4100, PCI_ANY_ID, PCI_ANY_ID, PCI_CLASS_MEMORY_FLASH << 8, 0xFFFF0 } 732 + }; 733 + 734 + MODULE_DEVICE_TABLE(pci, cafe_nand_tbl); 735 + 736 + static struct pci_driver cafe_nand_pci_driver = { 737 + .name = "CAFÉ NAND", 738 + .id_table = cafe_nand_tbl, 739 + .probe = cafe_nand_probe, 740 + .remove = __devexit_p(cafe_nand_remove), 741 + #ifdef CONFIG_PMx 742 + .suspend = cafe_nand_suspend, 743 + .resume = cafe_nand_resume, 744 + #endif 745 + }; 746 + 747 + static int cafe_nand_init(void) 748 + { 749 + return pci_register_driver(&cafe_nand_pci_driver); 750 + } 751 + 752 + static void cafe_nand_exit(void) 753 + { 754 + pci_unregister_driver(&cafe_nand_pci_driver); 755 + } 756 + module_init(cafe_nand_init); 757 + module_exit(cafe_nand_exit); 758 + 759 + MODULE_LICENSE("GPL"); 760 + MODULE_AUTHOR("David Woodhouse <dwmw2@infradead.org>"); 761 + MODULE_DESCRIPTION("NAND flash driver for OLPC CAFE chip"); 762 + 763 + /* Correct ECC for 2048 bytes of 0xff: 764 + 41 a0 71 65 54 27 f3 93 ec a9 be ed 0b a1 */ 765 + 766 + /* dwmw2's B-test board, in case of completely screwing it: 767 + Bad eraseblock 2394 at 0x12b40000 768 + Bad eraseblock 2627 at 0x14860000 769 + Bad eraseblock 3349 at 0x1a2a0000 770 + */
+1381
drivers/mtd/nand/cafe_ecc.c
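The correction code in this file works in GF(4096), built as a quadratic extension of GF(64): gf64_mul() is a fully unrolled bitwise multiply, and gf4096_mul() combines three GF(64) products Karatsuba-style ((al^ah)*(bl^bh), al*bl and ah*bh). As a readability aid, here is a loop-based sketch of what the unrolled gf64_mul() computes; the reduction polynomial x^6 + x + 1 is inferred from the XOR pattern of the unrolled code rather than stated anywhere in the patch, so treat it as an assumption, and the helper name gf64_mul_sketch() is hypothetical.

#include <stdint.h>

/*
 * Hypothetical, unoptimised equivalent of gf64_mul(): carry-less multiply of
 * two 6-bit operands, then reduction modulo x^6 + x + 1 (assumed polynomial).
 */
static uint16_t gf64_mul_sketch(uint16_t a, uint16_t b)
{
	uint32_t prod = 0;
	int i;

	/* polynomial (carry-less) multiplication over GF(2) */
	for (i = 0; i < 6; i++)
		if (b & (1u << i))
			prod ^= (uint32_t)a << i;

	/* fold bits 10..6 back down using x^6 == x + 1 */
	for (i = 10; i >= 6; i--)
		if (prod & (1u << i))
			prod ^= (1u << i) | (1u << (i - 5)) | (1u << (i - 6));

	return prod & 0x3f;
}

The unrolled version below produces the same six output bits with straight-line AND/XOR logic, which keeps the multiply cheap on the correction path; the gf64_inv[] table later in the file plays the corresponding role for inversion.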
··· 1 + /* Error correction for CAFÉ NAND controller 2 + * 3 + * © 2006 Marvell, Inc. 4 + * Author: Tom Chiou 5 + * 6 + * This program is free software; you can redistribute it and/or modify it 7 + * under the terms of the GNU General Public License as published by the Free 8 + * Software Foundation; either version 2 of the License, or (at your option) 9 + * any later version. 10 + * 11 + * This program is distributed in the hope that it will be useful, but WITHOUT 12 + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 13 + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 14 + * more details. 15 + * 16 + * You should have received a copy of the GNU General Public License along with 17 + * this program; if not, write to the Free Software Foundation, Inc., 59 18 + * Temple Place - Suite 330, Boston, MA 02111-1307, USA. 19 + */ 20 + 21 + #include <linux/kernel.h> 22 + #include <linux/module.h> 23 + #include <linux/errno.h> 24 + 25 + static unsigned short gf4096_mul(unsigned short, unsigned short); 26 + static unsigned short gf64_mul(unsigned short, unsigned short); 27 + static unsigned short gf4096_inv(unsigned short); 28 + static unsigned short err_pos(unsigned short); 29 + static void find_4bit_err_coefs(unsigned short, unsigned short, unsigned short, 30 + unsigned short, unsigned short, unsigned short, 31 + unsigned short, unsigned short, unsigned short *); 32 + static void zero_4x5_col3(unsigned short[4][5]); 33 + static void zero_4x5_col2(unsigned short[4][5]); 34 + static void zero_4x5_col1(unsigned short[4][5]); 35 + static void swap_4x5_rows(unsigned short[4][5], int, int, int); 36 + static void swap_2x3_rows(unsigned short m[2][3]); 37 + static void solve_4x5(unsigned short m[4][5], unsigned short *, int *); 38 + static void sort_coefs(int *, unsigned short *, int); 39 + static void find_4bit_err_pats(unsigned short, unsigned short, unsigned short, 40 + unsigned short, unsigned short, unsigned short, 41 + unsigned short, unsigned short, unsigned short *); 42 + static void find_3bit_err_coefs(unsigned short, unsigned short, unsigned short, 43 + unsigned short, unsigned short, unsigned short, 44 + unsigned short *); 45 + static void zero_3x4_col2(unsigned short[3][4]); 46 + static void zero_3x4_col1(unsigned short[3][4]); 47 + static void swap_3x4_rows(unsigned short[3][4], int, int, int); 48 + static void solve_3x4(unsigned short[3][4], unsigned short *, int *); 49 + static void find_3bit_err_pats(unsigned short, unsigned short, unsigned short, 50 + unsigned short, unsigned short, unsigned short, 51 + unsigned short *); 52 + 53 + static void find_2bit_err_pats(unsigned short, unsigned short, unsigned short, 54 + unsigned short, unsigned short *); 55 + static void find_2x2_soln(unsigned short, unsigned short, unsigned short, 56 + unsigned short, unsigned short, unsigned short, 57 + unsigned short *); 58 + static void solve_2x3(unsigned short[2][3], unsigned short *); 59 + static int chk_no_err_only(unsigned short *, unsigned short *); 60 + static int chk_1_err_only(unsigned short *, unsigned short *); 61 + static int chk_2_err_only(unsigned short *, unsigned short *); 62 + static int chk_3_err_only(unsigned short *, unsigned short *); 63 + static int chk_4_err_only(unsigned short *, unsigned short *); 64 + 65 + static unsigned short gf64_mul(unsigned short a, unsigned short b) 66 + { 67 + unsigned short tmp1, tmp2, tmp3, tmp4, tmp5; 68 + unsigned short c_bit0, c_bit1, c_bit2, c_bit3, c_bit4, c_bit5, c; 69 + 70 + tmp1 = ((a) ^ (a >> 5)); 
71 + tmp2 = ((a >> 4) ^ (a >> 5)); 72 + tmp3 = ((a >> 3) ^ (a >> 4)); 73 + tmp4 = ((a >> 2) ^ (a >> 3)); 74 + tmp5 = ((a >> 1) ^ (a >> 2)); 75 + 76 + c_bit0 = ((a & b) ^ ((a >> 5) & (b >> 1)) ^ ((a >> 4) & (b >> 2)) ^ 77 + ((a >> 3) & (b >> 3)) ^ ((a >> 2) & (b >> 4)) ^ ((a >> 1) & (b >> 5))) & 0x1; 78 + 79 + c_bit1 = (((a >> 1) & b) ^ (tmp1 & (b >> 1)) ^ (tmp2 & (b >> 2)) ^ 80 + (tmp3 & (b >> 3)) ^ (tmp4 & (b >> 4)) ^ (tmp5 & (b >> 5))) & 0x1; 81 + 82 + c_bit2 = (((a >> 2) & b) ^ ((a >> 1) & (b >> 1)) ^ (tmp1 & (b >> 2)) ^ 83 + (tmp2 & (b >> 3)) ^ (tmp3 & (b >> 4)) ^ (tmp4 & (b >> 5))) & 0x1; 84 + 85 + c_bit3 = (((a >> 3) & b) ^ ((a >> 2) & (b >> 1)) ^ ((a >> 1) & (b >> 2)) ^ 86 + (tmp1 & (b >> 3)) ^ (tmp2 & (b >> 4)) ^ (tmp3 & (b >> 5))) & 0x1; 87 + 88 + c_bit4 = (((a >> 4) & b) ^ ((a >> 3) & (b >> 1)) ^ ((a >> 2) & (b >> 2)) ^ 89 + ((a >> 1) & (b >> 3)) ^ (tmp1 & (b >> 4)) ^ (tmp2 & (b >> 5))) & 0x1; 90 + 91 + c_bit5 = (((a >> 5) & b) ^ ((a >> 4) & (b >> 1)) ^ ((a >> 3) & (b >> 2)) ^ 92 + ((a >> 2) & (b >> 3)) ^ ((a >> 1) & (b >> 4)) ^ (tmp1 & (b >> 5))) & 0x1; 93 + 94 + c = c_bit0 | (c_bit1 << 1) | (c_bit2 << 2) | (c_bit3 << 3) | (c_bit4 << 4) | (c_bit5 << 5); 95 + 96 + return c; 97 + } 98 + 99 + static unsigned short gf4096_mul(unsigned short a, unsigned short b) 100 + { 101 + unsigned short ah, al, bh, bl, alxah, blxbh, ablh, albl, ahbh, ahbhB, c; 102 + 103 + ah = (a >> 6) & 0x3f; 104 + al = a & 0x3f; 105 + bh = (b >> 6) & 0x3f; 106 + bl = b & 0x3f; 107 + alxah = al ^ ah; 108 + blxbh = bl ^ bh; 109 + 110 + ablh = gf64_mul(alxah, blxbh); 111 + albl = gf64_mul(al, bl); 112 + ahbh = gf64_mul(ah, bh); 113 + 114 + ahbhB = ((ahbh & 0x1) << 5) | 115 + ((ahbh & 0x20) >> 1) | 116 + ((ahbh & 0x10) >> 1) | ((ahbh & 0x8) >> 1) | ((ahbh & 0x4) >> 1) | (((ahbh >> 1) ^ ahbh) & 0x1); 117 + 118 + c = ((ablh ^ albl) << 6) | (ahbhB ^ albl); 119 + return c; 120 + } 121 + 122 + static void find_2bit_err_pats(unsigned short s0, unsigned short s1, unsigned short r0, unsigned short r1, unsigned short *pats) 123 + { 124 + find_2x2_soln(0x1, 0x1, r0, r1, s0, s1, pats); 125 + } 126 + 127 + static void find_3bit_err_coefs(unsigned short s0, unsigned short s1, 128 + unsigned short s2, unsigned short s3, unsigned short s4, unsigned short s5, unsigned short *coefs) 129 + { 130 + unsigned short m[3][4]; 131 + int row_order[3]; 132 + 133 + row_order[0] = 0; 134 + row_order[1] = 1; 135 + row_order[2] = 2; 136 + m[0][0] = s2; 137 + m[0][1] = s1; 138 + m[0][2] = s0; 139 + m[0][3] = s3; 140 + m[1][0] = s3; 141 + m[1][1] = s2; 142 + m[1][2] = s1; 143 + m[1][3] = s4; 144 + m[2][0] = s4; 145 + m[2][1] = s3; 146 + m[2][2] = s2; 147 + m[2][3] = s5; 148 + 149 + if (m[0][2] != 0x0) { 150 + zero_3x4_col2(m); 151 + } else if (m[1][2] != 0x0) { 152 + swap_3x4_rows(m, 0, 1, 4); 153 + zero_3x4_col2(m); 154 + } else if (m[2][2] != 0x0) { 155 + swap_3x4_rows(m, 0, 2, 4); 156 + zero_3x4_col2(m); 157 + } else { 158 + printk(KERN_ERR "Error: find_3bit_err_coefs, s0,s1,s2 all zeros!\n"); 159 + } 160 + 161 + if (m[1][1] != 0x0) { 162 + zero_3x4_col1(m); 163 + } else if (m[2][1] != 0x0) { 164 + swap_3x4_rows(m, 1, 2, 4); 165 + zero_3x4_col1(m); 166 + } else { 167 + printk(KERN_ERR "Error: find_3bit_err_coefs, cannot resolve col 1!\n"); 168 + } 169 + 170 + /* solve coefs */ 171 + solve_3x4(m, coefs, row_order); 172 + } 173 + 174 + static void zero_3x4_col2(unsigned short m[3][4]) 175 + { 176 + unsigned short minv1, minv2; 177 + 178 + minv1 = gf4096_mul(m[1][2], gf4096_inv(m[0][2])); 179 + minv2 = gf4096_mul(m[2][2], gf4096_inv(m[0][2])); 
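	/*
	 * The XOR-of-products updates below are one Gaussian-elimination step
	 * over GF(2^12): rows 1 and 2 get the pivot row (row 0), scaled by
	 * minv1/minv2, XORed in, which eliminates their column-2 terms.
	 * Addition and subtraction are both XOR in characteristic 2, and the
	 * now-redundant column-2 entries are simply never read again.
	 */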
180 + m[1][0] = m[1][0] ^ gf4096_mul(m[0][0], minv1); 181 + m[1][1] = m[1][1] ^ gf4096_mul(m[0][1], minv1); 182 + m[1][3] = m[1][3] ^ gf4096_mul(m[0][3], minv1); 183 + m[2][0] = m[2][0] ^ gf4096_mul(m[0][0], minv2); 184 + m[2][1] = m[2][1] ^ gf4096_mul(m[0][1], minv2); 185 + m[2][3] = m[2][3] ^ gf4096_mul(m[0][3], minv2); 186 + } 187 + 188 + static void zero_3x4_col1(unsigned short m[3][4]) 189 + { 190 + unsigned short minv; 191 + minv = gf4096_mul(m[2][1], gf4096_inv(m[1][1])); 192 + m[2][0] = m[2][0] ^ gf4096_mul(m[1][0], minv); 193 + m[2][3] = m[2][3] ^ gf4096_mul(m[1][3], minv); 194 + } 195 + 196 + static void swap_3x4_rows(unsigned short m[3][4], int i, int j, int col_width) 197 + { 198 + unsigned short tmp0; 199 + int cnt; 200 + for (cnt = 0; cnt < col_width; cnt++) { 201 + tmp0 = m[i][cnt]; 202 + m[i][cnt] = m[j][cnt]; 203 + m[j][cnt] = tmp0; 204 + } 205 + } 206 + 207 + static void solve_3x4(unsigned short m[3][4], unsigned short *coefs, int *row_order) 208 + { 209 + unsigned short tmp[3]; 210 + tmp[0] = gf4096_mul(m[2][3], gf4096_inv(m[2][0])); 211 + tmp[1] = gf4096_mul((gf4096_mul(tmp[0], m[1][0]) ^ m[1][3]), gf4096_inv(m[1][1])); 212 + tmp[2] = gf4096_mul((gf4096_mul(tmp[0], m[0][0]) ^ gf4096_mul(tmp[1], m[0][1]) ^ m[0][3]), gf4096_inv(m[0][2])); 213 + sort_coefs(row_order, tmp, 3); 214 + coefs[0] = tmp[0]; 215 + coefs[1] = tmp[1]; 216 + coefs[2] = tmp[2]; 217 + } 218 + 219 + static void find_3bit_err_pats(unsigned short s0, unsigned short s1, 220 + unsigned short s2, unsigned short r0, 221 + unsigned short r1, unsigned short r2, 222 + unsigned short *pats) 223 + { 224 + find_2x2_soln(r0 ^ r2, r1 ^ r2, 225 + gf4096_mul(r0, r0 ^ r2), gf4096_mul(r1, r1 ^ r2), 226 + gf4096_mul(s0, r2) ^ s1, gf4096_mul(s1, r2) ^ s2, pats); 227 + pats[2] = s0 ^ pats[0] ^ pats[1]; 228 + } 229 + 230 + static void find_4bit_err_coefs(unsigned short s0, unsigned short s1, 231 + unsigned short s2, unsigned short s3, 232 + unsigned short s4, unsigned short s5, 233 + unsigned short s6, unsigned short s7, 234 + unsigned short *coefs) 235 + { 236 + unsigned short m[4][5]; 237 + int row_order[4]; 238 + 239 + row_order[0] = 0; 240 + row_order[1] = 1; 241 + row_order[2] = 2; 242 + row_order[3] = 3; 243 + 244 + m[0][0] = s3; 245 + m[0][1] = s2; 246 + m[0][2] = s1; 247 + m[0][3] = s0; 248 + m[0][4] = s4; 249 + m[1][0] = s4; 250 + m[1][1] = s3; 251 + m[1][2] = s2; 252 + m[1][3] = s1; 253 + m[1][4] = s5; 254 + m[2][0] = s5; 255 + m[2][1] = s4; 256 + m[2][2] = s3; 257 + m[2][3] = s2; 258 + m[2][4] = s6; 259 + m[3][0] = s6; 260 + m[3][1] = s5; 261 + m[3][2] = s4; 262 + m[3][3] = s3; 263 + m[3][4] = s7; 264 + 265 + if (m[0][3] != 0x0) { 266 + zero_4x5_col3(m); 267 + } else if (m[1][3] != 0x0) { 268 + swap_4x5_rows(m, 0, 1, 5); 269 + zero_4x5_col3(m); 270 + } else if (m[2][3] != 0x0) { 271 + swap_4x5_rows(m, 0, 2, 5); 272 + zero_4x5_col3(m); 273 + } else if (m[3][3] != 0x0) { 274 + swap_4x5_rows(m, 0, 3, 5); 275 + zero_4x5_col3(m); 276 + } else { 277 + printk(KERN_ERR "Error: find_4bit_err_coefs, s0,s1,s2,s3 all zeros!\n"); 278 + } 279 + 280 + if (m[1][2] != 0x0) { 281 + zero_4x5_col2(m); 282 + } else if (m[2][2] != 0x0) { 283 + swap_4x5_rows(m, 1, 2, 5); 284 + zero_4x5_col2(m); 285 + } else if (m[3][2] != 0x0) { 286 + swap_4x5_rows(m, 1, 3, 5); 287 + zero_4x5_col2(m); 288 + } else { 289 + printk(KERN_ERR "Error: find_4bit_err_coefs, cannot resolve col 2!\n"); 290 + } 291 + 292 + if (m[2][1] != 0x0) { 293 + zero_4x5_col1(m); 294 + } else if (m[3][1] != 0x0) { 295 + swap_4x5_rows(m, 2, 3, 5); 296 + zero_4x5_col1(m); 297 + 
} else { 298 + printk(KERN_ERR "Error: find_4bit_err_coefs, cannot resolve col 1!\n"); 299 + } 300 + 301 + solve_4x5(m, coefs, row_order); 302 + } 303 + 304 + static void zero_4x5_col3(unsigned short m[4][5]) 305 + { 306 + unsigned short minv1, minv2, minv3; 307 + 308 + minv1 = gf4096_mul(m[1][3], gf4096_inv(m[0][3])); 309 + minv2 = gf4096_mul(m[2][3], gf4096_inv(m[0][3])); 310 + minv3 = gf4096_mul(m[3][3], gf4096_inv(m[0][3])); 311 + 312 + m[1][0] = m[1][0] ^ gf4096_mul(m[0][0], minv1); 313 + m[1][1] = m[1][1] ^ gf4096_mul(m[0][1], minv1); 314 + m[1][2] = m[1][2] ^ gf4096_mul(m[0][2], minv1); 315 + m[1][4] = m[1][4] ^ gf4096_mul(m[0][4], minv1); 316 + m[2][0] = m[2][0] ^ gf4096_mul(m[0][0], minv2); 317 + m[2][1] = m[2][1] ^ gf4096_mul(m[0][1], minv2); 318 + m[2][2] = m[2][2] ^ gf4096_mul(m[0][2], minv2); 319 + m[2][4] = m[2][4] ^ gf4096_mul(m[0][4], minv2); 320 + m[3][0] = m[3][0] ^ gf4096_mul(m[0][0], minv3); 321 + m[3][1] = m[3][1] ^ gf4096_mul(m[0][1], minv3); 322 + m[3][2] = m[3][2] ^ gf4096_mul(m[0][2], minv3); 323 + m[3][4] = m[3][4] ^ gf4096_mul(m[0][4], minv3); 324 + } 325 + 326 + static void zero_4x5_col2(unsigned short m[4][5]) 327 + { 328 + unsigned short minv2, minv3; 329 + 330 + minv2 = gf4096_mul(m[2][2], gf4096_inv(m[1][2])); 331 + minv3 = gf4096_mul(m[3][2], gf4096_inv(m[1][2])); 332 + 333 + m[2][0] = m[2][0] ^ gf4096_mul(m[1][0], minv2); 334 + m[2][1] = m[2][1] ^ gf4096_mul(m[1][1], minv2); 335 + m[2][4] = m[2][4] ^ gf4096_mul(m[1][4], minv2); 336 + m[3][0] = m[3][0] ^ gf4096_mul(m[1][0], minv3); 337 + m[3][1] = m[3][1] ^ gf4096_mul(m[1][1], minv3); 338 + m[3][4] = m[3][4] ^ gf4096_mul(m[1][4], minv3); 339 + } 340 + 341 + static void zero_4x5_col1(unsigned short m[4][5]) 342 + { 343 + unsigned short minv; 344 + 345 + minv = gf4096_mul(m[3][1], gf4096_inv(m[2][1])); 346 + 347 + m[3][0] = m[3][0] ^ gf4096_mul(m[2][0], minv); 348 + m[3][4] = m[3][4] ^ gf4096_mul(m[2][4], minv); 349 + } 350 + 351 + static void swap_4x5_rows(unsigned short m[4][5], int i, int j, int col_width) 352 + { 353 + unsigned short tmp0; 354 + int cnt; 355 + 356 + for (cnt = 0; cnt < col_width; cnt++) { 357 + tmp0 = m[i][cnt]; 358 + m[i][cnt] = m[j][cnt]; 359 + m[j][cnt] = tmp0; 360 + } 361 + } 362 + 363 + static void solve_4x5(unsigned short m[4][5], unsigned short *coefs, int *row_order) 364 + { 365 + unsigned short tmp[4]; 366 + 367 + tmp[0] = gf4096_mul(m[3][4], gf4096_inv(m[3][0])); 368 + tmp[1] = gf4096_mul((gf4096_mul(tmp[0], m[2][0]) ^ m[2][4]), gf4096_inv(m[2][1])); 369 + tmp[2] = gf4096_mul((gf4096_mul(tmp[0], m[1][0]) ^ gf4096_mul(tmp[1], m[1][1]) ^ m[1][4]), gf4096_inv(m[1][2])); 370 + tmp[3] = gf4096_mul((gf4096_mul(tmp[0], m[0][0]) ^ 371 + gf4096_mul(tmp[1], m[0][1]) ^ gf4096_mul(tmp[2], m[0][2]) ^ m[0][4]), gf4096_inv(m[0][3])); 372 + sort_coefs(row_order, tmp, 4); 373 + coefs[0] = tmp[0]; 374 + coefs[1] = tmp[1]; 375 + coefs[2] = tmp[2]; 376 + coefs[3] = tmp[3]; 377 + } 378 + 379 + static void sort_coefs(int *order, unsigned short *soln, int len) 380 + { 381 + int cnt, start_cnt, least_ord, least_cnt; 382 + unsigned short tmp0; 383 + for (start_cnt = 0; start_cnt < len; start_cnt++) { 384 + for (cnt = start_cnt; cnt < len; cnt++) { 385 + if (cnt == start_cnt) { 386 + least_ord = order[cnt]; 387 + least_cnt = start_cnt; 388 + } else { 389 + if (least_ord > order[cnt]) { 390 + least_ord = order[cnt]; 391 + least_cnt = cnt; 392 + } 393 + } 394 + } 395 + if (least_cnt != start_cnt) { 396 + tmp0 = order[least_cnt]; 397 + order[least_cnt] = order[start_cnt]; 398 + order[start_cnt] = tmp0; 399 
+ tmp0 = soln[least_cnt]; 400 + soln[least_cnt] = soln[start_cnt]; 401 + soln[start_cnt] = tmp0; 402 + } 403 + } 404 + } 405 + 406 + static void find_4bit_err_pats(unsigned short s0, unsigned short s1, 407 + unsigned short s2, unsigned short s3, 408 + unsigned short z1, unsigned short z2, 409 + unsigned short z3, unsigned short z4, 410 + unsigned short *pats) 411 + { 412 + unsigned short z4_z1, z3z4_z3z3, z4_z2, s0z4_s1, z1z4_z1z1, 413 + z4_z3, z2z4_z2z2, s1z4_s2, z3z3z4_z3z3z3, z1z1z4_z1z1z1, z2z2z4_z2z2z2, s2z4_s3; 414 + unsigned short tmp0, tmp1, tmp2, tmp3; 415 + 416 + z4_z1 = z4 ^ z1; 417 + z3z4_z3z3 = gf4096_mul(z3, z4) ^ gf4096_mul(z3, z3); 418 + z4_z2 = z4 ^ z2; 419 + s0z4_s1 = gf4096_mul(s0, z4) ^ s1; 420 + z1z4_z1z1 = gf4096_mul(z1, z4) ^ gf4096_mul(z1, z1); 421 + z4_z3 = z4 ^ z3; 422 + z2z4_z2z2 = gf4096_mul(z2, z4) ^ gf4096_mul(z2, z2); 423 + s1z4_s2 = gf4096_mul(s1, z4) ^ s2; 424 + z3z3z4_z3z3z3 = gf4096_mul(gf4096_mul(z3, z3), z4) ^ gf4096_mul(gf4096_mul(z3, z3), z3); 425 + z1z1z4_z1z1z1 = gf4096_mul(gf4096_mul(z1, z1), z4) ^ gf4096_mul(gf4096_mul(z1, z1), z1); 426 + z2z2z4_z2z2z2 = gf4096_mul(gf4096_mul(z2, z2), z4) ^ gf4096_mul(gf4096_mul(z2, z2), z2); 427 + s2z4_s3 = gf4096_mul(s2, z4) ^ s3; 428 + 429 + //find err pat 0,1 430 + find_2x2_soln(gf4096_mul(z4_z1, z3z4_z3z3) ^ 431 + gf4096_mul(z1z4_z1z1, z4_z3), gf4096_mul(z4_z2, 432 + z3z4_z3z3) ^ 433 + gf4096_mul(z2z4_z2z2, z4_z3), gf4096_mul(z1z4_z1z1, 434 + z3z3z4_z3z3z3) ^ 435 + gf4096_mul(z1z1z4_z1z1z1, z3z4_z3z3), 436 + gf4096_mul(z2z4_z2z2, 437 + z3z3z4_z3z3z3) ^ gf4096_mul(z2z2z4_z2z2z2, 438 + z3z4_z3z3), 439 + gf4096_mul(s0z4_s1, z3z4_z3z3) ^ gf4096_mul(s1z4_s2, 440 + z4_z3), 441 + gf4096_mul(s1z4_s2, z3z3z4_z3z3z3) ^ gf4096_mul(s2z4_s3, z3z4_z3z3), pats); 442 + tmp0 = pats[0]; 443 + tmp1 = pats[1]; 444 + tmp2 = pats[0] ^ pats[1] ^ s0; 445 + tmp3 = gf4096_mul(pats[0], z1) ^ gf4096_mul(pats[1], z2) ^ s1; 446 + 447 + //find err pat 2,3 448 + find_2x2_soln(0x1, 0x1, z3, z4, tmp2, tmp3, pats); 449 + pats[2] = pats[0]; 450 + pats[3] = pats[1]; 451 + pats[0] = tmp0; 452 + pats[1] = tmp1; 453 + } 454 + 455 + static void find_2x2_soln(unsigned short c00, unsigned short c01, 456 + unsigned short c10, unsigned short c11, 457 + unsigned short lval0, unsigned short lval1, 458 + unsigned short *soln) 459 + { 460 + unsigned short m[2][3]; 461 + m[0][0] = c00; 462 + m[0][1] = c01; 463 + m[0][2] = lval0; 464 + m[1][0] = c10; 465 + m[1][1] = c11; 466 + m[1][2] = lval1; 467 + 468 + if (m[0][1] != 0x0) { 469 + /* */ 470 + } else if (m[1][1] != 0x0) { 471 + swap_2x3_rows(m); 472 + } else { 473 + printk(KERN_ERR "Warning: find_2bit_err_coefs, s0,s1 all zeros!\n"); 474 + } 475 + 476 + solve_2x3(m, soln); 477 + } 478 + 479 + static void swap_2x3_rows(unsigned short m[2][3]) 480 + { 481 + unsigned short tmp0; 482 + int cnt; 483 + 484 + for (cnt = 0; cnt < 3; cnt++) { 485 + tmp0 = m[0][cnt]; 486 + m[0][cnt] = m[1][cnt]; 487 + m[1][cnt] = tmp0; 488 + } 489 + } 490 + 491 + static void solve_2x3(unsigned short m[2][3], unsigned short *coefs) 492 + { 493 + unsigned short minv; 494 + 495 + minv = gf4096_mul(m[1][1], gf4096_inv(m[0][1])); 496 + m[1][0] = m[1][0] ^ gf4096_mul(m[0][0], minv); 497 + m[1][2] = m[1][2] ^ gf4096_mul(m[0][2], minv); 498 + coefs[0] = gf4096_mul(m[1][2], gf4096_inv(m[1][0])); 499 + coefs[1] = gf4096_mul((gf4096_mul(coefs[0], m[0][0]) ^ m[0][2]), gf4096_inv(m[0][1])); 500 + } 501 + 502 + static unsigned char gf64_inv[64] = { 503 + 0, 1, 33, 62, 49, 43, 31, 44, 57, 37, 52, 28, 46, 40, 22, 25, 504 + 61, 54, 51, 39, 26, 35, 14, 
24, 23, 15, 20, 34, 11, 53, 45, 6, 505 + 63, 2, 27, 21, 56, 9, 50, 19, 13, 47, 48, 5, 7, 30, 12, 41, 506 + 42, 4, 38, 18, 10, 29, 17, 60, 36, 8, 59, 58, 55, 16, 3, 32 507 + }; 508 + 509 + static unsigned short gf4096_inv(unsigned short din) 510 + { 511 + unsigned short alahxal, ah2B, deno, inv, bl, bh; 512 + unsigned short ah, al, ahxal; 513 + unsigned short dout; 514 + 515 + ah = (din >> 6) & 0x3f; 516 + al = din & 0x3f; 517 + ahxal = ah ^ al; 518 + ah2B = (((ah ^ (ah >> 3)) & 0x1) << 5) | 519 + ((ah >> 1) & 0x10) | 520 + ((((ah >> 5) ^ (ah >> 2)) & 0x1) << 3) | 521 + ((ah >> 2) & 0x4) | ((((ah >> 4) ^ (ah >> 1)) & 0x1) << 1) | (ah & 0x1); 522 + alahxal = gf64_mul(ahxal, al); 523 + deno = alahxal ^ ah2B; 524 + inv = gf64_inv[deno]; 525 + bl = gf64_mul(inv, ahxal); 526 + bh = gf64_mul(inv, ah); 527 + dout = ((bh & 0x3f) << 6) | (bl & 0x3f); 528 + return (((bh & 0x3f) << 6) | (bl & 0x3f)); 529 + } 530 + 531 + static unsigned short err_pos_lut[4096] = { 532 + 0xfff, 0x000, 0x451, 0xfff, 0xfff, 0x3cf, 0xfff, 0x041, 533 + 0xfff, 0xfff, 0xfff, 0xfff, 0x28a, 0xfff, 0x492, 0xfff, 534 + 0x145, 0xfff, 0xfff, 0x514, 0xfff, 0x082, 0xfff, 0xfff, 535 + 0xfff, 0x249, 0x38e, 0x410, 0xfff, 0x104, 0x208, 0x1c7, 536 + 0xfff, 0xfff, 0xfff, 0xfff, 0x2cb, 0xfff, 0xfff, 0xfff, 537 + 0x0c3, 0x34d, 0x4d3, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 538 + 0xfff, 0xfff, 0xfff, 0x186, 0xfff, 0xfff, 0xfff, 0xfff, 539 + 0xfff, 0x30c, 0x555, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 540 + 0xfff, 0xfff, 0xfff, 0x166, 0xfff, 0xfff, 0xfff, 0xfff, 541 + 0x385, 0x14e, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x4e1, 542 + 0xfff, 0xfff, 0xfff, 0xfff, 0x538, 0xfff, 0x16d, 0xfff, 543 + 0xfff, 0xfff, 0x45b, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 544 + 0xfff, 0xfff, 0xfff, 0x29c, 0x2cc, 0x30b, 0x2b3, 0xfff, 545 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x0b3, 0xfff, 0x2f7, 546 + 0xfff, 0x32b, 0xfff, 0xfff, 0xfff, 0xfff, 0x0a7, 0xfff, 547 + 0xfff, 0x2da, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 548 + 0xfff, 0x07e, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 549 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x11c, 0xfff, 0xfff, 550 + 0xfff, 0xfff, 0xfff, 0x22f, 0xfff, 0x1f4, 0xfff, 0xfff, 551 + 0x2b0, 0x504, 0xfff, 0x114, 0xfff, 0xfff, 0xfff, 0x21d, 552 + 0xfff, 0xfff, 0xfff, 0xfff, 0x00d, 0x3c4, 0x340, 0x10f, 553 + 0xfff, 0xfff, 0x266, 0x02e, 0xfff, 0xfff, 0xfff, 0x4f8, 554 + 0x337, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 555 + 0xfff, 0xfff, 0xfff, 0x07b, 0x168, 0xfff, 0xfff, 0x0fe, 556 + 0xfff, 0xfff, 0x51a, 0xfff, 0x458, 0xfff, 0x36d, 0xfff, 557 + 0xfff, 0xfff, 0xfff, 0x073, 0x37d, 0x415, 0x550, 0xfff, 558 + 0xfff, 0xfff, 0x23b, 0x4b4, 0xfff, 0xfff, 0xfff, 0x1a1, 559 + 0xfff, 0xfff, 0x3aa, 0xfff, 0x117, 0x04d, 0x341, 0xfff, 560 + 0xfff, 0xfff, 0xfff, 0x518, 0x03e, 0x0f2, 0xfff, 0xfff, 561 + 0xfff, 0xfff, 0xfff, 0x363, 0xfff, 0x0b9, 0xfff, 0xfff, 562 + 0x241, 0xfff, 0xfff, 0x049, 0xfff, 0xfff, 0xfff, 0xfff, 563 + 0x15f, 0x52d, 0xfff, 0xfff, 0xfff, 0x29e, 0xfff, 0xfff, 564 + 0xfff, 0xfff, 0x4cf, 0x0fc, 0xfff, 0x36f, 0x3d3, 0xfff, 565 + 0x228, 0xfff, 0xfff, 0x45e, 0xfff, 0xfff, 0xfff, 0xfff, 566 + 0x238, 0xfff, 0xfff, 0xfff, 0xfff, 0x47f, 0xfff, 0xfff, 567 + 0x43a, 0x265, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x3e8, 568 + 0xfff, 0xfff, 0x01a, 0xfff, 0xfff, 0xfff, 0xfff, 0x21e, 569 + 0x1fc, 0x40b, 0xfff, 0xfff, 0xfff, 0x2d0, 0x159, 0xfff, 570 + 0xfff, 0x313, 0xfff, 0xfff, 0x05c, 0x4cc, 0xfff, 0xfff, 571 + 0x0f6, 0x3d5, 0xfff, 0xfff, 0xfff, 0x54f, 0xfff, 0xfff, 572 + 0xfff, 0x172, 0x1e4, 0x07c, 0xfff, 0xfff, 0xfff, 0xfff, 573 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 
0x53c, 0x1ad, 0x535, 574 + 0x19b, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 575 + 0xfff, 0xfff, 0x092, 0xfff, 0x2be, 0xfff, 0xfff, 0x482, 576 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x0e6, 0xfff, 0xfff, 577 + 0xfff, 0xfff, 0xfff, 0x476, 0xfff, 0x51d, 0xfff, 0xfff, 578 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 579 + 0xfff, 0xfff, 0x342, 0x2b5, 0x22e, 0x09a, 0xfff, 0x08d, 580 + 0x44f, 0x3ed, 0xfff, 0xfff, 0xfff, 0xfff, 0x3d1, 0xfff, 581 + 0xfff, 0x543, 0xfff, 0x48f, 0xfff, 0x3d2, 0xfff, 0x0d5, 582 + 0x113, 0x0ec, 0x427, 0xfff, 0xfff, 0xfff, 0x4c4, 0xfff, 583 + 0xfff, 0x50a, 0xfff, 0x144, 0xfff, 0x105, 0x39f, 0x294, 584 + 0x164, 0xfff, 0x31a, 0xfff, 0xfff, 0x49a, 0xfff, 0x130, 585 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 586 + 0x1be, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 587 + 0xfff, 0xfff, 0x49e, 0x371, 0xfff, 0xfff, 0xfff, 0xfff, 588 + 0xfff, 0xfff, 0xfff, 0xfff, 0x0e8, 0x49c, 0x0f4, 0xfff, 589 + 0x338, 0x1a7, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 590 + 0xfff, 0x36c, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 591 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 592 + 0xfff, 0x1ae, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 593 + 0xfff, 0x31b, 0xfff, 0xfff, 0x2dd, 0x522, 0xfff, 0xfff, 594 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x2f4, 595 + 0x3c6, 0x30d, 0xfff, 0xfff, 0xfff, 0xfff, 0x34c, 0x18f, 596 + 0x30a, 0xfff, 0x01f, 0x079, 0xfff, 0xfff, 0x54d, 0x46b, 597 + 0x28c, 0x37f, 0xfff, 0xfff, 0xfff, 0xfff, 0x355, 0xfff, 598 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x14f, 0xfff, 0xfff, 599 + 0xfff, 0xfff, 0xfff, 0x359, 0x3fe, 0x3c5, 0xfff, 0xfff, 600 + 0xfff, 0xfff, 0x423, 0xfff, 0xfff, 0x34a, 0x22c, 0xfff, 601 + 0x25a, 0xfff, 0xfff, 0x4ad, 0xfff, 0x28d, 0xfff, 0xfff, 602 + 0xfff, 0xfff, 0xfff, 0x547, 0xfff, 0xfff, 0xfff, 0xfff, 603 + 0x2e2, 0xfff, 0xfff, 0x1d5, 0xfff, 0x2a8, 0xfff, 0xfff, 604 + 0x03f, 0xfff, 0xfff, 0xfff, 0xfff, 0x3eb, 0x0fa, 0xfff, 605 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x55b, 0xfff, 606 + 0x08e, 0xfff, 0x3ae, 0xfff, 0x3a4, 0xfff, 0x282, 0x158, 607 + 0xfff, 0x382, 0xfff, 0xfff, 0x499, 0xfff, 0xfff, 0x08a, 608 + 0xfff, 0xfff, 0xfff, 0x456, 0x3be, 0xfff, 0x1e2, 0xfff, 609 + 0xfff, 0xfff, 0xfff, 0xfff, 0x559, 0xfff, 0x1a0, 0xfff, 610 + 0xfff, 0x0b4, 0xfff, 0xfff, 0xfff, 0x2df, 0xfff, 0xfff, 611 + 0xfff, 0x07f, 0x4f5, 0xfff, 0xfff, 0x27c, 0x133, 0x017, 612 + 0xfff, 0x3fd, 0xfff, 0xfff, 0xfff, 0x44d, 0x4cd, 0x17a, 613 + 0x0d7, 0x537, 0xfff, 0xfff, 0x353, 0xfff, 0xfff, 0x351, 614 + 0x366, 0xfff, 0x44a, 0xfff, 0x1a6, 0xfff, 0xfff, 0xfff, 615 + 0x291, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x1e3, 616 + 0xfff, 0xfff, 0xfff, 0xfff, 0x389, 0xfff, 0x07a, 0xfff, 617 + 0x1b6, 0x2ed, 0xfff, 0xfff, 0xfff, 0xfff, 0x24e, 0x074, 618 + 0xfff, 0xfff, 0x3dc, 0xfff, 0x4e3, 0xfff, 0xfff, 0xfff, 619 + 0xfff, 0x4eb, 0xfff, 0xfff, 0x3b8, 0x4de, 0xfff, 0x19c, 620 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x262, 621 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x076, 0x4e8, 0x3da, 622 + 0xfff, 0x531, 0xfff, 0xfff, 0x14a, 0xfff, 0x0a2, 0x433, 623 + 0x3df, 0x1e9, 0xfff, 0xfff, 0xfff, 0xfff, 0x3e7, 0x285, 624 + 0x2d8, 0xfff, 0xfff, 0xfff, 0x349, 0x18d, 0x098, 0xfff, 625 + 0x0df, 0x4bf, 0xfff, 0xfff, 0x0b2, 0xfff, 0x346, 0x24d, 626 + 0xfff, 0xfff, 0xfff, 0x24f, 0x4fa, 0x2f9, 0xfff, 0xfff, 627 + 0x3c9, 0xfff, 0x2b4, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 628 + 0xfff, 0x056, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 629 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 630 + 0xfff, 0x179, 0xfff, 0x0e9, 0x3f0, 0x33d, 0xfff, 0xfff, 
631 + 0xfff, 0xfff, 0xfff, 0x1fd, 0xfff, 0xfff, 0x526, 0xfff, 632 + 0xfff, 0xfff, 0x53d, 0xfff, 0xfff, 0xfff, 0x170, 0x331, 633 + 0xfff, 0x068, 0xfff, 0xfff, 0xfff, 0x3f7, 0xfff, 0x3d8, 634 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 635 + 0xfff, 0x09f, 0x556, 0xfff, 0xfff, 0x02d, 0xfff, 0xfff, 636 + 0x553, 0xfff, 0xfff, 0xfff, 0x1f0, 0xfff, 0xfff, 0x4d6, 637 + 0x41e, 0xfff, 0xfff, 0xfff, 0xfff, 0x4d5, 0xfff, 0xfff, 638 + 0xfff, 0xfff, 0xfff, 0x248, 0xfff, 0xfff, 0xfff, 0x0a3, 639 + 0xfff, 0x217, 0xfff, 0xfff, 0xfff, 0x4f1, 0x209, 0xfff, 640 + 0xfff, 0x475, 0x234, 0x52b, 0x398, 0xfff, 0x08b, 0xfff, 641 + 0xfff, 0xfff, 0xfff, 0x2c2, 0xfff, 0xfff, 0xfff, 0xfff, 642 + 0xfff, 0xfff, 0x268, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 643 + 0xfff, 0x4a3, 0xfff, 0x0aa, 0xfff, 0x1d9, 0xfff, 0xfff, 644 + 0xfff, 0xfff, 0x155, 0xfff, 0xfff, 0xfff, 0xfff, 0x0bf, 645 + 0x539, 0xfff, 0xfff, 0x2f1, 0x545, 0xfff, 0xfff, 0xfff, 646 + 0xfff, 0xfff, 0xfff, 0x2a7, 0x06f, 0xfff, 0x378, 0xfff, 647 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x25e, 0xfff, 648 + 0xfff, 0xfff, 0xfff, 0x15d, 0x02a, 0xfff, 0xfff, 0x0bc, 649 + 0x235, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 650 + 0x150, 0xfff, 0x1a9, 0xfff, 0xfff, 0xfff, 0xfff, 0x381, 651 + 0xfff, 0x04e, 0x270, 0x13f, 0xfff, 0xfff, 0x405, 0xfff, 652 + 0x3cd, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 653 + 0xfff, 0x2ef, 0xfff, 0x06a, 0xfff, 0xfff, 0xfff, 0x34f, 654 + 0x212, 0xfff, 0xfff, 0x0e2, 0xfff, 0x083, 0x298, 0xfff, 655 + 0xfff, 0xfff, 0x0c2, 0xfff, 0xfff, 0x52e, 0xfff, 0x488, 656 + 0xfff, 0xfff, 0xfff, 0x36b, 0xfff, 0xfff, 0xfff, 0x442, 657 + 0x091, 0xfff, 0x41c, 0xfff, 0xfff, 0x3a5, 0xfff, 0x4e6, 658 + 0xfff, 0xfff, 0x40d, 0x31d, 0xfff, 0xfff, 0xfff, 0x4c1, 659 + 0x053, 0xfff, 0x418, 0x13c, 0xfff, 0x350, 0xfff, 0x0ae, 660 + 0xfff, 0xfff, 0x41f, 0xfff, 0x470, 0xfff, 0x4ca, 0xfff, 661 + 0xfff, 0xfff, 0x02b, 0x450, 0xfff, 0x1f8, 0xfff, 0xfff, 662 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x293, 0xfff, 663 + 0xfff, 0xfff, 0xfff, 0x411, 0xfff, 0xfff, 0xfff, 0xfff, 664 + 0xfff, 0xfff, 0xfff, 0xfff, 0x0b8, 0xfff, 0xfff, 0xfff, 665 + 0x3e1, 0xfff, 0xfff, 0xfff, 0xfff, 0x43c, 0xfff, 0x2b2, 666 + 0x2ab, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x1ec, 667 + 0xfff, 0xfff, 0xfff, 0x3f8, 0x034, 0xfff, 0xfff, 0xfff, 668 + 0xfff, 0xfff, 0xfff, 0x11a, 0xfff, 0x541, 0x45c, 0x134, 669 + 0x1cc, 0xfff, 0xfff, 0xfff, 0x469, 0xfff, 0xfff, 0x44b, 670 + 0x161, 0xfff, 0xfff, 0xfff, 0x055, 0xfff, 0xfff, 0xfff, 671 + 0xfff, 0x307, 0xfff, 0xfff, 0xfff, 0xfff, 0x2d1, 0xfff, 672 + 0xfff, 0xfff, 0x124, 0x37b, 0x26b, 0x336, 0xfff, 0xfff, 673 + 0x2e4, 0x3cb, 0xfff, 0xfff, 0x0f8, 0x3c8, 0xfff, 0xfff, 674 + 0xfff, 0x461, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x4b5, 675 + 0x2cf, 0xfff, 0xfff, 0xfff, 0x20f, 0xfff, 0x35a, 0xfff, 676 + 0x490, 0xfff, 0x185, 0xfff, 0xfff, 0xfff, 0xfff, 0x42e, 677 + 0xfff, 0xfff, 0xfff, 0xfff, 0x54b, 0xfff, 0xfff, 0xfff, 678 + 0x146, 0xfff, 0x412, 0xfff, 0xfff, 0xfff, 0x1ff, 0xfff, 679 + 0xfff, 0x3e0, 0xfff, 0xfff, 0xfff, 0xfff, 0x2d5, 0xfff, 680 + 0x4df, 0x505, 0xfff, 0x413, 0xfff, 0x1a5, 0xfff, 0x3b2, 681 + 0xfff, 0xfff, 0xfff, 0x35b, 0xfff, 0x116, 0xfff, 0xfff, 682 + 0x171, 0x4d0, 0xfff, 0x154, 0x12d, 0xfff, 0xfff, 0xfff, 683 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x468, 0x4db, 0xfff, 684 + 0xfff, 0x1df, 0xfff, 0xfff, 0xfff, 0xfff, 0x05a, 0xfff, 685 + 0x0f1, 0x403, 0xfff, 0x22b, 0x2e0, 0xfff, 0xfff, 0xfff, 686 + 0x2b7, 0x373, 0xfff, 0xfff, 0xfff, 0xfff, 0x13e, 0xfff, 687 + 0xfff, 0xfff, 0x0d0, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 688 + 0x329, 0x1d2, 
0x3fa, 0x047, 0xfff, 0x2f2, 0xfff, 0xfff, 689 + 0x141, 0x0ac, 0x1d7, 0xfff, 0x07d, 0xfff, 0xfff, 0xfff, 690 + 0x1c1, 0xfff, 0x487, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 691 + 0xfff, 0xfff, 0xfff, 0x045, 0xfff, 0xfff, 0xfff, 0xfff, 692 + 0x288, 0x0cd, 0xfff, 0xfff, 0xfff, 0xfff, 0x226, 0x1d8, 693 + 0xfff, 0x153, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x4cb, 694 + 0x528, 0xfff, 0xfff, 0xfff, 0x20a, 0x343, 0x3a1, 0xfff, 695 + 0xfff, 0xfff, 0x2d7, 0x2d3, 0x1aa, 0x4c5, 0xfff, 0xfff, 696 + 0xfff, 0x42b, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 697 + 0xfff, 0xfff, 0xfff, 0xfff, 0x3e9, 0xfff, 0x20b, 0x260, 698 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x37c, 0x2fd, 699 + 0xfff, 0xfff, 0x2c8, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 700 + 0xfff, 0x31e, 0xfff, 0x335, 0xfff, 0xfff, 0xfff, 0xfff, 701 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 702 + 0xfff, 0xfff, 0x135, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 703 + 0xfff, 0xfff, 0x35c, 0x4dd, 0x129, 0xfff, 0xfff, 0xfff, 704 + 0xfff, 0xfff, 0x1ef, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 705 + 0xfff, 0x34e, 0xfff, 0xfff, 0xfff, 0xfff, 0x407, 0xfff, 706 + 0xfff, 0xfff, 0xfff, 0xfff, 0x3ad, 0xfff, 0xfff, 0xfff, 707 + 0x379, 0xfff, 0xfff, 0x1d0, 0x38d, 0xfff, 0xfff, 0x1e8, 708 + 0x184, 0x3c1, 0x1c4, 0xfff, 0x1f9, 0xfff, 0xfff, 0x424, 709 + 0xfff, 0xfff, 0xfff, 0xfff, 0x1d3, 0x0d4, 0xfff, 0x4e9, 710 + 0xfff, 0xfff, 0xfff, 0x530, 0x107, 0xfff, 0x106, 0x04f, 711 + 0xfff, 0xfff, 0x4c7, 0x503, 0xfff, 0xfff, 0xfff, 0xfff, 712 + 0xfff, 0x15c, 0xfff, 0x23f, 0xfff, 0xfff, 0xfff, 0xfff, 713 + 0xfff, 0xfff, 0xfff, 0xfff, 0x4f3, 0xfff, 0xfff, 0x3c7, 714 + 0xfff, 0x278, 0xfff, 0xfff, 0x0a6, 0xfff, 0xfff, 0xfff, 715 + 0x122, 0x1cf, 0xfff, 0x327, 0xfff, 0x2e5, 0xfff, 0x29d, 716 + 0xfff, 0xfff, 0x3f1, 0xfff, 0xfff, 0x48d, 0xfff, 0xfff, 717 + 0xfff, 0xfff, 0x054, 0xfff, 0xfff, 0xfff, 0xfff, 0x178, 718 + 0x27e, 0x4e0, 0x352, 0x02f, 0x09c, 0xfff, 0x2a0, 0xfff, 719 + 0xfff, 0x46a, 0x457, 0xfff, 0xfff, 0x501, 0xfff, 0x2ba, 720 + 0xfff, 0xfff, 0xfff, 0x54e, 0x2e7, 0xfff, 0xfff, 0xfff, 721 + 0xfff, 0xfff, 0x551, 0xfff, 0xfff, 0x1db, 0x2aa, 0xfff, 722 + 0xfff, 0x4bc, 0xfff, 0xfff, 0x395, 0xfff, 0x0de, 0xfff, 723 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x455, 0xfff, 0x17e, 724 + 0xfff, 0x221, 0x4a7, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 725 + 0x388, 0xfff, 0xfff, 0xfff, 0x308, 0xfff, 0xfff, 0xfff, 726 + 0x20e, 0x4b9, 0xfff, 0x273, 0x20c, 0x09e, 0xfff, 0x057, 727 + 0xfff, 0xfff, 0xfff, 0xfff, 0x3f2, 0xfff, 0x1a8, 0x3a6, 728 + 0x14c, 0xfff, 0xfff, 0x071, 0xfff, 0xfff, 0x53a, 0xfff, 729 + 0xfff, 0xfff, 0xfff, 0x109, 0xfff, 0xfff, 0x399, 0xfff, 730 + 0x061, 0x4f0, 0x39e, 0x244, 0xfff, 0x035, 0xfff, 0xfff, 731 + 0x305, 0x47e, 0x297, 0xfff, 0xfff, 0x2b8, 0xfff, 0xfff, 732 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x1bc, 0xfff, 0x2fc, 733 + 0xfff, 0xfff, 0x554, 0xfff, 0xfff, 0xfff, 0xfff, 0x3b6, 734 + 0xfff, 0xfff, 0xfff, 0x515, 0x397, 0xfff, 0xfff, 0x12f, 735 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x4e5, 736 + 0xfff, 0x4fc, 0xfff, 0xfff, 0x05e, 0xfff, 0xfff, 0xfff, 737 + 0xfff, 0xfff, 0x0a8, 0x3af, 0x015, 0xfff, 0xfff, 0xfff, 738 + 0xfff, 0x138, 0xfff, 0xfff, 0xfff, 0x540, 0xfff, 0xfff, 739 + 0xfff, 0x027, 0x523, 0x2f0, 0xfff, 0xfff, 0xfff, 0xfff, 740 + 0xfff, 0xfff, 0x16c, 0xfff, 0x27d, 0xfff, 0xfff, 0xfff, 741 + 0xfff, 0x04c, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x4dc, 742 + 0xfff, 0xfff, 0x059, 0x301, 0xfff, 0xfff, 0xfff, 0xfff, 743 + 0xfff, 0xfff, 0xfff, 0x1a3, 0xfff, 0x15a, 0xfff, 0xfff, 744 + 0x0a5, 0xfff, 0x435, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 745 + 0xfff, 0x051, 0xfff, 0xfff, 0x131, 
0xfff, 0x4f4, 0xfff, 746 + 0xfff, 0xfff, 0xfff, 0x441, 0xfff, 0x4fb, 0xfff, 0x03b, 747 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x1ed, 0x274, 748 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x0d3, 0x55e, 0x1b3, 749 + 0xfff, 0x0bd, 0xfff, 0xfff, 0xfff, 0xfff, 0x225, 0xfff, 750 + 0xfff, 0xfff, 0xfff, 0xfff, 0x4b7, 0xfff, 0xfff, 0x2ff, 751 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x4c3, 0xfff, 752 + 0x383, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x2f6, 753 + 0xfff, 0xfff, 0x1ee, 0xfff, 0x03d, 0xfff, 0xfff, 0xfff, 754 + 0xfff, 0xfff, 0x26f, 0x1dc, 0xfff, 0x0db, 0xfff, 0xfff, 755 + 0xfff, 0xfff, 0xfff, 0x0ce, 0xfff, 0xfff, 0x127, 0x03a, 756 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x311, 0xfff, 757 + 0xfff, 0x13d, 0x09d, 0x47b, 0x2a6, 0x50d, 0x510, 0x19a, 758 + 0xfff, 0x354, 0x414, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 759 + 0xfff, 0xfff, 0x44c, 0x3b0, 0xfff, 0x23d, 0x429, 0xfff, 760 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 761 + 0x4c0, 0x416, 0xfff, 0x05b, 0xfff, 0xfff, 0x137, 0xfff, 762 + 0x25f, 0x49f, 0xfff, 0x279, 0x013, 0xfff, 0xfff, 0xfff, 763 + 0x269, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 764 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x3d0, 0xfff, 0xfff, 765 + 0xfff, 0xfff, 0xfff, 0xfff, 0x077, 0xfff, 0xfff, 0x3fb, 766 + 0xfff, 0xfff, 0xfff, 0xfff, 0x271, 0x3a0, 0xfff, 0xfff, 767 + 0x40f, 0xfff, 0xfff, 0x3de, 0xfff, 0xfff, 0xfff, 0xfff, 768 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x1ab, 0x26a, 769 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x489, 0xfff, 0xfff, 770 + 0x252, 0xfff, 0xfff, 0xfff, 0xfff, 0x1b7, 0x42f, 0xfff, 771 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x3b7, 772 + 0xfff, 0x2bb, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 773 + 0xfff, 0xfff, 0xfff, 0x0f7, 0x01d, 0xfff, 0x067, 0xfff, 774 + 0xfff, 0xfff, 0xfff, 0x4e2, 0xfff, 0xfff, 0x4bb, 0xfff, 775 + 0xfff, 0xfff, 0x17b, 0xfff, 0x0ee, 0xfff, 0xfff, 0xfff, 776 + 0xfff, 0xfff, 0x36e, 0xfff, 0xfff, 0xfff, 0x533, 0xfff, 777 + 0xfff, 0xfff, 0x4d4, 0x356, 0xfff, 0xfff, 0x375, 0xfff, 778 + 0xfff, 0xfff, 0xfff, 0x4a4, 0x513, 0xfff, 0xfff, 0xfff, 779 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x4ff, 0xfff, 0x2af, 780 + 0xfff, 0xfff, 0x026, 0xfff, 0x0ad, 0xfff, 0xfff, 0xfff, 781 + 0xfff, 0x26e, 0xfff, 0xfff, 0xfff, 0xfff, 0x493, 0xfff, 782 + 0x463, 0x4d2, 0x4be, 0xfff, 0xfff, 0xfff, 0xfff, 0x4f2, 783 + 0x0b6, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 784 + 0xfff, 0x32d, 0x315, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 785 + 0xfff, 0x13a, 0x4a1, 0xfff, 0x27a, 0xfff, 0xfff, 0xfff, 786 + 0x47a, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 787 + 0x334, 0xfff, 0xfff, 0xfff, 0xfff, 0x54c, 0xfff, 0xfff, 788 + 0xfff, 0x0c9, 0x007, 0xfff, 0xfff, 0x12e, 0xfff, 0x0ff, 789 + 0xfff, 0xfff, 0x3f5, 0x509, 0xfff, 0xfff, 0xfff, 0xfff, 790 + 0x1c3, 0x2ad, 0xfff, 0xfff, 0x47c, 0x261, 0xfff, 0xfff, 791 + 0xfff, 0xfff, 0xfff, 0x152, 0xfff, 0xfff, 0xfff, 0x339, 792 + 0xfff, 0x243, 0x1c0, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 793 + 0x063, 0xfff, 0xfff, 0x254, 0xfff, 0xfff, 0x173, 0xfff, 794 + 0x0c7, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 795 + 0xfff, 0x362, 0x259, 0x485, 0x374, 0x0dc, 0x3ab, 0xfff, 796 + 0x1c5, 0x534, 0x544, 0xfff, 0xfff, 0x508, 0xfff, 0x402, 797 + 0x408, 0xfff, 0x0e7, 0xfff, 0xfff, 0x00a, 0x205, 0xfff, 798 + 0xfff, 0x2b9, 0xfff, 0xfff, 0xfff, 0x465, 0xfff, 0xfff, 799 + 0xfff, 0xfff, 0xfff, 0xfff, 0x23a, 0xfff, 0xfff, 0xfff, 800 + 0xfff, 0x147, 0x19d, 0x115, 0x214, 0xfff, 0x090, 0x368, 801 + 0xfff, 0x210, 0xfff, 0xfff, 0x280, 0x52a, 0x163, 0x148, 802 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x326, 0xfff, 0xfff, 
803 + 0xfff, 0xfff, 0xfff, 0x2de, 0xfff, 0xfff, 0xfff, 0xfff, 804 + 0x206, 0x2c1, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 805 + 0x189, 0xfff, 0xfff, 0xfff, 0xfff, 0x367, 0xfff, 0x1a4, 806 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x443, 0xfff, 0x27b, 807 + 0xfff, 0xfff, 0x251, 0x549, 0xfff, 0xfff, 0xfff, 0xfff, 808 + 0xfff, 0xfff, 0x188, 0x04b, 0xfff, 0xfff, 0xfff, 0x31f, 809 + 0x4a6, 0xfff, 0x246, 0x1de, 0x156, 0xfff, 0xfff, 0xfff, 810 + 0x3a9, 0xfff, 0xfff, 0xfff, 0x2fa, 0xfff, 0x128, 0x0d1, 811 + 0x449, 0x255, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 812 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 813 + 0xfff, 0xfff, 0xfff, 0xfff, 0x258, 0xfff, 0xfff, 0xfff, 814 + 0x532, 0xfff, 0xfff, 0xfff, 0x303, 0x517, 0xfff, 0xfff, 815 + 0x2a9, 0x24a, 0xfff, 0xfff, 0x231, 0xfff, 0xfff, 0xfff, 816 + 0xfff, 0xfff, 0x4b6, 0x516, 0xfff, 0xfff, 0x0e4, 0x0eb, 817 + 0xfff, 0x4e4, 0xfff, 0x275, 0xfff, 0xfff, 0x031, 0xfff, 818 + 0xfff, 0xfff, 0xfff, 0xfff, 0x025, 0x21a, 0xfff, 0x0cc, 819 + 0x45f, 0x3d9, 0x289, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 820 + 0xfff, 0xfff, 0x23e, 0xfff, 0xfff, 0xfff, 0x438, 0x097, 821 + 0x419, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 822 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 823 + 0xfff, 0xfff, 0x0a9, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 824 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 825 + 0x37e, 0x0e0, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x431, 826 + 0x372, 0xfff, 0xfff, 0xfff, 0x1ba, 0x06e, 0xfff, 0x1b1, 827 + 0xfff, 0xfff, 0x12a, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 828 + 0xfff, 0xfff, 0x193, 0xfff, 0xfff, 0xfff, 0xfff, 0x10a, 829 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x048, 0x1b4, 830 + 0xfff, 0xfff, 0xfff, 0xfff, 0x295, 0x140, 0x108, 0xfff, 831 + 0xfff, 0xfff, 0xfff, 0x16f, 0xfff, 0x0a4, 0x37a, 0xfff, 832 + 0x29a, 0xfff, 0x284, 0xfff, 0xfff, 0xfff, 0xfff, 0x4c6, 833 + 0x2a2, 0x3a3, 0xfff, 0x201, 0xfff, 0xfff, 0xfff, 0x4bd, 834 + 0x005, 0x54a, 0x3b5, 0x204, 0x2ee, 0x11d, 0x436, 0xfff, 835 + 0xfff, 0xfff, 0xfff, 0xfff, 0x3ec, 0xfff, 0xfff, 0xfff, 836 + 0xfff, 0xfff, 0xfff, 0xfff, 0x11f, 0x498, 0x21c, 0xfff, 837 + 0xfff, 0xfff, 0x3d6, 0xfff, 0x4ab, 0xfff, 0x432, 0x2eb, 838 + 0x542, 0x4fd, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 839 + 0xfff, 0xfff, 0xfff, 0x4ce, 0xfff, 0xfff, 0x2fb, 0xfff, 840 + 0xfff, 0x2e1, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 841 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x1b9, 0x037, 0x0dd, 842 + 0xfff, 0xfff, 0xfff, 0x2bf, 0x521, 0x496, 0x095, 0xfff, 843 + 0xfff, 0x328, 0x070, 0x1bf, 0xfff, 0x393, 0xfff, 0xfff, 844 + 0x102, 0xfff, 0xfff, 0x21b, 0xfff, 0x142, 0x263, 0x519, 845 + 0xfff, 0x2a5, 0x177, 0xfff, 0x14d, 0x471, 0x4ae, 0xfff, 846 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 847 + 0x1f6, 0xfff, 0x481, 0xfff, 0xfff, 0xfff, 0x151, 0xfff, 848 + 0xfff, 0xfff, 0x085, 0x33f, 0xfff, 0xfff, 0xfff, 0x084, 849 + 0xfff, 0xfff, 0xfff, 0x345, 0x3a2, 0xfff, 0xfff, 0x0a0, 850 + 0x0da, 0x024, 0xfff, 0xfff, 0xfff, 0x1bd, 0xfff, 0x55c, 851 + 0x467, 0x445, 0xfff, 0xfff, 0xfff, 0x052, 0xfff, 0xfff, 852 + 0xfff, 0xfff, 0x51e, 0xfff, 0xfff, 0x39d, 0xfff, 0x35f, 853 + 0xfff, 0x376, 0x3ee, 0xfff, 0xfff, 0xfff, 0xfff, 0x448, 854 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x16a, 855 + 0xfff, 0x036, 0x38f, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 856 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x211, 857 + 0xfff, 0xfff, 0xfff, 0x230, 0xfff, 0xfff, 0x3ba, 0xfff, 858 + 0xfff, 0xfff, 0x3ce, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 859 + 0xfff, 0xfff, 0xfff, 0x229, 0xfff, 0x176, 0xfff, 0xfff, 860 + 0xfff, 0xfff, 
0xfff, 0x00b, 0xfff, 0x162, 0x018, 0xfff, 861 + 0xfff, 0x233, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 862 + 0x400, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 863 + 0xfff, 0xfff, 0xfff, 0x12b, 0xfff, 0xfff, 0xfff, 0xfff, 864 + 0xfff, 0x3f4, 0xfff, 0x0f0, 0xfff, 0x1ac, 0xfff, 0xfff, 865 + 0x119, 0xfff, 0x2c0, 0xfff, 0xfff, 0xfff, 0x49b, 0xfff, 866 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x23c, 0xfff, 867 + 0x4b3, 0x010, 0x064, 0xfff, 0xfff, 0x4ba, 0xfff, 0xfff, 868 + 0xfff, 0xfff, 0xfff, 0x3c2, 0xfff, 0xfff, 0xfff, 0xfff, 869 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x006, 0x196, 0xfff, 870 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x100, 0x191, 0xfff, 871 + 0x1ea, 0x29f, 0xfff, 0xfff, 0xfff, 0x276, 0xfff, 0xfff, 872 + 0x2b1, 0x3b9, 0xfff, 0x03c, 0xfff, 0xfff, 0xfff, 0x180, 873 + 0xfff, 0x08f, 0xfff, 0xfff, 0x19e, 0x019, 0xfff, 0x0b0, 874 + 0x0fd, 0x332, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 875 + 0xfff, 0x06b, 0x2e8, 0xfff, 0x446, 0xfff, 0xfff, 0x004, 876 + 0x247, 0x197, 0xfff, 0x112, 0x169, 0x292, 0xfff, 0x302, 877 + 0xfff, 0xfff, 0x33b, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 878 + 0xfff, 0xfff, 0xfff, 0x287, 0x21f, 0xfff, 0x3ea, 0xfff, 879 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x4e7, 0xfff, 0xfff, 880 + 0xfff, 0xfff, 0xfff, 0x3a8, 0xfff, 0xfff, 0x2bc, 0xfff, 881 + 0x484, 0x296, 0xfff, 0x1c9, 0x08c, 0x1e5, 0x48a, 0xfff, 882 + 0x360, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 883 + 0x1ca, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 884 + 0xfff, 0xfff, 0xfff, 0x10d, 0xfff, 0xfff, 0xfff, 0xfff, 885 + 0xfff, 0xfff, 0x066, 0x2ea, 0x28b, 0x25b, 0xfff, 0x072, 886 + 0xfff, 0xfff, 0xfff, 0xfff, 0x2b6, 0xfff, 0xfff, 0x272, 887 + 0xfff, 0xfff, 0x525, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 888 + 0x2ca, 0xfff, 0xfff, 0xfff, 0x299, 0xfff, 0xfff, 0xfff, 889 + 0x558, 0x41a, 0xfff, 0x4f7, 0x557, 0xfff, 0x4a0, 0x344, 890 + 0x12c, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x125, 891 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 892 + 0x40e, 0xfff, 0xfff, 0x502, 0xfff, 0x103, 0x3e6, 0xfff, 893 + 0x527, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 894 + 0xfff, 0xfff, 0xfff, 0x45d, 0xfff, 0xfff, 0xfff, 0xfff, 895 + 0x44e, 0xfff, 0xfff, 0xfff, 0xfff, 0x0d2, 0x4c9, 0x35e, 896 + 0x459, 0x2d9, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x17d, 897 + 0x0c4, 0xfff, 0xfff, 0xfff, 0x3ac, 0x390, 0x094, 0xfff, 898 + 0x483, 0x0ab, 0xfff, 0x253, 0xfff, 0x391, 0xfff, 0xfff, 899 + 0xfff, 0xfff, 0x123, 0x0ef, 0xfff, 0xfff, 0xfff, 0x330, 900 + 0x38c, 0xfff, 0xfff, 0x2ae, 0xfff, 0xfff, 0xfff, 0x042, 901 + 0x012, 0x06d, 0xfff, 0xfff, 0xfff, 0x32a, 0x3db, 0x364, 902 + 0x2dc, 0xfff, 0x30f, 0x3d7, 0x4a5, 0x050, 0xfff, 0xfff, 903 + 0x029, 0xfff, 0xfff, 0xfff, 0xfff, 0x1d1, 0xfff, 0xfff, 904 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x480, 0xfff, 905 + 0x4ed, 0x081, 0x0a1, 0xfff, 0xfff, 0xfff, 0x30e, 0x52f, 906 + 0x257, 0xfff, 0xfff, 0x447, 0xfff, 0xfff, 0xfff, 0xfff, 907 + 0xfff, 0xfff, 0xfff, 0x401, 0x3cc, 0xfff, 0xfff, 0x0fb, 908 + 0x2c9, 0x42a, 0x314, 0x33e, 0x3bd, 0x318, 0xfff, 0x10e, 909 + 0x2a1, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x24c, 910 + 0x506, 0xfff, 0x267, 0xfff, 0xfff, 0x219, 0xfff, 0x1eb, 911 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 912 + 0x309, 0x3e2, 0x46c, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 913 + 0x384, 0xfff, 0xfff, 0xfff, 0xfff, 0x50c, 0xfff, 0x24b, 914 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x038, 915 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x194, 916 + 0x143, 0x3e3, 0xfff, 0xfff, 0xfff, 0x4c2, 0xfff, 0xfff, 917 + 0x0e1, 0x25c, 0xfff, 0x237, 0xfff, 
0x1fe, 0xfff, 0xfff, 918 + 0xfff, 0x065, 0x2a4, 0xfff, 0x386, 0x55a, 0x11b, 0xfff, 919 + 0xfff, 0x192, 0xfff, 0x183, 0x00e, 0xfff, 0xfff, 0xfff, 920 + 0xfff, 0xfff, 0xfff, 0x4b2, 0x18e, 0xfff, 0xfff, 0xfff, 921 + 0xfff, 0x486, 0x4ef, 0x0c6, 0x380, 0xfff, 0x4a8, 0xfff, 922 + 0x0c5, 0xfff, 0xfff, 0xfff, 0xfff, 0x093, 0x1b8, 0xfff, 923 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x2e6, 924 + 0xfff, 0x0f3, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 925 + 0x28e, 0xfff, 0x53b, 0x420, 0x22a, 0x33a, 0xfff, 0x387, 926 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x2a3, 0xfff, 0xfff, 927 + 0xfff, 0x428, 0x500, 0xfff, 0xfff, 0x120, 0x2c6, 0x290, 928 + 0x2f5, 0x0e3, 0xfff, 0x0b7, 0xfff, 0x319, 0x474, 0xfff, 929 + 0xfff, 0xfff, 0x529, 0x014, 0xfff, 0x41b, 0x40a, 0x18b, 930 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x0d9, 931 + 0xfff, 0x38a, 0xfff, 0xfff, 0xfff, 0xfff, 0x1ce, 0xfff, 932 + 0xfff, 0xfff, 0xfff, 0xfff, 0x3b1, 0xfff, 0xfff, 0x05d, 933 + 0x2c4, 0xfff, 0xfff, 0x4af, 0xfff, 0x030, 0xfff, 0xfff, 934 + 0x203, 0xfff, 0x277, 0x256, 0xfff, 0xfff, 0xfff, 0x4f9, 935 + 0xfff, 0x2c7, 0xfff, 0x466, 0x016, 0x1cd, 0xfff, 0x167, 936 + 0xfff, 0xfff, 0x0c8, 0xfff, 0x43d, 0xfff, 0xfff, 0x020, 937 + 0xfff, 0xfff, 0x232, 0x1cb, 0x1e0, 0xfff, 0xfff, 0x347, 938 + 0xfff, 0x478, 0xfff, 0x365, 0xfff, 0xfff, 0xfff, 0xfff, 939 + 0x358, 0xfff, 0x10b, 0xfff, 0x35d, 0xfff, 0xfff, 0xfff, 940 + 0xfff, 0xfff, 0x452, 0x22d, 0xfff, 0xfff, 0x47d, 0xfff, 941 + 0x2f3, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x460, 0xfff, 942 + 0xfff, 0xfff, 0x50b, 0xfff, 0xfff, 0xfff, 0x2ec, 0xfff, 943 + 0xfff, 0xfff, 0xfff, 0xfff, 0x4b1, 0x422, 0xfff, 0xfff, 944 + 0xfff, 0x2d4, 0xfff, 0x239, 0xfff, 0xfff, 0xfff, 0x439, 945 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 946 + 0xfff, 0x491, 0x075, 0xfff, 0xfff, 0xfff, 0x06c, 0xfff, 947 + 0xfff, 0x0f9, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 948 + 0xfff, 0x139, 0xfff, 0x4f6, 0xfff, 0xfff, 0x409, 0xfff, 949 + 0xfff, 0x15b, 0xfff, 0xfff, 0x348, 0xfff, 0xfff, 0xfff, 950 + 0xfff, 0x4a2, 0x49d, 0xfff, 0x033, 0x175, 0xfff, 0x039, 951 + 0xfff, 0x312, 0x40c, 0xfff, 0xfff, 0x325, 0xfff, 0xfff, 952 + 0xfff, 0xfff, 0xfff, 0xfff, 0x4aa, 0xfff, 0xfff, 0xfff, 953 + 0xfff, 0xfff, 0xfff, 0x165, 0x3bc, 0x48c, 0x310, 0x096, 954 + 0xfff, 0xfff, 0x250, 0x1a2, 0xfff, 0xfff, 0xfff, 0xfff, 955 + 0x20d, 0x2ac, 0xfff, 0xfff, 0x39b, 0xfff, 0x377, 0xfff, 956 + 0x512, 0x495, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 957 + 0xfff, 0xfff, 0xfff, 0xfff, 0x357, 0x4ea, 0xfff, 0xfff, 958 + 0xfff, 0xfff, 0x198, 0xfff, 0xfff, 0xfff, 0x434, 0x04a, 959 + 0xfff, 0xfff, 0xfff, 0xfff, 0x062, 0xfff, 0x1d6, 0x1c8, 960 + 0xfff, 0x1f3, 0x281, 0xfff, 0x462, 0xfff, 0xfff, 0xfff, 961 + 0x4b0, 0xfff, 0x207, 0xfff, 0xfff, 0xfff, 0xfff, 0x3dd, 962 + 0xfff, 0xfff, 0x55d, 0xfff, 0x552, 0x494, 0x1af, 0xfff, 963 + 0xfff, 0xfff, 0xfff, 0xfff, 0x227, 0xfff, 0xfff, 0x069, 964 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x43e, 965 + 0x0b5, 0xfff, 0x524, 0x2d2, 0xfff, 0xfff, 0xfff, 0x28f, 966 + 0xfff, 0x01b, 0x50e, 0xfff, 0xfff, 0x1bb, 0xfff, 0xfff, 967 + 0x41d, 0xfff, 0x32e, 0x48e, 0xfff, 0x1f7, 0x224, 0xfff, 968 + 0xfff, 0xfff, 0xfff, 0xfff, 0x394, 0xfff, 0xfff, 0xfff, 969 + 0xfff, 0x52c, 0xfff, 0xfff, 0xfff, 0x392, 0xfff, 0x1e7, 970 + 0xfff, 0xfff, 0x3f9, 0x3a7, 0xfff, 0x51f, 0xfff, 0x0bb, 971 + 0x118, 0x3ca, 0xfff, 0x1dd, 0xfff, 0x48b, 0xfff, 0xfff, 972 + 0xfff, 0xfff, 0x50f, 0xfff, 0x0d6, 0xfff, 0x1fa, 0xfff, 973 + 0x11e, 0xfff, 0xfff, 0xfff, 0xfff, 0x4d7, 0xfff, 0x078, 974 + 0x008, 0xfff, 0x25d, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 
975 + 0x032, 0x33c, 0xfff, 0x4d9, 0x160, 0xfff, 0xfff, 0x300, 976 + 0x0b1, 0xfff, 0x322, 0xfff, 0x4ec, 0xfff, 0xfff, 0x200, 977 + 0x00c, 0x369, 0x473, 0xfff, 0xfff, 0x32c, 0xfff, 0xfff, 978 + 0xfff, 0xfff, 0xfff, 0xfff, 0x53e, 0x3d4, 0x417, 0xfff, 979 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 980 + 0x34b, 0x001, 0x39a, 0x02c, 0xfff, 0xfff, 0x2ce, 0x00f, 981 + 0xfff, 0x0ba, 0xfff, 0xfff, 0xfff, 0xfff, 0x060, 0xfff, 982 + 0x406, 0xfff, 0xfff, 0xfff, 0x4ee, 0x4ac, 0xfff, 0x43f, 983 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x29b, 0xfff, 0xfff, 984 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x216, 985 + 0x190, 0xfff, 0x396, 0x464, 0xfff, 0xfff, 0x323, 0xfff, 986 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x2e9, 0xfff, 0x26d, 987 + 0x2cd, 0x040, 0xfff, 0xfff, 0xfff, 0xfff, 0x38b, 0x3c0, 988 + 0xfff, 0xfff, 0xfff, 0x1f2, 0xfff, 0x0ea, 0xfff, 0xfff, 989 + 0x472, 0xfff, 0x1fb, 0xfff, 0xfff, 0x0af, 0x27f, 0xfff, 990 + 0xfff, 0xfff, 0x479, 0x023, 0xfff, 0x0d8, 0x3b3, 0xfff, 991 + 0xfff, 0xfff, 0x121, 0xfff, 0xfff, 0x3bf, 0xfff, 0xfff, 992 + 0x16b, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 993 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 994 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 995 + 0x45a, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 996 + 0xfff, 0x0be, 0xfff, 0xfff, 0xfff, 0x111, 0xfff, 0x220, 997 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 998 + 0xfff, 0xfff, 0x09b, 0x218, 0xfff, 0x022, 0x202, 0xfff, 999 + 0x4c8, 0xfff, 0x0ed, 0xfff, 0xfff, 0x182, 0xfff, 0xfff, 1000 + 0xfff, 0x17f, 0x213, 0xfff, 0x321, 0x36a, 0xfff, 0x086, 1001 + 0xfff, 0xfff, 0xfff, 0x43b, 0x088, 0xfff, 0xfff, 0xfff, 1002 + 0xfff, 0x26c, 0xfff, 0x2f8, 0x3b4, 0xfff, 0xfff, 0xfff, 1003 + 0x132, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x333, 0x444, 1004 + 0x0c1, 0x4d8, 0x46d, 0x264, 0xfff, 0xfff, 0xfff, 0xfff, 1005 + 0x426, 0xfff, 0xfff, 0xfff, 0xfff, 0x2fe, 0xfff, 0xfff, 1006 + 0xfff, 0xfff, 0x011, 0xfff, 0x05f, 0xfff, 0xfff, 0xfff, 1007 + 0xfff, 0x10c, 0x101, 0xfff, 0xfff, 0xfff, 0xfff, 0x110, 1008 + 0xfff, 0x044, 0x304, 0x361, 0x404, 0xfff, 0x51b, 0x099, 1009 + 0xfff, 0x440, 0xfff, 0xfff, 0xfff, 0x222, 0xfff, 0xfff, 1010 + 0xfff, 0xfff, 0x1b5, 0xfff, 0x136, 0x430, 0xfff, 0x1da, 1011 + 0xfff, 0xfff, 0xfff, 0x043, 0xfff, 0x17c, 0xfff, 0xfff, 1012 + 0xfff, 0x01c, 0xfff, 0xfff, 0xfff, 0x425, 0x236, 0xfff, 1013 + 0x317, 0xfff, 0xfff, 0x437, 0x3fc, 0xfff, 0x1f1, 0xfff, 1014 + 0x324, 0xfff, 0xfff, 0x0ca, 0x306, 0xfff, 0x548, 0xfff, 1015 + 0x46e, 0xfff, 0xfff, 0xfff, 0x4b8, 0x1c2, 0x286, 0xfff, 1016 + 0xfff, 0x087, 0x18a, 0x19f, 0xfff, 0xfff, 0xfff, 0xfff, 1017 + 0x18c, 0xfff, 0x215, 0xfff, 0xfff, 0xfff, 0xfff, 0x283, 1018 + 0xfff, 0xfff, 0xfff, 0x126, 0xfff, 0xfff, 0x370, 0xfff, 1019 + 0x53f, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0x31c, 0xfff, 1020 + 0x4d1, 0xfff, 0xfff, 0xfff, 0x021, 0xfff, 0x157, 0xfff, 1021 + 0xfff, 0x028, 0x16e, 0xfff, 0x421, 0xfff, 0x1c6, 0xfff, 1022 + 0xfff, 0x511, 0xfff, 0xfff, 0x39c, 0x46f, 0x1b2, 0xfff, 1023 + 0xfff, 0x316, 0xfff, 0xfff, 0x009, 0xfff, 0xfff, 0x195, 1024 + 0xfff, 0x240, 0x546, 0xfff, 0xfff, 0x520, 0xfff, 0xfff, 1025 + 0xfff, 0xfff, 0xfff, 0xfff, 0x454, 0xfff, 0xfff, 0xfff, 1026 + 0x3f3, 0xfff, 0xfff, 0x187, 0xfff, 0x4a9, 0xfff, 0xfff, 1027 + 0xfff, 0xfff, 0xfff, 0xfff, 0x51c, 0x453, 0x1e6, 0xfff, 1028 + 0xfff, 0xfff, 0x1b0, 0xfff, 0x477, 0xfff, 0xfff, 0xfff, 1029 + 0x4fe, 0xfff, 0x32f, 0xfff, 0xfff, 0x15e, 0x1d4, 0xfff, 1030 + 0x0e5, 0xfff, 0xfff, 0xfff, 0x242, 0x14b, 0x046, 0xfff, 1031 + 0x3f6, 0x3bb, 0x3e4, 0xfff, 0xfff, 0x2e3, 
0xfff, 0x245, 1032 + 0xfff, 0x149, 0xfff, 0xfff, 0xfff, 0x2db, 0xfff, 0xfff, 1033 + 0x181, 0xfff, 0x089, 0x2c5, 0xfff, 0x1f5, 0xfff, 0x2d6, 1034 + 0x507, 0xfff, 0x42d, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 1035 + 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 1036 + 0x080, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 0xfff, 1037 + 0xfff, 0xfff, 0xfff, 0xfff, 0x3c3, 0x320, 0xfff, 0x1e1, 1038 + 0xfff, 0x0f5, 0x13b, 0xfff, 0xfff, 0xfff, 0x003, 0x4da, 1039 + 0xfff, 0xfff, 0xfff, 0x42c, 0xfff, 0xfff, 0x0cb, 0xfff, 1040 + 0x536, 0x2c3, 0xfff, 0xfff, 0xfff, 0xfff, 0x199, 0xfff, 1041 + 0xfff, 0x0c0, 0xfff, 0x01e, 0x497, 0xfff, 0xfff, 0x3e5, 1042 + 0xfff, 0xfff, 0xfff, 0x0cf, 0xfff, 0x2bd, 0xfff, 0x223, 1043 + 0xfff, 0x3ff, 0xfff, 0x058, 0x174, 0x3ef, 0xfff, 0x002 1044 + }; 1045 + 1046 + static unsigned short err_pos(unsigned short din) 1047 + { 1048 + BUG_ON(din > 4096); 1049 + return err_pos_lut[din]; 1050 + } 1051 + static int chk_no_err_only(unsigned short *chk_syndrome_list, unsigned short *err_info) 1052 + { 1053 + if ((chk_syndrome_list[0] | chk_syndrome_list[1] | 1054 + chk_syndrome_list[2] | chk_syndrome_list[3] | 1055 + chk_syndrome_list[4] | chk_syndrome_list[5] | 1056 + chk_syndrome_list[6] | chk_syndrome_list[7]) != 0x0) { 1057 + return -EINVAL; 1058 + } else { 1059 + err_info[0] = 0x0; 1060 + return 0; 1061 + } 1062 + } 1063 + static int chk_1_err_only(unsigned short *chk_syndrome_list, unsigned short *err_info) 1064 + { 1065 + unsigned short tmp0, tmp1, tmp2, tmp3, tmp4, tmp5, tmp6; 1066 + tmp0 = gf4096_mul(chk_syndrome_list[1], gf4096_inv(chk_syndrome_list[0])); 1067 + tmp1 = gf4096_mul(chk_syndrome_list[2], gf4096_inv(chk_syndrome_list[1])); 1068 + tmp2 = gf4096_mul(chk_syndrome_list[3], gf4096_inv(chk_syndrome_list[2])); 1069 + tmp3 = gf4096_mul(chk_syndrome_list[4], gf4096_inv(chk_syndrome_list[3])); 1070 + tmp4 = gf4096_mul(chk_syndrome_list[5], gf4096_inv(chk_syndrome_list[4])); 1071 + tmp5 = gf4096_mul(chk_syndrome_list[6], gf4096_inv(chk_syndrome_list[5])); 1072 + tmp6 = gf4096_mul(chk_syndrome_list[7], gf4096_inv(chk_syndrome_list[6])); 1073 + if ((tmp0 == tmp1) & (tmp1 == tmp2) & (tmp2 == tmp3) & (tmp3 == tmp4) & (tmp4 == tmp5) & (tmp5 == tmp6)) { 1074 + err_info[0] = 0x1; // encode 1-symbol error as 0x1 1075 + err_info[1] = err_pos(tmp0); 1076 + err_info[1] = (unsigned short)(0x55e - err_info[1]); 1077 + err_info[5] = chk_syndrome_list[0]; 1078 + return 0; 1079 + } else 1080 + return -EINVAL; 1081 + } 1082 + static int chk_2_err_only(unsigned short *chk_syndrome_list, unsigned short *err_info) 1083 + { 1084 + unsigned short tmp0, tmp1, tmp2, tmp3, tmp4, tmp5, tmp6, tmp7; 1085 + unsigned short coefs[4]; 1086 + unsigned short err_pats[4]; 1087 + int found_num_root = 0; 1088 + unsigned short bit2_root0, bit2_root1; 1089 + unsigned short bit2_root0_inv, bit2_root1_inv; 1090 + unsigned short err_loc_eqn, test_root; 1091 + unsigned short bit2_loc0, bit2_loc1; 1092 + unsigned short bit2_pat0, bit2_pat1; 1093 + 1094 + find_2x2_soln(chk_syndrome_list[1], 1095 + chk_syndrome_list[0], 1096 + chk_syndrome_list[2], chk_syndrome_list[1], chk_syndrome_list[2], chk_syndrome_list[3], coefs); 1097 + for (test_root = 0x1; test_root < 0xfff; test_root++) { 1098 + err_loc_eqn = 1099 + gf4096_mul(coefs[1], gf4096_mul(test_root, test_root)) ^ gf4096_mul(coefs[0], test_root) ^ 0x1; 1100 + if (err_loc_eqn == 0x0) { 1101 + if (found_num_root == 0) { 1102 + bit2_root0 = test_root; 1103 + found_num_root = 1; 1104 + } else if (found_num_root == 1) { 1105 + bit2_root1 = test_root; 1106 + 
found_num_root = 2; 1107 + break; 1108 + } 1109 + } 1110 + } 1111 + if (found_num_root != 2) 1112 + return -EINVAL; 1113 + else { 1114 + bit2_root0_inv = gf4096_inv(bit2_root0); 1115 + bit2_root1_inv = gf4096_inv(bit2_root1); 1116 + find_2bit_err_pats(chk_syndrome_list[0], 1117 + chk_syndrome_list[1], bit2_root0_inv, bit2_root1_inv, err_pats); 1118 + bit2_pat0 = err_pats[0]; 1119 + bit2_pat1 = err_pats[1]; 1120 + //for(x+1) 1121 + tmp0 = gf4096_mul(gf4096_mul(bit2_root0_inv, bit2_root0_inv), gf4096_mul(bit2_root0_inv, bit2_root0_inv)); //rinv0^4 1122 + tmp1 = gf4096_mul(bit2_root0_inv, tmp0); //rinv0^5 1123 + tmp2 = gf4096_mul(bit2_root0_inv, tmp1); //rinv0^6 1124 + tmp3 = gf4096_mul(bit2_root0_inv, tmp2); //rinv0^7 1125 + tmp4 = gf4096_mul(gf4096_mul(bit2_root1_inv, bit2_root1_inv), gf4096_mul(bit2_root1_inv, bit2_root1_inv)); //rinv1^4 1126 + tmp5 = gf4096_mul(bit2_root1_inv, tmp4); //rinv1^5 1127 + tmp6 = gf4096_mul(bit2_root1_inv, tmp5); //rinv1^6 1128 + tmp7 = gf4096_mul(bit2_root1_inv, tmp6); //rinv1^7 1129 + //check if only 2-bit error 1130 + if ((chk_syndrome_list[4] == 1131 + (gf4096_mul(bit2_pat0, tmp0) ^ 1132 + gf4096_mul(bit2_pat1, 1133 + tmp4))) & (chk_syndrome_list[5] == 1134 + (gf4096_mul(bit2_pat0, tmp1) ^ 1135 + gf4096_mul(bit2_pat1, 1136 + tmp5))) & 1137 + (chk_syndrome_list[6] == 1138 + (gf4096_mul(bit2_pat0, tmp2) ^ 1139 + gf4096_mul(bit2_pat1, 1140 + tmp6))) & (chk_syndrome_list[7] == 1141 + (gf4096_mul(bit2_pat0, tmp3) ^ gf4096_mul(bit2_pat1, tmp7)))) { 1142 + if ((err_pos(bit2_root0_inv) == 0xfff) | (err_pos(bit2_root1_inv) == 0xfff)) { 1143 + return -EINVAL; 1144 + } else { 1145 + bit2_loc0 = 0x55e - err_pos(bit2_root0_inv); 1146 + bit2_loc1 = 0x55e - err_pos(bit2_root1_inv); 1147 + err_info[0] = 0x2; // encode 2-symbol error as 0x2 1148 + err_info[1] = bit2_loc0; 1149 + err_info[2] = bit2_loc1; 1150 + err_info[5] = bit2_pat0; 1151 + err_info[6] = bit2_pat1; 1152 + return 0; 1153 + } 1154 + } else 1155 + return -EINVAL; 1156 + } 1157 + } 1158 + static int chk_3_err_only(unsigned short *chk_syndrome_list, unsigned short *err_info) 1159 + { 1160 + unsigned short tmp0, tmp1, tmp2, tmp3, tmp4, tmp5; 1161 + unsigned short coefs[4]; 1162 + unsigned short err_pats[4]; 1163 + int found_num_root = 0; 1164 + unsigned short bit3_root0, bit3_root1, bit3_root2; 1165 + unsigned short bit3_root0_inv, bit3_root1_inv, bit3_root2_inv; 1166 + unsigned short err_loc_eqn, test_root; 1167 + 1168 + find_3bit_err_coefs(chk_syndrome_list[0], chk_syndrome_list[1], 1169 + chk_syndrome_list[2], chk_syndrome_list[3], 1170 + chk_syndrome_list[4], chk_syndrome_list[5], coefs); 1171 + 1172 + for (test_root = 0x1; test_root < 0xfff; test_root++) { 1173 + err_loc_eqn = gf4096_mul(coefs[2], 1174 + gf4096_mul(gf4096_mul(test_root, test_root), 1175 + test_root)) ^ gf4096_mul(coefs[1], gf4096_mul(test_root, test_root)) 1176 + ^ gf4096_mul(coefs[0], test_root) ^ 0x1; 1177 + 1178 + if (err_loc_eqn == 0x0) { 1179 + if (found_num_root == 0) { 1180 + bit3_root0 = test_root; 1181 + found_num_root = 1; 1182 + } else if (found_num_root == 1) { 1183 + bit3_root1 = test_root; 1184 + found_num_root = 2; 1185 + } else if (found_num_root == 2) { 1186 + bit3_root2 = test_root; 1187 + found_num_root = 3; 1188 + break; 1189 + } 1190 + } 1191 + } 1192 + if (found_num_root != 3) 1193 + return -EINVAL; 1194 + else { 1195 + bit3_root0_inv = gf4096_inv(bit3_root0); 1196 + bit3_root1_inv = gf4096_inv(bit3_root1); 1197 + bit3_root2_inv = gf4096_inv(bit3_root2); 1198 + 1199 + find_3bit_err_pats(chk_syndrome_list[0], 
chk_syndrome_list[1], 1200 + chk_syndrome_list[2], bit3_root0_inv, 1201 + bit3_root1_inv, bit3_root2_inv, err_pats); 1202 + 1203 + //check if only 3-bit error 1204 + tmp0 = gf4096_mul(bit3_root0_inv, bit3_root0_inv); 1205 + tmp0 = gf4096_mul(tmp0, tmp0); 1206 + tmp0 = gf4096_mul(tmp0, bit3_root0_inv); 1207 + tmp0 = gf4096_mul(tmp0, bit3_root0_inv); //rinv0^6 1208 + tmp1 = gf4096_mul(tmp0, bit3_root0_inv); //rinv0^7 1209 + tmp2 = gf4096_mul(bit3_root1_inv, bit3_root1_inv); 1210 + tmp2 = gf4096_mul(tmp2, tmp2); 1211 + tmp2 = gf4096_mul(tmp2, bit3_root1_inv); 1212 + tmp2 = gf4096_mul(tmp2, bit3_root1_inv); //rinv1^6 1213 + tmp3 = gf4096_mul(tmp2, bit3_root1_inv); //rinv1^7 1214 + tmp4 = gf4096_mul(bit3_root2_inv, bit3_root2_inv); 1215 + tmp4 = gf4096_mul(tmp4, tmp4); 1216 + tmp4 = gf4096_mul(tmp4, bit3_root2_inv); 1217 + tmp4 = gf4096_mul(tmp4, bit3_root2_inv); //rinv2^6 1218 + tmp5 = gf4096_mul(tmp4, bit3_root2_inv); //rinv2^7 1219 + 1220 + //check if only 3 errors 1221 + if ((chk_syndrome_list[6] == (gf4096_mul(err_pats[0], tmp0) ^ 1222 + gf4096_mul(err_pats[1], tmp2) ^ 1223 + gf4096_mul(err_pats[2], tmp4))) & 1224 + (chk_syndrome_list[7] == (gf4096_mul(err_pats[0], tmp1) ^ 1225 + gf4096_mul(err_pats[1], tmp3) ^ gf4096_mul(err_pats[2], tmp5)))) { 1226 + if ((err_pos(bit3_root0_inv) == 0xfff) | 1227 + (err_pos(bit3_root1_inv) == 0xfff) | (err_pos(bit3_root2_inv) == 0xfff)) { 1228 + return -EINVAL; 1229 + } else { 1230 + err_info[0] = 0x3; 1231 + err_info[1] = (0x55e - err_pos(bit3_root0_inv)); 1232 + err_info[2] = (0x55e - err_pos(bit3_root1_inv)); 1233 + err_info[3] = (0x55e - err_pos(bit3_root2_inv)); 1234 + err_info[5] = err_pats[0]; 1235 + err_info[6] = err_pats[1]; 1236 + err_info[7] = err_pats[2]; 1237 + return 0; 1238 + } 1239 + } else 1240 + return -EINVAL; 1241 + } 1242 + } 1243 + static int chk_4_err_only(unsigned short *chk_syndrome_list, unsigned short *err_info) 1244 + { 1245 + unsigned short coefs[4]; 1246 + unsigned short err_pats[4]; 1247 + int found_num_root = 0; 1248 + unsigned short bit4_root0, bit4_root1, bit4_root2, bit4_root3; 1249 + unsigned short bit4_root0_inv, bit4_root1_inv, bit4_root2_inv, bit4_root3_inv; 1250 + unsigned short err_loc_eqn, test_root; 1251 + 1252 + find_4bit_err_coefs(chk_syndrome_list[0], 1253 + chk_syndrome_list[1], 1254 + chk_syndrome_list[2], 1255 + chk_syndrome_list[3], 1256 + chk_syndrome_list[4], 1257 + chk_syndrome_list[5], chk_syndrome_list[6], chk_syndrome_list[7], coefs); 1258 + 1259 + for (test_root = 0x1; test_root < 0xfff; test_root++) { 1260 + err_loc_eqn = 1261 + gf4096_mul(coefs[3], 1262 + gf4096_mul(gf4096_mul 1263 + (gf4096_mul(test_root, test_root), 1264 + test_root), 1265 + test_root)) ^ gf4096_mul(coefs[2], 1266 + gf4096_mul 1267 + (gf4096_mul(test_root, test_root), test_root)) 1268 + ^ gf4096_mul(coefs[1], gf4096_mul(test_root, test_root)) ^ gf4096_mul(coefs[0], test_root) 1269 + ^ 0x1; 1270 + if (err_loc_eqn == 0x0) { 1271 + if (found_num_root == 0) { 1272 + bit4_root0 = test_root; 1273 + found_num_root = 1; 1274 + } else if (found_num_root == 1) { 1275 + bit4_root1 = test_root; 1276 + found_num_root = 2; 1277 + } else if (found_num_root == 2) { 1278 + bit4_root2 = test_root; 1279 + found_num_root = 3; 1280 + } else { 1281 + found_num_root = 4; 1282 + bit4_root3 = test_root; 1283 + break; 1284 + } 1285 + } 1286 + } 1287 + if (found_num_root != 4) { 1288 + return -EINVAL; 1289 + } else { 1290 + bit4_root0_inv = gf4096_inv(bit4_root0); 1291 + bit4_root1_inv = gf4096_inv(bit4_root1); 1292 + bit4_root2_inv = 
gf4096_inv(bit4_root2); 1293 + bit4_root3_inv = gf4096_inv(bit4_root3); 1294 + find_4bit_err_pats(chk_syndrome_list[0], 1295 + chk_syndrome_list[1], 1296 + chk_syndrome_list[2], 1297 + chk_syndrome_list[3], 1298 + bit4_root0_inv, bit4_root1_inv, bit4_root2_inv, bit4_root3_inv, err_pats); 1299 + err_info[0] = 0x4; 1300 + err_info[1] = (0x55e - err_pos(bit4_root0_inv)); 1301 + err_info[2] = (0x55e - err_pos(bit4_root1_inv)); 1302 + err_info[3] = (0x55e - err_pos(bit4_root2_inv)); 1303 + err_info[4] = (0x55e - err_pos(bit4_root3_inv)); 1304 + err_info[5] = err_pats[0]; 1305 + err_info[6] = err_pats[1]; 1306 + err_info[7] = err_pats[2]; 1307 + err_info[8] = err_pats[3]; 1308 + return 0; 1309 + } 1310 + } 1311 + 1312 + void correct_12bit_symbol(unsigned char *buf, unsigned short sym, 1313 + unsigned short val) 1314 + { 1315 + if (unlikely(sym > 1366)) { 1316 + printk(KERN_ERR "Error: symbol %d out of range; cannot correct\n", sym); 1317 + } else if (sym == 0) { 1318 + buf[0] ^= val; 1319 + } else if (sym & 1) { 1320 + buf[1+(3*(sym-1))/2] ^= (val >> 4); 1321 + buf[2+(3*(sym-1))/2] ^= ((val & 0xf) << 4); 1322 + } else { 1323 + buf[2+(3*(sym-2))/2] ^= (val >> 8); 1324 + buf[3+(3*(sym-2))/2] ^= (val & 0xff); 1325 + } 1326 + } 1327 + 1328 + static int debugecc = 0; 1329 + module_param(debugecc, int, 0644); 1330 + 1331 + int cafe_correct_ecc(unsigned char *buf, 1332 + unsigned short *chk_syndrome_list) 1333 + { 1334 + unsigned short err_info[9]; 1335 + int i; 1336 + 1337 + if (debugecc) { 1338 + printk(KERN_WARNING "cafe_correct_ecc invoked. Syndromes %x %x %x %x %x %x %x %x\n", 1339 + chk_syndrome_list[0], chk_syndrome_list[1], 1340 + chk_syndrome_list[2], chk_syndrome_list[3], 1341 + chk_syndrome_list[4], chk_syndrome_list[5], 1342 + chk_syndrome_list[6], chk_syndrome_list[7]); 1343 + for (i=0; i < 2048; i+=16) { 1344 + printk(KERN_WARNING "D %04x: %02x %02x %02x %02x %02x %02x %02x %02x %02x %02x %02x %02x %02x %02x %02x %02x\n", 1345 + i, 1346 + buf[i], buf[i+1], buf[i+2], buf[i+3], 1347 + buf[i+4], buf[i+5], buf[i+6], buf[i+7], 1348 + buf[i+8], buf[i+9], buf[i+10], buf[i+11], 1349 + buf[i+12], buf[i+13], buf[i+14], buf[i+15]); 1350 + } 1351 + for ( ; i < 2112; i+=16) { 1352 + printk(KERN_WARNING "O %02x: %02x %02x %02x %02x %02x %02x %02x %02x %02x %02x %02x %02x %02x %02x %02x %02x\n", 1353 + i - 2048, 1354 + buf[i], buf[i+1], buf[i+2], buf[i+3], 1355 + buf[i+4], buf[i+5], buf[i+6], buf[i+7], 1356 + buf[i+8], buf[i+9], buf[i+10], buf[i+11], 1357 + buf[i+12], buf[i+13], buf[i+14], buf[i+15]); 1358 + } 1359 + } 1360 + 1361 + 1362 + 1363 + if (chk_no_err_only(chk_syndrome_list, err_info) && 1364 + chk_1_err_only(chk_syndrome_list, err_info) && 1365 + chk_2_err_only(chk_syndrome_list, err_info) && 1366 + chk_3_err_only(chk_syndrome_list, err_info) && 1367 + chk_4_err_only(chk_syndrome_list, err_info)) { 1368 + return -EIO; 1369 + } 1370 + 1371 + for (i=0; i < err_info[0]; i++) { 1372 + if (debugecc) 1373 + printk(KERN_WARNING "Correct symbol %d with 0x%03x\n", 1374 + err_info[1+i], err_info[5+i]); 1375 + 1376 + correct_12bit_symbol(buf, err_info[1+i], err_info[5+i]); 1377 + } 1378 + 1379 + return err_info[0]; 1380 + } 1381 +
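The decoder above tries the no-error, one-, two-, three- and four-symbol correctors in turn; the first whose syndrome check holds fills err_info[] (error count in [0], 12-bit symbol positions in [1]..[4], error patterns in [5]..[8]) and returns 0, otherwise cafe_correct_ecc() gives up with -EIO. A minimal sketch of a caller, assuming the eight syndrome words have already been read from the controller (the surrounding driver code is not part of this hunk):

    /* Illustrative only: syn[] is assumed to already hold the 8 syndromes. */
    static int cafe_fix_page(unsigned char *databuf, unsigned short syn[8])
    {
            int n = cafe_correct_ecc(databuf, syn);

            if (n < 0)              /* -EIO: more than four symbols in error */
                    return n;

            /* n (0..4) 12-bit symbols were corrected in place in databuf */
            return 0;
    }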
+2 -2
drivers/mtd/nand/cs553x_nand.c
··· 11 11 * published by the Free Software Foundation. 12 12 * 13 13 * Overview: 14 - * This is a device driver for the NAND flash controller found on 14 + * This is a device driver for the NAND flash controller found on 15 15 * the AMD CS5535/CS5536 companion chipsets for the Geode processor. 16 16 * 17 17 */ ··· 303 303 err = cs553x_init_one(i, !!(val & FLSH_MEM_IO), val & 0xFFFFFFFF); 304 304 } 305 305 306 - /* Register all devices together here. This means we can easily hack it to 306 + /* Register all devices together here. This means we can easily hack it to 307 307 do mtdconcat etc. if we want to. */ 308 308 for (i = 0; i < NR_CS553X_CONTROLLERS; i++) { 309 309 if (cs553x_mtd[i]) {
+1 -2
drivers/mtd/nand/diskonchip.c
··· 1635 1635 1636 1636 len = sizeof(struct mtd_info) + 1637 1637 sizeof(struct nand_chip) + sizeof(struct doc_priv) + (2 * sizeof(struct nand_bbt_descr)); 1638 - mtd = kmalloc(len, GFP_KERNEL); 1638 + mtd = kzalloc(len, GFP_KERNEL); 1639 1639 if (!mtd) { 1640 1640 printk(KERN_ERR "DiskOnChip kmalloc (%d bytes) failed!\n", len); 1641 1641 ret = -ENOMEM; 1642 1642 goto fail; 1643 1643 } 1644 - memset(mtd, 0, len); 1645 1644 1646 1645 nand = (struct nand_chip *) (mtd + 1); 1647 1646 doc = (struct doc_priv *) (nand + 1);
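The kmalloc()/memset() pair collapses into a single kzalloc(), which returns already-zeroed memory; the same conversion appears again in nand_bbt, nandsim, nftlcore and onenand/generic below. The pattern in general form (names here are generic, not the driver's own):

    /* before: allocate, then remember to clear */
    p = kmalloc(len, GFP_KERNEL);
    if (!p)
            return -ENOMEM;
    memset(p, 0, len);

    /* after: one call, memory arrives zeroed */
    p = kzalloc(len, GFP_KERNEL);
    if (!p)
            return -ENOMEM;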
+91 -42
drivers/mtd/nand/nand_base.c
··· 362 362 * access 363 363 */ 364 364 ofs += mtd->oobsize; 365 - chip->ops.len = 2; 365 + chip->ops.len = chip->ops.ooblen = 2; 366 366 chip->ops.datbuf = NULL; 367 367 chip->ops.oobbuf = buf; 368 368 chip->ops.ooboffs = chip->badblockpos & ~0x01; ··· 755 755 } 756 756 757 757 /** 758 - * nand_read_page_swecc - {REPLACABLE] software ecc based page read function 758 + * nand_read_page_swecc - [REPLACABLE] software ecc based page read function 759 759 * @mtd: mtd info structure 760 760 * @chip: nand chip info structure 761 761 * @buf: buffer to store read data ··· 795 795 } 796 796 797 797 /** 798 - * nand_read_page_hwecc - {REPLACABLE] hardware ecc based page read function 798 + * nand_read_page_hwecc - [REPLACABLE] hardware ecc based page read function 799 799 * @mtd: mtd info structure 800 800 * @chip: nand chip info structure 801 801 * @buf: buffer to store read data ··· 839 839 } 840 840 841 841 /** 842 - * nand_read_page_syndrome - {REPLACABLE] hardware ecc syndrom based page read 842 + * nand_read_page_syndrome - [REPLACABLE] hardware ecc syndrom based page read 843 843 * @mtd: mtd info structure 844 844 * @chip: nand chip info structure 845 845 * @buf: buffer to store read data ··· 897 897 * @chip: nand chip structure 898 898 * @oob: oob destination address 899 899 * @ops: oob ops structure 900 + * @len: size of oob to transfer 900 901 */ 901 902 static uint8_t *nand_transfer_oob(struct nand_chip *chip, uint8_t *oob, 902 - struct mtd_oob_ops *ops) 903 + struct mtd_oob_ops *ops, size_t len) 903 904 { 904 - size_t len = ops->ooblen; 905 - 906 905 switch(ops->mode) { 907 906 908 907 case MTD_OOB_PLACE: ··· 959 960 int sndcmd = 1; 960 961 int ret = 0; 961 962 uint32_t readlen = ops->len; 963 + uint32_t oobreadlen = ops->ooblen; 962 964 uint8_t *bufpoi, *oob, *buf; 963 965 964 966 stats = mtd->ecc_stats; ··· 971 971 page = realpage & chip->pagemask; 972 972 973 973 col = (int)(from & (mtd->writesize - 1)); 974 - chip->oob_poi = chip->buffers->oobrbuf; 975 974 976 975 buf = ops->datbuf; 977 976 oob = ops->oobbuf; ··· 1006 1007 1007 1008 if (unlikely(oob)) { 1008 1009 /* Raw mode does data:oob:data:oob */ 1009 - if (ops->mode != MTD_OOB_RAW) 1010 - oob = nand_transfer_oob(chip, oob, ops); 1011 - else 1012 - buf = nand_transfer_oob(chip, buf, ops); 1010 + if (ops->mode != MTD_OOB_RAW) { 1011 + int toread = min(oobreadlen, 1012 + chip->ecc.layout->oobavail); 1013 + if (toread) { 1014 + oob = nand_transfer_oob(chip, 1015 + oob, ops, toread); 1016 + oobreadlen -= toread; 1017 + } 1018 + } else 1019 + buf = nand_transfer_oob(chip, 1020 + buf, ops, mtd->oobsize); 1013 1021 } 1014 1022 1015 1023 if (!(chip->options & NAND_NO_READRDY)) { ··· 1063 1057 } 1064 1058 1065 1059 ops->retlen = ops->len - (size_t) readlen; 1060 + if (oob) 1061 + ops->oobretlen = ops->ooblen - oobreadlen; 1066 1062 1067 1063 if (ret) 1068 1064 return ret; ··· 1265 1257 int page, realpage, chipnr, sndcmd = 1; 1266 1258 struct nand_chip *chip = mtd->priv; 1267 1259 int blkcheck = (1 << (chip->phys_erase_shift - chip->page_shift)) - 1; 1268 - int readlen = ops->len; 1260 + int readlen = ops->ooblen; 1261 + int len; 1269 1262 uint8_t *buf = ops->oobbuf; 1270 1263 1271 1264 DEBUG(MTD_DEBUG_LEVEL3, "nand_read_oob: from = 0x%08Lx, len = %i\n", 1272 1265 (unsigned long long)from, readlen); 1266 + 1267 + if (ops->mode == MTD_OOB_RAW) 1268 + len = mtd->oobsize; 1269 + else 1270 + len = chip->ecc.layout->oobavail; 1273 1271 1274 1272 chipnr = (int)(from >> chip->chip_shift); 1275 1273 chip->select_chip(mtd, chipnr); ··· 1284 1270 
realpage = (int)(from >> chip->page_shift); 1285 1271 page = realpage & chip->pagemask; 1286 1272 1287 - chip->oob_poi = chip->buffers->oobrbuf; 1288 - 1289 1273 while(1) { 1290 1274 sndcmd = chip->ecc.read_oob(mtd, chip, page, sndcmd); 1291 - buf = nand_transfer_oob(chip, buf, ops); 1275 + 1276 + len = min(len, readlen); 1277 + buf = nand_transfer_oob(chip, buf, ops, len); 1292 1278 1293 1279 if (!(chip->options & NAND_NO_READRDY)) { 1294 1280 /* ··· 1303 1289 nand_wait_ready(mtd); 1304 1290 } 1305 1291 1306 - readlen -= ops->ooblen; 1292 + readlen -= len; 1307 1293 if (!readlen) 1308 1294 break; 1309 1295 ··· 1325 1311 sndcmd = 1; 1326 1312 } 1327 1313 1328 - ops->retlen = ops->len; 1314 + ops->oobretlen = ops->ooblen; 1329 1315 return 0; 1330 1316 } 1331 1317 ··· 1346 1332 ops->retlen = 0; 1347 1333 1348 1334 /* Do not allow reads past end of device */ 1349 - if ((from + ops->len) > mtd->size) { 1335 + if (ops->datbuf && (from + ops->len) > mtd->size) { 1350 1336 DEBUG(MTD_DEBUG_LEVEL0, "nand_read_oob: " 1351 1337 "Attempt read beyond end of device\n"); 1352 1338 return -EINVAL; ··· 1389 1375 } 1390 1376 1391 1377 /** 1392 - * nand_write_page_swecc - {REPLACABLE] software ecc based page write function 1378 + * nand_write_page_swecc - [REPLACABLE] software ecc based page write function 1393 1379 * @mtd: mtd info structure 1394 1380 * @chip: nand chip info structure 1395 1381 * @buf: data buffer ··· 1415 1401 } 1416 1402 1417 1403 /** 1418 - * nand_write_page_hwecc - {REPLACABLE] hardware ecc based page write function 1404 + * nand_write_page_hwecc - [REPLACABLE] hardware ecc based page write function 1419 1405 * @mtd: mtd info structure 1420 1406 * @chip: nand chip info structure 1421 1407 * @buf: data buffer ··· 1443 1429 } 1444 1430 1445 1431 /** 1446 - * nand_write_page_syndrome - {REPLACABLE] hardware ecc syndrom based page write 1432 + * nand_write_page_syndrome - [REPLACABLE] hardware ecc syndrom based page write 1447 1433 * @mtd: mtd info structure 1448 1434 * @chip: nand chip info structure 1449 1435 * @buf: data buffer ··· 1591 1577 return NULL; 1592 1578 } 1593 1579 1594 - #define NOTALIGNED(x) (x & (mtd->writesize-1)) != 0 1580 + #define NOTALIGNED(x) (x & (chip->subpagesize - 1)) != 0 1595 1581 1596 1582 /** 1597 1583 * nand_do_write_ops - [Internal] NAND write with ECC ··· 1604 1590 static int nand_do_write_ops(struct mtd_info *mtd, loff_t to, 1605 1591 struct mtd_oob_ops *ops) 1606 1592 { 1607 - int chipnr, realpage, page, blockmask; 1593 + int chipnr, realpage, page, blockmask, column; 1608 1594 struct nand_chip *chip = mtd->priv; 1609 1595 uint32_t writelen = ops->len; 1610 1596 uint8_t *oob = ops->oobbuf; 1611 1597 uint8_t *buf = ops->datbuf; 1612 - int bytes = mtd->writesize; 1613 - int ret; 1598 + int ret, subpage; 1614 1599 1615 1600 ops->retlen = 0; 1601 + if (!writelen) 1602 + return 0; 1616 1603 1617 1604 /* reject writes, which are not page aligned */ 1618 1605 if (NOTALIGNED(to) || NOTALIGNED(ops->len)) { ··· 1622 1607 return -EINVAL; 1623 1608 } 1624 1609 1625 - if (!writelen) 1626 - return 0; 1610 + column = to & (mtd->writesize - 1); 1611 + subpage = column || (writelen & (mtd->writesize - 1)); 1612 + 1613 + if (subpage && oob) 1614 + return -EINVAL; 1627 1615 1628 1616 chipnr = (int)(to >> chip->chip_shift); 1629 1617 chip->select_chip(mtd, chipnr); ··· 1644 1626 (chip->pagebuf << chip->page_shift) < (to + ops->len)) 1645 1627 chip->pagebuf = -1; 1646 1628 1647 - chip->oob_poi = chip->buffers->oobwbuf; 1629 + /* If we're not given explicit OOB data, let it be 
0xFF */ 1630 + if (likely(!oob)) 1631 + memset(chip->oob_poi, 0xff, mtd->oobsize); 1648 1632 1649 1633 while(1) { 1634 + int bytes = mtd->writesize; 1650 1635 int cached = writelen > bytes && page != blockmask; 1636 + uint8_t *wbuf = buf; 1637 + 1638 + /* Partial page write ? */ 1639 + if (unlikely(column || writelen < (mtd->writesize - 1))) { 1640 + cached = 0; 1641 + bytes = min_t(int, bytes - column, (int) writelen); 1642 + chip->pagebuf = -1; 1643 + memset(chip->buffers->databuf, 0xff, mtd->writesize); 1644 + memcpy(&chip->buffers->databuf[column], buf, bytes); 1645 + wbuf = chip->buffers->databuf; 1646 + } 1651 1647 1652 1648 if (unlikely(oob)) 1653 1649 oob = nand_fill_oob(chip, oob, ops); 1654 1650 1655 - ret = chip->write_page(mtd, chip, buf, page, cached, 1651 + ret = chip->write_page(mtd, chip, wbuf, page, cached, 1656 1652 (ops->mode == MTD_OOB_RAW)); 1657 1653 if (ret) 1658 1654 break; ··· 1675 1643 if (!writelen) 1676 1644 break; 1677 1645 1646 + column = 0; 1678 1647 buf += bytes; 1679 1648 realpage++; 1680 1649 ··· 1688 1655 } 1689 1656 } 1690 1657 1691 - if (unlikely(oob)) 1692 - memset(chip->oob_poi, 0xff, mtd->oobsize); 1693 - 1694 1658 ops->retlen = ops->len - writelen; 1659 + if (unlikely(oob)) 1660 + ops->oobretlen = ops->ooblen; 1695 1661 return ret; 1696 1662 } 1697 1663 ··· 1746 1714 struct nand_chip *chip = mtd->priv; 1747 1715 1748 1716 DEBUG(MTD_DEBUG_LEVEL3, "nand_write_oob: to = 0x%08x, len = %i\n", 1749 - (unsigned int)to, (int)ops->len); 1717 + (unsigned int)to, (int)ops->ooblen); 1750 1718 1751 1719 /* Do not allow write past end of page */ 1752 - if ((ops->ooboffs + ops->len) > mtd->oobsize) { 1720 + if ((ops->ooboffs + ops->ooblen) > mtd->oobsize) { 1753 1721 DEBUG(MTD_DEBUG_LEVEL0, "nand_write_oob: " 1754 1722 "Attempt to write past end of page\n"); 1755 1723 return -EINVAL; ··· 1777 1745 if (page == chip->pagebuf) 1778 1746 chip->pagebuf = -1; 1779 1747 1780 - chip->oob_poi = chip->buffers->oobwbuf; 1781 1748 memset(chip->oob_poi, 0xff, mtd->oobsize); 1782 1749 nand_fill_oob(chip, ops->oobbuf, ops); 1783 1750 status = chip->ecc.write_oob(mtd, chip, page & chip->pagemask); ··· 1785 1754 if (status) 1786 1755 return status; 1787 1756 1788 - ops->retlen = ops->len; 1757 + ops->oobretlen = ops->ooblen; 1789 1758 1790 1759 return 0; 1791 1760 } ··· 1805 1774 ops->retlen = 0; 1806 1775 1807 1776 /* Do not allow writes past end of device */ 1808 - if ((to + ops->len) > mtd->size) { 1777 + if (ops->datbuf && (to + ops->len) > mtd->size) { 1809 1778 DEBUG(MTD_DEBUG_LEVEL0, "nand_read_oob: " 1810 1779 "Attempt read beyond end of device\n"); 1811 1780 return -EINVAL; ··· 2219 2188 /* Newer devices have all the information in additional id bytes */ 2220 2189 if (!type->pagesize) { 2221 2190 int extid; 2222 - /* The 3rd id byte contains non relevant data ATM */ 2223 - extid = chip->read_byte(mtd); 2191 + /* The 3rd id byte holds MLC / multichip data */ 2192 + chip->cellinfo = chip->read_byte(mtd); 2224 2193 /* The 4th id byte is the important one */ 2225 2194 extid = chip->read_byte(mtd); 2226 2195 /* Calc pagesize */ ··· 2380 2349 if (!chip->buffers) 2381 2350 return -ENOMEM; 2382 2351 2383 - /* Preset the internal oob write buffer */ 2384 - memset(chip->buffers->oobwbuf, 0xff, mtd->oobsize); 2352 + /* Set the internal oob buffer location, just after the page data */ 2353 + chip->oob_poi = chip->buffers->databuf + mtd->writesize; 2385 2354 2386 2355 /* 2387 2356 * If no default placement scheme is given, select an appropriate one ··· 2499 2468 BUG(); 2500 2469 } 
2501 2470 chip->ecc.total = chip->ecc.steps * chip->ecc.bytes; 2471 + 2472 + /* 2473 + * Allow subpage writes up to ecc.steps. Not possible for MLC 2474 + * FLASH. 2475 + */ 2476 + if (!(chip->options & NAND_NO_SUBPAGE_WRITE) && 2477 + !(chip->cellinfo & NAND_CI_CELLTYPE_MSK)) { 2478 + switch(chip->ecc.steps) { 2479 + case 2: 2480 + mtd->subpage_sft = 1; 2481 + break; 2482 + case 4: 2483 + case 8: 2484 + mtd->subpage_sft = 2; 2485 + break; 2486 + } 2487 + } 2488 + chip->subpagesize = mtd->writesize >> mtd->subpage_sft; 2502 2489 2503 2490 /* Initialize state */ 2504 2491 chip->state = FL_READY;
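nand_base now accepts writes aligned to a subpage instead of a whole page: NOTALIGNED() tests against chip->subpagesize, and the shift is chosen from the number of ECC steps unless the device is MLC (detected via the third ID byte kept in chip->cellinfo) or sets NAND_NO_SUBPAGE_WRITE. A worked example for a common large-page layout (the numbers are illustrative):

    /* writesize = 2048, ecc.steps = 4  ->  mtd->subpage_sft = 2 */
    chip->subpagesize = mtd->writesize >> mtd->subpage_sft;   /* 2048 >> 2 = 512 */

    /*
     * A 512-byte write at a 512-aligned offset is now accepted by
     * nand_do_write_ops(): the data is padded to a full page with 0xff
     * in chip->buffers->databuf and programmed from there.  Combining a
     * subpage write with OOB data is rejected with -EINVAL.
     */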
+4 -7
drivers/mtd/nand/nand_bbt.c
··· 333 333 struct mtd_oob_ops ops; 334 334 int j, ret; 335 335 336 - ops.len = mtd->oobsize; 337 336 ops.ooblen = mtd->oobsize; 338 337 ops.oobbuf = buf; 339 338 ops.ooboffs = 0; ··· 675 676 "bad block table\n"); 676 677 } 677 678 /* Read oob data */ 678 - ops.len = (len >> this->page_shift) * mtd->oobsize; 679 + ops.ooblen = (len >> this->page_shift) * mtd->oobsize; 679 680 ops.oobbuf = &buf[len]; 680 681 res = mtd->read_oob(mtd, to + mtd->writesize, &ops); 681 - if (res < 0 || ops.retlen != ops.len) 682 + if (res < 0 || ops.oobretlen != ops.ooblen) 682 683 goto outerr; 683 684 684 685 /* Calc the byte offset in the buffer */ ··· 960 961 struct nand_bbt_descr *md = this->bbt_md; 961 962 962 963 len = mtd->size >> (this->bbt_erase_shift + 2); 963 - /* Allocate memory (2bit per block) */ 964 - this->bbt = kmalloc(len, GFP_KERNEL); 964 + /* Allocate memory (2bit per block) and clear the memory bad block table */ 965 + this->bbt = kzalloc(len, GFP_KERNEL); 965 966 if (!this->bbt) { 966 967 printk(KERN_ERR "nand_scan_bbt: Out of memory\n"); 967 968 return -ENOMEM; 968 969 } 969 - /* Clear the memory bad block table */ 970 - memset(this->bbt, 0x00, len); 971 970 972 971 /* If no primary table decriptor is given, scan the device 973 972 * to build a memory based bad block table
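Across this series the OOB byte counts move out of ops->len/ops->retlen and into the dedicated ops->ooblen/ops->oobretlen fields, leaving len/retlen for the data part of a transfer. A sketch of the resulting calling convention for an OOB-only read (buffer name and offset are illustrative):

    struct mtd_oob_ops ops;

    ops.mode    = MTD_OOB_PLACE;
    ops.ooblen  = mtd->oobsize;     /* OOB bytes requested */
    ops.oobbuf  = oobbuf;
    ops.ooboffs = 0;
    ops.datbuf  = NULL;             /* no data part: len/retlen stay unused */

    ret = mtd->read_oob(mtd, offs, &ops);
    /* on success, ops.oobretlen reports how many OOB bytes were transferred */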
+2 -2
drivers/mtd/nand/nand_ecc.c
··· 112 112 tmp2 |= (reg2 & 0x01) << 0; /* B7 -> B0 */ 113 113 114 114 /* Calculate final ECC code */ 115 - #ifdef CONFIG_NAND_ECC_SMC 115 + #ifdef CONFIG_MTD_NAND_ECC_SMC 116 116 ecc_code[0] = ~tmp2; 117 117 ecc_code[1] = ~tmp1; 118 118 #else ··· 148 148 { 149 149 uint8_t s0, s1, s2; 150 150 151 - #ifdef CONFIG_NAND_ECC_SMC 151 + #ifdef CONFIG_MTD_NAND_ECC_SMC 152 152 s0 = calc_ecc[0] ^ read_ecc[0]; 153 153 s1 = calc_ecc[1] ^ read_ecc[1]; 154 154 s2 = calc_ecc[2] ^ read_ecc[2];
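The #ifdef now matches the real Kconfig option, MTD_NAND_ECC_SMC; with the old CONFIG_NAND_ECC_SMC spelling the symbol was presumably never defined, so the Smart Media byte order could not be selected. Kconfig options appear in .config with the CONFIG_ prefix, so enabling

    CONFIG_MTD_NAND_ECC_SMC=y

now actually switches both the ECC calculation and the correction path above to the SMC byte ordering.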
+158 -85
drivers/mtd/nand/nandsim.c
··· 37 37 #include <linux/mtd/nand.h> 38 38 #include <linux/mtd/partitions.h> 39 39 #include <linux/delay.h> 40 - #ifdef CONFIG_NS_ABS_POS 41 - #include <asm/io.h> 42 - #endif 43 - 44 40 45 41 /* Default simulator parameters values */ 46 42 #if !defined(CONFIG_NANDSIM_FIRST_ID_BYTE) || \ ··· 160 164 /* After a command is input, the simulator goes to one of the following states */ 161 165 #define STATE_CMD_READ0 0x00000001 /* read data from the beginning of page */ 162 166 #define STATE_CMD_READ1 0x00000002 /* read data from the second half of page */ 163 - #define STATE_CMD_READSTART 0x00000003 /* read data second command (large page devices) */ 167 + #define STATE_CMD_READSTART 0x00000003 /* read data second command (large page devices) */ 164 168 #define STATE_CMD_PAGEPROG 0x00000004 /* start page programm */ 165 169 #define STATE_CMD_READOOB 0x00000005 /* read OOB area */ 166 170 #define STATE_CMD_ERASE1 0x00000006 /* sector erase first command */ ··· 227 231 #define NS_MAX_PREVSTATES 1 228 232 229 233 /* 234 + * A union to represent flash memory contents and flash buffer. 235 + */ 236 + union ns_mem { 237 + u_char *byte; /* for byte access */ 238 + uint16_t *word; /* for 16-bit word access */ 239 + }; 240 + 241 + /* 230 242 * The structure which describes all the internal simulator data. 231 243 */ 232 244 struct nandsim { ··· 251 247 uint16_t npstates; /* number of previous states saved */ 252 248 uint16_t stateidx; /* current state index */ 253 249 254 - /* The simulated NAND flash image */ 255 - union flash_media { 256 - u_char *byte; 257 - uint16_t *word; 258 - } mem; 250 + /* The simulated NAND flash pages array */ 251 + union ns_mem *pages; 259 252 260 253 /* Internal buffer of page + OOB size bytes */ 261 - union internal_buffer { 262 - u_char *byte; /* for byte access */ 263 - uint16_t *word; /* for 16-bit word access */ 264 - } buf; 254 + union ns_mem buf; 265 255 266 256 /* NAND flash "geometry" */ 267 257 struct nandsin_geometry { ··· 344 346 static u_char ns_verify_buf[NS_LARGEST_PAGE_SIZE]; 345 347 346 348 /* 349 + * Allocate array of page pointers and initialize the array to NULL 350 + * pointers. 351 + * 352 + * RETURNS: 0 if success, -ENOMEM if memory alloc fails. 353 + */ 354 + static int alloc_device(struct nandsim *ns) 355 + { 356 + int i; 357 + 358 + ns->pages = vmalloc(ns->geom.pgnum * sizeof(union ns_mem)); 359 + if (!ns->pages) { 360 + NS_ERR("alloc_map: unable to allocate page array\n"); 361 + return -ENOMEM; 362 + } 363 + for (i = 0; i < ns->geom.pgnum; i++) { 364 + ns->pages[i].byte = NULL; 365 + } 366 + 367 + return 0; 368 + } 369 + 370 + /* 371 + * Free any allocated pages, and free the array of page pointers. 372 + */ 373 + static void free_device(struct nandsim *ns) 374 + { 375 + int i; 376 + 377 + if (ns->pages) { 378 + for (i = 0; i < ns->geom.pgnum; i++) { 379 + if (ns->pages[i].byte) 380 + kfree(ns->pages[i].byte); 381 + } 382 + vfree(ns->pages); 383 + } 384 + } 385 + 386 + /* 347 387 * Initialize the nandsim structure. 348 388 * 349 389 * RETURNS: 0 if success, -ERRNO if failure. 
350 390 */ 351 - static int 352 - init_nandsim(struct mtd_info *mtd) 391 + static int init_nandsim(struct mtd_info *mtd) 353 392 { 354 393 struct nand_chip *chip = (struct nand_chip *)mtd->priv; 355 394 struct nandsim *ns = (struct nandsim *)(chip->priv); ··· 440 405 } 441 406 } else { 442 407 if (ns->geom.totsz <= (128 << 20)) { 443 - ns->geom.pgaddrbytes = 5; 408 + ns->geom.pgaddrbytes = 4; 444 409 ns->geom.secaddrbytes = 2; 445 410 } else { 446 411 ns->geom.pgaddrbytes = 5; ··· 474 439 printk("sector address bytes: %u\n", ns->geom.secaddrbytes); 475 440 printk("options: %#x\n", ns->options); 476 441 477 - /* Map / allocate and initialize the flash image */ 478 - #ifdef CONFIG_NS_ABS_POS 479 - ns->mem.byte = ioremap(CONFIG_NS_ABS_POS, ns->geom.totszoob); 480 - if (!ns->mem.byte) { 481 - NS_ERR("init_nandsim: failed to map the NAND flash image at address %p\n", 482 - (void *)CONFIG_NS_ABS_POS); 483 - return -ENOMEM; 484 - } 485 - #else 486 - ns->mem.byte = vmalloc(ns->geom.totszoob); 487 - if (!ns->mem.byte) { 488 - NS_ERR("init_nandsim: unable to allocate %u bytes for flash image\n", 489 - ns->geom.totszoob); 490 - return -ENOMEM; 491 - } 492 - memset(ns->mem.byte, 0xFF, ns->geom.totszoob); 493 - #endif 442 + if (alloc_device(ns) != 0) 443 + goto error; 494 444 495 445 /* Allocate / initialize the internal buffer */ 496 446 ns->buf.byte = kmalloc(ns->geom.pgszoob, GFP_KERNEL); ··· 494 474 return 0; 495 475 496 476 error: 497 - #ifdef CONFIG_NS_ABS_POS 498 - iounmap(ns->mem.byte); 499 - #else 500 - vfree(ns->mem.byte); 501 - #endif 477 + free_device(ns); 502 478 503 479 return -ENOMEM; 504 480 } ··· 502 486 /* 503 487 * Free the nandsim structure. 504 488 */ 505 - static void 506 - free_nandsim(struct nandsim *ns) 489 + static void free_nandsim(struct nandsim *ns) 507 490 { 508 491 kfree(ns->buf.byte); 509 - 510 - #ifdef CONFIG_NS_ABS_POS 511 - iounmap(ns->mem.byte); 512 - #else 513 - vfree(ns->mem.byte); 514 - #endif 492 + free_device(ns); 515 493 516 494 return; 517 495 } ··· 513 503 /* 514 504 * Returns the string representation of 'state' state. 515 505 */ 516 - static char * 517 - get_state_name(uint32_t state) 506 + static char *get_state_name(uint32_t state) 518 507 { 519 508 switch (NS_STATE(state)) { 520 509 case STATE_CMD_READ0: ··· 571 562 * 572 563 * RETURNS: 1 if wrong command, 0 if right. 573 564 */ 574 - static int 575 - check_command(int cmd) 565 + static int check_command(int cmd) 576 566 { 577 567 switch (cmd) { 578 568 ··· 597 589 /* 598 590 * Returns state after command is accepted by command number. 599 591 */ 600 - static uint32_t 601 - get_state_by_command(unsigned command) 592 + static uint32_t get_state_by_command(unsigned command) 602 593 { 603 594 switch (command) { 604 595 case NAND_CMD_READ0: ··· 633 626 /* 634 627 * Move an address byte to the correspondent internal register. 635 628 */ 636 - static inline void 637 - accept_addr_byte(struct nandsim *ns, u_char bt) 629 + static inline void accept_addr_byte(struct nandsim *ns, u_char bt) 638 630 { 639 631 uint byte = (uint)bt; 640 632 ··· 651 645 /* 652 646 * Switch to STATE_READY state. 653 647 */ 654 - static inline void 655 - switch_to_ready_state(struct nandsim *ns, u_char status) 648 + static inline void switch_to_ready_state(struct nandsim *ns, u_char status) 656 649 { 657 650 NS_DBG("switch_to_ready_state: switch to %s state\n", get_state_name(STATE_READY)); 658 651 ··· 710 705 * -1 - several matches. 711 706 * 0 - operation is found. 
712 707 */ 713 - static int 714 - find_operation(struct nandsim *ns, uint32_t flag) 708 + static int find_operation(struct nandsim *ns, uint32_t flag) 715 709 { 716 710 int opsfound = 0; 717 711 int i, j, idx = 0; ··· 795 791 } 796 792 797 793 /* 794 + * Returns a pointer to the current page. 795 + */ 796 + static inline union ns_mem *NS_GET_PAGE(struct nandsim *ns) 797 + { 798 + return &(ns->pages[ns->regs.row]); 799 + } 800 + 801 + /* 802 + * Retuns a pointer to the current byte, within the current page. 803 + */ 804 + static inline u_char *NS_PAGE_BYTE_OFF(struct nandsim *ns) 805 + { 806 + return NS_GET_PAGE(ns)->byte + ns->regs.column + ns->regs.off; 807 + } 808 + 809 + /* 810 + * Fill the NAND buffer with data read from the specified page. 811 + */ 812 + static void read_page(struct nandsim *ns, int num) 813 + { 814 + union ns_mem *mypage; 815 + 816 + mypage = NS_GET_PAGE(ns); 817 + if (mypage->byte == NULL) { 818 + NS_DBG("read_page: page %d not allocated\n", ns->regs.row); 819 + memset(ns->buf.byte, 0xFF, num); 820 + } else { 821 + NS_DBG("read_page: page %d allocated, reading from %d\n", 822 + ns->regs.row, ns->regs.column + ns->regs.off); 823 + memcpy(ns->buf.byte, NS_PAGE_BYTE_OFF(ns), num); 824 + } 825 + } 826 + 827 + /* 828 + * Erase all pages in the specified sector. 829 + */ 830 + static void erase_sector(struct nandsim *ns) 831 + { 832 + union ns_mem *mypage; 833 + int i; 834 + 835 + mypage = NS_GET_PAGE(ns); 836 + for (i = 0; i < ns->geom.pgsec; i++) { 837 + if (mypage->byte != NULL) { 838 + NS_DBG("erase_sector: freeing page %d\n", ns->regs.row+i); 839 + kfree(mypage->byte); 840 + mypage->byte = NULL; 841 + } 842 + mypage++; 843 + } 844 + } 845 + 846 + /* 847 + * Program the specified page with the contents from the NAND buffer. 848 + */ 849 + static int prog_page(struct nandsim *ns, int num) 850 + { 851 + int i; 852 + union ns_mem *mypage; 853 + u_char *pg_off; 854 + 855 + mypage = NS_GET_PAGE(ns); 856 + if (mypage->byte == NULL) { 857 + NS_DBG("prog_page: allocating page %d\n", ns->regs.row); 858 + mypage->byte = kmalloc(ns->geom.pgszoob, GFP_KERNEL); 859 + if (mypage->byte == NULL) { 860 + NS_ERR("prog_page: error allocating memory for page %d\n", ns->regs.row); 861 + return -1; 862 + } 863 + memset(mypage->byte, 0xFF, ns->geom.pgszoob); 864 + } 865 + 866 + pg_off = NS_PAGE_BYTE_OFF(ns); 867 + for (i = 0; i < num; i++) 868 + pg_off[i] &= ns->buf.byte[i]; 869 + 870 + return 0; 871 + } 872 + 873 + /* 798 874 * If state has any action bit, perform this action. 799 875 * 800 876 * RETURNS: 0 if success, -1 if error. 801 877 */ 802 - static int 803 - do_state_action(struct nandsim *ns, uint32_t action) 878 + static int do_state_action(struct nandsim *ns, uint32_t action) 804 879 { 805 - int i, num; 880 + int num; 806 881 int busdiv = ns->busw == 8 ? 
1 : 2; 807 882 808 883 action &= ACTION_MASK; ··· 905 822 break; 906 823 } 907 824 num = ns->geom.pgszoob - ns->regs.off - ns->regs.column; 908 - memcpy(ns->buf.byte, ns->mem.byte + NS_RAW_OFFSET(ns) + ns->regs.off, num); 825 + read_page(ns, num); 909 826 910 827 NS_DBG("do_state_action: (ACTION_CPY:) copy %d bytes to int buf, raw offset %d\n", 911 828 num, NS_RAW_OFFSET(ns) + ns->regs.off); ··· 946 863 ns->regs.row, NS_RAW_OFFSET(ns)); 947 864 NS_LOG("erase sector %d\n", ns->regs.row >> (ns->geom.secshift - ns->geom.pgshift)); 948 865 949 - memset(ns->mem.byte + NS_RAW_OFFSET(ns), 0xFF, ns->geom.secszoob); 866 + erase_sector(ns); 950 867 951 868 NS_MDELAY(erase_delay); 952 869 ··· 969 886 return -1; 970 887 } 971 888 972 - for (i = 0; i < num; i++) 973 - ns->mem.byte[NS_RAW_OFFSET(ns) + ns->regs.off + i] &= ns->buf.byte[i]; 889 + if (prog_page(ns, num) == -1) 890 + return -1; 974 891 975 892 NS_DBG("do_state_action: copy %d bytes from int buf to (%#x, %#x), raw off = %d\n", 976 893 num, ns->regs.row, ns->regs.column, NS_RAW_OFFSET(ns) + ns->regs.off); ··· 1011 928 /* 1012 929 * Switch simulator's state. 1013 930 */ 1014 - static void 1015 - switch_state(struct nandsim *ns) 931 + static void switch_state(struct nandsim *ns) 1016 932 { 1017 933 if (ns->op) { 1018 934 /* ··· 1152 1070 } 1153 1071 } 1154 1072 1155 - static u_char 1156 - ns_nand_read_byte(struct mtd_info *mtd) 1073 + static u_char ns_nand_read_byte(struct mtd_info *mtd) 1157 1074 { 1158 1075 struct nandsim *ns = (struct nandsim *)((struct nand_chip *)mtd->priv)->priv; 1159 1076 u_char outb = 0x00; ··· 1225 1144 return outb; 1226 1145 } 1227 1146 1228 - static void 1229 - ns_nand_write_byte(struct mtd_info *mtd, u_char byte) 1147 + static void ns_nand_write_byte(struct mtd_info *mtd, u_char byte) 1230 1148 { 1231 1149 struct nandsim *ns = (struct nandsim *)((struct nand_chip *)mtd->priv)->priv; 1232 1150 ··· 1388 1308 ns_nand_write_byte(mtd, cmd); 1389 1309 } 1390 1310 1391 - static int 1392 - ns_device_ready(struct mtd_info *mtd) 1311 + static int ns_device_ready(struct mtd_info *mtd) 1393 1312 { 1394 1313 NS_DBG("device_ready\n"); 1395 1314 return 1; 1396 1315 } 1397 1316 1398 - static uint16_t 1399 - ns_nand_read_word(struct mtd_info *mtd) 1317 + static uint16_t ns_nand_read_word(struct mtd_info *mtd) 1400 1318 { 1401 1319 struct nand_chip *chip = (struct nand_chip *)mtd->priv; 1402 1320 ··· 1403 1325 return chip->read_byte(mtd) | (chip->read_byte(mtd) << 8); 1404 1326 } 1405 1327 1406 - static void 1407 - ns_nand_write_buf(struct mtd_info *mtd, const u_char *buf, int len) 1328 + static void ns_nand_write_buf(struct mtd_info *mtd, const u_char *buf, int len) 1408 1329 { 1409 1330 struct nandsim *ns = (struct nandsim *)((struct nand_chip *)mtd->priv)->priv; 1410 1331 ··· 1430 1353 } 1431 1354 } 1432 1355 1433 - static void 1434 - ns_nand_read_buf(struct mtd_info *mtd, u_char *buf, int len) 1356 + static void ns_nand_read_buf(struct mtd_info *mtd, u_char *buf, int len) 1435 1357 { 1436 1358 struct nandsim *ns = (struct nandsim *)((struct nand_chip *)mtd->priv)->priv; 1437 1359 ··· 1483 1407 return; 1484 1408 } 1485 1409 1486 - static int 1487 - ns_nand_verify_buf(struct mtd_info *mtd, const u_char *buf, int len) 1410 + static int ns_nand_verify_buf(struct mtd_info *mtd, const u_char *buf, int len) 1488 1411 { 1489 1412 ns_nand_read_buf(mtd, (u_char *)&ns_verify_buf[0], len); 1490 1413 ··· 1511 1436 } 1512 1437 1513 1438 /* Allocate and initialize mtd_info, nand_chip and nandsim structures */ 1514 - nsmtd = 
kmalloc(sizeof(struct mtd_info) + sizeof(struct nand_chip) 1439 + nsmtd = kzalloc(sizeof(struct mtd_info) + sizeof(struct nand_chip) 1515 1440 + sizeof(struct nandsim), GFP_KERNEL); 1516 1441 if (!nsmtd) { 1517 1442 NS_ERR("unable to allocate core structures.\n"); 1518 1443 return -ENOMEM; 1519 1444 } 1520 - memset(nsmtd, 0, sizeof(struct mtd_info) + sizeof(struct nand_chip) + 1521 - sizeof(struct nandsim)); 1522 1445 chip = (struct nand_chip *)(nsmtd + 1); 1523 1446 nsmtd->priv = (void *)chip; 1524 1447 nand = (struct nandsim *)(chip + 1);
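Rather than one big vmalloc() of the whole simulated device (or an ioremap() of CONFIG_NS_ABS_POS), nandsim now keeps a vmalloc()ed array of per-page pointers and kmalloc()s each page (data plus OOB) only when it is first programmed, freeing it again when its sector is erased, so a mostly-erased simulated chip consumes little real memory. A condensed sketch of that lifecycle, boiled down from read_page()/prog_page()/erase_sector() above (not a literal function from the driver):

    union ns_mem *page = &ns->pages[ns->regs.row];

    /* read: a never-programmed page behaves as erased flash */
    if (page->byte == NULL)
            memset(ns->buf.byte, 0xFF, num);
    else
            memcpy(ns->buf.byte, page->byte + ns->regs.column + ns->regs.off, num);

    /* program: kmalloc() and 0xFF-fill the page on first use, then AND in
     * the buffer; erase: kfree() the sector's pages and NULL the pointers */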
+1 -1
drivers/mtd/nand/ndfc.c
··· 56 56 ccr |= NDFC_CCR_BS(chip + pchip->chip_offset); 57 57 } else 58 58 ccr |= NDFC_CCR_RESET_CE; 59 - writel(ccr, ndfc->ndfcbase + NDFC_CCR); 59 + __raw_writel(ccr, ndfc->ndfcbase + NDFC_CCR); 60 60 } 61 61 62 62 static void ndfc_hwcontrol(struct mtd_info *mtd, int cmd, unsigned int ctrl)
+3 -43
drivers/mtd/nand/rtc_from4.c
··· 24 24 #include <linux/init.h> 25 25 #include <linux/slab.h> 26 26 #include <linux/rslib.h> 27 + #include <linux/bitrev.h> 27 28 #include <linux/module.h> 28 29 #include <linux/mtd/compatmac.h> 29 30 #include <linux/mtd/mtd.h> ··· 151 150 16, 17, 18, 19, 20, 21, 22, 23, 152 151 24, 25, 26, 27, 28, 29, 30, 31}, 153 152 .oobfree = {{32, 32}} 154 - }; 155 - 156 - /* Aargh. I missed the reversed bit order, when I 157 - * was talking to Renesas about the FPGA. 158 - * 159 - * The table is used for bit reordering and inversion 160 - * of the ecc byte which we get from the FPGA 161 - */ 162 - static uint8_t revbits[256] = { 163 - 0x00, 0x80, 0x40, 0xc0, 0x20, 0xa0, 0x60, 0xe0, 164 - 0x10, 0x90, 0x50, 0xd0, 0x30, 0xb0, 0x70, 0xf0, 165 - 0x08, 0x88, 0x48, 0xc8, 0x28, 0xa8, 0x68, 0xe8, 166 - 0x18, 0x98, 0x58, 0xd8, 0x38, 0xb8, 0x78, 0xf8, 167 - 0x04, 0x84, 0x44, 0xc4, 0x24, 0xa4, 0x64, 0xe4, 168 - 0x14, 0x94, 0x54, 0xd4, 0x34, 0xb4, 0x74, 0xf4, 169 - 0x0c, 0x8c, 0x4c, 0xcc, 0x2c, 0xac, 0x6c, 0xec, 170 - 0x1c, 0x9c, 0x5c, 0xdc, 0x3c, 0xbc, 0x7c, 0xfc, 171 - 0x02, 0x82, 0x42, 0xc2, 0x22, 0xa2, 0x62, 0xe2, 172 - 0x12, 0x92, 0x52, 0xd2, 0x32, 0xb2, 0x72, 0xf2, 173 - 0x0a, 0x8a, 0x4a, 0xca, 0x2a, 0xaa, 0x6a, 0xea, 174 - 0x1a, 0x9a, 0x5a, 0xda, 0x3a, 0xba, 0x7a, 0xfa, 175 - 0x06, 0x86, 0x46, 0xc6, 0x26, 0xa6, 0x66, 0xe6, 176 - 0x16, 0x96, 0x56, 0xd6, 0x36, 0xb6, 0x76, 0xf6, 177 - 0x0e, 0x8e, 0x4e, 0xce, 0x2e, 0xae, 0x6e, 0xee, 178 - 0x1e, 0x9e, 0x5e, 0xde, 0x3e, 0xbe, 0x7e, 0xfe, 179 - 0x01, 0x81, 0x41, 0xc1, 0x21, 0xa1, 0x61, 0xe1, 180 - 0x11, 0x91, 0x51, 0xd1, 0x31, 0xb1, 0x71, 0xf1, 181 - 0x09, 0x89, 0x49, 0xc9, 0x29, 0xa9, 0x69, 0xe9, 182 - 0x19, 0x99, 0x59, 0xd9, 0x39, 0xb9, 0x79, 0xf9, 183 - 0x05, 0x85, 0x45, 0xc5, 0x25, 0xa5, 0x65, 0xe5, 184 - 0x15, 0x95, 0x55, 0xd5, 0x35, 0xb5, 0x75, 0xf5, 185 - 0x0d, 0x8d, 0x4d, 0xcd, 0x2d, 0xad, 0x6d, 0xed, 186 - 0x1d, 0x9d, 0x5d, 0xdd, 0x3d, 0xbd, 0x7d, 0xfd, 187 - 0x03, 0x83, 0x43, 0xc3, 0x23, 0xa3, 0x63, 0xe3, 188 - 0x13, 0x93, 0x53, 0xd3, 0x33, 0xb3, 0x73, 0xf3, 189 - 0x0b, 0x8b, 0x4b, 0xcb, 0x2b, 0xab, 0x6b, 0xeb, 190 - 0x1b, 0x9b, 0x5b, 0xdb, 0x3b, 0xbb, 0x7b, 0xfb, 191 - 0x07, 0x87, 0x47, 0xc7, 0x27, 0xa7, 0x67, 0xe7, 192 - 0x17, 0x97, 0x57, 0xd7, 0x37, 0xb7, 0x77, 0xf7, 193 - 0x0f, 0x8f, 0x4f, 0xcf, 0x2f, 0xaf, 0x6f, 0xef, 194 - 0x1f, 0x9f, 0x5f, 0xdf, 0x3f, 0xbf, 0x7f, 0xff, 195 153 }; 196 154 197 155 #endif ··· 357 397 /* Read the syndrom pattern from the FPGA and correct the bitorder */ 358 398 rs_ecc = (volatile unsigned short *)(rtc_from4_fio_base + RTC_FROM4_RS_ECC); 359 399 for (i = 0; i < 8; i++) { 360 - ecc[i] = revbits[(*rs_ecc) & 0xFF]; 400 + ecc[i] = bitrev8(*rs_ecc); 361 401 rs_ecc++; 362 402 } 363 403 ··· 456 496 rtn = nand_do_read(mtd, page, len, &retlen, buf); 457 497 458 498 /* if read failed or > 1-bit error corrected */ 459 - if (rtn || (mtd->ecc_stats.corrected - corrected) > 1) { 499 + if (rtn || (mtd->ecc_stats.corrected - corrected) > 1) 460 500 er_stat |= 1 << 1; 461 501 kfree(buf); 462 502 }
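The driver-private 256-entry revbits[] table is dropped in favour of the shared bitrev8() helper from <linux/bitrev.h>; for every byte value bitrev8(x) yields the same MSB-to-LSB mirror the table held, so the ECC bytes read back from the FPGA are reordered exactly as before. For example:

    #include <linux/bitrev.h>

    u8 r = bitrev8(0x2c);   /* 0b00101100 mirrors to 0b00110100, i.e. 0x34 */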
+1 -1
drivers/mtd/nand/s3c2410.c
··· 283 283 unsigned int ctrl) 284 284 { 285 285 struct s3c2410_nand_info *info = s3c2410_nand_mtd_toinfo(mtd); 286 - 286 + 287 287 if (cmd == NAND_CMD_NONE) 288 288 return; 289 289
+5 -7
drivers/mtd/nftlcore.c
··· 57 57 58 58 DEBUG(MTD_DEBUG_LEVEL1, "NFTL: add_mtd for %s\n", mtd->name); 59 59 60 - nftl = kmalloc(sizeof(struct NFTLrecord), GFP_KERNEL); 60 + nftl = kzalloc(sizeof(struct NFTLrecord), GFP_KERNEL); 61 61 62 62 if (!nftl) { 63 63 printk(KERN_WARNING "NFTL: out of memory for data structures\n"); 64 64 return; 65 65 } 66 - memset(nftl, 0, sizeof(*nftl)); 67 66 68 67 nftl->mbd.mtd = mtd; 69 68 nftl->mbd.devnum = -1; 70 - nftl->mbd.blksize = 512; 69 + 71 70 nftl->mbd.tr = tr; 72 71 73 72 if (NFTL_mount(nftl) < 0) { ··· 146 147 ops.ooblen = len; 147 148 ops.oobbuf = buf; 148 149 ops.datbuf = NULL; 149 - ops.len = len; 150 150 151 151 res = mtd->read_oob(mtd, offs & ~(mtd->writesize - 1), &ops); 152 - *retlen = ops.retlen; 152 + *retlen = ops.oobretlen; 153 153 return res; 154 154 } 155 155 ··· 166 168 ops.ooblen = len; 167 169 ops.oobbuf = buf; 168 170 ops.datbuf = NULL; 169 - ops.len = len; 170 171 171 172 res = mtd->write_oob(mtd, offs & ~(mtd->writesize - 1), &ops); 172 - *retlen = ops.retlen; 173 + *retlen = ops.oobretlen; 173 174 return res; 174 175 } 175 176 ··· 794 797 .name = "nftl", 795 798 .major = NFTL_MAJOR, 796 799 .part_bits = NFTL_PARTN_BITS, 800 + .blksize = 512, 797 801 .getgeo = nftl_getgeo, 798 802 .readsect = nftl_readblock, 799 803 #ifdef CONFIG_NFTL_RW
+2 -3
drivers/mtd/onenand/generic.c
··· 45 45 unsigned long size = res->end - res->start + 1; 46 46 int err; 47 47 48 - info = kmalloc(sizeof(struct onenand_info), GFP_KERNEL); 48 + info = kzalloc(sizeof(struct onenand_info), GFP_KERNEL); 49 49 if (!info) 50 50 return -ENOMEM; 51 - 52 - memset(info, 0, sizeof(struct onenand_info)); 53 51 54 52 if (!request_mem_region(res->start, size, dev->driver->name)) { 55 53 err = -EBUSY; ··· 61 63 } 62 64 63 65 info->onenand.mmcontrol = pdata->mmcontrol; 66 + info->onenand.irq = platform_get_irq(pdev, 0); 64 67 65 68 info->mtd.name = pdev->dev.bus_id; 66 69 info->mtd.priv = &info->onenand;
+281 -89
drivers/mtd/onenand/onenand_base.c
··· 13 13 #include <linux/module.h> 14 14 #include <linux/init.h> 15 15 #include <linux/sched.h> 16 + #include <linux/interrupt.h> 16 17 #include <linux/jiffies.h> 17 18 #include <linux/mtd/mtd.h> 18 19 #include <linux/mtd/onenand.h> ··· 192 191 struct onenand_chip *this = mtd->priv; 193 192 int value, readcmd = 0, block_cmd = 0; 194 193 int block, page; 195 - /* Now we use page size operation */ 196 - int sectors = 4, count = 4; 197 194 198 195 /* Address translation */ 199 196 switch (cmd) { ··· 243 244 } 244 245 245 246 if (page != -1) { 247 + /* Now we use page size operation */ 248 + int sectors = 4, count = 4; 246 249 int dataram; 247 250 248 251 switch (cmd) { ··· 298 297 unsigned long timeout; 299 298 unsigned int flags = ONENAND_INT_MASTER; 300 299 unsigned int interrupt = 0; 301 - unsigned int ctrl, ecc; 300 + unsigned int ctrl; 302 301 303 302 /* The 20 msec is enough */ 304 303 timeout = jiffies + msecs_to_jiffies(20); ··· 310 309 311 310 if (state != FL_READING) 312 311 cond_resched(); 313 - touch_softlockup_watchdog(); 314 312 } 315 313 /* To get correct interrupt status in timeout case */ 316 314 interrupt = this->read_word(this->base + ONENAND_REG_INTERRUPT); ··· 317 317 ctrl = this->read_word(this->base + ONENAND_REG_CTRL_STATUS); 318 318 319 319 if (ctrl & ONENAND_CTRL_ERROR) { 320 - /* It maybe occur at initial bad block */ 321 320 DEBUG(MTD_DEBUG_LEVEL0, "onenand_wait: controller error = 0x%04x\n", ctrl); 322 - /* Clear other interrupt bits for preventing ECC error */ 323 - interrupt &= ONENAND_INT_MASTER; 324 - } 325 - 326 - if (ctrl & ONENAND_CTRL_LOCK) { 327 - DEBUG(MTD_DEBUG_LEVEL0, "onenand_wait: it's locked error = 0x%04x\n", ctrl); 328 - return -EACCES; 321 + if (ctrl & ONENAND_CTRL_LOCK) 322 + DEBUG(MTD_DEBUG_LEVEL0, "onenand_wait: it's locked error.\n"); 323 + return ctrl; 329 324 } 330 325 331 326 if (interrupt & ONENAND_INT_READ) { 332 - ecc = this->read_word(this->base + ONENAND_REG_ECC_STATUS); 333 - if (ecc & ONENAND_ECC_2BIT_ALL) { 327 + int ecc = this->read_word(this->base + ONENAND_REG_ECC_STATUS); 328 + if (ecc) { 334 329 DEBUG(MTD_DEBUG_LEVEL0, "onenand_wait: ECC error = 0x%04x\n", ecc); 335 - return -EBADMSG; 330 + if (ecc & ONENAND_ECC_2BIT_ALL) { 331 + mtd->ecc_stats.failed++; 332 + return ecc; 333 + } else if (ecc & ONENAND_ECC_1BIT_ALL) 334 + mtd->ecc_stats.corrected++; 336 335 } 337 336 } 338 337 339 338 return 0; 339 + } 340 + 341 + /* 342 + * onenand_interrupt - [DEFAULT] onenand interrupt handler 343 + * @param irq onenand interrupt number 344 + * @param dev_id interrupt data 345 + * 346 + * complete the work 347 + */ 348 + static irqreturn_t onenand_interrupt(int irq, void *data) 349 + { 350 + struct onenand_chip *this = (struct onenand_chip *) data; 351 + 352 + /* To handle shared interrupt */ 353 + if (!this->complete.done) 354 + complete(&this->complete); 355 + 356 + return IRQ_HANDLED; 357 + } 358 + 359 + /* 360 + * onenand_interrupt_wait - [DEFAULT] wait until the command is done 361 + * @param mtd MTD device structure 362 + * @param state state to select the max. timeout value 363 + * 364 + * Wait for command done. 365 + */ 366 + static int onenand_interrupt_wait(struct mtd_info *mtd, int state) 367 + { 368 + struct onenand_chip *this = mtd->priv; 369 + 370 + wait_for_completion(&this->complete); 371 + 372 + return onenand_wait(mtd, state); 373 + } 374 + 375 + /* 376 + * onenand_try_interrupt_wait - [DEFAULT] try interrupt wait 377 + * @param mtd MTD device structure 378 + * @param state state to select the max. 
timeout value 379 + * 380 + * Try interrupt based wait (It is used one-time) 381 + */ 382 + static int onenand_try_interrupt_wait(struct mtd_info *mtd, int state) 383 + { 384 + struct onenand_chip *this = mtd->priv; 385 + unsigned long remain, timeout; 386 + 387 + /* We use interrupt wait first */ 388 + this->wait = onenand_interrupt_wait; 389 + 390 + timeout = msecs_to_jiffies(100); 391 + remain = wait_for_completion_timeout(&this->complete, timeout); 392 + if (!remain) { 393 + printk(KERN_INFO "OneNAND: There's no interrupt. " 394 + "We use the normal wait\n"); 395 + 396 + /* Release the irq */ 397 + free_irq(this->irq, this); 398 + 399 + this->wait = onenand_wait; 400 + } 401 + 402 + return onenand_wait(mtd, state); 403 + } 404 + 405 + /* 406 + * onenand_setup_wait - [OneNAND Interface] setup onenand wait method 407 + * @param mtd MTD device structure 408 + * 409 + * There's two method to wait onenand work 410 + * 1. polling - read interrupt status register 411 + * 2. interrupt - use the kernel interrupt method 412 + */ 413 + static void onenand_setup_wait(struct mtd_info *mtd) 414 + { 415 + struct onenand_chip *this = mtd->priv; 416 + int syscfg; 417 + 418 + init_completion(&this->complete); 419 + 420 + if (this->irq <= 0) { 421 + this->wait = onenand_wait; 422 + return; 423 + } 424 + 425 + if (request_irq(this->irq, &onenand_interrupt, 426 + IRQF_SHARED, "onenand", this)) { 427 + /* If we can't get irq, use the normal wait */ 428 + this->wait = onenand_wait; 429 + return; 430 + } 431 + 432 + /* Enable interrupt */ 433 + syscfg = this->read_word(this->base + ONENAND_REG_SYS_CFG1); 434 + syscfg |= ONENAND_SYS_CFG1_IOBE; 435 + this->write_word(syscfg, this->base + ONENAND_REG_SYS_CFG1); 436 + 437 + this->wait = onenand_try_interrupt_wait; 340 438 } 341 439 342 440 /** ··· 707 609 size_t *retlen, u_char *buf) 708 610 { 709 611 struct onenand_chip *this = mtd->priv; 612 + struct mtd_ecc_stats stats; 710 613 int read = 0, column; 711 614 int thislen; 712 - int ret = 0; 615 + int ret = 0, boundary = 0; 713 616 714 617 DEBUG(MTD_DEBUG_LEVEL3, "onenand_read: from = 0x%08x, len = %i\n", (unsigned int) from, (int) len); 715 618 ··· 726 627 727 628 /* TODO handling oob */ 728 629 729 - while (read < len) { 730 - thislen = min_t(int, mtd->writesize, len - read); 630 + stats = mtd->ecc_stats; 731 631 732 - column = from & (mtd->writesize - 1); 733 - if (column + thislen > mtd->writesize) 734 - thislen = mtd->writesize - column; 632 + /* Read-while-load method */ 735 633 736 - if (!onenand_check_bufferram(mtd, from)) { 737 - this->command(mtd, ONENAND_CMD_READ, from, mtd->writesize); 634 + /* Do first load to bufferRAM */ 635 + if (read < len) { 636 + if (!onenand_check_bufferram(mtd, from)) { 637 + this->command(mtd, ONENAND_CMD_READ, from, mtd->writesize); 638 + ret = this->wait(mtd, FL_READING); 639 + onenand_update_bufferram(mtd, from, !ret); 640 + } 641 + } 738 642 739 - ret = this->wait(mtd, FL_READING); 740 - /* First copy data and check return value for ECC handling */ 741 - onenand_update_bufferram(mtd, from, 1); 742 - } 643 + thislen = min_t(int, mtd->writesize, len - read); 644 + column = from & (mtd->writesize - 1); 645 + if (column + thislen > mtd->writesize) 646 + thislen = mtd->writesize - column; 743 647 744 - this->read_bufferram(mtd, ONENAND_DATARAM, buf, column, thislen); 648 + while (!ret) { 649 + /* If there is more to load then start next load */ 650 + from += thislen; 651 + if (read + thislen < len) { 652 + this->command(mtd, ONENAND_CMD_READ, from, mtd->writesize); 653 + /* 
654 + * Chip boundary handling in DDP 655 + * Now we issued chip 1 read and pointed chip 1 656 + * bufferam so we have to point chip 0 bufferam. 657 + */ 658 + if (this->device_id & ONENAND_DEVICE_IS_DDP && 659 + unlikely(from == (this->chipsize >> 1))) { 660 + this->write_word(0, this->base + ONENAND_REG_START_ADDRESS2); 661 + boundary = 1; 662 + } else 663 + boundary = 0; 664 + ONENAND_SET_PREV_BUFFERRAM(this); 665 + } 666 + /* While load is going, read from last bufferRAM */ 667 + this->read_bufferram(mtd, ONENAND_DATARAM, buf, column, thislen); 668 + /* See if we are done */ 669 + read += thislen; 670 + if (read == len) 671 + break; 672 + /* Set up for next read from bufferRAM */ 673 + if (unlikely(boundary)) 674 + this->write_word(0x8000, this->base + ONENAND_REG_START_ADDRESS2); 675 + ONENAND_SET_NEXT_BUFFERRAM(this); 676 + buf += thislen; 677 + thislen = min_t(int, mtd->writesize, len - read); 678 + column = 0; 679 + cond_resched(); 680 + /* Now wait for load */ 681 + ret = this->wait(mtd, FL_READING); 682 + onenand_update_bufferram(mtd, from, !ret); 683 + } 745 684 746 - read += thislen; 747 - 748 - if (read == len) 749 - break; 750 - 751 - if (ret) { 752 - DEBUG(MTD_DEBUG_LEVEL0, "onenand_read: read failed = %d\n", ret); 753 - goto out; 754 - } 755 - 756 - from += thislen; 757 - buf += thislen; 758 - } 759 - 760 - out: 761 685 /* Deselect and wake up anyone waiting on the device */ 762 686 onenand_release_device(mtd); 763 687 ··· 790 668 * retlen == desired len and result == -EBADMSG 791 669 */ 792 670 *retlen = read; 793 - return ret; 671 + 672 + if (mtd->ecc_stats.failed - stats.failed) 673 + return -EBADMSG; 674 + 675 + if (ret) 676 + return ret; 677 + 678 + return mtd->ecc_stats.corrected - stats.corrected ? -EUCLEAN : 0; 794 679 } 795 680 796 681 /** ··· 834 705 column = from & (mtd->oobsize - 1); 835 706 836 707 while (read < len) { 708 + cond_resched(); 709 + 837 710 thislen = mtd->oobsize - column; 838 711 thislen = min_t(int, thislen, len); 839 712 ··· 848 717 849 718 this->read_bufferram(mtd, ONENAND_SPARERAM, buf, column, thislen); 850 719 720 + if (ret) { 721 + DEBUG(MTD_DEBUG_LEVEL0, "onenand_read_oob: read failed = 0x%x\n", ret); 722 + goto out; 723 + } 724 + 851 725 read += thislen; 852 726 853 727 if (read == len) 854 728 break; 855 - 856 - if (ret) { 857 - DEBUG(MTD_DEBUG_LEVEL0, "onenand_read_oob: read failed = %d\n", ret); 858 - goto out; 859 - } 860 729 861 730 buf += thislen; 862 731 ··· 887 756 { 888 757 BUG_ON(ops->mode != MTD_OOB_PLACE); 889 758 890 - return onenand_do_read_oob(mtd, from + ops->ooboffs, ops->len, 891 - &ops->retlen, ops->oobbuf); 759 + return onenand_do_read_oob(mtd, from + ops->ooboffs, ops->ooblen, 760 + &ops->oobretlen, ops->oobbuf); 892 761 } 893 762 894 763 #ifdef CONFIG_MTD_ONENAND_VERIFY_WRITE ··· 935 804 void __iomem *dataram0, *dataram1; 936 805 int ret = 0; 937 806 807 + /* In partial page write, just skip it */ 808 + if ((addr & (mtd->writesize - 1)) != 0) 809 + return 0; 810 + 938 811 this->command(mtd, ONENAND_CMD_READ, addr, mtd->writesize); 939 812 940 813 ret = this->wait(mtd, FL_READING); ··· 961 826 #define onenand_verify_oob(...) 
(0) 962 827 #endif 963 828 964 - #define NOTALIGNED(x) ((x & (mtd->writesize - 1)) != 0) 829 + #define NOTALIGNED(x) ((x & (this->subpagesize - 1)) != 0) 965 830 966 831 /** 967 832 * onenand_write - [MTD Interface] write buffer to FLASH ··· 979 844 struct onenand_chip *this = mtd->priv; 980 845 int written = 0; 981 846 int ret = 0; 847 + int column, subpage; 982 848 983 849 DEBUG(MTD_DEBUG_LEVEL3, "onenand_write: to = 0x%08x, len = %i\n", (unsigned int) to, (int) len); 984 850 ··· 998 862 return -EINVAL; 999 863 } 1000 864 865 + column = to & (mtd->writesize - 1); 866 + subpage = column || (len & (mtd->writesize - 1)); 867 + 1001 868 /* Grab the lock and see if the device is available */ 1002 869 onenand_get_device(mtd, FL_WRITING); 1003 870 1004 871 /* Loop until all data write */ 1005 872 while (written < len) { 1006 - int thislen = min_t(int, mtd->writesize, len - written); 873 + int bytes = mtd->writesize; 874 + int thislen = min_t(int, bytes, len - written); 875 + u_char *wbuf = (u_char *) buf; 1007 876 1008 - this->command(mtd, ONENAND_CMD_BUFFERRAM, to, mtd->writesize); 877 + cond_resched(); 1009 878 1010 - this->write_bufferram(mtd, ONENAND_DATARAM, buf, 0, thislen); 879 + this->command(mtd, ONENAND_CMD_BUFFERRAM, to, bytes); 880 + 881 + /* Partial page write */ 882 + if (subpage) { 883 + bytes = min_t(int, bytes - column, (int) len); 884 + memset(this->page_buf, 0xff, mtd->writesize); 885 + memcpy(this->page_buf + column, buf, bytes); 886 + wbuf = this->page_buf; 887 + /* Even though partial write, we need page size */ 888 + thislen = mtd->writesize; 889 + } 890 + 891 + this->write_bufferram(mtd, ONENAND_DATARAM, wbuf, 0, thislen); 1011 892 this->write_bufferram(mtd, ONENAND_SPARERAM, ffchars, 0, mtd->oobsize); 1012 893 1013 894 this->command(mtd, ONENAND_CMD_PROG, to, mtd->writesize); 1014 895 1015 - onenand_update_bufferram(mtd, to, 1); 896 + /* In partial page write we don't update bufferram */ 897 + onenand_update_bufferram(mtd, to, !subpage); 1016 898 1017 899 ret = this->wait(mtd, FL_WRITING); 1018 900 if (ret) { 1019 901 DEBUG(MTD_DEBUG_LEVEL0, "onenand_write: write filaed %d\n", ret); 1020 - goto out; 902 + break; 903 + } 904 + 905 + /* Only check verify write turn on */ 906 + ret = onenand_verify_page(mtd, (u_char *) wbuf, to); 907 + if (ret) { 908 + DEBUG(MTD_DEBUG_LEVEL0, "onenand_write: verify failed %d\n", ret); 909 + break; 1021 910 } 1022 911 1023 912 written += thislen; 1024 913 1025 - /* Only check verify write turn on */ 1026 - ret = onenand_verify_page(mtd, (u_char *) buf, to); 1027 - if (ret) { 1028 - DEBUG(MTD_DEBUG_LEVEL0, "onenand_write: verify failed %d\n", ret); 1029 - goto out; 1030 - } 1031 - 1032 914 if (written == len) 1033 915 break; 1034 916 917 + column = 0; 1035 918 to += thislen; 1036 919 buf += thislen; 1037 920 } 1038 921 1039 - out: 1040 922 /* Deselect and wake up anyone waiting on the device */ 1041 923 onenand_release_device(mtd); 1042 924 ··· 1097 943 /* Loop until all data write */ 1098 944 while (written < len) { 1099 945 int thislen = min_t(int, mtd->oobsize, len - written); 946 + 947 + cond_resched(); 1100 948 1101 949 column = to & (mtd->oobsize - 1); 1102 950 ··· 1155 999 { 1156 1000 BUG_ON(ops->mode != MTD_OOB_PLACE); 1157 1001 1158 - return onenand_do_write_oob(mtd, to + ops->ooboffs, ops->len, 1159 - &ops->retlen, ops->oobbuf); 1002 + return onenand_do_write_oob(mtd, to + ops->ooboffs, ops->ooblen, 1003 + &ops->oobretlen, ops->oobbuf); 1160 1004 } 1161 1005 1162 1006 /** ··· 1227 1071 instr->state = MTD_ERASING; 1228 1072 1229 
1073 while (len) { 1074 + cond_resched(); 1230 1075 1231 1076 /* Check if we have a bad block, we do not erase bad blocks */ 1232 1077 if (onenand_block_checkbad(mtd, addr, 0, 0)) { ··· 1241 1084 ret = this->wait(mtd, FL_ERASING); 1242 1085 /* Check, if it is write protected */ 1243 1086 if (ret) { 1244 - if (ret == -EPERM) 1245 - DEBUG(MTD_DEBUG_LEVEL0, "onenand_erase: Device is write protected!!!\n"); 1246 - else 1247 - DEBUG(MTD_DEBUG_LEVEL0, "onenand_erase: Failed erase, block %d\n", (unsigned) (addr >> this->erase_shift)); 1087 + DEBUG(MTD_DEBUG_LEVEL0, "onenand_erase: Failed erase, block %d\n", (unsigned) (addr >> this->erase_shift)); 1248 1088 instr->state = MTD_ERASE_FAILED; 1249 1089 instr->fail_addr = addr; 1250 1090 goto erase_exit; ··· 1282 1128 /* Release it and go back */ 1283 1129 onenand_release_device(mtd); 1284 1130 } 1285 - 1286 1131 1287 1132 /** 1288 1133 * onenand_block_isbad - [MTD Interface] Check whether the block at the given offset is bad ··· 1349 1196 } 1350 1197 1351 1198 /** 1352 - * onenand_unlock - [MTD Interface] Unlock block(s) 1199 + * onenand_do_lock_cmd - [OneNAND Interface] Lock or unlock block(s) 1353 1200 * @param mtd MTD device structure 1354 1201 * @param ofs offset relative to mtd start 1355 - * @param len number of bytes to unlock 1202 + * @param len number of bytes to lock or unlock 1356 1203 * 1357 - * Unlock one or more blocks 1204 + * Lock or unlock one or more blocks 1358 1205 */ 1359 - static int onenand_unlock(struct mtd_info *mtd, loff_t ofs, size_t len) 1206 + static int onenand_do_lock_cmd(struct mtd_info *mtd, loff_t ofs, size_t len, int cmd) 1360 1207 { 1361 1208 struct onenand_chip *this = mtd->priv; 1362 1209 int start, end, block, value, status; 1210 + int wp_status_mask; 1363 1211 1364 1212 start = ofs >> this->erase_shift; 1365 1213 end = len >> this->erase_shift; 1214 + 1215 + if (cmd == ONENAND_CMD_LOCK) 1216 + wp_status_mask = ONENAND_WP_LS; 1217 + else 1218 + wp_status_mask = ONENAND_WP_US; 1366 1219 1367 1220 /* Continuous lock scheme */ 1368 1221 if (this->options & ONENAND_HAS_CONT_LOCK) { ··· 1376 1217 this->write_word(start, this->base + ONENAND_REG_START_BLOCK_ADDRESS); 1377 1218 /* Set end block address */ 1378 1219 this->write_word(start + end - 1, this->base + ONENAND_REG_END_BLOCK_ADDRESS); 1379 - /* Write unlock command */ 1380 - this->command(mtd, ONENAND_CMD_UNLOCK, 0, 0); 1220 + /* Write lock command */ 1221 + this->command(mtd, cmd, 0, 0); 1381 1222 1382 1223 /* There's no return value */ 1383 - this->wait(mtd, FL_UNLOCKING); 1224 + this->wait(mtd, FL_LOCKING); 1384 1225 1385 1226 /* Sanity check */ 1386 1227 while (this->read_word(this->base + ONENAND_REG_CTRL_STATUS) ··· 1389 1230 1390 1231 /* Check lock status */ 1391 1232 status = this->read_word(this->base + ONENAND_REG_WP_STATUS); 1392 - if (!(status & ONENAND_WP_US)) 1233 + if (!(status & wp_status_mask)) 1393 1234 printk(KERN_ERR "wp status = 0x%x\n", status); 1394 1235 1395 1236 return 0; ··· 1405 1246 this->write_word(value, this->base + ONENAND_REG_START_ADDRESS2); 1406 1247 /* Set start block address */ 1407 1248 this->write_word(block, this->base + ONENAND_REG_START_BLOCK_ADDRESS); 1408 - /* Write unlock command */ 1409 - this->command(mtd, ONENAND_CMD_UNLOCK, 0, 0); 1249 + /* Write lock command */ 1250 + this->command(mtd, cmd, 0, 0); 1410 1251 1411 1252 /* There's no return value */ 1412 - this->wait(mtd, FL_UNLOCKING); 1253 + this->wait(mtd, FL_LOCKING); 1413 1254 1414 1255 /* Sanity check */ 1415 1256 while (this->read_word(this->base + 
ONENAND_REG_CTRL_STATUS) ··· 1418 1259 1419 1260 /* Check lock status */ 1420 1261 status = this->read_word(this->base + ONENAND_REG_WP_STATUS); 1421 - if (!(status & ONENAND_WP_US)) 1262 + if (!(status & wp_status_mask)) 1422 1263 printk(KERN_ERR "block = %d, wp status = 0x%x\n", block, status); 1423 1264 } 1424 1265 1425 1266 return 0; 1267 + } 1268 + 1269 + /** 1270 + * onenand_lock - [MTD Interface] Lock block(s) 1271 + * @param mtd MTD device structure 1272 + * @param ofs offset relative to mtd start 1273 + * @param len number of bytes to unlock 1274 + * 1275 + * Lock one or more blocks 1276 + */ 1277 + static int onenand_lock(struct mtd_info *mtd, loff_t ofs, size_t len) 1278 + { 1279 + return onenand_do_lock_cmd(mtd, ofs, len, ONENAND_CMD_LOCK); 1280 + } 1281 + 1282 + /** 1283 + * onenand_unlock - [MTD Interface] Unlock block(s) 1284 + * @param mtd MTD device structure 1285 + * @param ofs offset relative to mtd start 1286 + * @param len number of bytes to unlock 1287 + * 1288 + * Unlock one or more blocks 1289 + */ 1290 + static int onenand_unlock(struct mtd_info *mtd, loff_t ofs, size_t len) 1291 + { 1292 + return onenand_do_lock_cmd(mtd, ofs, len, ONENAND_CMD_UNLOCK); 1426 1293 } 1427 1294 1428 1295 /** ··· 1495 1310 this->command(mtd, ONENAND_CMD_UNLOCK_ALL, 0, 0); 1496 1311 1497 1312 /* There's no return value */ 1498 - this->wait(mtd, FL_UNLOCKING); 1313 + this->wait(mtd, FL_LOCKING); 1499 1314 1500 1315 /* Sanity check */ 1501 1316 while (this->read_word(this->base + ONENAND_REG_CTRL_STATUS) ··· 1519 1334 return 0; 1520 1335 } 1521 1336 1522 - mtd->unlock(mtd, 0x0, this->chipsize); 1337 + onenand_unlock(mtd, 0x0, this->chipsize); 1523 1338 1524 1339 return 0; 1525 1340 } ··· 1947 1762 /* Read manufacturer and device IDs from Register */ 1948 1763 maf_id = this->read_word(this->base + ONENAND_REG_MANUFACTURER_ID); 1949 1764 dev_id = this->read_word(this->base + ONENAND_REG_DEVICE_ID); 1950 - ver_id= this->read_word(this->base + ONENAND_REG_VERSION_ID); 1765 + ver_id = this->read_word(this->base + ONENAND_REG_VERSION_ID); 1951 1766 1952 1767 /* Check OneNAND device */ 1953 1768 if (maf_id != bram_maf_id || dev_id != bram_dev_id) ··· 2031 1846 if (!this->command) 2032 1847 this->command = onenand_command; 2033 1848 if (!this->wait) 2034 - this->wait = onenand_wait; 1849 + onenand_setup_wait(mtd); 2035 1850 2036 1851 if (!this->read_bufferram) 2037 1852 this->read_bufferram = onenand_read_bufferram; ··· 2068 1883 init_waitqueue_head(&this->wq); 2069 1884 spin_lock_init(&this->chip_lock); 2070 1885 1886 + /* 1887 + * Allow subpage writes up to oobsize. 
1888 + */ 2071 1889 switch (mtd->oobsize) { 2072 1890 case 64: 2073 1891 this->ecclayout = &onenand_oob_64; 1892 + mtd->subpage_sft = 2; 2074 1893 break; 2075 1894 2076 1895 case 32: 2077 1896 this->ecclayout = &onenand_oob_32; 1897 + mtd->subpage_sft = 1; 2078 1898 break; 2079 1899 2080 1900 default: 2081 1901 printk(KERN_WARNING "No OOB scheme defined for oobsize %d\n", 2082 1902 mtd->oobsize); 1903 + mtd->subpage_sft = 0; 2083 1904 /* To prevent kernel oops */ 2084 1905 this->ecclayout = &onenand_oob_32; 2085 1906 break; 2086 1907 } 2087 1908 1909 + this->subpagesize = mtd->writesize >> mtd->subpage_sft; 2088 1910 mtd->ecclayout = this->ecclayout; 2089 1911 2090 1912 /* Fill in remaining MTD driver data */ ··· 2114 1922 mtd->lock_user_prot_reg = onenand_lock_user_prot_reg; 2115 1923 #endif 2116 1924 mtd->sync = onenand_sync; 2117 - mtd->lock = NULL; 1925 + mtd->lock = onenand_lock; 2118 1926 mtd->unlock = onenand_unlock; 2119 1927 mtd->suspend = onenand_suspend; 2120 1928 mtd->resume = onenand_resume;
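The largest part of the onenand_base.c change adds an optional interrupt-driven wait: a shared IRQ handler completes a struct completion, and onenand_try_interrupt_wait() probes once with a timeout, falling back to polling if no interrupt arrives. The sketch below shows only the general completion pattern against a hypothetical device structure; it is not the driver's code:

#include <linux/completion.h>
#include <linux/interrupt.h>
#include <linux/errno.h>

struct demo_chip {			/* hypothetical; stands in for onenand_chip */
	int irq;
	struct completion done;
};

static irqreturn_t demo_isr(int irq, void *data)
{
	struct demo_chip *chip = data;

	/* Shared line: only signal when a waiter is actually pending. */
	if (!chip->done.done)
		complete(&chip->done);

	return IRQ_HANDLED;
}

static int demo_setup_wait(struct demo_chip *chip)
{
	init_completion(&chip->done);

	if (chip->irq <= 0 ||
	    request_irq(chip->irq, demo_isr, IRQF_SHARED, "demo", chip))
		return -ENXIO;		/* the real driver falls back to polling here */

	return 0;
}

static void demo_wait_for_command(struct demo_chip *chip)
{
	wait_for_completion(&chip->done);	/* sleeps until demo_isr() fires */
}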
+6 -8
drivers/mtd/onenand/onenand_bbt.c
··· 93 93 ret = onenand_do_read_oob(mtd, from + j * mtd->writesize + bd->offs, 94 94 readlen, &retlen, &buf[0]); 95 95 96 - if (ret) 96 + /* If it is a initial bad block, just ignore it */ 97 + if (ret && !(ret & ONENAND_CTRL_LOAD)) 97 98 return ret; 98 99 99 100 if (check_short_pattern(&buf[j * scanlen], scanlen, mtd->writesize, bd)) { 100 101 bbm->bbt[i >> 3] |= 0x03 << (i & 0x6); 101 102 printk(KERN_WARNING "Bad eraseblock %d at 0x%08x\n", 102 103 i >> 1, (unsigned int) from); 104 + mtd->ecc_stats.badblocks++; 103 105 break; 104 106 } 105 107 } ··· 179 177 int len, ret = 0; 180 178 181 179 len = mtd->size >> (this->erase_shift + 2); 182 - /* Allocate memory (2bit per block) */ 183 - bbm->bbt = kmalloc(len, GFP_KERNEL); 180 + /* Allocate memory (2bit per block) and clear the memory bad block table */ 181 + bbm->bbt = kzalloc(len, GFP_KERNEL); 184 182 if (!bbm->bbt) { 185 183 printk(KERN_ERR "onenand_scan_bbt: Out of memory\n"); 186 184 return -ENOMEM; 187 185 } 188 - /* Clear the memory bad block table */ 189 - memset(bbm->bbt, 0x00, len); 190 186 191 187 /* Set the bad block position */ 192 188 bbm->badblockpos = ONENAND_BADBLOCK_POS; ··· 230 230 struct onenand_chip *this = mtd->priv; 231 231 struct bbm_info *bbm; 232 232 233 - this->bbm = kmalloc(sizeof(struct bbm_info), GFP_KERNEL); 233 + this->bbm = kzalloc(sizeof(struct bbm_info), GFP_KERNEL); 234 234 if (!this->bbm) 235 235 return -ENOMEM; 236 236 237 237 bbm = this->bbm; 238 - 239 - memset(bbm, 0, sizeof(struct bbm_info)); 240 238 241 239 /* 1KB page has same configuration as 2KB page */ 242 240 if (!bbm->badblock_pattern)
+24 -6
drivers/mtd/redboot.c
··· 96 96 */ 97 97 if (swab32(buf[i].size) == master->erasesize) { 98 98 int j; 99 - for (j = 0; j < numslots && buf[j].name[0] != 0xff; ++j) { 99 + for (j = 0; j < numslots; ++j) { 100 + 101 + /* A single 0xff denotes a deleted entry. 102 + * Two of them in a row is the end of the table. 103 + */ 104 + if (buf[j].name[0] == 0xff) { 105 + if (buf[j].name[1] == 0xff) { 106 + break; 107 + } else { 108 + continue; 109 + } 110 + } 111 + 100 112 /* The unsigned long fields were written with the 101 113 * wrong byte sex, name and pad have no byte sex. 102 114 */ ··· 122 110 } 123 111 } 124 112 break; 113 + } else { 114 + /* re-calculate of real numslots */ 115 + numslots = buf[i].size / sizeof(struct fis_image_desc); 125 116 } 126 117 } 127 118 if (i == numslots) { ··· 138 123 for (i = 0; i < numslots; i++) { 139 124 struct fis_list *new_fl, **prev; 140 125 141 - if (buf[i].name[0] == 0xff) 142 - continue; 126 + if (buf[i].name[0] == 0xff) { 127 + if (buf[i].name[1] == 0xff) { 128 + break; 129 + } else { 130 + continue; 131 + } 132 + } 143 133 if (!redboot_checksum(&buf[i])) 144 134 break; 145 135 ··· 185 165 } 186 166 } 187 167 #endif 188 - parts = kmalloc(sizeof(*parts)*nrparts + nulllen + namelen, GFP_KERNEL); 168 + parts = kzalloc(sizeof(*parts)*nrparts + nulllen + namelen, GFP_KERNEL); 189 169 190 170 if (!parts) { 191 171 ret = -ENOMEM; 192 172 goto out; 193 173 } 194 - 195 - memset(parts, 0, sizeof(*parts)*nrparts + nulllen + namelen); 196 174 197 175 nullname = (char *)&parts[nrparts]; 198 176 #ifdef CONFIG_MTD_REDBOOT_PARTS_UNALLOCATED
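The redboot.c change encodes the FIS directory convention explicitly: a name starting with a single 0xff byte marks a deleted entry to skip, while two consecutive 0xff bytes mark the end of the table. A small helper capturing that test (a sketch, not part of the parser):

enum fis_slot { FIS_VALID, FIS_DELETED, FIS_END };

static enum fis_slot classify_fis_slot(const unsigned char *name)
{
	if (name[0] != 0xff)
		return FIS_VALID;	/* ordinary, in-use entry */
	if (name[1] == 0xff)
		return FIS_END;		/* two 0xff bytes: end of table */
	return FIS_DELETED;		/* lone 0xff: deleted entry, skip */
}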
+2 -1
drivers/mtd/rfd_ftl.c
··· 787 787 788 788 if (scan_header(part) == 0) { 789 789 part->mbd.size = part->sector_count; 790 - part->mbd.blksize = SECTOR_SIZE; 791 790 part->mbd.tr = tr; 792 791 part->mbd.devnum = -1; 793 792 if (!(mtd->flags & MTD_WRITEABLE)) ··· 828 829 .name = "rfd", 829 830 .major = RFD_FTL_MAJOR, 830 831 .part_bits = PART_BITS, 832 + .blksize = SECTOR_SIZE, 833 + 831 834 .readsect = rfd_ftl_readsect, 832 835 .writesect = rfd_ftl_writesect, 833 836 .getgeo = rfd_ftl_getgeo,
+3 -4
drivers/mtd/ssfdc.c
··· 172 172 173 173 ops.mode = MTD_OOB_RAW; 174 174 ops.ooboffs = 0; 175 - ops.ooblen = mtd->oobsize; 176 - ops.len = OOB_SIZE; 175 + ops.ooblen = OOB_SIZE; 177 176 ops.oobbuf = buf; 178 177 ops.datbuf = NULL; 179 178 180 179 ret = mtd->read_oob(mtd, offs, &ops); 181 - if (ret < 0 || ops.retlen != OOB_SIZE) 180 + if (ret < 0 || ops.oobretlen != OOB_SIZE) 182 181 return -1; 183 182 184 183 return 0; ··· 311 312 312 313 ssfdc->mbd.mtd = mtd; 313 314 ssfdc->mbd.devnum = -1; 314 - ssfdc->mbd.blksize = SECTOR_SIZE; 315 315 ssfdc->mbd.tr = tr; 316 316 ssfdc->mbd.readonly = 1; 317 317 ··· 445 447 .name = "ssfdc", 446 448 .major = SSFDCR_MAJOR, 447 449 .part_bits = SSFDCR_PARTN_BITS, 450 + .blksize = SECTOR_SIZE, 448 451 .getgeo = ssfdcr_getgeo, 449 452 .readsect = ssfdcr_readsect, 450 453 .add_mtd = ssfdcr_add_mtd,
+1 -1
drivers/net/irda/stir4200.c
··· 59 59 #include <asm/byteorder.h> 60 60 #include <asm/unaligned.h> 61 61 62 - MODULE_AUTHOR("Stephen Hemminger <shemminger@osdl.org>"); 62 + MODULE_AUTHOR("Stephen Hemminger <shemminger@linux-foundation.org>"); 63 63 MODULE_DESCRIPTION("IrDA-USB Dongle Driver for SigmaTel STIr4200"); 64 64 MODULE_LICENSE("GPL"); 65 65
+1 -1
drivers/net/skge.c
··· 60 60 #define LINK_HZ (HZ/2) 61 61 62 62 MODULE_DESCRIPTION("SysKonnect Gigabit Ethernet driver"); 63 - MODULE_AUTHOR("Stephen Hemminger <shemminger@osdl.org>"); 63 + MODULE_AUTHOR("Stephen Hemminger <shemminger@linux-foundation.org>"); 64 64 MODULE_LICENSE("GPL"); 65 65 MODULE_VERSION(DRV_VERSION); 66 66
+1 -1
drivers/net/sky2.c
··· 3691 3691 module_exit(sky2_cleanup_module); 3692 3692 3693 3693 MODULE_DESCRIPTION("Marvell Yukon 2 Gigabit Ethernet driver"); 3694 - MODULE_AUTHOR("Stephen Hemminger <shemminger@osdl.org>"); 3694 + MODULE_AUTHOR("Stephen Hemminger <shemminger@linux-foundation.org>"); 3695 3695 MODULE_LICENSE("GPL"); 3696 3696 MODULE_VERSION(DRV_VERSION);
+1 -2
drivers/pci/pci-driver.c
··· 150 150 } 151 151 152 152 /** 153 - * pci_match_device - Tell if a PCI device structure has a matching 154 - * PCI device id structure 153 + * pci_match_device - Tell if a PCI device structure has a matching PCI device id structure 155 154 * @drv: the PCI driver to match against 156 155 * @dev: the PCI device structure to match against 157 156 *
+5
drivers/pci/quirks.c
··· 1002 1002 case 0x186a: /* M6Ne notebook */ 1003 1003 asus_hides_smbus = 1; 1004 1004 } 1005 + if (dev->device == PCI_DEVICE_ID_INTEL_82865_HB) 1006 + switch (dev->subsystem_device) { 1007 + case 0x80f2: /* P4P800-X */ 1008 + asus_hides_smbus = 1; 1009 + } 1005 1010 if (dev->device == PCI_DEVICE_ID_INTEL_82915GM_HB) { 1006 1011 switch (dev->subsystem_device) { 1007 1012 case 0x1882: /* M6V notebook */
+7 -5
drivers/rtc/rtc-sh.c
··· 492 492 493 493 spin_lock_irq(&rtc->lock); 494 494 495 - /* disable alarm interrupt and clear flag */ 495 + /* disable alarm interrupt and clear the alarm flag */ 496 496 rcr1 = readb(rtc->regbase + RCR1); 497 - rcr1 &= ~RCR1_AF; 498 - writeb(rcr1 & ~RCR1_AIE, rtc->regbase + RCR1); 497 + rcr1 &= ~(RCR1_AF|RCR1_AIE); 498 + writeb(rcr1, rtc->regbase + RCR1); 499 499 500 500 rtc->rearm_aie = 0; 501 501 ··· 510 510 mon += 1; 511 511 sh_rtc_write_alarm_value(rtc, mon, RMONAR); 512 512 513 - /* Restore interrupt activation status */ 514 - writeb(rcr1, rtc->regbase + RCR1); 513 + if (wkalrm->enabled) { 514 + rcr1 |= RCR1_AIE; 515 + writeb(rcr1, rtc->regbase + RCR1); 516 + } 515 517 516 518 spin_unlock_irq(&rtc->lock); 517 519
-16
drivers/usb/core/Kconfig
··· 72 72 73 73 If you are unsure about this, say N here. 74 74 75 - config USB_MULTITHREAD_PROBE 76 - bool "USB Multi-threaded probe (EXPERIMENTAL)" 77 - depends on USB && EXPERIMENTAL 78 - default n 79 - help 80 - Say Y here if you want the USB core to spawn a new thread for 81 - every USB device that is probed. This can cause a small speedup 82 - in boot times on systems with a lot of different USB devices. 83 - 84 - This option should be safe to enable, but if any odd probing 85 - problems are found, please disable it, or dynamically turn it 86 - off in the /sys/module/usbcore/parameters/multithread_probe 87 - file 88 - 89 - When in doubt, say N. 90 - 91 75 config USB_OTG 92 76 bool 93 77 depends on USB && EXPERIMENTAL
+1 -8
drivers/usb/core/hub.c
··· 88 88 static struct task_struct *khubd_task; 89 89 90 90 /* multithreaded probe logic */ 91 - static int multithread_probe = 92 - #ifdef CONFIG_USB_MULTITHREAD_PROBE 93 - 1; 94 - #else 95 - 0; 96 - #endif 97 - module_param(multithread_probe, bool, S_IRUGO); 98 - MODULE_PARM_DESC(multithread_probe, "Run each USB device probe in a new thread"); 91 + static int multithread_probe = 0; 99 92 100 93 /* cycle leds on hubs that aren't blinking for attention */ 101 94 static int blinkenlights = 0;
+1 -1
drivers/usb/host/ohci-ep93xx.c
··· 169 169 static int ohci_hcd_ep93xx_drv_suspend(struct platform_device *pdev, pm_message_t state) 170 170 { 171 171 struct usb_hcd *hcd = platform_get_drvdata(pdev); 172 - struct ochi_hcd *ohci = hcd_to_ohci(hcd); 172 + struct ohci_hcd *ohci = hcd_to_ohci(hcd); 173 173 174 174 if (time_before(jiffies, ohci->next_statechange)) 175 175 msleep(5);
+4
drivers/usb/input/hid-core.c
··· 796 796 #define USB_VENDOR_ID_LOGITECH 0x046d 797 797 #define USB_DEVICE_ID_LOGITECH_USB_RECEIVER 0xc101 798 798 799 + #define USB_VENDOR_ID_IMATION 0x0718 800 + #define USB_DEVICE_ID_DISC_STAKKA 0xd000 801 + 799 802 /* 800 803 * Alphabetically sorted blacklist by quirk type. 801 804 */ ··· 886 883 { USB_VENDOR_ID_GTCO_IPANEL_1, USB_DEVICE_ID_GTCO_10, HID_QUIRK_IGNORE }, 887 884 { USB_VENDOR_ID_GTCO_IPANEL_2, USB_DEVICE_ID_GTCO_8, HID_QUIRK_IGNORE }, 888 885 { USB_VENDOR_ID_GTCO_IPANEL_2, USB_DEVICE_ID_GTCO_d, HID_QUIRK_IGNORE }, 886 + { USB_VENDOR_ID_IMATION, USB_DEVICE_ID_DISC_STAKKA, HID_QUIRK_IGNORE }, 889 887 { USB_VENDOR_ID_KBGEAR, USB_DEVICE_ID_KBGEAR_JAMSTUDIO, HID_QUIRK_IGNORE }, 890 888 { USB_VENDOR_ID_LD, USB_DEVICE_ID_LD_CASSY, HID_QUIRK_IGNORE }, 891 889 { USB_VENDOR_ID_LD, USB_DEVICE_ID_LD_POCKETCASSY, HID_QUIRK_IGNORE },
+59 -39
drivers/usb/input/usbtouchscreen.c
··· 66 66 67 67 void (*process_pkt) (struct usbtouch_usb *usbtouch, unsigned char *pkt, int len); 68 68 int (*get_pkt_len) (unsigned char *pkt, int len); 69 - int (*read_data) (unsigned char *pkt, int *x, int *y, int *touch, int *press); 69 + int (*read_data) (struct usbtouch_usb *usbtouch, unsigned char *pkt); 70 70 int (*init) (struct usbtouch_usb *usbtouch); 71 71 }; 72 72 ··· 85 85 struct usbtouch_device_info *type; 86 86 char name[128]; 87 87 char phys[64]; 88 + 89 + int x, y; 90 + int touch, press; 88 91 }; 89 92 90 93 ··· 164 161 #define EGALAX_PKT_TYPE_REPT 0x80 165 162 #define EGALAX_PKT_TYPE_DIAG 0x0A 166 163 167 - static int egalax_read_data(unsigned char *pkt, int *x, int *y, int *touch, int *press) 164 + static int egalax_read_data(struct usbtouch_usb *dev, unsigned char *pkt) 168 165 { 169 166 if ((pkt[0] & EGALAX_PKT_TYPE_MASK) != EGALAX_PKT_TYPE_REPT) 170 167 return 0; 171 168 172 - *x = ((pkt[3] & 0x0F) << 7) | (pkt[4] & 0x7F); 173 - *y = ((pkt[1] & 0x0F) << 7) | (pkt[2] & 0x7F); 174 - *touch = pkt[0] & 0x01; 169 + dev->x = ((pkt[3] & 0x0F) << 7) | (pkt[4] & 0x7F); 170 + dev->y = ((pkt[1] & 0x0F) << 7) | (pkt[2] & 0x7F); 171 + dev->touch = pkt[0] & 0x01; 175 172 176 173 return 1; 177 174 } ··· 198 195 * PanJit Part 199 196 */ 200 197 #ifdef CONFIG_USB_TOUCHSCREEN_PANJIT 201 - static int panjit_read_data(unsigned char *pkt, int *x, int *y, int *touch, int *press) 198 + static int panjit_read_data(struct usbtouch_usb *dev, unsigned char *pkt) 202 199 { 203 - *x = ((pkt[2] & 0x0F) << 8) | pkt[1]; 204 - *y = ((pkt[4] & 0x0F) << 8) | pkt[3]; 205 - *touch = pkt[0] & 0x01; 200 + dev->x = ((pkt[2] & 0x0F) << 8) | pkt[1]; 201 + dev->y = ((pkt[4] & 0x0F) << 8) | pkt[3]; 202 + dev->touch = pkt[0] & 0x01; 206 203 207 204 return 1; 208 205 } ··· 218 215 #define MTOUCHUSB_RESET 7 219 216 #define MTOUCHUSB_REQ_CTRLLR_ID 10 220 217 221 - static int mtouch_read_data(unsigned char *pkt, int *x, int *y, int *touch, int *press) 218 + static int mtouch_read_data(struct usbtouch_usb *dev, unsigned char *pkt) 222 219 { 223 - *x = (pkt[8] << 8) | pkt[7]; 224 - *y = (pkt[10] << 8) | pkt[9]; 225 - *touch = (pkt[2] & 0x40) ? 1 : 0; 220 + dev->x = (pkt[8] << 8) | pkt[7]; 221 + dev->y = (pkt[10] << 8) | pkt[9]; 222 + dev->touch = (pkt[2] & 0x40) ? 1 : 0; 226 223 227 224 return 1; 228 225 } ··· 263 260 * ITM Part 264 261 */ 265 262 #ifdef CONFIG_USB_TOUCHSCREEN_ITM 266 - static int itm_read_data(unsigned char *pkt, int *x, int *y, int *touch, int *press) 263 + static int itm_read_data(struct usbtouch_usb *dev, unsigned char *pkt) 267 264 { 268 - *x = ((pkt[0] & 0x1F) << 7) | (pkt[3] & 0x7F); 269 - *y = ((pkt[1] & 0x1F) << 7) | (pkt[4] & 0x7F); 270 - *press = ((pkt[2] & 0x01) << 7) | (pkt[5] & 0x7F); 271 - *touch = ~pkt[7] & 0x20; 265 + int touch; 266 + /* 267 + * ITM devices report invalid x/y data if not touched. 268 + * if the screen was touched before but is not touched any more 269 + * report touch as 0 with the last valid x/y data once. then stop 270 + * reporting data until touched again. 
271 + */ 272 + dev->press = ((pkt[2] & 0x01) << 7) | (pkt[5] & 0x7F); 272 273 273 - return *touch; 274 + touch = ~pkt[7] & 0x20; 275 + if (!touch) { 276 + if (dev->touch) { 277 + dev->touch = 0; 278 + return 1; 279 + } 280 + 281 + return 0; 282 + } 283 + 284 + dev->x = ((pkt[0] & 0x1F) << 7) | (pkt[3] & 0x7F); 285 + dev->y = ((pkt[1] & 0x1F) << 7) | (pkt[4] & 0x7F); 286 + dev->touch = touch; 287 + 288 + return 1; 274 289 } 275 290 #endif 276 291 ··· 297 276 * eTurboTouch part 298 277 */ 299 278 #ifdef CONFIG_USB_TOUCHSCREEN_ETURBO 300 - static int eturbo_read_data(unsigned char *pkt, int *x, int *y, int *touch, int *press) 279 + static int eturbo_read_data(struct usbtouch_usb *dev, unsigned char *pkt) 301 280 { 302 281 unsigned int shift; 303 282 ··· 306 285 return 0; 307 286 308 287 shift = (6 - (pkt[0] & 0x03)); 309 - *x = ((pkt[3] << 7) | pkt[4]) >> shift; 310 - *y = ((pkt[1] << 7) | pkt[2]) >> shift; 311 - *touch = (pkt[0] & 0x10) ? 1 : 0; 288 + dev->x = ((pkt[3] << 7) | pkt[4]) >> shift; 289 + dev->y = ((pkt[1] << 7) | pkt[2]) >> shift; 290 + dev->touch = (pkt[0] & 0x10) ? 1 : 0; 312 291 313 292 return 1; 314 293 } ··· 328 307 * Gunze part 329 308 */ 330 309 #ifdef CONFIG_USB_TOUCHSCREEN_GUNZE 331 - static int gunze_read_data(unsigned char *pkt, int *x, int *y, int *touch, int *press) 310 + static int gunze_read_data(struct usbtouch_usb *dev, unsigned char *pkt) 332 311 { 333 312 if (!(pkt[0] & 0x80) || ((pkt[1] | pkt[2] | pkt[3]) & 0x80)) 334 313 return 0; 335 314 336 - *x = ((pkt[0] & 0x1F) << 7) | (pkt[2] & 0x7F); 337 - *y = ((pkt[1] & 0x1F) << 7) | (pkt[3] & 0x7F); 338 - *touch = pkt[0] & 0x20; 315 + dev->x = ((pkt[0] & 0x1F) << 7) | (pkt[2] & 0x7F); 316 + dev->y = ((pkt[1] & 0x1F) << 7) | (pkt[3] & 0x7F); 317 + dev->touch = pkt[0] & 0x20; 339 318 340 319 return 1; 341 320 } ··· 404 383 } 405 384 406 385 407 - static int dmc_tsc10_read_data(unsigned char *pkt, int *x, int *y, int *touch, int *press) 386 + static int dmc_tsc10_read_data(struct usbtouch_usb *dev, unsigned char *pkt) 408 387 { 409 - *x = ((pkt[2] & 0x03) << 8) | pkt[1]; 410 - *y = ((pkt[4] & 0x03) << 8) | pkt[3]; 411 - *touch = pkt[0] & 0x01; 388 + dev->x = ((pkt[2] & 0x03) << 8) | pkt[1]; 389 + dev->y = ((pkt[4] & 0x03) << 8) | pkt[3]; 390 + dev->touch = pkt[0] & 0x01; 412 391 413 392 return 1; 414 393 } ··· 513 492 static void usbtouch_process_pkt(struct usbtouch_usb *usbtouch, 514 493 unsigned char *pkt, int len) 515 494 { 516 - int x, y, touch, press; 517 495 struct usbtouch_device_info *type = usbtouch->type; 518 496 519 - if (!type->read_data(pkt, &x, &y, &touch, &press)) 497 + if (!type->read_data(usbtouch, pkt)) 520 498 return; 521 499 522 - input_report_key(usbtouch->input, BTN_TOUCH, touch); 500 + input_report_key(usbtouch->input, BTN_TOUCH, usbtouch->touch); 523 501 524 502 if (swap_xy) { 525 - input_report_abs(usbtouch->input, ABS_X, y); 526 - input_report_abs(usbtouch->input, ABS_Y, x); 503 + input_report_abs(usbtouch->input, ABS_X, usbtouch->y); 504 + input_report_abs(usbtouch->input, ABS_Y, usbtouch->x); 527 505 } else { 528 - input_report_abs(usbtouch->input, ABS_X, x); 529 - input_report_abs(usbtouch->input, ABS_Y, y); 506 + input_report_abs(usbtouch->input, ABS_X, usbtouch->x); 507 + input_report_abs(usbtouch->input, ABS_Y, usbtouch->y); 530 508 } 531 509 if (type->max_press) 532 - input_report_abs(usbtouch->input, ABS_PRESSURE, press); 510 + input_report_abs(usbtouch->input, ABS_PRESSURE, usbtouch->press); 533 511 input_sync(usbtouch->input); 534 512 } 535 513
+13 -5
drivers/usb/net/asix.c
··· 898 898 899 899 static int ax88772_bind(struct usbnet *dev, struct usb_interface *intf) 900 900 { 901 - int ret; 901 + int ret, embd_phy; 902 902 void *buf; 903 903 u16 rx_ctl; 904 904 struct asix_data *data = (struct asix_data *)&dev->data; ··· 919 919 AX_GPIO_RSE | AX_GPIO_GPO_2 | AX_GPIO_GPO2EN, 5)) < 0) 920 920 goto out2; 921 921 922 + /* 0x10 is the phy id of the embedded 10/100 ethernet phy */ 923 + embd_phy = ((asix_get_phy_addr(dev) & 0x1f) == 0x10 ? 1 : 0); 922 924 if ((ret = asix_write_cmd(dev, AX_CMD_SW_PHY_SELECT, 923 - 1, 0, 0, buf)) < 0) { 925 + embd_phy, 0, 0, buf)) < 0) { 924 926 dbg("Select PHY #1 failed: %d", ret); 925 927 goto out2; 926 928 } 927 929 928 - if ((ret = asix_sw_reset(dev, AX_SWRESET_IPPD)) < 0) 930 + if ((ret = asix_sw_reset(dev, AX_SWRESET_IPPD | AX_SWRESET_PRL)) < 0) 929 931 goto out2; 930 932 931 933 msleep(150); ··· 935 933 goto out2; 936 934 937 935 msleep(150); 938 - if ((ret = asix_sw_reset(dev, AX_SWRESET_IPRL | AX_SWRESET_PRL)) < 0) 939 - goto out2; 936 + if (embd_phy) { 937 + if ((ret = asix_sw_reset(dev, AX_SWRESET_IPRL)) < 0) 938 + goto out2; 939 + } 940 + else { 941 + if ((ret = asix_sw_reset(dev, AX_SWRESET_PRTE)) < 0) 942 + goto out2; 943 + } 940 944 941 945 msleep(150); 942 946 rx_ctl = asix_read_rx_ctl(dev);
+14 -9
drivers/usb/net/rndis_host.c
··· 379 379 { 380 380 int retval; 381 381 struct net_device *net = dev->net; 382 + struct cdc_state *info = (void *) &dev->data; 382 383 union { 383 384 void *buf; 384 385 struct rndis_msg_hdr *header; ··· 398 397 return -ENOMEM; 399 398 retval = usbnet_generic_cdc_bind(dev, intf); 400 399 if (retval < 0) 401 - goto done; 400 + goto fail; 402 401 403 402 net->hard_header_len += sizeof (struct rndis_data_hdr); 404 403 ··· 413 412 if (unlikely(retval < 0)) { 414 413 /* it might not even be an RNDIS device!! */ 415 414 dev_err(&intf->dev, "RNDIS init failed, %d\n", retval); 416 - fail: 417 - usb_driver_release_interface(driver_of(intf), 418 - ((struct cdc_state *)&(dev->data))->data); 419 - goto done; 415 + goto fail_and_release; 420 416 } 421 417 dev->hard_mtu = le32_to_cpu(u.init_c->max_transfer_size); 422 418 /* REVISIT: peripheral "alignment" request is ignored ... */ ··· 429 431 retval = rndis_command(dev, u.header); 430 432 if (unlikely(retval < 0)) { 431 433 dev_err(&intf->dev, "rndis get ethaddr, %d\n", retval); 432 - goto fail; 434 + goto fail_and_release; 433 435 } 434 436 tmp = le32_to_cpu(u.get_c->offset); 435 437 if (unlikely((tmp + 8) > (1024 - ETH_ALEN) ··· 437 439 dev_err(&intf->dev, "rndis ethaddr off %d len %d ?\n", 438 440 tmp, le32_to_cpu(u.get_c->len)); 439 441 retval = -EDOM; 440 - goto fail; 442 + goto fail_and_release; 441 443 } 442 444 memcpy(net->dev_addr, tmp + (char *)&u.get_c->request_id, ETH_ALEN); 443 445 ··· 453 455 retval = rndis_command(dev, u.header); 454 456 if (unlikely(retval < 0)) { 455 457 dev_err(&intf->dev, "rndis set packet filter, %d\n", retval); 456 - goto fail; 458 + goto fail_and_release; 457 459 } 458 460 459 461 retval = 0; 460 - done: 462 + 463 + kfree(u.buf); 464 + return retval; 465 + 466 + fail_and_release: 467 + usb_set_intfdata(info->data, NULL); 468 + usb_driver_release_interface(driver_of(intf), info->data); 469 + fail: 461 470 kfree(u.buf); 462 471 return retval; 463 472 }
+1 -1
drivers/usb/serial/funsoft.c
··· 27 27 static int funsoft_ioctl(struct usb_serial_port *port, struct file *file, 28 28 unsigned int cmd, unsigned long arg) 29 29 { 30 - struct termios t; 30 + struct ktermios t; 31 31 32 32 dbg("%s - port %d, cmd 0x%04x", __FUNCTION__, port->number, cmd); 33 33
+3
drivers/usb/serial/option.c
··· 78 78 #define OPTION_PRODUCT_FUSION2 0x6300 79 79 #define OPTION_PRODUCT_COBRA 0x6500 80 80 #define OPTION_PRODUCT_COBRA2 0x6600 81 + #define OPTION_PRODUCT_GTMAX36 0x6701 81 82 #define HUAWEI_PRODUCT_E600 0x1001 82 83 #define HUAWEI_PRODUCT_E220 0x1003 83 84 #define AUDIOVOX_PRODUCT_AIRCARD 0x0112 ··· 91 90 { USB_DEVICE(OPTION_VENDOR_ID, OPTION_PRODUCT_FUSION2) }, 92 91 { USB_DEVICE(OPTION_VENDOR_ID, OPTION_PRODUCT_COBRA) }, 93 92 { USB_DEVICE(OPTION_VENDOR_ID, OPTION_PRODUCT_COBRA2) }, 93 + { USB_DEVICE(OPTION_VENDOR_ID, OPTION_PRODUCT_GTMAX36) }, 94 94 { USB_DEVICE(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_E600) }, 95 95 { USB_DEVICE(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_E220) }, 96 96 { USB_DEVICE(AUDIOVOX_VENDOR_ID, AUDIOVOX_PRODUCT_AIRCARD) }, ··· 106 104 { USB_DEVICE(OPTION_VENDOR_ID, OPTION_PRODUCT_FUSION2) }, 107 105 { USB_DEVICE(OPTION_VENDOR_ID, OPTION_PRODUCT_COBRA) }, 108 106 { USB_DEVICE(OPTION_VENDOR_ID, OPTION_PRODUCT_COBRA2) }, 107 + { USB_DEVICE(OPTION_VENDOR_ID, OPTION_PRODUCT_GTMAX36) }, 109 108 { USB_DEVICE(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_E600) }, 110 109 { USB_DEVICE(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_E220) }, 111 110 { USB_DEVICE(AUDIOVOX_VENDOR_ID, AUDIOVOX_PRODUCT_AIRCARD) },
+19
drivers/usb/storage/unusual_devs.h
··· 197 197 US_SC_DEVICE, US_PR_DEVICE, NULL, 198 198 US_FL_MAX_SECTORS_64 ), 199 199 200 + /* Reported by Manuel Osdoba <manuel.osdoba@tu-ilmenau.de> */ 201 + UNUSUAL_DEV( 0x0421, 0x0492, 0x0452, 0x0452, 202 + "Nokia", 203 + "Nokia 6233", 204 + US_SC_DEVICE, US_PR_DEVICE, NULL, 205 + US_FL_MAX_SECTORS_64 ), 206 + 200 207 /* Reported by Alex Corcoles <alex@corcoles.net> */ 201 208 UNUSUAL_DEV( 0x0421, 0x0495, 0x0370, 0x0370, 202 209 "Nokia", ··· 260 253 "Rio Karma", 261 254 US_SC_SCSI, US_PR_KARMA, rio_karma_init, 0), 262 255 #endif 256 + 257 + /* 258 + * This virtual floppy is found in Sun equipment (x4600, x4200m2, etc.) 259 + * Reported by Pete Zaitcev <zaitcev@redhat.com> 260 + * This device chokes on both version of MODE SENSE which we have, so 261 + * use_10_for_ms is not effective, and we use US_FL_NO_WP_DETECT. 262 + */ 263 + UNUSUAL_DEV( 0x046b, 0xff40, 0x0100, 0x0100, 264 + "AMI", 265 + "Virtual Floppy", 266 + US_SC_DEVICE, US_PR_DEVICE, NULL, 267 + US_FL_NO_WP_DETECT), 263 268 264 269 /* Patch submitted by Philipp Friedrich <philipp@void.at> */ 265 270 UNUSUAL_DEV( 0x0482, 0x0100, 0x0100, 0x0100,
+9 -1
fs/block_dev.c
··· 146 146 iocb->ki_nbytes = -EIO; 147 147 148 148 if (atomic_dec_and_test(bio_count)) { 149 - if (iocb->ki_nbytes < 0) 149 + if ((long)iocb->ki_nbytes < 0) 150 150 aio_complete(iocb, iocb->ki_nbytes, 0); 151 151 else 152 152 aio_complete(iocb, iocb->ki_left, 0); ··· 188 188 pvec->idx = 0; 189 189 } 190 190 return pvec->page[pvec->idx++]; 191 + } 192 + 193 + /* return a page back to pvec array */ 194 + static void blk_unget_page(struct page *page, struct pvec *pvec) 195 + { 196 + pvec->page[--pvec->idx] = page; 191 197 } 192 198 193 199 static ssize_t ··· 284 278 count = min(count, nbytes); 285 279 goto same_bio; 286 280 } 281 + } else { 282 + blk_unget_page(page, &pvec); 287 283 } 288 284 289 285 /* bio is ready, submit it */
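The first block_dev.c hunk casts ki_nbytes to long before testing its sign: the field is a size_t, so without the cast an error code stored in it never compares below zero. A tiny sketch of the pitfall, as a standalone helper:

#include <linux/types.h>

/* An error code stashed in an unsigned size_t only looks negative
 * after a signed cast. */
static inline int demo_nbytes_is_error(size_t nbytes)
{
	return (long)nbytes < 0;	/* true for e.g. (size_t)-EIO */
}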
+2 -1
fs/jffs/jffs_fm.c
··· 17 17 * 18 18 */ 19 19 #include <linux/slab.h> 20 + #include <linux/err.h> 20 21 #include <linux/blkdev.h> 21 22 #include <linux/jffs.h> 22 23 #include "jffs_fm.h" ··· 105 104 106 105 mtd = get_mtd_device(NULL, unit); 107 106 108 - if (!mtd) { 107 + if (IS_ERR(mtd)) { 109 108 kfree(fmc); 110 109 DJM(no_jffs_fmcontrol--); 111 110 return NULL;
+2 -2
fs/jffs2/debug.c
··· 178 178 while (ref2) { 179 179 uint32_t totlen = ref_totlen(c, jeb, ref2); 180 180 181 - if (ref2->flash_offset < jeb->offset || 182 - ref2->flash_offset > jeb->offset + c->sector_size) { 181 + if (ref_offset(ref2) < jeb->offset || 182 + ref_offset(ref2) > jeb->offset + c->sector_size) { 183 183 JFFS2_ERROR("node_ref %#08x shouldn't be in block at %#08x.\n", 184 184 ref_offset(ref2), jeb->offset); 185 185 goto error;
+1
fs/jffs2/debug.h
··· 13 13 #ifndef _JFFS2_DEBUG_H_ 14 14 #define _JFFS2_DEBUG_H_ 15 15 16 + #include <linux/sched.h> 16 17 17 18 #ifndef CONFIG_JFFS2_FS_DEBUG 18 19 #define CONFIG_JFFS2_FS_DEBUG 0
+1 -2
fs/jffs2/fs.c
··· 502 502 if (ret) 503 503 return ret; 504 504 505 - c->inocache_list = kmalloc(INOCACHE_HASHSIZE * sizeof(struct jffs2_inode_cache *), GFP_KERNEL); 505 + c->inocache_list = kcalloc(INOCACHE_HASHSIZE, sizeof(struct jffs2_inode_cache *), GFP_KERNEL); 506 506 if (!c->inocache_list) { 507 507 ret = -ENOMEM; 508 508 goto out_wbuf; 509 509 } 510 - memset(c->inocache_list, 0, INOCACHE_HASHSIZE * sizeof(struct jffs2_inode_cache *)); 511 510 512 511 jffs2_init_xattr_subsystem(c); 513 512
+2
fs/jffs2/gc.c
··· 838 838 839 839 for (raw = f->inocache->nodes; raw != (void *)f->inocache; raw = raw->next_in_ino) { 840 840 841 + cond_resched(); 842 + 841 843 /* We only care about obsolete ones */ 842 844 if (!(ref_obsolete(raw))) 843 845 continue;
+4 -6
fs/jffs2/nodelist.h
··· 294 294 295 295 static inline struct jffs2_node_frag *frag_first(struct rb_root *root) 296 296 { 297 - struct rb_node *node = root->rb_node; 297 + struct rb_node *node = rb_first(root); 298 298 299 299 if (!node) 300 300 return NULL; 301 - while(node->rb_left) 302 - node = node->rb_left; 301 + 303 302 return rb_entry(node, struct jffs2_node_frag, rb); 304 303 } 305 304 306 305 static inline struct jffs2_node_frag *frag_last(struct rb_root *root) 307 306 { 308 - struct rb_node *node = root->rb_node; 307 + struct rb_node *node = rb_last(root); 309 308 310 309 if (!node) 311 310 return NULL; 312 - while(node->rb_right) 313 - node = node->rb_right; 311 + 314 312 return rb_entry(node, struct jffs2_node_frag, rb); 315 313 } 316 314
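frag_first() and frag_last() now lean on the rbtree library's rb_first()/rb_last() rather than walking rb_left/rb_right by hand. The same pattern for a generic tree, with a hypothetical container type:

#include <linux/rbtree.h>

struct demo_item {			/* hypothetical container */
	struct rb_node rb;
	int key;
};

static struct demo_item *demo_first(struct rb_root *root)
{
	struct rb_node *node = rb_first(root);	/* leftmost node, if any */

	return node ? rb_entry(node, struct demo_item, rb) : NULL;
}

static struct demo_item *demo_last(struct rb_root *root)
{
	struct rb_node *node = rb_last(root);	/* rightmost node, if any */

	return node ? rb_entry(node, struct demo_item, rb) : NULL;
}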
+1 -2
fs/jffs2/readinode.c
··· 944 944 int jffs2_do_crccheck_inode(struct jffs2_sb_info *c, struct jffs2_inode_cache *ic) 945 945 { 946 946 struct jffs2_raw_inode n; 947 - struct jffs2_inode_info *f = kmalloc(sizeof(*f), GFP_KERNEL); 947 + struct jffs2_inode_info *f = kzalloc(sizeof(*f), GFP_KERNEL); 948 948 int ret; 949 949 950 950 if (!f) 951 951 return -ENOMEM; 952 952 953 - memset(f, 0, sizeof(*f)); 954 953 init_MUTEX_LOCKED(&f->sem); 955 954 f->inocache = ic; 956 955
+4 -2
fs/jffs2/scan.c
··· 128 128 } 129 129 130 130 if (jffs2_sum_active()) { 131 - s = kmalloc(sizeof(struct jffs2_summary), GFP_KERNEL); 131 + s = kzalloc(sizeof(struct jffs2_summary), GFP_KERNEL); 132 132 if (!s) { 133 + kfree(flashbuf); 133 134 JFFS2_WARNING("Can't allocate memory for summary\n"); 134 135 return -ENOMEM; 135 136 } 136 - memset(s, 0, sizeof(struct jffs2_summary)); 137 137 } 138 138 139 139 for (i=0; i<c->nr_blocks; i++) { 140 140 struct jffs2_eraseblock *jeb = &c->blocks[i]; 141 + 142 + cond_resched(); 141 143 142 144 /* reset summary info for next eraseblock scan */ 143 145 jffs2_sum_reset_collected(s);
+3 -3
fs/jffs2/summary.c
··· 26 26 27 27 int jffs2_sum_init(struct jffs2_sb_info *c) 28 28 { 29 - c->summary = kmalloc(sizeof(struct jffs2_summary), GFP_KERNEL); 29 + c->summary = kzalloc(sizeof(struct jffs2_summary), GFP_KERNEL); 30 30 31 31 if (!c->summary) { 32 32 JFFS2_WARNING("Can't allocate memory for summary information!\n"); 33 33 return -ENOMEM; 34 34 } 35 - 36 - memset(c->summary, 0, sizeof(struct jffs2_summary)); 37 35 38 36 c->summary->sum_buf = vmalloc(c->sector_size); 39 37 ··· 395 397 396 398 for (i=0; i<je32_to_cpu(summary->sum_num); i++) { 397 399 dbg_summary("processing summary index %d\n", i); 400 + 401 + cond_resched(); 398 402 399 403 /* Make sure there's a spare ref for dirty space */ 400 404 err = jffs2_prealloc_raw_node_refs(c, jeb, 2);
+4 -3
fs/jffs2/super.c
··· 17 17 #include <linux/init.h> 18 18 #include <linux/list.h> 19 19 #include <linux/fs.h> 20 + #include <linux/err.h> 20 21 #include <linux/mount.h> 21 22 #include <linux/jffs2.h> 22 23 #include <linux/pagemap.h> ··· 185 184 struct mtd_info *mtd; 186 185 187 186 mtd = get_mtd_device(NULL, mtdnr); 188 - if (!mtd) { 187 + if (IS_ERR(mtd)) { 189 188 D1(printk(KERN_DEBUG "jffs2: MTD device #%u doesn't appear to exist\n", mtdnr)); 190 - return -EINVAL; 189 + return PTR_ERR(mtd); 191 190 } 192 191 193 192 return jffs2_get_sb_mtd(fs_type, flags, dev_name, data, mtd, mnt); ··· 222 221 D1(printk(KERN_DEBUG "jffs2_get_sb(): mtd:%%s, name \"%s\"\n", dev_name+4)); 223 222 for (mtdnr = 0; mtdnr < MAX_MTD_DEVICES; mtdnr++) { 224 223 mtd = get_mtd_device(NULL, mtdnr); 225 - if (mtd) { 224 + if (!IS_ERR(mtd)) { 226 225 if (!strcmp(mtd->name, dev_name+4)) 227 226 return jffs2_get_sb_mtd(fs_type, flags, dev_name, data, mtd, mnt); 228 227 put_mtd_device(mtd);
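Both jffs/jffs_fm.c and jffs2/super.c now expect get_mtd_device() to report failure through ERR_PTR() rather than NULL, so callers test with IS_ERR() and propagate PTR_ERR(). A sketch of the updated calling convention (hypothetical wrapper, not kernel code):

#include <linux/err.h>
#include <linux/mtd/mtd.h>

static int demo_open_mtd(int mtdnr, struct mtd_info **out)
{
	struct mtd_info *mtd = get_mtd_device(NULL, mtdnr);

	if (IS_ERR(mtd))		/* no longer a plain NULL check */
		return PTR_ERR(mtd);

	*out = mtd;			/* balance with put_mtd_device() later */
	return 0;
}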
+1 -1
fs/jffs2/symlink.c
··· 51 51 */ 52 52 53 53 if (!p) { 54 - printk(KERN_ERR "jffs2_follow_link(): can't find symlink taerget\n"); 54 + printk(KERN_ERR "jffs2_follow_link(): can't find symlink target\n"); 55 55 p = ERR_PTR(-EIO); 56 56 } 57 57 D1(printk(KERN_DEBUG "jffs2_follow_link(): target path is '%s'\n", (char *) f->target));
+9 -12
fs/jffs2/wbuf.c
··· 969 969 int oobsize = c->mtd->oobsize; 970 970 struct mtd_oob_ops ops; 971 971 972 - ops.len = NR_OOB_SCAN_PAGES * oobsize; 973 - ops.ooblen = oobsize; 972 + ops.ooblen = NR_OOB_SCAN_PAGES * oobsize; 974 973 ops.oobbuf = c->oobbuf; 975 974 ops.ooboffs = 0; 976 975 ops.datbuf = NULL; ··· 982 983 return ret; 983 984 } 984 985 985 - if (ops.retlen < ops.len) { 986 + if (ops.oobretlen < ops.ooblen) { 986 987 D1(printk(KERN_WARNING "jffs2_check_oob_empty(): Read OOB " 987 988 "returned short read (%zd bytes not %d) for block " 988 - "at %08x\n", ops.retlen, ops.len, jeb->offset)); 989 + "at %08x\n", ops.oobretlen, ops.ooblen, jeb->offset)); 989 990 return -EIO; 990 991 } 991 992 ··· 1004 1005 } 1005 1006 1006 1007 /* we know, we are aligned :) */ 1007 - for (page = oobsize; page < ops.len; page += sizeof(long)) { 1008 + for (page = oobsize; page < ops.ooblen; page += sizeof(long)) { 1008 1009 long dat = *(long *)(&ops.oobbuf[page]); 1009 1010 if(dat != -1) 1010 1011 return 1; ··· 1032 1033 return 2; 1033 1034 } 1034 1035 1035 - ops.len = oobsize; 1036 1036 ops.ooblen = oobsize; 1037 1037 ops.oobbuf = c->oobbuf; 1038 1038 ops.ooboffs = 0; ··· 1046 1048 return ret; 1047 1049 } 1048 1050 1049 - if (ops.retlen < ops.len) { 1051 + if (ops.oobretlen < ops.ooblen) { 1050 1052 D1 (printk (KERN_WARNING "jffs2_check_nand_cleanmarker(): " 1051 1053 "Read OOB return short read (%zd bytes not %d) " 1052 - "for block at %08x\n", ops.retlen, ops.len, 1054 + "for block at %08x\n", ops.oobretlen, ops.ooblen, 1053 1055 jeb->offset)); 1054 1056 return -EIO; 1055 1057 } ··· 1088 1090 n.nodetype = cpu_to_je16(JFFS2_NODETYPE_CLEANMARKER); 1089 1091 n.totlen = cpu_to_je32(8); 1090 1092 1091 - ops.len = c->fsdata_len; 1092 - ops.ooblen = c->fsdata_len;; 1093 + ops.ooblen = c->fsdata_len; 1093 1094 ops.oobbuf = (uint8_t *)&n; 1094 1095 ops.ooboffs = c->fsdata_pos; 1095 1096 ops.datbuf = NULL; ··· 1102 1105 jeb->offset, ret)); 1103 1106 return ret; 1104 1107 } 1105 - if (ops.retlen != ops.len) { 1108 + if (ops.oobretlen != ops.ooblen) { 1106 1109 D1(printk(KERN_WARNING "jffs2_write_nand_cleanmarker(): " 1107 1110 "Short write for block at %08x: %zd not %d\n", 1108 - jeb->offset, ops.retlen, ops.len)); 1111 + jeb->offset, ops.oobretlen, ops.ooblen)); 1109 1112 return -EIO; 1110 1113 } 1111 1114 return 0;
+2 -3
fs/jffs2/xattr.c
··· 399 399 { 400 400 /* must be called under down_write(xattr_sem) */ 401 401 if (atomic_dec_and_lock(&xd->refcnt, &c->erase_completion_lock)) { 402 - uint32_t xid = xd->xid, version = xd->version; 403 - 404 402 unload_xattr_datum(c, xd); 405 403 xd->flags |= JFFS2_XFLAGS_DEAD; 406 404 if (xd->node == (void *)xd) { ··· 409 411 } 410 412 spin_unlock(&c->erase_completion_lock); 411 413 412 - dbg_xattr("xdatum(xid=%u, version=%u) was removed.\n", xid, version); 414 + dbg_xattr("xdatum(xid=%u, version=%u) was removed.\n", 415 + xd->xid, xd->version); 413 416 } 414 417 } 415 418
+19 -1
fs/reiserfs/file.c
··· 48 48 } 49 49 50 50 mutex_lock(&inode->i_mutex); 51 + 52 + mutex_lock(&(REISERFS_I(inode)->i_mmap)); 53 + if (REISERFS_I(inode)->i_flags & i_ever_mapped) 54 + REISERFS_I(inode)->i_flags &= ~i_pack_on_close_mask; 55 + 51 56 reiserfs_write_lock(inode->i_sb); 52 57 /* freeing preallocation only involves relogging blocks that 53 58 * are already in the current transaction. preallocation gets ··· 105 100 err = reiserfs_truncate_file(inode, 0); 106 101 } 107 102 out: 103 + mutex_unlock(&(REISERFS_I(inode)->i_mmap)); 108 104 mutex_unlock(&inode->i_mutex); 109 105 reiserfs_write_unlock(inode->i_sb); 110 106 return err; 107 + } 108 + 109 + static int reiserfs_file_mmap(struct file *file, struct vm_area_struct *vma) 110 + { 111 + struct inode *inode; 112 + 113 + inode = file->f_path.dentry->d_inode; 114 + mutex_lock(&(REISERFS_I(inode)->i_mmap)); 115 + REISERFS_I(inode)->i_flags |= i_ever_mapped; 116 + mutex_unlock(&(REISERFS_I(inode)->i_mmap)); 117 + 118 + return generic_file_mmap(file, vma); 111 119 } 112 120 113 121 static void reiserfs_vfs_truncate_file(struct inode *inode) ··· 1545 1527 #ifdef CONFIG_COMPAT 1546 1528 .compat_ioctl = reiserfs_compat_ioctl, 1547 1529 #endif 1548 - .mmap = generic_file_mmap, 1530 + .mmap = reiserfs_file_mmap, 1549 1531 .open = generic_file_open, 1550 1532 .release = reiserfs_file_release, 1551 1533 .fsync = reiserfs_sync_file,
+2
fs/reiserfs/inode.c
··· 1125 1125 REISERFS_I(inode)->i_prealloc_count = 0; 1126 1126 REISERFS_I(inode)->i_trans_id = 0; 1127 1127 REISERFS_I(inode)->i_jl = NULL; 1128 + mutex_init(&(REISERFS_I(inode)->i_mmap)); 1128 1129 reiserfs_init_acl_access(inode); 1129 1130 reiserfs_init_acl_default(inode); 1130 1131 reiserfs_init_xattr_rwsem(inode); ··· 1833 1832 REISERFS_I(inode)->i_attrs = 1834 1833 REISERFS_I(dir)->i_attrs & REISERFS_INHERIT_MASK; 1835 1834 sd_attrs_to_i_attrs(REISERFS_I(inode)->i_attrs, inode); 1835 + mutex_init(&(REISERFS_I(inode)->i_mmap)); 1836 1836 reiserfs_init_acl_access(inode); 1837 1837 reiserfs_init_acl_default(inode); 1838 1838 reiserfs_init_xattr_rwsem(inode);
+1
include/asm-i386/processor.h
··· 743 743 extern int sysenter_setup(void); 744 744 745 745 extern int init_gdt(int cpu, struct task_struct *idle); 746 + extern void cpu_set_gdt(int); 746 747 extern void secondary_cpu_init(void); 747 748 748 749 #endif /* __ASM_I386_PROCESSOR_H */
+3 -3
include/asm-ia64/checksum.h
··· 72 72 73 73 #define _HAVE_ARCH_IPV6_CSUM 1 74 74 struct in6_addr; 75 - extern unsigned short int csum_ipv6_magic(struct in6_addr *saddr, 76 - struct in6_addr *daddr, __u32 len, unsigned short proto, 77 - unsigned int csum); 75 + extern __sum16 csum_ipv6_magic(const struct in6_addr *saddr, 76 + const struct in6_addr *daddr, __u32 len, unsigned short proto, 77 + __wsum csum); 78 78 79 79 #endif /* _ASM_IA64_CHECKSUM_H */
+22
include/asm-mips/irqflags.h
··· 15 15 16 16 #include <asm/hazards.h> 17 17 18 + /* 19 + * CONFIG_MIPS_MT_SMTC_INSTANT_REPLAY does prompt replay of deferred IPIs, 20 + * at the cost of branch and call overhead on each local_irq_restore() 21 + */ 22 + 23 + #ifdef CONFIG_MIPS_MT_SMTC_INSTANT_REPLAY 24 + 25 + extern void smtc_ipi_replay(void); 26 + 27 + #define irq_restore_epilog(flags) \ 28 + do { \ 29 + if (!(flags & 0x0400)) \ 30 + smtc_ipi_replay(); \ 31 + } while (0) 32 + 33 + #else 34 + 35 + #define irq_restore_epilog(ignore) do { } while (0) 36 + 37 + #endif /* CONFIG_MIPS_MT_SMTC_INSTANT_REPLAY */ 38 + 18 39 __asm__ ( 19 40 " .macro raw_local_irq_enable \n" 20 41 " .set push \n" ··· 214 193 : "=r" (__tmp1) \ 215 194 : "0" (flags) \ 216 195 : "memory"); \ 196 + irq_restore_epilog(flags); \ 217 197 } while(0) 218 198 219 199 static inline int raw_irqs_disabled_flags(unsigned long flags)
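The irqflags.h addition hooks raw_local_irq_restore(): when CONFIG_MIPS_MT_SMTC_INSTANT_REPLAY is set and the restored flags leave interrupts enabled, IPIs deferred while they were masked are replayed at once, at the cost of a test and possible call on every restore. A C-level restatement of the epilog, assuming (as the macro appears to) that bit 0x0400 in the saved flags means interrupts are masked for the thread context:

extern void smtc_ipi_replay(void);	/* provided by the SMTC core */

static inline void demo_irq_restore_epilog(unsigned long flags)
{
	/* Interrupts are live again after the restore: flush deferred IPIs. */
	if (!(flags & 0x0400))
		smtc_ipi_replay();
}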
-1
include/linux/Kbuild
··· 129 129 header-y += ppdev.h 130 130 header-y += prctl.h 131 131 header-y += ps2esdi.h 132 - header-y += qic117.h 133 132 header-y += qnxtypes.h 134 133 header-y += quotaio_v1.h 135 134 header-y += quotaio_v2.h
+2 -1
include/linux/mtd/blktrans.h
··· 24 24 struct mtd_info *mtd; 25 25 struct mutex lock; 26 26 int devnum; 27 - int blksize; 28 27 unsigned long size; 29 28 int readonly; 30 29 void *blkcore_priv; /* gendisk in 2.5, devfs_handle in 2.4 */ ··· 35 36 char *name; 36 37 int major; 37 38 int part_bits; 39 + int blksize; 40 + int blkshift; 38 41 39 42 /* Access functions */ 40 43 int (*readsect)(struct mtd_blktrans_dev *dev,
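blksize moves from the per-device structure to struct mtd_blktrans_ops, next to the new blkshift; the shift is presumably log2(blksize), so the request path can convert 512-byte sectors with shifts instead of divisions. A hypothetical sketch of that arithmetic (the fields are from the hunk, the helper is not):

    /* Convert a 512-byte-sector offset into translation-layer blocks. */
    static unsigned long example_sector_to_block(struct mtd_blktrans_ops *tr,
                                                 unsigned long sector)
    {
            return (sector << 9) >> tr->blkshift;
    }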
+16 -8
include/linux/mtd/mtd.h
··· 23 23 24 24 #define MTD_CHAR_MAJOR 90 25 25 #define MTD_BLOCK_MAJOR 31 26 - #define MAX_MTD_DEVICES 16 26 + #define MAX_MTD_DEVICES 32 27 27 28 28 #define MTD_ERASE_PENDING 0x01 29 29 #define MTD_ERASING 0x02 ··· 75 75 * struct mtd_oob_ops - oob operation operands 76 76 * @mode: operation mode 77 77 * 78 - * @len: number of bytes to write/read. When a data buffer is given 79 - * (datbuf != NULL) this is the number of data bytes. When 80 - * no data buffer is available this is the number of oob bytes. 78 + * @len: number of data bytes to write/read 81 79 * 82 - * @retlen: number of bytes written/read. When a data buffer is given 83 - * (datbuf != NULL) this is the number of data bytes. When 84 - * no data buffer is available this is the number of oob bytes. 80 + * @retlen: number of data bytes written/read 85 81 * 86 - * @ooblen: number of oob bytes per page 82 + * @ooblen: number of oob bytes to write/read 83 + * @oobretlen: number of oob bytes written/read 87 84 * @ooboffs: offset of oob data in the oob area (only relevant when 88 85 * mode = MTD_OOB_PLACE) 89 86 * @datbuf: data buffer - if NULL only oob data are read/written ··· 91 94 size_t len; 92 95 size_t retlen; 93 96 size_t ooblen; 97 + size_t oobretlen; 94 98 uint32_t ooboffs; 95 99 uint8_t *datbuf; 96 100 uint8_t *oobbuf; ··· 200 202 201 203 /* ECC status information */ 202 204 struct mtd_ecc_stats ecc_stats; 205 + /* Subpage shift (NAND) */ 206 + int subpage_sft; 203 207 204 208 void *priv; 205 209 206 210 struct module *owner; 207 211 int usecount; 212 + 213 + /* If the driver is something smart, like UBI, it may need to maintain 214 + * its own reference counting. The below functions are only for driver. 215 + * The driver may register its callbacks. These callbacks are not 216 + * supposed to be called by MTD users */ 217 + int (*get_device) (struct mtd_info *mtd); 218 + void (*put_device) (struct mtd_info *mtd); 208 219 }; 209 220 210 221 ··· 223 216 extern int del_mtd_device (struct mtd_info *mtd); 224 217 225 218 extern struct mtd_info *get_mtd_device(struct mtd_info *mtd, int num); 219 + extern struct mtd_info *get_mtd_device_nm(const char *name); 226 220 227 221 extern void put_mtd_device(struct mtd_info *mtd); 228 222
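With oobretlen separated from retlen, a caller can see how many data bytes and how many oob bytes an operation actually transferred. A minimal sketch of an oob-only read built on the operands documented above (error handling trimmed; not taken from this patch):

    #include <linux/mtd/mtd.h>

    static int example_read_first_oob(struct mtd_info *mtd,
                                      uint8_t *oob, size_t want)
    {
            struct mtd_oob_ops ops = {
                    .mode    = MTD_OOB_PLACE, /* oob bytes addressed by ooboffs   */
                    .ooblen  = want,          /* oob bytes to read                */
                    .ooboffs = 0,
                    .datbuf  = NULL,          /* NULL: no data transfer, oob only */
                    .oobbuf  = oob,
            };
            int err = mtd->read_oob(mtd, 0, &ops);

            return err ? err : ops.oobretlen; /* oob bytes actually read */
    }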
+10 -5
include/linux/mtd/nand.h
··· 166 166 * for all large page devices, as they do not support 167 167 * autoincrement.*/ 168 168 #define NAND_NO_READRDY 0x00000100 169 + /* Chip does not allow subpage writes */ 170 + #define NAND_NO_SUBPAGE_WRITE 0x00000200 171 + 169 172 170 173 /* Options valid for Samsung large page devices */ 171 174 #define NAND_SAMSUNG_LP_OPTIONS \ ··· 196 193 /* Nand scan has allocated controller struct */ 197 194 #define NAND_CONTROLLER_ALLOC 0x80000000 198 195 196 + /* Cell info constants */ 197 + #define NAND_CI_CHIPNR_MSK 0x03 198 + #define NAND_CI_CELLTYPE_MSK 0x0C 199 199 200 200 /* 201 201 * nand_state_t - chip states ··· 292 286 * struct nand_buffers - buffer structure for read/write 293 287 * @ecccalc: buffer for calculated ecc 294 288 * @ecccode: buffer for ecc read from flash 295 - * @oobwbuf: buffer for write oob data 296 289 * @databuf: buffer for data - dynamically sized 297 - * @oobrbuf: buffer to read oob data 298 290 * 299 291 * Do not change the order of buffers. databuf and oobrbuf must be in 300 292 * consecutive order. ··· 300 296 struct nand_buffers { 301 297 uint8_t ecccalc[NAND_MAX_OOBSIZE]; 302 298 uint8_t ecccode[NAND_MAX_OOBSIZE]; 303 - uint8_t oobwbuf[NAND_MAX_OOBSIZE]; 304 - uint8_t databuf[NAND_MAX_PAGESIZE]; 305 - uint8_t oobrbuf[NAND_MAX_OOBSIZE]; 299 + uint8_t databuf[NAND_MAX_PAGESIZE + NAND_MAX_OOBSIZE]; 306 300 }; 307 301 308 302 /** ··· 347 345 * @chipsize: [INTERN] the size of one chip for multichip arrays 348 346 * @pagemask: [INTERN] page number mask = number of (pages / chip) - 1 349 347 * @pagebuf: [INTERN] holds the pagenumber which is currently in data_buf 348 + * @subpagesize: [INTERN] holds the subpagesize 350 349 * @ecclayout: [REPLACEABLE] the default ecc placement scheme 351 350 * @bbt: [INTERN] bad block table pointer 352 351 * @bbt_td: [REPLACEABLE] bad block table descriptor for flash lookup ··· 395 392 unsigned long chipsize; 396 393 int pagemask; 397 394 int pagebuf; 395 + int subpagesize; 396 + uint8_t cellinfo; 398 397 int badblockpos; 399 398 400 399 nand_state_t state;
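The new cell info constants let drivers tell SLC from MLC parts by masking cellinfo; a typical (sketched, not quoted from this patch) use is to refuse subpage writes on multi-level-cell devices, which is what NAND_NO_SUBPAGE_WRITE expresses:

    #include <linux/mtd/nand.h>

    static void example_check_mlc(struct nand_chip *chip)
    {
            /* Sketch only: MLC parts cannot take subpage writes. */
            if (chip->cellinfo & NAND_CI_CELLTYPE_MSK)
                    chip->options |= NAND_NO_SUBPAGE_WRITE;
    }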
+7 -1
include/linux/mtd/onenand.h
··· 13 13 #define __LINUX_MTD_ONENAND_H 14 14 15 15 #include <linux/spinlock.h> 16 + #include <linux/completion.h> 16 17 #include <linux/mtd/onenand_regs.h> 17 18 #include <linux/mtd/bbm.h> 18 19 ··· 34 33 FL_WRITING, 35 34 FL_ERASING, 36 35 FL_SYNCING, 37 - FL_UNLOCKING, 38 36 FL_LOCKING, 39 37 FL_RESETING, 40 38 FL_OTPING, ··· 88 88 * operation is in progress 89 89 * @state: [INTERN] the current state of the OneNAND device 90 90 * @page_buf: data buffer 91 + * @subpagesize: [INTERN] holds the subpagesize 91 92 * @ecclayout: [REPLACEABLE] the default ecc placement scheme 92 93 * @bbm: [REPLACEABLE] pointer to Bad Block Management 93 94 * @priv: [OPTIONAL] pointer to private chip date ··· 121 120 int (*block_markbad)(struct mtd_info *mtd, loff_t ofs); 122 121 int (*scan_bbt)(struct mtd_info *mtd); 123 122 123 + struct completion complete; 124 + int irq; 125 + 124 126 spinlock_t chip_lock; 125 127 wait_queue_head_t wq; 126 128 onenand_state_t state; 127 129 unsigned char *page_buf; 128 130 131 + int subpagesize; 129 132 struct nand_ecclayout *ecclayout; 130 133 131 134 void *bbm; ··· 143 138 #define ONENAND_CURRENT_BUFFERRAM(this) (this->bufferram_index) 144 139 #define ONENAND_NEXT_BUFFERRAM(this) (this->bufferram_index ^ 1) 145 140 #define ONENAND_SET_NEXT_BUFFERRAM(this) (this->bufferram_index ^= 1) 141 + #define ONENAND_SET_PREV_BUFFERRAM(this) (this->bufferram_index ^= 1) 146 142 147 143 #define ONENAND_GET_SYS_CFG1(this) \ 148 144 (this->read_word(this->base + ONENAND_REG_SYS_CFG1))
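The completion/irq pair added to struct onenand_chip allows an interrupt-driven wait in place of polling the interrupt register. A generic sketch of the pattern these fields support (handler and call site are illustrative, not from this patch):

    #include <linux/interrupt.h>
    #include <linux/completion.h>
    #include <linux/mtd/onenand.h>

    static irqreturn_t example_onenand_isr(int irq, void *data)
    {
            struct onenand_chip *this = data;

            complete(&this->complete);      /* signal the waiting command */
            return IRQ_HANDLED;
    }

    /* ...and in the wait routine, instead of spinning on the interrupt register: */
    /*      wait_for_completion(&this->complete);                                 */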
+1
include/linux/mtd/onenand_regs.h
··· 179 179 * ECC Status Reigser FF00h (R) 180 180 */ 181 181 #define ONENAND_ECC_1BIT (1 << 0) 182 + #define ONENAND_ECC_1BIT_ALL (0x5555) 182 183 #define ONENAND_ECC_2BIT (1 << 1) 183 184 #define ONENAND_ECC_2BIT_ALL (0xAAAA) 184 185
-146
include/linux/mtio.h
··· 10 10 11 11 #include <linux/types.h> 12 12 #include <linux/ioctl.h> 13 - #include <linux/qic117.h> 14 13 15 14 /* 16 15 * Structures and definitions for mag tape io control commands ··· 115 116 #define MT_ISFTAPE_UNKNOWN 0x800000 /* obsolete */ 116 117 #define MT_ISFTAPE_FLAG 0x800000 117 118 118 - struct mt_tape_info { 119 - long t_type; /* device type id (mt_type) */ 120 - char *t_name; /* descriptive name */ 121 - }; 122 - 123 - #define MT_TAPE_INFO { \ 124 - {MT_ISUNKNOWN, "Unknown type of tape device"}, \ 125 - {MT_ISQIC02, "Generic QIC-02 tape streamer"}, \ 126 - {MT_ISWT5150, "Wangtek 5150, QIC-150"}, \ 127 - {MT_ISARCHIVE_5945L2, "Archive 5945L-2"}, \ 128 - {MT_ISCMSJ500, "CMS Jumbo 500"}, \ 129 - {MT_ISTDC3610, "Tandberg TDC 3610, QIC-24"}, \ 130 - {MT_ISARCHIVE_VP60I, "Archive VP60i, QIC-02"}, \ 131 - {MT_ISARCHIVE_2150L, "Archive Viper 2150L"}, \ 132 - {MT_ISARCHIVE_2060L, "Archive Viper 2060L"}, \ 133 - {MT_ISARCHIVESC499, "Archive SC-499 QIC-36 controller"}, \ 134 - {MT_ISQIC02_ALL_FEATURES, "Generic QIC-02 tape, all features"}, \ 135 - {MT_ISWT5099EEN24, "Wangtek 5099-een24, 60MB"}, \ 136 - {MT_ISTEAC_MT2ST, "Teac MT-2ST 155mb data cassette drive"}, \ 137 - {MT_ISEVEREX_FT40A, "Everex FT40A, QIC-40"}, \ 138 - {MT_ISONSTREAM_SC, "OnStream SC-, DI-, DP-, or USB tape drive"}, \ 139 - {MT_ISSCSI1, "Generic SCSI-1 tape"}, \ 140 - {MT_ISSCSI2, "Generic SCSI-2 tape"}, \ 141 - {0, NULL} \ 142 - } 143 - 144 119 145 120 /* structure for MTIOCPOS - mag tape get position command */ 146 121 ··· 123 150 }; 124 151 125 152 126 - /* structure for MTIOCVOLINFO, query information about the volume 127 - * currently positioned at (zftape) 128 - */ 129 - struct mtvolinfo { 130 - unsigned int mt_volno; /* vol-number */ 131 - unsigned int mt_blksz; /* blocksize used when recording */ 132 - unsigned int mt_rawsize; /* raw tape space consumed, in kb */ 133 - unsigned int mt_size; /* volume size after decompression, in kb */ 134 - unsigned int mt_cmpr:1; /* this volume has been compressed */ 135 - }; 136 - 137 - /* raw access to a floppy drive, read and write an arbitrary segment. 138 - * For ftape/zftape to support formatting etc. 139 - */ 140 - #define MT_FT_RD_SINGLE 0 141 - #define MT_FT_RD_AHEAD 1 142 - #define MT_FT_WR_ASYNC 0 /* start tape only when all buffers are full */ 143 - #define MT_FT_WR_MULTI 1 /* start tape, continue until buffers are empty */ 144 - #define MT_FT_WR_SINGLE 2 /* write a single segment and stop afterwards */ 145 - #define MT_FT_WR_DELETE 3 /* write deleted data marks, one segment at time */ 146 - 147 - struct mtftseg 148 - { 149 - unsigned mt_segno; /* the segment to read or write */ 150 - unsigned mt_mode; /* modes for read/write (sync/async etc.) 
*/ 151 - int mt_result; /* result of r/w request, not of the ioctl */ 152 - void __user *mt_data; /* User space buffer: must be 29kb */ 153 - }; 154 - 155 - /* get tape capacity (ftape/zftape) 156 - */ 157 - struct mttapesize { 158 - unsigned long mt_capacity; /* entire, uncompressed capacity 159 - * of a cartridge 160 - */ 161 - unsigned long mt_used; /* what has been used so far, raw 162 - * uncompressed amount 163 - */ 164 - }; 165 - 166 - /* possible values of the ftfmt_op field 167 - */ 168 - #define FTFMT_SET_PARMS 1 /* set software parms */ 169 - #define FTFMT_GET_PARMS 2 /* get software parms */ 170 - #define FTFMT_FORMAT_TRACK 3 /* start formatting a tape track */ 171 - #define FTFMT_STATUS 4 /* monitor formatting a tape track */ 172 - #define FTFMT_VERIFY 5 /* verify the given segment */ 173 - 174 - struct ftfmtparms { 175 - unsigned char ft_qicstd; /* QIC-40/QIC-80/QIC-3010/QIC-3020 */ 176 - unsigned char ft_fmtcode; /* Refer to the QIC specs */ 177 - unsigned char ft_fhm; /* floppy head max */ 178 - unsigned char ft_ftm; /* floppy track max */ 179 - unsigned short ft_spt; /* segments per track */ 180 - unsigned short ft_tpc; /* tracks per cartridge */ 181 - }; 182 - 183 - struct ftfmttrack { 184 - unsigned int ft_track; /* track to format */ 185 - unsigned char ft_gap3; /* size of gap3, for FORMAT_TRK */ 186 - }; 187 - 188 - struct ftfmtstatus { 189 - unsigned int ft_segment; /* segment currently being formatted */ 190 - }; 191 - 192 - struct ftfmtverify { 193 - unsigned int ft_segment; /* segment to verify */ 194 - unsigned long ft_bsm; /* bsm as result of VERIFY cmd */ 195 - }; 196 - 197 - struct mtftformat { 198 - unsigned int fmt_op; /* operation to perform */ 199 - union fmt_arg { 200 - struct ftfmtparms fmt_parms; /* format parameters */ 201 - struct ftfmttrack fmt_track; /* ctrl while formatting */ 202 - struct ftfmtstatus fmt_status; 203 - struct ftfmtverify fmt_verify; /* for verifying */ 204 - } fmt_arg; 205 - }; 206 - 207 - struct mtftcmd { 208 - unsigned int ft_wait_before; /* timeout to wait for drive to get ready 209 - * before command is sent. Milliseconds 210 - */ 211 - qic117_cmd_t ft_cmd; /* command to send */ 212 - unsigned char ft_parm_cnt; /* zero: no parm is sent. */ 213 - unsigned char ft_parms[3]; /* parameter(s) to send to 214 - * the drive. The parms are nibbles 215 - * driver sends cmd + 2 step pulses */ 216 - unsigned int ft_result_bits; /* if non zero, number of bits 217 - * returned by the tape drive 218 - */ 219 - unsigned int ft_result; /* the result returned by the tape drive*/ 220 - unsigned int ft_wait_after; /* timeout to wait for drive to get ready 221 - * after command is sent. 0: don't wait */ 222 - int ft_status; /* status returned by ready wait 223 - * undefined if timeout was 0. 224 - */ 225 - int ft_error; /* error code if error status was set by 226 - * command 227 - */ 228 - }; 229 - 230 153 /* mag tape io control commands */ 231 154 #define MTIOCTOP _IOW('m', 1, struct mtop) /* do a mag tape op */ 232 155 #define MTIOCGET _IOR('m', 2, struct mtget) /* get tape status */ 233 156 #define MTIOCPOS _IOR('m', 3, struct mtpos) /* get tape position */ 234 157 235 - /* The next two are used by the QIC-02 driver for runtime reconfiguration. 236 - * See tpqic02.h for struct mtconfiginfo. 
237 - */ 238 - #define MTIOCGETCONFIG _IOR('m', 4, struct mtconfiginfo) /* get tape config */ 239 - #define MTIOCSETCONFIG _IOW('m', 5, struct mtconfiginfo) /* set tape config */ 240 - 241 - /* the next six are used by the floppy ftape drivers and its frontends 242 - * sorry, but MTIOCTOP commands are write only. 243 - */ 244 - #define MTIOCRDFTSEG _IOWR('m', 6, struct mtftseg) /* read a segment */ 245 - #define MTIOCWRFTSEG _IOWR('m', 7, struct mtftseg) /* write a segment */ 246 - #define MTIOCVOLINFO _IOR('m', 8, struct mtvolinfo) /* info about volume */ 247 - #define MTIOCGETSIZE _IOR('m', 9, struct mttapesize)/* get cartridge size*/ 248 - #define MTIOCFTFORMAT _IOWR('m', 10, struct mtftformat) /* format ftape */ 249 - #define MTIOCFTCMD _IOWR('m', 11, struct mtftcmd) /* send QIC-117 cmd */ 250 158 251 159 /* Generic Mag Tape (device independent) status macros for examining 252 160 * mt_gstat -- HP-UX compatible.
-290
include/linux/qic117.h
··· 1 - #ifndef _QIC117_H 2 - #define _QIC117_H 3 - 4 - /* 5 - * Copyright (C) 1993-1996 Bas Laarhoven, 6 - * (C) 1997 Claus-Justus Heine. 7 - 8 - This program is free software; you can redistribute it and/or modify 9 - it under the terms of the GNU General Public License as published by 10 - the Free Software Foundation; either version 2, or (at your option) 11 - any later version. 12 - 13 - This program is distributed in the hope that it will be useful, 14 - but WITHOUT ANY WARRANTY; without even the implied warranty of 15 - MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 16 - GNU General Public License for more details. 17 - 18 - You should have received a copy of the GNU General Public License 19 - along with this program; see the file COPYING. If not, write to 20 - the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA. 21 - 22 - * 23 - * $Source: /homes/cvs/ftape-stacked/include/linux/qic117.h,v $ 24 - * $Revision: 1.2 $ 25 - * $Date: 1997/10/05 19:19:32 $ 26 - * 27 - * This file contains QIC-117 spec. related definitions for the 28 - * QIC-40/80/3010/3020 floppy-tape driver "ftape" for Linux. 29 - * 30 - * These data were taken from the Quarter-Inch Cartridge 31 - * Drive Standards, Inc. document titled: 32 - * `Common Command Set Interface Specification for Flexible 33 - * Disk Controller Based Minicartridge Tape Drives' 34 - * document QIC-117 Revision J, 28 Aug 96. 35 - * For more information, contact: 36 - * Quarter-Inch Cartridge Drive Standards, Inc. 37 - * 311 East Carrillo Street 38 - * Santa Barbara, California 93101 39 - * Telephone (805) 963-3853 40 - * Fax (805) 962-1541 41 - * WWW http://www.qic.org 42 - * 43 - * Current QIC standard revisions (of interest) are: 44 - * QIC-40-MC, Rev. M, 2 Sep 92. 45 - * QIC-80-MC, Rev. N, 20 Mar 96. 46 - * QIC-80-MC, Rev. K, 15 Dec 94. 47 - * QIC-113, Rev. G, 15 Jun 95. 48 - * QIC-117, Rev. J, 28 Aug 96. 49 - * QIC-122, Rev. B, 6 Mar 91. 50 - * QIC-130, Rev. C, 2 Sep 92. 51 - * QIC-3010-MC, Rev. F, 14 Jun 95. 52 - * QIC-3020-MC, Rev. G, 31 Aug 95. 53 - * QIC-CRF3, Rev. B, 15 Jun 95. 54 - * */ 55 - 56 - /* 57 - * QIC-117 common command set rev. J. 58 - * These commands are sent to the tape unit 59 - * as number of pulses over the step line. 
60 - */ 61 - 62 - typedef enum { 63 - QIC_NO_COMMAND = 0, 64 - QIC_RESET = 1, 65 - QIC_REPORT_NEXT_BIT = 2, 66 - QIC_PAUSE = 3, 67 - QIC_MICRO_STEP_PAUSE = 4, 68 - QIC_ALTERNATE_TIMEOUT = 5, 69 - QIC_REPORT_DRIVE_STATUS = 6, 70 - QIC_REPORT_ERROR_CODE = 7, 71 - QIC_REPORT_DRIVE_CONFIGURATION = 8, 72 - QIC_REPORT_ROM_VERSION = 9, 73 - QIC_LOGICAL_FORWARD = 10, 74 - QIC_PHYSICAL_REVERSE = 11, 75 - QIC_PHYSICAL_FORWARD = 12, 76 - QIC_SEEK_HEAD_TO_TRACK = 13, 77 - QIC_SEEK_LOAD_POINT = 14, 78 - QIC_ENTER_FORMAT_MODE = 15, 79 - QIC_WRITE_REFERENCE_BURST = 16, 80 - QIC_ENTER_VERIFY_MODE = 17, 81 - QIC_STOP_TAPE = 18, 82 - /* commands 19-20: reserved */ 83 - QIC_MICRO_STEP_HEAD_UP = 21, 84 - QIC_MICRO_STEP_HEAD_DOWN = 22, 85 - QIC_SOFT_SELECT = 23, 86 - QIC_SOFT_DESELECT = 24, 87 - QIC_SKIP_REVERSE = 25, 88 - QIC_SKIP_FORWARD = 26, 89 - QIC_SELECT_RATE = 27, 90 - /* command 27, in ccs2: Select Rate or Format */ 91 - QIC_ENTER_DIAGNOSTIC_1 = 28, 92 - QIC_ENTER_DIAGNOSTIC_2 = 29, 93 - QIC_ENTER_PRIMARY_MODE = 30, 94 - /* command 31: vendor unique */ 95 - QIC_REPORT_VENDOR_ID = 32, 96 - QIC_REPORT_TAPE_STATUS = 33, 97 - QIC_SKIP_EXTENDED_REVERSE = 34, 98 - QIC_SKIP_EXTENDED_FORWARD = 35, 99 - QIC_CALIBRATE_TAPE_LENGTH = 36, 100 - QIC_REPORT_FORMAT_SEGMENTS = 37, 101 - QIC_SET_FORMAT_SEGMENTS = 38, 102 - /* commands 39-45: reserved */ 103 - QIC_PHANTOM_SELECT = 46, 104 - QIC_PHANTOM_DESELECT = 47 105 - } qic117_cmd_t; 106 - 107 - typedef enum { 108 - discretional = 0, required, ccs1, ccs2 109 - } qic_compatibility; 110 - 111 - typedef enum { 112 - unused, mode, motion, report 113 - } command_types; 114 - 115 - struct qic117_command_table { 116 - char *name; 117 - __u8 mask; 118 - __u8 state; 119 - __u8 cmd_type; 120 - __u8 non_intr; 121 - __u8 level; 122 - }; 123 - 124 - #define QIC117_COMMANDS {\ 125 - /* command mask state cmd_type */\ 126 - /* | name | | | non_intr */\ 127 - /* | | | | | | level */\ 128 - /* 0*/ {NULL, 0x00, 0x00, mode, 0, discretional},\ 129 - /* 1*/ {"soft reset", 0x00, 0x00, motion, 1, required},\ 130 - /* 2*/ {"report next bit", 0x00, 0x00, report, 0, required},\ 131 - /* 3*/ {"pause", 0x36, 0x24, motion, 1, required},\ 132 - /* 4*/ {"micro step pause", 0x36, 0x24, motion, 1, required},\ 133 - /* 5*/ {"alternate command timeout", 0x00, 0x00, mode, 0, required},\ 134 - /* 6*/ {"report drive status", 0x00, 0x00, report, 0, required},\ 135 - /* 7*/ {"report error code", 0x01, 0x01, report, 0, required},\ 136 - /* 8*/ {"report drive configuration",0x00, 0x00, report, 0, required},\ 137 - /* 9*/ {"report rom version", 0x00, 0x00, report, 0, required},\ 138 - /*10*/ {"logical forward", 0x37, 0x25, motion, 0, required},\ 139 - /*11*/ {"physical reverse", 0x17, 0x05, motion, 0, required},\ 140 - /*12*/ {"physical forward", 0x17, 0x05, motion, 0, required},\ 141 - /*13*/ {"seek head to track", 0x37, 0x25, motion, 0, required},\ 142 - /*14*/ {"seek load point", 0x17, 0x05, motion, 1, required},\ 143 - /*15*/ {"enter format mode", 0x1f, 0x05, mode, 0, required},\ 144 - /*16*/ {"write reference burst", 0x1f, 0x05, motion, 1, required},\ 145 - /*17*/ {"enter verify mode", 0x37, 0x25, mode, 0, required},\ 146 - /*18*/ {"stop tape", 0x00, 0x00, motion, 1, required},\ 147 - /*19*/ {"reserved (19)", 0x00, 0x00, unused, 0, discretional},\ 148 - /*20*/ {"reserved (20)", 0x00, 0x00, unused, 0, discretional},\ 149 - /*21*/ {"micro step head up", 0x02, 0x00, motion, 0, required},\ 150 - /*22*/ {"micro step head down", 0x02, 0x00, motion, 0, required},\ 151 - /*23*/ {"soft select", 0x00, 0x00, mode, 
0, discretional},\ 152 - /*24*/ {"soft deselect", 0x00, 0x00, mode, 0, discretional},\ 153 - /*25*/ {"skip segments reverse", 0x36, 0x24, motion, 1, required},\ 154 - /*26*/ {"skip segments forward", 0x36, 0x24, motion, 1, required},\ 155 - /*27*/ {"select rate or format", 0x03, 0x01, mode, 0, required /* [ccs2] */},\ 156 - /*28*/ {"enter diag mode 1", 0x00, 0x00, mode, 0, discretional},\ 157 - /*29*/ {"enter diag mode 2", 0x00, 0x00, mode, 0, discretional},\ 158 - /*30*/ {"enter primary mode", 0x00, 0x00, mode, 0, required},\ 159 - /*31*/ {"vendor unique (31)", 0x00, 0x00, unused, 0, discretional},\ 160 - /*32*/ {"report vendor id", 0x00, 0x00, report, 0, required},\ 161 - /*33*/ {"report tape status", 0x04, 0x04, report, 0, ccs1},\ 162 - /*34*/ {"skip extended reverse", 0x36, 0x24, motion, 1, ccs1},\ 163 - /*35*/ {"skip extended forward", 0x36, 0x24, motion, 1, ccs1},\ 164 - /*36*/ {"calibrate tape length", 0x17, 0x05, motion, 1, ccs2},\ 165 - /*37*/ {"report format segments", 0x17, 0x05, report, 0, ccs2},\ 166 - /*38*/ {"set format segments", 0x17, 0x05, mode, 0, ccs2},\ 167 - /*39*/ {"reserved (39)", 0x00, 0x00, unused, 0, discretional},\ 168 - /*40*/ {"vendor unique (40)", 0x00, 0x00, unused, 0, discretional},\ 169 - /*41*/ {"vendor unique (41)", 0x00, 0x00, unused, 0, discretional},\ 170 - /*42*/ {"vendor unique (42)", 0x00, 0x00, unused, 0, discretional},\ 171 - /*43*/ {"vendor unique (43)", 0x00, 0x00, unused, 0, discretional},\ 172 - /*44*/ {"vendor unique (44)", 0x00, 0x00, unused, 0, discretional},\ 173 - /*45*/ {"vendor unique (45)", 0x00, 0x00, unused, 0, discretional},\ 174 - /*46*/ {"phantom select", 0x00, 0x00, mode, 0, discretional},\ 175 - /*47*/ {"phantom deselect", 0x00, 0x00, mode, 0, discretional},\ 176 - } 177 - 178 - /* 179 - * Status bits returned by QIC_REPORT_DRIVE_STATUS 180 - */ 181 - 182 - #define QIC_STATUS_READY 0x01 /* Drive is ready or idle. */ 183 - #define QIC_STATUS_ERROR 0x02 /* Error detected, must read 184 - error code to clear this */ 185 - #define QIC_STATUS_CARTRIDGE_PRESENT 0x04 /* Tape is present */ 186 - #define QIC_STATUS_WRITE_PROTECT 0x08 /* Tape is write protected */ 187 - #define QIC_STATUS_NEW_CARTRIDGE 0x10 /* New cartridge inserted, must 188 - read error status to clear. */ 189 - #define QIC_STATUS_REFERENCED 0x20 /* Cartridge appears to have been 190 - formatted. */ 191 - #define QIC_STATUS_AT_BOT 0x40 /* Cartridge is at physical 192 - beginning of tape. */ 193 - #define QIC_STATUS_AT_EOT 0x80 /* Cartridge is at physical end 194 - of tape. */ 195 - /* 196 - * Status bits returned by QIC_REPORT_DRIVE_CONFIGURATION 197 - */ 198 - 199 - #define QIC_CONFIG_RATE_MASK 0x18 200 - #define QIC_CONFIG_RATE_SHIFT 3 201 - #define QIC_CONFIG_RATE_250 0 202 - #define QIC_CONFIG_RATE_500 2 203 - #define QIC_CONFIG_RATE_1000 3 204 - #define QIC_CONFIG_RATE_2000 1 205 - #define QIC_CONFIG_RATE_4000 0 /* since QIC-117 Rev. J */ 206 - 207 - #define QIC_CONFIG_LONG 0x40 /* Extra Length Tape Detected */ 208 - #define QIC_CONFIG_80 0x80 /* QIC-80 detected. 
*/ 209 - 210 - /* 211 - * Status bits returned by QIC_REPORT_TAPE_STATUS 212 - */ 213 - 214 - #define QIC_TAPE_STD_MASK 0x0f 215 - #define QIC_TAPE_QIC40 0x01 216 - #define QIC_TAPE_QIC80 0x02 217 - #define QIC_TAPE_QIC3020 0x03 218 - #define QIC_TAPE_QIC3010 0x04 219 - 220 - #define QIC_TAPE_LEN_MASK 0x70 221 - #define QIC_TAPE_205FT 0x10 222 - #define QIC_TAPE_307FT 0x20 223 - #define QIC_TAPE_VARIABLE 0x30 224 - #define QIC_TAPE_1100FT 0x40 225 - #define QIC_TAPE_FLEX 0x60 226 - 227 - #define QIC_TAPE_WIDE 0x80 228 - 229 - /* Define a value (in feet) slightly higher than 230 - * the possible maximum tape length. 231 - */ 232 - #define QIC_TOP_TAPE_LEN 1500 233 - 234 - /* 235 - * Errors: List of error codes, and their severity. 236 - */ 237 - 238 - typedef struct { 239 - char *message; /* Text describing the error. */ 240 - unsigned int fatal:1; /* Non-zero if the error is fatal. */ 241 - } ftape_error; 242 - 243 - #define QIC117_ERRORS {\ 244 - /* 0*/ { "No error", 0, },\ 245 - /* 1*/ { "Command Received while Drive Not Ready", 0, },\ 246 - /* 2*/ { "Cartridge Not Present or Removed", 1, },\ 247 - /* 3*/ { "Motor Speed Error (not within 1%)", 1, },\ 248 - /* 4*/ { "Motor Speed Fault (jammed, or gross speed error", 1, },\ 249 - /* 5*/ { "Cartridge Write Protected", 1, },\ 250 - /* 6*/ { "Undefined or Reserved Command Code", 1, },\ 251 - /* 7*/ { "Illegal Track Address Specified for Seek", 1, },\ 252 - /* 8*/ { "Illegal Command in Report Subcontext", 0, },\ 253 - /* 9*/ { "Illegal Entry into a Diagnostic Mode", 1, },\ 254 - /*10*/ { "Broken Tape Detected (based on hole sensor)", 1, },\ 255 - /*11*/ { "Warning--Read Gain Setting Error", 1, },\ 256 - /*12*/ { "Command Received While Error Status Pending (obs)", 1, },\ 257 - /*13*/ { "Command Received While New Cartridge Pending", 1, },\ 258 - /*14*/ { "Command Illegal or Undefined in Primary Mode", 1, },\ 259 - /*15*/ { "Command Illegal or Undefined in Format Mode", 1, },\ 260 - /*16*/ { "Command Illegal or Undefined in Verify Mode", 1, },\ 261 - /*17*/ { "Logical Forward Not at Logical BOT or no Format Segments in Format Mode", 1, },\ 262 - /*18*/ { "Logical EOT Before All Segments generated", 1, },\ 263 - /*19*/ { "Command Illegal When Cartridge Not Referenced", 1, },\ 264 - /*20*/ { "Self-Diagnostic Failed (cannot be cleared)", 1, },\ 265 - /*21*/ { "Warning EEPROM Not Initialized, Defaults Set", 1, },\ 266 - /*22*/ { "EEPROM Corrupted or Hardware Failure", 1, },\ 267 - /*23*/ { "Motion Time-out Error", 1, },\ 268 - /*24*/ { "Data Segment Too Long -- Logical Forward or Pause", 1, },\ 269 - /*25*/ { "Transmit Overrun (obs)", 1, },\ 270 - /*26*/ { "Power On Reset Occurred", 0, },\ 271 - /*27*/ { "Software Reset Occurred", 0, },\ 272 - /*28*/ { "Diagnostic Mode 1 Error", 1, },\ 273 - /*29*/ { "Diagnostic Mode 2 Error", 1, },\ 274 - /*30*/ { "Command Received During Non-Interruptible Process", 1, },\ 275 - /*31*/ { "Rate or Format Selection Error", 1, },\ 276 - /*32*/ { "Illegal Command While in High Speed Mode", 1, },\ 277 - /*33*/ { "Illegal Seek Segment Value", 1, },\ 278 - /*34*/ { "Invalid Media", 1, },\ 279 - /*35*/ { "Head Positioning Failure", 1, },\ 280 - /*36*/ { "Write Reference Burst Failure", 1, },\ 281 - /*37*/ { "Prom Code Missing", 1, },\ 282 - /*38*/ { "Invalid Format", 1, },\ 283 - /*39*/ { "EOT/BOT System Failure", 1, },\ 284 - /*40*/ { "Prom A Checksum Error", 1, },\ 285 - /*41*/ { "Drive Wakeup Reset Occurred", 1, },\ 286 - /*42*/ { "Prom B Checksum Error", 1, },\ 287 - /*43*/ { "Illegal Entry into Format Mode", 1, },\ 
288 - } 289 - 290 - #endif /* _QIC117_H */
+2
include/linux/reiserfs_fs_i.h
··· 25 25 i_link_saved_truncate_mask = 0x0020, 26 26 i_has_xattr_dir = 0x0040, 27 27 i_data_log = 0x0080, 28 + i_ever_mapped = 0x0100 28 29 } reiserfs_inode_flags; 29 30 30 31 struct reiserfs_inode_info { ··· 53 52 ** flushed */ 54 53 unsigned long i_trans_id; 55 54 struct reiserfs_journal_list *i_jl; 55 + struct mutex i_mmap; 56 56 #ifdef CONFIG_REISERFS_FS_POSIX_ACL 57 57 struct posix_acl *i_acl_access; 58 58 struct posix_acl *i_acl_default;
+1 -1
ipc/shm.c
··· 279 279 if (size < SHMMIN || size > ns->shm_ctlmax) 280 280 return -EINVAL; 281 281 282 - if (ns->shm_tot + numpages >= ns->shm_ctlall) 282 + if (ns->shm_tot + numpages > ns->shm_ctlall) 283 283 return -ENOSPC; 284 284 285 285 shp = ipc_rcu_alloc(sizeof(*shp));
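The relaxed comparison fixes an off-by-one: with shm_ctlall set to 2097152 pages, shm_tot at 2097052 and a request for exactly 100 pages, the new segment fills the system-wide limit precisely and is now accepted, whereas the old >= test returned -ENOSPC one page too early.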
+3
kernel/irq/manage.c
··· 315 315 /* Undo nested disables: */ 316 316 desc->depth = 1; 317 317 } 318 + /* Reset broken irq detection when installing new handler */ 319 + desc->irq_count = 0; 320 + desc->irqs_unhandled = 0; 318 321 spin_unlock_irqrestore(&desc->lock, flags); 319 322 320 323 new->irq = irq;
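irq_count and irqs_unhandled feed the spurious-interrupt detector in kernel/irq/spurious.c, which disables a line once a very large fraction of recent interrupts (roughly 99900 out of 100000) went unhandled. Clearing both counters when a new handler is installed gives the new driver a fresh window instead of inheriting the failures of whatever was registered before.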
+2 -1
kernel/profile.c
··· 331 331 local_irq_restore(flags); 332 332 put_cpu(); 333 333 } 334 - EXPORT_SYMBOL_GPL(profile_hits); 335 334 336 335 static int __devinit profile_cpu_callback(struct notifier_block *info, 337 336 unsigned long action, void *__cpu) ··· 399 400 atomic_add(nr_hits, &prof_buffer[min(pc, prof_len - 1)]); 400 401 } 401 402 #endif /* !CONFIG_SMP */ 403 + 404 + EXPORT_SYMBOL_GPL(profile_hits); 402 405 403 406 void profile_tick(int type) 404 407 {
+11 -4
kernel/sys.c
··· 323 323 int blocking_notifier_call_chain(struct blocking_notifier_head *nh, 324 324 unsigned long val, void *v) 325 325 { 326 - int ret; 326 + int ret = NOTIFY_DONE; 327 327 328 - down_read(&nh->rwsem); 329 - ret = notifier_call_chain(&nh->head, val, v); 330 - up_read(&nh->rwsem); 328 + /* 329 + * We check the head outside the lock, but if this access is 330 + * racy then it does not matter what the result of the test 331 + * is, we re-check the list after having taken the lock anyway: 332 + */ 333 + if (rcu_dereference(nh->head)) { 334 + down_read(&nh->rwsem); 335 + ret = notifier_call_chain(&nh->head, val, v); 336 + up_read(&nh->rwsem); 337 + } 331 338 return ret; 332 339 } 333 340
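The early head check means an empty chain now returns NOTIFY_DONE without taking the rwsem at all. A minimal sketch of a chain this path serves, using the standard blocking-notifier API (the names here are illustrative):

    #include <linux/notifier.h>

    static BLOCKING_NOTIFIER_HEAD(example_chain);

    static int example_event(struct notifier_block *nb,
                             unsigned long action, void *data)
    {
            return NOTIFY_OK;
    }

    static struct notifier_block example_nb = {
            .notifier_call = example_event,
    };

    static void example_use(void)
    {
            /* Before any registration, the call below is now lock-free. */
            blocking_notifier_call_chain(&example_chain, 0, NULL);

            blocking_notifier_chain_register(&example_chain, &example_nb);
            blocking_notifier_call_chain(&example_chain, 0, NULL);
    }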
+4
mm/mempolicy.c
··· 884 884 err = get_nodes(&nodes, nmask, maxnode); 885 885 if (err) 886 886 return err; 887 + #ifdef CONFIG_CPUSETS 888 + /* Restrict the nodes to the allowed nodes in the cpuset */ 889 + nodes_and(nodes, nodes, current->mems_allowed); 890 + #endif 887 891 return do_mbind(start, len, mode, &nodes, flags); 888 892 } 889 893
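The added nodes_and() silently narrows the caller's mask to what the cpuset allows. A small illustration of that intersection with the standard nodemask helpers (the node numbers are made up):

    #include <linux/nodemask.h>

    static void example_restrict(void)
    {
            nodemask_t requested = NODE_MASK_NONE;  /* caller asked for {1,2} */
            nodemask_t allowed   = NODE_MASK_NONE;  /* cpuset permits  {0,1}  */

            node_set(1, requested);
            node_set(2, requested);
            node_set(0, allowed);
            node_set(1, allowed);

            nodes_and(requested, requested, allowed); /* requested is now {1} */
    }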
+1 -1
net/ipv4/tcp_probe.c
··· 30 30 31 31 #include <net/tcp.h> 32 32 33 - MODULE_AUTHOR("Stephen Hemminger <shemminger@osdl.org>"); 33 + MODULE_AUTHOR("Stephen Hemminger <shemminger@linux-foundation.org>"); 34 34 MODULE_DESCRIPTION("TCP cwnd snooper"); 35 35 MODULE_LICENSE("GPL"); 36 36
+1 -1
sound/usb/usx2y/usbusx2yaudio.c
··· 322 322 usX2Y_error_urb_status(usX2Y, subs, urb); 323 323 return; 324 324 } 325 - if (likely(urb->start_frame == usX2Y->wait_iso_frame)) 325 + if (likely((urb->start_frame & 0xFFFF) == (usX2Y->wait_iso_frame & 0xFFFF))) 326 326 subs->completed_urb = urb; 327 327 else { 328 328 usX2Y_error_sequence(usX2Y, subs, urb);
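Comparing only the low 16 bits of the frame numbers appears to account for host controllers whose frame counters are wider than the value the driver tracks in wait_iso_frame; without the mask, a mismatch in the upper bits triggers a spurious sequence error. The identical change is applied to the hwdep PCM path below.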
+1 -1
sound/usb/usx2y/usx2yhwdeppcm.c
··· 243 243 usX2Y_error_urb_status(usX2Y, subs, urb); 244 244 return; 245 245 } 246 - if (likely(urb->start_frame == usX2Y->wait_iso_frame)) 246 + if (likely((urb->start_frame & 0xFFFF) == (usX2Y->wait_iso_frame & 0xFFFF))) 247 247 subs->completed_urb = urb; 248 248 else { 249 249 usX2Y_error_sequence(usX2Y, subs, urb);