Linux kernel mirror (for testing): git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

[MTD] core: Clean up trailing white spaces

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

Authored and committed by Thomas Gleixner
97894cda b95f9609

+399 -399
+16 -16
drivers/mtd/Kconfig
··· 1 - # $Id: Kconfig,v 1.10 2005/07/11 10:39:27 gleixner Exp $ 1 + # $Id: Kconfig,v 1.11 2005/11/07 11:14:19 gleixner Exp $ 2 2 3 3 menu "Memory Technology Devices (MTD)" 4 4 ··· 10 10 will provide the generic support for MTD drivers to register 11 11 themselves with the kernel and for potential users of MTD devices 12 12 to enumerate the devices which are present and obtain a handle on 13 - them. It will also allow you to select individual drivers for 13 + them. It will also allow you to select individual drivers for 14 14 particular hardware and users of MTD devices. If unsure, say N. 15 15 16 16 config MTD_DEBUG ··· 61 61 62 62 If you need code which can detect and parse this table, and register 63 63 MTD 'partitions' corresponding to each image in the table, enable 64 - this option. 64 + this option. 65 65 66 66 You will still need the parsing functions to be called by the driver 67 - for your particular device. It won't happen automatically. The 68 - SA1100 map driver (CONFIG_MTD_SA1100) has an option for this, for 67 + for your particular device. It won't happen automatically. The 68 + SA1100 map driver (CONFIG_MTD_SA1100) has an option for this, for 69 69 example. 70 70 71 71 config MTD_REDBOOT_DIRECTORY_BLOCK ··· 81 81 partition table. A zero or positive value gives an absolete 82 82 erase block number. A negative value specifies a number of 83 83 sectors before the end of the device. 84 - 84 + 85 85 For example "2" means block number 2, "-1" means the last 86 86 block and "-2" means the penultimate block. 87 - 87 + 88 88 config MTD_REDBOOT_PARTS_UNALLOCATED 89 89 bool " Include unallocated flash regions" 90 90 depends on MTD_REDBOOT_PARTS ··· 105 105 ---help--- 106 106 Allow generic configuration of the MTD paritition tables via the kernel 107 107 command line. Multiple flash resources are supported for hardware where 108 - different kinds of flash memory are available. 108 + different kinds of flash memory are available. 
109 109 110 110 You will still need the parsing functions to be called by the driver 111 - for your particular device. It won't happen automatically. The 112 - SA1100 map driver (CONFIG_MTD_SA1100) has an option for this, for 111 + for your particular device. It won't happen automatically. The 112 + SA1100 map driver (CONFIG_MTD_SA1100) has an option for this, for 113 113 example. 114 114 115 115 The format for the command line is as follows: ··· 118 118 <mtddef> := <mtd-id>:<partdef>[,<partdef>] 119 119 <partdef> := <size>[@offset][<name>][ro] 120 120 <mtd-id> := unique id used in mapping driver/device 121 - <size> := standard linux memsize OR "-" to denote all 121 + <size> := standard linux memsize OR "-" to denote all 122 122 remaining space 123 123 <name> := (NAME) 124 124 125 - Due to the way Linux handles the command line, no spaces are 126 - allowed in the partition definition, including mtd id's and partition 125 + Due to the way Linux handles the command line, no spaces are 126 + allowed in the partition definition, including mtd id's and partition 127 127 names. 128 128 129 129 Examples: ··· 240 240 tristate "INFTL (Inverse NAND Flash Translation Layer) support" 241 241 depends on MTD 242 242 ---help--- 243 - This provides support for the Inverse NAND Flash Translation 243 + This provides support for the Inverse NAND Flash Translation 244 244 Layer which is used on M-Systems' newer DiskOnChip devices. It 245 245 uses a kind of pseudo-file system on a flash device to emulate 246 246 a block device with 512-byte sectors, on top of which you put ··· 257 257 tristate "Resident Flash Disk (Flash Translation Layer) support" 258 258 depends on MTD 259 259 ---help--- 260 - This provides support for the flash translation layer known 261 - as the Resident Flash Disk (RFD), as used by the Embedded BIOS 260 + This provides support for the flash translation layer known 261 + as the Resident Flash Disk (RFD), as used by the Embedded BIOS 262 262 of General Software. 
There is a blurb at: 263 263 264 264 http://www.gensw.com/pages/prod/bios/rfd.htm
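The Kconfig help above spells out the mtdparts= grammar: `<partdef> := <size>[@offset][<name>][ro]`, with standard Linux memsize suffixes and `-` meaning "all remaining space". A minimal stand-alone sketch of that single-partdef grammar is below. This is a hypothetical illustration, not the kernel's cmdlinepart.c parser; the struct and field names are invented for the example.

```c
#include <stdlib.h>
#include <string.h>
#include <assert.h>

/* Hypothetical sketch of one mtdparts <partdef>, per the grammar quoted
 * above: <size>[@offset][(name)][ro]. "-" (all remaining space) is
 * represented as size 0 here. Not the kernel's cmdlinepart.c code. */
struct partdef {
	unsigned long long size;   /* 0 means "-" (remaining space) */
	unsigned long long offset; /* 0 if unspecified */
	char name[32];             /* "" if unspecified */
	int readonly;
};

/* Parse a number with the usual k/M/G suffixes, advancing *end past it. */
static unsigned long long parse_memsize(const char *s, const char **end)
{
	unsigned long long v = strtoull(s, (char **)end, 0);

	switch (**end) {
	case 'k': case 'K': v <<= 10; (*end)++; break;
	case 'm': case 'M': v <<= 20; (*end)++; break;
	case 'g': case 'G': v <<= 30; (*end)++; break;
	}
	return v;
}

int parse_partdef(const char *s, struct partdef *p)
{
	memset(p, 0, sizeof(*p));
	if (*s == '-')
		s++;			/* all remaining space */
	else
		p->size = parse_memsize(s, &s);
	if (*s == '@')
		p->offset = parse_memsize(s + 1, &s);
	if (*s == '(') {
		const char *close = strchr(s, ')');
		size_t n;

		if (!close)
			return -1;	/* unterminated name */
		n = (size_t)(close - s - 1);
		if (n >= sizeof(p->name))
			n = sizeof(p->name) - 1;
		memcpy(p->name, s + 1, n);
		s = close + 1;
	}
	if (strncmp(s, "ro", 2) == 0) {
		p->readonly = 1;
		s += 2;
	}
	return *s ? -1 : 0;		/* trailing junk is an error */
}
```

With this sketch, the help text's example `edb7312-nor:256k(ARMboot)ro,-(root)` decomposes into two partdefs: a 256 KiB read-only "ARMboot" and a "root" partition taking the remaining space.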
+8 -8
drivers/mtd/afs.c
··· 1 1 /*====================================================================== 2 2 3 3 drivers/mtd/afs.c: ARM Flash Layout/Partitioning 4 - 4 + 5 5 Copyright (C) 2000 ARM Limited 6 - 6 + 7 7 This program is free software; you can redistribute it and/or modify 8 8 it under the terms of the GNU General Public License as published by 9 9 the Free Software Foundation; either version 2 of the License, or 10 10 (at your option) any later version. 11 - 11 + 12 12 This program is distributed in the hope that it will be useful, 13 13 but WITHOUT ANY WARRANTY; without even the implied warranty of 14 14 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 15 15 GNU General Public License for more details. 16 - 16 + 17 17 You should have received a copy of the GNU General Public License 18 18 along with this program; if not, write to the Free Software 19 19 Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 20 - 21 - This is access code for flashes using ARM's flash partitioning 20 + 21 + This is access code for flashes using ARM's flash partitioning 22 22 standards. 23 23 24 - $Id: afs.c,v 1.13 2004/02/27 22:09:59 rmk Exp $ 24 + $Id: afs.c,v 1.15 2005/11/07 11:14:19 gleixner Exp $ 25 25 26 26 ======================================================================*/ 27 27 ··· 163 163 return ret; 164 164 } 165 165 166 - static int parse_afs_partitions(struct mtd_info *mtd, 166 + static int parse_afs_partitions(struct mtd_info *mtd, 167 167 struct mtd_partition **pparts, 168 168 unsigned long origin) 169 169 {
+28 -28
drivers/mtd/cmdlinepart.c
··· 1 1 /* 2 - * $Id: cmdlinepart.c,v 1.18 2005/06/07 15:04:26 joern Exp $ 2 + * $Id: cmdlinepart.c,v 1.19 2005/11/07 11:14:19 gleixner Exp $ 3 3 * 4 4 * Read flash partition table from command line 5 5 * 6 6 * Copyright 2002 SYSGO Real-Time Solutions GmbH 7 7 * 8 8 * The format for the command line is as follows: 9 - * 9 + * 10 10 * mtdparts=<mtddef>[;<mtddef] 11 11 * <mtddef> := <mtd-id>:<partdef>[,<partdef>] 12 12 * <partdef> := <size>[@offset][<name>][ro] 13 13 * <mtd-id> := unique name used in mapping driver/device (mtd->name) 14 14 * <size> := standard linux memsize OR "-" to denote all remaining space 15 15 * <name> := '(' NAME ')' 16 - * 16 + * 17 17 * Examples: 18 - * 18 + * 19 19 * 1 NOR Flash, with 1 single writable partition: 20 20 * edb7312-nor:- 21 - * 21 + * 22 22 * 1 NOR Flash with 2 partitions, 1 NAND with one 23 23 * edb7312-nor:256k(ARMboot)ro,-(root);edb7312-nand:-(home) 24 24 */ ··· 60 60 61 61 /* 62 62 * Parse one partition definition for an MTD. Since there can be many 63 - * comma separated partition definitions, this function calls itself 63 + * comma separated partition definitions, this function calls itself 64 64 * recursively until no more partition definitions are found. Nice side 65 65 * effect: the memory to keep the mtd_partition structs and the names 66 66 * is allocated upon the last definition being found. At that point the 67 67 * syntax has been verified ok. 
68 68 */ 69 - static struct mtd_partition * newpart(char *s, 69 + static struct mtd_partition * newpart(char *s, 70 70 char **retptr, 71 71 int *num_parts, 72 - int this_part, 73 - unsigned char **extra_mem_ptr, 72 + int this_part, 73 + unsigned char **extra_mem_ptr, 74 74 int extra_mem_size) 75 75 { 76 76 struct mtd_partition *parts; ··· 102 102 mask_flags = 0; /* this is going to be a regular partition */ 103 103 delim = 0; 104 104 /* check for offset */ 105 - if (*s == '@') 105 + if (*s == '@') 106 106 { 107 107 s++; 108 108 offset = memparse(s, &s); ··· 112 112 { 113 113 delim = ')'; 114 114 } 115 - 115 + 116 116 if (delim) 117 117 { 118 118 char *p; ··· 131 131 name = NULL; 132 132 name_len = 13; /* Partition_000 */ 133 133 } 134 - 134 + 135 135 /* record name length for memory allocation later */ 136 136 extra_mem_size += name_len + 1; 137 137 138 138 /* test for options */ 139 - if (strncmp(s, "ro", 2) == 0) 139 + if (strncmp(s, "ro", 2) == 0) 140 140 { 141 141 mask_flags |= MTD_WRITEABLE; 142 142 s += 2; ··· 151 151 return NULL; 152 152 } 153 153 /* more partitions follow, parse them */ 154 - if ((parts = newpart(s + 1, &s, num_parts, 154 + if ((parts = newpart(s + 1, &s, num_parts, 155 155 this_part + 1, &extra_mem, extra_mem_size)) == 0) 156 156 return NULL; 157 157 } ··· 187 187 extra_mem += name_len + 1; 188 188 189 189 dbg(("partition %d: name <%s>, offset %x, size %x, mask flags %x\n", 190 - this_part, 190 + this_part, 191 191 parts[this_part].name, 192 192 parts[this_part].offset, 193 193 parts[this_part].size, ··· 204 204 return parts; 205 205 } 206 206 207 - /* 208 - * Parse the command line. 207 + /* 208 + * Parse the command line. 209 209 */ 210 210 static int mtdpart_setup_real(char *s) 211 211 { ··· 230 230 231 231 dbg(("parsing <%s>\n", p+1)); 232 232 233 - /* 233 + /* 234 234 * parse one mtd. have it reserve memory for the 235 235 * struct cmdline_mtd_partition and the mtd-id string. 
236 236 */ ··· 239 239 &num_parts, /* out: number of parts */ 240 240 0, /* first partition */ 241 241 (unsigned char**)&this_mtd, /* out: extra mem */ 242 - mtd_id_len + 1 + sizeof(*this_mtd) + 242 + mtd_id_len + 1 + sizeof(*this_mtd) + 243 243 sizeof(void*)-1 /*alignment*/); 244 244 if(!parts) 245 245 { ··· 254 254 } 255 255 256 256 /* align this_mtd */ 257 - this_mtd = (struct cmdline_mtd_partition *) 257 + this_mtd = (struct cmdline_mtd_partition *) 258 258 ALIGN((unsigned long)this_mtd, sizeof(void*)); 259 - /* enter results */ 259 + /* enter results */ 260 260 this_mtd->parts = parts; 261 261 this_mtd->num_parts = num_parts; 262 262 this_mtd->mtd_id = (char*)(this_mtd + 1); 263 263 strlcpy(this_mtd->mtd_id, mtd_id, mtd_id_len + 1); 264 264 265 265 /* link into chain */ 266 - this_mtd->next = partitions; 266 + this_mtd->next = partitions; 267 267 partitions = this_mtd; 268 268 269 - dbg(("mtdid=<%s> num_parts=<%d>\n", 269 + dbg(("mtdid=<%s> num_parts=<%d>\n", 270 270 this_mtd->mtd_id, this_mtd->num_parts)); 271 - 271 + 272 272 273 273 /* EOS - we're done */ 274 274 if (*s == 0) ··· 292 292 * information. It returns partitions for the requested mtd device, or 293 293 * the first one in the chain if a NULL mtd_id is passed in. 
294 294 */ 295 - static int parse_cmdline_partitions(struct mtd_info *master, 295 + static int parse_cmdline_partitions(struct mtd_info *master, 296 296 struct mtd_partition **pparts, 297 297 unsigned long origin) 298 298 { ··· 322 322 part->parts[i].size = master->size - offset; 323 323 if (offset + part->parts[i].size > master->size) 324 324 { 325 - printk(KERN_WARNING ERRP 325 + printk(KERN_WARNING ERRP 326 326 "%s: partitioning exceeds flash size, truncating\n", 327 327 part->mtd_id); 328 328 part->parts[i].size = master->size - offset; ··· 338 338 } 339 339 340 340 341 - /* 342 - * This is the handler for our kernel parameter, called from 341 + /* 342 + * This is the handler for our kernel parameter, called from 343 343 * main.c::checksetup(). Note that we can not yet kmalloc() anything, 344 344 * so we only save the commandline for later processing. 345 345 *
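The comment block in newpart() above describes a nice trick: recurse through the comma-separated definitions and only allocate the result array in the deepest call, once the total count (and thus the syntax) is known, then fill each slot while unwinding. A stand-alone sketch of just that allocation pattern, using a string list instead of mtd_partition structs (the function name and shape are invented for illustration):

```c
#include <stdlib.h>
#include <string.h>
#include <assert.h>

/* Hypothetical sketch of newpart()'s allocation pattern: recurse over
 * comma-separated items and allocate the whole array only in the last
 * (deepest) call, where the total count is finally known. Each level
 * fills in its own slot on the way back out of the recursion. */
static char **collect(char *s, int index, int *count)
{
	char **items;
	char *comma = strchr(s, ',');

	if (comma) {
		*comma = '\0';
		/* recurse first; allocation happens at the far end */
		items = collect(comma + 1, index + 1, count);
		if (!items)
			return NULL;
	} else {
		*count = index + 1;	/* deepest call knows the total */
		items = calloc(*count, sizeof(*items));
		if (!items)
			return NULL;
	}
	items[index] = s;		/* fill our slot while unwinding */
	return items;
}
```

The payoff, as the original comment notes, is that memory is only allocated after the whole definition string has been verified, so a syntax error never leaves a half-built array behind.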
+64 -64
drivers/mtd/ftl.c
··· 1 1 /* This version ported to the Linux-MTD system by dwmw2@infradead.org 2 - * $Id: ftl.c,v 1.55 2005/01/17 13:47:21 hvr Exp $ 2 + * $Id: ftl.c,v 1.58 2005/11/07 11:14:19 gleixner Exp $ 3 3 * 4 4 * Fixes: Arnaldo Carvalho de Melo <acme@conectiva.com.br> 5 5 * - fixes some leaks on failure in build_maps and ftl_notify_add, cleanups ··· 53 53 Use of the FTL format for non-PCMCIA applications may be an 54 54 infringement of these patents. For additional information, 55 55 contact M-Systems (http://www.m-sys.com) directly. 56 - 56 + 57 57 ======================================================================*/ 58 58 #include <linux/mtd/blktrans.h> 59 59 #include <linux/module.h> ··· 160 160 Scan_header() checks to see if a memory region contains an FTL 161 161 partition. build_maps() reads all the erase unit headers, builds 162 162 the erase unit map, and then builds the virtual page map. 163 - 163 + 164 164 ======================================================================*/ 165 165 166 166 static int scan_header(partition_t *part) ··· 176 176 (offset + sizeof(header)) < max_offset; 177 177 offset += part->mbd.mtd->erasesize ? 
: 0x2000) { 178 178 179 - err = part->mbd.mtd->read(part->mbd.mtd, offset, sizeof(header), &ret, 179 + err = part->mbd.mtd->read(part->mbd.mtd, offset, sizeof(header), &ret, 180 180 (unsigned char *)&header); 181 - 182 - if (err) 181 + 182 + if (err) 183 183 return err; 184 184 185 185 if (strcmp(header.DataOrgTuple+3, "FTL100") == 0) break; ··· 232 232 for (i = 0; i < le16_to_cpu(part->header.NumEraseUnits); i++) { 233 233 offset = ((i + le16_to_cpu(part->header.FirstPhysicalEUN)) 234 234 << part->header.EraseUnitSize); 235 - ret = part->mbd.mtd->read(part->mbd.mtd, offset, sizeof(header), &retval, 235 + ret = part->mbd.mtd->read(part->mbd.mtd, offset, sizeof(header), &retval, 236 236 (unsigned char *)&header); 237 - 238 - if (ret) 237 + 238 + if (ret) 239 239 goto out_XferInfo; 240 240 241 241 ret = -1; ··· 274 274 "don't add up!\n"); 275 275 goto out_XferInfo; 276 276 } 277 - 277 + 278 278 /* Set up virtual page map */ 279 279 blocks = le32_to_cpu(header.FormattedSize) >> header.BlockSize; 280 280 part->VirtualBlockMap = vmalloc(blocks * sizeof(u_int32_t)); ··· 296 296 part->EUNInfo[i].Free = 0; 297 297 part->EUNInfo[i].Deleted = 0; 298 298 offset = part->EUNInfo[i].Offset + le32_to_cpu(header.BAMOffset); 299 - 300 - ret = part->mbd.mtd->read(part->mbd.mtd, offset, 301 - part->BlocksPerUnit * sizeof(u_int32_t), &retval, 299 + 300 + ret = part->mbd.mtd->read(part->mbd.mtd, offset, 301 + part->BlocksPerUnit * sizeof(u_int32_t), &retval, 302 302 (unsigned char *)part->bam_cache); 303 - 304 - if (ret) 303 + 304 + if (ret) 305 305 goto out_bam_cache; 306 306 307 307 for (j = 0; j < part->BlocksPerUnit; j++) { ··· 316 316 part->EUNInfo[i].Deleted++; 317 317 } 318 318 } 319 - 319 + 320 320 ret = 0; 321 321 goto out; 322 322 ··· 336 336 337 337 Erase_xfer() schedules an asynchronous erase operation for a 338 338 transfer unit. 
339 - 339 + 340 340 ======================================================================*/ 341 341 342 342 static int erase_xfer(partition_t *part, ··· 351 351 xfer->state = XFER_ERASING; 352 352 353 353 /* Is there a free erase slot? Always in MTD. */ 354 - 355 - 354 + 355 + 356 356 erase=kmalloc(sizeof(struct erase_info), GFP_KERNEL); 357 - if (!erase) 357 + if (!erase) 358 358 return -ENOMEM; 359 359 360 360 erase->mtd = part->mbd.mtd; ··· 362 362 erase->addr = xfer->Offset; 363 363 erase->len = 1 << part->header.EraseUnitSize; 364 364 erase->priv = (u_long)part; 365 - 365 + 366 366 ret = part->mbd.mtd->erase(part->mbd.mtd, erase); 367 367 368 368 if (!ret) ··· 377 377 378 378 Prepare_xfer() takes a freshly erased transfer unit and gives 379 379 it an appropriate header. 380 - 380 + 381 381 ======================================================================*/ 382 382 383 383 static void ftl_erase_callback(struct erase_info *erase) ··· 385 385 partition_t *part; 386 386 struct xfer_info_t *xfer; 387 387 int i; 388 - 388 + 389 389 /* Look up the transfer unit */ 390 390 part = (partition_t *)(erase->priv); 391 391 ··· 422 422 423 423 xfer = &part->XferInfo[i]; 424 424 xfer->state = XFER_FAILED; 425 - 425 + 426 426 DEBUG(1, "ftl_cs: preparing xfer unit at 0x%x\n", xfer->Offset); 427 427 428 428 /* Write the transfer unit header */ ··· 446 446 447 447 for (i = 0; i < nbam; i++, offset += sizeof(u_int32_t)) { 448 448 449 - ret = part->mbd.mtd->write(part->mbd.mtd, offset, sizeof(u_int32_t), 449 + ret = part->mbd.mtd->write(part->mbd.mtd, offset, sizeof(u_int32_t), 450 450 &retlen, (u_char *)&ctl); 451 451 452 452 if (ret) ··· 454 454 } 455 455 xfer->state = XFER_PREPARED; 456 456 return 0; 457 - 457 + 458 458 } /* prepare_xfer */ 459 459 460 460 /*====================================================================== ··· 466 466 All data blocks are copied to the corresponding blocks in the 467 467 target unit, so the virtual block map does not need to be 468 468 
updated. 469 - 469 + 470 470 ======================================================================*/ 471 471 472 472 static int copy_erase_unit(partition_t *part, u_int16_t srcunit, ··· 486 486 xfer = &part->XferInfo[xferunit]; 487 487 DEBUG(2, "ftl_cs: copying block 0x%x to 0x%x\n", 488 488 eun->Offset, xfer->Offset); 489 - 490 - 489 + 490 + 491 491 /* Read current BAM */ 492 492 if (part->bam_index != srcunit) { 493 493 494 494 offset = eun->Offset + le32_to_cpu(part->header.BAMOffset); 495 495 496 - ret = part->mbd.mtd->read(part->mbd.mtd, offset, 496 + ret = part->mbd.mtd->read(part->mbd.mtd, offset, 497 497 part->BlocksPerUnit * sizeof(u_int32_t), 498 498 &retlen, (u_char *) (part->bam_cache)); 499 499 ··· 501 501 part->bam_index = 0xffff; 502 502 503 503 if (ret) { 504 - printk( KERN_WARNING "ftl: Failed to read BAM cache in copy_erase_unit()!\n"); 504 + printk( KERN_WARNING "ftl: Failed to read BAM cache in copy_erase_unit()!\n"); 505 505 return ret; 506 506 } 507 507 } 508 - 508 + 509 509 /* Write the LogicalEUN for the transfer unit */ 510 510 xfer->state = XFER_UNKNOWN; 511 511 offset = xfer->Offset + 20; /* Bad! 
*/ ··· 513 513 514 514 ret = part->mbd.mtd->write(part->mbd.mtd, offset, sizeof(u_int16_t), 515 515 &retlen, (u_char *) &unit); 516 - 516 + 517 517 if (ret) { 518 518 printk( KERN_WARNING "ftl: Failed to write back to BAM cache in copy_erase_unit()!\n"); 519 519 return ret; 520 520 } 521 - 521 + 522 522 /* Copy all data blocks from source unit to transfer unit */ 523 523 src = eun->Offset; dest = xfer->Offset; 524 524 ··· 558 558 } 559 559 560 560 /* Write the BAM to the transfer unit */ 561 - ret = part->mbd.mtd->write(part->mbd.mtd, xfer->Offset + le32_to_cpu(part->header.BAMOffset), 562 - part->BlocksPerUnit * sizeof(int32_t), &retlen, 561 + ret = part->mbd.mtd->write(part->mbd.mtd, xfer->Offset + le32_to_cpu(part->header.BAMOffset), 562 + part->BlocksPerUnit * sizeof(int32_t), &retlen, 563 563 (u_char *)part->bam_cache); 564 564 if (ret) { 565 565 printk( KERN_WARNING "ftl: Error writing BAM in copy_erase_unit\n"); 566 566 return ret; 567 567 } 568 568 569 - 569 + 570 570 /* All clear? Then update the LogicalEUN again */ 571 571 ret = part->mbd.mtd->write(part->mbd.mtd, xfer->Offset + 20, sizeof(u_int16_t), 572 572 &retlen, (u_char *)&srcunitswap); ··· 574 574 if (ret) { 575 575 printk(KERN_WARNING "ftl: Error writing new LogicalEUN in copy_erase_unit\n"); 576 576 return ret; 577 - } 578 - 579 - 577 + } 578 + 579 + 580 580 /* Update the maps and usage stats*/ 581 581 i = xfer->EraseCount; 582 582 xfer->EraseCount = eun->EraseCount; ··· 588 588 part->FreeTotal += free; 589 589 eun->Free = free; 590 590 eun->Deleted = 0; 591 - 591 + 592 592 /* Now, the cache should be valid for the new block */ 593 593 part->bam_index = srcunit; 594 - 594 + 595 595 return 0; 596 596 } /* copy_erase_unit */ 597 597 ··· 608 608 oldest data unit instead. This means that we generally postpone 609 609 the next reclaimation as long as possible, but shuffle static 610 610 stuff around a bit for wear leveling. 
611 - 611 + 612 612 ======================================================================*/ 613 613 614 614 static int reclaim_block(partition_t *part) ··· 666 666 else 667 667 DEBUG(1, "ftl_cs: reclaim failed: no " 668 668 "suitable transfer units!\n"); 669 - 669 + 670 670 return -EIO; 671 671 } 672 672 } ··· 715 715 returns the block index -- the erase unit is just the currently 716 716 cached unit. If there are no free blocks, it returns 0 -- this 717 717 is never a valid data block because it contains the header. 718 - 718 + 719 719 ======================================================================*/ 720 720 721 721 #ifdef PSYCHO_DEBUG ··· 737 737 u_int32_t blk; 738 738 size_t retlen; 739 739 int ret; 740 - 740 + 741 741 /* Find an erase unit with some free space */ 742 742 stop = (part->bam_index == 0xffff) ? 0 : part->bam_index; 743 743 eun = stop; ··· 749 749 750 750 if (part->EUNInfo[eun].Free == 0) 751 751 return 0; 752 - 752 + 753 753 /* Is this unit's BAM cached? */ 754 754 if (eun != part->bam_index) { 755 755 /* Invalidate cache */ 756 756 part->bam_index = 0xffff; 757 757 758 - ret = part->mbd.mtd->read(part->mbd.mtd, 758 + ret = part->mbd.mtd->read(part->mbd.mtd, 759 759 part->EUNInfo[eun].Offset + le32_to_cpu(part->header.BAMOffset), 760 760 part->BlocksPerUnit * sizeof(u_int32_t), 761 761 &retlen, (u_char *) (part->bam_cache)); 762 - 762 + 763 763 if (ret) { 764 764 printk(KERN_WARNING"ftl: Error reading BAM in find_free\n"); 765 765 return 0; ··· 781 781 } 782 782 DEBUG(2, "ftl_cs: found free block at %d in %d\n", blk, eun); 783 783 return blk; 784 - 784 + 785 785 } /* find_free */ 786 786 787 787 788 788 /*====================================================================== 789 789 790 790 Read a series of sectors from an FTL partition. 
791 - 791 + 792 792 ======================================================================*/ 793 793 794 794 static int ftl_read(partition_t *part, caddr_t buffer, ··· 798 798 u_long i; 799 799 int ret; 800 800 size_t offset, retlen; 801 - 801 + 802 802 DEBUG(2, "ftl_cs: ftl_read(0x%p, 0x%lx, %ld)\n", 803 803 part, sector, nblocks); 804 804 if (!(part->state & FTL_FORMATTED)) { ··· 834 834 /*====================================================================== 835 835 836 836 Write a series of sectors to an FTL partition 837 - 837 + 838 838 ======================================================================*/ 839 839 840 840 static int set_bam_entry(partition_t *part, u_int32_t log_addr, ··· 855 855 blk = (log_addr % bsize) / SECTOR_SIZE; 856 856 offset = (part->EUNInfo[eun].Offset + blk * sizeof(u_int32_t) + 857 857 le32_to_cpu(part->header.BAMOffset)); 858 - 858 + 859 859 #ifdef PSYCHO_DEBUG 860 860 ret = part->mbd.mtd->read(part->mbd.mtd, offset, sizeof(u_int32_t), 861 861 &retlen, (u_char *)&old_addr); ··· 925 925 if (ret) 926 926 return ret; 927 927 } 928 - 928 + 929 929 bsize = 1 << part->header.EraseUnitSize; 930 930 931 931 virt_addr = sector * SECTOR_SIZE | BLOCK_DATA; ··· 949 949 log_addr = part->bam_index * bsize + blk * SECTOR_SIZE; 950 950 part->EUNInfo[part->bam_index].Free--; 951 951 part->FreeTotal--; 952 - if (set_bam_entry(part, log_addr, 0xfffffffe)) 952 + if (set_bam_entry(part, log_addr, 0xfffffffe)) 953 953 return -EIO; 954 954 part->EUNInfo[part->bam_index].Deleted++; 955 955 offset = (part->EUNInfo[part->bam_index].Offset + 956 956 blk * SECTOR_SIZE); 957 - ret = part->mbd.mtd->write(part->mbd.mtd, offset, SECTOR_SIZE, &retlen, 957 + ret = part->mbd.mtd->write(part->mbd.mtd, offset, SECTOR_SIZE, &retlen, 958 958 buffer); 959 959 960 960 if (ret) { ··· 964 964 offset); 965 965 return -EIO; 966 966 } 967 - 967 + 968 968 /* Only delete the old entry when the new entry is ready */ 969 969 old_addr = part->VirtualBlockMap[sector+i]; 970 970 
if (old_addr != 0xffffffff) { ··· 979 979 return -EIO; 980 980 part->VirtualBlockMap[sector+i] = log_addr; 981 981 part->EUNInfo[part->bam_index].Deleted--; 982 - 982 + 983 983 buffer += SECTOR_SIZE; 984 984 virt_addr += SECTOR_SIZE; 985 985 } ··· 1034 1034 partition_t *partition; 1035 1035 1036 1036 partition = kmalloc(sizeof(partition_t), GFP_KERNEL); 1037 - 1037 + 1038 1038 if (!partition) { 1039 1039 printk(KERN_WARNING "No memory to scan for FTL on %s\n", 1040 1040 mtd->name); 1041 1041 return; 1042 - } 1042 + } 1043 1043 1044 1044 memset(partition, 0, sizeof(partition_t)); 1045 1045 1046 1046 partition->mbd.mtd = mtd; 1047 1047 1048 - if ((scan_header(partition) == 0) && 1048 + if ((scan_header(partition) == 0) && 1049 1049 (build_maps(partition) == 0)) { 1050 - 1050 + 1051 1051 partition->state = FTL_FORMATTED; 1052 1052 #ifdef PCMCIA_DEBUG 1053 1053 printk(KERN_INFO "ftl_cs: opening %d KiB FTL partition\n", ··· 1086 1086 1087 1087 int init_ftl(void) 1088 1088 { 1089 - DEBUG(0, "$Id: ftl.c,v 1.55 2005/01/17 13:47:21 hvr Exp $\n"); 1089 + DEBUG(0, "$Id: ftl.c,v 1.58 2005/11/07 11:14:19 gleixner Exp $\n"); 1090 1090 1091 1091 return register_mtd_blktrans(&ftl_tr); 1092 1092 }
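set_bam_entry() above turns a logical address into an erase unit number, a block index within that unit, and the flash offset of the block's Block Allocation Map entry. The arithmetic, pulled out into a stand-alone sketch (struct and function names are invented; EraseUnitSize is the log2 of the unit size, as in the FTL header):

```c
#include <stdint.h>
#include <assert.h>

#define SECTOR_SIZE 512

/* Hypothetical stand-alone version of the address arithmetic visible in
 * set_bam_entry() above: decompose a logical address into (erase unit,
 * block within unit, BAM entry offset within the unit). */
struct ftl_addr {
	uint32_t eun;        /* erase unit number */
	uint32_t blk;        /* block index within the unit */
	uint32_t bam_entry;  /* byte offset of the block's BAM entry */
};

static struct ftl_addr ftl_decompose(uint32_t log_addr,
				     unsigned erase_unit_size_log2,
				     uint32_t bam_offset)
{
	struct ftl_addr a;
	uint32_t bsize = 1u << erase_unit_size_log2;	/* unit size in bytes */

	a.eun = log_addr / bsize;
	a.blk = (log_addr % bsize) / SECTOR_SIZE;
	a.bam_entry = bam_offset + a.blk * (uint32_t)sizeof(uint32_t);
	return a;
}
```

For a 64 KiB erase unit (EraseUnitSize 16), logical address 0x21000 lands in unit 2, block 8, and its BAM entry sits 32 bytes past BAMOffset, since each entry is one 32-bit word.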
+24 -24
drivers/mtd/inftlcore.c
··· 1 - /* 1 + /* 2 2 * inftlcore.c -- Linux driver for Inverse Flash Translation Layer (INFTL) 3 3 * 4 4 * (C) Copyright 2002, Greg Ungerer (gerg@snapgear.com) ··· 7 7 * (c) 1999 Machine Vision Holdings, Inc. 8 8 * Author: David Woodhouse <dwmw2@infradead.org> 9 9 * 10 - * $Id: inftlcore.c,v 1.18 2004/11/16 18:28:59 dwmw2 Exp $ 10 + * $Id: inftlcore.c,v 1.19 2005/11/07 11:14:20 gleixner Exp $ 11 11 * 12 12 * This program is free software; you can redistribute it and/or modify 13 13 * it under the terms of the GNU General Public License as published by ··· 113 113 114 114 if (inftl->mbd.size != inftl->heads * inftl->cylinders * inftl->sectors) { 115 115 /* 116 - Oh no we don't have 116 + Oh no we don't have 117 117 mbd.size == heads * cylinders * sectors 118 118 */ 119 119 printk(KERN_WARNING "INFTL: cannot calculate a geometry to " 120 120 "match size of 0x%lx.\n", inftl->mbd.size); 121 121 printk(KERN_WARNING "INFTL: using C:%d H:%d S:%d " 122 122 "(== 0x%lx sects)\n", 123 - inftl->cylinders, inftl->heads , inftl->sectors, 123 + inftl->cylinders, inftl->heads , inftl->sectors, 124 124 (long)inftl->cylinders * (long)inftl->heads * 125 125 (long)inftl->sectors ); 126 126 } ··· 223 223 "Virtual Unit Chain %d!\n", thisVUC); 224 224 return BLOCK_NIL; 225 225 } 226 - 226 + 227 227 /* 228 228 * Scan to find the Erase Unit which holds the actual data for each 229 229 * 512-byte block within the Chain. 
··· 264 264 "Unit Chain 0x%x\n", thisVUC); 265 265 return BLOCK_NIL; 266 266 } 267 - 267 + 268 268 thisEUN = inftl->PUtable[thisEUN]; 269 269 } 270 270 ··· 295 295 */ 296 296 if (BlockMap[block] == BLOCK_NIL) 297 297 continue; 298 - 298 + 299 299 ret = MTD_READ(inftl->mbd.mtd, (inftl->EraseSize * 300 300 BlockMap[block]) + (block * SECTORSIZE), SECTORSIZE, 301 - &retlen, movebuf); 301 + &retlen, movebuf); 302 302 if (ret < 0) { 303 303 ret = MTD_READ(inftl->mbd.mtd, (inftl->EraseSize * 304 304 BlockMap[block]) + (block * SECTORSIZE), 305 305 SECTORSIZE, &retlen, movebuf); 306 - if (ret != -EIO) 306 + if (ret != -EIO) 307 307 DEBUG(MTD_DEBUG_LEVEL1, "INFTL: error went " 308 308 "away on retry?\n"); 309 309 } ··· 355 355 static u16 INFTL_makefreeblock(struct INFTLrecord *inftl, unsigned pendingblock) 356 356 { 357 357 /* 358 - * This is the part that needs some cleverness applied. 358 + * This is the part that needs some cleverness applied. 359 359 * For now, I'm doing the minimum applicable to actually 360 360 * get the thing to work. 361 361 * Wear-levelling and other clever stuff needs to be implemented ··· 414 414 } 415 415 416 416 /* 417 - * INFTL_findwriteunit: Return the unit number into which we can write 417 + * INFTL_findwriteunit: Return the unit number into which we can write 418 418 * for this block. Make it available if it isn't already. 419 419 */ 420 420 static inline u16 INFTL_findwriteunit(struct INFTLrecord *inftl, unsigned block) ··· 463 463 * Invalid block. Don't use it any more. 464 464 * Must implement. 465 465 */ 466 - break; 466 + break; 467 467 } 468 - 469 - if (!silly--) { 468 + 469 + if (!silly--) { 470 470 printk(KERN_WARNING "INFTL: infinite loop in " 471 471 "Virtual Unit Chain 0x%x\n", thisVUC); 472 472 return 0xffff; ··· 482 482 483 483 484 484 /* 485 - * OK. We didn't find one in the existing chain, or there 485 + * OK. We didn't find one in the existing chain, or there 486 486 * is no existing chain. Allocate a new one. 
487 487 */ 488 488 writeEUN = INFTL_findfreeblock(inftl, 0); ··· 506 506 if (writeEUN == BLOCK_NIL) { 507 507 /* 508 508 * Ouch. This should never happen - we should 509 - * always be able to make some room somehow. 510 - * If we get here, we've allocated more storage 509 + * always be able to make some room somehow. 510 + * If we get here, we've allocated more storage 511 511 * space than actual media, or our makefreeblock 512 512 * routine is missing something. 513 513 */ ··· 518 518 INFTL_dumpVUchains(inftl); 519 519 #endif 520 520 return BLOCK_NIL; 521 - } 521 + } 522 522 } 523 523 524 524 /* ··· 543 543 parity |= (nrbits(prev_block, 16) & 0x1) ? 0x2 : 0; 544 544 parity |= (nrbits(anac, 8) & 0x1) ? 0x4 : 0; 545 545 parity |= (nrbits(nacs, 8) & 0x1) ? 0x8 : 0; 546 - 546 + 547 547 oob.u.a.virtualUnitNo = cpu_to_le16(thisVUC); 548 548 oob.u.a.prevUnitNo = cpu_to_le16(prev_block); 549 549 oob.u.a.ANAC = anac; ··· 562 562 oob.u.b.parityPerField = parity; 563 563 oob.u.b.discarded = 0xaa; 564 564 565 - MTD_WRITEOOB(inftl->mbd.mtd, writeEUN * inftl->EraseSize + 565 + MTD_WRITEOOB(inftl->mbd.mtd, writeEUN * inftl->EraseSize + 566 566 SECTORSIZE * 4 + 8, 8, &retlen, (char *)&oob.u); 567 567 568 568 inftl->PUtable[writeEUN] = inftl->VUtable[thisVUC]; ··· 602 602 "Virtual Unit Chain %d!\n", thisVUC); 603 603 return; 604 604 } 605 - 605 + 606 606 /* 607 607 * Scan through the Erase Units to determine whether any data is in 608 608 * each of the 512-byte blocks within the Chain. 
··· 642 642 "Unit Chain 0x%x\n", thisVUC); 643 643 return; 644 644 } 645 - 645 + 646 646 thisEUN = inftl->PUtable[thisEUN]; 647 647 } 648 648 ··· 758 758 return 0; 759 759 } 760 760 761 - static int inftl_writeblock(struct mtd_blktrans_dev *mbd, unsigned long block, 761 + static int inftl_writeblock(struct mtd_blktrans_dev *mbd, unsigned long block, 762 762 char *buffer) 763 763 { 764 764 struct INFTLrecord *inftl = (void *)mbd; ··· 893 893 894 894 static int __init init_inftl(void) 895 895 { 896 - printk(KERN_INFO "INFTL: inftlcore.c $Revision: 1.18 $, " 896 + printk(KERN_INFO "INFTL: inftlcore.c $Revision: 1.19 $, " 897 897 "inftlmount.c %s\n", inftlmountrev); 898 898 899 899 return register_mtd_blktrans(&inftl_tr);
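INFTL_findwriteunit() above packs one odd-parity bit per OOB field into the parityPerField nibble, using nrbits() over each field's width. A stand-alone version of that computation (nrbits() here is a hypothetical reimplementation counting set bits in the low n bits, matching how the quoted code uses it):

```c
#include <stdint.h>
#include <assert.h>

/* Count the set bits in the low n bits of val — a hypothetical
 * stand-in for the nrbits() helper the diff above relies on. */
static int nrbits(unsigned int val, int n)
{
	int i, cnt = 0;

	for (i = 0; i < n; i++)
		cnt += (val >> i) & 1;
	return cnt;
}

/* Pack one parity bit per field into a nibble, exactly as the
 * parityPerField computation in INFTL_findwriteunit() above does. */
static uint8_t inftl_parity(uint16_t vun, uint16_t prev,
			    uint8_t anac, uint8_t nacs)
{
	uint8_t parity = (nrbits(vun, 16) & 1) ? 0x1 : 0;

	parity |= (nrbits(prev, 16) & 1) ? 0x2 : 0;
	parity |= (nrbits(anac, 8) & 1) ? 0x4 : 0;
	parity |= (nrbits(nacs, 8) & 1) ? 0x8 : 0;
	return parity;
}
```

So a unit header with virtualUnitNo 0x0001 (one set bit), prevUnitNo 0x0003 (two set bits), ANAC 0x01 and NACs 0x00 yields parityPerField 0x5: bits 0 and 2 set, bits 1 and 3 clear.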
+10 -10
drivers/mtd/inftlmount.c
··· 1 - /* 1 + /* 2 2 * inftlmount.c -- INFTL mount code with extensive checks. 3 3 * 4 4 * Author: Greg Ungerer (gerg@snapgear.com) 5 5 * (C) Copyright 2002-2003, Greg Ungerer (gerg@snapgear.com) 6 6 * 7 7 * Based heavily on the nftlmount.c code which is: 8 - * Author: Fabrice Bellard (fabrice.bellard@netgem.com) 8 + * Author: Fabrice Bellard (fabrice.bellard@netgem.com) 9 9 * Copyright (C) 2000 Netgem S.A. 10 10 * 11 - * $Id: inftlmount.c,v 1.17 2005/08/08 08:56:19 dwmw2 Exp $ 11 + * $Id: inftlmount.c,v 1.18 2005/11/07 11:14:20 gleixner Exp $ 12 12 * 13 13 * This program is free software; you can redistribute it and/or modify 14 14 * it under the terms of the GNU General Public License as published by ··· 41 41 #include <linux/mtd/inftl.h> 42 42 #include <linux/mtd/compatmac.h> 43 43 44 - char inftlmountrev[]="$Revision: 1.17 $"; 44 + char inftlmountrev[]="$Revision: 1.18 $"; 45 45 46 46 /* 47 47 * find_boot_record: Find the INFTL Media Header and its Spare copy which ··· 273 273 inftl->nb_boot_blocks); 274 274 return -1; 275 275 } 276 - 276 + 277 277 inftl->mbd.size = inftl->numvunits * 278 278 (inftl->EraseSize / SECTORSIZE); 279 279 ··· 302 302 inftl->nb_blocks * sizeof(u16)); 303 303 return -ENOMEM; 304 304 } 305 - 305 + 306 306 /* Mark the blocks before INFTL MediaHeader as reserved */ 307 307 for (i = 0; i < inftl->nb_boot_blocks; i++) 308 308 inftl->PUtable[i] = BLOCK_RESERVED; ··· 380 380 * 381 381 * Return: 0 when succeed, -1 on error. 382 382 * 383 - * ToDo: 1. Is it neceressary to check_free_sector after erasing ?? 383 + * ToDo: 1. Is it neceressary to check_free_sector after erasing ?? 
384 384 */ 385 385 int INFTL_formatblock(struct INFTLrecord *inftl, int block) 386 386 { ··· 578 578 printk(KERN_ERR "INFTL: Out of memory.\n"); 579 579 return -ENOMEM; 580 580 } 581 - 581 + 582 582 memset(ANACtable, 0, s->nb_blocks); 583 583 584 584 /* ··· 600 600 601 601 for (chain_length = 0; ; chain_length++) { 602 602 603 - if ((chain_length == 0) && 603 + if ((chain_length == 0) && 604 604 (s->PUtable[block] != BLOCK_NOTEXPLORED)) { 605 605 /* Nothing to do here, onto next block */ 606 606 break; ··· 747 747 "in virtual chain %d\n", 748 748 s->PUtable[block], logical_block); 749 749 s->PUtable[block] = BLOCK_NIL; 750 - 750 + 751 751 } 752 752 if (ANACtable[block] != ANAC) { 753 753 /*
+12 -12
drivers/mtd/mtd_blkdevs.c
··· 1 1 /* 2 - * $Id: mtd_blkdevs.c,v 1.26 2005/07/29 19:42:04 tpoynor Exp $ 2 + * $Id: mtd_blkdevs.c,v 1.27 2005/11/07 11:14:20 gleixner Exp $ 3 3 * 4 4 * (C) 2003 David Woodhouse <dwmw2@infradead.org> 5 5 * ··· 85 85 daemonize("%sd", tr->name); 86 86 87 87 /* daemonize() doesn't do this for us since some kernel threads 88 - actually want to deal with signals. We can't just call 88 + actually want to deal with signals. We can't just call 89 89 exit_sighand() since that'll cause an oops when we finally 90 90 do exit. */ 91 91 spin_lock_irq(&current->sighand->siglock); ··· 94 94 spin_unlock_irq(&current->sighand->siglock); 95 95 96 96 spin_lock_irq(rq->queue_lock); 97 - 97 + 98 98 while (!tr->blkcore_priv->exiting) { 99 99 struct request *req; 100 100 struct mtd_blktrans_dev *dev; ··· 157 157 if (!try_module_get(tr->owner)) 158 158 goto out_tr; 159 159 160 - /* FIXME: Locking. A hot pluggable device can go away 160 + /* FIXME: Locking. A hot pluggable device can go away 161 161 (del_mtd_device can be called for it) without its module 162 162 being unloaded. 
*/ 163 163 dev->mtd->usecount++; ··· 195 195 } 196 196 197 197 198 - static int blktrans_ioctl(struct inode *inode, struct file *file, 198 + static int blktrans_ioctl(struct inode *inode, struct file *file, 199 199 unsigned int cmd, unsigned long arg) 200 200 { 201 201 struct mtd_blktrans_dev *dev = inode->i_bdev->bd_disk->private_data; ··· 264 264 /* Required number was free */ 265 265 list_add_tail(&new->list, &d->list); 266 266 goto added; 267 - } 267 + } 268 268 last_devnum = d->devnum; 269 269 } 270 270 if (new->devnum == -1) ··· 288 288 gd->major = tr->major; 289 289 gd->first_minor = (new->devnum) << tr->part_bits; 290 290 gd->fops = &mtd_blktrans_ops; 291 - 291 + 292 292 if (tr->part_bits) 293 293 if (new->devnum < 26) 294 294 snprintf(gd->disk_name, sizeof(gd->disk_name), ··· 314 314 set_disk_ro(gd, 1); 315 315 316 316 add_disk(gd); 317 - 317 + 318 318 return 0; 319 319 } 320 320 ··· 329 329 330 330 del_gendisk(old->blkcore_priv); 331 331 put_disk(old->blkcore_priv); 332 - 332 + 333 333 return 0; 334 334 } 335 335 ··· 368 368 .add = blktrans_notify_add, 369 369 .remove = blktrans_notify_remove, 370 370 }; 371 - 371 + 372 372 int register_mtd_blktrans(struct mtd_blktrans_ops *tr) 373 373 { 374 374 int ret, i; 375 375 376 - /* Register the notifier if/when the first device type is 376 + /* Register the notifier if/when the first device type is 377 377 registered, to prevent the link/init ordering from fucking 378 378 us over. */ 379 379 if (!blktrans_notifier.list.next) ··· 416 416 kfree(tr->blkcore_priv); 417 417 up(&mtd_table_mutex); 418 418 return ret; 419 - } 419 + } 420 420 421 421 INIT_LIST_HEAD(&tr->devs); 422 422 list_add(&tr->list, &blktrans_majors);
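The device-number allocation in mtd_blkdevs.c walks `tr->devs` (kept sorted by `devnum`) and hands out the first unused number, inserting the new device with `list_add_tail` before the first larger entry. A hedged sketch of the same first-gap search, run over a sorted array standing in for the kernel list:

```c
#include <stddef.h>

/* Return the lowest device number not present in `used`, a sorted
 * array of n in-use numbers.  Mirrors the scan in the diff above,
 * minus the list plumbing. */
int first_free_devnum(const int *used, size_t n)
{
	int last = -1;
	size_t i;

	for (i = 0; i < n; i++) {
		if (used[i] > last + 1)
			return last + 1;	/* gap before used[i] */
		last = used[i];
	}
	return last + 1;			/* append past the end */
}
```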
+22 -22
drivers/mtd/mtdblock.c
··· 1 - /* 1 + /* 2 2 * Direct MTD block device access 3 3 * 4 - * $Id: mtdblock.c,v 1.67 2005/11/06 10:04:37 gleixner Exp $ 4 + * $Id: mtdblock.c,v 1.68 2005/11/07 11:14:20 gleixner Exp $ 5 5 * 6 6 * (C) 2000-2003 Nicolas Pitre <nico@cam.org> 7 7 * (C) 1999-2003 David Woodhouse <dwmw2@infradead.org> ··· 32 32 33 33 /* 34 34 * Cache stuff... 35 - * 35 + * 36 36 * Since typical flash erasable sectors are much larger than what Linux's 37 37 * buffer cache can handle, we must implement read-modify-write on flash 38 38 * sectors for each block write requests. To avoid over-erasing flash sectors ··· 46 46 wake_up(wait_q); 47 47 } 48 48 49 - static int erase_write (struct mtd_info *mtd, unsigned long pos, 49 + static int erase_write (struct mtd_info *mtd, unsigned long pos, 50 50 int len, const char *buf) 51 51 { 52 52 struct erase_info erase; ··· 104 104 return 0; 105 105 106 106 DEBUG(MTD_DEBUG_LEVEL2, "mtdblock: writing cached data for \"%s\" " 107 - "at 0x%lx, size 0x%x\n", mtd->name, 107 + "at 0x%lx, size 0x%x\n", mtd->name, 108 108 mtdblk->cache_offset, mtdblk->cache_size); 109 - 110 - ret = erase_write (mtd, mtdblk->cache_offset, 109 + 110 + ret = erase_write (mtd, mtdblk->cache_offset, 111 111 mtdblk->cache_size, mtdblk->cache_data); 112 112 if (ret) 113 113 return ret; 114 114 115 115 /* 116 116 * Here we could argubly set the cache state to STATE_CLEAN. 117 - * However this could lead to inconsistency since we will not 118 - * be notified if this content is altered on the flash by other 117 + * However this could lead to inconsistency since we will not 118 + * be notified if this content is altered on the flash by other 119 119 * means. Let's declare it empty and leave buffering tasks to 120 120 * the buffer cache instead. 
121 121 */ ··· 124 124 } 125 125 126 126 127 - static int do_cached_write (struct mtdblk_dev *mtdblk, unsigned long pos, 127 + static int do_cached_write (struct mtdblk_dev *mtdblk, unsigned long pos, 128 128 int len, const char *buf) 129 129 { 130 130 struct mtd_info *mtd = mtdblk->mtd; ··· 134 134 135 135 DEBUG(MTD_DEBUG_LEVEL2, "mtdblock: write on \"%s\" at 0x%lx, size 0x%x\n", 136 136 mtd->name, pos, len); 137 - 137 + 138 138 if (!sect_size) 139 139 return MTD_WRITE (mtd, pos, len, &retlen, buf); 140 140 ··· 142 142 unsigned long sect_start = (pos/sect_size)*sect_size; 143 143 unsigned int offset = pos - sect_start; 144 144 unsigned int size = sect_size - offset; 145 - if( size > len ) 145 + if( size > len ) 146 146 size = len; 147 147 148 148 if (size == sect_size) { 149 - /* 149 + /* 150 150 * We are covering a whole sector. Thus there is no 151 151 * need to bother with the cache while it may still be 152 152 * useful for other partial writes. ··· 160 160 if (mtdblk->cache_state == STATE_DIRTY && 161 161 mtdblk->cache_offset != sect_start) { 162 162 ret = write_cached_data(mtdblk); 163 - if (ret) 163 + if (ret) 164 164 return ret; 165 165 } 166 166 ··· 193 193 } 194 194 195 195 196 - static int do_cached_read (struct mtdblk_dev *mtdblk, unsigned long pos, 196 + static int do_cached_read (struct mtdblk_dev *mtdblk, unsigned long pos, 197 197 int len, char *buf) 198 198 { 199 199 struct mtd_info *mtd = mtdblk->mtd; ··· 201 201 size_t retlen; 202 202 int ret; 203 203 204 - DEBUG(MTD_DEBUG_LEVEL2, "mtdblock: read on \"%s\" at 0x%lx, size 0x%x\n", 204 + DEBUG(MTD_DEBUG_LEVEL2, "mtdblock: read on \"%s\" at 0x%lx, size 0x%x\n", 205 205 mtd->name, pos, len); 206 - 206 + 207 207 if (!sect_size) 208 208 return MTD_READ (mtd, pos, len, &retlen, buf); 209 209 ··· 211 211 unsigned long sect_start = (pos/sect_size)*sect_size; 212 212 unsigned int offset = pos - sect_start; 213 213 unsigned int size = sect_size - offset; 214 - if (size > len) 214 + if (size > len) 215 215 
size = len; 216 216 217 217 /* ··· 269 269 int dev = mbd->devnum; 270 270 271 271 DEBUG(MTD_DEBUG_LEVEL1,"mtdblock_open\n"); 272 - 272 + 273 273 if (mtdblks[dev]) { 274 274 mtdblks[dev]->count++; 275 275 return 0; 276 276 } 277 - 277 + 278 278 /* OK, it's not open. Create cache info for it */ 279 279 mtdblk = kmalloc(sizeof(struct mtdblk_dev), GFP_KERNEL); 280 280 if (!mtdblk) ··· 293 293 } 294 294 295 295 mtdblks[dev] = mtdblk; 296 - 296 + 297 297 DEBUG(MTD_DEBUG_LEVEL1, "ok\n"); 298 298 299 299 return 0; ··· 321 321 DEBUG(MTD_DEBUG_LEVEL1, "ok\n"); 322 322 323 323 return 0; 324 - } 324 + } 325 325 326 326 static int mtdblock_flush(struct mtd_blktrans_dev *dev) 327 327 {
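The read-modify-write path in `do_cached_write()` above rounds the position down to an erase-sector boundary and clamps each chunk to the remainder of that sector. That arithmetic, pulled out as a self-contained sketch (field names follow the driver; the struct itself is illustrative):

```c
struct chunk {
	unsigned long sect_start;  /* start of the sector containing pos */
	unsigned int offset;       /* pos within that sector */
	unsigned int size;         /* bytes handled on this pass */
};

/* One iteration of the chunking loop: sect_size is the flash erase
 * size, pos/len describe the remaining request. */
struct chunk split_write(unsigned long pos, unsigned int len,
			 unsigned int sect_size)
{
	struct chunk c;

	c.sect_start = (pos / sect_size) * sect_size;
	c.offset = pos - c.sect_start;
	c.size = sect_size - c.offset;
	if (c.size > len)
		c.size = len;	/* the request ends inside this sector */
	return c;
}
```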
+30 -30
drivers/mtd/mtdchar.c
··· 1 1 /* 2 - * $Id: mtdchar.c,v 1.75 2005/11/06 10:04:37 gleixner Exp $ 2 + * $Id: mtdchar.c,v 1.76 2005/11/07 11:14:20 gleixner Exp $ 3 3 * 4 4 * Character-device access to raw MTD devices. 5 5 * ··· 28 28 29 29 class_device_create(mtd_class, NULL, MKDEV(MTD_CHAR_MAJOR, mtd->index*2), 30 30 NULL, "mtd%d", mtd->index); 31 - 31 + 32 32 class_device_create(mtd_class, NULL, 33 33 MKDEV(MTD_CHAR_MAJOR, mtd->index*2+1), 34 34 NULL, "mtd%dro", mtd->index); ··· 108 108 return -EACCES; 109 109 110 110 mtd = get_mtd_device(NULL, devnum); 111 - 111 + 112 112 if (!mtd) 113 113 return -ENODEV; 114 - 114 + 115 115 if (MTD_ABSENT == mtd->type) { 116 116 put_mtd_device(mtd); 117 117 return -ENODEV; 118 118 } 119 119 120 120 file->private_data = mtd; 121 - 121 + 122 122 /* You can't open it RW if it's not a writeable device */ 123 123 if ((file->f_mode & 2) && !(mtd->flags & MTD_WRITEABLE)) { 124 124 put_mtd_device(mtd); 125 125 return -EACCES; 126 126 } 127 - 127 + 128 128 return 0; 129 129 } /* mtd_open */ 130 130 ··· 137 137 DEBUG(MTD_DEBUG_LEVEL0, "MTD_close\n"); 138 138 139 139 mtd = TO_MTD(file); 140 - 140 + 141 141 if (mtd->sync) 142 142 mtd->sync(mtd); 143 - 143 + 144 144 put_mtd_device(mtd); 145 145 146 146 return 0; ··· 159 159 int ret=0; 160 160 int len; 161 161 char *kbuf; 162 - 162 + 163 163 DEBUG(MTD_DEBUG_LEVEL0,"MTD_read\n"); 164 164 165 165 if (*ppos + count > mtd->size) ··· 167 167 168 168 if (!count) 169 169 return 0; 170 - 170 + 171 171 /* FIXME: Use kiovec in 2.5 to lock down the user's buffers 172 172 and pass them directly to the MTD functions */ 173 173 while (count) { 174 - if (count > MAX_KMALLOC_SIZE) 174 + if (count > MAX_KMALLOC_SIZE) 175 175 len = MAX_KMALLOC_SIZE; 176 176 else 177 177 len = count; ··· 179 179 kbuf=kmalloc(len,GFP_KERNEL); 180 180 if (!kbuf) 181 181 return -ENOMEM; 182 - 182 + 183 183 switch (MTD_MODE(file)) { 184 184 case MTD_MODE_OTP_FACT: 185 185 ret = mtd->read_fact_prot_reg(mtd, *ppos, len, &retlen, kbuf); ··· 192 192 } 193 193 
/* Nand returns -EBADMSG on ecc errors, but it returns 194 194 * the data. For our userspace tools it is important 195 - * to dump areas with ecc errors ! 195 + * to dump areas with ecc errors ! 196 196 * Userspace software which accesses NAND this way 197 197 * must be aware of the fact that it deals with NAND 198 198 */ ··· 214 214 kfree(kbuf); 215 215 return ret; 216 216 } 217 - 217 + 218 218 kfree(kbuf); 219 219 } 220 220 ··· 231 231 int len; 232 232 233 233 DEBUG(MTD_DEBUG_LEVEL0,"MTD_write\n"); 234 - 234 + 235 235 if (*ppos == mtd->size) 236 236 return -ENOSPC; 237 - 237 + 238 238 if (*ppos + count > mtd->size) 239 239 count = mtd->size - *ppos; 240 240 ··· 242 242 return 0; 243 243 244 244 while (count) { 245 - if (count > MAX_KMALLOC_SIZE) 245 + if (count > MAX_KMALLOC_SIZE) 246 246 len = MAX_KMALLOC_SIZE; 247 247 else 248 248 len = count; ··· 257 257 kfree(kbuf); 258 258 return -EFAULT; 259 259 } 260 - 260 + 261 261 switch (MTD_MODE(file)) { 262 262 case MTD_MODE_OTP_FACT: 263 263 ret = -EROFS; ··· 282 282 kfree(kbuf); 283 283 return ret; 284 284 } 285 - 285 + 286 286 kfree(kbuf); 287 287 } 288 288 ··· 306 306 void __user *argp = (void __user *)arg; 307 307 int ret = 0; 308 308 u_long size; 309 - 309 + 310 310 DEBUG(MTD_DEBUG_LEVEL0, "MTD_ioctl\n"); 311 311 312 312 size = (cmd & IOCSIZE_MASK) >> IOCSIZE_SHIFT; ··· 318 318 if (!access_ok(VERIFY_WRITE, argp, size)) 319 319 return -EFAULT; 320 320 } 321 - 321 + 322 322 switch (cmd) { 323 323 case MEMGETREGIONCOUNT: 324 324 if (copy_to_user(argp, &(mtd->numeraseregions), sizeof(int))) ··· 370 370 erase->mtd = mtd; 371 371 erase->callback = mtdchar_erase_callback; 372 372 erase->priv = (unsigned long)&waitq; 373 - 373 + 374 374 /* 375 375 FIXME: Allow INTERRUPTIBLE. Which means 376 376 not having the wait_queue head on the stack. 
377 - 377 + 378 378 If the wq_head is on the stack, and we 379 379 leave because we got interrupted, then the 380 380 wq_head is no longer there when the ··· 402 402 struct mtd_oob_buf buf; 403 403 void *databuf; 404 404 ssize_t retlen; 405 - 405 + 406 406 if(!(file->f_mode & 2)) 407 407 return -EPERM; 408 408 409 409 if (copy_from_user(&buf, argp, sizeof(struct mtd_oob_buf))) 410 410 return -EFAULT; 411 - 411 + 412 412 if (buf.length > 0x4096) 413 413 return -EINVAL; 414 414 ··· 424 424 databuf = kmalloc(buf.length, GFP_KERNEL); 425 425 if (!databuf) 426 426 return -ENOMEM; 427 - 427 + 428 428 if (copy_from_user(databuf, buf.ptr, buf.length)) { 429 429 kfree(databuf); 430 430 return -EFAULT; ··· 448 448 449 449 if (copy_from_user(&buf, argp, sizeof(struct mtd_oob_buf))) 450 450 return -EFAULT; 451 - 451 + 452 452 if (buf.length > 0x4096) 453 453 return -EINVAL; 454 454 ··· 464 464 databuf = kmalloc(buf.length, GFP_KERNEL); 465 465 if (!databuf) 466 466 return -ENOMEM; 467 - 467 + 468 468 ret = (mtd->read_oob)(mtd, buf.start, buf.length, &retlen, databuf); 469 469 470 470 if (put_user(retlen, (uint32_t __user *)argp)) 471 471 ret = -EFAULT; 472 472 else if (retlen && copy_to_user(buf.ptr, databuf, retlen)) 473 473 ret = -EFAULT; 474 - 474 + 475 475 kfree(databuf); 476 476 break; 477 477 } ··· 521 521 case MEMGETBADBLOCK: 522 522 { 523 523 loff_t offs; 524 - 524 + 525 525 if (copy_from_user(&offs, argp, sizeof(loff_t))) 526 526 return -EFAULT; 527 527 if (!mtd->block_isbad)
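`mtd_read()`/`mtd_write()` in mtdchar.c never kmalloc more than `MAX_KMALLOC_SIZE` at once: large requests are split into bounded chunks inside the `while (count)` loop. A sketch of just that clamping, counting how many passes a read of `count` bytes takes (the 0x20000 value matches the driver's define, but treat it as illustrative):

```c
#define MAX_KMALLOC_SIZE 0x20000	/* 128 KiB per-pass bounce buffer */

/* How many kmalloc-bounded passes a request of `count` bytes needs. */
unsigned int read_passes(unsigned long count)
{
	unsigned int passes = 0;

	while (count) {
		unsigned long len =
			count > MAX_KMALLOC_SIZE ? MAX_KMALLOC_SIZE : count;
		count -= len;
		passes++;
	}
	return passes;
}
```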
+3 -3
drivers/mtd/mtdconcat.c
··· 7 7 * 8 8 * This code is GPL 9 9 * 10 - * $Id: mtdconcat.c,v 1.10 2005/11/06 10:04:37 gleixner Exp $ 10 + * $Id: mtdconcat.c,v 1.11 2005/11/07 11:14:20 gleixner Exp $ 11 11 */ 12 12 13 13 #include <linux/kernel.h> ··· 44 44 */ 45 45 #define CONCAT(x) ((struct mtd_concat *)(x)) 46 46 47 - /* 47 + /* 48 48 * MTD methods which look up the relevant subdevice, translate the 49 49 * effective address and pass through to the subdevice. 50 50 */ ··· 878 878 return &concat->mtd; 879 879 } 880 880 881 - /* 881 + /* 882 882 * This function destroys an MTD object obtained from concat_mtd_devs() 883 883 */ 884 884
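The mtdconcat methods referenced above look up the subdevice containing an offset by subtracting each subdevice's size in turn, then call through with the translated offset. A hedged standalone version over an array of subdevice sizes (the helper name is made up for illustration):

```c
#include <stddef.h>

/* Return the index of the subdevice holding absolute offset *ofs and
 * rewrite *ofs to be relative to that subdevice; -1 if past the end
 * of the concatenated device. */
int concat_lookup(const unsigned long *sizes, size_t n, unsigned long *ofs)
{
	size_t i;

	for (i = 0; i < n; i++) {
		if (*ofs < sizes[i])
			return (int)i;	/* offset lands in this subdevice */
		*ofs -= sizes[i];	/* skip past it and keep looking */
	}
	return -1;
}
```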
+9 -9
drivers/mtd/mtdcore.c
··· 1 1 /* 2 - * $Id: mtdcore.c,v 1.46 2005/08/11 17:13:43 gleixner Exp $ 2 + * $Id: mtdcore.c,v 1.47 2005/11/07 11:14:20 gleixner Exp $ 3 3 * 4 4 * Core registration and callback routines for MTD 5 5 * drivers and users. ··· 25 25 26 26 #include <linux/mtd/mtd.h> 27 27 28 - /* These are exported solely for the purpose of mtd_blkdevs.c. You 28 + /* These are exported solely for the purpose of mtd_blkdevs.c. You 29 29 should not use them for _anything_ else */ 30 30 DECLARE_MUTEX(mtd_table_mutex); 31 31 struct mtd_info *mtd_table[MAX_MTD_DEVICES]; ··· 66 66 struct mtd_notifier *not = list_entry(this, struct mtd_notifier, list); 67 67 not->add(mtd); 68 68 } 69 - 69 + 70 70 up(&mtd_table_mutex); 71 71 /* We _know_ we aren't being removed, because 72 72 our caller is still holding us here. So none ··· 75 75 __module_get(THIS_MODULE); 76 76 return 0; 77 77 } 78 - 78 + 79 79 up(&mtd_table_mutex); 80 80 return 1; 81 81 } ··· 93 93 int del_mtd_device (struct mtd_info *mtd) 94 94 { 95 95 int ret; 96 - 96 + 97 97 down(&mtd_table_mutex); 98 98 99 99 if (mtd_table[mtd->index] != mtd) { 100 100 ret = -ENODEV; 101 101 } else if (mtd->usecount) { 102 - printk(KERN_NOTICE "Removing MTD device #%d (%s) with use count %d\n", 102 + printk(KERN_NOTICE "Removing MTD device #%d (%s) with use count %d\n", 103 103 mtd->index, mtd->name, mtd->usecount); 104 104 ret = -EBUSY; 105 105 } else { ··· 140 140 list_add(&new->list, &mtd_notifiers); 141 141 142 142 __module_get(THIS_MODULE); 143 - 143 + 144 144 for (i=0; i< MAX_MTD_DEVICES; i++) 145 145 if (mtd_table[i]) 146 146 new->add(mtd_table[i]); ··· 169 169 for (i=0; i< MAX_MTD_DEVICES; i++) 170 170 if (mtd_table[i]) 171 171 old->remove(mtd_table[i]); 172 - 172 + 173 173 list_del(&old->list); 174 174 up(&mtd_table_mutex); 175 175 return 0; ··· 187 187 * both, return the num'th driver only if its address matches. Return NULL 188 188 * if not. 
189 189 */ 190 - 190 + 191 191 struct mtd_info *get_mtd_device(struct mtd_info *mtd, int num) 192 192 { 193 193 struct mtd_info *ret = NULL;
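`del_mtd_device()` in the mtdcore.c hunk refuses to drop a device whose `usecount` is still nonzero. A stripped-down sketch of that guard, with a stub structure standing in for `struct mtd_info` (the stub and helper are illustrative, not kernel API):

```c
#include <errno.h>

struct mtd_stub {
	int usecount;	/* open references, as in struct mtd_info */
	int registered;	/* stands in for presence in mtd_table[] */
};

/* Remove the device unless it is absent or still in use. */
int del_mtd_stub(struct mtd_stub *mtd)
{
	if (!mtd->registered)
		return -ENODEV;	/* not in the table */
	if (mtd->usecount)
		return -EBUSY;	/* still open somewhere: refuse */
	mtd->registered = 0;
	return 0;
}
```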
+46 -46
drivers/mtd/mtdpart.c
··· 5 5 * 6 6 * This code is GPL 7 7 * 8 - * $Id: mtdpart.c,v 1.54 2005/09/30 14:49:08 dedekind Exp $ 8 + * $Id: mtdpart.c,v 1.55 2005/11/07 11:14:20 gleixner Exp $ 9 9 * 10 10 * 02-21-2002 Thomas Gleixner <gleixner@autronix.de> 11 11 * added support for read_oob, write_oob 12 - */ 12 + */ 13 13 14 14 #include <linux/module.h> 15 15 #include <linux/types.h> ··· 41 41 */ 42 42 #define PART(x) ((struct mtd_part *)(x)) 43 43 44 - 45 - /* 44 + 45 + /* 46 46 * MTD methods which simply translate the effective address and pass through 47 47 * to the _real_ device. 48 48 */ 49 49 50 - static int part_read (struct mtd_info *mtd, loff_t from, size_t len, 50 + static int part_read (struct mtd_info *mtd, loff_t from, size_t len, 51 51 size_t *retlen, u_char *buf) 52 52 { 53 53 struct mtd_part *part = PART(mtd); ··· 55 55 len = 0; 56 56 else if (from + len > mtd->size) 57 57 len = mtd->size - from; 58 - if (part->master->read_ecc == NULL) 59 - return part->master->read (part->master, from + part->offset, 58 + if (part->master->read_ecc == NULL) 59 + return part->master->read (part->master, from + part->offset, 60 60 len, retlen, buf); 61 61 else 62 - return part->master->read_ecc (part->master, from + part->offset, 62 + return part->master->read_ecc (part->master, from + part->offset, 63 63 len, retlen, buf, NULL, &mtd->oobinfo); 64 64 } 65 65 66 - static int part_point (struct mtd_info *mtd, loff_t from, size_t len, 66 + static int part_point (struct mtd_info *mtd, loff_t from, size_t len, 67 67 size_t *retlen, u_char **buf) 68 68 { 69 69 struct mtd_part *part = PART(mtd); ··· 71 71 len = 0; 72 72 else if (from + len > mtd->size) 73 73 len = mtd->size - from; 74 - return part->master->point (part->master, from + part->offset, 74 + return part->master->point (part->master, from + part->offset, 75 75 len, retlen, buf); 76 76 } 77 77 static void part_unpoint (struct mtd_info *mtd, u_char *addr, loff_t from, size_t len) ··· 82 82 } 83 83 84 84 85 - static int part_read_ecc (struct 
mtd_info *mtd, loff_t from, size_t len, 85 + static int part_read_ecc (struct mtd_info *mtd, loff_t from, size_t len, 86 86 size_t *retlen, u_char *buf, u_char *eccbuf, struct nand_oobinfo *oobsel) 87 87 { 88 88 struct mtd_part *part = PART(mtd); ··· 92 92 len = 0; 93 93 else if (from + len > mtd->size) 94 94 len = mtd->size - from; 95 - return part->master->read_ecc (part->master, from + part->offset, 95 + return part->master->read_ecc (part->master, from + part->offset, 96 96 len, retlen, buf, eccbuf, oobsel); 97 97 } 98 98 99 - static int part_read_oob (struct mtd_info *mtd, loff_t from, size_t len, 99 + static int part_read_oob (struct mtd_info *mtd, loff_t from, size_t len, 100 100 size_t *retlen, u_char *buf) 101 101 { 102 102 struct mtd_part *part = PART(mtd); ··· 104 104 len = 0; 105 105 else if (from + len > mtd->size) 106 106 len = mtd->size - from; 107 - return part->master->read_oob (part->master, from + part->offset, 107 + return part->master->read_oob (part->master, from + part->offset, 108 108 len, retlen, buf); 109 109 } 110 110 111 - static int part_read_user_prot_reg (struct mtd_info *mtd, loff_t from, size_t len, 111 + static int part_read_user_prot_reg (struct mtd_info *mtd, loff_t from, size_t len, 112 112 size_t *retlen, u_char *buf) 113 113 { 114 114 struct mtd_part *part = PART(mtd); 115 - return part->master->read_user_prot_reg (part->master, from, 115 + return part->master->read_user_prot_reg (part->master, from, 116 116 len, retlen, buf); 117 117 } 118 118 ··· 123 123 return part->master->get_user_prot_info (part->master, buf, len); 124 124 } 125 125 126 - static int part_read_fact_prot_reg (struct mtd_info *mtd, loff_t from, size_t len, 126 + static int part_read_fact_prot_reg (struct mtd_info *mtd, loff_t from, size_t len, 127 127 size_t *retlen, u_char *buf) 128 128 { 129 129 struct mtd_part *part = PART(mtd); 130 - return part->master->read_fact_prot_reg (part->master, from, 130 + return part->master->read_fact_prot_reg (part->master, 
from, 131 131 len, retlen, buf); 132 132 } 133 133 ··· 148 148 len = 0; 149 149 else if (to + len > mtd->size) 150 150 len = mtd->size - to; 151 - if (part->master->write_ecc == NULL) 152 - return part->master->write (part->master, to + part->offset, 151 + if (part->master->write_ecc == NULL) 152 + return part->master->write (part->master, to + part->offset, 153 153 len, retlen, buf); 154 154 else 155 - return part->master->write_ecc (part->master, to + part->offset, 155 + return part->master->write_ecc (part->master, to + part->offset, 156 156 len, retlen, buf, NULL, &mtd->oobinfo); 157 - 157 + 158 158 } 159 159 160 160 static int part_write_ecc (struct mtd_info *mtd, loff_t to, size_t len, ··· 170 170 len = 0; 171 171 else if (to + len > mtd->size) 172 172 len = mtd->size - to; 173 - return part->master->write_ecc (part->master, to + part->offset, 173 + return part->master->write_ecc (part->master, to + part->offset, 174 174 len, retlen, buf, eccbuf, oobsel); 175 175 } 176 176 ··· 184 184 len = 0; 185 185 else if (to + len > mtd->size) 186 186 len = mtd->size - to; 187 - return part->master->write_oob (part->master, to + part->offset, 187 + return part->master->write_oob (part->master, to + part->offset, 188 188 len, retlen, buf); 189 189 } 190 190 191 - static int part_write_user_prot_reg (struct mtd_info *mtd, loff_t from, size_t len, 191 + static int part_write_user_prot_reg (struct mtd_info *mtd, loff_t from, size_t len, 192 192 size_t *retlen, u_char *buf) 193 193 { 194 194 struct mtd_part *part = PART(mtd); 195 - return part->master->write_user_prot_reg (part->master, from, 195 + return part->master->write_user_prot_reg (part->master, from, 196 196 len, retlen, buf); 197 197 } 198 198 199 - static int part_lock_user_prot_reg (struct mtd_info *mtd, loff_t from, size_t len) 199 + static int part_lock_user_prot_reg (struct mtd_info *mtd, loff_t from, size_t len) 200 200 { 201 201 struct mtd_part *part = PART(mtd); 202 202 return 
part->master->lock_user_prot_reg (part->master, from, len); ··· 208 208 struct mtd_part *part = PART(mtd); 209 209 if (!(mtd->flags & MTD_WRITEABLE)) 210 210 return -EROFS; 211 - if (part->master->writev_ecc == NULL) 211 + if (part->master->writev_ecc == NULL) 212 212 return part->master->writev (part->master, vecs, count, 213 213 to + part->offset, retlen); 214 214 else ··· 221 221 unsigned long count, loff_t from, size_t *retlen) 222 222 { 223 223 struct mtd_part *part = PART(mtd); 224 - if (part->master->readv_ecc == NULL) 224 + if (part->master->readv_ecc == NULL) 225 225 return part->master->readv (part->master, vecs, count, 226 226 from + part->offset, retlen); 227 227 else 228 228 return part->master->readv_ecc (part->master, vecs, count, 229 - from + part->offset, retlen, 229 + from + part->offset, retlen, 230 230 NULL, &mtd->oobinfo); 231 231 } 232 232 ··· 252 252 if (oobsel == NULL) 253 253 oobsel = &mtd->oobinfo; 254 254 return part->master->readv_ecc (part->master, vecs, count, 255 - from + part->offset, retlen, 255 + from + part->offset, retlen, 256 256 eccbuf, oobsel); 257 257 } 258 258 ··· 286 286 static int part_lock (struct mtd_info *mtd, loff_t ofs, size_t len) 287 287 { 288 288 struct mtd_part *part = PART(mtd); 289 - if ((len + ofs) > mtd->size) 289 + if ((len + ofs) > mtd->size) 290 290 return -EINVAL; 291 291 return part->master->lock(part->master, ofs + part->offset, len); 292 292 } ··· 294 294 static int part_unlock (struct mtd_info *mtd, loff_t ofs, size_t len) 295 295 { 296 296 struct mtd_part *part = PART(mtd); 297 - if ((len + ofs) > mtd->size) 297 + if ((len + ofs) > mtd->size) 298 298 return -EINVAL; 299 299 return part->master->unlock(part->master, ofs + part->offset, len); 300 300 } ··· 337 337 return part->master->block_markbad(part->master, ofs); 338 338 } 339 339 340 - /* 341 - * This function unregisters and destroy all slave MTD objects which are 340 + /* 341 + * This function unregisters and destroy all slave MTD objects which 
are 342 342 * attached to the given master MTD object. 343 343 */ 344 344 ··· 371 371 * (Q: should we register the master MTD object as well?) 372 372 */ 373 373 374 - int add_mtd_partitions(struct mtd_info *master, 374 + int add_mtd_partitions(struct mtd_info *master, 375 375 const struct mtd_partition *parts, 376 376 int nbparts) 377 377 { ··· 414 414 slave->mtd.point = part_point; 415 415 slave->mtd.unpoint = part_unpoint; 416 416 } 417 - 417 + 418 418 if (master->read_ecc) 419 419 slave->mtd.read_ecc = part_read_ecc; 420 420 if (master->write_ecc) ··· 477 477 if (slave->mtd.size == MTDPART_SIZ_FULL) 478 478 slave->mtd.size = master->size - slave->offset; 479 479 cur_offset = slave->offset + slave->mtd.size; 480 - 481 - printk (KERN_NOTICE "0x%08x-0x%08x : \"%s\"\n", slave->offset, 480 + 481 + printk (KERN_NOTICE "0x%08x-0x%08x : \"%s\"\n", slave->offset, 482 482 slave->offset + slave->mtd.size, slave->mtd.name); 483 483 484 484 /* let's do some sanity checks */ ··· 498 498 /* Deal with variable erase size stuff */ 499 499 int i; 500 500 struct mtd_erase_region_info *regions = master->eraseregions; 501 - 501 + 502 502 /* Find the first erase regions which is part of this partition. 
*/ 503 503 for (i=0; i < master->numeraseregions && slave->offset >= regions[i].offset; i++) 504 504 ; ··· 513 513 slave->mtd.erasesize = master->erasesize; 514 514 } 515 515 516 - if ((slave->mtd.flags & MTD_WRITEABLE) && 516 + if ((slave->mtd.flags & MTD_WRITEABLE) && 517 517 (slave->offset % slave->mtd.erasesize)) { 518 518 /* Doesn't start on a boundary of major erase size */ 519 519 /* FIXME: Let it be writable if it is on a boundary of _minor_ erase size though */ ··· 521 521 printk ("mtd: partition \"%s\" doesn't start on an erase block boundary -- force read-only\n", 522 522 parts[i].name); 523 523 } 524 - if ((slave->mtd.flags & MTD_WRITEABLE) && 524 + if ((slave->mtd.flags & MTD_WRITEABLE) && 525 525 (slave->mtd.size % slave->mtd.erasesize)) { 526 526 slave->mtd.flags &= ~MTD_WRITEABLE; 527 527 printk ("mtd: partition \"%s\" doesn't end on an erase block -- force read-only\n", 528 528 parts[i].name); 529 529 } 530 530 531 - /* copy oobinfo from master */ 531 + /* copy oobinfo from master */ 532 532 memcpy(&slave->mtd.oobinfo, &master->oobinfo, sizeof(slave->mtd.oobinfo)); 533 533 534 534 if(parts[i].mtdp) ··· 589 589 return 0; 590 590 } 591 591 592 - int parse_mtd_partitions(struct mtd_info *master, const char **types, 592 + int parse_mtd_partitions(struct mtd_info *master, const char **types, 593 593 struct mtd_partition **pparts, unsigned long origin) 594 594 { 595 595 struct mtd_part_parser *parser; 596 596 int ret = 0; 597 - 597 + 598 598 for ( ; ret <= 0 && *types; types++) { 599 599 parser = get_partition_parser(*types); 600 600 #ifdef CONFIG_KMOD ··· 608 608 } 609 609 ret = (*parser->parse_fn)(master, pparts, origin); 610 610 if (ret > 0) { 611 - printk(KERN_NOTICE "%d %s partitions found on MTD device %s\n", 611 + printk(KERN_NOTICE "%d %s partitions found on MTD device %s\n", 612 612 ret, parser->name, master->name); 613 613 } 614 614 put_partition_parser(parser);
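Every `part_*` method in mtdpart.c follows the same pattern visible in the hunks above: clamp the request to the partition's extent, then add the partition offset before calling into the master device. That clamp-and-translate step as a standalone sketch (the helper name is invented for illustration):

```c
/* Clamp (from, *len) to a partition of part_size bytes, as part_read()
 * does, and return the master-relative offset. */
unsigned long part_translate(unsigned long from, unsigned long *len,
			     unsigned long part_size,
			     unsigned long part_offset)
{
	if (from >= part_size)
		*len = 0;			/* wholly past the end */
	else if (from + *len > part_size)
		*len = part_size - from;	/* trim the overhang */
	return from + part_offset;
}
```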
+40 -40
drivers/mtd/nftlcore.c
··· 1 1 /* Linux driver for NAND Flash Translation Layer */ 2 2 /* (c) 1999 Machine Vision Holdings, Inc. */ 3 3 /* Author: David Woodhouse <dwmw2@infradead.org> */ 4 - /* $Id: nftlcore.c,v 1.97 2004/11/16 18:28:59 dwmw2 Exp $ */ 4 + /* $Id: nftlcore.c,v 1.98 2005/11/07 11:14:21 gleixner Exp $ */ 5 5 6 6 /* 7 7 The contents of this file are distributed under the GNU General ··· 101 101 102 102 if (nftl->mbd.size != nftl->heads * nftl->cylinders * nftl->sectors) { 103 103 /* 104 - Oh no we don't have 104 + Oh no we don't have 105 105 mbd.size == heads * cylinders * sectors 106 106 */ 107 107 printk(KERN_WARNING "NFTL: cannot calculate a geometry to " 108 108 "match size of 0x%lx.\n", nftl->mbd.size); 109 109 printk(KERN_WARNING "NFTL: using C:%d H:%d S:%d " 110 110 "(== 0x%lx sects)\n", 111 - nftl->cylinders, nftl->heads , nftl->sectors, 111 + nftl->cylinders, nftl->heads , nftl->sectors, 112 112 (long)nftl->cylinders * (long)nftl->heads * 113 113 (long)nftl->sectors ); 114 114 } ··· 178 178 179 179 if (!silly--) { 180 180 printk("Argh! No free blocks found! LastFreeEUN = %d, " 181 - "FirstEUN = %d\n", nftl->LastFreeEUN, 181 + "FirstEUN = %d\n", nftl->LastFreeEUN, 182 182 le16_to_cpu(nftl->MediaHdr.FirstPhysicalEUN)); 183 183 return 0xffff; 184 184 } ··· 210 210 "Virtual Unit Chain %d!\n", thisVUC); 211 211 return BLOCK_NIL; 212 212 } 213 - 213 + 214 214 /* Scan to find the Erase Unit which holds the actual data for each 215 215 512-byte block within the Chain. 
216 216 */ ··· 227 227 if (block == 2) { 228 228 foldmark = oob.u.c.FoldMark | oob.u.c.FoldMark1; 229 229 if (foldmark == FOLD_MARK_IN_PROGRESS) { 230 - DEBUG(MTD_DEBUG_LEVEL1, 230 + DEBUG(MTD_DEBUG_LEVEL1, 231 231 "Write Inhibited on EUN %d\n", thisEUN); 232 232 inplace = 0; 233 233 } else { ··· 249 249 if (!BlockFreeFound[block]) 250 250 BlockMap[block] = thisEUN; 251 251 else 252 - printk(KERN_WARNING 252 + printk(KERN_WARNING 253 253 "SECTOR_USED found after SECTOR_FREE " 254 254 "in Virtual Unit Chain %d for block %d\n", 255 255 thisVUC, block); ··· 258 258 if (!BlockFreeFound[block]) 259 259 BlockMap[block] = BLOCK_NIL; 260 260 else 261 - printk(KERN_WARNING 261 + printk(KERN_WARNING 262 262 "SECTOR_DELETED found after SECTOR_FREE " 263 263 "in Virtual Unit Chain %d for block %d\n", 264 264 thisVUC, block); ··· 277 277 thisVUC); 278 278 return BLOCK_NIL; 279 279 } 280 - 280 + 281 281 thisEUN = nftl->ReplUnitTable[thisEUN]; 282 282 } 283 283 284 284 if (inplace) { 285 285 /* We're being asked to be a fold-in-place. Check 286 286 that all blocks which actually have data associated 287 - with them (i.e. BlockMap[block] != BLOCK_NIL) are 287 + with them (i.e. BlockMap[block] != BLOCK_NIL) are 288 288 either already present or SECTOR_FREE in the target 289 289 block. If not, we're going to have to fold out-of-place 290 290 anyway. ··· 297 297 "block %d was %x lastEUN, " 298 298 "and is in EUN %d (%s) %d\n", 299 299 thisVUC, block, BlockLastState[block], 300 - BlockMap[block], 300 + BlockMap[block], 301 301 BlockMap[block]== targetEUN ? "==" : "!=", 302 302 targetEUN); 303 303 inplace = 0; ··· 314 314 inplace = 0; 315 315 } 316 316 } 317 - 317 + 318 318 if (!inplace) { 319 319 DEBUG(MTD_DEBUG_LEVEL1, "Cannot fold Virtual Unit Chain %d in place. " 320 320 "Trying out-of-place\n", thisVUC); 321 321 /* We need to find a targetEUN to fold into. */ 322 322 targetEUN = NFTL_findfreeblock(nftl, 1); 323 323 if (targetEUN == BLOCK_NIL) { 324 - /* Ouch. Now we're screwed. 
We need to do a 324 + /* Ouch. Now we're screwed. We need to do a 325 325 fold-in-place of another chain to make room 326 326 for this one. We need a better way of selecting 327 - which chain to fold, because makefreeblock will 327 + which chain to fold, because makefreeblock will 328 328 only ask us to fold the same one again. 329 329 */ 330 330 printk(KERN_WARNING ··· 338 338 chain by selecting the longer one */ 339 339 oob.u.c.FoldMark = oob.u.c.FoldMark1 = cpu_to_le16(FOLD_MARK_IN_PROGRESS); 340 340 oob.u.c.unused = 0xffffffff; 341 - MTD_WRITEOOB(nftl->mbd.mtd, (nftl->EraseSize * targetEUN) + 2 * 512 + 8, 341 + MTD_WRITEOOB(nftl->mbd.mtd, (nftl->EraseSize * targetEUN) + 2 * 512 + 8, 342 342 8, &retlen, (char *)&oob.u); 343 343 } 344 344 ··· 361 361 happen in case of media errors or deleted blocks) */ 362 362 if (BlockMap[block] == BLOCK_NIL) 363 363 continue; 364 - 364 + 365 365 ret = MTD_READ(nftl->mbd.mtd, (nftl->EraseSize * BlockMap[block]) + (block * 512), 366 - 512, &retlen, movebuf); 366 + 512, &retlen, movebuf); 367 367 if (ret < 0) { 368 368 ret = MTD_READ(nftl->mbd.mtd, (nftl->EraseSize * BlockMap[block]) 369 369 + (block * 512), 512, &retlen, 370 - movebuf); 371 - if (ret != -EIO) 370 + movebuf); 371 + if (ret != -EIO) 372 372 printk("Error went away on retry.\n"); 373 373 } 374 374 memset(&oob, 0xff, sizeof(struct nftl_oob)); ··· 376 376 MTD_WRITEECC(nftl->mbd.mtd, (nftl->EraseSize * targetEUN) + (block * 512), 377 377 512, &retlen, movebuf, (char *)&oob, &nftl->oobinfo); 378 378 } 379 - 379 + 380 380 /* add the header so that it is now a valid chain */ 381 381 oob.u.a.VirtUnitNum = oob.u.a.SpareVirtUnitNum 382 382 = cpu_to_le16(thisVUC); 383 383 oob.u.a.ReplUnitNum = oob.u.a.SpareReplUnitNum = 0xffff; 384 - 385 - MTD_WRITEOOB(nftl->mbd.mtd, (nftl->EraseSize * targetEUN) + 8, 384 + 385 + MTD_WRITEOOB(nftl->mbd.mtd, (nftl->EraseSize * targetEUN) + 8, 386 386 8, &retlen, (char *)&oob.u); 387 387 388 388 /* OK. 
We've moved the whole lot into the new block. Now we have to free the original blocks. */ 389 389 390 - /* At this point, we have two different chains for this Virtual Unit, and no way to tell 390 + /* At this point, we have two different chains for this Virtual Unit, and no way to tell 391 391 them apart. If we crash now, we get confused. However, both contain the same data, so we 392 392 shouldn't actually lose data in this case. It's just that when we load up on a medium which 393 393 has duplicate chains, we need to free one of the chains because it's not necessary any more. ··· 395 395 thisEUN = nftl->EUNtable[thisVUC]; 396 396 DEBUG(MTD_DEBUG_LEVEL1,"Want to erase\n"); 397 397 398 - /* For each block in the old chain (except the targetEUN of course), 398 + /* For each block in the old chain (except the targetEUN of course), 399 399 free it and make it available for future use */ 400 400 while (thisEUN <= nftl->lastEUN && thisEUN != targetEUN) { 401 401 unsigned int EUNtmp; ··· 413 413 } 414 414 thisEUN = EUNtmp; 415 415 } 416 - 416 + 417 417 /* Make this the new start of chain for thisVUC */ 418 418 nftl->ReplUnitTable[targetEUN] = BLOCK_NIL; 419 419 nftl->EUNtable[thisVUC] = targetEUN; ··· 423 423 424 424 static u16 NFTL_makefreeblock( struct NFTLrecord *nftl , unsigned pendingblock) 425 425 { 426 - /* This is the part that needs some cleverness applied. 426 + /* This is the part that needs some cleverness applied. 427 427 For now, I'm doing the minimum applicable to actually 428 428 get the thing to work. 429 429 Wear-levelling and other clever stuff needs to be implemented ··· 470 470 return NFTL_foldchain (nftl, LongestChain, pendingblock); 471 471 } 472 472 473 - /* NFTL_findwriteunit: Return the unit number into which we can write 473 + /* NFTL_findwriteunit: Return the unit number into which we can write 474 474 for this block. 
Make it available if it isn't already 475 475 */ 476 476 static inline u16 NFTL_findwriteunit(struct NFTLrecord *nftl, unsigned block) ··· 488 488 a free space for the block in question. 489 489 */ 490 490 491 - /* This condition catches the 0x[7f]fff cases, as well as 491 + /* This condition catches the 0x[7f]fff cases, as well as 492 492 being a sanity check for past-end-of-media access 493 493 */ 494 494 lastEUN = BLOCK_NIL; ··· 503 503 504 504 MTD_READOOB(nftl->mbd.mtd, (writeEUN * nftl->EraseSize) + blockofs, 505 505 8, &retlen, (char *)&bci); 506 - 506 + 507 507 DEBUG(MTD_DEBUG_LEVEL2, "Status of block %d in EUN %d is %x\n", 508 508 block , writeEUN, le16_to_cpu(bci.Status)); 509 509 ··· 518 518 break; 519 519 default: 520 520 // Invalid block. Don't use it any more. Must implement. 521 - break; 521 + break; 522 522 } 523 - 524 - if (!silly--) { 523 + 524 + if (!silly--) { 525 525 printk(KERN_WARNING 526 526 "Infinite loop in Virtual Unit Chain 0x%x\n", 527 527 thisVUC); ··· 532 532 writeEUN = nftl->ReplUnitTable[writeEUN]; 533 533 } 534 534 535 - /* OK. We didn't find one in the existing chain, or there 535 + /* OK. We didn't find one in the existing chain, or there 536 536 is no existing chain. */ 537 537 538 538 /* Try to find an already-free block */ ··· 546 546 547 547 /* First remember the start of this chain */ 548 548 //u16 startEUN = nftl->EUNtable[thisVUC]; 549 - 549 + 550 550 //printk("Write to VirtualUnitChain %d, calling makefreeblock()\n", thisVUC); 551 551 writeEUN = NFTL_makefreeblock(nftl, 0xffff); 552 552 553 553 if (writeEUN == BLOCK_NIL) { 554 - /* OK, we accept that the above comment is 554 + /* OK, we accept that the above comment is 555 555 lying - there may have been free blocks 556 556 last time we called NFTL_findfreeblock(), 557 557 but they are reserved for when we're ··· 562 562 } 563 563 if (writeEUN == BLOCK_NIL) { 564 564 /* Ouch. This should never happen - we should 565 - always be able to make some room somehow. 
566 - If we get here, we've allocated more storage 565 + always be able to make some room somehow. 566 + If we get here, we've allocated more storage 567 567 space than actual media, or our makefreeblock 568 568 routine is missing something. 569 569 */ 570 570 printk(KERN_WARNING "Cannot make free space.\n"); 571 571 return BLOCK_NIL; 572 - } 572 + } 573 573 //printk("Restarting scan\n"); 574 574 lastEUN = BLOCK_NIL; 575 575 continue; 576 576 } 577 577 578 578 /* We've found a free block. Insert it into the chain. */ 579 - 579 + 580 580 if (lastEUN != BLOCK_NIL) { 581 581 thisVUC |= 0x8000; /* It's a replacement block */ 582 582 } else { ··· 749 749 750 750 static int __init init_nftl(void) 751 751 { 752 - printk(KERN_INFO "NFTL driver: nftlcore.c $Revision: 1.97 $, nftlmount.c %s\n", nftlmountrev); 752 + printk(KERN_INFO "NFTL driver: nftlcore.c $Revision: 1.98 $, nftlmount.c %s\n", nftlmountrev); 753 753 754 754 return register_mtd_blktrans(&nftl_tr); 755 755 }
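The fold path above ends in NFTL_makefreeblock(), whose comment admits the selection is the bare minimum: scan every Virtual Unit Chain and fold the longest one. A standalone sketch of that scan follows; the table layout and names are simplified assumptions modeled loosely on the driver's EUNtable/ReplUnitTable, not the actual structures.

```c
#include <assert.h>
#include <stdint.h>

#define BLOCK_NIL 0xffff

/* Walk each Virtual Unit Chain: euntable[vuc] gives the first Erase
 * Unit Number, repltable[eun] links to its replacement unit. The
 * longest chain is the fold candidate, since folding it frees the
 * most erase units. Illustrative sketch only, not the driver code. */
static uint16_t longest_chain_start(const uint16_t *euntable,
                                    const uint16_t *repltable,
                                    unsigned numvunits)
{
    uint16_t best_start = BLOCK_NIL;
    unsigned best_len = 0;

    for (unsigned vuc = 0; vuc < numvunits; vuc++) {
        uint16_t eun = euntable[vuc];
        unsigned len = 0;

        while (eun != BLOCK_NIL) {   /* follow replacement links */
            len++;
            eun = repltable[eun];
        }
        if (len > best_len) {
            best_len = len;
            best_start = euntable[vuc];
        }
    }
    return best_start;
}
```

The real routine additionally tracks EraseCount for wear-levelling hints and can give up with BLOCK_NIL when folding is already pending; the sketch keeps only the length comparison.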
+28 -28
drivers/mtd/nftlmount.c
··· 1 - /* 1 + /* 2 2 * NFTL mount code with extensive checks 3 3 * 4 - * Author: Fabrice Bellard (fabrice.bellard@netgem.com) 4 + * Author: Fabrice Bellard (fabrice.bellard@netgem.com) 5 5 * Copyright (C) 2000 Netgem S.A. 6 6 * 7 - * $Id: nftlmount.c,v 1.40 2004/11/22 14:38:29 kalev Exp $ 7 + * $Id: nftlmount.c,v 1.41 2005/11/07 11:14:21 gleixner Exp $ 8 8 * 9 9 * This program is free software; you can redistribute it and/or modify 10 10 * it under the terms of the GNU General Public License as published by ··· 31 31 32 32 #define SECTORSIZE 512 33 33 34 - char nftlmountrev[]="$Revision: 1.40 $"; 34 + char nftlmountrev[]="$Revision: 1.41 $"; 35 35 36 36 /* find_boot_record: Find the NFTL Media Header and its Spare copy which contains the 37 37 * various device information of the NFTL partition and Bad Unit Table. Update ··· 47 47 struct NFTLMediaHeader *mh = &nftl->MediaHdr; 48 48 unsigned int i; 49 49 50 - /* Assume logical EraseSize == physical erasesize for starting the scan. 50 + /* Assume logical EraseSize == physical erasesize for starting the scan. 51 51 We'll sort it out later if we find a MediaHeader which says otherwise */ 52 52 /* Actually, we won't. The new DiskOnChip driver has already scanned 53 53 the MediaHeader and adjusted the virtual erasesize it presents in ··· 83 83 if (retlen < 6 || memcmp(buf, "ANAND", 6)) { 84 84 /* ANAND\0 not found. 
Continue */ 85 85 #if 0 86 - printk(KERN_DEBUG "ANAND header not found at 0x%x in mtd%d\n", 86 + printk(KERN_DEBUG "ANAND header not found at 0x%x in mtd%d\n", 87 87 block * nftl->EraseSize, nftl->mbd.mtd->index); 88 - #endif 88 + #endif 89 89 continue; 90 90 } 91 91 ··· 103 103 */ 104 104 if (le16_to_cpu(h1.EraseMark | h1.EraseMark1) != ERASE_MARK) { 105 105 printk(KERN_NOTICE "ANAND header found at 0x%x in mtd%d, but erase mark not present (0x%04x,0x%04x instead)\n", 106 - block * nftl->EraseSize, nftl->mbd.mtd->index, 106 + block * nftl->EraseSize, nftl->mbd.mtd->index, 107 107 le16_to_cpu(h1.EraseMark), le16_to_cpu(h1.EraseMark1)); 108 108 continue; 109 109 } ··· 175 175 nftl->nb_boot_blocks = le16_to_cpu(mh->FirstPhysicalEUN); 176 176 if ((nftl->nb_boot_blocks + 2) >= nftl->nb_blocks) { 177 177 printk(KERN_NOTICE "NFTL Media Header sanity check failed:\n"); 178 - printk(KERN_NOTICE "nb_boot_blocks (%d) + 2 > nb_blocks (%d)\n", 178 + printk(KERN_NOTICE "nb_boot_blocks (%d) + 2 > nb_blocks (%d)\n", 179 179 nftl->nb_boot_blocks, nftl->nb_blocks); 180 180 return -1; 181 181 } ··· 187 187 nftl->numvunits, nftl->nb_blocks, nftl->nb_boot_blocks); 188 188 return -1; 189 189 } 190 - 190 + 191 191 nftl->mbd.size = nftl->numvunits * (nftl->EraseSize / SECTORSIZE); 192 192 193 193 /* If we're not using the last sectors in the device for some reason, ··· 210 210 printk(KERN_NOTICE "NFTL: allocation of ReplUnitTable failed\n"); 211 211 return -ENOMEM; 212 212 } 213 - 213 + 214 214 /* mark the bios blocks (blocks before NFTL MediaHeader) as reserved */ 215 215 for (i = 0; i < nftl->nb_boot_blocks; i++) 216 216 nftl->ReplUnitTable[i] = BLOCK_RESERVED; 217 217 /* mark all remaining blocks as potentially containing data */ 218 - for (; i < nftl->nb_blocks; i++) { 218 + for (; i < nftl->nb_blocks; i++) { 219 219 nftl->ReplUnitTable[i] = BLOCK_NOTEXPLORED; 220 220 } 221 221 ··· 245 245 if (nftl->mbd.mtd->block_isbad(nftl->mbd.mtd, i * nftl->EraseSize)) 246 246 
nftl->ReplUnitTable[i] = BLOCK_RESERVED; 247 247 } 248 - 248 + 249 249 nftl->MediaUnit = block; 250 250 boot_record_count++; 251 - 251 + 252 252 } /* foreach (block) */ 253 - 253 + 254 254 return boot_record_count?0:-1; 255 255 } 256 256 ··· 265 265 } 266 266 267 267 /* check_free_sector: check if a free sector is actually FREE, i.e. All 0xff in data and oob area */ 268 - static int check_free_sectors(struct NFTLrecord *nftl, unsigned int address, int len, 268 + static int check_free_sectors(struct NFTLrecord *nftl, unsigned int address, int len, 269 269 int check_oob) 270 270 { 271 271 int i; ··· 293 293 * 294 294 * Return: 0 when succeed, -1 on error. 295 295 * 296 - * ToDo: 1. Is it neceressary to check_free_sector after erasing ?? 296 + * ToDo: 1. Is it neceressary to check_free_sector after erasing ?? 297 297 */ 298 298 int NFTL_formatblock(struct NFTLrecord *nftl, int block) 299 299 { ··· 385 385 /* verify that the sector is really free. If not, mark 386 386 as ignore */ 387 387 if (memcmpb(&bci, 0xff, 8) != 0 || 388 - check_free_sectors(nftl, block * nftl->EraseSize + i * SECTORSIZE, 388 + check_free_sectors(nftl, block * nftl->EraseSize + i * SECTORSIZE, 389 389 SECTORSIZE, 0) != 0) { 390 390 printk("Incorrect free sector %d in block %d: " 391 391 "marking it as ignored\n", ··· 486 486 size_t retlen; 487 487 488 488 /* check erase mark. 
*/ 489 - if (MTD_READOOB(nftl->mbd.mtd, block * nftl->EraseSize + SECTORSIZE + 8, 8, 489 + if (MTD_READOOB(nftl->mbd.mtd, block * nftl->EraseSize + SECTORSIZE + 8, 8, 490 490 &retlen, (char *)&h1) < 0) 491 491 return -1; 492 492 ··· 501 501 h1.EraseMark = cpu_to_le16(ERASE_MARK); 502 502 h1.EraseMark1 = cpu_to_le16(ERASE_MARK); 503 503 h1.WearInfo = cpu_to_le32(0); 504 - if (MTD_WRITEOOB(nftl->mbd.mtd, block * nftl->EraseSize + SECTORSIZE + 8, 8, 504 + if (MTD_WRITEOOB(nftl->mbd.mtd, block * nftl->EraseSize + SECTORSIZE + 8, 8, 505 505 &retlen, (char *)&h1) < 0) 506 506 return -1; 507 507 } else { ··· 582 582 583 583 for (;;) { 584 584 /* read the block header. If error, we format the chain */ 585 - if (MTD_READOOB(s->mbd.mtd, block * s->EraseSize + 8, 8, 585 + if (MTD_READOOB(s->mbd.mtd, block * s->EraseSize + 8, 8, 586 586 &retlen, (char *)&h0) < 0 || 587 - MTD_READOOB(s->mbd.mtd, block * s->EraseSize + SECTORSIZE + 8, 8, 587 + MTD_READOOB(s->mbd.mtd, block * s->EraseSize + SECTORSIZE + 8, 8, 588 588 &retlen, (char *)&h1) < 0) { 589 589 s->ReplUnitTable[block] = BLOCK_NIL; 590 590 do_format_chain = 1; ··· 639 639 first_logical_block = logical_block; 640 640 } else { 641 641 if (logical_block != first_logical_block) { 642 - printk("Block %d: incorrect logical block: %d expected: %d\n", 642 + printk("Block %d: incorrect logical block: %d expected: %d\n", 643 643 block, logical_block, first_logical_block); 644 644 /* the chain is incorrect : we must format it, 645 645 but we need to read it completly */ ··· 668 668 s->ReplUnitTable[block] = BLOCK_NIL; 669 669 break; 670 670 } else if (rep_block >= s->nb_blocks) { 671 - printk("Block %d: referencing invalid block %d\n", 671 + printk("Block %d: referencing invalid block %d\n", 672 672 block, rep_block); 673 673 do_format_chain = 1; 674 674 s->ReplUnitTable[block] = BLOCK_NIL; ··· 688 688 s->ReplUnitTable[block] = rep_block; 689 689 s->EUNtable[first_logical_block] = BLOCK_NIL; 690 690 } else { 691 - printk("Block %d: 
referencing block %d already in another chain\n", 691 + printk("Block %d: referencing block %d already in another chain\n", 692 692 block, rep_block); 693 693 /* XXX: should handle correctly fold in progress chains */ 694 694 do_format_chain = 1; ··· 710 710 } else { 711 711 unsigned int first_block1, chain_to_format, chain_length1; 712 712 int fold_mark; 713 - 713 + 714 714 /* valid chain : get foldmark */ 715 715 fold_mark = get_fold_mark(s, first_block); 716 716 if (fold_mark == 0) { ··· 729 729 if (first_block1 != BLOCK_NIL) { 730 730 /* XXX: what to do if same length ? */ 731 731 chain_length1 = calc_chain_length(s, first_block1); 732 - printk("Two chains at blocks %d (len=%d) and %d (len=%d)\n", 732 + printk("Two chains at blocks %d (len=%d) and %d (len=%d)\n", 733 733 first_block1, chain_length1, first_block, chain_length); 734 - 734 + 735 735 if (chain_length >= chain_length1) { 736 736 chain_to_format = first_block1; 737 737 s->EUNtable[first_logical_block] = first_block;
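The mount code above repeatedly walks ReplUnitTable chains (calc_chain_length, check_sectors_in_chain), and the core driver guards the same walk with its "silly" countdown so a corrupt table linked into a cycle cannot hang the kernel. A minimal sketch of such a bounded walk; the function name, the max_links parameter, and the -1 error convention are assumptions for illustration.

```c
#include <assert.h>
#include <stdint.h>

#define BLOCK_NIL 0xffff

/* Follow replacement links from 'start', refusing to follow more
 * than max_links of them: a corrupt ReplUnitTable can form a cycle,
 * so an unbounded walk would loop forever. Returns the chain length
 * or -1 if the bound is exceeded. */
static int chain_length_bounded(const uint16_t *repl, uint16_t start,
                                unsigned max_links)
{
    unsigned len = 0;
    uint16_t eun = start;

    while (eun != BLOCK_NIL) {
        if (++len > max_links)   /* cycle or over-long chain: give up */
            return -1;
        eun = repl[eun];
    }
    return (int)len;
}
```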
+2 -2
drivers/mtd/redboot.c
··· 1 1 /* 2 - * $Id: redboot.c,v 1.17 2004/11/22 11:33:56 ijc Exp $ 2 + * $Id: redboot.c,v 1.18 2005/11/07 11:14:21 gleixner Exp $ 3 3 * 4 4 * Parse RedBoot-style Flash Image System (FIS) tables and 5 5 * produce a Linux partition array to match. ··· 39 39 return 1; 40 40 } 41 41 42 - static int parse_redboot_partitions(struct mtd_info *master, 42 + static int parse_redboot_partitions(struct mtd_info *master, 43 43 struct mtd_partition **pparts, 44 44 unsigned long fis_origin) 45 45 {
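redboot.c scans the FIS directory, a table of fixed-size image descriptors, and turns each used slot into an MTD partition. The sketch below shows that scan in miniature; the struct is a deliberately simplified stand-in for the driver's fis_image_desc (the real entry is padded to a fixed size and carries checksums), and the 0xff-name termination rule is an assumption drawn from how erased flash reads back.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Simplified FIS directory entry; illustrative only. Erased flash
 * reads as 0xff, so an unused slot's name begins with 0xff. */
struct fis_entry {
    char name[16];
    uint32_t flash_base;
    uint32_t size;
};

/* Count used entries in an in-memory directory, stopping at the
 * first slot whose name byte reads as erased flash. */
static int count_fis_entries(const struct fis_entry *dir, int max_entries)
{
    int n = 0;

    for (int i = 0; i < max_entries; i++) {
        if ((unsigned char)dir[i].name[0] == 0xff)
            break;
        n++;
    }
    return n;
}
```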
+57 -57
drivers/mtd/rfd_ftl.c
··· 3 3 * 4 4 * Copyright (C) 2005 Sean Young <sean@mess.org> 5 5 * 6 - * $Id: rfd_ftl.c,v 1.4 2005/07/31 22:49:14 sean Exp $ 6 + * $Id: rfd_ftl.c,v 1.5 2005/11/07 11:14:21 gleixner Exp $ 7 7 * 8 8 * This type of flash translation layer (FTL) is used by the Embedded BIOS 9 9 * by General Software. It is known as the Resident Flash Disk (RFD), see: ··· 95 95 { 96 96 struct block *block = &part->blocks[block_no]; 97 97 int i; 98 - 98 + 99 99 block->offset = part->block_size * block_no; 100 100 101 101 if (le16_to_cpu(part->header_cache[0]) != RFD_MAGIC) { ··· 109 109 110 110 for (i=0; i<part->data_sectors_per_block; i++) { 111 111 u16 entry; 112 - 112 + 113 113 entry = le16_to_cpu(part->header_cache[HEADER_MAP_OFFSET + i]); 114 114 115 115 if (entry == SECTOR_DELETED) 116 116 continue; 117 - 117 + 118 118 if (entry == SECTOR_FREE) { 119 119 block->free_sectors++; 120 120 continue; ··· 122 122 123 123 if (entry == SECTOR_ZERO) 124 124 entry = 0; 125 - 125 + 126 126 if (entry >= part->sector_count) { 127 - printk(KERN_NOTICE PREFIX 127 + printk(KERN_NOTICE PREFIX 128 128 "'%s': unit #%d: entry %d corrupt, " 129 129 "sector %d out of range\n", 130 130 part->mbd.mtd->name, block_no, i, entry); ··· 132 132 } 133 133 134 134 if (part->sector_map[entry] != -1) { 135 - printk(KERN_NOTICE PREFIX 135 + printk(KERN_NOTICE PREFIX 136 136 "'%s': more than one entry for sector %d\n", 137 137 part->mbd.mtd->name, entry); 138 138 part->errors = 1; 139 139 continue; 140 140 } 141 141 142 - part->sector_map[entry] = block->offset + 142 + part->sector_map[entry] = block->offset + 143 143 (i + part->header_sectors_per_block) * SECTOR_SIZE; 144 144 145 145 block->used_sectors++; ··· 165 165 return -ENOENT; 166 166 167 167 /* each erase block has three bytes header, followed by the map */ 168 - part->header_sectors_per_block = 169 - ((HEADER_MAP_OFFSET + sectors_per_block) * 168 + part->header_sectors_per_block = 169 + ((HEADER_MAP_OFFSET + sectors_per_block) * 170 170 sizeof(u16) + 
SECTOR_SIZE - 1) / SECTOR_SIZE; 171 171 172 - part->data_sectors_per_block = sectors_per_block - 172 + part->data_sectors_per_block = sectors_per_block - 173 173 part->header_sectors_per_block; 174 174 175 - part->header_size = (HEADER_MAP_OFFSET + 175 + part->header_size = (HEADER_MAP_OFFSET + 176 176 part->data_sectors_per_block) * sizeof(u16); 177 177 178 178 part->cylinders = (part->data_sectors_per_block * ··· 188 188 if (!part->header_cache) 189 189 goto err; 190 190 191 - part->blocks = kcalloc(part->total_blocks, sizeof(struct block), 191 + part->blocks = kcalloc(part->total_blocks, sizeof(struct block), 192 192 GFP_KERNEL); 193 193 if (!part->blocks) 194 194 goto err; ··· 200 200 goto err; 201 201 } 202 202 203 - for (i=0; i<part->sector_count; i++) 203 + for (i=0; i<part->sector_count; i++) 204 204 part->sector_map[i] = -1; 205 205 206 206 for (i=0, blocks_found=0; i<part->total_blocks; i++) { 207 - rc = part->mbd.mtd->read(part->mbd.mtd, 207 + rc = part->mbd.mtd->read(part->mbd.mtd, 208 208 i * part->block_size, part->header_size, 209 209 &retlen, (u_char*)part->header_cache); 210 210 211 211 if (!rc && retlen != part->header_size) 212 212 rc = -EIO; 213 213 214 - if (rc) 214 + if (rc) 215 215 goto err; 216 216 217 217 if (!build_block_map(part, i)) ··· 226 226 } 227 227 228 228 if (part->reserved_block == -1) { 229 - printk(KERN_NOTICE PREFIX "'%s': no empty erase unit found\n", 229 + printk(KERN_NOTICE PREFIX "'%s': no empty erase unit found\n", 230 230 part->mbd.mtd->name); 231 231 232 232 part->errors = 1; ··· 248 248 u_long addr; 249 249 size_t retlen; 250 250 int rc; 251 - 251 + 252 252 if (sector >= part->sector_count) 253 253 return -EIO; 254 254 ··· 266 266 } 267 267 } else 268 268 memset(buf, 0, SECTOR_SIZE); 269 - 269 + 270 270 return 0; 271 - } 271 + } 272 272 273 273 static void erase_callback(struct erase_info *erase) 274 274 { ··· 288 288 289 289 if (erase->state != MTD_ERASE_DONE) { 290 290 printk(KERN_WARNING PREFIX "erase failed at 0x%x 
on '%s', " 291 - "state %d\n", erase->addr, 291 + "state %d\n", erase->addr, 292 292 part->mbd.mtd->name, erase->state); 293 293 294 294 part->blocks[i].state = BLOCK_FAILED; ··· 307 307 part->blocks[i].used_sectors = 0; 308 308 part->blocks[i].erases++; 309 309 310 - rc = part->mbd.mtd->write(part->mbd.mtd, 311 - part->blocks[i].offset, sizeof(magic), &retlen, 310 + rc = part->mbd.mtd->write(part->mbd.mtd, 311 + part->blocks[i].offset, sizeof(magic), &retlen, 312 312 (u_char*)&magic); 313 - 313 + 314 314 if (!rc && retlen != sizeof(magic)) 315 315 rc = -EIO; 316 316 317 317 if (rc) { 318 318 printk(KERN_NOTICE PREFIX "'%s': unable to write RFD " 319 319 "header at 0x%lx\n", 320 - part->mbd.mtd->name, 320 + part->mbd.mtd->name, 321 321 part->blocks[i].offset); 322 322 part->blocks[i].state = BLOCK_FAILED; 323 323 } ··· 374 374 map = kmalloc(part->header_size, GFP_KERNEL); 375 375 if (!map) 376 376 goto err2; 377 - 378 - rc = part->mbd.mtd->read(part->mbd.mtd, 379 - part->blocks[block_no].offset, part->header_size, 377 + 378 + rc = part->mbd.mtd->read(part->mbd.mtd, 379 + part->blocks[block_no].offset, part->header_size, 380 380 &retlen, (u_char*)map); 381 - 381 + 382 382 if (!rc && retlen != part->header_size) 383 383 rc = -EIO; 384 384 385 385 if (rc) { 386 386 printk(KERN_NOTICE PREFIX "error reading '%s' at " 387 - "0x%lx\n", part->mbd.mtd->name, 387 + "0x%lx\n", part->mbd.mtd->name, 388 388 part->blocks[block_no].offset); 389 389 390 390 goto err; ··· 398 398 if (entry == SECTOR_FREE || entry == SECTOR_DELETED) 399 399 continue; 400 400 401 - if (entry == SECTOR_ZERO) 401 + if (entry == SECTOR_ZERO) 402 402 entry = 0; 403 403 404 404 /* already warned about and ignored in build_block_map() */ 405 - if (entry >= part->sector_count) 405 + if (entry >= part->sector_count) 406 406 continue; 407 407 408 408 addr = part->blocks[block_no].offset + ··· 418 418 } 419 419 rc = part->mbd.mtd->read(part->mbd.mtd, addr, 420 420 SECTOR_SIZE, &retlen, sector_data); 421 - 421 
+ 422 422 if (!rc && retlen != SECTOR_SIZE) 423 423 rc = -EIO; 424 424 ··· 429 429 430 430 goto err; 431 431 } 432 - 432 + 433 433 rc = rfd_ftl_writesect((struct mtd_blktrans_dev*)part, 434 434 entry, sector_data); 435 - 436 - if (rc) 435 + 436 + if (rc) 437 437 goto err; 438 438 } 439 439 ··· 447 447 return rc; 448 448 } 449 449 450 - static int reclaim_block(struct partition *part, u_long *old_sector) 450 + static int reclaim_block(struct partition *part, u_long *old_sector) 451 451 { 452 452 int block, best_block, score, old_sector_block; 453 453 int rc; 454 - 454 + 455 455 /* we have a race if sync doesn't exist */ 456 456 if (part->mbd.mtd->sync) 457 457 part->mbd.mtd->sync(part->mbd.mtd); ··· 474 474 * more removed sectors is more efficient (have to move 475 475 * less). 476 476 */ 477 - if (part->blocks[block].free_sectors) 477 + if (part->blocks[block].free_sectors) 478 478 return 0; 479 479 480 480 this_score = part->blocks[block].used_sectors; 481 481 482 - if (block == old_sector_block) 482 + if (block == old_sector_block) 483 483 this_score--; 484 484 else { 485 485 /* no point in moving a full block */ 486 - if (part->blocks[block].used_sectors == 486 + if (part->blocks[block].used_sectors == 487 487 part->data_sectors_per_block) 488 488 continue; 489 489 } ··· 529 529 stop = block; 530 530 531 531 do { 532 - if (part->blocks[block].free_sectors && 532 + if (part->blocks[block].free_sectors && 533 533 block != part->reserved_block) 534 534 return block; 535 535 ··· 563 563 } 564 564 } 565 565 566 - rc = part->mbd.mtd->read(part->mbd.mtd, part->blocks[block].offset, 566 + rc = part->mbd.mtd->read(part->mbd.mtd, part->blocks[block].offset, 567 567 part->header_size, &retlen, (u_char*)part->header_cache); 568 568 569 569 if (!rc && retlen != part->header_size) ··· 571 571 572 572 if (rc) { 573 573 printk(KERN_NOTICE PREFIX "'%s': unable to read header at " 574 - "0x%lx\n", part->mbd.mtd->name, 574 + "0x%lx\n", part->mbd.mtd->name, 575 575 
part->blocks[block].offset); 576 576 goto err; 577 577 } ··· 580 580 581 581 err: 582 582 return rc; 583 - } 583 + } 584 584 585 585 static int mark_sector_deleted(struct partition *part, u_long old_addr) 586 586 { ··· 590 590 u16 del = const_cpu_to_le16(SECTOR_DELETED); 591 591 592 592 block = old_addr / part->block_size; 593 - offset = (old_addr % part->block_size) / SECTOR_SIZE - 593 + offset = (old_addr % part->block_size) / SECTOR_SIZE - 594 594 part->header_sectors_per_block; 595 595 596 596 addr = part->blocks[block].offset + ··· 604 604 if (rc) { 605 605 printk(KERN_WARNING PREFIX "error writing '%s' at " 606 606 "0x%lx\n", part->mbd.mtd->name, addr); 607 - if (rc) 607 + if (rc) 608 608 goto err; 609 609 } 610 610 if (block == part->current_block) ··· 627 627 i = stop = part->data_sectors_per_block - block->free_sectors; 628 628 629 629 do { 630 - if (le16_to_cpu(part->header_cache[HEADER_MAP_OFFSET + i]) 630 + if (le16_to_cpu(part->header_cache[HEADER_MAP_OFFSET + i]) 631 631 == SECTOR_FREE) 632 632 return i; 633 633 ··· 653 653 !part->blocks[part->current_block].free_sectors) { 654 654 655 655 rc = find_writeable_block(part, old_addr); 656 - if (rc) 656 + if (rc) 657 657 goto err; 658 658 } 659 659 ··· 665 665 rc = -ENOSPC; 666 666 goto err; 667 667 } 668 - 669 - addr = (i + part->header_sectors_per_block) * SECTOR_SIZE + 668 + 669 + addr = (i + part->header_sectors_per_block) * SECTOR_SIZE + 670 670 block->offset; 671 - rc = part->mbd.mtd->write(part->mbd.mtd, 671 + rc = part->mbd.mtd->write(part->mbd.mtd, 672 672 addr, SECTOR_SIZE, &retlen, (u_char*)buf); 673 673 674 674 if (!rc && retlen != SECTOR_SIZE) ··· 677 677 if (rc) { 678 678 printk(KERN_WARNING PREFIX "error writing '%s' at 0x%lx\n", 679 679 part->mbd.mtd->name, addr); 680 - if (rc) 680 + if (rc) 681 681 goto err; 682 682 } 683 683 ··· 697 697 if (rc) { 698 698 printk(KERN_WARNING PREFIX "error writing '%s' at 0x%lx\n", 699 699 part->mbd.mtd->name, addr); 700 - if (rc) 700 + if (rc) 701 701 
goto err; 702 702 } 703 703 block->used_sectors++; ··· 738 738 break; 739 739 } 740 740 741 - if (i == SECTOR_SIZE) 741 + if (i == SECTOR_SIZE) 742 742 part->sector_map[sector] = -1; 743 743 744 744 if (old_addr != -1) ··· 801 801 802 802 if (!add_mtd_blktrans_dev((void*)part)) 803 803 return; 804 - } 804 + } 805 805 806 806 kfree(part); 807 807 } ··· 828 828 .major = RFD_FTL_MAJOR, 829 829 .part_bits = PART_BITS, 830 830 .readsect = rfd_ftl_readsect, 831 - .writesect = rfd_ftl_writesect, 831 + .writesect = rfd_ftl_writesect, 832 832 .getgeo = rfd_ftl_getgeo, 833 833 .add_mtd = rfd_ftl_add_mtd, 834 834 .remove_dev = rfd_ftl_remove_dev,
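The heart of rfd_ftl.c's garbage collection is the scoring loop in reclaim_block(): erasing the block with the fewest used sectors frees the most space for the least copying, the block holding the sector currently being rewritten counts one less (that copy dies anyway), and any other completely full block is skipped because erasing it recovers nothing. A self-contained sketch of that scoring pass; the struct and function names are simplified assumptions, and the real routine also returns early when free sectors already exist.

```c
#include <assert.h>
#include <limits.h>

/* Per-block occupancy counters, mirroring the fields the scoring
 * loop reads from struct block in the driver. */
struct blk_stats {
    unsigned free_sectors;
    unsigned used_sectors;
};

/* Pick the cheapest block to reclaim: lowest used-sector score wins.
 * old_sector_block is the block containing the sector being
 * rewritten (-1 if none); its score drops by one since that sector
 * need not be copied. Full blocks elsewhere are never worth moving. */
static int pick_reclaim_block(const struct blk_stats *b, int nblocks,
                              int old_sector_block,
                              unsigned data_sectors_per_block)
{
    int best = -1;
    unsigned best_score = UINT_MAX;

    for (int i = 0; i < nblocks; i++) {
        unsigned score = b[i].used_sectors;

        if (i == old_sector_block)
            score--;                      /* rewrite frees one more */
        else if (b[i].used_sectors == data_sectors_per_block)
            continue;                     /* full block: skip */

        if (score < best_score) {
            best_score = score;
            best = i;
        }
    }
    return best;
}
```

Once a block is chosen, move_block_contents() copies its live sectors out through rfd_ftl_writesect() before erase_block() recycles it, as shown in the diff above.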